• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: June 19th, 2023





  • First a caveat: An object with mass can’t move at the speed of light, but it could move at speeds arbitrarily close to that.

    The most successful model of gravity is General Relativity, which treats gravity as a curvature of 4-dimensional spacetime. Gravity’s influence travels at the speed of light. There’s a classic thought experiment that sort of answers your question: what would happen if the sun were teleported away? The answer is that the earth would continue to orbit the spot where the sun was for about 8 minutes, and we would continue to see sunlight for that same amount of time, since that’s how long light takes to travel that distance. Then, after 8 minutes, the sun would appear to vanish, the first “lack of gravity” would reach us, and things would be bad for earth :(
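    The 8-minute figure is easy to sanity-check with a quick back-of-the-envelope calculation, using the mean Sun-Earth distance (1 AU):

```python
# Rough check of the "8 minutes" figure: light travel time from Sun to Earth.
# Both constants are approximate.
AU_M = 1.496e11        # mean Sun-Earth distance in meters (1 AU)
C_M_S = 2.998e8        # speed of light in m/s

travel_s = AU_M / C_M_S
print(f"{travel_s:.0f} s ≈ {travel_s / 60:.1f} min")  # ~499 s ≈ 8.3 min
```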

    The fact that gravity travels at the speed of light actually leads to an interesting phenomenon: gravitational waves. If a massive object rapidly accelerates (or decelerates), for example a star-sized mass moving quickly and then coming to an abrupt stop, it will emit a ripple in spacetime called a gravitational wave, which travels outward at the speed of light.

    It was big news about a decade ago when gravitational waves were first detected by LIGO, a pair of large interferometers that look for tiny expansions and contractions of spacetime. Their first detection was the collision of 2 black holes; as the black holes spiral around each other and eventually merge, they emit oscillating waves of increasing frequency. They made a cool video showing how the frequency increases by converting it to sound.

    Since then, LIGO and Virgo (a similar European collaboration) have detected many gravitational waves from collisions of black holes and neutron stars. So not only are gravitational waves a neat validation of general relativity, they’re actually being used to do astronomy.



  • The x-axis range spans the same region of “photon energy” space in both plots. The data starts at about 280 nm in the first plot, which is 1000 THz (the maximum value in the second plot).

    The stretching effect arises because the two x-axis units don’t map to each other linearly; wavelength and frequency are inversely proportional. A 1 nm wide histogram bin at 1000 nm contains the counts corresponding to a 0.3 THz wide region at 300 THz in the frequency plot, while a 1 nm wide bin at 200 nm corresponds to a 7.5 THz wide region at 1500 THz.
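    This mapping is just the derivative of f = c/λ: a wavelength bin of width dλ covers a frequency interval of width df = (c/λ²)·dλ. A quick sketch of the numbers above (function name is my own):

```python
# Width in frequency space (THz) covered by a wavelength bin, via df = (c/λ²)·dλ.
C_NM_PER_S = 2.998e17  # speed of light in nm/s (3e8 m/s × 1e9 nm/m)

def freq_bin_width_thz(wavelength_nm, bin_width_nm=1.0):
    """Frequency width (THz) spanned by a bin of given width at this wavelength."""
    return C_NM_PER_S / wavelength_nm**2 * bin_width_nm / 1e12

print(freq_bin_width_thz(1000))  # ~0.3 THz (around 300 THz)
print(freq_bin_width_thz(200))   # ~7.5 THz (around 1500 THz)
```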

    You can get a sense of how this works just by looking at how much space the colorful visible light portion of the spectrum takes up on each plot. In the wavelength plot, by eye I’d say visible light corresponds to about 1/6 the horizontal axis scale. In the frequency plot, it’s more like 1/4.

    That normalization is necessary because otherwise exactly how you bin the data would change the vertical scale, even if you used the same units. For example, consider the first plot and assume the histogram bins are uniformly 1 nm wide. Now imagine rebinning the data into 2 nm wide bins: you would effectively combine the contents of every 2 adjacent bins into one, so the vertical scale would roughly double. The two plots would contain the same data but look vastly different in magnitude. But if in both cases you divide by the bin width (1 nm or 2 nm, depending), the histogram magnitudes agree again. That’s why the units have to be given “per nm” or “per THz”.
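    A toy demonstration of that rebinning argument (made-up Poisson counts, not real spectral data): merging pairs of 1 nm bins into 2 nm bins roughly doubles the raw counts, but dividing by bin width restores the same density in both binnings.

```python
import numpy as np

rng = np.random.default_rng(0)
counts_1nm = rng.poisson(100, size=10).astype(float)   # hypothetical 1 nm bins
counts_2nm = counts_1nm.reshape(-1, 2).sum(axis=1)     # merged into 2 nm bins

density_1nm = counts_1nm / 1.0   # counts per nm
density_2nm = counts_2nm / 2.0   # counts per nm

print(counts_2nm.mean() / counts_1nm.mean())    # 2.0: raw scale doubles
print(density_2nm.mean() / density_1nm.mean())  # 1.0: densities agree
```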


  • A quasiparticle is more of a useful concept for describing the behavior of systems than it is a distinct object. In the example you cite, phonons are a convenient way of describing how vibrations are transmitted in matter. The fact that phonons are “quantized” is more accurately just emergent behavior from the system of atoms or molecules, a consequence of the fact that the atoms have quantized vibrational states.

    As an analogy, consider a ripple in a pond. The ripple appears to be a real, distinct thing. You can describe it with math (as a wave) and predict its behavior. But it cannot exist separately from the water in the pond. The ripple is an emergent phenomenon in water, a quasi-object. It only exists as a collective behavior of the water molecules.

    By definition quasiparticles cannot exist in a vacuum.




  • “But in order to do that photon actually needs to be created and travel from one particle to another.”

    The electromagnetic force is mediated by virtual photons. These don’t exist as free particles the way a photon emitted by a light source does, but only as intermediate states in an interaction. Because they’re only intermediate states, virtual photons can have non-physical energies (so long as they stay within the limits allowed by the uncertainty principle), which means some effectively have mass. Suffice it to say virtual photons are quite distinct from real ones! Technically, I believe you could get some of the basic features of the EM force (namely attraction/repulsion between 2 point charges) with just virtual photons. Things get tricky once charges begin accelerating, though, as that leads to the emission of real photons.
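    The uncertainty-principle limit has a nice heuristic consequence (this is a hand-waving estimate, not a rigorous QFT statement): a virtual particle “borrowing” energy ΔE can persist for roughly Δt ~ ħ/ΔE, so its reach is at most r ~ ħc/ΔE. For the massless photon ΔE can be arbitrarily small, giving the EM force infinite range; for a heavy mediator like the W boson, the range is tiny.

```python
# Heuristic range of a force carried by a virtual particle: r ~ ħc / ΔE.
HBAR_C_MEV_FM = 197.33  # ħc in MeV·femtometers

def reach_fm(delta_e_mev):
    """Rough maximum range (fm) for an energy fluctuation ΔE (in MeV)."""
    return HBAR_C_MEV_FM / delta_e_mev

print(reach_fm(80.4e3))  # ΔE ~ W boson mass: ~2.5e-3 fm (weak force is short-ranged)
print(reach_fm(1.0))     # ΔE ~ 1 MeV: ~197 fm
```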

    “If Higgs works in a similar way also being a boson”

    The short answer is: it doesn’t. The Higgs field gives mass to fundamental particles; existing in that field causes certain particles to have mass through their coupling to it. The W and Z weak gauge bosons gain mass through electroweak symmetry breaking, while quarks and leptons gain mass through a different coupling. I realize this is a very unsatisfying answer as to “how” the Higgs field creates mass, but the mechanism involves some complex math (group theory and non-abelian gauge theory), so it kind of defies a simpler explanation. Regardless, it’s through interactions with the Higgs field (which can exist without any Higgs bosons around) that fundamental particles gain mass. The search for the Higgs boson was a way to confirm the existence of the field: while the field can exist without Higgs bosons present, it must be possible to excite it strongly enough to create them.

    Going back to your original question: these particles have almost certainly been created “naturally” in high energy collisions between particles and matter. Nature can achieve much higher energies than our particle accelerators. The highest energy particle ever observed was a cosmic ray. However, Higgs bosons are extremely short lived, with a lifetime of 10^-22 seconds. So whenever they’re created, they don’t stick around for a meaningful amount of time.
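    To see just how short-lived that is: even moving at essentially the speed of light, a Higgs boson decays after covering far less than the width of an atom (lifetime value is approximate, and this ignores relativistic time dilation):

```python
# How far can a Higgs boson travel in its ~10^-22 s lifetime, even at light speed?
C_M_S = 2.998e8        # speed of light, m/s
LIFETIME_S = 1.6e-22   # approximate Higgs lifetime, s

distance_m = C_M_S * LIFETIME_S
print(f"{distance_m:.1e} m")   # ~5e-14 m, vs. ~1e-10 m for an atom
```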







  • I believe the idea is that a single bright star in the frame (the guide star) is used for selecting the frames. The point spread function (PSF) is just going to be some function that describes the blurred shape you would observe with the detector for an input point source. You then select frames in which the guide star is well centered, compared to its overall distribution.

    I think your guess on “sinc-resampled” is correct. They increased the “resolution” by a factor of 4 so that when they realign the chosen frames to center the guide star, they can do so at sub-pixel precision.

    You may want to check out chapter 3 in the thesis, particularly section 3.5.3. They give a lot more detail on the process than you’ll be able to find in the paper. A well-written PhD thesis can be 1000x more valuable than the journal article it ultimately produces, because it contains all the specific details that get glossed over in the final paper.


  • This isn’t exactly my area of expertise, but I have some information that might be helpful. Here’s the description of the frame selection from a paper on a lucky imaging system:

    The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:

    1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence induced blurring of each frame.
    2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
    3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
    4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the trade-off between the resolution and the target signal-to-noise ratio required.
    5. The selected frames are shifted-and-added to align their brightest speckle positions.
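    The steps above can be sketched in Python with NumPy. This is purely illustrative (my own function names, and a simplified integer-pixel shift rather than the paper’s sub-pixel alignment), not the authors’ code:

```python
import numpy as np

def upsample_fft(img, factor=4):
    """Sinc-resample an image by zero-padding its Fourier transform."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    pad_h, pad_w = (factor - 1) * h // 2, (factor - 1) * w // 2
    f = np.pad(f, ((pad_h, pad_h), (pad_w, pad_w)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(f))) * factor**2

def quality(frame):
    """Quality factor: fraction of light in the brightest (upsampled) pixel."""
    up = upsample_fft(frame)
    return up.max() / up.sum()

def lucky_stack(frames, keep_fraction=0.1):
    """Select the sharpest frames, then shift-and-add on the brightest speckle."""
    ranked = sorted(frames, key=quality, reverse=True)
    chosen = ranked[: max(1, int(len(frames) * keep_fraction))]
    out = np.zeros_like(chosen[0], dtype=float)
    for fr in chosen:
        peak = np.unravel_index(np.argmax(fr), fr.shape)
        # integer-pixel shift here; the paper aligns at sub-pixel precision
        out += np.roll(np.roll(fr, fr.shape[0] // 2 - peak[0], axis=0),
                       fr.shape[1] // 2 - peak[1], axis=1)
    return out / len(chosen)
```

The FFT zero-padding trick in `upsample_fft` is equivalent to sinc interpolation for band-limited, periodic data, which is presumably why the paper calls the step “sinc-resampled”.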

    If you want all the gory details, the best place to look is probably the thesis the same author wrote on this work. That’s available here (PDF warning).