Can gravitational waves be waylaid by gravity?

Yesterday, I learnt the answer is ‘yes’. Gravitational waves can be gravitationally lensed. It seems obvious once you think about it, but not something that strikes you (assuming you’re not a physicist) right away.

When physicists solve problems relating to the spacetime continuum, they imagine it as a four-dimensional manifold: three of space and one of time. Objects exist in the bulk of this manifold and visualisations like the one below are what two-dimensional slices of the continuum look like. This unified picture of space and time was a significant advancement in the history of physics.

While Hendrik Lorentz and Hermann Minkowski first noticed this feature in the early 20th century, they did so only to rationalise empirical data. Albert Einstein was the first physicist to fully figure out the why of it, through his theories of relativity.

A common way to visualise the curvature of spacetime around a massive object, in this case Earth. Credit: NASA

Specifically, according to the general theory, massive objects bend the spacetime continuum around themselves. Because light passes through the continuum, its path bends along the continuum when passing near massive bodies. Seen head-on, a massive object – like a black hole – appears to encircle a light-source in its background in a ring of light. This is because the black hole’s mass has caused spacetime to curve around the black hole, creating a cosmic mirage of the light emitted by the object in its background (see video below) as seen by the observer. By focusing light flowing in different directions around it towards one point, the black hole has effectively behaved like a lens.

So much is true of light, which is a form of electromagnetic radiation. And just the way electrically charged particles emit such radiation when they accelerate, massive objects emit gravitational waves when they accelerate (more precisely, when the acceleration involves a changing quadrupole moment). These gravitational waves are said to carry gravitational energy.

Gravitational energy is effectively the potential energy of a body due to its mass. Put another way, a more massive object would pull a smaller body in its vicinity towards itself faster than a less massive object would. The difference between these abilities is quantified as a difference between the objects’ gravitational energies.

Credit: ALMA (NRAO/ESO/NAOJ)/Luis Calçada (ESO)

Such energy is released through the spacetime continuum when the mass of a massive object changes. For example, when two binary black holes combine to form a larger one, the larger one usually has less mass than the masses of the two lighter ones together. The difference arises because some of the mass has been converted into gravitational energy. In another example, when a massive object accelerates, it distorts its gravitational field; these distortions propagate outwards through the continuum as gravitational energy.

Scientists and engineers have constructed instruments on Earth to detect gravitational energy in the form of gravitational waves. When an object releases gravitational energy into the spacetime continuum, the energy ripples through the continuum the way a stone dropped in water instigates ripples on the surface. And just the way the ripples alternately stretch and compress the water, gravitational waves alternately stretch and compress the continuum as they move through it (at the speed of light).

Instruments like the twin Laser Interferometer Gravitational-wave Observatories (LIGO) are designed to pick up on these passing distortions while blocking out all others. That is, when LIGO records a distortion passing through the parts of the continuum where its detectors are located, scientists will know it has just detected a gravitational wave. Because a gravitational wave's frequency and amplitude encode the energetics of whatever produced it, scientists can use the properties of the wave as measured by LIGO to deduce the properties of its original source.

(As you might have guessed, even a cat running across the room emits gravitational waves. However, the frequency of these waves is so very low that it is almost impossible to build instruments to measure them, nor are we likely to find such an exercise useful.)

I learnt today that it is also possible for instruments like LIGO to be able to detect the gravitational lensing of gravitational waves. When an object like a black hole warps the spacetime continuum around it, it lenses light – and it is easy to see how it would lens gravitational waves as well. The lensing effect is the result not of the black hole’s ‘direct’ interaction with light as much as its distortion of the continuum. Ergo, anything that traverses the continuum, including gravitational waves, is bound to be lensed by the black hole.

The human body evolved eyes to receive information encoded in visible light, so we can directly see lensed visible light. However, we don't possess any organs that would allow us to do the same thing with gravitational waves. Instead, we will need to use existing instruments, like LIGO, to detect these particular distortions. How do we do that?

When two black holes are rapidly revolving around each other, getting closer and closer, they shed more and more of their potential energy as gravitational waves. In effect, the frequency of these waves quickly increases together with their amplitude, and LIGO registers this as a chirp (see video below). Once the two black holes have merged, both frequency and amplitude drop to zero after a brief ringdown (because a solitary spinning black hole does not emit gravitational waves).

In the event of a lensing, however, LIGO will effectively detect two sets of gravitational waves. One set will arrive at LIGO straight from the source. The second set – originally sent off in a different direction – will become lensed towards LIGO. And because the lensed wave will effectively have travelled a longer distance, it will arrive a short while after the direct wave.

The distance scale here is grossly exaggerated for effect

However, LIGO will not register two chirps; in fact, it will register no chirps at all. Instead, the direct wave and the lensed wave will interfere with each other inside the instrument to produce a characteristically mixed signal. By the laws of wave mechanics, this signal will have increasing frequency, as in the chirp, but uneven amplitude. If it were sonified, the signal’s sound would climb in pitch but have irregular volume.
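To get a rough feel for this, here is a toy sketch of a direct chirp and a delayed, fainter copy (standing in for the lensed wave) being added together. Every number in it is invented purely for illustration:

```python
import numpy as np

def toy_chirp(t, f0=30.0, k=8.0):
    """A crude chirp: frequency and amplitude both rise with time."""
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t**2)  # instantaneous frequency f0 + k*t
    return (1.0 + t) * np.sin(phase)               # amplitude grows as the spiral tightens

t = np.linspace(0, 4, 100_000)
dt = 0.05  # extra travel time of the lensed copy (invented)

direct = toy_chirp(t)
lensed = 0.7 * toy_chirp(t - dt)  # delayed and fainter (both factors invented)
combined = direct + lensed        # what the detector would register

# The envelope of `combined` waxes and wanes as the two chirps drift in and
# out of phase: a rising pitch with irregular volume, not a clean chirp.
```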

A statistical analysis published in early 2018 (in a preprint paper) claimed that LIGO should be able to detect gravitationally lensed gravitational waves at the rate of about once per year (and the proposed Einstein Telescope, at about 80 per year!). A peer-reviewed paper published in January 2019 suggested that LIGO's design specs allow it to detect lensing effects due to a black hole weighing between 10 and 100,000 times as much as the Sun.

Just like ‘direct’ gravitational waves give away some information about their sources, lensed gravitational waves should also give something away about the objects that deflected them. So if we become able to use LIGO, and/or other gravitational wave detectors of the future, to detect gravitationally lensed gravitational waves, we will have the potential to learn even more about the universe’s inhabitants than gravitational-wave astronomy currently allows us to.

Thanks to inputs from Madhusudhan Raman, @ntavish, @alsogoesbyV and @vaa3.

Onto drafting the gravitational history of the universe

It’s finally happening. As the world turns, as our little lives wear on, gravitational wave detectors quietly eavesdrop on secrets whispered by colliding black holes and neutron stars in distant reaches of the cosmos, no big deal. It’s going to be just another day.

On November 15, the LIGO scientific collaboration confirmed the detection of the fifth set of gravitational waves, made originally on June 8, 2017, but announced only now. These waves were released by two black holes of 12 and seven solar masses that collided about a billion lightyears away – a.k.a. about a billion years ago. The combined black hole weighed 18 solar masses, so one solar mass’s worth of energy had been released in the form of gravitational waves.
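As a quick back-of-the-envelope check of that figure, converting one solar mass to energy via E = mc²:

```python
M_SUN = 1.989e30  # kg
c = 2.998e8       # m/s

E = 1 * M_SUN * c**2  # (12 + 7 - 18) solar masses radiated away
print(f"{E:.2e} J")   # ~1.8e47 joules, carried off as gravitational waves
```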

The announcement was delayed because the LIGO teams had to work on processing two other, more spectacular detections. One of them involved the Virgo detector in Italy for the first time; the second was the detection of gravitational waves from colliding neutron stars.

Even though the June 8 detection is run o’ the mill by now, it is unique because it involves the lowest-mass black holes eavesdropped on thus far by the twin LIGO detectors.

LIGO’s significance as a scientific experiment lies in the fact that it can detect collisions of black holes with other black holes. Because these objects don’t let any kind of radiation escape their prodigious gravitational pulls, their collisions don’t release any electromagnetic energy. As a result, conventional telescopes that work by detecting such radiation are blind to them. LIGO, however, detects gravitational waves emitted by the black holes as they collide. Whereas electromagnetic radiation moves over the surface of the spacetime continuum and is thus susceptible to being trapped in black holes, gravitational waves are ripples of the continuum itself and can escape.

Processes involving black holes of a lower mass have been detected by conventional telescopes because these processes typically involve a light black hole (5-20 solar masses) and a second object that is not a black hole but usually a star. Matter from the star is siphoned into the black hole, and this movement releases X-rays that can be spotted by space telescopes like NASA’s Chandra.

So LIGO’s June 8 detection is unique because it signals a collision involving two light black holes, until now the demesne of conventional astronomy alone. This also means that multi-messenger astronomy can join in on the fun should LIGO detect a collision of a star and a black hole in the future. Multi-messenger astronomy uses up to four ‘messengers’, or channels of information, to study a single event: electromagnetic radiation, gravitational waves, neutrinos and cosmic rays.

The masses of stellar remnants are measured in many different ways. This graphic shows the masses for black holes detected through electromagnetic observations (purple); the black holes measured by gravitational-wave observations (blue); neutron stars measured with electromagnetic observations (yellow); and the masses of the neutron stars that merged in an event called GW170817, which were detected in gravitational waves (orange). GW170608 is the lowest mass of the LIGO/Virgo black holes shown in blue. The vertical lines represent the error bars on the measured masses. Credit: LIGO-Virgo/Frank Elavsky/Northwestern

The detection also signals that LIGO is sensitive to such low-mass events. The three other sets of gravitational waves LIGO has observed involved black holes of masses ranging from 20-25 solar masses to 60-65 solar masses. The previous record-holder for the lowest-mass collision was a detection made in December 2015, of two colliding black holes weighing 14.2 and 7.5 solar masses.

One of the bigger reasons astronomy is fascinating is its ability to reveal so much about a source of radiation trillions of kilometres away using very little information. The same is true of the June 8 detection. According to the LIGO scientific collaboration’s assessment,

When massive stars reach the end of their lives, they lose large amounts of their mass due to stellar winds – flows of gas driven by the pressure of the star’s own radiation. The more ‘heavy’ elements like carbon and nitrogen that a star contains, the more mass it will lose before collapsing to form a black hole. So, the stars which produced GW170608’s [the official designation of the detection] black holes could have contained relatively large amounts of these elements, compared to the stellar progenitors of more massive black holes such as those observed in the GW150914 merger. … The overall amplitude of the signal allows the distance to the black holes to be estimated as 340 megaparsec, or 1.1 billion light years.

The circumstances of the discovery are also interesting. Quoting at length from a LIGO press release:

A month before this detection, LIGO paused its second observation run to open the vacuum systems at both sites and perform maintenance. While researchers at LIGO Livingston, in Louisiana, completed their maintenance and were ready to observe again after about two weeks, LIGO Hanford, in Washington, encountered additional problems that delayed its return to observing.

On the afternoon of June 7 (PDT), LIGO Hanford was finally able to stay online reliably and staff were making final preparations to once again “listen” for incoming gravitational waves. As part of these preparations, the team at Hanford was making routine adjustments to reduce the level of noise in the gravitational-wave data caused by angular motion of the main mirrors. To disentangle how much this angular motion affected the data, scientists shook the mirrors very slightly at specific frequencies. A few minutes into this procedure, GW170608 passed through Hanford’s interferometer, reaching Louisiana about 7 milliseconds later.

LIGO Livingston quickly reported the possible detection, but since Hanford’s detector was being worked on, its automated detection system was not engaged. While the procedure being performed affected LIGO Hanford’s ability to automatically analyse incoming data, it did not prevent LIGO Hanford from detecting gravitational waves. The procedure only affected a narrow frequency range, so LIGO researchers, having learned of the detection in Louisiana, were still able to look for and find the waves in the data after excluding those frequencies.

But what I’m most excited about is the quiet announcement. All of the gravitational wave detection announcements before this were accompanied by an embargo, lots of hype building up, press releases from various groups associated with the data analysis, and of course reporters scrambling under the radar to get their stories ready. There was none of that this time. This time, the LIGO scientific collaboration published their press release with links to the raw data and the preprint paper (submitted to the Astrophysical Journal Letters) on November 15. I found out about it when I stumbled upon a tweet from Sean Carroll.

And this is how it’s going to be, too. In the near future, the detectors – LIGO, Virgo, etc. – are going to be gathering data in the background of our lives, like just another telescope doing its job. The detections are going to stop being a big deal: we know LIGO works the way it should. Fortunately for it, some of its more spectacular detections (colliding intermediate-mass black holes and colliding neutron stars) were also made early in its life. What we can all look forward to now is reports of first-order derivatives from LIGO data.

In other words, we can stop focusing on Einstein’s theories of relativity (long overdue) and move on to what multiple gravitational wave detections can tell us about things we still don’t know. We can mine patterns out of the data, chart their variation across space, time and their sources, and begin the arduous task of drafting the gravitational history of the universe.

Featured image credit: Lovesevenforty/pixabay.

A universe out of sight

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and the projections they dwell on may never come to be while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing, that it is growing in volume like a balloon and continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects never reaching our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

Is the universe’s expansion speeding up or slowing down? There is a number that answers this, called the deceleration parameter:

q = −(1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is receding. So, if q is positive, the universe’s expansion is slowing down. If q is zero, the universe is expanding at a constant rate, and 1/H gives the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.
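A minimal numerical illustration: for a flat universe containing only matter and dark energy, q today reduces to Ω_M/2 − Ω_Λ, the density parameters mentioned in the caption below. With rough present-day values, q comes out negative:

```python
omega_m = 0.3       # matter, including dark matter (rough value)
omega_lambda = 0.7  # dark energy (rough value)

q0 = omega_m / 2 - omega_lambda
print(q0)  # -0.55: negative, so the expansion is accelerating
```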

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dense core left behind by a dead star, held up by the pressure of its electrons) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a runaway fusion reaction. In the next few seconds, the reaction expels 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is nearly uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘standard candles’. Based on how faint these candles appear, you can tell how far away they are burning.
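The faintness-to-distance step can be sketched with the standard distance modulus, m − M = 5 log₁₀(d) − 5 (d in parsec). The absolute magnitude of about −19.3 for type Ia supernovae is an approximate literature value, and the apparent magnitude of 20 is an arbitrary example:

```python
import math

def distance_parsec(apparent_mag, absolute_mag=-19.3):
    """Distance implied by how faint a standard candle appears."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

d = distance_parsec(20.0)  # a type Ia supernova seen at magnitude 20
print(f"{d:.2e} pc")       # ~7.2e8 parsec, roughly 2.4 billion lightyears
```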

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed − λ_emitted)/λ_emitted

In English: the redshift is the fractional change in wavelength between emission and observation. If z = 1, then the observed wavelength is twice as much as the emitted wavelength. If z = 5, then the observed wavelength is six times as much as the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d0,

where d(t) is the distance between the two objects at time t and d0 is the distance between them at some reference time t0. Since the scale factor would be constant throughout the universe, d(t) and d0 can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
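Those worked examples, as a two-line function:

```python
def scale_factor(z):
    return 1 / (1 + z)

print(scale_factor(0.6))   # 0.625 -> the universe was 5/8 its current size
print(scale_factor(10.7))  # ~0.0855 -> roughly one-twelfth
print(scale_factor(1089))  # ~0.000917 -> at the epoch discussed next
```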

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us are moving away faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t0, light will now need t > t0 to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t, because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).
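The distance at which recession reaches lightspeed can be recovered by setting v = H₀d = c. The Hubble constant of 70 km/s/Mpc below is an assumed round value (the ~4,200 Mpc figure above corresponds to a slightly different H₀):

```python
c = 2.998e5  # km/s
H0 = 70.0    # km/s per megaparsec (assumed round value)

print(c / H0)  # ~4283 Mpc: beyond this, recession outpaces light
```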

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness in the future, then another piece of news foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did by Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we make sense of these two faces of quantum physics – lawless, yet not arbitrary?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and more often will the Higgs boson decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/(2ε₀hc);

and between quarks and the strong nuclear force is the running coupling associated with asymptotic freedom:

αs(k²) = [β₀ ln(k²/Λ²)]⁻¹
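As a sanity check on the first of these formulas, plugging standard values of the constants into α recovers the famous 1/137:

```python
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
h = 6.62607015e-34       # Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
print(alpha, 1 / alpha)  # ~0.0072974, ~137.036
```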

So, if the LHC’s experiments require P (number of) Higgs bosons to make their measurements, and its detectors are tuned to detect that group of particles, then at least P × X × Y collisions ought to have happened – X collisions per Higgs boson, and Y Higgs bosons per decay into that group. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes, such as particles that are formed very rarely. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that if you roll the die six times, you never get a four. This is because the chance can also be represented as 10/60: an expectation of ten fours in 60 rolls, not a guarantee. Then again, you could roll the die 60 times and still never get a four (though the odds of that happening are even lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours – but say you still didn’t get fours one-sixth of the time. So you take it up a notch: you make the machine roll the die a million times. The odds of a four should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
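The die-rolling machine is easy to simulate (the seed below is arbitrary):

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility
for n in (6, 60, 1_000, 1_000_000):
    fours = sum(1 for _ in range(n) if random.randint(1, 6) == 4)
    print(f"{n:>9} rolls: fraction of fours = {fours / n:.4f} (target 0.1667)")
```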

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in integrated luminosity – a measure of the number of collisions delivered – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger scheme of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC smashed two beams of billions of protons – each beam accelerated to 4 TeV and separated into 2,000+ bunches – head on, at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴ (a number put in perspective in the sketch after the list below). These heightened numbers mean new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread
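To put that collision-rate number in perspective: events per second = cross-section × luminosity. The ~50 picobarn Higgs production cross-section below is an assumed round figure for 13 TeV proton collisions, not a number from this post:

```python
sigma_higgs = 50e-36  # cm^2, i.e. ~50 picobarn (assumed round value)
luminosity = 1e34     # collisions per sq. cm per second (quoted above)

print(sigma_higgs * luminosity)  # ~0.5 Higgs bosons produced per second
```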

These are the desiderata at stake should the LHC find nothing – even more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that the persistent day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

**I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

Parsing Ajay Sharma v. E = mc²

Featured image credit: saulotrento/Deviantart, CC BY-SA 3.0.

To quote John Cutter (Michael Caine) from The Prestige:

Every magic trick consists of three parts, or acts. The first part is called the pledge, the magician shows you something ordinary. The second act is called the turn, the magician takes the ordinary something and makes it into something extraordinary. But you wouldn’t clap yet, because making something disappear isn’t enough. You have to bring it back. Now you’re looking for the secret. But you won’t find it because of course, you’re not really looking. You don’t really want to work it out. You want to be fooled.

The Pledge

Ajay Sharma is an assistant director of education with the Himachal Pradesh government. On January 10, the Indo-Asian News Service (IANS) published an article in which Sharma claims Albert Einstein’s famous equation E = mc² is “illogical” (republished by The Hindu, Yahoo! News, Gizmodo India, among others). The precise articulation of Sharma’s issue with it is unclear because the IANS article contains multiple unqualified statements:

Albert Einstein’s mass energy equation (E=mc²) is inadequate as it has not been completely studied and is only valid under special conditions.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

It said E=mc² is obtained from L=mc² by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

Sharma also claims Einstein’s work wasn’t original and only ripped off Galileo, Henri Poincaré, Hendrik Lorentz, Joseph Larmor and George FitzGerald.

The Turn

Let’s get some things straight.

Mass-energy equivalence – E = mc² isn’t wrong but it’s often overlooked that it’s an approximation. This is the full equation:

E² = m₀²c⁴ + p²c²

(Notice the similarity to the Pythagoras theorem?)

Here, m₀ is the mass of the object (say, a particle) when it’s not moving, p is its momentum (calculated as mass times velocity – m*v) and c, the speed of light. When the particle is not moving, v is zero, so p is zero, and so the right-most term in the equation can be removed. This yields:

E² = m₀²c⁴ ⇒ E = m₀c²

If a particle were moving close to the speed of light, applying just E = m₀c² would be wrong without the rapidly growing p²c² component. In fact, the equivalence remains applicable in its most famous form only in cases where an observer is co-moving with the particle. So, there is no mass-energy equivalence as much as a mass-energy-momentum equivalence.
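A quick numerical illustration of how far off the truncated formula gets: a proton at 0.9c (an arbitrary example), using the relativistic momentum p = γm₀v that the full relation expects:

```python
import math

c = 2.998e8      # m/s
m0 = 1.6726e-27  # proton rest mass, kg
v = 0.9 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
p = gamma * m0 * v  # relativistic momentum

E_rest = m0 * c**2
E_full = math.sqrt((m0 * c**2) ** 2 + (p * c) ** 2)
print(E_full / E_rest)  # ~2.29: the rest-energy formula alone is far off
```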

And at the time of publishing this equation, Einstein was aware that it was held up by multiple approximations. As Terence Tao sets out, these would include (but not be limited to) p being equal to mv at low velocities, the laws of physics being the same in two frames of reference moving at uniform velocities, Planck’s and de Broglie’s laws holding, etc.

These approximations are actually inherited from Einstein’s special theory of relativity, which describes the connection between space and time. In a paper dated September 27, 1905, Einstein concluded that if “a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“. ‘L’ was simply the notation for energy that Einstein used until 1912, when he switched to the more common ‘E’.

The basis of his conclusion was a thought experiment he detailed in the paper, in which a point-particle emits “plane waves of light” in opposite directions, first while at rest and then while in motion. He then calculates the difference in the body’s kinetic energy before and after it starts to move, accounting for the energy carried away by the radiated light:

K₀ – K₁ = (1/2) × (L/c²) × v²

This is what Sharma is referring to when he says, “Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.” Well… sure. Einstein’s was a gedanken (thought) experiment to illustrate a direct consequence of the special theory. How he chose to frame the problem depended on what connection he wanted to illustrate between the various attributes at play.

And the more attributes are included in the experiment, the more connections will arise. Whether or not they’d be meaningful (i.e. being able to represent a physical reality – such as with being able to say “if a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“) is a separate question.

As for another of Sharma’s claims – that the equivalence is “only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity”: Einstein’s theory of relativity is the best framework of mathematical rules we have to describe all these parameters together. So any gedanken experiment involving just these parameters can be properly analysed, to the best of our knowledge, with Einstein’s theory, and within that theory – and as a consequence of that theory – the mass-energy-momentum equivalence will persist. This implication was demonstrated by the famous Cockcroft-Walton experiment in 1932.

General theory of relativity – Einstein’s road to publishing his general theory (which turned 100 last year) was littered with multiple challenges to its primacy. This is not surprising because Einstein’s principal accomplishment was not in having invented something but in having recombined and interpreted a trail of disjointed theoretical and experimental discoveries into a coherent, meaningful and testable theory of gravitation.

As mentioned earlier, Sharma claims Einstein ripped off Galileo, Poincaré, Lorentz, Larmor and FitzGerald. For what it’s worth, he could also have mentioned William Kingdon Clifford, Georg Bernhard Riemann, Tullio Levi-Civita, Gregorio Ricci-Curbastro, János Bolyai, Nikolai Lobachevsky, David Hilbert, Hermann Minkowski and Fritz Hasenöhrl. Here are their achievements in the context of Einstein’s (in a list that’s by no means exhaustive).

  • 1632, Galileo Galilei – Published a book, one of whose chapters features a dialogue about the relative motion of planetary bodies and the role of gravity in regulating their motion
  • 1824-1832, Bolyai and Lobachevsky – Conceived of hyperbolic geometry (which didn’t follow Euclidean laws, like the sum of a triangle’s angles being 180º), which inspired Riemann and his mentor to consider whether there was a kind of geometry to explain the behaviour of shapes in four dimensions (as opposed to three)
  • 1854, G. Bernhard Riemann – Conceived of elliptic geometry and a way to compare vectors in four dimensions, ideas that would benefit Einstein immensely because they helped him discover that gravity wasn’t a force in space-time but actually the curvature of space-time
  • 1876, William K. Clifford – Suggested that the forces that shape matter’s motion in space could be guided by the geometry of space, foreshadowing Einstein’s idea that matter influences gravity influences matter
  • 1887-1902, FitzGerald and Lorentz – Showed that observers in different frames of reference that are moving at different velocities can measure the length of a common body to differing values, an idea then called the FitzGerald-Lorentz contraction hypothesis. Lorentz’s mathematical description of this gave rise to a set of formulae called Lorentz transformations, which Einstein later derived through his special theory.
  • 1897-1900, Joseph Larmor – Realised that observers in different frames of reference that are moving at different velocities can also measure different times for the same event, leading to the time dilation hypothesis that Einstein later explained
  • 1898, Henri Poincaré – Interpreted Lorentz’s abstract idea of a “local time” to have physical meaning – giving rise to the idea of relative time in physics – and was among the first physicists to speculate on the need for a consistent theory to explain the consequences of light having a constant speed
  • 1900, Levi-Civita and Ricci-Curbastro – Built on Riemann’s ideas of a non-Euclidean geometry to develop tensor calculus (a tensor generalises scalars and vectors to higher dimensions). Einstein’s field equations for gravity, which capped his formulation of the celebrated general theory of relativity, would feature the Ricci tensor to account for the geometric differences between Euclidean and non-Euclidean geometry.
  • 1904-1905, Fritz Hasenöhrl – Built on the work of Oliver Heaviside, Wilhelm Wien, Max Abraham and John H. Poynting to devise a thought experiment from which he was able to conclude that heat has mass, a primitive synonym of the mass-energy-momentum equivalence
  • 1907, Hermann Minkowski – Conceived a unified mathematical description of space and time in 1907 that Einstein could use to better express his special theory. Said of his work: “From this hour on, space by itself, and time by itself, shall be doomed to fade away in the shadows, and only a kind of union of the two shall preserve an independent reality.”
  • 1915, David Hilbert – Derived the general theory’s field equations a few days before Einstein did but managed to have his paper published only after Einstein’s was, leading to an unresolved dispute about who should take credit. However, the argument was made moot by only Einstein being able to explain how Isaac Newton’s laws of classical mechanics fit into the theory – Hilbert couldn’t.

FitzGerald, Lorentz, Larmor and Poincaré all laboured assuming that space was filled with a ‘luminiferous ether’. The ether was a pervasive, hypothetical yet undetectable substance that physicists of the time believed had to exist so electromagnetic radiation had a medium to travel in. Einstein’s theories provided a basis for their ideas to exist without the ether, and as a consequence of the geometry of space.

So, Sharma’s allegation that Einstein republished the work of other people in his own name is misguided. Einstein didn’t plagiarise. And while there are many accounts of his competitive nature, to the point of asserting that a mathematician who helped him formulate the general theory wouldn’t later lay partial claim to it, there’s no doubt that he did come up with something distinctively original in the end.

The Prestige

Ajay Sharma with two of his books. Source: Fundamental Physics Society (Facebook page)

To recap:

Albert Einstein’s mass energy equation (E=mc²) is inadequate as it has not been completely studied and is only valid under special conditions.

Claims that Einstein’s equations are inadequate are difficult to back up because we’re yet to find circumstances in which they seem to fail. Theoretically, they can be made to appear to fail by forcing them to account for, say, higher dimensions, but that’s like wearing suede shoes in the rain and then complaining when they’re ruined. There’s a time and a place to use them. Moreover, the failure of general relativity or quantum physics to meet each other halfway (in a quantum theory of gravity) can’t be pinned on a supposed inadequacy of the mass-energy equivalence alone.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

That a gedanken experiment was limited in scope is a pointless accusation. Einstein was simply showing that A implied B, and was never interested in proving that A’ (a different version of A) did not imply B. And tying all of this to the adequacy (or not) of E = mc2 leads equally nowhere.

It said E=mc² is obtained from L=mc² by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

From the literature, the change appears to be one of notation. If not that, then Sharma could be challenging the notion that the energy of a moving body is equal to the sum of the energy of the body at rest and its kinetic energy – letting Einstein say that the kinetic energy on the LHS of the equation can be substituted by L (or E) if E₀ (the energy of the body at rest) is added to the RHS: E = E₀ + K. In which case Sharma’s challenge is even more ludicrous for calling one of the basic tenets of thermodynamics “illogical” without indicating why.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

The “the” before “new results” is the worrying bit: it points to claims of his that have already been made, and suggests they’re contrary to what Einstein has claimed. It’s not that the German is immune to refutation – no one is – but that whatever claim this is seems to be at the heart of what’s at best an awkwardly worded outburst, and which IANS has unquestioningly reproduced.

A persistent search for Sharma’s paper on the web didn’t turn up any results – the closest I got was in unearthing its title (#237) in a list of titles published at a conference hosted by a ‘Russian Gravitational Society’ in May 2015. Sharma’s affiliation is mentioned as a ‘Fundamental Physics Society’ – which in turn shows up as a Facebook page run by Sharma. But an ibnlive.com article from around the same time provides some insight into Sharma’s ‘research’ (translated from the Hindi by Siddharth Varadarajan):

In this way, Ajay is also challenging the great scientist of the 21st century (sic) Albert Einstein. After deep research into his formula, E=mc², he says that “when a candle burns, its mass reduces and light and energy are released”. According to Ajay, Einstein obtained this equation under special circumstances. This means that from any matter/thing, only two rays of light emerge. The intensity of light of both rays is the same and they emerge from opposite directions. Ajay says Einstein’s research paper was published in 1905 in the German research journal [Annalen der Physik] without the opinion of experts. Ajay claims that if this equation is interpreted under all circumstances, then you will get wrong results. Ajay says that if a candle is burning, its mass should increase. Ajay says his research paper has been published after peer review. [Emphasis added.]

A pattern underlying some of Sharma’s claims has to do with confusing conjecturing and speculating (even perfectly reasonably) with formulating, defining and proving. The most telling example in this context is the allegation that Einstein ripped off Galileo: even if they both touched on relative motion in their research, what Galileo did for relativity was vastly different from what Einstein did. In fact, following the Indian Science Congress in 2015, V. Vinay, an adjunct faculty member at the Chennai Mathematical Institute and a teacher in Bengaluru, had pointed out that these differences in fact encapsulated the epistemological attitudes of the Indian and Greek civilisations: the TL;DR version is that we weren’t a proof-seeking people.

Swinging back to the mass-energy equivalence itself – it’s a notable piece but a piece nonetheless of an expansive theory that’s demonstrably incomplete. And there are other theories like it, like flotsam on a dark ocean whose waters we haven’t been able to see, theories we’re struggling to piece together. It’s a time when Popper’s philosophies haven’t been able to qualify or disqualify ‘discoveries’, a time when the subjective evaluation of an idea’s usefulness seems just as important as objectively testing it. But despite the grand philosophical challenges these times face us with, extraordinary claims still do require extraordinary evidence. And at that Ajay Sharma quickly fails.

Hat-tip to @AstroBwouy, @ainvvy and @hosnimk.

The Wire
January 12, 2016

Feeling the pulse of the space-time continuum

The Copernican
April 17, 2014

Haaaaaave you met PSR B1913+16? The first three letters of its name indicate it’s a pulsating radio source, an object in the universe that gives off energy as radio waves at very specific periods. More commonly, such sources are known as pulsars, a portmanteau of pulsating stars.

When heavy stars run out of hydrogen to fuse into helium, they undergo a series of processes that sees them stripped of their once-splendid upper layers, leaving behind a core of matter called a neutron star. It is extremely dense, extremely hot, and spinning very fast. When it emits electromagnetic radiation in flashes, it is called a pulsar. PSR B1913+16 is one such pulsar, discovered in 1974, located in the constellation Aquila some 21,000 light-years from Earth.

Finding PSR B1913+16 earned its discoverers the Nobel Prize for physics in 1993 because this was no ordinary pulsar: it was the first of its kind to be discovered – a pulsar in a binary system. It is locked in an epic pirouette with a nearby neutron star, the two spinning around each other with the orbit’s total diameter spanning one to five times that of our Sun.

Losing energy but how?

The discoverers were Americans Russell Alan Hulse and Joseph Hooton Taylor, Jr., of the University of Massachusetts Amherst, and their prize-winning discovery didn’t culminate with just spotting the binary pulsar that has come to be named after them. Further, they found that the pulsar’s orbit was shrinking, meaning the system as a whole was losing energy. They found that they could also predict the rate at which the orbit was shrinking using the general theory of relativity.

In other words, PSR B1913+16 was losing energy as gravitational energy while providing a direct (natural) experiment to verify Albert Einstein’s monumental theory from a century ago. (That a human was able to intuit how two neutron stars orbiting each other trillions of miles away could lose energy is a testament to the uniformity of the laws of physics. Through the vast darkness of space, we can strip away with our minds any strangeness of its farthest reaches because what is available on a speck of blue is what is available there, too.)

While gravitational energy, and gravitational waves with it, might seem like an esoteric concept, it is easily intuited as the gravitational analogue of electromagnetic energy (and electromagnetic waves). Electromagnetism and gravitation are the two most accessible of the four fundamental forces of nature. When a system of charged particles moves, it lets off electromagnetic energy and so becomes less energetic over time. Similarly, when a system of massive objects moves, it lets off gravitational energy… right?

“Yeah. Think of mass as charge,” says Tarun Souradeep, a professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, India. “Electromagnetic waves come with two charges that can make up a dipole. But the conservation of momentum prevents gravitational radiation from having dipoles.”

According to Albert Einstein and his general theory of relativity, gravitation is a force born due to the curvature, or roundedness, of the space-time continuum: space-time bends around massive objects (an effect very noticeable during gravitational lensing). When massive objects accelerate through the continuum, they set off waves in it that travel at the speed of light. These are called gravitational waves.

“The efficiency of energy conversion – from the bodies into gravitational waves – is very high,” Prof. Souradeep clarifies. “But they’re difficult to detect because they don’t interact with matter.”

Albie’s still got it

In 2004, Joseph Taylor, Jr., and Joel Weisberg published a paper analysing 30 years of observations of PSR B1913+16, and found that general relativity was able to explain the rate of orbit contraction to within an error of 0.2 per cent. Should you argue that the binary system could be losing its energy in many different ways, the fact that general relativity explains the rate so accurately means the theory’s mechanism is the one involved – and in that theory the energy is carried away by gravitational waves.
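For the curious, the general-relativistic prediction itself is compact enough to compute. Below is a minimal sketch using the standard quadrupole-radiation (Peters) formula for the orbital period derivative, with approximate published values for the system’s masses, period and eccentricity; treat it as an illustration, not a substitute for the Taylor-Weisberg analysis:

```python
import math

T_SUN = 4.92549e-6        # G*M_sun/c^3, in seconds
m_p, m_c = 1.4398, 1.3886 # pulsar and companion masses, in solar masses
P_b = 0.322997 * 86400    # orbital period in seconds (~7.75 hours)
e = 0.617131              # orbital eccentricity

# Eccentricity enhancement factor, then the period derivative (dimensionless)
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5
dP_dt = (-192 * math.pi / 5) * T_SUN ** (5 / 3) \
        * (P_b / (2 * math.pi)) ** (-5 / 3) * f_e \
        * m_p * m_c / (m_p + m_c) ** (1 / 3)
print(dP_dt)  # ~ -2.4e-12 s/s: the orbit shrinks, matching observation
```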

Prof. Souradeep says, “According to Newtonian gravity, the gravitational pull of the Sun on Earth was instantaneous action at a distance. But now we know light takes eight minutes to come from the Sun to Earth, which means the star’s gravitational pull must also take eight minutes to affect Earth. This is why we have causality, with gravitational waves in a radiative mode.”

And this is proof that the waves exist, at least definitely in theory. They provide a simple, coherent explanation for a well-defined problem – like a hole in a giant jigsaw puzzle that we know only a certain kind of piece can fill. The fundamental particles called neutrinos were discovered through a similar process.

These particles, like gravitational waves, hardly interact with matter and are tenaciously elusive. Their existence was predicted by the physicist Wolfgang Pauli in 1930. He needed such a particle to explain how the heavier neutron could decay into the lighter proton, the remaining mass (or energy) being carried away by an electron and a neutrino antiparticle. And the team that first observed neutrinos in an experiment, in 1956, did find them under these circumstances.

Waiting for a direct detection

On March 17, radio-astronomers from the Harvard-Smithsonian Centre for Astrophysics (CfA) announced a more recent finding that points to the existence of gravitational waves, albeit in a more powerful and ancient avatar. Using a telescope called BICEP2 located at the South Pole, they found the waves’ unique signature imprinted on the cosmic microwave background, a dim field of energy leftover from the Big Bang and visible to this day.

At the time, Chao-Lin Kuo, a co-leader of the BICEP2 collaboration, had said, “We have made the first direct image of gravitational waves, or ripples in space-time across the primordial sky, and verified a theory about the creation of the whole universe.”

Spotting the waves themselves, directly, in our human form is impossible. This is why the CfA discovery and the orbital characteristics of PSR B1913+16 are as direct detections as they get. In fact, finding one concise theory to explain actions and events in varied settings is a good way to surmise that such a theory could exist.

For instance, there is another experiment whose sole purpose has been to find gravitational waves, using lasers. Its name is LIGO (Laser Interferometer Gravitational-wave Observatory). Its first phase operated from 2002 to 2010, and found no conclusive evidence of gravitational waves to report. Its second phase is due to start this year, in 2014, in an advanced form. On April 16, the LIGO collaboration put out a 20-minute documentary titled Passion for Understanding, about the “raw enthusiasm and excitement of those scientists and researchers who have dedicated their professional careers to this immense undertaking”.

The laser pendula

LIGO works like a pendulum to try and detect gravitational waves. With a pendulum, there is a suspended bob that goes back and forth between two points with a constant rhythm. Now, imagine there are two pendulums swinging parallel to each other but slightly out of phase, between two parallel lines 1 and 2. So when pendulum A reaches line 1, pendulum B hasn’t got there just yet, but it will soon enough.

When gravitational waves, comprising peaks and valleys of gravitational energy, surf through the space-time continuum, they induce corresponding crests and troughs that distort the metrics of space and passage of time in that area. When the two super-dense neutron stars that comprise PSR B1913+16 move around each other, they must be letting off gravitational waves in a similar manner, too.

When such a wave passes through the area where we are performing our pendulum experiment, it is likely to distort the pendulums’ arrival times at lines 1 and 2. Such a delay can be observed and recorded by sensitive instruments.

Analogously, LIGO uses beams of light generated by a laser at one point to bounce back and forth between mirrors for some time, and reconvene at a point. And instead of relying on the relatively clumsy mechanisms of swinging pendulums, scientists leverage the wave properties of light to make the measurement of a delay more precise.

At the beach, you’ll remember having seen waves forming in the distance, building up in height as they reach shallower depths, and then crashing in a spray of water on the shore. You might also have seen waves becoming bigger by combining. That is, when the crests of waves combine, they form a much bigger crest; when a crest and a trough combine, the effect is to cancel each other. (Of course this is an exaggeration. Matters are far less exact and pronounced on the beach.)

Similarly, the waves of laser light in LIGO are tuned such that, in the absence of a gravitational wave, what reaches the detector – an interferometer – is one crest and one trough, cancelling each other out and leaving no signal. In the presence of a gravitational wave, there is likely to be one crest and another crest, too, leaving behind a signal.
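In numbers, the tuning works out like this; the phase offset of 10⁻³ radian standing in for a passing wave is arbitrary, chosen only to make the effect visible:

```python
import numpy as np

phase = np.linspace(0, 2 * np.pi, 1_000)  # one cycle of the light wave

beam1 = np.sin(phase)
beam2 = np.sin(phase + np.pi)            # tuned so crest meets trough
beam2_gw = np.sin(phase + np.pi + 1e-3)  # a tiny shift from a passing wave

print(np.abs(beam1 + beam2).max())     # ~0: complete cancellation, no signal
print(np.abs(beam1 + beam2_gw).max())  # ~1e-3: cancellation breaks, a signal
```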

A blind spot

In an eight-year hunt for this signal, LIGO hasn’t found it. However, this isn’t the end because, like all waves, gravitational waves should also have a frequency, and it can be anywhere in a ginormous band if theoretical physicists are to be believed (and they are to be): between 10⁻⁷ and 10¹¹ hertz. LIGO will help humankind figure out which frequency ranges can be ruled out.

In 2014, the observatory will also reawaken after four years of dormancy, having received upgrades to improve its sensitivity and accuracy. According to Prof. Souradeep, the latter now stands at 10⁻²⁰ m. One more way in which LIGO is being equipped to find gravitational waves is by creating a network of LIGO detectors around Earth. There are already two in the US, one in Europe, and one in Japan (although the Japanese detector uses a different technique).

But though the network improves our ability to detect gravitational waves, it presents another problem. “These detectors are on a single plane, making them blind to a few hundred degrees of the sky,” Prof. Souradeep says. This means the detectors will experience the effects of a gravitational wave but if it originated from a blind spot, they won’t be able to get a fix on its source: “It will be like trying to find MH370!” Fortunately, since 2010, there have been many ways proposed to solve this problem, and work on some of them is under way.

One of them is called eLISA, for Evolved Laser Interferometer Space Antenna. It will attempt to detect and measure gravitational waves by monitoring the locations of three spacecraft arranged in an equilateral triangle moving in a Sun-centric orbit. eLISA is expected to be launched only two decades from now, although a proof-of-concept mission has been planned by the European Space Agency for 2015.

Another solution is to install a LIGO detector on ground and outside the plane of the other three – such as in India. According to Prof. Souradeep, LIGO-India will reduce the size of the blind spot to a few tens of degrees – an order of magnitude improvement. The country’s Planning Commission has given its go-ahead for the project as a ‘mega-science project’ in the 12th Five Year Plan, and the Department of Atomic Energy, which is spearheading the project, has submitted a note to the Union Cabinet for approval. With the general elections going on in the country, physicists will have to wait until at least June or July to expect to get this final clearance.

Once cleared, of course, it will prove a big step forward not just for the Indian scientific community but also for the global one, marking the next big step – and possibly a more definitive one – in a journey that started with a strange pulsar 21,000 light-years away. As we get better at studying these waves, we have access to a universe visible not just in visible light, radio-waves, X-rays or neutrinos but also through its gravitational susurration – like feeling the pulse of the space-time continuum itself.