When cooling down really means slowing down

Consider this post the latest in a loosely defined series about atomic cooling techniques that I’ve been writing since June 2018.

Atoms can’t run a temperature, but things made up of atoms, like a chair or table, can become hotter or colder. This is because what we observe as the temperature of a macroscopic object is, at the smallest level, the kinetic energy of the atoms it is made up of. If you were to cool such an object, you’d have to reduce the average kinetic energy of its atoms. By the same token, to cool a small group of atoms trapped in a container, you’d simply have to make sure that – all told – they slow down.

Over the years, physicists have figured out more and more ingenious ways to cool atoms and molecules this way to ultra-cold temperatures. Such states are of immense practical importance because at very low energies these particles (‘particles’ being the umbrella term) start displaying quantum mechanical effects that are too subtle to show up at higher temperatures. And different quantum mechanical effects are useful for creating exotic things like superconductors, topological insulators and superfluids.

One of the oldest modern cooling techniques is laser-cooling. Here, a laser beam of a certain frequency is fired at an atom moving towards the beam. Electrons in the atom absorb photons from the beam, acquire energy and jump to a higher energy level. A short time later, the electrons lose this energy by emitting a photon and fall back to the lower energy level. But since the photons are absorbed from only one direction while they are emitted in arbitrarily different directions, the atom steadily loses momentum along one direction while gaining momentum in a variety of directions (by Newton’s third law). The latter kicks largely cancel each other out, leaving the atom with considerably lower kinetic energy, and therefore cooler than before.
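
To make that cancellation concrete, here is a minimal toy simulation of my own (not any lab’s actual procedure): an atom with roughly sodium-like mass takes a fixed momentum kick opposite to its motion for every photon it absorbs, and a recoil kick in a random direction for every photon it emits. The 589 nm wavelength and the 30 m/s starting speed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Rough, sodium-like numbers: atomic mass ~3.8e-26 kg, laser wavelength ~589 nm.
m = 3.8e-26                        # kg
p_photon = 6.626e-34 / 589e-9      # photon momentum h/lambda, in kg*m/s

v = np.array([30.0, 0.0, 0.0])     # atom initially moving at 30 m/s along +x
print("initial speed:", np.linalg.norm(v), "m/s")

for _ in range(5000):              # absorption-emission cycles
    if v[0] <= 0:                  # crude stand-in for the Doppler condition:
        break                      # only atoms moving into the beam keep absorbing
    v[0] -= p_photon / m           # absorption: a kick opposing the motion
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)         # emission: recoil in a random direction,
    v += (p_photon / m) * d        # which averages out over many cycles

print("final speed:", round(float(np.linalg.norm(v)), 2), "m/s")

Run it and the atom ends up moving at a small fraction of its initial 30 m/s: the directed absorption kicks do the slowing, while the random emission recoils mostly wash out.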

In collisional cooling, an atom is made to lose momentum by colliding not with a laser beam but with other atoms, which are maintained at a very low temperature. This technique works better if the ratio of elastic to inelastic collisions is much greater than 50. In elastic collisions, the total kinetic energy of the system is conserved; in inelastic collisions, the total energy is conserved but not the kinetic energy alone. In effect, collisional cooling works better if almost all collisions – if not all of them – conserve kinetic energy. Since the other atoms are maintained at a low temperature, they have little kinetic energy to begin with. So collisional cooling works by bouncing warmer atoms off of colder ones such that the colder ones take away some of the warmer atoms’ kinetic energy, thus cooling them.
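
Here is an equally minimal sketch of that energy hand-off, reduced to a single head-on, one-dimensional elastic collision (the masses are rough atomic-mass ratios for NaLi and Na; everything else about the real experiment is ignored):

# 1D head-on elastic collision: momentum and kinetic energy are both conserved.
m1, m2 = 30.0, 23.0    # 'warm' NaLi-like molecule, 'cold' Na-like atom (rough mass ratio)
v1, v2 = 1.0, 0.0      # the cold atom starts essentially at rest

# Standard elastic-collision formulas
v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

ke = lambda mass, vel: 0.5 * mass * vel ** 2
print("warm particle KE before and after:", ke(m1, v1), ke(m1, v1_after))
print("cold particle KE before and after:", ke(m2, v2), ke(m2, v2_after))
# The warm particle leaves with only a small fraction of its kinetic energy;
# the cold one carries the rest away, and the total stays the same.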

In a new study, a team of scientists from MIT, Harvard University and the University of Waterloo reported that they were able to cool a pool of NaLi diatoms (molecules with only two atoms) this way to a temperature of 220 nK. That’s 220 billionths of a kelvin, about 12 million times colder than deep space. They achieved this feat by colliding the warmer NaLi diatoms with five times as many colder Na (sodium) atoms through two cycles of cooling.
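
As a rough check of that comparison (my arithmetic, not the paper’s), take ‘deep space’ to mean the roughly 2.7 K of the cosmic microwave background: 2.7 K ÷ 220 nK = 2.7 ÷ 0.00000022 ≈ 1.2 × 10⁷, or about 12 million.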

Their paper, published online on April 8 (preprint here), indicates that their feat is notable for three reasons.

First, it’s easier to cool particles (atoms, ions, etc.) in which as many electrons as possible are paired to each other. A particle in which all electrons are paired is called a singlet; ones that have one unpaired electron each are called doublets; those with two unpaired electrons – like NaLi diatoms – are called triplets. Doublets and triplets can also absorb and release more of their energy by modifying the spins of individual electrons, which messes with collisional cooling’s need to modify a particle’s kinetic energy alone. The researchers from MIT, Harvard and Waterloo overcame this barrier by applying a ‘bias’ magnetic field across their experiment’s apparatus, forcing all the particles’ spins to align along a common direction.

Second, usually when Na and NaLi come in contact, they react and the NaLi molecule breaks down. However, the researchers found that in the so-called spin-polarised state, the Na and NaLi didn’t react with each other, preserving the latter’s integrity.

Third, and perhaps most importantly, this is not the coldest temperature to which we have been able to cool quantum particles, but it still matters because collisional cooling offers unique advantages that make it attractive for certain applications. Perhaps the most well-known of them is quantum computing. Simply speaking, physicists prefer ultra-cold molecules over atoms for use in quantum computers because molecules can be controlled more precisely than atoms can. But molecules that have doublet or triplet states or are otherwise reactive can’t be cooled to a few billionths of a kelvin with laser-cooling or other techniques. The new study shows they can, however, be cooled to 220 nK using collisional cooling. The researchers predict that in the future, they may be able to cool NaLi molecules even further with better equipment.

Note that the researchers didn’t cool the NaLi molecules from room temperature to 220 nK but from 2 µK. The achievement remains impressive nonetheless: there are already well-established techniques to cool atoms and molecules from room temperature to a few microkelvin; it’s the lower temperatures that are harder to reach.

One of the researchers involved in the current study, Wolfgang Ketterle, is celebrated for his contributions to understanding and engineering ultra-cold systems. He led an effort in 2003 to cool sodium atoms to 0.5 nK – a record. He, Eric Cornell and Carl Wieman won the Nobel Prize for physics two years before that: Cornell, Wieman and their team created the first Bose-Einstein condensate in 1995, and Ketterle created ‘better’ condensates that allowed for closer inspection of their unique properties. A Bose-Einstein condensate is a state of matter in which multiple particles called bosons are ultra-cooled in a container, at which point they occupy the same quantum state – something they don’t do in nature (even as they comply with the laws of nature) – and give rise to strange quantum effects that can be observed without a microscope.

Ketterle’s attempts make for a fascinating tale; I collected some of them plus some anecdotes together for an article in The Wire in 2015, to mark the 90th year since Albert Einstein had predicted their existence, in 1924-1925. A chest-thumper might be cross that I left Satyendra Nath Bose out of this citation. It is deliberate. Bose-Einstein condensates are named for their underlying theory, called Bose-Einstein statistics. But while Bose had the idea for the theory to explain the properties of photons, Einstein generalised it to more particles, and independently predicted the existence of the condensates based on it.

This said, if it is credit we’re hungering for: the history of atomic cooling techniques includes the brilliant but little-known S. Pancharatnam. His work in wave physics laid the foundations of many of the first cooling techniques, and was credited as such by Claude Cohen-Tannoudji in the journal Current Science in 1994. Cohen-Tannoudji would win a piece of the Nobel Prize for physics in 1997 for inventing a technique called Sisyphus cooling – a way to cool atoms by converting more and more of their kinetic energy to potential energy, and then draining the potential energy.

Indeed, the history of atomic cooling techniques is, broadly speaking, a history of physicists uncovering newer, better ways to remove just a little bit more energy from an atom or molecule that’s already lost a lot of its energy. The ultimate prize is absolute zero, the lowest temperature possible, at which the atom retains only the energy it can in its ground state. However, absolute zero is neither practically attainable nor – more importantly – the goal in and of itself in most cases. Instead, the experiments in which physicists have achieved really low temperatures are often pegged to an application, and getting below a particular temperature is the goal.

For example, niobium nitride becomes a superconductor below 16 K (-257º C), so applications using this material need to reach and maintain this temperature during operation. For another, as the MIT-Harvard-Waterloo group of researchers write in their paper, “Ultra-cold molecules in the micro- and nano-kelvin regimes are expected to bring powerful capabilities to quantum emulation and quantum computing, owing to their rich internal degrees of freedom compared to atoms, and to facilitate precision measurement and the study of quantum chemistry.”

Why are the Nobel Prizes still relevant?

Note: A condensed version of this post has been published in The Wire.

Around this time last week, the world had nine new Nobel Prize winners in the sciences (physics, chemistry and medicine), all but one of whom were white and none were women. Before the announcements began, Göran Hansson, the Swede-in-chief of these prizes, had said the selection committee has been taking steps to make the group of laureates more racially and gender-wise inclusive, but it would seem they’re incremental measures, as one editorial in the journal Nature pointed out.

Hansson and co. seem to find tenable the argument that the Nobel Prizes award achievements from a time when there weren’t many women in science, when in fact that argument distracts from the selection committee’s bizarre oversight of such worthy names as Lise Meitner, Vera Rubin, Chien-Shiung Wu, etc. But Hansson needs to understand that the only meaningful change is change that happens right away, because even this significant flaw – one that should by all means have diminished the prizes to a contest of, for and by men – has dented the Nobel Prizes’ reputation only marginally.

Why do they matter when they clearly shouldn’t?

For example, going by the most common comments received in response to articles by The Wire shared on Twitter and Facebook – and always from men – the prizes reward excellence, and excellence should brook no reservation, whether by caste or gender. As is likely obvious to many readers, this view of scholastic achievement resembles a blade of grass: long, sprouting from the ground (the product of strong roots but out of sight, out of mind), rising straight up and culminating in a sharp tip.

However, achievement is more like a jungle: the scientific enterprise – encompassing research institutions, laboratories, the scientific publishing industry, administration and research funding, social security, availability of social capital, PR, discoverability and visibility, etc. – incorporates many vectors of bias, discrimination and even harassment towards its more marginalised constituents. Your success is not your success alone; and if you’re an upper-caste, upper-class, English-speaking man, you should ask yourself, as many such men have been prompted to in various walks of life, who you might have displaced.

This isn’t a witch-hunt as much as an opportunity to acknowledge how privilege works and what we can do to make scientific work more equal, equitable and just in future. But the idea that research is a jungle and research excellence is a product of the complex interactions happening among its thickets hasn’t found meaningful purchase, and many people still labour under a comically straightforward impression that science is immune to social forces. Hansson might be one of them if his interview with Nature is anything to go by, where he says:

… we have to identify the most important discoveries and award the individuals who have made them. If we go away from that, then we’ve devalued the Nobel prize, and I think that would harm everyone in the end.

In other words, the Nobel Prizes are just going to look at the world from the top, and probably from a great distance too, so the jungle has been condensed to a cluster of pin-pricks.

Another reason the Nobel Prizes haven’t been easy to sideline is that the sciences’ ‘blade of grass’ impression is strongly historically grounded, helped along by notions like the idea that scientific knowledge spreads from the Occident to the Orient.

Who’s the first person that comes to mind when I say “Nobel Prize for physics”? I bet it’s Albert Einstein. He was so great that his stature as a physicist has over the decades transcended his human identity and stamped the Nobel Prize he won in 1921 with an indelible mark of credibility. Now, to win a Nobel Prize in physics is to stand alongside Einstein himself.

This union between a prize and its laureate isn’t unique to the Nobel Prize or to Einstein. As I’ve said before, prizes are elevated by their winners. When Margaret Atwood wins the Booker Prize, it’s better for the prize than it is for her; when Isaac Asimov won a Hugo Award in 1963, near the start of his career, it was good for him, but it was good for the prize when he won it for the sixth time in 1992 (the year he died). The Nobel Prizes also accrued a substantial amount of prestige this way at a time when it wasn’t much of a problem, apart from the occasional flareup over ignoring deserving female candidates.

That their laureates have almost always been from Europe and North America further cemented the prizes’ impression that they’re the ultimate signifier of ‘having made it’, paralleling the popular undercurrent among postcolonial peoples that science is a product of the West and that they’re simply its receivers.

That said, the prize-as-proxy issue has also contributed considerably to preserving systemic bias at the national and international levels. Winning a prize (especially a legitimate one) accords the winner’s work a modicum of credibility and the winner, prestige. Depending on how a prize’s future winners are selected, that credibility and prestige can compound, skewing the prize in favour of people who have already won other prizes.

For example, a scientist-friend ranted to me about how, at a conference he had recently attended, another scientist on stage had introduced himself to his audience by mentioning the impact factors of the journals he’d had his papers published in. The impact factor deserves to die because, among other reasons, it attempts to condense multi-dimensional research efforts and the vagaries of scientific publishing into a single number that stands for some kind of prestige. But its users should be honest about its actual purpose: it was designed so evaluators could take one look at it and decide what to do about a candidate to whom it corresponded. This isn’t fair – but expeditiousness isn’t cheap.

And when evaluators at different rungs of the career-advancement ladder privilege the impact factor, scientists with more papers published earlier in their careers in journals with higher impact factors become exponentially likelier to be recognised for their efforts (probably even irrespective of their quality, given the unique failings of high-IF journals, discussed here and here) over time than others.

Brian Skinner, a physicist at Ohio State University, recently presented a mathematical model of this ‘prestige bias’, whose amplification depended in a unique way, according to him, on a factor he called the ‘examination precision’. He found that the more ambiguously defined the barrier to advancement is, the more pronounced the prestige bias could get. Put another way, people who have the opportunity to maintain systemic discrimination simultaneously have an incentive to make the points of entry into their club as vague as possible. Sound familiar?
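
To get a feel for how that plays out, here is a toy simulation of my own; it is not Skinner’s actual model, only a caricature of the mechanism. An evaluator compares two candidates of identical true ability by combining a noisy ‘examination’ score with a prior belief skewed in favour of the already-prestigious candidate, weighting each by its precision. As the examination gets noisier, the prior – and hence prestige – decides more and more of the outcomes.

import numpy as np

rng = np.random.default_rng(42)

def prestigious_win_rate(exam_noise, rounds=50_000):
    """How often the candidate with the more flattering prior is preferred,
    even though both candidates have exactly the same true ability."""
    true_ability = 1.0
    prior_sigma = 0.5
    prior_prestigious, prior_unknown = 1.2, 0.8   # prestige skews the prior belief
    # weight given to the exam score: high when the exam is precise, low when noisy
    w = prior_sigma ** 2 / (prior_sigma ** 2 + exam_noise ** 2)

    exam_p = true_ability + rng.normal(0, exam_noise, rounds)
    exam_u = true_ability + rng.normal(0, exam_noise, rounds)
    est_p = w * exam_p + (1 - w) * prior_prestigious
    est_u = w * exam_u + (1 - w) * prior_unknown
    return np.mean(est_p > est_u)

for noise in (0.1, 0.5, 2.0):    # larger noise = a less precise examination
    print(f"exam noise {noise}: prestigious candidate preferred "
          f"{prestigious_win_rate(noise):.0%} of the time")

With a sharp exam the two candidates come out close to a coin toss; with a vague one, the prestigious candidate is preferred almost every time.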

One might argue that the Nobel Prizes are awarded to people at the end of their careers – the average age of a physics laureate is in the late 50s; John Goodenough won the chemistry prize this year at 97 – so the prizes couldn’t possibly increase the likelihood of a future recognition. But the sword cuts both ways: the Nobel Prizes are likelier than not to be products of prestige-bias amplification themselves, and are therefore not the morally neutral symbols of excellence Hansson and his peers seem to think they are.

Fourth, the Nobel Prizes are an occasion to speak of science. This implies that those who would deride the prizes but at the same time hold them up are equally to blame, but I would agree only in part. This exhortation to try harder is voiced more often than not by those working in the West, at publications with better resources and typically higher purchasing power. On principle I can’t deride the decisions reporters and editors make in the process of building an audience for science journalism, with the hope that it will be profitable someday, all in a resource-constrained environment, even if some of those choices might seem irrational.

(The story of Brian Keating, an astrophysicist, could be illuminating at this juncture.)

More than anything else, what science journalism needs to succeed is a commonplace acknowledgement that science news is important – whether it’s for the better or the worse is secondary – and the Nobel Prizes do a fantastic job of drawing people’s attention to scientific ideas and endeavours. If anything, journalists should seize the opportunity in October every year to also speak about how the prizes are flawed and present their readers with a fuller picture.

Finally, and of course, we have capitalism itself – implicated in the quantum of prize money accompanying each Nobel Prize (9 million Swedish kronor, Rs 6.56 crore or $0.9 million).

Then again, this figure pales in comparison to the amounts that academic institutions know they can rake in by instrumentalising the prestige in the form of donations from billionaires, grants and fellowships from the government, fees from students presented with the tantalising proximity to a Nobel laureate, and in the form of press coverage. L’affaire Epstein even demonstrated how it’s possible to launder a soiled reputation by investing in scientific research because institutions won’t ask too many questions about who’s funding them.

The Nobel Prizes are money magnets, and this is also why winning a Nobel Prize is like winning an Academy Award: you don’t get on stage without some lobbying. Each blade of grass has to mobilise its own PR machine, supported in all likelihood by the same institute that submitted their candidature to the laureates selection committee. The Nature editorial called this out thus:

As a small test case, Nature approached three of the world’s largest international scientific networks that include academies of science in developing countries. They are the International Science Council, the World Academy of Sciences and the InterAcademy Partnership. Each was asked if they had been approached by the Nobel awarding bodies to recommend nominees for science Nobels. All three said no.

I believe arguments that serve to uphold the Nobel Prizes’ relevance must take recourse to at least one of these reasons, if not all of them. It’s also abundantly clear that the Nobel Prizes are important not because they present a fair or useful picture of scientific excellence but in spite of the fact that they don’t.

Disentangling entanglement

There has been considerable speculation about whether the winners of this year’s Nobel Prize for physics, due to be announced at 2.30 pm IST on October 8, will include Alain Aspect and Anton Zeilinger. They’ve both made significant experimental contributions related to quantum information theory and the fundamental nature of quantum mechanics, including entanglement.

Their work, at least the potentially prize-winning part of it, is centred on a class of experiments called Bell tests. If you perform a Bell test, you’re essentially checking the extent to which the rules of quantum mechanics are compatible with the rules of classical physics.

Whether or not Aspect, Zeilinger and/or others win a Nobel Prize this year, what they did achieve is worth putting in words. Of course, many other writers, authors, scientists, etc. have already performed this activity; I’d like to redo it, if only because writing helps commit things to memory, and because the various performers of Bell tests are likely to win some prominent prize – given how modern technologies like quantum cryptography are inflating the importance of their work – and when that happens I’ll have ready reference material.

(There is yet another reason Aspect and Zeilinger could win a Nobel Prize. As with the medicine prizes, many of whose laureates previously won a Lasker Award, many of the physics laureates have previously won the Wolf Prize. And Aspect and Zeilinger jointly won the Wolf Prize for physics in 2010 along with John Clauser.)

The following elucidation is divided into two parts: principles and tests. My principal sources are Wikipedia, some physics magazines, Quantum Physics for Poets by Leon Lederman and Christopher Hill (2011), and a textbook of quantum mechanics by John L. Powell and Bernd Crasemann (1998).

§

Principles

From the late 1920s, Albert Einstein began to publicly express his discomfort with the emerging theory of quantum mechanics. He claimed that a quantum mechanical description of reality allowed “spooky” things that the rules of classical mechanics, including his theories of relativity, forbid. He further contended that both classical mechanics and quantum mechanics couldn’t be true at the same time and that there had to be a deeper theory of reality with its own, thus-far hidden variables.

Remember the Schrödinger’s cat thought experiment: place a cat in a box with a bowl of poison and close the lid; until you open the box to make an observation, the cat may be considered to be both alive and dead. Erwin Schrödinger came up with this example to ridicule the implications of Niels Bohr’s and Werner Heisenberg’s idea that the quantum state of a subatomic particle, like an electron, was described by a mathematical object called the wave function.

The wave function has many unique properties. One of these is superposition: the ability of an object to exist in multiple states at once. Another is decoherence (although this isn’t a property as much as a phenomenon common to many quantum systems): when you observe the object, it probabilistically collapses into one fixed state.
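
Here is a minimal numerical sketch of what superposition plus probabilistic collapse amounts to, for a generic two-state system (nothing specific to any experiment discussed here): the state is a pair of complex amplitudes, and each observation picks one outcome with probability equal to the squared magnitude of the corresponding amplitude, i.e. the Born rule.

import numpy as np

rng = np.random.default_rng(7)

# A two-state system ('blue'/'green', or spin up/down) in an equal superposition.
amplitudes = np.array([1, 1j]) / np.sqrt(2)
probabilities = np.abs(amplitudes) ** 2        # Born rule: |amplitude| squared

def observe():
    """One observation: collapse, probabilistically, into a single fixed state."""
    return rng.choice(["state 0", "state 1"], p=probabilities)

counts = {"state 0": 0, "state 1": 0}
for _ in range(10_000):
    counts[observe()] += 1

print(probabilities)    # both outcomes equally likely: 0.5 and 0.5
print(counts)           # roughly 5,000 of each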

Imagine having a box full of billiard balls, each of which is both blue and green at the same time. But the moment you open the box to look, each ball decides to become either blue or green. This (metaphor) is on the face of it a kooky description of reality. Einstein definitely wasn’t happy with it; he believed that quantum mechanics was just a theory of what we thought we knew and that there was a deeper theory of reality that didn’t offer such absurd explanations.

In 1935, Einstein, Boris Podolsky and Nathan Rosen advanced a thought experiment based on these ideas that seemed to yield ridiculous results, in a deliberate effort to provoke Einstein’s ‘opponents’ into reconsidering their ideas. Say there’s a heavy particle with zero spin – a property of elementary particles – inside a box in Bangalore. At some point, it decays into two smaller particles. One of these ought to have a spin of 1/2 and the other of -1/2 to abide by the conservation of spin. You send one of these particles to a friend in Chennai and the other to a friend in Mumbai. Until these people observe their respective particles, the latter are to be considered to be in a mixed state – a superposition. In the final step, your friend in Chennai observes the particle to measure a spin of -1/2. This immediately implies that the particle sent to Mumbai should have a spin of 1/2.

If you’d performed this experiment with two billiard balls instead, one blue and one green, the person in Bangalore would’ve known which ball went to which friend. But in the Einstein-Podolsky-Rosen (EPR) thought experiment, the person in Bangalore couldn’t have known which particle was sent to which city, only that each particle existed in a superposition of two states, spin 1/2 and spin -1/2. This situation was unacceptable to Einstein because it was inimical to certain assumptions on which the theories of relativity were founded.

The moment the friend in Chennai observed her particle to have spin -1/2, the one in Mumbai would have known without measuring her particle that it had a spin of 1/2. If it didn’t, the conservation of spin would be violated. If it did, then the wave function of the Mumbai particle would have collapsed to a spin 1/2 state the moment the wave function of the Chennai particle had collapsed to a spin -1/2 state, indicating faster-than-light communication between the particles. Either way, quantum mechanics could not produce a sensible outcome.

Two particles whose wave functions are linked the way they were in the EPR paradox are said to be entangled. Einstein memorably described entanglement as “spooky action at a distance”. He used the EPR paradox to suggest quantum mechanics couldn’t possibly be legit, certainly not without messing with the rules that made classical mechanics legit.

So the question of whether quantum mechanics was a fundamental description of reality or whether there were any hidden variables representing a deeper theory stood for nearly thirty years.

Then, in 1964, an Irish physicist at CERN named John Stewart Bell figured out a way to answer this question using what has since been called Bell’s theorem. He defined a set of inequalities – statements of the form “P is greater than Q” – that any theory obeying the rules of classical mechanics would definitely satisfy. If an experiment conducted with electrons, for example, also concluded that “P is greater than Q”, it would support the idea that quantum mechanics (vis-à-vis electrons) has ‘hidden’ parts that would explain things like entanglement more along the lines of classical mechanics.

But if an experiment couldn’t conclude that “P is greater than Q“, it would support the idea that there are no hidden variables, that quantum mechanics is a complete theory and, finally, that it implicitly supports spooky actions at a distance.
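
To make the “P is greater than Q” framing concrete, here is a minimal numerical sketch of the most commonly tested version of Bell’s inequalities, the CHSH form; it is my own illustration, not a description of any actual experiment. A toy local-hidden-variable model can push the CHSH quantity up to 2 at most, while the quantum mechanical prediction for an entangled pair reaches about 2.83.

import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def lhv_correlation(a, b):
    """Toy local hidden-variable model: each pair carries a shared hidden angle,
    and each detector's +/-1 outcome depends only on that angle and its own setting."""
    lam = rng.uniform(0, 2 * np.pi, N)
    A = np.sign(np.cos(lam - a))
    B = -np.sign(np.cos(lam - b))
    return np.mean(A * B)

def qm_correlation(a, b):
    """Quantum mechanical prediction for a spin-singlet (entangled) pair."""
    return -np.cos(a - b)

def chsh(correlation):
    a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # standard settings
    return abs(correlation(a1, b1) - correlation(a1, b2)
               + correlation(a2, b1) + correlation(a2, b2))

print("local hidden variables:", chsh(lhv_correlation))   # ~2.0; such models never exceed 2
print("quantum mechanics:     ", chsh(qm_correlation))    # ~2.83, violating the bound

The experiments described below measure the left-hand side of such an inequality with real particle pairs and ask which of those two numbers nature agrees with.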

The theorem itself boiled down to a statement. To quote myself from a 2013 post (emphasis added):

for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or [faster-than-light] communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed [like electrons or protons].

Zeilinger and Aspect, among others, are recognised for having performed these experiments, called Bell tests.

Technological advancements through the late 20th and early 21st centuries have produced more and more nuanced editions of different kinds of Bell tests. However, one thing has been clear from the earliest tests to the latest: they have all consistently violated Bell’s inequalities, indicating that quantum mechanics does not have hidden variables and that our reality does allow bizarre things like superposition and entanglement to happen.

To quote from Quantum Physics for Poets (p. 214-215):

Bell’s theorem addresses the EPR paradox by establishing that measurements on object a actually do have some kind of instant effect on the measurement at b, even though the two are very far apart. It distinguishes this shocking interpretation from a more commonplace one in which only our knowledge of the state of b changes. This has a direct bearing on the meaning of the wave function and, from the consequences of Bell’s theorem, experimentally establishes that the wave function completely defines the system in that a ‘collapse’ is a real physical happening.


Tests

Though Bell defined his inequalities in such a way that they would lend themselves to study in a single test, experimenters often stumbled upon loopholes in the result, a consequence of the experiment’s design not being robust enough to evade quantum mechanics’ propensity to confound observers. Think of a loophole as a caveat; an experimenter runs a test and comes to you and says, "P is greater than Q but…", followed by an excuse that makes the result less reliable. For a long time, physicists couldn’t figure out how to get rid of all these excuses and just be able to say – or not say – "P is greater than Q".

If millions of photons are entangled in an experiment, the detectors used to observe the photons may not be good enough to detect all of them, or some photons may not survive the journey to the detectors. This fair-sampling loophole could give rise to doubts about whether a photon collapsed into a particular state because of entanglement or whether it was simply coincidence.

To prevent this, physicists could bring the detectors closer together but this would create the communication loophole. If two entangled photons are separated by 100 km and the second observation is made more than 0.0003 seconds after the first, it’s still possible that optical information could’ve been exchanged between the two particles. To sidestep this possibility, the two observations have to be separated by a distance greater than what light could travel in the time it takes to make the measurements. (Alain Aspect and his team also pointed their two detectors in random directions in one of their tests.)
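
(That 0.0003-second figure is just the light-travel time: as a rough check, 100 km ÷ 300,000 km/s ≈ 0.00033 s.)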

Third, physicists can tell if two photons received in separate locations were in fact entangled with each other, and not other photons, based on the precise time at which they’re detected. So unless physicists precisely calibrate the detection window for each pair, hidden variables could have time to interfere and induce effects the test isn’t designed to check for, creating a coincidence loophole.

If physicists perform a test such that detectors repeatedly measure the particles involved in, say, two labs in Chennai and Mumbai, it’s not impossible for statistical dependencies to arise between measurements. To work around this memory loophole, the experiment simply has to use different measurement settings for each pair.

Apart from these, experimenters also have to minimise any potential error within the instruments involved in the test. If they can’t eliminate the errors entirely, they will then have to modify the experimental design to compensate for any confounding influence due to the errors.

So the ideal Bell test – the one with no caveats – would be one where the experimenters are able to close all loopholes at the same time. In fact, physicists soon realised that the fair-sampling and communication loopholes were the more important ones.

In 1972, John Clauser and Stuart Freedman performed the first Bell test by entangling photons and measuring their polarisation at two separate detectors. Aspect led the first group that closed the communication loophole, in 1982; he subsequently conducted more tests that improved his first results. Anton Zeilinger and his team made advancements on the fair-sampling loophole.

One particularly important experimental result showed up in August 2015: Ronald Hanson and his team at the Delft University of Technology, in the Netherlands, had found a way to close the fair-sampling and communication loopholes at the same time. To quote Zeeya Merali’s report in Nature News at the time (lightly edited for brevity):

The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. … the team managed to generate 245 entangled pairs of electrons over … nine days. The team’s measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.

By December 2015, Anton Zeilinger and co. were able to close the communication and fair-sampling loopholes in a single test with a 1-in-2-octillion chance of error, using a different experimental setup from Hanson’s. In fact, Zeilinger’s team actually closed three loopholes including the freedom-of-choice loophole. According to Merali, this is “the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct”.

But at the time Hanson et al announced their result, Matthew Leifer, a physicist at the Perimeter Institute in Canada, told Nature News (in the same report) that because “we can never prove that [the converse of freedom of choice] is not the case, … it’s fair to say that most physicists don’t worry too much about this.”

We haven’t gone into much detail about Bell’s inequalities themselves, but if our goal is to understand why Aspect and Zeilinger, and Clauser too, deserve to win a Nobel Prize, the answer lies in the ingenious tests they devised to test Bell’s, and Einstein’s, ideas, and in the implications of what they’ve found in the process.

For example, Bell crafted his test of the EPR paradox in the form of a ‘no-go theorem’: if a theory satisfied certain conditions, it was designated non-local, like quantum mechanics; if it didn’t satisfy all those conditions, it would be classified as local, like Einstein’s special relativity. So Bell tests are effectively gatekeepers that can attest whether or not a theory – or a system – is behaving in a quantum way, and each loophole is like an attempt to hack the attestation process.

In 1991, Artur Ekert, who would later be acknowledged as one of the inventors of quantum cryptography, realised this perspective could have applications in securing communications. Engineers could encode information in entangled particles, send them to remote locations, and allow detectors there to communicate with each other securely by observing these particles and decoding the information. The engineers can then perform Bell tests to determine if anyone might be eavesdropping on these communications using one or some of the loopholes.

Can gravitational waves be waylaid by gravity?

Yesterday, I learnt the answer is ‘yes’. Gravitational waves can be gravitationally lensed. It seems obvious once you think about it, but not something that strikes you (assuming you’re not a physicist) right away.

When physicists solve problems relating to the spacetime continuum, they imagine it as a four-dimensional manifold: three of space and one of time. Objects exist in the bulk of this manifold and visualisations like the one below are what two-dimensional slices of the continuum look like. This unified picture of space and time was a significant advancement in the history of physics.

While Hendrik Lorentz and Hermann Minkowski first noticed this feature in the early 20th century, they did so only to rationalise empirical data. Albert Einstein was the first physicist to fully figure out the why of it, through his theories of relativity.

A common way to visualise the curvature of spacetime around a massive object, in this case Earth. Credit: NASA

Specifically, according to the general theory, massive objects bend the spacetime continuum around themselves. Because light passes through the continuum, its path bends along the continuum when passing near massive bodies. Seen head-on, a massive object – like a black hole – appears to encircle a light-source in its background in a ring of light. This is because the black hole’s mass has caused spacetime to curve around the black hole, creating a cosmic mirage of the light emitted by the object in its background (see video below) as seen by the observer. By focusing light flowing in different directions around it towards one point, the black hole has effectively behaved like a lens.

So much is true of light, which is a form of electromagnetic radiation. And just the way electrically charged particles emit such radiation when they accelerate, massive particles emit gravitational waves when they accelerate. These gravitational waves are said to carry gravitational energy.

Gravitational energy is effectively the potential energy of a body due to its mass. Put another way, a more massive object would pull a smaller body in its vicinity towards itself faster than a less massive object would. The difference between these abilities is quantified as a difference between the objects’ gravitational energies.

Credit: ALMA (NRAO/ESO/NAOJ)/Luis Calçada (ESO)

Such energy is released through the spacetime continuum when the mass of a massive object changes. For example, when the two black holes of a binary merge to form a larger one, the larger one usually has less mass than the two lighter ones put together. The difference arises because some of the mass has been converted into gravitational energy. In another example, when a massive object accelerates, it distorts its gravitational field; these distortions propagate outwards through the continuum as gravitational energy.

Scientists and engineers have constructed instruments on Earth to detect gravitational energy in the form of gravitational waves. When an object releases gravitational energy into the spacetime continuum, the energy ripples through the continuum the way a stone dropped in water instigates ripples on the surface. And just the way the ripples alternately stretch and compress the water, gravitational waves alternately stretch and compress the continuum as they move through it (at the speed of light).

Instruments like the twin Laser Interferometer Gravitational-wave Observatories (LIGO) are designed to pick up on these passing distortions while blocking out all others. That is, when LIGO records a distortion passing through the parts of the continuum where its detectors are located, scientists will know it has just detected a gravitational wave. Because the frequency and amplitude of the wave, and the way they evolve, encode information about what produced it, scientists can use the properties of the gravitational wave as measured by LIGO to deduce the properties of its original source.

(As you might have guessed, even a cat running across the room emits gravitational waves. However, these waves are so feeble that it is practically impossible to build instruments to measure them, nor are we likely to find such an exercise useful.)

I learnt today that it is also possible for instruments like LIGO to be able to detect the gravitational lensing of gravitational waves. When an object like a black hole warps the spacetime continuum around it, it lenses light – and it is easy to see how it would lens gravitational waves as well. The lensing effect is the result not of the black hole’s ‘direct’ interaction with light as much as its distortion of the continuum. Ergo, anything that traverses the continuum, including gravitational waves, is bound to be lensed by the black hole.

The human body evolved eyes to receive information encoded in visible light, so we can directly see lensed visible light. However, we don’t possess any organs that would allow us to do the same thing with gravitational waves. Instead, we will need to use existing instruments, like LIGO, to detect these particular distortions. How do we do that?

When two black holes are rapidly revolving around each other, getting closer and closer, they shed more and more of their potential energy as gravitational waves. In effect, the frequency of these waves is quickly increasing together with their amplitude, and LIGO registers this as a chirp (see video below). Once the two black holes have merged, both frequency and amplitude drop to zero (because a solitary spinning black hole does not emit gravitational waves).

In the event of a lensing, however, LIGO will effectively detect two sets of gravitational waves. One set will arrive at LIGO straight from the source. The second set – originally sent off in a different direction – will become lensed towards LIGO. And because the lensed wave will effectively have travelled a longer distance, it will arrive a short while after the direct wave.

The distance scale here is grossly exaggerated for effect

However, LIGO will not register two chirps; in fact, it will register no chirps at all. Instead, the direct wave and the lensed wave will interfere with each other inside the instrument to produce a characteristically mixed signal. By the laws of wave mechanics, this signal will have increasing frequency, as in the chirp, but uneven amplitude. If it were sonified, the signal’s sound would climb in pitch but have irregular volume.
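
Here is a minimal, deliberately unrealistic sketch of that effect (a toy chirp, not a real LIGO waveform or analysis): add a weaker, delayed copy of a chirp to the original, and the sum still sweeps upward in frequency but its amplitude envelope turns uneven where the two copies interfere. The 50-millisecond delay and the 0.7 relative amplitude are arbitrary choices.

import numpy as np

fs = 4096                            # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)        # one second of signal

def chirp(time, f0=30.0, k=200.0):
    """Toy chirp whose instantaneous frequency sweeps upward as f0 + k*time."""
    phase = 2 * np.pi * (f0 * time + 0.5 * k * time ** 2)
    return np.sin(phase)

delay = 0.05                                      # the lensed copy arrives 50 ms later
direct = chirp(t)
lensed = 0.7 * chirp(t - delay) * (t >= delay)    # weaker, delayed copy of the same chirp
combined = direct + lensed

# The combined signal still climbs in frequency, but its envelope is modulated
# wherever the two copies interfere constructively or destructively.
print("peak amplitude, direct wave only:", np.max(np.abs(direct)))
print("peak amplitude, direct + lensed: ", np.max(np.abs(combined)))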

A statistical analysis published in early 2018 (in a preprint paper) claimed that LIGO should be able to detect gravitationally lensed gravitational waves at the rate of about once per year (and the proposed Einstein Telescope, at about 80 per year!). A peer-reviewed paper published in January 2019 suggested that LIGO’s design specs allow it to detect lensing effects due to a black hole weighing 10 to 100,000 times as much as the Sun.

Just like ‘direct’ gravitational waves give away some information about their sources, lensed gravitational waves should also give something away about the objects that deflected them. So if we become able to use LIGO, and/or other gravitational wave detectors of the future, to detect gravitationally lensed gravitational waves, we will have the potential to learn even more about the universe’s inhabitants than gravitational-wave astronomy currently allows us to.

Thanks to inputs from Madhusudhan Raman, @ntavish, @alsogoesbyV and @vaa3.

The science in Netflix’s ‘Spectral’

I watched Spectral, the movie that released on Netflix on December 9, 2016, after Universal Studios got cold feet about releasing it on the big screen – the same place where a previous offering, Warcraft, had been gutted. Spectral is sci-fi and has a few great moments but mostly it’s bland and begging for some tabasco. The premise: an elite group of American soldiers deployed in Moldova come upon some belligerent ghost-like creatures in a city they’re fighting in. They’ve no clue how to stop them, so they fly in an engineer from DARPA to consult – the same guy who built the goggles that detected the creatures in the first place. Together, they do things. Now, I’d like to talk about the science in the film and not the plot itself, though the former feeds the latter.

SPOILERS AHEAD

A scene from the film ‘Spectral’ (2016). Source: Netflix

Towards the middle of the movie, the engineer realises that the ghost-like creatures have the same limitations as – wait for it – a Bose-Einstein condensate (BEC). They can pass through walls but not ceramic or heavy metal (not the music), they rapidly freeze objects in their path, and conventional weapons, typically projectiles of some kind, can’t stop them. Frankly, it’s fabulous that Ian Fried, the film’s writer, thought to use creatures made of BECs as villains.

A BEC is an exotic state of matter in which a group of ultra-cold particles condense into a superfluid (i.e., it flows without viscosity). Once a BEC forms, a part of it can’t be removed without breaking the whole BEC state down. You’d think this makes the BEC especially fragile – because it’s susceptible to so many ‘liabilities’ – but it’s the exact opposite. In a BEC, the energy required to ‘kick’ a single particle out of its special state is equal to the energy that’s required to ‘kick’ all the particles out, making BECs as a whole that much more durable.

This property is apparently beneficial for the creatures of Spectral, and that’s where the similarity ends, because BECs have other properties that are inimical to the portrayal of the creatures. Two immediately came to mind: first, BECs are attainable only at ultra-cold temperatures; and second, the creatures can’t be seen by the naked eye but are revealed by UV light. There’s a third, relevant property, which we’ll come to later: that BECs have to be composed of bosons or bosonic particles.

It’s not clear why Spectral‘s creatures are visible only when exposed to light of a certain kind. Clyne, the DARPA engineer, says in a scene, “If I can turn it inside out, by reversing the polarity of some of the components, I might be able to turn it from a camera [that, he earlier says, is one that “projects the right wavelength of UV light”] into a searchlight. We’ll [then] be able to see them with our own eyes.” However, the documented ability of BECs to slow down light to a great extent (5.7-million times more than lead can, in certain conditions) should make them appear extremely opaque. More specifically, while a BEC can be created that is transparent to a very narrow range of frequencies of electromagnetic radiation, it will, on the flipside, stonewall all frequencies outside of this range. That the BECs in Spectral are opaque to a single frequency and transparent to all others is weird.

Obviating the need for special filters or torches to be able to see the creatures simplifies Spectral by removing one entire layer of complexity. However, it would remove the need for the DARPA engineer also, who comes up with the hyperspectral camera and, its inside-out version, the “right wavelength of UV” searchlight. Additionally, the complexity serves another purpose. Ahead of the climax, Clyne builds an energy-discharging gun whose plasma-bullets of heat can rip through the BECs (fair enough). This tech is also slightly futuristic. If the sci-fi/futurism of the rest of Spectral leading up to that moment (when he invents the gun) was absent, then the second-half of the movie would’ve become way more sci-fi than the first-half, effectively leaving Spectral split between two genres: sci-fi and wtf. Thus the need for the “right wavelength of UV” condition?

Now, to the third property. Not all particles can be used to make BECs. Its two predictors, Satyendra Nath Bose and Albert Einstein, were working (on paper) with the kind of particles since called bosons. In nature, bosons are force-carriers, as opposed to the matter-making particles called fermions. A more technical distinction between them is that the behaviour of bosons is explained using Bose-Einstein statistics while the behaviour of fermions is explained using Fermi-Dirac statistics. And only Bose-Einstein statistics predicts the existence of states of matter called condensates; Fermi-Dirac statistics does not.

(Aside: Clyne, when explaining what BECs are in Spectral, says its predictors are “Nath Bose and Albert Einstein”. Both ‘Nath’ and ‘Bose’ are surnames in India, so “Nath Bose” is both anyone and no one at all. Ugh. Another thing is I’ve never heard anyone refer to S.N. Bose as “Nath Bose”, only ‘Satyendranath Bose’ or, simply, ‘Satyen Bose’. Why do Clyne/Fried stick to “Nath Bose”? Was “Satyendra” too hard to pronounce?)

All particles constitute a certain amount of energy, which under some circumstances can increase or decrease. However, the increments of energy in which this happens are well-defined and fixed (hence the ‘quantum’ of quantum mechanics). So, for an oversimplified example, a particle can be said to occupy energy levels of 2, 4 or 6 units but never of 1, 2.5 or 3 units. Now, when a very-low-density collection of bosons is cooled to an ultra-cold temperature (a few hundredths of a kelvin or cooler), the bosons increasingly prefer occupying fewer and fewer energy levels. At one point, they will all occupy a single, common level – crowding into one state in a way that particles bound by the exclusion principle (fermions) never could. (In technical parlance, the wavefunctions of all the bosons will merge.)
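
For a sense of how dramatic this pile-up is, here is a small calculation using the textbook result for an ideal, non-interacting Bose gas; it is an idealisation, not a description of any particular condensate. Below a critical temperature Tc, the fraction of particles sitting in the single lowest level grows as 1 − (T/Tc)^1.5.

# Toy calculation for an ideal (non-interacting) Bose gas:
# below the critical temperature Tc, the fraction of particles in the single
# lowest energy level is 1 - (T/Tc)**1.5; above Tc it is essentially zero.
def condensate_fraction(T, Tc):
    return 1 - (T / Tc) ** 1.5 if T < Tc else 0.0

Tc = 1e-6    # suppose the critical temperature is 1 microkelvin
for T in (2e-6, 1e-6, 5e-7, 1e-7):
    print(f"T = {T:.1e} K  ->  fraction in the lowest level = {condensate_fraction(T, Tc):.2f}")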

When this condition is achieved, a BEC will have been formed. And in this condition, even if a new boson is added to the condensate, it will be forced into occupying the same level as every other boson in the condensate. This condition is also off-limits for all fermions – except in very special circumstances, circumstances whose exceptionalism perhaps makes way for Spectral‘s more fantastic condensate-creatures. We know one such circumstance as superconductivity.

In a superconducting material, electrons flow without any resistance whatsoever at very low temperatures. The most widely applied theory of superconductivity interprets this flow as that of a superfluid, and the ‘sea’ of electrons flowing as such to be a BEC. However, electrons are fermions. To overcome this barrier, Leon Cooper proposed in 1956 that the electrons didn’t form a condensate straight away but that there was an intervening state called a Cooper pair. A Cooper pair is a pair of electrons that have become bound, overcoming their like-charge repulsion thanks to the vibrations of the atoms of the superconducting metal surrounding them. The electrons in a Cooper pair also can’t easily quit their embrace because, once they become bound, their total energy as a pair is lower than what they would have separately, and breaking the pair means supplying that energy difference.

Could Spectral‘s creatures have represented such superconducting states of matter? It’s definitely science fiction because it’s not too far beyond the bounds of what we know about BEC today (at least in terms of a concept). And in being science fiction, Spectral assumes the liberty to make certain leaps of reasoning – one being, for example, how a BEC-creature is able to ram against an M1 Abrams and still not dissipate. Or how a BEC-creature is able to sit on an electric transformer without blowing up. I get that these in fact are the sort of liberties a sci-fi script is indeed allowed to take, so there’s little point harping on them. However, that Clyne figured the creatures ought to be BECs prompted way more disbelief than anything else because BECs are in the here and the now – and they haven’t been known to behave anything like the creatures in Spectral do.

For some, this information might even help decide if a movie is sci-fi or fantasy. To me, it’s sci-fi.

SPOILERS END

On the more imaginative side of things, Spectral also dwells for a bit on how these creatures might have been created in the first place and how they’re conscious. Any answers to these questions, I’m pretty sure, would be closer to fantasy than to sci-fi. For example, I wonder how the computing capabilities of a very large neural network seen at the end of the movie (not a spoiler, trust me) were available to the creatures wirelessly, or where the power source was that the soldiers were actually after. Spectral does try to skip the whys and hows by having Clyne declare, “I guess science doesn’t have the answer to everything” – but you’re just going “No shit, Sherlock.”

His character is, as this Verge review puts it, exemplarily shallow while the movie never suggests before the climax that science might indeed have all the answers. In fact, the movie as such, throughout its 108 minutes, wasn’t that great for me; it doesn’t ever live up to its billing as a “supernatural Black Hawk Down“. You think about BHD and you remember it being so emotional – Spectral has none of that. It was just obviously more fun to think about the implications of its antagonists being modelled after a phenomenon I’ve often read/written about but never thought about that way.

Parsing Ajay Sharma v. E = mc²

Featured image credit: saulotrento/Deviantart, CC BY-SA 3.0.

To quote John Cutter (Michael Caine) from The Prestige:

Every magic trick consists of three parts, or acts. The first part is called the pledge, the magician shows you something ordinary. The second act is called the turn, the magician takes the ordinary something and makes it into something extraordinary. But you wouldn’t clap yet, because making something disappear isn’t enough. You have to bring it back. Now you’re looking for the secret. But you won’t find it because of course, you’re not really looking. You don’t really want to work it out. You want to be fooled.

The Pledge

Ajay Sharma is an assistant director of education with the Himachal Pradesh government. On January 10, the Indo-Asian News Service (IANS) published an article in which Sharma claims Albert Einstein’s famous equation E = mc² is “illogical” (republished by The Hindu, Yahoo! News and Gizmodo India, among others). The precise articulation of Sharma’s issue with it is unclear because the IANS article contains multiple unqualified statements:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

Sharma also claims Einstein’s work wasn’t original and only ripped off Galileo, Henri Poincaré, Hendrik Lorentz, Joseph Larmor and George FitzGerald.

The Turn

Let’s get some things straight.

Mass-energy equivalence – E = mc² isn’t wrong, but it’s often overlooked that it’s an approximation. This is the full equation:

E² = m₀²c⁴ + p²c²

(Notice the similarity to the Pythagoras theorem?)

Here, m₀ is the mass of the object (say, a particle) when it’s not moving, p is its momentum (calculated as mass times velocity, m*v) and c, the speed of light. When the particle is not moving, v is zero, so p is zero, and so the right-most term in the equation can be removed. This yields:

E² = m₀²c⁴ ⇒ E = m₀c²

If a particle were moving close to the speed of light, applying just E = m₀c² would be wrong without the rapidly growing p²c² term. In fact, the equivalence remains applicable in its most famous form only in cases where an observer is co-moving with the particle. So there is not so much a mass-energy equivalence as a mass-energy-momentum equivalence.
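
As a quick numerical illustration of how much the momentum term matters (my own check with textbook constants, not anything from Sharma’s argument or Einstein’s paper): for an electron moving at half the speed of light, the full relation gives an energy about 15% higher than the rest-mass term alone.

import math

c = 299_792_458.0        # speed of light, m/s
m0 = 9.109e-31           # electron rest mass, kg
v = 0.5 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
p = gamma * m0 * v                                      # relativistic momentum

E_full = math.sqrt((m0 * c ** 2) ** 2 + (p * c) ** 2)   # E^2 = m0^2*c^4 + p^2*c^2
E_rest = m0 * c ** 2                                    # the familiar E = mc^2 (rest frame)

print(f"E (full relation):  {E_full:.3e} J")
print(f"E (rest term only): {E_rest:.3e} J")
print(f"ratio: {E_full / E_rest:.3f}")                  # ~1.155, the Lorentz factor at 0.5c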

And at the time of publishing this equation, Einstein was aware that it was held up by multiple approximations. As Terence Tao sets out, these would include (but not be limited to) p being equal to mv at low velocities, the laws of physics being the same in two frames of reference moving at uniform velocities, Planck’s and de Broglie’s laws holding, etc.

These approximations are actually inherited from Einstein’s special theory of relativity, which describes the connection between space and time. In a paper dated September 27, 1905, Einstein concluded that if “a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“. ‘L’ was simply the notation for energy that Einstein used until 1912, when he switched to the more common ‘E’.

The basis of his conclusion was a thought experiment he detailed in the paper, in which a point particle emits “plane waves of light” in opposite directions, first while at rest and then while in motion. He then calculates the difference in the body’s kinetic energy before and after it emits the light, accounting for the energy carried away by the radiation:

K₀ – K₁ = (1/2) (L/c²) v²

This is what Sharma is referring to when he says, “Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.” Well… sure. Einstein’s was a gedanken (thought) experiment to illustrate a direct consequence of the special theory. How he chose to frame the problem depended on what connection he wanted to illustrate between the various attributes at play.

And the more attributes are included in the experiment, the more connections will arise. Whether or not they’d be meaningful (i.e. able to represent a physical reality – such as being able to say “if a body gives off the energy L in the form of radiation, its mass diminishes by L/c²“) is a separate question.

As for another of Sharma’s claims – that the equivalence is “only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity”: Einstein’s theory of relativity is the best framework of mathematical rules we have to describe all these parameters together. So any gedanken experiment involving just these parameters can be properly analysed, to the best of our knowledge, with Einstein’s theory, and within that theory – and as a consequence of that theory – the mass-energy-momentum equivalence will persist. This implication was demonstrated by the famous Cockcroft-Walton experiment in 1932.

General theory of relativity – Einstein’s road to publishing his general theory (which turned 100 last year) was littered with multiple challenges to its primacy. This is not surprising because Einstein’s principal accomplishment was not in having invented something but in having recombined and interpreted a trail of disjointed theoretical and experimental discoveries into a coherent, meaningful and testable theory of gravitation.

As mentioned earlier, Sharma claims Einstein ripped off Galileo, Poincaré, Lorentz, Larmor and FitzGerald. For what it’s worth, he could also have mentioned William Kingdon Clifford, Georg Bernhard Riemann, Tullio Levi-Civita, Gregorio Ricci-Curbastro, János Bolyai, Nikolai Lobachevsky, David Hilbert, Hermann Minkowski and Fritz Hasenöhrl. Here are their achievements in the context of Einstein’s (in a list that’s by no means exhaustive).

  • 1632, Galileo Galilei – Published a book, one of whose chapters features a dialogue about the relative motion of planetary bodies and the role of gravity in regulating their motion
  • 1824-1832, Bolyai and Lobachevsky – Conceived of hyperbolic geometry (which doesn’t follow Euclidean rules like the sum of a triangle’s angles being 180º), which inspired Riemann and his mentor to consider whether there was a kind of geometry to explain the behaviour of shapes in four dimensions (as opposed to three)
  • 1854, G. Bernhard Riemann – Conceived of elliptic geometry and a way to compare vectors in four dimensions, ideas that would benefit Einstein immensely because they helped him discover that gravity wasn’t a force in space-time but actually the curvature of space-time
  • 1876, William K. Clifford – Suggested that the forces that shape matter’s motion in space could be guided by the geometry of space, foreshadowing Einstein’s idea that matter influences gravity influences matter
  • 1887-1902, FitzGerald and Lorentz – Showed that observers in different frames of reference that are moving at different velocities can measure the length of a common body to differing values, an idea then called the FitzGerald-Lorentz contraction hypothesis. Lorentz’s mathematical description of this gave rise to a set of formulae called Lorentz transformations, which Einstein later derived through his special theory.
  • 1897-1900, Joseph Larmor – Realised that observers in different frames of reference that are moving at different velocities can also measure different times for the same event, leading to the time dilation hypothesis that Einstein later explained
  • 1898, Henri Poincaré – Interpreted Lorentz’s abstract idea of a “local time” to have physical meaning – giving rise to the idea of relative time in physics – and was among the first physicists to speculate on the need for a consistent theory to explain the consequences of light having a constant speed
  • 1900, Levi-Civita and Ricci-Curbastro – Built on Riemann’s ideas of a non-Euclidean geometry to develop tensor calculus (a tensor generalises scalars and vectors to objects with multiple components in higher dimensions). Einstein’s field equations for gravity, which capped his formulation of the celebrated general theory of relativity, would feature the Ricci tensor to account for how space-time’s geometry departs from flat, Euclidean geometry.
  • 1904-1905, Fritz Hasenöhrl – Built on the work of Oliver Heaviside, Wilhelm Wien, Max Abraham and John H. Poynting to devise a thought experiment from which he was able to conclude that heat has mass, a primitive precursor of the mass-energy-momentum equivalence
  • 1907, Hermann Minkowski – Conceived a unified mathematical description of space and time that Einstein could use to better express his special theory, and said of it: “From this hour on, space by itself, and time by itself, shall be doomed to fade away in the shadows, and only a kind of union of the two shall preserve an independent reality.”
  • 1915, David Hilbert – Derived the general theory’s field equations a few days before Einstein did but managed to have his paper published only after Einstein’s was, leading to an unresolved dispute about who should take credit. However, the argument was made moot because only Einstein could explain how Isaac Newton’s laws of classical mechanics fit into the theory – Hilbert couldn’t.

FitzGerald, Lorentz, Larmor and Poincaré all laboured assuming that space was filled with a ‘luminiferous ether’. The ether was a pervasive, hypothetical yet undetectable substance that physicists of the time believed had to exist so electromagnetic radiation had a medium to travel in. Einstein’s theories provided a basis for their ideas to exist without the ether, and as a consequence of the geometry of space.

So, Sharma’s allegation that Einstein republished the work of other people in his own name is misguided. Einstein didn’t plagiarise. And while there are many accounts of his competitive nature, to the point of insisting that a mathematician who helped him formulate the general theory wouldn’t later lay partial claim to it, there’s no doubt that he did come up with something distinctively original in the end.

The Prestige

Ajay Sharma with two of his books. Source: Fundamental Physics Society (Facebook page)

To recap:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Claims that Einstein’s equations are inadequate are difficult to back up because we’re yet to find circumstances in which they seem to fail. Theoretically, they can be made to appear to fail by forcing them to account for, say, higher dimensions, but that’s like wearing suede shoes in the rain and then complaining when they’re ruined. There’s a time and a place to use them. Moreover, the failure of general relativity or quantum physics to meet each other halfway (in a quantum theory of gravity) can’t be pinned on a supposed inadequacy of the mass-energy equivalence alone.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

That a gedanken experiment was limited in scope is a pointless accusation. Einstein was simply showing that A implied B, and was never interested in proving that A’ (a different version of A) did not imply B. And tying all of this to the adequacy (or not) of E = mc2 leads equally nowhere.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

From the literature, the change appears to be one of notation. If not that, then Sharma could be challenging the notion that the energy of a moving body is equal to the sum of its energy at rest and its kinetic energy – which is what lets Einstein substitute L (or E) for the kinetic-energy term once the rest energy E₀ is added in: E = E₀ + K. In which case Sharma’s challenge is even more ludicrous for calling one of the basic tenets of physics “illogical” without indicating why.
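
For the record, and assuming only the standard special-relativistic definitions (nothing specific to Sharma’s paper or to Einstein’s 1905 notation), the relation being invoked is:

```latex
% Total energy of a body with rest mass m_0 moving at speed v,
% with Lorentz factor \gamma = (1 - v^2/c^2)^{-1/2}:
E \;=\; \gamma m_0 c^2
  \;=\; \underbrace{m_0 c^2}_{E_0\ (\text{rest energy})}
  \;+\; \underbrace{(\gamma - 1)\,m_0 c^2}_{K\ (\text{kinetic energy})}
\qquad\Longrightarrow\qquad E = E_0 + K
```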

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

The “the” before “new results” is the worrying bit: it points to claims of his that have already been made, and suggests they’re contrary to what Einstein has claimed. It’s not that the German is immune to refutation – no one is – but that whatever claim this is seems to be at the heart of what’s at best an awkwardly worded outburst, and which IANS has unquestioningly reproduced.

A persistent search for Sharma’s paper on the web didn’t turn up any results – the closest I got was in unearthing its title (#237) in a list of titles published at a conference hosted by a ‘Russian Gravitational Society’ in May 2015. Sharma’s affiliation is mentioned as a ‘Fundamental Physics Society’ – which in turn shows up as a Facebook page run by Sharma. But an ibnlive.com article from around the same time provides some insight into Sharma’s ‘research’ (translated from the Hindi by Siddharth Varadarajan):

In this way, Ajay is also challenging the great scientist of the 21st century (sic) Albert Einstein. After deep research into his formula, E=mc2, he says that “when a candle burns, its mass reduces and light and energy are released”. According to Ajay, Einstein obtained this equation under special circumstances. This means that from any matter/thing, only two rays of light emerge. The intensity of light of both rays is the same and they emerge from opposite directions. Ajay says Einstein’s research paper was published in 1905 in the German research journal [Annalen der Physik] without the opinion of experts. Ajay claims that if this equation is interpreted under all circumstances, then you will get wrong results. Ajay says that if a candle is burning, its mass should increase. Ajay says his research paper has been published after peer review. [Emphasis added.]

A pattern underlying some of Sharma’s claims has to do with confusing conjecturing and speculating (even perfectly reasonably) with formulating and defining and proving. The most telling example in this context is alleging that Einstein ripped off Galileo: even if they both touched on relative motion in their research, what Galileo did for relativity was vastly different from what Einstein did. In fact, following the Indian Science Congress in 2015, V. Vinay, an adjunct faculty member at the Chennai Mathematical Institute and teacher in Bengaluru, had pointed out that these differences encapsulated the epistemological attitudes of the Indian and Greek civilisations: the TL;DR version is that we weren’t a proof-seeking people.

Swinging back to the mass-energy equivalence itself – it’s a notable piece but a piece nonetheless of an expansive theory that’s demonstrably incomplete. And there are other theories like it, like flotsam on a dark ocean whose waters we haven’t been able to see, theories we’re struggling to piece together. It’s a time when Popper’s philosophies haven’t been able to qualify or disqualify ‘discoveries’, a time when the subjective evaluation of an idea’s usefulness seems just as important as objectively testing it. But despite the grand philosophical challenges these times face us with, extraordinary claims still do require extraordinary evidence. And on that count, Ajay Sharma quickly fails.

Hat-tip to @AstroBwouy, @ainvvy and @hosnimk.

The Wire
January 12, 2016

Feeling the pulse of the space-time continuum

The Copernican
April 17, 2014

Haaaaaave you met PSR B1913+16? The first three letters of its name indicate it’s a pulsating radio source, an object in the universe that gives off energy as radio waves at very specific periods. More commonly, such sources are known as pulsars, a portmanteau of pulsating stars.

When heavy stars run out of nuclear fuel to burn, they undergo a series of processes that sees them stripped of their once-splendid upper layers, leaving behind a core of matter called a neutron star. It is extremely dense, extremely hot, and spinning very fast. When it emits electromagnetic radiation in flashes, it is called a pulsar. PSR B1913+16 is one such pulsar, discovered in 1974, located in the constellation Aquila some 21,000 light-years from Earth.

Finding PSR B1913+16 earned its discoverers the Nobel Prize for physics in 1993 because this was no ordinary pulsar: it was the first of its kind to be discovered, a pulsar in a binary system. It is locked in an epic pirouette with a nearby neutron star, the two spinning around each other with the orbit’s total diameter spanning one to five times that of our Sun.

Losing energy but how?

The discoverers were Americans Russell Alan Hulse and Joseph Hooton Taylor, Jr., of the University of Massachusetts Amherst, and their prize-winning discovery didn’t culminate with just spotting the binary pulsar that has come to be named after them. Further, they found that the pulsar’s orbit was shrinking, meaning the system as a whole was losing energy. They found that they could also predict the rate at which the orbit was shrinking using the general theory of relativity.

In other words, PSR B1913+16 was losing energy as gravitational energy while providing a direct (natural) experiment to verify Albert Einstein’s monumental theory from a century ago. (That a human was able to intuit how two neutron stars orbiting each other trillions of miles away could lose energy is a testament to the uniformity of the laws of physics. Through the vast darkness of space, we can strip away with our minds any strangeness of its farthest reaches because what is available on a speck of blue is what is available there, too.)

While gravitational energy, and gravitational waves with it, might seem like an esoteric concept, it is easily intuited as the gravitational analogue of electromagnetic energy (and electromagnetic waves). Electromagnetism and gravitation are the two most accessible of the four fundamental forces of nature. When a system of charged particles accelerates, it lets off electromagnetic energy and so becomes less energetic over time. Similarly, when a system of massive objects accelerates, it lets off gravitational energy… right?

“Yeah. Think of mass as charge,” says Tarun Souradeep, a professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, India. “Electromagnetic waves come with two charges that can make up a dipole. But the conservation of momentum prevents gravitational radiation from having dipoles.”

According to Albert Einstein and his general theory of relativity, gravitation is a force born due to the curvature, or roundedness, of the space-time continuum: space-time bends around massive objects (an effect very noticeable during gravitational lensing). When massive objects accelerate through the continuum, they set off waves in it that travel at the speed of light. These are called gravitational waves.

“The efficiency of energy conversion – from the bodies into gravitational waves – is very high,” Prof. Souradeep clarifies. “But they’re difficult to detect because they don’t interact with matter.”

Albie’s still got it

In 2004, Joseph Taylor, Jr., and Joel Weisberg published a paper analysing 30 years of observations of PSR B1913+16, and found that general relativity was able to explain the rate of orbit contraction within an error of 0.2 per cent. Should you argue that the binary system could be losing its energy in many different ways, the fact that the general theory of relativity explains the contraction so accurately means the theory is involved – and with it, gravitational waves.
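
For a sense of how sharp that prediction is, here’s a rough sketch (emphatically not Taylor and Weisberg’s analysis) that plugs commonly quoted parameters for PSR B1913+16 into the standard quadrupole-order (Peters–Mathews) formula for the decay of the orbital period:

```python
import math

# Physical constants (SI)
G = 6.674e-11           # gravitational constant
c = 2.998e8             # speed of light
M_SUN = 1.989e30        # solar mass, kg

# Approximate parameters for PSR B1913+16 (commonly quoted values; treat as illustrative)
m_p = 1.4398 * M_SUN    # pulsar mass
m_c = 1.3886 * M_SUN    # companion mass
P_b = 27906.98          # orbital period, seconds (about 7.75 hours)
e = 0.6171              # orbital eccentricity

def orbital_period_decay(m_p, m_c, P_b, e):
    """Quadrupole-order general-relativistic rate of change of the orbital period, dP_b/dt."""
    eccentricity_factor = (1 + (73/24) * e**2 + (37/96) * e**4) / (1 - e**2) ** 3.5
    return (-(192 * math.pi / 5)
            * (G ** (5/3) / c**5)
            * (P_b / (2 * math.pi)) ** (-5/3)
            * eccentricity_factor
            * m_p * m_c / (m_p + m_c) ** (1/3))

print(f"Predicted dP_b/dt ≈ {orbital_period_decay(m_p, m_c, P_b, e):.2e} s/s")
# Prints roughly -2.4e-12 seconds per second, i.e. the orbital period shortens by
# about 76 microseconds a year, which is the rate the pulsar-timing data bear out.
```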

Prof. Souradeep says, “According to Newtonian gravity, the gravitational pull of the Sun on Earth was instantaneous action at a distance. But now we know light takes eight minutes to come from the Sun to Earth, which means the star’s gravitational pull must also take eight minutes to affect Earth. This is why we have causality, with gravitational waves in a radiative mode.”

And this is proof that the waves exist, at least in theory. They provide a simple, coherent explanation for a well-defined problem – like a hole in a giant jigsaw puzzle that we know only a certain kind of piece can fill. The fundamental particles called neutrinos were discovered through a similar process.

These particles, like gravitational waves, hardly interact with matter and are tenaciously elusive. Their existence was predicted by the physicist Wolfgang Pauli in 1930. He needed such a particle to explain how the heavier neutron could decay into the lighter proton, with the remaining energy carried away by an electron and an antineutrino. And the team that first observed neutrinos in an experiment, in 1956, did find them under these circumstances.

Waiting for a direct detection

On March 17, radio-astronomers from the Harvard-Smithsonian Centre for Astrophysics (CfA) announced a more recent finding that points to the existence of gravitational waves, albeit in a more powerful and ancient avatar. Using a telescope called BICEP2 located at the South Pole, they found the waves’ unique signature imprinted on the cosmic microwave background, a dim field of energy leftover from the Big Bang and visible to this day.

At the time, Chao-Lin Kuo, a co-leader of the BICEP2 collaboration, had said, “We have made the first direct image of gravitational waves, or ripples in space-time across the primordial sky, and verified a theory about the creation of the whole universe.”

Spotting the waves themselves, directly, in our human form is impossible. This is why the CfA discovery and the orbital characteristics of PSR B1913+16 are as direct detections as they get. In fact, when one concise theory explains actions and events in varied settings, that is a good reason to surmise that the entities it invokes – here, gravitational waves – really exist.

For instance, there is another experiment whose sole purpose has been to find gravitational waves, using lasers. Its name is LIGO (Laser Interferometer Gravitational-wave Observatory). Its first phase operated from 2002 to 2010, and found no conclusive evidence of gravitational waves to report. Its second phase is due to start this year, in 2014, in an advanced form. On April 16, the LIGO collaboration put out a 20-minute documentary titled Passion for Understanding, about the “raw enthusiasm and excitement of those scientists and researchers who have dedicated their professional careers to this immense undertaking”.

The laser pendula

LIGO works like a pendulum to try and detect gravitational waves. With a pendulum, there is a suspended bob that goes back and forth between two points with a constant rhythm. Now, imagine there are two pendulums swinging parallel to each other but slightly out of phase, between two parallel lines 1 and 2. So when pendulum A reaches line 1, pendulum B hasn’t got there just yet, but it will soon enough.

When gravitational waves, comprising peaks and valleys of gravitational energy, surf through the space-time continuum, they induce corresponding crests and troughs that distort the metric of space and the passage of time in that area. When the two super-dense neutron stars that comprise PSR B1913+16 move around each other, they must be letting off gravitational waves in a similar manner, too.

When such a wave passes through the area where we are performing our pendulum experiment, it is likely to distort the pendulums’ arrival times at lines 1 and 2. Such a delay can be observed and recorded by sensitive instruments.

Analogously, LIGO uses beams of light generated by a laser at one point to bounce back and forth between mirrors for some time, and reconvene at a point. And instead of relying on the relatively clumsy mechanisms of swinging pendulums, scientists leverage the wave properties of light to make the measurement of a delay more precise.

At the beach, you’ll remember having seen waves forming in the distance, building up in height as they reach shallower depths, and then crashing in a spray of water on the shore. You might also have seen waves becoming bigger by combining. That is, when the crests of waves combine, they form a much bigger crest; when a crest and a trough combine, the effect is to cancel each other. (Of course this is an exaggeration. Matters are far less exact and pronounced on the beach.)

Similarly, the waves of laser light in LIGO are tuned such that, in the absence of a gravitational wave, what reaches the detector at the interferometer’s output is a crest and a trough together, cancelling each other out and leaving no signal. In the presence of a gravitational wave, a crest is likely to arrive with another crest instead, leaving behind a signal.
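
Here’s a minimal sketch of that idea for an idealised Michelson-style interferometer held on a ‘dark fringe’ (the numbers are illustrative; real LIGO optics add many more tricks, such as recycling the light to boost the effect):

```python
import math

wavelength = 1.064e-6      # laser wavelength in metres (the Nd:YAG line LIGO uses)
arm_length = 4000.0        # arm length in metres (LIGO's arms are 4 km long)

def output_power_fraction(delta_L):
    """Fraction of the input laser power reaching the output port of an idealised
    Michelson interferometer held on a dark fringe, for an arm-length difference delta_L."""
    return math.sin(2 * math.pi * delta_L / wavelength) ** 2

# With equal arms the crests and troughs cancel: no light, no signal.
print(output_power_fraction(0.0))                     # -> 0.0

# A gravitational wave of strain h changes the arm-length difference by roughly h * L.
h = 1e-21                                             # a typical target strain
delta_L = h * arm_length                              # roughly 4e-18 m
print(f"delta_L ≈ {delta_L:.1e} m, "
      f"output fraction ≈ {output_power_fraction(delta_L):.1e}")
```

The output stays dark until the arm lengths differ, which is exactly the crest-meets-crest signal described above.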

A blind spot

In an eight-year hunt for this signal, LIGO hasn’t found it. However, this isn’t the end because, like all waves, gravitational waves should also have a frequency, and it can be anywhere in a ginormous band if theoretical physicists are to be believed (and they are to be): between 10⁻⁷ and 10¹¹ hertz. LIGO will help humankind figure out which frequency ranges can be ruled out.

In 2014, the observatory will also reawaken after four years of being dormant and receiving upgrades to improve its sensitivity and accuracy. According to Prof. Souradeep, the latter now stands at 10⁻²⁰ m. One more way in which LIGO is being equipped to find gravitational waves is by creating a network of LIGO detectors around Earth. There are already two in the US, one in Europe, and one in Japan (although the Japanese LIGO uses a different technique).

But though the network improves our ability to detect gravitational waves, it presents another problem. “These detectors are on a single plane, making them blind to a few hundred degrees of the sky,” Prof. Souradeep says. This means the detectors will experience the effects of a gravitational wave but if it originated from a blind spot, they won’t be able to get a fix on its source: “It will be like trying to find MH370!” Fortunately, since 2010, there have been many ways proposed to solve this problem, and work on some of them is under way.

One of them is called eLISA, for Evolved Laser Interferometer Space Antenna. It will attempt to detect and measure gravitational waves by monitoring the locations of three spacecraft arranged in an equilateral triangle moving in a Sun-centric orbit. eLISA is expected to be launched only two decades from now, although a proof-of-concept mission has been planned by the European Space Agency for 2015.

Another solution is to install a LIGO detector on ground and outside the plane of the other three – such as in India. According to Prof. Souradeep, LIGO-India will reduce the size of the blind spot to a few tens of degrees – an order of magnitude improvement. The country’s Planning Commission has given its go-ahead for the project as a ‘mega-science project’ in the 12th Five Year Plan, and the Department of Atomic Energy, which is spearheading the project, has submitted a note to the Union Cabinet for approval. With the general elections going on in the country, physicists will have to wait until at least June or July to expect to get this final clearance.

Once cleared, of course, it will prove a big step forward not just for the Indian scientific community but also for the global one, marking the next big step – and possibly a more definitive one – in a journey that started with a strange pulsar 21,000 light-years away. As we get better at studying these waves, we have access to a universe visible not just in visible light, radio-waves, X-rays or neutrinos but also through its gravitational susurration – like feeling the pulse of the space-time continuum itself.

The non-Nobel for Satyen Bose

Photo: The Hindu
Satyen Bose

Last week, as the Nobel Prizes were announced and Peter Higgs and Francois Englert won the highly coveted physics prize, dust was kicked up in India – just as it was in July and then in September 2012 – about how Satyendra Nath Bose had been ‘ignored’. S.N. Bose, in the 1920s, was responsible for formulating the Bose-Einstein statistics with Albert Einstein. These statistics described the physical laws that governed the class of particles that have come to be known, in honour of Bose’s work, as bosons.

The charge that S.N. Bose had been ignored, on the other hand, was profoundly baseless – something only a few in the country realised. Just because Bose had worked with bosons, many Indians, among them many academicians, felt he ought to have been remembered for his contribution. Only, as they conveniently chose to forget, his contribution to the Nobel Prize for physics 2013 was tenuous and, at best, of historical value. I blogged about this for The Copernican science blog on The Hindu, and then wrote an OpEd along the same lines.

From the response I received, however, it seems as if the message is still lost on those who continue to believe Bose is now the poster-scientist for all Indian scientists whose contributions have been ignored by award-committees worldwide. Do we so strongly feel that post-colonial sting of entitlement?