Exploring what it means to be big

Reading a Nature report titled ‘Step aside CERN: There’s a cheaper way to break open physics’ (January 10, 2018) brought to mind something G. Rajasekaran, former head of the Institute of Mathematical Sciences, Chennai, told me once: that the future – as the Nature report also touts – belongs to tabletop particle accelerators.

Rajaji (as he is known) said he believed so because of the simple realisation that particle accelerators could only get so big before they’d have to get much, much bigger to tell us anything more. On the other hand, tabletop setups based on laser wakefield acceleration, which could accelerate electrons to higher energies across just a few centimetres, would allow us to perform slightly different experiments such that their outcomes will guide future research.

The question of size is an interesting one (and almost personal: I’m 6’4” tall and somewhat heavy, which means I’ve to start by moving away from seeming intimidating in almost all new relationships). For most of history, humans’ ideas of better included something becoming bigger. From what I can see – which isn’t really much – the impetus for this is founded in five things:

1. The laws of classical physics: They are, and were, multiplicative. To do more or to do better (which for a long time meant doing more), the laws had to be summoned in larger magnitudes and in more locations. This has been true from the machines of industrialisation to scientific instruments to various modes of construction and transportation. Some laws also foster inverse relationships that straightforwardly encourage devices to be bigger to be better.

2. Capitalism, rather commerce in general: Notwithstanding social necessities, bigger often implied better the same way a sphere of volume 4 units has a smaller surface area than four spheres of volume 1 unit each. So if your expenditure is pegged to the surface area – and it often is – then it’s better to pack 400 people on one airplane instead of flying four airplanes with 100 people in each (a quick calculation appears after this list).

3. Sense of self: A sense of our own size and place in the universe, as seemingly diminutive creatures living their lives out under the perennial gaze of the vast heavens. From such a point of view, a show of power and authority would obviously have meant transcending the limitations of our dimensions and demonstrating to others that we’re capable of devising ‘ultrastructures’ that magnify our will, to take us places we only thought the gods could go and achieve simultaneity of effect only the gods could achieve. (And, of course, for heads of state to swing longer dicks at each other.)

4. Politics: Engineers building a tabletop detector and engineers building a detector weighing 50,000 tonnes will obviously run into different kinds of obstacles. Moreover, big things are easier to stake claims over, to discuss, dispute or dislodge. They affect more people even before they have produced their first results.

5. Natural advantages: An example that comes immediately to mind is social networks – not Facebook or Twitter but the offline ones that define cultures and civilisations. Such networks afford people an extra degree of adaptability and improve chances of survival by allowing people to access resources (including information/knowledge) that originated elsewhere. This can be as simple as a barter system where people exchange food for gold, or as complex as a bashful Tamilian staving off alienation in California by relying on the support of the Tamil community there.

(The inevitable sixth impetus is tradition. For example, the equation of bigness with growth has given it pride of place in business culture, so much so that many managers I’ve met wanted to set up bigger media houses even when it might have been more appropriate to go smaller.)
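To put the sphere comparison from the second item in numbers – a quick check using the standard surface-area formula for a sphere, nothing more:

```latex
% Surface area of a sphere of volume V (in consistent units):
A(V) = (36\pi)^{1/3}\,V^{2/3}
\quad\Rightarrow\quad
A(4) = (36\pi)^{1/3}\cdot 4^{2/3} \approx 12.2,
\qquad
4\,A(1) = 4\,(36\pi)^{1/3} \approx 19.3 .
```

One sphere holding the same total volume exposes roughly 37 per cent less surface than the four smaller ones do.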

Against this backdrop of impetuses working together, Ed Yong’s I Contain Multitudes – a book about how our biological experience of reality is mediated by microbes – becomes a saga of reconciliation with a world much smaller, not bigger, yet more consequential. To me, that’s an idea as unintuitive as, say, being able to engineer materials with fantastical properties by sporadically introducing contaminants into their atomic lattice. It’s the sort of smallness whose individual parts amount to very close to nothing, whose sum amounts to something, but the human experience of which is simply monumental.

And when we find that such smallness is able to move mountains, so to speak, it disrupts our conception of what it means to be big. This is as true of microbes as it is of quantum mechanics, as true of elementary particles as it is of nano-electromechanical systems. This is one of the more understated revolutions that happened in the 20th century: the decoupling of bigger and better, a sort of virtualisation of betterment that separated it from additive scale and led to the proliferation of ‘trons’.

I like to imagine that what gave us tabletop accelerators also gave us containerised software and a pan-industrial trend towards personalisation – although this would be philosophy, not history, because it’s a trend we compose in hindsight. But in the same vein, both hardware (to run software) and accelerators first became big, riding on the back of the classical and additive laws of physics, then hit some sort of technological upper limit (imposed by finite funds and logistical limitations) and then bounced back down when humankind developed tools to manipulate nature at the mesoscopic scale.

Of course, some would also argue that tabletop particle accelerators wouldn’t be possible, or deemed necessary, if the city-sized ones didn’t exist first, that it was the failure of the big ones that drove the development of the small ones. And they would argue right. But as I said, that’d be history; it’s the philosophy that seems more interesting here.

Feeling the pulse of the space-time continuum

The Copernican
April 17, 2014

Haaaaaave you met PSR B1913+16? The first three letters of its name indicate it’s a pulsating radio source, an object in the universe that gives off energy as radio waves at very specific periods. More commonly, such sources are known as pulsars, a portmanteau of ‘pulsating star’.

When heavy stars run out of hydrogen to fuse into helium, they undergo a series of processes that sees them stripped of their once-splendid upper layers, leaving behind a core of matter called a neutron star. It is extremely dense, extremely hot, and spinning very fast. When it emits electromagnetic radiation in flashes, it is called a pulsar. PSR B1913+16 is one such pulsar, discovered in 1974, located in the constellation Aquila some 21,000 light-years from Earth.

Finding PSR B1913+16 earned its discoverers the Nobel Prize for physics in 1993 because this was no ordinary pulsar: it was the first of its kind to be discovered – a pulsar in a binary system. It is locked in an epic pirouette with a nearby neutron star, the two spinning around each other in an orbit whose total diameter spans one to five times that of our Sun.

Losing energy but how?

The discoverers were Americans Russell Alan Hulse and Joseph Hooton Taylor, Jr., of the University of Massachusetts Amherst, and their prize-winning discovery didn’t end with just spotting the binary pulsar that has come to be named after them. They further found that the pulsar’s orbit was shrinking, meaning the system as a whole was losing energy – and that they could predict the rate at which the orbit was shrinking using the general theory of relativity.

In other words, PSR B1913+16 was losing energy as gravitational radiation while providing a direct (natural) experiment to verify Albert Einstein’s monumental theory from a century ago. (That a human was able to intuit how two neutron stars orbiting each other trillions of miles away could lose energy is a testament to the uniformity of the laws of physics. Through the vast darkness of space, we can strip away with our minds any strangeness of its farthest reaches because what is available on a speck of blue is what is available there, too.)

While gravitational energy, and gravitational waves with it, might seem like an esoteric concept, it is easily intuited as the gravitational analogue of electromagnetic energy (and electromagnetic waves). Electromagnetism and gravitation are the two most accessible of the four fundamental forces of nature. When a system of charged particles accelerates, it lets off electromagnetic energy and so becomes less energetic over time. Similarly, when a system of massive objects accelerates, it lets off gravitational energy… right?

“Yeah. Think of mass as charge,” says Tarun Souradeep, a professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, India. “Electromagnetic waves come with two charges that can make up a dipole. But the conservation of momentum prevents gravitational radiation from having dipoles.”
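A quick way to see the point about dipoles – standard textbook reasoning, not something from the interview – is that the would-be ‘gravitational dipole’ of a system is its mass dipole moment, whose rate of change is the total momentum, a conserved quantity. Dipole radiation therefore vanishes, and the leading-order emission is quadrupolar:

```latex
% Mass dipole of a system of masses m_i at positions x_i:
\mathbf{d} = \sum_i m_i \mathbf{x}_i, \qquad
\dot{\mathbf{d}} = \sum_i m_i \dot{\mathbf{x}}_i = \mathbf{P}\ \text{(total momentum, conserved)}
\;\Rightarrow\; \ddot{\mathbf{d}} = 0 .

% Leading-order radiated power is set by the traceless mass quadrupole Q_{ij}
% (Einstein's quadrupole formula):
P_{\mathrm{GW}} = \frac{G}{5c^5}\,\left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle .
```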

According to Albert Einstein and his general theory of relativity, gravitation is a force born due to the curvature, or roundedness, of the space-time continuum: space-time bends around massive objects (an effect very noticeable during gravitational lensing). When massive objects accelerate through the continuum, they set off waves in it that travel at the speed of light. These are called gravitational waves.

“The efficiency of energy conversion – from the bodies into gravitational waves – is very high,” Prof. Souradeep clarifies. “But they’re difficult to detect because they don’t interact with matter.”

Albie’s still got it

In 2004, Joseph Taylor, Jr., and Joel Weisberg published a paper analysing 30 years of observations of PSR B1913+16, and found that general relativity predicted the rate of orbital contraction to within 0.2 per cent of the observed value. Should you argue that the binary system could be losing its energy in many different ways, the fact that general relativity explains the rate so accurately means the theory is involved – in the form of gravitational waves.
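For the curious, the leading-order general-relativistic prediction comes from the quadrupole-emission (Peters) formula for the decay of a binary’s orbital period. Below is a minimal sketch in Python – the rounded masses, period and eccentricity are approximate published values I’m assuming, not figures from this article, and the real analysis involves far more careful pulsar timing:

```python
import math

# GR (quadrupole/Peters) prediction for the orbital-period decay of a binary,
# applied to PSR B1913+16 with approximate published parameters.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

m_p = 1.44 * M_sun     # pulsar mass (approximate)
m_c = 1.39 * M_sun     # companion mass (approximate)
P_b = 7.75 * 3600.0    # orbital period, roughly 7.75 hours, in seconds
e = 0.617              # orbital eccentricity

def pdot_gr(m1, m2, P, ecc):
    """Rate of change of the orbital period due to gravitational-wave emission."""
    enhancement = 1 + (73 / 24) * ecc**2 + (37 / 96) * ecc**4   # eccentricity factor
    return (-(192 * math.pi * G**(5 / 3)) / (5 * c**5)
            * (P / (2 * math.pi))**(-5 / 3)
            * (1 - ecc**2)**(-3.5)
            * enhancement
            * m1 * m2 / (m1 + m2)**(1 / 3))

print(pdot_gr(m_p, m_c, P_b, e))   # about -2.4e-12 seconds per second
```

That number – the orbital period shortening by a couple of picoseconds every second – is the kind of rate the observations matched to within 0.2 per cent.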

Prof. Souradeep says, “According to Newtonian gravity, the gravitational pull of the Sun on Earth was instantaneous action at a distance. But now we know light takes eight minutes to come from the Sun to Earth, which means the star’s gravitational pull must also take eight minutes to affect Earth. This is why we have causality, with gravitational waves in a radiative mode.”

And this is proof that the waves exist, at least in theory. They provide a simple, coherent explanation for a well-defined problem – like a hole in a giant jigsaw puzzle that we know only a certain kind of piece can fill. The fundamental particles called neutrinos were discovered through a similar process.

These particles, like gravitational waves, hardly interact with matter and are tenaciously elusive. Their existence was predicted by the physicist Wolfgang Pauli in 1930. He needed such a particle to explain how the heavier neutron could decay into the lighter proton, the remaining mass (or energy) being carried away by an electron and an antineutrino. And the team that first observed neutrinos in an experiment, in 1956, did find them under these circumstances.
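A quick back-of-the-envelope, using standard rest masses rather than any figures from this article, shows how the bookkeeping works:

```latex
% Neutron beta decay: n -> p + e^- + \bar{\nu}_e
% Rest masses in MeV/c^2 (rounded):
m_n \approx 939.57, \quad m_p \approx 938.27, \quad m_e \approx 0.51
\;\Rightarrow\;
Q = (m_n - m_p - m_e)\,c^2 \approx 0.78\ \text{MeV}.
```

The electron is observed with a continuous spread of energies below this limit, so something unseen – the antineutrino – must carry off the remainder.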

Waiting for a direct detection

On March 17, radio-astronomers from the Harvard-Smithsonian Centre for Astrophysics (CfA) announced a more recent finding that points to the existence of gravitational waves, albeit in a more powerful and ancient avatar. Using a telescope called BICEP2 located at the South Pole, they found the waves’ unique signature imprinted on the cosmic microwave background, a dim field of energy left over from the Big Bang and visible to this day.

At the time, Chao-Lin Kuo, a co-leader of the BICEP2 collaboration, had said, “We have made the first direct image of gravitational waves, or ripples in space-time across the primordial sky, and verified a theory about the creation of the whole universe.”

Spotting the waves themselves, directly, with our unaided senses is impossible. This is why the CfA discovery and the orbital characteristics of PSR B1913+16 are about as direct as detections get. In fact, when one concise theory can explain actions and events in such varied settings, that is a good reason to surmise the theory could be right.

For instance, there is another experiment whose sole purpose has been to find gravitational waves, using lasers. Its name is LIGO (Laser Interferometer Gravitational-wave Observatory). Its first phase operated from 2002 to 2010, and found no conclusive evidence of gravitational waves to report. Its second phase is due to start this year, in 2014, in an advanced form. On April 16, the LIGO collaboration put out a 20-minute documentary titled Passion for Understanding, about the “raw enthusiasm and excitement of those scientists and researchers who have dedicated their professional careers to this immense undertaking”.

The laser pendula

LIGO works something like a pair of pendulums to try and detect gravitational waves. With a pendulum, there is a suspended bob that goes back and forth between two points with a constant rhythm. Now, imagine there are two pendulums swinging parallel to each other but slightly out of phase, between two parallel lines 1 and 2. So when pendulum A reaches line 1, pendulum B hasn’t got there just yet, but it will soon enough.

When gravitational waves, comprising peaks and valleys of gravitational energy, surf through the space-time continuum, they induce corresponding crests and troughs that distort the metric of space and the passage of time in that region. When the two super-dense neutron stars that make up PSR B1913+16 move around each other, they must be letting off gravitational waves in a similar manner, too.

When such a wave passes through the area where we are performing our pendulum experiment, it is likely to distort the pendulums’ arrival times at lines 1 and 2. Such a delay can be observed and recorded by sensitive instruments.

Analogously, LIGO uses beams of light generated by a laser to bounce back and forth between mirrors for some time before reconvening at a detector. And instead of relying on the relatively clumsy mechanics of swinging pendulums, scientists leverage the wave properties of light to make the measurement of a delay more precise.

At the beach, you’ll remember having seen waves forming in the distance, building up in height as they reach shallower depths, and then crashing in a spray of water on the shore. You might also have seen waves becoming bigger by combining. That is, when the crests of waves combine, they form a much bigger crest; when a crest and a trough combine, the effect is to cancel each other. (Of course this is an exaggeration. Matters are far less exact and pronounced on the beach.)

Similarly, the waves of laser light in LIGO are tuned such that, in the absence of a gravitational wave, what reaches the detector – an interferometer – is one crest and one trough, cancelling each other out and leaving no signal. In the presence of a gravitational wave, there is likely to be one crest and another crest, too, leaving behind a signal.
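The cancellation can be put in rough numbers. Here is a minimal sketch of a Michelson-style readout – my own illustration, not LIGO’s actual signal chain, and it ignores the instrument’s Fabry–Pérot cavities, power recycling and noise – showing how a tiny differential change in arm length lets light through an otherwise dark output port:

```python
import numpy as np

# Two beams split from one laser travel down perpendicular arms, reflect, and
# recombine. With equal arm lengths they cancel at the 'dark' port; a small
# differential length change delta_L (as a passing gravitational wave would
# cause) produces a small but nonzero output.

wavelength = 1064e-9   # metres; the infrared Nd:YAG wavelength LIGO uses
arm_length = 4e3       # metres; LIGO's arms are 4 km long
input_power = 1.0      # normalised laser power

def output_power(delta_L):
    """Power at the dark port for a differential arm-length change delta_L."""
    # Round-trip path difference is 2*delta_L, i.e. a phase difference of
    # 4*pi*delta_L/wavelength between the recombining beams.
    phase = 4 * np.pi * delta_L / wavelength
    return input_power * np.sin(phase / 2) ** 2

print(output_power(0.0))               # perfect cancellation: 0.0
strain = 1e-21                         # an illustrative gravitational-wave strain
delta_L = strain * arm_length          # ~4e-18 m change in arm length
print(output_power(delta_L))           # tiny, but not zero
```

The point of the exercise is how absurdly small the effect is – which is why the real instrument folds the light back and forth many times and fights every source of noise it can.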

A blind spot

In an eight-year hunt for this signal, LIGO hasn’t found it. However, this isn’t the end because, like all waves, gravitational waves should also have a frequency, and it can be anywhere in a ginormous band if theoretical physicists are to be believed (and they are to be): between 10⁻⁷ and 10¹¹ hertz. LIGO will help humankind figure out which frequency ranges can be ruled out.

In 2014, the observatory will also reawaken after four years of dormancy, having received upgrades to improve its sensitivity and accuracy. According to Prof. Souradeep, the latter now stands at 10⁻²⁰ m. One more way in which LIGO is being equipped to find gravitational waves is by creating a network of LIGO detectors around Earth. There are already two in the US, one in Europe, and one in Japan (although the Japanese LIGO uses a different technique).

But though the network improves our ability to detect gravitational waves, it presents another problem. “These detectors are on a single plane, making them blind to a few hundred degrees of the sky,” Prof. Souradeep says. This means the detectors will experience the effects of a gravitational wave but if it originated from a blind spot, they won’t be able to get a fix on its source: “It will be like trying to find MH370!” Fortunately, since 2010, there have been many ways proposed to solve this problem, and work on some of them is under way.

One of them is called eLISA, for Evolved Laser Interferometer Space Antenna. It will attempt to detect and measure gravitational waves by monitoring the locations of three spacecraft arranged in an equilateral triangle moving in a Sun-centric orbit. eLISA is expected to be launched only two decades from now, although a proof-of-concept mission has been planned by the European Space Agency for 2015.

Another solution is to install a LIGO detector on the ground but outside the plane of the others – such as in India. According to Prof. Souradeep, LIGO-India will reduce the size of the blind spot to a few tens of degrees – an order of magnitude improvement. The country’s Planning Commission has given its go-ahead for the project as a ‘mega-science project’ in the 12th Five Year Plan, and the Department of Atomic Energy, which is spearheading the project, has submitted a note to the Union Cabinet for approval. With the general elections going on in the country, physicists will have to wait until at least June or July for this final clearance.

Once cleared, of course, it will prove a big step forward not just for the Indian scientific community but also for the global one, marking the next big step – and possibly a more definitive one – in a journey that started with a strange pulsar 21,000 light-years away. As we get better at studying these waves, we have access to a universe visible not just in visible light, radio-waves, X-rays or neutrinos but also through its gravitational susurration – like feeling the pulse of the space-time continuum itself.

A closet of hidden phenomena

Science has rarely been counter-intuitive to our understanding of reality, and its elegant rationalism at every step of the way has been reassuring. This is why Bell’s theorem has been one of the strangest concepts of reality scientists have come across: it is hardly intuitive, hardly rational, and hardly reassuring.

To someone interested in the bigger picture, the theorem is the line before which quantum mechanics ends and after which classical mechanics begins. It’s the line in the sand between the Max Planck and the Albert Einstein weltanschauungen.

Einstein, and many others before him, worked with gravity, finding a way to explain the macrocosm and its large-scale dance of birth and destruction. Planck, and many others after him, have helped describe the world of the atom and its innards using extremely small packets of energy called particles, swimming around in a pool of exotic forces.

At the nexus of a crisis

Over time, however, as physicists studied the work of both men and of others, it started to become clear that the two fields were mutually exclusive, never coming together to apply to the same idea. At this tenuous nexus, the Northern Irish physicist John Stewart Bell cleared his throat.

Bell’s theorem states, in simple terms, that if the predictions of quantum mechanics are correct – everywhere and always – then either locality or realism must be untrue; the two cannot both hold. Locality is the idea that instantaneous or superluminal communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed – like the moon in the morning.

The paradox is obvious. Classical mechanics is, in principle, applicable everywhere, even to subatomic particles mere femtometres across. That it doesn’t appear to be is only because its dominant player, the gravitational force, is overshadowed at those scales by other, stronger forces. Quantum mechanics, on the other hand, is not so straightforward with its offering. It could be applied in the macroscopic world – but the theory has trouble dealing with gravity and with the strong nuclear force, which accounts for most of the mass of ordinary matter.

This means if quantum mechanics is to have a smooth transition at some scale into a classical reality… it can’t. At that scale, one of locality or realism must snap back to life. This is why confronting the idea that one of them isn’t true is unsettling. They are both fundamental hypotheses of physics.
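What ‘violating Bell’s inequality’ looks like can be made concrete with the CHSH form of the inequality – the variant most experiments actually test, and my illustration rather than anything from this post. Local realism caps the quantity S at 2; quantum mechanics, for two entangled spins measured at suitable angles, predicts 2√2:

```python
import math

# CHSH form of Bell's inequality: any local-realistic theory demands |S| <= 2.
# For two spin-1/2 particles in the singlet state, quantum mechanics predicts
# the correlation E(a, b) = -cos(a - b) for detector angles a and b.

def E(a, b):
    """Quantum-mechanical correlation for the singlet state at angles a, b (radians)."""
    return -math.cos(a - b)

# The standard choice of angles that maximises the violation.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))   # ~2.828, i.e. 2*sqrt(2), beyond the local-realistic bound of 2
```

Every experiment described below is, at bottom, an attempt to measure a quantity like S cleanly enough to trust which side of the bound it falls on.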

The newcomer

A few days ago, I found a paper on arXiv titled Violation of Bell’s inequality in fluid mechanics (May 28, 2013). Its abstract stated that “… a classical fluid mechanical system can violate Bell’s inequality because the fluid motion is correlated over very large distances”. Given that Bell stands between Planck’s individuated notion of quantum mechanics and Einstein’s waltz-like continuum of the cosmos, it was intriguing to see scientists attempting to describe a quantum mechanical phenomenon in a classical system.

The correlation that the paper’s authors talk about implies fluid flow in one region of space-time is somehow correlated with fluid flow in another region of space-time. This is a violation of locality. However, fluid mechanics has been, and still is, a purely classical affair: its behaviour can be traced to Newton’s ideas from the 17th century. This means all flow events are – rather, have to be – decidedly real and local.

To make their point, the authors turn to the mathematical equations modelling fluid flow conceived by Leonhard Euler in the 18th century, and to how those equations explain vortices – regions of a fluid where the flow is mostly a spinning motion about an imaginary axis.

Assigning fictitious particles to different parts of the equation, the scientists demonstrate how the particles in one region of flow could continuously and instantaneously affect particles in another region of fluid flow. In quantum mechanics, this phenomenon is called entanglement. It has no classical counterpart because it violates the principle of locality.

Coincidental correlation

However, there is nothing quantum about fluid flow, much less about Euler’s equations. Then again, if the paper is right, would that mean flowing fluids are a quantum mechanical system? Occam’s razor comes to the rescue: Because fluid flow is classical but still shows signs of nonlocality, there is a possibility that purely local interactions could explain quantum mechanical phenomena.

Think about it. A purely classical system also shows signs of quantum mechanical behaviour. This meant that some phenomena in the fluid could be explained by both classical and quantum mechanical models, i.e. the two models correspond.

There is a stumbling block, however. Occam’s razor only provides evidence of a classical solution for nonlocality, not a direct correspondence between micro- and macroscopic physics. In other words, it could easily be a post hoc ergo propter hoc inference: Because nonlocality came after application of local mathematics, local mathematics must have caused nonlocality.

“Not quite,” said Robert Brady, one of the authors on the paper. “Bell’s hypothesis is often said to be about ‘locality’, and so it is common to say that quantum mechanical systems are ‘nonlocal’ because Bell’s hypothesis does not apply to them. If you choose this description, then fluid mechanics is also ‘non-local’, since Bell’s hypothesis does not apply to them either.”

“However, in fluid mechanics it is usual to look at this from a different angle, since Bell’s hypothesis would not be thought reasonable in that field.”

Brady’s clarification brings up an important point: Even though the lines don’t exactly blur between the two domains, knowing – more than simply choosing – where to apply which model makes a large difference. If you misstep, classical fluid flow could pass for quantum fluid flow simply because it displays some pseudo-effects.

In fact, experiments to test Bell’s hypothesis have been riddled with such small yet nagging stumbling blocks. Even if a suitable domain of applicability has been chosen, an efficient experiment has to be designed that fully exploits the domain’s properties to arrive at a conclusion – and this has proved very difficult. Inspired by the purely theoretical EPR paradox put forth in 1935, Bell stated his theorem in 1964. It is now 2013 and no experiment has been able to test it conclusively.

Three musketeers

The three most prevalent problems such experiments face are called the failure of rotational invariance, the no-communication loophole, and the fair sampling assumption.

In any Bell experiment, two particles are allowed to interact in some way – such as being born from the same source – and are then separated across a large distance. Scientists then measure the particles’ properties using detectors. This happens again and again until any patterns among paired particles can be found or ruled out.

Whatever properties the scientists are going to measure, the different values that that property can take must be equally likely. For example, if I have a bag filled with 200 blue balls, 300 red balls and 100 yellow balls, I shouldn’t think something quantum mechanical was at play if one in two balls pulled out was red. That’s just probability at work. And when probability can’t be completely excluded from the results, it’s called a failure of rotational invariance.

For the experiment to measure only the particles’ properties, the detectors must not be allowed to communicate with each other. If they were allowed to communicate, scientists wouldn’t know if a detection arose due to the particles or due to glitches in the detectors. Unfortunately, in a perfect setup, the detectors wouldn’t communicate at all and be decidedly local – putting them in no position to reveal any violation of locality! This problem is called the no-communication loophole.

The final problem – fair sampling – is a statistical issue. If an experiment involves 1,000 pairs of particles, and if only 800 pairs have been picked up by the detector and studied, the experiment cannot be counted as successful. Why? Because results from the other 200 could have distorted the results had they been picked up. There is a chance. Thus, the detectors would have to be 100 per cent efficient in a successful experiment.

In fact, the example was a gross exaggeration: detectors are only 5-30 per cent efficient.

One (step) at a time

Resolution of the no-communication problem came in 1998 from scientists in Austria, who also closed the rotational-invariance loophole. The fair sampling assumption was resolved by a team of scientists from the USA in 2001, one of whom was David Wineland, physics Nobel laureate in 2012. However, they used only two ions to make the measurements. A more thorough experiment’s results were announced just last month.

Researchers from the Institute for Quantum Optics and Quantum Communication, Austria, had used detectors called transition-edge sensors that could pick up individual photons for detection with 98 per cent efficiency. These sensors were developed by the National Institute of Standards and Technology, Maryland, USA. In keeping with tradition, the experiment admitted the no-communication loophole.

Unfortunately, for an experiment to be a successful Bell-experiment, it must get rid of all three problems at the same time. This hasn’t been possible to date, which is why a conclusive Bell’s test, and the key to quantum mechanics’ closet of hidden phenomena, eludes us. It is as if nature uses one loophole or the other to deceive the experimenters.*

The silver lining is that the photon has become the first particle for which all three loopholes have been closed, albeit in different experiments. We’re probably getting there, loopholes relenting. The reward, of course, could be the greatest of all: We will finally know if nature is described by quantum mechanics, with its deceptive trove of exotic phenomena, or by classical mechanics and general relativity, with its reassuring embrace of locality and realism.

(*In 1974, John Clauser and Michael Horne found a curious workaround for the fair-sampling problem that they realised could be used to look for new physics. They called this the no-enhancement problem. They had calculated that if some method was found to amplify the photons’ signals in the experiment and circumvent the low detection efficiency, the method would also become a part of the result. Therefore, if the result came out that quantum mechanics was nonlocal, then the method would be a nonlocal entity. So, using different methods, scientists distinguish between previously unknown local and nonlocal processes.)

This article, as written by me, originally appeared in The Hindu’s The Copernican science blog on June 15, 2013.
