The calculus of creative discipline

Every moment of a science fiction story must represent the triumph of writing over world-building. World-building is dull. World-building literalises the urge to invent. World-building gives an unnecessary permission for acts of writing (indeed, for acts of reading). World-building numbs the reader’s ability to fulfil their part of the bargain, because it believes that it has to do everything around here if anything is going to get done. Above all, world-building is not technically necessary. It is the great clomping foot of nerdism.

Once I’m awake and have had my mug of tea, and once I’m done checking Twitter, I can quote these words of M. John Harrison from memory: not because they’re true – I don’t believe they are – but because they rankle. I haven’t read any of Harrison’s writing, and I can’t remember the names of any of his books. Sometimes I don’t even remember his name, only that there was this man who uttered these words. Perhaps it is to Harrison’s credit that he’s clearly touched a nerve, but I’m reluctant to concede any more than this.

His (partial) quote reflects a narrow view of a wider world, and it bothers me because I remain unable to extend the conviction that he’s seeing only a part of the picture to the conclusion that he lacks imagination; as a writer of not inconsiderable repute, at least according to Wikipedia, I doubt he has any trouble imagining things.

I’ve written about the virtues of world-building before (notably here), and I intend to make another attempt in this post; I should mention that what both attempts, both defences, have in common is that they’re not prescriptive. They’re not recommendations to others; they’re non-generalisable. They’re my personal reasons to champion the act, even art, of world-building; my specific loci of resistance to Harrison’s contention. But at the same time, I don’t view them – and neither should you – as inviolable or as immune to criticism, although I suspect this display of a willingness to reason won’t go far towards eliminating the subjective from this exercise, so make of it what you will.

There’s an idea in mathematical analysis called smoothness. Let’s say you’ve got a curve drawn on a graph, against the x- and y-axes, shaped like the letter ‘S’. Let’s say you’ve got another curve drawn on a second graph, shaped like the letter ‘Z’. According to one definition, the S-curve is smoother than the Z-curve because it has fewer sharp edges. A diligent high-schooler might take recourse to differential calculus to explain the idea. Say the Z-curve on the graph is the result of a function Z(x) = y. At the point on the x-axis where the Z-curve makes a sharp turn, the derivative Z'(x) breaks down: it doesn’t have a well-defined value there. Points where the derivative is zero or undefined are called critical points. The S-curve doesn’t have any critical points (except at the ends, but let’s ignore them); L- and T-curves have one critical point each; P- and D-curves have two critical points each; and an E-curve has three critical points.
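For the record, here is the textbook definition the analogy leans on, with the absolute-value function standing in for a ‘sharp turn’ (a minimal sketch; the letter-curves above are informal shapes, not actual functions):

```latex
\[
x_0 \text{ is a critical point of } f \iff f'(x_0) = 0 \ \text{ or } \ f'(x_0) \text{ does not exist.}
\]
\[
\text{Example: } f(x) = |x|, \qquad
f'(x) = \begin{cases} -1 & x < 0 \\ +1 & x > 0, \end{cases}
\]
\[
\text{so } f'(0) \text{ is undefined and the corner at } x = 0 \text{ is a critical point.}
\]
```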

With the help of a loose analogy, you could say a well-written story is smooth à la an S-curve (excluding the terminal points): it has an unambiguous beginning and an ending, and it flows smoothly in between the two. While I admire Steven Erikson’s Malazan Book of the Fallen series for many reasons, its first instalment is like a T-curve, where three broad plot-lines abruptly end at a point in the climax that the reader has been given no reason to expect. The curves of the first three books of J.K. Rowling’s Harry Potter series resemble the tangent function (from trigonometry: tan(x) = sin(x)/cos(x)): they’re individually somewhat self-consistent, but the reader is resigned to the hope that their beginnings and endings must be connected at infinity.

You could even say Donald Trump’s presidency hasn’t been smooth at all because there have been so many critical points.

Where world-building “literalises the urge to invent” to Harrison, it spatialises the narrative to me, and automatically spotlights the importance of the narrative smoothness it harbours. World-building can be just as susceptible to non sequiturs and deus ex machina moments as writing itself, all the way to the hubris Harrison noticed, of assuming it gives the reader anything to do, even enjoy themselves. Where he sees the “clomping foot of nerdism”, I see critical points in a curve some clumsy world-builder invented as they went along. World-building can be “dull” – or it can choose to reveal the hand-prints of a cave-dwelling people preserved for thousands of years, and the now-dry channels of once-heaving rivers that nurtured an ancient civilisation.

My principal objection to Harrison’s view is directed at the false dichotomy of writing and world-building, which he seems to want to impose in place of the more fundamental and more consequential need for creative discipline. Let me borrow here from philosophy of science 101, specifically the particular importance of contending with contradictory experimental results. You’ve probably heard of the replication crisis: when researchers tried to reproduce the results of older psychology studies, their efforts came a cropper. Many – if not most – studies didn’t replicate, and scientists are currently grappling with the consequences of overturning decades’ worth of research and research practices.

This is, on the face of it, an important reality check, but to a philosopher with a deeper view of the history of science, the replication crisis also recalls the different ways in which the practitioners of science have responded to evidence their theories aren’t prepared to accommodate. The stories of Niels Bohr v. classical mechanics, Dan Shechtman v. Linus Pauling, and the EPR paradox come first to mind. Heck, the philosophers Karl Popper, Thomas Kuhn, Imre Lakatos and Paul Feyerabend are known for their criticisms of each other’s ideas on different ways to rationalise the transition from a moment containing multiple answers to the moment where one emerges as the favourite.

In much the same way, the disciplined writer should challenge themselves instead of presuming the liberty to totter over the landscape of possibilities, zig-zagging from one critical point to the next until they topple over the edge. And if they can’t, they should – like the practitioners of good science – ask for help from others, pressing the conflict between competing results into the service of scouring the rust away to expose the metal.

For example, since June this year I’ve been participating, at my friend Thomas Manuel’s initiative, in his effort to compose an underwater ‘monsters’ manual’. It’s effectively a collaborative world-building exercise in which we take turns to populate different parts of a large planet – its sizeable oceans, seas, lakes and numerous rivers – with creatures, habitats and ecosystems. We broadly follow the same laws of physics and harbour substantially overlapping views of magic, but we enjoy the things we invent because they’re forced through the grinding wheels of each other’s doubts and curiosities, and the implicit expectation of one creator to make adequate room for the creations of the other.

I see it as the intersection of two functions: at first, their curves will criss-cross at a point, and the writers must then fashion a blending curve so a particle moving along one can switch to the other without any abruptness, without any of the tired melodrama often used to mask criticality. So the Kularu people are reminded by their oral traditions to fight for their rivers, so the archaeologists see through the invading Gezmin’s benevolence and into the heart of their imperialist ambitions.
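Here is a minimal sketch, in Python, of what such a blending curve might look like, assuming the two plot-lines can be caricatured as functions f and g of a common variable (the functions and the hand-over interval below are hypothetical, chosen only for illustration):

```python
import numpy as np

def smooth_blend(f, g, x0, x1):
    """Blend curve f into curve g over [x0, x1] so the result matches f's value and
    slope at x0 and g's at x1 -- no sharp corners, hence no new critical points."""
    def h(x):
        t = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
        s = 3 * t**2 - 2 * t**3      # smoothstep: s(0) = 0, s(1) = 1, s'(0) = s'(1) = 0
        return (1 - s) * f(x) + s * g(x)
    return h

# Hand one plot-line (a steady rise) over to another (an oscillation) between x = 2 and x = 4.
f = lambda x: 0.5 * x
g = lambda x: np.sin(x)
h = smooth_blend(f, g, 2.0, 4.0)
print(h(1.0), h(3.0), h(5.0))        # follows f before the joint, g after it
```

Because the smoothstep weight has zero slope at both ends, the blended curve picks up f’s slope at one end and g’s at the other – the switch happens without any abruptness.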

Disentangling entanglement

There has been considerable speculation about whether the winners of this year’s Nobel Prize for physics, due to be announced at 2.30 pm IST on October 8, will include Alain Aspect and Anton Zeilinger. They’ve both made significant experimental contributions related to quantum information theory and the fundamental nature of quantum mechanics, including entanglement.

Their work, at least the potentially prize-winning part of it, is centred on a class of experiments called Bell tests. If you perform a Bell test, you’re essentially checking the extent to which the rules of quantum mechanics are compatible with the rules of classical physics.

Whether or not Aspect, Zeilinger and/or others win a Nobel Prize this year, what they have achieved is worth putting in words. Of course, many other writers, authors and scientists have already done so; I’d like to redo it, if only because writing helps commit things to memory, and because the various performers of Bell tests are likely to win some prominent prize sooner or later, given how modern technologies like quantum cryptography are inflating the importance of their work – and when they do, I’ll have ready reference material.

(There is yet another reason Aspect and Zeilinger could win a Nobel Prize. As with the medicine prizes, many of whose laureates previously won a Lasker Award, many of the physics laureates have previously won the Wolf Prize. And Aspect and Zeilinger jointly won the Wolf Prize for physics in 2010 along with John Clauser.)

The following elucidation is divided into two parts: principles and tests. My principal sources are Wikipedia, some physics magazines, Quantum Physics for Poets by Leon Lederman and Christopher Hill (2011), and a textbook of quantum mechanics by John L. Powell and Bernd Crasemann (1998).

§

Principles

From the late 1920s, Albert Einstein began to publicly express his discomfort with the emerging theory of quantum mechanics. He claimed that a quantum mechanical description of reality allowed “spooky” things that the rules of classical mechanics, including his theories of relativity, forbade. He further contended that classical mechanics and quantum mechanics couldn’t both be true at the same time and that there had to be a deeper theory of reality with its own, thus-far hidden variables.

Remember the Schrödinger’s cat thought experiment: place a cat in a box with a flask of poison rigged to break if a radioactive atom decays, and close the lid; until you open the box to make an observation, the cat may be considered to be both alive and dead. Erwin Schrödinger came up with this example to ridicule the implications of Niels Bohr’s and Werner Heisenberg’s idea that the quantum state of a subatomic particle, like an electron, was described by a mathematical object called the wave function.

The wave function has many unique properties. One of these is superposition: the ability of an object to exist in multiple states at once. Another is the collapse of the wave function (which isn’t a property as much as a phenomenon common to many quantum systems, and is closely related to decoherence): when you observe the object, it probabilistically collapses into one fixed state.

Imagine having a box full of billiard balls, each of which is both blue and green at the same time. But the moment you open the box to look, each ball decides to become either blue or green. This (metaphor) is on the face of it a kooky description of reality. Einstein definitely wasn’t happy with it; he believed that quantum mechanics was just a theory of what we thought we knew and that there was a deeper theory of reality that didn’t offer such absurd explanations.
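A minimal sketch of that metaphor in Python, assuming an equal superposition of ‘blue’ and ‘green’ and using the Born rule (probability = amplitude squared) for the collapse:

```python
import numpy as np

rng = np.random.default_rng()

# A 'ball' in an equal superposition of blue and green: each amplitude is 1/sqrt(2),
# so the Born rule assigns a 50:50 chance to each colour.
amplitudes = {"blue": 1 / np.sqrt(2), "green": 1 / np.sqrt(2)}
probabilities = {colour: abs(amp) ** 2 for colour, amp in amplitudes.items()}

def open_the_box():
    """'Observing' the ball: the superposition collapses probabilistically into one colour."""
    return str(rng.choice(list(probabilities), p=list(probabilities.values())))

print([open_the_box() for _ in range(10)])   # e.g. ['green', 'blue', 'blue', ...]
```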

In 1935, Einstein, Boris Podolsky and Nathan Rosen advanced a thought experiment based on these ideas that seemed to yield ridiculous results, in a deliberate effort to provoke their ‘opponents’ to reconsider their ideas. Say there’s a heavy particle with zero spin – a property of elementary particles – inside a box in Bangalore. At some point, it decays into two smaller particles. One of these ought to have a spin of 1/2 and the other of -1/2 to abide by the conservation of spin. You send one of these particles to your friend in Chennai and the other to a friend in Mumbai. Until these people observe their respective particles, the latter are to be considered to be in a mixed state – a superposition. In the final step, your friend in Chennai observes the particle to measure a spin of -1/2. This immediately implies that the particle sent to Mumbai should have a spin of 1/2.

If you’d performed this experiment with two billiard balls instead, one blue and one green, the person in Bangalore would’ve known which ball went to which friend. But in the Einstein-Podolsky-Rosen (EPR) thought experiment, the person in Bangalore couldn’t have known which particle was sent to which city, only that each particle existed in a superposition of two states, spin 1/2 and spin -1/2. This situation was unacceptable to Einstein because it was inimical to certain assumptions on which the theories of relativity were founded.

The moment the friend in Chennai observed her particle to have spin -1/2, the one in Mumbai would have known without measuring her particle that it had a spin of 1/2. If it didn’t, the conservation of spin would be violated. If it did, then the wave function of the Mumbai particle would have collapsed to a spin 1/2 state the moment the wave function of the Chennai particle had collapsed to a spin -1/2 state, indicating faster-than-light communication between the particles. Either way, quantum mechanics could not produce a sensible outcome.
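Here is the bookkeeping of the thought experiment as a toy Python sketch (the city names follow the example above). Note that it captures only the perfect anticorrelation, which a ‘billiard ball’ picture with pre-assigned spins explains just as well; the Bell-test sketch further down is where the quantum and classical pictures part ways:

```python
import random

def epr_pair():
    """A spin-zero particle decays into two particles whose spins must add to zero.
    This sketch models only the bookkeeping of the correlation, not the physics of
    superposition: which value goes where is decided by a coin toss."""
    spin_chennai = random.choice([+0.5, -0.5])
    spin_mumbai = -spin_chennai        # conservation of spin
    return spin_chennai, spin_mumbai

chennai, mumbai = epr_pair()
print(f"Chennai measures {chennai:+}, so Mumbai must find {mumbai:+}")
```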

Two particles whose wave functions are linked the way they were in the EPR paradox are said to be entangled. Einstein memorably described entanglement as “spooky action at a distance”. He used the EPR paradox to suggest quantum mechanics couldn’t possibly be legit, certainly not without messing with the rules that made classical mechanics legit.

So the question of whether quantum mechanics was a fundamental description of reality or whether there were any hidden variables representing a deeper theory stood for nearly thirty years.

Then, in 1964, a Northern Irish physicist at CERN named John Stewart Bell figured out a way to answer this question using what has since been called Bell’s theorem. He defined a set of inequalities – statements of the form “P is greater than Q” – that were definitely true for classical mechanics. If an experiment conducted with electrons, for example, also concluded that “P is greater than Q”, it would support the idea that quantum mechanics (vis-à-vis electrons) has ‘hidden’ parts that would explain things like entanglement more along the lines of classical mechanics.

But if an experiment couldn’t conclude that “P is greater than Q“, it would support the idea that there are no hidden variables, that quantum mechanics is a complete theory and, finally, that it implicitly supports spooky actions at a distance.
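A concrete version of “P is greater than Q” is the CHSH form of Bell’s inequality, in which a particular combination S of measured correlations cannot exceed 2 in any local hidden-variable theory, whereas quantum mechanics predicts values up to 2√2 for an entangled pair. Here is a minimal sketch in Python comparing the quantum prediction against a toy local hidden-variable model (the toy model is mine, chosen only for illustration):

```python
import numpy as np

# Measurement settings (analyser angles, in radians) for the CHSH form of Bell's inequality.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

def E_quantum(x, y):
    """Correlation predicted by quantum mechanics for two spin-1/2 particles
    in the singlet (total spin zero) state, measured along angles x and y."""
    return -np.cos(x - y)

def E_local(x, y, n=200_000, rng=np.random.default_rng(0)):
    """Correlation from a toy local hidden-variable model: each pair carries a
    shared hidden angle lam, and each outcome depends only on the local setting and lam."""
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(x - lam))       # +1 or -1 at the first detector
    B = -np.sign(np.cos(y - lam))      # +1 or -1 at the second detector
    return np.mean(A * B)

def chsh(E):
    """The CHSH combination S; |S| <= 2 for any local hidden-variable theory."""
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print("quantum |S| =", abs(chsh(E_quantum)))   # ~2.83: violates the classical bound of 2
print("local   |S| =", abs(chsh(E_local)))     # ~2.00: respects the bound
```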

The theorem itself boils down to a statement. To quote myself from a 2013 post (emphasis added):

for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or [faster-than-light] communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed [like electrons or protons].

Zeilinger and Aspect, among others, are recognised for having performed these experiments, called Bell tests.

Technological advancements through the late 20th and early 21st centuries have produced more and more nuanced versions of different kinds of Bell tests. However, one thing has been clear from the first tests, in 1981, to the most recent: they have all consistently violated Bell’s inequalities, indicating that quantum mechanics does not have hidden variables and that our reality does allow bizarre things like superposition and entanglement to happen.

To quote from Quantum Physics for Poets (p. 214-215):

Bell’s theorem addresses the EPR paradox by establishing that measurements on object a actually do have some kind of instant effect on the measurement at b, even though the two are very far apart. It distinguishes this shocking interpretation from a more commonplace one in which only our knowledge of the state of b changes. This has a direct bearing on the meaning of the wave function and, from the consequences of Bell’s theorem, experimentally establishes that the wave function completely defines the system in that a ‘collapse’ is a real physical happening.


Tests

Though Bell defined his inequalities in such a way that they would lend themselves to study in a single test, experimenters often stumbled upon loopholes in their results because an experiment’s design wasn’t robust enough to evade quantum mechanics’ propensity to confound observers. Think of a loophole as a caveat; an experimenter runs a test and comes to you and says, “P is greater than Q but…”, followed by an excuse that makes the result less reliable. For a long time, physicists couldn’t figure out how to get rid of all these excuses and just be able to say – or not say – “P is greater than Q”.

If millions of photons are entangled in an experiment, the detectors used to observe them may not be good enough to register all of them, or some photons may not survive the journey to the detectors. This fair-sampling loophole could give rise to doubts about whether a photon collapsed into a particular state because of entanglement or if it was simply coincidence.
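A toy illustration, in Python, of why the lost photons matter: if – purely hypothetically – the odds of detection depend on the very property being measured, the detected subset tells a different story from the full set:

```python
import numpy as np

rng = np.random.default_rng(1)

# A million photons, each carrying a property we want to average: +1 or -1, in equal measure.
true_values = rng.choice([+1, -1], size=1_000_000)
print("true average:", true_values.mean())                    # ~0.0

# A hypothetical biased detector: it registers 90% of the +1 photons but only 60% of the -1s.
p_detect = np.where(true_values == +1, 0.9, 0.6)
detected = rng.random(true_values.size) < p_detect
print("average over detected photons:", true_values[detected].mean())   # ~ +0.2, an artefact
```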

To prevent the fair-sampling problem, physicists could bring the detectors closer together, but this would create the communication loophole. If two entangled photons are separated by 100 km and the second observation is made more than 0.0003 seconds after the first, it’s still possible that optical information could’ve been exchanged between the two particles. To sidestep this possibility, the two observations have to be separated by a distance greater than what light could travel in the time it takes to make the measurements. (Alain Aspect and his team also pointed their two detectors in random directions in one of their tests.)
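The 0.0003-second figure is just the light-travel time across 100 km, which a two-line check confirms:

```python
c = 299_792_458        # speed of light in vacuum, metres per second
distance = 100_000     # separation between the two detectors, metres

# Measurements separated in time by more than this could, in principle,
# have been coordinated by a light-speed signal.
print(distance / c)    # ~0.00033 seconds
```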

Third, physicists can tell if two photons received in separate locations were in fact entangled with each other, and not other photons, based on the precise time at which they’re detected. So unless physicists precisely calibrate the detection window for each pair, hidden variables could have time to interfere and induce effects the test isn’t designed to check for, creating a coincidence loophole.

If physicists perform a test such that detectors repeatedly measure the particles involved in, say, two labs in Chennai and Mumbai, it’s not impossible for statistical dependencies to arise between measurements. To work around this memory loophole, the experiment simply has to use different measurement settings for each pair.

Apart from these, experimenters also have to minimise any potential error within the instruments involved in the test. If they can’t eliminate the errors entirely, they will then have to modify the experimental design to compensate for any confounding influence due to the errors.

So the ideal Bell test – the one with no caveats – would be one where the experimenters are able to close all loopholes at the same time. In fact, physicists soon realised that the fair-sampling and communication loopholes were the more important ones.

In 1972, John Clauser and Stuart Freedman performed the first Bell test by entangling photons and measuring their polarisation at two separate detectors. Aspect led the first group that closed the communication loophole, in 1982; he subsequently conducted more tests that improved his first results. Anton Zeilinger and his team made advancements on the fair-sampling loophole.

One particularly important experimental result showed up in August 2015: Ronald Hanson and his team at the Delft University of Technology, in the Netherlands, had found a way to close the fair-sampling and communication loopholes at the same time. To quote Zeeya Merali’s report in Nature News at the time (lightly edited for brevity):

The researchers started with two unentangled electrons sitting in diamond crystals held in different labs on the Delft campus, 1.3 km apart. Each electron was individually entangled with a photon, and both of those photons were then zipped to a third location. There, the two photons were entangled with each other – and this caused both their partner electrons to become entangled, too. … the team managed to generate 245 entangled pairs of electrons over … nine days. The team’s measurements exceeded Bell’s bound, once again supporting the standard quantum view. Moreover, the experiment closed both loopholes at once: because the electrons were easy to monitor, the detection loophole was not an issue, and they were separated far enough apart to close the communication loophole, too.

By December 2015, Anton Zeilinger and co. were able to close the communication and fair-sampling loopholes in a single test with a 1-in-2-octillion chance of error, using a different experimental setup from Hanson’s. In fact, Zeilinger’s team actually closed three loopholes including the freedom-of-choice loophole. According to Merali, this is “the possibility that hidden variables could somehow manipulate the experimenters’ choices of what properties to measure, tricking them into thinking quantum theory is correct”.

But at the time Hanson et al announced their result, Matthew Leifer, a physicist at the Perimeter Institute in Canada, told Nature News (in the same report) that because “we can never prove that [the converse of freedom of choice] is not the case, … it’s fair to say that most physicists don’t worry too much about this.”

We haven’t gone into much detail about Bell’s inequalities themselves, but if our goal is to understand why Aspect and Zeilinger, and Clauser too, deserve to win a Nobel Prize, the answer lies in the ingenious tests they devised to probe Bell’s, and Einstein’s, ideas and in the implications of what they’ve found in the process.

For example, Bell crafted his test of the EPR paradox in the form of a ‘no-go theorem’: if it satisfied certain conditions, a theory was designated non-local, like quantum mechanics; if it didn’t satisfy all those conditions, the theory would be classified as local, like Einstein’s special relativity. So Bell tests are effectively gatekeepers that can attest whether or not a theory – or a system – is behaving in a quantum way, and each loophole is like an attempt to hack the attestation process.

In 1991, Artur Ekert, who would later be acknowledged as one of the inventors of quantum cryptography, realised this perspective could have applications in securing communications. Engineers could encode information in entangled particles, send them to remote locations, and allow detectors there to communicate with each other securely by observing these particles and decoding the information. The engineers can then perform Bell tests to determine if anyone might be eavesdropping on these communications using one or some of the loopholes.
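A minimal sketch of the decision rule at the heart of that idea (the function name and the noise tolerance are mine, for illustration; this is not Ekert’s protocol verbatim): the communicating parties sacrifice a fraction of their entangled pairs to estimate the CHSH value S, and trust the channel only if S sits near the quantum maximum of 2√2, since tampering with the particles degrades the entanglement and drags S down towards the classical bound of 2 or below.

```python
import math

CLASSICAL_BOUND = 2.0
QUANTUM_MAX = 2 * math.sqrt(2)       # ~2.828, the most quantum mechanics allows (Tsirelson bound)

def channel_looks_secure(measured_S, tolerance=0.1):
    """Accept the channel only if the Bell test comes out convincingly quantum.
    'tolerance' is an illustrative allowance for experimental noise, not a protocol value."""
    return measured_S > QUANTUM_MAX - tolerance

print(channel_looks_secure(2.80))    # True: consistent with undisturbed entanglement
print(channel_looks_secure(2.05))    # False: barely above the classical bound (2.0), suspicious
print(channel_looks_secure(1.90))    # False: no violation at all
```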

Roundup of missed stories – May 23, 2015

I’ve missed writing/commenting on so many science papers/articles in the two weeks following the launch of The Wire. The concepts in many of them would’ve made fun explainers, some required a takedown or two, and one had surprising ethical and philosophical implications. I think it might be a bit late to write about them myself (read: too tired), so I’m going to lay out the ones I think are the best here, for you to take on in ways you see fit.

  1. Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access – An OA whitepaper from a big proponent of OA, the Max Planck Digital Library. Has data to support the argument that money locked into the currently dominant publishing paradigm needs to be repurposed for OA, which the whitepaper reasons is very viable. Finally, suggests that for OA to become the dominant paradigm, it must happen en masse instead of in piecemeal fashion.
  2. Self-assembling Sierpinski triangles – Sierpinski triangles are a prominent kind of fractal. So, “Defect-free Sierpiński triangles can be self-assembled on a silver surface through a combination of molecular design and thermal annealing” suggests some interesting chemical and physical reactions at play.
  3. The moral challenge of invisibility – A new optical technique allows people to look at their bodies and see nothing, thanks to an apparatus developed by a team of researchers from the Karolinska Institute in Sweden. Cool as it is, physicist Philip Ball writes that users of the technique felt their social anxiety diminish. This appears to be a curious analogue of VS Ramachandran’s mirror-box technique to reduce phantom-limb pain in amputees.
  4. Open Science decoded – “Granting access to publications and data may be a step towards open science, but it’s not enough to ensure reproducibility. Making computer code available is also necessary — but the emphasis must be on the quality of the programming.” Given the role computing and statistics are playing in validating or invalidating scientific results, I wholeheartedly agree.
  5. EPR Paradox: Nonlocal legacy – I haven’t read this article yet but it already sounds interesting.
  6. In the beginning – A long piece in Aeon discusses if cosmology is suffering a drought of creativity these days. The piece’s peg is the BICEP2 fiasco, so maybe there are some juicy inside-stories there. It also ends on a well-crafted note of hopelessness (that’s one thing I’ve noticed about longform – the graf is often the last para).

We might be trapped in this snow globe of photons forever. The expansion of the Universe is pulling light away from us at a furious pace. And even if it weren’t, not everything that exists can be observed. There are more things in Heaven and Earth than are dreamt of in our philosophies. There always will be. Science has limits. One day, we might feel ourselves pressing up against those limits, and at that point, it might be necessary to retreat into the realm of ideas. It might be necessary to ‘dispense with the starry heavens’, as Plato suggested. It might be necessary to settle for untestable theories. But not yet. Not when we have just begun to build telescopes. Not when we have just awakened into this cosmos, as from a dream.

Last: I foresee I’ll continue to miss writing on these pieces in the future, so maybe these roundups could become a regular feature.

A closet of hidden phenomena

Science has rarely been counter-intuitive to our understanding of reality, and its elegant rationalism at every step of the way has been reassuring. This is why Bell’s theorem has been one of the strangest concepts of reality scientists have come across: it is hardly intuitive, hardly rational, and hardly reassuring.

To someone interested in the bigger picture, the theorem is the line before which quantum mechanics ends and after which classical mechanics begins. It’s the line in the sand between the Max Planck and the Albert Einstein weltanschauungen.

Einstein, and many others before him, worked with gravity, finding a way to explain the macrocosm and its large-scale dance of birth and destruction. Planck, and many others after him, have helped describe the world of the atom and its innards using extremely small packets of energy called particles, swimming around in a pool of exotic forces.

At the nexus of a crisis

Over time, however, as physicists studied the work of both men and of others, it started to become clear that the fields were mutually exclusive, never coming together to apply to the same idea. At this tenuous nexus, the Northern Irish physicist John Stewart Bell cleared his throat.

Bell’s theorem states, in simple terms, that for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or superluminal communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed – like the moon in the morning.

The paradox is obvious. Classical mechanics is applicable everywhere, even to subatomic particles mere femtometres – millionths of a nanometre – across. That it appears not to be is only because its dominant player, the gravitational force, is overshadowed at those scales by other, stronger forces. Quantum mechanics, on the other hand, is not so straightforward with its offering. It could be applied in the macroscopic world – but its theory has trouble dealing with gravity, and its calculations involving the strong nuclear force, which accounts for most of the mass of ordinary matter, are notoriously difficult.

This means if quantum mechanics is to have a smooth transition at some scale into a classical reality… it can’t. At that scale, one of locality or realism must snap back to life. This is why confronting the idea that one of them isn’t true is unsettling. They are both fundamental hypotheses of physics.

The newcomer

A few days ago, I found a paper on arXiv titled Violation of Bell’s inequality in fluid mechanics (May 28, 2013). Its abstract stated that “… a classical fluid mechanical system can violate Bell’s inequality because the fluid motion is correlated over very large distances”. Given that Bell stands between Planck’s individuated notion of quantum mechanics and Einstein’s waltz-like continuum of the cosmos, it was intriguing to see scientists attempting to describe a quantum mechanical phenomenon in a classical system.

The correlation that the paper’s authors talk about implies fluid flow in one region of space-time is somehow correlated with fluid flow in another region of space-time. This is a violation of locality. However, fluid mechanics has been, and still is, a purely classical affair: its behaviour can be traced to Newton’s ideas from the 17th century. This means all flow events are – rather, have to be – decidedly real and local.

To make their point, the authors use mathematical equations modelling fluid flow, conceived by Leonhard Euler in the 18th century, and show how they could explain vortices – regions of a fluid where the flow is mostly a spinning motion about an imaginary axis.

Assigning fictitious particles to different parts of the equation, the scientists demonstrate how the particles in one region of flow could continuously and instantaneously affect particles in another region of fluid flow. In quantum mechanics, this phenomenon is called entanglement. It has no classical counterpart because it violates the principle of locality.

Coincidental correlation

However, there is nothing quantum about fluid flow, much less about Euler’s equations. Then again, if the paper is right, would that mean flowing fluids are a quantum mechanical system? Occam’s razor comes to the rescue: Because fluid flow is classical but still shows signs of nonlocality, there is a possibility that purely local interactions could explain quantum mechanical phenomena.

Think about it. A purely classical system also shows signs of quantum mechanical behaviour. This meant that some phenomena in the fluid could be explained by both classical and quantum mechanical models, i.e. the two models correspond.

There is a stumbling block, however. Occam’s razor only provides evidence of a classical solution for nonlocality, not a direct correspondence between micro- and macroscopic physics. In other words, it could easily be a post hoc ergo propter hoc inference: Because nonlocality came after application of local mathematics, local mathematics must have caused nonlocality.

“Not quite,” said Robert Brady, one of the authors on the paper. “Bell’s hypothesis is often said to be about ‘locality’, and so it is common to say that quantum mechanical systems are ‘nonlocal’ because Bell’s hypothesis does not apply to them. If you choose this description, then fluid mechanics is also ‘non-local’, since Bell’s hypothesis does not apply to them either.”

“However, in fluid mechanics it is usual to look at this from a different angle, since Bell’s hypothesis would not be thought reasonable in that field.”

Brady’s clarification brings up an important point: even though the lines don’t exactly blur between the two domains, knowing, more than choosing, where to apply which model makes a large difference. If you misstep, classical fluid flow could become quantum fluid flow simply because it displays some pseudo-effects.

In fact, experiments to test Bell’s hypothesis have been riddled with such small yet nagging stumbling blocks. Even if a suitable domain of applicability has been chosen, an efficient experiment has to be designed that fully exploits the domain’s properties to arrive at a conclusion – and this has proved very difficult. Inspired by the purely theoretical EPR paradox put forth in 1935, Bell stated his theorem in 1964. It is now 2013 and no experiment has successfully proved or disproved it.

Three musketeers

The three most prevalent problems such experiments face are called the failure of rotational invariance, the no-communication loophole, and the fair sampling assumption.

In any Bell experiment, two particles are allowed to interact in some way – such as being born from the same source – and are then separated across a large distance. Scientists then measure the particles’ properties using detectors. This happens again and again until any patterns among paired particles can be found or denied.

Whatever properties the scientists are going to measure, the different values that that property can take must be equally likely. For example, if I have a bag filled with 200 blue balls, 300 red balls and 100 yellow balls, I shouldn’t think something quantum mechanical was at play if one in two balls pulled out was red. That’s just probability at work. And when probability can’t be completely excluded from the results, it’s called a failure of rotational invariance.

For the experiment to measure only the particles’ properties, the detectors must not be allowed to communicate with each other. If they were allowed to communicate, scientists wouldn’t know if a detection arose due to the particles or due to glitches in the detectors. Unfortunately, in a perfect setup, the detectors wouldn’t communicate at all and be decidedly local – putting them in no position to reveal any violation of locality! This problem is called the no-communication loophole.

The final problem – fair sampling – is a statistical issue. If an experiment involves 1,000 pairs of particles, and if only 800 pairs have been picked up by the detector and studied, the experiment cannot be counted as successful. Why? Because the other 200 could have changed the results had they been picked up. There is a chance. Thus, the detectors would have to be 100 per cent efficient in a successful experiment.

In fact, the example was a gross exaggeration: detectors are only 5-30 per cent efficient.

One (step) at a time

Resolution of the no-communication problem came in 1998 from scientists in Austria, who also closed the rotational invariance loophole. The fair sampling assumption was resolved by a team of scientists from the USA in 2001, one of whom was David Wineland, physics Nobel laureate, 2012. However, they used only two ions to make the measurements. A more thorough experiment’s results were announced just last month.

Researchers from the Institute for Quantum Optics and Quantum Communication, Austria, used detectors called transition-edge sensors that could pick up individual photons with 98 per cent efficiency. These sensors were developed by the National Institute of Standards and Technology, Maryland, USA. In keeping with tradition, the experiment admitted the no-communication loophole.

Unfortunately, for an experiment to be a successful Bell-experiment, it must get rid of all three problems at the same time. This hasn’t been possible to date, which is why a conclusive Bell’s test, and the key to quantum mechanics’ closet of hidden phenomena, eludes us. It is as if nature uses one loophole or the other to deceive the experimenters.*

The silver lining is that the photon has become the first particle for which all three loopholes have been closed, albeit in different experiments. We’re probably getting there, loopholes relenting. The reward, of course, could be the greatest of all: We will finally know if nature is described by quantum mechanics, with its deceptive trove of exotic phenomena, or by classical mechanics and general relativity, with its reassuring embrace of locality and realism.

(*In 1974, John Clauser and Michael Horne found a curious workaround for the fair-sampling problem that they realised could be used to look for new physics. They called it the no-enhancement assumption. They had calculated that if some method was found to amplify the photons’ signals in the experiment and circumvent the low detection efficiency, the method would also become a part of the result. Therefore, if the result came out that quantum mechanics was nonlocal, then the method would be a nonlocal entity. So, using different methods, scientists could distinguish between previously unknown local and nonlocal processes.)

This article, as written by me, originally appeared in The Hindu’s The Copernican science blog on June 15, 2013.
