O Voyager, where art thou?

On September 5, 1977, NASA launched the Voyager 1 space probe to study the Jovian planets Jupiter and Saturn and their moons, and then the interstellar medium, the gigantic chasm between star systems. Thirty-five years and nine months on, Voyager 1 has kept going, recently entering the boundary region between our Solar System and the rest of the Milky Way.

In 2012, however, when it was nine times farther from the Sun than Neptune is, the probe entered a part of space completely unknown to astronomers.

On June 27, three papers were published in Science discussing what Voyager 1 had encountered: a region at the outermost edge of the Solar System that scientists are calling the ‘heliosheath depletion region’. They think it’s a feature of the heliosphere, the imagined bubble in space beyond whose borders the Sun has no influence.

“The principal result of the magnetic field observations made by our instrument on Voyager is that the heliosheath depletion region is a previously undetected part of the heliosphere,” said Dr. Leonard Burlaga, an astrophysicist at the NASA Goddard Space Flight Center, Maryland, and an author of one of the papers.

“If it were the region beyond the heliosphere, the interstellar medium, we would have expected a change in the magnetic field direction when we crossed the boundary of the region. No change was observed.”

More analysis of the magnetic field observations showed that the heliosheath depletion region has a weak magnetic field – about 0.1 nanotesla (nT), some 600,000 times weaker than Earth’s – oriented in such a direction that it could only have arisen because of the Sun. Even so, this weak field was twice as strong as the field just outside the region. Astronomers would have known why, Burlaga clarified, if the instrument needed to settle the question hadn’t stopped working on the probe long ago.

When the probe crossed over into the region, this spike in strength was recorded within a day. Burlaga and others also found that the strength spiked three times and dropped twice, leaving Voyager 1 within the region at the time of their analysis; in fact, after August 25, 2012, no drops have been recorded. The implication is that the region is not a smooth one.

“It is possible that the depletion region has a filamentary character, and we entered three different filaments. However, it is more likely that the boundary of the depletion region was moving toward and away from the sun,” Burlaga said.

The magnetic field and its movement through space are not the only oddities characterising the heliosheath depletion region. Low-energy ions blown outward by the Sun constantly stream out of the heliosphere, but they were markedly absent within the depletion region. Burlaga was plainly surprised: “It was not predicted or even suggested.”

Analysis by Dr. Stamatios Krimigis, the NASA principal investigator for the Low-Energy Charged Particle (LECP) experiment aboard Voyager 1 and an author of the second paper, also found that cosmic rays, which are highly energised charged particles produced by various sources outside the System through unknown mechanisms, weren’t striking Voyager’s detectors equally from all directions. Instead, more hits were being recorded in certain directions inside the heliosheath depletion region.

Burlaga commented, “The sharp increase in the cosmic rays indicate that cosmic rays were able to enter the heliosphere more readily along the magnetic fields of the depletion region.”

Even with Voyager 1 out there, Krimigis feels that humankind is flying blind: astronomers’ models were, and are, clearly inadequate, and there is no roadmap of what lies ahead. “I feel like Columbus who thought he had gotten to West India, when in fact he had gone to America,” Krimigis contemplates. “We find that nature is much more imaginative than we are.”

With no idea of how or where the strange region originated, we’ll just have to wait and see what additional measurements tell us. Until then, the probe will continue approaching the gateway to the Galaxy.

This blog post, as written by me, first appeared in The Hindu’s science blog on June 29, 2013.

Self-siphoning beads

This is the coolest thing I’ve seen all day, and I’m pretty sure it’ll be the coolest thing you’d have seen all day, too: The Chain of Self-siphoning Beads, a.k.a. Physics Brainmelt.

[youtube=http://www.youtube.com/watch?feature=player_embedded&v=6ukMId5fIi0]

It’s so simple; just think of the forces acting on the beads. Once a chain link is pulled up and let down, its kinetic and potential energies give it momentum going downward, and this pulls the rest of the chain up. The reason the loop doesn’t collapse is that it’s got some energy travelling along itself in the form of the beads’ momentum as they traverse that path. If the beads had been stationary, then the mass of the beads in the loop would’ve brought it down. Like a bicycle: A standing one would’ve toppled over; a moving one keeps moving.
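For what it’s worth, here’s a back-of-the-envelope sketch of the steady speed the chain should settle into under the simplest possible model: links are jerked from rest up to the chain’s speed (which, like any inelastic pickup, wastes half the energy), so only half of the potential energy released by a fall of height h ends up as kinetic energy. The 1.5-metre drop is an assumed number, purely for illustration.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2
h = 1.5   # assumed drop from the pot to the floor, in metres

# Simplest steady-state energy bookkeeping: per unit mass, the fall releases
# g*h of potential energy, half of which is lost in jerking stationary links
# up to speed, leaving (1/2)*v**2 = (1/2)*g*h.
v = math.sqrt(g * h)

# Consistency check: at this speed the momentum flux needed to keep picking up
# chain, (mass per length)*v**2, equals the weight of the falling leg,
# (mass per length)*g*h.
print(f"steady-state chain speed ≈ {v:.1f} m/s")
```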

Who talks like that?

Four-and-a-half years of engineering, one year of blogging and one year of J-school later, I’m a sub-editor with an Indian national daily and not doing badly at all, if you ask me. I’m not particularly important to the organization as such, but among my friends, given my background, I’m the one with a newspaper. I’m the one they call if they need an ad printed, if they need a product reviewed, if they need a chance to be published.

So, when a 20-year-old from BITS, Dubai (where I studied), mailed me this, I had no f**king idea what to say.

NIce to hear bck frm u……. actually s i said m a growing writer…. i jzzt completed my frst novel n [name removed] is editing it…. i wanted to write articles n get dem published in reputed newspapers like urs….so i wanted help wid dat…. cn u jzzt give me a few guidelines so dat i cud creat sm f my best works n send dem to u…..

  1. I’m given to understand the QWERTY keyboard was designed to make it easier to type words as they are actually spelled, using fingers that evolution designed for a human hand. So, doesn’t typing ‘just’ have to be easier than ‘jzzt’, and ‘.’ easier than ‘…………’? It’s one thing to make language work for you; it’s another to use symbols like you have no idea how they should be used.
  2. Why are you so lazy that you can’t finish a word before going on to the next one? Do you think a journalist – who has lots to lose by spelling words wrong – would appreciate ‘creat’, ‘cn’, ‘sm’, ‘frst’? Don’t you think the vowel is an important part of language? It’s the letter that permits the sounding, genius.
  3. If you’re looking for a chance to get published, don’t assume I will give you the chance to be published if the best I’ve seen from you is “i jzzt completed my frst novel n so-so”, “cn u jzzt-” I cannot even.

And then to think anyone with a smartphone and a Twitter account can be stereotyped to be this way. Ugh.

A Periodic Table of history lessons

This is pretty cool. Twitter user @jamiebgall tweeted this picture he’d made of the Periodic Table, showing each element alongside the nationality of its discoverer.


It’s so simple, yet it says a lot about different countries’ scientific programs and, if you google a bit, their focus during different periods in history. For example,

  1. A chunk of the transuranic actinides originated in American labs, possibly arising out of the rapid developments in particle accelerator technology in the early 20th century.
  2. Hydrogen was discovered by a British scientist (Henry Cavendish) in the late 18th century, pointing to the country’s early establishment of research and experimental institutions. UK scientists were also responsible for the discovery of 23 elements in all.
  3. The 1904 Nobel Prizes in physics and chemistry went to Lord Rayleigh and William Ramsay, respectively, for discovering four of the six noble gases. One of the other two, helium, was co-discovered by Pierre Janssen (France) and Joseph Lockyer (UK). Radon was discovered by Friedrich Dorn (Germany) in 1898.
  4. Elements 107 to 112 were discovered by Germans at the Gesellschaft für Schwerionenforschung (GSI), Darmstadt. Elements 107, 108 and 109 were discovered by Peter Armbruster and Gottfried Münzenberg between 1981 and 1984. Elements 110, 111 and 112 were discovered by Sigurd Hofmann et al. between 1994 and 1996. All of them owed their origination to the UNILAC (Universal Linear Accelerator), commissioned in 1975.
  5. The discovery of aluminium, the most abundant metal in the Earth’s crust, is attributed to Hans Christian Oersted (Denmark, 1825) even though Humphry Davy had developed an aluminium-iron alloy before him. The Dane took the honours because he was the first to isolate the metal.
  6. Between 1944 and 1952, the USA discovered seven elements; this ‘discovery density’ is beaten only by the UK, which discovered six elements in 1807 and 1808. In both countries, however, these discoveries were made by a small group of people finding one element after another. In the USA, elements 93-98 and 101 were discovered by teams led by Glenn T. Seaborg at the University of California, Berkeley. In the UK, those six elements were isolated by Humphry Davy; Lord Rayleigh’s and Sir William Ramsay’s noble gases came much later, in the 1890s.

And so forth…

A battery of power

Lithium-ion batteries have found increasing use in recent times, powering everything from portable electronics to heavy transportation. They have their own set of problems, but these aren’t unsolvable. And when they are solved, the batteries will also have to find other reasons to persist in a market whose demands are soaring.

The simplest upgrade that can be mounted on such a battery is to increase its charge capacity. It will then last longer in each application, reducing the frequency of replacement. During charging, electrical energy is stored chemically in a material inside the battery. So, the battery’s charge capacity is really this material’s charge capacity.

At the moment, that material is graphite. It is widely available and easy to handle. Any replacement should not disrupt how a battery is made or the conditions in which it has to be stored. Thus, a material as ‘easy’ as graphite would be the ideal substitute. Like silicon.

Silicon v. graphite

Studies have shown that silicon has roughly ten times the charge capacity of graphite. It is abundantly available, very resilient to heat, and easy to produce, store and dispose of. However, there’s a big problem. “The lithium-silicon system has a much higher capacity than Li-graphite, but shows a strong volume change during charging and discharging,” said Dr. Thomas Fassler, Chair of Inorganic Chemistry, Technical University of Munich.
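As a rough illustration of where that factor comes from, here is a back-of-the-envelope estimate of the two materials’ theoretical specific capacities using Faraday’s constant, assuming the fully lithiated phases are LiC6 for graphite and Li15Si4 for silicon (standard textbook assumptions, not figures taken from the papers discussed below).

```python
F = 96485.0  # Faraday constant, coulombs per mole of charge

def specific_capacity_mAh_per_g(li_per_host_atom, molar_mass_host):
    # Charge stored per gram of host material, converted from coulombs
    # to milliamp-hours (1 mAh = 3.6 C).
    return li_per_host_atom * F / (3.6 * molar_mass_host)

graphite = specific_capacity_mAh_per_g(1 / 6, 12.011)   # LiC6: one Li per six carbons
silicon = specific_capacity_mAh_per_g(15 / 4, 28.086)   # Li15Si4: 3.75 Li per silicon

print(f"graphite ≈ {graphite:.0f} mAh/g")  # ~372 mAh/g
print(f"silicon  ≈ {silicon:.0f} mAh/g")   # ~3580 mAh/g, roughly ten times more
```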

When charging, an external voltage is applied that overpowers the battery’s internal voltage, forcing lithium ions to migrate from the positive to the negative electrode, where they’re stored in the material in question. When discharging, the ions move back out of the negative electrode into the positive one, while electrons flow through the external circuit, generating a current that a connected appliance draws.

If the storage material at the negative electrode is made of silicon, lithium ions entering the silicon lattice stretch and strain it. With further charging, its volume swells so much that the lattice can fracture and break apart. At the same time, silicon’s abundance and ubiquity are enticing attributes for materials scientists.

Two recent studies, from June 4 and June 6, propose workarounds to this problem. The earlier one was from researchers at Stanford University, Yi Cui and Zhenan Bao, assisted by scientists from Tsinghua University, Beijing, and the University of Texas, Austin. Use silicon, they say, but bolster its ability to withstand expansion while charging.

The hydrogel bolster

“Our team has used silicon-hydrogel composites to replace carbon to increase charge storage capacity by many times,” said Dr. Yi Cui. He is the David Filo and Jerry Yang Faculty Scholar, Department of Materials Science and Engineering.

Using a process called in situ synthesis polymerization, they gave silicon nanoparticles a uniform coating of a hydrogel, which is a network of polyaniline polymer chains dispersed in water. This substance is porous and flexible yet strong. When lithium ions enter the silicon lattice, it expands into the space created by the hydrogel’s pores while being held in place.

Cui and Bao also found that the network of polymer chains formed a pathway through which the lithium ions could be transported. At the same time, because the hydrogel contains water, with which lithium is highly reactive, the battery could be ignited if not handled properly.

For such a significant problem, the scientists found a very simple solution. “We baked the water off before sealing the battery,” Bao said.

Hard to make, hard to break

The second study, from June 6, was published in Angewandte Chemie International Edition. Instead of the elegant and industrially reproducible hydrogel solution, Dr. Fassler, who led the study, synthesized a new, sophisticated material called lithium borosilicide. He’s calling it ‘tum’ after his university.

Tum is a unique material. It is as hard as diamond. Unlike diamond, however, the arrangement of atoms in the tum lattice forms channels, like tubes, throughout the crystal. This facilitates increased storage of lithium ions as well as assists in their transportation.

About the choice of boron to go with silicon, Fassler said, “Intuition and extended experimental experience is necessary to find out the proper ratio of starting materials as well as the correct parameters.” To test their out-of-the-box solution, Fassler and his student Michael Zeilinger went to Arizona State University, where they used a high-pressure chemistry lab to apply 100,000 atmospheres of pressure at 900 degrees Celsius and synthesize tum.

They found that it was stable to air and moisture, and could withstand up to 800 degrees Celsius. However, they still don’t know what the charge capacity of this new compound is. “We will build a so-called electrochemical half-cell and test it versus elemental lithium,” Fassler said.

The synthesis route is, no doubt, a limitation. The high pressures and temperatures required to produce tum in industrially meaningful quantities are clearly incompatible with the ubiquity that lithium-ion batteries enjoy. Fassler is hopeful, though. “In case the electrochemical performance turns out good, chemists will look for other, cheaper, synthetic approaches,” he said.

Rethinking the battery

Another solution to increasing the performance of lithium-ion batteries was proposed at Oak Ridge National Laboratory (ORNL), Tennessee, in the first week of June.

Led by Chengdu Liang, the team reinvented the internal structure of the battery and replaced the liquid electrolyte with a solid, sulphur-based one. This eliminated the risk of flammability and increased the charge capacity of the setup by almost 100 times, but necessitated elevated temperatures to enhance the ionic conductivity of the materials.

Commenting on the ORNL solution, Yi Cui said, “Recently, high ionic conductivity of solid electrolytes was discovered, it looks promising down the road. However, the high inter-facial resistance at the solid-solid interface still needs to be addressed. Also, the new electrode materials have very large deadweight.” He added that the cyclic performance was good – at 300 charge-discharge cycles – but not outstanding.

A battery of power

As the Stanford team continues testing its hydrogel solution and awaits commercial deployment, the Munich team will verify tum’s electrochemical capability, and the ORNL team will try to improve its battery’s performance. These solutions are important for America because, in many other countries, the battery industry is already a critical part of the economy. As The Economist is quick to detail, Japan, South Korea and China are great examples.

Knowing that rechargeable and portable sources of power would play a critical role in the then-emerging electronics industry, Japan invested big in lithium-ion batteries in the 1990s. Soon, South Korea and China followed suit. America, on the other hand, kept away because manufacturing these batteries offered low returns on investment at a time when it wanted its economy only to grow. Now, it’s playing catch-up.

All because it didn’t see coming how lithium-ion batteries would become sources of power – electrochemical and economic.

This post, as written by me, first appeared in The Copernican science blog on June 19, 2013.

Hello and welcome to my personal blog. I’m a science reporter and blogger at The Hindu, an Indian national daily. I’m interested in high-energy physics, the history and philosophy of science, and photography. When no one’s looking, I fiddle with code and call myself a programmer. I enjoy working with the infrastructure that props up newsrooms.

(I can’t delete this post because Walter Murch has commented on it.)

A closet of hidden phenomena

Science has rarely been counter-intuitive to our understanding of reality, and its elegant rationalism at every step of the way has been reassuring. This is why Bell’s theorem is one of the strangest concepts of reality scientists have come across: it is hardly intuitive, hardly rational, and hardly reassuring.

To someone interested in the bigger picture, the theorem is the line before which quantum mechanics ends and after which classical mechanics begins. It’s the line in the sand between the Max Planck and the Albert Einstein weltanschauungen.

Einstein, and many others before him, worked with gravity, finding a way to explain the macrocosm and its large-scale dance of birth and destruction. Planck, and many others after him, have helped describe the world of the atom and its innards using extremely small packets of energy called particles, swimming around in a pool of exotic forces.

At the nexus of a crisis

Over time, however, as physicists studied the work of both men and of others, it started to become clear that the two fields were mutually exclusive, never coming together to apply to the same idea. At this tenuous nexus, the Northern Irish physicist John Stewart Bell cleared his throat.

Bell’s theorem states, in simple terms, that for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or superluminal communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed – like the moon in the morning.
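The form of the theorem that experiments actually test is the CHSH inequality, which says that in any local-realist theory a particular combination S of correlations must satisfy |S| ≤ 2. Here is a minimal numerical sketch, not tied to any specific paper or experiment mentioned here, of how quantum mechanics breaks that bound for a pair of entangled spins, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard detector angles:

```python
import numpy as np

def E(a, b):
    # Quantum correlation between outcomes when the two detectors are set
    # at angles a and b, for the spin-singlet state: E(a, b) = -cos(a - b).
    return -np.cos(a - b)

# Standard CHSH detector settings, in radians.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

# Local realism demands |S| <= 2; the singlet state gives 2*sqrt(2), about 2.83.
print(f"|S| = {abs(S):.3f}")
```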

The paradox is obvious. Classical mechanics is, in principle, applicable everywhere, even to subatomic particles mere femtometres across. That it appears not to be is only because its dominant player, the gravitational force, is overshadowed there by other, stronger forces. Quantum mechanics, on the other hand, is not so straightforward with its offering. It could be applied in the macroscopic world – but the theory has trouble dealing with gravity and with the strong nuclear force, which gives most ordinary matter its mass.

This means if quantum mechanics is to have a smooth transition at some scale into a classical reality… it can’t. At that scale, one of locality or realism must snap back to life. This is why confronting the idea that one of them isn’t true is unsettling. They are both fundamental hypotheses of physics.

The newcomer

A few days ago, I found a paper on arXiv titled Violation of Bell’s inequality in fluid mechanics (May 28, 2013). Its abstract stated that “… a classical fluid mechanical system can violate Bell’s inequality because the fluid motion is correlated over very large distances”. Given that Bell stands between Planck’s individuated notion of quantum mechanics and Einstein’s waltz-like continuum of the cosmos, it was intriguing to see scientists attempting to describe a quantum mechanical phenomenon in a classical system.

The correlation that the paper’s authors talk about implies fluid flow in one region of space-time is somehow correlated with fluid flow in another region of space-time. This is a violation of locality. However, fluid mechanics has been, and still is, a purely classical affair: its behaviour can be traced to Newton’s ideas from the 17th century. This means all flow events are, or rather have to be, decidedly real and local.

To make their point, the authors take the mathematical equations modelling fluid flow, conceived by Leonhard Euler in the 18th century, and show how they can describe vortices – regions of a fluid where the flow is mostly a spinning motion about an imaginary axis.
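For reference (this is the standard textbook form, not anything specific to the paper), the incompressible Euler equation for a fluid with velocity field u, pressure p and density ρ, together with the vorticity ω that characterises those spinning regions, reads:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p, \qquad \boldsymbol{\omega} = \nabla \times \mathbf{u}.$$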

Assigning fictitious particles to different parts of the equation, the scientists demonstrate how the particles in one region of flow could continuously and instantaneously affect particles in another region of fluid flow. In quantum mechanics, this phenomenon is called entanglement. It has no classical counterpart because it violates the principle of locality.

Coincidental correlation

However, there is nothing quantum about fluid flow, much less about Euler’s equations. Then again, if the paper is right, would that mean flowing fluids are a quantum mechanical system? Occam’s razor comes to the rescue: Because fluid flow is classical but still shows signs of nonlocality, there is a possibility that purely local interactions could explain quantum mechanical phenomena.

Think about it. A purely classical system also shows signs of quantum mechanical behaviour. This would mean that some phenomena in the fluid can be explained by both classical and quantum mechanical models, i.e. the two models correspond.

There is a stumbling block, however. Occam’s razor only provides evidence of a classical solution for nonlocality, not a direct correspondence between micro- and macroscopic physics. In other words, it could easily be a post hoc ergo propter hoc inference: Because nonlocality came after application of local mathematics, local mathematics must have caused nonlocality.

“Not quite,” said Robert Brady, one of the authors on the paper. “Bell’s hypothesis is often said to be about ‘locality’, and so it is common to say that quantum mechanical systems are ‘nonlocal’ because Bell’s hypothesis does not apply to them. If you choose this description, then fluid mechanics is also ‘non-local’, since Bell’s hypothesis does not apply to them either.”

“However, in fluid mechanics it is usual to look at this from a different angle, since Bell’s hypothesis would not be thought reasonable in that field.”

Brady’s clarification brings up an important point: even though the lines between the two domains don’t exactly blur, knowing where to apply which model, rather than simply choosing one, makes a large difference. If you misstep, classical fluid flow could pass for quantum fluid flow simply because it displays some pseudo-effects.

In fact, experiments to test Bell’s hypothesis have been riddled with such small yet nagging stumbling blocks. Even if a suitable domain of applicability has been chosen, an efficient experiment has to be designed that fully exploits the domain’s properties to arrive at a conclusion – and this has proved very difficult. Inspired by the purely theoretical EPR paradox put forth in 1935, Bell stated his theorem in 1964. It is now 2013 and no experiment has successfully proved or disproved it.

Three musketeers

The three most prevalent problems such experiments face are called the failure of rotational invariance, the no-communication loophole, and the fair sampling assumption.

In any Bell experiment, two particles are allowed to interact in some way – such as being born from a same source – and separated across a large distance. Scientists then measure the particles’ properties using detectors. This happens again and again until any patterns among paired particles can be found or denied.

Whatever property the scientists are going to measure, the different values that the property can take must be equally likely. For example, if I have a bag filled with 200 blue balls, 300 red balls and 100 yellow balls, I shouldn’t think something quantum mechanical is at play if one in two balls pulled out is red. That’s just probability at work. And when such probability can’t be completely excluded from the results, it’s called a failure of rotational invariance.
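Here’s a quick sanity check of that bag example, just to show the one-in-two red draws are plain probability and nothing more:

```python
import random

# The bag from the example: 200 blue, 300 red and 100 yellow balls.
bag = ["blue"] * 200 + ["red"] * 300 + ["yellow"] * 100

draws = [random.choice(bag) for _ in range(100_000)]  # draw with replacement
print(draws.count("red") / len(draws))  # ~0.5, i.e. one ball in two, no mystery
```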

For the experiment to measure only the particles’ properties, the detectors must not be allowed to communicate with each other. If they were allowed to communicate, scientists wouldn’t know if a detection arose due to the particles or due to glitches in the detectors. Unfortunately, in a perfect setup, the detectors wouldn’t communicate at all and be decidedly local – putting them in no position to reveal any violation of locality! This problem is called the no-communication loophole.

The final problem – fair sampling – is a statistical issue. If an experiment involves 1,000 pairs of particles, and if only 800 pairs have been picked up by the detector and studied, the experiment cannot be counted as successful. Why? Because results from the other 200 could have distorted the results had they been picked up. There is a chance. Thus, the detectors would have to be 100 per cent efficient in a successful experiment.

In fact, the example was a gross exaggeration: detectors are only 5-30 per cent efficient.
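To see why the fair sampling assumption matters at such low efficiencies, here is a toy simulation with invented numbers (it has nothing to do with real photon detectors): if a hypothetical detector registers one outcome more readily than the other, the pairs that happen to get detected can show a noticeably different correlation from the full set of pairs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Toy pairs: 80% of all pairs give matching outcomes, 20% give mismatched ones.
match = rng.random(n) < 0.8
a = rng.choice([-1, 1], size=n)
b = np.where(match, a, -a)

# Hypothetical detectors that register +1 outcomes far more readily than -1.
p_plus, p_minus = 0.9, 0.2
det_a = rng.random(n) < np.where(a == 1, p_plus, p_minus)
det_b = rng.random(n) < np.where(b == 1, p_plus, p_minus)
both = det_a & det_b

print("match rate over all pairs:     ", np.mean(a == b))              # ~0.80
print("match rate over detected pairs:", np.mean(a[both] == b[both]))  # ~0.90
```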

One (step) at a time

Resolution of the no-communication problem came in 1998 from scientists in Austria, who also closed the rotational invariance loophole. The fair sampling loophole was closed by a team of scientists from the USA in 2001, one of whom was David Wineland, the 2012 physics Nobel laureate. However, they used only two ions to make the measurements. A more thorough experiment’s results were announced just last month.

Researchers from the Institute for Quantum Optics and Quantum Communication, Austria, had used detectors called transition-edge sensors that could pick up individual photons for detection with a 98 per cent efficiency. These sensors were developed by the National Institute for Standards and Technology, Maryland, USA. In keeping with tradition, the experiment admitted the no-communication loophole.

Unfortunately, for an experiment to be a successful Bell experiment, it must get rid of all three problems at the same time. This hasn’t been possible to date, which is why a conclusive Bell test, and the key to quantum mechanics’ closet of hidden phenomena, eludes us. It is as if nature uses one loophole or the other to deceive the experimenters.*

The silver lining is that the photon has become the first particle for which all three loopholes have been closed, albeit in different experiments. We’re probably getting there, loopholes relenting. The reward, of course, could be the greatest of all: We will finally know if nature is described by quantum mechanics, with its deceptive trove of exotic phenomena, or by classical mechanics and general relativity, with its reassuring embrace of locality and realism.

(*In 1974, John Clauser and Michael Horne found a curious workaround for the fair-sampling problem that they realised could be used to look for new physics. They called this the no-enhancement problem. They had calculated that if some method was found to amplify the photons’ signals in the experiment and circumvent the low detection efficiency, the method would also become a part of the result. Therefore, if the result came out that quantum mechanics was nonlocal, then the method would be a nonlocal entity. So, using different methods, scientists distinguish between previously unknown local and nonlocal processes.)

This article, as written by me, originally appeared in The Hindu’s The Copernican science blog on June 15, 2013.

On bad films and their purpose

The reason some movies adapted from books don’t do well at the box office is that there are many people who haven’t read those books. Even though it’s reasonable that production houses see movies as standalone creative products, separate from the books, it’s the existence of an audience for either that drives the production itself.

The point is to capitalize on the potential ‘popular culture value’ of the book being adapted. And when a movie adapted from a book bombs, I think it’s because someone misjudged the size of the audience for it. The movie is subsequently forgotten, leaving no capitalizable trail of its own.

The reason I’m complaining is that I love such movies – movies most people think don’t do much but that do a lot for me. When I read books, I don’t focus on everything the book throws at me. As the plot develops, I am generally able to pick up on what’s relevant and what’s not, leaving the characters to play out in my mind as if located nowhere in particular but simply existing of/by themselves.

Now, when this ‘dud’ of a movie comes along, it fills up these spaces nicely, colors inside the plot-wise irrelevant boundaries. That’s why, if there had been more people who’d read the book than those who hadn’t, the movie might’ve been appreciated for what it really was: not a creative standalone but a product crafted only to capitalize on existing emotions, not to create new ones.

Most recently, Atlas Shrugged Part I did this well. The actors didn’t try to put up a performance of their own, abiding quite nicely by the narrative railroad set up by the book.

The same can be said of the Lord of the Rings trilogy, but Tolkien left little to the imagination, not that that’s a problem. If anything, the movie was always only threatening to fall short (like The Hobbit Part I did), the difference being that The Hobbit left a detestable Narnia-like episode firmly lodged in my mind. That’s also why I’d rather a movie fall short if it can’t land on just the right spot, and if it can fall short, why wouldn’t it be a hit?

Can science and philosophy mix constructively?

Quantum mechanics can sometimes be very hard to understand, so much so that even thinking about it becomes difficult. This could be because its foundations lie in an action-centric depiction of reality that slowly rejected its origins and assumed a thought-centric garb.

In his 1925 paper on the topic, physicist Werner Heisenberg used only observable quantities to denote physical phenomena. He also pulled up Niels Bohr in that great paper, saying, “It is well known that the formal rules which are used [in Bohr’s 1913 quantum theory] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron.”

A true theory

Because of the uncertainty principle, and other principles like it, quantum mechanics started to develop into a set of theories that could be tested against observations, and that, to physicists, left very little to thought experiments. Put another way, there was nothing a quantum physicist could think up that couldn’t be proved or disproved experimentally. This way of looking at the world – in philosophy – is called logical positivism.

This made quantum mechanics a true theory of reality, as opposed to a hypothetical, unverifiable one.

However, even before Heisenberg’s paper was published, positivism was starting to be rejected, especially by chemists. An important example was the advent of statistical mechanics and atomism in the early 19th century. Both inferred, without direct physical observations, that if two volumes of hydrogen and one volume of oxygen combined to form water vapor, then a water molecule would have to comprise two atoms of hydrogen and one atom of oxygen.

A logical positivist would have insisted on actually observing the molecule individually, but that was impossible at the time. This insistence on submitting physical proof, thus, played an adverse role in the progress of science by delaying/denying success its due.

As time passed, the failures of positivism started to take hold on quantum mechanics. In a 1926 conversation with Albert Einstein, Heisenberg said, “… we cannot, in fact, observe such a path [of an electron in an atom]; what we actually record are the frequencies of the light radiated by the atom, intensities and transition probabilities, but no actual path.” And since he held that a theory ought to be a ‘true’ theory in this sense, he concluded that these parameters must feature in the theory, and in what it predicted, as themselves, instead of the unobservable electron path.

This wasn’t the case.

Gaps in our knowledge

Heisenberg’s probe of the granularity of nature led to his distancing from logical positivism. And Steven Weinberg, physicist and Nobel Laureate, uses just this distancing to argue harshly, in a 1994 essay titled Against Philosophy, that physics has never benefited from the advice of philosophers, and that when it has, it was only to negate the advice of another philosopher – almost suggesting that ‘science is all there is’ by dismissing the aesthetic in favor of the rational.

In doing so, Weinberg doesn’t acknowledge the fact that science and philosophy go hand in hand; what he has done is simply to outline the failure of logical positivism in the advancement of science.

At the simplest, philosophy in various forms guides human thought toward ideals like objective truth and is able to establish their superiority over subjective truths. Philosophy also provides the framework within which we can conceptualize unobservables and contextualize them in observable space-time.

In fact, Weinberg’s conclusion brings to mind an article in Nature News & Comment by Daniel Sarewitz. In the piece, Sarewitz argued that for someone who didn’t really know the physics supporting the Higgs boson, its existence would have to be more a matter of faith than one of knowledge. Similarly, for someone who couldn’t translate the radiation emitted by an atom to ‘mean’ the electron’s path, the latter would have to be a matter of faith or hope, not a bit of knowledge.

Efficient descriptions

A better-defined example is the theory of quarks and gluons, particles that have never been observed in isolation but are believed by the scientific community to exist. The equipment to spot them directly is yet to be built, would cost hundreds of billions of dollars, and would be orders of magnitude more sophisticated than the LHC.

In the meantime, unlike what Weinberg, and like what Sarewitz, would have you believe, we do rely on philosophical principles, like that of sufficient reason (Spinoza 1663; Leibniz 1686), to fill up space-time at levels we can’t yet probe, and to guide us toward the directions we ought to probe after investing money in them.

This is actually no different from a layman going from understanding electric fields to supposedly understanding the Higgs field. At the end of the day, efficient descriptions make the difference.

Exchange of knowledge

This sort of dependence also implies that philosophy draws a lot from science, and uses it to define its own prophecies and shortcomings. We must remember that, while the rise of logical positivism may have shielded physicists from atomism, scientific verification through its hallowed method also pushed positivism toward its eventual rejection.

The moral is that scientists must not reject philosophy for its passage through crests and troughs of credence because science also suffers the same passage. What more proof of this do we need than Popper’s and Kuhn’s arguments – irrespective of either of them being true?

Yes, we can’t figure things out with pure thought, and yes, the laws of physics underlying the experiences of our everyday lives are completely known. However, in the search for objective truth – whatever that is – we can’t neglect pure thought until, as Weinberg’s Heisenberg-example itself seems to suggest, we know everything there is to know, until science and philosophy, rather verification-by-observation and conceptualization-by-ideation, have completely and absolutely converged toward the same reality.

Until, in short, we can describe nature continuously instead of discretely.

Liberation of philosophical reasoning

By separating scientific advance from the contributions of philosophical knowledge, we are advocating for the ‘professionalization’ of scientific investigation – insisting that it must lack the attitude-born depth of intuition, which is aesthetic and not rational.

It was against such advocacy that the Austrian-born philosopher Paul Feyerabend protested vehemently: “The withdrawal of philosophy into a ‘professional’ shell of its own has had disastrous consequences.” He meant, in other words, that scientists have become too specialized and are rejecting the useful bits of philosophy.

In his seminal work Against Method (1975), Feyerabend suggested that scientists occasionally subject themselves to methodological anarchism so that they may come up with new ideas, unrestricted by the constraints imposed by the scientific method, freed in fact by the liberation of philosophical reasoning. These new ideas, he suggests, can then be reformulated again and again according to where and how observations fit into them.

In the meantime, the ideas are not born from observations but pure thought that is aided by scientific knowledge from the past. As Wikipedia puts it neatly: “Feyerabend was critical of any guideline that aimed to judge the quality of scientific theories by comparing them to known facts.” These ‘known facts’ are akin to Weinberg’s observables.

So, until the day we can fully resolve nature’s granularity, and assume the objective truth of no reality before that, Pierre-Simon Laplace’s two-century-old words should show the way: “We may regard the present state of the universe as the effect of its past and the cause of its future” (A Philosophical Essay on Probabilities, 1814).

This article, as written by me, originally appeared in The Hindu’s science blog, The Copernican, on June 6, 2013.