My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists behind it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hopes that it will find the particles they’re looking for. Not all physicists agree, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity version’, where luminosity represents the number of interesting events the machine can detect during collisions for further study. This version will operate until 2038. That isn’t a long way away because it took more than a decade to build the LHC; it will definitely take longer to plan for, convince lawmakers, secure the funds for and build something bigger and more complicated.

There have also been some other developments of late, pointing to other ways to discover ‘new physics’ – the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one I think was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON 1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and not anywhere near the threshold that denotes a discovery proper.

It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the significance makes it seem likely that the signal will be accounted for by some other well-known event. I was disappointed, of course, but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle-physics-related development turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six-times heavier than a Higgs boson and 800-times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

One peer had also recommended in between that I familiarise myself with quantum computing while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

The molecule that was also a wave

According to the principles of quantum mechanics, you’re a wave – just like light is both a particle and a wave. It’s just that your wavelength is so small that your wave nature doesn’t matter, and you’re treated like a particle. The larger an object is, the smaller its wavelength, and vice versa. We’re confused about whether light is a particle or a wave because photons, the particles of light, are so small and have a measurable wavelength as a result. Scientists know that electrons, protons, neutrons, even neutrinos have the properties of a wave.

But while the math of quantum mechanics says you’re a wave, how can we know for sure if we can’t measure it? There are two ways. One, we don’t have any evidence to the contrary. Two, scientists have been checking if larger and larger particles, as far as they can go, exhibit the properties of a wave – and at every step of the way, they’ve come up with positive results. Taken together, we have no reason to believe that we’re not also waves.

Such tests reaffirm the need for quantum mechanics to understand the nature of reality because the rules of classical mechanics alone don’t explain wave-particle duality.

On September 23, scientists from Austria, China, Germany and Switzerland reported that they had measured the wavelength of a group of molecules called oligoporphyrins. Specifically, they used “oligo-tetraphenylporphyrins enriched by a library of up to 60 fluoroalkylsulphanyl chains”. Altogether, they consisted “of up to 2,000 atoms”, making them the heaviest objects known to directly exhibit wave-like properties.

The molecule in question. DOI: 10.1038/s41567-019-0663-9

According to the scientists’ peer-reviewed paper, the molecules had a wavelength of around 53 femtometers, about 100,000-times smaller than the molecules themselves.
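As a rough sanity check, the de Broglie relation λ = h/mv reproduces a number of this order. The mass (~25,000 atomic mass units, for a roughly 2,000-atom molecule) and beam velocity (~300 m/s) below are my own ballpark assumptions, not the paper’s exact figures:

```python
# Rough estimate of the matter-wave wavelength of a ~2,000-atom molecule.
# The mass and velocity are assumed ballpark figures, not the paper's exact values.
h = 6.62607015e-34      # Planck's constant, J*s
amu = 1.66053907e-27    # atomic mass unit, kg

mass = 25_000 * amu     # assumed molecular mass
velocity = 300.0        # assumed beam velocity, m/s

wavelength = h / (mass * velocity)             # de Broglie: lambda = h / (m*v)
print(f"{wavelength * 1e15:.0f} femtometres")  # prints ~53 femtometres
```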

* * *

We have known since at least the 11th century, through the work of the Arab scholar Ibn al-Haytham, that light is a wave. In 1670, Isaac Newton propounded that light is made up of small particles, and spent three decades supplying evidence for his argument. His push birthed a conflict: was light wave-like or made up of particles?

The British polymath Thomas Young built on the work of the 17th-century Dutch physicist Christiaan Huygens to devise an experiment in 1801 that definitively proved light was a wave. It is known widely today as Young’s double-slit experiment. It is so simple, and its outcomes so robust, that it has become a mainstay of modern tests of quantum mechanics. Physicists use upgraded versions of the experiment to this day to study the nature and properties of matter-waves.

(If you would like to know more, I highly recommend Anil Ananthaswamy’s biography of this experiment, Through Two Doors At Once; here’s an excerpt.)

In the experiment, light from a common source – such as a candle – is allowed to pass through two fine slits separated by a short distance. A sheet of paper sufficiently behind the slits then shows a strange pattern of alternating light and dark bands instead of just two patches of light. This is because light waves passing through the two slits interfere with each other, producing the famous interference pattern. Since only waves can interfere, the experiment shows that light has to be a wave.

An illustration of the double-slit experiment from ‘Through Two Doors At Once’ (2019).
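For anyone who wants to see the bands emerge from the maths, here is a minimal sketch of the ideal two-slit intensity pattern, I(x) ∝ cos²(πdx/λL). The wavelength, slit separation and screen distance are arbitrary values I have assumed purely for illustration:

```python
import numpy as np

# Ideal two-slit interference pattern (ignoring the single-slit envelope).
# All numbers below are arbitrary, chosen only for illustration.
wavelength = 550e-9    # m, green light
d = 0.1e-3             # m, separation between the slits
L = 1.0                # m, distance from the slits to the screen

x = np.linspace(-5e-3, 5e-3, 11)                        # positions on the screen, m
intensity = np.cos(np.pi * d * x / (wavelength * L))**2

for xi, I in zip(x, intensity):
    print(f"x = {xi * 1e3:+.1f} mm   relative brightness = {I:.2f}")
```

The printout alternates between bright and dark positions on the screen – the same light and dark bands the paragraph above describes.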

The particulate nature of light would get its proper due only in 1900, when Max Planck stumbled upon a mathematical inconsistency that forced him to conclude light had to be made up of smaller packets of energy. It was the birth of quantum mechanics.

* * *

The international group’s test went roughly as follows: the scientists pulsed a laser onto a glass plate coated with the oligoporphyrins to release a stream of the molecules; collected them into a beam using collimators; randomly chopped the beam into smaller bits; passed each bit through diffraction gratings to split it up; then had the two little beams interfere with each other. Finally, they counted the number of molecules striking the detector while the detector registered the interference pattern.

They had insulated the whole device, about 2 m long, from extremely small disturbances, like vibrations, to prevent the results from being corrupted. In their paper, the scientists even write that the final interference pattern was blurred thanks to Earth’s rotation, which they were able to “compensate for” using the effects of Earth’s gravity.

A schematic diagram of the experimental setup. The oligoporphyrins move from left to right as the experiment progresses. The results of the counter are visible in a diagram above the right-most component. DOI: 10.1038/s41567-019-0663-9

To ascertain that the pattern they were seeing on the detector was in fact due to interference, the scientists performed a variety of checks, each of which established a relationship between the shapes on the detector and the properties of the components of the interferometer according to the rules of quantum mechanics. They were also able to rule out alternative, i.e. classical, explanations this way.

For example, the scientists fired a laser through the cloud of molecules post-interference. Each molecule split the laser light into two separate beams, which recombined to produce an interference pattern of their own. This way, scientists could elicit the molecules’ interference pattern by studying the laser’s interference pattern. As they varied the laser power, they found that the visibility distribution of the molecules more closely matched with quantum mechanical models than with classical models, confirming interference.

The solid blue line indicates the quantum mechanical model and the dashed red line is a classical model, both scaled vertically by a factor of 0.93. The shaded areas on the curves represent uncertainty in the model parameters, and the dotted lines indicate unscaled theory curves. DOI: 10.1038/s41567-019-0663-9

What these scientists have achieved isn’t only a feat of measurement. Their findings also help refine the border between the classical and the quantum. The force of gravity governs the laws of classical mechanics, which deals with macroscopic objects, while the electromagnetic, strong nuclear and weak nuclear forces rule the microscopic world. Although macroscopic and microscopic objects occupy the same universe, physicists haven’t yet understood how classical and quantum mechanics can be combined into a single theory.

One of the problems standing in the way of this union is knowing where – and how – the macroscopic world ends and the microscopic world begins. So by observing quantum mechanical effects at the scale of thousands of atoms, scientists have quite literally pushed the boundaries of what we know about how the universe works.

Scientists make video of molecule rotating

A research group in Germany has captured images of what a rotating molecule looks like. This is a significant feat because it is very difficult to observe individual atoms and molecules, which are very small as well as very fragile. Scientists often have to employ ingenious techniques that can probe their small scale without destroying them in the act of doing so.

The researchers studied carbonyl sulphide (OCS) molecules, which have a cylindrical shape. To perform their feat, they went through three steps. First, the researchers precisely calibrated two laser pulses and fired them repeatedly – ~26.3 billion times per second – at the molecules to set them spinning.

Next, they shot a third laser at the molecules. The purpose of this laser was to excite the valence electrons forming the chemical bonds between the O, C and S atoms. These electrons absorb energy from the laser’s photons, become excited and quit the bonds. This leaves the positively charged atoms close to each other. Since like charges repel, the atoms vigorously push themselves apart and break the molecule up. This process is called a Coulomb explosion.

At the moment of disintegration, an instrument called a velocity map imaging (VMI) spectrometer records the orientation and direction of motion of the oxygen atom’s positive charge in space. Scientists can work backwards from this reading to determine how the molecule might have been oriented just before it broke up.

In the third step, the researchers restarted the experiment with another set of OCS molecules.

By going through these steps repeatedly, they were able to capture 651 photos of the OCS molecule in different stages of its rotation.

These images cannot be interpreted in a straightforward way – the way we interpret images of, say, a rotating ball.

This is because a ball, even though it is composed of millions of molecules, has enough mass for the force of gravity to dominate proceedings. So scientists can understand why a ball rotates the way it does using just the laws of classical mechanics.

But at the level of individual atoms and molecules, gravity becomes negligibly weak whereas the other three fundamental forces – including the electromagnetic force – become more prominent. To understand the interactions between these forces and the particles, scientists use the rules of quantum mechanics.

This is why the images of the rotating molecules look like this:

Steps of the molecule’s rotation. Credit: DESY, Evangelos Karamatskos

These are images of the OCS molecule as deduced by the VMI spectrometer. Based on them, the researchers were also able to determine how long the molecule took to make one full rotation.

As a spinning ball drifts around on the floor, we can tell exactly where it is and how fast it is spinning. However, when studying particles, quantum mechanics prohibits observers from knowing these two things with the same precision at the same time. You probably know this better as Heisenberg’s uncertainty principle.

So if you have a fix on where the molecule is, that measurement prohibits you from knowing exactly how fast it is spinning. Confronted with this dilemma, scientists used the data obtained by the VMI spectrometer together with the rules of quantum mechanics to calculate the probability that the molecule’s O, C and S atoms were arranged a certain way at a given point of time.
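(An aside, to put a hypothetical number on this trade-off: if you pinned down a molecule’s position to roughly its own size, the uncertainty principle sets a floor on how fuzzy its momentum must be. The half-nanometre figure below is my own assumption for illustration, not a number from the study.)

```python
# Illustrative only: minimum momentum uncertainty for an assumed position
# uncertainty of ~0.5 nm (roughly molecular size; not a figure from the study).
hbar = 1.054571817e-34   # reduced Planck's constant, J*s

delta_x = 0.5e-9                      # assumed position uncertainty, m
delta_p_min = hbar / (2 * delta_x)    # Heisenberg: dx * dp >= hbar / 2
print(f"momentum must be uncertain by at least {delta_p_min:.1e} kg*m/s")
```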

The images above visualise these probabilities as a colour-coded map. With the position of the central atom (presumably C) fixed, the probability of finding the other two atoms at a certain position is represented on a blue-red scale. The redder a pixel is, the higher the probability of finding an atom there.

Rotational clock depicting the molecular movie of the observed quantum dynamics of OCS. Credit: doi.org/10.1038/s41467-019-11122-y

For example, consider the images at 12 o’clock and 6 o’clock: the OCS molecule is clearly oriented horizontally and vertically, respectively. Compare this to the measurement corresponding to the image at 9 o’clock: the molecule appears to exist in two configurations at the same time. This is because, approximately speaking, there is a 50% probability that it is oriented from bottom-left to top-right and a 50% probability that it is oriented from bottom-right to top-left. The 10 o’clock figure represents the probabilities split four different ways. The ones at 4 o’clock and 8 o’clock are messier still.

But despite the messiness, the researchers found that the image corresponding to 12 o’clock repeated itself once every 82 picoseconds. Ergo, the molecule completed one rotation every 82 picoseconds.

This is equal to 731.7 billion rpm. If your car’s engine operated this fast, the resulting centrifugal force, together with the force of gravity, would tear its mechanical joints apart and destroy the machine. The OCS molecule doesn’t come apart this way because gravity is 100 million trillion trillion times weaker than the weakest of the three subatomic forces.
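The conversion from the rotation period to rpm is simple enough to check directly:

```python
# One full rotation every 82 picoseconds, expressed in revolutions per minute.
period = 82e-12                   # seconds per rotation
rpm = 60 / period                 # rotations per minute
print(f"{rpm:.4g} rpm")           # ~7.317e+11, i.e. about 731.7 billion rpm
```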

The researchers’ study was published in the journal Nature Communications on July 29, 2019.

Jayant Narlikar’s pseudo-defence of Darwin

Jayant Narlikar, the noted astrophysicist and emeritus professor at the Inter-University Centre for Astronomy and Astrophysics, Pune, recently wrote an op-ed in The Hindu titled ‘Science should have the last word’. There’s probably a tinge of sanctimoniousness there, echoing the belief many scientists I’ve met have that science will answer everything, often blithely oblivious to politics and culture. But I’m sure Narlikar is not one of them.

Nonetheless, the piece IMO was good and not great because what Narlikar has written has been written in the recent past by many others, with different words. It was good because the piece’s author was Narlikar. His position on the subject is now in the public domain where it needs to be if only so others can now bank on his authority to stand up for science themselves.

Speaking of authority: there is a gaffe in the piece that its fans – and The Hindu‘s op-ed desk – appear to have glossed over. If they didn’t, it’s possible that Narlikar asked for his piece to be published without edits, which could be either further proof of sanctimoniousness or, of course, distrust of journalists. He writes:

Recently, there was a claim made in India that the Darwinian theory of evolution is incorrect and should not be taught in schools. In the field of science, the sole criterion for the survival of a theory is that it must explain all observed phenomena in its domain. For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. This is because the origin of life on earth is still unexplained by science. However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

@avinashtn, @thattai and @rsidd120 got the problems with this excerpt, particularly the part in bold, just right in a short Twitter exchange, beginning with this tweet (please click-through to Twitter to see all the replies):

Gist: the origin of life is different from the evolution of life.

But even if they were the same, as Narlikar conveniently assumes in his piece, something else should have stopped him. That something else is also what is specifically interesting for me. Sample what Narlikar said next and then the final line from the excerpt above:

For the present, Darwin’s theory is the best such theory but it is not perfect and leaves many questions unanswered. … However, till there is a breakthrough on this, or some alternative idea gets scientific support, the Darwinian theory is the only one that should continue to be taught in schools.

Darwin’s theory of evolution got many things right, and continues to, so there is a sizeable chunk of the domain of evolutionary biology where it remains both applicable and necessary. However, it is confusing that Narlikar believes that, should some explanations for some phenomena thus far not understood arise, Darwin’s theories as a whole could become obsolete. But why? It is futile to expect a scientific theory to be able to account for “all observed phenomena in its domain”. Such a thing is virtually impossible given the levels of specialisation scientists have been able to achieve in various fields. For example, an evolutionary biologist might know how migratory birds evolved but still not be able to explain how some birds are thought to use quantum effects to sense Earth’s magnetic field and navigate.

The example Mukund Thattai provides is fitting. The Navier-Stokes equations are used to describe fluid dynamics. However, scientists have been studying fluids in a variety of contexts, from two-dimensional vortices in liquid helium to gas outflow around active galactic nuclei. It is only in some of these contexts that the Navier-Stokes equations are applicable; that they are not entirely useful in others doesn’t render the equations themselves useless.

Additionally, this is where Narlikar’s choice of words in his op-ed becomes more curious. He must be aware that his own branch of study, quantum cosmology, has thin but unmistakable roots in a principle conceived in the 1910s by Niels Bohr, with many implications for what he says about Darwin’s theories.

Within the boundaries of physics, the principle of correspondence states that at larger scales, the predictions of quantum mechanics must agree with those of classical mechanics. It is an elegant idea because it acknowledges the validity of classical, a.k.a. Newtonian, mechanics when applied at a scale where the effects of gravity begin to dominate the effects of subatomic forces. In its statement, the principle does not say that classical mechanics is useless because it can’t explain quantum phenomena. Instead, it says that (1) the two mechanics each have their respective domain of applicability and (2) the newer one must resemble the older one when applied to the scale at which the older one is relevant.

Of course, while scientists have been able to satisfy the principle of correspondence in some areas of physics, an overarching understanding of gravity as a quantum phenomenon has remained elusive. If such a theory of ‘quantum gravity’ were to exist, its complicated equations would have to be able to resemble Newton’s equations and the laws of motion at larger scales.

But exploring the quantum nature of spacetime is extraordinarily difficult. It requires scientists to probe really small distances and really high energies. While lab equipment has been set up to meet this goal partway, it has been clear for some time that it might be easier to learn from powerful cosmic objects like blackholes.

And Narlikar has done just that, among other things, in his career as a theoretical astrophysicist.

I don’t imagine he would say that classical mechanics is useless because it can’t explain the quantum, or that quantum mechanics is useless because it can’t be used to make sense of the classical. More importantly, should a theory of quantum gravity come to be, should we discard the use of classical mechanics all-together? No.

In the same vein: should we continue to teach Darwin’s theories for lack of a better option or because they are scientific, useful and, through the fossil record, demonstrable? And if, in the future, an overarching theory of evolution comes along with the capacity to subsume Darwin’s, his ideas will still be valid in their respective jurisdictions.

As Thattai says, “Expertise in one part of science does not automatically confer authority in other areas.” Doesn’t this sound familiar?

Featured image credit: sipa/pixabay.

All the science in ‘The Cloverfield Paradox’

I watched The Cloverfield Paradox last night, the horror film that Paramount Pictures had dumped with Netflix and which was then released by Netflix on February 4. It’s a dumb production: unlike H.R. Giger’s existential, visceral horrors that I so admire, The Cloverfield Paradox is all about things going bump in the dark. But what sets these things off in the film is quite interesting: a particle accelerator. However, given how bad the film was, the screenwriter seems to have used the accelerator simply as a plot device, nothing more.

The particle accelerator is called Shepard. We don’t know what particles it’s accelerating or up to what centre-of-mass collision energy. However, the film’s premise rests on the possibility that a particle accelerator can open up windows into other dimensions. The Cloverfield Paradox needs this because, according to its story, Earth has run out of energy sources in 2028 and countries are threatening ground invasions for the last of the oil, so scientists assemble a giant particle accelerator in space to tap into energy sources in other dimensions.

Considering 2028 is only a decade from now – when the Sun will still be shining bright as ever in the sky – and renewable sources of energy aren’t even being discussed, the movie segues from sci-fi into fantasy right there.

Anyway, the idea that a particle accelerator can open up ‘portals’ into other dimensions is neither new nor entirely silly. Broadly, an accelerator’s purpose is founded on three concepts: the special theory of relativity (SR), particle decay and the wavefunction of quantum mechanics.

According to SR, mass and energy can transform into each other; moreover, objects moving closer to the speed of light become more massive, thus more energetic. Particle decay is what happens when a heavier subatomic particle decomposes into groups of lighter particles because it’s unstable. Put these two ideas together and you have a part of the answer: accelerators accelerate particles to extremely high velocities, the particles become more massive, ergo more energetic, and the excess energy condenses out at some point as other particles.

Next, in quantum mechanics, the wavefunction is a mathematical function: when you solve it based on what information you have available, one kind of solution gives the probability that a particular particle exists at some point in the spacetime continuum. It’s called a wavefunction because the function describes a wave, and like all waves, this one also has a wavelength and an amplitude. However, the wavelength here describes the distance across which the particle will manifest. Because energy is directly proportional to frequency (E = h × ν; h is Planck’s constant) and frequency is inversely proportional to the wavelength, energy is inversely proportional to wavelength. So the more energy a particle accelerator achieves, the smaller the part of spacetime the particles will have a chance of probing.

Spoilers ahead

SR, particle decay and the properties of the wavefunction together imply that if the Shepard is able to achieve a suitably high energy of acceleration, it will be able to touch upon an exceedingly small part of spacetime. But why, as it happens in The Cloverfield Paradox, would this open a window into another universe?

Spoilers end

Instead of directly offering a peek into alternate universes, a very-high-energy particle accelerator could offer a peek into higher dimensions. According to some theories of physics, there are many higher dimensions even though humankind may have access only to four (three of space and one of time). The reason they should even exist is to be able to solve some conundrums that have evaded explanation. For example, according to Kaluza-Klein theory (one of the precursors of string theory), the force of gravity is so much weaker than the other three fundamental forces (strong nuclear, weak nuclear and electromagnetic) because it exists in five dimensions. So when you experience it in just four dimensions, its effects are subdued.

Where are these dimensions? Per string theory, for example, they are extremely compactified, i.e. accessible only over incredibly short distances, because they are thought to be curled up on themselves. According to Oskar Klein (one half of ‘Kaluza-Klein’, the other half being Theodore Kaluza), this region of space could be a circle of radius 10^-32 m. That’s 0.00000000000000000000000000000001 m – over five quadrillion times smaller than a proton. According to CERN, which hosts the Large Hadron Collider (LHC), a particle accelerated to 10 TeV can probe a distance of 10^-19 m. That’s still ten trillion times larger than the scale at which the Kaluza-Klein fifth dimension is supposed to be curled up. The LHC has been able to accelerate particles to 8 TeV.
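To line these numbers up, here is a minimal sketch of the energy-to-distance conversion, using the rule of thumb that a collision at energy E probes length scales of about hc/E (the de Broglie wavelength of an ultra-relativistic particle). The energies are just examples:

```python
# Rough probe-distance estimates: lambda ~ h*c / E for ultra-relativistic particles.
h = 6.62607015e-34     # Planck's constant, J*s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

for energy_tev in (8, 10, 100):                 # example collision energies
    E = energy_tev * 1e12 * eV                  # convert TeV to joules
    print(f"{energy_tev:>4} TeV  ->  ~{h * c / E:.1e} m")

# For comparison: a proton is ~1e-15 m across, and the Kaluza-Klein
# fifth dimension mentioned above is supposed to be curled up at ~1e-32 m.
```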

The likelihood of a particle accelerator tossing us into an alternate universe entirely is a different kind of problem. For one, we have no clue where the connections between alternate universes are nor how they can be accessed. In Nolan’s Interstellar (2014), a wormhole is discovered by the protagonist to exist inside a blackhole – a hypothesis we currently don’t have any way of verifying. Moreover, though the LHC is supposed to be able to create microscopic blackholes, they have a 0% chance of growing to possess the size or potential of Interstellar‘s Gargantua.

In all, The Cloverfield Paradox is a waste of time. In the 2016 film Spectral – also released by Netflix – the science is overwrought, stretched beyond its possibilities, but still stays close to the basic principles. For example, the antagonists in Spectral are creatures made entirely of Bose-Einstein condensates. How this was even achieved boggles the mind, but the creatures have the same physical properties that the condensates do. In The Cloverfield Paradox, however, the accelerator is a convenient insertion into a bland story, an abuse of the opportunities that physics of this complexity offers. The writers might as well have said all the characters blinked and found themselves in a different universe.

The science in Netflix’s ‘Spectral’

I watched Spectral, the movie that released on Netflix on December 9, 2016, after Universal Studios got cold feet about releasing it on the big screen – the same place where a previous offering, Warcraft, had been gutted. Spectral is sci-fi and has a few great moments but mostly it’s bland and begging for some tabasco. The premise: an elite group of American soldiers deployed in Moldova come upon some belligerent ghost-like creatures in a city they’re fighting in. They’ve no clue how to stop them, so they fly in an engineer to consult from DARPA, the same guy who built the goggles that detected the creatures in the first place. Together, they do things. Now, I’d like to talk about the science in the film and not the plot itself, though the former feeds the latter.

SPOILERS AHEAD

A scene from the film ‘Spectral’ (2016). Source: Netflix

Towards the middle of the movie, the engineer realises that the ghost-like creatures have the same limitations as – wait for it – a Bose-Einstein condensate (BEC). They can pass through walls but not ceramic or heavy metal (not the music), they rapidly freeze objects in their path, and conventional weapons, typically projectiles of some kind, can’t stop them. Frankly, it’s fabulous that Ian Fried, the film’s writer, thought to use creatures made of BECs as villains.

A BEC is an exotic state of matter in which a group of ultra-cold particles condense into a superfluid (i.e., it flows without viscosity). Once a BEC forms, a subsection of a BEC can’t be removed from it without breaking the whole BEC state down. You’d think this makes the BEC especially fragile – because it’s susceptible to so many ‘liabilities’ – but it’s the exact opposite. In a BEC, the energy required to ‘kick’ a single particle out of its special state is equal to the energy that’s required to ‘kick’ all the particles out, making BECs as a whole that much more durable.

This property is apparently beneficial for the creatures of Spectral, and that’s where the similarity ends because BECs have other properties that are inimical to the portrayal of the creatures. Two immediately came to mind: first, BECs are attainable only at ultra-cold temperatures; and second, the creatures can’t be seen by the naked eye but are revealed by UV light. There’s a third and relevant property but which we’ll come to later: that BECs have to be composed of bosons or bosonic particles.

It’s not clear why Spectral‘s creatures are visible only when exposed to light of a certain kind. Clyne, the DARPA engineer, says in a scene, “If I can turn it inside out, by reversing the polarity of some of the components, I might be able to turn it from a camera [that, he earlier says, is one that “projects the right wavelength of UV light”] into a searchlight. We’ll [then] be able to see them with our own eyes.” However, the documented ability of BECs to slow down light to a great extent (5.7-million times more than lead can, in certain conditions) should make them appear extremely opaque. More specifically, while a BEC can be created that is transparent to a very narrow range of frequencies of electromagnetic radiation, it will stonewall all frequencies outside of this range on the flipside. That the BECs in Spectral are opaque to a single frequency and transparent to all others is weird.

Obviating the need for special filters or torches to be able to see the creatures simplifies Spectral by removing one entire layer of complexity. However, it would remove the need for the DARPA engineer also, who comes up with the hyperspectral camera and, its inside-out version, the “right wavelength of UV” searchlight. Additionally, the complexity serves another purpose. Ahead of the climax, Clyne builds an energy-discharging gun whose plasma-bullets of heat can rip through the BECs (fair enough). This tech is also slightly futuristic. If the sci-fi/futurism of the rest of Spectral leading up to that moment (when he invents the gun) was absent, then the second-half of the movie would’ve become way more sci-fi than the first-half, effectively leaving Spectral split between two genres: sci-fi and wtf. Thus the need for the “right wavelength of UV” condition?

Now, to the third property. Not all particles can be used to make BECs. Its two predictors, Satyendra Nath Bose and Albert Einstein, were working (on paper) with kinds of particles since called bosons. In nature, bosons are force-carriers, acting on matter-maker particles called fermions. A more technical distinction between them is that the behaviour of bosons is explained using Bose-Einstein statistics while the behaviour of fermions is explained using Fermi-Dirac statistics. And only Bose-Einstein statistics predicts the existence of states of matter called condensates, not Fermi-Dirac statistics.
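For the mathematically curious, the difference between the two kinds of statistics shows up in a few lines: as a level’s energy approaches the chemical potential, the Bose-Einstein occupation of that level grows without bound, while the Fermi-Dirac occupation never exceeds one particle per state. The numbers below are arbitrary, dimensionless examples of my own choosing:

```python
import math

# Mean occupation of a single energy level, as a function of the dimensionless
# gap x = (E - mu) / (k_B * T). Bosons pile up as x -> 0; fermions cap at 1.
def bose_einstein(x):
    return 1.0 / (math.exp(x) - 1.0)

def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)

for x in (2.0, 0.5, 0.01, 0.0001):   # arbitrary example values, shrinking gap
    print(f"x = {x:<7}  bosons: {bose_einstein(x):>10.1f}   fermions: {fermi_dirac(x):.3f}")
```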

(Aside: Clyne, when explaining what BECs are in Spectral, says its predictors are “Nath Bose and Albert Einstein”. Both ‘Nath’ and ‘Bose’ are surnames in India, so “Nath Bose” is both anyone and no one at all. Ugh. Another thing is I’ve never heard anyone refer to S.N. Bose as “Nath Bose”, only ‘Satyendranath Bose’ or, simply, ‘Satyen Bose’. Why do Clyne/Fried stick to “Nath Bose”? Was “Satyendra” too hard to pronounce?)

All particles constitute a certain amount of energy, which under some circumstances can increase or decrease. However, the increments of energy in which this happens are well-defined and fixed (hence the ‘quantum’ of quantum mechanics). So, for an oversimplified example, a particle can be said to occupy energy levels constituting 2, 4 or 6 units but never 1, 2.5 or 3 units. Now, when a very-low-density collection of bosons is cooled to an ultra-cold temperature (within a few millionths of a kelvin of absolute zero), the bosons increasingly prefer occupying fewer and fewer energy levels. At one point, they will all occupy a single, common level – something a collection of fermions could never do, because the number of fermions that can share a single state is strictly capped. (In technical parlance, the wavefunctions of all the bosons will merge.)

When this condition is achieved, a BEC will have been formed. And in this condition, even if a new boson is added to the condensate, it will be forced into occupying the same level as every other boson in the condensate. This condition is also out of bounds for all fermions – except in very special circumstances, circumstances whose exceptionalism perhaps makes way for Spectral‘s more fantastic condensate-creatures. We know of one such circumstance: superconductivity.

In a superconducting material, electrons flow without any resistance whatsoever at very low temperatures. The most widely applied theory of superconductivity interprets this flow as being that of a superfluid, and the ‘sea’ of electrons flowing as such to be a BEC. However, electrons are fermions. To overcome this barrier, Leon Cooper proposed in 1956 that the electrons didn’t form a condensate straight away but that there was an intervening state called a Cooper pair. A Cooper pair is a pair of electrons that have become bound, overcoming the repulsion of their like charges thanks to the vibrations of the atoms of the superconducting metal surrounding them. The electrons in a Cooper pair also can’t easily quit their embrace because, once they become bound, their total energy as a pair is lower than what they would have as free electrons, so breaking the pair apart requires a destabilising input of energy.

Could Spectral‘s creatures have represented such superconducting states of matter? It’s definitely science fiction because it’s not too far beyond the bounds of what we know about BEC today (at least in terms of a concept). And in being science fiction, Spectral assumes the liberty to make certain leaps of reasoning – one being, for example, how a BEC-creature is able to ram against an M1 Abrams and still not dissipate. Or how a BEC-creature is able to sit on an electric transformer without blowing up. I get that these in fact are the sort of liberties a sci-fi script is indeed allowed to take, so there’s little point harping on them. However, that Clyne figured the creatures ought to be BECs prompted way more disbelief than anything else because BECs are in the here and the now – and they haven’t been known to behave anything like the creatures in Spectral do.

For some, this information might even help decide if a movie is sci-fi or fantasy. To me, it’s sci-fi.

SPOILERS END

On the more imaginative side of things, Spectral also dwells for a bit on how these creatures might have been created in the first place and how they’re conscious. Any answers to these questions, I’m pretty sure, would be closer to fantasy than to sci-fi. For example, I wonder how the computing capabilities of a very large neural network seen at the end of the movie (not a spoiler, trust me) were available to the creatures wirelessly, or where the power source was that the soldiers were actually after. Spectral does try to skip the whys and hows by having Clyne declare, “I guess science doesn’t have the answer to everything” – but you’re just going “No shit, Sherlock.”

His character is, as this Verge review puts it, exemplarily shallow while the movie never suggests before the climax that science might indeed have all the answers. In fact, the movie as such, throughout its 108 minutes, wasn’t that great for me; it doesn’t ever live up to its billing as a “supernatural Black Hawk Down“. You think about BHD and you remember it being so emotional – Spectral has none of that. It was just obviously more fun to think about the implications of its antagonists being modelled after a phenomenon I’ve often read/written about but never thought about that way.

Relativity’s kin, the Bose-Einstein condensate, is 90 now

Excerpt:

Over November 2015, physicists and commentators alike the world over marked 100 years since the conception of the theory of relativity, which gave us everything from GPS to blackholes, and described the machinations of the universe at the largest scales. Despite many struggles by the greatest scientists of our times, the theory of relativity remains incompatible with quantum mechanics, the rules that describe the universe at its smallest, to this day. Yet it persists as our best description of the grand opera of the cosmos.

Incidentally, Einstein wasn’t a fan of quantum mechanics because of its occasional tendencies to violate the principles of locality and causality. Such violations resulted in what he called “spooky action at a distance”, where particles behaved as if they could communicate with each other faster than the speed of light would have it. It was weirdness the likes of which his conception of gravitation and space-time didn’t have room for.

As it happens, 2015 also marks another milestone, also involving Einstein’s work – as well as the work of an Indian scientist: Satyendra Nath Bose. It’s been 20 years since physicists realised the first Bose-Einstein condensate, which has proved to be an exceptional as well as quirky testbed for scientists probing the strange implications of a quantum mechanical reality.

Its significance today can be understood in terms of three ‘periods’ of research that contributed to it: 1925 onward, 1975 onward, and 1995 onward.

Read the full piece here.

 

The intricacies of being sold on string theory

If you are seeking an appreciation for the techniques of string theory, then Brian Greene’s The Elegant Universe could be an optional supplement. If, on the other hand, you want to explore the epistemological backdrop against which string theory proclaimed its aesthetic vigor, then the book is a must-read. As the title implies, it discusses the elegance of string theory in great and pleasurable detail, beginning with the harmonious resolution of the conflicts between quantum mechanics and general relativity that is its raison d’être and moving on to why it commands the attention of some of the greatest living scientists.

A bigger victory it secures, however, is not in simply laying out string theory but in getting you interested in it – and this has become a particularly important feature of science in the 21st century.

The counter-intuitive depiction of nature by the principles of modern physics has, since the mid-20th century, foretold that reality can be best understood in terms of mathematical expressions. This contrasts with the simplicity of its preceding paradigm: Newtonian physics, which was less about the mathematics and more about observations, and therefore required fewer interventions to bridge reality as it seemed and reality as it said it was.

Modern physics – encompassing quantum mechanics and Albert Einstein’s theories of relativity – overhauled this simplicity. While reality as it seemed hadn’t changed, reality as they said it was bore no resemblance to any of Newton’s work. The process of understanding reality became much more sophisticated, requiring years of training just to prepare oneself to be able to understand it, while probing it required the grandest associations of intellect and hardware.

The trouble getting it across

An overlooked side to this fallout concerned the instruction of these subjects to non-technical audiences, to people who liked to know what was going on but didn’t want to dedicate their lives to it [1]. Both quantum mechanics and general relativity are dominated by advanced mathematics, yet spelling out such abstractions is neither convenient nor effective for non-technical communication. As a result, science communicators have increasingly resorted to metaphors, using them to negotiate with the knowledge their readers already possessed.

This is where The Elegant Universe is most effective, especially since string theory is admittedly more difficult to understand than quantum mechanics or general relativity ever was. In fact, the book’s first few chapters – before Greene delves into string theory – are seasoned with statements of how intricate string theory is, while he does a tremendous job of laying the foundations of modern physics.

Especially admirable is his seamless guidance of the reader from time dilation and Lorentzian contraction to quantum superposition to the essentials of superstring theory to the unification of all forces under M-theory, with nary a twitch in between. The examples with which he illustrates important concepts are never mundane, either. His flamboyant writing makes for the proverbial engaging read. You will often find words you wouldn’t quickly use to describe the world around you, endorsing a supreme confidence in the subject being discussed.

Consider: “… the gently curving geometrical form of space emerging from general relativity is at loggerheads with the frantic, roiling, microscopic behavior of the universe implied by quantum mechanics”. Or, “With the discovery of superstring theory, musical metaphors take on a startling reality, for the theory suggests that the microscopic landscape is suffused with tiny strings whose vibrational patterns orchestrate the evolution of the cosmos. The winds of charge, according to superstring theory, gust through an aeolian universe.”

More importantly, Greene’s points of view in the book betray a confidence in string theory itself – as if he thinks that it is the only way to unify quantum mechanics and general relativity under an umbrella pithily called the ‘theory of everything’. What it means for you, the reader, is that you can expect The Elegant Universe not to be an exploratory stroll through a garden but more of a negotiation of the high seas.

Taking recourse in emotions

Does this subtract from the objectivity an enthused reader might appreciate as it would have prepared her to tackle the unification problem by herself? Somewhat. It is a subtle flaw in Greene’s reasoning throughout the book: while he devotes many pages to discussing solutions, he spends little time annotating the flaws of string theory itself. Even if no other theory has charted the sea of unification so well, Greene could have maintained some objectivity about it.

At the same time, by the end of the book, you start to think there is no other way to expound on string theory than by constantly retreating into the intensity of emotions and the honest sensationalism they are capable of yielding. For instance, when describing his own work alongside Paul Aspinwall and David Morrison in determining if space can tear in string theory, Greene introduces the theory’s greatest exponent, Edward Witten. As he writes,

“Edward Witten’s razor-sharp intellect is clothed in a soft-spoken demeanor that often has a wry, almost ironic, edge. He is widely regarded as Einstein’s successor in the role of the world’s greatest living physicist. Some would go even further and describe him as the greatest physicist of all time. He has an insatiable appetite for cutting-edge physics problems and he wields tremendous influence in setting the direction of research in string theory.”

Then, in order to convey the difficulty of a problem that the trio was facing, Greene simply states: Witten “lit up upon hearing the ideas, but cautioned that he thought the calculations would be horrendously difficult”. If Witten expects them to be horrendously difficult, then they must indeed be as horrendous as they get.

Such descriptions of magnitude are peppered throughout The Elegant Universe, often clothed in evocative language, and constitute a significant portion of its appeal to a general audience. They rob string theory of its esoteric stature, making the study of its study memorable. Greene has done well to not dwell on the technical intricacies of his subject while still retaining both the wonderment and the frustration of dealing with something as intractable. This, in fact, is his prime achievement through writing the book.

String theory is not about technique

It was published in 1999. In the years since, many believe that string theory has become dormant. However, that is also where the book scores: not by depicting the theory as being unfalsifiable but as being resilient, as being incomplete enough to dare physicists to follow their own lead in developing it, as being less of a feat in breathtaking mathematics and more of constantly putting one’s beliefs to the test.

Simultaneously, it is unlike the theories of inflationary cosmology that are so flexible that disproving them is like fencing with air. String theory has a sound historical basis in the work of Leonhard Euler, and its careful derivation from those founding principles to augur the intertwined destinies of space and time has involved the efforts of some of the world’s best mathematicians.

Since the late 1960s, when string theory was first introduced, it has gone through alternating periods of reaffirmation and discreditation. Each crest in this journey has been introduced by a ‘superstring revolution’, a landmark hypothesis or discovery that has restored its place in the scientific canon. Each trough, on the other hand, has represented a difficult struggle to attempt to cohere the implications of string theory into a convincing picture of reality.

These struggles are paralleled by Greene’s efforts in composing The Elegant Universe, managing to accomplish what is often lost in the translation of human endeavors: the implications for the common person. This could be in the form of beauty, or a better life, or some form of intellectual satisfaction; in the end, the book succeeds by drawing these possibilities to the fore, for once overshadowing the enormity of the undertaking that string theory will always be.

Buy the book on Amazon.

[1] Although it can also be argued that science communication as a special skill was necessitated by science becoming so complex.

Bohr and the breakaway from classical mechanics

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the center like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and values like mass and velocity were accurately measurable, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact position didn’t matter.

Even though modern quantum mechanics is still under development, its origins can be traced to humanity’s first thinking of energy as being quantized and not randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model acknowledged a more discrete energy conservation policy in physics, and used it to explain many aspects of chemistry and chemical reactions.
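To make “specific energies” concrete, here is a minimal sketch (my own illustration, not part of the original article) of the hydrogen energy levels the Bohr model predicts, and the red Balmer line a hydrogen atom emits when its electron drops from the third orbit to the second:

```python
# Bohr-model hydrogen: E_n = -13.6 eV / n^2, and the photon emitted in a 3 -> 2 jump.
h = 4.135667696e-15    # Planck's constant, eV*s
c = 2.99792458e8       # speed of light, m/s

def bohr_energy(n):
    """Energy of the nth orbit of hydrogen, in electronvolts."""
    return -13.6 / n**2

delta_E = bohr_energy(3) - bohr_energy(2)    # energy carried away by the photon, eV
wavelength = h * c / delta_E                 # metres
print(f"E3 - E2 = {delta_E:.2f} eV  ->  wavelength ~ {wavelength * 1e9:.0f} nm (red)")
```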

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy-level, it loses some energy. This can be charted using a “map” of energies like the electromagnetic spectrum, showing if the energy has been lost as infrared, UV, visible, radio, etc., radiation. In 1896, Dutch physicist Pieter Zeeman found that this map could be distorted when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.

It was only in 1925 that the cause of this behavior was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognized this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalization of Bohr’s model – a correction – to let it explain the Zeeman effect in ionized helium (a hydrogen-like ion: a helium nucleus orbited by a single electron).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, proposing that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave’s perspective. Before him, in 1905, Albert Einstein had postulated the existence of light-particles called photons but couldn’t explain how they could be related to the heat waves emanating from a gas – a problem he later solved using de Broglie’s logic.

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken because the field of physics itself had started to move ahead in different ways, ways that would ultimately become unified.

Leaps of faith

Between 1900 and 1925, there were a handful of people responsible for opening this floodgate to tide over the centuries-old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory when he was working on making light bulbs glow brighter. He found that the smallest bits of energy to be found in nature weren’t random, but actually came in specific amounts that he called quanta.

It is notable that when either of these men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be justified by purely scientific reasoning, as is the dominant process today, but by philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of the statistical mechanics he had to use to establish quantum theory. When asked about it, he said it was an “act of despair”, that he was “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, had relied on the intuitive philosophy of correspondence to conceive of his model. In fact, even before he had received his Nobel in 1922, Bohr had begun to deviate from his most eminent finding because it disagreed with what he thought were more important, and to be preserved, foundational ideas.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation had suffered from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, as Sommerfeld had done with Bohr’s ideas, generalised it so that it could be applied across experimental quantum mechanics. The end result was the famous Schrodinger equation.

The Sommerfeld-Bohr theory corresponds with the equation, and this is where the story comes “full circle”. After the equation became well known, the Bohr model was finally understood to be a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections that had to be made to classical mechanics for it to become quantum at all.
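In outline, again in standard notation rather than anything from the original article: solving the Schrodinger equation for an electron in the Coulomb field of a proton yields exactly the energy levels Bohr had obtained from his semi-classical orbits – the correspondence made explicit:

\[
-\frac{\hbar^2}{2m_e}\nabla^2\psi - \frac{e^2}{4\pi\varepsilon_0 r}\,\psi = E\psi \;\;\Longrightarrow\;\; E_n = -\frac{m_e e^4}{8\varepsilon_0^2 h^2}\,\frac{1}{n^2}.
\]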

An ingenious span

After this, the Bohr model was – rather, became – an integral part of the foundational ancestry of modern quantum mechanics. While it is today only one of many important milestones in the field, it holds a special place in history by itself: a bridge between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his pathbreaking work were important because they planted the seeds of ingenuity in our minds, and led us to think outside of convention.

This article was written by me and originally appeared in The Copernican science blog on May 19, 2013.
