A giant leap closer to the continuous atom laser

One of the most exotic phases of matter is called the Bose-Einstein condensate. As its name indicates, this type of matter is one whose constituents are bosons – particles whose behaviour is dictated by the rules of Bose-Einstein statistics. The elementary bosons are also called force particles. The other kind are matter particles, or fermions, whose behaviour is described by the rules of Fermi-Dirac statistics. Force particles and matter particles together make up the universe as we know it.

To be a boson, a particle – and a particle here can be anything from an elementary particle to a composite one, like an entire atom – needs to have an integer value of the spin quantum number. (The state of an electron in an atom, for example, can be described by the values of four quantum numbers.) An important difference between fermions and bosons is that Pauli’s exclusion principle doesn’t apply to bosons. The principle states that in a given quantum system, no two particles can have the same set of quantum numbers at the same time. When two particles have the same quantum numbers, they are said to occupy the same state. (‘States’ are not like places in a volume; instead, think of them more like a set of properties.) Pauli’s exclusion principle forbids fermions from doing this – but not bosons. So in a given quantum system, all the bosons can occupy the same quantum state if they are forced to.

For example, this typically happens when the system is cooled to nearly absolute zero – the lowest temperature possible. (The bosons also need to be confined in a ‘trap’ so that they don’t keep moving around or combine with each other to form other particles.) Removing more and more energy from the system means removing more and more energy from its constituent particles. So as fermions and bosons possess less and less energy, they drop into lower and lower quantum states. Fermions, because of the exclusion principle, fill the lowest states first and then the next lowest, and so on. Bosons, on the other hand, are all able to occupy the same lowest quantum state. When this happens, they are said to have formed a Bose-Einstein condensate.

In this phase, all the bosons in the system move around like a fluid – like the molecules of flowing water. A famous example of this is superconductivity (at least of the conventional variety). When certain materials are cooled to near absolute zero, their electrons – which are fermions – overcome their mutual repulsion and pair up with each other to form composite particles called Cooper pairs. Unlike individual electrons, Cooper pairs are bosons. They go on to form a Bose-Einstein condensate in which the Cooper pairs ‘flow’ through the material. In the material’s non-superconducting state, the electrons would have been scattered by objects in their path – like atomic nuclei or vibrations in the lattice. This scattering would have manifested as electrical resistance. But because the Cooper pairs have all occupied the same quantum state, they are much harder to scatter. They flow through the material as if they don’t experience any resistance. This flow is what we know as superconductivity.

Bose-Einstein condensates are a big deal in physics because they are a macroscopic effect of microscopic causes. We can’t usually see or otherwise directly sense the effects of most quantum-physical phenomena because they happen on very small scales, and we need the help of sophisticated instruments like electron microscopes and particle accelerators. But when we cool a superconducting material to below its threshold temperature, we can readily sense the presence of a superconductor by passing an electric current through it (or using the Meissner effect). Macroscopic effects are also easier to manipulate and observe, so physicists have used Bose-Einstein condensates as a tool to probe many other quantum phenomena.

While Albert Einstein predicted the existence of Bose-Einstein condensates – based on work by Satyendra Nath Bose – in 1924, physicists had the requisite technologies and understanding of quantum mechanics to be able to create them in the lab only in the 1990s. These condensates were, and mostly still are, quite fragile and can be created only in carefully controlled conditions. But physicists have also been trying to figure out how to maintain a Bose-Einstein condensate for long periods of time, because durable condensates are expected to provide even more research insights as well as hold potential applications in particle physics, astrophysics, metrology, holography and quantum computing.

An important reason for this is wave-particle duality, which you might recall from high-school physics. Louis de Broglie postulated in 1924 that every quantum entity could be described both as a particle and as a wave. The Davisson-Germer experiment of 1923-1927 subsequently found that electrons – which were until then considered to be particles – behaved like waves in a diffraction experiment. Interference and diffraction are exhibited by waves, so the experiment proved that electrons could be understood as waves as well. Similarly, a Bose-Einstein condensate can be understood both in terms of particle physics and in terms of wave physics. Just like in the Davisson-Germer experiment, when physicists set up an experiment to look for an interference pattern from a Bose-Einstein condensate, they succeeded. They also found that the interference pattern became stronger the more bosons they added to the condensate.

Now, all the bosons in a condensate have a coherent phase. The phase of a wave measures how far along its cycle the wave is at a given moment. When two waves are phase-coherent, the difference between their phases stays fixed: both progress by the same amount over the same span of time. Phase coherence is one of the most important wave-like properties of a Bose-Einstein condensate because it opens up the possibility of a device called an atom laser.

‘Laser’ is an acronym for ‘light amplification by stimulated emission of radiation’. The following video demonstrates its working principle better than I can in words right now:

The light emitted by an optical laser is coherent: it has a constant frequency and comes out in a narrow beam if the coherence is spatial, or can be produced in extremely short pulses if the coherence is temporal. An atom laser is a laser composed of propagating atoms instead of photons. As Wolfgang Ketterle, who led one of the first groups to create a Bose-Einstein condensate and later shared a Nobel Prize for it, put it, “The atom laser emits coherent matter waves whereas the optical laser emits coherent electromagnetic waves.” Because the bosons of a Bose-Einstein condensate are already phase-coherent, condensates make excellent sources for an atom laser.

The trick, however, lies in achieving a Bose-Einstein condensate of the desired (bosonic) atoms and then extracting a few atoms into the laser while replenishing the condensate with more atoms – all without letting the condensate break down or the phase-coherence be lost. Physicists created the first such atom laser in 1996, but its emission was neither continuous nor very bright. Researchers have since built better atom lasers based on Bose-Einstein condensates, although they remain far from being usable in their putative applications. An important reason for this is that physicists are yet to build a condensate-based atom laser that can operate continuously – that is, one in which, as atoms lase out, the condensate is constantly replenished so that the laser keeps operating for a long time.

On June 8, researchers from the University of Amsterdam reported that they had been able to create a long-lived, sort of self-sustaining Bose-Einstein condensate. This brings us a giant step closer to a continuously operating atom laser. Their setup consisted of multiple stages, all inside a vacuum chamber.

In the first stage, strontium atoms (which are bosons) started from an ‘oven’ maintained at 850 K and were progressively laser-cooled while they made their way into a reservoir. (Here is a primer on how laser-cooling works.) The reservoir had a dimple in the middle. In the second stage, the atoms were guided by lasers and gravity to descend into this dimple, where they had a temperature of approximately 1 µK, or one-millionth of a kelvin. As the dimple became more and more crowded, it was important for the atoms there not to heat up, which could have happened if some light had ‘leaked’ into the vacuum chamber.

To prevent this, in the third stage, the physicists used a carefully tuned laser, shined only through the dimple, that had the effect of rendering the strontium atoms mostly ‘transparent’ to light. According to the research team’s paper, without the ‘transparency beam’, the atoms in the dimple had a lifetime of less than 40 ms, whereas with the beam, it was more than 1.5 s – a roughly 37x difference. At some point, when a sufficient number of atoms had accumulated in the dimple, a Bose-Einstein condensate formed. In the fourth stage, an effect called Bose stimulation kicked in. Simply put, as more bosons (strontium atoms, in this case) transitioned into the condensate, the rate at which additional bosons made the same transition also increased. Bose stimulation thus played the role that the gain medium plays in an optical laser. The condensate grew until the rate at which atoms joined it matched the rate at which atoms were lost from the dimple, at which point it reached an equilibrium.

And voila! With a steady-state Bose-Einstein condensate, the continuous atom laser was almost ready. The physicists have acknowledged that their setup can be improved in many ways, including by making the laser-cooling effects more uniform, increasing the lifetime of strontium atoms inside the dimple, reducing losses due to heating and other effects, etc. At the same time, they wrote that “at all times after steady state is reached”, they found a Bose-Einstein condensate existing in their setup.

The awesome limits of superconductors

On June 24, a press release from CERN said that scientists and engineers working on upgrading the Large Hadron Collider (LHC) had “built and operated … the most powerful electrical transmission line … to date”. The transmission line consisted of four cables – two capable of transporting 20 kA of current and two, 7 kA.

The ‘A’ here stands for ‘ampere’, the SI unit of electric current. Twenty kilo-amperes is an extraordinary amount of current, nearly equal to the amount in a single lightning strike.

In the particulate sense: one ampere is the flow of one coulomb per second. One coulomb is equal to around 6.24 quintillion elementary charges, where each elementary charge is the charge of a single proton or electron (with opposite signs). So a cable capable of carrying a current of 20 kA can essentially transport 124.8 sextillion electrons per second.
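If you’d like to check that arithmetic yourself, here’s a minimal sketch in Python – the only input is the standard figure of about 6.24 × 10^18 elementary charges per coulomb quoted above:

# Rough check of the electrons-per-second figure
current = 20e3                         # amperes, i.e. coulombs per second
charges_per_coulomb = 6.24e18          # elementary charges in one coulomb
print(current * charges_per_coulomb)   # ~1.248e23, i.e. about 124.8 sextillion per second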

According to the CERN press release (emphasis added):

The line is composed of cables made of magnesium diboride (MgB2), which is a superconductor and therefore presents no resistance to the flow of the current and can transmit much higher intensities than traditional non-superconducting cables. On this occasion, the line transmitted an intensity 25 times greater than could have been achieved with copper cables of a similar diameter. Magnesium diboride has the added benefit that it can be used at 25 kelvins (-248 °C), a higher temperature than is needed for conventional superconductors. This superconductor is more stable and requires less cryogenic power. The superconducting cables that make up the innovative line are inserted into a flexible cryostat, in which helium gas circulates.

The bolded claim – that the line “can transmit much higher intensities than traditional non-superconducting cables” – could have been more explicit and noted that superconductors, including magnesium diboride, can’t carry an arbitrarily large amount of current. There is a limit, and for a reason reminiscent of why there is a limit to the current-carrying capacity of a normal conductor.

This explanation wouldn’t change the impressiveness of this feat and could even interfere with readers’ impression of the most important details, so I can see why the person who drafted the statement left it out. Instead, I’ll take this matter up here.

An electric current is generated between two points when electrons move from one point to the other. The direction of current is opposite to the direction of the electrons’ movement. A metal that conducts electricity does so because its constituent atoms have one or more valence electrons that can flow throughout the metal. So if a voltage arises between two ends of the metal, the electrons can respond by flowing around, birthing an electric current.

This flow isn’t perfect, however. Sometimes, a valence electron can bump into atomic nuclei, impurities – atoms of other elements in the metallic lattice – or be thrown off course by vibrations in the lattice of atoms, produced by heat. Such disruptions across the metal collectively give rise to the metal’s resistance. And the more resistance there is, the less current the metal can carry.

These disruptions often heat the metal as well. This happens because electrons don’t just flow between the two points across which a voltage is applied. They’re accelerated. So as they’re speeding along and suddenly bump into an impurity, they’re scattered into random directions. Their kinetic energy then no longer contributes to the electric energy of the metal and instead manifests as thermal energy – or heat.

If the electrons bump into nuclei, they could impart some of their kinetic energy to the nuclei, causing the latter to vibrate more, which in turn means they heat up as well.

Copper and silver have high conductivity because they have more valence electrons available to conduct electricity, and these electrons are scattered to a lesser extent than in other metals. Therefore, these two also don’t heat up as quickly as other metals might, allowing them to transport a higher current for longer. Copper in particular has a relatively long mean free path: the average distance an electron travels before being scattered.

In superconductors, the picture is quite different because quantum physics assumes a more prominent role. There are different types of superconductors according to the theories used to understand how they conduct electricity with zero resistance and how they behave in different external conditions. The electrical behaviour of magnesium diboride, the material used to transport the 20 kA current, is described by Bardeen-Cooper-Schrieffer (BCS) theory.

According to this theory, when certain materials are cooled below a certain temperature, the residual vibrations of their atomic lattice encourage their valence electrons to overcome their mutual repulsion and become correlated, especially in terms of their movement. That is, the electrons pair up.

While individual electrons belong to a class of particles called fermions, these electron pairs – a.k.a. Cooper pairs – belong to another class called bosons. One difference between these two classes is that bosons don’t obey Pauli’s exclusion principle: that no two fermions in the same quantum system (like an atom) can have the same set of quantum numbers at the same time.

As a result, all the electron pairs in the material are now free to occupy the same quantum state – which they do when the material is cooled below its critical temperature. When they do, the pairs collectively make up an exotic state of matter called a Bose-Einstein condensate: the electron pairs now flow through the material as if they were one cohesive liquid.

In this state, even if one pair gets scattered by an impurity, the current doesn’t experience resistance because the condensate’s overall flow isn’t affected. In fact, given that breaking up one pair will cause all other pairs to break up as well, the energy required to break up one pair is roughly equal to the energy required to break up all pairs. This feature affords the condensate a measure of robustness.

But while current can keep flowing through a BCS superconductor with zero resistance, the superconducting state itself doesn’t have infinite persistence. It can break if the material stops being cooled below a specific temperature, called the critical temperature; if the material is too impure, contributing to a sufficient number of collisions to ‘kick’ all the electron pairs out of their condensate reverie; or if the current density crosses a particular threshold.

At the LHC, the magnesium diboride cables will supply current to electromagnets. When a large current flows through an electromagnet’s coils, it produces a magnetic field. The LHC uses a circular arrangement of such magnetic fields to bend the beam of protons it accelerates into a circular path. The more powerful the magnetic field, the more energetic the protons it can keep bending around that path. The current operational field strength is 8.36 tesla, about 128,000-times more powerful than Earth’s magnetic field. The cables will be insulated but they will still be exposed to a large magnetic field.
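As a quick, back-of-the-envelope check on that comparison (assuming Earth’s surface field is taken at the upper end of its typical range of roughly 25-65 microtesla):

dipole_field = 8.36      # tesla, the LHC field strength quoted above
earth_field = 65e-6      # tesla, assumed value for Earth's surface field
print(dipole_field / earth_field)   # ~128,600 - 'about 128,000 times'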

Type I superconductors completely expel an external magnetic field when they transition to their superconducting state. That is, the magnetic field can’t penetrate the material’s surface and enter the bulk. Type II superconductors are slightly more complicated. Below their critical temperature and below a lower critical magnetic field strength, they behave like type I superconductors. At the same temperature but at field strengths between the lower and an upper critical value, they remain superconducting but allow the field to penetrate their bulk to a certain extent. This is called the mixed state.

A hand-drawn phase diagram showing the conditions in which a mixed-state type II superconductor exists. Credit: Frederic Bouquet/Wikimedia Commons, CC BY-SA 3.0

Say a uniform magnetic field is applied over a mixed-state superconductor. The field will plunge into the material’s bulk in the form of vortices. All these vortices will carry the same amount of magnetic flux – loosely, the number of magnetic field lines threading each one – and will repel each other, settling down in a triangular pattern, equidistant from each other.
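In fact, each vortex carries exactly one quantum of magnetic flux. This isn’t spelled out in the paper quoted below, but the standard expression is worth noting:

\Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \text{Wb}

where h is Planck’s constant and 2e is the charge of a Cooper pair – the ‘2’ being a direct signature of the electron pairing described earlier.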

An annotated image of vortices in a type II superconductor. The scale is specified at the bottom right. Source: A set of slides entitled ‘Superconductors and Vortices at Radio Frequency Magnetic Fields’ by Ernst Helmut Brandt, Max Planck Institute for Metals Research, October 2010.

When an electric current passes through this material, the vortices are slightly displaced, and also begin to experience a force proportional to how closely they’re packed together and their pattern of displacement. As a result, to quote from this technical (yet lucid) paper by Praveen Chaddah:

This force on each vortex … will cause the vortices to move. The vortex motion produces an electric field[1] parallel to [the direction of the existing current], thus causing a resistance, and this is called the flux-flow resistance. The resistance is much smaller than the normal state resistance, but the material no longer [has] infinite conductivity.

[1] According to Maxwell’s equations of electromagnetism, a changing magnetic field produces an electric field.

The vortices’ displacement depends on the current density: the greater the number of electrons being transported, the more flux-flow resistance there is. So the magnesium diboride cables can’t simply carry more and more current. At some point, setting aside other sources of resistance, the flux-flow resistance itself will damage the cable.

There are ways to minimise this resistance. For example, the material can be doped with impurities that will ‘pin’ the vortices to fixed locations and prevent them from moving around. However, optimising these solutions for a given magnetic field and other conditions involves complex calculations that we don’t need to get into.

The point is that superconductors have their limits too. And knowing these limits could improve our appreciation for the feats of physics and engineering that underlie achievements like cables being able to transport 124.8 sextillion electrons per second with zero resistance. In fact, according to the CERN press release,

The [line] that is currently being tested is the forerunner of the final version that will be installed in the accelerator. It is composed of 19 cables that supply the various magnet circuits and could transmit intensities of up to 120 kA!

§

While writing this post, I was frequently tempted to quote from Lisa Randall’s excellent book-length introduction to the LHC, Knocking on Heaven’s Door (2011). Here’s a short excerpt:

One of the most impressive objects I saw when I visited CERN was a prototype of LHC’s gigantic cylindrical dipole magnets. Even with 1,232 such magnets, each of them is an impressive 15 metres long and weighs 30 tonnes. … Each of these magnets cost EUR 700,000, making the net cost of the LHC magnets alone more than a billion dollars.

The narrow pipes that hold the proton beams extend inside the dipoles, which are strung together end to end so that they wind through the extent of the LHC tunnel’s interior. They produce a magnetic field that can be as strong as 8.3 tesla, about a thousand times the field of the average refrigerator magnet. As the energy of the proton beams increases from 450 GeV to 7 TeV, the magnetic field increases from 0.54 to 8.3 teslas, in order to keep guiding the increasingly energetic protons around.

The field these magnets produce is so enormous that it would displace the magnets themselves if no restraints were in place. This force is alleviated through the geometry of the coils, but the magnets are ultimately kept in place through specially constructed collars made of four-centimetre thick steel.

… Each LHC dipole contains coils of niobium-titanium superconducting cables, each of which contains stranded filaments a mere six microns thick – much smaller than a human hair. The LHC contains 1,200 tonnes of these remarkable filaments. If you unwrapped them, they would be long enough to encircle the orbit of Mars.

When operating, the dipoles need to be extremely cold, since they work only when the temperature is sufficiently low. The superconducting wires are maintained at 1.9 degrees above absolute zero … This temperature is even lower than the 2.7-degree cosmic microwave background radiation in outer space. The LHC tunnel houses the coldest extended region in the universe – at least that we know of. The magnets are known as cryodipoles to take into account their special refrigerated nature.

In addition to the impressive filament technology used for the magnets, the refrigeration (cryogenic) system is also an imposing accomplishment meriting its own superlatives. The system is in fact the world’s largest. Flowing helium maintains the extremely low temperature. A casing of approximately 97 metric tonnes of liquid helium surrounds the magnets to cool the cables. It is not ordinary helium gas, but helium with the necessary pressure to keep it in a superfluid phase. Superfluid helium is not subject to the viscosity of ordinary materials, so it can dissipate any heat produced in the dipole system with great efficiency: 10,000 metric tonnes of liquid nitrogen are first cooled, and this in turn cools the 130 metric tonnes of helium that circulate in the dipoles.

Featured image: A view of the experimental MgB2 transmission line at the LHC. Credit: CERN.

Where is the coolest lab in the universe?

The Large Hadron Collider (LHC) performs an impressive feat every time it accelerates billions of protons to nearly the speed of light – and not in terms of the energy alone. For example, you release more energy when you clap your palms together once than the LHC imparts to any single proton. The impressiveness arises from the fact that the energy of your clap is distributed among a vast number of atoms whereas the proton’s energy resides in a single particle. It’s impressive because of the energy density.
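To put rough numbers on that – a back-of-the-envelope sketch, using the LHC’s design energy of 7 TeV per proton that comes up later in this post:

ev_in_joules = 1.602e-19                 # joules per electron-volt
proton_energy = 7e12 * ev_in_joules      # 7 TeV expressed in joules
print(proton_energy)                     # ~1.1e-6 J, about a microjoule

Even a conservatively estimated clap – say a few millijoules – releases thousands of times more energy than that microjoule, but spreads it over an astronomical number of atoms rather than concentrating it in one proton.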

A proton like this has a very high kinetic energy. When lots of protons with such amounts of energy come together to form a macroscopic object, the object will have a high temperature. This is the relationship between subatomic particles and the temperature of the object they make up. The outermost layer of a star is so hot because its constituent particles have a very high kinetic energy. Blue hypergiant stars like Eta Carinae, among the hottest stars in the universe, have a surface temperature of around 36,000 K and a surface area some 57,600-times larger than the Sun’s. This is impressive not on the temperature scale alone but also on the energy density scale: Eta Carinae ‘maintains’ a higher temperature over a much larger area.

Now, the following headline and variations thereof have been doing the rounds of late, and they piqued me because I’m quite reluctant to believe they’re true:

This headline, as you may have guessed by the fonts, is from Nature News. To be sure, I’m not doubting the veracity of any of the claims. Instead, my dispute is with the “coolest lab” claim and on entirely qualitative grounds.

The feat mentioned in the headline involves physicists using lasers to cool a tightly controlled group of atoms to near-absolute-zero, causing quantum mechanical effects to become visible on the macroscopic scale – the feature that Bose-Einstein condensates are celebrated for. Most, if not all, atomic cooling techniques endeavour in different ways to extract as much of an atom’s kinetic energy as possible. The more energy they remove, the cooler the indicated temperature.

The reason the headline piqued me was that it trumpets a place in the universe called the “universe’s coolest lab”. Be that as it may (though it may not technically be so; the physicist Wolfgang Ketterle has achieved lower temperatures before), lowering the temperature of an object to a remarkable sliver of a kelvin above absolute zero is one thing, but lowering the temperature over a very large area or volume must be quite another. For example, an extremely cold object inside a tight container the size of a shoebox (I presume) represents far less energy removed than a not-so-extremely-cold volume the size of, say, a star.

This is the source of my reluctance to acknowledge that the International Space Station could be the “coolest lab in the universe”.

While we regularly equate heat with temperature without much consequence to our judgment, the latter can be described by a single number pertaining to a single object whereas the former – heat – is energy flowing from a hotter to a colder region of space (or the other way with the help of a heat pump). In essence, the amount of heat is a function of two differing temperatures. In turn it could matter, when looking for the “coolest” place, that we look not just for low temperatures but for lower temperatures within warmer surroundings. This is because it’s harder to maintain a lower temperature in such settings – for the same reason we use thermos flasks to keep liquids hot: if the liquid is exposed to the ambient atmosphere, heat will flow from the liquid to the air until the two achieve a thermal equilibrium.

An object is said to be cold if its temperature is lower than that of its surroundings. Vladivostok in Russia is cold relative to most of the world’s other cities, but if Vladivostok were the sole human settlement, beyond which no one had ever ventured, the human idea of cold would have to be recalibrated from, say, 10º C to -20º C. The temperature required to achieve a Bose-Einstein condensate is the one at which non-quantum-mechanical effects are so stilled that they stop swamping the much weaker quantum-mechanical effects – it is given by a formula (sketched below) but is typically lower than 1 K.
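For the record, the formula in question – in its textbook form, for an idealised uniform gas of non-interacting bosons rather than any particular experiment – is:

T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}

where m is the mass of each boson, n their number density, k_B Boltzmann’s constant and ζ(3/2) ≈ 2.612. For the dilute atomic gases physicists actually condense, this works out to a microkelvin or so, or less.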

The deep nothingness of space itself has a temperature of 2.7 K (-270.45º C); when all the stars in the universe die and there are no more sources of energy, all hot objects – like neutron stars, colliding gas clouds or molten rain over an exoplanet – will eventually have to cool to 2.7 K to achieve equilibrium (notwithstanding other eschatological events).

This brings us, figuratively, to the Boomerang Nebula – in my opinion the real coolest lab in the universe because it maintains a very low temperature across a very large volume, i.e. its coolness density is significantly higher. This is a protoplanetary nebula, which is a phase in the lives of stars within a certain mass range. In this phase, the star sheds some of its mass that expands outwards in the form of a gas cloud, lit by the star’s light. The gas in the Boomerang Nebula, from a dying red giant star changing to a white dwarf at the centre, is expanding outward at a little over 160 km/s (576,000 km/hr), and has been for the last 1,500 years or so. This rapid expansion leaves the nebula with a temperature of 1 K. Astronomers discovered this cold mass in late 1995.

(“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold”: source.)

The experiment to create a Bose-Einstein condensate in space – or for that matter anywhere on Earth – transpired in a well-insulated container that, apart from the atoms to be cooled, was a vacuum. As such, to the atoms, the container was their universe, their Vladivostok. They were not at risk of the container’s coldness inviting heat from its surroundings and destroying the condensate. The Boomerang Nebula doesn’t have this luxury: as a nebula, it’s exposed to the vast emptiness, and the 2.7 K, of space at all times. So even though the temperature difference between itself and space is only 1.7 K, the nebula has to constantly contend with the equilibrating ‘pressure’ imposed by space.

Further, according to Raghavendra Sahai (as quoted by NASA), one of the nebula’s cold spots’ discoverers, it’s “even colder than most other expanding nebulae because it is losing its mass about 100-times faster than other similar dying stars and 100-billion-times faster than Earth’s Sun.” This implies there is a great mass of gas, and so atoms, whose temperature is around 1 K.

Altogether, the fact that the nebula has maintained a temperature of 1 K for around 1,500 years (plus a 5,000-year offset, to compensate for the distance to the nebula) and over 3.14 trillion km makes it a far cooler “coolest” place, lab, whatever.

When cooling down really means slowing down

Consider this post the latest in a loosely defined series about atomic cooling techniques that I’ve been writing since June 2018.

Atoms can’t run a temperature, but things made up of atoms, like a chair or table, can become hotter or colder. This is because what we observe as the temperature of a macroscopic object is, at the smallest level, the kinetic energy of the atoms it is made up of. If you were to cool such an object, you’d have to reduce the average kinetic energy of its atoms. Indeed, if you had to cool a small group of atoms trapped in a container, you’d simply have to make sure they – all told – slow down.
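The textbook version of this relationship, for the simplest case of an ideal monatomic gas, is:

\frac{3}{2} k_B T = \left\langle \tfrac{1}{2} m v^2 \right\rangle

i.e. the average kinetic energy of an atom is directly proportional to the absolute temperature T (k_B is Boltzmann’s constant). Halve the atoms’ average kinetic energy and you have halved the gas’s temperature.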

Over the years, physicists have figured out more and more ingenious ways to cool atoms and molecules this way to ultra-cold temperatures. Such states are of immense practical importance because at very low energy, these particles (an umbrella term) start displaying quantum mechanical effects, which are too subtle to show up at higher temperatures. And different quantum mechanical effects are useful to create exotic things like superconductors, topological insulators and superfluids.

One of the oldest modern cooling techniques is laser-cooling. Here, a laser beam of a certain frequency is fired at an atom moving towards the beam. Electrons in the atom absorb photons in the beam, acquire energy and jump to a higher energy level. A short amount of time later, the electrons lose the energy by emitting a photon and jump back to the lower energy level. But since the photons are absorbed in only one direction but are emitted in arbitrarily different directions, the atom constantly loses momentum in one direction but gains momentum in a variety of directions (by Newton’s third law). The latter largely cancel themselves out, leaving the atom with considerably lower kinetic energy, and therefore cooler than before.
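The momentum ‘kick’ from each photon is tiny, which is why cooling takes many thousands of absorption-emission cycles. Here’s a rough sketch of the numbers, assuming sodium atoms and the 589-nm light typically used to cool them:

h = 6.626e-34                    # Planck's constant, J*s
wavelength = 589e-9              # m, sodium's cooling transition
m_sodium = 23 * 1.66e-27         # kg, approximate mass of one sodium atom
recoil = h / (wavelength * m_sodium)
print(recoil)                    # ~0.03 m/s of velocity change per photon

Against thermal speeds of several hundred metres per second at room temperature, each ~3 cm/s kick is minuscule – hence the need for tens of thousands of them.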

In collisional cooling, an atom is made to lose momentum by colliding not with a laser beam but with other atoms, which are maintained at a very low temperature. This technique works better if the ratio of elastic to inelastic collisions is much greater than 50. In elastic collisions, the total kinetic energy of the system is conserved; in inelastic collisions, the total energy is conserved but not the kinetic energy alone. In effect, collisional cooling works better if almost all collisions – if not all of them – conserve kinetic energy. Since the other atoms are maintained at a low temperature, they have little kinetic energy to begin with. So collisional cooling works by bouncing warmer atoms off of colder ones such that the colder ones take away some of the warmer atoms’ kinetic energy, thus cooling them.

In a new study, a team of scientists from MIT, Harvard University and the University of Waterloo reported that they were able to cool a pool of NaLi diatoms (molecules with only two atoms) this way to a temperature of 220 nK. That’s 220-billionths of a kelvin, about 12-million-times colder than deep space. They achieved this feat by colliding the warmer NaLi diatoms with five-times as many colder Na (sodium) atoms through two cycles of cooling.
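That ‘12-million-times’ figure follows directly from the numbers, using the 2.7 K of deep space as the reference:

deep_space = 2.7          # kelvin, roughly the temperature of deep space
nali_temp = 220e-9        # kelvin, the temperature reported in the study
print(deep_space / nali_temp)   # ~1.2e7, i.e. about 12 million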

Their paper, published online on April 8 (preprint here), indicates that their feat is notable for three reasons.

First, it’s easier to cool particles (atoms, ions, etc.) in which as many electrons as possible are paired to each other. A particle in which all electrons are paired is called a singlet; ones that have one unpaired electron each are called doublets; those with two unpaired electrons – like NaLi diatoms – are called triplets. Doublets and triplets can also absorb and release more of their energy by modifying the spins of individual electrons, which messes with collisional cooling’s need to modify a particle’s kinetic energy alone. The researchers from MIT, Harvard and Waterloo overcame this barrier by applying a ‘bias’ magnetic field across their experiment’s apparatus, forcing all the particles’ spins to align along a common direction.

Second: Usually, when Na and NaLi come in contact, they react and the NaLi molecule breaks down. However, the researchers found that in the so-called spin-polarised state, the Na and NaLi didn’t react with each other, preserving the latter’s integrity.

Third, and perhaps most importantly, this is not the coldest temperature to which we have been able to cool quantum particles, but it still matters because collisional cooling offers unique advantages that make it attractive for certain applications. Perhaps the most well-known of them is quantum computing. Simply speaking, physicists prefer ultra-cold molecules over atoms for use in quantum computers because they can control molecules more precisely than they can the behaviour of atoms. But molecules that have doublet or triplet states or are otherwise reactive can’t be cooled to a few billionths of a kelvin with laser-cooling or other techniques. The new study shows they can, however, be cooled to 220 nK using collisional cooling. The researchers predict that in future, they may be able to cool NaLi molecules even further with better equipment.

Note that the researchers didn’t cool the NaLi molecules from room temperature to 220 nK but from 2 µK. Nonetheless, their achievement remains impressive because there are other well-established techniques to cool atoms and molecules from room temperature to a few microkelvin. The lower temperatures are harder to reach.

One of the researchers involved in the current study, Wolfgang Ketterle, is celebrated for his contributions to understanding and engineering ultra-cold systems. He led an effort in 2003 to cool sodium atoms to 0.5 nK – a record. He, Eric Cornell and Carl Wieman won the Nobel Prize for physics two years before that: Cornell, Wieman and their team created the first Bose-Einstein condensate in 1995, and Ketterle created ‘better’ condensates that allowed for closer inspection of their unique properties. A Bose-Einstein condensate is a state of matter in which multiple particles called bosons are ultra-cooled in a container, at which point they occupy the same quantum state – something they don’t do in nature (even as they comply with the laws of nature) – and give rise to strange quantum effects that can be observed without a microscope.

Ketterle’s attempts make for a fascinating tale; I collected some of them plus some anecdotes together for an article in The Wire in 2015, to mark the 90th year since Albert Einstein had predicted their existence, in 1924-1925. A chest-thumper might be cross that I left Satyendra Nath Bose out of this citation. It is deliberate. Bose-Einstein condensates are named for their underlying theory, called Bose-Einstein statistics. But while Bose had the idea for the theory to explain the properties of photons, Einstein generalised it to more particles, and independently predicted the existence of the condensates based on it.

This said, if it is credit we’re hungering for: the history of atomic cooling techniques includes the brilliant but little-known S. Pancharatnam. His work in wave physics laid the foundations of many of the first cooling techniques, and was credited as such by Claude Cohen-Tannoudji in the journal Current Science in 1994. Cohen-Tannoudji would win a piece of the Nobel Prize for physics in 1997 for inventing a technique called Sisyphus cooling – a way to cool atoms by converting more and more of their kinetic energy to potential energy, and then draining the potential energy.

Indeed, the history of atomic cooling techniques is, broadly speaking, a history of physicists uncovering newer, better ways to remove just a little bit more energy from an atom or molecule that’s already lost a lot of its energy. The ultimate prize is absolute zero, the lowest temperature possible, at which the atom retains only the energy it can in its ground state. However, absolute zero is neither practically attainable nor – more importantly – the goal in and of itself in most cases. Instead, the experiments in which physicists have achieved really low temperatures are often pegged to an application, and getting below a particular temperature is the goal.

For example, niobium nitride becomes a superconductor below 16 K (-257º C), so applications using this material prepare to achieve this temperature during operation. For another, as the MIT-Harvard-Waterloo group of researchers write in their paper, “Ultra-cold molecules in the micro- and nano-kelvin regimes are expected to bring powerful capabilities to quantum emulation and quantum computing, owing to their rich internal degrees of freedom compared to atoms, and to facilitate precision measurement and the study of quantum chemistry.”

Atoms within atoms

It’s a matter of some irony that forces that act across larger distances also give rise to lots of empty space – although the more you think about it, the more it makes sense. The force of gravity, for example, can act across millions of kilometres, but this only means two massive objects can still influence each other across this distance instead of having to get closer to do so. Thus, you have galaxies with a lot more space between stars than stars themselves.

The electromagnetic force, like the force of gravity, also follows an inverse-square law: its strength falls off as the square of the distance – but never fully reaches zero. So you can have an atom with a nucleus of protons and neutrons held tightly together but electrons located so far away that each atom is more than 90% empty space.
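In symbols, this is just Coulomb’s law:

F = \frac{1}{4\pi\epsilon_0}\frac{q_1 q_2}{r^2}

where q_1 and q_2 are the two charges and r is the distance between them: F shrinks as r grows, but never quite reaches zero for any finite r.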

In fact, you can use the rules of subatomic physics to make atoms even more vacuous. Electrons orbit the nucleus in an atom at fixed distances, and when an electron gains some energy, it jumps into a higher orbit. Physicists have been able to excite electrons to such high energies that the atom itself becomes thousands of times larger than an atom of hydrogen.
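The scaling behind this is simple: for a hydrogen-like atom, the size grows roughly as the square of the principal quantum number n,

r_n \approx n^2 a_0

where a_0 ≈ 0.053 nm is the Bohr radius. Excite the electron to, say, n = 100 – which experiments have done – and the radius swells to roughly 10,000-times that of ground-state hydrogen, about half a micrometre.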

This is the deceptively simple setting for the Rydberg polaron: the atom inside another atom, with some features added.

In January 2018, physicists from Austria, Brazil, Switzerland and the US reported creating the first Rydberg polaron in the lab, based on theoretical predictions that another group of researchers had advanced in October 2015. The concept, as usual, is far simpler than the execution, so exploring the latter should provide a good sense of the former.

The January 2018 group first created a Bose-Einstein condensate, a state of matter in which a dilute gas of particles called bosons is maintained in an ultra-cold container. Bosons are particles whose quantum spin takes integer values. (Other particles called fermions have half-integer spin). As the container is cooled to near absolute zero, the bosons begin to collectively display quantum mechanical phenomena at the macroscopic scale, essentially becoming a new form of matter and displaying certain properties that no other form of matter has been known to exhibit.

Atoms of strontium-84, -86 and -88 have zero spin, so the physicists used them to create the condensate. Next, they used lasers to bombard some strontium atoms with photons to impart energy to electrons in the outermost orbits (a.k.a. valence electrons), forcing them to jump to an even higher orbit. Effectively, the atom expands, becoming a so-called Rydberg atom[1]. In this state, if the distance between the nucleus and an excited electron is greater than the average distance between the other strontium atoms in the condensate, then some of the other atoms could technically fit into the Rydberg atom, forming the atom-within-an-atom.

[1] Rydberg atoms are called so because many of their properties depend on the value of the principal quantum number, which the Swedish physicist Johannes Robert Rydberg first (inadvertently) described in a formula in 1888.

Rydberg atoms are gigantic relative to other atoms; some are even bigger than a virus, and their interactions with their surroundings can be observed under a simple light microscope. They are relatively long-lived, in that the excited electron decays to its ground state slowly. Astronomers have found them in outer space. However, Rydberg atoms are also fragile: because the electron is already so far from the nucleus, any other particles in the vicinity – even a weak electromagnetic field or a slightly warmer environment – could easily knock the excited electron out of the Rydberg atom and end the Rydberg state.

Some clever physicists took advantage of this property and used Rydberg atoms as sensitive detectors of single photons of light. They won the Nobel Prize for physics for such work in 2012.

However, simply sticking one atom inside a Rydberg atom doth not a Rydberg polaron make. A polaron is a quasiparticle, which means it isn’t an actual particle by itself, as the –on suffix might suggest, but an entity that scientists study as if it were a particle. Quasiparticles are thus useful because they simplify the study of more complicated entities by allowing scientists to apply the rules of particle physics to arrive at equally correct solutions.

This said, a polaron is a quasiparticle built around a real particle. Specifically, physicists describe the properties and behaviour of electrons inside a solid as polarons because, as the electrons interact with the atomic lattice, they behave in ways that bare electrons usually don’t. So a polaron bundles the electron and its interactions with the surrounding atoms into a single object of study.

Similarly, a Rydberg polaron is formed when the electron inside the Rydberg atom interacts with the strontium atoms trapped inside its orbit. While an atom within an atom is cool enough, the January 2018 group wanted to create a Rydberg polaron because it’s considered to be a new state of matter – and they succeeded. The physicists found that the excited electron did develop a loose interaction with the strontium atoms lying between itself and the Rydberg atom’s nucleus – so loose that even as they interacted, the electron could still remain part of the Rydberg atom without getting kicked out.

In effect, since the Rydberg atom and the strontium atoms inside it influence each other’s behaviour, they altogether made up one larger complicated assemblage of protons, neutrons and electrons – a.k.a. a Rydberg polaron.

The science in Netflix’s ‘Spectral’

I watched Spectral, the movie that released on Netflix on December 9, 2016, after Universal Studios got cold feet about releasing it on the big screen – the same place where a previous offering, Warcraft, had been gutted. Spectral is sci-fi and has a few great moments but mostly it’s bland and begging for some tabasco. The premise: an elite group of American soldiers deployed in Moldova come upon some belligerent ghost-like creatures in a city they’re fighting in. They’ve no clue how to stop them, so they fly in an engineer from DARPA to consult – the same guy who built the goggles that detected the creatures in the first place. Together, they do things. Now, I’d like to talk about the science in the film and not the plot itself, though the former feeds the latter.

SPOILERS AHEAD

A scene from the film ‘Spectral’ (2016). Source: Netflix

Towards the middle of the movie, the engineer realises that the ghost-like creatures have the same limitations as – wait for it – a Bose-Einstein condensate (BEC). They can pass through walls but not ceramic or heavy metal (not the music), they rapidly freeze objects in their path, and conventional weapons, typically projectiles of some kind, can’t stop them. Frankly, it’s fabulous that Ian Fried, the film’s writer, thought to use creatures made of BECs as villains.

A BEC is an exotic state of matter in which a group of ultra-cold particles condense into a superfluid (i.e., it flows without viscosity). Once a BEC forms, a subsection of a BEC can’t be removed from it without breaking the whole BEC state down. You’d think this makes the BEC especially fragile – because it’s susceptible to so many ‘liabilities’ – but it’s the exact opposite. In a BEC, the energy required to ‘kick’ a single particle out of its special state is equal to the energy that’s required to ‘kick’ all the particles out, making BECs as a whole that much more durable.

This property is apparently beneficial for the creatures of Spectral, and that’s where the similarity ends because BECs have other properties that are inimical to the portrayal of the creatures. Two immediately came to mind: first, BECs are attainable only at ultra-cold temperatures; and second, the creatures can’t be seen by the naked eye but are revealed by UV light. There’s a third and relevant property but which we’ll come to later: that BECs have to be composed of bosons or bosonic particles.

It’s not clear why Spectral‘s creatures are visible only when exposed to light of a certain kind. Clyne, the DARPA engineer, says in a scene, “If I can turn it inside out, by reversing the polarity of some of the components, I might be able to turn it from a camera [that, he earlier says, is one that “projects the right wavelength of UV light”] into a searchlight. We’ll [then] be able to see them with our own eyes.” However, the documented ability of BECs to slow down light to a great extent (5.7-million times more than lead can, in certain conditions) should make them appear extremely opaque. More specifically, while a BEC can be created that is transparent to a very narrow range of frequencies of electromagnetic radiation, it will stonewall all frequencies outside of this range on the flipside. That the BECs in Spectral are opaque to a single frequency and transparent to all others is weird.

Obviating the need for special filters or torches to be able to see the creatures simplifies Spectral by removing one entire layer of complexity. However, it would remove the need for the DARPA engineer also, who comes up with the hyperspectral camera and, its inside-out version, the “right wavelength of UV” searchlight. Additionally, the complexity serves another purpose. Ahead of the climax, Clyne builds an energy-discharging gun whose plasma-bullets of heat can rip through the BECs (fair enough). This tech is also slightly futuristic. If the sci-fi/futurism of the rest of Spectral leading up to that moment (when he invents the gun) was absent, then the second-half of the movie would’ve become way more sci-fi than the first-half, effectively leaving Spectral split between two genres: sci-fi and wtf. Thus the need for the “right wavelength of UV” condition?

Now, to the third property. Not all particles can be used to make BECs. The BEC’s two predictors, Satyendra Nath Bose and Albert Einstein, were working (on paper) with kinds of particles since called bosons. In nature, elementary bosons are force-carriers, acting on the matter-making particles called fermions. A more technical distinction between them is that the behaviour of bosons is explained using Bose-Einstein statistics while the behaviour of fermions is explained using Fermi-Dirac statistics. And only Bose-Einstein statistics predicts the existence of states of matter called condensates, not Fermi-Dirac statistics.

(Aside: Clyne, when explaining what BECs are in Spectral, says its predictors are “Nath Bose and Albert Einstein”. Both ‘Nath’ and ‘Bose’ are surnames in India, so “Nath Bose” is both anyone and no one at all. Ugh. Another thing is I’ve never heard anyone refer to S.N. Bose as “Nath Bose”, only ‘Satyendranath Bose’ or, simply, ‘Satyen Bose’. Why do Clyne/Fried stick to “Nath Bose”? Was “Satyendra” too hard to pronounce?)

All particles constitute a certain amount of energy, which under some circumstances can increase or decrease. However, the increments of energy in which this happens are well-defined and fixed (hence the ‘quantum’ of quantum mechanics). So, for an oversimplified example, a particle can be said to occupy energy levels of 2, 4 or 6 units but never of 1, 2.5 or 3 units. Now, when a very-low-density collection of bosons is cooled to an ultra-cold temperature (typically a few millionths of a kelvin or cooler), the bosons increasingly prefer occupying fewer and fewer energy levels. At one point, they will all occupy a single, common level – with no upper limit on how many of them can crowd into it. (In technical parlance, the wavefunctions of all the bosons will merge.)

When this condition is achieved, a BEC will have been formed. And in this condition, even if a new boson is added to the condensate, it will be forced into occupying the same level as every other boson in the condensate. This condition is also off-limits for all fermions – except in very special circumstances, circumstances whose exceptionalism perhaps makes way for Spectral‘s more fantastic condensate-creatures. We know of one such circumstance: superconductivity.

In a superconducting material, electrons flow without any resistance whatsoever at very low temperatures. The most widely applied theory of superconductivity interprets this flow as being that of a superfluid, and the ‘sea’ of electrons flowing as such to be a BEC. However, electrons are fermions. To overcome this barrier, Leon Cooper proposed in 1956 that the electrons didn’t form a condensate straight away but that there was an intervening state called a Cooper pair. A Cooper pair is a pair of electrons that have become bound, overcoming their like-charge repulsion thanks to vibrations of the atoms of the superconducting metal surrounding them. The electrons in a Cooper pair also can’t easily quit their embrace because, once they become bound, the total energy of the pair is lower than that of the two electrons moving separately – so breaking them apart requires an input of energy.

Could Spectral‘s creatures have represented such superconducting states of matter? It’s definitely science fiction because it’s not too far beyond the bounds of what we know about BEC today (at least in terms of a concept). And in being science fiction, Spectral assumes the liberty to make certain leaps of reasoning – one being, for example, how a BEC-creature is able to ram against an M1 Abrams and still not dissipate. Or how a BEC-creature is able to sit on an electric transformer without blowing up. I get that these in fact are the sort of liberties a sci-fi script is indeed allowed to take, so there’s little point harping on them. However, that Clyne figured the creatures ought to be BECs prompted way more disbelief than anything else because BECs are in the here and the now – and they haven’t been known to behave anything like the creatures in Spectral do.

For some, this information might even help decide if a movie is sci-fi or fantasy. To me, it’s sci-fi.

SPOILERS END

On the more imaginative side of things, Spectral also dwells for a bit on how these creatures might have been created in the first place and how they’re conscious. Any answers to these questions, I’m pretty sure, would be closer to fantasy than to sci-fi. For example, I wonder how the computing capabilities of a very large neural network seen at the end of the movie (not a spoiler, trust me) were available to the creatures wirelessly, or where the power source was that the soldiers were actually after. Spectral does try to skip the whys and hows by having Clyne declare, “I guess science doesn’t have the answer to everything” – but you’re just going “No shit, Sherlock.”

His character is, as this Verge review puts it, exemplarily shallow while the movie never suggests before the climax that science might indeed have all the answers. In fact, the movie as such, throughout its 108 minutes, wasn’t that great for me; it doesn’t ever live up to its billing as a “supernatural Black Hawk Down“. You think about BHD and you remember it being so emotional – Spectral has none of that. It was just obviously more fun to think about the implications of its antagonists being modelled after a phenomenon I’ve often read/written about but never thought about that way.

Relativity’s kin, the Bose-Einstein condensate, is 90 now

Excerpt:

Over November 2015, physicists and commentators alike the world over marked 100 years since the conception of the theory of relativity, which gave us everything from GPS to black holes, and described the machinations of the universe at the largest scales. Despite many struggles by the greatest scientists of our times, the theory of relativity remains incompatible with quantum mechanics, the rules that describe the universe at its smallest, to this day. Yet it persists as our best description of the grand opera of the cosmos.

Incidentally, Einstein wasn’t a fan of quantum mechanics because of its occasional tendencies to violate the principles of locality and causality. Such violations resulted in what he called “spooky action at a distance”, where particles behaved as if they could communicate with each other faster than the speed of light would have it. It was weirdness the likes of which his conception of gravitation and space-time didn’t have room for.

As it happens, 2015 also marks another milestone, also involving Einstein’s work – as well as the work of an Indian scientist: Satyendra Nath Bose. It’s been 20 years since physicists realised the first Bose-Einstein condensate, which has proved to be an exceptional as well as quirky testbed for scientists probing the strange implications of a quantum mechanical reality.

Its significance today can be understood in terms of three ‘periods’ of research that contributed to it: 1925 onward, 1975 onward, and 1995 onward.

Read the full piece here.

 

The Nobel intent

A depiction of Alfred Nobel in the Nobel Museum in Stockholm. Credit: sol_invictus/Flickr, CC BY 2.0

About three weeks from now, the Nobel Foundation will announce the winners of the 2015 Nobel Prizes. Every year, commentators, opinionators and enthusiasts try to guess who will win the awards – some of them have become famous because they’ve been able to guess the winners with uncanny accuracy. However, as it happens, the prizewinners’ profiles have sometimes exposed patterns that tell us how they might have been selected over others. For example, winners of the physics prize have also typically been awarded the Wolf Prize. For another, as a recent study showed, winners of the medicine and physiology prizes seem to have had similar qualitative preferences for their inter-institutional collaborations.

More light is likely to be shed on its opaque selection process by the Nobel Foundation’s decision to open up its archives and reveal the names of not just all the nominees but also the nominators who got those names on the rosters each year. The complete list for all prizes – except economics – awarded between 1901 and 1964 is now available for the first time. The lists for awards given from 1965 onward are not visible because they’re sealed for 50 years. With this information, the question of “Who nominated whom?” is worth asking not just for trivia’s sake but also because it throws up clues about the politics behind the decisions, the kinds of names that were ignored for the prizes, why they were ignored, and how the underpinning rationale has changed through various social periods.

There are three famous examples with which to illustrate these issues.

Mohandas Gandhi

The first is of M.K. Gandhi. The Nobel Committee admitted in 2001 that overlooking Gandhi had been one of its most infamous mistakes. In 1937, out of a total of 63 nominations by prominent people that year, Gandhi received his first: from Ole Colbjørnsen, a Norwegian politician. Colbjørnsen would nominate Gandhi in 1938 and 1939 as well. After that, Gandhi’s name reappears among the nominees in 1947, put there by G.B. Pant, B.G. Kher and Mavalankar, and in 1948, this time with the endorsement of Frede Castberg (a Norwegian jurist), six professors of the University of Bordeaux, five from Columbia University, the American Friends Service Committee, Christian Oftedal (a Norwegian politician) and the American economist Emily Greene Balch. Gandhi was assassinated in January 1948, and since the Foundation doesn’t allow posthumous awards, his ‘case’ ended that year.

The winners in the years he was nominated were:

  • 1937 – Robert Cecil
  • 1938 – Nansen International Office for Refugees
  • 1939 – No winner
  • 1947 – AFSC and Friends Service Council
  • 1948 – No winner

The committee declined to award the prize in 1948 because “there was no suitable living candidate”. This was with reference to Gandhi, who may have received the prize had he not been killed that year. There have also been some discussions about whether the committee could have made an exception for Gandhi and awarded the prize posthumously, especially since the nominations had arrived a few days before his death and because his death was quite unexpected (incidentally, until 1974 the statutes allowed posthumous awards if the awardee had been alive at the time of nomination). On the other hand, even if these arguments had been taken seriously, they wouldn’t have fetched the Peace Prize for Gandhi – why he wasn’t chosen alludes to a different issue.

The nomination process is essentially one of filtering, and though it differs for each prize, all versions are variations of the following: some 3,000 individuals around the world are asked to send in their preliminary nominations, from which the Nobel Committee filters out an order of magnitude fewer names and passes them on to the relevant institutions. Finally, the institutions, represented by members on the committee, vote on the day of the prize, with the result announced immediately after the counting. The person, persons or institution with the most votes wins the prize. There is a distinct committee for each of the prizes.

The number of nominators increases every year – to also include the previous year’s winners, for one – so the names of the first winners were essentially sourced from a handful of individuals.

In 1999, Øyvind Tønnesson, then nobelprize.org’s Peace Editor, wrote that in Gandhi’s time, the members of the committee weren’t in favour of him for two reasons. First, many of them couldn’t help but blame Gandhi for some of the incidents of violence in India during his supposedly peaceful resistance, going as far as to claim he should have known that his actions would precipitate violence – for example, and especially, the Chauri Chaura incident in 1922. Second, as Tønnesson wrote, the members preferred awardees “who could serve as moral and religious symbols in a world threatened by social and ideological conflicts”, and on that note were opposed to the political implications of Gandhi’s movement – especially his role in effectuating the Partition as well as his inability to quell the widespread violence that followed.

Oddly enough, the Nobel Peace Prize is essentially a political prize, and its credibility often can’t be dissociated from the clout of the members of the voting committee. In fact, it and the Literature Prize have often been the subject of controversy simply by illustrating the linguistic and cultural differences between the Scandinavian electors and their multitudes of candidates. In 1965, U Thant, then the Secretary-General of the United Nations, was not given the award because the chair of the Nobel Committee at the time, Gunnar Jahn, was opposed to him, even though a majority had favoured Thant for defusing the Cuban missile crisis. One plausible reason that has been advanced, based on Jahn’s track record as chair, was that Thant was only doing his duty and that none of his initiatives to secure peace in the world stepped beyond that ambit – contrary, in Jahn’s opinion, to the actions of the recipients of the 1947 Peace Prize. Another incident betrayed how inordinate Jahn’s influence was, despite all assurances that the selection process was democratic: he threatened to resign if Linus Pauling wasn’t awarded the Peace Prize in 1963, even though the majority had voted against the chemist.

Another contention has centred on the measures of worthiness. Why can’t the Nobel Prize be awarded to more than three people at a time? Why is the gap between the award-winning work being done and the award being given so large? And on what grounds, precisely, will each prospective laureate be judged? In the case of the 2013 Nobel Prize for physics, Peter Higgs and François Englert were named the recipients for work done 49 years earlier, in 1964, even as four others who’d done the same work that year were ignored. Jorge Luis Borges was repeatedly overlooked for the Literature Prize, with rumours abounding that the committee disapproved of his conservative political views and of his having accepted a prize from the Chilean dictator Augusto Pinochet. On the other hand, some of the greatest writers in history have been politically motivated to produce their best works, so in not specifying the bases on which candidates can be rejected, the Nobel Committee makes the Literature Prize an exercise in winning the approval of a group of Scandinavians who may or may not have a sound knowledge of non-European politics.

Meghnad Saha

Meghnad Saha was an astrophysicist known for an eponymous equation that allows astronomers to determine how ionised the various elements in a star are, based on its temperature. Saha first published his results in 1920; they were built upon by Irving Langmuir in 1923, and the equation has since also been known as the Saha-Langmuir equation. Presumably for this work, Saha was nominated for the Physics Prize by Debendra Bose and Sisir Mitra in 1930, by Arthur Compton in 1937, by Mitra again in 1939, by Compton again in 1940, and by Mitra again in 1951* and 1955. On February 16, 1956, Saha passed away.
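
For those curious about what the equation actually says, here is its standard modern textbook form (a rendering of my own, not necessarily Saha’s original 1920 notation):

\frac{n_{i+1}\,n_e}{n_i} \;=\; \frac{2g_{i+1}}{g_i}\left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2} e^{-\chi_i/k_B T}

Here, n_i and n_{i+1} are the number densities of atoms in two successive ionisation states, n_e is the electron number density, g_i and g_{i+1} are the statistical weights of the two states, χ_i is the energy needed to ionise the atom further, T is the temperature, m_e is the electron’s mass, k_B is Boltzmann’s constant and h is Planck’s constant. The exponential term means the degree of ionisation is exquisitely sensitive to temperature – which is what lets astronomers infer a star’s temperature and composition from the strengths of the lines in its spectrum.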

While his equation has since found applications in other high-energy physics contexts, at the time of its conception it was advertised as a tool for astrophysics. In that context, Ralph Fowler and Edward Arthur Milne spotted a shortcoming among Saha’s assumptions in 1923 and improved the equation to fix its consequences. Even so, there appear to have been some misconceptions in the wider astrophysics community, especially in Europe, about who was the originator – not of the equation but of the more important underlying theory, which Saha called the theory of selective radiation pressure. In 1917, he was financially strained and faced a disappointing prospect: the paper he’d sent to the Astrophysical Journal detailing the theory couldn’t be printed unless he bore some of the printing costs, which was out of the question. So he had the paper published in the Journal of the Department of Science at Calcutta University instead, “which had no circulation worth mentioning”.

To quote from the Vigyan Prasar archives, which in turn quotes from Saha himself,

“… I might claim to be the originator of the Theory of Selective Radiation Pressure, though on account of discouraging circumstances, I did not pursue the idea to develop it. E.A. Milne apparently read a note of mine in Nature 107, 489 (1921) because in his first paper on the subject ‘Astrophysical Determination of Average of an Excited Calcium Atom’, in Month. Not. R. Ast. Soc., Vol.84, he mentioned my contribution in a footnote, though nobody appears to have noticed. His exact words are: ‘These paragraphs develop ideas originally put forward by Saha’.”

Later in the same article, now quoting one of Saha’s students, Daulat Kothari:

It is pertinent to remark that the ionisation theory was formulated by Saha working by himself in Calcutta, and the paper quoted above was communicated by him from Calcutta to the Philosophical Magazine – incorrect statements to the contrary have sometimes been made. Further papers soon followed. It is not too much to say that the theory of thermal ionisation introduced a new epoch in astrophysics by providing for the first time, on the basis of simple thermodynamic consideration and elementary concepts of the quantum theory, a straight forward interpretation of the different classes of stellar spectra in terms of the physical condition prevailing in the stellar atmospheres.

Had Saha’s work appeared in the Astrophysical Journal in 1917, would his fortunes have been different?

And given that the volume of published research has been growing very fast of late, do the prizes remain representative of the research being conducted? This question may be set aside by arguing that the prizes are awarded to remarkable research, of the kind so momentous that it can’t but see the light of day. At the same time, as Saha’s case shows, how much research passes under the Foundation’s radar even when it is most in need of the kind of visibility the award can bring? And perhaps this is the more important question: of the dozens of nominations the Foundation has received every year for the Nobel Prizes, how many lost out because the nominees had published their work in so-called low-impact-factor (i.e. low-visibility) journals?

Satyendra Nath Bose

A third example is of Satyendra Nath Bose. Despite seminal work done in the 1920s, including on a topic that was quickly recognised as radical and employed by multiple Nobel-Prize-winning scientists later, Bose was never awarded the Physics Prize. Perhaps his greatest honour for that work, apart from its contribution to the science itself, was the British physicist Paul A.M. Dirac naming a significant class of fundamental particles after him (bosons). When Higgs and Englert were awarded the Physics Prize in 2013 for having conceived the theory behind the Higgs boson in 1964, a cry went up around India calling for Bose to be recognised for his work and awarded a share of the prize that year. The demand was thoroughly misguided because the Bose-Einstein statistics describe all bosons whereas the Higgs Six had focused on one particular boson. If anything, Bose could have been awarded the prize separately: he was nominated by Kedareswar Banerji in 1956, by Daulat Kothari in 1959 and by S. Bagchi in 1962.

In contrast, the only other Indian to have won the Physics Prize (before 1964), C.V. Raman, was nominated by no fewer than 10 people – including Ernest Rutherford, Louis-Victor de Broglie, Johannes Stark and Niels Bohr, all then or future laureates – in the same year. A case of “who nominated whom”, then? Not quite. Another reported flaw of the Physics Prize has been that it has favoured discoveries over inventions, with the 2014 edition being the most recent of a handful of exceptions to that rule. And among those discoveries, the prize’s selectors have consistently preferred experimental proof. That would explain the unseemly gap between Higgs’s and Englert’s papers in 1964 and their awards in 2013 – and it would also explain why Bose never won the prize himself. Bose’s work in statistics helped explain an already observed anomaly but made no new predictions against which his theory could be tested. In 1924, Einstein would make that prediction: of a unique state of matter since called the Bose-Einstein condensate (BEC). The BEC was first experimentally observed in 1995, fetching three physicists the 2001 Physics Prize. That the statistics would also explain the superfluidity of liquid helium-4 was first suggested by Fritz London in 1938 and developed further by Lev Landau in 1941 (for which Landau won the 1962 Physics Prize).
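
For the record, the statistics in question can be stated in a single line. In their standard textbook form, the average number of bosons occupying a state of energy ε at temperature T is

\langle n(\varepsilon)\rangle \;=\; \frac{1}{e^{(\varepsilon-\mu)/k_B T} - 1}

where μ is the chemical potential and k_B is Boltzmann’s constant. The −1 in the denominator – the Fermi-Dirac counterpart has a +1 instead – is what allows the occupancy of the lowest-energy state to grow without bound as μ approaches that state’s energy, and it was this feature Einstein exploited in 1924-1925 to predict the condensate.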

However, this is not a defence of Bose not winning the prize as much as a cautionary note: the helpful thing to remember is that though the Nobel Prizes may rank among the most prestigious distinctions, they have a character of their own, and human enterprise cannot be divided into Nobel-class and non-Nobel-class, as if it were an aircraft carrier. For among the more than 800 laureates the Nobel Foundation has counted since 1901, the omissions stand out as much as the rest: apart from the few already mentioned, Chinua Achebe, Jocelyn Bell Burnell, Rosalind Franklin, Václav Havel, Lise Meitner, J.R.R. Tolkien and John Updike come to mind. In Bell Burnell’s case, in fact, a man receiving the Physics Prize for a discovery she made stands out as another failure of the Nobel Foundation, and has since become an example often invoked to illustrate the plight of women in science.

*Also in 1951, Saha nominated Arnold Sommerfeld, a German physicist infamous for being overlooked for a Nobel Prize despite having received more than 80 nominations over many years.

The Wire
September 15, 2015

A simplification of superfluidity

“Once people tell me what symmetry the system starts with and what symmetry it ends up with, and whether the broken symmetries can be interchanged, I can work out exactly how many bosons there are and if that leads to weird behavior or not,” Murayama said. “We’ve tried it on more than 10 systems, and it works out every single time.”

– Hitoshi Murayama, co-author of the paper

To those who’ve followed studies on superfluidity and spontaneous symmetry-breaking, a study by Hitoshi Murayama and Haruki Watanabe at UC Berkeley will come as a boon. It simplifies our understanding of symmetry-breaking for practical purposes by unifying the behaviour of supercooled phases of matter – such as BECs and superfluids – and provides a workable formula for the number of Nambu-Goldstone bosons, given the symmetries of the system before and after the symmetry is broken.
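
As best as I can summarise the result from the pre-print, the counting rule looks something like this (the notation here is mine, so treat it as a sketch rather than the paper’s exact statement):

n_{NG} \;=\; n_{BS} - \tfrac{1}{2}\,\mathrm{rank}\,\rho, \qquad \rho_{ab} \;\equiv\; -\frac{i}{\Omega}\,\langle 0|[Q_a, Q_b]|0\rangle

Here n_{BS} is the number of spontaneously broken symmetry generators Q_a, Ω is the system’s volume and ρ is a matrix built from the ground-state expectation values of the commutators of the broken charges. When ρ vanishes – as it must in Lorentz-invariant theories – you recover the familiar rule of one Nambu-Goldstone boson per broken generator. When it doesn’t, pairs of broken generators team up to produce a single mode with a quadratic dispersion relation, which is what happens in ferromagnets and in some superfluid phases.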

This is the R&D article that serves as a lead-in to the issue.

This is a primer on spontaneous symmetry-breaking (and the origins of the Higgs boson).

Finally, and importantly, the pre-print paper (from arXiv) can be viewed here. Caution: don’t open the paper if you’re not seriously good at math.