Finding, and losing, Majorana

I’m looking forward to breaking down and understanding a new paper in Physical Review B soon – the sort of work in condensed-matter physics that’s complex enough to warrant a week-long dive into the subject but not so complex as to leave a non-expert enthusiast (such as myself) eventually stranded in a swamp of mathematical intricacies. But before I do that, I thought I should make a note of how differently the paper’s significance has been presented by its publisher and by its authors. The American Physical Society, which publishes Physical Review B, tweeted this on June 21:

On the same day, both Microsoft (where the paper’s authors are employed as researchers) and a slew of popular science outlets, including Popular Science (which doesn’t once say “Majorana”), published articles claiming the tech company had achieved, in its own words, the “first milestone towards a quantum supercomputer”.

The existence of Majorana zero modes does lead to the possibility of a quantum computer that uses topological qubits as its basic information-bearing units (the way bits, realised in semiconductors, are for a classical computer). But we don’t even have a quantum computer yet, and here we already have reports about a quantum supercomputer well in the future. I understand that quantum computing is regularly in the news now, that Microsoft itself is calling the new study a step towards a supercomputing version of such a device, and that doing so is a sure-shot way to draw public attention towards the work.

But something about looking away from the past – from the long quest to observe these states in different intricately engineered systems – in order to focus on the future sits ill with me. That physicists have finally found a way that could work should be the headline, if only to hang on to the idea that Majorana modes are valuable in more ways than as a route to a quantum supercomputer, as well as to commemorate – in a manner of speaking – what physicists of the past did and didn’t get right, especially when they didn’t have the tools and the knowledge that researchers do today.

It also matters that a private technology company is undertaking this research. The Microsoft researchers published their results as a scientific paper, but what’s to say a different private entity won’t uncover some important bit of physics, not publish any papers about it, proceed straight to applying it in some lucrative technology, and keep its findings under wraps? I imagine that, on some epistemic spectrum, knowledge of the natural universe seamlessly transforms at some point into the know-how of building a highly profitable (or highly destructive, for that matter) machine. Yet some knowledge of the former variety belongs with the people at large, even if knowledge of the latter kind need not.

Part of the issue here is that the study of topological phases of matter has progressed almost in step with, and oftentimes been motivated by challenges in, efforts to build a better quantum computer. This is a good thing – for privately employed researchers to advance science, even if in the pursuit of profit – but the resulting scientific knowledge eventually has to come out and be made available as part of the public commons. Microsoft did that (by publishing an open-access paper in Physical Review B); I’m disappointed that some of the science journalists who took over at that point, in efforts to take that knowledge to the people at large, fell short.

When cooling down really means slowing down

Consider this post the latest in a loosely defined series about atomic cooling techniques that I’ve been writing since June 2018.

Atoms can’t run a temperature, but things made up of atoms, like a chair or table, can become hotter or colder. This is because what we observe as the temperature of macroscopic objects is, at the smallest level, the kinetic energy of the atoms they are made up of. If you were to cool such an object, you’d have to reduce the average kinetic energy of its atoms. Likewise, if you had to cool a small group of atoms trapped in a container, you’d simply have to make sure that – all told – they slow down.

Over the years, physicists have figured out more and more ingenious ways to cool atoms and molecules this way to ultra-cold temperatures. Such states are of immense practical importance because at very low energy, these particles (an umbrella term) start displaying quantum mechanical effects, which are too subtle to show up at higher temperatures. And different quantum mechanical effects are useful to create exotic things like superconductors, topological insulators and superfluids.

One of the oldest modern cooling techniques is laser-cooling. Here, a laser beam of a certain frequency is fired at an atom moving towards the beam. Electrons in the atom absorb photons in the beam, acquire energy and jump to a higher energy level. A short while later, the electrons lose the energy by emitting a photon and drop back to the lower energy level. Since the photons are absorbed from only one direction but are emitted in arbitrarily different directions, the atom steadily loses momentum in one direction while gaining momentum kicks in a variety of directions (by Newton’s third law). The latter largely cancel each other out, leaving the atom with considerably lower kinetic energy, and therefore cooler than before.
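This momentum bookkeeping is easy to caricature in code. Here’s a minimal one-dimensional sketch with made-up, unit-sized photon kicks rather than real atomic numbers: absorption always pushes against the atom’s motion, while emission kicks point in a random direction and average out.

```python
import random

# Toy 1D picture of laser cooling. Absorption kicks always oppose the atom's
# motion (the beam points against it); emission kicks point in a random
# direction and so average to zero. Units and numbers are purely illustrative.

random.seed(0)

def cool(initial_momentum=1000.0, kick=1.0, cycles=5000):
    p = initial_momentum
    for _ in range(cycles):
        if p <= 0:                           # the beam only slows atoms moving towards it
            break
        p -= kick                            # absorption: always against the motion
        p += kick * random.choice([-1, 1])   # emission: random direction
    return p

print("momentum before:", 1000.0, "after:", cool())
```

In a real experiment the laser’s frequency is detuned so that, via the Doppler effect, mostly the atoms moving towards the beam absorb photons; the `if p <= 0` check above is a crude stand-in for that.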

In collisional cooling, an atom is made to lose momentum by colliding not with a laser beam but with other atoms, which are maintained at a very low temperature. This technique works better if the ratio of elastic to inelastic collisions is much greater than 50. In elastic collisions, the total kinetic energy of the system is conserved; in inelastic collisions, the total energy is conserved but not the kinetic energy alone. In effect, collisional cooling works better if almost all collisions – if not all of them – conserve kinetic energy. Since the other atoms are maintained at a low temperature, they have little kinetic energy to begin with. So collisional cooling works by bouncing warmer atoms off of colder ones such that the colder ones take away some of the warmer atoms’ kinetic energy, thus cooling them.
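To see why elastic collisions do the cooling, here’s the textbook one-dimensional elastic-collision formula applied to a warm NaLi molecule hitting a cold Na atom. The masses are rough (about 30 u and 23 u) and the velocities are arbitrary; the point is only that kinetic energy flows from the warm particle to the cold one while the total stays fixed.

```python
# A single head-on elastic collision between a warm NaLi molecule and a cold
# Na atom, in 1D. Elastic means total kinetic energy is conserved, so whatever
# the molecule loses, the atom gains. Masses in atomic mass units (rough),
# velocities in arbitrary units.

m_nali, m_na = 30.0, 23.0      # approximate masses of NaLi and Na
v_nali, v_na = 5.0, 0.5        # warm molecule catching up with a cold atom

def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision."""
    v1_new = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new

ke = lambda m, v: 0.5 * m * v ** 2
v1_new, v2_new = elastic_1d(m_nali, v_nali, m_na, v_na)

print("NaLi kinetic energy: %.1f -> %.1f" % (ke(m_nali, v_nali), ke(m_nali, v1_new)))
print("total kinetic energy: %.1f -> %.1f"
      % (ke(m_nali, v_nali) + ke(m_na, v_na), ke(m_nali, v1_new) + ke(m_na, v2_new)))
```

Run it and the molecule’s kinetic energy drops by more than an order of magnitude in a single bounce while the total stays put – which is the ‘elastic’ part doing the work.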

In a new study, a team of scientists from MIT, Harvard University and the University of Waterloo reported that they were able to cool a pool of NaLi diatoms (molecules with only two atoms) this way to a temperature of 220 nK. That’s 220 billionths of a kelvin, about 12 million times colder than deep space. They achieved this feat by colliding the warmer NaLi diatoms with five times as many colder Na (sodium) atoms through two cycles of cooling.
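To connect this back to ‘cooling means slowing down’, here’s a rough ideal-gas estimate of how slowly a molecule at 220 nK actually moves, using (3/2)·kB·T = (1/2)·m·v² and an approximate NaLi mass of 30 atomic mass units.

```python
import math

# Rough ideal-gas estimate of the typical (rms) speed of a NaLi molecule,
# from (3/2) * k_B * T = (1/2) * m * v^2. The mass is approximate (about
# 30 atomic mass units); the point is the scale, not the precise number.

k_B = 1.380649e-23           # J/K
u = 1.66053906660e-27        # kg per atomic mass unit
m_nali = 30 * u

def rms_speed(T):
    return math.sqrt(3 * k_B * T / m_nali)

print("at 300 K (room temperature): %.0f m/s" % rms_speed(300))
print("at 220 nK: %.4f m/s (roughly a centimetre per second)" % rms_speed(220e-9))
```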

Their paper, published online on April 8 (preprint here), indicates that their feat is notable for three reasons.

First, it’s easier to cool particles (atoms, ions, etc.) in which as many electrons as possible are paired to each other. A particle in which all electrons are paired is called a singlet; ones that have one unpaired electron each are called doublets; those with two unpaired electrons – like NaLi diatoms – are called triplets. Doublets and triplets can also absorb and release more of their energy by modifying the spins of individual electrons, which messes with collisional cooling’s need to modify a particle’s kinetic energy alone. The researchers from MIT, Harvard and Waterloo overcame this barrier by applying a ‘bias’ magnetic field across their experiment’s apparatus, forcing all the particles’ spins to align along a common direction.

Second, when Na and NaLi come in contact, they usually react and the NaLi molecule breaks down. However, the researchers found that in the so-called spin-polarised state, the Na and NaLi didn’t react with each other, preserving the latter’s integrity.

Third, and perhaps most importantly, this is not the coldest temperature to which we have been able to cool quantum particles, but it still matters because collisional cooling offers unique advantages that make it attractive for certain applications. Perhaps the most well-known of them is quantum computing. Simply speaking, physicists prefer ultra-cold molecules over atoms for use in quantum computers because physicists can control molecules more precisely than they can atoms. But molecules that have doublet or triplet states or are otherwise reactive can’t be cooled to a few billionths of a kelvin with laser-cooling or other such techniques. The new study shows they can, however, be cooled to 220 nK using collisional cooling. The researchers predict that in future, they may be able to cool NaLi molecules even further with better equipment.

Note that the researchers didn’t cool the NaLi molecules from room temperature to 220 nK but from 2 µK. Nonetheless, their achievement remains impressive: there are already well-established techniques to cool atoms and molecules from room temperature to a few micro-kelvin; it’s the lower temperatures that are harder to reach.

One of the researchers involved in the current study, Wolfgang Ketterle, is celebrated for his contributions to understanding and engineering ultra-cold systems. He led an effort in 2003 to cool sodium atoms to 0.5 nK – a record. He, Eric Cornell and Carl Wieman won the Nobel Prize for physics two years before that: Cornell, Wieman and their team created the first Bose-Einstein condensate in 1995, and Ketterle created ‘better’ condensates that allowed for closer inspection of their unique properties. A Bose-Einstein condensate is a state of matter in which multiple particles called bosons are ultra-cooled in a container, at which point they occupy the same quantum state – something they don’t do in nature (even as they comply with the laws of nature) – and give rise to strange quantum effects that can be observed without a microscope.

Ketterle’s attempts make for a fascinating tale; I collected some of them plus some anecdotes together for an article in The Wire in 2015, to mark the 90th year since Albert Einstein had predicted their existence, in 1924-1925. A chest-thumper might be cross that I left Satyendra Nath Bose out of this citation. It is deliberate. Bose-Einstein condensates are named for their underlying theory, called Bose-Einstein statistics. But while Bose had the idea for the theory to explain the properties of photons, Einstein generalised it to more particles, and independently predicted the existence of the condensates based on it.

This said, if it is credit we’re hungering for: the history of atomic cooling techniques includes the brilliant but little-known S. Pancharatnam. His work in wave physics laid the foundations of many of the first cooling techniques, and was credited as such by Claude Cohen-Tannoudji in the journal Current Science in 1994. Cohen-Tannoudji would win a piece of the Nobel Prize for physics in 1997 for inventing a technique called Sisyphus cooling – a way to cool atoms by converting more and more of their kinetic energy to potential energy, and then draining the potential energy.

Indeed, the history of atomic cooling techniques is, broadly speaking, a history of physicists uncovering newer, better ways to remove just a little bit more energy from an atom or molecule that’s already lost a lot of its energy. The ultimate prize is absolute zero, the lowest temperature possible, at which the atom retains only its ground-state energy. However, absolute zero is neither practically attainable nor – more importantly – the goal in and of itself in most cases. Instead, the experiments in which physicists have achieved really low temperatures are often pegged to an application, and getting below a particular temperature is the goal.

For example, niobium nitride becomes a superconductor below 16 K (-257° C), so applications using this material need to reach and hold this temperature during operation. For another, as the MIT-Harvard-Waterloo group of researchers write in their paper, “Ultra-cold molecules in the micro- and nano-kelvin regimes are expected to bring powerful capabilities to quantum emulation and quantum computing, owing to their rich internal degrees of freedom compared to atoms, and to facilitate precision measurement and the study of quantum chemistry.”

Making sense of quantum annealing

One of the tougher things about writing and reading about quantum mechanics is keeping up with how the meanings of some words change as they graduate from being used in the realm of classical mechanics – where things are what they look like – to that of the quantum – where we have no idea what the things even are. If we don’t keep up but remain fixated on what a word means in one specific context, we’re likely to experience a cognitive drag that limits our ability to relearn, and reacquire, some knowledge.

For example, teleportation in the classical sense is the complete disintegration of an individual or object in one location in space and its reappearance in another almost instantaneously. In quantum mechanics, teleportation is almost always used to mean the simultaneous realisation of information at two points in space, not necessarily its transportation.

Another way to look at this: to a so-called classicist, teleportation means to take object A, subject it to process B and so achieve C. But when a quantumist enters the picture, claiming to take object A, subject it to a different process B* and so achieve C – and still calls it teleportation – we’re forced to jettison the involvement of process B or B* from our definition of teleportation. Effectively, teleportation to us goes from being A –> B –> C to being just A –> C.

Alfonso de la Fuente Ruiz, then an engineering student at the Universidad de Burgos, Spain, wrote in a 2011 article:

In some way, all methods for annealing, alloying, tempering or crystallisation are metaphors of nature that try to imitate the way in which the molecules of a metal order themselves when magnetisation occurs, or of a crystal during the phase transition that happens for instance when water freezes or silicon dioxide crystallises after having been previously heated up enough to break its chemical bonds.

So put another way, going from A –> B –> C to A –> C would be us re-understanding a metaphor of nature, and maybe even nature itself.

The thing called annealing has a similar curse upon it. In metallurgy, annealing is the process by which a metal is forced to recrystallise by heating it above its recrystallisation temperature and then letting it cool down. This way, the metal’s internal stresses are removed and the material becomes the stronger for it. Quantum annealing, however, is described by Wikipedia as a “metaheuristic”. In computing, a heuristic is a practical technique for finding a good-enough solution when an exact one is out of reach; a metaheuristic is a higher-level strategy that guides or generates such heuristics. What could any of that have to do with the quantum nature of matter?

To understand whatever is happening first requires us to acknowledge that a lot of what happens in quantum mechanics is simply mathematics. This isn’t always because physicists are dealing with unphysical entities; sometimes it’s because they’re dealing with objects that exist in ways that we can’t even comprehend (such as in extra dimensions) outside the language of mathematics.

So, quantum annealing is a metaheuristic technique that helps physicists, for example, look for one specific kind of solution to a problem that has multiple independent variables and a very large number of ways in which they can influence the state of the system. This is a very broad definition. A specific instance where it could be used is to find the ground state of a system of multiple particles. Each particle’s ground state comes to be when that particle has the lowest energy it can have and still exist. When it is supplied a little more energy, such as by heating, it starts to vibrate and move around. When it is cooled, it loses the extra energy and returns to its ground state.

But in a larger system consisting of more than a few particles, a sense of the system’s ground state doesn’t arise simply by knowing what each particle’s ground state is. It also requires analysing how the particles’ interactions with each other modify their individual and cumulative energies. These calculations are performed using matrices with 2^N rows (and as many columns) if there are N particles. It’s easy to see that the calculations can quickly become mind-boggling: if there are 10 particles, the matrix is a giant grid with 1,048,576 cells. To avoid this, physicists take recourse to quantum annealing.
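To see where the 1,048,576 comes from, here’s a minimal numpy sketch that builds such a matrix for a chain of N spin-1/2 particles out of 2×2 building blocks. The particular coupling chosen (a simple nearest-neighbour one) is just for illustration; the point is that the matrix is 2^N × 2^N.

```python
import numpy as np
from functools import reduce

# Each spin-1/2 particle contributes a factor of 2 to the matrix dimension,
# so N particles need a 2^N x 2^N matrix. The coupling used here (a plain
# nearest-neighbour chain) is only an illustration of the bookkeeping.

sz = np.array([[1., 0.], [0., -1.]])   # Pauli-Z for a single spin
id2 = np.eye(2)

def ising_chain(n):
    """2^n x 2^n matrix for a nearest-neighbour chain of n spins."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n - 1):
        ops = [sz if k in (i, i + 1) else id2 for k in range(n)]
        H -= reduce(np.kron, ops)      # -sz_i * sz_{i+1}, embedded in the full space
    return H

for n in (2, 4, 10):
    print(n, "particles ->", ising_chain(n).shape, "=", 4 ** n, "cells")
```

For 10 particles the matrix already has over a million entries; for 30 it would have about 10^18, which is why brute-force calculation stops being an option very quickly.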

In the classical metallurgical definition of annealing, a crystal (object A) is heated beyond its recrystallisation temperature (process B) and then cooled (outcome C). Another way to understand this is to say that for A to transform into C, it must undergo B, and that B has to be a process of heating. However, in the quantum realm, there can be more than one way for A to transform into C. A visualisation of the metallurgical annealing process shows how:

The x-axis marks time, the y-axis marks heat, or energy. The journey of the system from A to C means that, as it moves through time, its energy rises and then falls in a certain way. This is because of the system’s constitution as well as the techniques we’re using to manipulate it. However, say the system included a set of other particles (which don’t change its constitution), and that for those particles to go from A to C required not conventional energising but a different kind of process (call it B*), and that B* is easier to compute when we’re trying to find C.

These processes actually exist in the quantum realm. One of them is called quantum tunneling. When the system – or let’s say a particle in the system – is going downhill from the peak of the energy mountain (in the graph), sometimes it gets stuck in a valley on the way, akin to the system being mostly in its ground state except in one patch, where a particle or some particles have knotted themselves up in a configuration such that they don’t have the lowest energy possible. This happens when the particle finds an energy level on the way down where it goes, “I’m quite comfortable here. If I’m to keep going down, I will need an energy-kick.” Such states are also called metastable states.

In a classical system, the particle will have to be given some extra energy to move up the energy barrier, and then roll on down to its global ground state. In a quantum system, the particle might be able to tunnel through the energy barrier and emerge on the other side. This is thanks to Heisenberg’s uncertainty principle, which states that a particle’s position and momentum (or velocity) can’t both be known simultaneously with arbitrary accuracy. One consequence of this is that, if we know the particle’s velocity with great certainty, then we can only say that the particle will pop up at a given point in space with fractional surety. E.g., “I’m 50% sure that the particle will be in the metastable part of the energy mountain.”

What this also means is that there is a very small, but non-zero, chance that the particle will pop up on the other side of the mountain after having borrowed some energy from its surroundings to tunnel through the barrier.
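How small is that chance? For a single particle meeting a simple rectangular energy barrier, the textbook estimate (nothing specific to annealers) is that the tunneling probability falls off exponentially with the barrier’s width and height, roughly as exp(-2κL) with κ = √(2m(V−E))/ħ. A quick illustration with made-up numbers for an electron:

```python
import math

# Textbook estimate of the chance a single electron tunnels through a
# rectangular energy barrier: T ~ exp(-2 * kappa * L), with
# kappa = sqrt(2 * m * (V - E)) / hbar. Barrier height, particle energy and
# widths below are made-up numbers, chosen only to show how fast the odds fall.

hbar = 1.054571817e-34       # J*s
m_e = 9.1093837015e-31       # kg (electron mass)
eV = 1.602176634e-19         # J per electronvolt

def tunneling_probability(barrier_eV, energy_eV, width_nm):
    kappa = math.sqrt(2 * m_e * (barrier_eV - energy_eV) * eV) / hbar
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (0.1, 0.5, 1.0):   # barrier widths in nanometres
    print("%.1f nm barrier: %.2e" % (width, tunneling_probability(1.0, 0.5, width)))
```

Widen or raise the barrier a little and the probability collapses by orders of magnitude, which is why tunneling only matters at very small scales.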

In most cases, quantum tunneling is understood to be a problem of statistical mechanics. What this means is that it’s not understood at a per-particle level but at the population level. If there are 10 million particles stuck in the metastable valley, and if there is a 1% chance for each particle to tunnel through the barrier and come out the other side, then we might be able to say 1% of the 10 million particles will tunnel; the remaining 99% will be reflected back. There is also a strange energy conservation mechanism at work: the tunnelers will borrow energy from their surroundings and go through while the ones bouncing back will do so at a higher energy than they had when they came in.

This means that in a computer that is solving problems by transforming A to C in the quickest way possible, using quantum annealing to make that journey can be orders of magnitude more effective than taking the classical, thermal route (the computational counterpart of metallurgical annealing), because more particles will be delivered to their ground state and fewer will be left behind in metastable valleys. The annealing itself is a metaphor: just as a piece of metal reorganises itself during annealing, a problematic quantum system resolves itself through quantum annealing.

To be a little more technical: quantum annealing is a set of algorithms that introduces new variables into the system (A) so that, with their help, the algorithms can find a shortcut for A to turn into C.
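In the most common formulation of quantum annealing (the transverse-field one – I’m using it as an illustration, not claiming it’s exactly what any particular machine does), those new variables are spin-flip terms. The computer starts in the easily prepared ground state of a ‘driver’ Hamiltonian and slowly interpolates towards the problem Hamiltonian, H(s) = (1−s)·H_driver + s·H_problem, sweeping s from 0 to 1. A small numpy sketch with three spins and made-up couplings:

```python
import numpy as np
from functools import reduce

# A minimal sketch of a quantum annealing schedule for 3 spins with made-up
# couplings: H(s) = (1 - s) * H_driver + s * H_problem. The driver (a
# transverse field) has a trivially known ground state; the problem
# Hamiltonian encodes the question whose lowest-energy state we actually want.

sx = np.array([[0., 1.], [1., 0.]])   # Pauli-X (flips a spin)
sz = np.array([[1., 0.], [0., -1.]])  # Pauli-Z (reads a spin's orientation)
id2 = np.eye(2)

def embed(op, site, n):
    """Place a single-spin operator at `site` in an n-spin system."""
    return reduce(np.kron, [op if k == site else id2 for k in range(n)])

n = 3
H_driver = -sum(embed(sx, i, n) for i in range(n))
H_problem = -embed(sz, 0, n) @ embed(sz, 1, n) + embed(sz, 1, n) @ embed(sz, 2, n)

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    H = (1 - s) * H_driver + s * H_problem
    print("s = %.2f, lowest energy = %.3f" % (s, np.linalg.eigvalsh(H)[0]))
```

The adiabatic theorem is the reason this works in principle: if the sweep is slow enough, a system that starts in the ground state of H_driver ends up in the ground state of H_problem – which is the answer being sought.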

The world’s most famous quantum annealer is the D-Wave system. Ars Technica wrote this about their 2000Q model in January 2017:

Annealing involves a series of magnets that are arranged on a grid. The magnetic field of each magnet influences all the other magnets—together, they flip orientation to arrange themselves to minimize the amount of energy stored in the overall magnetic field. You can use the orientation of the magnets to solve problems by controlling how strongly the magnetic field from each magnet affects all the other magnets.

To obtain a solution, you start with lots of energy so the magnets can flip back and forth easily. As you slowly cool, the flipping magnets settle as the overall field reaches lower and lower energetic states, until you freeze the magnets into the lowest energy state. After that, you read the orientation of each magnet, and that is the solution to the problem. You may not believe me, but this works really well—so well that it’s modeled using ordinary computers (where it is called simulated annealing) to solve a wide variety of problems.

As the excerpt makes clear, an annealer can be used as a computer if system A is chosen such that it can evolve into different Cs. The more kinds of C that are possible, the more problems A can be used to solve. For example, D-Wave can find better solutions than classical computers can for problems in aerodynamic modelling using quantum annealing – but it still can’t run Shor’s algorithm, the quantum algorithm that could break the encryption schemes widely used to secure data. So the scientists and engineers working on D-Wave will be trying to augment their A such that Shor’s algorithm is also within reach.
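The ‘simulated annealing’ the excerpt mentions in passing is easy to sketch on an ordinary computer. Here’s a minimal version for a small grid of ‘magnets’ (an Ising model), with an arbitrary grid size and cooling schedule: flips that lower the energy are always accepted, flips that raise it are accepted only while the temperature is still high.

```python
import math, random

# Minimal simulated annealing for a grid of "magnets" (an Ising model):
# each spin prefers to align with its neighbours; we cool slowly so the
# grid settles into a low-energy arrangement. Grid size and schedule are
# arbitrary choices for illustration.

random.seed(0)
N = 20
spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]

def energy_change(s, i, j):
    """Energy cost of flipping the spin at (i, j), with wrap-around neighbours."""
    nb = s[(i - 1) % N][j] + s[(i + 1) % N][j] + s[i][(j - 1) % N] + s[i][(j + 1) % N]
    return 2 * s[i][j] * nb

T = 5.0
while T > 0.01:
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        dE = energy_change(spins, i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):  # Metropolis rule
            spins[i][j] *= -1
    T *= 0.95          # cool slowly

print("magnetisation per spin:", sum(map(sum, spins)) / (N * N))
```

Quantum annealing replaces the thermal flips with quantum fluctuations (the tunneling described earlier), but the overall shape of the procedure – a slow settling towards the lowest-energy arrangement – is the same.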

Moreover, because of how 2000Q works, the same solution can be the result of different magnetic configurations – perhaps even millions of them. So apart from zeroing in on a solution, the computer must also figure out the different ways in which the solution can be achieved. But because there are so many possibilities, D-Wave must be ‘taught’ to identify some of them, all of them or a sample of them in an unbiased manner.

Such are the problems that people working at the cutting edge of quantum computing have to deal with these days.

(To be clear: the ‘A’ in the 2000Q is not a system of simple particles as much as it is an array of qubits, which I’ll save for a different post.)

Featured image credit: Engin_Akyurt/pixabay.