Making sense of quantum annealing

One of the tougher things about writing and reading about quantum mechanics is keeping up with how the meaning of some words changes as they graduate from the realm of classical mechanics – where things are what they look like – to that of the quantum – where we have no idea what the things even are. If we don’t keep up but remain fixated on what a word means in one specific context, we’re likely to experience a cognitive drag that limits our ability to relearn, and reacquire, the knowledge.

For example, teleportation in the classical sense is the complete disintegration of an individual or object at one location in space and its reappearance at another almost instantaneously. In quantum mechanics, teleportation almost always means the simultaneous realisation of information at two points in space, not necessarily its physical transportation.

Another way to look at this: to a so-called classicist, teleportation means to take object A, subject it to process B and so achieve C. But when a quantumist enters the picture – taking object A, subjecting it to a different process B*, achieving C and still calling it teleportation – we’re forced to jettison process B (or B*) from our definition of teleportation. Effectively, teleportation goes from being A → B → C to being just A → C.

Alfonso de la Fuente Ruiz, then an engineering student at the Universidad de Burgos, Spain, wrote in a 2011 article:

In some way, all methods for annealing, alloying, tempering or crystallisation are metaphors of nature that try to imitate the way in which the molecules of a metal order themselves when magnetisation occurs, or of a crystal during the phase transition that happens for instance when water freezes or silicon dioxide crystallises after having been previously heated up enough to break its chemical bonds.

So, put another way, going from A → B → C to A → C would be us re-understanding a metaphor of nature, and maybe even nature itself.

The thing called annealing has a similar curse upon it. In metallurgy, annealing is the process by which a metal is forced to recrystallise by heating it above its recrystallisation temperature and then letting it cool down. This relieves the metal’s internal stresses and leaves the material more ductile and easier to work. Quantum annealing, however, is described by Wikipedia as a “metaheuristic”. A heuristic is a practical, rule-of-thumb technique for finding a good-enough solution to a problem; a metaheuristic is a higher-level technique that produces or guides such heuristics. The term is commonly found in the context of computing. What could it have to do with the quantum nature of matter?

To understand what is happening, we first have to acknowledge that a lot of what happens in quantum mechanics is simply mathematics. This isn’t always because physicists are dealing with unphysical entities; sometimes it’s because they’re dealing with objects that exist in ways we can’t comprehend outside the language of mathematics (such as in extra dimensions).

So, quantum annealing is a metaheuristic that helps physicists, for example, look for one specific kind of solution to a problem that has multiple independent variables and a very large number of ways in which those variables can influence the state of the system. This is a very broad definition. A specific instance where it could be used is to find the ground state of a system of many particles. A particle is in its ground state when it has the lowest energy it can have and still exist. When it is supplied a little more energy, such as by heating, it starts to vibrate and move around. When it is cooled, it loses the extra energy and returns to its ground state.

But in a larger system consisting of more than a few particles, a sense of the system’s ground state doesn’t arise simply from knowing what each particle’s ground state is. It also requires analysing how the particles’ interactions with each other modify their individual and cumulative energies. These calculations are performed using matrices with 2^N rows (and as many columns) if there are N particles. It’s easy to see how quickly they become mind-boggling: if there are just 10 particles, the matrix is a 1,024 × 1,024 grid with 1,048,576 cells. To avoid this, physicists take recourse to quantum annealing.
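To get a feel for how fast this blows up, here is a minimal sketch (treating each particle as a two-level system, an assumption I’m making purely for simplicity):

```python
# How the matrix describing an N-particle quantum system grows with N.
# Each particle is treated as a two-level system, so the matrix has
# 2**N rows and 2**N columns.

for n in (2, 10, 20, 30):
    dim = 2 ** n        # number of rows (and columns)
    cells = dim ** 2    # total number of entries in the matrix
    print(f"{n:>2} particles -> {dim:,} x {dim:,} matrix, {cells:,} cells")

# 10 particles -> 1,024 x 1,024 matrix, 1,048,576 cells
# 30 particles -> a matrix with roughly 1.2 x 10^18 cells, hopeless to even store
```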

In the classical metallurgical definition of annealing, a crystal (object A) is heated beyond its recrystallisation temperature (process B) and then cooled (outcome C). Another way to understand this is by saying that for A to transform into C, it must undergo B, and that B has to be a process of heating and cooling. However, in the quantum realm, there can be more than one way for A to transform into C. A visualisation of the metallurgical annealing process shows how:

The x-axis marks time, the y-axis marks heat, or energy. The journey of the system from A to C means that, as it moves through time, its energy rises and then falls in a certain way. This is because of the system’s constitution as well as the techniques we’re using to manipulate it. However, say the system included a set of other particles (that don’t change its constitution), that those particles could go from A to C not through conventional energising but through a different kind of process (B*), and that B* is easier to compute when we’re trying to find C.

Such processes actually exist in the quantum realm. One of them is called quantum tunnelling. When the system – or, let’s say, a particle in the system – is going downhill from the peak of the energy mountain (in the graph), it sometimes gets stuck in a valley on the way: the system is mostly in its ground state except in one patch, where a particle or some particles have knotted themselves into a configuration that doesn’t have the lowest energy possible. This happens when the particle finds an energy level on the way down where it goes, “I’m quite comfortable here. If I’m to keep going down, I will need an energy kick.” Such states are called metastable states.

In a classical system, the particle would have to be given some extra energy to climb over the energy barrier and then roll on down to its global ground state. In a quantum system, the particle might be able to tunnel through the energy barrier and emerge on the other side. This is thanks to Heisenberg’s uncertainty principle, which states that a particle’s position and momentum (or velocity) can’t both be known simultaneously with arbitrary accuracy. One consequence is that if we know the particle’s velocity with great certainty, we can only say with fractional surety where the particle will pop up in spacetime. E.g., “I’m 50% sure that the particle will be in the metastable part of the energy mountain.”
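For reference, the uncertainty principle is usually written as

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}$$

where Δx is the uncertainty in the particle’s position, Δp the uncertainty in its momentum and ħ the reduced Planck constant: squeezing one uncertainty down forces the other one up.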

What this also means is that there is a very small, but non-zero, chance that the particle will pop up on the other side of the mountain after having borrowed some energy from its surroundings to tunnel through the barrier.

In most cases, quantum tunnelling is treated as a problem of statistical mechanics. That is, it’s understood not at a per-particle level but at the population level. If there are 10 million particles stuck in the metastable valley, and if each particle has a 1% chance of tunnelling through the barrier and coming out the other side, then we might say that 1% of the 10 million particles will tunnel through; the remaining 99% will be reflected back. There is also a strange energy-conservation mechanism at work: the tunnellers will borrow energy from their surroundings to go through, while the ones bouncing back will do so at a higher energy than they had when they came in.
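As a toy illustration of that population-level arithmetic (the 1% figure is simply the number used above, not a physical constant):

```python
# Toy population-level view of tunnelling, using the numbers from the text.
n_particles = 10_000_000   # particles stuck in the metastable valley
p_tunnel = 0.01            # assumed per-particle tunnelling probability

tunnelled = n_particles * p_tunnel
reflected = n_particles - tunnelled

print(f"tunnelled: {tunnelled:,.0f}")   # 100,000
print(f"reflected: {reflected:,.0f}")   # 9,900,000 (i.e. 99%)
```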

This means that in a computer that solves problems by transforming A to C in the quickest way possible, using quantum annealing to make that journey can be orders of magnitude more effective than using thermal, metallurgical-style annealing, because more particles will be delivered to their ground state and fewer will be left behind in metastable valleys. The annealing itself is a metaphor: if a piece of metal reorganises itself during annealing, then a problematic quantum system resolves itself through quantum annealing.

To be a little more technical: quantum annealing is a family of algorithms that introduces new variables into the system (A) so that, with their help, the algorithm can find a shortcut for A to turn into C.
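In the standard formulation – the one implemented by hardware annealers like D-Wave’s machines – the ‘new variable’ is a transverse magnetic field that is slowly switched off while the problem’s own energy function is switched on. A common way to write it is

$$H(t) \;=\; A(t)\sum_i \sigma^x_i \;+\; B(t)\left(\sum_i h_i\,\sigma^z_i \;+\; \sum_{i<j} J_{ij}\,\sigma^z_i \sigma^z_j\right)$$

where the bracketed term encodes the problem whose lowest-energy configuration is the answer (C), the first term supplies the quantum fluctuations that allow tunnelling, and A(t) is ramped down to zero while B(t) is ramped up over the course of the anneal.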

The world’s most famous quantum annealer is the D-Wave system. Ars Technica wrote this about their 2000Q model in January 2017:

Annealing involves a series of magnets that are arranged on a grid. The magnetic field of each magnet influences all the other magnets—together, they flip orientation to arrange themselves to minimize the amount of energy stored in the overall magnetic field. You can use the orientation of the magnets to solve problems by controlling how strongly the magnetic field from each magnet affects all the other magnets.

To obtain a solution, you start with lots of energy so the magnets can flip back and forth easily. As you slowly cool, the flipping magnets settle as the overall field reaches lower and lower energetic states, until you freeze the magnets into the lowest energy state. After that, you read the orientation of each magnet, and that is the solution to the problem. You may not believe me, but this works really well—so well that it’s modeled using ordinary computers (where it is called simulated annealing) to solve a wide variety of problems.
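The classical cousin mentioned at the end of the excerpt, simulated annealing, can be sketched in a few lines. The following is a minimal toy version on a small grid of ±1 ‘magnets’ with random couplings and an arbitrary cooling schedule of my own choosing – not D-Wave’s actual algorithm:

```python
import math
import random

# Toy simulated annealing on a small Ising-like grid: each site is a "magnet"
# pointing up (+1) or down (-1), and each pair of neighbouring magnets has a
# random coupling that rewards or penalises agreement. The goal is to find the
# spin configuration with the lowest total energy.

random.seed(0)
N = 6  # 6 x 6 grid

# Random couplings to horizontal and vertical neighbours (the "problem").
J_right = [[random.choice([-1.0, 1.0]) for _ in range(N)] for _ in range(N)]
J_down = [[random.choice([-1.0, 1.0]) for _ in range(N)] for _ in range(N)]

def energy(spins):
    """Total energy of a configuration: sum over neighbouring pairs."""
    e = 0.0
    for i in range(N):
        for j in range(N):
            if j + 1 < N:
                e -= J_right[i][j] * spins[i][j] * spins[i][j + 1]
            if i + 1 < N:
                e -= J_down[i][j] * spins[i][j] * spins[i + 1][j]
    return e

# Start "hot": random orientations and a high temperature, so flips are easy.
spins = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(N)]
T = 5.0          # initial temperature
cooling = 0.999  # multiplicative cooling factor per step (arbitrary choice)

current = energy(spins)
for step in range(20_000):
    i, j = random.randrange(N), random.randrange(N)
    spins[i][j] *= -1            # propose flipping one magnet
    proposed = energy(spins)
    dE = proposed - current
    if dE <= 0 or random.random() < math.exp(-dE / T):
        current = proposed       # accept: lower energy, or a thermal kick
    else:
        spins[i][j] *= -1        # reject: flip it back
    T *= cooling                 # cool down slowly

print("final energy:", current)
for row in spins:
    print(" ".join("+" if s > 0 else "-" for s in row))
```

The 2000Q does the analogous thing with quantum fluctuations instead of thermal kicks, which is what lets configurations stuck in metastable valleys tunnel out rather than wait to be shaken loose.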

As the excerpt makes clear, an annealer can be used as a computer if system A is chosen such that it can evolve into different Cs. The more kinds of C that are possible, the more problems A can be used to solve. For example, D-Wave can use quantum annealing to find better solutions than classical computers can for some problems, such as in aerodynamic modelling – but it still can’t run Shor’s algorithm, the quantum algorithm that could break the encryption schemes widely used to protect data. So the scientists and engineers working on D-Wave will be trying to augment their A such that Shor’s algorithm, too, is within reach.

Moreover, because of how the 2000Q works, the same solution can be the result of different magnetic configurations – perhaps even millions of them. So apart from zeroing in on a solution, the computer must also figure out the different ways in which that solution can be reached. And because there are so many possibilities, D-Wave must be ‘taught’ to identify some of them, all of them or an unbiased sample of them.

Such are the problems that people working at the edge of quantum computing have to deal with these days.

(To be clear: the ‘A’ in the 2000Q is not a system of simple particles so much as an array of qubits, which I’ll save for a different post.)

Featured image credit: Engin_Akyurt/pixabay.

Is the universe as we know it stable?

The anthropic principle has been a cornerstone of fundamental physics, used by some physicists to console themselves about why the universe is the way it is: tightly sandwiched between two dangerous states. If the laws and equations that define it had slipped just one way or the other during its formation, humans wouldn’t have existed to observe the universe, let alone conceive the anthropic principle. At least, this is the weak anthropic principle – we’re able to talk about the anthropic principle only because the universe allowed humans to exist. The strong anthropic principle holds that the universe is duty-bound to conceive life, and that if another universe were created along the same lines as ours, it would conceive intelligent life too, give or take a few billion years.

The principle has been repeatedly resorted to because physicists are at that juncture in history where they’re not able to tell why some things are the way they are and – worse – why some things aren’t the way they should be. The latest significant addition to this list, and an illustrative example, is the Higgs boson, whose discovery was announced on July 4, 2012, at CERN’s Large Hadron Collider (LHC). The Higgs boson’s existence was predicted by three independently working groups of physicists in 1964. In the intervening decades, from hypothesis to discovery, physicists spent a long time trying to pin down its mass. The now-shut American particle accelerator Tevatron helped speed up this process, using repeated measurements to steadily narrow down the range of masses in which the boson could lie. It was eventually found at the LHC with a mass of about 125.6 GeV (a proton weighs about 0.94 GeV).

It was a great moment, the discovery of a particle that completed the Standard Model, the group of theories and equations that governs the behaviour of fundamental particles. It was also a problematic moment for some, who had expected the Higgs boson to weigh much, much more. The mass of the Higgs boson is connected to the energy of the universe (because the Higgs field that generates the boson pervades the whole universe), and by some calculations a mass of 125.6 GeV implied that the universe should be the size of a football. Clearly it isn’t, so physicists got the sense that something was missing from the Standard Model – something that would’ve been able to explain the discrepancy. (In another example, physicists have used the properties of the Higgs boson to try to explain why there is more matter than antimatter in the universe even though both should have been created in equal amounts.)

The energy of the Higgs field also contributes to the scalar potential of the universe. A good analogy lies with the electrons in an atom. Sometimes, an energised electron sheds its extra energy in the form of a photon and jumps to a lower-energy state. At other times, a lower-energy electron can gain some energy and jump to a higher state, a phenomenon commonly observed in metals (where the higher-energy electrons conduct electricity). Just as electrons can have different energies, the scalar potential defines a sort of energy that the universe as a whole can have. It is calculated from the properties of the fundamental forces of nature – strong nuclear, weak nuclear, electromagnetic and gravitational – and of the Higgs field.

For the last 13.8 billion years, the universe has existed in a particular way that hasn’t changed, so we know that it is sitting at a minimum of the scalar potential. The apt image is of a mountain range, like so:

[Figure: the scalar potential pictured as a mountain range, with shallow local minima and one deepest global minimum]

The point is to figure out whether the universe is lying at the deepest point of the potential – the global minimum – or at a point that’s the deepest in a given range but not the deepest overall – a local minimum. This is important for two reasons. First: the universe will always, always try to get to its lowest-energy state. Second: quantum mechanics. By the principles of classical mechanics, if the universe were to get to the global minimum from a local minimum, its energy would first have to be increased so it could surmount the intervening peaks. But by the principles of quantum mechanics, the universe can tunnel through the intervening peaks and sink into the global minimum – and such tunnelling can occur only if the universe is currently sitting in a local minimum.
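To make the picture concrete, here is a toy scalar potential of my own choosing (not the Standard Model’s actual effective potential) that has exactly this shape:

$$V(\phi) \;=\; \lambda\,(\phi^2 - v^2)^2 \;+\; \epsilon\,\phi$$

For a small positive ε, the valley near φ ≈ +v sits slightly higher than the one near φ ≈ −v: a universe stuck in the higher valley is in a metastable, local-minimum state and can reach the true, global minimum only by tunnelling through the barrier around φ ≈ 0.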

To find out, physicists try to calculate the shape of the scalar potential in its entirety. This is an intensely complicated mathematical exercise that takes a lot of computing power, but that’s beside the point. The bigger problem is that we don’t know enough about the fundamental forces, and we don’t know anything about what else could be out there at higher energies. For example, it took an accelerator capable of boosting particles to 3,500 GeV and smashing them head-on to discover a particle weighing 125 GeV. Discovering anything heavier – i.e. more energetic – would take ever more powerful colliders costing many billions of dollars to build.

Almost sadistically, theoretical physicists have predicted that there exists an energy at which the gravitational force unifies with the strong nuclear, weak nuclear and electromagnetic forces into one indistinct force: the Planck scale, about 12,200,000,000,000,000,000 GeV. We don’t know the mechanism of this unification, and its rules are among the most sought-after in high-energy physics. Last week, Chinese physicists announced that they were planning to build a supercollider bigger than the LHC, called the Circular Electron-Positron Collider (CEPC), starting in 2020. The CEPC is slated to collide particles at 100,000 GeV – more than 7x the energy at which the LHC collides particles now – in a ring 54.7 km long. Given the way we build our most powerful particle accelerators, one able to smash particles together at the Planck scale would have to be about as large as the Milky Way.

(Note: 12,200,000,000,000,000,000 GeV is roughly the energy released when 57.2 litres of gasoline are burnt, which is not a lot of energy at all. The trick is to pack that much energy into a particle as small as the proton, whose diameter is 0.000000000000001 m. That works out to an energy density of about 10^64 GeV/m³.)
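A quick back-of-the-envelope check of those numbers (the figure of roughly 34 MJ per litre of gasoline is my assumption):

```python
import math

GEV_TO_JOULE = 1.602e-10        # 1 GeV in joules
planck_energy_gev = 1.22e19     # the Planck scale quoted above

# Energy in joules, and the equivalent volume of gasoline
# (assuming ~34 MJ of energy per litre of gasoline).
energy_j = planck_energy_gev * GEV_TO_JOULE
litres_of_gasoline = energy_j / 34e6
print(f"{energy_j:.2e} J  ~=  {litres_of_gasoline:.1f} litres of gasoline")  # ~57 litres

# Energy density if that energy is packed into a proton-sized sphere
# (diameter ~1e-15 m).
radius = 0.5e-15
volume = (4 / 3) * math.pi * radius ** 3
density = planck_energy_gev / volume
print(f"energy density ~ {density:.1e} GeV/m^3")  # ~2e64 GeV/m^3
```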

We also don’t know how the Standard Model scales from the energy levels it currently describes up to the Planck scale. If it changes significantly along the way, then the forces’ contributions to the scalar potential will change as well. Physicists think that if any new bosons – essentially new forces – appear along the way, then the equations defining the scalar potential, our picture of the peaks and valleys, will themselves have to change. This is why physicists want to arrive at ever more precise values of, say, the mass of the Higgs boson.

Or the mass of the top quark. While force-carrying particles are called bosons, matter-forming particles are called fermions. Quarks are a type of fermion; together with force-carriers called gluons, they make up protons and neutrons. There are six kinds, or flavours, of quark, and the heaviest is the top quark – in fact, the heaviest known fundamental particle. The top quark’s mass is particularly important. All fundamental particles get their mass from interacting with the Higgs field – the stronger the interaction, the greater the mass generated. So a precise measurement of the top quark’s mass pins down the Higgs field’s strongest interaction, or “loudest conversation”, with a fundamental particle, which in turn feeds into the scalar potential.
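That ‘level of interaction’ has a precise form in the Standard Model: a fermion’s mass is set by its Yukawa coupling y_f to the Higgs field, whose vacuum value is v ≈ 246 GeV,

$$m_f \;=\; \frac{y_f\,v}{\sqrt{2}}$$

Plugging in the top quark’s mass of about 173 GeV gives y_t ≈ 0.99, the largest such coupling of any known particle – which is why the top quark’s mass weighs so heavily on the shape of the scalar potential.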

On November 9, a group of physicists from Russia published the results of an advanced scalar-potential calculation to find out where the universe really lies: in a local minimum or in the stable global minimum. They found that the universe is in a local minimum. The calculations were “advanced” because they used the best estimates available for the properties of the various fundamental forces, as well as of the Higgs boson and the top quark, to arrive at their results, but they’re still not final because those estimates could shift. Hearteningly enough, the physicists also found that if the real values in the universe differed from our best estimates by just 1.3 standard deviations, our universe would turn out to be in the global minimum and be truly stable. In other words, the universe is sitting in a shallow valley on one side of a peak of the scalar potential, and right on the other side lies the deepest valley of all, the one it could sit in for ever.

If the Russian group’s calculations are right (though there’s no quick way for us to know if they aren’t), then there could be a future – distant in human terms – in which the universe tunnels from the local minimum to the global minimum and enters a new state. If we’ve assumed that the laws and forces of nature haven’t changed in the last 13.8 billion years because the universe has stayed put in one minimum, then we should also accept that in the fully stable state those laws and forces could be different in ways we can’t predict now. The change would sweep from one part of the universe into the others at the speed of light, like a shockwave, redefining all the laws that let us exist. One moment we’d be around, and gone the next. For all we know, that breadth of 1.3 standard deviations between our measurements of the particles’ and forces’ properties and their true values could be the breath of our lives.

The Wire
November 11, 2015