25 years of Maldacena’s bridge

Twenty-five years ago, in 1997, an Argentine physicist named Juan Martín Maldacena published what would become the most highly cited physics paper in history (more than 20,000 citations to date). In the paper, Maldacena described a ‘bridge’ between two theories that each describe how our world works, but separately, without meeting each other: the field theories that describe the behaviour of energy fields (like the electromagnetic field) and subatomic particles, and the theory of general relativity, which deals with gravity and the universe at the largest scales.

Field theories come in many types and have many properties. One type is the conformal field theory: a field theory that doesn’t change when it undergoes a conformal transformation – i.e. one that preserves angles but not lengths. Thanks to this symmetry, conformal field theories are said to be “mathematically well-behaved”.
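In more formal terms – a standard definition, not something from the original argument – a conformal transformation rescales the metric by a position-dependent factor:

$$ g_{\mu\nu}(x) \;\to\; \Omega^2(x)\, g_{\mu\nu}(x) $$

Because all lengths at a given point are stretched by the same factor Ω(x), ratios of lengths – and hence angles – are left unchanged.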

In relativity, space and time are unified into the spacetime continuum. This continuum can broadly exist in one of three possible spaces (roughly, universes of certain ‘shapes’): de Sitter space, Minkowski space and anti-de Sitter space. De Sitter space has positive curvature everywhere – like a sphere (but empty of any matter). Minkowski space has zero curvature everywhere – like a flat surface. Anti-de Sitter space has negative curvature everywhere – like a hyperbolic surface.

A sphere, a hyperbolic surface and a flat surface. Credit: NASA

Because these shapes are related to the way our universe looks and works, cosmologists have their own way to understand these spaces. If the spacetime continuum exists in de Sitter space, the universe is said to have a positive cosmological constant. Similarly, Minkowski space implies a zero cosmological constant and anti-de Sitter space a negative cosmological constant. Studies by various space telescopes have found that our universe has a positive cosmological constant, meaning ‘our’ spacetime continuum occupies a de Sitter space (sort of, since our universe does have matter).

In 1997, Maldacena found that a description of quantum gravity in anti-de Sitter space in N dimensions is the same as a conformal field theory in N – 1 dimensions. This – called the AdS/CFT correspondence – was an unexpected but monumental discovery that connected two kinds of theories that had thus far refused to cooperate. (The Wire Science had a chance to interview Maldacena about his past and current work in 2018, in which he provided more insights on AdS/CFT as well.)
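Stated compactly (this is the standard formulation of the correspondence, paraphrased):

$$ \text{quantum gravity on } \mathrm{AdS}_{N} \;\;\equiv\;\; \text{a CFT in } N-1 \text{ dimensions, living on its boundary} $$

In Maldacena’s original example, string theory on the space $\mathrm{AdS}_5 \times S^5$ turned out to be equivalent to $\mathcal{N} = 4$ supersymmetric Yang–Mills theory, a conformal field theory in four dimensions.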

In his paper, Maldacena demonstrated his finding by using the example of string theory as a theory of quantum gravity in anti-de Sitter space – so the finding was also hailed as a major victory for string theory. String theory is a leading contender for a theory that can unify quantum mechanics and general relativity. However, we have found no experimental evidence of its many claims. This is why the AdS/CFT correspondence is also called the AdS/CFT conjecture.

Nonetheless, thanks to the correspondence, (mathematical) physicists have found that some problems that are hard on the ‘AdS’ side are much easier to crack on the ‘CFT’ side, and vice versa – all they had to do was cross Maldacena’s ‘bridge’! This was another sign that the AdS/CFT correspondence wasn’t just a mathematical trick but could be a legitimate description of reality.

So how could it be real?

The holographic principle

In 1997, Maldacena proved that a string theory in five dimensions was the same as a conformal field theory in four dimensions. However, gravity in our universe exists in four dimensions – not five. So the correspondence came close to providing a unified description of gravity and quantum mechanics, but not close enough. Nonetheless, it gave rise to the possibility that an entity that existed in some number of dimensions could be described by another entity that existed in one dimension fewer.

In fact, the AdS/CFT correspondence didn’t so much give rise to this possibility as prove it, at least mathematically; awareness of the possibility had existed for several years before then, as the holographic principle. The Dutch physicist Gerardus ‘t Hooft first proposed it in the early 1990s, and the American physicist Leonard Susskind subsequently brought it firmly into the realm of string theory. One way to state the holographic principle, in the words of physicist Matthew Headrick, is thus:

“The universe around us, which we are used to thinking of as being three dimensional, is actually at a more fundamental level two-dimensional and that everything we see that’s going on around us in three dimensions is actually happening in a two-dimensional space.”

This “two-dimensional space” is the ‘surface’ of the universe, located at an infinite distance from us, where information is encoded that describes everything happening within the universe. It’s a mind-boggling idea. ‘Information’ here refers to physical information, such as, to use one of Headrick’s examples, “the positions and velocities of physical objects”. In beholding this information from the infinitely faraway surface, we apparently behold a three-dimensional reality.

It bears repeating that this is a mind-boggling idea. We have no proof so far that the holographic principle is a real description of our universe – we only know that it could describe our reality, thanks to the AdS/CFT correspondence. This said, physicists have used the holographic principle to study and understand black holes as well.

In 1915, Albert Einstein’s general theory of relativity provided a set of complicated equations to understand how mass, the spacetime continuum and the gravitational force are related. Within a few months, physicists Karl Schwarzschild and Johannes Droste, followed in subsequent years by Georges Lemaître, Subrahmanyan Chandrasekhar, Robert Oppenheimer and David Finkelstein, among others, began to realise that one of the equations’ exact (i.e. non-approximate) solutions indicated the existence of a point mass around which space was wrapped completely, preventing even light from escaping from inside this region to the outside. This was the black hole.
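Schwarzschild’s solution also fixes the size of this inescapable region – a standard result, quoted here for reference. A mass M traps everything, light included, within the radius

$$ r_s = \frac{2GM}{c^2} $$

where G is the gravitational constant and c the speed of light; for the mass of the Sun, r_s works out to about 3 km.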

Because black holes were exact solutions, physicists assumed that they didn’t have any entropy – i.e. that their insides didn’t have any disorder. If there had been such disorder, it should have appeared in Einstein’s equations. It didn’t, so the matter seemed settled. But in the early 1970s, the Israeli-American physicist Jacob Bekenstein noticed a problem: if a system with entropy, like a container of hot gas, is thrown into a black hole, and the black hole has no entropy, where does the entropy go? It had to go somewhere; otherwise, the black hole would violate the second law of thermodynamics – that the entropy of an isolated system, like our universe, can’t decrease.

Bekenstein postulated that black holes must also have entropy, and that the amount of entropy is proportional to the black hole’s surface area, i.e. the area of the event horizon. He also worked out that there is a limit to the amount of entropy a given volume of space can contain. Physicists already knew that all black holes could be described by just three observable attributes: their mass, electric charge and angular momentum. So if a black hole’s entropy increases because it has swallowed some hot gas, this change ought to manifest as a change in one, some or all of these three attributes.
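Bekenstein’s proportionality was later made precise by Stephen Hawking; the result, quoted here for completeness, is the Bekenstein–Hawking entropy

$$ S_{BH} = \frac{k_B\, c^3\, A}{4\, G\, \hbar} $$

where A is the area of the event horizon, k_B is Boltzmann’s constant and ħ is the reduced Planck constant. Note that the entropy scales with the horizon’s area, not with the black hole’s volume.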

Taken together: when some hot gas is tossed into a black hole, the gas falls in through the event horizon, but the information about its entropy might appear to be encoded on the black hole’s surface, from the point of view of an observer located outside and away from the event horizon. Note here that the black hole is a three-dimensional object whereas its surface – the event horizon – is a curved sheet and therefore two-dimensional. That is, all the information required to describe a 3D black hole could in fact be encoded on its 2D surface – which evokes the AdS/CFT correspondence!

However, the idea that the event horizon of a black hole preserves information about objects falling into the black hole gives rise to another problem. Quantum mechanics requires all physical information (like “the positions and velocities of physical objects”, in Headrick’s example) to be conserved. That is, such information can’t ever be destroyed. And there would be no reason to expect it to be destroyed if black holes lived forever – but they don’t.

Stephen Hawking found in the 1970s that black holes should slowly evaporate by emitting radiation, called Hawking radiation, and there is nothing in the theories of quantum mechanics to suggest that this radiation will be encoded with the information preserved on the event horizon. This, fundamentally, is the black hole information loss problem: either the black hole must shed the information in some way or quantum mechanics must be wrong about the preservation of physical information. Which one is it? This is a major unsolved problem in physics, and it’s just one part of the wider context that the AdS/CFT correspondence inhabits.
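The radiation is (very nearly) thermal: its spectrum is characterised by a single number, the Hawking temperature – a standard result, included here to show why the radiation seems so uninformative:

$$ T_H = \frac{\hbar\, c^3}{8 \pi\, G\, M\, k_B} $$

A spectrum fixed by the black hole’s mass M alone has no obvious room to carry away the detailed information about everything the black hole ever swallowed.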

For more insights into this discovery, do read The Wire Science’s interview of Maldacena.

I’m grateful to Nirmalya Kajuri for his feedback on this article.


The journey of a crow and the story of a black hole

The Washington Post has a review of, and introduction to, a curious new book called Ka, authored by John Crowley (acclaimed author of Great Work of Time). It is narrated from the POV of a crow named Dar Oakley, who journeys repeatedly into the realm of the dead with a human companion. A para from the WaPo piece caught my attention for its allusion to an unsolved problem in physics:

In many cultures, crows have long been regarded as “death-birds.” Eaters of carrion and corpses, they are sometimes even said to convey the soul into the afterlife. Crowley’s title itself alludes to this notion: Dar Oakley croaks out “ka,” which isn’t just a variant spelling of “caw,” but also the ancient Egyptian word for the spiritual self that survives the decay of the body. Yet what actually remains of us after our bones have been picked clean? Might our spirits then dwell in some Happy Valley or will we suffer in eternal torment? Could death itself be simply an adventure-rich dream from which we never awake? Who knows? The narrator, who might be a writer, says of his dead and much-missed wife Debra that “the ultimate continuation of her is me.” What, however, becomes of Debra when he too is dead?

What indeed. The question is left unanswered so the reader can confront the unanswerability supposedly implicit in this riddle. But while this scheme may be acceptable in a book-length “exploration of the bond between the living and the dead”, physicists don’t have much of a choice. They really want to know, would love to know, how a very similar situation plays out in the quantum realm.

It’s called the black hole information paradox. A black hole is a single point in space around which spacetime is folded into a sphere. This means that if you get trapped in this region of spacetime, you’re locked in. You can’t leave the sphere. The surface of this sphere is called the event horizon: it marks the closest you can get to the black hole and still be able to pull away.

Now, there’s no way to tell two black holes apart if their mass, angular momentum and electric charge are the same. This is called the no-hair conjecture. This means that whatever a black hole swallows – whether physical matter or information, say a sequence of 0s and 1s encoded as an electromagnetic signal – doesn’t retain its original shape or patterns. These are lost, observable only as changes to the black hole’s mass, angular momentum and/or electric charge.

In 1974, Stephen Hawking, Alexei Starobinsky and Yakov Zel’dovich found that, thanks to quantum mechanical effects near an event horizon, the black hole within could be emitting radiation out into space. So assuming a black hole contains a finite amount of energy and has stopped eating material/info from the outside, it will evaporate slowly over time and vanish. This is where the information paradox kicks in.
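To get a feel for the timescales involved, here’s a minimal back-of-the-envelope sketch (mine, not from the article) using the standard textbook formulas for the Hawking temperature and the evaporation time of a black hole of mass M:

```python
# Hawking temperature: T = hbar*c^3 / (8*pi*G*M*k_B)
# Evaporation time:    t = 5120*pi*G^2*M^3 / (hbar*c^4)
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 2.99792458e8         # speed of light (m/s)
G = 6.67430e-11          # gravitational constant (m^3 kg^-1 s^-2)
k_B = 1.380649e-23       # Boltzmann constant (J/K)
M_sun = 1.989e30         # mass of the Sun (kg)

def hawking_temperature(M):
    """Temperature of the (nearly) thermal Hawking radiation, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time_years(M):
    """Time for a black hole of mass M to evaporate completely, in years."""
    seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
    return seconds / 3.156e7

print(f"T_H    = {hawking_temperature(M_sun):.2e} K")      # ~6e-8 K
print(f"t_evap = {evaporation_time_years(M_sun):.2e} yr")  # ~2e67 years
```

For a solar-mass black hole, this gives a temperature of about 6 × 10⁻⁸ K and an evaporation time of about 10⁶⁷ years – ‘slowly over time’ is an understatement.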

You’re probably thinking the info the black hole once swallowed was all converted into energy and emitted as Hawking radiation. This is actually where the problem begins. Quantum mechanics may be whimsically counterintuitive about what it allows nature to do at its smallest scales, but it does have some rules that it always follows. One of them is that information is always conserved: even when information passes into a black hole, it can’t be converted into the same energy mulch that everything else is converted to.
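In the formal language of quantum mechanics (a standard statement, added here for context), this conservation rule is called unitarity: states evolve via a unitary operator, and unitary evolution can always be undone:

$$ |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U^\dagger U = \mathbb{1} $$

Because U has an inverse, the initial state can in principle always be reconstructed from the final one – nothing is irretrievably lost. Purely thermal Hawking radiation appears to break exactly this rule.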

We don’t know what happens to the ‘spirit’ of Debra when Dar Oakley passes away. And we don’t know what happens to the information inside a black hole when the latter evaporates.

Black holes are unique objects of study for classical and non-classical physicists alike because they combine the consequences of both general relativity and quantum mechanics. Those pursuing a unified theory, broadly called quantum gravity, hope that data about black holes will help them find a way to reconcile the laws of nature at the biggest and smallest scales. Resolving the black hole information paradox is one such path.

For example, string theory – a technical framework that gives physicists and mathematicians the tools to solve problems in quantum gravity – proposes a way out in the form of the holographic principle. It states (in highly simplified terms) that the information trapped by a black hole is actually trapped along the event horizon and doesn’t fall inside it. Over time, fluctuations on the horizon release the information back out. However, neither the complete shape of this proposal nor its consequences – including some apparently contradictory predictions – are fully understood.

Whether humans will be able to resolve this paradox in our lifetimes – or at all – remains to be seen, but it’s important to hope that such a thing is possible and that the story of a black hole’s life can be told from start to finish someday. Crowley also tries to answer Dar Oakley’s question about Debra’s fate thus (according to the WaPo review):

“Maybe not, said the Skeleton. But look at it this way. When you return home, you’ll tell the story of how you sought it and failed, and that story will be told and told again. And when you’re dead yourself, the story will go on being told, and in that telling you’ll speak and act and be alive again.”

Caw!

Featured image credit: Free-Photos/pixabay.

ACAT in the wild

The software working behind the robotic voice of Stephen Hawking was released for public use on August 18 by Intel, the company that developed it. Although principally developed for Hawking, the ‘tool’ has since been made available to many other people suffering from motor neurone disease, an ailment that gradually but steadily deadens the neurons controlling various muscles of the body, rendering its victims incapable of speech, among other faculties. Intel’s software, called the Assistive Context-Aware Toolkit (ACAT), steps in to translate visual signals like facial twitches into speech. Its source code and installation instructions are available on GitHub.

ACAT is an assembly of components that each perform a unique function. In order of operation: an input device picks up the visual signals (cheek-muscle twitches, in Hawking’s case), a calibrated text-prediction tool generates the corresponding unit of language, and a speech synthesiser vocalises the text. The first two components are unified by the Windows Communication Framework. In ACAT’s case, the text prediction is performed by a tool called Presage, developed by the Italian developer Matteo Vescovi. Other input tools include proximity sensors, accelerometers and buttons.
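To make the pipeline concrete, here’s a purely illustrative sketch of the three stages described above (input sensor → text prediction → speech synthesis). All class and function names here are hypothetical stand-ins of my own; ACAT itself is a C#/.NET codebase and its real API differs.

```python
# Hypothetical sketch of the sensor -> predictor -> synthesiser pipeline.
import random

class TwitchSensor:
    """Stands in for any ACAT input device: a camera watching cheek muscles,
    a proximity sensor, an accelerometer or a button."""
    def triggered(self) -> bool:
        return random.random() < 0.5  # simulate an occasional twitch

class WordPredictor:
    """Stands in for a text-prediction engine such as Presage: given the
    text typed so far, it suggests likely completions."""
    VOCAB = ["hello", "help", "hungry", "home"]
    def suggest(self, prefix: str) -> list[str]:
        return [w for w in self.VOCAB if w.startswith(prefix)]

class Synthesizer:
    """Stands in for the speech synthesiser at the end of the pipeline."""
    def speak(self, text: str) -> None:
        print(f"[speaking] {text}")

# Scan through the predictor's suggestions one at a time;
# a twitch selects the word currently highlighted.
sensor, predictor, tts = TwitchSensor(), WordPredictor(), Synthesizer()
for word in predictor.suggest("he"):
    if sensor.triggered():
        tts.speak(word)
        break
```

The scanning-and-selecting loop at the end mirrors how such interfaces are commonly described: candidates are highlighted in turn, and the user ‘clicks’ with whatever gesture they can still make.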

According to the BBC, the UK’s MND Association has celebrated the release. Of the motives behind it, Intel wrote, “Our hope is that, by open sourcing this configurable platform, developers will continue to expand on this system by adding new user interfaces, new sensing modalities, word prediction and many other features.” Company spokesperson Lama Nachman also noted that, with the current release, Intel isn’t anticipating ‘all kinds’ of innovations as much as assistive ones. A detailed user guide is available here.

Why you should care about the mass of the top quark

In a paper published in Physical Review Letters on July 17, 2014, a team of American researchers reported the most precisely measured value yet of the mass of the top quark, the heaviest fundamental particle. Its mass is so high that it can exist only in very high-energy environments – such as inside powerful particle colliders or in the very early universe – and not anywhere else.

Given this, the American team’s efforts to measure the top quark’s mass as precisely as possible might come across as needlessly painstaking. However, there’s an important reason to get as close to the exact value as possible.

That reason has to do with possibly the most famous discovery of 2012. It was drinks-all-round for the particle physics community when the Higgs boson was discovered by the ATLAS and CMS experiments at the Large Hadron Collider (LHC). While the elation lasted awhile, serious questions were soon being asked about some of the boson’s properties. For one, it was much lighter than anticipated by some promising areas of theoretical particle physics. Proponents of an idea called naturalness pegged it to be 19 orders of magnitude higher!

Because the Higgs boson is the particulate residue of an omnipresent energy field called the Higgs field, the boson’s mass has implications for how the universe should be. With the boson being so much lighter than expected, physicists couldn’t explain why the universe wasn’t, say, the size of a football – even though their calculations suggested it should be.

In the second week of September 2014, Stephen Hawking said the Higgs boson will cause the end of the universe as we know it. Because it was Hawking who said it, and because his statement contained the phrase “end of the universe”, the media hype was ridiculous yet to be expected. What he actually meant was that the ‘unnatural’ Higgs mass had placed the universe in a difficult position.

The universe would ideally love to be in its lowest energy state, like you do when you’ve just collapsed into a beanbag with beer, popcorn and Netflix. However, the mass of the Higgs has trapped it on a chair instead. While the universe would still like to be in the lower-energy beanbag, it’s reluctant to get up from the higher-energy yet still comfortable chair.

Someday, according to Hawking, the universe might gain enough energy (get out of the chair) and then collapse into its lowest energy state (the beanbag). But that day is trillions of years away.

What does the mass of the top quark have to do with all this? Quite a bit, it turns out. Fundamental particles like the top quark possess their mass in the form of potential energy. They acquire this energy when they move through the Higgs field, which is spread throughout the universe. Some particles acquire more energy than others. How much energy is acquired depends on two parameters: the strength of the Higgs field (which is constant), and the particle’s Higgs charge.

The Higgs charge determines how strongly a particle engages with the Higgs field. It’s the highest for the top quark, which is why it’s also the heaviest fundamental particle. More relevant for our discussion, this unique connection between the top quark and the Higgs boson is also what makes the top quark an important focus of studies.
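In the Standard Model’s own language (a standard relation, not spelled out in the article), this ‘Higgs charge’ is called the Yukawa coupling y, and a particle’s mass is set by it together with the constant strength v of the Higgs field:

$$ m = \frac{y\, v}{\sqrt{2}}, \qquad v \approx 246\ \text{GeV} $$

Plugging in the top quark’s mass of about 173 GeV/c² gives y ≈ 1, the largest Yukawa coupling of any known fundamental particle – which is why the top quark is so closely tied to the Higgs boson.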

Getting the mass of the top quark just right is important to better determining its Higgs charge, and therefore the extent of its coupling with the Higgs boson, and therefore the properties of the Higgs boson itself. Small deviations in the value of the top quark’s mass could spell drastic changes in when or how our universe will switch from the chair to the beanbag.

If it does, all our natural laws would change. Life would become impossible.

The American team that made the measurements used values obtained from the D0 experiment on the Tevatron particle collider at the Fermi National Accelerator Laboratory. The Tevatron was shut down in 2011, so these measurements are the collider’s last word on the top quark’s mass: 174.98 ± 0.76 GeV/c² (the Higgs boson weighs around 126 GeV/c²; a gold atom, considered pretty heavy, weighs around 183 GeV/c²). This is a precision of better than 0.5%, the finest yet. The value is likely to be updated once the LHC restarts early next year.
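The quoted precision is simply the ratio of the uncertainty to the central value:

$$ \frac{\Delta m}{m} = \frac{0.76}{174.98} \approx 0.0043 = 0.43\% $$

which is indeed better than 0.5%.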

Featured image: Screenshot from Inception