A fountainhead of intrigue

Recently, the Chinese-American mathematician Yitang Zhang claimed to have resolved the Landau-Siegel zeroes conjecture, which is related to the Riemann hypothesis. (Specifically, disproving the conjecture brings us a small step closer to proving the Riemann hypothesis.) His paper hasn’t been verified by independent mathematicians yet, but it was newsworthy nonetheless because Zhang has previously claimed to have cracked the problem, only to retract his claim after others found mistakes in his proof.

It also matters because any step, big or small, towards the Riemann hypothesis is important progress. Bernhard Riemann formulated the hypothesis in 1859 and it is yet to be proved. In 2000, the Clay Mathematics Institute included it in its list of Millennium Problems: solving one will fetch the solver a prize of $1 million. Yet as important as the work of number theorists inching towards a solution has been, the hypothesis itself is a thing of infinite beauty, with intriguing parallels to ideas in far-flung fields of study.


Please jump over the next two paragraphs if you’re familiar with the Riemann zeta function and its nontrivial zeroes.

The hypothesis is a statement about the zeroes of the Riemann zeta function. This function, Riemann found, could be used to estimate the number of prime numbers between two points on the real number line. Its zeroes – the inputs for which the function has a value of zero – lie on the complex plane, i.e. they are complex numbers of the form a + bi, where a and b are real numbers. ‘a’ is called the real part and ‘b’ the imaginary part; i is of course the imaginary unit, i = √(-1). There are two kinds of zeroes, trivial and nontrivial. For the trivial zeroes, a is a negative even integer (-2, -4, -6, -8, …) and b is zero. The Riemann hypothesis states that the value of a for which ‘a + bi’ is a nontrivial zero of the function is always 1/2.
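If you’d like to poke at these zeroes yourself, here is a minimal sketch using Python’s mpmath library – my own illustration, nothing to do with Zhang’s paper – showing that the trivial zeroes sit at the negative even integers and that the first few nontrivial zeroes the library computes all have real part 1/2.

```python
# A minimal sketch (my own illustration) of the zeroes of the Riemann zeta
# function, using the mpmath library.
from mpmath import mp, zeta, zetazero

mp.dps = 25  # decimal places of working precision

# Trivial zeroes: the zeta function vanishes at the negative even integers.
for s in (-2, -4, -6, -8):
    print(s, zeta(s))

# Nontrivial zeroes: zetazero(n) returns the nth zero in the upper half of
# the complex plane. Note that the real part of each one is 0.5.
for n in range(1, 6):
    print(n, zetazero(n))
```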

This is a powerful statement if you think about it. Mathematicians (and for that matter everyone curious) have been keen to understand the pattern in which prime numbers are distributed on the number line. Finding a function that defines this pattern would unlock several mysteries in the annals of number theory, as well as in quantum physics and anywhere else prime numbers make an appearance. If the Riemann hypothesis is proved, it will mean that a function that can specify the number of prime numbers between any two numbers has zeroes whose real parts are either 1/2 or negative even integers. This will be predictability where there once was only chaos. More specifically, when the trivial and nontrivial zeroes of the zeta function are plotted on a graph, the values of the real part will prevent the dots from fluctuating all over the place and will constrain them to a particular range. And something about the picture that emerges could speak to mathematicians about where the secret pattern to prime-number distribution could be hiding.


Scientists have already found mysterious similarities between the distribution of the nontrivial zeroes of the zeta function and quantum physics. I found a particularly evocative example in an article from 2003, entitled The Spectrum of Riemannium, written by Brian Hayes. He compared several one-dimensional distributions collected from mathematics as well as reality. Each distribution was distinguished by the spacing between consecutive datapoints. The most straightforward was the periodic distribution: a line drawn every 2 units, say (see image below). For the random distribution, a randomiser spits out a number and a line is drawn after that many units. For the jiggled distribution, lines are placed in a periodic pattern and each line is then moved (or “jiggled”) by a small, random amount. The distribution of zeroes of the zeta function arises when a line is drawn at every point on the number line where there is a zero.
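Here is a rough Python sketch of how the first three of those distributions can be generated – my own illustration rather than Hayes’s code, with arbitrary choices for the number of lines, the mean spacing and the size of the jiggle.

```python
# A rough sketch (my own) of three one-dimensional distributions: each is
# just a list of positions at which a vertical line would be drawn.
import numpy as np

rng = np.random.default_rng(0)
n = 50          # number of lines per distribution (arbitrary)
spacing = 2.0   # mean spacing between lines (arbitrary)

# Periodic: a line every `spacing` units.
periodic = spacing * np.arange(n)

# Random: gaps drawn at random (exponential gaps give a Poisson-like pattern).
random_lines = np.cumsum(rng.exponential(spacing, size=n))

# Jiggled: the periodic lines, each nudged by a small random amount.
jiggled = periodic + rng.uniform(-0.3 * spacing, 0.3 * spacing, size=n)

print(periodic[:5], random_lines[:5], jiggled[:5], sep="\n")
```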

Then there are “erbium” and “eigenvalues”. The former distribution shows the possible energy levels of the nucleus of the erbium-166 atom. These are determined by the energies and the electromagnetic properties of the nucleus’s constituent particles (68 protons and 98 neutrons). In the latter half of the 20th century, physicists found that the energy levels of a heavy nucleus were statistically similar to the eigenvalues of a type of matrix called a random Hermitian matrix. Every square matrix is associated with a polynomial, called its characteristic polynomial, and the roots of this polynomial are the matrix’s eigenvalues. If the matrix has P² elements, it will have P eigenvalues. In a Hermitian matrix, each element is the complex conjugate of the element mirrored across the diagonal running from the top left to the bottom right. In a random Hermitian matrix, the value of each element is chosen at random. In Hayes’s image, the distribution shows 100 eigenvalues of a random Hermitian matrix with 300² elements.
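For the curious, here is a small Python sketch of that recipe. The specific construction I use (Gaussian random complex entries, then symmetrising the matrix) is an assumption on my part – Hayes doesn’t spell out his exact ensemble – but it captures the idea of 300² elements yielding 300 eigenvalues, of which 100 are kept.

```python
# A sketch of a random Hermitian matrix and its eigenvalues. The exact
# ensemble (Gaussian entries) is my assumption, not Hayes's specification.
import numpy as np

rng = np.random.default_rng(1)
P = 300  # the matrix has P**2 = 90,000 elements and P eigenvalues

# Random complex entries, then symmetrise: (A + A†)/2 is Hermitian by
# construction, i.e. element (i, j) is the complex conjugate of element (j, i).
A = rng.normal(size=(P, P)) + 1j * rng.normal(size=(P, P))
H = (A + A.conj().T) / 2

eigenvalues = np.linalg.eigvalsh(H)  # P real eigenvalues, in ascending order
middle_100 = eigenvalues[P // 2 - 50 : P // 2 + 50]
print(len(middle_100), middle_100[:3])
```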

With these distributions in front of him, Hayes writes: “In analyzing patterns of this kind, there is seldom much hope of predicting the positions of individual elements in a series. The aim is statistical understanding – a description of a typical pattern rather than a specific one.” One of his measures of choice for statistical similarity is the pair-correlation function. If you specify a distance d, the pair-correlation function will tell you how many pairs of lines in the distribution are separated by d. Not every distribution will have the same function, of course: its form varies according to the properties of the distribution it is working on.
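Here is a naive way to compute something like that count in Python – a simplification of the real statistic (which involves rescaling the spacings first), but enough to see how a periodic distribution and a random one produce very different answers.

```python
# A naive pair-separation count (my own simplification of the
# pair-correlation idea): histogram the distances between all pairs of lines.
import numpy as np

def pair_separation_counts(positions, bins=50, max_distance=None):
    positions = np.sort(np.asarray(positions, dtype=float))
    # All pairwise separations with i < j, via broadcasting.
    diffs = positions[None, :] - positions[:, None]
    separations = diffs[np.triu_indices(len(positions), k=1)]
    if max_distance is None:
        max_distance = separations.max()
    counts, edges = np.histogram(separations, bins=bins, range=(0, max_distance))
    return counts, edges

# On a periodic distribution the counts pile up at multiples of the spacing;
# on a random one the histogram is much flatter.
counts, _ = pair_separation_counts(2.0 * np.arange(50), max_distance=20)
print(counts)
```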

As Hayes narrates, in 1972, Hugh Montgomery and Freeman Dyson found that the pair-correlation functions of the zeroes of the Riemann zeta function and of the eigenvalues of random Hermitian matrices were an exact match. Given that the distribution of the eigenvalues of the same matrices is statistically similar to the energy levels of heavy nuclei, like that of erbium-166, Hayes writes: “Is it all just a fluke, this apparent link between matrix eigenvalues, nuclear physics and zeta zeros? It could be, although a universe with such chance coincidences in its fabric might be considered even stranger than one with mysterious causal connections.” There are other connections between the Riemann zeta function and quantum physics (such as the representation of the function as a trace formula, with applications in the study of quantum chaos) – but just this should suffice, I think, to illustrate how captivating the Riemann hypothesis is.
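For completeness – and this is a detail I’m adding, not one from Hayes’s article – the common form both sides land on, once the zeroes (or eigenvalues) are rescaled so that their mean spacing is 1, is usually written as

```latex
R_2(u) = 1 - \left( \frac{\sin \pi u}{\pi u} \right)^2
```

where u is the normalised distance between a pair; the dip towards zero as u shrinks is the famous ‘repulsion’ between neighbouring zeroes.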

It is also tempting to imagine that, in this day and age of superspecialised mathematics and advanced computing techniques and hardware, any problem that remains unsolved for long enough not only accrues a kind of legendary status but also implies the existence of roots that run deep into disparate fields of scientific and mathematical inquiry – or its ‘unsolved’ status wouldn’t have survived attacks from so many angles, founded on the knowledge of so many concepts and meta-concepts.

I found Hayes’s 2003 article when I was rereading the work of physicist S. Pancharatnam (1934-1969) and wanted to learn more about Michael Berry, who is well-known for his description of the Berry phase. Pancharatnam had originally derived a parameter of polarised light called the geometric phase. To quote from a post I wrote in 2018 about Pancharatnam’s work: “All waves can be described by their phase and amplitude. When the values of both parameters are changed at the same time and in slow-motion, one can observe the wave evolving through different states. In some cases, when the phase and amplitude are cycled through a series of values and brought back to their original, the wave looks different from what it did at the start. The effective shift in phase is called the geometric phase.” In 1984, Berry – who was then unaware of Pancharatnam’s work – provided a generalised description of geometric phases, today called the Berry phase.

The first citation in Hayes’s article is to a 1999 article coauthored by Berry and Jonathan Keating, entitled The Riemann Zeros and Eigenvalue Asymptotics. This was a technical article and beyond my ability to parse, but knowing that Berry had studied the Riemann zeta function, I searched the web for other, more accessible descriptions of his insights – and found an equally fascinating article published in 1998. It was entitled A Prime Case of Chaos, written by Barry Cipra as part of his acclaimed ‘What’s Happening in the Mathematical Sciences’ series. Cipra’s article begins with an image resembling the one above from Hayes’s article. (If you’re interested in its origins, this is the attribution: “‘Chaotic motion and random matrix theories’ by O. Bohigas and M. J. Giannoni in Mathematical and Computational Methods in Nuclear Physics, J. M. Gomez et al., eds., Lecture Notes in Physics, volume 209 (1984), pp. 1–99”.) In this article, Berry is quoted more extensively – as a “quantum chaologist”.

I hope Cipra won’t mind my reproducing the contents of one particular ‘box’ in full below:

Prime numbers are music to Michael Berry’s ears.

Berry, a theoretical physicist at the University of Bristol, is one of the leading theorists in the study of quantum chaos. And that’s brought him to a keen appreciation of the Riemann zeta function.

Prime numbers are a lot like musical chords, Berry explains. A chord is a combination of notes played simultaneously. Each note is a particular frequency of sound created by a process of resonance in a physical system, say a saxophone. Put together, notes can make a wide variety of music – everything from Chopin to Spice Girls. In number theory, zeroes of the zeta function are the notes, prime numbers are the chords, and theorems are the symphonies.

Of course chords need not be concordant; a lot of vibrations are nothing more than noise. The Riemann Hypothesis, however, imposes a pleasing harmony on the number-theoretic, zeta-zero notes. “Loosely speaking, the Riemann Hypothesis states that the primes have music in them,” Berry says.

But Berry is looking for more than a musical analogy; he hopes to find the actual instrument behind the zeta function – a mathematical drum whose natural frequencies line up with the zeroes of the zeta function. The answer, he thinks, lies in quantum mechanics. “There are vibrations in classical physics, too,” he notes, “but quantum mechanics is a richer, more varied source of vibrating systems than any classical oscillators that we know of.”

What if someone finds a counterexample to the Riemann Hypothesis? “It would destroy this idea of mine,” Berry readily admits – one reason he’s a firm believer in Riemann’s remark. A counterexample would effectively end physicists’ interest in the zeta function. But one question would linger, he says: “How could it be that the Riemann zeta function so convincingly mimics a quantum system without being one?”

The Landau-Siegel zeroes are one such potential counterexample – hypothetical zeroes, of a class of functions closely related to the Riemann zeta function, that would lie off the critical line where the real part is 1/2. In his new paper, Yitang Zhang has claimed to have significantly constrained the possibility of such zeroes existing. Even if he hasn’t altogether eliminated the possibility, and if his proof is found to be valid, Berry can rest easy – or at least as easy as his tantalising question will allow.

(Addendum: Riemann is one of my favourite mathematicians. I wrote a tribute of sorts to his work in 2015.)

Featured image: Bernhard Riemann in 1863. Image: Public domain (edited with Photomosh).

Science v. tech, à la Cixin Liu

A fascinating observation by Cixin Liu in an interview with John Plotz for Public Books, translated by Pu Wang (numbers added):

… technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

Perhaps an archaeology or palaeontology enthusiast might regularly see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc., but I have almost always encountered these objects described only as ‘relics’ or suchlike in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as machines with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance, but I’m convinced that our contemporary view and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

(3) However, he may not be able to justify this view beyond specific examples, simply because – to borrow the words of a theoretical physicist from many years ago, that they “require only a pen and paper to work” – it is possible to predict the world at a far lower cost than one would incur to actually build and study the future.

Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, to which Cixin provides a circuitous non-answer that somehow misses the obvious: science’s historical preeminence began when natural philosophers started to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry, which enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

Geometry’s near-miss that wasn’t

On June 8, Nautilus published a piece by Evelyn Lamb about mathematical near-misses. Imagine a mathematician trying to solve a problem using a specific technique, and imagine it allows her to get really, really close to a solution – but not the solution itself. That’s a mathematical near-miss, and such near-misses are of particular interest to mathematicians because they can reveal potential connections between seemingly unconnected areas of mathematics. Lamb starts the piece talking about geometry but further down she’s got the simplest example: the Ramanujan constant. It is written as e^{π(163^0.5)} (in English, you’d read this as “e to the power pi-times the square-root of 163”). It is equal to 262,537,412,640,768,743.99999999999925. According to mathematician John Baez (quoted in the same article), this amazing near-miss is thanks to 163 being a so-called Heegner number. “Exponentials related to these numbers are nearly integers,” Lamb writes. (There is a quick numerical check of this near-miss after the quote below.) Her piece concludes thus:

Near misses live in the murky boundary between idealistic, unyielding mathematics and our indulgent, practical senses. They invert the logic of approximation. Normally the real world is an imperfect shadow of the Platonic realm. The perfection of the underlying mathematics is lost under realizable conditions. But with near misses, the real world is the perfect shadow of an imperfect realm. An approximation is “a not-right estimate of a right answer,” Kaplan says, whereas “a near-miss is an exact representation of an almost-right answer.”
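Just to see that string of nines on screen, here is a quick check with Python’s mpmath – my own aside, not something from Lamb’s piece.

```python
# The Ramanujan-constant near-miss: e^(pi * sqrt(163)) is astonishingly
# close to a whole number.
from mpmath import mp, exp, pi, sqrt, nint

mp.dps = 40  # enough precision to see the run of nines
value = exp(pi * sqrt(163))
print(value)                # 262537412640768743.9999999999992500...
print(nint(value) - value)  # the tiny gap to the nearest integer
```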

It was an entirely fun article (not just because I’ve a thing for articles discussing science that has no known practical applications). However, the minute I read the headline (‘The Impossible Mathematics of the Real World’), one other science story from the past – which turned out to be of immense practical relevance – immediately came to mind: that of the birth of non-Euclidean geometry. In 19th-century Europe, the German polymath Carl Friedrich Gauss realised that though people regularly approximated the shapes of real-world objects to those conceived by Euclid in c. 300 BC, there were enough dissimilarities to suspect that some truths of the world could be falling through the cracks. For example, Earth isn’t a perfect sphere; mountains aren’t perfect cones; and perfect cubes and cuboids don’t exist in nature. Yet we seem perfectly okay with ‘solving’ problems by making often unreasonable approximations. Which one is the imperfect shadow here?

A lecture delivered in June 1854 by Bernhard Riemann, a student of Gauss’s at the University of Göttingen, put his teacher’s suspicions to rest and showed that Euclid’s shapes had been the imperfect shadows. He did this by inventing the mathematical tools and rules to describe geometries in any number of dimensions, including those of curved surfaces. (E.g., the three angles of a Euclidean triangle add up to 180° – but draw a triangle on the surface of a sphere and the sum of its angles is greater than 180°.) In effect, Euclid’s geometry turned out to be a special case of Riemannian geometry: the flat case.
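As a toy demonstration of that parenthetical claim, here is a short Python check – mine, obviously, not Riemann’s: the ‘octant’ triangle formed by the north pole and two points on the equator a quarter-turn apart has three right angles, so its angles add up to 270°.

```python
# Angle sum of a spherical triangle: the octant triangle (north pole plus two
# equatorial points 90 degrees apart) has three right angles.
import numpy as np

def vertex_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors)."""
    # Tangent directions at a along the great circles towards b and c.
    t_ab = b - np.dot(a, b) * a
    t_ac = c - np.dot(a, c) * a
    cosine = np.dot(t_ab, t_ac) / (np.linalg.norm(t_ab) * np.linalg.norm(t_ac))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

A = np.array([0.0, 0.0, 1.0])  # north pole
B = np.array([1.0, 0.0, 0.0])  # a point on the equator
C = np.array([0.0, 1.0, 0.0])  # another, a quarter-turn away

total = sum(vertex_angle(*v) for v in [(A, B, C), (B, C, A), (C, A, B)])
print(total)  # 270.0 – comfortably more than 180
```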

But the extent of Euclidean geometry’s imperfections only really came to light when physicists* used Riemann’s geometry to set up the theories of relativity, which unified space and time and showed that gravity’s effects could be understood as the experience of moving through curved spacetime. These realisations wouldn’t have been possible without Gauss wondering why Euclid’s shapes made any sense at all in a world filled with jags and bumps. To me, this illustrates a fascinating kind of near-miss: one where real-world objects were squeezed into mathematical rules so we could make approximate real-world predictions for over 2,300 years, without really noticing that most of Euclid’s shapes looked nothing like anything in the natural universe.

*It wasn’t just Albert Einstein. Among others, the list of contributors included Hendrik Lorentz, Henri Poincaré, Hermann Minkowski, Marcel Grossmann and Arnold Sommerfeld.

Featured image credit: Pexels/pixabay.