A physics story of infinities, goats and colours

When I was writing in August about physicist Sheldon Glashow’s objection to Abdus Salam being awarded a share of the 1979 physics Nobel Prize, I learnt that it was because Salam had derived a theory that Glashow had derived as well, albeit by a different route – and in both cases the final product was non-renormalisable. A year or so later, Steven Weinberg derived the same theory but also ensured that it was renormalisable. Glashow’s argument was that Salam shouldn’t have won the prize because he hadn’t brought anything new to the table: Glashow had derived the initial theory and Weinberg had made it renormalisable.

His objections aside, the episode brought to my mind the work of Kenneth Wilson, who made important contributions to the renormalisation toolkit. Using these tools, physicists ensure that the equations they use to model reality don’t get out of hand and predict impossible values. An equation might be useful for solving problems in 99 scenarios, but in the hundredth it might predict an infinity (i.e. the value of a physical variable grows without bound), rendering the equation useless. In such cases, physicists use renormalisation techniques to ensure the equation works in that scenario as well, without predicting infinities. (This is a simplistic description that I will finesse throughout this post.)

In 2013, when Kenneth Wilson died, I wrote about the “Indian idea of infiniteness” – including how scholars in ancient India had contemplated very large numbers and their origins, only for this knowledge to have all but disappeared from the public imagination today because of the country’s failure to preserve it. On both occasions, I never quite understood what renormalisation really entailed. The following post is an attempt to fill that gap.

You know electrons. Electrons have mass. Not all of this mass comes from one place. Some of it is the mass of the particle itself, sometimes called the shell mass. The electron also has an electric charge and casts a small electromagnetic field around itself. This field has some energy. According to the mass-energy equivalence (E = mc²), this energy corresponds to some mass. This is called the electron’s electromagnetic mass.

Now, there is an equation to calculate a particle’s electromagnetic mass – and this equation shows that the mass is inversely proportional to the particle’s radius. That is, the smaller the particle, the larger its electromagnetic mass. This is also why the proton, which is larger than the electron, gets a smaller contribution to its mass from its electromagnetic mass.
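For concreteness, here is the textbook version of that relationship for a charged shell of radius r. Treat it as a sketch of the scaling rather than a precise statement – the numerical prefactor depends on how the charge is assumed to be distributed:

```latex
% Electrostatic field energy of a charged shell of radius r, and the corresponding
% electromagnetic mass via mass-energy equivalence. The prefactor is model-dependent.
E_{\text{em}} \sim \frac{e^{2}}{8\pi\varepsilon_{0}\, r},
\qquad
m_{\text{em}} = \frac{E_{\text{em}}}{c^{2}} \sim \frac{e^{2}}{8\pi\varepsilon_{0}\, r\, c^{2}}
\;\longrightarrow\; \infty \quad \text{as } r \to 0 .
```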

So far so good – but a problem quickly arises. As the particle becomes smaller, according to the equation, its electromagnetic mass increases. In technical terms, as the particle’s radius approaches zero, its mass approaches infinity. If its mass approaches infinity, the particle becomes harder to accelerate from rest, because a larger and larger amount of energy is required to do so. So the equation predicts that smaller charged particles, like quarks, should be nearly impossible to move around. Yet this is not what we see in experiments, where these particles do move around.

In the first decade of the 20th century (when the equation existed but quarks had not yet been discovered), Max Abraham and Hendrik Lorentz resolved this problem by assuming that the shell mass of the particle is negative. It was the earliest (recorded) instance of such a tweak – made so that the equations we use to model reality don’t lose touch with that reality – and was called renormalisation. Assuming the shell mass is negative sounds silly, of course, but it doesn’t affect the final result in a way that breaks the theory. To renormalise, in this context, is to assume that our mathematical knowledge of the phenomenon being modelled is not complete enough, or that introducing such completeness would make most other problems intractable.
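Schematically, and in my own notation rather than Abraham’s or Lorentz’s, the bookkeeping behind this tweak looks something like this:

```latex
% The mass we measure is treated as the sum of a "shell" (bare) part and the
% electromagnetic part. As r -> 0 the electromagnetic part blows up, so the shell
% part is assumed to be negative (and to blow up in the opposite direction) in just
% the right way for the sum to stay equal to the finite, measured mass.
m_{\text{observed}} = m_{\text{shell}} + m_{\text{em}}(r),
\qquad
m_{\text{shell}} = m_{\text{observed}} - m_{\text{em}}(r)
\;\longrightarrow\; -\infty \quad \text{as } r \to 0 .
```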

There is another route physicists take to make sure equations and reality match, called regularisation. This is arguably more intuitive. Here, the physicist modifies the equation to include a ‘cutoff factor’ that represents what the physicist assumes is their incomplete knowledge of the phenomenon to which the equation is being applied. By applying the equation modified in this way, the physicist is arguing that some ‘new physics’ will be discovered in future that will complete the theory and the equation, and perfectly account for the mass.
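A minimal sketch of what a cutoff factor does – using a generic divergent integral rather than any particular QED expression – is the following. The divergence comes from letting the integral run to arbitrarily high momenta k; the cutoff Λ simply stops it at a finite scale, on the argument that the theory isn’t to be trusted beyond that point:

```latex
% A schematic logarithmically divergent integral, regularised with a cutoff \Lambda.
I = \int_{m}^{\infty} \frac{\mathrm{d}k}{k} = \infty
\quad\longrightarrow\quad
I(\Lambda) = \int_{m}^{\Lambda} \frac{\mathrm{d}k}{k} = \ln\frac{\Lambda}{m},
\quad \text{finite for any finite } \Lambda .
```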

(I personally prefer regularisation because it seems more modest, but this is an aesthetic choice that has nothing to do with the physics itself and is thus moot.)

It is sometimes the case that once a problem is solved by regularisation, the cutoff factor disappears from the final answer – it helps with solving the problem, but its presence or absence doesn’t affect the answer.

This brings to mind the famous folk tale of the goat negotiation problem, doesn’t it? A fellow in a village dies and bequeaths his 17 goats to his three sons thus: the eldest gets half, the middle gets a third and the youngest gets one-ninth. Obviously the sons get into a fight: the eldest claims nine instead of 8.5 goats, the middle claims six instead of 5.67 and the youngest claims two instead of 1.89. But then a wise old woman turns up and figures it out. She adds one of her own goats to the father’s 17 to make a total of 18. Now the eldest son gets nine goats, the middle son gets six goats and the youngest son gets two goats. Problem solved? When the sons tally up the goats they have received, they realise that the total is only 17. The old woman’s goat is left over, so she takes it back and goes on her way. The one additional goat was the cutoff factor here: you add it to the problem, solve the problem, get a solution and move on.
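If you’d like to check the arithmetic, here is a throwaway sketch in Python (nothing to do with the physics, just the goats):

```python
from fractions import Fraction

goats = 17
shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]

# The bequeathed shares don't add up to a whole: 1/2 + 1/3 + 1/9 = 17/18.
print(sum(shares))        # 17/18

# Add the old woman's goat -- the 'cutoff factor' -- and split 18 instead.
split = [int(share * (goats + 1)) for share in shares]
print(split, sum(split))  # [9, 6, 2] 17 -- her goat is left over, and she takes it back
```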

The example of the electron was suitable but also convenient: the need to renormalise particle masses originally arose in the context of classical electrodynamics – the first theory developed to study the behaviour of charged particles. Theories that physicists developed later, in each case to account for some phenomena that other theories couldn’t, also required renormalisation in different contexts, but for the same purpose: to keep the equations from predicting infinities. Infinity is a strange number that compromises our ability to make sense of the natural universe because it spreads itself like an omnipresent screen, obstructing our view of the things beyond. To get to them, you must scale an unscaleable barrier.

While the purpose of renormalisation has stayed the same, it has taken on new forms in different contexts. For example, quantum electrodynamics (QED) studies the behaviour of charged particles using the rules of quantum physics – as opposed to classical electrodynamics, which is an extension of Newtonian physics. In QED, the ‘bare’ charge of an electron actually comes out to be infinite, because on its own the theory doesn’t have a way to explain why the force exerted by a charged particle decreases as you move away from it. But in reality electrons and protons have finite charges. How do we fix the discrepancy?

The path of renormalisation here is as follows: Physicists assume that any empty space is not really empty. There may be no matter there, sure, but at the microscopic scale, the vacuum is said to be teeming with virtual particles. These are pairs of particles that pop in and out of existence over very short time scales. The energy that produces them, and the energy that they release when they annihilate each other and vanish, is what physicists assume to be the energy inherent to space itself.

Now, say an electron-positron pair, called ‘e’ and ‘p’, pops up near an independent electron, ‘E’. The positron is the antiparticle of the electron and has a positive charge, so it will move closer to E. As a result, the electromagnetic force exerted by E’s electric charge becomes screened at a certain distance away, and the reduced force implies a lower effective charge. As the virtual particle pairs constantly flicker around the electron, QED says that we can observe only the effects of its screened charge.
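This screening is what the standard ‘running coupling’ formula captures. At one loop (the simplest level of approximation, and assuming only electron-positron pairs contribute), the effective fine-structure constant grows slowly as the energy scale Q increases – that is, as you probe closer to the electron and see past more of the screening:

```latex
% One-loop running of the QED coupling with energy scale Q (for Q much larger than
% the electron mass m_e); \alpha is the usual low-energy fine-structure constant.
\alpha_{\text{eff}}(Q^{2}) \;\approx\;
\frac{\alpha}{1 \;-\; \dfrac{\alpha}{3\pi}\,\ln\!\dfrac{Q^{2}}{m_{e}^{2}}}
```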

By the 1960s, physicists had found several fundamental particles and were trying to group them in a way that made sense – i.e. that said something about why these were the fundamental particles and not others, and whether an incomplete pattern might suggest the presence of particles still to be discovered. In 1964, two physicists working independently – George Zweig and Murray Gell-Mann – proposed that protons and neutrons were not fundamental particles but were made up of smaller particles called quarks, bound together by particles called gluons (thus the name). They also said that there were three kinds of quarks. Each quark had an electric charge and a spin, just like the electron.

Within a year, Oscar Greenberg proposed that the quarks would also have an additional ‘colour charge’ to explain why they don’t violate Pauli’s exclusion principle. (The term ‘colour’ has nothing to do with actual colours; it is just the label that unimaginative physicists settled on when they were looking for one.) Around the same time, James Bjorken and Sheldon Glashow proposed that there would have to be a fourth kind of quark, because with it the quark model could explain three more problems that were unsolved at the time. In 1968, physicists found the first experimental evidence for quarks, indicating that Zweig, Gell-Mann, Glashow, Bjorken, Greenberg and others were on the right track. But as usual, there was a problem.

Quantum chromodynamics (QCD) is the study of quarks and gluons. In QED, if an electron and a positron interact at higher energies, their coupling becomes stronger. But physicists who designed experiments in which they could observe quarks found the opposite to be true: at higher energies, the quarks in a bound state behaved more and more like free, individual particles, while at lower energies the effects of the individual quarks didn’t show – only those of the bound state did. Seen another way: if you move an electron and a positron apart, the force between them gradually drops off to zero, but if you move two quarks apart, the force between them grows instead of falling off. It seemed that QCD would defy the kind of renormalisation that had worked for QED.

A breakthrough came in 1973. If a quark ‘Q’ is surrounded by virtual quark-antiquark pairs ‘q’ and ‘q*’, then q* will move closer to Q and screen Q’s colour charge – just as in QED. However, gluons have the dubious distinction of being their own antiparticles, so some of these virtual pairs are also gluon-gluon pairs – and gluons carry colour charge too. When two quarks are moved apart, the space in between is occupied by gluon-gluon pairs that bring in more and more colour charge, leading to the counterintuitive effect described above.
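This anti-screening is the asymptotic freedom that David Gross, Frank Wilczek and David Politzer demonstrated in 1973. It is usually summarised by the one-loop formula for the strong coupling, in which the gluon contribution outweighs the quark contribution and flips the behaviour relative to the QED formula above:

```latex
% One-loop running of the strong coupling: n_f is the number of quark flavours and
% \Lambda_{QCD} the scale below which the coupling becomes large (quarks confined).
% The coupling shrinks as Q^2 grows -- asymptotic freedom.
\alpha_{s}(Q^{2}) \;\approx\;
\frac{12\pi}{(33 - 2 n_{f})\,\ln\!\dfrac{Q^{2}}{\Lambda_{\text{QCD}}^{2}}}
```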

However, QCD has had need of renormalisation in other areas, such as with the quark self-energy. Recall the electron and its electromagnetic mass in classical electrodynamics? That mass came from the energy of the electromagnetic field that the electron cast around itself. This energy is called self-energy. Similarly, quarks bear an electric charge as well as a colour charge and cast a chromo-electric field around themselves. The resulting self-energy, as in the classical electron example, threatens to blow up to an extremely high value – at odds with reality, where quarks have a finite self-energy.

However, simply adding virtual particles won’t solve the problem either, because of the counterintuitive effects of the colour charge and the presence of gluons. So physicists are forced to adopt a more convoluted path in which they use both renormalisation and regularisation, while ensuring that the latter turns out like the goats – the new factor introduced into the equations doesn’t remain in the ultimate solution. The mathematics of QCD is a lot more complicated than that of QED (its calculations are notoriously hard even for specially trained physicists), so the renormalisation and regularisation process is correspondingly inaccessible to non-physicists: more than anything, it is steeped in technical mathematical machinery.

All this said, renormalisation is obviously quite inelegant. The famous British physicist Paul A.M. Dirac, one of the pioneers of quantum electrodynamics, called it “ugly”. This attitude changed largely due to the work of Kenneth Wilson. (By the way, Wilson’s PhD supervisor was Gell-Mann.)

Quarks and gluons together make up protons and neutrons. Protons, neutrons and electrons, plus the forces between them, make up atoms. Atoms make up molecules, molecules make up compounds and many compounds together, in various quantities, make up the objects we see all around us.

This description encompasses three broad scales: the microscopic, the mesoscopic and the macroscopic. Wilson developed a theory that acts like a bridge between the forces that quarks experience at the microscopic scale and the forces that cause larger objects to undergo phase transitions (i.e. go from solid to liquid or liquid to vapour, etc.). When a quark enters or leaves a bound state, or is acted on by other particles, its energy changes – which is also what happens in phase transitions: objects gain or lose energy and reorganise themselves (liquid → vapour) to hold or shed that energy.
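Wilson’s bridge is usually written as a ‘flow’: you ask how a coupling g (the strength of some interaction) changes as you change the scale μ at which you examine the system, and the same kind of equation applies whether the system is a collection of quarks or a material near a phase transition. A schematic form:

```latex
% The renormalisation-group flow equation: \beta encodes how the coupling g changes
% with the observation scale \mu. Points where \beta(g) = 0 are scale-invariant --
% which is exactly the behaviour of a system at a phase transition.
\mu \,\frac{\mathrm{d}g}{\mathrm{d}\mu} \;=\; \beta(g)
```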

By establishing this relationship, Wilson could bring insights gleaned at one scale to bear on difficult problems at another, and thus make corrections that were more streamlined and more elegant. This is quite clever because renormalisation is, at bottom, the act of substituting what we are modelling with what we are able to observe – and Wilson improved on it by dropping the direct substitution in favour of something more mathematically robust. After this point in history, physicists adopted renormalisation as a tool much more widely across several branches of physics. As physicist Leo Kadanoff wrote in his obituary for Wilson in Nature, “It could … be said that Wilson has provided scientists with the single most relevant tool for understanding the basis of physics.”

That said, the importance of renormalisation – or anything like it that compensates for the shortcomings of observation-based theories – was appreciated earlier as well, so much so that physicists considered a theory that couldn’t be renormalised to be inferior to one that could be. This was responsible for at least a part of Sheldon Glashow’s objection to Abdus Salam winning a share of the physics Nobel Prize.

Sources:

  1. Introduction to QCD, Michelangelo L. Mangano
  2. Lectures on QED and QCD, Andrey Grozin
  3. Lecture notes – Particle Physics II, Michiel Botje
  4. Lecture 5: QED
  5. Introduction to QCD, P.Z. Skands
  6. Renormalization: Dodging Infinities, John G. Cramer