A physics story of infinities, goats and colours

When I was writing in August about physicist Sheldon Glashow’s objection to Abdus Salam being awarded a share of the 1979 physics Nobel Prize, I learnt that it was because Salam had derived, by a different route, a theory that Glashow had derived as well – and in both cases the final product was non-renormalisable. A year or so later, Steven Weinberg derived the same theory but this time also ensured that it was renormalisable. Glashow said Salam shouldn’t have won the prize because Salam hadn’t brought anything new to the table, whereas Glashow had derived the initial theory and Weinberg had made it renormalisable.

His objections aside, the episode brought to my mind the work of Kenneth Wilson, who made important contributions to the renormalisation toolkit. Using these tools, physicists ensure that the equations they’re using to model reality don’t get out of hand and predict impossible values. An equation might be useful for solving problems in 99 scenarios but in the 100th it might predict an infinity (i.e. the value of a physical variable grows without bound), rendering the equation useless. In such cases, physicists use renormalisation techniques to ensure the equation works in that scenario as well, without predicting infinities. (This is a simplistic description that I will finesse throughout this post.)

In 2013, when Kenneth Wilson died, I wrote about the “Indian idea of infiniteness” – including how scholars in ancient India had contemplated very large numbers and their origins, only for this knowledge to have all but disappeared from the public imagination today because of the country’s failure to preserve it. In both instances, I never quite fully understood what renormalisation really entailed. The following post is an attempt to fix this gap.

You know electrons. Electrons have mass. But not all of this mass arises the same way. Some of it is the mass of the particle itself, sometimes called the shell mass. The electron also has an electric charge and casts a small electromagnetic field around itself. This field has some energy. According to the mass-energy equivalence (E = mc²), this energy should correspond to some mass. This is called the electron’s electromagnetic mass.

Now, there is an equation to calculate how much a particle’s electromagnetic mass will be – and this equation shows that this mass is inversely proportional to the particle’s radius. That is, the smaller the particle, the greater its electromagnetic mass. This is why the proton, which is larger than the electron, gets a smaller contribution to its mass from its electromagnetic mass.

So far so good – but quickly a problem arises. As the particle becomes smaller, according to the equation, its electromagnetic mass will increase. In technical terms, as the particle radius approaches zero, its mass will approach infinity. If its mass approaches infinity, the particle will be harder to move from rest, or accelerate, because a very large and increasing amount of energy will be required to do so. So the equation predicts that smaller charged particles, like quarks, should be nearly impossible to move around. Yet this is not what we see in experiments, where these particles do move around.
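
To get a feel for how quickly this blows up, here is a minimal numerical sketch of my own (not from the original argument), assuming the textbook field-energy estimate for a charged shell; other conventions differ by factors like 2/3 or 4/3:

```python
# A rough numerical sketch of how the classical electromagnetic mass grows as the
# assumed particle radius shrinks. It uses the field-energy estimate for a charged
# shell, U = e^2 / (8*pi*eps0*r), and m_em = U / c^2 -- one common textbook
# convention; other conventions differ by factors like 2/3 or 4/3.

import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 299792458.0          # speed of light, m/s

def electromagnetic_mass(radius_m):
    """Field-energy mass of a charged shell of the given radius (kg)."""
    energy = e**2 / (8 * math.pi * eps0 * radius_m)
    return energy / c**2

for r in (1e-15, 1e-18, 1e-21):   # shrinking the radius by factors of 1,000
    print(f"r = {r:.0e} m  ->  m_em = {electromagnetic_mass(r):.3e} kg")
# The printed masses grow by the same factor of 1,000 each time:
# as r -> 0, m_em -> infinity, which is the problem renormalisation addresses.
```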

In the first decade of the 20th century (when the equation existed but quarks had not yet been discovered), Max Abraham and Hendrik Lorentz resolved this problem by assuming that the shell mass of the particle is negative. It was the earliest (recorded) instance of such a tweak – made so that the equations we use to model reality don’t lose touch with that reality – and was called renormalisation. Assuming the shell mass is negative is silly, of course, but it doesn’t affect the final result in a way that breaks the theory. To renormalise, in this context, is to assume that our mathematical knowledge of the event being modelled is not complete enough, or that introducing such completeness would render most other problems intractable.
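
In symbols, the bookkeeping behind that tweak looks roughly like this – a schematic sketch of the idea, not the full Abraham-Lorentz calculation:

```latex
% Schematic mass-renormalisation bookkeeping (a sketch, not the full
% Abraham-Lorentz treatment). m_obs is the finite mass we actually measure.
m_{\text{obs}} \;=\; m_{\text{shell}} + m_{\text{em}}(r),
\qquad m_{\text{em}}(r) \;\propto\; \frac{e^2}{r} \;\xrightarrow{\;r \to 0\;}\; \infty
% The trick: let the unobservable shell mass be negative and r-dependent,
% m_shell(r) = m_obs - m_em(r), so the divergence cancels and only the finite,
% measured m_obs survives in every prediction.
```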

There is another route physicists take to make sure equations and reality match, called regularisation. This is arguably more intuitive. Here, the physicist modifies the equation to include a ‘cutoff factor’ that represents what the physicist assumes is their incomplete knowledge of the phenomenon to which the equation is being applied. By applying a modified equation in this way, the physicist argues that some ‘new physics’ will be discovered in future that will complete the theory and the equation to perfectly account for the mass.
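
For the electron’s self-energy, for instance, the cutoff shows up like this – a standard textbook sketch, with Λ standing in for the distance below which we admit our ignorance:

```latex
% Energy stored in the electric field of a point charge, integrated only
% down to a short-distance cutoff Lambda instead of all the way to zero:
E_{\text{field}} \;=\; \int_{\Lambda}^{\infty} \frac{\epsilon_0}{2}\,|\mathbf{E}|^2 \, dV
\;=\; \frac{e^2}{8\pi\epsilon_0\,\Lambda}
% The divergence at r -> 0 is traded for a dependence on Lambda, with the hope
% that 'new physics' below that scale eventually fixes the value.
```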

(I personally prefer regularisation because it seems more modest, but this is an aesthetic choice that has nothing to do with the physics itself and is thus moot.)

It is sometimes the case that once a problem is solved by regularisation, the cutoff factor disappears from the final answer – it helps solve the problem, yet its presence or absence doesn’t affect the answer.

This brings to mind the famous folk tale of the goat negotiation problem, doesn’t it? A fellow in a village dies and bequeaths his 17 goats to three sons thus: the eldest gets half, the middle gets a third and the youngest gets one-ninth. Obviously the sons get into a fight: the eldest claims nine instead of 8.5 goats, the middle claims six instead of 5.67 and the youngest claims two instead of 1.89. But then a wise old woman turns up and figures it out. She adds one of her own goats to the father’s 17 to make up a total of 18. Now, the eldest son gets nine goats, the middle son gets six goats and the youngest son gets two goats. Problem solved? When the sons tally up the goats they received, they realise that the total is still 17. The old woman’s goat is left over, which she takes back before getting on her way. The one additional goat was the cutoff factor here: you add it to the problem, solve it, get a solution and move on.
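
The arithmetic behind the trick, as a quick check (an illustrative snippet of mine, not from the original tale – the shares the father specified never added up to a whole in the first place):

```python
# The goat arithmetic, checked with exact fractions.
from fractions import Fraction

shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]
print(sum(shares))                       # 17/18 -- the shares never add up to a whole
print([int(18 * s) for s in shares])     # [9, 6, 2] -- whole numbers once the 18th goat joins
print(sum(int(18 * s) for s in shares))  # 17 -- the borrowed goat is left over
```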

The example of the electron was suitable but also convenient: the need to renormalise particle masses originally arose in the context of classical electrodynamics – the first theory developed to study the behaviour of charged particles. Theories that physicists developed later, in each case to account for some phenomena that other theories couldn’t, also required renormalisation in different contexts, but for the same purpose: to keep the equations from predicting infinities. Infinity is a strange number that compromises our ability to make sense of the natural universe because it spreads itself like an omnipresent screen, obstructing our view of the things beyond. To get to them, you must scale an unscaleable barrier.

While the purpose of renormalisation has stayed the same, it took on new forms in different contexts. For example, quantum electrodynamics (QED) studies the behaviour of charged particles using the rules of quantum physics – as opposed to classical electrodynamics, which is an extension of Newtonian physics. In QED, the charge of an electron actually comes out to be infinite. This is because QED doesn’t have a way to explain why the force exerted by a charged particle decreases as you move away. But in reality electrons and protons have finite charges. How do we fix the discrepancy?

The path of renormalisation here is as follows: Physicists assume that any empty space is not really empty. There may be no matter there, sure, but at the microscopic scale, the vacuum is said to be teeming with virtual particles. These are pairs of particles that pop in and out of existence over very short time scales. The energy that produces them, and the energy that they release when they annihilate each other and vanish, is what physicists assume to be the energy inherent to space itself.

Now, say an electron-positron pair, called ‘e’ and ‘p’, pops up near an independent electron, ‘E’. The positron is the antiparticle of the electron and has a positive charge, so it will move closer to E. As a result, the electromagnetic force exerted by E’s electric charge becomes screened at a certain distance away, and the reduced force implies a lower effective charge. As the virtual particle pairs constantly flicker around the electron, QED says that we can observe only the effects of its screened charge.
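
This screening is what textbooks package into an ‘effective’ or ‘running’ charge that depends on how closely you probe the electron. A sketch of the standard one-loop result (not a derivation, and with only the electron loop included):

```latex
% One-loop running of the QED coupling alpha = e^2 / (4 pi epsilon_0 hbar c),
% valid for momentum transfers Q much larger than the electron mass m_e
% (natural units, c = hbar = 1):
\alpha_{\text{eff}}(Q^2) \;\approx\; \frac{\alpha}{1 - \dfrac{\alpha}{3\pi}\,\ln\!\left(\dfrac{Q^2}{m_e^2}\right)}
% Probe more closely (larger Q) and you see less of the screening cloud,
% so the effective charge creeps upward from its long-distance value.
```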

By the 1960s, physicists had found several fundamental particles and were trying to group them in a way that made sense – i.e. that said something about why these were the fundamental particles and not others, and whether an incomplete pattern might suggest the presence of particles still to be discovered. Subsequently, in 1964, two physicists working independently – George Zweig and Murray Gell-Mann – proposed that protons and neutrons were not fundamental particles but were made up of smaller particles called quarks and gluons. They also said that there were three kinds of quarks and that the quarks could bind together using the gluons (thus the name). Each of these particles had an electric charge and a spin, just like electrons.

Within a year, Oscar Greenberg proposed that the quarks would also have an additional ‘colour charge’ to explain why they don’t violate Pauli’s exclusion principle. (The term ‘colour’ has nothing to do with colours; it is just the label that unimaginative physicists selected when they were looking for one.) Around the same time, James Bjorken and Sheldon Glashow also proposed that there would have to be a fourth kind of quark, because such a quark would let the model explain three more problems that were unsolved at the time. In 1968, physicists discovered the first experimental evidence for quarks and gluons, proving that Zweig, Gell-Mann, Glashow, Bjorken, Greenberg, etc. were right. But as usual, there was a problem.

Quantum chromodynamics (QCD) is the study of quarks and gluons. In QED, if an electron and a positron interact at higher energies, their coupling will be stronger. But physicists who designed experiments in which they could observe the presence of quarks found the opposite was true: at higher energies, the quarks in a bound state behaved more and more like individual particles, whereas at lower energies, the effects of the individual quarks didn’t show, only that of the bound state. Seen another way, if you move an electron and a positron apart, the force between them gradually drops off to zero. But if you move two quarks apart, the force between them doesn’t drop off – over short distances it actually increases. It seemed that QCD would defy QED-style renormalisation.

A breakthrough came in 1973. If a quark ‘Q’ is surrounded by virtual quark-antiquark pairs ‘q’ and ‘q*’, then q* would move closer to Q and screen Q’s colour charge. However, the gluons have the dubious distinction of being their own antiparticles. So some of these virtual pairs are also gluon-gluon pairs. And gluons also carry colour charge. When the two quarks are moved apart, the space in between is occupied by gluon-gluon pairs that bring in more and more colour charge, leading to the counterintuitive effect.
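
The 1973 result – asymptotic freedom – is usually summarised by the one-loop formula for the strong coupling; here is a sketch of the standard textbook expression:

```latex
% One-loop running of the strong coupling, for momentum transfer Q well above
% the QCD scale Lambda (~0.2 GeV) and with n_f active quark flavours:
\alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(\dfrac{Q^2}{\Lambda^2}\right)}
% Because 33 - 2 n_f is positive for six (or fewer) flavours, alpha_s shrinks as
% Q grows: probe quarks very closely and the force looks feeble; pull them apart
% and it grows -- the anti-screening described above.
```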

However, QCD has had need of renormalisation in other areas, such as with the quark self-energy. Recall the electron and its electromagnetic mass in classical electrodynamics? This mass was the product of the electromagnetic energy field that the electron cast around itself. This energy is called self-energy. Similarly, quarks bear an electric charge as well as a colour charge and cast a chromo-electric field around themselves. The resulting self-energy, like in the classical electron example, threatens to reach an extremely high value – at odds with reality, where quarks have a relatively lower, certainly finite, self-energy.

However, the simple addition of virtual particles wouldn’t solve the problem either, because of the counterintuitive effects of the colour charge and the presence of gluons. So physicists are forced to adopt a more convoluted path in which they use both renormalisation and regularisation, while ensuring that the latter turns out like the goats – the new factor introduced into the equations doesn’t remain in the ultimate solution. The mathematics of QCD is a lot more complicated than that of QED (its calculations are notoriously hard even for specially trained physicists), so the renormalisation and regularisation process is also correspondingly inaccessible to non-physicists. More than anything, it is steeped in mathematical techniques.

All this said, renormalisation is obviously quite inelegant. The famous British physicist Paul A.M. Dirac, whose work laid the foundations of quantum electrodynamics, called it “ugly”. This attitude changed largely thanks to the work of Kenneth Wilson. (By the way, his PhD supervisor was Gell-Mann.)

Quarks and gluons together make up protons and neutrons. Protons, neutrons and electrons, plus the forces between them, make up atoms. Atoms make up molecules, molecules make up compounds and many compounds together, in various quantities, make up the objects we see all around us.

This description encompasses three broad scales: the microscopic, the mesoscopic and the macroscopic. Wilson developed a theory to act like a bridge – between the forces that quarks experience at the microscopic scale and the forces that cause larger objects to undergo phase transitions (i.e. go from solid to liquid or liquid to vapour, etc.). When a quark enters or leaves a bound state or if it is acted on by other particles, its energy changes, which is also what happens in phase transitions: objects gain or lose energy, and reorganise themselves (liquid –> vapour) to hold or shed that energy.
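
The canonical toy picture of this bridging is Kadanoff-Wilson ‘block-spin’ coarse-graining, where you repeatedly average small patches of a system and ask how its description changes with scale. Here is a minimal illustrative sketch of my own (not Wilson’s actual calculation):

```python
# A toy "block-spin" coarse-graining in the spirit of Kadanoff and Wilson:
# repeatedly replace 3x3 blocks of up/down spins by their majority, and watch
# how the description of the same system changes with scale.

import numpy as np

rng = np.random.default_rng(0)

def block_spin(lattice, b=3):
    """Coarse-grain a square lattice of +1/-1 spins by majority rule over b x b blocks."""
    n = (lattice.shape[0] // b) * b
    trimmed = lattice[:n, :n]
    blocks = trimmed.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

# Start from a mostly-ordered lattice with some noise (as if below the critical point).
spins = np.where(rng.random((81, 81)) < 0.9, 1, -1)

level = spins
for step in range(3):
    print(f"scale {step}: {level.shape[0]}x{level.shape[0]} lattice, "
          f"magnetisation = {level.mean():+.2f}")
    level = block_spin(level)
```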

By establishing this relationship, Wilson could bring to bear insights gleaned from one scale on difficult problems at a different scale, and thus make corrections that were more streamlined and more elegant. This is quite clever because renormalisation is, at bottom, the act of substituting what we are modelling with what we are able to observe – an act Wilson improved on by dropping the direct substitution in favour of something more mathematically robust. After this point in history, physicists adopted renormalisation as a tool more widely across several branches of physics. As physicist Leo Kadanoff wrote in his obituary for Wilson in Nature, “It could … be said that Wilson has provided scientists with the single most relevant tool for understanding the basis of physics.”

This said, however, the importance of renormalisation – or anything like it that compensates for the shortcomings of observation-based theories – was known earlier as well, so much so that physicists considered a theory that couldn’t be renormalised to be inferior to one that could be. This was responsible for at least a part of Sheldon Glashow’s objection to Abdus Salam winning a share of the physics Nobel Prize.

Sources:

  1. Introduction to QCD, Michelangelo L. Mangano
  2. Lectures on QED and QCD, Andrey Grozin
  3. Lecture notes – Particle Physics II, Michiel Botje
  4. Lecture 5: QED
  5. Introduction to QCD, P.Z. Skands
  6. Renormalization: Dodging Infinities, John G. Cramer

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Like electrons have electric charge, particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (pp. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that all particles with colour charge can’t ever be isolated. They’re always to be found only in pairs or bigger clumps. They can be isolated in theory if the clumps are heated to the Hagedorn temperature: 1,000 billion billion billion K. But the bigness of this number has ensured that this temperature has remained theoretical. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks and gluons is called the strong nuclear force. But this phrasing is a little misleading: the gluons actually mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.
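
The rough arithmetic behind that 99% figure (an illustrative snippet of mine, using approximate current-quark masses of about 2.2 MeV for the up quark and 4.7 MeV for the down):

```python
# Rough arithmetic behind the "99%" claim (quark masses are approximate).
m_up, m_down, m_proton = 2.2, 4.7, 938.3   # MeV/c^2
quark_total = 2 * m_up + m_down            # a proton = two up quarks + one down quark
print(f"quarks: {quark_total:.1f} MeV of {m_proton} MeV "
      f"= {100 * quark_total / m_proton:.1f}% of the proton's mass")
# ~1%; the rest is the energy of the gluon field and the quarks' motion.
```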

When you pull two quarks apart, you’d think the force between them will reduce. It doesn’t; it actually increases. This is very counterintuitive. For example, the gravitational force exerted by Earth drops off the farther you get away from it. The electromagnetic force between an electron and a proton decreases the more they move apart. The strong nuclear force is the only one where the force between the two particles it acts on increases as they move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.
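
A common phenomenological way to capture this behaviour – a textbook model, not something from the original article – is the so-called Cornell potential between a quark and an antiquark:

```latex
% Cornell potential between a heavy quark and antiquark separated by r:
% a Coulomb-like term at short range plus a linearly rising 'string' term.
V(r) \;=\; -\frac{4}{3}\,\frac{\alpha_s}{r} \;+\; \kappa\, r
% The string tension kappa is of order 1 GeV per femtometre, so the attractive
% force never dies away with distance -- the 'runaway' behaviour in the text.
```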

However, in 1973, Wilczek, David Gross and David Politzer found the flip side of this behaviour: at separations much smaller than about 1 fermi (0.000000000000001 metres, roughly the size of a proton), the strong nuclear force becomes weaker and weaker, so quarks that are very close together behave almost as if they were free particles. This is called asymptotic freedom: as the distance between the quarks shrinks, the force between them drops off asymptotically towards zero. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for this work.

In the parlance of particle physics, what makes asymptotic freedom possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart – if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t asymptotic freedom violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It’s got some energy of itself, which astrophysicists call ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far shorter than a second before dissipating into energy. When a charged particle pops into being, its charge attracts other particles of opposite charge towards itself and repels particles of the same charge away. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build-up of energy as quarks are pulled apart – the flip side of asymptotic freedom.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both colour and anti-colour. In physical terms, this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also have a colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, the quark turns green whereas the (green) quark that receives the gluon will turn blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there is the one gluon that can carry one of nine combinations of colour-anticolour charges.
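
Counting the bookkeeping those calculations have to juggle, as a small illustrative snippet (mine, not from the article):

```python
# Counting flavour-colour and colour-anticolour combinations in QCD.
from itertools import product

flavours = ["up", "down", "charm", "strange", "top", "bottom"]
colours = ["red", "green", "blue"]
anticolours = ["antired", "antigreen", "antiblue"]

quark_states = list(product(flavours, colours))
gluon_states = list(product(colours, anticolours))

print(len(quark_states), "flavour-colour combinations for quarks")    # 18
print(len(gluon_states), "colour-anticolour combinations for gluons")  # 9
# (In the full theory, one colour-neutral combination drops out, leaving
# 8 independent gluon states -- but the raw counting is the point here.)
```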

The Wire
September 20, 2017


Colliders of the future: LHeC and FCC-he

In this decade, CERN is exploiting and upgrading the LHC – but not constructing “the next big machine”.


Looking into a section of the 6.3-km long HERA tunnel at Deutsches Elektronen-Synchrotron (DESY), Hamburg. Source: DESY

For many years, one of the world’s most powerful scopes, as in a microscope, was the Hadron-Elektron Ring Anlage (HERA) particle accelerator in Germany. Where scopes bounce electromagnetic radiation – like visible light – off surfaces to reveal information hidden to the naked eye, accelerators reveal hidden information by bombarding the target with energetic particles. At HERA, those particles were electrons accelerated to 27.5 GeV. At this energy, the particles can probe a distance of a few hundredths of a femtometer (earlier called fermi) – 2.5 million times better than the 0.1 nanometers that atomic force microscopy can achieve (of course, they’re used for different purposes).
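
A quick back-of-the-envelope version of that resolution estimate, using the standard relation between a probe’s energy and its wavelength (a sketch, not a figure from the article):

```latex
% Resolution of a 27.5 GeV electron probe, using hbar*c ~ 0.197 GeV fm:
\lambda \;\approx\; \frac{\hbar c}{E} \;\approx\; \frac{0.197\ \text{GeV fm}}{27.5\ \text{GeV}} \;\approx\; 0.007\ \text{fm}
% i.e. of the order of a hundredth of a femtometre, as quoted above.
```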

The electrons were then collided head on against protons accelerated to 920 GeV.

Unlike protons, electrons aren’t made up of smaller particles and are considered elementary. Moreover, protons are approx. 2,000-times heavier than electrons. As a result, the high-energy collision is more an electron scattering off of a proton, but the way it happens is that the electron imparts some energy to the proton before scattering off (this is imagined as an electron emitting some energy as a photon, which is then absorbed by the proton). This is called deep inelastic scattering: ‘deep’ for high-energy; ‘inelastic’ because the proton absorbs some energy.
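
For a sense of the energies involved in such head-on collisions, the effective centre-of-mass energy works out to roughly (neglecting the particle masses – a standard textbook approximation):

```latex
% Centre-of-mass energy of HERA's 27.5 GeV electrons on 920 GeV protons
% (particle masses neglected):
\sqrt{s} \;\approx\; \sqrt{4\,E_e\,E_p} \;=\; \sqrt{4 \times 27.5 \times 920}\ \text{GeV} \;\approx\; 318\ \text{GeV}
```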

One of the most famous deep-inelastic scattering experiments was conducted in 1968 at the Stanford Linear Accelerator Centre. Then, the perturbed protons were observed to ’emit’ other particles – essentially hitherto undetected constituent particles that escaped their proton formation and formed other kinds of composite particles. The constituent particles were initially dubbed partons but later found to be quarks, anti-quarks (the matter/anti-matter particles) and gluons (the force-particles that held the quarks/anti-quarks together).

HERA was shut down in June 2007. Five years later, plans for a successor at least 100-times more sensitive than HERA were presented – in the form of the Large Hadron-electron Collider (LHeC). As the name indicates, it is proposed to be built adjoining the Large Hadron Collider (LHC) complex at CERN by 2025 – a timeframe based on when the high-luminosity phase of the LHC is set to begin (2024).

Timeline for the LHeC. Source: CERN

On December 15, physicists working on the LHC announced new results obtained from the collider – two in particular stood out. One was a cause for great, yet possibly premature, excitement: a hint of a yet unknown particle weighing around 747 GeV. The other was cause for a bit of dismay: quantum chromodynamics (QCD), the theory that deals with the physics of quarks, anti-quarks and gluons, seemed flawless across a swath of energies. Some physicists were hoping it wouldn’t be so (because its flawlessness has come at the cost of being unable to explain some discoveries, like dark matter). Over the next decade, the LHC will push the energy frontier further to see – among other things – if QCD ‘breaks’, becoming unable to explain a possible new phenomenon.

Against this background, the LHeC is being pitched as a machine that could be dedicated to examining this breakpoint and some others like it, and in more detail than the LHC is equipped to. One helpful factor is that when electrons are one kind of particle participating in a collision, physicists don’t have to worry about how the energy will be distributed among constituent particles, since electrons don’t have any. Hadron collisions, on the other hand, have to deal with quarks, anti-quarks and gluons, and are tougher to analyse.

An energy recovery linac (in red) shown straddling the LHC ring. A rejected design involved installing the electron-accelerator (in yellow) concentrically with the LHC ring. Source: CERN

So, to accomplish this, the team behind the LHeC is considering installing a pill-shaped machine called the energy recovery linac (ERL), straddling the LHC ring (shown above), to produce a beam of electrons that would then collide with the protons accelerated in the main LHC ring – making up the ‘linac-ring LHeC’ design. An earlier suggestion to install the LHeC as an electron-accelerating ring alongside the LHC ring was rejected because its construction would hamper the experiments already running. The electrons will be accelerated to 60 GeV and the protons to 7,000 GeV. The total wall-plug power to the ERL is being capped at 100 MW.

The ERL has a slightly different acceleration mechanism from the LHC, and doesn’t simply accelerate particles continuously around a ring. First, the electrons are accelerated through a radio-frequency field in a linear accelerator (linac – the straight section of the ERL) and then fed into a circular channel, crisscrossed by magnetic fields, curving into the rear end of the linac. The length of the circular channel is such that by the time the electrons have travelled along it, they arrive back at the linac 180º out of phase with the radio-frequency field that accelerated them. So when the out-of-phase electrons re-enter the linac, they decelerate. Their kinetic energy is returned to the RF field, which intensifies and so provides a bigger kick to the new batch of particles being injected into the linac at just that moment. This way, the linac recovers the kinetic energy from each circulation.

Such a mechanism is employed because the amount of energy lost in the form of synchrotron radiation – emitted when particles are bent along a curve by magnetic fields – increases drastically as the particle’s mass gets lower.
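
The standard scaling makes the point (a textbook relation, not a number specific to the LHeC):

```latex
% Energy radiated per turn by a particle of energy E and mass m bent around a
% ring of bending radius rho (the standard synchrotron-radiation scaling):
\Delta E_{\text{per turn}} \;\propto\; \frac{E^4}{m^4\,\rho}
% At the same energy and radius, an electron (about 1/1836 the mass of a proton)
% radiates roughly 1836^4, i.e. about 10^13, times more than a proton -- hence
% the appeal of recovering the electrons' energy instead of throwing it away.
```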

The bluish glow from the central region of the Crab Nebula is due to synchrotron radiation. Credit: NASA-ESA/Wikimedia Commons

Keeping in mind the need to explore new areas of physics – especially those associated with leptons (elementary particles of which electrons are a kind) and quarks/gluons (described by QCD) – the energy of the electrons coming out of the ERL is currently planned to be 60 GeV. They will be collided with accelerated protons by positioning the ERL tangential to the LHC ring. And at the moment of the collision, CERN’s scientists hope that they will be able to use the LHeC to study:

  • Predicted unification of the electromagnetic and weak forces (into an electroweak force): The electromagnetic force of nature is mediated by the particles called photons while the weak force, by particles called W and Z bosons. Whether the scientists will observe the unification of these forces, as some theories predict, is dependent on the quality of electron-proton collisions. Specifically, if the square of the momentum transferred between the particles can reach up to 8-9 TeV, the collider will have created an environment in which physicists will be able to probe for signs of an electroweak force at play.
  • Gluon saturation: To quote from an interview given by theoretical physicist Raju Venugopalan in January 2013: “We all know the nuclei are made of protons and neutrons, and those are each made of quarks and gluons. There were hints in data from the HERA collider in Germany and other experiments that the number of gluons increases dramatically as you accelerate particles to high energy. Nuclear physics theorists predicted that the ions accelerated to near the speed of light at the [Relativistic Heavy Ion Collider] would reach an upper limit of gluon concentration – a state of gluon saturation we call colour glass condensate.”
  • Higgs bosons: On July 4, 2012, Fabiola Gianotti, soon to be the next DG of CERN but then the spokesperson of the ATLAS experiment at the LHC, declared that physicists had found a Higgs boson. Widespread celebrations followed – while a technical nitpick remained: physicists only knew the particle resembled a Higgs boson and might not have been the real thing itself. Then, in March 2013, the particle was most likely identified as being a Higgs boson. And even then, one box remained to be checked: that it was the Higgs boson, not one of many kinds. For that, physicists have been waiting for more results from the upgraded LHC. But a machine like the LHeC would be able to produce a “few thousand” Higgs bosons a year, enabling physicists to study the elusive particle in more detail, confirm more of its properties – or, more excitingly, find that that’s not the case – and look for higher-energy versions of it.

A 2012 paper detailing the concept also notes that should the LHC find that signs of ‘new physics’ could exist beyond the default energy levels of the LHeC, scientists are bearing in mind the need for the electrons to be accelerated by the ERL to up to 140 GeV.

The default configuration of the proposed ERL. The bending arcs are totally about 19 km long (three to a side at different energies). Source: CERN

The unique opportunity presented by an electron-proton collider working in tandem with the LHC goes beyond the mammoth energies to a property called luminosity as well. Integrated luminosity is measured in units like the inverse femtobarn, which denotes the number of collision events per 10⁻³⁹ square centimetres. For example, 10 fb⁻¹ denotes 10 events per 10⁻³⁹ sq. cm – that’s 10⁴⁰ events per sq. cm. (The instantaneous luminosity, i.e. with a ‘per second’ in the unit, describes how quickly those collisions accumulate.) At the LHeC, a luminosity of 10³³ cm⁻² s⁻¹ is expected to be achieved, and physicists hope that with some tweaks it can be hiked by yet another order of magnitude. To compare: this is 100x what HERA achieved, providing an unprecedented scale at which to explore the effects of deep inelastic scattering, and 10x the LHC’s current luminosity.
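
Here’s how luminosity turns into event counts in practice – an illustrative sketch with made-up round numbers (the cross-section and running time below are hypothetical, not official LHeC figures):

```python
# How luminosity turns into event counts (illustrative numbers only).
luminosity = 1e33            # instantaneous luminosity, cm^-2 s^-1
seconds_per_year = 1e7       # a typical "accelerator year" of running
cross_section_fb = 200.0     # hypothetical cross-section of some process, in femtobarns
fb_to_cm2 = 1e-39            # 1 femtobarn = 10^-39 cm^2

integrated = luminosity * seconds_per_year   # integrated luminosity, cm^-2
integrated_fb_inv = integrated * fb_to_cm2   # the same, in fb^-1
events = cross_section_fb * integrated_fb_inv

print(f"integrated luminosity ~ {integrated_fb_inv:.0f} fb^-1 per year")
print(f"expected events       ~ {events:.0f}")
```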

It’s also 100x lower than that of the HL-LHC, which is the configuration of the LHC with which the ERL will be operating to make up the LHeC. And the LHeC’s lifetime will be the planned lifetime of the LHC, till the 2030s, about a decade. In the same period, if all goes well, a Chinese behemoth will have taken shape: the Circular Electron-Positron Collider (CEPC), with a circumference 2x that of the LHC. In its proton-proton collision configuration – paralleling the LHC’s – China claims it will reach energies of 70,000 GeV (as against the LHC’s current 14,000 GeV) and luminosity comparable to the HL-LHC. And when its electron-positron collision configuration, which the LHeC will be able to mimic, is at its best, physicists reckon the CEPC will be able to produce 100,000 Higgs bosons a year.

Timeline for operation of the Future Circular Collider being considered. Source: CERN

 

As it happens, some groups at CERN are already drawing up plans, due to be presented in 2018, for a machine dwarfing even the CEPC. Meet the Future Circular Collider (FCC), by one account the “ultimate precision-physics machine” (and funnily named by another). To be fair, the FCC has been under consideration since about 2013 and independent of the CEPC. However, in sheer size, the FCC could swallow the CEPC – with an 80-100 km-long ring. It will also be able to accelerate protons to 50,000 GeV (by 2040), attain luminosities of 10³⁵ cm⁻² s⁻¹, continue to work with the ERL, function as an electron-positron collider (video), and look for particles weighing up to 25,000 GeV (currently the heaviest known fundamental particle is the top quark, weighing 169-173 GeV).

An illustration showing a possible location and size, relative to the LHC (in white) of the FCC. The main tunnel is shown as a yellow dotted line. Source: CERN

And should it be approved and come online in the second half of the 2030s, there’s a good chance the world will be a different place, too: not just the CEPC – there will be (or will have been?) either the International Linear Collider (ILC) or the Compact Linear Collider (CLIC) as well. ‘Either’ because they’re both linear accelerators with similar physical dimensions, planning to collide electrons with positrons, their antiparticles, to study QCD, the Higgs field and the prospects of higher dimensions, so only one of them might get built. And they will require a decade to be built, coming online in the late 2020s. The biggest difference between them is that the ILC will be able to reach collision energies of 1,000 GeV while the CLIC (whose idea was conceived at CERN), of 3,000 GeV.

FCC-he = proton-electron collision mode; FCC-hh = proton-proton collision mode; SppC = CEPC’s proton-proton collision mode.

On meson decay-modes in studying CP violation

In particle physics, CPT symmetry is an attribute of the universe that is held as fundamentally true by quantum field theory (QFT). It states that the laws of physics should not change if a particle is replaced with its antiparticle (C symmetry), left and right are swapped (P symmetry), and every allowed motion is replaced by its time-reversed counterpart (T symmetry).

What this implies is a uniformity of the particle’s properties across time, charge and orientation, effectively rendering them conjugate perspectives.

(T-symmetry, called so for an implied “time reversal”, defines that if a process moves one way in time, its opposite is signified by its moving the other way in time.)

The more ubiquitously studied version of CPT symmetry is CP symmetry, with the assumption that T-symmetry is preserved. This is because CP-violation, when it was first observed by James Cronin and Val Fitch, shocked the world of physics, implying that something was off about the universe. Particles that ought to have remained “neutral” in terms of their properties were taking sides! (Note: CPT-symmetry is considered to be a “weaker symmetry” than CP-symmetry.)

Val Logsdon Fitch (L) and James Watson Cronin

In 1964, Oreste Piccioni, who had just migrated to the USA and was working at the Lawrence Berkeley National Laboratory (LBNL), observed that kaons – mesons each composed of a strange quark (or antiquark) paired with an up or down antiquark (or quark) – had a tendency to regenerate in one form when shot as a beam into matter.

The neutral kaon, denoted as K0, has two forms, the short-lived (KS) and the long-lived (KL). Piccioni found that kaons decay in flight, so a beam of kaons, over a period of time, becomes pure KL because the KS all decay away first. When such a beam is shot into matter, the K0 is scattered by protons and neutrons whereas the K0* (i.e., antikaons) contribute to the formation of a class of particles called hyperons.

Because of this asymmetric interaction, (quantum) coherence between the two batches of particles is lost, resulting in the emergent beam being composed of KS and KL, where the KS is regenerated by firing a K0-beam into matter.

When the results of Piccioni’s experiment were duplicated by Robert Adair in the same year, regeneration as a physical phenomenon became a new chapter in the study of particle physics. Later that year, Cronin and Fitch set out to study this regeneration themselves. During the decay process, however, they observed a strange phenomenon.

According to a theory formulated in the 1950s by Murray Gell-Mann and Kazuo Nishijima, and then by Gell-Mann and Abraham Pais in 1955-1957, the KS meson was allowed to decay into two pions in order for certain quantum mechanical states to be conserved, and the KL meson was allowed to decay into three pions.

For instance, the KL (s*, u) decay happens thus:

  1. s* → u* + W+ (weak interaction)
  2. W+ → d* + u
  3. u → u + g; g → d + d* (strong interaction)
  4. u → u (spectator)

A Feynman diagram depicting the decay of a KL meson into three pions.

In 1964, in their landmark experiment, Cronin and Fitch observed, however, that the KL meson was decaying into two pions, albeit at a frequency of 1-in-500 decays. This implied an indirect instance of CP-symmetry violation, and subsequently won the pair the 1980 Nobel Prize for Physics.

An important aspect of the observation of CP-symmetry violation in kaons is that the weak force is involved in the decay process (even as observed above in the decay of the KL meson). Even though the kaon is composed of a quark and an antiquark, i.e., held together by the strong force, its decay is mediated by the strong and the weak forces.

In all weak interactions, parity is not conserved. The interaction itself acts only on left-handed particles and right-handed anti-particles, and was parametrized in what is called the V-A Lagrangian for weak interactions, developed by Robert Marshak and George Sudarshan in 1957.
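
For reference, the ‘V minus A’ structure shows up in the weak charged current as a vector current minus an axial-vector current – schematically, in its standard textbook form:

```latex
% Schematic V-A structure of the weak charged current: a vector current (V)
% minus an axial-vector current (A), which is what picks out left-handed
% particles and right-handed antiparticles.
J^{\mu} \;\propto\; \bar{\psi}\,\gamma^{\mu}\,(1 - \gamma^{5})\,\psi
```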

Prof. Robert Marshak

In fact, even in the case of the KS and KL kaons, their decay into pions can be depicted thus:

KS → π+ + π0
KL → π+ + π+ + π−

Here, the “+”, “−” and “0” superscripts indicate the pions’ electric charges. What matters is the parity of the final states: a two-pion final state has even (positive) parity, whereas a three-pion final state has odd (negative) parity.

When kaons were first investigated via their decay modes, the different final parities indicated that there were two different kaons that were decaying differently. Over time, however, as increasingly precise measurements indicated that only one kaon (now called K+) was behind both decays, physicists concluded that the weak interaction was responsible for one kind of decay some of the time and for the other kind the rest of the time – and, therefore, that it does not conserve parity.

To elucidate: in particle physics, suppose a transformation B → f can occur in at least two ways. Each way has an amplitude with a strong part and a weak part, which can be treated as phases of the wave for which the amplitude is being evaluated. The amplitudes of the transformation and of its conjugate, B* → f*, can then be written thus (under CP, the weak phases flip sign while the strong phases don’t):

A(B → f) = |P|·e^i(SP + WP) + |Q|·e^i(SQ + WQ)
A(B* → f*) = |P|·e^i(SP − WP) + |Q|·e^i(SQ − WQ)

Here,

B = Initial state (or particle); f = Final state (or particle)
B* = Initial antistate (or antiparticle); f* = Final antistate (or antiparticle)
P, Q = Amplitudes of the two ways in which the transformation can occur
S = Corresponding strong part of each amplitude; W = Corresponding weak part; both treated as phases of the wave for which the amplitude is being evaluated

Subtracting the squares of the two amplitudes (and applying some trigonometry):

|A(B → f)|² − |A(B* → f*)|² = −4·|P|·|Q|·sin(SP − SQ)·sin(WP − WQ)

The presence of the term sin(WP − WQ) is a sign that any transformation that can occur in at least two ways – at least partly through the weak interaction, so that the weak phases differ – can violate CP-symmetry. (It’s like having the option of two paths to reach a common destination: #1 is longer and fairly empty; #2 is shorter and congested. If their distances and congestedness are fairly comparable, then facing some congestion becomes inevitable.)

Electromagnetism, the strong interaction and gravitation, however, do not display any features that could give rise to a distinction between right and left. For the strong interaction, this is itself puzzling – a puzzle called the ‘strong CP problem’, one of the unsolved problems of physics – because the QCD Lagrangian, the function describing the dynamics of the strong interaction, includes terms that could break CP-symmetry, and yet no such breaking is observed.


(The best known resolution – one that doesn’t resort to spacetime with two time-dimensions – is the Peccei-Quinn theory put forth by Roberto Peccei and Helen Quinn in 1977. It suggests that the QCD-Lagrangian be extended with a CP-violating parameter whose value is 0 or close to 0.

This way, CP-symmetry is conserved during the strong interactions while CP-symmetry “breakers” in the QCD-Lagrangian have their terms cancelled by an emergent, dynamic field whose flux is encapsulated by very light pseudo-Goldstone bosons called axions.)

Now, kaons are a class of mesons whose composition includes a strange quark (or antiquark). Another class of mesons, called B-mesons, are identified by their composition including a bottom antiquark, and are also notable for the role they play in studies of CP-symmetry violations in nature. (Note: A meson composed of a bottom antiquark and a bottom quark is not called a B-meson but a bottomonium.)

The six quarks, the fundamental (and proverbial) building blocks of matter

According to the Standard Model (SM) of particle physics, there are some particles – such as quarks and leptons – that carry a property called flavor. Mesons, which are composed of quarks and antiquarks, have an overall flavor inherited from their composition as a result. The presence of non-zero flavor is significant because the SM permits quarks and leptons of one flavor to transmute into the corresponding quarks and leptons of another flavor, a process called oscillation.

And the B-meson is no exception. Herein lies the rub: during oscillations, the B-meson is favored over its antiparticle counterpart. Given the CPT theorem’s assurance of particles and antiparticles being differentiable only by charge and handedness, not mass, etc., the preference of B*-meson for becoming the B-meson more than the B-meson’s preference for becoming the B*-meson indicates a matter-asymmetry. Put another way, the B-meson decays at a slower rate than the B*-meson. Put yet another way, matter made of the B-meson is more stable than antimatter made of the B*-meson.

Further, if the early universe started off as a perfect symmetry (in every way), then the asymmetric formation of B-mesons would have paved the way for matter to take precedence over anti-matter. This is one of the first instances of the weak interaction possibly interfering with the composition of the universe. How? By promising never to preserve parity, and by participating in flavor-changing oscillations (in the form of the W/Z boson).

In this composite image of the Crab Nebula, matter and antimatter are propelled nearly to the speed of light by the Crab pulsar. The images came from NASA’s Chandra X-ray Observatory and the Hubble Space Telescope. (Photo by NASA; Caption from Howstuffworks.com)

The prevalence of matter over antimatter in our universe is credited to a hypothetical process called baryogenesis. In 1967, Andrei Sakharov, a Soviet nuclear physicist, proposed three conditions for asymmetric baryogenesis to have occurred.

  1. Baryon-number violation
  2. Departure from thermal equilibrium
  3. C- and CP-symmetry violation

The baryon-number of a particle is defined as one-third of the difference between the number of quarks and the number of antiquarks that make up the particle. For a B-meson, composed of a bottom antiquark and a quark, the value is 0; for a baryon like the proton, composed of three quarks, the value is 1. Baryon-number violation, while theoretically possible, isn’t considered in isolation of what is called “B – L” conservation (“L” is the lepton number, and is equal to the number of leptons minus the number of antileptons).

Now, say a proton decays into a pion and a positron. A proton’s baryon-number is 1 and its L-number is 0; a pion has both baryon- and L-numbers of 0; a positron has baryon-number 0 and L-number −1. Thus, neither the baryon-number nor the lepton-number is conserved, but their difference (1) definitely is. If this hypothetical process were ever to be observed, then baryogenesis would make the transition from hypothesis to reality (and the question of matter-asymmetry would be conclusively answered).
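
A quick bookkeeping check of that hypothetical decay (an illustrative snippet of mine):

```python
# B - L bookkeeping for the hypothetical decay p -> pi0 + e+.
particles = {
    # name: (baryon number, lepton number)
    "proton":   (1, 0),
    "pi0":      (0, 0),
    "positron": (0, -1),
}

def b_minus_l(names):
    b = sum(particles[n][0] for n in names)
    l = sum(particles[n][1] for n in names)
    return b, l, b - l

print("initial (B, L, B-L):", b_minus_l(["proton"]))           # (1, 0, 1)
print("final   (B, L, B-L):", b_minus_l(["pi0", "positron"]))  # (0, -1, 1)
# B and L each change, but B - L is the same before and after.
```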

The quark-structure of a proton (notice that the two up-quarks are shown with different colour charges)

Therefore, in recognition of the role of B-mesons (in being able to present direct evidence of CP-symmetry violation through asymmetric B-B* oscillations involving the mediation of the weak force) and their ability to confirm or deny an “SM-approved” baryogenesis in the early universe, what are called B-factories were built: collider-based machines whose only purpose is to spew out B-mesons so they can be studied in detail by high-precision detectors.

The earliest, and possibly best-known, B-factories were constructed in the 1990s and shut down in the 2000s: the BaBar experiment at SLAC, Stanford (2008), and the Belle experiment at the KEKB collider in Japan (2010). In fact, a Belle II experiment is under construction and upon completion will boast the world’s highest-luminosity experiment.

The Belle detector (L) and the logo for Belle II under construction


Assuming this universe…

Accomplished physicists I have met or spoken with in the last four months professed little agreement over which parts of physics were set in stone and which were simply largely-corroborated hypotheses. Here are some of them, with a short description of the dispute.

  1. Bosons – Could be an emergent phenomenon arising out of fermion-fermion interaction; current definition could be a local encapsulation of special fermionic properties
  2. Colour-confinement – ‘Tis held that gluons, mediators of the colour force, cannot exist in isolation nor outside the hadrons (that are composed of quarks held together by gluons); while experimental proof of the energy required to pull a quark free being much greater than the energy to pull a quark-antiquark pair out of vacuum exists, denial of confinement hasn’t yet been conclusively refuted (ref: lattice formulation of string theory)
  3. Massive gluons – Related to the Yang-Mills ‘mass gap’, a Millennium Prize problem
  4. Gravity – Again, could be an emergent phenomenon arising out of energy-corrections of hidden, underlying quantum fields
  5. Compactified extra-dimensions & string theory – There are still many who dispute the “magical” mathematical framework that string theory provides because it is a perturbative theory (i.e., background-dependent); a non-perturbative definition would make its currently divergent approximations convergent

If you ever get the opportunity to listen to a physicist ruminate on the philosophy of nature, don’t miss it. What lay people dispute daily are the macro-physical implications of a quantum world; the result is the all-important subjective clarification that lets us think better. What physicists dispute is the constitution of the quantum world itself; the result is the more objective phenomenological implications for everyone everywhere. We could use both debates.