You’re allowed to be interested in particle physics

This page appeared in The Hindu’s e-paper today.

I wrote the lead article, about why scientists are so interested in an elementary particle called the top quark. Long story short: the top quark is the heaviest elementary particle, and because elementary particles acquire their masses by interacting with Higgs bosons, the top quark’s interaction is the strongest. This has piqued physicists’ interest because the Higgs boson’s own mass is peculiar: it’s at odds with theoretical expectations and at the same time poised on the brink of a threshold beyond which our universe as we know it wouldn’t exist. To explain this brinkmanship, physicists are intently studying the top quark, including measuring its mass with ever greater precision.

It’s all so fascinating. But I’m well aware that not many people are interested in this stuff. I wish they were and my reasons follow.

There exists a sufficiently healthy journalism of particle physics today. Most of it happens in Europe and the US: (i) where the famous particle physics experiments are located, (ii) where there already exists an industry of good-quality science journalism, and (iii) where governments actually have the human resources, funds, and political will to fund the experiments (in many other places, including India, these resources don’t exist, rendering the matter of people contending with these experiments moot).

In this post, I’m using particle physics both as itself and as a surrogate for other reputedly esoteric fields of study.

This journalism can be divided into three broad types: those with people, those concerned with spin-offs, and those without people. ‘Those with people’ refers to narratives about the theoretical and experimental physicists, engineers, allied staff, and administrators who support work on particle physics, their needs, challenges, and aspirations.

The meaning of ‘those concerned with spin-offs’ is obvious: these articles attempt to justify the money governments spend on particle physics projects by appealing to the technologies scientists develop in the course of particle-physics work. I’ve always found these to be apologist narratives that erect a bad expectation: that we shouldn’t undertake these projects if they don’t also produce valuable spin-off technologies. I suspect most particle physics experiments don’t produce such spin-offs, because they are much smaller than the behemoth Large Hadron Collider and its ilk, which demand more innovation across diverse fields.

‘Those without people’ are the rarest of the lot — narratives that focus on some finding or discussion in the particle physics community that is relatively unconcerned with the human experience of the natural universe (setting aside the philosophical point that the non-human details are being recounted by human narrators). These stories are about the material constituents of reality as we know it.

When I say I wish more people were interested in particle physics today, I wish they were interested in all these narratives, yet more so in narratives that aren’t centred on people.

Now, why should anyone be interested? This is a difficult question to answer.

I’m interested because I’m fascinated with the things around us we don’t fully understand but are trying to. It’s a way of exploring the unknown, of going on an adventure. There are many, many things in this world that people can be curious about. It’s possible there are more such things than there are people (again, setting aside the philosophical bases of these claims). But particle physics and some other areas – united by the extent to which they are written off as esoteric – suffer from more than not having their fair share of patrons in the general (non-academic) population. Many people actively shun them, lose focus when reading about them, and at the same time do little to muster that focus back. It has even become okay for them to say they understood nothing of some (well-articulated) article and not expect to have their statement judged adversely.

I understand why narratives with people in them are easier to understand and connect with, but none of the implicated psychological, biological, and anthropological mechanisms encourages us to reject narratives and experiences without people. In other words, there may have been evolutionary advantages to finding out about other people, but no disadvantages attach to engaging with stories that aren’t about other people.

Next, I have met more than my fair share of people who flinch at the suggestion of mathematics or physics, even when someone offers to guide them through these topics. I’m also aware researchers have documented this tendency and are attempting to distil insights that could help improve the teaching and communication of these subjects. Personally, I don’t know how to deal with these people because I don’t know the shape of the barrier in their minds that I need to surmount. I may be trying to vault over a high wall by simplifying a concept to its barest features when in fact the barrier is a low-walled labyrinth.

Third and last, let me do unto this post what I’m asking of people everywhere, and look past the people: why should we be interested in particle physics? It has nothing to offer our day-to-day experiences. Its findings can seem totally self-absorbed: supporting researchers and their careers, helping them win famous but otherwise generally unattainable awards, and sustaining discoveries into which political leaders and government officials occasionally dip their beaks to claim labels like “scientific superpower”. But the mistake here is not the existence of particle physics itself so much as the people-centric lens through which we insist it must be seen. It’s not that we should be interested in particle physics; it’s that we can be.

Particle physics exists because some people are interested in it. If you are unhappy that our government spends too much on it, let’s talk about our national R&D expenditure priorities and what the practice, and practitioners, of particle physics can do to support other research pursuits and give back to various constituencies. The pursuit of one’s interests can’t be the problem (within reasonable limits, of course).

More importantly, being interested in particle physics – and in fact many other branches of science – shouldn’t have to be justified at every turn, for three reasons: reality isn’t restricted to people, people are shaped by their realities, and our destiny as humans extends beyond our immediate surroundings. On the first two counts: when we choose to restrict ourselves to our lives and our welfare, we also choose never to learn about what, say, gravitational waves, dark matter, and nucleosynthesis are (unless these terms turn up in an exam we need to pass). Yet all these things played a part in bringing about the existence of Earth and its suitability for particular forms of life, and among people particular ways of life.

The rocks and metals that gave rise to waves of human civilisation were created in the bellies of stars. We needed to know our own star as well as we do — which still isn’t much — to help build machines that can use its energy to supply electric power. Countries and cultures that support the education and employment of people who made it a point to learn the underlying science thus come out on top. Knowing different things is a way to future-proof ourselves.

Further, climate change is evidence that humans are a planetary species, and soon we will be an interplanetary one. Our own migrations will force us to understand, eventually intuitively, the peculiarities of gravity, the vagaries of space, and (what is today called) mathematical physics. But even before such compulsions arise, it remains true that what we know is what we needn’t be afraid of, or at least know how to be afraid of. 😀

Equally, learning, knowing, and understanding the physical universe is the foundation we need to imagine (or reimagine) futures better than the ones ordained for us by our myopic leaders. In this context, I recommend Shreya Dasgupta’s ‘Imagined Tomorrow’ podcast series, where she considers hypothetical future Indias in which medicines are tailor-made for individuals, where antibiotics don’t exist because they’re not required, where clean air is only available to breathe inside city-sized domes, and where courtrooms use AI – and the paths we can take to get there.

Similarly, with particle physics in mind, we could also consider cheap access to quantum computers, lasers that remove infections from flesh and tumours from tissue in a jiffy, and communications satellites that reduce bandwidth costs so much that we can take virtual education, telemedicine, and remote surgeries for granted. I’m not talking about these technologies as spin-offs, to be clear; I mean technologies born of our knowledge of particle (and other) physics.

At the biggest scale, of course, understanding the way nature works is how we can understand the ways in which the universe’s physical reality can or can’t affect us, in turn leading the way to understanding ourselves better and helping us shape more meaningful aspirations for our species. The more well-informed any decision is, the more rational it will be. Granted, the rationality of most of our decisions is currently only tenuously informed by particle physics, but consider if the inverse could be true: what decisions are we not making as well as we could if we cast our epistemic nets wider, including physics, biology, mathematics, etc.?

Consider, even beyond all this, the awe that astronauts who have gone to Earth orbit and beyond have reported experiencing when they first saw our planet from space, and the immeasurable loneliness surrounding it. There are problems with pronouncements that we should be united in all our efforts on Earth because, from space, we are all we have (especially when the country to which most of these astronauts belong condones a genocide). Fortunately, that awe is not the preserve of spacefaring astronauts. The moment we understood the laws of physics and the elementary constituents of our universe, we (at least the atheists among us) may have realised there is no centre of the universe. In fact, there is everything except a centre. How grateful I am for that. For good measure, awe is also good for the mind.

It might seem like a terrible cliché to quote Oscar Wilde here — “We are all in the gutter, but some of us are looking at the stars” — but it’s a cliché precisely because we have often wanted to be able to dream, to have the simple act of such dreaming contain all the profundity we know we squander when we live petty, uncurious lives. Then again, space is not simply an escape from the traps of human foibles. Explorations of the great unknown that includes the cosmos, the subatomic realm, quantum phenomena, dark energy, and so on are part of our destiny because they are the least like us. They show us what else is out there, and thus what else is possible.

If you’re not interested in particle physics, that’s fine. But remember that you can be.


Featured image: An example of simulated data as might be observed at a particle detector on the Large Hadron Collider. Here, following a collision of two protons, a Higgs boson is produced that decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector while the energy these particles deposit is shown in blue. Caption and credit: Lucas Taylor/CERN, CC BY-SA 3.0.

The Higgs boson and I

My first byline as a professional journalist (a.k.a. my first byline ever) was oddly for a tech story – about the advent of IPv6 internet addresses. I started writing it after 7 pm, had to wrap it up by 9 pm and it was published in the paper the next day (I was at The Hindu).

The first byline that I actually wanted to take credit for appeared around a month later, on July 4, 2012 – ten years ago – on the discovery of the Higgs boson at the Large Hadron Collider (LHC) in Europe. I published a live blog as Fabiola Gianotti, Joe Incandela and Rolf-Dieter Heuer, the spokespersons of the ATLAS and CMS detector collaborations and the director-general of CERN, respectively, announced and discussed the results. I also distinctly remember taking a pee break after telling readers “I have to leave my desk for a minute” and receiving mildly annoyed, but also amused, comments complaining of TMI.

After the results had been announced, the science editor, R. Prasad, told me that R. Ramachandran (a.k.a. Bajji) was filing the main copy and that I should work around that. So I wrote a ‘what next’ piece describing the work that remained for physicists to do, including the problems in particle physics that remained open despite the discovery and the alternative theories, like supersymmetry, required to explain them. (Some jingoism surrounding the lack of acknowledgment for S.N. Bose – a lack that was wholly justifiable, in my view – also forced me to write this.)

I also remember placing a bet with someone that the Nobel Prize for physics in 2012 wouldn’t be awarded for the discovery (because I knew, but the other person didn’t, that the nominations for that year’s prizes had closed by then).

To write about the feats and mysteries of particle physics is why I became a science journalist, so the Higgs boson’s discovery being announced a month after I started working was special – not least because it considerably eased the effort I had to put into pitches to have them accepted (specifically, I didn’t have to spend much time or effort spelling out why a story was important). It was also a great opportunity to learn how breaking news is reported, and it accelerated my induction into the newsroom and its ways.

But my interest in particle physics has since waned, especially from around 2017, as I began, in my role as science editor of The Wire (which I cofounded/joined in May 2015), to focus on other areas of science as well. My heart is still with physics, and I have greatly enjoyed writing the occasional article about topological phases, neutrino astronomy, laser cooling and, recently, the AdS/CFT correspondence.

A couple of years ago, I realised during a spell of daydreaming that even though I had stuck with physics, my act of ‘dropping’ particle physics as a specialty had left me without an edge as a writer. Just ‘physics’ was and is too broad – even if there are very few others in India writing on it in the press, giving me lots of room to display my skills (such as they are). I briefly considered and rejected quantum computing and BECCS technologies – the former because its stories were often bursting with hype, especially in my neck of the woods, and the latter because, while it seemed important, it didn’t sit well with me morally. I was also indifferent towards them because they were centred on technologies, whereas I wanted to write about pure, supposedly boring science.

In all, penning an article commemorating the tenth anniversary of the announcement of the Higgs boson’s discovery brought back pleasant memories of my early days at The Hindu but also reminded me of this choice that I still need to make, for my sake. I don’t know if there is a clear winner yet, although quantum physics more broadly and condensed-matter physics more specifically are appealing. This said, I’m also looking forward to returning to writing more about physics in general, paralleling the evolution of The Wire Science itself (some announcements coming soon).

I should also note that I started blogging in 2008, when I was still an undergraduate student of mechanical engineering, in order to clarify my own knowledge of and thoughts on particle physics.

So in all, today is a special day.

US experiments find hint of a break in the laws of physics

At 9 pm India time on April 7, physicists at an American research facility delivered a shot in the arm to efforts to find flaws in a powerful theory that explains how the building blocks of the universe work.

Physicists are looking for flaws in it because the theory doesn’t have answers to some questions – like “what is dark matter?”. They hope to find a crack or a hole that might reveal the presence of a deeper, more powerful theory of physics that can lay unsolved problems to rest.

The story begins in 2001, when physicists performing an experiment in Brookhaven National Lab, New York, found that fundamental particles called muons weren’t behaving the way they were supposed to in the presence of a magnetic field. This was called the g-2 anomaly (after a number called the gyromagnetic factor).

An incomplete model

Muons are subatomic particles that can’t be seen with the naked eye, so it could’ve been that the instruments the physicists were using to study the muons indirectly were glitching. Or it could’ve been that the physicists had made a mistake in their calculations. Or, finally, it could’ve been that what the physicists thought they knew about the behaviour of muons in a magnetic field was wrong.

In most stories we hear about scientists, the first two possibilities are true more often: they didn’t do something right, so the results weren’t what they expected. But in this case, the physicists were hoping they were wrong. This unusual wish was the product of working with the Standard Model of particle physics.

According to physicist Paul Kyberd, the fundamental particles in the universe “are classified in the Standard Model of particle physics, which theorises how the basic building blocks of matter interact, governed by fundamental forces.” The Standard Model has successfully predicted the numerous properties and behaviours of these particles. However, it’s also been clearly wrong about some things. For example, Kyberd has written:

When we collide two fundamental particles together, a number of outcomes are possible. Our theory allows us to calculate the probability that any particular outcome can occur, but at energies beyond which we have so far achieved, it predicts that some of these outcomes occur with a probability of greater than 100% – clearly nonsense.

The Standard Model also can’t explain what dark matter is, what dark energy could be or if gravity has a corresponding fundamental particle. It predicted the existence of the Higgs boson but was off about the particle’s mass by a factor of 100 quadrillion.

All these issues together imply that the Standard Model is incomplete, that it could be just one piece of a much larger ‘super-theory’ that works with more particles and forces than we currently know. To look for these theories, physicists have taken two broad approaches: to look for something new, and to find a mistake with something old.

For the former, physicists use particle accelerators, colliders and sophisticated detectors to look for heavier particles thought to exist at higher energies, and whose discovery would prove the existence of a physics beyond the Standard Model. For the latter, physicists take some prediction the Standard Model has made with a great degree of accuracy and test it rigorously to see if it holds up. Studies of muons in a magnetic field are examples of this.

According to the Standard Model, a number associated with the way a muon swivels in a magnetic field – its so-called g-factor – is equal to 2 plus a tiny correction. Physicists usually quote half of this correction, a quantity called the anomalous magnetic moment, whose predicted value is 0.00116591804 (with some give or take). This minuscule addition is the handiwork of fleeting quantum effects in the muon’s immediate neighbourhood, which make it wobble. (For a glimpse of how hard these calculations can be, see this description.)
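
In symbols – my addition for reference, using the standard convention rather than anything from the original article:

\[ a_\mu = \frac{g - 2}{2} \approx 0.00116591804 \quad\Rightarrow\quad g \approx 2.00233183608 \]

The deviations quoted below are deviations in this number, a_μ.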

Fermilab result

In the early 2000s, the Brookhaven experiment measured this number to be slightly higher than the model’s prediction. Though the difference was small – about 0.00000000346 – the context made it a big deal. Scientists know that the Standard Model has a habit of being really right, so when it’s wrong, the wrongness becomes very important. And because we already know the model is wrong about other things, there’s a possibility that the two things could be linked. It’s a potential portal into ‘new physics’.

“It’s a very high-precision measurement – the value is unequivocal. But the Standard Model itself is unequivocal,” Thomas Kirk, an associate lab director at Brookhaven, had told Science in 2001. The disagreement between the values implied “that there must be physics beyond the Standard Model.”

This is why the results physicists announced today are important.

The Brookhaven experiment that ascertained the g-2 anomaly wasn’t sensitive enough to say with a meaningful amount of confidence that its measurement was really different from the Standard Model prediction, or if there could be a small overlap.

Science writer Brianna Barbu has likened the mystery to “a single hair found at a crime scene with DNA that didn’t seem to match anyone connected to the case. The question was – and still is – whether the presence of the hair is just a coincidence, or whether it is actually an important clue.”

So to go from ‘maybe’ to ‘definitely’, physicists shipped the 50-foot-wide, 15-tonne magnet that the Brookhaven facility used in its Muon g-2 experiment to Fermilab, the US’s premier high-energy physics research facility in Illinois, and built a more sensitive experiment there.

The new result is from tests at this facility: that the observation differs from the Standard Model’s predicted value by 0.00000000251 (give or take a bit).
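
In the compact notation introduced earlier – my rendering of the article’s own numbers:

\[ a_\mu^{\text{exp}} - a_\mu^{\text{SM}} \approx 0.00000000251 = 2.51 \times 10^{-9} \]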

The Fermilab results are expected to become a lot better in the coming years, but even now they represent an important contribution. The statistical significance of the Brookhaven result, at about 3.7 sigma, was below the five-sigma threshold at which physicists claim a discovery; the combined significance of the two results, at 4.2 sigma, is closer to that mark but still short of it.

Potential dampener

So for now, the g-2 anomaly seems to be real. It’s not easy to say if it will continue to be real as physicists further upgrade the Fermilab g-2’s performance.

In fact there appears to be another potential dampener on the horizon. An independent group of physicists has had a paper published today saying that the Fermilab g-2 result is actually in line with the Standard Model’s prediction and that there’s no deviation at all.

This group, called BMW, used a different way to calculate the Standard Model’s value of the number in question than the Fermilab folks did – one based on a numerical technique called lattice QCD. Aida El-Khadra, a theoretical physicist at the University of Illinois, told Quanta that the Fermilab team had yet to check BMW’s approach, but if it was found to be valid, the team would “integrate it into its next assessment”.

The ‘Fermilab approach’ itself is something physicists have worked with for many decades, so it’s unlikely to be wrong. But if the BMW approach checks out, then, according to Quanta, the fact that two approaches lead to different predictions of the number’s value would itself become a new mystery.

But physicists are excited for now. “It’s almost the best possible case scenario for speculators like us,” Gordan Krnjaic, a theoretical physicist at Fermilab who wasn’t involved in the research, told Scientific American. “I’m thinking much more that it’s possibly new physics, and it has implications for future experiments and for possible connections to dark matter.”

The current result is also important because the other way to look for physics beyond the Standard Model – by looking for heavier or rarer particles – can be harder.

This isn’t simply a matter of building a larger particle collider, powering it up, smashing particles and looking for other particles in the debris. For one, there is a very large number of energy levels at which a particle might form. For another, there are thousands of other particle interactions happening at the same time, generating a tremendous amount of noise. So without knowing what to look for and where, a particle hunt can be like looking for a very small needle in a very large haystack.

The ‘what’ and ‘where’ instead come from the different theories that physicists have worked out based on what we already know; physicists then design experiments depending on which theory they need to test.

Into the hospital

One popular theory is called supersymmetry: it predicts that every elementary particle in the Standard Model framework has a heavier partner particle, called a supersymmetric partner. It also predicts the energy ranges in which these particles might be found. The Large Hadron Collider (LHC) at CERN, near Geneva, was powerful enough to access some of these energies, so physicists used it and went looking last decade. They didn’t find anything.

A table showing searches for particles associated with different post-standard-model theories (orange labels on the left). The bars show the energy levels up to which the ATLAS detector at the Large Hadron Collider has not found the particles. Table: ATLAS Collaboration/CERN

Other groups of physicists have also tried to look for rarer particles: ones that occur at an accessible energy but only once in a very large number of collisions. The LHC is a machine at the energy frontier: it probes higher and higher energies. To look for extremely rare particles, physicists explore the intensity frontier – using machines specialised in generating very large numbers of collisions.

The third and last is the cosmic frontier, in which scientists look for unusual particles coming from outer space. For example, early last month, researchers reported that they had detected an energetic anti-neutrino (a kind of fundamental particle) coming from outside the Milky Way participating in a rare event that scientists predicted in 1959 would occur if the Standard Model is right. The discovery, in effect, further cemented the validity of the Standard Model and ruled out one potential avenue to find ‘new physics’.

This event also recalls an interesting difference between the 2001 and 2021 announcements. The late British scientist Francis J.M. Farley wrote in 2001, after the Brookhaven result:

… the new muon (g-2) result from Brookhaven cannot at present be explained by the established theory. A more accurate measurement … should be available by the end of the year. Meanwhile theorists are looking for flaws in the argument and more measurements … are underway. If all this fails, supersymmetry can explain the data, but we would need other experiments to show that the postulated particles can exist in the real world, as well as in the evanescent quantum soup around the muon.

Since then, the LHC and other physics experiments have sent supersymmetry ‘to the hospital’ on more than one occasion. If the anomaly continues to hold up, scientists will have to find other explanations. Or, if the anomaly whimpers out, like so many others of our time, we’ll just have to put up with the Standard Model.

Featured image: A storage-ring magnet at Fermilab whose geometry allows for a very uniform magnetic field to be established in the ring. Credit: Glukicov/Wikimedia Commons, CC BY-SA 4.0.

The Wire Science
April 8, 2021

On resource constraints and merit

In the face of complaints about how so few women have been awarded this year’s Swarnajayanti Fellowships in India, some scientists pushed back asking which of the male laureates who had been selected should have been left out instead.

This is a version of the merit argument commonly applied to demands for reservation and quota in higher education – and it’s also a form of an argument that often raises its head in seemingly resource-constrained environments.

India is often referred to as a country with ‘finite’ resources, often when people are discussing how best to put these resources to use. There are even romantic ideals associated with working in such environments, such as doing more with less – as ISRO has been for many decades – and the popular concept of jugaad.

But while fixing one variable and altering the other would make any problem more solvable, it’s almost always the resource variable that is presumed to be fixed in India. For example, a common refrain is that ISRO’s allocation is nowhere near that of NASA, so ISRO must figure out how best to use its limited funds – and can’t afford luxuries like a full-fledged outreach team.

There are two problems in the context of resource availability here: 1. an outreach team proper is implied to be the product of a much higher allocation than has been made, i.e. comparable to that of NASA, and 2. incremental increases in allocation are precluded. Neither of these is right, of course: ISRO doesn’t have to wait for NASA’s volume of resources in order to set up an outreach team.

The deeper issue here is not that ISRO doesn’t have the requisite funds but that it doesn’t feel a better outreach unit is necessary. Here, it pays to acknowledge that ISRO has received not inconsiderable allocations over the years, and has enjoyed bipartisan support and (relative) freedom from bureaucratic interference, so it cops much of the blame as well. But in the rest of India, the situation is flipped: many institutions, and their members, have fewer resources than they have ideas, and that affects research in its own way.

For example, in the context of grants and fellowships, there’s the obvious illusory ‘prestige constraint’ at the international level – whereby award-winners and self-proclaimed hotshots wield power by presuming prestige to be tied to a few accomplishments, such as winning a Nobel Prize, publishing papers in The Lancet and Nature or maintaining an h-index of 150. These journals and award-giving committees in turn boast of their selectiveness and elitism. (Note: don’t underestimate the influence of these journals.)

Then there’s the financial constraint for Big Science projects. Some of them may be necessary to keep, say, enthusiastic particle physicists from being carried away. But more broadly, a gross mismatch between the availability of resources and the scale of expectations may ultimately be detrimental to science itself.

These markers of prestige and power are all essentially instruments of control – and there is no reason this equation should be different in India. Funding for science in India is only resource-constrained to the extent to which the government, which is the principal funder, deems it to be.

The Indian government’s revised expenditure on ‘scientific departments’ in 2019-2020 was Rs 27,694 crore. The corresponding figure for defence was Rs 3,16,296 crore. If Rs 1,000 crore were moved from the latter to the former, the defence spend would have dropped only by 0.3% but the science spend would have increased by 3.6%. Why, if the money spent on the Statue of Unity had instead been diverted to R&D, the hike would have nearly tripled.
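
A quick check of the arithmetic, using the figures above:

\[ \frac{1000}{316296} \approx 0.3\%, \qquad \frac{1000}{27694} \approx 3.6\% \]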

Effectively, the argument that ‘India’s resources are limited’ is tenable only when resources are constrained on all fronts, or specific fronts as determined by circumstances – and not when it seems to be gaslighting an entire sector. The determination of these circumstances in turn should be completely transparent; keeping them opaque will simply create more ground for arbitrary decisions.

Of course, in a pragmatic sense, it’s best to use one’s resources wisely – but this position can’t be generalised to the point where optimising for what’s available becomes morally superior to demanding more (even as we must maintain the moral justification of being allowed to ask how much money is being given to whom). That is, constantly making the system work more efficiently is a sensible aspiration, but it shouldn’t come – as it often does at the moment, perhaps most prominently in the case of CSIR – at the cost of more resources. If people are discontented because they don’t have enough, their ire should be directed at the total allocation itself more than how a part of it is being apportioned.

In a different context, a physicist had pointed out a few years ago that when the US government finally scrapped the proposed Superconducting Supercollider in the early 1990s, the freed-up funds weren’t directed back into other areas of science, as scientists thought they would be. (I couldn’t find the link to this comment nor recall the originator – but I think it was either Sabine Hossenfelder or Sean Carroll; I’ll update this post when I do.) I suspect that if the group of people that had argued thus had known this would happen, it might have argued differently.

I don’t know if a similar story has played out in India; I certainly don’t know if any Big Science projects have been commissioned and then scrapped. In fact, the opposite has happened more often: whereby projects have done more with less by repurposing an existing resource (examples here, here and here). (Having to fight so hard to realise such mega-projects in India could be motivating those who undertake one to not give up!)

In the non-Big-Science and more general sense, an efficiency problem raises its head. One variant of this is about research v. teaching: what does India need more of, or what’s a more efficient expense, to achieve scientific progress – institutions where researchers are free to conduct experiments without being saddled with teaching responsibilities or institutions where teaching is just as important as research? This question has often been in the news in India in the last few years, given the erstwhile HRD Ministry’s flip-flops on whether teachers should conduct research. I personally agree that we need to ‘let teachers teach’.

The other variant is concerned with blue-sky research: when are scientists more productive – when the government allows a “free play of free intellects” or when it railroads them on which problems to tackle? Given the fabled shortage of teachers at many teaching institutions, it’s easy to conclude that a combination of economic and policy decisions has funnelled India’s scholars into neglecting their teaching responsibilities. In turn, rejigging the fraction of teaching or teaching-cum-research versus research-only institutions in India in favour of the former, which are less resource-intensive, could free up some funds.

But this is also more about pragmatism than anything else – somewhat like untangling a bundle of wires before straightening them out instead of vice versa, or trying to do both at once. As things stand, India’s teaching institutions also need more money. Some reasons there is a shortage of teachers: they are often not paid well or on time, especially if they are employed at state-funded colleges; the institutions’ teaching facilities are subpar (or non-existent); jobs are located in remote places and the institutions haven’t had the leeway to consider upgrading recreational facilities; etc.

Teaching at the higher-education level in India is also harder because of the poor state of government schools, especially outside tier I cities. This brings with it a separate raft of problems, including money.

Finally, a more ‘local’ example of prestige as well as financial constraints that also illustrates the importance of this PoV is the question of why the Swarnajayanti Fellowships have been awarded to so few women, and how this problem can be ‘fixed’.

If the query about which men should be excluded to accommodate women sounds like a reasonable question, you’re probably assuming that the number of fellows has to be limited to a certain number, dictated in turn by the amount of money the government has said can be awarded through these fellowships. But if the government allocated more money, we could appreciate all the current laureates as well as many others, and arguably without diluting the ‘quality’ of the competition (given just how many scholars there are).

Resource constraints obviously can’t explain or resolve everything that stands in the way of more women, trans-people, gender-non-binary and gender-non-conforming scholars receiving scholarships, fellowships, awards and prominent positions within academia. But axiomatically, it’s important to see that ‘fixing’ this problem requires action on two fronts, instead of just one – make academia less sexist and misogynistic and secure more funds. The constraints are certainly part of the problem, particularly when they are wielded as an excuse to concentrate more resources, and more power, in the hands of the already privileged, even as the constraints may not be real themselves.

In the final analysis, science doesn’t have to be a powerplay, and we don’t have to honour anyone at the expense of another. But deferring to such wisdom could let the fundamental causes of this issue off the hook.

My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists who worked on it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hope that it will find the particles they’re looking for. Not all physicists are agreed, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity version’, where luminosity is a measure of the number of collision events the machine can produce for detectors to study. This version will operate until 2038. That isn’t a long way away, considering it took more than a decade to build the LHC; it will definitely take longer to plan for, convince lawmakers about, secure the funds for and build something bigger and more complicated.

There have been some other developments relevant to the current occasion, in that they indicate other ways to discover ‘new physics’ – the collective name for phenomena that violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one, I think, was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and not anywhere near the threshold that denotes a discovery proper.

It’s evoked a fair bit of excitement because axions count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the modest significance makes it likely the signal will be accounted for by some other well-known event. I was disappointed, of course, but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable particle-physics development turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six times heavier than a Higgs boson and 800 times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – a.k.a. it wasn’t there in the first place, and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

One peer had also recommended in between that I familiarise myself with quantum computing while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

Peter Higgs, self-promoter

I was randomly rewatching The Big Bang Theory on Netflix today when I spotted this gem:

Okay, maybe less a gem and more a shiny stone, but still. The screenshot, taken from the third episode of the sixth season, shows Sheldon Cooper mansplaining to Penny the work of Peter Higgs, whose name is most famously associated with the scalar boson the Large Hadron Collider collaboration announced the discovery of to great fanfare in 2012.

My fascination pertains to Sheldon’s description of Higgs as an “accomplished self-promoter”. Higgs, in real life, is extremely reclusive and self-effacing, and journalists have found him notoriously hard to catch for an interview, or even a quote. His fellow discoverers of the Higgs boson, including François Englert, the Belgian physicist with whom Higgs won the Nobel Prize for physics in 2013, have been much less media-shy. Higgs has even been known to suggest that a mechanism in particle physics involving the Higgs boson should really be called the ABEGHHK’tH mechanism, to include the names of everyone who hit upon its theoretical idea in the 1960s (Philip Warren Anderson, Robert Brout, Englert, Gerald Guralnik, C.R. Hagen, Higgs, Tom Kibble and Gerardus ‘t Hooft), instead of just the Higgs mechanism.

No doubt Sheldon thinks Higgs did right by choosing not to appear in interviews for the public or not writing articles in the press himself, considering such extreme self-effacement is also Sheldon’s modus of choice. At the same time, Higgs might have lucked out and been recognised for work he conducted 50 years prior probably because he’s white and from an affluent country, attributes that nearly guarantee fewer – if any – systemic barriers to international success. Self-promotion is an important part of the modern scientific endeavour, as it is with most modern endeavours, even for an accomplished scientist.

All this said, it is notable that Higgs was also a conscientious person. When he was awarded the Wolf Prize in 2004 – a prestigious award in the field of physics – he refused to receive it in person in Jerusalem because it was a state function and he has protested Israel’s war against Palestine. He was a member of the Campaign for Nuclear Disarmament until the group extended its opposition to nuclear power as well; then he resigned. He also stopped supporting Greenpeace after they became opposed to genetic modification. If it is for these actions that Sheldon deemed Higgs an “accomplished self-promoter”, then I stand corrected.

Featured image: A portrait of Peter Higgs by Lucinda Mackay hanging at the James Clerk Maxwell Foundation, Edinburgh. Caption and credit: FF-UK/Wikimedia Commons, CC BY-SA 4.0.

Science v. tech, à la Cixin Liu

A fascinating observation by Cixin Liu in an interview in Public Books, given to John Plotz and translated by Pu Wang (numbers added):

… technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

Perhaps an archaeology or palaeontology enthusiast might have regular chances to see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc. but I have almost always encountered these objects only as ‘relics’ or such in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as pieces of machines with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance but I’m convinced that our contemporary view of and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

(3) However, he may not be able to justify his view beyond specific examples, simply because – to draw from the words of a theoretical physicist from many years ago, that theorists “require only a pen and paper to work” – it is possible to predict the world at a much lower cost than one would incur to build and study the future.

Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, and to which Cixin provides a circuitous non-answer that somehow misses the obvious: science’s historical preeminence began when natural philosophers began to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry that enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

A journey through Twitter and time, with the laws of physics

Say you’re in a dark room and there’s a flash. The light travels outward in all directions from the source, and the illumination seems to expand in a sphere. This is a visualisation of how the information contained in light becomes distributed through space.

But even though this is probably what you’d see if you observed the flash with a very high speed camera, it’s not the full picture. The geometry of the sphere captures only the spatial component of the light’s journey. It doesn’t say anything about the time. We can infer that from how fast the sphere expands but that’s not an intrinsic property of the sphere itself.

To solve this problem, let’s assume that we live in a world with two spatial dimensions instead of three (i.e. length and breadth only, no depth). When the flash goes off in this world, the light travels outward in an expanding circle, which is the two-dimensional counterpart of a sphere. Because light travels at a constant speed, the circle’s diameter grows steadily: if the circle is 2 cm wide 1 second after the flash, it will be 4 cm wide after 2 seconds, 6 cm wide after 3 seconds, 8 cm wide after 4 seconds. And so forth.

If you photographed the circles at each of these moments and put the pictures together, you’d see something like this (not to scale):

And if you looked at this stack of circles from under/behind, you’d see what physicists call the light cone.

Credit: Stib/Wikimedia Commons, CC BY-SA 3.0

The cone is nothing but a stack of circles of increasing diameter. The circumference of each circle represents the extent to which the light has spread out in space at that time. So the farther into the future of an event – such as the flash – you go, the wider the light cone will be.

(The reason we assumed we live in a world of two dimensions instead of three should be clearer now. In our three-dimensional reality, the light cone would assume a four-dimensional shape that can be quite difficult to visualise.)

According to the special theory of relativity, all future light cones must be associated with corresponding past light cones, and light always flows from the past to the future.

To understand what this means, it’s important to understand the cones as exclusionary zones. The diameter of the cone at a specific time is the distance across which light has moved in that time. So anything that moves slower – such as a message written on a piece of paper tied to a rock thrown from A to B – will be associated with a narrower cone between A and B. If A and B are so far apart that even light couldn’t have spanned them in the given time, then B is going to be outside the cone emerging from A, in a region officially called elsewhere.

Now, light is just one way to encode information. But since nothing can move faster than the speed of light, the cones in the diagram above work for all kinds of information, i.e. any other medium will simply be associated with narrower cones, but the general principles as depicted in the diagram will hold.
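
Here’s a minimal sketch of this idea in code – my own illustration, with made-up event coordinates, not something from the original post. It checks whether one event lies inside, on, or outside another event’s light cone in a world with two spatial dimensions:

```python
# Classify the separation between two events (t, x, y): can a signal
# travelling at or below light speed connect them?

C = 299_792_458.0  # speed of light, metres per second

def classify(event_a, event_b):
    """Each event is a (t, x, y) tuple: time in seconds, position in metres."""
    ta, xa, ya = event_a
    tb, xb, yb = event_b
    dt = tb - ta
    spatial_sq = (xb - xa) ** 2 + (yb - ya) ** 2
    interval_sq = (C * dt) ** 2 - spatial_sq  # squared spacetime interval
    if interval_sq > 0:
        return "timelike: B is inside A's light cone"
    if interval_sq == 0:
        return "lightlike: B is exactly on A's light cone"
    return "spacelike: B is 'elsewhere', outside A's light cone"

flash = (0.0, 0.0, 0.0)
# One second after the flash, light has covered ~3 x 10^8 m, so an event
# 1 m away is deep inside the cone and can know about the flash...
print(classify(flash, (1.0, 1.0, 0.0)))
# ...but an event 4 x 10^8 m away after 1 s is causally out of reach.
print(classify(flash, (1.0, 4.0e8, 0.0)))
```

A rock thrown from A to B, moving far slower than light, corresponds to a deeply timelike separation: its cone is much narrower than light’s.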

For example, here’s something that happened on Twitter earlier today. I spotted the following tweet at 9.15 am:

When scrolling through the replies, I noticed that one of Air Vistara’s senior employees had responded to the complaint with an apology and an assurance that it would be fixed.

https://twitter.com/TheSanjivKapoor/status/1154223981358018561

Taking this to be an admission of guilt – and, by proxy, an admission that there had actually been a mistake – I retweeted the tweet at 9.16 am. However, only a minute later, another account discovered that the label of ‘professor’ didn’t work with the ‘male’ option either, ergo the glitch didn’t have as much to do with the user’s gender as with the algorithm simply being broken. A different account brought this to my attention at 9.30 am.

So here we have two cones of information that can be recast as the cones of causality, intersecting at @rath_shyama’s tweet. The first cone of causality is the set of all events in the tweet’s past whose information contributed to it. The second cone of causality represents all events in whose past the tweet lies, such as @himdaughter’s, the other accounts’ and my tweets.

As it happens, Twitter interferes with this image of causality in a peculiar way (Facebook does, too, but not as conspicuously). @rath_shyama published her tweet at 8.02 am, @himdaughter quote-tweeted her at 8.16 am and I retweeted @himdaughter at 9.16 am. But by 9.30 am, the information cone had expanded enough for me to know that my retweet was possibly mistaken. Let’s designate this last bit of information M.

So if I had un-retweeted @himdaughter’s tweet at, say, 9.31 am, I would effectively have removed an event from the timeline that actually occurred before I could have had the information (i.e. M) to act on it. The issue is that Twitter doesn’t record (at least not publicly anyway) the time at which people un-retweet tweets. If it did, there would have been proof that I acted in the future of M; but since it doesn’t, it will look like I acted in the past of M. Since that is causally impossible, the presumption arises that I had the information about M before others did, which is false.

This serves as an interesting commentary on the nature of history. It is not possible for Twitter’s users to remember historical events on its platform in the right order simply because Twitter is memoryless when it comes to one of the actions it allows. As a journalist, therefore, there is a bit of comfort in thinking about the pre-Twitter era, when all newsworthy events were properly timestamped and archived by the newspapers of record.

However, I can’t let my mind wander too far back, lest I stagger into the birth of the universe, when all that existed was a bunch of particles.

We commonly perceive that time has moved forward because we also observe useful energy becoming useless energy. If nothing aged, if nothing grew weaker or deteriorated in material quality – if there was no wear-and-tear – we should be able to throw away our calendars and pretend all seven days of the week are the same day, repeated over and over.+

Scientists capture this relationship between time and disorderliness in the second law of thermodynamics. This law states that the entropy – the amount of energy that can’t be used to perform work – of a closed system can never decrease. It can either stay constant or increase. So time does not exist as an entity in and of itself but only seems to, as a measure of the increase in entropy (at a given temperature). We say a system has moved away from a point in its past and towards a point in its future if its entropy has gone up.
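
In symbols – my addition, not part of the original post – the second law for a closed system reads:

\[ \Delta S \geq 0 \]

And the ‘energy that can’t be used to perform work’ at a given temperature T shows up as the TS term in the Helmholtz free energy, F = U − TS: as the entropy S grows, the workable portion F of the total energy U shrinks.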

However, while this works just fine with macroscopic stuff like matter, things are a bit different with matter’s smallest constituents: the particles. There are no processes in this realm of the quantum whose passage will tell you which way time has passed – at least, there aren’t supposed to be.

There’s a type of particle called the B0 meson. In an experiment whose results were announced in 2012, physicists found unequivocal proof that this particle transformed into another one faster than the inverse process. This discrepancy provides an observer with a way to tell which way time is moving.

The experiment also remains the only occasion till date on which scientists have been able to show that the laws of physics don’t apply the same forward and backward in time. If they did, the forward and backward transformations would have happened at the same rate, and an observer wouldn’t have been able to tell if she was watching the system move into the future or into the past.

But with Twitter, it would seem we’re all clearly aware that we’re moving – inexorably, inevitably – into the future… or is that the past? I don’t know.

+ And if capitalism didn’t exist: in capitalist economies, inequality always seems to increase with time.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have an electric charge, the particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (p. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement: particles with colour charge can never be isolated. They’re always to be found in pairs or bigger clumps. In principle they can be freed by heating the clumps past the Hagedorn temperature – about two trillion kelvin. That is what happens in the quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like those at the Large Hadron Collider. The particles in this plasma quickly recombine into bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks is called the strong nuclear force – but saying it acts ‘between quarks and gluons’ is misleading. The gluons mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force is what holds the nucleus together in every atom in the universe. Rearranging these bonds releases enormous amounts of energy – as in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, about 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.
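If you’re sceptical of that 99% figure, a back-of-the-envelope check is easy. The sketch below uses rough textbook values for the up and down quark masses (my round numbers, not from the article):

```python
# Rough check of the 99% claim, using approximate current quark masses (MeV)
m_u, m_d = 2.2, 4.7      # up and down quarks
m_proton = 938.3          # proton mass

quark_share = (2 * m_u + m_d) / m_proton   # a proton is u + u + d
print(f"quarks: {quark_share:.1%}, field energy: {1 - quark_share:.1%}")
# -> the quarks supply roughly 1%; the rest is strong-force field energy
```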

When you pull two quarks apart, you’d expect the force between them to weaken. It doesn’t; it actually increases. This is deeply counterintuitive. The gravitational force exerted by Earth drops off the farther you get from it. The electromagnetic force between an electron and a proton weakens as they move apart. Only with the strong nuclear force does the pull between two particles grow as they separate. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that this behaviour holds only down to a certain distance – around 1 fermi (0.000000000000001 metres, about the size of a proton). When quarks are squeezed much closer together than a fermi, the force between them falls off drastically, approaching zero. This is called asymptotic freedom: at asymptotically short distances, quarks behave almost as if they were free of the force. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for this work.

In the parlance of particle physics, what makes asymptotic freedom – and its flip side, the strengthening of the force with distance – possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart, if not by the gluons the quarks are exchanging becoming more numerous as the distance increases?
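Physicists quantify this with the ‘running’ of the strong coupling: its strength changes with the energy (equivalently, the distance) at which you probe it. Here is a minimal sketch using the standard one-loop textbook formula – my illustration, with Λ = 0.21 GeV and five quark flavours as rough assumptions:

```python
import math

def alpha_s(Q, n_f=5, Lambda=0.21):
    """One-loop running of the strong coupling (textbook formula):
    alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)).
    Q and Lambda in GeV; n_f = number of light quark flavours at scale Q."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q**2 / Lambda**2))

# The coupling shrinks at high energies (short distances: asymptotic
# freedom) and blows up as Q nears Lambda (long distances: confinement).
for Q in [0.5, 1, 2, 10, 91.2, 1000]:
    print(f"Q = {Q:7.1f} GeV  ->  alpha_s ~ {alpha_s(Q):.3f}")
```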

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t a force that keeps strengthening as the quarks move apart violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space is not really a vacuum. It has some energy of its own – some astrophysicists associate this with ‘dark energy’ – which manifests in the form of virtual particles: particles that pop in and out of existence, living for far less than a second before dissipating back into energy. When a virtual charged particle pops into being, its charge attracts particles of the opposite charge towards itself and repels particles of the same charge. This is high-school physics.

But when a virtual gluon pops into being, something strange happens. An electron carries one kind of charge: electric, positive or negative. A gluon carries a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So a virtual gluon attracts other virtual gluons depending on their colour charges, intensifying the colour field around it, and changes its own colour according to whichever particles are present. Had this been an electron, the opposite charges of the virtual particles it attracted would have screened its field, weakening it with distance; the gluons do the opposite, and strengthen it.

This multiplication is what leads to the build-up of energy as the quarks move apart.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that they behave as if they carry both a colour and an anti-colour. This doesn’t make much sense in physical terms, but it does mathematically (which we won’t get into). Say a proton is made of one red quark, one blue quark and one green quark, held together by gluons that themselves carry colour charge. When two quarks exchange a gluon, their colours change: if a blue quark emits a blue-antigreen gluon, it turns green, while the green quark that receives the gluon turns blue. Ultimately, if the proton is ‘white’ overall, the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
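To see that the proton stays ‘white’ through such a swap, here’s the same exchange written as a toy bookkeeping exercise – entirely my own illustration; real QCD calculations look nothing like this:

```python
def emit(quark_colour, new_colour):
    """A quark changes colour by emitting a gluon that carries its old
    colour and the anti- of its new colour."""
    gluon = (quark_colour, "anti-" + new_colour)
    return new_colour, gluon

def absorb(quark_colour, gluon):
    """The receiving quark must carry the gluon's anti-colour partner;
    it takes on the gluon's colour."""
    colour, anticolour = gluon
    assert anticolour == "anti-" + quark_colour, "colour mismatch"
    return colour

proton = ["red", "blue", "green"]
# the blue quark emits a blue/anti-green gluon and turns green...
proton[1], gluon = emit("blue", "green")
# ...and the green quark absorbs that gluon and turns blue
proton[2] = absorb("green", gluon)
print(sorted(proton))  # ['blue', 'green', 'red'] -- still one of each: 'white'
```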

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for the supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quark, each carrying one of three colour charges. Then there are the gluons, carrying colour-anticolour combinations – nine naive pairings, of which eight independent gluon states actually occur in nature.

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

A gear-train for particle physics

It has come under scrutiny at various times from prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. It was first seriously considered about four decades ago, in 1974, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which they acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like that of the Big Bang), all four forces unified into one?

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force would unify with the strong nuclear force – let alone with gravity, a problem that birthed one of the most powerful mathematical tools of our time in attempts to solve it. Nonetheless, they think they know the energy at which such grand unification should occur if it does: the Planck scale, around 10^19 GeV. This is about as much energy as a car’s full tank of petrol holds – but it’s stupefyingly large when all of it has to be accommodated in a particle that’s 10^-15 metres wide.
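For a sense of scale, the arithmetic is straightforward. A quick sketch – the petrol figure assumes a typical energy density of about 34 MJ per litre:

```python
import math

# Planck energy: E_P = sqrt(hbar * c^5 / G)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 / (kg s^2)

E_planck = math.sqrt(hbar * c**5 / G)
print(f"Planck energy ~ {E_planck:.2e} J")                # ~2.0e9 J
print(f"              ~ {E_planck / 1.602e-10:.2e} GeV")  # ~1.2e19 GeV

# petrol holds roughly 34 MJ per litre (an assumed round figure)
print(f"              ~ {E_planck / 34e6:.0f} litres of petrol")  # ~60 L
```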

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses electric fields to accelerate protons to close to light-speed, and powerful magnets to steer them, taking each proton’s energy to about 7,000 GeV. But the Planck energy is still about a million billion times higher – some 15 orders of magnitude – which means it’s not something we may ever be able to attain on Earth. Nonetheless, physicists’ theories suggest that that’s where all of our physical laws should be created, where the commandments by which all that exists does should be written.

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, find something new, or find a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. For example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t become apparent only at the Planck scale but could show up at lower energies. But even if it goes on to solve some problems, the theory threatens to present a new one. Consider: if new physics isn’t born at the highest energy possible, wouldn’t the choice of any lower energy be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists had predicted, based on the properties of other particles and forces, that its mass would be very high. But when the boson’s discovery was confirmed at CERN in early 2013, its measured mass turned out to be far lower – so low that, by one calculation, plugging it into the theory implied a universe “the size of a football”, which is clearly not the case. So why is the Higgs boson’s mass so unnaturally low? Scientists have fronted many new theories that try to solve this problem, but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a mechanism by which the Higgs boson’s interaction with gravity – rather, with gravity’s associated energy – is mediated by a chain of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the mass becomes ‘screened’. An explanation that is both physical and accurate is hard to draw up because of the abstractions involved. But as Université libre de Bruxelles physicist Daniele Teresi suggests, imagine this series: X = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even though each step only halves X’s value, X is already down to an eighth after three steps, and a sixteenth after four. The effect becomes drastic quickly because it’s exponential.
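The compounding is easy to see numerically. In this sketch the per-‘gear’ factor of 0.5 is just an illustrative stand-in of mine, not a number from the actual clockwork equations:

```python
# Suppression after n 'gears', each passing on a fraction q of the
# previous one's effect: the surviving fraction is q**n.
q = 0.5  # an assumed per-gear factor, for illustration only
for n in [1, 3, 4, 10, 20]:
    print(f"{n:2d} gears -> suppression factor {q**n:.2e}")
# 20 gears already suppress by ~1e-6: a large hierarchy produced by
# a modest, uniform mechanism repeated many times.
```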

And the theory provides a mathematical toolbox that achieves all this without adding new particles. That makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem: supersymmetry, SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy – a distribution of particles and processes at energies between electroweak unification and grand unification – instead of leaving the region bizarrely devoid of action the way the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

What’s more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is the final word. As physicist Ben Allanach has written, it is a “new toy”, and physicists are still playing with it, applying it to different problems. It’s just that if it turns out to have an answer to the naturalness problem – as well as to, say, the question of why dark matter doesn’t decay – it will be notable. But is that enough: to say that clockwork theory mops up the math cleanly across a bunch of problems? How do we make sure that this is actually how nature works?

McCullough thinks there’s one way to check, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at certain energies in the LHC. These should show up as wavy squiggles in a plot of events against energy. If such plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity” – the version of the theory deployed against the naturalness problem.
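To picture what such ‘wavy squiggles’ might look like, here’s a toy spectrum of my own construction – not McCullough’s analysis – with a smooth falling background and a small oscillatory modulation laid on top:

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(500, 2000, 60)               # photon-pair energy bins, GeV (made-up range)
background = 1e5 * np.exp(-E / 300)          # smooth, steeply falling background
signal = background * 0.03 * np.sin(E / 40)  # an assumed 3% 'wavy squiggle'
events = rng.poisson(background + signal)    # what the detector would count

# residuals relative to the smooth background reveal the oscillation
residual = (events - background) / np.sqrt(background)
print(np.round(residual[:10], 2))
```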

To recap: no free parameters (i.e. no new particles), conceptual elegance and familiarity, and a concrete, unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. SUSY’s prospects, on the other hand, have been bleak since at least 2013 (if not earlier) – even though it remains one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok