You’re allowed to be interested in particle physics

This page appeared in The Hindu’s e-paper today.

I wrote the lead article, about why scientists are so interested in an elementary particle called the top quark. Long story short: the top quark is the heaviest elementary particle, and because all elementary particles get their masses by interacting with Higgs bosons, the top quark’s interaction is the strongest. This has piqued physicists’ interest because the Higgs boson’s own mass is peculiar: it’s much lower than theory suggests it should be, and at the same time poised on the brink of a threshold beyond which our universe as we know it wouldn’t exist. To explain this brinkmanship, physicists are intently studying the top quark, including measuring its mass with more and more precision.

It’s all so fascinating. But I’m well aware that not many people are interested in this stuff. I wish they were and my reasons follow.

A reasonably healthy journalism of particle physics exists today. Most of it happens in Europe and the US, (i) where the famous particle physics experiments are located, (ii) where there already exists an industry of good-quality science journalism, and (iii) where governments actually have the human resources, funds, and political will to support the experiments (in many other places, including India, these resources don’t exist, rendering moot the matter of people contending with these experiments).

In this post, I’m using particle physics both as itself and as a surrogate for other reputedly esoteric fields of study.

This journalism can be divided into three broad types: those with people, those concerned with spin-offs, and those without people. ‘Those with people’ refers to narratives about the theoretical and experimental physicists, engineers, allied staff, and administrators who support work on particle physics – their needs, challenges, and aspirations.

The meaning of ‘those concerned with spin-offs’ is obvious: these articles attempt to justify the money governments spend on particle physics projects by appealing to the technologies scientists develop in the course of particle-physics work. I’ve always found these to be apologist narratives erecting a bad expectation: that we shouldn’t undertake these projects if they don’t also produce valuable spin-off technologies. I suspect most particle physics experiments don’t because they are much smaller than the behemoth Large Hadron Collider and its ilk, which require more innovation across diverse fields.

‘Those without people’ are the rarest of the lot — narratives that focus on some finding or discussion in the particle physics community that is relatively unconcerned with the human experience of the natural universe (setting aside the philosophical point that the non-human details are being recounted by human narrators). These stories are about the material constituents of reality as we know it.

When I say I wish more people were interested in particle physics today, I wish they were interested in all these narratives, yet more so in narratives that aren’t centred on people.

Now, why should they be interested? This is a difficult question to answer.

I’m interested because I’m fascinated by the things around us that we don’t fully understand but are trying to. It’s a way of exploring the unknown, of going on an adventure. There are many, many things in this world that people can be curious about. It’s possible there are more such things than there are people (again, setting aside the philosophical bases of these claims). But particle physics and some other areas – united by the extent to which they are written off as esoteric – suffer from more than not having their fair share of patrons in the general (non-academic) population. Many people actively shun them, lose focus when reading about them, and at the same time do little to muster that focus back. It has even become okay for them to say they understood nothing of some (well-articulated) article and not expect to have their statement judged adversely.

I understand why narratives with people in them are easier to understand and connect with, but none of the implicated psychological, biological, and anthropological mechanisms also requires us to reject narratives and experiences without people. In other words, there may have been evolutionary advantages to finding out about other people, but there are no disadvantages attached to engaging with stories that aren’t about other people.

Next, I have met more than my fair share of people who flinch away from the mere suggestion of mathematics or physics, even when someone offers to guide them through these topics. I’m also aware researchers have documented this tendency and are attempting to distil insights that could improve the teaching and communication of these subjects. Personally, I don’t know how to deal with these people because I don’t know the shape of the barrier in their minds I need to surmount. I may be trying to vault over a high wall by simplifying a concept to its barest features when in fact the barrier is a low-walled labyrinth.

Third and last, let me do unto this post what I’m asking of people everywhere, and look past the people: why should we be interested in particle physics? It has nothing to offer our day-to-day experiences. Its findings can seem totally self-absorbed, supporting researchers and their careers, helping them win famous but otherwise generally unattainable awards, and sustaining discoveries into which political leaders and government officials occasionally dip their beaks to claim labels like “scientific superpower”. But the mistake here is not the existence of particle physics itself so much as the people-centric lens through which we insist it must be seen. It’s not that we should be interested in particle physics; it’s that we can.

Particle physics exists because some people are interested in it. If you are unhappy that our government spends too much on it, let’s talk about our national R&D expenditure priorities and what the practice, and practitioners, of particle physics can do to support other research pursuits and give back to various constituencies. The pursuit of one’s interests can’t be the problem (within reasonable limits, of course).

More importantly, being interested in particle physics, and in fact many other branches of science, shouldn’t have to be justified at every turn, for three reasons: reality isn’t restricted to people, people are shaped by their realities, and our destiny as humans extends beyond us. On the first two counts: when we choose to restrict ourselves to our lives and our welfare, we also choose to never learn about what, say, gravitational waves, dark matter, and nucleosynthesis are (unless these terms turn up in an exam we need to pass). Yet all these things played a part in bringing about the existence of Earth and its suitability for particular forms of life, and among people particular ways of life.

The rocks and metals that gave rise to waves of human civilisation were created in the bellies of stars. We needed to know our own star as well as we do – which still isn’t much – to help build machines that can use its energy to supply electric power. Countries and cultures that support the education and employment of people who make it a point to learn the underlying science thus come out on top. Knowing different things is a way to future-proof ourselves.

Further, climate change is evidence that humans are a planetary species, and soon we will be an interplanetary one. Our own migrations will force us to understand, eventually intuitively, the peculiarities of gravity, the vagaries of space, and (what is today called) mathematical physics. But even before such compulsions arise, it remains true that what we know is what we needn’t be afraid of, or at least know how to be afraid of. 😀

Just as well, learning, knowing, and understanding the physical universe is the foundation we need to imagine (or reimagine) futures better than the ones ordained for us by our myopic leaders. In this context, I recommend Shreya Dasgupta’s ‘Imagined Tomorrow’ podcast series, where she considers hypothetical future Indias in which medicines are tailor-made for individuals, where antibiotics don’t exist because they’re not required, where clean air is only available to breathe inside city-sized domes, and where courtrooms use AI — and the paths we can take to get there.

Similarly, with particle physics in mind, we could also consider cheap access to quantum computers, lasers that remove infections from flesh and tumours from tissue in a jiffy, and communications satellites that reduce bandwidth costs so much that we can take virtual education, telemedicine, and remote surgeries for granted. I’m not talking about these technologies as spin-offs, to be clear; I mean technologies born of our knowledge of particle (and other) physics.

At the biggest scale, of course, understanding the way nature works is how we can understand the ways in which the universe’s physical reality can or can’t affect us, in turn leading the way to understanding ourselves better and helping us shape more meaningful aspirations for our species. The more well-informed any decision is, the more rational it will be. Granted, the rationality of most of our decisions is currently only tenuously informed by particle physics, but consider if the inverse could be true: what decisions are we not making as well as we could if we cast our epistemic nets wider, including physics, biology, mathematics, etc.?

Consider, even beyond all this, the awe that astronauts who have gone to Earth orbit and beyond have reported experiencing when they first saw our planet from space, and the immeasurable loneliness surrounding it. There are problems with pronouncements that we should be united in all our efforts on Earth because, from space, we are all we have (especially when the country to which most of these astronauts belong condones a genocide). Fortunately, that awe is not the preserve of spacefaring astronauts. The moment we understood the laws of physics and the elementary constituents of our universe, we (at least the atheists among us) may have realised there is no centre of the universe. In fact, there is everything except a centre. How grateful I am for that. For good measure, awe is also good for the mind.

It might seem like a terrible cliché to quote Oscar Wilde here — “We are all in the gutter, but some of us are looking at the stars” — but it’s a cliché precisely because we have often wanted to be able to dream, to have the simple act of such dreaming contain all the profundity we know we squander when we live petty, uncurious lives. Then again, space is not simply an escape from the traps of human foibles. Explorations of the great unknown that includes the cosmos, the subatomic realm, quantum phenomena, dark energy, and so on are part of our destiny because they are the least like us. They show us what else is out there, and thus what else is possible.

If you’re not interested in particle physics, that’s fine. But remember that you can be.


Featured image: An example of simulated data as might be observed at a particle detector on the Large Hadron Collider. Here, following a collision of two protons, a Higgs boson is produced that decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector while the energy these particles deposit is shown in blue. Caption and credit: Lucas Taylor/CERN, CC BY-SA 3.0.

Looking (only) for Nehru

I have a habit of watching one old Tamil film a day. Yesterday evening, I was watching a film released in 1987, called Ivargal Indiyargal (‘They Are Indians’). In a scene in the film, an office manager distributes sweets to his colleagues. One of them takes a look at the item and asks the manager if he bought it from a particular shop that was famous for such items. The manager takes umbrage and scolds his colleague that he’s been asking that question for too many years, and demands to know if no other good sweet shop has opened since.

An innocuous scene in an innocuous film, yet it seemed to have a parallel with the Chandrayaan-3 mission. On August 23, as I’m sure you’re aware, the mission’s robotic lander module touched down in the moon’s south polar region, making India the first country to achieve this feat. It was a moment worth celebrating without any reservations, yet soon after, the social media commentariat found a way – admittedly not difficult – to make it part of its relentlessly superficial avalanche of controversy and dissension. One vein of it was of course split along the lines of what Jawaharlal Nehru did or didn’t do to help ISRO in its formative years. (The Hindu also received some letters from readers to this effect.)

But more than right-wing nuts trying to rewrite history in order to diminish the influence of Nehru’s ideals on modern India, I find the counter-argument to be curious and, sometimes, worth some concern. The rebuttals frequently take the form that we must remember Nehru in this time, the idea of scientific temper with which he was so taken, the “importance of science” for India’s development, the virtues of Nehruvian secularism, and so forth. It seems to be a reflex to leap all the way back to the first 16 years after independence, always at the cost of many more variants of all these ideals, often refined or revised to better accommodate the pressures of development, modernisation, and globalisation. (See here for one example.)

Members of the Congress party are partly to blame: sometimes they seem incapable of commemorating an event in terms other than that Nehru set the stage for it many years ago. BJP nationalists have also displayed a similar tendency. For example, in 2013, after Peter Higgs and François Englert were awarded the physics Nobel Prize for predicting the existence of the Higgs boson, the nationalists complained that the prize should have honoured Satyendra Nath Bose, whose work laid the foundation for the study of all bosons, and demanded that the ‘b’ in ‘boson’ always be capitalised. It was a ridiculous ask, uninterested in the work that had built on Bose’s ideas and papers in the intervening years, and it betrayed a failure to understand how a scientist and thinker of Bose’s calibre really ought to be honoured – surely with more than the capitalisation of little letters.

Similarly, today, the full weight of Nehru’s legacy is invoked to counter even arguments as rudimentary as chest-thumping. To quote the office manager in Ivargal Indiyargal: has there been no other articulation of the same impulses? My concern about this frankly insensible habit of reaching for Nehru is threefold: first, it overlooks other ideas from other individuals grounded in different lived experiences (especially those of marginalisation); second, the moments in which he is invoked are conducive to glossing over the problems, visible only on a closer look, with what Nehru – and for that matter Vikram Sarabhai, Satish Dhawan, and others – stood for; and third, perhaps I’m a fool to look for sense where it has seldom been found.

The Higgs boson and I

My first byline as a professional journalist (a.k.a. my first byline ever) was oddly for a tech story – about the advent of IPv6 internet addresses. I started writing it after 7 pm, had to wrap it up by 9 pm and it was published in the paper the next day (I was at The Hindu).

The first byline that I actually wanted to take credit for appeared around a month later, on July 4, 2012 – ten years ago – on the discovery of the Higgs boson at the Large Hadron Collider (LHC) in Europe. I published a live blog as Fabiola Gianotti, Joe Incandela and Rolf-Dieter Heuer, the spokespersons of the ATLAS and CMS detector collaborations and the director-general of CERN, respectively, announced and discussed the results. I also distinctly remember taking a pee break after telling readers “I have to leave my desk for a minute” and receiving mildly annoyed, but also amused, comments complaining of TMI.

After the results had been announced, the science editor, R. Prasad, told me that R. Ramachandran (a.k.a. Bajji) was filing the main copy and that I should work around that. So I wrote a ‘what next’ piece describing the work that remained for physicists to do, including the problems in particle physics that remained open and the alternative theories, like supersymmetry, required to resolve them. (Some jingoism surrounding the lack of acknowledgment for S.N. Bose – wholly justifiable, in my view – also forced me to write this.)

I also remember placing a bet with someone that the Nobel Prize for physics in 2012 wouldn’t be awarded for the discovery (because I knew, but the other person didn’t, that the nominations for that year’s prizes had closed by then).

To write about the feats and mysteries of particle physics is why I became a science journalist, so the Higgs boson’s discovery being announced a month after I started working was special – not least because it considerably eased the effort of making pitches and having them accepted (specifically, I didn’t have to spend much time or effort spelling out why a story was important). It was also a great opportunity to learn how breaking news is reported, and it accelerated my induction into the newsroom and its ways.

But my interest in particle physics has since waned, especially from around 2017, as I began to focus, in my role as science editor of The Wire (which I cofounded/joined in May 2015), on other areas of science as well. My heart is still with physics, and I have greatly enjoyed writing the occasional article about topological phases, neutrino astronomy, laser cooling and, recently, the AdS/CFT correspondence.

A couple of years ago, I realised during a spell of daydreaming that even though I had stuck with physics, my act of ‘dropping’ particle physics as a specialty had left me without an edge as a writer. Physics alone was, and is, too broad – even if very few others in India write on it in the press, giving me lots of room to display my skills (such as they are). I briefly considered and rejected quantum computing and BECCS technologies – the former because its stories were often bursting with hype, especially in my neck of the woods, and the latter because, while it seemed important, it didn’t sit well with me morally. I was indifferent towards both because they were centred on technologies, whereas I wanted to write about pure, supposedly boring science.

In all, penning an article commemorating the tenth anniversary of the announcement of the Higgs boson’s discovery brought back pleasant memories of my early days at The Hindu but also reminded me of this choice that I still need to make, for my sake. I don’t know if there is a clear winner yet, although quantum physics more broadly and condensed-matter physics more specifically are appealing. This said, I’m also looking forward to returning to writing more about physics in general, paralleling the evolution of The Wire Science itself (some announcements coming soon).

I should also note that I started blogging in 2008, when I was still an undergraduate student of mechanical engineering, in order to clarify my own knowledge of and thoughts on particle physics.

So in all, today is a special day.

US experiments find hint of a break in the laws of physics

At 9 pm India time on April 7, physicists at an American research facility delivered a shot in the arm to efforts to find flaws in a powerful theory that explains how the building blocks of the universe work.

Physicists are looking for flaws in it because the theory doesn’t have answers to some questions – like “what is dark matter?”. They hope to find a crack or a hole that might reveal the presence of a deeper, more powerful theory of physics that can lay unsolved problems to rest.

The story begins in 2001, when physicists performing an experiment at Brookhaven National Laboratory, New York, found that fundamental particles called muons weren’t behaving the way they were supposed to in the presence of a magnetic field. This was called the g-2 anomaly (after a number called the gyromagnetic factor).

An incomplete model

Muons are subatomic and can’t be seen with the naked eye, so it could’ve been that the instruments the physicists were using to study the muons indirectly were glitching. Or it could’ve been that the physicists had made a mistake in their calculations. Or, finally, what the physicists thought they knew about the behaviour of muons in a magnetic field was wrong.

In most stories we hear about scientists, the first two possibilities are true more often: they didn’t do something right, so the results weren’t what they expected. But in this case, the physicists were hoping they were wrong. This unusual wish was the product of working with the Standard Model of particle physics.

According to physicist Paul Kyberd, the fundamental particles in the universe “are classified in the Standard Model of particle physics, which theorises how the basic building blocks of matter interact, governed by fundamental forces.” The Standard Model has successfully predicted the numerous properties and behaviours of these particles. However, it’s also been clearly wrong about some things. For example, Kyberd has written:

When we collide two fundamental particles together, a number of outcomes are possible. Our theory allows us to calculate the probability that any particular outcome can occur, but at energies beyond which we have so far achieved, it predicts that some of these outcomes occur with a probability of greater than 100% – clearly nonsense.

The Standard Model also can’t explain what dark matter is, what dark energy could be or if gravity has a corresponding fundamental particle. It predicted the existence of the Higgs boson but was off about the particle’s mass by a factor of 100 quadrillion.

All these issues together imply that the Standard Model is incomplete, that it could be just one piece of a much larger ‘super-theory’ that works with more particles and forces than we currently know. To look for these theories, physicists have taken two broad approaches: to look for something new, and to find a mistake with something old.

For the former, physicists use particle accelerators, colliders and sophisticated detectors to look for heavier particles thought to exist at higher energies, and whose discovery would prove the existence of a physics beyond the Standard Model. For the latter, physicists take some prediction the Standard Model has made with a great degree of accuracy and test it rigorously to see if it holds up. Studies of muons in a magnetic field are examples of this.

According to the Standard Model, a number associated with the way a muon swivels in a magnetic field – its g-factor – is equal to 2 plus a minuscule correction, whose ‘anomalous’ half the model pegs at 0.00116591804 (with some give or take). This correction is the handiwork of fleeting quantum effects in the muon’s immediate neighbourhood, which make it wobble. (For a glimpse of how hard these calculations can be, see this description.)
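For the record, the conventional bookkeeping connecting these numbers – a standard definition, not anything specific to these experiments – is:

    a_\mu \equiv \frac{g_\mu - 2}{2}, \qquad g_\mu = 2\,(1 + a_\mu), \qquad \Delta a_\mu = a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}

The deviations quoted below are values of Δa_μ, i.e. differences in the anomalous part, not in the g-factor itself.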

Fermilab result

In the early 2000s, the Brookhaven experiment measured the deviation to be slightly higher than the model’s prediction. Though it was small – off by about 0.00000000346 – the context made it a big deal. Scientists know that the Standard Model has a habit of being really right, so when it’s wrong, the wrongness becomes very important. And because we already know the model is wrong about other things, there’s a possibility that the two things could be linked. It’s a potential portal into ‘new physics’.

“It’s a very high-precision measurement – the value is unequivocal. But the Standard Model itself is unequivocal,” Thomas Kirk, an associate lab director at Brookhaven, had told Science in 2001. The disagreement between the values implied “that there must be physics beyond the Standard Model.”

This is why the results physicists announced today are important.

The Brookhaven experiment that ascertained the g-2 anomaly wasn’t sensitive enough to say with a meaningful amount of confidence that its measurement was really different from the Standard Model prediction, or if there could be a small overlap.

Science writer Brianna Barbu has likened the mystery to “a single hair found at a crime scene with DNA that didn’t seem to match anyone connected to the case. The question was – and still is – whether the presence of the hair is just a coincidence, or whether it is actually an important clue.”

So to go from ‘maybe’ to ‘definitely’, physicists shipped the 50-foot-wide, 15-tonne magnet that the Brookhaven facility used in its Muon g-2 experiment to Fermilab, the US’s premier high-energy physics research facility in Illinois, and built a more sensitive experiment there.

The new result is from tests at this facility: that the observation differs from the Standard Model’s predicted value by 0.00000000251 (give or take a bit).

The Fermilab results are expected to become a lot better in the coming years, but even now they represent an important contribution. The statistical significance of the Brookhaven result on its own fell short of the five-sigma threshold at which scientists claim a discovery; combined, the two results stand at about 4.2 sigma – well above the three-sigma level at which scientists claim evidence, but still short of a discovery.
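To make the combination concrete, here is a minimal sketch assuming simple inverse-variance weighting – my illustration, not the collaborations’ actual statistical machinery – using the published central values and uncertainties of the muon’s anomalous magnetic moment, in units of 10^-11:

    # Combine two independent measurements of a_mu, then compare with theory.
    bnl, bnl_err = 116592089, 63     # Brookhaven E821, final result
    fnal, fnal_err = 116592040, 54   # Fermilab Muon g-2, 2021
    sm, sm_err = 116591810, 43       # Standard Model prediction (Theory Initiative, 2020)

    w_bnl, w_fnal = 1 / bnl_err**2, 1 / fnal_err**2
    combined = (w_bnl * bnl + w_fnal * fnal) / (w_bnl + w_fnal)
    combined_err = (w_bnl + w_fnal) ** -0.5

    delta = combined - sm
    delta_err = (combined_err**2 + sm_err**2) ** 0.5
    print(f"combined a_mu: {combined:.0f} +/- {combined_err:.0f}")      # ~116592061 +/- 41
    print(f"deviation: {delta:.0f}, or {delta / delta_err:.1f} sigma")  # ~251, or ~4.2 sigma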

Potential dampener

So for now, the g-2 anomaly seems to be real. It’s not easy to say if it will continue to be real as physicists further upgrade the Fermilab g-2’s performance.

In fact there appears to be another potential dampener on the horizon. An independent group of physicists has had a paper published today saying that the Fermilab g-2 result is actually in line with the Standard Model’s prediction and that there’s no deviation at all.

This group, called BMW, used a different way to calculate the Standard Model’s value of the number in question than the Fermilab folks did. Aida El-Khadra, a theoretical physicist at the University of Illinois, told Quanta that the Fermilab team had yet to check BMW’s approach, but if it was found to be valid, the team would “integrate it into its next assessment”.

The ‘Fermilab approach’ itself is something physicists have worked with for many decades, so it’s unlikely to be wrong. If the BMW approach checks out, Quanta reports, the mere fact that two approaches lead to different predictions of the number’s value is likely to become a new mystery of its own.

But physicists are excited for now. “It’s almost the best possible case scenario for speculators like us,” Gordan Krnjaic, a theoretical physicist at Fermilab who wasn’t involved in the research, told Scientific American. “I’m thinking much more that it’s possibly new physics, and it has implications for future experiments and for possible connections to dark matter.”

The current result is also important because the other way to look for physics beyond the Standard Model – by looking for heavier or rarer particles – can be harder.

This isn’t simply a matter of building a larger particle collider, powering it up, smashing particles and looking for other particles in the debris. For one, there is a very large number of energy levels at which a particle might form. For another, there are thousands of other particle interactions happening at the same time, generating a tremendous amount of noise. So without knowing what to look for and where, a particle hunt can be like looking for a very small needle in a very large haystack.

The ‘what’ and ‘where’ instead come from different theories that physicists have worked out based on what we know already; they then design experiments depending on which theory they need to test.

Into the hospital

One popular theory is called supersymmetry: it predicts that every elementary particle in the Standard Model framework has a heavier partner particle, called a supersymmetric partner. It also predicts the energy ranges in which these particles might be found. The Large Hadron Collider (LHC) at CERN, near Geneva, was powerful enough to access some of these energies, so physicists used it and went looking last decade. They didn’t find anything.

A table showing searches for particles associated with different post-Standard-Model theories (orange labels on the left). The bars show the energy levels up to which the ATLAS detector at the Large Hadron Collider has not found the particles. Table: ATLAS Collaboration/CERN

Other groups of physicists have also tried to look for rarer particles: ones that occur at an accessible energy but only once in a very large number of collisions. The LHC is a machine at the energy frontier: it probes higher and higher energies. To look for extremely rare particles, physicists explore the intensity frontier – using machines specialised in generating very large numbers of collisions.

The third and last is the cosmic frontier, in which scientists look for unusual particles coming from outer space. For example, early last month, researchers reported that they had detected an energetic anti-neutrino (a kind of fundamental particle) coming from outside the Milky Way, participating in a rare event that scientists had predicted in 1959 would occur if the Standard Model is right. The discovery, in effect, further cemented the validity of the Standard Model and ruled out one potential avenue to find ‘new physics’.

This event also recalls an interesting difference between the 2001 and 2021 announcements. The late British scientist Francis J.M. Farley wrote in 2001, after the Brookhaven result:

… the new muon (g-2) result from Brookhaven cannot at present be explained by the established theory. A more accurate measurement … should be available by the end of the year. Meanwhile theorists are looking for flaws in the argument and more measurements … are underway. If all this fails, supersymmetry can explain the data, but we would need other experiments to show that the postulated particles can exist in the real world, as well as in the evanescent quantum soup around the muon.

Since then, the LHC and other physics experiments have sent supersymmetry ‘to the hospital’ on more than one occasion. If the anomaly continues to hold up, scientists will have to find other explanations. Or, if the anomaly whimpers out, like so many others of our time, we’ll just have to put up with the Standard Model.

Featured image: A storage-ring magnet at Fermilab whose geometry allows for a very uniform magnetic field to be established in the ring. Credit: Glukicov/Wikimedia Commons, CC BY-SA 4.0.

The Wire Science
April 8, 2021

My heart of physics

Every July 4, I have occasion to remember two things: the discovery of the Higgs boson, and my first published byline for an article about the discovery of the Higgs boson. I have no trouble believing it’s been eight years since we discovered this particle, using the Large Hadron Collider (LHC) and its ATLAS and CMS detectors, in Geneva. I’ve greatly enjoyed writing about particle physics in this time, principally because closely engaging with new research and the scientists who worked on it allowed me to learn more about a subject that high school and college had let me down on: physics.

In 2020, I haven’t been able to focus much on the physical sciences in my writing, thanks to the pandemic, the lockdown, their combined effects and one other reason. This has been made doubly sad by the fact that the particle physics community at large is at an interesting crossroads.

In 2012, the LHC fulfilled the principal task it had been built for: finding the Higgs boson. After that, physicists imagined the collider would discover other unknown particles, allowing theorists to expand their theories and answer hitherto unanswered questions. However, the LHC has since done the opposite: it has narrowed the possibilities of finding new particles that physicists had argued should exist according to their theories (specifically supersymmetric partners), forcing them to look harder for mistakes they might’ve made in their calculations. But thus far, physicists have neither found mistakes nor made new findings, leaving them stuck in an unsettling knowledge space from which it seems there might be no escape (okay, this is sensationalised, but it’s also kinda true).

Right now, the world’s particle physicists are mulling building a collider larger and more powerful than the LHC, at a cost of billions of dollars, in the hopes that it will find the particles they’re looking for. Not all physicists are agreed, of course. If you’re interested in reading more, I’d recommend articles by Sabine Hossenfelder and Nirmalya Kajuri and spiralling out from there. But notwithstanding the opposition, CERN – which coordinates the LHC’s operations with tens of thousands of personnel from scores of countries – recently updated its strategy vision to recommend the construction of such a machine, with the ability to produce copious amounts of Higgs bosons in collisions between electrons and positrons (a.k.a. ‘Higgs factories’). China has also announced plans of its own to build something similar.

Meanwhile, scientists and engineers are busy upgrading the LHC itself to a ‘high luminosity’ version, where luminosity is a measure of the number of collisions the machine can produce – and so of the number of interesting events available for further study. This version will operate until 2038. That isn’t as far away as it sounds: it took more than a decade to build the LHC, and it will definitely take longer to plan for, convince lawmakers about, secure the funds for, and build something bigger and more complicated.
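For context, the relation between luminosity and the expected number of events of a given process is standard accelerator bookkeeping (nothing specific to the upgrade):

    N = \sigma \int \mathcal{L}\, \mathrm{d}t

where σ is the process’s cross-section and the integral is the luminosity accumulated over the running time; the ‘high luminosity’ upgrade raises the second factor.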

There have been some other developments connected to the current occasion, in that they indicate other ways to discover ‘new physics’ – the collective name for phenomena that would violate our existing theories’ predictions and show us where we’ve gone wrong in our calculations.

The most recent one, I think, was the ‘XENON excess’, which refers to a moderately strong signal recorded by the XENON1T detector in Italy that physicists think could be evidence of a class of particles called axions. I say ‘moderately strong’ because the statistical significance of the signal’s strength is just barely above the threshold used to denote evidence and nowhere near the threshold that denotes a discovery proper.

It evoked a fair bit of excitement because axions would count as new physics – but when I asked two physicists (one after the other) to write an article explaining this development, they refused on similar grounds: that the modest significance makes it likely the signal will eventually be accounted for by some other, well-known event. I was disappointed, of course, but I wasn’t surprised either: in the last eight years, I can count at least four instances in which a seemingly inexplicable development in particle physics turned out to be a dud.

The most prominent one was the ‘750 GeV excess’ at the LHC in December 2015, which seemed to be a sign of a new particle about six times heavier than a Higgs boson and 800 times heavier than a proton (at rest). But when physicists analysed more data, the signal vanished – i.e. it wasn’t there in the first place, and what physicists had seen was likely a statistical fluke of some sort. Another popular anomaly that went the same way was the one at Atomki.

But while all of this is so very interesting, today – July 4 – also seems like a good time to admit I don’t feel as invested in the future of particle physics anymore (the ‘other reason’). Some might say, and have said, that I’m abandoning ship just as the field’s central animus is moving away from the physics and more towards sociology and politics, and some might be right. I get enough of the latter subjects when I work on the non-physics topics that interest me, like research misconduct and science policy. My heart of physics itself is currently tending towards quantum mechanics and thermodynamics (although not quantum thermodynamics).

One peer had also recommended in between that I familiarise myself with quantum computing while another had suggested climate-change-related mitigation technologies, which only makes me wonder now if I’m delving into those branches of physics that promise to take me farther away from what I’m supposed to do. And truth be told, I’m perfectly okay with that. 🙂 This does speak to my privileges – modest as they are on this particular count – but when it feels like there’s less stuff to be happy about in the world with every new day, it’s time to adopt a new hedonism and find joy where it lies.

Writing itself is fantasy

The symbols may have been laid down on paper or the screen in whatever order but when we read, we read the words one at a time, one after another – linearly. Writing, especially of fiction, is an act of using the linear construction of meaning to tell a story whose message will be assimilated bit by bit into a larger whole that isn’t necessarily linear at all, and manages to evade cognitive biases (like the recency effect) that could trick the reader into paying more attention to parts of the story instead of the intangible yet very-much-there whole. Stories in fact come in many shapes. One of my favourites, Dune, is so good because it’s entirely spherical in the spacetime of this metaphor, each of its concepts like a three-dimensional ouroboros, connected end to end yet improbably layered over, under and around each other. The first four Harry Potter books are my least favourite pieces of good fantasy for their staunch linearity, even despite the use of time travel.

The plot of Embassytown struggles with this idea a little bit, with its fraction-like representation of meaning using pairs of words. Even then, China Miéville has a bit of a climb on his hands: his (human) readers consume the paired words one at a time, first the one on the top then the one on the bottom. So a bit of translation becomes necessary, an exercise in projecting a higher dimensional world in which words are semantically bipolar, like bar magnets each with two ends, onto the linguistic surface of one in which the words are less chimerical. Miéville is forced to be didactic (which he musters with some reluctance), expending a few dozen pages constructing rituals of similes the reader can employ to sync with the Ariekei, the story’s strange alien characters, but always only asymptotically so. We can after all never comprehend a reality that exists in six – or six-thousand – dimensions, much the same way the Higgs boson’s existence is a question of faith if you’re unfamiliar with the underlying mathematics and the same way a human mind and an alien mind can never truly, as they say, connect.

Arrival elevates this challenge, presenting us with alien creatures – the ‘heptapods’ – the symbols of whose communication are circular, each small segment of the circumference standing for one human word and the whole assemblage for meaning composed by a non-linear combination of words. I’m yet to read the Ted Chiang novella (‘Story of Your Life’) on which the film is based; notwithstanding the possibility that Chiang has discussed their provenance, I wonder if the heptapods think a complex thought that is translated into a clump of biochemical signals that then encode meaning in a stochastic process: not fully predictably, since we know through the simpler human experience that a complicated idea can be communicated using more than one combination of simpler ideas. One heptapod’s choice could easily differ from that of another.

The one human invention, and experience if you will, that recreates the narrative anxiety encoded in the Ariekei’s and heptapods’ attempts (through their respective authors’ skills, imagination, patience and whatever else) to communicate with humans is writing insofar as the same anxiety manifests in the use of a lower order form – linearity – to construct a higher order image. Thus from the reader’s perspective the writer inhabits an inferior totality, and the latter performs a construction, an assimilation, by synthesising the sphericity and wholeness of a story using fundamentally linear strands, an exercise in building a circle using lines, and using circles to build a sphere, and so forth.

Writing a story is in effect like convincing someone that an object exists but having no way other than storytelling to realise the object’s existence. Our human eyes will always see the Sun as a circle but we know it’s a sphere because there are some indirect ways to ascertain its sphericity, more broadly to ascertain the universe exists in three dimensions at least locally; the ‘simplest’ of these ways would be to entirely assume the Sun is spherical because that seems to simplify problem-solving. However, say one writer’s conceit is that the Sun really exists in eight dimensions, and they go on to construct an elaborate story of adventure, discovery and contemplation to convince the reader that they’re right.

In this sense, the writer would draw upon our innate knowledge of the universe in three dimensions, and our knowledge and experience of the ways in which it is and isn’t truthful, to build an emergent higher-order Thing. While this may seem like a work of science and/or fantasy fiction, the language humans use to build all of their stories, even the nonfiction, renders every act of story-telling a similarly architecturally constructive endeavour. No writer commences narration with the privilege of words meaning more than they stand for in the cosmos of three dimensions and perpetually forward-moving time, nor of sentences being parsed in any way other than through the straightforward progression of a single stream of words. Everything more complicated than whatever can be assembled with two-dimensional relationships requires a voyage through the fantastic to communicate.

Peter Higgs, self-promoter

I was randomly rewatching The Big Bang Theory on Netflix today when I spotted this gem:

Okay, maybe less a gem and more a shiny stone, but still. The screenshot, taken from the third episode of the sixth season, shows Sheldon Cooper mansplaining to Penny the work of Peter Higgs, whose name is most famously associated with the scalar boson the Large Hadron Collider collaboration announced the discovery of to great fanfare in 2012.

My fascination pertains to Sheldon’s description of Higgs as an “accomplished self-promoter”. Higgs, in real life, is extremely reclusive and self-effacing, and journalists have found him notoriously hard to catch for an interview, or even a quote. His fellow discoverers of the Higgs boson, including François Englert, the Belgian physicist with whom Higgs won the Nobel Prize for physics in 2013, have been much less media-shy. Higgs has even been known to suggest that a mechanism in particle physics involving the Higgs boson should really be called the ABEGHHK’tH mechanism, to include the names of everyone who hit upon its theoretical idea in the 1960s (Philip Warren Anderson, Robert Brout, Englert, Gerald Guralnik, C.R. Hagen, Higgs, Tom Kibble and Gerardus ‘t Hooft), instead of just the Higgs mechanism.

No doubt Sheldon thinks Higgs did right by choosing not to appear in interviews for the public or write articles in the press himself, considering such extreme self-effacement is also Sheldon’s modus of choice. At the same time, Higgs might have lucked out, being recognised for work he conducted 50 years prior probably because he’s white and from an affluent country – attributes that nearly guarantee fewer, if any, systemic barriers to international success. Self-promotion is an important part of the modern scientific endeavour, as it is of most modern endeavours, even for an accomplished scientist.

All this said, it is notable that Higgs was also a conscientious person. When he was awarded the Wolf Prize in 2004 – a prestigious award in the field of physics – he refused to receive it in person in Jerusalem because it was a state function and he had protested Israel’s war against Palestine. He was a member of the Campaign for Nuclear Disarmament until the group extended its opposition to nuclear power as well; then he resigned. He also stopped supporting Greenpeace after it became opposed to genetic modification. If it is for these actions that Sheldon deemed Higgs an “accomplished self-promoter”, then I stand corrected.

Featured image: A portrait of Peter Higgs by Lucinda Mackay hanging at the James Clerk Maxwell Foundation, Edinburgh. Caption and credit: FF-UK/Wikimedia Commons, CC BY-SA 4.0.

The not-so-obvious obvious

If your job requires you to pore through a dozen or two scientific papers every month – as mine does – you’ll start to notice a few every now and then couching a somewhat well-known fact in study-speak. I don’t mean scientific-speak, largely because there’s nothing wrong about trying to understand natural phenomena in the formalised language of science. However, there seems to be something iffy – often with humorous effect – about a statement like the following: “cutting emissions of ozone-forming gases offers a ‘unique opportunity’ to create a ‘natural climate solution'”1 (source). Well… d’uh. This is study-speak – to rephrase mostly self-evident knowledge or truisms in unnecessarily formalised language, not infrequently in the style employed in research papers, without adding any new information but often including an element of doubt when there is likely to be none.

1. Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study’s real significance. However, the words within single-quotes are copied from the corresponding paper itself. And this said, there have been some truly hilarious efforts to make sense of the obvious. For examples, consider many of the winners of the Ig Nobel Prizes.

Of course, it always pays to be cautious, but where do you draw the line before a scientific result is simply one because it is required to initiate a new course of action? For example, the Univ. of Exeter study, the press release accompanying which discussed the effect of “ozone-forming gases” on the climate, recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, an allotrope of oxygen that is harmful to both humans and plants. But this is as non-“unique” an idea as the corresponding solution that arises (of letting plants live better) is “natural”.

However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

For example, the Cornell University consumer-behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even where they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”
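A toy simulation – entirely my construction, with synthetic noise rather than Wansink’s data – shows why slicing works this way: test a hundred hypotheses on pure noise at the conventional p < 0.05 cutoff and, on average, about five will come out ‘significant’ by chance alone.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    false_positives = 0
    for _ in range(100):
        a = rng.normal(size=30)  # two groups drawn from the same
        b = rng.normal(size=30)  # distribution: there is no real effect
        if ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
    print(false_positives)       # around 5 'discoveries', despite nothing to find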

The dark side of incremental research, and the virtue of incremental research done right, stem from the fact that it’s harder than it looks to ascertain the truth of a finding when the strength of the finding is expected to be either so small that it tests the very notion of significance or so large – or so pronounced – that it transcends intuitive comprehension.

For an example of the former, among particle physicists a result qualifies as ‘fact’ only if the chances of it being a fluke are smaller than 1 in 3.5 million. So the Large Hadron Collider (LHC), which was built to discover the Higgs boson, had to perform millions upon millions of proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.
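That 1-in-3.5-million figure, for the record, is the one-sided tail probability of a five-sigma fluctuation on a normal distribution – a quick check, assuming this standard convention:

    from scipy.stats import norm

    p = norm.sf(5)   # probability of a fluctuation of 5 sigma or more
    print(p, 1 / p)  # ~2.87e-07, i.e. about 1 in 3.5 million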

But while protons are available abundantly and the LHC can theoretically perform 645.8 trillion collisions per second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It’s never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments based on very specific questions, and such that the experimental protocols suppress, or even eliminate, interference, sources of doubt and confounding variables, and accentuate the effects of whatever action, decision or influence is being evaluated.

Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.

Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes on a given effect. The Univ. of Exeter study that determined ozone concentration in the lower atmosphere due to surface sources of different gases contains an example. The authors write in their paper (emphasis added):

We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

In effect, their data could be modified in future to reflect new information and/or methods, but in the meantime, and far from being a silly attempt at translating a claim into jargon-laden language, the study eliminates doubt to the extent possible with existing data and modelling techniques to ascertain something. And even in cases where this something is well known or already well understood, the validation of its existence could also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – generate data that is more likely to motivate political action than, say, demands from non-experts.

In fact, the American mathematician Marc Abrahams, known much more for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to “quantify the obvious” (source). The other two are being unaware of the obvious and, of course, to disprove the obvious.

A gear-train for particle physics

It has come under scrutiny at various times from multiple prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. The first time it was seriously considered was about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which they acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like that in the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur if it does: the Planck scale, around 10^19 GeV. This is about as much energy as is contained in a tank of petrol, but it’s stupefyingly large when you have to accommodate all of it in a particle that’s 10^-15 metres wide.
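The petrol comparison is easy to check – rough arithmetic with the standard GeV-to-joule conversion, and my approximate figure for petrol’s energy density:

    GEV_IN_JOULES = 1.602e-10
    planck_energy = 1e19 * GEV_IN_JOULES     # ~1.6e9 J
    petrol_per_litre = 34.2e6                # J/L, approximate energy density of petrol
    print(planck_energy / petrol_per_litre)  # ~47 litres: a tankful, give or take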

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), accelerates protons to close to light-speed, with powerful magnetic fields keeping them on track, until each proton’s energy approaches about 7,000 GeV. But the Planck energy is still about 15 orders of magnitude higher, which means it’s not something we might ever be able to attain on Earth. Nonetheless, physicists’ theories show that that’s where all of our physical laws should be created, where the commandments by which all that exists does should be written.

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, something new or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. So, for example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t necessarily become apparent only at the Planck scale but at a lower energy itself. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: If it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any energy lower than that just be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists had predicted, based on the properties of other particles and forces, that its mass would be very high – and a mass that high would have implied, among other things, a universe “the size of a football”, which is clearly not the case. When the boson’s discovery was confirmed at CERN in January 2013, its measured mass was indeed far lower. So why is the Higgs boson’s mass so low, so unnaturally low? Scientists have fronted many new theories that try to solve this problem but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – rather gravity’s associated energy – is mediated by a string of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation for this that’s both physical and accurate is hard to draw up because of various abstractions. So as University of Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. So the effect can get quickly drastic because it’s exponential.
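To see how quickly that gets drastic, here is a two-line sketch using the same numbers as Teresi’s series:

    q = 0.5             # the suppression each 'gear' applies, as in the series above
    for n in (3, 4, 10, 30):
        print(n, q**n)  # 0.125, 0.0625, ~0.001, ~9.3e-10

Thirty steps in, the starting value has been suppressed by a factor of about a billion, without any single step doing anything dramatic.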

And the theory provides a mathematical toolbox that allows for all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem, called supersymmetry, or SUSY for short. Physicists also like SUSY because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action like the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

What's more, as Matthew McCullough, one of clockwork's developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell's equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature's tendency to be guided by common principles in diverse contexts.

This isn't to say clockwork theory is it. As physicist Ben Allanach has written, it is a "new toy" and physicists are still playing with it to solve different problems. Just that if it turns out to have an answer to the naturalness problem – as well as to, say, the question of why dark matter doesn't decay – it will be notable. But is this enough: to say that clockwork theory mops up the math cleanly in a bunch of problems? How do we make sure that this is how nature actually works?

McCullough thinks there's one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at certain energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork's predictions, then we will have confirmed what McCullough calls an "irreducible prediction of clockwork gravity" – 'clockwork gravity' being the application of the theory that solves the naturalness problem.
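For a feel of what such squiggles might look like, here's a toy sketch. The background shape, oscillation period, and amplitude below are all invented for illustration – they are not clockwork's actual predictions:

```python
# Toy sketch of 'wavy squiggles' on a smoothly falling diphoton spectrum.
# All numbers here are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

energy = np.linspace(500, 3000, 500)          # diphoton energy, GeV (assumed range)
background = 1e6 * (energy / 500.0) ** -4     # smooth falling background (toy)
squiggle = 1 + 0.05 * np.sin(energy / 60.0)   # small periodic modulation (toy)
events = background * squiggle

plt.plot(energy, background, "--", label="smooth background (toy)")
plt.plot(energy, events, label="background + clockwork-like squiggles (toy)")
plt.xlabel("diphoton energy (GeV)")
plt.ylabel("events")
plt.yscale("log")
plt.legend()
plt.show()
```

The analysis task, very roughly, is to ask whether the measured spectrum prefers the wavy curve over the smooth one.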

To recap: no free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. SUSY's prospects, on the other hand, have been bleak since at least 2013 (if not earlier) – even though it is one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven't observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok

Some notes and updates

Four years of the Higgs boson

Missed this, didn't I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in March 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don't exactly mark the occasion every year except to recap whatever's been happening in particle physics. And this year: everyone's still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV, but since data-taking began at the LHC in late May, strong rumours from within CERN have it that such a particle probably doesn't exist (i.e. it's vanishing in the new data-sets). Pity. The favoured way to anticipate what might come, well before the final announcements are made in August, is to keep an eye out for conference announcements in mid-July. If they're made, it's a strong giveaway that something's been found.

Live-tweeting and timezones

I've a shitty internet connection at home in Delhi, which meant I couldn't watch the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA's social media team had done such a great job of hyping up the insertion (deservingly so) that it seemed as if all 480 accounts I followed were tweeting about it. I don't believe I missed anything at all, except perhaps the sounds of applause. Twitter's awesome that way, and I'll say that even if it means I'm stating the obvious. One thing did strike me: all times (of the various events in the timeline) were published in UTC and EDT. This makes sense because converting from UTC to a local timezone is easy (IST = UTC + 5:30) while EDT corresponds to the US east coast. However, the thing about IST being UTC + 5:30 isn't immediately apparent to everyone (at least it isn't to me), and every so often I wish an account tweeting from India, such as a news agency's, would use IST. I do it every time.
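For what it's worth, the conversion is a one-liner in code too – a minimal sketch assuming Python 3.9+ and its zoneinfo module; the timestamp below is illustrative:

```python
# Convert a UTC timestamp to IST (UTC + 5:30) using Python's zoneinfo.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_time = datetime(2016, 7, 5, 2, 53, tzinfo=timezone.utc)  # illustrative instant
ist_time = utc_time.astimezone(ZoneInfo("Asia/Kolkata"))     # IST = UTC + 5:30

print(utc_time.isoformat())  # 2016-07-05T02:53:00+00:00
print(ist_time.isoformat())  # 2016-07-05T08:23:00+05:30
```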

New music

https://www.youtube.com/watch?v=F4IwxzU3Kv8

I don't know why I hadn't found Yat-kha earlier considering how much I listen to Huun Huur Tu, and Yat-kha is almost always among the recommendations (both bands specialise in throat-singing). And while Huun Huur Tu likes to keep its music traditional and true to its original compositional style, Yat-kha takes it a step further, banding its sound up with rock – and that tastes much better to me. With a voice like Albert Kuvezin's, keeping things traditional can be a little disappointing – you can hear why in the song above. It's called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it's all the more sensual when it's rendered by a human voice, rising and falling. Another example of what I'm talking about is a song called Yenisei punk. Finally, this is where I'd suggest you stop if you're looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It's terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third from the denouement. If the second part has to benefit from anything at all, it is the story itself, not the intensity of the stakes within its narrative. At least, that's my takeaway from Fall of Light, the second book of Steven Erikson's Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson's epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain from that series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine is wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It's neither eager to shed its burdens nor eager to take on new ones. If you've read the Malazan series, I'd say he's written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.