A new source of cosmic rays?

The International Space Station carries a suite of instruments conducting scientific experiments and measurements in low-Earth orbit. One of them is the Alpha Magnetic Spectrometer (AMS), which studies antimatter particles in cosmic rays to understand how the universe has evolved since its birth.

Cosmic rays are particles, or clumps of particles such as atomic nuclei, flying through the universe at nearly the speed of light. Since the mid-20th century, scientists have found that cosmic-ray particles are emitted during supernovae and in the centres of galaxies that host large black holes. Scientists installed the AMS in May 2011, and by April 2021 it had tracked more than 230 billion cosmic-ray particles.

When scientists from the Massachusetts Institute of Technology (MIT) recently analysed these data — the results of which were published on June 25 — they found something odd. Roughly one in 10,000 of the cosmic-ray particles were neutron-proton pairs, a.k.a. deuterons. The universe has only a small number of these particles (around 0.002% of all atoms) because they were created only during a roughly 10-minute-long window a short time after the universe was born.

Yet cosmic rays streaming past the AMS seemed to have around five times the expected concentration of deuterons. The implication is that something in the universe — some event or some process — is producing high-energy deuterons, according to the MIT team’s paper.

Before coming to this conclusion, the researchers considered and eliminated some alternative explanations. Chief among them concerns how scientists currently think deuterons become cosmic rays. When primary cosmic rays produced by some process in outer space smash into matter, they produce a shower of energetic particles called secondary cosmic rays. Thus far, scientists have considered deuterons to be secondary cosmic rays, produced when helium-4 ions smash into atoms in the interstellar medium (the space between stars).

This event also produces helium-3 ions. So if the deuteron flux in cosmic rays is high, and if we believe more helium-4 ions are smashing into the interstellar medium than expected, the AMS should have detected more helium-3 cosmic rays than expected as well. It didn’t.

To make sure, the researchers also checked the AMS’s instruments and the shared properties of the cosmic-ray particles. Two in particular are time and rigidity. Time deals with how the flux of deuterons changes with respect to the flux of other cosmic-ray particles, especially protons and helium-4 ions. Rigidity is a particle’s momentum per unit charge: it measures how strongly the particle resists being deflected by magnetic fields, such as the Sun’s, on its way to Earth. (Equally rigid particles behave the same way in a magnetic field.) When denoted in volts, rigidity indicates the extent of deflection the particle will experience.
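
For reference, rigidity has a simple definition: a particle’s momentum multiplied by the speed of light and divided by its charge, R = pc/(Ze), which is why it is expressed in volts. Here is a minimal, illustrative sketch (my own, not from the AMS paper) of how the same momentum translates into different rigidities for differently charged particles:

```python
# Illustrative sketch (not from the AMS analysis): rigidity R = p*c / (Z*e),
# i.e. momentum per unit charge. With p*c expressed in GeV, R comes out in
# gigavolts (GV) as simply pc divided by the charge number Z.

def rigidity_gv(pc_gev, z):
    """Rigidity in gigavolts for a particle with momentum*c of `pc_gev` GeV and charge number `z`."""
    return pc_gev / z

# A proton (Z = 1) and a helium-4 nucleus (Z = 2) carrying the same momentum:
print(rigidity_gv(10.0, 1))  # 10.0 GV -- higher rigidity, deflected less by magnetic fields
print(rigidity_gv(10.0, 2))  # 5.0 GV  -- lower rigidity, deflected more by the same field
```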

The researchers analysed deuterons with rigidity from 1.9 billion to 21 billion V and found that “over the entire rigidity range the deuteron flux exhibits nearly identical time variations with the proton, 3-He, and 4-He fluxes.” At rigidity greater than 4.5 billion V, the fluxes of deuterons and helium-4 ions varied together whereas those of helium-3 and helium-4 didn’t. At rigidity beyond 13 billion V, “the rigidity dependence of the D and p fluxes [was] nearly identical”.

Similarly, they found the change in the deuteron flux was greater than the change in the helium-3 flux, both relative to the helium-4 flux. The statistical significance of this conclusion far exceeded the threshold particle physicists use to check whether an anomaly in the data is genuinely real rather than the result of a fluke. Finally, “independent analyses were performed on the same data sample by four independent study groups,” the paper added. “The results of these analyses are consistent with this Letter.”

The MIT team ultimately couldn’t find a credible alternative explanation, leaving their conclusion: deuterons could be primary cosmic rays, and we don’t (yet) know the process that could be producing them.

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that what they produce I take only with the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to comprehend superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics into a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges when we think we know something while in fact being in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, when we root for science we are typically trusting scientists, instead of presuming to know, or actually knowing, enough to vouch for their work ourselves. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, superconductivity in stacked graphene layers or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being a fluke is 0.27%, it’s good enough to count as evidence; if the chance of X being a fluke is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of a fluke will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
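
To make those two thresholds concrete, here is a small sketch (my own, using the two-sided tail of a normal distribution, which is what the percentages above correspond to) that converts standard deviations into fluke probabilities:

```python
# Small illustrative sketch: converting a significance level in standard
# deviations (sigma) into the probability of the result being a fluke,
# using the two-sided tail of the normal distribution.
from scipy.stats import norm

for sigma in (3, 5):
    p = 2 * norm.sf(sigma)  # two-sided tail probability
    print(f"{sigma} sigma: {p:.2e} ({p * 100:.5f}%)")

# Prints approximately:
# 3 sigma: 2.70e-03 (0.26998%) -- the 'evidence' threshold (~0.27%)
# 5 sigma: 5.73e-07 (0.00006%) -- the 'discovery' threshold
```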

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not obviously dubious at first glance, not least because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either because they insist on a measure of certainty in the results that neither exists nor is achievable, or because they make pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory also. Example: I’m not fluent with the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English and mathematics (the language of modern science), beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

The not-so-obvious obvious

If your job requires you to pore through a dozen or two scientific papers every month – as mine does – you’ll start to notice a few every now and then couching a somewhat well-known fact in study-speak. I don’t mean scientific-speak, largely because there’s nothing wrong about trying to understand natural phenomena in the formalised language of science. However, there seems to be something iffy – often with humorous effect – about a statement like the following: “cutting emissions of ozone-forming gases offers a ‘unique opportunity’ to create a ‘natural climate solution'”1 (source). Well… d’uh. This is study-speak – to rephrase mostly self-evident knowledge or truisms in unnecessarily formalised language, not infrequently in the style employed in research papers, without adding any new information but often including an element of doubt when there is likely to be none.

1. Caveat: These words were copied from a press release, so this could have been a case of the person composing the release being unaware of the study’s real significance. However, the words within single-quotes are copied from the corresponding paper itself. And this said, there have been some truly hilarious efforts to make sense of the obvious. For examples, consider many of the winners of the Ig Nobel Prizes.

Of course, it always pays to be cautious, but where do you draw the line between a result that merely restates the obvious and one that is required to initiate a new course of action? For example, the Univ. of Exeter study, the press release accompanying which discussed the effect of “ozone-forming gases” on the climate, recommends cutting emissions of substances that combine in the lower atmosphere to form ozone, a form of oxygen that is harmful to both humans and plants. But this is as non-“unique” an idea as the corresponding solution that arises (of letting plants live better) is “natural”.

However, it’s possible the study’s authors needed to quantify these emissions to understand the extent to which ambient ozone concentration interferes with our climatic goals, and to use their data to inform the design and implementation of corresponding interventions. Such outcomes aren’t always obvious but they are there – often because the necessarily incremental nature of most scientific research can cut both ways. The pursuit of the obvious isn’t always as straightforward as one might believe.

The Univ. of Exeter group may have accumulated sufficient and sufficiently significant evidence to support their conclusion, allowing themselves as well as others to build towards newer, and hopefully more novel, ideas. A ladder must have rungs at the bottom irrespective of how tall it is. But when the incremental sword cuts the other way, often due to perverse incentives that require scientists to publish as many papers as possible to secure professional success, things can get pretty nasty.

For example, the Cornell University consumer behaviour researcher Brian Wansink was known to advise his students to “slice” the data obtained from a few experiments in as many different ways as possible in search of interesting patterns. Many of the papers he published were later found to contain numerous irreproducible conclusions – i.e. Wansink had searched so hard for patterns that he’d found quite a few even when they really weren’t there. As the British economist Ronald Coase said, “If you torture the data long enough, it will confess to anything.”

The dark side of incremental research, and the virtue of incremental research done right, both stem from the fact that it’s not evident how difficult it is to ascertain the truth of a finding when its strength is expected to be either so small that it really tests the notion of significance, or so large – or so pronounced – that it transcends intuitive comprehension.

For an example of the former: among particle physicists, a result qualifies as ‘fact’ if the chances of it being a fluke are 1 in 3.5 million. So the Large Hadron Collider (LHC), which was built to discover the Higgs boson, had to have performed at least 3.5 million proton-proton collisions capable of producing a Higgs boson – collisions its detectors could observe and its computers could analyse – to attain this significance.

But while protons are available abundantly and the LHC can smash hundreds of millions of them together every second, imagine undertaking an experiment that requires human participants to perform actions according to certain protocols. It’s never going to be possible to enrol billions of them for millions of hours to arrive at a rock-solid result. In such cases, researchers design experiments based on very specific questions, such that the experimental protocols suppress, or even eliminate, interference, sources of doubt and confounding variables, and accentuate the effects of whatever action, decision or influence is being evaluated.

Such experiments often also require the use of sophisticated – but nonetheless well-understood – statistical methods to further eliminate the effects of undesirable phenomena from the data and, to the extent possible, leave behind information of good-enough quality to support or reject the hypotheses. In the course of navigating this winding path from observation to discovery, researchers are susceptible to, say, misapplying a technique, overlooking a confounder or – like Wansink – overanalysing the data so much that a weak effect masquerades as a strong one but only because it’s been submerged in a sea of even weaker effects.

Similar problems arise in experiments that require the use of models based on very large datasets, where researchers need to determine the relative contribution of each of thousands of causes on a given effect. The Univ. of Exeter study that determined ozone concentration in the lower atmosphere due to surface sources of different gases contains an example. The authors write in their paper (emphasis added):

We have provided the first assessment of the quantitative benefits to global and regional land ecosystem health from halving air pollutant emissions in the major source sectors. … Future large-scale changes in land cover [such as] conversion of forests to crops and/or afforestation, would alter the results. While we provide an evaluation of uncertainty based on the low and high ozone sensitivity parameters, there are several other uncertainties in the ozone damage model when applied at large-scale. More observations across a wider range of ozone concentrations and plant species are needed to improve the robustness of the results.

In effect, their data could be modified in future to reflect new information and/or methods, but in the meantime, and far from being a silly attempt at translating a claim into jargon-laden language, the study eliminates doubt to the extent possible with existing data and modelling techniques to ascertain something. And even in cases where this something is well known or already well understood, the validation of its existence could also serve to validate the methods the researchers employed to (re)discover it and – as mentioned before – generate data that is more likely to motivate political action than, say, demands from non-experts.

In fact, the American mathematician Marc Abrahams, known much more for founding and awarding the Ig Nobel Prizes, identified this purpose of research as one of three possible reasons why people might try to “quantify the obvious” (source). The other two are being unaware of the obvious and, of course, to disprove the obvious.

The nomenclature of uncertainty

The headline of a Nature article published on December 9 reads ‘LIGO black hole echoes hint at general relativity breakdown’. The article is about three scientists’ prediction that, should LIGO find ‘echoes’ of gravitational waves coming from black-hole mergers, it could be a sign of quantum-gravity forces at play.

It’s an exciting development because it presents a simple and currently accessible way of probing the universe for signs of phenomena that show a way to unite quantum physics and general relativity – phenomena that have been traditionally understood to be outside the reach of human experiments until LIGO.

The details of the pre-print paper the three scientists uploaded on arXiv were covered by a number of outlets, including The Wire. And The Wire‘s and Forbes‘s headlines were both questions: ‘Has LIGO already discovered evidence for quantum gravity?’ and ‘Has LIGO actually proved Einstein wrong – and found signs of quantum gravity?’, respectively. Other headlines include:

  • Gravitational wave echoes might have just caused Einstein’s general theory of relativity to break down – IB Times
  • A new discovery is challenging Einstein’s theory of relativity – Futurism
  • Echoes in gravitational waves hint at a breakdown of Einstein’s general relativity – Science Alert
  • Einstein’s theory of relativity is 100 years old, but may not last – Inverse

The headlines are relevant because, though the body of a piece has the space to craft what nuance it needs to present the peg, the headline must cut to it as quickly and crisply as possible – while also catching the eye of a potential reader on social media, an arena where all readers are being inundated with headlines vying for their attention.

For example, with the quantum gravity pre-print paper, the headline has two specific responsibilities:

  1. To be cognisant of the fact that scientists have found gravitational-wave echoes in LIGO data only at the 2.9-sigma level of statistical significance. Note that 2.9 sigma is evidently short of the threshold at which some data counts as scientific evidence (and well short of that at which it counts as scientific fact – at least in high-energy physics). Nonetheless, it still corresponds to only a 1-in-270 chance of being a fluke, which leaves room for, as I’ve become fond of saying, an exciting thesis.
  2. To make reading the article (which follows from the headline) seem like it might be time well spent. This isn’t exactly the same as catching a reader’s attention; instead, it comprises catching one’s attention and subsequently holding and justifying it continuously. In other words, the headline shouldn’t mislead, misguide or misinform, as well as remain constantly faithful to the excitement it harbours.

Now, the thing about covering scientific developments from around the world and then comparing one’s coverage to those from Europe or the USA is that, for publications in those countries, what an Indian writer might see as an international development is in fact a domestic development. So Nature, Scientific American, Forbes, Futurism, etc. are effectively touting local accomplishments that are immediately relevant to their readers. The Wire, on the other hand, has to bank on the ‘universal’ aspect and by extension on themes of global awareness, history and the potential internationality of Big Science.

This is why a reference to Einstein in the headline helps: everyone knows him. More importantly, everyone was recently made aware of how right his theories have been since they were formulated a century ago. So the idea of proving Einstein wrong – as The Wire‘s headline read – is eye-catching. Second, phrasing the headline as a question is a matter of convenience: because the quasi-discovery has a statistical significance of only 2.9 sigma, a question signals doubt.

But if you argued that a question is also a cop-out, I’d agree. A question in a headline can be interpreted in two ways: either as a question that has not been answered yet but ought to be or as a question that is answered in the body. More often than not and especially in the click-bait era, question-headlines are understood to be of the latter kind. This is why I changed The Wire copy’s headline from ‘What if LIGO actually proved Einstein wrong…’ to ‘Has LIGO actually proved Einstein wrong…’.

More importantly, the question is an escape hatch, at least to me, because it doesn’t accurately reflect the development itself. If one accounts for the fact that the pre-print paper explicitly states that gravitational-wave echoes have been found in LIGO data only at 2.9 sigma, there is no question: LIGO has not proved Einstein wrong, and this is established at the outset.

Rather, the peg in this case is – for example – that physicists have proposed a way to look for evidence of quantum gravity using an experiment that is already running. This then could make for an article about the different kinds of physics that rule at different energy levels in the universe, and what levels of access humanity has to each.

So this story, and many others like it in the past year that all dealt with observations falling short of the evidence threshold but which have been worth writing about simply because of the desperation behind them, have – or could have – prompted science writers to think about the language they use. For example, the operative words/clause in the respective headlines listed above are:

  • Nature – hint
  • IB Times – might have just caused
  • Futurism – challenging
  • Science Alert – hint
  • Inverse – may not

Granted, an informed skepticism is healthy for science, and all science writers must remain as familiar with this notion as with the language of doubt, uncertainty and probability (and wave physics, it seems). But it is still likely the case that writers grappling with high-energy physics have to be more familiar with it than others, dealing as the latest research does with – yes – hope and desperation.

Ultimately, I may not be the perfect judge of what words work best when it comes to the fidelity of syntax to sentiment; that’s why I used a question for a headline in the first place! But I’m very interested in knowing how writers choose and have been choosing their words, if there’s any friction at all (in the larger scheme) between the choice of words and the prevailing sentiments, and the best ways to deal with such situations.

PS: If you’re interested, here’s a piece in which I struggled for a bit to get the words right (and finally had to resort to using single-quotes).

Featured image credit: bongonian/Flickr, CC BY 2.0

Prospects for suspected new fundamental particle improve marginally

This image shows a collision event with a photon pair observed by the CMS detector in proton-collision data collected in 2015 with no magnetic field present. The energy deposits of the two photons are represented by the two large green towers. The mass of the di-photon system is between 700 and 800 GeV. The candidates are consistent with what is expected for prompt isolated photons. Caption & credit © 2016 CERN

On December 15 last year, scientists working with the Large Hadron Collider (LHC) announced that they had found slight whispers of a possible new fundamental particle, and got the entire particle physics community excited. There was good reason: should the particle’s existence be verified, it would give physicists some crucial headway in answering questions about the universe that our current knowledge of physics has been remarkably unable to cope with. And on March 17, members of the teams that made the detection presented more details, as well as some preliminary analyses, at an annual conference in La Thuile, Italy.

The verdict: the case for the hypothesised particle’s existence has become a tad stronger. Physicists still don’t know what it could be, or whether it will reveal itself to have been a fluke measurement once more data trickles in by the summer. At the same time, the bump in the data persists in two sets of measurements, logged by two detectors and at different times. In December, the ATLAS detector had presented a stronger case – i.e., a more reliable measurement – than the CMS detector; at La Thuile on March 17, the CMS team also came through with promising numbers.

Because of the stochastic nature of particle physics, the reliability of results is encapsulated by their statistical significance, denoted by σ (sigma). So 3σ means there is roughly a 1-in-350 chance the measurements are a fluke, and marks the threshold for considering the readings as evidence; 5σ means a 1-in-3.5-million chance of a fluke, and marks the threshold for claiming a discovery. Additionally, the tags ‘local’ and ‘global’ refer to whether the significance is for a bump exactly at 750 GeV or anywhere in the plot at all.
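
The local/global distinction can be made concrete with a rough sketch (my own simplification, not the collaborations’ actual procedure): the more places a bump could have appeared by chance, the likelier it is that a fluke shows up somewhere, which is why the global significance is always lower than the local one.

```python
# Rough illustration (my own simplification, not the experiments' method):
# the 'look-elsewhere effect' behind local vs global significance.
# If a fluke with local p-value p_local could have appeared in any of
# n_looks independent places in the plot, the chance of seeing such a
# fluke *somewhere* is correspondingly larger.
from scipy.stats import norm

def global_p(p_local, n_looks):
    """Probability of at least one fluke as extreme as p_local across n_looks independent looks."""
    return 1 - (1 - p_local) ** n_looks

p_local = 2 * norm.sf(3.9)        # a hypothetical 3.9-sigma local excess (two-sided): ~1e-4
print(global_p(p_local, 80))      # with ~80 independent places to look: ~0.008, i.e. only ~2.7 sigma globally
```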

And right now, particle physicists have this scoreboard, as compiled by Alessandro Strumia, an associate professor of physics at Pisa University, who presented it at the conference:


Pauline Gagnon, a senior research scientist at CERN, explained on her blog, “Two hypotheses were tested, assuming different characteristics for the hypothetical new particle: the ‘spin 0’ case corresponds to a new type of Higgs boson, while ‘spin 2’ denotes a graviton.” A graviton is a speculative particle carrying the force of gravity. The – rather, a – Higgs boson was discovered at the LHC in July 2012 and verified in January 2013. This was during the collider’s first run, when it accelerated two beams of protons to 4 TeV (1,000 GeV = 1 TeV) each and then smashed them together. The second run kicked off, following upgrades to the collider and detectors during 2014, with a beam energy of 6.5 TeV.

Although none of the significances are as good as they’d have to be for there to be a new ‘champagne bottle boson’ moment (alternatively: another summertime hit), it’s encouraging that the data behind them has shown up over multiple data-taking periods and isn’t failing repeated scrutiny. More presentations by physicists from ATLAS and CMS at the conference, which concludes on March 19, are expected to provide clues about other anomalous bumps in the data that could be related to the one at 750 GeV. If theoretical physicists have such connections to make, their ability to zero in on what could be producing the excess photons becomes much better.

But even more than new analyses gleaned from old data, physicists will be looking forward to the LHC waking up from its siesta in the first week of May, and producing results that could become available as early as June. Should the data still continue to hold up – and the 5σ local significance barrier be breached – then physicists will have just what they need to start a new chapter in the study of fundamental physics just as the previous one was closed by the Higgs boson’s discovery in 2012.

For reasons both technical and otherwise, such a chapter has its work already cut out. The Standard Model of particle physics, a theory unifying the behaviours of different species of particles and which requires the Higgs boson’s existence, is flawed despite its many successes. Therefore, physicists have been, and are, looking for ways to ‘break’ the model by finding something it doesn’t have room for. Both the graviton and another Higgs boson are such things although there are other contenders as well.

The Wire
March 19, 2016

 

New LHC data has more of the same but could something be in the offing?

Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

The LHC reopened earlier this year after receiving multiple performance-boosting upgrades over the previous 18 months. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice as high as in its first run. By Albert Einstein’s mass-energy equivalence (E = mc²), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, for comparison, was 3,500 GeV and is now 6,500 GeV.
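
As a quick back-of-the-envelope check of that ‘almost 1 GeV’ figure (my own arithmetic, using standard values of the constants):

```python
# Back-of-the-envelope check: the proton's rest-mass energy via E = m * c^2,
# converted from joules to giga-electron-volts.
m_proton = 1.6726e-27    # proton mass, kg
c = 2.9979e8             # speed of light, m/s
eV = 1.6022e-19          # joules per electron-volt

E_joules = m_proton * c**2
E_gev = E_joules / eV / 1e9
print(round(E_gev, 3))   # ~0.938 GeV, i.e. "almost 1 GeV"
```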

At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

This is CMS’s plot showing the same result:

Source: CMS/CERN

When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of jets produced by quarks, antiquarks or gluons in the proton-proton collisions (these are in fact the smaller particles that make up protons).

The sequence of black dots in the ATLAS plot shows the number of events (i.e. jet pairs) observed at different energies. The red line shows the predicted number of events. They both match, which is good… to some extent.

One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

Maintenance work underway at the CMS detector, the largest of the five that straddle the LHC. Credit: CERN

The annoying problem at its heart

The LHC chips in by providing two kinds of opportunities: extremely sensitive particle-detectors that can provide precise measurements of fleeting readings, and extremely high collision energies so physicists can explore how some particles behave in thousands of scenarios, in search of a surprising result.

So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It’s small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

But this is the most important result (an extension of the first): the predicted event-count and the observed event-count are a match all the way up to 6,000 GeV. In other words: physicists are seeing no cause for joy, and all cause for revalidating a section of the Standard Model, in a wide swath of scenarios.

The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

… from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD’s prediction is this: the physics we are currently probing is essentially scale-invariant.

Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.

Something in the valley

The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that will be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:

 

Source: ATLAS/CERN

That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it’s more of a smear than a spike. For heavier particles, that smear is going to be wider, with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing in the lower-energy LHC data. But no such tails have been found, ruling out heavier stuff.

And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

Source: arXiv:1512.03371v1

 

The ones in the middle and on the right are particularly relevant. They each show the probability of the occurrence of an event (observed as a bump in the data, not shown here) in which a heavier particle of some mass decays into two different final states: a W and a Z boson (WZ), and two Z bosons (ZZ). Bosons are a type of fundamental particle; some of them carry forces.

The middle chart implies that the mysterious event is at least 1,000 times less likely to occur than normal, and the one on the right implies the event is at least 10,000 times less likely to occur than normal. And both readings are at more than 3σ significance, so people are excited.

The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

There’s a seminar due to happen today at the LHC Physics Centre at CERN where data from the upgraded run is due to be presented. If something really did happen in those ‘valleys’, which were filtered out of a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons), then those events would’ve happened in larger quantities during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.