‘Animals use physics’

What came first: physics or the world? Obviously the world; physics (as a branch of science) came later, offering ways to acquire information about the world and to organise it. To not understand something in this paradigm, then, is to not understand the world in terms of physics. While this is straightforward, some narratives lead to confusion.

For example, consider the statement “animals use physics” (where “animals” excludes humans). Do they? Fundamentally, animals can’t use physics because their brains aren’t equipped to. Nor do they use physics in practice: they’re navigating the world, not physics and its impositions on the human perception of the world.

On July 10, Knowable published an article describing just such a scenario. The article actually uses both narratives — of humans using physics and animals using physics — and they’re often hard to pry apart, but sometimes the latter makes its presence felt. Example:

“Evolution has provided animals with movement skills adapted to the existing environment without any need for an instruction manual. But altering the environment to an animal’s benefit requires more sophisticated physics savvy. From ants and wasps to badgers and beavers, various animals have learned how to construct nests, shelters and other structures for protection from environmental threats.”

An illustration follows of a prairie dog burrow that accelerates the flow of wind and enhances ventilation; its caption reads: “Prairie dogs dig burrows with multiple entrances at different elevations, an architecture that relies on the laws of physics to create airflow through the chamber and provide proper ventilation.”

Their architecture doesn’t rely on the laws of physics. Rather, we’ve physics-fied the prairie dogs’ empirical senses and the lessons they’ve learnt in their communities, and so we see physics in the world when in fact it isn’t there. What’s there instead is evidence of the prairie dogs’ ability to build these tunnels and exploit certain facts of nature, knowledge they’ve acquired with experience.

The rest of the article is actually pretty good, exploring animal behaviour that “depends in some way on the restrictions imposed, and opportunities permitted, by physics”. Also, what’s the harm, you ask, in saying “animals use physics”? I’ve no idea. But I think it matters that we describe things as they are rather than as they could be.

The problem with a new, rapid way to recycle textiles

Researchers from the University of Delaware have developed a chemical reaction that can break the polyester in clothing down to a simpler compound, which can then be used to make more clothes. The reaction also spares cotton and nylon, allowing them to be recovered separately from clothing that uses a mix of fibres. Most of all, given sufficient resources, the reaction reportedly takes only 15 minutes from start to finish. The researchers have touted this as a significant achievement (I believe the prevailing duration for comparable chemical material-recovery processes in the textile industry is on the order of days) and have said they hope to bring it down to a matter of seconds.

The team’s paper and its coverage in the popular press also advance the narrative that the finding could be a boon for the textile industry’s monumental waste problem, especially in economically developing and developed regions. This is obviously the textile industry’s analogue of carbon capture and storage (CCS) technologies, whereby certain technical machinations remove carbon from the atmosphere and other natural reservoirs and sequester it in human-made matrices for decades or even centuries. The problem with CCS is also the problem with the chemical recycling process described in the new study: unless the state institutes policies, and helps effect cultural changes in parallel, that discourage consumption, encourage reuse, and lower emissions, removing contaminants from the environment will only create the impression that there is now more room to pollute, and the total effective carbon pollution will increase. This is not unlike trying to reduce motor vehicle traffic by building more roads: cities simply acquire more vehicles with which to fill the newly available motorway space.

All this said, however, there is one more thing to be concerned about vis-à-vis the 15-minute chemical recovery technique. In their paper, the researchers described a “techno-economic assessment” they undertook to understand the “economic feasibility” of their proposed solution to the textile waste problem. Their analysis flowchart is shown below, based on a “textile feed throughput of 500 kg/hour”. A separate table (available here) specifies the estimated market value of textile components — polyester, nylon, cotton, and 4,4′-methylenedianiline (MDA) — after they have been recovered from the 15-minute reaction’s output and processed a bit. They found their process to be economically feasible, achieving a profitability index of 1.29, where 1 is the break-even point, when the resulting product sales amount to $148.7 million. I don’t know where the latter figure comes from; if it doesn’t have a sound basis and is arbitrary, the ‘1.29’ figure would be arbitrary too. The same goes for their ‘low sales’ scenario, in which the profitability index is 0.95 if sales amount to $85.3 million.

Techno-economic analysis of the proposed process.

Source: DOI: 10.1126/sciadv.ado6827
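
As an aside: a profitability index is conventionally the present value of a project’s future cash flows divided by its upfront investment, with 1 marking break-even. Here’s a minimal sketch of that calculation, with made-up numbers that have nothing to do with the paper’s:

```python
# A minimal sketch of how a profitability index (PI) is typically computed:
# PI = (present value of future cash flows) / (initial investment), with
# PI > 1 meaning the project clears break-even. All numbers below are
# invented for illustration; none of them come from the paper.

def present_value(cash_flows, discount_rate):
    """Discount a series of annual cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** (year + 1)
               for year, cf in enumerate(cash_flows))

def profitability_index(investment, cash_flows, discount_rate=0.10):
    return present_value(cash_flows, discount_rate) / investment

# Hypothetical plant: $100M upfront, $30M net cash flow a year for 7 years.
pi = profitability_index(100e6, [30e6] * 7)
print(f"Profitability index: {pi:.2f}")  # ~1.46 in this made-up scenario
```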

First, and importantly, all these numbers presume demand for recycled clothes, which I assume is far more limited (based on my experiences in India) than the demand for new clothes. In fact the researchers’ paper begins by blaming fast fashion for the “rising demand for textiles and [their] shorter life span compared to a generation ago”. Fast fashion is a volume business predicated among other things on lower costs. (Did you hear about the mountain of clothes that went up in flames in the middle of the Atacama desert in 2022 because it was cheaper to let them go that way?) Should fast-fashion’s practices be accounted for in the techno-economic assessment, I doubt its numbers would still stand. They certainly won’t if the process is implemented in the poorer countries to which richer ones have been exporting both textile manufacturing and disposal. Second, the profitability indices presume continuing, if not increasing, demand for new clothes, which is of course deeply problematic: demand untethered from its socio-economic consequences is what landed us in the present soup. That it should stay this way or further increase in order to sustain a process that “holds the potential to achieve a global textile circularity rate of 88%” is a precarious proposition because it risks erecting demand as the raison d’être of sustainability.

Finally, militating against solutions like CCS and this chemical recovery technique because they aren’t going to be implemented within the right policy and socio-cultural frameworks is reasonable even if the underlying technologies have matured completely (they haven’t in this case but let’s set that aside). On the flip side, we need to push governments to design and implement the frameworks asap rather than delay or deny the use of these technologies altogether. The pressures of climate change have shortened deadlines and incentivised speed. Yet business people and industrialists have imported far too many such solutions into India, where their purported benefits have seldom come to fruition — especially in their intended form — even as they have had toxic consequences for the people depending on these industries for their livelihoods, for the people living around these facilities, and, importantly, for people involved in parts of the value chain that come into view only when we account for externalised costs. A few illustrative examples are sewage treatment plants, nuclear reactors, hazardous waste management, and various ore-refining techniques.

In all, making the climate transition at the expense of climate justice is a fundamentally stupid strategy.

Featured image: People sort through hundreds of tonnes of clothing in an abandoned factory in Phnom Penh, November 22, 2020. Credit: Francois Le Nguyen/Unsplash.

Clocks on the cusp of a nuclear age

You need three things to build a clock: an energy source, a resonator, and a counter. In an analog wrist watch, for example, the battery is the energy source: it sends a small electric signal to a quartz crystal, which, in response, oscillates at a specific frequency (the piezoelectric effect). If the amount of energy in each signal is enough to cause the crystal to oscillate at its resonant frequency, the crystal becomes the resonator. The counter tracks the crystal’s oscillation and converts it to seconds using predetermined rules.

Notice how the clock’s proper function depends on the relationship between the battery and the quartz crystal and the crystal’s response. The signals from the battery have to have the right amount of energy to excite the crystal to its resonant frequency and the crystal’s oscillation in response has to happen at a fixed frequency as long as it receives those signals. To make better clocks, physicists have been able to fine-tune these two parameters to an extreme degree.

Today, as a result, we have clocks that don’t lose more than one second of time every 30 billion years. These are the optical atomic clocks: the energy source is a laser, the resonator is an atom, and the counter is a particle detector.

An atomic clock’s identity depends on its resonator. For example, many of the world’s countries use caesium atomic clocks to define their respective national “frequency standards”. (One such clock at the National Physical Laboratory in New Delhi maintains Indian Standard Time.) A finely tuned microwave signal imparts a precise amount of energy to excite a caesium-133 atom to a particular higher energy state. The atom soon after drops from this state back to its ground state by emitting microwave radiation of frequency exactly 9,192,631,770 Hz. When a particle detector receives this radiation and counts out 9,192,631,770 waves, it will report that one second has passed.
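
The counter’s job is pure bookkeeping, which a toy sketch can make concrete; the detector and signal chain are idealised away here, leaving just the conversion rule:

```python
# Toy sketch of an atomic clock's counter: count incoming wave crests and
# convert the tally to elapsed time using the defined transition frequency.

CS133_TRANSITION_HZ = 9_192_631_770  # defines the SI second

def elapsed_seconds(wave_crests_counted: int) -> float:
    return wave_crests_counted / CS133_TRANSITION_HZ

print(elapsed_seconds(9_192_631_770))   # 1.0 second
print(elapsed_seconds(27_577_895_310))  # 3.0 seconds
```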

Caesium atomic clocks are highly stable, losing no more than a second in 20 million years. In fact, scientists used to define a second in terms of the time Earth took to orbit the Sun once; they switched to the caesium atomic clock because “it was more stable than Earth’s orbit” (source).

But there is also room for improvement. The higher the frequency of the emitted radiation, the more stable an atomic clock will be. The emission of a caesium atomic clock has a frequency of 9.19 GHz whereas that of a strontium clock is 429.22 THz and that of a ytterbium-ion clock is 642.12 THz — in both cases nearly five orders of magnitude higher. (9.19 GHz is in the microwave frequency range whereas the other two are in the optical range, thus the name “optical” atomic clock.)

Optical atomic clocks also have a narrower linewidth, which is the range of frequencies that can prompt the atom to jump to the higher energy level: the narrower the linewidth, the more precisely the jump can be orchestrated. So physicists today are trying to build and perfect the next generation of atomic clocks with these resonators. Some researchers have said they could replace the caesium frequency standard later this decade.

Yet other physicists have already developed an idea for the subsequent generation of clocks, which are expected to be at least 10 times more accurate than optical atomic clocks. Enter: the nuclear clock.

When an atom, like that of caesium, jumps between two energy states, the particles gaining and losing the energy are the atom’s electrons. These electrons are arranged in energy shells surrounding the nucleus and interact with the external environment. For a September 2020 article in The Wire Science, Umakant Rapol, an associate professor at IISER Pune and a member of a team building India’s first strontium atomic clock, said the resonator needs to be “immune to stray magnetic fields, electric fields, the temperature of the background, etc.” Optical atomic clocks achieve this by, say, isolating the resonator atoms within oscillating electric fields. A nuclear clock promises to sidestep this problem by using an atom’s nucleus as the resonator instead.

Unlike the electrons, the nucleus of an atom is safely ensconced further in, and it is also very small: its radius is roughly a hundred-thousandth of the atom’s, so it occupies a vanishingly small fraction of the atom’s volume. The trick here is to find an atomic nucleus that’s stable and whose resonant frequency is accessible with a laser.

In 1976, physicists studying the decay of uranium-233 nuclei reported some properties of the thorium-229 nucleus, including estimating that the lowest higher-energy level to which it could jump required less than 100 eV of energy. Another study in 1990 estimated the requirement to be under 10 eV. In 1994, two physicists estimated it to be around 3.5 eV. The higher energy state of a nucleus is called its isomer and is denoted with the suffix ‘m’. For example, the isomer of the thorium-229 nucleus is denoted thorium-229m.

After a 2005 study further refined the energy requirement to 5.5 eV, a 2007 study provided a major breakthrough. With help from state-of-the-art instruments at NASA, researchers in the US worked out the thorium-229 to thorium-229m jump required 7.6 eV. This was significant. Energy is related to frequency by the Planck equation: E = hf, where h is Planck’s constant. To deliver 3.5 eV of energy, then, a laser would have to operate in the optical or near-ultraviolet range. But if the demand was 7.6 eV, the laser would have to operate in the vacuum ultraviolet range.
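
To see what these numbers mean for the laser, here’s a small sketch that plugs transition energies into the Planck equation and converts the result to a wavelength; the constants are standard CODATA values:

```python
# E = h*f gives the frequency a laser must supply for a given transition
# energy; wavelength = c/f maps that frequency to a spectral range.

H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def laser_for(energy_ev):
    f = energy_ev * EV / H   # Planck equation: f = E/h
    return f, C / f * 1e9    # frequency in Hz, wavelength in nm

for e in (3.5, 7.6):
    f, wl = laser_for(e)
    print(f"{e} eV -> {f / 1e12:.0f} THz, {wl:.0f} nm")
# 3.5 eV -> ~846 THz, ~354 nm (near-ultraviolet)
# 7.6 eV -> ~1838 THz, ~163 nm (vacuum ultraviolet)
```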

Further refinement by more researchers followed but they were limited by one issue: since they still didn’t have a sufficiently precise value of the isomeric energy, they couldn’t use lasers to excite the thorium-229 nucleus and find out. Instead, they examined thorium-229m nuclei formed by the decay of other elements. So when on April 29 this year a team of researchers from Germany and Austria finally reported using a laser to excite thorium-229 nuclei to the thorium-229m state, their findings sent frissons of excitement through the community of clock-makers.

The researchers’ setup had two parts. In the first, they drew inspiration from an idea a different group had proposed in 2010: to study thorium-229 by placing these atoms inside a larger crystal. The European group grew two calcium fluoride (CaF2) crystals in the lab, heavily doped with thorium-229 atoms at two different concentrations. In a study published a year earlier, different researchers had reported observing for the first time thorium-229m decaying back to its ground state while within calcium fluoride and magnesium fluoride (MgF2) crystals. Ahead of the test, the European team cooled the crystals to under -93° C in a vacuum.

In the second part, the researchers built a laser with output in the vacuum ultraviolet range, corresponding to a wavelength of around 148 nm, for which off-the-shelf options don’t exist at the moment. They achieved the output instead by remixing the outputs of multiple lasers.

The researchers conducted 20 experiments: in each one, they increased the laser’s wavelength from 148.2 nm to 150.3 nm in 50 equally spaced steps. They also maintained a control crystal doped with thorium-232 atoms. Based on these attempts, they reported their laser elicited a distinct emission from the two test crystals when the laser’s wavelength was 148.3821 nm. The same wavelength when aimed at the CaF2 crystal doped with thorium-232 didn’t elicit an emission. This in turn implied an isomeric transition energy requirement of 8.35574 eV. The researchers also worked out based on these details that a thorium-229m nucleus would have a half-life of around 29 minutes in vacuum — meaning it is quite stable.

Physicists finally had their long-sought prize: the information required to build a nuclear clock by taking advantage of the thorium-229m isomer. In this setup, then, the energy source could be a laser of wavelength 148.3821 nm; the resonator could be thorium-229 atoms; and the counter could look out for emissions of frequency 2,020 THz (plugging 8.355 eV into the Planck equation).

Other researchers have already started building on this work as part of the necessary refinement process and have generated useful insights as well. For example, on July 2, University of California, Los Angeles, researchers reported the results of a similar experiment using lithium strontium hexafluoroaluminate (LiSrAlF6) crystals, including a more precise estimate of the isomeric energy gap: 8.355733 eV.

About a week earlier, on June 26, a team from Austria, Germany, and the US reported using a frequency comb to link the frequency of emissions from thorium-229 nuclei to that from a strontium resonator in an optical atomic clock at the University of Colorado. A frequency comb is a laser whose output is in multiple, evenly spaced frequencies. It works like a gear train, translating a higher frequency, such as that of the lasers in a nuclear or an optical atomic clock, to a lower, more easily counted one. Linking the clocks up in this way allows physicists to understand properties of the thorium clock in terms of the better-understood properties of the strontium clock.
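
Here’s a schematic sketch of that ‘gearing’, with made-up comb parameters rather than the experiment’s actual values. Every tooth of a comb sits at f(n) = f_ceo + n × f_rep, where the repetition rate f_rep and the offset f_ceo are radio frequencies slow enough for electronics to count:

```python
# Schematic frequency comb: tooth n sits at f_ceo + n * f_rep. Finding the
# tooth nearest an optical frequency expresses it in terms of two countable
# radio frequencies plus a small, also countable, beat note. The numbers
# below are illustrative, not those of the actual experiment.

F_REP = 1e9   # repetition rate: 1 GHz
F_CEO = 35e6  # carrier-envelope offset: 35 MHz

def nearest_tooth(f_optical):
    n = round((f_optical - F_CEO) / F_REP)
    return n, F_CEO + n * F_REP

f_sr = 429.22e12  # strontium clock transition, Hz (rounded)
n, f_tooth = nearest_tooth(f_sr)
beat = abs(f_sr - f_tooth)
print(f"tooth #{n} at {f_tooth / 1e12:.6f} THz; beat note: {beat / 1e6:.0f} MHz")
```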

Atomic clocks moving into the era of nuclear resonators isn’t just one more step up on the Himalayan mountain of precision timekeeping. Because nuclear clocks depend on how well we’re able to exploit the properties of atomic nuclei, they also create a powerful incentive and valuable opportunities to probe nuclear properties.

In a 2006 paper, a physicist named VV Flambaum suggested that if the values of the fine structure constant and/or the strong interaction parameter change even a little, their effects on the thorium-229 isomeric transition would be very pronounced. The fine structure constant is a fundamental constant that specifies the strength of the electromagnetic force between charged particles. The strong interaction parameter specifies this vis-à-vis the strong nuclear force, the strongest force in nature and the thing that holds protons and neutrons together in a nucleus.

Probing the ‘stability’ of these numbers in this way opens the door to new kinds of experiments to answer open questions in particle physics — helped along by physicists’ pursuit of a new nuclear frequency standard.

Featured image: A view of an ytterbium atomic clock at the US NIST, October 16, 2014. Credit: N. Phillips/NIST.

Buildings affect winds

A 2022 trip to Dubai made me wonder how much research there was on the effects cities, especially those that are rapidly urbanising and building taller, wider structures packed more closely together, have on the winds that pass through them. I found only a few studies then. One said the world’s average wind speed had been increasing since 2010, but its analysis was concerned with the output of wind turbines, not the consequences within urban settlements. Another had considered planting more trees to reduce the wind speeds that the Venturi effect raises within cities. I also found a New York Times article from 1983 about taller skyscrapers directing high winds downwards, to the streets. That was largely it. Maybe I didn’t look hard enough.

On June 11, researchers in China published a paper in the Journal of Advances in Modelling Earth Systems in which they reported findings based on a wind speed model they’d built for Shanghai. According to the paper, Shanghai’s built-up area could slow wind speed by as much as 50%. However, they added, the urban heat-island effect could enhance “the turbulent exchange in the vertical direction of the urban area, and the upper atmospheric momentum is transported down to the surface, increasing the urban surface wind speed”. If the heat-island effect is sufficiently pronounced, then, the wind may not slow at all. I imagine the finding will be useful for people studying the ability of winds to transport pollutants to, and disperse them in, different areas. I’m also interested in what the model would show for Delhi (which can be hotter), Mumbai (wetter), and Chennai (fewer tall buildings). The relationship between heat islands and wind is also curious because the parts of a city that are windier are also less warm.

But overall, even if the population density within skyscrapers is lower than in non-skyscraping buildings and tenements, allowing them to be built closer together, such clustering, as is normal in cities like Dubai, where these buildings are almost all located in a “business district” or a “financial district”, could also make it harder for the wind to ventilate these spaces.

You’re allowed to be interested in particle physics

This page appeared in The Hindu’s e-paper today.

I wrote the lead article, about why scientists are so interested in an elementary particle called the top quark. Long story short: the top quark is the heaviest elementary particle, and because all elementary particles get their masses by interacting with Higgs bosons, the top quark’s interaction is the strongest. This has piqued physicists’ interest because the Higgs boson’s own mass is peculiar: it’s more than expected and at the same time poised on the brink of a threshold beyond which our universe as we know it wouldn’t exist. To explain this brinkmanship, physicists are intently studying the top quark, including measuring its mass with more and more precision.

It’s all so fascinating. But I’m well aware that not many people are interested in this stuff. I wish they were and my reasons follow.

There exists a sufficiently healthy journalism of particle physics today. Most of it happens in Europe and the US, (i) where famous particle physics experiments are located, (ii) where there already exists an industry of good-quality science journalism, and (iii) where there are countries and/or governments that actually have the human resources, funds, and political will to fund the experiments (in many other places, including India, these resources don’t exist, rendering the matter of people contending with these experiments moot).

In this post, I’m using particle physics both as itself and as a surrogate for other reputedly esoteric fields of study.

This journalism can be divided into three broad types: those with people, those concerned with spin-offs, and those without people. ‘Those with people’ refers to narratives about the theoretical and experimental physicists, engineers, allied staff, and administrators who support work on particle physics, their needs, challenges, and aspirations.

The meaning of ‘those concerned with spin-offs’ is obvious: these articles attempt to justify the money governments spend on particle physics projects by appealing to the technologies scientists develop in the course of particle-physics work. I’ve always found these to be apologist narratives erecting a bad expectation: that we shouldn’t undertake these projects if they don’t also produce valuable spin-off technologies. I suspect most particle physics experiments don’t produce them because they are much smaller than the behemoth Large Hadron Collider and its ilk, which demand innovation across diverse fields.

‘Those without people’ are the rarest of the lot — narratives that focus on some finding or discussion in the particle physics community that is relatively unconcerned with the human experience of the natural universe (setting aside the philosophical point that the non-human details are being recounted by human narrators). These stories are about the material constituents of reality as we know it.

When I say I wish more people were interested in particle physics today, I wish they were interested in all these narratives, yet more so in narratives that aren’t centred on people.

Now, why should they be interested? This is a difficult question to answer.

I’m concerned because I’m fascinated with the things around us we don’t fully understand but are trying to. It’s a way of exploring the unknown, of going on an adventure. There are many, many things in this world that people can be curious about. It’s possible there are more such things than there are people (again, setting aside the philosophical bases of these claims). But particle physics and some other areas — united by the extent to which they are written off as being esoteric — suffer from more than not having their fair share of patrons in the general (non-academic) population. Many people actively shun them, lose focus when reading about them, and at the same time do little to muster focus back. It has even become okay for them to say they understood nothing of some (well-articulated) article and not expect to have their statement judged adversely.

I understand why narratives with people in them are easier to understand and to connect with, but none of the implicated psychological, biological, and anthropological mechanisms encourages us to reject narratives and experiences without people. In other words, there may have been evolutionary advantages to finding out about other people, but no disadvantages attach to engaging with stories that aren’t about other people.

Next, I have met more than my fair share of people who flinch away from the mere suggestion of mathematics or physics, even when someone offers to guide them through these topics. I’m also aware researchers have documented this tendency and are attempting to distil insights that could help improve the teaching and communication of these subjects. Personally, I don’t know how to deal with these people because I don’t know the shape of the barrier in their minds I need to surmount. I may be trying to vault over a high wall by simplifying a concept to its barest features when in fact the barrier is a low-walled labyrinth.

Third and last, let me do unto this post what I’m asking of people everywhere, and look past the people: why should we be interested in particle physics? It has nothing to offer for our day-to-day experiences. Its findings can seem totally self-absorbed, supporting researchers and their careers, helping them win famous but otherwise generally unattainable awards, and sustaining discoveries into which political leaders and government officials occasionally dip their beaks to claim labels like “scientific superpower”. But the mistake here is not the existence of particle physics itself so much as the people-centric lens through which we insist it must be seen. It’s not that we should be interested in particle physics; it’s that we can.

Particle physics exists because some people are interested in it. If you are unhappy that our government spends too much on it, let’s talk about our national R&D expenditure priorities and what the practice, and practitioners, of particle physics can do to support other research pursuits and give back to various constituencies. The pursuit of one’s interests can’t be the problem (within reasonable limits, of course).

More importantly, being interested in particle physics, and in fact many other branches of science, shouldn’t have to be justified at every turn, for three reasons: reality isn’t restricted to people; people are shaped by their realities; and we have a destiny as humans. On the first two counts: when we choose to restrict ourselves to our lives and our welfare, we also choose to never learn about what, say, gravitational waves, dark matter, and nucleosynthesis are (unless these terms turn up in an exam we need to pass). Yet all these things played a part in bringing about the existence of Earth and its suitability for particular forms of life, and among people particular ways of life.

The rocks and metals that gave rise to waves of human civilisation were created in the bellies of stars. We needed to know our own star as well as we do — which still isn’t much — to help build machines that can use its energy to supply electric power. Countries and cultures that support the education and employment of people who made it a point to learn the underlying science thus come out on top. Knowing different things is a way to future-proof ourselves.

Further, climate change is evidence humans are a planetary species, and soon we will be an interplanetary one. Our own migrations will force us to understand, eventually intuitively, the peculiarities of gravity, the vagaries of space, and (what is today called) mathematical physics. But even before such compulsions arise, it remains that what we know is what we needn’t be afraid of, or at least know how to be afraid of. 😀

Just as well, learning, knowing, and understanding the physical universe is the foundation we need to imagine (or reimagine) futures better than the ones ordained for us by our myopic leaders. In this context, I recommend Shreya Dasgupta’s ‘Imagined Tomorrow’ podcast series, where she considers hypothetical future Indias in which medicines are tailor-made for individuals, where antibiotics don’t exist because they’re not required, where clean air is only available to breathe inside city-sized domes, and where courtrooms use AI — and the paths we can take to get there.

Similarly, with particle physics in mind, we could also consider cheap access to quantum computers, lasers that remove infections from flesh and tumours from tissue in a jiffy, and communications satellites that reduce bandwidth costs so much that we can take virtual education, telemedicine, and remote surgeries for granted. I’m not talking about these technologies as spin-offs, to be clear; I mean technologies born of our knowledge of particle (and other) physics.

At the biggest scale, of course, understanding the way nature works is how we can understand the ways in which the universe’s physical reality can or can’t affect us, in turn leading the way to understanding ourselves better and helping us shape more meaningful aspirations for our species. The more well-informed any decision is, the more rational it will be. Granted, the rationality of most of our decisions is currently only tenuously informed by particle physics, but consider if the inverse could be true: what decisions are we not making as well as we could if we cast our epistemic nets wider, including physics, biology, mathematics, etc.?

Consider, even beyond all this, the awe astronauts who have gone to Earth orbit and beyond have reported experiencing when they first saw our planet from space, and the immeasurable loneliness surrounding it. There are problems with pronouncements that we should be united in all our efforts on Earth because, from space, we are all we have (especially when the country to which most of these astronauts belong condones a genocide). Fortunately, that awe is not the preserve of spacefaring astronauts. The moment we understood the laws of physics and the elementary constituents of our universe, we (at least the atheists among us) may have realised there is no centre of the universe. In fact, there is everything except a centre. How grateful I am for that. For added measure, awe is also good for the mind.

It might seem like a terrible cliché to quote Oscar Wilde here — “We are all in the gutter, but some of us are looking at the stars” — but it’s a cliché precisely because we have often wanted to be able to dream, to have the simple act of such dreaming contain all the profundity we know we squander when we live petty, uncurious lives. Then again, space is not simply an escape from the traps of human foibles. Explorations of the great unknown that includes the cosmos, the subatomic realm, quantum phenomena, dark energy, and so on are part of our destiny because they are the least like us. They show us what else is out there, and thus what else is possible.

If you’re not interested in particle physics, that’s fine. But remember that you can be.


Featured image: An example of simulated data as might be observed at a particle detector on the Large Hadron Collider. Here, following a collision of two protons, a Higgs boson is produced that decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector while the energy these particles deposit is shown in blue. Caption and credit: Lucas Taylor/CERN, CC BY-SA 3.0.

A new source of cosmic rays?

The International Space Station carries a suite of instruments conducting scientific experiments and measurements in low-Earth orbit. One of them is the Alpha Magnetic Spectrometer (AMS), which studies antimatter particles in cosmic rays to understand how the universe has evolved since its birth.

Cosmic rays are particles or particle clumps flying through the universe at nearly the speed of light. Since the mid-20th century, scientists have found cosmic-ray particles are emitted during supernovae and in the centres of galaxies that host large black holes. Scientists installed the AMS in May 2011, and by April 2021, it had tracked more than 230 billion cosmic-ray particles.

When scientists from the Massachusetts Institute of Technology (MIT) recently analysed these data — the results of which were published on June 25 — they found something odd. Roughly one in 10,000 of the cosmic-ray particles were neutron-proton pairs, a.k.a. deuterons. The universe has only a small number of these particles, around 0.002% of all atoms, because they were created only in a roughly 10-minute window shortly after the universe was born.

Yet cosmic rays streaming past the AMS seemed to have around 5x greater concentration of deuterons. The implication is that something in the universe — some event or some process — is producing high-energy deuterons, according to the MIT team’s paper.
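
The arithmetic behind that ‘5x’ is easy to check using the rounded figures quoted above:

```python
# Comparing the deuteron fraction AMS sees in cosmic rays against the
# primordial deuteron abundance, using the rounded figures in the text.

primordial_fraction = 0.002 / 100  # ~0.002% of all atoms
cosmic_ray_fraction = 1 / 10_000   # ~1 in 10,000 cosmic-ray particles

print(cosmic_ray_fraction / primordial_fraction)  # ~5.0, the claimed excess
```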

Before coming to this conclusion, the researchers considered and eliminated some alternative explanations. Chief among them concerns how deuterons become cosmic rays in the first place. When primary cosmic rays produced by some process in outer space smash into matter, they produce a shower of energetic particles called secondary cosmic rays. Thus far, scientists have considered deuterons to be secondary cosmic rays, produced when helium-4 ions smash into atoms in the interstellar medium (the space between stars).

This event also produces helium-3 ions. So if the deuteron flux in cosmic rays is high, and if we believe more helium-4 ions are smashing into the interstellar medium than expected, the AMS should have detected more helium-3 cosmic rays than expected as well. It didn’t.

To make sure, the researchers also checked the AMS’s instruments and the shared properties of the cosmic-ray particles. Two in particular are time and rigidity. Time deals with how the flux of deuterons changes with respect to the flux of other cosmic ray particles, especially protons and helium-4 ions. Rigidity measures the likelihood a cosmic-ray particle will reach Earth and not be deflected away by the Sun. (Equally rigid particles behave the same way in a magnetic field.) When denoted in volts, rigidity indicates the extent of deflection the particle will experience.
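
For the curious: rigidity is conventionally a particle’s momentum multiplied by the speed of light and divided by its charge, R = pc/Ze, which is why it comes out in volts. A small sketch of my own (not the paper’s code) to make the idea concrete:

```python
# Rigidity R = p*c / (Z*e). With momentum in GeV/c and charge number Z,
# R in gigavolts reduces to p/Z: equally rigid particles bend identically
# in a magnetic field.

def rigidity_gv(momentum_gev_per_c, charge_z):
    return momentum_gev_per_c / charge_z

# A proton, a deuteron, and a helium-4 ion, all at 10 GeV/c of momentum:
for name, z in [("proton", 1), ("deuteron", 1), ("helium-4", 2)]:
    print(f"{name}: {rigidity_gv(10, z):.1f} GV")
# At the same momentum, the deuteron matches the proton's rigidity while
# helium-4 (with two protons' worth of charge) has half as much.
```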

The researchers analysed deuterons with rigidity from 1.9 billion to 21 billion V and found that “over the entire rigidity range the deuteron flux exhibits nearly identical time variations with the proton, 3-He, and 4-He fluxes.” At rigidity greater than 4.5 billion V, the fluxes of deuterons and helium-4 ions varied together whereas those of helium-3 and helium-4 didn’t. At rigidity beyond 13 billion V, “the rigidity dependence of the D and p fluxes [was] nearly identical”.

Similarly, they found the change in the deuteron flux was greater than the change in the helium-3 flux, both relative to the helium-4 flux. The statistical significance of this conclusion far exceeded the threshold particle physicists use to check whether an anomaly in the data is a genuine effect rather than a fluke. Finally, “independent analyses were performed on the same data sample by four independent study groups,” the paper added. “The results of these analyses are consistent with this Letter.”

The MIT team ultimately couldn’t find a credible alternative explanation, leaving their conclusion: deuterons could be primary cosmic rays, and we don’t (yet) know the process that could be producing them.

Suni Williams and Barry Wilmore are not in danger

NASA said earlier this week it will postpone the return of Boeing’s Starliner crew capsule from the International Space Station (ISS), thus leaving astronauts Barry Wilmore and Sunita Williams onboard the orbiting platform for (at least) two more weeks.

The glitch is part of Starliner’s first crewed flight test, and clearly it’s not going well. But to make matters worse, there seems to be little clarity about the extent to which it’s not going well. There are at least two broad causes. The first is NASA and Boeing themselves. As I set out in The Hindu, Starliner is already severely delayed and has suffered terrible cost overruns since NASA awarded Boeing the contract to build it in 2014. SpaceX has as a result been left to pick up the slack, but while it hasn’t minded, the fact remains that Elon Musk’s company currently monopolises yet another corner of the American launch services market.

Against this backdrop, neither NASA nor Boeing — but NASA especially — has been clear about the reason for Starliner’s extended stay at the ISS. I’m told fluid leaks of the sort Starliner has been experiencing are neither uncommon nor dire, that crewed orbital test flights can present such challenges, and that it’s a matter of time before the astronauts return. However, NASA’s press briefings have featured a different explanation: that Starliner’s stay is being extended on purpose — to test the long-term endurance of its various components and subsystems in orbit ahead of operational flights — echoing something NASA discussed when SpaceX was test-flying its Dragon crew capsule (hat-tip to Jatan Mehta). According to the Des Moines Register, the postponement is to “deconflict” with spacewalks NASA had planned for the astronauts and to give them and their peers already onboard the ISS time to further inspect Starliner’s propulsion module.

This sort of suspiciously ex post facto reasoning has also raised concerns NASA knows something about Starliner but doesn’t plan on revealing what until after the capsule has returned — with the added possibility that it’s shielding Boeing to prevent the US government from cancelling the Starliner contract altogether.

The second broad cause is even more embarrassing: media narratives. On June 24, the Economic Times reported NASA had “let down” and “disappointed” Wilmore and Williams when it postponed Starliner’s return. Newsweek said the astronauts were “stranded” on the ISS, even as a NASA statement further down the article said they weren’t. The Spectator Index tweeted Newsweek’s report without linking to it but with the prefix “BREAKING”. There are many other smaller news outlets and YouTube channels with worse headlines and claims, feeding a general sense of disaster.

However, I’m willing to bet a large sum of money Wilmore and Williams are neither “disappointed” nor feeling “let down” by Starliner’s woes. In fact, NASA and Boeing picked these astronauts over greenhorns because they’re veterans of human spaceflight, aware of, and versed in handling, the uncertainties of humankind’s currently most daunting frontier. Recall also the Progress cargo capsule failure in April 2015, which prompted Russia to postpone a resupply mission scheduled for the next month until it could identify and resolve some problems with the launch vehicle. Roscosmos finally flew the mission in July that year. The delay left astronauts onboard the ISS with dwindling supplies as well as short of a crew of three.

The term “strand” may also have a specific meaning: after the Columbia Space Shuttle disaster in 2003, NASA instituted a protocol in which astronauts onboard faulty crew capsules in space could disembark at the ISS, where they’d be “stranded”, and wait for a separate return mission. By all means, then, if Boeing is ultimately unable to salvage Starliner, the ISS could undock it and NASA could commission SpaceX to fly a rescue mission.

I can’t speak for Wilmore and Williams but I remain deeply sceptical that they’re particularly bummed. Yet Business Today drummed up this gem: “’Nightmare’: Sunita Williams can get lost in space if thrusters of NASA’s Boeing Starliner fail to fire post-ISS undocking”. Let’s be clear: the ISS is in low-Earth orbit. Getting “lost in space” from this particular location is impossible. Starliner won’t undock unless everyone is certain its thrusters will fire, but even if they don’t, atmospheric drag will deorbit the capsule soon after (which is also what happened to the Progress capsule in 2015). And even if it is Business Today’s (wet) “nightmare”, it isn’t Williams’s.

There’s little doubt the world is in the throes of a second space race. The first happened as part of the Cold War and its narratives were the narratives of the contest between the US and the USSR, rife with the imperatives of grandstanding. What are the narratives of the second race? Whatever they are, they matter as much as rogue nations contemplating weapons of mass destruction in Earth orbit do, because narratives are also capable of destruction. They shape the public imagination and consciousness of space missions, the attitudes towards the collaborations that run them, and ultimately what the publics believe they ought to expect from national space programmes and the political and economic value their missions can confer.

Importantly, narratives can cut both ways. For example, for companies like Boeing the public narrative is linked to their reputation, which is linked to the stock market. When BBC says NASA having to use a SpaceX Dragon capsule to return Wilmore and Williams back to Earth “would be hugely embarrassing for Boeing”, the report stands to make millions of dollars disappear from many bank accounts. Of course this isn’t sufficient reason for BBC to withhold its reportage: its claim isn’t sensational and the truth will always be a credible defence against (alleged) defamation. Instead, we should be asking if Boeing and NASA are responding to such pressures if and when they withhold information. It has happened before.

Similarly, opportunist media narratives designed to ‘grab eyeballs’, put together without considering how they will pollute public debate, only vitiate the discourse, raise unmerited suspicions of conspiracies and catastrophe, and sow distrust in sober, non-sensational articles whose authors are the ones labouring to present a more faithful picture.

Featured image: Astronauts Sunita Williams and Barry Wilmore onboard the International Space Station in April 2007 and October 2014, respectively. Credit: NASA.

A gentle push over the cliff

From ‘Rotavirus vaccine: tortured data analyses raise false safety alarm’, The Hindu, June 22, 2024:

Slamming the recently published paper by Dr. Jacob Puliyel from the International Institute of Health Management Research, New Delhi, on rotavirus vaccine safety, microbiologist Dr. Gagandeep Kang says: “If you do 20 different analyses, one of them will appear significant. This is truly cherry picking data, cherry picking analysis, changing the data around, adjusting the data, not using the whole data in order to find something [that shows the vaccine is not safe].” Dr. Kang was the principal investigator of the rotavirus vaccine trials and the corresponding author of the 2020 paper in The New England Journal of Medicine, the data of which was used by Dr. Puliyel for his reanalysis.

This is an important rebuttal. I haven’t seen Puliyel’s study but Bharat Biotech’s conduct during and since the COVID-19 pandemic, especially that of its executive chairman Krishna Ella, plus its attitude towards public scrutiny of its Covaxin vaccine has rendered any criticism of the company or its products very believable, even if such criticism is unwarranted, misguided, or just nonsense.

Puliyel’s study itself is a case in point: a quick search on Twitter reveals many strongly worded tweets, speaking to the existence of a mass of people that wants something to be true and will seize on even feeble evidence at its first appearance. Of course, The Hindu article found the evidence to be not so much feeble as contrived. Bharat Biotech isn’t “hiding” anything; Puliyel et al. aren’t “whistleblowers”.

The article doesn’t mention the name of the journal that published Puliyel’s paper: International Journal of Risk and Safety in Medicine. It could have, because journals that don’t keep bad science out of the medical literature don’t just pollute the literature. By virtue of being journals, and in this case claiming to be peer-reviewed as well, they allow the claims they publish to be amplified by unsuspecting users on social media platforms.

We saw something similar earlier this year in the political sphere when members of the Indian National Congress party and its allies as well as members of civil society cast doubt on electronic voting machines with little evidence, thus only undermining trust in the electoral process.

To be sure, we’ve cried ourselves hoarse about the importance of every reader being sceptical of what appears in scientific journals (even peer-reviewed ones) as much as in news articles, but because this is a behavioural and cultural change, it’s going to take time. Journals need to do their bit, too, yet they won’t because who needs scruples when you can have profits?

The analytical methods Puliyel and his coauthor Brian Hooker reportedly employed in their new study are reminiscent of the work of Brian Wansink, who resigned from Cornell University five years ago this month after it concluded he’d committed scientific misconduct. In 2018, BuzzFeed published a deep-dive by Stephanie M. Lee on how the Wansink scandal was born. It gave the (well-referenced) impression that the scandal was a combination of a student’s relationship with a mentor renowned in her field of work and the mentor’s pursuit of headlines over science done properly. It’s hard to imagine Puliyel and Hooker were facing any kind of coercion, which leaves the headlines.

This isn’t hard to believe considering it’s the second study to have been published recently that took a shot at Bharat Biotech based on shoddy research. It sucks that it’s become so easy to push people over the cliff, and into the ravenous maw of a conspiracy theory, but it sucks more that some people will push others even when they know better.

The universe’s shape and its oldest light

The 3-torus is a strange and wonderful shape. We can’t readily visualise it because it has a complicated structure, but there’s a way. Imagine you’re standing inside a cube in which light is moving from the left face towards the right face. If the two faces are opaque, the right face will absorb the light, say, and that will be that. But say the two faces are not opaque. Instead, if the light passes through the right face and reemerges from the left face — as if it entered a portal and emerged on the other side — you’ll be standing inside a 3-torus.

If you look in front of you or behind you, you’ll see a series of cubes: they’re all the same cube (the one in which you’re standing) illuminated by the light, which is simply flowing in a closed loop through a single cube. In the early 1980s, physicists proposed that our universe could have the shape of a 3-torus at the largest scale. “There’s a hint in the data that if you traveled far and fast in the direction of the constellation Virgo, you’d return to Earth from the opposite direction,” a 2003 The New York Times article quoted cosmologist Max Tegmark as saying. The idea is funky but it’s possible. Scientists believe our universe’s geometry was determined by quantum processes that happened just after the Big Bang, but they’re not yet sure what that geometry really is. For now, the data are not inconsistent with a 3-torus, according to a paper a team of scientists calling themselves the COMPACT collaboration published in April 2024.
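
If it helps, here’s a toy numerical model of the same idea: in a 3-torus, each coordinate of a moving point simply wraps around modulo the cube’s width:

```python
# Moving through a 3-torus: positions wrap modulo the cube's width, so a
# point (or a ray of light) exiting the right face re-enters at the left.

WIDTH = 1.0  # side of the cube, arbitrary units

def step(position, velocity, dt):
    """Advance a point and wrap each coordinate back into the cube."""
    return tuple((p + v * dt) % WIDTH for p, v in zip(position, velocity))

pos, vel = (0.9, 0.5, 0.5), (1.0, 0.0, 0.0)  # heading for the right face
for _ in range(3):
    pos = step(pos, vel, 0.05)
    print(tuple(round(c, 3) for c in pos))
# x goes 0.95 -> 0.0 -> 0.05: past the right face, the point reappears on
# the left. The same cube, seen again: hence the repeated patterns.
```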

Scientists try to determine the shape of the universe just the way you would have standing inside the 3-torus: using light, and what it’s revealing ahead and behind you. Light passing through a 3-torus would be in a closed loop, which means the visual information it encodes should be repeated: that is, you would’ve seen the same cube repeated ad infinitum, sort of (but not exactly) like when you stand between two mirrors and see endless repetition of the space you’re in on either side. Scientists check for similar patterns that are repeated through the universe. They haven’t found such patterns so far — but there’s a catch. The distance light has travelled matters.

Say the cube you’re standing in is 1 km wide. Light will cross this distance in about three-millionths of a second. If it is 777 billion km wide, the light will take a month. And it will take a full year if the cube is 9.5 trillion km wide. We’re talking about whether the universe could be a 3-torus, and the universe was created 13.8 billion years ago. In this time, light can travel a distance of more than 100 sextillion km. If the width of the cube is less than this distance, we might have seen repeating patterns if the universe is shaped like a 3-torus. But if the cube is even wider, the light wouldn’t have finished crossing it even once since the universe was born, therefore no repeating patterns — yet the possibility of the universe being 3-torus-shaped remains. We just need to wait for the light to finish crossing it once.

Since we can learn so much about the universe’s geometry by studying light, and light that’s travelled the longest would be most useful, scientists are very interested in light ‘left over’ from the Big Bang. Yes, this light is still hanging around, and it’s measurably different from all the other light. Scientists call it the cosmic microwave background (CMB), a.k.a. ‘relic radiation’. It’s left over from a cosmic event that happened just 370,000 years after the Big Bang. We need to subtract the distance light could have travelled in this time from the 100 sextillion km figure (I’m tired of looking at zeroes; you can give it a shot if you like) to find the maximum distance the CMB could have travelled.
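
If you’d rather not count zeroes either, here’s the arithmetic in code; it’s plain distance = speed × time, ignoring cosmic expansion in keeping with this post’s simplified picture:

```python
# How far light travels in 13.8 billion years, minus the distance
# corresponding to the 370,000 years before the CMB was released.

LIGHT_YEAR_KM = 9.46e12  # kilometres in one light year

def light_travel_km(years):
    return years * LIGHT_YEAR_KM

total = light_travel_km(13.8e9)     # since the Big Bang: ~1.3e23 km
pre_cmb = light_travel_km(370_000)  # before the CMB set out: ~3.5e18 km
print(f"total:   {total:.3g} km")
print(f"pre-CMB: {pre_cmb:.3g} km")
print(f"CMB max: {total - pre_cmb:.3g} km")  # barely changes the total
```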

In its April paper, the COMPACT collaboration considered data about the universe that astrophysicists have collected using ground and space telescopes over the years — including about the CMB — and with that checked whether our universe could still be shaped like one of three types of 3-torus. The first type is the one I’ve considered in this post, and they’ve concluded (as expected) that if the cube is less wide than the distance light could’ve travelled since the universe was born, our universe can’t be shaped like this particular 3-torus. The reason: the data astrophysicists have put together don’t contain signs of repeating patterns.

(Update, 8.20 pm, June 23, 2024: Here’s a good primer of what these patterns will actually look like, courtesy Nirmal Raj.)

However, the COMPACT team adds, our universe could still be shaped like one of the other two types of 3-tori even if their respective cubes are smaller than the max. distance. This is because these two shapes include twists that will produce two subtly different images of the universe once the light has completed one loop. And according to the COMPACT folks, they can’t yet eliminate the presence of these images in the astrophysics data. The collaboration’s members have written in the April 2024 paper that they intend to find new/better ways to ascertain their hypotheses with CMB data.

Until then, look out for… déjà vu?

An Ig Nobel Prize for North and South Korea?

In 2020, India and Pakistan shared the Ig Nobel Prize for peace “for having their diplomats surreptitiously ring each other’s doorbells in the middle of the night, and then run away before anyone had a chance to answer the door.” The terms of the ongoing spat between North Korea and South Korea aren’t any less amusing, and the two countries may be destined for an Ig Nobel Prize of their own, even if their animosity — much like that between India and Pakistan — is rooted in issues with more gravitas.

North Korea has of late been sending balloons loaded with garbage over the border to the south whereas South Korea has stepped up its “psychological warfare” by blasting K-pop music over loudspeakers into the north. But as befits any functional democracy, the latter has run into trouble.

On June 17, Reuters reported the South Korean government faces “audits and legal battles claiming [the loudspeakers] are too quiet, raising questions over how far into the reclusive North their propaganda messages can blast”. Note: K-pop is propaganda because, per the same report, “These broadcasts play a role in instilling a yearning for the outside world, or in making them realize that the textbooks they have been taught from are incorrect,” according to Kim Sung-min, “who defected from the North in 1999 and runs a Seoul radio station that broadcasts news into North Korea”.

Apparently the speakers passed two tests in 2016 but failed subsequent audits, prompting the national defence ministry to sue the manufacturers. The court threw the case out because “too many environmental factors can affect the performance”. The ministry and the manufacturer have since made up, going by the fact that the ministry reportedly gave Reuters the same excuse when it was under fire over the speakers: environmental factors.

Imagine being the manufacturer who has to build a ridiculous set of speakers while being able to do nothing about the physics of sound propagation itself. The government wanted the K-pop to reach Kaesong, 10 km in from the border, whereas checks in 2017 found sound from the speakers could only get as far as 7 km, and in most cases managed 5 km. And to think the whole enterprise hinges on (a) North Korea being annoyed enough by the K-pop to blast music of its own in the opposite direction, at least to muddle the South Korean broadcast, and (b) South Korea’s claim that two soldiers defected from the North after listening to the music. Two.
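
For fun, here’s a crude model of the manufacturer’s predicament. It treats the loudspeaker bank as a point source, so the sound pressure level falls by 20·log10 of the distance; it ignores atmospheric absorption, wind, and terrain, all of which make real life worse; and the 140 dB source level is a made-up figure:

```python
# Inverse-square spreading: SPL falls by 20*log10(d) dB relative to the
# level at 1 m. Absorption, wind, and terrain are ignored, so real-world
# ranges are worse than these numbers suggest.
import math

def spl_at(distance_m, spl_at_1m=140.0):
    """Sound pressure level (dB) at a distance from a point source."""
    return spl_at_1m - 20 * math.log10(distance_m)

for d_km in (5, 7, 10):
    print(f"{d_km} km: {spl_at(d_km * 1000):.0f} dB")
# 5 km: 66 dB, 7 km: 63 dB, 10 km: 60 dB. Doubling the distance only
# shaves off 6 dB, and the signal still has to beat the ambient noise.
```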

Did they risk it all to turn the damned things off, you think?