JPL layoff isn’t the fall of a civilisation

A historian of science I follow on Twitter recently retweeted this striking comment:

While I don’t particularly care for capitalism, the tweet is fair: the behemoth photolithography machine depicted here required advances in a large variety of fields over many decades to be built. If you played the game Civilization III, a machine like this would show up right at the end of your civilisation’s development arc. (Or, in Factorio, at the bottom of the technology research tree.)

Even if we hadn’t been able to conceive of and build this machine today, that wouldn’t invalidate all the years of R&D, collaboration, funding, good governance, and, yes, political stability that led up to this moment. The machine is a culmination of all these efforts but it isn’t the efforts themselves. They stand on their own and, to their great credit, facilitate yet more opportunities.

This may seem like a trivial perspective but it played through my mind when I read a post on the NASA Watch website, written by one Jeff Nosanov, a science-worker who worked with the NASA Jet Propulsion Laboratory (JPL) until 2019. I was surprised by its tone and contents because it offers a twisted argument for why JPL was wrong to have laid off some 530 people last week.

According to CBS:

“The Los Angeles County facility attributed the cuts to a shrinking budget from the federal government. In an internal memo, the laboratory expected to receive a $300 million budget for its Mars Sample Return project for the 2024 fiscal year. Director Laurie Leshin said this accounts for a 63% decrease from 2023.”

Nosanov, however, would have us believe that the layoffs will lead to the sort of uncertainties about the US’s future as a space superpower that history confronted the world with when the Roman empire fell, the Chinese navy dwindled in the early 16th century, and the Soviet Union collapsed in 1991. To quote:

“The leaders of the past may not have known they were making historic mistakes. The Danish explorers who abandoned Canada may not have known about the Western Roman Empire. The Chinese Navy commanders may not have known about the Danish. Lost in the mists of history, those clear mistakes are understandable. Their makers may not have had the same knowledge of world history that we have today. But we do not have the excuse of ignorance.

History shows us both what happens when a superpower abandons a frontier – someone else takes it, and that such things are conscious choices. It is the height of folly, arrogance, and fully-informed ignorance for our leaders to allow this to happen. It will lay morale in a smoking ruin for a generation and hand the torch to China, who will be glad to take the lead. Humans will lead into the darkness, but they may not be American. That may not be the worst thing in the world, but it was not always the American way.”

The conceit here is breathtaking, patronising, and misguided. The fates of empires and civilisations have turned on seemingly innocuous events, sure, but NASA not being able to operate a Mars sample-return mission to the extent it would have liked in 2024 will not be such an event.

There are of course pertinent questions about whether (i) scientific work is implicitly entitled to public funding (even when it threatens to run away), (ii) space science research, including towards an ambitious Mars mission, mediates the US’s space superpower status to the extent Nosanov claims it does, and (iii) this is the character of JPL’s drive in today’s vastly more collaborative modern spaceflight enterprise.

For example, Nosanov writes:

“JPL has produced wonders that have explored the farthest (the Voyager space probes left the solar system), dug the deepest (rovers and landers exploring the mysteries of life and the solar system underground on other planets) and lit the darkness (examined objects in space that have never – in five billion years – seen the light of the sun) of any of humanity’s pioneers.”

Many other space agencies with which NASA has allied through its Artemis Accords, among other agreements, are pursuing the same goals – explore the farthest, dig the deepest, light the darkest, etc. – with NASA’s help and are also sharing resources in return. In this milieu, harping on sole leadership because it’s “the American way” is distasteful.

As a space superpower, the US brings a lot to the table, but I’m certain we’ll all be better off if it leaves behind any dregs of a monarchical attitude it may still retain. Of course, Nosanov isn’t JPL, and JPL, and NASA by extension, are likely to have a different, more mature view. But at the same time, I saw many people sharing Nosanov’s post on Twitter, including some whose work and opinions I’ve respected before, and not one of them flagged any issues with its tone. So I’d like to know what the ‘official opinion’ is.

The simple reason JPL’s current downturn won’t be a world-changing event is that, despite recounting all those decisive moments from the past, Nosanov ignores the value of history itself. Recall the sophisticated photolithography machine and the summit of human labour, ingenuity, and cooperation it represents. Take away the machine and you have taken away only the machine, not the foundations on which the possibility of such innovation rests.

Similarly, it is ludicrous to expect anyone to believe NASA’s pole position in human and robotic spaceflight is founded only on its Mars sample-return mission, or in fact on any of its Mars missions. This fixation on outcomes over processes, on ingredients over the recipe, is counterproductive. The US space programme still has the knowledge and technological foundations required to manufacture such opportunities in the first place – which is what other countries are still working to build.

Put differently, that an entity – whether a space agency or a country – is a superpower implies among other things that it can be resilient, that it can absorb shocks without changing its essential nature. But if Nosanov’s expectations are anything to go by and the US falls behind China because JPL received 63% less than its demand from the US government, then perhaps it deserves to.

Realistically, however, JPL might get the money it’s looking for in future and simply get back on track.

The only part of Nosanov’s post that makes sense is the penultimate line: “JPL – and the people who lost their jobs today – deserve better.”

Poonam Pandey and peer-review

One dubious but vigorous narrative that has emerged around Poonam Pandey’s “death” and subsequent return to life is that the mainstream media will publish “anything”.

To be sure, there were broadly two kinds of news reports after the post appeared on her Instagram handle claiming Pandey had died of cervical cancer: one said she’d died and quoted the Instagram post; the other said her management team had said she’d died. That is, the first kind stated her death as a truth and the other stated her team’s statement as a truth. News reports of the latter variety obviously ‘look’ better now that Pandey and her team said she lied (to raise awareness of cervical cancer). But judging the former news reports harshly isn’t fair.

This incident has been evocative of the role of peer-review in scientific publishing. After scientists write up a manuscript describing an experiment and submit it to a journal to consider for publishing, the journal’s editors farm it out to a group of independent experts on the same topic and ask them if they think the paper is worth publishing. (Pre-publication) peer-review has many flaws, including the fact that peer-reviewers are expected to volunteer their time and expertise and that the process is often slow, inconsistent, biased, and opaque.

But for all these concerns, peer-review isn’t designed to reveal deliberately – and increasingly cleverly – concealed fraud. Granted, the journal could be held responsible for missing plagiarism, and the journal and peer-reviewers both for clearly duplicated images and entirely bullshit papers. However, pinning the blame on peer-review for, say, failing to double-check findings when the infrastructure to do so is hard to come by would be ridiculous.

Peer-review’s primary function, as far as I understand it, is to check whether the data presented in the study support the conclusions drawn from it. It works best with some level of trust. Expecting it to respond perfectly to an activity that deliberately and precisely undermines that trust is unreasonable. A better response (both to more advanced tools with which to attempt fraud and to the need to democratise access to scientific knowledge) would be to overhaul the ‘conventional’ publishing process, such as with transparent peer-review and/or paying for the requisite expertise and labour.

(I’m an admirer of the radical strategy eLife adopted in October 2022: to review preprint papers and publicise its reviewers’ findings along with the reviewers’ identities and the paper, share recommendations with the authors to improve it, but not accept or reject the paper per se.)

Equally importantly, we shouldn’t consider a published research paper to be the last word but in fact a work in progress, with room for revision, correction or even retraction. Doing otherwise – as much as stigmatising retractions for reasons unrelated to misconduct or fraud – may render peer-review suspect when people find mistakes in a published paper even when the fault lies elsewhere.

Analogously, journalism is required to be sceptical, adversarial even – but of what? Not every claim is worthy of investigative and/or adversarial journalism. In particular, when a claim is publicised that someone has died and the group of people that manages that individual’s public profile “confirms” the claim is true, that’s the end of that. This is an important reason why these groups exist, so when they compromise that purpose, blaming journalists is misguided.

And unlike peer-review, the journalistic processes in place (in many but not all newsrooms) to check potentially problematic claims – for example, that “a high-powered committee” is required “for an extensive consideration of the challenges arising from fast population growth” – are perfectly functional, in part because not having to investigate “confirmed” claims of a person’s death keeps their false-positive rate lower than it would otherwise be.

Schrödinger’s temple

On January 22, in a ceremony led by Prime Minister and now high-priest Narendra Modi, priests and officials allegedly consecrated the idol of Lord Ram at the new temple in Ayodhya, with many celebrities in attendance. (‘Allegedly’ because I don’t know if it’s a legitimate consecration, given the disagreement among some spiritual leaders over its rituals.) TV news channels on both sides of the spectrum were outwardly revelling in the temple’s festivities, not bothering to cover the ceremony in a dispassionate way. Their programming was unwatchable.

This Ram temple is a physical manifestation of the contemporary Indian nation – a superposition of state and sanctum sanctorum at once, collapsing like Schrödinger’s hypothetical cat to one or the other depending on political expedience. The temple, like many others around the country now, is both kovil and katchi office (Tamil for ‘temple’ and ‘party office’).

(I’m hardly unique in these views but I also suspect I’m in a minority, with few others to reinforce their legitimacy, so I’m writing them down so they’re easier for me to recall.)

After the consecration ceremony, Prime Minister Modi delivered a speech, as is his wont, further remixing the aspirations of the Indian state and its people with a majoritarian religious identity. (The mic then passed to the treasurer of the temple trust, who spoke in praise of Modi, and RSS chief Mohan Bhagwat, who spoke in praise of Modi’s ostensible ideals.) For now, the results of the Lok Sabha elections later this year seem like a foregone conclusion, with Modi’s Bharatiya Janata Party widely expected to begin a third term in May. The temple’s opening was effectively a show of strength by Modi, that he delivers on his promises no matter the obstacles in his way, even if any of them are legitimate.

Before the 2019 Lok Sabha elections, in another show of strength, the Modi government signed off on the anti-satellite (ASAT) missile test in March, in which a missile launched from the ground flew 300 km up and destroyed a dummy satellite in earth orbit. The operation was called ‘Mission Shakti’ (Hindi for ‘strength’). A statement from the Ministry of External Affairs said, “The test was done to verify that India has the capability to safeguard our space assets”. Oddly, however, the Defence R&D Organisation, which conducted the test, had had ASAT capabilities for a decade by then under its Ballistic Missile Defence programme, rendering the timing suspect.

Considering Prime Minister Modi delivered another hour-long speech after the test, I’ve been inclined to side with the theory that it was conducted to give him airtime that was otherwise unavailable due to the Election Commission’s restrictions on election candidates coming on air in a short period before polling. In 2024, of course, it’s an open secret that the Election Commission determines polling schedules based on the BJP’s convenience.

Violence shuts science? Err…

Dog bites man isn’t news. Man bites dog is news.

I’m reminded of this adage of the news industry – and Nambi Narayanan’s comment in August 2022 – when I read reports like ‘Explosion of violence in Ecuador shuts down science’ (Science, January 13, 2024). An “explosion of violence” in a country should reasonably be expected to affect all walks of life, so what’s the value in focusing a news report only on science and those who practise it? It’s not like we have news reports headlined “explosion of violence in Ecuador shuts down fruit shops”.

There are little tidbits in the article that might be useful to other researchers in Ecuador, but it’s unlikely they’re looking for them in Science, which is a foreign publication reporting on Ecuador for an audience that’s mostly outside the country.

The only bit I found really worth dwelling on was this one paragraph:

The Consortium for the Sustainable Development of the Andean Ecoregion (CONDESAN) … went further. It canceled all fieldwork this week and next, says Manuel Peralvo, a geographer and project coordinator. He adds that CONDESAN plans to design a stricter security protocol for future projects that involve fieldwork. “We’re going to have to plan our schedules much more specifically to know who is where and at what time,” and to avoid dangerous areas, he says.

… yet it’s just one paragraph, before the narrative moves on to how the country’s new security protocols will “deter non-Ecuadorian funding and scientists”. I’d have liked the report to drop everything else and focus on how research centres organise and administer fieldwork when field-workers are at risk of physical violence.

If anything, there may be no opportunity cost associated with such stories – except that, in their current form, they suggest their authors and publishers believe science is somehow more special than other human endeavours.

Ram temple at science ‘festival’

The Surya Tilak project had courted controversy in the past with Trinamool Congress’s Mahua Moitra flagging it on social media in November 2021. The CSIR officials, however, defended the project arguing the scientific calculations that went into making the system.

‘Surya Tilak at Ram temple at the India International Science Festival backed by the Union Science Ministry’, Deccan Herald, January 18, 2024

This is a succinct demonstration of science’s need for a guiding hand. The Indian Science Congress isn’t happening this year – which is both for the better and otherwise – but given the vague allegations that have cast its status in limbo, I suspect it’s no coincidence that its star is declining (further) at the same time as that of the India International Science Festival (IISF) is rising. The latter has a budget of Rs 20-25 crore, according to the Deccan Herald article quoted above, “contributed by various scientific departments”.

The absolute value of India’s expenditure on scientific research is increasing – a horn the national government has often tooted – but as a percentage of GDP as well as of the total annual budget, it is dropping. In this milieu, it’s amusing for the government to suddenly be able to provide Rs 20-25 crore for the IISF when the Department of Science and Technology has been giving the Science Congress a relatively lower Rs 5 crore – and last year alleged unspecified “financial irregularities” on the part of the Congress’s organisers.

But as with the Science Congress, it wouldn’t be fair to dismiss the IISF altogether for some problematic exhibits and events. This said, CSIR officials contending the “Surya Tilak” of the upcoming Ram temple in Ayodhya deserves to be exhibited at the IISF because “scientific calculations” went into designing it is telling of the relationship between science, religion, and the Indian state today.

Considering there are government regulations stipulating the minimum structural characteristics of every building in the country, any non-small structure could have been included in the IISF exhibit. Don’t be absurd, I hear you say – and that’s just as absurd as the officials’ reasoning.

Natural philosophy in many ancient civilisations, including those in India, was concerned with the motions of stars and planets across the sky and seasonal changes in these patterns. So as such, using the principles of modern science to design the “Surya Tilak” isn’t objectionable, or even remarkable.

But the fact that IISF is being organised by Vijnana Bharati, an RSS-affiliated body, and that Vijnana Bharati’s stated goal is “to champion the cause of Bharatiya heritage with a harmonious synthesis of physical and spiritual sciences” makes the relationship suspect – in much the same way the Vedas and other parts of India’s cultural heritage have become tainted by association with the government’s Hindutva programme. And these suspicions are heightened now thanks to the passions surrounding the impending consecration of the Ram temple idol.

A practice of science that constantly denies its political character is liable to be, and has been, appropriated in the service of a larger political or ideological agenda. This isn’t to say science – more specifically the national community of science exponents – should assume a monolithic political position. It’s to say that such appropriation is precisely the cost of misunderstanding science and politics, as human endeavours go, to be immiscible. Scientists’ widespread and collective aspiration to be apolitical implicitly admits political influence, and we should all understand that it’s not desirable for science to be appropriated in this way. When it is, we must bear in mind how these unions have become deleterious and how the two can be, or ought to be, separated – so that we understand what science is (and isn’t) and what sort of legitimacy it should (and shouldn’t) be allowed to grant the state.

Defying awareness of the value of separating science and (a compromised) state strikes me as fundamentally antisocial, because such awareness is the first step to asking how and in what circumstances they ought to be separated; the defiance undermines the possibility of this awareness taking root. This isn’t new, but amid the increasing fervour surrounding the Ram temple, and the fusion of temple and state in India the event will consummate, the importance of its loss seems heightened as well.

Using superconductors to measure electric current

Simply place two superconductors very close to each other, separated by a small gap, and you’ll have taken a big step towards an important piece of technology called a Josephson junction.

When the two superconductors are close to each other and exposed to electromagnetic radiation in the microwave frequency range (0.3-30 GHz), a small voltage develops across the gap. As the waves of the radiation rise and fall, so too does the voltage. And it so happens that the voltage can be calculated exactly from the frequency of the microwave radiation.

A Josephson junction is also created when two superconductors are brought very close and a current is passed through one of them. Now their surfaces form a capacitor: a device that builds up and holds electric charge. When the amount of charge on the surface of the current-bearing superconductor crosses a threshold, the voltage between the surfaces becomes large enough to allow a current to jump from this surface to the other, across the gap. Then the voltage drops and the surface starts building up charge again. This process keeps going as the voltage rises, falls, rises, falls.

This undulating rise and fall is called a Bloch oscillation. It’s only apparent when the Josephson junction is really small, of the order of micrometres. Since the Bloch oscillation is like a wave, it has a frequency and an amplitude. It so happens that the frequency is equal to the value of the current flowing in the superconductor divided by 2e, where e is the smallest unit of electric charge (1.602 × 10⁻¹⁹ coulomb).
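This relationship is simple enough to check with a quick calculation. A minimal sketch in Python, using a hypothetical 1-nanoampere current (an illustrative value, not a figure from the study):

```python
# Frequency of a Bloch oscillation: f = I / 2e, where I is the current
# flowing in the superconductor and e is the elementary charge.
E = 1.602176634e-19  # elementary charge, in coulombs

def bloch_frequency(current_amperes):
    """Return the Bloch oscillation frequency (Hz) for a given current (A)."""
    return current_amperes / (2 * E)

# A (hypothetical) current of 1 nanoampere corresponds to about 3.12 GHz -
# squarely in the microwave range the junction responds to.
print(f"{bloch_frequency(1e-9) / 1e9:.2f} GHz")  # → 3.12 GHz
```

Note how even a vanishingly small current maps to a conveniently measurable frequency, which is the whole appeal of the scheme.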

The amazing thing about a Josephson junction is that the current that jumps between the two surfaces is entirely due to quantum effects, and it’s visible to the naked eye – which is to say the junction shows quantum mechanics at work at the macroscopic scale. This is rare and extraordinary. Usually, observing quantum-mechanical effects requires sophisticated microscopes and measuring devices.

Josephson junctions are powerful detectors of magnetic fields because of the ways in which they’re sensitive to external forces. For example, devices called SQUIDs (short for ‘superconducting quantum interference devices’) use Josephson junctions to detect magnetic fields a trillion times weaker than the field produced by a refrigerator magnet.

They do this by passing an electric current through a superconductor that forks into two, with a Josephson junction at the end of each path. If there’s a magnetic field nearby, even a really small one, it will distort the amount of current passing through each path to a different degree. The resulting current mismatch will be sufficient to trigger a voltage rise in one of the junctions, and a current will jump. Such SQUIDs are used, among other things, to detect dark matter.

Shapiro steps

The voltage and current in a Josephson junction share a peculiar relationship. As the current in one of the superconductors is increased smoothly, the voltage doesn’t increase smoothly but in small jumps. On a graph (see below), the rise in the voltage looks like a staircase. The steps here are called Shapiro steps. Each step corresponds to a moment when the current in the superconductor is a multiple of 2e times the frequency of the Bloch oscillation.

I’ve misplaced the source of this graph in my notes. If you know it, please share; if I find it, I will update the article asap.

In a new study, published in Physical Review Letters on January 12, physicists from Germany reported finding a way to determine the amount of electric current passing in the superconductor by studying the Bloch oscillation. This is an important feat because it could close the gap in the metrology triangle.

The metrology triangle

Josephson junctions are also useful because they provide a precise relationship between frequency and voltage: irradiate a junction with microwaves of a specific frequency and it will develop a specific voltage. The US National Institute of Standards and Technology (NIST) uses a circuit of Josephson junctions to define the standard volt, a.k.a. the Josephson voltage standard.

We say 1 V is the potential difference between two points if 1 ampere (A) of current dissipates 1 W of power when moving between those points. How do we make sure what we say is also how things work in reality? Enter the Josephson voltage standard.

In fact, decades of advancements in science and technology have led to a peculiar outcome: the tools scientists have today to measure the frequency of waves are just phenomenal – so much so that scientists have been able to measure other properties of matter more accurately by linking them to some frequency and measuring that frequency instead.

This is true of the Josephson voltage standard. The NIST’s setup consists of 20,208 Josephson junctions. Each junction has two small superconductors separated by a few nanometres and is irradiated by microwave radiation. The resulting voltage is equal to the microwave frequency multiplied by a proportionality constant. (E.g. when the frequency is around 70 GHz, the gap between each pair of Shapiro steps is around 150 microvolt.) This way, the setup can track the voltage with a precision of up to 1 nanovolt.
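The step spacing quoted above can be checked directly: it equals the Planck constant times the drive frequency, divided by 2e. A quick sketch, using the standard values of the constants:

```python
# Voltage gap between adjacent Shapiro steps: delta_V = h * f / 2e
H = 6.62607015e-34   # Planck constant, in joule-seconds
E = 1.602176634e-19  # elementary charge, in coulombs

def shapiro_step_spacing(frequency_hz):
    """Return the voltage spacing (V) between Shapiro steps for a drive frequency (Hz)."""
    return H * frequency_hz / (2 * E)

# At 70 GHz the spacing works out to about 145 microvolt, in line with
# the "around 150 microvolt" figure quoted above.
print(f"{shapiro_step_spacing(70e9) * 1e6:.0f} microvolt")  # → 145 microvolt
```

Because frequency is the quantity being multiplied, any improvement in frequency metrology translates directly into a better voltage standard.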

The proportionality constant is the Planck constant divided by two times the basic electric charge e. Both numbers are fundamental constants of our universe: their values are the same for macroscopic objects and subatomic particles alike.

Voltage, resistance, and current together make up Ohm’s law – the statement that voltage is roughly equal to current multiplied by resistance (V = IR). Scientists would like to link all three to fundamental constants because they know Ohm’s law works in the classical regime, in the macroscopic world of wires that we can see and hold. They don’t know for sure if the law holds in the quantum regime of individual atoms and subatomic particles as well, but they’d like to.

Measuring things in the quantum world is much more difficult than in the classical world, and it will help greatly if scientists can track voltage, resistance, and current by simply calculating them from some fundamental constants or by tracking some frequencies.

Josephson junctions make this possible for voltage.

For resistance, there’s the quantum Hall effect. Say there’s a two-dimensional sheet of electrons held at an ultracold temperature. When a magnetic field is applied perpendicular to this sheet, an electrical resistance develops across the breadth of the sheet. The amount of resistance depends on a combination of fundamental constants. The formation of this quantised resistance is the quantum Hall effect.
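For a sense of what that “combination of fundamental constants” is: the quantised Hall resistance comes in steps of h/e² (the von Klitzing constant) divided by an integer. A minimal sketch:

```python
# Quantised Hall resistance: R = h / (n * e^2), where n is a positive integer
# and h/e^2 is the von Klitzing constant.
H = 6.62607015e-34   # Planck constant, in joule-seconds
E = 1.602176634e-19  # elementary charge, in coulombs

def hall_resistance(n):
    """Return the quantised Hall resistance (ohms) at the nth plateau."""
    return H / (n * E ** 2)

# The first plateau sits at about 25,812.8 ohms - no material properties
# or device dimensions enter the formula, only fundamental constants.
print(f"{hall_resistance(1):.1f} ohm")  # → 25812.8 ohm
```

That independence from material and geometry is exactly what makes the effect usable as a resistance standard.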

The new study makes the case that the Josephson junction setup it describes could pave the way for scientists to measure electric currents better using the frequency of Bloch oscillations.

Scientists have often referred to this pending task as a gap in the ‘metrology triangle’. Metrology is the science of the way we measure things. And Ohm’s law links voltage, resistance, and current in a triangular relationship.

A JJ + SQUID setup

In their experiment, the physicists coupled a Bloch oscillation in a Josephson junction to a SQUID in such a way that the SQUID would also have Bloch oscillations of the same frequency.

The coupling happens via a capacitor, as shown in the circuit schematic below. This setup is just a few micrometres wide. When a current entered the Josephson junction and crossed the threshold, electrons jumped across and produced a current in one direction. In the SQUID, this caused electrons to jump and induce a current in the opposite direction (a.k.a. a mirror current).

I1 and I2 are biasing currents, which are direct currents supplied to make the circuit work as intended. The parallel lines that form the ‘bridge’ on the left denote a capacitor. The large ‘X’ marks denote the Josephson junction and the SQUID. The blue blocks are resistors. The ellipses containing two dots each denote pairs of electrons that ‘jump’. Source: Phys. Rev. Lett. 132, 027001

This setup requires the use of resistors connected to the circuit, shown as blue blocks in the schematic. The resistance they produce suppresses certain quantum effects that get in the way of the circuit’s normal operation. However, resistors also produce heat, which could interfere with the Josephson junction’s normal operation as well.

The team had to balance these two requirements with a careful choice of resistor material, rendering the circuit operational in a narrow window of conditions. For good measure, the team also cooled the entire circuit to 0.1 K to further suppress noise.

In their paper, the team reported that it could observe Bloch oscillations and the first Shapiro step in its setup, indicating that the junction operated as intended. The team also found it could accurately simulate its experimental results using computer models – meaning the theories and assumptions the team was using to explain what could be going on inside the circuit were on the right track.

Recall that the frequency of a Bloch oscillation can be computed by dividing the amount of current flowing in the superconductor by 2e. By tracking these oscillations with the SQUID, the team wrote in its paper, it should soon be able to accurately calculate the current – once it has found ways to further reduce noise in the setup.

For now, they have a working proof of concept.

ICC pitch-rating system is regulatory subversion

In today’s edition of The Hindu, Rebecca Rose Varghese and Vignesh Radhakrishnan have a particularly noteworthy edition of their ‘Data Point’ column – ‘noteworthy’ because they’ve used data to make concrete something we’ve all been feeling for a while, the way we sometimes know something to be true even without hard evidence, and something that found prominent articulation in the words of Rohit Sharma in a recent interview.

Sharma was commenting on the ICC’s pitch-rating system, saying pitches everywhere should be rated consistently instead of those in the subcontinent earning poorer ratings more of the time.

Rebecca and Vignesh analysed matches and their pitch-ratings between May 14, 2019, and December 26, 2023, to find:

  1. Pitches in India, Bangladesh, Pakistan, and Sri Lanka receive ‘below average’ or ‘poor’ ratings for Test matches more often than Test pitches in Australia, West Indies, England, South Africa, and New Zealand;
  2. In Test matches played in India, Bangladesh, Pakistan, and Sri Lanka, spin bowling claimed more wickets than pace bowling; and
  3. When spin bowling claims more wickets than pace bowling in Test matches, the spin-friendly pitches are rated worse than the pace-friendly ones even when both sets of matches conclude after a relatively low number of balls have been faced.

This is just fantastic. (1) and (2) together imply the ICC has penalised pitches in the subcontinent for being spin-friendly tracks. And this and (3) imply that this penalty doesn’t care for the fact that non-spin-friendly tracks produce similar results without incurring the same penalty.

The longer a Test match lasts, the better it is for stadiums and the TV networks broadcasting it: the stadium can sell tickets for all five days and the networks can broadcast advertisements on all five days. This thinking has come to dominate ODIs and T20s – a sad irony, because the ICC created the ODI and T20 formats to be more entertaining and more profitable without compromising the Test format. Now, with the ICC’s pitch-rating system, this entertainment-plus-profitability thinking has percolated through to Test matches as well.

Sharma alluded to this when he said:

I mean, we saw what happened in this match, how the pitch played and stuff like that. I honestly don’t mind playing on pitches like this. As long as everyone keeps their mouth shut in India and don’t talk too much about Indian pitches, honestly.

I’d take this further and say Test match pitches can’t be rated badly because the purpose of this format is to test players in the toughest conditions the sport can offer. In this milieu, to say a Test match pitch is ‘below average’ is to discourage teams from confronting their opponents’ batters with a track that favours bowlers’ strengths. And in the ICC’s limited view, this discouragement is biased markedly against spin-bowling.

Criticism of this paradigm isn’t without foundation. The A Cricketing View Substack wrote in a February 2021 post (hat-tip to Vignesh):

The Laws of Cricket only specify that the game not be played on a pitch which umpires might consider to be dangerous to the health of the players. The ICC has chosen to go beyond this elementary classification between dangerous and non-dangerous pitches by setting up a regulatory mechanism which is designed to minimize the probability that a bad pitch (and not just a dangerous pitch) will be prepared.

If anything, a bad pitch that produces uneven and potentially high bounce will be more dangerous to batters than a bad pitch that produces sharp turn. So the ICC’s pitch-rating system isn’t “regulatory expansion” – as A Cricketing View called it – but regulatory subversion. R. Ashwin has also questioned the view embedded in the ICC’s system that pitches shouldn’t offer sharp turn on day 1 – another arbitrary choice, though one that makes sense from the entertainment and/or profitability PoV, and one that restricts ‘average’ or better spin-bowling to a very specific kind of surface.

Point (3) in the ‘Data Point’ implies such non-spin-friendly pitches probably exist in places like Australia and South Africa, which are otherwise havens of pace-bowling. The advantage pace enjoys in the ICC’s system creates another point of divergence when it meets players’ physiology. Pitches in Australia in particular are pace-friendly, but even when they’re not, they’re not spin-friendly either. On these tracks, Australian pacers still have an advantage because they’re taller on average and can generate more bounce than shorter bowlers, such as those from India.

I believe Test matches should be played on tracks that teach all 22 players (of both teams) a valuable lesson – without, of course, endangering players’ bodies. Two questions follow:

  1. How will stadiums and TV networks make more money off Test matches? The bigger question, to me, is: should they? I’m aware of the role stadiums have played through history in making specific sports more sustainable by monetising spectatorship. But perhaps stadiums should be organised such that the bulk of their revenue is from ODI and T20 matches and Test matches are spared the trouble of being more entertaining/profitable.
  2. Who decides what these lessons should be? I don’t trust the ICC, of course, but I don’t trust the BCCI either because I don’t trust the people who currently staff it to avoid making a habit of tit-for-tat measures – beyond one-off games – that massage Indian teams’ player-records. Other countries’ cricket boards may be different but given the effects of the ICC’s system on their specific fortunes, I’m not sure how they will react. In fact, it seems impossible that we will all agree on these lessons or how their suitability should be measured – a conclusion that, ironically, speaks to the singular pitfall of judging the value of a cricket match by its numbers.

An odd paper about India’s gold OA fees

A paper about open-access fees in India published recently in the journal Current Science has repeatedly surfaced in my networks over some problems with it. The paper is entitled ‘Publications in gold open access and article processing charge expenditure: evidence from Indian scholarly output’ and is authored by Raj Kishor Kampa, Manoj Kumar Sa, and Mallikarjun Dora of Berhampur University, the Indian Maritime University, and IIM Ahmedabad respectively. This is the paper’s abstract:

Article processing charges (APCs) ensure the financial viability of open access (OA) scholarly journals. The present study analyses the number of gold OA articles published in the Web of Science (WoS)-indexed journals by Indian researchers during 2020, including subject categories that account for the highest APC in India. Besides, it evaluates the amount of APC expenditure incurred in India. The findings of this study reveal that Indian researchers published 26,127 gold OA articles across all subjects in WoS-indexed journals in 2020. Researchers in the field of health and medical sciences paid the highest APC, amounting to $7 million, followed by life and earth sciences ($6.9 million), multidisciplinary ($4.9 million), and chemistry and materials science ($4.8 million). The study also reveals that Indian researchers paid an estimated $17 million as APC in 2020. Furthermore, 81% of APCs went to commercial publishers, viz. MDPI, Springer-Nature, Elsevier and Frontier Media. As there is a growing number of OA publications from India, we suggest having a central and state-level single-window option for funding in OA journals and backing the Plan S initiative for OA publishing in India.

It’s unclear what the point of the study is. First, it seems to have attempted a value-neutral assessment of how much scientists in India are paying as article processing charges (APCs) to have their papers published in gold OA journals. It concludes with some large, and frankly off-putting, numbers – a significant drain on the resources India has made available to its scholars to conduct research – yet it proceeds to “suggest having a central and state-level single-window system” so scientists can continue to pay these fees with less hassle, and for the Indian government (presumably) to back the Plan S initiative.

As far as I know, India has declined to join the Plan S initiative; this is a good thing for the reasons enumerated here (written when India was considering joining the initiative), one of which is that it enabled the same thing the authors of the paper have asked for but on an international scale: allowing gold OA journals to hike their APCs knowing that (often tax-funded) research funders will pay the bills. This paper also marks the first time I’ve known anyone to try to estimate the APCs paid by Indian scientists and, once estimated, deem the figures not worthy of condemnation.

Funnily enough, while the paper doesn’t concern itself with taking a position on gold OA in its abstract or in the bulk of its arguments, it does contain the following statements:

“Although there is constant growth in OA publications, there is also a barrier to publishing in quality OA journals, especially the Gold and Hybrid OA, which levies APC for publications.”

“However, the high APC charges have been an issue for low-income and underdeveloped countries. In the global south, the APC is a real obstacle to publishing in high-quality OA journals”

“Extant literature reveals a constant increase in APC by most publishers like BioMed Central (BMC), Frontiers Media, Multidisciplinary Digital Publishing Institute (MDPI), and Hindawi”

“One of the ideas of open access was to make equitable access and check the rampant commercialization of scholarly publications. Still, surprisingly, many established publishers have positioned themselves in the OA landscape.”

“formulation of national-level OA policies in India is the need of the hours since OA is inevitable as everyone focuses on equity and access to scholarly communications.”

But these statements only render the paper’s conclusion all the more odd.

Of course, this is my view, shared by some scholars in India’s OA advocacy community, and the authors of the Current Science paper are free to disagree. The second issue, however, is objectively frustrating.

Unlike the products of science communication and science journalism, a scientific paper may simply present a survey of some numbers of interest to a part of the research community, but the Current Science paper falls short on this count as well. Specifically, not once does its body mention the words “discount” and “waiver” (or their variations), which is strange because OA journals regularly specify discounted APCs – or waive them altogether – if certain conditions are met (including, in the case of some journals, if a paper’s authors are from a low- and middle-income country). Accounting for discounts, researchers Moumita Koley (IISc Bengaluru) and Achal Agrawal (independent) estimated the authors could have overestimated Indian scientists’ APC expenses by 47.7% – ranging from 4.8% when submitting manuscripts to the PLoS journals to 428.3% when submitting to journals of the American Chemical Society.
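To make the arithmetic of such a correction concrete, here’s a minimal sketch. The figures in it are made up; only the formula – comparing an estimate built from list-price APCs against the fee actually paid after a discount or waiver – reflects the kind of adjustment Koley and Agrawal describe.

```python
# Hypothetical illustration of how using list-price APCs instead of the
# fees authors actually pay inflates an expenditure estimate.
# All numbers are invented for the sake of the example.

def overestimate_pct(list_apc: float, paid_apc: float) -> float:
    """Percentage by which the list-price figure overstates actual spending."""
    return (list_apc - paid_apc) / paid_apc * 100

# Suppose a journal lists a $2,000 APC but grants a 50% discount to
# authors from lower-income countries, so the author pays $1,000.
# The list-price estimate then overstates the real expense by 100%.
print(overestimate_pct(2000, 1000))  # 100.0
```

The same calculation, run per publisher with real discount schedules, is what yields publisher-specific overestimates of very different magnitudes.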

Gold OA’s publishing fees are out of proportion to the amount of work and resources required to make a published paper open-access, and often extortionate. And while discounts and waivers are available, they don’t spare research-funders in other parts of the world the expense, they preserve publishers’ large profit margins at the expense of governments’ allocations for research, and – as scientist Karishma Kaushik wrote for The Hindu – the process of availing these concessions can be embarrassing for researchers.

Issue #1: the Current Science paper erects a flawed argument both in favour of and in opposition to APCs by potentially overestimating them! Issue #2: In their correspondence, Koley and Agrawal write:

“A possible reason for their error could be that DOAJ, which forms their primary source, does not mention discounts usually given to authors from lower-income countries. Another important error is that while the authors claim that they filtered the articles. Page 1058: ‘Extant literature suggests that the corresponding author most likely pays the APCs’. Following the corresponding author criterion, APC expenditure incurred by Indian researchers was estimated; they have not actually done so. Table 2 shows the discrepancy if one applies the filter. Also, Table 1 shows the estimated error in calculation if this criterion is included in calculation.”

To this, the authors of the Current Science paper responded thus:

“We wish to clarify any misunderstanding that may have arisen. We analysed the APC expenditure incurred in India without calculating the discounts or waivers received by authors as there is no specific single source to find all discounts, for example, an author-level or institute-level discount; hence, it would be difficult to provide an actual amount that Indian researchers spent on APC. Additionally, discounts or any publisher-provided waivers are recent developments, and discounts/waivers given to authors from LMIC countries were not mentioned in DOAJ, which is the primary source of the present study. Hence, it was not analysed in the current study. These factors may be considered as limitations of the study.”

This is such a blah exchange. To the accusation that they failed to account for discounts and waivers, the authors admit – not in their paper but in their response to a rebuttal – that they didn’t, and that it’s a shortcoming. The authors also write that four publishers they identified as receiving 53% of APCs out of India – MDPI, Springer-Nature, Elsevier, and Frontiers Media – don’t offer “country-level discounts/waivers to authors” from LMICs, and that this invalidates Koley and Agrawal’s concern that the APCs have been significantly overestimated. However, they don’t address the following possibilities:

  1. The identification of these four publishers itself was founded on APC estimates that have been called into question;
  2. “Country-level” concessions aren’t the only kind of concessions; and
  3. The decision to downplay the extent of overestimation doesn’t account for the publishers that received the other 47% of the APCs.

It’s not clear, in sum, what value the Current Science paper claims to have – and perhaps this is a question better directed at Current Science itself, which published the original paper, two rebuttals (the second by Jitendra Narayan Dash of NISER Bhubaneswar), and the authors’ unsatisfactory replies to them, and which, since we’re on the topic, doesn’t seem to have edited the first correspondence before publishing it.

What Gaganyaan tells us about chat AI, and vice versa

Talk of chat AI* is everywhere, as I’m sure you know. Everyone would like to know where these apps are headed and what their long-term effects will be – but it seems too soon to tell, at least in sectors that have banked on human creativity. That’s why the topic was a centrepiece of the first day of the inaugural conference of the Science Journalists’ Association of India (SJAI) last month, although little came of the discussion beyond the suggestion to use chat AI apps to automate tedious tasks like transcription. One view, in the limited context of education, is that chat AI apps will be like the electronic calculator. According to Andrew Cohen, a professor of physics at the Hong Kong University of Science and Technology, as quoted (and rephrased) by Amrit BLS in an article for The Wire Science:

When calculators first became available, he said, many were concerned that it would discourage students from performing arithmetic and mathematical functions. In the long run, calculators would negatively impact cognitive and problem-solving skills, it was believed. While this prediction has partially come true, Cohen says the benefits of calculators far outweigh the drawbacks. With menial calculations out of the way, students had the opportunity to engage with more complex mathematical concepts.

Deutsche Welle had an article making a similar point in January 2023:

Daniel Lametti, a Canadian psycholinguist at Acadia University in Nova Scotia, said ChatGPT would do for academic texts what the calculator did for mathematics. Calculators changed how mathematics were taught. Before calculators, often all that mattered was the end result: the solution. But, when calculators came, it became important to show how you had solved the problem—your method. Some experts have suggested that a similar thing could happen with academic essays, where they are no longer only evaluated on what they say but also on how students edit and improve a text generated by an AI—their method.

This appeal to the supposedly higher virtue of the method, over arithmetic ability and the solutions to which it could or couldn’t lead, is reminiscent of a similar issue that played out earlier this year – and will likely raise its head again – vis-à-vis India’s human spaceflight programme. This programme, called ‘Gaganyaan’, is expected to have the Indian Space Research Organisation (ISRO) launch an astronaut onboard the first India-made rocket no earlier than 2025.

The rocket will be a modified version of the LVM-3 (previously called the GSLV Mk III); the modifications, including human-rating the vehicle, and their tests are currently underway. In October 2023, ISRO chairman S. Somanath said in an interview to The Hindu that the crew module on the vehicle, which will host the astronauts during their flight, “is under development. It is being tested. There is no capability in India to manufacture it. We have to get it from outside. That work is currently going on. We wanted a lot of technology to come from outside, from Russia, Europe, and America. But many did not come. We only got some items. That is going to take time. So we have to develop systems such as environmental control and life support systems.”

Somanath’s statement seemed to surprise many people who had believed that the human-rated LVM-3 would be indigenous in toto. This is like the Ship of Theseus problem: if you replace all the old planks of a wooden ship with new ones, is it still the same ship? Or: if you replace many or all the indigenous components of a rocket with ones of foreign provenance, is it still an India-made launch vehicle? The particular case of the UAE is also illustrative: the country neither has its own launch vehicle nor the means to build and launch one with components sourced from other countries. It lacks the same means for satellites as well. Can the UAE still be said to have its own space programme because of its ‘Hope’ probe to orbit and study Mars?

Cohen’s argument about chat AI apps being like the electronic calculator helps cut through the confusion here: the method – i.e. the way in which ISRO pieces the vehicle together to fit its needs, within its budget, engineering capabilities, and launch parameters – matters more. To quote from an earlier post, “‘Gaganyaan’ is not a mission to improve India’s manufacturing capabilities. It is a mission to send Indians to space using an Indian launch vehicle. This refers to the recipe, rather than the ingredient.” For the same reason, the UAE can’t be said to have its own space programme either.

Focusing on the method, especially in a highly globalised world-economy, is a more sensible way to execute space programmes because the method – i.e. knowing how to execute it – is the most valuable commodity. Obtaining it requires years of investment in education, skilling, and utilisation. I suspect this is also why there’s more value in selling launch-vehicle services than in selling launch vehicles themselves. Similarly, the effects of the electronic calculator on science education speak to advantages that were virtually unknown-unknowns, and it seems reasonable to assume that chat AI will have similar consequences (with the caveat that the metaphor is imperfect: arithmetic isn’t comparable to language, and large-language models can do what calculators can and more).


* I remain wary of the label ‘AI’ applied to “chat AI apps” because their intelligence – if there is one beyond sophisticated word-counting – is aesthetic, not epistemological, yet it’s also becoming harder to maintain the distinction in casual conversation. This is after setting aside the question of whether the term ‘AI’ itself makes sense.

A survey of El Salvador’s bitcoin adoption

On December 22, a group of researchers from the US had a paper published in Science in which they reported the results of a survey of 1,800 households in El Salvador over its members’ adoption, or not, of bitcoin as currency.

In September 2021, the government of El Salvador’s president Nayib Bukele passed a ‘Bitcoin Law’ that made the cryptocurrency legal tender. El Salvador is a country of 6.3 million people, many of them poor and without access to bank accounts, and Bukele pushed bitcoin as a way around these issues: anyone with a phone and an internet connection could access a government-backed cryptocurrency wallet and trade the virtual coins. Yet even at the time, adoption was muted by concerns over bitcoin’s extreme volatility.

In the new study, the researchers’ survey spotlighted the following issues, particularly that the only demographic that seemed eager to adopt the use of bitcoins as currency was “young, educated men with bank accounts”:

Privacy and transparency concerns appear to be key barriers to adoption; unexpectedly, these are the two concerns that decentralized currencies such as crypto aim to address. … we document that this payment technology involves a large initial adoption cost, has benefits that significantly increase as more people use it …, and faces resistance from firms in terms of its adoption. … Moreover, our survey work using a representative sample sheds light on how it is the already wealthy and banked who use crypto, which stands in stark contrast with recurrent hypotheses claiming that the use of crypto may help the poor and unbanked the most.

Bitcoin isn’t private. Its supporters claimed it was because the bitcoin system could evade surveillance by banks, but law-enforcement authorities simply switched to the other checks and balances governments have in place to track, monitor, and – if required – apprehend bitcoin users, with help from network scientists and forensic accountants.

The last line is also reminiscent of several claims advanced by bitcoin supporters – rather than well-thought-out “hypotheses” advanced by scholars – in the late 2010s about the benefits the use of cryptocurrencies could bring to the Global South. The favour the cryptocurrency enjoyed among these people was almost sans exception rooted in its technological ‘merits’ (such as they are). There wasn’t, and still isn’t in many cases, any acknowledgment of the social institutions and rituals that influence public trust in a currency – and the story of El Salvador’s policy is a good example of that. The paper’s authors continue:

There is substantial heterogeneity across demographic groups in the likelihood of adopting and using bitcoin as a means of payment. The reasons that young, educated men are more likely to use bitcoin for transactions remain an open question. One hypothesis is that this group has higher financial literacy. We found that, even conditional on access to financial services and education, young men were still more likely to use bitcoin. However, financial literacy encompasses several other areas of knowledge that are not captured by these controls. An alternative hypothesis is that young, educated men have a higher propensity to adopt new technologies in general. The literature on payment methods has documented that young individuals have a greater propensity to adopt means of payment beyond cash, such as cards (87). Nevertheless, further research is necessary to causally identify the factors contributing to the observed heterogeneity across demographic groups.

India and El Salvador are very different countries, but by virtue of being part of the Global South they’re both good teachers. El Salvador is teaching us that something simply being easier to use won’t guarantee its adoption if people don’t also trust it. India has taught me that awareness of one’s own financial illiteracy is as important as financial literacy, among other things. I’ve met many people who won’t invest in something not because they don’t understand it – they might well – but because they don’t know enough about how they could be defrauded of their investment. And if they don’t know, they simply assume they will lose their money at some point. It’s the way things have been, especially among the erstwhile middle class, for many decades.

This is probably one of several barriers. Another is complementarity (e.g. “benefits that significantly increase as more people use it”), which implies the financial instrument must be convenient in a variety of sectors and settings, which implies it needs to be better than cash, which is difficult.