Using superconductors to measure electric current

Simply place two superconductors very close to each other, separated by a small gap, and you’ll have taken a big step towards an important piece of technology called a Josephson junction.

When the two superconductors are close to each other and exposed to electromagnetic radiation at microwave frequencies (0.3-30 GHz), a small voltage develops across the gap. As the waves of the radiation rise and fall, so too does the voltage. And it so happens that the voltage can be calculated exactly from the frequency of the microwave radiation.

A Josephson junction is also created when two superconductors are brought very close and a current is passed through one of them. Now their surfaces form a capacitor: a device that builds up and holds electric charge. When the charge on the surface of the current-bearing superconductor builds past a threshold, the voltage across the gap becomes large enough for a current to jump from this surface to the other. Then the voltage drops and the surface starts building up charge again. This process repeats: the voltage rises, falls, rises, falls.

This undulating rise and fall is called a Bloch oscillation. It’s apparent only when the Josephson junction is really small, of the order of micrometres. Since the Bloch oscillation is like a wave, it has a frequency and an amplitude. It so happens that the frequency is equal to the current flowing in the superconductor divided by 2e, where e is the smallest unit of electric charge (1.602 × 10⁻¹⁹ coulomb).
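To make the relationship concrete, here’s a minimal sketch in Python (the 1-nanoampere current is an illustrative figure of my choosing, not one from any study):

```python
# Bloch oscillation frequency: f = I / (2e)
E = 1.602176634e-19  # elementary charge, in coulombs

def bloch_frequency(current):
    """Return the Bloch oscillation frequency (Hz) for a current (A)."""
    return current / (2 * E)

# Illustrative: a current of 1 nanoampere corresponds to about 3.12 GHz.
print(f"{bloch_frequency(1e-9) / 1e9:.2f} GHz")
```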

The amazing thing about a Josephson junction is that the current jumping between the two surfaces is entirely due to quantum effects – and it’s visible to the naked eye, which is to say the junction shows quantum mechanics at work at the macroscopic scale. This is rare and extraordinary. Usually, observing quantum-mechanical effects requires sophisticated microscopes and measuring devices.

Josephson junctions are powerful detectors of magnetic fields because of the ways in which they’re sensitive to external influences. For example, devices called SQUIDs (short for ‘superconducting quantum interference devices’) use Josephson junctions to detect magnetic fields a trillion times weaker than the field produced by a refrigerator magnet.

They do this by passing an electric current through a superconductor that forks into two paths, with a Josephson junction on each path. If there’s a magnetic field nearby, even a really small one, it will distort the current passing through each path to a different degree. The resulting mismatch will be sufficient to trigger a voltage rise in one of the junctions, and a current will jump. Such SQUIDs are used, among other things, to detect dark matter.

Shapiro steps

The voltage and current in a Josephson junction share a peculiar relationship. As the current in one of the superconductors is increased smoothly, the voltage doesn’t increase smoothly but in small jumps. On a graph (see below), the rise in the voltage looks like a staircase. The steps here are called Shapiro steps. Each step corresponds to a moment when the current in the superconductor is a whole-number multiple of 2e times the frequency of the Bloch oscillation.

I’ve misplaced the source of this graph in my notes. If you know it, please share; if I find it, I will update the article asap.

In a new study, published in Physical Review Letters on January 12, physicists from Germany reported finding a way to determine the amount of electric current passing in the superconductor by studying the Bloch oscillation. This is an important feat because it could close the gap in the metrology triangle.

The metrology triangle

Josephson junctions are also useful because they provide a precise relationship between frequency and voltage. If a junction is irradiated with microwaves of a specific frequency, it will develop a specific voltage. The US National Institute of Standards and Technology (NIST) uses a circuit of Josephson junctions to define the standard volt, a.k.a. the Josephson voltage standard.

We say 1 V is the potential difference between two points if 1 ampere (A) of current dissipates 1 W of power when moving between those points. How do we make sure what we say is also how things work in reality? Enter the Josephson voltage standard.

In fact, decades of advancements in science and technology have led to a peculiar outcome: the tools scientists have today to measure the frequency of waves are just phenomenal – so much so that scientists have been able to measure other properties of matter more accurately by linking them to some frequency and measuring that frequency instead.

This is true of the Josephson voltage standard. The NIST’s setup consists of 20,208 Josephson junctions. Each junction has two small superconductors separated by a few nanometres and is irradiated with microwave radiation. The resulting voltage is equal to the microwave frequency multiplied by a proportionality constant. (E.g. when the frequency is around 70 GHz, the gap between each pair of Shapiro steps is around 150 microvolts.) This way, the setup can track the voltage with a precision of up to 1 nanovolt.
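Those two figures hang together. The voltage gap between adjacent steps is the Planck constant times the microwave frequency divided by 2e, and a quick sketch checks the arithmetic:

```python
# Shapiro step spacing: V = h * f / (2e)
H = 6.62607015e-34   # Planck constant, in joule-seconds
E = 1.602176634e-19  # elementary charge, in coulombs

def step_spacing(frequency):
    """Return the voltage gap (V) between adjacent Shapiro steps."""
    return H * frequency / (2 * E)

# At 70 GHz the steps are ~145 microvolts apart - i.e. the
# 'around 150 microvolts' gap mentioned above.
print(f"{step_spacing(70e9) * 1e6:.0f} microvolts")
```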

The proportionality constant is the Planck constant divided by two times the basic electric charge e – so the voltage is the product of the microwave frequency and h/2e. The Planck constant and e are fundamental constants of our universe: their values are the same for macroscopic objects and subatomic particles alike.

Voltage, resistance, and current together make up Ohm’s law – the statement that voltage is equal to current multiplied by resistance (V = IR). Scientists would like to link all three to fundamental constants. They know Ohm’s law works in the classical regime, in the macroscopic world of wires that we can see and hold; they don’t know for sure whether it holds in the quantum regime of individual atoms and subatomic particles as well, but they’d like to find out.

Measuring things in the quantum world is much more difficult than in the classical world, and it will help greatly if scientists can track voltage, resistance, and current by simply calculating them from some fundamental constants or by tracking some frequencies.

Josephson junctions make this possible for voltage.

For resistance, there’s the quantum Hall effect. Say there’s a two-dimensional sheet of electrons held at an ultracold temperature. When a magnetic field is applied perpendicular to this sheet, an electrical resistance develops across the breadth of the sheet. This resistance can take only certain fixed values, which depend on a combination of fundamental constants. The formation of this quantised resistance is the quantum Hall effect.
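The combination here is the Planck constant divided by the square of the elementary charge, further divided by a whole number n. A short sketch of the arithmetic:

```python
# Quantised Hall resistance: R = h / (n * e^2), for n = 1, 2, 3, ...
H = 6.62607015e-34   # Planck constant, in joule-seconds
E = 1.602176634e-19  # elementary charge, in coulombs

def hall_resistance(n):
    """Return the quantised Hall resistance (ohm) at the n-th plateau."""
    return H / (n * E ** 2)

# n = 1 gives the von Klitzing constant, about 25,812.8 ohm.
for n in (1, 2, 3):
    print(n, f"{hall_resistance(n):,.1f} ohm")
```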

The new study makes the case that the Josephson junction setup it describes could pave the way for scientists to measure electric currents better using the frequency of Bloch oscillations.

Scientists have often referred to this pending task as a gap in the ‘metrology triangle’. Metrology is the science of the way we measure things. And Ohm’s law links voltage, resistance, and current in a triangular relationship.

A JJ + SQUID setup

In their experiment, the physicists coupled a Bloch oscillation in a Josephson junction to a SQUID in such a way that the SQUID would also have Bloch oscillations of the same frequency.

The coupling happens via a capacitor, as shown in the circuit schematic below. The whole setup is just a few micrometres wide. When the current entering the Josephson junction crossed the threshold, electrons jumped across and produced a current in one direction. In the SQUID, this caused electrons to jump and induce a current in the opposite direction (a.k.a. a mirror current).

I1 and I2 are biasing currents, which are direct currents supplied to make the circuit work as intended. The parallel lines that form the ‘bridge’ on the left denote a capacitor. The large ‘X’ marks denote the Josephson junction and the SQUID. The blue blocks are resistors. The ellipses containing two dots each denote pairs of electrons that ‘jump’. Source: Phys. Rev. Lett. 132, 027001

This setup requires the use of resistors connected to the circuit, shown as blue blocks in the schematic. The resistance they produce suppresses certain quantum effects that get in the way of the circuit’s normal operation. However, resistors also produce heat, which could interfere with the Josephson junction’s normal operation as well.

The team had to balance these two requirements with a careful choice of resistor material, rendering the circuit operational only in a narrow window of conditions. For good measure, the team also cooled the entire circuit to 0.1 K to further suppress noise.

In their paper, the team reported that it could observe Bloch oscillations and the first Shapiro step in its setup, indicating that the junction operated as intended. The team also found it could accurately simulate its experimental results using computer models – meaning the theories and assumptions the team was using to explain what could be going on inside the circuit were on the right track.

Recall that the frequency of a Bloch oscillation is the current flowing in the superconductor divided by 2e. The team wrote in its paper that, by tracking these oscillations with the SQUID, it should soon be able to accurately calculate the current – once it has found ways to further reduce noise in the setup.
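Run in the metrological direction, the same relation turns a measured frequency into a current: multiply by 2e. A sketch, with an illustrative (not reported) frequency:

```python
E = 1.602176634e-19  # elementary charge, in coulombs

def current_from_bloch(frequency):
    """Return the current (A) implied by a Bloch oscillation frequency (Hz)."""
    return 2 * E * frequency

# Illustrative: a 2-GHz Bloch oscillation implies a current of ~0.64 nanoamperes.
print(f"{current_from_bloch(2e9) * 1e9:.2f} nA")
```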

For now, they have a working proof of concept.

A survey of El Salvador’s bitcoin adoption

On December 22, a group of researchers from the US published a paper in Science reporting the results of a survey of 1,800 households in El Salvador about their members’ adoption, or not, of bitcoin as currency.

In September 2021, the government of El Salvador’s president Nayib Bukele passed a ‘Bitcoin Law’ through which it made the cryptocurrency legal tender. El Salvador is a country of 6.3 million people, many of them poor and without access to bank accounts, and Bukele pushed bitcoin as a way to circumvent these issues by allowing anyone with a phone and an internet connection to access a central-bank-backed cryptocurrency wallet and trade the virtual coins. Yet even at the time, adoption was muted by concerns over bitcoin’s extreme volatility.

In the new study, the researchers’ survey spotlighted the following issues, particularly that the only demographic that seemed eager to adopt the use of bitcoins as currency was “young, educated men with bank accounts”:

Privacy and transparency concerns appear to be key barriers to adoption; unexpectedly, these are the two concerns that decentralized currencies such as crypto aim to address. … we document that this payment technology involves a large initial adoption cost, has benefits that significantly increase as more people use it …, and faces resistance from firms in terms of its adoption. … Moreover, our survey work using a representative sample sheds light on how it is the already wealthy and banked who use crypto, which stands in stark contrast with recurrent hypotheses claiming that the use of crypto may help the poor and unbanked the most.

Bitcoin isn’t private. Its supporters claimed it was because the bitcoin system could evade surveillance by banks, but law enforcement authorities simply switched to other checks-and-balances governments have in place to track, monitor, and – if required – apprehend bitcoin users, with help from network scientists and forensic accountants.

The last line is also reminiscent of several claims advanced by bitcoin supporters – rather than well-thought-out “hypotheses” advanced by scholars – in the late 2010s about the benefits the use of cryptocurrencies could bring to the Global South. The favour the cryptocurrency enjoyed among these people was almost sans exception rooted in its technological ‘merits’ (such as they are). There wasn’t, and still isn’t in many cases, any acknowledgment of the social institutions and rituals that influence public trust in a currency – and the story of El Salvador’s policy is a good example of that. The paper’s authors continue:

There is substantial heterogeneity across demographic groups in the likelihood of adopting and using bitcoin as a means of payment. The reasons that young, educated men are more likely to use bitcoin for transactions remain an open question. One hypothesis is that this group has higher financial literacy. We found that, even conditional on access to financial services and education, young men were still more likely to use bitcoin. However, financial literacy encompasses several other areas of knowledge that are not captured by these controls. An alternative hypothesis is that young, educated men have a higher propensity to adopt new technologies in general. The literature on payment methods has documented that young individuals have a greater propensity to adopt means of payment beyond cash, such as cards (87). Nevertheless, further research is necessary to causally identify the factors contributing to the observed heterogeneity across demographic groups.

India and El Salvador are very different except, by virtue of being part of the Global South, they’re both good teachers. El Salvador is teaching us that something simply being easier to use won’t guarantee its adoption if people also don’t trust it. India has taught me that awareness of one’s own financial illiteracy is as important as financial literacy, among other things. I’ve met many people who won’t invest in something not because they understand it – they might – but because they don’t know enough about how they can be defrauded of their investment. And if they don’t, they simply assume they will lose their money at some point. It’s the way things have been, especially among the erstwhile middle class, for many decades.

This is probably one of several barriers. Another is complementarity (e.g. “benefits that significantly increase as more people use it”), which implies the financial instrument must be convenient in a variety of sectors and settings, which implies it needs to be better than cash, which is difficult.

Unexpected: Magnetic regions in metal blow past speed limit

You’re familiar with magnetism, but do you know what it looks like at the smallest scale? Take a block of iron, for example. It’s ferromagnetic, which means if you place it near a permanent magnet – like a refrigerator magnet – the block will also become magnetic to a large extent, larger than materials that aren’t ferromagnetic.

If you zoom in to the iron atoms, you’ll see a difference between areas that are magnetised and areas that aren’t. Every electron in an atom has four quantum numbers, sort of like its Aadhaar or social security ID. No two electrons in the same system can have the same ID, i.e. one, some or all of these numbers differ from one electron to the next. One of these numbers is the spin quantum number, and it can have one of two values, or states, at any given time. Physicists refer to these states as ‘up’ and ‘down’. In the magnetised portions of the iron block, the electrons in the iron atoms will all be pointing either up or down. This is a defining feature of magnetism.

Scientists have used it to make the hard-disk drives used in computers. Each drive stores information by encoding it in electrons’ spins using a magnetic field, where, say, ‘1’ is up and ‘0’ is down, so a series of 1s and 0s becomes a series of ups and downs.

In the iron block, the parts that are magnetised are called domains. They demarcate regions of uniform electron spin in three dimensions in the block’s bulk. For a long time, scientists believed that the ‘walls’ of a domain – i.e. the imaginary surface between areas of uniform spin and areas of non-uniform spin – could move at up to around 0.5 km/s. If they moved faster, they could destabilise and collapse, allowing a kind of magnetic chaos to spread within the material. Scientists arrived at this speed limit through theoretical calculations.

The limit matters because it determines how fast the iron block’s magnetism can be manipulated – to store or modify data, for example – without losing that data. It also matters for any other application that takes advantage of the properties of ferromagnetic materials.

In 2020, a group of researchers from the Czech Republic, Germany, and Sweden found that if you stacked up layers of ferromagnets, the domain walls could move much faster – as much as 14 km/s – without collapsing. Things can move fast in the subatomic realm, yet 14 km/s was still astonishing for ferromagnetic materials. So scientists set about testing it.

A group from Italy, Sweden, and the US reported in a paper published in Physical Review Letters on December 19 (preprint here) that they were able to detect domain walls moving in a composite material at a stunning 66 km/s – greater than the predicted speed. Importantly, however, existing theories that explain a material’s magnetism at the subatomic scale don’t predict such a high speed, so now physicists know their theories are missing something.

In their study, the group erected a tiny stack of the following materials, in this order: tantalum, copper, a cobalt-iron compound, nickel, the cobalt-iron compound again, copper, and tantalum. Advanced microscopy techniques revealed that the ferromagnetic nickel layer (just a nanometre thick) had developed domains of two shapes: some were like stripes and some formed a labyrinth with curved walls.

The researchers then tested the domain walls using the well-known pump-probe technique: a blast of energy first energises a system, then something probes it to understand how it’s changed. The pump here was an extremely short pulse of infrared radiation and the probe was a similarly short pulse of ultraviolet (UV) radiation.

The key is the delay between the pump and probe pulses: the smaller the delay, the greater the detail that comes to light. (Three people won the physics Nobel Prize this year for finding ways to make this delay as small as possible.) In the study it was 50 femtoseconds – 50 millionths of a billionth of a second.

The UV pulse was diffracted by the electrons in nickel. A detector picked up the diffraction patterns and the scientists ‘read’ them together with computer simulations of the domains to understand how they changed.

How did the domains change? The striped walls were practically unmoved but the curved walls of the labyrinthine pattern did move, by about 17-23 nanometres. The group made multiple measurements. When they finally calculated an average speed (which is equal to distance divided by time), they found it to be 66 km/s, give or take 20 km/s.
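For a sense of scale, dividing the reported displacement by the reported speed gives the timescale over which the walls must have moved – around 300 femtoseconds. (This back-of-the-envelope figure is inferred from the two reported numbers, not quoted from the paper.)

```python
# Average speed = distance / time, so time = distance / speed.
displacement = 20e-9  # metres; mid-range of the reported 17-23 nm
speed = 66e3          # metres per second; the reported 66 km/s

implied_time = displacement / speed
print(f"{implied_time * 1e15:.0f} femtoseconds")  # ~300 fs
```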

An image depicting domains (black) in the nickel layer. The coloured lines show their final positions. Source: Phys. Rev. Lett. 131, 256702

“The observation of extreme wall speed under far-from-equilibrium conditions is the … most significant result of this study,” they wrote in their paper. This is true: even though the researchers found that the domain-wall speed limit in a multilayer ferromagnetic material is much higher than 0.5 km/s – as the 2020 group predicted – they also found it to be a lot higher than the expected 14 km/s. Of course, it’s also stunning because the curved domain walls moved at more than 10 times the speed of sound in the material – and the more curved a portion was, the faster it seemed to move.

The researchers concluded that “additional mechanisms are required to fully understand these effects” – as well as that they could be “important” to explain “ultrafast phenomena in other systems such as emerging quantum materials”.

This is my second recent post about scientists finding something they didn’t expect to, but in settings more innocuous than in the vast universe or at particle smashers. Read the first one, about the way paint dries, here.

You can do worse than watching paint dry – ask physics

I live in Chennai, a city whose multifaceted identity includes its unrelenting humidity. Its summers are seldom hotter than Delhi’s but they are more unbearable because they leave people sweaty, dehydrated, and irritated. Delhi’s heat doesn’t have the same effect because when people sweat there, the droplets evaporate into the air, whose low relative humidity allows it to ‘accommodate’ the moisture. But in Chennai the air is almost always humid, more so during the summer, and the sweat on people’s skin doesn’t evaporate. Yet their bodies continue to sweat because it’s one of the few responses they have to the heat.

Paint, fortunately, has a different story to tell. Fresh paint on a wall doesn’t dry faster or slower depending on how humid the air is. This is because paint is made of water plus some polymers whose molecules are much larger than those of water. At first, water does begin to escape the paint and evaporate from the surface. This pulls the polymer molecules to the surface in a process called advection. On the surface, the polymer molecules form a dense layer that prevents the water below from interacting directly with the air, or its humidity. So the rate of evaporation slows until it reaches a constant low value. This is why, even in dry weather, paint takes its time to dry.

Scientists have verified that this is the case in a new study, in which they also reported that their findings can be used to understand the behaviour of little respiratory droplets in which viruses travel through the air. (Some studies – like this and this – have suggested that a virus’s viability may depend on the relative humidity and how quickly the droplet dries, among other factors. Since the relative humidity varies by season, a link could explain why some viral outbreaks are more seasonal.)

Generally, the human skin – the largest organ of the human body – is responsible for making sure the body doesn’t lose too much water through evaporation. Scientists think it can adjust how much water it loses by modifying the mix of lipids (fatty substances) in its outermost layer. If it does, that would be an example of an active process – a dynamic response to environmental and biological conditions. Paint drying, on the other hand, is a non-active process: the rate of evaporation is limited by the polymer molecules at the surface and their properties.

In 2017, a chemical engineer at the University of Bordeaux named Jean-Baptiste Salmon and his colleagues predicted that an active process may not be needed at all to explain humidity-independent evaporation because it arises naturally in polymer solutions like paint. The new study tested the prediction of Salmon et al. using a non-active polymer solution, i.e. one that’s incapable of mounting an active response to changes in humidity.

They filled a plastic container with a polyvinyl alcohol solution, then drilled a small hole near the bottom and fitted a glass tube there with an open end. The liquid could flow through the tube and evaporate from that end. To prevent the liquid from evaporating from its surface in the container, they coated it with an oily substance called 1-octadecene. They placed the container on a sensitive weighing scale and the whole apparatus inside a sealed box with adjustable humidity. The researchers varied the humidity from 25% to 90% and each time studied the evaporation rate for more than 16 hours.

They found that Salmon et al. were right: the evaporation rate was higher for around three hours before dropping to a lower value, because polymer molecules had accumulated at the layer where the liquid met the air. But even in these first three hours, the rate of evaporation didn’t drop when the humidity was increased. In other words, humidity-independent evaporation begins even earlier than Salmon et al. predicted.

The researchers also reported another divergence: the evaporation rate wasn’t affected by a relative humidity of up to 80% – but beyond that, the rate fell if the humidity increased further. So what Salmon et al. said was at play but it wasn’t the full picture; some other forces were also affecting the evaporation.

The researchers ended their paper with an idea. They took a closer look, with a microscope, at the open end of the tube, where the polyvinyl alcohol evaporated. They found that the polymer layer was overlaid with a stiffer semisolid, or gel-like, layer. Such layers are known to form under compressive stress, and to further block evaporation. The researchers found that their equations to predict the evaporation rate roughly matched the observed value when modified to account for this stress. They also found that a sufficiently thick gel layer could form within one second – a short time span considering the many hours over which the rate of evaporation evolves.

“These discrepancies motivate the search for extra physics beyond Salmon et al., which may again relate to a gelled polymer skin at the air-solution interface,” they concluded in their paper, published in the journal Physical Review Letters on December 15.

A new path to explaining the absence of antimatter

Our universe is believed to have been created with equal quantities of matter and antimatter, only for antimatter to all but completely disappear over time. We know that matter and antimatter can annihilate each other but we don’t know how matter came to gain the upper hand and survive to this day, creating stars, planets, and – of course – us.

In the theories that physicists have to explain the universe, the matter-antimatter asymmetry is the result of two natural symmetries being violated: the charge and parity symmetries. The charge (C) symmetry says the universe would work the same way if all the positive charges were replaced with negative charges and vice versa. The parity (P) symmetry refers to the handedness of a particle: based on which way it’s spinning, for example, an electron is said to be right- or left-handed. All the fundamental forces that act between particles preserve their handedness – except the weak nuclear force.

According to most particle physicists, matter won the war against antimatter through some process that violated both C and P symmetries. Finding enough CP symmetry violation to explain this is one of modern physics’s most important unsolved problems.

In 1964, physicists discovered that the weak nuclear force is capable of violating C and P symmetries together when it acts on a particle called a K meson. In the 2000s, a different group of physicists found more evidence of CP symmetry violation in particles called B mesons. These discoveries proved that CP symmetry violation is actually possible, but they didn’t bring us much closer to understanding why matter dominated antimatter. This is because of particles called quarks.

Quarks are the smallest known constituents of the universe’s matter particles. They combine to form different types of bigger particles. For example, all mesons have two quarks each. The matter we’re familiar with is instead made of atoms, which are in turn made of protons, neutrons, and electrons. Protons and neutrons have three quarks each – they’re baryons. Electrons are not made of quarks; they belong to a group called leptons.

To explain the matter-antimatter asymmetry in the universe, physicists need to find evidence of CP symmetry violation in baryons, and this hasn’t happened so far.

On December 7, a group of researchers from China published a paper in the journal Physical Review D in which they proposed one place where physicists could look to find the answer: the decay of a particle called a lambda-b baryon to a D-meson and a neutron.

Quarks come in six types, or flavours: up, down, charm, strange, top, and bottom. A lambda-b baryon is the name for a bundle containing one up quark, one down quark, and one bottom quark. A D-meson is any meson that contains a charm quark. In the process the researchers have proposed, the D-meson exists in a superposition of two states: a charm quark plus an up anti-quark (the D0 meson) and a charm anti-quark plus an up quark (the D0 anti-meson).

The researchers have proposed that the probability of a lambda-b baryon decaying to a D0 meson versus a D0 anti-meson could be significantly different as a result of CP symmetry violation.

The proposal is notable because the researchers have tailored their prediction to an existing experiment that, once it’s upgraded in future, will collect data that can be used to look for just such a discrepancy. This experiment is called the LHCb – ‘LHC’ for Large Hadron Collider and ‘b’ for beauty.

The LHCb is a detector on the LHC, the famous particle-smasher in Europe that slams energetic beams of protons together to pry them open. Detectors then study the particles in the detritus and their properties; LHCb in particular tracks the signatures of different types of quarks. Physicists at CERN are planning to upgrade LHCb to a second avatar that’s expected to begin operating in the mid-2030s. Among other features, it will have a 7.5-times higher peak luminosity – a measure of the rate at which particle collisions occur for the detector to record.

If the lambda-b baryon’s decay discrepancy exists in the new LHCb’s observed data, the decay proposed in the new study will be one way to explain it, and pave the way for proof of CP symmetry violation in baryons.

Waters and bridges between science journalism and scicomm

On November 24-25, the Science Journalists’ Association of India (SJAI) conducted its inaugural conference at the National Institute of Immunology (NII), New Delhi. I attended it as a delegate.

A persistent internal monologue of mine at the event concerned the lack of an explicit distinction between science communicators and science journalists. One of my peers there said (among other things) that we need to start somewhere, and with that I readily agree. Subhra Priyadarshini, a core member of SJAI and the de facto leader of the team that put the conference together, also said in a different context that SJAI plans to “upskill and upscale science journalism in India”, alluding to the group’s plans to facilitate a gateway into science journalism. But a distinction may be worthwhile because the two groups seem to have different needs, especially in today’s charged political climate.

Think of political or business journalism, where journalists critique politics or business. They don’t generally consider improving political or business literacy, or engagement with the processes of these enterprises, to be part of their jobs. Science journalists, on the other hand, are regularly expected – including by many editors, scientists, and political leaders – to improve scientific literacy or to push back on pseudoscience. (For what it’s worth, pseudoscience isn’t a simple topic, especially against the backdrop of its social origins as well as questions about what counts as knowledge, how it’s created, who creates it, etc.)

When science institutions believe that X is science journalism when it’s in fact Y, then whenever they encounter Y, they’re taken aback, if not just offended. We have seen this with many research institutes whose leaders are friendly with the media when the latter is reporting on the former’s work, but become hostile when journalists start to ask questions about any wrongdoing or controversy. (One talking point supported by people inside NCBS, when the Arati Ramesh incident played out in 2021, was whether the publics are entitled to details of the inner workings of a publicly funded institute.) Scientists should know what science journalism really is, lest they believe it’s a new kind of PR, and change their expectations about the terms on which journalists engage with them.

This recalibration is important now when journalists are expected to bend over or not report on some topics, ideas or people. Are communicators expected to bend over also? I’m not so sure. Journalism is communication plus the added responsibility of abiding by the public interest (which transforms the way the communication happens as well), and the latter imposes demands that often give science journalism its thorn-in-the-side quality.

Understanding what journalism really is could improve relationships between scientists and science journalists, let scientists know why a (critical) journalism of science is as important as the communication of science, and clarify the ways in which both institutions – of science and of journalism – are publicly answerable.

[After a few hours] So does that mean the difference between science journalism and science communication is what scientists understand them to be?

I think accounting for the peculiarities of both space (India) and time (today) could produce a fairer picture of the places and roles of science journalism and communication. Specifically, it matters that science journalism in India is coming of age at this particular moment in history, because it will evolve to respond to the forces that matter today. Most of all, unlike any time before, today is distinguished by trivial access to the internet. This gives explainers and communicative writing more weight than before: they can be used against misinformation, and to temper people’s readiness to consume information on the internet with the (editorial and scientific) expertise and wisdom of communicators and journalists.

The distinction of today also births the possibility of defining Indian science journalism separately from Indian science communication in terms of their labels, expectations, purposes, and problems.

Labels – ‘Journalism’ and ‘communication’ are fundamentally labels used to describe specific kinds of activities. They probably originated in different contexts, to isolate and identify tasks that, in their respective settings, were unlike other tasks, but that wouldn’t have to mean that once they were transplanted to the science communication/journalism enterprise, they couldn’t have a significant – maybe even self-effacing – overlap. So it may be worthwhile to explore the history of these terms, in India, as it pertains to science journalists.

Expectations – The line between journalism and communication is slender. Many products of science-journalism work are texts that are concerned, to a not-insignificant extent, with communicating science first, with explaining a relevant concept, idea, etc. in its proper technical, historical, social, etc. context. Journalism peels away from communication with the added requirement of being in the public interest, but good communication can be in the public interest as well. (Economics seemed to pose a counter-argument, but one with a self-undermining component: did science communication in India have such a successful ‘scene’ before science journalism in India became a thing? I have my doubts, although I’m not exactly well-informed – but a bigger issue is what editors and product managers in newsrooms considered ‘science journalism’ to be in the first place. If they conflated it with communication, this counter-example is moot.)

Purposes – What is political journalism a journalism of? (To my mind, the answer needs to be some activity that, when performed, would sufficiently qualify the performer as a practitioner of political journalism.) Is it a journalism of political processes, political thought, political outcomes or political leaders? Considering politics is a social enterprise, I think it’s a journalism of our political leaders: stories about these people are the stories about everything else that constitutes politics. Similarly, science journalism can be a journalism of the people of science – and it’s easy to see that, this way, it opens doors to everything from clever science to issues of science and society.

Problems – Journalism and communication may also be distinguished by their specific problems. For journalists, for example, quotes from scientists are more crucial than they are for communicators. Indian science journalism is thus complicated differently by the fact that many scientists don’t wish to speak to members of the press – for fear of being misquoted, of antagonising their bosses (who may have political preferences of their own), of lacking incentives to do so (e.g. “my chances of being promoted don’t increase if I speak to reporters”), and/or of falling afoul of the law (which prohibits scientists at government institutes from criticising government policies in the press). By extension, an association like SJAI that pools journalists (and communicators) together should also be expected to help meet journalists’ specific needs.

To its credit, SJAI 2023 did so to the extent that it could, and I think it will continue to; the point is that any other (science-)journalistic body in the country should do so as well, and ensure it doesn’t lose sight of the issues specific to each community.

Cognitive ability and voting ‘leave’ on Brexit

In a new study published in the journal PLoS ONE on November 22, a pair of researchers from the University of Bath in the UK have reported that “higher cognitive ability” is “linked to higher chance of having voted against Brexit” in the June 2016 referendum. The authors have reported this based on ‘Understanding Society’, a “nationally representative annual longitudinal survey of approximately 40,000 households, funded by the UK Economic and Social Research Council”, conducted in 12 waves between 2009 and 2020. The researchers assessed people’s cognitive ability as a combination of five tests:

Word recall: “… participants were read a series of 10 words and were then asked to recall (immediately afterwards and then again later in the interview) as many words as possible, in any order. The scores from the immediate and delayed word recall task are then summed together”

Verbal fluency: “… participants were given one minute to name as many animals as possible. The final score on this item is based upon the number of unique correct responses”

Subtraction test: “… participants were asked to give the correct answer to a series of subtraction questions. There is a sequence of five subtractions, which started with the interviewer asking the respondent to subtract 7 from 100. The respondent is then asked to subtract 7 again, and so on. The number of correct responses out of a maximum of five was recorded”

Fluid reasoning: “… participants were asked to write down a number sequence—as read by the interviewer—which consists of several numbers with a blank number in the series. The respondent is asked which number goes in the blank. Participants were given two sets of three number sequences, where performance in the first set dictated the difficulty of the second set. The final score is based on the correct responses from the two sets of questions—whilst accounting for the difficulty level of the second set of problems”

Numerical reasoning: “Participants were asked up to five questions that were graded in complexity. The type of questions asked included: “In a sale, a shop is selling all items at half price. Before the sale, a sofa costs £300. How much will it cost in the sale?” and “Let’s say you have £200 in a savings account. The account earns ten percent interest each year. How much would you have in the account at the end of two years?”. Based on performance on the first three items, participants are then asked either two additional (more difficult) questions or one additional (simpler) question”
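(An aside on that last sample question: it’s a standard financial-literacy item because the tempting answer, £240, assumes simple interest, whereas the intended answer compounds the second year’s interest on the first’s. A quick sketch:)

```python
principal, rate = 200, 0.10

# Simple interest: the same £20 of interest each year.
simple = principal + 2 * (rate * principal)
print(f"£{simple:.0f}")  # £240 - the tempting but naive answer

# Compound interest: year 2's interest is earned on year 1's total.
compound = principal * (1 + rate) ** 2
print(f"£{compound:.0f}")  # £242 - the intended answer
```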

On the face of it, the study’s principal finding – rooting the way people decided on ‘Brexit’ in cognitive ability – seems objectionable because it’s a small step away from casting an otherwise legitimate political outcome, i.e. the UK leaving the European Union, as the product of some kind of mental deficiency. Then again, in their paper, the authors reason that this correlation is mediated by individuals’ susceptibility to misinformation: people with “higher” cognitive ability are better able to cut through mis- or dis-information. This seems plausible, and in fact the objectionability is also mitigated by awareness of the Indian experience, where lynch mobs and troll armies have been set in motion by fake news, with deadly results.

This said, we must still guard against two fallacies. First: correlation isn’t causation. That higher cognitive ability could be correlated with voting ‘remain’ doesn’t mean higher cognitive ability caused people to vote ‘remain’. Second, the fallacy of the inverse: while there is reportedly a correlation between people’s cognitive abilities and their decision in the ‘Brexit’ referendum, it doesn’t mean that pro-Brexit votes couldn’t have been cast for reasons other than cognitive deficiencies. One Q&A from an interview that PLoS conducted with one of the authors, Chris Dawson, and published together with the paper makes a similar note:

Some people might assume that if Remain voters had on average higher cognitive abilities, this implies that voting Remain was the more intelligent decision. Can you explain why your research does not show this, and what misinformation has to do with it?

It is important to understand that our findings are based on average differences: there exists a huge amount of overlap between the distributions of Remain and Leave cognitive abilities. We calculated that approximately 36% of Leave voters had higher cognitive ability than the average (mean) Remain voter. So, for any Remain voters who were planning on boasting and engaging in one-upmanship, our results say very little about what cognitive ability differences may or may not exist between two random Leave and Remain voters. But what our results do imply is that misinformation about the referendum could have complicated decision making, especially for people with low cognitive ability.

The five tests that the researchers used to estimate cognitive ability (at least in a relative sense) are also potentially problematic. I only have an anecdotal counter-example, but I suspect many readers will be able to relate to it: I have an uncle who is well-educated (up to the graduate level) and has had a well-paying job for many years now, and he is a staunch bhakt – i.e. an uncritical supporter of India’s BJP government and its various policies, including (but not limited to) the CAA, the farm laws, anti-minority discrimination, etc. He routinely buys into disinformation and sometimes spreads some of his own, but I don’t see him doing badly on any of the five tests. Instead, his actions and views are better explained by his political ideology, which is equal parts conservative and cynical. There are millions of such uncles in India, and the same thing could be true of the people who voted ‘leave’ in the 2016 referendum: that it wasn’t their cognitive abilities so much as their ideological positions, and those of the people to whom they paid attention, that influenced the vote.

(The reported correlation itself can be explained away by the fact that most of those who voted ‘leave’ were older people, but the study does correct for age-related cognitive decline.)

The two researchers also have a big paragraph at the end where they delineate what they perceive to be the major issues with their work:

Most noticeably, the positive correlation between cognitive ability and voting to Remain in the referendum could, as always, be explained by omitted variable bias. Although we control for political beliefs and alliances, personality traits, a barrage of other socioeconomic factors and, in our preferred model, household fixed-effects, the variation of cognitive ability within households could be correlated with other unobservable traits, attitudes and behaviours. The example which comes to mind is an individual’s trust in politicians and government. Then Prime Minister of the UK David Cameron publicly declared his support for remaining in the EU, as did the Chancellor of the Exchequer. The UK Treasury published an analysis to warn voters that the UK would be permanently poorer if it left the EU [63]. In addition to this were the 10 Nobel-prize winning economists making the case in the days leading up to the referendum. Whilst cognitive ability has been linked with thinking like an economist [64,65], Carl [51] also finds evidence of a moderate positive correlation between trust in experts and IQ. Moreover, work on political attitudes and the referendum has shown that a lack of trust in politicians and the government is associated with a vote to Leave the EU [56]. Therefore, the positive relationship between cognitive ability and voting Remain could be attributable to those higher in cognitive function placing a greater weight on the opinion of experts. A final note is that our dependent variable is self-reported, which may induce bias, for instance, social desirability bias. Against that, the majority (75.6%) of responses were recorded through a self-completion online survey and we do control for interview mode, which produces no statistically significant effects.

It’s important to consider all these alternative possibilities to the fullest before we assume, say, that improving cognitive ability will also lead to some political outcomes over others – or in fact before we entertain ideas about whether people whose cognitive abilities have declined, according to some tests and to below a particular threshold, should be precluded from participating in referendums. If nothing else, problems of discretisation quickly arise: where do we draw the line? For example, if people with Alzheimer’s disease can be kept from voting, should those who are mathematically illiterate – and would thus probably fail the fluid reasoning and numerical reasoning tests – be kept from voting too? Similarly, expanding the remit from referendums to elections (which isn’t without problems), which test should potential voters be expected to pass before voting in different polls – say, from the panchayat to the Lok Sabha elections?

Consider also the debates at the time Haryana passed the Haryana Panchayati Raj (Amendment) Act in 2015, which stipulated among other things that to contest in panchayat polls, candidates had to have completed class 10 or its equivalent (plus adjustments if the candidates are from an SC community, women, etc.). Obviously those contesting the polls would be well past their youth and unlikely to return to school, so the Act effectively permanently disqualified them from contesting. As such, while the answers to the questions above may be clearer in less unequal societies like those of the UK, they are not so in India, where cognitively well-equipped people have been criminally selfish and public-spiritedness has been more strongly correlated with good-faith politics than education or literacy.

At the same time, the study and its findings reiterate the significant role mis/disinformation has come to play in influencing the way people vote – which makes individuals’ cognitive abilities, and all the factors that influence them, another avenue through which to shape, for better or for worse, our opportunities for healthy governance.

On Somanath withdrawing his autobiography

Excerpt from The Hindu, November 4, 2023:

S. Somanath, Chairman, Indian Space Research Organisation (ISRO), told The Hindu that he’s withdrawing the publication of his memoir, Nilavu Kudicha Simhangal, penned in Malayalam. The decision followed a report in the Malayala Manorama on Saturday that quoted excerpts from the book suggesting K. Sivan, former ISRO chairman and Mr. Somanath’s immediate predecessor, may have hindered key promotions that Mr. Somanath thought were due.

“There has been some misinterpretation. At no point have I said that Dr. Sivan tried to prevent me from becoming the chairman. All I said was that being made a member of the Space Commission is generally seen as a stepping stone to (ISRO’s chairmanship). However a director from another (ISRO centre) was placed, so naturally that trimmed my chances (at chairmanship),” he told The Hindu, “Secondly the book isn’t officially released. My publisher may have released a few copies … but after all this controversy I have decided to withhold publication.”

I haven’t yet read this book nor do I know more than what’s already been reported about this new controversy. It has been simmering all evening but I assumed that it would simply blow over, as these things usually do, and that the book would be released with the customary pomp. But the book has indeed been withdrawn, which was less surprising than it should have been.

Earlier today, I was reading a paper uploaded on the Current Science website about Gold OA publishing. It was run-of-the-mill in many ways, but one of my peers sent me a strongly worded email decrying the fact that the paper wasn’t explicitly opposed to Gold OA. When I read it, I found that the authors’ statements early in the paper were quite tepid, seemingly unconcerned about Gold OA’s deleterious effects on the research-publishing ecosystem, but that later on the paper threw up many of the more familiar criticisms: that Gold OA is expensive, discriminatory, etc.

Both Somanath’s withdrawn book and this paper have one thing in common: (potentially) literary laziness, which often speaks to a sense that one is entitled to the benefit of the doubt rather than being compelled to earn it.

Somanath told The Hindu and some other outlets that he didn’t intend to criticise Sivan, his predecessor as ISRO chairman, but that he was withholding the book’s release because some news outlets had interpreted the book in a way that his statements did come across as criticism.

Some important background: since 2014, ISRO’s character has changed. Journalists used to be able to access various ISRO officials and visit sites of historic importance with relative ease; neither is possible any more. The national government has also tried to stage-manage ISRO missions in the public domain, especially the more prominent ones like Chandrayaan-2 and -3, the Mars Orbiter Mission, and the South Asia Satellite.

Similarly, there have been signs that both Sivan and Somanath had and have the government’s favour on grounds that go beyond their qualifications and experience. With Somanath, of course, we have seen that in his pronouncements about the feats of ancient India, etc., and now we have it with Sivan as well: Somanath says ISRO knew the Chandrayaan-2 lander had suffered a software glitch ahead of its crash, and didn’t simply lose contact with the ground as Sivan had said at the time. Recall that in 2019, when the mishap occurred, ISRO also stopped sharing non-trivial information about the incident and even refused to confirm that the lander had crashed until a week later.

In this milieu, Sivan and Somanath are two peas in a pod, and it seems quite unlikely to me that Somanath set out to criticise Sivan in public. The fact that he would much rather withhold the book than take his chances is another sign that criticising Sivan wasn’t his goal. Yet as my colleague Jacob Koshy reported for The Hindu:

Excerpts from the book, that The Hindu has viewed, do bring out Mr. Somanath’s discomfort with the “Chairman (Dr. Sivan’s)” decision to not be explicit about the reasons for the failure of the Chandrayaan 2 mission (which was expected to land a rover). The issue was a software glitch but was publicly communicated as an ‘inability to communicate with the lander.’

There is a third possibility: that Somanath did wish to criticise Sivan but underestimated how much of an issue it would become in the media.

Conveying something in writing has always been tricky. Conveying something while simultaneously downplaying its asperity and accentuating its substance or spirit is something else altogether, requiring quite a bit of practice, a capacity for words, and of course clarity of thought. Without these things, writing can easily miscommunicate. (This is why reading is crucial to writing better: others’ work can alert you to meaning-making possibilities you may never have considered yourself.) The Current Science paper is similar, with its awkward placement of important statements at the end and banal statements at the beginning, neither worded to drive home a specific feeling.

(In case you haven’t, please read Edward Tufte’s analysis of the Challenger disaster and the failure of written communication that preceded it. Many of the principles he sets out would apply for a lot of non-fiction writing.)

Somanath wrote his book in Malayalam, his native tongue, rather than in English, in which, going by his media interviews, he is not fluent. So he may have sidestepped the pitfalls of writing in an unfamiliar language – yet his being misinterpreted anyway, or so he says, still suggests that he didn’t pay enough attention to what he was putting down. In the same vein, I’m also surprised that his editors at the publisher, Lipi Books in Kozhikode, didn’t pick up on these issues earlier.

Understanding this is important because Somanath writing something and then complaining that it was taken in a way it wasn’t supposed to be taken lends itself to another inference that I still suspect the ruling party’s supporters will reach for: that the press twisted his words in its relentless quest to stoke tensions and that Somanath was as clear as he needed to be. As I said, I haven’t yet read the book, but as an editor (see Q3) – and also as someone for whom checking for incompetence before malfeasance has paid rich dividends – I would look for an intention-skill mismatch first.

Featured image: ISRO chairman S. Somanath in 2019. Credit: NASA.

An ‘expanded’ heuristic to evaluate science as a non-scientist

The Hindu publishes a column called ‘Notebook’ every Friday, in which journalists in the organisation open windows big or small into their work, providing glimpses into their process and thinking – things that otherwise remain out of view in news articles, analyses, op-eds, etc. Quite a few of them are very insightful. A recent example was Maitri Porecha’s column about looking for closure in the aftermath of the Balasore train accident.

I’ve written twice for the section thus far, both times about a matter that has stayed with me for a decade, manifesting at different times in different ways. The first edition was about being able to tell whether a given article or claim is real or phony irrespective of whether you have a science background. I had proposed the following eight-point checklist that readers could follow (quoted verbatim):

  1. If the article talks about effects on people, was the study conducted with people or with mice?
  2. How many people participated in a study? Fewer than a hundred is always worthy of scepticism.
  3. Does the article claim that a study has made exact predictions? Few studies actually can.
  4. Does the article include a comment from an independent expert? This is a formidable check against poorly-done studies.
  5. Does the article link to the paper it is discussing? If not, please pull on this thread.
  6. If the article invokes the ‘prestige’ of a university and/or the journal, be doubly sceptical.
  7. Does the article mention the source of funds for a study? A study about wine should not be funded by a vineyard.
  8. Use simple statistical concepts, like conditional probabilities and Benford’s law, and common sense together to identify extraordinary claims, and then check if they are accompanied by extraordinary evidence.
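On point 8: Benford’s law says that in many naturally occurring datasets, the leading digit d appears with probability log10(1 + 1/d) – so ‘1’ should lead about 30% of the time and ‘9’ less than 5% of the time. Here’s a minimal sketch of how a reader might compare a set of reported figures against that expectation (the data is made up for illustration):

```python
import math
from collections import Counter

def benford_expected(d):
    """Benford's law: the probability that a number's leading digit is d."""
    return math.log10(1 + 1 / d)

def leading_digit_shares(numbers):
    """Return the observed share of each leading digit (1-9) in a dataset."""
    digits = [int(str(abs(n)).lstrip("0.")[0]) for n in numbers if n]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}

# Hypothetical figures, standing in for a study's data table:
data = [1023, 187, 19.4, 2250, 31, 1.7, 140, 96, 1180, 260, 17, 4.2]

observed = leading_digit_shares(data)
for d in range(1, 10):
    print(d, f"expected {benford_expected(d):.2f}", f"observed {observed[d]:.2f}")
```

A dozen numbers is of course too few for a reliable comparison; the check becomes meaningful over larger tables of figures.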

The second was about whether science journalists are scientists – which is related to the first on the small matter of faith: i.e. that science journalists are purveyors of information that we expect readers to ‘take up’ on trust and faith, and that an article that teaches readers any science needs to set this foundation carefully.

After publishing the second edition, I came across a ‘Policy Forum’ article published in October 2022 in Science, entitled ‘Science, misinformation, and the role of education’. Among other things, it presents a “‘fast and frugal’ heuristic” – “a three-step algorithm with which competent outsiders [can] evaluate scientific information”. I was glad to see that this heuristic included many points from my eight-point checklist, but it also went a step further and discussed two things that perhaps more engaged readers would find helpful. One of them, however, requires an important disclaimer, in my opinion.

DOI: 10.1126/science.abq80

The additions are about consensus, expressed through the questions (numbering mine):

  1. “Is there a consensus among the relevant scientific experts?”
  2. “What is the nature of any disagreement/what do the experts agree on?”
  3. “What do the most highly regarded experts think?”
  4. “What range of findings are deemed plausible?”, and
  5. “What are the risks of being wrong?”

No. 3 is interesting because “regard” is of course subjective as well as cultural. For example, well-regarded scientists could be those who have published in glamorous journals like Nature, Science, Cell, etc. But as the recent hoopla over Ranga Dias – who had three papers about near-room-temperature superconductivity retracted in one year, two of them published in Nature – showed us, this is no safeguard against bad science. In fact, even winning a Nobel Prize isn’t a guarantee of good science (see e.g. reports about Gregg Semenza and Luc Montagnier). As the ‘Policy Forum’ article also states:

“Undoubtedly, there is still more that the competent outsider needs to know. Peer-reviewed publication is often regarded as a threshold for scientific trust. Yet while peer review is a valuable step, it is not designed to catch every logical or methodological error, let alone detect deliberate fraud. A single peer-reviewed article, even in a leading journal, is just that—a single finding—and cannot substitute for a deliberative consensus. Even published work is subject to further vetting in the community, which helps expose errors and biases in interpretation. Again, competent outsiders need to know both the strengths and limits of scientific publications. In short, there is more to teach about science than the content of science itself.”

Yet “regard” matters because people at large pay attention to notions like “well-regarded” – which is as much a comment on societal preferences as on what scientists themselves have aspired to over the years. This said, on technical matters, this particular heuristic would fail only a small part of the time (based on my experience).

It would fail a lot more if it were applied in the middle of a cultural shift, e.g. regarding how much effort a good scientist is expected to dedicate to their work. Here, “well-regarded” scientists – typically people who started doing science decades ago, have persisted in their respective fields, and have finally risen to positions of prominence, and who are thus likely to be white and male and to have seldom had to bother with running a household or raising children – will have answers that reflect the result of these privileges, but which would be at odds with the direction of the shift (i.e. towards better work-life balance, less time than before devoted to research, and contracts amended to accommodate these demands).

In fact, even if the “well-regarded” heuristic might suffice to judge a particular scientific claim, it still carries the risk of skewing in favour of the opinions of people with the aforementioned privileges. These concerns also apply to the three conditions listed under #2 in the heuristic graphic above – “reputation among peers”, “credentials and institutional context”, and “relevant professional experience” – all of which have historically been more difficult for non-cis-het male scientists to acquire. But we must work with what we have.

In this sense, the last question is less subjective and more telling: “What are the risks of being wrong?” If a scientist avoids a view and simultaneously avoids an adverse outcome for themselves, it’s possible they avoided the view in order to avoid the outcome – and not because the view is inherently disagreeable.

The authors of the article, Jonathan Osborne and Daniel Pimentel, both of the Graduate School of Education at Stanford University, have grounded their heuristic in the “social nature of science” and the “social mechanisms and practices that science has for resolving disagreement and attaining consensus”. This is obviously more robust (than my checklist grounded in my limited experiences), but I think it could also have discussed the intersection of the social facets of science with gender and class. Otherwise, the risk is that, while the heuristic will help “competent outsiders” better judge scientific claims, it will do as little as its predecessor to uncover the effects of intersectional biases that persist in the “social mechanisms” of science.

The alternative, of course, is to leave out “well-regarded” altogether – but the trouble there, I suspect, is that we might be lying to ourselves if we pretended a scientist’s regard didn’t, or ought not to, matter, which is why I didn’t go there…

On Agnihotri’s Covaxin film, defamation, and false bravery

Vivek Agnihotri’s next film, The Vaccine War, is set to be released on September 28. It is purportedly about the making of Covaxin, the COVID-19 vaccine made by Bharat Biotech, and claims to be based on real events. Having watched the film’s trailer and snippets shared on Twitter, I can confidently state that while the basis of the film’s narrative may or may not be true, the narrative itself is not. The film’s principal antagonist appears to be a character named Rohini Singh Dhulia, played by Raima Sen, the science editor of a news organisation called The Daily Wire. Agnihotri has said this character is based on his ‘research’ into the journalism of The Wire during, and about, the pandemic – presumably at the time of and immediately following the DCGI’s approval of Covaxin. Agnihotri and his followers on Twitter have also gone after science journalist Priyanka Pulla, who wrote many articles in this period for The Wire. At the time, I was the science editor of The Wire. Dhulia appears to have lovely lines in the film like “India can’t do this” and “the government will fail”, the latter uttered with visible glee.

It has been terribly disappointing to see senior ICMR scientists promoting the film, and to see the film (going by the trailer, at least) confidently retain the name of Balram Bhargava for a character; for the uninitiated, Bhargava was the ICMR director-general during the pandemic. (One of his aides also has make-up strongly resembling Raman Gangakhedkar.) In Pulla’s words, “the political capture of this institution is complete”. The film has also been endorsed by Sudha Murthy and received a tone-deaf assessment from film critic Baradwaj Rangan, among other similar displays of support. One thing that caught my eye is that the film also retains the ICMR logo, logotype, and tagline as is (see screenshot below, from the trailer).

Source: YouTube

The logo appears on the right of the screen as well as at the top-left, together with the name of NIV, the government facility that provided the viral material for, and helped develop, Covaxin. This is notable: AltBalaji, the producer of the TV show M.O.M. – The Women Behind Mission Mangal, was prevented from showing ISRO’s rockets as is because the show’s narrative was a fictionalised version of real events. A statement from AltBalaji to The Wire Science at the time, in 2019, when I asked why the show’s posters showed the Russian Soyuz rocket and the NASA Space Shuttle instead of the PSLV and the GSLV, said it was “legally bound not to use actual names or images of the people, objects or agencies involved”. I don’t know if the 2019 film Mission Mangal was bound by similar terms: its trailer shows a rocket very much resembling the GSLV Mk III (now called LVM-3) sporting the letters “S R O” instead of “I S R O”; the corresponding Hindi letters “स” and “रो”; and a different logo below the letters “G S L V” in place of the first “I” (screenshot below). “GSLV” is still the official designation of the launch vehicle, so the film went a step further than what the TV show was allowed. And while the film also claims to be based on real events, its narrative is also fictionalised (read my review and fact-check).

Source: YouTube

Yet ICMR’s representation in The Vaccine War pulls no punches: its director-general at the time is represented by name and all its trademark assets are on display. It would seem the audience is to believe it is receiving a documentarian’s view of real events at ICMR. The film has collapsed the difference between being based on a true story and building on that story to fictionalise for dramatic purposes. Perhaps more importantly: while AltBalaji was “legally bound” not to use official ISRO imagery, including that of the rockets, because it presented a fiction, The Vaccine War has been freed of the same legal obligation even though it seems to be operating on the same terms. This, to me, is the chief symptom of ICMR’s political capture.

Of course, that Agnihotri is making a film based on a ‘story’ that might include a matter that is sub judice is also problematic. As you may know, Bharat Biotech filed a defamation case against the Foundation for Independent Journalism in early 2022; this foundation publishes The Wire and The Wire Science. I’m a defendant in the case, as are fellow journalists and science communicators Priyanka Pulla, Neeta Sanghi, Jammi Nagaraj Rao, and Banjot Kaur, among others. But while The Wire is fighting the case, it will be hard to say before watching The Vaccine War whether the film actually treads on forbidden ground. I’m also not familiar with the freedoms filmmakers do and don’t have under Indian law (and the extent to which the law maps to common sense and intuition). That said, while we’re on the topic of the film, the vaccine, defamation, and the law, I’d like to highlight something important.

In 2022, Bharat Biotech sought and received an ex parte injunction from a Telangana court against the allegedly offending articles published by The Wire and The Wire Science, and had them forcibly taken down. The court also prevented the co-defendants from publishing articles on Covaxin going forward, and the company filed a civil defamation case seeking Rs 100 crore in damages. As the legal proceedings got underway, I started to speak to lawyers and other journalists about the implications of the orders, whether specific actions were disallowed on my part, and the way courts deal with such matters – and discovered something akin to a labyrinth that’s also a minefield. There’s a lot to learn. While the law may be clear about something, how a contention winds its way through the judicial system is both barely organised and uncodified. Rahul Gandhi’s own defamation case threw informative light on the role of judges’ discretion and the possibility of a jail term upon conviction, albeit for the criminal variety of the case.

The thing I resented the most – a view expressed by sympathetic lawyers, legal scholars, and journalists alike – is the notion that it’s the mark of a good journalist to face down a defamation case in their career. Whatever its origins, this belief’s time is up in a period when defamation cases are being filed at the drop of a hat. Facing one is no longer a specific mark of good journalism. Like The Wire, my co-defendants and I stand by the articles we wrote and published – and they remain good journalism irrespective of whether they have also been accused of being defamatory.

Second, as the adage goes, the process is the punishment – yet valorising the presence of a defamation case in a journalist’s record downplays the effects of the process itself. These effects include the inherent uncertainty; the unfamiliar procedures and documentation, and their contents and purposes; the travelling, especially to small towns, and the planning ahead (taking time off work, availability of food, access to clean bathrooms, local transport, etc.); the obscure rules of conduct within courtrooms and the varying zeal with which they’re enforced; the variety, and thus intractability, of options for legal succour; and the stress, expenses, and anxiety. So please, thanks for your help, but spare me the BS about how I’m officially a good journalist.