A lotus for Modi, with love from Manipur

This bit of news is so chock full of metaphors that I’m almost laughing out loud. Annotated excerpts from ‘CSIR’s new lotus variety ‘Namoh 108’ a ‘grand gift’ to PM Modi: Science Minister‘, The Hindu, August 19, 2023:

It’s a commonplace today that Indian government ministers’ relentless exaltation of Prime Minister Narendra Modi is not so much spontaneous as orchestrated – a way to keep his name in the news without him having to interact with the press, and to constantly reinforce the impression that Modi is doing great work. And this “Namoh 108” drives home how the political leadership of the scientific enterprise has been pressed into the same task.

Also, Jitendra Singh hasn’t been much of a science minister: almost since the day he took charge of the ministry, he has praised his master in nearly every public utterance and speech. Meanwhile, the expenditure on science and research by the government he’s part of has fallen, pseudoscience is occupying more space in several spheres (including at the IITs), and research scholars continue to have a tough time doing their work.

The flower’s discovery in Manipur many years ago is in all likelihood a coincidence vis-à-vis the violence underway in the northeastern state – but it’s just as hard to believe government officials aren’t speaking up about it now precisely to catapult it into the news: to highlight something else more benign about Manipur and to give it a BJP connection as well, the lotus having 108 petals and the party’s symbol being a lotus.

(Also, this is the second instance in recent times of the Indian state seeing value in a botanical resource from the northeast and proceeding to extract and exploit it. In 2007, researchers found the then-spiciest chilli variety in India’s northeast. By 2010, DRDO had found a way to pack it into grenades. In 2016, a Centre-appointed committee considered these grenades as alternatives to pellet guns in the Kashmir Valley.)

It seems we’re sequencing the genomes of, and conducting detailed studies of, only those flowers that have a Hindu number of petals. Woe betide those that have 107, 109 or even a dozen – no matter that, unless the 108 petals confer some specific benefit on the lotus plant (apparently not the case), the number is an accident of nature. Against the backdrop of the Nagoya Protocol, the Kunming-Montreal pact, the Convention on Biological Diversity, and issues of access and benefit sharing, India – and all other countries – should be striving to study (genetically and otherwise) and index all the different biological resources available within their borders. But we’re not. We’re only interested in flowers with 108 petals.

Good luck to children who will be expected to draw this in classrooms. Good luck also to other lotuses.

I’m quite certain that someone in that meeting would have coughed, sneezed, burped, farted or sniffed before that individual said “Om Namaha Vasudeva” out loud. I’m also sure that, en route to the meeting, and aware of its agenda, the attendees would have heard someone retching, hacking or spitting. “Kkrkrkrkrkrhrhrhrhrhrhrthphoooo 108” is more memorable, no?

So there was a naming committee! I’ll bet 10 rupees that after this committee came up with “Namoh”, it handed the note to Singh, added the footnote about its imperfect resemblance to “Namo”, and asked for brownie points.

The shadows of Chandrayaan 2

When the surface component of the Chandrayaan 2 mission failed in September 2019, with the ‘Vikram’ lander crashing on the moon’s surface instead of gently touching down, there was a sense in all public spaces and conversations that the nation as a whole was in a state of collective grief. Until Wednesday, I couldn’t recall the excitement, anticipation, and anxiety that prevailed as the craft got closer to the moon, entered its designated orbit, and began its descent. Wednesday was the start of the week leading up to the second landing attempt, by the Chandrayaan 3 mission, and it all came screaming back.

Much of the excitement, anticipation, and anxiety I’m feeling now is gratifying largely because it’s shared – because we’re doing this together. I cherish that because it’s otherwise very difficult to find with ISRO’s activities: all but the most superficial details of its most glamorous missions are often tucked away in obscure corners of the web, it doesn’t have a functional public outreach unit, and there’s a lot of (unhelpful) uncertainty about the use of ISRO-made media.

But beyond facilitating this sense of togetherness, I’m concerned that ISRO’s sense of whether it should open itself up has been influenced by the public response to the Chandrayaan 2 mission – a concern based on a parallel with India’s unfortunate tryst with solar cookers. In the early 1950s, the National Physical Laboratory fabricated a solar cooker with which the Indian government hoped to “transform household energy consumption … in a period of great uncertainty in food security and energy self-sufficiency”, in the words of science historian Shankar Nair, writing in The Hindu. He continued:

The solar cooker was met with international press coverage and newsreels in the cinema. But the ‘indigenous’ device, based on a 19th century innovation, was dead in the water. Apart from its prohibitive price, it cooked very slowly. … The debacle caused the NPL to steer clear of populist ‘applied science’ for the remainder of K.S. Krishnan’s directorship.

Author Arun Mohan Sukumar recounted the same story but with more flair at the launch event of his book in Bangalore in March 2020:

A CSIR scientist said the failure of the solar cooker project basically ensured that all the scientists [who worked on it] retreated into the comfort of their labs and put them off “applied science”.

Here’s a project commenced almost immediately after independence meant to create technology by Indians for Indians, and after it failed for various reasons, the political spotlight that had been put on the project was counterproductive. Nehru himself investing this kind of capital exposed him and the scientific establishment to criticism that they were perhaps not used to. These were venerated men in their respective fields and they were perhaps unused to being accountable in this visceral way. …

This is the kind of criticism confronted by the scientific establishment and it is a consequence of politics. I agree with Prof [Jahnavi] Phalkey when she says it was a consequence of the political establishment not insulating the scientific establishment from the sort of criticism which may or may not be informed but you know how the press is. That led to a gradual breaking of ranks between the CSIR and the political vision for India…

The reflections of the solar-cooker debacle should be obvious in what followed the events of September 7, 2019. Prime Minister Narendra Modi had spoken of the Chandrayaan 2 mission on multiple occasions ahead of the landing attempt (including from the Red Fort on Independence Day). That the topmost political leader of a country takes so much interest in a spacefaring mission is a good thing – but his politics has also been communal and majoritarian, and to have the mission invoked in conversations tinged with nationalistic fervour always induced nervousness.

Modi was also present in the control room as ‘Vikram’ began its descent over the lunar surface and, after news of the crash emerged, was seen hugging a visibly distraught K. Sivan, then the ISRO chairman – the same sort of hug that Modi had become famous for imposing on the leaders of other countries at multilateral fora. Modi’s governance has been marked by a fixation on symbols, and the symbols that he’d associated with Chandrayaan 2 made it clear that the mission was technological but also political. Its success was going to be his success. (Sample this.)

Sure enough, there was a considerable amount of post-crash chatter on social media platforms, on TV news channels, and on some news websites that tried to spin the mission as a tremendous success not worthy of any criticism that the ‘left’ and the ‘liberals’ were allegedly slinging at ISRO. But asking whether this is a “left v. right” thing would miss the point. If the sources of these talking points had exercised any restraint and waited for the failure committee report, I’m sure we could all have reached largely the same conclusion: that Chandrayaan 2 got ABC right and XYZ not so right, that it would have to do PQR for Chandrayaan 3, and that we can all agree that space is hard.

Irrespective of what the ‘left’ or the ‘right’ alleged, Chandrayaan 2 becoming the battleground on which these tensions manifested would surely have frayed ISRO scientists’ nerves. To adapt Sukumar’s words to this context: the more cantankerous political crowd investing this kind of interest exposed ISRO to criticism that it was perhaps not used to. These were venerated men in their respective fields and they were perhaps unused to being accountable in this visceral way. This is the kind of criticism confronted by the scientific establishment and it is a consequence of politics…

The response to NPL’s solar cookers put scientists off “applied science”. Can we hope that the response to Chandrayaan 2 won’t put ISRO scientists off public engagement after Chandrayaan 3 ends, whether in (some kind of) failure or success? There are those of us beyond the din who know that the mission is very hard, and why – but at the same time it’s not as if ISRO has always acted in good faith or with the public interest in mind. For example, it hasn’t released Chandrayaan 2’s failure committee report to date. So exercising the option of waiting for this report before making up our minds would have taken us nowhere.

(On the other hand, the officially determined causes of failure of the GSLV F10 mission – an almost apolitical affair – were more readily available.)

I’m also concerned about whether ISRO itself can still construe respectful criticism of its work as such or whether it will perceive it as ideologically motivated vitriol. A characteristic feature of institutions overtaken by the nationalist programme is that they completely vilify all criticism, even when it is merited. S. Somanath, ISRO’s current chairman, recently signalled that he might have been roped into this programme when he extolled “Vedic science”. If ISRO lets its response to failures be guided by politicians and bureaucrats, we could expect that response to resemble the political class’s as well.

As always, time will tell, but I sincerely hope that it tells of one outcome instead of another.

Featured image: A view of the Chandrayaan 2 lander and rover seen undergoing tests, June 27, 2019. Credit: ISRO, dithered by ditherit.com.

The value of ripeness

Think of the long centuries in which attempts were made to change mercury into gold because that seemed like a very useful thing to do. These efforts failed and we found how to change mercury into gold by doing other things that had quite different intentions. And so I believe that the availability of instruments, the availability of ideas or concepts—not always but often mathematical—are more likely to determine where great changes occur in our picture of the world than are the requirements of man. Ripeness in science is really all, and ripeness is the ability to do new things and to think new thoughts. The whole field is pervaded by this freedom of choice. You don’t sit in front of an insoluble problem for ever. You may sit an awfully long time, and it may even be the right thing to do; but in the end you will be guided not by what it would be practically helpful to learn, but by what it is possible to learn.

On July 25, the science writer Ash Jogalekar shared this excerpt from Robert Oppenheimer’s 1964 book The Flying Trapeze, a compilation of the Whidden Lectures that he delivered in 1962.

Oppenheimer’s invocation of the notion of ‘ripeness’ is quite fascinating: it is reminiscent of the great mathematician Carl Friedrich Gauss’s personal philosophy, inscribed on his seal: pauca sed matura, Latin for “few but ripe”. Gauss adhered to it to the extent that he would publish only mathematical work that was complete and with which he was wholly satisfied. As a result, he left a large body of unpublished work that anticipated discoveries other mathematicians and physicists would make only much later.

‘Ripeness’ also brings to mind one of the French mathematician Alexander Grothendieck’s beliefs. In the words of Allyn Jackson:

One thing Grothendieck said was that one should never try to prove anything that is not almost obvious. This does not mean that one should not be ambitious in choosing things to work on. Rather, “if you don’t see that what you are working on is almost obvious, then you are not ready to work on that yet,” explained Arthur Ogus of the University of California at Berkeley. “Prepare the way. And that was his approach to mathematics, that everything should be so natural that it just seems completely straightforward.” Many mathematicians will choose a well-formulated problem and knock away at it, an approach that Grothendieck disliked. In a well-known passage of Récoltes et Semailles, he describes this approach as being comparable to cracking a nut with a hammer and chisel. What he prefers to do is to soften the shell slowly in water, or to leave it in the sun and the rain, and wait for the right moment when the nut opens naturally (pages 552–553). “So a lot of what Grothendieck did looks like the natural landscape of things, because it looks like it grew, as if on its own,” Ogus noted.

Ripeness, in Oppenheimer’s telling, is the ability to do new things, but I find the preceding line to be more meaningful, with stronger parallels to both Gauss’s and Grothendieck’s views, in the limited context of scientific progress: “the availability of instruments, the availability of ideas or concepts … are more likely to determine where great changes occur in our picture of the world than are the requirements of man.”

Oppenheimer says later in the same lecture that progress is inalienable to science; this, together with his other statements, provides extraordinary insight into where progress occurs and why. Imagine the realm of knowledge that science can reveal or validate to be a three-dimensional space. Here, the opportunities to find something new are located at, or even localised to, points where there is already a confluence of possibilities thanks to the availability of information, instruments, techniques, resources, and sensible people. This is, in a manner of speaking, and as Oppenheimer also indicates (“than are the requirements of man”), a substitution of the prime mover of discoveries: individual people’s needs and pursuits give way to the social and cultural prerequisites of knowledge itself.

This also seems like a better way to think about what some have called “useless knowledge”, which supposedly is knowledge produced without regard for its applications. I’m referring not to Abraham Flexner’s excellent 1939 essay* here but to the term as wielded by some political leaders, policy-setters, and the social-media commentariat, and which often finds mention in the mainstream English press in India as the antithesis of “knowledge that solves society’s problems”. Rather than being useless, such knowledge may just be charting new points in this abstract space, and could in future become the nuclei of new worlds; and when we dismiss it as useless, we preclude some possibilities.

I won’t deny that a strategy of randomly nucleating this opportunity-space could be too expensive (for all countries, not just India: I don’t buy that our country has too little money for science, for the reasons discussed here) and that it might be more gainful for governments to assume a more coordinated approach. But I will say two things. First, when we’re pursuing, or being forced to pursue, a more conservative path through scientific progress, let us not pretend – as many have become wont to do – that we’re taking a better path. Second, let us not wield short-sighted arguments that privilege our earthly needs over something we may not even know we’re losing.

Finally, perhaps these ideas apply to other forms of progress as well. Happy Independence Day. 🙂

(* Flexner cofounded the Institute for Advanced Study, which Oppenheimer was the director of when The Flying Trapeze was published.)

What’s with superconductors and peer-review?

Throughout the time I’ve been a commissioning editor for science-related articles for news outlets, I’ve always sought and published articles about academic publishing. It’s the part of the scientific enterprise that seems to have been shaped the least by science’s democratic and introspective impulses. It’s also this long and tall wall erected around the field where scientists are labouring, offering ‘visitors’ guided tours for a hefty fee – or, in many cases, for ‘free’ if the scientists are willing to pay the hefty fees instead. Of late, I’ve spent more time thinking about peer-review, the practice of a journal distributing copies of a manuscript it’s considering for publication to independent experts on the same topic, for their technical inputs.

Most of the peer-review that happens today is voluntary: the scientists who do it aren’t paid. You must’ve come across several articles of late about whether peer-review works. It seems to me that it’s far from perfect. Studies (in July 1998, September 1998, and October 2008, e.g.) have shown that peer-reviewers often don’t catch critical problems in papers. In February 2023, a noted scientist said in a conversation that peer-reviewers go into a paper assuming that the data presented therein hasn’t been tampered with. This statement was eye-opening for me because I can’t think of a more important reason to include technical experts in the publishing process than to weed out problems that only technical experts can catch. Anyway, these flaws with the peer-review system aren’t generalisable, per se: many scientists have also told me that their papers benefited from peer-review, especially review that helped them improve their work.

I personally don’t know how ‘much’ peer-review is of the former variety and how much the latter, but it seems safe to state that when manuscripts are written in good faith by competent scientists and sent to the right journal, and the journal treats its peer-reviewers as well as its mandate well, peer-review works. Otherwise, it tends to not work. This heuristic, so to speak, allows for the fact that ‘prestige’ journals like Nature, Science, NEJM, and Cell – which have made a name for themselves by publishing papers that were milestones in their respective fields – have also published and then had to retract many papers that made exciting claims that were subsequently found to be untenable. These journals’ ‘prestige’ is closely related to their taste for sensational results.

All these thoughts were recently brought into focus by the ongoing hoopla, especially on Twitter, about the preprint papers from a South Korean research group claiming the discovery of a room-temperature superconductor in a material called LK-99 (this is the main paper). This work has caught the imagination of users on the platform unlike any other paper about room-temperature superconductivity in recent times. I believe this is because the preprints contain some charts and data that were absent in similar work in the past, and which strongly indicate the presence of a superconducting state at ambient temperature and pressure, and because the preprints include instructions on the material’s synthesis and composition, which means other scientists can produce the material and check for themselves. Personally, I’m holding the stance advised by Prof. Vijay B. Shenoy of IISc:

Many research groups around the world will attempt to reproduce these results; there are already some rumours that independent scientists have done so. We will have to wait for the results of their studies.

Curiously, the preprints have caught the attention of a not insignificant number of techbros, who, alongside the typically naïve displays of their newfound expertise, have also called for the peer-review system to be abolished because it’s too slow and opaque.

Peer-review has a storied relationship with superconductivity. In the early 2000s, a slew of papers coauthored by the German physicist Jan Hendrik Schön, working at a Bell Labs facility in the US, were retracted after independent investigations found that he had fabricated data to support claims that certain organic molecules, called fullerenes, were superconducting. The Guardian wrote in September 2002:

The Schön affair has besmirched the peer review process in physics as never before. Why didn’t the peer review system catch the discrepancies in his work? A referee in a new field doesn’t want to “be the bad guy on the block,” says Dutch physicist Teun Klapwijk, so he generally gives the author the benefit of the doubt. But physicists did become irritated after a while, says Klapwijk, “that Schön’s flurry of papers continued without increased detail, and with the same sloppiness and inconsistencies.”

Some critics hold the journals responsible. The editors of Science and Nature have stoutly defended their review process in interviews with the London Times Higher Education Supplement. Karl Ziemelis, one of Nature’s physical science editors, complained of scapegoating, while Donald Kennedy, who edits Science, asserted that “There is little journals can do about detecting scientific misconduct.”

Maybe not, responds Nobel prize-winning physicist Philip Anderson of Princeton, but the way that Science and Nature compete for cutting-edge work “compromised the review process in this instance.” These two industry-leading publications “decide for themselves what is good science – or good-selling science,” says Anderson (who is also a former Bell Labs director), and their market consciousness “encourages people to push into print with shoddy results.” Such urgency would presumably lead to hasty review practices. Klapwijk, a superconductivity specialist, said that he had raised objections to a Schön paper sent to him for review, but that it was published anyway.

A similar claim by a group at IISc in 2019 generated a lot of excitement then, but today almost no one has any idea what happened to it. It seems reasonable to assume that the findings didn’t pan out in further testing and/or that the peer-review, following the manuscript being submitted to Nature, found problems in the group’s data. Last month, the South Korean group uploaded its papers to the arXiv preprint repository and has presumably submitted them to a journal: for a finding this momentous, that seems like the obvious next step. And the journal is presumably conducting peer-review at this point.

But in both instances (IISc 2019 and today), the claims were also accompanied by independent attempts to replicate the data as well as journalistic articles that assimilated the various public narratives and their social relevance into a cogent whole. One of the first signs that there was a problem with the IISc preprint was another preprint by Brian Skinner, a physicist then with the Massachusetts Institute of Technology, who found the noise in two graphs plotting the results of two distinct tests to be the same – which is impossible. Independent scientists also told The Wire (where I worked at the time) that they lacked some information required to make sense of the results, and they expressed concerns about the magnetic susceptibility data.

Peer-review may not be designed to check whether the experiments in question produced the data in question but whether the data in question supports the conclusions. For example, in March this year, Nature published a study led by Ranga P. Dias in which he and his team claimed that nitrogen-doped lutetium hydride becomes a room-temperature superconductor under a pressure of about 10 kilobar (roughly 10,000 atm), considerably lower than the pressure required to produce a superconducting state in other similar materials. After it was published, many independent scientists raised concerns about some data and analytical methods presented in the paper – as well as its failure to specify how the material could be synthesised. These problems, it seems, didn’t prevent the paper from clearing peer-review. Yet on August 3, Martin M. Bauer, a particle physicist at Durham University, published a tweet defending peer-review in the context of the South Korean work.

The problem seems to me to be the belief – held by many pro- as well as anti-peer-review actors – that peer-review is the ultimate check capable of filtering out all forms of bad science. It just can’t, and maybe that’s okay. Contrary to what Dr. Bauer has said, and as the example of Dr. Dias’s paper suggests, peer-reviewers won’t attempt to replicate the South Korean study. That task, thanks to the level of detail in the South Korean preprint and the fact that preprints are freely accessible, is already being undertaken by a panoply of labs around the world, both inside and outside universities. So abolishing peer-review won’t be as bad as Dr. Bauer makes it sound. As I said, peer-review is, or ought to be, one of many checks.

It’s also the sole check that a journal undertakes, and maybe that’s the bigger problem. That is, scientific journals may well be a pit of papers of unpredictable quality without peer-review in the picture – but that would only be because journal editors and scientists are separate functional groups, rather than having a group of scientists take direct charge of the publishing process (akin to how arXiv currently operates). In the existing publishing model, peer-review is as important as it is because scientists aren’t involved in any other part of the publishing pipeline.

An alternative model comes to mind, one that closes the gaps of “isn’t designed to check whether the experiments in question produced the data in question” and “the sole check that a journal undertakes”: scientists conduct their experiments, write them up in a manuscript and upload them to a preprint repository; other scientists attempt to replicate the results; if the latter are successful, both groups update the preprint paper and submit that to a journal (with the lion’s share of the credit going to the former group); journal editors have this document peer-reviewed (to check whether the data presented supports the conclusions), edited, and polished[1]; and finally publish it.

Obviously this would require a significant reorganisation of incentives: for one, researchers will need to be able to apportion time and resources to replicate others’ experiments for less than half of the credit. A second problem is that this is a (probably non-novel) reimagination of the publishing workflow that doesn’t consider the business model – the other major problem in academic publishing. Third: I have in my mind only condensed-matter physics; I don’t know much about the challenges to replicating results in, say, genomics, computer science or astrophysics. My point overall is that if journals look like a car crash without peer-review, it’s only because the crashes were a matter of time and peer-review was doing the bare minimum to keep them from happening. (And Twitter was always a car crash anyway.)


[1] I hope readers won’t underestimate the importance of the editorial and language assistance that a journal can provide. Last month, researchers in Australia, Germany, Nepal, Spain, the UK, and the US had a paper published in which they reported, based on surveys, that “non-native English speakers, especially early in their careers, spend more effort than native English speakers in conducting scientific activities, from reading and writing papers and preparing presentations in English, to disseminating research in multiple languages. Language barriers can also cause them not to attend, or give oral presentations at, international conferences conducted in English.”

The language in the South Korean group’s preprints indicates that its authors’ first language isn’t English. According to Springer, which later became Springer Nature, the publisher of the Nature journals, “Editorial reasons for rejection include … poor language quality such that it cannot be understood by readers”. An undated article on Elsevier’s ‘Author Services’ page has this line: “For Marco [Casola, managing editor of Water Research], poor language can indicate further issues with a paper. ‘Language errors can sometimes ring a bell as a link to quality. If a manuscript is written in poor English the science behind it may not be amazing. This isn’t always the case, but it can be an indication.'”

But instead of palming the responsibility off to scientists, journals have an opportunity to distinguish themselves by helping researchers write better papers.

Hot in Ballia

More than half of the deaths reported during the heatwave in Uttar Pradesh and Bihar this week came from just one district in the former, Ballia. On (or around) June 17, the medical superintendent of the Ballia district hospital was transferred away, and replaced, after he attributed the deaths until then to the heat.

The state government also dispatched a team of two experts to the district to assess the local situation (as they say). One of them was A.K. Singh, the Uttar Pradesh health department’s director for communicable diseases. In one of his first interactions with the press, Singh indicated that the team wasn’t inclined to believe the Ballia deaths were due to the heat and that it was also considering alternative explanations, like the local water source being contaminated. I think something fishy could be going on here.

First, Hindustan Times reported Singh saying “the deaths at the hospital were primarily due to comorbidity and old age and not heatstroke”, erratic power in the area, and the time taken to reach the hospital — in effect, everything except the heat. Yet all these factors only worsen a condition; they don’t cause it. What was the condition?

Second, a reporter from The Hindu who visited Ballia learnt that it will take “more than seven days” to issue the medical certificates of the cause of death (MCCDs), so the official cause of death — i.e. what the state records the cause of each death in this period and circumstance to be — won’t be clear until then.


Aside: During the COVID-19 pandemic, the Indian Council of Medical Research issued guidelines that asked healthcare workers to not list comorbidities as the underlying cause of death for people who die with COVID-19. This didn’t stop workers from doing just this in many parts of the country. I’m not sure but I don’t think similar guidelines exist for when the underlying cause could be heat. The guidelines also specified the ICD-10 codes to be used for COVID-19; such codes already exist for heat-related deaths.


Third: Do the district authorities, and by extension the Uttar Pradesh state government, have complete knowledge of the situation in Ballia? There was the unfortunate superintendent who said there was a link between the heat and the deaths. Anonymous paramedic staff at the Ballia hospital also told The Hindu that “some of the deaths were heat-related”. Yet the new superintendent says the matter is “under investigation” even as one member of the expert team says it’s yet to find “any convincing evidence to link the deaths with heatstroke”.

I really don’t know what to make of this except that there’s a non-zero chance that a cover-up is taking shape. This is supported by the fourth issue: According to The Hindu, “the [Uttar Pradesh] State Health Department has asked the Chief Medical Officers of districts and the Chief Medical Superintendents of district hospitals to issue statements in coordination with the concerned District Magistrate only during ‘crucial situations'” — a move reminiscent of the National Disaster Management Authority’s response to the Joshimath disaster.

For now, this is as far as the facts (as I know them) will take us. I think we’ll be able to take a big stride when the hospital issues the MCCDs.

Cyclone Biparjoy and Chennai

When is a natural disaster a natural disaster? It began raining in Chennai last evening and hasn’t stopped as of this morning. But it’s been intermittent, with highly variable intensity. In my area, the wind has been feeble. I don’t know the situation in other areas because we haven’t had power since at least 3.45 am. My father tells me, from Bangalore, that early reports say Taramani (southern edge) and Nandanam (heart of the city) received 120 mm in the 24 hours until 5.30 am; Meenambakkam (outside the city, where the airport is) received 140 mm; and Nungambakkam (also heart of the city and near where I am) received 60 mm. @ChennaiRains has tweeted that the average rainfall in June in Chennai is 50 mm. Schools that were reopened just last week – after having been closed for two weeks longer than usual from the summer break due to a heatwave – have been closed again in four districts (Chengalpattu, Chennai, Kancheepuram, and Thiruvallur).

Does Chennai’s situation right now constitute a natural disaster? The consequences give that impression but the facts of the cause don’t. I’m sure some parts of the city have flooded as well, such as Pondy Bazaar (which, ironically, the state government had refurbished a few years ago under the ‘Smart Cities’ mission, including fitting a storm-water drain later found to have a critical design flaw) while many trees have been toppled. This is a city that has brought a state of disaster upon itself, like many other cities in India, thanks to their (oft-elected) leaders.

The problem at hand has two sides. One is that when a city has undermined its own ability to resist the worse consequences of an adverse natural event – such as receiving thrice the expected amount of rainfall for a month within 24 hours – it’s difficult to know what precipitated the disasterness, the state of experiencing a disaster: the city’s poor infrastructure or the intensity of the natural event. Determining exactly which one to blame is a nearly impossible problem to solve but attempting it could reveal, in the process, the most pressing problems to address at the local level. For example, right opposite my house is a vendor of construction materials who tends to close the nearest storm-water drain when loading or unloading sand to/from trucks, causing puddles of water to stagnate on the road, especially over some nasty potholes. There’s also a very rusted transformer at one end of the road and a sewage pipe that has burst at the other end. My block also doesn’t have power because I’m told a feeder line tripped in the night. But more fundamentally, this blame-apportionment exercise – the aggregate of all the local problems, for example – can be useful to piece together the true contributions of urban dysfunction to the city’s current disasterness, and contrast that with what the city’s and the state’s political leaders will soon claim the “actual problem” was, and attempt to take credit for “addressing” it.

The other side of the problem is that, thanks to climate change, we’re required to constantly update the way we think about disasters. For example, The Hindu has a good editorial today on India’s response to Cyclone Biparjoy, which made landfall over Kutch district last week as a ‘very severe cyclonic storm’. Thanks to the India Meteorological Department’s (IMD’s) accurate forecasts and the government response, only two casualties have been reported so far – versus the roughly 3,000 following a similar event at a similar location in June 1998. One reason there have been so few deaths this time is that the state government evacuated more than a lakh people from coastal areas, sparing them from being injured or killed by parts of their houses being blown in the wind or tossed in the water. That people were evacuated in time is a good thing but, the editorial asks, why do they have houses that can be so easily destroyed in the first place? The point is that disaster response has improved considerably, but it’s nowhere near where it needs to be: a point where a natural event has to be far more intense than it does today before it turns into a disaster. Put another way, while the response to a disaster may never be perfect, there are ways to measure its success – and, when it is successful, we need to pay attention to how that success was defined.

When 3,000 people died, it was reasonable to ask why the IMD’s forecasts weren’t good enough and how the death toll could be lowered. When two people died, it became time to move past these measures and ask, for example, why so many people had to be evacuated and how many rupees in income they lost (that they won’t be able to recoup). This is less an attempt to downplay the significance of India’s achievement – it really is tremendous progress for 25 years – and more an acknowledgment of the nature of the beast: disasters are getting bigger, badder, and, importantly, pervasive in a way that they endanger more than lives. The living suffer, too. Storms render the seas choppy, destroy boats and fishing nets, deteriorate living conditions in less-than-pucca houses, eliminate livelihoods, and increase (informal) indebtedness. Evacuating a fisher’s family will improve its chance of living to tell the tale, but will that tale be anything other than one of greater destitution? It should be.

A related issue here is the subtle danger of using extreme outcomes as our measures of success: a focus on saving lives downplays and eventually sidelines the lack of protection for other aspects of living. They might be more recoverable, in a manner of speaking, but that doesn’t mean they will be recovered. And that’s what we need to focus on next, and next, and so forth, until our governments can guarantee for everyone the recoverability of, say, all the amenities assured by the U.N. Sustainable Development Goals.

The same goes for rain-battered cities. In the limited context of my locality, my problems are the sewage on the road, the threat of the sewage line mixing with the drinking-water line underground, the risk of a vehicular accident on my street, and power not being restored soon enough. Sure, there might be worse problems elsewhere, but these ones in particular seem to me to belong on the urban-dysfunction side of things. They make day-to-day life difficult, irritating, frustrating. They disrupt routines, increase the cognitive burden, and build stress. Over time, we have less happiness and higher healthcare expenses, both of which diverge unequally for more privileged versus less privileged people. The city as a whole could become more unequal in more ways, and the next time it rains, a new vicious cycle could be born. An agenda limited to saving lives will easily overlook this, as will an agenda that overlooks facets of life that aren’t problematic yet but could soon be.

Or maybe Chennai still has some way to go? The Tamil Nadu revenue and disaster-management minister Sattur Ramachandran just came on TV talking about how it’s notable that no lives were lost…

Notes on the NIF nuclear fusion breakthrough

My explainer/analysis of the US nuclear fusion breakthrough was published today. Some material didn’t make it to the final draft because of space and tone constraints; I’m publishing it below.

1. While most US government officials present at the announcement of the NIF’s results, including the president’s science advisor Arati Prabhakar (and with the exception of energy secretary Jennifer Granholm), were clear that a power plant was a long way off, they weren’t sufficiently clear that the road from this achievement to such a power station is neither well understood nor straightforward, even as they repeatedly invoked the prospect of commercial power production. LLNL director Kim Budil even said she expects the technology to be ready for commercialisation within five decades. Apart from overstating the prospect, their words also created a stark contrast with how the US government has responded to other countries’ demands for more climate financing and for emissions cuts. It’s okay with playing up a potential source of clean energy that can be realised (if at all) only well after global warming has shot past the Paris Agreement threshold of 1.5º C, but it dances around its contributions to the $100 billion climate fund it promised to pay into, and around demands that it cut emissions – both within the country and through its investments around the world – before 2050.

Also read: US fusion bhashan

2. A definitive prerequisite for a fusion setup to have achieved ignition [i.e. the fusion yield being higher than the input energy] is the Lawson criterion, named for nuclear engineer John D. Lawson, who derived it in 1955. It stipulates a minimum value for the product of the ion density and the confinement time for different fuels. For the deuterium-tritium reaction mixture at the NIF, for example, the product must be at least 10¹⁴ s/cm³. In words, this means the temperature must be high enough for long enough to allow the ions to get close to each other, given they are packed densely enough – achieved by compressing the capsule that contains them. The Lawson criterion in effect tells us why high temperature and high pressure are prerequisites for inertial confinement fusion and why we can’t easily compromise on them on the road to higher gain.
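Here’s a minimal sketch of that arithmetic – the density and confinement-time values below are purely illustrative placeholders, not NIF figures; only the 10¹⁴ s/cm³ threshold comes from the criterion as quoted above:

```python
# Minimal sketch of a Lawson-type check for a deuterium-tritium mix.
# The density and confinement time are illustrative placeholders, not NIF data.

def meets_lawson(ion_density_per_cm3, confinement_time_s, threshold=1e14):
    """Return True if the density-confinement product clears the threshold (in s/cm^3)."""
    return ion_density_per_cm3 * confinement_time_s >= threshold

# Inertial confinement trades a very short confinement time for a very high
# compressed ion density (hypothetical numbers, for illustration only):
print(meets_lawson(ion_density_per_cm3=1e25, confinement_time_s=1e-10))  # True: 1e15 >= 1e14
```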

3. Mentions of “gain” in the announcement on December 13 referred to the scientific gain of the fusion test: the ratio of fusion output to the lasers’ output. Its value is thus a reflection of the challenges of heating plasma, sources of heat loss during ignition and fusion, and increasing fusion yield. While government officials at the announcement were careful to note that the NIF result was a “scientific breakthrough”, other scientists told this correspondent that a scientific gain of 1 was a matter of time and that the real revolution would be a higher engineering gain. This is the ratio of the power supplied by an inertial confinement fusion power plant to the grid to the plant’s recirculating power – i.e. the power consumed to create, maintain and heat the fusion plasma and to operate other facilities. This metric is more brutal than the scientific gain because it includes the latter’s challenges as well as the challenges to reducing energy loss in electric engineering equipment.

4. One plasma physicist likened the NIF’s feat to “the Kitty Hawk moment for the Wright brothers” to The Washington Post. But in a January 2022 paper, scientists from the US Department of Energy wrote that their “Kitty Hawk moment” would be the wall-plug gain reaching 1, instead of the scientific gain, for fusion energy. The wall-plug gain is the ratio of the power from fusion to the power drawn from the wall-plug to run the power plant.
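For a rough sense of how these gains differ, here’s a back-of-the-envelope sketch using the widely reported approximate figures for the December 2022 shot – about 2.05 MJ of laser energy on target, about 3.15 MJ of fusion yield, and roughly 300 MJ drawn from the grid to fire the lasers; treat all three numbers as approximations:

```python
# Back-of-the-envelope gains for the December 2022 NIF shot.
# The figures are widely reported approximations, not official facility data.
laser_energy_mj = 2.05      # energy the lasers delivered to the target
fusion_yield_mj = 3.15      # energy released by the fusion reactions
wall_plug_energy_mj = 300   # rough energy drawn from the grid to fire the lasers

scientific_gain = fusion_yield_mj / laser_energy_mj     # ~1.5: the "ignition" headline
wall_plug_gain = fusion_yield_mj / wall_plug_energy_mj  # ~0.01: the grid's point of view

print(round(scientific_gain, 2), round(wall_plug_gain, 3))
# Engineering gain would further fold in the conversion of fusion heat to electricity
# and all of a plant's recirculating power, so it would be lower still.
```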

5. The mode of operation of the inertial confinement facility at NIF is indirect-drive and uses central hotspot ignition. Indirect-drive means the laser pulses don’t directly strike the capsule holding the ions but the hohlraum holding the capsule. When the lasers strike the capsule directly, they need to do so as symmetrically as possible to ensure uniform compression on all sides. Any asymmetry leads to a Rayleigh-Taylor instability that rapidly reduces the yield. Achieving such pinpoint accuracy is quite difficult: the capsule is only 2 mm wide, so even a sub-millimetre deviation in a single pulse can suppress the output to an enormous degree. Once the laser pulses have heated up the hohlraum’s inside surface, the latter emits X-rays, which then uniformly compress and heat the capsule from all sides.

A schematic of the laser, hohlraum and capsule setup for indirect-drive inertial confinement fusion at the National Ignition Facility. Source: S.H. Glenzer et al. Phys. Rev. Lett. 106, 085004

6. However, this doesn’t heat all of the fuel to the requisite high temperature. The fuel is arranged in concentric layers, and the heat and pressure cause the 20 µg of deuterium-tritium mix in the central portion to fuse first. This sharply increases the temperature and launches a thermonuclear “burn wave” into the rest of the fuel, which triggers additional reactions. The wisdom of this technique arises from the fact that fusing a deuterium nucleus with a tritium nucleus requires a temperature corresponding to 5-10 keV of energy (on the order of a hundred million kelvin) whereas the yield is 17,600 keV. So supplying the energy for just one fusion reaction could yield enough energy for hundreds more. Its downside in the inertial confinement context is that a not-insignificant fraction of the energy needs to be diverted to compressing the nuclei instead of heating them, which reduces the gain.
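A crude bit of per-reaction bookkeeping using the figures quoted above (illustrative arithmetic, not a plasma model; the 10 keV is the upper end of the quoted range, counted once per reacting nucleus):

```python
# Crude energy bookkeeping for a single deuterium-tritium fusion reaction,
# using the figures quoted above. Illustrative arithmetic only.
investment_kev = 2 * 10   # ~10 keV of thermal energy for each of the two reacting nuclei
yield_kev = 17_600        # energy released by one deuterium-tritium reaction

print(yield_kev / investment_kev)  # ~880: one reaction's yield could heat hundreds more pairs,
                                   # which is what lets the central hotspot launch a burn wave
```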

7. As the NIF announcement turns the world’s attention to the prospect of nuclear fusion, ITER’s prospects are also under scrutiny. According to [Shishir Deshpande of IPR Gandhinagar], who is also former project director of ITER-India, the facility is 75% complete and “key components under manufacturing” will arrive in the “next three to five years”. It has already overrun several cost estimates and deadlines (India is one of its funding countries) – but by [another scientist’s] estimate, it has made “great progress” and will “deliver”. Extending the “current experiments” – referring to the NIF’s tests – “is not a direct path to a power station, unlike ITER, which is far more advanced in being an integrated power station. Many engineering issues which ITER is built to address are not even topics yet for laser fusion, such as survival of key components under high-intensity radiation environments.”

Skyward light, wayward light

This is welcome news:

… even if it’s curious that three of the four officially stated reasons for designating this ‘dark sky reserve’ aren’t directly related to the telescopes, and that telescopes had to come up in the area for the local government, the Indian Institute of Astrophysics (IIA) and whoever else to acknowledge that it deserved to have dark skies. I believe that ‘doing’ astronomy with telescopes shouldn’t be a prerequisite to “promoting livelihoods through … astro-tourism” and “spreading awareness and education about astronomy”. And that’s why I wonder if there are other sites in the country that are favourable to a popular science-driven activity, where the locals could be taught to guide tourists in pleasurably performing that activity, but where this hasn’t happened because scientists aren’t there doing it themselves.

But frankly, the government should declare as much of the country a dark-sky reserve as possible*, in consultation with local stakeholders – or at least create a new kind of ‘reserve’ where, say, light, noise and other neglected forms of pollution are limited by law to a greater degree than is common, and where sustainability along these axes is encouraged as well. This is in opposition to dealing with these irritants in piecemeal or ad hoc fashion, where each type of pollution is addressed in isolation (even when they have common sources, like factories), and – to a lesser extent – in opposition to creating such reserves only because scientists require certain conditions for their work.

(* I’m obviously cynical about instituting large-scale behavioural change that’d preclude the need for such reserves.)

Case in point: the new Hanle dark-sky reserve hasn’t been designated as such under law but through an MoU between the UT of Ladakh, the IIA and the Ladakh Autonomous Hill Development Council, with a commitment to fulfilling requirements defined by the International Dark Sky Association, based in the US. Fortunately – but sadly, considering we had to wait for an extraneous prompt – one of the association’s requirements is “current/planned legislation to protect the area”.

Such ‘reserves’ also don’t have to be set up at the expense of development, principally because many of the ways to reduce light (and noise) pollution can do so without coming in the way of development or of our right as citizens to enjoy public spaces in all the ways to which we’re entitled. (I’m asking for ‘less’ knowing the Indian government’s well-known reluctance to take radical steps to protect natural resources, but we’re also at a point, from the PoV of the climate crisis, where every gain is good gain. I’m open to being persuaded otherwise, however.)

One of the simplest ways is in fact to ensure that no public lighting installation casts light upward, into the sky, and that all of it faces down. Doing this will remove the installation’s contribution to light pollution, improve energy-use efficiency by not ‘wasting’ any light thrown upwards, and reduce the power consumed by limiting it to what’s required to illuminate only what needs to be illuminated, together with surfaces that limit the amount of light scattered upward.

Other similarly simple ways include turning off all lights when you have no need for them (such as when you leave a room), preferring energy-efficient lighting solutions, and actively limiting the use of decorative lighting – but the ‘turn the lamps downward’ bit is both sensible and surprising in its general non-achievement. Hanle, of course, will be subject to more stringent restrictions, including requiring people to keep the colour temperature of lamps under 3,000 kelvin and the light flux of unshielded lamps under 500 lumen. Here’s an example of the difference to be made:

That’s a (visibly) necessary extremum, in a manner of speaking – to maintain suitable viewing conditions for the ground-based telescopes in the area. India’s (and the UAE’s, for that matter, since I was there recently) industrialisation and urbanisation, on the other hand, are creating an unnecessary extremum, giving seemingly trivial concerns like light pollution the slip. A 2016 study found that less than 10% of India is exposed to “very high nighttime light intensities with no dark adaption for human eyes” – but also that around 80% of the population experiences anything between nighttime light “from 1 to 8% above the natural light” and a complete lack of access to “true night because it is masked by an artificial twilight”.

The tragedy, if we can call it that, is exacerbated when even trivial fixes aren’t implemented properly. Or is it when an industrialist might look at this chart and think, “We’ve still got a lot of white to go”?

Science’s humankind shield

We need to reconsider where the notion that “science benefits all humans” comes from and whether it is really beneficial.

I was prompted to this after coming upon a short article in Sky & Telescope about the Holmdel Horn antenna in New Jersey being threatened by a local redevelopment plan. In the 1960s, Arno Penzias and Robert Wilson used the Holmdel Horn to record the first observational evidence of the cosmic microwave background, which is radiation left over from – and therefore favourable evidence for – the Big Bang event. In a manner of speaking, then, the Holmdel Horn is an important part of the story of humans’ awareness of their place in the universe.

The US government designated the site of the antenna a ‘National Historic Landmark’ in 1989. On November 22, 2022, the Holmdel Township Committee nonetheless petitioned the planning board to consider redeveloping the locality where the antenna is located. According to the Sky & Telescope article, “If the town permits development of the site, most likely to build high-end residences, the Horn could be removed or even destroyed. The fact that it is a National Historic Landmark does not protect it. The horn is on private property and receives no Federal funds for its upkeep.” Some people have responded to the threat by suggesting that the Holmdel Horn be moved to the sprawling Green Bank Telescope premises in Virginia. This would separate it from the piece of land that can then be put to other use.

Overall, based on posts on Twitter, the prevailing sentiment appears to be that the Holmdel Horn antenna is a historic site worthy of preservation. One commenter, an amateur astronomer, wrote under the article:

“The Holmdel Horn Antenna changed humanity’s understanding of our place in the universe. The antenna belongs to all of humanity. The owners of the property, Holmdel Township, and Monmouth County have a historic responsibility to preserve the antenna so future generations can see and appreciate it.”

(I think the commenter meant “humankind” instead of “humanity”.)

The history of astronomy involved, and involves, thousands of antennae and observatories around the world. Even with an arbitrarily high threshold to define the ‘most significant’ discoveries, there are likely to be hundreds (if not more) of facilities that made them and could thus be deemed to be worthy of preservation. But should we really preserve all of them?

Astronomers, perhaps more than any other scientists, are likely to be keenly aware of the importance of land to the scientific enterprise. Land is a finite resource that is crucial to most, if not all, realms of the human enterprise. Astronomers experienced this firsthand when the Indigenous peoples of Hawai’i protested the construction of the Thirty Meter Telescope on Mauna Kea, leading to a long-overdue reckoning with the legacy of telescopes on this and other landmarks that are culturally significant to the locals, but whose access to these sites has come to be mediated by the needs of astronomers. In 2020, Nithyanand Rao wrote an informative article about how “astronomy and colonialism have a shared history”, with land and access to clear skies as the resources at its heart.


One argument that astronomers in favour of building or retaining these controversial telescopes have used is that the fruits of science “belong to all of humankind”, including the locals. This is dubious in at least two ways.

First, are the fruits really accessible to everyone? This doesn’t just mean the papers that astronomers publish based on work using these telescopes should be openly and freely available. It also requires that the topics astronomers work on be based on the consensus of all stakeholders, not just the astronomers. Also, who does and doesn’t get observation time on the telescope? What does the local government expect the telescope to achieve? What are the sorts of studies the telescope can and can’t support? Are the ground facilities equally accessible to everyone? There are more questions to ask, but I think you get the idea: claiming that the fruits of scientific labour – at least astronomical labour – are available to everyone is disingenuous simply because there are many axes of exclusion in the instrument’s construction and operation.

Second, who wants a telescope? More specifically, what are the terms on which it might be fair for a small group of people to decide what “all of humankind” wants? Sure, what I’m proposing sounds comical – a global consensus mechanism just to make a seemingly harmless statement like “science benefits everyone” – but the converse seems equally comical: to presume benefits for everyone when in fact they really accrue to a small group and to rely on self-fulfilling prophecies to stake claims to favourable outcomes.

Given enough time and funds, any reasonably designed international enterprise, like housing development or climate financing, is likely to benefit humankind. Scientists have advanced similar arguments when advocating for building particle supercolliders: that the extant Large Hadron Collider (LHC) in Europe has led to advances in medical diagnostics, distributed computing and materials science, apart from confirming the existence of the Higgs boson. All these advances are secondary goals, at best, and justify neither the LHC nor its significant construction and operational costs. Also, who’s to say we wouldn’t have made these advances by following any other trajectory?

Scientists, or even just the limited group of astronomers, often advance the idea that their work is for everyone’s good – elevating it to a universally desirable thing, propping it up like a shield in the face of questions about whether we really need an expensive new experiment – whereas on the ground its profits are disseminated along crisscrossing gradients, limited by borders.

I’m inclined to harbour a similar sentiment towards the Holmdel Horn antenna in the US: it doesn’t belong to all of humanity, and if you (astronomers in the US, e.g.) wish to preserve it, don’t do it in my name. I’m indifferent to the fate of the Horn because I recognise that what we do and don’t seek to preserve is influenced by its significance as an instrument of science (in this case) as much as by ideas of national prestige and self-perception – and this is a project in which I have never had any part. A plaque installed on the Horn reads: “This site possesses national significance in commemorating the history of the United States of America.”

I also recognise the value of land and, thus, must acknowledge the significance of my ignorance of the history of the territory the Horn currently occupies, as well as the importance of reclaiming it for newer use. (I am, however, opposed in principle to the Horn being threatened by the prospect of “high-end residences” rather than affordable housing for more people.) Obviously others – most others, even – might feel differently, but I’m curious a) whether scientists anywhere, other than astronomers, have ever systematically dealt with push-back along these lines, and b) how they defend their work at large when they can’t or won’t use the “benefits everyone” tack.

The strange beauty of Planck units

What does it mean to say that the speed of light is 1?

We know the speed of light in the vacuum of space to be 299,792,458 m/s – or about 300,000 km/s. It’s a speed that’s very hard to visualise with the human brain; in fact, it’s so fast as to be practically instantaneous in the human experience. In some contexts it might be reassuring to remember the 300,000 km/s figure, such as when you’re a theoretical physicist working on quantum physics problems and you need to remember that reality is often (but not always) local – meaning that when a force appears to transmit its effects on its surroundings really rapidly, the transmission is still limited by the speed of light. (‘Not always’ because quantum entanglement appears to break this rule, although it can’t be used to send information faster than light.)

Another way to understand the speed of light is as an expression of proportionality. If another entity, which we’ll call X, can move at best at 150,000 km/s in the vacuum of space, we can say the speed of light is 2x the speed of X in this medium. Let’s say that instead of km/s we adopt a unit of speed called b/s, where b stands for bloop: 1 bloop = 79 km. The speed of light in vacuum then becomes about 3,797 b/s and the speed of X about 1,899 b/s. The proportionality between the two entities – the speeds of light and X in vacuum – you’ll notice is still 2x.

Let’s change things up a bit more and express each speed (in km/s) as the nth power of 2. n = 18 comes closest for light and n = 17 for X. (The exact answer in each case is log s/log 2, where s is the speed of the entity.) The constant of proportionality between these two numbers is nowhere near 2. The reason is that we switched from linear units to logarithmic ones: multiplying a speed by 2 no longer doubles the number that represents it but adds 1 to the exponent.
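
If you’d like to check this yourself, here’s a minimal Python sketch of the arithmetic above (the “bloop” is, of course, a made-up unit, and X is the hypothetical entity from the example):

```python
import math

# Speeds in km/s (the rounded figures used above)
c = 300_000   # speed of light in vacuum, approx.
x = 150_000   # the hypothetical entity X

# Linear change of units: 1 bloop = 79 km
c_bloops = c / 79   # ~3,797 b/s
x_bloops = x / 79   # ~1,899 b/s
print(c / x, c_bloops / x_bloops)   # 2.0 and 2.0 - the ratio survives the unit change

# Logarithmic "units": express each speed as a power of 2
n_c = math.log2(c)   # ~18.19, so n = 18 comes closest
n_x = math.log2(x)   # ~17.19, so n = 17 comes closest
print(n_c / n_x)     # ~1.06 - nowhere near 2
print(n_c - n_x)     # ~1.0 - doubling a speed adds 1 to the exponent instead
```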

This example shows how even our SI units – which allow us to make sense of how long a mile is relative to a kilometre and how long a solar year is in seconds, and thus standardise our sense of various dimensions – are conventions rather than anything universal. The SI units have been defined keeping the human experience of reality in mind – as opposed to, say, the experiences of tardigrades or blue whales.

As it happens, when you’re a theoretical physicist, the human experience isn’t very helpful as you’re trying to understand the vast scales on which gravity operates and the infinitesimal realm of quantum phenomena. Instead, physicists set aside their physical experiences and turned to the universal physical constants: numbers whose values are constant in space and time, and which together control the physical properties of our universe.

By combining only four universal physical constants, the German physicist Max Planck found in 1899 that he could express certain values of length, mass, time and temperature without any reference to the human experience. Put another way, these are values of distance, mass, duration and temperature built purely out of the constants of our universe. The four constants are:

  • G, the gravitational constant (roughly speaking, defines the strength of the gravitational force between two massive bodies)
  • c, the speed of light in vacuum
  • h, the Planck constant (the constant of proportionality between a photon’s energy and frequency)
  • kB, the Boltzmann constant (the constant of proportionality between the average kinetic energy of a group of particles and the temperature of the group)

Based on Planck’s idea and calculations, physicists have been able to determine the following:

[Table of the base Planck units – length, mass, time and temperature – omitted here. Credit: Planck units/Wikipedia]

(Note here that the Planck constant, h, has been replaced with the reduced Planck constant ħ, which is h divided by 2π.)

When the speed of light is expressed in these Planck units, it comes out to a value of 1 – i.e. 1 Planck length per Planck time, or 1.616255×10^-35 m per 5.391247×10^-44 s. The same goes for the values of the gravitational constant, the Boltzmann constant and the reduced Planck constant.
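
Here’s one way to see this for yourself: a short Python sketch that builds the four Planck units from the CODATA values of the constants and then expresses each constant in those units – every one of them comes out to 1 (up to floating-point error):

```python
import math

# CODATA values of the four universal constants, in SI units
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light in vacuum, m/s
h    = 6.62607015e-34    # Planck constant, J s
kB   = 1.380649e-23      # Boltzmann constant, J/K
hbar = h / (2 * math.pi) # reduced Planck constant

# The Planck units, built only out of these constants
l_P = math.sqrt(hbar * G / c**3)             # Planck length, ~1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)             # Planck time, ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)                # Planck mass, ~2.2e-8 kg
T_P = math.sqrt(hbar * c**5 / (G * kB**2))   # Planck temperature, ~1.4e32 K

# Express each constant in Planck units: all four come out to 1
print(c / (l_P / t_P))                        # Planck lengths per Planck time
print(G / (l_P**3 / (m_P * t_P**2)))          # l_P^3 per (m_P t_P^2)
print(hbar / (m_P * l_P**2 / t_P))            # m_P l_P^2 per t_P
print(kB / (m_P * l_P**2 / (t_P**2 * T_P)))   # m_P l_P^2 per (t_P^2 T_P)
```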

Remember that units are expressions of proportionality. Because the Planck units are all expressed in terms of universal physical constants, they give us a better sense of what is and isn’t proportionate. To borrow Frank Wilczek’s example, we know that the binding energy due to gravity contributes only ~0.000000000000000000000000000000000000003% of a proton’s mass; the rest comes from its constituent particles and their energy fields. Why this enormous disparity? We don’t know. More importantly, which entity has the onus of providing an explanation for why it’s so out of proportion: gravity or the proton’s mass?

The answer is in the Planck units, in which the value of the gravitational constant G is the desired 1, whereas the proton’s mass is the one out of proportion – a ridiculously small 10^-19 (approx.). So the onus is on the proton to explain why it’s so light, rather than on gravity to explain why it acts so feebly on the proton.
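
A quick back-of-the-envelope check of that figure, in the same spirit as the sketch above (the proton mass used here is the CODATA value):

```python
import math

G, c, h = 6.67430e-11, 2.99792458e8, 6.62607015e-34
hbar = h / (2 * math.pi)

m_P = math.sqrt(hbar * c / G)   # Planck mass, ~2.2e-8 kg
m_proton = 1.67262192e-27       # proton mass, kg (CODATA)

print(m_proton / m_P)           # ~7.7e-20, i.e. roughly 10^-19 in Planck units
```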

More broadly, the Planck units define our universe’s “truly fundamental” units. All other units – of length, mass, time, temperature, etc. – ought to be expressible in terms of the Planck units. If they can’t be, physicists will take that as a sign that their calculations are incomplete or wrong, or that there’s a part of physics they haven’t discovered yet. The use of Planck units can reveal such sources of tension.

For example, since our current theories of physics are founded on the universal physical constants, the theories can’t describe reality beyond the scales set by the Planck units. This is why we don’t really know what happened in the first 10^-43 seconds after the Big Bang (or, for that matter, during any event that lasts for a shorter duration), how matter behaves beyond the Planck temperature, or what gravity feels like at distances shorter than 10^-35 m.

In fact, just as gravity dominates the human experience of reality while quantum physics dominates the microscopic realm, physicists expect that theories of quantum gravity (like string theory) will dominate the experience of reality at the Planck length. What will this reality look like? We don’t know, but we know that it’s a good question.

Other helpful sources: