An odd paper about India’s gold OA fees

A paper about open-access fees in India, published recently in the journal Current Science, has repeatedly surfaced in my networks because of some problems with it. The paper is entitled ‘Publications in gold open access and article processing charge expenditure: evidence from Indian scholarly output’ and is authored by Raj Kishor Kampa, Manoj Kumar Sa, and Mallikarjun Dora of Berhampur University, the Indian Maritime University, and IIM Ahmedabad respectively. This is the paper’s abstract:

Article processing charges (APCs) ensure the financial viability of open access (OA) scholarly journals. The present study analyses the number of gold OA articles published in the Web of Science (WoS)-indexed journals by Indian researchers during 2020, including subject categories that account for the highest APC in India. Besides, it evaluates the amount of APC expenditure incurred in India. The findings of this study reveal that Indian researchers published 26,127 gold OA articles across all subjects in WoS-indexed journals in 2020. Researchers in the field of health and medical sciences paid the highest APC, amounting to $7 million, followed by life and earth sciences ($6.9 million), multidisciplinary ($4.9 million), and chemistry and materials science ($4.8 million). The study also reveals that Indian researchers paid an estimated $17 million as APC in 2020. Furthermore, 81% of APCs went to commercial publishers, viz. MDPI, Springer-Nature, Elsevier and Frontier Media. As there is a growing number of OA publications from India, we suggest having a central and state-level single-window option for funding in OA journals and backing the Plan S initiative for OA publishing in India.

It’s unclear what the point of the study is. First, it seems to have attempted a value-neutral assessment of how much scientists in India are paying as article processing charges (APCs) to have their papers published in gold OA journals. It concludes with some large, and frankly off-putting, numbers – a significant drain on the resources India has made available to its scholars to conduct research – yet it proceeds to suggest “having a central and state-level single-window option” so scientists can continue to pay these fees with less hassle, and for the Indian government (presumably) to back the Plan S initiative.

As far as I know, India has declined to join the Plan S initiative; this is a good thing for the reasons enumerated here (written when India was considering joining the initiative), one of which is that it enabled the same thing the authors of the paper have asked for but on an international scale: allowing gold OA journals to hike their APCs knowing that (often tax-funded) research funders will pay the bills. This paper also marks the first time I’ve known anyone to try to estimate the APCs paid by Indian scientists and, once estimated, deem the figures not worthy of condemnation.

Funnily enough, while the paper doesn’t concern itself with taking a position on gold OA in its abstract or in the bulk of its arguments, it does contain the following statements:

“Although there is constant growth in OA publications, there is also a barrier to publishing in quality OA journals, especially the Gold and Hybrid OA, which levies APC for publications.”

“However, the high APC charges have been an issue for low-income and underdeveloped countries. In the global south, the APC is a real obstacle to publishing in high-quality OA journals”

“Extant literature reveals a constant increase in APC by most publishers like BioMed Central (BMC), Frontiers Media, Multidisciplinary Digital Publishing Institute (MDPI), and Hindawi”

“One of the ideas of open access was to make equitable access and check the rampant commercialization of scholarly publications. Still, surprisingly, many established publishers have positioned themselves in the OA landscape.”

“formulation of national-level OA policies in India is the need of the hours since OA is inevitable as everyone focuses on equity and access to scholarly communications.”

But these statements only render the paper’s conclusion all the more odd.

Of course, these are my views and those of some scholars in India’s OA advocacy community, and the authors of the Current Science paper are free to disagree. The second issue, however, is objectively frustrating.

Unlike the products of science communication and science journalism, a scientific paper may simply present a survey of some numbers of interest to a part of the research community, but the Current Science paper falls short on this count as well. Specifically, not once does its body mention the words “discount” and “waiver” (or their variations), which is strange because OA journals regularly specify discounted APCs – or waive them altogether – if certain conditions are met (including, in the case of some journals, if a paper’s authors are from a low- and middle-income country). Accounting for discounts, researchers Moumita Koley (IISc Bengaluru) and Achal Agrawal (independent) estimated the authors could have overestimated Indian scientists’ APC expenses by 47.7% – ranging from 4.8% when submitting manuscripts to the PLoS journals to 428.3% when submitting to journals of the American Chemical Society.
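To make the arithmetic of “overestimation” concrete – using hypothetical figures of my own, not numbers from the Koley-Agrawal correspondence – an overestimate of more than 100%, like the 428.3% cited above, simply means the estimate was more than five times what was actually paid:

```python
# Hypothetical illustration: suppose a journal's listed APC is $4,000 but an
# LMIC discount brings the fee actually paid down to $757. An estimate based
# on the listed price then overstates the real expense by about 428%.
def overestimate_pct(listed_apc: float, actual_paid: float) -> float:
    """Percentage by which the listed APC overstates the fee actually paid."""
    return (listed_apc - actual_paid) / actual_paid * 100

print(round(overestimate_pct(4000, 757)))  # ≈ 428
```

The exact dollar figures here are invented for illustration; the point is only that large discounts translate into very large overestimation percentages.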

Gold OA’s publishing fees are not in proportion to the amount of work and resources required to make a published paper open-access, and are often extortionate. And while discounts and waivers are available, they don’t spare research-funders in other parts of the world the expense; the fees continue to maintain large profit margins at the expense of governments’ allocations for research; and – as scientist Karishma Kaushik wrote for The Hindu – the process of availing these concessions can be embarrassing for researchers.

Issue #1: the Current Science paper erects a flawed argument both in favour of and in opposition to APCs by potentially overestimating them! Issue #2: In their correspondence, Koley and Agrawal write:

“A possible reason for their error could be that DOAJ, which forms their primary source, does not mention discounts usually given to authors from lower-income countries. Another important error is that while the authors claim that they filtered the articles. Page 1058: ‘Extant literature suggests that the corresponding author most likely pays the APCs’. Following the corresponding author criterion, APC expenditure incurred by Indian researchers was estimated; they have not actually done so. Table 2 shows the discrepancy if one applies the filter. Also, Table 1 shows the estimated error in calculation if this criterion is included in calculation.”

To this, the authors of the Current Science paper responded thus:

“We wish to clarify any misunderstanding that may have arisen. We analysed the APC expenditure incurred in India without calculating the discounts or waivers received by authors as there is no specific single source to find all discounts, for example, an author-level or institute-level discount; hence, it would be difficult to provide an actual amount that Indian researchers spent on APC. Additionally, discounts or any publisher-provided waivers are recent developments, and discounts/waivers given to authors from LMIC countries were not mentioned in DOAJ, which is the primary source of the present study. Hence, it was not analysed in the current study. These factors may be considered as limitations of the study.”

This is such a blah exchange. To the accusation that the authors failed to account for discounts and waivers, the authors admit – not in their paper but in their response to a rebuttal – that they didn’t, and that it’s a shortcoming. The authors also write that four publishers they identified as receiving 53% of APCs out of India – MDPI, Springer-Nature, Elsevier, and Frontiers Media – don’t offer “country-level discounts/waivers to authors” from LMICs, and that this invalidates Koley and Agrawal’s concern that the APCs have been substantially overestimated. However, they don’t address the following possibilities:

  1. The identification of these four publishers itself was founded on APC estimates that have been called into question;
  2. “Country-level” concessions aren’t the only kind of concessions; and
  3. The decision to downplay the extent of overestimation doesn’t account for the publishers that received the other 47% of the APCs.

It’s not clear, in sum, what value the Current Science paper claims to have, and perhaps this is a question better directed at Current Science itself, which published the original paper, two rebuttals – the second by Jitendra Narayan Dash of NISER Bhubaneswar – and the authors’ unsatisfactory replies to them, and which, since we’re on the topic, doesn’t seem to have edited the first correspondence before publishing it.

What Gaganyaan tells us about chat AI, and vice versa

Talk of chat AI* is everywhere, as I’m sure you know. Everyone would like to know where these apps are headed and what their long-term effects are likely to be, but it seems it’s still too soon to tell, at least in sectors that have banked on human creativity. Perhaps that’s why the topic was a centrepiece of the first day of the inaugural conference of the Science Journalists’ Association of India (SJAI) last month, yet little came of the discussion beyond using chat AI apps to automate tedious tasks like transcribing. One view, in the limited context of education, is that chat AI apps will be like the electronic calculator. According to Andrew Cohen, a professor of physics at the Hong Kong University of Science and Technology, as quoted (and rephrased) by Amrit BLS in an article for The Wire Science:

When calculators first became available, he said, many were concerned that it would discourage students from performing arithmetic and mathematical functions. In the long run, calculators would negatively impact cognitive and problem-solving skills, it was believed. While this prediction has partially come true, Cohen says the benefits of calculators far outweigh the drawbacks. With menial calculations out of the way, students had the opportunity to engage with more complex mathematical concepts.

Deutsche Welle had an article making a similar point in January 2023:

Daniel Lametti, a Canadian psycholinguist at Acadia University in Nova Scotia, said ChatGPT would do for academic texts what the calculator did for mathematics. Calculators changed how mathematics were taught. Before calculators, often all that mattered was the end result: the solution. But, when calculators came, it became important to show how you had solved the problem—your method. Some experts have suggested that a similar thing could happen with academic essays, where they are no longer only evaluated on what they say but also on how students edit and improve a text generated by an AI—their method.

This appeal to the supposedly higher virtue of the method, over arithmetic ability and the solutions to which it could or couldn’t lead, is reminiscent of an issue that played out earlier this year – and will likely raise its head again – vis-à-vis India’s human spaceflight programme. This programme, called ‘Gaganyaan’, is expected to have the Indian Space Research Organisation (ISRO) launch astronauts onboard an India-made rocket no earlier than 2025.

The rocket will be a modified version of the LVM-3 (previously called the GSLV Mk III); the modifications, including human-rating the vehicle, and their tests are currently underway. In October 2023, ISRO chairman S. Somanath said in an interview to The Hindu that the crew module on the vehicle, which will host the astronauts during their flight, “is under development. It is being tested. There is no capability in India to manufacture it. We have to get it from outside. That work is currently going on. We wanted a lot of technology to come from outside, from Russia, Europe, and America. But many did not come. We only got some items. That is going to take time. So we have to develop systems such as environmental control and life support systems.”

Somanath’s statement seemed to surprise many people who had believed that the human-rated LVM-3 would be indigenous in toto. This is like the Ship of Theseus problem: if you replace all the old planks of a wooden ship with new ones, is it still the same ship? Or: if you replace many or all the indigenous components of a rocket with ones of foreign provenance, is it still an India-made launch vehicle? The particular case of the UAE is also illustrative: the country neither has its own launch vehicle nor the means to build and launch one with components sourced from other countries. It lacks the same means for satellites as well. Can the UAE still be said to have its own space programme because of its ‘Hope’ probe to orbit and study Mars?

Cohen’s argument about chat AI apps being like the electronic calculator helps cut through the confusion here: the method, i.e. the way in which ISRO pieces the vehicle together to fit its needs, within its budget, engineering capabilities, and launch parameters, matters more. To quote from an earlier post, “‘Gaganyaan’ is not a mission to improve India’s manufacturing capabilities. It is a mission to send Indians to space using an Indian launch vehicle. This refers to the recipe, rather than the ingredient.” For the same reason, the UAE can’t be said to have its own space programme either.

Focusing on the method, especially in a highly globalised world-economy, is a more sensible way to execute space programmes because the method – i.e. knowing how to execute it – is the most valuable commodity. Obtaining it requires years of investment in education, skilling, and utilisation. I suspect this is also why there’s more value in selling launch-vehicle services than in selling launch vehicles themselves. Similarly, the effects of the electronic calculator on science education speak to advantages that were virtually unknown-unknowns, and it seems reasonable to assume that chat AI will have similar consequences (with the caveat that the metaphor is imperfect: arithmetic isn’t comparable to language, and large language models can do what calculators can and more).


* I remain wary of the label ‘AI’ applied to “chat AI apps” because their intelligence – if there is one beyond sophisticated word-counting – is aesthetic, not epistemological, yet it’s also becoming harder to maintain the distinction in casual conversation. This is after setting aside the question of whether the term ‘AI’ itself makes sense.

Cognitive ability and voting ‘leave’ on Brexit

In a new study published in the journal PLoS ONE on November 22, a pair of researchers from the University of Bath in the UK have reported that “higher cognitive ability” is “linked to higher chance of having voted against Brexit” in the June 2016 referendum. The authors have reported this based on ‘Understanding Society’, a “nationally representative annual longitudinal survey of approximately 40,000 households, funded by the UK Economic and Social Research Council”, conducted in 12 waves between 2009 and 2020. The researchers assessed people’s cognitive ability as a combination of five tests:

Word recall: “… participants were read a series of 10 words and were then asked to recall (immediately afterwards and then again later in the interview) as many words as possible, in any order. The scores from the immediate and delayed word recall task are then summed together”

Verbal fluency: “… participants were given one minute to name as many animals as possible. The final score on this item is based upon the number of unique correct responses”

Subtraction test: “… participants were asked to give the correct answer to a series of subtraction questions. There is a sequence of five subtractions, which started with the interviewer asking the respondent to subtract 7 from 100. The respondent is then asked to subtract 7 again, and so on. The number of correct responses out of a maximum of five was recorded”

Fluid reasoning: “… participants were asked to write down a number sequence—as read by the interviewer—which consists of several numbers with a blank number in the series. The respondent is asked which number goes in the blank. Participants were given two sets of three number sequences, where performance in the first set dictated the difficulty of the second set. The final score is based on the correct responses from the two sets of questions—whilst accounting for the difficulty level of the second set of problems”

Numerical reasoning: “Participants were asked up to five questions that were graded in complexity. The type of questions asked included: “In a sale, a shop is selling all items at half price. Before the sale, a sofa costs £300. How much will it cost in the sale?” and “Let’s say you have £200 in a savings account. The account earns ten percent interest each year. How much would you have in the account at the end of two years?”. Based on performance on the first three items, participants are then asked either two additional (more difficult) questions or one additional (simpler) question”
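For the curious, here is my own working of the two sample questions quoted above (the answers are not part of the survey description):

```python
# Sample question 1: everything at half price, so a £300 sofa costs £150.
sale_price = 300 / 2

# Sample question 2: £200 earning 10% compound interest for two years,
# i.e. 200 * 1.1 * 1.1 = £242.
savings = 200 * (1 + 0.10) ** 2

print(sale_price, round(savings))  # 150.0 242
```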

On the face of it, the study’s principal finding, rooting the way people decided on ‘Brexit’ in cognitive ability, seems objectionable because it’s a small step away from casting an otherwise legitimate political outcome – i.e. the UK leaving the European Union – as the product of some kind of mental deficiency. Then again, in their paper, the authors have reasoned that this correlation is mediated by individuals’ susceptibility to misinformation: that people with “higher” cognitive ability are better able to cut through mis- or dis-information. This seems plausible, and in fact the objection is also mitigated by awareness of the Indian experience, where lynch mobs and troll armies have been set in motion by fake news, with deadly results.

This said, we must still guard against two fallacies. First: correlation isn’t causation. That higher cognitive ability could be correlated with voting ‘remain’ doesn’t mean higher cognitive ability caused people to vote ‘remain’. Second, the fallacy of the inverse: while there is reportedly a correlation between the cognitive abilities of people and their decision in the ‘Brexit’ referendum, it doesn’t mean that pro-Brexit votes couldn’t have been cast for any reason other than cognitive deficiencies. One Q&A from an interview that PLoS conducted with one of the authors, Chris Dawson, and published together with the paper makes a similar note:

Some people might assume that if Remain voters had on average higher cognitive abilities, this implies that voting Remain was the more intelligent decision. Can you explain why your research does not show this, and what misinformation has to do with it?

It is important to understand that our findings are based on average differences: there exists a huge amount of overlap between the distributions of Remain and Leave cognitive abilities. We calculated that approximately 36% of Leave voters had higher cognitive ability than the average (mean) Remain voter. So, for any Remain voters who were planning on boasting and engaging in one-upmanship, our results say very little about what cognitive ability differences may or may not exist between two random Leave and Remain voters. But what our results do imply is that misinformation about the referendum could have complicated decision making, especially for people with low cognitive ability.
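Dawson’s point about overlapping distributions is easy to illustrate. As a rough sketch of my own (not from the paper), assume the two groups’ ability scores are normally distributed with equal spread; then a mean gap of about 0.36 standard deviations is enough to reproduce the headline figure that roughly 36% of Leave voters scored above the average Remain voter:

```python
from statistics import NormalDist

# Hypothetical model (not the paper's actual data): treat Leave and Remain
# voters' cognitive-ability scores as two normal distributions with equal
# standard deviation, with the Remain mean sitting d SDs above the Leave mean.
def fraction_leave_above_remain_mean(d: float) -> float:
    """Fraction of the Leave distribution lying above the Remain mean."""
    return 1 - NormalDist(mu=0, sigma=1).cdf(d)

# A gap of ~0.36 SD yields about 36% of Leave voters above the Remain mean.
print(round(fraction_leave_above_remain_mean(0.36), 2))
```

In other words, even a statistically real difference in averages leaves the two distributions mostly overlapping, which is exactly why the finding says little about any two individual voters.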

The five tests that the researchers used to estimate cognitive ability (at least in a relative sense) are also potentially problematic. I only have an anecdotal counter-example, but I suspect many readers will be able to relate to it: I have an uncle who is well-educated (up to the graduate level) and has had a well-paying job for many years now, and he is a staunch bhakt – i.e. an uncritical supporter of India’s BJP government and its various policies, including (but not limited to) the CAA, the farm laws, anti-minority discrimination, etc. He routinely buys into disinformation and sometimes spreads some of his own, but I don’t see him doing badly on any of the five tests. Instead, his actions and views are better explained by his political ideology, which is equal parts conservative and cynical. There are millions of such uncles in India, and the same thing could be true of the people who voted ‘leave’ in the 2016 referendum: that it wasn’t their cognitive abilities so much as their ideological positions, and those of the people to whom they paid attention, that influenced the vote.

(The reported correlation itself can be explained away by the fact that most of those who voted ‘leave’ were older people, but the study does correct for age-related cognitive decline.)

The two researchers also have a big paragraph at the end where they delineate what they perceive to be the major issues with their work:

Most noticeably, the positive correlation between cognitive ability and voting to Remain in the referendum could, as always, be explained by omitted variable bias. Although we control for political beliefs and alliances, personality traits, a barrage of other socioeconomic factors and, in our preferred model, household fixed-effects, the variation of cognitive ability within households could be correlated with other unobservable traits, attitudes and behaviours. The example which comes to mind is an individual’s trust in politicians and government. Then Prime Minister of the UK David Cameron publicly declared his support for remaining in the EU, as did the Chancellor of the Exchequer. The UK Treasury published an analysis to warn voters that the UK would be permanently poorer if it left the EU [63]. In addition to this were the 10 Nobel-prize winning economists making the case in the days leading up to the referendum. Whilst cognitive ability has been linked with thinking like an economist [64,65], Carl [51] also finds evidence of a moderate positive correlation between trust in experts and IQ. Moreover, work on political attitudes and the referendum have shown that a lack of trust in politicians and the government is associated with a vote to Leave the EU [56]. Therefore, the positive relationship between cognitive ability and voting Remain could be attributable to those higher in cognitive function placing a greater weight on the opinion of experts. A final note is that our dependent variable is self-reported which may induce bias, for instance, social desirability bias. Against that, the majority (75.6%) of responses were recorded through a self-completion online survey and we do control for interview mode, which produces no statistically significant effects.

It’s important to consider all these alternative possibilities to the fullest before we assume, say, that improving cognitive ability will also lead to some political outcomes over others – or in fact before we entertain ideas about whether people whose cognitive abilities have declined, according to some tests and to below a particular threshold, should be precluded from participating in referendums. If nothing else, problems of discretisation quickly arise: i.e. where do we draw the line? For example, if people with Alzheimer’s disease can be kept from voting, should those who are mathematically illiterate, and would thus probably fail the fluid reasoning and numerical reasoning tests, be kept from voting too? Similarly, and expanding the remit from referendums to elections (which isn’t without problems), which test should potential voters be expected to pass before voting in different polls – say, from the panchayat to the Lok Sabha elections?

Consider also the debates at the time Haryana passed the Haryana Panchayati Raj (Amendment) Act in 2015, which stipulated among other things that to contest in panchayat polls, candidates had to have completed class 10 or its equivalent (plus adjustments if the candidates are from an SC community, women, etc.). Obviously those contesting the polls would be well past their youth and unlikely to return to school, so the Act effectively permanently disqualified them from contesting. As such, while the answers to the questions above may be clearer in less unequal societies like those of the UK, they are not so in India, where cognitively well-equipped people have been criminally selfish and public-spiritedness has been more strongly correlated with good-faith politics than education or literacy.

At the same time, the study and its findings also reiterate the significant role that mis/disinformation has come to play in influencing the way people vote, for example, which makes individuals’ cognitive abilities – and all the factors that influence them – another avenue through which to control, for better or for worse, the opportunities we have for healthy governance.

India’s science leadership

On October 17, the National Council for Education Research and Training (NCERT) introduced a reading module for middle-school students called “Chandrayaan Utsav”. It was released by Union Education Minister Dharmendra Pradhan in the presence of S. Somanath, Chairman of the Indian Space Research Organisation, and claims that while the Chandrayaan-3 achievement was great, it was not the first: that “literature tells us that it can be traced back [to the] Vymaanika Shaastra: Science of Aeronautics, which reveals that our country had the knowledge of flying vehicles” in ancient times.

ISRO is currently flying high on the back of completing the first uncrewed test flight of the Gaganyaan mission, the launch of Aditya-L1 to study the sun, and the success of the Chandrayaan-3 mission. Yet in May, Somanath had said at another event in Ujjain that many mathematical concepts as well as those of time, architecture, materials, aviation, and cosmology were first described in the Vedas, and that “Sanskrit suits the language of computers”.

Such pseudoscientific claims are familiar to us because many political leaders have made them, but when they are issued by the leaders of institutions showing the country what good science can achieve, they set up more than a contradiction: they raise the question of responsible scientific leadership. Obviously, the first cost of such claims is that they do unto the Vedas what the claimants claim the liberals are doing unto the Vedas: forgetting them by obscuring what they really say. Who knows what the Vedas really say? Very few, I reckon – and the path to knowing more is now rendered tougher because the claims also cast any other effort to study the Vedas – and for that matter other fields of study in ancient India – as suspect.

But the second, and greater, cost of such pseudoscientific claims relates to the need for such leadership. For example, why don’t the claimants display confidence in the science being done today? (We’ve seen this before with the Higgs boson Nobel Prize and S.N. Bose as well.) I would have liked Somanath to speak up and refute Pradhan on October 17 but he didn’t. But what I would have liked even more is for Pradhan to have held forth on the various features of the Chandrayaan-3 lander and the challenges of developing them. There is a lot of good science being done in India today and it is sickening that our politicians can’t see beyond something that happened 7,000 years ago, let alone understand the transformative new technologies currently on the anvil that will define India’s ability to be any kind of power in the coming centuries.

A friend of mine and a scholar of history recently told me that Homi Bhabha chaired the International Conference on the Peaceful Uses of Atomic Energy in Geneva in 1955 – when India didn’t have nuclear power. That kind of leadership is conspicuous by its absence today.

Gaganyaan: The ingredient is not the recipe

For all the hoopla over indigeneity – from ISRO chairman S. Somanath exalting the vast wisdom of ancient Indians to political and ideological efforts to cast modern India as the world’s ‘vishwaguru’ – the pressure vessel of the crew module that will one day carry the first Indian astronauts to space won’t be made in India. Somanath said as much in an interview to T.S. Subramanian for The Hindu:

There is another element called the crew module and the crew escape system. The new crew module is under development. It is being tested. There is no capability in India to manufacture it. We have to get it from outside. That work is currently going on.

Personally, I don’t care that this element of the ‘Gaganyaan’ mission will be brought from abroad. It will be one of several thousand components of such provenance in the mission. The only thing that matters is that we know how to do it: combine the ingredients using the right recipe and make it taste good. That we can’t locally make this or that ingredient is amply secondary. ‘Gaganyaan’ is not a mission to improve India’s manufacturing capabilities. It is a mission to send Indians to space using an Indian launch vehicle. This refers to the recipe, rather than the ingredient.

But indigeneity matters to a section of people who like to thump their chests because, to them, ‘Gaganyaan’ is about showing the world – or at least the West – that India is just as good as them, if not better. Their misplaced sentiments have spilled over into popular culture, where at least two mainstream movies and one TV show (all starring A-list actors) have made villains out of foreign spaceflight agencies or officials. Thinking like this is the reason a lack of complete indigeneity has become a problem. Otherwise, again, it is quite irrelevant, and sometimes even a distraction.

Somanath himself implies as much (almost as if he wishes to separate his comments on the Vedas, etc. from his thinking on ‘Gaganyaan’, etc.):

It depends on our confidence at that point of time… Only when we are very sure of ourselves, we will send human beings into space. Otherwise, we will not do that. In my opinion, it will take more time than we really thought of. We are not worried about it. What we are worried about is that we should do it right the first time. The schedule is secondary here. … Some claims I made last year are not important. I am focusing on capability development.

Featured image: The nose cone bearing the spacecraft of the Chandrayaan-3 mission ahead of being fit to the launch vehicle. Credit: ISRO.

An ‘expanded’ heuristic to evaluate science as a non-scientist

The Hindu publishes a column called ‘Notebook’ every Friday, in which journalists in the organisation open windows big or small into their work, providing glimpses into their process and thinking – things that otherwise remain out of view in news articles, analyses, op-eds, etc. Quite a few of them are very insightful. A recent example was Maitri Porecha’s column about looking for closure in the aftermath of the Balasore train accident.

I’ve written twice for the section thus far, both times about a matter that has stayed with me for a decade, manifesting at different times in different ways. The first edition was about being able to tell whether a given article or claim is real or phony irrespective of whether you have a science background. I had proposed the following eight-point checklist that readers could follow (quoted verbatim):

  1. If the article talks about effects on people, was the study conducted with people or with mice?
  2. How many people participated in a study? Fewer than a hundred is always worthy of scepticism.
  3. Does the article claim that a study has made exact predictions? Few studies actually can.
  4. Does the article include a comment from an independent expert? This is a formidable check against poorly-done studies.
  5. Does the article link to the paper it is discussing? If not, please pull on this thread.
  6. If the article invokes the ‘prestige’ of a university and/or the journal, be doubly sceptical.
  7. Does the article mention the source of funds for a study? A study about wine should not be funded by a vineyard.
  8. Use simple statistical concepts, like conditional probabilities and Benford’s law, and common sense together to identify extraordinary claims, and then check if they are accompanied by extraordinary evidence.
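Point 8 can be made a little more concrete with a sketch. The snippet below – a minimal illustration under assumed inputs, not a rigorous forensic test – compares the first-digit frequencies of a set of reported numbers against the distribution Benford's law predicts; a large deviation in data that ought to follow the law can be one flag, among many, that warrants a closer look.

```python
import math

def benford_expected():
    # Benford's law: leading digit d occurs with probability log10(1 + 1/d)
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_freq(values):
    # Observed frequency of each leading (non-zero) digit in the data
    digits = []
    for v in values:
        s = str(abs(v)).lstrip('0.')
        if s and s[0].isdigit():
            digits.append(int(s[0]))
    total = len(digits)
    return {d: digits.count(d) / total for d in range(1, 10)}

# Powers of 2 are a classic Benford-conforming dataset
data = [2 ** n for n in range(1, 200)]
observed = leading_digit_freq(data)
expected = benford_expected()
deviation = max(abs(observed[d] - expected[d]) for d in range(1, 10))
```

For a dataset like the one above, `deviation` stays small; a fabricated column of figures with, say, uniformly distributed leading digits would show a much larger gap.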

The second was about whether science journalists are scientists – which is related to the first on the small matter of faith: i.e. that science journalists are purveyors of information that we expect readers to ‘take up’ on trust and faith, and that an article that teaches readers any science needs to set this foundation carefully.

After having published the second edition, I came across a ‘Policy Forum’ article published in October 2022 in Science entitled ‘Science, misinformation, and the role of education’. Among other things, it presents a “fast and frugal” heuristic – a three-step algorithm with which “competent outsiders [can] evaluate scientific information”. I was glad to see that this heuristic included many points in my eight-point checklist, but it also went a step ahead and discussed two things that perhaps more engaged readers would find helpful. One of them however requires an important disclaimer, in my opinion.

DOI: 10.1126/science.abq80

The additions are about consensus, expressed through the questions (numbering mine):

  1. “Is there a consensus among the relevant scientific experts?”
  2. “What is the nature of any disagreement/what do the experts agree on?”
  3. “What do the most highly regarded experts think?”
  4. “What range of findings are deemed plausible?”, and
  5. “What are the risks of being wrong?”

No. 3 is interesting because “regard” is of course subjective as well as cultural. For example, well-regarded scientists could be those who have published in glamorous journals like Nature, Science, Cell, etc. But as the recent hoopla about Ranga Dias having three papers about near-room-temperature superconductivity retracted in one year – two of them published in Nature – showed us, this is no safeguard against bad science. In fact, even winning a Nobel Prize isn’t a guarantee of good science (see e.g. reports about Gregg Semenza and Luc Montagnier). As the ‘Policy Forum’ article also states:

“Undoubtedly, there is still more that the competent outsider needs to know. Peer-reviewed publication is often regarded as a threshold for scientific trust. Yet while peer review is a valuable step, it is not designed to catch every logical or methodological error, let alone detect deliberate fraud. A single peer-reviewed article, even in a leading journal, is just that—a single finding—and cannot substitute for a deliberative consensus. Even published work is subject to further vetting in the community, which helps expose errors and biases in interpretation. Again, competent outsiders need to know both the strengths and limits of scientific publications. In short, there is more to teach about science than the content of science itself.”

Yet “regard” matters because people at large pay attention to notions like “well-regarded” – which is as much a comment on societal preferences as on what scientists themselves have aspired to over the years. This said, on technical matters, this particular heuristic would fail only a small part of the time (based on my experience).

It would fail a lot more often if applied in the middle of a cultural shift, e.g. regarding how much effort a good scientist is expected to dedicate to their work. Here, “well-regarded” scientists – typically people who started doing science decades ago, persisted in their respective fields, and finally rose to positions of prominence, who are thus likely to be white and male, and who seldom had to bother with running a household or raising children – will have an answer that reflects these privileges but is at odds with the direction of the shift (i.e. towards better work-life balance, less time than before devoted to research, and contracts amended to accommodate these demands).

In fact, even if the “well-regarded” heuristic might suffice to judge a particular scientific claim, it still carries the risk of skewing in favour of the opinions of people with the aforementioned privileges. These concerns also apply to the three conditions listed under #2 in the heuristic graphic above: “reputation among peers”, “credentials and institutional context”, and “relevant professional experience”, all of which have historically been more difficult for non-cis-het male scientists to acquire. But we must work with what we have.

In this sense, the last question is less subjective and more telling: “What are the risks of being wrong?” If a scientist avoids a view and, in doing so, also avoids an adverse outcome for themselves, it’s possible they avoided the view in order to avoid the outcome, and not because the view is inherently disagreeable.

The authors of the article, Jonathan Osborne and Daniel Pimentel, both of the Graduate School of Education at Stanford University, have grounded their heuristic in the “social nature of science” and the “social mechanisms and practices that science has for resolving disagreement and attaining consensus”. This is obviously more robust (than my checklist grounded in my limited experiences), but I think it could also have discussed the intersection of the social facets of science with gender and class. Otherwise, the risk is that, while the heuristic will help “competent outsiders” better judge scientific claims, it will do as little as its predecessor to uncover the effects of intersectional biases that persist in the “social mechanisms” of science.

The alternative, of course, is to leave out “well-regarded” altogether – but the trouble there, I suspect, is we might be lying to ourselves if we pretended a scientist’s regard didn’t or ought not to matter, which is why I didn’t go there…

“Why has no Indian won a science Nobel this year?”

For all their flaws, the science Nobel Prizes – at the time they’re announced, in the first week of October every year – provide a good opportunity to learn about some obscure part of the scientific endeavour with far-reaching consequences for humankind. This year, for example, we learnt about attosecond physics, quantum dots, and in vitro-transcribed mRNA. The respective laureates had roots in Austria, France, Hungary, Russia, Tunisia, and the U.S. For the many readers who consume articles about these individuals’ work with any zest, the science Nobel Prizes’ announcement is also occasion for a recurring question: how come no scientist from India – such a large country, with so many people of diverse skills, and such heavy investments in research – has won a prize? I thought I’d jot down my version of the answer in this post. There are four factors:

1. Missing the forest for the trees – To believe that there’s a legitimate question in “why has no Indian won a science Nobel Prize of late?” is to suggest that we don’t consider what we read in the news everyday to be connected to our scientific enterprise. Pseudoscience and misinformation are almost everywhere you look. We’re underfunding education, most schools are short-staffed, and teachers are underpaid. R&D allocations by the national government have stagnated. Academic freedom is often stifled in the name of “national interest”. Students and teachers from the so-called ‘non-upper-castes’ are harassed even in higher education centres. Procedural inefficiencies and red tape constantly delay funding to young scholars. Pettiness and politicking rule many universities’ roosts. There are ill-conceived limits on the use, import, and export of biological specimens (and uncertainty about the state’s attitude to it). Political leaders frequently mock scientific literacy. In this milieu, it’s as much about having the resources to do good science as being able to prioritise science.

2. Historical backlog – This year’s science Nobel Prizes have been awarded for work that was conducted in the 1980s and 1990s. This is partly because the winning work has to have demonstrated that it’s of widespread benefit, which takes time (the medicine prize was a notable exception this year because the pandemic accelerated the work’s adoption), and partly because each prize most often – but not always – recognises one particular topic. Given that there are several thousand instances of excellent scientific work, it’s possible, on paper, for the Nobel Prizes to spend several decades awarding scientific work conducted in the 20th century alone. Recall that this was a boom time for science, with the advent of quantum mechanics and the theories of relativity, considerable war-time investment and government support, followed by revolutions in electronics, materials science, spaceflight, genetics, and pharmaceuticals, and then came the internet. It was also the time when India was finding its feet, especially until economic liberalisation in the early 1990s.

3. Lack of visibility of research – Visibility is a unifying theme of the Nobel laureates and their work. That is, you need to do good work as well as be seen to be doing that work. If you come up with a great idea but publish it in an obscure journal with no international readership, you will lose out to someone who came up with the same idea later but published it in one of the most-read journals in the world. Scientists don’t willingly opt for obscure journals, of course: publishing in better-read journals isn’t easy because you’re competing with other papers for space, the journals’ editors often have a preference for more sensational work (or sensationalisable work, such as a paper co-authored by an older Nobel laureate; see here), and publishing fees can be prohibitively high. The story of Meghnad Saha, who was nominated for a Nobel Prize but didn’t win, offers an archetypal example. How journals have affected the composition of the scientific literature is a vast and therefore separate topic, but in short, they’ve played a big part in skewing it in favour of some kinds of results over others – even if they’re all equally valuable as scientific contributions – and in favouring authors from some parts of the world over others. Journals’ biases sit on top of those of universities and research groups.

4. Award fixation – The Nobel Prizes aren’t interested in interrogating the histories and social circumstances in which science (that it considers to be prize-worthy) happens; they simply fete what is. It’s we who must grapple with the consequences of our histories of science, particularly science’s relationship with colonialism, and make reparations. Fixating on winning a science Nobel Prize could also lock our research enterprise – and the public perception of that enterprise – into a paradigm that prefers individual winners. The large international collaboration is a good example: When physicists working with the LHC found the Higgs boson in 2012, two physicists who predicted the particle’s existence in 1964 won the corresponding Nobel Prize. Similarly, after scientists at the LIGO detectors in the US announced the first direct observation of gravitational waves in 2016, three physicists who conceived of LIGO in the 1970s won the prize. Yet the LHC, the LIGOs, and other similar instruments continue to make important contributions to science – directly, by probing reality, and indirectly, by supporting research that can be adapted for other fields. One 2007 paper also found that Nobel Prizes have been awarded to inventions only 23% of the time. Does that mean we should just focus on discoveries? That’s a silly way of doing science.


The Nobel Prizes began as the testament of a wealthy Swedish man who was worried about his legacy. He started a foundation that put together a committee to select winners of some prizes every year, with some cash from the man’s considerable fortunes. Over the years, the committee made a habit of looking for and selecting some of the greatest accomplishments of science (but not all), so much so that the laureates’ standing in the scientific community created an aspiration to win the prize. Many prizes begin like the Nobel Prizes did but become irrelevant because they don’t pay enough attention to the relationship between the laureate-selecting process and the prize’s public reputation (note that the Nobel Prizes acquired their reputation in a different era). The Infosys Prize has elevated itself in this way whereas the Indian Science Congress’s prize has undermined itself. India – or any Indian, for that matter – can institute an award that chooses its winners more carefully, and gives them lots of money (which I’m opposed to vis-à-vis senior scientists) to draw popular attention.

There are many reasons an Indian hasn’t won a science Nobel Prize in a while – but the Nobel Prize isn’t the only one worth winning. Let’s aspire to other, even better, ones.

The journal’s part in a retraction

This is another Ranga Dias and superconductivity post, so please avert your gaze if you’re tired of it already.

According to a September 27 report in Science, the journal Nature plans to retract the latest Dias et al. paper, published in March 2023, claiming to have found evidence of near-room-temperature superconductivity in an unusual material, nitrogen-doped lutetium hydride (N-LuH). The heart of the matter seems to be, per Science, a plot showing a drop in N-LuH’s electric resistance below a particular temperature – a famous sign of superconductivity.

Dias (University of Rochester) and Ashkan Salamat (University of Nevada, Las Vegas), the other lead investigator in the study, measured the resistance in a noisy setting and then subtracted the noise – or what they claimed to be the noise. The problem is apparently that the subtracted plot in the published paper and the plot put together using raw data submitted by Dias and Salamat to Nature are different; the latter doesn’t show the resistance dropping to zero. That is, together with the noise, the paper’s authors seem to have subtracted some other information as well – and whatever was left behind suggested N-LuH had become superconducting.

A little more than a month ago, Physical Review Letters retracted another paper from a study led by Dias and Salamat, which it had published last year – notably after a similar dispute (and on both occasions Dias was opposed to having the papers retracted). But the narrative was more dramatic then, with Physical Review Letters accusing Salamat of obstructing its investigation by supplying some other data as the raw data for its independent probe.

Then again, even before Science‘s report, other scientists in the same field had said that they weren’t bothering with replicating the data in the N-LuH paper because they had already wasted time trying to replicate Dias’s previous work, in vain.

Now, in the last year alone, three of Dias’s superconductivity-related papers have been retracted. But as on previous occasions, the new report also raises questions about Nature‘s pre-publication peer-review process. To quote Science:

In response to [James Hamlin and Brad Ramshaw’s critique of the subtracted plot], Nature initiated a post-publication review process, soliciting feedback from four independent experts. In documents obtained by Science, all four referees expressed strong concerns about the credibility of the data. ‘I fail to understand why the authors … are not willing or able to provide clear and timely responses,’ wrote one of the anonymous referees. ‘Without such responses the credibility of the published results are in question.’ A second referee went further, writing: ‘I strongly recommend that the article by R. Dias and A. Salamat be retracted.’

What was the difference between this review process and the one that happened before the paper was published, in which Nature‘s editors would have written to independent experts asking them for their opinions on the submitted manuscript? Why didn’t they catch the problem with the electrical resistance plot?

One possible explanation is the sampling problem: when I write an article as a science journalist, the views expressed in the article will be a function of the scientists I have sampled from within the scientific community. To obtain the consensus view, I need to sample a sufficiently large number of scientists (or a small number of representative scientists, such as those I know are in touch with the pulse of the community). Otherwise, there’s a nontrivial risk of some view in my article being over- or under-represented.
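A toy simulation can show how stark this risk is. In the hypothetical setup below – all numbers are illustrative assumptions, not data about any real community – 70% of a scientific community holds the consensus view, and we count how often a small sample fails to reflect that majority:

```python
import random

def misleading_sample_rate(pop_share=0.7, sample_size=5, trials=20000, seed=7):
    # Fraction of samples in which the population's majority view does NOT
    # hold a strict majority among the scientists actually sampled
    rng = random.Random(seed)
    misleading = 0
    for _ in range(trials):
        holders = sum(rng.random() < pop_share for _ in range(sample_size))
        if holders * 2 <= sample_size:  # tie, or minority among those sampled
            misleading += 1
    return misleading / trials

small = misleading_sample_rate(sample_size=5)    # a handful of sources
large = misleading_sample_rate(sample_size=101)  # a much wider sample
```

Under these assumptions, a five-person sample misrepresents the consensus roughly one time in six, while the 101-person sample almost never does – which is why sampling either widely or representatively matters so much.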

Similarly, during its pre-publication peer-review process, did Nature not sample the right set of reviewers? I’m unable to think of other explanations because the sampling problem accounts for many alternatives. Hamlin and Ramshaw also didn’t necessarily have access to more data than Dias et al. submitted to Nature because their criticism emerged in May 2023 itself, and was based on the published paper. Nature also hasn’t disclosed the pre-publication reviewers’ reports nor explained if there were any differences between its sampling process in the pre- and post-publication phases.

So short of there being a good explanation, as much as we have a scientist who’s seemingly been crying wolf about room-temperature superconductivity, we also have a journal whose peer-review process produced, on two separate occasions, two different results. Unless it can clarify why this isn’t so, Nature is also to blame for the paper’s fate.

On India’s new ‘Vigyan Puraskar’ awards

The Government of India has replaced the 300 or so awards for scientists it used to give out until this year with the Rashtriya Vigyan Puraskar (RVP), a set of four awards with 56 laureates, The Hindu has reported. Unlike in the previous paradigm, and like the Padma awards to recognise the accomplishments of civilians, the RVP will comprise a medal and a certificate, and no cash. The changes are the result of the recommendations of a committee put together last year by the Ministry of Home Affairs (MHA).

The new paradigm presents four important opportunities to improve the way the Indian government recognises good scientific work.

1. Push for women

A note forwarded by the Department of Science and Technology, which has so far overseen more than 200 awards every year, to the MHA said, “Adequate representation of women may … be ensured” – an uncharacteristically direct statement (worded in the characteristic style of the Indian bureaucracy) that probably alludes to the Shanti Swarup Bhatnagar (SSB) Awards, which were only announced last week for the year 2022.

The SSB Awards are the most high-profile State-sponsored awards for scientists in the old paradigm, and they have become infamous for their opaque decision-making and gross under-representation of women scientists. Their arbitrary 45-year age limit further restricted opportunities for women to be nominated, given breaks in their careers due to pregnancies, childcare, etc. As a result, even fewer women have won an SSB Award than their level of participation in various fields of the scientific workforce would suggest.

According to The Hindu, to determine the winners of each year’s RVP awards, “A committee will be constituted every year, comprising the Secretaries of six science Ministries, up to four presidents of science and engineering academies, and six distinguished scientists and technologists from various fields”.

The SSB Awards’ opacity was rooted in the fact that candidates had to be nominated by their respective institutes, without any process to guarantee proper representation, and that the award-giving committee was shrouded in secrecy, with no indication of its deliberations. To break from this regrettable tradition, the Indian government should publicise the composition of the RVP committee every year and explain its process. Such transparency, and public accountability, is by itself likely to ensure more women will be nominated for and receive the awards than through any other mechanism.

2. No cash component

The RVP awards score by eliminating the cash component for laureates. Scientific talent and productivity are unevenly distributed throughout India, and are typically localised in well-funded national institutes or in a few private universities, so members of the scientific workforce in these locales are also more likely to win awards. Giving these individuals large sums of money, that too after they have produced notable work and not before, will be redundant and only subtract from the fortunes of a less privileged scientist.

A sum of Rs 5 lakh may not be significant from a science department’s point of view, but it is the principle that matters.

To enlarge the pool of potential candidates, the government must also ensure that research scholars receive their promised scholarships on time. At present, delayed scholarships and fellowships have become a tragic hallmark of doing science in India, together with officials’ promises, and scrambles, every year to hasten disbursals.

3. Admitting PIOs

In the new paradigm, up to one of the three Vigyan Ratna awards every year may go to a person of Indian origin (PIO), and up to three PIOs may receive the Vigyan Shri and Yuva-SSB awards, of the 25 in each group. (PIOs aren’t eligible for the three Vigyan Team awards.)

Including PIOs in the national science awards framework is a slippery slope. An award for scientific work is implicitly an award to an individual for exercising their duties as a scientist as well as for navigating a particular milieu, by securing the resources required for their work or – as is often the case in India – conducting frugal yet clever experiments to overcome resource barriers.

Rewarding a PIO who has made excellent contributions to science while working abroad, and probably after having been educated abroad, would delink the “made in India” quality of the scientific work from the work itself, whereas we need more awards to celebrate this relationship.

This said, the MHA may have opened the door to PIOs in order to bring the awards to international attention, by fêting Indian-origin scientists well-known in their countries of residence.

4. Science awards for science

The reputation of an award is determined by the persons who win it, illustrated as much by, say, Norway’s Abel Prize as by the Indian Science Congress’s little-known ‘Millennium Plaques of Honour’. To whom will the RVP prizes be awarded? As stated earlier, the award-giving committee will comprise Secretaries of the six science Ministries, “up to” four presidents of the science and engineering academies, and six “distinguished” scientists and technologists.

These ‘Ministries’ are the Departments of Science and Technology, of Biotechnology, of Space, and of Atomic Energy, and the Ministries of Earth Sciences and of Health and Family Welfare. As such, they exclude representatives from the Ministries of Environment, Animal Husbandry, and Agriculture, which also deal with research, often of the less glamorous variety.

Just as there are inclusion criteria, there should be exclusion criteria as well, such as requiring eligible candidates to have published papers in credible journals (or preprint repositories) and/or to not work with or be related in any other way to members of the jury. Terms like “distinguished” are also open to interpretation. Earlier this year, for example, Mr. Khader Vali Dudekula was conferred a Padma Shri in the ‘Science and Engineering’ category for popularising the nutritional benefits of millets, but he has also claimed, wrongly, that consuming millets can cure cancer and diabetes.

The downside of reduction and centralisation is that they heighten the risk of exclusion. Instead of becoming another realm in which civilians are excluded – or included on dubious grounds, for that matter – the new awards should take care to place truly legitimate scientific work above work that meets any arbitrary ideological standard.

Scientists’ conduct affects science

Nature News has published an excellent feature by Edwin Cartlidge on the “wall of scepticism” that arose in response to the latest superconductivity claim from Ranga Dias et al., purportedly in a compound called nitrogen-doped lutetium hydride. It seems the new paper has earned a note of concern as well, after various independent research groups failed to replicate the results. Dias & co. had had another paper, claiming superconductivity in a different material, retracted in October 2022, two years after its publication. All these facts together raise a few implications about the popular imagination of science.

First, the new paper was published by Nature, a peer-reviewed journal. And Jorge Hirsch of the University of California, San Diego, told Nature News “that editors should have first resolved the question about the provenance of the raw data in the retracted 2020 Nature article before even considering the 2023 paper”. So the note reaffirms that the role of peer review is limited to checking whether the information presented in a paper is consistent with the paper’s conclusions – not whether that information is well-founded and has integrity in and of itself.

Second, from Nature News:

“Researchers from four other groups, meanwhile, told Nature’s news team that they had abandoned their own attempts to replicate the work or hadn’t even tried. Eremets said that he wasted time on the CSH work, so didn’t bother with LuNH. ‘I just ignored it,’ he says.”

An amusing illustration, I think, that speaks against science’s claims to being impartial, etc. In a perfectly objective world, Dias et al.’s previous work shouldn’t have mattered to other scientists, who should have endeavoured to verify the claims in the new paper anew, given that it’s a fairly sensational claim and because it was published in a ‘prestigious’ peer-reviewed journal. But, as Eremets said, “the synthesis protocol wasn’t clear in the paper and Dias didn’t help to clarify it”.

The reciprocal is also true: Dias chose to share samples of nitrogen-doped lutetium hydride that his team had prepared only with Russell Hemley, who studies material chemistry at the University of Illinois, Chicago (and with some other groups that he refused to name) – and Hemley is one of the researchers who haven’t criticised Dias’s findings. Hemley is also not an independent researcher: he and Dias worked together on the study reported in the 2020 paper that was later retracted. Dias should ideally have shared the samples with everyone. But scientists’ social conduct does matter, influencing how other scientists believe they should respond.

Speaking of which: Nature (the journal) on the other hand doesn’t look at past work and attendant misgivings when judging each paper. From Nature News (emphasis added):

The editor’s note added to the 2023 paper on 1 September, saying that the reliability of data are in question, adds that “appropriate editorial action will be taken once this matter is resolved.” Karl Ziemelis, Nature’s chief applied- and physical-sciences editor, based in London, says that he and his colleagues are “assessing concerns” about the paper, and adds: “Owing to the confidentiality of the peer-review process we cannot discuss specific details of what transpired.” As for the 2020 paper, Ziemelis explains that they decided not to look into the origin of the data once they had established problems with the data processing and then retracted the research. “Our broader investigation of that work ceased at that point,” he says. Ziemelis adds that “all submitted manuscripts are considered independently on the basis of the quality and timeliness of their science”.

The refusal to share samples echoes an unusual decision by the journal Physical Review B to publish a paper authored by researchers at Microsoft, in which they reported the discovery of a Majorana zero mode – an elusive particle (in a manner of speaking) that could lead the way to building quantum ‘supercomputers’. However, it seems the team withheld some information that independent researchers could have used to validate the findings, presumably because it’s intellectual property. Rice University physics professor Douglas Natelson wrote on his blog:

The rationale is that the community is better served by getting this result into the peer-reviewed literature now even if all of the details aren’t going to be made available publicly until the end of 2024. I don’t get why the researchers didn’t just wait to publish, if they are so worried about those details being available.


Take all of these facts and opinions together and ask yourself: what then is the scientific literature? It probably contains many papers that have cleared peer review but whose results won’t replicate. Some papers may or may not replicate, but we won’t know either way for a couple of years. It also doesn’t contain replication studies that might have existed had the replicators and the original research group been on amicable terms. What do these facts and views also imply for the popular conception of science?

Every day, I encounter two broad kinds of critical imaginations of science. One has emerged from the practitioners of science, and from those studying its philosophy, history, sociology, etc. These individuals have debated the notions presented above to varying degrees. But there is also a class of people in India that wields science as an antidote to what it claims is the state’s collusion with pseudoscience – a collusion it says has displaced science from its rightful place in the Indian society-state: as the best and sole arbiter of facts and knowledge. This science is apparently a unified whole: objective, self-correcting, evidence-based, and anti-faith. I imagine this science needs to have these characteristics in order to effectively challenge, in the courts of public opinion, the government’s oft-mistaken claims.

At the same time, the ongoing Dias et al. saga reminds us that any ‘science’ imprisoned by these assumptions would dismiss the events and forces that would actually help it grow – such as incentivising good-faith actions, acknowledging the labour required to keep science honest and reflexive, discussing issues resulting from the cultural preferences of its exponents, paying attention to social relationships, heeding concerns about the effects of one’s work and conduct on the field, etc. In the words of Paul Feyerabend (Against Method, third ed., 1993): “Science is neither a single tradition, nor the best tradition there is, except for people who have become accustomed to its presence, its benefits and its disadvantages.”