Scientists’ conduct affects science

Nature News has published an excellent feature by Edwin Cartlidge on the “wall of scepticism” that arose in response to the latest superconductivity claim from Ranga Dias et al., purportedly in a compound called nitrogen-doped lutetium hydride. The new paper has since earned a note of concern as well, after various independent research groups failed to replicate its results. Dias & co. had had another paper, claiming superconductivity in a different material, retracted in October 2022, two years after its publication. Together, these facts carry a few implications for the popular imagination of science.

First, the new paper was published by Nature, a peer-reviewed journal. And Jorge Hirsch of the University of California, San Diego, told Nature News “that editors should have first resolved the question about the provenance of the raw data in the retracted 2020 Nature article before even considering the 2023 paper”. So the note reaffirms that peer-review’s role is limited to checking whether the information presented in a paper is consistent with the paper’s conclusions – not whether that information is well-founded and has integrity in and of itself.

Second, from Nature News:

“Researchers from four other groups, meanwhile, told Nature’s news team that they had abandoned their own attempts to replicate the work or hadn’t even tried. Eremets said that he wasted time on the CSH work, so didn’t bother with LuNH. ‘I just ignored it,’ he says.”

An amusing illustration, I think, that speaks against science’s claims to impartiality. In a perfectly objective world, Dias et al.’s previous work shouldn’t have mattered to other scientists, who should have endeavoured to verify the claims in the new paper anew – given that the claim is fairly sensational and that it was published in a ‘prestigious’ peer-reviewed journal. But, as Eremets said, “the synthesis protocol wasn’t clear in the paper and Dias didn’t help to clarify it”.

The converse is also true: Dias chose to share samples of nitrogen-doped lutetium hydride that his team had prepared only with Russell Hemley, who studies materials chemistry at the University of Illinois Chicago (and with some other groups Dias refused to name) – and Hemley is one of the researchers who hasn’t criticised Dias’s findings. Hemley is also not an independent researcher: he and Dias collaborated on the work in the 2020 paper that was later retracted. Dias should ideally have shared the samples with everyone. But scientists’ social conduct does matter, influencing how other scientists believe they should respond.

Speaking of which: Nature (the journal), on the other hand, doesn’t look at past work and attendant misgivings when judging each paper. From Nature News (emphasis added):

The editor’s note added to the 2023 paper on 1 September, saying that the reliability of data are in question, adds that “appropriate editorial action will be taken once this matter is resolved.” Karl Ziemelis, Nature’s chief applied- and physical-sciences editor, based in London, says that he and his colleagues are “assessing concerns” about the paper, and adds: “Owing to the confidentiality of the peer-review process we cannot discuss specific details of what transpired.” As for the 2020 paper, Ziemelis explains that they decided not to look into the origin of the data once they had established problems with the data processing and then retracted the research. “Our broader investigation of that work ceased at that point,” he says. Ziemelis adds that “all submitted manuscripts are considered independently on the basis of the quality and timeliness of their science”.

The refusal to share samples echoes an unusual decision by the journal Physical Review B to publish a paper authored by researchers at Microsoft, in which they reported the discovery of a Majorana zero mode – an elusive particle (in a manner of speaking) that could lead the way to building quantum ‘supercomputers’. However, it seems the team withheld some information that independent researchers could have used to validate the findings, presumably because it’s intellectual property. Rice University physics professor Douglas Natelson wrote on his blog:

The rationale is that the community is better served by getting this result into the peer-reviewed literature now even if all of the details aren’t going to be made available publicly until the end of 2024. I don’t get why the researchers didn’t just wait to publish, if they are so worried about those details being available.

Take all of these facts and opinions together and ask yourself: what then is the scientific literature? It probably contains many papers that have cleared peer-review but whose results won’t replicate. Some papers may or may not replicate, but we won’t know for a couple of years. It also doesn’t contain replication studies that might have existed had the replicators and the original research group been on amicable terms. And what do these facts and views imply for the popular conception of science?

Every day, I encounter two broad kinds of critical imaginations of science. One has emerged from the practitioners of science, and from those studying its philosophy, history, sociology, etc. These individuals have debated the notions presented above to varying degrees. But there is also a class of people in India that wields science as an antidote to what it claims is the state’s collusion with pseudoscience – a collusion it sees as displacing science from its rightful place in the Indian society-state: as the best and sole arbiter of facts and knowledge. This science is apparently a unified whole: objective, self-correcting, evidence-based, and anti-faith. I imagine this science needs to have these characteristics in order to effectively challenge, in the courts of public opinion, the government’s oft-mistaken claims.

At the same time, the ongoing Dias et al. saga reminds us that any ‘science’ imprisoned by these assumptions would dismiss the events and forces that would actually help it grow – such as incentivising good-faith actions, acknowledging the labour required to keep science honest and reflexive, discussing issues resulting from the cultural preferences of its exponents, paying attention to social relationships, heeding concerns about the effects of one’s work and conduct on the field, etc. In the words of Paul Feyerabend (Against Method, third ed., 1993): “Science is neither a single tradition, nor the best tradition there is, except for people who have become accustomed to its presence, its benefits and its disadvantages.”

NCBS fracas: In defence of celebrating retractions

Continuing from here

Irrespective of Arati Ramesh’s words and actions, I find every retraction worth celebrating because of how hard-won retractions in general have been, in India and abroad. I don’t know how often papers coauthored by Indian scientists are retracted, or how that rate compares with the international average. But I know that the quality of scientific work emerging from India is grossly disproportionate (in the negative sense) to the size of the country’s scientific workforce – which is to say most of the papers published from India, irrespective of the journal, contain low-quality science (if they contain science at all). It’s not for nothing that Retraction Watch has a category called ‘India retractions’, with 196 posts.

Second, it’s only recently that the global scientific community’s attitude towards retractions started changing, and even now most of that change is localised to the US and Europe. And even there, there is a distinction between retractions for honest mistakes and those for dishonest conduct. Our attitudes towards retractions for honest mistakes have been changing. Retractions for dishonest conduct, or misconduct, have in fact been harder to secure, and continue to be.

The work of science-integrity consultant Elisabeth Bik allows us a quick take: the rate at which sleuths are spotting research fraud is far higher than the rate at which journals are retracting the corresponding papers. Bik herself has often said, on Twitter and in interviews, that most journal editors simply don’t respond to complaints, or quash them with weak excuses and zero accountability. Between 2015 and 2019, a group of researchers identified papers that had been published in violation of the CONSORT guidelines in journals that endorsed those same guidelines, and wrote to the editors. From The Wire Science‘s report:

… of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for the letter precluded that, they never heard back from the journal.

On the other hand, NEJM stated that the authors of reports it published were not required to abide by the CONSORT guidelines. However, NEJM itself endorses CONSORT.

The point is that bad science is hard enough to spot, and getting stakeholders to act on it is even harder. It shouldn’t have to be, but it is. In this context, every retraction is a commendable thing – no matter how obviously warranted it is. It’s also commendable when a paper ‘destined’ for retraction is retracted sooner (than the corresponding average), because we already have some evidence that “papers that scientists couldn’t replicate are cited more”. Even when a paper in the scientific literature dies, other scientists don’t seem able to immediately recognise that it is dead, and continue to cite it in their own work as evidence of this or that thesis. These are called zombie citations. Retracting such papers is a step in the right direction – insufficient to prevent all the problems associated with maintaining the quality of the literature, but necessary.

As for the specific case of Arati Ramesh: she defended her group’s paper on PubPeer in two comments that offered more raw data and seemed to be founded on a conviction that the images in the paper were real, not doctored. Some commentators have said that her attitude is a sign that she didn’t know the images had been doctored, while others have said (and I tend to agree) that this defence is baffling considering both of her comments followed detailed descriptions of forgery. Members of the latter group have also said that, in effect, Ramesh tried to defend her paper until it was impossible to do so, at which point she published her controversial personal statement, in which she threw one of her lab’s students under the bus.

There are a lot of missing pieces here towards ascertaining the scope and depth of Ramesh’s culpability – given that she is the lab’s principal investigator (PI), that she has since started to claim that her lab doesn’t have access to the experiments’ raw data, and that the now-retracted paper says she “conceived the experiments, performed the initial bioinformatic search for Sensei RNAs, supervised the work and wrote the manuscript”.

[Edit, July 11, 2021, 6:28 pm: After a conversation with Priyanka Pulla, I edited the following paragraph. The previous version appears below, struck through.]

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? Yes and no… Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. This is the ‘no’ part. The ‘yes’ arises from Ramesh’s actions on PubPeer, to ‘keep going until one can go no longer’, so to speak, which suggests, among other things – and I’m shooting in the dark here – that she somehow couldn’t spot the problem right away. So giving her credit for the retraction would set a low, if also weird, bar; I think credit belongs on this count with the fastidious commenters of PubPeer. Ramesh would still have had to sign off on a document saying “we’ve agreed to have the paper retracted”, as journals typically require, but perhaps we can also speculate as to whom we should really thank for this outcome – anyone/anything from Ramesh herself to the looming threat of public pressure.

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? No. Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. Perhaps we can speculate as to whom we should thank for this outcome – Arati Ramesh herself, someone else in her lab, members of the internal inquiry committee that NCBS set up, some other members of the institute, or even the looming threat of public pressure. We don’t have to give Ramesh credit here beyond her signing off on the decision (as journals typically require) – and we still need answers on all the other pieces of this puzzle, as well as accountability.

A final point: I hope that the intense focus the NCBS fracas has commanded – and could continue to command, considering Bik has flagged one more paper coauthored by Ramesh, and others have flagged two coauthored by her partner Sunil Laxman (published in 2005 and 2006), both on PubPeer for potential image manipulation – will widen to encompass the many instances of misconduct popping up every week across the country.

NCBS, as we all know, is an elite institute as India’s centres of research go: it is well-funded (by the Department of Atomic Energy, a government body relatively free from bureaucratic intervention), staffed by more-than-competent researchers and students, has published commendable research (I’m told), and has a functional outreach office, and its scientists often feature in press reports commenting on this or that study. As such, it is overrepresented in the public imagination and easily gets attention. However, the problems assailing NCBS vis-à-vis the reports on PubPeer are not unique to the institute, and should in fact force us to rethink our tendency (mine included) to give such impressive institutes – often, and by no coincidence, Brahmin strongholds – the benefit of the doubt.

(1. I have no idea how things are at India’s poorly funded state and smaller private universities. But even there – and in fact at less elite but still “up there” institutes, like the IISERs – Brahmins have been known to dominate the teaching and professorial staff, if not the student body, and have still been found guilty of misconduct, often sans accountability. 2. There’s a point to be made here about plagiarism, the graded way in which it is ‘offensive’, access to good-quality English education for people of different castes in India, the resulting access to plus inheritance of cultural and social capital, and the funnelling of students with such capital into elite institutes.)

As I mentioned earlier, Retraction Watch has an ‘India retractions’ category (although, to be fair, there are also similar categories for China, Italy, Japan and the UK – but not for France, Russia, South Korea or the US; these countries ranked 1-10 on the list of countries with the most scientific and technical journal publications in 2018). Its database lists 1,349 retracted papers with at least one author affiliated with an Indian institute – including five papers since the NCBS one met its fate. The latest was retracted on July 7, 2021 (after being published on October 16, 2012). Again, these are just instances in which a paper was retracted. Further up the funnel, we have retractions that Retraction Watch missed, papers that editors are deliberating on, complaints that editors have rejected, complaints that editors have ignored, complaints that editors haven’t yet received, and journals that don’t care.

So, retractions – and retractors – deserve brownie points.

Defending philosophy of science

From Carl Bergstrom’s Twitter thread about a new book called How Irrationality Created Modern Science, by Michael Strevens:

https://twitter.com/CT_Bergstrom/status/1372811516391526400

The Iron Rule from the book is, in Bergstrom’s retelling, “no use of philosophical reasoning in the mode of Aristotle; no leveraging theological or scriptural understanding in the mode of Descartes. Formal scientific arguments must be sterilised, to use Strevens’s word, of subjectivity and non-empirical content.” I was particularly taken by the use of the term ‘individual’ in the tweet I’ve quoted above. The point about philosophical argumentation being an “individual” technique is important and often understated.

There are some personal techniques we use to discern certain truths but which we don’t publicise. But the more we read and converse with others doing the same things, the more we may find that everyone has many of the same stand-ins: tools or methods that we haven’t empirically verified to be true and/or legitimate but which we have discerned, based on our experiences, to be suitably good guiding lights.

I first discovered this issue when I read Paul Feyerabend’s Against Method many years ago, and then saw it in practice when, while reporting some stories, I found that scientists in different situations often developed similar proxies for processes that couldn’t be performed in full due to resource constraints. But they seldom spoke to each other (especially across institutes), thus allowing an idealised view of how to do something to entrench itself even as almost everyone actually did that something in similar, non-ideal ways.

A very common example of this is scientists evaluating papers based on the ‘prestigiousness’ and/or impact factors of the journals the papers are published in, instead of based on their contents – often simply for lack of time and proper incentives. As a result, ideas like “science is self-correcting” and “science is objective” persist as ideals because they’re products of applying the Iron Rule to the process of disseminating the products of one’s research.

But “by turning a lens on the practice of science itself,” to borrow Bergstrom’s words, philosophies of science allow us to spot deviations from the prescribed normal – originating from “Iron Rule Ecclesiastics” like Richard Dawkins – and, to me particularly, reveal how we really, actually do science and how we can become better at it. Or as Bergstrom put it: “By understanding how norms and institutions create incentives to which scientists respond …, we can find ways to nudge the current system toward greater efficiency.”

(It is also a bit gratifying to see both the book and Bergstrom pick on Lawrence Krauss. The book goes straight into my reading list.)