NCBS fracas: In defence of celebrating retractions

Continuing from here

Irrespective of Arati Ramesh’s words and actions, I find every retraction worth celebrating because of how hard-won retractions in general have been, in India and abroad. I don’t know how often papers coauthored by Indian scientists are retracted, or how that rate compares to the international average. But I know that the quality of scientific work emerging from India is grossly disproportionate (in the negative sense) to the size of the country’s scientific workforce, which is to say most of the papers published from India, irrespective of the journal, contain low-quality science (if they contain science at all). It’s not for nothing that Retraction Watch has a category called ‘India retractions’, with 196 posts.

Second, it’s only recently that the global scientific community’s attitude towards retractions started changing, and even now most of that change is localised to the US and Europe. And even there, there is a distinction between retractions for honest mistakes and those for dishonest conduct. Our attitudes towards retractions for honest mistakes have been changing. Retractions for dishonest conduct, or misconduct, have in fact been harder to secure, and continue to be.

The work of science integrity consultant Elisabeth Bik allows us a quick take: the rate at which sleuths are spotting research fraud is far higher than the rate at which journals are retracting the corresponding papers. Bik herself has often said, on Twitter and in interviews, that most journal editors simply don’t respond to complaints, or quash them with weak excuses and zero accountability. Between 2015 and 2019, a group of researchers identified papers that had been published in violation of the CONSORT guidelines in journals that endorsed those very guidelines, and wrote to the journals’ editors. From The Wire Science’s report:

… of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for the letter precluded that, they never heard back from the journal.

On the other hand, NEJM stated that the authors of reports it published were not required to abide by the CONSORT guidelines. However, NEJM itself endorses CONSORT.

The point is that bad science is hard enough to spot, and getting stakeholders to act on it is even harder. It shouldn’t have to be, but it is. In this context, every retraction is a commendable thing – no matter how obviously warranted it is. It’s also commendable when a paper ‘destined’ for retraction is retracted sooner (than the corresponding average), because we already have some evidence that “papers that scientists couldn’t replicate are cited more”. Even after a paper in the scientific literature dies, other scientists don’t seem to immediately recognise that it is dead, and continue to cite it in their own work as evidence of this or that thesis. These are called zombie citations. Retracting such papers is a step in the right direction – insufficient to prevent all the problems associated with maintaining the quality of the literature, but necessary.

As for the specific case of Arati Ramesh: she defended her group’s paper on PubPeer in two comments that offered more raw data and seemed to be founded on a conviction that the images in the paper were real, not doctored. Some commentators have said that her attitude is a sign that she didn’t know the images had been doctored, while others have said (and I tend to agree) that this defence of Ramesh is baffling considering both of her comments came after detailed descriptions of the forgery. Members of the latter group have also said that, in effect, Ramesh tried to defend her paper until it was impossible to do so, at which point she published her controversial personal statement in which she threw one of her lab’s students under the bus.

There are a lot of missing pieces here when it comes to ascertaining the scope and depth of Ramesh’s culpability – given that she is the lab’s principal investigator (PI), that she has since started to claim that her lab doesn’t have access to the experiments’ raw data, and that the now-retracted paper says she “conceived the experiments, performed the initial bioinformatic search for Sensei RNAs, supervised the work and wrote the manuscript”.

[Edit, July 11, 2021, 6:28 pm: After a conversation with Priyanka Pulla, I edited the following paragraph. The previous version appears below, struck through.]

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? Yes and no… Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. This is the ‘no’ part. The ‘yes’ arises from Ramesh’s actions on PubPeer, to ‘keep going until one can go no longer’, so to speak, which suggests, among other things – and I’m shooting in the dark here – that she somehow couldn’t spot the problem right away. So giving her credit for the retraction would set a low, if also weird, bar; I think credit belongs on this count with the fastidious commenters of PubPeer. Ramesh would still have had to sign off on a document saying “we’ve agreed to have the paper retracted”, as journals typically require, but perhaps we can also speculate as to whom we should really thank for this outcome – anyone/anything from Ramesh herself to the looming threat of public pressure.

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? No. Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. Perhaps we can speculate as to whom we should thank for this outcome – Arati Ramesh herself, someone else in her lab, members of the internal inquiry committee that NCBS set up, some other members of the institute or even the looming threat of public pressure. We don’t have to give Ramesh credit here beyond her signing off on the decision (as journals typically require) – and we still need answers on all the other pieces of this puzzle, as well as accountability.

A final point: I hope that the intense focus the NCBS fracas has commanded – and could continue to, considering Bik has flagged one more paper coauthored by Ramesh, and others have flagged two papers coauthored by her partner Sunil Laxman (published in 2005 and 2006), both on PubPeer for potential image manipulation – will widen to encompass the many instances of misconduct popping up every week across the country.

NCBS, as we all know, is an elite institute as India’s centres of research go: it is well-funded (by the Department of Atomic Energy, a government body relatively free from bureaucratic intervention), staffed by more-than-competent researchers and students, has published commendable research (I’m told), has a functional outreach office, and its scientists often feature in press reports commenting on this or that study. As such, it is overrepresented in the public imagination and easily gets attention. However, the problems assailing NCBS vis-à-vis the reports on PubPeer are not unique to the institute, and should in fact force us to rethink our tendency (mine included) to give such impressive institutes – often, and by no coincidence, Brahmin strongholds – the benefit of the doubt.

(1. I have no idea how things are at India’s poorly funded state and smaller private universities. But even there – and in fact at institutes that are less elite but still “up there” in terms of fortunes, like the IISERs – Brahmins have been known to dominate the teaching and professorial staff, if not the student body, and have still been found guilty of misconduct, often sans accountability. 2. There’s a point to be made here about plagiarism, the graded way in which it is ‘offensive’, access to good-quality English education for people of different castes in India, the resulting access to plus inheritance of cultural and social capital, and the funnelling of students with such capital into elite institutes.)

As I mentioned earlier, Retraction Watch has an ‘India retractions’ category (although to be fair, there are also similar categories for China, Italy, Japan and the UK, but not for France, Russia, South Korea or the US – these countries ranked 1-10 on the list of countries with the most scientific and technical journal publications in 2018). Its database lists 1,349 retracted papers with at least one author affiliated with an Indian institute – and five more have been retracted since the NCBS one met its fate. The latest was retracted on July 7, 2021 (after being published on October 16, 2012). Again, these are just instances in which a paper was retracted. Further up the funnel, we have retractions that Retraction Watch missed, papers that editors are deliberating on, complaints that editors have rejected, complaints that editors have ignored, complaints that editors haven’t yet received, and journals that don’t care.

So, retractions – and retractors – deserve brownie points.

NCBS retraction – addenda

My take on the NCBS paper being retracted, and the polarised conversation that has erupted around the incident, is here. The following are some points I’d like to add.

a. Why didn’t the editorial and peer-review teams at Nature Chemical Biology catch the mistakes before the paper was published? As the work of famous research-fraud detective Dr Elisabeth Bik has shown, detecting image manipulation is sometimes easy and sometimes hard. But what is untenable is the claim, by some scientists and journals alike, that peer-review is a non-negotiable requirement to ensure the scientific literature remains of ‘high quality’. Nature Chemical Biology also tries to launder its image by writing in its retraction notice that the paper was withdrawn because the authors could not reproduce its results. Being unable to reproduce results is a far less egregious offence than manipulating images. What the journal is really defending here is its peer-review process.

b. Nature Chemical Biology continues to hold the retracted paper behind a paywall ($9 to rent, EUR 55.14 to subscribe to the journal for a year). I expect readers of this blog to know the background on why paywalls are bad, etc., but I would have thought a retracted paper would be released into the public domain. Post-retraction, it’s important for everyone to be able to see the ways in which a paper was flawed, especially one that has commanded so much public attention (at least as retractions go). Unless, of course, this is Nature Chemical Biology acknowledging that paywalls are barriers more than anything else, and that the journal’s editors can hide their own failure, and that of their peer-review process, this way.

c. The (now retracted) Arati Ramesh et al. result was remarkable, but given that some social media conversations have focused on why Ramesh didn’t double-check a result significant enough to warrant open celebration once the paper was published, some important background: the result was great but not entirely unexpected. In April 2020, Jianson Xu and Joseph Cotruvo reported that a known riboswitch that bound to nickel and cobalt ions also had features that allowed it to bind to iron. (Ramesh et al.’s paper also cites another study, from 2015, with a similar claim.) Ramesh et al. reported that they had found just such behaviour in a riboswitch present in a different bacterial species. However, many of the images in their paper appeared to be wholly manipulated, undermining the results. It’s still possible (I think) that someone else could make a legitimate version of the same discovery.

Diversifying into other beats

I delivered my annual talk/AMA at the NCBS science writing workshop yesterday. While the questions the students asked were mostly the same as last year’s (and the year before that), I also took the opportunity to ask them to consider diversifying into other subjects. Most, if not all, journalists entering India’s science journalism space every year want to write stories about the life sciences and/or ecology. As a result, while there are numerous journalists to write about issues in these areas, there are fewer than a handful to deal with developments in all the other ones – from theoretical particle physics to computer science to chemical engineering.

This gives the consumers of journalism the impression that research in these areas isn’t worth writing about or, more perniciously, that developments in these areas aren’t to be discussed (and debated, if need be) in the public domain. And this in turn contributes to a vicious cycle, where “there are no stories about physics” and “there is no interest in publishing stories about physics” keep readers/editors and journalists, respectively, at bay.

However, from an editor’s perspective, the problem has an eminently simple solution: induct, and then publish, reporters producing work on research in these subjects. These don’t always have to be newly minted reporters; existing ones actively diversifying into beats other than their first choices over the course of a few years would serve just as well.

This sort of diversification doesn’t happen regularly, but when it does, it can also benefit younger journalists looking to make their presence felt. For example, it’s easier to stand out from the crowd writing about, say, semiconductor fabrication than about ecological research (although this isn’t to say one is more important than the other). When more such writing is produced, editors also stand to gain because they can offer readers more even coverage of research in the country instead of painting a lopsided picture.

One might argue that there needs to be demand from readers as well, but the relationship between editors and readers isn’t a straightforward demand-supply contest. If that were the case, the news would have become synonymous with populist drivel a long time ago. Instead, it’s more about progressively cultivating new interests over the longer run, with stories that are both informative and interesting. Put another way, the editor should be able to bypass the ‘interestedness indicator’ once in a while to publish stories that readers didn’t know they needed (such as The Wire‘s piece on quantum biology earlier this month).

Such a thing obviously wouldn’t be possible without journalists pitching stories other than what they usually do – and, of course, without editors who have signalled that they are willing to take such risks.

Talking scicomm at NCBS – II

I was invited to speak to the students of the annual science writing workshop conducted at the National Centre for Biological Sciences, Bangalore, for the second year (notes from the first year’s talk are here).

Some interesting things we discussed:

1. Business of journalism: There were more questions from this year’s batch of aspiring science writers about the economics of online journalism, how news websites grow, how much writers can expect to get paid, etc. This is heartening: more journalists at all levels should be aware of, and if possible involved in, how their newsrooms make their money. Because if you retreat from this space, you cede it to a capitalist who doesn’t acknowledge the principles and purpose of journalism. If money has to make its way into the hands of journalists – as it should, for all the work that they do – only journalists can ensure that it’s clean.

2. Conflicts of interest: The Wire has more to lose from conflicts of interest in a story simply because there are more people out there looking to bring it down. So the cost of slipping up is high. But there’s no disagreeing that being diligent on this front always makes for a better report.

3. Formulae: There is no formula for a good science story. A story is good when it is composed of good writing and when it is a good story in the same way we think of good stories in fiction. It needs to entertain without betraying the spirit of its subject, and – unlike in fiction – it needs to seek out the truth. That it also needs to be in the public interest is something I’m not sure about, although only to the extent that it doesn’t compromise the rights of any other actor. This definition is indeed vague, but only because the ‘public interest’ is a shape-shifting entity. For example, two scholars having an undignified fight over some issue in the public domain wouldn’t be in the public interest – and I would deem it unfit for publication for just that reason. At the same time, astronomers discovering a weird star billions of lightyears away may not be in the public interest either – but that wouldn’t be enough reason to disqualify the story. In fact, a deeper point: when the inculcation of scientific temper, adherence to the scientific method and financial support for the performance of science are all deemed to not be in the public interest, then covering these aspects of science by the same yardstick will only give rise to meaningless stories.

4. Representation of authority: If two scientists at the same institute are the only two people working on a new idea, and one of them has published a paper, can you rely on the other’s opinion of it? I wouldn’t – they’re both paid by the same institution, and it is in both their interests to uphold the stature of the institution and all the work it supports, because their individual statures are upheld as a result. Thankfully, this situation hasn’t come to be – but something similar has. Most science journalists in the country frequently quote scientists from Bangalorean universities on topics like molecular biology and ecology because they’re the most talkative. However, the price they’re quietly paying for this is establishing the impression that the only scientists in the country worth quoting on these topics are based out of Bangalore. That is an injustice.

5. Content is still king: You can deck up a page with the best visuals, but if the content is crap, nothing will save the package from flopping. You can also package great content in a lousy-looking page and it will still do well. This came up in the context of a discussion on emulating the likes of Nautilus and Quanta in India. The stories on their pages read so well because they are good stories, not because they’re accompanied by cool illustrations. That said, it’s also important to remember that illustrations cost quite a bit of money, so when the success of a package is mostly in the hands of the content itself, paying attention to that alone during a cash-crunch may not be a bad idea.

Talking about science, NCBS

On June 24, I was invited to talk at the NCBS Science Writing Workshop, held every year for 10 days. The following notes are some of my afterthoughts from that talk.

Science journalism online is doing better now than science journalism in print, in India. But before we discuss the many ways in which this statement is true, we need to understand what a science story can be as it is presented in the media. I’ve seen six kinds of science pieces:

1. Scientific information and facts – Reporting new inventions and discoveries, interesting hypotheses, breaking down complex information, providing background information. Examples: first detection of g-waves, Dicty World Race, etc.

2. Processes in science – Discussing opinions and debates, analysing how complex or uncertain issues are going to be resolved, unravelling investigations and experiments. Examples: second detection of g-waves, using CRISPR, etc.

3. Science policy – Questioning/analysing the administration of scientific work, funding, HR, women in STEM, short- and long-term research strategies, etc. Examples: analysing DST budgets, UGC’s API, etc.

4. People of science – Interviewing people, discussing choices and individual decisions, investigating the impact of modern scientific research on those who practice it. Examples: interviewing women in STEM, our Kip Thorne piece, etc.

5. Auxiliary science – Reporting on the impact of scientific processes/choices on other fields (typically closer to our daily lives), discussing the economic/sociological/political issues surrounding science but from an economic/sociological/political PoV. Examples: perovskite in solar cells, laying plastic roads, etc.

6. History and philosophy of science – Analysing historical and/or philosophical components of science. Examples: some of Mint on Sunday’s pieces, our columns by Aswin Seshasayee and Sunil Laxman, etc.

Some points:

1. Occasionally, a longform piece will combine all six types – but you shouldn’t force such a piece without an underlying story.

2. The most common type of science story is 5 – auxiliary science – because it is the easiest to sell. In these cases, the science itself plays second fiddle to the main issue.

3. Not all stories fall cleanly into one bin or another. The best science pieces can’t always be pinned to a single bin, but the worst pieces get 1 and 2 wrong, are misguided about 4 (usually because they get 1 and 2 wrong), or misrepresent the science in 5.

4. Journalism is different from writing in that journalism has a responsibility to expose and present the truth. At the same time, 1, 2 and 6 stories – presenting facts in a simpler way, discussing processes, and discussing the history and philosophy of science – can be as much journalism as writing because they increase awareness of the character of science.

5. Despite the different ways in which we’ve tried to game the metrics, one thing has held true: content is king. A well-written piece with a good story at its heart may or may not do well – but a well-packaged piece that is either badly written or has a weak story at its centre (or both) will surely flop.

6. You can always control the goodness of your story by doing due diligence, but if you’re pitching your story to a publisher on the web, you have to pitch it to the right publisher. This is because those who do better on the web only do so by becoming niche publications. If a publication wants to please everyone, it has to operate at a very large scale (>500 stories/day). On the other hand, a niche publication will have clearly identified its audience and will only serve that segment. Consequently, only some kinds of science stories – as identified by those niche publications’ preferences in science journalism – will be popular on the web. So know what editors are looking for.