NCBS fracas: In defence of celebrating retractions

Continuing from here

Irrespective of Arati Ramesh’s words and actions, I find every retraction worth celebrating because of how hard-won retractions in general have been, in India and abroad. I don’t know how often papers coauthored by Indian scientists are retracted, nor how that rate compares to the international average. But I do know that the quality of scientific work emerging from India is grossly disproportionate (in the negative sense) to the size of the country’s scientific workforce – which is to say most of the papers published from India, irrespective of the journal, contain low-quality science (if they contain science at all). It’s not for nothing that Retraction Watch has a category called ‘India retractions’, with 196 posts.

Second, it’s only recently that the global scientific community’s attitude towards retractions started changing, and even now most of that change is localised to the US and Europe. And even there, there is a distinction: between retractions for honest mistakes and those for dishonest conduct, i.e. misconduct. Our attitudes towards the former have been changing; retractions for misconduct have in fact been harder to secure, and continue to be.

The work of science integrity consultant Elisabeth Bik allows us a quick take: the rate at which sleuths are spotting research fraud is far higher than the rate at which journals are retracting the corresponding papers. Bik herself has often said, on Twitter and in interviews, that most journal editors simply don’t respond to complaints, or quash them with weak excuses and zero accountability. Between 2015 and 2019, a group of researchers identified papers that had been published in violation of the CONSORT guidelines in journals that endorsed those very guidelines, and wrote to the journals’ editors. From The Wire Science’s report:

… of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for the letter precluded that, they never heard back from the journal.

On the other hand, NEJM stated that the authors of reports it published were not required to abide by the CONSORT guidelines. However, NEJM itself endorses CONSORT.

The point is that bad science is hard enough to spot, and getting stakeholders to act on it is even harder. It shouldn’t have to be, but it is. In this context, every retraction is a commendable thing – no matter how obviously warranted it is. It’s also commendable when a paper ‘destined’ for retraction is retracted sooner (than the corresponding average), because we already have some evidence that “papers that scientists couldn’t replicate are cited more”. Even when a paper in the scientific literature dies, other scientists don’t seem to immediately recognise that it is dead, and continue to cite it in their own work as evidence of this or that thesis. These are called zombie citations. Retracting such papers is a step in the right direction – insufficient by itself to solve the many problems of maintaining the quality of the literature, but necessary.

As for the specific case of Arati Ramesh: she defended her group’s paper on PubPeer in two comments that offered more raw data and seemed founded on a conviction that the images in the paper were real, not doctored. Some commentators have said that her attitude is a sign she didn’t know the images had been doctored, while others have said (and I tend to agree) that this defence of Ramesh is baffling, considering both of her comments came after detailed descriptions of the forgery. Members of the latter group have also said that, in effect, Ramesh tried to defend her paper until it was impossible to do so, at which point she published her controversial personal statement, in which she threw one of her lab’s students under the bus.

There are a lot of missing pieces here when it comes to ascertaining the scope and depth of Ramesh’s culpability – given that she is the lab’s principal investigator (PI), that she, the PI, has since started to claim that her lab doesn’t have access to the experiments’ raw data, and that the now-retracted paper says she “conceived the experiments, performed the initial bioinformatic search for Sensei RNAs, supervised the work and wrote the manuscript”.

[Edit, July 11, 2021, 6:28 pm: After a conversation with Priyanka Pulla, I edited the following paragraph. The previous version appears below, struck through.]

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? Yes and no… Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. This is the ‘no’ part. The ‘yes’ arises from Ramesh’s actions on PubPeer, to ‘keep going until one can go no longer’, so to speak, which suggests, among other things – and I’m shooting in the dark here – that she somehow couldn’t spot the problem right away. So giving her credit for the retraction would set a low, if also weird, bar; I think credit belongs on this count with the fastidious commenters of PubPeer. Ramesh would still have had to sign off on a document saying “we’ve agreed to have the paper retracted”, as journals typically require, but perhaps we can also speculate as to whom we should really thank for this outcome – anyone/anything from Ramesh herself to the looming threat of public pressure.

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? No. Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. Perhaps we can speculate as to whom we should thank for this outcome – Arati Ramesh herself, someone else in her lab, members of the internal inquiry committee that NCBS set up, some other members of the institute or even the looming threat of public pressure. We don’t have to give Ramesh credit here beyond her signing off on the decision (as journals typically require) – and we still need answers on all the other pieces of this puzzle, as well as accountability.

A final point: I hope that the intense focus the NCBS fracas has commanded – and could continue to command, considering Bik has flagged one more paper coauthored by Ramesh, and others have flagged two coauthored by her partner Sunil Laxman (published in 2005 and 2006), both on PubPeer for potential image manipulation – will widen to encompass the many instances of misconduct popping up every week across the country.

NCBS, as we all know, is an elite institute as India’s centres of research go: it is well-funded (by the Department of Atomic Energy, a government body relatively free from bureaucratic intervention), staffed by more-than-competent researchers and students, has published commendable research (I’m told) and has a functional outreach office, and its scientists often feature in press reports commenting on this or that study. As such, it is overrepresented in the public imagination and easily gets attention. However, the problems assailing NCBS vis-à-vis the reports on PubPeer are not unique to the institute, and should in fact force us to rethink our tendency (mine included) to give such impressive institutes – often, and by no coincidence, Brahmin strongholds – the benefit of the doubt.

(1. I have no idea how things are at India’s poorly funded state universities and smaller private universities, but even there – and in fact at institutes that are less elite overall but still “up there” in terms of fortunes, like the IISERs – Brahmins have been known to dominate the teaching and professorial staff, if not the students, and have still been found guilty of misconduct, often sans accountability. 2. There’s a point to be made here about plagiarism, the graded way in which it is ‘offensive’, access to good-quality English education for people of different castes in India, the resulting access to plus inheritance of cultural and social capital, and the funnelling of students with such capital into elite institutes.)

As I mentioned earlier, Retraction Watch has an ‘India retractions’ category (although, to be fair, there are also similar categories for China, Italy, Japan and the UK, but not for France, Russia, South Korea or the US; together, these countries ranked 1-10 on the list of countries with the most scientific and technical journal publications in 2018). Its database lists 1,349 retracted papers with at least one author affiliated with an Indian institute – five of them retracted since the NCBS paper met its fate. The latest was retracted on July 7, 2021 (after being published on October 16, 2012). Again, these are just the instances in which a paper was actually retracted. Further up the funnel, we have retractions that Retraction Watch missed, papers that editors are deliberating on, complaints that editors have rejected, complaints that editors have ignored, complaints that editors haven’t yet received, and journals that don’t care.
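
(For anyone who wants to repeat this kind of tally, below is a minimal sketch of how one might count such retractions from a CSV export of the Retraction Watch database. The file name and the column names – Country, RetractionDate, OriginalPaperDate – are my assumptions, not guaranteed to match the actual export; check its header row first.)

```python
# A minimal sketch, assuming a local CSV export of the Retraction Watch
# database; the file name and column names below are assumptions and may
# differ from the real export's header row.
import pandas as pd

df = pd.read_csv(
    "retraction_watch_export.csv",                        # hypothetical file name
    parse_dates=["RetractionDate", "OriginalPaperDate"],  # assumed column names
)

# Papers with at least one author affiliated with an Indian institute; the
# country field can list several countries, so match on the substring.
india = df[df["Country"].str.contains("India", na=False)]
print("Retractions with an Indian affiliation:", len(india))

# Time from publication to retraction, e.g. roughly 8.7 years for a paper
# published on October 16, 2012 and retracted on July 7, 2021.
days_to_retraction = (india["RetractionDate"] - india["OriginalPaperDate"]).dt.days
print(days_to_retraction.describe())
```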

So, retractions – and retractors – deserve brownie points.

Dealing with plagiarism? Look at thy neighbour

Four doctors affiliated with Kathmandu University (KU) in Nepal are going to be fired because they plagiarised data in two papers. The papers were retracted last year from the Bali Medical Journal, where they had been published. A dean at the university, Dipak Shrestha, told a media outlet that the matter would be settled within two weeks. A total of six doctors, including the four above, are also going to be blacklisted by the journal. This is remarkably swift and decisive action against a problem that refuses to go away in India, for many reasons. But I’m not an apologist; one of those reasons is that many teachers at colleges and universities seem to think “plagiarism is okay”. And for as long as that attitude persists, academicians are going to be able to plagiarise and flourish in the country.

One of the other reasons plagiarism is rampant in India is the language problem. As Praveen Chaddah, a former chairman of the University Grants Commission, has written, there is a form of plagiarism that can be forgiven – the form at play when a paper’s authors find it difficult to articulate themselves in English but have original ideas all the same. The unforgivable form is when the ideas are plagiarised as well. According to a retraction notice supplied by the Bali Medical Journal, the KU doctors indulged in plagiarism of the unforgivable kind, and were duly punished. In India, however, I’m yet to hear of an instance where researchers found to have been engaging in such acts were pulled up as swiftly as their Nepali counterparts were, or had sanctions imposed on their work within a finite period and in a transparent manner.

The production and dissemination of scientific knowledge should not have to suffer because some scientists aren’t fluent in a language. Who knows – India might already be the ‘science superpower’ everyone wants it to be if we were able to account for the information and knowledge produced in all its languages. But this does not mean India’s diversity affords it the licence to challenge the use of English as the de facto language of science; that would be stupid. English is prevalent, dominant, even hegemonic (as K. VijayRaghavan has written). So if India is to make it to the Big League, its officials must consider doing these things:

  1. Inculcate the importance of communicating science. Writing a paper is also a form of communication; teach how to do it alongside technical skills.
  2. Set aside money – as some Australian and European institutions do¹ – to help those for whom English isn’t their first, or even second, language write papers that will be appreciated for their science instead of being rejected for their language (unfair though this may be).
  3. DO WHAT NEPAL IS DOING – define reasonable consequences for plagiarism (especially of the unforgivable kind), enumerate them in clear and cogent language, ensure these sanctions are easily accessible to scientists as well as the public, and enforce them consistently.

Researchers ought to know better – especially the more prominent, more influential ones. The better known a researcher is, the less forgivable their offence should be, not least because they set precedents that others will follow. And to be able to pull them up effectively when they act carelessly, an independent body should be set up at the national level, particularly for institutions funded by the central government, instead of expecting the offender’s host institution to effectively punish someone well-embedded in its own hierarchy.

1. Hat-tip to Chitralekha Manohar.

Featured image credit: xmex/Flickr, CC BY 2.0.

Some research misconduct trends by the numbers

A study published in eLife on August 14, 2014, looked at data pertaining to papers published between 1992 and 2012 that the Office of Research Integrity had determined contained research misconduct. From the abstract:

Data relating to retracted manuscripts and authors found by the Office of Research Integrity (ORI) to have committed misconduct were reviewed from public databases. Attributable costs of retracted manuscripts, and publication output and funding of researchers found to have committed misconduct were determined. We found that papers retracted due to misconduct accounted for approximately $58 million in direct funding by the NIH between 1992 and 2012, less than 1% of the NIH budget over this period. Each of these articles accounted for a mean of $392,582 in direct costs (SD $423,256). Researchers experienced a median 91.8% decrease in publication output and large declines in funding after censure by the ORI.

While the number of retractions worldwide is on the rise – in part because the numbers of papers being published and of journals are also rising – the study addresses only a subset of these papers: those authored by researchers who received funding from the US National Institutes of Health (NIH).
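
As a rough sanity check on the quoted figures (my arithmetic, not the study’s), dividing the total attributable direct funding by the mean direct cost per article gives the approximate number of NIH-funded retracted articles in the sample:

```python
# Back-of-envelope arithmetic on the figures quoted above; the study's own
# article count may differ because of rounding and how costs were attributed.
total_direct_funding = 58_000_000   # ~$58 million in NIH direct funding, 1992-2012
mean_cost_per_article = 392_582     # mean direct cost attributed per retracted article

implied_articles = total_direct_funding / mean_cost_per_article
print(f"Implied number of articles: ~{implied_articles:.0f}")   # roughly 148
```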

[Chart: publication frequency]

Among them, there is no discernible trend in terms of impact factors and attributable losses. In the chart below, the size of each datapoint corresponds to the direct attributable loss and its color, to the impact factor of the journal that published the paper.

[Chart: retracted papers by direct attributable loss (datapoint size) and journal impact factor (colour)]

However, is the time to retraction dropping?

The maximum time to retraction has been on the decline since 1997. However, on average, the time to retraction is still fluctuating, influenced as it is by the number of papers retracted and the nature of misconduct.

[Chart: trend in time to retraction]

No matter the time to retraction or the impact factor of the journal, most scientists see a significant difference in funding before and after the ORI report comes through, as the chart below shows, sorted by the amount of funding. The right axis displays total funding pre-ORI and the left, total funding post-ORI.

[Chart: total funding pre-ORI (right axis) and post-ORI (left axis) for implicated researchers]

As the study’s authors summarize in their abstract: “Researchers experienced a median 91.8% decrease in publication output and large declines in funding after censure by the ORI,” while total funding toward all implicated researchers went from $131 million to $74.5 million.
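
For what it’s worth, the relative size of that drop is easy to compute from those two numbers alone:

```python
# Relative decline in total funding to implicated researchers, before vs after
# ORI censure, using the figures quoted above.
pre_ori = 131_000_000    # $131 million before censure
post_ori = 74_500_000    # $74.5 million after censure

decline = (pre_ori - post_ori) / pre_ori
print(f"Decline in total funding: {decline:.0%}")   # about 43%
```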

There could be some correlation between the type of misconduct and the decline in funding, but there’s not enough data to determine that. Nonetheless, there are eight instances in 1992-2012 in which the amount of funding increased after the ORI report; the lowest such rise is seen for John Ho, who committed fraud, and the highest for Alan Landay, implicated for plagiarism, a ‘lesser’ charge.

[Chart: instances of increased funding after the ORI report]

From the paper:

The personal consequences for individuals found to have committed research misconduct are considerable. When a researcher is found by the ORI to have committed misconduct, the outcome typically involves a voluntary agreement in which the scientist agrees not to contract with the United States government for a period of time ranging from a few years to, in rare cases, a lifetime. Recent studies of faculty and postdoctoral fellows indicate that research productivity declines after censure by the ORI, sometimes to zero, but that many of those who commit misconduct are able to find new jobs within academia (Redman and Merz, 2008, 2013). Our study has found similar results. Censure by the ORI usually results in a severe decrease in productivity, in many cases causing a permanent cessation of publication. However the exceptions are instructive.

Retraction Watch reported the findings with a special focus on the cost of research misconduct. It spoke to Daniele Fanelli, one part of whose comment is notable – albeit no less than the rest:

The question of collateral damage, by which I mean the added costs caused by other research being misled, is controversial. It still has to be conclusively shown, in other words, that much research actually goes wasted directly because of fabricated findings. Waste is everywhere in science, but the role played by frauds in generating it is far from established and is likely to be minor.

References

Stern, A.M., Casadevall, A., Steen, R.G. and Fang, F.C., ‘Financial costs and personal consequences of research misconduct resulting in retracted publications’, eLife, August 14, 2014; 3:e02956.