Not all retracted papers are fake news – but then, which ones are?

The authors of a 2017 paper about why fake news spreads so fast have asked for it to be retracted because they’ve discovered a flaw in their analysis. This is commendable because it signals that scientists are embracing retractions as a legitimate part of the scientific process (which includes discovery, debate and publishing).

Such an attitude is important because without it, it is impossible to de-sensationalise retractions and return them from the domain of embarrassment to that of the matter-of-fact. In so doing, we give researchers the room they need to admit mistakes without being derided for them, and let them know that mistakes are par for the course.

However, two insensitive responses by influential people to the authors’ call for retraction signal that they might have been better off sweeping such mistakes under the rug and moving on. One of them is Ivan Oransky, one of the two people behind Retraction Watch, the very popular blog that’s been changing how the world thinks about retractions. Its headline for the January 9 post about the fake news paper went like this:

The authors are doing a good thing and don’t deserve to have their paper called ‘fake news’. Publishing a paper with an honest mistake is just that; ‘fake news’, on the other hand, involves an actor planting information in the public domain knowing it to be false, with the intent to confuse or manipulate its consumers. In short, it’s malicious. The authors bear no malice – quite the opposite, in fact, going by their suggestion that the paper be retracted.

The other person who bought into this narrative is Corey S. Powell, a contributing editor at Discover and Aeon:

His tweet isn’t as bad as Oransky’s headline but it doesn’t help the cause either. He wrongly suggests the paper’s conclusions were fake; they weren’t. They were simply wrong. There’s a chasm between these two labels, and we must all do better to keep it that way.


Of course, there are other categories of papers that are retracted and whose authors are often suspected of malice (as in the intent to deceive).

Case I – The first comprises papers pushed through by bungling scientists more interested in scientometrics than in actual research, their substance cleverly forged to clear peer review. However, speaking with the Indian experience in mind, these scientists aren’t malicious either – at least not to the extent that they want to deceive non-scientists. They’re often forced to publish by an administration that doesn’t acknowledge any other measure of academic success.

Case II – Outside of the Indian experience, Retraction Watch has highlighted multiple papers whose authors knew what they were doing was wrong, yet did it anyway to have papers published because they sought glory and fame. Jan Hendrik Schön and B.S. Rajput are the first to come to mind. To the extent that these were the desired outcomes, the authors of such papers exhibit a greater degree of malicious intent than those who do it to succeed in a system that gives them no other options.

But even then, the moral boundaries aren’t clear. For example, why exactly did Schön resort to research misconduct? If it was for fame/glory, to what extent would he alone be to blame? Because we already know that research administration in many parts of the world has engendered extreme competition among researchers, for recognition as much as grants, and it wouldn’t be far-fetched to pin a part of the blame for things like the Schön scandal on the system itself. The example of Brian Wansink is illustrative in this regard.

What’s wrong with that? – This brings us to that other group of papers, authored by scientists who know what they’re doing and believe it’s a legitimate way of doing science – either out of ignorance or because they harbour a different worldview, one in which they interpret protocols differently, in which they think they will never succeed unless they “go viral”, or, usually, both. The prime example of such a scientist is Brian Wansink.

For the uninitiated, Wansink works in the field of consumer behaviour and is notorious for publishing multiple papers from a single dataset sliced in so many ways, and so thin, as to be practically meaningless. Though he has had many of his papers retracted, he has often stood by his methods. As he told The Atlantic in September 2018:

The interpretation of [my] misconduct can be debated, and I did so for a year without the success I expected. There was no fraud, no intentional misreporting, no plagiarism, or no misappropriation. I believe all of my findings will be either supported, extended, or modified by other research groups. I am proud of my research, the impact it has had on the health of many millions of people, and I am proud of my coauthors across the world.

Of all the people whom we say ‘ought to know better’, the Wansink kind exemplify it the most. But malice? I’m not so sure.

The Gotcha – Finally, there’s that one group I think is actually malicious, typified by science writer John Bohannon. In 2015, Bohannon published a falsified paper in the journal International Archives of Medicine that claimed eating chocolate could help people lose weight. Many news outlets around the world publicised the study even though it was riddled with conceptual flaws. For example, the sample size of the cohort was too small. None of the news reports mentioned this, nor did any of their writers make any serious effort to interrogate the paper in any other way.

Bohannon had shown up their incompetence. But this suggests malice because it was 2015 – by which time everyone interested in knowing how many science writers around the world sucked at their jobs already knew the answer. Bohannon wanted to demonstrate it anew for some reason but only ended up misleading thousands of people worldwide. His purpose would have been better served had he drawn up the faked paper together with a guideline on how journalists could have gone about covering it.

The Baffling – All of the previous groups concerned people who had written papers whose conclusions were not supported by the data/analysis, deliberately or otherwise. This group concerns people who have been careless, sometimes dismissively so, with the other portions of the paper. The most prominent examples include C.N.R. Rao and Appa Rao Podile.

In 2016, Appa Rao admitted to The Wire that the text in many of his papers had been plagiarised, and promptly asked the reporter how he could rectify the situation. Misconduct that doesn’t extend to a paper’s technical part is a lesser offence – but it’s an offence nonetheless. And it prompts a more worrying question: if these people think it’s okay to plagiarise, what do their students think?