A Kuhnian gap between research publishing and academic success

There is a gap between research publishing and how it relates to academic success. On the one hand, there are scientists complaining of low funding, short-staffed labs, low-quality or absent equipment, suboptimal employment and tenure terms, bureaucratic incompetence and political interference. On the other, there are scientists who describe their success within academia in terms of being published in XYZ journals (with impact factors of PQR), having high h-indices, having so many papers to their names, and so on.

These two scenarios – both very real in India and, I imagine, in most countries – don’t straightforwardly follow from each other. They require a bridge, a systemic symptom that makes both of them possible even when they’re incompatible with each other. This bridge is those scientists’ attitudes about what it’s okay to do in order to keep the two façades in harmonious coexistence.

What is it okay to do? For starters, keep the research-publishing machinery running in a way that allows them to evaluate other scientists on matters other than their scientific work. This way, lack of resources for research can be decoupled from scientists’ output in journals. Clever, right?

According to a study published a month ago, manuscripts that include a Nobel laureate’s name among the coauthors are six times more likely to be accepted for publication than those without a laureate’s name in the author list. This finding piles onto other known problems with peer-review, including gender-related ones: women’s papers are accepted less often and men dominate the community of peer-reviewers. Nature News reported:

Knowledge of the high status of a paper’s author might justifiably influence a reviewer’s opinion: it could boost their willingness to accept a counterintuitive result, for example, on the basis of the author’s track record of rigour. But Palan’s study found that reviewers’ opinions changed across all six of the measures they were asked about, including the subject’s worthiness, the novelty of the information and whether the conclusions were supported. These things should not all be affected by knowledge of authorship, [Palan, one of the paper’s coauthors, said].

Palan also said the solution to this problem is for journals to adopt double-anonymised peer-review: the authors don’t know who the reviewers are and the reviewers don’t know who the authors are. The most common form of peer-review is the single-blind variety, where the reviewers know who the authors are but not vice versa. FWIW, I prefer double-anonymised peer-review plus the journal publishing the peer-reviewers’ anonymised reports along with the paper.

Then again, modifying peer-review would still be localised to journals that are willing to adopt newer mechanisms, and would thus be a stop-gap solution that doesn’t address the use of faulty peer-review mechanisms both inside journals and in academic settings. For example, given the resource-minimal context in which many Indian research institutes and universities function, hiring and promotion committees often decide whom to hire or promote based on which journals their papers have been published in and/or the number of times those papers have been cited.

Instead, what we need is systemic change that responds to all the problems with peer-review, instead of one problem at a time in piecemeal fashion, by improving transparency, resources and incentives. Specifically: a) make peer-review more transparent, b) give scientists the resources – including time and freedom – to evaluate each other’s work on factors localised to the context of their research (including the quality of their work and the challenges in their way), and c) incentivise scientists to do so in order to accelerate change and ensure compliance.

The scientometric numbers, originally invented to facilitate the large-scale computational analysis of the scientific literature, have come to subsume the purpose of the scientific enterprise itself: that is, scientists often want to have good numbers instead of wanting to do good science. As a result, there is often an unusual delay – akin to magnetic hysteresis – between the resources for research being cut back and the resulting drop in productivity and quality showing in the researchers’ output. Perhaps more fittingly, it’s a Kuhnian response to paradigm change.

The matter of a journal’s reputation

Apparently (and surprisingly) The Telegraph didn’t allow Dinesh Thakur to respond to an article by Biocon employee Sundar Ramanan. In it, Ramanan deems inaccurate Thakur’s article arguing that the claims to efficacy of the Biocon drug Itolizumab weren’t backed by enough data to have received the DCGI’s approval. Even setting aside The Telegraph‘s policy on how rebuttals are handled (I have no idea what it is), Ramanan – as a proxy for his employer – has everything to gain by defending Itolizumab’s approval and Thakur, nothing. This fact alone means Thakur should have been allowed to respond. As it stands, the issue has been reduced to a he-said-she-said affair, and I doubt that in reality it is one. Thakur has since published his response at Newslaundry.

I’m no expert but there are many signs of whataboutery in Ramanan’s article. As Thakur writes, there’s also the matter of the DCGI waiving phase III clinical trials for Itolizumab, which can only be done if the phase II trials were great – and they’re unlikely to have been, given the ludicrous cohort size of 30 people. Kiran Mazumdar-Shaw and Seema Ahuja, the former Biocon’s MD and the latter a PR person affiliated with the company, have also resorted to ad hominem arguments on Twitter against Itolizumab’s critics, have on more than one occasion construed complaints about the drug-approval process as expressions of anti-India sentiment, and have more recently begun to advance company-sponsored ‘expert opinions’ as “peer-reviewed” evidence of Itolizumab’s efficacy.

Even without presuming to know who’s ultimately right here, Mazumdar-Shaw and Ahuja don’t sound like the good guys, especially since their fiercest critics I’ve spotted thus far on Twitter are a bunch of highly qualified public health experts and medical researchers. Accusing them of ‘besmirching India’ inspires anything but confidence in Itolizumab’s phase II trial results.

It’s in this context that I want to draw attention to one particular word in Ramanan’s article in The Telegraph that I believe signals the ‘you scratch my back, I scratch yours’ relationship between many scientific journals and the accumulation of knowledge as a means to power – and in my view is a further sign that something’s rotten in the state of Denmark. Ramanan writes (underline added):

Itolizumab was first approved by the Drugs Controller General of India for the treatment of patients with active moderate to severe chronic plaque Psoriasis in 2013 based on “double-blind, randomized, placebo-controlled, Phase III study”. The safety and efficacy of the drug was published in globally reputed, peer-reviewed journals and in proceedings (Journal of the American Academy of Dermatology, and the 6th annual European Antibody Congress, respectively).

What does a journal’s reputation have to do with anything? The reason I keep repeating this point is not because you don’t get it – I’m sure you do; I do it to remind myself, and everyone else who may need to be reminded, of the different contexts in which the same issue repeatedly manifests. Invoking reputation, in this instance, smells of an argument grounded in authority instead of evidence. Then again, this is a tautological statement considering Biocon issued a press release before the published results – preprint or post-print – were available (they still aren’t), but let’s press on in an attempt to make sense of reputation itself.

The matter of a journal’s reputation, whether local or global, is grating because the journals for which this attribute is germane have acquired it by publishing certain kinds of papers over others – papers that tend to describe positive results, sensational results, and, by virtue of their reader-pays business model, results that are of greater interest to those likely to want to pay to access them. These details matter because it’s important to ask what ‘reputation’ means, and based on that we can then understand some of the choices of people for whom this ‘reputation’ matters.

Reputation is the outcome of gatekeeping, of deeming some papers worthy of publication according to metrics that have less to do with the contents of the paper* and more to do with the journal’s desirability and profitability. As Björn Brembs wrote in 2010:

It doesn’t matter where something is published – what matters is what is being published. Given the obscene subscription rates some of these journals charge, if anything, they should be held to a higher standard and their ‘reputation’ (i.e., their justification for charging these outrageous subscription fees!) being constantly questioned, rather than this unquestioning dogma that anything published there must be relevant, because it was published there.

However, once a scientist has broken into an élite club by publishing a paper in a particular journal, the journal’s reputation starts to matter to the scientist as well, and becomes synonymous with the scientist’s own aspirations of quality, rigour and academic power (look out for proclamations like “I have published 25 papers in journal X, which has an impact factor of 43”). This way, over time, the scientific literature becomes increasingly skewed in favour of some kinds of papers over others – especially of the positive, sensational variety – leading to a vicious cycle.

The pressure in academia to ‘publish or perish’ also forces scientists to shoehorn themselves tighter into the journals’ definition of what a ‘good’ paper is, more so if publishing in some journals has seemingly become associated with increasing one’s likelihood of winning ‘reputed’ awards. As such, reputation is neither accidental nor innocent. From the point of view of the science that fills scientific journals, reputation is an arbitrary gatekeeper designed to disqualify an observer from calling the journal’s contents into question – which I’m sure you’ll understand is essentially antiscientific.

Ramanan’s appeal to the reputation of the journal that published the results of the tests of Itolizumab’s efficacy against cytokine release syndrome (CRS) in psoriasis patients is, in a similar vein, an appeal to an entity that has nothing to do with either the study itself or the matter at hand. As Dr Jammi Nagaraj Rao wrote for The Wire Science, there’s no reason for us to believe that knowing how Itolizumab works against CRS will help us understand how it will work against CRS in COVID-19 patients, considering we’re not entirely sure how CRS plays out in COVID-19 patients – or whether Itolizumab’s molecular mechanism of action can be directly translated into a statement of efficacy against a new disease.

In effect, the invitation to defer to a journal’s reputation is akin to an invitation to hide behind a cloak of superiority that would render scrutiny irrelevant. But that Ramanan used this word in this particular context is secondary**; the primary issue is that journals that pride themselves on such arbitrarily defined attributes as ‘reputation’ and ‘prestige’ also offer them as a defence against demands for transparency and access. Instead, why not let the contents of the paper speak for themselves? Biocon should publish the paper pertaining to its controversial phase II trial of Itolizumab in COVID-19 patients and the DCGI should publicise the inner workings of its approval process asap. As they say: show us (the results), don’t tell us (the statement).

* Beyond determining if the paper is legitimate, has sound science and is free of mistakes, malpractice or fraud.

** There are also other words Ramanan uses to subtly delegitimise Thakur’s article – calling it an “opinion article” and presuming to “correct” Thakur’s arguments that constitute a “disservice to the public”.