Yes, scientific journals should publish political rebuttals

(The headline is partly click-bait, as I admit below, because some context is required.) From ‘Should scientific journals publish political debunkings?’, Science Fictions by Stuart Ritchie, August 27, 2022:

Earlier this week, the “news and analysis” section of the journal Science … published … a point-by-point rebuttal of a monologue a few days earlier from the Fox News show Tucker Carlson Tonight, where the eponymous host excoriated Dr. Anthony Fauci, of “seen everywhere during the pandemic” fame. … The Science piece noted that “[a]lmost everything Tucker Carlson said… was misleading or false”. That’s completely correct – so why did I have misgivings about the Science piece? It’s the kind of thing you see all the time on dedicated political fact-checking sites – but I’d never before seen it in a scientific journal. … I feel very conflicted on whether this is a sensible idea. And, instead of actually taking some time to think it through and work out a solid position, in true hand-wringing style I’m going to write down both sides of the argument in the form of a dialogue – with myself.

There’s one particular exchange between Ritchie and himself in his piece that threw me off the entire point of the article:

[Ritchie-in-favour-of-Science-doing-this]: Just a second. This wasn’t published in the peer-reviewed section of Science! This isn’t a refereed paper – it’s in the “News and Analysis” section. Wouldn’t you expect an “Analysis” article to, like, analyse things? Including statements made on Fox News?

[Ritchie-opposed-to-Science-doing-this]: To be honest, sometimes I wonder why scientific journals have a “News and Analysis” section at all – or, I wonder if it’s healthy in the long run. In any case, clearly there’s a big “halo” effect from the peer-reviewed part: people take the News and Analysis more seriously because it’s attached to the very esteemed journal. People are sharing it on social media because it’s “the journal Science debunking Tucker Carlson” – way fewer people would care if it was just published on some random news site. I don’t think you can have it both ways by saying it’s actually nothing to do with Science the peer-reviewed journal.

[Ritchie-in-favour]: I was just saying they were separate, rather than entirely unrelated, but fair enough.

Excuse me, but not at all fair enough! The essential problem is the set of ties between what a journal does, why it does those things, and what impressions they uphold in society.

First, Science‘s ‘news and analysis’ section isn’t distinguished by its association with the peer-reviewed portion of the journal but by its own reportage and analyses, intended for scientists and non-scientists alike. (Mea culpa: the headline of this post answers the question in the headline of Ritchie’s post, even though the body of my post maintains the distinction between the journal and its ‘news and analysis’ section.) A very recent example was Charles Piller’s investigative report that uncovered evidence of image manipulation in a paper that has had an outsized influence on the direction of Alzheimer’s research since it was published in 2006. When Ritchie writes that the peer-reviewed journal and the ‘news and analysis’ section are separate, he’s right – but when he suggests that the former’s prestige is responsible for the latter’s popularity, he couldn’t be more wrong.

Ritchie is a scientist and his position may reflect that of many other scientists. I recommend that he and others who agree with him consider the section from the point of view of a science journalist; they will immediately see, as we do, that it has broken many agenda-setting stories and published several accomplished journalists and scientists (Derek Lowe’s column being a good example). Another impression that could change with this change of perspective concerns the relevance of peer review itself, and the deceptively deleterious nature of an associated concept Ritchie repeatedly invokes – one that could well be the pseudo-problem at the heart of his dilemma: prestige. To quote from a blog post published in February this year, in which University of Regensburg neurogeneticist Björn Brembs analysed the novelty of results published by so-called ‘prestigious’ journals:

Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor for any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and to probably only a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, being tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.

In sum:

Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.

Take the ‘prestige’ away and one part of Ritchie’s dilemma – the journal Science‘s claim to being an “impartial authority”, which stands at risk of being diluted by its ‘news and analysis’ section’s engagement with “grubby political debates” – evaporates. Journals, especially glamour journals like Science, haven’t historically been authorities on ‘good’ science, such as it is, but have served to obfuscate the fact that only scientists can be. More broadly, the ‘news and analysis’ business has its own expensive economics, and publishers of scientific journals that can afford to set up such platforms should consider doing so, in my view, with a degree and type of separation between the two businesses that suits their respective circumstances. The simple reasons are:

1. Reject the false balance: there’s no sensible way publishing a pro-democracy article (calling out cynical and potentially life-threatening untruths) could affect the journal’s ‘prestige’, however it may be defined. But if it does, would the journal be wary of a pro-Republican (and effectively anti-democratic) scientist refusing to publish on its pages? If so, why? The two-part answer is straightforward: because many other scientists as well as journal editors are still concerned with the titles that publish papers instead of the papers themselves, and because of the fundamental incentives of academic publishing – to publish the work of prestigious scientists and sensational work, as opposed to good work per se. In this sense, the knock-back is entirely acceptable in the hope that it could dismantle the fixation on which journal publishes which paper.

2. Scientific journals already have access to expertise in various fields of study, as well as an incentive to participate in the creation of a sensible culture of science appreciation and criticism.

Featured image: Tucker Carlson at an event in West Palm Beach, Florida, December 19, 2020. Credit: Gage Skidmore/Wikimedia Commons, CC BY-SA 2.0.

PeerJ’s peer-review problem

Of all the scientific journals in the wild, there are a few I keep a closer eye on: they publish interesting results but, more importantly, they have been forward-thinking on matters of scientific publishing and have displayed a tendency to think out loud (through blog posts, say) and actively consider public feedback. Reading what they publish in these posts, and following the discussions that envelop them, has given me many useful insights into how scientific publishing works and, perhaps more importantly, how the perceptions surrounding this enterprise are shaped and play out.

One such journal is eLife. All their papers are open access, and they publish the authors’ notes and the reviewers’ comments alongside each paper. They also have a lively ‘magazine’ section in which they publish articles and essays by working scientists – especially younger ones – relating to the extended social environments in which knowledge-work happens. Now, for some reason, I’d cast PeerJ in a similarly progressive light, even though I hadn’t visited their website in a long time. But on August 16, PeerJ published the following tweet:

It struck me as a weird decision (not that anyone cares). Since the article explaining the journal’s decision appears to be available under a Creative Commons Attribution license, I’m reproducing it here in full so that I can annotate my way through it.

Since our launch, PeerJ has worked towards the goal of publishing all “Sound Science”, as cost effectively as possible, for the benefit of the scientific community and society. As a result we have, until now, evaluated articles based only on an objective determination of scientific and methodological soundness, not on subjective determinations of impact, novelty or interest.

At the same time, at the core of our mission has been a promise to give researchers more influence over the publishing process and to listen to community feedback over how peer review should work and how research should be assessed.

Great.

In recent months we have been thinking long and hard about feedback, from both our Editorial Board and Reviewers, that certain articles should no longer be considered as valid candidates for peer review or formal publication: that whilst the science they present may be “sound”, it is not of enough value to either the scientific record, the scientific community, or society, to justify being peer-reviewed or be considered for publication in a peer-reviewed journal. Our Editorial Board Members have asked us that we do our best to identify such submissions before they enter peer review.

This is the confusing part. To the uninitiated: one common model of scientific publishing involves scientists writing up a paper and submitting it to a journal for consideration. An editor, or editors, at the journal checks the paper and then commissions a group of independent experts on the same topic to review it. These experts are expected to provide comments to help the journal decide whether it should publish the paper and, if yes, how the paper can be improved. Note that they are usually not paid for their work or time.
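To make the sequence concrete, here’s a minimal sketch, in Python, of where PeerJ’s new pre-review filter would sit in this pipeline. All names here are hypothetical – this models the workflow described above, not any journal’s actual software or published rules.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    is_sound: bool             # methodologically sound ("sound science")
    fills_knowledge_gap: bool  # the new, subjective pre-review criterion

def handle(paper: Submission, pre_review_filter: bool) -> str:
    # The contested change: a triage step *before* reviewers are contacted.
    if pre_review_filter and not paper.fills_knowledge_gap:
        return "desk-rejected before peer review"
    # The traditional path: independent, unpaid experts assess soundness
    # and suggest improvements; the editor decides based on their comments.
    if paper.is_sound:
        return "published, possibly after revisions"
    return "rejected after peer review"
```

The point of the sketch: with the filter switched on, a paper can be turned away before any reviewer sees it, purely on the ‘knowledge gap’ judgement.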

Now, if PeerJ’s usual reviewers are unhappy with how many papers the journal’s asking them to review, how does it make sense to impose a new, arbitrary and honestly counterproductive sort of “value” on submissions instead of increasing the number of reviewers the journal works with?

I find the journal’s decision troublesome because some important details are missing – details that encompass borderline-unethical practices by some other journals, practices that have only undermined the integrity and usefulness of the scientific literature. For example, the “high impact factor” journal Nature has in the past asked its reviewers to prioritise sensational results over more mundane ones, overlooking the fact that sensational results are also likelier to be wrong. For another example, the concept of pre-registration has caught on only recently, simply because most journals used to refuse (and still do) negative results. That is, if a group of scientists set out to check if something was true – and it’d be amazing if it was true – and found that it was false instead, they’d have a tough time finding a journal willing to publish their paper.

And third, preprints have become an acceptable way of publishing research only in the last few years, and that too only in a few branches of science (especially physics). Most grant-giving and research institutions still prefer papers published in journals over those uploaded on preprint repositories – not to mention that a dominant research culture in many countries, including India, still favours arbitrarily defined “prestigious journals” over others when it comes to picking scientists for promotions, etc.

For these reasons, any decision by a journal that says sound science and methodological rigour alone won’t suffice to ‘admit’ a paper into its pages risks reinforcing – directly or indirectly – a bias in the scientific record that many scientists are working hard to move away from. For example, if PeerJ rejects a solid paper, so to speak, because it ‘only’ confirms a previous discovery or improves its accuracy, and doesn’t fill a knowledge gap per se – all in order to ease the burden on its reviewers – the scientific record still stands to lose an important submission. (It pays to review a journal’s decisions assuming that it is the only journal around – à la the categorical imperative – and that other journals don’t exist.)

So what are PeerJ‘s new criteria for rejecting papers?

As a result, we have been working with key stakeholders to develop new ways to evaluate submissions and are introducing new pre-review evaluation criteria, which we will initially apply to papers submitted to our new Medical Sections, followed soon after by all subject areas. These evaluation criteria will define clearer standards for the requirements of certain types of articles in those areas. For example, bioinformatic analyses of already published data sets will need to meet more stringent reporting and data analysis requirements, and will need to clearly demonstrate that they are addressing a meaningful knowledge gap in the literature.

We don’t know yet, it seems.

At some level, of course, this means that PeerJ is moving away from the concept of peer reviewing all sound science. To be absolutely clear, this does not mean we have an intention of becoming a highly-selective “glamour” journal publisher that publishes only the most novel breakthroughs. It also does not mean that we will stop publishing negative or null results. However, the feedback we have received is that the definition of what constitutes a valid candidate for publication needs to evolve.

To be honest, this is a laughable position. The journal admits in the first sentence of this paragraph that no matter where it goes from here, it will only recede from an ideal position. In the next sentence it denies – vehemently, considering the sentence was in bold in the article on PeerJ’s website – that its decision will transform it into a “glamour” journal publisher, like Nature, Science, NEJM, etc. have been; and in the third sentence, that it will stop publishing “negative or null results”. Now I’m even more curious about these heuristics, which must somehow require that submissions a) present “sound science” and b) “address a meaningful knowledge gap”, while c) not excluding negative/null results. Some papers will no doubt satisfy all three requirements – but it’s also possible to imagine many papers that won’t tick all three boxes yet still deserve to be published. To echo PeerJ itself, being a “glamour” journal is only one way to be bad.
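To see why the three criteria don’t neatly overlap, consider a toy check – the function and its parameters are my own hypothetical framing, not PeerJ’s published rules. A sound replication with a null result ticks (a) and (c) but fails (b), and so would be filtered out.

```python
def passes_triage(sound_science: bool, fills_gap: bool, negative_result: bool) -> bool:
    # (a) must be sound; (b) must address a "meaningful knowledge gap".
    # (c) negative/null results are, per PeerJ, not excluded by themselves,
    # so negative_result deliberately has no effect on the outcome.
    return sound_science and fills_gap

# A methodologically sound replication reporting a null result: it satisfies
# (a) and (c) but not (b), so the filter turns it away even though it is
# exactly the kind of paper the scientific record needs.
print(passes_triage(sound_science=True, fills_gap=False, negative_result=True))  # False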

We are being influenced by the researchers who peer review our research articles. We have heard from so many of our editorial board members and reviewers that they feel swamped by peer review requests and that they – and the system more widely – are close to breaking point. We most regularly hear this frustration when papers that they are reviewing do not, in their expert opinion, make a meaningful contribution to the record and are destined to be rejected; and should, in their view, have been filtered out much sooner in the process.

If you ask me (as an editor), the first sentence’s syntax suggests PeerJ is being forced by its reviewers, not influenced. More importantly, I haven’t actually seen these supposedly problematic papers that are “sound” but at the same time don’t make a meaningful contribution. An expert’s opinion that a paper on some topic should be rejected (even though, again, it presents “sound science”) could be rooted either in an “arrogant gatekeeper” attitude or in valid reasons, and PeerJ‘s rules should be good enough to differentiate between the two without simultaneously allowing ‘bad reviewers’ to over-“influence” the selection process.

More broadly, I’m a science journalist looking into science from the outside, seeing a colossal knowledge-producing machine situated on the same continuum as myself. If I receive too many submissions at The Wire Science, I don’t make presumptuous comments about what I think should and shouldn’t belong in the public domain. Instead, first, I pitch my boss on hiring one more person for my team and, second, I’m honest with each submission’s author about why I’m rejecting it: “I’m sorry, I’m short on time.”

Such submissions, in turn, impact the peer review of articles that do make a very significant contribution to the literature, research and society – the congestion of the peer review process can mean assigning editors and finding peer reviewers takes more time, potentially delaying important additions to the scientific record.

Gatekeeping by another name?

Furthermore, because it can be difficult and in some cases impossible to assign an Academic Editor and/or reviewers, authors can be faced with frustratingly long waits only to receive the bad news that their article has been rejected or, in the worst cases, that we were unable to peer review their paper. We believe that by listening to this feedback from our communities and removing some of the congestion from the peer review process, we will provide a better, more efficient, experience for everyone.

Ultimately, it comes down to the rules by which PeerJ‘s editorial board is going to decide which papers are ‘worth it’ and which aren’t. And admittedly, without knowing these rules, it’s hard to judge PeerJ – except on one count: “sound science” is already a good enough rule by which to determine the quality of a scientist’s work. To say it doesn’t suffice, for reasons unrelated to the science itself – and given the publishing apparatus’s dangerous tendency to gatekeep based on factors that have little to do with science – sounds precarious at the very least.