Violence shuts science? Err…

Dog bites man isn’t news. Man bites dog is news.

I’m reminded of this adage of the news industry – and Nambi Narayanan’s comment in August 2022 – when I read reports like ‘Explosion of violence in Ecuador shuts down science’ (Science, January 13, 2024). An “explosion of violence” in a country should reasonably be expected to affect all walks of life, so what’s the value in focusing a news report only on science and those who practise it? It’s not like we have news reports headlined “explosion of violence in Ecuador shuts down fruit shops”.

There are little tidbits in the article that might be useful to other researchers in Ecuador, but it’s unlikely they’re looking for them in Science, a foreign publication reporting on Ecuador for an audience that’s mostly outside the country.

The only bit I found really worth dwelling on was this one paragraph:

The Consortium for the Sustainable Development of the Andean Ecoregion (CONDESAN) … went further. It canceled all fieldwork this week and next, says Manuel Peralvo, a geographer and project coordinator. He adds that CONDESAN plans to design a stricter security protocol for future projects that involve fieldwork. “We’re going to have to plan our schedules much more specifically to know who is where and at what time,” and to avoid dangerous areas, he says.

… yet it’s just one paragraph, before the narrative moves on to how the country’s new security protocols will “deter non-Ecuadorian funding and scientists”. I’d have liked the report to drop everything else and focus on how research centres organise and administer fieldwork when field-workers are at risk of physical violence.

If anything, there may be no opportunity cost associated with such stories – except that the authors and publishers of such reports, in their current form, seem to believe science is somehow more special than other human endeavours.

What’s with superconductors and peer-review?

Throughout the time I’ve been a commissioning editor for science-related articles for news outlets, I’ve always sought and published articles about academic publishing. It’s the part of the scientific enterprise that seems to have been shaped the least by science’s democratic and introspective impulses. It’s also this long and tall wall erected around the field where scientists are labouring, offering ‘visitors’ guided tours for a hefty fee – or, in many cases, for ‘free’ if the scientists are willing to pay the hefty fees instead. Of late, I’ve spent more time thinking about peer-review, the practice of a journal distributing copies of a manuscript it’s considering for publication to independent experts on the same topic, for their technical inputs.

Most of the peer-review that happens today is voluntary: the scientists who do it aren’t paid. You must’ve come across several articles of late about whether peer-review works. It seems to me that it’s far from perfect. Studies (in July 1998, September 1998, and October 2008, e.g.) have shown that peer-reviewers often don’t catch critical problems in papers. In February 2023, a noted scientist said in a conversation that peer-reviewers go into a paper assuming that the data presented therein hasn’t been tampered with. This statement was eye-opening for me because I can’t think of a more important reason to include technical experts in the publishing process than to weed out problems that only technical experts can catch. Anyway, these flaws with the peer-review system aren’t generalisable, per se: many scientists have also told me that their papers benefited from peer-review, especially review that helped them improve their work.

I personally don’t know how ‘much’ peer-review is of the former variety and how much the latter, but it seems safe to state that when manuscripts are written in good faith by competent scientists and sent to the right journal, and the journal treats its peer-reviewers as well as its mandate well, peer-review works. Otherwise, it tends to not work. This heuristic, so to speak, allows for the fact that ‘prestige’ journals like Nature, Science, NEJM, and Cell – which have made a name for themselves by publishing papers that were milestones in their respective fields – have also published and then had to retract many papers that made exciting claims that were subsequently found to be untenable. These journals’ ‘prestige’ is closely related to their taste for sensational results.

All these thoughts were recently brought into focus by the ongoing hoopla, especially on Twitter, about the preprint papers from a South Korean research group claiming the discovery of a room-temperature superconductor in a material called LK-99 (this is the main paper). This work has caught the imagination of users on the platform unlike any other paper about room-temperature superconductivity in recent times. I believe this is because the preprints contain some charts and data that were absent in similar work in the past, and which strongly indicate the presence of a superconducting state at ambient temperature and pressure, and because the preprints include instructions on the material’s synthesis and composition, which means other scientists can produce the material and check the claims for themselves. Personally, I’m holding the stance advised by Prof. Vijay B. Shenoy of IISc:

Many research groups around the world will attempt to reproduce these results; there are already some rumours that independent scientists have done so. We will have to wait for the results of their studies.

Curiously, the preprints have caught the attention of a not insignificant number of techbros, who, alongside the typically naïve displays of their newfound expertise, have also called for the peer-review system to be abolished because it’s too slow and opaque.

Peer-review has a storied relationship with superconductivity. In the early 2000s, a slew of papers coauthored by the German physicist Jan Hendrik Schön, working at a Bell Labs facility in the US, were retracted after independent investigations found that he had fabricated data to support claims that certain organic molecules, called fullerenes, were superconducting. The Guardian wrote in September 2002:

The Schön affair has besmirched the peer review process in physics as never before. Why didn’t the peer review system catch the discrepancies in his work? A referee in a new field doesn’t want to “be the bad guy on the block,” says Dutch physicist Teun Klapwijk, so he generally gives the author the benefit of the doubt. But physicists did become irritated after a while, says Klapwijk, “that Schön’s flurry of papers continued without increased detail, and with the same sloppiness and inconsistencies.”

Some critics hold the journals responsible. The editors of Science and Nature have stoutly defended their review process in interviews with the London Times Higher Education Supplement. Karl Ziemelis, one of Nature’s physical science editors, complained of scapegoating, while Donald Kennedy, who edits Science, asserted that “There is little journals can do about detecting scientific misconduct.”

Maybe not, responds Nobel prize-winning physicist Philip Anderson of Princeton, but the way that Science and Nature compete for cutting-edge work “compromised the review process in this instance.” These two industry-leading publications “decide for themselves what is good science – or good-selling science,” says Anderson (who is also a former Bell Labs director), and their market consciousness “encourages people to push into print with shoddy results.” Such urgency would presumably lead to hasty review practices. Klapwijk, a superconductivity specialist, said that he had raised objections to a Schön paper sent to him for review, but that it was published anyway.

A similar claim by a group at IISc in 2019 generated a lot of excitement then, but today almost no one has any idea what happened to it. It seems reasonable to assume that the findings didn’t pan out in further testing and/or that the peer-review, following the manuscript being submitted to Nature, found problems in the group’s data. Last month, the South Korean group uploaded its papers to the arXiv preprint repository and has presumably submitted them to a journal: for a finding this momentous, that seems like the obvious next step. And the journal is presumably conducting peer-review at this point.

But in both instances (IISc 2019 and today), the claims were also accompanied by independent attempts to replicate the data as well as journalistic articles that assimilated the various public narratives and their social relevance into a cogent whole. One of the first signs that there was a problem with the IISc preprint was another preprint by Brian Skinner, a physicist then with the Massachusetts Institute of Technology, who found the noise in two graphs plotting the results of two distinct tests to be the same – which is impossible. Independent scientists also told The Wire (where I worked then) that they lacked some information required to make sense of the results as well as expressed concerns with the magnetic susceptibility data.
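Skinner’s check is simple enough to sketch: subtract a smooth trend from each curve and correlate what’s left over. Independent measurements should carry independent noise, so a residual correlation near 1 is a red flag. Here is a minimal illustration with made-up data (not the IISc measurements; the curves, noise level, and smoothing window are all arbitrary choices for the demonstration):

```python
import numpy as np

def noise_correlation(y1, y2, window=11):
    """Correlate the residuals of two curves after removing a
    moving-average trend from each. Edge samples are trimmed
    because the moving average is unreliable near the boundaries."""
    kernel = np.ones(window) / window
    r1 = (y1 - np.convolve(y1, kernel, mode="same"))[window:-window]
    r2 = (y2 - np.convolve(y2, kernel, mode="same"))[window:-window]
    return np.corrcoef(r1, r2)[0, 1]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
shared = rng.normal(0, 0.05, x.size)

# Two "independent" measurements that secretly share the same noise:
a = np.sin(2 * np.pi * x) + shared
b = np.cos(2 * np.pi * x) + shared

# Two genuinely independent measurements:
c = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
d = np.cos(2 * np.pi * x) + rng.normal(0, 0.05, x.size)

print(noise_correlation(a, b))  # close to 1: the curves share their noise
print(noise_correlation(c, d))  # close to 0: what honest data looks like
```

The real analysis was of course more careful than this, but the underlying intuition – that two distinct physical measurements cannot have identical random fluctuations – is exactly this simple.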

Peer-review may not be designed to check whether the experiments in question produced the data in question but whether the data in question supports the conclusions. For example, in March this year, Nature published a study led by Ranga P. Dias in which he and his team claimed that nitrogen-doped lutetium hydride becomes a room-temperature superconductor under a pressure of 1,000 atm, considerably lower than the pressure required to produce a superconducting state in other similar materials. After it was published, many independent scientists raised concerns about some data and analytical methods presented in the paper – as well as its failure to specify how the material could be synthesised. These problems, it seems, didn’t prevent the paper from clearing peer-review. Yet on August 3, Martin M. Bauer, a particle physicist at Durham University, published a tweet defending peer-review in the context of the South Korean work.

The problem seems to me to be the belief – held by many pro- as well as anti-peer-review actors – that peer-review is the ultimate check capable of filtering out all forms of bad science. It just can’t, and maybe that’s okay. Contrary to what Dr. Bauer has said, and as the example of Dr. Dias’s paper suggests, peer-reviewers won’t attempt to replicate the South Korean study. That task, thanks to the level of detail in the South Korean preprint and the fact that preprints are freely accessible, is already being undertaken by a panoply of labs around the world, both inside and outside universities. So abolishing peer-review won’t be as bad as Dr. Bauer makes it sound. As I said, peer-review is, or ought to be, one of many checks.

It’s also the sole check that a journal undertakes, and maybe that’s the bigger problem. That is, scientific journals may well be a pit of papers of unpredictable quality without peer-review in the picture – but that would only be because journal editors and scientists are separate functional groups, rather than having a group of scientists take direct charge of the publishing process (akin to how arXiv currently operates). In the existing publishing model, peer-review is as important as it is because scientists aren’t involved in any other part of the publishing pipeline.

An alternative model comes to mind, one that closes the gaps of “isn’t designed to check whether the experiments in question produced the data in question” and “the sole check that a journal undertakes”: scientists conduct their experiments, write them up in a manuscript and upload them to a preprint repository; other scientists attempt to replicate the results; if the latter are successful, both groups update the preprint paper and submit that to a journal (with the lion’s share of the credit going to the former group); journal editors have this document peer-reviewed (to check whether the data presented supports the conclusions), edited, and polished[1]; and finally publish it.

Obviously this would require a significant reorganisation of incentives: for one, researchers will need to be able to apportion time and resources to replicate others’ experiments for less than half of the credit. A second problem is that this is a (probably non-novel) reimagination of the publishing workflow that doesn’t consider the business model – the other major problem in academic publishing. Third: I have in my mind only condensed-matter physics; I don’t know much about the challenges to replicating results in, say, genomics, computer science or astrophysics. My point overall is that if journals look like a car crash without peer-review, it’s only because the crashes were a matter of time and that peer-review was doing the bare minimum to keep them from happening. (And Twitter was always a car crash anyway.)


[1] I hope readers won’t underestimate the importance of the editorial and language assistance that a journal can provide. Last month, researchers in Australia, Germany, Nepal, Spain, the UK, and the US had a paper published in which they reported, based on surveys, that “non-native English speakers, especially early in their careers, spend more effort than native English speakers in conducting scientific activities, from reading and writing papers and preparing presentations in English, to disseminating research in multiple languages. Language barriers can also cause them not to attend, or give oral presentations at, international conferences conducted in English.”

The language in the South Korean group’s preprints indicates that its authors’ first language isn’t English. According to Springer, which later became Springer Nature, the publisher of the Nature journals, “Editorial reasons for rejection include … poor language quality such that it cannot be understood by readers”. An undated article on Elsevier’s ‘Author Services’ page has this line: “For Marco [Casola, managing editor of Water Research], poor language can indicate further issues with a paper. ‘Language errors can sometimes ring a bell as a link to quality. If a manuscript is written in poor English the science behind it may not be amazing. This isn’t always the case, but it can be an indication.'”

But instead of palming the responsibility off to scientists, journals have an opportunity to distinguish themselves by helping researchers write better papers.

Yes, scientific journals should publish political rebuttals

(The headline is partly click-bait, as I admit below, because some context is required.) From ‘Should scientific journals publish political debunkings?’, Science Fictions by Stuart Ritchie, August 27, 2022:

Earlier this week, the “news and analysis” section of the journal Science … published … a point-by-point rebuttal of a monologue a few days earlier from the Fox News show Tucker Carlson Tonight, where the eponymous host excoriated Dr. Anthony Fauci, of “seen everywhere during the pandemic” fame. … The Science piece noted that “[a]lmost everything Tucker Carlson said… was misleading or false”. That’s completely correct – so why did I have misgivings about the Science piece? It’s the kind of thing you see all the time on dedicated political fact-checking sites – but I’d never before seen it in a scientific journal. … I feel very conflicted on whether this is a sensible idea. And, instead of actually taking some time to think it through and work out a solid position, in true hand-wringing style I’m going to write down both sides of the argument in the form of a dialogue – with myself.

There’s one particular exchange between Ritchie and himself in his piece that threw me off the entire point of the article:

[Ritchie-in-favour-of-Science-doing-this]: Just a second. This wasn’t published in the peer-reviewed section of Science! This isn’t a refereed paper – it’s in the “News and Analysis” section. Wouldn’t you expect an “Analysis” article to, like, analyse things? Including statements made on Fox News?

[Ritchie-opposed-to-Science-doing-this]: To be honest, sometimes I wonder why scientific journals have a “News and Analysis” section at all – or, I wonder if it’s healthy in the long run. In any case, clearly there’s a big “halo” effect from the peer-reviewed part: people take the News and Analysis more seriously because it’s attached to the very esteemed journal. People are sharing it on social media because it’s “the journal Science debunking Tucker Carlson” – way fewer people would care if it was just published on some random news site. I don’t think you can have it both ways by saying it’s actually nothing to do with Science the peer-reviewed journal.

[Ritchie-in-favour]: I was just saying they were separate, rather than entirely unrelated, but fair enough.

Excuse me but not at all fair enough! The essential problem is the tie-ins between what a journal does, why it does them and what impressions they uphold in society.

First, Science‘s ‘news and analysis’ section isn’t distinguished by its association with the peer-reviewed portion of the journal but by its own reportage and analyses, intended for scientists and non-scientists alike. (Mea culpa: the headline of this post answers the question in the headline of Ritchie’s post, while being clear in the body that there’s a clear distinction between the journal and its ‘news and analysis’ section.) A very recent example was Charles Piller’s investigative report that uncovered evidence of image manipulation in a paper that had an outsized influence on the direction of Alzheimer’s research since it was published in 2006. When Ritchie writes that the peer-reviewed journal and the ‘news and analysis’ section are separate, he’s right – but when he suggests that the former’s prestige is responsible for the latter’s popularity, he couldn’t be more wrong.

Ritchie is a scientist and his position may reflect that of many other scientists. I recommend that he and others who agree with him consider the section from the PoV of a science journalist; they will immediately see, as we do, that it has broken many agenda-setting stories and published several accomplished journalists and scientists (Derek Lowe’s column being a good example). Another impression that could change with the change of perspective is the relevance of peer-review itself, and the deceptively deleterious nature of an associated concept he repeatedly invokes, which could well be the pseudo-problem at the heart of Ritchie’s dilemma: prestige. To quote from a blog post, published in February this year, in which University of Regensburg neurogeneticist Björn Brembs analysed the novelty of results published by so-called ‘prestigious’ journals:

Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor for any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and to probably only a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, being tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.

In sum:

Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.

Take the ‘prestige’ away and one part of Ritchie’s dilemma – the journal Science‘s claim to being an “impartial authority” that stands at risk of being diluted by its ‘news and analysis’ section’s engagement with “grubby political debates” – evaporates. Journals, especially glamour journals like Science, haven’t historically been authorities on ‘good’ science, such as it is, but have served to obfuscate the fact that only scientists can be. But more broadly, the ‘news and analysis’ business has its own expensive economics, and publishers of scientific journals that can afford to set up such platforms should consider doing so, in my view, with a degree and type of separation between these businesses according to their mileage. The simple reasons are:

1. Reject the false balance: there’s no sensible way publishing a pro-democracy article (calling out cynical and potentially life-threatening untruths) could affect the journal’s ‘prestige’, however it may be defined. But if it does, would the journal be wary of a pro-Republican (and effectively anti-democratic) scientist refusing to publish on its pages? If so, why? The two-part answer is straightforward: because many other scientists as well as journal editors are still concerned with the titles that publish papers instead of the papers themselves, and because of the fundamental incentives of academic publishing – to publish the work of prestigious scientists and sensational work, as opposed to good work per se. In this sense, the knock-back is entirely acceptable in the hopes that it could dismantle the fixation on which journal publishes which paper.

2. Scientific journals already have access to expertise in various fields of study, as well as an incentive to participate in the creation of a sensible culture of science appreciation and criticism.

Featured image: Tucker Carlson at an event in West Palm Beach, Florida, December 19, 2020. Credit: Gage Skidmore/Wikimedia Commons, CC BY-SA 2.0.

Prestige journals and their prestigious mistakes

On June 24, the journal Nature Scientific Reports published a paper claiming that Earth’s surface was warming by more than what non-anthropogenic sources could account for because it was simply moving closer to the Sun. I.e. global warming was the result of changes in the Earth-Sun distance. Excerpt:

The oscillations of the baseline of solar magnetic field are likely to be caused by the solar inertial motion about the barycentre of the solar system caused by large planets. This, in turn, is closely linked to an increase of solar irradiance caused by the positions of the Sun either closer to aphelion and autumn equinox or perihelion and spring equinox. Therefore, the oscillations of the baseline define the global trend of solar magnetic field and solar irradiance over a period of about 2100 years. In the current millennium since Maunder minimum we have the increase of the baseline magnetic field and solar irradiance for another 580 years. This increase leads to the terrestrial temperature increase as noted by Akasofu [26] during the past two hundred years.

The New Scientist reported on July 16 that Nature has since kickstarted an “established process” to investigate how a paper with “egregious errors” cleared peer-review and was published. One of the scientists it quotes says the journal should retract the paper if it wants to “retain any credibility”, but the fact that it cleared peer-review in the first place is to me the most notable part of this story. It is a reminder that peer-review has a failure rate, and that ‘prestige’ titles like Nature can publish crap (for instance, look at the retraction index chart here).

That said, I am a little concerned because Scientific Reports is an open-access title. I hope it didn’t simply publish the paper in exchange for a fee like its less credible counterparts.

Almost as if it timed it to the day, the journal Science – Nature‘s big rival across the ocean – published a paper that did make legitimate claims but which brooks disagreement on a different tack. It describes a way to keep sea levels from rising due to the melting of Antarctic ice. Excerpt:

… we show that the [West Antarctic Ice Sheet] may be stabilized through mass deposition in coastal regions around Pine Island and Thwaites glaciers. In our numerical simulations, a minimum of 7400 [billion tonnes] of additional snowfall stabilizes the flow if applied over a short period of 10 years onto the region (~2 mm/year sea level equivalent). Mass deposition at a lower rate increases the intervention time and the required total amount of snow.
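The excerpt’s figures are easy to sanity-check. Roughly 360 billion tonnes of water corresponds to about 1 mm of global mean sea level – a standard conversion, not a number from the paper – so the quoted minimum deposition works out as:

```python
GT_PER_MM_SEA_LEVEL = 360.0   # ~360 Gt of water per mm of global mean sea level
total_snow_gt = 7400.0        # minimum deposition quoted in the paper
years = 10                    # deposition period quoted in the paper

rate_mm_per_year = (total_snow_gt / years) / GT_PER_MM_SEA_LEVEL
print(round(rate_mm_per_year, 2))  # ~2.06, consistent with the quoted "~2 mm/year"
```

So the paper’s “~2 mm/year sea level equivalent” is internally consistent with its 7,400 Gt over 10 years.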

While I’m all for curiosity-driven research, climate change is rapidly becoming a climate emergency in many parts of the world, not least where the poorer live, without a corresponding set of protocols, resources and schemes to deal with it. In this situation, papers like this – and journals like Science that publish them – only make solutions like the one proposed above seem credible when in fact they should be trashed for implying that it’s okay to keep emitting more carbon into the atmosphere because we can apply a band-aid of snow over the ice sheet and postpone the consequences. Of course, the paper’s authors acknowledge the following:

Operations such as the one discussed pose the risk of moral hazard. We therefore stress that these projects are not an alternative to strengthening the efforts of climate mitigation. The ambitious reduction of greenhouse gas emissions is and will be the main lever to mitigate the impacts of sea level rise. The simulations of the current study do not consider a warming ocean and atmosphere as can be expected from the increase in anthropogenic CO2. The computed mass deposition scenarios are therefore valid only under a simultaneous drastic reduction of global CO2 emissions.

… but these words belong in the last few lines of the paper (before the ‘materials and methods’ section), as if they were a token addition to what reads, overall, like a dispassionate analysis. This is also borne out by the study not having modelled the deposition idea together with falling CO2 emissions.

I’m a big fan of curiosity-driven science as a matter of principle. While it seemed hard at first to reconcile my emotions on the Science paper with that position, I realised that I believe both curiosity- and application-driven research should still be conscientious. Setting aside the endless questions about how we ought to spend the taxpayers’ dollars – if only because interfering with research on the basis of public interest is a terrible idea – it is my personal, non-prescriptive opinion that research should still endeavour to be non-destructive (at least to the best of the researchers’ knowledge) when advancing new solutions to known problems.

If that is not possible, then researchers should acknowledge that their work could have real consequences and, setting aside all pretence of being quantitative, objective, etc., clarify the moral qualities of their work. This the authors of the Science paper have done, but there are no brownie points for low-hanging fruit. Or maybe there should be, considering there has been other work where the authors of a paper have written that they “make no judgment on the desirability” of their proposal (also about climate geo-engineering).

Most of all, let us not forget that being Nature or Science doesn’t automatically make what they put out better for having been published by them.