The journal’s part in a retraction

This is another Ranga Dias and superconductivity post, so please avert your gaze if you’re tired of it already.

According to a September 27 report in Science, the journal Nature plans to retract the latest Dias et al. paper, published in March 2023, claiming to have found evidence of near-room-temperature superconductivity in an unusual material, nitrogen-doped lutetium hydride (N-LuH). The heart of the matter seems to be, per Science, a plot showing a drop in N-LuH’s electrical resistance below a particular temperature – a telltale sign of superconductivity.

Dias (University of Rochester) and Ashkan Salamat (University of Nevada, Las Vegas), the other lead investigator in the study, measured the resistance in a noisy setting and then subtracted the noise – or what they claimed was the noise. The problem, apparently, is that the subtracted plot in the published paper differs from the plot put together using the raw data Dias and Salamat submitted to Nature; the latter doesn’t show the resistance dropping to zero. That is, together with the noise, the paper’s authors subtracted some other information as well, and whatever was left behind suggested N-LuH had become superconducting.
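
To make the general idea of background subtraction concrete, here’s a minimal sketch with invented numbers – none of this is the authors’ data or method. The point is that a defensible correction subtracts a background measured in an independent run, not a curve chosen after the fact.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
temperature = np.linspace(50, 300, 500)  # kelvin (made-up grid)

# Hypothetical 'true' sample resistance: zero below 250 K (superconducting
# state), rising linearly above it.
true_resistance = np.where(temperature < 250, 0.0, 1e-3 * (temperature - 250))

def measure_background():
    # Instrumental background: a slow drift plus random noise.
    return 5e-4 * np.sin(temperature / 20) + rng.normal(0, 5e-5, temperature.size)

# What the instrument records: sample resistance plus background.
measured = true_resistance + measure_background()

# A defensible correction subtracts a background from a separate, independent
# run (e.g. with no sample in the circuit) - not a curve picked so that the
# corrected signal conveniently drops to zero.
corrected = measured - measure_background()
```

Because the subtracted curve here comes from an independent measurement, the subtraction cannot smuggle in the conclusion – which is precisely the property the disputed plot is alleged to lack.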

A little more than a month ago, Physical Review Letters officially retracted another paper from a study led by Dias and Salamat, which it had published last year – notably after a similar dispute (and on both occasions Dias was opposed to having the papers retracted). But the narrative was more dramatic then, with Physical Review Letters accusing Salamat of obstructing its investigation by supplying some other data as the raw data for its independent probe.

Then again, even before Science’s report, other scientists in the field had said that they weren’t bothering to replicate the data in the N-LuH paper because they had already wasted time trying, in vain, to replicate Dias’s previous work.

Now, in the last year alone, three of Dias’s superconductivity-related papers have been retracted or slated for retraction. But as on previous occasions, the new report also raises questions about Nature’s pre-publication peer-review process. To quote Science:

In response to [James Hamlin and Brad Ramshaw’s critique of the subtracted plot], Nature initiated a post-publication review process, soliciting feedback from four independent experts. In documents obtained by Science, all four referees expressed strong concerns about the credibility of the data. ‘I fail to understand why the authors … are not willing or able to provide clear and timely responses,’ wrote one of the anonymous referees. ‘Without such responses the credibility of the published results are in question.’ A second referee went further, writing: ‘I strongly recommend that the article by R. Dias and A. Salamat be retracted.’

What was the difference between this review process and the one that happened before the paper was published, in which Nature’s editors would have written to independent experts asking them for their opinions on the submitted manuscript? Why didn’t they catch the problem with the electrical resistance plot?

One possible explanation is the sampling problem: when I write an article as a science journalist, the views expressed in it will be a function of the scientists I have sampled from within the scientific community. To capture the consensus view, I need to sample a sufficiently large number of scientists (or a small number of representative ones, such as those I know have their finger on the pulse of the community). Otherwise, there’s a nontrivial risk of some view in my article being over- or under-represented.
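
To see how sample size drives this risk, here’s a toy simulation with invented numbers (no relation to any real survey): if 70% of a community holds view A, an article built on three conversations can easily make A look absent or unanimous.

```python
import random

random.seed(42)
# Made-up community: 70% of 1,000 scientists hold view A, 30% view B.
community = ["A"] * 700 + ["B"] * 300

def share_of_A(n_sampled: int, trials: int = 10_000) -> list[float]:
    """Fraction of sampled scientists holding view A, across many trials."""
    return [random.sample(community, n_sampled).count("A") / n_sampled
            for _ in range(trials)]

for n in (3, 10, 50):
    shares = share_of_A(n)
    print(f"sample size {n}: share of A ranged from "
          f"{min(shares):.2f} to {max(shares):.2f}")
# At n=3 the sampled share swings between 0.00 and 1.00; the spread narrows
# as the sample grows, converging on the true 0.70.
```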

Similarly, during its pre-publication peer-review process, did Nature not sample the right set of reviewers? I’m unable to think of other explanations because the sampling problem accounts for many alternatives. Hamlin and Ramshaw also didn’t necessarily have access to more data than Dias et al. submitted to Nature: their criticism emerged as early as May 2023 and was based on the published paper. Nature also hasn’t disclosed the pre-publication reviewers’ reports, nor explained whether there were any differences between its sampling process in the pre- and post-publication phases.

So, short of a good explanation: just as we have a scientist who has seemingly been crying wolf about room-temperature superconductivity, we also have a journal whose peer-review process produced, on two separate occasions, two different results. Unless it can clarify why, Nature is also to blame for the paper’s fate.

Is Dias bringing the bus back?

So yesterday, Physical Review Letters formally retracted that paper about manganese sulphide, in the limelight for having been coauthored by Ranga P. Dias. The retraction notice states: “Of the authors on the original paper, R. Dias stands by the data in Fig. 1(b) and does not agree to retract the Letter.” Figure 1(b) is reproduced below.

The problem with the second plot is that its curves reportedly resemble some in Dias’s doctoral thesis from 2013, in which he had examined the same properties of germanium tetraselenide, a different material. Two curves can legitimately have the same overall shape; it’s a problem when they also reproduce the little variations that result from the specific material synthesised for a particular experiment and the measurements made on that day.
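
This kind of resemblance can be quantified. Below is a minimal sketch of the general idea – not the analysis anyone actually ran in this case: strip each curve of its smooth trend and correlate the leftover point-to-point ‘noise’. Independent measurements should share the trend, not the noise.

```python
import numpy as np

def residual_correlation(y1: np.ndarray, y2: np.ndarray, window: int = 15) -> float:
    """Correlate two curves' point-to-point 'noise' after removing each
    curve's smooth trend with a centred moving average."""
    kernel = np.ones(window) / window
    half = window // 2
    def residual(y):
        smooth = np.convolve(y, kernel, mode="valid")  # avoids edge artefacts
        return y[half : half + smooth.size] - smooth   # align to window centres
    return float(np.corrcoef(residual(y1), residual(y2))[0, 1])

# Made-up demonstration data.
x = np.linspace(0, 10, 400)
trend = np.exp(-x / 3)
rng = np.random.default_rng(0)
a = trend + rng.normal(0, 0.01, x.size)   # one measurement
b = trend + rng.normal(0, 0.01, x.size)   # an independent re-measurement
c = 1.05 * a + 0.002                      # a rescaled copy of a

print(residual_correlation(a, b))  # near 0: same shape, independent noise
print(residual_correlation(a, c))  # near 1: the 'noise' itself matches
```

Two honest measurements of the same quantity will agree on the shape; only a copied (or rescaled) curve also agrees on the noise.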

That Dias is the only person objecting to the retraction is interesting because it means one of his coauthors, Ashkan Salamat, agreed to it. Salamat heads a lab at the University of Nevada, Las Vegas, that’s been implicated in the present controversy. Earlier this year, well after Physical Review Letters said it was looking into the allegations against the manganese sulphide paper, Scientific American reported:

Salamat has since responded, suggesting that even though the two data sets may appear similar, the resemblance is not indicative of copied data. “We’ve shown that if you just overlay other people’s data qualitatively, a lot of things look the same,” he says. “This is a very unfair approach.”

Physical Review Letters also accused Salamat of attempting to obstruct its investigation after it found that the data he had submitted as the raw data of the group’s experiments wasn’t in fact raw data. Since then, Salamat may well have changed his mind to avoid more hassle or in deference to the majority opinion, but I’m still curious whether he changed it because he no longer thought the criticisms unfair.

Anyway, Dias is in the news because he has made claims in the past about having found room-temperature superconductors. A previous paper was retracted in September 2022, two years after it was published, after independent researchers found problems in the data. He had another paper published in March this year, reporting room-temperature but high-pressure superconductivity in nitrogen-doped lutetium hydride. This paper courted controversy because Dias et al. refused to share samples of the material so independent scientists could double-check the team’s claim.

Following the retraction, The New York Times asked Dias what he had to say, and his reply seems to bring back the bus under which principal investigators (PIs) have liked to throw their junior colleagues at signs of trouble in the past:

[He] has maintained that the paper accurately portrays the research findings. However, he said on Tuesday that his collaborators, working in the laboratory of Ashkan Salamat, a professor of physics at the University of Nevada, Las Vegas, introduced errors when producing charts of the data using Adobe Illustrator, software not typically used to make scientific charts.

“Any differences in the figure resulting from the use of Adobe Illustrator software were unintentional and not part of any effort to mislead or obstruct the peer review process,” Dr. Dias said in response to questions about the retraction. He acknowledged that the resistance measurements in question were performed at his laboratory in Rochester.

He’s saying that his lab made the measurements at the University of Rochester and sent the data to Salamat’s lab at the University of Nevada, where someone else (or elses) introduced errors using Adobe Illustrator – presumably while visualising the data, but even then Illustrator is a peculiar choice – and these errors caused the resulting plot to resemble one in Dias’s doctoral thesis. Hmm.

The New York Times also reported that, after refusing in the past to investigate Dias’s work following allegations of misconduct, the University of Rochester has now launched an investigation “by outside experts”. The university doesn’t plan to release the experts’ report, however.

But even if the “outside experts” conclude that Dias didn’t really err and that, honestly, Salamat’s lab in Las Vegas was able to introduce very specific kinds of errors into what became figure 1(b), Dias must be held accountable as one of the PIs of the study – a role whose responsibilities arguably include not letting tough situations devolve into finger-pointing.

What’s with superconductors and peer-review?

Throughout the time I’ve been a commissioning editor for science-related articles for news outlets, I’ve always sought and published articles about academic publishing. It’s the part of the scientific enterprise that seems to have been shaped the least by science’s democratic and introspective impulses. It’s also this long and tall wall erected around the field where scientists are labouring, offering ‘visitors’ guided tours for a hefty fee – or, in many cases, for ‘free’ if the scientists are willing to pay the hefty fees instead. Of late, I’ve spent more time thinking about peer-review, the practice of a journal distributing copies of a manuscript it’s considering for publication to independent experts on the same topic, for their technical inputs.

Most of the peer-review that happens today is voluntary: the scientists who do it aren’t paid. You must’ve come across several articles of late about whether peer-review works. It seems to me that it’s far from perfect. Studies (in July 1998, September 1998, and October 2008, e.g.) have shown that peer-reviewers often don’t catch critical problems in papers. In February 2023, a noted scientist said in a conversation that peer-reviewers go into a paper assuming that the data presented therein hasn’t been tampered with. This statement was eye-opening for me because I can’t think of a more important reason to include technical experts in the publishing process than to weed out problems that only technical experts can catch. Anyway, these flaws with the peer-review system aren’t generalisable, per se: many scientists have also told me that their papers benefited from peer-review, especially review that helped them improve their work.

I personally don’t know how ‘much’ peer-review is of the former variety and how much the latter, but it seems safe to say that when manuscripts are written in good faith by competent scientists and sent to the right journal, and the journal treats both its peer-reviewers and its mandate well, peer-review works. Otherwise, it tends not to work. This heuristic, so to speak, allows for the fact that ‘prestige’ journals like Nature, Science, NEJM, and Cell – which have made a name for themselves by publishing papers that were milestones in their respective fields – have also published, and then had to retract, many papers that made exciting claims subsequently found to be untenable. These journals’ ‘prestige’ is closely related to their taste for sensational results.

All these thoughts were recently brought into focus by the ongoing hoopla, especially on Twitter, about the preprint papers from a South Korean research group claiming the discovery of a room-temperature superconductor in a material called LK-99 (this is the main paper). This work has caught the imagination of users on the platform unlike any other paper about room-temperature superconductivity in recent times. I believe this is because the preprints contain some charts and data that were absent in similar work in the past, and which strongly indicate the presence of a superconducting state at ambient temperature and pressure, and because the preprints include instructions on the material’s synthesis and composition, which means other scientists can produce the material and check the claim for themselves. Personally, I’m holding the stance advised by Prof. Vijay B. Shenoy of IISc:

Many research groups around the world will attempt to reproduce these results; there are already some rumours that independent scientists have done so. We will have to wait for the results of their studies.

Curiously, the preprints have caught the attention of a not insignificant number of techbros, who, alongside the typically naïve displays of their newfound expertise, have also called for the peer-review system to be abolished because it’s too slow and opaque.

Peer-review has a storied relationship with superconductivity. In the early 2000s, a slew of papers coauthored by the German physicist Jan Hendrik Schön, working at a Bell Labs facility in the US, were retracted after independent investigations found that he had fabricated data to support claims that certain carbon-based molecules, called fullerenes, were superconducting. The Guardian wrote in September 2002:

The Schön affair has besmirched the peer review process in physics as never before. Why didn’t the peer review system catch the discrepancies in his work? A referee in a new field doesn’t want to “be the bad guy on the block,” says Dutch physicist Teun Klapwijk, so he generally gives the author the benefit of the doubt. But physicists did become irritated after a while, says Klapwijk, “that Schön’s flurry of papers continued without increased detail, and with the same sloppiness and inconsistencies.”

Some critics hold the journals responsible. The editors of Science and Nature have stoutly defended their review process in interviews with the London Times Higher Education Supplement. Karl Ziemelis, one of Nature’s physical science editors, complained of scapegoating, while Donald Kennedy, who edits Science, asserted that “There is little journals can do about detecting scientific misconduct.”

Maybe not, responds Nobel prize-winning physicist Philip Anderson of Princeton, but the way that Science and Nature compete for cutting-edge work “compromised the review process in this instance.” These two industry-leading publications “decide for themselves what is good science – or good-selling science,” says Anderson (who is also a former Bell Labs director), and their market consciousness “encourages people to push into print with shoddy results.” Such urgency would presumably lead to hasty review practices. Klapwijk, a superconductivity specialist, said that he had raised objections to a Schön paper sent to him for review, but that it was published anyway.

A similar claim by a group at IISc in 2019 generated a lot of excitement at the time, but today almost no one has any idea what happened to it. It seems reasonable to assume that the findings didn’t pan out in further testing and/or that peer-review, after the manuscript was submitted to Nature, found problems in the group’s data. Last month, the South Korean group uploaded its papers to the arXiv preprint repository and has presumably submitted them to a journal: for a finding this momentous, that seems like the obvious next step. And the journal is presumably conducting peer-review at this point.

But in both instances (IISc 2019 and today), the claims were also accompanied by independent attempts to replicate the data as well as by journalistic articles that assimilated the various public narratives and their social relevance into a cogent whole. One of the first signs that there was a problem with the IISc preprint was another preprint by Brian Skinner, a physicist then with the Massachusetts Institute of Technology, who found the noise in two graphs plotting the results of two distinct tests to be the same – which is impossible. Independent scientists also told The Wire (where I worked then) that they lacked some information required to make sense of the results, and expressed concerns about the magnetic susceptibility data.

Peer-review may not be designed to check whether the experiments in question produced the data in question, but whether the data in question supports the conclusions. For example, in March this year, Nature published a study led by Ranga P. Dias in which he and his team claimed that nitrogen-doped lutetium hydride becomes a room-temperature superconductor under a pressure of 10 kbar (roughly 10,000 atm), considerably lower than the pressure required to produce a superconducting state in other similar materials. After it was published, many independent scientists raised concerns about some data and analytical methods presented in the paper – as well as its failure to specify how the material could be synthesised. These problems, it seems, didn’t prevent the paper from clearing peer-review. Yet on August 3, Martin M. Bauer, a particle physicist at Durham University, published a tweet defending peer-review in the context of the South Korean work.

The problem seems to me to be the belief – held by many pro- as well as anti-peer-review actors – that peer-review is the ultimate check capable of filtering out all forms of bad science. It just can’t, and maybe that’s okay. Contrary to what Dr. Bauer has said, and as the example of Dr. Dias’s paper suggests, peer-reviewers won’t attempt to replicate the South Korean study. That task, thanks to the level of detail in the South Korean preprint and the fact that preprints are freely accessible, is already being undertaken by a panoply of labs around the world, both inside and outside universities. So abolishing peer-review won’t be as bad as Dr. Bauer makes it sound. As I said, peer-review is, or ought to be, one of many checks.

It’s also the sole check that a journal undertakes, and maybe that’s the bigger problem. That is, scientific journals may well be a pit of papers of unpredictable quality without peer-review in the picture – but that would only be because journal editors and scientists are separate functional groups, rather than having a group of scientists take direct charge of the publishing process (akin to how arXiv currently operates). In the existing publishing model, peer-review is as important as it is because scientists aren’t involved in any other part of the publishing pipeline.

An alternative model comes to mind, one that closes the gaps of “isn’t designed to check whether the experiments in question produced the data in question” and “the sole check that a journal undertakes”: scientists conduct their experiments, write them up in a manuscript and upload them to a preprint repository; other scientists attempt to replicate the results; if the latter are successful, both groups update the preprint paper and submit that to a journal (with the lion’s share of the credit going to the former group); journal editors have this document peer-reviewed (to check whether the data presented supports the conclusions), edited, and polished[1]; and finally publish it.

Obviously this would require a significant reorganisation of incentives: for one, researchers would need to be able to apportion time and resources to replicating others’ experiments for less than half of the credit. A second problem is that this is a (probably non-novel) reimagining of the publishing workflow that doesn’t consider the business model – the other major problem in academic publishing. Third: I have in mind only condensed-matter physics; I don’t know much about the challenges to replicating results in, say, genomics, computer science or astrophysics. My point overall is that if journals look like a car crash without peer-review, it’s only because the crashes were a matter of time and peer-review was doing the bare minimum to keep them from happening. (And Twitter was always a car crash anyway.)


[1] I hope readers won’t underestimate the importance of the editorial and language assistance a journal can provide. Last month, researchers in Australia, Germany, Nepal, Spain, the UK, and the US had a paper published in which they reported, based on surveys, that “non-native English speakers, especially early in their careers, spend more effort than native English speakers in conducting scientific activities, from reading and writing papers and preparing presentations in English, to disseminating research in multiple languages. Language barriers can also cause them not to attend, or give oral presentations at, international conferences conducted in English.”

The language in the South Korean group’s preprints indicates that its authors’ first language isn’t English. According to Springer, which later became Springer Nature, the publisher of the Nature journals, “Editorial reasons for rejection include … poor language quality such that it cannot be understood by readers”. An undated article on Elsevier’s ‘Author Services’ page has this line: “For Marco [Casola, managing editor of Water Research], poor language can indicate further issues with a paper. ‘Language errors can sometimes ring a bell as a link to quality. If a manuscript is written in poor English the science behind it may not be amazing. This isn’t always the case, but it can be an indication.'”

But instead of palming the responsibility off to scientists, journals have an opportunity to distinguish themselves by helping researchers write better papers.