The journal’s part in a retraction

This is another Ranga Dias and superconductivity post, so please avert your gaze if you’re tired of it already.

According to a September 27 report in Science, the journal Nature plans to retract the latest Dias et al. paper, published in March 2023, claiming to have found evidence of near-room-temperature superconductivity in an unusual material, nitrogen-doped lutetium hydride (N-LuH). The heart of the matter seems to be, per Science, a plot showing a drop in N-LuH’s electrical resistance below a particular temperature – a hallmark of superconductivity.

Dias (University of Rochester) and Ashkan Salamat (University of Nevada, Las Vegas), the other lead investigator in the study, measured the resistance in a noisy setting and then subtracted the noise – or what they claimed to be the noise. The problem, apparently, is that the subtracted plot in the published paper differs from the plot assembled from the raw data Dias and Salamat submitted to Nature; the latter doesn’t show the resistance dropping to zero. This means that, along with the noise, the authors subtracted some other information as well – and whatever was left behind suggested N-LuH had become superconducting.
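
To illustrate what’s at stake with such subtractions – purely a toy example, not a reconstruction of the authors’ actual analysis – here is a sketch of how subtracting a mis-estimated background can manufacture an apparent drop to zero resistance where none exists:

```python
import numpy as np

# Toy illustration: a perfectly ordinary (non-superconducting) metal whose
# resistance rises gently with temperature, measured with some noise.
temperature = np.linspace(100, 300, 201)          # kelvin
true_resistance = 0.5 + 0.002 * temperature       # no superconductivity here
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.01, temperature.size)
measured = true_resistance + noise

# A "background" that, below 250 K, happens to equal the real resistance:
# subtracting it removes genuine signal along with the noise, leaving a
# spurious zero-resistance state below an apparent critical temperature.
background = np.where(temperature < 250, true_resistance, 0.3)
cleaned = measured - background

below = cleaned[temperature < 250].mean()   # ~0: looks superconducting
above = cleaned[temperature >= 250].mean()  # clearly nonzero
print(round(below, 2), round(above, 2))
```

The point is that a “cleaned” plot is only as trustworthy as the background model behind it, which is why referees wanted the raw data.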

A little more than a month ago, Physical Review Letters officially retracted another paper from a study led by Dias and Salamat, which it had published last year – and notably after a similar dispute (on both occasions, Dias opposed the retractions). But the narrative was more dramatic then, with Physical Review Letters accusing Salamat of obstructing its investigation by supplying some other data in place of the raw data for its independent probe.

Then again, even before Science’s report, other scientists in the field had said they weren’t bothering to replicate the data in the N-LuH paper because they had already wasted time trying, in vain, to replicate Dias’s previous work.

Now, in the last year alone, three of Dias’s superconductivity-related papers have been retracted. But as on previous occasions, the new report also raises questions about Nature’s pre-publication peer-review process. To quote Science:

In response to [James Hamlin and Brad Ramshaw’s critique of the subtracted plot], Nature initiated a post-publication review process, soliciting feedback from four independent experts. In documents obtained by Science, all four referees expressed strong concerns about the credibility of the data. ‘I fail to understand why the authors … are not willing or able to provide clear and timely responses,’ wrote one of the anonymous referees. ‘Without such responses the credibility of the published results are in question.’ A second referee went further, writing: ‘I strongly recommend that the article by R. Dias and A. Salamat be retracted.’

What was the difference between this review process and the one that preceded publication, in which Nature’s editors would have asked independent experts for their opinions on the submitted manuscript? Why didn’t they catch the problem with the electrical resistance plot?

One possible explanation is the sampling problem. When I write an article as a science journalist, the views it expresses are a function of the scientists I have sampled from within the scientific community. To obtain the consensus view, I need to sample a sufficiently large number of scientists (or a small number of representative ones, such as those I know have their finger on the pulse of the community). Otherwise, there’s a nontrivial risk that some view in my article will be over- or under-represented.
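
This risk is easy to quantify in a toy form. A minimal sketch, assuming sources (or referees) are drawn uniformly at random – which real editorial selection of course isn’t:

```python
# If a fraction p of the community holds a dissenting view and I consult k
# people chosen at random, the chance my sample contains no dissenter at
# all is (1 - p) ** k.
def p_miss_dissent(p: float, k: int) -> float:
    """Probability that none of k randomly sampled scientists dissents."""
    return (1 - p) ** k

# With 20% dissent and four sources (or four referees), the dissenting
# view goes entirely unheard about 41% of the time; with twelve, under 7%.
print(round(p_miss_dissent(0.2, 4), 4))   # 0.4096
print(round(p_miss_dissent(0.2, 12), 4))  # 0.0687
```

Four referees, the number Nature used post-publication, is a small sample by this arithmetic – which is what makes the choice of reviewers so consequential.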

Similarly, during its pre-publication peer-review process, did Nature not sample the right set of reviewers? I’m unable to think of other explanations because the sampling problem accounts for many alternatives. Hamlin and Ramshaw didn’t necessarily have access to more data than Dias et al. submitted to Nature either: their criticism emerged as early as May 2023 and was based on the published paper. Nature also hasn’t disclosed the pre-publication reviewers’ reports or explained whether there were any differences between its sampling process in the pre- and post-publication phases.

So short of there being a good explanation, as much as we have a scientist who’s seemingly been crying wolf about room-temperature superconductivity, we also have a journal whose peer-review process produced, on two separate occasions, two different results. Unless it can clarify why this isn’t so, Nature is also to blame for the paper’s fate.

Scientists’ conduct affects science

Nature News has published an excellent feature by Edwin Cartlidge on the “wall of scepticism” that arose in response to the latest superconductivity claim from Ranga Dias et al., purportedly in a compound called nitrogen-doped lutetium hydride. It seems the new paper has earned a note of concern as well, after various independent research groups failed to replicate the results. Dias & co. had had another paper, claiming superconductivity in a different material, retracted in October 2022, two years after its publication. All these facts together raise a few implications about the popular imagination of science.

First, the new paper was published by Nature, a peer-reviewed journal. And Jorge Hirsch of the University of California, San Diego, told Nature News “that editors should have first resolved the question about the provenance of the raw data in the retracted 2020 Nature article before even considering the 2023 paper”. The note thus reaffirms that peer-review’s role is limited to checking whether the information presented in a paper is consistent with the paper’s conclusions – not whether that information is well-founded and has integrity in and of itself.

Second, from Nature News:

“Researchers from four other groups, meanwhile, told Nature’s news team that they had abandoned their own attempts to replicate the work or hadn’t even tried. Eremets said that he wasted time on the CSH work, so didn’t bother with LuNH. ‘I just ignored it,’ he says.”

An amusing illustration, I think, that speaks against science’s claims to being impartial, etc. In a perfectly objective world, Dias et al.’s previous work shouldn’t have mattered to other scientists, who should have endeavoured to verify the claims in the new paper anew, given that it’s a fairly sensational claim and because it was published in a ‘prestigious’ peer-reviewed journal. But, as Eremets said, “the synthesis protocol wasn’t clear in the paper and Dias didn’t help to clarify it”.

The converse is also true: Dias chose to share samples of nitrogen-doped lutetium hydride that his team had prepared only with Russell Hemley, who studies material chemistry at the University of Illinois, Chicago (and with some other groups he refused to name) – and Hemley is one of the few researchers who hasn’t criticised Dias’s findings. Hemley is also not an independent researcher: he and Dias collaborated on the work in the 2020 paper that was later retracted. Dias should ideally have shared the samples with everyone. But scientists’ social conduct does matter, influencing how other scientists believe they should respond.

Speaking of which: Nature (the journal) on the other hand doesn’t look at past work and attendant misgivings when judging each paper. From Nature News (emphasis added):

The editor’s note added to the 2023 paper on 1 September, saying that the reliability of data are in question, adds that “appropriate editorial action will be taken once this matter is resolved.” Karl Ziemelis, Nature’s chief applied- and physical-sciences editor, based in London, says that he and his colleagues are “assessing concerns” about the paper, and adds: “Owing to the confidentiality of the peer-review process we cannot discuss specific details of what transpired.” As for the 2020 paper, Ziemelis explains that they decided not to look into the origin of the data once they had established problems with the data processing and then retracted the research. “Our broader investigation of that work ceased at that point,” he says. Ziemelis adds that “all submitted manuscripts are considered independently on the basis of the quality and timeliness of their science”.

The refusal to share samples echoes an unusual decision by the journal Physical Review B to publish a paper authored by researchers at Microsoft, in which they reported the discovery of a Majorana zero mode – an elusive particle (in a manner of speaking) that could lead the way to building quantum ‘supercomputers’. However, it seems the team withheld some information that independent researchers could have used to validate the findings, presumably because it’s intellectual property. Rice University physics professor Douglas Natelson wrote on his blog:

The rationale is that the community is better served by getting this result into the peer-reviewed literature now even if all of the details aren’t going to be made available publicly until the end of 2024. I don’t get why the researchers didn’t just wait to publish, if they are so worried about those details being available.


Take all of these facts and opinions together and ask yourself: what then is the scientific literature? It probably contains many papers that have cleared peer-review but whose results won’t replicate. Some papers may or may not replicate, but we won’t know for a couple of years. It also doesn’t contain replication studies that might have existed had the replicators and the original research group been on amicable terms. What do these facts and views imply for the popular conception of science?

Every day, I encounter two broad kinds of critical imaginations of science. One has emerged from the practitioners of science and those studying its philosophy, history, sociology, etc.; these individuals have debated the notions presented above to varying degrees. But there is also a class of people in India that wields science as an antidote to what it claims is the state’s collusion with pseudoscience – a collusion that displaces science from what is apparently its rightful place in the Indian society-state: as the best and sole arbiter of facts and knowledge. This science is apparently a unified whole: objective, self-correcting, evidence-based, and anti-faith. I imagine this science needs to have these characteristics in order to effectively challenge, in the courts of public opinion, the government’s oft-mistaken claims.

At the same time, the ongoing Dias et al. saga reminds us that any ‘science’ imprisoned by these assumptions would dismiss the events and forces that would actually help it grow – such as incentivising good-faith actions, acknowledging the labour required to keep science honest and reflexive, discussing issues resulting from the cultural preferences of its exponents, paying attention to social relationships, heeding concerns about the effects of one’s work and conduct on the field, etc. In the words of Paul Feyerabend (Against Method, third ed., 1993): “Science is neither a single tradition, nor the best tradition there is, except for people who have become accustomed to its presence, its benefits and its disadvantages.”

What’s with superconductors and peer-review?

Throughout the time I’ve been a commissioning editor for science-related articles for news outlets, I’ve always sought and published articles about academic publishing. It’s the part of the scientific enterprise that seems to have been shaped the least by science’s democratic and introspective impulses. It’s also this long and tall wall erected around the field where scientists are labouring, offering ‘visitors’ guided tours for a hefty fee – or, in many cases, for ‘free’ if the scientists are willing to pay the hefty fees instead. Of late, I’ve spent more time thinking about peer-review, the practice of a journal distributing copies of a manuscript it’s considering for publication to independent experts on the same topic, for their technical inputs.

Most of the peer-review that happens today is voluntary: the scientists who do it aren’t paid. You must’ve come across several articles of late about whether peer-review works. It seems to me that it’s far from perfect. Studies (in July 1998, September 1998, and October 2008, e.g.) have shown that peer-reviewers often don’t catch critical problems in papers. In February 2023, a noted scientist said in a conversation that peer-reviewers go into a paper assuming that the data presented therein hasn’t been tampered with. This statement was eye-opening for me because I can’t think of a more important reason to include technical experts in the publishing process than to weed out problems that only technical experts can catch. That said, these flaws with the peer-review system aren’t generalisable per se: many scientists have also told me that their papers benefited from peer-review, especially review that helped them improve their work.

I personally don’t know how ‘much’ peer-review is of the former variety and how much the latter, but it seems safe to state that when manuscripts are written in good faith by competent scientists and sent to the right journal, and the journal treats its peer-reviewers as well as its mandate well, peer-review works. Otherwise, it tends to not work. This heuristic, so to speak, allows for the fact that ‘prestige’ journals like Nature, Science, NEJM, and Cell – which have made a name for themselves by publishing papers that were milestones in their respective fields – have also published and then had to retract many papers that made exciting claims that were subsequently found to be untenable. These journals’ ‘prestige’ is closely related to their taste for sensational results.

All these thoughts were recently brought into focus by the ongoing hoopla, especially on Twitter, about the preprint papers from a South Korean research group claiming the discovery of a room-temperature superconductor in a material called LK-99 (this is the main paper). This work has caught the imagination of users on the platform unlike any other paper about room-temperature superconductivity in recent times. I believe this is because the preprints contain some charts and data that were absent from similar work in the past, and which strongly indicate the presence of a superconducting state at ambient temperature and pressure, and because the preprints include instructions on the material’s synthesis and composition, which means other scientists can synthesise the material and check for themselves. Personally, I’m holding the stance advised by Prof. Vijay B. Shenoy of IISc:

Many research groups around the world will attempt to reproduce these results; there are already some rumours that independent scientists have done so. We will have to wait for the results of their studies.

Curiously, the preprints have caught the attention of a not insignificant number of techbros, who, alongside the typically naïve displays of their newfound expertise, have also called for the peer-review system to be abolished because it’s too slow and opaque.

Peer-review has a storied relationship with superconductivity. In the early 2000s, a slew of papers coauthored by the German physicist Jan Hendrik Schön, working at a Bell Labs facility in the US, were retracted after independent investigations found that he had fabricated data to support claims that certain organic molecules, called fullerenes, were superconducting. The Guardian wrote in September 2002:

The Schön affair has besmirched the peer review process in physics as never before. Why didn’t the peer review system catch the discrepancies in his work? A referee in a new field doesn’t want to “be the bad guy on the block,” says Dutch physicist Teun Klapwijk, so he generally gives the author the benefit of the doubt. But physicists did become irritated after a while, says Klapwijk, “that Schön’s flurry of papers continued without increased detail, and with the same sloppiness and inconsistencies.”

Some critics hold the journals responsible. The editors of Science and Nature have stoutly defended their review process in interviews with the London Times Higher Education Supplement. Karl Ziemelis, one of Nature’s physical science editors, complained of scapegoating, while Donald Kennedy, who edits Science, asserted that “There is little journals can do about detecting scientific misconduct.”

Maybe not, responds Nobel prize-winning physicist Philip Anderson of Princeton, but the way that Science and Nature compete for cutting-edge work “compromised the review process in this instance.” These two industry-leading publications “decide for themselves what is good science – or good-selling science,” says Anderson (who is also a former Bell Labs director), and their market consciousness “encourages people to push into print with shoddy results.” Such urgency would presumably lead to hasty review practices. Klapwijk, a superconductivity specialist, said that he had raised objections to a Schön paper sent to him for review, but that it was published anyway.

A similar claim by a group at IISc in 2019 generated a lot of excitement then, but today almost no one has any idea what happened to it. It seems reasonable to assume that the findings didn’t pan out in further testing and/or that the peer-review, following the manuscript being submitted to Nature, found problems in the group’s data. Last month, the South Korean group uploaded its papers to the arXiv preprint repository and has presumably submitted them to a journal: for a finding this momentous, that seems like the obvious next step. And the journal is presumably conducting peer-review at this point.

But in both instances (IISc 2019 and today), the claims were also accompanied by independent attempts to replicate the data as well as journalistic articles that assimilated the various public narratives and their social relevance into a cogent whole. One of the first signs that there was a problem with the IISc preprint was another preprint by Brian Skinner, a physicist then with the Massachusetts Institute of Technology, who found the noise in two graphs plotting the results of two distinct tests to be the same – which is impossible. Independent scientists also told The Wire (where I worked then) that they lacked some information required to make sense of the results as well as expressed concerns with the magnetic susceptibility data.
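
Skinner’s check is easy to sketch in spirit – what follows is my own assumed reconstruction, not his actual code. Remove a smooth trend from each curve and correlate what’s left over: independent measurements should leave uncorrelated residuals, so residuals that match point for point are a red flag.

```python
import numpy as np

# Two curves from nominally distinct experiments, here deliberately built
# with the *same* noise array to mimic the problem Skinner spotted.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
shared_noise = rng.normal(0, 0.05, x.size)

curve_a = np.sin(2 * np.pi * x) + shared_noise        # "test 1"
curve_b = 0.5 * np.cos(2 * np.pi * x) + shared_noise  # "test 2", identical noise

def residual_correlation(a, b, x, deg=8):
    """Correlate what remains after removing a smooth polynomial trend."""
    ra = a - np.polyval(np.polyfit(x, a, deg), x)
    rb = b - np.polyval(np.polyfit(x, b, deg), x)
    return np.corrcoef(ra, rb)[0, 1]

print(residual_correlation(curve_a, curve_b, x))  # close to 1: a red flag

# With genuinely independent noise, the residual correlation sits near 0.
independent = 0.5 * np.cos(2 * np.pi * x) + rng.normal(0, 0.05, x.size)
print(residual_correlation(curve_a, independent, x))
```

The degree-8 polynomial detrend is an arbitrary choice for the sketch; any reasonable smooth-trend removal exposes the same signature.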

Peer-review may not be designed to check whether the experiments in question produced the data in question, but whether the data in question supports the conclusions. For example, in March this year, Nature published a study led by Ranga P. Dias in which he and his team claimed that nitrogen-doped lutetium hydride becomes a room-temperature superconductor under a pressure of 1,000 atm, considerably lower than the pressure required to produce a superconducting state in other similar materials. After it was published, many independent scientists raised concerns about some data and analytical methods presented in the paper – as well as its failure to specify how the material could be synthesised. These problems, it seems, didn’t prevent the paper from clearing peer-review. Yet on August 3, Martin M. Bauer, a particle physicist at Durham University, published a tweet defending peer-review in the context of the South Korean work.

The problem seems to me to be the belief – held by many pro- as well as anti-peer-review actors – that peer-review is the ultimate check capable of filtering out all forms of bad science. It just can’t, and maybe that’s okay. Contrary to what Dr. Bauer has said, and as the example of Dr. Dias’s paper suggests, peer-reviewers won’t attempt to replicate the South Korean study. That task, thanks to the level of detail in the South Korean preprint and the fact that preprints are freely accessible, is already being undertaken by a panoply of labs around the world, both inside and outside universities. So abolishing peer-review won’t be as bad as Dr. Bauer makes it sound. As I said, peer-review is, or ought to be, one of many checks.

It’s also the sole check that a journal undertakes, and maybe that’s the bigger problem. That is, scientific journals may well be a pit of papers of unpredictable quality without peer-review in the picture – but that would only be because journal editors and scientists are separate functional groups, rather than having a group of scientists take direct charge of the publishing process (akin to how arXiv currently operates). In the existing publishing model, peer-review is as important as it is because scientists aren’t involved in any other part of the publishing pipeline.

An alternative model comes to mind, one that closes the gaps of “isn’t designed to check whether the experiments in question produced the data in question” and “the sole check that a journal undertakes”: scientists conduct their experiments, write them up in a manuscript and upload them to a preprint repository; other scientists attempt to replicate the results; if the latter are successful, both groups update the preprint paper and submit that to a journal (with the lion’s share of the credit going to the former group); journal editors have this document peer-reviewed (to check whether the data presented supports the conclusions), edited, and polished[1]; and finally publish it.

Obviously this would require a significant reorganisation of incentives: for one, researchers will need to be able to apportion time and resources to replicate others’ experiments for less than half of the credit. A second problem is that this is a (probably non-novel) reimagination of the publishing workflow that doesn’t consider the business model – the other major problem in academic publishing. Third: I have in mind only condensed-matter physics; I don’t know much about the challenges to replicating results in, say, genomics, computer science or astrophysics. My point overall is that if journals look like a car crash without peer-review, it’s only because the crashes were a matter of time and peer-review was doing just the bare minimum to keep them from happening. (And Twitter was always a car crash anyway.)


[1] I hope readers won’t underestimate the importance of the editorial and language assistance a journal can provide. Last month, researchers in Australia, Germany, Nepal, Spain, the UK, and the US had a paper published in which they reported, based on surveys, that “non-native English speakers, especially early in their careers, spend more effort than native English speakers in conducting scientific activities, from reading and writing papers and preparing presentations in English, to disseminating research in multiple languages. Language barriers can also cause them not to attend, or give oral presentations at, international conferences conducted in English.”

The language in the South Korean group’s preprints indicates that its authors’ first language isn’t English. According to Springer, which later became Springer Nature, the publisher of the Nature journals, “Editorial reasons for rejection include … poor language quality such that it cannot be understood by readers”. An undated article on Elsevier’s ‘Author Services’ page has this line: “For Marco [Casola, managing editor of Water Research], poor language can indicate further issues with a paper. ‘Language errors can sometimes ring a bell as a link to quality. If a manuscript is written in poor English the science behind it may not be amazing. This isn’t always the case, but it can be an indication.'”

But instead of palming the responsibility off to scientists, journals have an opportunity to distinguish themselves by helping researchers write better papers.

A ‘bold’ vision

‘Support Europe’s bold vision for responsible research assessment’, Nature editorial, July 27, 2022:

The Agreement on Reforming Research Assessment, announced on 20 July and open for signatures on 28 September, is perhaps the most hopeful sign yet of real change. More than 350 organizations have pooled experience, ideas and evidence to come up with a model agreement to create more-inclusive assessment systems. The initiative, four years in the making, is the work of the European University Association and Science Europe (a network of the continent’s science funders and academies), in concert with predecessor initiatives. It has the blessing of the European Commission, but with an ambition to become global.

Signatories must commit to using metrics responsibly, for example by stopping what the agreement calls “inappropriate” uses of journal and publication-based metrics such as the journal impact factor and the h-index. They also agree to avoid using rankings of universities and research organizations — and where this is unavoidable, to recognize their statistical and methodological limitations.

I’m curious if calling this plan “bold” is a way to warn readers to proceed with considerable scepticism instead of embracing such ideas with both arms. The plan itself is not bold but – as the editorial itself acknowledges, ironically – in line with what many accomplished research groups and institutes around the world have already expressed a desire for.

Also relevant here is the fact that the editorial appeared in Nature – a journal that has long played up its impact factor and the significance of the various papers it has published (to their respective fields) to burnish its own prestige. Scientists are seeking to install a new research evaluation process in the first place because of the damage such prestige has wrought on the practice of science, essentially by substituting for rigour and transparency – so overturning the tyranny of prestige will deal a blow to Nature’s large profit margins.

The cycle

Is it just me or does everyone see a self-fulfilling prophecy here?

For a long time, and assisted ably by the ‘publish or perish’ paradigm, researchers sought to have their papers published in high-impact-factor journals – a.k.a. prestige journals – like Nature.

Such journals in turn, assisted ably by parasitic strategies, made these papers highly visible to other researchers around the world and, by virtue of being high-IF journals, tainted the results in the papers with a measure of prestige, ergo importance.

Evaluation and awards committees, in turn, noticed these papers over others and picked their authors for rewards, further amplifying their work, increasing the opportunity cost incurred by the researchers who lost out, and increasing the prestige attached to the high-IF journals.

Run this cycle a few million times and you end up with the impression that there’s something journals like Nature get right – when in fact it’s mostly just a bunch of business practices meant to ensure they remain profitable.

Why are the Nobel Prizes still relevant?

Note: A condensed version of this post has been published in The Wire.

Around this time last week, the world had nine new Nobel Prize winners in the sciences (physics, chemistry and medicine), all but one of whom were white and none were women. Before the announcements began, Göran Hansson, the Swede-in-chief of these prizes, had said the selection committee has been taking steps to make the group of laureates more racially and gender-wise inclusive, but it would seem they’re incremental measures, as one editorial in the journal Nature pointed out.

Hansson and co. seem to find tenable the argument that the Nobel Prizes rewarded achievements at a time when there weren’t many women in science – when in fact it distracts from the selection committee’s bizarre oversight of such worthy names as Lise Meitner, Vera Rubin, Chien-Shiung Wu, etc. But Hansson needs to understand that the only meaningful change is change that happens right away because, even with this significant flaw – one that should by all means have diminished the prizes to a contest of, for and by men – the Nobel Prizes have only marginally declined in reputation.

Why do they matter when they clearly shouldn’t?

For example, according to the most common comments received in response to articles by The Wire shared on Twitter and Facebook, and always from men, the prizes reward excellence, and excellence should brook no reservation, whether by caste or gender. As is likely obvious to many readers, this view of scholastic achievement resembles a blade of grass: long, sprouting from the ground (the product of strong roots but out of sight, out of mind), rising straight up and culminating in a sharp tip.

However, achievement is more like a jungle: the scientific enterprise – encompassing research institutions, laboratories, the scientific publishing industry, administration and research funding, social security, availability of social capital, PR, discoverability and visibility, etc. – incorporates many vectors of bias, discrimination and even harassment towards its more marginalised constituents. Your success is not your success alone; and if you’re an upper-caste, upper-class, English-speaking man, you should ask yourself, as many such men have been prompted to in various walks of life, who you might have displaced.

This isn’t a witch-hunt as much as an opportunity to acknowledge how privilege works and what we can do to make scientific work more equal, equitable and just in future. But the idea that research is a jungle and research excellence is a product of the complex interactions happening among its thickets hasn’t found meaningful purchase, and many people still labour under the comically straightforward impression that science is immune to social forces. Hansson might be one of them, if his interview with Nature is anything to go by, where he says:

… we have to identify the most important discoveries and award the individuals who have made them. If we go away from that, then we’ve devalued the Nobel prize, and I think that would harm everyone in the end.

In other words, the Nobel Prizes are just going to look at the world from the top, and probably from a great distance too, so the jungle has been condensed to a cluster of pin-pricks.

Another reason the Nobel Prizes haven’t been easy to sideline is that the sciences’ ‘blade of grass’ impression is strongly historically grounded, helped along by notions like the idea that scientific knowledge spreads from the Occident to the Orient.

Who’s the first person that comes to mind when I say “Nobel Prize for physics”? I bet it’s Albert Einstein. He was so great that his stature as a physicist has over the decades transcended his human identity and stamped the Nobel Prize he won in 1921 with an indelible mark of credibility. Now, to win a Nobel Prize in physics is to stand alongside Einstein himself.

This union between a prize and its laureate isn’t unique to the Nobel Prize or to Einstein. As I’ve said before, prizes are elevated by their winners. When Margaret Atwood wins the Booker Prize, it’s better for the prize than it is for her; when Isaac Asimov won a Hugo Award in 1963, near the start of his career, it was good for him, but it was good for the prize when he won it for the sixth time in 1992 (the year he died). The Nobel Prizes also accrued a substantial amount of prestige this way at a time when it wasn’t much of a problem, apart from the occasional flareup over ignoring deserving female candidates.

That their laureates have almost always been from Europe and North America further cemented the prizes’ impression that they’re the ultimate signifier of ‘having made it’, paralleling the popular undercurrent among postcolonial peoples that science is a product of the West and that they’re simply its receivers.

That said, the prize-as-proxy issue has also contributed considerably to preserving systemic bias at the national and international levels. Winning a prize (especially a legitimate one) accords the winner’s work a modicum of credibility and the winner, prestige. Depending on how a prize’s future winners are selected, such credibility and prestige can be potentiated to skew the prize in favour of people who have already won other prizes.

For example, a scientist-friend ranted to me about how, at a conference he had recently attended, another scientist on stage had introduced himself to his audience by mentioning the impact factors of the journals he’d had his papers published in. The impact factor deserves to die because, among other reasons, it attempts to condense multi-dimensional research efforts and the vagaries of scientific publishing into a single number that stands for some kind of prestige. But its users should be honest about its actual purpose: it was designed so evaluators could take one look at it and decide what to do about a candidate to whom it corresponded. This isn’t fair – but expeditiousness isn’t cheap.

And when evaluators at different rungs of the career-advancement ladder privilege the impact factor, scientists who had more papers published earlier in their careers in journals with higher impact factors become exponentially likelier over time to be recognised for their efforts than others – probably even irrespective of the work’s quality, given the unique failings of high-IF journals (discussed here and here).

Brian Skinner, a physicist at Ohio State University, recently presented a mathematical model of this ‘prestige bias’, whose amplification depended in a unique way, according to him, on a factor he called the ‘examination precision’. He found that the more ambiguously the barrier to advancement is defined, the more pronounced the prestige bias can get. Put another way, people who have the opportunity to maintain systemic discrimination simultaneously have an incentive to make the points of entry into their club as vague as possible. Sound familiar?
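The intuition is easy to reproduce in a toy simulation – to be clear, the sketch below is my own illustration of the general idea, not Skinner’s actual model. Candidates compete over several selection rounds; evaluators score each one as true ability plus accumulated prestige plus measurement noise, where the noise is the inverse of the ‘examination precision’:

```python
import random

def winners_mean_ability(noise_sd, prestige_boost=1.0, rounds=5,
                         n=1000, select_frac=0.1, seed=42):
    """Toy sketch (not Skinner's actual model): run repeated selection
    rounds and return the mean true ability of everyone ever selected.
    noise_sd is the evaluators' measurement noise (vaguer examination)."""
    rng = random.Random(seed)
    ability = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prestige = [0.0] * n
    for _ in range(rounds):
        observed = [a + p + rng.gauss(0.0, noise_sd)
                    for a, p in zip(ability, prestige)]
        cutoff = sorted(observed, reverse=True)[int(n * select_frac)]
        for i, score in enumerate(observed):
            if score >= cutoff:
                prestige[i] += prestige_boost  # winners carry prestige forward
    selected = [ability[i] for i in range(n) if prestige[i] > 0]
    return sum(selected) / len(selected)

# With precise examination (low noise), selection tracks true ability; with
# vague examination (high noise), early lucky winners keep winning on the
# strength of accumulated prestige rather than ability.
precise = winners_mean_ability(noise_sd=0.3)
vague = winners_mean_ability(noise_sd=3.0)
print(precise > vague)
```

Under these (assumed) settings, the vaguer the examination, the closer the winners’ average ability sits to the population average – prestige, not merit, decides who keeps winning.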

One might argue that the Nobel Prizes are awarded to people at the end of their careers – the average age of a physics laureate is in the late 50s; John Goodenough won the chemistry prize this year at 97 – so the prizes couldn’t possibly increase the likelihood of a future recognition. But the sword cuts both ways: the Nobel Prizes are likelier than not to be products of prestige-bias amplification themselves, and are therefore not the morally neutral symbols of excellence Hansson and his peers seem to think they are.

Fourth, the Nobel Prizes are an occasion to speak of science. This implies that those who deride the prizes but at the same time hold them up are equally to blame, but I would agree only in part. This exhortation to try harder is voiced more often than not by those working in the West, at publications with better resources and typically higher purchasing power. On principle I can’t deride the decisions reporters and editors make in the process of building an audience for science journalism, with the hope that it will be profitable someday, all in a resource-constrained environment, even if some of those choices might seem irrational.

(The story of Brian Keating, an astrophysicist, could be illuminating at this juncture.)

More than anything else, what science journalism needs to succeed is a commonplace acknowledgement that science news is important – whether for the better or the worse is secondary – and the Nobel Prizes do a fantastic job of drawing people’s attention to scientific ideas and endeavours. If anything, journalists should seize the opportunity in October every year to also speak about how the prizes are flawed and present their readers with a fuller picture.

Finally, and of course, we have capitalism itself – implicated in the quantum of prize money accompanying each Nobel Prize (9 million Swedish kronor, Rs 6.56 crore or $0.9 million).

Then again, this figure pales in comparison to the amounts that academic institutions know they can rake in by instrumentalising the prestige in the form of donations from billionaires, grants and fellowships from the government, fees from students presented with the tantalising proximity to a Nobel laureate, and in the form of press coverage. L’affaire Epstein even demonstrated how it’s possible to launder a soiled reputation by investing in scientific research because institutions won’t ask too many questions about who’s funding them.

The Nobel Prizes are money magnets, and this is also why winning a Nobel Prize is like winning an Academy Award: you don’t get on stage without some lobbying. Each blade of grass has to mobilise its own PR machine, supported in all likelihood by the same institute that submitted its candidature to the laureate-selection committee. The Nature editorial called this out thus:

As a small test case, Nature approached three of the world’s largest international scientific networks that include academies of science in developing countries. They are the International Science Council, the World Academy of Sciences and the InterAcademy Partnership. Each was asked if they had been approached by the Nobel awarding bodies to recommend nominees for science Nobels. All three said no.

I believe arguments that serve to uphold the Nobel Prizes’ relevance must take recourse to at least one of these reasons, if not all of them. It’s also abundantly clear that the Nobel Prizes are important not because they present a fair or useful picture of scientific excellence but in spite of failing to do so.

Prestige journals and their prestigious mistakes

On June 24, Scientific Reports – a journal in the Nature family – published a paper claiming that Earth’s surface was warming by more than what non-anthropogenic sources could account for because it was simply moving closer to the Sun. I.e. global warming was the result of changes in the Earth-Sun distance. Excerpt:

The oscillations of the baseline of solar magnetic field are likely to be caused by the solar inertial motion about the barycentre of the solar system caused by large planets. This, in turn, is closely linked to an increase of solar irradiance caused by the positions of the Sun either closer to aphelion and autumn equinox or perihelion and spring equinox. Therefore, the oscillations of the baseline define the global trend of solar magnetic field and solar irradiance over a period of about 2100 years. In the current millennium since Maunder minimum we have the increase of the baseline magnetic field and solar irradiance for another 580 years. This increase leads to the terrestrial temperature increase as noted by Akasofu [26] during the past two hundred years.

The New Scientist reported on July 16 that Nature had since kickstarted an “established process” to investigate how a paper with “egregious errors” cleared peer-review and was published. One of the scientists it quotes says the journal should retract the paper if it wants to “retain any credibility”, but the fact that the paper cleared peer-review in the first place is to me the most notable part of this story. It is a reminder that peer-review has a failure rate, and that ‘prestige’ titles like Nature can publish crap (for instance, look at the retraction index chart here).

That said, I am a little concerned because Scientific Reports is an open-access title. I hope it didn’t simply publish the paper in exchange for a fee like its less credible counterparts.

Almost as if it timed it to the day, the journal Science – Nature’s big rival across the ocean – published a paper that did make legitimate claims but which brooks disagreement on a different tack. It describes a way to keep sea levels from rising due to the melting of Antarctic ice. Excerpt:

… we show that the [West Antarctic Ice Sheet] may be stabilized through mass deposition in coastal regions around Pine Island and Thwaites glaciers. In our numerical simulations, a minimum of 7400 [billion tonnes] of additional snowfall stabilizes the flow if applied over a short period of 10 years onto the region (~2 mm/year sea level equivalent). Mass deposition at a lower rate increases the intervention time and the required total amount of snow.
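The quoted figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a global ocean surface area of about 3.61 × 10⁸ km² and treats the 7,400 billion tonnes of snow as an equal mass of water:

```python
OCEAN_AREA_M2 = 3.61e14   # global ocean surface area, ~3.61e8 km^2 (assumed)
SNOW_MASS_KG = 7.4e15     # 7,400 billion tonnes of additional snowfall
WATER_DENSITY = 1000.0    # kg per m^3

volume_m3 = SNOW_MASS_KG / WATER_DENSITY            # ~7.4e12 m^3 of water
total_sle_mm = volume_m3 / OCEAN_AREA_M2 * 1000.0   # total sea-level equivalent
per_year_mm = total_sle_mm / 10                     # deposited over 10 years
print(round(per_year_mm, 1))                        # ~2.0 mm/year
```

The result lands right on the paper’s quoted ~2 mm/year sea-level equivalent, so the excerpt is at least internally consistent.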

While I’m all for curiosity-driven research, climate change is rapidly becoming a climate emergency in many parts of the world, not least where the poorer live, without a corresponding set of protocols, resources and schemes to deal with it. In this situation, papers like this – and journals like Science that publish them – only make solutions like the one proposed above seem credible when in fact they should be trashed for implying that it’s okay to keep emitting more carbon into the atmosphere because we can apply a band-aid of snow over the ice sheet and postpone the consequences. Of course, the paper’s authors acknowledge the following:

Operations such as the one discussed pose the risk of moral hazard. We therefore stress that these projects are not an alternative to strengthening the efforts of climate mitigation. The ambitious reduction of greenhouse gas emissions is and will be the main lever to mitigate the impacts of sea level rise. The simulations of the current study do not consider a warming ocean and atmosphere as can be expected from the increase in anthropogenic CO2. The computed mass deposition scenarios are therefore valid only under a simultaneous drastic reduction of global CO2 emissions.

… but these words appear only in the paper’s last few lines (before the ‘materials and methods’ section), as if they were a token addition to what reads, overall, like a dispassionate analysis. This is also borne out by the study not having modelled the deposition idea together with falling CO2 emissions.

I’m a big fan of curiosity-driven science as a matter of principle. While it seemed hard at first to reconcile my emotions on the Science paper with that position, I realised that I believe both curiosity- and application-driven research should still be conscientious. Setting aside the endless questions about how we ought to spend the taxpayers’ dollars – if only because interfering with research on the basis of public interest is a terrible idea – it is my personal, non-prescriptive opinion that research should still endeavour to be non-destructive (at least to the best of the researchers’ knowledge) when advancing new solutions to known problems.

If that is not possible, then researchers should acknowledge that their work could have real consequences and, setting aside all pretence of being quantitative, objective, etc., clarify the moral qualities of their work. This the authors of the Science paper have done, but there are no brownie points for low-hanging fruit. Or maybe there should be, considering there has been other work whose authors wrote that they “make no judgment on the desirability” of their proposal (also about climate geo-engineering).

Most of all, let us not forget that being Nature or Science doesn’t automatically make what they put out any better.