Prestige journals and their prestigious mistakes

On June 24, the Nature group journal Scientific Reports published a paper claiming that Earth’s surface was warming by more than what anthropogenic sources could account for because the planet was simply moving closer to the Sun – that is, that global warming was the result of changes in the Earth-Sun distance. Excerpt:

The oscillations of the baseline of solar magnetic field are likely to be caused by the solar inertial motion about the barycentre of the solar system caused by large planets. This, in turn, is closely linked to an increase of solar irradiance caused by the positions of the Sun either closer to aphelion and autumn equinox or perihelion and spring equinox. Therefore, the oscillations of the baseline define the global trend of solar magnetic field and solar irradiance over a period of about 2100 years. In the current millennium since Maunder minimum we have the increase of the baseline magnetic field and solar irradiance for another 580 years. This increase leads to the terrestrial temperature increase as noted by Akasofu [26] during the past two hundred years.

The New Scientist reported on July 16 that Nature has since kickstarted an “established process” to investigate how a paper with “egregious errors” cleared peer review and was published. One of the scientists it quotes says the journal should retract the paper if it wants to “retain any credibility”, but to me the fact that it cleared peer review in the first place is the most notable part of this story. It is a reminder that peer review has a failure rate, and that ‘prestige’ titles like Nature can publish crap (for instance, look at the retraction index chart here).

That said, I am a little concerned because Scientific Reports is an open-access title. I hope it didn’t simply publish the paper in exchange for a fee like its less credible counterparts.

Almost as if timed to the day, the journal Science – Nature’s big rival across the ocean – published a paper that did make legitimate claims but which invites disagreement on a different count. It describes a way to keep sea levels from rising due to the melting of Antarctic ice. Excerpt:

… we show that the [West Antarctic Ice Sheet] may be stabilized through mass deposition in coastal regions around Pine Island and Thwaites glaciers. In our numerical simulations, a minimum of 7400 [billion tonnes] of additional snowfall stabilizes the flow if applied over a short period of 10 years onto the region (~2 mm/year sea level equivalent). Mass deposition at a lower rate increases the intervention time and the required total amount of snow.
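As an aside, the “~2 mm/year sea level equivalent” figure checks out on the back of an envelope. Here is a minimal sketch of the arithmetic, assuming a global ocean surface area of about 3.6 × 10⁸ km² and that 1 Gt of water occupies about 10⁹ m³ – both my assumptions, not numbers from the paper.

```python
# Back-of-the-envelope check of the paper's "~2 mm/year" figure; the
# ocean-area and density values are my assumptions, not from the paper.
OCEAN_AREA_M2 = 3.6e14      # ~3.6e8 km^2 of ocean surface
M3_PER_GT = 1e9             # 1 Gt of water = 1e12 kg, i.e. ~1e9 m^3

deposition_gt, years = 7400, 10
mm_per_year = (deposition_gt / years) * M3_PER_GT / OCEAN_AREA_M2 * 1000
print(f"{mm_per_year:.1f} mm/year")   # ~2.1 mm/year, matching the paper
```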

While I’m all for curiosity-driven research, climate change is rapidly becoming a climate emergency in many parts of the world, not least in those where the poorest live, without a corresponding set of protocols, resources and schemes to deal with it. In this situation, papers like this – and journals like Science that publish them – only make solutions like the one proposed above seem credible, when in fact they should be trashed for implying that it’s okay to keep emitting more carbon into the atmosphere because we can apply a band-aid of snow over the ice sheet and postpone the consequences. Of course, the paper’s authors acknowledge the following:

Operations such as the one discussed pose the risk of moral hazard. We therefore stress that these projects are not an alternative to strengthening the efforts of climate mitigation. The ambitious reduction of greenhouse gas emissions is and will be the main lever to mitigate the impacts of sea level rise. The simulations of the current study do not consider a warming ocean and atmosphere as can be expected from the increase in anthropogenic CO2. The computed mass deposition scenarios are therefore valid only under a simultaneous drastic reduction of global CO2 emissions.

… but these words appear only in the paper’s last few lines (just before the ‘materials and methods’ section), as if they were a token addition to what reads, overall, like a dispassionate analysis. This is also borne out by the study not having modelled the deposition idea together with falling CO2 emissions.

I’m a big fan of curiosity-driven science as a matter of principle. While it seemed hard at first to reconcile my emotions on the Science paper with that position, I realised that I believe both curiosity- and application-driven research should still be conscientious. Setting aside the endless questions about how we ought to spend the taxpayers’ dollars – if only because interfering with research on the basis of public interest is a terrible idea – it is my personal, non-prescriptive opinion that research should still endeavour to be non-destructive (at least to the best of the researchers’ knowledge) when advancing new solutions to known problems.

If that is not possible, then researchers should acknowledge that their work could have real consequences and, setting aside all pretence of being quantitative, objective, etc., clarify the moral qualities of their work. This the authors of the Science paper have done, but there are no brownie points for low-hanging fruit. Or maybe there should be, considering there has been other work – also about climate geo-engineering – whose authors wrote that they “make no judgment on the desirability” of their proposal.

Most of all, let us not forget that being Nature or Science doesn’t automatically make what they put out better for having been published by them.

Priggish NEJM editorial on data-sharing misses the point it almost made

Twitter erupted in outrage, like only Twitter can, on January 22 over a strange editorial that appeared in the prestigious New England Journal of Medicine, calling for medical researchers to not make their research data public. The call comes at a time when the scientific publishing zeitgeist is slowly but surely shifting toward journals requiring, sometimes mandating, the authors of studies to make their data freely available so that their work can be validated by other researchers.

Through the editorial, written by Dan Longo and Jeffrey Drazen – both doctors, the latter the journal’s editor-in-chief – NEJM also cautions medical researchers to be on the lookout for ‘research parasites’, a coinage that the journal says is befitting of “people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited”. As @omgItsEnRIz tweeted, do the authors even science?

https://twitter.com/martibartfast/status/690503478813261824

The choice of words is more incriminating than the overall tone of the text, which also tries to express the more legitimate concern of replicators not getting along with the original performers. However, by saying that the ‘parasites’ may “use the data to try to disprove what the original investigators had posited”, NEJM has crawled into an unwise hole of infallibility of its own making.

In October 2015, a paper published in the Journal of Experimental Psychology pointed out why replication studies are probably more necessary than ever. The misguided publish-or-perish impetus of scientific research, together with many institutions lazily using publication in high impact-factor journals as a proxy for ‘good research’, has led researchers to hack their results – i.e. doctor them (say, by cherry-picking) so that the study ends up reporting sensational results when, really, duller ones exist.
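To see what this does to a body of published results, here is a minimal simulation – my own illustration, not anything from the JEP paper – of how selective reporting conjures ‘significant’ effects out of pure noise.

```python
# Run many small two-group studies in which the true effect is zero, then
# "publish" only the ones that cross p < 0.05 (the file-drawer effect).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
published = []
for _ in range(1000):
    a = rng.normal(0, 1, 20)    # control group, true effect = 0
    b = rng.normal(0, 1, 20)    # 'treatment' group, true effect = 0
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:                # cherry-pick: report only significant runs
        published.append(abs(b.mean() - a.mean()))

print(f"{len(published)} of 1,000 null studies came out 'significant'")
print(f"mean published effect size: {np.mean(published):.2f} (true: 0)")
```

About 5% of the null studies clear the significance bar by chance alone, and the ‘published’ record then shows a respectable average effect where none exists.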

The JEP paper had a funnel plot to demonstrate this. Quoting from the Neuroskeptic blog, which highlighted the plot when the paper was published: “This is a funnel plot, a two-dimensional scatter plot in which each point represents one previously published study. The graph plots the effect size reported by each study against the standard error of the effect size – essentially, the precision of the results, which is mostly determined by the sample size.” Note: the y-axis runs top-down.

[Funnel plot of the original and replication studies, from the JEP paper]

The paper concerned itself with 43 previously published studies discussing how people’s choices were perceived to change when they were gently reminded about sex.

As Neuroskeptic goes on to explain, there are three giveaways in this plot. One is obvious: the distribution of replication studies is markedly separated from that of the original studies. Second: among the original studies, the least precise results (those with the largest standard errors, typically from the smallest samples) reported the largest effects. Third: the original studies all seem to ‘hug’ the outer edge of the grey triangle, which marks the conventional threshold of statistical significance (p = 0.05) used to indicate whether results are reliable. The uniform ‘hugging’ is an indication that all those original studies were likely guilty of cherry-picking from their data to end up with results that are just about significant, a practice called ‘p-hacking’.
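For readers who haven’t met one before, the shape is easy to reconstruct. Below is a sketch, with made-up data rather than the JEP paper’s figures, of a funnel plot with its p = 0.05 contour and the tell-tale ‘hugging’.

```python
# Sketch of a funnel plot: the grey region is not significant at p = 0.05
# (|effect| < 1.96 * SE). All data below are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
se = np.linspace(0.01, 0.5, 200)
plt.fill_betweenx(se, -1.96 * se, 1.96 * se, color="0.85")

# Hypothetical original studies: just barely significant, hugging the edge
se_orig = rng.uniform(0.15, 0.45, 20)
eff_orig = 1.96 * se_orig + rng.uniform(0.0, 0.1, 20)
plt.scatter(eff_orig, se_orig, marker="s", label="original studies")

# Hypothetical replications: more precise and centred on a zero effect
se_rep = rng.uniform(0.05, 0.15, 20)
eff_rep = rng.normal(0.0, se_rep)
plt.scatter(eff_rep, se_rep, marker="o", label="replications")

plt.gca().invert_yaxis()   # y-axis runs top-down, as in the paper's plot
plt.xlabel("effect size")
plt.ylabel("standard error")
plt.legend()
plt.show()
```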

A line of research can appear to progress rapidly, but without replication studies it’s difficult to establish if the progress is meaningful for science – a notion famously highlighted by John Ioannidis, a professor of medicine and statistics at Stanford University, in his two landmark papers in 2005 and 2014. Björn Brembs, a professor of neurogenetics at the Universität Regensburg, Bavaria, has also pointed out how the top journals’ insistence on sensational results could concentrate unreliability at the top. Together with a conspicuous dearth of systematically conducted replication studies, this ironically implies that the least reliable results are often taken the most seriously, thanks to the journals they appear in.

The most accessible sign of this is a plot between the retraction index and the impact factor of journals. The term ‘retraction index’ was coined in the same paper in which the plot first appeared; it stands for “the number of retractions in the time interval from 2001 to 2010, multiplied by 1,000, and divided by the number of published articles with abstracts”.
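The definition amounts to a simple rate; restated in code, with invented numbers purely for illustration:

```python
# Retraction index, per the definition quoted above; the sample figures
# below are made up for illustration.
def retraction_index(retractions_2001_2010, articles_with_abstracts):
    return retractions_2001_2010 * 1000 / articles_with_abstracts

# e.g. a journal that retracted 20 of 25,000 abstracted articles:
print(retraction_index(20, 25_000))   # 0.8
```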

Impact factor of journals plotted against the retraction index. The highest IF journals – Nature, Cell and Science – are farther along the trend line than they should be. Source: doi: 10.1128/IAI.05661-11

Look where NEJM is. Enough said.

The journal’s first such plea appeared in 1997, arguing against pre-print copies of medical research papers becoming available and easily accessible – à la the arXiv server for physics. The authors, again two doctors, wrote: “medicine is not physics: the wide circulation of unedited preprints in physics is unlikely to have an immediate effect on the public’s well-being even if the material is biased or false. In medicine, such a practice could have unintended consequences that we all would regret.” Though a reasonable point of view, the overall tone appeared to stand against the principles of open science.

More importantly, both editorials, separated by almost two decades, make one reasonable argument that sadly appears to make sense to the journal only in the context of a wider set of arguments, many of them contemptible. For example, Drazen seems to understand the importance of data being available for studies to be validated but has differing views on different kinds of data. Two days before his editorial was published, another appeared in the same journal, co-authored by 16 medical researchers – Drazen among them – this time calling for anonymised patient data from clinical trials to be made available to other researchers because it would “increase confidence and trust in the conclusions drawn from clinical trials. It will enable the independent confirmation of results, an essential tenet of the scientific process.”

(At the same time, the editorial also says, “Those using data collected by others should seek collaboration with those who collected the data.”)

For another example, NEJM labours under the impression that the data generated by medical experiments will never be perfectly communicable to other researchers who were not involved in generating it. One reason it provides is that discrepancies in the data between the original group and a new group could arise from subtle choices the former made in selecting which parameters to evaluate. However, the solution doesn’t lie in keeping the data opaque altogether.

A better way to conduct replication studies

An instructive example played out in May 2014, when the journal Social Psychology published a special issue dedicated to replication studies. The issue contained both successful and failed attempts at replicating some previously published results, and the whole process was designed to eliminate biases as much as possible. For example, the issue’s editors, Brian Nosek and Daniel Lakens, didn’t curate replication studies by their outcomes but instead registered the studies before they were performed, so that they would be published irrespective of whether they turned out positive or negative. For another, all the replications used the same experimental and statistical techniques as the original studies.

One scientist who came out feeling wronged by the special issue was Simone Schnall, the director of the Embodied Cognition and Emotion Laboratory at Cambridge University. The results of a paper co-authored by Schnall in 2008 had failed to replicate, but she believed there had been a mistake in the replication that, when corrected, would corroborate her group’s findings. However, her statements were quickly and widely interpreted to mean she was being a “sore loser”. In one blog, her 2008 findings were called an “epic fail” (though the words were later struck out).

This was soon followed by a rebuttal from Schnall, then a counter from the replicators, and then two blog posts by Schnall (here and here). Over time, the core issue became how replication studies are conducted – who performs the peer review, how much independence the replicators have, how much access the original group has, and how journals can be divorced from choosing which replication studies to publish. But relevant to the NEJM context, the important thing was the level of transparency maintained by Schnall and co. as well as by the replicators, which lent the debate a sheen of honesty and legitimacy.

The Social Psychology issue was able to take the conversation forward, getting authors to talk about the psychology of research reporting. There have been few other such instances exploring the proper mechanisms of replication studies – so if the NEJM editorial had stopped at calling for better-organised collaborations between a study’s original performers and its replicators, it would’ve been great. As Longo and Drazen concluded, “How would data sharing work best? We think it should happen symbiotically … Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested.”

https://twitter.com/significantcont/status/690507462848450560

The mistake lies in thinking anything else would be parasitic. And the attitude affects not just other scientists but some science communicators as well. Any journalist or blogger who has been reporting on a particular beat for a while stands to become a ‘temporary expert’ on the technical contents of that beat. And with exploratory/analytical tools like R – which is easier than you think to pick up – the communicator could dig deeper into the data, teasing out issues more relevant to their readers than what the accompanying paper thinks is the highlight (a sketch of this follows below). Sure, NEJM remains apprehensive about how medical results could be misinterpreted to terrible consequence. But the solution there would be for communicators to be more professional and disciplined, not for the journal to be more opaque.
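To make that concrete, here is a minimal sketch of the kind of digging a reporter could do once the data are public – written in Python with pandas, though R would serve just as well. The file name and column names (trial_data.csv, treatment_arm, outcome, age_group) are hypothetical.

```python
import pandas as pd

# Load the (hypothetical) anonymised patient-level records released
# alongside a clinical-trial paper
df = pd.read_csv("trial_data.csv")

# Summarise the primary outcome by treatment arm...
print(df.groupby("treatment_arm")["outcome"].describe())

# ...and check whether the headline effect holds up within subgroups the
# paper may not have highlighted
print(df.groupby(["treatment_arm", "age_group"])["outcome"].mean().unstack())
```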

The Wire
January 24, 2016