An ‘expanded’ heuristic to evaluate science as a non-scientist

The Hindu publishes a column called ‘Notebook’ every Friday, in which journalists in the organisation open windows big or small into their work, providing glimpses into their process and thinking – things that otherwise remain out of view in news articles, analyses, op-eds, etc. Quite a few of them are very insightful. A recent example was Maitri Porecha’s column about looking for closure in the aftermath of the Balasore train accident.

I’ve written twice for the section thus far, both times about a matter that has stayed with me for a decade, manifesting at different times in different ways. The first edition was about being able to tell whether a given article or claim is real or phony irrespective of whether you have a science background. I had proposed the following eight-point checklist that readers could follow (quoted verbatim):

  1. If the article talks about effects on people, was the study conducted with people or with mice?
  2. How many people participated in a study? Fewer than a hundred is always worthy of scepticism.
  3. Does the article claim that a study has made exact predictions? Few studies actually can.
  4. Does the article include a comment from an independent expert? This is a formidable check against poorly-done studies.
  5. Does the article link to the paper it is discussing? If not, please pull on this thread.
  6. If the article invokes the ‘prestige’ of a university and/or the journal, be doubly sceptical.
  7. Does the article mention the source of funds for a study? A study about wine should not be funded by a vineyard.
  8. Use simple statistical concepts, like conditional probabilities and Benford’s law, and common sense together to identify extraordinary claims, and then check if they are accompanied by extraordinary evidence.
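
As an aside (not part of the quoted column, and purely illustrative): here is a minimal Python sketch of what point 8 could look like in practice, tallying the leading digits of a set of reported figures and comparing them against Benford's law. The numbers in the example are made up; a large, systematic deviation is a reason to look closer, not proof of fraud.

```python
# Illustrative sketch only: compare the leading digits of some reported
# figures against the distribution Benford's law predicts.
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the first significant digit of a non-zero number."""
    s = f"{abs(x):.10e}"          # e.g. 3140.0 -> '3.1400000000e+03'
    return int(s[0])

def benford_table(values):
    """Return (digit, observed share, expected share) for digits 1-9."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    table = []
    for d in range(1, 10):
        observed = counts.get(d, 0) / n
        expected = math.log10(1 + 1 / d)   # Benford's predicted share
        table.append((d, observed, expected))
    return table

if __name__ == "__main__":
    reported_figures = [212, 1840, 33.5, 970, 402, 118, 76, 2900, 51, 640]  # made-up data
    for d, obs, exp in benford_table(reported_figures):
        print(f"digit {d}: observed {obs:.2f} vs expected {exp:.2f}")
```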

The second was about whether science journalists are scientists – which is related to the first on the small matter of faith: i.e. that science journalists are purveyors of information that we expect readers to ‘take up’ on trust and faith, and that an article that teaches readers any science needs to set this foundation carefully.

After publishing the second edition, I came across a ‘Policy Forum’ article published in October 2022 in Science, entitled ‘Science, misinformation, and the role of education’. Among other things, it presents a “‘fast and frugal’ heuristic” – “a three-step algorithm with which competent outsiders [can] evaluate scientific information”. I was glad to see that this heuristic included many points from my eight-point checklist, but it also went a step further and discussed two things that perhaps more engaged readers would find helpful. One of them, however, requires an important disclaimer, in my opinion.

DOI: 10.1126/science.abq80

The additions are about consensus, expressed through the questions (numbering mine):

  1. “Is there a consensus among the relevant scientific experts?”
  2. “What is the nature of any disagreement/what do the experts agree on?”
  3. “What do the most highly regarded experts think?”
  4. “What range of findings are deemed plausible?”, and
  5. “What are the risks of being wrong?”

No. 3 is interesting because “regard” is of course subjective as well as cultural. For example, well-regarded scientists could be those who have published in glamorous journals like Nature, Science, Cell, etc. But as the recent hoopla about Ranga Dias having three papers about near-room-temperature superconductivity retracted in the space of a year – two of them published in Nature – showed us, this is no safeguard against bad science. In fact, even winning a Nobel Prize isn’t a guarantee of good science (see e.g. reports about Gregg Semenza and Luc Montagnier). As the ‘Policy Forum’ article also states:

“Undoubtedly, there is still more that the competent outsider needs to know. Peer-reviewed publication is often regarded as a threshold for scientific trust. Yet while peer review is a valuable step, it is not designed to catch every logical or methodological error, let alone detect deliberate fraud. A single peer-reviewed article, even in a leading journal, is just that—a single finding—and cannot substitute for a deliberative consensus. Even published work is subject to further vetting in the community, which helps expose errors and biases in interpretation. Again, competent outsiders need to know both the strengths and limits of scientific publications. In short, there is more to teach about science than the content of science itself.”

Yet “regard” matters because people at large pay attention to notions like “well-regarded”, which is as much a comment on societal preferences as on what scientists themselves have aspired to over the years. This said, on technical matters, this particular heuristic would fail only a small part of the time (based on my experience).

It would fail a lot more often if it is applied in the middle of a cultural shift – e.g. one concerning how much effort a good scientist is expected to dedicate to their work. Here, “well-regarded” scientists – typically people who started doing science decades ago, persisted in their respective fields, and have finally risen to positions of prominence, and who are thus likely to be white and male and to have seldom had to bother with running a household and raising children – will have answers that reflect these privileges but sit at odds with the direction of the shift (i.e. towards better work-life balance, less time than before devoted to research, and contracts amended to accommodate these demands).

In fact, even if the “well-regarded” heuristic might suffice to judge a particular scientific claim, it still carries the risk of skewing in favour of the opinions of people with the aforementioned privileges. These concerns also apply to the three conditions listed under #2 in the heuristic graphic above – “reputation among peers”, “credentials and institutional context” and “relevant professional experience” – all of which have historically been more difficult for non-cis-het male scientists to acquire. But we must work with what we have.

In this sense, the last question is less subjective and more telling: “What are the risks of being wrong?” If a scientist avoids a view and, in doing so, also avoids an adverse outcome for themselves, then it’s possible they avoided the view in order to avoid the outcome – and not because they found the view intrinsically disagreeable.

The authors of the article, Jonathan Osborne and Daniel Pimentel, both of the Graduate School of Education at Stanford University, have grounded their heuristic in the “social nature of science” and the “social mechanisms and practices that science has for resolving disagreement and attaining consensus”. This is obviously more robust (than my checklist grounded in my limited experiences), but I think it could also have discussed the intersection of the social facets of science with gender and class. Otherwise, the risk is that, while the heuristic will help “competent outsiders” better judge scientific claims, it will do as little as its predecessor to uncover the effects of intersectional biases that persist in the “social mechanisms” of science.

The alternative, of course, is to leave out “well-regarded” altogether – but the trouble there, I suspect, is we might be lying to ourselves if we pretended a scientist’s regard didn’t or ought not to matter, which is why I didn’t go there…

Yes, scientific journals should publish political rebuttals

(The headline is partly click-bait, as I admit below, because some context is required.) From ‘Should scientific journals publish political debunkings?’, Science Fictions by Stuart Ritchie, August 27, 2022:

Earlier this week, the “news and analysis” section of the journal Science … published … a point-by-point rebuttal of a monologue a few days earlier from the Fox News show Tucker Carlson Tonight, where the eponymous host excoriated Dr. Anthony Fauci, of “seen everywhere during the pandemic” fame. … The Science piece noted that “[a]lmost everything Tucker Carlson said… was misleading or false”. That’s completely correct – so why did I have misgivings about the Science piece? It’s the kind of thing you see all the time on dedicated political fact-checking sites – but I’d never before seen it in a scientific journal. … I feel very conflicted on whether this is a sensible idea. And, instead of actually taking some time to think it through and work out a solid position, in true hand-wringing style I’m going to write down both sides of the argument in the form of a dialogue – with myself.

There’s one particular exchange between Ritchie and himself in his piece that threw me off the entire point of the article:

[Ritchie-in-favour-of-Science-doing-this]: Just a second. This wasn’t published in the peer-reviewed section of Science! This isn’t a refereed paper – it’s in the “News and Analysis” section. Wouldn’t you expect an “Analysis” article to, like, analyse things? Including statements made on Fox News?

[Ritchie-opposed-to-Science-doing-this]: To be honest, sometimes I wonder why scientific journals have a “News and Analysis” section at all – or, I wonder if it’s healthy in the long run. In any case, clearly there’s a big “halo” effect from the peer-reviewed part: people take the News and Analysis more seriously because it’s attached to the very esteemed journal. People are sharing it on social media because it’s “the journal Science debunking Tucker Carlson” – way fewer people would care if it was just published on some random news site. I don’t think you can have it both ways by saying it’s actually nothing to do with Science the peer-reviewed journal.

[Ritchie-in-favour]: I was just saying they were separate, rather than entirely unrelated, but fair enough.

Excuse me, but not at all fair enough! The essential problem is the tie-ins between what a journal does, why it does those things, and what impressions they uphold in society.

First, Science‘s ‘news and analysis’ section isn’t distinguished by its association with the peer-reviewed portion of the journal but by its own reportage and analyses, intended for scientists and non-scientists alike. (Mea culpa: the headline of this post answers the question in the headline of Ritchie’s post, even as the body makes clear that there’s a distinction between the journal and its ‘news and analysis’ section.) A very recent example was Charles Piller’s investigative report that uncovered evidence of image manipulation in a paper that has had an outsized influence on the direction of Alzheimer’s research since it was published in 2006. When Ritchie writes that the peer-reviewed journal and the ‘news and analysis’ section are separate, he’s right – but when he suggests that the former’s prestige is responsible for the latter’s popularity, he couldn’t be more wrong.

Ritchie is a scientist and his position may reflect that of many other scientists. I recommend that he, and others who agree with him, consider the section from the point of view of a science journalist: they will immediately see, as we do, that it has broken many agenda-setting stories and has published several accomplished journalists and scientists (Derek Lowe’s column being a good example). Another impression that could change with this change of perspective concerns the relevance of peer-review itself, and the deceptively deleterious nature of an associated concept Ritchie repeatedly invokes – one that could well be the pseudo-problem at the heart of his dilemma: prestige. To quote from a blog post, published in February this year, in which University of Regensburg neurogeneticist Björn Brembs analysed the novelty of results published by so-called ‘prestigious’ journals:

Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor for any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and to probably only a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, being tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.

In sum:

Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.

Take the ‘prestige’ away and one part of Ritchie’s dilemma – the journal Science‘s claim to being an “impartial authority”, which stands at risk of being diluted by its ‘news and analysis’ section’s engagement with “grubby political debates” – evaporates. Journals, especially glamour journals like Science, haven’t historically been authorities on ‘good’ science, such as it is, but have served to obfuscate the fact that only scientists can be. More broadly, the ‘news and analysis’ business has its own expensive economics, and publishers of scientific journals that can afford to set up such platforms should consider doing so, in my view, with whatever degree and type of separation between the two businesses each publisher sees fit. The simple reasons are:

1. Reject the false balance: there’s no sensible way publishing a pro-democracy article (calling out cynical and potentially life-threatening untruths) could affect the journal’s ‘prestige’, however it may be defined. But if it does, would the journal be wary of a pro-Republican (and effectively anti-democratic) scientist refusing to publish on its pages? If so, why? The two-part answer is straightforward: because many other scientists as well as journal editors are still concerned with the titles that publish papers instead of the papers themselves, and because of the fundamental incentives of academic publishing – to publish the work of prestigious scientists and sensational work, as opposed to good work per se. In this sense, the knock-back is entirely acceptable in the hopes that it could dismantle the fixation on which journal publishes which paper.

2. Scientific journals already have access to expertise in various fields of study, as well as an incentive to participate in the creation of a sensible culture of science appreciation and criticism.

Featured image: Tucker Carlson at an event in West Palm Beach, Florida, December 19, 2020. Credit: Gage Skidmore/Wikimedia Commons, CC BY-SA 2.0.

The paradoxical virtues of primacy in science

The question of “Who found it first?” in science is deceptively straightforward. It is largely due to the rewards reserved by those who administer science – funding the ‘right’ people working in the ‘right’ areas at the ‘right’ time to ensure the field’s progress along paths deemed desirable by the state – that primacy in science has become valuable. Otherwise, and in an ideal world (in which rewards are distributed more equitably, such that the quality of research is rewarded a certain amount that is lower than the inordinate rewards that accrue to some privileged scientists today but greater than that which scholars working on ‘neglected’ topics/ideas receive, without regard for gender, race, ethnicity or caste), discovering something first wouldn’t matter to the enterprise of science, just as it doesn’t mean anything to the object of the discovery itself.

Primacy is a virtue imposed by the structures of modern science. There is today privilege in being cited as “Subramaniam 2021” or “Srinivasan 2022” in papers, so much so that there is reason to believe many scientific papers are published only so they may cite the work of others and keep expanding this “citation circus”. The more citations there are, the likelier the corresponding scientist is to receive a promotion, a grant, etc. at their institute.

Across history, the use of such citations has also served to obscure the work of ‘other’ scientists and to attribute a particular finding to a single individual or a group. This typically manifests in one of two forms: by flattening the evolution of a complex discovery by multiple groups of people working around the world, sometimes sharing information with each other, to a single paper authored by one of these groups; or by reinforcing the association of one or some names with particular ideas in the scientific literature, thus overlooking important contributions by less well-known scientists.

The former is a complex phenomenon that is often motivated by ‘prestigious’ awards, including the Nobel Prizes, limiting themselves to a small group of laureates at a time, as well as by the meagre availability of grants for advanced research. Scientists and, especially, the institutes at which they work engage as a result in vociferous media campaigns when an important discovery is at hand, to ensure that opportunities for profit that may arise out of the finding may rest with them alone. This said, it can also be the product of lazy citations, in which scientists cite their friends or peers they like or wish to impress, or collections of papers over the appropriate individual ones, instead of conducting a more exhaustive literature review to cite everyone involved everywhere.

The second variety of improper citations is of course one that has dogged India – and one with which anyone working with or alongside science in India must be familiar. It has also been most famously illustrated by instances of women scientists who were subsequently overlooked for Nobel Prizes that were awarded to the men who worked with them, often against them. (The Nobel Prizes are false gods and we must tear them down; but for their flaws, they remain good, if also absurdly selective, markers of notable scientific work: that is, no prize has thus far been awarded to work that didn’t deserve it.) The stories of Chien-Shiung Wu, Rosalind Franklin and Jocelyn Bell Burnell come to mind.

But also consider the Indian example of Meghnad Saha’s paper about selective radiation pressure (in the field of stellar astrophysics), which predated Irving Langmuir’s paper on the same topic by three years. Saha lost out on the laurels by not being able to afford having his paper published in a more popular journal and had to settle for one with “no circulation worth mentioning” (source). An equation in this theory is today known as the Saha-Langmuir equation, but even this wouldn’t be so without the conscious effort of some scholars to highlight Saha’s work and unravel the circumstances that forced him into the shadows.

I discovered recently that comparable, yet not similar, circumstances had befallen Bibhas De, when the journal Icarus rejected a paper he had submitted twice. The first time, his paper presented his calculations predicting that the planet Uranus had rings; the second time was five years later, shortly after astronomers had found that Uranus indeed had rings. Stephen Brush and Ariel Segal wrote in their 2015 book, “Although he did succeed in getting his paper published in another journal, he rarely gets any credit for this achievement.”

In both these examples, and many others like them, scientists’ attempts to formalise their successes by having their claims detailed in the literature were mediated by scientific journals – whose editors’ decisions had nothing to do with science (costs in the former case and who-knows-what in the latter).

At the same time, because of these two issues, flattening and reinforcing, attribution for primacy is paradoxically more relevant: if used right, it can help reverse these problems, these imprints of colonialism and imperialism in the scientific literature. ‘Right’ here means, to me at least, that everyone is credited or none at all, as an honest reflection of the fact that good science has never been vouchsafed to the Americans or the Europeans. But then this requires more problems to be solved, such as, say, replacing profit-based scientific publishing (and the consequent valorisation of sensational results) with a ‘global scientific record’ managed by the world’s governments through an international treaty.

Axiomatically, perhaps the biggest problem with primacy today is its entrenchment. I’m certain humanities and social science scholars have debated this thoroughly – the choice for the oppressed and the marginalised between beating their oppressors at their own game or transcending the game itself. Obviously the latter seems more enlightened, but it is also more labour-intensive, labour that can’t be asked freely of them – our scientists and students who are already fighting to find or keep their places in the community of their peers. Then again, beating them at their own game may not be so easy either.

I was prompted to write this post, in fact, after I stumbled on four seemingly innocuous words in a Wikipedia article about stellarators. (I wrote about these nuclear-fusion devices yesterday in the context of a study about solving an overheating problem.) The article notes that when a solenoid – a coiled wire – is bent around to form a loop, the inner perimeter of the loop has a higher density of wire than the outer perimeter. Surely this is obvious, yet the Wikipedia article phrases it thus (emphasis added):

But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings would be closer together on the inside than the outside.
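
(The geometry, in my own notation rather than anything from the Wikipedia article or Fermi: wind N turns around a torus of major radius R and tube radius r; the same N turns are spread over a shorter inner circumference than outer one, so the linear winding density is higher on the inside.)

```latex
% Sketch of the quoted claim, in notation of my own choosing:
\[
  \lambda_{\text{inner}} = \frac{N}{2\pi(R - r)}
  \;>\;
  \lambda_{\text{outer}} = \frac{N}{2\pi(R + r)}
  \qquad \text{for any } 0 < r < R .
\]
```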

Why should a common-sensical claim – one that ought to strike anyone who can visualise or even see a solenoid bent into a loop – be attributed to the celebrated Italian physicist Enrico Fermi? The rest of the paragraph to which this sentence belongs goes on to describe how this winding density affects nuclear fusion reactors; it is an arguably straightforward effect, far removed from the singularity and the sophistication of other claims whose origins continue to be mis- or dis-attributed. Wikipedia articles are also not scientific papers. But taken together, the attribution to Fermi contains the footprints of the fact that he, as part of the Knabenphysik of quantum mechanics, worked on many areas of physics, allowing him to attach his name to a variety of concepts at a time when studies on the same topics were only just catching on in other parts of the world – a body of work enabled, as is usual, by war, conquest and the quest for hegemony.

Maybe fighting over primacy is the tax we must pay today for allowing this to happen.

Bharat Biotech gets 1/10 for tweet

If I had been Bharat Biotech’s teacher and “Where is your data?” had been an examination question, Bharat Biotech would have received 1 out of 10 marks.

The correct answer to “Where is your data?” can take one of two forms: an update on where the data is in the data-processing pipeline, or actually producing the data. The latter in fact would have deserved a bonus point, if only because the question wasn’t precise enough. The question should really have been a demand – “Submit your data” – instead of, in its current form, allowing the answerer to get away with simply stating where the data currently rests. Bharat Biotech gets 1/10 because it does neither; the 1 is for correct spelling.

In fact, the company’s chest-thumping based on publishing nine papers in 12 months is symptomatic of a larger problem with the student. He fails to understand that only data is data, and that the demand for data is a demand for data per se. It ought not to be confused with a demand for authority. Data accords authority in an object-oriented and democratic sense. With data, everyone else can see for themselves – whether by themselves or through the mouths and minds of independent experts they trust – if the student’s claims hold up. And if they do, they confer the object of the data, the COVID-19 vaccine named Covaxin, with attributes like reliability.

(Why ‘he’? The patriarchal conditions in and with which science has operated around the world, but especially in Europe and the US, in the last century or so have diffused into scientific practice itself, in terms of how the people at large have constituted – as well as have been expected to constitute, by the scientific community – scientific authority, expertise’s immunity to criticism and ownership of knowledge production and dissemination apparatuses, typically through “discrimination, socialisation and the gender division of labour”. Irrespective of the means – although both from the company’s and the government’s sides, very few women have fielded and responded to questions about drug/vaccine approvals – we already see these features in the manner in which ‘conventional’ scientific journals have sought to retain their place in the international knowledge production economy, and their tendency to resort to arguments that they serve an important role in it even as they push for anti-transparent practices, from the scientific papers’ contents to details about why they charge so much money.)

However, the student has confused authority of this kind with authority of a kind we more commonly associate with the conventional scientific publishing paradigm: in which journals are gatekeepers of scientific knowledge – both in terms of what topics they ‘accept’ manuscripts on and what they consider to be ‘good’ results; and in which a paper, once published, is placed behind a steeply priced paywall that keeps both knowledge of the paper’s contents and the terms of its ‘acceptance’ by the journal beyond public scrutiny – even when public money funded the research described therein. As such, his insistence that we be okay with his having published nine papers in 12 months is really his insistence that we vest our faith in scientific journals, and by extension their vaunted decision to ‘approve of’ his work. This confusion on his part is also reflected in what he offers as his explanation for the absence of data in the public domain, but which are really his excuses.

Our scientific commitment as a company stands firm with data generation, data transparency and peer-reviewed publications.

Sharing your data in a secluded channel with government bodies is not data transparency. That’s what the student needs for regulatory approval. Transparency applies when the data is available for everyone else to independently access, understand and check.

Phase 3 final analysis data will be available soon. Final analysis requires efficacy and 2 months safety follow-up data on all subjects. This is mandated by CDSCO and USFDA. Final analysis will first be submitted to CDSCO, followed by submissions to peer reviewed journals and media dissemination.

What the CDSCO requires does not matter to those allowing Bharat Biotech’s vaccines into their bloodstreams – in fact, to every Indian on whom the student has inflicted this pseudo-choice. And at this point, invoking what the USFDA requires can only lead to a joke: studies of the vaccines involved in the formal vaccination drive in the US have already been published; even studies of new vaccines, as well as follow-ups of existing formulations, are being placed in the public domain through preprint papers that describe the data from soup to nuts. All we got from the student vis-à-vis Covaxin this year was interim phase 3 trial data in early March, announced through a press release, and devoid even of error bars for its most salient claims.

So even though it was imprecisely worded, the question has done well to elicit a telling answer from the student: that the data does not exist, and that the student believes he is too good for us all.

Thanks to Jahnavi Sen for reading the article before it was published.

Another controversy, another round of blaming preprints

On February 1, Anand Ranganathan, the molecular biologist more popular as a columnist for Swarajya, amplified a new preprint paper from scientists at IIT Delhi that (purportedly) claims the genome of the Wuhan coronavirus (2019-nCoV) appears to contain some sequences also found in the human immunodeficiency virus but not in any other coronaviruses. Ranganathan also chose to magnify the preprint paper’s claim that the sequences’ presence was “non-fortuitous”.

To be fair, the IIT Delhi group did not properly qualify what they meant by the use of this term, but this wouldn’t exculpate Ranganathan and others who followed him: to first amplify with alarmist language a claim that did not deserve such treatment, and then, once he discovered his mistake, to wonder out loud about whether such “non-peer reviewed studies” about “fast-moving, in-public-eye domains” should be published before scientific journals have subjected them to peer-review.

https://twitter.com/ARanganathan72/status/1223444298034630656
https://twitter.com/ARanganathan72/status/1223446546328326144
https://twitter.com/ARanganathan72/status/1223463647143505920

The more conservative scientist is likely to find ample room here to revive the claim that preprint papers only promote shoddy journalism, and that preprints should be abolished from the biomedical literature entirely. This is bullshit.

The ‘print’ in ‘preprint’ refers to the act of a traditional journal printing a paper for publication after peer-review. A paper is designated a ‘preprint’ if it hasn’t yet undergone peer-review, whether or not it has been submitted to a scientific journal for consideration. To quote from an article championing the use of preprints during a medical emergency, by three of the six cofounders of medRxiv, the preprints repository for the biomedical literature:

The advantages of preprints are that scientists can post them rapidly and receive feedback from their peers quickly, sometimes almost instantaneously. They also keep other scientists informed about what their colleagues are doing and build on that work. Preprints are archived in a way that they can be referenced and will always be available online. As the science evolves, newer versions of the paper can be posted, with older historical versions remaining available, including any associated comments made on them.

First, Ranganathan’s ringing of the alarm bells (with language like “oh my god”) the first time he tweeted the link to the preprint paper, without sufficiently evaluating the attendant science, was his own decision – it wasn’t prompted by the paper’s status as a preprint. Second, the bioRxiv preprint repository where the IIT Delhi document showed up has a comments section, and it was brimming with discussion within minutes of the paper being uploaded. More broadly, preprint repositories are equipped to accommodate peer-review. So if anyone had looked at the comments section before tweeting, they wouldn’t have had reason to jump the gun.

Third, and most important: peer-review is not fool-proof. Instead, it is a legacy method employed by scientific journals to filter legitimate from illegitimate research and, more recently, higher quality from lower quality research (using ‘quality’ from the journals’ oft-twisted points of view, not as an objective standard of any kind).

This framing supports three important takeaways from this little scandal.

A. Much like preprint repositories, peer-reviewed journals also regularly publish rubbish. (Axiomatically, just as conventional journals also regularly publish the outcomes of good science, so do preprint repositories; in the case of 2019-nCoV alone, bioRxiv, medRxiv and SSRN together published at least 30 legitimate and noteworthy research articles.) It is just that conventional scientific journals conduct the peer-review before publication and preprint repositories (and research-discussion platforms like PubPeer), after. And, in fact, conducting the review after publication allows it to be a continuous process able to respond to new information, and not a one-time event that culminates with the act of printing the paper.

But notably, preprint repositories can recreate journals’ ability to closely control the review process, and ensure only experts’ comments are in the fray, by enrolling a team of voluntary curators. The arXiv preprint server has been successfully using such a team to carefully weed out manuscripts advancing pseudoscientific claims. As such, it makes more sense to ensure people are familiar with the preprint and post-publication review paradigm than to take advantage of their confusion and call for preprints to be eliminated altogether.

B. Those who support the idea that preprint papers are dangerous, and argue that peer-review is a better way to protect against unsupported claims, are by proxy advocating for the persistence of a knowledge hegemony. Peer-review is opaque, sustained by unpaid and overworked labour, and performs the same function that an open discussion often does at larger scale and with greater transparency. Indeed, the transparency represents the most important difference: since peer-review has traditionally been the demesne of journals, supporting peer-review is tantamount to designating journals as the sole and unquestionable arbiters of what knowledge enters the public domain and what doesn’t.

(Here’s one example of how such gatekeeping can have tragic consequences for society.)

C. Given these safeguards and perspectives, and as I have written before, bad journalists and bad comments will be bad irrespective of the window through which an idea has presented itself in the public domain. There is a way to cover different types of stories, and the decision to abdicate one’s responsibility to think carefully about the implications of what one is writing can never have a causal relationship with the subject matter. The Times of India and the Daily Mail will continue to publicise every new paper discussing whatever coffee, chocolate and/or wine does to the heart, and The Hindu and The Wire Science will publicise research published in preprint papers because we know how to be careful and which risks to protect ourselves against.

By extension, ‘reputable’ scientific journals that use pre-publication peer-review will continue to publish many papers that will someday be retracted.

An ongoing scandal concerning the spider biologist Jonathan Pruitt offers a useful parable: journals don’t always publish bad science due to wilful negligence or poor peer-review alone, but such failures still do well to highlight the shortcomings of the latter. A string of papers whose underlying work Pruitt led was found to contain implausible data in support of some significant conclusions. Dan Bolnick, the editor of The American Naturalist, which became the first journal to retract the Pruitt papers it had published, wrote on his blog on January 30:

I want to emphasise that regardless of the root cause of the data problems (error or intent), these people are victims who have been harmed by trusting data that they themselves did not generate. Having spent days sifting through these data files I can also attest to the fact that the suspect patterns are often non-obvious, so we should not be blaming these victims for failing to see something that requires significant effort to uncover by examining the data in ways that are not standard for any of this. … The associate editor [who Bolnick tasked with checking more of Pruitt’s papers] went as far back as digging into some of Pruitt’s PhD work, when he was a student with Susan Riechert at the University of Tennessee Knoxville. Similar problems were identified in those data… Seeking an explanation, I [emailed and then called] his PhD mentor, Susan Riechert, to discuss the biology of the spiders, his data collection habits, and his integrity. She was shocked, and disturbed, and surprised. That someone who knew him so well for many years could be unaware of this problem (and its extent), highlights for me how reasonable it is that the rest of us could be caught unaware.

Why should we expect peer-review – or any kind of review, for that matter – to be better? The only thing we can do is be honest, transparent and reflexive.

Confused thoughts on embargoes

Seventy! That’s how many observatories around the world turned their antennae to study the neutron-star collision that LIGO first detected. So I don’t know why the LIGO Collaboration, and Nature, bothered to embargo the announcement and, more importantly, the scientific papers of the LIGO-Virgo collaboration as well as those by the people at all these observatories. That’s a lot of people and many of them leaked the neutron-star collision news on blogs and on Twitter. Madness. I even trawled through arXiv to see if I could find preprint copies of the LIGO papers. Nope; it’s all been removed.

Embargoes create hype from which journals profit. Everyone knows this. Instead of dumping the data along with the scientific articles as soon as they’re ready, journals like Nature, Science and others announce that the information will all be available at a particular time on a particular date. And between this announcement and the moment at which the embargo lifts, the journal’s PR team fuels hype surrounding whatever’s being reported. This hype is important because it generates interest. And if the information promises to be good enough, the interest in turn creates ‘high pressure’ zones on the internet – populated by those people who want to know what’s going on.

Search engines and news aggregators like Google and Facebook are sensitive to the formation of these high-pressure zones and, at the time of the embargo’s lifting, watch out for news publications carrying the relevant information. And after the embargo lifts, thanks to the attention already devoted by the aggregators, news websites are transformed into ‘low pressure’ zones into which the aggregators divert all the traffic. It’s like the moment a giant information bubble goes pop! And the journal profits from all of this because, while the bubble was building, the journal’s name is everywhere.

In short: embargoes are a traffic-producing opportunity for news websites because they create ‘pseudo-cycles of news’, and an advertising opportunity for journals.

But what’s in it for someone reporting on the science itself? And what’s in it for the consumers? And, overall, am I being too vicious about the idea?

For science reporters, there’s the Ingelfinger rule promulgated by the New England Journal of Medicine in 1969. It states that the journal will not publish any papers whose results have previously been published elsewhere and/or whose authors have already discussed the results with the media. NEJM defended the rule by claiming it was to keep their output fresh and interesting as well as to prevent scientists from getting carried away by the implications of their own research (NEJM’s peer-review process would prevent that, they said). In the end, the consumers would receive scientific information that has been thoroughly vetted.

While the rule makes sense from the scientists’ point of view, it doesn’t from the reporters’. A good science reporter, having chosen to cover a certain paper, will present the paper to an expert unaffiliated with the authors and working in the same area for her judgment. This is a form of peer-review that is extraneous to the journal publishing the paper. Second: a pro-embargo argument that’s been advanced is that embargoes alert science reporters to papers of importance as well as give them time to write a good story on it.

I’m conflicted about this. Embargoes, and the attendant hype, do help science reporters pick up on a story they might’ve missed out on, to capitalise on the traffic potential of a new announcement that may not be as big as it becomes without the embargo. Case in point: today’s neutron-star collision announcement. At the same time, science reporters constantly pick up on interesting research that is considered old/stale or that wasn’t ever embargoed and write great stories about them. Case in point: almost everything else.

My perspective is coloured by the fact that I manage a very small science newsroom at The Wire. I have a very finite monthly budget (equal to about what someone working eight hours a day and five days a week would make in two months on the US minimum wage) with which I have to ensure that all my writers – who are all freelancers – provide both the big picture of science in that month as well as the important nitty-gritties. Embargoes, for me, are good news because they help me reallocate human and financial resources for a story well in advance and make The Wire‘s presence felt on the big stage when the curtain lifts. Rather, even if I can’t make it on time to the moment the curtain lifts, I’ve still got what I know for sure is a good story on my hands.

A similar point was made by Kent Anderson when he wrote about eLife‘s media policy, which said that the journal would not be enforcing the Ingelfinger rule, over at The Scholarly Kitchen:

By waiving the Ingelfinger rule in its modernised and evolved form – which still places a premium on embargoes but makes pre-publication communications allowable as long as they don’t threaten the news power – eLife is running a huge risk in the attention economy. Namely, there is only so much time and attention to go around, and if you don’t cut through the noise, you won’t get the attention. …

Like it or not, but press embargoes help journals, authors, sponsors, and institutions cut through the noise. Most reporters appreciate them because they level the playing field, provide time to report on complicated and novel science, and create an effective overall communication scenario for important science news. Without embargoes and coordinated media activity, interviews become more difficult to secure, complex stories may go uncovered because they’re too difficult to do well under deadline pressures, and coverage becomes more fragmented.

What would I be thinking if I had a bigger budget and many full-time reporters to work with? I don’t know.

On Embargo Watch in July this year, Ivan Oransky wrote about how an editor wasn’t pleased with embargoes because “staffers had been pulled off other stories to make sure to have this one ready by the original embargo”. That is, embargoes create deadlines that are not in your control; they create deadlines within which everyone, over time, tends to do the bare minimum (“as much as other publications will do”) so they can ride the interest wave and move on to other things – sometimes without ever revisiting the story. In a separate post, Oransky briefly reviewed a book against embargoes by Vincent Kiernan, a noted critic of the idea:

In his book, Embargoed Science, Kiernan argues that embargoes make journalists lazy, always chasing that week’s big studies. They become addicted to the journal hit, afraid to divert their attention to more original and enterprising reporting because their editors will give them grief for not covering that study everyone else seems to have covered.

Alice Bell wrote a fantastic post in 2010 about how to overcome such tendencies: by newsrooms redistributing their attention on science to both upstream and downstream activities. But more than that, I don’t think lethargic news coverage can be explained solely by the addiction to embargoes. A good editor should keep stirring the pot – should keep her journalists moving on good stories, particularly of the kind no one wants to talk about, report on it and play it up. So, while I’m hoping that The Wire‘s coverage of the neutron-star collision discovery is a hit, I’ve also got great pieces coming this week about solar flares, open-access publishing, the health effects of ******** mining and the conservation of sea snakes.

I hope time will provide some clarity.

Featured image credit: Free-Photos/pixabay.

Are the papers behind this year’s Nobel Prizes in the public domain?

Note: One of my editors thought this post would work for The Wire as well, so it’s been republished there.

“… for the greatest benefit of mankind” – these words are scrawled across a banner that adorns the Nobel Prize’s homepage. They are the words of Alfred Nobel, who instituted the prizes and bequeathed his fortunes to run the foundation that awards them. The words were chosen by the prize’s awarders to denote the significance of their awardees’ accomplishments.

However, the scientific papers that first described these accomplishments in the technical literature are often not available in the public domain. They languish behind paywalls erected by the journals that publish them, which seek to cash in on their importance to the advancement of science. Many of these papers also describe publicly funded research, but that hasn’t deterred journals and their publishers from keeping them out of public reach. How then can they be for the greatest benefit of mankind?

§

I’ve listed some of the more important papers published by this year’s laureates; they describe work that earned them their respective prizes. Please remember that my choice of papers is selective; where I have found other papers that are fully accessible – or otherwise – I have provided a note. This said, I picked the papers from the scientific background document first and then checked if they were accessible, not the other way round. (If you, whoever you are, are interested in replicating my analysis but more thoroughly, be my guest; I will help you in any way I can.)
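
For anyone who does want to replicate the check programmatically, here is a rough sketch (mine, not something I used for this post) that asks the free Unpaywall API whether a legal open-access copy of a DOI exists; the email address is a placeholder, and the two DOIs are from the circadian-rhythm papers listed below.

```python
# Rough sketch: query the Unpaywall API (api.unpaywall.org) for the
# open-access status of a few DOIs. Replace the email with your own.
import requests

DOIS = [
    "10.1038/312752a0",   # Young et al., Nature 1984 (listed below)
    "10.1038/343536a0",   # Rosbash et al., Nature 1990 (listed below)
]

def is_open_access(doi: str, email: str = "you@example.org") -> bool:
    """Return True if Unpaywall reports any legal open-access copy."""
    url = f"https://api.unpaywall.org/v2/{doi}"
    resp = requests.get(url, params={"email": email}, timeout=10)
    resp.raise_for_status()
    return bool(resp.json().get("is_oa", False))

if __name__ == "__main__":
    for doi in DOIS:
        print(doi, "open access" if is_open_access(doi) else "paywalled")
```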

A laureate may have been awarded for work described collectively across many papers (this year’s science laureates are all male). I’ve picked the papers most proximate to their citation from the references listed in the ‘advanced scientific background’ section available for each prize on the Nobel Prize website. Among publishers, the worst offender appears – to no one’s surprise – to be Elsevier.

A paper title in green indicates it’s in the public domain; red indicates it isn’t – both on the pages of the journal itself. Some titles in red may be available in full elsewhere, such as in university archives. The names of laureates in the papers’ citations are underlined.

Physiology/medicine

“for their discoveries of molecular mechanisms controlling the circadian rhythm”

The paywalls on the papers by Young and Rosbash published in Nature were lifted by the journal on the day their joint Nobel Prize was announced. Until then, they’d been inaccessible to the general public. Interestingly, both papers acknowledge funding grants from the US National Institutes of Health, a tax-funded body of the US government.

Michael Young

Restoration of circadian behavioural rhythms by gene transfer in Drosophila – Nature 312, 752 – 754 (20 December 1984); doi:10.1038/312752a0 link

Isolation of timeless by PER protein interaction: defective interaction between timeless protein and long-period mutant PERL – Gekakis, N., Saez, L., Delahaye-Brown, A.M., Myers, M.P., Sehgal, A., Young, M.W., and Weitz, C.J. (1995). Science 270, 811–815. link

Michael Rosbash

Feedback of the Drosophila period gene product on circadian cycling of its messenger RNA levels – Nature 343, 536 – 540 (08 February 1990); doi:10.1038/343536a0 link

The period gene encodes a predominantly nuclear protein in adult Drosophila – Liu, X., Zwiebel, L.J., Hinton, D., Benzer, S., Hall, J.C., and Rosbash, M. (1992). J Neurosci 12, 2735–2744. link

Jeffrey Hall

Molecular analysis of the period locus in Drosophila melanogaster and identification of a transcript involved in biological rhythms – Reddy, P., Zehring, W.A., Wheeler, D.A., Pirrotta, V., Hadfield, C., Hall, J.C., and Rosbash, M. (1984). Cell 38, 701–710. link

P-element transformation with period locus DNA restores rhythmicity to mutant, arrhythmic Drosophila melanogaster – Zehring, W.A., Wheeler, D.A., Reddy, P., Konopka, R.J., Kyriacou, C.P., Rosbash, M., and Hall, J.C. (1984). Cell 39, 369–376. link

Antibodies to the period gene product of Drosophila reveal diverse tissue distribution and rhythmic changes in the visual system – Siwicki, K.K., Eastman, C., Petersen, G., Rosbash, M., and Hall, J.C. (1988). Neuron 1, 141–150. link

Physics

“for decisive contributions to the LIGO detector and the observation of gravitational waves”

While results from the LIGO detector were published in peer-reviewed journals, the development of the detector itself was supported by personnel and grants from MIT and Caltech. As a result, the Nobel laureates’ more important contributions were published as reports, since archived by the LIGO collaboration and made available in the public domain.

Rainer Weiss

Quarterly progress report – R. Weiss, MIT Research Lab of Electronics 105, 54 (1972) link

The Blue Book – R. Weiss, P.R. Saulson, P. Linsay and S. Whitcomb link

Chemistry

“for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”

The journal Cell, in which the chemistry laureates appear to have published many papers, publicised a collection after the Nobel Prize was announced. Most papers in the collection are marked ‘Open Archive’ and are readable in full. However, the papers cited by the Nobel Committee in its scientific background document don’t appear there. I also don’t know whether the papers in the collection available in full were always available in full.

Jacques Dubochet

Cryo-electron microscopy of vitrified specimens – Dubochet, J., Adrian, M., Chang, J.-J., Homo, J.-C., Lepault, J., McDowall, A. W., and Schultz, P. (1988). Q. Rev. Biophys. 21, 129-228 link

Vitrification of pure water for electron microscopy – Dubochet, J., and McDowall, A. W. (1981). J. Microsc. 124, 3-4 link

Cryo-electron microscopy of viruses – Adrian, M., Dubochet, J., Lepault, J., and McDowall, A. W. (1984). Nature 308, 32-36 link

Joachim Frank

Averaging of low exposure electron micrographs of non-periodic objects – Frank, J. (1975). Ultramicroscopy 1, 159-162 link

Three-dimensional reconstruction from a single-exposure, random conical tilt series applied to the 50S ribosomal subunit of Escherichia coli – Radermacher, M., Wagenknecht, T., Verschoor, A., and Frank, J. (1987). J. Microsc. 146, 113-136 link

SPIDER-A modular software system for electron image processing – Frank, J., Shimkin, B., and Dowse, H. (1981). Ultramicroscopy 6, 343-357 link

Richard Henderson

Model for the structure of bacteriorhodopsin based on high-resolution electron cryo-microscopy – Henderson, R., Baldwin, J. M., Ceska, T. A., Zemlin, F., Beckmann, E., and Downing, K. H. (1990). J. Mol. Biol. 213, 899-929 link

The potential and limitations of neutrons, electrons and X-rays for atomic resolution microscopy of unstained biological molecules – Henderson, R. (1995). Q. Rev. Biophys. 28, 171-193 link (available in full here)

§

By locking the red-tagged papers behind a paywall – often impossible to breach because of the fees involved – publishers keep them out of the hands of less-well-funded institutions and libraries, and particularly of researchers in countries whose currencies have lower purchasing power. More about this here and here. But the more detestable thing about the papers listed above is that the latest of them (among the reds) was published in 1995, fully 22 years ago, and the earliest 42 years ago – both on cryo-electron microscopy. These are almost unforgivable durations across which to maintain paywalls, with the journals Nature and Cell further attempting to ride the Nobel wave for attention. Nor is it clear whether the papers they’ve liberated from behind the paywall will remain free to read hereafter.

Read all this in the context of the Nobel Prizes not being awarded to more than three people at a time and maybe you’ll see how much of scientific knowledge is truly out of bounds of most of humankind.

Featured image credit: Pexels/pixabay.
