An ‘expanded’ heuristic to evaluate science as a non-scientist

The Hindu publishes a column called ‘Notebook’ every Friday, in which journalists in the organisation open windows big or small into their work, providing glimpses into their process and thinking – things that otherwise remain out of view in news articles, analyses, op-eds, etc. Quite a few of them are very insightful. A recent example was Maitri Porecha’s column about looking for closure in the aftermath of the Balasore train accident.

I’ve written twice for the section thus far, both times about a matter that has stayed with me for a decade, manifesting at different times in different ways. The first edition was about being able to tell whether a given article or claim is real or phony irrespective of whether you have a science background. I had proposed the following eight-point checklist that readers could follow (quoted verbatim):

  1. If the article talks about effects on people, was the study conducted with people or with mice?
  2. How many people participated in a study? Fewer than a hundred is always worthy of scepticism.
  3. Does the article claim that a study has made exact predictions? Few studies actually can.
  4. Does the article include a comment from an independent expert? This is a formidable check against poorly-done studies.
  5. Does the article link to the paper it is discussing? If not, please pull on this thread.
  6. If the article invokes the ‘prestige’ of a university and/or the journal, be doubly sceptical.
  7. Does the article mention the source of funds for a study? A study about wine should not be funded by a vineyard.
  8. Use simple statistical concepts, like conditional probabilities and Benford’s law, and common sense together to identify extraordinary claims, and then check if they are accompanied by extraordinary evidence. [See the sketch just after this list.]
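
Item 8 is the most abstract of the lot, so here is a rough sense of what the Benford’s law half of it involves. This is a minimal, illustrative Python sketch of my own, not a tool from the column: the function names are mine and the ‘reported’ figures are invented. A genuine check would also need a much larger set of numbers, spanning several orders of magnitude, before any deviation means anything.

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Probability that the leading digit is d (1-9) under Benford's law."""
    return math.log10(1 + 1 / d)

def leading_digit(x: float) -> int:
    """First significant digit of a non-zero number."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. '3.1400000000e+02'
    return int(s[0])

def benford_deviation(values) -> float:
    """Total absolute gap between observed and expected leading-digit
    frequencies. A large gap is a prompt to look closer, not proof of
    anything on its own."""
    digits = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(digits.values())
    return sum(abs(digits.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10))

# Hypothetical figures 'reported' by a study, purely for illustration.
reported = [1.2, 130, 17.4, 2900, 1.08, 14, 210, 1.9, 36, 1100]
print(f"deviation from Benford's law: {benford_deviation(reported):.3f}")
```

The caveat built into the checklist applies here too: a statistical red flag is a reason for scepticism and further reading, not a verdict.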

The second was about whether science journalists are scientists – related to the first on the small matter of faith: science journalists are purveyors of information that we expect readers to ‘take up’ on trust, and an article that teaches readers any science needs to set that foundation carefully.

Shortly after publishing the second edition, I came across a ‘Policy Forum’ article published in October 2022 in Science, entitled ‘Science, misinformation, and the role of education’. Among other things, it presents a “‘fast and frugal’ heuristic” – “a three-step algorithm with which competent outsiders [can] evaluate scientific information”. I was glad to see that this heuristic included many points from my eight-point checklist, but it also went a step further and discussed two things that more engaged readers would perhaps find helpful. One of them, however, requires an important disclaimer, in my opinion.

[Figure: the ‘fast and frugal’ heuristic, from the paper. DOI: 10.1126/science.abq80]

The additions are about consensus, expressed through the questions (numbering mine):

  1. “Is there a consensus among the relevant scientific experts?”
  2. “What is the nature of any disagreement/what do the experts agree on?”
  3. “What do the most highly regarded experts think?”
  4. “What range of findings are deemed plausible?”, and
  5. “What are the risks of being wrong?”

No. 3 is interesting because “regard” is of course subjective as well as cultural. For example, well-regarded scientists could be those who have published in glamorous journals like Nature, Science, Cell, etc. But as the recent hoopla over Ranga Dias – who had three papers about near-room-temperature superconductivity retracted in one year, two of them published in Nature – showed us, this is no safeguard against bad science. In fact, even winning a Nobel Prize isn’t a guarantee of good science (see e.g. reports about Gregg Semenza and Luc Montagnier). As the ‘Policy Forum’ article also states:

“Undoubtedly, there is still more that the competent outsider needs to know. Peer-reviewed publication is often regarded as a threshold for scientific trust. Yet while peer review is a valuable step, it is not designed to catch every logical or methodological error, let alone detect deliberate fraud. A single peer-reviewed article, even in a leading journal, is just that—a single finding—and cannot substitute for a deliberative consensus. Even published work is subject to further vetting in the community, which helps expose errors and biases in interpretation. Again, competent outsiders need to know both the strengths and limits of scientific publications. In short, there is more to teach about science than the content of science itself.”

Yet “regard” matters because people at large pay attention to notions like “well-regarded” – which is as much a comment on societal preferences as on what scientists themselves have aspired to over the years. That said, on technical matters, this particular heuristic would fail only a small fraction of the time (based on my experience).

It would fail a lot more often if applied in the middle of a cultural shift – say, one concerning how much effort a good scientist is expected to dedicate to their work. Here, “well-regarded” scientists are typically people who started doing science decades ago, persisted in their respective fields, and finally rose to positions of prominence; they are thus likely to be white and male, and to have seldom had to bother with running a household or raising children. Their answer will reflect these privileges – and be at odds with the direction of the shift, i.e. towards better work-life balance, less time than before devoted to research, and contracts amended to accommodate these demands.

In fact, even if the “well-regarded” heuristic suffices to judge a particular scientific claim, it still carries the risk of hewing to the opinions of people with the aforementioned privileges. These concerns also apply to the three conditions listed under #2 in the heuristic graphic above – “reputation among peers”, “credentials and institutional context”, and “relevant professional experience” – all of which have historically been more difficult for non-cis-het male scientists to acquire. But we must work with what we have.

In this sense, the last question is less subjective and more telling: “What are the risks of being wrong?” If a scientist avoids a view and, in doing so, also avoids an adverse outcome for themselves, it’s possible they avoided the view in order to avoid the outcome – not because the view itself is disagreeable.

The authors of the article, Jonathan Osborne and Daniel Pimentel, both of the Graduate School of Education at Stanford University, have grounded their heuristic in the “social nature of science” and the “social mechanisms and practices that science has for resolving disagreement and attaining consensus”. This is obviously more robust than my checklist, which was grounded in my limited experience – but I think it could also have discussed the intersection of the social facets of science with gender and class. Otherwise the risk is that, while the heuristic helps “competent outsiders” better judge scientific claims, it will do as little as its predecessor to uncover the effects of intersectional biases that persist in the “social mechanisms” of science.

The alternative, of course, is to leave out “well-regarded” altogether – but the trouble there, I suspect, is that we might be lying to ourselves if we pretended a scientist’s regard didn’t, or oughtn’t to, matter. Which is why I didn’t go there…