An ‘expanded’ heuristic to evaluate science as a non-scientist

The Hindu publishes a column called ‘Notebook’ every Friday, in which journalists in the organisation open windows big or small into their work, providing glimpses into their process and thinking – things that otherwise remain out of view in news articles, analyses, op-eds, etc. Quite a few of them are very insightful. A recent example was Maitri Porecha’s column about looking for closure in the aftermath of the Balasore train accident.

I’ve written twice for the section thus far, both times about a matter that has stayed with me for a decade, manifesting at different times in different ways. The first edition was about being able to tell whether a given article or claim is real or phony irrespective of whether you have a science background. I had proposed the following eight-point checklist that readers could follow (quoted verbatim):

  1. If the article talks about effects on people, was the study conducted with people or with mice?
  2. How many people participated in a study? Fewer than a hundred is always worthy of scepticism.
  3. Does the article claim that a study has made exact predictions? Few studies actually can.
  4. Does the article include a comment from an independent expert? This is a formidable check against poorly-done studies.
  5. Does the article link to the paper it is discussing? If not, please pull on this thread.
  6. If the article invokes the ‘prestige’ of a university and/or the journal, be doubly sceptical.
  7. Does the article mention the source of funds for a study? A study about wine should not be funded by a vineyard.
  8. Use simple statistical concepts, like conditional probabilities and Benford’s law, and common sense together to identify extraordinary claims, and then check if they are accompanied by extraordinary evidence.
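
Point 8, in particular, lends itself to a quick programmatic check. Below is a minimal sketch in Python – my own illustration, not something from the column, and with hypothetical figures – of how a reader might compare the leading digits of a study's reported numbers against the distribution Benford's law predicts:

```python
# A minimal sketch of checklist point 8 (illustrative, not from the column):
# compare the leading digits of a set of reported figures against Benford's
# law. A wild deviation isn't proof of fraud, but it's a cue to look closer.
import math
from collections import Counter

def leading_digit(x: float) -> int:
    """Return the first non-zero digit of a number."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def benford_check(values):
    counts = Counter(leading_digit(v) for v in values if v != 0)
    total = sum(counts.values())
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)  # Benford's expected share for digit d
        observed = counts.get(d, 0) / total
        print(f"digit {d}: expected {expected:.3f}, observed {observed:.3f}")

# Hypothetical figures lifted from a study's tables:
benford_check([1023, 187, 2934, 1311, 140, 1720, 98, 1205, 164, 2890])
```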

The second was about whether science journalists are scientists – which is related to the first on the small matter of faith: i.e. that science journalists are purveyors of information that we expect readers to ‘take up’ on trust and faith, and that an article that teaches readers any science needs to set this foundation carefully.

After having published the second edition, I came across a ‘Policy Forum’ article published in October 2022 in Science, entitled ‘Science, misinformation, and the role of education’. Among other things, it presents a “‘fast and frugal’ heuristic” – “a three-step algorithm with which competent outsiders [can] evaluate scientific information”. I was glad to see that this heuristic included many points from my eight-point checklist, but it also went a step further and discussed two things that perhaps more engaged readers would find helpful. One of them, however, requires an important disclaimer, in my opinion.

DOI: 10.1126/science.abq80

The additions are about consensus, expressed through the questions (numbering mine):

  1. “Is there a consensus among the relevant scientific experts?”
  2. “What is the nature of any disagreement/what do the experts agree on?”
  3. “What do the most highly regarded experts think?”
  4. “What range of findings are deemed plausible?”, and
  5. “What are the risks of being wrong?”

No. 3 is interesting because “regard” is of course subjective as well as cultural. For example, well-regarded scientists could be those who have published in glamorous journals like Nature, Science, Cell, etc. But as the recent hoopla over Ranga Dias – who had three papers about near-room-temperature superconductivity retracted in one year, two of them published in Nature – showed us, this is no safeguard against bad science. In fact, even winning a Nobel Prize isn’t a guarantee of good science (see e.g. reports about Gregg Semenza and Luc Montagnier). As the ‘Policy Forum’ article also states:

“Undoubtedly, there is still more that the competent outsider needs to know. Peer-reviewed publication is often regarded as a threshold for scientific trust. Yet while peer review is a valuable step, it is not designed to catch every logical or methodological error, let alone detect deliberate fraud. A single peer-reviewed article, even in a leading journal, is just that—a single finding—and cannot substitute for a deliberative consensus. Even published work is subject to further vetting in the community, which helps expose errors and biases in interpretation. Again, competent outsiders need to know both the strengths and limits of scientific publications. In short, there is more to teach about science than the content of science itself.”

Yet “regard” matters because people at large pay attention to notions like “well-regarded”, which is as much a comment on societal preferences as on what scientists themselves have aspired to over the years. This said, on technical matters, this particular heuristic would fail only a small part of the time (based on my experience).

It would fail a lot more if applied in the middle of a cultural shift – e.g. one concerning how much effort a good scientist is expected to dedicate to their work. Here, “well-regarded” scientists – typically people who started doing science decades ago, persisted in their respective fields, and finally rose to positions of prominence, and who are thus likely to be white and male and to have seldom had to bother with running a household or raising children – will have an answer that reflects these privileges but is at odds with the direction of the shift (i.e. towards better work-life balance, less time than before devoted to research, and contracts amended to accommodate these demands).

In fact, even if the “well-regarded” heuristic might suffice to judge a particular scientific claim, it still carries the risk of skewing in favour of the opinions of people with the aforementioned privileges. These concerns also apply to the three conditions listed under #2 in the heuristic graphic above – “reputation among peers”, “credentials and institutional context”, “relevant professional experience” – all of which have historically been more difficult for non-cis-het-male scientists to acquire. But we must work with what we have.

In this sense, the last question is less subjective and more telling: “What are the risks of being wrong?” If a scientist avoids a view and in the process also avoids an adverse outcome for themselves, it’s possible they avoided the view in order to avoid the outcome – and not because the view itself is disagreeable.

The authors of the article, Jonathan Osborne and Daniel Pimentel, both of the Graduate School of Education at Stanford University, have grounded their heuristic in the “social nature of science” and the “social mechanisms and practices that science has for resolving disagreement and attaining consensus”. This is obviously more robust (than my checklist grounded in my limited experiences), but I think it could also have discussed the intersection of the social facets of science with gender and class. Otherwise, the risk is that, while the heuristic will help “competent outsiders” better judge scientific claims, it will do as little as its predecessor to uncover the effects of intersectional biases that persist in the “social mechanisms” of science.

The alternative, of course, is to leave out “well-regarded” altogether – but the trouble there, I suspect, is we might be lying to ourselves if we pretended a scientist’s regard didn’t or ought not to matter, which is why I didn’t go there…

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, do science writers understand them? They don’t. Instead, they implicitly trust the scientists they’re talking to to make sense of them. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I place my faith in some people over others based on factors like personal character, past record, transparency, reflexivity, etc., so that I take what they produce with only the smallest pinch of salt and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to grasp superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics into a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges when we think we know something while in fact being in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
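
As an aside, those two thresholds correspond to the three-sigma (‘evidence’) and five-sigma (‘discovery’) levels of a normal distribution; the 0.27% and 0.00006% figures match its two-tailed tail probabilities. A quick Python sketch – mine, not any collaboration’s actual code – reproduces them:

```python
# Where the 0.27% and 0.00006% thresholds come from: they are the two-tailed
# tail probabilities of a normal distribution at 3 sigma ('evidence') and
# 5 sigma ('discovery').
from scipy.stats import norm

for sigma, label in [(3, "evidence"), (5, "discovery")]:
    p = 2 * norm.sf(sigma)  # chance of a fluctuation at least this large
    print(f"{sigma} sigma ({label}): {100 * p:.5f}%")

# Prints roughly:
# 3 sigma (evidence): 0.26998%
# 5 sigma (discovery): 0.00006%
```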

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with a pinch of salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough – it could even harm public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, if only because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers – and in fact writers, podcasters, etc. – are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory too. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English and mathematics (the language of modern science), beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

The constructionist hypothesis and expertise during the pandemic

Now that COVID-19 cases are rising again in the country, the trash talk against journalists has been rising in tandem. The Indian government was unprepared and hapless last year, and it is this year as well, if only in different ways. In this environment, journalists have come under criticism along two equally unreasonable lines. First, many people, typically supporters of the establishment, either don’t or can’t see the difference between good journalism and contrarianism, and don’t or can’t acknowledge the need for expertise in the practice of journalism.

Second, the recognition of expertise itself has been sorely lacking across the board. Just like last year, when lots of scientists dropped what they were doing and started churning out disease-transmission models, each more ridiculous than the last, this time – in response to a more complex ‘playing field’ involving new and more variants, intricate immunity-related mechanisms and labyrinthine clinical trial protocols – too many people have been shooting their mouths off, and getting most of it wrong. All of these misfires have reminded us, again and again, of two things: that expertise matters, and that unless you’re an expert on something, you’re unlikely to know how deep it runs. The latter isn’t trivial.

There’s what you know you don’t know, and what you don’t know you don’t know. The former is the birthplace of learning. It’s the perfect place from which to ask questions and fill gaps in your knowledge. The latter is the verge of presumptuousness — a very good place from which to make a fool of yourself. Of course, this depends on your attitude: you can always be mindful of the Great Unknown, such as it is, and keep quiet.

As these tropes have played out in the last few months, I have been reminded of an article written by the physicist Philip Warren Anderson, called ‘More is Different’, and published in 1972. His idea here is simple: that the statement “if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws” is false. He goes on to explain:

“The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. … The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behaviour of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research which I think is as fundamental in its nature as any other.”

The seemingly endless intricacies that beset the interaction of a virus, a human body and a vaccine are proof enough that the “twin difficulties of scale and complexity” are present in epidemiology, immunology and biochemistry as well – and testament to the foolishness of any claims that the laws of conservation, thermodynamics or motion can help us say, for example, whether a particular variant infects people ‘better’ because it escapes the immune system better or because the immune system’s protection is fading.

But closer to my point: not even all epidemiologists, immunologists and/or biochemists can meaningfully comment on every form or type of these interactions at all times. I’m not 100% certain, but at least from what I’ve learnt reporting topics in physics (and conceding happily that covering biology seems more complex), scale and complexity work not just across but within fields as well. A cardiologist may be able to comment meaningfully on COVID-19’s effects on the heart in some patients, or a neurologist on the brain, but they may not know how the infection got there even if all these organs are part of the same body. A structural biologist may have deciphered why different mutations change the virus’s spike protein the way they do, but she can’t be expected to comment meaningfully on how epidemiological models will have to be modified for each variant.

To people who don’t know better, a doctor is a doctor and a scientist is a scientist, but as journalists plumb the deeper, more involved depths of a new yet specific disease, we bear from time to time a secret responsibility to be constructive and not reductive, and this is difficult. It becomes crucial for us to draw on the wisdom of the right experts, who wield the right expertise, so that we’re moving as much and as often as possible away from the position of what we don’t know we don’t know even as we ensure we’re not caught in the traps of what experts don’t know they don’t know. The march away from complete uncertainty and towards the names of uncertainty is precarious.

Equally importantly, at this time, to make our own jobs that much easier, or at least less acerbic, it’s important for everyone else to know this as well – that more is vastly different.

The journalist as expert

I recently turned down some requests for interviews because the topics of discussion in each case indicated that I would be treated as a scientist, not a science journalist (something that happened shortly after the Balakot airstrikes and the ASAT test as well). I suspect science and more so health journalists are being seen as important sources of information at this crucial time for four reasons (in increasing order of importance, at least as I see it):

1. We often have the latest information – This is largely self-explanatory, except that because we discover a lot of information first-hand – often from researchers to whom the context in which the information is valid may be obvious but who may not communicate it – we also have a great responsibility to properly contextualise what we know before disseminating it. Many of us do, many of us don’t, but either way both groups come across as informed to their respective audiences.

2. We’re “temporary experts”.

3. We’re open to conversations when others aren’t – I can think of a dozen experts who could replace me in the interviews I described and do a better job of communicating the science and more importantly the uncertainty. However, a dozen isn’t a lot, and journalists and any other organisations committed to spreading awareness are going to be hard-pressed to find new voices. At this time, science/health journalists could be seen as stand-in experts: we’re up-to-date, we’re (largely) well-versed with the most common issues, and unlike so many experts we’re often willing to talk.

4. It would seem journalists are the only members of society who are synthesising different schools of thought, types of knowledge and stories of ground realities into an emergent whole. This is a crucial role and, to be honest, I was quite surprised no one else was doing this – until I realised the problem. Our scholastic and academic systems may have disincentivised such holism, choosing instead to pursue ever more specialised and siloed paths. And even then, the government should be bringing together different pieces of the big picture and putting them together to design multifaceted policies and interventions – but it isn’t doing so. So journalists could be seen as the only people who are.

Now, given these reasons, is treating journalists as experts so bad?

It’s really not, actually. Journalism deserves more than to be perceived as an adjacent enterprise – something that attaches itself on to a mature substrate of knowledge instead of being part of the substrate itself. There are some journalists who have insightfully combined, say, what they know about scientific publishing with what they know about research funding to glimpse a bigger picture still out of reach of many scientists. There is certainly a body of knowledge that cannot be derived from the first principles of each of its components alone, and which journalists are uniquely privileged to discover. I also know of a few journalists who are better committed to evidence and civic duty than many scientists, in turn producing knowledge of greater value. Finally, insofar as knowledge is also produced through the deliberate opposition of diverse perspectives, journalists contribute every time they report on a preprint paper, bringing together multiple independent experts – sometimes from different fields – to comment on the paper’s merits and demerits.

But there are some issues on the flip side. For example, not all knowledge is emergent in this way and, more importantly, journalists make for poor experts on average when what we don’t know is as important as what we know. And when lives are at stake, anyone invited to participate in an interview, panel discussion or whatever should consider – even if the interviewer hasn’t – whether what they say could cause harm, and whether they can withstand the social pressure against appearing ignorant and say “I don’t know” when warranted. And even then, there can be very different implications depending on whether it’s a journalist or an expert saying “I don’t know”.

Even more importantly, journalists need to be recognised in their own right, rather than being hauled into the limelight as quasi-experts instead of as people who practise a craft of their own. This may seem like a minor issue of perception, but it’s important to maintain the distinction between the fourth estate and other enterprises lest journalism’s own responsibilities become subsumed by those of the people and organisations journalists write about or – worse yet – lest they be offset by demands that society has been unable to meet in other ways. If a virologist can’t be found for an interview, a journalist is a barely suitable replacement, except if the conversation is going to be sharply focused on specific issues the journalist is very familiar with – and even then it’s not the perfect solution.

If a virologist or a holist (as in the specific way mentioned above) can’t be found, the ideal way forward would be to look harder for another virologist or holist, and in doing so come up against the unique challenges to accessing expertise in India. In this regard, if journalists volunteer themselves as substitutes, they risk making excuses for a problem they actually needed to be highlighting.

In defence of ignorance

Wish I may, wish I might
Have this wish, I wish tonight
I want that star, I want it now
I want it all and I don’t care how

Metallica, King Nothing

I’m a news editor who frequently uses Twitter to find new stories to work on or follow up. Since the lockdown began, however, I’ve been harbouring a fair amount of FOMO born, ironically, from the fact that the small pool of in-house reporters and the larger pool of freelancers I have access to are all confined to their homes, and there’s much less opportunity than usual to step out, track down leads and assimilate ground reports. And Twitter – the steady stream of new information from different sources – has simply accentuated this feeling, instead of ameliorating it by indicating that other publications are covering what I’m not. No, Twitter makes me feel like I want it all.

I’m sure this sensation is the non-straightforward product of human psychology and of how social media companies have developed algorithms to take advantage of it, but I’m fairly certain (despite the absence of a personal memory to corroborate this opinion) that individual minds of the pre-social-media era weren’t marked by FOMO – and I’m even more certain that they were at least marked less by it. I also believe one of the foremost offshoots of the prevalence of such FOMO is the idea that one can be expected to have an opinion on everything.

FOMO – the ‘fear of missing out’ – is essentially defined by a desire to participate in activities that, sometimes, we really needn’t participate in, but we think we need to simply by dint of knowing about those activities. Almost as if the brains of humans had become habituated to making decisions about social participation based solely on whether or not we knew of them, which if you ask me wouldn’t be such a bad hypothesis to apply to the pre-information era, when you found out about a party only if you were the intended recipient of the message that ‘there is a party’.

However, most of us today are not the intended recipients of lots of information. This seems especially great for news, but it also continuously undermines our ability to stay in control of what we know or, more importantly, don’t know. And when you know, you need to participate. As a result, I sometimes devolve into a semi-nervous wreck reading about the many great things other people are doing and sharing on Twitter, and almost involuntarily develop a desire to do the same things. Now and then I even sense the seedling of regret when I look at a story another news outlet has published – one I knew about earlier but simply couldn’t pursue, aided ably by the negative reinforcement of the demands on me as a news editor.

Recently, as an antidote to this tendency – and drawing upon my very successful, and quite popular, resistance to speaking Hindi simply because a misguided interlocutor presumes I know the language – I decided I would actively ignore something I’m expected to have an opinion on but otherwise have no reason to. Such a public attitude exists, though it’s often unspoken, because FOMO has successfully replaced curiosity, or even civic duty, as the prime impetus to seek new information on the web. (Obviously, this has complicated implications, such as we see in the dichotomy of empowering more people to speak truth to power versus further tightening the definitions of ‘expert’ and ‘expertise’; I’m choosing to focus on the downsides here.)

As a result, the world seems to be filled with gas-bags, some so bloated I wonder why they don’t just float up and fuck off. And I’ve learnt that the hardest part of the antidote is to utter the words that FOMO has rendered most difficult to say: “I don’t know”.

A few days ago, I was chatting with The Soufflé when he invited me to participate in a discussion about The German Ideology that he was preparing for. You need to know that The Soufflé is a versatile being, a physicist as well as a pluripotent scholar, but more importantly The Soufflé knows what most pluripotent scholars don’t: that no matter how naturally gifted one is to learn this or that, knowing something needs not just work but also proof of work. I refused The Soufflé’s invitation, of course; my words were almost reflexive, eager to set some distance between myself and the temptation to dabble in something just because it was there to dabble in. The Soufflé replied,

I think it was in a story by Borges, one of the characters says “Every man should be capable of all ideas, and I believe that in the future he will be.” 🙂

To which I said,

That was when the world was simpler. Now there’s a perverse expectation that everyone should have opinions on everything. I don’t like it, and sometimes I actively stay away from some things just to be able to say I don’t want to have an opinion on it. Historical materialism may or may not be one of those things, just saying.

Please bear with me, this is leading up to something I’d like to include here. The Soufflé then said,

I’m just in it for the sick burns. 😛 But OK, I get it. Why do you think that expectation exists, though? I mean, I see it too. Just curious.

Here I set out my FOMO hypothesis. Then he said,

I guess this is really a topic for a cultural critic, I’m just thinking out loud… but perhaps it is because ignorance no longer finds its antipode in understanding, but awareness? To be aware is to be engaged, to be ‘caught up’ is to be active. This kind of activity is low-investment, and its performance aided by social media?

If you walked up to people today and asked “What do you think about factory-farmed poultry?” I’m pretty sure they’d find it hard to not mention that it’s cruel and wrong, even if they know squat about it. So they’re aware, they have possibly a progressive view on the issue as well, but there’s no substance underneath it.

Bingo.

We’ve become surrounded by socio-cultural forces that require us to know, know, know, often sans purpose or context. But ignorance today is not such a terrible thing. There are so many people who set out to know, know, know so many of the wrong ideas and lessons that conspiracy theories that once languished on the fringes of society have moved to the centre, and for hundreds of millions of people around the world stupid ideas have become part of political ideology.

Then there are others who know but don’t understand – a vital difference, of the sort The Soufflé pointed out, that noted scientist-philosophers have sensibly caricatured as the difference between the thing and the name of the thing. Knowing what the four laws of thermodynamics or the 100+ cognitive biases are called doesn’t mean you understand them – but it’s an extrapolation that social-media messaging’s mandated brevity often pushes us to make. Heck, I know of quite a few people who are entirely blind to this act of extrapolation, conflating the label with the thing itself and confidently penning articles for public consumption that betray a deep ignorance of the subject matter (perhaps as a consequence of the Dunning-Kruger effect) – strong signals that they don’t know it in their bones but are simply bouncing off of it like light off the innards of a fractured crystal.

I even suspect the importance and value of good reporting is lost on too many people because they don’t understand what it takes to really know something (pardon the polemic). These are the corners into which the push to know more, all the time – often coupled with capitalist drives to produce and consume – has backed us. And to break free, we really need to embrace that old virtue that has been painted a vice: ignorance. Not the ignorance of conflation nor the ignorance of the lazy, but the cultivated ignorance of those who recognise where knowledge ends and faff begins. Ignorance that’s the anti-thing of faff.

Science journalism, expertise and common sense

On March 27, the Johns Hopkins University said an article published on the website of the Centre For Disease Dynamics, Economics and Policy (CDDEP), a Washington-based think tank, had used its logo without permission and distanced itself from the study, which had concluded that the number of people in India who could test positive for the new coronavirus could swell into the millions by May 2020. Soon after, a basement of trolls latched onto CDDEP founder-director Ramanan Laxminarayan’s credentials as an economist to dismiss his work as a public-health researcher, including denying the study’s conclusions without discussing its scientific merits and demerits.

A lot of issues are wound up in this little controversy. One of them is our seemingly naïve relationship with expertise.

Expertise is supposed to be a straightforward thing: you either have it or you don’t. But just as specialised knowledge is complicated, so too is expertise.

Many of us have heard stories of someone who’s great at something “even though he didn’t go to college” and another someone who’s a bit of a tubelight “despite having been to Oxbridge”. Irrespective of whether they’re exceptions or the rule, there’s a lot of expertise in the world that a deference to degrees would miss.

More importantly, by conflating academic qualifications with expertise, we risk flattening a three-dimensional picture to one. For example, there are more scientists who can speak confidently about statistical regression and the features of exponential growth than there are who can comment on the false vacua of string theory or discuss why protein folding is such a hard problem to solve. These hierarchies arise because of differences in complexity. We don’t have to insist only a virologist or an epidemiologist is allowed to answer questions about whether a clinical trial was done right.

But when we insist someone is not good enough because they have a degree in a different subject, we could be reinforcing the implicit assumption that we don’t want to look beyond credentials, and are content with being told the answers. Granted, this argument is better directed at individuals privileged enough to learn something new every day, but maintaining this chasm – between who in the public consciousness is allowed to provide answers and who isn’t – also continues to keep power in fewer hands.

Of course, many questions that have arisen during the coronavirus pandemic have often stood between life and death, and it is important to stay safe. However, there is a penalty to thinking that the closer we drift towards expertise, the safer we become – because then we may be drifting away from common sense and accruing a different kind of burden, especially when we insist only specialised experts can comment on a far less specialist topic. Such convictions have already created a class of people that believes ad hominem is a legitimate argumentative ploy, and won’t back down from an increasingly acrimonious quarrel until they find the cherry-picked data they have been looking for.

Most people occupy a less radical but still problematic position: even when neither life nor fortune is at stake, they claim to be waiting for expertise before changing their behaviour and/or beliefs. Most of them are really waiting for something that arrived long ago, and are only trying to find new ways to persist with the status quo. The all-or-nothing attitude of the rest – assuming they exist – is, simply put, epistemologically inefficient.

Our deference to the views of experts should be a function of how complex the topic really is and, therefore, the extent to which it can be interrogated. So when the topic at hand is whether a clinical trial was done right or whether the Indian Council of Medical Research is testing enough, the net we cast for independent scientists to speak to can include those who aren’t medical researchers but whose academic or vocational trajectories have familiarised them with some parts of these issues, and who are transparent about their reasoning, methods and opinions. (The CDDEP study is yet to reveal its methods, so I don’t want to comment specifically on it.)

If we can’t be sure whether the scientist we’re speaking to is making sense, obviously it would be better to go with someone whose words we can just trust. And if we’re not comfortable having such a negotiated relationship with an expert – sadly, it’s always going to be this way. The only way to make matters simpler is to deliberately shut ourselves off: to take what we’re hearing and, instead of questioning it further, run with it.

This said, we all shut ourselves off at one time or another. It’s only important that we do it knowing we do it, instead of harbouring pretensions of superiority. At no point does it become reasonable to dismiss anyone based on their academic qualifications alone the way, say, Times of India and OpIndia have done (see below).

What’s more, Dr Giridhar Gyani is neither a medical practitioner nor epidemiologist. He is academically an electrical engineer, who later did a PhD in quality management. He is currently director general at Association of Healthcare Providers (India).

Times of India, March 28

Ramanan Laxminarayanan, who was pitched up as an expert on diseases and epidemics by the media outlets of the country, however, in reality, is not an epidemiologist. Dr Ramanan Laxminarayanan is not even a doctor but has a PhD in economics.

OpIndia, March 22

Expertise has been humankind’s way to quickly make sense of a world that has only been becoming more confusing. But historically, expertise has also been a reason of state, used to suppress dissenting voices and concentrate political, industrial and military power in the hands of a few. The former is in many ways a useful feature of society for its liberating potential while the latter is undesirable because it enslaves. People frequently straddle both tendencies together – especially now, with the government in charge of the national anti-coronavirus response.

An immediately viable way to break this tension is to negotiate our relationship with experts themselves.

The difficulty of option ‘c’

Can any journalist become a science journalist? More specifically, can any journalist become a science journalist without understanding the methods of scientific practice and administration? This is not a trivial question because not all the methods of science can be discovered or discerned from the corresponding ‘first principles’. That is, common sense and intelligence alone cannot consummate your transformation; you must access new information that you cannot derive through inductive reasoning.

For example, how would you treat the following statement: “Scientists prove that X causes Y”?

a. You could take the statement at face-value

b. You could probe how and why scientists proved that X causes Y

c. You could interrogate the claim that X causes Y, or

d. You could, of course, ignore it.

(Option (d) is the way to go for claims in the popular as well as scientific literature of the type “Scientists prove that coffee/wine/chocolate cause your heart to strengthen/weaken/etc.” unless the story you’re working on concerns the meta-narrative of these studies.)

Anyway, choosing between (a), (b) and (c) is not easy, often because which option you pick depends on how much you know about how the modern scientific industry works. For example, a non-science journalist is likely to go with (a) and/or (b) because, first, they typically believe that the act of proving something is a singular event, localised in time and space, with no room for disagreement.

This is, after all, the picture of proof-making that ill-informed supporters of science (arguably more than even supporters of the ideal of scientism) harbour: “Scientists have proved that X causes Y, so that’s that,” in the service of silencing inconvenient claims like “human activities aren’t causing Earth’s surface to heat up” or “climate geoengineering is bad”. I believe that anthropogenic global warming is real and that we need to consider stratospheric aerosol injections, but flattening the proof-making exercise threatens to marginalise disagreements among scientists themselves – such as about the extent of warming or the long-term effects on biodiversity.

The second reason (a)- and (b)-type stories, especially (a), are more common follows from this perspective of proofs: the view that scientists are authorities and that we are not qualified to question them. As it happens, most of us will never be qualified enough – but question them we can, thanks to four axioms.

First, science deployed for the public good must be well understood, in much the same way a drug that has been tested for efficacy must also be cleared of deleterious side-effects.

Second, journalists don’t need to critique the choice of reagents, animal models, numerical methods or apparatus design to be able to uncover loopholes, inconsistencies and/or shortcomings. Instead, that oppositional role is easily performed by independent scientists whose comments a journalist can invite on the study.

Third, science is nothing without the humans that practice it, and most of the more accessible stories of science (not news reports) are really stories of the humans practising the science.

Fourth, organised science – hot take: like organised religion – is a human endeavour tied up with human structures, human politics and human foibles, which means as much of what we identify as science lies in the discovery of scientific knowledge as in the way we fund, organise, disseminate and preserve that knowledge.

These four axioms together imply that a science journalist is not a journalist who is familiar with advanced mathematics or who can perform a tricky experiment, but one trained to write about science without requiring such knowledge.

§

Anyone familiar with India will recognise that these two principal barriers – a limited understanding of proof-making and the view of scientists as authority figures – to becoming a good science journalist are practically seeded by the inadequate school-level education system. But they are also furthered by India’s prevailing political climate, especially in the way a highly polarised society undermines the role of expertise.

Some people will tell you that you can’t question highly trained scientists because you are not a highly trained scientist but others will say you’re entitled to question everything as a thinking, reasoning, socially engaged global citizen.

As it happens, these aren’t opposing points of view. It’s just that the left and the right have broken the idea of expertise into two pieces, taking one each for themselves, such that the political left is often comfortable with questioning facts like grinding bricks to unusable dust while the political right will treat all bricks the same irrespective of the quality of clay; the leftist will subsequently insist that quality control is all-important whereas the rightist will champion the virtues of pragmatism.

In this fracas to deprive expertise either of authority or of critique, or sometimes both, the expert becomes deconstructed to the point of nonexistence. As a result, the effective performance of science journalism, instead of trying to pander equally to the left’s and the right’s respective conceptions of the expert, converges on the attempt to reconstruct expertise as it should be: interrogated without undermining it, considered without elevating it.

Obviously, this is easier said, and more enjoyably said, than done.

Authority, authoritarianism and a scicomm paradox

The case of Ustad – the tiger shifted from its original habitat in the Ranthambore sanctuary to Sajjangarh Zoo in 2015 after it killed three people – gave me a sharp reminder to better distinguish between activists and experts, irrespective of how right the activists appear to be. Local officials were in favour of the relocation to make life easier for villagers whose livelihoods depended on the forest, whereas activists wanted Ustad brought back to Ranthambore, citing procedural irregularities and poor living conditions, and presuming to know what was best for the animal.

One vocal activist at the agitation’s forefront, and to whose suggestions I had deferred when covering this story, turned out to be a dentist in Mumbai – far removed from the rural reality that Ustad and the villagers co-habited, as well as from the opinions and priorities of conservationists about how Ustad should be handled. As I would later find out, almost all experts (excluding the two or three I’d spoken to) agreed Ustad had to be relocated and that doing so wasn’t as big a deal as the activists made it out to be, notwithstanding the irregularities.

I have never treated activists as experts since, but many other publications continue to make the same mistake. There are many problems with this false equivalence. One is the equation of expertise with amplitude, insofar as it pertains to scientific activity – conservation, climate change, etc. Another is that activists – especially those who live and work in a different area and who haven’t accrued the day-to-day experiences of those whose rights they’re shouting for – tend to make decisions on principle and disfavour choices motivated by pragmatic thinking.

Third, when some experts join forces with activists to render themselves or their possibly controversial opinions more visible, the journalist’s – and by extension the people’s – road to the truth becomes even more convoluted than it should be. Finally, of course, using activists in place of experts in a story isn’t fair to activists themselves: activism has its place in society, and it would be a disservice to depict it as something it isn’t.

This alerts us to the challenge of maintaining a balancing act.

One of the trends of the 21st century has been the democratisation of information – liberating it from technological and economic prisons and making it available and accessible to people who would otherwise be unlikely to get it. This in turn has made many people self-proclaimed experts on this or that, from animal welfare to particle physics. And this is mostly good because, in spite of faux expertise and the proliferation of fake news, democratising the availability of information (but not its production; that’s a different story) empowers people to question authority.

Indeed, it’s possible fake news is as big a problem as it is today because many governments and other organisations have deployed it as a weapon against the availability of information and distributed mechanisms to verify it. Information is wealth after all and it doesn’t bode well for authoritarian systems predicated on the centralisation of power to have the answers to most questions available one Google, Sci-Hub or Twitter search away.

The balancing act comes alive in the tension between preserving authority without imposing an authoritarian structure. That is, where do you draw the line?

For example, Eric Balfour isn’t the man you should be listening to to understand how killer whales interpret and exercise freedom; you should be speaking to an animal welfare expert instead. However, the question arises whether the expert is a hegemon here, furthering an agenda on behalf of the research community to which she belongs by delegitimising knowledge obtained from sources other than her textbooks. (Cf. scientism.)

This impression is solidified when scientists don’t speak up, choosing to remain within their ivory towers, and weakened when they do. This isn’t to say all scientists should also be science communicators – that’s a strawman – but that all scientists should be okay with sharing their comments with the press, under reasonable preconditions.

In India, for example, very, very few scientists engage freely with the press and the people, and even fewer speak up against the government when the latter misfires (which is often). Without dismissing the valid restrictions and reservations that some of them have – including not being able to trust many journalists to know how science works – it’s readily apparent that the number of scientists who do speak up is minuscule relative to the number of scientists who can.

An (English-speaking) animal welfare expert is probably just as easy to find in India as in the US – but consider palaeontologists or museologists, who are harder to find in India (sometimes you don’t realise that until you’re looking for a quote). When they don’t speak up – to journalists, even if not of their own volition – during a controversy, even as they assert that only they can originate true expertise, the people are left trapped in a paradox, sometimes even branded fools for falling for fake news. But you can’t have it both ways, right?

These issues stem from two roots: derision and ignorance, both of science communication.

Of the scientists endowed with sufficient resources (including personal privilege and wealth): some don’t want to undertake scicomm, some don’t know enough to make a decision about whether to undertake scicomm, and some wish to undertake scicomm. Of these, scientists of the first type – who actively resist communicating research, whether their own or others’, believing it to be a lesser or even undesirable enterprise – wish to perpetuate their presumed authority and their authoritarian ‘reign’ by hoarding their knowledge. They are responsible for the derision.

These people are responsible, at least in part, for the emergence of Balfouresque activists: celebrity voices that amplify issues, but wrongly, with or without the support of larger organisations – often claiming to question the agenda of an unholy union of scientists and businesses, alluding to conspiracies designed to keep the general populace from asking too many questions, and ultimately secured by the belief that they’re fighting authoritarian systems and not authority itself.

Scientists of the second type – those unaware of why science communication exists and of its role in society – are, obviously, the ignorant.

For example, when scientists from the UK had a paper published in 2017 about the Sutlej river’s connection to the Indus Valley civilisation, I reached out to two geoscientists for comment, after having ascertained that they weren’t particularly busy or anything. Neither had replied after 48 hours, not even with a ‘no’. So I googled “fluvio-deltaic morphology”, picked the first result that was a university webpage and emailed the senior-most scientist there. This man, Maarten Kleinhans at the University of Utrecht, wrote back almost immediately and in detail. One of the two geoscientists wrote me a month later: “Please check carefully, I am not an author of the paper.”

More recently, the 2018 Young Investigators’ Meet in Guwahati included a panel discussion on science communication (of which I was part). After fielding questions from the audience – mostly from senior scientists already convinced of the need for good science communication, such as B.K. Thelma and Roop Malik – and breaking for tea, another panelist and I were mobbed by young biologists completely baffled as to why journalists wanted to interrogate scientific papers when that’s exactly why peer-review exists.

All of this is less about fighting quacks bearing little to no burden of proof and more about responding to the widespread and cheap availability of information. Like it or not, science communication is here to stay because it’s one of the more credible ways to suppress the undesirable side-effects of implementing and accessing a ‘right to information’ policy paradigm. Similarly, you can’t have a right to information together with a right to withhold information; the latter has to be defined in the form of exceptions to the former. Otherwise, prepare for activism to replace expertise.