The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust the scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency and reflexivity, so that I take what they produce with only the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith, if only to throw its opposition to reason into sharper relief. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see the argumentum ad verecundiam, or appeal to authority, fallacy).

Sometimes, such faith is (mostly) harmless, as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to cognise superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics into a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges when we think we know something while in fact being in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, superconductivity in graphene layers or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27% (the 3σ level), it’s good enough to count as evidence; if the chance of X being false is 0.00006% (the 5σ level), then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for, if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result that was expected to be true, and is subsequently found to be true, holds until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
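
For the curious, those two percentages aren’t arbitrary: they’re the two-sided tail probabilities of a normal distribution at three and five standard deviations. A minimal sketch in Python – my choice of tool, not the physicists’ – assuming scipy is available:

```python
# Tail probabilities behind the 'evidence' (3 sigma) and 'discovery'
# (5 sigma) thresholds used in particle physics.
from scipy.stats import norm

for sigma, label in [(3, "evidence"), (5, "discovery")]:
    # Chance of a random fluctuation at least this large, in either direction
    p = 2 * norm.sf(sigma)
    print(f"{sigma} sigma ({label}): {100 * p:.5f}%")

# Output:
# 3 sigma (evidence): 0.26998%
# 5 sigma (discovery): 0.00006%
```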

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly to deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by treating the doctor as a subject of democratic power. Similarly, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with a pinch of salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to instill trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They don’t seem dubious at first glance, not least because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those who support science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.
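
One way to see what the paradox smuggles in – a sketch in the language of calculus, which is my gloss and not Wikipedia’s – is that motion at an instant is defined as a limit over shrinking intervals around that instant, not as displacement within a single duration-less instant:

```latex
% Instantaneous velocity: the displacement measured *at* an instant is
% zero, but the ratio of displacement to elapsed time need not vanish
% as the interval around the instant shrinks.
v(t) = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t}
```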

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, also self-explanatory. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquefied hydrogen has the highest specific impulse of all rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English and mathematics (the language of modern science), beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

Magic bridges

The last two episodes of the second season of House, the TV series starring Hugh Laurie as a misanthropic doctor at a facility in Princeton, have been playing on my mind off and on during the COVID-19 pandemic. One of its principal points (insofar as Dr Gregory House can admit points to the story of his life) is that it’s ridiculous to expect the families of patients to make informed decisions about whether to sign off on a life-threatening surgical procedure, say, within a few hours when in fact medical workers might struggle to make those choices even after many years of specific training.

The line struck me as describing a chasm stretching between two points on the healthcare landscape – so wide as to be insurmountable by anything except magic, in the form of decisions that can never be grounded entirely in logic and reason. Families of very sick patients are frequently able to conjure a bridge out of thin air with the power of hope alone, or – more often – desperation. As such, we all understand that these ‘free and informed consent’ forms exist to protect care-providers against litigation as well as, by the same token, to allow them to freely exercise their technical judgments – somewhat like how it’s impossible to physically denote an imaginary number (√-1) while still understanding why it must exist. For completeness.
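
(The ‘completeness’ in question is the algebraic kind – a quick sketch, in my words:)

```latex
% x^2 + 1 = 0 has no solution among the real numbers; defining i by
% i^2 = -1 supplies one. Extending the reals with i yields the complex
% numbers, over which every polynomial equation has a root -- the
% sense in which i must exist 'for completeness'.
x^2 + 1 = 0 \quad\Longrightarrow\quad x = \pm i, \qquad i^2 = -1
```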

Sometimes, it’s also interesting to ask if anything meaningful could get done without these bridges, especially since they’re fairly common in the real world and people often tend to overlook them.

I’ve had reason to think of these two House episodes because one of the dominant narratives of the COVID-19 pandemic has been one of uncertainty. The novel coronavirus is, as the name suggests, a new creature – something that evolved in the relatively recent past and assailed the human species before the latter had time to understand its features using techniques and theories honed over centuries. This in turn precipitated a cascade of uncertainties as far as knowledge of the virus was concerned: scientists knew something, but not everything, about the virus; science journalists and policymakers knew a subset of that; and untrained people at large (“the masses”) knew a subset of that.

But even though more than a year has passed since the virus first infected humans, the forces of human geography, technology, politics, culture and society have together ensured not everyone knows what there is currently to know about the virus, even as the virus’s interactions with these forces in different contexts continue to birth more information, more knowledge, by the day. As a result, when an arbitrary person in an arbitrary city in India has to decide whether they’d rather be inoculated with Covaxin or Covishield, they – and in fact the journalists tasked with informing them – are confronted by an unlikely, if also conceptual, problem: having to make a rational choice where one is simply and technically impossible.

How then do they and we make these choices? We erect magic bridges. We think we know more than we really do, so even as the bridge we walk on is made of nothing, our belief in its existence holds it up, stiff beneath our feet. This isn’t as bad as I’m making it seem; it seems like the human thing to do. In fact, I think we should be clearer about the terms on which we make these decisions, so that we can improve them.

For example, all frontline workers who received Covaxin in the first phase of India’s vaccination drive had to read and sign off on an ‘informed consent’ form that included potential side effects of receiving a dose of the vaccine, its basic mechanism of action and how it was developed. These documents tread a fine line between being informative and being useful (in the specific sense that informing too much risks debilitating the decision, while withholding important information skips straight to seemingly useful ‘advice’): they don’t tell you everything they can about the vaccine, nor do they assert the decision you should make.

In this context, and assuming the potential recipient of the vaccine doesn’t have the education or training to understand how exactly vaccines work, a magic bridge is almost inevitable. The recipient would then be better served by a bridge erected on the right priorities and principles, instead of one thrown up willy-nilly, without thought for medium- or long-term consequences.

There’s perhaps an instructive analogy here with software programming, in the form of the concept of anti-patterns. An anti-pattern is a counterproductive solution to a recurrent problem. Say you’ve written some code that generates a webpage every time a user selects a number from a list of numbers. The algorithm is dynamic: the script takes the user-provided input, performs a series of calculations on it and based on the results produces the final output. However, you notice that your code has a mistake due to which one particular element on the final webpage is always 10 pixels to the left of where it should be. Being unable to identify the problem, you take the easy way out: you add a line right at the end of the script to shift that element 10 pixels to the right, once it has been rendered.
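
Here’s a toy sketch of that hack in Python; the page, the layout routine and the bug are all hypothetical stand-ins, not anyone’s real code:

```python
# A toy illustration of the anti-pattern described above.

def compute_layout(selected_number: int) -> dict:
    """Compute element positions. Somewhere in here lurks an
    unidentified bug that places 'badge' 10 pixels too far left."""
    return {"header": (0, 0), "badge": (selected_number * 4 - 10, 20)}

def render_page(selected_number: int) -> dict:
    positions = compute_layout(selected_number)
    # The anti-pattern: instead of finding and fixing the bug in
    # compute_layout, patch the symptom after everything is rendered.
    x, y = positions["badge"]
    positions["badge"] = (x + 10, y)  # shift the element 10 px right
    return positions
```

(Calling render_page(5) returns ‘badge’ at (20, 20) – correct-looking output produced for the wrong reason.)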

This is a primitive example of an anti-pattern, an action that can’t be derived from the principles governing the overall system and which exists nonetheless because you put it there. Andrew Koenig introduced the concept in 1995 to identify software programs that are unreliable in some way, and which could be made reliable by ensuring the program conforms to some known principles. Magic bridges are currently such objects, whose existence we often deny because we think they’re non-magical. However, they shouldn’t have to be anti-patterns so much as precursors of a hitherto unknown design en route to completeness.

In pursuit of a nebulous metaphor…

I don’t believe in god, but if he/it/she/they existed, then his/its/her/their gift to science communication would’ve been the metaphor. Metaphors help make sense of truly unknowable things, get a grip on things so large that our minds boggle trying to comprehend them, and help writers express book-length concepts in a dozen words. Even if there is something lost in translation, as it were, metaphors help both writers and readers get a handle on something they would otherwise have struggled to grasp.

One of my favourite expositions on the power of metaphors appeared in an article by Daniel Sarewitz, writing in Nature (readers of this blog will be familiar with the text I’m referring to). Sarewitz was writing about how nobody but trained physicists understands what the Higgs boson really is, because those of us who think we get it are only getting metaphors. The Higgs boson exists in a realm that humans cannot ever access (even Ant-Man almost died getting there), and physicists make sense of it through complicated mathematical abstractions.

Mr Wednesday makes just this point in American Gods (the TV show), when he asks his co-passenger on a flight what it is that makes them trust that the plane will fly. (Relatively) Few of us know the physics behind Newton’s laws of motion and Bernoulli’s work in fluid dynamics – but many of us believe in their robustness. In a sense, it’s faith and metaphors that keep us going, not knowledge itself, because we truly know only a little.

However, the ease that metaphors offer writers at such a small cost (minimised further for those writers who know how to deal with that cost) sometimes means that they’re misused or overused. Sometimes, some writers will abdicate their responsibility to stay as close to the science – and the objective truth, such as it is – as possible by employing metaphors where one could easily be avoided. My grouse of choice at the moment is this tweet by New Scientist:

The writer has had the courtesy to use the word ‘equivalent’ but it can’t do much to salvage the sentence’s implications from the dumpster. Different people have different takeaways from the act of smoking. I think of lung and throat cancer; someone else will think of reduced lifespan; yet another person will think it’s not so bad because she’s a chain-smoker; someone will think it gives them GERD. It’s also a bad metaphor to use because the effects of smoking vary from person to person based on various factors (including how long they’ve been smoking 15 cigarettes a day for). This is why researchers studying the effects of smoking quantify not the risk but the relative risk (RR): the risk of some ailment (including reduced lifespan) relative to non-smokers in the same population.
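
For concreteness, RR is just a ratio of incidences; a minimal sketch with made-up numbers:

```latex
% Relative risk: incidence of the ailment among the exposed divided by
% incidence among the unexposed. The counts below are invented purely
% for illustration.
\mathrm{RR}
  = \frac{P(\text{ailment} \mid \text{smoker})}{P(\text{ailment} \mid \text{non-smoker})}
  = \frac{30/1000}{10/1000}
  = 3
```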

There are additional concerns that don’t allow the smoking-loneliness congruence to be generally applicable. For example, according to a paper published in the Journal of Insurance Medicine in 2008,

An important consideration [is] the extent to which each study (a) excluded persons with pre-existing medical conditions, perhaps those due to smoking, and (b) controlled for various co-morbid factors, such as age, sex, race, education, weight, cholesterol, blood pressure, heart disease, and cancer. Studies that excluded persons with medical conditions due to smoking, or controlled for factors related to smoking (e.g., blood pressure), would be expected to find lower RRs. Conversely, studies that did not account for sufficient confounding factors (such as age or weight) might find higher RRs.

So, which of these – or any other – effects of smoking is the writer alluding to? Quoting from the New Scientist article,

Lonely people are at increased risk of “just about every major chronic illness – heart attacks, neurodegenerative diseases, cancer,” says Cole. “Just a completely crazy range of bad disease risks seem to all coalesce around loneliness.” A meta-analysis of nearly 150 studies found that a poor quality of social relationships had the same negative effect on risk of death as smoking, alcohol and other well-known factors such as inactivity and obesity. “Correcting for demographic factors, loneliness increases the odds of early mortality by 26 per cent,” says Cacioppo. “That’s about the same as living with chronic obesity.”

The metaphor the writer was going for was one of longevity. Bleh.

When I searched for the provenance of this comparison (between smoking and loneliness), I landed up on two articles by the British writer George Monbiot in The Guardian, both of which make the same claim*: that smoking 15 cigarettes a day will reduce your lifespan by as much as a lifetime of loneliness. Both claims referenced a paper titled ‘Social Relationships and Mortality Risk: A Meta-analytic Review’, published in July 2010. Its ‘Discussion’ section reads:

Data across 308,849 individuals, followed for an average of 7.5 years, indicate that individuals with adequate social relationships have a 50% greater likelihood of survival compared to those with poor or insufficient social relationships. The magnitude of this effect is comparable with quitting smoking and it exceeds many well-known risk factors for mortality (e.g., obesity, physical inactivity).

In this context, there’s no doubt that the writer is referring to the benefits of smoking cessation on lifespan. However, the number ’15’ itself is missing from its text. This is presumably because, as Cacioppo – one of the scientists quoted by the New Scientist – says, loneliness can increase the odds of early mortality by 26%, and I assume an older study cited by the one quoted above relates it to smoking 15 cigarettes a day. So I went looking, and (two hours later) couldn’t find anything.

I don’t mean to rubbish the congruence as a result, however – far from it. I want to highlight the principal reason I didn’t find a claim that fit the proverbial glove: most studies that seek to quantify smoking-related illnesses like to keep things as specific as possible, especially the cohort under consideration. This suggests that extrapolating the ’15 cigarettes a day’ benchmark into other contexts is not a good idea, especially when the writer does not know – and the reader is not aware of – the terms of the ’15 cigarettes’ claim nor the terms of the social relationships study. For example, one study I found involved the following:

The authors investigated the association between changes in smoking habits and mortality by pooling data from three large cohort studies conducted in Copenhagen, Denmark. The study included a total of 19,732 persons who had been examined between 1967 and 1988, with reexaminations at 5- to 10-year intervals and a mean follow-up of 15.5 years. Date of death and cause of death were obtained by record linkage with nationwide registers. By means of Cox proportional hazards models, heavy smokers (≥15 cigarettes/day) who reduced their daily tobacco intake by at least 50% without quitting between the first two examinations and participants who quit smoking were compared with persons who continued to smoke heavily.

… and it presents a table with various RRs. Perhaps something from there can be fished out by the New Scientist writer and used carefully to suggest the comparability between smoking-associated mortality rates and the corresponding effects of loneliness…
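
(For reference, the ‘Cox proportional hazards models’ mentioned in that excerpt relate an individual’s risk of dying at a given time to their covariates; a standard sketch of the model, not anything specific to this study:)

```latex
% Cox proportional hazards: the hazard for an individual with
% covariates x_1, ..., x_p (e.g. cigarettes/day, age, sex) is a
% baseline hazard h_0(t) scaled by an exponential term; exp(beta_i)
% is the hazard ratio for a one-unit change in x_i.
h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \cdots + \beta_p x_p)
```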

*The figure of ’15 cigarettes’ seems to appear in conjunction with a lot of claims about smoking as well as loneliness all over the web. It seems 15 a day is the line between light and heavy smoking.

Featured image credit: skeeze/pixabay.