The constructionist hypothesis and expertise during the pandemic

Now that COVID-19 cases are rising again in the country, the trash talk against journalists has been rising in tandem. The Indian government was unprepared and hapless last year, and it is this year as well, if only in different ways. In this environment, journalists have come under criticism along two equally unreasonable lines. First, many people, typically supporters of the establishment, either don’t or can’t see the difference between good journalism and contrarianism, and don’t or can’t acknowledge the need for expertise in the practice of journalism.

Second, the recognition of expertise itself has been sorely lacking across the board. Just like last year, when lots of scientists dropped what they were doing and started churning out disease transmission models each one more ridiculous than the last, this time – in response to a more complex ‘playing field’ involving new and more numerous variants, intricate immunity-related mechanisms and labyrinthine clinical trial protocols – too many people have been shooting their mouths off, and getting most of it wrong. All of these misfires have reminded us, again and again, of two things: that expertise matters, and that unless you’re an expert on something, you’re unlikely to know how deep it runs. The latter isn’t trivial.

There’s what you know you don’t know, and what you don’t know you don’t know. The former is the birthplace of learning. It’s the perfect place from which to ask questions and fill gaps in your knowledge. The latter is the verge of presumptuousness — a very good place from which to make a fool of yourself. Of course, this depends on your attitude: you can always be mindful of the Great Unknown, such as it is, and keep quiet.

As these tropes have played out in the last few months, I have been reminded of an article written by the physicist Philip Warren Anderson, called ‘More is Different’, and published in 1972. His idea here is simple: that the statement “if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws” is false. He goes on to explain:

“The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. … The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behaviour of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviours requires research which I think is as fundamental in its nature as any other.”

The seemingly endless intricacies that beset the interaction of a virus, a human body and a vaccine are proof enough that the “twin difficulties of scale and complexity” are present in epidemiology, immunology and biochemistry as well – and testament to the foolishness of any claims that the laws of conservation, thermodynamics or motion can help us say, for example, whether a particular variant infects people ‘better’ because it escapes the immune system better or because the immune system’s protection is fading.

But closer to my point: not even all epidemiologists, immunologists and/or biochemists can meaningfully comment on every form or type of these interactions at all times. I’m not 100% certain, but at least from what I’ve learnt reporting topics in physics (and conceding happily that covering biology seems more complex), scale and complexity work not just across but within fields as well. A cardiologist may be able to comment meaningfully on COVID-19’s effects on the heart in some patients, or a neurologist on the brain, but they may not know how the infection got there even if all these organs are part of the same body. A structural biologist may have deciphered why different mutations change the virus’s spike protein the way they do, but she can’t be expected to comment meaningfully on how epidemiological models will have to be modified for each variant.

To people who don’t know better, a doctor is a doctor and a scientist is a scientist, but as journalists plumb the deeper, more involved depths of a new yet specific disease, we bear from time to time a secret responsibility to be constructive and not reductive, and this is difficult. It becomes crucial for us to draw on the wisdom of the right experts, who wield the right expertise, so that we’re moving as much and as often as possible away from the position of what we don’t know we don’t know even as we ensure we’re not caught in the traps of what experts don’t know they don’t know. The march away from complete uncertainty and towards the names of uncertainty is precarious.

Equally importantly, at this time, to make our own jobs that much easier, or at least less acerbic, it’s important for everyone else to know this as well – that more is vastly different.

Why scientists should read more

The amount of communicative effort to describe the fact of a ball being thrown is vanishingly low. It’s as simple as saying, “X threw the ball.” It takes a bit more effort to describe how an internal combustion engine works – especially if you’re writing for readers who have no idea how thermodynamics works. However, if you spend enough time, you can still completely describe it without compromising on any details.

Things start to get more difficult when you try to explain, for example, how webpages are loaded in your browser: because the technology is more complicated and you often need to talk about electric signals and logical computations – entities that you can’t directly see. You really start to max out when you try to describe everything that goes into launching a probe from Earth and landing it on a comet because, among other reasons, it brings together advanced ideas in a large number of fields.

At this point, you feel ambitious and you turn your attention to quantum technologies – only to realise you’ve crossed a threshold into a completely different realm of communication, a realm in which you need to pick between telling the whole story and risk being (wildly) misunderstood OR swallowing some details and making sure you’re entirely understood.

Last year, a friend and I spent dozens of hours writing a 1,800-word article explaining the Aharonov-Bohm quantum interference effect. We struggled so much because understanding this effect – in which electrons are affected by electromagnetic fields that aren’t there – required us to understand the wave-function, a purely mathematical object that describes real-world phenomena, like the behaviour of some subatomic particles, and mathematical-physical processes like non-Abelian transformations. Thankfully my friend was a physicist, a string theorist for good measure; but while this meant that I could understand what was going on, we spent a considerable amount of time negotiating the right combination of metaphors to communicate what we wanted to communicate.
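(For readers comfortable with a little notation – and this is the standard textbook statement of the effect, not a line from our article – the whole thing can be compressed into one equation: the phase an electron’s wave-function picks up depends on the electromagnetic vector potential along its path, even where the magnetic field itself is zero.)

```latex
% Aharonov-Bohm phase: an electron travelling around a closed path C
% that encloses a solenoid acquires a phase set by the enclosed
% magnetic flux \Phi, even though the field B vanishes along C.
\varphi_{AB} \;=\; \frac{q}{\hbar} \oint_{C} \mathbf{A} \cdot \mathrm{d}\mathbf{l} \;=\; \frac{q\,\Phi}{\hbar}
```

That one line, of course, is precisely the sort of thing our 1,800 words existed to unpack.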

However, I’m even more grateful in hindsight that my friend was a physicist who understood the need to not exhaustively include details. This need manifests in two important ways. The first is the simpler, grammatical way, in which we construct increasingly involved meanings using a combination of subjects, objects, referrers, referents, verbs, adverbs, prepositions, gerunds, etc. The second way is more specific to science communication: in which the communicator actively selects a level of preexisting knowledge on the reader’s part – say, high-school education at an English-medium institution – and simplifies the slightly more complicated stuff while using approximations, metaphors and allusions to reach for the mind-boggling.

Think of it like building an F1 racecar. It’s kinda difficult if you already have the engine, some components to transfer kinetic energy through the car and a can of petrol. It’s just ridiculous if you need to start with mining iron ore, extracting oil and preparing a business case to conduct televisable racing sports. In the second case, you’re better off describing what you’re trying to do to the caveman next to you using science fiction, maybe poetry. The point is that to really help an undergraduate student of mechanical engineering make sense of, say, the Casimir effect, I’d rather say:

According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together.

‘Quantum Atmospheres’ May Reveal Secrets of Matter, Quanta, September 2018

I wouldn’t say the following even though it’s much less wrong:

The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantised electromagnetic field. Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.

Casimir effect, Wikipedia
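(Neither passage needs the quantitative result, but for the curious – and this is an addition of mine, the standard textbook expression rather than part of either quoted source – the attractive force between two ideal parallel plates is:)

```latex
% Casimir force per unit area between two ideal, parallel, uncharged
% conducting plates a distance d apart in vacuum; the minus sign
% indicates the plates are pushed together.
\frac{F}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{240\,d^{4}}
```

The steep 1/d⁴ dependence is why the effect matters only at sub-micrometre separations – a detail both descriptions above can safely skip.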

Put differently, the purpose of communication is to be understood – not learnt. And as I’m learning these days, while helping virologists compose articles on the novel coronavirus and convincing physicists that comparing the Higgs field to molasses isn’t wrong, this difference isn’t common knowledge at all. More importantly, I’m starting to think that my physicist-friend who really got this difference did so because he reads a lot. He’s a veritable devourer of texts. So he knows it’s okay – and crucially why it’s okay – to skip some details.

I’m half-enraged when really smart scientists just don’t get this, and accuse editors (like me) of trying to misrepresent their work instead. (A group that’s slightly less frustrating consists of authors who list their arguments in one paragraph after another, without any thought for the article’s structure or – more broadly – for the importance of telling a story. Even if you’re reviewing a book or critiquing a play, it’s important to tell a story about the thing you’re writing about, and not simply enumerate your points.)

To them – which is all of them because those who think they know the difference but really don’t aren’t going to acknowledge the need to bridge the difference, and those who really know the difference are going to continue reading anyway – I say: I acknowledge that imploring people to communicate science more without reading more is fallacious, so read more, especially novels and creative non-fiction, and stories that don’t just tell stories but show you how we make and remember meaning, how we memorialise human agency, how memory works (or doesn’t), and where knowledge ends and wisdom begins.

There’s a similar problem I’ve faced when working with people for whom English isn’t the first language. Recently, a person used to reading and composing articles in the passive voice was livid after I’d changed numerous sentences in the article they’d submitted to the active voice. They really didn’t know why writing, and reading, in the active voice is better because they hadn’t ever had to use English for anything other than writing and reading scientific papers, where the passive voice is par for the course.

I had a bigger falling out with another author because I hadn’t been able to perfectly understand the point they were trying to make, in sentences of broken English, and used what I could infer to patch them up – except I was told I’d got most of them wrong. And they couldn’t implement my suggestions either because they couldn’t understand my broken Hindi.

These are people I can’t ask to read more. The Wire and The Wire Science publish in English but, despite my (admittedly inflated) view of how good these publications are, I’ve no reason to expect anyone to learn a new language because they wish to communicate their ideas to a large audience. That’s a bigger beast of a problem, with tentacles snaking through colonialism, linguistic chauvinism, regional identities, even ideologies (like mine – to make no attempts to act on instructions, requests, etc. issued in Hindi even if I understand the statement). But at the same time there’s often too much lost in translation – so much so that (speaking from my experience in the last five years) 50% of all submissions written by authors for whom English isn’t the first language don’t go on to get published, even when it was possible for either party to glimpse during the editing process that they had a fascinating idea on their hands.

And to me, this is quite disappointing because one of my goals is to publish a more diverse group of writers, especially from parts of the country underrepresented thus far in the national media landscape. Then again, I acknowledge that this status quo axiomatically charges us to ensure there are independent media outlets with science sections and publishing in as many languages as we need. A monumental task as things currently stand, yes, but nonetheless, we remain charged.

Science journalism, expertise and common sense

On March 27, the Johns Hopkins University said an article published on the website of the Center for Disease Dynamics, Economics and Policy (CDDEP), a Washington-based think tank, had used its logo without permission, and distanced itself from the study, which had concluded that the number of people in India who could test positive for the new coronavirus could swell into the millions by May 2020. Soon after, a basement of trolls latched onto CDDEP founder-director Ramanan Laxminarayan’s credentials as an economist to dismiss his work as a public-health researcher, including by denying the study’s conclusions without discussing its scientific merits and demerits.

A lot of issues are wound up in this little controversy. One of them is our seemingly naïve relationship with expertise.

Expertise is supposed to be a straightforward thing: you either have it or you don’t. But just as specialised knowledge is complicated, so too is expertise.

Many of us have heard stories of someone who’s great at something “even though he didn’t go to college” and another someone who’s a bit of a tubelight “despite having been to Oxbridge”. Irrespective of whether they’re exceptions or the rule, there’s a lot of expertise in the world that a deference to degrees would miss.

More importantly, by conflating academic qualifications with expertise, we risk flattening a three-dimensional picture to one dimension. For example, there are more scientists who can speak confidently about statistical regression and the features of exponential growth than there are who can comment on the false vacua of string theory or discuss why protein folding is such a hard problem to solve. These hierarchies arise because of differences in complexity. We don’t have to insist that only a virologist or an epidemiologist is allowed to answer questions about whether a clinical trial was done right.

But when we insist someone is not good enough because they have a degree in a different subject, we could be reinforcing the implicit assumption that we don’t want to look beyond expertise, and are content with being told the answers. Granted, this argument is better directed at individuals privileged enough to learn something new every day, but maintaining this chasm – between who in the public consciousness is allowed to provide answers and who isn’t – also continues to keep power in fewer hands.

Of course, many questions that have arisen during the coronavirus pandemic have often stood between life and death, and it is important to stay safe. However, there is a penalty to thinking that the closer we drift towards expertise, the safer we become – because then we may be drifting away from common sense and accruing a different kind of burden, especially when we insist that only specialised experts can comment on a far less specialist topic. Such convictions have already created a class of people that believes ad hominem is a legitimate argumentative ploy, and won’t back down from an increasingly acrimonious quarrel until they find the cherry-picked data they have been looking for.

Most people occupy a less radical but still problematic position: even when neither life nor fortune is at stake, they claim to wait for expertise before changing their behaviour and/or beliefs. Most of them are really waiting for something that arrived long ago, and are only trying to find new ways to persist with the status quo. The all-or-nothing attitude of the rest – assuming they exist – is, simply put, epistemologically inefficient.

Our deference to the views of experts should be a function of how complex the topic at hand really is, and therefore of the extent to which it can be interrogated. So when the topic is whether a clinical trial was done right or whether the Indian Council of Medical Research is testing enough, the net we cast to find independent scientists to speak to can include those who aren’t medical researchers but whose academic or vocational trajectories familiarised them with some parts of these issues, as well as those who are transparent about their reasoning, methods and opinions. (The CDDEP study is yet to reveal its methods, so I don’t want to comment specifically on it.)

If we can’t be sure whether the scientist we’re speaking to is making sense, obviously it would be better to go with someone whose words we can just trust. And if we’re not comfortable having such a negotiated relationship with an expert – sadly, it’s always going to be this way. The only way to make matters simpler is to deliberately shut ourselves off: to take what we’re hearing and, instead of questioning it further, run with it.

This said, we all shut ourselves off at one time or another. It’s only important that we do it knowing we do it, instead of harbouring pretensions of superiority. At no point does it become reasonable to dismiss anyone based on their academic qualifications alone the way, say, Times of India and OpIndia have done (see below).

What’s more, Dr Giridhar Gyani is neither a medical practitioner nor epidemiologist. He is academically an electrical engineer, who later did a PhD in quality management. He is currently director general at Association of Healthcare Providers (India).

Times of India, March 28

Ramanan Laxminarayanan, who was pitched up as an expert on diseases and epidemics by the media outlets of the country, however, in reality, is not an epidemiologist. Dr Ramanan Laxminarayanan is not even a doctor but has a PhD in economics.

OpIndia, March 22

Expertise has been humankind’s way to quickly make sense of a world that has only been becoming more confusing. But historically, expertise has also been a reason of state, used to suppress dissenting voices and concentrate political, industrial and military power in the hands of a few. The former is in many ways a useful feature of society for its liberating potential while the latter is undesirable because it enslaves. People frequently straddle both tendencies together – especially now, with the government in charge of the national anti-coronavirus response.

An immediately viable way to break this tension is to negotiate our relationship with experts themselves.

If AI is among us, would we know?

Our machines could become self-aware without our knowing it. We need a better way to define and test for consciousness.

… an actual AI might be so alien that it would not see us at all. What we regard as its inputs and outputs might not map neatly to the system’s own sensory modalities. Its inner phenomenal experience could be almost unimaginable in human terms. The philosopher Thomas Nagel’s famous question – ‘What is it like to be a bat?’ – seems tame by comparison. A system might not be able – or want – to participate in the classic appraisals of consciousness such as the Turing Test. It might operate on such different timescales or be so profoundly locked-in that, as the MIT cosmologist Max Tegmark has suggested, in effect it occupies a parallel universe governed by its own laws.

The first aliens that human beings encounter will probably not be from some other planet, but of our own creation. We cannot assume that they will contact us first. If we want to find such aliens and understand them, we need to reach out. And to do that we need to go beyond simply trying to build a conscious machine. We need an all-purpose consciousness detector.

Interesting perspective by George Musser – that of a “consciousness creep”. In the larger scheme of things (of very-complex things in particular), isn’t the consciousness creep statistically inevitable? Musser himself writes that “despite decades of focused effort, computer scientists haven’t managed to build an AI system intentionally”. As a result, perfectly comprehending the composition of the subsystem that confers intelligence upon the whole is likelier to happen gradually – as we’re able to map more of the system’s actions to their stimuli. In fact, until the moment of perfect comprehension, our knowledge won’t reflect a ‘consciousness creep’ but a more meaningful, quantifiable ‘cognisance creep’ – especially if we already acknowledge that some systems have achieved self-awareness and are able to think and compute intelligently.

Is anything meant to remain complex?

The first answer is “No”. I mean, whatever you’re writing about, the onus is on the writer to break his subject down to its simplest components, and then put them back together in front of the reader’s eyes. If the writer fails to do that, then the blame can’t be placed on the subject.

It so happens that the blame can be placed on the writer’s choice of subject. Again, the fault is the writer’s, but what do you do when the subject is important and ought to be written about because some recent contribution to it makes up a piece of history? Sure, the essentials are the same: read up long and hard on it, talk to people who know it well and are able to break it down in some measure for you, and try to use infographics to augment the learning process.

But these methods, too, have their shortcomings. For one, if the subject has only a long-winded connection to phenomena that affect reality, then strong comparisons have to make way for weak metaphors. A consequence of this is that the reader is more misguided in the long-term than he is “learned” in the short-term. For another, these methods require that the writer know what he’s doing, that what he’s writing about makes sense to him before he attempts to make sense of it for his readers.

This is not always the case: given the grey depths that advanced mathematics and physics are plumbing these days, science journalism concerning these areas is often written with a view to making the subject sound awesome, enigmatic and, sometimes, more consequential than it is, rather than to provide a full picture of goings-on.

Sometimes, we don’t have a full picture because things are that complex.

The reader is entitled to know – that’s the tenet of the sort of science writing that I pursue: informational journalism. I want to break the world around me down to small bits that remain eternally comprehensible. Somewhere, I know, I must be able to distinguish between my shortcomings and the subject’s; when I realise I’m not able to do that effectively, I will have failed my audience.

In such a case, am I confined to highlighting the complexity of the subject I’ve chosen?


The part of the post that makes some sense ends here. The part of the post that may make no sense starts here.

The impact of this conclusion on science journalism worldwide is that there is a barrage of didactic pieces once something is completely understood, and almost no literature during the finding’s formative years, despite public awareness that important, and legitimate, work was being done. (This is the fine line that I’m treading.)

I know this post sounds like a rant – it is a rant – against a whole bunch of things, not the least important of which is that groping in the dark is a fact of life. However, somehow, I still have a feeling that a lot of scientific research is locked up in silence, yet unworded, because we haven’t received the final word on it. A safe course, of course: nobody wants to be that guy who announced something prematurely, only for the eventual result to turn out to be something else.