Broken clocks during the pandemic

Proponents of conspiracy theories during the pandemic, at least in India, appear to be like broken clocks: they are right by coincidence, without the right body of evidence to back their claims. Two of the most-read articles published by The Wire Science in the last 15 months have been the fact-checks of Luc Montagnier’s comments on the two occasions he spoke up in the French press. On the first occasion, he said the novel coronavirus couldn’t have evolved naturally; on the second, he insisted mass vaccination was a big mistake. The context in which Montagnier published his remarks evolved considerably between the two events, and it tells an important story.

When Montagnier said in April 2020 that the virus was lab-made, the virus’s spread was just beginning to accelerate in India, Europe and the US, and the proponents of the lab-leak hypothesis to explain the virus’s origins had few listeners and were consigned firmly to the margins of popular discourse on the subject. In this environment, Montagnier’s comments stuck out like a sore thumb, and were easily dismissed.

But when Montagnier said in May 2021 that mass vaccination is a mistake, the context was quite different: in the intervening period, Nicholas Wade had published his article on why we couldn’t dismiss the lab-leak hypothesis so quickly; the WHO’s missteps were more widely known; China’s COVID-19 outbreak had come completely under control (actually or for all appearances); many vaccine-manufacturers’ immoral and/or unethical business practices had come to light; more people were familiar with the concept and properties of viral strains; the WHO had filed its controversial report on the possible circumstances of the virus’s origins in China; etc. As a result, speaking now, Montagnier wasn’t so quickly dismissed. Instead, he was, to many observers, the man who had got it right the first time, was brave enough to stick his neck out in support of an unpopular idea, and was speaking up yet again.

The problem here is that Luc Montagnier is a broken clock – in the way even broken clocks are right twice a day: not because they actually tell the time but because the time coincidentally matches what the clock face is stuck at. On both occasions, the conclusions of Montagnier’s comments coincided with what conspiracists have been going on about since the pandemic’s start, but on both occasions, his reasoning was wrong. The same has been true of many other claims made during the pandemic. People have said things that later turned out to be true, yet they themselves were still wrong, because their particular reasons for believing those things to be true were wrong.

That is, unless you can say why you’re right, you’re not right. Unless you can explain why the time is what it is, you’re not a clock!

Montagnier’s case also illuminates a problem with soothsaying: if you wish to be a prophet, it is in your best interests to make as many predictions as possible – to increase the odds of reality coinciding with at least one prediction in time. And when such a coincidence does happen, it doesn’t mean the prophet was right; it means they weren’t wrong. There is a big difference between these positions, one that becomes pronounced when the conspiratorially minded start incorporating every article published anywhere, from The Wire Science to The Daily Guardian, into their narratives of choice.
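A minimal back-of-the-envelope sketch of why this works in the prophet’s favour; the 5% chance assigned to any single prediction, and the assumption that predictions are independent, are illustrative choices of mine, not measured figures:

```python
# Illustrative only: assume each prediction independently has a 5% chance of
# happening to coincide with reality. The more predictions made, the likelier
# it becomes that at least one 'hits' – without the prophet ever being right
# for the right reasons.
p_single = 0.05  # assumed per-prediction chance of a coincidental hit

for n in (1, 5, 20, 50):
    p_at_least_one = 1 - (1 - p_single) ** n
    print(f"{n:>2} predictions -> {p_at_least_one:.0%} chance of at least one hit")
```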

As the lab-leak hypothesis moved from the fringes of society to the centre, and as its proponents mistakenly conflated possibility with likelihood (i.e. zoonotic spillover and lab-leak are two valid hypotheses for the virus’s origins but they aren’t equally likely to be true), the conspiratorial proponents of the lab-leak hypothesis (the ones given to claiming Chinese scientists engineered the pathogen as a weapon, etc.) have steadily woven imaginary threads between the hypothesis and Indian scientists who opposed Covaxin’s approval, the Congress leaders who “mooted” vaccine hesitancy in their constituencies, scientists who made predictions that came to be wrong, even vaccines that were later found to have rare side-effects restricted to certain demographic groups.

The passage of time is notable here. I think adherents of lab-leak conspiracies are motivated by an overarching theory born entirely of speculation, not evidence, and then pick and choose from events to build the case that the theory is true. I say ‘overarching’ because, to the adherents, the theory is already fully formed and true, and pieces of it become visible to observers as and when the corresponding events play out. This could explain why time is immaterial to them. You and I know that Shahid Jameel and Gagandeep Kang cast doubt on Covaxin’s approval (and not on Covaxin itself) after we learnt that Covaxin’s phase 3 clinical trials were only just getting started in December, and before Covishield’s side-effects in Europe and the US came to light (with the attendant misreporting). We know that when Luc Montagnier said last year that the novel coronavirus was made in a lab, we didn’t know nearly enough about the structural biology underlying the virus’s behaviour; we do now.

The order of events matters: we went from ignorance to knowledge, from knowing to knowing more, from thinking one thing to – in the face of new information – thinking another. But the conspiracy-theorists and their ideas lie outside of time: the order of events doesn’t matter; instead, to these people, 2021, 2022, 2023, etc. are preordained. They seem to be simply waiting for the coincidences to roll around.

An awareness of the time dimension (so to speak), or more accurately of the arrow of time, leads straightforwardly to the proper practice of science in our day-to-day affairs as well. As I said, unless you can say why you’re right, you’re not right. This is why effects lie in the future of causes, and why theories lie in the causal future of evidence. What we can say to be true at this moment depends entirely on what we know at this moment. If we presume what we can say at this moment to be true will always be true, we become guilty of dragging our theory into the causal history of the evidence – simply because we are saying that the theory will come true given enough time in which evidence can accrue.

This protocol (of sorts) to verify the truth of claims isn’t restricted to the philosophy of science, even if it finds powerful articulation there: a scientific theory isn’t true if it isn’t falsifiable outside its domain of application. It is equally legitimate and necessary in the daily practice of science and its methods, on Twitter and Facebook, in WhatsApp groups, every time your father, your cousin or your grand-uncle begins a question with “If the lab-leak hypothesis isn’t true…”.

Journalistic entropy

Say you need to store a square image, 1,000 pixels to a side, with the smallest filesize (setting aside compression techniques). The image begins with the colour #009900 on the left edge and, as you move towards the right, gradually blends into #1e1e1e on the rightmost edge. Two simple storage methods come to mind: you could either encode the colour information of every pixel in a file and store that file, or you could determine a mathematical function that, given the inputs #009900 and #1e1e1e, generates the image in question.

The latter method seems more appealing, especially for larger canvases whose patterns are composed by a single underlying function. In such cases, storing the image as the output of a function plus its inputs should obviously yield the smaller filesize.
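A minimal sketch of the two strategies, assuming a simple linear left-to-right blend; the function, variable names and byte-accounting below are illustrative choices of mine, not a prescribed format:

```python
# Sketch: storing every pixel explicitly vs storing only a generating function's inputs.
WIDTH = HEIGHT = 1000
LEFT, RIGHT = 0x009900, 0x1E1E1E  # endpoint colours

def hex_to_rgb(c):
    return ((c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF)

def pixel_colour(x, y, left=LEFT, right=RIGHT):
    """Colour at column x: a linear blend from `left` to `right` (y is unused
    because the blend varies only along the horizontal axis)."""
    t = x / (WIDTH - 1)
    (lr, lg, lb), (rr, rg, rb) = hex_to_rgb(left), hex_to_rgb(right)
    return (round(lr + t * (rr - lr)),
            round(lg + t * (rg - lg)),
            round(lb + t * (rb - lb)))

print(pixel_colour(0, 0), pixel_colour(WIDTH - 1, 0))  # (0, 153, 0) and (30, 30, 30)

# Strategy 1: store every pixel -> 3 bytes x 1,000,000 pixels, before compression
explicit_bytes = WIDTH * HEIGHT * 3

# Strategy 2: store only the function's inputs (two colours, two dimensions);
# the blending rule itself is shared knowledge, like a file-format spec
functional_bytes = 2 * 3 + 2 * 4

print(explicit_bytes, "bytes vs", functional_bytes, "bytes")  # 3000000 vs 14
```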

Now, in information theory (as in thermodynamics), there is an entity called entropy: it describes the amount of information you don’t have about a system. In our example, imagine that the colour #009900 blends to #1e1e1e from left to right save for a strip along the right edge, say, 50 pixels wide. Each pixel in this strip can assume a random colour. To store this image, you’d have to save it as the sum of two functions: ƒ(x, y), where x = #009900 and y = #1e1e1e, plus a second function to colour the pixels lying in the 50-px strip on the right side. Obviously this will increase the filesize of the stored function.

Going further, imagine you were told that 200,000 of the 1,000,000 pixels in the image would assume random colours. The underlying function becomes even clumsier: the sum of ƒ(x, y) and a function R that randomly selects 200,000 pixels and then randomly colours them. The outputs of this function R stand for the information about the image that you can’t have beforehand; the more such information you lack, the more entropy the image is said to have.
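To put a rough number on that missing information, here is a back-of-the-envelope sketch, assuming each random pixel draws uniformly from the full 24-bit colour space (an assumption made purely to keep the arithmetic simple):

```python
import math

RANDOM_PIXELS = 200_000
COLOURS = 2 ** 24  # 8 bits per channel, three channels

# Each uniformly random pixel contributes log2(COLOURS) bits of entropy –
# information that no generating function can reproduce and that must be
# stored (or lost) explicitly.
bits_per_pixel = math.log2(COLOURS)              # 24.0
irreducible_bits = RANDOM_PIXELS * bits_per_pixel

print(f"{irreducible_bits / 8 / 1e6:.1f} MB of irreducible data")  # ~0.6 MB
# The deterministic gradient, by contrast, still needs only its inputs: a few bytes.
```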

The example of the image was simple but sufficiently illustrative. In thermodynamics, entropy is similar to randomness vis-à-vis information: it’s the amount of thermal energy a system contains that can’t be used to perform work. From the point of view of work, it’s useless thermal energy (including heat) – something that can’t contribute to moving a turbine blade, powering a motor or motivating a system of pulleys to lift weights. Instead, it is thermal energy bound up in the system’s own disorder, unavailable for any useful end.

As it happens, this picture could help clarify, or at least make more sense of, a contemporary situation in science journalism. Earlier this week, health journalist Priyanka Pulla discovered that the Indian Council of Medical Research (ICMR) had published a press release last month, about the serological testing kit the government had developed, with the wrong specificity and sensitivity data. Two individuals she spoke to, one from ICMR and another from the National Institute of Virology, Pune, which actually developed the kit, admitted the mistake when she contacted them. Until then, neither organisation had issued a clarification, even though both individuals were likely to have known of the mistake at the time the release was published.

Assuming for a moment that this mistake was an accident (my current epistemic state is ‘don’t know’), it would indicate ICMR has been inefficient in the performance of its duties, forcing journalists to respond to it in some way instead of focusing on other, more important matters.

The reason I’m inclined to think of such work as entropy, and not as work per se, is that such instances – in which journalists are forced to respond to an event or action that has a trivial resolution – seem to be becoming more common.

It’s easy to argue, of course, that what I consider trivial may be nontrivial to someone else, and that these events and actions matter to a greater extent than I’m willing to acknowledge. However, I’m personally unable to see beyond the fact that an organisation with the resources and, currently, the importance of ICMR shouldn’t have had a hard time proof-reading a press release that was going to land in the inboxes of hundreds of journalists. The consequences of the mistake are nontrivial but the solution is quite trivial.

(There is another feature in some cases: the absence of official backing or endorsement of any kind.)

As such, it required work on the part of journalists that could easily have been spared, freeing them to direct their efforts at more meaningful, more productive endeavours. Here are four more examples of such events/actions, wherein the non-triviality is significantly and characteristically lower than that attached to formal announcements, policies, reports, etc.:

  1. Withholding data in papers – In the most recent example, ICMR researchers published the results of a seroprevalence survey of 26,000 people in 65 districts around India, and concluded that the prevalence of the novel coronavirus was 0.73% in this population. However, in their paper, the researchers included neither a district-wise breakdown of the data nor the confidence intervals for each available data-point, even though they had this information (it’s impossible to compute the results they did without these details). As a result, it’s hard for journalists to determine how reliable the results are, and whether they really support the official policies on epidemic-control interventions that will soon follow.
  2. Publishing faff – On June 2, two senior members of the Directorate General of Health Services, within India’s Union health ministry, published a paper (in a journal they edited) that, by all counts, made nonsensical claims about India’s COVID-19 epidemic becoming “extinguished” sometime in September 2020. Either the pair of authors wasn’t aware of their collective irresponsibility, or they intended (putting it benevolently) to refocus various people’s attention towards their work and away from whatever the duo deemed embarrassing. Either way, the claims in the paper wound their way into two news syndication services, PTI and IANS, and eventually onto the pages of a dozen widely read news publications in the country. In effect, there were two levels of irresponsibility at play: one embodied by the paper and the other by the syndication services’ and final publishers’ lack of due diligence.
  3. Making BS announcements – This one is fairly common: a minister or senior party official will say something silly, such as that ancient Indians invented the internet, and ride the waves of polarising debate, rapidly devolving into acrimonious flamewars on Twitter, that follow. I recently read (in The Washington Post I think, but I can’t find the link now) that it might be worthwhile for journalists to try and spend less time on fact-checking a claim than it took someone to come up with that claim. Obviously there’s no easy way to measure the time some claims took to mature into their present forms, but even so, I’m sure most journalists would agree that fact-checking often takes much longer than bullshitting (and then broadcasting). But what makes this enterprise even more grating is that it is orders of magnitude easier to not spew bullshit in the first place.
  4. Conspiracy theories – This is the most frustrating example of the lot because, today, many of the originators of conspiracy theories are television journalists, especially those backed by government support or vice versa. Even after fully acknowledging the deep-seated issues underlying both media independence and the politics-business-media nexus, numerous pronouncements by so many news anchors have amounted to the industry shooting itself in the foot. Exhibit A: shortly after Prime Minister Narendra Modi announced the start of demonetisation, a beaming news anchor told her viewers that the new 2,000-rupee notes would be embedded with chips that would transmit the notes’ location in real time, via satellite, to operators in Delhi.

Perhaps this entropy – i.e. the amount of journalistic work not available to deal with more important stories – is not only the result of a mischievous actor attempting to keep journalists, and the people who read those journalists, distracted, but also a manifestation of a whole industry’s inability to cope with the mechanisms of a new political order.

Science journalism itself has already experienced a symptom of this change when pseudoscientific ideas became more mainstream, even entering the discourse of conservative political groups, including that of the BJP. In a previous era, if a minister said something, a reporter was to drum up a short piece whose entire purpose was to record “this happened”. And such reports were the norm and in fact one of the purported roots of many journalistic establishments’ claims to objectivity, an attribute they found not just desirable but entirely virtuous: those who couldn’t be objective were derided as sub-par.

However, if a reporter were to simply report today that a minister said something, she places herself at risk of amplifying bullshit to a large audience if what the minister said was “bullshit bullshit bullshit”. So just as politicians’ willingness to indulge in populism and majoritarianism to the detriment of society and its people has changed, so also must science journalism change – as it already has with many publications, especially in the west – to ensure each news report fact-checks a claim it contains, especially if it is pseudoscientific.

In the same vein, it’s not hard to imagine that journalists are often forced to scatter by the compulsions of an older way of doing journalism, and that they should regroup on the foundations of a new agreement that lets them ignore some events so that they can better dedicate themselves to the coverage of others.

Featured image credit: Татьяна Чернышова/Pexels.

In defence of ignorance

Wish I may, wish I might
Have this wish, I wish tonight
I want that star, I want it now
I want it all and I don’t care how

Metallica, King Nothing

I’m a news editor who frequently uses Twitter to find new stories to work on or follow up. Since the lockdown began, however, I’ve been harbouring a fair amount of FOMO born, ironically, from the fact that the small pool of in-house reporters and the larger pool of freelancers I have access to are all confined to their homes, and there’s much less opportunity than usual to step out, track down leads and assimilate ground reports. And Twitter – the steady stream of new information from different sources – has simply accentuated this feeling, instead of ameliorating it by indicating that other publications are covering what I’m not. No, Twitter makes me feel like I want it all.

I’m sure this sensation is the non-straightforward product of human psychology and how social media companies have developed algorithms to take advantage of it, but I’m fairly certain (despite the absence of a personal memory to corroborate this opinion) that individual minds of the pre-social-media era weren’t marked by FOMO, and more certain that they were marked less so. I also believe one of the foremost offshoots of the prevalence of such FOMO is the idea that one can be expected to have an opinion on everything.

FOMO – the ‘fear of missing out’ – is essentially defined by a desire to participate in activities that, sometimes, we really needn’t participate in, but we think we need to simply by dint of knowing about those activities. Almost as if the brains of humans had become habituated to making decisions about social participation based solely on whether or not we knew of them, which if you ask me wouldn’t be such a bad hypothesis to apply to the pre-information era, when you found out about a party only if you were the intended recipient of the message that ‘there is a party’.

However, most of us today are not the intended recipients of lots of information. This seems especially great for news, but it also continuously undermines our ability to stay in control of what we know or, more importantly, don’t know. And when you know, you need to participate. As a result, I sometimes devolve into a semi-nervous wreck reading about the many great things other people are doing, and sharing their experiences on Twitter, and almost involuntarily develop a desire to do the same things. Now and then, I even sense the seedling of regret when I look at a story that another news outlet has published – one I thought I knew about before but simply couldn’t pursue – aided ably by the negative reinforcement of the demands on me as a news editor.

Recently, as an antidote to this tendency – and drawing upon my very successful, and quite popular, resistance to speaking Hindi simply because a misguided interlocutor presumes I know the language – I decided I would actively ignore something I’m expected to have an opinion on but for which there is otherwise no reason that I should have one. Such a public attitude exists, though it’s often unspoken, because FOMO has successfully replaced curiosity or even civic duty as the prime impetus to seek new information on the web. (Obviously, this has complicated implications, such as we see in the dichotomy of empowering more people to speak truth to power versus further tightening the definitions of ‘expert’ and ‘expertise’; I’m choosing to focus on the downsides here.)

As a result, the world seems to be filled with gas-bags, some so bloated I wonder why they don’t just float up and fuck off. And I’ve learnt that the hardest part of the antidote is to utter the words that FOMO has rendered most difficult to say: “I don’t know”.

A few days ago, I was chatting with The Soufflé when he invited me to participate in a discussion about The German Ideology that he was preparing for. You need to know that The Soufflé is a versatile being, a physicist as well as a pluripotent scholar, but more importantly The Soufflé knows what most pluripotent scholars don’t: that no matter how naturally gifted one is at learning this or that, knowing something needs not just work but also proof of work. I refused The Soufflé’s invitation, of course; my words were almost reflexive, eager to set some distance between myself and the temptation to dabble in something just because it was there to dabble in. The Soufflé replied,

I think it was in a story by Borges, one of the characters says “Every man should be capable of all ideas, and I believe that in the future he will be.” 🙂

To which I said,

That was when the world was simpler. Now there’s a perverse expectation that everyone should have opinions on everything. I don’t like it, and sometimes I actively stay away from some things just to be able to say I don’t want to have an opinion on it. Historical materialism may or may not be one of those things, just saying.

Please bear with me, this is leading up to something I’d like to include here. The Soufflé then said,

I’m just in it for the sick burns. 😛 But OK, I get it. Why do you think that expectation exists, though? I mean, I see it too. Just curious.

Here I set out my FOMO hypothesis. Then he said,

I guess this is really a topic for a cultural critic, I’m just thinking out loud… but perhaps it is because ignorance no longer finds its antipode in understanding, but awareness? To be aware is to be engaged, to be ‘caught up’ is to be active. This kind of activity is low-investment, and its performance aided by social media?

If you walked up to people today and asked “What do you think about factory-farmed poultry?” I’m pretty sure they’d find it hard to not mention that it’s cruel and wrong, even if they know squat about it. So they’re aware, they have possibly a progressive view on the issue as well, but there’s no substance underneath it.

Bingo.

We’ve become surrounded by socio-cultural forces that require us to know, know, know, often sans purpose or context. But ignorance today is not such a terrible thing. There are so many people who set out to know, know, know so many of the wrong ideas and lessons that conspiracy theories that once languished on the fringes of society have moved to the centre, and for hundreds of millions of people around the world stupid ideas have become part of political ideology.

Then there are others who know but don’t understand – a vital difference, of the sort The Soufflé pointed out, that noted scientist-philosophers have sensibly caricatured as the difference between the thing and the name of the thing. Knowing what the four laws of thermodynamics or the 100+ cognitive biases are called doesn’t mean you understand them – but it’s an extrapolation that social-media messaging’s mandated brevity often pushes us to make. Heck, I know of quite a few people who are entirely blind to this act of extrapolation, conflating the label with the thing itself and confidently penning articles for public consumption that betray a deep ignorance (perhaps as a consequence of the Dunning-Kruger effect) of the subject matter – strong signals that they don’t know it in their bones but are simply bouncing off of it like light off the innards of a fractured crystal.

I even suspect the importance and value of good reporting is lost on too many people because those people don’t understand what it takes to really know something (pardon the polemic). These are the corners into which the push to know more, all the time – often coupled to capitalist drives to produce and consume – has backed us. And to break free, we really need to embrace that old virtue that has been painted a vice: ignorance. Not the ignorance of conflation nor the ignorance of the lazy, but the cultivated ignorance of those who recognise where knowledge ends and faff begins. Ignorance that’s the anti-thing of faff.

Review: ‘Hunters’ (2020)

Just binge-watched the first season of Hunters, the bizarre Amazon Prime original about a covert group of Jews in 1970s New York City tracking down and killing Nazis whom the US government had integrated into American society under Operation Paperclip. It’s obvious how this premise could be presented through 10 hours of grit and moral dilemma, but instead we get 10 hours of grit mixed with satire and melodrama – a combination that brings to mind a certain journalist’s words from 2013, delivered as a comment on a prominent newspaper’s suddenly disagreeable design: “pastiche and mishmash”.

I’m not sure what Hunters is trying to be, beyond a vessel for Al Pacino as its protagonist and patriarch, because its story is weak and the violence is neither realistic nor purposeful; the only exception everyone seems able to agree on, with good reason, is Jerrika Hinton as Agent Morris. But worst of all, the show gives neo-Nazi characters more than ample screen-time to air their newly sharpened anti-Semitic and supremacist points of view.

Hunters seems to believe that such views are instantaneously and automatically disqualified by their implicit absurdity whereas the opposite is true. We live today in a world where conspiracy theories have moved from the fringes of society to the centre. So beyond the first time the Nazis are allowed to spew their bile, the show resembles porn for the sufficiently misguided bigot looking for a new language and new methods to assert his dominance. Makes you want to skip forward in cringe. Even the concentration camp scenes are awfully close to being voyeuristic.

To see faces where there are none

This week in “neither university press offices nor prestigious journals know what they’re doing”: a professor emeritus at Ohio University who claimed he had evidence of life on Mars, and whose institution’s media office crafted a press release without thinking twice to publicise his ‘findings’, and the paper that Nature Medicine published in 2002, cited 900+ times since, that has been found to contain multiple instances of image manipulation.

I’d thought the professor’s case would remain obscure because it’s evidently crackpot, but this morning articles from Space.com and Universe Today showed up on my Twitter feed setting the record straight: the insects the OU entomologist had found in pictures of Mars taken by the Curiosity rover were just artefacts of his (insectile) pareidolia. Some people have called this science journalism in action, but I’d say it’s somewhat offensive to check whether science journalism still works by gauging its ability, and initiative, to counter conspiracy theories – the lowest of low-hanging fruit.

The press release, which has since been taken down. Credit: EurekAlert and Wayback Machine

The juicier item on our plate is the Nature Medicine paper, the problems in which research integrity super-sleuth Elisabeth Bik publicised on November 21, and which has a science journalism connection as well.

Remember the anti-preprints article Nature News published in July 2018? Its author, Tom Sheldon, a senior press manager at the Science Media Centre, London, argued that preprints “promoted confusion” and that journalists who couldn’t bank on peer-reviewed work ended up “misleading millions”. In other words, it would be better if we got rid of preprints and journalists deferred only to the authority of peer-reviewed papers curated and published by journals, like Nature. Yet here we are today, with a peer-reviewed paper published in Nature Medicine whose review process couldn’t pick up on duplicated images. Is this just another form of pareidolia – to see a sensational result, knowing prestigious journals’ fondness for such results, where there was actually none?

(And before you say this is just one paper, read this analysis: “… data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with increasing rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank.” Or this extended critique of peer-review on Vox.)

This isn’t an argument against the usefulness, or even need for, peer-review, which remains both useful and necessary. It’s an argument against ludicrous claims that peer-review is infallible, advanced in support of the even more ludicrous argument that preprints should be eliminated to enable good journalism.