Cooperative distrust

Is there a doctrine or manifesto of cooperative distrust? Because I think that’s what we need today, in the face of reams of government data — almost all of it, in fact — that is untrustworthy, and the only way it can support our democracy is if the public response to it (if and when it becomes available in the public domain) is led by cooperative distrust: one and all distrusting it, investigating the specific way in which it has been distorted, undoing that distortion and, finally, reassessing the data.

The distrust here needs to be cooperative not to undermine the data (and thus avoid spiralling off into conspiracies) but to counteract the effects of ‘bad data’ on ethical public governance. There are some things that we the public trust our government to not undercut – but our present one has consistently undercut government while empowering the party whose members occupy it.

In the latest, and quite egregious, example, the Indian government has said an empowered committee it set up during the country’s devastating second COVID-19 outbreak to manage the supply of medical oxygen does not exist. Either the government really didn’t create the committee and lied during the second wave or it created the committee but is desperately trying to hide its proceedings now by lying. Either way, this is a new low. But more pertinently, the government is behaving this way because it seems to be intent on managing each event to the party’s utmost favour – pointing to a committee when having one is favourable, pretending it didn’t exist when it is unfavourable – without paying attention to the implications for the public memory of government action.

Specifically, the government’s views at different points of time don’t – can’t – fit on one self-consistent timeline because its reality in, say, April 2021 differs from its reality in August 2021. But to consummate its history-rewrite, it has some commentators’ help; given enough time, OpIndia and its ilk are sure to manufacture explanations for why there never was a medical oxygen committee. On the other hand, what do the people remember? Irrespective of public memory, public attention is more restricted and increasingly more short-lived, and it has always boded poorly that both sections of the press and the national government have been comfortable with taking advantage of this ‘feature’, for profits, electoral gains, etc.

Just as there is a difference between what the world really looks like and what humans see (with their eyes and brains), there is a significant difference between history and memory. Today, remembering that there was a medical oxygen committee depends simply on recent memory; one more year and remembering the same thing will also demand the inclination to distrust the government’s official line and reach for the history books (so to speak).

But the same government has also been eroding this inclination – with carrots as well as sticks – and it will continue, resulting ultimately in the asymptotic, but fallacious and anti-democratic, convergence of history and memory. Cooperative distrust can be a useful intervention here, especially as a matter of habit, to continuously reconcile history and memory (at least to the extent to which they concern facts) into a self-consistent whole at every moment, instead of whenever an overt conflict of facts arises.

Featured image credit: geralt/pixabay.

The problem with rooting for science

The idea that trusting in science involves a lot of faith, instead of reason, is lost on most people. More often than not, as a science journalist, I encounter faith through extreme examples – such as the Bloch sphere (used to represent the state of a qubit) or wave functions (‘mathematical objects’ used to understand the evolution of certain simple quantum systems). These and other similar concepts require years of training in physics and mathematics to understand. At the same time, science writers are often confronted with the challenge of making these concepts sensible to an audience that seldom has this training.

More importantly, how are science writers to understand them? They don’t. Instead, they implicitly trust scientists they’re talking to to make sense. If I know that a black hole curves spacetime to such an extent that pairs of virtual particles created near its surface are torn apart – one particle entering the black hole never to exit and the other sent off into space – it’s not because I’m familiar with the work of Stephen Hawking. It’s because I read his books, read some blogs and scientific papers, spoke to physicists, and decided to trust them all. Every science journalist, in fact, has a set of sources they’re likely to trust over others. I even place my faith in some people over others, based on factors like personal character, past record, transparency, reflexivity, etc., so that what they produce I take only with the smallest pinch of salt, and build on their findings to develop my own. And this way, I’m already creating an interface between science and society – by matching scientific knowledge with the socially developed markers of reliability.

I choose to trust those people, processes and institutions that display these markers. I call this an act of faith for two reasons: 1) it’s an empirical method, so to speak; there is no proof in theory that such ‘matching’ will always work; and 2) I believe it’s instructive to think of this relationship as being mediated by faith if only to amplify its anti-polarity with reason. Most of us understand science through faith, not reason. Even scientists who are experts on one thing take the word of scientists on completely different things, instead of trying to study those things themselves (see ad verecundiam fallacy).

Sometimes, such faith is (mostly) harmless, such as in the ‘extreme’ cases of the Bloch sphere and the wave function. It is both inexact and incomplete to think that quantum superposition means an object is in two states at once. The human brain hasn’t evolved to cognate superposition exactly; this is why physicists use the language of mathematics to make sense of this strange existential phenomenon. The problem – i.e. the inexactitude and the incompleteness – arises when a communicator translates the mathematics to a metaphor. Equally importantly, physicists are describing whereas the rest of us are thinking. There is a crucial difference between these activities that illustrates, among other things, the fundamental incompatibility between scientific research and science communication that communicators must first surmount.

As physicists over the past three or four centuries have relied increasingly on mathematics rather than the word to describe the world, physics, like mathematics itself, has made a “retreat from the word,” as literary scholar George Steiner put it. In a 1961 Kenyon Review article, Steiner wrote, “It is, on the whole, true to say that until the seventeenth century the predominant bias and content of the natural sciences were descriptive.” Mathematics used to be “anchored to the material conditions of experience,” and so was largely susceptible to being expressed in ordinary language. But this changed with the advances of modern mathematicians such as Descartes, Newton, and Leibniz, whose work in geometry, algebra, and calculus helped to distance mathematical notation from ordinary language, such that the history of how mathematics is expressed has become “one of progressive untranslatability.” It is easier to translate between Chinese and English — both express human experience, the vast majority of which is shared — than it is to translate advanced mathematics into a spoken language, because the world that mathematics expresses is theoretical and for the most part not available to our lived experience.

Samuel Matlack, ‘Quantum Poetics’, The New Atlantis, 2017

However, the faith becomes more harmful the further we move away from the ‘extreme’ examples – of things we’re unlikely to stumble on in our daily lives – and towards more commonplace ideas, such as ‘how vaccines work’ or ‘why GM foods are not inherently bad’. The harm emerges from the assumption that we think we know something when in fact we’re in denial about how it is that we know that thing. Many of us think it’s reason; most of the time it’s faith. Remember when, in Friends, Monica Geller and Chandler Bing ask David the Scientist Guy how airplanes fly, and David says it has to do with Bernoulli’s principle and Newton’s third law? Monica then turns to Chandler with a knowing look and says, “See?!” To which Chandler says, “Yeah, that’s the same as ‘it has something to do with wind’!”

The harm is to root for science, to endorse the scientific enterprise and vest our faith in its fruits, without really understanding how these fruits are produced. Such understanding is important for two reasons.

First, we trust scientists instead of presuming to know, or actually knowing, that we can vouch for their work. It would be vacuous to claim science is superior in any way to another enterprise that demands our faith when science itself also receives our faith. Perhaps more fundamentally, we like to believe that science is trustworthy because it is evidence-based and it is tested – but the COVID-19 pandemic should have clarified, if it hasn’t already, the continuous (as opposed to discrete) nature of scientific evidence, especially if we also acknowledge that scientific progress is almost always incremental. Evidence can be singular and thus clear – like a new avian species, graphene layers superconducting electrons or tuned lasers cooling down atoms – or it can be necessary but insufficient, and therefore on a slippery slope – such as repeated genetic components in viral RNA, a cigar-shaped asteroid or water shortage in the time of climate change.

Physicists working with giant machines to spot new particles and reactions – all of which are detected indirectly, through their imprints on other well-understood phenomena – have two important thresholds for the reliability of their findings: if the chance of X (say, “spotting a particle of energy 100 GeV”) being false is 0.27%, it’s good enough to be evidence; if the chance of X being false is 0.00006%, then it’s a discovery (i.e., “we have found the particle”). But at what point can we be sure that we’ve indeed found the particle we were looking for if the chance of being false will never reach 0%? One way, for physicists specifically, is to combine the experiment’s results with what they expect to happen according to theory; if the two match, it’s okay to think that even a less reliable result will likely be borne out. Another possibility (in the line of Karl Popper’s philosophy) is that a result expected to be true, and is subsequently found to be true, is true until we have evidence to the contrary. But as suitable as this answer may be, it still doesn’t neatly fit the binary ‘yes’/’no’ we’re used to, and which we often expect from scientific endeavours as well (see experience v. reality).
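The two thresholds mentioned above are the particle-physics community’s 3σ (‘evidence’) and 5σ (‘discovery’) conventions; the quoted percentages are the two-tailed probabilities of a background fluctuation at least that far from the mean. A minimal sketch of where those numbers come from, assuming normally distributed background fluctuations (the function name is mine):

```python
from math import erfc, sqrt

def two_sided_p(n_sigma: float) -> float:
    """Probability of a fluctuation at least n_sigma standard deviations
    away from the mean (both tails of a normal distribution)."""
    return erfc(n_sigma / sqrt(2))

# 3 sigma -- the 'evidence' threshold -- corresponds to roughly 0.27%
print(f"{two_sided_p(3) * 100:.2f}%")   # 0.27%
# 5 sigma -- the 'discovery' threshold -- to roughly 0.00006%
print(f"{two_sided_p(5) * 100:.5f}%")   # 0.00006%
```

The point the thresholds make numerically is the one made above in prose: the chance of being wrong shrinks but never reaches zero.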

(Minor detour: While rational solutions are ideally refutable, faith-based solutions are not. Instead, the simplest way to reject their validity is to use extra-scientific methods, and more broadly deny them power. For example, if two people were offering me drugs to suppress the pain of a headache, I would trust the one who has a state-sanctioned license to practice medicine and is likely to lose that license, even temporarily, if his prescription is found to have been mistaken – that is, by asserting the doctor as the subject of democratic power. Axiomatically, if I know that Crocin helps manage headaches, it’s because, first, I trusted the doctor who prescribed it and, second, Crocin has helped me multiple times before, so empirical experience is on my side.)

Second, if we don’t know how science works, we become vulnerable to believing pseudoscience to be science as long as the two share some superficial characteristics, like, say, the presence and frequency of jargon or a claim’s originator being affiliated with a ‘top’ institute. The authors of a scientific paper to be published in a forthcoming edition of the Journal of Experimental Social Psychology write:

We identify two critical determinants of vulnerability to pseudoscience. First, participants who trust science are more likely to believe and disseminate false claims that contain scientific references than false claims that do not. Second, reminding participants of the value of critical evaluation reduces belief in false claims, whereas reminders of the value of trusting science do not.

(Caveats: 1. We could apply the point of this post to this study itself; 2. I haven’t checked the study’s methods and results with an independent expert, and I’m also mindful that this is psychology research and that its conclusions should be taken with salt until independent scientists have successfully replicated them.)

Later from the same paper:

Our four experiments and meta-analysis demonstrated that people, and in particular people with higher trust in science (Experiments 1-3), are vulnerable to misinformation that contains pseudoscientific content. Among participants who reported high trust in science, the mere presence of scientific labels in the article facilitated belief in the misinformation and increased the probability of dissemination. Thus, this research highlights that trust in science ironically increases vulnerability to pseudoscience, a finding that conflicts with campaigns that promote broad trust in science as an antidote to misinformation but does not conflict with efforts to install trust in conclusions about the specific science about COVID-19 or climate change.

In terms of the process, the findings of Experiments 1-3 may reflect a form of heuristic processing. Complex topics such as the origins of a virus or potential harms of GMOs to human health include information that is difficult for a lay audience to comprehend, and requires acquiring background knowledge when reading news. For most participants, seeing scientists as the source of the information may act as an expertise cue in some conditions, although source cues are well known to also be processed systematically. However, when participants have higher levels of methodological literacy, they may be more able to bring relevant knowledge to bear and scrutinise the misinformation. The consistent negative association between methodological literacy and both belief and dissemination across Experiments 1-3 suggests that one antidote to the influence of pseudoscience is methodological literacy. The meta-analysis supports this.

So rooting for science per se is not just not enough, it could be harmful vis-à-vis the public support for science itself. For example (and without taking names), in response to right-wing propaganda related to India’s COVID-19 epidemic, quite a few videos produced by YouTube ‘stars’ have advanced dubious claims. They’re not dubious at first glance, if only because they purport to counter pseudoscientific claims with scientific knowledge, but they are – either for insisting on a measure of certainty in the results that neither exists nor is achievable, or for making pseudoscientific claims of their own, just wrapped up in technical lingo so they’re more palatable to those supporting science over critical thinking. Some of these YouTubers, and in fact writers, podcasters, etc., are even blissfully unaware of how wrong they often are. (At least one of them was also reluctant to edit a ‘finished’ video to make it less sensational despite repeated requests.)

Now, where do these ideas leave (other) science communicators? In attempting to bridge a nearly unbridgeable gap, are we doomed to swing only between most and least unsuccessful? I personally think that this problem, such as it is, is comparable to Zeno’s arrow paradox. To use Wikipedia’s words:

He states that in any one (duration-less) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

To ‘break’ the paradox, we need to identify and discard one or more primitive assumptions. In the arrow paradox, for example, one could argue that time is not composed of a stream of “duration-less” instants, that each instant – no matter how small – encompasses a vanishingly short but not nonexistent passage of time. With popular science communication (in the limited context of translating something that is untranslatable sans inexactitude and/or incompleteness), I’d contend the following:

  • Awareness: ‘Knowing’ and ‘knowing of’ are significantly different and, I hope, self-explanatory also. Example: I’m not fluent in the physics of cryogenic engines but I’m aware that they’re desirable because liquid hydrogen offers the highest specific impulse of all commonly used chemical rocket fuels.
  • Context: As I’ve written before, a unit of scientific knowledge that exists in relation to other units of scientific knowledge is a different object from the same unit of scientific knowledge existing in relation to society.
  • Abstraction: 1. perfect can be the enemy of the good, and imperfect knowledge of an object – especially a complicated compound one – can still be useful; 2. when multiple components come together to form a larger entity, the entity can exhibit some emergent properties that one can’t derive entirely from the properties of the individual components. Example: one doesn’t have to understand semiconductor physics to understand what a computer does.

An introduction to physics that contains no equations is like an introduction to French that contains no French words, but tries instead to capture the essence of the language by discussing it in English. Of course, popular writers on physics must abide by that constraint because they are writing for mathematical illiterates, like me, who wouldn’t be able to understand the equations. (Sometimes I browse math articles in Wikipedia simply to immerse myself in their majestic incomprehensibility, like visiting a foreign planet.)

Such books don’t teach physical truths; what they teach is that physical truth is knowable in principle, because physicists know it. Ironically, this means that a layperson in science is in basically the same position as a layperson in religion.

Adam Kirsch, ‘The Ontology of Pop Physics’, Tablet Magazine, 2020

But by offering these reasons, I don’t intend to over-qualify science communication – i.e. claim that, given enough time and/or other resources, a suitably skilled science communicator will be able to produce a non-mathematical description of, say, quantum superposition that is comprehensible, exact and complete. Instead, it may be useful for communicators to acknowledge that there is an immutable gap between common English (the language of modern science) and mathematics, beyond which scientific expertise is unavoidable – in much the same way communicators must insist that the farther the expert strays into the realm of communication, the closer they’re bound to get to a boundary beyond which they must defer to the communicator.

NCBS fracas: In defence of celebrating retractions

Continuing from here

Irrespective of Arati Ramesh’s words and actions, I find every retraction worth celebrating because of how hard-won retractions in general have been, in India and abroad. I don’t know how often papers coauthored by Indian scientists are retracted and how high or low that rate is compared to the international average. But I know that the quality of scientific work emerging from India is grossly disproportionate (in the negative sense) to the size of the country’s scientific workforce, which is to say most of the papers published from India, irrespective of the journal, contain low-quality science (if they contain science at all). It’s not for nothing that Retraction Watch has a category called ‘India retractions’, with 196 posts.

Second, it’s only recently that the global scientific community’s attitude towards retractions started changing, and even now most of it is localised to the US and Europe. And even there, there is a distinction: between retractions for honest mistakes and those for dishonest mistakes. Our attitudes towards retractions for honest mistakes have been changing. Retractions for dishonest conduct, or misconduct, have in fact been harder to secure, and continue to be.

The work of science integrity consultant Elisabeth Bik allows us a quick take: the rate at which sleuths are spotting research fraud is far higher than the rate at which journals are retracting the corresponding papers. Bik herself has often said on Twitter and in interviews that most journal editors simply don’t respond to complaints, or quash them with weak excuses and zero accountability. Between 2015 and 2019, a group of researchers identified papers that had been published in violation of the CONSORT guidelines in journals that endorsed the same guidelines, and wrote to those editors. From The Wire Science‘s report:

… of the 58 letters sent to the editors, 32 were rejected for different reasons. The BMJ and Annals published all of those addressed to them. The Lancet accepted 80% of them. The NEJM and JAMA turned down every single letter.

According to JAMA, the letters did not include all the details it required to challenge the reports. When the researchers pointed out that JAMA’s word limit for the letter precluded that, they never heard back from the journal.

On the other hand, NEJM stated that the authors of reports it published were not required to abide by the CONSORT guidelines. However, NEJM itself endorses CONSORT.

The point is that bad science is hard enough to spot, and getting stakeholders to act on it is even harder. It shouldn’t have to be, but it is. In this context, every retraction is a commendable thing – no matter how obviously warranted it is. It’s also commendable when a paper ‘destined’ for retraction is retracted sooner (than the corresponding average) because we already have some evidence that “papers that scientists couldn’t replicate are cited more”. Even if a paper in the scientific literature dies, other scientists don’t seem to be able to immediately recognise that it is dead, and they cite it in their own work as evidence of this or that thesis. These are called zombie citations. Retracting such papers is a step in the right direction – insufficient to prevent all sorts of problems associated with endeavours to maintain the quality of the literature, but necessary.

As for the specific case of Arati Ramesh: she defended her group’s paper on PubPeer in two comments that offered more raw data and seemed to be founded on a conviction that the images in the paper were real, not doctored. Some commentators have said that her attitude is a sign that she didn’t know the images had been doctored while some others have said (and I tend to agree) that Ramesh’s defence is baffling considering both of her comments came after detailed descriptions of forgery. Members of the latter group have also said that, in effect, Ramesh tried to defend her paper until it was impossible to do so, at which point she published her controversial personal statement in which she threw one of her lab’s students under the bus.

There are a lot of missing pieces here, without which we can’t ascertain the scope and depth of Ramesh’s culpability – given that she is the lab’s principal investigator (PI), that she has since started to claim that her lab doesn’t have access to the experiments’ raw data, and that the now-retracted paper says that she “conceived the experiments, performed the initial bioinformatic search for Sensei RNAs, supervised the work and wrote the manuscript”.

[Edit, July 11, 2021, 6:28 pm: After a conversation with Priyanka Pulla, I edited the following paragraph. The previous version appears below, struck through.]

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? Yes and no… Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. This is the ‘no’ part. The ‘yes’ arises from Ramesh’s actions on PubPeer, to ‘keep going until one can go no longer’, so to speak, which suggests, among other things – and I’m shooting in the dark here – that she somehow couldn’t spot the problem right away. So giving her credit for the retraction would set a low, if also weird, bar; I think credit belongs on this count with the fastidious commenters of PubPeer. Ramesh would still have had to sign off on a document saying “we’ve agreed to have the paper retracted”, as journals typically require, but perhaps we can also speculate as to whom we should really thank for this outcome – anyone/anything from Ramesh herself to the looming threat of public pressure.

Against this messy background, are we setting a low bar by giving Arati Ramesh brownie points for retracting the paper? No. Even if it were the case that someone defended the indefensible to an irrational degree, and at the moment of realisation offered to take the blame while also explicitly blaming someone else, the paper was retracted. Perhaps we can speculate as to whom we should thank for this outcome – Arati Ramesh herself, someone else in her lab, members of the internal inquiry committee that NCBS set up, some other members of the institute or even the looming threat of public pressure. We don’t have to give Ramesh credit here beyond her signing off on the decision (as journals typically require) – and we still need answers on all the other pieces of this puzzle, as well as accountability.

A final point: I hope that the intense focus that the NCBS fracas has commanded – and could continue to command, considering Bik has flagged one more paper coauthored by Ramesh and others have flagged two coauthored by her partner Sunil Laxman (published in 2005 and 2006), both on PubPeer for potential image manipulation – will widen to encompass the many instances of misconduct popping up every week across the country.

NCBS, as we all know, is an elite institute as India’s centres of research go: it is well-funded (by the Department of Atomic Energy, a government body relatively free from bureaucratic intervention), staffed by more-than-competent researchers and students, has published commendable research (I’m told), has a functional outreach office, and its scientists often feature in press reports commenting on this or that study. As such, it is overrepresented in the public imagination and easily gets attention. However, the problems assailing NCBS vis-à-vis the reports on PubPeer are not unique to the institute, and should in fact force us to rethink our tendency (mine included) to give such impressive institutes – often, and by no coincidence, Brahmin strongholds – the benefit of the doubt.

(1. I have no idea how things are at India’s poorly funded state and smaller private universities, but even there – and in fact at the less elite but still “up there” institutes, like the IISERs – Brahmins have been known to dominate the teaching and professorial staff, if not the students, and have still been found guilty of misconduct, often sans accountability. 2. There’s a point to be made here about plagiarism, the graded way in which it is ‘offensive’, access to good quality English education for people of different castes in India, a resulting access to plus inheritance of cultural and social capital, and the funneling of students with such capital into elite institutes.)

As I mentioned earlier, Retraction Watch has an ‘India retractions’ category (although, to be fair, there are also similar categories for China, Italy, Japan and the UK, but not for France, Russia, South Korea or the US; these countries ranked 1-10 on the list of countries with the most scientific and technical journal publications in 2018). Its database lists 1,349 papers with at least one author affiliated with an Indian institute that have been retracted – five of them since the NCBS paper met its fate. The latest one was retracted on July 7, 2021 (after being published on October 16, 2012). Again, these are just instances in which a paper was retracted. Further up the funnel, we have retractions that Retraction Watch missed, papers that editors are deliberating on, complaints that editors have rejected, complaints that editors have ignored, complaints that editors haven’t yet received, and journals that don’t care.

So, retractions – and retractors – deserve brownie points.

Pseudoscientific materials and thermoeconomics

The Shycocan Corp. took out a full-page jacket ad in the Times of India on June 22 – the same day The Telegraph (UK) had a story about GBP 2,900 handbags by Gucci that exist only online, in some videogame. The Shycocan product’s science is questionable, at best, though its manufacturers have disagreed vehemently with this assessment. (Anusha Krishnan wrote a fantastic article for The Wire Science on this topic). The Gucci ‘product’ is capitalism redigesting its own bile, I suppose – a way to create value out of thin air. This is neither new nor particularly exotic: I have paid not inconsiderable sums of money in the past for perks inside videogames, often after paying for the games themselves. But thinking about both products led me to a topic called thermoeconomics.

This may be too fine a point but the consumerism implicit in both the pixel-handbags and Shycocan and other medical devices of unproven efficacy has a significant thermodynamic cost. While pixel-handbags may represent a minor offense, so to speak, in the larger scheme of things, their close cousins, the non-fungible tokens (NFTs) of the cryptocurrency universe, are egregiously energy-intensive. (More on this here.) NFTs represent an extreme case of converting energy into monetary value, bringing into sharp focus the relationships between economics and thermodynamics that we often ignore because they are too muted.

Free energy, entropy and information are three of the many significant concepts at the intersection of economics and thermodynamics. Free energy is the energy available to perform useful work. Entropy is energy that is disorderly and can’t be used to perform useful work. Information, a form of negative entropy, and the other two concepts taken together are better illustrated by the following excerpt, from this paper:

Consider, as an example, the process of converting a set of raw materials, such as iron ore, coke, limestone and so forth, into a finished product—a piece of machinery of some kind. At each stage the organization (information content) of the materials embodied in the product is increased (the entropy is decreased), while global entropy is increased through the production of waste materials and heat. For example:

Extraction activities start with the mining of ores, followed by concentration or beneficiation. All of these steps increase local order in the material being processed, but only by using (dissipating) large quantities of available work derived from burning fuel, wearing out machines and discarding gangue and tailings.

Metallurgical reduction processes mostly involve the endothermic chemical reactions to separate minerals into the desired element and unwanted impurities such as slag, CO2 and sulfur oxides. Again, available work in the form of coal, oil or natural gas is used up to a much greater extent than is embodied in metal, and there is a physical wear and tear on machines, furnaces and so forth, which must be discarded eventually.

Petroleum refining involves fractionating the crude oil, cracking heavier fractions, and polymerizing, alkylating or reforming lighter ones. These processes require available work, typically 10% or so of the heating value of the petroleum itself. Petrochemical feedstocks such as olefins or alcohols are obtained by means of further endothermic conversion processes. Inorganic chemical processes begin by endothermic reduction of commonplace salts such as chlorides, fluorides or carbonates into their components. Again, available work (from electricity or fuel) is dissipated in each step.

Fabrication involves the forming of materials into parts with desirable forms and shapes. The information content, or orderliness, of the product is increased, but only by further expending available work.

Assembly and construction involves the linking of components into complex subsystems and systems. The orderliness of the product continues to increase, but still more available work is used up in the processes. The simultaneous buildup of local order and global entropy during a materials processing sequence is illustrated in figure 4. Some, but not all, of the orderliness of the manufactured product is recoverable as thermodynamically available work: Plastic or paper products, for example, can be burned as fuel in a boiler to recover their residual heating value and convert some of that to work again. Using scrap instead of iron ore in the manufacture of steel or recycled aluminum instead of bauxite makes use of some of the work expended in the initial refining of the ore.
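The ledger the excerpt describes can be stated compactly with standard textbook relations (these equations are my gloss on the excerpt, not from the quoted paper):

```latex
% Helmholtz free energy: the portion of internal energy U available as useful work
F = U - TS
% The second law: any local ordering (entropy decrease) must be paid for globally
\Delta S_{\mathrm{global}} = \Delta S_{\mathrm{local}} + \Delta S_{\mathrm{surroundings}} \geq 0
% Landauer's principle ties information to the same ledger: erasing one bit
% of information dissipates at least
E_{\mathrm{min}} = k_B T \ln 2
```

Each stage of manufacturing lowers the product’s local entropy (raises its information content) only by drawing down free energy and raising entropy in the surroundings, which is exactly the trade-off the excerpt walks through.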

Some years ago, I read an article about a debate between a physicist and an economist; I’m unable to find the link now. The physicist says infinite economic growth is impossible because the laws of thermodynamics forbid it. Eventually, we will run out of free energy and entropy will become more abundant, and creating new objects will exact very high, and increasing, resource costs. The economist counters that what a person values doesn’t have to be encoded as objects – that older things can re-acquire new value or become more valuable, or that we will be able to develop virtual objects whose value doesn’t incur the same costs that their physical counterparts do.

This in turn recalls the concept of eco-economic decoupling – the idea that we can continue and/or expand economic activity without increasing environmental stresses and pollution at the same time. Is this possible? Are we en route to achieving it?

The Solar System – taken to be the limit of Earth’s extended neighbourhood – is very large but still finite, and the laws of thermodynamics stipulate that it can thus contain a finite amount of energy. What is the maximum number of dollars we can extract through economic activities using this energy? A pro-consumerist brigade believes absolute eco-economic decoupling is possible; at least one of its subscribers, a Michael Liebreich, has written that in fact infinite growth is possible. But NFTs suggest we are not at all moving in the right direction – nor does any product that extracts a significant thermodynamic cost with incommensurate returns (and not just economic ones). Pseudoscientific hardware – by which I mean machines and devices that claim to do something but have no evidence to show for it – belongs in the same category.

This may not be a productive way to think of problematic entities right now, but it is still interesting to consider that, because we have a finite amount of free energy, and because the efficiency with which we use it is closely tied to humankind’s climate crisis, pseudoscientific hardware can be said to have a climate cost. In fact, the severity of the climate crisis means that even if we had an infinite amount of free energy, how efficiently we use it would still be what matters right now. I already think of flygskam in this way, for example: airplane travel is not pseudoscientific, but it can be irrational given its significant carbon footprint, and the privileged among us need to undertake it only with good reason. (I don’t agree with the idea the way Greta Thunberg does, but that’s a different article.)

To quote physicist Tom Murphy:

Let me restate that important point. No matter what the technology, a sustained 2.3% energy growth rate would require us to produce as much energy as the entire sun within 1400 years. A word of warning: that power plant is going to run a little warm. Thermodynamics require that if we generated sun-comparable power on Earth, the surface of the Earth—being smaller than that of the sun—would have to be hotter than the surface of the sun! …

The purpose of this exploration is to point out the absurdity that results from the assumption that we can continue growing our use of energy—even if doing so more modestly than the last 350 years have seen. This analysis is an easy target for criticism, given the tunnel-vision of its premise. I would enjoy shredding it myself. Chiefly, continued energy growth will likely be unnecessary if the human population stabilizes. At least the 2.9% energy growth rate we have experienced should ease off as the world saturates with people. But let’s not overlook the key point: continued growth in energy use becomes physically impossible within conceivable timeframes. The foregoing analysis offers a cute way to demonstrate this point. I have found it to be a compelling argument that snaps people into appreciating the genuine limits to indefinite growth.
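Murphy’s “within 1400 years” figure is easy to sanity-check with back-of-the-envelope numbers. A minimal sketch, where the solar luminosity and present world power use are round assumed values rather than precise measurements:

```python
import math

# How long until 2.3%/year growth in energy use matches the Sun's total output?
SUN_LUMINOSITY_W = 3.8e26   # total power output of the Sun, watts
WORLD_POWER_W = 1.8e13      # present human energy use, roughly 18 TW
GROWTH_RATE = 0.023         # 2.3% per year, Murphy's assumed rate

# Solve WORLD_POWER_W * (1 + r)^t = SUN_LUMINOSITY_W for t
years = math.log(SUN_LUMINOSITY_W / WORLD_POWER_W) / math.log(1 + GROWTH_RATE)
print(round(years))  # about 1,350 years -- consistent with "within 1400"
```

Varying the assumed inputs by a factor of two shifts the answer by only a few decades, which is why the absurdity Murphy points to is so robust.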

And … And Then There’s Physics:

As I understand it, we can’t have economic activity that simply doesn’t have any impact on the environment, but we can choose to commit resources to minimising this impact (i.e., use some of the available energy to avoid increasing entropy, as Liebreich suggests). However, this would seem to have a cost and it seems to me that we mostly spend our time convincing ourselves that we shouldn’t yet pay this cost, or shouldn’t pay too much now because people in the future will be richer. So, my issue isn’t that I think we can’t continue to grow our economies while decoupling economic activity from environmental impact, I just think that we won’t.

A final point: information is considered negative entropy because it describes certainty – something we know that allows us to organise materials in such a way as to minimise disorder. However, what we consider to be useful information, thanks to capitalism, nationalism (it is not for nothing that Shycocan’s front-page ad ends with a “Jai Hind”), etc., has become all wonky, and all forms of commercialised pseudoscience are good examples of this.

On the lab-leak hypothesis

One problem with the debate over the novel coronavirus’s “lab leak” origin hypothesis is a problem I’m starting to see in quite a few other areas of pandemic-related analysis and discussion: no one will say why others are wrong, even as they insist the others are, while going on at length about why they themselves are right.

Shortly after I read Nicholas Wade’s 10,000-word article on Medium, I pitched a summary to a medical researcher, whose first, and for a long time only, response was one word: “rubbish”. Much later, he told me about how the virus could have evolved and spread naturally. Even if I couldn’t be sure if he was right, having no way to verify the information except to bounce it off a bunch of other experts, I was sure he thought he was right. But how was Wade wrong? I suspect for many people the communication failures surrounding this (or a similar) question may be a sticking point.

(‘Wade’, after the first mention, is shorthand for an author of a detailed, non-trivial article that considers the lab-leak hypothesis, irrespective of what conclusion it reaches. I’m cursorily aware of Wade’s support for ‘scientific racism’, and by using his name, I don’t condone any of his views on these and other matters. Other articles to read on the lab-leak topic include Nicholson Baker’s in Intelligencer and Katherine Eban’s in Vanity Fair.)

We don’t know how the novel coronavirus originated, nor are we able to find out easily. There are apparently two possibilities: zoonotic spillover and lab leak (both are hypotheses, even though the qualification has been attached more prominently to the latter).

Quoting two researchers writing in The Conversation:

In March 2020, another article published in Nature Medicine provided a series of scientific arguments in favour of a natural origin. The authors argued: The natural hypothesis is plausible, as it is the usual mechanism of emergence of coronaviruses; the sequence of SARS-CoV-2 is too distantly related from other known coronaviruses to envisage the manufacture of a new virus from available sequences; and its sequence does not show evidence of genetic manipulation in the laboratory.

Proponents of the lab-leak hypothesis (minus the outright-conspiratorial) – rather more broadly, the opponents of ‘zoonotic-spillover’ evangelism – have argued that lab leaks are more common than we think, that the novel coronavirus has some features that suggest the presence of a human hand, and that a glut of extra-scientific events points towards suspicious research and communication by members of the Wuhan Institute of Virology.

However, too many counterarguments to Wade’s and others’ articles along similar lines have been to brush the allegations aside, as if they were so easily dismissed – like my interlocutor’s “rubbish”. And it’s an infuriating response. To me at least (as someone who’s been at the receiving end of many such replies), it smacks of an attitude that seems to say (a) “you’re foolish to take this stuff seriously,” (b) “you’re being a bad journalist,” (c) “I doubt you’ll understand the answer,” and (d) “I think you should just trust me”.

I try not to generalise (c) and (d) to maintain my editorial equipoise, so to speak – but it’s been hard. Too many scientists are going around insisting we should simply listen to them, while making no effort to ensure non-experts can understand what they’re saying, much less admitting the possibility that they’re kidding themselves (although I do think “science is self-correcting” is a false adage). In fact, proponents of the zoonotic-spillover hypothesis like to claim that their idea is more likely, but this is often a crude display of scientism: “it’s more scientific, therefore it must be true”. The arguments in favour of this hypothesis are also increasingly underrepresented outside the scientific literature, which isn’t a trivial consideration because the disparity could exacerbate the patronising tone of (c) and (d), and render scientists less trustworthy.

Science communication and/or journalism are conspicuous by their absence here, but I also think the problem with the scientists’ attitude is broader than that. Short of engaging directly in the activities of groups like DRASTIC, journalists take a hit when scientists behave as if pedagogic communication is a waste of time. More scientists should make more of an effort to articulate themselves better. It isn’t wise to dismiss something that so many take seriously – although this is also a slippery slope: apply it as a general rule, and soon you may find yourself having to debunk in great detail a dozen ridiculous claims a day. Perhaps we can make an exception for the zoonotic-spillover v. lab-leak contest? Or is there a better heuristic? I certainly think there should be one instead of none at all.

Proving the absence of something is harder than proving its presence, and that may be why everyone is talking about why they’re right. However, in the process, many of these people seem to forget that what they haven’t denied is still firmly in the realm of the possible. Actually, they don’t just forget it but entirely shut down the idea. This is why I agree with Dr Vinay Prasad’s words in MedPage Today:

If it escaped due to a wet market, I would strongly suggest we clean up wet markets and improve safety in BSL laboratories because a future virus could come from either. And, if it was a lab leak, I would strongly suggest we clean up wet markets and improve safety in BSL 3 and 4 … you get the idea. Both vulnerabilities must be fixed, no matter which was the culprit in this case, because either could be the culprit next time.

His words provide an important counterweight of sorts to a tendency from the zoonotic-spillover quarter to treat articles about the lab-leak possibility as a monolithic allegation instead of as a collection of independent allegations that aren’t all equally unlikely. For example, the Vanity Fair, Newsweek and Wade’s articles have all called into question safety levels at BSL 3 and 4 labs and whether their pathogen-handling protocols sufficiently justify the sort of research we think is okay to conduct, and have raised allegations that various parties sought to suppress information about the activities at such facilities housed in the Wuhan Institute.

I don’t buy the lab-leak hypothesis and I don’t buy the zoonotic-spillover hypothesis; in fact, I don’t personally care for the answer because I have other things to worry about, but I do buy that the “scientific illiberalism” that Dr Prasad talks about is real. And it’s tied to other issues doing the rounds now as well. For example, Newsweek‘s profile of DRASTIC’s work has been a hit in India thanks to the work of ‘The Seeker’, the pseudonym for a person in their 20s living in “Eastern India”, who uncovered some key documents that cast suspicion on Wuhan Institute’s Shi Zhengli’s claims vis-à-vis SARS-CoV-2. And two common responses to the profile (on Twitter) have been:

  1. “In 2020, when people told me about the lab-leak hypothesis, I dismissed them and argued that they shouldn’t take WhatsApp forwards seriously.”
  2. “Journalism is redundant.”

(1) is said as if it’s no longer true – but it is. The difference between the WhatsApp forwards of February-April 2020 and the articles and papers of 2021 is the body of evidence each set of claims was based on. Luc Montagnier was wrong when he spoke against the zoonotic-spillover hypothesis last year simply because his reasoning was wrong. The reasons and the evidence matter; otherwise, you’re no better than a broken clock. Facile WhatsApp forwards and right-wingers’ ramblings continue to deserve to be treated with extreme scepticism.

Just because a conspiracy theory is later proven to have merit doesn’t make it not a conspiracy theory; their defining trait is belief in the absence of evidence. The most useful response, here, is not to get sucked into the right-wing fever swamps, but to isolate legitimate questions, and try and report out the answers.

Columbia Journalism Review, April 15, 2020

The second point is obviously harder to push back against, considering it doesn’t stake out a new position so much as reinforce one that certain groups of people have harboured for many years now. It’s one star aligning out of many, so its falling out of place won’t change believers’ minds – and because the believers’ minds will be unchanged, it will promptly fall back into place. This said, apart from the numerous other considerations, I’ll say that investigations aren’t the preserve of journalists, and that one story investigated to a greater extent by non-journalists – especially towards a conclusion you probably wish to be true – says little about journalism as a whole.

In addition, the picture is complicated by the fact that when people find that they’re wrong, they almost never admit it – especially if other valuable things, like their academic or political careers, are tied up with their reputation. On occasion, some turn to increasingly technical arguments, or close ranks and advertise a false ‘scientific consensus’ (insofar as such consensus can exist as the result of any exercise less laborious than the one vis-à-vis anthropogenic global warming), or both. ‘Isolating the legitimate questions’ here – from both sides, mind you – needs painstaking work that only journalists can and will do.

Featured image credit: Ethan Medrano/Pexels.

The problems with one-shot Covishield

NDTV quoted unnamed sources in the Indian government saying it will be conducting a study to assess the feasibility of deploying the Covishield vaccine in a single-dose regimen instead of continuing the extant double-dose regimen.

At any other time, such a statement may have been sufficient to believe the government would organise and conduct a well-designed trial, publicise the findings and revise policy (or not) to stay in line with the findings, informed by socio-economic considerations. But the last 15 months have thrown up enough incidents of public-health malpractice on the state’s part to make such hope outright stupid. I’m fairly certain, especially if the vaccine shortage persists and the outbreaks on an upward trajectory in some parts of the country at the moment aren’t tamped down quickly, that the government is going to conduct a trial, withhold its methods and findings, and push through a policy to deploy Covishield as a single-dose shot.

Of course I would be happy to be proven wrong – but in the event that I’m not, I’m already filled with a mix of sadness and fury. The government seems set on finding new ways to play with our lives.

News that the government is going to conduct a feasibility study broke to the accompaniment of a suggestion, by NDTV’s same unnamed sources, that Covishield was originally intended as a single-dose vaccine and that it was later found to be better as a two-dose vaccine. This is ridiculous to begin with, considering Covishield’s phase 3 trials around the world, conducted by AstraZeneca and the University of Oxford, tested the two-dose regimen.

But it is rendered more ridiculous because Public Health England (PHE) reported just a week ago that two doses of Covishield are necessary for a recipient to be sufficiently protected against infections by the B.1.617.2 variant. The PHE study found that one dose of Covishield had an efficacy of 33% against symptomatic COVID-19 caused by the variant, increasing to 60% after both doses. Has the Indian government forgotten that B.1.617.2 is becoming the more common variant circulating in the country? Or is laundering the national party’s image more important than the safety of hundreds of millions? (The latter is entirely plausible: in the last seven years, the country has seldom been larger than the supreme leader’s ego.)

The PHE study isn’t without its shortcomings – but I’d be more inclined to pay attention to them at this moment if:

  1. I didn’t have to contend with the non-trivial possibility that the Indian government will bury, obfuscate and/or twist the data arising from its assessment, and therefore we (the public) need to bank on whatever else is available;
  2. I didn’t have to contend with the fact that data from Covaxin’s phase 3 trial (which apparently went past its final interim-analysis endpoint in April) and Covishield’s bridging trial (which IIRC concluded on March 24) are still missing from the public domain;
  3. we could access large-scale effectiveness data of the two vaccines (the National Institute of Epidemiology, Chennai, is set to begin collecting such data this week); and
  4. there was any other reliable data at the moment about the two vaccines vis-à-vis the different variants circulating in India.

There is another problem. If Covishield is administered as a single-dose vaccine, its efficacy against symptomatic COVID-19 caused by B.1.617.2 viral particles is 33% – which is below the WHO’s recommended efficacy threshold of 50% for these vaccines. If the Indian government formalises the ‘Covishield will be one dose’ policy and if the B.1.617.2 variant continues its conquest, will the vaccine, as it is used in India, lose its place on the WHO’s vaccine list? And what of the consequences that will follow, including other countries becoming reluctant to admit Indians who received one dose of Covishield and one dose of the BJP’s way of doing things?

I would be wary, too. The longer the particles of the novel coronavirus are able to circulate within a population, the more opportunities they will have to mutate, and the more mutations they will accumulate. So any population that allows the virus to persist for longer automatically increases the chance of engendering potentially deadlier variants within its borders. One-dose Covishield plus B.1.617.2, and other variants, will set just such a stage – compounded by the fact that Serum Institute, which makes Covishield, has a much larger production capacity than Bharat Biotech, the maker of Covaxin.

(The PHE study also found that Covishield and the Pfizer-BioNTech vaccine had an efficacy of “around 50%” against symptomatic COVID-19 caused by an infection of the B.1.1.7 variant.)

In fact, the government could have made more sense today by saying it would prioritise the delivery of the first dose to as many people as possible before helping people get the second one. This way the policy would be in line with the most recent scientific findings, be synonymous with a single-dose campaign and keep the door open to vaccinating people with both doses in a longer span of time (instead of closing that door entirely), while admitting that the vaccine shortage is real and crippling – something most of us know anyway. But no; Vishwaguru first.

On crypto-art, racism and outcome fantasies

If you want to find mistakes with something, you’ll be able to find them if you try long enough. That doesn’t inherently make the thing worthless. The only exception I’ve encountered to this truism is the prevailing world-system – which is both fault-ridden and, by virtue of its great size and entrenchment, almost certainly unsalvageable.

I was bewitched by cryptocurrencies when I first discovered them, in 2008. I wrote an op-ed in The Hindu in 2014 advocating for the greater use of blockchain technology. But between then and 2016 or so, I drifted away as I found how the technology was also drifting away from what I thought it was to what it was becoming, and as I learnt more about politics, social systems and the peopled world, as it were — particularly through the BJP’s rise to power in 2014 and subsequent events that illustrated how the proper deployment of an idea is more important than the idea itself.

I still have a soft spot for cryptocurrencies and related tokens, although it’s been edging into pity. I used to understand how they could be a clever way for artists to ensure they get paid every time someone, somewhere downloads one of their creations. I liked that tokens could fractionate ownership of all kinds of things – even objects in the real world. I was open to being persuaded that fighting racism in the crypto-art space could have a top-down reformatory effect. But at the same time, I was – and remain – keenly aware that fantasies of outcomes are cheap. Today, I believe cryptocurrencies need to go; their underlying blockchains may have more redeeming value but they need to go, too, because more than being a match for real-world cynicism, they often enable it.

§

Non-fungible tokens (NFTs) are units of data that exist on the blockchain. According to Harvard Business Review:

The technology at the heart of bitcoin and other virtual currencies, blockchain is an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.

NFTs have been in the news because the auction house Christie’s recently sold a (literal) work of art secured as an NFT for a stunning $69.3 million (Rs 501.37 crore). The NFT here is a certificate of sorts attesting to the painting’s provenance, ownership and other attributes; it exists as a token that can be bought or sold in transactions performed over the blockchain – just like bitcoins can be, with the difference that while there are millions of bitcoins, each NFT is permanently associated with the artwork and is necessarily one of its kind. In this post, I’m going to address an NFT and its associated piece of art as a single, inseparable entity. If you read about NFTs in other contexts, they’re probably just referring to units of data.

The reason a combined view of the two is fruitful here is that AFP has called crypto user Metakovan’s winning bid “a shot fired for racial equality”, presumably in the crypto and/or crypto-art spaces. (Disclaimer: I went to college with Metakovan but we haven’t been in touch for many years. If I know something, it’s by Googling.) He and his collaborator also wrote on Substack:

Imagine an investor, a financier, a patron of the arts. Ten times out of nine, your palette is monochrome. By winning the Christie’s auction of Beeple’s Everydays: The First 5000 Days, we added a dash of mahogany to that color scheme. … The point was to show Indians and people of color that they too could be patrons, that crypto was an equalizing power between the West and the Rest, and that the global south was rising.

This is a curious proposition that’s also tied to the NFT as an idea. The ‘non-fungible’ of an NFT means the token cannot be replaced by another of its kind; it’s absolutely unique and can only be duplicated by forging it – which is very difficult. So the supply of NFTs is by definition limited and can be priced through speculation in the millions, if need be. NFTs are thus “ownership certificates for digital art that imbue” their owners “with demonstrable scarcity,” as one writer put it. This is also where the picture gets confusing.

First, the Christie’s auction was really one wealth-accumulator purchasing a cultural product – one created by consuming X watts of power – from an artist who admits he’s cashing in on a bubble, paid for using a new form of money that the buyer is promoting, and whose value the buyer is stewarding, in a quantity determined by the social priorities of other wealth-accumulators, plus allegations of some other shady stuff (although legal experts have also said that there appear to be no “apparent” signs of wrongdoing). What is really going on here?

Minting an NFT is an energy-intensive process. For example, you can acquire bitcoin, which is an example of a fungible token, by submitting verifiable proof of work to a network of users transacting via a blockchain. This work is in the form of solving a complex mathematical problem. Every time you solve a problem to unlock some bitcoins, the next problem automatically becomes harder. So in time, acquiring new bitcoins becomes progressively more difficult, and requires progressively more computing power. Once some proof of work is verified, the blockchain – being the distributed ledger – logs the token’s existence and the facts of its current ownership.
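The proof-of-work idea in this paragraph can be sketched in a few lines. The toy version below just brute-forces a SHA-256 hash with a given number of leading zero hex digits; real Bitcoin mining double-hashes block headers against a dynamically adjusted target, so treat this as an illustration of the principle, not the actual protocol:

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty` leading zero hex digits.

    Each extra digit multiplies the expected work by 16, which is the sense
    in which the problem 'automatically becomes harder' as difficulty rises.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# A toy difficulty so the search finishes instantly; Bitcoin's real target
# currently demands on the order of 19 leading zero hex digits.
nonce = mine("block-payload", difficulty=3)
digest = hashlib.sha256(f"block-payload:{nonce}".encode()).hexdigest()
print(nonce, digest)
```

The nonce is trivial to verify (one hash) but expensive to find (thousands or trillions of hashes), and that asymmetry, scaled up across millions of machines, is where the energy bill comes from.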

‘NFTs are bad for the climate’ is a simple point, but the argument becomes stronger with some numbers. According to one estimate, the carbon footprint of one ether transaction (ether is another fungible token, transacted via the Ethereum blockchain) is 29.5 kg CO2; that of one bitcoin transaction is 359.04 kg CO2. The annual power consumption of the international bitcoin mining and trading enterprise is comparable to that of small countries. Consider what Memo Akten said here, connecting NFTs, fungible tokens and the “crypto-art” in between: “Artists should be able to release hundreds of digital artworks” – but “there is absolutely no reason that releasing hundreds of digital artworks should have footprints of hundreds of MWh.”

Of course, it’s important to properly contextualise the energy argument due to nuances in how and why bitcoin is traded. In February this year, Coindesk, a news outlet focusing on cryptocurrencies, rebutted an article in Bloomberg that claimed bitcoin was a “dirty business”, alluding to its energy consumption. Coindesk claimed instead that bitcoins and the blockchain do more than just what dollars stand for, so saying bitcoin is “dirty” based on Visa’s lower energy consumption is less useful than comparing it to the energy, social and financial costs of mining, processing, transporting and securing gold. (Visa secures credit and debit card transactions to the same end as, though not in the same way that, a blockchain secures transactions using consensus algorithms.)

However, the point about energy consumption still stands because comparing bitcoins and the blockchain to the Fedwire RTGS system plus banks, which together do a lot more than what Visa does and could be a fairer counterpart in the realm of bona fide money, really shows up bitcoin’s disproportionate demands. Fintech analyst Tim Swanson has a deep dive on this topic; please read it. For those who’d rather not, let me quote two points. First:

“The participating computing infrastructure for Fedwire involves between ten and twenty thousand computers, none of which need to generate [power-guzzling cryptographic safeguards]. Its participants securely transfer trillions of dollars in real value each day. And most importantly: Fedwire does not take the energy footprint of Egypt or the Netherlands to do so. … the more than 2 million machines used in bitcoin mining alone consume as much energy as Egypt or the Netherlands consumes each year. And they do so while simultaneously only securing a relatively small amount of payments, less than $4 billion last year.”

The energy consumption, and the second point, shows up when users need to protect against a vulnerability of consensus-based transactions, called the Sybil attack (a.k.a. pseudospoofing). Consider the following reductively simple consensus-generating scenario. If there is a group of 10 members and most of them agree that K is true, then K is said to be true. But one day, another member joins the group and also signs on 14 of his friends. When the group meets again, the 15 new people say K is false while the original 10 say K is true, so finally K is said to be false. The first 10 members later find out that the 15 who joined were all in cahoots, and by manufacturing a majority opinion despite not being independent actors, they compromised the group’s function. This is the Sybil attack.
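The group-of-10 scenario above translates directly into code. A minimal sketch of naive majority voting, with the member counts taken from the example (the claim “K” and the counts are illustrative, not from any real protocol):

```python
# Naive consensus: a claim holds if a strict majority of identities vote for it.
def consensus(votes):
    yes = sum(1 for v in votes if v)
    return yes > len(votes) / 2

honest = [True] * 10                 # the original members: K is true
print(consensus(honest))             # True

sybils = [False] * 15                # one attacker posing as 15 identities
print(consensus(honest + sybils))    # False -- the forged majority wins
```

Because creating identities costs the attacker nothing here, the vote flips; attaching a real resource cost to each “identity” (proof of work) is precisely the countermeasure the next paragraph describes.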

Because the blockchain secures transactions by recursively applying a similar but more complicated logic, it’s susceptible to being ‘hacked’ by people who can deceptively conjure evidence of new but actually non-existent transactions and walk away with millions. To avoid this loophole without losing the blockchain’s decentralised nature, its inventor(s) forced all participants in the network to show proof of work – which is the mathematical problem they need to solve and the computing power and related costs they need to incur.

Proof of work here is fundamentally an insurance against scammers and spammers, achieved by demanding the ability to convert electrical energy into verifiable digital information – and this issue is in turn closer to the real world than the abstracted concepts of NFTs and blockchain. The problem in the real world is that access to crypto assets is highly unequal, being limited by access to energy, digital literacy, infrastructure and capital.* The flow of all of these resources is to this day controlled by trading powers that have profited from racism in the past and still perpetuate the resulting inequality by enforcing patents, trade agreements, import/export restrictions – broadly, through protectionism.

* Ethereum’s plan to transition from a proof-of-work to a proof-of-stake system could lower energy consumption, but this is an outcome fantasy and also still leaves the other considerations.

So even when Black people talk about cryptocurrencies’ liberating potential for their community, I look at my wider South and Southeast Asian neighbourhood and feel like I’m in a whole other world. Here, replacing banks’ or credit-card companies’ centralised transaction verification services with a blockchain on every person’s computer is more of the same because most people left out by existing financial systems will also be left behind by blockchain technology.

Metakovan’s move was ostensibly about getting the world’s attention and making it think about racism in, for some reason, art patronage. And it seems opportunistic more than anything else – a “shot fired” to improve one’s own opportunities for profit in the crypto space rather than to undermine the structural racism and bigotry embedded in the whole enterprise. This is a system that owes part of its current success to the existence of social and economic inequalities, and that has laboured over the last few decades to exploit cheap labour and poor governance in other, historically beleaguered parts of the world to entrench technocracy and scientism over democracy and public accountability.

I’m talking about Silicon Valley and Big Tech whereas Metakovan labours in the cryptocurrency space, but they are not separate. Even though cryptocurrencies are much younger than the decades of policy that shaped Silicon Valley’s ascendancy, they have benefited immensely from the tech space’s involvement and money: $20 billion in “initial coin offerings” since 2017 plus a “wave of financial speculation”, for starters. In addition, cryptocurrencies have also helped hate groups raise money – although I’m also inclined to blame subpar regulation for such a thing being possible.

I’ll get on board a good cryptocurrency value proposition – but one is yet to show itself. The particular case of ‘Everydays’ and the racism angle is what rankles most. “Depending on your point of view, crypto art could be the ultimate manifestation of conceptual art’s separation of the work of art from any physical object,” computer scientist Aaron Hertzmann wrote. “On the other hand, crypto art could be seen as reducing art to the purest form of buying and selling for conspicuous consumption.” Metakovan’s “shot” is the latter – a gesture closer to a dog-whistle about making art-trading an equal-opportunity affair in which anyone, including Metakovan himself, can participate and profit.

If you really don’t want racism, the last thing you should do is participate in an opaque and unregulated enterprise using obfuscated financial instruments. Or at least be prepared to pursue a more radical course of action than to buy digital tosh and call it “the most valuable piece of art for this generation”.

This brings me to the second issue: what can the energy cost of culture be? For example, Tamil-Brahmin weddings in Chennai, my home-city, are a gala affair – each one an elaborate wealth-signalling exercise that consumes thousands of fresh-cut banana leaves, a few quintals of wood, hundreds of units of power for air-conditioning and lots of new wedding clothes that are often worn only once or a few times – among many other things. Is such an exercise really necessary? My folks would say ‘yes’ in a heartbeat because they believe it’s what we need to do, that we can’t forego any of these rituals because they’re part of our culture, or at least how we’ve come to perform it.

To me, this is excessive – but then I have a dilemma. As I wrote about a similar issue last year, vis-à-vis Netflix:

Binge-watching is bad – in terms of consuming enough energy to “power 40,000 average US homes for a year” and in other ways – but book-keepers seem content to insulate the act of watching itself from what is being watched, perhaps in an effort to determine the worst case scenario or because it is very hard to understand, leave alone discern or even predict, the causal relationships between how we feel, how we think and how we act. However, this is also what we need: to accommodate, but at the same time without being compelled to quantify, the potential energy that arises from being entertained.

At this juncture, consider: at what point does art itself become untenable because it paid an energy cost deemed too high? And was the thing that Metakovan purchased from Beeple, ‘Everydays’, really worth it? The first question isn’t easy to answer; the second one, fortunately, is: ‘Everydays’ doesn’t appear to deserve the context it’s currently luxuriating in.

Aside from its creator Beeple’s admission of its mediocrity, writer Andrew Paul took a closer look at its dense collage for Input Magazine and found “juvenile, trollish bigoted artwork including racist Asian caricatures, homophobic language, and Hillary Clinton wearing a grill”. (Metakovan said in one interview that he felt a “soul connection” with Beeple’s work.) ‘Everydays’, Paul continues, “appears to say more about the worst aspects of the art world and capitalism than any one … of Beeple’s doodles: gatekeeping, exploitative, bigoted, and very, very tiresome.”

What is academic freedom?

Note: I originally wrote two versions of this article for The Wire; one, a ‘newsier’ version, was published in June 2020. I’d intended to publish the version below, which is more of a discussion/analysis, sometime last year itself but it slipped my mind. I’m publishing it today, shortly after rediscovering it by accident.

Since the Cold War, science has been a reason of state, as the social theorist Ashis Nandy has argued. So when scientists, or academicians in general, seek to assert themselves, their actions are a threat to the state itself and its stewards.

This is no different in India – but it’s particularly relevant because not just science but also pseudoscience has been adapted as a reason of state, amplifying scholars’ vaguely moral imperative to rebut the state’s claims to a nearly existential one. And in parallel, the perception of academic freedom has evolved from a human right to a more-enforceable fundamental one, if only to check a political class that no longer sees reason and democracy as boundary conditions.

“If deliberation is central to democracy, then it is not enough to simply have a negative right to free speech. A democratic society should also cultivate forums where open deliberation takes place,” Tarun Menon, of the National Institute for Advanced Studies, Bengaluru, said.

“Universities have traditionally been such forums, often involving young people who are seriously engaging with the public sphere for the first time in their lives, developing their civic identities. Maintaining academic freedom – understood as an atmosphere free of intimidation or intellectual control – is essential to preserving these spaces as hubs of participatory democracy.”

Researchers at the Global Public Policy Institute, the University of Erlangen-Nuremberg (FAU), the Scholars At Risk Network and the V-Dem Project at the University of Gothenburg have prepared a new report that offers a way to quantify this freedom. They have developed an ‘academic freedom index’ (AFI), which determines, with a few parameters, the relative extent to which different countries value academic freedom.

To quote from The Wire’s news report,

India has an AFI of 0.352, comparable to the scores of Saudi Arabia and Libya. Countries that scored higher than India include Pakistan (0.554), Brazil (0.466), Ukraine (0.422), Somalia (0.436) and Malaysia (0.582). Uruguay and Portugal top the list with scores of 0.971 each, followed closely by Latvia and Germany. At the bottom are North Korea (0.011), Eritrea (0.015), Bahrain (0.039) and Iran (0.116).

The AFI has eight components, defined by the following questions:

  1. “To what extent are scholars free to develop and pursue their own research and teaching agendas without interference?”
  2. “To what extent are scholars free to exchange and communicate research ideas and findings?”
  3. “To what extent do universities exercise institutional autonomy in practice?”
  4. “To what extent are campuses free from politically motivated surveillance or security infringements?”
  5. “Is there academic freedom and freedom of cultural expression related to political issues?”
  6. “Do constitutional provisions for the protection of academic freedom exist?”
  7. “Is the state party to the ICESCR without reservations to Article 15 (right to science)?”
  8. “Have universities (ever) existed in this country?”

According to the report, some 1,810 academicians responded to the first five questions, for each of their countries. (For a closer look at the methods, please read The Wire’s news report.)

On this count, the report’s authors themselves advise caution: “While there is evidence of a deteriorating condition for academics in [India], the extent of the AFI score’s decline seems somewhat disproportional in comparison to earlier periods in the [country’s] history as well as in comparison to other countries over the same period.” It’s likely this caveat extends to all countries.

Our impression of universities as simply centres of learning has divorced them from their status as places where students can investigate ideas without fear. So an entity like AFI is notable because it reminds us of the need for universities to be free as well as active participants in realising the ‘right to science’, as embodied in the International Covenant on Economic, Social and Cultural Rights (ICESCR).

After it came into force from January 1976 – with India ratifying it in April 1979 – the covenant, among other things, entitles the people of its party states “to enjoy the benefits of scientific progress and its applications” and requires the states “to respect the freedom indispensable for scientific research and creative activity”.

So the AFI’s makers suggest the UN could read the indicator together with the self-assessment reports the party states submit. They also suggest other ways their findings could prove useful – but the report doesn’t escape the fate all indices share: the farther it ventures from its status as an index, the less useful it becomes.

Among academicians, conventionally underprivileged groups – such as women and transgender people – as well as underprivileged areas of study like women’s studies, could use the AFI as a way to strengthen protections for themselves.

A post published on the Times Higher Education blog in 2019 read, “Scholars of feminism attract an overwhelming amount of intimidation; their right to explore controversial issues demands explicit protection.”

However, one of the AFI’s constituent questions – “To what extent are scholars free to exchange and communicate research ideas and findings?” – treats scholars as one monolithic unit. What happens when scholars themselves oppose each other’s right to study certain subjects? Such a contention may not always fit within the bounds of academic debate either, and could even compromise another question: “Is there academic freedom and freedom of cultural expression related to political issues?”

For example, academicians in the UK have been embroiled in a fierce debate over the freedom to critique transgender rights. One group has accused the other of adopting “a ‘censorious’ approach to gender identity”. The other has accused the first of transphobia. However, “universities are negotiating a minefield, trying to maintain free speech while faced with two groups of people who both argue they are being made to feel unsafe,” Anna Fazackerley wrote for The Guardian in January 2020.

But without a close reading of the ‘codebook’ accompanying the report, which explains the questions the academicians answered, the UK’s AFI of 0.934 doesn’t immediately suggest that external interference isn’t the only kind of problem.

More broadly, Madhusudhan Raman, a postdoctoral fellow at the Tata Institute of Fundamental Research, Mumbai, said he is “suspicious” of attempts “to reduce what is a complex, often fluid, social-political consensus to a number between 0 and 1.” For one, such ‘metrics’ “don’t shed light on how societies arrive at their respective consensuses.”

And even more broadly, the report’s data isn’t grainy enough to examine the type of academic freedom available at a university. Menon, for example, identified two purposes that justify freedom of expression – treating it not as inherently valuable but as valuable for an end: deliberative democracy, described earlier, and the marketplace of ideas.

And “a free marketplace of ideas, where that freedom is interpreted exclusively as freedom from government intervention, will tend to produce knowledge that is valuable to powerful monied interests, not democratic interests more broadly construed”.

This is more so when so few Indians study at universities, and fewer still among them come from outside the upper castes.

“Academic freedom is crucial but we need to talk about specific factors like caste,” an anthropologist at the University of Delhi, who didn’t wish to be named, said. “Another thing that destroyed academic freedom is the artificial binary of teaching and research encouraged by various governments, including the current one. Many Indian universities and colleges are still feudal and patriarchal. We also need to talk about the institutional cultures and the way in which it restricts academic freedom through the contractualisation of appointments.”

In addition, middle-class parents could even use an index like the AFI to identify places where their children could study without being ‘distracted’ by political activities.

Katrin Kinzelbach, a professor of political science at FAU, who conceived the AFI and helped prepare the report, pointed to the codebook, which explains how the results were arrived at, and thus how they could and couldn’t be interpreted.

“In these clarifications, we state clearly that interference by ‘non-academic actors’ includes not only interference by government representatives and politicians but also businesses, foundations, other private funders as well as religious groups and advocacy groups,” she told The Wire. “As a matter of fact, we consciously avoided an exclusive focus on government interference.”

In India, a strong politics-business-media nexus has allowed the government to exert its will through a combination of social, financial, legal and even religious instruments. Together with the fact that the state has also become the chief ‘intervener’ in student affairs – from censoring conversations on some topics to turning a blind eye to violence against students backed by politico-religious powers – it’s hard to separate each intervention from another when all of them seem to have the same outcome: to reduce the university to a collection of classrooms by eroding the culture of debate that the state perceives as a threat to itself.

So, Menon said, “genuinely democratic academic freedom” should also consider “inclusivity of education, resistance to privatisation of education and funding, resistance to the vocationalisation of education.”

But without these considerations, the report’s “priorities … are in line with the neoliberal consensus according to which academic freedom essentially just means laissez-faire applied to the academic realm just as it is to the economic realm.”

Kinzelbach contested this conclusion: she “echoed” Menon’s thoughts on the lack of inclusivity and the perils of privatised education but, she continued, “I would argue that [inclusivity] would be more appropriately studied under a ‘right to education’ framework, not under the notion of ‘academic freedom’.”

She added that had her team “included the funding structure of universities as an indicator of academic freedom, it would not be possible to study these hypothesised causal relationships, and that would make the data much less useful for further research.”

Magic bridges

The last two episodes of the second season of House, the TV series starring Hugh Laurie as a misanthropic doctor at a facility in Princeton, have been playing on my mind off and on during the COVID-19 pandemic. One of its principal points (insofar as Dr Gregory House can admit points to the story of his life) is that it’s ridiculous to expect the families of patients to make informed decisions about whether to sign off on a life-threatening surgical procedure, say, within a few hours when in fact medical workers might struggle to make those choices even after many years of specific training.

The line struck me as describing a chasm stretching between two points on the healthcare landscape – so wide as to be insurmountable by anything except magic, in the form of decisions that can never be grounded entirely in logic and reason. Families of very sick patients are frequently able to conjure a bridge out of thin air with the power of hope alone, or – more often – desperation. As such, we all understand that these ‘free and informed consent’ forms exist to protect care-providers against litigation as well as, by the same token, to allow them to freely exercise their technical judgments – somewhat like how it’s impossible to physically denote an imaginary number (√-1) while still understanding why it must exist, for completeness.

Sometimes, it’s also interesting to ask if anything meaningful could get done without these bridges, especially since they’re fairly common in the real world and people often tend to overlook them.

I’ve had reason to think of these two House episodes because one of the dominant narratives of the COVID-19 pandemic has been one of uncertainty. The novel coronavirus is, as the name suggests, a new creature – something that evolved in the relatively recent past and assailed the human species before the latter had time to understand its features using techniques and theories honed over centuries. This in turn predicated a cascade of uncertainties as far as knowledge of the virus was concerned: scientists knew something, but not everything, about the virus; science journalists and policymakers knew a subset of that; and untrained people at large (“the masses”) knew a subset of that.

But even though more than a year has passed since the virus first infected humans, the forces of human geography, technology, politics, culture and society have together ensured not everyone knows what there is currently to know about the virus, even as the virus’s interactions with these forces in different contexts continue to birth even more information, more knowledge, by the day. As a result, when an arbitrary person in an arbitrary city in India has to decide whether they’d rather be inoculated with Covaxin or Covishield, they – and in fact the journalists tasked with informing them – are confronted by an unlikely, if also conceptual, problem: to make a rational choice where one is simply and technically impossible.

How then do they and we make these choices? We erect magic bridges. We think we know more than we really do, so even as the bridge we walk on is made of nothing, our belief in its existence holds it up, stiff beneath our feet. This isn’t as bad as I’m making it seem; it seems like the human thing to do. In fact, I think we should be clearer about the terms on which we make these decisions so that we can improve them.

For example, all frontline workers who received Covaxin in the first phase of India’s vaccination drive had to read and sign off on an ‘informed consent’ form that included potential side effects of receiving a dose of the vaccine, its basic mechanism of action and how it was developed. These documents tread a fine line between being informative and being useful (in the specific sense of the risk of debilitating action by informing too much and of withholding important information in order to skip to seemingly useful ‘advice’): they don’t tell you everything they can about the vaccine, nor can they assert the decision you should make.

In this context, and assuming the potential recipient of the vaccine doesn’t have the education or training to understand how exactly vaccines work, a magic bridge is almost inevitable. So in this context, the recipient could be better served by a bridge erected on the right priorities and principles, instead of willy-nilly and sans thought for medium- or long-term consequences.

There’s perhaps an instructive analogy here with software programming, in the form of the concept of anti-patterns. An anti-pattern is a counterproductive solution to a recurrent problem. Say you’ve written some code that generates a webpage every time a user selects a number from a list of numbers. The algorithm is dynamic: the script takes the user-provided input, performs a series of calculations on it and based on the results produces the final output. However, you notice that your code has a mistake due to which one particular element on the final webpage is always 10 pixels to the left of where it should be. Being unable to identify the problem, you take the easy way out: you add a line right at the end of the script to shift that element 10 pixels to the right, once it has been rendered.
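The hypothetical webpage fix above can be sketched in Python – every name here is invented for illustration, and the ‘bug’ is deliberately planted:

```python
# A hypothetical layout pipeline: compute x-positions for page elements
# from a user-selected number.
def layout(elements, user_choice):
    positions = {name: base + user_choice * 2 for name, base in elements.items()}
    positions["sidebar"] -= 10  # a planted bug the author never tracked down
    return positions

def render(elements, user_choice):
    positions = layout(elements, user_choice)
    # The anti-pattern: rather than fixing the bug inside layout(), patch
    # the symptom after the fact with a hard-coded correction.
    positions["sidebar"] += 10
    return positions
```

The page now renders correctly, but the correction encodes no principle of the layout system: fix or change the upstream bug later, and the patch silently becomes a new 10-pixel bug of its own.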

This is a primitive example of an anti-pattern, an action that can’t be determined by the principles governing the overall system and which exists nonetheless because you put it there. Andrew Koenig introduced the concept in 1995 to identify software programs that are unreliable in some way, and which could be made reliable by ensuring the program conforms to some known principles. Magic bridges are currently such objects, whose existence we deny often because we think they’re non-magical. However, they shouldn’t have to be anti-patterns so much as precursors of a hitherto unknown design en route to completeness.

SSC: Addendum

It’s wonderful how the mind has a way of cultivating clarity in the background, away from the gaze of the mind’s eye and as the mind itself is preoccupied with other thoughts, on matters considered only a few days ago to be too complicated to synthesise into a unified whole.

Recap: On February 14, the New York Times published a profile of Slate Star Codex, the erstwhile blog penned by Scott Alexander Siskind that had become one of the internet’s few major water coolers for rationalists. Siskind had previously appeared to make peace with the newspaper’s decision to reveal his full name – he hadn’t been using his last name on the blog – in the profile, but since February 14 at least, he has seemingly taken a vindictive turn, believing the New York Times doxxed him on purpose for “embarrassing” them.

Somewhat separately, many of Siskind’s supporters have rejected the profile as an unfaithful portrayal of the blog’s significance in the rationalism community and for its allegedly overtly conspiratorial overtones about the blog’s relationship with powerful figures in Silicon Valley. Many of these supporters have since decided to boycott Cade Metz, the New York Times reporter who crafted the profile.

A few days ago, I put down my thoughts about this affair to clarify them for myself as well as, less importantly, lay out my views. Since then, but especially this morning, I’ve realised the essence of my struggle with composing that post. A shade less than 100% of the time, I start a post with thoughts on some subject, and by the time I’m through a thousand words, I discover a point or two I need to make that stitches the thoughts together. I’d struggled to find this point with the SSC affair but now I think I have some clarity:

(The sources for claims in the points below are available in my first post.)

  • The New York Times profile’s simpler mistakes are a significant problem, and I agree with those supporters’ decision to boycott the reporter. But I would also encourage them to find other reporters they’d rather speak to – and do so. Even if this means their words start to appear in publications whose other contents may be objectionable (like, say, Quillette), they will still be part of the public conversation instead of finding themselves silenced.
  • On a related note: it’s quite amusing that a community so wedded to a particular impression of its identity and self-perception thought it would be profiled by the New York Times in line with this perception. Granted, this may not have been an entirely foreseeable outcome, but the magnitude of the supporters’ reactions seems disproportionate to the chances of Siskind’s and their views being lost in translation (from their PoV).
  • The New York Times‘ decision to reveal Scott Alexander’s last name for the profile is difficult to understand, especially since the profile could easily have been published together with Siskind’s objections and his reasons for them. Some commentators have advanced an argument that free speech, an absolute version of which Siskind as well as the rationalist community desires, is incompatible with anonymity – but be this as it may, it doesn’t seem to have anything to do with Metz’s and the newspaper’s decision-making process itself and only smells like post-hoc justification.
  • Siskind’s allegation, based on some things people “in the know” told him, that the New York Times doxxed him because he embarrassed them (with his decision to unplug his blog from the internet after Metz first told him Metz might have to reveal his full identity) is more laughable the more you think of it, no? I’m also curious as to why Siskind goes from apparently making his peace with the newspaper’s decision to reveal his last name to taking steps to ensure his “survivability” in a scenario where his full name is known to all to, finally, resorting to invoking a vague authority (“people in the know”) – as if to advance a justification for his victimisation.