Magic bridges

The last two episodes of the second season of House, the TV series starring Hugh Laurie as a misanthropic doctor at a facility in Princeton, have been playing on my mind off and on during the COVID-19 pandemic. One of its principal points (insofar as Dr Gregory House can admit points to the story of his life) is that it’s ridiculous to expect the families of patients to make informed decisions about whether to sign off on a life-threatening surgical procedure, say, within a few hours when in fact medical workers might struggle to make those choices even after many years of specific training.

The line struck me as a chasm stretching between two points on the healthcare landscape – so wide as to be insurmountable by anything except magic, in the form of decisions that can never be grounded entirely in logic and reason. Families of very sick patients are frequently able to conjure a bridge out of thin air with the power of hope alone, or – more often – desperation. As such, we all understand that these ‘free and informed consent’ forms exist to protect care-providers against litigation as well as, by the same token, to allow them to freely exercise their technical judgments – somewhat like how it’s impossible to physically denote an imaginary number (√-1) while still understanding why it must exist, for completeness.

Sometimes, it’s also interesting to ask if anything meaningful could get done without these bridges, especially since they’re fairly common in the real world and people often tend to overlook them.

I’ve had reason to think of these two House episodes because one of the dominant narratives of the COVID-19 pandemic has been one of uncertainty. The novel coronavirus is, as the name suggests, a new creature – something that evolved in the relatively recent past and assailed the human species before the latter had time to understand its features using techniques and theories honed over centuries. This in turn precipitated a cascade of uncertainties as far as knowledge of the virus was concerned: scientists knew something, but not everything, about the virus; science journalists and policymakers knew a subset of that; and untrained people at large (“the masses”) knew a subset of that.

But even though more than a year has passed since the virus first infected humans, the forces of human geography, technology, politics, culture and society have together ensured not everyone knows what there is currently to know about the virus, even as the virus’s interactions with these forces in different contexts continue to birth even more information, more knowledge, by the day. As a result, when an arbitrary person in an arbitrary city in India has to decide whether they’d rather be inoculated with Covaxin or Covishield, they – and in fact the journalists tasked with informing them – are confronted by an unlikely, if also conceptual, problem: to make a rational choice where one is simply and technically impossible.

How then do they and we make these choices? We erect magic bridges. We think we know more than we really do, so even as the bridge we walk on is made of nothing, our belief in its existence holds it up, stiff beneath our feet. This isn’t as bad as I’m making it seem; it seems like the human thing to do. In fact, I think we should be clearer about the terms on which we make these decisions so that we can improve on them.

For example, all frontline workers who received Covaxin in the first phase of India’s vaccination drive had to read and sign off on an ‘informed consent’ form that included potential side effects of receiving a dose of the vaccine, its basic mechanism of action and how it was developed. These documents tread a fine line between being informative and being useful (in the specific sense that they risk debilitating action by informing too much, and risk withholding important information in order to skip to seemingly useful ‘advice’): they don’t tell you everything they can about the vaccine, nor can they assert the decision you should make.

In this context, and assuming the potential recipient of the vaccine doesn’t have the education or training to understand how exactly vaccines work, a magic bridge is almost inevitable. So in this context, the recipient could be better served by a bridge erected on the right priorities and principles, instead of willy-nilly and sans thought for medium- or long-term consequences.

There’s perhaps an instructive analogy here with software programming, in the form of the concept of anti-patterns. An anti-pattern is a counterproductive solution to a recurrent problem. Say you’ve written some code that generates a webpage every time a user selects a number from a list of numbers. The algorithm is dynamic: the script takes the user-provided input, performs a series of calculations on it and based on the results produces the final output. However, you notice that your code has a mistake due to which one particular element on the final webpage is always 10 pixels to the left of where it should be. Being unable to identify the problem, you take the easy way out: you add a line right at the end of the script to shift that element 10 pixels to the right, once it has been rendered.
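The example above can be sketched in a few lines of Python. This is a toy illustration, not real rendering code: the element names, the 20-pixel multiplier and the function itself are all hypothetical, invented only to make the anti-pattern concrete.

```python
def layout_page(selected_number):
    """Compute x-positions (in pixels) for elements on a page
    generated from a user-selected number."""
    positions = {}
    base = selected_number * 20  # the 'series of calculations'
    positions["header"] = base
    # The hidden bug: 'panel' should sit at base + 40, but the
    # calculation below leaves it 10 pixels to the left.
    positions["panel"] = base + 30
    # The anti-pattern: rather than finding and fixing the bug
    # above, shift the element 10 pixels right after the fact.
    positions["panel"] += 10
    return positions
```

The page now renders correctly, but that last correction can’t be derived from the principles governing the rest of the system – it exists only because someone put it there, which is what makes it an anti-pattern rather than a fix.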

This is a primitive example of an anti-pattern, an action that can’t be determined by the principles governing the overall system and which exists nonetheless because you put it there. Andrew Koenig introduced the concept in 1995 to identify software programs that are unreliable in some way, and which could be made reliable by ensuring the program conforms to some known principles. Magic bridges are currently such objects, whose existence we deny often because we think they’re non-magical. However, they shouldn’t have to be anti-patterns so much as precursors of a hitherto unknown design en route to completeness.

In pursuit of a nebulous metaphor…

I don’t believe in god, but if he/it/she/they existed, then his/its/her/their gift to science communication would’ve been the metaphor. Metaphors help make sense of truly unknowable things, get a grip on things so large that our minds boggle trying to comprehend them, and help writers express book-length concepts in a dozen words. Even if there is something lost in translation, as it were, metaphors help both writers and readers get a handle on something they would otherwise have struggled to.

One of my favourite expositions on the power of metaphors appeared in an article by Daniel Sarewitz, writing in Nature (readers of this blog will be familiar with the text I’m referring to). Sarewitz was writing about how nobody but trained physicists understands what the Higgs boson really is because those of us who do think we get it are only getting metaphors. The Higgs boson exists in a realm that humans cannot ever access (even Ant-Man almost died getting there), and physicists make sense of it through complicated mathematical abstractions.

Mr Wednesday makes just this point in American Gods (the TV show), when he asks his co-passenger in a flight what it is that makes them trust that the plane will fly. (Relatively) Few of us know the physics behind Newton’s laws of motion and Bernoulli’s work in fluid dynamics – but many of us believe in their robustness. In a sense, it is faith and metaphors that keep us going, not knowledge itself, because we truly know only a little.

However, the ease that metaphors offer writers at such a small cost (minimised further for those writers who know how to deal with that cost) sometimes means that they’re misused or overused. Sometimes, some writers will abdicate their responsibility to stay as close to the science – and the objective truth, such as it is – as possible by employing a metaphor where one could easily be avoided. My grouse of choice at the moment is this tweet by New Scientist:

The writer has had the courtesy to use the word ‘equivalent’ but it can’t do much to salvage the sentence’s implications from the dumpster. Different people have different takeaways from the act of smoking. I think of lung and throat cancer; someone else will think of reduced lifespan; yet another person will think it’s not so bad because she’s a chain-smoker; someone will think it gives them GERD. It’s also a bad metaphor to use because the effects of smoking vary from person to person based on various factors (including how long they’ve been smoking 15 cigarettes a day for). This is why researchers studying the effects of smoking quantify not the risk but the relative risk (RR): the risk of some ailment (including reduced lifespan) relative to non-smokers in the same population.
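For the sake of concreteness, relative risk is simply the incidence of an outcome among the exposed divided by its incidence among the unexposed in the same population. A minimal sketch, with entirely made-up numbers used purely for illustration:

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """RR = risk of an outcome among the exposed (e.g. smokers)
    divided by the risk among the unexposed (non-smokers)."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 30 deaths among 1,000 smokers versus
# 10 deaths among 1,000 non-smokers gives an RR of about 3 –
# smokers in this invented population die at roughly thrice
# the rate of non-smokers over the study period.
rr = relative_risk(30, 1000, 10, 1000)
```

An RR of 1 would mean smoking made no difference in that cohort; and the figure says nothing on its own about which ailment, which cohort or what follow-up period is involved – which is exactly why the benchmark travels so badly between studies.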

There are additional concerns that don’t allow the smoking-loneliness congruence to be generally applicable. For example, according to a paper published in the Journal of Insurance Medicine in 2008,

An important consideration [is] the extent to which each study (a) excluded persons with pre-existing medical conditions, perhaps those due to smoking, and (b) controlled for various co-morbid factors, such as age, sex, race, education, weight, cholesterol, blood pressure, heart disease, and cancer. Studies that excluded persons with medical conditions due to smoking, or controlled for factors related to smoking (e.g., blood pressure), would be expected to find lower RRs. Conversely, studies that did not account for sufficient confounding factors (such as age or weight) might find higher RRs.

So, which of these – or any other – effects of smoking is the writer alluding to? Quoting from the New Scientist article,

Lonely people are at increased risk of “just about every major chronic illness – heart attacks, neurodegenerative diseases, cancer,” says Cole. “Just a completely crazy range of bad disease risks seem to all coalesce around loneliness.” A meta-analysis of nearly 150 studies found that a poor quality of social relationships had the same negative effect on risk of death as smoking, alcohol and other well-known factors such as inactivity and obesity. “Correcting for demographic factors, loneliness increases the odds of early mortality by 26 per cent,” says Cacioppo. “That’s about the same as living with chronic obesity.”

The metaphor the writer was going for was one of longevity. Bleh.

When I searched for the provenance of this comparison (between smoking and loneliness), I landed up on two articles by the British writer George Monbiot in The Guardian, both of which make the same claim*: that smoking 15 cigarettes a day will reduce your lifespan by as much as a lifetime of loneliness. Both claims referenced a paper titled ‘Social Relationships and Mortality Risk: A Meta-analytic Review’, published in July 2010. Its ‘Discussion’ section reads:

Data across 308,849 individuals, followed for an average of 7.5 years, indicate that individuals with adequate social relationships have a 50% greater likelihood of survival compared to those with poor or insufficient social relationships. The magnitude of this effect is comparable with quitting smoking and it exceeds many well-known risk factors for mortality (e.g., obesity, physical inactivity).

In this context, there’s no doubt that the writer is referring to the benefits of smoking cessation on lifespan. However, the number ’15’ itself is missing from its text. This is presumably because, as Cacioppo – one of the scientists quoted by the New Scientist – says, loneliness can decrease your lifespan by 26%, and I assume an older study cited by the one quoted above relates it to smoking 15 cigarettes a day. So I went looking, and (two hours later) couldn’t find anything.

I don’t mean to rubbish the congruence as a result, however – far from it. I want to highlight the principal reason I didn’t find a claim that fit the proverbial glove: most studies that seek to quantify smoking-related illnesses like to keep things as specific as possible, especially the cohort under consideration. This suggests that extrapolating the ’15 cigarettes a day’ benchmark into other contexts is not a good idea, especially when the writer does not know – and the reader is not aware of – the terms of the ’15 cigarettes’ claim nor the terms of the social relationships study. For example, one study I found involved the following:

The authors investigated the association between changes in smoking habits and mortality by pooling data from three large cohort studies conducted in Copenhagen, Denmark. The study included a total of 19,732 persons who had been examined between 1967 and 1988, with reexaminations at 5- to 10-year intervals and a mean follow-up of 15.5 years. Date of death and cause of death were obtained by record linkage with nationwide registers. By means of Cox proportional hazards models, heavy smokers (≥15 cigarettes/day) who reduced their daily tobacco intake by at least 50% without quitting between the first two examinations and participants who quit smoking were compared with persons who continued to smoke heavily.

… and it presents a table with various RRs. Perhaps something from there can be fished out by the New Scientist writer and used carefully to suggest the comparability between smoking-associated mortality rates and the corresponding effects of loneliness…

*The figure of ’15 cigarettes’ seems to appear in conjunction with a lot of claims about smoking as well as loneliness all over the web. It seems 15 a day is the line between light and heavy smoking.

Featured image credit: skeeze/pixabay.

Some notes on empiricism, etc.

The Wire published a story about the ‘atoms of Acharya Kanad’ (background here; tl;dr: Folks at a university in Gujarat claimed an ancient Indian sage had put forth the theory of atoms centuries before John Dalton showed up). The story in question was by a professor of philosophy at IISER, Mohali, and he makes a solid case (not unfamiliar to many of us) as to why Kanad, the sage, didn’t talk about atoms specifically because he was making a speculative statement under the Vaisheshika school of Hindu philosophy that he founded. What got me thinking were the last few lines of his piece, where he insists that empiricism is the foundation of modern science, and that something that doesn’t cater to it can’t be scientific. And you probably know what I’m going to say next. “String theory”, right?

No. Well, maybe. While string theory has become something of a fashionable example of non-empirical science, it isn’t the only example. It’s in fact a subset of a larger group of systems that don’t rely on empirical evidence to progress. These systems are called formal systems, or formal sciences, and they include logic, mathematics, information theory and linguistics. (String theory’s reliance on advanced mathematics makes it more formal than natural – as in the natural sciences.) And the dichotomous characterisation of formal and natural sciences (the latter including the social sciences) is superseded by a larger, more authoritative dichotomy*: between rationalism and empiricism. Rationalism prefers knowledge that has been deduced through logic and reasoning; empiricism prioritises knowledge that has been experienced. As a result, it shouldn’t be a surprise at all that debates about which side is right (insofar as it’s possible to be absolutely right – which I don’t think will ever happen) play out in the realm of science. And squarely within the realm of science, I’d like to use a recent example to provide some perspective.

Last week, scientists discovered that time crystals exist. I wrote a longish piece here tracing the origins and evolution of this exotic form of matter, and what it is that scientists have really discovered. Again, a tl;dr version: in 2012, Frank Wilczek and Alfred Shapere posited that a certain arrangement of atoms (a so-called ‘time crystal’) in their ground state could be in motion. This could sound unremarkable if you were unfamiliar with what ground state means: absolute zero, the thermodynamic condition wherein an object has no energy whatsoever to do anything else but simply exist. So how could such a thing be in motion? The interesting thing here is that though Shapere-Wilczek’s original paper did not identify a natural scenario in which this could be made to happen, they were able to prove that it could happen formally. That is, they found that the mathematics of the physics underlying the phenomenon did not disallow the existence of time crystals (as they’d posited it).

It’s pertinent that Shapere and Wilczek turned out to be wrong. By late 2013, rigorous proofs had shown up in the scientific literature demonstrating that ground-state, or equilibrium, time crystals could not exist – but that non-equilibrium time crystals with their own unique properties could. The discovery made last week was of the latter kind. Shapere and Wilczek have both acknowledged that their math was wrong. But what I’m pointing at here is the conviction behind the claim that forms of matter called time crystals could exist, motivated by the fact that mathematics did not prohibit it. Yes, Shapere and Wilczek did have to modify their theory based on empirical evidence (indirectly, as it contributed to the rise of the first counter-arguments), but it’s undeniable that the original idea was born, and persisted with, simply through a process of discovery that did not involve sense-experience.

In the same vein, much of the disappointment experienced by many particle physicists today is because of a grating mismatch between formalism – in the form of theories of physics that predict as-yet undiscovered particles – and empiricism – the inability of the LHC to find these particles despite looking repeatedly and hard in the areas where the math says they should be. The physicists wouldn’t be disappointed if they thought empiricism was the be-all of modern science; they’d in fact have been rebuffed much earlier. For another example, this also applies to the idea of naturalness, an aesthetically (and more formally) enshrined idea that the forces of nature should have certain values, whereas in reality they don’t. As a result, physicists think something about their reality is broken instead of thinking something about their way of reasoning is broken. And so they’re sitting at an impasse, as if at the threshold of a higher-dimensional universe they may never be allowed to enter.

I think this is important in the study of the philosophy of science because if we’re able to keep in mind that humans are emotional and that our emotions have significant real-world consequences, we’d not only be better at understanding where knowledge comes from. We’d also become more sensitive to the various sources of knowledge (whether scientific, social, cultural or religious) and their unique domains of applicability, even if we’re pretty picky, and often silly, at the moment about how each of them ought to be treated (Related/recommended: Hilary Putnam’s way of thinking).

*I don’t like dichotomies. They’re too cut-and-dried a conceptualisation.

Discussing some motivations behind a particle physics FAQ

First, there is information. From information, people distill knowledge, and from knowledge, wisdom. Information is available on a lot of topics and in varying levels of detail. Knowledge on topics is harder to find – and harder still is wisdom. This is because knowledge and wisdom require work (to fact-check and interpret) on information and knowledge, respectively. And people can be selective about what they choose to work on. One popular consequence of such choices is that most people are more aware of business information, business knowledge and business wisdom than they are of scientific information, scientific knowledge and scientific wisdom. This graduated topical awareness reflects in how we produce and consume the news.


News articles written on business issues rarely see fit to delve into historical motivations or explainer-style elucidations because the audience is understood to be better aware of what business is about. Business information and knowledge are widespread and so is, to some extent, business wisdom, and articles can take advantage of conclusions made in each sphere, jumping between them to tease out more information, knowledge and wisdom. On the other hand, articles written on some topics of science – such as particle physics – have to start from the informational level before wisdom can be presented. This places strong limits on how the article can be structured or even styled.

There are numerous reasons for why this is so, especially for topics like particle physics, which I regularly (try to) write on. I’m drawn toward three of them in particular: legacy, complexity and pacing. Legacy is the size of the body of work that is directly related to the latest developments in that work. So, the legacy of the LHC stretches back to include the invention of the cyclotron in 1932 – and the legacy of the Higgs boson stretches back to 1961. Complexity is just that – how involved the underlying ideas are – but becomes more meaningful in the context of pacing.

A consequence of business developments being reported on fervently is that there is at least some (understandable) information in the public domain about all stages of the epistemological evolution. In other words, the news reports are apace of new information, new knowledge, new wisdom. With particle physics, they aren’t – they can’t be. The reports are separated by some time, according to when the bigger developments occurred, and in the intervening span of time, new information/knowledge/wisdom would’ve arisen that the reports will have to accommodate. And how much has to be accommodated can be exacerbated by the complexity of what has come before.


But there is a catch here – at least as far as particle physics is concerned because it is in a quandary these days. The field is wide open because physicists have realised two things: first, that their theoretical understanding of physics is far, far ahead of what their experiments are capable of (since the 1970s and 1980s); second, that there are inconsistencies within the theories themselves (since the late 1990s). Resolving these issues is going to take a bit of time – a decade or so at least (although we’re likely in the middle of such a decade) – and presents a fortunate upside to communicators: it’s a break. Let’s use it to catch up on all that we’ve missed.

The break (or a rupture?) can also be utilised for what it signifies: a gap in information/knowledge. All the information/knowledge/wisdom that has come before is abruptly discontinued at this point, allowing communicators to collect them in one place, compose them and disseminate them in preparation for whatever particle physics will unearth next. And this is exactly what motivated me to write a ‘particle physics FAQ’, published on The Wire, as something anyone who’s graduated from high-school can understand. I can’t say if it will equip them to read scientific papers – but it will definitely (and hopefully) set them on the road to asking more questions on the topic.

Why I like writing

Thought I’d quickly put my two cents down.

  1. It exposes flaws in your thinking – This is the equivalent of bouncing your ideas off a friend before you explore them further. Writing about the ideas has a similar effect because when you put down your reasoning, it’s easier to jump between different parts of it and pick out inconsistencies. This is harder to do when your ideas are just in your head. Writing more often to hone your ideas, and ideation, can also sensitise you to alternate perceptions and train you to be your own devil’s advocate.
  2. You’re likelier to remember something if you write it down – And when you write about current affairs, scientific research and history, you quickly build up knowledge that you’re unlikely to forget anytime soon – knowledge that you can recall easily when you feel you most need it. The process of writing fosters a measure of introspection that can encourage you to be vocal about your knowledge, too.
  3. You can do it very right or very wrong, you’ll still learn something – There’s no perfecting writing, whether it’s fiction, non-fiction, anything in between or something beyond. Writing will always teach you about how to structure your paragraphs, which words to use where, what style or voice or inflections to adopt, or how best to tickle your audience.
  4. Despite the timescale required to perfect it (if at all), you’ll sense progress – No serious writer is going to ever admit that he or she has perfected the art of writing. Perfection in writing is impossible. At the same time, you’ll see yourself scaling this infinitely high mountain. With every subsequent piece you write, you will be able to tell how you did better than the last time you did it. Writing affords you the chance to see yourself getting better and better and better, all the time.
  5. It’s cost-effective – To write, it takes a pen and paper or a text-editor. The point is not that it’s monetarily cheap, nor that the low cost leaves you little excuse not to write, but that its accessibility gives you that many more incentives to take it up.
  6. It can be addictive – If it’s addictive, it becomes a habit much faster. Writing does take a bit of time to become addictive but if you do it with the right kind of discipline, it can really stick. All you’ll feel like doing when you’re bored (or not) is writing after that.
  7. It’s not picky to your moods but the other way round – Even when you’re feeling down, there’s that down-in-the-dumps sort of writing that many writers have honed (Bukowski, Hemingway, Heller, Plath, etc.). If you’re angry, writing can often be the perfect weapon with which to display it. There have been times when I’ve looked forward to a mood-swing so I take advantage of the inherent catharsis to finish writing a story. It can be an abusive relationship.
  8. A body of work is always uplifting to look at if you’ve nothing else to hold on to – As a depressed person, I cannot overstate how thankful I am to have a blog that I’ve been writing in since 2009. When my day-job leaves me tired and/or feeling drained of soul, when all I want to do is shutter myself in my room and turn off the lights, I often also open my blog and just read through old pieces. It feels good then to be reminded that I have been up to something and that not all was for nothing.
  9. It can be all these things as well as a career – It may not pay much and it can be a grueling road to the top. When I was in the Middle East and enjoying the conversion rate in 2010, content-writing for corporate establishments fetched from Rs. 9,000 to Rs. 20,000 for a week’s work. It wasn’t fulfilling work but it paid the rent, kept the lights on, etc. while I got to work on a bad but nonetheless satisfying novel. It’s not a bad place to be because you get to write all the time.

Featured image: A Stipula fountain pen. Credit: Wikimedia Commons

Big science, bigger nationalism.

Nature India ran a feature on March 21 about three Indian astrophysicists who had contributed to the European Space Agency’s Planck mission that studied the universe’s CMBR, etc. I was wary even before I started to read it. Why? Because of that first farce in July 2012, that’s why.

That was when many Indians called for the ‘boson’ in the ‘Higgs boson’ to be celebrated with as much zest as was the ‘Higgs’. Oddly, Kolkata sported no cultural drapes taking ownership of the ‘boson’, unlike Edinburgh, which was quick to embrace the ‘Higgs’.

Why? Because a show of Indians celebrating India’s contributions to science through claims of ownership betrays that it’s not a real sense of ownership at all, but just another attempt to hog the limelight. If we wanted to own the ‘boson’ in honor of Satyendra Nath Bose, we’d have ensured he was remembered by the common man even outside the context of the Higgs boson. For his work with Einstein in establishing the Bose-Einstein statistics, for starters.

This is an attitude I find divisive and abhorrent. At the least, that circumstantial shout-out leaves no cause to remember S.N. Bose for the rest of the time. At the most, it paints a false picture of what ownership of scientific knowledge manifests itself as in the 21st century. The Indian contribution, the Chilean contribution, the Russian contribution… these are divisive tendencies in a world constantly aspiring to Big Science that is more seamless and painless.

Ownership of scientific knowledge in the 21st century, I believe, cannot be individuated. It belongs to no one and everyone at the same time. In the past, using science-related decorations to stake a claim on our contributions to science may have inspired someone to believe we did good. Today, however, it’s simply taking a stand on a very slippery slope.

I understand how scientific achievement in the last century or so had gained a colonial attitude, and how there are far more Indians who have received the Nobel Prize as Americans than as Indians themselves. However, the scientific method has also gotten more rigorous, more demanding in terms of resources and time. While America may have shot ahead in the last century of scientific achievement, the numerous individuals on its rosters of academic excellence are at the same time a tribute to some other country’s money and intellect, too.

I understand how news items of a nation’s contributions to an international project could improve the public’s perception of where and how their tax-money is being spent. However, the alleviation of any ills in this area must not arise solely from the notification that a contribution was made. It should arise through a dissemination of the importance of that contribution, too. The latter is conspicuous by its absence… to me, at least.

We put faces to essentially faceless achievements and then forget their features over time.

I wish there had been an entity to point my finger at. It could’ve been just the government, it could’ve been just a billion Indians. It could’ve been just misguided universities. It could’ve been just the Indian media. Unfortunately, it’s a potent mix of all these possibilities, threatening to blow up with nationalistic fervor in a concordant world.

As for that Nature India article, it did display deference to the jingoism. How do I figure? Because it’s an asymmetric celebration of achievement – especially of an achievement not even rooted in governmental needs.

~

This post also appeared in ‘The Copernican’ science blog at The Hindu on March 28, 2013.

Is there only one road to revolution?

Read this first.


Some of this connects, some of it doesn’t. Most of all, I have discovered a fear in me that keeps me from disagreeing with people like Meena Kandasamy – great orators, no doubt, but what are they really capable of?

The piece speaks of revolution as being the sole goal of an Indian youth’s life, that we must spend our lives stirring the muddied water, exposing the mud to light, and separating grime from guts and guts from glory. This is where I disagree. Revolution is not my cause. I don’t want to stir the muddied water. I concede that I am afraid that I will fail.

And at this point, Meena Kandasamy would have me believe, I should either crawl back into my liberty-encrusted shell or lay down my life. Why should I when I know I will succeed in keeping aspirations alive? Why should I when, given the freedom to aspire, I can teach others how to go about believing the same? Why should I when I can just pour in more and more clean water and render the mud a minority?

Why is this never an option? Have we reached a head, that it’s either a corruption-free world or a bloodied one? India desperately needs a revolution, yes, but not one that welcomes a man liberated after pained struggles to a joyless world.