The paradoxical virtues of primacy in science

The question of “Who found it first?” in science is deceptively straightforward. Primacy became valuable largely because of the rewards reserved by those who administer science – funding the ‘right’ people working in the ‘right’ areas at the ‘right’ time to ensure the field progresses along paths deemed desirable by the state. Otherwise, in an ideal world – one in which rewards are distributed more equitably, such that the quality of research is rewarded to a degree lower than the inordinate rewards that accrue to some privileged scientists today but greater than what scholars working on ‘neglected’ topics and ideas receive, without regard for gender, race, ethnicity or caste – discovering something first wouldn’t matter to the enterprise of science, just as it means nothing to the object of the discovery itself.

Primacy is a virtue imposed by the structures of modern science. There is privilege today in being cited as “Subramaniam 2021” or “Srinivasan 2022” in papers – so much so that there is reason to believe many scientific papers are published only so they may cite the work of others and keep expanding this “citation circus”. The more citations a scientist accumulates, the likelier they are to receive a promotion, a grant, etc. at their institute.

Across history, such citations have also served to obscure the work of ‘other’ scientists and to attribute a particular finding to a single individual or group. This typically manifests in one of two forms: flattening the evolution of a complex discovery by multiple groups of people working around the world, sometimes sharing information with each other, into a single paper authored by one of these groups; or reinforcing the association of one or a few names with particular ideas in the scientific literature, thus overlooking important contributions by less well-known scientists.

The former is a complex phenomenon, often motivated by ‘prestigious’ awards – including the Nobel Prizes – limiting themselves to a small group of laureates at a time, as well as by the meagre availability of grants for advanced research. As a result, scientists, and especially the institutes at which they work, engage in vociferous media campaigns when an important discovery is at hand, to ensure that any opportunities for profit arising from the finding rest with them alone. That said, it can also be the product of lazy citations, in which scientists cite friends, peers they like or wish to impress, or collections of papers instead of the appropriate individual ones, rather than conducting a more exhaustive literature review to credit everyone involved.

The second variety of improper citation is of course one that has dogged India – and one with which anyone working with or alongside science in India must be familiar. It has been most famously illustrated by women scientists who were overlooked for Nobel Prizes awarded to the men who worked with them, often against them. (The Nobel Prizes are false gods and we must tear them down; yet for all their flaws, they remain good, if absurdly selective, markers of notable scientific work: no prize has thus far been awarded to work that didn’t deserve it.) The stories of Chien-Shiung Wu, Rosalind Franklin and Jocelyn Bell Burnell come to mind.

But also consider the Indian example of Meghnad Saha’s paper about selective radiation pressure (in the field of stellar astrophysics), which predated Irving Langmuir’s paper on the same topic by three years. Saha lost out on the laurels because he couldn’t afford to have his paper published in a more popular journal and had to settle for one with “no circulation worth mentioning” (source). An equation in this theory is today known as the Saha-Langmuir equation – but even this wouldn’t be so without the conscious effort of some scholars to highlight Saha’s work and unravel the circumstances that forced him into the shadows.
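For reference, the Saha-Langmuir equation in its commonly quoted form today (for surface ionisation; the symbols are the standard ones, not drawn from Saha’s original paper) gives the ratio of ions to neutral atoms leaving a hot surface:

```latex
\frac{n_+}{n_0} = \frac{g_+}{g_0}\,\exp\!\left(\frac{W - E_i}{k_B T}\right)
```

Here W is the surface’s work function, E_i the atom’s ionisation energy, g_+ and g_0 statistical weights, and T the temperature: ionisation is favoured when W exceeds E_i.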

I discovered recently that comparable, if not identical, circumstances had befallen Bibhas De, when the journal Icarus twice rejected a paper he had submitted. The first time, his paper presented his calculations predicting that the planet Uranus had rings; the second time was five years later, shortly after astronomers had found that Uranus did indeed have rings. Stephen Brush and Ariel Segal wrote in their 2015 book, “Although he did succeed in getting his paper published in another journal, he rarely gets any credit for this achievement.”

In both these examples, and many others like them, scientists’ attempts to formalise their successes by having their claims detailed in the literature were mediated by scientific journals – whose editors’ decisions had nothing to do with science (costs in the former case and who-knows-what in the latter).

At the same time, and because of these two issues – flattening and reinforcing – attribution for primacy is paradoxically more relevant: used right, it can help reverse these problems, these imprints of colonialism and imperialism in the scientific literature. ‘Right’ here means, to me at least, that everyone is credited or no one at all, as an honest reflection of the fact that good science has never been the sole preserve of the Americans or the Europeans. But this requires more problems to be solved first – such as, say, replacing profit-based scientific publishing (and the consequent valorisation of sensational results) with a ‘global scientific record’ managed by the world’s governments through an international treaty.

Perhaps the biggest problem with primacy today is its entrenchment. I’m certain humanities and social science scholars have debated this thoroughly – the choice for the oppressed and the marginalised between beating their oppressors at their own game and transcending the game itself. The latter obviously seems more enlightened, but it is also more labour-intensive – labour that can’t be asked freely of our scientists and students, who are already fighting to find or keep their places in the community of their peers. Then again, beating them at their own game may not be so easy either.

I was prompted to write this post, in fact, after I stumbled on four seemingly innocuous words in a Wikipedia article about stellarators. (I wrote about these nuclear-fusion devices yesterday in the context of a study about solving an overheating problem.) The article notes that when a solenoid – a coiled wire – is bent around to form a loop, the inner perimeter of the loop has a higher density of wire than the outer perimeter. Surely this is obvious, yet the Wikipedia article phrases it thus (emphasis added):

But, as Fermi pointed out, when the solenoid is bent into a ring, the electrical windings would be closer together on the inside than the outside.

Why should a common-sensical claim – one that should strike anyone who can visualise, or even see, a solenoid bent into a loop – be attributed to the celebrated Italian physicist Enrico Fermi? The rest of the paragraph to which this sentence belongs goes on to describe how this winding density affects nuclear fusion reactors; it is an arguably straightforward effect, far removed from the singularity and sophistication of other claims whose origins continue to be mis- or dis-attributed. Wikipedia articles are also not scientific papers. But taken together, the attribution to Fermi carries the footprints of the fact that he, as part of the Knabenphysik of quantum mechanics, worked on many areas of physics, allowing him to attach his name to a variety of concepts at a time when studies on the same topics were only just catching on in other parts of the world – a body of work enabled, as is usual, by war, conquest and the quest for hegemony.
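For what it’s worth, the consequence of the crowded inner windings is a textbook result of Ampère’s law for an ideal toroidal coil (a standard formula, not specific to any one stellarator design):

```latex
B(R) = \frac{\mu_0 N I}{2\pi R}
```

The field strength falls off with the major radius R – i.e. it is stronger along the inner perimeter, where the windings are denser – an inhomogeneity that fusion-device designs must then work around.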

Maybe fighting over primacy is the tax we must pay today for allowing this to happen.

On anticipation and the history of science

In mid-2012, shortly after physicists working with the Large Hadron Collider (LHC) in Europe announced the discovery of a particle that looked a lot like the Higgs boson, there was some clamour in India over news reports not paying enough attention or homage to the work of Satyendra Nath Bose. Bose and Albert Einstein together developed Bose-Einstein statistics, a framework of rules and principles that describes how the fundamental particles called bosons behave. (Paul A.M. Dirac named these particles in Bose’s honour.) The director-general of CERN, the institute that hosts the LHC, visited India shortly after the announcement and said in a speech in Kolkata that in honour of Bose, he and other physicists had decided to capitalise the ‘b’ in ‘boson’.
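In its standard form, Bose-Einstein statistics gives the mean number of bosons occupying a state of energy ε at temperature T, with μ the chemical potential:

```latex
\langle n \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}
```

The −1 in the denominator, which becomes a +1 for fermions, is what allows bosons to crowd into the same state.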

It was a petty victory of a petty demand, but few realised that it was also misguided. Bose made the first known (or at least first published) attempts to understand the particles that would come to be called bosons – but neither he nor Einstein anticipated the existence of the Higgs boson. There have also been some arguments (justified, I think) that Bose wasn’t awarded a Nobel Prize for his ideas because he didn’t make testable predictions; Einstein received the Nobel Prize for physics in 1921 for his explanation of the photoelectric effect, which did make such predictions. The point is that it was unreasonable to expect Bose’s work to be highlighted, much less credited, as some had demanded at the time, every time we find a new boson.

All such demands did was signal an expectation that every major discovery or invention ought to reflect some important contribution by an Indian scientist. Such calls detrimentally affect the public perception of science because they are essentially contextless.

Let’s imagine that the discovery of the Higgs boson was the result of a series of successes, depicted thus:

O—o—o—o—o—O—O—o—o—O—o—o—o—O

An ‘O’ denotes a major success and an ‘o’ a minor one, where major/minor could mean the relative significance within particle-physics communities, the extent to which physicists anticipated it, or simply the amount of journal/media coverage it received. In this sequence, Bose’s paper on a certain class of subatomic particles could be the first ‘O’ and the discovery of the Higgs boson the last. Looking at this sequence, one could say Bose’s work led to a lot of the work that came after and ultimately to the Higgs boson. However, doing so would diminish the amount of study, creativity and persistence that went into each subsequent finding. It would also ignore the fact that we have identified only one branch of endeavour, leading from Bose’s work to the Higgs boson, whereas in reality there are hundreds of branches crisscrossing each other at every ‘o’, big or small – and then there are countless epiphanies, ideas and flashes, each one less the product of following the scientific method than of a mysterious combination of science and intuition.

By reducing the celebration of Bose’s work to pointing at just the Higgs boson point on the branch, we lose the opportunity to know and celebrate its importance for all the points in between – especially the points we still haven’t taken the trouble to understand.

Recently, a couple of people forwarded to me a video on WhatsApp about an Indian-American electrical engineer named Nasir Ahmed. I learnt in college (studying engineering) that Ahmed was the co-inventor, along with K. Ramamohan Rao, of the discrete cosine transform, a technique to represent a given amount of information using fewer bits than the original contains. The video introduced Ahmed’s work as the basis for our being able to take video-conferencing for granted; the discrete cosine transform allows audiovisual data to be compressed by two, maybe three, orders of magnitude, making its transmission across the internet much less resource-intensive than if it had to be sent uncompressed.
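The compression idea can be sketched with a naive discrete cosine transform. This is only an O(N²) illustration of the principle – real codecs use fast, block-based variants – and the example signal and the four-coefficient cutoff are my own assumptions:

```python
import math

def dct2(x):
    """Naive DCT-II: projects the signal onto cosine basis functions."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse transform (a scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                                     for k in range(1, N))
            for n in range(N)]

# A smooth 16-sample signal, like a row of pixels in a gently varying image
signal = [math.cos(2 * math.pi * n / 16) + 0.5 for n in range(16)]
coeffs = dct2(signal)

# Keep only the 4 largest-magnitude coefficients: a 4x reduction in stored numbers
kept = sorted(range(16), key=lambda k: -abs(coeffs[k]))[:4]
truncated = [c if k in kept else 0.0 for k, c in enumerate(coeffs)]

# The reconstruction from a quarter of the data stays close to the original
error = max(abs(a - b) for a, b in zip(signal, idct2(truncated)))
```

Because the transform concentrates a smooth signal’s energy in a few coefficients, discarding the rest loses little – the heart of JPEG- and MPEG-style compression.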

However, the video did little to address the immediate aftermath of Ahmed’s and Rao’s paper – the other work by other scientists that built on it, as well as its use in other settings – and rested on drawing just one connection between two fairly unrelated events (the discrete cosine transform and its derivatives, many of them created in the same decade, heralded signal compression, but they didn’t particularly anticipate different forms of communication).

This flattening of the history of science – and of technology, as the case may be – may be entertaining but it offers no insight into the processes at work behind these inventions, and certainly doesn’t admit any other achievements before each development. In the video, Ahmed reads out tweets by people reacting to his work as depicted on the show This Is Us. One of them says that it’s because of him, and because of This Is Us, that people are now able to exchange photos and videos of each other around the world, without worrying about distance. But… no; Ahmed himself says in the video, “I couldn’t predict how fast the technology would move” (based on his work).

Put simply, I find such forms of communication – and with them the way we are prompted to think about science – objectionable because they are content with the ‘what’ and aren’t interested in the ‘when’, ‘why’ or ‘how’. Simply enumerating the ‘what’ is practically non-scientific, more so when a few particularly sensational ‘whats’ are privileged over others, encouraging us to ignore the inconvenient details. Other similar recent examples are G.N. Ramachandran, whose work on protein structure, especially Ramachandran plots, has been connected to pharmaceutical companies’ quest for new drugs and vaccines, and Har Gobind Khorana, whose work on synthesising RNA has been connected to mRNA vaccines.

Is anything meant to remain complex?

The first answer is “No”. I mean, whatever the subject, the onus is on the writer to break it down to its simplest components and then put them back together in front of the reader’s eyes. If the writer fails to do that, the blame can’t be placed on the subject.

It so happens that the blame can be placed on the writer’s choice of subject. Again, the fault is the writer’s – but what do you do when the subject is important and ought to be written about, because some recent contribution to it makes up a piece of history? Sure, the essentials are the same: read up long and hard on it, talk to people who know it well and are able to break it down in some measure for you, and try to use infographics to augment the learning process.

But these methods, too, have their shortcomings. For one, if the subject has only a tenuous connection to phenomena that affect reality, then strong comparisons have to make way for weak metaphors – with the consequence that the reader is more misguided in the long term than he is “learned” in the short term. For another, these methods require that the writer know what he’s doing – that what he’s writing about makes sense to him before he attempts to make sense of it for his readers.

This is not always the case: given the grey depths that advanced mathematics and physics are plumbing these days, science journalism concerning these areas is written more with a view to making the subject sound awesome, enigmatic and, hopefully, consequential than to providing a full picture of what is going on.

Sometimes, we don’t have a full picture because things are that complex.

The reader is entitled to know – that’s the tenet of the sort of science writing I pursue: informational journalism. I want to break the world around me down into small bits that remain eternally comprehensible. Somewhere, I know, I must be able to distinguish between my shortcomings and the subject’s; when I realise I’m not able to do that effectively, I will have failed my audience.

In such a case, am I confined to highlighting the complexity of the subject I’ve chosen?


The part of the post that makes some sense ends here. The part of the post that may make no sense starts here.

The impact of this conclusion on science journalism worldwide is that there is a barrage of didactic pieces once something is completely understood, and almost no literature during the finding’s formative years, despite public awareness that important, and legitimate, work was being done. (This is the fine line that I’m treading.)

I know this post sounds like a rant – it is a rant – against a whole bunch of things, not the least important of which is that groping in the dark is a fact of life. Somehow, however, I still have a feeling that a lot of scientific research is locked up in silence, still unworded, because we haven’t received the final word on it. A safe course, of course: nobody wants to be the one who announced something prematurely, only for the eventual result to turn out to be something else entirely.