By the way: Chekhov’s gun and the science article

“If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don’t put it there.” (source)

This is the principle of Chekhov’s gun: that every item within a narrative must contribute to the overarching narrative itself, and those that don’t should be removed. This is very, very true of the first two Harry Potter books, where J.K. Rowling includes seemingly random bits of information in the first half of each book that, voilà, suddenly reappear during the climax in important ways. (Examples: Quirrell’s turban and the Whomping Willow.) Thankfully, Rowling’s writing improves significantly from the third book onwards, where the Chekhov’s guns are introduced more subtly, and don’t always stay out of sight before being revived for the grand finale.

However, does Chekhov’s gun have a place in a science article?

Most writers, editors and readers (I suspect) would reply in the affirmative. The more a piece of science communication stays away from redundancy, the better. Why introduce a term if it’s not going to be reused, or if it won’t contribute to the reader understanding what the writer has set out to explain? This is common-sensical. But my concern is with information that is deftly embedded in the overarching narrative yet plays no role in further elucidating the writer’s overall point.

Consider this example: I’m explaining a new research paper that talks about how a bunch of astronomers used a bunch of cool techniques to identify the properties of a distant star. While what is entirely novel about the paper is the set of techniques, I also include two lines about how the telescopes the astronomers used to make their observations operate using a principle called long baseline interferometry. And a third line about why each telescope is equipped with an atomic clock.

Now, I have absolutely no need to mention the phrases ‘long baseline interferometry’ and ‘atomic clocks’ in the piece. I can make my point just as well without them. However, to me it seems like a good opportunity to communicate to – and not just inform – the reader about interesting technologies, an opportunity I may not get again. But a professional editor (again, I suspect) would argue that if I’m trying to make a point and I know what that point is, I should just make it. That, like a laser pointer, I should keep my arguments focused and coherent.

I’m not sure I would agree. A little bit of divergence is okay, maybe even desirable at times.

Yes, I’m aware that editors working on stories that are going to be printed, and/or paying by the word, would like to keep things as concisely pointy as possible. And yes, I’m aware that including something that needn’t be included risks throwing the reader off, and that we ought to minimise that risk at all times. Finally, yes, I’m aware that digressing into rivulets of information also forces the writer to later segue back into the narrative river, and that the return may not be elegant.

Of these three arguments (that I’ve been able to think of; if you have others, please feel free to let me know), the first one alone has the potential to be non-negotiable. The other two are up to the writer and the editor: if she or they can tuck away little gems of trivia without disrupting the story’s flow, why not? I for one would love to discover them, to find out about connections – scientific, technological or otherwise – in the real world that frequently find expression only with the prefix of a “by the way, did you know…”.

Featured image credit: DariuszSankowski/pixabay.

Before seeing, there are the ways of imaging

When May-Britt Moser, Edvard Moser and John O’Keefe were awarded the 2014 Nobel Prize for physiology and medicine “for their discoveries of cells that constitute a positioning system in the brain”, there was a noticeable uptick in the number of articles on similar subjects in the popular as well as scientific literature in the following months. The same thing happened with the science Nobel Prizes in subsequent years, and I suspect it will be the same this year with cryo-electron microscopy (cryoEM) as well. And I’d like to ride this wave.

§

It has often been the case that the Nobel Prizes for physiology/medicine (a.k.a. ~ for biology) and for chemistry have awarded advancements in chemistry and biology, respectively. This year, however, the chemistry prize leaned more towards physics. Joachim Frank, Jacques Dubochet and Richard Henderson – three biologists – were on a quest to make the tool they were using to explore structural biology more powerful, more efficient. So Frank invented computational techniques; Dubochet invented a new way to prepare the sample; and Henderson used them both deftly to prove their methods worked.

Since then, cryoEM has come a long way, but the improvements since have only been more sophisticated versions of what Frank, Dubochet and Henderson first demonstrated … except for one component: the microscope’s electronics.

Just the way human eyes are primed to detect photons of a certain wavelength – extracting the information encoded in them, converting that into an electric signal and sending it to the brain for processing – a cryoEM uses electrons. A wave can be scattered by objects in its path that are of a size comparable to the wave’s wavelength. So electrons, which have a shorter wavelength than photons, can be used to probe smaller distances. A cryoEM fires a tight, powerful beam of electrons into the specimen. Parts of the specimen scatter the electrons into a detector on the microscope. The detector ‘reads’ how the electrons have changed and delivers that information to a computer. This happens repeatedly as electron beams are fired at different copies of the specimen oriented at random angles. A computer then puts together a high-resolution 3D image of the specimen using all the detector data. In this scheme of things, a technological advancement in 2012 significantly improved the cryoEM’s imaging abilities: the direct electron detector, developed to replace the charge-coupled device (CCD).
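As an aside, the claim that shorter wavelengths probe smaller distances is easy to put numbers on. Here’s a back-of-the-envelope sketch in Python – the 200 keV beam energy is borrowed from the figure quoted later in this post, and the function is my own illustration, not anything from the paper:

```python
import math

# Physical constants (CODATA values, rounded)
HC_KEV_PM = 1239.84        # Planck's constant x speed of light, in keV*pm
ELECTRON_REST_KEV = 511.0  # electron rest-mass energy, in keV

def electron_wavelength_pm(kinetic_kev):
    """Relativistic de Broglie wavelength of an electron, in picometres."""
    # momentum x c = sqrt(E_k^2 + 2 * E_k * m*c^2), so lambda = hc / pc
    pc = math.sqrt(kinetic_kev**2 + 2 * kinetic_kev * ELECTRON_REST_KEV)
    return HC_KEV_PM / pc

wavelength = electron_wavelength_pm(200)  # a typical cryoEM beam energy
print(f"{wavelength:.2f} pm")             # ~2.51 pm
```

That’s roughly 2.5 picometres, against about 500,000 pm for green light – which is why an electron beam can resolve individual biomolecules while an optical microscope cannot.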

The simplest imaging system known to humans is the photographic film, which uses a surface composed of certain chemical substances that are sensitive to visible light. When the surface is exposed to a frame, say a painting, the photons reflected by the painting impinge on the surface. The substances therein then ‘record’ the information carried by the photons in the form of a photograph. A CCD employs a surface of metal-oxide semiconductors (MOS). A semiconductor relies on the behaviour of electric charge on either side of a special junction: an interface of dissimilar materials to which impurities have been added such that one layer is rich in electrons (n) and the other, poor (p). The junction will now either conduct electricity or not depending on how a voltage is applied across it. Anyway: when a photon impinges on the MOS, the latter releases an electron (thanks to the photoelectric effect) that is then moved through the device to an area where it can be manipulated to contribute to one pixel of the image.

(Note: When I write ‘one photon’ or ‘one electron’, I don’t mean one exactly. Various uncertainties, including Heisenberg’s, prevail in quantum mechanics and it’s unreasonable to assume humans can manipulate particles one at a time. My use of the singular is only illustrative. At the same time, I hope you will pause to appreciate – later in this post – how close to the singular we’ve been able to get.)

CCDs can produce images quickly and with high contrast even in low light. However, they have an important disadvantage: at higher spatial frequencies, they have a lower detective quantum efficiency than photographic films. Detective quantum efficiency (DQE) measures how much of the signal-to-noise ratio of the incoming signal a detector – like the film or a CCD – preserves in the image it records. For example, when you’re getting a dental X-ray done to understand how your teeth look below the gums, your mouth is bombarded with X-ray photons that penetrate the gums but not the teeth. The more such photons there are, the better the image of your teeth. However, inundating your mouth with X-rays just to get a better picture risks damaging tissue and hurting you. An X-ray ‘camera’ with a detector of lower detective quantum efficiency would need just such a heavier dose to produce the same image quality. The simplest workaround would be to use an amplifier to boost the signal produced by the detector – but that would boost the noise as well.

So, in other words, CCDs have more trouble recording the finer details in an image than photographic films when there is a lot of noise coming with the incident signal. The noise can also be internally generated, such as during the process when photons are converted into electrons.
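The usual definition of DQE – the squared ratio of output to input signal-to-noise ratios – also makes clear why amplification is no workaround. A few lines of Python, with arbitrary illustrative numbers (the 70% figure is not a property of any real detector):

```python
def dqe(snr_in, snr_out):
    """Detective quantum efficiency: the squared fraction of the
    incoming signal-to-noise ratio that the detector preserves.
    A perfect detector scores 1.0."""
    return (snr_out / snr_in) ** 2

# A detector that passes on 70% of the incident SNR:
print(round(dqe(snr_in=10.0, snr_out=7.0), 2))  # 0.49

# Amplifying the detector's output scales signal and noise together,
# so the SNR -- and therefore the DQE -- is unchanged:
signal, noise, gain = 70.0, 10.0, 5.0
print((signal * gain) / (noise * gain) == signal / noise)  # True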

However, scientists can’t simply swap the CCD out for photographic film, because CCDs have other important advantages. They scan images faster, allow for easier refocusing and realignment of the object under study, and require less maintenance. This dilemma provided the impetus to develop the direct electron detector – effectively a CCD with a better detective quantum efficiency.

Because a CCD is in the business of ‘seeing’ photons while a cryoEM works with electrons, a scintillator is placed between the electron beam and the CCD. When an electron hits the scintillator, the material absorbs its energy and emits a glow – in the form of a photon. This photon is then picked up by the CCD for processing. Sometimes, the incoming electron may not produce a photon at exactly the location on the scintillator where it arrives. Instead, it may scatter through multiple locations, producing a splatter of photons over a larger area and creating a blur in the image.

In a direct electron detector, the scintillator is removed, forcing the CCD to directly receive and process the electrons from the beam itself. Such (higher energy) electrons can damage the CCD as well as produce spurious signals within the system. These effects can be protected against using suitable hardware and circuit design techniques, both of which required advancements in materials science that became available only recently. Even so, the eventual device itself is pretty simple in design. According to the 2009 doctoral thesis of one Liang Jin,

The device can be divided into three major regions. At the very top of the surface is the circuitry layer that has pixel transistors and photodiode as well as interconnects between all the components (metallisation layers). The middle layer is a p-epitaxial layer (about 8 to 10 µm thick) that is epitaxially grown with very low defect levels and highly doped. The rest of the 300 µm silicon substrate is used mainly for mechanical support.

On average, a single incident electron of 200 keV will generate about 2,000 ionisation electrons in the 10 µm epitaxial layer, which is significantly larger than the noise level of the device (less than 50 electrons). Each pixel integrates the collected electrons during an exposure period and at the conclusion of a frame, the contents of the sensor array are read out, digitised and stored.

To understand the extent to which noise was reduced as a result, consider an example. In 2010, a research group led by Jean-Paul Armache of the Ludwig-Maximilians-Universität München was able to image eukaryotic ribosomes using cryoEM at a resolution of 6 angstrom (0.6 nanometres) using 1.4 million images. In 2013, a different group, led by Xiao-chen Bai of the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, imaged the same ribosomes to 4.5 angstrom using 35,813 images. The first group used cryoEM + CCDs. The second group used cryoEM + direct detection devices.

An even newer development seeks to bring back the CCD as the detector of choice among structural biologists. In September 2017, scientists from the Fermi National Accelerator Laboratory announced that they had engineered a highly optimised skipper CCD in their lab. The skipper CCD was first theorised by, among others, D.D. Wen in 1974. It’s a CCD in which the electrons released by the photons are measured multiple times – up to 4,000 times per pixel according to one study – during processing to better separate signal from noise. The same study said that, as a result, the skipper CCD’s readout noise could be reduced to 0.068 electrons per pixel. The cost: a few hours would pass between the CCD receiving its first electrons and the processed image becoming available. But in a review, Michael Schirber, a corresponding editor for Physics, argues that “this could be an acceptable tradeoff for rare events, such as hypothetical dark matter particles interacting with silicon atoms”.
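The skipper CCD’s trick is fundamentally statistical: averaging N independent, non-destructive reads of the same pixel charge shrinks the readout noise by a factor of √N. A toy simulation – the 4.3-electron single-read noise is my own assumption, chosen only so that 4,000 reads land near the 0.068-electron figure quoted above:

```python
import math
import random

random.seed(0)

single_read_noise = 4.3   # assumed rms noise of one read, in electrons
n_reads = 4000            # non-destructive reads per pixel, as cited above
true_charge = 100.0       # electrons actually collected in the pixel

# Average many noisy, non-destructive reads of the same pixel charge.
reads = [random.gauss(true_charge, single_read_noise) for _ in range(n_reads)]
estimate = sum(reads) / n_reads

# Readout noise falls by a factor of sqrt(N):
effective_noise = single_read_noise / math.sqrt(n_reads)
print(f"effective readout noise: {effective_noise:.3f} e-")  # 0.068 e-
print(f"error in this run: {abs(estimate - true_charge):.3f} e-")
```

The √N gain is also why the method is slow: each of those thousands of reads takes time, which is the hours-long processing cost mentioned above.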

Featured image: Scientists using a 300kV cryo-electron microscope at the Max Planck Institute of Molecular Physiology, Dortmund. Credit: MPI Dortmund.

Are the papers behind this year’s Nobel Prizes in the public domain?

Note: One of my editors thought this post would work for The Wire as well, so it’s been republished there.

“… for the greatest benefit of mankind” – these words are scrawled across a banner that adorns the Nobel Prize’s homepage. They are the words of Alfred Nobel, who instituted the prizes and bequeathed his fortunes to run the foundation that awards them. The words were chosen by the prize’s awarders to denote the significance of their awardees’ accomplishments.

However, the scientific papers that first described these accomplishments in the technical literature are often not available in the public domain. They languish behind paywalls erected by the journals that publish them, that seek to cash in on their importance to the advancement of science. Many of these papers are also funded by public money, but that hasn’t deterred journals and their publishers from keeping the papers out of public reach. How then can they be for the greatest benefit of mankind?

§

I’ve listed some of the more important papers published by this year’s laureates; they describe work that earned them their respective prizes. Please remember that my choice of papers is selective; where I have found other papers that are fully accessible – or otherwise – I have provided a note. This said, I picked the papers from the scientific background document first and then checked if they were accessible, not the other way round. (If you, whoever you are, are interested in replicating my analysis but more thoroughly, be my guest; I will help you in any way I can.)

A laureate may have published many papers collectively for which he was awarded (this year’s science laureates are all male). I’ve picked the papers most proximate to their citation from the references listed in the ‘advanced scientific background’ section available for each prize on the Nobel Prize website. Among publishers, the worst offender appears – to no one’s surprise – to be Elsevier.

A paper title in green indicates it’s in the public domain; red indicates it isn’t – both on the pages of the journal itself. Some titles in red may be available in full elsewhere, such as in university archives. The names of laureates in the papers’ citations are underlined.

Physiology/medicine

“for their discoveries of molecular mechanisms controlling the circadian rhythm”

The paywalls on the papers by Young and Rosbash published in Nature were lifted by the journal on the day their joint Nobel Prize was announced. Until then, they’d been inaccessible to the general public. Interestingly, both papers acknowledge funding grants from the US National Institutes of Health, a tax-funded body of the US government.

Michael Young

Restoration of circadian behavioural rhythms by gene transfer in Drosophila – Nature 312, 752 – 754 (20 December 1984); doi:10.1038/312752a0 link

Isolation of timeless by PER protein interaction: defective interaction between timeless protein and long-period mutant PERL – Gekakis, N., Saez, L., Delahaye-Brown, A.M., Myers, M.P., Sehgal, A., Young, M.W., and Weitz, C.J. (1995). Science 270, 811–815. link

Michael Rosbash

Feedback of the Drosophila period gene product on circadian cycling of its messenger RNA levels – Nature 343, 536 – 540 (08 February 1990); doi:10.1038/343536a0 link

The period gene encodes a predominantly nuclear protein in adult Drosophila – Liu, X., Zwiebel, L.J., Hinton, D., Benzer, S., Hall, J.C., and Rosbash, M. (1992). J Neurosci 12, 2735–2744. link

Jeffrey Hall

Molecular analysis of the period locus in Drosophila melanogaster and identification of a transcript involved in biological rhythms – Reddy, P., Zehring, W.A., Wheeler, D.A., Pirrotta, V., Hadfield, C., Hall, J.C., and Rosbash, M. (1984). Cell 38, 701–710. link

P-element transformation with period locus DNA restores rhythmicity to mutant, arrhythmic Drosophila melanogaster – Zehring, W.A., Wheeler, D.A., Reddy, P., Konopka, R.J., Kyriacou, C.P., Rosbash, M., and Hall, J.C. (1984). Cell 39, 369–376. link

Antibodies to the period gene product of Drosophila reveal diverse tissue distribution and rhythmic changes in the visual system – Siwicki, K.K., Eastman, C., Petersen, G., Rosbash, M., and Hall, J.C. (1988). Neuron 1, 141–150. link

Physics

“for decisive contributions to the LIGO detector and the observation of gravitational waves”

While results from the LIGO detector were published in peer-reviewed journals, the development of the detector itself was supported by personnel and grants from MIT and Caltech. As a result, the Nobel laureates’ more important contributions were published as reports since archived by the LIGO collaboration and made available in the public domain.

Rainer Weiss

Quarterly progress report – R. Weiss, MIT Research Lab of Electronics 105, 54 (1972) link

The Blue Book – R. Weiss, P.R. Saulson, P. Linsay and S. Whitcomb link

Chemistry

“for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”

The journal Cell, in which the chemistry laureates appear to have published many papers, publicised a collection after the Nobel Prize was announced. Most papers in the collection are marked ‘Open Archive’ and are readable in full. However, the papers cited by the Nobel Committee in its scientific background document don’t appear there. I also don’t know whether the papers in the collection available in full were always available in full.

Jacques Dubochet

Cryo-electron microscopy of vitrified specimens – Dubochet, J., Adrian, M., Chang, J.-J., Homo, J.-C., Lepault, J., McDowall, A. W., and Schultz, P. (1988). Q. Rev. Biophys. 21, 129-228 link

Vitrification of pure water for electron microscopy – Dubochet, J., and McDowall, A. W. (1981). J. Microsc. 124, 3-4 link

Cryo-electron microscopy of viruses – Adrian, M., Dubochet, J., Lepault, J., and McDowall, A. W. (1984). Nature 308, 32-36 link

Joachim Frank

Averaging of low exposure electron micrographs of non-periodic objects – Frank, J. (1975). Ultramicroscopy 1, 159-162 link

Three-dimensional reconstruction from a single-exposure, random conical tilt series applied to the 50S ribosomal subunit of Escherichia coli – Radermacher, M., Wagenknecht, T., Verschoor, A., and Frank, J. (1987). J. Microsc. 146, 113-136 link

SPIDER – A modular software system for electron image processing – Frank, J., Shimkin, B., and Dowse, H. (1981). Ultramicroscopy 6, 343-357 link

Richard Henderson

Model for the structure of bacteriorhodopsin based on high-resolution electron cryo-microscopy – Henderson, R., Baldwin, J. M., Ceska, T. A., Zemlin, F., Beckmann, E., and Downing, K. H. (1990). J. Mol. Biol. 213, 899-929 link

The potential and limitations of neutrons, electrons and X-rays for atomic resolution microscopy of unstained biological molecules – Henderson, R. (1995). Q. Rev. Biophys. 28, 171-193 link (available in full here)

§

Locked behind paywalls – often impossible to breach because of the fees involved – the red-tagged papers are kept out of the hands of less-well-funded institutions and libraries, particularly researchers in countries whose currencies have lower purchasing power. More about this here and here. But the more detestable thing about the papers listed above is that the latest of them (among the reds) was published in 1995, fully 22 years ago, and the earliest, 42 years ago – both on cryo-electron microscopy. Both represent almost unforgivable durations across which to maintain paywalls, with the journals Nature and Cell further attempting to ride the Nobel wave for attention. It’s not clear whether the papers they’ve liberated from behind the paywall will remain free hereafter, either.

Read all this in the context of the Nobel Prizes not being awarded to more than three people at a time and maybe you’ll see how much of scientific knowledge is truly out of bounds for most of humankind.

Featured image credit: Pexels/pixabay.


In pursuit of a nebulous metaphor…

I don’t believe in god, but if he/it/she/they existed, then his/its/her/their gift to science communication would’ve been the metaphor. Metaphors help make sense of truly unknowable things, get a grip on things so large that our minds boggle trying to comprehend them, and help writers express book-length concepts in a dozen words. Even if something is lost in translation, as it were, metaphors help both writers and readers get a handle on something they would otherwise have struggled to grasp.

One of my favourite expositions on the power of metaphors appeared in an article by Daniel Sarewitz, writing in Nature (readers of this blog will be familiar with the text I’m referring to). Sarewitz was writing about how nobody but trained physicists understands what the Higgs boson really is, because those of us who do think we get it are only getting metaphors. The Higgs boson exists in a realm that humans cannot ever access (even Ant-Man almost died getting there), and physicists make sense of it through complicated mathematical abstractions.

Mr Wednesday makes just this point in American Gods (the TV show), when he asks his co-passenger on a flight what it is that makes them trust that the plane will fly. (Relatively) Few of us know the physics behind Newton’s laws of motion and Bernoulli’s work in fluid dynamics – but many of us believe in their robustness. In a sense, it is faith and metaphors that keep us going, not knowledge itself, because we truly know only a little.

However, the ease that metaphors offer writers at such a small cost (minimised further for those writers who know how to deal with that cost) sometimes means that they’re misused or overused. Sometimes, some writers will abdicate their responsibility to stay as close to the science – and the objective truth, such as it is – as possible by employing metaphors where one could easily be avoided. My grouse of choice at the moment is this tweet by New Scientist:

The writer has had the courtesy to use the word ‘equivalent’, but it can’t do much to salvage the sentence’s implications from the dumpster. Different people have different takeaways from the act of smoking. I think of lung and throat cancer; someone else will think of reduced lifespan; yet another person will think it’s not so bad because she’s a chain-smoker; someone will think it gives them GERD. It’s also a bad metaphor to use because the effects of smoking vary from person to person based on various factors (including how long they’ve been smoking 15 cigarettes a day for). This is why researchers studying the effects of smoking quantify not the risk but the relative risk (RR): the risk of some ailment (including reduced lifespan) among smokers relative to non-smokers in the same population.
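For what it’s worth, the relative risk itself is simple arithmetic. A toy calculation with made-up numbers – none of these figures come from any study; they’re purely illustrative:

```python
def relative_risk(risk_exposed, risk_unexposed):
    """How many times more likely an outcome is among the exposed
    (here, smokers) than among the unexposed (non-smokers)."""
    return risk_exposed / risk_unexposed

# Hypothetical: 30 of 1,000 smokers and 10 of 1,000 non-smokers
# in the same population develop some ailment.
rr = relative_risk(30 / 1000, 10 / 1000)
print(round(rr, 2))  # 3.0 -- smokers are three times as likely
```

The point of the RR is precisely that it is defined against a comparison group within the same population, which is what a blanket “X is like smoking 15 cigarettes a day” claim throws away.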

There are additional concerns that don’t allow the smoking-loneliness congruence to be generally applicable. For example, according to a paper published in the Journal of Insurance Medicine in 2008,

An important consideration [is] the extent to which each study (a) excluded persons with pre-existing medical conditions, perhaps those due to smoking, and (b) controlled for various co-morbid factors, such as age, sex, race, education, weight, cholesterol, blood pressure, heart disease, and cancer. Studies that excluded persons with medical conditions due to smoking, or controlled for factors related to smoking (e.g., blood pressure), would be expected to find lower RRs. Conversely, studies that did not account for sufficient confounding factors (such as age or weight) might find higher RRs.

So, which of these – or any other – effects of smoking is the writer alluding to? Quoting from the New Scientist article,

Lonely people are at increased risk of “just about every major chronic illness – heart attacks, neurodegenerative diseases, cancer,” says Cole. “Just a completely crazy range of bad disease risks seem to all coalesce around loneliness.” A meta-analysis of nearly 150 studies found that a poor quality of social relationships had the same negative effect on risk of death as smoking, alcohol and other well-known factors such as inactivity and obesity. “Correcting for demographic factors, loneliness increases the odds of early mortality by 26 per cent,” says Cacioppo. “That’s about the same as living with chronic obesity.”

The metaphor the writer was going for was one of longevity. Bleh.

When I searched for the provenance of this comparison (between smoking and loneliness), I landed up on two articles by the British writer George Monbiot in The Guardian, both of which make the same claim*: that smoking 15 cigarettes a day will reduce your lifespan by as much as a lifetime of loneliness. Both claims referenced a paper titled ‘Social Relationships and Mortality Risk: A Meta-analytic Review’, published in July 2010. Its ‘Discussion’ section reads:

Data across 308,849 individuals, followed for an average of 7.5 years, indicate that individuals with adequate social relationships have a 50% greater likelihood of survival compared to those with poor or insufficient social relationships. The magnitude of this effect is comparable with quitting smoking and it exceeds many well-known risk factors for mortality (e.g., obesity, physical inactivity).

In this context, there’s no doubt that the writer is referring to the benefits of smoking cessation on lifespan. However, the number ’15’ itself is missing from its text. This is presumably because, as Cacioppo – one of the scientists quoted by the New Scientist – says, loneliness can decrease your lifespan by 26%, and I assume an older study cited by the one quoted above relates it to smoking 15 cigarettes a day. So I went looking, and (two hours later) couldn’t find anything.

I don’t mean to rubbish the congruence as a result, however – far from it. I want to highlight the principal reason I didn’t find a claim that fit the proverbial glove: most studies that seek to quantify smoking-related illnesses like to keep things as specific as possible, especially the cohort under consideration. This suggests that extrapolating the ’15 cigarettes a day’ benchmark into other contexts is not a good idea, especially when the writer does not know – and the reader is not aware of – the terms of the ’15 cigarettes’ claim nor the terms of the social relationships study. For example, one study I found involved the following:

The authors investigated the association between changes in smoking habits and mortality by pooling data from three large cohort studies conducted in Copenhagen, Denmark. The study included a total of 19,732 persons who had been examined between 1967 and 1988, with reexaminations at 5- to 10-year intervals and a mean follow-up of 15.5 years. Date of death and cause of death were obtained by record linkage with nationwide registers. By means of Cox proportional hazards models, heavy smokers (≥15 cigarettes/day) who reduced their daily tobacco intake by at least 50% without quitting between the first two examinations and participants who quit smoking were compared with persons who continued to smoke heavily.

… and it presents a table with various RRs. Perhaps something from there can be fished out by the New Scientist writer and used carefully to suggest the comparability between smoking-associated mortality rates and the corresponding effects of loneliness…

*The figure of ’15 cigarettes’ seems to appear in conjunction with a lot of claims about smoking as well as loneliness all over the web. It seems 15 a day is the line between light and heavy smoking.

Featured image credit: skeeze/pixabay.

A close encounter with the first kind: the obnoxious thieves of good journalism

A Huffington Post article purportedly published by the US bureau has flicked two quotes from a story first published by The Wire, on the influenza epidemics ravaging India. The story’s original author and its editor (me) reached out to HuffPo India folks via Twitter to get them to attribute The Wire for both quotes – and remove, rephrase or enclose-in-double-quotes a paragraph copied verbatim from the original. What this resulted in was half-assed acknowledgment: one of the quotes was attributed to The Wire, the other quote was left unattributed, giving the impression that it was sourced first-hand, and the plagiarised paragraph was left in as is.

I’m delighted that The Wire‘s story is receiving wider reach, and is being read by the people who matter around the world. (And I request you, the reader, to please share the original article and not the plagiarised version.)

But to acknowledge our requests for change and then to assume that attributing only one of the quotes will suffice is to suggest that “this is enough”. This is an offensive attitude that I think has its roots in a complacence of sorts. Huffington Post could be assuming that partial attribution (and plagiarism) is ‘okay’ because nobody cares about these things when they’re getting valuable information in return, and because it’s Huffington Post and its traffic volumes will make up for the oversight.

For the average consumer – by which I mean someone who only consumes journalism and doesn’t produce it – does it matter that Huffington Post, in some sense, has cheated to get the content it has? I don’t think it does. (This is a problem; there should be specific short-term sanctions if a publisher chooses to behave this way. Edit: Priyanka Pulla, the original author: “It DOES hurt you, the reader. Each time you read bad journalism, it’s because content thieves destroy market for good journalism and skew incentives.”) However, if anything, the publisher effectively signals that consumers will be getting content produced in newsrooms other than the Post’s. The website is now a ‘destination’ site.

Who this kind of irreverence really hurts is other journalists. For example, Pulla spent a lot of time and work writing the piece, I spent a lot of time and work editing it and The Wire spent a lot of money for commissioning and publishing it. By thinking our work is available to reuse for free, Huffington Post disparages the whole enterprise.

This enterprise is an intangible commodity – the kind that encourages readers to pay for journalism because it’s the absence of this enterprise, and the attendant diligence, that leads to ‘bad journalism’. And at a time when every publisher publishing journalistic content online on the planet is struggling to make money, what Huffington Post has done is value theft. At last check, the article on their site had 3,300 LinkedIn Shares and 5,100 shares on StumbleUpon.

(Edit: “We didn’t know” wouldn’t work with HuffPo here because my issue is with their response to our bringing the problems to their notice.)

This isn’t the first time such a thing has happened with The Wire. From personal experience (having managed the site for 18 months), there are three forms of content-stealing I’ve seen:

  1. The more obnoxious kind – where a publisher that has traffic in the millions every month lifts an article, or reuses parts of it, without permission; and when pulled up for it, gives this excuse: “We’re giving your content free publicity. You should let us do this.” The best response to this has been public-shaming.
  2. The more insidious kind – where a bot from an algorithmic publisher freely republishes content in bulk without permission, and then takes the content down 24-48 hours later once its shelf-life has lapsed. The most effective, and also the most blunt-edged, response to this has been to issue a DMCA notice.
  3. The more frustrating kind – where a small publisher (monthly traffic at 1 million/month or less and/or operating on a small budget) reuses some articles without permission and then pulls a sad face when pulled up for the act. The best response to this has either been to strike a deal with the publisher for content-exchange or a small fee or, of course, a strongly worded email (the latter is restricted to some well-defined circumstances because otherwise it’s The Wire strong-arming the little guy and nobody likes that).

Dear Huffington Post – I dearly hope you don’t belong to the first kind.

Featured image credit: TheDigitalWay/pixabay.

Writing, journalism and the revolutionary spirit

One of my favourite essays of all time – insofar as that’s a legitimate category – is one called ‘How to do what you love’ by Paul Graham, the startup guru. In it, he makes a case for the usefulness of a passion. Mine is writing; what kind of writing I don’t know yet. According to Graham,

To be happy I think you have to be doing something you not only enjoy, but admire. You have to be able to say, at the end, wow, that’s pretty cool. This doesn’t mean you have to make something. If you learn how to hang glide, or to speak a foreign language fluently, that will be enough to make you say, for a while at least, wow, that’s pretty cool. What there has to be is a test.

So one thing that falls just short of the standard, I think, is reading books. Except for some books in math and the hard sciences, there’s no test of how well you’ve read a book, and that’s why merely reading books doesn’t quite feel like work. You have to do something with what you’ve read to feel productive.

My personal test of how well I’ve read a book comes when I write about it – when I take away something the book’s author did not directly intend, but which I realised by merging the book’s lessons and my experiences. I’ve applied my habit of writing – whether for The Wire or for the blog – in a similar vein to almost everything I do, hear, read or think. I write personally informed takeaways. The flipside of this is that when I’m unable to write about something, I disregard it, and I don’t know what the consequences of this have been or will be.

The one thing I’ve realised I don’t like about this habit is that it prevents me from crafting bigger lessons. Because I read a book, write about it, and then throw the book away (figuratively speaking), I make a habit of ‘not mulling over it’. I move on. As a result, my blog is littered with a string of shorter, piecemeal observations but nothing too protracted or profound. Thankfully, my writing habit also improves my memory: I remember better what I’ve written than what I’ve read/seen/heard. So looking back, I can piece together a picture of my thoughts over the course of time. The true issue arises when this habit is brought over into journalism.

In journalism, this seems to be a problem because it fosters a coverage-oriented mindset: “Have I covered this? If yes, then move on. If not, then cover it now and then move on.” Our coupling with the news cycle – which is a polished way of saying our dependence on traffic from Google News – means we frequently cover the smaller issues but rarely piece them together to reflect on the bigger ones. Ultimately, we believe that because we’ve written about it, it counts for something, and that we get to move on with clearer heads.

Mayank Tewari, who wrote the dialogues for the Bollywood film Newton, perhaps alludes to this when he tells Anindita Ghose (in Livemint),

“We are living in a time of self-conscious irony,” says Tewari. “We are aware of what’s wrong with our society… but if you read the righteous online news platforms, it’s as if just knowing this elevates [their writers and editors] from that reality. The revolutionary spirit is exhausted right there…the constant talking about what they are doing and what other people are not doing.”

The realisation that one knows about something is meaningless to our readers at large. But is its expression in words also equally meaningless? If they’ve adopted the coverage mindset, then Tewari is right: “the revolutionary spirit is exhausted right there”. We need to stop assuming that expressing our knowledge once will change anything.

This is difficult to internalise, however, especially if the journalist in question is busy. To go hammer and tongs at an issue, to repeat some details over and over again, doesn’t make for good business; it’s novelty that sells, so it’s novelty that journalists seek out. And depending on what kind of a news organisation a journalist is employed at, I wouldn’t blame her if she wasn’t harbouring the revolutionary spirit.

Featured image credit: ChristopherPluta/pixabay.

Why do we cover the Nobel Prize announcements?

The Nobel Prizes are too big to fail. Even if they’ve become beset by a host of problems, such as:

  1. Long gap between invention/discovery and recognition,
  2. A large cash component given to old scientists,
  3. Limiting number of awardees to three,
  4. Not awarding prizes posthumously,
  5. Not awarding prizes to women, especially in the sciences, and
  6. Limiting laureates to those who had published in English or European languages*

… they have been able to carry over the momentum they accrued in the mid-20th century, as an identifier of important contributions, into the 21st century. The winner of a Nobel Prize gets his (it’s usually ‘his’) name added to a distinguished list, and has the attention of the world’s press turn towards him for 12-24 hours. The latter in particular is almost impossible to achieve otherwise. As a result, the Nobel Prizes, for all their shortcomings, still stand for a certain kind of recognition that is not easily attainable through other means.

Any other prize instituted today with the same shortcomings as the Nobel Prizes will struggle to be taken seriously (unless the cash component is overwhelmingly high). It is thanks to these qualities of its legacy that even those who write against the Nobel Prizes and their import can at best hope to fix the prize, and not have it cancelled. And this is also why people continue to lament problems #3 and #5 instead of neglecting the Nobel Prizes altogether.

I personally wish the Nobel Prizes stopped being important – but it’s a conflicted desire because of two reasons:

  1. It’s an opportunity – even if it’s only for one week of the year – to talk about pure science research instead of having to bother with what it’s good for, and still be read. Otherwise, there’s a high cost attached to ‘indulging’ in such articles.
  2. The Nobel Prizes are not going to drop in value among the people if only I abstain from covering them. Either all journalists have to stop giving a damn (they won’t) or the Nobel Committee itself will have to rethink the prizes (so far, they haven’t).

So if I alone sit out and don’t write about who won which Nobel Prize for what, only I – rather, The Wire – loses out. I’d much rather make a bigger deal of homegrown awards like the S.S. Bhatnagar Prize, specialised prizes like the Wolf, the Abel and the Lasker, and the international – and more au courant – Breakthrough Prizes.

*I’m speaking only about the science prizes.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have electric charge, the particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (pp. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that particles with colour charge can’t ever be isolated. They’re always to be found in pairs or bigger clumps. They can be freed only if the clumps are heated past the Hagedorn temperature – about two trillion kelvin – into a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that binds quarks together is called the strong nuclear force – but saying it ‘acts between’ quarks and gluons is slightly misleading. The gluons actually mediate the strong nuclear force: a physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.

When you pull two quarks apart, you’d think the force between them will reduce. It doesn’t; it actually increases. This is very counterintuitive. For example, the gravitational force exerted by Earth drops off the farther you get away from it. The electromagnetic force between an electron and a proton decreases the more they move apart. The strong nuclear force alone grows stronger as the two particles it acts on move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that the strong nuclear force behaves the opposite way at very short distances. When quarks are squeezed closer than around 1 fermi (0.000000000000001 metres, roughly the size of a proton), the force between them doesn’t keep growing – it becomes weaker and weaker, until the quarks behave almost as free particles. This is called asymptotic freedom: as the distance shrinks (equivalently, as the probing energy rises), the strength of the interaction drops off asymptotically towards zero. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for this work.
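For reference, the weakening of the interaction at high energies can be seen in the standard one-loop formula for the running of the strong coupling – a textbook result, not anything specific to the story above; the values of Λ and the flavour count below are illustrative:

```python
import math

def alpha_s(Q, n_f=5, Lambda=0.2):
    """One-loop QCD running coupling:
    alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)),
    with the probing energy Q and the QCD scale Lambda in GeV.
    Higher Q corresponds to shorter distances between the quarks."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q**2 / Lambda**2))

# The coupling shrinks as the probing energy rises – i.e. as the
# quarks get closer together. That is asymptotic freedom.
for Q in (1.0, 10.0, 100.0):
    print(Q, round(alpha_s(Q), 3))
```

With these illustrative inputs, the coupling falls from roughly 0.5 at 1 GeV to roughly 0.13 at 100 GeV – never reaching zero, only approaching it asymptotically.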

In the parlance of particle physics, what makes both behaviours possible – the force strengthening with distance and weakening at close quarters – is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart, if not by the gluons the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t this runaway build-up of energy violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It has some energy of its own – the quantum vacuum energy, which some physicists have speculatively linked to ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far shorter than a second before dissipating into energy. When a charged particle pops into being, its charge attracts particles of opposite charge towards itself and repels particles of the same charge. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build-up of energy when quarks are pulled apart.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both colour and anti-colour. In physical terms, this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also carry colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, it turns green, whereas the green quark that receives the gluon turns blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
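That exchange can be sketched as a toy bookkeeping exercise – purely illustrative, since real QCD amplitudes are quantum superpositions, not classical label-swapping:

```python
def exchange(emitter, absorber, gluon):
    """One gluon exchange in the simplified picture above. The gluon
    carries a (colour, anti-colour-of) pair: the emitter must carry the
    gluon's colour and ends up with the colour whose 'anti' the gluon
    carries away; the absorber does the reverse."""
    colour, anti_of = gluon  # e.g. ('blue', 'green') = blue-antigreen
    assert emitter == colour and absorber == anti_of
    return anti_of, colour  # new colours of (emitter, absorber)

proton = ["blue", "green", "red"]
# The blue quark emits a blue-antigreen gluon; the green quark absorbs it.
proton[0], proton[1] = exchange("blue", "green", ("blue", "green"))
print(sorted(proton))  # ['blue', 'green', 'red'] – one of each: still 'white'
```

The multiset of colours inside the proton is unchanged by the exchange – the toy-model version of the conservation of colour charge.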

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for the supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there are the gluons, which can carry one of eight independent combinations of colour and anti-colour charges (nine naive pairings, minus the one colour-neutral combination that doesn’t exist as a gluon).
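The combinatorial load can be made concrete in a few lines – this is the standard particle-physics counting, nothing specific to any experiment’s software:

```python
from itertools import product

colours = ["red", "green", "blue"]
flavours = ["up", "down", "charm", "strange", "top", "bottom"]

# Every quark flavour comes in each of the three colours.
quark_states = list(product(flavours, colours))
print(len(quark_states))  # 18 coloured quark states

# Naively, 3 colours x 3 anti-colours = 9 pairings for a gluon...
pairings = list(product(colours, ["anti-" + c for c in colours]))
print(len(pairings))  # 9
# ...but the colour-neutral 'singlet' combination doesn't exist as a
# gluon, leaving 8 independent gluon states (the SU(3) counting).
print(len(pairings) - 1)  # 8
```

And every one of those states interacts with the others, which is why lattice QCD calculations eat supercomputer time.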

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

The significance of Cassini’s end

Many generations of physicists, astronomers and astrobiologists are going to be fascinated by Saturn because of Cassini.

I wrote this on The Wire on September 15. I lied. Truth is, I don’t care about Saturn. In fact, I’m fascinated with Cassini because of Saturn. We all are. Without Cassini, Saturn wouldn’t have been what it is in our shared imagination of the planet as well as the part of the Solar System it inhabits. At the same time, without Saturn, Cassini wouldn’t have been what it is in our shared imagination of what a space probe is and how much they mean to us. This is significant.

The aspects of Cassini’s end that are relevant in this context are:

  1. The abruptness
  2. The spectacle

Both together have made Cassini unforgettable (at least for a year or so) and its end a notable part of our thoughts on Saturn. We usually don’t remember probes, their instruments and interplanetary manoeuvres during ongoing missions because we are appreciably more captivated by the images and other data the probe is beaming back to Earth. In other words, the human experience of space is mediated by machines, but when a mission is underway, we don’t engage with information about the machine and/or what it’s doing as much as we do with what it has discovered/rediscovered, together with the terms of humankind’s engagement with that information.

This is particularly true of the Hubble Space Telescope, whose images constantly expand our vision of the cosmos while very few of us know how the telescope actually achieves what it does.

From a piece I wrote on The Wire in July 2015:

[Hubble’s] impressive suite of five instruments, highly polished mirrors and advanced housing all enable it to see the universe in visible-to-ultraviolet light in exquisite detail. Its opaque engineering is inaccessible to most but this gap in public knowledge has been compensated many times over by the richness of its observations. In a sense, we no longer concern ourselves with how the telescope works because we have drunk our fill with what it has seen of the universe for us…

Cassini broke this mould by – in its finish – reminding us that it exists. And the abruptness of the mission’s end contributed to this. In contrast, consider the story of the Mars Phoenix lander (August 2007 to May 2010). It helped us understand Mars’s north polar region and the distribution of water ice on the planet. Its landing manoeuvre also helped NASA scientists validate the landing gear and techniques for future missions. However, the mission’s last date has a bit of uncertainty. Phoenix’s last proper signal was sent on November 2, 2008. It was declared lost not on the same day but a week later, when attempts to reestablish contact with Phoenix failed. But the official declaration of ‘mission end’ came only in May 2010, when a NASA satellite’s attempts to reestablish contact failed.

Is it easier to deal with the death of someone because their death came suddenly? Does it matter if their body was found or not? For Phoenix, we have a ‘body’ (a hunk of metal lying dormant near the Martian north pole); for Cassini, we don’t. On the other hand, we don’t have a fixed date of ‘mission end’ for Phoenix but we do for Cassini – down to the last centisecond – and it will be memorialised at NASA one way or another.

Spectacle exacerbates this tendency to memorialise by providing a vivid representation of ‘mission end’ that has been shared by millions of people. Axiomatically, a memorial for Cassini – wherever one emerges – will likely evoke the same memories and emotions in a larger number of people, and all of those people will be living existences made congruent by the shared cognisance and interpretation of the ‘Cassini event’.

However, Phoenix’s ‘mission end’ wasn’t spectacular. The lander – sitting in one place, immobile – slowly faded to nothing. Cassini burnt up over Saturn. Interestingly, both probes experienced similar ‘deaths’ (though I am loth to use that word) in one sense: neither probe knew – the way an ‘I’, or an AI, could – that it was going to its death, but both their instrument suites fought against failing systems all guns blazing. Cassini only got the memorial upper hand because it could actively reorient itself in space (akin to the arms on our bodies) and because it was in an environment it was not designed for at all.

The ultimate effect is for humans to remember Cassini more vividly than they would Phoenix, as well as associate a temporality with that remembrance. Phoenix was a sensor, the nicotine patch for a chain-smoking planet (‘smoking’ being the semantic variable here). Cassini moved around – 2 billion km’s worth – and also completed a complicated sequence of orbits around Saturn in three dimensions in 13 years. Cassini represents more agency, more risk, more of a life – and what better way to realise this anthropomorphisation than as a time-wise progression of events with a common purpose?

We remember Cassini by recalling not one moment in space or time but a sequence of them. That’s what establishes the perfect context for the probe’s identity as a quasi-person. That’s also what shatters the glaze of ignorance crenellated around the object, bringing it unto fixation from transience, unto visibility from the same invisibility that Hubble is currently languishing in.

Featured image credit: nasahqphoto/Flickr, CC BY-NC-ND 2.0.