Starless city

Overheard three people in Delhi:

When you feel the rain fall, you feel the dirt pouring down on you, muck streaking down your face and clothes. It washes down the haze from the skies and you can finally breathe clean air and the sky gets so blue. The dust settles. Some 25 drops of water fell on my car and collected all the dirt in runnels. The next morning, my whole car had brown spots of dirt all over it. But the day after the rain, the Sun is really clear and bright in a nice way. You can finally see the stars at night. Like two or three of them!

Understanding what ‘400 years’ stands for, through telescopes

This is how Leonard Digges described a telescope in 1571:

By concave and convex mirrors of circular [spherical] and parabolic forms, or by paires of them placed at due angles, and using the aid of transparent glasses which may break, or unite, the images produced by the reflection of the mirrors, there may be represented a whole region; also any part of it may be augmented so that a small object may be discerned as plainly as if it were close to the observer, though it may be as far distant as the eye can descrie. (source)

While it’s not clearly known who first invented the telescope – or if such an event even happened – Hans Lippershey is widely credited by historians for having installed two specially crafted lenses in a tube in 1608 “for seeing things far away as if they were nearby” (source). People would describe a telescope this way today as well. But the difference would be that this definition captures much less of the working of a telescope today than one built even a hundred years ago. For example, consider this description of how the CHIME (Canadian Hydrogen Intensity Mapping Experiment) radio telescope works:

To search for FRBs, CHIME will continuously scan 1024 separate points or “beams” on the sky 24/7. Each beam is sampled at 16,000 different frequencies and at a rate of 1000 times per second, corresponding to 130 billion “bits” of data per second to be sifted through in real time. The data are packaged in the X-engine and shipped via a high-speed network to the FRB backend search engine, which is housed in its own 40-foot shipping container under the CHIME telescope. The FRB search backend will consist of 128 compute nodes with over 2500 CPU cores and 32,000 GB of RAM. Each compute node will search eight individual beams for FRBs. Candidate FRBs are then passed to a second stage of processing which combines information from all 1024 beams to determine the location, distance and characteristics of the burst. Once an FRB event has been detected, an automatic alert will be sent, within seconds of the arrival of the burst, to the CHIME team and to the wider astrophysical community allowing for rapid follow up of the burst. (source)

I suppose this is the kind of advancement you’d expect in 400 years. And yes, I’m aware that I’ve compared an optical telescope to a radio telescope, but my point still stands. You’d see similar leaps between optical telescopes from 400 years ago and optical telescopes as they are today. I only picked the example of CHIME because I just found out about it.
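As a quick back-of-the-envelope check on the numbers in that excerpt (a rough sketch; the figure of about 8 bits per sample is my assumption, not something CHIME specifies):

```python
# Rough sanity check of the CHIME FRB data rate quoted above.
beams = 1024                 # separate points ("beams") on the sky
frequencies = 16_000         # frequency channels sampled per beam
samples_per_second = 1000    # sampling rate per channel
bits_per_sample = 8          # assumed, purely for illustration

bits_per_second = beams * frequencies * samples_per_second * bits_per_sample
print(f"{bits_per_second / 1e9:.0f} billion bits per second")  # ~131, i.e. the "130 billion bits" above
```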

Now, while the difference in sophistication is awesome, the detector component of CHIME itself looks like this:

Credit: CHIME Experiment
Credit: CHIME Experiment

The telescope has no moving parts. It will passively scan patches of the sky, record the data and send it for processing. How the recording happens is derived directly from a branch of physics that didn’t exist until the early 20th century: quantum mechanics. And because we had quantum mechanics, we knew what kind of instrument to build to intercept whatever information about the universe we needed. So the data-gathering part itself is not something we’re in awe of. We might have been able to put something resembling the CHIME detector together 50 years ago if someone had wanted us to.

What I think we’re really in awe of is how much data CHIME has been built to gather in unit time and how that data will be processed. In other words, what really makes this leap of four centuries evident is the computing power we have developed. This also means that, going ahead, improving on CHIME will mean improving the detector hardware a little and improving the processing software a lot. (According to the telescope’s website, the computers connected to CHIME will be able to process data with an input rate of 13 TB/s. That’s already massive.)

Making sense of quantum annealing

One of the tougher things about writing and reading about quantum mechanics is keeping up with how the meanings of some words change as they graduate from being used in the realm of classical mechanics – where things are what they look like – to that of the quantum – where we have no idea what the things even are. If we don’t keep up but remain fixated on what a word means in one specific context, then we’re likely to experience a cognitive drag that limits our ability to relearn, and reacquire, some knowledge.

For example, teleportation in the classical sense is the complete disintegration of an individual or object in one location in space and its reappearance in another almost instantaneously. In quantum mechanics, teleportation is almost always used to mean the simultaneous realisation of information at two points in space, not necessarily its transportation.

Another way to look at this: to a so-called classicist, teleportation means to take object A, subject it to process B and so achieve C. But when a quantumist enters the picture, claiming to take object A, subject it to a different process B* and so achieve C – and still calling it teleportation – we’re forced to jettison the involvement of process B or B* from our definition of teleportation. Effectively, teleportation to us goes from being A –> B –> C to being just A –> C.

Alfonso de la Fuente Ruiz, then an engineering student at the Universidad de Burgos, Spain, wrote in a 2011 article:

In some way, all methods for annealing, alloying, tempering or crystallisation are metaphors of nature that try to imitate the way in which the molecules of a metal order themselves when magnetisation occurs, or of a crystal during the phase transition that happens for instance when water freezes or silicon dioxide crystallises after having been previously heated up enough to break its chemical bonds.

So put another way, going from A –> B –> C to A –> C would be us re-understanding a metaphor of nature, and maybe even nature itself.

The thing called annealing has a similar curse upon it. In metallurgy, annealing is the process by which a metal is forced to recrystallise by heating it above its recrystallisation temperature and then letting it cool down. This way, the metal’s internal stresses are removed and the material becomes more ductile and easier to work. Quantum annealing, however, is described by Wikipedia as a “metaheuristic”. A heuristic is any practical technique for finding a good-enough solution to a problem without guaranteeing the best one. A metaheuristic, then, is a higher-level technique that produces or selects heuristics. It is commonly found in the context of computing. What could it have to do with the quantum nature of matter?

To understand whatever is happening first requires us to acknowledge that a lot of what happens in quantum mechanics is simply mathematics. This isn’t always because physicists are dealing with unphysical entities; sometimes it’s because they’re dealing with objects that exist in ways that we can’t even comprehend (such as in extra dimensions) outside the language of mathematics.

So, quantum annealing is a metaheuristic technique that helps physicists, for example, look for one specific kind of solution to a problem that has multiple independent variables and a very large number of ways in which they can influence the state of the system. This is a very broad definition. A specific instance where it could be used is to find the ground state of a system of multiple particles. Each particle’s ground state comes to be when that particle has the lowest energy it can have and still exist. When it is supplied a little more energy, such as by heating, it starts to vibrate and move around. When it is cooled, it loses the extra energy and returns to its ground state.

But in a larger system consisting of more than a few particles, a sense of the system’s ground state doesn’t arise simply by knowing what each particle’s ground state is. It also requires analysing how the particles’ interactions with each other modify their individual and cumulative energies. These calculations are performed using matrices with 2^N rows and 2^N columns if there are N particles. It’s easy to see that the calculations can quickly become mind-boggling: if there are 10 particles, then the matrix is a giant grid with 1,048,576 cells. To avoid this, physicists take recourse to quantum annealing.
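To get a feel for how quickly that blows up, here is a minimal sketch; the scaling is just the 2^N-by-2^N grid described above:

```python
# How the matrix grows with the number of particles N: 2^N rows and columns.
for n in (2, 5, 10, 20):
    dim = 2 ** n
    print(f"N = {n:2d}: {dim:,} x {dim:,} matrix = {dim * dim:,} cells")

# N = 10 gives a 1,024 x 1,024 grid, i.e. the 1,048,576 cells mentioned above.
```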

In the classical metallurgical definition of annealing, a crystal (object A) is heated beyond its recrystallisation temperature (process B) and then cooled (outcome C). Another way to understand this is by saying that for A to transform into C, it must undergo B, and then that B would have to be a process of heating. However, in the quantum realm, there can be more than one way for A to transform into C. A visualisation of the metallurgical annealing process shows how:

The x-axis marks time, the y-axis marks heat, or energy. The journey of the system from A to C means that, as it moves through time, its energy rises and then falls in a certain way. This is because of the system’s constitution as well as the techniques we’re using to manipulate it. However, say the system included a set of other particles (that don’t change its constitution), and that for those particles the journey from A to C required not conventional energising but a different kind of process (B), one that is easier to compute when we’re trying to find C.

These processes actually exist in the quantum realm. One of them is called quantum tunneling. When the system – or let’s say a particle in the system – is going downhill from the peak of the energy mountain (in the graph), sometimes it gets stuck in a valley on the way, akin to the system being mostly in its ground state except in one patch, where a particle or some particles have knotted themselves up in a configuration such that they don’t have the lowest energy possible. This happens when the particle finds an energy level on the way down where it goes, “I’m quite comfortable here. If I’m to keep going down, I will need an energy-kick.” Such states are also called metastable states.

In a classical system, the particle will have to be given some extra energy to move up the energy barrier, and then roll on down to its global ground state. In a quantum system, the particle might be able to tunnel through the energy barrier and emerge on the other side. This is thanks to Heisenberg’s uncertainty principle, which states that a particle’s position and momentum (or velocity) can’t both be known simultaneously with arbitrary accuracy. One consequence of this is that, if we know the particle’s velocity with great certainty, then we can only suspect that the particle will pop up at a given point in spacetime with fractional surety. E.g., “I’m 50% sure that the particle will be in the metastable part of the energy mountain.”

What this also means is that there is a very small, but non-zero, chance that the particle will pop up on the other side of the mountain after having borrowed some energy from its surroundings to tunnel through the barrier.

In most cases, quantum tunneling is understood to be a problem of statistical mechanics. What this means is that it’s not understood at a per-particle level but at the population level. If there are 10 million particles stuck in the metastable valley, and if there is a 1% chance for each particle to tunnel through the barrier and come out the other side, then we might be able to say 1% of the 10 million particles will tunnel; the remaining 99% will be reflected back. There is also a strange energy conservation mechanism at work: the tunnelers will borrow energy from their surroundings and go through while the ones bouncing back will do so at a higher energy than they had when they came in.
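Here is a toy illustration of that population-level picture; the independent 1% per-particle tunnelling probability is assumed purely for illustration and has nothing to do with any real system:

```python
# Not real physics: just the statistics of 10 million independent trials,
# each one succeeding ("tunnelling") with probability 0.01.
import random

N = 10_000_000
p_tunnel = 0.01
tunnelled = sum(random.random() < p_tunnel for _ in range(N))

print(tunnelled)       # close to 100,000, i.e. about 1% of the population
print(N - tunnelled)   # the remaining ~99% are "reflected back"
```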

This means that in a computer that is solving problems by transforming A to C in the quickest way possible, using quantum annealing to make that journey will be orders of magnitude more effective than using metallurgical annealing because more particles will be delivered to their ground state and fewer will be left behind in metastable valleys. The annealing itself is a metaphor: if a piece of metal recalibrates itself during annealing, then a problematic quantum system resolves itself through quantum annealing.

To be a little more technical: quantum annealing is a set of algorithms that introduces new variables into the system (A) so that, with their help, the algorithms can find a shortcut for A to turn into C.

The world’s most famous quantum annealer is the D-Wave system. Ars Technica wrote this about their 2000Q model in January 2017:

Annealing involves a series of magnets that are arranged on a grid. The magnetic field of each magnet influences all the other magnets—together, they flip orientation to arrange themselves to minimize the amount of energy stored in the overall magnetic field. You can use the orientation of the magnets to solve problems by controlling how strongly the magnetic field from each magnet affects all the other magnets.

To obtain a solution, you start with lots of energy so the magnets can flip back and forth easily. As you slowly cool, the flipping magnets settle as the overall field reaches lower and lower energetic states, until you freeze the magnets into the lowest energy state. After that, you read the orientation of each magnet, and that is the solution to the problem. You may not believe me, but this works really well—so well that it’s modeled using ordinary computers (where it is called simulated annealing) to solve a wide variety of problems.
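The simulated annealing mentioned at the end of that excerpt is easy to sketch on an ordinary computer. Below is a minimal toy version on a one-dimensional chain of ‘magnets’ (spins); the couplings, the cooling schedule and the number of spins are all made up for illustration and are not how D-Wave’s hardware is actually configured:

```python
# A toy simulated annealer: spins on a line, random couplings, slow cooling.
import math
import random

N = 16
J = [random.choice([-1.0, 1.0]) for _ in range(N - 1)]   # made-up couplings
spins = [random.choice([-1, 1]) for _ in range(N)]        # random starting orientations

def energy(s):
    # Energy stored in the "field": -sum of J_i * s_i * s_(i+1)
    return -sum(J[i] * s[i] * s[i + 1] for i in range(N - 1))

T = 5.0                          # start hot: spins flip back and forth easily
while T > 0.01:
    for _ in range(200):
        i = random.randrange(N)
        flipped = spins[:i] + [-spins[i]] + spins[i + 1:]
        dE = energy(flipped) - energy(spins)
        # Always accept downhill flips; accept uphill ones with probability exp(-dE/T)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins = flipped
    T *= 0.95                    # cool slowly so the configuration can settle

print(spins, energy(spins))      # the frozen configuration is the "answer"
```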

As the excerpt makes clear, an annealer can be used as a computer if system A is chosen such that it can evolve into different Cs. The more kinds of C there are possible, the more problems that A can be used to solve. For example, D-Wave can find better solutions than classical computers can for problems in aerodynamic modelling using quantum annealing – but it still can’t run Shor’s algorithm, the quantum factoring algorithm that threatens the encryption schemes in wide use today. So the scientists and engineers working on D-Wave will be trying to augment their A such that Shor’s algorithm is also within reach.

Moreover, because of how 2000Q works, the same solution can be the result of different magnetic configurations – perhaps even millions of them. So apart from zeroing in on a solution, the computer must also figure out the different ways in which the solution can be achieved. But because there are so many possibilities, D-Wave must be ‘taught’ to identify some of them, all of them or a sample of them in an unbiased manner.

Such, then, are the problems that people working at the edge of quantum computing have to deal with these days.

(To be clear: the ‘A’ in the 2000Q is not a system of simple particles as much as it is an array of qubits, which I’ll save for a different post.)

Featured image credit: Engin_Akyurt/pixabay.

Gods of nothing

The chances that I’ll ever understand Hollywood filmmakers’ appetite for films based on ancient mythology are low, and even lower are the chances that I’ll find a way to explain the shittiness of what their admiration begets. I just watched Gods of Egypt, which released in 2016, on Netflix. From the first scene, it felt like one of those movies that a bunch of actors participate in (I can’t say if they perform) to have some screen time and, of course, some pay. It’s a jolly conspiracy, a well-planned party with oodles of CGI to make it pleasing on the eyes and the faint hope that you, the viewer, will be distracted from all the vacuity on display.

I’m a sucker for bad cinema because I’ve learnt so much about what makes good cinema good by noticing what it is that gets in the way. However, with Gods of Egypt, I’m not sure where to begin. It’s an abject production that is entirely predictable, entirely devoid of drama or suspense, entirely devoid of a plot worth taking seriously. Not all Egyptian, Greek, Roman, Norse, Celtic and other legends can be described by the “gods battle, mortal is clever, revenge is sweet” paradigm, so it’s baffling that Hollywood reuses it as much as it does. Why? What does it want to put on display?

It surely is neither historical fidelity nor entertainment, and audiences aren’t wowed by much these days unless you pull off an Avatar. There is a glut of Americanisms, including (but not limited to) the habit of wining one’s sorrows away, a certain notion of beauty defined by distinct clothing choices, the sole major black character being killed off midway, sprinklings of American modes of appreciation (such as in the use of embraces, claps, certain words, etc.), and so forth. And in all the respect that they have shown for the shades of Egyptian lore, which is none, what they have chosen to retain is the most (white) American Americanism of all: a self-apotheosising saviour complex, delivered by Geoffrey Rush as the Sun god Ra himself.

 

There seems to be no awareness among scriptwriters angling at mythologies of the profound, moving nuances at play in many of these tales – of, for example, the kind that Bryan Fuller and Michael Green are pulling off for TV based on Neil Gaiman’s book.

There is no ingenuity. In a scene from the film, a series of traps laid before a prized artifact are “meant to lure Horus’s allies to their deaths” – but they are breachable. In another – many others, in fact – a series of barriers erected by the best builders in the world (presumably) are surmounted by a lot of jumping. In yet another, an important character who was strong as well as wily at the beginning relies on just strength towards the end because, as he became supposedly smarter by appropriating the brain of the god of wisdom, he gave himself a breakable suit of armour. Clearly, someone’s holding a really big idiot ball here. It’s even flashing blue-red lights and playing ‘Teenage wasteland’.

Finally, Gods of Egypt makes no attempt even to deliver the base promise that some bad films make – that there will be a trefoil knot of a twist, or a moment of epiphany, or a well-executed scene or two – after which you might just be persuaded to consign your experience to the realm of lomography or art brut. I even liked Clash of the Titans (2010), though it displayed none of these things; it just took itself so seriously.

But no, this is the sort of film that happens when Donald Trump thinks he’s Jean-Michel Basquiat. It is a mistake, an unabashed waste of time that all but drools its caucasian privilege over your face. Seriously, the only black people in the film – apart from the one major guy that dies – are either beating drums or are being saved. Which makes it all the more maddening. Remember Roger Christian’s Battlefield Earth (2000)? It was so bad – but it is still remembered because at the heart of its badness was an honest, if misguided, attempt by its makers to experiment, to exercise their agency as artists. A common complaint about the film is that Christian overused Dutch angles. I would have wept in relief if the Gods of Egypt had done anything like that. Anything at all.

Caste guilt

This is the age of the start-up, not megaliths. Remember T-Rex did not survive evolution. It is unlikely that religions organised like T-Rexes – most of the Abrahamic religions fit this description – will survive an era of fast change.

The T-Rex did evolve in the first place because the evolutionary pathway existed for it to do so, and what it didn’t survive wasn’t evolution but a meteorite strike. What is true is only that T-Rex-like creatures couldn’t re-emerge after the strike, because the evolution of other creatures had moved on and because the world had changed.

The quote above is from a piece on the supposed guilt Hindus have because their spiritual ancestors were the progenitors of casteism in India, by R. Jagannathan in Swarajya. Excerpt:

Put simply, just as it is foolish to blame Africans for giving us AIDS, it is pointless blaming Hinduism for caste, even though this is where it may have originated. Where caste originated should not be a source of perpetual guilt for Hindus. It is now everybody’s problem, not Hinduism’s alone.

I’m not sure if Hindus are blamed because their forefathers did something or because Hindus continue to perpetuate their beliefs, disenfranchise the weaker sections of society and, increasingly today, subject them to majoritarian justice. If anything, I regret that my Hindu forefathers did what they did but I’m certainly neither ashamed nor guilty because of it.

Jagannathan also writes that, like the deras have done to Sikhism, Hinduism should loosen up and allow individual caste groups to function by themselves because this could only benefit the religion.

Hindus are comfortable with caste, and those who want to remain in it should be free to do so. It does not matter if castes become separate religions, retaining only a loose link with Hinduism; it does not matter if groups that are currently identified with Hinduism want to break away, and seek minority, non-Hindu status, as some groups within the Lingayats want to do. If the Ramakrishna Mission wants to be treated as a non-Hindu denomination, why not allow it to do so? It will not actually become less Hindu because of this nomenclature change. In fact, it could become more innovative and grow faster.

From what I’ve understood, one of the biggest ways in which casteism is evil is that it ‘locks in’ its adherents into certain social classes that individuals inherit from generation to generation, and can’t escape easily from. So the only Hindus who “want to remain” within the folds of casteism and who would “be free to do so” are the upper-caste Hindus. This is why the Dera Sacha Sauda flowered – because, to paraphrase Jagannathan, it offered a “casteless” form of Sikhism to Dalits, a ladder to use to climb through social and power structures, increasingly dominated by the upper-caste Jats and Khatris.

In all, the piece is very interesting because of its novel use of metaphors – borrowed from adaptive systems like evolution and capitalism and applied to regressive systems like caste – and because at its heart it seems okay with there being a caste system, just not in the form it’s prevalent at the moment. That’s just wishful thinking because, to those suppressed by their bond with the caste system, being able to live under a more liberal and progressive form of the practice (if such a thing is possible) would be nigh indistinguishable from being liberated altogether.

A flood as an opportunity

There’s a piece by Eric Holthaus, on Politico, that’s been doing the rounds on Twitter since yesterday. I’ll grant you it’s a powerful piece of writing, such as is necessary to cast Hurricane Harvey in what many would call the right light: as the face of climate change. One paragraph in particular I thought was especially effective because it quickly but convincingly explained how Harvey was a storm that’s been many years in the making, and how the intensity of rains it has brought to bear on Houston has been unusual even after accounting for the fact that the city has been battered by three once-in-500-year floods in the last few years.

Harvey is in a class by itself. By the time the storm leaves the region on Wednesday, an estimated 40 to 60 inches of rain will fall on parts of Houston. So much rain has fallen already that the National Weather Service had to add additional colors to its maps to account for the extreme totals. Harvey is infusing new meaning into meteorologists’ favorite superlatives: There are simply no words to describe what has happened in the past few days. In just the first three days since landfall, Harvey has already doubled Houston’s previous record for the wettest month in city history, set during the previous benchmark flood, Tropical Storm Allison in June 2001. For most of the Houston area, in a stable climate, a rainstorm like Harvey is not expected to happen more than once in a millennium.

In fact, Harvey is likely already the worst rainstorm in U.S. history. An initial analysis by John Nielsen-Gammon, the Texas state climatologist, compared Harvey’s rainfall intensity to the worst storms in the most downpour-prone region of the United States, the Gulf Coast. Harvey ranks at the top of the list, with a total rainwater output equivalent to 3.6 times the flow of the Mississippi River. (And this is likely an underestimate, because there’s still two days of rains left.) That much water – 20 trillion gallons over five days – is about one-sixth the volume of Lake Erie. According to a preliminary and informal estimate by disaster economist Kevin Simmons of Austin College, Harvey’s economic toll “will likely exceed Katrina”—the most expensive disaster in U.S. history. Harvey is now the benchmark disaster of record in the United States.

The pronounced “climate change is real” tone to the entire piece is clearly aimed at the Donald Trump government, which has always denied the ‘A’ of AGW and has pushed dangerous policies that many predict will eventually uninstall the US from the forefront of climate change negotiations as well as action. Holthaus’s piece, in this context, succeeds in painting a scary picture of the future by highlighting how much of an exception Harvey appears to be and why its occurrence isn’t one of chance.

Nonetheless, the piece did still make me wonder if the world paid as much attention to the 2015 Tamil Nadu floods as it is paying to Harvey. Sure, Holthaus is writing against the backdrop of an American president who recently said the world’s largest polluter would not abide by the terms of the Paris Agreement, and against the backdrop of a city receiving about 50 inches of rain in less than a week. In contrast, Narendra Modi has been generally accepting of the fact that climate change is real and will require drastic action (although that hasn’t stopped his government from continuing the UPA’s work to weaken institutional environmental protection safeguards or the NITI Aayog from drafting an energy policy that will ensure India remains dependent on fossil fuels until 2040).

Second: unlike Houston, the parts of Tamil Nadu that were wrecked in November-December 2015 were relatively underdeveloped areas rife with illegal constructions and pavements that effectively resulted in those areas being, to use Holthaus’s term, “flood factories”. Thus, 20 inches of rain is likely to be deadlier in the cities of Tamil Nadu than in Houston.

But this doesn’t make it harder to distinguish between the effects of AGW-driven storms in, say, Chennai and the effects of poor urban infrastructure. Our preparedness for the effects of climate change lies both in mitigating the rise of the global average surface temperature and in planning public spaces better and improving the distribution of, and access to, resources. So if Chennai, or any other place, isn’t prepared to handle 20 inches of rain a day, it’s going to get doubly screwed in a world whose surface is (at least) 2° C hotter on average about eight decades from now.

Anyway, the north Indian mainstream media (more widely consumed by far) was mostly apathetic to the plight of Tamil Nadu’s residents during the 2015 floods – just the way the Western media at large has been relatively more apathetic towards Oriental tragedies. I think this resulted in a big opportunity missed by national-level newsrooms to cast the floods as the face of both urban and rural India’s experience with climate change, perhaps even as the face of climate change itself, and use that to underscore the state’s abject underpreparedness – for which successive state governments would have been to blame – and the Narendra Modi government’s two-faced relationship with the demands of climate change. (E.g. accepting them gleefully in some ways – e.g. by the MNRE – but blatantly ignoring them in others – e.g. by the MoEFCC – which I’d argue is more insidious than claiming outright that climate change is codswallop.)

A measure of media trustworthiness

A publication online that makes its money by displaying ads can be profitable even by publishing a slew of bad or offensive articles. That will drive traffic too; people will share its content even if it’s to complain about it. It will also be trustworthy in that people can always trust it to publish a predictable kind of content.

But when the publication stops making its money through ads and pivots to a less quantitative, more qualitative channel of revenue, it can less afford to publish bad content, and even less afford to make money off of it.

You can see how this is a stream: upstream is the publisher publishing the content, midway is the consumer reading and engaging with the content, and downstream is the publisher once again, cashing in on the user’s actions in some way.

Now, if the midway behaviour changes, will there be an upstream effect that is not mediated by the downstream response? I.e., if people stopped sharing bad content because they no longer want to give the article in question any play, will publishers stop putting out bad content irrespective of whether it affects their revenues?

If the answer to this question is yes, then I think that’s what would make (or keep) the publisher trustworthy in an economic environment where private corporations are simply buying publications out instead of fighting them.

Breaking the bargain

A nasty review of P.V. Sindhu’s performance at the World Badminton Championships by Sandip Sarkar for the Hindustan Times:

In the last one year, Sindhu has played 17 matches that stretched to the third game. She won 10 and lost seven. Six of those seven losses have come against players who were unseeded or seeded below her. A top-5 player of the world certainly has to have a better record than that.

Take Spain’s Carolina Marin for example. Just two years older than Sindhu, Marin comes from a country with no recognisable heritage in badminton. But she won gold at the 2014 and 2015 World Championships, following it up with gold at Rio 2016.

Okuhara had come well-prepared for the final. By being drawn into long rallies, Sindhu lost the plot, drained her reserves of energy and finally went down on the big points. PV Sindhu is undoubtedly one of top sportspersons of our country but till she does a Marin or a Okuhara, she will not be pure gold standard.

… all as if to suggest Sindhu was taking the tournament easy and hadn’t done her homework. To be disappointed with a sportsperson’s performance is one thing but to complain that she didn’t do her best is quite another.

Sarkar has failed to keep up the other end of the bargain: if sportspersons are expected to treat journalists with dignity, then journalists must treat sportspersons like the professionals that they are instead of accusing them of wilfully underperforming. I also feel that Sarkar’s assessment is especially out of place because Sindhu won the silver at the championships, in the sort of tight game that could likely have spit out a different outcome if played a second time. So I wholeheartedly agree with Mahesh Bhupathi when he says…

The unclosed clause and other things about commas

The Baffler carried a fantastic critique of The New Yorker‘s use of commas by Kyle Paoletta on August 23. Excerpt:

The magazine’s paper subscription slips have long carried a tagline: “The best writing, anywhere.” It follows that the source of the best writing, anywhere, must also be the finest available authority on grammar, usage, and punctuation. But regular readers know that The New Yorker’s signature is not standard usage, but its opposite. Nowhere else will you find an accent aigu on “élite” or a diaeresis on “reëmerge.” And the commas—goodness, the commas! These peculiarities are as intrinsic to the magazine’s brand as the foppish Eustace Tilley, and, in the digital age, brand determines content. But the rise of the magazine’s copy desk has done more for The New Yorker than simply generate clicks. It has bolstered the reputation of the magazine as a peerless institution, a class above the Vanity Fairs and Economists of the world, even if the reporting and prose in those publications is on par with (if not often better than) what fills the pages of The New Yorker.

Paoletta’s piece was all the more enjoyable because it touched on all the little notes about commas that most people usually miss. In one example, he discusses the purpose of commas, split as they are between subordination and rhythm. The former is called so because it “subordinates” content to the grammatical superstructure applied to it. Case in point: a pair of commas is used to demarcate a dependent clause – whether or not it affects the rhythm of the sentence. On the other hand, the rhythmic purpose denotes the use of commas and periods for “varying amounts of breath”. Of course, Paoletta doesn’t take kindly to the subordination position.

Not only does this attitude treat the reader as somewhat dim, it allows the copy editor to establish a position of privilege over the writer. Later in the same excerpt, [Mary] Norris frets over whether or not some of James Salter’s signature descriptive formulations (a “stunning, wide smile,” a “thin, burgundy dress”) rely on misused commas. When she solicits an explanation, he answers, “I sometimes ignore the rules about commas… Punctuation is for clarity and also emphasis, but I also feel that, if the writing warrants it, punctuation can contribute to the music and rhythm of the sentences.” Norris begrudgingly accepts this defense, but apparently only because a writer of no lesser stature than Salter is making it.

I’m firmly on the subordination side of things: more than indicating pause, commas are scaffolding for grammar, and thus essential to conveying various gradations of meaning. Using a comma to enforce a pause, or invoke an emphasis, is also meaningless because pauses must originate not out of the writer’s sense of anticipation and surprise but out of the clever arrangement of words and sentences, out of the use of commas to suppress some senses and enhance others. It is not the explicit purpose of written communication to also dictate how it should be performed.

In the same vein, I’m aware that using the diaeresis in words like ‘reemerge’ is also a form of control expressed over the performance of language, and one capable of assuming political overtones in some contexts. For example, English is one of India’s official languages, the one used for all official documentation and almost all purposes of identification. However, English is also the tongue of colonialists. As a result, its speakers in India are those who (a) have been able to afford education in a good school, (b) have enjoyed a social standing that, in the pre-Independence period, brought them favours from the British, (c) by virtue of pronouncing some words this way or that, have had access to British or American societies, or some combination of these. So drumming into the reader that this, precisely, is how a word ought to be pronounced could easily be The New Yorker wielding a colonial cudgel over the heads of “no speak English” ‘natives’.

That said, debating the purpose of commas from the PoV of The New Yorker is one thing. Holding the same debate from the PoV of most English-language newspapers and magazines in the Indian mainstream media is quite another. The comma, in this sphere, is given to establishing rhythm for an overwhelming majority of writers and copy-editors, even though what we’re taught in school is only the use of commas for – as Paoletta put it – subordination. A common mistake that arises out of this position is that, more often than you’d like, clauses are not closed. Here’s an example from The Wire:

Gertrud Scholtz-Klink, described by Hitler as the “perfect Nazi woman” was held in check by male colleagues when she proposed that female members be awarded similar titles to the males.

There ought to be a comma after woman” and before was but there isn’t. This comma would be the terminal counterpart to the one that flagged off the start of the dependent clause (described by Hitler as…). Without it, what we have are two dependent clauses demarcated by one comma and no independent clauses – which there ought to be considering we’re looking at what happens to be a full and complete sentence.

The second most common type of comma-related mistake goes something like the following sentence (picked up from the terms of service of Authory.com):

You are responsible for the content, that you make available via Authory.

What the fuck is that comma even doing there? Does the author really think we ought to pause between “content” and “that”? While Salter, it would seem, used the comma to construct a healthy sense of rhythm, Authory – and hundreds of writers around the world – mortgage punctuation to build the syntactic versions of dubstep. This issue also highlights the danger in letting commas denote rhythm alone: rhythm is subjective, and ordering the words in sentences using subjective rules cannot ever make for a consistent reading experience. On the other hand, using commas according to an objective ruleset would help achieve what Paoletta writes is the overarching purpose of style:

[Style], unlike usage, has no widely agreed upon correct answers. It is useful only insofar as it enforces consistency. Style makes unimportant decisions so that writers don’t have to—about whether to spell the element “sulfur” or “sulphur,” or if it’s best to italicize the names of films or put them in quotes. It is not meant to be noticed: it is meant to remove the possibility of an inconsistency distracting the reader from experiencing the text as the writer intends.


Here again, of course, I’m not about to let many Indian copy-editors and writers off the hook. Paoletta cites Norris’s defence of the following paragraph as an example of style enforcement gone overboard:

Strait prefers to give his audience as few distractions as possible: he likes to play on a stage in the center of the arena floor, with four microphones arranged like compass points; every two songs, he moves, counterclockwise, to the next microphone, so that people in each quadrant of the crowd can feel as if he were singing just to them.

Compare this aberration to nothing short of the outright misshapenness that was an oped penned by Gopalkrishna Gandhi for The Hindu in May 2014. Excerpt:

In invoking unity and stability, you have regularly turned to the name and stature of Sardar Vallabhbhai Patel. The Sardar, as you would know, chaired the Constituent Assembly’s Committee on Minorities. If the Constitution of India gives crucial guarantees — educational, cultural and religious — to India’s minorities, Sardar Patel has to be thanked, as do other members of that committee, in particular Rajkumari Amrit Kaur, the Christian daughter of Sikh Kapurthala. Adopt, in toto, Mr. Modi, not adapt or modify, dilute or tinker with, the vision of the Constitution on the minorities. You may like to read what the indomitable Sardar said in that committee. Why is there, in so many, so much fear, that they dare not voice their fears?

A criticism of the oped along these lines that appeared on the pages of this blog elicited a cocky, but well-meaning, repartee from Gandhi:

Absolutely delighted and want to tell him that I find his comment as refreshing as a shower in lavender for it cures me almost if not fully of my old old habit of taking myself too seriously and writing as if I am meant to change the world and also that I will be very watchful about not enforcing any pauses through commas and under no circumstances on pain of ostracism for that worst of all effects namely dramatic effect and will assiduously follow the near zero comma if not a zero comma rule and that I would greatly value a meet up and a chat discussing pernicious punctuation and other evils.

It is for very similar reasons that I can’t wait for my copy of Solar Bones to be delivered.

Featured image: An extratropical cyclone over the US midwest, shaped like a comma. Credit: gsfc/Flickr, CC BY 2.0.

Talking scicomm at NCBS – II

I was invited to speak to the students of the annual science writing workshop conducted at the National Centre for Biological Sciences, Bangalore, for the second year (first year talk’s notes here).

Some interesting things we discussed:

1. Business of journalism: There were more questions from this year’s batch of aspiring science writers about the economics of online journalism, how news websites grow, how much writers can expect to get paid, etc. This is heartening: more journalists at all levels should be aware of, and if possible involved in, how their newsrooms make their money. Because if you retreat from this space, you cede space to a capitalist who doesn’t acknowledge the principles and purpose of journalism to take over. If money has to make its way into the hands of journalists – as it should, for all the work that they’re doing – then only journalists can ensure that it’s clean.

2. Conflicts of interest: The Wire has more to lose through conflicts of interest in a story simply because there are more people out there looking to bring it down. So the cost of slipping up is high. But there’s no disagreeing that being diligent on this front always makes for a better report.

3. Formulae: There is no formula for a good science story. A story itself is good when it is composed of good writing and when it is a good story in the same way we think of good stories in fiction. They need to entertain without betraying the spirit of their subject, and – unlike in fiction – they need to seek out the truth. That they also need to be in the public interest is something I’m not sure about, although only to the extent that it doesn’t compromise the rights of any other actor. This definition is indeed vague but only because the ‘public interest’ is a shape-shifting entity. For example, two scholars having an undignified fight over some issue in the public domain wouldn’t be in the public interest – and I would deem it unfit for publication for just that reason. At the same time, astronomers discovering a weird star billions of lightyears away may not be in the public interest either – but that wouldn’t be enough reason to disqualify the story. In fact, a deeper point: when the inculcation of scientific temper, adherence to the scientific method and financial support for the performance of science are all deemed to not be in the public interest, then covering these aspects of science by the same yardstick will only give rise to meaningless stories.

4. Representation of authority: If two scientists in the same institute are the only two people working on a new idea, and if one of them has published a paper, can you rely on the other’s opinions of it? I wouldn’t – they’re both paid by the same institution, and it is in both their interests to uphold the stature of the institution and all the work that it supports because, as a result, their individual statures are upheld. Thankfully, this situation hasn’t come to be – but something similar has. Most science journalists in the country frequently quote scientists from Bangalorean universities on topics like molecular biology and ecology because they’re the most talkative. However, the price they’re quietly paying for this is that they establish that the only scientists in the country worth talking to about these topics are based out of Bangalore. That is an injustice.

5. Content is still king: You can deck up a page with the best visuals, but if the content is crap, then nothing will save the package from flopping. You can also package great content in a lousy-looking page and it will still do well. This came up in the context of a discussion on emulating the likes of Nautilus and Quanta in India. The stories on their pages read so well because they are good stories, not because they’re accompanied by cool illustrations. This said, it’s also important to remember that illustrations cost quite a bit of money, so when the success of a package is mostly in the hands of the content itself, paying attention to that alone during a cash-crunch may not be a bad idea.