The Keeper of Words

It had to come to this at some point, and here we are finally.

When I’ve mentioned to friends and colleagues that I’m undertaking a challenge to write one blog post a day, all of their responses have presumed that this is a difficult thing to do. It surely seems difficult to explore one new idea every day and write about it, but I suggest you look at the bigger picture instead: it’s not at all implausible or difficult for any of us to come up with 365 ideas in a year.

We generate dozens over drinks with a few good friends on Saturday evenings and toss them by the conversational wayside as impractical or outlandish. They’re ideas nonetheless, and are worth writing about in some form.

So it’s quite easy, especially once you get in the groove, to write one blog post a day. What is not easy about this exercise… rather, the true face of the challenge presents itself to you on that one day when you have no ideas to write about. Anybody can write a post when there’s an idea waiting to be written about, and there are always ideas. The undertaking is a real, actual challenge on that one day – the first day – when you’re forced to confront its fundamental essence, its eigenvalue: the writing.

Good writing is the soul of any story, good or bad, strange or charming, real or fictional. Like an organism, that soul is contained within a body defined by various elements of the story, a sinew of ideas coalescing over the fluid form of language to give it definite shape and, when well executed, an empathetic purpose.

The first time you’re brought face to face with a body that has had its flesh and blood and bones stripped away, it could feel as if you see nothing – a debris field with a blankness at its heart. However, a story is never so forgiving. You can claw away at the material at the interface between words and grammar on one side and the reader’s mind on the other, but you will never be left with nothing. The words will still be there, bared naked.

On the day, especially the first day, when ideas have deserted you, the only way to survive is to face the words you have birthed, put them gently down one after another, and try not to be the first to blink.

On this day, you must write for writing’s sake, you must write without syntactic hold or grasp, you must write to judge yourself more harshly than you ever have.

You must write without fear, you must write even though purpose fights to get away in the middle of every sentence, you must write to prove that – if nothing else – you are the keeper of words.

You must write.

The three times intermediate-mass black holes were first discovered

There’s a report in Science dated June 8 with the headline ‘Middleweight black holes found at last’. The abstract describes an effort by an “international team” of astronomers to find intermediate-class black holes, which weigh more than tens of solar masses but less than millions of solar masses.

This is a bit confusing because labels in the natural and social sciences have fixed and well-defined meanings. So “intermediate-class” means something specific. When the first LIGO announcement of a black hole merger was made in February 2016, Karan Jani, a member of the LIGO Scientific Collaboration, told me that “intermediate-class” black holes weighing 20-10,000 solar masses generate the loudest signals in the instruments.

Doesn’t this mean black holes fitting the label intermediate-class were found in 2016 itself?

Further, this isn’t even the second time the discovery of intermediate-class black holes has been announced. In February 2017, astronomers from the Harvard-Smithsonian Centre for Astrophysics announced that they’d found a black hole weighing ~2,300 solar masses at the centre of a globular cluster called 47 Tucanae.

Another thing that concerns me here is how the data is being sliced. You have astronomers claiming discoveries every day and, given a recent spate of articles about the neutron-star merger discovery and the BICEP2 ‘cosmic blunder’, you know astronomy and cosmology are ultra-competitive realms of scientific endeavour. As a result, there is a real risk that someone out there will claim to have discovered something that is not really significant at all.

For example, it would seem intermediate-class black holes have been discovered thrice (by LIGO, at 47 Tucanae and when the “international team” mentioned above found them at the centres of 300 galaxies). At this rate, it’s quite possible this kind of black hole has been discovered even more times. Which one was the first? Or is the ‘intermediate class’ itself going to be cut up into three or more parts to accommodate all these claims? Additionally, if you’re wary about using the term ‘intermediate-class’, you should know that the term ‘middleweight’ also has precedent.

A press release from the Harvard-Smithsonian Centre for Astrophysics in February 2017 had this line:

Astronomers expect that intermediate-mass black holes weighing 100 – 10,000 Suns also exist, but so far no conclusive proof of such middleweights has been found.

Lee Billings, a science writer for the Scientific American, wrote in June 2017:

Most of the black holes in LIGO’s mergers have been middleweights, being heavier than that 20–solar mass limit but much lighter than the supermassive variety, raising questions about their origins and relationship to the two well-studied populations of black holes. (emphasis added)

It’s likely that in both these cases the authors are using the term ‘middleweight’ to refer to the intermediate class in a non-technical way – but then so is the Science article.
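
To make the muddle concrete, here’s a toy sketch (in Python) that checks the reported masses against the boundaries quoted above. The ~62-solar-mass figure for the black hole left behind by the first LIGO merger is the widely reported approximate value; everything else is taken from this post, so treat this as an illustration of the definitional overlap, not anybody’s official classification.

# Toy sketch: which reported black holes count as 'intermediate' under which
# of the mass ranges quoted in this post? (All masses in solar masses.)
definitions = {
    "Jani/LIGO (2016): 20-10,000": (20, 10_000),
    "CfA release (2017): 100-10,000": (100, 10_000),
    "Science abstract (2018): thousands-millions": (1_000, 1_000_000),
}

reported = {
    "GW150914 remnant (LIGO, 2016)": 62,   # widely reported approximate value
    "47 Tucanae candidate (2017)": 2_300,
}

for name, mass in reported.items():
    fits = [d for d, (lo, hi) in definitions.items() if lo <= mass <= hi]
    print(f"{name}, ~{mass} solar masses -> {fits if fits else 'fits none'}")

Run it and the 47 Tucanae candidate qualifies under all three definitions while the LIGO remnant qualifies under only the first – exactly the kind of ambiguity that lets three different groups stake a claim to a ‘first’.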

Everybody remembers the infamous case of Brian Wansink, who tortured his data and sliced his results into smaller and smaller pieces that he published as separate papers. Nobody wants that sort of thing to happen because what Wansink did wasn’t science; he was simply hacking the system for personal gain. However, if we can’t reach consensus on what intermediate-class really means and, following that, when the first black hole of this class was found, we might never see an end to scientific papers and research groups claiming that their authors/members have discovered a new class of something.

There are a few issues that could keep such consensus from being reached; I can think of three.

1. The intermediate-class comprises four orders of magnitude: hundreds, thousands, tens of thousands and hundreds of thousands of solar masses. I’m no astronomer but this still sounds like a very wide swath to homogenise, especially if professional astronomers are going to claim the discovery of a black hole of each order of magnitude is significant. (E.g. the abstract of the latest discovery assumes intermediate-class black holes weigh thousands to millions of solar masses, excluding the hundreds.)

2. LIGO used gravitational waves to spot middleweight black holes; the 47 Tucanae group used radio and X-ray data from large observational studies; the “international team” used galaxy spectra data collected in the archives. Since each group has used a different technique to find what they have, does each effort get to stake its own, distinct claim to primacy?

3. The Wansink outcome – astronomers slicing the data really thin on the (reasonable) assumption that differences between one supernova and the next are the result of distinct processes rather than a common process with stochastic fluctuations. Of course, we’ll have no way to tell until much later, after we’ve made many more observations, whether the way we’re slicing the data in 2018 actually makes sense. But in the same vein, there should be a way to tell if, based on observations made in the last few decades, we’re classifying black holes right.

(To be clear, while the first and second issues described could imply that astronomers are doing something wrong, the third doesn’t; it’s just something to consider.)

The Higgs boson and the top quark

There were two developments in the news last week that were very important but at the same time didn’t get mainstream attention: Microsoft acquiring GitHub and the LHC collaboration’s measurement of the strength of the Higgs boson/top quark interaction.

Before either of these developments could be pushed onto the front page (or equivalent), what people could really have used was a “why does it matter” kinda piece. Paul Ford at Bloomberg had just such a piece explaining why it would be nice if more of us gave a damn about GitHub’s future. But I couldn’t find the equivalent for the top quark announcement.

To me, the biggest reason to give a damn about the ATLAS and CMS detectors on the LHC measuring the strength of the interaction between the Higgs boson and the top quark is very simple. The Higgs boson, rather the Higgs mechanism in which it participates, is what gives a fundamental particle its mass. The more strongly the boson couples with a particle, the higher the particle’s mass.

The top quark is the heaviest known fundamental particle, which means the Higgs boson couples to it the most strongly. It weighs ~172 GeV/c², about 1.4-times the mass of the Higgs boson itself, about 183-times the mass of a proton and almost the same as an entire atom of tungsten. If the fundamental particles were all the Angry Birds, the top quark would be Terence.
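
Those comparisons are easy to sanity-check. Here’s a quick back-of-the-envelope sketch in Python, using widely quoted approximate masses rather than numbers from the new measurement:

# Back-of-the-envelope check of the mass comparisons above, using widely
# quoted approximate values in GeV/c^2 (not figures from the ATLAS/CMS result).
GEV_PER_U = 0.9315                  # 1 atomic mass unit ~ 931.5 MeV/c^2

top_quark = 172.0                   # approximate
higgs = 125.0                       # approximate
proton = 0.938
tungsten_atom = 183.84 * GEV_PER_U  # one tungsten atom, ~171 GeV/c^2

print(f"top/Higgs    ~ {top_quark / higgs:.2f}")          # ~1.38
print(f"top/proton   ~ {top_quark / proton:.0f}")         # ~183
print(f"top/tungsten ~ {top_quark / tungsten_atom:.2f}")  # ~1.00, i.e. about one W atom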

So by studying the strength and the nature of this coupling, physicists can learn more about both particles as well as the peculiarities of the Higgs mechanism. Additionally, of the six types of quarks, only the top quark is known to never hadronise – i.e. come together with other quarks to form a heavier composite particle – because it decays before it gets the chance. Protons and neutrons are each called a hadron because they’re made up of up and down quarks. The charm, strange and bottom quarks also hadronise.

In fact, all the other quarks can be (indirectly) observed only in the presence of other quarks, leaving the top quark to be the sole ‘bare’ quark in nature – further entrenching it as an object of interest among particle physicists.

Further refining what we know about the top quark and the Higgs boson also helps physicists decide what colliders of the future should be able to do and what questions they should be able to answer. And the sooner they know, the better: particle accelerators and colliders are very hard to build and can take many years, so it pays to keep an eye on the ball at all times instead of regretting a missing capability later.

‘DEWEY DEFEATS TRUMAN’

Until a few hours ago, I thought Harry S. Truman had been one of the worst-performing American presidents of all time. I was wrong. I’d spotted an infographic on Twitter, drawn up by FiveThirtyEight, about how Donald Trump might soon beat Truman for the lowest approval rating of any sitting president.

However, go through Truman’s Wikipedia page and you’ll see that though his rating dipped to 22% (the joint-lowest with Richard Nixon in 1974) when he fired Douglas MacArthur as commander of US forces in the Korean War in 1951, he’s ranked among the top 10 greatest American presidents of all time. The biggest reasons: he desegregated the armed forces and federal offices, oversaw the founding of the United Nations, executed the Berlin Airlift, enacted the Marshall Plan and shepherded the American economy from war to peacetime.

The effect of these successes on his public image is nowhere more apparent than in the 1948 presidential elections, which he was widely expected to lose but won with near-thumping margins, especially in the eastern and southern states. All stories of this sort – including Trump’s in 2016 – feature a part or whole of the mainstream press covering the wrong stories, missing the bigger picture and generally making predictions that it sticks to even in the face of opposing evidence.

In 1948, the press’s being caught by surprise was exemplified by a headline printed by the Chicago Daily Tribune (today the Chicago Tribune) proclaiming “DEWEY DEFEATS TRUMAN”, of which 150,000 copies were actually sold on newsstands. When Truman’s victory was declared, a famous photograph showed him beaming at a crowd, holding up the edition, to the Tribune’s eternal embarrassment.

How was this edition even sent to print? It seems that more than editorial presumptuousness and prejudice, the chief conspirators were a staff strike and technology.

The Taft-Hartley Act of 1947, which Truman had ironically vetoed but Congress had passed over his veto, clamped down on labour unions and restricted labour actions. In response, the linotype printers of the Tribune had called a strike and were absent on the day before the election results were due. Around the same time, according to Lloyd Wendt, who profiled the Tribune for a 1979 book, the paper had switched to a new printing workflow: copies were composed on typewriters, photographed and finally engraved onto printing plates.

The result was that the paper required a lead time of “several hours” to be prepared before printing could begin – in turn forcing its editors to assume the outcome of the 1948 elections before the official word was out. Of course, that the Tribune had thought Dewey would win couldn’t be pegged on the Taft-Hartley Act and/or technology exclusively; there were many reasons for that, including the failure of American media at large to capture the popularity of Truman’s ‘whistle stop’ train tour. However, the workers’ strike and the technology in use conspired to preserve the Tribune‘s surprise, and subsequent embarrassment, forever.

According to a website called Chicagology, run by a man named Terry Gregory, a self-proclaimed “Chicagologist”, the Tribune‘s typesetting team once boasted that it was “more flexible as to schedule than any other paper” – a confidence born out of the leadership of one Leo Loewenberg, a seemingly noted printer and member of the newspaper’s composing room for 45 years.

Although such efficiency contradicts what happened on November 2, 1948, the cumbersome workflow was in place because the Tribune was transitioning between two prominent printing technologies of the time: from linotype to phototype. While linotype required the use of heated metal blocks to transfer ink onto paper, phototypesetting essentially photographed text from a magnetic drive onto paper.

Even though phototypesetting is clearly faster, the Tribune hadn’t yet got up to speed when Dewey and Truman were locking horns. Together with the shortage of the very staff that might have allowed its journalists to wait until closer to the official announcement, it was forced to call the result early. And it called it wrong.

Any journalist will tell you that these things happen. Though newspapers seldom screw up election result headlines these days, the nature of blunders has changed in keeping with the prevailing technology. The flexibility that Loewenberg boasted of in the early 20th century is, almost a hundred years later, so magnified as to be a near-meaningless consideration in the digitised newsroom.

However, what the Tribune did, rather accomplished, wasn’t any blunder. It was an inadvertent memorialisation of the constant reminders journalists everywhere seem to need to never presume familiarity with electoral politics.

Featured image credit: Bank Phrom/Unsplash.

In defence of world-building, from Ikea

Whenever I think of world-building – as in the fantasy exercise where you build out the lay of the land, and then the land itself where you’re going to situate your story – the first thing that comes to mind, of all things, is Ikea.

Yes, the Swedish furniture brand. You’re probably thinking that I think world-building is something like putting Ikea furniture together, but that’s not what I’m thinking. I think of Ikea first when I think of world-building because of my first visit to an Ikea store, which was in Stockholm in 2009. It was the Ikea HQ, a large cuboidal building that looks more monolithic than its interiors actually are.

Excluding fire exits, the store has one point of entry and one point of exit. It was designed this way, I was told, to force all visitors to walk all the way through the building, through every floor and every section on display, before exiting. By maximising the amount of time spent inside the store, Ikea wanted to maximise sales: every visitor would have to take a look at the dazzling variety of interior decoration options, and have little opportunity to chicken out of a purchase. The Ikea showroom in Dubai Festival City (IIRC) that I visited later that year is designed the same way. (I went there for the amazing breakfast buffet they have over the weekend: AED 3 for all you can guzzle/gorge.)

I’m a poor writer of fantasy, or of fiction in general. The only thing I ever wrote that experienced a feeble measure of success was a story called ‘The Sea’. It was published in one edition of a magazine produced by Them Pretentious Basterds, a Chennai-based writers’ group I was a part of. ‘The Sea’, you’ll see, is so sparse with details, it’s almost as if I was afraid of taking on something only to lose control. And this would be true. The other fantasy stories I have been comfortable writing were almost all smaller vignettes from a D&D universe that Thomas Manuel created for a campaign called ‘Taxmen: High Risk Unit’.

World-building to me has always been about building an Ikea store and then sticking myself, the writer, in it, forcing me to plot my way through and emerge alive at the end. To me, the exercise of writing fantasy begins in earnest with the world-building because this is where I’m already plotting to ambush myself, my plot and my characters. The fantasy world – to me – is the world that I’m going to experience, not the world whose fate I’m going to script. Once built, this world is set in stone as far as I’m concerned, and from there I simply live it out and write down what I’m seeing, sensing, feeling to create what others read as the story.

It’s not the ‘great clomping foot of nerdism’, as M. John Harrison called it. Instead, world-building is an exercise in gaming. The best games, especially of the video variety, give you control even as they keep you from flying off the handle in terms of your in-game destiny. The stories of the best games are not the product of a choice between mechanical decision-making (e.g. offering you multiple choices and then taking the narrative along whatever you’ve decided to do) and glorious visuals. Instead, they mimic life, forcing you to make the same choices in-game that the characters of celebrated works of literature do.

World-building would be the great clomping foot of nerdism if the world is expected to justify your entire experience of the game, or the story. To persist with Harrison’s view, world-building does “literalise the urge to invent” but it pays to ask what exactly is being invented. If you’re able to build a world whose physical, cultural and historical dynamics are together able to embody great stories – stories that force their writers to play games with themselves as they navigate fragments of their own creation – then world-building would have far outclassed the more insular view of the exercise Harrison seems to harbour in his exposition.

I realise Harrison and those who agree with him would’ve come to their conclusions because world-building as I use it is exceedingly difficult to shape and manoeuvre, and would be a horrible prescription for young fantasy writers such as myself. My defence of world-building is mine alone, and I don’t express it to rebut or rebuke Harrison. It’s the only way I can engage with fantasy. This is why, when I criticise films or books, I struggle to make sense of what the author or script-writer could’ve done better to make the product more engaging. Instead, I think to myself, “I’ve just witnessed a story unfold in this make-believe world, and it’s a so-so story”; my judgment ends there.

Ikea, in much the same way, has designed its Stockholm and Dubai stores to tell a story. If the story falls flat, I can’t blame the world because it is what it is. I can’t blame the story either because that’s what the world has engendered. The world, to me, is sacrosanct because, in my conception, it doesn’t exist to please. It just is, and I, the traveller, maybe even the trespasser, need to deal with that by myself.

Featured image credit: Saide Serna Marcial/Unsplash.

The proximity rule

In the morning I managed to read a few pages of Karunanidhi: A Life in Politics, a new book by Sandhya Ravishankar about the DMK supremo, and noticed that even though it’s a journalist’s book – and even though Mukund Padmanabhan’s verbose foreword points that out – Ravishankar has written it quite well (note: I’m only about 50 pages in). I don’t mean she’s written it ‘good’, I mean she’s written it ‘well’.

Now, it might seem like I’m suggesting Ravishankar doesn’t usually write well or that journalists in general aren’t the best writers of book-level long-form. The former isn’t the case but the latter certainly is: journalists (by whom I mostly mean reporters) suck at writing. The best of them are the best because of the stories they’re able to get, not because they’ve mastered prose. And those that write well among them have honed the craft over many years. There are exceptions to this quasi-rule of mine, of course (Sowmiya Ashok and Pheroze Vincent come to mind) but they are few and far between.

In this sense, I say Ravishankar writes well because… well, it shows. I stopped reading the book as a reader and switched on my editor sense when I noticed that she had done something on page 14 that a regular reporter would almost never do but a regular writer definitely would. On this page, she quotes the noted author V. Geetha for the first time. However, Ravishankar doesn’t tell us everything about Geetha that would qualify her as a pertinent and important expert in this context. Instead, Ravishankar follows what I call ‘the proximity rule’.

I’m sure you’re thinking I’m a pompous arse, especially if what I’m about to say sounds familiar, even pithy. I once read somewhere that definitions make the most sense when you place them narratively proximate to the words they’re defining. For (a very convenient) example, if an expert you’re quoting uses a technical term, then it helps to interrupt the quote at that point to insert the definition and then bring on the rest, instead of letting the quote finish, particularly if it’s long.

To illustrate:

“Although the Higgs field is non-zero everywhere” – a way of saying it has some potential energy wherever it manifests – “and its effects ubiquitous, proving its existence was far from easy. In principle, it can be proved to exist by detecting its excitations, which manifest as Higgs particles (the Higgs boson), but these are extremely difficult to produce and to detect. The importance of this fundamental question led to a 40-year search, and the construction of one of the world’s most expensive and complex experimental facilities to date, CERN’s Large Hadron Collider, in an attempt to create Higgs bosons and other particles for observation and study.”

… is better than to say:

“Although the Higgs field is non-zero everywhere and its effects ubiquitous, proving its existence was far from easy. In principle, it can be proved to exist by detecting its excitations, which manifest as Higgs particles (the Higgs boson), but these are extremely difficult to produce and to detect. The importance of this fundamental question led to a 40-year search, and the construction of one of the world’s most expensive and complex experimental facilities to date, CERN’s Large Hadron Collider, in an attempt to create Higgs bosons and other particles for observation and study.” A ‘non-zero’ field has some potential energy wherever it goes.

(Source of text: the Higgs boson’s Wikipedia page)

Of course, like all rules, this one’s not set in stone – especially when the writer has built up a flow so strong, so good, that you want to preserve its continuity even over clarity of language. (Sadly, Thomas Pynchon often believes there’s flow where there’s none.)

Over pages 14 and 15, Ravishankar quotes Geetha thrice. The first time, she introduces Geetha as an “expert on the Dravidian movement”. This is because her quote is somewhat generic: “The personal appeal (of leaders such as Annadurai and Karunanidhi) cut across caste lines and drew the non-dominant communities to the DMK as well.” The second time, Ravishankar doesn’t embellish Geetha any further.

The third time, Ravishankar reintroduces Geetha as the author of Suyamariyadhai Samadharmam, a celebrated book on the politics and philosophy of the Dravidian movement. And this time, Geetha’s quote is also more specific, discussing how the DMK built support for itself among the backward castes of Tamil Nadu in the 1950s and 1960s but managed to exclude Dalits.

When a science writer quotes a scientist in a story, it’s important that the writer tells her readers what it is that the scientist does. Their designation as such – professor, researcher, RA/TA, etc. – doesn’t matter; I ask the writer to mention what it is they’re actually studying, whether they’re biologists or chemists or whatever. This is because it’s not the former that establishes authority; it’s the latter. Further, the former’s claim to authority often tends to be false, especially when presented in the wrong context.

Another example from this morning is a piece in Quanta by Robbert Dijkgraaf – tenured professor at the University of Amsterdam; Leon Levy professor at, and director of, the Institute for Advanced Study, Princeton; member of the Royal Netherlands Academy of Arts and Sciences; Knight of the Order of the Netherlands Lion; and fellow of the American Mathematical Society – in which he has attempted to provide a post hoc justification for string theory’s legitimacy as a theory of nature. However, all that matters here is that he’s just a fucking string theorist – which you’re reminded of when you read Peter Woit shredding Dijkgraaf in his usual style.

In the same vein, in Geetha’s case, Ravishankar has smartly split up her qualifications into two contexts, and presented just the right parts each time. I say “just right” because Ravishankar’s words essentially follow the proximity rule.

On page 14, Geetha is an author and expert on the Dravidian movement (the way I can be an ‘expert’ in high-energy physics), and so the relevant part of her qualification is placed close by. Then, on page 15, she’s the author of a famous book about the DMK’s formative years, and so she gets to speak about something very specific and rest easy that she will be taken seriously. However, if Suyamariyadhai Samadharmam had been introduced on page 14, it would’ve been overkill for what Geetha was saying and would also have robbed her of the reader’s awe on page 15.

One big thing v. many small things

On average, people don’t read as many books as they used to because what we want to read has become available in more forms. We read articles on the web, tweets and posts on social media, emails, WhatsApp notes and whatnot. This is why, while reading a book is still considered a unique experience, we don’t deserve as much derision as we get. We’re still reading a lot, just not in books.

Now, what if we applied the same line of reasoning to writing? Do people write just as much as they used to? If anything, the average person probably writes more these days, because a greater share of interpersonal communication is textual: we spend more time composing WhatsApp notes, emails, tweets and posts on social media and, of course, commentaries and blog posts on the internet.

If we’re reading and writing just as much, if not more, then what have we lost?

I think we’ve lost, or at least we’re losing, the ability to read or write one big thing even though we can read or write many small things. The average reader of the 21st century has a famously low attention span, while the average writer – a.k.a. the armchair commentator on Facebook and Twitter – is adept at composing quick-fire opinions and bite-sized posts.

This is not a problem that technology can fix because the technological solutions already exist. Instead, it’s a question of adoption, of people moving as a society towards a slower yet more thoughtful mode of information dissemination. How can this be made to happen?

A community-driven scicomm effort

The National Council of Science and Technology Communication, under the Department of Science and Technology (DST), has floated a new initiative to promote science communication by researchers, particularly PhD students and postdoctoral fellows. It’s being called the ‘Augmenting Writing Skills for Articulating Research’ (AWSAR). According to The Hindu, the DST will award the 100 best articles by PhD students Rs 1 lakh each and a certificate, and the 20 best articles by postdoctoral fellows Rs 10,000 each and a certificate, every year under AWSAR’s banner.

The terms of the scheme appear to have changed since it was first announced in January this year, in the form of a larger corpus of funds being made available to disburse.

AWSAR is heartening news on two fronts. First, it means that the supply of informed science writing is going to increase. Second, if the initiative is well-received among researchers, especially mentors who can encourage their students to write about their work and help them secure the resources to do so, then it can only be good for the future of science communication in India.

Writing is a craft that has presented unique rewards and insights to its practitioners, and the hope is that young scientists will cherish the experience of writing about their research and, in time, find it worth doing irrespective of the DST’s prize: that they will use the initial prompt to measure their own research against its benefits to society and, eventually, come to enjoy writing about human knowledge for its own sake.

The Wire Science regularly features articles, reviews, essays and commentaries written by scientists around India, and welcomes AWSAR as an opportunity to continue its conversations with experts and encourage them to write more. As our audience of science readers has repeatedly reminded us, there is no topic that isn’t interesting enough if the writing is good. In light of AWSAR, I reiterate my commitment, and extend the offer to scientists everywhere and of every age: I will work with you to help you write the best piece you can.

(I certainly don’t believe that all students of science can be good writers off the bat. This is why our science-writing submission guidelines contain some useful tips to get you started. These aren’t hard-coded, and you’re welcome to ‘break the rules’; you could also reach out to me at @1amnerd on Twitter or mukunth at thewire.in for further assistance.)

There are, of course, other questions that must be answered before the scheme can be assessed for its overall usefulness, such as whether articles with multiple authors, or in languages other than English, will be considered. At the same time, it must be acknowledged that AWSAR presents a valuable opportunity that, if carefully negotiated, can yield handsome benefits that might be otherwise difficult to achieve.

Because this is going to be a conversation between scientists and editors first before it can be a conversation between scientists and the people that the editors serve, we hope scientists and editors alike also take the time to understand the ethos of science communication and publish the best piece that can be published. This will not be possible without a community effort.

Here’s an example to illustrate why. Laudable as AWSAR’s aims are, one of its stipulations – that a researcher must write about her own work – comes right up against one of the central rituals of science journalism: the independent comment.

Now, imagine a setting where a doctoral or postdoctoral student submits an article about their work to an editor who is not familiar with the science. Editors will fail both themselves and the researchers who author pieces if they assume the contents to be completely and absolutely true.

Scientists must understand that this is not a breach of trust. On the contrary, it’s an exercise of trust whereby the editor can help the author be more reflexive about their study’s pitfalls and their own, often unavoidable, cognitive biases. Contrary to what is often foolishly believed, acknowledging that these biases and confounding factors exist is not antithetical to good science communication – it is essential. Scientists must internalise this just as much as the AWSAR evaluation committee should.

So the editor would be well served to contact another scientist in the same field, unaffiliated with the author, to comment on the piece’s claims and spirit. This is why AWSAR’s success is going to be nothing short of a community effort, resting on the honest commitment of editors conversing with scientists and scientists conversing with editors.

Meta-design: An invisible bias

Puja Mehra wrote an excellent breakdown of the Narendra Modi government’s economic performance of the last four years, the duration for which it has been in office. You should read it if you’re interested in this sort of thing (and you should be).

However, the article’s layout bothers me:

1. The references are listed at the end instead of in-line. The former is a vestige of print publishing, where it’s not possible to display layered text, so printers listed a citation in the page’s footer, or at the end of each chapter, for those who would need to refer to it. In-line references, on the other hand, are far more convenient because they don’t require the reader to jump across the page, or pages, to find the citation; it sits proximate to the claim itself.

2. For all the numbers, and the “empirical analysis” that the article is being lauded for, there’s not a single chart in it.

The first issue in particular is something I sense a lot of people conflate with being “serious” and “solemn”: an article laid out such that it meets print’s publishing standards, which have been refined over hundreds of years, as if unconcerned with digital publishing, which has been around in its current form for less than a decade.

Another concern is whether the publisher of Mehra’s article, The Hindu Centre for Politics and Public Policy, is tracking the number of people whose eyeballs rest on the references portion of the page for a (statistically) significant period and the number of people who click on the links. (Of course, there’s a qualitative funnel here whereby some readers click a reference not to verify that it supports the claim to which it has been attached but simply to learn more.)

Excluding these people, I suspect a majority of readers will rest easier knowing that a specific claim _has been_ referenced (“blah blah gurgle nyah[21]”), and not bother to validate it themselves. That’s how we all read Wikipedia: we trust the platform to have robust rules for maintaining reliability and we trust volunteers to want to apply those rules.

When we take the existence as well as the trustworthiness of this relationship for granted, we sow the seeds for a meta-design to take effect on the page we’re reading: where the mere presence of certain elements encourages us to interpret the substance on the page in this way or that. Put another way, because of its ubiquity and its heritage, print publishing brings with it an attendant set of processes that must be followed before a book, article, review, etc. can be published. When the published content contains symbols that suggest these processes have been followed, we assume due diligence has been done on the publisher’s part to check and prepare the content (especially since words once printed can’t be unprinted).

What’s curious here is that we believe we can trust an article more if it contains these symbols _irrespective_ of whether it has been published offline or online. For example, when you see a superscripted [?] next to a claim on Wikipedia (“blah blah gurgle[?] nyah”), your mind immediately works to discard the claim from memory (at least mine does) – in much the same way I sit up and pay more attention when I see an article laid out in two columns with references strewn around the page, because it’s likely a scientific paper.

Similarly, when we detect the presence of such meta-design markers on Mehra’s article and trust in the validity of the substance of those markers, we’re encouraged to conclude that the article is trustworthy and reliable. I would be interested in any scientific studies conducted to determine the strength of this encouragement and how readers’ impression of the article changes as a result, measured against how much of the article they’ve already read.

Featured image credit: Geraldine Lewa/Unsplash.

Nicking the notch

I got myself a new phone today – the iPhone 6S. Before the purchase, I had spent hours on Amazon looking for the right phone within my budget, and quite possibly went through at least two score models. During this exercise, I noticed many phones on the market that had unabashedly copied the fullscreen design of the iPhone X and called it their own.

The hallmark of this design is the absence of any buttons on the phone’s UI, and the presence of a ‘notch’ – a black bar at the top that’s host to two cameras, a few sensors, the mic, etc. The design by itself isn’t very revolutionary but Apple’s decision to change the look of a phone that’s maintained one specific look for a decade is, to borrow Marco Arment’s verdict, courageous.


However, I noticed at least five other brands – OnePlus, Vivo, Huawei, LG and Asus – with phones that sported the same notch (6, V9, P20, G7 and Zenfone 5 resp.). I’m sure there are many others nicking the notch, especially the China-based rapid prototypers like Xiaomi. (This article highlights a bunch.)

One reason they’re able to get away with this is that Apple doesn’t have a patent on the design. Additionally, while Apple designed the iPhone X’s screen this way to maximise display size, those who added the notch afterwards did so to capitalise on the trend that was sure to follow.

Second, OEMs argue that there are only so many ways to maximise display size and that, if anything, Apple should also be criticised for adopting an edge-to-edge display after Samsung popularised the idea with its Edge+ model.

Evidently, the argument (or counterargument, depending on your POV) is that there is only a finite number of ways in which to combine UI elements to achieve certain UX goals. And at the other, minimal end of the interfacial spectrum is the question of what exactly it is that you’re patenting when all semblance of creative detail has been shaved off of your product.

This line of thinking brought an amusing anecdote to mind, involving the cult sci-fi classic 2001: A Space Odyssey, which celebrated its 50th anniversary last month. When Apple sued Samsung for allegedly copying the iPad’s design for the Galaxy tab, Samsung hit back in mid-2011 with a crazy defence: that Apple’s patent was null because the iPad’s design had been copied from devices depicted in the film.

Of course, the sitting judge dismissed Samsung’s argument: Apple may have been inspired by the design as depicted in the film but the idea of the tablet as a product as such was its own, and Samsung’s ‘defence’ didn’t address that. The iPhone X notch has a similar identity: according to Android phone-makers, it’s an inevitable design choice, and doesn’t represent any new ideas as such.