The missile test before the polls

On March 27, 2019, the Defence Research and Development Organisation (DRDO) conducted ‘Mission Shakti’: India’s first anti-satellite (ASAT) missile test. After the event, the national broadcaster aired an hour-long speech by Prime Minister Narendra Modi. Since the Election Commission’s restrictions on poll candidates’ screen time were in effect ahead of the Lok Sabha polls that year, some of us surmised the test had been timed to give Modi a reason to get on TV without explicitly violating the rules.

Yesterday, on March 11, the DRDO conducted a test of its new Agni 5 missile in its MIRV – short for ‘multiple independently targetable reentry vehicles’ – configuration, a powerful defence technology that allows a single suborbital missile to deliver multiple warheads (possibly nuclear) to different targets. This time, however, the Commission’s restrictions are not yet in effect, nor has Modi tried to deliver a speech ostensibly about the test, although he has been in Pokhran today talking about ‘Bharat Shakti’, which I believe is the name of India’s programme for self-sufficiency in defence.

Surely this is some kind of pre-election muscle-flexing? After the first Agni 5 test in April 2012, DRDO’s then chief controller of missiles Avinash Chander told Business Standard: “The primary modules of MIRV are in an advanced stage of development. Realisation and integration of them into a weapon is just a question of threat perceptions and the need as it arises.” This ‘need’ seems to be signalling to both agam and puram – domestic and foreign – actors just before the national elections. The same held for the ASAT test in March 2019, when there was reason to believe India had been ready with ASAT capability during Manmohan Singh’s tenure as prime minister, if not earlier.

In the broader view, China tested both MIRV and ASAT missiles before India, most recently in 2017 (the DF-41 missile) and in 2007, respectively, alongside some claims in 2008 that it was modifying its submarine-launched JL-2 MIRV to have ASAT capabilities as well. The post-test bluster by BJP leaders on both occasions was directed at China. What will India test come March 2029, I wonder.

Review: ‘Oppenheimer’ (2023)

Oppenheimer was great. I really liked it. I don’t have a review as much as some notes that I took during the film that I’d like to share. But before diving into them, I should say that I got a certain impression of the film before I watched it based on all the reviews, the hot-takes, and the analyses, and it was almost entirely at odds with my final experience of it. How happy am I to have been wrong.

SPOILERS AHEAD

1. “Brilliance makes up for a lot.” – The idea that genius is an excuse to overlook other flaws, a famously problematic notion among scientists, as we’ve seen of late, recurs non-ironically throughout the film. But it’s also the sort of criticism that, while it’s important to take note of, doesn’t seem interesting vis-à-vis the film itself. The film shows Oppenheimer as he was, warts and all – and there’s value in that – living and working in a time that encouraged such thinking. The point was neither to redeem him nor make sure we ‘learn’ that such thinking is worthy of discouragement, in much the same way it doesn’t discuss who occupied the land where the Trinity test was conducted.

(This said, it did strike me as odd that the film chose not to show the images of the bomb’s consequences in Japan as they were being displayed to an audience that included Oppenheimer. I can’t say I agree that watching him react to those images was more important.)

2. Military and science – This is a tension that’s also been made clear in several historical accounts of the Manhattan Project, of the working culture among scientists clashing with how the military operates, and how, in the course of this contest, each side perceived profound flaws in the way the other achieved its objectives. One is, or claims to be, democratic (epitomised in the film by Oppenheimer persuading Teller to stay back at Los Alamos) while the other prizes brutal efficiency and a willingness to get its hands ‘dirty’ because of the clear apportionment of blame (irrespective of whether that’s really possible from the PoV of today).

3. “How could this man who saw so much be so blind?” – Strauss’s comment in the beginning sets up the kind of person Oppenheimer was very well. The real-world Oppenheimer was often disrespectful, flippant towards other people’s opinions or feelings. But in the film, this disposition is directed almost always at Strauss, so it’s possible to come away thinking that Oppenheimer just believed Strauss alone to be worthy of some disdain. Yet Strauss’s comment captures Oppenheimer’s hubris so concisely.

4. “Scientists don’t respect your judgment” – Another comment of Strauss’s, which although we see by the end of the film was born largely out of an inflated self-importance, also spoke, I thought, to the tension between how the scientists and the soldiers operate and to the sense of unease among some in the military that comes of looking outside-in into the Manhattan Project, until of course the bomb was delivered.

5. A science and military complex – Vannevar Bush is ‘represented’ in the film. After the war ended, he famously advocated for the US investing in blue-sky research, arguing that such research, while delivering no short-term gains, would in the longer term hold the country in good stead on a variety of advanced technologies. The complex still operating today is the military-industrial one, but during the war science became the glue holding the two together. And it’s interesting to get such a well-dramatised view of the tensions through which these two enterprises were reconciled.

6. Tension ahead of Trinity – This is the principal reason I liked Oppenheimer. I’ve read a lot (relatively) about how the bomb came to be, but one thing all of those accounts lacked is such a faithful – or what I imagine is a faithful – description of the emotions at play as the bomb was built, tested, and reckoned with. When that man’s fingers trembled over the big red button that would detonate the weapon, I was trembling in my seat. The nervousness, the anger, the frustration, even the complementary nonchalance of Teller and Feynman. This is very difficult to glean from scholarship.

7. Nolan’s comment – In several interviews before the film’s release, Nolan said he believed Oppenheimer was the “greatest person” to have ever lived. I assumed before watching the film that this was an insight into the sort of film Oppenheimer would be, with hero worship and its attendant rituals. But in the end, the comment was so irrelevant to the experience of the film.

8. What is a nuclear weapon? – To me, Oppenheimer’s principal triumph is that, through the eyes of its eponymous protagonist, it conveys what it means for there to be such a thing as a nuclear weapon. It’s fundamentally the breaking of the strong nuclear force binding nucleons together, but it’s also, to paraphrase something Strauss says in his angry tirade near the end, the irreversible act of letting the nuclear genie out of the bottle and everything that entails. It’s power and therefore a herald of cynical politics. It’s classified information and therefore a source of mis- or dis-trust. (“If you create the ultimate destructive power, it will also destroy those who are near and dear to you” – Nolan.) It’s knowledge of another country’s power and intent. It’s a demonstration of its scientists’ ability to channel their talents as well as their moral bearings. It’s the weapon to reshape all wars. And so forth.

9. Shockwave in the gymnasium – This was such an excellent, poignant scene, when Oppenheimer is going through the motions, or what he thinks ought to be the motions, and the place goes quiet just as it did when the Trinity shot succeeded. Then, as he is walking out, the sound of his audience’s cheering hits him like a shockwave. Such a well-conceived metaphor for the bomb’s political nature, and a cementing of Oppenheimer’s epiphany that there’s really nothing he can do to control how it will be used.

10. Partial fictions – Strauss’s vendetta against Oppenheimer isn’t borne out in the historical record, including the fact that Strauss was the one to hand the FBI the all-important file (via Borden). This sadly constitutes the same sort of mistake that films of lower calibre make: claiming to be based on real-world events (or, as in this case, a book documenting real-world events) but then fictionalising some small detail. The effect is to leave the viewer wondering what else didn’t exactly happen, which they won’t know unless they specifically check. In Oppenheimer, this is true of parts of the Strauss storyline, the Oppenheimers’ parenting skills, how concerned the physicists really were about the bomb setting “the air on fire”, and, irony of ironies, it all begins with a literal poisoned fruit.

(A couple of inconsistencies are in my opinion worth singling out, despite being quite minor: (i) when the Trinity shot succeeds, Oppenheimer is shown being accosted by George Kistiakowsky demanding the $10 he bet Oppenheimer the previous night that the test would go through. Oppenheimer says “I’m good for $10” and hands him a bill, but in reality he didn’t have the money. But that’s not all. In that moment, Oppenheimer would later recall mulling those famous words from the Gita, only for Kenneth Bainbridge to have been plainer: “Oppie, now we’re all sons of bitches.” (ii) When Chevalier tells Oppenheimer that Eltenton can help pass information through to the Soviets, Kitty comes to the kitchen not wanting the two of them to be alone and is also the one to tell Chevalier that his proposal constitutes treason. In the film, Kitty enters the kitchen after this conversation has concluded. This is worth pointing out because, in the film itself, she’s consistently a better judge of character than Oppenheimer.)

11. Compartmentalisation – The concept of compartmentalisation appears throughout the film in the context of maintaining the secrecy of the Manhattan Project. But as it happened, a certain loss of compartmentalisation had to transpire for the project’s physicists to actually want to build a bomb – something that happened, by some accounts, at a meeting on April 15, 1943, when Robert Serber clarified to those present at the Los Alamos site that they were to build a nuclear weapon. When the physicists set about their task with gusto, they surprised Enrico Fermi, who then told Oppenheimer: “I believe your people actually want to make a bomb.” A terribly profound comment.


Addendum

Oppenheimer forced me to confront and question a little knot of apprehension that had taken root in my mind when it was released. It was fed mostly by the fact that the film would expose to a very large number of people a world of information that had taken many others (myself included) a lot more time to find, learn, and parse. I was apprehensive that some nuance of this passage of history would get shredded by some inane right- or left-wing outrage, and be denied an opportunity to make some meaningful impression on the minds of its viewers.

I daresay that this is a legitimate concern at a time when writers and journalists have had to double-check how something might be construed on social media platforms, in specific parts of the country, even by a court somewhere. We may never be able to fully control how something that we produce will be consumed, but there are parts of it that we can. In my own writing, I noticed last year a tendency to be defensive, to write in such a way that I explain myself thoroughly and accommodate all possible counter-arguments. The style is time-consuming and, more importantly, because how we write can affect how we think, it leads to defensive thinking as well.

I was also anxious about encountering the hypocrisy that I suspected would be put on display when, despite being able to find physics beautifully described in hundreds of articles and videos on the web, the “average audience” recoils from them but gravitates with glee to Oppenheimer, and perhaps afterwards holds forth on Facebook as if it understood the ideas involved all along.

But then, in the film, Oppenheimer tells Leo Szilard that the scientists who made the bomb have no greater say than others about how to use it. I disagreed with the comment, but it struck me that we’d have to agree if we replaced “bomb” with “knowledge”. I’m glad that more people now know about the circumstances in which the first nuclear weapons were made because even if only a few are prepared to treat the film as a gateway, rather than as the definitive take or whatever, the world should be the better for it.

Featured image. A screenshot of a scene from Oppenheimer (2023). Source: YouTube

Unseating Feynman, and Fermi

Do physicists whitewash the legacy of Enrico Fermi the same way they do Richard Feynman?

Feynman disguised his sexism as pranks and jokes, and writers have spent thousands of pages offering his virtues as a great physicist and teacher as a counterweight against his misogyny. Even his autobiography makes no attempt to disguise his attitude, though, to be fair, the attitude in question became widely recognised as problematic only in the 21st century.

This doesn’t mean nobody exalts Feynman anymore but only that such exaltation is expected to be contextualised within his overall persona.

This in turn invites us to turn the spotlight on Fermi, who at first glance appears to be Italy’s Feynman by reputation, and on deeper study qualifies as one of the greatest physicists of the 20th century.

Like Feynman, Fermi made important and fundamental contributions to physics and chemistry. Like Feynman, Fermi was part of the Manhattan Project to build the bombs that politicians would eventually drop on Hiroshima and Nagasaki. But unlike Feynman, Fermi’s participation in the latter extended to consultations on decisions about where to drop the bomb and when.

For us to acknowledge that we were being grossly unfair to all women when we overlooked Feynman’s transgressions, women needed to become more vocal about their rights in social and political society.

So it’s only fair to assume that at some point in the future, society’s engagement with and demands of scientists and scientific institutes to engage more actively with a country’s people and their leaders will show us how we’ve been whitewashing the legacy of Enrico Fermi – by offering his virtues as a physicist and teacher as a counterweight against his political indifference.

Many people who fled fascist regimes in 20th century Europe and came to the US, together with people who had relatives on the frontlines, supported the use of powerful weapons against the Axis powers because these people had seen firsthand what their enemies were capable of. Fermi was one such émigré – but here’s where it gets interesting.

Fermi was known to be closed-off, to be the sort of man who wouldn’t say much and kept his feelings to himself. This meant that during meetings where military leaders and scientists together assessed a potential threat from the Germans, Fermi would maintain his dispassionate visage and steer clear of embellishments. If the threat was actually severe, Fermi wouldn’t be the person of choice to convey its seriousness, at least not beyond simply laying down the facts.

This also meant that Fermi didn’t have the sort of public, emotional response people commonly associate with J. Robert Oppenheimer, Karl Darrow or Leo Szilard after the bomb was first tested. In fact, according to one very flattering biography – by Bettina Hoerlin and Gino Segrè, published in 2016 – Fermi was only interested in his experiments and was “not eager to deal with the extra complications of political or military involvement”. Gen. Leslie Groves, the leader of the Manhattan Project, reportedly said Fermi “just went along his even way, thinking of science and science only.”

But at the same time, Fermi would also advocate – against the spirit of Szilard’s famous petition – for the bomb to be dropped without prior warning on a non-military target in Japan to force the latter to surrender. How does this square with his oft-expressed belief that scientists weren’t the best people to judge how and when the bomb would have to be used to bring a swift end to the war?

Fermi’s legacy currently basks in the shadow of the persistent conviction that science and politics are conducted separately and that they should be kept that way. The first part of the claim is false, an untruth fabricated to keep upper-class/caste science workers from instituting reforms that would make research a more equitable enterprise; the second part is becoming more untenable, but it’s taking its time.

Ultimately, the fight for a scientific enterprise founded on a more enlightened view of its place within, not adjacent to, society should also provide us a clearer view of our heroes as well as help us discover others.

Some thoughts on the nature of cyber-weapons

With inputs from Anuj Srivas.

There’s a hole in the bucket.

When someone asks for my phone number, I’m on alert, even if it’s so my local supermarket can tell me about new products on their shelves. The same goes for my email ID, so the taxi company I regularly use can send me ride receipts, or for permission to peek into my phone, if only to see what other music I have installed. These are all vaults of information I haven’t been too protective about but which have of late acquired a notorious potential to reveal things about me I never thought I could so passively.

It’s not everywhere but those aware of the risks of possessing an account with Google or Facebook have been making polar choices: either wilfully surrender information or wilfully withhold information – the neutral middle-ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down so we make ourselves unavailable to them. And we succeed to different extents, but none completely – there’s a bit of our digital DNA in government files, much like with the kompromat maintained by East Germany and the Soviet Union during the Cold War.

In fact, is there an equivalence between the conglomerates surrounding nuclear weapons and cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:

When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

These words are still relevant but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”

An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 did seem an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:

The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.

This is akin to the multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, much more data than might be necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:

Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.

Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.

So, just as nuclear weapons seem to be plausible but improbable threats fashioned to fuel the construction of evermore nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling for companies to provide weakened encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.

However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:

  1. Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
  2. Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
  3. Computer scientists have been slow to recognise the moral character and political implications of their creations

That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss

In 1995, Joseph Rotblat won the Nobel Peace Prize for helping found the Pugwash Conferences against nuclear weapons in 1957. In his lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking those words of Zuckerman quoted above as well as going on to add:

If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…

The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?

As John Arquilla writes on the CACM blog:

Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)

… the absence of “societal destruction” with cyber-warfare imposes less of a real burden upon the perpetrators and endorsers.

And records of such intangible devastations are preserved only in writing, in our memories, and can be quickly manipulated or supplanted by newer information and problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant while the National Security Agency continues to labour on, flitting between the shadows of SIPA, the Patriot Act and others like them. The violations creep in quietly: easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted to be something they aren’t.

That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons

What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but which lawmakers turned to in 2013 to prevent entities with questionable human-rights records from accessing “intrusion software” as well. In effect, the WA limits what kinds of technology its 41 signatories can transfer among themselves, or to non-signatories at all, based on the technology’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. There are three kinds of problems:

1. In its language – Unreasonable absolutes

Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (not just of code) that separates a correct output from a flawed one, which hackers have become adept at exploiting. Think of it like this:

Say there’s a computer, called C, which Alice uses for a particular purpose (such as to withdraw cash, if C were an ATM). C accepts an input called I and spits out an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PINs used at ATMs). However, they end up designing the machine to operate safely for all known four-digit numbers while neglecting what would happen should I be a five-digit number. By some technical insight, a hacker could exploit this oversight and make C spit out all the cash it contains using a five-digit I.

In this case, a correct output by C is defined only for a fixed range of inputs, with any output corresponding to an I outside this range considered a flawed one. However, programmatically, C has still only provided the O its code prescribes for a five-digit I. Bratus’s point is just this: we’ve no way to perfectly define the intentions of the programs that we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe and unsafe?
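This gap between ‘correct for the inputs the designers considered’ and ‘correct for every input the machine will accept’ can be sketched in a few lines. (This is a toy illustration of the ATM analogy above, not code from Bratus’s article; the function and values are made up.)

```python
def check_pin(entered: str, stored: str) -> bool:
    # The designers assume every entry is a four-digit PIN, so they
    # "normalise" it numerically instead of checking its length.
    return int(entered) % 10000 == int(stored)

# For every input the designers considered, the function behaves as intended:
check_pin("4321", "4321")   # True

# But a five-digit entry, which they never anticipated, also produces a
# "correct" output by the program's own logic: 14321 % 10000 == 4321.
check_pin("14321", "4321")  # also True
```

The function does exactly what its code says for every input; the flaw exists only relative to the designers’ unstated intent. That is the difficulty: intent isn’t something export-control language can recover by inspecting the artefact alone.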

2. In its purpose – Sneaky enemies

Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy expert Katie Moussouris said the WA was underprepared to confront superbugs that target computers connected to the Internet irrespective of their geographical location, but whose solutions could emerge from within a WA signatory. A case in point that Moussouris used was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to target the OpenSSL library, used by servers to encrypt personal information transmitted over the web, and force it to divulge chunks of memory that could contain the encryption key. To protect against it, users had to upgrade OpenSSL with a software patch containing the solution. However, such patches targeted against bugs of the future could fall under what the WA has defined simply as “intrusion software”, for which officials administering the agreement would end up having to provide exemptions dozens of times a day. As Darren Pauli wrote in The Register,

[Moussouris] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
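The mechanism behind Heartbleed can be modelled in miniature. (This is a toy sketch, not OpenSSL’s actual code; the memory layout and names are invented.) The essence: the server echoes back as many bytes as the client claims to have sent, without checking the claim against what was actually sent.

```python
# A toy model of a Heartbleed-style over-read. The heartbeat payload sits
# adjacent to secret material, as it would in a process's memory.
SERVER_MEMORY = b"PING" + b"|secret-key-material"

def heartbeat_response(payload: bytes, claimed_len: int) -> bytes:
    # Buggy: trusts claimed_len instead of len(payload), so a client
    # claiming more bytes than it sent reads past its own payload.
    start = SERVER_MEMORY.find(payload)
    return SERVER_MEMORY[start:start + claimed_len]

# Honest client: claims 4 bytes, sent 4 bytes.
heartbeat_response(b"PING", 4)    # b"PING"

# Malicious client: claims 24 bytes but sent 4 -- and receives the secret.
heartbeat_response(b"PING", 24)   # b"PING|secret-key-material"
```

The fix was, in essence, to cap the echo at the length of the payload actually received, which is exactly the kind of defensive patch that a broadly-worded “intrusion software” category risks entangling.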

3. In presenting an illusion of sufficiency

Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some shade after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would’ve been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or to haul authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.

Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – which have been widely panned by security experts for being too broadly defined. Over December, however, the experts have begun to hope National Security Adviser Susan Rice can persuade the State Department to push for making the language in the WA more specific at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already onboard.

That computer scientists have been slow to recognise the moral character and political implications of their creations

Phillip Rogaway, a computer scientist at the University of California, Davis, published an essay on December 12 titled ‘The Moral Character of Cryptographic Work’. Rogaway’s thesis is centred on the increasing social responsibility of the cryptographer – of the kind invoked by Zuckerman – as he writes:

… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.

And maybe cryptographers have missed the wood for the trees until now but times are a’changing.

On December 22, Apple publicly declared it was opposing a new surveillance bill that the British government is attempting to fast-track. The bill, should it become law, will require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities can access them when they need to but no one else ever can – a presumption Apple has called out as fallacious and impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.
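The impossibility Apple points to is easy to see in miniature. Here is a toy sketch (deliberately not real cryptography – a one-time-pad-style XOR stands in for a proper cipher): once a “doormat” key exists, decryption depends only on possessing it, and the mathematics cannot distinguish an authorised holder from a thief.

```python
# Toy illustration of the "key under the doormat" problem.
# NOT real cryptography: a simple XOR cipher stands in for any
# symmetric scheme. The point survives the simplification.

import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # the escrowed "doormat" key

ciphertext = xor_cipher(message, key)

# The "good guys" decrypt with the escrowed key...
assert xor_cipher(ciphertext, key) == message

# ...but a leaked copy of that key is bit-for-bit identical, and so
# decrypts just as well. There is no way to encode "only for the
# authorities" into the key itself.
stolen_key = key
assert xor_cipher(ciphertext, stolen_key) == message
```

In other words, any mechanism that lets the government in is a mechanism that lets in whoever obtains the same key material – which is precisely Apple’s objection.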

Similarly, in November this year, Microsoft resisted an American warrant to hand over some of its users’ data acquired in Europe by entrusting a German telecom company with its servers. As a result, any requests for data about German users using Microsoft to make calls or send emails, and originating from outside Germany, will now have to go through German lawmakers. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.

Apple’s and Microsoft’s are welcome and important changes of tack. Both companies were featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies also had little alternative at the time was glossed over by the scale of the NSA’s violations. In 2015, however, a clear moral as well as economic high ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and occupying that high ground has become a practical option. After Snowden, not taking that option when there is a chance to has come to mean passive complicity.

But apropos Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer-scientists afford to occupy the same ground as larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but the business model is what provides for the individuals.

Take the case of Stuxnet, the virus unleashed by what are believed to be agents of the US and Israel in 2009-2010 to destroy Iranian centrifuges suspected of being used to enrich uranium to explosive-grade levels. How many computer-scientists spoke up against it? To date, no institutional condemnation has emerged*. The fact that neither the US nor Israel has publicly acknowledged its role in developing Stuxnet may have made it tough to judge who crossed a line – but it was obvious that a deceptive bundle of code had been used as a weapon in an unjust war.

Then again, can all cryptographers be expected to comply? One of the threats that the 2013 amendments to the WA attempt to tackle is dual-use technology (of which Stuxnet is an example, because the virus took advantage of its ability to mimic harmless code). Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have had only tenuous control over these boundaries owes itself to imperfect yet blameless programming languages, as Bratus also asserts, and not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already-weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. But in cyber-warfare, agents are trapped in the ambiguities surrounding what the nature of a cyber-weapon is at all, and with what intent and for what purpose it was crafted, allowing its repercussions to seem anywhere from rapid to evanescent.

Or, as it happens, the agents are liberated.

*That I could find. I’m happy to be proved wrong.

Featured image credit: ikrichter/Flickr, CC BY 2.0.