Some thoughts on the nature of cyber-weapons

With inputs from Anuj Srivas.

There’s a hole in the bucket.

When someone asks for my phone number, I’m on alert, even if it’s only so my local supermarket can tell me about new products on its shelves. The same goes for my email ID, so the taxi company I regularly use can send me ride receipts, or for permission to peek into my phone, if only to see what music I have installed – all vaults of information I haven’t been too protective about, but which have of late acquired a notorious potential to reveal things about me I never thought I could reveal so passively.

It’s not a universal condition, but those aware of the risks of possessing an account with Google or Facebook have been making polar choices: either wilfully surrendering information or wilfully withholding it – the neutral middle ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down, so we make ourselves unavailable to them. And we succeed to different extents, but none of us completely – there’s a bit of our digital DNA in government files, much like the kompromat maintained by East Germany and the Soviet Union during the Cold War.

In fact, is there an equivalence between the complexes surrounding nuclear weapons and those surrounding cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:

When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

These words are still relevant, but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”

An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 seemed an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:

The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.

This is akin to the arguments of multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, far more than was necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:

Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.

Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.

So, just as nuclear weapons seem to be plausible but improbable threats fashioned to fuel the construction of ever more nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling for companies to provide weakened encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.

However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:

  1. Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
  2. Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
  3. Computer scientists have been slow to recognise the moral character and political implications of their creations

That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss

In 1995, Joseph Rotblat won the Nobel Peace Prize for helping found the Pugwash Conferences against nuclear weapons in 1957. In his Nobel lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking the words of Zuckerman quoted above and going on to add:

If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…

The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?

As John Arquilla writes on the CACM blog:

Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)

In other words, the absence of “societal destruction” in cyber-warfare imposes less of a real burden upon its perpetrators and endorsers.

And records of such intangible devastations are preserved only in writing and in our memories, where they can be quickly manipulated or supplanted by newer information and newer problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant for Snowden while the National Security Agency labours on, flitting between the shadows of FISA, the Patriot Act and others like them. The violations creep: easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted to be something they aren’t.

That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons

What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but to which lawmakers resorted in 2013 to prevent entities with questionable human-rights records from accessing “intrusion software” as well. In effect, the WA sets limits on what kinds of technology its 41 signatories can transfer among themselves – or not at all to non-signatories – based on the technology’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. There are three kinds of problems:

1. In its language – Unreasonable absolutes

Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting the WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (not just of code) separating a correct output from a flawed one, a line hackers have become adept at exploiting. Think of it like this:

Say there’s a computer, called C, which Alice uses for a particular purpose (to withdraw cash, say, if C were an ATM). C accepts an input called I and spits out an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PINs used at ATMs). However, they end up designing the machine to operate safely for all known four-digit numbers while neglecting what would happen should I be a five-digit number. With some technical insight, a hacker could exploit this oversight and make C spit out all the cash it contains using a five-digit I.

In this case, a correct output by C is defined only for a fixed range of inputs, with any output corresponding to an I outside this range considered a flawed one. Programmatically, however, C has still provided the “correct” O for a five-digit I – the output its code actually specifies. Bratus’s point is just this: we’ve no way to perfectly define the intentions of the programs that we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe and unsafe?
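To make the point concrete, here is a minimal sketch in Python of the hypothetical ATM above. Everything in it – the stored PIN, the amounts, the fall-through branch – is invented for illustration:

    # Toy dispenser C: its designers only ever considered four-digit PINs.
    VALID_PIN = "4312"  # hypothetical stored PIN

    def dispense(pin: str) -> int:
        """Return the amount of cash C dispenses for input I = pin."""
        # The designers enumerated the inputs they expected: four digits.
        if len(pin) == 4:
            return 100 if pin == VALID_PIN else 0
        # Nobody specified behaviour outside that range, so control falls
        # through to a branch that was never meant to be reachable.
        return 100

    print(dispense("9999"))   # 0   -- wrong four-digit PIN, rejected as intended
    print(dispense("4312"))   # 100 -- right PIN, accepted as intended
    print(dispense("12345"))  # 100 -- five digits: programmatically a
                              #        "correct" O, flawed only in intent

The program never malfunctions: every output is exactly what the code specifies. The flaw lives in the gap between the code and its authors’ intent – the gap the WA’s language cannot describe in an intent-neutral way.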

2. In its purpose – Sneaky enemies

Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy buff Katie Moussouris said the WA was underprepared to confront superbugs that target computers connected to the Internet irrespective of their geographical location but whose fixes could well emerge from within a WA signatory. Her case in point was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to exploit the OpenSSL library, which servers use to encrypt personal information transmitted over the web, and force it to divulge the contents of protected memory – potentially including the encryption keys themselves. To protect against it, users had to upgrade OpenSSL with a software patch containing the fix. However, such patches targeted at the bugs of the future could fall under what the WA has defined simply as “intrusion software”, for which officials administering the agreement would end up having to provide exemptions dozens of times a day. As Darren Pauli wrote in The Register:

[Moussouris] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
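Heartbleed’s mechanism can be caricatured in a few lines. The sketch below is a deliberately simplified model, not OpenSSL’s actual code – the memory layout and the stand-in “key” are invented – in which a heartbeat handler echoes back as many bytes as the request claims to contain rather than as many as it actually sent:

    # Simplified model of the Heartbleed over-read; not OpenSSL's real code.
    # The request buffer sits in memory right next to sensitive data.
    server_memory = bytearray(b"ping" + b"\x00" * 4 + b"-----BEGIN PRIVATE KEY-----")

    def heartbeat(payload: bytes, claimed_length: int) -> bytes:
        server_memory[: len(payload)] = payload
        # BUG: the reply length is taken from the request's own claim.
        # The fix -- the kind of patch the WA inadvertently covers --
        # is to use len(payload) instead.
        return bytes(server_memory[:claimed_length])

    print(heartbeat(b"ping", 4))   # b'ping' -- an honest heartbeat
    print(heartbeat(b"ping", 35))  # echoes 'ping' plus whatever lies
                                   # beyond it, key material included

A patch that fixes this and an exploit that abuses it are built from exactly the same knowledge of the bug – which is why intent-neutral export language keeps tripping over routine vulnerability research.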

3. In presenting an illusion of sufficiency

Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some shade after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would’ve been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or to haul authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.

Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – controls that security experts have widely panned for being too broadly defined. Over December, however, those experts have begun to hope that National Security Adviser Susan Rice can persuade the State Department to push for more specific language in the WA at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already on board.

That computer scientists have been slow to recognise the moral character and political implications of their creations

Phillip Rogaway, a computer scientist at the University of California, Davis, published an essay on December 12 titled The Moral Character of Cryptographic Work. Rogaway’s thesis centres on the increasing social responsibility of the cryptographer – the same responsibility Zuckerman invoked of the weapons scientist – as he writes:

… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.

And maybe cryptographers have missed the wood for the trees until now, but times are a-changing.

On December 22, Apple publicly declared it was opposing a new surveillance bill that the British government is attempting to fast-track. The bill, should it become law, will require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities can access them if they need to but not anyone else – a fallacious presumption that Apple has called out as being impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.

Similarly, in November this year, Microsoft resisted an American warrant to hand over some of its users’ data acquired in Europe by entrusting its servers to a German telecom company. As a result, any requests for data about German users making calls or sending emails through Microsoft, and originating from outside Germany, will now have to go through German authorities. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.

Apple’s and Microsoft’s are welcome and important changes of tack. Both companies were featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies also had little alternative at the time was glossed over by the scale of the NSA’s violations. However, in 2015, a clear moral as well as economic high ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and occupying that high ground has become a practical option. After Snowden, not taking that option when there’s a chance to has come to mean passive complicity.

But apropos Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer scientists afford to occupy the same ground as larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but that same business model is what provides for those individuals.

Take the case of Stuxnet, the worm unleashed by what are believed to be agents of the US and Israel in 2009-2010 to destroy Iranian centrifuges suspected of being used to enrich uranium to explosive-grade levels. How many computer scientists spoke up against it? To date, no institutional condemnation has emerged*. It could be that, with neither the US nor Israel publicly acknowledging its role in developing Stuxnet, judging who had crossed a line was difficult – but that a deceptive bundle of code had been used as a weapon in an unjust war was obvious.

Then again, can all cryptographers be expected to comply? One of the threats the 2013 amendments to the WA attempt to tackle is dual-use technology – of which Stuxnet is an example, because the worm took advantage of its ability to mimic harmless code. Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have had only tenuous control over these boundaries owes itself to imperfect yet blameless programming languages, as Bratus also asserts, and not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already-weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. But in cyber-warfare, the agents are trapped in ambiguities surrounding what a cyber-weapon even is, and with what intent and for what purpose it was crafted – allowing its repercussions to seem anywhere from rapid to evanescent.

Or, as it happens, the agents are liberated.

*That I could find. I’m happy to be proved wrong.

Featured image credit: ikrichter/Flickr, CC BY 2.0.

AT&T, the weakest link

In the throng of American companies and their confused compliance with the National Security Agency’s controversial decade-long snooping on domestic and international communications, The New York Times and ProPublica have unravelled one that actually bent over backwards to please the NSA: AT&T. The basis of their allegations is a tranche of NSA documents detailing the features and scope of AT&T’s compliance with the agency’s ‘requests’, dating from 2003 to 2013.

The standout feature of the partnership is that, according to a note in the documents, it wasn’t contractual, implying the ISP hadn’t been coerced into snooping and sharing data on the traffic that passed through its domestic servers. As ProPublica writes, “its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency”. One of the documents even goes as far as to “highlight the Partner’s extreme willingness to help with NSA’s SIGINT and Cyber missions”.

The documents were part of those released by whistleblower Edward Snowden in 2013. According to the reporters, the three entities implicated in them – the NSA, AT&T and Verizon – refused to discuss the findings, in keeping with what has become a tradition of ISPs refusing to reveal the terms of their ‘collaborations’ and of the NSA refusing to reveal which ISPs it worked with. Since Snowden released the documents in 2013, public ire against the government’s intrusive snooping programmes has increased even as President Barack Obama and the judiciary have agreed that revealing any more details than Snowden already had would threaten national security.

As a result, the news that AT&T didn’t bother challenging the NSA throws valuable light on how the agency was able to eavesdrop on foreign governments and international organisations.

The ISPs aren’t named in the documents – they are referred to by code names – but their real identities were given away when the dates of some of their surveillance ops coincided, sometimes too perfectly, with dates on which some fibre-optic cables were ‘repaired’. For example, a document dated August 5, 2011, talks about Fairview’s data-logging resuming over a cable damaged by the earthquake near Japan earlier that year – while, ProPublica states, a “Fairview fiber-optic cable … was repaired on the same date as a Japanese-American cable operated by AT&T”. So the Fairview programme was found to be NSA + AT&T, and the Stormbrew programme NSA + Verizon/MCI.

However, AT&T got more attention than Verizon. In 2011, the NSA spent $188.9 million on the AT&T programme and less than half that on Verizon’s, possibly because AT&T also practised peering, a networking technique in which one company relays data through its network on behalf of other companies. As a result, users’ data from other ISPs and TSPs also ended up passing through AT&T’s wiretapped servers.

AT&T’s complicity dates back to the mid-1980s, when antitrust regulators broke up the monopolistic Ma Bell telephone company, one fragment of which was AT&T. Its formation roughly coincided with the NSA’s launch of the Fairview programme, into which the TSP got subsumed. Following the 9/11 attacks, both Fairview and Stormbrew assumed centre-stage in the agency’s anti-terrorism programmes, with Fairview being especially effective. As the Times writes, “AT&T began turning over emails and phone calls ‘within days’ after the warrantless surveillance began in October 2001”.

All the documents disclosed by the publications in the latest release are available here.

The Wire
August 16, 2015

Leaked emails say India is ‘really huge’ market for snooping, spyware

The Wire
July 10, 2015

Bangalore: Indian intelligence agencies and police forces are listed among the customers of an Italian company accused of selling spyware to repressive regimes.

Apart from the NIA, RAW, the intelligence wing of the Cabinet Secretariat and the Intelligence Bureau, the Gujarat, Delhi, Andhra Pradesh and Maharashtra state police forces also earn a mention in the latest dump of files by WikiLeaks, which outs emails exchanged within the Milan-based IT company Hacking Team, a purveyor of hacking and surveillance tools.

In August 2011, an Israeli firm named NICE, in alliance with Hacking Team, discussed opportunities to loop in the Cabinet Secretariat. In February 2014, Hacking Team’s employees discussed organising a webinar in which senior officials of the NIA, RAW and IB would participate. While this may be deflected as an intelligence agency’s need to counter-surveil recruitment attempts by groups like the Islamic State, it is unclear why the Andhra Pradesh police asked for equipment to snoop on mobile phones (“cellular interception hardware solutions”).

In fact, in an email dated June 8, 2014, a Hacking Team employee named David Vincenzetti writes that India “represents a huge – really huge and largely untapped – market opportunity for us”. Vincenzetti appears to have been spurred by former foreign minister Salman Khurshid’s infamous remark – “It’s not really snooping” – defending the American NSA’s widespread surveillance after Edward Snowden revealed that India was the agency’s fifth-most targeted nation.

Most of the emails relevant to Indian interests, dating from 2011 to 2015, discuss the “world’s largest democracy” as representing a big opportunity for Hacking Team. They also betray the company’s interest in Pakistan as a client, especially referencing Pakistan’s alliance with the United States in the war in Afghanistan and Osama bin Laden’s assassination in Abbottabad. A rough translation of the following statement in one of the emails, dated May 4, 2011

Il Pakistan e’ un paese diviso, fortemente religioso, rivale dell’India ma alletato degli US per la guerra in Afganistan. Dagli US riceve ogni anno >$4bn per aiuti civili e armamenti. Potrebbe essere un cliente eccezionale.

… reads: “Pakistan is a divided country, strongly religious, a rival of India’s but an ally of the US in the war in Afghanistan. From the US it receives more than $4 billion every year in civilian aid and armaments. It could be an exceptional client.”

Apart from discussing the Pakistani civilian government as a potential business partner for its strategic positioning, Hacking Team emails also reveal that the company performed demonstrations for the erstwhile UPA-II government on how to snoop on phones and other mobile devices, once in 2011 at the invitation of the Indian embassy in Italy.

The emails were obtained by a hacker (or group of hackers) identified as Phineas Fisher, who dug into Hacking Team’s internal communications and released them on July 6. Phineas Fisher also hacked the company’s Twitter account and renamed it Hacked Team, apart from launching a GitHub repository. The haul totalled over 400 GB and included sensitive information from other infosec companies, such as the British-German Gamma Group and the French VUPEN. In fact, the haul includes a 40-GB tranche from Gamma.

The hacker(s) subsequently declared they would wait for Hacking Team to figure out how its systems were breached, failing which they would reveal it themselves.

https://twitter.com/GammaGroupPR/status/618250515198181376

Apart from discussing the sale of interception equipment, the company was also considering a proposal to the Indian government to better wiretap data sent through RIM’s servers. RIM, or Research In Motion, is the Canadian company that owns BlackBerry. The Indian government had long been trying to get RIM to hand over users’ data exchanged over BlackBerry devices for civilian surveillance purposes.

Il BBM non puo’ essere intercettato. Ma con RCS si’. [BBM cannot be intercepted. But with RCS, it can.]

RCS is the Remote Control System, Hacking Team’s proprietary snooping software.

Ad ogni modo RCS offre possibilita’ ulteriori rispetto a qualunque sistema di intercettazione passiva come la cattura di dati che tipicamente non viaggiano via rete (rubrica, files, foto, SMS vecchi salvati, ecc.) e la possibilita’ di “seguire” un target indiano quando questo si reca all’estero (e.g., Pakistan). [In any case, RCS offers possibilities beyond those of any passive interception system, such as capturing data that typically does not travel over the network (phonebook, files, photos, old saved SMSes, etc.) and the ability to “follow” an Indian target when they travel abroad (e.g., to Pakistan).]

Other state clients for RCS and other Hacking Team services include serial human-rights abusers Saudi Arabia, Ethiopia, Uzbekistan and Russia.

Although Hacking Team has vehemently denied the allegations, the information released in the WikiLeaks dump remains unrefuted for now. And so it’s unsettling, to say the least, that the Indian government is seen to be in cahoots with a company that feeds on the persistence of human rights violations around the world.

A call for a new human right, the right to encryption

The Wire
June 2, 2015

DUAL_EC_DRBG is the name of a random-number generator that played an important role in the National Security Agency’s infiltration of communication protocols, as revealed by whistleblower Edward Snowden. The generator had long drawn the suspicion of cryptographers, who wondered why it was being promoted when NIST’s other standardised generators were faster and cleaner. The answer arrived in December 2013: DUAL_EC_DRBG contained a backdoor.

A backdoor is a vulnerability deliberately inserted into a piece of software to allow specific parties to bypass its protections – here, to decrypt communications – whenever they want to. When the NSA wasn’t forcibly getting companies to hand over private data, it was exploiting pre-inserted backdoors to enter and snoop around. Following 9/11, the Patriot Act made such acts lawful, validating the use of constructions like DUAL_EC_DRBG that put user security and privacy at stake to serve the more arbitrarily defined demands of national security.
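To see why a backdoored generator is so potent, consider a toy analogue of the DUAL_EC_DRBG trapdoor, transplanted for simplicity from elliptic curves to ordinary modular arithmetic. Every parameter below is made up, and the real standard truncates its outputs in ways this sketch ignores; only the shape of the trapdoor is faithful:

    # Toy Dual_EC-style DRBG over a multiplicative group mod p; NOT the
    # real elliptic-curve construction, just the shape of its trapdoor.
    p = 2**61 - 1     # a Mersenne prime (made-up parameter)
    Q = 5             # public constant, standing in for the standard's point Q
    d = 123456789     # the designer's secret trapdoor exponent
    P = pow(Q, d, p)  # public constant, standing in for point P; the
                      # relation P = Q^d is known only to the designer

    def drbg(state):
        """One step: emit an output and advance the state, as in Dual_EC."""
        output = pow(Q, state, p)
        next_state = pow(P, state, p)
        return output, next_state

    s0 = 987654321            # a victim's secret seed
    r1, s1 = drbg(s0)         # first 'random' output
    r2, _ = drbg(s1)          # second 'random' output

    # An eavesdropper holding d recovers the next state from r1 alone:
    # r1^d = (Q^s0)^d = (Q^d)^s0 = P^s0 = s1.
    recovered_s1 = pow(r1, d, p)
    assert recovered_s1 == s1
    predicted_r2, _ = drbg(recovered_s1)
    assert predicted_r2 == r2  # every future output is now predictable

Anyone without d sees statistically plausible output; anyone holding d can unwind, from a single observed value, every ‘random’ number the generator produces next – and with it any keys derived from them.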

However, the use of such weakened encryption standards is a Trojan horse: it lets in the weaknesses of those standards as well. When engineers attempt to use those standards for something as well-defined as the public interest, such weaknesses can undermine that definition. For example, one argument after Snowden’s revelations was to encrypt communications such that only the government could access them. This was quickly dismissed because it’s open knowledge among engineers that no safeguards can be placed around such ‘special’ access that would deter anyone skilled enough to hack through it.

It’s against this ‘power draws power’ scenario that a new report from the UN Office of the High Commissioner for Human Rights (OHCHR) makes a strong case – one which the influential Electronic Frontier Foundation has called “groundbreaking”. It says, “requiring encryption back-door access, even if for legitimate purposes, threatens the privacy necessary to the unencumbered exercise of the right to freedom of expression.” Some may think this verges on needless doubt, but the report’s centre of mass rests on backdoors’ abilities to compromise individual identities in legal and technological environments that can’t fully protect those identities.

On June 1, those provisions of the Patriot Act that justified the bulk collection of telephone metadata expired after the US Senate was unable to keep them going. As Anuj Srivas argues, it is at best “mild reform” that has only plucked at the low-hanging fruit – reform that rested on individuals’ privacy being violated by unconstitutional means. The provisions will be succeeded by the USA Freedom Act, which sports some watered-down notions of accountability for when organisations like the NSA trawl data.

According to the OHCHR report, however, what we really need are proactive measures. If decryption is at the heart of privacy violations, then strong encryption needs to be at the heart of privacy protection – i.e. encryption must be a human right. Axiomatically, as the report’s author, Special Rapporteur David Kaye, writes, individuals rely on encryption and anonymity to “safeguard and protect their right to expression, especially in situations where it is not only the State creating limitations but also society that does not tolerate unconventional opinions or expression.” On the same note, countries like the US, which intentionally compromises products’ security, and the UK and India, which constantly ask companies to hand over the keys to their users’ data for surveillance, are now human rights violators.

By securing the importance of strong encryption and associating it with securing one’s identity, the hope is to insulate it from fallacies in the regulation of decryption – such as those embodied by the Patriot Act and the Freedom Act. Kaye argues, “Privacy interferences that limit the exercise of the freedoms of opinion and expression … must not in any event interfere with the right to hold opinions, and those that limit the freedom of expression must be provided by law and be necessary and proportionate to achieve one of a handful of legitimate objectives.”

This fork in the debate can be traced to a wedge created around 1995. The FBI Director at the time, Louis Freeh, had said that the bureau was “in favor of strong encryption, robust encryption. The country needs it, industry needs it. We just want to make sure we have a trap door and key under some judge’s authority where we can get there if somebody is planning a crime.”

Then, in October 2014, then-FBI Director James Comey made a similar statement: “It makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact.” In the intervening decades, however, awareness of the vulnerabilities of partial encryption has increased while the law has done little to close the gaps in online protection. So Comey’s arguments are more subversive than Freeh’s.

Kaye’s thesis is from a human rights perspective, but its conclusions apply to everyone – to journalists, lawyers, artists, scholars, anyone engaged in the exploration of controversial information and with a stake in securing their freedom of expression. In fact, a corollary of his thesis is that strong encryption will ensure unfettered access to the Internet. His report also urges Congress to pass the Secure Data Act, which would prevent the US government from forcibly inserting backdoors in software to suit its needs.