The AI trust deficit predates AI

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Trust plays an important role in the public understanding of science. The excerpt above – from an article in The Conversation about whether we can trust AI, by Mark Bailey, chair of Cyber Intelligence and Data Science at the National Intelligence University, Maryland – illustrates this.

Bailey treats AI systems as “alien minds” because of their, rather than their makers’, inscrutable purposes. They are inscrutable not just because they are obscured but because, even under scrutiny, it is difficult to determine how an advanced machine-based logic makes its decisions.

Setting aside questions about the extent to which such a claim is true, Bailey’s argument as to the trustworthiness of such systems can be stratified by the people to whom it is addressed: AI experts and non-AI-experts. It is with respect to the latter that I take limited issue with Bailey’s contention. To non-AI-experts – which I take to be the set of all people ranging from those not trained as scientists (in any field) to those trained as such but who aren’t familiar with AI – the question of trust is more wide-ranging. They already place a lot of their trust in (non-AI) technologies that they don’t understand, and probably never will. Should they rethink their trust in these systems? Or should we take their trust in these systems to be ill-founded and requiring ‘improvement’?

Part of Bailey’s argument is that there are questions about whether we can or should trust AI when we don’t understand it. Aside from AI in a generic sense, he uses the example of self-driving cars and a variation of the trolley problem. While these technologies illustrate his point, they also give the impression that AI systems failing to make decisions aligned with human expectations, and struggling to incorporate ethics, are problems restricted to high technologies. They aren’t. The trust deficit vis-à-vis technology predates AI. Many of the technologies that non-experts trust but which don’t repay that trust (so to speak) are not high-tech; examples from India alone include biometric scanners (for Aadhaar), public transport infrastructure, and mechanisation in agriculture. This is because people’s use of any technology beyond their ability to understand it is mediated by social relationships, economic agency, and cultural preferences, and not by technical know-how.

For the layperson, trust in a technology is really trust in some institution, individuals or even some organisational principle (traditions, religion, etc.), and this is as it should be – perhaps even for the more sophisticated AI systems of the future. Many of us will never fully understand how a deep-learning neural network works, nor should we be expected to, but that doesn’t automatically make AI systems untrustworthy. I expect to be able to trust scientists in government and in respectable scientific institutions to discharge their duties in a public-spirited fashion and with integrity, so that I can trust their verdict on AI, or anything else in a similar vein.

Bailey also writes later in the article that some day, AI systems’ inner workings could become so opaque that scientists may no longer be able to connect their inputs with their outputs in a scientifically complete way. According to Bailey: “It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible.” This is fair, but it also misses the point a little by limiting the entities that can intervene to individuals and built-in technical safeguards, like working an ethical ‘component’ into the system’s decision-making framework, instead of taking a broader view that keeps in the picture the public institutions, including policies, that will be responsible for translating the AI systems’ output into public welfare. Even today in India, that’s what’s failing us – not the technologies themselves – and therein lies the trust deficit.

Featured image credit: Cash Macanaya/Unsplash.

A future for driverless cars, from a limbo between trolley problems and autopilots

By Anuj Srivas and Vasudevan Mukunth

What’s the deal with everyone getting worried about artificial intelligence? It’s all the Silicon Valley elite seem willing to be apprehensive about, and Oxford philosopher Nick Bostrom seems to be their patron saint, along with his book Superintelligence: Paths, Dangers, Strategies (2014).

Even if Big Data seems like it could catalyze things, they could be overestimating AI’s advent. But thanks to Google’s much-watched breed of driverless cars, conversations on regulation are already afoot. This is the sort of subject that would benefit from its technology being better understood, because it isn’t immediately apparent. To make matters worse, this is also a period when not enough data is available for everyone to scrutinize the issue, even as some opinion-mongers are distorting the early hints of a debate with their own desires.

In an effort to bypass this, let’s say things happen like they always do: Google doesn’t ask anybody and starts deploying its driverless cars, and the law is then forced to shape itself around that. True, this isn’t something Google can force on people because it’s part of no pre-existing ecosystem – it can’t force participation the way it did with Hangouts. Yet the law isn’t prohibitive either.

In Silicon Valley, Google has premiered its express Shopping service – delivering purchases made online within three hours of the order being placed, at no extra cost. No extra cost because the goods are delivered using Google’s driverless cars, and the service is a test-bed for them, where they get to ‘learn’ what they will. But when it comes to buying these cars, who will? What about insurance? What about licenses?

A better trolley problem

It’s been understood for a while that the problem here is liability, summarized in many ways by the trolley problem. There’s something unsettling about loss of life due to machine failure, whereas it’s relatively easier to accept when the loss is the consequence of human hands. Theoretically it should make no difference – planes, for example, are flown more by computers these days than by a living, breathing pilot. Essentially, you’re trusting your life to the computers running the plane. And when driverless cars are rolled out, there’s ample reason to believe they will have a similarly low chance of failure as aircraft flown by computer pilots. But we could be missing something through this simplification.

Even if we’re laughably bad at it at times, having a human behind the wheel makes a car’s behaviour predictable, sure, but more importantly it makes liability easier to figure out. The problem with a driverless car is not that we’d doubt its logic – the logic could be perfect – but that we’d doubt what that logic dictates. A failure right now is an accident: a car ramming into a wall, a pole, another car, another person, etc. Are these the only failures, though? A driverless car does seem similar to autopilot, but we must be concerned about what its logic dictates. We consciously say that human decision-making skills are inferior, that we can’t be trusted. Though that may be true, we cross an epistemological threshold when we say so.

Perhaps the trolley problem isn’t well thought out. The problem with driverless cars is not about five lives versus one life; that’s an utterly human problem. The updated problem for driverless cars would be: should the algorithm look to save the passengers of the car or should it look to save bystanders?

And yet even this updated trolley problem is too simplistic. Computers and programmers already make these kinds of decisions on a daily basis, by choosing, for instance, at what moment an airbag should deploy – especially considering that an airbag deployed unnecessarily can also grievously injure a human being.

Therefore, we shouldn’t fall into a Frankenstein complex where our technological creations are automatically assumed to be doing evil things simply because they have no human soul. It’s not a question of “it’s bad if a machine does it and good if a human does it”.

Who programs the programmers?

And yet, the scale and the moral ambiguity are pumped up to a hundred when it comes to driverless cars. Things like airbag deployment can take refuge in physics and statistics – they are usually seen in that context. For driverless cars, however, specific programming decisions will be forced to confront morally ambiguous situations, and it is here that the problem starts. If an airbag deploys unintentionally or wrongly, it can always be explained away as an unfortunate error, accident or freak situation – or, more simply, with the fact that we can’t program airbags to deploy on a case-by-case basis. A driverless car, however, can’t take refuge behind statistics or simple physics when it is confronted with its trolley problem.

There is a more interesting question here. If a driverless car has to choose between a) running over a dog, b) swerving to miss the dog and thereby hitting a tree, or c) freezing and doing nothing, what will it do? It will do whatever the programmer tells it to do. Earlier we had the choice, depending on our own moral compass, as to what we should do. People who like dogs wouldn’t kill the animal; people who cared more about their car would kill the dog. So, who programs the programmers?
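To make that concrete, here is a minimal, hypothetical sketch of what ‘whatever the programmer tells it to do’ could look like. The names, rules and scenario are invented for illustration – no real autonomous-driving stack is this simple – but the point stands: the moral priority is an explicit rule that someone wrote down in advance.

```python
# Hypothetical sketch: the 'moral' choice is not emergent, it is a rule someone
# writes down ahead of time. All names and rules here are invented for
# illustration; no real autonomous-driving system works this simply.
from dataclasses import dataclass


@dataclass
class Hazard:
    kind: str       # e.g. "dog", "pedestrian", "tree"
    is_human: bool  # does this hazard involve a person?


def choose_manoeuvre(ahead: Hazard, swerve_target: Hazard) -> str:
    """Return the pre-programmed response when something appears in the car's path."""
    if ahead.is_human:
        return "swerve"          # avoid the person, accept damage to the car
    if swerve_target.is_human:
        return "brake_straight"  # never swerve into a person
    return "brake_straight"      # animal vs. tree: protect the occupants


# The dog-versus-tree dilemma from the paragraph above:
print(choose_manoeuvre(Hazard("dog", False), Hazard("tree", False)))
# -> "brake_straight": the dog loses, because that is what was written down.
```

Whether that last rule reads ‘protect the occupants’ or ‘spare the animal’ is precisely the choice that used to sit with the driver’s moral compass – and now sits with whoever writes, and approves, that line.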

And as with the simplification to a trolley problem, comparing autonomous cars to the autopilot on board an aircraft is similarly short-sighted. In his book Normal Accidents, sociologist Charles Perrow talks about nuclear power plant technology and its implications for insurance policy. NPPs are packed with redundant safety systems. When accidents don’t happen, these systems make up a bulk of the plant’s dead weight, but when an accident does happen, their failure is often the failure worth talking about.

So, even as the aircraft is flying through the air, control towers are monitoring its progress, the flight data recorders act as a deterrent against complacency, and simply the cost of one flight makes redundant safety systems feasible over a reasonable span of time.

Safety is a human thing

These features together make up the environment in which autopilot functions. An autonomous car, on the other hand, doesn’t inspire the same sense of being in secure hands. In fact, it’s like an economy of scale working the other way. What safety systems kick in when the ghost in the machine fails? As Maria Konnikova pointed out in The New Yorker in September 2014, maneuvering an aircraft can be increasingly automated. The problem arises when something about it fails and humans have to take over: we won’t be able to take over as effectively as we think we can, because automation encourages our minds to wander, to not pay attention to the differences between normalcy and failure. As a result – to continue the metaphor – a ‘redundancy of airbags’ is encouraged.

In other words, it would be too expensive to include all these foolproof safety measures in driverless cars, but at the same time they ought to be included. And this is why the first ones likely won’t be owned by individuals. The best way to introduce them would be through taxi services like Uber, effectuating communal car-sharing with autonomous drivers. In a world of driverless cars, we may not own the cars themselves, so a company like Uber could internalize the costs involved in producing that ecosystem, and having the cars around in bulk makes safety redundancies feasible as well.

And if driverless cars are being touted as the future, owning a car could become a thing of the past, too. The thrust of the digital economy has been to share and rent rather than to own, for pretty much everything; only essentials like smartphones are still owned. Look at music, business software, games, rides (Uber), even apartments (Airbnb). Why not autonomous vehicles?