So many cynical ads on TV

I wrote about cynical ads airing on Indian cable TV a while ago. Since then I’ve started to notice more such ads and thought it might be useful to maintain a running list.

  1. Rapido – Don’t bother asking the government to improve public transport; instead, race to the bottom with a form of transport that makes using Indian roads feel like a circle of hell.
  2. PhonePe insurance – Easy bike insurance, so easy that you can get it when a cop catches you, so maybe don’t bother until then. [video]
  3. Fogg – A man not wearing perfume is a deal-breaker, for no discernible reason: the problem can’t be body odour, since that is never discussed. [video]
  4. PharmEasy – Don’t leave the house, give the app all your medical info, get deliveries at a discount, and don’t leave the house. [video]
  5. Swiggy Instamart – Order and expect deliveries in minutes, to the detriment of “delivery executives” contending with terrible weather, traffic, errant motorists, foul air, etc. (One of the first ads Swiggy put out showed a little girl throwing a tantrum and the father appeasing her by ordering whatever she wanted, and having it delivered almost right away. Swiggy subsequently took this ad down from YouTube and cable.)
  6. Voltas AC – Why go to places with greenery or complain about bad air where you are when you can install this AC and get good air right in your living room? [video]
  7. Vimal Elaichi – Four Padma awardees – Amitabh Bachchan, Ajay Devgan, Shah Rukh Khan, Akshay Kumar – and Ranveer Singh in surrogate advertisements for chewing tobacco. Bachchan and Kumar pulled out after criticism. [video]
  8. Sony Ten – Ranbir Kapoor orders a group of English cricket fans to chant “India jeetega” and BMKJ on pain of death, implied when he slams a giant axe onto the table in front of them. [video]
  9. Uber – “#RentalHealthDay” for you to skip the stress of driving because, for an astonishingly small fee, another person will assume that stress for you and undermine their well-being. [video]
  10. Star Sports – While its ads for later matches were more sedate, its ad for the India-Pakistan T20 World Cup match packed in machismo and some mild emotional blackmail to whip up fans’ frenzy. [more here]
  11. Manyavar – Ranveer Singh, in a fancy house, smiles and says, “Diwali is coming, you’re expected to be prepared” – an ad for rich brats setting off loud, noxious crackers, while the harm the rest of us suffer is blamed on our not being prepared. [video]
  12. Bose – Amazing noise-cancelling headphones for rich people, so they can enjoy just the light of the firecrackers they’re shown setting off in the ad, without the noise.

To be continued…

A future for driverless cars, from a limbo between trolley problems and autopilots

By Anuj Srivas and Vasudevan Mukunth

What’s the deal with everyone getting worried about artificial intelligence? It’s all the Silicon Valley elite seem willing to be apprehensive about, and Oxford philosopher Nick Bostrom seems to be its patron saint, thanks to his book Superintelligence: Paths, Dangers, Strategies (2014).

Even if Big Data seems like it could catalyze things, the Valley could be overestimating AI’s advent. But thanks to Google’s much-watched fleet of driverless cars, conversations about regulation are already afoot. This is the sort of subject that could benefit from its technology being better understood, because the issues aren’t immediately apparent. To make matters worse, not enough data is available yet for everyone to scrutinize the issue, while some opinion-mongers are already distorting the early hints of a debate with their own desires.

In an effort to bypass this, let’s say things happen like they always do: Google doesn’t ask anybody, starts deploying its driverless cars, and the law is forced to shape itself around that. True, this isn’t something Google can force on people, because driverless cars are part of no pre-existing ecosystem; it can’t compel participation like it did with Hangouts. Yet the law isn’t prohibitive either.

In Silicon Valley, Google has premiered its express Shopping service – delivering purchases made online within three hours of the order being placed, at no extra cost. No extra cost because the goods are delivered using Google’s driverless cars, and the service is a test-bed for them, where they get to ‘learn’ what they will. But when it comes to buying them, who will? What about insurance? What about licenses?

A better trolley problem

It’s been understood for a while that the problem here is liabilities, summarized in many ways by the trolley problem. There’s something unsettling about loss of life due to machine failure, whereas it’s relatively easier to accept when the loss is the consequence of human hands. Theoretically it should make no difference: planes, for example, are flown more by computers these days than by a living, breathing pilot. Essentially, you’re trusting your life to the computers running the plane. And when driverless cars are rolled out, there’s ample reason to believe they will have a failure rate as low as that of computer-piloted aircraft. But we could be missing something through this simplification.

Even if we’re laughably bad at it at times, having a human behind the wheel makes driving predictable, sure, but more importantly it makes liability easier to figure out. The problem with a driverless car is not that we’d doubt its logic – the logic could be perfect – but that we’d doubt what that logic dictates. A failure right now is an accident: a car ramming into a wall, a pole, another car, another person. But are these the only failures? A driverless car does seem similar to autopilot, yet what should concern us is what its logic dictates. When we consciously say that human decision-making is inferior, that we can’t be trusted – true as that is – we cross onto new epistemological ground.

Perhaps the trolley problem isn’t well thought out. The problem with driverless cars is not about five lives versus one; that’s an utterly human problem. The updated problem for driverless cars would be: should the algorithm look to save the passengers of the car or should it look to save bystanders?

And yet even this updated trolley problem is too simplistic. Computers and programmers already make these kinds of decisions on a daily basis – by choosing, for instance, at what moment an airbag should deploy, especially considering that an unnecessarily deployed airbag can itself grievously injure a human being.
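The airbag example can be made concrete. A deployment controller is, at bottom, a programmer-chosen threshold over sensor readings. The numbers and names below are entirely hypothetical – no real system is this simple – but the shape of the decision is real: someone decided, in advance, where the line sits.

```python
# Hypothetical sketch of a threshold-based airbag controller.
# All threshold values are illustrative assumptions, not from any real system.

DECEL_THRESHOLD_G = 20.0   # deceleration (in g) above which a crash is assumed
MIN_SPEED_KMH = 15.0       # below this speed, deployment may do more harm than good

def should_deploy(deceleration_g: float, speed_kmh: float) -> bool:
    """A programmer's judgement frozen into code: deploy only when the
    crash looks severe enough that the airbag's own risk is worth taking."""
    return deceleration_g >= DECEL_THRESHOLD_G and speed_kmh >= MIN_SPEED_KMH

print(should_deploy(35.0, 60.0))  # severe crash at speed -> True
print(should_deploy(35.0, 5.0))   # parking-lot bump -> False
```

The physics does the explaining here: if the thresholds turn out wrong in a freak case, the defence is statistical, not moral.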

Therefore, we shouldn’t fall into a Frankenstein complex where our technological creations are automatically assumed to be doing evil things simply because they have no human soul. It’s not a question of “it’s bad if a machine does it and good if a human does it”.

Who programs the programmers?

And yet, the scale and the moral ambiguity are pumped up to a hundred when it comes to driverless cars. Decisions like airbag deployment can take refuge in physics and statistics – they are usually seen in that context. If an airbag deploys unintentionally or wrongly, it can always be explained away as an unfortunate error, accident or freak situation – or, more simply, by saying we can’t program airbags to deploy on a case-by-case basis. For driverless cars, however, specific programming decisions will be forced to confront morally ambiguous situations, and it is here that the problem starts: a driverless car can’t take refuge behind statistics or simple physics when it is confronted with its trolley problem.

There is a more interesting question here. If a driverless car has to choose between a) running over a dog, b) swerving to miss the dog and thereby hitting a tree, and c) freezing and doing nothing, what will it do? It will do whatever the programmer tells it to do. Earlier, we had the choice, depending on our own moral compass: people who like dogs wouldn’t kill the animal; people who cared more about their car would. So, who programs the programmers?
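Spelled out as code, the point becomes stark: the “moral compass” is now a ranking the programmer writes down in advance. Everything below – the option names, the harm scores, the weights – is a hypothetical illustration, not any real vendor’s policy, but some such table has to exist somewhere:

```python
# Hypothetical sketch: the driver's moral choice becomes a programmer's cost table.
# All options, harm scores and weights are illustrative assumptions.

OPTIONS = {
    "run_over_dog":     {"harm_to_animal": 1.0, "harm_to_passengers": 0.0},
    "swerve_into_tree": {"harm_to_animal": 0.0, "harm_to_passengers": 0.7},
    "brake_and_freeze": {"harm_to_animal": 0.8, "harm_to_passengers": 0.1},
}

# These two weights ARE the moral compass. A dog-lover's firmware and a
# car-lover's firmware would differ only in these numbers.
WEIGHT_ANIMAL = 0.3
WEIGHT_PASSENGERS = 1.0

def choose(options: dict) -> str:
    """Pick the option with the lowest weighted expected harm."""
    def cost(name: str) -> float:
        o = options[name]
        return (WEIGHT_ANIMAL * o["harm_to_animal"]
                + WEIGHT_PASSENGERS * o["harm_to_passengers"])
    return min(options, key=cost)

print(choose(OPTIONS))  # with these weights: "run_over_dog"
```

Change the weights and the car’s “instinct” changes with them – which is exactly why the question of who sets them matters.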

And as with the simplification to a trolley problem, comparing autonomous cars to autopilot on board an aircraft is similarly short-sighted. In his book Normal Accidents, sociologist Charles Perrow discusses nuclear power plant technology and its implications for insurance policy. NPPs are packed with redundant safety systems. When accidents don’t happen, these systems make up a bulk of the plant’s dead weight; when an accident does happen, their failure is often the failure worth talking about.

So, even as the aircraft is flying through the air, control towers are monitoring its progress, the flight data recorders act as a deterrent against complacency, and simply the cost of one flight makes redundant safety systems feasible over a reasonable span of time.

Safety is a human thing

These features together make up the environment in which autopilot functions. An autonomous car, on the other hand, doesn’t inspire the same sense of being in secure hands. In fact, it’s like an economy of scale working the other way: what safety systems kick in when the ghost in the machine fails? As Maria Konnikova pointed out in The New Yorker in September 2014, manoeuvring an aircraft can be increasingly automated; the problem arises when something about it fails and humans have to take over. We won’t take over as effectively as we think we can, because automation encourages our minds to wander, to stop registering the difference between normalcy and failure. As a result, a ‘redundancy of airbags’ is encouraged.

In other words, it would be too expensive to build all these foolproof safety measures into driverless cars – and yet they ought to be there. This is why the first driverless cars likely won’t be owned by individuals. The best way to introduce them would be through taxi services like Uber, effectuating communal car-sharing with autonomous drivers. In a world of driverless cars, we may not own the cars themselves; a company like Uber could internalize the costs of producing that ecosystem, and having the cars around in bulk makes safety redundancies feasible as well.

And if driverless cars are being touted as the future, owning a car could become a thing of the past too. The thrust of the digital economy has been to share and rent rather than to own, for pretty much everything; only essentials like smartphones are still owned. Look at music, business software, games, rides (Uber), even apartments (Airbnb). Why not autonomous vehicles?