Kant is my co-driver: philosophy and driverless cars

Should autonomous vehicles have ethics programmed in? asks Alan Ryan

July 30, 2015

I’ve spent an academic year in Silicon Valley and hardly noticed it until the last moment. Many Stanford buildings bear the names of assorted Hewletts and Packards, Gateses and others, but if you are walking another generation of students through the ideas of dead white males from Thomas Hobbes to John Rawls, you welcome the ease of looking things up on the internet but aren’t in close contact with the latest thing in the information revolution.

At the very end of the year, however, I was invited to a day-long conference on “Programming Ethics in Autonomous Vehicles”. The ambiguity in the title was – I think – intended, and if it was not, it was a fine stroke of serendipity. Are we really going to program morality into cars and trucks? Car makers have been providing increasingly sophisticated computerised assistance for drivers: putting on the brakes if you are about to hit something, helping you to park, and so on, but autonomous vehicles are another matter entirely. They really drive themselves. On the roads around the Google headquarters in Mountain View, you can see Google cars, equipped with sensors and cameras and lots of kit that allows them to drive without a human driver.

Students who see them in action say that autonomous vehicles are much better drivers than the average Silicon Valley human. They get into occasional fender benders, but many fewer than the local humans, and invariably such collisions are the result of human error. I sympathise with the humans; it is hard to concentrate on driving while you’re gawping at something with a sort of small windmill on its roof. There are, no doubt, some people who want to test the autonomous vehicle’s reflexes and cut in front to do so.

This being Silicon Valley, some conference participants were lawyers interested in questions of liability; if the vehicle is truly autonomous, where does liability for an accident lie? If I run into the person in front of me, it’s very likely that I shall be held liable. Can we imagine holding a vehicle liable? The first impulse is to say, “Of course not, it’s just a machine.” But if it’s really autonomous, it’s less obvious which human agent is going to be held liable – the programmer, the car manufacturer, the people who maintain the computer network responsible for the information on which the autonomous vehicle depends? One answer is to abandon talk of liability and adopt no-fault insurance.

The argument runs into interesting philosophical territory. It’s absurd to think of autonomous vehicles as being liable for anything; that’s pure anthropomorphism. It’s very hard to resist, but we should. Programming ethics in autonomous vehicles is not about creating vehicles that can feel guilt or anxiety or read Kant of an evening. However, when we clear our heads and think about the ethical principles that should govern the way we program the decision-making of autonomous vehicles, we find that one of the most famous puzzles in recent moral philosophy is squarely in front of us. This is, appropriately enough, the “trolley problem” (“trolley” in the American sense of a tram or light rail vehicle).

The simplest version is this: you see a tram heading towards five people standing on the track; they will be killed if the tram isn’t diverted. You can divert it; however, if you divert it, it will kill someone else on the second track. Pure utilitarians treat it as obvious that you should save five people by killing one; pure Kantians usually take it as obvious that you cannot sacrifice an innocent person as a means to an end, even if that end is saving five people. Anyone concerned to see all the ramifications should read the work of Harvard philosophy professor Frances Kamm. But it is not a purely philosophical puzzle. What are you going to tell the autonomous vehicle to do if an accident is inevitable and it faces the choice between, let us say, killing five pedestrians and hitting something in such a way that it kills the “driver” – or “lead passenger”, as perhaps we should say? There are large questions here; simulations showed that early attempts to program vehicles to minimise the damage caused by a collision would have resulted in the car hitting a child in order to avoid a large box.
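To see how literally the dilemma can be rendered in code, consider the following deliberately crude sketch. Everything in it is invented for illustration – the Manoeuvre type, the harm scores, the sacrifices_bystander flag – and it is no one’s actual vehicle software; it simply contrasts a purely utilitarian rule with a Kantian-style constraint that refuses to treat a bystander’s death as the means of saving others.

```python
# A toy rendering of the trolley problem as a programming decision.
# All names and numbers are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_deaths: int        # projected fatalities if this option is chosen
    sacrifices_bystander: bool  # does it deliberately redirect harm onto someone?

def utilitarian_choice(options):
    """Pick whichever manoeuvre minimises expected fatalities."""
    return min(options, key=lambda m: m.expected_deaths)

def kantian_choice(options):
    """Rule out any manoeuvre that uses an innocent person as a means;
    among what remains, still prefer fewer deaths."""
    permissible = [m for m in options if not m.sacrifices_bystander]
    return min(permissible or options, key=lambda m: m.expected_deaths)

options = [
    Manoeuvre("stay on course", expected_deaths=5, sacrifices_bystander=False),
    Manoeuvre("swerve into bystander", expected_deaths=1, sacrifices_bystander=True),
]

print(utilitarian_choice(options).name)  # -> "swerve into bystander"
print(kantian_choice(options).name)      # -> "stay on course"
```

Even in this toy form the point stands: the ethics is carried by the one line that ranks the options, and some human being has to decide what that line says.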

Interestingly enough, people’s reactions are all over the landscape. Many people’s instinct is to demand that they can override the computer if an accident is imminent. However, this is exactly when human beings perform worst: trying to process more information than we can handle, and in a panic, is no recipe for a good outcome. Many other people take a pure utilitarian tack; we should get used to working with machines that make optimal decisions on a utilitarian basis. Even from a purely selfish point of view, we shall on average do better and live longer if we program the machinery to minimise accidents and save the most lives.

That raises many more questions, and which of them are ethical questions is itself a large one. One is whether we humans ought to try to teach ourselves to be more rational – to estimate risks in a mathematically careful way – and behave more like the autonomous vehicle, unflustered by crying children, irritated spouses or annoying drivers. It’s far from obvious that we ought to, let alone that we can. Should we instead give machines some of our quirks? That’s not obvious either. It might make drivers more comfortable with autonomous vehicles in the first instance; but how many extra accidents would we be willing to accept? And we’d have to be careful about just which quirks we gave our robotic chauffeurs: you don’t really want one that responds to your request to drive you to your favourite pub with a long harangue about the virtues of a healthy lifestyle and an insistence on taking you to the gym instead.

Alan Ryan is emeritus professor of political theory at the University of Oxford and visiting professor of philosophy at Stanford University.
