Driving in an Empty Room


Let’s set a scenario, the varieties of which you can color however you like - a self-driving car is driving down a road next to a ravine when a group of school children suddenly jumps out in front of it. The car’s AI now has a decision to make - kill the children, or swerve into the ravine and kill the driver.

This hypothetical and its variants are intended to force us, as a technological culture, to confront the dangers and philosophical implications of our promised future. How do we teach computers who lives and who dies when we as humans can’t make that decision ourselves, even with perfect timing?

The problem is that the question is idiotic. I mean that in the kindest possible way - it’s the type of question about technology that tries to sound profound but is like deciding world politics with a Risk board.

Let’s take almost this exact situation without a self-driving car:

[Video: a truck brakes hard and comes to a stop just short of a child who runs into the road.]

What happened? Did the driver have to choose between the kid’s life and their own? No - the driver did exactly what a self-driving vehicle would do. It braked.

Of course, this was only possible because of revolutionary technological breakthroughs in braking systems. Had that truck been fifteen years older, we would have seen a much darker end. There wouldn’t even have been an option to swerve out of the way, no matter how responsive the driver was.

The point here is that a seemingly dull piece of technology - braking systems - suddenly makes this apparently intense philosophical question pointless. While the trolley problem, or the example above, might be a fun way to warn superficially about the dangers of technology, it is the associated technologies that come along with self-driving cars that will resolve these problems - not solving philosophy.

If you looked at the first Wright Brothers plane and then imagined it flying at 30,000 ft, would you ask whether the fast travel is worth it if some passengers will freeze to death? No. That’s stupid - you’d instead conclude, as history has shown, that you aren’t going to fly that high until you build a safe environment in which that is no longer a question.

Take an example from the film Demolition Man. A silly movie to be sure, but one piece of technology has always stood out to me - the styrofoam crash mechanism. For those who haven’t seen the movie: to Stallone’s surprise, when he crashes a car for the first time, the body completely transforms into styrofoam that pads his impact, does away with shattering glass, and allows him to quickly break free from the wreckage.

It’s a fun visual gag, but imagine our self-driving car with a similar mechanism. What does it matter, beyond the inconvenience, if you fly off the ravine when your car turns into a safe bubble? There’s no philosophical problem there, only an insurance problem.

While the barstool criticism of self-driving cars ponders how to decide how robots can best serve man, keep in mind that a car is more than its driver. Even modern vehicles do their best to mitigate harm to the driver and to those around them, regardless of the decisions of whoever is behind the wheel.