Road Ahead

Autonomous Cars Will Need Ethics, Morals and NASCAR Mode


It is easy to make autonomous vehicles drive in a more virtuous manner than humans, but this could make them vulnerable as they mix with conventional cars and drivers on the road.

Imagine you’re riding in an autonomous car down a narrow two-lane road with a double-yellow center line. Suddenly you come upon a stalled car in your lane. How long does the self-driving car wait before deciding it is okay to break the law and cross the lines to pass?
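That waiting decision can be sketched as a simple rule: hold position behind the stalled car, and permit crossing the double-yellow line only after some wait threshold has passed and the oncoming lane is verified clear. This is a minimal illustration only; the threshold value and function names are assumptions, not drawn from any real autonomous-driving system.

```python
# Hypothetical sketch of the stalled-car decision: the rule against
# crossing a double-yellow line is relaxed only after the vehicle has
# waited past a threshold AND the oncoming lane is confirmed clear.

WAIT_THRESHOLD_S = 10.0  # assumed tuning parameter, not from the article


def may_cross_double_yellow(seconds_blocked: float,
                            oncoming_lane_clear: bool) -> bool:
    """Return True once waiting has cost more than the rule-break."""
    return seconds_blocked >= WAIT_THRESHOLD_S and oncoming_lane_clear
```

In practice the threshold would itself be a design judgment of exactly the kind the article describes: how long is too long to obey the letter of the law?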

Here’s an even tougher scenario: An out-of-control trolley with five people onboard is directly in the robot car’s path. The only way to avoid it is to swerve and run down a pedestrian. Does the autonomous car choose the lesser of two evils, or smash into the trolley?

These and thousands of similar ethical, moral and legal questions are being considered by engineers and programmers as they develop autonomous vehicles and try to take human error out of the driving equation. The effort underscores how difficult it will be to go from vehicles with autonomous features such as automatic emergency braking to cars that truly drive themselves.

So far, the driving style of Google’s autonomous cars is described by observers as that of an overly cautious elderly person: safe, but not able to avoid every mishap. The company says its self-driving cars have been involved in about a dozen crashes over six years, none of which were their fault.


“We found very quickly that engineers and philosophers can talk to each other much better than we ever expected,” said Chris Gerdes, director of Stanford University’s Center for Automotive Research, speaking at the recent TU-Automotive conference in suburban Detroit.

“We can take frameworks that had been developed in philosophy and map them to the actual software we were putting in the car.”

Gerdes is addressing complex autonomous driving issues by bringing philosophers, engineers and programmers together at Stanford to answer questions such as how to determine when the desire for safety outweighs the need for legality.

The trolley example is a famous topic among philosophers and ethicists known as “The Trolley Car Problem,” which deals with the weighing of consequences to come up with the best result. In practice, it is less about life and death and more about when it makes sense to cross double yellow lines and such. “Automated vehicle ethics revolve around trolley-car problems,” Gerdes says.

An autonomous vehicle’s “morals” are based on what are called deontological ethics. These are like the 10 Commandments, Gerdes says, and focus on what the vehicle should not do, such as speeding or driving through yellow lights. All these wholesome values promise to make autonomous vehicles far safer than today’s cars and trucks, but they still do not fully prepare ultra-virtuous self-drivers to coexist with the millions of human drivers that will dominate roads for decades. And this is especially true in high-stress situations.
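The split Gerdes describes can be pictured as deontological rules acting as hard constraints a planner checks before executing a maneuver, with one explicit consequentialist exception for when safety outweighs legality. All names here are illustrative assumptions, not from Stanford's work or any real vehicle software.

```python
# Hedged sketch: deontological "thou shalt not" rules as hard
# constraints on a candidate maneuver, with a single explicit
# exception when a collision is otherwise unavoidable.

RULES = {
    "no_speeding": lambda m: m["speed"] <= m["speed_limit"],
    "no_crossing_double_yellow": lambda m: not m["crosses_center_line"],
}


def maneuver_allowed(maneuver: dict,
                     collision_unavoidable_otherwise: bool) -> bool:
    """Check every rule; permit a rule-break only to avoid a crash."""
    if collision_unavoidable_otherwise:
        return True  # in this sketch, safety outweighs legality
    return all(check(maneuver) for check in RULES.values())
```

The interesting engineering work, of course, is everything this sketch hides: deciding when a collision really is "otherwise unavoidable," and which rules may ever be broken.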

“A Google car would never be able to make a left turn, never be able to pull away from the curb and into traffic, because aggressive taxi drivers would quickly learn to exploit its algorithms,” Wall Street Journal writer Holman Jenkins points out in a recent commentary.

Why would a cab driver bully an autonomous taxi? You mean, aside from the fact it is designed to eliminate his job?

Unfortunately, Gerdes does not spend much time talking about the need for what I will call a NASCAR mode: an algorithm that directs the car to arbitrarily play chicken with aggressive human drivers and swap paint occasionally with conventional taxis, just to remind everyone it is a robot that does not know fear or pain. It should be a mandatory feature for all self-driving vehicles transporting anyone in New York or other large cities.

It’s important to send individuals out into the world with strong ethics and morals, but they still won’t survive unless they know how to demand respect and defend themselves if necessary. That goes for people, and soon, robot cars.

Discuss this Blog Entry (4)

on Jul 9, 2015

Interesting and somewhat counterintuitive perspective. My driving instructor from many decades ago taught all his students to "take their place" under the light in the intersection when making a left turn so even with heavy traffic, once the light changed the person would be able to turn left after the oncoming traffic cleared the intersection. Some similar "aggressive" heuristics will be needed for fully autonomous vehicles no doubt, but I think we will see a lot of assistance before we get to fully automatic and the transition will be more organic and gradual than most of the literature is supposing. The newest crop of luxury vehicles is incorporating more and more driver assist technology and that trend will continue. The move to full automatic after years of heavy assist will be less dramatic and traumatic than the kind of huge switch where one day everyone is driving themselves and the next day the car does everything.

on Jul 10, 2015

I totally agree. The forecasts of some futurists predicting millions of totally autonomous vehicles on the road in 10 or 15 years require us to assume there will be fundamental changes in human nature (suddenly nobody aspires to own and drive a car and everyone trusts machines with their children's lives) and politics (politicians will agree to fund massive infrastructure improvements to support V2V and V2I). I don't see it, at least not in the U.S., where politicians cannot even figure out how to find the money to fix potholes.

on Jul 20, 2015

It goes further: who gets to decide which morals come into play? There are countries in the world where individual life is not held in high regard. Think of the Middle East: does your car run down two women or one man?
In the aggressive scenario, who pays for the damage, the company running the autonomous car or the manufacturer? I suspect the lawyers will go after the one with money, probably the manufacturer. So I don't think this mode will ever get programmed without some strong laws determining liability limitations.

on Jul 23, 2017

The comment about attorneys and politicians is spot on. Since most politicians are attorneys, nothing will get done. The liability issue is huge. The country is broke, so no new infrastructure will be put in place. So all this talk about autonomous cars is just talk. Nice to think about. It will happen in a few small countries. But not here. We just have too many issues here that will not get solved and those impede the advance of autonomous vehicles.


What's Road Ahead?

Blogs with an emphasis on technology, design and suppliers.


Drew Winter

Drew Winter is Editor-in-Chief of WardsAuto World magazine and a Senior Editor at He has won numerous awards for his work in both print and digital media and has been...

Tom Murphy

Tom Murphy is executive editor of WardsAuto World magazine, with an emphasis on technology and suppliers. He leads selection of the Ward’s 10 Best Engines and Ward’s 10 Best Interiors...