Why AI Won’t Turn Cars into Evil Robots

Artificial neural networks that mimic functions of a biological brain can solve problems almost impossible to program manually. That doesn’t mean they are near becoming self-aware and concluding the human race is a cancer that needs to be removed from the planet.

September 7, 2016

Terminator robot one of many monsters reflecting man’s fear of artificial intelligence, going back to Mary Shelley’s “Frankenstein.” Getty Images

A panel of top machine-learning experts will discuss the impact of artificial intelligence and deep learning in automotive at the WardsAuto User Experience Conference coming up Oct. 4 at the Suburban Collection Showcase in Novi, MI.

Artificial intelligence, whether it is called cognitive computing, deep learning or machine intelligence, is on the verge of transforming vehicle safety, driving and the overall user experience.

In the near term, the technology will give ordinary motorists superhuman powers behind the wheel, such as 360-degree vision, and will let vehicles sense and react to dangerous situations faster than humans can. It also will enable vehicles to communicate more effectively in everyday language, as IBM’s Watson and Apple’s Siri now do for a variety of tasks. Longer term, machine intelligence will enable cars to drive themselves.

Most consumers like the idea of their car becoming a friendly robot with its own intelligence. But then their brows furrow and they wonder aloud what happens if cars become “too smart” and start plotting to take over the world and destroy mankind, like Skynet and the Terminator.

It is not an entirely frivolous concern. Stephen Hawking, Bill Gates, Elon Musk and others have publicly expressed worries that AI someday could pose an existential threat to mankind.

But Andrew Ng, chief scientist at Chinese web search giant Baidu, says such concerns are beyond remote, like worrying about overpopulation and pollution on Mars before we’ve set foot on the planet.

James Kuffner, who ran Google’s robotics program before becoming chief technology officer at the Toyota Research Institute, says Western culture in particular has been fearful of technology’s impact on society and this has been reflected in popular culture, books and movies for almost 200 years.

He traces it back to Mary Shelley’s “Frankenstein,” first published in 1818.

“When she wrote that book it laid a philosophical groundwork for the relationship between people and technology and creating intelligence. It set up a fear of playing God,” Kuffner tells WardsAuto in an interview.

“The idea of the scientist Dr. Frankenstein creating another life or intelligence that then goes awry and turns against him has been a recurring theme, whether it is ‘The Matrix’ or ‘I, Robot,’ and this fear has been played out a lot in Western cultures.

“I spent many years living in Japan and they don’t yet have that innate fear of technology, and I think that’s why robotics and some of the product ideas around it have less cultural baggage,” Kuffner says.

Automakers Trying to Ease Fears

The auto industry takes consumer concerns about the growing level of machine intelligence in vehicles seriously and is trying to allay them.

When Mercedes-Benz introduced the ’17 E-Class at the North American International Auto Show in January, touting it as having the highest level of machine intelligence of any production car on the planet, it had a scientist come out to explain to the media why people have nothing to fear.

To automakers and safety advocates who see traffic fatality rates rising and know 90% of serious crashes are caused by easily avoidable driver errors, adding more intelligence to vehicles seems like a no-brainer. But some consumers, already suspicious of technology in their private and work lives, worry about things getting out of control.  

“Deep learning usually has an aura of witchcraft,” Andreas Busse, senior architect-Driver Assistance Systems, Electronics Research Lab at Volkswagen Group of America, says at the recent CAR Management Briefing Seminars.

However, Danny Shapiro, senior director-Automotive at NVIDIA, a company that makes deep-learning processors, software and related technologies, puts it in a different light. He says the kind of AI being implemented is aimed at making vehicles perform better than humans at highly specific tasks.

“Artificial intelligence is everywhere,” Shapiro says. “Deep learning is a form of AI. It’s used in health care to treat early-onset Alzheimer’s, it’s used in making wine and beer and in agriculture to grow crops more efficiently.”

“The thing about AI that we (NVIDIA) are focused on and the concept of deep learning, which is using the neural net, is that the system becomes smarter than a human at a specific task. It’s not that we’ve mastered the human brain and free thought.”

In the area of self-driving cars, NVIDIA is training guidance systems to recognize objects such as other cars, people and bikes.

“We can do that more accurately than a human can. And we can calculate the speed of the other vehicles and the trajectory of those vehicles. Essentially we have superhuman levels of performance and precision that lets us be much safer on the road, and we do this in a full 360 degrees,” Shapiro says.

“We are not able to train a vehicle to take on its own will to go do something. It’s just going to get us from point A to point B much safer than any human possibly could,” he says.
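The calculation Shapiro describes, estimating another vehicle’s speed and trajectory from successive detections, can be sketched in a few lines. This is an illustrative toy by way of example, not NVIDIA’s actual software; the class and function names and the constant-velocity assumption are the author’s.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One tracked object at one timestamp (positions in meters, time in seconds)."""
    t: float
    x: float
    y: float

def velocity(prev: Detection, curr: Detection) -> tuple[float, float]:
    """Estimate an object's velocity from two successive detections."""
    dt = curr.t - prev.t
    return (curr.x - prev.x) / dt, (curr.y - prev.y) / dt

def time_to_collision(curr: Detection, vx: float, vy: float) -> float:
    """Seconds until the object reaches the ego vehicle at the origin,
    assuming constant velocity; infinity if it is not approaching."""
    dist = (curr.x ** 2 + curr.y ** 2) ** 0.5
    # Closing speed along the line from the object toward the ego vehicle.
    closing = -(curr.x * vx + curr.y * vy) / dist
    return dist / closing if closing > 0 else float("inf")

# A car detected 30 m ahead, closing at 10 m/s.
a, b = Detection(0.0, 0.0, 30.0), Detection(0.1, 0.0, 29.0)
vx, vy = velocity(a, b)
print(round(time_to_collision(b, vx, vy), 1))  # → 2.9
```

A production system would fuse many sensors and track many objects at once, but the point stands: this is arithmetic on measurements, done faster and more precisely than a human could, not independent thought.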

In the context of thinking independently, computers still are “pretty dumb,” Toyota’s Kuffner says. They are good at doing billions of arithmetic calculations per second and managing and accessing tremendous amounts of data, which is not the same as intelligence, he says.

Humans Do Programming

Artificial neural networks that try to mimic the functions of a biological brain are starting to be built and they are getting sophisticated enough to solve problems that would have been almost impossible to program manually, Kuffner says.

But that doesn’t mean they are anywhere near becoming self-aware and concluding the human race is a cancer that needs to be removed from the planet, like Agent Smith decided in “The Matrix.”

“There is a lot of hype around what these systems are. We are so early and they are so far away from approaching levels of intelligence portrayed in popular culture it is not worth worrying about,” Kuffner says.

That doesn’t mean potential issues down the road should be ignored.

“I believe at the very highest levels of any system that the logic of the behavior should always be programmed by a human,” Kuffner says. “That means if I am building a robot to collect trash, a human will write the highest level of behavior that says ‘identify trash, pick up trash and put it in the receptacle.’ That is something I can debug and we can understand how it bounds the behavior, even though components of those behaviors may be trained by neural networks.”
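Kuffner’s trash-collecting robot illustrates a real architectural pattern: a human writes and can debug the top-level behavior, while individual skills inside it may be trained. A minimal sketch, with all names invented for illustration (the learned components are stand-in functions here):

```python
# Hand-written top-level behavior; detect_trash and pick_up stand in for
# components that, in a real robot, would be trained with neural networks.

def detect_trash(scene):
    """Stand-in for a learned detector: returns the trash items in the scene."""
    return [item for item in scene if item.get("kind") == "trash"]

def pick_up(item):
    """Stand-in for a learned grasping skill."""
    item["held"] = True
    return item

def place_in_receptacle(item, receptacle):
    item["held"] = False
    receptacle.append(item)

def collect_trash(scene, receptacle):
    """The debuggable, human-written top level Kuffner describes:
    identify trash, pick it up, put it in the receptacle."""
    for item in detect_trash(scene):
        place_in_receptacle(pick_up(item), receptacle)

scene = [{"kind": "trash", "id": 1}, {"kind": "toy", "id": 2}]
bin_ = []
collect_trash(scene, bin_)
print(len(bin_))  # → 1
```

Whatever the learned components do, the hand-written loop bounds the robot’s behavior: it can only ever identify, pick up and deposit.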

There also is a lot of discussion about how self-driving vehicles will have to be programmed with ethics and morals before they are allowed on highways. Foremost in the discussion is their ability to answer what philosophers call the “trolley problem.”

The essence of the problem is choosing the lesser of two evils in dire circumstances: If a vehicle is headed toward a trolley loaded with passengers and swerving would instead kill one bystander, can it decide swerving is the better choice?

Advanced artificial intelligence, created by human programmers, no doubt will enable future vehicles to make the right decisions in these types of scenarios, but NVIDIA’s Shapiro points out today’s relatively simple systems can solve most of this problem.

“What we are focused on is building intelligence into the vehicle to anticipate these potential scenarios and avoid them completely. Those situations usually come about because the human driver isn’t paying attention,” Shapiro says.

Ultimately consumers will embrace and not fear machine intelligence as it proves its competence in many situations, Kuffner says.

“When autopilots were introduced for domestic air travel, many people said they would never fly in a plane with an autopilot. Over time they realized that if you have high wind and zero visibility you absolutely want the computer flying. Trust comes with a proven track record of safety and reliability.”

[email protected] 

 
