Kinder, Gentler Voice-Recognition Systems Reflect Driver’s Mood

New-generation systems reaching the market in about 2009 will be able to discern the emotional state of the driver and adjust accordingly.

Barbara McClellan

October 18, 2006


A 30-year evolution in vehicle electronics technology is making unimaginable things possible.

Cars can talk to other cars; wireless infotainment products can communicate with one another; and drivers of high-end luxury models can give voice commands to their vehicles.

But what about the way the car talks back? Is it sensitive to the driver’s feelings? Does it reflect the driver’s mood? And does it recognize when it’s a bad time to speak?

Robert Sicconi of IBM says these are among key concerns as his company works with auto makers to refine voice-recognition systems before they are made available on higher-volume cars.

“It’s about making sure it works,” he says. “We don’t want the buyer to say, ‘This car is a jerk.’”

Sicconi is among 90 presenters and 8,000 attendees at this week’s Convergence 2006 Transportation Electronics Conference in Detroit.

The biennial symposium, held since the early 1970s, is a virtual candy land for engineers, technologists and executives who say advanced electronics, propulsion, materials and telematics are gaining ever-increasing importance in reinventing the automobile.

FlexRay, for instance, promises to enable future electronic “by-wire” steering, braking and other functions by speeding and simplifying the way information is transferred throughout vehicle data networks.

German-led Autosar hopes to nudge the industry closer to a worldwide software standard for automotive electronics, with potentially vast implications for the development of future “mechatronic” components that combine mechanical and electronic functionality.

Compact Power of Troy, MI, has a new $6.3 million contract to develop lithium-ion battery technology for hybrid-electric vehicle applications that focuses on battery cell and module development, including improving life cycle, abuse tolerance and low-temperature performance.

So while Sicconi injects humor into his subject, he, like others at the conference, is quite serious.

Driver distraction is a key concern in developing voice-recognition software, he says, a problem alleviated somewhat by systems that allow more conversational speech.

New-generation voice-recognition systems reaching the market in about 2009 will be able to discern the emotional state of the driver and adjust accordingly, allowing drivers to switch back and forth among personality choices to reflect their moods.

“If you just found out your best friend died, you might not want a cheerful voice speaking to you,” he says.

By 2011, new systems will be capable of knowledge management, including route planning as part of the navigation system. The next step after that will be recognizing when it’s a bad time to speak to the driver.

“We will need to gather input from the car in terms of speed, steering, braking,” Sicconi says. “The system shouldn’t distract the driver at a critical time.”
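The idea Sicconi describes, holding back speech when vehicle signals suggest the driver's hands are full, can be sketched in a few lines. The snippet below is an illustrative Python sketch only, not IBM's implementation; the signal names, thresholds and the PromptGate class are assumptions for the sake of the example.

```python
# Illustrative sketch: deferring non-urgent spoken prompts while the driver is busy.
# All signal names and thresholds are hypothetical, not taken from IBM's system.

from dataclasses import dataclass, field
from collections import deque
from typing import Deque, List


@dataclass
class VehicleState:
    speed_kph: float           # current vehicle speed
    steering_rate_dps: float   # steering-wheel rate, degrees per second
    brake_pressure_pct: float  # brake pedal application, 0-100


@dataclass
class PromptGate:
    """Queues speech output and releases it only in low-workload moments."""
    pending: Deque[str] = field(default_factory=deque)

    def driver_is_busy(self, s: VehicleState) -> bool:
        # Crude workload estimate from speed, steering and braking inputs.
        hard_braking = s.brake_pressure_pct > 40
        rapid_steering = abs(s.steering_rate_dps) > 90
        highway_maneuver = s.speed_kph > 110 and abs(s.steering_rate_dps) > 30
        return hard_braking or rapid_steering or highway_maneuver

    def say(self, message: str, state: VehicleState) -> List[str]:
        """Queue the message; return whatever is safe to speak right now."""
        self.pending.append(message)
        if self.driver_is_busy(state):
            return []                    # stay quiet at a critical moment
        spoken = list(self.pending)      # calm moment: flush the queue
        self.pending.clear()
        return spoken


if __name__ == "__main__":
    gate = PromptGate()
    busy = VehicleState(speed_kph=120, steering_rate_dps=150, brake_pressure_pct=60)
    calm = VehicleState(speed_kph=50, steering_rate_dps=5, brake_pressure_pct=0)
    print(gate.say("Traffic ahead on your route.", busy))  # [] -> prompt deferred
    print(gate.say("Recalculating route.", calm))          # both messages spoken
```

The point of the sketch is simply the gating step: the prompt is held in a queue until the vehicle signals indicate a calm moment, which is the behavior Sicconi describes.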

[email protected]
