Ensuring Voice-Recognition Systems Help, Not Annoy

Systems reaching the market in about 2009 will be able to detect the emotional state of the driver and adjust accordingly, an IBM executive says.

David E. Zoia

October 16, 2006

DETROIT – Nobody wants to plunk down thousands of dollars only to discover the new car they just purchased is a jerk.

IBM Corp. says that’s a key concern as it and auto makers develop and refine vehicle voice-recognition systems.

And making sure they’ve worked out the bugs is one reason voice-recognition systems have remained a high-end feature rather than a mass-market staple, says IBM’s Roberto Sicconi.

“The reason for waiting, for not making (voice recognition) available on (higher-volume vehicles) is not the cost,” he says at a technical session here at the Convergence 2006 Transportation Electronics conference. “It’s about making sure it works. We don’t want the buyer to say, ‘This car is a jerk.’”

Sicconi says voice-recognition systems continue to evolve, with new versions bowing on ’07 vehicles doing a better job of recognizing conversational speech, so drivers no longer have to learn key phrases to operate accessories.

“Buyers didn’t like the fact they had to remember key sentences,” Sicconi says of earlier systems that did not allow as much free-flow communication as the newer devices.

Systems nearing the market also will have the ability to create multiple personas, Sicconi says, allowing drivers to switch back and forth among personality choices to reflect their moods.

“If you just found out your best friend died, you might not want a cheerful voice speaking to you,” Sicconi says. “It’s like the music you listen to. Sometimes you’re in the mood for classical music, sometimes you’re not.”

Systems reaching the market in about 2009 will be able to discern the emotional state of the driver and adjust accordingly, he says.

“It can’t be too short or too fast or too often,” Sicconi says of the way the car talks to the driver. “And it has to sense if the driver is getting agitated and change (the way it speaks).”

By 2011, systems will be capable of knowledge management, including route planning as part of the navigation system, he says.

Sicconi says driver distraction is a key concern in developing voice-recognition software. That has been alleviated somewhat by systems that allow more conversational speech, he says.

But the next step will be devices that recognize when it’s a bad time to speak to the driver.

“We will need to gather input from the car in terms of speed, steering, braking,” he says. “The system shouldn’t distract the driver at a critical time.”

Sicconi says voice recognition isn’t meant to replace pushbutton controls for vehicle accessories. But it will play a role in relieving some of the effort and distraction for drivers, such as in operating entertainment systems with myriad satellite channels and storage capacity for thousands of music files.

“When you’re searching for a specific song from 10,000, it’s easy for someone sitting on his couch, but it’s a distraction for the driver,” he says.

