Although available in vehicles for a decade, voice-recognition systems still are not used or understood by many car buyers.
At the heart of the matter is the lack of education at the new-vehicle dealership level, which translates into a lack of education among consumers, says Nuance Communications Inc.’s Peter Mahoney, vice president-worldwide marketing, speech division.
“It’s amazing how many people have had very little experience with speech technology in cars,” he says. “It’s been in cars for 10 years.”
Nuance considers itself the market leader in speech-recognition technology, supplying software not only to auto makers but also to call centers and for medical dictation. It believes education will become an increasingly pressing issue as speech technology migrates from luxury to mass-market vehicles.
That movement is in evidence at Ford Motor Co., which will begin equipping a dozen models with the Ford-Microsoft Corp. Sync system this fall. Nuance is supplying speech software to Microsoft for Sync.
“We’ve been raising our hands to the manufacturers, because we do spot checks with the dealers,” Mahoney says. “They don’t know how to use (the voice-recognition systems) – and it’s purely a training issue.”
He says Boston-based Nuance, which will offer speech solutions for up to 100 ’07 and ’08 models worldwide, continues to work with auto makers on developing training programs.
Next year, it will roll out a couple of interactive programs for dealers to support new models coming to market with Nuance software, Mahoney says.
But he believes educating dealers, and thus consumers, on the increasing array of infotainment systems goes beyond what Nuance or any one company can handle.
“It’s an issue the industry needs to deal with,” he says. “You’re going to need the ‘Geek Squad’ to demonstrate your car or, hopefully, (the infotainment systems are) going to get easier (to use), a little bit more intuitive.”
Mahoney says Nuance has discussed teaming with other speech software suppliers to raise awareness and educate the auto industry on the technology’s benefits, similar to what suppliers Continental Teves and Robert Bosch Corp. did in 2003 with the Electronic Stability Control Coalition.
“I think that’s a great idea,” he says. “We’ve talked about it some and the timing is pretty interesting now to think about it more. There clearly needs to be some more education in the overall community.”
Like ESC, voice-recognition technology took hold faster in Europe than in North America.
“There’s been some reluctance in the North American market, because there’s a feeling the North American consumer is a little more skeptical and has a higher quality expectation,” Mahoney says, adding research has shown North American car buyers get frustrated sooner than their European counterparts when it comes to interacting with technology.
“Until it gets to the point where people feel it will work all the time, they don’t want to deal with it,” he says.
Also, Nuance’s research and development center is in Aachen, Germany, Mahoney says, so it has been easier for Nuance to pilot new technology and cultivate strong relationships with European auto makers.
As for coming advancements in automotive speech-recognition technology, Mahoney cites the ability to call out a recording artist’s name and song title to access MP3 music files as a way to reduce effort and cut driver distraction. Such a function is a feature of the upcoming Ford-Microsoft Sync system.
“It’s a pain and it’s dangerous, even with the great interface Apple (Inc.) has,” Mahoney says of selecting music by using the controls of an iPod or MP3 player while driving.
Another breakthrough likely by the end of the decade is software that recognizes multiple voices, so that two or more people who drive the same car will be able to utilize its hands-free command system more easily.
The technology already is available, Mahoney says, but because of the auto industry’s long lead times it has yet to appear in vehicles.
“None of the manufacturers have implemented that yet,” he says. “We think that will happen, and you’ll have better driver adaptation over time.”
Mahoney predicts in less than five years speech software will be able to support Statistical Language Modeling technology that allows people to activate functions via less-scripted, more conversational dialogue.
“They may say something like, ‘I want to listen to the radio,’ instead of ‘Tune radio to this station,’” Mahoney explains.
This type of technology, which will require more processing power, will “come incrementally,” Mahoney predicts.
“You’re not going to see next year the car that understands what you’re saying based on a bunch of random phrases.”
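The contrast Mahoney draws between scripted commands and more conversational dialogue can be sketched in code. The following is a minimal illustration, not Nuance's actual software: a rigid scripted-grammar matcher that understands only exact phrases, next to a loose keyword-overlap matcher (all command names and keyword sets are hypothetical) that tolerates paraphrases like “I want to listen to the radio.”

```python
# Illustrative sketch only: scripted-grammar matching vs. a looser,
# conversational intent matcher. Command names and keywords are invented.

# Scripted grammar: only exact, predefined phrases are understood.
SCRIPTED_COMMANDS = {
    "tune radio to this station": "RADIO_ON",
    "call home": "DIAL_HOME",
}

def scripted_match(utterance):
    """Return an action only if the utterance matches a script exactly."""
    return SCRIPTED_COMMANDS.get(utterance.lower().strip())

# Conversational matching: score utterances against keyword sets so
# paraphrases still resolve to an intent, crudely mimicking what a
# statistical language model achieves with far more sophistication.
INTENT_KEYWORDS = {
    "RADIO_ON": {"radio", "listen", "station", "tune"},
    "DIAL_HOME": {"call", "phone", "dial", "home"},
}

def conversational_match(utterance):
    """Pick the intent whose keywords overlap the utterance most."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(scripted_match("I want to listen to the radio"))        # None
print(conversational_match("I want to listen to the radio"))  # RADIO_ON
```

A real statistical language model assigns probabilities over whole phrases rather than counting keywords, which is why Mahoney notes the approach demands more processing power than today's scripted grammars.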