Two Keys to Advancing Autonomous Driving in 2023
Developing a cost-efficient, mass-market approach to full autonomy calls for moving past sensor technology and applying better algorithms with overall better perception, enabled by new techniques coming to market.
January 31, 2023
The past decade has seen waves of progress and regression in autonomous driving.
Automakers have continued to prioritize R&D, yet progress remains incremental and not at a pace where Level 4 autonomy is near mass deployment.
Deployment of fully autonomous vehicles hasn’t progressed as quickly as we’d hoped, though incremental algorithm development has made advanced driver-assistance system technology and other subsystems safer in the progressive march to autonomy.
Last year saw a few major milestones in autonomous driving, including General Motors and Trimble logging 34 million miles (54.7 million km) of successful hands-free driving and Mobileye raising $861 million in its IPO after spinning out from Intel. Interest and progress in the industry persist.
As we begin 2023, we remain hopeful for more solid advancements in the self-driving world.
Until now, the industry at large has used an ADAS approach, providing semi-autonomous driving via cameras and lidar sensors. While it is cost-effective to use only cameras and basic sensors, a more robust and safer option incorporates a wider sensor suite. That is where the future lies, but the cost of deploying those sensors at scale is still too high for mass-market adoption.
So, the question remains: How do we develop a mass-market approach to full autonomy at a price point similar to today's, or even lower? The answer lies in moving past sensor technology alone and applying better algorithms that can spot pedestrians, see lane markers, account for bad weather, update automatically in real time and deliver overall better perception through new techniques coming to market.
In current ADAS technology, if a car's sensor is muddied by road grime or weather, it stops functioning and the driver must take back control of the vehicle. General Motors' Super Cruise is a good example: it is an affordable assistance mechanism that provides some level of autonomy, but comprehensive maps would make it more robust and closer to full autonomy. Keeping maps accurate in current semi-autonomous vehicles is a laborious process: specialized vehicles gather, relay and download information, then humans declutter and clean up the maps before they can be downloaded into a vehicle. By the time all of this happens, the maps could easily be outdated.
To achieve full autonomy, the promise of mass-deployable, solid-state sensors in a truly fused array must be realized, which is difficult in complex environments and all weather conditions. In addition, real-time updating of driving conditions and maps is critical: it serves as a continuous feedback loop, enabled by 5G, that keeps every vehicle aware of exactly where it is relative to other vehicles and the road.
In 2023, the industry must overcome two principal challenges if autonomy is to move to the next level. First, we need precise absolute positioning in all current GNSS-denied or -obscured environments, no matter the weather or road condition. Precise absolute positioning is defined as lane-level (10 cm [4 in.]) precision. Achieving that in all facets of a typical drive from freeway to downtown corridor and underground is essential.
Adding to this requirement is the need for Automotive Safety Integrity Level (ASIL) certification of the software, hardware, correction source and integrity management. With all parts ASIL-certified, OEMs will feel more confident the solutions can be used in mass production.
Second, to make these maps more accurate, crowdsourcing is crucial: using passenger or shared-mobility vehicles rather than vehicles dedicated solely to mapping. Maps are only as good as they are current, so a continuous stream of data from road vehicles that regularly drive the same path is critical to keeping maps current and providing complete situational awareness of a vehicle. Techniques such as map-based localization, which fuses sensor data to help derive a correct position, are paramount to that process. By combining a vehicle's GNSS positions with visual cues and odometry, map-based localization can determine where the vehicle is and perceive what is around it.
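The fusion idea described above can be illustrated with a minimal sketch: a one-dimensional Kalman filter that blends noisy GNSS fixes with wheel-odometry increments to estimate position along a lane. This is an illustrative toy, not Trimble's actual localization pipeline, and all noise values are assumed for the example.

```python
def fuse_position(gnss_fixes, odo_deltas, gnss_var=4.0, odo_var=0.01):
    """Fuse GNSS fixes (meters) with odometry increments (meters) along one axis.

    A 1-D Kalman filter: dead-reckon forward with odometry (variance grows),
    then correct with each GNSS measurement (variance shrinks).
    """
    est = gnss_fixes[0]          # initialize from the first GNSS fix
    var = gnss_var
    fused = [est]
    for z, d in zip(gnss_fixes[1:], odo_deltas):
        # Predict: advance the estimate by the odometry delta.
        est += d
        var += odo_var
        # Update: blend in the GNSS fix, weighted by relative uncertainty.
        k = var / (var + gnss_var)   # Kalman gain
        est += k * (z - est)
        var *= (1.0 - k)
        fused.append(est)
    return fused
```

Because odometry drifts slowly while GNSS is noisy but unbiased, the fused track is smoother than the raw fixes yet stays anchored to the true position, which is the property lane-level localization depends on.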
For instance, if there is construction on a certain route, there needs to be a real-time way to transmit and process that data and send it to the next vehicle on the road. Instead of relying on expensive dedicated mapping vehicles, the ultimate, cost-effective approach is to have everyday road vehicles become the mapping vehicles. Using large numbers of ordinary vehicles as mobile mapping platforms creates high-fidelity, highly usable maps in real time.
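A hypothetical sketch of how such crowdsourced updates might be aggregated: a map segment collects lane-status reports from passing vehicles and commits a change only when enough recent, independent reports agree. The thresholds and field layout here are illustrative assumptions, not any vendor's actual schema.

```python
from collections import Counter

def update_lane_status(reports, now, max_age_s=300, min_agree=3):
    """Return a consensus status per lane from recent vehicle reports.

    reports: list of (timestamp_s, lane_id, status) tuples, e.g.
             (120, 2, "closed") from a vehicle that saw lane 2 blocked.
    """
    # Drop stale reports so an old observation can't linger in the map.
    fresh = [(lane, status) for t, lane, status in reports
             if now - t <= max_age_s]
    by_lane = {}
    for lane, status in fresh:
        by_lane.setdefault(lane, []).append(status)
    consensus = {}
    for lane, statuses in by_lane.items():
        status, votes = Counter(statuses).most_common(1)[0]
        if votes >= min_agree:      # require independent agreement
            consensus[lane] = status
    return consensus
```

Requiring agreement across multiple vehicles filters out one-off sensor errors, while the freshness window lets the map revert automatically once the construction clears and reports stop arriving.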
Think of it as a version of Google Maps and how it projects traffic, compiling data from all users to show which routes are busy, when police or construction are ahead and when there might be a better route. Map-based localization (MBL) is like Google Maps, but much more: it is as precise as precise can get. It won't just find the local Starbucks; it will know which lane you need to be in to get there and, if a lane is closed for construction, it will show you which open lanes have the fewest stops.

As an industry, if we can move past the traditional way we've always thought of autonomy, we can take autonomous vehicles to the next level in the coming years. I implore all of our partners, collaborators and even competitors to change our way of thinking and embrace these new technologies that hold real promise for advancing autonomy in vehicles on the roads in 2023 and beyond.
Louis Nastro is director of on-road strategy for the Autonomy Div. of Trimble, a software, hardware and services technology company.