For the people who develop self-driving cars—the software engineers, the hardware tinkerers, the welders and the bumper-affixers, the C-suite execs and the marketing folks paid to sell it all—the rest of the world is a bit like a kid-crowded backseat. Are we there yet? the globe asks.
Sometimes, the public is excited, because autonomous vehicles promise to, perhaps, one day banish dangerous drunk, drowsy, and distracted drivers in favor of precise machines. (Nearly 40,000 Americans died on the roads last year, and the National Highway Traffic Safety Administration attributes about 90 percent of those deaths to human error.) Sometimes, the public is fearful, because loosing new technology on public streets can be frightening. Just last month, a self-driving Uber testing in the Phoenix metro area struck and killed a woman as she crossed the street.
Either way, people want to know when autonomous vehicles will get here, when they will be ready. Here’s the unsatisfying but correct answer: never. “The technology is constantly being updated,” says Nidhi Kalra, a roboticist who co-directs the Rand Corporation’s Center for Decision Making Under Uncertainty. “Sometimes we will talk about it as if, ‘We have this self-driving car, we have this product.’ But with software updates, there’s a new vehicle every week.”
This is what differentiates the autonomous vehicle from even the most advanced cars rolling off the production lines in places like Detroit: so. much. software. More than half a million lines of code will power the various systems and algorithms that could one day help self-driving cars go anywhere. That includes localization systems, overlaid with high-definition maps, to help the vehicles understand where they are. And perception systems, which help vehicles determine exactly what’s going on around them. (Is that really a person? Should I expect her to walk in front of the vehicle?) And planning systems, which synthesize all that info and actually chart the vehicle’s journey from this intersection to that one. Oh, and the control software that actually makes the thing move without a foot to push a gas pedal or a hand to guide a steering wheel.
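To make the hand-offs concrete, here is a toy sketch—emphatically not any company’s real stack, with all names and logic invented for illustration—of how those subsystems might pass data to one another on each control cycle: localize, perceive, then plan.

```python
# Toy sketch of an autonomy pipeline's data flow (illustrative only):
# localization -> perception -> planning.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pose:
    x: float        # meters, in the map frame
    y: float
    heading: float  # radians

@dataclass
class Obstacle:
    position: Tuple[float, float]
    is_pedestrian: bool
    likely_to_cross: bool  # perception's guess about intent

def localize(gps_fix: Tuple[float, float]) -> Pose:
    """Stand-in for matching sensor data against a high-definition map."""
    return Pose(x=gps_fix[0], y=gps_fix[1], heading=0.0)

def perceive(raw_detections: List[dict]) -> List[Obstacle]:
    """Stand-in for classifying what the sensors see."""
    return [Obstacle(position=d["pos"],
                     is_pedestrian=d["kind"] == "person",
                     likely_to_cross=d.get("moving_toward_road", False))
            for d in raw_detections]

def plan(pose: Pose, obstacles: List[Obstacle]) -> str:
    """Stand-in for charting a path; here, just a high-level decision."""
    if any(o.is_pedestrian and o.likely_to_cross for o in obstacles):
        return "yield"
    return "proceed"

# One cycle of the loop: no foot on a pedal, no hand on a wheel.
pose = localize((10.0, 4.0))
obstacles = perceive([{"pos": (12.0, 5.0), "kind": "person",
                       "moving_toward_road": True}])
print(plan(pose, obstacles))  # a pedestrian may cross, so: "yield"
```

Every one of those stand-in functions is, in a real vehicle, a sprawling subsystem of its own—which is exactly why the code keeps changing.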
There’s a reason experts are softly backpedaling expectations on autonomous vehicle tech—this stuff is complicated. Add in weather, terrain, and car cultures that differ from city to city, and you can see why companies like Waymo are only testing in specific places. (Ever heard of a Pittsburgh left?) Testing everywhere would be nigh-impossible. And just like your iPhone, your Snap app, or your Tesla, these cars have code that will get updated, and updated a lot.
“Any product is going to be improved over time,” says Mike Wagner, co-founder and CEO of Edge Case Research, which helps robotics companies build more robust software. “That’s life-cycle maintenance in any system.”
Now, this shouldn’t necessarily be a scary prospect. If, say, Waymo wants to expand a (theoretical) taxi service from this neighborhood in Atlanta to that neighborhood in Atlanta, it will need to update its software. If, say, General Motors wants to start offering autonomous vehicle riders the chance to make mid-trip pit stops at Starbucks, that’s a software update, too. If, say, any autonomous vehicle built five years ago wants to work today, it needs an upgrade—there will be new car models to recognize, new traffic patterns to negotiate, maybe new, climate-changed weather patterns to contend with.
“The environment isn’t static,” says Forrest Iandola, the CEO of the startup DeepScale, which builds perception systems. “Even if you, in theory, have a perfect system for today in a certain location, that becomes stale.”
Vehicles will also constantly encounter new situations on the roads, and contend with obstacles engineers might never have dreamed of. “As soon as you turn any sensor to face the outside environment, the number of different things it could see is on the order of the number of permutations of atoms you could see in the universe,” says Iandola. A bunch of tigers escaped the zoo? Time to train the self-driving software on tiger images—and update.
Then there are the scarier fixes, the safety-related updates that need to go through extremely rigorous validation processes before they’re pushed to robot cars. This is new territory for automotive engineers, even the people who validate the software in cars driven around by humans today. “Consider automatic emergency braking or adaptive cruise control. That clearly has software,” says Kalra. “But these are pretty limited algorithms: They’re handwritten, they’re provable.” With self-driving cars, the scale of the systems, and the way they interact with each other, makes perfecting updates much harder.
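To see what Kalra means by “handwritten” and “provable,” consider a deliberately simplified emergency-braking rule—a sketch, with an assumed threshold, nothing like a production AEB system: brake when the time to collision drops below a fixed number of seconds. You can inspect every line and reason about every case directly, which is precisely what sprawling, interacting learned systems don’t allow.

```python
# A deliberately simplified illustration of a "handwritten, provable"
# safety rule: brake when time-to-collision falls below a threshold.
# Real automatic emergency braking is far more involved; the threshold
# below is an assumption chosen for the example.

BRAKE_THRESHOLD_S = 2.0  # assumed threshold, in seconds

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:  # not closing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float) -> bool:
    """Provable property: brakes whenever gap/speed < threshold."""
    return time_to_collision(gap_m, closing_speed_mps) < BRAKE_THRESHOLD_S

# 20 m behind a car while closing at 15 m/s: about 1.3 s to impact.
print(should_brake(20.0, 15.0))  # True  -> intervene
# Same gap, closing at only 5 m/s: 4 s to impact.
print(should_brake(20.0, 5.0))   # False -> no intervention
```

The whole behavior fits in a paragraph of math. That tractability is what vanishes when the braking decision depends on a perception stack with millions of learned parameters.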
“The challenge is, as those systems get more and more safety-critical, traditionally what happens is you slow down the cycle,” says Wagner. “You have to do quite a bit of classical safety validation before you release a product.” But some of these fixes will have to happen much more quickly—if there’s a bug sending cars careening into barriers, or opening a backdoor to hackers, for example. Wagner’s company, Edge Case Research, is working on ways to speed up that process, to get important robotics safety updates proven out and patched quickly.
Experts say prepping for that iterative future, for the constantly updated autonomous car, starts now. Smart engineers should be building in many software entry points, so they can validate separate parts of the system. Can they sort what’s going on in the sensor fusion system from the localization system? Can they quickly diagnose what’s wrong?
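One way to picture those “entry points” is a planner that depends only on a narrow interface, so it can be validated against a canned perception feed without booting the whole stack. This sketch uses invented names and a trivially small interface purely to illustrate the idea:

```python
# Sketch of validating one subsystem in isolation (names are invented).
# The planner depends only on a narrow interface, so a fake feed can
# stand in for the real sensor-fusion and localization code.
from typing import List, Protocol

class PerceptionFeed(Protocol):
    def obstacles_ahead(self) -> List[str]: ...

class Planner:
    def next_action(self, feed: PerceptionFeed) -> str:
        return "stop" if "pedestrian" in feed.obstacles_ahead() else "go"

class FakeFeed:
    """Canned perception output, used to test the planner alone."""
    def __init__(self, obstacles: List[str]):
        self._obstacles = obstacles
    def obstacles_ahead(self) -> List[str]:
        return self._obstacles

planner = Planner()
print(planner.next_action(FakeFeed(["pedestrian"])))  # stop
print(planner.next_action(FakeFeed([])))              # go
```

With seams like this, a bug in planning can be diagnosed—and a fix validated—without re-proving the perception or localization code around it.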
The fortunate news is that a number of autonomous vehicle developers are heads down, working on these problems—though it’s an open question whether they’re moving fast enough. OK, and more good news: If you’re looking for long-term employment, the imperfectable self-driving space may be the place to be.