This post originally appeared on MIT Technology Review
Don’t believe your car’s lying eyes.
Hackers have manipulated multiple Tesla cars into speeding up by 50 miles per hour. The researchers fooled the car’s Mobileye EyeQ3 camera system by subtly altering a speed limit sign on the side of a road in a way that a person driving by would almost never notice.
This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.
Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla’s automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee’s Advanced Threat Research team.
The researchers stuck a tiny, nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35 and, in testing, both the 2016 Tesla Model X and that year’s Model S sped up by 50 miles per hour.
The modified speed limit sign reads as 85 on the Tesla’s heads-up display.
The Tesla, reading the modified 35 as 85, is tricked into accelerating.
This is the latest in a growing body of research showing how machine learning systems can be attacked and fooled in life-threatening situations.
In an 18-month-long research process, Trivedi and Povolny replicated and expanded upon a host of adversarial machine learning attacks, including a study from UC Berkeley professor Dawn Song that used stickers to trick a self-driving car into believing a stop sign was a 45 mile-per-hour speed limit sign. Last year, hackers tricked a Tesla into veering into the wrong lane in traffic by placing stickers on the road, an adversarial attack meant to manipulate what the car’s machine learning algorithms perceive.
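These sticker attacks exploit the same weakness as classic adversarial examples: a small, targeted change to the input pushes a model’s output across a decision boundary. Here is a minimal toy sketch of that idea using a linear classifier; all the numbers, labels, and the classifier itself are invented for illustration and have nothing to do with Mobileye’s actual vision pipeline or McAfee’s method.

```python
import numpy as np

# Toy linear classifier: positive score -> reads "85", negative -> reads "35".
# (Labels and weights are invented for this illustration.)
w = np.array([1.0, -2.0, 0.5])   # fixed model weights
x = np.array([0.1, 0.2, 0.1])    # a "clean" input the model classifies correctly

def score(inp):
    # A linear model's output is just the dot product with its weights.
    return float(np.dot(w, inp))

# For a linear model, the gradient of the score with respect to the input
# is simply w, so stepping each feature a small amount in the direction
# sign(w) raises the score as fast as possible per unit of change --
# the same "fast gradient sign" idea behind many adversarial examples.
eps = 0.3
x_adv = x + eps * np.sign(w)

print(score(x))      # negative: classified as "35"
print(score(x_adv))  # positive: a small per-feature nudge flips the label
```

The key point mirrored by the sticker attack: no single feature changes by more than `eps`, yet the classification flips entirely.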
“Why we’re studying this in advance is because you have intelligent systems that at some point in the future are going to be doing tasks that are now handled by humans,” Povolny said. “If we are not very prescient about what the attacks are and very careful about how the systems are designed, you then have a rolling fleet of interconnected computers which are one of the most impactful and enticing attack surfaces out there.”
As autonomous systems proliferate, the issue extends to machine learning algorithms far beyond vehicles: a March 2019 study showed that medical machine-learning systems can be fooled into giving incorrect diagnoses.
The McAfee research was disclosed to both Tesla and Mobileye last year. Tesla did not respond to a request for comment from MIT Technology Review, but it did acknowledge the findings to McAfee and said it would not be fixing the issue on that generation of hardware. A Mobileye spokesperson downplayed the research by suggesting the modified sign would fool even a human into reading 85 instead of 35. The company doesn’t consider tricking the camera to be an attack because, despite the role the camera plays in Tesla’s cruise control, it wasn’t designed for autonomous driving.
“Autonomous vehicle technology will not rely on sensing alone, but will also be supported by various other technologies and data, such as crowdsourced mapping, to ensure the reliability of the information received from the camera sensors and offer more robust redundancies and safety,” the Mobileye spokesperson said in a statement.
Tesla has since moved to proprietary cameras on newer models of its cars, and Mobileye has released several new versions of its cameras that, in preliminary testing, were not susceptible to this exact attack.
There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.
“The reason we are doing this research is we’re really trying to raise awareness for both consumers and vendors of the types of flaws that are possible,” Povolny said. “We are not trying to spread fear and say that if you drive this car, it will accelerate through a barrier, or to sensationalize it.”