January 30, 2026

For an autonomous vehicle, the ability to sense what is happening around it is paramount. Like human drivers, autonomous vehicles need to make split-second decisions.

Most autonomous vehicles today rely on multiple sensors to perceive the environment. Typical designs combine cameras, radar and LiDAR (light detection and ranging) sensors. Onboard computers fuse this data into a comprehensive picture of the world around the vehicle; without it, autonomous vehicles could not navigate safely. Using multiple sensor systems makes cars safer and more effective, since each sensor can serve as a check on the others, but no system is immune to attack.

These systems are not foolproof. Camera-based perception systems, for example, can be fooled simply by placing stickers on road signs that change their meaning.

The RobustNet research group at the University of Michigan, together with computer scientist Qi Alfred Chen of UC Irvine and colleagues from the SPQR lab, has shown that LiDAR-based perception systems can be compromised too.

The attack works by strategically spoofing the LiDAR sensor's signals to fool the vehicle's perception system. This can make the vehicle brake suddenly, blocking traffic or even causing a collision.

Spoofing LiDAR Signals

LiDAR-based perception systems have two components: the sensor itself and a machine-learning model that processes the sensor's data. A LiDAR sensor calculates the distance between itself and its surroundings by emitting light signals and measuring how long each signal takes to bounce off an object and return. The duration of this round trip is called the "time of flight."
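As a rough illustration of the time-of-flight calculation, the sketch below converts a measured round-trip time into a distance. The function name and the example timing value are illustrative, not taken from any particular sensor.

```python
# Minimal sketch of the time-of-flight distance calculation.
# All names and numbers here are illustrative, not from a real sensor.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, given the round-trip travel time.

    The light travels out to the object and back, so the one-way distance
    is half of the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return that arrives 200 nanoseconds after the pulse was emitted
# corresponds to an object roughly 30 meters away.
print(distance_from_time_of_flight(200e-9))  # ~29.98 m
```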

A LiDAR unit sends out thousands of light signals every second. Its machine-learning model then uses the returned signals to build a picture of the world around the vehicle, much as a bat uses echolocation to locate obstacles at night.
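To sketch how many individual returns become that picture, the code below turns a handful of (azimuth, elevation, time-of-flight) measurements into 3D points, which is roughly the form a point cloud takes before it reaches the perception model. The specific angles and timings are made up for illustration.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def returns_to_points(azimuth_rad, elevation_rad, round_trip_s):
    """Convert raw LiDAR returns into 3D points around the sensor.

    Each return is described by the direction the pulse was fired in
    (azimuth, elevation) and the measured round-trip time.
    """
    r = SPEED_OF_LIGHT * np.asarray(round_trip_s) / 2.0  # one-way range
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)

# Three made-up returns: straight ahead, slightly left, slightly up.
azimuth = np.array([0.0, 0.1, 0.0])
elevation = np.array([0.0, 0.0, 0.05])
tof = np.array([200e-9, 180e-9, 220e-9])
print(returns_to_points(azimuth, elevation, tof))  # (3, 3) array of x, y, z
```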

These pulses are susceptible to manipulation. An attacker can fool the sensor by shining their own light signal at it; a single spoofed signal is enough to confuse the sensor.

Fooling the LiDAR into "seeing" an "object" that isn't there is harder. To succeed, the attacker must precisely time the spoofed signals fired at the victim's LiDAR. Because the signals travel at the speed of light, this has to happen at the nanosecond scale: small timing differences stand out when the LiDAR converts the measured time of flight into a distance.
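To see why nanosecond precision matters, the arithmetic below shows how far a phantom return moves for each nanosecond of timing error, and what round-trip delay an attacker would need to fake an object at a chosen distance. The target distance is an illustrative number.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

# Each extra nanosecond of round-trip delay pushes the perceived object
# about 15 centimeters farther away (half the distance light covers in 1 ns).
meters_per_nanosecond_of_delay = SPEED_OF_LIGHT * 1e-9 / 2.0
print(meters_per_nanosecond_of_delay)  # ~0.15 m

# To make the sensor "see" an object 10 meters ahead, the spoofed pulse
# has to arrive as if it made a ~67-nanosecond round trip.
target_distance_m = 10.0
required_round_trip_ns = 2.0 * target_distance_m / SPEED_OF_LIGHT * 1e9
print(required_round_trip_ns)  # ~66.7 ns
```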

Fooling the LiDAR sensor is not enough: the attacker must also fool the machine-learning model. Research done at the OpenAI lab has shown that machine-learning models are vulnerable to specially crafted inputs, known as adversarial examples. Specially generated stickers on road signs, for instance, can fool camera-based perception systems.
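As a generic illustration of what "adversarial example" means here, the sketch below applies the well-known fast gradient sign method to a toy image classifier. It is not the LiDAR attack itself, just the standard idea of nudging an input along the loss gradient so the model is pushed toward a wrong answer; the model, input and labels are placeholders.

```python
# A minimal fast-gradient-sign-method (FGSM) sketch in PyTorch, using a toy
# classifier. This illustrates the general idea of adversarial examples,
# not the specific LiDAR attack described in this article.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in
model.eval()

image = torch.rand(1, 1, 28, 28)      # placeholder input
true_label = torch.tensor([3])        # placeholder label
epsilon = 0.05                        # perturbation budget

image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Step the input in the direction that increases the loss the most.
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```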

We discovered that an attacker can use a similar technique to craft perturbations that work against LiDAR. Instead of visible stickers, the perturbations are fake signals, designed to trick the machine-learning model into believing there are obstacles where there are none. The LiDAR sensor feeds the attacker's fake signals to the machine-learning model, which recognizes them as obstacles.
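A toy sketch of what this looks like from the model's point of view: the attacker's pulses simply show up as extra points in the point cloud, and the model has no way to tell them apart from genuine returns. The cluster shape and coordinates below are invented for illustration.

```python
import numpy as np

def inject_spoofed_points(point_cloud: np.ndarray, spoofed: np.ndarray) -> np.ndarray:
    """Append attacker-controlled points to the genuine point cloud.

    Downstream, the perception model sees a single (N, 3) array and cannot
    tell which points came from real reflections.
    """
    return np.concatenate([point_cloud, spoofed], axis=0)

# Genuine returns (made up): a sparse scene in front of the vehicle.
genuine = np.random.uniform(low=[-20, -10, 0], high=[40, 10, 3], size=(500, 3))

# Spoofed cluster (made up): a small blob of points about 8 m ahead, roughly
# where the rear of a vehicle would be, intended to register as an obstacle.
spoofed = np.array([8.0, 0.0, 1.0]) + 0.3 * np.random.randn(60, 3)

combined = inject_spoofed_points(genuine, spoofed)
print(combined.shape)  # (560, 3): real and fake points are processed alike
```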

The fake object can be crafted to meet the machine-learning model's expectations. The attacker could, for example, create a signal that looks like a truck in motion. To carry out the attack, the attacker could set up the spoofing device at an intersection or mount it on a car driving in front of the autonomous vehicle.

Video illustration of two ways to fool the AI in the self-driving vehicle.

Two possible attacks

We chose Baidu Apollo, an open-source autonomous driving platform, to demonstrate the attack. Apollo has more than 100 partners and has reached mass-production agreements with several manufacturers, including Volvo and Ford.

Using sensor data from Baidu Apollo, we demonstrated two attacks. In the first, an "emergency braking attack," the attacker suddenly stops a moving vehicle by fooling it into thinking an obstacle is blocking its path. In the second, an "AV freezing attack," a spoofed obstacle tricks a vehicle stopped at a traffic light into remaining stopped even after the light turns green.
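To make the two outcomes concrete, here is a deliberately simplified decision rule of the kind a planner applies to perception output. Real systems such as Apollo are far more involved, so treat this purely as a sketch of why a phantom obstacle leads either to hard braking or to staying stopped at a green light.

```python
def plan_action(obstacle_ahead: bool, obstacle_distance_m: float,
                light_is_green: bool, vehicle_is_stopped: bool) -> str:
    """Toy planner logic: react to whatever perception reports.

    If perception reports a phantom obstacle, the same rules that keep the
    vehicle safe produce the two attack outcomes described above.
    """
    if obstacle_ahead and obstacle_distance_m < 10.0 and not vehicle_is_stopped:
        # "Emergency braking attack": a spoofed obstacle close ahead
        # triggers a hard stop in moving traffic.
        return "emergency_brake"
    if vehicle_is_stopped and obstacle_ahead:
        # "AV freezing attack": a spoofed obstacle keeps the car stopped
        # even though the light has turned green.
        return "stay_stopped"
    if vehicle_is_stopped and light_is_green:
        return "proceed"
    return "continue"

print(plan_action(True, 7.0, False, False))          # emergency_brake
print(plan_action(True, 7.0, True, True))            # stay_stopped
print(plan_action(False, float("inf"), True, True))  # proceed
```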

By exposing these vulnerabilities in autonomous driving perception systems, we hope to alert the teams building autonomous technologies. Research into new security problems in autonomous driving systems is only just beginning.

 
