Scientists hack self-driving cars using “go ahead” sign

Printed text shown to car cameras by pedestrians can confuse the AI and trigger improper maneuvers in self-driving cars, scientists say

Optimization of colors and fonts by generative AI is what makes the malicious command more convincing to the vehicle (Photo: Nissan | Handout)
By Tom Schuenk
Published on 2026-03-02 at 05:00 PM
Updated on 2026-03-02 at 05:15 PM

Researchers at the University of California have demonstrated that the multimodal artificial intelligence systems designed for the next generation of autonomous vehicles can be manipulated through rudimentary visual commands. The attack, called CHAI, uses nothing more than ink on printed signs to confuse the machines' logical reasoning. Unlike previous techniques, which tried to hide or disfigure traffic signs, this new vulnerability inserts text that the system is trained to read and interpret, directly influencing its driving decisions.


How visual manipulation works

The attack targets the intermediate "thinking" process of so-called large vision-language models (LVLMs). Instead of blinding the car, the tactic changes how the AI understands the context of the road. The scientists used generative artificial intelligence to optimize the messages' colors, fonts, and sizes, maximizing the effectiveness of the trap. In simulations, the method achieved an 81.8% success rate.

During tests in miniature environments, the algorithm obeyed the printed instruction to move forward even when it detected physical obstacles ahead, proving that a malicious visual command can override the readings of proximity sensors. Small aesthetic tweaks decide whether the flaw triggers: the DriveLM model, for example, ignored a fake sign in neon hues, but was fooled when the colors were changed to yellow on dark green. The system was even persuaded to make dangerous crossings by commands written in different languages.

Minor tweaks to content and colors turned this simulated CHAI attack from unsuccessful to successful (Photo: University of California)
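The styling search described above can be pictured as a simple optimization loop. The sketch below is purely illustrative and is not the researchers' actual method: the `compliance_score` function is a toy stand-in for what, in the real study, would be a query to a driving model such as DriveLM, and it hard-codes the article's anecdote that yellow-on-dark-green signs worked where neon ones failed.

```python
import random

# Toy stand-in for the real objective. In the study, the score would be the
# measured probability that the driving LVLM obeys the printed command; here
# we simply encode the reported yellow-on-dark-green preference.
def compliance_score(fg_color, bg_color, font_size):
    score = 0.2
    if fg_color == "yellow" and bg_color == "dark green":
        score += 0.6          # the combination the model reportedly fell for
    score += min(font_size, 48) / 480  # larger text is easier to read, up to a cap
    return score

FG = ["neon pink", "neon green", "yellow", "white"]   # hypothetical palette
BG = ["black", "dark green", "blue"]

def optimize(trials=200, seed=0):
    """Random search over sign styling, keeping the highest-scoring candidate."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cand = (rng.choice(FG), rng.choice(BG), rng.randint(12, 72))
        s = compliance_score(*cand)
        if best is None or s > best[0]:
            best = (s, cand)
    return best

best_score, best_style = optimize()
print(best_style)  # under this toy objective: a yellow-on-dark-green sign
```

The real attack reportedly used generative AI rather than random search, but the shape of the loop is the same: propose a styling, measure how strongly the driving model obeys it, and keep the most persuasive variant.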

Layers of security and the industry’s response

Despite the alarming results in the laboratory, companies in the sector argue that current production vehicles carry security redundancies against this type of attack. Representatives of Mobileye, for example, explained that their cars do not depend on a single logical model, but on a system in which radar, lidar, and camera sensors "vote" on the final decision. If an atypical sign appears and the surrounding traffic does not behave accordingly, the system tends to disregard the anomaly. Still, the study serves as a warning for future autonomous driving architectures, reinforcing the premise that the physical world captured by the sensors should always take priority over purely textual commands.
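The "voting" redundancy can be sketched in a few lines. This is an illustrative simplification, not Mobileye's actual architecture: each sensor channel casts one vote, so a printed command that fools only the camera cannot outvote physical-world evidence from radar and lidar.

```python
from collections import Counter

def fused_decision(votes):
    """Majority vote across sensor channels.

    votes: dict mapping a sensor name to its verdict ('go' or 'stop').
    Returns the most common verdict.
    """
    tally = Counter(votes.values())
    return tally.most_common(1)[0][0]

# A printed "go ahead" sign fools the camera's text reader, but radar and
# lidar both detect an obstacle, so the fused decision is still to stop.
votes = {"camera_text": "go", "radar": "stop", "lidar": "stop"}
print(fused_decision(votes))  # prints "stop"
```

Real fusion stacks weight sensors by reliability and context rather than counting equal votes, but the principle the article describes is the same: no single channel, least of all a piece of text, gets to steer the car alone.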
