Criminals cross-reference Google Street View data with fake videos to demand money from drivers; learn how to protect yourself
Authorities in the United States have issued an official warning about a new and sophisticated type of online scam, which may soon arrive in Brazil. The scheme combines mapping tools and generative artificial intelligence to simulate physical attacks on vehicles, coercing victims into making immediate payments under the false threat of ongoing vandalism.
The fraud, which has become recurrent, exploits the rapid evolution of video manipulation tools to fabricate emergency scenarios. Unlike traditional scams, this one uses specific details of the victim’s address to make the threat appear credible.
In a recent case detailed by authorities, a resident received a text message containing a video that faithfully depicted the driveway of his home. The digitally manipulated recording showed individuals puncturing the tires and vandalizing the body of his pickup. Immediately after sending the file, the criminals demanded a transfer of US$ 500 (approximately R$ 2,850) to stop the alleged attack.
Despite the realism of the images, the police clarify that the scammers are not physically at the scene. The modus operandi consists of using mapping tools, such as Google Earth and Street View, to capture static images of the residence and identify the model of the parked vehicle.
Armed with this visual data, the fraudsters use AI software to overlay animations of vandals destroying the property. The scheme is completed by cross-referencing data leaked on the dark web, which links the address to the owner’s cell phone number so the threat can be sent directly to the victim.
Digital security experts and the Michigan police emphasize that no payment should be made. Transferring money does not guarantee safety and signals to criminals that the victim is susceptible to further extortion.
The guidance is to remain calm and, if possible, check the vehicle in person to confirm that the property is intact. The crime should be reported immediately to local authorities, with the messages and videos received attached as evidence. With the advancement of generative AI in 2026, the recommendation is standard skepticism toward urgent contacts from unknown numbers, especially those involving financial demands.