Federal regulators have reportedly recorded a jump in complaints about AI-related scam ads on social media.
The Federal Trade Commission (FTC) has seen a sharp rise in complaints in the last year involving ads that either used artificial intelligence (AI) or claimed to use it to pull people into scams, Bloomberg News reported Sunday (March 17).
The report — citing a document obtained via a Freedom of Information Act request — said at least a third of those complaints dealt with ads seen on social media platforms like Facebook and YouTube, meaning those sites now face “a new kind of misinformation adversary among their own advertisers,” as Bloomberg’s Parmy Olson writes.
According to the report, the FTC received just two ad-related complaints mentioning AI in February 2023, but that number climbed to 14 a year later as generative AI boomed in popularity. Olson argues that while those numbers may seem small, they likely understate the problem, as most social media users complain to the platforms themselves rather than to the FTC.
In one case, a Los Angeles user in their 30s told the FTC they had been tricked into transferring $7,000 to a phony Tesla website after seeing a YouTube clip in which a deep fake of Elon Musk said his car company would “double your money for a short period of time” by working with a crypto company. The person never got their money back.
“That was all I had,” they said in the complaint.
A spokesperson for YouTube parent Alphabet told Bloomberg it was aware of the deep fake ad trend and that the company was “investing heavily in our detection and enforcement against these deep fake ads and the bad actors behind them.”
The news comes a little more than a month after the FTC proposed a set of rules that would bar the impersonation of individuals, following a rise in complaints about impersonation fraud fueled by AI.
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” FTC Chair Lina M. Khan said at the time.
“Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
The efforts come amid a rise in the use of AI to carry out financial fraud. In an interview with PYMNTS last month, Farhad Farzaneh, chief product officer at Trustly, described an incident in which scammers used AI to create convincing likenesses of a company’s executives on a video conference call, leading to a fraudulent multimillion-dollar transaction.