Google Aims to Set Precedent With AI Scam Lawsuit


Google has filed a lawsuit aimed at preventing scams related to its AI offerings.

The tech giant announced the suit Monday (Nov. 13) on its blog, saying it hopes these actions will establish a legal precedent for preventing artificial intelligence (AI)-related fraud.

The suit deals with scammers who prey on people looking to use Google’s Bard chatbot, wrote Halimah DeLaine Prado, general counsel for Google.

She added that these fraudsters created ads that invite people to download Bard, which Google notes does not need to be downloaded, but that in actuality trick them into downloading malware that compromises their social media accounts.

“We are seeking an order to stop the scammers from setting up domains like these and allow us to have them disabled with U.S. domain registrars,” Prado wrote. “If this is successful, it will serve as a deterrent and provide a clear mechanism for preventing similar scams in the future.”

The lawsuit is one of two legal actions announced by Google Monday. The second targets scammers who abuse the Digital Millennium Copyright Act (DMCA) by “using bogus copyright takedowns to harm competitors,” wrote Prado.

Google claims these scammers abuse the DMCA, designed to protect copyright holders, by setting up “dozens” of Google accounts to submit bogus copyright complaints against competitors, leading to the removal of more than 100,000 business websites.

“We hope our lawsuit will not only put an end to this activity, but also deter others and raise awareness of the harm that fraudulent takedowns can have on small businesses across the country,” the blog post said.

Google’s lawsuits come two weeks after the White House announced plans to crack down on fraudsters who use AI-generated voice models to commit scams over the phone.

As previously reported, the Biden administration plans to host a “virtual hackathon,” where companies can build AI models that can spot and block unwanted robocalls and robotexts, particularly those using AI-generated voice models that tend to target senior citizens.

PYMNTS examined the concept of fighting “bad AI with good AI” earlier this year, writing that insiders have stressed that the technology could “supercharge the capabilities of bad actors by providing turnkey and scalable cybertools, including AI-generated voice clones and other techniques straight out of the realm of science fiction” that can be used for illicit goals.