Merriam-Webster’s Words of the Year Reflect Growing Impact of AI


The phrase artificial intelligence (AI) itself promises two things: intelligence and artificiality.

And while it is the leverageable machine intelligence that businesses and organizations want, it is the inherent artificiality of the technology’s output that they have to contend with.

This, as dictionary publisher Merriam-Webster reported Monday (Nov. 27) that 2023’s most looked-up word was “authentic,” with the surge in lookups driven by stories and conversations about AI, and “deepfake” not far behind.

They were inspired and relevant choices for today’s landscape. Authenticity, traditionally associated with the reliability and truthfulness of information, faces a new challenge in the age of AI. 

Deepfakes, a portmanteau of “deep learning” and “fake,” have evolved over the past few years, driven by advances in machine learning algorithms and generative multimodal models.

These algorithms enable the creation of hyper-realistic content by training on vast datasets of images and videos. 

Initially used for entertainment purposes, deepfakes have since sparked concerns due to their potential to manipulate information and deceive the public by convincingly depicting individuals saying or doing things they never did. 

Read also: Generative AI Fabrications Are Already Spreading Misinformation

Preserving the Integrity of Information

Deepfakes blur the line between reality and fiction, making it increasingly difficult to discern genuine content from manipulated creations. 

Governments are taking notice — particularly as their leaders and elected officials increasingly find themselves the unwitting subjects of AI-generated deepfake media. 

The White House has gone on record saying that it wants Big Tech companies to disclose when content has been created using their AI tools, and President Biden’s executive order on the safe, secure and trustworthy development of AI tasks the Commerce Department with issuing guidance for labeling and watermarking AI-generated content. 

Microsoft Vice Chairman and President Brad Smith has called deepfakes the greatest AI-related threat.

The EU is also gearing up to require tech platforms to label their AI-generated images, audio and video with “prominent markings” disclosing their synthetic origins. 

Ryan Abbott, professor of law and health sciences at the University of Surrey, told PYMNTS that crafting effective data provenance and AI-generated content watermarking processes won’t be an easy task.

“We are talking about different countries having a different interpretation. But it is very important to get protection more or less globally. … AI is going to be doing a lot more heavy lifting in the creative space pretty soon,” he said. 

In the meantime, the private sector isn’t sitting still, either. 

Google has announced a policy requiring advertisers to disclose when ads for the upcoming U.S. election have been manipulated or created using AI, while Meta has said it is imposing new controls on AI-generated ads ahead of the 2024 election. On the commercial front, YouTube, also owned by Google parent Alphabet, will add disclosure requirements and other rules for content on its platform created with AI.

See also: Is It Real or Is It AI?

Mitigating the Impact of Deepfakes

Complicating matters somewhat is that, according to PYMNTS Intelligence, no truly foolproof method yet exists to detect and expose AI-generated content.

That’s why educating the public about the existence and potential impact of deepfakes is essential. Increased media literacy can empower individuals to critically evaluate information and recognize potential manipulations.

“As long as you can tell consumers what the content is made of, they can then choose to make decisions around that information based on what they see. But if you don’t give that to them, then it’s that shielding and blackboxing that the industry needs to be careful with, and where regulators can step in more aggressively if the industry fails to be proactive,” Shaunt Sarkissian, founder and CEO of AI-ID, told PYMNTS.

“The food industry was the first sector to really start adopting things like disclosure of ingredients and nutrition labels, providing consumers transparency and knowledge of what’s in their products — and with AI, it’s much of the same. Companies need to say, ‘Look, this was AI-generated, but this other piece was not,’” Sarkissian added. “Get into the calorie count, if you will.”

In this vein, last month (Oct. 11), Adobe and other companies, including Arm, Intel, Microsoft and Truepic, established a symbol that can be attached to content alongside metadata listing its provenance, including whether it was made with AI tools.
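As a rough illustration of how provenance metadata of this kind might travel with a piece of content, here is a minimal sketch. The manifest fields and the build_provenance_manifest helper are hypothetical stand-ins, not the actual Content Credentials (C2PA) schema, which defines its own signed format.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a minimal, hypothetical provenance manifest for a piece of content.

    Field names here are illustrative only; real content-credential standards
    such as C2PA define their own schemas and signing requirements.
    """
    return {
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # fingerprint of the content
        "generator": generator,            # the tool that produced the content
        "ai_generated": ai_generated,      # the key disclosure flag
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    image_bytes = b"...raw image bytes..."  # placeholder content
    manifest = build_provenance_manifest(image_bytes, "example-image-model", True)
    # In a real pipeline the manifest would be cryptographically signed and
    # embedded in the file itself so tampering is detectable; here we just print it.
    print(json.dumps(manifest, indent=2))
```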

And the threat of AI-generated deepfakes is itself very real.

“Utilizing generative AI, a fraudster can effectively mimic a voice within three seconds of having recorded data,” Karen Postma, managing vice president of risk analytics and fraud services at PSCU, told PYMNTS in an interview posted Oct. 4.

“There’s a beautiful upside [to generative AI] that can reduce cost and drive much better customer experience,” Gerhard Oosthuizen, chief technology officer of Entersekt, told PYMNTS in February. “Unfortunately, there is also a darker side. People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs.”

That’s why, as the AI ecosystem moves forward, it is becoming more important to make it obvious when an AI model has generated synthetic content, whether text, images or even voice, by flagging its source.
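One lightweight way to flag the source, sketched below, is to refuse to hand back model output without an attached disclosure. This is a minimal illustration, not an established API; the generate_text stub stands in for any real model call.

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    """Pairs generated content with an explicit synthetic-content disclosure."""
    content: str
    disclosure: str

def generate_text(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text for this sketch."""
    return f"[synthetic response to: {prompt}]"

def generate_with_disclosure(prompt: str, model_name: str) -> LabeledOutput:
    """Generate content and bundle it with a disclosure label, so downstream
    consumers never receive unlabeled synthetic output."""
    return LabeledOutput(
        content=generate_text(prompt),
        disclosure=f"This content was generated by the AI model '{model_name}'.",
    )

if __name__ == "__main__":
    result = generate_with_disclosure("Write a product description.", "example-model")
    print(result.content)
    print(result.disclosure)
```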

For further reading, the PYMNTS Intelligence “Generative AI Tracker®,” a collaboration with AI-ID, examines the challenge of detecting AI-generated content and distinguishing it from human-created material. 

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.