Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It

By: Lisa Macpherson (Public Knowledge)
The emergence of generative artificial intelligence (AI) has captured widespread attention since the public unveiling of ChatGPT in November 2022. The term refers to machine learning systems that generate novel content in response to human prompts, drawing on training across vast datasets. Generative AI outputs take many forms, including audio (e.g., Amazon Polly and Murf.AI), code (e.g., Copilot), images (e.g., Stable Diffusion, Midjourney, and DALL-E), text (e.g., ChatGPT and Llama), and video (e.g., Synthesia). As with previous advances in science and technology, debate abounds over the immediate and long-term hazards, as well as the societal and economic benefits, that accompany these capabilities.
In this piece, we examine a particular risk of the widespread adoption of generative AI systems: their potential to accelerate the degradation of our news ecosystem by generating and disseminating false information. We then explore a range of proposed solutions aimed at safeguarding the integrity of our information environment.
Generative AI systems can compound the existing problems in our information ecosystem in several ways. First, they can expand the number of actors capable of fabricating plausible misinformation. Second, they can lower the cost of producing such content, making it more accessible. Third, they can make fabricated content harder to detect: the traditional markers researchers use to identify false information, such as linguistic anomalies, syntax inconsistencies, and the cultural missteps common in foreign intelligence operations, may be absent from AI-generated content. Just as social media once made the distribution of misinformation cheap, generative AI now stands to make its creation cheap…