
By: Bruce Schneier & Nathan E. Sanders (MIT Tech Review)
ChatGPT was introduced just nine months ago, and since then we have been continuously exploring its impact on our daily lives, our careers, and our systems of self-governance.
However, the public discourse surrounding AI's potential threat to our democracy often lacks creativity. The prevalent discussions focus on familiar dangers, such as campaigns employing fake images, audio, or video to attack opponents—an issue we've dealt with for decades. Similarly, there is concern over foreign governments disseminating misinformation, a fear stemming from the 2016 US presidential election. And the growing prevalence of political "astroturfing," in which fake online accounts are used to simulate policy support, has fueled worries that AI-generated opinions could overwhelm the genuine preferences of real people, further compounding the challenges we face…