US Senator Michael Bennet, a Democrat from Colorado, recently addressed a letter to major technology and generative AI companies, calling for them to label AI-generated content and limit the spread of fake or misleading material. Bennet cited several recent examples of AI-generated content causing alarm and market turbulence. He also underscored the importance of Americans knowing when AI is being used to shape political content.
“Fabricated images can derail stock markets, suppress voter turnout, and shake Americans’ confidence in the authenticity of campaign material,” Bennet said.
OpenAI CEO Sam Altman testified before the Senate Judiciary Committee, highlighting AI’s potential to accelerate the spread of false information. Bennet applauded the steps technology companies have taken to identify and label AI-generated content, but acknowledged that these measures are voluntary and can be easily bypassed.
“Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly,” Bennet wrote in the letter.
Another U.S. lawmaker echoed Bennet’s sentiments, arguing that platforms ought to update their policies now that generative AI tools are widely available.
Related: EU Commissioner Says AI-Generated Content Should Be Labelled
“We cannot expect users to dive into the metadata of every image in their feeds, nor should platforms force them to guess the authenticity of content shared by political candidates, parties, and their supporters,” the lawmaker said.
Meanwhile, other lawmakers, including Senate Majority Leader Chuck Schumer, have expressed interest in introducing legislation to regulate AI. Bennet has gone on to introduce a bill requiring political ads to disclose whether AI was used in the production process.
“Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality,” Bennet said.
Bennet’s letter asked the executives about the standards and requirements they employ to identify AI content and how those standards were developed and audited. He also inquired about the consequences for users who violate the rules.
Twitter responded to a request for comment with a poop emoji, Microsoft declined to comment, and TikTok, OpenAI, Meta, and Alphabet did not immediately respond.
As AI-generated content becomes more prevalent and more easily misused, U.S. Senator Michael Bennet is pressing major technology and generative AI companies to act responsibly and promptly to protect public discourse and electoral integrity. Bennet’s letter and subsequent bill reflect both a sense of urgency and an awareness of the risks artificial intelligence poses to American democracy.