The Federal Trade Commission (FTC) is reportedly investigating OpenAI for issues around false information and data security.
The regulator has sent a letter to the creator of the artificial intelligence (AI)-powered chatbot ChatGPT asking dozens of detailed questions about these issues, The Wall Street Journal (WSJ) reported Thursday (July 13), citing an unnamed source.
One issue being investigated by the FTC is whether ChatGPT has harmed people by publishing false information about them, according to the WSJ report.
The agency is also looking into OpenAI’s data security practices, including the company’s 2020 disclosure that a bug exposed users’ chat data and payment-related information, the report said.
The FTC’s civil investigative demand also asks questions about OpenAI’s marketing efforts, AI model training practices and handling of users’ personal information, per the report.
FTC Chair Lina Khan wrote in an op-ed published by The New York Times in May that AI should be regulated, and that the agency is looking at “how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.”
“Can [the U.S.] continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices,” Khan wrote at the time.
OpenAI said in a May blog post that it’s time to start thinking about the governance of future AI systems.
In the post, OpenAI President Greg Brockman, CEO Sam Altman and Chief Scientist Ilya Sutskever suggested that leading AI development efforts be coordinated to limit the annual rate of growth in AI capability, that an international authority be formed to monitor development efforts and restrict those above a certain capability threshold, and that the technical capability be developed to make superintelligence safe.
In June, PYMNTS reported that OpenAI and Google, another player in the generative AI sector, have different views about regulatory oversight of the sector.
Google asked for AI oversight to be shared by existing agencies led by the National Institute of Standards and Technology (NIST), while OpenAI favored a more centralized and specialized approach.