Whistleblowers have reportedly accused OpenAI of preventing employees from warning regulators about possible artificial intelligence (AI) risks.
The whistleblowers say the AI company imposed overly restrictive employment, nondisclosure and severance agreements, the Washington Post reported Sunday (July 14), citing a letter the whistleblowers sent to the Securities and Exchange Commission (SEC).
That letter, obtained by the Post, says the agreements could have exposed workers to penalties for raising concerns about OpenAI to federal regulators, and required workers to waive their federal rights to whistleblower compensation, in violation of federal law.
“These contracts sent a message that ‘we don’t want … employees talking to federal regulators,’” one of the whistleblowers told the Post. “I don’t think that AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.”
PYMNTS has contacted OpenAI for comment but has not yet received a reply.
A spokesperson for the company offered this statement to the Post:
“Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”
OpenAI’s approach to safety has been the subject of debate this year, with at least two notable employees — AI researcher Jan Leike and policy researcher Gretchen Krueger — saying in their resignation announcements that the company was prioritizing product development over safety considerations.
Meanwhile, Ilya Sutskever, OpenAI’s co-founder and former chief scientist, has launched Safe Superintelligence, a new AI company focused on creating a safe, powerful AI system free of commercial pressures.
As PYMNTS wrote soon after, this has sparked some discussion over whether such a feat is even possible.
“Critics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding,” that report said. “They argue that the leap from narrow AI, which excels at specific tasks, to a general intelligence that surpasses human capabilities across all domains is not merely a matter of increasing computational power or data.”
In addition, even some people who believe in the possibility of AI superintelligence have concerns about ensuring its safety. The creation of a superintelligent AI would necessitate advanced technical capabilities and a strong grasp of ethics, values and the potential consequences of such a system’s actions.