Generative artificial intelligence (AI) has captured both the public imagination and lawmakers’ attention.
Sam Altman, co-founder and CEO of OpenAI, the bleeding-edge company behind the buzzy ChatGPT, is making his first appearance on Capitol Hill Tuesday (May 16) to speak before the Senate Judiciary Subcommittee on Privacy, Technology and the Law in a hearing titled “Oversight of AI: Rules for Artificial Intelligence.”
Altman’s prepared testimony emphasized that “regulation of AI is essential,” and that the AI leader is “eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits.”
His testimony comes as regulators and governments around the world step up their scrutiny of AI, racing to address fears about the technology’s transformative power as it is integrated ever more deeply into the broader business landscape.
The President’s Council of Advisors on Science and Technology (PCAST) has separately created a new working group tasked with expanding efforts by other federal agencies to ensure the responsible use of generative AI. The group’s first activity will be a public meeting Friday (May 19).
Joining Altman before the assembled lawmakers Tuesday are IBM Chief Privacy and Trust Officer Christina Montgomery and Gary Marcus, professor emeritus of psychology and neural science at New York University, a longtime proponent of AI regulation who has for years called for increased AI literacy among the public.
Marcus was one of the signatories of a March 29 open letter calling for a pause on “the training of AI systems more powerful than GPT-4.”
Ongoing Efforts Toward an Effective AI Framework
The congressional hearing comes at a critical moment, as lawmakers around the world continue to struggle not just to understand the fast-developing AI movement but to enact the appropriate guardrails for policing and regulating it.
The United States has not passed meaningful legislation targeting the tech sector since the late ’90s, around the time of Microsoft’s antitrust case.
Now, the U.S. risks falling behind its global peers.
European Union lawmakers voted May 11 to approve a draft form of milestone regulations over the use of AI, including restrictions on chatbots like ChatGPT as well as a ban on the use of facial recognition in public and on predictive policing tools.
“This hearing begins our subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology,” said Sen. Richard Blumenthal, chair of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, about Tuesday’s hearing. “I look forward to working with my colleagues as we explore sensible standards and principles to help us navigate this uncharted territory.”
“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” Blumenthal added.
Ranking Member of the Subcommittee Josh Hawley said: “Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs and security.”
“AI companies [should have] a governance regime flexible enough to adapt to new technical development,” OpenAI’s Altman said to lawmakers, emphasizing that regulatory requirements must not be overly restrictive or draconian.
“There’s no way a non-industry person can understand what’s possible,” former Google CEO Eric Schmidt said in an NBC News interview Sunday (May 14).
“My concern with any kind of premature regulation, especially from the government, is it’s always written in a restrictive way… What I’d much rather do is have an agreement among the key players that we will not have a race to the bottom,” Schmidt added.
The Federal Trade Commission fired warning shots at the AI industry May 3, while the Civil Rights Division of the Department of Justice, the Consumer Financial Protection Bureau, the FTC and the Equal Employment Opportunity Commission released a joint statement April 25 underscoring that any decisions made by AI tools must follow U.S. laws.
Altman emphasized to members of Congress that, in his view, AI is a tool — not a creature.
“It will do tasks, not jobs,” he said.
What remains to be seen is how the U.S. will develop effective guardrails that support both the technology’s growth and innovation and the safety and privacy of businesses and other end users.