AI Companies Hope to Close Global Regulation Gap


A quartet of tech giants is increasing its efforts to promote artificial intelligence (AI) safety standards.

Microsoft, OpenAI, Google and Anthropic have named a director for the group, which aims to close a gap in worldwide AI regulation, the Financial Times (FT) reported Wednesday (Oct. 25).

The four companies chose Chris Meserole from the Brookings Institution to be executive director of their Frontier Model Forum, and say they’ll commit $10 million to an AI safety fund.

“We’re probably a little ways away from there actually being regulation,” Meserole, who is stepping down from his role as an AI director at the Washington, D.C.-based think tank, told the Financial Times. “In the meantime, we want to make sure that these systems are being built as safely as possible.”

The companies announced the launch of the forum in July, saying they hoped to use their expertise to benefit the AI ecosystem by “advancing technical evaluations and benchmarks and developing a public library of solutions to support industry best practices and standards.”

As PYMNTS noted at the time, the announcement drew criticism from AI skeptics, who argue that it is a way for the companies to skirt stronger government regulation.

Calls for stronger AI regulation continue as the technology draws increasing attention. Last week, former Google CEO Eric Schmidt and AI pioneer Mustafa Suleyman, co-founder of DeepMind and Inflection, said they want to create an international panel to oversee AI, modeled on the one the world’s nations already use for climate change.

“AI is here,” the two tech leaders wrote in an FT op-ed. “Now comes the hard part: learning how to manage and govern it.

“Actionable suggestions are in short supply,” they added. “What’s more, national measures can only go so far given [AI’s] inherently global nature.”

Their call for an International Panel on AI Safety (IPAIS) takes a page from the Intergovernmental Panel on Climate Change (IPCC), especially the panel’s approach of offering policymakers “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.”

Meanwhile, PYMNTS wrote last month that the danger of leading too strongly with AI regulation is “that it could force AI development to other areas of the globe where its benefits can be captured by other interests.”

Speaking to Congress, NVIDIA Chief Scientist and Senior Vice President of Research William Dally said it is possible to regulate the deployment and use of AI without regulating its creation.

“Uncontrollable general AI is science fiction,” he said. “At the core, AIs are based on models created by humans. We can responsibly create powerful and innovative AI tools.”