Google’s chief legal officer thinks artificial intelligence (AI) regulation needs to bolster innovation.
As officials in Europe aim to agree on new AI rules in the coming weeks, Google’s Kent Walker says the European Union (EU) should focus on crafting the best AI regulation, not just the first, Reuters reported Tuesday (Nov. 28).
“Technological leadership requires a balance between innovation and regulation. Not micromanaging progress, but holding actors responsible when they violate public trust,” he said, according to the text of a speech to be delivered at a business summit.
The report noted that EU nations and lawmakers are working on the final details of draft regulations from the European Commission, with the hope of reaching an accord Dec. 6.
“We’ve long said that AI is too important not to regulate, and too important not to regulate well,” Walker’s speech says.
It also calls for trade-offs between security and openness, data access and privacy, and explainability and accuracy, with “proportionate, risk-based” rules based on existing regulations that give businesses the confidence to keep investing in AI.
As PYMNTS wrote in September, one of the biggest challenges facing AI regulators is developing a basic understanding of how the technology works so they can oversee it effectively without hindering its growth.
“If you go too fast, you can ruin things,” U.S. Senate Majority Leader Chuck Schumer told journalists following a meeting in September with nearly two dozen tech executives and AI experts, including Meta Platforms CEO Mark Zuckerberg and Tesla CEO and X owner Elon Musk.
The European Union went “too fast,” Schumer added.
And as Europe prepares to regulate AI, lawmakers are getting pushback from businesses and tech groups warning against excessive regulation. As noted here last week, a joint letter signed by 32 European digital associations stressed the need for a balanced approach.
The signers noted that just 3% of the world’s AI unicorns originate in the European Union and lent their support to a proposal by France, Germany and Italy that would narrow AI rules for foundation models to transparency requirements.