Artificial intelligence (AI) regulation takes center stage as tech giants, researchers and watchdogs join the fray, pushing for smart governance of a technology with the potential to reshape our world.
In a move signaling Big Tech’s stance on artificial intelligence governance, Kent Walker, president of global affairs at Google and Alphabet, has publicly endorsed several AI regulatory bills and proposed a set of principles for responsible AI regulation.
Walker’s comments come as legislators across the United States, from Connecticut to California, grapple with how to effectively regulate the rapidly evolving field of AI. His statement underscores the tech industry’s growing acknowledgment that AI regulation is not only inevitable but necessary.
“We’ve long said AI is too important not to regulate, and too important not to regulate well,” Walker stated, highlighting Google’s position on the matter.
The Google executive expressed support for five specific bills mentioned in the Senate’s AI Policy Roadmap, including the Future of AI Innovation Act and the AI Grand Challenges Act. These bills aim to advance AI standards, promote U.S. leadership in AI, and incentivize innovation in the field.
Walker also proposed seven principles for responsible AI regulation, emphasizing the need to support responsible innovation, focus on outputs rather than processes, strike a balance in copyright issues, and empower existing agencies to handle AI-related matters.
“Progressing American innovation requires intervention at points of actual harm, not blanket research inhibitors,” Walker argued, advocating for a nuanced approach to regulation.
Walker emphasized the importance of collaboration between public and private sectors: “AI is a unique tool, a new general-purpose technology. And as with the steam engine, electricity or the internet, seizing its potential will require public and private stakeholders to collaborate to bridge the gap from AI theory to productive practice.”
This push for a regulatory framework comes as AI technologies are increasingly integrated into various sectors, from healthcare to finance. Walker highlighted the potential impact: “A recent report from McKinsey pegs that global economic impact at between $17 trillion and $25 trillion annually by 2030. (That’s an amount comparable to the current U.S. GDP.)”
As debates around AI regulation intensify, Google’s position is likely to influence discussions in legislative chambers and boardrooms across the country. Walker concluded by emphasizing AI’s long-term potential: “AI can drive more stunning breakthroughs like these — if we stay focused on its long-term potential.”
A new report sheds light on the rapidly evolving landscape of AI legislation.
The report, from Vero AI, a company specializing in AI risk assessment, analyzes 70 state and federal regulations issued between September 2018 and May 2024 and emphasizes the need for businesses to stay compliant with evolving AI rules.
Eric Sydell, CEO and co-founder of Vero AI, said in a news release, “With AI’s rapid development and adoption, organizations must understand both its business value and associated risks, especially as legislation evolves.” He added, “While some business leaders worry these guidelines will stifle innovation, our report shows that most legislation centers on data privacy, transparency and accountability, and the protection against bias of specific classes.”
The report indicates that current AI legislation primarily focuses on protecting individuals and users, particularly with respect to personal data. It also notes that state-level AI regulations are emerging rapidly, signaling that businesses across the U.S. will soon be held accountable for their AI systems.
Sydell said, “For companies already practicing responsible AI, compliance should not be burdensome, and adhering to these principles will promote user trust and reliability on the road to deployment success.”
This report comes at a crucial time when businesses are increasingly adopting AI technologies while grappling with potential risks and regulatory challenges. As governments worldwide work to establish frameworks for responsible AI use, companies must stay informed and proactive in their approach to AI governance.
The nonprofit research organization Electronic Privacy Information Center (EPIC) on Tuesday published a document that can be used to assess the strength of state and federal AI legislation.
EPIC’s AI Legislation Scorecard, which the group said reflects minimum standards for the responsible use of commercial AI and automated decision-making systems, lays out the legal provisions effective AI legislation should contain. The list includes strong legal definitions, data minimization requirements, obligations to conduct impact assessments and tests, and outright prohibitions on particularly harmful AI uses.
The group’s leaders said the scorecard is intended for lawmakers, journalists, advocates, and academics to evaluate the strength of AI bills. However, they advised that AI legislation should only complement current laws, not preempt ones that offer more protections.
While the scorecard was developed with comprehensive regulatory AI bills in mind — like the law enacted in Colorado last month — it can also be used to evaluate the strength of the dozens of federal AI proposals introduced over the past year, such as those that create task forces or regulate narrow and sector-specific uses of AI.
Along with strict requirements on data minimization (the practice of collecting only necessary information), a key part of EPIC’s legal recommendations is robust enforcement mechanisms. These include limited cure periods, giving attorneys general or relevant agencies investigative and enforcement authority, and a private right of action, which allows individuals to bring civil suits against organizations found to be using, storing or sharing data improperly.