Lawmakers around the globe are stepping up their efforts to regulate artificial intelligence (AI) more aggressively.
Beijing and Brussels have moved first, although the European Union’s (EU) AI Act has so far been passed only in draft form.
“The AI Act presents a very horizontal regulation, one that tries to tackle AI as a technology for all kinds of sectors, and then introduces what is often called a risk-based approach where it defines certain risk levels and adjusts the regulatory requirements depending on those levels,” Dr. Johann Laux told PYMNTS as part of “The Grey Matter” series presented by AI-ID.
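For readers unfamiliar with that structure: the draft Act sorts AI systems into tiers, from prohibited practices down to systems facing no new obligations, and scales the rules accordingly. The simplified Python sketch below paraphrases that tiering; the tier names track the draft Act, but the example use cases and obligation summaries here are rough illustrations, not legal text.

```python
# A simplified, illustrative sketch of the AI Act's risk-based approach:
# the draft regulation sorts AI systems into tiers and scales the
# obligations accordingly. The examples and summaries below are a rough
# paraphrase of the draft, not actual legal text.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligation": "prohibited outright",
    },
    "high": {
        "examples": ["CV-screening tools", "credit scoring"],
        "obligation": "conformity assessment, risk management, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency duties (e.g., disclosing that users are talking to an AI)",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "obligation": "no new obligations",
    },
}

def obligations_for(tier: str) -> str:
    """Look up the regulatory burden attached to a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligations_for("high"))
# -> conformity assessment, risk management, human oversight
```

The practical upshot of this design is that a chatbot and a credit-scoring model face very different burdens under the same law, which is why, as discussed below, so much of the lobbying has focused on where the high-risk line is drawn.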
Laux explained that the EU approach draws on a historical model for regulating products that stems from the industrial era, which “may or may not go well” for the modern digital era.
The European approach to AI regulation stands in contrast to the more hands-off stance taken by the U.K. and the U.S.
Companies like Meta, Google and Microsoft have actively engaged in the lobbying process surrounding the drafting of the AI Act, with many executives arguing that it could “stifle innovation.”
But then again, they are supposed to say that.
ChatGPT creator OpenAI lobbied for — and got — changes to the EU’s AI legislation, succeeding in its efforts to alter “significant elements” of the AI Act and reducing the regulatory burdens the company would have faced.
“You could think that OpenAI got what they wanted through lobbying, and that’s definitely part of the story. On the other side, there are also scholars who are independent who would say it makes no sense to classify foundation models as high risk because it depends on where you’re going to use them and the foreseeability of using foundation models is limited,” Laux said.
Read more: How AI Regulation Could Shape Three Digital Empires
In general, tech sector oversight rules that come out of the EU tend to be heavily oriented toward consumer privacy and less focused on how emerging innovations could be used in commerce.
In what’s known as the “Brussels Effect,” multinational companies frequently standardize their global operations so that they adhere to EU regulations no matter where the business takes place.
And under the proposed EU AI Act, which is expected to take effect within the next 12 to 24 months, AI developers hoping to use data from the 27-nation bloc to train their algorithms will be bound by the EU’s regulatory constraints even beyond the bloc’s borders.
“There’s a risk-layered system with the AI Act, and most of the lobbying focused on which AI systems were going to be deemed high risk,” Laux said. “We can definitely expect that industry interests are going to play a huge role in shaping the implementation of the AI Act.”
“Usually, the knowledge about a technology lies in the industry that is developing it. And if you think again of those foundational models that we see in generative AI, such as ChatGPT’s LLM [large language model], when the Commission started working on the AI Act, this was a niche area of AI research, at least for us in the policy domain. And now it’s the first thing that everybody talks about when they talk about AI,” he said.
This reliance on industry expertise raises concerns that industry interests could end up shaping the technical standards that will govern AI systems. That phenomenon, known as regulatory capture, occurs when regulators come to serve the interests of the industry they oversee.
“If you actually have the skills to regulate something like the AI industry, if you have some deeper knowledge, deeper understanding, then the most profitable jobs will be in the industry and not with the public regulators,” Laux said.
As the AI Act moves forward, it is essential to address both industry and consumer concerns and to ensure that industry influence does not compromise the regulatory framework’s effectiveness.
And that will be a particularly delicate task to pull off.
“The AI Act is really special because, first of all, it relies on technical standards that will have to be developed to implement it,” Laux said. “If you want to audit AI systems, we need an audit industry to emerge.”
“When it comes to something like standardization, the stakes are high for industry influence when it involves questions where the answer isn’t clear, such as something like fairness. What fairness means requires you to either endorse a specific interpretation of fairness or at least to specify acceptable trade-offs between competing interests,” he said.
This could give AI developers considerable discretion to define their own benchmarks of fairness and then roll them out, Laux said.
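To see why that discretion matters, consider how two widely used fairness metrics can disagree about the very same system. The short Python sketch below, using purely hypothetical predictions and group labels rather than data from any real system, computes a demographic parity gap and an equal opportunity gap side by side.

```python
# A minimal sketch of why "fairness" forces a choice: two common metrics,
# demographic parity and equal opportunity, can diverge on the very same
# predictions. All data below is hypothetical and purely illustrative.

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates between groups A and B."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates (among actual positives) between groups."""
    def tpr(g):
        preds = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr("A") - tpr("B"))

# Hypothetical ground truth, model predictions, and group membership.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")          # 0.25
print(f"Equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.2f}")   # 0.50
```

On this toy data, the same model shows a 0.25 gap under one definition and a 0.50 gap under the other. A developer free to choose the benchmark can, in effect, choose how “fair” the system appears, which is exactly the kind of discretion at stake in the standardization process.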
There is a risk of audit capture, he noted, where auditors of AI systems may prioritize the interests of those they audit, raising questions about the integrity of the auditing process.
To mitigate the risk of “audit capture” and ensure a diverse pool of auditors, Laux suggested tools like auditor rotation and restrictions on revolving-door hiring between auditors and the companies they audit.
As the AI Act makes its way through the EU’s legislative process, addressing these concerns head-on will be crucial to ensuring that its implementation balances the need for technical expertise with the protection of fundamental rights, in keeping with the goal of responsible and ethical AI development.