Europe’s new AI legislation could reportedly go easy on open-source artificial intelligence models.
Lawmakers in Europe were still hammering out the details of the landmark AI Act on Thursday (Dec. 7), Reuters reported.
However, a document seen by the news outlet said the legislation would not apply to free, open-source AI models unless they are determined to be high risk or used for purposes that have already been banned.
Open-source models are ones in which, as the name suggests, the source code is shared openly, letting users voluntarily improve their function and design. Examples include Meta Platforms' Llama 2.
A number of leading AI models, however, remain closed-source, including the ones offered by Google, Microsoft, OpenAI and Anthropic.
As Reuters notes, the EU has been trying to finalize rules governing AI proposed by the European Commission in 2021, though keeping up with the technology has been a challenge.
A separate report by the news outlet, however, said lawmakers had agreed to a provisional deal, which would involve the European Commission keeping a list of AI models deemed to pose a "systemic risk." Providers of general-purpose AI models would also be required to release detailed summaries of the content used to train them.
Obstacles still remained, sources told Reuters, including how to govern the use of AI in biometric surveillance and how to handle access to source code.
The news comes two days after the launch of the AI Alliance, a group led by Meta and IBM whose members include NASA, Oracle, CERN, Intel and the Linux Foundation, and which is dedicated to the development of open-source AI.
“We believe it’s better when AI is developed openly — more people can access the benefits, build innovative products and work on safety,” Nick Clegg, Meta’s president for global affairs, said in the alliance’s founding statement.
As PYMNTS wrote Tuesday (Dec. 5), the AI Alliance’s goals contrast sharply with those of other industry players.
The alliance follows the launch of the Frontier Model Forum, an industry-led group whose members include Anthropic, Google and OpenAI and that also aims to ensure the "safe and responsible development" of the most powerful AI models.
“It shines a light on the growing divide between AI players that is not just constrained to the marketplace but spans global academia and beyond and centers around the responsible use, development, and deployment of AI,” the report said. “None of the members of the Frontier Model Forum have joined the AI Alliance, for example, and vice versa.”