New artificial intelligence (AI) regulations are rolling out around the world.
Colorado has become the first U.S. state to regulate AI, aiming to prevent consumer harm and discrimination. New requirements for high-risk AI systems will start in 2026.
Meanwhile, the EU’s comprehensive AI Act, set to influence global regulations, will take effect soon, and Japan is considering requiring major AI developers to disclose information for security and fairness.
Colorado became the first U.S. state to create a regulatory framework for AI last week, as Gov. Jared Polis signed a bill setting guardrails for companies developing and using the technology.
The law, which takes effect in 2026, aims to prevent consumer harm and discrimination from AI systems increasingly deployed in sensitive areas like hiring, banking and housing. It imposes new requirements on developers and users of “high-risk” AI.
Polis, a Democrat, expressed reservations even as he signed the legislation.
“While the guardrails, long timeline for implementation, and limitations contained in the final version are adequate for me to sign this legislation today, I am concerned about the impact this law may have on an industry that is fueling critical technological advancements,” he wrote in a signing statement.
The governor said he hopes the conversation around AI regulation continues at the state and federal levels, cautioning against a patchwork of state policies. A similar bill in Connecticut failed during that state’s recent legislative session.
The Colorado law was sponsored by Democratic legislators and passed in the final days of the legislative session that concluded May 8. It comes as a flurry of AI developments — from OpenAI’s ChatGPT to Google’s Gemini — sparks a global reckoning over the technology.
The European Union’s pioneering legislation to regulate AI cleared its final hurdle, paving the way for the law to take effect within weeks.
The Council of Ministers, one of the EU’s central lawmaking bodies, approved the AI Act following the European Parliament’s adoption of the measure in March. The legislation, dubbed the world’s first AI law, comes three years after the European Commission initially proposed it.
“The EU AI Act does not ‘cherry-pick’ specific areas of AI, for example, the regulation of generative AI, but rather follows an all-embracing approach, trying to set the scene for developers, deployers and those affected by the use of AI,” Nils Rauer, a technology law expert at law firm Pinsent Masons in Frankfurt, Germany, said in a statement.
The AI Act will become effective 20 days after its publication in the Official Journal of the EU, though most of its provisions will not take effect for up to two years. It sets strict requirements for high-risk AI systems and prohibits some uses altogether.
Experts said the legislation is likely to influence AI regulation in other jurisdictions.
“The introduction of data governance in respect of training, validation and test datasets about high-risk AI systems is a positive development as it will drive the maturity and hygiene of those using software and AI in their operations,” said Wouter Seinen, an Amsterdam-based lawyer at Pinsent Masons.
The AI Act’s passage comes amid a surge of concern over the technology following the release of chatbots like ChatGPT. Lawmakers have grappled with balancing innovation against the technology’s potential risks.
“EU legislators have decided to combine two structural concepts in one piece of law,” Rauer said. “There is the risk-based approach for AI systems and a separate concept applying to general-purpose AI. It remains to be seen how these two concepts work together.”
The European Union has taken a significant step forward in regulating artificial intelligence with the approval of the EU AI Act, U.K. AI lawyer Matt Holman told PYMNTS.
“The approval of the EU AI [Act] is an extremely important step forward for AI regulation as it is unlike any law anywhere else on earth,” Holman said. “It creates, for the first time, a detailed regulatory regime for AI.”
The new law, which is technology- and sector-agnostic, will require anyone developing, creating, using or reselling AI in the EU to comply with its provisions. The law seeks to control AI as it is developed, trained and deployed.
U.S. tech giants have been closely monitoring the development of the law, particularly given the significant funding that has flowed into public-facing generative AI systems, which will need to comply with the new regulations, some of them quite onerous, according to Holman.
“They will need to ensure AI literacy across their staff base and transparency with users about what the AI does and how it uses their data,” he said.
The law’s implementation will be staggered, with different provisions coming into force at different stages. The provisions outlawing prohibited AI will come into force first, followed by the provisions on penalties, transparency, AI literacy and CE marking obligations. Rules for high-risk AI systems will take effect last. AI products deemed high risk include tools used for recruitment decisions or law enforcement.
Holman said the law includes GDPR-style fines, with the largest tier set at 35 million euros ($37.9 million) or 7% of worldwide turnover for using outlawed AI. Firms that breach procedural rules face a fine of 15 million euros ($16.3 million) or 3% of worldwide turnover, and providing misleading information can cost companies 7.5 million euros ($8.1 million) or 1% of worldwide turnover.
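For illustration, the tiered penalty ceilings reduce to a simple calculation. The sketch below assumes, as with GDPR fines, that the applicable ceiling is the higher of the fixed amount and the turnover percentage; the article does not spell that rule out, and the tier names and function here are hypothetical.

```python
# Illustrative sketch of the EU AI Act's tiered fine ceilings as reported.
# Assumption (not stated in the article): as with GDPR, the higher of the
# fixed amount and the turnover share applies. All names are hypothetical.

FINE_TIERS = {
    # violation type: (fixed ceiling in EUR, share of worldwide annual turnover)
    "prohibited_ai": (35_000_000, 0.07),         # using outlawed AI
    "procedural_breach": (15_000_000, 0.03),     # breach of procedural rules
    "misleading_information": (7_500_000, 0.01), # providing misleading information
}

def fine_ceiling(violation: str, worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a violation, assuming the higher of the
    fixed amount and the turnover percentage applies."""
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * worldwide_turnover_eur)

# Example: a firm with EUR 1 billion in worldwide turnover deploying
# prohibited AI faces a ceiling of max(35M, 7% of 1B) = EUR 70 million.
print(fine_ceiling("prohibited_ai", 1_000_000_000))  # 70000000.0
```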
The Japanese government is considering requiring major AI developers to disclose certain information as part of its basic policies on AI regulation, according to Japan’s Kyodo News.
According to a draft of the policies, the government is also exploring controls on AI from a security perspective, in step with future technological developments and international discussions. The draft emphasizes the importance of a transparent approach to ensuring the fairness and necessity of AI regulations.
The draft also states that the government will consider the types of regulations needed to control AI systems that could be linked to crimes and human rights violations.
Furthermore, the government is expected to discuss creating a framework requiring major AI operators, whose systems significantly impact the public, to make certain adjustments and disclose relevant information. This policy aims to ensure that developers share safety information with the government.