
EU’s AI Legislation: Protection or Innovation Stifler?

July 17, 2024

According to the Financial Times, the European Union’s pioneering Artificial Intelligence Act aims to protect humans from the potential dangers of AI. However, critics argue that the legislation is undercooked and could stifle innovation in the rapidly evolving tech sector.

Andreas Cleve, CEO of Danish healthcare start-up Corti, is among those concerned. As he navigates the challenges of attracting investors and convincing clinicians to adopt his company’s AI “co-pilot,” Cleve now faces the additional hurdle of complying with the new AI Act. Many tech start-ups fear that this well-intentioned legislation might suffocate the emerging industry with excessive red tape, reported the Financial Times.

Compliance costs, which European officials acknowledge could reach six-figure sums for a company with 50 employees, pose a significant burden. Cleve describes this as an extra “tax” on small enterprises in the bloc. “I worry about legislation that becomes hard to bear for a small company that can’t afford it,” he said. “It’s a daunting task to raise cash and now you’ve had this tax imposed. You also need to spend time to understand it.”

Despite these concerns, Cleve supports regulation of AI, emphasizing the importance of safeguards around potentially harmful products. “The AI Act is a good idea but I worry that it will make it very hard for deep tech entrepreneurs to find success in Europe,” he said.

The act, which formally comes into force in August and will be implemented in stages over the next two years, is the first of its kind, emerging from the EU’s ambition to become the “global hub for trustworthy AI.” It categorizes AI systems by risk: minimal-risk applications such as spam filters remain unregulated, while limited-risk systems such as chatbots must meet certain transparency obligations. The strictest regulations apply to high-risk systems, which might profile individuals or process personal data, per the Financial Times.

For high-risk systems, the rules demand greater transparency about data usage, higher quality standards for the data sets used to train models, clear information for users, and robust human oversight. Medical devices and critical infrastructure fall within this category.

EU officials assert that the AI legislation is designed to foster technological innovation by providing clear rules. They highlight the dangers of human-AI interactions, including risks to safety and security and potential job losses. The drive to regulate also stems from concerns that public mistrust in AI could hinder technological progress in Europe, leaving the bloc behind superpowers like the US and China.

Source: The Financial Times