According to the Financial Times, the European Union’s pioneering legislation, the Artificial Intelligence Act, aims to protect humans from the potential dangers of AI. However, critics argue that it is undercooked and could stifle innovation in the rapidly evolving tech sector.
Andreas Cleve, CEO of Danish healthcare start-up Corti, is among those concerned. As he navigates the challenges of attracting investors and convincing clinicians to adopt his company’s AI “co-pilot,” Cleve now faces the additional hurdle of complying with the new AI Act. According to the Financial Times, many tech start-ups fear that this well-intentioned legislation could suffocate the emerging industry with excessive red tape.
Compliance costs, which European officials acknowledge could reach six-figure sums for a company with 50 employees, pose a significant burden. Cleve describes this as an extra “tax” on small enterprises in the bloc. “I worry about legislation that becomes hard to bear for a small company that can’t afford it,” he said. “It’s a daunting task to raise cash and now you’ve had this tax imposed. You also need to spend time to understand it.”
Despite these concerns, Cleve supports regulation of AI, emphasizing the importance of safeguards around potentially harmful products. “The AI Act is a good idea but I worry that it will make it very hard for deep tech entrepreneurs to find success in Europe,” he said.
The act, which formally comes into force in August and will be implemented in stages over the next two years, is the first of its kind, emerging from the EU’s desire to become the “global hub for trustworthy AI.” It categorizes AI systems by risk, with minimal-risk applications like spam filters remaining unregulated. Limited-risk systems, such as chatbots, must meet certain transparency obligations. The strictest regulations apply to high-risk systems, which might profile individuals or process personal data, per the Financial Times.
The rules demand greater transparency on data usage, the quality of data sets used to train models, clear information to users, and robust human oversight. Medical devices and critical infrastructure fall within this high-risk category.
EU officials assert that the AI legislation is designed to foster technological innovation with clear regulations. They highlight the dangers of human-AI interactions, including risks to safety and security, and potential job losses. The drive to regulate also stems from concerns that public mistrust in AI could hinder technological progress in Europe, leaving the bloc behind superpowers like the US and China.