According to the Financial Times, the European Union’s pioneering legislation, the Artificial Intelligence Act, aims to protect humans from the potential dangers of AI. Critics, however, argue that the law is undercooked and could stifle innovation in the rapidly evolving tech sector.
Andreas Cleve, CEO of Danish healthcare start-up Corti, is among those concerned. As he navigates the challenges of attracting investors and convincing clinicians to adopt his company’s AI “co-pilot,” Cleve now faces the additional hurdle of complying with the new AI Act. Many tech start-ups fear that this well-intentioned legislation might suffocate the emerging industry with excessive red tape, reported the Financial Times.
Compliance costs, which European officials acknowledge could reach six-figure sums for a company with 50 employees, pose a significant burden. Cleve describes this as an extra “tax” on small enterprises in the bloc. “I worry about legislation that becomes hard to bear for a small company that can’t afford it,” he said. “It’s a daunting task to raise cash and now you’ve had this tax imposed. You also need to spend time to understand it.”
Despite these concerns, Cleve supports regulation of AI, emphasizing the importance of safeguards around potentially harmful products. “The AI Act is a good idea but I worry that it will make it very hard for deep tech entrepreneurs to find success in Europe,” he said.
The act, which formally comes into force in August 2024 and will be implemented in stages over the following two years, is the first of its kind, emerging from the EU’s ambition to become the “global hub for trustworthy AI.” It categorizes AI systems by risk, with minimal-risk applications like spam filters remaining unregulated. Limited-risk systems, such as chatbots, must meet certain transparency obligations. The strictest regulations apply to high-risk systems, which might profile individuals or process personal data, per the Financial Times.
The rules demand greater transparency around data usage and the quality of the data sets used to train models, clear information for users, and robust human oversight. Medical devices and critical infrastructure fall within this high-risk category.
EU officials assert that the AI legislation is designed to foster technological innovation with clear regulations. They highlight the dangers of human-AI interactions, including risks to safety and security, and potential job losses. The drive to regulate also stems from concerns that public mistrust in AI could hinder technological progress in Europe, leaving the bloc behind superpowers like the US and China.