By: Philipp Hacker, Andreas Engel & Marco Mauer (Oxford Business Law Blog)
The world’s attention has been captured by Large Generative AI Models (LGAIMs), the latest iteration of the much-anticipated and often misunderstood ‘Artificial Intelligence’ technology. These models are changing the way we create, visualize, and interpret content, transforming our work and personal lives in unprecedented ways. As the technology matures, every sector of society will be affected, from business and medicine to academia, the art world, and the tech industry itself. Yet along with the enormous potential for good come significant risks. LGAIMs are already used by millions of people to create human-like text, images, audio, and even video (ChatGPT, Stable Diffusion, DALL·E 2, Synthesia, and others). These tools may soon be embedded in systems that interview and evaluate candidates for everything from jobs to hospital and school admissions. The savings in time and labor could be substantial, allowing professionals to focus on more pressing matters and contributing to a better and more effective use of resources. Still, errors in this area are costly, and the risks cannot be ignored. The potential for these systems to be misused for manipulation, with fake news and malicious disinformation as prime examples, represents a whole new level of danger. This makes the question of how to regulate LGAIMs (or the reasons to leave them be) an urgent one, with wide-ranging and long-lasting consequences.
In this paper, the authors argue that AI regulation, particularly the framework under consideration in the EU, is not yet ready for the rise of this new generation of AI tools. While the EU has been at the vanguard of regulating new technologies and AI systems, including sophisticated legislation and legal instruments (the AI Act, the AI Liability Directive) as well as rules targeting platforms that make use of AI (the Digital Services Act, the Digital Markets Act), LGAIMs require and deserve specific attention and tailored solutions from legislators. So far, regulation in the EU and beyond has mainly dealt with more conventional AI models, with all their limitations, and has not yet grappled with the new generation of tools that have sprung up recently.
Against this backdrop, the authors critically examine the EU AI Act, which aims to address the risks presented by AI but, because of the versatility of LGAIMs, fails to adequately capture their dangers and downsides. Addressing every conceivable risk through a comprehensive risk management system under the proposed Article 9 of the AI Act could be unnecessary, costly, and burdensome. An alternative approach to regulating the risks posed by LGAIMs can, however, be imagined, one that focuses on concrete applications rather than on the base model…