
Criminal Negligence and Acceptable Risk in the EU’s AI Act: Casting Light, Leaving Shadows

October 2, 2024

By: Leonardo Romano (DCU Law & Tech Blog)

Regulators and policymakers in Europe are currently navigating a ‘post-modernity shock’: the complex task of balancing competing goals and interests raised by the wide-ranging and varied technologies grouped under the ambiguous term Artificial Intelligence (AI). The urgent need to ensure that Europe benefits economically and socially from AI stands in tension with the need to establish criminal liability when self-learning algorithms, behaving unpredictably, cause harm to individuals or society.

In determining who should bear liability for harm arising from production activities, the relevant legal framework typically revolves around negligence offences. In the context of AI, this raises a broader question: how much risk from (potentially dangerous but socially beneficial) intelligent products is European society prepared to accept? The conceptual tool for answering this question is the ‘area of permitted or acceptable risk’ (erlaubtes Risiko). Long debated in criminal law doctrine and now gaining renewed relevance in discussions of AI technologies, this concept introduces a ‘margin of tolerance’: within it, operators cannot be held criminally liable for generic negligence when harmful events occur despite adherence to established precautionary norms.

The challenge lies in balancing social utility against the protection of the legal interests threatened by AI. This raises critical questions about the scope of the acceptable risk area and the identification of objective standards of diligence, particularly as regards the responsibilities of AI providers. Defining the boundaries of that risk, and determining what constitutes acceptable behaviour for AI systems, is crucial to fostering AI technologies that benefit society while ensuring legal clarity and safeguarding against potential harms…
