Seeking to address growing concerns about the ethical and societal implications of artificial intelligence (AI), Chile has introduced pioneering legislation to regulate AI systems within its borders. Drawing inspiration from the European Union’s regulatory framework, the proposed legislation sets out obligations that vary according to the level of risk posed by an AI application.
The bill, presented to the lower house of the Chilean Congress on May 7, distinguishes between categories of AI systems based on their potential risks and impacts. At the forefront of the legislation are measures designed to safeguard fundamental rights, mitigate potential harms to health, safety and the environment, and ensure consumer protection.
Under Chile’s proposed legislation, AI systems are classified into distinct risk categories:
- Unacceptable Risk Systems: These are AI systems deemed incompatible with the respect and guarantee of individuals’ fundamental rights, thereby warranting their outright prohibition. Notably, this category encompasses manipulative systems, with exceptions made for those serving authorized therapeutic purposes and with informed consent. Additionally, systems that exploit vulnerabilities to instigate harmful behaviors fall under this classification.
- High-Risk Systems: This category encompasses AI systems posing significant risks to health, safety, fundamental rights, the environment and consumer rights. Stringent regulations are envisaged to govern the development, deployment and usage of such systems to mitigate potential adverse outcomes.
- Limited-Risk Systems: AI systems presenting an insignificant risk of manipulation, deception or error during interaction with individuals are categorized as limited-risk systems. These systems are subject to lighter regulatory scrutiny compared to higher-risk counterparts.
- AI Systems without Obvious Risk: Finally, the legislation recognizes AI systems that do not exhibit discernible risks, signifying a lower regulatory burden for such applications.
Crucially, the bill outlines several prohibitions and obligations aimed at addressing specific concerns associated with AI deployment:
- Biometric Identification Systems: The use of AI for real-time biometric identification in public spaces is prohibited, except in cases of public security and criminal investigations.
- Facial Recognition Systems: AI systems that scrape facial images from public sources or closed-circuit television footage are explicitly banned.
- Emotional State Evaluation Systems: Systems designed to evaluate an individual’s emotional state using AI algorithms are prohibited.
- Data Governance and Cybersecurity: The legislation imposes obligations on AI developers and operators to adhere to robust data governance and cybersecurity standards to mitigate the risk of data breaches and misuse.
Source: BN Americas