By Philipp Hacker (Oxford Business Law Blog)
Advanced machine learning (ML) techniques, such as deep neural networks or random forests, are often said to be powerful, but opaque. However, a burgeoning field of computer science is committed to developing machine learning tools that are interpretable ex ante or at least explainable ex post. This has implications not only for technological progress, but also for the law, as we explain in a recent open-access article.
On the legal side, algorithmic explainability has so far been discussed mainly in data protection law, where a lively debate has erupted over whether the European Union’s General Data Protection Regulation (GDPR) provides for a ‘right to an explanation’. While the obligations flowing from the GDPR in this respect remain quite uncertain, we show that more concrete incentives to adopt explainable ML tools may arise from contract and tort law.
To this end, we conduct two legal case studies of ML applications in medicine and corporate mergers. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability, and demonstrate its effect in a technical case study.
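The accuracy–explainability trade-off mentioned above can be illustrated with a minimal sketch (this is not the article's own case study, and the synthetic data and model choices are assumptions for illustration): a logistic regression is interpretable ex ante, since each coefficient maps directly to a feature's effect, while a random forest aggregates hundreds of trees into a prediction no single human-readable rule explains.

```python
# Illustrative sketch (hypothetical data, not the authors' experiment):
# compare an interpretable model with an opaque one on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification task with 20 features, 10 informative.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable ex ante: each coefficient is a readable feature effect.
interpretable = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Opaque: 200 trees, no single decision rule a reader can inspect.
opaque = RandomForestClassifier(n_estimators=200,
                                random_state=0).fit(X_tr, y_tr)

print(f"logistic regression accuracy: {interpretable.score(X_te, y_te):.3f}")
print(f"random forest accuracy:       {opaque.score(X_te, y_te):.3f}")
```

On many non-linear tasks the opaque model scores higher, which is precisely the tension a legally required explanation confronts: choosing the interpretable model may cost predictive accuracy.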