
By: Jayne Ponder, Robert Huffman, Stephanie Barna & Jorge Ortiz (Inside Tech Media)
U.S. federal agencies and working groups promulgated a number of issuances in January 2023 related to the development and use of artificial intelligence (“AI”) systems. These updates join proposals in Congress to pass legislation related to AI. Specifically, in January 2023, the Department of Defense (“DoD”) updated Department of Defense Directive 3000.09; the National Artificial Intelligence Research Resource (“NAIRR”) Task Force released its final report on AI; and the National Institute of Standards and Technology (“NIST”) released its AI Risk Management Framework. Each is discussed below.
Department of Defense Directive 3000.09.
On January 25, 2023, the DoD updated Directive 3000.09, “Autonomy in Weapon Systems,” which governs the development and fielding of autonomous and semi-autonomous weapon systems, including those systems that incorporate AI technologies. The Directive has three primary purposes: (1) establishing a policy and assigning responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems; (2) establishing guidelines designed to minimize the probability and consequences of failures in such systems; and (3) establishing the “Autonomous Weapon Systems Working Group.” For example, the Directive provides that autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators “to exercise appropriate levels of human judgment” over the use of force, and that these systems must be subject to verification and validation testing to build confidence in each weapon system’s operation. The Directive also underscores that the design and development of AI capabilities in autonomous and semi-autonomous weapon systems must be consistent with the DoD’s AI Ethical Principles – specifically, that the AI is: (1) responsible; (2) equitable; (3) traceable; (4) reliable; and (5) governable. The Directive outlines a number of roles and responsibilities regarding oversight for autonomous and semi-autonomous weapon systems and provides guidance as to when senior review and approval are required to use these types of systems. Directive 3000.09 and the DoD’s AI Ethical Principles will be important for entities working with, and providing AI-enabled tools and services to, the DoD…