Sixteen prominent companies leading the charge in Artificial Intelligence (AI) development have pledged to global leaders to prioritize the safe advancement of this transformative technology. The commitment comes as rapid innovation outpaces regulatory frameworks, raising concerns about emerging risks.
According to a report by Reuters, the pledge was made during a global meeting at which industry giants such as Google, Meta, Microsoft and OpenAI joined forces alongside firms from China, South Korea and the United Arab Emirates.
This coalition was supported by a broader declaration from influential entities including the Group of Seven (G7) major economies, the European Union (EU), Singapore, Australia and South Korea. The virtual meeting, hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, served as a platform to underscore the importance of AI safety, innovation and inclusivity.
Emphasizing the urgency of the matter, President Yoon highlighted that AI safety is essential to societal wellbeing and democracy, citing risks such as deepfake technology, according to South Korea’s presidential office.
Participants stressed the significance of interoperability between governance frameworks, proposed the establishment of a network of safety institutes and advocated for engagement with international bodies to strengthen collective efforts in addressing AI-related risks effectively.
Among the companies committing to AI safety were Zhipu.ai, backed by China’s tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, Amazon, IBM and Samsung Electronics, as reported by Reuters. These entities pledged to publish safety frameworks for assessing risks, to steer clear of models whose risks could not be adequately mitigated, and to uphold principles of governance and transparency.
Commenting on the declaration, Beth Barnes, founder of METR, a group dedicated to promoting AI model safety, underscored the necessity of international consensus to define “red lines” beyond which AI development could pose unacceptable risks to public safety, according to Reuters.
Source: Reuters