Global Leaders Pledge to Develop AI Technology Safely Amid Regulatory Challenges

May 21, 2024

Sixteen prominent companies at the forefront of artificial intelligence (AI) development have pledged to global leaders to prioritize the safe advancement of this transformative technology. The commitment comes as the pace of innovation outstrips regulatory frameworks, raising concerns about emerging risks.

According to a report by Reuters, the pledge was made at a global meeting where industry giants such as Google, Meta, Microsoft and OpenAI joined forces with firms from China, South Korea and the United Arab Emirates.

This coalition was supported by a broader declaration from influential entities including the Group of Seven (G7) major economies, the European Union (EU), Singapore, Australia and South Korea. The virtual meeting, hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, served as a platform to underscore the importance of AI safety, innovation and inclusivity.

Emphasizing the urgency of the matter, President Yoon said AI safety is essential to societal wellbeing and democracy, citing concerns over risks such as deepfake technology. The agreement reached at the meeting formalized those priorities of safety, innovation and inclusivity, according to South Korea’s presidential office.

Participants stressed the importance of interoperability between governance frameworks, proposed establishing a network of safety institutes and called for engagement with international bodies to strengthen collective efforts to address AI-related risks.

Among the companies committing to AI safety were Zhipu.ai, which is backed by China’s tech giants Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, Amazon, IBM and Samsung Electronics, Reuters reported. These companies pledged to publish frameworks for assessing safety risks, to avoid developing or deploying models whose risks could not be adequately mitigated, and to uphold principles of governance and transparency.

Commenting on the declaration, Beth Barnes, founder of METR, a group dedicated to promoting AI model safety, underscored the necessity of international consensus to define “red lines” beyond which AI development could pose unacceptable risks to public safety, according to Reuters.

Source: Reuters