Amazon, Microsoft and Others Agree on AI Safety Commitment at Seoul Summit

Why Companies Must Take AI Implications Seriously

Artificial intelligence companies from around the globe came together to address the growing concerns surrounding the safety and responsible development of advanced AI systems.

The Seoul AI Safety Summit, held in the South Korean capital, saw Microsoft, Amazon, OpenAI and other major players from the United States, China, Canada, the United Kingdom, France, South Korea and the United Arab Emirates agree on a set of voluntary commitments aimed at ensuring the safe and ethical advancement of AI technology.

The agreement comes as the progress in AI has raised concerns about potential risks, such as automated cyberattacks, the development of bioweapons, and the misuse of the technology by malicious actors. To address these issues, the participating companies pledged to publish safety frameworks that outline their approach to measuring and mitigating the challenges associated with their most advanced AI models, often referred to as “frontier” models.

Safety First

Nicole Carignan, vice president of strategic cyber AI at Darktrace, a provider of global cybersecurity AI, told PYMNTS that the agreement will be important in achieving AI safety.

“We commend the recent AI safety commitments made by leading AI organizations as these efforts are critical to ensuring the safe and responsible use of the technology and hope this sparks similar commitments from every organization innovating with or adopting AI,” she said.

The safety frameworks will include clearly defined “red lines” that outline intolerable risks, such as the potential for AI systems to be used in automated cyberattacks or the development of bioweapons. Companies have agreed to refrain from developing or deploying AI models if these risks cannot be sufficiently mitigated, demonstrating their commitment to prioritizing safety and ethics in the advancement of AI technology.

Stephen Kowski, field chief technology officer of SlashNext, a California-based anti-phishing company, told PYMNTS that the agreement could have repercussions for businesses.

“This announcement demonstrates that vendors are trying to help CIOs remove risk from AI investments by committing not to develop or deploy AI models if risks cannot be mitigated below defined thresholds, with input from trusted actors, including home governments.”

The agreement also includes commitments to accountable governance structures and public transparency, ensuring that companies are held responsible for their actions in the development and deployment of AI systems. Regular reporting on AI systems’ capabilities, limitations and risk management practices will be required, fostering a culture of openness and accountability within the AI industry.

Carignan pointed out the importance of guidance from organizations such as the National Cyber Security Centre, the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology in helping companies realize their AI safety commitments.

“To help realize these commitments, organizations should look to guidance from NCSC and CISA for benchmarks on securing AI through the design, development, deployment and maintenance lifecycles or NIST’s draft AI Risk Management Framework, which highlights the importance of a robust testing, evaluation, verification and validation process for managing risk,” she explained.

Renewed Calls for Safety

As AI continues to evolve, experts are calling for similar commitments in related fields such as data science and data integrity. Carignan emphasized that “data integrity, testing, evaluation and verification, as well as accuracy benchmarks, are key components in the accurate and effective use of AI.” She also stressed the importance of encouraging diversity of thought on AI teams to help combat bias and harmful training data or outputs.

The public nature of these commitments allows customers and regulators to hold companies accountable for their actions in the development and deployment of AI technology.

“Customers will hold individual companies accountable with their words, but far more directly through their actions, i.e., their investment or lack thereof in a given company’s technology,” Kowski said.