In a major move to regulate the booming Artificial Intelligence (AI) industry, seven of the top tech companies have made voluntary commitments to the White House, with a series of measures aimed at ensuring AI systems that are secure, safe and transparent.
According to Reuters, representatives from the companies, including OpenAI, Alphabet, Meta Platforms and Microsoft, recently concluded meetings with the Biden administration to hammer out a set of AI safeguards.
The move has drawn praise from President Joe Biden who, in a statement, said, "We must be clear-eyed and vigilant about the threats emerging technologies can pose. Social media has shown us the harm that powerful technology can do without the right safeguards in place. These commitments are a promising step, but we have a lot more work to do together."
The sweeping agreements, centering on safety, security and trust, include the companies investing in cybersecurity, external and internal testing of AI systems before their release, and developing a system to ‘watermark’ all AI-generated content.
The companies have also pledged to focus on protecting users’ privacy and ensuring that the technology is free from bias and not used to discriminate against vulnerable groups.
Read more: UN Security Council Holds First Formal Discussion On Artificial Intelligence
These voluntary commitments are especially important given that the US lags behind the European Union (EU) in tackling AI regulation. The EU has recently agreed on a set of draft rules which require AI systems to disclose AI-generated content and help distinguish deep-fake images from real ones. Microsoft president Brad Smith believes his company's commitment goes beyond the White House pledge, including support for creating a 'licensing regime for highly capable models.'
Meanwhile, OpenAI chief executive Sam Altman recently concluded a global tour where he visited multiple countries to talk about the need for responsible AI. He believes that the pledge to submit to 'red team' tests that probe the companies' AI systems, while voluntary, is not an easy promise to make.
Apple CEO Tim Cook has also focused on the potential and dangers of AI, emphasizing the need for regulation and for companies to make their own ethical decisions about the technology.
Not surprisingly, the move has been met with some pushback, with critics such as Amba Kak, executive director of the AI Now Institute, voicing concern that 'closed-door' deliberation with corporate actors is not enough to ensure safety and transparency for users.
AI technology is driving massive change in the world today, bringing with it both promise and peril. There is no doubt that these measures taken by the industry giants are an important step forward in rising to the challenge of regulating AI. It is clear that the Biden-Harris Administration is taking AI very seriously and is determined to see these commitments through.
We can expect to see more tangible changes in the coming weeks, as the Administration continues to work on developing an executive order as well as bipartisan legislation, to ensure a safe and just future for the technology.
Source: Reuters