Governments around the world are racing to regulate the use of artificial intelligence (AI) tools as the technology rapidly advances, posing new challenges and risks. From Australia to the United Nations, national and international governing bodies are taking various steps to establish guidelines and laws for AI, Reuters reported.
Let’s take a closer look at the latest developments in different countries and organizations:
Australia is planning to require search engines to draft new codes to prevent the sharing of AI-generated child sexual abuse material and the production of deepfake versions of such material. The country’s internet regulator aims to combat the misuse of AI in these sensitive areas.
In Britain, the Financial Conduct Authority is consulting with the Alan Turing Institute and other legal and academic institutions to enhance its understanding of AI. The competition regulator is also examining the impact of AI on consumers, businesses, and the economy to determine if new controls are necessary.
China has already implemented temporary regulations, requiring service providers to undergo security assessments and obtain clearance before releasing mass-market AI products. Several Chinese tech firms, including Baidu Inc and SenseTime Group, have launched their AI chatbots to the public after receiving government approvals.
The European Union (EU) is planning regulations and has called for a global panel to assess the risks and benefits of AI, modeled on the Intergovernmental Panel on Climate Change (IPCC). EU lawmakers have agreed to changes in a draft of the bloc’s AI Act, with the biggest point of contention being facial recognition and biometric surveillance. Some lawmakers advocate a total ban, while EU countries seek exceptions for national security, defense, and military purposes.
France’s privacy watchdog, CNIL, is investigating possible breaches related to ChatGPT, an AI chatbot. France’s National Assembly has approved the use of AI video surveillance during the 2024 Paris Olympics, despite concerns raised by civil rights groups.
The Group of Seven (G7) leaders have acknowledged the need for governance of AI and immersive technologies. They have agreed to have ministers discuss the technology as part of the “Hiroshima AI process” and report the results by the end of 2023. G7 digital ministers have also recommended adopting “risk-based” regulation on AI.
Ireland’s data protection chief has emphasized the need to regulate generative AI properly rather than rushing into prohibitions, arguing that governing bodies should strike the right balance between innovation and the protection of human rights.
Israel is also seeking input on AI regulations to strike a balance between innovation and human rights. The country has published a draft AI policy and is collecting public feedback before making a final decision.
Italy’s data protection authority plans to review artificial intelligence platforms and hire AI experts to ensure compliance with privacy rules. ChatGPT faced temporary bans in Italy over concerns about privacy breaches.
Japan expects to introduce regulations on AI by the end of 2023, which are likely to be closer to the lighter-touch U.S. approach than the stringent rules planned in the EU. The country’s privacy watchdog has warned OpenAI, the developer of ChatGPT, not to collect sensitive data without people’s permission.
Spain’s data protection agency is investigating potential data breaches by ChatGPT and has requested the EU’s privacy watchdog to evaluate privacy concerns surrounding the AI tool.
The United Nations (UN) has recognized the importance of regulating AI and held its first formal discussion on the topic. UN Secretary-General Antonio Guterres supports the creation of an AI watchdog similar to the International Atomic Energy Agency, and has announced plans to establish a high-level AI advisory body to review AI governance arrangements.
In the United States, Congress held hearings on AI, and the White House announced voluntary commitments governing AI signed by companies like Adobe, IBM, and Nvidia. These commitments include steps such as watermarking AI-generated content. The U.S. Federal Trade Commission has opened an investigation into OpenAI on claims of consumer protection law violations. Additionally, a Washington D.C. district judge ruled that AI-generated artwork without human input cannot be copyrighted under U.S. law.
As governments worldwide grapple with the regulation of AI, it is clear that the technology’s impact on society and the need for responsible governance are at the forefront of discussions. The race to regulate AI tools reflects the complexity of the task and the importance of striking the right balance between innovation and protecting individuals’ rights.
Source: Reuters