India wants tech companies to get government permission before launching untested artificial intelligence (AI) products.
The country’s IT ministry has said tools that are “unreliable” or still being trialed must get the government’s go-ahead before release, Reuters reported Monday (March 4).
As Reuters notes, the government’s actions came soon after a top minister criticized Google’s Gemini AI tool for a response saying that Indian Prime Minister Narendra Modi had been accused of implementing “fascist” policies.
Google said it had addressed the issue and cautioned that its tool “may not always be reliable,” especially when it came to politics and current events, the Reuters report said.
“Safety and trust is platforms legal obligation. ‘Sorry Unreliable’ does not exempt from law,” deputy IT minister Rajeev Chandrasekhar wrote on X in response to Google’s statement.
According to Reuters, India — which will hold its general election this summer — is now also asking companies to make sure AI tools do not “threaten the integrity of the electoral process.”
Those sorts of concerns were on display recently in the U.S., when a bipartisan congressional group launched an effort to explore AI legislation.
The importance of addressing AI-related issues was spotlighted earlier this year, when a fake robocall impersonating President Joe Biden attempted to dissuade voters from supporting him in New Hampshire’s Democratic primary election.
In response, the Federal Communications Commission (FCC) declared calls made using AI-generated voices illegal.
Speaking on the importance of establishing regulatory frameworks, House Democratic leader Hakeem Jeffries emphasized that “the rise of artificial intelligence also presents a unique set of challenges and certain guardrails must be put in place to protect the American people.”
As PYMNTS wrote in January, 2024 could be a year in which AI policies are created and implemented on a national and even international level.
“But regulation of AI is a complex and evolving topic that involves various considerations — not the least of which is the fact that the technology knows no borders, putting a spotlight on global cooperation and coordination around industry standardization, similar to frameworks that apply to financial regulations, or to cars and healthcare,” that report said.
And few observers, that report added, think it will be possible to create effective oversight of AI with just one piece of legislation.
“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania law professor Cary Coglianese told PYMNTS in an interview for part of the “TechReg Talks” series. “It’s not one static thing.”