The perfect artificial intelligence (AI) regulation should ensure the technology doesn’t turn on its creators.
But it also needs to ensure that the regulation itself doesn’t turn on the technology, by dampening or stamping out innovation.
That’s why PYMNTS CEO Karen Webster sat down with Shaunt Sarkissian, CEO and founder of AI-ID, and asked him to play a game of “AI Regulation Roulette.”
The advent of AI has put nations around the world in a unique position and has certainly heralded the end of the world as we formerly knew it. But will the innovative technology one day lead to — as some doomsday adherents believe — the end of the world itself?
And where should the ball stop in the AI regulation game?
Sarkissian said that the age-old fear of AI turning against its creators is mostly unfounded and highlighted the importance of “air-gapping” AI systems from critical functions.
Giving AI a limited mission and role helps reduce the scope for catastrophic outcomes, he added, noting that military simulations consistently show that autonomous AI systems do not pose an existential threat.
But to wholly ensure the absence of any “Terminator scenario,” the world will need to not just figure out AI regulation, but also get it right.
And Sarkissian proposed a hypothetical three-step approach to doing so.
“You have to think about the regulation of the apparatus itself and the human touching or working with that apparatus. Number one: What does the human need to know about the AI? It needs to know that first, it is an AI and not human, and it needs to know its rules and what it is supposed to, and allowed to, do,” he said.
After ensuring transparency and defining AI’s roles and rules, it is important to compartmentalize the AI’s functions to restrict its scope and purpose, he added.
Third, and most critical, Sarkissian said, is developing specific rules and regulations for different use cases, with a focus on copyright protection and compensation.
“For everything to work and happen, you need to establish the copyright and compensation rules; here’s what you have to track and what you have to pay for, as well as what you can’t,” he explained.
Key to balancing innovation and regulation in the burgeoning AI landscape is acknowledging that while the technology is ever-evolving, use cases often fit within existing regulatory frameworks.
“Any idea that regulation is going to be globally ubiquitous is a fool’s errand,” Sarkissian said.
He suggested an approach where regulations primarily target use cases, with technology and security measures tailored to specific applications, arguing for example that within healthcare, the existing regulations, such as HIPAA, provide a strong foundation.
To adapt to AI’s role in healthcare, Sarkissian posited tweaking rules to label AI-generated content and ensure human approvals.
“Marry [AI] with what commerce laws already exist, tweak it as needed, and it works,” he said.
Still, one of the foundational challenges posed by AI is the assembly of data sets, raising questions about copyright protection and compensation for data usage. But even within that realm, Sarkissian pointed out that the provenance of data sets can be managed within existing regulations, ensuring transparency in data usage and compensation for copyright holders.
While Sarkissian believes the approach the U.S. is taking with the White House’s executive order on AI to be “the best bet on the board right now,” he noted the numerous challenges posed by the diverse, self-interested constituencies in the AI space and emphasized the importance of finding common ground.
Because of the speed at which AI technology’s capabilities are evolving, observers believe there is an opportunity for America to lead the way by supporting innovation while being smart and clear-eyed about the risks of AI technology.
“Anybody thinking about regulation needs to start with an outside-in approach. If you’re looking at concentric circles, go with the farthest thing and create the final boundary,” said Sarkissian, noting that a “final boundary” could be something apocalyptic. Then, by narrowing down the broad boundaries, governments can allow innovation to thrive within those constraints.
“You will never be as smart as the innovator, and the innovator will always think about ways to work around you,” he said.
And as global initiatives ramp up across the U.S., U.K., EU, China and beyond, the regulatory landscape is becoming increasingly multifaceted.
“This is one case where the house will always win a little bit,” Sarkissian said.
“But I’m encouraged by what I saw in the framework of the White House’s executive order,” he added, particularly the focus on critical standards and safety guidelines, as well as the commitment to identifying AI-generated content.
President Joe Biden has called his executive order on AI “the most significant action any government, anywhere in the world, has ever taken on AI safety, security and trust.”
For his part, Sarkissian noted that the EU’s AI Act, which unlike the executive order represents an actual piece of legislation, is “a little myopic.”
“It’s focused on content and on consumer protections, which are not bad things to be focused on, but [the AI Act] still is really blurred by GDPR [General Data Protection Regulation],” he said. “It’s taken the lessons of GDPR and said, we’re going to limit informing the models. We’re going to put everybody in control of how data is used and shared, as opposed to the commerce effects of that data.”
While the EU was the first government to draft a piece of legislation geared toward regulating AI, China was the first major economy to enact an AI policy, with a prescriptive set of guardrails that took effect in August.
Former President Barack Obama recently observed — as have others before him — that those developing AI in the private sector should engage with the government to educate policymakers about AI’s potential and associated risks.
“When it comes to engagement and education, more is better than less,” Sarkissian said.
He added that the evolving dynamics between the government and AI innovators underscore the importance of government agencies setting high-level standards and criteria for AI companies hoping to work with them.