China is reportedly preparing to unveil tougher rules for generative artificial intelligence (AI).
The Cyberspace Administration of China (CAC), the country’s internet watchdog, wants to develop a system in which companies would need a license to release generative AI models, the Financial Times (FT) reported Monday (July 10), citing sources close to Chinese regulators.
According to the report, the regulation would be a stricter version of draft rules issued this spring, which require companies to register AI products within 10 days of their debut. The licensing requirement is part of regulations that could be finalized as soon as this month, the sources said.
“It is the first time that [authorities in China] find themselves having to do a trade-off” between the two Communist Party goals of sustaining AI leadership and controlling information, Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, told the FT.
And a person close to the CAC’s discussions added: “If Beijing intends to completely control and censor the information created by AI, they will require all companies to obtain prior approval from the authorities.”
However, this source noted that regulation “must avoid stifling domestic companies in the tech race” and said that authorities “are wavering.”
As noted here last month, China leads the world in AI-powered surveillance and facial recognition tools but trails other countries in building innovative generative AI systems, owing to censorship rules that restrict the data that can be used to train foundation models.
The FT report says those censorship rules are part of the draft regulations, which state that AI content should “embody core socialist values” and not involve anything that “subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.”
PYMNTS has for months been exploring the “regulation/innovation tug-of-war” surrounding AI, most recently in a conversation Tuesday (July 11) with Jennifer Huddleston, technology policy research fellow at the Cato Institute.
“I have a lot of caution around calls to say ‘regulate AI writ large,’” she told PYMNTS, adding that because AI is used across so many activities, restricting it wholesale could have a wide-ranging impact.
Huddleston added that regulation should target “specific … clear-cut harm” to ensure that AI’s benefits are not undermined. The notion of harm, and how data is collected and used, may look different in, say, agriculture than in education.