VC-Led Startup Safety Commitments Spark Debate Around AI Accelerationism 

Artificial intelligence (AI) was no doubt the most pivotal technology of the past year.

And it may just prove to be the most pivotal innovation of the 21st century.

But as the calendar turns, the powers and perils of generative AI, and increasingly the potential for artificial general intelligence (AGI), are marching into 2024 unregulated in the U.S. — if not unnoticed.

A group of over 40 venture capital (VC) firms, including General Catalyst, Felicis Ventures, Bain Capital, IVP, Insight Partners and Lux Capital, has signed a set of voluntary commitments governing how the startups they back should develop AI technology responsibly as the technology, and the companies behind it, continue to grow.

Sound familiar? That’s because voluntary commitments around AI safety are all the rage. The White House had one, and the commitments signed by the more than three dozen VC firms were in fact created with feedback from the Biden administration’s Department of Commerce.

The initiative, announced Tuesday (Nov. 14), comes from Responsible Innovation Labs, an organization founded by Hemant Taneja, CEO of VC firm General Catalyst. It centers on five key commitments: a general commitment to responsible AI, including internal governance; appropriate transparency and documentation; risk and benefit forecasting; auditing and testing; and feedback cycles and ongoing improvements.

Despite the intentions behind the VC-led commitment, reception within the AI startup ecosystem has been divided, with some observers alleging that the commitments hurt AI safety more than they help it.

Read also: Calls to Pause AI Development Miss the Point of Innovation

Regulatory Capture or Responsible Development?

Right now, China is the only country to have passed a policy framework meant to regulate AI. Identifying the most effective approach to regulating AI’s varied use cases has become a priority for governments around the world, as well as for multinational bodies like the United Nations, which has voiced concern about the technology’s risks.

U.S. President Joe Biden’s recent executive order on AI established evaluation standards for the most powerful AI models, but the startup ecosystem has been left relatively untouched.

As it stands, much of the leading innovation in AI has been happening within the U.S.

The VC-signed voluntary agreement is meant to demonstrate private-sector leadership in controlling for AI’s risks, but it has sparked a debate among AI founders. Some in the field have even pulled out of scheduled meetings with VCs, citing “how public statements like RAI endanger open-source AI research and contribute to regulatory capture.”

“Thanks, very helpful list to not accept money from,” tweeted one Web3 CEO. 

“The *big* problem here is point 4. It is literally impossible to *ensure* safety of a general purpose model, and attempts to do so are likely to *reduce* safety,” said fast.ai co-founder Jeremy Howard.

Many responses focused on how VCs and other private investors are ill-equipped to do things like audit AI models, and on how much of the agreement’s language centers on risk to VCs and their limited partners (LPs) rather than on propelling the broader ecosystem forward.

Read also: AI Policy Group Says Promising Self-Regulation Is Same Thing as No Regulation

Why Regulating AI Is Like Catching Smoke

General Catalyst’s Taneja replied to the criticism on Wednesday (Nov. 15).

He reiterated the “importance of preserving open source, its pivotal role in establishing AI competitiveness, reframing political fear into opportunity, and its security benefits.” He noted the importance of “preventing our best technology from falling into the wrong anti-democratic hands while not overreacting and restricting ourselves.” He acknowledged concerns around regulatory capture by promising to “create better feedback cycles so government hears from founders at all stages rather than just companies with massive resources and DC presence.” And he underscored the “need to avoid bad/over regulation (like some EU proposals), and the key differences between internal governance and regulation.”

PYMNTS has previously covered the challenges of attempting to regulate a technology as dynamic and increasingly pervasive as AI, with companies like JPMorgan seeking guidance from regulators as they build out new AI products that touch on safety-critical areas.

Shaunt Sarkissian, CEO and founder of AI-ID, suggested an approach in which regulations primarily target use cases, with technology and security measures tailored to specific applications. Within healthcare, he argued, existing regulations such as HIPAA already provide a strong foundation.

“You will never be as smart as the innovator, and the innovator will always think about ways to work around you,” he said. “Marry AI with what commerce laws already exist, tweak it as needed, and it works.”