IBM, Meta Launch 50+ Member Alliance Promoting Open-Source AI Development 

How do you safely develop and responsibly profit from an innovation like artificial intelligence (AI)?

This ongoing question has prompted the sudden proliferation of committees and forums dedicated to the safe development of AI technology.

This, as a group of over 50 founding members, including NASA, Oracle, CERN, Intel and the Linux Foundation, and spearheaded by Meta and IBM, launched the AI Alliance on Tuesday (Dec. 5).

The AI Alliance, which draws from leading organizations across industry, startup, academia, research and government, is being established to support “open innovation and open science in AI.”

Along with NASA and CERN, the U.S. National Science Foundation (NSF) and Singapore’s Agency for Science, Technology and Research (A*Star) have joined the Alliance. However, because these are independent government agencies, it is unclear whether their participation amounts to any national endorsement.

None of the agencies immediately replied to PYMNTS’ request for comment.

Read Also: AI Adds Fresh Parameters to Open- vs Closed-Source Software Debate

“We believe it’s better when AI is developed openly — more people can access the benefits, build innovative products and work on safety,” said Nick Clegg, Meta’s president of global affairs, in the alliance’s founding statement.

“AI progress that drives real value for humanity can only happen with open innovation and in an open ecosystem,” added Jeff Boudreau, chief AI officer of Dell Technologies.

“The AI Alliance’s focus on fostering an open and diverse ecosystem is a pivotal step in advancing AI research worldwide. It’s a striking contrast to the idea of AI being tightly controlled by a few entities,” emphasized Rohan Malhotra, CEO and founder of Roadzen, drawing a line between the AI Alliance’s open-source ambitions and the goals of other industry players.

That’s because the AI Alliance comes after the launch of the Frontier Model Forum, an industry-led body also aiming to ensure the “safe and responsible development” of the most powerful AI models, which was announced this past July by Anthropic, Google, Microsoft and OpenAI.

The split shines a light on a growing divide among AI players, one not confined to the marketplace but spanning global academia and beyond, centered on the responsible use, development and deployment of AI. None of the members of the Frontier Model Forum have joined the AI Alliance, for example, and vice versa.

Elon Musk’s xAI project is not a member of either industry group, despite past claims by the tech entrepreneur that he is “biased toward open source.”

Pioneering academic research groups, like Stanford’s Institute for Human-Centered AI and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), haven’t joined the Alliance either, although AI research labs from Dartmouth, Yale, NYU, UC Berkeley and other universities around the world have.

Responsible Use of Closed AI vs. Open-Source AI

The recent chaos at OpenAI has intensified, and pushed into the public sphere, a debate that has long simmered within the AI ecosystem: whether open and transparent or closed and controlled innovation is the better path toward a future where AI benefits society without creating unintended negative consequences.

Closed AI refers to proprietary systems developed and owned by specific organizations. This is the approach favored by companies like OpenAI, Amazon, Anthropic, Microsoft and Google, none of which have joined the AI Alliance.

The AI systems being built by these companies are often tightly controlled, with limited outside access to the underlying algorithms and data.

While closed AI can offer cutting-edge capabilities and streamlined user experiences, it raises concerns about transparency, accountability and potential biases.

Open-source AI, on the other hand, involves making the source code and underlying algorithms publicly accessible. This approach fosters collaboration, innovation and community-driven development.

However, it also presents challenges related to security and quality control.

Meta has long been a proponent of open-sourcing AI development; its Fundamental AI Research (FAIR) team celebrated its 10th anniversary on Nov. 30. Other AI Alliance members, including AI startup and research company Hugging Face, have taken the same stance.

As PYMNTS has written, businesses have traditionally preferred closed source because it protects their trade secrets, while academics and researchers prefer open source because it allows for democratized tinkering and exploration.

Read Also: Humanity-First Corporate Structures Highlight AI’s Growing Mind-Body Problem

Can a Balance Be Struck?

This year has seen a race to put AI tools into as many hands — both individual and enterprise — as possible.

At first glance, it may seem surprising that IBM, whose AI audience is decidedly corporate and enterprise-heavy, chose to spearhead the AI Alliance with Meta.

However, the logic of open-sourcing AI models is to gain a competitive advantage by building products and services on top of them while relying on researchers and developers worldwide to improve the underlying technology.

The AI Alliance is global, with members representing more than $80 billion in annual research and development funding and a collective staff of over 1 million.

However, its dozens of founding members also represent a broad range of competing interests (for example, health insurers and care delivery networks), which could make it challenging to present an aligned front.

As for the AI Alliance’s mission, the group’s launch announcement laid out six core objectives:

- Develop and deploy benchmarks and evaluation standards, tools, and other resources that enable the responsible development and use of AI systems at global scale.
- Responsibly advance the ecosystem of open foundation models with diverse modalities.
- Foster a vibrant AI hardware accelerator ecosystem.
- Support global AI skills building and exploratory research.
- Develop educational content and resources to inform the public discourse and policymakers on benefits, risks, solutions and precision regulation for AI.
- Launch initiatives that encourage open development of AI in safe and beneficial ways.

What the collective efforts of the group will achieve remains to be seen.