US Intelligence Agencies Are Putting the AI in CIA 

Artificial intelligence (AI) is taking the private sector by storm. 

But its rapid integration across the commercial ecosystem has not escaped the notice of governments around the world, even as attempts at AI regulation progress at a slow, fragmented pace.

“The starting gun has been fired on a globally competitive race in which individual companies, as well as countries, will strive to push the boundaries as far and fast as possible,” said Oliver Dowden, deputy prime minister of the United Kingdom, in his speech to the 78th session of the UN General Assembly in New York last Friday (Sept. 22).

Bloomberg reported on Tuesday (Sept. 26) that the U.S. Central Intelligence Agency’s Open-Source Enterprise division has developed an internal AI tool designed to make life a little easier for its employees and analysts as they sift through mountains and zettabytes of data for glimmers of intelligence. 

The CIA plans to roll out the AI tool across the entire 18-agency U.S. intelligence community, which includes the CIA, the National Security Agency (NSA), the FBI and agencies run by branches of the military, to help streamline their intelligence gathering. 

The report noted that the tool comes amid an escalating AI arms race between the U.S. and China and fears around the technology’s applications within military and other national security settings. 

China, as covered by PYMNTS, was the first nation to officially enact a formal policy around AI use — although the European Union (EU) beat it to the punch on drafting one. 


Sifting for Gold in Mountains of Data

While the CIA’s homegrown AI tool won’t be available to the American public or even to policymakers, its use comes as U.S. intelligence agencies find themselves swamped by ever-growing volumes of data, along with criticism that the CIA and NSA have been slow to process and exploit the mountains of publicly available information produced by the increasingly online lives of global citizens. 

“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Randy Nixon, director of the CIA’s Open-Source Enterprise division, said in the Bloomberg interview. “The scale of how much we collect and what we collect on has grown astronomically over the last 80-plus years, so much so that this could be daunting and at times unusable.”

The CIA’s new tool will allow intelligence analysts to chat with and ask questions of the AI model, which will be trained to auto-summarize and group together information culled from publicly available sources. It will show its work to government users, including the original source of each piece of information, so that analysts can verify the data trail. 
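
The agency has not disclosed how the tool is built, but the workflow described (retrieve openly available material, summarize it, and point back to the original sources) resembles the retrieval-augmented generation pattern now common in commercial software. The sketch below is purely illustrative: every name is hypothetical, and a toy keyword matcher stands in for whatever retrieval and summarization the agency actually uses. It shows only how a generated answer can carry its source citations with it so a human analyst can verify the data trail.

```python
# Illustrative only: a generic "answer with citations" flow, not the CIA's
# undisclosed implementation. All names and logic here are hypothetical.
from dataclasses import dataclass


@dataclass
class Document:
    source_url: str  # where the open-source item was collected
    text: str        # the collected text itself


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap; a real system would use a
    vector index or search engine instead of this toy scorer."""
    terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.text.lower().split())),
        reverse=True,
    )[:k]


def answer_with_citations(query: str, corpus: list[Document]) -> str:
    """Produce a crude summary of the top documents and append their sources,
    so the reader can trace every claim back to where it came from."""
    hits = retrieve(query, corpus)
    summary = " ".join(doc.text.split(".")[0].strip() + "." for doc in hits)
    citations = "\n".join(f"[{i + 1}] {doc.source_url}" for i, doc in enumerate(hits))
    return f"{summary}\n\nSources:\n{citations}"
```

In a production system the summary step would typically be an LLM call and the retriever a proper search index, but the pattern of passing source links through to the final answer is what lets analysts check the model’s work rather than take it on faith.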

One of the main bottlenecks to wider adoption of generative AI, outside of cost, is the technology’s tendency to fabricate, or “hallucinate,” believable responses to queries. 

The CIA did not say which foundational large language model (LLM) its new tool is built on, nor how it will keep the information the tool generates from leaking outside the perimeter of the intelligence agencies using it. 

Of course, both of those details could be exploited by national adversaries seeking to hack the tool. 


A Defining Technology

As PYMNTS has written, AI’s potential impact transcends borders, with many observers — including UN Secretary-General António Guterres — believing there needs to be a globally coordinated approach to both reining in its potential perils and supporting its potential good.

“I think if this technology goes wrong, it can go quite wrong,” Sam Altman, CEO of AI pioneer OpenAI, has previously said.

Speaking to Congress earlier this month (Sept. 12), NVIDIA’s chief scientist and senior vice president of research, William Dally, said “the genie is already out of the bottle” with AI and stressed to lawmakers that “AI models are portable; they can go on a USB drive and can be trained at a data center anywhere in the world. … We can regulate deployment and use but cannot regulate creation. If we do not create AIs here, people will create them elsewhere. We want AI models to stay in the U.S., not where the regulatory climate might drive them.”

That’s why, whether for an intelligence agency or a small to medium-sized business, the responsible use of AI and its thoughtful integration into workflows are of the utmost importance.