The White House has moved to restrict foreign access to American artificial intelligence breakthroughs, setting new boundaries for the global AI market as U.S. tech companies face fresh limits on international deals.
The directive, which orders federal agencies to shield private-sector AI innovations in the same way as military technology, could force American tech giants to obtain government approval before sharing developments with overseas partners or selling to foreign customers. According to industry analysts and executives familiar with the matter, this could affect global commerce for companies like OpenAI, Anthropic and Microsoft.
“AI is really becoming a national security question, and there are going to be two sides to that,” Kristof Horompoly, vice president of AI risk management at ValidMind, told PYMNTS. “One of them is that the U.S., and other countries, for that matter, will want to keep innovation in the country and make sure that the brightest minds in AI are coming to develop AI in the country, and they’re also going to get protective about exporting that technology.”
Export restrictions on advanced AI will likely tighten, particularly for non-allied countries, but domestic opportunities could expand as governments increase support and resources to keep AI development within U.S. borders, Horompoly said.
The new restrictions could reshape how American tech companies sell their most advanced AI systems abroad. Industry analysts estimate billions of dollars in international AI revenue could be affected, as companies may need to create separate versions of their technology for domestic and foreign markets or limit certain capabilities in exported products.
President Biden issued the first National Security Memorandum on artificial intelligence on Thursday (Oct. 24), directing federal agencies to protect U.S. AI advances as strategic assets while fostering their safe development for national security. The memo establishes the AI Safety Institute as the industry’s main government contact point and prioritizes intelligence collection on foreign attempts to steal American AI technology.
The White House directive outlines three core objectives: maintaining U.S. leadership in safe artificial intelligence development, harnessing AI for national security while protecting democratic values and building international consensus on AI governance. It follows recent U.S.-led efforts, including a G7 code of conduct on AI and agreements with over 50 nations on military AI use.
“It will be harder for AI companies to sell their technology abroad, especially some of the sensitive AI and some of the cutting-edge AI,” Horompoly said. “I am certain that this is going to be more and more restricted. I can see a future where AI is treated in the same category as weapons today. I think that’s what we’re going to be moving toward. AI can be used as a very powerful weapon already, and this is going to get stronger in terms of misinformation campaigns, deepfakes, and even infiltrating organizations.”
While potentially limiting American artificial intelligence companies on many fronts, the AI NSM simultaneously provides opportunities in the form of government contracts, other government funding initiatives, and the stepped-up creation of an AI testing industry, Anthony Miyazaki, professor of marketing at Florida International University, told PYMNTS. He said American AI companies would also gain greater ability to recruit tech-savvy employees worldwide due to specific wording addressing the immigration of AI-trained talent.
“The need to test AI systems for potential threats to national security could prove to be the most significant delay to innovation timelines,” he said. “The fastest innovations typically are generated via open beta testing with built-in feedback mechanisms among diverse sets of users. This allows quick improvements in an ongoing fashion. To repeatedly halt beta-testing feedback loops for potentially months at a time for government assessments could drastically limit AI growth for U.S. developers. Meanwhile, developers in other countries may benefit from less restrictive government requirements.”