OpenAI says it has disrupted state-sponsored hackers attempting to use its technology for malicious purposes.
Working with its partner Microsoft, the artificial intelligence (AI) company said in a report issued Wednesday (Feb. 14) that it blocked five state-affiliated attacks: two with ties to China and one each linked to North Korea, Iran and Russia.
“Although the capabilities of our current models for malicious cybersecurity tasks are limited, we believe it’s important to stay ahead of significant and evolving threats,” OpenAI said on its blog. “To respond to the threat, we are taking a multi-pronged approach to combating malicious state-affiliated actors’ use of our platform.”
Among the incidents cited in the report was one in which Charcoal Typhoon, a group with ties to China, used OpenAI’s services to generate content the company said was likely intended for phishing campaigns.
And the Russian-affiliated Forest Blizzard used OpenAI’s services “primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks,” the company said.
In its report on the attacks, Microsoft said that the two companies’ research had not uncovered “significant attacks” involving the AI large language models (LLMs) it monitors closely.
“At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community,” it said.
The news comes as the cybersecurity world contends with ransomware attacks at record levels, with payments topping $1 billion last year, according to the latest chapter of Chainalysis’ 2024 Crypto Crime Report.
The increasing frequency and sophistication of these attacks spotlight the need to bolster security systems, as highlighted in a recent report by PYMNTS Intelligence.
According to that research, the share of financial institutions (FIs) using AI and machine learning (ML) technologies to combat fraud and financial crimes surged from around 34% in 2022 to 70% in 2023, an increase that reflects the adoption of advanced technologies to counter increasingly sophisticated attacks.
“The encouraging news is that FIs embracing these technologies are witnessing positive outcomes,” PYMNTS wrote recently. The report highlights that those employing AI or ML are “likelier to experience a decrease in the overall fraud rate and less likely to see an increase in the overall fraud rate.”