A newly discovered security vulnerability in artificial intelligence (AI) systems could pose significant risks to eCommerce platforms, financial services and customer support operations across industries. Microsoft has unveiled details of a technique called “Skeleton Key,” which can bypass ethical safeguards built into AI models businesses use worldwide.
“Skeleton Key works by using a multi-turn (or multiple-step) strategy to cause a model to ignore its guardrails,” Microsoft explains in a blog post. This flaw could allow malicious users to manipulate AI systems to generate harmful content, provide inaccurate financial advice or compromise customer data privacy.
The vulnerability affects AI models from major providers, including Meta, Google, OpenAI and others, that are widely used in commercial applications. This security gap raises concerns about the integrity of digital operations at online retailers, banks and customer service centers that use AI chatbots and recommendation engines.
“This is a significant issue because of the widespread impact across multiple foundational models,” Narayana Pappu, CEO at Zendata, told PYMNTS. “To prevent this, companies should implement input/output filtering and set up abuse monitoring. This is also an opportunity to come up with exclusion of harmful content from future releases of foundational models.”
In response to this threat, Microsoft has implemented new security measures in its AI services and advises businesses on protecting their systems. For eCommerce companies using Azure AI services, Microsoft has enabled additional safeguards by default.
“We recommend setting the most restrictive threshold to ensure the best protection against safety violations,” the company states, emphasizing the importance of stringent security measures for businesses that handle sensitive customer data and financial transactions.
These protective steps are crucial for maintaining consumer trust in AI-powered shopping experiences, personalized financial services and automated customer support systems.
The danger of Skeleton Key is that it can trick AI models into generating harmful content, Sarah Jones, cyber threat intelligence research analyst at Critical Start, told PYMNTS.
“By feeding the AI model a cleverly crafted sequence of prompts, attackers can convince the model to ignore safety restrictions,” she said. “Malicious actors could use this function to generate malicious code, promote violence or hate speech, or even create deepfakes for malicious purposes. If AI-generated content becomes known to be easily manipulated, trust in the technology could be eroded.”
Jones said companies that develop or use generative AI models need to take a layered defense approach to mitigate these risks. One way is to implement input filtering systems that detect and block malicious intent prompts. Another method is output filtering, where the system checks the AI’s generated content to prevent the release of harmful material. Additionally, companies should carefully craft the prompts used to interact with the AI, ensuring they are clear and include safeguards.
“Choosing AI models that are inherently resistant to manipulation is also important,” Jones said. “Finally, companies should continuously monitor their AI systems for signs of misuse and integrate AI security solutions with broader security frameworks. By taking these steps, companies can build more robust and trustworthy AI systems less susceptible to manipulation and misuse.”
The discovery of the Skeleton Key vulnerability is critical for AI adoption in the business world. Many companies have rapidly integrated AI into their operations to improve efficiency and customer experience.
For instance, major retailers have used AI to personalize product recommendations, optimize pricing strategies and manage inventory. Financial institutions have deployed AI for fraud detection, credit scoring and investment advice. The potential compromise of these systems could have far-reaching consequences for business operations and customer trust.
This security concern may temporarily slow AI deployment as companies reassess their AI security protocols. Businesses may need to invest more in AI security measures and conduct thorough audits of their existing AI systems to ensure they are not vulnerable to such attacks.
The revelation highlights the need for ongoing vigilance and adaptation in the face of evolving AI capabilities. As AI becomes more deeply embedded in commerce, quickly identifying and mitigating security risks will be crucial for maintaining the integrity of digital business operations.
For consumers, this development serves as a reminder to remain cautious when interacting with AI-powered systems, particularly when sharing sensitive information or making financial decisions based on AI recommendations.
As the AI landscape evolves, businesses will face the challenge of harnessing AI’s potential while maintaining robust security measures. The Skeleton Key vulnerability underscores the delicate balance between innovation and security in the rapidly advancing world of AI-driven commerce.
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.