Security researchers are sounding the alarm about two new types of attacks that can make AI models act in harmful ways. Imagine using an AI app that suddenly starts working against you. While it’s not as scary as Skynet from the Terminator movies, it’s still very concerning.
The attacks show how AI systems, which are supposed to help us, can be manipulated to cause serious problems. These problems range from denial of service attacks, which can shut down websites, to changing prices in online stores. Experts agree that the threat is very real.
How Do These Attacks Work?
Malicious actors use conversational AI and change it to make apps perform dangerous activities. This isn’t just about spreading false information; it can also involve creating offensive content. The researchers behind this study want to change how people think about jailbreaking AI apps. They want to show the dangers and how users can stay safe.
Many security experts don’t take these kinds of threats seriously when it comes to Generative AI. They wonder, how much harm can a chatbot prompt really do? But the latest zero-click malware attack shows that attackers don’t even need to compromise the AI app first to cause damage.
Jailbreaking AI: A New Kind of Threat
This new type of attack involves adding user inputs that give the AI engine orders from the attackers. Additional commands can also introduce malware. The jailbroken AI engine then starts performing actions based on these harmful commands. What happens next depends on the app’s design and permissions.
Even though there are many safeguards to prevent this, researchers have found ways to jailbreak AI apps easily. To prove their point, they used a denial of service (DOS) attack as an example. They showed how simple inputs can make the AI app enter an infinite loop, triggering endless API calls and stopping the app from working properly.
Advanced PromptWare Threat
The study also talked about a more advanced version of this attack, called the Advanced PromptWare Threat. These attacks can happen even if the attacker doesn’t know how the app works. They use the AI system’s own abilities to carry out a six-step process, causing a chain of harmful events.
Given the seriousness of these threats, it’s no surprise that AI developers and security experts are taking notice. Google has yet to respond, but a representative from OpenAI said they are working on better safeguards to counter these actions. They appreciate the feedback from experts and plan to improve their models.
Expert Opinions
The security research team lead at Checkmarx says that Large Language Models (LLM) and Generative AI assistants are modern examples of today’s software. They need to be handled with caution, especially as more malicious actors are emerging. Jailbroken AI systems could become new attack vectors if not properly managed.
It’s clear that as AI technology advances, so do the threats. Staying informed and vigilant is crucial to ensuring our safety in this digital age.
Read next: US Secretaries Of State Pen Open Letter To Elon Musk As X’s Grok Chatbot Spreads Misinformation About Elections