Firewalls may soon need an upgrade as legacy tools fail at AI security

Firewalls may soon need an upgrade as legacy tools fail at AI security

Cybersecurity engineers are developing a new breed of security tools designed specifically to sit between users and AI models, inspecting not just traffic patterns but intent and context as well.

Akamai showcased its Firewall for AI at RSA 2025 as one of such tools that helped flag 6% of over 100,000 requests made on an early customer’s platform as “risky.” The risky requests included sensitive data leaks, toxic responses, and prompt injection attempts.

The launch is stirring up talks of a need to shift from traditional WAFs and API gateways to dedicated security controls for LLM and agentic AI workflows.

Traditional security tools struggle to keep up as they constantly run into threats introduced by LLMs and agentic AI systems that legacy defences weren’t designed to stop. From prompt injection to model extraction, the attack surface for AI applications is uniquely weird.

“Traditional security tools like WAFs and API gateways are largely insufficient for protecting generative AI systems mainly because they are not pointing to, reading, and intersecting with the AI interactions and do not know how to interpret them,” said Avivah Litan, distinguished VP analyst, Gartner.

AI threats could be zero-day

AI systems and applications, while extremely capable at automating business workflows, and threat detection and response routines, bring their own problems to the mix, problems that weren’t there before. Security threats have evolved from SQL injections or cross-site scripting exploits to behavioral manipulations, where adversaries trick models into leaking data, bypassing filters, or acting in unpredictable ways.

Gartner’s Litan said that while AI threats like model extractions have been around for many years, some are very new and hard to tackle. “Nation states and competitors who do not play by the rules have been reverse-engineering state-of-the-art AI models that others have created for many years.”

“GenAI-specific threats such as prompt injection are new and not common yet as far as we know,” she added. “The problem is we don’t know what we don’t know–and if enterprises are not putting in the tools to look for these threats, they may not be aware of how this new threat vector is being exploited.”

John Grady, principal analyst at Enterprise Strategy Group (ESG), however, pointed out that these threats are very much real and present. “We’ve seen examples of these types of attacks on public GenAI apps like ChatGPT/ OpenAI, so they’re not hypothetical,” he said. “Many enterprises are moving forward quickly with internally built apps leveraging AI, so this is going to be an increasing issue as those applications are developed and become accessible either to the public or for internal use across the organization.”

Both strongly feel the need for a separate batch of security tooling aimed at Generative AI systems, because of the inability of traditional ones to understand or filter natural language prompts, or enforce guardrails for responses.

Firewall for AI to the rescue

Responding to the call for an adaptive, context-aware protection that AI security entails, Akamai’s Firewall for AI offers to scan for and respond to threats facing AI applications, LLMs, and AI-driven APIs. The solution claims real-time AI threat detection, along with risk mitigation features such as filtering AI outputs to prevent toxic content, hallucinations, and unauthorized data leaks.

“We believe all global businesses will need security tools specific to LLMs and AI interactions,” said Rupesh Choksi, senior vice president and general manager, Application Security, Akamai. “WAFs remain foundational, but AI introduces a new class of threats that require specialized protection.”

According to him, a few early adopters of Akamai’s Firewall for AI, who said they didn’t use AI at their company, discovered many API calls made to LLMs in a proof of concept (POC) run.

Weighing in on the offering, Grady said, “Solutions like this are targeted at GenAI being used for internally developed applications. I think it’s natural to expect that over time these capabilities will become tightly integrated, if not completely incorporated into traditional application security tools such as WAF.”

A move towards a dedicated security stack?

Talking about the general direction AI security is headed, ESG’s Grady said, “The AI space is moving so fast, is so different, and represents both a significant opportunity and risk. It makes sense that we’ve seen a rash of startups as well as new products from existing vendors to address the issue. But over time, as AI becomes foundational to everything we do, those security controls have to be better integrated across the stack- whether it’s for secure access, application security, data security, identity, or anything else.”

Litan, too, does not see such offerings forming a standalone security market. “I see it as incremental and important functionality that existing security vendors must build or acquire to remain relevant and competitive.”

Apart from Akamai, we have already seen a handful of strategic moves focused at AI security, like Palo Alto acquiring Protect-AI to address AI-specific security risks, and Cisco acquiring Robust Intelligence to improve threat protection and AI traffic visibility.

Recently, vendors like ZScaler and Securiti have also natively added functionalities targeted at securing AI workflows.

Both Grady and Litan agreed on the positive impact the AI regulatory shifts, like the EU AI Act, are expected to have on the demand for these kinds of tools. “Regulated sectors like finance and healthcare are already governed by strict data privacy laws (e.g., CPRA, PCI DSS), and we will see accelerated adoption (of such tools) in those industries to avoid legal and reputational risks,” Litan added.

​The original article found on Cisco patches max-severity flaw allowing arbitrary command execution | CSO Online Read More