Firewalls may soon need an upgrade as legacy tools fail at AI security



Traditional security tools struggle to keep up as they constantly run into threats introduced by LLMs and agentic AI systems that legacy defences weren’t designed to stop. From prompt injection to model extraction, the attack surface for AI applications is uniquely weird.

“Traditional security tools like WAFs and API gateways are largely insufficient for protecting generative AI systems mainly because they are not pointing to, reading, and intersecting with the AI interactions and do not know how to interpret them,” said Avivah Litan, distinguished VP analyst, Gartner.

AI threats could be zero-day

AI systems and applications, while extremely capable at automating business workflows, and threat detection and response routines, bring their own problems to the mix, problems that weren’t there before. Security threats have evolved from SQL injections or cross-site scripting exploits to behavioral manipulations, where adversaries trick models into leaking data, bypassing filters, or acting in unpredictable ways.

Gartner’s Litan said that while AI threats like model extractions have been around for many years, some are very new and hard to tackle. “Nation states and competitors who do not play by the rules have been reverse-engineering state-of-the-art AI models that others have created for many years.”

Recent Articles

Related Stories

Leave A Reply

Please enter your comment!
Please enter your name here