Agentic AI is both boon and bane for security pros

These and other data points show the dark underbelly where the agentic boon has turned into a bane and created more work for security defenders. “For almost all situations, agentic AI technology requires high levels of permissions, rights, and privileges in order to operate. I recommend that security leaders should consider the privacy, security, ownership, and risk any agentic AI deployment may have on your infrastructure,” said Morey Haber, chief security advisor at BeyondTrust.

What is agentic AI?

Generative AI agents are described by analyst Jeremiah Owyang as “autonomous software systems that can perceive their environment, make decisions, and take actions to achieve a specific goal, often with the ability to learn and adapt over time.” Agentic AI takes this a step further by coordinating groups of agents autonomously through a series of customized integrations to databases, models, and other software. These connections let the agents adapt dynamically to their circumstances, gain more contextual awareness, and coordinate actions among multiple agents. Google’s threat intel team documents numerous specific examples of current AI-fed abuses in a recent report.
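
To make that perceive-decide-act loop and the coordination layer concrete, here is a minimal, hypothetical Python sketch. The Agent and Coordinator classes, their method names, and the sample inputs are illustrative assumptions for this article, not any vendor’s actual agent framework.

```python
# Illustrative sketch only: a single agent runs the perceive -> decide -> act
# loop Owyang describes, and a coordinator fans work out to a group of agents,
# which is the extra layer agentic AI adds. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Agent:
    """One agent: perceives its environment, decides, then acts."""
    name: str
    memory: list = field(default_factory=list)  # lets the agent adapt across steps

    def perceive(self, environment: dict) -> dict:
        # A real agent would pull from databases, models, or other software here.
        return {"observed": environment.get(self.name, "nothing")}

    def decide(self, observation: dict) -> str:
        # A real agent would consult a model or policy; this one just keeps context.
        self.memory.append(observation)
        return f"act-on:{observation['observed']}"

    def act(self, decision: str) -> str:
        return f"{self.name} executed {decision}"


class Coordinator:
    """Coordinates a group of agents, the step agentic AI layers on top of single agents."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def run(self, environment: dict) -> list[str]:
        results = []
        for agent in self.agents:
            observation = agent.perceive(environment)
            decision = agent.decide(observation)
            results.append(agent.act(decision))
        return results


if __name__ == "__main__":
    coordinator = Coordinator([Agent("scanner"), Agent("triager")])
    print(coordinator.run({"scanner": "open port 445", "triager": "new CVE feed"}))
```

Even in this toy form, the pattern shows why Haber’s warning matters: every integration the agents call during perceive() and act() is a credentialed connection into your infrastructure.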

But weighing how far to trust a security tool isn’t anything new. When network packet analyzers were first introduced, they revealed intrusions but were also used to find vulnerable servers. Firewalls and VPNs can segregate and isolate traffic but can also be leveraged to give hackers access and enable lateral network movement. Backdoors can be built for both good and evil purposes. Yet none of these older tools has been so superlatively good and bad at the same time. In the rush to develop agentic AI, the potential for future misery was also created.
