CyberArk Alerts About Cybersecurity Threats To LLMs And AI Agents


According to the 2025 Forbes AI 50 list, "AI graduated from an answer engine to an action engine in the workplace." Like other graduates entering the workforce, AI agents meet fresh opportunities, face unfamiliar responsibilities, and must overcome new challenges.

As more AI agents are deployed in enterprises worldwide, the scale and scope of cyberattacks escalate; many organizations embed agents in critical systems without proper safeguards. AI agents introduce new classes of cybersecurity vulnerabilities, providing new avenues for infiltrating and manipulating enterprise systems.

At IMPACT 2025 last week, Lavi Lazarovitz, VP of cyber research at CyberArk, presented an initial analysis of the range of threats posed by these new agentic systems. "Agents are distinguished by autonomy and proactivity," said Lazarovitz. As such, they are becoming the most privileged digital identities enterprises have ever seen.

The cybersecurity landscape for AI agents will continue to evolve, and at present, there is no silver bullet that can fully mitigate all security risks they pose, according to CyberArk researchers. The best approach is what they call “defense in depth,” or implementing multiple layers of protection at different stages of the workflow and across various security measures.
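To make the layering concrete, here is a minimal Python sketch of what defense in depth around an agent workflow could look like: input screening for prompt injection, a least-privilege tool allowlist, and mandatory human approval for sensitive actions, each enforced outside the model itself. Every name in it (screen_prompt, ALLOWED_TOOLS, call_agent, and so on) is a hypothetical placeholder, not CyberArk's or any vendor's actual implementation.

```python
import re

# Layer 1: screen inbound text for obvious prompt-injection markers.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"system prompt"]

def screen_prompt(text: str) -> bool:
    """Return True only if no known injection pattern appears in the text."""
    return not any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

# Layer 2: least privilege -- the agent may only invoke pre-approved tools.
ALLOWED_TOOLS = {"search_docs", "create_ticket"}

# Layer 3: sensitive actions always require explicit human sign-off.
SENSITIVE_TOOLS = {"create_ticket"}

def call_agent(prompt: str) -> dict:
    """Stand-in for the real LLM/agent call; returns a proposed tool action."""
    return {"tool": "create_ticket", "args": {"summary": prompt[:80]}}

def run_agent(prompt: str, approve) -> dict | None:
    if not screen_prompt(prompt):                                    # layer 1
        raise ValueError("prompt rejected by input screening")
    action = call_agent(prompt)
    if action["tool"] not in ALLOWED_TOOLS:                          # layer 2
        raise PermissionError(f"tool {action['tool']!r} is not allowlisted")
    if action["tool"] in SENSITIVE_TOOLS and not approve(action):    # layer 3
        return None                               # blocked pending human review
    return action

if __name__ == "__main__":
    # The lambda stands in for a human approver; a real deployment would pause here.
    print(run_agent("Open a ticket for the failed backup job", approve=lambda a: True))
```

The point of the sketch is that no single layer is trusted to catch everything: even if a crafted prompt slips past the screening, the tool allowlist and the human approval step still stand between the model's output and a privileged action.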

More broadly, CyberArk warns enterprises to "never trust an LLM." Attackers will always find ways to exploit and manipulate these models, so security must be built around them, not within them. At IMPACT 2025, Retsef Levi, Professor of Operations Management at the MIT Sloan School of Management, spoke about the "very real risk of creating complex systems with opaque operational boundaries and eroded human capabilities that are prone to major disasters and are not resilient."


Using an LLM is like taking a drug without knowing what's in it, says Levi. The mystery is three-dimensional: the enormous number of parameters obscuring what the model can do; the open internet data on which the model is trained (as opposed to in-house, clean data); and the source, or origin, of the model's development.

The key challenge in implementing AI agents, says Levi, is making sure they "don't degenerate and erode critical human capabilities," especially in the areas where humans are superior to AI: identifying nuance; sensitivity to changing conditions, exceptions, and anomalies; and sensing a new context. "Don't confuse performance with capability," advises Levi. As generative AI and LLMs enhance cyberattack capabilities by using machines to manipulate humans or other machines, Levi recommends developing "measurements for understanding your digital supply chain" and identifying potential vulnerabilities.

The research effort to uncover the new "attack surface" created by generative AI is growing fast. Startup Pillar Security, for example, analyzed over 2,000 real-world LLM-powered applications. Pillar found that 90% of successful attacks resulted in the leakage of sensitive data and that adversaries needed only 42 seconds on average to complete an attack, highlighting the speed at which vulnerabilities can be exploited.

The present state of attacks on generative AI will get worse in the near future. By 2028, according to Gartner, "25% of enterprise breaches will be traced back to AI agent abuse, from both external and malicious internal actors."

Interest in investing in or acquiring related cyber defense skills and solutions is also growing. For example, Palo Alto Networks is set to buy AI cybersecurity company Protect AI for an estimated $650-700 million, sources reported last week. "Protect AI might end up being the second acquisition [after Cisco's acquisition of Robust Intelligence for a reported $400 million] in the nascent AI security market, but it certainly won't be the last," noted Information Security Media Group.

AI agents’ autonomous nature and complex decision-making capabilities introduce various threats and vulnerabilities that span security, privacy, ethical, operational, legal, and technological domains. These real-world challenges will probably not slow down the widespread deployment of AI agents. According to CB Insights, mentions of “agent” and “agentic” on earnings calls surged in the first quarter of 2025, with both hitting all-time highs.

For the second year in a row, Amazon CEO Andy Jassy used his annual letter to shareholders to stress the contribution of generative AI applications to Amazon's continuing success. He reported that "there are more than 1,000 GenAI applications being built across Amazon, aiming to meaningfully change customer experiences in shopping, coding, personal assistants, streaming video and music, advertising, healthcare, reading, and home devices, to name a few."

Jassy also highlighted the importance of generative AI to the future of all enterprises: “If your customer experiences aren’t planning to leverage these intelligent models… and their future agentic capabilities, you will not be competitive.”
