More than three-quarters of cybersecurity professionals are concerned about the risks posed by AI agents operating inside their organisations, according to new research from Darktrace, highlighting growing unease as artificial intelligence becomes more deeply embedded in enterprise systems.
Darktrace’s 2026 State of AI Cybersecurity Report found that 76 per cent of security professionals are worried about the security implications of integrating AI agents; among senior security executives, 47 per cent said they are very or extremely concerned. The report points to AI agents’ access to sensitive data and critical business processes, combined with immature governance frameworks, as key drivers of anxiety.
The findings come as AI adoption accelerates across enterprises. Nearly three-quarters of respondents (73 per cent) said AI-powered threats are already having a significant impact on their organisation, while 87 per cent reported that AI is increasing the volume of cyber attacks they face. Almost nine in ten said attacks are becoming more sophisticated as a result of AI, particularly in phishing and social engineering.
Despite heightened awareness of the threat landscape, preparedness remains a challenge. Almost half of security professionals surveyed said they feel unprepared to defend against AI-driven attacks, a figure largely unchanged from last year. At the same time, 92 per cent said AI-related threats are driving major upgrades to their defensive capabilities.
Concerns around AI agents are closely tied to data risk. Data exposure was identified as the leading issue by 61 per cent of respondents, followed by potential breaches of data security and privacy regulations (56 per cent) and the misuse or abuse of AI tools (51 per cent). Only 37 per cent of organisations reported having a formal policy governing the secure deployment of AI, down from the previous year.
“Agentic AI introduces a new class of insider risk,” said Issy Richards, vice-president of product at Darktrace. “These systems can operate with the reach of an employee, accessing sensitive data and triggering business processes, but without human context or accountability. Governance and oversight of AI agents need to be treated as a board-level responsibility.”
At the same time, the report shows that organisations are increasingly relying on AI to defend themselves. More than three-quarters of respondents said generative AI is now embedded in their security stack, and 96 per cent reported that AI significantly improves the speed and efficiency of security operations. Detecting novel threats and anomalies was cited as the area where AI delivers the greatest value.
Many organisations are also beginning to allow AI systems to take action. Fourteen per cent of respondents said AI is permitted to act independently within their security operations, while 70 per cent allow AI to act with human approval. Only 13 per cent restrict AI to advisory or recommendation roles.
Alongside the report, Darktrace announced the launch of Darktrace / SECURE AI™, a new solution designed to give organisations visibility and control over how AI tools and agents are used across the enterprise. The company said the offering is intended to address emerging blind spots created by rapid AI adoption.
Darktrace data shows a 39 per cent month-on-month increase in anomalous data uploads to generative AI services. The average upload is equivalent to around 4,700 pages of documents, underscoring how easily sensitive information can leave an organisation unnoticed.
“As AI becomes embedded across core business operations, many organisations are losing clear visibility into what AI systems can access and how they behave,” Richards said. “The challenge now is enabling AI safely and responsibly, without slowing innovation.”