KnowBe4 has released new research showing that organisations are struggling to secure the human element as the workforce rapidly evolves to incorporate AI tools and agentic systems. The State of Human Risk 2025 report highlights a sharp rise in both human-driven security incidents and breaches involving AI applications, underscoring the growing complexity of behavioural risk in modern environments.
The study surveyed 700 cybersecurity leaders and 3,500 employees across multiple regions and industries, focusing on organisations that had experienced an employee-related security incident in the past year. It found that human-related incidents surged by 90 per cent, driven by social engineering, risky behaviour and simple mistakes.
Ninety-three per cent of surveyed leaders reported incidents caused by cybercriminals exploiting employees, while email continued to serve as the dominant attack vector. Email-related incidents increased by 57 per cent, and 64 per cent of organisations were targeted through email-based external attacks. Human error remained a major vulnerability, with 90 per cent of organisations experiencing incidents arising from mistakes, while malicious insiders accounted for 36 per cent of cases. Nearly all leaders surveyed (97 per cent) said they need an increased budget to secure the human element effectively.
The study also reveals the accelerating impact of AI integration in the workplace. Security incidents involving AI applications rose by 43 per cent, one of the largest increases across all channels. Despite 98 per cent of organisations taking steps to manage AI-related risks, AI-powered threats ranked as the number-one concern among cybersecurity leaders, and 45 per cent cited the fast-changing nature of AI threats as their biggest challenge in managing behavioural risk.

Deepfake-related incidents continued to grow, with 32 per cent of organisations reporting an increase. The report notes a widening misalignment between organisational controls and employee expectations: while businesses race to deploy AI safeguards, 56 per cent of employees remain dissatisfied with their organisation’s approach, driving many toward unsanctioned tools and contributing to “shadow AI” risk.
KnowBe4 predicts that email will remain the most at-risk channel for several years, but warns that multi-channel attacks—including messaging applications and voice phishing—are rising in frequency and sophistication. Cybercriminals are also using AI to create more complex and scalable social engineering campaigns.
Javvad Malik, lead CISO advisor at KnowBe4, said organisations must rethink how they manage risk as humans and AI systems increasingly collaborate. “The productivity gains from AI are too great to ignore, so the future of work requires seamless collaboration between humans and AI. Employees and AI agents will need to work in harmony, supported by a security program that proactively manages the risk of both. Human risk management must evolve to cover the AI layer before critical business activity migrates onto unmonitored, high-risk platforms.”
The full report includes further analysis and recommendations for adapting human risk management frameworks to the AI era.