Thales has announced the launch of its AI Security Fabric, a new security framework designed to provide runtime protection for agentic AI systems and applications built on large language models (LLMs), as organisations accelerate the deployment of AI across core business functions.
The initiative addresses a growing set of AI-specific risks emerging as generative and autonomous systems move from experimentation into production. Industry data cited by Thales shows that AI adoption has risen sharply in recent years, with more than three-quarters of organisations now using AI in at least one business function. At the same time, enterprises are increasing investment in AI-focused security tools to manage new attack surfaces and compliance challenges.
The AI Security Fabric is intended to act as a foundational security layer across enterprise AI ecosystems, protecting applications, data and identities at runtime. The initial capabilities focus on mitigating risks such as prompt injection, data leakage, model manipulation, insecure retrieval-augmented generation (RAG) pipelines and exposure of sensitive or regulated information.
According to Thales, the first capabilities now available include AI application security controls that provide real-time protection for internally developed LLM-powered applications. These controls are designed to detect and prevent threats such as jailbreaking, system prompt leakage, model denial-of-service attacks and inappropriate content generation, and can be deployed across cloud-native, on-premises or hybrid environments.
The platform also introduces security controls for RAG-based applications, enabling organisations to identify and protect sensitive structured and unstructured data before it is ingested into AI systems. This includes securing communications between LLMs and enterprise data sources through encryption, key management and controlled access.
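One common way to protect sensitive data before RAG ingestion is to redact regulated fields from documents before they are chunked and indexed. The sketch below is a generic illustration of that idea under assumed, simplified patterns; the `redact` function and its regexes are hypothetical and are not drawn from Thales's product.

```python
import re

# Illustrative sketch only: strip sensitive values from text before it is
# ingested into a RAG index. Patterns and names are hypothetical examples,
# not a description of the AI Security Fabric's actual controls.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive values with typed placeholders before indexing."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders such as `[EMAIL]` preserve document structure for retrieval while keeping the underlying values out of the vector store; enterprise tooling generally pairs this with encryption and access controls on the data path itself, as the article describes.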
Sebastien Cano, Senior Vice President of Thales’ Cyber Security Products business, said the rapid expansion of agentic and generative AI is creating security requirements that differ from those of traditional applications.
“As AI reshapes business operations, organisations require security solutions tailored to the specific risks posed by agentic AI and GenAI applications,” Cano said. “The goal is to secure AI-driven innovation while minimising operational complexity and maintaining compliance.”
Thales has indicated that the AI Security Fabric will be expanded in 2026 with additional runtime security capabilities, including enhanced data leakage prevention, a Model Context Protocol security gateway and more granular runtime access controls. These additions are intended to strengthen governance across AI data flows and improve oversight of interactions between users, models and enterprise data.
The announcement reflects a broader shift in cybersecurity strategy, as organisations move beyond traditional perimeter and application security models to address the operational realities of autonomous and AI-driven systems operating at scale.