Secure Code Warrior Launches AI Traceability

Secure Code Warrior has released a beta program for a major expansion of AI capabilities within its Trust Agent product. The upgrade, collectively referred to as Trust Agent: AI, combines key signals, including AI coding tool usage, vulnerability data, code commit data, and developer secure coding skills, to provide visibility into how AI development tools affect risk across the software development lifecycle (SDLC).

Trust Agent: AI will evaluate the relationship between developers, the models they use (including the vulnerabilities those models introduce), and the repositories where AI-generated code is committed. General availability is expected in 2026, and sign-ups for the beta program's early access list are open today.

“AI allows developers to generate code at a speed we’ve never seen before,” said Pieter Danhieux, Secure Code Warrior Co-Founder & CEO. “However, when a security-unaware developer uses the wrong LLM, a 10x increase in code velocity will introduce 10x the vulnerabilities and technical debt. Trust Agent: AI produces the data needed to plug knowledge gaps, direct security-proficient developers to the most sensitive projects, and, importantly, monitor and approve the AI tools they use throughout the day. We’re dedicated to helping organizations prevent the uncontrolled use of AI from undermining software and product security.”

With Trust Agent: AI, Secure Code Warrior offers observability of the AI coding tools and LLMs used across an enterprise’s entire codebase. The solution also delivers integrated governance at scale through:

  • Identification of unapproved LLMs, including visibility into the actual vulnerabilities LLMs introduce
  • Flexible policy controls to log, warn on, or block pull requests from developers who use unsanctioned tools or lack sufficient secure coding knowledge (a conceptual sketch of such a gate follows this list)
  • Output analysis that surveys how much code is AI-generated and where it’s located across repositories
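
To make the log/warn/block policy idea concrete, the sketch below shows how a generic pull-request policy gate of this kind could be structured. This is not Secure Code Warrior's implementation or API; the PullRequest fields, the SANCTIONED_TOOLS list, and the MIN_SKILL_SCORE threshold are illustrative assumptions only.

```python
"""Minimal sketch of a pull-request policy gate, assuming hypothetical
governance inputs (sanctioned-tool list, developer skill score). Not the
vendor's actual product behavior."""

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    LOG = "log"      # record for observability only
    WARN = "warn"    # allow, but flag the risk signal
    BLOCK = "block"  # stop the pull request


@dataclass
class PullRequest:
    author: str
    ai_tool: str | None        # AI coding assistant detected in the commits, if any
    author_skill_score: int    # hypothetical secure-coding proficiency score (0-100)


# Illustrative policy inputs; real values would come from an organization's config.
SANCTIONED_TOOLS = {"approved-copilot", "approved-internal-llm"}
MIN_SKILL_SCORE = 70


def evaluate(pr: PullRequest) -> Action:
    """Decide whether to log, warn on, or block a pull request."""
    unsanctioned = pr.ai_tool is not None and pr.ai_tool not in SANCTIONED_TOOLS
    under_skilled = pr.author_skill_score < MIN_SKILL_SCORE

    if unsanctioned and under_skilled:
        return Action.BLOCK   # unapproved tool plus insufficient secure coding knowledge
    if unsanctioned or under_skilled:
        return Action.WARN    # a single risk signal present
    return Action.LOG


if __name__ == "__main__":
    pr = PullRequest(author="dev1", ai_tool="shadow-llm", author_skill_score=55)
    print(evaluate(pr))  # Action.BLOCK
```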