Vectra AI has identified a vulnerability in Google Cloud’s Document AI service that exposes users to data exfiltration risks.
The vulnerability arises from transitive access abuse: the Document AI service agent’s excessive permissions allow unauthorised access to any Cloud Storage object within the same project, enabling users, including malicious actors, to reach data they would not otherwise have permission to read.
Document AI is a Google Cloud service that extracts information from unstructured documents. It processes documents stored in Cloud Storage through online and offline (batch) jobs. The vulnerability lies in batch processing, where the Document AI Core Service Agent, which carries broader permissions, handles data ingestion and results output. This allows the agent to access any Cloud Storage bucket within the project, unlike standard online processing, which is limited to the permissions of the initial caller.
This issue enables attackers to exfiltrate data from Google Cloud Storage to an arbitrary bucket, bypassing access controls to extract sensitive information, a clear example of transitive access abuse.
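For readers unfamiliar with the batch interface, the sketch below shows roughly what a batch request looks like when driven from the documentai_v1 Python client library. The processor name and gs:// URIs are placeholders, and the point is simply that the caller names both the input prefix and the output bucket, while the service agent, not the caller, performs the underlying Cloud Storage reads and writes.

```python
# Rough sketch of a Document AI batch request, assuming the documentai_v1
# Python client library. The processor name and gs:// URIs are placeholders.
# The caller only names the buckets; the Document AI service agent performs
# the actual Cloud Storage reads and writes.
from google.cloud import documentai_v1 as documentai

client = documentai.DocumentProcessorServiceClient()

request = documentai.BatchProcessRequest(
    # Hypothetical processor resource name.
    name="projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID",
    input_documents=documentai.BatchDocumentsInputConfig(
        gcs_prefix=documentai.GcsPrefix(gcs_uri_prefix="gs://input-bucket/docs/")
    ),
    document_output_config=documentai.DocumentOutputConfig(
        gcs_output_config=documentai.DocumentOutputConfig.GcsOutputConfig(
            gcs_uri="gs://output-bucket/results/"  # caller-chosen destination
        )
    ),
)

# batch_process_documents returns a long-running operation.
operation = client.batch_process_documents(request=request)
operation.result(timeout=600)
```

Because those reads and writes run under the service agent’s project-wide permissions, the output destination is effectively limited only by what the agent itself can reach.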
“The challenge for organisations is clear – safeguarding valuable data is more crucial than ever as AI becomes deeply embedded in our digital infrastructure,” said Vectra AI Principal Security Researcher Kat Traxler. “Data exfiltration is a major concern, highlighting AI’s dual role in both facilitating and thwarting cyberattacks.”
Ponemon Institute research shows that over 60% of organisations have experienced at least one form of data exfiltration in the past two years, underscoring the prevalence of this threat. Cyberattacks and data breaches continue to be the foremost business risk in Australia, with a reported 9% increase in data breaches in the first half of 2024 compared to the previous six months. This marks the highest number of notifications since the latter half of 2020, according to the Office of the Australian Information Commissioner.
To mitigate these risks, security operations centre teams should apply the following controls:
- Project-level Segmentation: Do not co-mingle data-at-rest with its consumers, such as Document AI, in the same project. When using SaaS and ETL services that pick up, manipulate, and write data, configure the inputs and outputs to reside in separate projects. This forces the manual binding of IAM permissions for the Service Agent rather than relying on automatic grants (an illustrative sketch follows this list).
- Restrict the API and Service: Use the Org Policy constraint serviceuser.services to prevent the enablement of the Document AI service where it is not needed, and restrict API usage with the Org Policy constraint gcp.restrictServiceUsage (a second sketch below shows one way such a constraint can be set).
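As an illustration of the cross-project pattern described in the first recommendation, the sketch below manually grants a Document AI service agent narrowly scoped roles on input and output buckets that live in a separate data project. It assumes the google-cloud-storage client library; the project ID, bucket names, and service agent address are placeholders, and the correct agent email should be taken from the workload project’s IAM page.

```python
# Minimal sketch: manually bind a Document AI service agent to buckets that
# live in a *different* project, so no automatic project-level grant applies.
# Assumes the google-cloud-storage client library; all names below are
# placeholders for illustration only.
from google.cloud import storage

DATA_PROJECT = "data-project-id"  # hypothetical project holding the buckets
SERVICE_AGENT = (
    "serviceAccount:service-123456789@gcp-sa-prod-dai-core.iam.gserviceaccount.com"
)  # placeholder: copy the real agent email from the workload project's IAM page


def grant_bucket_role(client: storage.Client, bucket_name: str, role: str) -> None:
    """Add a single IAM binding for the service agent on one bucket."""
    bucket = client.bucket(bucket_name)
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({"role": role, "members": {SERVICE_AGENT}})
    bucket.set_iam_policy(policy)


client = storage.Client(project=DATA_PROJECT)
# Read-only on the input bucket, write-only on the output bucket.
grant_bucket_role(client, "docai-input-bucket", "roles/storage.objectViewer")
grant_bucket_role(client, "docai-output-bucket", "roles/storage.objectCreator")
```

Because the buckets sit outside the Document AI project, nothing is granted automatically; the agent holds only the two roles bound above.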
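And as one possible way to apply the service-restriction recommendation programmatically, the following sketch uses the google-cloud-org-policy client library to deny the Document AI API on a single project. The project ID is a placeholder, and whether the constraint is better set at the project, folder, or organisation level depends on the environment.

```python
# Minimal sketch: deny the Document AI API on one project via an Org Policy
# constraint, assuming the google-cloud-org-policy client library.
# The project ID is a placeholder; in practice this is often set higher up,
# at the folder or organisation level, by an administrator.
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()

SCOPE = "projects/my-project-id"  # hypothetical scope

policy = orgpolicy_v2.Policy(
    name=f"{SCOPE}/policies/gcp.restrictServiceUsage",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    denied_values=["documentai.googleapis.com"]
                )
            )
        ]
    ),
)

# Creates the policy if it does not already exist on this resource.
client.create_policy(parent=SCOPE, policy=policy)
```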
“It is crucial to remember that permission grants only tell part of the story, especially once service functionality and the possibility of transitive access are considered,” adds Traxler. “Transitive access abuse is not isolated to the Document AI service but will likely reoccur across services (and all the major cloud providers). Segmenting data storage, business logic, and workloads in different projects can reduce the blast radius of excessively privileged service agents.”