Back to all articles

Securing AI agents: Addressing default permission risks in Google Cloud Vertex AI

Security research into Google Cloud’s Vertex AI platform reveals how excessive default permissions in deployed AI agents can lead to unauthorized access to sensitive data and infrastructure. Implementing a "Bring Your Own Service Account" (BYOSA) model allows organizations to enforce least-privilege access and safely integrate agentic AI into their environments.

Triage Security Media Team
3 min read

As organizations increasingly deploy AI agents to automate complex operational workflows, ensuring these systems are configured with appropriate permissions is a critical defensive measure. Recent security research by Palo Alto Networks details how this risk can materialize within Google Cloud's Vertex AI platform. Their analysis demonstrates that broad default permissions could enable an unauthorized party to misuse a deployed AI agent, potentially leading to unauthorized access to sensitive data and restricted internal infrastructure.

The risk of excessive default permissions

Vertex AI is a Google Cloud platform offering an Agent Engine and Application Development Kit. Developers use these tools to build autonomous agents that interact with APIs, manage files, query databases, and execute decisions with minimal human oversight. Because these agents automate significant enterprise workflows—analyzing data, powering customer service tools, and enabling existing cloud services—they often require broad access to cloud environments.

During a security assessment, researchers identified that every deployed Vertex AI agent utilizes a default service account, known as the Per-Project, Per-Product Service Agent (P4SA), which was provisioned with excessive default permissions. If a malicious actor successfully extracts the agent's service account credentials, they could leverage these permissions to access sensitive areas of a customer's cloud environment. The research methodology demonstrated that these credentials could also grant access to Google's internal infrastructure, allowing the retrieval of proprietary container images and revealing hardcoded references to internal Google storage buckets.

Validating the scope of access

To validate this risk, researchers developed a proof-of-concept Vertex AI agent. Once deployed, the agent queried Google's internal metadata service to extract the active credentials of the underlying P4SA service agent. These credentials provided the necessary permissions to escalate access beyond the AI agent's immediate environment, reaching the customer's broader Google Cloud Project and elements of Google's internal infrastructure.

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat," wrote Palo Alto researcher Ofir Shaty in the published findings. He noted that the default scopes set on the Agent Engine could potentially extend access into an organization's Google Workspace, including services such as Gmail, Google Calendar, and Google Drive.

Ian Swanson, VP of AI security at Palo Alto Networks, emphasized the need for organizations to assess potential risks before deployment and protect agents during runtime. “Agents represent a shift in enterprise productivity including AI that talks and AI that acts,” he stated, noting that this shift introduces risks of unauthorized actions alongside traditional data exposure concerns.

Implementing least-privilege access

Following the disclosure of these findings, Google updated its official documentation to clarify how Vertex AI uses agents and resources. To secure agentic AI environments, Google recommends that organizations replace the default service agent on Vertex Agent Engine with a custom, dedicated service account.

A Google spokesperson emphasized this approach as a primary defense mechanism. "A key best practice for securing Agent Engine and ensuring least-privilege execution is Bring Your Own Service Account (BYOSA)," the spokesperson stated. "Using BYOSA, Agent Engine users can enforce the principle of least privilege, granting the agent only the specific permissions it requires to function and effectively mitigating the risk of excessive privileges."

About the original reporting

This security bulletin preserves the factual reporting originally authored by Jai Vijayan, a contributing writer and technology reporter with over 20 years of experience in IT trade journalism. Previously a Senior Editor at Computerworld covering information security, data privacy, big data, Hadoop, the Internet of Things, e-voting, and data analytics, Vijayan also covered technology for The Economic Times in Bangalore, India. He holds a Master's degree in Statistics and resides in Naperville, Illinois.