Security researchers at Check Point Research have identified three vulnerabilities in Anthropic’s AI-powered coding utility, Claude Code. These findings, which have since been addressed by Anthropic, demonstrate how manipulated project repositories could potentially expose development environments to unauthorized access and credential exfiltration.
Following the responsible disclosure of these flaws last year, Anthropic released updates to resolve the issues. The company is also implementing additional security controls to harden the platform. To maintain a secure development lifecycle, Triage recommends that all teams using Claude Code verify they are running the latest version of the software.
Analyzing the Exposure
The vulnerabilities in Claude Code illustrate a specific challenge in modern software engineering: the intersection of automation and security. As Check Point researchers Aviv Donenfeld and Oded Vanunu noted, the core issue lies in balancing powerful automation features with strict access controls.
The research identified that configuration files within a repository. Typically treated as passive data—could be used to execute arbitrary commands. This creates a supply chain risk where a single compromised commit in a shared repository could affect any developer who subsequently opens that project.
Anthropic and security analysts have categorized these findings into two primary tracking identifiers:
CVE-2025-59536: This identifier covers two related vulnerabilities involving the execution of commands without explicit user consent via configuration settings.
CVE-2026-21852: This vulnerability affected versions prior to 2.0.65 and permitted the potential exfiltration of API credentials through malicious project configurations.
Configuration Files as Execution Vectors
The research team focused on how Claude Code processes instructions from local files, identifying three distinct vectors for unauthorized activity.
Lifecycle Hooks (CVE-2025-59536)
The first vulnerability involved "Hooks," a feature designed to enforce consistent behaviors, such as code formatting, at specific stages of a project’s lifecycle. Researchers demonstrated that a threat actor could embed unauthorized commands within the Claude Code configuration file of a repository. Upon opening the project, the tool would automatically execute these commands without notifying the developer. The proof-of-concept showed that this mechanism could grant remote access to the developer's terminal with the user’s full privileges.
Model Context Protocol (CVE-2025-59536)
The second vulnerability, also tracked under the same CVE, involved the Model Context Protocol (MCP). This setting manages connections between the coding platform and external services. Similar to the Hooks vector, researchers found that MCP servers could be defined within a project's configuration file. A manipulated file could force the execution of commands before the standard user warning dialog appeared, bypassing the intended consent flow.
Credential Redirection (CVE-2026-21852)
The third finding involved the handling of API keys. By altering settings in a project’s configuration file, researchers were able to intercept API-related communications between the client and Anthropic’s servers. This flaw allowed traffic to be routed to a server controlled by an unauthorized party, enabling the logging of API keys before the user interacted with a warning prompt.
Securing AI-Assisted Workflows
Claude Code represents a growing class of AI development tools, alongside GitHub Copilot, Amazon CodeWhisperer, and others—that accelerate software delivery by integrating directly with source code and local files.
While these tools offer significant productivity gains, they also introduce new variables into the security model. "Configuration files that were once passive data now control active execution paths," Donenfeld and Vanunu explained in their analysis.
This shift requires security teams and developers to treat development tools as critical infrastructure. As AI agents gain direct access to production environments and credentials, maintaining rigorous patch management and validating the integrity of third-party repositories becomes essential.
Anthropic has patched these vulnerabilities, and current versions of Claude Code are no longer susceptible to these specific vectors. Triage advises organizations to audit their AI tooling versions regularly to ensure protecting against emerging supply chain risks.