Critical Vulnerability in Langflow AI Platform Requires Immediate Remediation

A critical code injection flaw in the Langflow AI framework (CVE-2026-33017) allows unauthenticated remote code execution. With active scanning and unauthorized access attempts observed within 24 hours of disclosure, organizations must upgrade to version 1.9.0 and implement runtime defenses immediately.

Triage Security Media Team
2 min read

According to reporting from Dark Reading, a critical vulnerability in Langflow—an open-source framework for AI agent development—came under active attack shortly after its initial disclosure.

On Wednesday, the Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2026-33017, a critical code injection flaw, to its Known Exploited Vulnerabilities (KEV) catalog. The vulnerability carries a 9.8 CVSS score and was first disclosed on March 17, 2026. Reports of unauthorized activity emerged almost immediately.

Cloud security vendor Sysdig observed access attempts less than 24 hours after the vulnerability was disclosed. Sysdig researchers noted that attackers used the technical details in the advisory to quickly construct working code-execution exploits, even though no public proof-of-concept (PoC) code was available at the time.

This rapid turnaround indicates that the window between vulnerability disclosure and active network scanning is now measured in hours, rather than days or weeks. Researchers noted that AI workloads are frequently targeted because they process high-value data and provide software supply chain access, often before comprehensive security measures are fully implemented.

Technical details of CVE-2026-33017

Langflow is a widely used low-code framework for building and deploying AI agents. The vulnerability, CVE-2026-33017, originates in the POST /api/v1/build_public_tmp/{flow_id}/flow endpoint, which is designed to allow users to build public flows without authentication.

According to the Langflow GitHub advisory, if a user supplies the optional "data" parameter, the endpoint processes the provided flow data instead of the stored flow data from the local database. If this input contains arbitrary Python code within node definitions, the application passes the code directly to the exec() function without sandboxing. This mechanism grants unauthenticated remote code execution (RCE) to anyone who can reach the endpoint.
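Based on the advisory's description, the vulnerable pattern can be sketched as follows. This is an illustrative reconstruction, not Langflow's actual source; `build_flow` and `load_flow_from_db` are hypothetical names standing in for the endpoint handler and the database lookup.

```python
# Illustrative sketch of the flaw described in the advisory -- NOT Langflow source code.

def load_flow_from_db(flow_id):
    # Stub standing in for the stored flow in the local database.
    return {"nodes": [{"code": "result = 'trusted'"}]}

def build_flow(flow_id, data=None):
    # Root cause: if the optional "data" parameter is supplied, the endpoint
    # processes the attacker-supplied flow instead of the stored one.
    flow = data if data is not None else load_flow_from_db(flow_id)
    results = []
    for node in flow.get("nodes", []):
        env = {}
        # Node code is passed to exec() with no sandboxing, so any Python an
        # unauthenticated caller embeds in a node definition runs server-side.
        exec(node.get("code", ""), env)
        results.append(env.get("result"))
    return results
```

Because the endpoint requires no authentication, anyone who can reach it controls the `data` payload, turning the call into unauthenticated RCE.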

Langflow clarified that this issue is distinct from CVE-2025-3248, an earlier vulnerability that was previously exploited to distribute the Flodrix botnet.

The technical advisory for CVE-2026-33017 included specific details, such as the vulnerable endpoint path and the exact code injection mechanism. This transparency, while vital for defenders, gave attackers enough information to craft working exploits without extensive independent research.

System impact and remediation

Researchers warn that attackers who successfully execute arbitrary code via CVE-2026-33017 can extract sensitive configuration data from vulnerable Langflow instances. Because these instances often store API keys and credentials for services like OpenAI, Anthropic, and AWS, exposure can enable lateral movement to connected databases and external cloud environments.

To protect your systems, we recommend the following immediate actions:

  • Upgrade immediately: Langflow version 1.9.0 fixes this vulnerability. System administrators should upgrade to the fixed version as soon as possible.

  • Implement runtime detection: Utilize runtime security monitoring to identify unexpected shell execution or anomalous network callbacks originating from AI workloads.

  • Segment networks: Isolate AI development frameworks from critical production databases and restrict outbound external access to only necessary, approved endpoints.

  • Accelerate response capabilities: Organizations operating on scheduled, delayed patch cycles face an elevated risk during the critical hours following a disclosure. Bridging the gap between disclosure and remediation requires rapid, targeted response procedures.
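As a starting point for the upgrade step above, a quick version check can flag instances still below the fixed release. This is a simplified sketch (plain dotted numeric versions only, with 1.9.0 taken as the fixed version per the advisory); production tooling should use a proper version-parsing library such as `packaging`.

```python
# Minimal triage helper: flag Langflow versions below the fixed release.
# Simplified sketch -- handles only plain dotted numeric versions like "1.8.3".

def parse_version(version: str) -> tuple:
    # Compare versions numerically so that "1.10.0" sorts above "1.9.0".
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, fixed: str = "1.9.0") -> bool:
    # Anything older than the fixed release is presumed affected.
    return parse_version(installed) < parse_version(fixed)
```

Running this against an inventory of deployed Langflow instances gives a fast first cut of which hosts need the emergency upgrade.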

Securing AI pipelines is a collaborative effort. By taking these steps, security and engineering teams can ensure their organizations continue building innovative applications safely and confidently.