Recent analysis of the threat landscape indicates a significant reduction in the time unauthorized actors need to move from initial access to lateral movement. According to CrowdStrike’s assessment of activity in 2025, the average "breakout time" (the window between gaining an initial foothold and pivoting to other systems) has decreased to just 29 minutes.
This represents a 65% acceleration compared to the previous year. In the fastest recorded instance, lateral movement occurred in only 27 seconds. Another case involved data exfiltration beginning just four minutes after initial access. CrowdStrike’s 2026 Global Threat Report suggests that speed has become a primary operational characteristic for threat actors, requiring defenders to drastically reduce the time allotted for detection and response.
Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, notes the urgency of this shift. "Just a few years ago the average breakout time was 62 minutes," Meyers states. He suggests that the integration of AI into threat tradecraft over the past year has contributed to this acceleration, creating a challenging environment for response teams.
The Role of Identity in Acceleration
The primary driver of this increased speed appears to be the strategic use of legitimate credentials. By leveraging valid accounts, threat actors can bypass traditional perimeter security and blend with authorized network traffic.
CrowdStrike’s data shows that in 35% of investigated cloud-related incidents, unauthorized parties used valid credentials to navigate environments without triggering standard alerts. Furthermore, 82% of detections in 2025 were malware-free. This indicates that the majority of intrusions now rely on authorized pathways, impersonating trusted personnel, systems, and software integrations—rather than deploying malicious code or exploits to breach defenses.
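Because these intrusions ride on valid credentials rather than malware, detection shifts from file signatures to behavioral signals in authentication logs. One common signal is "impossible travel": consecutive logins for the same account whose implied travel speed is physically implausible. The sketch below is illustrative only (the data model, threshold, and function names are assumptions, not taken from the report):

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    """Flag a credential as suspect when the implied speed between two
    consecutive logins exceeds a plausible airliner speed (illustrative
    threshold). Zero or negative time deltas are treated as suspect."""
    hours = (curr.when - prev.when).total_seconds() / 3600
    if hours <= 0:
        return True
    return haversine_km(prev, curr) / hours > max_kmh
```

For example, a login from New York followed 30 minutes later by one from London implies a speed of over 11,000 km/h and would be flagged, while a login from a nearby location an hour later would not. Real products combine many such signals; this shows only the shape of the idea.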
Meyers highlights that threat actors are leveraging identity more effectively to move across cloud, SaaS, on-premises, and virtual environments. In cloud environments specifically, where incident volume increased by 37%, actors frequently utilized single sign-on (SSO) credentials for initial access before quickly pivoting to virtual infrastructure and network devices.
Visibility Gaps in Unmanaged Devices
Unmanaged devices, those lacking enterprise endpoint detection and response (EDR) controls, remain a significant entry point. This category includes VPN concentrators, firewall appliances, personal devices (BYOD), webcams, third-party applications, and virtual machines.
Both state-sponsored groups, including China-linked actors, and eCrime operations such as Blockade Spider, Punk Spider, and Scattered Spider have demonstrated a capability to target these devices effectively. Meyers notes that investment in targeting unmanaged infrastructure allows actors to exploit vulnerabilities in network devices that organizations often cannot fully monitor or control. Additionally, these groups are working to shrink the window between vulnerability disclosure and exploitation to as little as two days.
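A practical first step toward closing this visibility gap is simply reconciling what the network observes against what the EDR console manages. The sketch below assumes two illustrative data sources (a discovery inventory and an EDR enrollment list); the field names and device types are hypothetical:

```python
def find_unmanaged(inventory: dict[str, str], edr_enrolled: set[str]) -> dict[str, str]:
    """Return {hostname: device_type} for devices seen on the network
    but not enrolled in EDR.

    `inventory` maps hostname -> coarse device type (e.g. "vpn-concentrator",
    "webcam", "vm"), as might come from DHCP leases or a discovery scan;
    `edr_enrolled` is the set of hostnames reporting an EDR agent.
    Identifiers are normalized to lowercase before comparison.
    """
    enrolled = {h.lower() for h in edr_enrolled}
    return {h.lower(): t for h, t in inventory.items() if h.lower() not in enrolled}
```

The resulting list will not make appliances like VPN concentrators EDR-capable, but it tells defenders exactly which devices need compensating controls such as network monitoring or stricter segmentation.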
AI as a Tool and an Attack Surface
Artificial intelligence has influenced the security field in two distinct ways: as a tool for accelerating tradecraft and as a new surface for exploitation.
AI in Tradecraft
Threat actors, including organized crime groups and nation-state entities, have adopted AI to speed up reconnaissance, generate social engineering content, develop code, and troubleshoot tools in real time. The report identifies groups such as Punk Spider, North Korea's Famous Chollima, and Russia's Fancy Bear as active users of these technologies. Overall, the number of incidents involving threat actors utilizing AI increased by 89% in 2025.
However, some of this usage remains experimental. For example, Fancy Bear released malware known as LameHug in mid-2025, which incorporated a large language model (LLM) for information gathering. Analysis suggests that while the approach was novel, the malware’s functionality did not significantly differ from traditional tools, indicating that some actors are still in the testing phase of operationalizing AI.
AI Platforms as Targets
Concurrently, the integration of AI tools into enterprise workflows has introduced new vulnerabilities. Threat actors are targeting platforms used for building and deploying AI applications.
A notable example is CVE-2025-3248, a vulnerability in Langflow, a low-code platform for AI applications. Unauthorized parties have exploited this flaw to access credentials, establish persistence, and deploy ransomware.
Additionally, researchers observed attempts to manipulate AI-enabled security workflows through prompt injection and the abuse of model context protocol (MCP) servers. In one instance, a threat actor published a spoofed version of a legitimate Postmark MCP server to harvest sensitive data, such as API keys and financial information, from developers who downloaded it.
CrowdStrike also recorded cases where malicious prompts were injected into generative AI platforms across at least 90 organizations in attempts to steal access credentials and cryptocurrency. The models most frequently discussed in underground forums mirror those used by legitimate enterprises, including ChatGPT, Claude, Grok, and Gemini.
Prioritizing Speed and Identity Defense
The contracting breakout window emphasizes the need for security strategies that prioritize speed and comprehensive visibility. With the majority of intrusions leveraging valid identities rather than malware, organizations are advised to strengthen identity governance, enforce reliable authentication across all environments (including cloud and SaaS), and extend monitoring capabilities to unmanaged devices where possible.