The timeframe for effective network defense is contracting. Recent data indicates that "breakout time," the interval between an initial compromise and the moment a threat actor moves laterally, has decreased significantly. The average window is now 29 minutes, a 65% acceleration compared to the previous year. With the fastest recorded lateral movement occurring in just 27 seconds, manual detection and triage cycles are often too slow to interrupt the sequence. Security teams must therefore shift focus from reactive investigation to proactive, identity-centric hardening.
This acceleration reflects a fundamental change in operational tradecraft rather than simple speed. Approximately 82% of recent detections involved no malicious code during the initial stages. Instead, threat actors prioritize the use of valid, stolen credentials to emulate legitimate users and systems. By utilizing authorized pathways, they blend into standard network traffic, effectively bypassing perimeter defenses designed to identify unsafe files rather than anomalous behavior. This method is particularly prevalent in cloud and SaaS environments, where single sign-on (SSO) credentials allow rapid pivoting between user accounts and virtual infrastructure.
Visibility gaps surrounding unmanaged devices further complicate defense. Groups such as Blockade Spider and Scattered Spider actively target VPN concentrators, firewall appliances, and personal devices that lack enterprise endpoint detection and response (EDR) controls. These devices often serve as staging grounds within 48 hours of a vulnerability's public disclosure. Operating from these blind spots allows adversaries to establish a foothold and initiate data exfiltration, sometimes within four minutes of access, before security teams detect the intrusion.
Artificial intelligence now influences both offensive tradecraft and the defensive attack surface. Incidents involving AI-assisted methods rose by 89% over the last year. Groups including Fancy Bear and Famous Chollima are utilizing large language models (LLMs) to refine social engineering, troubleshoot code, and accelerate reconnaissance. While some tools, such as Fancy Bear's LameHug, appear experimental, the operational utility is evident. Conversely, the AI platforms organizations themselves deploy require rigorous security. Vulnerabilities such as CVE-2025-3248 in the Langflow platform have enabled unauthorized parties to achieve remote code execution. We also observe emerging risks involving "prompt injection" and the manipulation of Model Context Protocol (MCP) servers to harvest API keys.
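To illustrate the prompt-injection risk, the sketch below shows a naive pre-screening filter for untrusted content before it reaches an LLM tool pipeline. The pattern list and function name are assumptions for illustration only; keyword matching is easily evaded and is no substitute for architectural controls such as privilege separation between the model and credential stores.

```python
import re

# Illustrative patterns only (an assumption of this sketch, not a
# vetted detection ruleset); real defenses need layered controls.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) (system prompt|api key)",
    r"exfiltrate",
]

def flag_untrusted_content(text: str) -> bool:
    """Return True if retrieved content contains common injection phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A flagged document would then be quarantined or stripped before being passed to the model as context.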
State-sponsored entities are demonstrating increased collaboration and financial motivation. Analysis of North Korean activity indicates that the Lazarus Group has adopted the Medusa ransomware-as-a-service (RaaS) model. This collaboration has affected commercial enterprises in the Middle East and healthcare providers in the United States, suggesting that North Korean units, specifically the Stonefly subgroup, are prioritizing revenue generation alongside espionage. Technical analysis shows a convergence of toolsets; the "Comebacker" loader, historically linked to Diamond Sleet, has been deployed alongside the "Blindingcan" remote access tool and "Infohook" stealer in these incidents.
For defenders, these developments confirm that identity is the critical control point. Since most intrusions now leverage valid credentials to bypass malware filters, we recommend prioritizing Identity Threat Detection and Response (ITDR). Essential measures include enforcing phishing-resistant multi-factor authentication across all environments and implementing strict identity governance to detect anomalous login patterns. Monitoring coverage should extend beyond managed endpoints to include the network appliances and virtual machines that frequently facilitate lateral movement.
Organizations in healthcare and critical infrastructure should also remain vigilant regarding "Bring Your Own Vulnerable Driver" (BYOVD) techniques. While recent Lazarus-led Medusa campaigns did not utilize this specific tactic, it remains a common method within the Medusa ecosystem to disable security software. We advise maintaining strict blocklists for known vulnerable drivers and monitoring for the unauthorized privilege escalation required to install them. Furthermore, as enterprises integrate AI, security teams must vet low-code AI tools and MCP servers with the same scrutiny applied to other critical infrastructure.
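A driver blocklist can be enforced with a simple hash lookup at scan time. The sketch below assumes a locally maintained set of SHA-256 hashes of known-vulnerable drivers (for instance, populated from a community feed such as the LOLDrivers project); the set contents and function names here are placeholders, not a real blocklist.

```python
import hashlib
from pathlib import Path

# Placeholder entries (an assumption of this sketch); in practice,
# populate this set from a maintained vulnerable-driver feed.
KNOWN_VULNERABLE_SHA256 = {
    "0" * 64,
}

def is_blocked_driver(path: Path) -> bool:
    """Return True if the file's SHA-256 hash appears on the blocklist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_VULNERABLE_SHA256
```

On Windows, the same intent is better served by native controls such as WDAC's recommended driver block rules; a script like this is mainly useful for auditing file shares and gold images.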
The security situation is trending toward an era where intent, rather than tooling, distinguishes the legitimate user from the unauthorized party. As AI capabilities mature, distinguishing social engineering from legitimate communication will become increasingly difficult. The effectiveness of a security program will depend on its ability to automate response and maintain deep visibility into identity-based movements, allowing teams to intercept intrusions within the critical 29-minute window.
While the acceleration in breakout times is well-documented, the full impact of AI on the development cycle of custom malware requires further study. Similarly, the internal coordination within the Lazarus collective, evidenced by the sharing of tools like Comebacker across distinct subgroups, remains an area where technical indicators provide only a partial view of the organizational structure.