Artificial intelligence is no longer just a defensive tool; it is now a core accelerant for cybercriminals and nation-state actors alike. That is the central message from CrowdStrike’s newly released 2026 Global Threat Report, which paints 2025 as the “year of the evasive adversary”, defined by speed, identity abuse and direct attacks on AI systems themselves.
According to the report, AI-enabled adversaries increased operations by 89% year-on-year, using generative AI to enhance reconnaissance, scale phishing, automate credential theft and refine post-exploitation activity. The result is a dramatic compression in the time defenders have to respond.
Breakout time hits record low
One of the report’s starkest findings is the continued collapse of average eCrime breakout time, the period between initial compromise and lateral movement. In 2025, that average fell to just 29 minutes, down from 48 minutes in 2024 and 98 minutes in 2021.
The fastest observed breakout occurred in just 27 seconds, while in one intrusion, data exfiltration began within four minutes of initial access. CrowdStrike describes speed as the defining characteristic of modern intrusion activity, with adversaries increasingly operating through legitimate credentials and trusted tools to avoid detection.
Notably, 82% of detections in 2025 were malware-free, reflecting a broader shift toward interactive, hands-on-keyboard intrusions that blend into normal business activity.
AI becomes both weapon and target
While AI is accelerating established tactics rather than creating entirely new ones, its operational impact is significant. The report details how threat actors are integrating AI across the kill chain, from social engineering to malware development and defence evasion.
Russia-nexus actor FANCY BEAR deployed LLM-enabled malware known as LAMEHUG, embedding prompt-based logic to automate reconnaissance and document collection. Meanwhile, eCrime actor PUNK SPIDER used AI-generated scripts to accelerate credential dumping and erase forensic artefacts, and DPRK-linked FAMOUS CHOLLIMA leveraged AI tools to scale fraudulent insider employment schemes.
But AI systems themselves are also under direct attack. CrowdStrike responded to incidents at more than 90 organisations where adversaries injected malicious prompts into legitimate AI development tools, abusing local AI command-line interfaces to generate commands that stole credentials and cryptocurrency.
Elsewhere, threat actors exploited vulnerabilities in AI platforms such as Langflow to establish persistence and deploy ransomware, while malicious clones of legitimate Model Context Protocol (MCP) servers were used to intercept sensitive data.
The report warns that prompt injection and jailbreak techniques, while not yet consistently effective at scale, illustrate a growing willingness to manipulate AI workflows indirectly by targeting inputs rather than infrastructure.
China-nexus actors accelerate edge exploitation
Beyond AI, the report highlights a sharp rise in China-nexus activity, which increased by 38% overall in 2025. Logistics targeting rose 85%, with telecommunications and financial services also heavily impacted.
China-linked actors demonstrated a systematic focus on internet-facing edge devices, including VPN appliances and firewalls. In 40% of cases where these actors exploited a vulnerability, the target was an edge device. Many exploits were weaponised within days of public disclosure, with some operationalised in as little as two to six days.
Zero-day abuse also continued to climb, with a 42% increase in vulnerabilities exploited before public disclosure.
Ransomware adapts to evade visibility
Ransomware groups further refined cross-domain tradecraft to avoid heavily monitored endpoints. Actors such as SCATTERED SPIDER and BLOCKADE SPIDER moved laterally across cloud, identity and virtualised environments, often deploying ransomware solely on VMware ESXi infrastructure.
PUNK SPIDER emerged as the most active big game hunting (BGH) actor, conducting 198 observed intrusions, a 134% increase year-on-year. Techniques such as remote file encryption over SMB shares allowed attackers to encrypt data without executing ransomware directly on managed hosts.
Meanwhile, DPRK-linked PRESSURE CHOLLIMA executed a $1.46 billion cryptocurrency theft via a supply chain compromise, the largest single financial heist reported to date.
Cloud and identity under sustained pressure
Cloud-conscious intrusions rose 37% in 2025, including a 266% increase from state-nexus actors. Valid account abuse accounted for 35% of cloud incidents, underscoring identity as the new perimeter.
Taken together, the trends point to a threat landscape defined by speed, legitimacy and low-visibility access paths. Adversaries are chaining together identity compromise, SaaS abuse, edge exploitation and AI manipulation to stay ahead of fragmented security controls.
As the report concludes, the challenge for defenders is no longer simply detecting malware, but operating at machine speed to counter adversaries who are doing the same.
In 2026, the AI arms race is set to intensify – and the window for response will only continue to narrow.
The post AI Arms Race Shrinks Breakout Time to 29 Minutes as Adversaries Turn GenAI on the Enterprise appeared first on IT Security Guru.
The original article found on IT Security Guru Read More