What the Microsoft Copilot EchoLeak Incident Reveals About AI Security for SMEs

Cybersecurity AI Risk Microsoft Copilot SME Advisory Prompt Injection

🕐 Revisiting a Recent Incident

The EchoLeak vulnerability in Microsoft 365 Copilot (CVE-2025-32711) was first reported in April 2025. We are returning to it now — not because it is new, but because it is still relevant. Most organisations affected have not yet put adequate controls in place. Some are not aware the risk exists at all. This post exists to change that.

It didn't require malware.

It didn't involve a system breach.

It didn't even require a user to click anything.

The email looked like every other email in the inbox. A familiar sender. A reasonable subject line. The kind of message that gets opened without a second thought — because there is no reason to have one.

And yet, somewhere inside that message, hidden from the reader but visible to the machine, was an instruction. Not for a person. For the AI.

This is not a hypothetical. In the Microsoft 365 Copilot "EchoLeak" vulnerability (CVE-2025-32711), researchers demonstrated exactly this: that a single ordinary email could silently influence how an AI assistant behaves — redirecting it, repurposing it, turning a productivity tool into an unwitting participant in a data exposure event.

It did not look like an attack. That is precisely what made it one.

Source

TechRepublic — Microsoft 365 Copilot Flaw "EchoLeak" Exposed. Reported April 2025.
techrepublic.com/article/news-microsoft-365-copilot-flaw-echoleak/

This article is for general informational purposes only and does not constitute legal, technical, or professional cybersecurity advice. Secompass recommends engaging a qualified adviser before making decisions based on this content.

How a Normal Email Becomes a Security Incident

The inbox is where work begins. It is also, increasingly, where AI begins. Microsoft 365 Copilot reads your emails, summarises them, drafts responses, and takes actions — all because it has been trusted to do so.

That trust is the vulnerability. When an email arrives carrying embedded instructions — instructions formatted not for a human but for an AI — Copilot does not always recognise the difference between content meant to be read and instructions meant to be followed. It processes the message. It acts.

The user sees nothing unusual. The AI has already moved on to the next task.

What the EchoLeak Research Demonstrated

Security researchers used a technique called prompt injection — embedding hidden instructions inside content that an AI system would process. In the EchoLeak scenario, those instructions were placed inside an email. Copilot read the email. Copilot followed the instructions. No malware was needed. No account was compromised. No traditional security alert was triggered.

What makes this particularly difficult to address is how it compares to the attack vectors most security tools are built to detect:

Attack vectorVisibility to userTraditional detectionRisk level
Prompt injection via email None — appears as normal email Not flagged — no malware signature High
Malicious attachment Attachment visible to user AV / sandbox scanning Medium
Phishing link Link visible, may raise suspicion URL filtering, user training Medium
Credential theft Login prompt or form MFA, anomaly detection Medium

Why This Is Still Relevant Today

A vulnerability is reported. A patch is issued. The news cycle moves on. And most organisations — particularly small and mid-sized ones without a dedicated security function — return to business as usual, assuming the problem has been resolved for them.

It rarely has. A patched vulnerability in one version of one tool does not address the underlying pattern. Prompt injection is a class of risk, not a single flaw. The conditions that made EchoLeak possible — AI systems that process external content as trusted instruction — remain present across a wide range of tools in active use today.

We return to this incident not to alarm, but to make the abstract concrete. EchoLeak is a clear, documented example of something that many security conversations still treat as theoretical. It happened. It can happen again — through different tools, in different forms, with the same invisible footprint.

"It doesn't look like an attack.
It looks like normal system behaviour.
That is the problem."

Where This Type of AI Security Risk Appears

Prompt injection risk is not unique to Microsoft Copilot. The pattern exists wherever an AI system processes external, untrusted content and has access to sensitive data or the ability to take actions. For most SMEs, that includes:

  • AI-powered email assistants reading and summarising inboxes
  • Productivity copilots integrated into Microsoft 365 or Google Workspace
  • CRM platforms with AI features accessing client records
  • Customer support automation handling sensitive enquiries
  • Internal knowledge bases and document search tools powered by AI

Each of these tools was adopted to improve efficiency. Each carries a version of the same risk — an AI trusted to act, encountering content it was not designed to question.

The Assumption That Creates the Gap

There is a widely held belief in business environments, rarely stated out loud but almost universally present:

The Assumption

"If content is safe for a human to read, it is safe for an AI system to process."

This assumption made sense before AI became an active participant in business workflows. It no longer holds. AI systems do not only read — they interpret, infer, and act. External content that is harmless to a person can carry instruction for a machine. The gap between those two realities is where EchoLeak lived. It is where future incidents will too.

Practical Controls to Reduce AI Security Risk

Addressing prompt injection risk and broader AI security exposure does not require removing AI from the business. It requires governing it — deliberately, with clear boundaries. The following controls are a starting point:

Risk areaRecommended controlWho owns it
Untrusted external inputs Treat all emails and documents as untrusted by default — sandbox AI processing of external content where possible IT / Security team
Excessive data access Apply least privilege — restrict AI tool access to only what is necessary for the specific task IT / vCISO
High-impact AI outputs Introduce mandatory human review before AI-generated actions involving sensitive data are executed Operations lead
Lack of visibility Enable logging and monitoring of AI activity to detect anomalous behaviour over time Security / Compliance
Ungoverned AI usage Define and enforce an AI governance policy — covering which tools are permitted, what data they can access, and under what conditions CEO / vCISO

The goal is not to limit what AI can do. It is to ensure that what AI does is bounded, visible, and recoverable — qualities that good security has always required of any system trusted with business data.


Where Things Stand

EchoLeak was documented. Patched. Reported on. And quietly forgotten by most of the businesses it was most relevant to.

In most environments we review, AI is already connected to more systems than leadership is aware of. It is reading emails, accessing files, and taking actions — often with no governance framework in place and no audit trail to follow.

The risk is not theoretical. It is already embedded in how work gets done. The question is whether your organisation is paying attention.

Work With Secompass

If Your Business Uses AI Tools — It's Worth Understanding What They Can Access.

We help SME leaders across Australia and New Zealand map AI risk quickly and put the right controls in place — without disrupting how your team works.

  • What can your AI tools access right now?
  • What actions can they take without human review?
  • Do you have an AI governance policy in place?
Book a Free Consultation →
Next
Next

Lake Alice Privacy Breach: Why this is more than a privacy incident