Your AI Tools Are Only as Safe as Their Supply Chain

One compromised AI vendor, one stolen OAuth token, and everything your organisation trusted that vendor with was exposed.


📋 April 2026 Security Incident

The Vercel/Context AI breach was publicly disclosed on 19 April 2026. We are covering it now because the attack pattern it represents — AI supply chain exploitation — is already being replicated across other tools and organisations. Most businesses have not yet put adequate controls in place.

You vetted the AI tool.

You checked the privacy policy.

You even asked about security certifications.

And then the breach happened anyway — from a vendor you never directly approved, through a connection you didn't know existed, using access that looked completely legitimate every step of the way.

In April 2026, Vercel — one of the world's most widely used developer platforms — disclosed that its internal systems had been breached. The entry point was not a phishing email to a Vercel employee. It was not a weak password or an unpatched server. It was an AI tool called Context AI that a Vercel employee had connected to their corporate Google account.

Context AI had been compromised weeks earlier. When the attacker gained access to Context AI's OAuth tokens, they gained access to every organisation whose employees had connected that app to their own accounts. Vercel's internal data — including customer information — was subsequently listed for sale on BreachForums for US$2 million.

This is the AI supply chain attack. And it is now one of the defining security stories of 2026.

Primary Source

Vercel. (2026, April 19). April 2026 Security Incident. Vercel Knowledge Base.
vercel.com/kb/bulletin/vercel-april-2026-security-incident

This article is for general informational purposes only and does not constitute legal, technical, or professional cybersecurity advice. SeCompass recommends engaging a qualified adviser before making decisions based on this content.

How This Breach Actually Happened

The inbox is not the only entry point any more. Any application an employee connects to their corporate accounts — particularly AI tools with broad data permissions — becomes a potential path for an attacker who compromises that application's vendor first.

That is exactly what happened here. The attacker did not target Vercel directly. They targeted a vendor that Vercel's employees trusted — and used that trust as the entry point.

The result was invisible to every traditional security control. No malware was detected. No suspicious login was flagged. No alert was triggered. The access came from a legitimate, trusted OAuth connection — and it looked normal every step of the way.

The Full Attack Chain — Step by Step

To understand why this incident matters beyond the headline, it helps to trace how the attack actually unfolded.

🦠 Step 1 — Infostealer malware targets a Context AI employee
A Context AI employee downloaded an application containing Lumma Stealer, a credential-harvesting infostealer. Their device, credentials, and OAuth tokens were silently compromised.

🔑 Step 2 — Attacker captures OAuth tokens for all Context AI users
Using the stolen credentials, the attacker accessed Context AI's internal systems and captured the OAuth tokens that Context AI held on behalf of every customer who had connected the application.

🔗 Step 3 — Lateral movement into Vercel via a trusted connection
A Vercel employee had connected Context AI to their corporate Google account, granting it broad read access to Google Drive. The attacker used Context AI's stolen tokens to access that employee's Google Workspace, appearing as a legitimate, trusted connection throughout.

🏦 Step 4 — Vercel's internal database accessed and data exfiltrated
Using access obtained through the Google account, the attacker reached Vercel's internal database. Customer data was exfiltrated and subsequently listed for sale on BreachForums for US$2 million. No traditional security control flagged the activity.

📢 Step 5 — Leaked API key detected nine days before public disclosure
A Vercel customer received an OpenAI leaked-key notification on April 10, for a key that had only ever existed inside Vercel, nine days before Vercel's public disclosure on April 19. That detection-to-disclosure gap became a secondary concern in its own right.

The Defining Characteristic of This Attack

It broke no perimeter. It exploited no traditional vulnerability. Every step used legitimate-looking access from trusted connections. This is why it was completely invisible to conventional security tooling — and why it represents a genuinely different class of risk to anything most organisations are currently prepared for.

"The most dangerous attacks of 2026 did not break in.
They walked in — through doors your AI tools had already opened."

This Is Not an Isolated Incident — It's a Pattern

The Vercel breach would be alarming enough on its own. But it sits within a documented pattern of AI supply chain attacks that accelerated significantly through 2025 and into 2026.

  • 700+ organisations were hit by one stolen OAuth token in a single 2025 Salesforce integration attack
  • US$670K is the average extra cost of a breach involving shadow AI compared with a standard incident
  • 1 in 5 organisations have experienced a shadow AI-related breach
  • 300K+ ChatGPT credentials were found in infostealer malware in 2025 alone

The LiteLLM supply chain attack — malware planted inside a widely used open-source AI library — affected thousands of organisations simultaneously, including Mercor, a US$10 billion AI startup. The common thread in every case was the same: attackers exploited trust rather than vulnerabilities.

Researchers have described this as SaaS-to-SaaS lateral movement — a technique that bypasses endpoint security entirely because it never touches endpoints. The attack surface is the connection itself: the OAuth token, the API key, the integration permission. And most organisations have no visibility into these connections at all.

Why AI Governance Can't Wait Any Longer

According to IBM's 2026 X-Force Threat Intelligence Index, 97% of organisations that experienced AI-related breaches lacked basic access controls, and 80% of IT workers have already seen AI agents perform tasks without authorisation. By the end of 2026, Gartner expects up to 40% of enterprise applications to integrate with AI agents, up from under 5% in 2025.

The governance gap is not theoretical. Every AI tool connected to internal systems is a potential entry point. Every OAuth permission granted to an AI vendor is a potential lateral movement path. Every API key stored in a developer platform is a potential credential to steal.

The AI Governance Framework: Where to Start

An AI governance framework doesn't need to be complex to be effective. The organisations that avoided major incidents in 2025 were not the ones with the largest budgets. They were the ones with real visibility into how their AI systems behaved and what connections they maintained.

Pillar 1 — Visibility: Know What You Have

You cannot govern what you cannot see. The first step is a complete inventory of AI tools — including those adopted by individual teams without central IT approval. One concrete starting point, enumerating the OAuth grants across a Google Workspace tenant, is sketched after the list below.

  • Inventory all AI tools: sanctioned and unsanctioned (shadow AI)
  • Map every OAuth connection and third-party AI integration
  • Document what data each tool can access and what actions it can take
  • Identify API keys and credentials stored in development platforms
  • Review which employee accounts are connected to which AI services
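As a sketch of what that OAuth inventory can look like in practice, the script below uses the Google Admin SDK Directory API to list every token that users in a Workspace tenant have granted to third-party apps. This is an illustration under assumptions, not a drop-in tool: the service-account file name and admin address are placeholders, and it assumes a service account with domain-wide delegation and the directory read scopes shown.

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]

# Placeholders: your service-account key file and a delegated admin account.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Walk every user in the tenant, then list the OAuth tokens each one
# has granted to third-party applications.
page_token = None
while True:
    users = directory.users().list(
        customer="my_customer", maxResults=500, pageToken=page_token
    ).execute()
    for user in users.get("users", []):
        email = user["primaryEmail"]
        tokens = directory.tokens().list(userKey=email).execute()
        for token in tokens.get("items", []):
            # displayText is the app's name; scopes shows exactly what it can reach.
            print(f"{email}: {token.get('displayText')} -> {token.get('scopes')}")
    page_token = users.get("nextPageToken")
    if not page_token:
        break
```

Any grant carrying a broad scope such as https://www.googleapis.com/auth/drive.readonly is exactly the kind of connection this incident turned into an entry point, and is worth reviewing first.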

Pillar 2 — Access Control: Apply Least Privilege

Most AI integrations carry far more permissions than their function requires — and those excess permissions become the attacker's path. The sketch after this list shows how large the gap between a broad and a minimal Google Drive scope can be.

  • Revoke all AI tool permissions that are not actively required
  • Replace broad OAuth scopes with minimum-necessary scopes
  • Rotate API keys and OAuth tokens on a regular cadence
  • Require explicit approval for any new AI tool integration
  • Apply least-privilege principles to AI agents as strictly as to human users
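To make the scope point concrete, here is a minimal sketch of the same Drive connection requested two ways with the google-auth-oauthlib library. The client-secret file name is a placeholder; the point is the scope string, not the flow.

```python
# pip install google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow

# Broad scope: read access to EVERY file in the user's Drive --
# the class of grant that made the stolen Context AI tokens so valuable.
BROAD_SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

# Minimal scope: only files the user explicitly opens or creates with this app.
MINIMAL_SCOPES = ["https://www.googleapis.com/auth/drive.file"]

# "client_secret.json" is a placeholder for your OAuth client credentials.
# Request the minimal scopes, not BROAD_SCOPES.
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json", scopes=MINIMAL_SCOPES
)
creds = flow.run_local_server(port=0)
print("Granted scopes:", creds.scopes)
```

If an AI vendor's integration insists on the broad scope when the minimal one would do, that is a due diligence question in itself.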

Pillar 3 — Vendor Due Diligence: Treat AI Tools as Third-Party Vendors

An AI tool is a third-party vendor with privileged access to your systems. It should be subject to the same risk management process as any other supplier.

  • Assess the security posture of every AI vendor before connection
  • Verify security certifications independently — do not rely on self-attestation
  • Review vendor incident history and disclosure practices
  • Include AI vendors in your third-party risk register
  • Establish contractual requirements for breach notification timelines

Pillar 4 — Monitoring: Detect Unusual AI Behaviour

Standard security monitoring was not designed to detect SaaS-to-SaaS lateral movement. Dedicated AI behaviour monitoring is now a necessary component of any security programme; one minimal check is sketched after the list below.

  • Monitor OAuth-connected applications for access from unexpected IPs or timeframes
  • Set up alerts for AI-related credential leakage notifications from providers
  • Review AI tool access logs for activity during periods your applications were not active
  • Include AI integrations explicitly in your incident response runbooks
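As one example of the kind of check involved, the sketch below pulls OAuth token activity from the Google Workspace Reports API and flags events originating outside an expected network range. The allow-list is a hypothetical placeholder (a TEST-NET range), and the same caveats as the earlier sketch apply: a service account with domain-wide delegation and the audit read scope.

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Placeholders, as in the earlier sketch.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

reports = build("admin", "reports_v1", credentials=creds)

# The "token" application logs OAuth activity: each authorize event records
# which client was granted which scopes, by whom, and from which IP.
resp = reports.activities().list(
    userKey="all", applicationName="token", maxResults=1000
).execute()

# Hypothetical allow-list of expected source networks (TEST-NET-3 here).
EXPECTED_PREFIXES = ("203.0.113.",)

for activity in resp.get("items", []):
    ip = activity.get("ipAddress", "")
    if ip and not ip.startswith(EXPECTED_PREFIXES):
        actor = activity.get("actor", {}).get("email", "unknown")
        for event in activity.get("events", []):
            print(f"UNEXPECTED IP {ip}: {actor} -> {event.get('name')}")
```

In practice this logic belongs in your SIEM rather than a one-off script, but the signal is the same: trusted-looking OAuth activity from an origin your organisation never uses.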

Pillar 5 — Policy and Culture: Close the Shadow AI Gap

Research consistently shows that banning AI tools doesn't work — nearly half of employees continue using personal AI accounts after a ban. The effective approach is building a sanctioned programme that meets employees' genuine needs.

  • Develop a clear, accessible AI acceptable use policy
  • Provide approved AI tools that match what employees actually need
  • Train employees to recognise AI-specific risks (credential handling, data sharing)
  • Create a simple process for employees to request new AI tool approvals
  • Make it easier to do the right thing than to go rogue

The AI Governance Priority Matrix

Not everything can be done at once. Here is a prioritised view of governance actions, organised by urgency and impact.

Do now
  • Audit all AI OAuth connections and revoke unnecessary permissions. This closes the primary lateral movement path from supply chain attacks.
  • Rotate all API keys stored in dev/deployment platforms. This mitigates exposure from potential past credential theft.
  • Inventory all AI tools in use, including shadow AI. You can't govern what you can't see.
  • Enable 2FA on all accounts connected to AI services. This significantly reduces the impact of credential theft.

This quarter
  • Add AI tools to your vendor risk register. This establishes ongoing third-party AI risk oversight.
  • Build AI monitoring into your existing SIEM/SOAR. This enables detection of supply chain lateral movement.
  • Develop an AI acceptable use policy. This reduces shadow AI risk with a positive, enabling approach.
  • Conduct an AI risk assessment with expert review. This identifies your specific exposure across all five governance pillars.

Next quarter
  • Implement a formal AI procurement and approval process. This prevents ungoverned AI adoption before it creates risk.
  • Establish an AI security training programme. This builds long-term organisational AI governance capability.
📋 Download: AI Governance Cheatsheet

One-page reference — 5 pillars, priority matrix, 10 vendor questions, and red flags. Free.

Download free cheatsheet →

Where Things Stand

The incidents of 2025 and 2026 are teaching us something important: AI risk is not primarily a technical problem to be solved by a security tool. It is a governance problem — one that requires clear ownership, documented policies, and regular review.

The organisations that will navigate this period well are not necessarily those with the largest security budgets. They are the ones that ask a simple question: "What did we give our AI tools access to — and did anyone check?"

If you do not know the answer to that question today, now is the time to find out.


Jatinder Oberoi — CEO, SeCompass

Jatinder has over 20 years of experience in cybersecurity, privacy, and information security management across financial services, telecommunications, health, and the public sector. SeCompass provides vCISO, vISM, ISO 27001, SOC 2, and AI governance advisory services to organisations across Australia and New Zealand.

Work With SeCompass

If Your Organisation Uses AI Tools, It's Worth Understanding What They Can Access

We help organisations across Australia and New Zealand map AI supply chain risk quickly and put the right governance controls in place — without disrupting how your team works.

  • What OAuth connections do your AI tools hold right now?
  • Are your AI vendors in your third-party risk register?
  • Do you have an AI governance policy in place?

Book a Free AI Governance Review →
