The Vercel Breach: How a Third-Party AI Tool Became the Entry Point to Your Deployment Infrastructure

On April 19, Vercel disclosed a security breach that began outside its own systems, inside a small AI productivity tool that one of its employees was using. This incident is a textbook example of a third-party compromise cascading through OAuth trust chains into critical infrastructure, and it raises an uncomfortable question for every engineering organization and CISO: how many AI tools has your team authorized, and what can they actually reach? Even more critical: what about the unauthorized tools you don't know about?

What Happened

The attack chain started with Context.ai, a third-party AI assistant used by a Vercel employee. Context.ai was compromised as part of a broader attack on its Google Workspace OAuth application. Because the employee had connected Context.ai to their Google Workspace account via OAuth, the attacker inherited those permissions and used them to take over the employee's Vercel Google Workspace account entirely.

From there, the attacker moved laterally into internal Vercel systems and accessed environment variables that were not marked as “sensitive.” A limited subset of Vercel customers had their credentials exposed as a result. Vercel notified those customers directly and recommended immediate credential rotation.

Vercel published its first indicators of compromise on April 19 at 11:04 AM PST and released full origin details and recommendations by 6:01 PM the same day.

The Attack Chain

What makes this incident particularly instructive is the multi-hop nature of the breach:

1. Context.ai was compromised. A third-party AI tool with broad Google Workspace OAuth access became the initial victim.
2. OAuth trust was weaponized. The attacker inherited the OAuth permissions the employee had granted to Context.ai, bypassing Vercel's own authentication entirely.
3. With a valid session, the attacker pivoted into the employee's corporate Google Workspace account.
4. From that account, the attacker reached Vercel's internal systems, including environments and environment variables that weren't protected by its sensitive variable encryption.
5. Customer credentials were exposed. A limited set of customers whose credentials lived in those unprotected environment variables were put at risk.

There was no malware, no phishing of Vercel employees, and no vulnerability in Vercel's own code. The entire chain ran on legitimate, authorized trust relationships.

What Was and Wasn't Exposed

Vercel's architecture draws an important distinction here. Environment variables explicitly marked as “sensitive” in Vercel are stored in a way that prevents them from being read back, even by internal systems. There’s no evidence that sensitive variables were accessed.

The exposure was limited to environment variables that hadn’t been designated sensitive: API keys, tokens, and credentials stored in cleartext in non-protected project environments. Vercel states that customers who were not contacted directly should assume their credentials were not compromised; if you didn’t receive a notification, Vercel has no evidence of impact to your account.
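
For teams acting on this distinction, sensitive variables can be created in the dashboard or programmatically. Below is a minimal sketch against the Vercel REST API; the endpoint version (v10), the "sensitive" type value, and the token and project ID placeholders are assumptions to verify against Vercel's current API documentation.

```typescript
// Sketch: create an environment variable as "sensitive" via the Vercel REST API.
// Endpoint path, API version, and the "sensitive" type value should be confirmed
// against Vercel's current docs; the token and project ID below are placeholders.

const VERCEL_TOKEN = process.env.VERCEL_TOKEN!; // personal or team access token
const PROJECT_ID = "prj_xxxxxxxxxxxx";          // hypothetical project ID

async function addSensitiveEnvVar(key: string, value: string): Promise<void> {
  const res = await fetch(`https://api.vercel.com/v10/projects/${PROJECT_ID}/env`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${VERCEL_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      key,
      value,
      type: "sensitive",                 // decrypted only at build/runtime, not readable back
      target: ["production", "preview"], // environments the variable applies to
    }),
  });
  if (!res.ok) {
    throw new Error(`Vercel API error: ${res.status} ${await res.text()}`);
  }
}

await addSensitiveEnvVar("STRIPE_SECRET_KEY", "sk_live_…");
```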

To support the wider community in investigating and vetting potential malicious activity in their environments, Vercel published the following IOC. Google Workspace administrators and Google account owners should check for usage of this app immediately.

OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
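
For organizations that want to sweep their directory programmatically rather than click through the admin console, the Google Admin SDK Directory API can enumerate the third-party OAuth grants issued by each user. The sketch below is a hedged example: it assumes a service account with domain-wide delegation impersonating a super admin, and the service account email, key, and admin address are placeholders.

```typescript
// Sketch: scan every Google Workspace user for OAuth grants to the compromised
// client ID using the Admin SDK Directory API (users.list + tokens.list).
// Assumes domain-wide delegation is already configured; identifiers are placeholders.
import { google } from "googleapis";

const COMPROMISED_CLIENT_ID =
  "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com";

const auth = new google.auth.JWT({
  email: process.env.SA_EMAIL,       // service account email (placeholder)
  key: process.env.SA_PRIVATE_KEY,   // service account private key (placeholder)
  subject: "admin@your-domain.com",  // super admin to impersonate (placeholder)
  scopes: [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
  ],
});

const directory = google.admin({ version: "directory_v1", auth });

async function findGrants(): Promise<void> {
  let pageToken: string | undefined;
  do {
    const users = await directory.users.list({
      customer: "my_customer",
      maxResults: 200,
      pageToken,
    });
    for (const user of users.data.users ?? []) {
      // List the OAuth tokens this user has granted to third-party apps.
      const tokens = await directory.tokens.list({ userKey: user.primaryEmail! });
      const hit = (tokens.data.items ?? []).find(
        (t) => t.clientId === COMPROMISED_CLIENT_ID,
      );
      if (hit) {
        console.log(`GRANT FOUND: ${user.primaryEmail} -> ${hit.displayText}`);
      }
    }
    pageToken = users.data.nextPageToken ?? undefined;
  } while (pageToken);
}

await findGrants();
```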

A third-party investigation by Infostealers provides additional context on the root cause. The report links the Context.ai compromise to an infostealer infection, suggesting the attacker obtained Google Workspace OAuth credentials through malware on a Context.ai employee's machine rather than through a direct attack on Context.ai's infrastructure.

Vercel's Response

Vercel engaged Mandiant and additional cybersecurity firms for incident response, notified law enforcement, and deployed extensive monitoring and protection measures. The disclosure timeline, with IOCs published within hours of detection and full origin details released the same day, reflects a mature incident response posture.

Vercel's recommended actions for affected customers:

- Rotate all non-sensitive environment variables that contain secrets, API keys, or tokens (see the sketch after this list for one way to find them).
- Enable the sensitive environment variables feature for any secrets going forward; this prevents them from being read even if an attacker gains internal access.
- Review account activity logs for suspicious behavior, especially around deployment configuration.
- Inspect recent deployments for unexpected changes.
- Reconfigure Deployment Protection to Standard at minimum, if not already set, and rotate Deployment Protection tokens.
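
As a starting point for the first two actions, the sketch below lists a project's environment variables through the Vercel REST API and flags secret-looking names that are not stored as sensitive. The endpoint version (v9), the response shape, and the name heuristic are assumptions; confirm against Vercel's current API docs before relying on the output.

```typescript
// Sketch: flag environment variables that look like credentials but are not
// stored with the "sensitive" type, as candidates for rotation and re-adding
// as sensitive. API version and response shape are assumptions.

const VERCEL_TOKEN = process.env.VERCEL_TOKEN!;
const PROJECT_ID = "prj_xxxxxxxxxxxx"; // hypothetical project ID

// Heuristic for names that usually hold credentials.
const SECRET_HINTS = /(SECRET|TOKEN|KEY|PASSWORD|CREDENTIAL)/i;

interface VercelEnv {
  key: string;
  type: string;      // e.g. "plain", "encrypted", "sensitive"
  target?: string[]; // environments the variable applies to
}

async function auditEnvVars(): Promise<void> {
  const res = await fetch(`https://api.vercel.com/v9/projects/${PROJECT_ID}/env`, {
    headers: { Authorization: `Bearer ${VERCEL_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Vercel API error: ${res.status}`);

  const { envs } = (await res.json()) as { envs: VercelEnv[] };
  for (const env of envs) {
    if (env.type !== "sensitive" && SECRET_HINTS.test(env.key)) {
      console.warn(
        `ROTATE + MARK SENSITIVE: ${env.key} (type=${env.type}, targets=${env.target?.join(",")})`,
      );
    }
  }
}

await auditEnvVars();
```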

The Bigger Picture

This breach is a reminder that the blast radius of a third-party compromise is determined by the OAuth permissions that organizations casually grant. AI tools in particular have become prolific OAuth consumers. They request broad access to calendars, email, documents, and collaboration tools in order to function. Each one of those connections is a potential pivot point if the tool itself is compromised.

The concept of shadow IT has existed for decades, but the AI assistant era has dramatically accelerated its scope. Employees readily adopt AI tools, but security teams may not review them for months, if ever. Most of those tools request permissions far exceeding what they need to do their job, and most organizations have no inventory of which tools are authorized, what they can access, or how to revoke them quickly when something goes wrong.

A few controls would have materially limited the damage here:

- OAuth scope minimization. Requiring tools to request only the permissions they need, rather than broad workspace access, reduces the blast radius of any single compromise.
- Regular OAuth audit and revocation. Periodically reviewing and revoking OAuth grants for unused or unreviewed applications limits persistent attacker footholds (a short revocation sketch follows this list).
- Sensitive variable enforcement. Vercel's own architecture proved valuable here; organizations should treat “mark as sensitive” as the default, not the exception.
- Third-party AI tool vetting. Any tool that connects to Google Workspace, GitHub, or deployment infrastructure should go through the same review as any other vendor with system access.
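
To make the audit-and-revoke control concrete, the Admin SDK Directory API also exposes a tokens.delete call that cuts off a single app's grant for a user. This sketch carries the same domain-wide delegation assumptions as the scanning example earlier; the addresses and client ID are placeholders.

```typescript
// Sketch: revoke one user's OAuth grant to a specific third-party app via the
// Admin SDK Directory API (tokens.delete). All identifiers below are placeholders.
import { google } from "googleapis";

const auth = new google.auth.JWT({
  email: process.env.SA_EMAIL,
  key: process.env.SA_PRIVATE_KEY,
  subject: "admin@your-domain.com",
  scopes: ["https://www.googleapis.com/auth/admin.directory.user.security"],
});
const directory = google.admin({ version: "directory_v1", auth });

// Revoke a single grant, e.g. for an unused or unreviewed application.
await directory.tokens.delete({
  userKey: "employee@your-domain.com",
  clientId: "1234567890-example.apps.googleusercontent.com",
});
```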

The Vercel breach didn’t involve a sophisticated zero-day or an advanced persistent threat. It involved an employee using a legitimate AI tool that became compromised. The sophistication was in the attacker's choice of target, a tool trusted by developers at a company that itself is trusted by hundreds of thousands of development teams.

If your organization uses Vercel, audit your environment variables today and enable sensitive storage for any secret that matters. If your organization uses AI productivity tools connected to corporate accounts, it is worth knowing, right now, exactly what those tools can reach.
