The Roblox Script That Brought Down Vercel: A Complete Technical Postmortem
How a game cheat download led to the compromise of one of the most important web infrastructure companies in the world.
On April 19, 2026, Vercel — the company behind Next.js and a platform hosting millions of web applications — disclosed a security breach that exposed customer credentials, internal systems, and environment variables. The threat actor ShinyHunters claimed responsibility and listed the stolen data for $2 million.
But this isn't a story about sophisticated zero-days or nation-state actors. It's about a Context.ai employee downloading Roblox cheats, and the cascade of failures that followed.
Let me walk you through exactly what happened.
The Attack Chain: Four Hops to Catastrophe
Hop 1: The Initial Infection (February 2026)
It started with Roblox.
According to Hudson Rock's analysis, a Context.ai employee was browsing for game exploits — specifically “auto-farm” scripts and executors for Roblox. These downloads are notorious malware vectors, and this one delivered Lumma Stealer, an infostealer malware that harvests credentials from browsers, password managers, and system caches.
The infection extracted:
- Google Workspace credentials (including the support@context.ai account)
- Supabase keys and logins
- Datadog credentials
- Authkit authentication tokens
- Browser session cookies and autofill data
The compromised employee wasn't just anyone — they were a core member of Context.ai's Vercel team (“context-inc”), with access to administrative dashboards and environment variable settings.
Key insight: Hudson Rock had this compromised credential data over a month before the Vercel breach went public. A single record in their cybercrime intelligence database — the only Context.ai infection they had on file — pointed directly to this employee.
Hop 2: Context.ai Compromise (March 2026)
Armed with the stolen credentials, attackers gained access to Context.ai's AWS environment. Context.ai detected this intrusion in March and engaged CrowdStrike for forensic investigation. They shut down the compromised AWS infrastructure.
But they missed something critical.
Context.ai operated a consumer product called AI Office Suite — a workspace that let users collaborate with AI agents on documents, presentations, and spreadsheets. A key feature: AI agents could perform actions across users' external applications via OAuth integrations.
During the AWS breach, attackers exfiltrated OAuth tokens for AI Office Suite consumer users. CrowdStrike's initial investigation didn't catch this. Context.ai only learned about the OAuth compromise after Vercel informed them their systems had been breached via a Context.ai token.
What Context.ai's AI Office Suite collected:
- Google Workspace OAuth tokens
- “Allow All” permission grants from users
- Access to perform actions across connected external applications
Hop 3: OAuth Pivot to Vercel's Google Workspace
Here's where it gets interesting.
At least one Vercel employee had signed up for Context.ai's AI Office Suite using their Vercel enterprise Google Workspace account. When prompted for permissions, they clicked “Allow All.”
The attackers used the stolen OAuth token to hijack this employee's Google Workspace session. Vercel's internal OAuth configurations — which should have restricted which third-party apps could access enterprise accounts — apparently allowed these broad permissions to propagate.
The IOC Vercel published:
OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

This OAuth client ID belongs to Context.ai's compromised application. If you're a Google Workspace admin, search for this ID in your API Controls immediately.
Hop 4: Vercel Internal Access
From the compromised Google Workspace account, attackers pivoted into Vercel's internal systems. Browser history logs from the infected Context.ai machine showed the compromised user had accessed:
- vercel.com/context-inc/valinor/settings/environment-variables — where API keys, tokens, and deployment secrets are managed
- vercel.com/context-inc/valinor/settings — project-level management
- vercel.com/context-inc/valinor/logs — production and staging logs
The attackers enumerated environment variables that were not marked as “sensitive.”
Vercel's architecture distinguishes between sensitive and non-sensitive environment variables:
- Sensitive variables: Encrypted at rest, cannot be read even by authenticated users
- Non-sensitive variables: Intended for non-secret configuration, stored in a readable format
The problem? Developers don't always categorize correctly. API keys, database credentials, and tokens often end up in non-sensitive variables because it's the default, or because developers want to debug more easily.
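This miscategorization can be screened for mechanically. Below is a minimal, illustrative Python sketch — the name patterns, entropy threshold, and sample variables are my own assumptions, not Vercel tooling — that flags variables whose names or high-entropy values suggest they belong in the sensitive category:

```python
import math
import re

# Hypothetical heuristic: names that usually indicate secrets. DATABASE is
# included because connection URLs commonly embed passwords.
SECRET_NAME_HINTS = re.compile(
    r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|PRIVATE|SIGNING|DATABASE)",
    re.IGNORECASE,
)

def shannon_entropy(value: str) -> float:
    """Bits of entropy per character; random API keys tend to score high."""
    if not value:
        return 0.0
    freq = {ch: value.count(ch) / len(value) for ch in set(value)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_secret(name: str, value: str) -> bool:
    """True if the name or the value's randomness suggests a secret."""
    return bool(SECRET_NAME_HINTS.search(name)) or (
        len(value) >= 20 and shannon_entropy(value) > 4.0
    )

# Invented example variables, mimicking a project's non-sensitive list:
env = {
    "NEXT_PUBLIC_SITE_NAME": "acme",  # genuinely non-sensitive
    "DATABASE_URL": "postgres://user:hunter2@db.internal/prod",
    "STRIPE_SECRET_KEY": "sk_live_4eC39HqLyjWDarjtT1zdp7dc",
}
flagged = [name for name, value in env.items() if looks_like_secret(name, value)]
print(flagged)  # → ['DATABASE_URL', 'STRIPE_SECRET_KEY']
```

Anything this kind of sweep flags in your non-sensitive list is a candidate for rotation and re-creation as a sensitive variable.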
Vercel's CEO Guillermo Rauch admitted: “Unfortunately, the attacker got further access through their enumeration.”
What Was Stolen
Based on threat actor claims and Vercel's disclosure:
Confirmed
- Environment variables not marked as sensitive (API keys, tokens, database credentials, signing keys)
- A “limited subset” of customer credentials
- Internal system access
Claimed by ShinyHunters
- 580 Vercel employee records (names, Vercel email addresses, account status, activity timestamps)
- Linear data (internal project management)
- NPM tokens (package publishing credentials)
- GitHub tokens (repository access)
- Internal deployment access and API keys
- Screenshot of internal Vercel Enterprise dashboard
The threat actor posted proof on hacking forums and Telegram, offering the full package for $2 million. They also claimed to be in contact with Vercel regarding a ransom.
The Technical Failures
1. OAuth Permission Sprawl
When the Vercel employee granted “Allow All” permissions to Context.ai's OAuth app, Vercel's Google Workspace didn't block it. Enterprise OAuth configurations should restrict which third-party apps can receive broad permissions — especially for apps not on an approved list.
Defense that should have existed:
- OAuth app allowlisting (only pre-approved apps can request permissions)
- Permission scope restrictions (block “Allow All” grants to unknown apps)
- Alerts on new OAuth grants with sensitive scopes
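The three defenses above can be sketched as a single policy function. This is an illustrative model only — the approved app ID is invented, and real enforcement belongs in the identity provider (e.g. Google Workspace API Controls), not application code:

```python
# Invented allowlist; the scopes below are real Google OAuth scopes.
APPROVED_APPS = {"trusted-crm.apps.googleusercontent.com"}
SENSITIVE_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def evaluate_grant(client_id: str, scopes: set[str]) -> str:
    """Return 'allow', 'alert', or 'block' for a new OAuth grant."""
    if client_id not in APPROVED_APPS:
        if scopes & SENSITIVE_SCOPES:
            # Unknown app requesting sensitive scopes: block outright.
            return "block"
        return "alert"  # unknown app, benign scopes: flag for review

    return "allow"

# The compromised Context.ai app (IOC from Vercel's bulletin) asking for
# broad Drive access would never have reached the employee:
decision = evaluate_grant(
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    {"https://www.googleapis.com/auth/drive"},
)
print(decision)  # → block
```

Under a policy like this, the "Allow All" click in Hop 3 becomes a blocked request plus a security alert instead of a breach.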
2. The “Non-Sensitive” Trap
Vercel's sensitive environment variable feature is good security design — secrets encrypted at rest, unreadable even to authenticated users. But it's opt-in.
The default is non-sensitive. Developers setting up projects quickly, or unfamiliar with the feature, will leave secrets in readable variables. The attacker knew this and enumerated everything.
What Vercel has done since:
- Rolled out a new dashboard overview page for environment variables
- Improved UI for sensitive variable creation and management
- Presumably, pushed harder for customers to audit and migrate secrets
3. Delayed Credential Remediation
Hudson Rock flagged this exact infection — the Context.ai employee with Lumma Stealer — in their cybercrime database over a month before the breach. If Context.ai had subscribed to infostealer monitoring, or if the compromised credentials had been revoked immediately, the entire attack chain would have been broken.
This is the tragedy of the incident: it was preventable with faster detection.
4. CrowdStrike's Incomplete Investigation
Context.ai hired CrowdStrike after detecting the March AWS breach. CrowdStrike's investigation led to shutting down the AWS environment and notifying one identified customer.
But they didn't catch the OAuth token exfiltration. Context.ai only learned about it when Vercel came knocking, asking why a Context.ai OAuth token was used to breach their systems.
To be fair, OAuth token theft from a consumer product database might not have been in-scope for an AWS infrastructure investigation. But it's a reminder that forensic investigations are bounded by the questions you think to ask.
The Agentic AI Risk
This breach is a warning shot for the agentic AI era.
Context.ai's AI Office Suite was designed to let AI agents “perform actions across external applications.” That's the promise of agentic AI — autonomous systems that can read your email, update your docs, and take action on your behalf.
But every OAuth grant an AI agent receives is an OAuth grant an attacker can steal. The more permissions we give AI tools, the more valuable their token databases become.
The attack surface for agentic AI:
- Token storage: Where does the AI service store OAuth tokens? How are they encrypted?
- Permission scope: Does the AI request minimal permissions or “Allow All”?
- User awareness: Do users understand what they're granting when they connect external apps?
- Blast radius: If the AI service is compromised, how many connected accounts are exposed?
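The blast-radius question can be made concrete with a toy model. Every service name, organization, and grant below is invented for illustration; the point is that one breached token store exposes every connected org at once:

```python
from collections import defaultdict

# Invented grant records: (ai_service, user_org, scope_breadth)
grants = [
    ("ai-office-suite", "vercel.com", "allow_all"),
    ("ai-office-suite", "acme.io", "allow_all"),
    ("ai-office-suite", "acme.io", "minimal"),
    ("notes-copilot", "vercel.com", "minimal"),
]

def blast_radius(service: str) -> dict[str, int]:
    """Count distinct orgs exposed, per scope breadth, if `service` leaks."""
    exposed = defaultdict(set)
    for svc, org, breadth in grants:
        if svc == service:
            exposed[breadth].add(org)
    return {breadth: len(orgs) for breadth, orgs in exposed.items()}

print(blast_radius("ai-office-suite"))  # → {'allow_all': 2, 'minimal': 1}
```

The "allow_all" bucket is the one that matters: those grants turn a consumer-product breach into an enterprise pivot, which is exactly what happened here.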
Context.ai's consumer product connected to users' Google Workspace accounts with broad permissions. When their infrastructure was breached, every one of those OAuth tokens became a pivot point to the user's organization.
This is the supply chain risk security researchers have been warning about. Now we have a concrete example.
Timeline
| Date | Event |
|---|---|
| February 2026 | Context.ai employee infected with Lumma Stealer via Roblox cheat download |
| February 2026 | Stolen credentials appear in Hudson Rock's cybercrime database |
| March 2026 | Attackers breach Context.ai's AWS environment |
| March 2026 | Context.ai detects breach, engages CrowdStrike, shuts down AWS infrastructure |
| March 2026 | OAuth tokens for AI Office Suite users exfiltrated (undetected) |
| April 2026 | Attackers use stolen OAuth token to access Vercel employee's Google Workspace |
| April 2026 | Attackers pivot into Vercel internal systems, enumerate environment variables |
| April 19, 2026 | Vercel discloses breach, publishes security bulletin |
| April 19, 2026 | ShinyHunters claims responsibility, lists data for $2M |
| April 19, 2026 | Vercel publishes IOC (OAuth App ID), engages Mandiant |
| April 19, 2026 | Context.ai publishes security update acknowledging OAuth token compromise |
Recommendations
If You're a Vercel Customer
- Check if you were contacted. Vercel reached out to the confirmed-affected subset. If you weren't contacted, you're probably fine — but don't assume.
- Audit your environment variables. Pull them locally and check what's not marked sensitive:
  vercel env pull
  Any API keys, tokens, database credentials, or signing keys in non-sensitive variables should be rotated immediately.
- Enable sensitive variable protection. Going forward, mark all secrets as sensitive so they're encrypted and unreadable.
- Review activity logs. Look for suspicious activity in your Vercel dashboard or via CLI:
  vercel activity
- Check recent deployments. Look for anything unexpected. Delete suspicious deployments.
- Rotate Deployment Protection tokens if you use them.
If You're a Google Workspace Admin
- Search for the malicious OAuth app:
- Go to Admin Console → Security → API Controls → Manage Third-Party App Access
- Filter by Client ID:
110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
- If found: Immediately revoke access and begin incident response.
- Audit all OAuth grants with broad permissions. Consider implementing:
- App allowlisting
- Alerts on new OAuth grants
- Periodic access reviews
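If you export token-grant audit events (for example via the Google Admin SDK Reports API with applicationName "token"), a short script can sweep them for the published IOC. The event shape below is simplified and assumed; only the client ID comes from Vercel's bulletin:

```python
# IOC client ID published by Vercel:
IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

# Invented sample events, flattened to a simple dict shape for illustration:
events = [
    {"user": "alice@example.com",
     "client_id": "safe-app.apps.googleusercontent.com",
     "scopes": ["openid"]},
    {"user": "bob@example.com",
     "client_id": IOC_CLIENT_ID,
     "scopes": ["https://www.googleapis.com/auth/drive"]},
]

def find_ioc_grants(events, ioc=IOC_CLIENT_ID):
    """Return users who granted a token to the compromised client ID."""
    return sorted({e["user"] for e in events if e["client_id"] == ioc})

print(find_ioc_grants(events))  # → ['bob@example.com']
```

Any user this turns up should be treated as compromised: revoke the grant, invalidate their sessions, and start incident response.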
If You Use Third-Party AI Tools
- Audit permissions. What have you granted “Allow All” to?
- Prefer minimal scopes. If an AI tool asks for broad permissions, question why.
- Use separate accounts. Don't connect AI tools to enterprise accounts unless IT has approved them.
- Monitor for compromised credentials. Services like Hudson Rock, SpyCloud, and Have I Been Pwned can alert you if your credentials appear in infostealer databases.
The Bigger Picture
A Vercel employee signed up for an AI productivity tool. A Context.ai employee downloaded Roblox cheats. Neither of them intended to cause a breach that would expose customer credentials at one of the most important web infrastructure companies in the world.
But that's how modern supply chain attacks work. The blast radius isn't determined by the initial target — it's determined by the permission graphs, OAuth tokens, and trust relationships that connect thousands of organizations.
Vercel's response has been relatively transparent. They published the IOC quickly, engaged Mandiant, coordinated with Context.ai, and are actively improving their dashboard. Guillermo Rauch acknowledged the “non-sensitive” variable issue publicly.
Context.ai's response is messier. Their March investigation missed critical details. Their consumer product's OAuth architecture created a single point of failure for all connected users. And their security bulletin reads like they're trying to distance their current enterprise product from the compromised consumer offering.
But the real lesson is about the ecosystem. We've built a web of OAuth connections, AI agents with broad permissions, and third-party tools that employees adopt without IT oversight. Every one of those connections is a potential pivot point.
The Roblox script didn't bring down Vercel. The interconnected trust graph did.
Connect with me on LinkedIn to discuss security, AI architecture, and building production systems.