
Vercel, Context.ai, and the Case Against Bearer Secrets

On 19 April 2026, Vercel disclosed that an attacker had accessed a subset of internal systems and customer environment variables. The entry point wasn’t a Vercel product vulnerability. It was an OAuth grant.

Here’s the chain, as reported:

  1. A Vercel employee had authorised Context.ai, a third-party AI tool, against their Google Workspace account — a normal OAuth integration with broad scopes.
  2. Context.ai itself was breached.
  3. The attacker used the existing OAuth grant to take over the Vercel employee’s Google account.
  4. From there, they pivoted into Vercel’s internal systems and exfiltrated corporate credentials, ~580 employee PII records, and customer environment variables that weren’t flagged Sensitive.

Vercel’s guidance to affected customers: assume API keys, tokens, database credentials, and signing keys stored as non-Sensitive env vars have been compromised. Rotate them.

This is the part of the story that matters for anyone thinking about credential architecture.

Vercel did a lot of things right in the disclosure: fast IOC publication, law-enforcement engagement, coordination with GitHub/Microsoft/npm/Socket, and a product change making Sensitive the default for new environment variables.

But the cleanup cost — the thing landing on thousands of engineering teams right now — is a direct consequence of what the stolen data was: static, long-lived, extractable bearer secrets sitting in plaintext in a platform database.

Bearer secrets have a defining property. They work from anywhere. An attacker who reads the secret can replay it from their own infrastructure, at their own pace, against any service that accepts it. There’s nothing about an AWS access key, a Stripe secret, a Postgres URI, or a signing key that cares who is holding it.

That’s what makes “rotate everything” the only safe response. The team doing the rotation has no way to prove which specific secrets have been used maliciously and which haven’t, because a stolen bearer secret leaves no fingerprint that distinguishes legitimate use from attacker use.

Why env vars still look like the right answer


Environment variables are a 2010s secret-storage pattern that survived into 2026 for one reason: nothing more ergonomic replaced them for the “my web app needs an API key” case.

The tooling around env vars is genuinely excellent. Vercel’s platform exposes them cleanly at build and runtime, pulls them into local dev, integrates with secret managers, and now defaults to encrypted storage with restricted access. The ergonomics are close to unbeatable.

The problem is architectural, not a tooling gap. Encryption at rest with platform-held keys is a software mitigation. A sufficiently privileged internal account — or a successful intrusion on the platform itself — can still read the plaintext. The Vercel incident is a reminder that this is not a theoretical risk.

And the failure mode compounds. Context.ai → Google Workspace → Vercel → N customers → downstream cloud providers, databases, signing infrastructure, third-party APIs. One OAuth breach exposed the entire upstream credential surface of everyone using the affected platform. The secrets don’t care that they’re on the attacker’s laptop instead of Vercel’s servers.

The alternative pattern — already deployed for production workloads in most clouds — is to stop storing bearer secrets altogether.

There are two flavours of this, and they address different environments.

In the cloud, the root of trust is the platform itself. A Kubernetes pod gets a projected service account token signed by the cluster. A GitHub Actions runner gets a JWT signed by GitHub’s OIDC issuer. The cloud provider trusts the issuer, validates the JWT, and returns short-lived credentials via AWS STS, GCP Workload Identity Federation, or Azure Federated Identity Credentials. No signing key on the workload — the platform vouches for it. Stolen short-lived credentials expire in minutes, and there’s nothing long-lived to steal in the first place.
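
The exchange above can be sketched in a few lines. This is illustrative only: the issuer names, claims, and key are hypothetical, a real issuer signs with an asymmetric key (RS256) published at a JWKS endpoint rather than the HMAC stand-in here, and the validating side is AWS STS or GCP Workload Identity Federation, not a local function. The shape of the flow, though, is exactly this: the platform signs, the cloud verifies and mints minutes-lived credentials.

```python
import base64, hashlib, hmac, json, time

# Hypothetical issuer key, held by the platform only. Real OIDC federation
# uses an asymmetric key pair so verifiers never hold signing material.
ISSUER_KEY = b"platform-issuer-key"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def platform_signs_workload_jwt(claims: dict) -> str:
    """The cluster / CI runner vouches for the workload: no key on the workload."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(ISSUER_KEY, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(mac.digest())}"

def exchange_for_short_lived_creds(token: str, expected_aud: str) -> dict:
    """The cloud's token service: verify the issuer's signature, check the
    audience and expiry, then mint credentials that live for minutes."""
    header, payload, sig = token.split(".")
    mac = hmac.new(ISSUER_KEY, f"{header}.{payload}".encode(), hashlib.sha256)
    if not hmac.compare_digest(b64url(mac.digest()), sig):
        raise PermissionError("issuer did not vouch for this workload")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["aud"] != expected_aud or claims["exp"] < time.time():
        raise PermissionError("wrong audience or expired token")
    # Nothing long-lived is ever issued: stolen creds age out in minutes.
    return {"access_key_id": "ASIA-EXAMPLE", "expires_in": 900}

token = platform_signs_workload_jwt(
    {"sub": "repo:acme/app:ref:refs/heads/main",
     "aud": "sts.example.com",
     "exp": time.time() + 300})
creds = exchange_for_short_lived_creds(token, "sts.example.com")
```

Note what is absent: the workload never holds a long-lived secret, and the minted credentials carry their own expiry, so a stolen copy stops working on its own.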

Off the cloud — developer workstations, on-prem services, edge devices — there’s no platform to vouch. The signing key has to live somewhere. The answer is hardware: TPM 2.0 on Linux and Windows, Secure Enclave on macOS, dedicated HSMs for higher-assurance workloads. The key is generated in the chip, never leaves it, and can’t be exported by malware or disk imaging. The device becomes its own root of trust, and the same federation protocols issue the same short-lived credentials on top.
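
The hardware contract can be modelled as an interface, which is all this sketch claims to be: the key is generated inside the object and no method ever returns it, so callers only ever see signatures. A real Secure Enclave or TPM enforces this boundary in silicon and uses an asymmetric scheme; the class name, the HMAC stand-in, and the message are all illustrative assumptions.

```python
import hashlib, hmac, secrets

class EnclaveKey:
    """Illustrative model of the hardware contract only: key material is
    created inside and never exported. Hardware enforces this in silicon;
    Python obviously cannot, so treat this as an interface sketch."""

    def __init__(self):
        self.__key = secrets.token_bytes(32)  # generated "in the chip"

    def sign(self, message: bytes) -> bytes:
        # Signatures leave the boundary; the key never does.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, sig: bytes) -> bool:
        # Stand-in for verification against an enrolled public key.
        return hmac.compare_digest(self.sign(message), sig)

key = EnclaveKey()
sig = key.sign(b"device-attestation-claim")  # what federation actually consumes
```

The design point is the narrow surface: because only sign() crosses the boundary, malware or disk imaging has nothing to copy, and the same JWT-for-short-lived-credentials exchange works unchanged on top.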

None of this is new. Production services should have been authenticating this way for years.

For the Vercel case, the hardware-bound version of the same architecture would have broken the chain at two points:

  • Employee side. The OAuth session that enabled the initial takeover was itself a long-lived bearer secret on the employee’s device. A hardware-bound session — gated by device attestation and a non-extractable key — couldn’t be silently exfiltrated by compromising a connected SaaS app. The attacker would need the physical device.
  • Customer side. If customer workloads authenticated to AWS/GCP/Stripe/Postgres via federated, short-lived credentials rather than long-lived env-var secrets, the stolen env vars wouldn’t be usable off the issuing workload. There would be nothing to “rotate” because there would be nothing static to steal.

The rotation burn-down that Vercel customers are running this week is the direct cost of the architecture. Hardware-bound, short-lived credentials don’t eliminate incidents — they eliminate the catastrophic replay phase that turns an incident into a cleanup project.

credctl is the developer-workstation version of this pattern. Your Mac’s Secure Enclave generates a non-extractable signing key. credctl auth signs a JWT with that hardware-bound key and exchanges it for short-lived AWS or GCP credentials via OIDC federation. Credentials last one hour. The signing key can’t leave the chip.

It doesn’t solve the whole Vercel problem. Production workloads still need their own identity. Platform vendors still need to tighten their internal OAuth hygiene. SaaS integrations still need scoped, auditable grants.

But it removes one class of long-lived, extractable credential from the developer’s laptop entirely — the class that shows up in the first line of nearly every postmortem: “the attacker exfiltrated credentials from a developer machine.”

The next time a platform has a rough week, the teams running hardware-bound credentials will be reading the postmortem with interest rather than with a panicked grep -r secret.