Vercel April 2026 security incident

1 day ago (bleepingcomputer.com)

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...

When one OAuth token can compromise dev tools, CI pipelines, secrets, and deployments simultaneously, something architectural has gone wrong. Vercel has had React2Shell (CVSS 10), the middleware bypass (CVSS 9.1), and now this, all within 12 months.

At what point do we start asking questions about the concentration of trust in the web ecosystem?

It's funny that at the engineering level we are continuously grilled in interviews about the single responsibility principle, meanwhile the industry's business model is to undermine the entirety of web standards and consolidate the web stack into a CLI.

  • Coming from a company that makes infrastructure out of a view layer / vDOM library - I think anyone relying on Vercel has only themselves to blame.

    • It's interesting that Next is becoming so popular when LLMs supposedly have a capability to work with all these other frameworks that don't create a dependency on something like Vercel.

  • You have no idea how indifferent security officers can be, even when you point out critical issues. The other day, we flagged that a customer’s database had users with excessive privileges. Their only question: “Can this be exploited from the outside?”

    No, but most breaches today come from compromised internal accounts that are then used to break everything.

    • What's the problem with having a local API connected over HTTP? We're within the enterprise network.

      And that's how I passed for an annoying "PM", with half of program management complaining that I was slowing things down, until six months later the head of risk management told them to get lost.

      1 reply →

    • The answer is Yes, this can be exploited from the outside by taking over dev machines and using their access.

      If you answer No and complain that it’s not taken seriously, it’s at least in part because you didn’t show the risk clearly.

    • The problem with security is that often it's cheaper to deal with the bad outcome than to prevent it. Actually getting security right is very expensive, because it requires virtually every engineer to have some security awareness, and engineers who can be trusted with that tend to be difficult to find. Meanwhile, if you have a security incident, you say "sorry", maybe you pay a small fine, and a month later everyone has already moved on.

      1 reply →

  • JavaScript living only as a built artifact in an s3 bucket makes for a much simpler life.

    • until someone starts a botnet that runs your S3 invoice up to $10k. Pay-per-usage is always a liability.

      It is horrendous that AWS doesn't allow any usage limits.

  • Polite reminder as to why Domain Driven Design is super-important. It makes more sense to spend 80% on DDD initially and then only 20% on the code (80-20 rule) vs the other way round. Or you will end up in a clusterfuck like this.

    • Domain Driven Design is something that I have only come to know with full understanding at my current job and oh my it is useful. It's not a silver bullet, but for complex domains it's a must.

  • The whole hiring system needs to be eradicated. You get grilled by incompetents, who ask one question, never ask back when you provide something that is debatable, they give zero feedback and then you see what kind of errors these "elitist" engineers make. Burn it to the ground.

    • The best hiring systems I've seen were when the actual engineers hiring for their team did the bulk of it. You get a gauge of what you can expect, and so do they.

  • three critical vulns in 12 months is a pattern, not a coincidence. the SRP point is sharp - we interview engineers on isolation principles, then build platforms that are the opposite of that.

Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogeneous, and that lack of diversity is increasing the blast radius of incidents.

[0] https://amplifying.ai/research/claude-code-picks/report

  • It's interesting how many of the low-effort vibecoded projects I see posted on reddit are on vercel. It's basically the default.

    • Reddit vibecoded LLM posts are kind of fascinating for how homogenous they are. The number of vibe coded half-finished projects posted to common subreddits daily is crazy high.

      It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn in some people if they took 5 minutes to type an announcement post in their own words, but they all have the same LLM style announcement post, too. I wonder if they’re conversing with the LLM and it told them to post it to Reddit for traction?

      11 replies →

    • There's a push and pull here: TypeScript + React + Vercel are also very amenable to LLM-driven development, due to a mix of how common examples are in LLM training data, how cheap the deployment is, and how quickly the ecosystem gets going.

    • I've done a ton of low-effort vibe-coded projects that suit my exact use cases. In many cases, I might do a quick Google search, not find an exact match, or find some bloated adware or subscription-ware and not bother going any further.

      Claude Code can produce exactly what I want, quickly.

      The difference is that I don't really share my projects. People who share them probably haven't realized that code has become cheap, and no one really needs/wants to see them since they can just roll their own.

      1 reply →

  • The other day, I was forcing myself to use Claude Code for a new CRUD React app[1], and by default it excreted a pile of Node JS and NPM dependencies.

    So I told it something like, "don't use anything Node at all", and it immediately rewrote it as a Python backend, volunteering that it was minimizing dependencies in how it did that.

    [1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding

    • You can tell Claude to use something highly structured like Spring Boot / Java. It's a bit more verbose in code, but the documentation is very good which makes Claude use it well. And the strict nature of Java is nice in keeping Claude on track and finding bugs early.

      I've heard others had similar results with .NET/C#

      2 replies →

    • My vibe-coded one-off app projects are all, by default, "self-contained single file static client side webapp, no build step, no React or other webshit nonsense" in their prompt. For more complex cases, I drop the "single file". Works like a charm.

    • I'm struggling to understand how they bought Bun, yet their own AI models are more fixated on writing Python for everything than even the models of the competitor who bought the actual Python ecosystem (OpenAI with uv).

    • > Python

      I once made a Go multi-person pomodoro app by vibe coding with Gemini 3.1 Pro (the day it first launched), asked it to have only one outside dependency, gorilla/websocket, with everything else from the standard library, and then deployed it to Hugging Face Spaces for free.

      I definitely recommend Go as a language if you wish to vibe code. Some people recommend Rust, but Go compiles fast, cross-compiles easily, is portable, and is really awesome with its standard library.

      (Anecdotally, I also feel there's some chance the models are being diluted: this app has become my benchmark test, and other models have performed somewhat worse on it, or at least not the same. I've been using Hacker News less frequently lately, but I was already seeing suspicions like these about Claude and other models on the front page. I don't know enough about Claude Opus 4.7; I just read simon's comment on it, so it would be cool if someone could give me the gist of what has been happening the past few days.)

    • It emits Actix and Axum extremely well, with solid support for fully AOT type-checked SQLx.

      Switch to vibe coding Rust backends and freeze your supply chain.

      Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.

      Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" was (1) never true and (2) not even applicable with tools like Claude Code.

      Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.

      Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.

      4 replies →

  • It's a good point, but I don't think the problem here is Claude. It's how you use it. We need to be guiding developers to not let Claude make decisions for them. It can help guide decisions, but ultimately one must perform the critical thinking to make sure it is the right choice. This is no different than working with any other teammate for that matter.

    • That's not helped by a recent change to their system prompt "acting_vs_clarifying":

      > When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn’t there).

      > When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person’s location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.

      > Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

      In my experience before this change, Claude would stop and give me a few options, and 70% of the time I would give it an unlisted option that was better. It genuinely identified parts of the spec that were ambiguous and needed to be better defined. With the new change, Claude plows ahead making a stupid decision, and the result is much worse for it.

    • Shouldn’t Claude just refuse to make decisions, then, if it is problematic for it to do so? We’re talking about a trillion dollar company here, not a new grad with stars in their eyes

      1 reply →

  • The thing I can’t stop thinking about is that Ai is accelerating convergence to the mean (I may be misusing that)

    The internet does that but it feels different with this

    • > convergence to the mean

      That's a funny way of saying "race to the bottom."

      > The internet does that but it feels different with this

      How does "the internet do that?" What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?

      5 replies →

  • This is why I'm glad I learned to code before vibecoding. I tell Codex exactly what tools and platforms to use instead of letting it default to whatever is most popular, and I guard my .env and API keys carefully. I still build things page by page or feature by feature instead of attempting to one-shot everything. This should be vibe-coding 101.

  • That report greatly overstates the tendency to default to Vercel for web, because among its two web projects it mandated that one use Next.js and that the other be a React SPA as well. Obviously those prime Claude toward Vercel. They should've made the second a non-React web project for diversity.

  • Is that bad? I would think having everyone on the same handful of platforms should make securing them easier (and means those platforms have more budget to do so), and with fewer but bigger incidents there's a safety-of-the-herd aspect: you're unlikely to be the juiciest target on Vercel during the vulnerability window, whereas if the world is scattered across dozens or hundreds of providers, that's less so.

    • When everyone uses the same handful of platforms, then everyone becomes the indirect target and victim of those big incidents. The recent AWS and Cloudflare outages are vivid examples. And then the owners of those platforms target everyone with their enshittification as well to milk more and more money.

  • Interestingly, a recent conversation [1] between Hank Green and security researcher Sherri Davidoff argued the opposite: more GenAI-generated code targeted at specific audiences should result in a more resilient ecosystem because of greater diversity. That obviously can't work if they end up using the same 3 frameworks in every application.

    [1] https://www.youtube.com/watch?v=V6pgZKVcKpw

    • I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

      It is true that "more diversity in code" probably means less turnkey spray-and-pray compromises, sure. Probably.

      It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?

      3 replies →

  • Yes, this is a genuine problem with AI platforms. It does sometimes feel like they're suspiciously over-promoting certain solutions; to the point that it's not in the AI platform's interest.

    I know what it's like being on the opposite side of this, as I maintain an open source project that I started almost 15 years ago and that has over 6k GitHub stars. It's been thoroughly battle-tested over long periods of time, at scale, with a variety of projects; but even if I use exact sentences from the website documentation in my AI prompt (e.g. in Claude), my project will not surface! I have to mention my project directly by name, and then it starts praising it and its architecture, saying that it meets all the specific requirements I had mentioned earlier. Then I ask the AI why it didn't mention my project before if it's such a good fit. Then it hints at the number of mentions in its training data.

    It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

    I feel like some companies have been paying people to upvote/like certain answers in AI-responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.

    It's a hard problem to solve. I hope Anthropic finds a solution because they have a great product and it would be a shame for it to devolve into a free advertising tool for select few tech platforms. Their users (myself included) pay them good money and so they have no reason to pander to vested interests other than their own and that of their customers.

    • > It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

      That's literally what "weight" means - not all dependencies have the same %-multiplier to getting mentioned. Some have a larger multiplier and some have a smaller (or none) multiplier. That multiplier is literally a weight.

  • That's only looking at half of the equation.

    That lack of diversity also makes patches more universal, and the surface area more limited.

  • It's so trivial to seed. LLMs are basically the idiots that have fallen for all the SEO slop on Google. Did some travel planning earlier and it was telling me all about extra insurances I need and why my normal insurance doesn't cover X or Y (it does of course).

  • That's the irony of Mythos. It doesn't need to exist. LLM vibe slop has already eroded the security of your average site.

    • Self fulfilling prophecy: You don't need to secure anything because it doesn't make a difference, as Mythos is not just a delicious Greek beer, but also a super-intelligent system that will penetrate any of your cyber-defenses anyway.

      4 replies →

    • Conspiracy theory: they intentionally seeded the world with millions of slop PRs and now they’re “catching bugs” with Mythos

There are 3 main questions here:

1) Vercel rolled out sensitive environment variables on February 1, 2024. Why weren't all existing env vars transitioned to the sensitive type? Why was there any assumption that a secret added as an env var before that date was still OK to be left as "non-sensitive"?

2) How was the Google Workspace account actually compromised? If context.ai was the originating issue, what actually led to the takeover? Were too many access privileges given to the Google Workspace token context.ai had, or was there an actual workstation takeover here?

3) And finally, why did a compromised Google Workspace account lead to someone having access to a bunch of customer projects? Where is the connection? I don't get this.

  • I can't comment about 1, but my read of 2 and 3 is that the chain was something like this:

    1. One or more Vercel employees - likely engineers - granted OAuth access to context.ai. They presumably did this for office-suite-style features, but the OAuth request included a GCP grant for some reason - maybe laziness on context.ai's part, or planned future features? Either way, Google's OAuth flow does little to differentiate "office suite" scopes from "cloud platform" scopes, so this may not have been particularly obvious to those at Vercel

    2. context.ai's AWS account was compromised (unspecified how), and the Google OAuth tokens they had for customer accounts, including those for at least one Vercel employee, were taken

    3. Those OAuth token(s) were used to authenticate to the GCP APIs as those Vercel employees, then allowing access to Vercel's DBs, and therefore access to customer data and secrets

  • Taking this at face value: https://www.infostealers.com/article/breaking-vercel-breach-...

       Context.ai employee searches for Roblox exploits on web
       -> Context.ai support access breached by malware
       -> Vercel privileged employee account who uses Context.ai breached
       -> Vercel customer secrets breached
    

    Tl;dr - insufficient endpoint protection and activity detection at Context.ai (big surprise!) + insufficient privileged account isolation at Vercel
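Both readings of the chain reduce to one policy check: did any single OAuth grant mix office-suite scopes with infrastructure scopes? A minimal sketch of that check; the scope URLs are real Google OAuth scopes, but the risky/benign split below is my own illustrative assumption, not an official taxonomy:

```python
# Sketch: flag Google OAuth grants that mix office-suite scopes with
# infrastructure scopes. The HIGH_RISK set is an illustrative policy
# choice, not Google's own classification.

HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def risky_scopes(granted: set[str]) -> set[str]:
    """Return the subset of granted scopes that reach into infrastructure."""
    return granted & HIGH_RISK_SCOPES

# Example: an "office suite" style grant that quietly bundles GCP access.
grant = {
    "https://www.googleapis.com/auth/drive.readonly",
    "https://www.googleapis.com/auth/calendar.readonly",
    "https://www.googleapis.com/auth/cloud-platform",
}

print(sorted(risky_scopes(grant)))
# → ['https://www.googleapis.com/auth/cloud-platform']
```

A grant that only touches Drive and Calendar would come back empty; the point is that the consent screen shows all of these side by side, so the check has to happen in tooling, not in the user's head.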

They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
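For Workspace admins acting on that IOC: the Admin SDK Directory API exposes per-user OAuth grants (the tokens.list method). A hedged sketch of scanning records shaped like that response for the published client ID; the sample records below are fabricated for illustration:

```python
# Sketch: scan token records (shaped like Admin SDK Directory API
# tokens.list results, which carry clientId/userKey/scopes fields)
# for the IOC client ID from Vercel's bulletin.

IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def affected_users(tokens: list[dict]) -> list[str]:
    """Return the userKeys that granted access to the IOC OAuth app."""
    return [t["userKey"] for t in tokens if t.get("clientId") == IOC_CLIENT_ID]

sample = [
    {"clientId": "some-other-app.apps.googleusercontent.com",
     "userKey": "alice@example.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"clientId": IOC_CLIENT_ID,
     "userKey": "bob@example.com",
     "scopes": ["https://www.googleapis.com/auth/cloud-platform"]},
]

print(affected_users(sample))  # → ['bob@example.com']
```

Any matching user's grant should be revoked and their sessions and credentials treated as compromised.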

  • https://x.com/rauchg/status/2045995362499076169

    > A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.

    > Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.

    > We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.

    > We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

    Still no email blast from Vercel alerting users, which is concerning.

    • > We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

      Blame it on AI ... trust me... it would have never happened if it wasn't for AI.

    • > We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI.

      Reads like the script of a hacker scene in CSI. "Quick, their mainframe is adapting faster than I can hack it. They must have a backdoor using AI gifs. Bleep bleep".

    • > Still no email blast from Vercel alerting users, which is concerning.

      On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.

      But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

      25 replies →

    • Surprising velocity? It appears the hackers had the oauth key for a month.

    • > an AI platform customer called http://Context.ai that he was using

      Hmm? Who is the customer in this relationship? Is Vercel using a service provided by Context.ai which is hosted on Vercel?

    • The production network control plane must be completely isolated from the internet, with a separate computer for each. The design I like best: admins have dedicated admin workstations that only ever connect to the admin network, corporate workstations are separate, and you only ever connect to the internet from ephemeral VMs over RDP or a similar protocol.

  • The actual app name would be good to have. It's understandable that they don't want to throw anyone under the bus, but not revealing which app/service this was just delays people taking action.

  • Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.

    This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.

    Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

    • This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnect them, being taken to the extremes that were never intended.

      We need a different hosting model.

      23 replies →

I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.

Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is that the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?

The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.
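The triage step is mechanical once you can list a project's env vars (e.g. via Vercel's REST API). A sketch under the assumption that each record carries a "type" field and that only the "sensitive" type is stored unreadably, per Vercel's own bulletin; the sample payload and type names here are illustrative:

```python
# Sketch: given env var records shaped like a Vercel project env listing,
# flag everything not stored as "sensitive" as potentially exposed.
# The "type" values ("encrypted", "plain", "sensitive") are assumptions
# based on Vercel's documentation, not verified against the live API.

def needs_rotation(env_vars: list[dict]) -> list[str]:
    """Anything not stored as 'sensitive' should be treated as exposed."""
    return [v["key"] for v in env_vars if v.get("type") != "sensitive"]

sample = [
    {"key": "DATABASE_URL", "type": "encrypted"},
    {"key": "STRIPE_SECRET_KEY", "type": "sensitive"},
    {"key": "NEXT_PUBLIC_API_BASE", "type": "plain"},
]

print(needs_rotation(sample))  # → ['DATABASE_URL', 'NEXT_PUBLIC_API_BASE']
```

Everything on that list gets rotated at the upstream provider first, then re-created in Vercel as sensitive.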

The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.

  • Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?

    • Please remember that this is the same company that couldn't figure out how to authorize 3rd-party middleware and had what should have been a company-ending critical vulnerability.

      Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.

      This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.

      1 reply →

    • I just deleted my account. Their laid-back notice just isn't worth it anymore. I will hold them accountable with my cash. You can get out with me. Let their apologies hit your spam filter. They need to be better prepared to react to the storm of insanity that comes with a breach, or they lose my info (lose it twice, I guess..)

  • Via the incident page:

    > Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.

    https://vercel.com/kb/bulletin/vercel-april-2026-security-in... as of 4:22p ET

  • > The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

    This and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to linode. I was paying render $50+/month. Now I'm paying $3-5.

    I would never use one of those hosting providers again.

  • Completely agreed. At minimum they should be advising secret rotation.

    The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.

  • Yeah, given their insane pricing, I think the expectations can be higher. I know it is impossible to provide a 100% secure system, but if something like this happens, then the communication should at least be better. Don't wait until you have talked to the lawyers; inform your customers first, ideally without the corporate BS speak. Most Vercel customers are probably developers, so they understand that incidents like this can happen; just be transparent about it.

  • Welcome to the show.

    While a different kind of incident (in hindsight), the other week Webflow had a serious operational incident.

    Sites across the globe went down (no clue if all or just a part of them). They posted plenty of messages, I think for about 12 hours, but mostly with the same content: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about the actual problem or impact.

    Only the next day did somebody write about what happened. Essentially, a database ran out of storage space. How that became a single point of failure for at least plenty of customers: no clue. Sounds like bad architecture to me, though. But what personally rubbed me the wrong way most of all was the insistence that their "dashboard" hadn't indicated anything wrong with their database deployment, as it had allegedly misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.

    Either that upstream provider didn't provide a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing the provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service versus something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess who was to blame for the outage.

    I have a feeling we probably aren't the only customer they lost over this. Which, in our case, probably wouldn't have happened if they had communicated things differently. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure at a company the size of Webflow, though.

    /anecdote

The real story isn't Vercel. It's that a Context.ai employee got infostealer'd in February and four months later that single compromise propagated through an 'Allow All' Google Workspace OAuth grant into Vercel's env vars. This is less a Vercel incident and more the chronic OAuth-supply-chain problem finally surfacing somewhere visible.

  • Where did you see that a Context employee had credentials stolen in February? I haven't run into that particular data point.

  • How do you go from a Google Workspace to production env vars without Vercel doing something wrong?

> Vercel did not specify which of its systems were compromised

I’m no security engineer, but this is flatly unacceptable, right? This feels like Vercel is covering its own ass in favor of helping its customers understand the impact of this incident.

  • I dunno. If I worked at GitHub and said "obscure subsystem X" has been breached, it's no more useful than the level of specificity Vercel has already given ("some customer environments have been compromised").


I'm on a macbook pro, Google Chrome 147.0.7727.56.

Clicking the Vercel logo at the top left of the page hard crashes my Chrome app. Like, immediate crash.

What an interesting bug.

  • Huh, curiously: I'm on Arch Linux, and the crash happens in Google Chrome (147.0.7727.101) for me too, but not in Firefox (149.0.2), nor even in Chromium (147.0.7727.101).

    I find it funny that we're all reading a story about how Vercel is likely compromised somehow, someone managed to reproduce a crash on their webpage, and now we all give it a try. Surely this could never backfire :)

  • Sadly I couldn't make Chrome crash here. Would be fun.

    Chrome Version 147.0.7727.101 (Official Build) (64-bit). Windows 11 Pro.

    Video: https://imgur.com/a/pq6P4si

    I use uBlock Origin Lite. Maybe it blocks some crash causing script? edit: still no crash when I disabled UBO.

  • Same thing here, 147.0.7727.101, M3 Macbook Air. Immediate crash of all open profile windows, so not even a tab-level crash.

  • Reminds me of a circa-2021 Chromium bug where opening the dropdown menu on GitHub would crash the entire system on Linux. At some point, it got fixed.

  • Same with Chrome on Windows 11. I opened the Vercel home page via the URL once, after which it stopped crashing when clicking on the logo.

  • I'm running 147.0.7727.57 and this doesn't happen. Macbook Air M5. VERY interesting.

  • MBP - M4 Max - Chrome 146.0.7680.178.

    No crash.

    Now I don't want to click that "Finish update" button.

    • if it does so happen that the crash originates from a browser exploit, you should expect to be more at risk due to the absence of a crash on an older version, not less

Am I reading this[1] correctly that they basically had that "compromised OAuth token" for a month now and it was only detected now when the attackers posted about it in a forum?

[1] https://context.ai/security-update

  • > Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.

    This was an interesting tidbit too. If true, this means that Vercel’s IT/Infosec maybe didn’t bother enabling the allowlist and request/review features for OAuth apps in their Google Workspace.

    On top of that, they almost certainly didn't enable the scope limits for unchecked OAuth apps (e.g. limiting them to sign-on/basic profile scopes).

An email from Vercel came to my company at 10:47am UTC. It contained little information, and said:

> At this time, we do not have reason to believe that your Vercel credentials or personal data have been compromised.

Which is not very reassuring without actual information, since presumably they would have said the same thing on Saturday, if asked.

Neon, the Vercel-recommended database storage integration, doesn't use the sensitive option for the environment variables it manages, including the database connection string/password. These need to be rotated, then deleted and manually re-created as sensitive.

Related:

> I have reason to believe this is credible.

https://x.com/theo/status/2045870216555499636

> Env vars marked as sensitive are safe. Ones NOT marked as sensitive should be rolled out of precaution

https://x.com/theo/status/2045871215705747965

> Everything I know about this hack suggests it could happen to any host

  • Who is this “theo” person and why are multiple people quoting him? He seems to have little to say that’s substantive at this point.

    • He’s a tech influencer, probably getting quoted here because he has the biggest reach of people covering this so far.

    • He’s a streamer who talks about tech. Previously had a sponsorship relationship with Vercel so is theoretically more well connected than average on the topic. He’s also very divisive because he does a lot of ragebait, grievance reporting, and contrarian takes but famously has blind spots for a few companies and technologies that he’s favored in past videos or been sponsored by. I have friends who watch a lot of his videos but I’ve never been able to get into it.

  • > Ones NOT marked as sensitive should be rolled out of precaution

    If it's not marked as sensitive (because it is not sensitive), there is no reason to roll it. If you must roll a non-sensitive env var, it should've been marked sensitive in the first place, no?

    • There's a difference between sensitive, private and public. If public (i.e. NEXT_PUBLIC_) then yeah likely not a reason to roll. Private keys that aren't explicitly sensitive probably are still sensitive. It doesn't seem to be the default to have things "sensitive" and I can't tell if that's a new classification or has always been there.

      I can imagine reasons why an env variable would be sensitive but still need to be re-read at some point. But overwhelmingly it makes sense for the default to be set-once, never-read-again (cf. Fly env values, GCP Secret Manager, etc.).
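The three-way split discussed above (public / private / sensitive) can be mechanized as a first-pass triage after an incident: anything prefixed `NEXT_PUBLIC_` was compiled into the browser bundle and was never secret, and everything else should be assumed readable and rotated. A hedged sketch; the prefix convention is Next.js's, while the bucket names are this thread's, not Vercel's:

```python
# Triage env var names after a potential dashboard/API compromise.
# NEXT_PUBLIC_* values ship to the client and were never secret;
# everything else is treated as exposed and queued for rotation.

def triage_env(names):
    """Split env var names into already-public vs rotate-now buckets."""
    public, rotate = [], []
    for name in names:
        (public if name.startswith("NEXT_PUBLIC_") else rotate).append(name)
    return {"already_public": public, "rotate": rotate}

env = ["NEXT_PUBLIC_API_URL", "DATABASE_URL", "STRIPE_SECRET_KEY", "NEXT_PUBLIC_GA_ID"]
print(triage_env(env))
```

This deliberately errs on the side of rotating: a private-but-not-marked-sensitive value lands in the rotate bucket, matching the "roll out of precaution" advice.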

  • Vercel, a deployment shell script turned billion dollar company, turned global liability. A story older than time.

    Context AI published a statement https://context.ai/security-update

    > Last month, we identified and stopped a security incident involving unauthorized access to our AWS environment.

    > Today, based on information provided by Vercel and some additional internal investigation, we learned that, during the incident last month, the unauthorized actor also likely compromised OAuth tokens for some of our consumer users.

    Is this one of those situations where _a lot_ of customers are affected and the “subset” are just the bigger ones they can’t afford to lose?

    • Conjecture, but the wording "limited subset" rarely turns out to be good news. Usually a provider will say "less than 1% of our users" or some specific number when they can to ease concerns. My guess is they don't have the visibility or they don't like the number.

      I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.

      • “Less than 1% of our users” means 10k affected users if you have 1 million users. 10k victims is a lot! Imagine “air travel is safe, only a subset of 1% of travellers die”

    Incidents like this are a good reminder of how concentrated our single points of failure have become in the modern web ecosystem. I appreciate the transparency in their disclosure so far, but it definitely makes you re-evaluate the risk profile of leaning entirely on fully managed PaaS solutions.

    The lack of details makes me wonder how large this "subset" of users really is

    • I remember working support and being told "always say 'subset' unless you absolutely know it's exactly 100% of customers" lol

      • Same, there was always very specific wording we had to use unless legal approved an exact number or scope.

    • The lack of details itself is telling enough. Whatever comes out will be no doubt PR sanitised and some bigger clumps of truth won't make it through the PR process.

    This is why I moved my video streaming app (strimoza.com) to signed URLs with short expiry times for every single request. Extra complexity but at least if something leaks, the damage is contained. Curious how many people actually audit their CDN token policies before an incident forces them to.
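The signed-URL approach mentioned above usually amounts to a keyed HMAC over the path plus an expiry timestamp, which the CDN or origin recomputes and compares. A minimal self-contained sketch; the parameter names and layout are illustrative, not strimoza's or any particular CDN's:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"rotate-me-regularly"  # shared between signer and verifier

def sign_url(path, ttl_seconds, now=None):
    """Append an expiry timestamp and an HMAC-SHA256 signature to a path."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify_url(path, expires, sig, now=None):
    """Reject expired or tampered URLs; compare_digest avoids timing leaks."""
    now = int(now if now is not None else time.time())
    if now > int(expires):
        return False
    expected = hmac.new(SECRET, f"{path}:{int(expires)}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = sign_url("/videos/ep1.m3u8", ttl_seconds=60, now=1_000_000)
# Crude query parse; fine here since both values are URL-safe:
params = dict(kv.split("=", 1) for kv in url.split("?", 1)[1].split("&"))
print(url)
```

The containment property the commenter describes falls out of the expiry: a leaked URL is only useful until `expires`, and a leaked signing key can be rotated without touching stored content.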

    Use a VPS; nowadays, with the help of AI, it's a lot easier to set everything up. You don't need Vercel at all, and of course it's way cheaper.

    This announcement in its current form is quite useless and not actionable. At least people won't be able to say "why didn't you say something sooner?" They said _something_.

    Wow, maybe Cloudflare can help them secure their systems? I hear they have a pretty good WAF.

    So, the Vercel post says a number of customers were impacted, but not everyone, and they will contact the people that were impacted. I wasn't contacted so does that mean I'm safe?

    What is the rationale for using Vercel? I'm getting a lot of value out of Cloudflare with the $5/month plan lately, but my bare metal box with triple-digit RAM has seen zero downtime since 2015.

    • They put a massive amount of VC cash into convincing people that Next.js was "the modern way" to create a website. Then they got lucky with the timing of LLMs becoming popular while they were the hot thing, leading LLMs to default to it when creating new websites. To picture that amount of VC cash - they're at Series F, and a huge chunk of that went towards marketing.

      Both have been changing as people realize it's rarely the right tool for the job, and as LLMs also become more intelligent and better at suggesting other, better options depending on what is asked for (especially Claude Opus).

      • I really want this to be true. nextjs is a nightmare. I'm eternally disgruntled.

        nextjs is also powerful due to AI. But the value is a robust interactive front-end, easily iterated, with maybe SSR backing, nothing specific to nextjs (it's routing semantics + React).

        So much complexity has gone into SSR. I hate 5MB client runtime just to read text as much as anyone, but not if the tradeoff is isomorphic env with magic file first-line incantations.

        1 reply →

      • > To picture that amount of VC cash - they're at Series F, and a huge chunk of that went towards marketing.

        I guess they should have put some of that marketing money into hiring someone to manage the security of their systems. It's pretty telling that they had to hire an "incident response provider" just to figure out what happened and clean up after the hack. If you treat security like something you don't have to worry about until after you've been hacked you're probably going to get hacked.

        1 reply →

      • I don’t think they “got lucky”. nextjs is an old project now, and for a long time it was the simplest framework to run a React website.

        This is why most open source landing pages used Next.js, and if most FOSS landing pages use it, then most LLMs have been trained on it, which means LLMs are more familiar with that framework and choose it.

        There must be a term for this kind of LLM driven adoption flywheel…

        2 replies →

      • > They put a massive amount of VC cash into convincing people that Next.js was "the modern way" to create a website

        My impression is Next started becoming popular mostly as a reaction against create-react-app.

      • So glad I decided to just stick with django/htmx on my project a few years ago. I invested a little time into nextjs and came to the conclusion that this can't be the way.

    • You use a free template that's done in Next.js and uses its Image component, so you need a server.

      Everything runs fine locally until you try to deploy it, and bam you need 4g ram machine to run the thing.

      So you host it on Vercel for free cause it's easy!

      Then you want to check for more than 30 seconds of analytics, and it's pay time.

      • I am not following the logic. If you’re a hobbyist, sure.

        But the argument is if you’re using Vercel for production, you’re paying 5-10x what you’d pay for a VM, with 4gb.

        So then what’s the rationale? You can’t be a hobbyist but also “it’s pay time” for production?

        3 replies →

    • Very nice developer experience. A lot of batteries included, like CDN, incremental page regeneration, image pipeline or observability. Not having to maintain a server.

      I’m still planning to move elsewhere though, the vendor lock-in is not worth it and I’d like to keep our infra in the EU.

    • I haven't used Cloudflare and am the first to shit on Vercel. But I have to say, some aspects of their hosting are nice. In many ways it really is just a terminal command and up it goes with good tooling around it. For example, the PR previews take zero setup and just work. Managing your projects is easy, it's all nicely designed, it integrates well with Next and some other frontend-heavy systems and so on.

      • Render is really good at this too. I specifically chose "not Vercel" when looking into hosting. Though I haven't tried both to compare: render has been a pleasure, just works, and auto deploys per branch also available.

    • For many people Vercel is Easy (not simple)

      Knowing how to operate a basic server is perceived as hard and dangerous by many, especially the generation that didn’t have a chance to play with Linux for fun when growing up

      • Great point about playing with Linux growing up.

        I always feel like I'm doing something wrong running bare metal, at least by modern advice, but it's low-latency, simple, and reliable. Probably because I've been using Linux since Slackware in the '90s, it's second nature to me. And now with the CLI-based coding tools, I have a co-sysadmin to help me keep things tidy and secure. It's great and I highly recommend more people try it.

    • It's free for newbies and everyone else; of course it's a trap, but the freemium model gets people. AWS can easily cost a few thousand with 2-3 mistakes and clicks. Vercel lets you start free, then if you grow they bill you 10x-100x AWS.

      • I dunno, I put a lot of traffic through Vercel, maybe 100k visitors per day, and it was under a few hundred a month. I think a couple of EC2 instances behind a load balancer would cost about the same or more. I was under the impression that it's still a VC-subsidized service.

        They regularly try to get me to join an enterprise plan but no service cutoff threats yet.

        1 reply →

    • For a lot of folks, I think its ease of deployment when using Next.js. I switched to astro, also doing a lot of cloudflare at the moment. Before that, I was doing OpenNext with sst.dev on AWS but it started feeling annoying.

    • If you are using Next.js it is easier, because Vercel has done a lot of things to make it a pain to host outside of Vercel.

      • Next.js requires what, exactly? Running a Node.js server? I mean, yes, it takes a bit more time to set up than a one-command deploy to Vercel. But in 2026, even this setup overhead can be cut down to minutes by telling your favorite LLM agent to SSH into your server and set it up for you.

      • Do you have any examples? I'm not that acquainted with the pains of deploying Next apps, though I've heard that argument being used.

    • Out of curiosity what are you using cloudflare for that it costs $5 and who do you use for the baremetal box?

    • I suppose their market is one click deployments. Maybe for non technical people or people not willing to deal with infra.

    • There really isn't any if you are running a serious product.

      They have a free tier plan for non-commercial usage and a very very good UX for just deploying your website.

      Many companies start using Vercel for the convenience and, as they grow, they continue paying for it because migrating to a cheaper provider is inconvenient.

    • Develop experience. Ephemeral deploys. Decent observability. Decent CI options. Generous free tier.

    • I started using it a few years ago when I moved to my current company, and have to say I've learned to like it quite a bit. Moving to Cloudflare is an option, but currently it just works so we can't be bothered. Costs are not nothing, but basically no issues with it until now, and it's not so expensive that it raises eyebrows with the biggest being that we have 3 seats. The setup is quick and again it just works. We are a very small team, and the fact we don't have to deal with it on a daily/weekly basis is valuable. Obviously this current situation is a problem, but I am not sure which platform is free of issues like these. People act like it can't happen to me, until it does.

    • 0.82% of homes are burglarized every year.

      Meaning since 2015, you’ve got an 8.2% chance of having someone walk out with that box. Hopefully there’s nothing precious on it.
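Nit on the arithmetic: multiplying a 0.82% annual rate by ten years slightly overstates the risk, since the probability of at least one burglary over independent years compounds as 1 - (1 - p)^n. A quick check:

```python
# Probability of at least one burglary over n independent years at
# annual rate p (the parent comment multiplies linearly instead).
p, n = 0.0082, 10
at_least_once = 1 - (1 - p) ** n
print(f"{at_least_once:.1%}")  # ~7.9%, vs 8.2% from p * n
```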

    Not very familiar with Vercel. I discovered them only recently, when a business my brother is a customer of fell victim to a phishing attack. The "Login to Microsoft" page hosted on Vercel was still online many days later when I heard of the case.

    We run on Vercel and I wonder if / how long before we're alerted about a leak. Quick look online suggests environment variables marked as sensitive are ok, but to which extent I wonder.

    We proactively rotated keys. Even if you haven’t received an official email, expect customers to inquire about this tomorrow morning.

    Porter also had a breach recently. I assume it is as tightly scoped as they say to not have publicized it.

    > incident response provider

    So they use a third party for incident management? They are de-risking by spending more, which is a lose-lose for the customers.

    • It's very typical to have a retainer / insurance to bring in "emergency" incident responders beyond your existing team. Not saying that's the case here but it wouldn't be surprising.

    Ahhh...another product I'm boycotting, and now doubly glad I'm boycotting.

    Hmmm, the dashboard 404 I got 6 hours ago now makes a bit more sense..

    This is why you pay a real provider for serious business needs, not an AWS reseller. Next.js is a fundamentally insecure framework, as server components are an anti-pattern full of magic leading to stuff like the below. Given their standards for framework security, it's not hard to believe their business' control plane is just as insecure (and probably built using the same insecure framework).

    Next.js is the new PHP, but worse, since unlike PHP you don't really know what's server side and what's client side anymore. It's all just commingled and handled magically.

    https://aws.amazon.com/security/security-bulletins/rss/aws-2...

    • > Next.js is the new PHP, but worse, since unlike PHP you don't really know what's server side and what's client side anymore. It's all just commingled and handled magically.

      It wasn't unheard of back in the day to leak things via PHP templates, like serializing the whole user object, private details included, into a Twig template or whatever; it just happened the other way around, kind of. This was before "fat frontend, thin backend" was the prevalent architecture; many built their "frontends" from templates with just sprinkles of JavaScript back then.

    • People say "Next.js is the new PHP" because it's the most popular and prominent tooling out there, and so by sheer number of available targets it's the one that comes up the most when things go wrong like this.

      But there are more people trying to secure this framework and the underlying tools than there would be on some obscure framework or something the average company built themselves.

      Also "pay a real provider", what does that mean? Are you again implying that the average company should be responsible for _more_ of their own security in their hosting stack, not less?

      Most companies have _zero_ security engineers.. Using a vertically-integrated hosting company like Vercel (or other similar companies, perhaps with different tech stacks - this opinion has nothing to do with Next or Node) is very likely their best and most secure option based on what they are able to invest in that area.

    • Next.js is the polar opposite of PHP, in a way.

      PHP was so simple and easy to understand that anyone with a text editor and some cheap shared hosting could pick it up, but also low level enough that almost nothing was magically done for you. The result was many inexperienced developers making really basic mistakes while implementing essential features that we now take for granted.

      Frameworks like Next.js take the complete opposite approach, they are insanely complex but hide that complexity behind layers and layers of magic, actively discouraging developers from looking behind the curtain, and the result is that even experienced developers end up shooting themselves in the foot by using the magical incantations wrong.

      • Totally agree. Nextjs is a vehicle to sell their PaaS, every other feature is a coincidence.

        What’s worse is vercel corrupted the react devs and convinced them that RSC was a good idea. It’s not like react was strictly in good hands at Facebook but at least the team there were good shepherds and trying to foster the ecosystem.

      • PHP had plenty of magic and footguns, magic_quotes, register_globals, mysql_real_escape_string, errors with stacktraces leaking into the HTML output by default, and these are just from the top of my head.

    • The new PHP? PHP is the same PHP and it's still running 80% of the web to the point that even Reuters, NASA, White House are on it.

    Finally got an email from Vercel saying that my account probably isn't compromised.

    7:57 AM Monday, April 20, 2026 Coordinated Universal Time (UTC)

    Looks like their rampant vibe coding is starting to catch up with them. Expect to see many more vulns like this in the future.

    Is the calculus breaking for these cloud providers? They are vibe coding at unsustainable speeds and shit is just breaking left and right.

    Has anyone made the move to self hosting on their own servers again?

    Why does anyone running a third party tool have access to all of their clients’ accounts? I can’t imagine something this stupid happening with a real service provider.

    I see Vercel is hosted on AWS? Are they hosting everyone on a single AWS account with no tenant isolation? Something this dumb could never happen on a real AWS account. Yes, I know the internal controls that AWS has (former employee).

    Anyone who is hosting a real business on Vercel should have known better.

    I have used v0 to build a few admin sites. But I downloaded the artifacts, put in a Docker container and hosted everything in Lambda myself where I controlled the tenant isolation via separate AWS accounts, secrets in Secret Manager and tightly scoped IAM roles, etc.

    • Is the AWS security boundary the AWS account? Are you expecting Vercel to provision and manage an AWS account per user? That doesn't make any sense, man, though I guess it does if you're a former AWS employee.

      • Yes the security boundary is the AWS account.

        It doesn’t make sense that a random employee mistakenly using a third-party app can compromise all of its users; it’s a poor security architecture.

        It’s about as insecure as having one Apache server serving multiple customers’ accounts. No one who is concerned about security should ever use Vercel.

        6 replies →

    There is no serious reason to use Vercel, other than for those locked into the Next.js ecosystem and demo projects.

    • I recently got hit by a car on my bike. While I was starting the claim filing process the web portal for ICBC (British Columbia insurance) was acting a little funky / stalling / and then gave me a weird access error. Down at the bottom of the error page was a little grey underlined link that said “vercel”.

      I’m not exactly surprised, but it seems like the unserious, ill-informed and lazy are taking over. There is absolutely zero reason why a large, essential public service should be overspending and running on an unnecessary managed service like vercel… yet, here we are.

    Another win for self-hosters, I host my own vercel (coolify) and it works well, all under my control and only expose what I want.

    [flagged]

    • How does that work? When you add an OAuth app, the resulting tokens are specific to that app and a certain set of permissions.

      It's not a new attack vector as in giving too many scopes (beyond the usual "get personal details").

      I am curious how this external OAuth app managed to move through the systems laterally.

    • I'm not super savvy with OAuth, but shouldn't scopes prevent issues like this?

      https://oauth.net/2/scope/

      • From what I understood at [1], Context.ai users "enable AI agents to perform actions across their external applications, facilitated via another 3rd-party service." I.e., it's designed to get someone's OAuth token and use it. Unless that is done really carefully, the risks are as high as the user's authorization goes. The danger doesn't only come from leaks, but also from agents, that can clear your db or directory at a whim.

        [1] https://context.ai/security-update

        1 reply →

      • They can mitigate it, if the user refuses to OAuth into something that asks for too much scope. Most users just click "accept" (this claim based on no data at all).

        2 replies →
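Scopes only help if somebody actually enforces them. On the resource-server side the check is just set containment between the scopes granted on a token and the scopes an endpoint requires; a minimal sketch, where the endpoint names and scope strings are made up for illustration:

```python
# Sketch: enforce OAuth scopes on the resource-server side.
# Scope strings and endpoints are illustrative, not any real API's.

REQUIRED_SCOPES = {
    "GET /profile": {"profile.read"},
    "GET /repos": {"repos.read"},
    "POST /deploy": {"deploy.write", "repos.read"},
}

def is_authorized(endpoint, granted_scopes):
    """A token may call an endpoint only if it holds every required scope."""
    required = REQUIRED_SCOPES.get(endpoint)
    if required is None:
        return False  # fail closed on unknown endpoints
    return required <= set(granted_scopes)

# A token that only asked for sign-in scopes can't deploy:
print(is_authorized("POST /deploy", ["profile.read"]))   # False
print(is_authorized("GET /profile", ["profile.read"]))   # True
```

The incident pattern in the thread, broad grants on a single app, is exactly what this kind of per-endpoint check is meant to contain: even a stolen token can only do what its scopes permit.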

    • Good point. We think of these OAuth logins as safe, yet they may be the exact opposite, because it's more like logging in with your master password. I think OAuth providers like Microsoft and Google need to start mandating 2FA for every company login; it's just too dangerous otherwise.

    • I remember implementing OAuth2 for my platform months ago and I was using the username from the provider's platform as the username within my own platform... But this is a big problem because what if a different person creates an account with the same username on a different platform? They could authenticate themselves onto my platform using that other provider to hijack the first person's account!

      Thankfully I patched this issue just before it became a viable exploit because the two platforms I was supporting at the time had different username conventions; Google used email addresses with an @ symbol and GitHub used plain usernames; this naturally prevented the possibility of username hijacking. I discovered this issue as I was upgrading my platform to support universal OAuth; it would have been a major flaw had I not identified this. This sounds similar to the Vercel issue.

      Anyway my fix was to append a unique hash based on the username and platform combination to the end of the username on my platform.

      • You should use the subject identifiers, not the usernames. You store a mapping of provider & subject to internal users yourself.

        But this has been a problem in the past where people would hijack the email and create a new Google account to sign in with Google with.

        Similarly, when someone deletes their account with a provider, someone else can re-register it and your hash will end up the same. The subject identifiers should be unique according to the spec.

        3 replies →
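The advice above can be made concrete: key the identity table on the (issuer, subject) pair from the token, never on a display username or email, since only `sub` is guaranteed stable and unique per issuer by the OIDC spec. A toy sketch of such a linking table (field names and the in-memory store are illustrative; a real system would back this with a database):

```python
# Link external OAuth/OIDC identities to internal accounts by
# (issuer, subject) -- the pair the spec guarantees stable per issuer --
# instead of by username, which another provider can legitimately reuse.

class AccountStore:
    def __init__(self):
        self._by_identity = {}   # (issuer, sub) -> internal user id
        self._next_id = 1

    def login(self, issuer, sub):
        """Return the existing internal id for this identity, or mint one."""
        key = (issuer, sub)
        if key not in self._by_identity:
            self._by_identity[key] = self._next_id
            self._next_id += 1
        return self._by_identity[key]

store = AccountStore()
alice = store.login("https://accounts.google.com", "108293650350006150715")
# The same subject string from a different provider is a different account:
other = store.login("https://github.com", "108293650350006150715")
print(alice, other)
```

This is also why the username-collision exploit described upthread can't occur here: two providers can hand out identical usernames, but never identical (issuer, sub) keys.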

    https://x.com/theo/status/2045871215705747965 - "Everything I know about this hack suggests it could happen to any host"

    He also suggests in another post that Linear and GitHub could also be pwned?

    Either way, hugops to all the SRE/DevOps out there, seems like it's going to be a busy Sunday for many.

    • Based on what, "feels like it"? Claiming that Cloudflare is affected by the same hack has to come from somewhere, but where is that coming from?

    • Ah, Theo with his vast insights and connections into everything. That man gets around, and his content is worth its cost.

      Theo's content boils down to the same boring formula. 1. Whatever buzzword headline is trending at the time. 2. An immediate sponsored ad that is supposed to make you sympathize with Theo because he "vets" his sponsors. 3. A "that totally happened" story in which he is somehow always personally involved. 4. An ad for his t3.chat, how it's the greatest thing in the world, and how he should be paid more for his infinite wisdom. 5. A rag on Claude or OpenAI (whichever is leading at the time). 6. 5-10 minutes of paraphrasing an article without critical thought or analysis of the video topic.

      I used to enjoy his content when he was still in his Ping era, but it's clear he's drunk the YT marketer Kool-Aid. I've moved on; his content gets recommended now and again, but I can't entertain his nonsense anymore.

      • I just wanted to chime in and say I think he is knowledgeable; he's not a con. I know you didn't say that, but people might have the impression he doesn't know what he's talking about. He does know, and I've learned quite a lot from him in the past.

        However, since the LLM Cambrian explosion, he has become very clickbaity, and his content has become shallow. I don't watch his videos anymore.

        2 replies →

      • I don't watch his content, but I felt comfortable posting his link as I believe he's generally considered a reputable guy? His tweets sometimes come up in my for you tab and he seems reasonable and knowledgable generally? Maybe I'm wrong and shouldn't have linked to him as a source.

        3 replies →

    • ”Any host” of what? That’s such a non-descriptive statement and clearly not true at face value.

    • I do remember that OpenAI used Vercel a year ago. They may well have moved off it to something better since.

    • > @theo: "I have reason to believe this is credible. If you are using Vercel, it’s a good idea to roll your secrets and env vars."

      > @ErdalToprak: "And use your own vps or k3s cluster there’s no reason in 2026 to delegate your infra to a middle man except if you’re at AWS level needs"

      > @theo: "This is still a stupid take"

      lol, okay. Thanks for the insight, Theo, whoever you are.

    Much as I want to rip on Vercel, it's clear that AI is going to lead to mass security breaches. The attack surface is so large, and AI agents are working around the clock. This is the new normal. Open source software is going to change; companies won't be running random repos off GitHub anymore.

    • LOL. Attackers will run these agents, but the thousands of maintainers will be too dumb to do anything except sit idly and get hammered with exploits? I wonder what the ratio of attackers to maintainers must be; 1:1000 is a fair assessment, I take it.

      Also, LLMs will apparently be used only to attack; no one will be smart enough to integrate them into CI flows, because everyone is that dumb. No security tools will pop up.

    • Slop coding and makeshift sites being thrown up with abandon at breakneck speeds is going to buy me a lot of minivans.

    • >> ai is going to lead to mass security breaches.

      Let that be the end of Microsoft. I was forced to use their shitty products for years by corporate inertia and their free Teams and Azure licenses: the first-dose-is-free curse.