Comment by nikcub

1 day ago

Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogeneous, and that lack of diversity increases the blast radius of incidents.

[0] https://amplifying.ai/research/claude-code-picks/report

It's interesting how many of the low-effort vibecoded projects I see posted on reddit are on vercel. It's basically the default.

  • Reddit vibecoded LLM posts are kind of fascinating for how homogeneous they are. The number of vibe-coded, half-finished projects posted to common subreddits daily is crazy high.

    It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn in some people if they took 5 minutes to type an announcement post in their own words, but they all have the same LLM style announcement post, too. I wonder if they’re conversing with the LLM and it told them to post it to Reddit for traction?

    • I find that often the developers of these apps don't speak English, but want to target an English-speaking audience. For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.

      4 replies →

    • They are not exclusive to reddit. HN has also been full of vibe submissions of the same nature.

    • It's insane how most of the dev subreddits are filled with slop like this. I've thought the same thing - why can't they even spend 5 minutes to write their own post about their project?

      2 replies →

  • Next, Vercel, and Supabase are basically the foundation of every vibe-coded project, by mere suggestion.

    • If this kind of vulnerability exists at the platform level, imagine how vulnerable all the vibe-coded apps are to this kind of exploit.

      I don't doubt the competence of the Vercel team, actually, and that's the point. Imagine if this happened to a top company that has its pick of the best engineers, at a global scale.

      My experience with modern startups is that they're essentially all vulnerable to hacks. They just don't have the time to actually verify their infra.

      Also, almost all apps are over-engineered. It's impossibly difficult to secure an app with hundreds of thousands of lines of code and 20 or so engineers working on the backend code in parallel.

      Some people ask, "Why didn't they encrypt all this?" That's a naive way to think about it. The platform has to decrypt the tokens at some point in order to use them. The best we can do is store the tokens and roll them over frequently.
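      A minimal sketch of that "store and roll" idea, in stdlib-only Python. The one-hour TTL and the function names are assumptions for illustration, not any platform's actual scheme:

```python
# Hypothetical token store with rotation-by-expiry. The TTL is an
# assumed policy, not any real platform's setting.
import secrets
import time

TOKEN_TTL = 3600            # seconds before a token must be re-issued
_tokens = {}                # token -> (user_id, issued_at)

def issue_token(user_id, now=None):
    """Mint a fresh random token for user_id."""
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(32)
    _tokens[token] = (user_id, now)
    return token

def check_token(token, now=None):
    """Return the owning user_id, or None if unknown or expired."""
    now = time.time() if now is None else now
    entry = _tokens.get(token)
    if entry is None:
        return None
    user_id, issued_at = entry
    if now - issued_at > TOKEN_TTL:
        del _tokens[token]  # expired: force a roll-over
        return None
    return user_id
```

      Rolling tokens like this bounds how long a leaked token stays useful, which is the realistic mitigation described above, rather than pretending the platform could use tokens without ever holding them in plaintext.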

      If you make the authentication system too complex, with too many layers of defense, you create a situation where users will struggle to access their own accounts... And you only get marginal security benefits anyway. Some might argue the complexity creates other kinds of vulnerabilities.

      1 reply →

    • They’re all shit too. All three decided to do custom auth instead of OIDC and it’s a nightmare to integrate with any of them.

      1 reply →

  • I've done a ton of low-effort vibe-coded projects that suit my exact use cases. In many cases, I might do a quick Google search, not find an exact match, or find some bloated adware or subscription-ware and not bother going any further.

    Claude Code can produce exactly what I want, quickly.

    The difference is that I don't really share my projects. People who share them probably haven't realized that code has become cheap, and no one really needs/wants to see them since they can just roll their own.

    • The kind of code, with the kind of quality, that LLMs can output has become cheap. Learning has not, and neither has genuinely well-designed, human-written code. This might be surprising to the majority of users on HN, but once a really good programmer joins your team, someone who is both really good and uses LLMs to speed up the parts they aren't good at, you really learn how far vibe coders are from producing something worth using.

  • There's a push and pull here: TypeScript + React + Vercel are also very amenable to LLM-driven development, due to a mix of how well represented they are in the LLM's training data, how cheap the deployment is, and how quickly you can get the ecosystem going.

  • Another Anthropic revenue stream:

    Protection money from Vercel.

    "Pay us 10% of revenue or we switch to generating Netlify code."

The other day, I was forcing myself to use Claude Code for a new CRUD React app[1], and by default it excreted a pile of Node.js and npm dependencies.

So I told it something like, "don't use anything Node at all", and it immediately rewrote it with a Python backend, volunteering that it was minimizing dependencies in how it did that.
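For a sense of scale, that kind of minimal-dependency Python backend can be sketched with nothing but the standard library. This is an illustrative reconstruction, not the actual generated code; all names and routes are invented:

```python
# Illustrative sketch of a no-framework CRUD backend in stdlib-only
# Python, roughly the shape you get when Node is ruled out entirely.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = {}     # in-memory store: id -> record
NEXT_ID = 1

def create_item(record):
    """Insert a record and return its new id."""
    global NEXT_ID
    item_id = NEXT_ID
    NEXT_ID += 1
    ITEMS[item_id] = record
    return item_id

class CrudHandler(BaseHTTPRequestHandler):
    def do_GET(self):            # list everything
        self._respond(200, list(ITEMS.values()))

    def do_POST(self):           # create from a JSON body
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        self._respond(201, {"id": create_item(record)})

    def _respond(self, status, payload):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve:
#   HTTPServer(("127.0.0.1", 8000), CrudHandler).serve_forever()
```

No npm, no node_modules; the whole supply chain is the Python standard library.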

[1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding

  • > forcing myself to use Claude Code

    You don't have to live like this.

    • Even though I'm a hardcore programmer and software engineer, I still need to at least keep aware of the latest vibe coding stuff, so I know what's good and bad about it.

  • You can tell Claude to use something highly structured like Spring Boot / Java. It's a bit more verbose in code, but the documentation is very good which makes Claude use it well. And the strict nature of Java is nice in keeping Claude on track and finding bugs early.

    I've heard others had similar results with .NET/C#

    • Spring Boot is every bit as much random mystery meat as Vercel or Rails. If you want explicit, use non-Boot Spring, or even no Spring at all.

  • You wanted it to use React but not node? Am I missing something here?

    • You can use React without Node by using a CDN. You can even use JSX if you use Babel in a script tag. It's just inefficient and stupid as hell.

  • My vibe-coded one-off app projects all have, by default, "self-contained single-file static client-side webapp, no build step, no React or other webshit nonsense" in their prompt. For more complex cases, I drop the "single file". Works like a charm.

  • I'm struggling to understand how they bought Bun, yet their own AI models are more fixated on writing Python for everything than even the models of their competitor, who bought the actual Python ecosystem (OpenAI, with uv).

  • > Python

    I once made a Golang multi-person pomodoro app by vibe coding with Gemini 3.1 Pro (on the day it first launched). I asked it to have only one outside dependency, gorilla/websocket, with everything else from the standard library, and then I deployed it to Hugging Face Spaces for free.

    I definitely recommend Golang as a language if you wish to vibe code. Some people recommend Rust, but Golang compiles fast, cross-compiles, is portable, and has a really awesome standard library.

    (Anecdotally, I also feel there's some chance the models are being diluted. This app has become my benchmark test, and other models have performed somewhat worse, or at least not the same. I've been using Hacker News less frequently these past few days, but I was already seeing suspicions like these about Claude and other models on the front page. I don't know enough about Claude Opus 4.7 beyond Simon's comment on it, so it would be cool if someone could give me the gist of what has been happening over the past few days.)

  • It emits Actix and Axum extremely well, with solid support for fully compile-time type-checked SQLx.

    Switch to vibe coding Rust backends and freeze your supply chain.

    Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.

    Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" was (1) never true and (2) not even applicable with tools like Claude Code.

    Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.

    Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.

    • How are you getting low dependencies for Web backend with Rust? (All my manually-written Rust programs that use crates at all end up pulling in a large pile of transitive dependencies.)

    • Ok I mean this is a little crazy, "minimal dependencies" and Rust? Brother I need dependencies to write async traits without tearing my hair out.

      But you're also correct in that Rust is actually possible to write in a more high-level way, especially for the web, where you have very little shared state, and the state that is shared can just be wrapped in Arc<> and put in the web framework's context. It's actually dead easy to spin up web services in Rust, and there's a great set of ORMs if that's your vibe too. Rust is expressive enough to make schema-as-code work well.

      On the dependencies: if you're concerned about the possibility of future supply-chain attacks (because Rust doesn't have a history like Node's), you can vendor your deps and bypass future problems. `cargo vendor` and you're done. Node has no such ergonomic path to vendoring, which imo is a better solution than anything else besides maybe Go (another great option for web services!). Saying "don't use deps" doesn't work for any language other than something like Go (and there you can run `go mod vendor` as well).

      But yeah, in today's economy where compute and especially memory is becoming more constrained thanks to AI, I really like the peace of mind knowing my unoptimised high level Rust web services run with minimal memory and compute requirements, and further optimisation doesn't require a rewrite to a different language.

      Idk mate, I used to be a big Rust hater, but once I gave the language a serious try I found it more pleasant to write than both TypeScript and Go. And it's very amenable to AI, if that's your vibe(coding), since the static guarantees of the type system make it easier for the AI to generate correct code, and the diagnostic messages allow it to reroute its course during the session.

    • Except with using Rust like this you're using it like C#. You don't get to enjoy the type system to express your invariants.

It's a good point, but I don't think the problem here is Claude. It's how you use it. We need to be guiding developers to not let Claude make decisions for them. It can help guide decisions, but ultimately one must perform the critical thinking to make sure it is the right choice. This is no different than working with any other teammate for that matter.

  • That's not helped by a recent change to their system prompt "acting_vs_clarifying":

    > When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn’t there).

    > When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person’s location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.

    > Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

    In my experience before this change, Claude would stop and give me a few options, and 70% of the time I would give it an unlisted option that was better. It would genuinely identify parts of the spec that were ambiguous and needed to be better defined. With the new change, Claude plows ahead making a stupid decision, and the result is much worse for it.

  • I think most people would agree.

    However it is less clear on how to do this, people mostly take the easiest path.

  • Shouldn’t Claude just refuse to make decisions, then, if it is problematic for it to do so? We’re talking about a trillion dollar company here, not a new grad with stars in their eyes

The thing I can't stop thinking about is that AI is accelerating convergence to the mean (I may be misusing that).

The internet does that but it feels different with this

  • > convergence to the mean

    That's a funny way of saying "race to the bottom."

    > The internet does that but it feels different with this

    How does "the internet do that"? What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?

This is why I'm glad I learned to code before vibe coding. I tell Codex exactly which tools and platforms to use instead of letting it default to whatever is most popular, and I guard my .env and API keys carefully. I still build things page by page or feature by feature instead of attempting to one-shot everything. This should be vibe-coding 101.

That report greatly overstates the tendency to default to Vercel for web, because among its two web projects it mandated that one use Next.js and the other be a React SPA. Obviously those prime Claude toward Vercel. They should've had the second project be a non-React web project, for diversity.

Is that bad? I would think having everyone on the same handful of platforms should make securing them easier (and means those platforms have more budget to do so), and with fewer but bigger incidents there's a safety-of-the-herd aspect: you're unlikely to be the juiciest target on Vercel during a vulnerability window, whereas if the world is scattered across dozens or hundreds of providers, that's less so.

  • When everyone uses the same handful of platforms, then everyone becomes the indirect target and victim of those big incidents. The recent AWS and Cloudflare outages are vivid examples. And then the owners of those platforms target everyone with their enshittification as well to milk more and more money.

Interestingly, a recent conversation [1] between Hank Green and security researcher Sherri Davidoff argued the opposite: more GenAI-generated code targeted at specific audiences should result in a more resilient ecosystem because of greater diversity. That obviously can't work if they all end up using the same 3 frameworks in every application.

[1] https://www.youtube.com/watch?v=V6pgZKVcKpw

  • I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

    It is true that "more diversity in code" probably means less turnkey spray-and-pray compromises, sure. Probably.

    It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?

    • > start building the same generated code with the same vulnerability

      This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback.

      I noticed they were using Supabase by default and pointed out that their database was completely open, with no RLS.

      So I told them not to use Supabase that way, and they asked the AI (various different LLMs) to fix it. One example prompt I saw was: "please remove Supabase because of the insecure data access and make a proper secure way."

      Keep in mind, these people don't have a technical background and do not know what Supabase or Node or Python is. They let the LLM install Docker, install Node, etc., and just hit approve on "Do you want to continue? bash(brew install ..)"

      What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would, like moving the database logic to the server or creating proper API endpoints, it tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.

      The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

      In one example GLM later bolted on a huge "security" regular expression that blocked , admin, updateadmin, ^delete* lol
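      To see why that kind of regex "security" layer fails, here's a hypothetical reconstruction of the pattern; the keyword list and endpoint shape are assumptions, not the actual code. Any raw SQL that avoids the blocklisted words sails straight through:

```python
import re

# Hypothetical blocklist filter of the kind described above, sitting in
# front of a raw /api/query?q=... endpoint. The keyword list is invented.
BLOCKLIST = re.compile(r"\b(drop|delete|admin|update)\b", re.IGNORECASE)

def is_query_allowed(q):
    """Allow the query unless it mentions a blocklisted keyword."""
    return BLOCKLIST.search(q) is None
```

      A query like `SELECT password FROM users WHERE '1'='1'` contains none of the blocked words, so the filter passes it. The actual fix is what a developer would do: move queries server-side behind parameterized endpoints, not pattern-match the SQL.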

      2 replies →

Yes, this is a genuine problem with AI platforms. It does sometimes feel like they're suspiciously over-promoting certain solutions; to the point that it's not in the AI platform's interest.

I know what it's like being on the opposite side of this as I maintain an open source project which I started almost 15 years ago and has over 6k GitHub stars. It's been thoroughly tested and battle-tested over long periods of time at scale with a variety of projects; but even if I try to use exact sentences from the website documentation in my AI prompt (e.g. Claude), my project will not surface! I have to mention my project directly by name and then it starts praising it and its architecture saying that it meets all the specific requirements I had mentioned earlier. Then I ask the AI why it didn't mention my project before if it's such a good fit. Then it hints at number of mentions in its training data.

It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

I feel like some companies have been paying people to upvote/like certain answers in AI-responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.

It's a hard problem to solve. I hope Anthropic finds a solution because they have a great product and it would be a shame for it to devolve into a free advertising tool for select few tech platforms. Their users (myself included) pay them good money and so they have no reason to pander to vested interests other than their own and that of their customers.

  • > It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

    That's literally what "weight" means - not all dependencies have the same %-multiplier to getting mentioned. Some have a larger multiplier and some have a smaller (or none) multiplier. That multiplier is literally a weight.

That's only looking at half of the equation.

That lack of diversity also makes patches more universal, and the surface area more limited.

It's so trivial to seed. LLMs are basically the idiots who have fallen for all the SEO slop on Google. I did some travel planning earlier, and it was telling me all about extra insurance I need and why my normal insurance doesn't cover X or Y (it does, of course).

That's the irony of Mythos. It doesn't need to exist. LLM vibe slop has already eroded the security of your average site.

  • Self fulfilling prophecy: You don't need to secure anything because it doesn't make a difference, as Mythos is not just a delicious Greek beer, but also a super-intelligent system that will penetrate any of your cyber-defenses anyway.

    • In some ways Mythos (like many AI things) can be used as the ultimate accountability sink.

      These libraries/frameworks are not insecure because of bad design and dependency bloat. No! It's because a mythical LLM is so powerful that it's impossible to defend against! There was nothing that could be done.

      1 reply →

  • Conspiracy theory: they intentionally seeded the world with millions of slop PRs and now they’re “catching bugs” with Mythos