Comment by azemetre

6 days ago

Isn’t this because the LLMs had like a million+ React tutorials/articles/books/repos to train on?

I mean I try to use them for Svelte or Vue and they still recommend React snippets sometimes.

Generally speaking, the "LLMs" I use are always the latest thinking versions of the flagship models (Grok 3/Gemini 2.5/...). GPT-4o (and equivalents) are a mess.

But you're correct: when you use more exotic and/or quite new libraries, the outputs can be of mixed quality. For my current stack (TypeScript, Node, Express, React 19, React Router 7, Drizzle and Tailwind 4), both Grok 3 (the paid one with 100k+ context) and Gemini 2.5 are pretty damn good. But I use them for prototyping, i.e. quickly putting together new stuff, for types, refactorings... I would never trust their output verbatim. (YET.) "Build an app that ..." would be a nightmare, but React-like UI code at a sufficiently granular level is pretty much the best-case scenario for LLMs, since your components should be relatively isolated from the rest of the app and not too big anyway.

I put these in the Gemini 2.5 Pro system prompt and it's golden for Svelte.

https://svelte.dev/docs/llms
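
The setup is basically just fetching the plain-text docs dump that page points to and prepending it to the request. A rough TypeScript sketch; the exact dump filename and how you pass the system prompt depend on your client, so treat those details as assumptions:

    // Sketch: pull Svelte's LLM-oriented docs and use them as a system prompt.
    // The dump filename (llms-full.txt / llms-small.txt) is from memory; check the docs page above.
    const svelteDocs = await fetch("https://svelte.dev/llms-small.txt").then(res => res.text());

    const systemPrompt = [
      "You write Svelte 5 code only. Never emit React/JSX.",
      "Reference documentation follows:",
      svelteDocs,
    ].join("\n\n");

    // Pass systemPrompt as the system instruction in whatever client you use
    // (Gemini API, AI Studio, Cursor rules, etc.).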

  • I do this and it still spits out React snippets regardless, like 40% of the time... I feel like this is fine as long as you're doing something extremely basic, but once you introduce state or animations all these systems death spiral.

I have had no issues with LLMs trying to force a language on me. I tried the whole snake game test with ChatGPT, but instead of using Python I asked it to use the Node.js bindings for raylib, which is rather unusual.

It did it in no time and no complaints.
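
(For reference, the node-raylib package mirrors the C raylib API pretty much 1:1, so the core loop it had to produce looks roughly like this; identifiers are from memory and may be slightly off:)

    // node-raylib exposes the C raylib API nearly 1:1; names below are from memory.
    import * as r from "raylib";

    const CELL = 20;
    r.InitWindow(32 * CELL, 24 * CELL, "snake");
    r.SetTargetFPS(10);

    let snake = [{ x: 5, y: 5 }];
    let dir = { x: 1, y: 0 };

    while (!r.WindowShouldClose()) {
      // steer with the arrow keys
      if (r.IsKeyDown(r.KEY_RIGHT)) dir = { x: 1, y: 0 };
      if (r.IsKeyDown(r.KEY_LEFT)) dir = { x: -1, y: 0 };
      if (r.IsKeyDown(r.KEY_UP)) dir = { x: 0, y: -1 };
      if (r.IsKeyDown(r.KEY_DOWN)) dir = { x: 0, y: 1 };

      // advance: new head in front, drop the tail (food/growth omitted for brevity)
      snake = [{ x: snake[0].x + dir.x, y: snake[0].y + dir.y }, ...snake.slice(0, -1)];

      r.BeginDrawing();
      r.ClearBackground(r.RAYWHITE);
      for (const seg of snake) r.DrawRectangle(seg.x * CELL, seg.y * CELL, CELL, CELL, r.GREEN);
      r.EndDrawing();
    }
    r.CloseWindow();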

  • To be honest, though, it did feel like it generated a higher ratio of useful snippets if I just stuck with the standard library. Things fell apart once I introduced a library.

I use https://visualstudio.microsoft.com/services/intellicode/ in my IDE, which learns from your codebase, so it ends up saving me a ton of time once it's learned my patterns and starts suggesting entire classes hooked up to the correct properties in my EF models.

It lets me keep my own style preferences while still getting the benefit of AI code generation. It bridged the barrier I had with code coming from Claude/ChatGPT/etc., where the style preferences were based on the wider internet's standards. This is probably a preference on the level of tabs vs. spaces, but ¯\_(ツ)_/¯