Comment by theflyinghorse
3 days ago
I keep thinking that LLMs might bring writing code in these lower-level-but-far-better-performing languages back into vogue. Why have Claude generate a Python service when you could write a Rust or C3 service, with the compiler doing a lot of the heavy lifting around memory bugs?
The architecture of my current project is actually a Python/Qt application that is a thin wrapper around an LLM-generated Rust application. I go over almost every line of the LLM-generated Rust myself, but that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn.
> that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn.
I'm currently doing this with golang. It is not that bad of an experience. LLMs do struggle with concurrency, though. My current project has proved to be pretty challenging for LLMs to chew through.
Having worked with Rust for the past couple of years, I can say it is hands down a much better fit for LLMs than Python, thanks to its explicitness and type information. This provides a lot of context for the LLM to incrementally grow the codebase. You still have to watch it, of course. But the experience is very pleasant.
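To make the "explicitness and type information" point concrete, here's a minimal sketch (hypothetical names, not from any project in the thread) of the kind of Rust signature that hands an LLM, or a human reviewer, the full contract without reading the body:

```rust
// The error type enumerates every way this operation can fail.
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

// The signature alone states the input, the output, and the failure modes;
// an LLM extending this codebase gets all of that as context for free.
fn parse_port(raw: &str) -> Result<u16, ParseError> {
    let trimmed = raw.trim();
    if trimmed.is_empty() {
        return Err(ParseError::Empty);
    }
    trimmed.parse::<u16>().map_err(|_| ParseError::NotANumber)
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert_eq!(parse_port(""), Err(ParseError::Empty));
    assert_eq!(parse_port("http"), Err(ParseError::NotANumber));
    println!("ok");
}
```

In a dynamic language the same function would typically return a number, `None`, or raise, and nothing in the signature says which.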
Because there’s more Python on the internet to interpolate from. LLMs are not equally good at all languages.
You can throw Claude at a completely private Rust code base with very specific niche requirements and conventions that are not otherwise common in Rust, and it will demonstrate a remarkably strong ability to explain it and program according to the local idioms. I think your statement is based on liking a popular language, not on evidence.
I find that having a codebase properly scaffolded really, really helps a model implement new features or fix bugs. There's a grey area between greenfield and established that I hit every time I try to take a new project to a more stable state, and I'm still trying to sort out how to get through it.
That’s been my experience. LLMs excel at languages that are popular. JavaScript and Python are two great examples.
I think the same. It seems much more practical to have LLMs code in languages whose compilers provide as many compile-time guardrails as possible (Rust, Haskell?). Ironically, in some ways this applies to humans writing code as well, but there you run into the (IMO very small) problem of having to write a bit more code than with more dynamic languages.
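A minimal sketch of the guardrails in question: Rust's borrow checker rejects whole classes of memory bugs, like use-after-move, at compile time, so generated code with such a bug never even builds. (The example is illustrative, not from anyone's project.)

```rust
fn consume(v: Vec<i32>) -> usize {
    // Taking `v` by value means the caller gives up ownership.
    v.len()
}

fn main() {
    let data = vec![1, 2, 3];
    let total: i32 = data.iter().sum();

    // Passing `data` by value moves it into `consume`...
    let consumed = consume(data);

    // ...so using it afterwards is a compile error, not a runtime crash:
    // println!("{:?}", data); // error[E0382]: use of moved value: `data`

    assert_eq!(total, 6);
    assert_eq!(consumed, 3);
    println!("ok");
}
```

In Python or JavaScript the equivalent aliasing mistake sails through and surfaces, if at all, as a runtime bug the reviewer has to catch.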
It seems cynically fitting that the future we're getting and deserve is one where we've automated the creation of memory bugs with AI.
You still want to be able to easily review the LLM generated code. At least I want to.