Comment by behnamoh
13 days ago
I wonder if the era of dynamic programming languages is over. Python/JS/Ruby/etc. were good tradeoffs when developer time mattered. But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go (assuming enough training data on the language ofc; LLMs still can't write Gleam/Janet/CommonLisp/etc.).
Esp. with Go's quick compile time, I can see myself using it more and more even in my one-off scripts that would have used Python/Bash otherwise. Plus, I get a binary that I can port to other systems w/o problem.
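To make that concrete, here's a minimal sketch of the kind of one-off script I mean (the URL and file names are placeholders, not a real project); cross-compiling it for another system is just a matter of setting GOOS/GOARCH:

    // fetchcheck.go: the sort of one-off script that would otherwise
    // be Bash or Python. Hits an endpoint and reports the status.
    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        url := "https://example.com/healthz" // placeholder endpoint
        resp, err := http.Get(url)
        if err != nil {
            fmt.Fprintln(os.Stderr, "request failed:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }

    // The "portable binary" part is one env-var tweak per target:
    //   GOOS=linux  GOARCH=amd64 go build -o check-linux fetchcheck.go
    //   GOOS=darwin GOARCH=arm64 go build -o check-macos fetchcheck.go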
Compiled is back?
> But now that most code is written by LLMs
Am I in the Truman show? I don’t think AI has generated even 1% of the code that I run in prod, nor does anyone I respect. Heavily inspired by AI examples, heavily assisted by AI during research sure. Who are these devs that are seeing such great success vibecoding? Vibecoding in prod seems irresponsible at best
It's all over the place depending on the person or domain. If you are building a brand new frontend, you can generate quite a lot. If you are working on an existing backend where reliability and quality are critical, it's easier to just do it yourself. Maybe have the LLM write the unit tests for code you've already verified works.
> Who are these devs that are seeing such great success vibecoding? Vibecoding in prod seems irresponsible at best
AI written code != vibecoding. I think anyone who believes they are the same is truly in trouble of being left behind as AI assisted development continues to take hold. There's plenty of space between "Claude build me Facebook" and "I write all my code by hand"
I was talking to a product manager a couple weeks ago about this. His response: most managers have been vibecoding for a long time. They've just been using engineers instead of LLMs.
This is a really funny perspective
Having done both, right now I prefer vibe coding with good engineers. Way less handholding. For non-technical managers, outside of prototyping, vibe coding produces terrible results.
FAANG here (service-oriented arch, distributed systems), and I'd say probably 20+ percent of the code written on my team is by an LLM. It's great for frontends, and works well for test generation or following an existing paradigm.
I think a lot of people wrote it off initially as it was low quality. But gemini 3 pro or sonnet 4.5 saves me a ton of time at work these days.
Perfect? Absolutely not. Good enough for tons of run of the mill boilerplate tasks? Without question.
> probably 20+ percent of code written on my team is by an LLM. it's great for frontends
Frontend has always been a shitshow since dynamic JS web UIs were invented. Between JS and CSS, no one cares what runs the page or how many MB it takes to show one button.
But on the backend, vibecoding is still rare, and we're lucky that it is; there hasn't been a train crash because of it. Yet.
As someone currently outside FAANG, can you point to where that added productivity is going? Is any of it customer visible?
Looking at the quality crisis at Microsoft, between GitHub reliability and broken Windows updates, I fear LLMs are hurting them.
I totally see how LLMs make you feel more productive, but I don't think I'm seeing end customer visible benefits.
Does "great for frontends" mean considerate A11Y? In the projects I've looked over, that's almost never the case: the A11Y implementation is hardly worthy of being called a prototype, much less production. "Mock-up" seems to be the best label. I'd bet you think that because the surface looks right, the quality runs down to the roots, so you call it good at frontends. This is the problem with LLMs: they don't do the hard work, and they teach people that the hard work they can't do is fine left undone or partially done. The more people "program" like this, the worse the situation gets for real human beings trying to live in a world dominated by software.
Over the last 2 or 3 months we made a commitment as a team to go all-in on claude code, sharing prompts, skills, etc., and documenting all of our projects. At this point, claude is writing a _large_ percentage of our code, probably upwards of 70 or 80%. It's also been updating our jira tickets and github PRs, which is probably even more useful than writing the code.
Our test coverage has improved dramatically, our documentation has gotten better, our pace of development has gone up. There is also a _big_ difference between the quality of the end product between junior and senior devs on the team.
Junior devs tend to be just like "look at this ticket and write the code."
Senior devs are more like: okay, can you read the ticket, try to explain it to me in your own words, let's refine the description, can you propose a solution -- ugh, that's awful, what if we did this instead.
You would think you would not save a lot of time that way, but even spending an _hour_ trying to direct claude to write the code correctly is less than the 5-6 hours it would take to write it yourself for most issues, with more tests and better documentation when you are finished.
When you first start using claude code, it feels like you are spending more time to get worse work out of it, but once you sort of build up the documentation/skills/tools it needs to be successful, it starts to pay dividends. Last week, I didn't open an IDE _once_ and I committed several thousands lines of code across 2 or 3 different internal projects. A lot of that was a major refactor (smaller files, smaller function sizes, making things more DRY) that I had been putting off for months.
Claude itself made a huge list of suggestions, which I knocked back to about 8 or 10. It opened a tracking issue in jira with small, tractable subtasks, then started knocking them out one at a time, each being a fairly reviewable PR with lots of test coverage (the tests had been built out over the previous several months of coding with cursor and claude; we sort of mandated them to keep the tools from breaking functionality), etc.
I had a coworker and chatgpt estimate how long the issue would take without AI. The coworker looked at the code base and said "two weeks". Both claude and chatgpt estimated somewhere in the 6-8 week range (which I thought was a wild overestimate, even without AI). Claude code knocked the whole thing out in 8 hours.
If you work in highly repetitive areas like web programming, I can clearly see why people are using LLMs. If you're in a more niche area, it gets harder to use an LLM all the time.
There is a nice medium between full-on vibe coding and doing it yourself by hand. Coding agents can be very effective on established codebases, and nobody is forcing you to push without reviewing.
> But now that most code is written by LLMs, it's as "hard" for the LLM to write Python as it is to write Rust/Go
The LLM still benefits from the abstraction provided by Python (fewer tokens and less cognitive load). I could see a pipeline working where one model writes in Python or so, then another model is tasked to compile it into a more performant language
It's very good (in our experience, YMMV of course) to have the LLM write a prototype in Python and then port it automatically 1:1 to Rust for perf. We write prototypes in JS and Python and then they get auto-ported to Rust; we have been doing this for about a year on all our projects where it makes sense. In the past months it has been incredibly good with claude code; it is absolutely automatic; we run it in a loop until all tests (many handwritten in the original language) succeed.
IDK what's going on in your shop but that sounds like a terrible idea!
- Libraries don't necessarily map one-to-one from Python to Rust/etc.
- Paradigms don't map neatly; Python is OO, Rust leans more towards FP.
- Even if the code can be rewritten in Rust, it's probably not the most idiomatic ("Rustic"?) approach or the most performant.
Why not get it to write it in Rust in the first place?
I think that's not as beneficial as having proper type errors and feeding those back into the model as it writes.
Expressive linting seems more useful for that than lax typing without null safety.
NP (as in P = NP) is also much lower for Python than Rust on the human side.
What does that mean? Can you elaborate?
100% of my LLM projects are written in Rust - and I have never personally written a single line of Rust. Compilation alone eliminates a number of 'category errors' with software - syntax, variable declaration, types, etc. It's why I've used Go for the majority of projects I've started the past ten years. But with Rust there is a second layer of guarantees that come from its design, around things like concurrency, nil pointers, data races, memory safety, and more.
The fewer category errors a language or framework introduces, the more successful LLMs will be at interacting with it. Developers enjoy freedom and many ways to solve problems, but LLMs thrive in the presence of constraints. Frontiers here will be extensions of Rust or C-compatible languages that solve whole categories of issues through tedious language features, and especially build/deploy software that yields verifiable output and eliminates choice for the LLMs.
Perl is right out! Maybe the LLMs could help us decipher extant Perl "write once, maintain never" code.
it's very good at this BTW
> But now that most code is written by LLMs
Got anything to back up this wild statement?
Depends. What, to you, would qualify as evidence?
Something quantitative and not "company with insane vested interest/hype blogger said so".
If you have to ask, you can't afford it.
Me, my team, and colleagues also in software dev are all vibe coding. It's so much faster.
If I may ask, does the code produced by the LLM follow best practices or patterns? What mental model do you use to understand or comprehend your codebase?
Please know that I am asking as I am curious and do not intend to be disrespectful.
> It's so much faster.
A lot of things are "so much faster" than the right thing. "Vibe traffic safety laws" are much faster than ones that increase actual traffic safety: http://propublica.org/article/trump-artificial-intelligence-... . You, your team, and colleagues are producing shiny trash at unbelievable velocity. Is that valuable?
I mean, people who use LLMs to crank out code are cranking it out by the millions of lines. Even if you have never seen it used toward a net positive result, you have to admit there is a LOT of it.
If all code is eventually tech debt, that sounds like a massive problem.
> But now that most code is written by LLMs
Is this true? It seems to be a massive assumption.
By lines of code produced in total? Probably true. By usefulness? Unclear.
Replace _is_ with _can be_ and I think the general point still stands.
Sounds like just as big an assumption.
Replacing “is” with “can be” is in practical terms the same thing as replacing “is” with “isn’t”
By lines of code, almost by an order of magnitude.
Some of the code is janky garbage, but that's what most code is. There's no use pearl-clutching.
Human engineering time is better spent at figuring out which problems to solve than typing code token by token.
Identifying what to work on, and why, is a great research skill to have, and I'm glad the technology is getting realistic enough to make that a baseline skill.
Well, somebody will have to turn that "janky garbage" into quality code at some point. Who will do that, then?
I have certainly become Go-curious thanks to coding agents - I have a medium sized side-project in progress using Go at the moment and it's been surprisingly smooth sailing considering I hardly know the language.
The Go standard library is a particularly good fit for building network services and web proxies, which fits this project perfectly.
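As a taste of why it's such a good fit: the standard library ships a production-grade reverse proxy in net/http/httputil. A minimal sketch (not the actual side project; the ports and upstream are placeholders):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder upstream; a real project's backend would go here.
        backend, err := url.Parse("http://localhost:9000")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)
        log.Println("proxying :8080 -> :9000")
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }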
It's funny seeing you say that, because I've gone through an entire arc: from despising the design of Go and peremptorily refusing to use it, to really enjoying it, thanks to AI coding agents taking care of the boilerplate for me.
It turns out that verbosity isn't really a problem when LLMs are the one writing the code based on more high level markdown specs (describing logic, architecture, algorithms, concurrency, etc), and Go's extreme simplicity, small range of language constructs, and explicitness (especially in error handling and control flow) make it much easier to quickly and accurately review agent code.
It also means that Go's incredible (IMO) runtime, toolchain, and standard library are no longer marred by the boilerplate either, and I can begin to really appreciate their brilliance. It has me really reconsidering a lot of what I believed about language design.
Yeah, I much prefer Go to Rust for LLM things because I find Go code easy to read and understand despite having little experience with it - Rust syntax still trips me up.
Just completed my first small Go program. It's just a CLI tool to use with a code-quality tool as part of a coding-agent skill. The toolchain built into Go left a good first impression. Recursion and refinement of guardrails on coding agents has been high on my priority list for delivering better-quality code faster.
God you people are so lazy.
100%, check out Golang even more! I have been writing Golang AI coding projects for a really long time; I loved trying different languages, and Golang is the one I settled on.
Golang's libraries are phenomenal, and porting over to multiple servers is pretty easy; it's really portable.
I actually find Golang good for CLI projects, Web projects and just about everything.
Usually the only time I still use Python (via uvx) or vibe code with it is when I am either manipulating images or PDFs, or building a really minimalist tkinter UI in python/uv.
Although I did try converting the Python code to Golang, which ended up using fyne for the GUI projects and was surprisingly robust, I might still use Python in some niche use cases.
Check out my other comment in here for a vibe-coded project written in a single prompt when gemini 3 pro launched on the web (I hope it's not promotion, since it's open source with zero telemetry; I didn't even ask for any of that to be added haha!)
Golang is love. Golang is life.
> considering I hardly know the language.
Same boat! In fact I used to (still do) dislike Go's syntax and error handling (the same 4 lines repeated every time you call a function), but given that LLMs can write the code and do the cross-model review for me, I literally don't even see the Go source code, which is nice because I'd hate it if I did (my dislike of Go's syntax + all the AI slop in the code would drive me nuts).
But at the end of the day, Go has good scaffolding, the best tooling (maybe on par with Rust's, definitely better than Python even with uv), and tons of training data for LLMs. It's also a rather simple language, unlike Swift (which I wish was simpler because it's a really nice language otherwise).
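For anyone who hasn't written Go, the "same 4 lines" are the explicit error check that follows nearly every fallible call. A minimal sketch (the file names are hypothetical):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("config.json") // hypothetical input
        if err != nil {
            log.Fatalf("read config: %v", err)
        }

        out, err := os.Create("config.bak")
        if err != nil {
            log.Fatalf("create backup: %v", err)
        }
        defer out.Close()

        if _, err := out.Write(data); err != nil {
            log.Fatalf("write backup: %v", err)
        }
        fmt.Println("backed up", len(data), "bytes")
    }

Verbose to write by hand, but trivially quick to review: that's the trade.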
> But now that most code is written by LLMs
I'm sure it will eventually be true, but this seems very unlikely right now. I wish it were true, because we're in a time where generic software developers are still paid well, so doing nothing all day, with this salary, would be very welcome!
Code written by LLM != developer doing nothing
Has anyone tried creating a language that would be good for LLMs? I feel like what would be good for LLMs might not be the same thing that is good for humans (but I have no evidence or data to support this, just a hunch).
The problem with this is that the reason LLMs are so good at writing Python/Java/JavaScript is that they've been trained on a metric ton of code in those languages; they've seen the good, the bad, and the ugly, and been tuned toward the good. A new language would mean training from scratch, and introducing new paradigms that are "good for LLMs but bad for humans" means humans will struggle to write good code in it, making the training process harder. Even worse, say you get a year and 500 features into that repo and the LLM starts going rogue: who's gonna debug that?
But coding is largely trained on synthetic data.
For example, Claude can fluently generate Bevy code as of the training cutoff date, and there's no way there's enough training data on the web to explain this. There's an agent somewhere in a compile test loop generating Bevy examples.
A custom LLM language could have fine grained fuzzing, mocking, concurrent calling, memoization and other features that allow LLMs to generate and debug synthetic code more effectively.
If that works, there's a pathway to a novel language having higher quality training data than even Python.
>Has anyone tried creating a language that would be good for LLMs?
I’ve thought about this and arrived at a rough sketch.
The first principle is that models like ChatGPT do not execute programs; they transform context. Because of that, a language designed specifically for LLMs would likely not be imperative (do X, then Y), state-mutating, or instruction-step driven. Instead, it would be declarative and context-transforming, with its primary operation being the propagation of semantic constraints.
The core abstraction in such a language would be the context, not the variable. In conventional programming languages, variables hold values and functions map inputs to outputs. In a ChatGPT-native language, the context itself would be the primary object, continuously reshaped by constraints. The atomic unit would therefore be a semantic constraint, not a value or instruction.
An important consequence of this is that types would be semantic rather than numeric or structural. Instead of types like number, string, bool, you might have types such as explanation, argument, analogy, counterexample, formal_definition.
These types would constrain what kind of text may follow, rather than how data is stored or laid out in memory. In other words, the language would shape meaning and allowable continuations, not execution paths. An example:
@iterate: refine explanation until clarity ≥ expert_threshold
There are two separate needs here. One is a language that can be used for computation where the code will be discarded. Only the output of the program matters. And the other is a language that will be eventually read or validated by humans.
Most programming languages are great for LLMs. The problem is with the natural language specification for architectures and tasks. https://brannn.github.io/simplex/
There was an interesting effort in that direction the other day: https://simonwillison.net/2026/Jan/19/nanolang/
I don’t know rust but I use it with llms a lot as unlike python, it has fewer ways to do things, along with all the built in checks to build.
I want to create a language that allows an LLM to dynamically decide what to do.
A non-deterministic programming language, with options to drop down into JavaScript or even C if you need to specify certain behaviors.
I'd need to be much better at this though.
You're describing a multi-agent long horizon workflow that can be accomplished with any programming language we have today.
What does that even mean?
I agree with this. Making languages geared toward human ergonomics probably won’t be a thing going forward.
Go is positioned really well here, and Steve Yegge wrote a piece on why. The language is fast, less bloated than Python/TS, and less dogmatic than Java/Kotlin. LLMs can go wham with Go and the compiler will catch most of the obvious bugs. Faster compilation means you can iterate through a process pretty quickly.
Also, if I need abstraction that’s hard to achieve in Go, then it better be zero-cost like Rust. I don’t write Python for anything these days. I mean, why bother with uv, pip, ty, mypy, ruff, black, and whatever else when the Go compiler and the standard tooling work better than that decrepit Python tooling? And it costs almost nothing to make my scripts faster too.
I don’t yet know how I feel about Rust since LLMs still aren’t super good with it, but with Go, agentic coding is far more pleasurable and safer than Python/TS.
Python (with Qt, pyside) is still great for desktop GUI applications. My current project is all LLM generated (but mostly me-verified) Rust, wrapped in a thin Python application for the GUI, TUI, CLI, and web interfaces. There's also a Kotlin wrapper for running it on Android.
Yeah, Python is nice to work with in many contexts for sure. I mostly meant that I don’t personally use it as much anymore, since Go can do everything I need, and faster.
Plus the JS/Python dependency ecosystem is tiring. Yeah, I know there’s uv now, but even then I don’t see much reason to suffer through that when opting for an actually type-safe language costs me almost nothing.
Dynamic languages won’t go anywhere, but Go/Rust will eat up a pretty big chunk of the pie.
LLMs should generate code in a terse, easy-to-read language for humans to review. Besides Python, F# can be a perfect fit.
> Python/JS/Ruby/etc. were good tradeoffs when developer time mattered.
First I don't think this is the end of those languages. I still write code in Ruby almost daily, mostly to solve smaller issues; Ruby acts as the ultimate glue that connects everything here.
Having said that, Ruby is on a path to extinction. That started way before AI though and has many different reasons; it happened to perl before and now ruby is following suit. Lack of trust in RubyCentral as our divine new ruler is one (recently), after they decided to turn against the community. Soon Ruby can be renamed into Suby, to indicate Shopify running the show now. What is interesting is that you still see articles "ruby is not dead, ruby is not dead". Just the frequency of those articles coming up is worrying - it's like someone trying to pitch last minute sales - and then the company goes bankrupt. The human mind is a strange thing.
One good advantage of e. g. Python and Ruby is that they are excellent at prototyping ideas into code. That part won't go away, even if AI infiltrates more computers.
> One good advantage of e. g. Python and Ruby is that they are excellent at prototyping ideas into code. That part won't go away, even if AI infiltrates more computers.
Why wouldn't they go away for prototyping? If an LLM can help you prototype in whatever language, why pick Ruby or Python?
(This isn't a gotcha question. I primarily use python these days, but I'm not married to it).
I wouldn't speak so quickly for the 'uncommon' language set. I had Claude write me a fully functional typed erlang compiler with ocaml and LLVM IR over the last two days to test some ideas. I don't know ocaml. It made the right calls about erlang, and the result passes a fairly serious test suite, so it must've known enough ocaml and LLVM IR.
> But now that most code is written by LLMs...
Pause for a moment and think through a realistic estimation of the numbers and proportions involved.
My intuition from using the tools broadly is that pre-baked design decisions/“architectures” are going to be very competitive on the LLM coding front. If this is accurate, language matters less than abstraction.
Instruction files are just pre-made decisions that steer the agent. We try to reduce the surface area for nondeterminism using these specs, and while the models will get better at synthesizing instructions and understanding code, every decision we take off the model's plate pays dividends in reduced token usage, time, and incorrectness.
I think this is what orgs like Supabase see: they are trying to position themselves as solutions to data storage, auth, events, etc. within the LLM coding space, and are very successful, albeit mostly in the vibe-coder area. And look at AWS Bedrock: they've abstracted every dimension of the space into some acronym.
I'm not sure that LLMs are going to [completely] replace the desire for JIT, even with relatively fast compilers.
Frameworks might go the way of the dinosaur. If an LLM can manage a lot of complex code without human-serving abstractions, why even use something like React?
Frameworks aren't just human-serving abstractions - they're structural abstractions that allow for performant code, or even make certain behaviours achievable at all.
Sure, you could write a frontend without something like react, and create a backend without something like django, but the code generated by an LLM will become similarly convoluted and hard to maintain as if a human had written it.
LLMs are still _quite_ bad at writing maintainable code, even for themselves.
Test cases; test coverage
I think you're missing the reason LLMs work: It's cause they can continue predictable structures, like a human.
The surmise that compiled languages fit that just doesn't follow, in the same way LLMs have trouble finishing HTML because the open/close tags are too far apart.
The language that an LLM would succeed with is one where:
1. Context is not far apart
2. The training corpus is wide
3. Keywords, variables, etc are differentiated in the training.
4. REPL like interactivity allows for a feedback loop.
So, I think it's premature to conclude that an LLM will do any better with compiled languages just because those languages are less used due to human limitations.
I was also thinking this some days ago. The scaffolding that static languages provide is a good fit for LLMs in general.
Interestingly, since we are talking about Go specifically, I never found that I was spending too much typing... types. Obviously more than with a Python script, but never at a level where I would consider it a problem. And now with newer Python projects using type annotations, the difference got smaller.
> And now with newer Python projects using type annotations, the difference got smaller.
Just FWIW, you don't actually have to put type annotations in your own code in order to use annotated libraries.
Indeed, but nowadays it's common to add the annotations anyway, to claw back a bit of that more powerful linting.
The quality of the error messages matters a _lot_ (agents read those too!) and Python is particularly good there.
Especially since Python 3.14 shipped big improvements to error messages: https://docs.python.org/3/whatsnew/3.14.html#whatsnew314-imp...
Agree on compiled languages, wondering about Go vs Rust. Go compiles faster but is more verbose, token cost is an important factor. Rust's famously strict compiler and general safety orientation seems like a strong candidate for LLM coding. Go would probably have more training data out already though.
I generally use LLMs to generate Python (or TypeScript) because the quality and maintainability is significantly better than if I ask it to, for example, pump out C. They really do not perform very well outside of the most "popular" languages.
I’ve moved to rust for some select projects and it’s actually been a bit easier… I converted an electron app to rust/tauri… perf improvement was massive and development was quicker. I’m rethinking the stacks I should be focused on.
We may as well have the LLMs use the hardest most provably-correct language possible
Astronaut 1: You mean... strong static typing is an unmitigated win?
Astronaut 2: Always has been...
Might as well choose a language with a much better type system than go, given how beneficial quick feedback loops are to LLM code generation.
> assuming enough training data
This is a big assumption. I write a lot of Ansible, and the LLM can't even format the code properly, which is a pretty big deal in YAML. It's totally brain-dead.
Have you tried telling it to run a script to verify that the YAML is valid? I imagine it could do that with Python.
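Something like that check is only a few lines in any language. Here's a sketch in Go to match the thread's theme (gopkg.in/yaml.v3 is the assumed parser; the file name is hypothetical):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("playbook.yml") // hypothetical Ansible file
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var doc any
        if err := yaml.Unmarshal(data, &doc); err != nil {
            // This error message is what you'd feed back to the model.
            fmt.Fprintln(os.Stderr, "invalid YAML:", err)
            os.Exit(1)
        }
        fmt.Println("YAML parses cleanly")
    }

Note this only catches syntax errors, not the formatting conventions being complained about; a linter like ansible-lint would be closer to that.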
It gets it wrong 100% of the time. A script to validate would send it into an infinite loop of generating code and failing validation.
Peak LLM will be when we can give some prompt and just get fully compiled binaries of programs to download, no code at all.
Claude code, not too surprisingly, can do that (on a toy example).
toys are for children
Still fewer tokens to produce with higher-level languages, and therefore less cost to maintain in the long run?
> LLMs still can't write Gleam
Have you tried? I've had surprisingly good results with Gleam.
If you asked the LLM it's possible it would tell you Java is a better fit.
People are still going to want to audit the code, at the very least.
I love golang man! And I use it for the same thing too!!
I mean, people mention Rust and everything, and how AI can write proper Rust code with the linter and some other things, but man, trust me, AI can write some pretty good Golang code.
I don't want everyone to suddenly start writing Golang code with AI, though, because I have been doing it for over a year and it's something I vibe with; it's my personal style. I would lose some uniqueness points if everyone started doing the same haha!
Man, my love for Golang runs deep. It's simple, cross-platform (usually), and compiles super fast. I "vibe code" but have faith that I can always take the code back into my own hands.
(Self-promotion? Sorry about that, but I created a single-main.go Golang project with a timer/pomodoro using websockets via gorilla (single dep): https://spocklet-pomodo.hf.space/)
So Shhh let's keep it a secret between us shall we! ;)
(Oh yeah! Recently created a WHMCS alternative written in Golang that hooks up to any podman/gvisor instance to build your own mini VPS with my own tmate server. Lots of glue code, but it actually generated it on the first try! It's surprisingly good; I will try to release it as open source, and I'm thinking of charging just once if people want everything set up or something custom.
Though one minor nitpick is that, in my experience, the complexity rises many-fold between a single-file project and anything that requires a database in Golang. But Golang's pretty simple and I just LOVE Golang.)
Also, AI's pretty good at niche languages too: I tried to vibe code an fzf alternative, ported from Golang to V-lang, and I found the results really promising!
Agreed. The compiler is a feedback cycle made in heaven.
or maybe someone will use an LLM to create a JIT that works so well that compiled languages will be gone.
> LLMs still can't write Gleam/Janet/CommonLisp/etc
hoho - I did a 20/80 human/claude project over the long weekend using Janet: https://git.sr.ht/~lsh-0/pj/tree (dead simple Lerna replacement)
... but I otherwise agree with the sentiment. Go code is so simple it scrubs any creative fingerprints anyway. The Clojure/Janet/scheme code I've seen it writing isn't _great_ but it gets the job done quickly and correct enough for me to return to it later and golf it some.
> Plus, I get a binary that I can port to other systems w/o problem.
So cross-platform vibe-coded malware is the future then?
I hope that AVs will also evolve using the new AI tech to detect this type of malware.
Honestly, I looked at Go for malware, and AV detection for Golang used to be ehh, but recently it got strong.
Then it became a cat-and-mouse game with obfuscators and deobfuscators.
John Hammond has a *BRILLIANT* video on this topic. 100% recommended.
Honestly, going off John Hammond, I feel like Nim or V-lang is probably what vibe-coded malware will get written in. Nim has been used for hacking so much that IIRC Windows actually blocked the Nim compiler as malware itself!
Nim's biggest issue is that hackers don't know it, but if LLMs fix that, Nim becomes a really lucrative language for hackers; John Hammond described Nim's libraries for hacking as still very decent.