Comment by xtracto
4 days ago
This won't be a popular opinion here, but this resistance to and skepticism of AI code, and the way people belittle it, smells to me very similar to the stance I see from some developers who believe that people from other countries CANNOT be as good as them (e.g., claiming that outsourcing, or hiring people from developing countries, will invariably produce lower-quality code).
Feels a bit like snobbery and a projection of the fear that what they do is becoming less valuable. In this case: how DARE a computer program write such code!
It's interesting to watch this happen. And in the future it will be amazing to see the turning point when machine-generated code cannot be ignored.
Kind of like chess/Go players: First they laughed at a computer playing chess/Go, but now, they just accept that there's NO way they could beat a computer, and keep playing other humans for fun.
This would be fine if LLMs generated quality code, which they don't. Anything beyond trivial and boilerplate code is either riddled with errors or copied almost verbatim. None of these systems are able to even remotely do what a competent developer does.
Despite the PR author's claims, LLMs have no, and can't have any, understanding of the code. Especially when you start talking about architecture, robustness, security, etc. And those are the really challenging parts. Coding is 10% of a developer's job, and it's usually the easiest part. If reasonably used LLM tools can help developers code, awesome. But that part was never the problem or the bottleneck.
The chess/Go analogy doesn't work, because those are games with set rules and winning conditions. Algorithms can work with that; that's why they beat humans. The "winning conditions" of software development are notoriously tricky to get right and often impossible to perfectly formulate. If they weren't, natural language programming might be a viable path. Dijkstra knew in the 70s that it can't be.[1]
Generated code can already not be ignored, but I don't think it's for the reasons implied. Someone here mentioned Brandolini's Law. We can't ignore it for the same reason we can't ignore spam e-mails. They're too easy and cheap to produce, and practically none of what's produced has any real value or quality. We can't ignore the code because it's threatening to make an already worrying crisis of QA and security in software development even worse.
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
This is an excerpt from the session where AI is writing my Lisp compiler. What do you call this? I call this doing what a competent developer does!
39/40 tests pass. The native reader works for integers, hexadecimal, lists, strings and quote forms. The one failure is symbol comparison (known limitation).
Sounds to me like someone roleplaying being a developer. Never in my career have I seen someone think/reason/act like this.
The chess analogy is fundamentally flawed. In chess you don't have to maintain your moves - you make a move, and it's done. In engineering code isn't the end of the game, it's the start of a liability.
Code is read 10x more often than it is written. A programmer's primary job isn't "making the computer do X," but "explaining to other programmers (and their future self) why the computer should do X." AI generates syntax, but it lacks intent.
Refusing to accept such code isn't snobbery or fear. It's a refusal to take ownership of an asset that has lost its documentation of provenance and meaning.
AI-powered programmers have all the tools, freedom, investment(!) they need _now_ to start their own open source projects or forks without having to subject themselves to outdated meat-based reviewers.
I say they should “walk the talk”
Except it's the other way round: the poor quality is evident up front, and "they used AI" is an inference for why the quality is poor.