Comment by paradox242
6 days ago
I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledged that the tools need an expert to use properly, and as he illustrated, he refined his expertise over many years. He is among the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase? I can almost anticipate an interjection along the lines of "well we used to build everything with our hands and now we have tools etc, it's just different", but this is an order of magnitude different. It's like asking a robot to design and assemble a shed for you: you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
I think the main difference between shortcuts like "compilers" and shortcuts like "LLMs" is determinism. I don't need to know assembly because I use a compiler that is very well specified, in some cases even mathematically proven to introduce no errors, and errs on the side of caution unless specifically told otherwise.
On the other hand, LLMs are highly nondeterministic. They often produce correct output for simple things, but that's because those things are simple enough that we trust the probability of them being incorrect is vanishingly low. But there's no guarantee that they won't get them wrong. For more complicated things, LLMs are terrible and need very well specified guardrails. They will bounce around inside those guardrails until they make something correct, but that's more of a happy accident than a mathematical guarantee.
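(To make the nondeterminism concrete, here's a toy sketch in C of temperature-based sampling over a made-up next-token distribution. Nothing here is a real model or API; it only illustrates why the most likely completion usually wins but is never guaranteed to.)

    /* Toy temperature sampling over a made-up next-token distribution. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <time.h>

    static int sample(const double *logits, int n, double temperature) {
        double w[8], sum = 0.0;
        for (int i = 0; i < n; i++) { w[i] = exp(logits[i] / temperature); sum += w[i]; }
        double r = ((double)rand() / RAND_MAX) * sum;   /* pick proportionally to weight */
        for (int i = 0; i < n; i++) { r -= w[i]; if (r <= 0) return i; }
        return n - 1;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        const char *tokens[] = { "return x;", "return y;", "return 0;" };
        double logits[]      = { 2.0, 1.5, 0.2 };  /* the "right" token is only the most likely one */
        for (int run = 0; run < 5; run++)
            printf("run %d: %s\n", run, tokens[sample(logits, 3, 0.8)]);
        return 0;
    }

Run it a few times and the output changes; feed the same source file to a compiler five times and it does not.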
LLMs aren't a level of abstraction, they are an independent entity. They're the equivalent of a junior coder who has no long term memory and thus needs to write everything down; you just have to hope they don't forget to write something down, and that some deterministic automated test will catch them if they do.
If you could hire an unpaid intern with long term memory loss, would you?
Determinism is only one part of it: predictability and the ability to model what it's doing are perhaps more important.
The physics engine in the game Trackmania is deterministic: this means that you can replay the same inputs and get the same output; but it doesn’t mean the output always makes sense: if you drive into a wall in a particular way, you can trigger what’s called an uberbug, where your car gets flung in a somewhat random direction at implausibly high speed. (This sort of thing can lead to fun tool-assisted speedruns that are utterly unviable for humans.)
The abstractions part you mention: there's the key. Good abstractions make things predictable. Turn the steering wheel to the left, head left. There are still odd occasions when I will mispredict what some code in a language like Rust, Python or JavaScript will do, but they're rare. By contrast, LLMs are very unpredictable, and you will fundamentally never be able to mentally model what they will do.
Having an LLM code for you is like watching someone make a TAS. It technically meets the explicitly-specified goals of the mapper (checkpoints and finish), but the final run usually ignores the intended route made by the mapper. Even if the mapper keeps on putting in extra checkpoints and guardrails in between, the TAS can still find a crazy uberbug into backflip into taking half the checkpoints in reverse order. And unless you spend far longer studying the TAS than it would have taken to learn to drive it yourself normally, you're not going to learn much yourself.
Exactly. Compilers etc. are like well-proven algebraic properties, you can build on them and reason with them and do higher level math with confidence. That's a very different type of "advancement" than what we're seeing with LLMs.
> If you could hire an unpaid intern with long term memory loss, would you?
It's clearly a deficiency. And that's why one of the next generations of AIs will have long term memory and online learning. Although even the current generation of models shows signs of self-correction that somewhat mitigate the "random walk" you've mentioned.
ok, let me know when that happens
It's not just one unpaid intern with long term memory loss, it's several of them. And they don't need breaks.
If you could hire an army of unpaid interns with long term memory loss who work 24/7, would you?
Hell no, any experienced engineer would rather do it themselves than attempt to corral an untrained army. Infinite monkeys can write a sequel to Shakespeare, but it's faster to write it myself than to sift through mountains of gibberish on a barely-domesticated goose chase.
What do you think the "mistake" is here?
It seems like you're pointing out a consequence, not a counter argument.
There’s a really common cognitive fallacy of “the consequences of that are something I don’t like, therefore it’s wrong”.
It's like reductio ad absurdum, except the consequence doesn't show the argument is incorrect, it's just unwelcome.
You see it all the time, especially when it comes to predictions. The whole point of this article is that coding agents are powerful, and the arguments against this are generally weak and ill-informed. Coding agents having a negative impact on the skill growth of new developers isn't a "fundamental mistake" at all.
Exactly.
What I’ve been saying to my friends for the last couple of months has been, that we’re not going to see coding jobs go away, but we’re going to run into a situation where it’s harder to grow junior engineers into senior engineers because the LLMs will be doing all the work of figuring out why it isn’t working.
This will IMO lead to a "COBOL problem" where there is a shortage of people with truly deep understanding of how it all fits together and who can figure out the line of code to tweak to fix that ops problem that's causing your production outage.
I’m not arguing for or against LLMs, just trying to look down the road to consequences. Agentic coding is going to become a daily part of every developer’s workflow; by next year it will be table stakes - as the article said, if you’re not already doing it, you’re standing still: if you’re a 10x developer now, you’ll be a 0.8x developer next year, and if you’re a 1x developer now, without agentic coding you’ll be a 0.1x developer.
It’s not hype; it’s just recognition of the dramatic increase in productivity that is happening right now.
> How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
LLMs are so-so coders but incredible teachers. Today's students get the benefit of copying and pasting a piece of code into an LLM and asking, "How does this work?"
There are a lot of young people who will use LLMs to be lazy. There are also a lot who will use them to feed their intellectual curiosity.
Many of the curious ones will be adversely affected.
When you're a college student, the stakes feel so high. You have to pass this class or else you'll have to delay graduation and spend thousands of dollars. You have to get this grade or else you lose your grant or scholarship. You want to absorb knowledge from this project (honestly! you really do) but you really need to spend that time studying for a different class's exam.
"I'm not lazy, I'm just overwhelmed!" says the student, and they're not wrong. But it's very easy for "I'm gonna slog through this project" to become "I'm gonna give it a try, then use AI to check my answer" and then "I'm gonna automate the tedious bits that aren't that valuable anyway" and then "Well I'll ask ChatGPT and then read its answer thoroughly and make sure I understand it" and then "I'll copy/paste the output but I get the general idea of what it's doing."
Is that what students will do, though? Or will they see the cynical pump and dump and take the shortcuts to get the piece of paper and pass the humiliation ritual of the interview process?
I'm hearing this fear more frequently, but I do not understand it. Curriculum will adapt. We are a curious and intelligent species. There will be more laypeople building things that used to require deep expertise. A lot of those things will be garbage. Specialists will remain valuable and in demand. The kids will still learn to write loops, use variables, about OOP and functional programming, how to write "hello world," to add styles, to accept input, etc. And they'll probably ask a model for help when they get stuck, and the teacher won't let them use that during a test. The models will be used in many ways, and for many things, but not all things; it will be normal and fine. Developing will be more productive and more fun, with less toil.
>How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
Dunno. Money is probably going to be a huge incentive.
I see the same argument everywhere. Like animators getting their start tweening other people's content. AI is great at tweening and likely to replace farms of juniors. But companies will need seniors to direct animation, so they will either have to pay a lot of money to find them or pay a lot of money to train them.
Well this is actually happening in Japanese animation, and the result is that no young talent is getting trained in the workforce. [1]
But unlike animation, where the demand for the art can simply disappear, I don't think the demand for software engineers will disappear. Same with musicians. Young engineers might just be jobless or in training mode for a much longer period of time before they can make an actual living.
The good thing is, as far as I know, Kyoto Animation managed to avoid this issue by having in-house training and growing their own talent pool.
[1]: https://blog.sakugabooru.com/2023/03/31/the-long-quest-to-fi...
Expecting commercial entities to engage in long-term thinking when they could instead reduce costs in the next financial quarter is a fool's game.
I think what you've said is largely true, but not without a long period of mess in between.
Back in the day I found significant career advancement because something that I haven't been able to identify (lack of on-the-job training, I believe) had removed all the mid-level IT talent in my local market. For a while I was able to ask for whatever I wanted because there just was not anyone else available. I had a week where a recruitment agency had an extremely attractive young woman escort me around a tech conference, buying me drinks and dinner, and then refer me out to a bespoke MSP for a job interview (which I turned down, which is funny). The market did eventually respond, but it benefited me greatly. I imagine this is what it will be like for a decade or so as a trained senior animator: no competition coming up, and plenty of money to be made. Until businesses sort their shit out, which like you say will happen eventually.
> get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase?
I wonder this too. As someone who is entirely self-taught, escaping "tutorial hell" was the hardest part of the journey when I started, and it took quite a bit of both encouragement and sheer willpower. Not sure I would ever have gone beyond it if I had had LLMs.
I worry for juniors; either we'll need to find a way to mentor them past the vibe coding phase, or we hope that AI gets good enough before we all retire.
There will always be people that manage to get into the guts of something.
All AI is going to do is create a new class of programmer, such that the people who know the details will end up being more valuable.
I wonder if that will make the great generation of human coders. Some of our best writers were the generation that spanned between oral education and the mass production of books. Later generations read and wrote, rather than memorized and spoke. I think that was Shakespeare's genius. Maybe our best coders will be supercharged with AI, and subsequent ones enfeebled by it.
Shakespeare was also popular because he was published as books became popular. Others copied him.
I suppose the counterargument is, how many experienced programmers today have seen a register or a JMP instruction being used?
Quite a lot of the good programmers I have worked with may never have needed to write assembly, but are also not at all confused or daunted by it. They are curious about their abstractions, and have a strong grasp of what is going on beneath the curtain even if they don't have to lift it all that often.
Most of the people I work with, however, just understand the framework they are writing and display very little understanding or even curiosity as to what is going on beneath the first layer of abstraction. Typically this leaves them high and dry when debugging errors.
Anecdotally I see a lot more people with a shallow expertise believing the AI hype.
The difference is that the abstraction provided by compilers is much more robust. Not perfect: sometimes programmers legitimately need to drop into assembly to do various things. But those instances have been rare for decades and to a first approximation do not exist for the vast majority of enterprise code.
If AI gets to that level we will indeed have a sea change. But I think the current models, at least as far as I've seen, leave open to question whether they'll ever get there or not.
It's pretty common for CS programs to include at least one course with assembly programming. I did a whole class programming controllers in MIPS.
I would assume at least the ones that did a formal CS degree would know JMP exists.
Your compiler does not hallucinate registers or JMP instructions
Doesn't it? Many compilers offer all sorts of novel optimizations for operations that end up producing the same result with entirely different runtime characteristics than the source code would imply. Going further, turn on gcc fast math and your code with no undefined behavior suddenly has undefined behavior.
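To make the fast-math point concrete, a minimal sketch (behavior varies by compiler and version, so treat it as an illustration rather than a guarantee): with default flags the program below prints 1 for a NaN input, but compiled with gcc -O2 -ffast-math, which implies -ffinite-math-only, the compiler may assume NaN never occurs and fold the check to 0.

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        /* read the value at runtime so the check can't be constant-folded */
        double x = argc > 1 ? atof(argv[1]) : 1.0;   /* try passing "nan" */
        printf("is NaN: %d\n", x != x);              /* the check -ffast-math may delete */
        return 0;
    }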
I'm not much of a user of LLMs for generating code myself, but this particular analogy isn't a great fit. The one redeeming quality is that compiler output is deterministic or at least repeatable, whereas LLMs have some randomness thrown in intentionally.
With that said, both can give you unexpected behavior, just in different ways.
I bet they did at one point in time; then they stopped doing that, but they're still not bug-free.
Agree. We'll get a new breed of programmer — not shitty ones — just different. And I am quite sure, at some point in their career, they'll drop down to some lower level and try to do things manually.... Or step through the code and figure out a clever way to tighten it up....
Or if I'm wrong about the last bit, maybe it never was important.
Counter-counterargument: you don't need to understand metalworking to use a hammer or nails; that's a different trade, though an important trade that someone else does need to understand in order for you to do your job.
If all of mankind lost all understanding of registers overnight, it'd still affect modern programming (eventually)
Anyone that's gotten a CS degree or looked at godbolt output.
Not really a counter-argument.
The abstraction over assembly language is solid; compilers very rarely (if ever) fail to translate high level code into the correct assembly code.
LLMs are nowhere near the level where you can have almost 100% assurance that they do what you want and expect, even with a lot of hand-holding. They are not even a leaky abstraction; they are an "abstraction" with gaping holes.
Registers: All the time for embedded. JMP instruction? No idea what that is!
Probably more than you might think.
As a teen I used to play around with Core Wars, and my high school taught 8086 assembly. I think I got a decent grasp of it, enough to implement quicksort in 8086 while sitting through a very boring class, and test it in the simulator later.
I mean, probably few people ever need to use it for something serious, but that doesn't mean they don't understand it.
Feels like coding with and without autocomplete to me. At some point you are still going to need to understand what you are doing, even if your IDE gives you hints about what all the functions do.
Sure, it's a different level, but it's still more or less the same thing. I don't think you can expect to learn how to code by only ever using LLMs, just like you can't learn how to code by only ever using intellisense.
> I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere
Some of the arguments in the article are so bizarre that I can’t believe they’re anything other than engagement bait.
Claiming that IP rights shouldn’t matter because some developers pirate TV shows? Blaming LLM hallucinations on the programming language?
I agree with the general sentiment of the article, but it feels like the author decided to go full ragebait/engagement bait mode with the article instead of trying to have a real discussion. It’s weird to see this language on a company blog.
I think he knows that he's ignoring the more complex and nuanced debates about LLMs because that's not what the article is about. It's written in an inflammatory style that sets up straw-man talking points and then sort of knocks them down while giving weird excuses for why certain arguments should be ignored.
They are not engagement bait. That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.
A lot of people are misunderstanding the goal of the post, which is not necessarily to persuade them, but rather to disrupt a static, unproductive equilibrium of uninformed arguments about how this stuff works. The commentary I've read today has to my mind vindicated that premise.
> That argument, in particular, survived multiple rounds of reviews with friends outside my team who do not fully agree with me about this stuff. It's a deeply sincere, and, I would say for myself, earned take on this.
Which argument? The one dismissing all arguments about IP on the grounds that some software engineers are pirates?
That argument is not only unpersuasive, it does a disservice to the rest of the post and weakens its contribution by making you as the author come off as willfully inflammatory and intentionally blind to nuance, which does the opposite of breaking the unproductive equilibrium. It feeds the sense that those in the skeptics camp have that AI adopters are intellectually unserious.
I know that you know that the law and ethics of IP are complicated, that the "profession" is diverse and can't be lumped into a cohesive unit for summary dismissal, and that there are entirely coherent ethical stances that would call for both piracy in some circumstances and condemnation of IP theft in others. I've seen enough of your work to know that dismissing all that nuance with a flippant call to "shove this concern up your ass" is beneath you.
What really resonated with me was your repeated calls for us at least to be arguing about the same thing, to get on the same page.
Everything about LLMs and generative AI is getting so mushed up by people pulling it in several directions at once, marketing clouding the water, and the massive hyperbole on both sides, it's nearly impossible to understand if we're even talking about the same thing!
It's a good post and I strongly agree with the part about level setting. You see the same tired arguments basically every day here and in subreddits like /r/ExperiencedDevs. I read a few today and my favorites are:
- It cannot write tests because it doesn't understand intent
- Actually it can write them, but they are "worthless"
- It's just predicting the next token, so it has no way of writing code well
- It tries to guess what code means and will be wrong
- It can't write anything novel because it can only write things it's seen
- It's faster to do all of the above by hand
I'm not sure if the issue is that they tried Copilot with GPT-3.5 or something, but anyone who uses Cursor daily knows all of the above is false; I make it do these things every day and it works great. There was another comment I saw here or on Reddit about how everyone needs to spend a day with Cursor and get good at understanding how prompting + context work. That is a big ask, but I think the savings are worth it once you get the hang of it.
>> Blaming LLM hallucinations on the programming language?
My favorite was suggesting that people select a programming language based on which ones LLMs are best at. People who need an LLM to write code might do that, but no experienced developer would. There are too many other legitimate considerations.
If an LLM improves coding productivity, and it is better at one language than another, then at the margin it will affect which language you may choose.
"At the margin" means that both languages, or frameworks or whatever, are reasonably appropriate for the task at hand. If you are writing firmware for a robot, then the LLM will be less helpful, and a language such as Python or JS, which the LLM is good at, is useless.
But Thomas's point is that arguing that LLMs are not useful for all languages is not the same as saying they are not useful for any language.
If you believe that LLM competencies are not actually becoming drivers in what web frameworks people are using, for example, you need to open your eyes and recognize what is happening instead of what you think should be happening.
(I write this as someone who prefers SvelteJS over React - but LLM's React output is much better. This has become kind of an issue over the last few years.)
People make productivity arguments for using various languages all the time. Let's use an example near and dear to my heart: "Rust is not as productive as X, therefore, you should use X unless you must use Rust." If using LLMs makes Rust more productive than X, that changes this equation.
Feel free to substitute Y instead of Rust if you want, just I know that many people argue Rust is hard to use, so I feel the concreteness is a good place to start.
Maybe they don't today, or didn't until recently, but I'd believe it will be a consideration for new projects.
It's certainly true that at least some projects choose languages based on, or at least influenced by, how easy it is to hire developers fluent in that language.
I see no straw men in his arguments: what I see are pretty much daily direct quotes pasted in from HN comments.
> daily direct quotes pasted in from HN comments.
That’s literally the strawman.
I am squarely in the bucket of AI skeptic—an old-school, code-craftsman type of personality, exactly the type of persona this article is framed against, and yet my read is nothing like yours. I believe he's hitting these talking points to be comprehensive, but with nothing approaching the importance and weightiness you are implying. For example:
> Claiming that IP rights shouldn’t matter because some developers pirate TV shows?
I didn't see him claiming that IP rights shouldn't matter, but rather that IP rights don't matter in the face of this type of progress, they never have since the industrial revolution. It's hypocritical (and ultimately ineffectual) for software people to get up on a high horse about that now just to protect their own jobs.
And lest you think he is an amoral capitalist, note the opening statement of the section: "Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists in ways that might be hard to appreciate if you don’t work in the arts.", indicating that he does understand and empathize with the most material of harms that the AI revolution is bringing. Software engineers aren't on that same spectrum because the vast majority of programming is not artisanal creative work; it's about precise automation of something as cheaply as possible.
Or this one:
> Blaming LLM hallucinations on the programming language?
Was he "blaming"? Or was he just pointing out that LLMs are better at some languages than others? He even says:
> People say “LLMs can’t code” when what they really mean is “LLMs can’t write Rust”. Fair enough!
Which seems very truthy and in no way is blaming LLMs. Your interpretation is taking some kind of logical / ethical leap that is not present in the text (as far as I can tell).
> Software engineers aren't on that same spectrum because the vast majority of programming is not artisinal creative work...
That's irrelevant. Copyright and software licensing terms are still enforced in the US. Unless the software license permits it, or it's for one of a few protected activities, verbatim reproduction of nontrivial parts of source code is not legal.
Whether the inhalation of much (most? nearly all?) of the source code available on the Internet, for the purpose of making a series of programming machines that bring in lots and lots of revenue for the companies that own those machines, is fair use or infringing commercial use has yet to be determined. Scale is important when determining whether or not something should be prohibited or permitted... which is something that many folks seem to forget.