Comment by tdeck
20 days ago
I have seen folks who are relatively new to programming work like this.
Rather than simple laziness, it was often because they felt intimidated by their lack of knowledge and wanted to be more productive.
However, the result of a ChatGPT-based workflow is that reasoning is often the very last resort. Ask the LLM for a solution, paste it in, get an error, paste that in, get a new solution, get another error, ask for a fix again, etc. etc.
Before someone chimes in to say this is like Stack Overflow: no it isn't. Real people expect you to put some work and effort into first describing your problem and then solving it. You would rarely find someone willing to go through such an exercise with you, and they probably wouldn't hallucinate broken code at you while doing it.
15 minutes of this and it turns out to be something silly that ChatGPT would never catch - e.g. you have installed a very old version of the Python module for some internal company reason. But because the reasoning muscle isn't being built up, and the context isn't being built up, they can't figure it out.
They didn't see the bit on the docs page that says "this function was added in version 1.5" because they didn't write the function call, didn't open the documentation, and perhaps wouldn't even consider opening the documentation, because that's what ChatGPT is for. In fact, they might not have even consciously chosen that library, because again... that's what ChatGPT is for.
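To make the version-mismatch case concrete, the whole paste-the-error loop can be short-circuited by one check against the installed version. This is only a sketch: the package name and the 1.5 cutoff are made up, stand-ins for whatever the docs' "added in version X" note actually refers to.

    # Hypothetical package name and cutoff; substitute whatever the docs'
    # "this function was added in version 1.5" note actually refers to.
    from importlib.metadata import version

    installed = version("somelib")  # e.g. "1.2.0", pinned for internal company reasons
    major, minor, *_ = (int(p) for p in installed.split("."))
    if (major, minor) < (1, 5):
        print(f"somelib {installed} is too old: that function doesn't exist yet")

That's the kind of two-line check that never happens when neither the function call nor the library choice was ever a conscious decision.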
> Ask the LLM for a solution, paste it in, get an error, paste that in, get a new solution, get another error, ask for a fix again, etc. etc.
That's exactly what I've seen as well. The students don't even read the code, let alone try to reason through how it works. They just develop hand-eye coordination for copy-pasting.
> Rather than simple laziness, it was often because they felt intimidated by their lack of knowledge and wanted to be more productive.
Part of it really is laziness, but what you say is also true. Unfortunately, this is the nature of learning. Reading or listening is by itself a weak stimulus for building neural pathways. You need to actively recall and apply, and struggle with problems until they yield. It is so much easier to look up a solution somewhere. And now you don't even have to look anything up anymore -- just ask.
Just a funny, or depressing, aside - and then a point about LLMs.
Real coding can, unfortunately, be as bad as that or worse. Here is one very famous HN comment from 2018; I know what he is talking about because participating in this madness was my first job after university, and it dispelled a lot of my illusions:
https://news.ycombinator.com/item?id=18442941
I went into that job (porting Oracle to another Unix platform for an Oracle platform partner) full of enthusiasm, and after the first few weeks I gave up on finding any meaning or enjoyment in it, or on trying to understand or improve anything. If AI could do at least some of that job, it would actually be a big plus.
(it's the working-on-Oracle-code comment if you didn't already guess it)
I think there's a good chance code becomes more like biology. You can understand the details, but there are sooo many of them, and there are way too many connections, direct and indirect, across layers. You have to find higher-level methods because it's too much for direct comprehension.
I saw a main code contributor at a startup I worked for operate kind of like that. It wasn't all his fault: he was forced to move too quickly, and what the code was supposed to do was so ill-defined that not even the big boss knew what they wanted, talking only in meta terms and always coming up with new, sometimes contradictory ideas. The code was very hard to comprehend and debug, especially since much of it was distributed algorithms. So his approach was to run it with demo data, observe the higher-level outcomes, and tweak this or that component until it kind of worked. It never worked reliably; it was demo-quality software at best. But at least he managed to implement all the new ideas from management.
I found that style interesting and could not dismiss it outright, even though I really, really did not want to be the one debugging that thing in production. But I saw something different from what I was used to: a focus on a higher level, a way of working when you just can't have the depth of understanding of what you are doing that one would traditionally like. Given my Oracle experience, I saw how this would be a useful style IRL for many big, long-running projects like that Oracle code, which you have no chance of comprehending or improving without an "rm -rf" and a restart, which you cannot do.
I think education also needs to show this more "biology-level" complexity and these more statistical, higher-level approaches. Much of our software is getting too complex for the traditional low-level methods.
I see LLMs as just part of such a toolkit for the future. On the one hand, there is supplying code for "traditional" smaller projects, where you still have hope of being in control and of having at least the seniors fully understand the system. On the other hand, LLMs could help with too-complex systems: not by making them understandable, which is impossible for those messy systems, but by letting you still work with them productively, add new features, and debug issues. Code such as in the Oracle case. A new tool for even higher levels of messiness and complexity in our systems, which we won't be able to engineer away due to real-life constraints.