Comment by capnrefsmmat
6 days ago
The argument seems to be that for an expert programmer, who is capable of reading and understanding AI agent code output and merging it into a codebase, AI agents are great.
Question: If everyone uses AI to code, how does someone become an expert capable of carefully reading and understanding code and acting as an editor to an AI?
The expert skills needed to be an editor -- reading code, understanding its implications, knowing what approaches are likely to cause problems, recognizing patterns that can be refactored, knowing where likely problems lie and how to test them, holding a complex codebase in memory and knowing where to find things -- currently come from long experience writing code.
But a novice who outsources their thinking to an LLM or an agent (or both) will never develop those skills on their own. So where will the experts come from?
I think of this because of my job as a professor; many of the homework assignments we use to develop thinking skills are now obsolete because LLMs can do them, permitting the students to pass without thinking. Perhaps there is another way to develop the skills, but I don't know what it is, and in the meantime I'm not sure how novices will learn to become experts.
> Question: If everyone uses AI to code, how does someone become an expert capable of carefully reading and understanding code and acting as an editor to an AI?
Well, if everyone uses a calculator, how do we learn math?
Basically, force students to do it by hand long enough that they understand the essentials. Introduce LLMs at a point similar to when you allow students to use a calculator.
> Well, if everyone uses a calculator, how do we learn math?
Calculators have made most people a lot worse in arithmetic. Many people, for instance, don't even grasp what a "30%" discount is. I mean other than "it's a discount" and "it's a bigger discount than 20% and lower than 40%". I have seen examples where people don't grasp that 30% is roughly one third. It's just a discount, they trust it.
GPS navigation has made most people a lot worse at reading maps or generally knowing where they are. I have multiple examples where I would say something like "well we need to go west, it's late in the day so the sun will show us west" and people would just not believe me. Or where someone would follow their GPS on their smartphone around a building to come back 10m behind where they started, without even realising that the GPS was making them walk the long way around the building.
Not sure the calculator is a good example to say "tools don't make people worse with the core knowledge".
GPS has also ruined our city-level spatial awareness.
Before, you had the map. So you were aware that Fitzroy was to the west of Collingwood and both were south of Clifton Hill and so on. I had dozens of these suburbs roughly mapped out in my mind.
Driving down an unfamiliar road, one could use signs to these suburbs as a guide. I might not know exactly where I was, but I had enough of an idea to point me in the right direction.
That skill has disappeared.
Apparently 1/3 lb hamburgers didn't help A&W against McDonald's because too many people thought 1/3 is smaller than 1/4. So the Quarter Pounder remains supreme. Snopes: [https://www.snopes.com/news/2022/06/17/third-pound-burger-fr...]
But how important is the core knowledge if it isn't necessary to achieve the outcomes people actually value? People only cared about map reading skills to the extent that it got them where they want to go. Once GPS became a thing, especially GPS on mobile phones, getting them where they want to go via map reading became irrelevant. Yes, there are corner cases where map reading or general direction finding skills are useful, but GPS does a vastly better and quicker job in the large majority of cases so our general way-finding experience has improved.
This is especially true because the general past alternative to using GPS to find some new unfamiliar place wasn't "read a map" it was "don't go there in favor of going some place you already knew" in a lot of cases. I remember the pre-GPS era, and my experience in finding new stuff is significantly better today than it was back then.
At the end of the day, it's the average productivity across a population that matters.
So GPS makes people worse at orienteering -- on average, does it get everyone where they need to go, better / faster / easier?
Sometimes, the answer is admittedly no. Google + Facebook + TikTok certainly made us less informed when they cannibalized reporting (news media origination) without creating a replacement.
But on average, I'd say calculators did make the population more mathematically productive.
After all, lots of people sucked at math before them too.
I'm unconvinced that calculators have made most people a lot worse in arithmetic. There have always been people who are bad at math. It's likely there are fewer people who can quickly perform long division on paper, but it's also possible the average person is _more_ numerate because they can play around with a calculator and quickly build intuition.
You're confusing maths with accounting. 30% is an intuition/familiarity, not knowledge.
Writing has made people worse at memorization. This argument has been around since Plato.
But somehow I was born in the age of GPS and yet I ended up with a strong mental map and navigation skills.
I suspect there will be plenty of people who grow up in the age of LLMs and maybe by reading so much generated code, or just coding things themselves for practice, will not have a hard time learning solid coding skills. It may be easy to generate slop, but it’s also easy to access high quality guidance.
If calculators were unreliable... Well, we'd be screwed if everyone blindly trusted them and never learned math.
They'd also be a whole lot less useful. Calculators are great because they always do exactly what you tell them. It's the same with compilers, almost: imagine if your C compiler did the right thing 99.9% of the time, but would make inexplicable errors 0.1% of the time, even on code that had previously worked correctly. And then CPython worked 99.9% of the time, except it was compiled by a C compiler working 99.9% of the time, ...
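To make the compounding concrete, here's a toy sketch (the per-layer figure is just the hypothetical 99.9% from above, not a measured number, and the layer names are placeholders):

    // Toy illustration: how per-layer reliability compounds across a stack of
    // tools that each "mostly work". The 0.999 figure and layers are hypothetical.
    const layers = ["C compiler", "CPython", "app code", "generated tests"];
    const perLayer = 0.999;

    // Probability that every layer behaves correctly on a given run.
    const overall = Math.pow(perLayer, layers.length);
    console.log(`${layers.length} layers at 99.9% each -> ~${(overall * 100).toFixed(2)}% overall`);
    // ~99.60% here; at 99% per layer it's already down to ~96%, which is why
    // deterministic tools are so much easier to build on.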
But bringing it back on-topic, in a world where software is AI-generated, and tests are AI-generated (because they're repetitive, and QA is low-status), and user complaints are all fielded by chat-bots (because that's cheaper than outsourcing), I don't see how anyone develops any expertise, or how things keep working.
Early calculators were unreliable. Assume that AI based coding will improve.
While I agree with your suggestion, the comparison does not hold: calculators do not tell you which numbers to input and compute. With an LLM you can just ask vaguely and often get a passable result.
Then figure out how to structure the assignment to make students show their work. If a student doesn't understand the concept, it will show in how they prompt AI.
For example, you could require that students submit all logs of AI conversations, and show all changes they made to the code produced.
For example, yesterday I asked ChatGPT how to add a copy-to-clipboard button in MudBlazor. It told me the button didn't exist, and then wrote the component for me. That saved me a bunch of research, but I needed to refactor the code for various reasons.
So, if this was for an assignment, I could turn in both my log from ChatGPT, and then show the changes I made to the code ChatGPT provided.
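For what it's worth, the heart of a component like that usually just wraps the browser clipboard API via JS interop. A minimal TypeScript sketch, with a function name I made up rather than anything from MudBlazor or ChatGPT:

    // Minimal clipboard helper a Blazor component might call through JS interop.
    // The function name is hypothetical; only navigator.clipboard is standard.
    export async function copyTextToClipboard(text: string): Promise<boolean> {
      try {
        await navigator.clipboard.writeText(text); // needs a secure (https) context
        return true;
      } catch {
        return false; // e.g. permission denied or clipboard unavailable
      }
    }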
> a novice who outsources their thinking to an LLM or an agent (or both) will never develop those skills on their own. So where will the experts come from?
Well, if you’re a novice, don’t do that. I learn things from LLMs all the time. I get them to solve a problem that I’m pretty sure can be solved using some API that I’m only vaguely aware of, and when they solve it, I read the code so I can understand it. Then, almost always, I pick it apart and refactor it.
Hell, just yesterday I was curious about how signals work under the hood, so I had an LLM give me a simple example, then we picked it apart. These things can be amazing tutors if you’re curious. I’m insatiably curious, so I’m learning a lot.
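If you're curious, the heart of such an example usually boils down to a value cell plus automatic dependency tracking. A rough sketch (not the exact code from that session, and the names are mine):

    // Minimal "signals under the hood" sketch: reading a signal inside an
    // effect subscribes that effect; writing the signal re-runs subscribers.
    type Effect = () => void;
    let currentEffect: Effect | null = null;

    function createSignal<T>(value: T): [() => T, (v: T) => void] {
      const subscribers = new Set<Effect>();
      const read = () => {
        if (currentEffect) subscribers.add(currentEffect); // track who reads us
        return value;
      };
      const write = (v: T) => {
        value = v;
        subscribers.forEach((fn) => fn()); // re-run everything that read us
      };
      return [read, write];
    }

    function createEffect(fn: Effect): void {
      currentEffect = fn;
      fn();                 // first run registers the dependencies
      currentEffect = null;
    }

    // Usage: logs 1, then logs 2 when the signal is written.
    const [count, setCount] = createSignal(1);
    createEffect(() => console.log(count()));
    setCount(2);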
Junior engineers should not vibe code. They should use LLMs as pair programmers to learn. If they don’t, that’s on them. Is it a dicey situation? Yeah. But there’s no turning back the clock. This is the world we have. They still have a path if they want it and have curiosity.
> Well, if you’re a novice, don’t do that.
I agree, and it sounds like you're getting great results, but they're all going to do it. Ask anyone who grades their homework.
Heck, it's even common among expert users. Here's a study that interviewed scientists who use LLMs to assist with tasks in their research: https://doi.org/10.1145/3706598.3713668
Only a few interviewees said they read the code through to verify it does what they intend. The most common strategy was to just run the code and see if it appears to do the right thing, then declare victory. Scientific codebases rarely have unit tests, so this was purely a visual inspection of output, not any kind of verification.
> Junior engineers should not vibe code. They should use LLMs as pair programmers to learn. If they don’t, that’s on them. Is it a dicey situation? Yeah. But there’s no turning back the clock. This is the world we have. They still have a path if they want it and have curiosity.
Except it's impossible to follow your curiosity when everything in the world is pushing against it (unless you are already financially independent and only programming for fun). Junior developers compete in one of the most brutal labor markets in the world, and their deliverables are more about getting things done on time than doing things better. What they "should" do goes out the window once you step out of privilege and look at the real choices.
You sound like an active learner who could become a top programmer even without LLMs. Most students will take the path of least resistance.
There is absolutely a thing where self-motivated autodidacts can benefit massively more from these new tools than people who prefer structured education.
This reminds me of Isaac Asimov's "Profession" short story. Most people receive their ability (and their matching assigned profession, thus the name) from a computer. They then are able to do the necessary tasks for their job, but they can't advance the art in any way. A few people aren't compatible with this technology, and they instead learn to do things themselves, which is fortunate because it's the only way to advance the arts.
Deliberate practice, which may take a form different from productive work.
I believe it's important for students to learn how to write data structures at some point. Red black trees, various heaps, etc. Students should write and understand these, even though almost nobody will ever implement one on the job.
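As one concrete instance of what I mean, a binary min-heap is small enough to write by hand but still forces you to reason about the heap invariant. A rough sketch:

    // A binary min-heap: the kind of exercise meant above. Small enough to
    // write by hand, instructive enough to expose the invariants.
    class MinHeap {
      private data: number[] = [];

      push(value: number): void {
        this.data.push(value);
        // Sift up: restore the heap property along the path to the root.
        let i = this.data.length - 1;
        while (i > 0) {
          const parent = (i - 1) >> 1;
          if (this.data[parent] <= this.data[i]) break;
          [this.data[parent], this.data[i]] = [this.data[i], this.data[parent]];
          i = parent;
        }
      }

      pop(): number | undefined {
        if (this.data.length === 0) return undefined;
        const top = this.data[0];
        const last = this.data.pop()!;
        if (this.data.length > 0) {
          this.data[0] = last;
          // Sift down: push the new root below any smaller child.
          let i = 0;
          while (true) {
            const left = 2 * i + 1, right = 2 * i + 2;
            let smallest = i;
            if (left < this.data.length && this.data[left] < this.data[smallest]) smallest = left;
            if (right < this.data.length && this.data[right] < this.data[smallest]) smallest = right;
            if (smallest === i) break;
            [this.data[smallest], this.data[i]] = [this.data[i], this.data[smallest]];
            i = smallest;
          }
        }
        return top;
      }
    }

    // [5, 1, 3] pushed in any order pops as 1, 3, 5.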
Analogously electrical engineers learn how to use conservation laws and Ohm's law to compute various circuit properties. Professionals use simulation software for this most of the time, but learning the inner workings is important for students.
The same pattern is true of LLMs. Students should learn how to write code, but soon the code will write itself and professionals will be prompting models instead. In 5-10 years none of this will matter though because the models will do nearly everything.
I agree with all of this. But it's already very difficult to do even in a college setting -- to force students to get deliberate practice, without outsourcing their thinking to an LLM, you need various draconian measures.
And for many professions, true expertise only comes after years on the job, building on the foundation created by the college degree. If students graduate and immediately start using LLMs for everything, I don't know how they will progress from novice graduate to expert, unless they have the self-discipline to keep getting deliberate practice. (And that will be hard when everyone's telling them they're an idiot for not just using the LLM for everything)
You're talking about students, but the question was about seniors. You don't go to school to become a senior dev, you code in real-world settings, with real business pressures, for a decade or two to become a senior. The question is how are decent students supposed to grow into seniors who can independently evaluate AI-produced code if they are forced to use the magic box and accept its results before being able to understand them?
I was talking about students because I was replying to a comment from a professor talking about his students
> Question: If everyone uses AI to code, how does someone become an expert capable of carefully reading and understanding code and acting as an editor to an AI?
LLMs are very much like pair programmers in my experience. For the junior engineer, they are excellent resources for learning, the way a senior engineer might be. Not only can they code what the junior can’t, they can explain questions the junior has about the code and why it’s doing what it’s doing.
For senior devs, it is a competent pair programmer, acting as an excellent resource for bouncing ideas off of, rubber ducking, writing boilerplate, and conducting code reviews.
For expert devs, it is a junior/senior dev you can offload all the trivial tasks to so you can focus on the 10% of the project that is difficult enough to require your expertise. Like a junior dev, you will need to verify what it puts together, but it’s still a huge amount of time saved.
For junior devs specifically, if they are not curious and have no interest in actually learning, they will just stop at the generated code and call it a day. That’s not an issue with the tool, it’s an issue with the dev. For competent individuals with a desire to learn and grow, LLMs represent one of the single best resources to do so. In that sense, I think that junior devs are at a greater advantage than ever before.
> That’s not an issue with the tool, it’s an issue with the dev.
Hard disagree here. It makes a real difference whether you work on a task because you feel it brings you tangible progress, or because it's an artificial exercise that you could do with one sentence to Claude if it weren't for the constraints of the learning environment. That feeling is genuinely demotivating for learning.
I don’t know about you, but I use LLMs as gateways to knowledge. I can set a deep research agent free on the internet with context about my current experience level, preferred learning format (books), what I’m trying to ramp up on, etc. A little while later, I have a collection of the definitive books for ramping up in a space. I then sit down and work through the book doing active recall and practice as I go. And I have the LLM there for Q&A while I work through concepts and “test the boundaries” of my mental models.
I’ve become faster at the novice -> experienced arc with LLMs, even in domains that I have absolutely no prior experience with.
But yeah, the people who just use LLMs for “magic oracle please tell me what do” are absolutely cooked. You can lead a horse to water, but you can’t make it drink.
If no one really becomes an expert anymore, that seems like great news for the people who are already experts. Perhaps people actively desire this.
Problem is, at some point those experts retire or change their focus and you end up with the COBOL problem.
Except instead of just one language on enterprise systems that no one wants to learn because there's no money in it, it's everything.
That seems like even better news for the people about to be paid large sums to fix all that stuff because no one else knows how any of it works.
It’s a great point and one I’ve wondered myself.
Arguments are made consistently about how this can replace interns or juniors directly. Others say LLMs can help them learn to code.
Maybe, but not on your codebase or product, and not with a senior's knowledge of pitfalls.
I wonder if this will be programming's iPhone moment, where we start seeing a lack of the deep knowledge needed to troubleshoot. I can tell you that we're already seeing a glut of security issues being explained by devs as “I asked copilot if it was secure and it said it was fine so I committed it”.
> I can tell you that we’re already seeing a glut of security issues being explained by devs as “I asked copilot if it was secure and it said it was fine so I committed it”.
And as with Google and Stack Overflow before, the Sr Devs will smack the wrists of the Jr's that commit untested and unverified code, or said Jr's will learn not to do those things when they're woken up at 2 AM for an outage.
That's assuming the business still employs those Sr Devs so they can do the wrist smacking.
To be clear, I think any business that dumps experienced devs in favor of cheaper vibe-coding mids and juniors would be making a foolish mistake, but something being foolish has rarely stopped business types from trying.
The way the responses to this subthread show the classical "the problem doesn't exist - ok, it does exist but it's not a big deal - ok, it is a big deal but we should just adapt to it" progression makes me wonder if we found one of the few actually genuine objections to LLM coding.
Nail on head. Before, innovations in code were extensions of a human's capabilities. The LLM-driven generation could diminish the very essence of writing meaningful code, to the point where they will live in the opposite of a golden era. The dead internet theory may yet prevail.
I think a large fraction of my programming skills come from looking through open source code bases. E.g. I'd download some code and spend some time navigating through files looking for something specific, e.g. "how is X implemented?", "what do I need to change to add Y?".
I think it works a bit like pre-training: to find what you want quickly you need to have a model of coding process, i.e. why certain files were put into certain directories, etc.
I don't think this process is incompatible with LLM use...
If I were a professor, I would make my homework start the same -- here is a problem to solve.
But instead of asking for just working code, I would create a small wrapper for a popular AI. I would insist that the student use my wrapper to create the code. They must instruct the AI how to fix any non-working code until it works. Then they have to tell my wrapper to submit the code to my annotator. Then they have to annotate every line of code as to why it is there and what it is doing.
Why my wrapper? So that you can prevent them from asking it to generate the comments, and so that you know that they had to formulate the prompts themselves.
They will still be forced to understand the code.
Then double the number of problems, because with the AI they should be 2x as productive. :)
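A hypothetical sketch of what that wrapper could look like (the endpoint, request shape, and names are all placeholders, not a real service):

    // Hypothetical wrapper: every prompt and response is logged so the
    // instructor can see how the student drove the model. The proxy URL and
    // response shape are placeholders, not a real API.
    interface ExchangeRecord { timestamp: string; prompt: string; response: string; }

    async function askModel(prompt: string, log: ExchangeRecord[]): Promise<string> {
      const res = await fetch("https://example.edu/ai-proxy", {   // placeholder proxy URL
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      const { text } = (await res.json()) as { text: string };
      log.push({ timestamp: new Date().toISOString(), prompt, response: text });
      return text;
    }

    // On submission, the log is sent alongside the final annotated code, so the
    // instructor grades the prompting and the annotations, not just the output.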
For introductory problems, the kind we use to get students to understand a concept for the first time, the AI would likely (nearly) nail it on the first try. They wouldn't have to fix any non-working code. And annotating the code likely doesn't serve the same pedagogical purpose as writing it yourself.
Students emerge from lectures with a bunch of vague, partly contradictory, partly incorrect ideas in their head. They generally aren't aware of this and think the lecture "made sense." Then they start the homework and find they must translate those vague ideas into extremely precise code so the computer can do it -- forcing them to realize they do not understand, and forcing them to make the vague understanding concrete.
If they ask an AI to write the code for them, they don't do that. Annotating has some value, but it does not give them the experience of seeing their vague understanding run headlong into reality.
I'd expect the result to be more like what happens when you show demonstrations to students in physics classes. The demonstration is supposed to illustrate some physics concept, but studies measuring whether that improves student understanding have found no effect: https://doi.org/10.1119/1.1707018
What works is asking students to make a prediction of the demonstration's results first, then show them. Then they realize whether their understanding is right or wrong, and can ask questions to correct it.
Post-hoc rationalizing an LLM's code is like post-hoc rationalizing a physics demo. It does not test the students' internal understanding in the same way as writing the code, or predicting the results of a demo.
> They will still be forced to understand the code.
But understanding is just one part of the learning process, isn't it? I assume everybody has had this feeling: the professor explains maths on the blackboard, and the students follow. The students "understand" all the steps: they make sense, they don't feel like asking a question right now. Then the professor gives them a slightly different exercise and asks them to do the same, and the students are completely lost.
Learning is a loop: you need to accept it, get it in your memory (learn stuff by heart, be it just the vocabulary to express the concepts), understand it, then try to do it yourself. Realise that you missed many things in the process, and start at the beginning: learn new things by heart, understand more, try it again.
That loop is still there. They have to get the AI to write the right code.
And beyond that, do they really need to understand how it works? I never learned how to calculate logarithms by hand, but I know what they are for and I know when to punch the button on the calculator.
I'll never be a top tier mathematician, but that's not my goal. My goal is to calculate things that require logs.
If they can get the AI to make working code and explain why it works, do they need to know more than that, unless they want to be top in their field?
Yep, this is the thing I worry about as well.
I find these tools incredibly useful. But I constantly edit their output and frequently ask for changes to other peoples' code during review, some of which is AI generated.
But all of that editing and reviewing is informed by decades of writing code without these tools, and I don't know how I would have gotten the reps in without all that experience.
So I find myself bullish on this for myself and the experienced people I work with, but worried about training the next generation.
Yes, I feel the same way. But I worry about my kids. My 15-year-old son wanted to go into software engineering and work for a game studio. I think I'll advocate for civil engineering, but for someone who will still be working 50 years from now it's really hard to know what will be a good field right now.
Yeah but in fairness, it's always true that it's hard to know what a good field will be in half a century.
> So where will the experts come from?
They won't, save for a relative minority of those who enjoy doing things the hard way or those who see an emerging market they can capitalize on (slop scrubbers).
I wrote this post [1] last month to share my concerns about this exact problem. It's not that using AI is bad necessarily (I do every day), but it disincentivizes real learning and competency. And once using AI is normalized to the point where true learning (not just outcome seeking) becomes optional, all hell will break loose.
> Perhaps there is another way to develop the skills
Like sticking a fork in a light socket, the only way to truly learn is to try it and see what happens.
[1] https://ryanglover.net/blog/chauffeur-knowledge-and-the-impe...
LLMs are also great to ask questions about existing code. It's like being able to converse with StackOverflow.
I don't know if I'm convinced by this. Like, if we were talking about novels, you don't have to be a writer to check grammar and analyze plot structure in a passable way. It is possible to learn by reading instead of doing.
Sure, you could learn about grammar, plot structure, narrative style, etc. and become a reasonable novel critic. But imagine a novice who wants to learn to do this and has access to LLMs to answer any question about plots and style that they want. What should they do to become a good LLM-assisted author?
The answer to that question is very different from how to become an author before LLMs, and I'm not actually sure what the answer is. It's not "write lots of stories and get feedback", the conventional approach, but something new. And I doubt it's "have an LLM generate lots of stories for you", since you need more than that to develop the skill of understanding plot structures and making improvements.
So the point remains that there is a step of learning that we no longer know how to do.
I've had a lot of success using LLMs to deepen my understanding of topics. Give them an argument, and have them give the best points against it. Consider them, iterate. Argue against it and let it counter. It's a really good rubber duck
> The expert skills... currently come from long experience writing code
Do they? Is it the writing that's important? Or is it the thinking that goes along with it? What's stopping someone from going through LLM output, going back and forth on design decisions with the LLM, and ultimately making the final choice of how the tool should mold the codebase after seeing the options?
I mean, of course this requires some proactive effort on your part... but it always did.
The key point I think though is to not outsource your thinking. You can't blindly trust the output. It's a modern search engine
I think it's the writing.
I learned long ago that I could read a book, study it, think about it. And I still wouldn't really master the material until I built with it.
> If everyone uses AI to code, how does someone become an expert
The same way they do now that most code is being copied/integrated from StackOverflow.
I had this conversation with a friend:
HIM: AI is going to take all entry level jobs soon.
ME: So the next level up will become entry level?
HIM: Yes.
ME: Inductively, this can continue all the way up to the CEO. What about the CEO?
HIM: Wait...
I simply don’t believe all the jobs will go away; it feels much more like the field will just be significantly pared back. There will be more opportunities for juniors eventually if it turns out to be too high of a barrier to entry and elder programmers start to retire.
This is such a non-issue and so far down the list of questions. We've invented AI that can code, and you're asking about career progression? That's the top thing to talk about? We've given life to what is essentially an alien life form.
"What is this going to do to humans?" is probably the #1 question that should be on the mind of every engineer, every day. Being toolmakers for civilization is the entire point of our profession.
I'll take the opposite view of most people. Expertise is a bad thing. We should embrace technological changes that render expertise economically irrelevant with open arms.
Take a domain like US taxation. You can certainly become an expert in that, and many people do. Is it a good thing that US taxes are so complicated that we have a market demand for thousands of such experts? Most people would say no.
Don't get me wrong, I've been coding for more years of my life than I haven't by this point; I love the craft. I still think younger me would have far preferred a world where he could have just had GPT do it all for him, so he didn't need to spend his lunch hours poring over the finer points of e.g. Python iterators.
By the same logic we should allow anyone with an LLM to design ships, bridges, and airliners.
Clearly, it would be very unwise to buy a bridge designed by an LLM.
It's part of a more general problem - the engineering expectations for software development are much lower than for other professions. If your AAA game crashes, people get annoyed but no one dies. If your air traffic control system fails, you - and a large number of other people - are going to have a bad day.
The industry has a kind of glib unseriousness about engineering quality - not theoretical quality, based on rules of thumb like DRY or faddy practices, but measurable reliability metrics.
The concept of reliability metrics doesn't even figure in the LLM conversation.
That's a very bizarre place to be.
> We should embrace technological changes that render expertise economically irrelevant with open arms.
To use your example, is using AI to file your taxes actually "rendering [tax] expertise economically irrelevant?" Or is it just papering over the over-complicated tax system?
From the perspective of someone with access to the AI tool, you've somewhat eased the burden. But you haven't actually solved the underlying problem (with the actual solution obviously being a simpler tax code). You have, on the other hand, added an extra dependency on top of an already over-complicated system.
In addition, a substantial portion of the complexity in software is essential complexity, not just accidental complexity that could be done away with.
I never said anything about using AI to do your taxes.
I was drawing an analogy. We would probably be better off with a tax system that wasn't so complicated it creates its own specialized workforce. Similarly we would be better off with programming tools that make the task so simple that professional computer programmers feel like a 20th century anachronism. It might not be what we personally want as people who work in the field, but it's for the best.
The question then becomes whether or not it's possible (or will be possible) to effectively use these LLMs for coding without already being an expert. Right now, building anything remotely complicated with an LLM, without scouring over every line of code generated, is not possible.
Counter-counterpoint: the existence of tools like this can allow the tax code to become even more complex.
Nowhere do I suggest using AI to do your taxes. My point was, if you think it's bad taxes are complicated enough that many people need to hire a professional to do it, you should also think it's bad programming is complicated enough that many people need to hire a professional to do it.
I mean, we already have vibe tariffs, so vibe taxation isn’t far off. ;)
Don't think of it from the perspective of someone who had to learn. Think of it from the perspective of someone who has never experienced the friction of learning at all.
But that is incompatible with the fact that you need to be an expert to wield this tool effectively.