Comment by globnomulous
2 days ago
Students of ancient languages fall into one of two camps: those who use translations for 'assistance' and those who don't. Classroom experiences have shown me that the two groups of students learn vastly different skills.
The group who struggle through texts by themselves without relying on any shortcuts -- they just sit with the text -- probably won't become top-shelf philologists, but when you give them a sentence they haven't seen before from an author they've read, the chances are very good that they'll be able to make sense of it without assistance. These students learn, in other words, how to read ancient languages.
The group who rely on translations learn to do precisely that: rely on a translation. If you give them a text by an author they've 'read' before and deny them the use of a side-by-side translation, they almost never have any clue how to proceed, even at the level of rudimentary parsing. Is that word the second-person-singular aorist imperative middle or is it the aorist infinitive active? They probably won't even know how to identify the difference -- or that there is one.
Our brains are built for energy conservation. They do what, and only what, we ask of them. Learning languages is hard. Reading a translation is easy. Given the choice between the harder skill and the easier, the brain will always learn the easier. The only way to learn the harder one is to remove the option: sit with the text; struggle.
So far I've been able to avoid LLMs and AI. I've written in other comments on HN about this. I don't want to talk to an anthropomorphic chat UI, a workflow I call "meeting-based programming." I want to work with code. I want to become a more skillful SWE and better at working with programming languages, software, and systems. LLMs won't help me do this. All the time they save me -- all the time they steal from reading code, thinking about it, and consulting documentation -- is time they've stolen from the work I actually want to do. They'll make me worse at what I do and deprive me of the joy I find in it.
I've argued with teammates about this. They don't want to do the boring stuff. They say AI will do it for them. To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening and giving up the parts of themselves that they'll need to call upon when they find something 'interesting' to work on (edit: and I'd wager that what they consider interesting will be debased over time as well, as programming effort itself becomes foreign and a less common practice.)
One could say this about absolutely any technology.
Using a hoe makes you weaker than if you just used your bare hands. Using a calculator makes your brain lose the skill of doing complicated arithmetic in your head.
Most people have never built a fire completely from scratch; they're surely lacking certain skills, but do/should they care?
But as with everything else, you can use technology to do more -- things that might be impossible for you to do without it -- and that's ok.
> One could say this about absolutely any technology.
What do I become worse at when I learn metallurgy, woodworking, optics, painting, or cooking?
> But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
Whether LLMs are helpful or enable anybody to do 'more' is beside the point.
I don't care about doing more -- or the 'more' I care about is only tangentially related to my actual output as an engineer. I care about developing my skill as an SWE and deepening my understanding. LLMs stand in the way of that. They poison it. Anybody who loves and values the skill as I do does themselves a disservice by letting an LLM do the work, particularly the thinking and problem solving. And even if you don't care about the skill, and are delighted to find that LLMs increase your output while you're using them, I'd wager you'll pay a hefty long-term intellectual and personal cost, in that you'll become a worse, lazier, less engaged engineer.
That's what this guy's post is about: losing the ability to do the work, or finding yourself bewildered by it, because you're no longer practicing it.
If code is just an obstacle to your goals but also the means of reaching them, and LLMs help you reach your goals, great, more power to you. My goal is to program. I just want to continue to do what I love and, day by day, problem by problem, become better at it. When I can no longer do that as an SWE, and I'm expected (let alone required) to let an obnoxious, chipper chatbot do the work, while I line the pockets of some charlatan 'thought leader,' I'll retire or blow my brains out.
Does the hoe operate itself?
I took a statistics course in high school where we learned how to do everything on a calculator. I was terrible and didn’t understand statistics at the end of it. My teacher gave me a gentleman’s C. I decided to retake the course in college where my teacher taught us how to calculate the formulas by hand. After learning them by hand, I applied everything on exams with my calculator. I finished the class with a 100/100, and my teacher said there was no need for me to take the final exam. It was clear I understood the concept.
What changed between the two classes? Well, I actually learned statistics rather than how to let a tool do the work for me. Once I learned the concept, then I was able to use the tool in a beneficial way.
> To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening the parts of themselves that they call upon when they want to work on the 'interesting' stuff.
It's worse than that: people who rely too much on the AI never learn how to tell when it's wrong.
This is different from things like "nobody complains about using a calculator".
A calculator doesn't lie; LLMs on the other hand lie all the time.
(And, to be fair, even the calculator statement isn't completely true. The reason the HP 12C is so popular is that calculators did lie about some financial calculations (numerical inaccuracy). It was deemed too hard for business majors to figure out when and why, so they just converged on a known standard.)
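For what it's worth, here's a minimal sketch (in Python, not the HP 12C's actual internals) of the kind of numerical inaccuracy meant here: plain binary floating point can't represent most decimal amounts exactly, so naive money arithmetic quietly drifts, while exact decimal arithmetic doesn't.

```python
# Illustrative only: binary floats can't represent 0.01 exactly, so
# repeatedly adding a cent accumulates rounding error.
from decimal import Decimal

# Add one cent a million times with binary floats.
total_float = sum(0.01 for _ in range(1_000_000))
print(total_float)    # slightly off from 10000.0 -- accumulated rounding error

# The same sum with exact decimal arithmetic.
total_decimal = sum(Decimal("0.01") for _ in range(1_000_000))
print(total_decimal)  # exactly 10000.00
```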