Comment by egl2020

13 hours ago

"You can learn anything now. I mean anything." This was true before LLMs. What's changed is how much work it takes to get an "answer". If the LLM hands you that answer, you've forgone the learning you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.

I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, they were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites that appeared on the fifth page of search results. Whatever the externalities of that, in the short term, that part may be a net positive for learners.

On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.

I've had multiple interactions like that. Someone asked an LLM for an ELI5 and tried to leverage that in a conversation, and... the abstraction they came back with feels profound to them, but is useless and wrong.

  • This. I feel this all the time. I love 3Blue1Brown's videos and when I watch them I feel like I really get a concept. But I don't retain it as well as I do things I learned in school.

    It's possible my brain is not as elastic now in my 40s. Or maybe there's no substitute for doing something yourself (practice problems) and that's the missing part.

  • > On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you.

    The other benefit is that LLMs, for superficial topics, are the most patient teachers ever.

    I can ask it to explain a concept multiple times, hoping that it'll eventually click for me, and not be worried that I'd look stupid, or that it'll be annoyed or lose patience.

  • One factor in favor of using LLMs as a learning tool is the poor quality of documentation. It seems we've forgotten how to write usable explanations that help readers build a coherent model of the topic at hand.

It always comes down to economics and then the person and their attitude towards themselves.

Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for.

I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself.

BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education.

That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves."

Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem.

  • So I've actually been putting more effort into deliberate practice since I started using AI in programming.

    I've been a fan of Zed Shaw's method for years, of typing out interesting programs by hand. But I've been appreciating it even more now, as a way to stave off the feeling of my brain melting :)

    The gross feeling I get when I go too long without doing cardio is similar to the one I get when I go too long without actually writing a substantial amount of code myself.

    I think that the feeling of making a sustained effort is itself something necessary and healthy, and rapidly disappearing from the world.

  • I’ve always liked the essential/accidental complexity split. It can be hard to find, but from a problem-solving perspective, it may define what’s fun and what’s a chore.

    I’ve been reading the OpenBSD source lately, and it’s quite nice how they’ve split the general OS concepts from the machine-dependent parts, and the general way they’ve separated interfaces from implementations.

    I believe that once you’ve solved the essential problem, the rest becomes much easier because you have a direction. But doing accidental problem solving without having done the essential one is pure misery.

That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.

Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.

  • Yeah, it's a great way for me to reduce activation energy to get started on a specific topic. Certainly doesn't get me all the way home, but cracks it open enough to get started.

  • I kinda wonder to what extent grad students’ experience grading projects and homework will end up being a differentiating skill. 75% kidding.

My solution to this is to prioritize. There isn't enough time in a person's life to learn everything anyways.

Selectively pick and struggle through things you want to learn deeply. And let AI spoon-feed you for things you don't care as much about.

  • I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!

    I've also wanted to play with C and Raylib for a long time and now I'm confident in coding by hand and struggling with it, I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.

    • Same, there are a few things I never learned and don't care to learn, and ultimately there's no greater value in learning them.

      Like do I really get anything out of learning another framework or how some particular library does something?


I am beginning to disagree with this, or at least to question its universal truth. For instance, there are so many times when "learning" is an exercise in applying wrong advice many times until something finally succeeds.

For instance, retrieving the absolute path an Angular app is running at in a way that is safe both on the client and in SSR contexts has a very clear answer, but there are a myriad of wrong ways people accomplish that task before they stumble upon the Location injectable.

In cases like the above, the LLM is often able not only to tell you the correct answer the first time (which means a lot less "noise" in the process trying to teach you wrong things), but also to explain how the answer applies in a way that teaches me something I'd never have learned otherwise.
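For readers who haven't hit this one: a minimal sketch of what that pattern looks like (assuming a recent Angular version with the `inject()` function; the service name here is illustrative, not from the comment):

```typescript
// Use Angular's Location service instead of window.location, which is
// undefined during server-side rendering (SSR).
import { Injectable, inject } from '@angular/core';
import { Location } from '@angular/common';

@Injectable({ providedIn: 'root' })
export class CurrentPathService {
  private readonly location = inject(Location);

  // Safe on both the client and the server, because Location delegates
  // to a platform-appropriate PlatformLocation implementation.
  currentPath(): string {
    return this.location.path(); // e.g. "/products/42?tab=specs"
  }
}
```

The many wrong ways the comment alludes to usually involve touching `window` or `document` directly, which throws at render time on the server.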

We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.

That said, there are also absolutely times (and sometimes it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long term/muscle memory or expands our understanding in a valuable way.

And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.

AI gave you the option of making it happen without learning anything.

It also gives you an avenue to accelerate your learning if that is your goal.

I learn a lot faster now with LLMs.

You could learn the Windows APIs much faster if you wanted to learn them.

  • Is this maybe more about the quality of the documentation? I say this because reading is reading; it takes the same time to read the information either way.

  • How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double-check everything it says against the docs anyway.

    • I learn fastest from the examples, from application of the skill/knowledge - with explanations.

      AIs allowed me to get on with Python MUCH faster than I was managing on my own, and to understand more of the arcane secrets of jq in 6 months than I had in a few years before.

      And AI's mistakes are a brilliant opportunity to debug, to analyse, and to go back to it saying "I beg your pardon, wth is this" :) pointing at the elementary mistakes you now see because you understand the flow better.

      Recently I had a fantastic back and forth with Claude over one of my precious tools written in Python - I was trying to understand the specifics of a particular function's behaviour, discussing typing, arguing about trade-offs and portability. The thing I really like is that I always get pushback, or things to consider, if I come up with something stupid.

      It's a tailored team exercise and I'm enjoying it.

    • Human teachers make mistakes too. If you aren't consuming information with a skeptical eye you're not learning as effectively as you could be no matter what the source is.

      The trick to learning with LLMs is to treat them as one of multiple sources of information, and to work with those sources to build your own robust mental model of how things work.

      If you exclusively rely on official documentation you'll miss out on things that the documentation doesn't cover.


    • Yes, you have to be careful, but the LLM will read and process code and documentation literally millions of times faster than you, so it's worth it.


It is uncertain what will be valuable in the future at the rate things are changing.

Books are for the mentally enfeebled who can't memorize knowledge.

- Socrates

I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs.

For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you overindex on the idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems, which of course is not true.

This is why we have politics, and why we have great politicians who hail from cultures underrepresented in high-level math study: getting elected, having popular ideas, and convincing people solves far more of the problems people actually have than math does. This isn't to say that procedural thinking isn't valuable. It's just that, well, joke's on you: ChatGPT will lose elections. But you can have it do procedural thinking pretty well, and what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude about, well, the math people destroying their own relevance.

All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and bad game.

Another way of stating this for the average CRUD developer is, most enterprise IT projects fail, so yeah, the learning didn't really matter anyway. It's not useful to learn how to deliver better failed enterprise IT project, other than to make money.

One more POV: the effortlessness of agentic programming makes me more sympathetic to anti-intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers, though many fewer in the academic category than, say, in the corporate world. If you told me a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning. Even in game development (cultural product development generally), it's better to wait for these tools to get more powerful than to learn meaningless arcana.

  • Convincing / coercing a bunch of slaves to build a pyramid takes a leader.

    But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit.

    A nation needs more than just influencers and charlatans.

    • > But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit.

      The government invented computers. You need politics to fund all of this. You are talking about triumphs of politics as much as of invention. I don't know why you think I am pro influencer or charlatan...

I do disagree with the notion that you have to slog through a problem to learn efficiently. The idea that it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way, and with whom you can explore a topic. That's extremely efficient and effective for learning. There may be some tradeoff in some things, but this idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want, about the things that you want_, which wasn't possible before. You used to have to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days.

  • I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently, and you'll learn. It acts as a guide, but you still do the work.

    Completely offloading the hard work and just getting a summary isn't really learning.