Comment by simonw
1 day ago
> if you’re doing this for your own learning: you will learn better without AI.
I'm certain that's not true. AI is the single biggest gift we could possibly give to people who are learning to program - it's shaved that learning curve down to the point where you don't need to carve out six months of your life just to build something small and useful that works.
AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.
100% rejecting AI as a learner programmer may feel like the right thing to do, but at this point it's similar to saying "I'm going to learn to program without ever Googling for anything at all".
(I do not yet know how to teach people to learn effectively with AI though. I think that's a very important missing piece of this whole puzzle.)
I'm a BIG fan of these three points though:
> - rewrite the parts you understand
> - learn the parts you don’t
> - make it so you can reason about every detail
If you are learning to program you should have a very low tolerance for pieces that you don't understand, especially since we now have a free 24/7 weird robot TA that we can ask questions of.
I think it's a pretty small generosity to implicitly extend what the author is saying to "you will learn better without generating your code". I don't know if that's what they meant, but AI is certainly a good tool for learning how things work and seeing examples (if you don't blindly trust everything it says and use other sources too).
That's fair. I also just noticed that the sentence before the bit I quoted is important:
> AI overuse hurts you:
> - if you’re doing this for your own learning: you will learn better without AI.
So they're calling out "AI overuse", and I agree with that - that's where the skill comes in of deciding how to use AI to help your learning in a way that doesn't damage that learning process.
I think the parallel is photobashing. I've seen art teachers debating how early a student should start photobashing. Everyone knows it's a widely adopted technique in the industry, but some consider it harmful for beginners.
Needless to say, there is no consensus. I err on the side of photobashing, personally.
It cannot be understated how much of a boon AI-assisted programming has been for getting stuff up and running. Once you get past the initial hurdle of setting up an environment along with any boilerplate, you can actually start running code and iterating in order to figure out how something works.
Cognitive bandwidth is limited, and if you need to fully understand and get through 10 different errors before anything works, that's a massive barrier to entry. If you're going to be using those tools professionally then eventually you'll want to learn more about how they work, but frontloading a bunch of adjacent tooling knowledge is the quickest way to kill someone's interest.
Usually the choice isn't between a high-quality project and slopware; it's between slopware and nothing at all.
> It cannot be understated
You mean it cannot be overstated?
You got ‘em!!
Rejecting AI as a learning tool looks to me like a very good policy in an academic setting.
And the completely wrong decision in a hobby setting.
> AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.
I think that's very important.
Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.
I have found AI to be a great tool for learning, but I see it -- me, personally -- as a very slippery slope into not learning at all. It is so easy, so trivial, to produce a (seemingly accurate) answer to just about any question whatsoever, no matter how mundane or obscure, that I barely have to engage my own thinking at all.
On one hand, with the goal of obtaining an answer to a question quickly, it's awesome.
On the other hand, I feel like I have learned almost nothing at all. I got precisely, pinpointed down, the exact answer to the question I asked. Going through more traditional means of learning -- looking things up in books, searching web sites, reading tutorials, etc. -- I end up with my answer, but I also end up with more context, and a deeper+broader understanding of the overall problem space.
Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
I feel like that is both good and bad. I don't want to be too dismissive of the good, but I also feel like it would be unwise to ignore the bad.
Whoa hey though, isn't this just exactly like books? Didn't, like, Plato and all them Greek cats centuries ago say that writing things down would ruin our brains, and isn't what I'm claiming here 100% the same thing? I don't think so. I see it as a matter of scale. It's a similar effect -- you probably do lose something (whether it's valuable is debatable) when you choose to rely on written words rather than memorize. But it's tiny. With our modern AI tools, there is potential to lose out on much more. You can -- you don't have to, but you can -- do way more coasting, mentally. You can pretty much coast nonstop now.
> Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.
I think you learned something critically important: that the thing you wanted to build is feasible to build.
A lot of ideas people have are not possible to build. You can't prove a negative but you CAN prove a positive: seeing a version of the thing you want to exist running in front of you is a big leap forward from pondering if it could be built.
That's a useful thing to learn.
The other day, at brunch, I had Claude Code on my phone add webcam support (with pinch-to-zoom) to my is-it-a-bird CLIP-in-your-browser app: https://tools.simonwillison.net/is-it-a-bird. I didn't even have to look at the code it wrote to learn that Mobile Safari can render the webcam input in a box on the page (not full screen) and support a rough pinch-to-zoom mechanism. It's pixelated, not actual camera zoom, but for a CLIP app that's fine, because the zoom is really just there to exclude things from the image that aren't a potential bird.
(The prompts I used for this are quoted in the PR description: https://github.com/simonw/tools/pull/175)
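For the curious, the general shape of that technique looks something like the sketch below. This is my own rough reconstruction, not the actual PR code: the element ID, box size, and zoom cap are invented. The key pieces are a `<video>` element with `autoplay muted playsinline` attributes (so Mobile Safari doesn't force the feed fullscreen) inside an `overflow: hidden` box, with the pinch gesture driving a plain CSS scale transform:

```typescript
// Rough sketch: show the camera in an on-page box and fake pinch-to-zoom
// with a CSS transform. Assumes markup along the lines of:
//   <div style="width:320px; height:240px; overflow:hidden">
//     <video id="cam" autoplay muted playsinline></video>
//   </div>
const video = document.getElementById("cam") as HTMLVideoElement;

async function startCamera(): Promise<void> {
  // Without the playsinline attribute, Mobile Safari promotes the
  // camera feed to a fullscreen player.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
  });
  video.srcObject = stream;
}

let zoom = 1;
let lastDistance = 0;

function pinchDistance(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

video.addEventListener("touchstart", (e) => {
  if (e.touches.length === 2) lastDistance = pinchDistance(e.touches);
});

// passive: false so preventDefault() can stop the page's own pinch zoom
video.addEventListener(
  "touchmove",
  (e) => {
    if (e.touches.length !== 2 || lastDistance === 0) return;
    e.preventDefault();
    const d = pinchDistance(e.touches);
    // Scaling the video crops in on the feed (pixelated, not optical zoom);
    // the overflow:hidden parent clips whatever spills outside the box.
    zoom = Math.min(4, Math.max(1, zoom * (d / lastDistance)));
    lastDistance = d;
    video.style.transform = `scale(${zoom})`;
  },
  { passive: false }
);

startCamera().catch(console.error);
```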
> Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
100% agree with that. You need a lot of self-discipline to learn effectively with AI. I'd argue you need self-discipline to learn via other means as well though.
A robot TA that gives the wrong answer 50% of the time isn't very helpful.
Right, but this is a TA that gives a wrong answer more like 10% of the time (or less).
I think it's possible that for learning a 90% accuracy rate is MORE helpful than 100%. If it gets things wrong 1/10th of the time it means you have to think critically about everything it tells you. That's a much better way to approach any source of information than blindly trusting it.
The key to learning is building your own robust mental model, from multiple sources of information. Treat the LLM as one of those sources, not the exclusive source, and you should be fine.
You need to choose another field if it takes you 6 months to hello world
Don't speak like this to people. Also, don't put words in people's mouths (they didn't say "hello world").
I deliberately didn't say "hello world", I said "build something small that works" - I'm editing my post now to add the words "and useful".
The author probably meant "coding without AI", not "learning without AI".