Comment by holsta
5 days ago
> It used to be that if you got stuck on a concept, you're basically screwed.
We were able to learn before LLMs.
Libraries are not a new thing. FidoNet, USENET, IRC, forums, local study/user groups. You have access to all of Wikipedia. Offline, if you want.
I learned how to code using the library in the 90s.
I think it's accurate to say that if I had to do that again, I'd be basically screwed.
Asking the LLM is a vastly superior experience.
I had to learn what my local library had, not what I wanted. And it was an incredible slog.
IRC is another example--I've been there. One or two topics have great IRC channels. The rest have idle bots and hostile gatekeepers.
The LLM makes a happy path to most topics, not just a couple.
>Asking the LLM is a vastly superior experience.
Not to be overly argumentative, but I disagree. If you're looking for a deep and ongoing process, LLMs fall down, because they can't remember anything and can't build on themselves in that way. You end up having to repeat a lot of stuff. They also don't have good course correction (that is, if you're going down the wrong path, they don't alert you, as I've experienced).
It also can give you really bad content depending on what you're trying to learn.
I think for things that represent themselves as a form of highly structured data, like programming languages, there's good attunement there. But once you start trying to dig into advanced finance, political topics, economics, or complex medical conditions, the quality falls off fast, if it's there at all.
I used LLMs to teach me a programming language recently.
It was way nicer than a book.
That's the experience I'm speaking from. It wasn't perfect, and it was wrong sometimes, sure. A known limitation.
But it was flexible, and it was able to do things like relate ideas with programming languages I already knew. Adapt to my level of understanding. Skip stuff I didn't need.
Incorrect moments or not, the result was I learned something quickly and easily. That isn't what happened in the 90s.
Most LLM user interfaces, such as ChatGPT, do have a memory. See Settings, Personalization, Manage Memories.
Agreed, I'd add to the statement, "you're basically screwed, often, without investing a ton of time (e.g. weekends)"
Figuring out 'make' errors when I was bad at C on microcontrollers a decade ago? (still am) Careful pondering of possible meanings of words... trial-and-error tweaks of code and recompiling in hopes that I was just off by a tiny thing, but 2 hours and 30 attempts later, realizing I'd done a bad job of tracking what I'd tried and hadn't? Well, it made me better at carefully triaging issues. But it wasn't something I was enthusiastic to pick back up the next weekend, or for the next idea I had.
Revisiting that combination of hardware/code a decade later and having it go much faster with ChatGPT... that was fun.
Are we really comparing that kind of research to just typing a question and having a good answer in a couple of seconds?
Like, I agree with you, and I believe those things will persist and will always be important, but it doesn't really compare in this case.
Last week I was out in nature and saw a cute bird that I didn't recognize. I asked an AI and got the correct answer in 10 seconds. Of course I could have found the answer at the library or by looking at proper niche sites, but I wouldn't have done it, because I simply didn't care that much. It's a stupid example, but I hope it makes the point.
There's a gigantic difference between outsourcing your brain to generative AI (LLMs, Stable Diffusion, ...) and pattern recognition that identifies songs, birds, plants, or health issues.
It’s not an either/or situation.
> We were able to learn before LLMs.
We were able to learn before the invention of writing, too!