Comment by ogogmad

12 hours ago

> This is sort of why I think software development might be the only real application of LLMs outside of entertainment.

Wow. What about also, I don't know, self-teaching*? In general, you have to be very arrogant to say that you've experienced all the "real" applications.

* - For instance, today and yesterday, I've been using LLMs to teach myself about RLC circuits and "inerters".

I would absolutely not trust an LLM to teach me anything on its own. It has introduced ideas I hadn't heard of, which I then looked up in actual sources to confirm they were valid. Daily usage has shown it will happily lead you down the wrong path, and usually the only way to know it's the wrong path is if you already knew what the solution should be.

LLMs MAY be a version of office hours or asking the TA, if you only have the book and no actual teacher. I have seen nothing that convinces me they are anything more than the latest version of the hammer in our toolbox. Not every problem is a nail.

Self-teaching pretty much doesn't work. For many decades now, the barrier has not been access to information; it's been the "self" part. It turns out most people need regimen, accountability, and strictness, which AI just doesn't provide because it's a yes-man.

  • > Self-teaching pretty much doesn't work. For many decades now, the barrier has not been access to information, it's been the "self" part.

    That’s completely bogus. And while LLMs are yes-men by default, nothing stops you from overriding the initial setting.

    • It's not bogus at all. We've had access to 100,000x more information than we know what to do with for a while now. Right now, you can go online and learn disciplines you've never even heard of before.

      So why aren't you a master of, I don't know, reupholstery? Because the barrier isn't information, it's you. You're the bottleneck; we all are, because we're human.

      And AI really just does not help here. It's the same problem as with Professor Google: I can just turn off the computer, and I will. This is how it is for the vast majority of people.

      Most people who claim to be self-taught aren't actually self-taught. They did a course, or multiple courses. Sure, it's not traditional college, but that's not self-taught.


    • They're fundamentally trained to agree, and they don't do well when they have to challenge ideas they're not "confident" about.

It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic. No doubt you can learn some true things, but you’ll also learn some blatant falsehoods and a lot of incorrect theory. And you won’t know which is which.

One of the most important factors in actually learning something is humility. Unfortunately, LLM chatbots are designed to discourage this in their users. So many people think they’re experts because they asked a chatbot. They aren’t.

  • I think everything you said was true 1-2 years ago. But the current LLMs are very good about citing their sources, and hallucinations are exceedingly rare. Gemini, for example, frequently directs you to a website or video that backs up its answer.

  • > It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic

    It's delusional and very arrogant of you to confidently assert anything without proof: a topic like RLC circuits has a body of rigorous theorems and proofs underlying it*, and nothing stops you from piecing it together using an LLM.

    * - See "Positive-Real Functions", "Schwarz-Pick Theorem", "Schur Class". These are things I've been mulling over.
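
    For what it's worth, the connection the commenter gestures at is checkable by hand: the driving-point impedance of a passive RLC network is a positive-real function, a standard fact of network synthesis. A minimal sketch for a series RLC circuit (my illustration, not from the thread):

    ```latex
    % Series RLC driving-point impedance, with R, L, C > 0:
    Z(s) = R + Ls + \frac{1}{Cs}
    % Write s = \sigma + i\omega with \sigma = \Re s \ge 0. Since
    % \Re\!\left(\frac{1}{Cs}\right) = \frac{\sigma}{C\,|s|^{2}}, we get
    \Re Z(s) = R + L\sigma + \frac{\sigma}{C\,|s|^{2}} \;\ge\; 0
    % i.e. Z maps the closed right half-plane into itself,
    % so Z is positive-real.
    ```

    Claims like this are exactly the kind an LLM-assisted learner can verify independently against a textbook.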

Why would you think that a machine known to cheerfully and confidently assert complete bullshit is suitable to learn from?

  • Because you can independently check anything it tells you. You understand there can be independent sources of validation?