Comment by skywhopper

8 hours ago

It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding. No doubt you can learn some true things, but you’ll also learn some blatant falsehoods and a lot of incorrect theory. And you won’t know which is which.

One of the most important factors in actually learning something is humility. Unfortunately, LLM chatbots are designed to discourage this in their users. So many people think they’re experts because they asked a chatbot. They aren’t.

I think everything you said was true 1-2 years ago. But current LLMs are very good about citing sources, and hallucinations are exceedingly rare. Gemini, for example, frequently directs you to a website or video that backs up its answer.

> It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding

It's delusional and very arrogant of you to confidently assert anything without proof: a topic like RLC circuits has a body of rigorous theorems and proofs underlying it*, and nothing stops you from piecing it together using an LLM.

* - See "Positive-Real Functions", "Schwarz-Pick Theorem", "Schur Class". These are things I've been mulling over.
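For anyone curious how those three footnoted topics fit together, here is a minimal sketch in LaTeX of the standard statements (textbook material, added purely for illustration, not something asserted by the commenters): the definition of a positive-real function, Brune's classical theorem tying positive-realness to RLC realizability, and a comment noting the Cayley transform that carries positive-real functions into the Schur class, where the Schwarz-Pick theorem applies.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{document}

% The driving-point impedance of any passive RLC one-port is a
% positive-real function; the converse direction is Brune's theorem.
\begin{definition}[Positive-real function]
A function $Z$ of the complex frequency $s$ is \emph{positive-real} if
\begin{enumerate}
  \item $Z$ is analytic in the open right half-plane $\operatorname{Re} s > 0$;
  \item $Z(s)$ is real for real $s > 0$;
  \item $\operatorname{Re} Z(s) \ge 0$ whenever $\operatorname{Re} s > 0$.
\end{enumerate}
\end{definition}

\begin{theorem}[Brune, 1931]
A rational function $Z(s)$ is realizable as the driving-point impedance of a
passive network of resistors, inductors, capacitors, and ideal transformers
if and only if $Z$ is positive-real.
\end{theorem}

% Link to the other two footnoted topics: the Cayley transform
% $S = (Z - 1)/(Z + 1)$, with the frequency variable mapped by
% $s = (1 + z)/(1 - z)$, carries positive-real functions to the Schur
% class of analytic self-maps of the unit disk, where the
% Schwarz--Pick theorem applies.
\end{document}
```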