Comment by tekno45

5 days ago

How are you supposed to spot errors if you don't know the material?

You're telling people to be experts before they know anything.

> How are you supposed to spot errors if you don't know the material?

By noticing that something doesn't add up at a certain point. If you rely on an incorrect answer, later material will eventually clash with it one way or another, because in most areas things are built one on top of another (assuming we are talking about math/CS/sciences/music theory/etc., and not something like history).

At that point, it means that either the teacher (whether it is a human or an AI) made a mistake or you are misunderstanding something. In either scenario, the most correct move is to try clarifying it with the teacher (and to check other sources on the topic afterwards to make sure, in case things are still not adding up).

  • It absolutely does not work that way.

    An LLM teacher will course-correct if questioned, regardless of whether it is factually correct or not. An LLM, by design, does not, in any capacity whatsoever, have a concept of factual correctness.

    • I've had cases when using LLMs to learn where I feel the LLM is wrong or still doesn't match my intuition, and I will ask it 'but isn't it the case that...' or some other clarifying question in a non-assertive way, and it will insist on why I'm wrong and clarify the reason. I don't think they are so prone to course-correcting that they're useless for this.

      4 replies →

    • I think the actually important difference in this case is that LLMs are, by design, very willing to admit fault. I suspect, but cannot yet prove, that this is because corrigibility (an important part of AI alignment & safety research) has a significant vector similarity to fawning and to sycophancy.

      With regard to them not, in any capacity whatsoever, having a concept of factual correctness, LLMs are very much just like humans: we're not magic, and we don't know the underlying nature of reality.

      This is why it took us so long to replace Aristotelian physics with Newtonian, let alone Newtonian with QM and GR; both QM and GR are known to be flawed, but nobody has worked out the next step. It's just that humans are fairly unwilling to change their minds about how physics works in light of evidence; we often just defer to famous people: first Aristotle, then Newton, then Einstein.

      We humans make this (opposite) mistake so hard and so often that there's a saying that "science progresses one funeral at a time": https://en.wikipedia.org/wiki/Planck%27s_principle

      I could also have thrown miasma, phlogiston, or phrenology into this list, or the fact that Columbus definitely wasn't the only person who knew the Earth was round; he just got lucky with the existence of the Americas after getting the size of both the Earth and Asia catastrophically wrong.
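
      A toy way to poke at that corrigibility/sycophancy hunch is simply to compare embeddings. This is only a sketch of the idea, not evidence; the embedding model and the word list below are my own picks, not anything from this thread.

          # Rough check of whether "corrigibility" sits near "sycophancy"/"fawning"
          # in an off-the-shelf embedding space (sentence-transformers assumed).
          import numpy as np
          from sentence_transformers import SentenceTransformer

          model = SentenceTransformer("all-MiniLM-L6-v2")
          words = ["corrigibility", "sycophancy", "fawning", "stubbornness"]
          vecs = model.encode(words)

          def cos(a, b):
              # Cosine similarity: closer to 1.0 means the vectors point the same way.
              return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

          for word, vec in zip(words[1:], vecs[1:]):
              print(f"corrigibility vs {word}: {cos(vecs[0], vec):.2f}")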

    • I just tried this:

      > Me: why is madrid the capital of france?

      > ChatGPT: It's not. Madrid is the capital of Spain. The capital of France is Paris.

      8 replies →

    • > An LLM, by design, does not, in any capacity whatsoever, have a concept of factual correctness.

      That is what RAG is for. Are there any commercial LLMs not sitting behind RAG?
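
      For what it's worth, the RAG idea here is just "retrieve some relevant text first, then make the model answer from that text rather than from memory". A minimal sketch of that shape, with a toy keyword retriever and a made-up three-line corpus (not any vendor's actual pipeline; the final LLM call is left out):

          # Hypothetical mini-corpus standing in for a real document index.
          CORPUS = [
              "Paris is the capital and most populous city of France.",
              "Madrid is the capital of Spain.",
              "Godot is a free and open-source game engine.",
          ]

          def retrieve(question: str, k: int = 2) -> list[str]:
              """Toy retriever: rank passages by word overlap with the question."""
              q = set(question.lower().split())
              return sorted(CORPUS,
                            key=lambda p: len(q & set(p.lower().split())),
                            reverse=True)[:k]

          def build_prompt(question: str) -> str:
              context = "\n".join(retrieve(question))
              return ("Answer using ONLY the context below. "
                      "If the answer is not in it, say so.\n\n"
                      f"Context:\n{context}\n\nQuestion: {question}")

          # This grounded prompt is what would actually be sent to the model.
          print(build_prompt("What is the capital of France?"))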

  • > By noticing that something is not adding up at a certain point.

    Ah, but information is presented by AI in a way that SOUNDS like it makes absolute sense if one doesn't already know it doesn't!

    And if you have to question the AI a hundred times to try and "notice that something is not adding up" (if it even happens) then that's no bueno.

    > In either scenario, the most correct move is to try clarifying it with the teacher

    A teacher who can randomly give you wrong information every other sentence would be considered a bad teacher.

    • Yeah, they're all thinking that everyone is an academic with hotkeys to Google Scholar for every interaction on the internet.

      Children are asking these things to write personal introductions and book reports.

      2 replies →

    • > Ah, but information is presented by AI in a way that SOUNDS like it makes absolute sense if one doesn't already know it doesn't!

      You have a good point, but I think it only applies when the student wants to be lazy and just wants the answer.

      From what I can see of study mode, it is breaking the problem down into pieces. One or more of those pieces could be wrong. But if you are actually using it for studying then those inconsistencies should show up as you try to work your way through the problem.

      I've had this exact same scenario trying to learn Godot using ChatGPT. I've probably learnt the most from the mistakes it made and from talking through why things weren't working.

      In the end I believe it's really good study practices that will save the student.

    • On the other hand, my favourite recent use of LLMs for study is when the other information on a topic is not adding up. Sometimes all the available material on a topic elides some assumption, so it doesn't seem to make sense, and it can be very hard to piece together for yourself what the gap is. LLMs are great at this: you can explain why you think something doesn't add up, and they will tell you what you're missing.

  • Time to trot out a recent experience with ChatGPT: https://news.ycombinator.com/item?id=44167998

    TBH I haven't tried to learn anything from it, but for now I still prefer to use it as a brainstorming "partner" to discuss something I already have some robust mental model about. This is, in part, because when I try to use it to answer simple "factual" questions as in the example above, I usually end up discovering that the answer is low-quality if not completely wrong.

  • > In either scenario, the most correct move is to try clarifying it with the teacher

    A teacher will listen to what you say, consult their understanding, and say "oh, yes, that's right". But written explanations don't do that "consult their understanding" step: language models either predict "repeat the original version" (if not fine-tuned for sycophancy) or "accept the correction" (if so fine-tuned), since they are next-token predictors. They don't go back and edit what they've already written: they only go forwards (see the sketch below). They have had no way of learning the concept of "informed correction" at the meta-level (at the object level they do, of course, have an embedding of the phrase, and can parrot text about its importance), so they double down on errors and spurious "corrections", and if the back-and-forth moves the conversation into the latent space of "teacher who makes mistakes", then they'll start introducing mistakes "on purpose".

    LLMs are good at what they do, but what they do is not teaching.
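
    To make the "only forwards" point concrete, here is a rough sketch of greedy autoregressive decoding, using GPT-2 via Hugging Face transformers purely as a stand-in (the prompt and step count are my own choices): the model re-reads everything written so far, picks one more token, and appends it; nothing already emitted is ever revised.

        # Greedy next-token decoding: append one token at a time, never edit.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("The teacher explained that", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(15):
                logits = model(ids).logits                # score every candidate next token
                next_id = logits[0, -1].argmax()          # greedily take the most likely one
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append; prior tokens stay frozen
        print(tok.decode(ids[0]))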

  • what are children who don't have those skills yet supposed to do?

    • Same way as before?

      I had school teachers routinely teach me wrong stuff.

      The only way is comparing notes, talking to peers and parents.

      For example: as a kid, a specific science teacher didn't know that the seasons are reversed between hemispheres, and she wrote a note to my parents after I insisted she was wrong. My grandfather, an immigrant, took it upon himself to set her straight.

> You're telling people to be experts before they know anything.

I mean, that's absolutely my experience with heavy LLM users. Incredibly well versed in every topic imaginable, apart from all the basic errors they make.

  • They have the advantage of being able to rectify their errors, and they have a big leg up if they ever decide to specialize.