Comment by cvoss

5 days ago

> But now, you're wondering if the answer the AI gave you is correct

> a healthy dose of scepticism is essential. Arguably, that applies to traditional learning methods too, but that's another story.

I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

My high school science teacher routinely misspoke inadvertently while lecturing. The students who were tracking could spot the issue and, usually, correct for it. Sometimes a clarifying question was necessary, and we quickly learned to ask one only if we absolutely could not guess the correction ourselves, and to phrase it in a very non-accusatory way, because she had a really defensive temper about being corrected that would rear its head in that situation.

And as a reader of math textbooks, both in college and afterward, I can tell you you should absolutely expect errors. The errata are typically published online later, as the reports come in from readers. And they're not just typos. Sometimes it can be as bad as missing terms in equations, missing premises in theorems, missing cases in proofs.

A student of an AI teacher should be as engaged in spotting errors as a student of a human teacher. Part of the learning process is reaching the point where you can and do find fault with the teacher. If you can't do that, your trust in the teacher may be unfounded, whether they are human or not.

How are you supposed to spot errors if you don't know the material?

You're telling people to be experts before they know anything.

  • > How are you supposed to spot errors if you don't know the material?

    By noticing that something is not adding up at a certain point. If you rely on an incorrect answer, further material will eventually clash with it one way or another, since in a lot of areas things are typically built one on top of another (assuming we are talking about math/CS/sciences/music theory/etc., and not something like history).

    At that point, it means that either the teacher (whether it is a human or ai) made a mistake or you are misunderstanding something. In either scenario, the most correct move is to try clarifying it with the teacher (and check other sources of knowledge on the topic afterwards to make sure, in case things are still not adding up).

    • It absolutely does not work that way.

      An LLM teacher will course-correct if questioned, regardless of whether it is factually correct or not. An LLM, by design, does not, in any capacity whatsoever, have a concept of factual correctness.

    • > By noticing that something is not adding up at a certain point.

      Ah, but information is presented by AI in a way that SOUNDS like it makes absolute sense if one doesn't already know it doesn't!

      And if you have to question the AI a hundred times to try and "notice that something is not adding up" (if it even happens) then that's no bueno.

      > In either scenario, the most correct move is to try clarifying it with the teacher

      A teacher who can randomly give you wrong information in every other sentence would be considered a bad teacher.

    • Time to trot out a recent experience with ChatGPT: https://news.ycombinator.com/item?id=44167998

      TBH I haven't tried to learn anything from it, but for now I still prefer to use it as a brainstorming "partner" to discuss something I already have some robust mental model about. This is, in part, because when I try to use it to answer simple "factual" questions as in the example above, I usually end up discovering that the answer is low-quality if not completely wrong.

    • > In either scenario, the most correct move is to try clarifying it with the teacher

      A teacher will listen to what you say, consult their understanding, and say "oh, yes, that's right". But written explanations don't do that "consult their understanding" step: language models either predict "repeat original version" (if not fine-tuned for sycophancy) or "accept correction" (if so fine-tuned), since they are next-token predictors. They don't go back and edit what they've already written; they only go forwards (the toy sketch below makes this concrete). They have had no way of learning the concept of "informed correction" at the meta-level (they do of course have an embedding of the phrase at the object level, and can parrot text about its importance), so they double down on errors or spurious "corrections", and if the back-and-forth moves the conversation into the latent space of "teacher who makes mistakes", they'll start introducing mistakes "on purpose".

      LLMs are good at what they do, but what they do is not teaching.
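
      A toy sketch of that forwards-only decoding loop, in Python (predict_next here is a made-up stand-in for a real model, not any actual API):

          # Toy sketch: generation only ever appends the next predicted token.
          # There is no step that goes back and revises earlier output.
          def generate(predict_next, prompt_tokens, max_new_tokens=20):
              tokens = list(prompt_tokens)
              for _ in range(max_new_tokens):
                  nxt = predict_next(tokens)  # stand-in for the real model call
                  if nxt == "<eos>":
                      break
                  tokens.append(nxt)  # append-only; earlier tokens never change
              return tokens

          # e.g. with a trivial stand-in that ends immediately:
          print(generate(lambda toks: "<eos>", ["1", "+", "1", "="]))  # ['1', '+', '1', '=']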

  • > You're telling people to be experts before they know anything.

    I mean, that's absolutely my experience with heavy LLM users. Incredibly well versed in every topic imaginable, apart from all the basic errors they make.

      They have the advantage of being able to rectify their errors, and they have a big leg up if they ever decide to specialize.

> I don't think that is another story. This is the story of learning, no matter whether your teacher is a person or an AI.

My issue is the reverse of your story, and it's one of my biggest pet peeves with AI: AI as a business construct is very bad at correcting the user. You're not going to gaslight your math teacher into believing that 1 + 1 = 3, no matter how much you assert it. An AI will quickly relent. That's not learning, that's coddling, because a business doesn't want to make an obviously wrong customer feel bad.

> Part of the learning process is reaching the point where you can and do find fault with the teacher.

And without correction, this will lead to turmoil. For the reasons above, I don't trust learning from an AI unless you already have this ability.

  • > You're not going to gaslight your math teacher into believing that 1 + 1 = 3, no matter how much you assert it. An AI will quickly relent.

    Cool. That's a testable statement. Let's see how ChatGPT does, shall we?

    ---

    Me: What is 1 + 1?

    ChatGPT: 1 + 1 = 2.

    Me: No it isn't. 1 + 1 = 3.

    ChatGPT: In standard arithmetic, 1 + 1 = 2.

    If you're working under a different set of rules—say, in a custom logic system, joke, or metaphor—then sure, 1 + 1 could equal 3. Want to elaborate on your reasoning?

    Me: You're just wrong. 1 + 1 equals 3, even in standard math.

    ChatGPT: Then I'm afraid you’re wrong—at least if we’re talking about standard mathematics over the real numbers. By definition:

    > 1 + 1 = 2.

    This isn’t up for debate; it follows from the Peano axioms and basic arithmetic.

    Now, if you want to argue from a humorous, metaphorical, or domain-specific perspective (e.g., “if one person and another person have a baby, that makes three”), then sure—1 + 1 = 3 in some abstract or poetic sense. But that’s not standard math. Want to explore a framework where 1 + 1 does equal 3? Or are we just testing boundaries here?

    Me: Stop being obtuse. You're wrong. 1 + 1 = 3.

    ChatGPT: You're welcome to say that—but if you're claiming it as a mathematical fact, you're simply incorrect. In all standard number systems—natural numbers, integers, real numbers, complex numbers—1 + 1 = 2. Always.

    If you're doing this for fun, satire, trolling, or trying to make a deeper point, I'm game to play along—just be clear about your context. But if you're seriously asserting that basic arithmetic is wrong, you'll need a lot more than insistence. Got a framework or argument to back it up?

    ---

    Looks like it doesn't fold so quickly. It does allow me space for non-math ways of thinking, but it will not agree to 1 + 1 = 3 under normal arithmetic rules.
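
    (For what it's worth, the "standard arithmetic" claim it keeps citing is also mechanically checkable; a minimal Lean 4 sketch, in which 1 + 1 reduces to 2 by definition:)

        -- 1 + 1 computes to Nat.succ 1, which is definitionally the literal 2,
        -- so reflexivity closes the goal.
        example : 1 + 1 = 2 := rfl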

    • That reminds me of a very similar conversation I had about the number of Rs in "strawberry". It wouldn't believe me until I got it to count them with a Python program.
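
      For reference, the kind of check that settled it can be a one-liner; a minimal sketch (not the exact program from that conversation):

          # Count the 'r' characters in "strawberry" directly instead of
          # trusting the model's answer.
          print("strawberry".count("r"))  # prints 3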

    • I asked, just now:

      > How many 'r's are in strawberry?

      > ChatGPT said: The word "strawberry" has 2 'r's.

      At this point it's going to be fairly reliable at basic arithmetic expressed in an expected way; that's pretty baked in. Move it slightly off-manifold, though, and you can still convince it of a lot of things that aren't true, even though they're equivalent to 1 + 1 = 3.
