Comment by augment_me

5 days ago

There is, but there is an equal risk if you were to engage about any topic with any teacher you know. Everyone has a bias, and as long as you don't base your worldview and decisions entirely on one output, you will be fine.

Experimenting with LLMs, I've seen examples like it offering the Cantor set (a totally disconnected topological space) as an example of a continuum immediately after it gives the (correct) definition: a non-empty, compact, connected (Hausdorff) topological space. This is immediately obvious as nonsense if you understand the topic, but for someone attempting to learn from it, it could be very confusing and misleading. No human teacher would do this.
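To spell out the contradiction (a minimal sketch, using the standard middle-thirds construction of the Cantor set $C$):

```latex
% A continuum is a non-empty, compact, connected Hausdorff space.
% The Cantor set C \subset [0,1] is non-empty, compact, and Hausdorff,
% but it is not connected: since (1/3, 2/3) \cap C = \emptyset,
\[
  C \;=\; \bigl(C \cap [0,\tfrac{1}{3}]\bigr) \;\sqcup\; \bigl(C \cap [\tfrac{2}{3},1]\bigr)
\]
% splits C into two non-empty, disjoint, clopen pieces -- a separation.
% In fact every connected component of C is a single point
% (C is totally disconnected), so C fails the definition maximally.
```

So the model's own definition rules out its own example, which is exactly the kind of error that is invisible to a beginner.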

  • I don’t know what any of this means!

But I’m not trying to become an expert in these subjects. If I were, this isn’t the tool I’d use in isolation (which I don’t for these cases anyway).

Part of reading, questioning, interpreting, and thinking about these things is (a) defining concepts I don’t understand and (b) digging into the levels beneath what I might understand.

It doesn’t have to be 100% correct for me to understand the shape and implications of a given study. And I don’t leave any of these interactions thinking, “ah, now I am an expert!”

    Even if it were perfectly correct, neither my memory nor understanding is. That’s fine. If I continue to engage with the topic, I’ll make connections and notice inconsistencies. Or I won’t! Which is also fine. It’s right enough to be net (incredibly) useful compared to what I had before.

It’s my experience that humans are far, far, far more trustworthy about their limitations than LLMs. Obviously, this varies by human.

>but there is an equal risk if you were to engage about any topic with any teacher you know.

No, it isn't.

  • I’ve used LLMs to summarize hundreds of papers. They’ve been more accurate than any teacher I’ve known. Summarizing text is one of their best skills.

It’s only equal if you collapse every outcome into two buckets: some risk and no risk.

And there’s always some risk.

Are you just saying that broadly, e.g. that the original 2022 ChatGPT was also an equal risk if you used it this way?

You won't be able to verify everything you're taught from first principles, so at some point you do have to assign different sources different levels of credibility, I think.