Comment by tough

6 months ago

It can consult any source about any topic. ChatGPT is only as good at teaching as the pupil's ability to ask the right questions, if you ask me.

I like to ask AI systems sports trivia. It's low-stakes, easy to check, and there's a ton of good clean data out there for it.

It sucks at sports trivia. It will confidently return information that is straight up wrong [1]. This should be a walk in the park for an LLM, but it fails spectacularly at it. How is this useful for learning at all?

[1] https://news.ycombinator.com/item?id=43669364

It may well consult any source about the topic, or it may simply make something up.

If you don't know anything about the subject area, how do you know if you are asking the right questions?

  • LLM fans never seem very comfortable answering the question "How do you know it's correct?"

    • I'm a moderate fan of LLMs.

I ask for all claims to be backed by cited evidence, and then I check the citations.

In other cases, like code generation, I ask for a test harness to be written as well, and then I run the tests.

For foreign-language translation (High German to English), I ask for a sentence-by-sentence comparison in the syntax of a diff.
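The test-harness verification approach described above can be sketched roughly like this. Everything here is hypothetical: `slugify` stands in for whatever function the LLM generated, and the key point is that the expected outputs come from the human's own spec, not from the model.

```python
# Hypothetical example: checking LLM-generated code with a small
# hand-written test harness before trusting it.

def slugify(text):
    # Pretend this body came back from the LLM.
    return "-".join(text.lower().split())

# The harness: cases the *human* writes from the spec, not the model.
cases = {
    "Hello World": "hello-world",
    "  extra   spaces ": "extra-spaces",
    "already-slugged": "already-slugged",
}

for raw, expected in cases.items():
    got = slugify(raw)
    assert got == expected, f"{raw!r}: expected {expected!r}, got {got!r}"

print("all cases pass")
```

If the model wrote both the function and the expected outputs, a shared misunderstanding passes silently, which is why the assertions should be derived independently.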