Comment by doodpants
6 hours ago
> One of the ongoing problems in LLM research is how to get these machines to say “I don’t know”, rather than making something up.
To be fair, I've known humans who are like this as well.
This is a limitation of the training data. If you were uncertain about something, you wouldn't write a book about it. The kinds of people you're talking about tend to generate far more text in their lives than others, because they can spend more time generating - writing books, blog posts, whatever - and less time thinking, working, and actually doing things. The models never say they're uncertain because we never say we're uncertain, or at least we never write it down anywhere.
Those people aren't the ones doing the work though.