Comment by killerstorm
7 months ago
That applies only to the most basic use of an LLM: a pre-trained model generating text.
You can do a lot on top of that, e.g. train a linear probe on the model's hidden states to produce a confidence score. It won't be 100% reliable, but it can be reliable enough if you constrain it to a domain like math.
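For illustration, here is a minimal sketch of such a probe, assuming you can extract per-example hidden-state vectors from the model and label each example by whether the model's answer was verified correct; the arrays below are synthetic stand-ins for both:

```python
# Linear probe sketch: logistic regression over (synthetic) LLM hidden states,
# producing a probability that the model's answer is correct.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for hidden states: (n_examples, hidden_dim) activations you would
# extract from some layer of the LLM while it answers math problems.
hidden_dim = 768
X = rng.normal(size=(1000, hidden_dim))

# Stand-in labels: 1 if the answer checked out against ground truth, else 0.
# (Synthetic here: made weakly predictable from a few feature dimensions.)
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# A logistic regression is a linear model, i.e. a linear probe.
probe = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])

# The predicted probability serves as the confidence score.
confidence = probe.predict_proba(X[800:])[:, 1]
print("held-out accuracy:", probe.score(X[800:], y[800:]))
print("sample confidence scores:", confidence[:5].round(3))
```

The point of restricting to a domain like math is that correctness labels are cheap to verify there, so the probe can be trained and evaluated on reliable ground truth.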