Comment by matjet
13 days ago
Look what they need to mimic a fraction of [the power of having the logit probabilities exposed so you can actually see where the model is uncertain]
All the LLM logprob outputs I've seen aren't very well calibrated, at least for transcription tasks; I'd guess it's similar for OCR-type tasks. With reasoning models, the failure mode is roughly:

"I already decided in my private reasoning trace to resolve this ambiguity by emitting the string '27' instead of '22' right here, thus '27' has 100% probability."
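To make that concrete, here's a minimal sketch (with made-up logit values, not from any real model) of why raw logits over an ambiguous token are informative, while the logprob of a token chosen *after* the ambiguity was resolved in a reasoning trace is not:

```python
import math

def softmax(logits):
    # Convert raw logits to probabilities (numerically stable form).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for an ambiguous digit in a transcription/OCR task:
# '27' and '22' are nearly tied, which is exactly the uncertainty you'd
# want surfaced to a human reviewer.
tokens = ["27", "22", "21"]
probs = softmax([2.1, 1.9, -3.0])
for tok, p in zip(tokens, probs):
    print(f"{tok}: {p:.2f}")   # '27' and '22' each get roughly half the mass

# If a private reasoning step has already committed to '27', the final
# decoding distribution can collapse onto that token, so its reported
# logprob no longer reflects the original ambiguity.
resolved = softmax([12.0, 1.9, -3.0])
print(f"resolved '27': {resolved[0]:.3f}")  # near 1.000
```

The point is only that the calibration problem lives in *where* you read the distribution: the pre-decision logits carry the uncertainty, the post-decision token probability mostly doesn't.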