Comment by gardnr
3 days ago
Perplexity: a metric, often used to evaluate LLMs, computed as the exponential of the negative average logprob of the tokens in a test set. Lower perplexity indicates that the model assigns higher probabilities to the observed tokens, reflecting better language modeling.
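As a rough sketch of that formula (assuming you already have per-token natural-log probabilities from the model; the function name and example values are just illustrative):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# Tokens the model found likely give low perplexity; surprising tokens give high perplexity.
print(perplexity([-0.5, -0.3, -0.7]))  # ~1.65
print(perplexity([-3.0, -4.0, -5.0]))  # ~54.6
```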