Comment by nopelynopington
9 days ago
Of course they don't.
LLMs are brainless algorithms that guess the next word. When you ask one what it thinks, it's also just guessing the next word. There's no reason for the two to match, beyond a trick of context.
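The "guess the next word" mechanism the comment describes is autoregressive sampling: the model outputs a probability distribution over the next token, one token is drawn, and the loop repeats. A minimal sketch of that loop, using a hypothetical hand-made bigram table in place of learned transformer weights (the table and its words are made up for illustration; only the shape of the loop matches real LLMs):

```python
import math
import random

# Hypothetical toy "model": unnormalized scores (logits) for the next word
# given the current word. A real LLM computes these with a neural network.
BIGRAM_LOGITS = {
    "<s>": {"the": 2.0, "a": 1.0},
    "the": {"cat": 1.5, "dog": 1.2},
    "a":   {"cat": 1.0, "dog": 1.0},
    "cat": {"sat": 2.0, "</s>": 0.5},
    "dog": {"sat": 1.0, "</s>": 1.0},
    "sat": {"</s>": 3.0},
}

def softmax(logits):
    """Turn raw scores into a probability distribution over next words."""
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def generate(max_len=10, seed=0):
    """Autoregressive loop: sample the next word, append, repeat.

    There is no separate "what do I think" channel -- answering a question
    about itself goes through this same next-word loop.
    """
    rng = random.Random(seed)
    out, cur = [], "<s>"
    for _ in range(max_len):
        probs = softmax(BIGRAM_LOGITS[cur])
        cur = rng.choices(list(probs), weights=list(probs.values()))[0]
        if cur == "</s>":
            break
        out.append(cur)
    return " ".join(out)
```

Every word in the output is drawn from the model's distribution given only the preceding word; nothing in the loop inspects or reports the model's internal state.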