Comment by fennecbutt
14 hours ago
Yes, exactly. But for LLMs it's more that the model isn't really "thinking" about what it's saying per se; it's predicting the next token. Sure, in a super fancy way, but it's still next-token prediction. Context poisoning is real.