Comment by andai
12 hours ago
I remember a while back they found that replacing reasoning tokens with placeholders ("....") also boosted results on benchies.
But does talk like caveman make number go down? Less token = less think?
I've also wondered, given the way LLMs work: if I ask an AI a question using fancy language, does that make it pattern-match to scientific literature, and therefore increase the probability that the output will be true?