Comment by tardedmeme
3 hours ago
I think it's funny how we are all tweaking LLM output by adding instructional tokens instead of, say, finding a vector that indicates "user asked for emojis" and forbidding emoji tokens during sampling unless that vector passes a threshold.
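A minimal sketch of the idea, assuming a linear probe direction has already been learned on hidden states; the vocabulary size, hidden dimension, emoji token ids, and threshold below are all hypothetical placeholders, not from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 100
EMOJI_TOKEN_IDS = [90, 91, 92]   # hypothetical ids for emoji tokens
HIDDEN_DIM = 16
THRESHOLD = 0.5                  # would be tuned on held-out examples

# A "user asked for emojis" direction, e.g. from a linear probe on
# hidden states; random here purely for illustration.
probe_direction = rng.standard_normal(HIDDEN_DIM)
probe_direction /= np.linalg.norm(probe_direction)

def filter_logits(logits, hidden_state):
    """Forbid emoji tokens unless the probe fires on the hidden state."""
    score = float(hidden_state @ probe_direction)
    if score < THRESHOLD:
        logits = logits.copy()
        logits[EMOJI_TOKEN_IDS] = -np.inf  # zero probability after softmax
    return logits

# Demo: a hidden state aligned with the probe keeps emoji tokens
# available; an anti-aligned one masks them out.
logits = rng.standard_normal(VOCAB_SIZE)
wants_emoji = probe_direction * 2.0   # dot product = 2.0, above threshold
no_emoji = -probe_direction           # dot product = -1.0, below threshold

allowed = filter_logits(logits, wants_emoji)
blocked = filter_logits(logits, no_emoji)
```

The same check could run at every decoding step, so the constraint follows the model's internal state rather than whatever instruction tokens happen to be in the prompt.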