Comment by DiogenesKynikos
6 hours ago
First, they do follow instructions most of the time, and the leading models get better at it month by month.
Second, whether they're perfect at following commands is beside the point. They're not just "predicting tokens," in the same way you're not just "sending electrochemical signals." LLMs think, solve problems, answer questions, write code, etc.