Comment by wat10000
21 days ago
LLMs literally are just predicting tokens with a probabilistic model. They're incredibly complicated and sophisticated models, but they are still just incredibly complicated and sophisticated models for predicting tokens. It's perhaps unexpected that such a thing can do summarization, but it demonstrably can.
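To make the claim concrete, here is a minimal sketch of the loop being described, with a toy bigram count table standing in for the neural network (the corpus and names are made up for illustration): the "probabilistic model" assigns a distribution over possible next tokens, and generation just samples from it repeatedly.

    import random
    from collections import Counter, defaultdict

    # Toy "training" corpus; a real LLM trains a large neural network on far more text.
    corpus = "the cat sat on the mat and the cat slept".split()

    # "Train" a bigram model: count which token follows which.
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def predict_next(token):
        # Sample from the conditional distribution P(next | token).
        counts = next_counts.get(token)
        if not counts:
            return None  # dead end in this toy model
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights)[0]

    # Generation is nothing but repeated next-token prediction.
    out = ["the"]
    for _ in range(8):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    print(" ".join(out))

A real LLM differs only in how the distribution is computed (a transformer conditioned on the whole context rather than one preceding token); the generation loop has the same shape.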
The rub is that we don't know whether intelligence is anything more than "just predicting the next output".
I think we do.
That's just what you were most likely to say...