Comment by DiogenesKynikos
15 hours ago
> They predict text not obey our orders.
Those are the same thing in this case. The latter is just an extremely reductionist description of the mechanics behind the former.
15 hours ago
> > They predict text not obey our orders.

> Those are the same thing in this case. The latter is just an extremely reductionist description of the mechanics behind the former.
They are not in fact the same thing, and the difference is important.
They are certainly marketed as if they think, learn, and follow orders, but they do not.
The result of "predicting text" is that they obey orders, just like the result of "random electrochemical impulses in synapses" is that you typed your comment.
You can always reduce high-level phenomena to lower-level mechanisms. That doesn't mean that the high-level phenomenon doesn't exist. LLMs are obviously able to understand and follow instructions.
> The result of "predicting text" is that they obey orders
And yet quite a lot of the time they don't, and in ways that are hard to predict and sometimes hard even to notice (their errors can be consequential yet subtle).
They're simply not reliable enough to treat as independent agents, and this story is a good example of why not.