Comment by msvana
6 months ago
This reminds me of the idea that LLMs are simulators. Given the current state (the prompt + the previously generated text), they generate the next state (the next token) using rules derived from training data.
As simulators, LLMs can simulate many things, including agents that exhibit human-like properties. But LLMs themselves are not agents.
More on this idea here: https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-s...
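To make the simulator framing concrete, here is a minimal sketch of the state-transition loop described above. The "rules derived from training data" are stubbed with a toy bigram table purely for illustration; a real LLM replaces that table with a neural network conditioned on the entire state, but the outer loop is the same.

```python
import random

# Toy stand-in for "rules derived from training data": a bigram table mapping
# the last token of the state to a distribution over next tokens. This is an
# illustrative assumption, not how any particular LLM stores its knowledge.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def transition(state):
    """One simulator step: map the current state to the next token."""
    dist = BIGRAMS.get(state[-1], {})
    if not dist:
        return None  # no known continuation; the simulation halts
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def simulate(prompt, max_steps=10):
    """Roll the simulator forward: the state is the prompt plus everything generated so far."""
    state = list(prompt)
    for _ in range(max_steps):
        token = transition(state)
        if token is None:
            break
        state.append(token)  # the next state includes the newly generated token
    return " ".join(state)

print(simulate(["the", "cat"]))  # e.g. "the cat sat down"
```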
This perspective makes a lot of sense to me. Still, I wouldn't avoid anthropomorphization altogether. First, in some cases it can be a useful mental tool for understanding certain aspects of LLMs. Second, there is a lot of uncertainty about how LLMs work, so I would stay epistemically humble. That second argument cuts both ways: for example, it's equally bad to claim that LLMs are 100% conscious.
On the other hand, when arguing against anthropomorphizing LLMs, I would avoid phrasing it as "it's just matrix multiplication." The article demonstrates pretty well why that's a bad idea.