Comment by tovej

16 days ago

It's especially important not to anthropomorphise when there is a risk that people will actually mistake something for a humanlike being.

What is least helpful is using misleading terms like this, because it makes reasoning about the system more difficult. If we assume the model "knows" something, we might reasonably assume it will always act according to that knowledge. That's not true for an LLM, so the term should clearly be avoided.