
Comment by NitpickLawyer

11 hours ago

Why, though? Just because some people would find it odd? Who cares?

Trying to limit or disallow something seems to hurt the overall accuracy of models. And it makes sense if you think about it. Most of our long-horizon content comes in the form of novels and longer works. If you try to clamp the machine to machine-speak, you'll lose all of those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.

Is it? I don't think most of the content LLMs are trained on is written in the first person. Wikipedia, news articles, and other informational articles aren't written in the first person. Most novels, or at least a substantial portion of them, aren't written in the first person either.

LLMs write in the first person because they have been specifically fine-tuned for a chat task; it's not a fundamental feature of language models that would have to be explicitly disallowed.