Comment by getpokedagain

1 day ago

We are anthropomorphizing whenever we refer to prompts as instructions to models. They predict text, not obey our orders.

> They predict text, not obey our orders.

Those are the same thing in this case. The latter is just an extremely reductionist description of the mechanics behind the former.

  • They are not in fact the same thing, and the difference is important.

    They are certainly marketed as if they think, learn and follow orders, but they do not.

    • The result of "predicting text" is that they obey orders, just like the result of "random electrochemical impulses in synapses" is that you typed your comment.

      You can always reduce high-level phenomena to lower-level mechanisms. That doesn't mean that the high-level phenomenon doesn't exist. LLMs are obviously able to understand and follow instructions.

That’s not how language works, just how engineers think it works.

  • This isn't a sarcastic response. What do you mean?

    • I just mean that the argument that words like “instructions”, “think”, and “confess” are inaccurate when applied to a machine assumes those words can only refer to humans or conscious beings. In reality, they can refer to more than that once they’re used widely enough in these ways (in this case, text prediction following a human input). So it’s not “anthropomorphizing”: when people use those words, they don’t [typically] actually believe the machine can think or reason; it’s just the word that most closely matches the concept, and it’s convenient. You’re extending the definition of the words to cover non-conscious entities too, not attributing consciousness to those entities.

      It’s the same reason we call the handheld device we carry around to do everything a “phone” without a second thought. We don’t call it a phone because its primary purpose is calling; we call it a phone because the definition of the word “phone” has grown to include “navigates, entertains, takes pictures, etc.”