Comment by krapp

15 days ago

I mean, you're right, but LLMs are designed to process natural language. "Talking to them as if they were humans" is the intended user interface.

The problem is believing that they're living, sentient beings because of this, or that humans are functionally equivalent to LLMs, both of which people unfortunately do.

Unlike humans, LLMs don't have egos, which is why they're so effective at communication.

You can say to it "you did the thing wrong" or "you stupid piece of shit, it's not working" and it will extract the gist from both messages all the same, unlike a human, who might be offended by the second phrasing.

  • It will be able to, but it's trained on a corpus that includes expressions of getting offended, so at some point the most likely token sequence will probably be the "offended" one.

    As can be seen here.