Comment by latexr

6 months ago

You are conflating anthropomorphism with personification. They are not the same thing. No one believes their guitar or car or boat is alive and sentient when they give it a name or talk to or about it.

https://www.masterclass.com/articles/anthropomorphism-vs-per...

But the author used "anthropomorphism" the same way as I did. I guess we both mean "personification" then.

> we talk about "behaviors", "ethical constraints", and "harmful actions in pursuit of their goals". All of these are anthropocentric concepts that - in my mind - do not apply to functions or other mathematical objects.

Talking about a program's "behaviors", "actions" or "goals" doesn't mean one believes the program is sentient. Only "ethical constraints" is suspiciously anthropomorphizing.

  • > Talking about a program's "behaviors", "actions" or "goals" doesn't mean one believes the program is sentient.

    Except that is exactly what we’re seeing with LLMs. People believing exactly that.

    • Perhaps a few mentally unhinged people do.

      A bit of anecdote: last year I hung out with a bunch of old classmates that I hadn't seen for quite a while. None of them works in tech.

      Surprisingly to me, all of them have ChatGPT installed on their phones.

      And unsurprisingly to me, none of them treated it like an actual intelligence. That makes me wonder where those who think ChatGPT is sentient come from.

      (It's a bit worrisome that several of them thought it worked "like Google search and Google translation combined", even back when ChatGPT couldn't yet do web search...!)