Comment by raincole

6 months ago

I've said it before: we have been anthropomorphizing computers since the dawn of the information age.

- Read and write - Behaviors that separate humans from animals. Now used for input and output.

- Server and client - Human social roles. Now used to describe network architecture.

- Editor - Human occupation. Now a kind of software.

- Computer - Human occupation!

And I'm sure people referred to their cars and ships as 'her' before the invention of computers.

You are conflating anthropomorphism with personification. They are not the same thing. No one believes their guitar or car or boat is alive and sentient when they give it a name or talk to or about it.

https://www.masterclass.com/articles/anthropomorphism-vs-per...

  • But the author used "anthropomorphism" the same way as I did. I guess we both mean "personification" then.

    > we talk about "behaviors", "ethical constraints", and "harmful actions in pursuit of their goals". All of these are anthropocentric concepts that - in my mind - do not apply to functions or other mathematical objects.

    Talking about a program's "behaviors", "actions", or "goals" doesn't mean one believes the program is sentient. Only "ethical constraints" is suspiciously anthropomorphizing.

    • > Talking about a program's "behaviors", "actions", or "goals" doesn't mean one believes the program is sentient.

      Except that is exactly what we’re seeing with LLMs: people believing exactly that.


I'm not convinced... we use these terms to assign roles, yes, but those roles describe a utility or assign a responsibility. That isn't anthropomorphizing anything; rather, it describes the use of an inanimate object as a tool for humans, which seems in line with history.

What's the utility or responsibility of AI, and what's its use as a tool? If you ask me, it should be closer to serving insights than to "reasoning thoughts".