Comment by jgeada
18 hours ago
Any set of rules that makes humans responsible and starts with "don't anthropomorphize <whatever>" is a broken set of rules.
Humans will anthropomorphize anything and everything. Dolls, soccer balls with a crude drawing of a face on them, rocks, craters on the moon, …
As a species, we're unable to not anthropomorphize things we interact with; it is just how we're made.
I'm not sure why so many seem to think anthropomorphism is so bad in this specific instance. If it is because people think that anthropomorphism creates a belief that the imagined features are real, they are simply wrong. The abundance of examples in all areas of life where this does not happen is proof that anthropomorphism does not lead to an erroneous belief in a mind that does not exist.
If people are believing in minds of AI, true or not, they are doing so for reasons that are different from mere anthropomorphism.
To me it feels like we are like sailors approaching a new land, we can see shapes moving on the shoreline but can't make out what they are yet. Then someone says "They can't be people, I demand that we decide now that they are not people before we sail any closer."
People who anthropomorphize a rock don’t actually think it’s intelligent and has emotions.
Yeah, we do it, but so what? A good chunk of all civilization involves recognizing human foolishness and building something to mitigate it anyway.
Software is no exception. Yes, people are lazy and will instinctively click "continue" to dismiss annoying popups, but the humans building the software can and do add safeguards like "retype the volume name of the data that you want ultra-destroyed."
That is exactly the point: this burden should be placed on the software and its controls, not on the humans.
Aviation learned this the hard way: automation should be adapted to how humans actually work, not to how we wish we worked.
Sorry, I interpreted your post as "this is inevitable and pointless to try to stop."