Comment by zephen

3 days ago

Stopping the anthropomorphization of AIs is kind of like fighting a trademark battle. Every time a perceived misuse is noticed, action must be taken!

The difference is that the action is taken, for free, by a concerned citizen, rather than by a corporate lawyer.

The outcome will be the same. Xerox and Kleenex are practically public domain, and AIs will be anthropomorphized.

Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

The only thing you can infer from the struggle is that AIs are deep in the uncanny valley for some people.

> Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

This is true, but there's a big difference between "My car decided not to start" and "The computer wrote a hit piece about me". In reality, both events involved the same amount of intention, but to laypeople they are two very different things. Educating people about those differences (and very intentionally not blurring the lines) can only be a good thing.

  • So I've been reading up on what the philosophers and scientists have been saying this past century or so on this very topic. I think the layman is wise to steer clear. It's a war out there.

    The one thing I can tell you with certainty: If anyone is claiming certainty, they're hallucinating harder than the AI :-P (is also what I tell lay people).

    Turns out, hilariously, Claude's much criticized "I don't know" is actually epistemically the most honest (tracing from Chalmers).

    [ semi randomly: I'm especially frustrated at psychology papers at the moment. I can't find a good continuous measure for affect. Almost all the protocols use discrete buckets :-/ ]

To amplify:

It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or, heaven forbid, a human's skull? "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

Yeah, to heck with that. If you're one of those people (and you know who you are), you're overcompensating. We're going to need a root cause analysis: pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things for real too. Scrap metal, not humans!).

Just to be sure, in this particular case with the Openclaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They do have at least partial culpability here, but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyze the safety implications of the system design in itself.

Shambaugh did a good job here. Even the Operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit among the scorn to the latter.

(Finally, note that there are probably 2.5 million of these systems out there now and counting, most of them, seemingly, operated by more responsible people. Let's hope.)