Comment by Kim_Bruning
4 days ago
To amplify:
It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or -heaven forbid- a human's skull? "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."
Yeah, to heck with that. If you're one of those people (and you know who you are): you're overcompensating. We're going to need a root cause analysis: pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things -scrap metal, not humans!- for real too).
Just to be clear: in this particular case with the Openclaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They bear at least partial culpability here; but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyze the safety implications of the system design in itself.
Shambaugh did a good job here. Even the operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit amid the scorn for the latter.
(Finally, note that there are probably 2.5 million of these systems out there now and counting, most -seemingly- operated by more responsible people. Let's hope.)
> "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."
It's not the operator who's to blame, it's whoever made the decision to build a skull-smashing machine whose only safety interlock is a poor operator with an e-stop. The world has gone insane, and personifying these AI systems is a way to shift blame from the decision makers to "shit happens, shrug". That's what we should be fighting back against.
Seriously, that's not how you investigate incidents.
For one, there's no single executive who pushes a red button marked "Deploy The Skull-Splitter". Rather the opposite, in fact, especially in e.g. German industry, where people very much care about and demand proper adherence to safety.
Assuming good faith: sometimes the holes in the Swiss cheese line up [1].
Advanced safety and reliability cultures don't look for people to blame [2][3]. Your first goal is to find the causes and fix them. Occasionally, someone does deserve blame (due to e.g. malice or gross negligence), in which case you then get to blame them.
[1] https://en.wikipedia.org/wiki/Swiss_cheese_model
[2] https://en.wikipedia.org/wiki/Just_culture ; https://www.faa.gov/about/initiatives/cp (FAA Just Culture)
[3] https://www.atlassian.com/incident-management/postmortem/bla... ; https://sre.google/sre-book/postmortem-culture/ (Atlassian; Google SRE)
Advanced safety and reliability cultures also don't choose technologies that are unpredictable and misunderstood. Nothing is safe or reliable about these systems.
All excellent points.
Unfortunately, your most excellent point:
> Policing language is not productive
goes against the grain here. Policing language is the one thing our corporate overlords have gotten the right and the left to agree on. (Sure, they disagree on the details, but the First Amendment is in graver danger now than it has been for a long time.)
https://www.durbin.senate.gov/newsroom/press-releases/durbin...