Comment by Kim_Bruning
1 day ago
Seriously, that's not how you investigate incidents.
For one, there's no single executive who pushes a red button marked "Deploy The Skull-Splitter". Quite the opposite, in fact, especially in, e.g., German industry, where people care deeply about safety and demand proper adherence to it.
Assuming good faith: sometimes the holes in the Swiss cheese just line up. [1]
Advanced safety and reliability cultures don't look for people to blame [2][3]. The first goal is to find the causes and fix them. Occasionally someone does deserve blame (due to, e.g., malice or gross negligence), and in that case you get to blame them.
[1] https://en.wikipedia.org/wiki/Swiss_cheese_model
[2] https://en.wikipedia.org/wiki/Just_culture ; https://www.faa.gov/about/initiatives/cp (FAA Just Culture)
[3] https://www.atlassian.com/incident-management/postmortem/bla... (Atlassian) ; https://sre.google/sre-book/postmortem-culture/ (Google SRE)
Advanced safety and reliability cultures also don't choose technologies that are unpredictable and misunderstood. Nothing is safe or reliable about these systems.
Absolutely; if you're deploying experimental systems: do your homework and assess the risks, get consent from the human participants, and stay in constant communication. If Openclaw's operator here had done that from the start, things would have gone very differently.
In fact, you can imagine that if we built up a just culture around the deployment of semi-autonomous agents like this, the operator wouldn't have had to remain anonymous in the first place. Best practices help everyone.