Comment by TeMPOraL

2 days ago

Theoretically, the power drill you're using can spontaneously explode, too. It's very unlikely, but possible - and then you're much more likely to hurt yourself or destroy your work if you aren't paying attention and haven't set up your work environment properly.

The key to using AI for sysadmin work is the same as for operating a power drill: pay at least minimal attention, and arrange things so that, in the event of a problem, you can easily recover from the damage.

It’s easy for people to understand that if they point a power drill at a wall, the failure modes might include drilling through a pipe or a wire, or that a power drill should not be used for food preparation or dentistry.

People, in general, have no such physical instincts for how using computer programs can go wrong.

  • Which is in part why rejecting anthropomorphic metaphors is a mistake this time. Treating LLM agents as gullible but extremely efficient idiot savants on a chip gives pretty good intuition for the failure modes.

If a power tool blows up regularly, the manufacturer gets sued or there is a recall.

We have far stricter rules about harm from physical goods, which we have long experience with, than from generative tools.

There is no reason generative tools should not be governed by similar rules.

I suspect people at Anthropic would agree with this, because it would also ensure the incentives are similar for all major GenAI purveyors.