This isn't a lock
It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.
If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.
Given the increasing use of them as agents rather than simple generators, I suggest a better analogy than "hammer" is "dog".
Here are some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991
How many people do dogs kill each year, in circumstances nobody would justify?
How many people do frontier AI models kill each year, in circumstances nobody would justify?
The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed – when a dog kills a three-year-old, nobody calls that a good thing or even the lesser evil.
This view is too simplistic. AIs could enable someone with moderate knowledge to create chemical and biological weapons, sabotage firmware, or write highly destructive computer viruses. To some extent, uncontrolled AI has the potential to give people destructive skills that are normally rare and tightly controlled. The hammer analogy doesn't really fit.
Why would we lock ourselves out of our own house though?
How is it related? I don't need a lock for myself; I need it for others.
The analogy should be obvious – a model refusing to perform an unethical action is the lock against others.
But "you" are the "other" for someone else.
Can you give an example where I should care about a lock on other adults? Before you say images or porn, those were always possible to make without AI.