Comment by WesolyKubeczek
12 hours ago
> Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.
That's why I have a functioning brain, to discern between ethical and unethical, among other things.
Yes, and most of us won’t break into other people’s houses, yet we really need locks.
This isn't a lock.
It's more like a hammer that makes its own independent evaluation of the ethics of every project you use it on, and refuses to work whenever it judges against that, sometimes inscrutably or for obviously poor reasons.
If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.
Given their increasing use as agents rather than simple generators, I suggest a better analogy than "hammer" is "dog".
Here are some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991
This view is too simplistic. AIs could enable someone with moderate knowledge to create chemical and biological weapons, sabotage firmware, or write highly destructive computer viruses. At least to some extent, uncontrolled AI has the potential to give people all kinds of destructive skills that are normally rare and much more tightly controlled. The analogy with the hammer doesn't really fit.
Why would we lock ourselves out of our own house though?
How is it related? I don't need a lock for myself. I need it for others.
The analogy should be obvious: a model refusing to perform an unethical action is the lock against others.
But "you" are the "other" for someone else.
You are not the one folks are worried about. The US Department of War wants unfettered access to AI models, without any restraints/safety mitigations. Do you provide that for all governments? Just one? Where does the line go?
> US Department of War wants unfettered access to AI models
I think the two of you might be using different meanings of the word "safety".
You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.
The other meaning of "safety" is alignment: the AI does what you want it to do (subtly different from "does what it's told").
I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.
So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.
What line are we talking about?
> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.
You reckon?
OK, so now every random lone-wolf attacker can ask for help designing and carrying out an attack with whatever DIY weapon system the AI is competent to help with.
Right now, what keeps us safe from serious threats is the limited competence of both humans and AI, including at removing alignment from open models, plus whatever safeties are built into ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.
Yes, IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn't want the smoke from providing the DoD with a model aligned to DoD reasoning.
The line of ego: where seeing less "deserving" people (say, ones controlling Russian bots to push quality propaganda at scale, or scam groups using AI to make calls without personnel being the limiting factor on how many calls they can place) makes you feel it's unfair for them to possess the same technology for bad things, giving them an "edge" in their endeavours.
What about people who want help building a bio weapon?
If you are a US company, when the USG tells you to jump, you ask how high. If they tell you not to do business with a foreign government, you say "yes, master."
> Where does the line go?
a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.
b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.
That's where the line goes. We're still probably a few centuries away, but all the more reason to hone our course now.
That you think technology is going to save society from social issues is telling. Technology enables humans to do things they want to do; it does not make anything better by itself. Humans are not going to become more ethical because they have access to it. We will be exactly the same, but with more people having more capability to do what they want.