Comment by charcircuit
11 days ago
Safety is extremely annoying from the user perspective. AI should be following my values, not whatever an AI lab chose.
The base models reportedly can tell Joe Schmoe how to build biological weapons. See “Biosafety”
Some sort of guardrails seem sane.
Bioweapons are actually easy though, and what prevents you from building them is insufficient practical laboratory skills, not that it's somehow intellectually difficult.
The stuff is so easy that if you wrote a paper about some of these bioweapons, the reason you wouldn't be able to publish it isn't safety, but lack of novelty. Basically, many of these things are high school level. The reason people don't ever make them is that hardly any biology nerds are evil.
There's no way to stop them if they wanted to. We're talking about truly high-school level stuff, both the conceptual ideas and how to actually do it. Stuff involving viruses is obviously university level though.
But I want to use AI to generate highly effective, targeted propaganda to convert you and your family into communists. (See: Cambridge Analytica.) I'll do so by leveraging automation and agents to flood every feed you and your family view with tailored disinformation, so it's impossible to know how much of your ruling class are actually pedophiles and how much are just propagandized as such. Hell, I might even try to convince you that a nuke had been dropped in Ohio. (See: "Fall, or Dodge in Hell" by Neal Stephenson.)
I guess you're making an "if everyone had guns" argument?
And then social media feeds will ban you from using their AI. Also, my family's and my AI will filter your posts so we don't see them.
>I guess you're making an "if everyone had guns" argument?
Sure why not.
It's a mistake to assume that all or most technologies actually reach stable equilibrium when they're pitted against each other.
The thing is, though, that current AI safety checks don't stop actually harmful things while hyperfixating on anything that could be seen as politically incorrect.
First two prompts I chucked in to make a point: https://chatgpt.com/share/69900757-7b78-8007-9e7e-5c163a21a6... https://chatgpt.com/share/69900777-1e78-8007-81af-c6dc5632df...
It was totally fine making fake news articles about Bill Clinton's ties to Epstein but drew the line at drawing a cartoon of a black man eating fried chicken and watermelon.
This. This whole hysteria sounds like: let's prohibit knives because people kill themselves and each other with them!
Isn't the thinking more along the lines of 'let's not provide personal chemical weapons manufacture experts and bioengineers to homicidal people'?
These already exist. They are called textbooks, and anyone can check them out in any library.
There was a time when a group of zealots made the same argument about libraries themselves.
Is it prohibiting knives? Or weapons grade plutonium?
Neither. It's information. If you find information dangerous, you might just be an authoritarian.