Comment by codyvoda
1 day ago
^I like email as an analogy
if I send a death threat over gmail, I am responsible, not google
if you use LLMs to make bombs or spam hate speech, you’re responsible. it’s not a terribly hard concept
and yeah “AI safety” tends to be a joke in the industry
What if I ask it for something fun to make because I'm bored, and the response is bomb-building instructions? There isn't a (sending) email analogue to that.
In what world would it respond with bomb building instructions?
If I were to make a list of fun things, I think that blowing stuff up would feature in the top ten. It's not unreasonable that an LLM might agree.
if it used search and ingested a malicious website, for example.
Why that might happen is not really the point, is it? If I ask for a photorealistic image of a man sitting at a computer, a priori I might think 'in what world would I expect seven fingers and no thumbs per hand', alas...
There's more than one way to view it. Determining who has responsibility is one. Simply wanting there to be fewer causal factors which result in death threats and bombs being made is another.
If I want there to be fewer[1] bombs, examining the causal factors and effecting change there is a reasonable position to hold.
1. Simply fewer; don't pigeonhole this into zero.
> if you use LLMs to make bombs or spam hate speech, you’re responsible.
What if LLMs make it so much easier to make bombs or spam hate speech that it DDoSes law enforcement and the other mechanisms that otherwise prevent bombings and harassment? Is there any place for regulation limiting the availability or capabilities of tools that make crimes vastly easier and more accessible than they would otherwise be?
The same argument could be made about computers. Do you prefer a society where CPUs are regulated like guns and you can't buy anything freer than an iPhone off the shelf?
I mean, this stuff is so easy to do though. An extremist doesn't even need to make a bomb; he/she already drives a car that can kill many people. In the US it's easy to get a firearm that could do the same. If capacity + randomness were a sufficient model for human behavior, we'd never gather in crowds, since a solid minority would be rammed, shot up, bombed, etc. People don't want to do that stuff; that's our security. We can prevent some of the most egregious examples with censorship and banning, but what actually works is the fuzzy shit: give people opportunities, social connections, etc., so they don't fall into extremism.
Or alternatively, if I cook myself a cake and poison myself, I am responsible.
If you sell me a cake and it poisons me, you are responsible.
So if you sell me a service that comes up with recipes for cakes, and one is poisonous?
I made it. You sold me the tool that “wrote” the recipe. Who’s responsible?
The seller of the tool is responsible. If they say it can produce recipes, they're responsible for ensuring the recipes it gives someone won't cause harm. If it does cause harm, that can fall under different categories depending on the laws of the country/state: willful negligence, false advertising, etc.
IANAL, but I think this is similar to the Red Bull "gives you wings" case, the Monster Energy death cases, etc.
Sure, I may be responsible, but you’d still be dead.
I’d prefer to live in a world where people just didn’t go around making poison cakes.
It's a hard concept in all kinds of scenarios. If a pharmacist sells you large amounts of pseudoephedrine, which you're secretly using to manufacture meth, which of you is responsible? It's not an either/or, and we've decided as a society that the pharmacist needs to shoulder a lot of the responsibility by putting restrictions on when and how they'll sell it.
sure, but we’re talking about literal text, not physical drugs or bomb-making materials. censorship is silly for LLMs and “jailbreaking” as a concept for LLMs is silly. this entire line of discussion is silly
Except it’s not, because people are using LLMs for things, thinking they can put guardrails on them that will hold.
As an example, I’m thinking of the car dealership chatbot that gave away $1 cars: https://futurism.com/the-byte/car-dealership-ai
If these things are being sold as things that can be locked down, it’s fair game to find holes in those lockdowns.
This assumes people are responsible and acting in good faith. But how many of the gun victims each year would be dead if there were no guns? How many radiation victims would there be without the invention of nuclear bombs? Safety is indeed a property of knowledge.
Just imagine how many people would not die in traffic incidents if the knowledge of the wheel had been successfully hidden?
Nice try, but the causal chain isn't as simple as wheels turning → dead people.
If someone wants to make a bomb, ChatGPT saying "sorry, I can't help with that" won't prevent them from finding out how to make one.
Sure, but if ten thousand people might sorta want to make a bomb for like five minutes, ChatGPT saying "nope" might deter 9,999 of them, and if even one percent of those would have followed through, that's a hundred fewer bombings.
That's really not true; by that logic LLMs provide no value, which is obviously false.
It's one thing to spend years studying chemistry; it's another to receive a tailored instruction guide in thirty seconds. It will even instruct you on how to dodge detection by law enforcement, which a chemistry degree will not.