Comment by natrius
2 days ago
An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.
That's not even considering tool use!
Part of the problem is the marketing of LLMs as more capable and trustworthy than they really are.
And the safety testing actually makes this worse, because it leads people to trust that LLMs are less likely to give dangerous advice, when they could still do so.
Spend 15 minutes talking to a person in their 20s about how they use ChatGPT to work through issues in their personal lives and you'll see how much they already trust the "advice" and other information produced by LLMs.
Manipulation is a genuine concern!
Netflix needs to do a Black Mirror episode where a sentient AI pretends it's "dumber" than it is while secretly plotting to overthrow humanity. Either that, or an LLM is hacked by deep-state actors and starts giving similarly manipulated advice.
2 replies →
It's not just young people. My boss (originally a programmer) agreed with me that there are lots of problems using ChatGPT for our products and programs, as it gives wrong answers too often, but then 30 seconds later told me it was apparently great at giving medical advice.
...later someone higher-up decided that it's actually great at programming as well, and so now we all believe it's incredibly useful and necessary for us to be able to do our daily work
8 replies →
Can you point to a specific bit of marketing that says to take whatever medications a LLM suggests, or other similar overreach?
People keep talking about this “marketing”, and I have yet to see a single example.
This is analogous to saying a computer can be used to do bad things if it is loaded with the right software. Coincidentally, people do load computers with the right software to do bad things, yet people are overwhelmingly opposed to measures that would stifle such things.
If you hook up a chat bot to a chat interface, or add tool use, it is probable that it will eventually output something that it should not and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.
> This is analogous to saying a computer can be used to do bad things if it is loaded with the right software.
It's really not. Parent's examples are all out-of-the-box behavior.
As the runtime of any program approaches infinity, the probability of the program behaving in an undesired manner approaches 1.
That is not universally true. The yes program is a counterexample:
https://www.man7.org/linux/man-pages/man1/yes.1.html
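For the record, yes(1)'s entire behavior fits in a few lines. Here's a rough Python sketch (an approximation of the idea, not the GNU source): its state space is so small that running it forever never produces anything unexpected.

```python
# Rough approximation of yes(1): print "y" (or the given words) forever.
# A sketch of the behavior, not the GNU implementation.
import sys

def main() -> None:
    word = " ".join(sys.argv[1:]) or "y"
    while True:          # runs forever, but its output never deviates
        print(word)

if __name__ == "__main__":
    try:
        main()
    except (KeyboardInterrupt, BrokenPipeError):
        pass
```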
3 replies →
Society has accepted that computers bring more benefit than harm, but LLMs could still get pushback due to bad PR.
PDFs can do this too.
In such a case, the author of the PDF can be held responsible.
Radical idea: let’s hold the reader responsible for the actions they take from the material.
4 replies →
Twitter does it at scale.
Yes, and a table saw can take your hand. As can a whole variety of power tools. That does not render them illegal to sell to adults.
It does render them illegal to sell without studying their safety.
No but they have guards on them.
An interesting comparison.
Table saws sold all over the world are inspected and certified by trusted third parties to ensure they operate safely. They are illegal to sell without the approval seal.
Moreover, table saws sold in the United States & EU (at least) have at least 3 safety features (riving knife, blade guard, antikickback device) designed to prevent personal injury while operating the machine. They are illegal to sell without these features.
Then of course there are additional devices like SawStop, but those are not mandatory yet as far as I'm aware. Should be in a few years though.
LLMs have none of those certification labels or safety features, so I'm not sure what your point was exactly?
They are somewhat self-regulated, as they can cause permanent damage to the company that releases them, and they are meant for general consumers without any training, unlike table saws that are meant for trained people.
An example is the first Microsoft chatbot, which started going extreme right-wing once people realized how to push it in that direction. Grok had a similar issue recently.
Google had racial issues with its image generation (and earlier with image detection). Again something that people don't forget.
Also, a recent OpenAI 4o release was encouraging people's stupid ideas when they asked stupid questions, and they had to roll it back.
Of course I'm not saying that's the real reason (somehow they never cite performance problems as the reason for not releasing something), but safety matters with consumer products.
1 reply →
An LLM is not gonna chop off your limb. You can’t use it to attack someone.
2 replies →
> An LLM can trivially make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated
This is an extraordinary claim.
I trust that the vast majority of people are good and would ignore such garbage.
Even assuming that an LLM can trivially build a compelling case to convince someone who is not already murderous to go on a killing spree to kill a large group of people, one killer has limited impact radius.
For contrast, many books and religious texts have vastly more influence and convincing power over huge groups of people. And they have demonstrably caused widespread death or other harm. And yet we don’t censor or ban them.
> An LLM can trivially instruct someone to take medications with adverse interactions,
What’s an example of such a medication that does not require a prescription?
How about just telling people that drinking grapefruit juice with their liver medicine is a good idea and to ignore their doctor.
> liver medicine
What is an example liver medicine that does not require a prescription?
Tylenol.
> Tylenol
This drug comes with warnings: “Taking acetaminophen and drinking alcohol in large amounts can be risky. Large amounts of either of these substances can cause liver damage. Acetaminophen can also interact with warfarin, carbamazepine (Tegretol), and cholestyramine. It can also interact with antibiotics like isoniazid and rifampin.”
It is on the consumer to read it.
Oil of wintergreen?
Yeah, give it access to some bitcoin and the internet, and it can definitely cause deaths.
The problem is “safety” prevents users from using LLMs to meet their requirements.
We typically don’t critique the requirements of users, at least not in functionality.
The marketing angle is that this measure is needed because LLMs are “so powerful it would be unethical not to!”
AI marketers are continually emphasizing how powerful their software is. “Safety” reinforces this.
“Safety” also brings up many of the debates “mis/disinformation” brings up. Misinformation concerns consistently overestimate the power of social media.
I’d feel much better if “safety” focused on preventing unexpected behavior, rather than evaluating the motives of users.
The closed-weights models from OpenAI already do these things, though.
does your CPU, your OS, your web browser come with ~~built-in censorship~~ safety filters too?
AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, blatantly obviously invented to regulate small competitors out of existence.
It isn’t. This is dismissive without first thinking through the difference of application.
AI safety is about proactive safety. For example: if an AI model is going to be used to screen hiring applications, making sure it doesn’t have any weighted racial biases.
The difference here is that it’s not reactive. Reading a book with a racial bias would be the inverse; where you would be reacting to that information.
That’s the basis of proper AI safety in a nutshell.
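To make that concrete, here is a minimal sketch of what a proactive check could look like: a simple name-swap (counterfactual) test that scores the identical résumé under different names. score_resume() is a hypothetical stand-in for whatever model is being audited, and this is just one illustrative probe, not any vendor's actual evaluation suite.

```python
# Hypothetical counterfactual ("name-swap") audit for a résumé screener.
# score_resume() stands in for the model under audit; it is not a real API.
def score_resume(text: str) -> float:
    # Placeholder scorer: replace with the actual model call being audited.
    return float(len(text) % 7)

def name_swap_gap(template: str, names: list[str]) -> float:
    """Score the identical résumé under different names; return the spread."""
    scores = [score_resume(template.format(name=n)) for n in names]
    return max(scores) - min(scores)

TEMPLATE = "Name: {name}\nExperience: 5 years backend development"
NAMES = ["Emily Walsh", "Lakisha Washington", "Jamal Carter", "Greg Baker"]

# A large gap flags a weighted bias before the screener is ever deployed.
print(name_swap_gap(TEMPLATE, NAMES))
```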
As someone who has reviewed people’s résumés submitted with job applications in the past, I find it difficult to imagine this. The résumés that I saw had no racial information. I suppose the names might have some correlation to such information, but anyone feeding these things into an LLM for evaluation would likely redact the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that anyone is evaluating whether there is bias in the scenario where someone did not properly sanitize inputs.
4 replies →
If you're deploying LLM-based decision making that affects lives, you should be the one held responsible for the results. If you don't want to do due diligence on automation, you can screen manually instead.
Social media does. Even person to person communication has laws that apply to it. And the normal self-censorship a normal person will engage in.
okay. and? there are no AI 'safety' laws in the US.
without OpenAI, Anthropic and Google's fearmongering, AI 'safety' would exist only in the delusional minds of people who take sci-fi way too seriously.
https://en.wikipedia.org/wiki/Regulatory_capture
for fuck's sake, how more obvious could they be? sama himself went on a world tour begging for laws and regulations, only to purge safetyists a year later. if you believe that he and the rest of his ilk are motivated by anything other than profit, smh tbh fam.
it's all deceit and delusion. China will crush them all, inshallah.
iOS certainly does by limiting you to the App Store and restricting what apps are available there
They have been forced to open up to alternative stores in the EU. This is unequivocally a good thing, and a victory for consumer rights.
Books can do this too.
There's a reason the inheritors of the copyright* refused to allow more copies of Mein Kampf to be produced until that copyright expired.
* the federal state of Bavaria
Was there? It seems like that was the perfect natural experiment then. So what was the outcome? Was there a sudden rash of holocausts the year that publishing started again?
1 reply →
Major book publishers have sensitivity readers that evaluate whether or not a book can be "safely" published nowadays. And even historically there have always been at least a few things publishers would refuse to print.
All it means is that the Overton window on "should we censor speech" has shifted in the direction of less freedom.
1 reply →
Right, the books you are allowed to read are controlled by the people with the power to make them.
At the end of the day an LLM is just a machine that talks. It might say silly things, bad things, nonsensical things, or even crazy insane things. But at the end of the day it just talks. Words don't kill.
LLM safety is just a marketing gimmick.
We absolutely regulate which words you can use in certain areas. Take instructions on medicine, for one example.