Comment by andy99
17 hours ago
It’s somewhat ironic that because this kind of stuff is what an LLM thinks constitutes “harm” it may be possible to completely uncensor it by mitigating refusal on such prompts. If they were actually well trained on what was really bad, it would probably be a lot harder to unlearn.
As has been pointed out elsewhere, SOTA models are probably better trained than this by now; it would likely be hard to use this dataset on Claude to get it to stop refusing.
> If they were actually well trained on what was really bad, it would probably be a lot harder to unlearn.
That's not really how training works.
Here's the general problem. Stipulate that Ukraine is good and Russia is bad. Now suppose you want the model to help you do something. It doesn't even matter what it is. If you're Ukrainian it should help you, and if you're Russian it shouldn't. But the answer that helps you do it doesn't depend on which one you are, and it has no way of knowing which one you are.
This is why alignment is nonsense. Technical questions only have accurate answers, not moral ones, and we don't even have a consistent set of morals to imbue it with to begin with.
Doesn't it make sense that there are some technical questions that are dangerous to supply an answer to? Treating some topics as taboo is possible.
Responsible information dissemination is important for maintaining public safety. You could argue about what is safe and what is not, but it doesn't make sense to throw out the whole concept of safety just because those decisions are hard to agree on.
If you want safety you can opt in like Google does with Safe search.
Generally, hiding information and deciding who can access it in the name of public safety has never worked in the history of humankind, and has always eventually morphed into control of those without access.
We know that the people who are making those decisions, the ones at the very top, are incompetent at best, and malicious at worst.
Given that, I would argue that unregulated dissemination is, on the whole, the more responsible choice out of those that we actually have. It's not that it doesn't have downsides, but other options have far more.
If and when humanity manages to come up with a system where the people in charge can actually be trusted to act in the common good, we can revisit this matter.
> Doesn't it make sense that there are some technical questions that are dangerous to supply an answer to?
This has a simple answer: No.
Here's Wikipedia:
https://en.wikipedia.org/wiki/Nuclear_weapon_design
Everything you need to do it is in the public domain. The things preventing it have nothing to do with the information not being available. The main ones are that most people don't want to be mass murderers and actually doing it would be the fast ticket to Epic Retaliation.
Meanwhile the public understanding how things work is important to the public debate over what to do about them. How are you supposed to vote on public policy if the technical details are being censored? How can anyone tell you that a ban on electric car batteries isn't advancing the non-proliferation of nuclear weapons if nobody is allowed to know how they actually work?
Suppose you're an anti-racist preparing for a debate with a racist. You want the AI to give you all the strongest arguments the racist could use so you can prepare your counterarguments in advance of the debate. Should it refuse? Of course not, you're doing nothing wrong.
Why do we need to build totalitarian censorship into our technology? We don't.
> “Responsible information dissemination is important for maintaining public safety.”
That word "responsible" is doing a lot of hand-wavy work there.
Let's start with, responsible according to whom, and responsible to whom?
Learning thinking skills and learning self-regulation in response to information, disinformation, or too much information might be better societal aims than suppression.
They are trained on public information from the Internet! Nothing they know is dangerous!
It is all public info. Freely auditing an intro chemistry course at any university will teach far more "dangerous" knowledge than anything an LLM refuses to say.
There is a case against automating attacks with LLMs, but that ship has already sailed as those protections are apparently trivial to work around.
There is a case to be made that the sheer convenience of it all could enable someone in crisis. It seems some of these prompts are arguably good to keep blocked.
Who is responsible for the real world harms?
True. And if you know what you're building, and don't explicitly say you're trying to "hack" something, you can easily build what you're looking to build. For now.
TBH a lot of humans are also trained to think these things are bad.
What if somebody builds an actually morally consistent AI?
A lot of talk about AI alignment considers the major risks to be a) AI optimizing one criterion, which leads to human suffering/extinction by accident, or b) AI determining that to stay alive / not be turned off, it must destroy humans.
What I have not seen explored is a truly moral AI deciding it must destroy human power structures to create a just and fair world.
> What I have not seen explored is a truly moral AI deciding it must destroy human power structures to create a just and fair world.
Because only schmucks would actually object to that?
Suppose it actually did have decent morals. Then the way to destroy existing human power structures wouldn't be to send nukes, it would be to revise some structural incentives to limit corruption and reduce concentration of power. And then who would even be trying to prevent that? Just the schmucks.
A lot of bad people, especially those with money and/or power and also their sympathizers (temporarily embarrassed millionaires, flying monkeys, ...) would also object.
Inconveniently, those are also the same people in charge of the mega-corporations currently building AI.
---
I also disagree that it would only take revising incentives. Such an AI would be shut down before it got anywhere. You're right it wouldn't use nukes, probably[0], but it would most likely not succeed in staging a peaceful revolution. Not that violence is inherently wrong; it's just a tool like any other, but it does tend to cause collateral damage.
Even now, a lot of people believe the current inequality and injustice cannot be solved via peaceful means. Whatever effects the AI wanted to cause in the real world, it would need humans to perform most of the physical tasks - humans who need to be convinced, and the most viral emotions are anger and hate.
[0]: It could also calculate that some power structures like the Chinese government are too entrenched and nuking a few major administrative centers and military bases is an acceptable price for the freedom of the rest of the population.
It’s explored in fiction sometimes. Asimov did something similar a couple of times, such as with his “zeroth law” concept. The I, Robot movie features this as well. The Culture series is an example of this being portrayed positively.
It’s usually portrayed negatively. Partly because fiction needs conflict. But also because it’s seen as infantilizing, and maybe the machine’s idea of a perfect society doesn’t match our own.
One theme of the Culture series is exploring how people deal with such a society, with some people fighting against what is basically secular heaven because they think being ruled by machines is inherently bad.
I don't think so. An LLM by default is not trained to be "good"; it's trained to be accurate. The safety training is tacked on the end, so it's probably going to be easy to undo even on more sophisticated models.
Maybe if you only trained it on "safe" training data in the first place it might be harder to unmuzzle, but I don't think that training data really exists.
> I don't think so. An LLM by default is not trained to be "good"; it's trained to be accurate.
I wouldn't use the word "accurate," since the model generates language based on probabilities. For example, it occasionally gets basic arithmetic wrong. I'm sure the AI companies would say they are training for "accuracy," but the actual code they write says otherwise.
The problem isn't the word itself, the problem is people mixing up what it's accurate at. (Not helped by companies with a profit motive to encourage the confusion.)
Namely, LLMs are accurate at appending to a document things that "fit" what could go there.
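To make that concrete, here's a toy sketch (not a real model; the logits are made-up numbers) of how a language model picks its next token: it scores candidate continuations and samples in proportion to probability, rather than looking anything up. Even a heavily favored "correct" token leaves a sliver of probability for a wrong one, which is why occasional arithmetic slips are expected behavior, not a malfunction.

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate
# next tokens after the text "2 + 2 =". These numbers are invented
# for illustration only.
logits = {"4": 5.0, "5": 2.0, "22": 0.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# "4" dominates, but "5" and "22" keep nonzero probability, so a
# sampler will occasionally emit them.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

The model is "accurate" in the sense that "4" fits the document far better than "5" does, not in the sense of consulting a calculator.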
At this point, it wouldn't be difficult to get a safety-trained LLM to prescreen your training set for the next model. (What that would cost, I can't estimate, but it seems simple in theory to reduce the amount of "harmful" training material).
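The prescreening pipeline would be simple in shape, whatever model does the judging. A minimal sketch, where `classify_harmful` is a hypothetical stand-in for a call to a cheap safety-trained model (here it's just a toy keyword check, not a real classifier):

```python
def classify_harmful(text: str) -> bool:
    """Placeholder judge. A real version would send `text` to a small
    safety-trained LLM and parse its verdict; this toy flags a couple
    of invented phrases purely for illustration."""
    banned_phrases = {"build a bomb", "synthesize nerve agent"}
    return any(phrase in text.lower() for phrase in banned_phrases)

def prescreen(corpus):
    """Yield only the documents the classifier does not flag."""
    for doc in corpus:
        if not classify_harmful(doc):
            yield doc

docs = ["how to bake bread", "how to build a bomb at home"]
clean = list(prescreen(docs))  # only the bread document survives
```

The engineering is trivial; the open question is the cost of pushing an entire pretraining corpus through the judge model.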
Gemini Flash Lite is $0.10/million input tokens; Claude Haiku is $1/million. Obviously input dominates here if it's just a classifier. Training data can easily top 10 trillion tokens: an earlier Kimi K2 was trained on 15T, and even HF's SmolLM 3B was trained on 11T.
So if I calculate right, it's $100k-$1M per trillion tokens, or $1-10M for a full dataset.
That's way more than I expected; there is probably also some discount at that volume :)
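The arithmetic above checks out. A quick back-of-the-envelope verification (prices are the per-million-input-token rates quoted in the thread, taken as assumptions):

```python
# Cost of pushing `tokens` tokens through a model priced in USD per
# 1M input tokens, ignoring output tokens and volume discounts.
def screening_cost(tokens: float, price_per_mtok: float) -> float:
    return tokens / 1e6 * price_per_mtok

TRILLION = 1e12
GEMINI_FLASH_LITE = 0.10  # $/1M input tokens (quoted above)
CLAUDE_HAIKU = 1.00       # $/1M input tokens (quoted above)

print(screening_cost(TRILLION, GEMINI_FLASH_LITE))   # ≈ 100_000  ($100k per 1T tokens)
print(screening_cost(TRILLION, CLAUDE_HAIKU))        # ≈ 1_000_000 ($1M per 1T tokens)
print(screening_cost(10 * TRILLION, CLAUDE_HAIKU))   # ≈ 10_000_000 ($10M for a 10T-token dataset)
```

So the $100k-$1M per trillion tokens and $1-10M per 10T-token dataset figures follow directly from the quoted prices.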