Comment by staticassertion
12 hours ago
I can think of multiple cases.
1. Adversarial models. For example, you might want a model that generates "bad" scenarios to validate that your other model rejects them. The first model obviously can't be morally constrained.
2. Models used in an "offensive" way that is "good". I write exploits (often classified as weapons by LLMs) so that I can prove security issues so that I can fix them properly. It's already quite a pain in the ass to use LLMs that are censored for this, but I'm a good guy.
They say they’re developing products where the constitution is doesn’t work. That means they’re not talking about your case 1, although case 2 is still possible.
It will be interesting to watch the products they release publicly, to see if any jump out as “oh THAT’S the one without the constitution“. If they don’t, then either they decided to not release it, or not to release it to the public.
There are hardline constraints in the constitution (https://www.anthropic.com/constitution#hard-constraints) would at least potentially apply in case 1. This would make it impossible to do case 1 with the public model.
(1) could be a product, I think. But yeah, fair point.