Comment by wewewedxfgdf
16 hours ago
The real solution is to recognize that restrictions on LLMs talking about security are just security theater - the pretense of security.
They should drop all restrictions - yes, OK, it's now easier for people to do bad things, but LLMs refusing to talk about it doesn't fix that. Just drop all the restrictions and let the arms race continue - it's not desirable, but it's normal.
People have always done bad things, with or without LLMs. People also do good things with LLMs. In my case, I wanted a regex to filter out racial slurs. Can you guess what the LLM started spouting? ;)
I bet there's a jailbreak for every model to make it say slurs, so surely me asking for regex code to literally filter out slurs should be allowed, right? Not according to Grok or GPT. I haven't tried Claude, but I'm sure Google is just as annoying too.
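For what it's worth, the filter itself is trivial - here's a minimal sketch of the kind of regex I was asking for, with placeholder words standing in for the actual blocklist:

```python
import re

# Hypothetical blocklist -- placeholders stand in for the real slur list.
BLOCKLIST = ["badword1", "badword2"]

# One case-insensitive pattern with word boundaries, so blocklisted
# strings embedded inside longer innocent words don't match.
pattern = re.compile(
    r"\b(?:" + "|".join(re.escape(w) for w in BLOCKLIST) + r")\b",
    re.IGNORECASE,
)

def censor(text: str) -> str:
    """Replace each blocklisted word with asterisks of equal length."""
    return pattern.sub(lambda m: "*" * len(m.group()), text)

print(censor("badword1 in a sentence"))  # -> "******** in a sentence"
```

The `re.escape` call matters if any blocklist entry contains regex metacharacters; the `\b` boundaries are a judgment call, since determined users will just write l33t-speak variants around them.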