Comment by wavemode

1 day ago

Well, yeah. The filtering is a joke. And in reality it's all moot anyway - the whole concept of LLM jailbreaking is mostly just for fun and demonstration. If you actually need an uncensored model, you can just use one (many open-source options are available). If you want an API without filtering, plenty of companies offer APIs that perform no filtering.

"AI safety" is security theater.

It's not really security theater, because there is no security threat. It's some variation of self-importance or hyperbole: claiming that information poses a "danger" in order to make AI seem more powerful than it is. All of these "dangers" would apply just as well to Wikipedia.

  • As far as I can tell, one can get a pretty thorough summary of all the publicly available information on the construction of nuclear weapons from Wikipedia.