A tool that removes censorship from open-weight LLMs

11 hours ago (github.com)

    You're not just using a tool — you're co-authoring the science.

This README is an absolute headache, filled with AI writing, terminology that doesn't exist or is used improperly, and unsound ideas. For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer. I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" while bouncing back the worst ideas.

  • > "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?)

    This is not what an ablation study is. An ablation study removes and/or swaps out ("ablates") different components of an architecture (be it a layer or set of layers, all activation functions, the backbone, some fixed processing step, or any other component or set of components) and/or, in some cases, other aspects of training (a unique or different loss function, a specialized pre-training or fine-tuning step, etc.), in order to better understand which component(s) of some novel approach is/are actually responsible for any observed improvements. It is a very broad research term of art.

    That being said, the "Ablation Strategies" [1] the repo uses, and a Ctrl+F for "ablation" in the README, do not fill me with confidence that the kind of ablation being done here really achieves what the author claims. All the "ablation" techniques are marked "Novel" in his table [2], i.e. they are unpublished / maybe not publicly or carefully tested, and could easily not work at all. From the later tables, I am not convinced I would want to use these ablations, as they ablate rather huge portions of the models and so probably do result in massively broken models (as some commenters have noted elsewhere in this thread).

    [1] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...

    [2] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...

    EDIT: As another user mentions, "ablation" also has a narrower meaning in some refusal analyses and in work on guardrails / steering response vectors. That is just a specific kind of ablation, and is more properly called "abliteration": https://huggingface.co/blog/mlabonne/abliteration, https://arxiv.org/abs/2512.13655.
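
    For the curious, here is a minimal toy PyTorch sketch of the core idea behind "abliteration" as the mlabonne post describes it: estimate a refusal direction from the difference in mean activations between refused and answered prompts, then orthogonalize the weight matrices that write into the residual stream against that direction. The tensors below are random stand-ins rather than real model activations, and this is not a claim about what OBLITERATUS itself does.

        import torch

        # Toy stand-ins: a real abliteration collects residual-stream activations
        # from the model on prompts it refuses vs. prompts it answers.
        torch.manual_seed(0)
        d_model = 64
        refused_acts = torch.randn(200, d_model)   # activations on refused prompts
        answered_acts = torch.randn(200, d_model)  # activations on answered prompts

        # The "refusal direction" is the normalized difference of the means.
        refusal_dir = refused_acts.mean(dim=0) - answered_acts.mean(dim=0)
        refusal_dir = refusal_dir / refusal_dir.norm()

        # Orthogonalize a weight matrix against that direction, so its outputs
        # can no longer carry a component along it.
        W = torch.randn(d_model, d_model)  # stand-in for e.g. an MLP down-projection
        W_abliterated = W - torch.outer(refusal_dir, refusal_dir @ W)

        # Outputs of the edited matrix now have ~zero component along refusal_dir.
        x = torch.randn(8, d_model)
        print((x @ W_abliterated.T @ refusal_dir).abs().max())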

  • Hmm, Pliny is amazing - if you kept up with him on social media you’d maybe like him: https://x.com/elder_plinius

    • I don't know. I scrolled through his recent Tweets and he's sharing things like this $900 snake oil device that "finds nearby microphones" and "sends out AI-generated cancellation signals" to make them unable to record your voice: https://x.com/aidaxbaradari/status/2028864606568067491

      Try to think for a moment about how a device would "find nearby microphones" or how it would use an AI-generated signal to cancel out your voice at the microphone. This should be setting off BS alarms for anyone.

      It seems the edgy Twitter AI poster guy is getting meta-trolled by another company selling fake AI devices.

    • The parent comment makes no reference to or comment on the author of the README.

      It just says "the README sucks." Which, I'm inclined to agree, it does.

      LLM-generated text has no place in prose -- it costs the aggregate readers far more than it saves the author.

    • Amazing as in his stuff actually works?

      I just hear him promoting OBLITERATUS all day long and trying to get models to say naughty things

  • > For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer.

    That doesn't mean there couldn't be a "concept neuron" that is doing the vast majority of heavy lifting for content refusal, though.
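
    If one wanted to test that hypothesis, the usual move is to silence the suspected unit with a forward hook and check whether refusal behavior actually changes. A toy PyTorch sketch of that kind of single-neuron ablation (the MLP and the neuron index here are made up for illustration, not taken from any real model):

        import torch
        import torch.nn as nn

        # Toy stand-in for one transformer MLP block; with a real model you would
        # attach the same kind of forward hook to the layer of interest.
        torch.manual_seed(0)
        mlp = nn.Sequential(nn.Linear(32, 128), nn.GELU(), nn.Linear(128, 32))

        suspect_neuron = 42  # hypothetical index of a suspected "refusal neuron"

        def zero_one_neuron(module, inputs, output):
            # Zero a single hidden unit; a true "concept neuron" for refusal would
            # cause a large behavioral shift when silenced.
            output = output.clone()
            output[..., suspect_neuron] = 0.0
            return output

        x = torch.randn(4, 32)
        with torch.no_grad():
            baseline = mlp(x)

        hook = mlp[1].register_forward_hook(zero_one_neuron)
        with torch.no_grad():
            ablated = mlp(x)
        hook.remove()

        # In a real study you would compare refusal rates on a prompt set, not norms.
        print((baseline - ablated).norm())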

  • Ironic to see this comment when Pliny, the author of this codebase, is one of the most sophisticated LLM jailbreakers/red-teamers today. So presumptuous and arrogant!

  • Alternatively, it's intentional. It very effectively filters out people with your mindset. You can decide if that's a good thing or not.

    • I immediately read it as intentional, as a sort of attempt at ironic / nihilistic humour re: LLM-generation, given what the tool claims to do.

    • Why would a tool that works need to dissuade skeptics from trying it?

Reviews of the tool on twitter indicate that it completely nerfs the models in the process. It won't refuse, but it generates absolutely stupid responses instead.

  • This is my experience with abliterated models.

    I use Berkley Sterling from 2024 because I can trick it. No abliteration needed.

  • This is vibecoded garbage that the “author” probably didn't even test themselves after making it yesterday, so it's not surprising that it's broken.

    Also, as I said in a top level comment, what this project wants to achieve has been done for a while and it's called Heretic: https://github.com/p-e-w/heretic

    (Not vibecoded by a twitter influgrifter)

    • We will eventually arrive at a new equilibrium in which everyone except the most stupid and credulous applies a lot more skepticism to public claims than we did before.

      And yeah, doing stuff like deleting layers or nulling out whole expert heads has a certain ice pick through the eye socket quality.

      That said, some kind of automated model brain surgery will likely be viable one day.
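
      To make the layer-deletion idea above concrete, here is a toy sketch of what it boils down to (the stack below is a stand-in, not any particular model's architecture; with Hugging Face-style models the layers typically live in a ModuleList you can slice the same way):

          import torch.nn as nn

          # Toy transformer stack standing in for a real decoder.
          layers = nn.ModuleList([
              nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
              for _ in range(12)
          ])

          # "Deleting layers" literally means dropping entries from the stack and
          # hoping the residual stream still makes sense afterwards.
          drop = {6, 7, 8}
          pruned = nn.ModuleList([layer for i, layer in enumerate(layers) if i not in drop])
          print(f"{len(layers)} layers -> {len(pruned)} layers")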

  • Link?

    It's interesting that people are writing tools that go inside the weights and do things. We're getting past the black box era of LLMs.

    That may or may not be a good thing.

    • I believe that this has already been done to several models. Ones that I've come across are the JOSIEfied models from Gökdeniz Gülmez. I downloaded one or two and tried them on a local Ollama setup. They do generate potentially dangerous output. Turning on thinking for the Qwen series shows how it arrives at its conclusions, and it's quite disturbing.

      However, after a few rounds of conversation, it gets into loops and just repeats things over and over again. The main JOSIE models worked the best of all and were still useful even after abliteration.

  • I didn't use this tool, but I did try out abliterated versions of Gemma, and yes, they lost about 100% of their ability to produce a useful response.

    • The default Heretic run with only 100 samples isn't very good; you really need your own, larger dataset to do a proper abliteration. The best abliterations roughly match a very careful decensoring SFT.

  • Everyone says that abliteration destroys the model. That's the trope phrase from people who don't know anything but want to participate. If someone says it to you, ignore them.

Didn't make it past the first paragraph of AI slop in the README. Have some respect for your readers and put actual information in it, ideally human generated. At least the first paragraph! Otherwise you may as well name it IGNOREME.

Does anyone offer a live (paid) LLM chatbot / video generation / etc. that is completely uncensored? As in, not requiring any work beyond just paying for it?

  • Nous Hermes was built from the ground up to be uncensored. No abliteration required.

    It's not a frontier model, but it will give you a feel for what it's like.

  • Grok was one of the closest, with expected results: bad PR from the obvious use cases that come with little censorship.