Comment by int_19h
4 days ago
Indeed. But for stuff like this, ChatGPT is overkill. You're better off with a dedicated RP finetune of LLaMA, Qwen, or another open-weights model (you can still run it in the cloud if you don't have the hardware to do so locally). There are enough finetunes around by now that you can "dial in" how dark you want it. Some examples:
https://huggingface.co/jukofyork/Dark-Miqu-70B
https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_7...
Just curious, how do you keep up to date on these models? Is there a community out there that discusses them?
https://old.reddit.com/r/LocalLLaMA/ is the place to be, but it's a subreddit with an eclectic population, so apply some caution when browsing it at work.
r/LocalLLaMA has the discussions, but on top of that I just periodically browse the new-model lists on Hugging Face. There's a lot of stuff, but most low-effort finetunes target small models (since those are much cheaper and faster to train), so if you only look at 70B+ models, there's far less garbage.
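If you want to automate that kind of size filtering, here's a rough sketch. It relies on the common "NNB" naming convention for parameter counts (as in Dark-Miqu-70B); apart from that repo, the model names below are made up for illustration, and real listings would come from browsing or querying Hugging Face yourself:

```python
import re

def param_billions(repo_id: str):
    # Pull the parameter count (in billions) out of a repo name,
    # matching patterns like "70B", "8B", or "13b".
    m = re.search(r"(\d+(?:\.\d+)?)[bB]\b", repo_id)
    return float(m.group(1)) if m else None

def filter_large(repo_ids, min_b=70):
    # Keep only repos that advertise min_b+ parameters in their name.
    return [r for r in repo_ids
            if (p := param_billions(r)) is not None and p >= min_b]

# One real repo from above, the rest are hypothetical names.
models = [
    "jukofyork/Dark-Miqu-70B",
    "example/RP-Tune-8B",
    "example/Story-Mix-13b",
    "example/Grim-Llama-72B",
]
print(filter_large(models))
# ['jukofyork/Dark-Miqu-70B', 'example/Grim-Llama-72B']
```

The obvious caveat: not every repo encodes its size in the name, so this is a coarse first pass, not a substitute for actually reading the model cards.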