mgol94 16 hours ago

What are you using LLMs for? To learn about the world's politics? Oh boy, do I have news for you…
rvba 15 hours ago

One of the first things I did when OpenAI came out was ask it "which active politician is a spy?", and it was blocked from the start.
I asked early on, back when people were posting various jailbreaks; none of them ever worked.
On a side note, is there any self-hosted model I can run on my PC? I have 96 GB of RAM.
KronisLV 15 hours ago

> On a side note, is there any self-hosted model I can run on my PC? I have 96 GB of RAM.
Try the 8-bit quantized version (UD-Q8_K_X) of Qwen 3.6 35B A3B from Unsloth: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
Some people also like the new Gemma 4 26B A4B model: https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
Either should leave plenty of RAM free for OS processes and for the KV cache, so you can run a larger context size.
I'm guessing the MoE models will run better, since only a few billion parameters are active per token, though there are also dense versions you can try if you want.
Performance and quality will probably both be worse than with cloud models, but it's a nice start!
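For sizing intuition, here's a rough back-of-the-envelope sketch of how much of that 96 GB the weights and KV cache would eat. The layer/head counts, quantization overhead, and context length below are illustrative assumptions, not official specs for the models mentioned above:

```python
# Rough RAM estimate for running a quantized GGUF model locally.
# All concrete numbers here are illustrative assumptions.

def model_ram_gb(n_params_billion: float, bits_per_weight: float,
                 overhead: float = 1.10) -> float:
    """Approximate resident size of the quantized weights in GB.

    `overhead` is a guess covering quantization scales/metadata
    and runtime buffers.
    """
    return n_params_billion * (bits_per_weight / 8) * overhead

def kv_cache_gb(ctx_tokens: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB for a given context length.

    The factor of 2 covers keys and values; bytes_per_elem=2
    assumes an fp16 cache.
    """
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return ctx_tokens * per_token / 1e9

# Hypothetical numbers for a ~35B-parameter model at ~8-bit quantization:
weights = model_ram_gb(35, 8)           # ~38.5 GB of weights
kv = kv_cache_gb(32_768, 48, 8, 128)    # ~6.4 GB of KV cache at 32k context
print(f"weights ~= {weights:.1f} GB, KV cache ~= {kv:.1f} GB")
```

Even with generous overhead, that total stays well under 96 GB, which is why an 8-bit quant of a model in this size class is a reasonable fit for that machine.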
DANmode 5 hours ago

> and it was blocked from the start.
Wait - what?