Comment by zmmmmm
6 days ago
This is why I use Aider: it only operates on files you explicitly give it. It works great with OpenAI, but if you're really worried, it also interfaces perfectly with Ollama for local LLMs. A 12b model on my Mac handles coding well enough to be serviceable for me.
Which 12b model are you running?
Gemma 12b quantized (gemma3:12b-it-qat in Ollama)
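For anyone wanting to try this setup, a minimal sketch of the workflow being described (the OLLAMA_API_BASE variable and the ollama/ model prefix come from aider's Ollama documentation; exact flags may vary by aider version):

    # pull the quantized Gemma 3 12B model mentioned above
    ollama pull gemma3:12b-it-qat

    # point aider at the local Ollama server (default port 11434)
    export OLLAMA_API_BASE=http://127.0.0.1:11434

    # run aider against the local model instead of OpenAI
    aider --model ollama/gemma3:12b-it-qat

From there, files are added to the chat explicitly (e.g. with /add), which is the containment property the original comment is pointing at.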