
Comment by jrm4

15 hours ago

Nice. You're nailing exactly what I'm working towards already. I'm programming with Gemini for now and have no problem there, but the home use case I found for local Ollama was "taking a billion old bookmarks and tagging them." I'm looking forward to pointing Ollama at more personal stuff.
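For anyone curious, the bookmark-tagging loop can be sketched roughly like this against Ollama's default `/api/generate` endpoint. The model name, prompt wording, and helper names here are my own assumptions, not a tested pipeline:

```python
# Rough sketch: tag bookmarks via a local Ollama server.
# Assumes Ollama is running on its default port; model name is an assumption.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_prompt(title: str, url: str) -> str:
    """Ask the model for a bare comma-separated tag list."""
    return (
        "Suggest 3-5 short topic tags for this bookmark, "
        "as a comma-separated list and nothing else.\n"
        f"Title: {title}\nURL: {url}"
    )

def parse_tags(reply: str) -> list[str]:
    """Normalize the model's comma-separated reply into clean tags."""
    return [t.strip().lower() for t in reply.split(",") if t.strip()]

def tag_bookmark(title: str, url: str, model: str = "llama3.2") -> list[str]:
    # Non-streaming request so the full reply arrives in one JSON object.
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(title, url),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_tags(json.load(resp)["response"])
```

Then it's just a loop over the exported bookmarks file, writing tags back wherever you keep them.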

Yeah, I have two servers now: one with a big AMD card for decent LLM performance, and one with a smaller Nvidia card that mostly runs Whisper and some small models for side tasks.