Comment by hasbellmcclee

8 hours ago

Daily-driver, zero-cloud stack (runs on a 16 GB M2 Mac mini):

- Ollama + llama3:8b-instruct-q4_K_M – ~4 GB, answers code questions in under 300 ms.
- Open WebUI – a single Docker command; keeps chat history in a local SQLite database.
- Continue (VS Code plugin) – autocomplete plus inline /explain without leaving the editor.
- Obsidian – a Templater script calls Ollama to summarise yesterday's notes into one paragraph.
- Image boost: when I need a quick mock-up, a tiny FastAPI wrapper hits Imagen 4 (https://www.imagen4.org/) and drops the 2K PNG straight into my assets folder. No browser tab, no upload.
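The Obsidian step is just a plain HTTP call to Ollama's local API (the `/api/generate` endpoint with `stream: false` is Ollama's standard interface; the prompt wording and note path here are my own placeholders, not the exact Templater script):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port
MODEL = "llama3:8b-instruct-q4_K_M"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False makes Ollama return one JSON object instead of chunked lines
    return {"model": model, "prompt": prompt, "stream": False}

def summarise(notes: str) -> str:
    prompt = f"Summarise yesterday's notes into one paragraph:\n\n{notes}"
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(MODEL, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The generated text lives in the "response" field of the JSON reply
        return json.load(resp)["response"]

# Usage, e.g. invoked by Templater via a shell command:
#   print(summarise(pathlib.Path("daily/yesterday.md").read_text()))
```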
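Rough shape of the image step, minus the FastAPI layer. The endpoint URL and request format below are pure placeholders (I'm not assuming anything about that site's actual API); only the save-into-assets part is concrete:

```python
import datetime
import pathlib
import re
import urllib.request

ASSETS = pathlib.Path("assets")

def asset_path(prompt: str, assets: pathlib.Path = ASSETS) -> pathlib.Path:
    # Turn the prompt into a filesystem-safe slug and timestamp it
    slug = re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")[:40]
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return assets / f"{slug}-{stamp}.png"

def save_mockup(prompt: str, api_url: str = "https://example.invalid/generate") -> pathlib.Path:
    # Hypothetical endpoint and payload: swap in whatever your wrapper actually proxies
    req = urllib.request.Request(api_url, data=prompt.encode())
    with urllib.request.urlopen(req) as resp:
        png_bytes = resp.read()
    path = asset_path(prompt)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(png_bytes)  # PNG lands straight in assets/, no browser involved
    return path
```

The timestamped slug means repeated runs of the same prompt never clobber an earlier mock-up.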