Comment by lioeters
1 day ago
I've been running local language models on an existing laptop with an 8GB GPU, currently ministral-3:8b. It's faster than other models of similar size I've used previously; fast enough that I never wait for it, and instead have to scroll back to read the full output.
So far I'm using it conversationally, and for scripting with tools. I wrote a simple chat interface / REPL in the terminal, but it's not integrated with my code editor, nor with agentic/claw-like loops. The last time I tried an open-source Codex-like tool, a popular one whose name I forget, it was slow and not that useful for my coding style.
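A terminal chat REPL like the one described can be sketched in a few lines. This is a minimal version, assuming the model is served locally by Ollama's `/api/chat` endpoint on the default port 11434 (the comment doesn't say which runtime is used, so that part is an assumption; the model name is taken from the comment):

```python
# Minimal terminal chat REPL against a local model server.
# Assumption: an Ollama-compatible server at localhost:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "ministral-3:8b"  # model named in the comment

def build_payload(history, model=MODEL):
    """Package the running conversation for the /api/chat endpoint."""
    return {"model": model, "messages": history, "stream": False}

def chat_once(history, url=OLLAMA_URL):
    """Send the full history; return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(history)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

def repl():
    """Loop: read a line, append to history, print the model's reply."""
    history = []
    while True:
        try:
            user = input("you> ").strip()
        except EOFError:
            break  # Ctrl-D exits
        if not user:
            continue
        history.append({"role": "user", "content": user})
        reply = chat_once(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    repl()
```

Keeping the whole history in the payload is what makes it conversational rather than one-shot; trimming old turns is the obvious next step once the context window fills up.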
It took some practice, but I've been able to get good use out of it: learning languages (human and programming), translation, producing code examples and snippets, and sometimes bouncing ideas off it, rubber-duck style.