Comment by Abishek_Muthian
7 months ago
I use a laptop with an RTX 4090 (16GB VRAM), a Core i9, and 96GB RAM for low-latency work, and a Mac mini M4 for tasks that don't require low latency.
I wrote a blog post a while back on how I run LLMs locally[1]. I'll update it with information on the models & the Mac mini soon.
Have you tried hooking up your local setup to Cline/Roo or similar tools?
I haven't heard of them. Can you explain how they could be useful for those who run LLMs locally?
They're agentic IDEs. They don't "do" anything to Llama; they run on top of it.
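For context, tools like these just speak the OpenAI chat-completions API, so pointing them at a local server is all the "hookup" amounts to. A minimal sketch of what happens under the hood, assuming an Ollama instance on its default port (11434) with its OpenAI-compatible endpoint; the model name "llama3" is a placeholder for whatever you've pulled locally:

    # Minimal sketch: an agentic IDE is, at its core, a client sending
    # chat-completion requests to your local inference server.
    # Assumes: Ollama on its default port 11434; "llama3" is a
    # placeholder model name.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local endpoint, not OpenAI
        api_key="ollama",  # any non-empty string; Ollama ignores it
    )

    response = client.chat.completions.create(
        model="llama3",  # placeholder: whichever model you've pulled
        messages=[{"role": "user", "content": "Refactor this function..."}],
    )
    print(response.choices[0].message.content)

In Cline/Roo you'd set the same base URL and model name in the provider settings rather than writing any code; the point is that your local model itself doesn't change at all.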