Comment by Abishek_Muthian

7 months ago

I use a laptop with an RTX 4090 (16GB VRAM), a Core i9, and 96GB RAM for low-latency work, and a Mac mini M4 for tasks that don't require low latency.

I wrote a blog post a while back on how I run LLMs locally[1]; I'll update it soon with information on the models and the Mac mini.

[1] https://abishekmuthian.com/how-i-run-llms-locally/