
Comment by macNchz

13 hours ago

Gemma 4 26B-A4B might be interesting to try on your machine. The latest optimizations make MoE models work pretty nicely on setups like that with a decent GPU and lots of slowish RAM. I have a 16 GB GPU and 64 GB of 3200 MHz DDR4, and I get 15-20 tokens/sec out of that model with zero finagling or tweaking. I've been very impressed by it, even having run just about every other open-weight model that would fit on my machine over the last few years.

That seems slow? 15-20 tokens/sec, when I was expecting 50-60 like Mistral, although I haven't measured that on my setup yet.
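If you want an apples-to-apples comparison before eyeballing numbers, it's easy to time a generation yourself: count tokens produced and divide by wall-clock time. A minimal sketch, where `fake_generate` is a stand-in for whatever backend call you actually use (it just sleeps to simulate per-token latency):

```python
import time

def tokens_per_second(generate, prompt, n_tokens):
    """Time a generation call and return throughput in tokens/sec."""
    start = time.perf_counter()
    produced = generate(prompt, n_tokens)  # should return the token count
    elapsed = time.perf_counter() - start
    return produced / elapsed

# Placeholder backend: pretends each token takes ~5 ms to generate.
def fake_generate(prompt, n_tokens):
    for _ in range(n_tokens):
        time.sleep(0.005)
    return n_tokens

print(f"{tokens_per_second(fake_generate, 'hello', 64):.1f} tok/s")
```

Swapping `fake_generate` for a real call against both models, with the same prompt and token count, gives numbers you can actually compare.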

I've been asking other people too, but what do you use it for?