
Comment by mft_

5 hours ago

I agree with the previous post: there's hope for a convergence point in the not-too-distant future where consumer hardware can run powerful models.

At the moment, the 397B Qwen3.5 model (which I assume is what you're referring to) is still out of reach for most consumers to run locally: the only relatively straightforward path to running it (i.e. discounting custom Threadripper builds) would be a 512GB Mac Studio.

However, in a generation or two (of hardware and models), maybe we'll see convergence: more hardware available with 300-400GB of memory for more approachable money (a tough sell right now, I accept, with memory prices as they are), and models offering great performance in this size range.

I was referring to the 35B version. It is surprisingly good for its size. You can use it for implementation tasks without it going off the rails.