Comment by furyofantares

2 days ago

I tried a few things and checked CPU usage in Task Manager to see how much work the CPU is doing.

KV Cache in GPU and 36/36 layers in GPU: CPU usage under 3%.

KV Cache in GPU and 35/36 layers in GPU: CPU usage at 35%.

KV Cache moved to CPU and 36/36 layers in GPU: CPU usage at 34%.

I believe you that it doesn't make sense to do it this way (it is slower), but it doesn't appear to be doing much work on the CPU.

You say gigabytes of weights PER TOKEN; is that true? I think an expert is about 2 GB, so a newly activated expert costs 2 GB to move, sure, but I might have all the experts for the token already in memory, no?
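A quick back-of-envelope check supports the skepticism. The numbers below are my own assumptions, not from the thread: roughly 2 GB per expert and a PCIe 4.0 x16 link at about 25 GB/s of practical bandwidth.

```python
# Back-of-envelope: cost of shipping one missed expert to the GPU per token.
# ASSUMPTIONS (mine, not from the thread): ~2 GB per expert, PCIe 4.0 x16
# at roughly 25 GB/s of usable bandwidth.
expert_bytes = 2e9
pcie_bytes_per_sec = 25e9

transfer_ms = expert_bytes / pcie_bytes_per_sec * 1e3
print(f"{transfer_ms:.0f} ms per missed expert")  # ~80 ms
```

At ~80 ms per missed expert, even one cache miss per token would dominate generation time, so sustained token rates imply most chosen experts are already resident.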

gpt-oss-120b chooses 4 experts per token and combines them.
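For readers unfamiliar with MoE routing, the mechanism being described can be sketched like this: a router scores all experts per token, keeps the top k, and softmax-normalizes their scores so the selected experts' outputs can be combined as a weighted sum. This is a generic top-k routing sketch, not LM Studio's or gpt-oss's actual code; the 8-expert toy size is illustrative.

```python
import math

def top_k_route(logits, k=4):
    """Pick the k highest-scoring experts and softmax their scores
    so the selected experts' combination weights sum to 1."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# Toy router scores for 8 experts (real models have far more; illustrative only)
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.9, 0.2]
chosen = top_k_route(logits, k=4)
# The token's output is then: sum(weight * expert_i(x) for i, weight in chosen)
```

Only the k chosen experts run a forward pass for that token, which is why the set of experts that must be in fast memory can change token to token.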

I don't know how lmstudio works; I only know the fundamentals. There is no way it's sending experts to the GPU per token. Also, the CPU doesn't have much work to do. It's mostly waiting on memory.

  • > There is no way it's sending experts to the GPU per token.

    Right, it seems like either experts are fairly often stable across sequential tokens, or there are more than 4 experts in memory and routing stays within the in-memory set for sequential tokens fairly often, like the poster said.
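The stability hypothesis is testable if you can log which experts the router picks per token. As a baseline, here is what overlap looks like under independent uniformly random top-4-of-128 routing (the expert count is an assumption for illustration); a real trace showing much higher consecutive-token overlap would support the "stable experts" explanation.

```python
import random

def avg_consecutive_overlap(expert_choices):
    """Average number of experts shared by consecutive tokens' choices."""
    pairs = zip(expert_choices, expert_choices[1:])
    overlaps = [len(set(a) & set(b)) for a, b in pairs]
    return sum(overlaps) / len(overlaps)

random.seed(0)
# Baseline: uniformly random top-4-of-128 routing, one choice set per token.
trace = [random.sample(range(128), 4) for _ in range(10_000)]
baseline = avg_consecutive_overlap(trace)
# Expected overlap under independent routing is 4 * 4 / 128 = 0.125 experts,
# i.e. consecutive tokens almost never share an expert by chance.
```

If measured traces show overlaps near 4 rather than near 0.125, consecutive tokens are reusing experts and the weights rarely need to move.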