Comment by FuckButtons
10 hours ago
Not entirely true, it’s random access within the relevant subset of experts, and since concepts are clustered you actually have a much higher probability of repeatedly hitting the same subset of experts.
It’s called mixture of experts, but concepts don’t map cleanly, or even roughly, to different experts. Otherwise you wouldn’t get a new expert on every token. You have to remember these were designed to improve throughput in cloud deployments, where different GPUs each load an expert. There you precisely want tokens routed across experts uniformly to improve your GPU utilization rate. I have not heard of anyone training local MoE models to aid sharding.
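To make the routing concrete: a minimal sketch of the per-token top-k gating that standard MoE layers use. All names and shapes here are illustrative assumptions, not any particular model's implementation; the point is just that expert selection is recomputed per token from a learned projection, so consecutive tokens can land on different expert subsets.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_route(hidden, gate_weights, k=2):
    """Per-token top-k expert selection, as in a typical MoE layer.

    hidden:       (tokens, d_model) token activations (hypothetical shapes)
    gate_weights: (d_model, n_experts) learned router projection
    Returns the chosen expert ids per token, shape (tokens, k).
    """
    logits = hidden @ gate_weights                 # (tokens, n_experts)
    # top-k expert indices per token; order within the k doesn't matter
    return np.argsort(logits, axis=-1)[:, -k:]

# Toy dimensions for illustration only
tokens, d_model, n_experts = 8, 16, 64
hidden = rng.normal(size=(tokens, d_model))
gate = rng.normal(size=(d_model, n_experts))

routes = top_k_route(hidden, gate)
# Each row is one token's expert subset; nothing forces adjacent
# tokens onto the same experts, which is the parent's point.
print(routes)
```

With clustered inputs the rows would overlap far more often than with this random data, which is the locality effect the earlier comment describes.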
is there anywhere good to read/follow to get operational clarity on this stuff?
my current system of looking for 1 in 1000 posts on HN or 1 in 100 on r/locallama is tedious.
Ask any of the models to explain this to you