
Comment by zamadatix

14 days ago

The only thing about this which may be unintuitive from the name is that an "expert" is not something like a sub-LLM that's good at math and gets called when you ask a math question. Models like this run tokens through a stack of layers, and each layer is composed of 256 sub-networks, any of which can be selected (or several can be selected and their outputs merged in some way) independently at each layer.

So the net result is the same: sets of parameters in the model are specialized and selected for certain inputs. It's just done a bit deeper in the model than one might assume.
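
A minimal sketch of what one of those routed layers looks like, purely for illustration (the class and parameter names are made up, and real models add things like shared experts, normalization, and capacity limits):

```python
# Illustrative top-k routed MoE feed-forward layer (PyTorch).
# All names and sizes are made up; real implementations differ in many details.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=1024, d_hidden=4096, n_experts=256, top_k=8):
        super().__init__()
        self.top_k = top_k
        # The "router" is just a small linear layer that scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        # Each "expert" is an ordinary feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)           # (n_tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)    # pick top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Each token's output is the weighted merge of its selected experts' outputs.
        for i, expert in enumerate(self.experts):
            token_idx, slot = (chosen == i).nonzero(as_tuple=True)
            if token_idx.numel() > 0:
                out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out
```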

The most unintuitive part is that, from my understanding, individual tokens are routed to different experts. That is hard to square with the word "experts", since it means you can have different experts for two sequential tokens, right?

I think where MoE is misleading is that the experts aren't what we would call "experts" in the everyday sense; rather, they are experts for a specific token. That concept feels difficult to grasp.

  • It's not even per token. The routing happens once per layer, with the same token bouncing between layers.

    It's more of a performance optimization than anything else, improving memory liquidity. Except it's not an optimization for running the model locally (where you only run a single query at a time, and it would be nice to keep the weights on the disk until they are relevant).

    It's a performance optimization for large deployments with thousands of GPUs answering tens of thousands of queries per second. They put thousands of queries into a single batch and run them in parallel. After each layer, the queries are re-routed to the GPU holding the correct subset of weights. Individual queries will bounce across dozens of GPUs per token, distributing load.

    Even though the name "expert" implies they should be experts in a given topic, that's really not true. During training, they optimize for making the load distribute evenly, nothing else.
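
    A rough sketch of that load-balancing objective, in the spirit of the Switch Transformer style auxiliary loss (illustrative only, not any specific model's exact formula):

    ```python
    # Illustrative auxiliary load-balancing loss: added to the training loss so
    # the router learns to spread tokens evenly across experts rather than by "topic".
    import torch
    import torch.nn.functional as F

    def load_balance_loss(router_logits, top1_expert, n_experts):
        # router_logits: (n_tokens, n_experts); top1_expert: (n_tokens,) chosen expert ids
        probs = F.softmax(router_logits, dim=-1)
        frac_tokens = F.one_hot(top1_expert, n_experts).float().mean(dim=0)  # f_i: share of tokens sent to expert i
        mean_prob = probs.mean(dim=0)                                        # P_i: mean router probability for expert i
        # Minimized when both distributions are uniform, i.e. when load is even.
        return n_experts * torch.sum(frac_tokens * mean_prob)
    ```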

    • BTW, I'd love to see a large model designed from scratch for efficient local inference on low-memory devices.

      While current MoE implementations are tuned for load balancing over large pools of GPUs, there is nothing stopping you from tuning them to only switch experts once or twice per token, and ideally to keep the same weights across multiple tokens (roughly the idea sketched below).

      Well, nothing stopping you, but there is the question of whether it will actually produce a worthwhile model.

      5 replies →
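
      A toy sketch of that "keep the same expert across tokens" idea, purely to illustrate the trade-off (the function and the margin parameter are invented, not how any shipping model routes):

      ```python
      # Toy "sticky" top-1 routing for memory-constrained local inference: keep
      # using the previously loaded expert unless the router prefers another one
      # by a clear margin, so expert weights rarely need to be swapped in from disk.
      import torch
      import torch.nn.functional as F

      def sticky_route(router_logits, prev_expert=None, margin=0.2):
          # router_logits: (n_experts,) scores for the current token at this layer
          probs = F.softmax(router_logits, dim=-1)
          best = int(torch.argmax(probs))
          if prev_expert is not None and probs[best] - probs[prev_expert] < margin:
              return prev_expert  # not worth paying the cost of loading new weights
          return best             # switch experts and load the new weights
      ```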

    • > It's not even per token. The routing happens once per layer, with the same token bouncing between layers.

      They don't really "bounce around" though, do they (during inference)? That implies a token could bounce back, e.g. layer 4 -> layer 3 -> back to layer 4.

    • > making the load distribute evenly, nothing else.

      so you mean a "load balancer" for neural nets … well, why don't they call it that then?

      1 reply →

  • Also note that MoE is a decades old term, predating deep learning. It's not supposed to be interpreted literally.

  • > individual tokens are routed to different experts

    that was AFAIK (not an expert! lol) the traditional approach

    but judging by the chart in the LLaMa4 blog post, they're now interleaving MoE layers and dense layers, so I guess this means that even a single token could be routed through different experts at every single MoE layer!
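
    Schematically, that interleaving looks something like this toy sketch (the alternation pattern and sizes are invented, not Llama 4's actual configuration; MoELayer is the illustrative class from the sketch near the top of the thread):

    ```python
    # Toy sketch of interleaving dense feed-forward blocks with routed MoE blocks.
    import torch.nn as nn

    def dense_ffn(d_model=1024):
        return nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                             nn.Linear(4 * d_model, d_model))

    def build_stack(n_layers=8, d_model=1024):
        layers = nn.ModuleList()
        for i in range(n_layers):
            # every block keeps ordinary attention...
            layers.append(nn.MultiheadAttention(d_model, num_heads=8, batch_first=True))
            # ...but only every other block routes through experts, so one token can
            # land on different experts at each MoE block it passes through
            layers.append(MoELayer(d_model=d_model) if i % 2 else dense_ffn(d_model))
        return layers
    ```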

  • ML folks tend to invent fanciful metaphorical terms for things. Another example is “attention”. I’m expecting to see a paper “consciousness is all you need” where “consciousness” turns out to just be a Laplace transform or something.

So really it's just utilizing sparse subnetworks, more like the human brain.