Comment by dnhkng

1 month ago

No worries, happy to discuss anyway :)

MoE (mixture of experts) is an architecture that enforces sparsity (not all 'neurons' are active during the forward pass).
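A toy sketch of what that sparsity looks like (pure Python, not any specific library; the expert functions and router scores are made up): a top-k gate picks a couple of experts per token, and the unselected experts never run.

```python
def top_k_gate(scores, k=2):
    # indices of the k highest-scoring experts
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def moe_forward(x, experts, scores, k=2):
    active = top_k_gate(scores, k)
    # only the selected experts execute; the rest are skipped entirely,
    # which is the sparsity: most weights sit idle on this forward pass
    return sum(experts[i](x) for i in active) / k

# hypothetical experts and router scores, just for illustration
experts = [lambda x, m=m: x * m for m in (1.0, 2.0, 3.0, 4.0)]
scores = [0.1, 0.9, 0.3, 0.7]
y = moe_forward(5.0, experts, scores)  # experts 1 and 3 fire: (10 + 20) / 2
```

Real routers are learned and per-token, but the skeleton is the same: score, select top-k, run only those.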

This is pretty much orthogonal to that: it works with both dense and MoE models, by repeating 'vertical' sections of the transformer stack.
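To show what I mean by repeating a vertical section, here's a minimal sketch (the layer functions and the repeated slice are hypothetical, just to make the mechanics concrete): a slice of the layer stack is run more than once in sequence, so depth grows without adding new weights.

```python
def stacked_forward(x, layers, repeat_range=(1, 3), times=2):
    # build the execution plan: layers before the slice, the slice
    # repeated `times` times, then the layers after it
    lo, hi = repeat_range
    plan = layers[:lo] + layers[lo:hi] * times + layers[hi:]
    for layer in plan:
        x = layer(x)
    return x

# toy "layers" that each add a distinct amount, so repeats are visible
layers = [lambda x, b=b: x + b for b in (1, 10, 100, 1000)]
y = stacked_forward(0, layers)  # layers 1 and 2 applied twice each
```

This doesn't care whether each layer is dense or an MoE block internally, which is why the two ideas are orthogonal.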

>forces sparsity

That's branching and then coalescing, right? It selects the path weighted as most beneficial for the input?

Given you pointed out how even the vertical part of the architecture allows for skipping layers anyway, isn't that essentially the same thing?