Comment by imtringued
5 days ago
Self-distillation and mutual distillation are used in MoE models. What you can do is freeze all but one expert and then train the model. If you want to repeat that step, you first have to use self/mutual distillation to spread the training result onto the other experts. A rough sketch of the idea is below.
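A minimal sketch of that freeze-then-distill cycle, not the commenter's actual code: a toy PyTorch MoE where all experts except one are frozen for training, and the trained expert then acts as the teacher for the others. The names (`TinyMoE`, `freeze_all_but_one`, `distill_step`), the MSE distillation loss, and the placeholder training loss are all illustrative assumptions; mutual distillation (experts teaching each other) would extend the same pattern.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy mixture-of-experts: a soft gate over a handful of linear experts."""
    def __init__(self, dim=16, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x):
        weights = F.softmax(self.gate(x), dim=-1)                 # (batch, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

def freeze_all_but_one(model, trainable_idx):
    """Freeze every expert except the one currently being trained."""
    for i, expert in enumerate(model.experts):
        for p in expert.parameters():
            p.requires_grad = (i == trainable_idx)

def distill_step(model, x, teacher_idx):
    """Match each other expert's output to the freshly trained expert's output."""
    with torch.no_grad():
        teacher = model.experts[teacher_idx](x)
    loss = 0.0
    for i, expert in enumerate(model.experts):
        if i != teacher_idx:
            loss = loss + F.mse_loss(expert(x), teacher)
    return loss

model = TinyMoE()
x = torch.randn(8, 16)

# Phase 1: train only expert 0 (the loss here is just a placeholder).
freeze_all_but_one(model, trainable_idx=0)
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
opt.zero_grad()
model(x).pow(2).mean().backward()
opt.step()

# Phase 2: freeze the teacher, unfreeze the rest, and distill expert 0's
# behaviour onto the other experts before the next round of training.
for p in model.parameters():
    p.requires_grad = True
for p in model.experts[0].parameters():
    p.requires_grad = False
opt2 = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
opt2.zero_grad()
distill_step(model, x, teacher_idx=0).backward()
opt2.step()
```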