Comment by tibbar

4 hours ago

I think the big frustration I've had in learning modern ML is that the entire owl is just so complicated. A poor explainer reads like "black box is black boxing the other black box," which is completely indecipherable. A mediocre-to-above-average explanation will be more like "(loosely introduced concept) is (doing something that sounds meaningful) to black box," which is a little better. But once explanations start getting accurate, you run into the sheer volume of concepts and data transforms taking place in a transformer, and there's too much information for it to be useful as a pedagogical device.
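To make the "sheer volume" point concrete, here's a toy single-head sketch of just one transformer block in NumPy. All the names and shapes here are mine, and it deliberately omits masking, multi-head splitting, positional encodings, biases, and learned norm parameters — and even this stripped-down version is ten distinct data transforms:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # norm without the learned scale/shift a real block would have
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    # 1-3) project tokens into queries, keys, values
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # 4) scaled dot-product scores, 5) softmax into attention weights
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    # 6) mix values by attention, then project back out
    mixed = (attn @ v) @ Wo
    # 7) residual add + 8) layer norm
    x = layer_norm(x + mixed)
    # 9) position-wise MLP with a nonlinearity, 10) residual + norm again
    h = np.maximum(x @ W1, 0) @ W2
    return layer_norm(x + h)

rng = np.random.default_rng(0)
seq, d = 4, 8
x = rng.standard_normal((seq, d))
Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
W1 = rng.standard_normal((d, 4 * d)) * 0.1
W2 = rng.standard_normal((4 * d, d)) * 0.1
out = transformer_block(x, Wq, Wk, Wv, Wo, W1, W2)
print(out.shape)  # same (seq, d) shape in and out
```

And a real model stacks dozens of these blocks, each multi-headed — which is exactly why accurate explanations drown the reader.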