Comment by ACCount37

2 days ago

The math covers the low level decently well, but you run out of it quickly. A lot of it fails to scale, and almost all of it fails to capture the high-level behavior of modern AIs.

You can predict how some simple, narrow edge-case neural networks will converge, but this doesn't go all the way to frontier training runs, or even the kind of runs you can do at home on a single GPU. And that's one of the better-covered areas.
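
(For concreteness, the predictable regime looks something like this: for plain linear least squares, you can read gradient descent's convergence rate off the Hessian's eigenvalues before running a single step. A toy Python sketch with made-up data, nothing close to a frontier run:)

    import numpy as np

    # Toy sketch of the "predictable" regime (invented data, nothing frontier-scale):
    # for linear least squares, gradient descent's convergence rate follows from
    # the eigenvalues of the Hessian before you run a single step.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true                            # noiseless targets, so w_true is recoverable

    H = X.T @ X / len(X)                      # Hessian of the mean squared loss
    eigs = np.linalg.eigvalsh(H)              # ascending eigenvalues
    lr = 1.0 / eigs[-1]                       # step size from the largest eigenvalue
    predicted = 1.0 - eigs[0] / eigs[-1]      # per-step error contraction factor

    w = np.zeros(5)
    errs = []
    for _ in range(50):
        w -= lr * (X.T @ (X @ w - y) / len(X))    # plain gradient descent
        errs.append(np.linalg.norm(w - w_true))

    print("predicted contraction per step:", round(predicted, 3))
    print("observed contraction per step: ", round(errs[-1] / errs[-2], 3))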

You can’t predict it because the data is unknown before training. And training is computation based on math. And the results are the weights. And every further computation is also math-based. The result can be surprising, but there’s no fairy dust here.
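
(To make that concrete: a single update step is nothing more than arithmetic on the current weight. A hypothetical one-parameter example with invented numbers:)

    # A single training step is just arithmetic on the current weight.
    # Hypothetical one-parameter model, squared loss, numbers invented for illustration.
    w = 0.5                              # current weight
    x, target = 2.0, 3.0                 # one training example
    lr = 0.1                             # learning rate

    pred = w * x                         # forward pass: 1.0
    grad = 2 * (pred - target) * x       # d/dw of (pred - target)**2: -8.0
    w = w - lr * grad                    # updated weight: 1.3

    print(w)                             # fully determined by the inputs above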

  • There's no fairy dust there, but that doesn't mean we understand how it works. There's no fairy dust in the human brain, either.

    Today's mathematical background applied to frontier systems is a bit like trying to understand how a web browser works from knowing how a transistor works. The mismatch is palpable.

    Sure, if you descend to a low enough level, you won't find any magic fairy dust - it's transistors as far as the eye can see. But "knowing how a transistor works" doesn't come close to capturing the sheer complexity. Low-level knowledge does not automatically translate to high-level knowledge.