Comment by JoshCole

2 years ago

> It's nothing like described in the article and I don't understand why people who should know better don't call out the bullshit media reporting more.

I'm kind of assuming you didn't read the article; but if you did, then I'm kind of assuming you've never done machine learning; and if you have: how did you manage that without ever noticing that you were doing approximation?

Objectively, neural networks are approximators. Like, truly objectively, as in: the literal objective function objectively minimizes approximation error. We call them objective functions, and minimizing the approximation error is typically the objective of those objective functions. This isn't bullshit. It isn't. If you think it is, you are deeply and profoundly mistaken.
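
To make that concrete, here's a minimal sketch (my own illustrative example, not anything from the article): a tiny feed-forward network trained with mean squared error as the objective function. The quantity being minimized is literally the approximation error against a target function.

```python
import math
import torch
import torch.nn as nn

# Toy target: approximate f(x) = sin(x) on [-pi, pi].
# (Illustrative only; the target, sizes, and hyperparameters are arbitrary.)
x = torch.linspace(-math.pi, math.pi, 256).unsqueeze(1)
y = torch.sin(x)

# A small feed-forward network: a generic function approximator.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# The "objective function" here is mean squared error --
# a direct measure of approximation error that training minimizes.
objective = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    optimizer.zero_grad()
    loss = objective(model(x), y)   # approximation error on the training points
    loss.backward()                 # gradients of the objective w.r.t. the weights
    optimizer.step()                # adjust weights to reduce that error

print(f"final approximation error (MSE): {loss.item():.5f}")
```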

The article advances this view of language models. It's a reasonable view for the same reason that machine learning papers describe neural networks as universal function approximators.
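
For reference, the standard statement behind that phrase, paraphrased from the classic single-hidden-layer results (Cybenko 1989; Hornik 1991):

```latex
% Universal approximation (single hidden layer, paraphrased):
% for any continuous f on a compact set K \subset \mathbb{R}^n and any
% tolerance \varepsilon > 0, some finite sum of activations \sigma
% approximates f uniformly on K.
\exists\, N,\ a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \ \text{such that}\quad
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
```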