
Comment by refulgentis

1 year ago

> Naively, you wouldn't expect

I, a naïf, expected this.

Is multiplication versus sine in the analogy hiding it, perhaps?

I've always pictured it as just "needing to learn" the function terms, with the function's guts being an abstraction that gets learned.

Might just be because I'm a physics dropout with a bunch of whacky half-remembered probably-wrong stuff about how any function can be approximated by e.g. Fourier series.

So (most) neural nets can be seen as a function of a _fixed_ form with some inputs and lots and lots of parameters.

In my example, a and b were the parameters. The kinds of data you can approximate well with a simple sine wave and the kinds of data you can approximate with a straight line are rather different.

Training your neural net only fiddles with the parameters like a and b. It doesn't do anything about the shape of the function. It doesn't change sine into multiplication etc.
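
To make that concrete, here's a minimal sketch (my own toy code; I'm assuming the earlier example was something like a*sin(b*x) versus a*x + b, since it isn't spelled out here): fitting only ever tunes a and b inside a fixed expression, and it never turns the sine family into the linear family.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two fixed-form model families. Fitting adjusts only the parameters a and b
# inside each form; it never changes a sine into a line or vice versa.
def sine_model(x, a, b):
    return a * np.sin(b * x)

def line_model(x, a, b):
    return a * x + b

x = np.linspace(0, 2 * np.pi, 200)
y_wave = 3.0 * np.sin(2.0 * x)   # sine-shaped data
y_line = 0.5 * x + 1.0           # line-shaped data

# Each family fits "its own kind" of data well...
(a, b), _ = curve_fit(sine_model, x, y_wave, p0=[1.0, 2.1])
print("sine on wave:", a, b)     # recovers roughly a=3, b=2
(a, b), _ = curve_fit(line_model, x, y_line, p0=[0.0, 0.0])
print("line on line:", a, b)     # recovers roughly a=0.5, b=1

# ...and the other kind badly, no matter how a and b are tuned.
(a, b), _ = curve_fit(line_model, x, y_wave, p0=[0.0, 0.0])
print("line on wave MSE:", np.mean((line_model(x, a, b) - y_wave) ** 2))
```

A neural net is the same idea with vastly more parameters inside one fixed architecture: gradient descent moves the numbers around, not the form.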

> [...] about how any function can be approximated by e.g. Fourier series.

Fourier series are an interesting example to bring up! I think I see what you mean.

In theory they work well to approximate any function over either a periodic domain or some finite interval. But applied naively, without special care, Fourier analysis becomes extremely sensitive to errors in the phase parameters.

(Special care could, e.g., mean chopping your input domain up into 'boxes'. That works well for, e.g., audio or video compression, but gives up on any model generalisation between 'boxes', especially for predicting what would happen in a later box.)
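
A rough numeric illustration of the phase point (my own toy setup, nothing from the thread): the magnitude spectrum of a square wave pins down very little on its own; the waveform's shape lives in the phases, so errors there go straight into the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
t = np.linspace(0, 1, N, endpoint=False)
signal = np.sign(np.sin(2 * np.pi * t))        # square wave over one period

coeffs = np.fft.rfft(signal)
mags, phases = np.abs(coeffs), np.angle(coeffs)

def reconstruct(mags, phases):
    return np.fft.irfft(mags * np.exp(1j * phases), n=N)

exact = reconstruct(mags, phases)                                          # ~0 error
jittered = reconstruct(mags, phases + 0.05 * rng.standard_normal(phases.shape))
scrambled = reconstruct(mags, rng.uniform(-np.pi, np.pi, phases.shape))    # same magnitudes

for name, rec in [("exact", exact), ("jittered phases", jittered), ("random phases", scrambled)]:
    print(name, np.sqrt(np.mean((rec - signal) ** 2)))   # RMS reconstruction error
```

With jittered phases the error scales with the jitter; with scrambled phases the "square wave" is unrecognisable even though the magnitude spectrum is identical.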

Another interesting example is Taylor series. For many simple functions Taylor series are great, but for even moderately complicated ones you need to be careful. See e.g. how the Taylor series for the logarithm around x=1 works well, but if you try it around x=0, you are in for a bad time.
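
For concreteness, a quick numeric check of the logarithm case (my own snippet): the expansion around x=1 converges nicely on (0, 2], but outside that interval the partial sums blow up, and at x=0 the log has a singularity, so there is nothing to expand around in the first place.

```python
import math

def taylor_log_about_1(x, terms=60):
    # Partial sums of ln(x) = sum_{n>=1} (-1)**(n+1) * (x - 1)**n / n,
    # the Taylor series of the logarithm expanded around x = 1.
    return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

for x in (0.5, 1.5, 1.9):
    print(x, taylor_log_about_1(x), math.log(x))    # agrees well inside (0, 2]

print(3.0, taylor_log_about_1(3.0), math.log(3.0))  # partial sum is astronomically off
```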

The interesting observation isn't just that there are multiple universal approximators, but that at a high enough parameter count they all seem to approximate about equally well in practice (though they differ in how well they can be trained).

  • > Training your neural net only fiddles with the parameters like a and b. It doesn't do anything about the shape of the function. It doesn't change sine into multiplication etc.

    It definitely can. The output will always be piecewise linear (with ReLU), but the overall shape can change completely (see the small sketch after this thread).

    • You can fit any data with enough parameters. What’s tricky is to constrain a model so that it approximates the ground truth well where there are no data points. If a family of functions is extremely flexible and can fit all kinds of data very efficiently, I would argue that makes it harder for those functions to have correct values out of distribution.

      2 replies →

    • Sorry, by 'shape' of the function I meant the shape of the abstract syntax tree (or something like that).

      Not the shape of its graph when you draw it.

      2 replies →
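
    A small sketch of the piecewise-linear point above (my own toy example, with weights made up for the demo): the same one-hidden-layer ReLU architecture gives a "V" for one weight setting and a flat/ramp/flat plateau for another. Both outputs are piecewise linear, but the shapes of their graphs are completely different.

    ```python
    import numpy as np

    # One-hidden-layer ReLU net: the output is always piecewise linear in x,
    # but different weight settings give completely different shapes.
    def relu_net(x, W1, b1, W2, b2):
        h = np.maximum(0.0, np.outer(x, W1) + b1)   # hidden ReLU layer
        return h @ W2 + b2

    x = np.linspace(-3, 3, 7)

    # relu(x) + relu(-x) = |x|: a "V" shape.
    print(relu_net(x, np.array([1.0, -1.0]), np.array([0.0, 0.0]),
                   np.array([1.0, 1.0]), 0.0))

    # relu(x + 1) - relu(x - 1): flat at 0, then a ramp, then flat at 2.
    print(relu_net(x, np.array([1.0, 1.0]), np.array([1.0, -1.0]),
                   np.array([1.0, -1.0]), 0.0))
    ```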