Comment by eru

1 year ago

I see what you are saying, and I made a similar comment.

However, it's still an interesting observation that many architectures can arrive at the same performance (even though the training requirements are different).

Naively, you wouldn't expect eg 'x -> a * x + b' and 'x -> a * sin x + b' to fit the same data about equally well. But that's an observation from low dimensions. It seems that once you add enough parameters, the exact model doesn't matter too much for practical expressiveness.
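
Here's a toy sketch of what I mean (purely my own illustration; random fixed features plus plain least squares stand in for 'training', and the target data is made up):

```python
# Fit the same data with two very different model families -- linear-plus-ReLU
# pieces vs linear-plus-sinusoids. With few features the families behave
# differently; with many, both fit about equally well.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400)
y = np.tanh(2 * x) + 0.3 * x**2          # arbitrary made-up target data

def relu_features(n):
    breaks = rng.uniform(-3, 3, n)        # random kink positions
    return np.maximum(0.0, x[:, None] - breaks[None, :])

def sine_features(n):
    freqs = rng.uniform(0.1, 4.0, n)      # random frequencies and phases
    phases = rng.uniform(0.0, 2 * np.pi, n)
    return np.sin(freqs[None, :] * x[:, None] + phases[None, :])

def rmse(nonlinear):
    F = np.column_stack([np.ones_like(x), x, nonlinear])   # the 'a * x + b' part plus extras
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return np.sqrt(np.mean((F @ coef - y) ** 2))

for n in (2, 8, 32, 128):
    print(n, "relu:", round(rmse(relu_features(n)), 4),
             "sine:", round(rmse(sine_features(n)), 4))
```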

I'm faintly reminded of the Church-Turing Thesis; the differences between different computing architectures are both 'real' but also 'just an optimisation'.

> When you start layering in normalisation techniques to minimise overfitting, and especially once you start thinking about more agentic architectures (eg. Deep Q Learning, some of the search space design going into OpenAI's o1), then I don't think the just-an-optimisation perspective can hold much water at all - more computing power simply couldn't solve those problems with older architectures.

You are right, these normalisation techniques help you economise on training data, not just on compute. Some of them can be applied independently of the model, eg augmenting your training data with noise. But others are very model-dependent.
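
For instance, input-noise augmentation needs to know nothing about the model; roughly (toy sketch, made-up data and noise level):

```python
# Model-independent augmentation: add Gaussian-jittered copies of the
# training inputs while reusing the original targets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 3))   # made-up training inputs
y = X.sum(axis=1)                           # made-up targets

copies, sigma = 5, 0.05                     # assumed augmentation settings
X_aug = np.vstack([X + rng.normal(0.0, sigma, X.shape) for _ in range(copies)])
y_aug = np.tile(y, copies)                  # targets are reused unchanged
print(X_aug.shape, y_aug.shape)             # (500, 3) (500,)
```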

I'm not sure how the 'agentic' approaches fit here.

> Naively, you wouldn't expect

I, a naïve, expected this.

Is multiplication versus sine in the analogy hiding it, perhaps?

I've always pictured it as just "needing to learn" the function's terms, with the function's guts being an abstraction that gets learned.

Might just be because I'm a physics dropout with a bunch of whacky half-remembered probably-wrong stuff about how any function can be approximated by eg Fourier series.

  • So (most) neural nets can be seen as a function of a _fixed_ form with some inputs and lots and lots of parameters.

    In my example, a and b were the parameters. The kinds of data you can approximate well with a simple sine wave and the kinds of data you can approximate with a straight line are rather different.

    Training your neural net only fiddles with the parameters like a and b. It doesn't do anything about the shape of the function. It doesn't change sine into multiplication etc.

    > [...] about how any function can be approximated by ex. fourier series.

    Fourier series are an interesting example to bring up! I think I see what you mean.

    In theory they work well to approximate any function over either a periodic domain or some finite interval. But applied naively, Fourier analysis becomes extremely sensitive to errors in the phase parameters unless you take special care.

    (Special care could eg mean hacking up your input domain into 'boxes'. That works well for eg audio or video compression, but gives up on any model generalisation between 'boxes', especially for what would happen in a later box.)

    Another interesting example is Taylor series. For many simple functions Taylor series are great, but for even moderately complicated ones you need to be careful. See eg how the Taylor series for the logarithm around x=1 works well, but if you tried it around x=0, you'd be in for a bad time (quick numeric check below).

    The interesting observation isn't just that there are multiple universal approximators, but that at high enough parameter counts they seem to approximate about equally well in practice (while differing in how easily they can be trained).
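
    Quick numeric check of the log example (my own toy code; the series around x=1 only converges for 0 < x <= 2, so the last sample point below is deliberately outside that range):

    ```python
    # Partial sums of the Taylor series of ln(x) around x = 1:
    # ln(x) = sum_{n>=1} (-1)**(n+1) * (x - 1)**n / n, valid only for 0 < x <= 2.
    import math

    def log_taylor(x, terms=50):
        return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

    for x in (1.5, 1.9, 2.5):   # 2.5 lies outside the radius of convergence
        print(x, "series:", log_taylor(x), "true:", math.log(x))
    ```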

    • > Training your neural net only fiddles with the parameters like a and b. It doesn't do anything about the shape of the function. It doesn't change sine into multiplication etc.

      It definitely can. The output will always be piecewise linear (with ReLU), but the overall shape can change completely.

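      A toy illustration (my own numbers; the same two-unit ReLU architecture both times, only the parameters differ):

      ```python
      # Same fixed architecture -- y = v . relu(w*x + b) + c with two hidden
      # units -- but two parameter settings: one gives a straight line, the
      # other a tent shape.
      import numpy as np

      def tiny_relu_net(x, w, b, v, c):
          return np.maximum(0.0, np.outer(x, w) + b) @ v + c

      x = np.linspace(-2, 2, 9)
      line = tiny_relu_net(x, w=np.array([1.0, -1.0]), b=np.zeros(2),
                           v=np.array([1.0, -1.0]), c=0.0)   # relu(x) - relu(-x) == x
      tent = tiny_relu_net(x, w=np.array([1.0, 1.0]), b=np.array([0.0, -1.0]),
                           v=np.array([1.0, -2.0]), c=0.0)   # kinks at x = 0 and x = 1
      print(np.round(line, 2))
      print(np.round(tent, 2))
      ```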

This reminds me of control systems theory, where, provided there's feedback, the forward transfer function doesn't matter beyond very basic properties around the origin.
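
A rough numeric sketch of that (my own toy example: two quite different first-order plants under the same high-gain proportional feedback end up tracking the setpoint almost identically):

```python
# Two different first-order plants, dy/dt = -a*y + b*u, under the same
# high-gain proportional feedback u = K*(r - y). Both closed-loop step
# responses settle near the setpoint r, almost regardless of the plant.
import numpy as np

def closed_loop_step(a, b, K=200.0, r=1.0, dt=1e-3, T=1.0):
    y, trace = 0.0, []
    for _ in range(int(T / dt)):          # simple forward-Euler simulation
        u = K * (r - y)
        y += dt * (-a * y + b * u)
        trace.append(y)
    return np.array(trace)

plant_one = closed_loop_step(a=1.0, b=1.0)
plant_two = closed_loop_step(a=10.0, b=2.0)
print(round(plant_one[-1], 3), round(plant_two[-1], 3))   # both close to r = 1
```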