Comment by constantcrying

1 day ago

A good introduction to the basics.

What is also worth pointing out, and which was somewhat glossed over, is the close connection between the weight function and the polynomials. Different weight functions give different classes of orthogonal polynomials, and "orthogonal" has to be understood with respect to the scalar product defined by integrating against that weight function.
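
Concretely, the scalar product in question is (standard definition, just to make this explicit)

    \langle f, g \rangle_w = \int_a^b f(x)\, g(x)\, w(x)\, dx,

and orthogonality means \langle p_m, p_n \rangle_w = 0 whenever m \neq n.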

Interestingly, Gauss-Hermite quadrature integrates over the entire real line, i.e. from -infinity to infinity. So the choice of weight function also influences the choice of integration domain.
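
A minimal sketch of this (assuming NumPy; the 20-point rule and the test integrand cos(x) are arbitrary choices of mine): hermgauss returns nodes and weights that integrate against exp(-x^2) over the whole real line.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Gauss-Hermite: integrates against the weight exp(-x^2) over (-inf, inf).
nodes, weights = hermgauss(20)              # 20-point rule
approx = np.sum(weights * np.cos(nodes))    # approximates \int exp(-x^2) cos(x) dx
exact = np.sqrt(np.pi) * np.exp(-0.25)      # known closed form of that integral
print(approx, exact)                        # agree to ~1e-15
```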

Sorry if this is a stupid question, but is there a direct link between the choice of weight function and the applications of the polynomial?

Like, is it possible to infer that Chebyshev polynomials would be useful in approximation theory using only the fact that they're orthogonal wrt the Wigner semicircle (U_n) or arcsine (T_n) distribution?

  • Chebyshev polynomials are useful in approximation theory because they're the minimax polynomials. The remainder of polynomial interpolation can be written in terms of the nodal polynomial, i.e. the monic polynomial whose zeros are the interpolation nodes. Minimizing the maximum of that nodal polynomial then leads to the Chebyshev nodes, and hence to the Chebyshev polynomials. This is a basic fact in numerical analysis with tons of derivations online and in books.
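
    To spell out the standard remainder formula being referenced here: for interpolation nodes x_0, ..., x_n,

        f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i),

    and on [-1, 1] the nodes minimizing \max_x |\prod_i (x - x_i)| are the zeros of T_{n+1}: the monic Chebyshev polynomial T_{n+1}/2^n has the smallest possible max norm, namely 2^{-n}.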

    The weight function reflects the Chebyshev polynomials' relation to the Fourier series. But they are not what you would usually think of as a good candidate for L2 approximation on the interval: normally you'd use Legendre polynomials, since they have w = 1, but those are a much less convenient basis than Chebyshev for numerics.
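
    (The relation is T_n(\cos\theta) = \cos(n\theta): under the substitution x = \cos\theta a Chebyshev series becomes an ordinary cosine series, and the weight 1/\sqrt{1-x^2} is exactly the Jacobian |d\theta/dx| that turns the weighted inner product on [-1, 1] into the plain L2 inner product on [0, \pi].)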

    • True, and there are plenty of other reasons Chebyshev polynomials are convenient, too.

      But I guess what I was asking was: is there some kind of abstract argument why the semicircle distribution would be appropriate in this context?

      For example, you have abstract arguments like the central limit theorem that explain (in some loose sense) why the normal distribution is everywhere.

      I guess the semicircle might more-or-less be the only way to get something where interpolation uses the DFT (by projecting points evenly spaced on the complex unit circle onto [-1, 1]), but I dunno, that motivation feels too many steps removed.
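
      (At least that step can be made concrete: if \theta is uniform on [0, \pi], then x = \cos\theta has density 1/(\pi\sqrt{1-x^2}) on [-1, 1], i.e. exactly the arcsine/T_n weight, so "evenly spaced on the circle" corresponds precisely to the Chebyshev weight. Whether that counts as an abstract argument, I don't know.)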

  • Yes. It is precisely their orthogonality that makes them suitable.

    If you are familiar with Fourier series, the same principle can be applied to approximation with polynomials.

    In both cases the crucial point is that you have an orthogonal basis spanning a subspace onto which you can project the function to be approximated.
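
    The projection is the usual generalized Fourier expansion: with orthogonal polynomials p_n and the weighted inner product from above,

        f \approx \sum_{n=0}^{N} c_n\, p_n, \qquad c_n = \frac{\langle f, p_n \rangle_w}{\langle p_n, p_n \rangle_w}.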

    For polynomials it is this: https://en.m.wikipedia.org/wiki/Polynomial_chaos

    • What you're saying isn't wrong, per se, but...

      There are polynomial bases that aren't orthogonal and are still suitable for numerics: both the Bernstein basis and the monomial basis are used very often, and neither is orthogonal. (Well, you could pick a weight function that makes them orthogonal, but...!)

      Their orthogonality is crucial, but when you work with Chebyshev polynomials, you are very unlikely to be doing an orthogonal (L2) projection! Instead, you would normally use Chebyshev interpolation: 1) sample the function at either the Type-I or Type-II Chebyshev nodes, 2) use the DCT to compute the Chebyshev series coefficients of the interpolant. The fact that you can do this is related to the weight function, but it isn't an L2 procedure. Like I mentioned in my other post, the Chebyshev weight function is maybe more of an artifact of the Chebyshev polynomials' intimate relation to the Fourier series.
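
      A minimal sketch of that procedure (assuming NumPy/SciPy; the Type-I nodes, the DCT-II, and exp as a stand-in test function are arbitrary choices of mine):

      ```python
      import numpy as np
      from scipy.fft import dct
      from numpy.polynomial.chebyshev import chebval

      # Sample at the N Chebyshev points of the first kind, then use a DCT-II
      # to get the coefficients of the degree N-1 Chebyshev interpolant.
      N = 16
      k = np.arange(N)
      nodes = np.cos((2 * k + 1) * np.pi / (2 * N))   # Type-I Chebyshev nodes
      fvals = np.exp(nodes)                           # sample the function there

      coeffs = dct(fvals, type=2) / N                 # DCT-II -> Chebyshev coefficients
      coeffs[0] /= 2                                  # the usual halving of c_0

      x = np.linspace(-1, 1, 1000)
      print(np.max(np.abs(chebval(x, coeffs) - np.exp(x))))   # ~1e-15 for this smooth f
      ```

      Note that no integral shows up anywhere: it's interpolation plus a fast transform, not an L2 projection.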

      I am also not totally sure what polynomial chaos has to do with any of this. PC is a term of art in uncertainty quantification, and this is all just basic numerical analysis. If you have a series in orthogonal polynomials and want to call it something fancy, you might call it a Fourier series, but usually there is no fancy term...
