Gaussian integration is cool

15 hours ago (rohangautam.github.io)

No, the integral cannot be "expressed" as a sum over weights and function evaluations (with a "="); it can only be approximated with this idea. If you fix any n+1 nodes, interpolate your function, and integrate the interpolating polynomial, you get such a sum, where the weights are the integrals of the (Lagrange) basis polynomials. By construction, this computes the integral of polynomials up to degree n exactly. If you then choose the nodes in a particular way (namely, as the zeros of certain orthogonal polynomials), you can raise this to degree 2n+1. What I'm getting at is that Gaussian integration is not estimating the integrals of polynomials of degree up to 2n+1; it is evaluating them exactly.
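
To make the distinction concrete, here is a minimal sketch (my own, using NumPy's Gauss-Legendre helper leggauss, not code from the article): with n+1 nodes the weighted sum reproduces the integral of a degree-(2n+1) polynomial exactly, while a non-polynomial integrand is only approximated.

    import numpy as np

    n = 3                                     # n + 1 = 4 nodes
    x, w = np.polynomial.legendre.leggauss(n + 1)

    # Degree 2n+1 = 7 polynomial on [-1, 1]: the weighted sum IS the integral.
    f = lambda t: t**7 + 2*t**4 - t + 3
    exact = 2 * (2/5) + 3 * 2                 # odd-degree terms integrate to 0
    print(np.dot(w, f(x)), exact)             # both 6.8 (up to rounding)

    # Non-polynomial integrand: the same sum is now only an approximation.
    print(np.dot(w, np.exp(x)), np.exp(1) - np.exp(-1))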

What is Fig. 1 showing? Is it the value of the integral compared with two approximations? Would it not be more interesting to show the error of the approximations instead? Asking for a friend who isn’t computing a lot of integrals.

  • Fig 1 could use a rethink. It uses log scale, but the dynamic range of the y-axis is tiny, so the log transform isn't doing anything.

    It would be better shown as a table with 3 numbers. Or, maybe two columns, one for integral value and one for error, as you suggest.

When I first saw the title, I thought it was going to be about the Gaussian integral[1], which has to be one of the coolest results in all of maths.

That is, the integral from -infinity to +infinity of e^(-x^2) dx = sqrt(pi).

I remember being given this as an exercise and being totally shocked by how beautiful a result it was (when I eventually managed to work out how to evaluate it).
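
For reference, the usual way to evaluate it is to square the integral and pass to polar coordinates:

    \left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^{2}
      = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy
      = \int_{0}^{2\pi}\!\!\int_{0}^{\infty} e^{-r^2}\,r\,dr\,d\theta
      = 2\pi \cdot \tfrac{1}{2} = \pi,

so the integral itself is sqrt(pi).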

[1] https://mathworld.wolfram.com/GaussianIntegral.html

A good introduction to the basics.

What is also worth pointing out, and was somewhat glossed over, is the close connection between the weight function and the polynomials. Different weight functions give different classes of orthogonal polynomials. "Orthogonal" here has to be understood with respect to the scalar product defined by integrating against the weight function.
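
Spelling that out: the scalar product in question is

    \langle f, g \rangle_w = \int_a^b f(x)\, g(x)\, w(x)\, dx,

and a family {p_n} is orthogonal when \langle p_m, p_n \rangle_w = 0 for m != n. For example, w = 1 on [-1, 1] gives the Legendre polynomials, w = 1/sqrt(1 - x^2) gives the Chebyshev polynomials, and w = e^(-x^2) on the whole real line gives the Hermite polynomials.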

Interestingly, Gauss-Hermite integrates over the entire real line, i.e. from -infinity to infinity. So the choice of weight function also influences the integration domain.
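
As a concrete sketch (my own, using NumPy's hermgauss helper rather than anything from the article): Gauss-Hermite nodes and weights approximate integrals of the form ∫ f(x) e^(-x^2) dx over the whole real line, and are exact when f is a polynomial of low enough degree.

    import numpy as np

    # 10 nodes for the weight e^{-x^2} on (-inf, inf)
    x, w = np.polynomial.hermite.hermgauss(10)

    # f(x) = x^2 is a polynomial, so the rule is exact:
    # integral of x^2 * e^{-x^2} equals sqrt(pi)/2.
    print(np.dot(w, x**2), np.sqrt(np.pi) / 2)

    # f(x) = cos(x) is not a polynomial, so the rule only approximates
    # integral of cos(x) * e^{-x^2} = sqrt(pi) * exp(-1/4).
    print(np.dot(w, np.cos(x)), np.sqrt(np.pi) * np.exp(-0.25))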

  • Sorry if this is a stupid question, but is there a direct link between the choice of weight function and the applications of the polynomial?

    Like, is it possible to infer that Chebyshev polynomials would be useful in approximation theory using only the fact that they're orthogonal wrt the Wigner semicircle (U_n) or arcsine (T_n) distribution?

    • Chebyshev polynomials are useful in approximation theory because they're the minimax polynomials. The remainder of polynomial interpolation can be given in terms of the nodal polynomial, which is the polynomial with the interpolation nodes as zeros. Minimizing the maximum error then leads to the Chebyshev polynomials. This is a basic fact in numerical analysis that has tons of derivations online and in books.

      The weight function shows the Chebyshev polynomials' relation to the Fourier series. But they are not what you would usually think of as a good candidate for L2 approximation on the interval. Normally you'd use Legendre polynomials, since they have w = 1, but they are a much less convenient basis than Chebyshev for numerics.
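
      Concretely (standard results, stated here for reference): the interpolation error with nodes x_0, ..., x_n is

          f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i),

      so minimizing the maximum of |\prod_i (x - x_i)| over [-1, 1] forces the nodes to be the Chebyshev points, because the monic degree-(n+1) polynomial with the smallest sup-norm on [-1, 1] is 2^{-n} T_{n+1}(x). And the substitution x = cos(theta) turns the weighted inner product into the Fourier (cosine) one:

          \int_{-1}^{1} \frac{T_m(x)\, T_n(x)}{\sqrt{1 - x^2}}\, dx = \int_0^{\pi} \cos(m\theta) \cos(n\theta)\, d\theta.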
