
Comment by noobermin

5 hours ago

Sorry, re-reading this, I should have said "most". As the other reply mentions, Padé approximants are also well liked for numerical methods.

I personally do most of my everyday work using Taylor expansion (mostly explicit numerical methods in comp. EM, because they're cheaper these days and simpler to write down), so it's what first comes to mind.

A quick meta-take here: it is hard to assess the level of expertise on HN. Some people might be just tangentially interested; others might have degrees in the specific topic. Others might maintain a scientific computing library. Domains vary too: embedded systems, robotics, spacecraft navigation, materials modeling, or physics simulation. Until/unless people step up and fill the gaps somehow, we have little notion of identity or credentialing, for better and for worse.*

So it really helps when people explain (1) their context** and (2) their reasoning. Communicating well is harder than people think. Many comments are read by hundreds or more (thousands?) of people, most of whom probably have no idea who we are, what we know, or what we do with our brains on a regular basis. It is generous and considerate to others to slow down and really explain where we're coming from.

So, when I read "most people use Taylor approximations"...

1. my first question is "on what basis can someone say this?"

2. for what domains might this be somewhat true? False?

3. but the bigger problem is that claims like the above don't teach. E.g., when do Taylor series methods fall short? Why? When are the other approaches more useful?

Here's my quick take... Taylor expansions tend to work well when you are close to the expansion point and the function is analytic, and less well when those assumptions don't hold. More broadly, they don't tend to give uniform accuracy across a range, so Taylor approximations are usually only local. Other methods (Padé, minimax, etc.) are worth reaching for when constraints like uniform accuracy over an interval matter.
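To make the "local" point concrete, here's a quick sketch of my own (not from the thread): a degree-4 Taylor polynomial of exp about 0 versus its [2/2] Padé approximant. Both are built from the same local data at 0; the difference shows up away from the expansion point.

```python
import math

def taylor_exp4(x):
    # degree-4 Taylor polynomial of exp about x = 0
    return 1 + x + x**2/2 + x**3/6 + x**4/24

def pade22_exp(x):
    # [2/2] Pade approximant of exp about x = 0
    return (1 + x/2 + x**2/12) / (1 - x/2 + x**2/12)

for x in (0.5, -5.0):
    exact = math.exp(x)
    print(f"x={x}: Taylor err={abs(taylor_exp4(x) - exact):.2e}, "
          f"Pade err={abs(pade22_exp(x) - exact):.2e}")
```

Near the expansion point both are excellent; at x = -5 the Taylor polynomial is off by more than the function's own value, while the Padé approximant degrades far more gracefully.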

* I think this is a huge area we're going to need to work on in the age where anyone can sound like an expert.

** In the case above, does "comp. EM" mean "computational electromagnetics" or something else? The paper talks about "EML" so it makes me wonder if "EM" is a typo. All of these ambiguities add up and make it hard for people to understand each other.

  • I do computational electromagnetism, specifically plasma simulation. In the field solver (I mainly do explicit codes, meaning we just approximate the derivatives numerically) we only do Taylor expansion so that the derivatives are essentially second-order accurate. We don't bother going further, although I could, because in my domain being more "accurate" as a function of step size (dx in approximating f(x) -> f(x+dx)) yields less of a profit than just decreasing step sizes and grid sizes (i.e., increasing resolution). And even then, the numerical accuracy pales in comparison to, say, setting up the physical problem wrong (the focus of a simulated laser pulse being ten wavelengths out of focus).
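    (To illustrate what "second-order accurate" means here, a toy sketch of my own: from the Taylor expansions of f(x+dx) and f(x-dx), the odd terms cancel in a central difference, so the error in the derivative estimate shrinks like dx**2 — halving dx cuts the error by roughly 4x.)

```python
import math

def central_diff(f, x, dx):
    # From Taylor: f(x+dx) - f(x-dx) = 2*dx*f'(x) + O(dx**3),
    # so this estimate of f'(x) has O(dx**2) error.
    return (f(x + dx) - f(x - dx)) / (2 * dx)

x, exact = 1.0, math.cos(1.0)  # d/dx sin(x) = cos(x)
errs = [abs(central_diff(math.sin, x, dx) - exact)
        for dx in (0.1, 0.05, 0.025)]
print(errs)
print(errs[0] / errs[1], errs[1] / errs[2])  # each ratio should be ~4
```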

    Replying to some of your questions (1 and 2): this is from the perspective of a computational scientist, and a less theoretical type who works closely with experimentalists. Thus I am closer to a user of codes to model experiments than to someone who does a lot of analytic or fundamental theory, although my experience and perspective are probably close to those of computational-ish people in other domains like engineering, for the reasons I'll explain below.

    For 3, most physically useful simulations that are not merely theoretical exercises (that is, simulations that are predictive or explanatory of actual experiments scientists want to do) will not consist of analytic functions you can write down. Suppose the initial conditions of a problem have an aspect that is analytic (me setting my laser profile as a Gaussian pulse): once the interaction with a plasma target occurs, the result you obtain (and thus the predictions a simulation will make that can be compared to experiment) will not be Gaussian, but will evolve due to the complex physics modeled in the simulation. And a Gaussian as an initial condition is already an approximation to an actual experiment. An "easy sim" for me is doing a best fit of the waist to a profile the experimentalists read off a power meter and using a Gaussian that closely matches it; a more realistic simulation would be taking the data they have in an Excel sheet and feeding it into the simulation directly as an initial condition. In most real-world scenarios, the ICs already aren't analytic and the problem must be solved numerically. This isn't that different from how engineers use computational codes, by the way: not many airplane wings are spheres or cylinders, so you'd likely have to import the design for a wing from a CAD file into, say, an aerodynamics fluid code.

    So in all these cases, the bottleneck isn't really approximating analytic functions you can write down, either in closed form or in series form to the nth degree. Many people in the computational domain do not need accuracy beyond two or three terms of a Taylor series, because it is usually easier to just cut down dx and take more steps in total than to use a large dx and require more terms... and this is before reaching for any more sophisticated approximations. No code I know of uses Padé approximants; I just know that some libraries for special functions (which might account for one or two function calls in a code I use) use them.
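    (A toy version of that tradeoff, my own illustration: computing exp(1) with one "big step" using a 5-term Taylor series, versus 100 small steps of dx = 0.01 that each use only a 3-term Taylor factor, composed multiplicatively since exp(1) = exp(0.01)**100.)

```python
import math

# One big step: 5-term Taylor series for exp(1) about 0
one_step = sum(1.0**k / math.factorial(k) for k in range(5))

# Many small steps: only a 3-term Taylor factor per step of dx = 0.01
dx, many_steps = 0.01, 1.0
for _ in range(100):
    many_steps *= 1 + dx + dx**2 / 2

print(abs(one_step - math.e))    # error with 5 terms, one big step
print(abs(many_steps - math.e))  # smaller error with only 3 terms per step
```

Fewer terms per step plus a smaller dx wins here, which is the point: shrinking the step is often cheaper than carrying more series terms.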

    Also, here's a quick example you can try. Let's look at exp for small arguments (this only really works for small arguments; you obviously can't do a Taylor expansion well for large ones). Consider the following:

    >>> np.exp(0.4231)

    np.float64(1.5266869570289792)

    I'll see how many terms I need to get 4 digits of accuracy. (Note that I had four digits in my input, so even ignoring sig figs, I probably shouldn't expect better than 4 digits in the result. NumPy itself is numerical too, so it shouldn't be considered exact, although I'll trust its first four digits here.)

    >>> x = 0.4231

    >>> 1

    1

    >>> 1 + x

    1.4231

    >>> 1 + x + x**2/2

    1.512606805

    >>> 1 + x + x**2/2 + x**3/(3*2)

    1.5252302480651667

    >>> 1 + x + x**2/2 + x**3/(3*2) + x**4/24

    1.5265654927553847

    Note that by 3 terms (x**3) I'm already off in the last digit by only 1, and by 4 terms it has converged well enough. Since you can reuse powers of x from the previous term, this is already dangerously cheap; why would you even need better than this in a practical sense? Most results I see in the wild don't even require this level of precision in a single simulation. I am a scientist, however; it's possible engineers need more precision.
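    (That "reuse powers of x" trick, spelled out as a sketch: each new term is the previous one times x/n, so building the series above costs roughly one multiply and one add per term.)

```python
import math

x = 0.4231
term = total = 1.0
for n in range(1, 5):
    term *= x / n   # x**n / n!, built from the previous term
    total += term
    print(n, total)

print(math.exp(x))  # reference value
```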

    For us, more sophisticated methods at the "calculating transcendental functions" level are not really required, which is why they don't appear in the codes I usually see. What we need more are things that make the actual elemental operations, like FMA and the like, faster. Things like AVX-512 are far more interesting to me, for example.