Comment by zahlman

11 hours ago

My point is that plenty of people use NumPy for reasons that have nothing to do with a GPU.

The whole point of NumPy is to make things much, much faster than interpreted Python, whether you're GPU-accelerated or not.

Even code you write now may need GPU acceleration later, as your simulations grow.

Falling back on loops defeats the entire reason for using NumPy in the first place.
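To make the loops-versus-vectorization point concrete, here's a minimal sketch. The simulation step and its names are illustrative, not from the thread; the idea is just that one array expression replaces an explicit Python loop and runs in compiled code:

```python
import numpy as np

def step_loop(positions, velocities, dt):
    # Explicit Python loop: one interpreted iteration per element.
    out = positions.copy()
    for i in range(len(out)):
        out[i] += velocities[i] * dt
    return out

def step_vectorized(positions, velocities, dt):
    # One array expression; NumPy performs the loop in C.
    return positions + velocities * dt

pos = np.linspace(0.0, 1.0, 100_000)
vel = np.ones_like(pos)

assert np.allclose(step_loop(pos, vel, 0.1), step_vectorized(pos, vel, 0.1))
```

Both produce the same result; for large arrays the vectorized form is typically orders of magnitude faster, and it is also the form that array-API-compatible GPU libraries can accelerate later.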

  • I really disagree. That's not the only point of NumPy. A lot of people use it like Matlab, to answer questions with minimal coding time, not minimal runtime.

    • I mean sure, the fact that it is performant means tons of functionality is built on it that is hard to find elsewhere.

      But the point is still that the main purpose in building it was to be performant. To be accelerated. Even if that's not why you're personally using it.

      I mean, I use my M4 Mac's Spotlight to do simple arithmetic. That's not the main point in building the M4 chip though.


  • I used it once purely because I figured out that "turn a sequence of per-tile bitmap data into tiles, then produce the image corresponding to those tiles in order" is equivalent to swapping the two inner dimensions of a four-dimensional array. (And, therefore, so is the opposite operation.) The task was extremely non-performance-critical.

    Of course I wasn't happy about bringing in a massive dependency just to simplify a few lines of code. Hopefully one day I'll have a slimmer alternative, perhaps one that isn't particularly concerned with optimization.
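For anyone curious, the tile trick described above can be sketched like this (the dimensions and variable names are my own illustration, not the commenter's code): splitting an image into a 4-D array and swapping the two inner axes converts between "rows of pixels" and "grid of tiles" layouts, and the same swap inverts itself.

```python
import numpy as np

# Hypothetical sizes: a 2x2 grid of 3x3 tiles, i.e. a 6x6 image.
grid_h, grid_w = 2, 2
tile_h, tile_w = 3, 3

# A 6x6 image whose pixel values are just 0..35, row by row.
image = np.arange(grid_h * tile_h * grid_w * tile_w).reshape(
    grid_h * tile_h, grid_w * tile_w
)

# Image -> tiles: view as (grid_row, y_in_tile, grid_col, x_in_tile),
# then swap the two inner axes to get (grid_row, grid_col, y, x).
tiles = image.reshape(grid_h, tile_h, grid_w, tile_w).swapaxes(1, 2)

# Tiles -> image: the same inner-axis swap, flattened back to 2-D.
restored = tiles.swapaxes(1, 2).reshape(grid_h * tile_h, grid_w * tile_w)

assert np.array_equal(restored, image)
```

`tiles[r, c]` is then the `tile_h` by `tile_w` block at grid position `(r, c)`, and because the swap is its own inverse, the opposite operation is the same line run in reverse.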

I mean, yes. And in your example, where hardly any time is spent running Python code, the performance difference likely wouldn't matter anyway.