
Comment by nyeah

21 hours ago

Right, you can use loops. But then it goes much slower than a GPU permits.

But once you need the GPU, you need to move to another framework anyway (e.g. JAX, TensorFlow, ArrayFire, Numba...). AFAIK many of those can parallelise loops through their JIT functionality (in fact, Numba's JIT for a long time could not deal with NumPy broadcasting, so you had to write out your loops). So you're not really running into a problem?
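
A minimal sketch of that pattern, assuming Numba's `njit`/`prange` API; the function names and shapes are made up for illustration:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def pairwise_diff_loops(a, b):
    # Explicit loops; Numba compiles them and parallelises the outer one.
    out = np.empty((a.shape[0], b.shape[0]))
    for i in prange(a.shape[0]):
        for j in range(b.shape[0]):
            out[i, j] = a[i] - b[j]
    return out

def pairwise_diff_broadcast(a, b):
    # The equivalent NumPy broadcasting, no explicit loops.
    return a[:, None] - b[None, :]

a = np.random.rand(1000)
b = np.random.rand(800)
assert np.allclose(pairwise_diff_loops(a, b), pairwise_diff_broadcast(a, b))
```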

My point is that plenty of people use NumPy for reasons that have nothing to do with a GPU.

  • The whole point of NumPy is to make things much, much faster than interpreted Python, whether you're GPU-accelerated or not.

    Even code you write now may need GPU acceleration later, as your simulations grow.

    Falling back on loops defeats the entire reason for using NumPy in the first place.

    • I really disagree. That's not the only point of NumPy. A lot of people use it like Matlab, to answer questions with minimal coding time, not minimal runtime.


    • I used it once purely because I figured out that "turn a sequence of per-tile bitmap data into tiles, then produce the image corresponding to those tiles in order" is equivalent to swapping the two inner dimensions of a four-dimensional array. (And, therefore, so is the opposite operation.) The task was extremely non-performance-critical.

      Of course I wasn't happy about bringing in a massive dependency just to simplify a few lines of code (sketched below). Hopefully one day I'll have a slimmer alternative, perhaps one that isn't particularly concerned with optimization.
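
      A minimal sketch of that equivalence; the grid and tile sizes are made up, only the axis-swap idea comes from the comment above:

      ```python
      import numpy as np

      # Hypothetical sizes: a 4x5 grid of 8x8 tiles.
      grid_h, grid_w, tile_h, tile_w = 4, 5, 8, 8
      tiles = np.arange(grid_h * grid_w * tile_h * tile_w).reshape(
          grid_h * grid_w, tile_h, tile_w)  # sequence of per-tile bitmaps

      # Assemble the image: view the sequence as a 4-D array, swap the
      # two inner axes, and flatten back to 2-D.
      image = (tiles.reshape(grid_h, grid_w, tile_h, tile_w)
                    .swapaxes(1, 2)
                    .reshape(grid_h * tile_h, grid_w * tile_w))

      # The opposite operation (image back to a tile sequence) is the
      # same inner-axis swap in reverse.
      tiles_again = (image.reshape(grid_h, tile_h, grid_w, tile_w)
                          .swapaxes(1, 2)
                          .reshape(grid_h * grid_w, tile_h, tile_w))
      assert np.array_equal(tiles, tiles_again)
      ```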

  • I mean, yes. Also, in your example, where you hardly spend any time running Python code, the performance difference likely wouldn't matter.