Comment by praetor22

8 months ago

Look, I understand the value proposition and how cool it is from a theoretical standpoint, but I honestly don't think this will ever become relevant.

Here are some notes from my first impressions and after skimming through the paper. And yes, I am aware that this is very very early software.

1. Bend looks like an extremely limited DSL. No FFI. No way of interacting with raw buffers. A weird 24-bit floating-point format.

2. There's a reason why ICs are not relevant: performance is and will always be terrible. There is no other way to put it, graph traversal simply doesn't map well on hardware.

3. The premise of optimal reduction is valid. However, you still need to write the kernels in a way that can be parallelized (i.e. no data dependencies, use of recursion).

4. There are no serious examples that directly compare Bend/HVM code with its equivalent OMP/CUDA program. How am I supposed to evaluate the reduction in implementation complexity and what to expect in performance? So many claims, so few actual comparisons.

5. In the real world of high-performance parallel computing, tree-like structures are non-existent. Arrays are king, and that's because of the physical nature of how memory works at the hardware level. And do you know what works best on mutable contiguous memory buffers? Loops. We'll see when HVM implements those.
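To make point 3 concrete, here is a minimal sketch (plain Python, not Bend; the function names are mine) of the distinction I mean: a sequential fold carries a loop dependency through the accumulator, while a divide-and-conquer formulation exposes independent halves that a parallel runtime like HVM could in principle reduce concurrently.

```python
def sum_sequential(xs):
    # Each iteration depends on the previous accumulator value,
    # so the steps form a chain and cannot run in parallel.
    acc = 0
    for x in xs:
        acc += x
    return acc

def sum_divide_and_conquer(xs):
    # The two recursive calls share no data, so a parallel
    # evaluator is free to compute them simultaneously.
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    return sum_divide_and_conquer(xs[:mid]) + sum_divide_and_conquer(xs[mid:])
```

The point being: you only get the parallelism if you write the kernel in the second shape to begin with; the language doesn't magically extract it from the first.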

In the end, what we currently have is a half-baked language that is (almost) fully isolated from external data, extremely slow, and a massive abstraction over the underlying hardware (unutilised features: multilevel caches, tensor cores, SIMD, atomics).

I apologize if this comes across as harsh; I still find the technical implementation and the theoretical background very interesting. I'm simply not (yet) convinced of its usefulness in the real world.

Thanks for the feedback. Some corrections:

We do use multi-level caching, and you can achieve 5x higher performance by using it correctly. FFI is already implemented, just not published, because we want to release it together with graphics rendering, which I think will be really cool. Haskell/GHC uses graphs and trees too, and nobody would say it is not practical or useful. And while it is true that arrays are king, there are many SOTA algorithms implemented in Haskell (including compilers, type-checkers, solvers) precisely because they do not map well to arrays at all.

The main reason ICs are not fast is that nobody has ever done low-level optimization work on them. All previous implementations were terribly inefficient, and my own work is too, because I have spent all my time so far trying to get it to run *correctly* on GPUs, which was very hard. As you said yourself, there aren't even loops yet. So, how do we solve that? By adding the damn loops! Or do you think there is some inherent limitation preventing us from doing that? If you do, you'll be surprised.

HVM2 is finally a correct algorithm that scales. Now we'll optimize it for the actual low-level performance.

  • > HVM2 is finally a correct algorithm that scales.

    This, I think, is the key thing people are missing.

Maybe your low-level performance will never be as good as hoped, but for this sort of task, "the parallelisation part works and produces correct results" might not be sufficient, but it is absolutely necessary; and any optimisation work done before that point has such a high probability of having to be thrown away that under similar circumstances I wouldn't bother in advance either.

Re: 5, trees are fairly widely used (though not as most CS people would implement them) with Morton or Hilbert index ordering in things like the Fast Multipole Method and Barnes-Hut algorithms, which reduce O(n^2) pairwise ops to O(n) and O(n log n) respectively. BH is more common in astrophysics, FMM in chemical molecular dynamics.
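For readers unfamiliar with that ordering trick, here is a minimal sketch of 2D Morton (Z-order) encoding, using the standard bit-interleaving approach (function names are mine): interleaving the coordinate bits maps spatially nearby points to nearby indices, which lets tree codes like Barnes-Hut store their nodes in a cache-friendly linear array order.

```python
def part1by1(n: int) -> int:
    # Spread the low 16 bits of n apart, inserting a zero
    # bit between each original bit (classic "magic masks").
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(x: int, y: int) -> int:
    # Interleave the bits of x and y; y takes the odd bit
    # positions, x the even ones.
    return (part1by1(y) << 1) | part1by1(x)
```

Sorting particles by `morton2d(x, y)` of their quantized coordinates gives the linearized tree traversal order these algorithms exploit.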