Comment by ziedaniel1

10 months ago

Very cool idea - but unless I'm missing something, this seems very slow.

I just wrote a simple loop in C++ to sum up 0 to 2^30. Single-threaded, with no optimizations, it runs in 1.7s on my laptop -- matching Bend's performance on an RTX 4090! With -O3 it vectorizes the loop to run in less than 80ms.

    #include <iostream>

    int main() {
      int sum = 0;
      for (int i = 0; i < 1024*1024*1024; i++) {
        sum += i; 
      }
      std::cout << sum << "\n";
      return 0;
    }

Bend has no tail-call optimization yet. It is allocating a billion-deep call stack, while C is just looping. If you compare against a C program that does actual allocations, Bend will most likely be faster with a few threads.

Bend's codegen is still abysmal, but these are all low-hanging fruit. Most of the work went into making the parallel evaluator correct (which is extremely hard!). I know that sounds like "trust me", but the single-thread performance will get much better once we start compiling procedures, generating loops, etc. It just hasn't been done yet.

(I wonder if I should have waited a little bit more before actually posting it)

  • > (I wonder if I should have waited a little bit more before actually posting it)

    No. You built something that’s pretty cool. It’s not done yet, but you’ve accomplished a lot! I’m glad you posted it. Thank you. Ignore the noise and keep cooking!

  • >> Bend has no tail-call optimization yet.

    I've never understood the fascination with tail calls and recursion among computer science folks. Just write a loop, it's what it optimises to anyway.

    • Computer science traditionally focuses on algorithms. Divide-and-conquer algorithms are frequently much more clearly expressed as recursion. Quicksort is the classic example, also various tree operations.

    • I've never understood the fascination with programming languages among computer science folks. Just write machine code directly, it's what it compiles to anyway.

  • If they’re low-hanging fruit, why not do that before posting about it publicly? All that happens is that you push yourself into a nasty situation: people get a poor first impression of the system and are less likely to trust you the second time around, and in the (possibly unlikely) event that the problems turn out to be harder than you expect, you wind up in the really nasty situation of having to deal with failed expectations and pressure to fix them quickly.

    • I agree with you. But then there's the whole "release fast, don't wait until it's perfect" philosophy. And then there's the fact that people using it will guide us toward iteratively building what's needed. I'm still trying to find that balance; it isn't so easy. This release comes right after we finally managed to compile it to GPUs, which is a huge milestone people could care about - but there are almost no micro-optimizations yet.

You might want to double-check with objdump whether the loop is actually vectorized, or whether the compiler just optimizes it out. Your loop also performs signed integer overflow, which is UB in C++; the compiler could legally output anything. If you want to avoid the UB, declare sum as unsigned (unsigned integer overflow is well-defined); the optimization will still happen, but at least you'll be guaranteed a correct result.

If compiled with -O3 on clang, the loop is entirely optimized out: https://godbolt.org/z/M1rMY6qM9. Probably not the fairest comparison.

  • Exactly, this kind of thing always happens with these loops, which is why I think programs that allocate are fairer. But then people point out that the C allocator is terrible, so we can't make that point :')

    • Might be worth seeing if e.g. jemalloc is enough less terrible for the sort of examples you're looking at to help with that.

      (though note that I mention jemalloc because I've had "huh, this C code now magically runs faster" experiences with it, make no claim it's the right one to look at, and am very sure that I don't know what I'm talking about sufficiently wrt allocators to be able to recognise the right one if it bit me on the leg)

I think the point is that Bend is a much higher-level language than C++. But to be fair: I also may be missing the point!

  • The point is that Bend parallelizes everything that can be parallelized without developers having to do that themselves.

  • here is the same loop finishing in one second on my laptop, single-threaded, in a very high-level language, q:

      q)\t sum til floor 2 xexp 30
      1031