Comment by Rochus

5 days ago

> We implement LuaJIT Remake (LJR)[...] using Deegen. Across 44 benchmarks, LJR's interpreter is on average 179% faster than the official PUC Lua interpreter, and 31% faster than LuaJIT's interpreter.

Well, LuaJIT in JIT mode is on average about a factor of 3 faster than LuaJIT in interpreter mode (up to ten times, depending on the benchmark). And LuaJIT in JIT mode is e.g. a factor of 8 faster on average than PUC Lua 5.1 (see e.g. http://software.rochus-keller.ch/are-we-fast-yet_Lua_results... for more information). So if Deegen is a factor of 2 faster than PUC Lua, or a factor of 1.3 faster than the LuaJIT interpreter, this is not very impressive. But since the LuaJIT interpreter is written in assembler, we might conclude that the speed-up of a manual assembler implementation compared to a generated interpreter is about 30%. Therefore it's no longer worth the effort to implement an interpreter in assembler (even less if we consider cross-platform migration costs). But on the other hand, the Deegen-generated VM is significantly slower than e.g. the Mono VM or CoreCLR in JIT mode (see e.g. https://github.com/rochus-keller/Oberon/blob/master/testcase...).

> we might conclude that the speed-up of a manual assembler implementation compared to a generated interpreter is about 30%[, t]herefore it's no longer worth the effort to implement an interpreter in assembler

You got that backwards. The paper reports Deegen’s generated interpreter is faster than LuaJIT’s handwritten one by 30%. That’s actually pretty impressive, and achieved in a remarkably straightforward way[1]. TL;DR: instruction dispatch via tail calls avoids the pessimized register allocation that you get for a huge monolithic interpreter loop.

[1] https://sillycross.github.io/2022/11/22/2022-11-22/

  •     # decode next bytecode opcode
        movzwl      8(%r12), %eax
        # advance bytecode pointer to next bytecode
        addq        $8, %r12
        # load the interpreter function for next bytecode
        movq        __deegen_interpreter_dispatch_table(,%rax,8), %rax
        # dispatch to next bytecode
        jmpq        *%rax
    

    You may reduce that even further by pre-decoding the bytecode: you replace each bytecode by the address of its implementation, and then do (with GCC's computed-goto extension)

      goto *program_bytecodes[counter];
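
    A minimal sketch of that direct-threading idea (toy opcodes and names invented here for illustration; the &&label/computed-goto syntax is a GCC/Clang extension):

      #include <cstdint>
      #include <cstdio>

      // Toy direct-threaded interpreter: each bytecode slot holds the
      // address of its handler, so dispatch is a single indirect goto.
      int64_t run() {
          static void* handlers[] = { &&op_push1, &&op_add, &&op_halt };

          // "Pre-decoded" program: PUSH1, PUSH1, ADD, HALT.
          void* program[] = { handlers[0], handlers[0], handlers[1], handlers[2] };

          void** pc = program;           // one pointer per instruction
          int64_t stack[16];
          int64_t* sp = stack;

          goto **pc++;                   // initial dispatch

      op_push1:
          *sp++ = 1;
          goto **pc++;                   // jump straight to the next handler
      op_add:
          sp[-2] += sp[-1]; --sp;
          goto **pc++;
      op_halt:
          return sp[-1];
      }

      int main() { std::printf("%lld\n", (long long)run()); }  // prints 2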

    • I've been playing around with this, and it's worth noting that pre-decoding the bytecode means every instruction (excluding operands) is the width of a pointer (8 bytes on x86-64), so far fewer instructions fit into cache. E.g. my opcodes are one byte, so byte-sized opcodes pack 8x more instructions into the same cache space (a 64-byte cache line holds 64 one-byte opcodes but only 8 pointers). I haven't had time to compare it in benchmarks to see what the real-world difference is, but it's worth keeping in mind.

      Somewhat off topic, looking at that assembly... mine compiles to (for one of the opcodes):

          movzx  eax,BYTE PTR [rdi]
          lea    r9,[rip+0x1d6fd]        # 2ae30 <instructions_table>
          mov    rax,QWORD PTR [r9+rax*8]
          inc    rdi
          jmp    rax
      

      (also compiled from C++ with clang's musttail annotation)
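
      For reference, a self-contained sketch of that tail-call dispatch pattern (toy opcodes and invented names; compile with clang++, since [[clang::musttail]] is Clang-specific):

          #include <cstdint>
          #include <cstdio>

          struct State { const uint8_t* pc; int64_t* sp; };
          using Handler = void (*)(State&);
          extern const Handler instructions_table[];  // one handler per opcode byte

          enum : uint8_t { OP_PUSH1, OP_ADD, OP_HALT };

          static void dispatch(State& s) {
              uint8_t op = *s.pc++;                   // movzx eax, BYTE PTR [rdi]; inc rdi
              [[clang::musttail]] return instructions_table[op](s);  // jmp rax
          }

          static void op_push1(State& s) {
              *s.sp++ = 1;
              [[clang::musttail]] return dispatch(s);
          }
          static void op_add(State& s) {
              s.sp[-2] += s.sp[-1]; --s.sp;
              [[clang::musttail]] return dispatch(s);
          }
          static void op_halt(State& s) {
              std::printf("%lld\n", (long long)s.sp[-1]);
          }

          const Handler instructions_table[] = { op_push1, op_add, op_halt };

          int main() {
              const uint8_t code[] = { OP_PUSH1, OP_PUSH1, OP_ADD, OP_HALT };
              int64_t stack[16];
              State s{ code, stack };
              dispatch(s);                            // prints 2
          }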

  • They still have register allocation issues:

    > Register shuffling to fulfill C calling convention when making a runtime call.

    Not sure how common that is in their benchmarks, because it's tempting to handle everything frequently used as bytecode.
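
    A rough sketch of where that shuffling comes from (hypothetical opcode and helper; the extern dispatch table is only there to make the fragment compile):

      #include <cstdint>

      struct State { const uint8_t* pc; int64_t* sp; };
      using Handler = void (*)(State&);
      extern const Handler table[];            // dispatch table, defined elsewhere

      int64_t runtime_add_slowpath(int64_t a, int64_t b) { return a + b; }  // stand-in

      void op_add_generic(State& s) {
          // A plain (non-tail) C call: operands must be moved into rdi/rsi per
          // the SysV calling convention, and interpreter state kept in
          // registers (pc, sp, ...) must be spilled around the call.
          s.sp[-2] = runtime_add_slowpath(s.sp[-2], s.sp[-1]);
          --s.sp;
          uint8_t op = *s.pc++;
          [[clang::musttail]] return table[op](s);
      }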

  • > Deegen’s generated interpreter is faster than LuaJIT’s handwritten one by 30%

    That's exactly what I've written. But apparently I got that wrong with their baseline JIT.

    • > That's exactly what I've written.

      In that case I don’t get the logic. “It’s no longer worth the effort to handcode an interpreter because that’d only be 30% faster” is a sentiment I could understand. “It’s no longer worth the effort to handcode an interpreter because that’d be 30% slower” I can’t. It’s not that it’d be worth or not worth the effort; it’s that it’s actively detrimental! (For this particular application anyway.)

      > Also note that the Deegen "interpreter" uses a "baseline JIT".

      What? No it doesn’t? Unless the paper is deliberately misleading, they are completely different modules (utilizing the same set of bytecode definitions). The paper explicitly describes them as implementing the first two tiers of a three-tier architecture—two different tiers. Not once does the description of the interpreter in section 6 mention JITting anything. Figures 26–27 show e.g. array3d on “LJR (interpreter only)” is at 3× PUC Lua speed (same as “LuaJIT (interpreter only)”), while on “LJR (baseline JIT)” it’s at 7× PUC Lua speed (compared to 30× on “LuaJIT”).
