Faster interpreters in Go: Catching up with C++

11 days ago (planetscale.com)

It's crazy that this post seems to have stumbled across an equivalent to the Copy-and-Patch technique[0] used to create a Lua interpreter faster than LuaJIT[1].

[0]: https://sillycross.github.io/2023/05/12/2023-05-12/ [1]: https://sillycross.github.io/2022/11/22/2022-11-22/

The major difference is that LuaJIT Remake's Copy-and-Patch requires "manually" copying blocks of assembly code and patching values, while this post relies on the Go compiler's closures to create copies of the functions with runtime-known values.
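For the curious, here's a minimal Go sketch of that closure trick (the function and type names are made up for illustration, not taken from the article or gomacro): a value known at compile time is captured in the closure's environment, playing roughly the role of copy-and-patch's patched immediates:

```go
package main

import "fmt"

// evalFunc is the compiled form of an expression node.
type evalFunc func(env []int64) int64

// compileAdd specializes on what is known at compile time:
// the constant right operand is captured directly in the closure,
// so the "is it a constant?" check happens once, at compile time,
// not on every evaluation.
func compileAdd(left evalFunc, rightConst int64) evalFunc {
	return func(env []int64) int64 {
		return left(env) + rightConst // rightConst is "patched in"
	}
}

// compileVar returns a closure that reads a variable slot.
func compileVar(slot int) evalFunc {
	return func(env []int64) int64 { return env[slot] }
}

func main() {
	// Compile the expression "x + 41" once...
	expr := compileAdd(compileVar(0), 41)
	// ...then evaluate it many times with different environments.
	fmt.Println(expr([]int64{1})) // prints 42
}
```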

I think there's fascinating progress being made in this area. In the future, this technique (in some form) will be the go-to way to create new interpreted languages, and AST interpreters, switch-based bytecode VMs, and JIT compilation will be things of the past.

  • It’s not really copy-and-patch. The whole point of copy-and-patch is that you can inline it in your compilation output, and it’s a fast baseline interpreter because the individual builtin functions are optimized (via the compiler output you’re copying from) and inlined (which is why you need to patch: to update which registers are being used). In this model you JIT only the control flow, really, then inline the implementation of each bytecode operation (in contrast to Sparkplug [https://v8.dev/blog/sparkplug], which just calls a builtin instead of copying/patching). It’s still a JIT, which is vastly different from an interpreter.

    > JIT will be things of the past

    Sorry, no. JIT is not going anywhere. They mentioned in the article that a JIT would give better performance, just with more effort than they wanted to put in (a good tradeoff!). JITs power the Java, Wasm, and JavaScript VMs, and they are certainly the way to get the fastest code, because you can hand the CPU code for which it can do a much better job predicting the next instruction. With interpreters you’re often limited by the indirection of loads when looking up the next function to call, and generating code for the control flow outside of calling your “builtins” is precisely what Sparkplug does.

    At the end of the day, like most of engineering, it comes down to choosing the right tool for the job, which in this case is simplicity (often the right choice!). But that doesn’t mean it’s always the right choice: if browsers did this, JavaScript performance would tank compared to what we get today.

  • The JVM has had a template interpreter since the mid-90s; it’s not anything new, and template interpreters are only performant enough to provide acceptable execution speed until you JIT.

    Template interpreters are not a substitute for real JIT — JIT compilation isn’t going anywhere.

    • My understanding of most optimizing compilers is that this is an extremely common "last step" sort of optimization. A lot of the optimizing work is beating the code into a canonical form where these sorts of templates can be readily applied.

      It was also my understanding that that’s the point of “superoptimizers” [1], which look for these common patterns in something like LLVM IR to generate optimization targets for the mainline optimizer.

      [1] https://en.wikipedia.org/wiki/Superoptimization

  • > It's crazy that this post seems to have stumbled across an equivalent to the Copy-and-Patch technique[0] used to create a Lua interpreter faster than LuaJIT[1]

    > this post relies on the Go compiler's closures to create copies of the functions with runtime-known values

    To be clear, the technique of using closures like this is ancient in the world of LISP. You can see it in Paul Graham's books on LISP from the 90s, in LiSP in Small Pieces, and in many interpreters of 80s/90s vintage. I would say it is quite standard.

  • > switch-based bytecode VMs

    I am finding a switch-based bytecode interpreter to be very expedient on my machine. It seems that if the number of cases is kept small enough, your chances of getting a good branch prediction go up substantially. Something like a brainfuck interpreter runs extremely fast. In the worst case of randomly guessing, you are still going to time travel with a 12.5% success rate.

    • > In the worst case of randomly guessing, you are still going to time travel with a 12.5% success rate.

      Random guessing clearly isn't the worst case, then; a bad predictor can miss every single time.
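For context, this is the kind of switch-based dispatch loop being discussed, sketched in Go with a made-up three-opcode instruction set (the small case count is what keeps the dispatch branch predictable):

```go
package main

import "fmt"

// A tiny made-up bytecode; the opcodes are illustrative only.
const (
	opPush byte = iota // push the next byte as a value
	opAdd              // pop two values, push their sum
	opHalt             // stop and return the top of the stack
)

func run(code []byte) int {
	var stack []int
	for pc := 0; pc < len(code); pc++ {
		switch code[pc] { // one hard-to-predict branch per instruction
		case opPush:
			pc++
			stack = append(stack, int(code[pc]))
		case opAdd:
			n := len(stack)
			stack[n-2] += stack[n-1]
			stack = stack[:n-1]
		case opHalt:
			return stack[len(stack)-1]
		}
	}
	return 0
}

func main() {
	// Computes 2 + 3.
	fmt.Println(run([]byte{opPush, 2, opPush, 3, opAdd, opHalt})) // prints 5
}
```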

Compiling an expression to a tree of closures, and a list of statements to a slice of closures, is exactly how I optimized [gomacro](https://github.com/cosmos72/gomacro), my Go interpreter written in Go.

There are more tricks available there, for example unrolling the loop that calls the list of closures, and having a `nop` closure that is executed when there's nothing left to run but execution has not yet reached the end of the unrolled loop.
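A rough sketch of that unrolling trick in Go (not gomacro's actual code; the names are invented): pad the slice of closures with `nop` so its length is a multiple of the unroll factor, then call a fixed number of closures per loop iteration:

```go
package main

import "fmt"

type stmt func(env *int)

// nop is executed for the padding slots at the tail of the
// closure list when there is nothing left to run.
func nop(*int) {}

// runUnrolled executes stmts four at a time; the slice is padded
// with nop so its length is always a multiple of four.
func runUnrolled(stmts []stmt, env *int) {
	for len(stmts)%4 != 0 {
		stmts = append(stmts, nop)
	}
	for i := 0; i < len(stmts); i += 4 {
		stmts[i](env)
		stmts[i+1](env)
		stmts[i+2](env)
		stmts[i+3](env)
	}
}

func main() {
	env := 0
	add := func(n int) stmt { return func(e *int) { *e += n } }
	// Five statements get padded with three nops to reach length 8.
	runUnrolled([]stmt{add(1), add(2), add(3), add(4), add(5)}, &env)
	fmt.Println(env) // prints 15
}
```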

Compiling queries is one of those things that is both Make it Right and Make it Fast.

Because bind variables and Prepared Statements go hand in hand, you want to do everything you can to coax your users into doing the right thing. If each unique query has to be compiled before first use, that’s an extra inducement to using prepared statements properly.

I built a 'slice of function pointers' bytecode interpreter in Go in 2019 for the Algorand VM (blockchain smart contract stuff), and before that the same pattern in C for a toy JVM around 2005.

It's a good pattern!

The Algorand VM was focused on low overhead running thousands of tiny programs per second. Version 1 had no loops and a 1000 instruction limit.

The JVM was focused on low memory, towards possible embedded microcontroller use.

So, 'array of function pointers' is nothing new, but it is a good pattern.
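For anyone unfamiliar, the pattern looks roughly like this in Go (the opcodes are invented for illustration): a table of handler functions indexed by opcode, so dispatch is one load plus one indirect call instead of a switch:

```go
package main

import "fmt"

// vm holds the interpreter state for a tiny made-up instruction set.
type vm struct {
	acc int
	pc  int
}

type op func(v *vm, arg int)

// handlers is the "array of function pointers": the opcode
// indexes directly into it; no switch needed.
var handlers = []op{
	func(v *vm, arg int) { v.acc = arg },  // 0: load immediate
	func(v *vm, arg int) { v.acc += arg }, // 1: add immediate
	func(v *vm, arg int) { v.acc *= arg }, // 2: mul immediate
}

// run executes a program of (opcode, argument) pairs.
func run(code [][2]int) int {
	v := &vm{}
	for v.pc = 0; v.pc < len(code); v.pc++ {
		ins := code[v.pc]
		handlers[ins[0]](v, ins[1])
	}
	return v.acc
}

func main() {
	// load 2; add 3; mul 4  =>  (2+3)*4 = 20
	fmt.Println(run([][2]int{{0, 2}, {1, 3}, {2, 4}})) // prints 20
}
```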

The article mentions the struggle to speed up expression computation on MySQL due to its dynamic typing. I wonder if Postgres' more static type system works better in that regard.

Really interesting technique. I guess one of the (theoretical?) drawbacks is that the memory usage is not ideal: in a traditional bytecode compiler, the result of compilation is a very cache-friendly binary array, whereas here it's a vector of pointers to heap-allocated closures.

It would be ideal if Go would add support for computed goto, so that we could build direct threaded interpreters.

  • Is computed goto used for anything other than interpreter loops? Because if not, I would rather have a special "it looks like you're trying to implement an interpreter loop" case in the compiler than add new syntax.

  • I don't think "ideal" would be the exact word - I used to shudder at it in FORTRAN.

  • hard no - goto is a feature for code generators, not anything meant for human use

    • I never understood this argument. Without RAII, you can easily get screwed by resource leaks when returning early unless you use goto. In this regard, goto is expedient. How do C programmers avoid this problem without goto?

          #include <stdbool.h>
          #include <stdio.h>
          #include <stdlib.h>

          bool function_with_cleanup(void) {
              int *buffer1 = NULL;
              int *buffer2 = NULL;
              FILE *file = NULL;
              bool success = false;
              
              // Allocate first resource
              buffer1 = malloc(sizeof(int) * 100);
              if (!buffer1) {
                  goto cleanup;  // Error, jump to cleanup
              }
              
              // Allocate second resource
              buffer2 = malloc(sizeof(int) * 200);
              if (!buffer2) {
                  goto cleanup;  // Error, jump to cleanup
              }
              
              // Open a file
              file = fopen("data.txt", "r");
              if (!file) {
                  goto cleanup;  // Error, jump to cleanup
              }
              
              // Do work with all resources...
              
              success = true;  // Only set to true if everything succeeded
              
          cleanup:
              // Free resources in reverse order of acquisition
              if (file) fclose(file);
              free(buffer2);  // free() is safe on NULL pointers
              free(buffer1);
              
              return success;
          }


    • There are situations in practical code where goto is the only reasonable choice if you want to avoid spaghetti code. Absolutes have no place in software; you can find scenarios where almost every typically bad idea is actually a good idea.

      It is a tool, not a religion.


Nice article. Instructive to see the latest trends and direction for high performance interpreters. Thanks for sharing.