
Comment by JonChesterfield

2 days ago

"Outperforming languages in its class" is doing some heavy lifting here. Missing comparisons to a wasm interpreter, any of the Java or .NET interpreters, the MLs, any Lisps, etc.

Compiling to register bytecode is legitimate as a strategy, but it's not the fast one, as the author knows, so they probably shouldn't be branding the language as fast at this point.

It might be a fast language. Hard to tell from a superficial look; it depends on how the type system, alias analysis and concurrency models interact. It's not a fast implementation at this point.

> This means functions do not need to dynamically capture their imports, avoiding closure invocations, and are able to linearly address them in the import array instead of making some kind of environment lookup.

That is suspect; it could be burning function identifiers into the bytecode directly rather than emitting lookups into a table.
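
To make the contrast concrete, here's a minimal sketch (not the language's actual layout; every name and type below is invented for illustration) of an environment-lookup call versus a call whose callee index is burned into the bytecode operand:

```c
#include <stdint.h>

/* Sketch only: names and types here are invented for illustration. */
typedef int64_t Value;
typedef Value (*Fn)(Value);

/* Environment-lookup style: each call resolves the callee by name at run
 * time, e.g. through a table hanging off a closure/environment. */
typedef Fn (*EnvLookup)(void *env, uint32_t name_id);

Value call_via_lookup(void *env, EnvLookup lookup, uint32_t name_id, Value arg) {
    return lookup(env, name_id)(arg);     /* pays a lookup on every call */
}

/* Burned-in style: the compiler resolves the callee to a slot index once
 * and encodes it in the instruction operand, so executing the call is an
 * array load plus an indirect call, with no per-call name resolution. */
Value call_via_burned_index(Fn *imports, const uint8_t *operand, Value arg) {
    uint16_t idx = (uint16_t)(operand[0] | ((uint16_t)operand[1] << 8));
    return imports[idx](arg);
}
```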

Likewise, the switch at the end of each instruction is probably the wrong thing; take a look at a function per op with forced tail calls, keeping the interpreter state in the argument registers of the machine function call. There are some good notes on that from some wasm interpreters, and some context on why from LuaJIT if you go looking.
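
Roughly, that dispatch style looks like the sketch below: a toy three-opcode VM (entirely made up, not taken from the project) where each handler ends by tail-calling the next one via clang's musttail attribute (GCC 15 has an equivalent), so the ip/sp/vm state stays in argument registers and there is no central switch:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy instruction set, purely for illustration. */
enum { OP_PUSH, OP_ADD, OP_HALT };

typedef struct VM VM;
/* Every handler takes the interpreter state (ip, sp, vm) so it stays in
 * the machine's argument registers across the tail call. */
typedef int64_t (*op_fn)(const uint8_t *ip, int64_t *sp, VM *vm);

struct VM {
    const op_fn *dispatch;            /* opcode -> handler table */
};

/* Dispatch the next opcode as a forced tail call: no stack growth and no
 * central switch. Needs clang's musttail (GCC 15 supports it too). */
#define NEXT(ip, sp, vm) \
    __attribute__((musttail)) return (vm)->dispatch[*(ip)]((ip), (sp), (vm))

static int64_t op_push(const uint8_t *ip, int64_t *sp, VM *vm) {
    *++sp = (int64_t)ip[1];           /* operand byte follows the opcode */
    ip += 2;
    NEXT(ip, sp, vm);
}

static int64_t op_add(const uint8_t *ip, int64_t *sp, VM *vm) {
    sp[-1] += sp[0];
    --sp;
    ip += 1;
    NEXT(ip, sp, vm);
}

static int64_t op_halt(const uint8_t *ip, int64_t *sp, VM *vm) {
    (void)ip; (void)vm;
    return *sp;                       /* result is top of stack */
}

static int64_t run(const uint8_t *code) {
    static const op_fn table[] = { op_push, op_add, op_halt };
    VM vm = { table };
    int64_t stack[64];                /* handlers pre-increment before pushing */
    return vm.dispatch[code[0]](code, stack, &vm);
}

int main(void) {
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    printf("%lld\n", (long long)run(prog));   /* prints 5 */
}
```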

The class is "embeddable interpreted scripting language", which is not quite the same thing as just an interpreter.

Embedded interpreters are those designed to be embedded into a C/C++ program (often a game) as a scripting language. They typically have as few dependencies as possible, try to be lightweight, and focus on making it really easy to interop between the two contexts.
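
For a flavour of what that interop focus means in practice, here's a minimal embedding example using Lua as a representative guest: the C host exposes one function to scripts and reads one script-defined value back.

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* A host function exposed to scripts: takes a number, returns its square. */
static int host_square(lua_State *L) {
    lua_Number n = luaL_checknumber(L, 1);
    lua_pushnumber(L, n * n);
    return 1;                         /* number of return values */
}

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Expose the host function under a global name. */
    lua_pushcfunction(L, host_square);
    lua_setglobal(L, "square");

    /* Run a script that calls back into the host. */
    if (luaL_dostring(L, "result = square(7)") != LUA_OK) {
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    /* Pull a script-defined value back into C. */
    lua_getglobal(L, "result");
    printf("result = %g\n", lua_tonumber(L, -1));

    lua_close(L);
    return 0;
}
```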

The comparison hits many of the major languages for this use case, though it probably should have included Mono's interpreter mode, even if nobody really uses it since Mono got AOT.