Comment by mananaysiempre

10 hours ago

> That's exactly what I've written.

In that case I don’t get the logic. “It’s no longer worth the effort to handcode an interpreter, because that’d only be 30% faster” is a sentiment I could understand. “It’s no longer worth the effort to handcode an interpreter, because that’d be 30% slower” I can’t. Then it’s not a question of whether it’s worth the effort; it’s actively detrimental! (For this particular application, anyway.)
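
To be concrete about what “handcoding an interpreter” means here: writing the dispatch loop and the opcode handlers by hand (LuaJIT goes further and writes them in assembler). A toy C++ sketch of such a core, with invented opcodes that have nothing to do with the real LuaJIT or LJR bytecode:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Invented toy opcodes, purely for illustration; real VM bytecodes are
// far richer, and LuaJIT writes the equivalent of run() in assembler.
enum Op : uint8_t { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

void run(const std::vector<uint8_t>& code) {
    std::vector<int64_t> stack;
    size_t pc = 0;
    for (;;) {
        switch (code[pc++]) {            // the hand-written dispatch loop
        case OP_PUSH:                    // one immediate operand byte
            stack.push_back(code[pc++]);
            break;
        case OP_ADD: {                   // pop two values, push their sum
            int64_t b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case OP_PRINT:
            std::printf("%lld\n", (long long)stack.back());
            break;
        case OP_HALT:
            return;
        }
    }
}

int main() {
    // Computes and prints 2 + 3.
    run({OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT});
}
```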

> Also note that the Deegen "interpreter" uses a "baseline JIT".

What? No it doesn’t? Unless the paper is deliberately misleading, they are completely different modules (built from the same set of bytecode definitions). The paper explicitly describes them as implementing the first two tiers of a three-tier architecture: two different tiers. Not once does the description of the interpreter in section 6 mention JITting anything. Figures 26–27 show that, e.g., array3d on “LJR (interpreter only)” runs at 3× PUC Lua speed (same as “LuaJIT (interpreter only)”), while on “LJR (baseline JIT)” it runs at 7× PUC Lua speed (compared to 30× on “LuaJIT”).
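
To illustrate the “different modules, same bytecode definitions” point, here is a rough, invented C++ sketch of how one set of per-opcode semantics can feed two separate tiers. None of this is Deegen’s actual API (its real definitions are C++ that a build-time tool lowers into both tiers); it only shows the shape of the design:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

struct VM { std::vector<int64_t> stack; };

// One semantic function per opcode, written once and shared by both tiers.
// (Invented names; Deegen does this with build-time processing, not plain
// function pointers.)
using Semantics = void (*)(VM&, int64_t operand);

void op_push(VM& vm, int64_t v) { vm.stack.push_back(v); }
void op_add (VM& vm, int64_t)   {
    int64_t b = vm.stack.back(); vm.stack.pop_back();
    vm.stack.back() += b;
}

struct Insn { Semantics op; int64_t operand; };

// Tier 1, the interpreter: dispatches on every instruction, every time.
void interpret(VM& vm, const std::vector<Insn>& code) {
    for (const Insn& i : code) i.op(vm, i.operand);
}

// Tier 2, a toy stand-in for a baseline JIT: binds each instruction into a
// straight-line sequence of callables once, ahead of execution. A real
// baseline JIT (copy-and-patch in the paper) emits machine code instead.
std::vector<std::function<void(VM&)>> baseline_compile(const std::vector<Insn>& code) {
    std::vector<std::function<void(VM&)>> out;
    for (const Insn& i : code)
        out.push_back([i](VM& vm) { i.op(vm, i.operand); });
    return out;
}

int main() {
    std::vector<Insn> code = {{op_push, 2}, {op_push, 3}, {op_add, 0}};
    VM a, b;
    interpret(a, code);                          // tier 1
    for (auto& f : baseline_compile(code)) f(b); // tier 2
    // Both tiers compute 5 from the same bytecode definitions.
}
```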

English is not my native language; I probably should have written "that the speed-up *from* a manual assembler implementation compared to a generated interpreter is about 30%". The point is that the speed-up is small, which at least demonstrates that assembler programming apparently isn't worth it any longer.

> Unless the paper is deliberately misleading,

Apparently I misinterpreted their paper concerning the JIT: as others have pointed out, they indeed run separate measurements with the baseline JIT on and off, so presumably it was off for the measurement I referred to. All in all, it confirms that even in the JIT case assembler programming isn't worth it.