Comment by hinkley
1 year ago
I’ve seen a few languages over the years drop and reacquire JIT layers. Some of it has to do with programmer skill and lessons learned, but some is also down to CPU generation.
Like everything else in CS, when the cost balance shifts for different kinds of operations, the best algorithm can shift back to something we haven’t used in fifteen or twenty years. It contributes a lot to the faddishness of programming. Just because we are bringing something back doesn’t mean there’s no reason for it. But we forget the reasons it wasn’t a panacea last time, so that’s still a problem.
If your main JIT gets faster or slower, then the cost-benefit for running it changes, so the threshold to trigger it gets adjusted, and now the amount of code that runs in the other tiers shifts, which might make the amortized cost of that tier worse. It’s like balancing a double pendulum.
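To make that cost-benefit loop concrete, here’s a toy model (not any real VM’s policy; all names and numbers are invented for illustration). Each function is interpreted until it crosses a call-count threshold, then pays a one-time compile cost and runs at JIT speed afterward. Sweeping the threshold over a workload of many cold functions and a few hot ones shows how the optimal threshold moves when compile cost changes:

```python
# Toy two-tier cost model: interpret until `threshold` calls, then
# pay a one-time compile cost and run JIT-compiled code after.
# All costs are abstract time units, chosen only for illustration.

def total_cost(calls, threshold, interp_cost, compile_cost, jit_cost):
    """Total time for one function invoked `calls` times under the scheme."""
    if calls <= threshold:
        return calls * interp_cost  # never got hot enough to tier up
    return (threshold * interp_cost            # warm-up in the interpreter
            + compile_cost                     # one-time tier-up cost
            + (calls - threshold) * jit_cost)  # fast path afterward

def best_threshold(workload, interp_cost, compile_cost, jit_cost, max_t=200):
    """Brute-force the tier-up threshold minimizing total time for a workload
    (a list of per-function call counts)."""
    return min(range(max_t + 1),
               key=lambda t: sum(total_cost(c, t, interp_cost, compile_cost, jit_cost)
                                 for c in workload))

# 100 cold functions (10 calls each) plus 5 hot ones (100,000 calls each).
workload = [10] * 100 + [100_000] * 5

# Expensive compiler: wait until a function proves itself hot.
print(best_threshold(workload, interp_cost=10, compile_cost=50_000, jit_cost=1))   # 10

# Cheap, "fast and dirty" compiler: the best threshold collapses to zero,
# i.e. compile everything immediately and skip the interpreter tier.
print(best_threshold(workload, interp_cost=10, compile_cost=50, jit_cost=1))       # 0
```

The second result is exactly the regime described below: once the cheapest tier compiles fast enough, the interpreter stops paying for itself.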
If you can make a JIT tier fast and dirty enough, you can skip the interpreter entirely. And, from my armchair position, it seems that the cognitive load of bookkeeping between the interpreter and, say, two JITs is high enough that a few languages have mothballed the interpreter and used a JIT optimized for compile time, not output speed.
And I don’t recall what language, but I’m pretty sure at least one team that did this ended up dropping an intermediate compiler as well, because of that balancing act I mentioned above. It was better to double down on two than to try to handle three.