Comment by adev_

4 days ago

> No, the problem for a language server is incremental performance, not batch performance

"When something is fast enough, people start to use it differently" - Linus Torvalds.

Make your parser fast enough to reparse the current file at 30 FPS and you no longer need incremental parsing, nor error recovery. That is probably part of the idea here.

Here that can go both ways - SIMD parsing can allow handling arbitrary changes in reasonable time for files below like maybe 100MB (i.e. everything non-insane), whereas incremental parsing can allow handling small changes in truly-arbitrary-size files in microseconds. A trade-off between better average-case and worst-case time. (of course the ideal thing would be both, but that's even more non-trivial)
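A quick back-of-envelope sketch of that trade-off (the throughput and per-edit numbers below are assumptions for illustration, not benchmarks): a batch parser's cost scales with file size, while an incremental parser's cost scales only with the edit.

```python
# Assumed numbers, purely illustrative:
BATCH_THROUGHPUT = 2e9   # bytes/sec, hypothetical SIMD batch parser
INCR_COST_MS = 0.05      # ms per small edit, hypothetical incremental parser

def batch_reparse_ms(file_bytes):
    """Full reparse: cost is proportional to file size."""
    return file_bytes / BATCH_THROUGHPUT * 1e3

for size_mb in (1, 100, 10_000):
    print(f"{size_mb:>6} MB file: batch {batch_reparse_ms(size_mb * 1e6):8.1f} ms, "
          f"incremental ~{INCR_COST_MS} ms")
```

Under these assumptions the batch parser stays within an interactive frame budget up to files of a few tens of MB, and the incremental parser wins by orders of magnitude only on huge files — i.e. better average case vs. better worst case, exactly as described.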

Absolutely.

Quite a long time ago I was working on a business application's reporting facility.

Generating a report used to take about an hour, and my changes reduced that to roughly 1 or 2 seconds.

This was HUGE. And changed the way users create these reports forever.

It’s not good enough. Incremental parsers can save trees across edits, and you can hang type information off of those trees, so you aren’t just saving parsing time, you are saving type-checking time as well. Even if you have a super fast batch parser, you are screwing yourself in other areas that are actually much more expensive.
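A minimal sketch of that point (hypothetical tree and checker, not any real parser's API): when a subtree is reused verbatim across an edit, the type information cached on it survives, so the expensive checker only revisits what actually changed.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    text: str
    type_info: Optional[str] = None   # filled in by the (expensive) checker
    children: list = field(default_factory=list)

def check_types(node, counter):
    """Pretend type checker: skips nodes that already carry a cached result."""
    if node.type_info is None:
        counter[0] += 1               # count expensive checks actually run
        node.type_info = f"typeof({node.text})"
    for child in node.children:
        check_types(child, counter)

# Initial parse and full check: 5 nodes, 5 checks.
f = Node("def f", children=[Node("body_f")])
g = Node("def g", children=[Node("body_g")])
module = Node("module", children=[f, g])
runs = [0]
check_types(module, runs)

# An edit touches only g: an incremental parser rebuilds g but reuses f,
# whose type_info is still attached. Only 3 nodes need re-checking.
new_g = Node("def g2", children=[Node("body_g2")])
module2 = Node("module", children=[f, new_g])
runs2 = [0]
check_types(module2, runs2)
print(runs[0], runs2[0])  # 5 checks initially, then only 3
```

A batch parser that rebuilds the whole tree would discard `f`'s cached `type_info` too, forcing all 5 checks again — which is the "screwing yourself in other areas" being described.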

  • Agreed. But all things considered:

    The runtime cost of type checking is highly dependent on the type system / meta-programming complexity of your language.

    For simple languages (Golang?) with a pretty well-designed module system, it should be doable to reach ~500KLOC/s (probably even 1MLOC/s in some cases), which is more than enough for interactive use.

    And complex languages with meta-programming capabilities are indeed slow to type check. But they are also a giant pain in the butt to cache without side effects for incremental parsing. It is 2025, and clangd / IntelliSense still fail to do that reliably for C++ codebases that rely heavily on templates.

    So it does not seem such a crazy approach to me: it trades a complexity problem for a performance one.