Comment by rapind
17 hours ago
> But writing Haskell, it's pretty bad,
I’m surprised by this. Most likely significant whitespace is a big part of the problem (LLMs seem horrible at whitespace). A functional language with types has been a win for me with Gleam.
But LLMs do Python quite well, so whitespace isn’t necessarily a problem.
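For what it's worth, here is a minimal Python sketch (my own illustration, not from the thread or any benchmark) of why indentation slips are risky for generated code: moving one line by a single level changes the program's meaning without producing a syntax error, so a model that fumbles whitespace can emit silently wrong code.

```python
# Same tokens, two meanings: only the indentation of `return` differs.

def count_evens_correct(xs):
    n = 0
    for x in xs:
        if x % 2 == 0:
            n += 1
    return n          # dedented: returns after the whole loop

def count_evens_buggy(xs):
    n = 0
    for x in xs:
        if x % 2 == 0:
            n += 1
        return n      # indented one level too far: returns on the first iteration

print(count_evens_correct([1, 2, 3, 4]))  # 2
print(count_evens_buggy([1, 2, 3, 4]))    # 0 — early return, still valid Python
```

Both functions parse and run; only a test (or a careful reader) catches the second one, which is arguably worse than a hard parse error in a layout-free language.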
Yes, a point supported by the Vera benchmark: https://github.com/aallan/vera-bench
The benchmark is strange, though: it reports single-run results (the author acknowledges this is unreliable) and uses older models like GPT-4o and Opus 4, even though the benchmark is from 2026.