Comment by nr378

14 hours ago

The data doesn't really support the claim that FP is best. Elixir tops the table at 97.5%, but C# (88.4%) is OOP and scores almost identically to Racket (88.9%), and Ruby (81.0%) and Java (80.9%) both outscore Scala (78.4%), which is explicitly functional. If FP were the driver, Scala should beat those languages, but it doesn't.

It's tempting to argue that a more constrained language helps, but Rust (62.8%) vs Elixir (97.5%) is an interesting data point here. Both are highly constrained, but in different directions. Elixir's constraints narrow the solution space: you can't mutate, you can't use loops, and you must pattern match, so every constraint eliminates options and funnels you toward fewer valid solutions for the LLM to search through. Rust's constraints work differently: the borrow checker doesn't eliminate approaches so much as add a second axis of correctness that must be satisfied independently, on top of solving the actual problem, and the LLM has to get both right simultaneously.
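A minimal Rust sketch of that "second axis" (illustrative only, not from the benchmark): the commented-out loop is the natural way to write the transformation and is logically fine, but the borrow checker rejects it (E0502), so the accepted version has to be restructured around the borrow rules.

```rust
// Append every element greater than `min` back onto the vector.
fn extend_with_large(mut totals: Vec<i32>, min: i32) -> Vec<i32> {
    // The "obvious" version is rejected by the borrow checker (E0502):
    // iterating borrows `totals` immutably while `push` needs a
    // mutable borrow of the same vector.
    //
    // for t in &totals {
    //     if *t > min { totals.push(*t); }
    // }

    // Accepted version: separate the read from the write so the two
    // borrows never overlap. The algorithm was never the hard part;
    // satisfying the borrow rules is an extra, independent constraint.
    let extra: Vec<i32> = totals.iter().copied().filter(|&t| t > min).collect();
    totals.extend(extra);
    totals
}

fn main() {
    assert_eq!(extend_with_large(vec![1, 2, 3], 1), vec![1, 2, 3, 2, 3]);
    println!("ok");
}
```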

Overall, it seems like languages with strong conventions and ecosystems that narrow the solution space beat languages where there are a thousand ways to do something. Elixir has one build tool, one formatter, one way to do things. C#, Kotlin, and Java have strong ceremony and convention that effectively narrow how you write a program. Meanwhile JS, Python, PHP, and Perl offer endless choices, fragmented ecosystems, and rapidly shifting idioms, and they cluster at the bottom of the table.

Scala is explicitly multiparadigm and offers a lot of advanced OOP features. It also had a Python-like (though reportedly better-handled) 2 -> 3 transition, which deprecated some things, removed others, and added a bunch of new ones. Scala has always been complex, and right now it's also chaotic. It's a wonder the models can score that high with it, honestly.

Racket is a similarly large PL, with many abstractions built on the metaprogramming primitives it offers. Without looking at the generated code, it's hard to say anything, but I suspect the high score despite that might be because of the Scheme core of Racket: `racket/base` is a much smaller language than `racket`, so if the LLMs keep to it, it might narrow the solution space enough to show different results.

In general, I think you're half-right: the "solution space" size is a factor, but so is its shape - i.e. which features specifically are offered and how they interact. A compact and cohesive language design should yield better results than a merely reduced surface area. C is not a huge language, but the features it offers don't lend themselves well to writing correct code. Elixir is both relatively small and strongly steers a programmer towards safer idioms. Racket is big, but the advanced features are opt-in, while the baseline (immutable bindings, pure functions, expressive contracts) is similar to Elixir. Python is both huge and complex; "there should be one obvious way to do it" has always been a bit of a joke. Rust is incredibly complex - the idea is that the tooling should let you handle that complexity easily, but that requires agents; one-shotting solutions there won't work as well.

What if it's the quality of the data? The internet is full of terrible Python/JS, but probably not Elixir.

  • Seems plausible. I used to refer to StackOverflow before LLMs, and a good amount of the examples there were flawed code presented as working. If the LLM had less junk in its training data, it might benefit even though the volume of training data for that language is lower.

If we assume that the amount of training data matters at least a bit (which is a very reasonable assumption), I wouldn’t immediately discard the functional hypothesis. Scala’s score is almost equal to Java’s even though there’s probably something like two orders of magnitude less Scala than Java code in the wild. Similarly with C# and Racket.

  • Yep, I think you can reasonably argue that immutability + strong conventions are the most important dimensions (as opposed to FP vs. OOP, as much as I like FP and dislike OOP):

    Immutable by convention + Strong conventions: 91.3% - Elixir 97.5%, Kotlin 90.5%, Racket 88.9%, C# 88.4%

    Immutable by convention + Fragmented: 78.4% - Scala 78.4% (n=1)

    Mutable + Strong conventions: 77.5% - Ruby 81.0%, Swift 78.5%, Julia 78.5%, Dart 78.0%, Go 71.7%

    Mutable + Fragmented: 67.9% - Java 80.9%, R 75.8%, C++ 75.8%, Shell 72.9%, Python 65.3%, Perl 64.5%, TS 61.3%, JS 60.9%, PHP 53.8%

    (my grouping is somewhat subjective)
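As a sanity check, the group averages above can be recomputed from the quoted per-language scores (a quick Python sketch; the grouping itself is, as noted, subjective):

```python
# Per-language scores as quoted in the thread, bucketed by the
# parent comment's (subjective) immutability/convention grouping.
scores = {
    "Immutable + Strong": {"Elixir": 97.5, "Kotlin": 90.5, "Racket": 88.9, "C#": 88.4},
    "Immutable + Fragmented": {"Scala": 78.4},
    "Mutable + Strong": {"Ruby": 81.0, "Swift": 78.5, "Julia": 78.5, "Dart": 78.0, "Go": 71.7},
    "Mutable + Fragmented": {"Java": 80.9, "R": 75.8, "C++": 75.8, "Shell": 72.9,
                             "Python": 65.3, "Perl": 64.5, "TS": 61.3, "JS": 60.9, "PHP": 53.8},
}

# Unweighted mean per group, matching the 91.3 / 78.4 / 77.5 / 67.9 figures.
for group, langs in scores.items():
    mean = sum(langs.values()) / len(langs)
    print(f"{group}: {mean:.1f}% (n={len(langs)})")
```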

  • I agree with you, but, from the article: "The amount of training data doesn’t matter as much as we thought. Functional paradigms transfer well"

    Anyway, I tend to think you're right and the article is wrong in that sentence. (Or did I misinterpret something?)

    I think both the quantity and quality of the training data have a big influence on the results.

    • I took that to mean ≈ "Amount of training data isn't the big factor dwarfing all else." Depends on who "we" refers to, I guess. Back when LLM-generated code was new, I definitely saw predictions that LLMs would struggle with niche or rarely used languages. These days, the consensus among colleagues within earshot is that LLMs handle Rust much better than Python or C++ (corpus size and AutoCodeBench scores notwithstanding).