Now, what we’ve been told about models is that they’re only as good as their training data. And so languages with gargantuan amounts of training data ought to fare best, right? Turns out that models kind of universally suck at Python and JavaScript (comparatively). The top performing languages (independent of model) are C#, Racket, Kotlin, and standing at #1 is Elixir.
In that case I encourage you to build Django with your LLM of choice.
Do what the Django team does, and be of service to the public!
I challenge you to prove that Django is sloppier than your LLM version.
Someone beat you to it: https://github.com/mymi14s/openviper
> Django in particular is optimized for LLMs
Meanwhile, a different take:
> Now, what we’ve been told about models is that they’re only as good as their training data. And so languages with gargantuan amounts of training data ought to fare best, right? Turns out that models kind of universally suck at Python and JavaScript (comparatively). The top performing languages (independent of model) are C#, Racket, Kotlin, and standing at #1 is Elixir.
https://news.ycombinator.com/item?id=47410349
I am using Claude Code with Elm, a very obscure language, and I find that it's amazing at it.
I wouldn’t call Elm obscure. It’s old, well understood, well documented, and has a useful compiler. This is nearly the perfect fit for an LLM.
50-day-old account, are you even a real person or a clawdbot? (Such are the times we live in.)