Comment by kcsrk
20 hours ago
What’s surprised me in the last few months is that agents are great at producing OCaml 5+ and OxCaml code, not much of which is out there in the training data. OxCaml’s strong types and modes seem to serve as great testable oracles to guide the agents.
I taught a course on concurrent programming based on OCaml 5 and OxCaml where almost all of the code in the teaching materials was vibe coded. I reviewed all of the code (because I was teaching it to a class of 50+ students) and frankly the agent writes (mostly) better O(x)Caml than I do.
I must confess to also using agents to do most of my OxCaml annotations: https://github.com/avsm/ocaml-claude-marketplace/tree/main/p...
There's not that much downside since the annotations only change the performance characteristics of the program, and the static type system rejects inconsistent annotations.
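To make the point concrete, here's a minimal sketch of the kind of annotation I mean, using OxCaml's locality mode (the exact surface syntax may differ across compiler versions; the commented-out function is a hypothetical illustration, not code from my materials):

```ocaml
(* [local_] marks the pair as stack-allocated within this region.
   The annotation only changes where the value lives (performance),
   not what the function computes. *)
let sum_pair () =
  let local_ p = (1, 2) in
  fst p + snd p

(* An inconsistent annotation is a compile-time error, not a runtime
   bug: returning [p] would let a local value escape its region, so
   the type checker rejects it outright. *)
(*
let escape () =
  let local_ p = (1, 2) in
  p  (* error: this value escapes its region *)
*)
```

That's why letting an agent propose the annotations is low-risk: a wrong guess either makes no semantic difference or fails to compile.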
LLMs have some bizarre facility with Hindley-Milner-based languages: they're basically automatically good at even very new ones like Gleam and nanolang. I have a never-released-anywhere hobby ML that compiles to Lua, and coding models can write it fine. Better than they write Python or PHP, for sure, and those have huge corpora in the training data.
I don't even have a good conjecture about why this is the case, but right now all my assisted coding is in MLs for this reason.