Comment by tombert
10 hours ago
Something I just started doing yesterday, and I'm hoping it catches on, is that I've been writing the spec for what I want in TLA+/PlusCal at a pretty high level, and then telling Codex to implement exactly to the spec. I tell it not to deviate from the spec at all and to be as uncreative as possible.
Since it sticks pretty close to the spec and since TLA+ is about modifying state, the code it generates is pretty ugly, but ugly-and-correct code beats beautiful code that's not verified.
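For illustration, here's a made-up Rust sketch (not my actual code) of the shape this tends to produce: the spec's variables become one mutable state struct, and each PlusCal action becomes a guarded update function, with no abstraction beyond what the spec names:

    // Hypothetical example of spec-faithful translation. The spec's
    // `variables pc = "start", count = 0;` becomes a state struct.
    #[derive(Debug)]
    struct State {
        pc: &'static str,
        count: u64,
    }

    // One function per action: check the guard, then apply the update.
    fn increment(s: &mut State) {
        if s.pc == "start" && s.count < 10 {
            s.count += 1;
            s.pc = if s.count == 10 { "done" } else { "start" };
        }
    }

    fn main() {
        let mut s = State { pc: "start", count: 0 };
        while s.pc != "done" {
            increment(&mut s);
        }
        println!("{:?}", s); // State { pc: "done", count: 10 }
    }

It's mechanical and ugly, but each function lines up one-to-one with an action the model checker has already verified.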
It's not perfect; something that naively adheres to a spec is rarely optimized, and I've had to go in and replace stuff with Tokio or Mio, or optimize a loop, because the resulting code is too slow to be useful. Sometimes the code is just too ugly for me to put up with and I need to rewrite it, but the time that takes is generally considerably lower than if I were doing the entire translation myself.
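As an invented example of the kind of swap I mean (assuming a tokio = "1" dependency with the "full" feature; the channel logic is made up, not from a real spec): the literal translation of an "await message" step busy-polls a queue, and swapping in Tokio's async channel removes the spin without touching the verified logic:

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<u64>(32);
        tokio::spawn(async move {
            for i in 0..3 {
                tx.send(i).await.expect("receiver alive");
            }
        });
        // recv().await parks the task instead of spinning the way a
        // literal translation of the spec's polling step would.
        while let Some(msg) = rx.recv().await {
            println!("got {msg}");
        }
    }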
The reason I started doing this: the stuff I've been experimenting with lately has been lock-free data structures, and I guess what I'm doing is novel enough that Codex doesn't really generate what I want; it will still use locks and lock files, and when I complain it will do the traditional "You're absolutely right" and then proceed to do everything with locks anyway.
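For concreteness, this is the lock-free shape I'm after: a compare-and-swap retry loop on an atomic, no Mutex anywhere. (A toy counter, invented for illustration; not one of the actual data structures I'm working on.)

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn increment(counter: &AtomicU64) {
        let mut current = counter.load(Ordering::Relaxed);
        loop {
            // On contention, retry with the freshly observed value
            // instead of taking a lock.
            match counter.compare_exchange_weak(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Relaxed,
            ) {
                Ok(_) => return,
                Err(observed) => current = observed,
            }
        }
    }

    fn main() {
        let counter = Arc::new(AtomicU64::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let c = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        increment(&c);
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(counter.load(Ordering::SeqCst), 4_000);
    }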
In a sense, this is close to the ideal case that I actually wanted: I can focus on the high-level mathy logic while I let my metaphorical AI intern deal with the minutiae of actually writing the code. Not that I don't derive any enjoyment out of writing Rust or something, but the code is mostly an implementation detail to me. This way, I'm kind of doing what I'm supposed to be doing, which is "formally specify first, write code second".
Interesting; just the other day I tried asking whether iterating in Haskell or Prolog would help both convergence speed and token use. I wish there were a group studying how to do proper engineering with LLMs without losing the modeling/verification aspect.
This is how I'm developing most of my code these days as well. My opinions are pretty similar to the pig book author's: https://martin.kleppmann.com/2025/12/08/ai-formal-verificati....
For the first time I might be able to make a case for TLA+ to be used in a workplace. I've been trying for the last nine years, with managers who constantly say they'll "look into it".
You might find success with having the LLM contribute to the spec itself. It suddenly started to work with the most recent frontier models, to the point that the economics of writing them shifted, with each turn getting 10-100x cheaper to get right.