Comment by xienze
5 hours ago
> I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.
I think that's the point the original poster was making. There's basically zero chance this file was simply spit out from memory in an afternoon. It was obviously the result of a LOT of pre-planning and back-and-forth checking of the artifacts that Claude was generating incorrectly for one reason or another. So yeah, an extremely iterative process.
With rules as fine-grained as these, there were almost certainly many instances where hundreds of files are generated -> one particular file doesn't translate <X> correctly -> add a rule for <X> -> regenerate everything again -> crap, that rule broke a different file because of <Y> -> add a rule for <X if Y>, another for <X if not Y> -> regenerate everything again[0] -> repeat (see the sketch below). The token costs must have been out of this world.
[0]: Now I'm sure people will say, "why would you regenerate a file that generated correctly once? Just mark it off the list and move on." Well, when essentially 99.9999% of your codebase is generated artifacts, the tiny fraction that is actually human-understandable is now the spec, the source of truth for everything. It HAS to be able to essentially redo the entire process if you expect any level of maintainability going forward.
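Concretely, the loop being described would look something like this. This is a minimal sketch under assumed tooling: translate, validate, refine_rules, and RULES.md are all hypothetical names standing in for whatever the project actually used.

```python
# Hypothetical sketch of the iterate-and-regenerate loop; none of these
# names or files come from the actual project.
from pathlib import Path

def translate(source: Path, rules: str) -> str:
    """Stub: have the model translate one source file under the current rules."""
    return f"// generated from {source} under {len(rules)} bytes of rules\n"

def validate(artifact: str) -> bool:
    """Stub: whatever automated check flags a bad translation."""
    return "generated from" in artifact

def refine_rules(rules_file: Path, failures: list[Path]) -> None:
    """Stub: a human inspects the failures and adds or adjusts rules by hand."""
    with rules_file.open("a") as f:
        for src in failures:
            f.write(f"- rule added after {src} mistranslated\n")

def run_iteration(rules_file: Path, sources: list[Path]) -> list[Path]:
    """Regenerate EVERY artifact from the rules; return the files that fail."""
    rules = rules_file.read_text() if rules_file.exists() else ""
    return [src for src in sources if not validate(translate(src, rules))]

sources = list(Path("src").rglob("*.c"))
rules_file = Path("RULES.md")
# Each pass regenerates everything, because rules + sources are the only
# source of truth: a previously-good file can regress under a new rule.
while failures := run_iteration(rules_file, sources):
    refine_rules(rules_file, failures)
```

The expensive part is the outer loop: every rule change forces a full regeneration, which is exactly where the token costs blow up.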