Comment by cmrdporcupine
10 hours ago
Not my experience, honestly. With a good code base for it to explore and good tooling, and a really good prompt I've had excellent results with frankly quite obscure things, including homegrown languages.
As others said, the key is feedback and prompting. In a model with long context, it'll figure it out.
Yeah, I've had Claude work on my buggy, incomplete Ruby compiler, written (mostly) in Ruby, which uses an s-expression-like syntax with a custom "mini language" to implement low-level features that can't be done (or are impractical to do) in pure Ruby. It had only minor problems with the s-expression language, mostly fixed with a handful of lines in CLAUDE.md (and which were, frankly, mostly my fault for making the language itself somewhat inconsistent). E.g. when it wrote a bigint implementation, I had to "tell it off" for too readily resorting to the s-expression syntax, since it seemed to "prefer it" over writing high-level code in Ruby.
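For anyone unfamiliar with the pattern being described: a minimal, hypothetical sketch (not the commenter's actual compiler) of how a Ruby compiler might tokenize and read an embedded s-expression "mini language" into nested arrays that a code generator could then walk. The surface syntax and operator names (`assign`, `add`, `index`) are invented for illustration.

```ruby
# Hypothetical sketch of an s-expression reader for a compiler's
# embedded low-level "mini language". Not the commenter's real syntax.
def parse_sexp(src)
  # Split into "(", ")", and bare atoms.
  tokens = src.scan(/[()]|[^\s()]+/)
  read = lambda do
    tok = tokens.shift
    if tok == "("
      list = []
      list << read.call until tokens.first == ")"
      tokens.shift # consume the closing ")"
      list
    else
      # Integers become Integer, everything else a Symbol.
      tok =~ /\A-?\d+\z/ ? tok.to_i : tok.to_sym
    end
  end
  read.call
end

p parse_sexp("(assign result (add (index self 0) other))")
# => [:assign, :result, [:add, [:index, :self, 0], :other]]
```

The resulting nested arrays are trivially pattern-matched by a code generator, which is part of why even older models pick such languages up from a couple of examples: the structure is uniform and fully visible in the source.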
Even 3 years ago, GH Copilot, hardly the most intelligent of LLMs, was writing full programs in bytecode for my custom VM just by looking at a couple of examples.
That's when I smelled that things were getting a little crazy.
But isn't this inefficient, since the agent has to "bootstrap" its knowledge of the new language every time its context window is reset?
No, it gets it “for free” just by looking around when it is figuring out how to solve whatever problem it is working on.