Comment by atgreen
13 hours ago
This is not my experience. I've been experimenting with something very similar to vera. However, my language transpiles into multiple target languages (Java, TypeScript, Common Lisp, Rust, C++, Python, C#, and Swift). The transpiler is written in the language itself, with a separate bootstrap transpiler written in Common Lisp. My point is that Claude, at least, is extremely capable of writing decent code in my new language with barely any prompting: just minimal guidance on the language itself, and no examples.
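To make the shape concrete, here's a toy sketch of the bootstrap in Python (everything here, including the one-keyword "language", is invented for illustration; the real transpiler is far more involved):

```python
# Stage 0: bootstrap transpiler written in the host language (Common Lisp
# in my setup; plain Python here). The toy language is just Python with
# FN instead of def, so "codegen" is a single keyword rewrite.
def stage0_transpile(src: str) -> str:
    return src.replace("FN", "def")

# The self-hosted transpiler, written in the toy language itself. The
# "F" + "N" split keeps the keyword out of its own string literal.
TRANSPILER_SRC = '''\
FN transpile(src):
    return src.replace("F" + "N", "def")
'''

stage1 = stage0_transpile(TRANSPILER_SRC)  # bootstrap compiles the transpiler
ns = {}
exec(stage1, ns)                           # load the stage-1 transpiler
stage2 = ns["transpile"](TRANSPILER_SRC)   # stage 1 recompiles its own source

assert stage1 == stage2  # fixed point: the language now compiles itself
```

Once that fixed point holds, the Common Lisp bootstrap is only needed to get off the ground.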
That's simply not true; it's just not the way LLMs work. LLMs are not magic.
LLMs are stateless: they don't "remember" your bespoke programming language's manual and examples between completion calls, so you have to include all of that material with every single call. That balloons your token usage, reduces how much useful work you can do with the remaining tokens and attention, and wastes tokens, electricity, and money.
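To put rough numbers on it (all values below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope cost of re-sending a bespoke language manual on
# every completion call. Every number here is an illustrative assumption.

MANUAL_TOKENS = 15_000      # assumed size of the language manual + examples
CONTEXT_WINDOW = 128_000    # assumed model context window, in tokens
CALLS_PER_SESSION = 200     # assumed completion calls in a coding session

overhead_per_call = MANUAL_TOKENS / CONTEXT_WINDOW
total_overhead = MANUAL_TOKENS * CALLS_PER_SESSION

print(f"{overhead_per_call:.0%} of every context window spent on the manual")
print(f"{total_overhead:,} extra input tokens per session, before any work")
```

Whatever the exact figures, that overhead is paid again on every call, because nothing persists between them.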
That isn't anywhere near as effective or efficient as drawing on the LLM's pre-existing training: billions of lines of well-known programming languages, plus their manuals, tutorials, examples, code bases, Stack Overflow discussions, books, GitHub repos, PRs, and so on.
What is your extraordinary evidence for your extraordinary claims? Have you empirically measured how well it works, or is it just vibes and handwaving?