> The empirical literature shows that models are particularly vulnerable to naming-related errors like choosing misleading names, reusing names incorrectly, and losing track of which name refers to which value.
I think Vera might be missing something here. In my experience, LLMs code better the less of a mental model you need, vs the more is in text on the page.
Go – very little hidden, everything in text on the page, LLMs are great. Java, similar. But writing Haskell, it's pretty bad, Erlang, not wonderful. You need much more of a mental model for those languages.
For Vera, not having names removes key information that the model would have, and replaces it with mental modelling of the stack of arguments.
My Spidey sense was tingling when I saw that, too. An additional issue is how humans are supposed to read the code at all so that they can provide help to the LLM if it’s off track. If the code is only usable by models, the models need to be good enough to deal with binary feedback (“Code doesn’t work.”). The human won’t be able to read the code and steer the model. Given the levels of steering required today, that makes me quite nervous.
I guess the point is that there is no need for humans to read the code.
How often do you read assembly to check what your compiler is doing?
There is a niche of people doing it when they have special constraints, but that's a tiny niche.
> Go – very little hidden, everything in text on the page, LLMs are great. Java, similar. But writing Haskell, it's pretty bad, Erlang, not wonderful. You need much more of a mental model for those languages.
I don't think that follows. It could just be that there is way more Go and Java code to train on than Haskell and Erlang. Haskell's terseness and symbol-named operators probably don't help either.
Hmm, interesting. Are you speaking from experience for Haskell? I'm a Haskell developer since 2017, and have been using LLMs to write code (including Haskell) since 2024. In my experience, LLMs perform much better generating Haskell/Rust code over Python/Javascript.
Same experience. Being able to iterate on compile errors is helpful.
This will serve as an interesting empirical test, then: will LLMs do better with Vera than with Go or other languages? The testing so far seems inconclusive (https://github.com/aallan/vera-bench), but the authors make this interesting observation:
"No LLM has ever been trained on Vera. There are no Vera examples on GitHub, no Stack Overflow answers, no tutorials — the language was created after these models' training cutoffs. Every token of Vera code in these results was written by a model that learned the language entirely from a single document (SKILL.md [https://veralang.dev/SKILL.md]) provided in the prompt at evaluation time."
If LLMs do much better with Vera (or something like it) than with traditional languages, we may be entering a time when most machine-written code will be difficult for humans to review - but maybe that ship has already sailed.
I too have found the models do well with Go. I will say that despite the backwards-compatibility guarantee, library API changes, shifting notions of what counts as "good" patterns, and new language additions do add some friction to the experience. It almost always works, but the code can be a bit inconsistent in how it shows up.
If it's incomprehensible to humans, it must be perfect for LLMs. Never mind the training.
> But writing Haskell, it's pretty bad,
I’m surprised by this. Most likely significant white space is a big part of the problem (LLMs seem horrible at white space). Functional with types has been a win for me with Gleam.
But LLMs do Python quite well, so white space isn’t necessarily a problem.
I'm curious what issues you had with Haskell? I have had the opposite experience and find them dreadful at Java et al.
Surely, denser languages should be better for LLMs?
The context window also limits how deeply the model can "think", and it does this in natural language. So a language suited to LLMs would have balanced density: if it's too dense, the model spends many tokens working through the logic; if it's too sparse, it spends many tokens reading and writing the code.
I think that in the context of already-trained LLMs, the languages most suited to LLMs are also the ones most suited to humans. Besides just having the most code to train on, humans face similar limitations: if the language is too dense, they have to be very careful in considering how to do something; if it's too sparse, the code becomes a pain to maintain.
Density is a double-edged sword. On the one hand you want to minimise context usage, but on the other hand more text on the page means more that the LLM can work with.
My (uninformed) speculation is that you want resilience and error correction, which implies some level of redundancy rather than pure density.
The same logic applies to comments. No comments are better than wrong comments.
I've found Claude Code to be amazing at Elm, so your comment about Haskell seems strange to me.
There are many problems we will need to address in the future. A programming language that is easy for machines to write but hard for humans to read isn’t one of them.
This isn't that different from circuit languages.
Whittling everything down so the language is relatively 1-to-1 with the structure of the compute. With little or no extraneous decoration.
Why would anybody use a vibe-coded and vibe-designed language which effectively does not exist yet, instead of an established one with such features, like Scala?
https://arxiv.org/html/2510.11151v1
Also, isn't it an advantage for LLM coding to use an existing language that has a lot of code that LLMs have already stol... I mean ingested?
Depends. A professor told me AI is really good at writing bad pandas code because it's seen a lot of bad pandas code, so starting from scratch isn't necessarily the worst thing.
Exactly! Completely new languages without large amounts of reference material are terrible for LLMs.
Providing a black box to the black box to reason with. We are screwed.
> Every function is a specification that the compiler can verify against its implementation.
This has been tried so many times already. It works nice for functions that only do some arithmetic. But in any real life system that pushes data around over the network or to databases, most things will happen inside effects which leaves the compiler clueless as to whether the function implementation does what it's supposed to do or not.
Don't get me wrong, I'm a big fan of using the compiler to improve productivity and I also believe strong typing leverages LLM power. But this kind of function specification is a dead end IMO.
I think this is the wrong path for LLM and SWE optimization:
1) Programming language training happens by volume, and the amount of JS/TS/Python out there, and the rate at which it's growing, is creating a training feedback loop. That means that for a few generations of models, these will be the best-performing languages. It will be hard for a contender to spin up.
2) At some point, if we plateau on productivity, then efficiency improvements will happen, which will open a door for programming languages that maintain productivity but are 10x cheaper on cost.
3) I think more immediate gains are at the cloud level. IMO, one of the reasons Google Cloud (along with Firebase) is performing better is a much better overall CLI experience, leading to a pleasurable experience developing against it. This part of the market is ripe: whoever builds the most LLM-friendly cloud has a shot at shooting up. Hence projects like exe.dev, and whatever Cloudflare and Vercel are trying. It would be good to have some shakeup in the cloud world.
Anyway, this is where my thoughts are currently.
I think Hindley Milner (for decidability) + Linear Types (for resource management) + Refinement Types (for lightly asserting invariants) + Delimited Continuation based Effects (for tracking effectful code) + Unison style Content Addressability (for corralling code changes, documentation, and tests) would make a really nice language for an LLM.
That's in large parts Scala.
It doesn't have Hindley-Milner type inference, but it has very strong type inference.
We will get linearity soon, thanks to and as part of the Capybara [1] effort.
Refinement types have long been a reality.
The whole new effect tracking thing is based on delimited continuations.
The Unison-style content addressability comes up now and then; maybe it will become a reality at some point. Though that's mostly not a language thing but more of a build-system thing.
Scala is already great for LLMs for other reasons, too:
https://arxiv.org/html/2510.11151v1
[1] https://2025.workshop.scala-lang.org/details/scala-2025/6/Sy...
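On the delimited-continuations point, Scala 3.3+ already ships a small, statically scoped taste of this in the standard library: `boundary`/`break`. A minimal sketch (my example, not from the Capybara work):

```scala
import scala.util.boundary, boundary.break

// `break` returns a value to the enclosing `boundary`; escaping the
// scope is a compile error rather than a runtime surprise.
def firstNegative(xs: List[Int]): Option[Int] =
  boundary:
    for x <- xs do
      if x < 0 then break(Some(x))
    None
```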
> The evidence suggests the biggest problem models face isn't syntax
So then why is the first mentioned and most obvious difference from other languages
> There are no variable names. @Int.0 is the most recent Int binding
LLMs are trained on code written by humans. They are most “familiar” with popular programming languages, have large datasets of examples and idioms to draw on. I don’t see the advantage of inventing a new language the machine must “learn” with syntax unlike anything it’s been trained on.
Validation and testing are already things we do with human-written code, too.
Plus LLMs need semantics just like humans do. Maybe more. Removing variable names seems utter madness.
The lack of naming seems to indicate a fundamental misunderstanding of how LLM coding agents are successful, and just makes me doubt anything about this project being useful and workable.
Yeah, it seems to be based on 2023 research, which is ancient (back when we didn't have coding agents at all), and on some 1980s sci-fi concepts of "how machines think" (beedeeboop) rather than the all-too-human coding agents we have.
If I had to design one of these, I'd go for:
1. Token minimization (which may be circular; I'm sure tokens are selected for these models at least in part based on the syntax of popular languages)
2. As many compile time checks as possible (good for humans, even better for machines with limited context)
3. Maximum locality. That is, a feature can largely be written in one file, rather than bits and pieces all over the codebase. Because of how context and attention work. This is the one I don't see much in commercially popular languages. It's more of a declarative thing, "configuration driven development".
Features written in one file, rather than "cohesive" modules with a single "responsibility" in one file?
So, orthogonal to the accepted, common code organization idiom (no matter how infrequently adhered to)?
Fascinating! Just the other day I decomposed a massive Demeter violation into stepwise proxying "message passing." I was concerned that implementing this entire feature (well, at least a solid chunk of it) as a single, feature-scoped module would cause the next developer's eyes to glaze over upon encountering such a ball-of-mud, such a dense vortex of spaghetti.
But, as I drove home that evening, I couldn't help wondering if I hadn't, instead, merely buried the Gordian lede behind so many ribbons of silk.
> That is, a feature can largely be written in one file, rather than bits and pieces all over the codebase.
This seems to be at odds with the goal of token minimization. Lots of small, narrowly scoped files mean less has to be loaded into context when making a change, right?
Throwing out another idea: I wonder if we could see some kind of equivalent of C header files for more modern languages, so that an LLM just has to read the equivalent of a .h file to start using a library.
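To make the header idea concrete, a minimal Scala sketch (illustrative names, not a real proposal): the trait plays the role of the .h file, so an agent only ever loads the interface into context.

```scala
// The "header": everything an agent needs in order to call the library.
trait KvStore:
  def get(key: String): Option[String]
  def put(key: String, value: String): Unit

// The "implementation": potentially large, and never read by the agent.
final class InMemoryKvStore extends KvStore:
  private val data = scala.collection.mutable.Map.empty[String, String]
  def get(key: String): Option[String] = data.get(key)
  def put(key: String, value: String): Unit = data(key) = value
```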
Well, Rust does fulfill these to a reasonable degree. There is obvious room for improvement, but the vast majority of languages don't even bother being a Rust successor. Instead, they take a step back and decide that what Rust is doing is too much, e.g. Zig. It's kind of irritating that everyone and their dog is coming up with a new programming language that barely changes anything when there is so much low-hanging fruit. The vast majority of programming languages that people are coming up with could have been language subsets, extensions, or alternative runtimes for existing languages.
> all too human coding agents
There is no actual thought occurring. Arguably, we can say the same about a lot of humans at any given moment, but with machines there never is. It's all statistics.
I feel like this misses how LLMs work.
Yes, you’re adding this layer of verification, but LLMs don’t think in ASTs or use formal logic.
They are statistical predictors, just predicting what the next token will be.
There is a reason they perform best with TS/PY and not Haskell: the difference in size of the code corpus for each language.
The premise behind this seems to ignore all of that.
> Traditional compilers produce diagnostics for humans: expected token '{'. Vera produces instructions for the model that wrote the code. Every error includes what went wrong, why, how to fix it with a concrete code example, and a spec reference.
Is this a thing for the LLMs? As a human, I also prefer being told what went wrong and why and how to fix it, rather than `expected {`.
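If it is, one could imagine the diagnostic as structured data rather than prose. A guess at the shape (illustrative Scala; not Vera's actual format):

```scala
// Hypothetical model-facing diagnostic: every field the post lists,
// as data an agent can act on directly.
case class Diagnostic(
  whatWentWrong: String, // e.g. a missing '{' after a function signature
  why:           String, // the language rule that was violated
  fixExample:    String, // concrete corrected code, ready to apply
  specRef:       String  // pointer into the language spec
)
```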
It feels wrong to dump identifiers to save tokens: now they're devoid of semantics, and can't be grep'ed or mapped to concepts. CPUs are good with numbers, but LLMs are good with words.
I agree 100% with this thinking approach, I've been working in this domain for quite a few months now.
The right granularity for agents isn't files or lines, it's entities: functions, classes, methods. That's how both humans and agents actually think about code.
We built sem (Ataraxy-Labs/sem), which extracts entities from 30+ languages via tree-sitter and builds a cross-file dependency graph, enabling semantic version control and semantic diffs. weave (same org) takes it further and does git merges at the entity level: it matches functions by name and merges their bodies independently.
The dependency graph also answers questions LLMs can't. I love the analysis based on ASTs.
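For flavor, a hypothetical data model for this kind of entity-level tooling (my sketch, not sem's actual schema):

```scala
// Entities and cross-file dependency edges, per the description above.
enum EntityKind:
  case Function, Class, Method

case class Entity(id: String, kind: EntityKind, file: String, body: String)
case class Dep(from: String, to: String) // `from` references `to`

case class CodeGraph(entities: Map[String, Entity], deps: List[Dep]):
  // "What breaks if I change this entity?" -- a reverse-dependency lookup.
  def dependentsOf(id: String): List[Entity] =
    deps.filter(_.to == id).flatMap(d => entities.get(d.from))
```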
> Models struggle with maintaining invariants across a codebase, understanding the ripple effects of changes, and reasoning about state over time.
I do, too!
This isn't my project, but I shared it here because it has a few important ideas I've been thinking about in my own work. Effect type systems in particular are a really good fit for LLMs because they allow you to reason very precisely about a program's capabilities before runtime (basically, using the type system for capability proofs). This helps you trust agent-created code (for example, you know it can't do IO), or, if the code does require certain capabilities, run it in a sandbox (e.g., mock network or filesystem). This kind of language design also provides a safer foundation for complex meta-systems of agents-that-create-agents, depending on how the runtime is implemented, though Vera may be somewhat limited in that particular respect.
The major design decision I'm a little skeptical about is removing variable names; it would be interesting to see empirical data on that as it seems a bit unintuitive. I would expect almost the opposite, that variable names give LLMs some useful local semantics.
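The capability idea can be sketched with plain Scala 3 context parameters (hypothetical capability names; not Vera's design): effectful code declares what it needs in its signature, and pure code simply can't call it.

```scala
// Hypothetical capability token -- illustrative, not a real library.
trait Network

// Effectful code declares its capability requirement in the signature:
def fetch(url: String)(using Network): String =
  s"<response from $url>" // stand-in for a real HTTP call

// Pure code has no Network in scope, so calling fetch here won't compile.
def shout(s: String): String = s.toUpperCase

// A sandbox decides what to grant -- here, a mock capability:
def runSandboxed[A](body: Network ?=> A): A =
  given Network = new Network {}
  body

@main def sandboxDemo(): Unit =
  println(runSandboxed(fetch("https://example.invalid")))
```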
You're looking for Scala… ;-)
https://news.ycombinator.com/item?id=47957121
> Division by zero is not a runtime error — it is a type error. The compiler checks every call site to prove the divisor is non-zero.
Elaborate a little here.
Presumably an analyzer that makes it an error to not have an immediately traceable zero check.
C# can do something similar with null references. It can require you to indicate which arguments and variables are capable of being null, and then raises a compiler error/warning if you pass one to something that expects a non-null reference without a null check.
But that’s because null is a static type. Zero isn’t a static type. How can I know if a calculation produces zero if I can’t predict the result of it at compile time?
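You generally can't prove an arbitrary calculation is non-zero at compile time; the usual trick is to check once, at the boundary, and carry the proof in the type from then on. A minimal Scala sketch of that pattern (not Vera's actual mechanism):

```scala
object Div:
  opaque type NonZero = Int
  object NonZero:
    // The single runtime check; everything downstream carries the proof.
    def from(n: Int): Option[NonZero] =
      if n == 0 then None else Some(n)
    extension (n: NonZero) def toInt: Int = n

  // A call site can only supply a divisor that passed the check.
  def safeDiv(a: Int, b: NonZero): Int = a / b.toInt

@main def divDemo(): Unit =
  import Div.*
  NonZero.from(7) match // pretend 7 came from user input
    case Some(nz) => println(safeDiv(100, nz)) // divisor proven non-zero
    case None     => println("divisor was zero") // forced to handle it
```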
I think the best language for LLMs is going to be as close to English as you can get with the compiler guarantees offered by Vera (or something similar).
Seemingly opposing forces.
Reminds me of http://cobra-language.com/
I’d ask for a refund on the tokens tbh
I love the ## Why README section! Every repo should have one :-)
> There are no variable names. @Int.0 is the most recent Int binding; @Int.1 is the one before.
You already lost me here. There's a reason variable names are a thing in programming, and that's to semantically convey meaning. This matters no matter whether a human is writing the code or a LLM.
>The short answer is that variable names are one of the things that confuses LLMs rather than helps them. Unlike with humans, names undermine a model's efforts to keep track of state over larger scales. Models confuse similarly named variables in different parts of the codebase easily
So I wonder, doesn't this apply to function names too, which the author keeps in? I've seen LLMs use wrong functions/classes as well.
I think a proper harness, LSP and tests already solve everything Vera is trying to solve. They mostly cite research from 2021 before coding harnesses and agentic loops were a thing, back when they were basically trying to one-shot with relatively weak models (by modern standards)
The only way the author could have come up with that rationale is that he doesn't understand what a token is, what attention is and how coding agents work.
Tokens combine multiple characters into a single vector. Attention computes similarity scores between vectors. This means you'd want each variable to be a single token so that the LLM can instantly know that two names refer to the same variable. If everything is numbered, the attention mechanism will attend every first parameter to every first parameter in every function. This means that the numbering scheme would have to be randomized instead of starting at zero.
Coding agents are now capable of using tools, including text search, which means that having the ability to look for specific variable names is extremely helpful. By using numbering, the author of the language has now given himself the burden of relying entirely on LSPs rather than innate model properties that operate on the text level.
So yeah, on a textual level, the language is designed for an era of LLMs that has been obsolete for a long time.
> You already lost me here.
Agreed.
I'm working on a language designed for machines to write and humans to understand and review.
It doesn't seem worthwhile to have code nobody can understand.
So there are variable names; they're just inscrutable, context-dependent numbers.
Same here. Reminds me of JIRA's field_17190 in MCP responses instead of description (and of similar Excel-like systems).
Good luck managing hallucinations on that context
Is there any evidence that using structural references rather than names allows large language models to generate better code? This bit just feels like obfuscation for obfuscation's sake.
I've read the FAQ (https://github.com/aallan/vera/blob/main/FAQ.md) that provides the justification for this and it is, IMO, fairly weak. The main argument is that misleading names can confuse models. I have no problem believing this, but I'm not sure why we should assume code will have misleading names. In fact, the same document says that in tests they've had LLMs mix up the indices, which is exactly the problem I would foresee. It seems especially messy that the name for the same variable will change in different places in the code. The utility of De Bruijn indices is easy substitutability of expressions, which seems like totally the wrong thing to optimize for in a programming language.
Edit: the more I think about it, the more this seems like a really bad idea. Three more issues come to mind: 1) it becomes impossible to grep for a variable, which I know agents do all the time; 2) editing code at the top of a function, say introducing a new variable, can require editing all the code in the rest of the function, even if it was semantically unchanged (see the sketch below); 3) they say it is less context for the LLM to track, but now, instead of just having to know the name of one variable, you have to keep track of every other variable in the function.
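Point 2 is the classic De Bruijn headache. A tiny lambda-calculus sketch in Scala (my illustration, not Vera code) shows why inserting one binder renumbers everything after it:

```scala
// Minimal de Bruijn-indexed lambda calculus.
enum Term:
  case Var(i: Int)           // index = how many binders up the referent sits
  case Lam(body: Term)       // anonymous binder
  case App(f: Term, a: Term)

import Term.*

// Inserting a new binder "above" a term means shifting every free index
// at or beyond the cutoff -- i.e., rewriting code that didn't change.
def shift(t: Term, d: Int, cutoff: Int = 0): Term = t match
  case Var(i)    => Var(if i >= cutoff then i + d else i)
  case Lam(b)    => Lam(shift(b, d, cutoff + 1))
  case App(f, a) => App(shift(f, d, cutoff), shift(a, d, cutoff))
```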
I find the claims regarding LLMs and their mistake-prone nature around variable names very confusing.
It appears that the creator and I have had vastly different experiences with LLMs and their capabilities with complex code bases and complicated business logic.
My observations point to LLMs being much more successful when variables and methods have explicit, detailed names; it's the best way to keep them on track and minimize the chance of confusion, with the next closest thing being explicit comments and inline documentation.
Poorly named and poorly documented things in a codebase only cause the model to reason more about what they could be, often reaching a (wrong) conclusion, wasting tokens and wasting time.
Perhaps this divergence in philosophy is due to fundamental differences in how we view the tool at hand.
I do not trust the machine, so I review its output, and if the variables lacked names, that would be significantly harder. But if I had a "Jesus, take the wheel!" attitude, perhaps I'd care far less.
Why not Prolog or one of the other logic languages? It's really old, so there should be lots of good training data for it, and the declarative nature would seem to be a great fit for LLMs.
Most Prolog code on the Web is complete garbage.
This is exactly the wrong approach. LLMs are good at writing programming languages they already know, that are well represented in the training data, not at writing programming languages that they have never seen before, so that you have to include the entire programming language manual and lots of example code in every prompt.
This is not my experience. I've been experimenting with something very similar to Vera. However, my language transpiles into multiple languages (Java, TypeScript, Common Lisp, Rust, C++, Python, C#, and Swift). The transpiler is written in the language itself (there's a separate bootstrap transpiler written in Common Lisp). But where I'm going with this is that Claude, at least, is extremely capable of writing decent code in my new language with barely any prompting: just minimal guidance on the language itself and no examples.