Comment by simonw
11 hours ago
I went looking for a single Markdown file I could dump into an LLM to "teach" it the language and found this one:
https://github.com/jordanhubbard/nanolang/blob/main/MEMORY.m...
Optimistically I dumped the whole thing into Claude Opus 4.5 as a system prompt to see if it could generate a one-shot program from it:
    llm -m claude-opus-4.5 \
      -s https://raw.githubusercontent.com/jordanhubbard/nanolang/refs/heads/main/MEMORY.md \
      'Build me a mandelbrot fractal CLI tool in this language' \
      > /tmp/fractal.nano
Here's the transcript for that. The code didn't work: https://gist.github.com/simonw/7847f022566d11629ec2139f1d109...
So I fired up Claude Code inside a checkout of the nanolang repo, told it how to run the compiler, and let it fix the problems... which DID work. Here's that transcript:
https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...
And the finished code, with its output in a comment: https://gist.github.com/simonw/e7f3577adcfd392ab7fa23b1295d0...
So yeah, a good LLM can definitely figure out how to use this thing given access to the existing documentation and the ability to run that compiler.
Oh, wow. I thought the control flow in the readme was already a little annoying with its prefix notation for greater-than/less-than comparisons, but that's nothing compared to the scream for a case/switch-statement in the Mandelbrot example...
I mean, for all intents and purposes this language is designed for use by LLMs, not humans, and the AI probably won't complain that a switch-case statement is missing. ;)
> scream for a case/switch-statement
Maybe I’m missing some context, but all that should actually be needed in the top-level else block is ‘gradient[idx]’. Pretty much anything else is going to be longer, harder to read, and less efficient.
True - with early returns there's no need for the else nesting at all.
Logically this still would be a case/switch though...
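To make the gradient[idx] suggestion concrete, here is a rough sketch in Python (purely illustrative - not nanolang syntax, and the palette string and helper names are made up):

    # Illustration of the gradient[idx] idea: map Mandelbrot escape counts
    # to ASCII characters with a single index instead of an if/else ladder.
    MAX_ITER = 80
    GRADIENT = " .:-=+*#%@"  # hypothetical palette, sparse -> dense

    def shade(iterations):
        # Pick the output character for one pixel from its escape count.
        if iterations >= MAX_ITER:
            return "@"  # never escaped: treat as inside the set
        # Scale the count onto the palette; no switch/case needed.
        idx = iterations * (len(GRADIENT) - 1) // MAX_ITER
        return GRADIENT[idx]

    def escape_count(cr, ci, max_iter=MAX_ITER):
        # Standard escape-time iteration for the point c = cr + ci*i.
        zr = zi = 0.0
        for n in range(max_iter):
            zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
            if zr * zr + zi * zi > 4.0:
                return n
        return max_iter

    if __name__ == "__main__":
        for y in range(24):
            ci = -1.2 + 2.4 * y / 23
            print("".join(shade(escape_count(-2.0 + 3.0 * x / 79, ci)) for x in range(80)))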
If you are planning to write that many if/else statements, you might as well write Prolog.
I think you either need to feed it all of ./docs or give your agent access to those files so it can read them for reference. The MEMORY.md file you posted mentions ./docs/CANONICAL_STYLE.md and ./docs/LLM_CORE_SUBSET.md, and those in turn indirectly reference other features and files inside the docs folder.
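For the one-shot version, something like this might fold those extra docs into the system prompt - a rough sketch using llm's Python API (llm.get_model() and model.prompt(..., system=...)); it assumes you run it from a nanolang checkout with the llm-anthropic plugin installed so the claude-opus-4.5 model ID resolves:

    # Sketch: combine MEMORY.md plus the docs it references into one system
    # prompt, then retry the one-shot Mandelbrot request via llm's Python API.
    from pathlib import Path
    import llm

    doc_paths = [
        "MEMORY.md",
        "docs/CANONICAL_STYLE.md",
        "docs/LLM_CORE_SUBSET.md",
    ]
    system_prompt = "\n\n".join(Path(p).read_text() for p in doc_paths)

    model = llm.get_model("claude-opus-4.5")
    response = model.prompt(
        "Build me a mandelbrot fractal CLI tool in this language",
        system=system_prompt,
    )
    Path("/tmp/fractal.nano").write_text(response.text())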
Yeah, I think you're right about that.
The thing that really unlocked it was Claude being able to run a file listing against nanolang/examples and then start picking through the examples that were most relevant to figuring out the syntax: https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...
But by doing that, aren't you eating into the LLM horsepower that's available for problem solving on the task itself?
Maybe a little, but Claude has a 200,000 token context window these days and GPT-5.2 has 400,000 - there's a lot of space.