Comment by simonw, 20 days ago

I went looking for a single Markdown file I could dump into an LLM to "teach" it the language and found this one:

https://github.com/jordanhubbard/nanolang/blob/main/MEMORY.m...

Optimistically I dumped the whole thing into Claude Opus 4.5 as a system prompt to see if it could generate a one-shot program from it:

  llm -m claude-opus-4.5 \
    -s https://raw.githubusercontent.com/jordanhubbard/nanolang/refs/heads/main/MEMORY.md \
    'Build me a mandelbrot fractal CLI tool in this language' \
    > /tmp/fractal.nano

The code didn't work. Here's the transcript for that attempt: https://gist.github.com/simonw/7847f022566d11629ec2139f1d109...

So I fired up Claude Code inside a checkout of the nanolang repo, told it how to run the compiler, and let it fix the problems... which DID work. Here's that transcript:

https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...

And the finished code, with its output in a comment: https://gist.github.com/simonw/e7f3577adcfd392ab7fa23b1295d0...

So yeah, a good LLM can definitely figure out how to use this thing given access to the existing documentation and the ability to run that compiler.

Oh, wow. I thought the control flow from the readme was a little annoying, with the prefix notation for greater-than/less-than:

    # Control flow
    if (> x 0) {
      (println "positive")
    } else {
      (println "negative or zero")
    }

But that's nothing compared to the scream for a case/switch-statement in the Mandelbrot example...

    # Gradient: " .:-=+*#%@"
    let gradient: string = " .:-=+*#%@"
    let gradient_len: int = 10
    let idx: int = (/ (* iter gradient_len) max_iter)
    if (>= idx gradient_len) {
        return "@"
    } else {
        if (== idx 0) {
            return " "
        } else {
            if (== idx 1) {
                return "."
            } else {
                if (== idx 2) {
                    return ":"
                } else {
                    if (== idx 3) {
                        return "-"
                    } else {
                        if (== idx 4) {
                            return "="
                        } else {
                            if (== idx 5) {
                                return "+"
                            } else {
                                if (== idx 6) {
                                    return "*"
                                } else {
                                    if (== idx 7) {
                                        return "#"
                                    } else {
                                        if (== idx 8) {
                                            return "%"
                                        } else {
                                            return "@"
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

  • > scream for a case/switch-statement

    Maybe I’m missing some context, but all that should actually be needed in the top-level else block is ‘gradient[idx]’. Pretty much anything else is going to be longer, harder to read, and less efficient.
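
    A minimal sketch of that simplification, assuming nanolang allows indexing a string with [] (that syntax is my assumption, not confirmed by the docs; gradient, idx, and gradient_len are as defined above):

        # Assumption: gradient[idx] indexes the string directly.
        # This collapses the whole nested if-else ladder into one
        # lookup, with the out-of-range case handled up front.
        if (>= idx gradient_len) {
            return "@"
        } else {
            return gradient[idx]
        }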

  • If you are planning to write that many if-else statements, you might as well write Prolog.

  • I mean, for all intents and purposes, this language is designed for use by LLMs, not humans, and the AI probably won't complain that a switch-case statement is missing. ;)

I think you need to either feed it all of ./docs or give your agent access to those files so it can read them as reference. The MEMORY.md file you posted mentions ./docs/CANONICAL_STYLE.md and ./docs/LLM_CORE_SUBSET.md, and those in turn indirectly reference other features and files inside the docs folder.

But by doing so, are you losing LLM horsepower that would otherwise go toward solving the task at hand?

  • Maybe a little, but Claude has a 200,000 token context window these days and GPT-5.2 has 400,000, so there's a lot of space.

    • True. You would know this better than me, but are you also burning "attention" by giving it a new language? Rather than using its familiar Python pathways, it needs to attend more closely to generate the unseen language: it has to cross-reference, through the KV cache, from the language spec to the generated code to the goal, rather than just speaking the Python or JS it is used to speaking.