
Comment by dingnuts

9 months ago

[flagged]

> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Yes. If you give a model with a 2024 training cutoff the documentation for a programming language written in 2025, it is able to write code in that language.
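A rough sketch of what that looks like in practice, with an invented toy grammar and a placeholder complete() standing in for whatever LLM client is used:

    # Hypothetical sketch: inline the new language's grammar/docs in the prompt
    # and ask for a program. Grammar and task are invented for illustration;
    # complete() is a placeholder for any chat-completion call.
    GRAMMAR = """
    program ::= stmt+
    stmt    ::= "let" IDENT "=" expr ";"
    expr    ::= NUMBER | IDENT | expr ("+" | "*") expr
    """

    def write_program(complete, task: str) -> str:
        prompt = (
            "Below is the grammar of a language you have never seen before.\n"
            f"{GRAMMAR}\n"
            f"Write a program in this language that {task}. Output only code."
        )
        return complete(prompt)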

  • I have found this not to work particularly well in practice. Maybe I’m holding it wrong? Do you have any examples of this?

In my experience it generally has a very good understanding and does generate the relevant test cases. Then again, I don't give it a grammar; I just let it generalize from examples. In my defense, I've tried out some very unconventional languages.

Grammars are an attempt at describing a language, and a broken attempt if you ask me. Humans don't like them either.

  • For natural languages you are right: the language came first, and the grammar was retrofitted to try to find structure.

    For formal languages, which programming languages (and related ones like query languages, markup languages, etc.) are instances of, the grammar defines the language. It comes first, examples second (the toy recognizer sketched below makes this concrete).

    Historically, computers were very good at formal languages. With LLMs we are entering a new age where machines are becoming terrible at something they once excelled at.

    Have you tried asking Google lately whether it's 2025? The very first data-keeping machines (clocks) were also pretty unreliable at that. Full circle, I guess.
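    To make the formal-language point concrete, here is a minimal sketch assuming an invented two-rule grammar: a string belongs to the language exactly when the grammar derives it, and a tiny recursive-descent recognizer can decide that without ever seeing an example.

        # Toy recognizer for an invented two-rule grammar:
        #   expr ::= term ("+" term)*
        #   term ::= NUMBER | "(" expr ")"
        # A string is in the language exactly when the grammar derives it.
        import re

        def in_language(s: str) -> bool:
            toks = re.findall(r"\d+|[+()]", s)
            if "".join(toks) != s.replace(" ", ""):
                return False                 # stray characters: rejected
            pos = 0

            def term() -> bool:
                nonlocal pos
                if pos < len(toks) and toks[pos].isdigit():
                    pos += 1
                    return True
                if pos < len(toks) and toks[pos] == "(":
                    pos += 1
                    if expr() and pos < len(toks) and toks[pos] == ")":
                        pos += 1
                        return True
                return False

            def expr() -> bool:
                nonlocal pos
                if not term():
                    return False
                while pos < len(toks) and toks[pos] == "+":
                    pos += 1
                    if not term():
                        return False
                return True

            return expr() and pos == len(toks)

        print(in_language("1 + (2 + 3)"))    # True: derivable
        print(in_language("1 + + 2"))        # False: not derivable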

> NO.

YES! Sometimes. You'll often hear the term "zero-shot generation", meaning creating something new given zero examples; this is something many modern models are capable of.

> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Neither can your average human. What's your point?

> If you define a grammar for a new programming language and feed it to an LLM and give it NO EXAMPLES can it write code in your language?

Of course it can. It will experiment and learn just like humans do.

Hacker News people still think LLMs are just statistical models guessing things.

  • > Hacker News people still think LLMs are just statistical models guessing things.

    That's exactly what they are; it's the definition of what they are. If you're talking about something that does something else, then it's not an LLM.
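    For reference, "predicting the next token" mechanically means a loop like the toy sketch below, with a made-up vocabulary and random numbers standing in for the network's logits:

        # Toy autoregressive sampling loop. fake_logits() is a stand-in for the
        # neural network; a real LLM swaps it out, but the loop is the same.
        import math, random

        VOCAB = ["the", "cat", "sat", "on", "mat", "."]

        def fake_logits(context):
            return [random.gauss(0, 1) for _ in VOCAB]   # pretend model output

        def sample_next(context, temperature=1.0):
            logits = [l / temperature for l in fake_logits(context)]
            m = max(logits)
            probs = [math.exp(l - m) for l in logits]     # softmax, stabilized
            total = sum(probs)
            weights = [p / total for p in probs]
            return random.choices(VOCAB, weights=weights, k=1)[0]

        context = ["the"]
        for _ in range(5):
            context.append(sample_next(context))
        print(" ".join(context))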

    • No, it's not. Arrogant developers on HN like to parrot this, but it isn't true.

      The power of LLMs is in emergent properties, capabilities that weren't specifically taught to the model. That is not something you get from simpler, largely deterministic statistical models.

      If you think it's just a giant token-prediction machine, you've ignored the last five years.