Comment by DonHopkins

3 days ago

@dev_l1x_be: The answer isn't a new typed scripting language. It's recognizing what the interpreter already is.

LLMs are eval(). Skills are programs. YAML is the motherboard.

@unkulunkulu nails it -- "library as the final language", languages all the way down. Exactly. Skills ARE languages. They teach the interpreter what to understand. When the interpreter understands intent, the distinction dissolves.

@conartist6: "DSL is fuzzy... languages and libraries don't have to be opposing" -- yes. Traditional DSL: parse -> AST -> evaluate. LLM "DSL": read intent -> understand -> act. All one step. You can code-switch mid-sentence and it doesn't care.
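For contrast, the traditional pipeline is mechanical and unforgiving at every stage. A toy sketch (a hypothetical two-command mini-DSL, invented here for illustration, not anything from the repo):

```python
# Toy "traditional DSL": parse -> AST -> evaluate, each a rigid step.
# Any token the grammar doesn't know is a hard error -- no intent, no fuzz.

def parse(src: str):
    """Parse 'add 2 3' / 'mul 4 5' into a tiny AST tuple: (op, left, right)."""
    op, *args = src.split()
    if op not in ("add", "mul") or len(args) != 2:
        raise SyntaxError(f"unrecognized command: {src!r}")
    return (op, int(args[0]), int(args[1]))

def evaluate(ast):
    """Walk the (trivial) AST and compute the result."""
    op, a, b = ast
    return a + b if op == "add" else a * b

print(evaluate(parse("add 2 3")))   # 5
print(evaluate(parse("mul 4 5")))   # 20
```

Feed that parser "please add two and three" and it throws. That's the whole point of the contrast: the LLM path has no separate parse step to fail at.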

The problem with opinionated frameworks like RoR and their BDFLs like DHH is that one opinion is the WRONG number!

The key insight nobody's mentioned: SPEED OF LIGHT vs CARRIER PIGEON.

Carrier pigeon: call LLM, get response, parse it, call LLM again, repeat. Slow. Noisy. Every round-trip destroys precision through tokenization.

Speed of light: ONE call. I ran 33 turns of Stoner Fluxx -- 10 characters, many opinions, game state, hands, rules, dialogue, jokes -- in a single LLM invocation. The LLM simulates internally at the speed of thought. No serialization overhead. No context-destroying round trips.
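The shape of the two approaches, schematically, with a stub standing in for the real model so the only thing measured is round trips (stub and prompts are made up for illustration):

```python
# Schematic contrast of the two interaction shapes. call_llm is a stub
# that just counts invocations -- each one would be a network round trip
# plus a full re-serialization of game state through the tokenizer.

calls = 0

def call_llm(prompt: str) -> str:
    """Stub: pretend each invocation is one round trip to the model."""
    global calls
    calls += 1
    return f"result for: {prompt[:30]}"

# Carrier pigeon: one call per turn, re-sending state every time.
calls = 0
state = "initial game state"
for turn in range(33):
    state = call_llm(f"state={state!r}; play turn {turn}")
pigeon_trips = calls   # 33 round trips, 33 serializations

# Speed of light: one call carrying the whole simulation request.
calls = 0
transcript = call_llm(
    "Simulate all 33 turns of Stoner Fluxx: 10 characters, "
    "game state, hands, rules, dialogue, jokes."
)
light_trips = calls    # 1 round trip

print(pigeon_trips, light_trips)   # 33 1
```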

@jakkos, @PaulHoule: nushell and Python are fine. But you're still writing syntax for a parser. What if you wrote intent for an understander?

Bash is a tragedy -- quoting footguns, jq gymnastics, write-only syntax. Our pattern: write intent in YAML, let the LLM "uplift" to clean Python when you need real code.
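What "uplift" looks like in miniature: loosely-written intent (a dict literal standing in for the YAML; field names are hypothetical, not the actual moollm skill format) becomes plain readable Python instead of a bash-plus-jq incantation:

```python
# The "uplift" pattern: intent stated loosely, then turned into clean
# Python. Field names below are invented for illustration.
from collections import Counter

intent = {
    "task": "summarize errors",
    "filter": {"level": "error"},
    "output": "count by service",
}

# Sample input that bash would wrangle with something like
#   jq '[.[] | select(.level=="error")] | group_by(.service) | ...'
logs = [
    {"level": "error", "service": "api"},
    {"level": "info",  "service": "api"},
    {"level": "error", "service": "db"},
    {"level": "error", "service": "api"},
]

# The uplifted code: no quoting footguns, no write-only syntax.
counts = Counter(
    entry["service"]
    for entry in logs
    if entry["level"] == intent["filter"]["level"]
)

print(dict(counts))   # {'api': 2, 'db': 1}
```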

Postel's Law as type system: liberal in what you accept. Semantic understanding catches nonsense because it knows what you MEANT, not just what you TYPED.

Proof and philosophy: https://github.com/SimHacker/moollm/blob/main/designs/stanza...

Holy slop!

  • That's a trite, low-effort, worthless, content-free comment, which totally misses the point and fails to engage. You haven't made any other comments contributing to this discussion, except that one shallow drive-by complaint. If you're going to whine about using LLMs as a shell, then at least try to do better than an LLM or a redditor yourself.

    So do you disagree with any of my points, or my direct replies to other people's points, or is that all you can think of to say, instead of engaging?

    Do you prefer to use bash directly? Why? If not, then what is your alternative?

    What do you think of Anthropic Skills? Have you used or made any yourself, or can you suggest any improvements? I've created 50+ skills, and I've suggested, implemented, and tested seven architectural extensions -- do you have any criticism of those?

    https://github.com/SimHacker/moollm/tree/main/skills

    Obviously you use LLMs yourself, so you're not a complete Luddite, and you must have some deeper, more substantial understanding and criticism than those two words, drawn from your own experience.

    How do your own ideas that you blogged about in "My LLM System Prompt" compare to my ideas and experience, in your own "professional, no bullshit, scientific" opinion?

    https://mahesh-hegde.github.io/posts/llm_system_prompt/

    Your entire blog post on LLM prompts is "I don't like verbiage" in five sentences. Ironic, then, that your entire contribution here is two empty words. I made specific technical points, replied to real people, linked proof. 'Slop' is the new 'TL;DR' -- a confession of laziness dressed as critique. Calling substance slop while contributing nothing? That's actual slop.

    • > LLMs are eval(). Skills are programs. YAML is the motherboard.

      This for some reason irritated me so much that I wrote the comment.