Comment by DonHopkins

1 day ago

>The nice thing about prolog is that you write logical rules, and they can get used in whatever order and direction that is needed.

This generalizes!

Prolog: declare relations. Engine figures out how to satisfy them. Bidirectional -- same rules answer "is X a grandparent?" and "find all grandparents."
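
To make that concrete, here's the textbook grandparent relation: one rule, queried in whichever direction you need (the names are just placeholders):

    % Facts and a single rule.
    parent(tom, bob).
    parent(bob, ann).

    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % The same rule answers all of these, depending on what's bound:
    % ?- grandparent(tom, ann).   is tom a grandparent of ann?  -> true
    % ?- grandparent(G, ann).     find ann's grandparents       -> G = tom
    % ?- grandparent(tom, C).     find tom's grandchildren      -> C = ann
    % ?- grandparent(G, C).       enumerate all pairs           -> G = tom, C = ann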

LLMs do something similar but fuzzier. Declare intent. Model figures out how to satisfy it. No parse -> AST -> evaluate. Just: understand, act.

@tannhaeuser is right that Prolog's power comes from what the engine does -- variables that "range over potential values," WAM optimization, automatic pruning. You can't get that from a library bolted onto an imperative language. The execution model is different.

Same argument applies to LLMs. You can't library your way into semantic understanding. The model IS the execution model. Skills aren't code the LLM runs -- they're context that shapes how it thinks.

Prolog showed that declarative beats imperative for problems where you can formalize the rules. LLMs extend that to problems where you can't.

I've been building and testing exactly this: directories of YAML files as a world model -- The Sims meets TinyMUD -- with the LLM as the inference engine. Seven architectural extensions to Anthropic Skills. 50+ skills. A 33-turn card game with 10 characters in one LLM call, no round trips. It just works.

https://github.com/SimHacker/moollm/blob/main/designs/stanza...

https://github.com/SimHacker/moollm/tree/main/skills