Comment by dkypuros

18 hours ago

We use a deliberately small, hand‑written grammar so that we can prove properties like grammaticality, aⁿbⁿ generation, and bounded memory. The price we pay is that the next‑token distribution is limited to the explicit rules we supplied. Large neural LMs reverse the trade‑off: they learn the rules from data and therefore cover much richer phenomena, but they can’t offer the same formal guarantees. The fibration architecture is designed so we can eventually blend the two—keeping symbolic guarantees while letting certain fibres (e.g. embeddings or rule weights) be learned from data.
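To make the trade-off concrete, here is a minimal illustrative sketch (not the project's actual grammar code; the names `next_token_distribution`, `sample`, and the `<eos>` token are hypothetical) of a tiny hand-written aⁿbⁿ generator. Its next-token distribution covers only the rules written down, but every sampled string is grammatical by construction and the only state it carries is a single counter, so memory stays bounded:

```python
import random

def next_token_distribution(prefix: str) -> dict[str, float]:
    """Next-token probabilities for the language a^n b^n (n >= 1), given a prefix.

    The only state is the number of unmatched 'a's, so memory is bounded
    (one integer counter), and every emitted string is grammatical by construction.
    """
    open_as = prefix.count("a") - prefix.count("b")
    if "b" not in prefix:                    # still in the 'a' phase
        if prefix == "":
            return {"a": 1.0}                # must start with at least one 'a'
        return {"a": 0.5, "b": 0.5}          # keep opening or start closing
    if open_as > 0:
        return {"b": 1.0}                    # must close every remaining 'a'
    return {"<eos>": 1.0}                    # balanced: the string is complete

def sample() -> str:
    """Sample one grammatical string; coverage is limited to exactly a^n b^n."""
    out = ""
    while True:
        dist = next_token_distribution(out)
        tok = random.choices(list(dist), weights=list(dist.values()))[0]
        if tok == "<eos>":
            return out
        out += tok

if __name__ == "__main__":
    for _ in range(3):
        print(sample())   # e.g. 'ab', 'aabb', 'aaabbb'
```

The limitation the comment describes is visible directly: the distribution can never assign probability to anything outside the two symbols we wired in, which is exactly the coverage a learned fibre (e.g. learned rule weights) would be meant to relax.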