Comment by mcyc

1 day ago

This is a fantastic guide! I did a lot of work on structured generation for my PhD. Here are a few other pointers for people who might be interested:

Some libraries:

- Outlines, a nice library for structured generation

  - https://github.com/dottxt-ai/outlines

- Guidance (already covered by FlyingLawnmower in this thread), another nice library

  - https://github.com/guidance-ai/guidance

- XGrammar, a less featureful but really well-optimized constrained generation library

  - https://github.com/mlc-ai/xgrammar

  - This one has a lot of cool technical aspects that make it an interesting project

Some papers:

- Efficient Guided Generation for Large Language Models

  - By the outlines authors, probably the first real LLM constrained generation paper

  - https://arxiv.org/abs/2307.09702

- Automata-based constraints for language model decoding

  - A much more technical paper about constrained generation and implementation

  - https://arxiv.org/abs/2407.08103

- Pitfalls, Subtleties, and Techniques in Automata-Based Subword-Level Constrained Generation

  - A bit of self-promotion. We show where constrained generation can go wrong and discuss some techniques for the practitioner

  - https://openreview.net/pdf?id=DFybOGeGDS

Some blog posts:

- Fast, High-Fidelity LLM Decoding with Regex Constraints

  - Discusses adhering to the canonical tokenization (i.e., not just satisfying the constraint, but also producing the token sequence the tokenizer itself would have produced); a small worked example appears below, after this list

  - https://vivien000.github.io/blog/journal/llm-decoding-with-regex-constraints.html

- Coalescence: making LLM inference 5x faster

  - Also from the outlines team

  - This is about skipping the model's forward pass during constrained generation when you know there is only one valid token (common in the canonical-tokenization setting); a sketch of the idea appears right after this list

  - https://blog.dottxt.ai/coalescence.html
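
  For intuition, here is a minimal sketch of the coalescence idea, not the actual implementation from any of the libraries above; `allowed_next_tokens` and `model_sample` are hypothetical stand-ins for the constraint automaton and the model:

  ```python
  # Sketch of the coalescence idea: when the constraint admits exactly one
  # next token, emit it directly and skip the model's forward pass entirely.
  # `allowed_next_tokens` and `model_sample` are hypothetical stand-ins, not
  # the API of outlines/guidance/xgrammar.

  def constrained_decode(prompt_ids, allowed_next_tokens, model_sample, eos_id, max_new_tokens=256):
      ids = list(prompt_ids)
      for _ in range(max_new_tokens):
          allowed = allowed_next_tokens(ids)        # token ids the constraint permits here
          if len(allowed) == 1:                     # forced token: no inference needed
              next_id = next(iter(allowed))
          else:                                     # otherwise run the model, masked to `allowed`
              next_id = model_sample(ids, allowed)
          ids.append(next_id)
          if next_id == eos_id:
              break
      return ids
  ```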

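To make the "canonical tokenization" point concrete, here is a tiny check you can run yourself, assuming the tiktoken package is installed (the example strings are just illustrative):

```python
# A token sequence can decode to a string that satisfies a constraint while
# still not being the sequence the tokenizer itself would have produced.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

text = " hello world"
canonical = enc.encode(text)                               # what the tokenizer produces
alternative = enc.encode(" hello ") + enc.encode("world")  # also decodes to `text`...

print("canonical:  ", canonical)
print("alternative:", alternative)
print("same string:", enc.decode(alternative) == text)     # True, but typically a different sequence
```

A purely string-level constraint is satisfied by either sequence; canonical filtering aims to admit only the one the tokenizer would actually produce.
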
Hello, the part about canonical filtering in https://openreview.net/pdf?id=DFybOGeGDS doesn't seem to account for pretokenization. For example, if you receive " 天天中彩票APP" in o200k, it means there has to be a lowercase letter within the span of letters. And while tokens like "    " (4 spaces) may be pairwise compatible with tokens like "123" according to the BPE merge rules, the pretokenizer would split the span of spaces to give "   " (3 spaces), " ", "123" instead. Are you aware of any work that does actual canonical generation for models with this kind of pretokenization regex?
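
For concreteness, the whitespace case can be checked directly with tiktoken (assuming it is installed); this just reproduces the split described above rather than asserting anything new:

```python
# Check the whitespace example: encoding the pieces separately
# vs. encoding the concatenated string (the canonical tokenization).
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

piecewise = enc.encode("    ") + enc.encode("123")  # "    " is 4 spaces
canonical = enc.encode("    123")                   # what the tokenizer produces

print("piecewise:", piecewise)
print("canonical:", canonical)
print("same string:", enc.decode(piecewise) == enc.decode(canonical))
```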

I've never fully understood where Outlines fits in the stack. Is it a way to create a structured output API similar to the ones big providers have? Have you looked at something like BAML?