Comment by thesz
2 years ago
Usually, an LLM's output gets passed through beam search [1], which is about as symbolic as decoding gets.
[1] https://www.width.ai/post/what-is-beam-search
Even a plain 3-gram model can produce better text predictions if you combine it with beam search.
See https://news.ycombinator.com/item?id=40073039 for a discussion.
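To make the idea concrete, here is a minimal sketch of beam search over a toy trigram model. The trigram table and the probabilities in it are made up for illustration; in practice they would come from corpus counts.

```python
import math

# Hypothetical trigram model: P(next word | previous two words).
# In a real system these probabilities come from smoothed n-gram counts.
TRIGRAM = {
    ("<s>", "<s>"): {"the": 0.6, "a": 0.4},
    ("<s>", "the"): {"cat": 0.5, "dog": 0.5},
    ("<s>", "a"):   {"cat": 0.7, "dog": 0.3},
    ("the", "cat"): {"sat": 0.8, "ran": 0.2},
    ("the", "dog"): {"ran": 0.9, "sat": 0.1},
    ("a", "cat"):   {"sat": 0.6, "ran": 0.4},
    ("a", "dog"):   {"ran": 1.0},
    ("cat", "sat"): {"</s>": 1.0},
    ("dog", "ran"): {"</s>": 1.0},
    ("cat", "ran"): {"</s>": 1.0},
    ("dog", "sat"): {"</s>": 1.0},
}

def beam_search(width=2, max_len=6):
    # Each hypothesis is (log probability, token list); start padded
    # with two sentence-begin markers so every context is a bigram.
    beams = [(0.0, ["<s>", "<s>"])]
    for _ in range(max_len):
        candidates = []
        for logp, toks in beams:
            if toks[-1] == "</s>":  # finished hypothesis: carry it forward
                candidates.append((logp, toks))
                continue
            context = tuple(toks[-2:])
            for word, p in TRIGRAM.get(context, {}).items():
                candidates.append((logp + math.log(p), toks + [word]))
        # Prune to the `width` most probable hypotheses (the "beam").
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:width]
        if all(t[-1] == "</s>" for _, t in beams):
            break
    return beams

for logp, toks in beam_search():
    print(f"{math.exp(logp):.3f}  {' '.join(toks[2:-1])}")
# → 0.270  the dog ran
# → 0.240  the cat sat
```

Note that the winner, "the dog ran" (0.27), never looked best greedily: "the cat" and "the dog" tie at each early step, and only keeping both in the beam lets the higher-probability continuation surface. That is the whole point over greedy decoding.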