
Comment by nextos

5 hours ago

We've done this, and it works. Our setup has agents that synthesize Prolog programs and other kinds of symbolic and/or probabilistic models. We then use these models to increase our confidence in the LLM's reasoning and iterate if there is a mismatch. Making synthesis work reliably across a massive set of queries is tricky, though.
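
Concretely, the check-and-iterate loop looks roughly like the sketch below. It is only a sketch: it assumes SWI-Prolog is on the path, treats llm_synthesize and llm_answer as placeholders for whatever model API you use, and expects the synthesized program to define answer/1.

    import subprocess, tempfile

    def llm_synthesize(question, feedback=""):
        """Placeholder: ask the LLM to emit a Prolog program defining answer/1."""
        raise NotImplementedError

    def llm_answer(question):
        """Placeholder: the LLM's direct, free-form answer."""
        raise NotImplementedError

    def run_prolog(program, goal="answer(X), writeln(X)"):
        """Consult the synthesized program with SWI-Prolog and collect answers."""
        with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
            f.write(program)
            path = f.name
        out = subprocess.run(
            ["swipl", "-q", "-g", f"forall(({goal}), true)", "-t", "halt", path],
            capture_output=True, text=True, timeout=10)
        return out.stdout.split()

    def verify(question, max_rounds=3):
        """Accept the LLM's direct answer only if the symbolic model derives it."""
        direct = llm_answer(question)
        feedback = ""
        for _ in range(max_rounds):
            program = llm_synthesize(question, feedback)
            derived = run_prolog(program)
            if direct in derived:
                return True   # symbolic check agrees with the LLM
            feedback = f"Program derived {derived}, but you answered {direct!r}."
        return False          # persistent mismatch: escalate or reject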

Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into a probabilistic logic program that they synthesize on the fly from prior knowledge, their domain-specific literature, and the observed case evidence.
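
To give a sense of what such a program might look like, here is a toy diagnostic model written in ProbLog and evaluated from Python (assuming the problog package). The rules and probabilities are purely illustrative, not medical fact.

    from problog.program import PrologString
    from problog import get_evaluatable

    model = """
    0.10::flu.
    0.02::pneumonia.
    0.80::fever :- flu.
    0.90::fever :- pneumonia.
    0.70::cough :- flu.
    0.85::cough :- pneumonia.

    evidence(fever, true).   % observed case evidence
    evidence(cough, true).
    query(flu).              % posterior diagnosis given the evidence
    query(pneumonia).
    """

    # Prints the probability of each query, conditioned on the evidence.
    print(get_evaluatable().create_from(PrologString(model)).evaluate())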

There is a growing body of publications exploring various aspects of synthesis; the references included in [1] are a good starting point.

[1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...