Comment by sebmellen

8 months ago

For an internal workflow where an LLM analyzes relatively simple data (but where its conclusions can vary widely depending on what it believes the data represents), we found that a consortium approach, in which multiple models tackle the same problem at once and then essentially argue about the results, yields far better outcomes than a single model performing the analysis, or even a single model arguing against itself multiple times. This is somewhat adjacent to what's done here, but it's clearly true that model diversity is a plus.
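To make the shape of that concrete, here's a minimal sketch of the consortium loop: each model answers independently, then each revises its answer after seeing what the others said. Everything here is hypothetical, in particular `ask()` is a stub standing in for whatever real LLM API call you'd use, and the prompt wording is illustrative only.

```python
def ask(model, prompt):
    # Stub: a real implementation would call this model's API here.
    return f"{model}: answer to {prompt!r}"

def consortium(models, question, rounds=2):
    # Round 0: each model answers independently.
    answers = {m: ask(m, question) for m in models}
    # Debate rounds: each model sees the others' answers and revises its own.
    for _ in range(rounds):
        for m in models:
            others = "\n".join(a for name, a in answers.items() if name != m)
            answers[m] = ask(
                m,
                f"Question: {question}\n"
                f"Other models said:\n{others}\n"
                "Revise or defend your answer.",
            )
    return answers

final = consortium(["model-a", "model-b", "model-c"],
                   "What does this data represent?")
```

A final step (not shown) would aggregate `final` into one answer, e.g. by majority vote or by a designated judge model.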

The article talks about that at the end, then says:

> Let models talk to each other directly, making their own case and refining each others’ answers. Exemplified in patterns like Multi-Agent Debate, this is a great solution for really critical individual actions. But XBOW is basically conducting a search, and it doesn’t need a committee to decide for each stone it turns over whether there might not be a better one.

In general, this seems reasonable to me: a good approximation of what works with humans, but with _much_ faster communication feedback loops.