I gotta say, having blurry white blobs floating in the background behind white/grey text maybe wasn't the best design choice.
Nonetheless, I tried to find the actual APIs/services/software used for the "search" part, as I've found that to be the hardest thing to get right (at least for as-local-as-possible usage) in my own "Deep Research Agent".
I've experimented with Brave's search API, which worked OK but seems pricey for agent usage. I'm currently experimenting with my own (local) YaCy instance, which actually gives me higher-quality artifacts at the end: there are no rate limits, so the model can do hundreds of search calls without me worrying about cost. It isn't very quick at picking up some things like news, but otherwise it works OK too.
What is the author doing here for the actual searching? Anyone else have any other ideas/approaches to this?
Haha, I didn't have control over the blog website, just the content. The readme and code are the ultimate source of truth (and easier to read): https://github.com/lastmile-ai/mcp-agent/blob/main/src/mcp_a...
So the core idea is that the Deep Orchestrator is pretty unopinionated about what to use for searching, as long as it is exposed over MCP. I tried it with a basic fetch server that's one of the reference MCP servers (with a single tool called `fetch`), and also with Brave.
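For anyone wanting to try this, here's roughly what plugging a search MCP server into an agent looks like with mcp-agent. This is a sketch from memory of the README, so treat the import paths and method names as approximate, and the `fetch`/`brave` entries are assumed to be declared in `mcp_agent.config.yaml`:

```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="deep_research")

async def main():
    async with app.run():
        # The agent only references MCP servers by name; the actual commands
        # (e.g. the reference `fetch` server, or a Brave search server) live in
        # mcp_agent.config.yaml, so the search backend is swappable.
        researcher = Agent(
            name="researcher",
            instruction="Search the web and summarize what you find, with sources.",
            server_names=["fetch", "brave"],
        )
        async with researcher:
            llm = await researcher.attach_llm(OpenAIAugmentedLLM)
            report = await llm.generate_str("Research topic X and write a short report.")
            print(report)

asyncio.run(main())
```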
I think the folks at Jina wrote some really good stuff on the actual search part: https://jina.ai/news/a-practical-guide-to-implementing-deeps... -- and how to do page/url ranking over the course of the flow. My recommendation would be to do all that in an MCP server itself. That keeps the "deep orchestrator" architecture fairly clean, and you can plug in increasingly sophisticated search techniques over time.
Self-host an instance of SearXNG [1], either locally or on a remote server, with a simple Docker container and use its JSON API [2]. You have to enable the JSON API in the config manually [3]; there's a quick sketch after the links.
[1] https://docs.searxng.org/admin/installation-docker.html#inst...
[2] https://docs.searxng.org/dev/search_api.html
[3] https://github.com/searxng/searxng/discussions/3542
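Once `json` is added to `search.formats` in `settings.yml`, querying it is just a GET with `format=json`. A minimal sketch in Python (the base URL is whatever your instance listens on, and the result fields shown are the usual ones; check your instance):

```python
import requests

# Assumes a local instance; adjust the base URL to wherever your SearXNG listens.
resp = requests.get(
    "http://localhost:8888/search",
    params={"q": "deep research agents", "format": "json"},
    timeout=15,
)
resp.raise_for_status()

# Each result typically has `title`, `url`, and a `content` snippet.
for r in resp.json()["results"][:10]:
    print(r["title"], "->", r["url"])
```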
Thanks for sharing, this looks great! Do they have an MCP server? It should be easy to wrap their JSON API, but I couldn't see MCP support in the repo/docs.
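If not, wrapping it yourself looks like only a few lines with the MCP Python SDK's FastMCP helper. A hypothetical, untested sketch (the server name, tool shape, and local URL are my own choices):

```python
import httpx
from mcp.server.fastmcp import FastMCP

# Hypothetical wrapper exposing a self-hosted SearXNG instance as one MCP tool.
mcp = FastMCP("searxng-search")

SEARXNG_URL = "http://localhost:8888/search"  # adjust to wherever your instance runs

@mcp.tool()
async def web_search(query: str, max_results: int = 10) -> list[dict]:
    """Search the web via SearXNG's JSON API and return title/url/snippet dicts."""
    async with httpx.AsyncClient(timeout=15) as client:
        resp = await client.get(SEARXNG_URL, params={"q": query, "format": "json"})
        resp.raise_for_status()
        results = resp.json().get("results", [])
    return [
        {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("content")}
        for r in results[:max_results]
    ]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP client can launch it
```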
Great write-up! Gives me a few ideas for a governance bot that I'm working on. Thanks for sharing :)
A good model for the planner seems pretty important. What models are best?
OP here -- the general principle I would recommend is to use a big reasoning model for the planning phase; I think Claude Code and other agents do the same. This matters because the quality of the plan really affects the final result, and error rates compound if the plan isn't good.
Based on the article, it seems like a strong reasoning model such as GPT-5 or Opus 4.1 would be a good choice for the planner. I wonder if the GPT-OSS reasoning models would do well.
Personally, I've been using GPT-OSS-120b locally with reasoning_effort set to `high`, and it blows pretty much every other local model out of the water, though it takes a long time before it gets around to the actual reply. For fire-and-forget jobs like "Create a well-researched report on X from perspective Y" it works really well.
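For context, here's roughly how I call it, assuming an OpenAI-compatible local server (vLLM, llama.cpp server, LM Studio, ...). Whether `reasoning_effort` is actually honored depends on the server; some want it passed via `extra_body` or a chat-template kwarg instead, and the base URL/model name below are placeholders:

```python
from openai import OpenAI

# Local OpenAI-compatible endpoint; base URL, model name, and API key are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="gpt-oss-120b",
    reasoning_effort="high",  # trades a lot of latency for a better final answer
    messages=[
        {"role": "user",
         "content": "Create a well-researched report on X from perspective Y."},
    ],
)
print(resp.choices[0].message.content)
```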
Gemini 2.5 Pro is also a great reasoning model; I still prefer it over GPT-5.