Comment by visarga
2 months ago
I had an idea - take SerpAPI and save the top 10 or 20 links for many queries (millions), and put that in a RAG database. Then it can power a local LLM to do web search without ever touching Google.
The index would just point a local crawler towards hubs of resources, links, feeds, and specialized search engines. Then fresh information would come from the crawler itself. My thinking is that new reputable sites don't appear every day, so updating your local index once every few months is sufficient.
The index could host 1 to 10 million, or even 100 million, stubs, each one touching on a different topic and concentrating the best entry points on the web for that topic. A local LLM can RAG-search it and use an agent to crawl from there on. If you solve search this way, without Google, and you also have a local code execution sandbox and a local model, you can cut the cord. Search was the missing ingredient.
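A toy sketch of what such a stub index could look like. The class and field names are mine, and the keyword-overlap scoring is just a stand-in for a real embedding/vector search; the point is only the shape of the data: one topic, a handful of curated entry-point URLs, and a retrieval step that hands those URLs to a crawler agent.

```python
from dataclasses import dataclass, field

@dataclass
class SearchStub:
    """One cached search result set: a topic plus its best entry-point links."""
    topic: str
    links: list = field(default_factory=list)

class StubIndex:
    """Toy in-memory stub index; a real one would use an embedding store."""
    def __init__(self):
        self.stubs = []

    def add(self, topic, links):
        self.stubs.append(SearchStub(topic, links))

    def search(self, query, k=3):
        # Naive keyword overlap as a stand-in for vector similarity.
        q = set(query.lower().split())
        scored = [(len(q & set(s.topic.lower().split())), s) for s in self.stubs]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [s for score, s in scored[:k] if score > 0]

index = StubIndex()
index.add("rust async runtime comparison", ["https://tokio.rs", "https://async.rs"])
index.add("python packaging tutorials", ["https://packaging.python.org"])

# The matched stub's links become the crawler agent's starting points.
hits = index.search("async rust")
```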
You can still call regular search engines for discovery. You can build your personalized cache of search stubs using regular LLMs that have search integration, like ChatGPT and Gemini; you only need to do it once per topic.
Fetching web pages at the kind of volume needed to keep the index fresh is a problem, unless you're Googlebot. It requires manually whitelisting yourself with the likes of Cloudflare, cutting deals with the likes of Reddit, and building a good reputation with any other bot-blocking software that's unfamiliar with your user agent. Even then, you may still find yourself blocked from critical pieces of information.
No, I think we can get by with CommonCrawl, pulling the fresh content every few months and updating the search stubs. The idea is that you don't change the entry points often; you open them up when you need to get the fresh content.
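For illustration, the "pull fresh content from CommonCrawl" step would go through the public CDX index API (index.commoncrawl.org), which returns one JSON record per capture; each record points into a WARC file, so the page body can later be fetched with a single ranged HTTP request. The sample record below is made up, but the field names (`url`, `timestamp`, `filename`, `offset`, `length`) are the ones the index actually returns.

```python
import json

def parse_cdx_record(line: str) -> dict:
    """Turn one newline-delimited JSON record from the CommonCrawl CDX
    index into the info needed for a later ranged fetch of the WARC data."""
    rec = json.loads(line)
    offset, length = int(rec["offset"]), int(rec["length"])
    return {
        "url": rec["url"],
        "timestamp": rec["timestamp"],   # when CommonCrawl captured the page
        "warc_path": rec["filename"],    # WARC file within the crawl
        "byte_range": (offset, offset + length - 1),  # for an HTTP Range header
    }

# Hypothetical record, shaped like the CDX index output.
sample = (
    '{"url": "https://example.com/", "timestamp": "20240218093000", '
    '"filename": "crawl-data/CC-MAIN-2024-10/segments/0000/warc/part-00000.warc.gz", '
    '"offset": "1024", "length": "2048"}'
)
record = parse_cdx_record(sample)
```

Comparing `timestamp` against the stub's last-update time is what tells you whether an entry point actually has anything new worth re-crawling.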
Imagine this stack: local LLM, local search stub index, and local code execution sandbox - a sovereign stack. You can get some privacy and independence back.
CC is not on the same scale as Google and not nearly as fresh. It's around a hundredth of the size, and there's not much chance of it having a recent version of any given page.
I imagine you'd get on just fine with short-tail queries, but the other cases (longer tail, recent queries, things that haven't been crawled) begin to add up.