Comment by MichaelZuo
7 months ago
That's why I specifically said search quality without mentioning a large userbase...
Did you not see the last part on your end?
Plus, if anything a small userbase makes quality search more difficult, because the long tail is still effectively infinite relative to the competencies of a single decision maker, but now there is only one user searching for any given super-niche topic, maybe once a month, in total.
So they can't even A/B test or rely on customer reports to learn the real situation, because the data is too sparse.
The point is that search quality is subjective, not objective, and the two companies are each structured to approach it very differently.
In pursuing billions of global users across all demographics and trying to maximally monetize them through ads, Google is pursuing an entirely different measure of "search quality" than Kagi.
Google delivers their version of search quality when a rice farmer in Thailand and a financier in the Bay Area both reach for Google when they want to find something online, and then get distracted by an ad.
Meanwhile, Kagi gets their version right when they have a profitable base of happy customers. They can make different and more aggressive assumptions about the needs of their users, solicit and digest direct feedback about those assumptions, and optimize a product that delivers superb search quality for their niche.
They're completely different technical problems that only occasionally intersect. Their engineering teams aren't competing with each other.
Even if the entire customer base were limited to HN users with karma exceeding some threshold X, there would still be thousands of searches per day in obscure niches, each one in a different niche.
So I don't see how Kagi can avoid having to deliver quality search results across millions of niches, just at a much lower frequency than Google.
Or are you suggesting they not bother with search quality below a certain frequency threshold?