Comment by consumer451
20 hours ago
meta comment separated for its own discussion
I tried to find that paper via Google search first, and I failed after 3 different searches. I then opened my not-important-stuff LLM, chatgpt.com, and found it in 3 interactions, the 3rd of which I made it use search. Chatbots with search are just so good at "on the tip of my tongue" type things.
Google is in such a weird position because of their bread-and-butter legacy UX × scale. This has to be the biggest case of the innovator's dilemma of all time?
Then you have people complaining that search is no longer a keyword match, even when they claim to know exactly what they want...
Totally! Hence the dilemma.
Google.com has "AI Mode," and it tries to intelligently decide when to suggest it based on the search query. I likely could have clicked AI Mode on google.com once it gave me a crap SERP response, and used that to find the same thing. But I instinctively just went to chatgpt.com instead. I am not a total moron; I use the Gemini, Claude, and GPT APIs in the two LLM-enabled products that I am working on...
However, just last week I noticed that the default AI Mode reply for some queries was just horrible. Like GPT-3.5-quality wrong assumptions. For the first time I saw google.com as the worst option. I cannot be the only one.
I think I might understand the problem. Google has the tech, but as a public company they cannot start losing money on every search query, right? The quarterlies would look bad, bonuses would go down. Same reason ULA can't build Starship, even if they could and wanted to. However, OpenAI can lose money on every query. SOTA inference is not cheap.