The AI Overviews are... extremely bad. For most of my queries, Google's AI Overview misrepresents its own citations, or almost as bad, confidently asserts a falsehood or half-truth based on results that don't actually contain an answer to my search query.
I had the same issue with Kagi, where I'd follow the citation and it would say the opposite of the summary.
A human can make sense of search results with a little time and effort, but current AI models don't seem to be able to.
Cheap AI models aren't good at this, anyway, and AI Overviews have to use cheap models since they get used so much. They would be a lot better (you'd still need to check, but they'd be much less stupid) if they used something like GPT-5, but that's just not feasible right now.
From a UX perspective, the AI overview summary being a multi-paragraph summary makes sense since that was a single query that isn't expected to have conversational context. Where it does not make sense is in conversation-based interfaces. Like, the most popular product is literally called "chat".
"I ask a short and vague question and you respond with a scrollbar-full of information based on some invalid assumptions" is not, by any reasonable definition, a "chat".
I find myself skipping the AI overview like I used to skip over "Sponsored" results back in the day, looking for a trustworthy domain name.
Those AI overviews are dumb and wrong so often I have cut them out of the results entirely. They're embarrassing, really.
It’s fine about 80% of the time, but the other 20% is a lot harder to answer because of lower quality results.