Comment by goalieca
2 days ago
This is a recent phenomenon. It seems most of the pages today are SEO optimized LLM garbage with the aim of having you scroll past three pages of ads.
The internet really used to be efficient, and I could always find exactly what I wanted with an imprecise Google search ~15 years ago.
You'd think, given LLMs' reputation for being trained on Twitter (pre-Musk radicalization) and Reddit, they'd be better at understanding normal conversation flow: Twitter requires short responses, and on Reddit, while the Wall of Text happens occasionally, it's not the typical cadence of discussion.
Reddit and Twitter don't have human conversations. They have exchanges of confident assertions followed by rebuttals. In fact, both of our comments are perfect demonstrations of exactly that. Fairly reflective of how LLMs behave, except nobody wants to "argue" with an LLM the way Twitter and Reddit users want to.
This is not how humans converse in human social settings. The medium is the message, as they say.
Twitter, Reddit, HN don't always have the consistency of conversation that two people talking do.
Even here, I'm responding to you on a thread that I haven't been in on previously.
There's also a lot more material out there in the format of Stack Exchange questions and answers, Quora posts, blog posts, and the like than there is of consistent back-and-forth between two people.
IRC chat logs might have been better...ish.
The cadence of a discussion is unique to the medium in which it happens. What's more, sometimes a prompt requires further investigation and elaboration before a complete response, while other times it calls for storytelling and making it up as it goes.
Don’t you get this today with AI Overviews summarizing everything on top of most Google results?
The AI Overviews are... extremely bad. For most of my queries, Google's AI Overview misrepresents its own citations, or almost as bad, confidently asserts a falsehood or half-truth based on results that don't actually contain an answer to my search query.
I had the same issue with Kagi, where I'd follow the citation and it would say the opposite of the summary.
A human can make sense of search results with a little time and effort, but current AI models don't seem to be able to.
Cheap AI models aren't good at this anyway, and AI Overviews have to use cheap models since they get used so much. They would be a lot better (you'd still need to check, but they'd be much less stupid) if they used something like GPT-5, but that's just not feasible right now.
From a UX perspective, the AI Overview being a multi-paragraph summary makes sense, since it answers a single query that isn't expected to have conversational context. Where it does not make sense is in conversation-based interfaces. Like, the most popular product is literally called "chat".
"I ask a short and vague question and you response with a scrollbar-full of information based on some invalid assumptions" is not, by any reasonable definition, a "chat".
I find myself skipping the AI overview like I used to skip over "Sponsored" results back in the day, looking for a trustworthy domain name.
Those AI overviews are dumb and wrong so often I have cut them out of the results entirely. They're embarrassing, really.
It’s fine about 80% of the time, but the other 20% of queries are a lot harder to answer because of lower-quality results.