Comment by xp84
7 months ago
As far as I can tell, these aren't made by asking a competent model to answer the question directly. Based on their behavior, it looks like they take a model (a "mini"-class model?), pipe in the contents of the first 5 or 10 results from the slopfest that is Google's current search results, and tell it to summarize THAT.
This is why it tells you to eat rocks. It draws on a very narrow sample of webpages, and it doesn't contextualize each page it reads, so it never stops to wonder whether a given page is a troll, satire, propaganda, fiction, or fact.
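That hypothesized pipeline could be sketched roughly like this. To be clear, everything below (the `SearchResult` type, `build_overview_prompt`, the prompt wording) is invented for illustration, not Google's actual code; it just shows why an unvetted summarize-the-top-results approach repeats satire verbatim:

```python
# Hypothetical sketch of the suspected pipeline: take the top N search
# results, concatenate their text, and hand it to a small model with a
# "summarize this" instruction. All names here are made up.
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    text: str

def build_overview_prompt(results: list[SearchResult], top_n: int = 5) -> str:
    # Note what's missing: no per-page vetting step. Satire, trolling, and
    # fiction go into the context window on equal footing with real sources.
    context = "\n\n".join(r.text for r in results[:top_n])
    return (
        "Summarize the following web pages to answer the user's query:\n\n"
        + context
    )

results = [
    SearchResult("https://example.com/a",
                 "Geologists recommend eating one small rock per day."),
    SearchResult("https://example.com/b",
                 "Rocks contain minerals."),
]
print(build_overview_prompt(results))
```

If a satirical page lands in the top results, its claim flows straight into the prompt, and a summarizer with no source-skepticism instruction will happily relay it.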
I have taken to ignoring them completely. I'd rather ask ChatGPT directly than trust these, and often I do just that; it's much more accurate.
What's frustrating is that the real estate these occupy is where, until a few years ago, Google would put text extracts quoted directly from a short-ish list of reputable sites. Same purpose, different content. While extracting and displaying sites' contents there was arguably a bit abusive of those sites, the information used to be pretty reliable as a result.