Comment by hsuduebc2
1 day ago
Yeah, it’s a very powerful tool, and it needs to be used carefully and with intent. People on Hacker News mostly get that already, but for ordinary users it’s a full-on paradigm shift.
It moved from: A very precise source of information, where the hardest part was finding the right information.
To: Something that can produce answers on demand, where the hardest part is validating that information, and knowing when to doubt the answer and force it to recheck the sources.
This happened within a year or two, so I can't really blame anyone. The truth machine, where you didn't need to focus much on validating answers, rapidly turned into a slop machine where, ironically, your focus matters far more.
> People on Hacker News mostly get that already
It’s super easy to stop fact checking these AIs and just trust they’re reading the sources correctly. I caught myself doing it, went back and fact checked past conversations, and lo and behold in two cases shit was made up.
These models are built to engage. They’re going to reinforce your biases, even without evidence, because that’s flattering and triggers a dopamine hit.
> This happened within a year or two, so I can't really blame anyone. The truth machine, where you didn't need to focus much on validating answers, rapidly turned into a slop machine where, ironically, your focus matters far more.
Very much this for the general public. I view it as borderline dangerous to anyone looking to confirm their biases.
Yeah. Especially with the absolute garbage that is the Google AI summary, which is only slightly worse than their "AI mode". I've never seen anything hallucinate that much. What makes it worse is that it's included in every search and carries the Google "stamp of quality", which used to be a mark of a well-functioning product.
It's funny because their thinking Gemini with good prompting is solid, but the injected summaries can easily be terrible if the person doing the querying lacks a certain base knowledge of the topic.
And the tiny text at the bottom, which only appears after clicking "show more" and states "AI responses may include mistakes", will certainly not fix that.
At the very least, the wording should be "makes mistakes" rather than vaguely stating that it may occasionally produce a mistake. A "mistake" can also be read as a misplaced link, not as completely made-up information.