
Comment by SecretDreams

1 day ago

> This happened within a year or two, so I can't really blame anyone. The truth machine, where you didn't need to focus much on validating answers, rapidly turned into a slop machine where, ironically, your focus matters much more.

Very much this for the general public. I view it as borderline dangerous for anyone looking for confirmation bias.

Yea. Especially with the absolute garbage that is the Google AI summary, which is just slightly worse than their "AI Mode". I've never seen anything hallucinate that much. What makes it worse is that it's included in every search and carries the Google "stamp of quality", which was usually the mark of a well-functioning product.

  • It's funny, because their thinking Gemini models with good prompting are solid, but the injected summaries can easily be terrible if the person doing the querying lacks a certain base knowledge of the topic.

  • And the tiny text at the bottom, which shows only after clicking "show more" and states "AI responses may include mistakes", will certainly not fix that.

    At the very least, the wording should be "makes mistakes" rather than vaguely suggesting it may occasionally, in some cases, produce a mistake. "Mistake" can also be read as a misplaced link, not absolutely made-up information.