Comment by Velorivox

7 days ago

> which are both completely irrational

Really!?

[0] https://i.imgur.com/ly5yk9h.png

Your screenshot conveniently omits the disclaimer below: "AI responses may include mistakes. Learn more[1]"

[1]: https://support.google.com/websearch/answer/14901683

  • It isn't omitting anything "conveniently"; I was not shown the disclaimer (nor anything else, so I assume the page mostly failed to load).

    In any case, if you really believe a disclaimer makes it okay for Google to display blatant misinformation in a first-party capacity, we have little to discuss.

    • https://www.google.com/search?q=is+all+of+oregon+north+of+ne...

      Show more -> the disclaimer and the feedback buttons appear at the end. Had you bothered to read the full response, you would have seen the disclaimer; you didn't, so you never saw it. For something to qualify as "misinformation," the speaker has at the very least to be asserting its truthfulness, and Google makes no such claim. The claim they make is precisely that these search-embedded "[..] responses may include mistakes." In this specific case, they are not asserting truthfulness.

      FWIW, Gemini 2.5 Pro answers the question correctly.

      The search hints are clearly a low-compute first approximation. That approximation is probably correct for trivial questions, which likely make up the majority of user queries, so it's not surprising that it fails in this specific instance. The application doesn't allow for reasoning at that scale; even Google cannot afford to run reasoning traces on every search query. I concur that there's very little to discuss: you have seemingly made up your mind about LLM technology, and I doubt you will appreciate having your semantics picked apart to begin with.
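      To make the economics concrete, here is a minimal sketch of that tiered-serving tradeoff. Everything in it (cheap_model, reasoning_model, looks_trivial, the cost figures) is hypothetical and illustrative, not Google's actual pipeline:

        from dataclasses import dataclass

        @dataclass
        class Answer:
            text: str
            cost_units: float  # relative compute cost per query (made-up numbers)

        def cheap_model(query: str) -> Answer:
            # Fast, non-reasoning pass: adequate for most trivial queries,
            # but can confidently get geography-style questions wrong.
            return Answer(text=f"[fast guess for: {query}]", cost_units=1.0)

        def reasoning_model(query: str) -> Answer:
            # Slow reasoning trace: far more accurate, far more expensive.
            return Answer(text=f"[reasoned answer for: {query}]", cost_units=50.0)

        def looks_trivial(query: str) -> bool:
            # Placeholder heuristic; a real router would use a trained classifier.
            return len(query.split()) < 12

        def answer(query: str) -> Answer:
            # At billions of queries per day, running the reasoning model on
            # everything is unaffordable, so the cheap pass is the default.
            return cheap_model(query) if looks_trivial(query) else reasoning_model(query)

        print(answer("is all of oregon north of new york").text)

      The point of the sketch is only that the default path has to be cheap, so occasional confident errors on the fast path are an expected cost of serving at search scale.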
