
Comment by TheEdonian

6 days ago

Well let's not forget that it's an opinionated source. There is also the point that if you ask it about a topic, it will (often) give you the answer that has the most content about it (or the most easily accessible information).

Agree.

I find that, for many, LLMs are addictive, a magnet, because they offer to do your work for you, or so it appears. Resisting this temptation is nearly impossible for children, for example, and many adults succumb.

A good way to maintain a healthy dose of skepticism about its output, and to keep checking that output, is to ask the LLM about something that happened after its training cutoff.

For example, I asked if lidar could damage phone lenses, and the LLM very convincingly argued it was highly improbable. That danger only recently made the news, so it wasn’t part of the training data.

This helps me stay sane and resist the temptation of just accepting LLM output =)

On a side note, I feel the Kagi Assistant is nice for kids because it links to its sources.

  • LIDAR damaging the lens is extremely unlikely. A lens is mostly glass.

    What it can damage is the sensor, which is actually not at all the same thing as a lens.

    When asking questions it's important to ask the right question.

  • I asked ChatGPT o3 if lidar could damage phone sensors and it said yes https://chatgpt.com/share/683ee007-7338-800e-a6a4-cebc293c46...

    I also asked Gemini 2.5 pro preview and it said yes. https://g.co/gemini/share/0aeded9b8220

    I find it interesting to always test for myself when someone suggests to me that an "LLM" failed at a task.

    • I should have been more specific, but I believe you missed my point.

      I tested this at the time on Claude 3.7 Sonnet, which has an earlier cutoff date, and I just tested again with this prompt: “Can the lidar of a self driving car damage a phone camera sensor?” The answer is still wrong in my test.

      I believe the issue is the training cutoff date; that’s my point. LLMs seem smart, but they have limits, and when asked about something discovered after the training cutoff, they will sometimes be confidently wrong. (A rough sketch of how one might script this check is below.)
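
      A minimal sketch of scripting the cutoff check described above, assuming the Anthropic Python SDK with an API key in the environment; the model id is illustrative and not taken from the comment, so substitute whichever model you want to probe:

        # Sketch: ask a model about something that happened after its
        # training cutoff and compare the answer against what you know.
        # Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY are set up.
        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        PROMPT = "Can the lidar of a self driving car damage a phone camera sensor?"

        response = client.messages.create(
            model="claude-3-7-sonnet-latest",  # illustrative model id
            max_tokens=500,
            messages=[{"role": "user", "content": PROMPT}],
        )

        # A confident "no" here, despite post-cutoff reports to the contrary,
        # is the tell that the model is answering from stale training data.
        print(response.content[0].text)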
