
Comment by SignalsFromBob

18 hours ago

The author is making the same mistake they accuse other news outlets of making: placing too much responsibility on the AI chatbot rather than on the end-user.

The thing that actually needs correcting is end-user education; that's where the fix has to happen. Yet again, people are adopting a new technology and assuming that everything it provides is correct. Just because something is in a book, or on TV or the radio, doesn't mean it's true or accurate. Just because you read something on the Internet doesn't mean it's true. Likewise, just because an AI chatbot said something doesn't mean it's true.

It's unfortunate that the young man mentioned in the article found a way to reinforce his delusions with AI. He just as easily could've found that reinforcement in a book, a YouTube video, or a song whose lyrics he thought were speaking directly to him and commanding him to do something.

These tools aren't perfect. Should AI provide more accurate output? Of course. We're in the early days of AI, and over time these tools will converge toward correctness. There should also be more prominent warnings that AI output may not be accurate. As another poster said, the AI mathematically assembles sentences. It's up to the end-user to figure out whether the result makes sense, integrate it with other information, and assess it for accuracy.

Sentences such as "Tech companies have every incentive to encourage this confusion" only reinforce the idea that end-users shouldn't need to think, that everything should be handed to us perfect and without fault. I've never seen anyone involved with AI make that promise, yet people write article after article bashing AI companies as if we'd been guaranteed a tool without fault. It's getting tiresome.

Do you think of your non-AI conversational partners as tools as well?

  • Yes, although "resources" might be a better word than "tools" in that case. If I'm at the library asking the librarian to help me locate some information, they are definitely an educated resource that I'm using. The same goes for any other person whose expert opinion or advice I'm seeking.

    • Those experts will generally ask clarifying questions if they don't understand what you're asking, rather than spinning you in circles. The reason they're considered experts in the first place is that they understand the topic better than you do. It's not the end-user's fault that the LLM spews nonsense in a way that can be mistaken for human expertise.

I'm tired of companies putting out dangerous things and then saying the responsibility lies with the end-user.