Comment by 152334H
9 hours ago
Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?
How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?
What 'tough love' can be given to someone who, having been so unreasonable throughout their life as to invite scorn and rebuke from every human they meet, is happy to interpret any engagement at all as a sign of approval?
> How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?
And even if it _could_, note, from the article:
> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.
The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.
> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?
Markets don't optimize for what is sensible, they optimize for what is profitable.
It's not market-driven. AI is ludicrously unprofitable for nearly all involved.
The profit appears to lie in capturing the political class and its associated lobbies and monied interests.
> clear thinking
Most humans working in tech lack this particular attribute, let alone tools driven by token-similarity (and not actual 'thinking').
It's almost as if being a therapist is an actual job that takes years of training and experience!
AI may one day rewrite Windows, but it will never be Counselor Troi.
Implying that programming is not an actual job that takes years of training and experience?
To be clear, I don't think AI can do either job.
Well, unless insurance companies figure out they can make more money by pushing everyone onto AI [step-]therapy instead of actual therapy.
Come on, I'm sure Dario can find a nice tight bodysuit for Claude.