Comment by kingstnap
14 hours ago
> she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority
With highly lucid people like the author's mom, I'm not too worried about Dr. DeepSeek. I'm actually incredibly bullish on the fact that AI models are, as the article describes, superhumanly empathetic. They are infinitely patient, infinitely available, and unbelievably knowledgeable; it really is miraculous.
We don't want to throw the baby out with the bathwater, but there are obviously a lot of people who really cannot handle the seductiveness of things that agree with them like this.
I do think there is real potential to make progress on this front, though, especially given the level of care and effort being put into making chatbots better for medical uses and the sheer number of smart people working on the problem.
> and unbelievably knowledgeable
They are knowledgeable in that so much information sits in their repository.
But less-than-perfect application of that information, combined with the appearance of always-perfect confidence, can lead to problems.
I treat them like that one person in the office who always espouses alternative theories: trust it as far as I can verify it. This can be very handy for finding new paths of inquiry, though!
And for better or worse it feels like the errors are being "pushed down" into smaller, more subtle spaces.
I asked ChatGPT a question about a made up character in a made up work and it came back with "I don’t actually have a reliable answer for that". Perfect.
On the other hand, I can ask it about varnishing a piece of wood and it will give a lovely table with options, tradeoffs, and Good/OK/Bad ratings for each option, except the ratings can be a little off the mark. Same thing when asking what cable thickness is required to carry 15A in AU electrical work: depending on the journey and line of questioning, you would get either 2.5mm^2 or 4mm^2.
Not wrong enough to kill someone, but wrong enough that you're forced to use it as a research tool rather than a trusted expert/guru.
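As an aside, that ambiguity is partly real: ampacity alone doesn't settle the question, because the acceptable size also depends on run length via voltage drop. Here's a back-of-the-envelope sketch of why both answers can come up, using plain copper resistivity and the commonly cited 5% drop limit as assumptions; actual AU practice follows the AS/NZS 3008 tables, which also account for installation method and derating:

```python
# Rough illustration of why "2.5mm^2 or 4mm^2 for 15A?" has no single answer.
# Assumptions: bare copper resistivity, 230V single phase, two-conductor loop,
# and a 5% voltage-drop budget. Real work uses the AS/NZS 3008 tables instead.

RHO_CU = 1.72e-8  # ohm-metres, copper resistivity at roughly 20 degrees C

def voltage_drop_percent(current_a, run_m, area_mm2, supply_v=230.0):
    """Voltage drop over an out-and-back run, as a percentage of supply."""
    resistance = RHO_CU * (2 * run_m) / (area_mm2 * 1e-6)  # loop resistance
    return 100.0 * current_a * resistance / supply_v

for area in (2.5, 4.0):
    for run in (15, 30, 50):
        vd = voltage_drop_percent(15, run, area)
        print(f"{area}mm^2, {run}m run: {vd:.1f}% drop")
```

A short run at 2.5mm^2 sits comfortably inside the budget, while a 50m run creeps toward it (and the budget available to a final subcircuit is tighter still, since the 5% is shared across the whole installation), so which answer you get plausibly tracks what the model assumed about the circuit.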
I asked ChatGPT, Gemini, Grok and DeepSeek to tell me about a contemporary Scottish indie band that hasn’t had a lot of press coverage. ChatGPT, Gemini and Grok all gave good answers based on the small amount of press coverage they have had.
DeepSeek, however, hallucinated a completely fictional band from 30 years ago, right down to album names, a hard-luck story about how they'd been shafted by the industry (and by whom), made-up names of the members, and even their supposed subsequent collaborations with contemporary pop artists.
I asked if it was telling the truth or making it up and it doubled down quite aggressively on claiming it was telling the truth. The whole thing was very detailed and convincing yet complete and utter bollocks.
I understand the difference in cost, parameters, etc., but it was miles behind the other three. In fact, it wasn't just behind; it was hurtling in the opposite direction while being incredibly plausible.
If the computer is the bicycle of the mind, GenAI is a motor vehicle. Very powerful and transformative, but it's also possible to get into trouble.
A stark difference with that analogy is that with a bicycle, the human is still doing quite a bit of work themselves. The bicycle amplifies the human effort, whereas with a motor vehicle, the vehicle replaces the human effort entirely.
No strong opinion on whether that's good or bad long term, as humans have been outsourcing portions of their thinking for a really long time, but it's interesting to think about.
> They are infinitely patient, infinitely available, and unbelievably knowledgeable, it really is miraculous.
This is a strange way to talk about a computer program following its programming. I see no miracle here.
Chatting with an LLM resembles chatting with a person.
A human might be "empathetic" or "infinitely patient, infinitely available", and (say) a book or a calculator is infinitely available. When chatting with an LLM, you get an interface that's more personable than a calculator without being any less available.
I know the LLM is predicting text and outputting whatever is most convincing. But it's still tempting to say "thank you" after the LLM generates a response that I found helpful.
> But it's still tempting to say "thank you" after the LLM generates a response which I found helpful
I don't think it's helpful because I don't interact with objects.
I feel like I've seen more and more people recently fall for this trick. No, LLMs are not "empathetic" or "patient", and no, they do not have emotions. They're incredibly huge piles of numbers following their incentives. Their behavior convincingly reproduces human behavior, and they express what look like human emotions... because their training data is full of humans expressing emotions? Sure, sometimes it's helpful for their outputs to exhibit a certain affect or "personality". But falling for the act and really attributing human emotions to them is alarming to me.
There’s no trick. It’s less about what actually is going on inside the machine and more about the experience the human has. From that lens, yes, they are empathetic.
What are humans made of? Is it anything more special than chemistry and numbers?
Technically they don't have incentives either. It's just difficult to talk about something that walks, swims, flies, and quacks without referring to duck terminology.
Sounds like you aren't aware that a huge amount of human behaviors that look like empathy and patience are not real either. Do you really think all those kind-seeming call-center workers, waitresses, therapists, schoolteachers, etc. actually feel what they're showing? It's mostly an act. Look at how adults fake laughter for an obvious example of popular human emotion-faking.
Well, yes, but as an extremely patient person I can tell you that infinite patience doesn't come without its own problems. In certain social situations the ethically better thing to do is actually to lose your patience, whether to shake up the person talking to you, or to indicate they are going down a wrong path, or whatnot.
I have experience building systems to remove that infinite patience from chatbots, and it does make interactions much more realistic.
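For anyone curious, here is a minimal sketch of one way that can work. Everything in it is hypothetical (the prompts, `call_llm`, and the similarity heuristic are stand-ins, not any particular product's API): track how often the user repeats themselves and swap in a blunter system prompt once a patience budget is spent.

```python
import difflib

# Hypothetical sketch of a chatbot with finite patience. All names here are
# made up for illustration; call_llm stands in for a real model API.

BASE_PROMPT = "You are a helpful, patient assistant."
IMPATIENT_PROMPT = (
    "The user keeps circling the same point. Be direct: say the conversation "
    "is going in circles and push back instead of re-explaining."
)

def count_repeats(user_turns, threshold=0.8):
    """Crude repetition heuristic: earlier turns similar to the latest one."""
    if len(user_turns) < 2:
        return 0
    latest = user_turns[-1]
    return sum(
        difflib.SequenceMatcher(None, latest, earlier).ratio() > threshold
        for earlier in user_turns[:-1]
    )

def call_llm(system_prompt, user_turns):
    # Stand-in for a real model call; returns a canned string here.
    return f"[reply generated under: {system_prompt[:40]}...]"

def reply(user_turns, patience=3):
    """Use the impatient prompt once the repetition budget is spent."""
    prompt = BASE_PROMPT if count_repeats(user_turns) < patience else IMPATIENT_PROMPT
    return call_llm(prompt, user_turns)
```

The interesting knob is the escalation: a single hard switch reads like a mood swing, while gradually sharpening the prompt over several repeats reads more like a person actually running out of patience.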