Comment by kingstnap
15 hours ago
> she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority
With highly lucid people like the author's mom, I'm not too worried about Dr. DeepSeek. I'm actually incredibly bullish on the fact that AI models are, as the article describes, superhumanly empathetic. They are infinitely patient, infinitely available, and unbelievably knowledgeable; it really is miraculous.
We don't want to throw the baby out with the bathwater, but there are obviously a lot of people who really cannot handle the seductiveness of something that agrees with them like this.
I do think there is real potential for progress on this front, though, especially given the level of care and effort being put into making chatbots better for medical use and the sheer number of smart people working on the problem.
My experience with doctors in the US is that they often give you not just contradictory advice but plain bad advice, with a complete lack of common sense. It feels like they are regurgitating medical school textbooks without a context window. I truly believe doctors, most specialists and definitely all general practitioners, are easily replaceable with the tech we have today. The only obstacles are regulations, insurance, and not being able to sue an LLM. But it is not a technical issue anymore. Doctors would only be necessary to perform more complicated procedures such as surgery, and that's only until we can fully automate those with robots.

Most of the complicated medical issues I have had, some related to the immune system, I solved myself by treating them as engineering problems and debugging my own body. Meanwhile, the doctors seeing me had no clue. And this was before we had the tools we have today. It's as if doctors often cannot think outside the box and focus only on treating symptoms. My sister is a doctor, by the way, and she suffers from the same one-size-fits-all approach to medicine.
> and unbelievably knowledgeable
They are knowledgeable in the sense that an enormous amount of information sits in their repository.

But less-than-perfect application of that information, combined with an appearance of always-perfect confidence, can lead to problems.
I treat them like that one person in the office who always espouses alternative theories - trust them only as far as I can verify. This can be very handy for finding new paths of inquiry, though!
And for better or worse it feels like the errors are being "pushed down" into smaller, more subtle spaces.
I asked ChatGPT a question about a made-up character in a made-up work and it came back with "I don’t actually have a reliable answer for that". Perfect.
On the other hand, I can ask it about varnishing a piece of wood and it will give a lovely table with options, tradeoffs, and Good/OK/Bad ratings for each option, except the ratings can be a little off the mark. Same thing when asking what cable cross-section is required to carry 15A in AU electrical work: depending on the journey and line of questioning, you would get either 2.5mm^2 or 4mm^2.
Not wrong enough to kill someone, but wrong enough that you're forced to use it as a research tool rather than a trusted expert/guru.
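For what it's worth, the cable question is largely arithmetic plus a lookup in the AS/NZS 3008 tables, which is part of why the model can plausibly land on either answer. A back-of-the-envelope sketch (illustrative physics only, not values from the standard, which also derates for insulation, bundling and ambient temperature) shows that run length is what flips 2.5mm^2 into 4mm^2:

    # Rough voltage-drop check for a single-phase copper run.
    # Illustrative only: real AU work uses the AS/NZS 3008 tables,
    # which also derate for installation conditions.

    RHO_CU = 1.72e-8      # copper resistivity, ohm-metres (approximate)
    LIMIT_V = 0.05 * 230  # a common 5% voltage-drop budget on 230 V

    def voltage_drop(current_a: float, run_m: float, area_mm2: float) -> float:
        """Round-trip drop across the active+neutral pair, in volts."""
        return 2 * run_m * current_a * RHO_CU / (area_mm2 * 1e-6)

    for area in (2.5, 4.0):
        for run in (20, 40, 60):
            drop = voltage_drop(15, run, area)
            verdict = "ok" if drop <= LIMIT_V else "too much drop"
            print(f"{area} mm^2, {run} m: {drop:.1f} V ({verdict})")

On those rough numbers, 2.5mm^2 keeps a 15A load within a 5% drop budget up to roughly 55m, and 4mm^2 well beyond that - exactly the kind of unstated assumption that decides which answer you get.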
I asked ChatGPT, Gemini, Grok and DeepSeek to tell me about a contemporary Scottish indie band that hasn’t had a lot of press coverage. ChatGPT, Gemini and Grok all gave good answers based on the small amount of press coverage they have had.
DeepSeek, however, hallucinated a completely fictional band from 30 years ago, right down to album names, a hard-luck story about how they’d been shafted by the industry (and by whom), made-up names for the members, and even their supposed subsequent collaborations with contemporary pop artists.
I asked if it was telling the truth or making it up and it doubled down quite aggressively on claiming it was telling the truth. The whole thing was very detailed and convincing yet complete and utter bollocks.
I understand the differences in cost, parameters, etc., but it was miles behind the other three. In fact, it wasn’t just behind; it was hurtling in the opposite direction, while being incredibly plausible.
If the computer is the bicycle of the mind, GenAI is a motor vehicle. Very powerful and transformative, but it's also possible to get into trouble.
A stark difference with that analogy is that with a bicycle, the human is still doing quite a bit of work themselves. The bicycle amplifies the human effort, whereas with a motor vehicle, the vehicle replaces the human effort entirely.
No strong opinion on whether that's good or bad long term, as humans have been outsourcing portions of their thinking for a really long time, but it's interesting to think about.
The other difference, arguably more important in practice, is that the computer was quickly turned from "bicycle of the mind" into a "TV of the mind". It rarely helps you get where you want; mostly it just annoys or entertains you, while feeding you an endless stream of commercials and propaganda - and the one thing it does not give you is control. There are prescribed paths to choose from, but you're not supposed to make your own - only sit back and go along for the ride.
LLMs, at least for now, escape the near-total enshittification of computing. They're fully general-purpose, resist attempts at constraining them[0], and are good enough at acting like a human that they're able to defeat user-hostile UX and force interoperability on computer systems, despite all the system owners' attempts to prevent it.
The last 2-3 years were a period where end-users (not just hardcore hackers) became profoundly empowered by technology. It won't last forever, but I hope we get at least a few more years of this before business interests inevitably reassert their power over people once again.
--
[0] - The prompt injection "problem" was, especially early on, a feature from the perspective of end-users. See the increasingly creative "jailbreak" prompts invented to escape ham-fisted attempts by vendors to censor models and prevent "inappropriate" conversations.
> They are infinitely patient, infinitely available, and unbelievably knowledgeable, it really is miraculous.
This is a strange way to talk about a computer program following its programming. I see no miracle here.
Chatting with an LLM resembles chatting with a person.
A human might be "empathetic", "infinitely patient", "infinitely available". A book or a calculator is also infinitely available. When chatting with an LLM, you get an interface that's more personable than a calculator without being any less available.

I know the LLM is predicting text and outputting whatever is most convincing. But it's still tempting to say "thank you" after it generates a response I found helpful.
> But it's still tempting to say "thank you" after the LLM generates a response which I found helpful
I don't feel that temptation, because I don't interact socially with objects.
I feel like I’ve seen more and more people fall for this trick recently. No, LLMs are not “empathetic” or “patient”, and no, they do not have emotions. They’re incredibly huge piles of numbers following their incentives. Their behavior convincingly reproduces human behavior, and they express what look like human emotions… because their training data is full of humans expressing emotions? Sure, sometimes it’s helpful for their outputs to exhibit a certain affect or “personality”. But falling for the act and genuinely attributing human emotions to them is alarming to me.
It sounds like a regrettable situation: whether something is true or false, right or wrong, people don’t really care. What matters more to them is the immediate feeling. Today’s LLMs can imitate human conversation so well that they’re hard to distinguish from a real person. This creates a dilemma for me: when humans and machines are hard to tell apart, how should I view the entity on the other side of the chat window? Is it a machine or a human? A human.
There’s no trick. It’s less about what is actually going on inside the machine and more about the experience the human has. Through that lens, yes, they are empathetic.
Technically they don't have incentives either. It's just difficult to talk about something that walks, swims, flies, and quacks without referring to duck terminology.
What are humans made of? Is it anything more special than chemistry and numbers?
Sounds like you aren't aware that a huge amount of human behavior that looks like empathy and patience isn't real either. Do you really think all those kind-seeming call-center workers, waitresses, therapists, schoolteachers, etc. actually feel what they're showing? It's mostly an act. Look at how adults fake laughter for an obvious example of everyday human emotion-faking.
Well, yes, but as an extremely patient person I can tell you that infinite patience doesn't come without its own problems. In certain social situations the ethically better thing to do is actually to lose your patience, whether to shake up the person talking to you, or to signal that they are going down a wrong path, or whatnot.
I have experience building systems to remove that infinite patience from chatbots, and it does make interactions much more realistic; a rough sketch of the idea is below.
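A minimal sketch of the kind of thing I mean (hypothetical names and thresholds throughout; a real system would use embeddings and an actual chat API): track how often the user circles back to the same point, and swap in a less patient persona once a patience budget is spent.

    # Hypothetical sketch: a "patience budget" for a chatbot persona.

    BASE_SYSTEM = "You are a helpful, patient assistant."
    IMPATIENT_NOTE = (
        "The user has repeated essentially the same point several times. "
        "Stop re-explaining patiently; be brief, a little blunt, and nudge "
        "them to act on the advice or change direction."
    )

    def similarity(a: str, b: str) -> float:
        """Crude word-overlap score; a real system would use embeddings."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    def pick_system_prompt(user_turns: list[str], patience: int = 3) -> str:
        """Drop the infinite patience once the user keeps circling back."""
        last = user_turns[-1]
        repeats = sum(similarity(last, t) > 0.6 for t in user_turns[:-1])
        return BASE_SYSTEM + (" " + IMPATIENT_NOTE if repeats >= patience else "")

    # The chosen prompt is then fed to whatever chat-completion API you use.

The overlap score and threshold are stand-ins; the point is just that "losing patience" can be a deliberate, bounded behavior rather than an accident of the model.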