Comment by gherkinnn

19 hours ago

To answer your question: talking to a human.

Medicine is so much more than "knowledge, experience, and pattern matching", as any patient can attest. Why is it so hard for some people to understand that humans need other humans, and that human problems can't be solved with technology?

So much of what I know from women in my life is that the human element of medicine is almost a strict negative for them. As a guy it hasn't been much better, but at least doctors listen to me when I say something.

  • One of, if not THE biggest challenge in getting treatment is getting past insurance rules designed to deny treatment. This is much, much easier when you're able to convince a doctor (and/or trained medical staff) to argue on your behalf. If you can't get those folks to listen to you, that's probably not gonna happen. You might have to go through several different practices before you find a sympathetic ear.

    Now replace some / all of those humans with... A machine whose function also needs insurance approval.

    It's gonna end badly.

    • Sounds like we need to dismantle and replace this broadly dysfunctional system at multiple points. It's not like the US insurance landscape is anywhere close to the best way of handling healthcare if you look at many places in the world.

      15 replies →

    • The whole system has basic flaws in how its financing is set up.

      There is an intermediary between buyers and sellers, and it's allowed to take a percentage of the sale. No such entity will ever work in the interest of the consumer; it has every incentive to inflate prices. An intermediary is needed, but it should be financed by buyers with a flat fee (possibly with additional incentives that reinforce the desired behavior). The tragedy here is that initially it was. But that was deemed too expensive for the buyers, and it got privatized, which made it vastly more expensive in the long run.

      Insurance is also the wrong model. Insurance is gambling, and gambling needs restrictions. You are allowed to take people's money without providing any service most of the time, so in exchange for that privilege you shouldn't be allowed to refuse legitimate claims.

  • Perhaps, but I don't have much optimism for what this ends up looking like if it's an AI you have to convince to listen to you. In the spaces where this is already happening (recruitment comes to mind), things are not looking good...

  • Agreed. Last time I was sick I said my fevers were pushing up to 100, and they said it's not a concern until 100.4. Felt like an odd number; it's 38 C. Because my dramatic undersampling of my temperature was 0.4 degrees lower than their rounded threshold, through some unit conversions, I clearly didn't have a fever. That's not a very human touch.
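The seemingly arbitrary 100.4 threshold really is just a unit conversion. A quick sketch of the arithmetic (plain Python; the function name is my own illustration, not from the thread):

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# The "odd" 100.4 F fever threshold is just 38 C converted.
print(round(c_to_f(38.0), 2))   # 100.4
# "Pushing up to 100" corresponds to roughly 37.8 C, barely below it.
print(round(c_to_f(37.8), 2))   # 100.04
```

So the patient's reported temperature and the clinic's cutoff differ by only about 0.2 degrees Celsius once the rounding in both directions is accounted for.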

  • Yes, yes, but when was your last period?

    This even translates to the pediatric space. I took all of my kids to the pediatrician because either they don't make comments to me like they do to my wife, or I don't take shit from them. I'm not sure which. Here's an example:

    My wife and daughter were there and the doctor asked what kind of milk my daughter was drinking. She said "whole milk" and the doctor made a comment along the lines of "Wow, mom, you really need to switch to 2%". To understand this, though, you need to understand that my daughter was _small_. Like they had to staple a 2nd sheet of paper to the weight chart because she was below the available graph space. It wasn't from lack of food or anything like that, she's just small and didn't have much of an appetite.

    So I became the one to take the kids there. Instead of chastising me, they literally prescribed cheeseburgers and fettuccine alfredo.

    My daughter is in her 20s now and is still small -- it's just the way she is. When she goes to see her primary, do you know what their first question is? "When was your last period."

    • My experiences broadly support your conclusions.

      However, your argument focuses on the routine intake instead of any listening part. The fact that the doctor measures height, weight, temperature, and blood pressure on intake and then asks about LMP doesn’t surprise me… that’s the part of the script where you just provide the data before you bring up concerns.

      Not to say the doctor was not a jerk, just that your argument doesn’t do much for me.

    • Yes? That's a very important piece of information, and I hope would be a thing a doctor asks, especially if there are concerns about weight or nutrition.

      1 reply →

    • The medical industry must be going for some long-term achievement in how much they disbelieve, mistreat, and degrade the women who come to them.

      I wonder how many units of their training courses are spent on this, and how much is spent on the cultural reinforcement of it.

      2 replies →

    • > My daughter is in her 20s now and is still small -- it's just the way she is. When she goes to see her primary, do you know what their first question is? "When was your last period."

      Is that supposed to be a problem? How does it connect to the story in your comment?

      The question seems to be warranted to me, since being underweight can stop you from menstruating. So if you find someone thin and her last period was off in the distant past, you can conclude that there's a problem and something should be done about it; if it was a couple of weeks ago, you can conclude that she's fine.

      (It could also just be something that is automatically assessed as a potential indicator of all kinds of different things. Notably pregnancy. For me, it bothered me that whenever you have an appointment at Kaiser for any reason, part of their checkin procedure is asking you how tall you are. I'd answer, but eventually I started pointing out to them that I wasn't ever measuring my height and they were just getting the same answer from my memory over and over again. [By contrast, they also take your weight every time, but they do that by putting you on a scale and reading it off.] The fact that my height wasn't being remeasured didn't bother them; I'm not sure what that question is for.)

      6 replies →

  • At which point I'd ask: how much of that is baked into the AI now?

    It doesn't have opinions, research, direction of its own. Is this a path of codifying the worst elements of human society as we've known it, permanently?

One doctor didn't want to give me ritalin, so I went to another one.

One was against it, the other one saw it as a good idea.

I would love to have real data, real statistics etc.

  • Why do you need ritalin my dude? Aren't LLMs already doing all the work that requires focus and intelligence instead of you?

    Also, the very idea that LLMs would prescribe you ritalin at all is laughable... Having no human doctors in the loop is a guaranteed way to cut prescription drug abuse, as ya can't really bribe an LLM or appeal to its humanity...

    • Because I actually have real ADHD.

      I have it so strongly that after preparing myself, my work desk, my books, everything, I was staring into the books I wanted to learn from for 15-30 minutes, unable to just start or do anything.

      With ritalin, I might still have this mental block, but it's overcome in a few seconds.

      I went from a nearly/borderline failing grade to nearly the best grade in just one year.

      This significantly changed where I am today.

    • > Cool. Aren't LLMs already doing all the work that requires focus and intelligence instead of you?

      So your solution is to outsource thinking and work? That'll work out great in the long run.

Because people believe that they know everything about humans and how they work (or they hedge it). This is the exact same reason I don't trust supposed "experts" claiming AI will replace all these jobs: those same experts have no idea what these jobs actually entail and just look at the job title (and maybe the description) but have not once actually worked those jobs. And there is a huge chasm between "You read the job description" and "you actually know what it is like to be in this position and you fully understand everything that goes into it".

> human problems can't be solved with technology

How are you defining technology? How are you defining human problems? Inventions are created to solve human problems, not the theoretical problems of a fictional universe. Do X-rays, refrigerators, phones, and even looms solve problems for nonhumans?

Claiming something that sounds deep doesn’t make it an axiom.

Doctors are not necessarily great at talking to patients, and patients are unhappy with the information doctors provide. This moat has dried up.

  • If you prefer an LLM to a human doctor, you deserve an LLM instead of a human doctor, and I hope you get it.

    • Free markets and all that, right?

      Ok fellas, put your money where your mouth is. It's easy to talk until you put your money behind it (or withdraw it, by cutting your spending on doctors), if you are so confident in doctor-as-a-service by LLM.

      2 replies →

    • I would use one for sure. Much of medicine is getting tests/labs booked and fighting to get certain medicines. Doctors will barely give you 5 minutes, only deal with one issue per visit, are rarely available, and going into an office can make you sicker. An LLM with doctor powers could offer more. I don't think we are at the surgery point, but we are past getting notes and medicines refilled.

      3 replies →

It seems likely to me that doctors whose job is almost or entirely about making diagnoses and prescribing treatments won't be able to keep up in the long run, whereas those who are more patient-facing will still be around even after AI is better than us at just about everything.

If I were picking a specialty now, I'd go with pediatrics or psychiatry over something like oncology.

    • You are confusing the job with a subset of its tasks. Some tasks can be automated, some won't be. That doesn't mean LLMs, which cannot tell how many r's are in strawberry, will replace anyone.

    • > That doesn't mean LLMs, which cannot tell how many r's are in strawberry, will replace anyone.

      But most of us live in America in 2026. There are a lot of interests that don't give a shit about you who would love it if you got your medical care from a machine that "cannot tell how many r's are in strawberry". And there are a lot of useful idiots with no real medical issues who will loudly claim the machine is great.

If you read the study, the whole conclusion is much less spectacular than the article. What the article really pushes as having happened:

patients -> AI -> diagnosis (you know, with a camera, or perhaps a telephone I guess)

What REALLY happened

patients -> nurse/MD -> text description of symptoms -> MD -> question (as in MD asked a relevant diagnostic question, such as "is this the result of a lung infection?", or "what lab test should I do to check if this is a heart condition or an infection?") -> AI -> answer -> 2 MDs (to verify/score)

vs

patients -> nurse/MD -> text description of symptoms -> MD -> question -> (same or other) MD -> answer -> 2 MDs verify/score the answer

Even with that enormous caveat, there are major issues:

1) The AI was NOT attempting to "diagnose" in the Dr. House sense. The AI was attempting to follow published diagnostic guidelines as perfectly as possible. A right answer by the AI was the AI following MD advice, a published process, NOT the AI reasoning its way to what was wrong with the patient.

2) The MD with AI support was NOT more accurate than the MD by himself (a better score, but NOT statistically significant, hence not more accurate). However, it was very much a nurse or MD taking the symptoms and an MD pre-digesting the data for the AI.

3) Diagnoses were correct in the sense that they followed diagnostic standards, as judged afterwards by other MDs, NOT in the sense that they were tested on a patient and actually helped a live patient (in fact, no patients were directly involved in the study at all).

If you think about it, for most patients even the treating MDs never learn the correct conclusion. They saw the patient come in, they took a course of action (and probably wrote at best half of it down), and the situation of the patient changed. And we repeat this cycle until the patient goes back out, either vertically or horizontally. Hopefully vertically.

And before you say "let's solve that" keep in mind that a healthy human is only healthy in the sense that their body has the situation under control. Your immune system is fighting 1000 kinds of bacteria, and 10 or so viruses right now, when you're very healthy. There are also problems that developed during your life (scars, ripped and not-perfectly fixed blood vessels, muscle damage, bone cracks, parts of your circulatory system having way too much pressure, wounds, things that you managed to insert through your skin leaking stuff into your body (splinters, insects, parasites, ...), 20 cancers attempting to spread (depends on age, but even a 5 year old will have some of that), food that you really shouldn't have eaten, etc, etc, etc). If you go to the emergency room, the point is not to fix all problems. The point is to get your body out of the worsening cycle.

This immediately raises the concern that this is all from doctor reports. In practice, of course, maybe the AI only performs "better" because a real doctor walked up to the patient, checked something for himself, and then didn't write it down.

What you can perhaps claim this study says is that, in the right circumstances, AIs can follow an MD's instructions under time and other pressure better than an actual MD can.

  • Thank you.

    In 100% of the cases where some headline makes big claims about "AI" based on some study, you take a good hard look at the study and none of the big claims stand on their own.

    It's all heavily spun, taken out of context, editorialized... It's become almost a hobby of mine lately. I am glad to have read so many papers and reasoned critically about methods and statistics. But it is also scary to realize just how many people take bombastic interpretations at face value, of datasets that support no such claim, or only much weaker versions.

    Chasing down sources is something I often do, and I've learned that people take a lot of liberty when sharing opinions about sources they don't think will be checked. Even in high-trust environments. I have first-hand received work by post-doctoral fellows where some articles in the bibliography didn't even exist.

  • > However, it was very much a nurse or MD taking the symptoms and an MD pre-digesting the data for the AI.

    Excellent. We should be striving for a world where humans are meat puppets for machines.

  • This. The fact that the AI projects have to spin so hard should be tipping people off. But for some reason it doesn't.

    • People only read headlines and offload their critical thinking to the very companies that are selling to them. It's sad.

"Human problems can't be solved with technology" is just wrong, unless you have narrower definitions of a "human problem" or "technology".

For instance, transportation is a "human problem". It's being successfully solved with such technologies as cars, trains, planes, etc. Growing food at scale is a "human problem" that's being successfully solved by automation. Computing... stuff could be a "human problem" too. It's being successfully solved by computers. If "human problems" are more psychological, then again, you can use the Internet to keep in touch with people, so again technology trying to solve a human problem.

  • I think you may be misunderstanding the concept of 'human problem'. A human problem is caused by humans, it isn't something like transportation. That is a physics problem. An example of a human problem is cheating; you can't solve cheating with technology. Just add [incentive] after human and it should make more sense.

Yes, talking to a human is good and necessary. But for diagnostics, humans are not good at it. I'm happy for a human to use a tricorder and then tell me the answer.

>Medicine is so much more than "knowledge, experience, and pattern matching", as any patient ever can attest to.

Humans (doctors/nurses) can still be there to make you feel the warmth of humanity in your darkest times, but if a machine is going to perform better at diagnosing (or perhaps someday performing surgery), then I want the machine.

Even now, I'll take a surgeon that's a complete jerk over a nice surgeon any day, because if they've got that job even as a jerk they've got to be good at their jobs. I want results. I'll handle hurt feelings some other time.

  • I'd be a little bit careful here. Being a jerk is quite different from non-conformity / the red sneaker effect in surgery, and it is not a quality you should look for.

    The truly compassionate surgeons will want to improve their skills because they care about their patients. They care if their patients develop complications, and may feel terrible if they do; the jerk may not. Being a jerk may mean that the surgeon can rise to the top, but it may not be due to surgical skill at all; they may be better at navigating politics, etc.

  • > Even now, I'll take a surgeon that's a complete jerk over a nice surgeon any day, because if they've got that job even as a jerk they've got to be good at their jobs.

    This seems like an incredibly poor line of reasoning.

    Hospitals are often desperate for surgeons. The poorly mannered ones are often deeply unsatisfied, angry at the grueling lives they've opted into, and the hospitals can't replace them. The market is not exactly at work here.

  • I haven't known doctors or nurses to be very warm and fuzzy. I have known them to have real world experience in seeing the outcomes of their actions instead of...

    Dude you removed my right thumb I was in for an appendectomy!?

    You are so right! I ignored everything you asked for. I am so sorry. I am administering general anesthesia now, then I will prepare you for your next surgery.

I think there's a real space there, and a lot of what e.g. nurses and doctors do is talking to humans, and that won't go away.

But two facts are also true: a) diagnosis itself can be automated. A lot of what goes on between you having an achy belly and you getting diagnosed with x, y, or z happens outside of a direct interaction with you; all of that can be augmented with AI. And b) the human interaction part is lacking a great deal in most societies. Homeopathy and a lot of alternative medicine, from what I can see, have their footing in society simply because they're better at talking to people. AI could also help with that, both in direct communication with humans, and in simply making a lot of processes a lot cheaper, maybe, e.g., making the required education to become a human-facing medical professional less of a hurdle. Diagnosis becomes cheaper & easier -> more time to actually talk to patients, and more diagnoses made with higher accuracy.

  • > Diagnosis becomes cheaper & easier -> more time to actually talk to patients

    Unfortunately this is not likely to happen. More like:

    Diagnosis becomes cheaper & easier -> more patients a doctor is expected to see in the same period of time as before

Yeah... No. I can't possibly disagree with this view more.

I don't need to "talk to a human", I need a problem with my meatbag resolved.

> humans need other humans and human problems can't be solved with technology

WTF are you talking about? Is this bait? You can't possibly mean this. Yes humans are social creatures, but what does that have to do with medicine? Are you talking about a priest, a witch doctor, a therapist? Because if you're not, that sentence is utter BS.

In psychotherapy, patients tend to prefer talking to an AI over a human therapist and rank the interaction higher.

  • > In psychotherapy, patients tend to prefer talking to an AI over a human therapist and rank the interaction higher.

    Even if your statement is true, it's questionable. People also tend to prefer hearing what they want to hear to hearing what they need to hear, and rank the former interaction higher.

    Basically, tech's favorite feedback mechanism, customer reviews, cannot actually be relied upon to tell you how good something is.

The human doesn't need to be as highly trained and paid as a doctor if the human is not performing tasks concordant with that training.

You have 2 options:

A) A nice, chatty, friendly, cool doctor who diagnoses correctly 50% of the time. B) A robotic AI that diagnoses correctly 60% of the time.

Which do you choose? If you have a disease that can kill you, the AI is 20% more likely to help you and probably save you. I can't see too many people choosing the human doctor. Anyway, I'm sure there will be people who will choose a doctor with 10% correctness over a 100% AI no matter what.
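Note that the 20% figure is a relative improvement; in absolute terms it is 10 percentage points. A quick sketch of the arithmetic behind the 50%-vs-60% comparison (plain Python, my own illustration):

```python
# Hypothetical diagnosis rates from the comparison above.
human, ai = 0.50, 0.60

absolute_gain = ai - human            # 10 percentage points
relative_gain = (ai - human) / human  # 20% "more likely" relative to the human

print(f"absolute: {absolute_gain:.0%}, relative: {relative_gain:.0%}")
# absolute: 10%, relative: 20%
```

Both framings describe the same gap; the "20% more likely" wording is the relative one.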

By this point it is clear there is very little human element.

Doctors talk to patients?

I know. I know. Part of it is that talking to patients is, on average, useless, but still, this can't really be used as an argument against AI.

Still, doctors can have a broader picture of the situation, since they can look at the patient as a whole; something the LLM can't really synthesize in its context.

I would personally vastly, vastly prefer to go to a robot doctor, who diagnoses, treats and nurses me. What exactly do I need from a human here? Except of course being the one making the system.

  • A good human doctor is going to notice things other than just what you are telling them and showing them.

    They're also going to tell you things other than just what your insurance is agreeing to.

    A robo doctor will be corrupt in the ways a regular doctor can be held accountable for, but without the individual accountability.

  • Emotional support. Some human doctors absolutely radiate confidence and a kind of "you're gonna be okay" attitude. For me, this helps a lot. I'm not sure a machine can do this.

    • But I hate it if the human doctor "radiates confidence" when I know he is not doing the proper scan, because I have to come back with worse symptoms first for him to take it seriously. I don't need emotional support from a human doctor. I need the adequate scans and a proper analysis. I am pretty sure that a competent human will still be way better than AI, but even now, AI will likely be better than a doctor who isn't really paying attention.

    • You can hopefully get emotional support from your loved ones. If not, a coach seems much more appropriate.