Using AI tools for medical questions, as a patient - Discussion

Jaybee00

Senior Member (Voting Rights)
When Doctors Couldn’t Solve Their Medical Mysteries, They Turned to A.I.—discusses ME/CFS


Some women with complex chronic illnesses are using chatbots to search for diagnoses or relief from their symptoms.

Highlights patient with ME/CFS

Deborah Holcomb, 62, a former electrical engineer in San Diego, has myalgic encephalomyelitis/chronic fatigue syndrome and can move around for about 30 minutes a day. She finds chatbots invaluable for identifying symptom patterns and exploring treatment options, though she doesn’t make major changes without consulting a doctor.

But while chatbots are trained in part on the best evidence about ME/CFS, they are also trained on pseudoscientific ideas that spread among desperate patients, she noted, and on popular misconceptions.

Ms. Holcomb was alarmed when ChatGPT suggested “regular exercise,” because exercise intolerance is a hallmark of ME/CFS, and even mild activity can worsen symptoms. But, she added, some doctors make the same recommendation.
 
A member in my group asked about recovery and the AI (I am not sure which one) offered her the Recovery Norway stories among others, confirming her hope that yes, it is possible to recover...
 
I randomly asked Google:

How often do people fall ill on Thursdays?


AI Overview

While there is no specific scientific data suggesting people fall ill specifically on Thursdays more than other weekdays, Thursdays are sometimes associated with the "leisure sickness" phenomenon, where symptoms of illness (such as colds or fatigue) emerge just as the work week is ending and a person starts to relax, or due to a build-up of fatigue from the week.
Which explains everything you wanted to know about fatigue, I guess.
 
But while chatbots are trained in part on the best evidence about ME/CFS, they are also trained on pseudoscientific ideas that spread among desperate patients, she noted, and on popular misconceptions.

Ms. Holcomb was alarmed when ChatGPT suggested “regular exercise,” because exercise intolerance is a hallmark of ME/CFS, and even mild activity can worsen symptoms. But, she added, some doctors make the same recommendation.
This is hilarious, and oh so problematic. Because it's true, she's right, this advice is false pseudoscience. And something like this is not supposed to happen. And yet.

The self-correcting process of science is not self-enforcing. Just like laws. People make decisions, and they're not always correct, or even rational.
 
But while chatbots are trained in part on the best evidence about ME/CFS, they are also trained on pseudoscientific ideas that spread among desperate patients, she noted, and on popular misconceptions.
These AI chatbots are heavily trained on Reddit posts, so they often suggest pseudoscientific nonsense that's popular in non-evidence-based online communities.
 
BBC article:
Should you really trust health advice from an AI chatbot?
Quotes from the article:
The Reasoning with Machines Laboratory at the University of Oxford got a team of doctors to create detailed, realistic scenarios that ranged from mild health issues you could deal with at home, through to needing a routine GP appointment, an A&E trip, or calling an ambulance.

When the chatbots were given the complete picture they were 95% accurate. "They were amazing, actually, nearly perfect," researcher Prof Adam Mahdi tells me.

But it was a very different story when 1,300 people were given a scenario to have a conversation with a chatbot about, in order to get a diagnosis and advice.

It was the human-AI interaction that made things unravel, as the accuracy fell to 35%; two thirds of the time, people were getting the wrong diagnosis or care.
A separate analysis by The Lundquist Institute for Biomedical Innovation in California this week showed AI chatbots can peddle misinformation too.

They used a deliberately challenging approach, where questions were phrased in a way that invited misinformation, to see how robust the AIs were.

Gemini, DeepSeek, Meta AI, ChatGPT and Grok were tested across cancer, vaccines, stem cells, nutrition, and athletic performance.

More than half the answers were classed as problematic in some way.
 
But it was a very different story when 1,300 people were given a scenario to have a conversation with a chatbot about, in order to get a diagnosis and advice.
It's valid criticism, but all of this can be worked on and fixed in ways that are simply impossible when it comes to the kinds of problems humans fail at, most of which have to do with economics and access to professionals.
They used a deliberately challenging approach, where questions were phrased in a way that invited misinformation, to see how robust the AI's were.
Humans also fail a lot at this, just in different ways. Hell, it's openly acknowledged that patients get dismissed because we annoy physicians with what they see as incorrect information, and that has proved impossible to fix. The idea that any of this means humans will always be superior is laughable: the flaws AIs have are technical problems, while the flaws humans carry are human problems, and we have long hit the limits of what humans can achieve on those fronts.

No health care system in the world fulfills even half of the need; in truth it's probably closer to 10-20%. No country will ever spend 5-10x as much on health care, since that would amount to the entire GDP: health care already consumes 10-15% of it. There is no human solution to this, and that's all before the demographic collapse that no one is preparing for.

One argument I keep seeing that I find especially weak is physicians saying that diagnosis isn't all that important because they spend very little time on it compared to treating people, and that people still need to be seen in person and treated by hand. Which is true, from their perspective. From the perspective of a patient, though (the whole "patient-centred" idea), it's the complete opposite: the vast majority of time, well over 99% of it, is spent waiting for access to clinicians or tests. Even treatment consists mostly of waiting. Some of that wait is necessary (you can't have nine pregnant women give birth in a month), but there is basically no meaningful support during it, and that's a huge neglected component.

AIs have a huge problem with sycophancy, with agreeing too much with the user. Medical AIs will have to be configured very differently, but unlike human nature, they can be improved and fixed. With humans, we have seen how impossible this is to fix: being right makes zero difference, because biases rule everything. Most of those biases are simply a product of scarcity of available physician time, and they would mostly go away once that bottleneck is fixed, which is not possible to achieve with only humans.
 