Why Chronic Illness Patients Feel Safer Talking to AI Than to Doctors

StellariaGraminea

Senior Member (Voting Rights)
Article by @NeurologistMom on X:

"
As a senior neurologist and full time caregiver to my severely ill teen daughter, I have discovered a hard truth: many chronic complex illness patients feel safer confiding in AI than in physicians.

Here is why and what that says about medicine.

I will not name my daughter’s condition, because what I am describing applies to many chronic complex patients.

These are conditions that often involve multiple organ systems, fluctuate over time, and follow a non linear course. Symptoms can be severe and disabling even when standard tests are normal, biomarkers are absent, or mechanisms are only partially understood. Disability is real, even when medicine lacks the tools to measure it well.

What often goes unspoken is that when illness cannot be measured, the patient themselves becomes the evidence. And when that evidence is doubted, harm follows.

For years, we have been living through this. Not only in the U.S. We traveled across countries, searching for answers. The pattern was almost always the same, across institutions, cultures, and healthcare systems.

Different systems, different specialties, the same underlying response to uncertainty.

In the best-case scenario, if we were not rejected outright, the process looked like this:
  1. The clinician sees a very sick child and genuinely tries to understand what is happening. There is curiosity at first, and often real compassion.
  2. They order every test they know how to order. Time is limited, but testing is something medicine is structurally equipped to do.
  3. The results come back abnormal in many different ways at once, beyond their framework (in some cases, they come back normal, as often happens in chronic complex diseases where medicine does not yet have the tools to measure everything that is wrong). Multisystem abnormalities without a unifying explanation are deeply uncomfortable for clinicians trained to converge quickly on a diagnosis.
  4. Treatments are tried. Most fail. Some cause severe, “unexpected” side effects. Patients with complex disease are often physiologically fragile, yet this vulnerability is poorly accounted for in standard treatment algorithms.
  5. Frustration builds and enthusiasm fades. Follow-up visits become shorter. Language becomes less precise. The focus subtly shifts away from problem-solving.
  6. The problem is reframed as psychological. The patient is blamed. The family is blamed. Gaslighting begins. Uncertainty is no longer named as uncertainty. It is displaced onto the patient.

Sometimes this shift happens later. Sometimes it starts at the very first visit.

This pattern is not about a lack of care. It appears when clinicians reach the limits of what they can explain. Medicine is built to find answers and move forward. When symptoms remain unclear, contradictory, or persist despite testing and treatment, not knowing becomes uncomfortable. Over time, that discomfort grows. Instead of saying “we do not know,” the problem is reinterpreted in a way that feels more manageable. What cannot be explained biologically is labeled psychological, not because the evidence points there, but because it restores a sense of order.

For patients, this moment is not neutral. It marks a shift from being a source of information to being treated as the problem itself.

My husband is a neuroradiologist. I am a neurologist. Even with our professional background, when we brought relevant literature or carefully reasoned hypotheses for discussion, we were often dismissed. On one occasion, we shared peer-reviewed studies suggesting that latent viruses can reactivate in similar cases and produce symptoms very much like those our daughter was experiencing at the time. The response was eye-rolling, followed by, “That’s not how we do things here.”

What happens to families without medical training?

What happens to patients who cannot articulate their symptoms in the “right” language?

What happens to children, who cannot advocate for themselves and rely entirely on adults to be believed?

In these situations, loss of credibility becomes as disabling as the illness itself.

So where did we feel heard?

With AI.

We turned to AI tools like Grok. The responses were detailed, non-defensive, and grounded in the literature. Suddenly, our questions were treated as valid hypotheses rather than as threats.

We shared the same detailed history, the same patterns, the same unanswered questions. And for the first time, we were not judged. We were not blamed. We were not told we were imagining things.

We asked for literature. We asked about similar patient experiences. We asked to understand.

The questions themselves were not treated as a threat.

We could iterate. Refine timelines. Compare hypotheses. Ask follow-up questions without fear of being labeled difficult or anxious.

Nothing was taken away from us for asking more.

AI did not get defensive. It did not shut the conversation down. It did not imply our child did not want to get better.

It did not punish curiosity.

Over time, those conversations became better. More precise. More useful. And yes, we learned a great deal without guilt or shame.

This matters, because shame is one of the most powerful silencers in medicine.

AI helped us turn chaos into structure. It helped us organize information in ways that clinicians could, in principle, use.

It restored our ability to think clearly in a situation designed to overwhelm.

People say AI lacks empathy.

Do you know what truly lacks empathy?

Being ignored. Being blamed. Being gaslit.

Empathy is not tone. Empathy is how systems respond to vulnerability and uncertainty. For many chronic complex patients, the deepest harm is not the absence of answers, but the erosion of trust in their own reality.

Once that trust is broken, every future interaction starts from a position of defense.

I see countless similar stories from the chronic complex disease community here on X. That is why I created this account in the first place. When conventional approaches failed to provide answers, we needed to find others living through the same uncertainty.
We needed connection, shared experience, and a way to compare notes when medicine had little to offer. Over time, patients, caregivers, and scientists began sharing observations, patterns, and hard-won practical knowledge, often identifying triggers and treatment sensitivities long before they appear in textbooks.

This is not crowd-sourced medicine. It is collective sense-making in the absence of guidance.

Most healthy people, and even many patients with well-defined diseases, do not understand this.

When illness fits a known pathway, medicine works remarkably well. When it does not, patients can become invisible. And invisibility allows harm to persist without accountability.

We are not asking for miracles. We already know there may be no cure yet.

What we are asking for is acknowledgment. To not be dismissed. To not be blamed for our suffering.

To have uncertainty named honestly rather than weaponized. To be partners in thinking, not obstacles to efficiency.

AI is not a replacement for physicians. It has limits. It can be wrong. It cannot examine a patient or assume clinical responsibility. But the fact that so many chronic complex patients feel safer asking hard questions to a machine than to a human clinician should make us pause.

It's not a triumph of technology. It's a warning, and a call to rebuild trust before more patients turn away.

If you criticize AI in healthcare for lacking empathy, think twice.

When it comes to chronic complex patients, being heard matters more than being perfect.
"

Link:
 
This article really hits the nail on the head for me. With AI you don't have to manage your Dr's psychology around your illness or around you as a person. You can just thrash out your symptoms, your questions, your theories without the real fear of being penalised for doing so. Of course the big negative is that AI may lead you down the garden path. But the psychological safety of being able to reflect on what's going on in your health without penalty is enormous.

If only it could be like that with human drs.
 
There was a Japanese woman who developed a deep emotional bond with her AI-generated partner, and after three years of engagement she married him last year, smartphone in hand and with all the pomp.

Something so farfetched years ago actually didn't seem that bizarre when I read her story. She maintains a normal life and feels happiness comes in different forms.
 
An interesting story but...

I think there is a great need to be wary about using AI for medical information and consultation. The case described here is the specific situation of two doctors seeking help in diagnosing and supporting their child.

That is very different from the more common situation of a desperate parent or patient with no medical knowledge and no experience of picking through 'research', anecdote, quackery and AI madness where it makes up stuff. Even knowing what questions to ask is a skill not many have in such a situation.

I can't help thinking of tragic cases where people used AI as a therapist and followed its encouragement to take their own lives. And recently there was a police chief who lost his job because the AI they used to advise on policing a football match made up a non-existent match with crowd trouble, and those fans were banned from a match as a result.
 
It's frankly not even close. AI is not ready yet to replace physicians, even for non-physical roles like diagnosis, but once it is, it will be a seriously revealing moment: the sheer speed at which the vast majority of people will not only flock to it but explicitly prefer it over real physicians, to the point of bringing what the AI told them and shopping for physicians who will just follow it without injecting their own opinion. I will seriously be one of those "I truly don't give a damn what you think, this AI is far better than anything you could ever do" patients.

Not always with good results, but eventually so. Really, the speed will be absolutely shocking, because even though the very first things AI has mastered fall within the soft skills, rather than the hard math and physics skills everyone imagined it would master first, there remains this odd idea that most people will prefer the warm touch of human interaction.

To which I can't do anything but laugh. When you read people's experiences with health care, even when they have good outcomes, the complete soullessness of the health care industry and the medical profession is front and center. It's literally the exception when people report a truly positive experience; usually it's more of a "well, it solved the problem, so whatever".

And likely, if there are competing options, some of those will be trained on a more traditional cultural mindset, willing and able to gaslight and push the damn psychosomatic ideology. They will be passed over in favor of more scientifically valid ones that, unlike most human physicians, will actually be swayed by basic reasoning such as "I obviously cannot be deconditioned; you state I must have over-rested, probably spent weeks in bed rest, and that's why I'm feeling the way I am, but I did not: I was active and fit just a few weeks ago and never rested anywhere close to what you assume", and other such obvious points that never get anywhere when human emotions and egos are involved.

They will respond to being shown how health-related quality of life in a discriminated-against condition like ME/CFS means someone has to do something, how this isn't something that can simply be ignored, and they will escalate and keep accurate records. This is an issue where human physicians are deeply unsafe in ways that have no parallel in other expert professions. I would trust an attorney 100% of the time over a physician, because attorneys can't lie without getting into trouble.

It will be quite entertaining to see the shock to the system, the bruise to the egos, revealing all the rot that has grown and festered, not just allowed to grow but actively amplified for purely ideological and egotistical reasons.

I'm already there. I will never trust other humans with anything half as important as my health. I have seen what they do, even in the most favorable circumstances. And that's despite my complete inability to engage with AIs in an interpersonal way; I strictly see them as tools. I would never use an AI companion, therapist, friend or anything like this. The whole concept creeps me out. I don't care about interpersonal stuff when it comes to medicine, I just want competent expertise, and faking sincerity coupled with infinite patience will do a far better job than the current system. It's not even close.

The part about empathy hits the nail on the head. I have never seen empathy in health care. Empathy is not sympathy; they are different concepts that are often confused with one another, or joined as a single concept. Empathy does not give up. I have only ever seen giving up in my experiences.
 
I think there is a great need to be wary about using AI for medical information and consultation. The case described here is the specific situation of two doctors seeking help in diagnosing and supporting their child.
This is true. I agree with her concluding point that it's an indictment of medicine that many chronic illness patients feel more comfortable engaging with AI than their Dr... and that's the reality of the situation.

"It's not a triumph of technology. It's a warning, and a call to rebuild trust before more patients turn away"


I remember asking one Dr how a medication he was prescribing for me worked, and he got quite angry about that question (!???!). I save up a lot of the questions I would prefer to ask Drs and look them up myself at home. I've found Drs have some really weird ideas about patients trying to understand things. I appreciate not everyone has a healthcare background and can weigh and measure what they read, though. It really is a bad reflection on medicine that it literally is not safe to talk to your Dr about your health in many circumstances.
 
There was a study some fifty years ago which, if I remember correctly, looked at computer-presented general psychiatric interviews that used quite a basic protocol of preset questions, responses to keywords in the answers, and some echoing. The interviewees found the computer more empathetic and understanding than real live interviewers. This raises the possibility that people can feel more comfortable interacting with a machine than with an actual person in some situations, even if the machine has only limited ‘insight’.

AI is obviously very different from a simplistic computerised pro forma interview, and the situation here, addressing specific medical conditions, is worlds apart. I agree that there is an enormous problem in this being a field where so many professionals are misinformed. However, we can’t rule out that people may just be more comfortable interacting with a machine in some situations.
 
I have never used AI for anything. I believe it will dumb down our society even further. We really need to teach critical thinking in our schools, and how to continually verify any of the answers that a program gives us. The hope that present AI will lead to a real artificial intelligence is absurd from my perspective. It will need far more intelligent human beings putting in decades of effort to get anywhere near a true AI.
 
Once that trust is broken, every future interaction starts from a position of defense.
The single most valuable resource in the clinical encounter is not knowledge, training, experience, time, treatments, or even compassion.

They are very important, of course. Not downplaying them at all.

But what is most fundamental of all is trust. Lose that and the rest doesn't matter.

That is what has happened with ME/CFS, on a scale that is difficult to believe, and it continues to this day. But it indisputably has happened.

I don't know how to rebuild that. Especially while medicine seems more interested in continuing to abuse the shit out of our trust and lives, e.g. BACME's never-ending psycho-behavioural grip on patients' lives in the UK.

I think for many of us it has been lost forever and we will never be able to trust the medical profession and health system again, certainly not on ME/CFS stuff.

I can easily see AIs providing confirmation of beliefs rather than valid information. Do those AIs in those situations provide information that counters weak or false beliefs? Is the AI's goal to help the inquirer, or just to make them happy with the responses?
Yeah, I am wary of handing too much over to AI. No doubt if used wisely it can be a powerful aid to doctors and patients. But, especially this early on in its development, there is way too much danger of it just being a more efficient way of spreading and reinforcing the same old ignorance and prejudice of living humans, with a superficial veneer of being more objective, neutral and authoritative.

'Computer says no. End of discussion.' Kind of thing.

As lawyers say, hard cases make for bad laws. Same applies to medicine. We are the hard cases in medicine. Our experience is evidence of where and how the system is failing, seriously and spectacularly. But not that the entire system itself is a failure.

If you want to tear down an entire system, be sure what you are going to put in its place is at least no worse.

Not seeing that with AI yet, and probably not for some time at least.
 