How do you use AI for medical feedback?

X_User

Established Member
I wanted to share that I'm reorganizing my entire medical history into a structured format to easily share it with LLMs like Gemini, ChatGPT, etc.

I'm converting all my PDF reports and test results into plain text, stripping out my personal data, and saving them in Markdown. The last time I consulted an AI with my full arsenal of data was back in 2023, and the technology has advanced so much since then.
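For the stripping-out-personal-data step, a minimal sketch of what a redaction pass could look like, assuming the text has already been extracted from the PDFs (e.g. with a tool like `pdftotext`). The patterns here are illustrative examples only; real reports would need patterns tuned to their actual layout:

```python
import re

# Example redaction patterns -- these are illustrative, not exhaustive.
# Real medical PDFs need patterns matched to their specific formats.
PATTERNS = {
    "DATE": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "ID": re.compile(r"\b\d{6,}\b"),            # long numeric identifiers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient ID 12345678, seen 03/04/2023, contact jane@example.com"))
```

Names are the hard part, since they don't follow a regular pattern; those you'd still want to check by hand before uploading anything.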

Even just using models like Gemini 2.5 Pro and GPT-5 Think, I've been able to get some pretty interesting insights to help me move forward. To be more specific: It has reinforced the idea of pursuing a neuromodulator implant as a potential compassionate use treatment for my OI. It won't be easy to access, but at least it's a clear goal to work towards. Besides, to investigate the root cause, it suggested testing for cerebral venous stenosis and completing my panel for an autoantibody I believe I'm missing.

My plan is, once I have everything organized, to pay for a month of one of the $200-300 subscription tiers and see what feedback I can get from a model like GPT-5 Pro, Gemini Deep Think, or Grok Heavy (I haven't decided which yet).

Has anyone else tried something like this?
 
While I’m not a medical expert, I have yet to see any examples of generic LLMs contributing anything worthwhile for pwME/CFS.

LLMs don’t know anything. They guess at what you want to hear next, and they are very good at lying convincingly.

There are also substantial privacy concerns with feeding your medical history to private companies.

I don’t think this is a good idea.
 
So far I haven't found any use for it, and that's unlikely to change until AIs get good enough to replace physicians. That said, the big LLMs are already far better at ME/CFS and chronic illness than 99.999% of physicians, mainly because they can be persuaded by facts and seem to care about being wrong, especially when confronted with contradictory reasoning. That's saying a lot, but it's very hard to do worse than the literal worst-case scenario.

Until then I have slowly worked on writing as much down as I can, for when I can ask a specialized AI; but since that process will involve a lot of proactive questioning, there isn't much I can do to prepare. I doubt very much I will ever get meaningful help from human doctors; the profession has made it clear it refuses to do the work. So I'm just waiting until we can bypass them entirely, because they are the main obstacle to solving this problem.

Until then, the few attempts I have made have been really pointless, though no more so than every single consult I ever had. And it only takes minutes of my time instead of hours, so that's, well, that's actually meaningful progress: nothing, but cheaper and faster. In a non-inferiority trial that would advise against the use of human physicians, which is hella funny, and also very sad.

But that's all mostly because I already know the relevant facts. For someone new to this illness, patient forums and a bit of AI questioning are very likely far superior to seeing any MD in terms of how likely it is to get useful advice, though that depends on how the questions are asked. At least AIs can learn. Some people refuse to learn. That is the key ingredient here.
 
LLMs have definitely been useful for me in figuring out how to manage symptoms and the like.

I really doubt they are any better than human doctors at “treating” ME/CFS. Which means nada really. At this point we honestly have no clue. I really wouldn’t throw 300 bucks at an OAI model with a few billion more parameters and expect something in that domain. Perhaps it could be helpful for differential diagnosis? Maybe? I don’t know.
 
For someone new to this illness, patient forums and a bit of AI questioning are very likely far superior to seeing any MD in terms of how likely it is to get useful advice, though that depends on how the questions are asked.
99% of what I learned from other forums as fact was wrong. 99% of what I read from bio-focused HCPs as fact is wrong. Mostly because almost everyone goes beyond the evidence.
 
For the sort of usage described here, no, and I absolutely wouldn’t. It’s understandable to want to try everything and anything, but I’d really caution against this sort of use.

We’ve seen how LLM chatbots can lead people, particularly those who are vulnerable or desperate, down iffy paths. It’s a risk to be mindful of.

I do find LLMs useful, but mainly for things based in text and code manipulation (language), where you can verify their output. That seems to be their realistic main use case right now. I know specialists working with small, custom language models in specific targeted use cases too. But this is all a long way from the marketing and magical woo promised by tech founders searching for funding.
 
Thanks for your comments. My whole experiment with AI doesn't come from some blind faith that it's a miracle cure, but from a need to find better tools to help me keep finding things out.

The key part of my plan isn't just to dump my own records in. It's to give the AI a curated "reading list" for our session. So, along with my (anonymized) data, I'm also feeding it the Canadian Criteria, a few key research papers I'm tracking (I'm making a database), and other high-quality info. That way, it's not just pulling from its vast, generic knowledge base; it's forced to cross-reference my specific case against a set of documents that I trust.
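The curated-reading-list idea above can be sketched as a simple context builder: concatenate the anonymized case notes with each trusted document under labelled headers before pasting the whole thing into a session. File names and layout here are my own assumptions, not a fixed recipe:

```python
from pathlib import Path

def build_context(case_file: str, reference_dir: str) -> str:
    """Assemble one prompt: anonymized case notes first, then each
    trusted reference document under its own labelled header, so the
    model is steered toward the curated material rather than only its
    generic training data."""
    parts = [f"## MY CASE (anonymized)\n{Path(case_file).read_text()}"]
    for doc in sorted(Path(reference_dir).glob("*.md")):
        parts.append(f"## REFERENCE: {doc.stem}\n{doc.read_text()}")
    parts.append("## TASK\nCross-reference my case against the documents above.")
    return "\n\n".join(parts)
```

One practical caveat: long reference documents eat into the model's context window, so summaries of each paper may work better than full texts.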

My bet isn't that I'll find a clear answer next month. It's that as these models get smarter year after year, the quality of the "breadcrumbs" they provide will get better too. It's about having my data ready for a better tool tomorrow.

I'm that radically skeptical of human doctors.
 
I'm that radically skeptical of human doctors.
but from a need to find better tools to help me keep finding things out.
We definitely need better tools, but what makes you think that AI might be useful in its current state? There is a real possibility that there simply isn’t anything out there today that could bring us any closer to an answer, in general or for anyone in particular.
So, along with my (anonymized) data,
It will still be tied to the IP you use and, more importantly, your user account.
I'm also feeding it the Canadian Criteria, a few key research papers I'm tracking (I'm making a database), and other high-quality info.
How do you determine the quality of the papers? And which papers do you intend to include?
and completing my panel for an autoantibody I believe I'm missing.
Aren’t autoantibodies antibodies directed against yourself? Why would missing one be bad? And how would you even determine if you’re missing it?
 
but what makes you think that AI might be useful in its current state?
My experience. Besides, the current state is evolving every few months.

It will still be tied to the IP you use and, more importantly, your user account.
It comes down to individual priorities. If something can ease my suffering, I'm willing to cede that level of privacy. This forum could also be hacked and have its IPs leaked. Besides, you can use VPNs. The user link is harder to solve, depending on the model.
How do you determine the quality of the papers? And which papers do you intend to include?
This isn't trivial. Medical journals have quality metrics like the impact factor. I'm building a database to publish on GitHub, and I've considered including a field for community-based scoring. For my personal use I trust my judgment up to a certain point.
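For what it's worth, a record layout for that kind of paper database with a community-scoring field could look something like this. The field names are my own guesses at a reasonable schema, not a published one:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record layout for a community-scored paper database.
@dataclass
class PaperEntry:
    doi: str
    title: str
    year: int
    journal: str
    tags: List[str] = field(default_factory=list)   # e.g. ["autoimmunity", "OI"]
    community_score: Optional[float] = None         # filled in once votes exist

def mean_score(votes: List[int]) -> Optional[float]:
    """Aggregate community votes into a single average score."""
    return round(sum(votes) / len(votes), 2) if votes else None
```

A flat list of dataclasses like this serializes straight to JSON or CSV for a GitHub repo, and the scoring can stay a separate column so the bibliographic data and the community opinion never get mixed up.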
Aren’t autoantibodies antibodies directed against yourself? Why would missing one be bad? And how would you even determine if you’re missing it?
I meant that I haven't been tested for those autoantibodies. And even when a finding isn't good news, it at least provides information for a diagnosis and sheds some light.
 
99% of what I learned from other forums as fact was wrong. 99% of what I read from bio-focused HCPs as fact is wrong. Mostly because almost everyone goes beyond the evidence.
Yeah, it's a lot like the news: you can only get a good picture of what's going on by reading multiple sources over a long enough timeline. Very few people have the time and patience for that, and it's really depressing anyway.

This forum is about the only reliably useful place for this, but it still takes years to work through it all.
 
When AIs reach AGI and are good enough to figure out ME/CFS, you won’t need Markdown of your tests, my two cents. Until AGI, I don’t see how LLMs will solve anything ME/CFS-related if they can’t create/link/build new ideas. There’s just not enough real research for them to even link anything together for ME/CFS, let alone compare biomarkers that you can get commercially from your doctor to find new connections.
 
My experience. Besides, the current state is evolving every few months.
Would you be able to elaborate?

And even if the AI evolves quickly, it would still be limited to the information that’s available to it. It can’t discover things we do not already know about how our bodies work.
It comes down to individual priorities. If something can ease my suffering, I'm willing to cede that level of privacy. This forum could also be hacked and have its IPs leaked. Besides, you can use VPNs. The user link is harder to solve, depending on the model.
You do you.
This isn't trivial. Medical journals have quality metrics like the impact factor. I'm building a database to publish on GitHub, and I've considered including a field for community-based scoring. For my personal use I trust my judgment up to a certain point.
Impact factor is not a good measure of the quality of a paper.

What do you base that judgement on when you don’t have any medical qualifications? I wouldn’t have any idea about where to start if I tried to do this for myself.
I meant that I haven't been tested for those autoantibodies. And even when a finding isn't good news, it at least provides information for a diagnosis and sheds some light.
Do you have any reason to suspect that you have an autoimmune condition?
 