What will be the threshold for calling ME/CFS a 'disease' and are we already across it?

Mine had not either, going by her answers when I asked questions earlier, but that didn't stop her from saying whatever she thought I had was psychosomatic!
My experience was that my symptoms were taken very seriously, until we'd run out of tests to do. My ME/CFS started with bad stomach symptoms, and I had about three blood tests, an endoscopy, a colonoscopy, even a stomach MRI, a lactose test, etc. Then once everything had come back negative I was just told they couldn't help me and sent home.
 
We've had some discussion before of what the criteria are for calling something a 'disease' rather than 'an illness' and I know we're all desperate to cross that threshold and be able to go around shouting, 'I told you! I've got a proper disease!'

But I have forgotten the criteria and the threshold is anyway looking a bit blurry. Will genetic associations take us over the threshold, and why haven't the ones we've already seen reported done so? What about the Zhang et al. study? @Jonathan Edwards's theory should be getting published as a Qeios preprint tomorrow, but what level and type of empirical confirmation of it (or bits of it) would be needed for us to have a 'disease'? What other front-runners could get us over the threshold? And who decides when we've crossed it?

Sorry, but I am not at all desperate for that, in fact I rather fear it. As I move beyond middle age the last thing I want is a label that can be used to justify not testing for all sorts of other conditions that could occur as I get older.

Although I do not have severe ME, I have spent much of the last few years feeling quite severely ill. Of course I have already had it suggested that perhaps it is just my "chronic fatigue" causing me to take months to recover from whatever latest "infection" has knocked me onto my back for months at a time. I am not at all sure that it is only ME that I am experiencing now. This idea that a "proper" disease will suddenly mean respectful treatment from society is pretty naive in my opinion anyway, certainly for those unable to work and subject to the accusations of parasitism being encouraged by governments desperate to cut welfare costs.
 
We already have that. There is nothing that indicates that it’s all in our heads, and PACE, MAGENTA, etc. showed that exercise does not work.
About 95-99% of physicians would disagree with all of this. The model is not just false, it has been thoroughly debunked, and yet that makes zero difference. Which is really the problem, and probably goes back to Jonathan's question about rational doctors. Humans are not rational for the most part.

It's a hard process to pin down, and it can be maddeningly irrational. Peptic ulcers provide the best example of how a dramatic shift occurs, but even that one is odd, because I don't think many physicians at the time denied it was a disease. They could see the ulcers even with the instruments of the time, the bleeding was indisputable, people died from it, and by most standard definitions it was definitely a disease, and yet it was THE textbook psychosomatic disorder.
 
Sorry, but I am not at all desperate for that, in fact I rather fear it. As I move beyond middle age the last thing I want is a label that can be used to justify not testing for all sorts of other conditions that could occur as I get older.

Although I do not have severe ME, I have spent much of the last few years feeling quite severely ill. Of course I have already had it suggested that perhaps it is just my "chronic fatigue" causing me to take months to recover from whatever latest "infection" has knocked me onto my back for months at a time. I am not at all sure that it is only ME that I am experiencing now.

I'm very sorry to hear that you have extra problems on top of your ME. But what we hear from a lot of people in that position is that their extra problems get dismissed because they are already seen as having a condition that's all in their heads, and so the extra stuff is assumed to be all in their heads too.

This idea that a "proper" disease will suddenly mean respectful treatment from society is pretty naive in my opinion anyway, certainly for those unable to work and subject to the accusations of parasitism being encouraged by governments desperate to cut welfare costs.

My hope is for better treatment from society. I think it's a matter of degree. There are some people who will find an excuse to treat people badly (those prepared to see the sick as parasites); those who don't generally feel that way but who will look down on those whose illnesses are 'all in their heads'; and those who treat all people well with no distinction about cause. I think the latter group is fairly small and the middle group is fairly big. If we can move out of the 'it's all in your heads' camp, at least we'll have two groups treating us well rather than just one.
 
We've had some discussion before of what the criteria are for calling something a 'disease' rather than 'an illness' and I know we're all desperate to cross that threshold and be able to go around shouting, 'I told you! I've got a proper disease!'

Syndrome is used when there is a lack of specificity.

An example is Guillain Barre Syndrome. Each case is clearly a disease, but it is not a single disease and most neurologists aren't smart enough to tell the difference so it's still a syndrome.
 
And as I said, I think getting hung up on 'disease' terminology is likely to be counterproductive. Within medicine we don't even think about whether something is a 'disease'.
The BPS club have made much of the distinction between illness and disease over the years, especially earlier on. It is one of their foundational arguments for claiming no organic (primary) pathology. You know, the ol' dualism they claim to be battling.

At the practical survival level, at least, we patients can't just ignore it. Marketing, however distorted and dishonest, has real world consequences. To some degree at least we need to somehow counteract it and inject a little reality back into the debate.

I would love nothing more than to be able to ignore it. But that is a luxury we don't have.
 
Since there has been no new information for several years (apart from some negative results from studies and trials, which don't get you past any thresholds), I see no reason why one should now be any closer to anything. In that sense, somebody coming up with their own theory of things, without any data, can't be seen any differently from a BPSer coming up with their story of things.

I suspect once a threshold has been crossed nobody will be discussing whether that is the case; things will just naturally follow.
 
I personally think the Zhang study changes all that, @EndME .
Unless of course we think it is statistically unsound, but the consensus seems to be against that.

I have not seen anybody who has been able to argue strongly in favor of the study. We might have the feeling that there might be something very real hiding in there, but I don't think anybody has been able to argue for that yet, nor been able to point at what it might be. I find it equally plausible that it could all be noise, which should maybe not be unexpected given the minuscule sample sizes. From what I understand we also have hardly any reference points for this methodology. Some possible reference points would be Decoding the Genomics of Abdominal Aortic Aneurysm and Integrated systems analysis reveals a molecular network underlying autism spectrum disorders (from what I can gather, what is done there is rather different from the Zhang study in ME/CFS). Both are well cited, but have those studies revealed much?

Of course you may be very inclined to think of the study differently on the basis of DecodeME data, but that doesn't reassure me about the data we can currently discuss.

I have emailed the authors of the Zhang study but didn't receive a reply. I think it could be very useful if someone could start a dialogue with the authors. @forestglip has done a lot of analysis; I'm sure the authors would be interested in that as well. I think there are many people here who could help in understanding the study further, and whilst some authors might not be interested in having their work picked apart, I'm sure they'd be rather happy to engage if there were the prospect of being able to put their name on a paper that subsequently came out of such an analysis.
 
At what point, or what threshold of evidence, do you suspect they will stop rolling their eyes?

When there are effective treatments I am pretty sure they will. Before that it is pretty hard to see in the crystal ball!!

I suspect some will be rolling their eyes until they retire - whether there are effective treatments or not.
 
Unless of course we think it is statistically unsound, but the consensus seems to be against that.
Is there a consensus? One thing for me: to lose virtually no accuracy on the test set (AUROC 0.677 to 0.670) makes me think there was a fair bit of randomness playing a part due to the size of the independent set, and it just happened to be high accuracy with this small group by chance. How often does an ML model not lose any accuracy on a test set?

That, and as I was saying in the thread, if the people in the cohorts have long COVID, which I'm still not clear about, then maybe some/all of the genetic signal they did find was related to COVID susceptibility, not ME/CFS, which would explain why that is one of the few phenotypes associated with these genes in the BioBank analysis.

I'd like to see if DecodeME clearly gives us the same top genes or genes related to the top genes to give me more confidence in it. I'm no machine learning expert, though, so just offering questions I'd like answered.
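
For what it's worth, here's a rough sketch of that first point. The cohort size and scores below are entirely hypothetical (chosen only so the point estimate lands near 0.67, nothing from the actual study); the idea is just to show how wide the uncertainty on an AUROC is when the independent set is small.

```python
# Rough sketch, not the Zhang data: how much does an AUROC estimate wobble
# when the independent test cohort is small? Cohort size and scores are
# hypothetical, chosen only so the point estimate lands near 0.67.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_controls = 30, 30                      # hypothetical small cohort

# Simulated classifier scores: shifting cases up by ~0.62 SD gives AUROC ~0.67
scores = np.concatenate([rng.normal(0.0, 1.0, n_controls),
                         rng.normal(0.62, 1.0, n_cases)])
labels = np.concatenate([np.zeros(n_controls), np.ones(n_cases)])

# Bootstrap the test set to see the spread of plausible AUROC values
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))
    if labels[idx].min() != labels[idx].max():    # need both classes present
        boot.append(roc_auc_score(labels[idx], scores[idx]))

print(f"Point estimate: {roc_auc_score(labels, scores):.3f}")
print(f"Bootstrap 95% interval: {np.percentile(boot, 2.5):.3f}"
      f" to {np.percentile(boot, 97.5):.3f}")
```

With a cohort of that order the interval comes out very wide, spanning well over a tenth of the AUROC scale in either direction, so a difference of 0.007 between training and test performance sits comfortably inside the noise.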
 
Is there a consensus?

In comparison to most of the studies we look at, which fall apart as soon as you open the packet, there did seem to be a general sense that the statistics were credible, or at least a dearth of major sceptics.

I realise that isn't amenable to statistical analysis, but I was impressed that the genes that came up made sense, while still being, at the individual gene level, not a priori predictable. If the p values were just inflated I would expect a lot of random garbage - which we often seem to see.
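
As a generic aside, and not tied to how the Zhang authors actually computed their statistics: the usual quick check for whether a set of p values is systematically inflated is the genomic inflation factor, i.e. the median observed test statistic relative to the median expected under the null. A minimal sketch:

```python
# Generic sketch of the standard p-value inflation check (lambda_GC);
# illustrative only, not tied to any particular study's data.
import numpy as np
from scipy.stats import chi2

def genomic_inflation(pvals):
    """Median observed chi-square (1 df) divided by the null median (~0.455).
    Values well above 1 suggest systematic inflation rather than a few real hits."""
    observed = chi2.isf(np.asarray(pvals), df=1)   # convert p-values to chi-square
    return np.median(observed) / chi2.isf(0.5, df=1)

# Under the null (uniform p-values), lambda should sit close to 1
rng = np.random.default_rng(1)
print(round(genomic_inflation(rng.uniform(size=100_000)), 3))   # ~1.0
```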
 
Is there a consensus? One thing for me: to lose virtually no accuracy on the test set (AUROC 0.677 to 0.670) makes me think there was a fair bit of randomness playing a part due to the size of the independent set, and it just happened to be high accuracy with this small group by chance. How often does an ML model not lose any accuracy on a test set?
I’d also keep in mind that they did lose a lot of accuracy in the test set using their original HEAL model. If it was simply a case where the training models were overfit and the test cohort happened to really resemble the training cohort by chance, you would expect the HEAL model to also do really well there by chance.
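
Just to spell out what that overfitting pattern looks like in general, here is a toy example with purely random numbers (nothing from the study): a model with far more features than samples can look near-perfect on its training data and then fall back to roughly chance on an independent set, which is essentially what a big training-to-test drop signals.

```python
# Toy illustration of the overfitting pattern (pure noise, nothing from the
# study): many more random features than samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 60 "training" samples, 40 "test" samples, 500 random features, random labels
X_train, y_train = rng.normal(size=(60, 500)), rng.integers(0, 2, 60)
X_test, y_test = rng.normal(size=(40, 500)), rng.integers(0, 2, 40)

# Weak regularisation, so the model is free to memorise the training noise
model = LogisticRegression(C=10, max_iter=2000).fit(X_train, y_train)

print("Train AUROC:", round(roc_auc_score(y_train, model.decision_function(X_train)), 3))
print("Test AUROC: ", round(roc_auc_score(y_test, model.decision_function(X_test)), 3))
# Typically the train AUROC comes out near 1.0 and the test AUROC around
# chance: an overfit model does not usually carry its training performance
# over to an independent cohort just by chance.
```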
 
If it was simply a case where the training models were overfit and the test cohort happened to really resemble the training cohort by chance, you would expect the HEAL model to also do really well there by chance.
Well they're basically models trained in different ways, pulling out different genes. If they included some other third random ML model that also didn't do better than chance, that wouldn't make me more confident about the one that did. Isn't it possible that by chance one model returned genes also found in the test cohort while the other didn't?
 
Well they're basically models trained in different ways, pulling out different genes. If they included some other third random ML model that also didn't do better than chance, that wouldn't make me more confident about the one that did. Isn't it possible that by chance one model returned genes also found in the test cohort while the other didn't?
Not if the problem is overfitting, which would be the most likely issue. Sure, what you're saying is theoretically possible, it always is, but in all honesty it seems vanishingly unlikely here.

[Edit: just to make it explicit, in my experience a small test cohort tends to stack the deck against you in terms of being able to achieve similar performance as in the training, rather than the other way around]
 
Well they're basically models trained in different ways, pulling out different genes. If they included some other third random ML model that also didn't do better than chance, that wouldn't make me more confident about the one that did. Isn't it possible that by chance one model returned genes also found in the test cohort while the other didn't?

I think we'd know a lot more if there were more details on some things. In some of their other studies the authors tend to be more explicit, so keeping things short is likely just an artefact of getting everything to fit into the right journal, but then again we've also seen ML researchers who play around with a multitude of different approaches and where things only end up working out with the exact choices that are eventually made.
 
But what was the evidence for synapses before this?

That ME/CFS has a dynamic that is hard to explain other than on the basis of complex regulatory systems such as the immune system and CNS, and that some of the symptoms are hard to explain without at least invoking some form of sensory nerve sensitisation. When members of the forum published a review, 'The Biomedical Challenge of ME/CFS - a Soluble Problem', in 2016, we concluded that it probably involved both immune and CNS regulation. I would not necessarily have predicted a genetic risk for the neural side, but it is exactly what one would have predicted, if anything.

I recently went back to a post I put up about a year ago and noted that I had proposed - as a best guess - that some form of sensitivity of neurons to gamma interferon would best explain things.

So the evidence for synapses was the clinical pattern of disease itself.
 