Preprint: Initial findings from the DecodeME genome-wide association study of myalgic encephalomyelitis/chronic fatigue syndrome, 2025, DecodeME Collaboration

It means that the ME genetic signals are not equivalent to those of any of these: arthritis, Parkinson’s, Alzheimer’s, depression, anxiety, and more.

I'm wondering if it's possible to use a GWAS to develop better diagnostic criteria for diseases with no objective tests and unspecific symptoms.

One could use GWAS data to compare different diagnostic criteria or combinations of different parameters to see which ones give the most hits above the significance threshold. One could go a step further and compare the profile (a list of the genomic regions of the hits) with that of other diseases which are difficult to separate. Maybe it's possible to find patterns in the data that suggest the disease being studied is actually more than one disease.
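As a minimal sketch of that comparison (all loci, p-values, and criteria names below are invented for illustration), one could count the hits passing the conventional genome-wide significance threshold for each criteria-defined cohort, and check how many of those hits overlap with another disease's known loci:

```python
# Hypothetical sketch: score candidate diagnostic criteria by the number of
# genome-wide significant GWAS hits and their overlap with another disease.
# All loci, p-values, and names are invented for illustration.

GWS = 5e-8  # conventional genome-wide significance threshold

# locus -> p-value from a GWAS run on each criteria-defined cohort
criteria_results = {
    "criteria_A": {"chr1:123": 3e-9, "chr6:456": 2e-8, "chr20:789": 1e-6},
    "criteria_B": {"chr1:123": 4e-10, "chr6:456": 1e-9, "chr9:222": 3e-8},
}
other_disease_hits = {"chr6:456"}  # loci already linked to a similar disease

for name, results in criteria_results.items():
    hits = {locus for locus, p in results.items() if p < GWS}
    overlap = hits & other_disease_hits
    # criteria_A yields 2 hits, criteria_B yields 3; both share one locus
    # with the other disease in this toy data
    print(f"{name}: {len(hits)} hits, overlap: {sorted(overlap)}")
```

In reality each criteria-defined cohort would need its own adequately powered GWAS, so this is a thought experiment about how the outputs might be compared, not a claim that the comparison is cheap to run.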

Has something like this been attempted?
 
I'm wondering if it's possible to use a GWAS to develop better diagnostic criteria for diseases with no objective tests and unspecific symptoms.

One could use GWAS data to compare different diagnostic criteria or combinations of different parameters to see which ones give the most hits above the significance threshold. One could go a step further and compare the profile (a list of the genomic regions of the hits) with that of other diseases which are difficult to separate. Maybe it's possible to find patterns in the data that suggest the disease being studied is actually more than one disease.

Has something like this been attempted?
Why would ‘most hits’ be the criterion to judge diagnostic criteria on?
 
Unrelated to above discussion:

The individual variants aren't going to be diagnostically useful from this study. But I wonder if there might be an attempt to make a polygenic risk score from the DecodeME data and then see how well it classifies patients in other databases.
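As a rough illustration of the idea (all variants, effect sizes, and genotypes below are invented), a polygenic risk score is just a weighted sum of risk-allele dosages, and how well it separates cases from controls in another cohort can be summarised as an ROC AUC:

```python
# Hypothetical sketch: a polygenic risk score (PRS) as a weighted sum of
# risk-allele dosages, evaluated as a case/control classifier via ROC AUC.
# Variants, effect sizes (log odds ratios) and genotypes are invented.

effect_sizes = {"rs1": 0.12, "rs2": 0.09, "rs3": 0.15}

def prs(genotype):
    """Sum of effect size * risk-allele dosage (0, 1 or 2) per variant."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

def auc(case_scores, control_scores):
    """Probability a random case outscores a random control (= ROC AUC),
    computed by brute force over all case/control pairs."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

cases = [{"rs1": 2, "rs2": 1, "rs3": 1}, {"rs1": 1, "rs2": 2, "rs3": 2}]
controls = [{"rs1": 0, "rs2": 1, "rs3": 0}, {"rs1": 1, "rs2": 0, "rs3": 1}]

print(auc([prs(g) for g in cases], [prs(g) for g in controls]))
```

In practice the weights would come from DecodeME summary statistics and the evaluation would be done in an independent cohort, with the achievable AUC bounded well below 1 given the modest heritability.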
 
Why would ‘most hits’ be the criterion to judge diagnostic criteria on?

I didn't say it was the criterion that should be used; it's just one possibility. It should be possible to devise some criterion that fulfils the purpose.

I'm thinking that the 8 hits in DecodeME are a powerful sign that there is at least one specific disease whose signals the study was able to capture. It validates the diagnostic criteria, the recruitment process and the analysis. It's not just the number of hits, but also how they fit together in a coherent way and don't resemble some other disease.
 
Maybe it's possible to find patterns in the data that suggest the disease being studied is actually more than one disease.
Doesn't this already happen? Look at risk genes (or locations) and look at their spread across the study population. The easiest example would be where you have 2 extremely significant risk genes and see that people on average only have 1, suggesting that there may be 2 different underlying pathways (unless the genes somehow end up having the same function). Here the situation will be a lot harder, but I did think people would look at this.
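A toy version of that check (carrier flags invented for illustration): compare how often two strong risk variants co-occur in patients against what independence would predict; a deficit of double carriers would hint at two separate pathways.

```python
# Hypothetical sketch of the "two pathways" check described above: if two
# strong risk variants co-occur in patients less often than chance predicts,
# that could hint at two underlying disease processes. Data is invented.

patients = [
    (1, 0), (1, 0), (1, 0), (0, 1), (0, 1), (0, 1), (1, 1), (0, 0),
]  # (carries variant A, carries variant B)

n = len(patients)
p_a = sum(a for a, _ in patients) / n          # carrier frequency of A
p_b = sum(b for _, b in patients) / n          # carrier frequency of B
observed_both = sum(a and b for a, b in patients) / n
expected_both = p_a * p_b                      # under independence

# here 0.125 observed vs 0.25 expected: a deficit of double carriers
print(observed_both, expected_both)
```

A real analysis would of course need a formal test (and to rule out the variants simply acting in the same pathway), but the shape of the comparison is this simple.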
 
Unrelated to above discussion:

The individual variants aren't going to be diagnostically useful from this study. But I wonder if there might be an attempt to make a polygenic risk score from the DecodeME data and then see how well it classifies patients in other databases.
Given that the heritability estimate is 9.5%, I guess it could give us at “max” a 9.5% better chance?
 
But why? People without the risk genes for MS have MS just like those with the risk genes do.

In ME/CFS we have the problem that we don't know what diagnostic criteria are best.

We can create different diagnostic criteria according to what we believe best describes the disease, but that's just an opinion-based process.

If we had an objective method for defining ME/CFS, then we could develop and refine diagnostic criteria until they could capture the disease with high accuracy.

I'm wondering if a GWAS can be used to provide some much needed objectivity and help validate or refine the diagnostic criteria, or discover subtypes.

Imagine if we could have some objective data showing that with one set of diagnostic criteria you get few hits with significant overlap with other diseases, while with another you get more hits and less overlap.
 
In ME/CFS we have the problem that we don't know what diagnostic criteria are best.

We can create different diagnostic criteria according to what we believe best describes the disease, but that's just an opinion-based process.

If we had an objective method for defining ME/CFS, then we could develop and refine diagnostic criteria until they could capture the disease with high accuracy.

I'm wondering if a GWAS can be used to provide some much needed objectivity and help validate or refine the diagnostic criteria, or discover subtypes.
Yes, but how would that work? If you just included people with MS or Ank Spond in your ME/CFS cohort, you might artificially get more significant findings, but the diagnostic criteria used to get there would mean you're not actually doing anything meaningful anymore. Maybe a certain Fukuda cohort ends up having many hits due to MS etc.

DecodeME showed that you can just ask people whether they have ME/CFS (edit: in the sense as mentioned by Andi below) and actually get some genes from that. But it isn't clear whether that is applicable to other cohorts and to what extent (the failure to replicate suggests it won't work for badly selected cohorts, but we don't know what the results will look like in clinical cohorts or, say, a "German DecodeME cohort"). I think a GWAS in a country including a clinical cohort and a DecodeME-like cohort could be a useful next step to understand how the signals behave.

I think it should be fairly simple to do a grading of how symptoms correlate with the presence of significant genes, but I don't think that's necessarily meaningful for identifying diagnostic criteria.

The problem here would be the same as always: against what reference do you define high accuracy for a disease? Every set of criteria is 100% accurate against itself, and we don't know any disease processes as of yet.
 
DecodeME showed that you can just ask people whether they have ME/CFS
But that isn't what we did. We asked our participants to confirm that they had a diagnosis from a medical professional AND we put them through a screening questionnaire to assess them against CCC and IOM criteria.

Given that we have confirmed that this method has produced a genetically reasonably well defined cohort I don't see much value in deviating from that method.
 
But that isn't what we did. We asked our participants to confirm that they had a diagnosis from a medical professional AND we put them through a screening questionnaire to assess them against CCC and IOM criteria.

Given that we have confirmed that this method has produced a genetically reasonably well defined cohort I don't see much value in deviating from that method.
Yes, that is what I meant. A "German DecodeME cohort" would just repeat this procedure (presumably without telling participants that you're doing this) and also include a clinical cohort as comparison (one that also undergoes the questionnaire screening).
 
I guess the caveat is that ME/CFS might be two similar-looking diseases, like DR4-encouraged arthritis and B27-encouraged arthritis. Ritux is no good for the latter.

But if they were treated as one disease because there was no practical way to distinguish them, I guess the drug would tell you a good bit about where not to look for an answer for the non-responders?
 
Exactly @ME/CFS Science Blog. Any of a hundred genes can lead us to a treatment strategy that is not dependent on any of them. Ritux for RA works whether or not you have DR4, but DR4 pointed us to it.
Thanks, both. That has been my understanding, but I wanted to check it.

ADDED: I think this is an important message to convey to people who understandably wonder what the relevance of this big genetic study is to them, when many won't have the relevant genetic differences and heritability is so low.

I know my blog didn't address this
 
The problem here would be the same as always: against what reference do you define high accuracy for a disease? Every set of criteria is 100% accurate against itself, and we don't know any disease processes as of yet.
I would say the gold standard is diagnosis by expert clinicians. We know from a couple of studies that half of GP diagnoses are wrong, in almost all cases because there were other, undiagnosed diseases (mostly biomedical, but also psychological) that explained the symptoms.

I suspect the specific criteria are less important, so long as we have PEM. (It'll be interesting to see, though, how DecodeME cases would stack up against expert diagnosis, as they, probably uniquely, have a decent definition of PEM, which does seem to be a very unusual symptom.)

By expert clinicians, I mean those who are very familiar with the illness and well resourced. The two papers that found about 50% of GP diagnoses were wrong were led by Julia Newton and Peter White respectively. Plus, I think they had big enough teams to do a good job.

GPs are often unfamiliar with the illness and often dismiss it. Even specialist clinics are frequently held back by lack of resources, or impeded by beliefs.

I believe that DecodeME is the best large cohort we have available, and that the best possible job was done.

But it would still miss cases where there are undiagnosed alternative diagnoses. That's why I think its use of a good PEM question is so important.
 
I am sharing the initial results from the analysis I have been performing using 5 different reasoning engines to suggest causal hypotheses given the GWAS results and the candidate genes. @Hutan it was difficult to extract text because of the table.

I will not provide the details of how it was done here; I am providing this information in the hope that it helps experts to formulate their hypotheses.

Tagging @Chris Ponting @DMissa @Jonathan Edwards

EDIT: The hypotheses generated are not discussed here but in a later post below.

First, the areas of agreement in the 5 hypotheses generated by the reasoning engines:

[Screenshot 2025-08-12: table of areas of agreement]
And here are the areas of disagreement:

[Screenshot 2025-08-12: table of areas of disagreement]
 