DecodeME Initial Results Webinar, Thurs Aug 14th, 3:30pm

How likely is it that looking at rare variants would significantly change the picture from what we see here (particularly with respect to things like heritability and the genetic contribution to the disease)? Are there other examples of diseases where multiple rare variants contribute more significantly than common variants?
 
How do the technological methods (not recruitment) of different WGS studies differ, and how can this influence replication? I ask because if one were to do WGS on (a subset of) DecodeME participants, one could then compare the results only to WGS results from very few cohorts (for example, the people in the study by Zhang et al.).

My understanding is that different sequencing platforms (SequenceME = Oxford Nanopore, Zhang et al. = Illumina/Novogene) work quite differently, and that you can't just take the raw data from two studies and process them through the same framework, but that there are some potential workarounds if one looks at the data carefully?

(I imagine a genome as a massive book full of letters. Illumina might be like a high-resolution camera that photographs groups of a hundred letters at a time, while Nanopore might be like a scanner that scans whole chapters in one go. With the first, it's hard to see the whole story, but the individual letters are very clear; with the second, the letters might not always be as precise, but you can read the whole story at once. If two people looked at the images each produced, they might get a different sense of the book, but they could agree on a consensus reading of certain chapters by focusing on what they both see.)
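One practical workaround hinted at above is to skip the raw reads entirely and compare the two studies at the level of the variant calls each pipeline emits. A minimal sketch, assuming uncompressed VCF output from both pipelines; the file names and the simple 4-tuple variant key are illustrative, and a real comparison would also normalise and left-align indels first:

```python
# Sketch: platform-agnostic comparison of two call sets via their VCFs,
# rather than reprocessing raw Illumina/Nanopore reads in one framework.

def load_variants(vcf_path):
    """Collect (chrom, pos, ref, alt) keys from an uncompressed VCF."""
    variants = set()
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip meta and header lines
            chrom, pos, _id, ref, alt = line.rstrip("\n").split("\t")[:5]
            for allele in alt.split(","):  # expand multi-allelic sites
                variants.add((chrom, int(pos), ref, allele))
    return variants

def concordance(a, b):
    """Fraction of the union of calls that both platforms agree on (Jaccard)."""
    return len(a & b) / len(a | b) if a or b else 1.0

# Hypothetical usage with call sets from two different platforms:
# illumina = load_variants("illumina_calls.vcf")
# nanopore = load_variants("nanopore_calls.vcf")
# print(f"Concordance: {concordance(illumina, nanopore):.2%}")
```

Sites where the two platforms agree can then be treated as higher confidence, much like the "chapters both readers agree on" in the book analogy.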
 
Chris said in the webinar:
In the Sequence ME and Long COVID project, what is undoubtedly going to happen, if funded, is that we will be able to find individuals whose symptoms can be explained by very rare conditions, where the symptoms match and mirror what is in ME, but are better explained by other rare diseases.

That's a great benefit to those individuals, of course, if they can be notified that they have these rare diseases.

But on top of that, I feel like this could be valuable for identifying common mechanisms of ME/CFS, if you don't take the position that these individuals with rare diseases don't have ME/CFS because they have a rare disease. Instead they now have an explanation for their ME/CFS.

And potentially some of those rare diseases are already partly understood, in terms of their pathophysiology. So if you look at all these rare diseases that emerge, maybe you can weave together a story for how the different rare disease mechanisms overlap to potentially cause PEM and other ME/CFS symptoms.

I've wondered if the same could be done with the diseases discussed in the following thread that have similar presentations to ME/CFS: PEM-like descriptions and accounts in non-ME illnesses
 
How do the technological methods (not recruitment) of different WGS studies differ, and how can this influence replication? I ask because if one were to do WGS on (a subset of) DecodeME participants, one could then compare the results only to WGS results from very few cohorts (for example, the people in the study by Zhang et al.).

My understanding is that different sequencing platforms (SequenceME = Oxford Nanopore, Zhang et al. = Illumina/Novogene) work quite differently, and that you can't just take the raw data from two studies and process them through the same framework, but that there are some potential workarounds if one looks at the data carefully?

(I imagine a genome as a massive book full of letters. Illumina might be like a high-resolution camera that photographs groups of a hundred letters at a time, while Nanopore might be like a scanner that scans whole chapters in one go. With the first, it's hard to see the whole story, but the individual letters are very clear; with the second, the letters might not always be as precise, but you can read the whole story at once. If two people looked at the images each produced, they might get a different sense of the book, but they could agree on a consensus reading of certain chapters by focusing on what they both see.)
I think theoretically the most robust way is to do both sequencing methods on the same sample. But things like patents make this financially unfeasible.
 
But on top of that, I feel like this could be valuable for identifying common mechanisms of ME/CFS, if you don't take the position that these individuals with rare diseases don't have ME/CFS because they have a rare disease. Instead they now have an explanation for their ME/CFS.

As a clinician I would stick my neck out further and say that if a whole genome study identifies rare monogenic causes in the DecodeME cohort these will indeed be rare explanations for ME/CFS rather than 'other' rare diseases. There may be a few misdiagnosed people with neurodegenerative disorders or progressive mitochondrial disease but once identified I think it will become clear that their symptoms do not really match ME/CFS.
 
As a clinician I would stick my neck out further and say that if a whole genome study identifies rare monogenic causes in the DecodeME cohort these will indeed be rare explanations for ME/CFS rather than 'other' rare diseases. There may be a few misdiagnosed people with neurodegenerative disorders or progressive mitochondrial disease but once identified I think it will become clear that their symptoms do not really match ME/CFS.
Which symptoms might you be referring to? There are plenty of neurological symptoms in those with exertion-induced severe ME.
 
Which symptoms might you be referring to? There are plenty of neurological symptoms in those with exertion-induced severe ME.

Lots of symptoms overlap between lots of diseases for sure but the detailed temporal pattern, localisation and type of aggravating stimuli differ in each disease. There are no rare diseases that I know of that give the specific pattern of ME/CFS.
 
And potentially some of those rare diseases are already partly understood, in terms of their pathophysiology. So if you look at all these rare diseases that emerge, maybe you can weave together a story for how the different rare disease mechanisms overlap to potentially cause PEM and other ME/CFS symptoms.
This is a very big project on its own, on top of a SequenceME-like project. You would likely need to carry out individual proteomics, RNA analysis, western blots, Sanger sequencing, etc. to confirm that the gene is in fact causal. The most important part is to have clinical centers that the identified patients can attend, to dig deeper into their symptoms and run specialized testing.

There is a reason NHS geneticists only run specific panels instead of wide sweeps. It is a LOT of work.

And one of the most complicated parts is the IRB. I would bet an IRB application will not fly without a clear plan of clinical partnership.

In my opinion clinical centers tied to researchers are a must if we want to take this approach.
 
I think theoretically the most robust way is to do both sequencing methods on the same sample. But things like patents make this financially unfeasible.
I'm not sure what you are referring to? You have WGS data from one set of people using one technology and WGS data from another set of people using a different technology. Presumably any differences introduced by the setups are handled similarly to how they were in the DecodeME study (which introduces some loss).
 
Long-read sequencing is much more interesting for its potential to discover things that short-read sequencing cannot find. There was a suggestion that structural variants affecting neurosteroid metabolism might cause a ME/CFS-like illness. These variants are difficult to discover with commonly used whole genome sequencing.
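The reason long reads can find structural variants that short reads miss comes down to simple arithmetic about read length: a single read can directly resolve an insertion or repeat expansion only if the whole event, plus some unique anchoring sequence on each side, fits inside it. A toy illustration; the 100 bp anchor-flank requirement and the example sizes are illustrative assumptions, not real pipeline parameters:

```python
# Toy illustration of why read length matters for structural variants.

def can_span_event(read_len, event_len, anchor=100):
    """A single read can directly resolve an insertion/expansion only if it
    covers the whole event plus anchoring sequence on both sides."""
    return read_len >= event_len + 2 * anchor

# A ~150 bp short read versus a ~20 kb long read, against a
# hypothetical 5 kb repeat expansion:
short_ok = can_span_event(150, 5_000)     # False: the read lands inside the event
long_ok = can_span_event(20_000, 5_000)   # True: event plus flanks fit in one read
```

Short-read pipelines must instead infer such events indirectly (from read-pair spacing or coverage dips), which is why they are harder to discover there.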

According to the SequenceME presentation, a study of this size using long-read sequencing has never been done before. I suspect because it's costly and costs are only now coming down enough to think about attempting it.

If ME/CFS research can deliver the world's first large long-read sequencing study, it will attract attention to the illness.
 
I'm not sure what you are referring to? You have WGS data from one set of people using one technology and WGS data from another set of people using a different technology. Presumably any differences introduced by the setups are handled similarly to how they were in the DecodeME study (which introduces some loss).
What I mean is that WGS uses multiple reads per position. A standard WGS run reads the genome about 30 times over (30× coverage) to wipe out errors.

But theoretically, instead of doing 30 passes with one method, it could be more successful to do 10 passes with each method, because the platforms are error-prone in different areas.

This becomes particularly interesting in regions like HLA, which are notoriously error-prone.

Edit: I think the jargon for this is "multi-platform sequencing" or "hybrid sequencing" approaches.
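The intuition above can be sketched as a toy calculation: model the consensus call at one genomic position as a majority vote over independent reads, and compare one platform at full depth against the same total depth split across two platforms. The per-read error rates below are illustrative assumptions, not real platform specifications:

```python
# Toy model of how coverage depth and platform mixing affect the chance of
# a wrong consensus call at a single site, under simple majority voting.

def miscall_prob(error_rates):
    """P(majority vote over independent reads is wrong) for one site.

    `error_rates` holds one per-read error probability per read. Computed
    exactly as a Poisson-binomial distribution via dynamic programming;
    a tie (half the reads wrong) is counted as a miscall, conservatively.
    """
    dist = [1.0]  # dist[k] = P(exactly k of the reads so far are wrong)
    for e in error_rates:
        nxt = [0.0] * (len(dist) + 1)
        for k, p in enumerate(dist):
            nxt[k] += p * (1 - e)  # this read is correct
            nxt[k + 1] += p * e    # this read is wrong
        dist = nxt
    n = len(error_rates)
    return sum(p for k, p in enumerate(dist) if 2 * k >= n)

# In a hard region (say an HLA-like repeat) where one platform's per-read
# error rate jumps to 30%, splitting the same 30x total depth across two
# platforms with uncorrelated weaknesses gives a lower miscall probability:
hard_single = miscall_prob([0.30] * 30)                # one platform, 30x
hard_hybrid = miscall_prob([0.30] * 15 + [0.01] * 15)  # 15x per platform
```

This ignores real-world complications (error correlation within a platform, mapping bias, indel errors), but it shows why depth suppresses random errors and why mixing platforms pays off exactly where one platform systematically struggles.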
 