Apologies that this is longer than I hoped and may have fallen into rant territory. I appreciate those who bear with my attempt to be coherent.
It's fine. It's not a rant at all. Although I should warn everyone that my reply is long too.
I see where you are going with this argument. But I see it differently.
From my point of view and experience of 30 years, I have witnessed many patients diagnosed with CFS and/or ME/CFS who ended up having something else (often something that is listed as needing to be screened for in the IC Primer). I remember there being studies using Fukuda that contradicted what was found in patients using CCC or ICC. (Sorry I don't have examples handy. I think TRPM3 might be one of those.)
Loose criteria in trials have always been a problem and no one will disagree with you there. But things like TRPM3 still need to be independently verified. It's not as easy as saying that the null result in following this up is due to criteria differences. It might be, but we simply don't know. A single result that we like doesn't make something true.
It's also worth noting that regardless of which criteria were used in that trial, the paper refers to 'CFS/ME' and not ME. That's because the name we use often has little relation to the criteria used. It's safe to say NICE 2007 wasn't the criteria used in that trial.
So whether a patient calls their own illness CFS, ME or something in between is no indication of how they were diagnosed. It's also no indication of what exclusionary testing they have had.
In the UK, for instance, NICE 2007 recommends a basic panel to exclude obvious alternative diagnoses (autoimmune diseases, hepatitis, anaemia, etc) but some/many/most of us will have had other tests based on history because most GPs don't think of ME as a first resort. I have a family history of asthma and autoimmune disease, so I had additional tests, including an MRI, to rule those out.
Quite often, the referral to a 'CFS clinic' is a last resort, so that basic battery of tests is done at the end after subjecting you to a range of other investigations.
But there's still a problem in that multiple negative test results can bias clinicians against you (they start to think you're making it up), can be demoralising, and can delay diagnosis. There is a complex issue here which needs thorough discussion and probably a lot of nuance.
Finally, there's an assumption here that only people diagnosed with non-ME criteria are misdiagnosed, because clinicians using the ICC will also run every test in the IC Primer. However, people diagnosed under all criteria turn out to be misdiagnosed; few clinicians treat the list in the Primer as an essential checklist; and in practice, clinicians mix and match approaches based on their own clinical experience (cf. the report from BACME in the UK, which shows clinics use a combination of criteria alongside their own clinical judgement).
Not all tests are universally available, either. The average person doesn't have limitless insurance coverage, and in countries with free healthcare there will necessarily be a limit on how many tests can be offered.
If a person's history doesn't suggest RA, to give an example, most clinicians probably won't do much more than a basic blood panel. But there are also anecdotal stories of people diagnosed with ME later developing MS or Parkinson's, so it's also possible that tests come back negative at the time because their disease is subclinical or not yet at the stage where it's easily apparent. That's something that exhaustive testing can't always pick up.
Science is about removing variables.
Yes and no. It's also about finding the truth, despite our own biases as human beings. In science, ideally, you should test a hypothesis from every angle until you're sure it's right.
The problem with ME research is that there's a lot of crap and not a lot of good. Fixing that requires investment, training, confidence and vision.
But I think even some of the 'heroes' of the ME research world are sadly doing low quality work or a low volume of good quality work. We need fresh blood urgently, and especially money.
The first variable is patient selection. We are seeing more and more move from the CCC criteria to using the ICC.
Where's the evidence of this? Few of the ICC authors themselves use the criteria. That says to me that this statement is false.
Of course more papers since 2011 have used the ICC, but that's because it didn't exist before then. It doesn't mean it's being used much.
What I see is more researchers using a combination of criteria (usually Fukuda and CCC) to see if there's a tangible difference between the two cohorts. That's probably sensible, since if there's a major difference between them, that may help us identify subgroups and/or alternative diagnoses that are being missed.
It's hard to know if all the comments you made were directed at me specifically or if I was lumped in with some others.
I specifically didn't name anyone because it was a long post and I didn't want it to feel like an attack. But they're also general points made on my own observations. We all get things wrong and close ranks at times. It's not a flaw specific to one person.
The conversations on here are mostly very civil, thanks to the moderators, but I see things on Facebook exploding all the time. It's not any one person there either.
There's just a common narrative that underlies most of it, which is really depressing because it's just the same soundbites over and over again.
We have some excellent work done by respected researchers who had no choice but to use the CFS or ME/CFS label. I think anyone who has been in the community a while knows which research is to be respected and which does not properly reflect this patient population.
Two things really:
i) Why can't the same courtesy be extended to patients who use various terms? Why do we attack each other over using the wrong terms but researchers get away with it? Patients may just be using the terms that have been imposed on them by clinics or the health service. (I'm not saying you're doing this, BTW, but I do see pile-ons when people use the wrong terminology.)
ii) While we have ideas about who's doing good and bad research, we also have to be careful not to give researchers we like a free pass.
The Griffiths team, for example, to use a finding you mentioned, may announce results we like, but are they really giving us the best science they can?
Many people would argue they aren't. They exaggerate their findings and use tiny patient samples which make their results very hard to have faith in. We deserve better than this.
In order to bring in new researchers and expect them to be able to reproduce the science they will HAVE to know which patient group is appropriate. I continue to believe that using the ICC and the IC PRIMER to screen patients is the best approach.
You're conflating two issues here: research and clinical practice.
I already agree that stronger criteria are better for research, but you haven't engaged at all with the flaws I pointed out in the ICC, which selects a less homogeneous group than the CCC. If precision is what you want, ideally you want patients with the same core symptoms.
Having a set list of fatigue, PEM, sleep dysfunction, neurocognitive impairment and pain (with options for autonomic, neuroendocrine, or immune manifestations) seems better than ICC's pick-and-mix approach.
Diagnostic criteria should identify the most characteristic (and prevalent) symptoms, rather than every single possible symptom, and will always require a work up with clinical judgment.
In clinical practice, it's more important to diagnose someone quickly to start treatment. We hear again and again how patients have to fight to get any sort of diagnosis. The longer they wait before treatment, often the worse their symptoms are. Why would we try to make that harder for them?
In clinical practice, a diagnosis will always be a 'best fit' and liable to change if new information arises. That's just the nature of how it works. That's why practising clinicians prefer the CCC (or even the IOM criteria): they're straightforward.
Yes, patients should have a proper work-up and exclusions. But they shouldn't still be waiting for a diagnosis 12+ months down the line! Exclusions have to be investigated based on relevance to the patient. Not every patient needs an MRI with contrast if they show no symptoms of MS. Some of the exclusions can be investigated while a 'suspected ME' diagnosis is entertained, if needed, so patients can at least access relief for pain and sleep.
It is important to note that even studies that aren't using the ICC are starting to pay attention to stratification of patients who are in the more severe category. That is a very welcome approach and likely to lead more quickly to finding out if the more severe have a different disease.
I agree. We still desperately need more good quality research though.
I strongly object to the term "pretend patients". That is offensive language and adds to the divisive tone. Of course, I don't think ANY of these other patients are "pretend"! I think it is quite likely many of them need to be properly evaluated. As Dr. Hyde has pointed out, much too often patients given the CFS (or ME/CFS - aka SEID) label have not been properly screened. Dr. Guthridge is currently on Twitter listing many of the diseases that are too often lumped in with the ME/CFS patient population.
I used intentionally pointed language, but you've just proved my point again.
You're still claiming that many patients labelled with CFS, ME/CFS or SEID will have incorrect diagnoses, while failing to acknowledge that a) the name used often has no relation to which criteria were used at diagnosis; and b) many ME-ICC patients are misdiagnosed too.
Let's at least agree that no matter which label or criteria is used that ALL patients need to be properly screened for all the possible diseases/conditions that could have landed them with any of these labels. This is a problem that is seen in the EDS, MCAS(D), MCS, POTS, Fibro, and many other patient communities. We should all be fighting for insurance/government health agencies to start spending money up front to make sure we are accurately diagnosed instead of fumbling around for years, often using $ we don't have to get private testing that finally gives us the diagnosis we should have had years sooner.
A lack of exclusionary testing is a problem, I agree, but there are many patients who've undergone the Byron Hyde model of extensive testing and still come out feeling pretty raw about it.
Testing itself has a burden on patients in terms of post-exertional relapse. It's not as simple as sending everyone for 100 different investigations.
Insurance companies won't pay to do every possible test just because it's in the IC Primer. Neither will state-run healthcare systems.
Until we have a biomarker, there will continue to be mistakes (and we'll probably still have mistakes even after; that's unfortunately how real life works).
Simple answers may be seductive, but that doesn't mean they're right. Healthcare systems and societies are machines with vast numbers of moving parts.