CDC Treatment Evidence Review - consultation period

@InitialConditions I think it's easier to understand the purpose of the document if you compare it with the evidence reviews (listed here) that were published alongside the draft NICE guideline. Read just the evidence reviews, not the guideline itself.

It says somewhere that this document will be used as the evidence base for developing a guideline. It seems anyone can comment; you don't have to register as a stakeholder.
 
@InitialConditions - this is the draft of the systematic evidence review. CDC told CFSAC that performing the evidence review is a necessary first but separate step to developing clinical practice guidelines. As I understand it, the product of this current effort will be just the final systematic review.

In essence, this is an update of the 2014 review that another HHS agency contracted. This new review has not referenced their own 2016 addendum to the 2014 review in which they downgraded conclusions about safety and efficacy of CBT and GET after first excluding Oxford studies.
 
This new review has not referenced their own 2016 addendum to the 2014 review in which they downgraded conclusions about safety and efficacy of CBT and GET after first excluding Oxford studies.

This is the part I don't understand. My first question when I read the snippets that have been posted here was, "but, didn't they read the addendum from the 2014 review?!" I mean, it's the same damn company who did the review in 2014! Though I'm assuming (likely incorrectly) it's different individuals who wrote this one?
 
I have just started to read the document, and after a few pages had a look at the references. There is a worrying number of studies by the usual suspects, but I couldn't see the re-analysis of PACE. I thought they had decided that use of the Oxford criteria led to trials that were unreliable, or words to that effect, yet many of their references use it.

edit- sorry, poor vision and late at night. I thought there weren't any comments yet, and now I see there are several pages. My observations might well be irrelevant. I'll look back and catch up!
 
I'm late into this and would like some advice please. I haven't yet been able to read through the previous comments, and the report is very long. But it bases its conclusions on the studies considered, and all we have to go on is the references.

Those references look heavily distorted towards Oxford and the usual suspects. I was thinking of working through those references and drawing up some sort of relationship diagram between using Oxford, thinking CBT was the answer, and the limited number of intertwined researchers.

It's making me very angry that they don't have our re-analysis of the PACE data using the original protocols.

Rather than work through the report, my feeling is one of garbage in, garbage out: I want to produce an attack on the very foundation of their work, in plain rather than polite scientific language.

Am I barking up the wrong tree? Wasting my time? Have I missed something?

It's frustrating only being able to read bits at a time.

Thanks.
 
I am thinking it is up to the group that uses this evidence review as the basis for producing a new CDC guideline to do that work, @Graham. It should be sufficient to write a strongly worded statement to the effect that the NIH and CDC have already declared Oxford a definition that should no longer be used (I think that is correct), and that therefore all research included in the review that is based on Oxford needs to be disregarded in making any recommendations for the management and treatment of ME/CFS.
We can give links to relevant documents reinforcing the uselessness of the CBT/GET studies with all the reasoning in them.
If you think it's worth doing what you suggest, maybe ask for volunteers to help.
 
Thanks.

I understand your comments, @Trish , but two things bother me. The first is that if I had come to this fresh, knowing nothing of the controversy of PACE etc., I'd have been pretty shocked by the shoddy work, appalled by the Oxford criteria, and dug much deeper. Why haven't they?

You can say that this is my hindsight, but that's exactly how it hit me when I took early retirement and started to look at the studies. I had expected the studies to be impressive things, ones which I would struggle to understand. Instead they didn't even match up to my sixth-formers' understanding.

The second thing is that we have gone through the polite, structured, scientific argument: we have had papers published (and our internal peer-review was much tougher than any external one). Yet none of our analyses are on the list – not even the re-analysis of PACE data according to the approved protocols. I'm beginning to think that they need a plain English, utterly blunt version to get it into their heads. Are they so impressed with status and use of specialised language that they lose essential meanings?

I'm not looking for help at the moment. There isn't a lot that I can do, and plodding through a list of studies just to pick out which definition was used and who the researchers were should be within my capabilities. I just wanted to check that I am not misunderstanding things.

So, I'll give it a go and report back here rather than under any of the specific "chapter" sections, if that is agreed.
 
The second thing is that we have gone through the polite, structured, scientific argument: we have had papers published (and our internal peer-review was much tougher than any external one). Yet none of our analyses are on the list – not even the re-analysis of PACE data according to the approved protocols. I'm beginning to think that they need a plain English, utterly blunt version to get it into their heads.
Yes, that ignorance has to be wilful in the extreme. Nobody claiming to be doing a serious honest appraisal can ignore that stuff.
 
I'm late into this and would like some advice please. I haven't yet been able to read through the previous comments, and the report is very long. But it bases its conclusions on the studies considered, and all we have to go on is the references.

Those references look heavily distorted towards Oxford and the usual suspects. I was thinking of working through those references and drawing up some sort of relationship diagram between using Oxford, thinking CBT was the answer, and the limited number of intertwined researchers.

It's making me very angry that they don't have our re-analysis of the PACE data using the original protocols.

Rather than work through the report, my feeling is one of garbage in, garbage out: I want to produce an attack on the very foundation of their work, in plain rather than polite scientific language.

Am I barking up the wrong tree? Wasting my time? Have I missed something?

It's frustrating only being able to read bits at a time.

Thanks.
I gave the non-diplomatic, though polite, version of this, so language aside I think we are seeing the same thing.

My comment was short: if this were a class assignment, it would be handed back with an order to do it all over again or get a zero for effort. This is simply not a serious effort, and I question their professionalism for it. The diplomatic version of this would be more compelling, but that's too much for me.

But this is definitely the right tree to bark at. All the cool dogs are barking at it, or so I'm told.
 
Going through the report, I noticed that it says in the discussion section (page 157 in the pdf):
The inability to blind is of particular concern for subjective outcomes such as fatigue and function. Most trials also had other methodological limitations, including unclear randomization or allocation concealment methods and high attrition. Because of these issues, the strength of evidence for exercise therapy and CBT versus inactive therapies was rated low, even though these represented the most robust bodies of evidence on treatments for ME/CFS.
So the strength of evidence for GET and CBT was rated as low, even when the comparison was inactive therapies. It might be good to ask them to describe this in the abstract. Currently, the abstract only cautions that several limitations "precluded strong conclusions", which is a bit weaker than saying that the strength of evidence is low.
 
@Medfeb
I see that you have included this in your bio for the Cochrane Exercise therapy review:
Non-financial: I was a key informant on the protocol and a peer reviewer for a current systematic review of ME/CFS treatments. Pacific Northwest Evidence-based Practice Center at Oregon Health Sciences University - was contracted by CDC.

Is that the review that is the subject of this thread? Can you say anything about what has happened here?
 
@Hutan - Yes it is the same review. And no, for reasons I am sure you understand, I won't be saying anything at this time. My comments on the 2014/2016 ME/CFS systematic evidence review/addendum by this same group are in the public domain.
 
It is extremely important to remember that Cochrane, NICE, the CDC and local medical processes do not operate in a vacuum. Their scope and actions are bound by the law. Full stop.

It is the fundamental driver and arbiter, unless we wish to avoid highlighting it to appease insistent lawbreakers and purveyors of malpractice. The various reviews are mere details WITHIN their lawful limits. Not addressing that fact opens the door to wasted energy and effort, allowing discussion of possibilities that are not legally sustainable anyway. Plus, most legal advice will fail to be correctly informed and will come to terminally incorrect conclusions.

How this reality is referred to, or plays out, in different contexts has to differ, sure. But the fact remains: neither Cochrane, NICE nor the CDC defines the law. And giving any implicit space to such thinking means unnecessarily gifting away rights and obligations for the short to medium term. Again.

https://www.s4me.info/threads/compl...nce-underwriting-etc-vs-me.20482/#post-349054
Summary on Twitter:

The relative lack of interest in this topic is counterproductively surreal and disturbing. We have become chronically used to persistent lawbreaking and aggressive insistence on malpractice. Neither medics, associations nor bodies of state define the law. Not your GP, not the RCGP, not the BMA, not NICE, not Cochrane, nor the CDC.

The lack of interest continues the community's misdiagnosis of the nature of power and control in our context, playing the game on the wrong terms, terms that are entirely defined by the fallacious malfeasants. Ignoring this ignores the underlying fundamental factor that defines everyday conversations with medics, the NICE process, the CDC process, Cochrane reviews, Swedish parliamentary sessions, etc. Literally everything. All those processes exist within the limits of the law and do not define what is lawful. Full stop. The sooner we realise that, the sooner we have realised the primary truth that is inconvenient to the lawbreakers and their facilitators.

Our response is all too often to persuade or educate, only. This is very appropriate where there is no bad faith, not where there is persistence or insistence. I understand: we tend to stick to persuasion for fear of upsetting people or being accused of "activism". But this is a mix of accepting 1) bastardized terms of engagement defined by "the opposition" and 2) it often veers into beaten wife syndrome, frankly. Our fear is irrelevant to the law. Our relative lack of interest is disturbing.
 
It seems that the big CBT trial by Prins et al. (published in 2001 in The Lancet) was excluded because it didn't use the Fukuda criteria correctly. The report says:

Prins JB, Bleijenberg G, Bazelmans E, et al. Cognitive behaviour therapy for chronic fatigue syndrome: a multicentre randomised controlled trial. Lancet. 2001;357(9259):841-7. PMID: 11265953. Excluded: excluded population
The trial report by Prins et al. states:

"Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS,1 with the exception of the criterion requiring four of eight additional symptoms to be present."
That's a weird definition, but I think the end result would be similar to the Oxford criteria. Given that there are plenty of other included studies that used the Oxford criteria, there's a case that the trial by Prins et al. should have been included as well.

EDIT: it seems that the 2014 AHRQ report did the same thing.
 
There are a couple of non-randomized studies that showed that employment/hours worked do not increase after CBT. My personal impression is that a lot of doctors and policymakers think that CBT is an evidence-based rehabilitation and therefore it must be good to get patients back to work.

It's unfortunate that most reviews restrict their scope to randomized trials, because one does not really need randomization and a control group if the results show that there is no increase in employment/hours worked after CBT.

To debunk the idea that "CBT helps ME/CFS patients back to work", one doesn't need more than these observational data if they show null results. Randomization and a control group only come into play if one wants to distill a treatment effect from reported improvements (i.e. reduce the noise and bias), or to compare the relative effectiveness of two treatment approaches. If there is simply no improvement over time in patients who received CBT, there is no big interpretation problem, so a control group and randomization are not necessary to conclude that CBT does not increase employment/hours worked.

I assume it will be hopeless to try to explain this to the authors of the report?
 