NICE ME/CFS guideline - draft published for consultation - 10th November 2020

Could the NICE committee, which I understand was stacked with BPS people, have told them to leave out all their criticisms of trial methodology and just present results taken at face value in their report?

I think this was before NICE took to using GRADE. As to how decisions were made then I have no idea. I guess the 2007 report will contain assessments of trials of some sort.
 
I think this was before NICE took to using GRADE

Cochrane though use GRADE too, and repeatedly came to different conclusions in their assessment of the quality of evidence of therapist-delivered treatments.

Also, didn't NICE use GRADE for their cr*p Guideline on "primary pain"?

I think that point needs to be addressed -- see post above:

I think it might be important to be able to factually argue in detail with respect to the evaluation of the quality of evidence: What are the weaknesses of GRADE and how can it be used to either sift non-robust evidence out or to let it pass?

Edited to add: And what differed in NICE's evaluation of the evidence on the treatments of primary pain on the one hand, and on the treatments for ME/CFS on the other hand?
 
I think this was before NICE took to using GRADE. As to how decisions were made then I have no idea. I guess the 2007 report will contain assessments of trials of some sort.

That is what Brian Hughes looked into in his latest 'The Science Bit' article. A couple of quotes, referring to the evidence review in the published appendix to the 2007 guideline:
The authors of the Appendix reported that the unadulterated RCTs showed positive effects for CBT and GET, although not in every case. And that was it. They provided no discussion of potential limitations of these studies, other than to declare them to have had high scores for “validity”.

What made this rather slim discussion especially intriguing was its provenance. The work was prepared by a group of researchers from University of York's Centre for Reviews and Dissemination. The intriguing part is that back in 2001, these very same authors published a much more thorough review of exactly the same evidence base. This earlier review was not exactly obscure: it appeared in JAMA, the flagship journal of the American Medical Association.

Unlike in their Appendix for NICE, in JAMA the York Reviewers elaborated on several caveats to the so-called evidence base:
Hughes then spells out some of the caveats and concludes:
Given the various caveats that characterised this research, they declared: “All conclusions about effectiveness should be considered together with the methodological inadequacies of the studies.” The best they could say about CBT and GET was that the relevant studies showed “promise.”

Strangely, this focus on caveats disappeared by the time the York Reviewers wrote their Appendix for NICE. This was despite the fact that their assessment of the research for NICE was based on the same studies covered in their JAMA article. The text they prepared for NICE contained no caveats or discussion of study limitations. ...
One wonders what brief the York Reviewers were given by NICE at the time.
...
https://thesciencebit.net/2021/08/1...the-new-nice-guideline-ask-about-the-old-one/

That's what I was querying. Why did the York researchers, commissioned by NICE to review the research evidence on which they had already published a review of the same trials full of caveats/limitations in JAMA several years earlier, not repeat those limitations in their NICE report? Who instructed them not to? And why?
 
That's what I was querying. Why did the York researchers, commissioned by NICE to review the research evidence on which they had already published a review of the same trials full of caveats/limitations in JAMA several years earlier, not repeat those limitations in their NICE report? Who instructed them not to? And why?

You are several steps ahead of me @Trish.

I missed that you were responding to #1004.
 
That is an extraordinary piece of information regarding the York team's evidence. It looks as though we can now see how the evidence became "supportive".

Yes, I think this would reasonably be regarded as unethical practice.

I keep being tempted to write a rapid response to the BMJ news piece but I think best not.
So there are two answers to why the evidence became unsupportive: documents from the time show that it was not supportive even then, and was gerrymandered; and the PACE trial confirmed that there was nothing there.
 
I had not come across the history of the difference between what the same reviewers published and what they told NICE about the same research back in 2007. Would it be possible to find out what brief NICE gave the reviewers for their review for the 2007 guideline? Could the NICE committee, which I understand was stacked with BPS people, have told them to leave out all their criticisms of trial methodology and just present results taken at face value in their report?

I've only skimmed this but it seems like something you could ask via a freedom of information request. I can't see it being permissible to refuse to release documents in this context.
 
After a lot of hard work by them on a tight timescale, the list of what our team considered to be substantive errors in the guideline was sent to NICE this morning. My assumption is that we will not be able to share this feedback publicly until the guideline itself is public.

For any other stakeholder reading this, the deadline is 5pm today.
 
For me the real revelation from Hughes's article is:

In 2001, the York Reviewers noted high dropout rates and the unreliability of self-reported outcomes, especially in combination with CBT attempting to modify the patient's perception of their health.

In the 2007 guidelines these weaknesses were apparently omitted from the discussion.

In 2021 the draft guidelines acknowledge these weaknesses once again.
 
After a lot of hard work by them on a tight timescale, the list of what our team considered to be substantive errors in the guideline was sent to NICE this morning. My assumption is that we will not be able to share this feedback publicly until the guideline itself is public.

Many thanks to our NICE guidelines (rapid response) team and to those forum members who are members of or who inputted to the NICE Committee itself. The work involved has been considerable, but hopefully a resulting significant improvement will make the cost worthwhile.

Brian Hughes's latest Science Bit piece highlights for me just how flawed the process that resulted in the previous guidelines was, and how much at least last year's draft and the evidence review represent a significant advance towards science-based management of ME.

Fingers and toes crossed and thumbs squeezed (Daumen drücken) for next Wednesday.
 
If anyone wants to check the JAMA paper from 2001:

Whiting P, Bagnall A, Sowden AJ, Cornell JE, Mulrow CD, Ramírez G. Interventions for the Treatment and Management of Chronic Fatigue Syndrome: A Systematic Review. JAMA. 2001;286(11):1360–1368. doi:10.1001/jama.286.11.1360

https://jamanetwork.com/journals/jama/article-abstract/194209 (paywalled)

sci-hub: https://sci-hub.se/10.1001/jama.286.11.1360

(Edited to remove a question that has probably become redundant).


A thread for this paper has been created here:
Interventions for the Treatment and Management of CFS: A Systematic Review, 2001, Whiting et al
 
Many thanks to our NICE guidelines (rapid response) team and to those forum members who are members of or who inputted to the NICE Committee itself. The work involved has been considerable, but hopefully a resulting significant improvement will make the cost worthwhile.

Brian Hughes's latest Science Bit piece highlights for me just how flawed the process that resulted in the previous guidelines was, and how much at least last year's draft and the evidence review represent a significant advance towards science-based management of ME.

Fingers and toes crossed and thumbs squeezed (Daumen drücken) for next Wednesday.

Can't be repeated too often. (All 3 paragraphs)

Huge thanks from me, too.
 
I don’t know if this is of any help or use but I did get copies of all the minutes from the previous guideline group meetings that NICE held. I had a quick search of these documents and there are references to the York review.

I have just spent the weekend in hospital though and my head is scrambled and I can’t go through them to see if there is anything useful.

There are references to the discussions so there might be something good in there that would be a great quote for an article.

If anyone wants to look through these, they were uploaded in this thread https://www.s4me.info/threads/2005-...eline-development-group-meeting-minutes.2589/
 
That is because when evidence and logic evaporate, narrative is all you have.

This.
It's usually used in politics but, well, this is political and applies 100%: "If the facts are against you, argue the law. If the law is against you, argue the facts. If the law and the facts are against you, pound the table and yell".

Not sure which it is when all the steps are skipped and they go straight to pounding the table and yelling, but then they did start on the wrong side of the facts and evidence, so it was the only option available to them.
 
There is an interesting answer to this, maybe - the evidence became unsupportive because in the intervening period there was the 'definitive' PACE trial - which was unsupportive.

In 2007 the 'evidence' was based on some small inconclusive studies. By 2020 we had seen a large multi-arm trial show that almost certainly CBT and GET had no useful effect and if there was any it was trivial and unsustained. The authors had to truncate the Y axis to make any apparent difference visible and that difference was well below what would be predicted on the basis of other studies (e.g. rituximab phase 2, antivirals) from expectation bias alone.
The difference in the number of trials considered is massive, but it seems wrong: there were definitely more small trials of either CBT or GET before 2007, certainly more than a handful. So the selective reporting of cherry-picked studies was especially heavy. But to have added no fewer than a combined 200 trials for both since then really sounds like a crisis in itself. When something works, who needs 200+ small trials with loose methodology?

I frankly don't understand how that in itself is not a problem, just doing the exact same thing in circles hundreds of times over the exact same way with the same intent, same conclusions and basically same everything. Like "hitting" an entire game of holes-in-one, just needed a few thousand Mulligans and clever video editing since none ever actually made it. Super legit.
 
It's whack-a-mole, or whatever.

Perform enough 'trials' and hope that eventually one of them will produce fluke results that 'verify' their religion is 'right'.

The billions of trials that didn't are irrelevant; once one does, it simply 'proves' how determined they are to 'help', and what good 'scientists' they are.

Keep doing the same thing, over and over, and, if you keep going long enough, eventually you'll hit a mole.
 