NICE ME/CFS draft guideline - publication dates and delays 2020

I'm wondering about the indirectness thing, having quickly looked at what they are saying in the guidance https://gdt.gradepro.org/app/handbook/handbook.html#h.w6r7mtvq3mjz

They talk about indirectness of the population. I assume the trials were marked down because they had poor selection criteria, and the resulting counter-argument is 'well, some patients had PEM, just look at these really obscure reports'. But this raises a question: if a trial targets a large population that may include a smaller subset, can results from that subset be taken seriously? There will be issues of statistical power, whether the analysis was pre-planned, and whether the smaller subset is a genuine subset or whether there is just some overlap. It seems a bit incoherent to argue that part of the population may have had PEM and therefore not to downgrade, or to use an ad hoc sub-analysis of a not very well defined group (the intersection of patients with PEM and those meeting Oxford).

The other thing I noticed was indirectness of the intervention, and I feel the trials could be marked down here, since each trial has a different version of what GET/CBT is, with different protocols, different delivery, and quality issues in monitoring consistent delivery.
 
Could there be a danger that the submissions from patient groups did not emphasise the wider problems with GET studies, given that NICE was proposing dropping them?

I also hadn't realised NICE would be doing fact checking as part of this process - this could have been an opportunity to get problems with work like PACE fact-checked by an official body.
 
I'm wondering about the indirectness thing

The reality is that these trials are light years away from anything useful on so many counts. Indirectness is probably the least relevant of these. It comes into GRADE, but the remit of NICE is to get a valid assessment of evidence quality, not to follow some arcane recipe. If people are going to play silly games arguing about this, then I think there is plenty of room for downgrading some other way. My impression is that the committee has sensible enough people on it not to be duped by twisting arguments round and round.
 
The other thing I noticed was indirectness of the intervention, and I feel the trials could be marked down here, since each trial has a different version of what GET/CBT is, with different protocols, different delivery, and quality issues in monitoring consistent delivery.

Sorry a bit of a tangent, but ....

It seems to me that there are enormous problems in defining what CBT is in practice. My understanding, at least from the origins of CBT, was not that it had specific content, but that it involved objectives agreed between the client and the practitioner. For example, PACE CBT involved both the CBT ideas about how to change and the PACE beliefs about the world: that CFS involves deconditioning and false illness beliefs (i.e. what needs changing).

For example, a few years ago I undertook a course of CBT based on a totally contradictory view: that ME is a biomedical condition that I struggled to manage effectively because, on a day-to-day basis, I had unrealistic expectations about what I should be able to do and also struggled to leave things undone. BPS advocates keep saying in the abstract that CBT is good and should be applied to every condition under the sun; they also say that if CBT for people with cancer is good, why isn't CBT for people with ME/CFS equally good? This ignores the fact that the content of the CBT as generally practised is completely different in the two situations.

Any meaningful evaluation of CBT must include an assessment of the relevance of its contents; in effect, it must ask the question: do the ideas and beliefs being changed/challenged have any basis in reality?

This prompts some to talk about good and bad CBT in relation to ME/CFS. However, the former (BPS-inspired) has been demonstrated to have no benefit in the long term and no effect in the short term on objective measures, and we have no evidence at all relating to the latter (CBT around adjustment to a biomedical condition).

It all feels rather pointless expending energy trying to salvage something meaningful from evidence based on short-term improvements in subjective measures that are within the range of experimental bias, when there are greater methodological faults (i.e. it is impossible to rule out bias) and a total failure to demonstrate that the content has any relationship with reality.
 
If I remember correctly, PACE used the London criteria (although perhaps a strangely modified version), which have PEM as a requirement. Their results showed little or no difference between those just meeting those criteria and those meeting the Oxford one.
Yes, and I think the same is true for the FINE trial as well.

I think there is plenty of room for downgrading some other way.
I agree. Even the controversial Cochrane review rated everything for GET as low to very low quality, with the exception of fatigue measured with the Chalder Fatigue Scale shortly after treatment ended.

EDIT: the longer follow-ups of PACE, FINE and now also GETSET show that the control group catches up over time.
 
Could there be a danger that the submissions from patient groups did not emphasise the wider problems with GET studies, given that NICE was proposing dropping them?

I am pretty sure that the people who matter on the committee are fully aware of all the nuances. The progress of events is determined to a large extent by the artificial rules, but their artificiality is understood.
 
It seems to me that there are enormous problems in defining what CBT is in practice.
It all feels rather pointless expending energy trying to salvage something meaningful from evidence based on short-term improvements in subjective measures that are within the range of experimental bias, when there are greater methodological faults (i.e. it is impossible to rule out bias) and a total failure to demonstrate that the content has any relationship with reality.

Yes, these are some of the ways in which they are light years away from anything meaningful.
For a drug treatment even to get a licence, let alone be recommended by NICE, there have to be trials that show a quantifiable relation between some clearly identifiable specific component of treatment and the level of benefit: traditionally, the dose-response study. CBT is as far from that as Odysseus on the island of the sirens was from Ithaca.
 
EDIT: the longer follow-ups of PACE, FINE and now also GETSET shows that the control group catches up over time.

And yet they somehow interpret those studies (at least the PACE and GETSET follow-ups) as a success based on outcome-swapping: prioritising the maintenance of "within-group" "benefits" over time rather than the fact of null results in the between-group comparisons. I have raised that in my correspondence about the GETSET follow-up with the editor of the Journal of Psycbosomatic Research, which now purportedly frowns upon subjective outcomes in unblinded studies but hasn't yet acted on that belief. I am planning to send him a nudge about that this week.
 
And yet they somehow interpret those studies (at least the PACE and GETSET follow-ups) as a success based on outcome-swapping: prioritising the maintenance of "within-group" "benefits" over time rather than the fact of null results in the between-group comparisons. I have raised that in my correspondence about the GETSET follow-up with the editor of the Journal of Psycbosomatic Research, which now purportedly frowns upon subjective outcomes in unblinded studies but hasn't yet acted on that belief. I am planning to send him a nudge about that this week.

A thing of beauty, the Journal of Psycbosomatic Research! It makes me think of Q-Con, the Annual Gaming Convention!
 
If I remember correctly, PACE used the London criteria (although perhaps a strangely modified version), which have PEM as a requirement. Their results showed little or no difference between those just meeting those criteria and those meeting the Oxford one. If they do represent two different sets of patients, this suggests to me there is a high chance that the improvements are reporting biases rather than any real change.
My (bad) memory tells me their algorithm was an OR type that included Oxford. So some fitted the other criteria, loosely anyway, but everyone met Oxford, which is entirely useless as a definition of ME. So the very best they can say is that it's for idiopathic chronic fatigue and nothing else, as nothing else was required.

But I'll say that I have negative trust in their clinical assessment of diagnostic criteria and how they are reflected in the data, although I don't know how this could be shown from published research. It's clear that they never do a proper assessment because, for them, there is only one functional disorder, so the differences are irrelevant. That is a point many of them have argued many times, just not in their pragmatic trials.

Which is itself a serious problem, but apparently the new normal is that bias is necessary for BPS, therefore bias should not be considered.
 
If I remember correctly, PACE used the London criteria (although perhaps a strangely modified version), which have PEM as a requirement.
The 2011 PACE trial paper, reference 13, is:

The London criteria. Report on chronic fatigue syndrome (CFS),
post viral fatigue syndrome (PVFS) and myalgic encephalomyelitis
(ME). Westcare, Bristol: The National Task Force, 1994.

Apparently what was reported in there was not the true London Criteria. The MEA published the version that was in that report, and was therefore used in PACE:

https://meassociation.org.uk/2011/02/london-criteria-for-m-e/

I cannot see anything in there that looks like PEM, although I think later versions of the London Criteria do.
 
My (bad) memory tells me their algorithm was an OR type that included Oxford. So some fitted the other criteria, loosely anyway, but everyone met Oxford, which is entirely useless as a definition of ME. So the very best they can say is that it's for idiopathic chronic fatigue and nothing else, as nothing else was required.

But I'll say that I have negative trust in their clinical assessment of diagnostic criteria and how they are reflected in the data, although I don't know how this could be shown from published research. It's clear that they never do a proper assessment because, for them, there is only one functional disorder, so the differences are irrelevant. That is a point many of them have argued many times, just not in their pragmatic trials.

I don't think it matters what the diagnostic criteria really were, since the trial gains were likely due to reporting bias.

Apparently what was reported in there was not the true London Criteria. The MEA published the version that was in that report, and was therefore used in PACE:

https://meassociation.org.uk/2011/02/london-criteria-for-m-e/

I cannot see anything in there that looks like PEM, although I think later versions of the London Criteria do.

I think that is an interesting point. If they are referring to PACE in terms of patients having PEM, then a close look at the criteria and how they were operationalised is necessary in any review. That is why it is good they are digging out the documents, and hopefully they will examine them carefully; after all, the BPS people don't have a good history of accurately quoting papers (even their own).
 
I agree. Even the controversial Cochrane review rated everything for GET as low to very low quality with the exception of fatigue measured with the Chalder Fatigue Scale shortly after treatment ended.

That really makes Cochrane a joke as an organization. How can they give any sort of positive rating to anything measured by an incoherent set of questions whose scores can be added up in two ways, so that a patient can both improve and worsen at the same time?
 
I got a bit uneasy listening to Carol Monaghan when she talked about the draft guidelines in the video the ME Action Network made for 12th May.

She said:

The new NICE guidelines are giving hope to the ME community. It's great to see the removal of Graded Exercise Therapy in the draft guidelines, but it would be quite worrying, and I think we need to watch this very carefully, and make sure that we don't see a creep back in of graded exercise into the guidelines. So I know that, developing these draft guidelines, patients were listened to. That was so important when these were being drawn up, to have that patient voice as part of the response to this.
(23.25 - 24.04)
 