UK NICE 2021 ME/CFS Guideline, published 29th October - post-publication discussion

Things to add to a longer version:
  • Cochrane weaknesses (and irrelevance)
  • "We actually used an untested version of GET we made up, not the version in trials."
  • Backstage lobbying by vested parties
  • The bullying of a BPS person by their own colleagues (but we're the supposed harassers)
  • The litany of complaints sent to NICE about each and every committee member (maybe not all; Saran wasn't on the web at that point) -- and how NICE investigated and rebutted them all, making us the most vigorously scrutinised and transparently appointed committee of all time
  • The fact reps from clinics outnumbered the rest of us anyway (but there was a painstakingly fair attempt at balance)
  • "It's quite Dickensian" -- said by a very surprising person at the roundtable
  • The incredible patience, diligence and rigour shown by Peter Barry, Baroness Finlay, Kate Kelley, and all the team at NICE and the NGC
  • "I would be mad if I were them too" -- said with resigned acceptance by someone at NICE after the pause, when they were reading all the emails, letters and petition comments (they honestly didn't blame us or find us militant)
  • "This is the most engaged and well informed patient group we have ever worked with" -- another NICE person, echoed by several others, including NGC staff
  • The fact NICE and NGC staff had to plough through hundreds of pages of passive aggressive (and sometimes aggressive aggressive) vitriol, and still managed to stay polite (though I noticed some highly restrained snark in a few responses)
  • The fake, inept and/or fraudulent referencing in many of the consultation responses (e.g., claiming studies said the opposite of what they actually said, using papers that weren't about ME, or just making the references up -- dare I say, READ THE BLOODY PAPER, indeed!).
And there's loads more I can't remember now!
 
One day, I will go into more detail than that, but I'll leave it at that for now.
I hope you will. Have you considered writing a book?

It might also be useful to post your review with a full list of references.

There was just one bit that wasn’t clear to me. You wrote:
but the general trend was that older surveys showed a smaller rate of harm while more recent ones showed a higher rate of harm (reflecting, perhaps, the general loss of goodwill when the initial excitement of having any kind of treatment, and therefore hope, wore off -- see above).
I’m not sure if I’m misunderstanding or if this is incorrect. How would a “general loss of goodwill when the initial excitement of having any kind of treatment, and therefore hope, wore off” result in older surveys showing a smaller rate of harm? Wouldn’t it be the other way around?
 


I think I understand what is meant. Patients didn't initially report harm because they wanted to believe the treatments worked, or would work if they kept applying them. But after something like a 1-2 year follow-up, it could have become more apparent that the treatments didn't work, and that continuing to push through symptoms of PEM, as taught, made patients worse.


I also think there is more to it. Based on experience, some of us who completed the older surveys didn't even receive PACE-style CBT. I went through primary mental health CBT without knowing it. I recently looked over my clinic notes and, unbelievably, I said in the survey that CBT had helped me so much. There are a few things that would have led to me saying that:

1) I was moderate but declining, and could still perform a lot of activity - much more than most people report here. So ME was bad, but not as bad as it is now, as I was still able to have a social life, casual relationships, etc. A bit of activity management (such as bulk cooking, and advice on cleaning my home slowly over a course of days, with breaks, instead of all in one go as I did before) was actually helpful for avoiding PEM.

2) The term 'treatment' was misleading. I thought that if I followed the programme, I would be cured of the slow, dreadful, life-zapping 'problem', as ME/CFS wasn't adequately explained to me.

3) I was grateful to receive 'treatment' (which I believed was a buzzword for a cure) after suffering for so long with a problem that none of my GPs could identify. It was the first time, and the only place, my issues were recognised and partially explained. It's just that what I was told would help didn't turn out to be true.

4) I was happy with CBT, which for me turned out to be discussing old and forgotten emotionally unpleasant experiences. But I then went on to deteriorate rapidly within the service, despite having stated that the CBT for those revisited emotions was helpful (so, clearly, that couldn't have been the cause at the time).

5) The CBT therapist crossed professional patient-clinician boundaries on several occasions, and said it was because they strongly believed I could be cured of CFS. So I felt it necessary to put in a good word for someone who was risking their job for me.

Basically, the data is BS! And all of this is under the NHS.

[Edited to add more information]
 
Exactly right. And I think I touched on a lot of this with expectation bias and the like, but didn't want to go into even more detail on that point.

I did consider removing that part of the review altogether, but thought it better to contextualise the 54% from all surveys versus the ~75% from the most recent one.

I will make a note for future reference, though, so if I do write something longer, I can add more context. I have thought about doing a book at some point.

One of the women from Lesbians and Gays Support the Miners, who wrote a book called Over the Rainbow: Money, Class and Homophobia that I published, had asked me to co-write a book on ME a few years back, so it could be in that or it could be something independent.

@Robert 1973, which parts did you want references for, in particular? Most of it is just referring to what's in the NICE GLs (either the short GL or the rationale/long GL), so it didn't feel necessary (it's also a review, rather than a paper). But I could certainly consider it when I have more spoons.
 
Thanks. I think I was misunderstanding. If I’m understanding correctly now, your point is that in the older surveys the average time between receiving the therapy and completing the surveys would have been less, and therefore the effect of response bias is likely to have been greater. That makes sense, but I think it could have been clearer (or maybe I was just being dim).

BTW, I see you can edit Amazon reviews after they've been posted – although I don’t know if that will affect their rating.

I did consider removing that part of the review altogether, but thought it better to contextualise the 54% from all surveys versus the ~75% from the most recent one.
I think it was useful to include them.

One of the women from Lesbians and Gays Against the Miners, who wrote a book called Over the Rainbow: Money, Class and Homophobia that I published, had asked me to co-write a book on ME a few years back, so it could be in that or it could be something independent.
I hope you will be able to, one way or another. I could imagine you writing a book just about your experience of being on the GL committee – and that might be a good angle to approach it from.

which parts did you want references for, in particular? Most of it is just referring to what's in the NICE GLs (either the short GL or the rationale/long GL), so it didn't feel necessary (it's also a review, rather than a paper). But I could certainly consider it when I have more spoons
I didn’t mean you should have included references on the Amazon, but it might be useful to add some references to the version on here, or if you write an extended version. I also wondered if @dave30th might invite you to guest blog a version.

The sort of references I was thinking of were:

- Brian’s blog that you quote
- Monbiot articles
- Papers and surveys which point to harms of CBT/GET
- Minutes which confirm that committee members resigned after signing off the draft
- Jonathan’s expert testimony and/or JHP paper, which explains the problems of relying on subjective outcomes in unblinded trials
- The lowering of the recovery threshold
- BMC Psychology correspondence where Sharpe et al admit that they changed the recovery criteria to “give results more in line with what they expected from clinical practice”
- Rituximab trials
- Petitions to NICE
- The “unbecoming” quote in Hansard
- The dismissal of “harassment” claims at tribunal
- Dave’s ME/CFS virology blogs

etc.

I think quite a few of these refs are in Adam’s excellent Twitter thread. I know some of this is in the GL but some bits are hard to find in such a large document, so it can be useful to point to specific parts.

Thanks again, Adam. I don’t mean to give you more work. The review is brilliant as it is. I’m just floating some ideas in case you have spoons and inclination to do any more with it.

PS A spooky thing just happened. I accidentally typed “woukd” and it wasn’t auto-corrected. I don’t think that’s ever happened to me before. Am I being channelled by the secret texter? Yikes!!!
 
Thanks, Robert. All noted.

What I meant about the older surveys is that GET was still relatively 'new' at the time. And I suspect clinics were also less militant about it (i.e., they were still trying things out, so some were more flexible).

So in those early years, people may not have reported harm as much because they were still expecting to improve and hopeful that if they kept trying and being positive it would work. (I guess it could also be to do with the memberships targeted for those surveys, and how they've changed.)

Then, after a few years, more studies came out, clinics became more and more rigid, and more people started talking to each other and learning that it wasn't working for other people either. They realised the problem wasn't just them but the treatment itself.

So yes, expectation bias, desperate hope, a lack of wider context (people thought it was just them doing it wrong, not the treatment), and a hardening of clinical practice.

As others have said in this thread, too, what was being offered as CBT and GET in the past (maybe the 00s?) seems to have varied quite a bit from what was in the trials, and what has become the dogma for a decade or so since.

Look at the Cochrane review and you'll see there are a lot of different protocols there, and many of them aren't GET. Some are more like pacing. Over time, though, especially in UK trials, the same wording ('based on the deconditioning and exercise intolerance [or avoidance] theories of CFS/ME') became more and more prevalent.

I'm not sure what happened -- perhaps it was a response to greater dissatisfaction with the treatments -- but clinics seem to have become more and more prescriptive as time has gone on.

It is often 'you must do x' now, instead of 'try x, and if that doesn't work, try y'. Maybe the worsened outcomes have caused them to double down, and not be as lax, because they assume the evidence is divine law, so the flaw must be in the flexibility (so let's rule that out) or in the dedication of the patient (so let's pressure them to stick to it religiously).

In my experience, I have found that if I don't seem resistant to their favoured treatments in the clinic, the staff are much more comfortable with letting me try something else. But the one time I tried to raise concerns, the clinician became very hostile, and options closed up.

I have also noticed people who were more resistant when speaking to their clinicians have tended to get a more 'my way or the highway' response. Hostility seems to be related to asking questions and challenging authority.

I think there is some lesson about human psychology here. Tell someone you doubt their expertise/worldview, and they become more fundamentalist. Flatter them and play along, and they are more willing to tolerate or even entertain the grey areas -- perhaps because they don't feel challenged, so they feel more comfortable humouring you, even if they don't share your views.
 
I imagine that, given how convinced they have become that it is not the treatment that's at fault but the patient's adherence to it, and given the need (for the clinicians) to show that it works, their mindset may have automatically gravitated towards being more assertive and less flexible about "what they know is right" for patients. Perhaps?
 
@adambeyoncelowe

Outstanding. Thank you. One for the history books. :thumbup::hug:

Subjective measures are fine when blinding occurs. Unblinded treatments are fine when objective measures are used. When both occur together, the results become that much less reliable.
That last sentence might be ambiguous to the uninformed reader. Suggested:

But when only unblinded subjective outcome measures are used then the results become much less reliable.

(I would argue strongly that such results become so ambiguous and unreliable that they are simply not safe to apply in any clinical, medico-legal, or policy advice settings. At best they might provide modest clues for researchers about where to look more robustly.)

Finally, in PACE and other trials, any gains seemed to disappear at long-term follow-up. By two years, there was no difference between those who undertook costly GET or CBT and those who got nothing. This means that any initial 'improvements' recorded on surveys, perhaps due to placebo or expectation, vanished. The triallists in PACE said this is because they contaminated the trial arms (i.e., patients in the control arms clamoured for CBT and GET, so they let everyone have those treatments) which adds two more problems: again, it was sloppy, and made the results harder to interpret; but also, it confirms that there was bias in how the treatment arms and control arms were presented to patients, making it clear that there was higher expectation for CBT and GET. The trial was therefore subject to all kinds of biases which likely influenced the results.
All true. But there was enough data at long-term follow-up to distinguish between those in the SMC and APT arms who did and did not take up the offer of CBT/GET after the trial, and hence to show that there was no significant difference in outcomes between those two groups.

We know that adding CBT/GET to SMC or APT did not improve outcomes.

The PACE authors' claim that it is not possible to quantify the confounding effect of stuffing up the randomisation is wrong (at least for the 2.5-year follow-up; what it means for any future follow-ups might be another story).

From the re-analysis paper by Wilshire, et al:
Consequently, the disappearance of group differences at long-term follow-up cannot be attributed to the effects of additional post-trial therapy.
The technical argument is conditional upon some unreleased PACE data not being available for the analysis:
Of course, our analyses did not include important potentially confounding variables that might differ amongst trial arms, and such a comprehensive analysis might possibly produce a different result.
But as it stands, there is no data supporting a long-term benefit:
However, until there is positive evidence to suggest that this is the case, the conclusion we must draw is that PACE’s treatment effects are not sustained over the long term, not even on self-report measures. CBT and GET have no long-term benefits at all. Patients do just as well with some good basic medical care.
I think we can take it as given that if there was any robust data in PACE supporting the long-term benefits of CBT/GET then the trial authors would have published it by now.
 
I'll note this for a longer version. I didn't want to give even more space to discussing PACE, as it was only one trial, though it ended up taking a lot of space anyway. The other trials were also important in making our decision. But in a longer article or paper, I can certainly dedicate more space and time to this key example.
 
Is there a resource describing how 'patient and public' representation was included, experienced (by patients), perceived and received, how it affected the outcome, and what was learned about the process throughout this guideline refresh? It may be too much to ask that a formal review and report has been done (like a post-project review), so perhaps there are some blogs or key threads/comments on the forum?

I think I've asked before but can't remember or find the thread - sorry.
 