Suffolkres
Senior Member (Voting Rights)
Well done Adam and thank you!
Brilliant @adambeyoncelowe!
Things to add to a longer version:
In a review posted to Amazon UK of Fiona Fox's book
The bullying of a BPS person by their own colleagues
What was this about??
In brief: there was a lot of pressure from people outside the committee, and that included pressuring their own colleagues to 'get it right' in meetings. One day, I will go into more detail than that, but I'll leave it at that for now.
That is a post for history. Thank you @adambeyoncelowe.
One day, I will go into more detail than that, but I'll leave it at that for now.
I hope you will. Have you considered writing a book?
There was just one bit that wasn’t clear to me. You wrote:
but the general trend was that older surveys showed a smaller rate of harm while more recent ones showed a higher rate of harm (reflecting, perhaps, the general loss of goodwill when the initial excitement of having any kind of treatment, and therefore hope, wore off -- see above).
I’m not sure if I’m misunderstanding or if this is incorrect. How would a “general loss of goodwill when the initial excitement of having any kind of treatment, and therefore hope, wore off” result in older surveys showing a smaller rate of harm? Wouldn’t it be the other way around?
I think I understand what is meant. Patients didn't initially report harm because of wanting to believe the treatments worked or would work if they continued to apply it. But after something like a 1-2 year follow-up, it could have become more apparent that the treatments didn't work and continuing to push through symptoms of PEM as taught made patients worse.
Exactly right. And I think I touched on a lot of this with expectation bias and the like, but didn't want to go into even more detail on that point.
I also think there is more to it. Based on experience, some of us who completed the older surveys didn't even receive PACE-style CBT. I went through primary mental health CBT without any real knowledge of ME/CFS. I recently looked over my clinic notes and, unbelievably, I said in the survey that CBT had helped me so much. There are a few things that would have led to me saying that:
1) I was moderate but declining, and could still perform a lot of activity - much more than most people report here. So my ME was bad, but not as bad as it is now: I was still able to have a social life, casual relationships, etc. A bit of activity management - such as bulk cooking, and advice on cleaning my home slowly over a course of days with breaks, instead of all in one go as I did before - was actually helpful for avoiding PEM.
2) The term 'treatment' was misleading. I thought that if I followed the programme, I would be cured of the slow, dreadful, life-zapping 'problem', as ME/CFS wasn't adequately explained to me.
3) I was grateful to receive 'treatment' (which I believed was the buzzword for a cure) after suffering for so long with a problem that none of my GPs could identify. It was the first time and the only place my issues were recognised and partially explained. It's just that what I was told would help didn't turn out to be true.
4) I was happy with the CBT, which for me turned out to be discussing old and forgotten emotionally unpleasant experiences. But I would soon go on to deteriorate rapidly within the service, despite stating that CBT for revisited emotions was helpful, so clearly that couldn't have been the cause at the time.
5) The CBT therapist crossed professional patient-clinician boundaries on several occasions and said it was because they strongly believed I could be cured of CFS. So I felt it necessary to put in a good word for someone risking their job for me.
Basically, the data is BS! And all of this was under the NHS.
[Edited to add more information]
I think I understand what is meant. Patients didn't initially report harm because of wanting to believe the treatments worked or would work if they continued to apply it. But after something like a 1-2 year follow-up, it could have become more apparent that the treatments didn't work and continuing to push through symptoms of PEM as taught made patients worse.
Exactly right. And I think I touched on a lot of this with expectation bias and the like, but didn't want to go into even more detail on that point.
Thanks. I think I was misunderstanding. If I'm understanding correctly now, your point is that in the older surveys the average time between receiving the therapy and completing the surveys would have been less, and therefore the effect of response bias is likely to have been greater. That makes sense, but I think it could have been clearer (or maybe I was just being dim).
I did consider removing that part of the review altogether, but thought it better to contextualise the 54% from all surveys versus the ~75% from the most recent one.
I think it was useful to include them.
One of the women from Lesbians and Gays Support the Miners, who wrote a book called Over the Rainbow: Money, Class and Homophobia that I published, had asked me to co-write a book on ME a few years back, so it could be in that or it could be something independent.
I hope you will be able to one way or another. I could imagine you could write a book just about your experience of being on the GL committee – and that might be a good angle to approach it from.
Which parts did you want references for, in particular? Most of it is just referring to what's in the NICE GLs (either the short GL or the rationale/long GL), so it didn't feel necessary (it's also a review, rather than a paper). But I could certainly consider it when I have more spoons.
I didn't mean you should have included references in the Amazon review, but it might be useful to add some references to the version on here, or if you write an extended version. I also wondered if @dave30th might invite you to guest blog a version.
Thanks, Robert. All noted.
BTW, I see you can edit Amazon reviews after they've been posted – although I don't know if that will affect their rating.
The sort of references I was thinking of were:
- Brian’s blog that you quote
- Monbiot articles
- Papers and surveys which point to harms of CBT/GET
- Minutes which confirm that committee members resigned after signing off draft
- Jonathan’s expert testimony and/or JHP paper which explains problems of relying on subjective outcomes in unblinded trials.
- Lowering recovery threshold
- BMC Psychology correspondence where Sharpe et al admit that they changed recovery criteria to “give results more in line with what they expected from clinical practice”
- Rituximab trials
- Petitions to NICE
- “Unbecoming” quote in Hansard
- Dismissal of “harassment” claims at tribunal
- Dave’s ME/CFS virology blogs
etc.
I think quite a few of these refs are in Adam’s excellent Twitter thread. I know some of this is in the GL but some bits are hard to find in such a large document, so it can be useful to point to specific parts.
Thanks again, Adam. I don’t mean to give you more work. The review is brilliant as it is. I’m just floating some ideas in case you have spoons and inclination to do any more with it.
PS A spooky thing just happened. I accidentally typed “woukd” and it wasn’t auto-corrected. I don’t think that’s ever happened to me before. Am I being channelled by the secret texter? Yikes!!!
It is often 'you must do x' now, instead of 'try x, and if that doesn't work, try y'. Maybe the worsened outcomes have caused them to double down, and not be as lax, because they assume the evidence is divine law, so the flaw must be in the flexibility (so let's rule that out) or the dedication of the patient (so let's pressure them to stick to it religiously).
I imagine that, given how convinced they have become that it is not the treatment that's at fault but the patient's adherence to it, and the need (for the clinicians) to show that it works, their mindset may have automatically gravitated towards being more assertive and less flexible with "what they know is right" for patients. Perhaps?
Subjective measures are fine when blinding occurs. Unblinded treatments are fine when objective measures are used. When both occur together, the results become that much less reliable.
That last sentence might be ambiguous to the uninformed reader. Suggested:
But when only unblinded subjective outcome measures are used, the results become much less reliable.
(I would argue strongly that such results become so ambiguous and unreliable that they are simply not safe to apply in any clinical, medico-legal, or policy advice settings. At best they might provide modest clues for researchers about where to look more robustly.)
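To make that point concrete, here is a minimal toy simulation (purely illustrative: the arm sizes, noise levels and size of the reporting bias are made up, and it is not based on PACE or any real trial data). It shows how, with no true treatment effect at all, an expectation/response bias confined to the unblinded treatment arm produces an apparent benefit on a subjective questionnaire while an objective measure shows nothing:

```python
import random

random.seed(1)

def simulate_unblinded_trial(n_per_arm=200, true_effect=0.0, reporting_bias=8.0):
    """Toy model: both arms change by the same (zero) true amount, but the
    unblinded treatment arm over-reports improvement on the subjective
    questionnaire by `reporting_bias` points. All numbers are made up."""
    means = {"subjective": {}, "objective": {}}
    for arm in ("control", "treatment"):
        subjective_scores, objective_scores = [], []
        for _ in range(n_per_arm):
            real_change = random.gauss(true_effect, 10.0)   # what actually happened
            objective_scores.append(real_change)            # e.g. an actometer reading
            bias = reporting_bias if arm == "treatment" else 0.0
            subjective_scores.append(real_change + bias + random.gauss(0.0, 5.0))
        means["subjective"][arm] = sum(subjective_scores) / n_per_arm
        means["objective"][arm] = sum(objective_scores) / n_per_arm
    return means

results = simulate_unblinded_trial()
for measure in ("subjective", "objective"):
    diff = results[measure]["treatment"] - results[measure]["control"]
    print(f"{measure:10s} treatment minus control: {diff:+.1f} points")
# Typical run: the subjective measure shows a 'benefit' of roughly +8 points
# even though the true effect is zero; the objective measure shows roughly 0.
```

With blinding, any reporting bias would be roughly equal in both arms and would cancel out of the between-arm comparison, which is why the combination of unblinded treatment and subjective-only outcomes is the problematic case.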
Finally, in PACE and other trials, any gains seemed to disappear at long-term follow-up. By two years, there was no difference between those who undertook costly GET or CBT and those who got nothing. This means that any initial 'improvements' recorded on surveys, perhaps due to placebo or expectation, vanished. The triallists in PACE said this is because they contaminated the trial arms (i.e., patients in the control arms clamoured for CBT and GET, so they let everyone have those treatments), which adds two more problems: again, it was sloppy, and made the results harder to interpret; but also, it confirms that there was bias in how the treatment arms and control arms were presented to patients, making it clear that there was higher expectation for CBT and GET. The trial was therefore subject to all kinds of biases which likely influenced the results.
All true. But there was enough data at long-term follow-up to distinguish between those in the SMC and APT arms who did and did not take up the offer of CBT/GET after the trial, and hence to show that there was no significant difference in outcomes between those two groups.
We know that adding CBT/GET to SMC or APT did not improve outcomes.
The PACE authors' claim that it is not possible to quantify the confounding effect of stuffing up the randomisation is wrong (at least for the 2.5-year follow-up; what it means for any future follow-ups might be another story).
From the re-analysis paper by Wilshire et al.:
Consequently, the disappearance of group differences at long-term follow-up cannot be attributed to the effects of additional post-trial therapy.
The technical argument is conditional upon some unreleased PACE data not being available for the analysis:
Of course, our analyses did not include important potentially confounding variables that might differ amongst trial arms, and such a comprehensive analysis might possibly produce a different result.
But as it stands, there is no data supporting a long-term benefit:
However, until there is positive evidence to suggest that this is the case, the conclusion we must draw is that PACE’s treatment effects are not sustained over the long term, not even on self-report measures. CBT and GET have no long-term benefits at all. Patients do just as well with some good basic medical care.
I think we can take it as given that if there was any robust data in PACE supporting the long-term benefits of CBT/GET then the trial authors would have published it by now.
I'll note this for a longer version. I didn't want to increase the space spent discussing PACE, as it was only one trial, though it ended up taking a lot of space anyway. The other trials were also important in making our decision. But in a longer article or paper, I can certainly dedicate more space and time to this key example.
@adambeyoncelowe Outstanding. Thank you. One for the history books.