Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up & cost-effectiveness following the GETSET trial, 2021, Clark et al

So it seems that in all studies of GET and CBT the control catches up over time?

FITNET, FINE, PACE, QURE and now GETSET all seem to report no statistically significant difference at follow-up. One reason might be the reduction in sample size due to drop-outs.

Another possible explanation is that the initial 'improvements' were due to various reporting biases, as we have argued in our paper on bias and lack of blinding: 'Bias caused by reliance on patient-reported outcome measures in non-blinded randomized trials: an in-depth look at exercise therapy for chronic fatigue syndrome', Fatigue: Biomedicine, Health & Behavior, Vol. 8, No. 4 (tandfonline.com).
 
There's also the CBT study by the Dutch group of Bleijenberg and van der Meer. The main results were published in The Lancet in 2001.

As far as I know, the follow-up results have never been reported, but I found two people who stated that, during a conference, Bleijenberg announced there was no longer a significant difference between the groups at 3-year follow-up.

Laasen wrote: "I find disturbing the lack of full disclosure. At the AACFS, a question was asked about the length of benefit for CBT. The presenter stated that the natural course and CBT groups did not differ significantly 3 years after treatment."
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(01)05421-6/references

Van Hoof stated: "This is consistent with the report by one of the coresearchers that the effects of CBT were no longer present after 3 years (Bleijenberg G, communication, Fifth International Research, Clinical and Patient Conference)."
https://www.tandfonline.com/doi/abs/10.1300/J092v11n04_05

This is also the study where actigraphy was used (showing null results) and where these data were not reported in the main paper. They only appeared years later in a separate paper that pooled data from multiple studies, so you really had to look at the references closely to know what happened in the 2001 trial of CBT.
 
So it seems that in all studies of GET and CBT the control catches up over time?

This is the consistent finding, yes. And at least twice now they have come up with a new way to analyse the data, claiming success because "within-group" comparisons show that previously measured "improvements" were "sustained", as if those "improvements" were actual improvements and not artifacts of a bad study design.

Has anyone seen this kind of analysis in other clinical trial follow-ups? Do follow-ups in other fields focus on "within-group" comparisons and downplay between-group comparisons?
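To make the within-group vs between-group point concrete, here's a minimal simulation in Python. The numbers are entirely made up and come from none of these trials; it just illustrates the mechanism:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # participants per arm (hypothetical)

# Baseline fatigue scores, drawn from the same distribution in both arms.
baseline_tx = rng.normal(28, 4, n)  # intervention arm
baseline_ct = rng.normal(28, 4, n)  # control arm

# By long-term follow-up, BOTH arms improve by a similar amount
# (natural course, regression to the mean, response biases, etc.).
followup_tx = baseline_tx - rng.normal(5, 4, n)
followup_ct = baseline_ct - rng.normal(5, 4, n)

# Within-group (pre vs post) tests: "significant improvement" in both arms.
print(stats.ttest_rel(baseline_tx, followup_tx))
print(stats.ttest_rel(baseline_ct, followup_ct))

# Between-group test at follow-up: no significant difference.
print(stats.ttest_ind(followup_tx, followup_ct))
```

Both paired tests come out wildly "significant", so a within-group analysis of either arm alone would report "sustained improvement"; only the between-group contrast reveals that the intervention added nothing.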
 
And at least twice now they have come up with a new way to analyse the data, claiming success because "within-group" comparisons show that previously measured "improvements" were "sustained", as if those "improvements" were actual improvements and not artifacts of a bad study design.
They are very good at rooting around to find some angle that they can put a positive spin on post hoc, so long as they conveniently ignore (or maybe they really cannot fathom) more stringent, more competent analyses.
 
There's also the CBT study by the Dutch group of Bleijenberg and van der Meer. The main results were published in The Lancet in 2001.

That's interesting. I hadn't heard about those LTFU results before. The selective reporting of results from that study is pretty hilarious.
 
I've an uncomfortable feeling the notion of being "cost-effective" might be as crude as being much less expensive than an alternative therapy the NHS would otherwise be obliged to fund. Whether it works may not be much on the radar of the bean counters, and the political influences behind them. If they can be convinced something is "cost-effective", and that most normal people would be unable to prove otherwise, then that probably suits them just fine.

And yes, I do believe "Yes Minister" was very close to reality ... I mean - given what we know of human nature, what are the odds it was not?

Often these phrases are euphemisms for something else. In the UK’s NHS ‘efficiency savings’ actually means ‘budget cuts’, the rationale supposedly being that if you cut people’s budgets they provide exactly the same service at less cost. As the cuts are fed down the management structure, they are met by leaving posts vacant for longer or changing full time posts to part time posts, etc. Ultimately budget cuts result in service cuts.
 
Often these phrases are euphemisms for something else. In the UK’s NHS ‘efficiency savings’ actually means ‘budget cuts’.
Exactly.
 
It feels like the old guard is running out of ammunition. They're relegated to publishing in their own journal and can't even be bothered to put a proper spin on their results as they used to. Ten years ago they would have found a way around the sentence "most patients remained unwell at follow up". I think these guys are done. The much bigger worry for me is the MUS/functional disorder paradigm, and who the next P. D. White within it will be.
 
My guess (and I've only skimmed the abstract) is that they are claiming that the earlier reduction in fatigue produces more quality of life per year, and hence, with a relatively small additional cost, their intervention becomes cost-effective (see the sketch below).
And yet that still blithely ignores the minefield effect: that some will be harmed, but with no way of identifying who those people will be other than once they have actually been harmed, potentially irrevocably.
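For what it's worth, cost-effectiveness claims of this kind usually rest on the incremental cost-effectiveness ratio (ICER). A toy calculation with purely hypothetical figures, none of them from the GETSET papers, shows how a small extra cost over a tiny QALY gain can still fall under NICE's conventional willingness-to-pay threshold of roughly £20,000-£30,000 per QALY:

```python
# Purely hypothetical figures, none of them from the GETSET papers.
def icer(cost_tx: float, cost_ct: float, qaly_tx: float, qaly_ct: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_tx - cost_ct) / (qaly_tx - qaly_ct)

# A modest extra cost divided by a tiny QALY gain...
ratio = icer(cost_tx=1200.0, cost_ct=1000.0, qaly_tx=0.71, qaly_ct=0.70)
print(f"ICER: £{ratio:,.0f} per QALY")  # ICER: £20,000 per QALY

# ...can still sit at or under NICE's conventional willingness-to-pay
# threshold of roughly £20,000-£30,000 per QALY, and so be labelled
# "probably cost-effective" even with null between-group clinical outcomes.
print(ratio <= 30000)  # True
```

Which is exactly why the question "cost-effective at what?" matters: the whole calculation hinges on treating that tiny QALY difference as a real treatment effect.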
 
Trial By Error: My Letters to Psychosomatics Journal About Prof White’s Misleading GETSET Paper

"On April 24th, I sent a letter to Professor Jess Fiedorowicz, editor-in-chief of the Journal of Psychosomatic Research. He responded quickly and promised to review the matter with journal colleagues. Given the August deadline for the National Institute for Health and Care Excellence to publish its revised version of its new ME/CFS guidelines, I sent a follow-up letter today to try to nudge the journal to respond sooner rather than later."

https://www.virology.ws/2021/05/06/...al-about-prof-whites-misleading-getset-paper/
 

https://www.sciencedirect.com/science/article/abs/pii/S002239992100129X

Ermm ... how is it a controlled trial if you disregard the control data?

A controlled trial doesn't just require the trial to operate a control condition; it requires that the control be applied to the outcomes, so that the intervention arm is controlled for non-intervention effects. There is no control if it is not applied!

So it is a blatant lie to call a trial a controlled trial if you do not use the control data as it is supposed to be used. I don't imagine their trial protocol stated that, although they were running a control, they had no intention of using it, but that they were still going to call it a controlled trial! Maybe they should update it and call it an uncontrolled trial!
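To spell out what "applying the control" means in practice: the controlled treatment effect is a between-group contrast, for instance a difference-in-differences, not the raw within-group change. A minimal sketch with made-up numbers:

```python
# Made-up numbers, just to show what "using the control" means.
def controlled_effect(pre_tx, post_tx, pre_ct, post_ct):
    """Difference-in-differences: the change in the intervention arm
    minus the change in the control arm."""
    return (post_tx - pre_tx) - (post_ct - pre_ct)

# Both arms improve by roughly the same amount on a fatigue scale
# where lower is better...
print(controlled_effect(pre_tx=28.0, post_tx=23.0, pre_ct=28.0, post_ct=23.5))
# -> -0.5: essentially no controlled effect, despite a 5-point
#    within-group "improvement" in the intervention arm.
```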
 
By when is a trial's protocol supposed to have been finalised? No doubt I'm rehashing old ground, my memory being what it is, but here ...

https://pubmed.ncbi.nlm.nih.gov/27278762/

... it is dated 8 June 2016 referring to results ...
Results: The project was funded in 2011 and enrolment was completed in December 2014, with follow-up completed in March 2016. Data analysis is currently underway and the first results are expected to be submitted soon.
... so was clearly written after that.

I thought the protocol was supposed to have been finalised way before that.
 
We have written a short blog post about the long-term follow-up findings of the GETSET trial and their implications.



After years of waiting, the long-term follow-up results of the GETSET study have finally been published. The control group that received no intervention did just as well as the group that received guided graded exercise self-help. This isn't the first time the control group has caught up over time; a similar pattern was seen in the FINE, PACE, FITNET, and QURE studies. This blog post explores the intriguing implications of these follow-up findings.

https://mecfsskeptic.com/getset-long-term-follow-up/
 
:thumbup: @dave30th

trial by error said:
The corrigendum also acknowledged a minor edit in the “Highlights” section sentence about cost-effectiveness. (I hadn’t noticed this change previously, and I’m not completely sure what the change is, since the corrigendum unfortunately didn’t include this detail.) In any event, the current version of the statement appears nonsensical, given the null results. In what way can a treatment be said to be “cost-effective” if it does not produce beneficial impacts? Cost-effective at what, exactly?

Old version
old version of highlights said:
• Guided graded exercise self-help (GES) can lead to sustained improvement in patients with chronic fatigue syndrome.
• There was no evidence of greater harm after GES compared to specialist medical care at long-term follow-up.
• The study showed that GES was probably cost-effective.
• Most patients remained unwell at follow up; more effective treatments are required.

New version
new version said:
The revised highlights are as follows:
• There were no differences between interventions in primary outcomes at long-term follow up.
• There was no evidence of greater harm after GES compared to specialist medical care at long-term follow-up.
• The study showed that GES probably was cost-effective.
• Most patients remained unwell at follow up; more effective treatments are required.
 