Cochrane Review: 'Exercise therapy for chronic fatigue syndrome', Larun et al. - New version October 2019 and new date December 2024

It's downright incompetent of them (putting it charitably). Indeed their own risk of bias is massive I think. Perhaps one day in a court room they may begin to realise it.

Outcome Reporting Bias ...


PACE trial ... most of the above. Hugely exacerbated by being fully unblinded.

There truly must be some Machiavellian moves in play behind the scenes here.


Thank you @Barry; precisely - "Perhaps one day in a court room they may begin to realise it."

This "accident" just keeps rolling on - I think the only way to stop it is legal action....but legal action ain't got no traction at the moment.

ETA: None of these people will wholeheartedly give themselves over to logic.
 
The end points mentioned only go up to about 70 weeks. Does that mean someone forgot to include the 134-week data for PACE? That probably needs correcting if the protocol says to use the furthest end point.
And they can't claim they weren't aware of the paper as they mention it:
Sharpe MD, Goldsmith KA, Johnson AL, Chalder T, Walker J, White PD. Rehabilitative treatments for chronic fatigue syndrome: long-term follow-up from the PACE trial. Lancet Psychiatry 2015;2:1067-74. [DOI: 10.1016/S2215-0366(15)00317-X]
 
They continue to classify the PACE Trial as having a low risk of bias in terms of selective reporting which is hard to accept.

This is the comment I made previously:

From one of Robert Courtney's previous comments with regard to the PACE Trial and risk of bias:
Larun states that the "changes [to the trial] were drawn up before the analysis commenced and before examining any outcome data. In other words they were pre-specified [...]" However, the latter assertion is not consistent with Cochrane's glossary, which states that prespecified changes are those defined before data collection has commenced [5].
 
The end points mentioned only go up to about 70 weeks. Does that mean someone forgot to include the 134-week data for PACE? That probably needs correcting if the protocol says to use the furthest end point.
It's debatable whether 24 weeks should be considered the point at which therapy ended in the PACE Trial, with the results at 52 weeks treated as the follow-up results (and the follow-up results at approximately 2.5 years ignored), given that there was a "booster" therapy session at 36 weeks.

The PACE Trial authors seemed to consider the results of 52 weeks the main results.
 
The PACE Trial authors seemed to consider the results of 52 weeks the main results.
Still, I haven't been able to read the full study, but I suspect that this will be one of the main points that should be raised. I've tried to mention it here: https://www.s4me.info/threads/david...port-on-courtneys-complaint.8555/#post-150341

Most of the data come from three large studies: PACE, FINE and Powell et al. 2001. For all of these trials, the main outcome was specified as a measurement several months after the treatment ended. This is what the Cochrane review calls the long-term follow up.

So what the large studies reported as their main outcome was deemed too uncertain for Larun et al. to draw any conclusion from. But the measurement taken directly after treatment (when response bias and placebo effects can be expected to be higher) was turned into the main outcome, simply because they had (a little) more data on it, coming from the small studies.

So the distinction between (1) the results directly after treatment, which are deemed certain enough to mention, and (2) the results at long-term follow-up, which are viewed as uncertain, seems rather arbitrary, and it happens to coincide with the results changing to no longer being statistically significant.
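The significance point here can be illustrated with a minimal fixed-effect (inverse-variance) pooling sketch. All effect sizes and standard errors below are invented for illustration; they are not data from PACE, FINE, Powell et al. or the review.

```python
import math

def pool(studies):
    """Fixed-effect (inverse-variance) pooled effect and its 95% CI.

    `studies` is a list of (effect, standard_error) pairs.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical end-of-treatment effects: three large trials plus a small one.
end_of_treatment = [(-0.30, 0.10), (-0.25, 0.12), (-0.20, 0.15), (-0.40, 0.25)]

# Hypothetical long-term follow-up: only the large trials, smaller effects.
follow_up = [(-0.10, 0.10), (-0.05, 0.12), (-0.08, 0.15)]

e1, ci1 = pool(end_of_treatment)   # CI excludes zero: "significant"
e2, ci2 = pool(follow_up)          # CI straddles zero: "not significant"
```

With these invented numbers the end-of-treatment interval excludes zero while the follow-up interval does not, which is the pattern described above: which timepoint gets promoted to the main outcome decides whether the headline result is statistically significant.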
 
AKA cherry-picking.

The lack of concern over blatant cherry-picking is very disappointing, especially after written admission of making choices based explicitly on results they prefer.

Just to be clear - this was prespecified as the time for their primary outcome, but there were changes to the primary outcome at that point. If there's cherry-picking on the time, it's the way the Cochrane review fails to look at more long-term follow up data, emphasising instead the data at end of treatment. That could be what you meant, I just wasn't really sure.


I think my main issue with this, particularly over the SMD debacle, is that although Tom and Bob were right to pull them up for not using SMD for the combined fatigue scores, because that's what the authors said they would do (and then didn't, because it made the results look bad), combining the results in this way was always flawed.

FSS and CFQ are completely different scales that measure different things to produce a number (the scales run in the same direction, but that's about it). It's like combining distance and speed: FSS might be a proxy for absolute fatigue, but CFQ measures change in fatigue (how much worse you have got in the past 3 months, or since before you were ill, or whenever you can last remember). They should never be combined. Even combining individual participant results within a study is dodgy, because it's the individual mean difference that matters, not the difference in group means (confusingly, what they refer to here as "mean difference"). Combining the three different ways of scoring the CFQ is not valid either, because the Likert and bimodal scores do not exactly correspond.
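For reference, the SMD (Cohen's d) under discussion divides the difference in group means by a pooled standard deviation, which is what makes it tempting to pool outcomes measured on different scales, and also why doing so silently assumes the scales quantify the same construct. A minimal sketch with made-up summary statistics (not data from any trial discussed here):

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference: (treatment mean - control mean) / pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Invented summary statistics on two different fatigue scales
# (lower scores = less fatigue in both):
d_fss = cohens_d(4.2, 1.1, 60, 4.8, 1.2, 60)    # FSS-style absolute scale
d_cfq = cohens_d(19.0, 5.5, 70, 23.0, 6.0, 70)  # CFQ-style change scale

# The two d values are unit-free, so a meta-analysis can average them;
# but that step is only meaningful if FSS and CFQ measure the same thing.
```

The division by the pooled SD removes the units, which is exactly why the technique is agnostic about what the scales actually measure: the arithmetic goes through whether or not the underlying constructs are comparable.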

One would hope that by making them do it again, as they originally said they would, they would realise why it is a problem, but clearly they haven't.

If we could be confident that their 'fatigue' measures were measuring fatigue then combining their fatigue measure would presumably be okay. That we can't is such a big problem with the review that it makes the problem of combining their fatigue measures seem a bit trivial!

The end points mentioned only go up to about 70 weeks. Does that mean someone forgot to include the 134-week data for PACE? That probably needs correcting if the protocol says to use the furthest end point.

Does the protocol say that? - edit: Adam was just speculating about what the protocol may say about end points.

Could they argue that PACE would no longer count as a randomized trial at that point?
 
#MEAction are right that the review authors still claim they "included eight RCTs with data from 1518 participants."

Yet the biggest trial, the PACE trial, wasn't a Randomised Controlled Trial, and even the PIs called it only a "Randomised Trial".

We probably need to be careful with phrasing on that, as it's disputable to what extent PACE can/can't be considered an RCT. It fails to control for many of the things likely to bias outcomes in favour of CBT/GET, but it has features that would lead some people to class it as an RCT.

I'm suddenly getting scared that any exaggerated criticism of this review is going to lead to more spin about abusive patients: "How dare these terrorists dispute our RCT classification!"
 
AfMEs statement on their website re the new review:
Cochrane Reviews, the gold-standard of systematic healthcare research reviews, has published an update of its review of graded exercise therapy (GET) for M.E./CFS.

This comes under the category of intervention reviews, which “assess the benefits and harms of interventions used in healthcare and health policy.”

However, the review remains based on what is now a very outdated protocol, using a research question and methodology from 2002; and only including eight randomised controlled trials that relied on the 1991 Oxford criteria and/or the 1994 Centers for Disease Control and Prevention criteria for M.E./CFS, including the PACE trial.

This is extremely concerning, given the US Agency for Healthcare Research and Quality’s conclusion that the Oxford criteria comes with a “high risk of including patients who may have an alternate fatiguing illness, or whose illness resolves spontaneously with time.” (Smith et al, 2014)

We do not support the Cochrane review’s conclusion that GET “probably has a positive effect on fatigue in adults with CFS compared to usual care or passive therapies. The evidence regarding adverse effects is uncertain. Due to limited evidence it is difficult to draw conclusions about the comparative effectiveness of CBT, adaptive pacing or other interventions.”
full post here:
https://www.actionforme.org.uk/news/cochrane-review-of-get-our-concerns/
 
We probably need to be careful with phrasing on that, as it's disputable to what extent PACE can/can't be considered an RCT. It fails to control for many of the things likely to bias outcomes in favour of CBT/GET, but it has features that would lead some people to class it as an RCT.

I'm suddenly getting scared that any exaggerated criticism of this review is going to lead to more spin about abusive patients: "How dare these terrorists dispute our RCT classification!"

Hm. Do you think there are reasonable arguments to classify the PACE trial as a controlled trial? What arguments are there?

Is the knowledge I acquired on S4ME (that it needs adequately controlled groups to classify a trial as an RCT) disputable? Or is it disputable whether the groups were adequately controlled?

See: https://www.s4me.info/threads/a-general-thread-on-the-pace-trial.807/page-11#post-90488
 
It is notable that the main conclusion which was a sticking point for David Tovey, namely downgrading the evidence from "probably" to "may" and from "moderate" to "low-moderate" has not made it into the revised article. (See the FOI correspondence on 29th of May)

I suggest this is a point of contention that we can leverage.
 
AfMEs statement on their website re the new review:

full post here:
https://www.actionforme.org.uk/news/cochrane-review-of-get-our-concerns/

Given the way Cochrane has acted, why would they assert Cochrane Reviews are "the gold-standard of systematic healthcare research reviews"?

Bringing up AfME's survey data to dispute this review is not a great move either.

"We are keen to see Cochrane progress this as soon as possible, with children and adults with M.E. at the very heart of it."

At this point, I don't see much reason to be keen on this. If Action for ME are the patient group Cochrane is working with then I feel a deep concern.

Hm. Do you think there are reasonable arguments to classify the PACE trial as a controlled trial? What arguments are there?

Is the knowledge I acquired on S4ME (that it needs adequately controlled groups to classify a trial as an RCT) disputable? Or is it disputable whether the groups were adequately controlled?

See: https://www.s4me.info/threads/a-general-thread-on-the-pace-trial.807/page-11#post-90488

Rather than reasonable arguments, I'm more concerned about the cultural assumptions within the research community, and the way we phrase our valid concerns. E.g. talking about PACE not being adequately controlled to account for the biases likely to afflict its primary outcomes, and it therefore being questionable whether it should be classed as an RCT, is one thing. But just saying it's not an RCT, even though participants were randomised to four groups, goes against the assumptions of many researchers, and so could be interpreted as showing that criticism of PACE is unreasonable or ill-informed.

When there are so many researchers who view poor-quality work as acceptable, and who assume ME/CFS patient criticism of trials like PACE is driven by ideological opposition to psychologically informed treatments, it's worth trying to avoid any potential misunderstandings.

edit: The comment I made about 'exaggerated criticism' was not about your post, but just my fears.
 
We probably need to be careful with phrasing on that, as it's disputable to what extent PACE can/can't be considered an RCT. It fails to control for many of the things likely to bias outcomes in favour of CBT/GET, but it has features that would lead some people to class it as an RCT.

I'm suddenly getting scared that any exaggerated criticism of this review is going to lead to more spin about abusive patients: "How dare these terrorists dispute our RCT classification!"

I do think this matters. Researchers can be very lax when it comes to describing trials. The randomised controlled trial (RCT), by which I mean a properly randomised, double- (or even triple-) blind, placebo-controlled trial, is considered the "gold standard" in epidemiology because of the steps it takes to reduce bias. However, what most researchers describe as RCTs are really just randomised comparative trials, or randomised clinical trials, or sometimes just random trials done in a clinic setting.

Not being able to blind properly, randomise adequately, or compare against a valid placebo is not an excuse for calling a trial an RCT when it isn't one.

Unblinded trials with complex, composite, subjective endpoints and no placebo intervention ARE NOT RCTs!
 
Who gets to define what counts as an RCT? Which organisation has the ultimate authority to say whether a trial does or does not meet the criteria?

Looking at definitions such as on Wikipedia for example I do worry the bar is set quite low.

Another example, from Cochrane:

Randomised controlled trial
An experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants. In most trials one intervention is assigned to each individual but sometimes assignment is to defined groups of individuals (for example, in a household) or interventions are assigned within individuals (for example, in different orders or to different parts of the body).
 
Could they argue that PACE would no longer count as a randomized trial at that point?
Considering that we know:
  1. Participants in sham control arms also tried the active treatments
  2. Trial leaders made no effort to account for what patients did (that we know of, anyway), and so were not necessarily aware themselves of which patients tried which treatments
  3. Trial leaders promised sham control participants they could try the active treatments after the initial run but before any follow-up
There's a reasonable claim to be made that PACE wasn't properly randomised. Or maybe that's arm contamination. Either way, it's basically uninterpretable, since participants did not stick to their arms. Despite all the money wasted, it was a very badly run trial.
 