Bias caused by reliance on patient-reported outcome measures in non-blinded randomized trials: an in-depth look at exercise therapy for CFS, 2020, Tack

Andy

Retired committee member
Background
Several randomized trials have reported that graded exercise therapy (GET) is an effective treatment for chronic fatigue syndrome (CFS). These trials were not blinded and relied on patient-reported outcome measures (PROMs). We investigate whether bias introduced by this study design influenced the results.

Methods
We extracted standardized mean differences from the most recent meta-analysis on exercise therapy for CFS to analyze their size, consistency over time, and congruence with objective measurements. A narrative review methodology was used to examine mediation analyses, plausible mechanisms of improvement, and risk of response bias.

Results
Patient-reported improvements in exercise trials for CFS tend to be small, transient, and poorly supported by objective measurements. The risk of expectancy effects and response bias was high as patients were actively encouraged to adopt a positive attitude towards exercise therapy. Mediation analyses suggest that self-reported improvements in fatigue and physical function are not mediated by objective measures of fitness.

Conclusions
Treatment effects seen in exercise trials for CFS could be the result of bias associated with the use of PROMs in non-blinded trials. This might explain the discrepancy between positive results reported in randomized trials and views on exercise therapy expressed by patient organizations. We hope that this case study furthers critical assessment of patient-reported improvements in areas of medicine where blinding of therapists and trial participants faces practical limitations.
Paywalled: https://www.tandfonline.com/doi/full/10.1080/21641846.2020.1848262
Sci-Hub: no access at time of posting.

ETA: Corrected access details.
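
For readers who want to see what the standardized mean differences mentioned in the Methods actually are, here is a minimal, purely illustrative sketch. All numbers are made up for demonstration and are not taken from any trial or from the meta-analysis:

```python
# Purely illustrative: how a standardized mean difference (SMD) of the kind
# extracted from the meta-analysis can be computed from two group summaries.
# All numbers below are hypothetical, not data from any actual trial.
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: mean difference divided by the pooled SD, with a small-sample correction."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))  # small-sample correction

# Hypothetical end-of-treatment fatigue scores (lower = less fatigue):
g = hedges_g(mean_t=20.0, sd_t=7.0, n_t=150, mean_c=22.5, sd_c=7.5, n_c=150)
print(f"Hedges' g = {g:.2f}")                      # about -0.34, a small effect
print(f"raw-score difference = {abs(g) * 7:.1f}")  # about 2.4 points on the assumed scale
```

Multiplying a small SMD by the scale's standard deviation like this also shows how few raw points such an effect amounts to, which is relevant when comparing it with a minimal clinically important difference.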
 
Here's a Twitter summary:


2) Our analysis focuses on a discrepancy between several randomized trials that report that graded exercise therapy (GET) is effective for chronic fatigue syndrome (CFS) and multiple surveys by patient organizations that indicate just the opposite.

3) None of the trials of GET for CFS were blinded, and all relied on patient-reported outcome measures. Some (for example Edwards 2017 and Wilshire et al. 2018) have suggested that this trial design introduces bias that could distort the main outcomes.

4) Patients who know they are receiving an active intervention rather than a passive control might be more optimistic about its effect on their health or report symptoms according to what they think will please the investigators.

5) We’ve tried to investigate this hypothesis further.

We investigated the size of patient-reported improvements, their consistency over time and congruence with objective measurements. We reviewed mediation analyses, plausible mechanisms of improvement and the risk of expectancy effects.

6) First, we found that the reported improvements were quite small: comparable to the minimal clinically important difference for each of the scales and questionnaires, and similar in size to the bias that has been attributed to a lack of blinding in other studies.

7) Second, we note that the differences between the GET and control groups are often no longer statistically significant when assessments are made not directly after treatment but a couple of months later.

(caveat: this can also be due to decreased statistical power; a rough illustration of this follows after the thread).

8) Third, Vink & Vink-Niese showed in their analysis of the Cochrane review that objective measures, which are believed to be more robust to bias from a lack of blinding, show little improvement, in contrast to subjective outcomes.

9) Fourth, mediation analyses suggest that the reported improvements in GET trials are not the result of an increase in fitness, as the theory behind this treatment assumed (a simplified sketch of this kind of analysis follows after the thread). Exercise therapy in CFS currently lacks a plausible mechanism of improvement.

10) Lastly, the risk of expectancy effects is large: the GET treatment manuals show that therapists were instructed to encourage optimism, while patients were told to interpret their CFS symptoms as a consequence of deconditioning, stress or anxiety rather than of an unknown disease.

11) Based on the arguments outlined above, we conclude that treatment effects seen in exercise trials for CFS are likely the result of bias associated with a lack of blinding.

12) We acknowledge a level of uncertainty about our hypothesis but argue that the burden of proof lies, as for all interventions, on those claiming efficacy. Those who claim that a treatment is effective should provide evidence that reported improvements are not merely the result of methodological weaknesses.

13) We hope this case study furthers critical assessment of patient-reported improvements in areas of medicine where blinding of therapists and trial participants faces practical limitations.
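
On the caveat about statistical power in point 7: here is a rough, hypothetical illustration (normal approximation, made-up effect size and sample sizes) of how losing participants at follow-up reduces the power to detect a small effect:

```python
# Rough normal-approximation power calculation for a two-sample comparison.
# The effect size and sample sizes are assumed for illustration only; the point
# is that fewer participants at follow-up means less power for the same effect.
from scipy.stats import norm

def approx_power(smd, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = smd * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

smd = 0.30  # an assumed small effect, for illustration
for n in (150, 90):  # e.g. end of treatment vs. long-term follow-up after dropout
    print(f"n per group = {n:>3}: power = {approx_power(smd, n):.2f}")
```

So a non-significant difference at follow-up does not by itself show that the effect has disappeared; it may simply reflect reduced power.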
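
And on the mediation analyses in point 9, here is a simplified sketch of the general product-of-coefficients logic with simulated data. The variable names, effect sizes, and data are all invented for illustration; this is not the method or data of any specific trial:

```python
# Simplified product-of-coefficients mediation sketch with simulated data.
# Question it mimics: is the self-reported improvement in fatigue explained
# (mediated) by an objective change in fitness? All numbers are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
treatment = rng.integers(0, 2, n)                  # 0 = control, 1 = GET
fitness = 0.05 * treatment + rng.normal(0, 1, n)   # simulated: fitness barely changes
fatigue = -0.3 * treatment + 0.1 * fitness + rng.normal(0, 1, n)  # self-report improves anyway

# Path a: effect of treatment on the proposed mediator (fitness)
a = sm.OLS(fitness, sm.add_constant(treatment)).fit().params[1]
# Path b: effect of the mediator on the outcome, adjusting for treatment
out = sm.OLS(fatigue, sm.add_constant(np.column_stack([treatment, fitness]))).fit()
direct, b = out.params[1], out.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")   # close to zero here
print(f"direct effect of treatment     = {direct:.3f}")  # most of the reported change
```

If the indirect effect is near zero while the direct effect is not, the reported improvement is not explained by the change in fitness, which is the pattern described in the abstract above.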
 
Thanks to Tom Kindlon, Simon McGrath, and Andrew Kewley for providing thoughtful comments on earlier drafts of the analysis.

Also a big thanks to Jonathan Edwards, who has been most articulate about this argument, for example in his 2017 commentary on the PACE trial and in his recent expert testimony to the NICE guideline committee, discussed in this thread:
The difficulties of conducting intervention trials for the treatment of [ME/CFS]: Expert testimony to NICE guidelines committee by Jonathan Edwards | Science for ME (s4me.info)
 
To be honest, most of the arguments in our paper are not new; they are well known within the ME/CFS patient community. Most of those who have been following the literature closely are probably well aware of these issues, and they have been discussed on this forum multiple times.

I thought it was worth spelling out these arguments in a scientific publication, to provide an overview that can be referenced and shared among doctors and researchers.
 
Important. Especially given how obsessively these measures are used, on and on and on, always the same assessment methods despite those methods not being valid. Patient-reported outcomes can be made relevant with care and effort. Here the effort is the exact opposite: meant to misrepresent the illness and redirect attention to an alternative cause and set of features. They act as a bias multiplier of sorts, about as susceptible to manipulation and misinterpretation as "lie detectors". They stem from ideology, not from outcomes that are relevant to patients, and even less from outcomes that are objective and significant.

But I think there is a greater point, one that is harder to make as most physicians would be confused by it: the rating instruments used are not pertinent, they are not specific, and at best they capture a tiny portion of the illness. Much of what they are concerned with is not actually relevant at all. It's one thing to rely strictly on self-reported outcomes; it's quite another to rely strictly on largely irrelevant self-reported outcomes, further distorted by biased analysis from conflicted individuals.

The various psychometric questionnaires, the CFQ and other measures, are all vague and imprecise to the point of being useless. None of them has any significant relevance to ME: they ignore most of the features and symptoms and are especially ambiguous, with responses that can vary even though no significant factors have changed.

This point is harder to make because the misrepresentation of ME as being just fatigue has been the only effective outcome of the BPS ideology. So to most physicians looking into this, the reported outcomes seem relevant. But they aren't, not even close. They could hardly be less pertinent, and as the only tools used in the fabrication of the current paradigm, they add up to a whole lot of nothing.

The bias isn't just present and significant, it's maximal; it genuinely could not be any more biased than it is. It corrupts the whole to the point where the paradigm may as well be presented as a genuine belief system, closer in letter and spirit to Scientology than to any credible scientific knowledge.
 
Fantastic work, Michiel, Dave and Caroline!

A suggestion: if you were to create an account on ResearchGate (free to anyone), you could post the manuscript version of the fulltext of this paper there, and still be within the copyright rules of Taylor and Francis (that's the version before it got formatted by the publishers).

If you did that, we could link to the article here, and everyone could read it without violating any copyright restrictions. Also, the link would appear alongside the paper name whenever it appeared in a Google Scholar search.
 
A suggestion: if you were to create an account on ResearchGate (free to anyone), you could post the manuscript version of the fulltext of this paper there, and still be within the copyright rules of Taylor and Francis (that's the version before it got formatted by the publishers).
Thanks for this helpful suggestion.

I've been reading about this and it seems that I can post the Accepted Manuscript (AM) on my personal website but not on repositories like ResearchGate. On the other hand, I can post the Author's Original Manuscript (AOM) anywhere I like. See: Publishing Open Access - What is Open Access? | Author Services (taylorandfrancis.com)

So I'm wondering what would be the best option: post the AM on me/cfsskeptic@wordpress.com or post the AOM on ResearchGate?
 
Hmmm, I would go for ResearchGate. You're right, technically the rules only cover personal and institutional repositories, and ResearchGate is a third-party private host. So yes, if you want to stay strictly within the rules, it's not allowed.

But in practice, everyone ignores that. The big publishing houses know they're on borrowed time if they mess with that (there is already a huge backlash against them in academic circles). Plus, in practice, there's no means for them to check the version you posted against the original one you submitted; in other words, they cannot easily detect exactly which version you posted (unless it's the final formatted one).

Technically speaking, I don't think the rules would allow you to post your piece on WordPress for the same reason: it's a private, third-party host, not your own or your institution's website (you could link to it from elsewhere, though). The reason ResearchGate is good is that it's well trawled by Google Scholar bots, so the link to your full-text paper is more likely to pop up if anyone does a Google Scholar search.
 