Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

You suggest, I think, Trish, that the BPS proponents will produce more studies of CBT and GET with subjective outcomes that aim to cure people by increasing activity, regardless of whether past trials are discounted due to diagnostic guidelines or due to subjective outcomes. But if Cochrane comes out with a clear statement that exercise trials in ME/CFS relying on subjective outcomes to assess treatment utility do not produce findings worthy of inclusion in the evidence base, then that would largely stop those sorts of trials. It would give ME/CFS advocates the ammunition needed to stop them getting funded.
I hope you're right. I fear they will find ways around it.

They can lump us in with MUS and we still have the CBT battle to fight, where subjective outcomes are 'justified' on the grounds that ME is a problem of wrong thinking, so questionnaires logically assess whether we are thinking 'right' thoughts. And then there is the whole field of 'rehabilitation' that is based on exercise. Just redefine GET as rehab, and there's a whole new area to research. And there's 'activity management', which can mean anything.

Sorry, I'm being gloomy today. I need to step away from this thread.
 
... Longterm commitment to an issue means you can't manage that so easily (eg by muting/blocking/diverting to spam, particular words like the issue's name and people/email accounts). That's a drag on your communication media & psyche, and a disproportionate time cost, when there are so many other incredibly worthwhile and more gratifying things to do that don't carry those productivity costs. (And productivity loss when you're a freelancer is also economic loss.)
Your engagement is highly appreciated.
 
Cohort selection should be a key factor in whether trials are included in the review. I think only those trials specifically set up using PEM-inclusive diagnostic criteria should be included; alternatively, the review should be relabelled as a review of exercise therapy for idiopathic chronic fatigue, not applicable to ME/CFS.
I suspect that Cochrane won't exclude trials from the review, per se, as that means they can't assess them in the first place.

Most likely they'll set criteria for what's good and then downgrade everything that doesn't meet those criteria, and explain why.

So things like PACE would probably be included in the evidence review even if they didn't inform the final recommendation. You can't say something is good or bad without looking at it first.

The final recommendation would (ideally) explain why the reviewers used some data and ignored the rest, but it would have to justify why certain trials didn't form the basis of their recommendation.

Really, you don't want to exclude PACE from the review, because you want to be able to publish a judgement on it. Ignoring it altogether would likely get a lot of flak from all sides.

The same goes with criteria. You'd need to downgrade stuff that doesn't meet the criteria you set, and then you'd be able to say why you did that and what the risks are with using different criteria.

You will probably find, though, that not every trial details whether PEM was mandatory or not, and therefore you will probably end up downgrading based on the main criteria mentioned in the protocol rather than how it was operationalised.

Fukuda is the most commonly used set of criteria, but it's PEM-optional, and researchers who've operationalised it to require PEM don't necessarily say so in their protocol. This means most trials will probably be downgraded on this criterion.
 
I suspect that Cochrane won't exclude trials from the review, per se, as that means they can't assess them in the first place.

Most likely they'll set criteria for what's good and then downgrade everything that doesn't meet those criteria, and explain why.

So things like PACE would probably be included in the evidence review even if they didn't inform the final recommendation. You can't say something is good or bad without looking at it first.

The final recommendation would (ideally) explain why the reviewers used some data and ignored the rest, but it would have to justify why certain trials didn't form the basis of their recommendation.

Really, you don't want to exclude PACE from the review, because you want to be able to publish a judgement on it. Ignoring it altogether would likely get a lot of flak from all sides.

The same goes with criteria. You'd need to downgrade stuff that doesn't meet the criteria you set, and then you'd be able to say why you did that and what the risks are with using different criteria.

You will probably find, though, that not every trial details whether PEM was mandatory or not, and therefore you will probably end up downgrading based on the main criteria mentioned in the protocol rather than how it was operationalised.

Fukuda is the most commonly used set of criteria, but it's PEM-optional, and researchers who've operationalised it to require PEM don't necessarily say so in their protocol. This means most trials will probably be downgraded on this criterion.
You could definitely consider PACE for inclusion in the review and then exclude it because the participants don't meet the diagnostic criterion of having PEM. There's a section in every Cochrane review called "Characteristics of excluded studies". Including PACE in the review just so that Cochrane can assess it again doesn't seem logical.

If Cochrane applied the new Risk of Bias tool to PACE, I think it would come out with a better score than it did before on all outcomes, including the ones vulnerable to bias due to lack of blinding. However, maybe it would be possible to include the objective outcomes from PACE, which were not affected to the same extent by lack of blinding.
 
I've been extremely critical of the demonization of ME/CFS activists, publicly and privately, and will continue to be. I believe it is their demonization that has deterred people from ME/CFS research, not the actual behavior of activists (which is actually mild in comparison to many other topics) - and that they have actually publicly campaigned with a message that translates to "stay out of ME/CFS research" is unconscionable.
This is very much appreciated. I agree with @strategist that the CBT/GET model has been a much more significant factor in deterring people from ME/CFS research, but the demonisation has probably been a factor too, and it certainly appears to have deterred independent scientists from scrutinising BPS ME/CFS research, and has had a hugely damaging effect on patients.

Unfortunately, as a community we have struggled to successfully counter the orchestrated campaign by a small group of influential researchers to demonise anyone who is critical of their work, which (if I recall correctly) you accurately described as a collective ad-hominem attack on the whole community. This is off-topic, but as a health consumer advocate, can you suggest what more could be done to counter this damaging narrative, and to hold those responsible for it to account?

We all know who these researchers are, and yet I note that in your PLOS blog you chose not to name any of them. I’m not criticising you for that decision, but I would be interested to know why you have so far chosen not to publicly identify any of those whose actions you describe as “unconscionable”. Given that some of these people continue to have a significant influence not only on ME/CFS research and treatments but also on other aspects of public policy (including the response to Covid-19), do you/we not have a moral obligation to name those who have behaved so unethically, in order to prevent further harm to the public? By not naming them, are we not allowing them to benefit from their unethical behaviour?

As you will be aware, most attempts by people from within the ME community to raise concerns about the ethics and competence of these people are not only dismissed but used as further ammunition against us.
 
If Cochrane applied the new Risk of Bias tool to PACE, I think it would come out with a better score than it did before on all outcomes, including the ones vulnerable to bias due to lack of blinding.
I really do think that risk assessment is flawed if it seeks to evaluate each individual component of a trial as if wholly unaffected by flaws in other parts of the trial. I have absolutely no problem with each component being evaluated individually, and fully agree with it; that makes sense. But I think there should be an overarching whole-trial-reliability weighting that is factored into all component assessments.

Going back to my earlier house survey analogy (4th para https://www.s4me.info/threads/indep...ed-by-hilda-bastian.13645/page-26#post-266870), if something major in the survey report makes clear that the survey has been undertaken and/or reported with even one really serious flaw, then that just has to cast doubt on everything in the survey. So even components in the report that appear to be OK still have to be risk-assessed in the light of anything serious enough to cast doubt on the trustworthiness of the survey as a whole. It's a valid analogy, I think.

Maybe this is done anyway, but if not then I think it should be.
 
The argument that it's unethical is a bit tricky. I don't think we yet have solid evidence of long-term deterioration (although there is much anecdotal evidence). And the BPS argument is that, just like getting fit, you need to push through the pain and ignore the odd ache. The facts that GET doesn't work and that it costs government money should be enough to stop GET being a recommended treatment.
There was some discussion about the ethics of CBT/GET trials in another thread, where I wrote:

“To me, the interesting ethical questions are:

1) Is it ethical to try to convince patients that their illness is reversible by their own efforts (SW’s CBT model) in the absence of any evidence which supports that belief?

2) Is it ethical to conduct a clinical trial which requires the participants to be persuaded of the efficacy of the treatment they are being given, when, as evidenced by the fact that it is being trialled, the efficacy of the treatment must be uncertain?

3) Can it ever be ethical for anyone – and medical professionals in particular – to put what Cochrane founder Hilda Bastian described as a “massive effort” into trying to discredit an entire patient community with a “collective ad hominem attack”, based on the alleged actions of a small number of individuals?”

The question about the safety of GET for people with ME is clearly another serious ethical concern.
 
I really do think that risk assessment is flawed if it seeks to evaluate each individual component of a trial as if wholly unaffected by flaws in other parts of the trial. I have absolutely no problem with each component being evaluated individually, and fully agree with it; that makes sense. But I think there should be an overarching whole-trial-reliability weighting that is factored into all component assessments.

Going back to my earlier house survey analogy (4th para https://www.s4me.info/threads/indep...ed-by-hilda-bastian.13645/page-26#post-266870), if something major in the survey report makes clear that the survey has been undertaken and/or reported with even one really serious flaw, then that just has to cast doubt on everything in the survey. So even components in the report that appear to be OK still have to be risk-assessed in the light of anything serious enough to cast doubt on the trustworthiness of the survey as a whole. It's a valid analogy, I think.

Maybe this is done anyway, but if not then I think it should be.
I absolutely agree. No I don't think it is done, but I'm not 100% sure. All I know is the new risk of bias tool assesses individual outcomes separately whereas before the whole trial was assessed. I would prefer, of course, if PACE were excluded in its entirety.
 
This is very much appreciated. I agree with @strategist that the CBT/GET model has been a much more significant factor in deterring people from ME/CFS research, but the demonisation has probably been a factor too, and it certainly appears to have deterred independent scientists from scrutinising BPS ME/CFS research, and has had a hugely damaging effect on patients.

Unfortunately, as a community we have struggled to successfully counter the orchestrated campaign by a small group of influential researchers to demonise anyone who is critical of their work, which (if I recall correctly) you accurately described as a collective ad-hominem attack on the whole community. This is off-topic, but as a health consumer advocate, can you suggest what more could be done to counter this damaging narrative, and to hold those responsible for it to account?

We all know who these researchers are, and yet I note that in your PLOS blog you chose not to name any of them. I’m not criticising you for that decision, but I would be interested to know why you have so far chosen not to publicly identify any of those whose actions you describe as “unconscionable”. Given that some of these people continue to have a significant influence not only on ME/CFS research and treatments but also on other aspects of public policy (including the response to Covid-19), do you/we not have a moral obligation to name those who have behaved so unethically, in order to prevent further harm to the public? By not naming them, are we not allowing them to benefit from their unethical behaviour?

As you will be aware, most attempts by people from within the ME community to raise concerns about the ethics and competence of these people are not only dismissed but used as further ammunition against us.
Yes, that would be seen as confirmation that the accusations are justified, especially if competence is also brought into it. I'll give it some thought.

I didn't mention names at that time or link to examples, because I didn't want to distract from what I was principally trying to achieve with that post. I don't think my naming them in it would have had any impact on that issue: it wasn't a time or place that would have. But I intend to eventually follow that "collective ad hominem attack" statement up when I think it will have an effect (and not harm this Cochrane process).
 
I absolutely agree. No I don't think it is done, but I'm not 100% sure. All I know is the new risk of bias tool assesses individual outcomes separately whereas before the whole trial was assessed. I would prefer, of course, if PACE were excluded in its entirety.
Disagreeing here. There is nothing to exclude from an assessment:

... This distortion happens to such a degree that the resulting description of the illness is often unrecognizable by patients, but unfortunately it will appear credible and science-based to people without special knowledge of the topic.
You cannot really read it, but you can smell it - the paper is badly written, looking somehow intelligent and full of knowledge, making "only" observations, but with nothing coherent running through the pages. It's scattered, and then it culminates in the conclusion that CBT/GET helps ...

(only) on top of SMC. They even absolve themselves of responsibility for what they are conveying.
 
I absolutely agree. No I don't think it is done, but I'm not 100% sure. All I know is the new risk of bias tool assesses individual outcomes separately whereas before the whole trial was assessed. I would prefer, of course, if PACE were excluded in its entirety.
Just for clarification: the original risk of bias tool also assessed aspects of outcomes separately (eg outcome assessment), and the new one still has domains for the whole trial (eg randomization).
 
Disagreeing here. There is nothing to exclude from an assessment:

You cannot really read it, but you can smell it - the paper is badly written, looking somehow intelligent and full of knowledge, making "only" observations, but with nothing coherent running through the pages. It's scattered, and then it culminates in the conclusion that CBT/GET helps ...

(only) on top of SMC. They even absolve themselves of responsibility for what they are conveying.
Sure. But a systematic review is not the same as a critical appraisal (assessment) of individual trials. It is a way of synthesizing relevant and trustworthy evidence from different sources - in this case trials. I understand your point though.
 
Just for clarification: the original risk of bias tool also assessed aspects of outcomes separately (eg outcome assessment), and the new one still has domains for the whole trial (eg randomization).
The old review got a high risk of bias for outcomes assessment because of the lack of blinding (see pdf attached). I *think* the new risk of bias tool would look at the bias in outcome assessment, or something equivalent, for each outcome measure separately. The whole trial would get a low risk of bias assessment in the domain of randomization.
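Purely as an illustration of that structural difference, here is a rough, hypothetical sketch - not an official Cochrane data format or algorithm; the domain names are paraphrased and the judgements are invented placeholders - of how per-outcome judgements versus a whole-trial judgement might be pictured:

```python
# Rough, hypothetical sketch only - not an official Cochrane data format.
# It just illustrates the structural point above: with the new tool most
# judgements attach to each outcome (result) separately, while something like
# the randomisation process is effectively judged once for the whole trial.
# Domain names are paraphrased and the judgements are invented placeholders.

trial_assessment = {
    "whole_trial": {
        "randomisation_process": "low",  # judged once, applies to every outcome
    },
    "per_outcome": {
        "fatigue (self-reported questionnaire)": {
            "measurement_of_the_outcome": "high",           # unblinded + subjective
            "selection_of_the_reported_result": "some concerns",
        },
        "walking distance (objective)": {
            "measurement_of_the_outcome": "some concerns",  # unblinded but objective
            "selection_of_the_reported_result": "some concerns",
        },
    },
}

# Each outcome can therefore carry its own overall judgement, rather than one
# risk-of-bias rating covering the entire trial (worst-domain rule used here
# only as a simplification).
severity = {"low": 0, "some concerns": 1, "high": 2}
for outcome, domains in trial_assessment["per_outcome"].items():
    judgements = list(domains.values()) + list(trial_assessment["whole_trial"].values())
    overall = max(judgements, key=severity.get)
    print(f"{outcome}: overall risk of bias = {overall}")
```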
 


I absolutely agree. No I don't think it is done, but I'm not 100% sure. All I know is the new risk of bias tool assesses individual outcomes separately whereas before the whole trial was assessed. I would prefer, of course, if PACE were excluded in its entirety.
Just to clarify, I'm saying that it's good to assess each facet individually, but that reduced confidence in any one of them would contribute to a global additional weighting that would also be applied. So a minor slip-up that only rings minor alarm bells might contribute only, say, 2% to the global weighting, whereas changing primary outcomes from objective to highly subjective in an open-label trial might contribute 50%, maybe. (Not sure how they would aggregate - additively or multiplicatively? The latter, I suspect.) So even the items that got a clean bill of health (ha!) in isolation would still have the global factor applied, to reflect the overall untrustworthiness. And those items with an individual risk rating would have the global factor applied as well.

To me this feels like it would have the right sort of structure so that a number of not-too-serious individual muck ups could potentially aggregate to make any part of the trial untrustworthy, as could just a single major muck up. Mapping it onto my house survey example feels like a fair sanity check.
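To make the additive-versus-multiplicative question concrete, here is a minimal, purely illustrative sketch. It is not how Cochrane's Risk of Bias or GRADE tools work; the domain names and penalty percentages are invented. It just shows how the two aggregation rules behave, and how a global "untrustworthiness" discount could then be applied on top of each component's own rating:

```python
# Hypothetical sketch of the idea above - not any official methodology.
# Penalties are expressed as a fraction of confidence lost per domain
# (0.0 = no concern, 1.0 = no confidence at all); the numbers are made up.

domain_penalties = {
    "randomisation": 0.02,                 # minor alarm bells only
    "outcome_switching": 0.50,             # primary outcomes changed, open label
    "subjective_outcome_unblinded": 0.40,
}

# Additive aggregation: penalties simply sum, capped at 100% loss.
additive_global = min(1.0, sum(domain_penalties.values()))

# Multiplicative aggregation: each penalty independently scales down the
# remaining confidence, so overall confidence is the product of (1 - penalty).
confidence = 1.0
for p in domain_penalties.values():
    confidence *= (1.0 - p)
multiplicative_global = 1.0 - confidence

print(f"Additive global penalty:       {additive_global:.2f}")        # 0.92
print(f"Multiplicative global penalty: {multiplicative_global:.2f}")  # 0.71

# Apply the global factor on top of each component's own rating, so even a
# component that looked clean in isolation inherits the whole-trial discount.
for domain, own in domain_penalties.items():
    combined = 1.0 - (1.0 - own) * (1.0 - multiplicative_global)
    print(f"{domain}: own penalty {own:.2f} -> combined {combined:.2f}")
```

One attraction of the multiplicative rule is that the global penalty can never exceed 100% however many individual concerns pile up, whereas the additive version has to be capped.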
 
Disagreeing here. There is nothing to exclude from an assessment:
I think the point is that if a study is included in the initial candidate list, then any subsequent exclusion should simply mean that it does not contribute to the review's findings. But for each candidate study so excluded, some evidence and analysis should be provided, as part of the review, explaining why it did not pass muster. Indeed, there should probably be a stronger effort made to provide evidential support for including trials as well.

Quite apart from the trials themselves, this might also help provide confidence that the reviewers know what they are on about, and are prepared to be accountable.
 
Another issue which Hilda mentioned in her blog about protocols was the mandatory section on "How the intervention might work". This is one of the reasons I am unsure about the idea of a new review on exercise for ME/CFS, as opposed to maybe splitting it into two: 1. ME and 2. Idiopathic (non-ME) chronic fatigue. Or widening the scope to include pharma and non-pharma treatments. In the latter case the section “how the interventions might work” could include hypotheses other than the BPS one. The overview might still be empty if the diagnostic criteria limited participants to those with ME and not other fatiguing conditions, and the selection of outcomes were also tightened up. Even without any/many included studies, it would be a useful overview of the state of research in this area, for all proposed treatments, and could give a clearer idea of what needs to change to move forward.
Further to this comment about the scope of the review, I noticed a paragraph in the CBT review referring to a programme of reviews on CFS, in the section on why the review is important:

https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD001027.pub2/full

Why it is important to do this review
The current body of evidence for CBT remains limited to narrative synthesis within generic CFS reviews (NICE 2007; Chambers 2006) or to meta‐analysis of mean effect sizes (Malouff 2008). Furthermore, potential heterogeneity has been largely based on qualitative assessment and the impact of symptom severity and healthcare setting are uncertain moderators of effect (NICE 2007). An in‐depth, up‐to‐date, systematic review of CBT alone and in combination with other treatments for CFS is of key importance to inform treatment decision by patients, clinicians and policy‐makers. This review is central in a programme of Cochrane reviews for CFS, which also cover exercise therapy (Edmonds 2004), pharmacological treatments (Rawson 2007) and complementary approaches, including acupuncture (Zhang 2006) and traditional Chinese herbal medicine (Adams 2007).

Two reviews mentioned in this programme have disappeared: the review of pharmacological treatments, Rawson 2007 [Rawson KM, Rickards H, Haque S, Ward C. Pharmacological treatments for chronic fatigue syndrome. Cochrane Database of Systematic Reviews 2007, Issue 4. DOI: 10.1002/14651858.CD006813], and the review of acupuncture, Zhang 2006 [Zhang W, Liu ZS, Wu Taixiang, Peng WN. Acupuncture for chronic fatigue syndrome. Cochrane Database of Systematic Reviews 2006, Issue 2. DOI: 10.1002/14651858.CD006010], despite both having a full reference in the CBT review. How can two documents with a DOI disappear without trace? Presumably these were protocols that never progressed to reviews? I might try and contact the authors to find out what happened.

The bringing together of trials of all pharmacological treatments in particular would have been (and still would be) very useful to enable comparison between alternative hypotheses about what may cause and perpetuate the condition, and what may or may not help patients. I will comment on the review and ask them to correct the text referring to reviews that never existed.
 
Sorry for the diversion, but the tweet below highlights why this review is so critical.
This also happens to children.

Much guidance/advice references Cochrane reviews; this lends it the authority of a solid evidence base for those unaware of the issues (two doctors in the wider family independently suggested GET and CBT, and that if there was no progress then it was likely psychological/psychiatric).

The complete misrepresentation of a serious illness is perpetuated by professional politics and ego.

We have no figures for those consigned to languish under the MHA. I shudder to think what may happen if/when parents and carers are no longer there. This is the worldview that Cochrane helps underpin.

 
The argument that it's unethical is a bit tricky. I don't think we yet have solid evidence of long-term deterioration (although there is much anecdotal evidence). And the BPS argument is that, just like getting fit, you need to push through the pain and ignore the odd ache. The facts that GET doesn't work and that it costs government money should be enough to stop GET being a recommended treatment.
There was some discussion about the ethics of CBT/GET trials in another thread, where I wrote:

“To me, the interesting ethical questions are:

1) Is it ethical to try to convince patients that their illness is reversible by their own efforts (SW’s CBT model) in the absence of any evidence which supports that belief?

2) Is it ethical to conduct a clinical trial which requires the participants to be persuaded of the efficacy of the treatment they are being given, when, as evidenced by the fact that it is being trialled, the efficacy of the treatment must be uncertain?

3) Can it ever be ethical for anyone – and medical professionals in particular – to put what Cochrane founder Hilda Bastian described as a “massive effort” into trying to discredit an entire patient community with a “collective ad hominem attack”, based on the alleged actions of a small number of individuals?”

The question about the safety of GET for people with ME is clearly another serious ethical concern.
In addition, it is unethical to knowingly cause pain (broadly conceived) with no benefit to the patient.

-It is generally agreed that CBT/GET interventions cause symptoms to worsen at least temporarily. Obviously this is part of what patients report about the syndrome (pushing through activity makes it worse), but it is also fundamental to the CBT/GET theories and always acknowledged in those studies that I have seen.

-We have good reason to believe that these interventions have no medical benefit.

-Thus any further use of these interventions is done with the knowledge available that they cause pain with no known or reasonably suspected medical benefit to the patient. The exact same could be said if waterboarding were to be repackaged as a psychotherapy.
 