Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Look what Cochrane have done for COVID-19. https://covid-19.cochrane.org/. It's a study register....!!!
I imagine it will be kept updated a lot better than the ME/CFS one (but that wouldn't be hard).
I see that NICE are using it (the Corona one) although only after they have done their own searches:

https://www.nice.org.uk/process/pmg35/chapter/finding-evidence

"The guideline will be updated as the situation changes. The process for review and updating is being developed."
I'm guessing for Corona they will take less than 2 years if anything urgently needs updating.
 
Yesterday (14/4/20) I submitted a few comments to the IAG email address (ie, Hilda Bastian) too, largely based on watching the Cochrane presentation about the review update at the CMRC conference. I hope you don't mind me sharing. I gather from the background information that all correspondence to that address will be archived and made available after the project has finished anyway.

Hi there
I have a few comments on the pilot.
https://community.cochrane.org/orga...eholder-engagement-high-profile-reviews-pilot

1. What is the definition of high-profile review? This particular review is only high-profile because it’s had a huge amount of criticism. How will future high-profile reviews be defined? In the same way?
2. Why is this “engagement” (relatively passive, implying interest and acceptance rather than any element of control or influence)? I thought Cochrane were interested in stakeholder involvement in reviews – ie. helping prioritise topics/scope, choosing outcomes, control and influence over methods, reporting etc. The advisory group model comes across as hands off.
3. Cochrane is committed to involvement in all reviews. Why is this pilot only focused on “high-profile” reviews?
4. The project is defined as an update of an existing review with a new, more inclusive approach. Cochrane has already called the shots by assuming that an update of this review is appropriate, rather than a withdrawal/retraction. The current review is out of date: it uses invalid diagnostic criteria and relies on subjective outcomes, despite evidence that lack of blinding significantly inflates effect sizes when subjective measures are used. There is still no mention of this last important issue in the amended review, which continues to give credibility to exercise as a treatment option for people with ME because the null results on objective measures are ignored. One of the stated reasons for withdrawal of a Cochrane review is:
“Serious error in a Cochrane Review. Following the conclusions of the published review could result in harm to patients or populations of interest (other than known adverse effects);” (https://documentation.cochrane.org/display/EPPR/Withdrawing+published+Cochrane+Reviews)
Clinicians who read the amended review and follow the conclusion that exercise might be helpful for people with ME could prescribe it (and still do), even though harm has been documented in numerous studies. The latest survey was done in 2019: https://www.meassociation.org.uk/20...urvey-on-cbt-and-get-in-me-cfs-03-april-2019/
5. It seems the IAG is to provide approval/oversight of a process already started by and ultimately controlled by Cochrane.
6. Why does Karla Soares-Weiser have the ultimate decision making power if the review authors and the IAG disagree? Cochrane appointed you to lead an independent group, but can overrule the group if the authors don’t agree with the advice? I understand Karla could decide in the IAG’s favour, but that would mean the authors could also down tools and we would be back to square one. Why can’t the IAG author the update (or propose a new review title) itself?
All the best
Caroline
Hi everyone
Today I got a response from Hilda - see attached
 


From the response, on why Karla Soares-Weiser will have the final say should it not be possible to come to an agreement:

I very much hope that there will be no serious disagreement, and I am committed to working hard to achieve that. I think the editor-in-chief of the Cochrane Library is the logical arbiter, should there be a disagreement. Authors will know this going in as well, and in participating, will have agreed to the process.

Given the topic, the entrenched views of some authors, and the strength of their vested interests, I am a bit puzzled as to why Hilda is so confident.

As for the authors knowing and agreeing to submit to the process in advance: we know they've made all the right noises at the start before, and when push comes to shove they throw their toys out of the pram and get away with it.

Perhaps I am misunderstanding but this seems naive to me. I am not reassured.
 
Same. I still hope Bastian will do us good but it does not appear that Cochrane is capable, as an organization, of doing this right.

I would frankly much rather they retract everything and never publish a damn thing about ME again, pretend we don't even exist. I have zero confidence in this organization or its capabilities, whatever they are.
 
I agree 100% with this!
 
Has anything been announced about who is going to be on the Independent Advisory Group led by Hilda Bastian, and who is going to carry out the review?
Nope. But the review will be carried out by central Cochrane people, rather than the usual volunteer teams (like Larun and co.) who rock up to propose a review title. I think the group called "Cochrane Response" https://community.cochrane.org/organizational-info/people/central-executive-team/cochrane-response will be doing it. I can't quite remember, but they announced this in the presentation at CMRC in March. This means they are unlikely to have any knowledge of ME, which could be both a good thing and a bad thing. I guess there will be a mix of editor types and statistics/systematic review methods people. They will be neutral and uninterested in the car crash of awfulness that is the old review, as it seems is everyone in Cochrane.
 

The responses from Cochrane are infuriating but this is a solid base of constructive criticism.

First things first: it's going in the right direction, which is a refreshing change from the haphazard collapse into imaginary dimensions that was the starting point of the current disaster.

Still, I can't see a single valid reason to keep the current reviews published, they are simply invalid opinions. Absolutely none. I will remain highly skeptical of Cochrane's motives right until they make the first positive move on that front. You know, the whole "first do no harm" slogan thingy. Nice slogan. I like that slogan. But without backing it with action, it's just that, a nice slogan. As with the law, enforcement is 9/10 of it.
 
This is just a nobody's opinion but even with good intentions I don't see Cochrane as able to do the right thing regardless.

I don't know much about all the details here, but from my metaphorical seat up here in the bleachers what I do see is that Cochrane are too far down the road of embedded corporate interests, so that any product is now just a vehicle for the profit bottom line. All of the science issues that would otherwise matter are, at best, secondary in this model.

Maybe I'm wrong. Would like to be. Would be interesting to know who else might be taking issue/what other problems there might be presently with what Cochrane is churning out.
 
fwiw I've just submitted a comment to the IAG. I hope it helps.

Since others are doing the same, I thought I'd post what I sent to Hilda and the IAG:
I would like to comment on the use of the Chalder Fatigue Scale (referred to below as the Chalder Fatigue Questionnaire or CFQ) in the Cochrane review on Exercise therapy for chronic fatigue syndrome.1

The review states that fatigue is "measured at end of treatment (12‐26 weeks)" and "after 52‐70 weeks" with "3 different versions of the Chalder Fatigue Scale (0‐11; 0‐33, or 0‐42 points)" and that "low score means less fatigue".

This final statement that “low score means less fatigue” necessitates that the fatigue scales used are absolute measures of fatigue, and that there are no inherent changes in the way the scales are used between baseline and outcome.

However, the assumption that low score at outcome means less fatigue is not safe in the case of the CFQ because there may be an inherent flaw in the understanding of the comparison timepoints when the CFQ is used as a diagnostic tool at baseline compared with its use as an endpoint/outcome measure. In addition, different measures of fatigue (CFQ vs FSS) may not be equivalent at outcome, even if they were well correlated at baseline.

There are other points, which have been made by others,2,3 that refer to issues with equating different versions of the CFQ (bimodal vs Likert scoring), which I am not going to address here. That is not to say that those points are not also important.

Background
The CFQ was designed primarily as a diagnostic tool,4 and as such, seems to be useful in the diagnosis of those with a wide range of fatiguing conditions.

Although it has been validated by the tool authors as a diagnostic tool in some populations,5 it has not been adequately (or independently) assessed as an outcome measure or a repeated measures tool.

There are issues with the construction and content of the tool that I would like to comment on, with reference to its use as an outcome measure in particular.

Scaling
First, I will focus on the scaling of the tool. The tool asks the patient to respond to questions about “feeling tired, weak or lacking in energy in the last month” by ticking one of the following answers: “less than usual”, “no more than usual”, “more than usual”, or “much more than usual”.

The scaling of the tool lets it down as an outcome measure, because it is skewed towards “getting worse” rather than being balanced between improvement and worsening of symptoms: there are two degrees of worsening, and only one of improvement. This makes it difficult to record incremental improvements, which then affects how the tool can be used to compare results between individuals.

This creates a problem for both researcher and patient when trying to record their perceived progress.

I have been told of instances in which patients have been inadvertently coached to complete the tool in a particular way to suggest improvement where there has been no improvement. I don’t believe this is necessarily done deliberately or fraudulently, just that it may seem to be the right thing to do to get around the failings of the instrument.

For example, at outcome, a patient may be told to compare themselves with the start of the trial or treatment, rather than the start of their illness. If there has been no change in their condition, the patient will record a score of 11 ("no more than usual" on all items), which on the face of it seems entirely reasonable, but will result in an improvement being logged if their original (baseline) score was higher than this, which it will be if, for instance, a score of 18 is required for inclusion in a trial.

There then only needs to be a slight imbalance between groups for this to have a substantial effect on any differences reported in a trial setting.
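The arithmetic behind this artefact can be sketched in a few lines of Python. This is purely illustrative: the 11-item Likert scoring (each response 0-3, total 0-33) is taken from the review and the scale itself, but the names `LIKERT` and `cfq_likert_score` are mine, and the baseline of 18 is simply the trial-entry threshold used as an example above.

```python
# Illustration of the "no change scores 11" artefact of the Likert-scored CFQ.
# Each of the 11 items is scored 0-3 on the Likert scoring, giving a 0-33 range.
LIKERT = {
    "less than usual": 0,
    "no more than usual": 1,
    "more than usual": 2,
    "much more than usual": 3,
}

def cfq_likert_score(responses):
    """Sum the Likert weights over a list of 11 item responses."""
    assert len(responses) == 11
    return sum(LIKERT[r] for r in responses)

# Example: trial entry required a baseline score of 18 or more.
baseline = 18

# At outcome, a patient whose condition is unchanged, but who compares
# themselves with the start of the trial rather than with when they were
# last well, ticks "no more than usual" on every item:
outcome = cfq_likert_score(["no more than usual"] * 11)

print(outcome)             # 11
print(baseline - outcome)  # a 7-point "improvement" despite no actual change
```

As the comment letter notes, only a slight imbalance between trial arms in how patients interpret the comparison point is needed for this to show up as a between-group difference.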

Ceiling effect
The ceiling effect may produce a similar result, in that the recording of a maximum score at baseline may affect how the tool is completed on subsequent occasions.

If a patient scores the maximum (score of 33), or close to it, at baseline, and their condition then worsens, there is a temptation to reset (because of the scaling limitations of the tool) and use the start of the trial as the comparison point for subsequent form completions. If their fatigue has then got slightly worse (“more than usual”, but not “much more than usual” on most items), the recorded lower score will give the false impression of improvement when there has in fact been a slight worsening of their condition.
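The ceiling scenario can be put in the same illustrative terms (again a sketch, with the same assumed helper name and the same published 0-33 Likert scoring; the specific scores are hypothetical):

```python
# Illustration of the ceiling-effect artefact: a patient at the ceiling who
# worsens can nonetheless record a lower (apparently improved) score.
LIKERT = {
    "less than usual": 0,
    "no more than usual": 1,
    "more than usual": 2,
    "much more than usual": 3,
}

def cfq_likert_score(responses):
    """Sum the Likert weights over a list of 11 item responses."""
    assert len(responses) == 11
    return sum(LIKERT[r] for r in responses)

# At baseline the patient is at the ceiling, comparing with when they
# were last well:
baseline = cfq_likert_score(["much more than usual"] * 11)  # 33

# Their condition then worsens slightly. Having hit the ceiling, they
# "reset" and compare with the start of the trial instead, ticking
# "more than usual" (but not "much more than usual") on every item:
outcome = cfq_likert_score(["more than usual"] * 11)  # 22

# An 11-point drop is recorded - a large apparent improvement -
# even though the patient has actually got worse.
print(baseline - outcome)  # 11
```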

Inconsistency and ambiguity in timepoints
The questionnaire text mentions at least three separate timepoints in describing how the patient should complete it. They are asked “about any problems… in the last month”, and the questions relate to what is “usual”. In addition, there is a conditional clause referring to “a long while” ago, and “when you were last well”. The use of a conditional clause in particular (“If you have been feeling tired for a long while, then compare yourself to how you felt when you were last well”) means that it is crucial to know what comparison point is being used by the patient each time they complete the form.

It is a shame that the tool's devisers did not foresee this as an issue, because it would have been easy to make a record of this comparison point on the form itself.

There is also a subjective judgement to be made about how “for a long while” should be interpreted. The use of multiple time periods in the questionnaire (“last month”, “for a long while”, “usual”) increases the ambiguity, and makes it very hard to assume that every patient will have interpreted it in the same way.6 The researchers have also assumed that there is an equivalence between what was experienced a long time ago and what is “usual”, which again may be problematic. What is usual for one’s condition over the past month (or in the last month) may not be usual for the period before the illness arose. Then having to extrapolate that across multiple timepoints over which the CFQ is used in the following months of a trial will add further layers of complexity.

There may also be the possibility that the patient makes a comparison with how they felt at the beginning of the last month, particularly if their condition has fluctuated during that period.

For these reasons, I do not believe that it is safe to assume that every patient will be comparing themselves to when they were last well, or even that “when you were last well” is equivalent between patients, on every occasion that they complete the form. For example, does “when you were last well” actually mean “before the illness arose”, or could it also be interpreted as “during my last period of remission” or “before my last episode of PEM (post-exertional malaise)”?

Change in fatigue versus absolute fatigue
Ultimately, the CFQ measures change in fatigue each time it is used. It does not provide an absolute measure of fatigue, which makes comparison between timepoints problematic.

By comparison, the Fatigue Severity Scale (FSS), which uses a visual-analogue scale of fatigue symptoms over the course of a week, provides a more absolute score, and patients are not required to make comparisons between one timepoint and another when completing the questionnaire. Asking patients to consider their experience over just the past week is conceptually easier than having to remember an average over the course of a month.6 The FSS has other issues with regard to the content of the question items that may bias responses when exposed to certain interventions that I won’t discuss further here.

I would hypothesise that if both scales (CFQ and FSS) were used together in a non-intervention trial in naïve patients with stable fatiguing conditions over a period of some months, the scales would broadly correlate at baseline, but would be found to diverge some months later when used as an outcome measure. I would predict that although both scales will record high scores at baseline, at outcome, only the FSS will maintain those scores, and the CFQ would start to tend towards scores of 11 as patients record what is "most usual", unless they have been specifically told to do otherwise.

If such a trial were to make this finding, then it would confirm that the CFQ is not safe to use as an outcome measure or repeated measures tool without some form of modification to address these issues.

References
1. Larun L, Brurberg KG, Odgaard-Jensen J, Price JR. Exercise therapy for chronic fatigue syndrome. Cochrane Database of Systematic Reviews 2019, Issue 10: CD003200. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003200.pub8/

2. Comments on the review made by Tom Kindlon and Robert Courtney. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003200.pub8/read-comments

3. Wilshire C, Kindlon T, Matthees A, McGrath S. Can patients with chronic fatigue syndrome really recover after graded exercise or cognitive behavioural therapy? A critical commentary and preliminary analysis of the PACE trial. Fatigue: Biomedicine, Health & Behaviour 2017; 5: 43-56 https://www.tandfonline.com/doi/full/10.1080/21641846.2017.1259724

4. Chalder T, Berelowitz G, Hirsch S, Pawlikowska T, Wallace P, Wessely S. Development of a fatigue scale. Journal of Psychosomatic Research 1993; 37, 147–153 [PubMed]

5. Cella M, Chalder T. Measuring fatigue in clinical and community settings. Journal of Psychosomatic Research 2010; 69, 17–22 [PubMed]
Note – Cella & Chalder only looked at discriminative validity in a GP population and only as a diagnostic tool. Morriss et al. (1998) looked at construct validity, deciding that the 11-point questionnaire was the better tool. Morriss RK, Wearden AJ, Mullis R. Exploring the validity of the Chalder Fatigue scale in chronic fatigue syndrome. Journal of Psychosomatic Research 1998; 45: 411-417 [PubMed]

6. Streiner DL, Norman GR. Ambiguity. Chapter 5: Selecting the items. In: Health measurement scales: a practical guide to their development and use (3rd edn). Oxford: OUP, 2003: p62.
Note – See also the subsequent chapter (6) on Biases in responding – particularly with regard to asking respondents to recall how they felt a while ago.
 
@Lucibee would you be ok with me copying & printing that out/showing it to people please? A while back I was trying to explain the problems with the CFQ to a friend who is a scientist, but I struggled. Would appreciate being able to give them this - it's such a succinct explanation, written in a professional manner that adds credibility - lol my statement that 'it's a useless piece of rubbish' didn't really fly! :D
 