Caroline Struthers' correspondence and blog on the Cochrane Review 'Exercise therapy for chronic fatigue syndrome' (2017 and 2019), Larun et al.

Yeah, this response makes me think that Cochrane is even worse than I'd feared and leaves me dreading whatever they're going to do next. So many annoying and misguided things about it. I'm going to have to re-read it with lowered expectations so I can avoid being distracted by my eyes rolling back into my head.

I think we should pretty much assume that no help will come from the gatekeepers of medicine. It shouldn't stop anyone from trying, at least it gets it all on record and will matter in the long term, but Tovey's responses were basically a politician's non-responses. It's simply not a serious process because they don't think it needs to be. They only use the tools and institutions of medicine to give them the authority to do what they want, not because they think it's medically relevant. All the power of medical authority with none of the accountability, a classic recipe for disaster.

Medicine needs massive reform for patients' rights. There is no due process, no ability to appeal, no evidence is necessary to convict. Evidence can be withheld or arbitrarily dismissed. It can even be asserted without evidence based on eminence. And here we have clear abuse of authority leading to a massive human rights failure because the standards and ethics of medicine can be waived off at will if the majority opinion is "I just don't believe in these new mumbo jumbo diagnoses".

These are supposed to be the people helping us. They are doing exactly the opposite, acting as oppressive forces denying us basic human rights. We don't have a messaging problem, we have a sabotage-from-within problem.
 
We patients currently have no political power, so those with power do with us as they please.

One solution would be to acquire political power. That would be much easier if we worked with other patients who are in a similar situation. Then we could create a funding system where patient representatives can prevent further funding of BPS garbage research.
 
I am struggling to understand now why Tovey has gone through this whole process with Larun and rejected her resubmissions following complaints and critiques.

From this recent exchange he seems to be rather confused about why he has taken the current actions.

Anyone?

I think Tovey was beginning to understand how problematic it would get for Cochrane to quadruple-check that they stand behind this research, and how it will play out when the house of cards tumbles. Their credibility will take a huge hit; the failure is on a scale that they don't really comprehend yet, but they know it will be bad.

But then Cochrane was reminded how much political influence can hurt them. When the former head of the RCGP basically calls you a wet rag in public, knowing her influence, by herself and in addition to her rotten husband, then you perk up and listen hard. The Reuters hit job was a shot across the bow, and the message was received and digested.

Political influence made Tovey cave. This is as clear as it gets. He tried to mitigate the damage but his unhelpful beliefs about his "independence" were corrected.
 
To use the argument that it is simply a matter of opinion is appalling.

If Cochrane wants to be taken seriously, they should be standing for the highest levels of scientific methodology.

How independent were the methodologists and were they perhaps psychiatric/psychological methodologists for whom such methodological sleight of hand is the norm, is acceptable, is the basis of their careers?

Cochrane seems to enjoy the top spot right now as far as being taken seriously. Until there is a paradigm shift this won't hurt their standing, so they're resting on their laurels.

Of course with hindsight this will look catastrophically bad but most medical professionals dislike us anyway so for now we can be entirely shut out, even harmed, without consequences. The SMC-Reuters stunt probably had more influence than our decades of advocacy and all the biomedical research. Politics control this issue, not science, not our welfare and especially not our rights.
 
Struthers: 5. The PACE trial is now being used as a teaching example of how not to do a randomised trial.

Tovey: Please see above. Many of the criticisms of the PACE study, including reported COI of the researchers, and changing outcome measures prior to analysis of the results, are not unique to this study.

Struthers: 3. Using only subjective outcomes in unblindable trials is terrible methodology. This is even more inexplicable in a review. And avoidable. Some of the included trials, including PACE, did measure objective outcomes, such as employment levels. They didn’t support the positive findings from subjective measures.

The review was conducted in a manner that was consistent with the review protocol. In fact, objective measures such as resource use were reported when they were identified. To describe this as ‘terrible methodology’ is simply an opinion and it is not shared by independent methodologists who we consulted, or the Cochrane Handbook. It is common for Cochrane Reviews to use patient-reported outcomes as primary outcomes for chronic conditions. When the primary complaint is a private experience (e.g., pain, fatigue, anxiety) the most appropriate outcome is frequently the direct observation and report of patient experience by the patient, for whom it matters most. Obviously, there are limitations and risk of bias.

my bolding.

Eminence based medicine. And 'everybody does it so it's good science'.

Bah humbug. Cochrane should be ashamed of themselves.
 
I think a lot of psychiatric conditions are about people having distorted self perceptions. So psychs live in a world where the way to cure their patients is to change their self perceptions. And the way they assess their patients' progress is to ask them questions which reveal their prevailing self perceptions. So far as that goes, there is a lot of validity in that, though patients are hopefully also asked more objective questions as well. e.g. If someone is acutely introverted and withdrawn, then checking if they are getting out and meeting people might be a more objective progress check, together with subjective questions.

It may also be true I think, that for people whose psychiatric condition is about self perceptions, simply knowing what treatment you are getting, and how good it's reputed to be, could very likely positively bias your self perceptions; the bias forms part of the treatment, and validly so! Anything that positively influences the person's self perceptions, is a potential treatment component.

So for many psychiatrists the combination of subjective outcomes with unblinded treatments is probably not just the norm, but a prerequisite; they are probably surprised at all the fuss. I also wonder how this approach serves their patients whose problems are not rooted in self-perception issues.

Their biggest mistake has been to gate crash into the realm of physical illnesses, whilst unthinkingly presuming their belief system maps onto it, and believing it to be science.

That, I believe, is the universe many psychiatrists exist within, and for them it is their science; they cannot get their heads round anything else. And I think they carry an awful lot of influential people along with them, including people with less scientific training and rigour, who are therefore incapable of distinguishing fact from opinion.

So I think this is what we are up against, and have to overcome. And the first step to solving a problem is understanding the problem.

I find it somewhat ironic that the problem we face is one of changing the distorted perceptions, of those belonging to a discipline which includes in its remit, curing people of having distorted perceptions.
 
Struthers: 3. Using only subjective outcomes in unblindable trials is terrible methodology. This is even more inexplicable in a review. And avoidable. Some of the included trials, including PACE, did measure objective outcomes, such as employment levels. They didn’t support the positive findings from subjective measures.

The review was conducted in a manner that was consistent with the review protocol. In fact, objective measures such as resource use were reported when they were identified. To describe this as ‘terrible methodology’ is simply an opinion and it is not shared by independent methodologists who we consulted, or the Cochrane Handbook. It is common for Cochrane Reviews to use patient-reported outcomes as primary outcomes for chronic conditions. When the primary complaint is a private experience (e.g., pain, fatigue, anxiety) the most appropriate outcome is frequently the direct observation and report of patient experience by the patient, for whom it matters most. Obviously, there are limitations and risk of bias.

I was going to include a reference to the Cochrane Handbook in my blog (thanks for the kind comments btw!), but changed my mind. In fact, I now realise the quotes I was going to use came from the AHRQ (US) systematic review guidance, which mentions the Cochrane Handbook, rather than the CH itself.

In particular, in Table 2 - Taxonomy of core biases in the Cochrane Handbook, they delineate each of the key biases.

For example: Performance bias
Systematic differences in the care provided to participants and protocol deviation.
Examples include contamination of the control group with the exposure or intervention, unbalanced provision of additional interventions or co-interventions, difference in co-interventions, and inadequate blinding of providers and participants.

However, I've just checked the Cochrane Handbook (2012 version) and bias toolkit, and they only mention blinding under performance bias, not contamination, unbalanced provision, or differences in co-interventions.

Then for detection bias:
Systematic differences in outcomes assessment among groups being compared, including systematic misclassification of the exposure or intervention, covariates, or outcomes because of variable definitions and timings, diagnostic thresholds, recall from memory, inadequate assessor blinding, and faulty measurement techniques. Erroneous statistical analysis might also affect the validity of effect estimates.

Risk of bias assessment criteria: Blinding of outcome assessors, especially with subjective outcome assessments, bias in inferential statistics, valid and reliable measures.

But this quote in the AHRQ guidance stood out to me:
A critical task that reviewers need to incorporate within each review is the careful identification and recording of likely sources of bias for each topic and each included design. Reviewers may select specific criteria or combinations of criteria relevant to the topic. For instance, blinding of outcome assessors may not be possible for surgical interventions but the inability to blind outcome assessors does not obviate [prevent] the risk of bias from lack of blinding. Reviewers should be alert to the use of self-reported or subjective outcome measures or poor controls for differential treatment in such studies that could elevate the risk of bias further.

I am looking for a similar passage in the Cochrane Handbook, because it really ought to be there - and if it isn't, then that really is "terrible"!

[The Cochrane Handbook is a devil to search for anything though - aargh!]
 
Found it! (From Cochrane Handbook v 5.1)
12.2.2 Factors that decrease the quality level of a body of evidence
We now describe in more detail the five reasons for downgrading the quality of a body of evidence for a specific outcome (Table 12.2.b). In each case, if a reason is found for downgrading the evidence, it should be classified as ‘serious’ (downgrading the quality rating by one level) or ‘very serious’ (downgrading the quality grade by two levels).

1. Limitations in the design and implementation: Our confidence in an estimate of effect decreases if studies suffer from major limitations that are likely to result in a biased assessment of the intervention effect. For randomized trials, these methodological limitations include lack of allocation concealment, lack of blinding (particularly with subjective outcomes highly susceptible to biased assessment), a large loss to follow-up, randomized trials stopped early for benefit or selective reporting of outcomes.
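The downgrading rule in that Handbook passage is mechanical enough to sketch in code. Below is a hypothetical illustration of the GRADE logic as section 12.2.2 describes it (the function name and structure are my own, not anything Cochrane publishes): randomised trials start at 'high', and each 'serious' concern drops the rating one level, each 'very serious' concern two.

```python
# Hypothetical sketch of the GRADE downgrading logic described in
# Handbook section 12.2.2. Names and structure are my own, not Cochrane's.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(study_design, concerns):
    """Start at 'high' for randomised trials ('low' otherwise) and
    downgrade one level per 'serious' concern, two per 'very serious'."""
    level = 3 if study_design == "randomized" else 1
    for severity in concerns:
        level -= {"serious": 1, "very serious": 2}[severity]
    return LEVELS[max(level, 0)]

# An unblinded trial with subjective outcomes ('very serious') plus
# selective outcome reporting ('serious'):
print(grade_quality("randomized", ["very serious", "serious"]))  # -> very low
```

On this reading, the two limitations Struthers highlights would already be enough to drop a trial from 'high' all the way to 'very low'.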
 
1. Limitations in the design and implementation: Our confidence in an estimate of effect decreases if studies suffer from major limitations that are likely to result in a biased assessment of the intervention effect. For randomized trials, these methodological limitations include lack of allocation concealment, lack of blinding (particularly with subjective outcomes highly susceptible to biased assessment), a large loss to follow-up, randomized trials stopped early for benefit or selective reporting of outcomes.

Found it! (From Cochrane Handbook v 5.1)
That's just a matter of opinion!
 
Tovey also claims that “objective measures such as resource use were reported when they were identified.” That doesn’t seem right.
He also suggests that Cochrane followed their own protocol but one of the issues that Bob raised was that they failed to follow their own protocol. So I'm not sure that Tovey has a great grasp of the issues.

There is a more serious issue that he hasn't mentioned: their peer review process is clearly not very good, in that a review got through that they are now having to deal with and rewrite. This suggests organisational issues that they aren't going to tackle, which could lead to problems in the future.
 
So what's the correct rating for the PACE trial?

Er, not to ask you to do a whole review on your own in the next 2 minutes... but can we downgrade it by 2 levels?

There are additional problems with PACE, including use of the Oxford criteria (no studies based on the Oxford criteria are relevant to ME/CFS), outcome switching, and sending a pro-GET/CBT newsletter mid-study to participants. These represent research malpractice, all of which surely requires excluding PACE data from the analysis before you get to any rating of susceptibility to bias.

Any reasonable review of exercise data in ME/CFS should exclude all studies using the Oxford criteria, and also exclude any demonstrating research malpractice (i.e. PACE), which should not even have been published in any respectable scientific journal, before you even get to assessing the reliability of the data included.

In this review I don't think anything remains that can reliably be said to relate to ME/CFS just to the symptom of 'chronic fatigue'. This is a matter of objective truth not just opinion.

If the review was rewritten as the use of exercise in patients with 'chronic fatigue' (not ME/CFS), there would be some studies left to include other than PACE. But surely even then the use of subjective measures in unblinded trials should give its findings the lowest level of reliability, and it should include the disappearance of any effects on long-term follow-up and an analysis of the null results for objective data where available.
 
I'm probably wishing for the moon, but I still think that a different group made up of independent clinicians/scientists and patients should and could submit a better, up-to-date review.
 
He also suggests that Cochrane followed their own protocol but one of the issues that Bob raised was that they failed to follow their own protocol. So I'm not sure that Tovey has a great grasp of the issues.

Failure to follow the protocol is the least of its problems.

The thing about protocols and guidelines and toolkits and checklists is that they are there to enable comparability, to make sure that things are reported in the same way. They are not there to guarantee the quality of a study or to ensure that it is done properly (although they might indicate problems to be investigated if not followed correctly). I see this time and time again, and in many different circumstances: folks think that just because they "followed the protocol", everything is fine. It isn't. And it certainly isn't if the protocol was flawed to start with.

However, making changes to a flawed protocol to make it less flawed should be a good thing. And in some senses the PACE authors did that, by listening to some of the criticism levelled at them after they published the trial protocol. However, not all the changes they made were good - for example, changes to the outcome thresholds. And of course, publishing a protocol 3 years into the study is utterly useless if the flaws are more substantial - like using subjective endpoints in an unblinded study - there's no way you can undo that!

The problem with the Cochrane protocol was that it was based on the flawed protocols of all the studies included, so it simply replicated them. It then failed to adequately compare the studies included. The most serious problem was the failure to scrutinise the use of the measures (particularly the CFQ), and the assumption that they could even be combined across completely different trial populations. The need to split the analyses into bimodal and Likert(ish) scoring, and the different results they got as a result, should have alerted them to that.
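The bimodal/Likert point can be made concrete with a toy example. The sketch below uses made-up responses and my own function names; it shows how the same 11-item CFQ answers can register a sizeable change under Likert scoring and no change at all under bimodal scoring:

```python
# Hypothetical illustration of why bimodal vs Likert scoring of the
# 11-item Chalder Fatigue Questionnaire (CFQ) can diverge. Responses
# are per item: 0 = "less than usual" ... 3 = "much more than usual".

def cfq_likert(responses):
    """Likert scoring: sum the raw 0-3 item scores (range 0-33)."""
    return sum(responses)

def cfq_bimodal(responses):
    """Bimodal scoring: 0/1 collapsed to 0, 2/3 collapsed to 1 (range 0-11)."""
    return sum(1 if r >= 2 else 0 for r in responses)

# A made-up patient who improves slightly on every item:
before = [3] * 11                  # maximally fatigued
after  = [2] * 11                  # a little better across the board
print(cfq_likert(before) - cfq_likert(after))    # Likert: 11-point change
print(cfq_bimodal(before) - cfq_bimodal(after))  # bimodal: no change at all
```

The reverse can also happen: a shift from 2 to 1 on every item moves the bimodal score by the maximum 11 points while the Likert score moves by the same 11 points out of 33, so the two scorings weight identical response changes very differently.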
 
Does the Cochrane Handbook say anywhere that trials which aren’t randomised controlled trials shouldn’t be included in their reviews?

Sorry, I realised I could look this up myself.

I found this in the Cochrane Handbook (version 5.1):

5.1.2 Eligibility criteria
One of the features that distinguish a systematic review from a narrative review is the pre-specification of criteria for including and excluding studies in the review (eligibility criteria). Eligibility criteria are a combination of aspects of the clinical question plus specification of the types of studies that have addressed these questions. The participants, interventions and comparisons in the clinical question usually translate directly into eligibility criteria for the review. Outcomes usually are not part of the criteria for including studies: a Cochrane review would typically seek all rigorous studies (e.g. randomized trials) of a particular comparison of interventions in a particular population of participants, irrespective of the outcomes measured or reported. However, some reviews do legitimately restrict eligibility to specific outcomes. For example, the same intervention may be studied in the same population for different purposes (e.g. hormone replacement therapy, or aspirin); or a review may address specifically the adverse effects of an intervention used for several conditions (see Chapter 14, Section 14.2.3).

The above criteria do not appear to exclude uncontrolled randomised trials. Is this a flaw of the Cochrane eligibility criteria?

(I’m feeling a bit out of my depth in this discussion so I hope I’m not asking stupid questions!)

Thanks by the way @Lucibee for the great blog post.
 
The above criteria do not appear to exclude uncontrolled randomised trials. Is this a flaw of the Cochrane eligibility criteria?

Not necessarily for systematic reviews in general, but they were doing a meta-analysis and combining results from lots of different studies, which is going to be really tricky to interpret unless there is some element of control between groups and within and between studies (even if it's not as far as placebo control).

In the review, they didn't seem to provide any sort of control over the different types of GET used, how long the treatment lasted, how intensive it was, or whether it was GET or GAT.

Again, it's not so much about rules or criteria - it's about their understanding of what they are doing and why, and whether they can justify that.
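To see why pooling results across very different trials is tricky, here is a minimal random-effects sketch (DerSimonian-Laird pooling with Cochran's Q and I² as heterogeneity measures; the effect sizes are made up and do not come from the review). A high I² flags that the trials may not be estimating the same thing, which no amount of pooling machinery fixes.

```python
# Minimal sketch (illustrative, made-up data) of DerSimonian-Laird
# random-effects pooling, with Cochran's Q and I^2 as heterogeneity measures.

def pool_random_effects(effects, variances):
    """Return (pooled effect, I^2 as a percentage) for per-study
    effect estimates and their variances."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    # Between-study variance (DerSimonian-Laird moment estimator):
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, i2

# Three hypothetical exercise trials with very different effect sizes:
pooled, i2 = pool_random_effects([0.1, 0.6, -0.2], [0.04, 0.05, 0.06])
print(f"pooled effect {pooled:.2f}, I^2 = {i2:.0f}%")
```

With effects this inconsistent, I² comes out well above 50%, i.e. substantial heterogeneity; the pooled number still gets computed, but it is averaging studies that disagree with each other.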

a Cochrane review would typically seek all rigorous studies (e.g. randomized trials) of a particular comparison of interventions in a particular population of participants, irrespective of the outcomes measured or reported.

I do think this is a problematic statement though. Just because a trial is randomised, doesn't necessarily make it rigorous, and yet that's what folks tend to believe. Look! It's a randomised trial! Everything it says must be true!

There are plenty of ways to completely mess up a trial after you've done the randomisation bit.
 