2025: The 2019/24 Cochrane Larun review Exercise Therapy for CFS - including IAG, campaign, petition, comments and articles

George Monbiot has shared Bastian's blog on Bluesky together with a thread:

This is deeply shocking and disturbing, the opposite of scientific good practice. As I see it, a group of diehards promoting a discredited treatment (exercise "therapy" for ME/CFS patients) are seeking to stifle medical progress - to protect their reputations. And Cochrane has kowtowed to them.

The result of their concerted reputation-washing is that patients continue to be abused and subjected to treatments that make their condition worse. However grand and eminent scientists may be, their reputations must always take second place to the evidence. They need to admit they got it wrong.

Instead, a massive propaganda effort, led by the Science Media Centre and lapped up by credulous journalists, has protected their discredited claims from criticism, at a devastating cost to patients. *There is no place for grandeur in science*.

It seems horribly symbolic of what is happening in other areas of public life: powerful interests override the public good, evidence and reason. The last places where this dynamic should operate are science and medicine. But even here, eminent bodies succumb to the pandemic of grovelling.

 
It’s so confusing because she’s a sort of celebrity “advocate” for people with Long COVID, but she has a terrible track record on ME.

For those like me who struggle to remember who people are, or for people from outside the UK, Wikipedia says:

Patricia Mary Greenhalgh (born 11 March 1959) is a British professor of primary health care at the University of Oxford, and retired general practitioner.
 
Interesting that MEAction are focusing solely on the editorial note rather than withdrawal, stressing their link to the IAG and avoiding any mention of our campaign for withdrawal of Larun et al.

It may be that they regard the editorial note as the most realistic goal, though it seems to me something of a cop-out, in that it gives Cochrane a way of putting the issue to bed if the outcry gets too embarrassing, without addressing the central problem of the inherent bias in the use of subjective outcomes in unblinded trials. Both @Hutan and I reference these wider issues in our comments on the MEAction blog. I wonder, given our comments are still lurking in moderation (echoes of Hilda’s moderating strategy), if our demand for withdrawal of the old review is a hot potato for them too.
Yes, I would really like to understand why MEAction (largely Jaime I assume) and Hilda think that the case is not met for withdrawal of the review.

All I can think of, given Cochrane's rules around withdrawal, is that they think the contention that real harms arise from the review has not been sufficiently proven. I wish they would cooperate and let us know what they are thinking, so that we can either find a way to make our case for withdrawal more compelling and convincing, or change our thinking to agree with them.
 
The go-to professor on stats at my medical faculty is a co-author on a study with just this problem (on ME patients to boot).

Worth remembering that statisticians tend not to understand anything about the blinding of trials. Blinding is there to solve the problem of human nature entering into measurements. Statisticians are often not that good on human nature. Medical students get steeped in it from their first clinical year!
 
Which means that the choice of criteria is relevant because of the possibility of studying a non-generalizable selection of the CFS population.

The choice of criteria is relevant, but if the criteria are wider than the set of interest, the results are still valid for that set unless there is evidence to the contrary.

In the end the analysis is a statistical one: what is the most likely probability that a result in one study applies to another set of patients, based on all available knowledge. If you say, 'Ah, but the cohort might not be representative on factor x', without evidence, then that is special pleading, and a statistical analysis explicitly disallows special pleading because it gives the prediction considered most likely when all current knowledge is taken into account.

You can certainly raise concerns about a subgroup not being represented adequately by a larger group. But then what you do is stratify by subgroup and re-analyse. To their credit, the PACE authors did this and showed no difference between results for the wider group and for a more specific ME/CFS group. They didn't point out that there were no eggs in either box but they did what they could be expected to do.
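To make that concrete, here is a minimal sketch (Python, with entirely made-up illustrative numbers, not data from PACE or any real trial) of what stratifying by subgroup and re-analysing looks like: estimate the treatment effect in the whole cohort, then separately within the narrower-criteria stratum and the broad-only stratum, and compare.

```python
# Minimal illustration of stratified re-analysis (hypothetical data, not PACE).
import numpy as np

rng = np.random.default_rng(0)
n = 600

# Hypothetical trial: treatment arm, and whether each participant also meets
# a narrower ME/CFS case definition on top of the broad entry criteria.
treated = rng.integers(0, 2, n).astype(bool)
narrow_case = rng.random(n) < 0.4
outcome = rng.normal(0, 10, n) + 2.0 * treated  # assumed small overall effect

def arm_difference(mask):
    """Mean outcome difference (treated - control) within a stratum."""
    return outcome[mask & treated].mean() - outcome[mask & ~treated].mean()

print("Whole cohort effect:    ", round(arm_difference(np.ones(n, bool)), 2))
print("Narrow-criteria stratum:", round(arm_difference(narrow_case), 2))
print("Broad-only stratum:     ", round(arm_difference(~narrow_case), 2))
```

If the stratum-specific estimates agree with the whole-cohort estimate, the 'non-representative subgroup' objection loses its force; if they diverge, you have actual evidence for it.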
 
While we are taught about double blinding, the cop-out when the participant cannot be blinded is the same one the BPS people use: "It's difficult", and it's left at that. Nothing about how using objective outcomes could reduce the issue, or about subjective outcomes being an additional issue. And since everyone is doing it, it must be fine.

Yes, but, as my old boss used to say, even a policeman could work out that 'it's difficult' is no argument and that objective results are the solution. At that stage it is simply a question of assuming that young doctors have basic common sense.
 
Is it because, as some researchers have tried to argue, ME/CFS is an inherently subjective experience that can only be measured by patient self-report, or because the objective outcomes fail to provide the desired result? Also, strangely, the researchers are reluctant even to acknowledge the existence of patient-reported harms.

It is simply because they are biased and dumb, let's face it.
 
I seem to recall a table with "rules" in Norwegian, but I can't find it again. As I recall, it went something like this for when something is seen as reliable:
Two large cohort studies that point in the same direction
One cohort study can be replaced by five case-control studies

There is no justification for using rules based on what other people think. Someone assessing reliability has to understand for themselves why things are reliable. It isn't difficult; it is almost entirely common sense. But rules can only ever be an approximation to what some other people think, and in science you never go by what other people think anyway.
 
I so do not understand where Trisha Greenhalgh is on this. I haven't been following closely, but she seemed to be prejudiced against people with ME/CFS, blocking reasonable people on Twitter. Yet she argues against people with Long Covid being subjected to BPS ideas? And now she is retweeting Jacqui Wise's good article in the BMJ that is supportive of people with ME/CFS?
I suspect her views re Cochrane are not so much about ME/CFS but are shaped by other events, such as the mask review and this:

https://blogs.bmj.com/bmj/2018/09/17/trish-greenhalgh-the-cochrane-collaboration-what-crisis/
 
If they fix their methodology, but don’t fix the inclusion criteria, we might end up with robust studies that are still wrong.

No, if the methodology is OK the studies will be right. They may not tell you what you want to know, but they will be right on their own terms. You cannot force other people to use the diagnostic criteria you prefer. Criteria are always an arbitrary choice based on personal opinion, even if it is the opinion of a gaggle of people (with another gaggle disagreeing).
 
Yes, I would really like to understand why MEAction (largely Jaime I assume) and Hilda think that the case is not met for withdrawal of the review.

The contrary side of my brain thinks it might be better to have a note saying it's outdated and unreliable than allowing Cochrane to save face further down the line by disappearing it altogether.
 
There are also a number of medical professionals who get, or know someone very well who has got, ME/CFS-like Long COVID (and so are inherently sympathetic to that) but who cannot shake their long-held, ingrained views of "ME activists".

Perhaps another way of resolving the Cochrane debacle would be to focus on the fact that their post-2019 withdrawal policy isn't actually appropriate for systematic reviews at all. They seem to have created a policy that aligns more with COPE guidelines and retraction policies at other journals, but what outdated, dubious or low-quality systematic reviews need is formal deprecation rather than retraction in the classical sense; I think HB alludes to this in her blog when she says that systematic reviews are generally considered outdated after 5 years. Perhaps Cochrane needs a deprecation policy separate from the withdrawal policy?

If there is one potential benefit that comes from all of this, it is that people with significant roles in influential organisations are becoming aware of the scope & magnitude of the psychobehaviouralists' behind-the-scenes campaigning and influence-peddling. It happened with the NICE guideline, now it is happening with Bastian/Cochrane. What gets you the Maddox prize once may seem unseemly now - the culture seems to be changing, yet their playbook hasn't.
 
Here's the asthma study @Medfeb
https://www.nejm.org/doi/full/10.1056/nejmoa1103319

[Attachment 25123: figure]
Subjective improvement, from the left: inhaler with asthma drug, sham inhaler, sham acupuncture, no intervention. Only "no intervention" does not "work".

[Attachment 25124: figure]
Objective improvement, same order of interventions. Only the inhaler with the asthma drug improved breathing.

Basically, if you have a poor trial design, with respect to outcomes and controls, the results are worthless and misleading. If you have a wide selection, you can still say something, possibly useful, about that wide group that you selected. And, if your trial is big enough, you can do some post hoc analysis to work out what trials would be useful to do next to deal with subgroups with different responses.
It's really striking how closely the controls match the real intervention on the subjective outcome. Almost the same.

This right there is what 'the placebo' is. It's 'real' in the exact same sense as the psychobehavioral ideology claims our symptoms are. It is a real effect. It's just not the effect that is being described. It's simply shoddy methodology in a field where shoddy methodology has long been accepted, especially when it gives you a false positive, aka hopium for physicians.
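As a purely illustrative sketch (simulated numbers chosen to show the pattern, not the NEJM asthma data), this is what that looks like: the subjective outcome shifts under every intervention that feels like treatment, while the objective outcome only moves with the real drug.

```python
# Illustrative simulation (NOT the NEJM asthma data): subjective outcomes in
# unblinded/sham conditions respond to any credible-feeling intervention;
# objective outcomes only respond to the genuinely active treatment.
import numpy as np

rng = np.random.default_rng(1)
n = 40  # hypothetical participants per arm

arms = ["albuterol", "sham inhaler", "sham acupuncture", "no intervention"]
# Assumed true effects: expectation shifts self-report in the three
# "treatment-like" arms; only the real drug changes lung function.
subjective_shift = {"albuterol": 45, "sham inhaler": 45,
                    "sham acupuncture": 45, "no intervention": 20}
objective_shift = {"albuterol": 20, "sham inhaler": 7,
                   "sham acupuncture": 7, "no intervention": 7}

for arm in arms:
    felt = rng.normal(subjective_shift[arm], 10, n).mean()  # self-reported improvement
    fev1 = rng.normal(objective_shift[arm], 5, n).mean()    # % change in lung function
    print(f"{arm:16s} subjective {felt:5.1f}   objective {fev1:5.1f}")
```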
 
@Medfeb, could you help us understand why Hilda and MEAction talk primarily about the diagnostic criteria as being the fault with BPS trials and ignore that trial design issue that so many of us see as the real problem? I find it such a puzzle that we aren't on the same page about this.
From memory, though it's long in the past, I don't think Bastian agrees with that being a problem either. Accepting it would pretty much kill Cochrane's business model and would be massively embarrassing to the medical profession, especially after decades of overt hostility in which this 'controversy' has consisted mainly of insisting that this pseudoscience may as well be the work of gods, indisputable and irrefutable.

MEAction can definitely do better. I'm sure they get it. Maybe they just mirror Bastian's argument.
 
The problem isn’t that they are using other criteria; the problem is that they are trying to generalize results from their wider criteria to the population meeting the narrower criteria.
Adding to this, though, the more general problem is that even in the broader population the results are so mediocre that they rarely reach statistical significance, and it's only through spurious secondary data torture that positive claims are made.
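For what it's worth, the 'secondary data torture' problem is easy to demonstrate. A quick sketch (pure noise, hypothetical trial sizes) showing that testing many secondary outcomes on a treatment with no real effect will still throw up nominally 'significant' findings:

```python
# Sketch: with no true treatment effect, testing many secondary outcomes will
# still produce nominally "significant" p-values by chance (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_per_arm, n_outcomes = 80, 20  # assumed trial size and number of outcomes

false_positives = 0
for _ in range(n_outcomes):
    treated = rng.normal(0, 1, n_per_arm)  # no real treatment effect
    control = rng.normal(0, 1, n_per_arm)
    p = stats.ttest_ind(treated, control).pvalue
    false_positives += p < 0.05

print(f"'Significant' secondary outcomes: {false_positives} of {n_outcomes}")
# With 20 independent looks at pure noise, you expect about one by chance alone.
```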

Even in the general healthy population, exercise is mostly good at improving fitness, and where evaluations of fitness are used as a proxy for health, it looks beneficial. But those benefits are massively overhyped because the idea fits perfectly into the dominant neoliberal politics that seek to cast all social problems as having purely individual solutions, aka "you do you, stop bothering us".
 