A systematic literature review of randomized controlled trials evaluating prognosis following treatment for adults with CFS, 2022, Chalder

So sad that, years on from when you started helping us with your articles and letters, this sort of junk by Chalder is still being published.
It is infuriating. Clearly there is something profoundly rotten at the heart of mainstream medicine.

The fact that they have been able to get away with this for so long, and continue to do so, just proves how much political protection they have, and how willing they are to use it for ill purposes.

:mad::mad::mad:
 
As described in the PACE protocol (White et al, 2007), the trial included multiple objective outcomes—a six-minute walking test, a step-test for fitness, and measures of employment status and of being in receipt of social or welfare benefits. In contrast to the reports of “improvement” and “recovery” on PACE’s subjective measures, all of these objective outcomes yielded null results or—in the case of GET and the six-minute walking test—statistically significant but clinically insignificant findings.
At the risk of exposing my statistical ignorance: there were 4 trial arms in PACE, each with 4 objective outcome measures, which equals 16 separate measurements in total. At a significance level of 0.05 each test has a 1 in 20 chance of throwing up a spurious result, so across 16 measurements you would expect roughly one "significant" finding by chance alone.

If that is legit reasoning then the very modest single result on the 6MWT for the GET arm could easily be a random result, indicating nothing.
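To put rough numbers on that (just a back-of-envelope sketch, assuming the 16 measurements were treated as independent tests at a 0.05 threshold, which the real between-arm comparisons won't exactly have been):

```python
# Rough illustration of the multiple-comparisons point above.
# Assumption (mine, not from the trial): 16 independent significance tests,
# each run at alpha = 0.05. PACE's actual comparisons were between arms and
# not independent, so this is only a back-of-envelope sketch.

alpha = 0.05
n_tests = 16

# Expected number of spurious "significant" results from chance alone
expected_false_positives = n_tests * alpha            # 0.8

# Probability of at least one spurious "significant" result
p_at_least_one = 1 - (1 - alpha) ** n_tests           # ~0.56

print(f"Expected false positives across {n_tests} tests: {expected_false_positives:.2f}")
print(f"Chance of at least one false positive: {p_at_least_one:.0%}")
```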

@Lucibee
 

Maybe - I'm sure they'd try to argue about 'power' (sample size) being incorporated into the p-value or whatever it is they used, but there will always be a sense of throw enough at the wall and you might get something to stick, and if it doesn't make sense in relation to the other things measured not being consistent then that needs to be part of the discussion. Any result should use a bit of triangulation if you are using so many measures. If you measured a 6 min walk but even the maths test (if there were one) plummeted, that would make sense to all of us given how ME works re: the energy 'package'.

As it is - and maybe the age of the trial is a bit of an excuse - the 6 min walking test wouldn't be something I'd back. Were they even comparing individuals' own scores with their previous ones, and times for completion etc? Utterly useless if it is an average/aggregate and their methodology equates to using the drop-out rate to filter out the 'weakest' and, surprise surprise, have a 'less ill field' at the end by that filtering alone. That's before we get started on who in the field had what illness (and of these illnesses, which might 'respond' in what way to which test best) given their criteria.
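To illustrate the drop-out 'filtering' worry in toy form (all the numbers below are invented, not PACE data):

```python
# Toy sketch of how drop-out alone can flatter a group average.
# All numbers are invented for illustration; no individual improves.

baseline = [150, 200, 250, 300, 350, 400, 450, 500]   # metres walked, hypothetical

# Suppose the two most-affected participants deteriorate and drop out,
# and everyone who remains walks exactly the same distance as before.
followup = [250, 300, 350, 400, 450, 500]

mean = lambda xs: sum(xs) / len(xs)

print(f"Baseline mean (n={len(baseline)}): {mean(baseline):.0f} m")                  # 325 m
print(f"Follow-up mean of completers (n={len(followup)}): {mean(followup):.0f} m")   # 375 m
# The completers-only average rises by 50 m even though nobody walked further.
```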

Even with this, is it an appropriate single measure, one-off, to say anything about? A pedometer maybe (though it doesn't include all the cognitive stuff), as it would show whether there was any other 'compensation' or just something simple - like having worked out how to get to the place better, so you were less exhausted by the time you walked in there the second time. And I'd want the PEM/impact measured - we all know that longitudinally we all get better at 'performing whilst making our bodies worse'. But a CPET, probably even just a one-day one (though it should be two-day), would at least give an idea of 'fitness', if they are claiming aerobic improvement.

Simply on the basis that my experience (and the Workwell anecdote of the marathon runner who determinedly tried to make herself fitter but got worse) is that it is fitness that goes first - which isn't the same as learning to perform 6 mins of walking to the detriment of all other health (have they even checked basic body stats like HR, BP, weight changes, electrolytes etc?).
 
There were so many who didn't do the post-treatment 6 minute walk test that the results are meaningless. Given that those who skipped the test are most likely to be the ones who were too sick to do it, I think all those who skipped it should have been included as walking zero distance.
 
The drop-out rate on PACE and the lack of follow-up as to why remains a giant red flag. Some of the people who did drop out explained it was because they got too ill to continue, and I find it hard to believe they didn't try to tell the researchers this. That means the reasons for dropping out were more than likely refused and not written down. Very high drop-out rates in a condition known to get worse with exertion are the primary evidence the trial produced: it shows the treatment made people worse, along with them saying it did. The rest of it is largely irrelevant, because the poor criteria used for selecting people mean you can't attribute where any positive results - which there weren't really - came from.
 

Indeed - one thing a half-decent review should be able to flag is how, even with so much craply done research that 'misses the point' of the illness, there could be a better fact file of data if reporting protocols were put in place for ME. It turns out these are highly relevant, as 'total exertion' is pretty important: knowing whether any of the tests were within a week of the others (cumulative exertion), the time of day, and the journey to get there all matter - yet we don't even have individual-level dataset reporting.

And that is stupid and a shame, not just for the 'overalls', but because if there are different subsets or 'types', such clusters and patterns might have been obvious. And methodology would have become more forensic and responsive if tables were focused on, rather than words of spiel covering up the raw data and 'smoothing' these out.

By which I mean it should be required that there is a reporting line for each participant on each test - not 'drop-outs' being used to disappear things, but instead those who couldn't even start, never mind complete, a test being reported in a data-appropriate way. So if a 6 min walking test happens before and after, and someone can't start it after treatment but completed 400m in the first, the data inserted is '0m' and that must be included in the calculations. With safeguards to make sure that participants are not pressured into doing these tests if they aren't up to it.

For example, in a decathlon is there a 'max/min score' that replaces a disqualification or 'did not complete' on one event? And in other areas of statistics (e.g. business), a score that reasonably represents what that missing result means would be put in - e.g. why not make it -200m? These decisions should not be left to individual investigators but should be something that has to be approved by an 'ME/CFS board', so that we don't have the nonsense we currently have.
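A minimal sketch of the sort of reporting rule I mean (hypothetical participants and field names; the 0m / -200m rule is the suggestion above, not anything PACE actually did):

```python
# Sketch of the suggested rule: a participant who cannot attempt the
# post-treatment walking test is recorded with a fixed penalty value
# rather than silently dropped. Data and field names are hypothetical.

participants = [
    {"id": 1, "baseline_m": 400, "post_m": 420},
    {"id": 2, "baseline_m": 350, "post_m": None},   # could not attempt the test
    {"id": 3, "baseline_m": 300, "post_m": 310},
]

def post_distance(p, non_completer_value=0):
    """Return the post-treatment distance, imputing a fixed value
    (0 m here, or e.g. -200 m under the 'decathlon-style' penalty idea)
    when the test could not be attempted."""
    return p["post_m"] if p["post_m"] is not None else non_completer_value

completers_only = [p["post_m"] for p in participants if p["post_m"] is not None]
with_imputation = [post_distance(p) for p in participants]

print("Completers-only mean:", sum(completers_only) / len(completers_only))   # 365 m
print("Zero-imputed mean:   ", sum(with_imputation) / len(with_imputation))   # ~243 m
# The completers-only figure looks much rosier than the figure that
# counts the person who could not start the test at all.
```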

And yes, all raw data collected should be required to be fully reported, along with details of the 'conditions' (edit: by which I mean testing conditions such as how many tests in a row, the journey there, whether there was a place to lie down and rest between tests etc.) and the 'who' and 'any other symptoms', because how anyone thinks these can be compared otherwise I don't know.
 

Agree. They did not have a ‘safety measure’ of ‘reporting harm or deterioration’ (a gross failure, despite their claims that they did) for what was a guinea-pig trial of ‘let’s try doing the most counterintuitive approach’ (given that back then rest was still what was advised when people were ill).

This equates to running a trial where the hypothesis couldn’t be debunked - worse, where the treatment harmed people, the results were ‘disappeared’ as drop-outs from the data reporting. So no harm report, and no ‘zero’ entered when someone was harmed.

The shocking thing is how that’s the real trend PACE started: people realising they could get away with methods that ‘only prove a positive’ by using this drop-out technique. Shocking.
 
PACE was actually one of the few trials that looked for harms. Unfortunately all this did was allow them to claim their treatment was safe!

Originally the protocol was to look for harms that lasted from one appointment to the next, but as part of their revisions this was changed to two appointments. The result was a very low number. They did say there were lots of small harms, but never revealed what criteria they used.
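Just to show why stretching the persistence requirement can only shrink the count (a generic sketch with invented appointment records, not their actual harm definition):

```python
# Generic sketch: each participant has a True/False record of whether they
# reported deterioration at each successive appointment (invented data).
# Requiring the deterioration to persist over more consecutive appointments
# can only reduce (or leave unchanged) the number counted as harmed.

records = {
    "p1": [True, False, False, False],
    "p2": [True, True, False, False],
    "p3": [False, True, True, True],
    "p4": [False, False, False, False],
}

def harmed(record, min_consecutive):
    """True if deterioration was reported at `min_consecutive`
    or more consecutive appointments."""
    run = 0
    for worse in record:
        run = run + 1 if worse else 0
        if run >= min_consecutive:
            return True
    return False

for n in (1, 2, 3):
    count = sum(harmed(r, n) for r in records.values())
    print(f"Deterioration persisting over {n} appointment(s): {count} counted as harmed")
# 1 -> 3, 2 -> 2, 3 -> 1: the stricter criterion always gives the smaller number.
```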

Outcomes were averaged. It is impossible to tell if a few patients improved a lot or if a lot of patients improved a little bit. Every possible way the results could be made to look better was used.
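And a tiny illustration of why a group average can't distinguish the two (made-up change scores, not trial data):

```python
# Two invented sets of individual change scores with the same group mean.
few_improve_a_lot  = [0, 0, 0, 0, 0, 0, 0, 0, 20, 20]   # most unchanged, two improve a lot
many_improve_a_bit = [4] * 10                            # everyone improves slightly

mean = lambda xs: sum(xs) / len(xs)

print(mean(few_improve_a_lot))    # 4.0
print(mean(many_improve_a_bit))   # 4.0
# Identical averages, very different clinical pictures - which is why
# reporting only group means, without individual-level data, hides so much.
```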
 
Since they do not recognize PEM, or have any understanding of the symptomatology of ME, I don't think that can be said seriously. It's more like the studies on LC so far that have looked only at familiar diagnoses and, finding few, asserted there's nothing there.

There were no strokes or heart attacks and I don't doubt they would have recorded and reported those, but I very much doubt they were looking for anything else relevant to ME. Nothing they've ever done in their entire career makes this claim credible.

The big tell is in marking severely disabled people as having been recovered. This is simply not compatible with looking for harms.
 
They were quite determined not to find any! If they had, it would have clashed with their declaration to the GET and CBT groups at the start of the trial that there was absolutely NO risk of harm - because that would not be at all dodgy.
 
KCL write up of study:
13 December 2022

Evaluating effectiveness of treatment for adults with chronic fatigue syndrome
A new systematic review of 15 studies, led by researchers from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) and South London and Maudsley NHS Foundation Trust, has investigated the prognosis of adults with chronic fatigue syndrome (CFS) treated with two well-known approaches: cognitive behavioural therapy (CBT) and graded exercise therapy (GET).

The systematic review, published in Psychological Medicine, is one of the only reviews to have focused on prognosis following these treatments, and captures the proportion of subjects who improved or worsened according to various outcomes including fatigue, functioning or post-treatment change. The study found that prognosis was 8–26% better following CBT and GET compared to control conditions such as relaxation, medical care or wait-list.

CFS is a serious illness, affecting 0.2-0.4% of the population and characterized by unexplained tiredness which is severe enough to result in substantial disability. Other common symptoms include musculoskeletal pain, sleep disturbance and problems with thinking and attention. There is no ‘evidence based’ medical treatment for CFS although CBT and GET have most support within the current literature.
"Information about prognosis is vital for translating findings from research to practice. In simple terms, our study showed that a greater proportion of patients improved, and a lower proportion worsened, following GET and CBT compared to control conditions, and that CBT and GET yielded similar outcomes. This should inform patients, clinicians and commissioners about treatments that may help with the debilitating symptoms of CFS."– Dr Tom Ingman, first author of the study and Clinical Psychologist at the Department of Psychology, King's IoPPN

https://www.kcl.ac.uk/news/evaluati...ment-for-adults-with-chronic-fatigue-syndrome
 
A new systematic review of 15 studies
There have literally been hundreds. This is completely unserious. How can this pretend to be a systematic review when it so obviously cherrypicks? The very pretense behind a systematic review is that it reviews everything, systematically.

I've never seen less serious professionals than basically everyone involved in EBM. It's a complete ethical and professional free-for-all.
 
There is no ‘evidence based’ medical treatment for CFS although CBT and GET have most support within the current literature.

Well certain people have successfully flooded the literature for [checks notes] decades with strong claims about CBT & GET, despite there being no good evidence supporting them.

If there are no 'evidence based' treatments, then stop recommending CBT & GET or any other psycho-behavioural approach, and start asking the hard questions about why decades of dominance by the psycho-behavioural school have utterly failed to deliver any substantive result.
 
There is no ‘evidence based’ medical treatment for CFS although CBT and GET have most support within the current literature.

I suppose this is technically true, given that GET/CBT have more papers or researchers advocating for them than any other hypothesised treatment, as long as one ignores the inconvenient truth that there is no reliable evidence of any kind that they work, and that the studies cited in support of them, if anything, demonstrate they have no long-term impact on any measure and only a transient effect on subjective measures, well within the expected parameters of experimental bias.
 
I think you're right that this looks like a press release. I thought I was used to the idea that the CBT/GET promoters would go on fairly ineffectually grumbling about NICE. Now I'm finding it both sinister and upsetting that they are going all out to try to overturn it. Combine the above unscientific and superficial sales pitch for PACE et al. with the Peter White et al. paper about to be published and it becomes clear that there is a coordinated campaign.
I expect the Science Media Centre will run with it.
 