Prediction of Discontinuation of Structured Exercise Programme in Chronic Fatigue Syndrome Patients, Kujawski, Newton, Hodges et al, 2020

John Mac

Abstract
Purpose:
The purpose of this study was to assess differences in the physiological profiles of completers vs. non-completers following a structured exercise programme (SEP) and the ability to predict non-completers, which is currently unknown in this group.

Methods:
Sixty-nine patients met the Fukuda criteria.

Patients completed baseline measures assessing fatigue, autonomic nervous system (ANS), cognitive, and cardiovascular function.

Thirty-four patients completed a home-based SEP consisting of 10–40 min per day at between 30 and 80% actual HR max.

Exercise intensity and time was increased gradually across the 16 weeks and baseline measures were repeated following the SEP.

Results:
Thirty-five patients discontinued, while 34 completed SEP.

For every increase in sympathetic drive for blood pressure control as measured by the taskforce, completion of SEP decreased by a multiple of 0.1.

For a 1 millisecond increase in reaction time for the simple reaction time (SRT), the probability for completion of SEP also decreases by a multiple of 0.01.

For a one beat HRmax increase, there is a 4% increase in the odds of completing SEP.

Conclusion:
The more sympathetic drive in the control of blood vessels, the longer the reaction time on simple visual stimuli and the lower the HRmax during physical exercise, then the lower the chance of SEP completion in ME/CFS.

https://www.mdpi.com/2077-0383/9/11/3436
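The abstract's "multiple of 0.1" and "4% increase in the odds" statements read like per-unit odds ratios from a logistic regression. As a rough illustration of what such figures mean, here is a minimal sketch; the baseline log-odds and the ten-beat shift are made-up assumptions for illustration, not the paper's actual model:

```python
import math

def completion_probability(baseline_log_odds, or_per_unit, units):
    """Probability of completing the SEP after shifting one predictor
    by `units`, given an odds ratio `or_per_unit` per unit increase.
    Illustrative only; coefficients are not taken from the paper."""
    log_odds = baseline_log_odds + math.log(or_per_unit) * units
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: odds ratio 1.04 per extra beat of HRmax (the abstract's
# "4% increase in the odds"), starting from 50:50 odds (log-odds 0).
p0 = completion_probability(0.0, 1.04, 0)    # 0.5
p10 = completion_probability(0.0, 1.04, 10)  # ~0.597
```

So, under these assumed numbers, a 10-beat higher HRmax would move a 50:50 completer from about 50% to roughly 60% probability of completing; the point is only that a "4% increase in the odds" compounds per unit, not that these probabilities describe the study sample.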
 
It seems crazy to me to carry out a study of GET in this way using the Fukuda criteria, and not assessing PEM in some way before, during, or after.

The use of Fukuda is probably because clinicians think Fukuda is more representative of who they see, hence they think that using these criteria will be more useful than using more stringent ones.

Not assessing PEM is indeed a major flaw, but it stems partly from a (mythical) belief that exercise is safe so long as patients wear heart rate monitors and don't exceed a certain heart rate. The other part is the conflation of the symptoms that a sedentary person who starts an exercise programme will experience with those of PEM. Because many researchers don't understand the difference, they are unable to choose measures which can tell the difference, hence they choose not to measure it altogether.
 
It seems crazy to me to carry out a study of GET in this way using the Fukuda criteria, and not assessing PEM in some way before, during, or after.
Agree. For all we know, all they managed to show is that those without PEM managed to complete the treatment and those with PEM dropped out. But since they didn't ask about PEM, who knows.

The other thing missing (unless I missed it) is whether the completers actually got fitter or better or what? Because if they didn't improve, what's the point of trying to predict who can or can't complete an exercise programme that harms half the participants and doesn't work for the other half?

Having said that, it's nice to see they at least thought about why half their participants dropped out, something which doesn't even seem to occur to too many others. Still, asking about PEM would have been the obvious first question here.
 
they are unable to choose measures which can tell the difference, hence they choose not to measure it altogether.
I am coming to the view that a poor choice of outcome measures results in optimizing for the wrong things. This happens in research beyond ME. Outcome measures need to be understood very, VERY well if a study is to avoid a huge source of bias.

Why did I write "optimization"? Because these measures are used in applied research, and subsequent studies try to optimize the outcomes they measure, not the patient's life, in study after study.

Right now I think this might be happening in type 2 diabetes research, where assumptions were made and then things were optimized without those assumptions being considered and properly validated. This is why there is now a move to get diabetics onto low-carb diets with time-restricted eating. In head-to-head studies the classic methods do not do well. Keeping blood sugar stable is in many ways a very bad thing, so optimizing for it is a suboptimal target. It's not the worst target, but it prevents you from finding the very best outcomes.

We really need a standardized test not only for ME but for PEM. Really, really need them.
 
Authors include Julia Newton, Lynette Hodges (NZ exercise researcher) and a number of regular collaborators with Karl Morten. Which means it's doubly disappointing to see
Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a complex condition characterised by intense debilitating fatigue after physical activity.
as the very first sentence in the friggin paper!

I think we need to put together some sort of standard letter that we can send to all researchers who mis-represent our illness in this way.
 
Authors include Julia Newton, Lynette Hodges (NZ exercise researcher) and a number of regular collaborators with Karl Morten. Which means it's doubly disappointing to see

Hodges (and probably Newton) were recruited to help interpret the data after the study had been started; Hodges talked about this in a recent talk that was posted online.
It still suggests amateur-hour quality work though, if they didn't know what they were supposed to be doing when collecting the data.
 
It still suggests amateur-hour quality work though
It seems that none of their results remained significant after correction for multiple comparisons (Benjamini–Hochberg, FDR). They could have mentioned this in the abstract!

The trial had a small sample size and could probably only detect large differences. The most notable conclusion is probably that there were no such large differences between those who completed the exercise programme and those who dropped out.
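For readers unfamiliar with the Benjamini–Hochberg correction mentioned above, here is a minimal sketch of the procedure. The p-values in the example are made up for illustration, not taken from the study:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a list of booleans: True where the null is rejected
    under the Benjamini-Hochberg false discovery rate procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # ... and reject the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# With several comparisons, a nominal p = 0.04 fails the FDR cut:
print(benjamini_hochberg([0.001, 0.04, 0.2, 0.5, 0.9]))
# -> [True, False, False, False, False]
```

This is why results that look "significant" one at a time can all wash out after correction, which appears to be what happened here.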

I thought this part was interesting (an argument that Tom Kindlon made before them):
Whilst withdrawals are completely normal during supervised exercise programmes, when comparing this rate in the current study (23% pre CPET testing, 36% after CPET testing) to those in previously reported studies within ME/CFS, it is interesting to note that this is substantially larger than that evidenced in the GET trial (6%) [10], and the GETSET study (12 participants, 6%) of 211 participants.

...

It is important to note that in the PACE trial, exercise adherence was not directly measured, and only attendance was measured [14,15]. It was clear after 12 months of completing exercise prescription that there were minimal changes in fitness between the groups [15]. The lack of changes in fitness may suggest that there was a lack of adherence to the exercise programme; however, this was not directly reported
 
Besides the absurdity of actually going through with an exercise programme, this is a real missed opportunity to actually record and analyze the reasons for drop-outs. This is something that is always dutifully ignored by BPS research, as they don't want to have those reasons put down in writing.

I really don't understand the point of this, what the expectations were. So many flaws.
 
The lack of changes in fitness [in the PACE trial] may suggest that there was a lack of adherence to the exercise programme; however, this was not directly reported.

Or that it just didn't work, even in those who did adhere to the exercise programme.
 
This is something that is always dutifully ignored by BPS research, as they don't want to have those reasons put down in writing.
During the PACE trial, a patient whose name I no longer recall, and would not say anyway, contacted me. They were doing worse, with lots of symptoms. They were not so much distressed at that, though. What distressed them was that they were reporting symptoms, and the interviewer was making notes, but the notes were along the lines of "patient is doing well". That is not verbatim, as I no longer recall the exact line that was used, but it's the gist of the problem.

The interviewers, and the interview structure, appear to have inherent bias. This may explain why, in what I think was the FINE trial, some of the medical staff wound up saying things like "the bastards don't want to get better". So I ask the question: does this CBT ideology change the minds of the medical personnel as much as those of the patients? Further, is it pervasive brainwashing?
 
@alex3619 @chrisb
PACE deviated from the protocol to make it harder to report adverse events, at the same time as making it easier to claim success. It's a few years since I looked, but I think an AE had to last across two follow-ups, which could be several months apart. Then two unblinded psychiatrists judged whether the AE was indeed attributable to the therapy. There were still more AEs in the GET arm according to the tables, but I don't think they released the primary data under FOI, making it hard to check. Imagine a drug trial where side effects weren't counted unless they lasted for months. Grotesque, but I don't think it's had as much attention as the fiddling of the positive outcomes.
 
It is strange how remote the information on which researchers rely may be from the "reality" it is supposed to represent. The patient experiences something and seeks to express that experience in words; the research assistant hears and interprets the words according to his or her own expectations and beliefs. It's the old "send three and fourpence, we're going to a dance" routine. The least that should be expected in evidence-taking of this nature is that the records are put to the patient to signify approval of and agreement with what is recorded, with the opportunity to correct errors.
 