Efficacy of cognitive behavioral therapy targeting severe fatigue following COVID-19: results of a randomized controlled trial (2023, Kuut, Knoop et al)

Just noting that those error bars on the chart and in Table 3 are standard errors. I've said before: if a paper plots SEs, I reckon it's a fair bet that it's a rubbish paper.

Standard error of the mean is the standard deviation divided by the square root of the sample size. Say the sample size is 57; the square root is about 7.55. So the standard deviation at T2 is 1.7 × 7.55, which is about 12.8. If we approximate a 95% confidence interval as plus or minus two standard deviations, then the values at T2 on that chart should have error bars of about 25.6 either side of the point.
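A back-of-envelope check of that arithmetic (a sketch only; the 1.7 and 57 are the figures assumed above, not values I've verified against the paper):

```python
import math

se = 1.7                 # standard error at T2, as read off Table 3
n = 57                   # sample size assumed above
sd = se * math.sqrt(n)   # implied standard deviation

print(round(sd, 1))      # 12.8
print(round(2 * sd, 1))  # 25.7, i.e. the ~25.6 quoted above after rounding
```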

Try drawing those error bars on the chart above. If I haven't made a mistake, and it's quite possible that I have, then the result means nothing. (Bear in mind that the lowest possible score on the scale is 8, and the highest possible score is 56.)

(And if you look at the standard errors in Table 3 - they look suspiciously repetitive. I would not be surprised if there was a problem in that table.)

I'm just looking at this now. It's all over the place. The lengths of the bars in Fig 2 do not even correspond to the values in Table 3. Happy to work with you on this to get this crap corrected or withdrawn.
 
Note that the confidence interval is sometimes given as +/- twice the standard error, not +/- twice the standard deviation. They may have plotted this, but the caption would still need correcting to say as much, and it doesn't explain why the error bars at T1 are different lengths despite the same standard error for both groups.

More importantly, this would probably mess up the stats downstream of these calculations.
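To put numbers on the two possible readings of those bars, under the same assumed figures (SE = 1.7, n = 57):

```python
import math

se, n = 1.7, 57
ci_half = 1.96 * se               # half-width if the bars are a 95% CI of the mean
sd_half = 2 * se * math.sqrt(n)   # half-width if the bars were +/- 2 SD of the data

print(round(ci_half, 1))          # 3.3
print(round(sd_half, 1))          # 25.7
```

Either way, the caption needs to say which one is plotted.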
 
For a good explanation of standard deviation and standard error I found this:
https://s4be.cochrane.org/blog/2018...d error tells you,of the true population mean.
It is good, but I don't think the chart in that link is quite right. The labels at the top make it look as though 68% is within half a standard deviation either side of the mean. I think the chart below is better: it's clearer that 68% is plus and minus one standard deviation from the mean, and 95% is plus and minus two standard deviations from the mean.

[Attached chart: normal distribution with bands showing 68% within ±1 SD and 95% within ±2 SD of the mean]
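The rule of thumb itself is easy to verify numerically; a quick check (assuming scipy is available):

```python
from scipy.stats import norm

# Probability mass of a normal distribution within +/- 1 and +/- 2
# standard deviations of the mean.
print(norm.cdf(1) - norm.cdf(-1))   # ~0.683 -> the 68% band
print(norm.cdf(2) - norm.cdf(-2))   # ~0.954 -> the 95% band (exactly 95% at 1.96)
```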
 
What clearly matters most is the null hypothesis testing, i.e. this:

"Patients who received CBT were significantly less severely fatigued across follow-up assessments than patients receiving CAU (-8.8; 95% confidence interval (CI) -11.9 to -5.8; P<0.001), representing a medium Cohen’s d effect size (0.69). The between-group difference in fatigue severity was present at T1 -9.3 (95% CI -13.3 to -5.3) and T2 -8.4 (95% CI -13.1 to -3.7)."

We need to understand how they came to these figures, particularly the confidence intervals on these group differences. I suspect they are wrong!
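One consistency check we can do from the numbers already quoted: if the T2 comparison were a simple two-sample difference, its standard error would combine the two group SEs. Assuming both are 1.7, as read off Table 3 earlier in this thread:

```python
import math

# Assumption: both groups have SE = 1.7 at T2 (figure quoted above).
se_cbt = 1.7
se_cau = 1.7
se_diff = math.sqrt(se_cbt**2 + se_cau**2)   # SE of the difference, ~2.40

diff = -8.4                                  # reported T2 between-group difference
half_width = 1.96 * se_diff                  # ~4.7

print(round(diff - half_width, 1))           # -13.1
print(round(diff + half_width, 1))           # -3.7
```

That reproduces the reported -13.1 to -3.7 almost exactly, so the CIs at least look SE-based rather than SD-based. It doesn't tell us what model produced the overall figure, though.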
 
Someone mentioned that at the second measurement moment T2 the CBT group shows a deterioration compared to T1, while the control group continues to improve. Why does the CBT group deteriorate when treatment stops and the control group continues to improve?
 
The difference in the outcomes on that questionnaire between the two treatments is equivalent to about one point per item on the 1 to 7 scale. So e.g. "I feel tired", where 1 is 'no, that's true' and 7 is 'yes, that's true'. Given that all the participants have on average improved a bit, or become a bit better at managing their illness over the 6 months, how hard would it be for the people who had CBT, having been instructed that their focus on symptoms is partly causing their fatigue, to answer just a little more positively?
They say a difference of 6 is considered a clinically relevant effect for the CIS-20, and they cite this paper for that statement. I don't see that substantiated in that paper.
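To make the "one point per item" claim concrete (a sketch under the assumption that the 8-56 range quoted earlier means 8 items scored 1-7 each):

```python
# Assumption: the fatigue severity scale runs 8-56 (as noted above),
# which implies 8 items scored 1-7 each.
overall_diff = 8.8               # reported overall between-group difference
n_items = 8

print(overall_diff / n_items)    # 1.1 points per item on the 1-7 scale
print(overall_diff >= 6)         # True: above the cited 6-point threshold
```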

For that chart showing standard errors for those figures, what matters more is the 95% confidence interval for the effect size, which is listed elsewhere in the paper. Confidence intervals aren't twice the standard deviation; they're roughly twice the standard error, assuming a normal distribution.
 
Someone mentioned that at the second measurement moment T2 the CBT group shows a deterioration compared to T1, while the control group continues to improve. Why does the CBT group deteriorate when treatment stops and the control group continues to improve?
If you look at the top lines of that table, the control group stays at 39.9 for T1 and T2. The CBT group gets marginally worse, from 30.6 to 31.5. It's not a big worsening, but I imagine part of the cause could be that the education the participants received, i.e. that 'everyone gets tired' and that they've just fallen into a bad habit of whining about being fatigued, starts to wear off a little over time.
 
Someone mentioned that at the second measurement moment T2 the CBT group shows a deterioration compared to T1, while the control group continues to improve. Why does the CBT group deteriorate when treatment stops and the control group continues to improve?

That's true, and they haven't mentioned that (as far as I can tell), but the change is also very small and not significant.
 
Looking at the data for SF-36 Physical Functioning, taking the figures from before therapy and at 6 months post-treatment:
The CBT group improved from 64.7 to 77.2, so improved by 12.5
The CAU group improved from 62.5 to 72.3, so improved by 9.8

So that gives a magnificent (!) between-group difference of 2.7. That's less than one change on one of the 10 descriptors. Moreover, the CBT group was on a downward trajectory from the immediate post-treatment level, whereas the CAU group was heading upwards.

Interesting what a difference choice of comparison points makes.
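The same arithmetic, spelled out:

```python
# SF-36 Physical Functioning figures as quoted above.
cbt_change = 77.2 - 64.7    # CBT: baseline to 6 months post-treatment
cau_change = 72.3 - 62.5    # CAU: same comparison points

print(round(cbt_change, 1))                 # 12.5
print(round(cau_change, 1))                 # 9.8
print(round(cbt_change - cau_change, 1))    # 2.7 between-group difference
```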
 
Someone mentioned that at the second measurement moment T2 the CBT group shows a deterioration compared to T1, while the control group continues to improve. Why does the CBT group deteriorate when treatment stops and the control group continues to improve?
There are various possible explanations if this reflects reality:
  • The experimental group were irrationally optimistic about what they were able to do, as a result of the false information provided via the CBT, which they mistakenly believed to be curative; so, in the long term, they attempted to undertake more than was feasible given their persistent health issues, making their health objectively worse.
  • The experimental group initially scored themselves as better than they actually were, as a result of researcher-induced bias, and then scored themselves more accurately after sufficient time had elapsed; what appears to be a deterioration is really more accurate reporting.
  • The experimental group and the people around them adjusted their behaviour to allow them to focus on the active intervention and reduce other activity, giving a misleading sense of improvement; after treatment, the pressures of everyday life intruded again, resulting in subjective deterioration.
  • Etc.
These issues indicate the impossibility of ever drawing meaningful conclusions from an open-label trial with only subjective outcomes, and show that it is essential to look at total activity levels before, during and after the intervention if the aim is to enable participants to increase their activity levels. They also hint at the difficulty of ever achieving adequate control over all factors in real-life settings, which suggests the foolishness of focusing on just one experimental design rather than seeking convergent evidence from multiple sources and approaches.
 
This plot is objectively worse than those I used to make in GCSE I.T. when I was just learning Excel. Absolutely no care at all. Looks like shit.

On a technical note, the error bars are somehow not lined up with the datapoints themselves. I'm not even sure how you'd manage that (maybe it's an artefact of the plotting software). I have absolutely no faith in any of this work.

OK. The error bars in this figure (Fig. 2) actually show the 95% confidence interval (mean ± 1.96 × SE). That's not what's stated in the caption for this figure. So it's an error, but a presentational one.

What I don't understand is how you calculate an *overall* between-group difference from the values at T1 and T2. Calculation of the between-group difference at T1 or T2 alone is trivial.
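On the "overall" question: trials like this usually fit a repeated-measures regression (e.g. a linear mixed model) in which both follow-ups enter at once, so the group coefficient is a single difference averaged across T1 and T2. I don't know which model these authors actually used; the sketch below is just the general idea, on entirely synthetic data, with statsmodels assumed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_group = 57                      # group size mentioned earlier; illustrative only

rows = []
for pid in range(2 * n_per_group):
    group = "CBT" if pid < n_per_group else "CAU"
    person = rng.normal(0, 8)             # random intercept: between-person variation
    base = 31 if group == "CBT" else 40   # rough group means, invented
    for t in ("T1", "T2"):
        rows.append({"id": pid, "group": group, "time": t,
                     "fatigue": base + person + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random-intercept mixed model: the group coefficient is the overall
# between-group difference across both follow-up assessments.
fit = smf.mixedlm("fatigue ~ group + time", df, groups=df["id"]).fit()
print(fit.summary())     # group[T.CBT] should come out near the built-in -9
print(fit.conf_int())    # 95% CI for that overall difference
```

Whatever the authors actually fitted, something of this shape is presumably where the single -8.8 (95% CI -11.9 to -5.8) comes from, rather than averaging the T1 and T2 differences by hand.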
 
While it's difficult for me to follow (especially today), I'm reading with interest. Please keep on analysing the content, but I wanted to add some context/docs that might be helpful.

There was a S4ME thread on the announcement of this trial, which can be found here.

Particularly note the doc added by @Grigor , access here, where Knoop is discussing the trial.

  • No trial / treatment protocol was made public, because of "intellectual property" and because "the treatment must first be tested in research"; ZonMw later reported that a trial protocol had been accepted for publication by BMC, but so far no publication (possibly agreed with BMC to publish after the trial publication). --> My bad! There is a trial protocol published after this interview. See: A randomised controlled trial testing the efficacy of Fit after COVID, a cognitive behavioural therapy targeting severe post-infectious fatigue following COVID-19 (ReCOVer): study protocol - PubMed (nih.gov), published December 2021. On November 18th 2021 Knoop was not planning to make a treatment protocol public; I'm too hazy at the moment to discern whether that's the same or different. (The BMC publication contains detailed info on the treatment, so I'd say it's the same?)
  • (Nastily) conflates subjective questionnaire outcomes after research participants are trained to reframe cognitions (methodology flaw) with "listening to patients" (patient collaboration and agency)
  • "Why are patient organizations so obsessed with objective measures? Objective measures are also subjective. They depend on how you interpret them."
  • "medical research is increasingly moving away from objective outcomes and looking at what is relevant to the patient and how he feels." (again conflating bad research practice with patient representation and participation)
Also note that participants were recruited via a website that repeatedly stated to candidate participants that CBT was a proven effective treatment for chronic fatigue (if it's removed, contact me for screenshots).

I also have a question on study aims. There are at least three sources (the grant application, the ZonMw project site, and the recruiting website) that state that the aim of this trial is to study if CBT prevents Long Covid/chronic fatigue after Covid19. (So not if it treats it when it's already established for a while.) Does this match the paper?

I've also made a doc with a selection of some pages from a FOI request made by the Dutch Steungroep ME en Arbeidsongeschiktheid regarding ReCOVer. The whole document can be accessed here:
Beslissing ZonME op WOB-verzoek ReCOver-onderzoek.pdf (steungroep.nl)
and the selection PDF below.

I haven't read the whole document (it's 100 pages), but my eye fell on a couple of things that matter (see PDF attachment below). @dave30th , tagging you as I think you'll find this interesting; part of the PDF I added below is in English.

Note:
  • that the thing tested here is, as per this grant application, to prevent rather than treat chronic fatigue after Covid-19
  • that Knoop, as usual, has his eyes on "fatigue after other infections" - dude is after making Long Covid part of one big ball of treatment consumers with ME, Q-fever, mono, tick-borne disease and more.
  • that this document appears to have been written in May 2020: deadline the 14th, latest reference the 2nd. It states that "a substantial subgroup is expected to develop persistent post-Covid-19 chronic fatigue". Knoop expected this. ZonMw accepted this. They knew. That will go down well with the thousands of LC patients who got sick in the last years while the Netherlands followed an immunity-by-infection policy and did little to prevent infections as long as hospitals seemed to cope. /s
Instead of warning about it, Knoop stayed quiet and tried to build business, career and income on it.
  • that again Knoop claims CBT is effective for chronic fatigue
  • the claimed but fabricated involvement of the patient community; Longfonds and Qsupport might be "active in support for COVID patients" but they do not represent the Long Covid patient community. This is far from a patient-driven or patient-supported study. (See also point 1.8 in the PDF)
The second part of the PDF is in Dutch, but I added it because it contains important information regarding ZonMw's assessment of his project on relevance and quality.

Section 1.13 states strong points:

"CBT is, like the researchers indicated, a proven effective intervention for chronic fatigue in people with different chronic illnesses (including infectious diseases).
According to the researchers this is the first study in which CBT is used to prevent chronic fatigue.
....
Research group with much experience in this area
Patient organisations are involved in the research"

Weak points (1.14 and 2.2) are that it only looks at short-term outcomes, and the setup of the study. ZonMw suggests looking at "quality of life" instead of employment, as there are a lot of old people in the COVID-19 population. They also find it unclear what "care as usual" means for the control group.

One reviewer remarks: "The researchers are clearly enthusiastic about iCBT", though she wonders whether it's ethical to withhold this easily accessible treatment from participants when it could prevent them developing chronic fatigue.


Edited to change some poorly translated English
Edited to correct a mistake: there is a published study protocol
 


Edit - and when was the study undertaken?

I've not looked at the whole FOI document, but the deadline for the grant application form was the 14th of May 2020
The final (I think elaborated) grant request was received by ZonMw in June 2020. The decision to fund him was taken and communicated in July 2020. (FOI doc)
In early August 2020 the grant approval was made public. (See thread on S4ME & article David Tuller)
In October 2020 the trial started (ZonMw page screenshot, personal collection)
I swear I have a screenshot, though I can't find it, showing that the trial was closed to participation in February 2022.


I'm getting too poorly to add all the links, I will do that later.
 
Aside from the pseudoscientific BS about CBT, someone on twitter asked a question that needs to be asked and almost never is: what the hell does CBT have to do with sleeping habits and the other themes they chose?

This is exactly as substantial as using CBT to teach cooking. Why? Literally no reason for it, unless someone's job is to sell CBT. The specific format is useless, adds nothing to the outcome and only serves to employ cheap CBT therapists.

There is not a single actual reason to use some psychotherapeutic model to deliver generic information. This is all nothing but a giant scam that uses a pseudoscientific construct in order to appear scientific, which is basically insane.
 