Efficacy of cognitive behavioral therapy targeting severe fatigue following COVID-19: results of a randomized controlled trial (2023), Kuut, Knoop et al.

Yesterday, I sent some comments to Hans Knoop and Tanja Kuut on potential issues with the data in Table 3 and presentation of results in Fig 2.

I have asked that they consider addressing these points, which may well be explainable, before the paper is published in final form. Let's see if they do that!

Obviously, there are other issues, such as the trial protocol and the philosophy behind the study, that have been raised here. I just focused on some things that don't make sense to me.
 


Yes, this can't be right. It must be an error of some kind.
[Image: screenshot of Table 3 with the repeated SE values highlighted]

It also doesn't make sense to report the standard error (SE) instead of the standard deviation (SD) for the values at baseline.

Because we know that the sample size n for each group was 57, we can calculate the SD as SE times the square root of n. For example, for the baseline value of the CBT group, SE = 0.5 and n = 57, so SD = 0.5 × √57 ≈ 3.775. That value is also surprisingly low: the authors assumed an SD of 12 in their power calculation.
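As a quick check, here is that back-calculation in a few lines of Python (this just reproduces the arithmetic above; it is not the authors' analysis code):

```python
import math

# Back-calculate the SD implied by the reported SE: SD = SE * sqrt(n).
# Figures from the post above: SE = 0.5 for CIS-fatigue at baseline, n = 57.
se = 0.5
n = 57
sd = se * math.sqrt(n)
print(f"Implied SD = {sd:.3f}")  # ~3.775, far below the SD of 12 assumed in the power calculation
```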
 
Yep, the SD would be 3.775 for CIS-fatigue in the CBT group (SE = 0.5), much lower than 12.
 
There's some material in the data analysis section that might explain the odd figures. I don't have the energy or expertise to work it out, but I did see mention of pooled standard deviations, which might explain the duplications, and of the standard error of the difference between two figures, which might explain the small SE numbers (see the sketch below). I think a statistician needs to look closely before we assume these are errors.
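For illustration, a minimal sketch of the standard error of a difference between two group means. The per-group SDs below are made up (reusing the SD of 12 assumed in the power calculation), not values taken from the paper:

```python
import math

# Hypothetical per-group figures (NOT from the paper) to show why the SE of a
# between-group difference is much smaller than a raw per-group SD.
sd1, sd2 = 12.0, 12.0   # assumed per-group standard deviations
n1, n2 = 57, 57         # per-group sample sizes

# Standard error of the difference between two independent group means:
# SE_diff = sqrt(sd1^2/n1 + sd2^2/n2)
se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
print(f"SE of the difference = {se_diff:.2f}")  # ~2.25 with these made-up inputs
```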
 
The pooled standard deviation is just used for estimating the effect size, Cohen's d.
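For context, this is roughly how a pooled SD feeds into Cohen's d. All numbers here are invented for illustration and are not from Table 3:

```python
import math

# Cohen's d with a pooled SD, the likely role of the pooled SD mentioned in
# the paper's analysis section. All values below are hypothetical.
mean_cbt, mean_control = 30.0, 40.0   # hypothetical group means on CIS-fatigue
sd_cbt, sd_control = 11.0, 13.0       # hypothetical per-group SDs
n_cbt, n_control = 57, 57

# Pooled SD: a single spread estimate shared by both groups, which is why any
# pooled figure would be identical across groups by construction.
sd_pooled = math.sqrt(((n_cbt - 1) * sd_cbt**2 + (n_control - 1) * sd_control**2)
                      / (n_cbt + n_control - 2))
cohens_d = (mean_cbt - mean_control) / sd_pooled
print(f"Pooled SD = {sd_pooled:.2f}, Cohen's d = {cohens_d:.2f}")
```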

I'm far from an expert myself, but I really think these are errors.
 
If it's not too much trouble, @ME/CFS Skeptic, are you able to resend the image with the repeated SEs at T0 also highlighted? There is one instance. Cheers.
 
One question I have: the title says 'Estimated means and linear mixed model analyses...'

What do they mean by this? Are they showing results from some extra modeling step?

EDIT: And why are the T1 and T2 means referred to as 'estimated means', whereas the baseline value at T0 is just a 'mean'? There could be a subsequent modelling step, which would mean the means (and their SEs) are model-derived rather than raw values.
 
This is the key part of the text explaining the method used to obtain the values in Table 3. It seems convoluted to me, and perhaps these numbers are 'modelled'. But if that is the case, why not just use the actual values?

[Screenshot of the paper's data analysis section]
 
Yes, the post-treatment means are estimated means, and accounting for the correlation of multiple measurements from the same person ('repeated measurements were nested within participants') might have reduced the variability.
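As a rough illustration of where model-based 'estimated means' can come from, here is a minimal linear mixed model sketch with simulated data. The column names, effect sizes, and model specification are assumptions for illustration, not the authors' actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data (invented, not the trial data): one row per
# participant per time point, so repeated measurements are nested within participants.
rng = np.random.default_rng(0)
rows = []
for group in ("CBT", "control"):
    for pid in range(57):
        baseline = rng.normal(45, 12)
        for time in ("T0", "T1", "T2"):
            effect = -10 if (group == "CBT" and time != "T0") else 0
            rows.append({"id": f"{group}{pid}", "group": group, "time": time,
                         "cis_fatigue": baseline + effect + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Mixed model with a random intercept per participant; the fitted fixed effects
# yield model-based ('estimated') means per group and time point rather than raw means.
model = smf.mixedlm("cis_fatigue ~ group * time", df, groups=df["id"])
result = model.fit()
print(result.summary())
```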

But in my view that would not explain why the SEs are the same for 9 out of 10 outcome measures post-treatment. It also doesn't help explain why they report an SE for the baseline values instead of an SD.

EDIT: An intention-to-treat analysis might explain why they report modelled results, but it is still odd to report the SE for baseline values, and it would not explain the similarities in SE between the groups.

I might be wrong, but I still think an error is the most likely explanation.
 