Post-Hospitalisation COVID-19 Rehabilitation (PHOSP-R): A randomised controlled trial of exercise-based rehabilitation, 2025, Daynes et al

Shouldn't changes from the protocol require further ethical approval, as well as being reported and explained in any write-up?

I'm not sure that changing a reference would automatically require new ethical approval; it would likely depend on how substantive the change is. I haven't looked at the protocol at this point, but do we know if/what changes occurred? But yes, if the protocol cited a different MCID, they should at the very least explain that in the paper. I think saying that new research has come out would be considered a perfectly normal explanation (unless they themselves cooked up and published a shady new analysis in order to get a better advantage in the trial itself).
 
Changes in protocol are acceptable only if:

1) there is good reason for it,

2) it is reported and justified in full,

3) the calculations and results for the original protocol are also reported in full, and any differences in outcomes (between the protocols), and their implications, are properly acknowledged and accounted for.
If not, then protocol changes are not acceptable.
 
Thanks for highlighting (the flaws with) this study, David.

Minor issue:
The face-to-face intervention group had a drop-out rate of 29% and the remote intervention group had a drop-out rate of 39%. These drop-out rates are quite high.
I got lower drop-out rates: 11/56 (20%) in the face-to-face group and 17/63 (27%) in the remote group.

Strange that they don't report the data for the primary outcome of the control group. I suspect that the control group decreased based on the data that they do report.
  • The face-to-face rehabilitation group improved from 285 m to 312 m. So that is an average increase of 27 meters, lower than the MCID.
  • The remote rehabilitation group improved from 353 m to 388 m, an increase of 35 meters, similar to the MCID.
But for the comparison with the control group, they report increases of 55 m and 34 m respectively.

A 2013 study called “Age-specific normal values for the incremental shuttle walk test in a healthy British population” found that the average distance walked during the ISWT by those in their 40s, 50s, 60s, and over 70 were, respectively, 824 meters, 788 meters, 699 meters, and 633 meters. By comparison, those in the face-to-face group increased from 285 to 312 meters, and those in the remote group from 353 to 388 meters.
That probably says it all. Quite surprising and sad that these patients weren't able to improve more.
 
Had a look at the protocol which said:

The sample size is calculated on the ISWT (primary outcome) with a change of 50m at 90% power, with a standard deviation of 72 m and a 0.05 type 1 error as previously documented in the literature as the minimum important difference and variance of the ISWT [14, 37].
So 50m was originally viewed as the minimum important difference?

Reference 14 is:
Minimum clinically important improvement for the incremental shuttle walking test | Thorax
Which says:
The minimum clinically important improvement for the ISWT is 47.5 m. In addition, patients were able to distinguish an additional benefit at 78.7 m.
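As a sanity check, the quoted design figures (50 m difference, SD 72 m, 90% power, 0.05 two-sided alpha) can be run through the standard two-sample normal-approximation sample-size formula. This is my reconstruction, not necessarily the exact method the trialists used:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, power=0.90, alpha=0.05):
    """Per-group sample size for detecting a mean difference `delta`
    between two groups with common standard deviation `sd`,
    using the normal-approximation formula for a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~1.28 for 90% power
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

print(n_per_group(50, 72))    # 44 per group (before allowing for drop-out)
print(n_per_group(47.5, 72))  # 49 per group with the Thorax MCID of 47.5 m
```

So whether 50 m or the Thorax figure of 47.5 m is used as the MCID makes only a modest difference to the required sample size under these assumptions.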
 
The paper says they added multiple factors in the statistical model which weren't mentioned in the protocol:
Independent variables included the interaction between time point and treatment group (face-to-face vs. usual care and remote vs. usual care), with age, sex, BMI, time since hospitalization, number of comorbidities, WHO severity index, and recruiting site included as fixed independent variables in the model.

The protocol says they would correct for the false discovery rate (FDR):
The FDR adjustment for multiple comparisons will be applied to multiple comparisons
But I don't see this in the paper.
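For context, the FDR correction the protocol refers to is usually the Benjamini-Hochberg procedure, which can be sketched like this (the p-values here are illustrative, not taken from the paper):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (controls the false
    discovery rate across a family of comparisons)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):        # walk from the largest p downwards
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)  # enforce monotonicity
        adjusted[i] = prev
    return adjusted

print(bh_adjust([0.01, 0.04, 0.03, 0.005]))
# [0.02, 0.04, 0.04, 0.02]
```

An adjusted p-value below 0.05 then indicates significance at a 5% false discovery rate. If the paper applied this, the adjusted (q) values should have been reported alongside the raw p-values.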
 
I got lower drop-out rates: 11/56 (20%) in the face-to-face group and 17/63 (27%) in the remote group.

I was going with the numbers they used in the per-protocol analysis: "140/181 participants were included in the per-protocol analysis, 40/56 (71%) face-to-face, 38/62 (61%) remote, 60/62 (98%) usual care completing 75% of the intervention and the follow-up measures." From this, it would mean that 29% and 39% in the face-to-face and remote groups, respectively, were lost to follow-up. It's true that in the CONSORT diagram they cite the numbers you mentioned. But that doesn't make a lot of sense. If those people completed the study, why were they not included in the per-protocol analysis? Those not included in a per-protocol analysis would be considered lost to follow-up. Can anyone make sense of this for me? I'm not much of a statistician, so I can't figure it out.
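The two sets of figures come apart like this; a quick check using only the numbers quoted above (note the remote denominator is 63 in the CONSORT diagram but 62 in the per-protocol quote):

```python
# Drop-out as implied by the CONSORT diagram: (withdrew, randomised)
consort = {"face-to-face": (11, 56), "remote": (17, 63)}
# Drop-out as implied by the per-protocol analysis: (excluded, randomised)
per_protocol = {"face-to-face": (56 - 40, 56), "remote": (62 - 38, 62)}

for label, arms in [("CONSORT", consort), ("per-protocol", per_protocol)]:
    for arm, (dropped, n) in arms.items():
        print(f"{label} {arm}: {dropped}/{n} = {dropped / n:.0%}")
# CONSORT face-to-face: 11/56 = 20%
# CONSORT remote: 17/63 = 27%
# per-protocol face-to-face: 16/56 = 29%
# per-protocol remote: 24/62 = 39%
```

So both sets of percentages are arithmetically consistent with their own source; the gap appears to be one of definition, since (per the quoted sentence) the per-protocol set also required completing 75% of the intervention, not just the follow-up measures.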
 