Trial Report Preliminary evaluation of a cognitive rehabilitation intervention for post-COVID-19 cognitive impairment: A pilot [RCT], 2025, Becker et al

forestglip

Preliminary evaluation of a cognitive rehabilitation intervention for post-COVID-19 cognitive impairment: A pilot randomized controlled trial

Jacqueline H. Becker, Eric Watson, Nadia Zubair, Fernando Carnavali, Emilia Bagiella, David Reich, Juan P. Wisnivesky

Background:
Despite the profound impact of “brain fog” and/or cognitive impairment in relatively young people with Long COVID, no interventions with demonstrated efficacy are currently available. We conducted a pilot randomized controlled trial investigating the preliminary outcomes of a cognitive rehabilitation (CR) intervention adapted for persons with post-COVID cognitive impairment.

Methods:
Participants were ≥18 years of age, English-speaking, had a history of SARS-CoV-2 infection, and had cognitive impairment on objective measures. Eligible participants were randomized to a 12-week CR intervention or a time- and attention-matched control arm. Objective and subjective cognitive functioning was assessed pre-intervention and within 2 weeks post-intervention, using validated neuropsychological measures across multiple domains. We compared pre- vs. post-intervention changes in cognitive scores in the intervention vs. control groups.

Results:
The mean change in the intervention group compared to the controls in measures of processing speed, learning, memory, language, and of executive function did not reach the threshold for futility. In comparison to controls, the intervention group self-reported significant improvements in cognitive functioning.

Conclusions:
We found that an adapted CR intervention for Long COVID may improve post-COVID cognitive impairment in comparison to a time- and attention-matched control group and should be evaluated in a larger trial.

Web | Neuropsychological Rehabilitation | Paywall
 
Edit: lesson learned. The futility concept flips the null hypothesis, so my interpretation of the abstract is wrong.
The mean change in the intervention group compared to the controls in measures of processing speed, learning, memory, language, and of executive function did not reach the threshold for futility.
So it was a complete bust.
We found that an adapted CR intervention for Long COVID may improve post-COVID cognitive impairment in comparison to a time- and attention-matched control group and should be evaluated in a larger trial.
No, you did not. You found that it did nothing. The horse is dead. Leave it alone.
 
The mean change in the intervention group compared to the controls in measures of processing speed, learning, memory, language, and of executive function did not reach the threshold for futility
Typo? This whole thing sure is futile. And I mean the whole thing, not just this yet-another-awful-study/propaganda.

Where the hell is it that all these people get the idea that you can just whip up some random bullshit program for a purpose and it will... just work? Why not do this for everything then? The old way. The pre-science way. What a complete waste of everything.
 
Cognitive rehabilitation (CR) interventions can improve cognitive and functional outcomes in various populations with cognitive impairment (CI). However, standard CR programmes may not be appropriate for persons with Long COVID with CI who frequently experience fatigue or post-exertional malaise (PEM), making it difficult to attend lengthy, in-person CR sessions with cognitively challenging exercises. While CR interventions for persons with Long COVID are being offered in clinical settings worldwide, very few tailored, evidence-based trials have tested their efficacy specifically for this population.

Pilot randomized controlled trials (RCTs) are the ideal context in which to test the efficacy of CR interventions. However, with smaller sample sizes they are often underpowered to detect statistically significant effects, even when an intervention is potentially beneficial. Given the urgent need for effective interventions for cognitive symptoms in Long COVID, a non-futility approach is particularly well-suited for evaluating pilot trial outcomes in this context. This approach offers an alternative framework by focusing on whether the observed outcomes are promising enough to justify a larger, definitive trial, rather than attempting to prove efficacy at an early stage. It sets a predefined threshold (under the null hypothesis) for what would be considered a potentially meaningful effect; if the results show that the effect of the intervention may exceed this threshold (i.e., the null hypothesis of efficacy is not rejected), the intervention can be deemed “not futile” and thus worthy of future trials.

The aim of this study was to conduct a pilot RCT investigating the preliminary outcomes of an adapted, evidence-based CR programme for persons with post-COVID CI. Our primary goal was to assess feasibility and signal of benefit; thus, we assessed futility to determine whether our intervention may ultimately prove effective when tested in a fully powered trial.

Using a futility design, we assumed (under the null hypothesis) that the intervention produces a ≥0.25 unit improvement (post vs. pre-intervention) in cognitive z-scores (i.e., a clinically significant difference) in the intervention vs. control arm for EF (i.e., that the intervention may be effective and therefore non-futile). Thus, rejection of the null hypothesis would suggest that the improvement in cognitive scores in the intervention vs. control group was <0.25 and lead to declaring the intervention futile; failing to reject the null would lead us to proceed with further studies. In the intention-to-treat analysis, we used a one-sided Student’s t-test (alpha level 0.2) to assess futility for a ≥0.25 unit increase in z-scores of the intervention by comparing the difference between pre- vs. post-intervention changes in cognitive scores in the intervention vs. control group (i.e., upper 80% confidence interval [CI] of the difference-in-differences [DID] < 0.25).
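The decision rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `futility_check` and the example change scores are hypothetical; only the margin (0.25), the one-sided alpha (0.2), and the upper-80%-confidence-bound rule come from the text.

```python
import numpy as np
from scipy import stats

def futility_check(change_tx, change_ctrl, margin=0.25, alpha=0.2):
    """One-sided futility test on the difference-in-differences (DID).

    H0 (the "non-futile" null): true DID >= margin.
    Futility is declared only when H0 is rejected, i.e. when the
    upper one-sided (1 - alpha) confidence bound of the DID falls
    below the margin. change_* hold per-participant post-minus-pre
    cognitive z-score changes.
    """
    tx = np.asarray(change_tx, dtype=float)
    ct = np.asarray(change_ctrl, dtype=float)
    did = tx.mean() - ct.mean()
    # Unpooled standard error of the difference in mean changes
    se = np.sqrt(tx.var(ddof=1) / tx.size + ct.var(ddof=1) / ct.size)
    df = tx.size + ct.size - 2
    upper = did + stats.t.ppf(1 - alpha, df) * se  # upper 80% bound
    return did, upper, bool(upper < margin)

# Hypothetical change scores (NOT trial data):
did, upper, futile = futility_check([0.1, 0.2, 0.3, 0.4],
                                    [0.0, 0.1, 0.2, 0.3])
```

Note the asymmetry the commenters discuss below the fold: a small, noisy sample widens the confidence bound, which makes it *harder* to declare futility, not harder to pass.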

We calculated the sample size for a one-sided futility hypothesis that the CR intervention achieves a clinically meaningful improvement in EF scores such that it can be further studied in larger trials. Thus, we calculated the sample size using a one-sided t-test, fixing the type II error (i.e., the probability of declaring “promising” a treatment that is in fact futile) at 0.20 and the type I error (i.e., the probability of declaring futile a treatment that is in fact “promising”) at 0.1 (one-sided). Based on the literature, we chose an effect size of 0.25 as the minimal improvement worthy of further study of the proposed CR intervention. The resulting sample size required for the trial was 25 subjects in each group, for a total of 50 subjects; to account for potential dropouts, we recruited 29 subjects per arm, for a total sample of 58.
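For reference, the standard normal-approximation formula behind a one-sided two-sample sample-size calculation is sketched below. The excerpt gives alpha = 0.1, beta = 0.2, and delta = 0.25, but does not state the assumed standard deviation of the change scores, so sigma is left as a free parameter; the example value sigma = 1 is an assumption and does not reproduce the paper's n = 25 per arm.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.1, beta=0.2):
    """Per-arm n for a one-sided two-sample comparison of means
    (normal approximation):

        n = 2 * (z_{1-alpha} + z_{1-beta})**2 * (sigma / delta)**2

    delta: minimal worthwhile effect; sigma: SD of the change scores.
    """
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha) + z(1 - beta)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# sigma = 1 is an illustrative assumption, not a value from the paper:
n = n_per_arm(delta=0.25, sigma=1.0)
```

Because n scales with (sigma / delta)**2, the paper's n = 25 per arm implies the authors assumed a change-score SD well below 1; that assumption is not stated in the excerpt.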

The mean change in the intervention group was greater than in controls on measures of processing speed, including the TMT-A (DID: 0.10), D-KEFS Color Naming (DID: 0.04), and D-KEFS Word Reading (DID: 0.11). Measures of learning and memory also improved more in the intervention group than in controls, including HVLT-R Total and Delayed Recall (DID: 0.15 and 0.01, respectively), BVMT-R Total and Delayed Recall (DID: 0.04 and 0.04, respectively), and BVMT-R Recognition Discrimination (DID: 0.50). The mean change in measures of language was greater in the intervention than the control group, including D-KEFS letter (DID: −0.15) and category fluency (DID: −0.09). The mean change in one measure of EF, D-KEFS Verbal Category Switching (DID: 0.18), was greater in the intervention group than in the control group, whereas other measures did not improve more with the intervention than with the control. The mean change in measures of attention and working memory was not improved in the intervention group. None of these changes was statistically significantly different between the intervention and control groups.

Finally, the mean change in the CFQ was significantly greater in the intervention group in comparison to controls (DID: 0.18). The analyses did not show futility for any cognitive measure (upper confidence interval for the DID <0.25) except for attention, working memory, and novel problem-solving. With respect to other variables, there was a statistically significant improvement in anxiety and fatigue in the intervention group in comparison to the control group (DID: −1.38 and −.025, respectively), while no significant improvement was found for depression or daily functioning in either group (DID: −0.88 and −0.44).

In this pilot RCT, we found that an adapted CR intervention for Long COVID may improve CI in comparison to a time- and attention-matched control group. That is, we did not find futility in any outcomes, suggesting that a larger effectiveness trial is warranted. Our completion rate of 88% also demonstrates that participants were able to complete the intervention. While our small sample size does not allow us to determine efficacy, it is notable that participants in the intervention group self-reported significant improvements in cognitive functioning compared to those in the control group. This suggests a potential positive impact of the intervention on participants’ perceptions of cognitive functioning, which has significant implications for quality of life.
 
Thank you @SNT Gatchaman - would you be able to post the info on the content of the interventions as well? They say they were attention-matched in the abstract, but that doesn’t tell us much.

While I can understand the use of futility as a screening tool to quickly and cheaply weed out treatments that are definitively (within our threshold of certainty) not effective, it feels like a design that makes it a lot more probable that any given intervention will proceed to a larger trial than under the regular superiority design.

The cynic in me wonders if that’s the true reason for the trial design, because we very rarely see BPS-researchers truly attempt to falsify their models. It’s better to have potential than nothing.
 
Participants and study personnel were blinded, with the exception of interventionists, who received the randomization results and assigned participants. Interventionists were neuropsychologists or neuropsychology doctoral students, supervised and trained by two licensed neuropsychologists. Interventionist fidelity was assessed through a pre-determined checklist of critical activities for each manualized session, rated by the interventionist at each session. An independent observer also audited and rated a random selection of 3 sessions for each interventionist.

The intervention consisted of a virtual, 12-week programme of 9 small-group (3-5 participants) and 3 individual sessions of either (a) a CR active intervention or (b) a time- and attention-matched control arm. The parallel arms were the CR active intervention + BrainHQ vs. the attention control programme + online puzzles. Participants in the active arm received 90-minute CR intervention sessions informed by evidence-based CR protocols. Protocols were adapted to meet the needs of patients with Long COVID by (a) being offered virtually to reduce the need for in-person visits and burdensome travel, (b) tailoring the content for applicability to Long COVID and living with chronic illness, (c) offering breaks throughout sessions to reduce fatigue and symptom exacerbation, and (d) including fatigue management training, woven in throughout sessions, to encourage pacing and prevent fatigue and PEM exacerbation. Homework included 20 hours of computerized cognitive training using BrainHQ (Posit Science, https://www.brainhq.com). Homework adherence was tracked via the BrainHQ platform for the intervention group and self-reported for the control group.

To maintain participant blinding, the control group was time- and attention-matched, including 9 small-group and 3 individual sessions of a manualized brain health didactic in addition to 20 hours of publicly available computer puzzles. Sessions included information about diet, sleep, mental health, and effects of stress, but did not emphasize or train cognitive skills or provide active coping strategies.

Intervention.jpg
 
Thank you!

Most of the differences were quite small. I wonder how much of that was some kind of learning effect if they presumably did tasks that resembled the actual tests?

It also seems like part of the intervention is similar to pacing - I wonder how much that affected the results, not by improving the underlying issue but by getting people to actually do less so they have fewer flares?
 