PRINCE Secondary: transdiagnostic CBT is not effective for persistent physical symptoms, 2021, Tack and Tuller

Andy

Retired committee member
Abstract
PRINCE Secondary was a randomised trial to test the efficacy and cost-effectiveness of therapist-delivered, transdiagnostic cognitive behavioural therapy (TDT-CBT) for patients with persistent physical symptoms (PPS) (Chalder et al., 2021). In total, 324 PPS patients were randomised to receive either TDT-CBT plus standard medical care (SMC) or SMC alone. The trial's primary outcome was the mean score on the Work and Social Adjustment Scale (WSAS) at follow-up assessment 52 weeks post-randomisation. The trial also included several secondary outcomes.

In the conclusion of the abstract, the authors state that the trial provides ‘preliminary evidence that TDT-CBT + SMC may be helpful for people with a range of PPS’. This statement is misleading since it ignores the null findings of the trial's primary outcome. Although the intervention group reported a modest benefit on the WSAS over the SMC group, it was not statistically or clinically significant. For the WSAS, the authors designated a reduction of 3.6 points as the minimum clinically important difference (MCID). At 12 months, the mean WSAS score in the intervention group was 1.48 points lower than in the SMC group, with a 95% confidence interval of −3.44 to 0.48. The confidence interval thus excluded what the authors had predefined as the MCID. This indicates that the trial was adequately powered to provide strong and actionable evidence of efficacy but that it failed to do so.
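The arithmetic here can be sanity-checked with a short sketch (the MCID and confidence-interval figures are the ones quoted above from the paper; the variable names are just for illustration):

```python
# Figures as reported for the WSAS primary outcome at 52 weeks.
mcid = -3.6                      # predefined minimum clinically important difference
mean_diff = -1.48                # mean difference, TDT-CBT + SMC vs SMC alone
ci_low, ci_high = -3.44, 0.48    # 95% confidence interval for the difference

# A clinically important benefit is ruled out if the MCID lies outside the CI.
excludes_mcid = not (ci_low <= mcid <= ci_high)

# The difference is not statistically significant if the CI spans zero.
crosses_zero = ci_low < 0 < ci_high

print(excludes_mcid)  # True: the CI excludes the predefined MCID
print(crosses_zero)   # True: the CI includes zero, so no significant difference
```

Both checks come out true, which is the basis for the claim that the result was neither clinically nor statistically significant.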

Open access, https://www.cambridge.org/core/jour...cal-symptoms/C6086F0F86BB3EA1A19DBD49C1617847
 
This looks to be a response, at least in part, to Michiel and David's letter.

https://www.cambridge.org/core/jour...al-symptoms/DD2770CFD116BF4B3405499630491299#

"Persistent physical symptoms (PPS) are associated with distress, disability, and high costs within the working population (Bermingham, Cohen, Hague, & Parsonage, 2010)."

Interesting that they avoid stating exactly how much, given their previous repeated misquoting of the actual costs.

see https://www.s4me.info/threads/trial...ts-to-the-nhs-david-tuller.20282/#post-340693
 
Invited Letter Rejoinder (response to David and Michiel) said:

Knowledge gained from this preliminary trial will influence the planning of future studies which may control for therapist time and attention.

So this large trial was only a preliminary trial? Is this what RCTs that fail to deliver the expected results are usually called?

We do not agree that we overstated the findings. The abstract states that TDT-CBT may be helpful for people with a range of PPS and that further work is needed to maintain or maximise effects. The paper's conclusions state ‘Our transdiagnostic model and treatment of PPS was not superior to treatment as usual at the final follow-up (52 weeks)’ (Chalder et al., 2021).

But isn't that the point? The abstract's conclusion doesn't mention the null findings at 52 weeks/the final follow-up.

Invited Letter Rejoinder (response to David and Michiel) also said:
We stated in the protocol that the models used to assess the primary outcome would contain all four follow-up time points as the dependent variables. This included the WASAS at 20 weeks.

I admit I'm confused about the primary outcome being the WSAS results at each of the four follow-up time points. From skimming the paper and protocol, I also don't understand how the "follow-up time points" relate to the duration and end of the intervention (active treatment or standard medical care).

Anyway, if the primary outcome was the WSAS at all four follow-up time points, then shouldn't the results at all of them also be reported, both in the abstract and in the paper's conclusion?

This isn't entirely clear in the paper's conclusion either. The conclusion acknowledges the null findings at the final follow-up (52 weeks), but the immediately following lines imply that (some) secondary outcome measures are evidence of a benefit, and that the primary outcome measure at the end of treatment (20 weeks) adds further evidence.

PRINCE Secondary paper's conclusion said:
Nevertheless, transdiagnostic CBT was associated with improvements in other secondary clinical outcome measures including symptom severity and global improvement. Our intervention also showed an advantage over SMC in changing WSAS at 20 weeks, which was when the active treatment ended.

This study needs to be further developed and assessed in a multi-centre study with a larger group of therapists to assess its generalisability.
Why? Is this not a large enough trial? If it was badly designed, why develop it further instead of running a study with a better design?

(Edited for clarity.)
 
I just noticed this:

No consent was provided for sharing data with third parties. Once papers have been published data will be anonymised and deposited in a repository. Bona-fide researchers can apply to use the data but are required to clearly specify the research question a priori.

“Bona-fide researchers” meaning all data requests from filthy plebs (ie patients) will be denied.
 
I think it means more than that. I would read it as only those researchers who believe in the way we express our results.

Agreed. It’s also telling that they deliberately chose not to do what most trials these days do which is to obtain written consent for data sharing.

Not that it’s even necessary to reanalyse the original dataset in this case. Unlike in PACE where the data analyses were fraudulent, here it’s transparent from the results section that the trial did not meet its primary endpoint and that most of the secondary outcomes were non-significant also. The only issue here is that the abstract deliberately misrepresents the results because they know that most busy clinicians will only read the abstract.
 
Previous post said:
Agreed. It’s also telling that they deliberately chose not to do what most trials these days do which is to obtain written consent for data sharing.

Not that it’s even necessary to reanalyse the original dataset in this case. Unlike in PACE where the data analyses were fraudulent, here it’s transparent from the results section that the trial did not meet its primary endpoint and that most of the secondary outcomes were non-significant also. The only issue here is that the abstract deliberately misrepresents the results because they know that most busy clinicians will only read the abstract.

The abstract issue should really be handled by the journal. It is about time they started taking responsibility for the output they publish.

With PACE they had clearly moved from the protocol as you say so it needed an analysis based on the protocol. But there were also some interesting things that came from the data such as seeing patients that both improved and got worse with the CFQ and correlations between the different measures. So I do think there could be some interesting things in the data but probably not interesting enough for anyone to spend the time looking.
 
Previous post said:
The abstract issue should really be handled by the journal. It is about time they started taking responsibility for the output they publish.
Money spent on quality control adds costs without increasing revenue: good for science, bad for business. The journals have no incentive to act when clearly not enough people care. In fact most seem content with the lowest possible quality standards; it allows below-average researchers to thrive, whereas in a functioning system many would simply have to find a different career.

In psychology they are content with the fiction that they find 100% of what they set out to find, even when the findings contradict other findings. Everyone gets a participation trophy, and one's value and reputation are based on how many participation trophies one can put on display.

To drive the point further, clinical psychology is almost universally moving in the direction of lowering standards even further to preserve the damn biopsychosocial ideology. There is no interest in fixing any of this within the profession, only from people like us who can see the harm it causes; and with the field being completely insular, they don't listen to anyone but themselves.
 
From the Invited Letter Rejoinder:

"...further work is needed to maintain or maximize effects."

From what we've seen of how pwME are treated by disability systems and health care, governments have little appetite for funding our community or those similar. Therefore, it appears unlikely that long-term funding would be available to "maintain or maximize effects", that is, for the individuals in these studies.

On the other hand, if the intent is not to provide longer-term support to previous study subjects, but to produce more studies on how to "maintain and maximize effects", then we may well see those.

ETA: added clarification.
 