Efficacy of therapist-delivered transdiagnostic CBT for patients with persistent physical symptoms in secondary care: an RCT, 2021, Chalder et al

The results are actually very clear that the intervention didn't work. The primary outcome, the between-arm difference in WSAS at 52 weeks, had a 95% confidence interval of −3.44 to 0.48, which excludes what the authors themselves defined as the minimum clinically important difference. None of the secondary outcomes showed a significant difference either, even though patients in the control group received no additional intervention. Yet the authors claim: "We have preliminary evidence that TDT-CBT + SMC may be helpful for people with a range of PPS."
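To spell out the arithmetic, here is a minimal sketch in Python (my own illustration, nothing from the paper, and assuming a negative between-arm difference favours the intervention): the reported interval contains zero, and you can check whether a given MCID falls outside it. The MCID value below is a placeholder only, as I don't have the authors' exact figure to hand; the CI limits are the ones quoted above.

def ci_excludes(value, ci_low, ci_high):
    # True if `value` falls outside the interval [ci_low, ci_high]
    return value < ci_low or value > ci_high

# 95% CI for the between-arm WSAS difference at 52 weeks, as quoted above
ci_low, ci_high = -3.44, 0.48

hypothetical_mcid = -3.6  # placeholder only, NOT the trial's actual MCID figure
print(ci_excludes(hypothetical_mcid, ci_low, ci_high))  # True: an effect of that size lies outside the CI
print(ci_excludes(0.0, ci_low, ci_high))                # False: zero difference lies inside the CI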
 

The point is also that this was a major trial funded after other trials had provided "preliminary evidence." It wasn't, I assume, supposed to produce more preliminary evidence but actual, actionable, definitive evidence, which of course it didn't produce.

Has anyone seen a minimal clinically important difference figure for the PHQ-15 measure?
 
Good catch. Classic outcome-switching.
Yes, the protocol says ...
Outcomes will be assessed at 9, 20, 40- and 52-weeks post randomisation. Efficacy will be assessed by examining the difference between arms in the primary outcome Work and Social Adjustment Scale (WSAS) at 52 weeks after randomisation. Secondary outcomes will include mood, symptom severity and clinical global impression at 9, 20, 40 and 52 weeks. Cost-effectiveness will be evaluated by combining measures of health service use, informal care, loss of working hours and financial benefits at 52 weeks.
Makes no suggestion of WSAS being a secondary outcome at any point.
 
We have preliminary evidence that TDT-CBT + SMC may be helpful for people with a range of PPS. However, further study is required to maximise or maintain effects seen at end of treatment.
[my bold]

This seems to be their standard get-out-of-jail card: we didn't prove anything useful, but we'll spin it so it sounds like we only ever intended to lay the ground for another (worthless?) study anyway.
 

They were always going to measure WSAS at the other time points but designated the 52-week measurement as the primary outcome. I'm not sure whether the other time points would automatically be inferred to be secondary outcomes, or just to have no defined category. And I'm not sure of the same for the other time points of the designated secondary outcomes: were they all secondary outcomes, or just the 52-week time point? Researchers sometimes seem loosey-goosey about whether something is a primary outcome measure (WSAS) or the primary outcome at a specific time point (WSAS at 52 weeks).
 
I thought a major objective of a protocol was to pin down such detail and so avoid ambiguity, especially on something as crucial as whether a measure is a primary or secondary outcome at a given point in the process. If anything that important can be reinterpreted or misinterpreted (possibly deliberately) after a trial has started, then surely the protocol has failed in its objective.
 

That is, in fact, the intent. But it seems the format can still leave it hard to pin people down on some of these things.
 
Honestly, whoever gave approval for WSAS (or whatever the acronym is, I won't even bother getting it right) being a primary outcome deserves to be fired, from a cannon, into the Sun, and also mostly out of a job, Sun cannon or not.

Seriously, who are these clowns? Have they no pride in their work at all? Are they just there, day in, day out, going through the motions as if none of this mattered? As if none of it had any real-life impact? Who are these clowns, and what is wrong with not only them but with the people who decided they should be making any decision more important than which condiment goes first in the sandwich someone ordered?
 
I think this is their post-PACE grifting strategy. Everything is now 'preliminary' and 'needs more research'. Being 'definitive' is a huge problem.

PACE backfired because it was hyped and sold as definitive. As a result, too many important people and institutions took note when it was obliterated, and it has turned into a grinding, comprehensive, and public loss for them. They wouldn't want to repeat that with any other conditions.

Better to fly under the radar and collect funding for studies that were never intended to mean anything. Clearly there is no shortage of gullible funders. You don't even need to manufacture a positive result, apparently.

It's less than they had hoped for, but it will pay the bills.
 
From @dave30th's blog:
Anyone who takes the time to review the paper should be mystified by this conclusion. This full-scale trial was approved because a lot of earlier research, as outlined in the protocol, had produced ample “preliminary evidence” of the kind mentioned in the conclusion. The protocol, unless I misread it, did not propose to produce more “preliminary evidence” that the TDT-CBT intervention “may be helpful.” PRINCE Secondary was presented in the protocol and received funding based on the notion that it would produce hard data about “the efficacy and cost-effectiveness” of the intervention. (The Psychological Medicine paper did not include the “cost-effectiveness” data.)

It should be noted that “helpfulness” is not the same as “efficacy” and is not defined in the protocol or the trial itself. An intervention might be “helpful” in some way as a supportive strategy while having no “efficacy” as an actual treatment. In this trial, the method of assessing the “efficacy” of the treatment was clearly designated; the results did not achieve that metric, so the treatment cannot be described as “efficacious.” As a vague stand-in, “helpfulness” sounds positive but can mean more or less anything—as it seems to here.
Why does the scientific establishment allow researchers to behave like estate agents? (That's "real estate agents" in the US, I believe.) The abstract's conclusion is like reading an estate agent's spun description of a dump that is literally falling down as 'needs new wallpaper'. What the hell are they playing at? I feel furious with them for letting all this crap slide past, for YEARS, without most of them doing more than raising an eyebrow, even AFTER the problems are pointed out to them.

Thank God for the Dave Tullers and Jonathan Edwardses, without whom we would be royally, comprehensively screwed, but sometimes it feels like they are King Canute :( I just despair at the ever-flowing tide of crap that keeps rising and drowning us all.
 
PACE backfired because it was hyped and sold as definitive.

Interesting theory. They later claimed at some point that they didn't mean to claim that PACE was "definitive," and that they'd only said "definitive" one time--that it wasn't their overall opinion. Or something like that. No one funds major trials after small trials to get "preliminary results." It is very disturbing that the journals allow them to make these claims when the trial obviously did not produce the results they expected.
 
They later claimed at some point that they didn't mean to claim that PACE was "definitive," and that they'd only said "definitive" one time--that it wasn't their overall opinion. Or something like that.


Riiight. £5m on a trial that wasn't meant to be "definitive", funded in part by the DWP.

If that's the case, someone needs to be asking why the heck they were shelling out that kind of dosh.
 
Yeah, I think after PACE their best strategy is to avoid any showdowns. As long as funders are gullible enough, they may well be able to get funding even for 'major' trials; then they just tell the funders what they want to hear, but otherwise keep pretty quiet: just publish the paper rather than run the whole media campaign that came after PACE.
 