Efficacy of therapist-delivered transdiagnostic CBT for patients with persistent physical symptoms in secondary care: an RCT, 2021, Chalder et al

Joan Crawford

Senior Member (Voting Rights)
The spin on this is quite shocking:

https://www.cambridge.org/core/jour...rolled-trial/CEDD9B7597902C283BBB7CA7B43A81C7

Efficacy of therapist-delivered transdiagnostic CBT for patients with persistent physical symptoms in secondary care: a randomised controlled trial
Published online by Cambridge University Press: 31 May 2021

Trudie Chalder, Meenal Patel, Matthew Hotopf, Rona Moss-Morris, Mark Ashworth, Katie Watts, Anthony S. David, Paul McCrone, Mujtaba Husain, Toby Garrood, Kirsty James and Sabine Landau

Abstract
Background
Medically unexplained symptoms otherwise referred to as persistent physical symptoms (PPS) are debilitating to patients. As many specific PPS syndromes share common behavioural, cognitive, and affective influences, transdiagnostic treatments might be effective for this patient group. We evaluated the clinical efficacy and cost-effectiveness of a therapist-delivered, transdiagnostic cognitive behavioural intervention (TDT-CBT) plus (+) standard medical care (SMC) v. SMC alone for the treatment of patients with PPS in secondary medical care.

Methods
A two-arm randomised controlled trial, with measurements taken at baseline and at 9, 20, 40- and 52-weeks post randomisation. The primary outcome measure was the Work and Social Adjustment Scale (WSAS) at 52 weeks. Secondary outcomes included mood (PHQ-9 and GAD-7), symptom severity (PHQ-15), global measure of change (CGI), and the Persistent Physical Symptoms Questionnaire (PPSQ).

Results
We randomised 324 patients and 74% were followed up at 52 weeks. The difference between groups was not statistically significant for the primary outcome (WSAS at 52 weeks: estimated difference −1.48 points, 95% confidence interval from −3.44 to 0.48, p = 0.139). However, the results indicated that some secondary outcomes had a treatment effect in favour of TDT-CBT + SMC with three outcomes showing a statistically significant difference between groups. These were WSAS at 20 weeks (p = 0.016) at the end of treatment and the PHQ-15 (p = 0.013) and CGI at 52 weeks (p = 0.011).

Conclusion
We have preliminary evidence that TDT-CBT + SMC may be helpful for people with a range of PPS. However, further study is required to maximise or maintain effects seen at end of treatment.

Keywords
Cognitive behavioural therapy (CBT)
medically unexplained symptoms
persistent physical symptoms
randomised controlled trial (RCT)
secondary medical care
transdiagnostic

Open access paper.
 
However, the results indicated that some secondary outcomes had a treatment effect in favour of TDT-CBT + SMC with three outcomes showing a statistically significant difference between groups.

I assume they have not adjusted their significance tests for the number of secondary outcomes they are looking at - they don't mention doing so. I'm not sure what is standard practice, but if they are claiming results on that basis it looks dodgy.
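For a sense of how much a multiplicity adjustment would matter here, below is a minimal sketch of a Holm-Bonferroni correction applied to the three p-values the paper highlights. The family size is purely an assumption for illustration (20 tests, i.e. 5 secondary outcomes at 4 time points, with the unreported p-values stood in by placeholders); the paper itself gives no adjusted analysis.

```python
# Illustrative only: the paper reports unadjusted p-values, and the true
# number of comparisons made is not stated, so the family size below
# (20 tests) is an assumption, not the authors' analysis.
def holm_bonferroni(pvals, alpha=0.05):
    """Return a list of booleans: which hypotheses survive Holm-Bonferroni."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one p-value fails, all larger ones fail too
    return reject

# The three "significant" p-values from the paper (WSAS at 20 weeks,
# PHQ-15 and CGI at 52 weeks), padded with placeholders for the rest
# of a hypothetical 20-test family.
reported = [0.016, 0.013, 0.011]
family = reported + [0.5] * 17
print(holm_bonferroni(family))
# With m = 20, even the smallest p-value (0.011) would need to clear
# 0.05 / 20 = 0.0025, so none of the three survives.
```

Even under a much smaller assumed family (say the five registered secondary outcomes at one time point, threshold 0.05 / 5 = 0.01), none of the three reported p-values would pass.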


 
That makes sense - I just read it as meaning a 'secondary' part of the PRINCE trial.

As you would expect, they don't seem to take too much notice of the protocol when they write the paper and decide what to claim as secondary outcomes.

The secondary outcomes for PRINCE Secondary were as follows (according to the trial registration https://clinicaltrials.gov/ct2/show/study/NCT02426788); the last two were added late on.
    1. Persistent Physical Symptom Questionnaire [ Time Frame: 52 weeks post randomisation ]
      Measures severity, distress, interference and problematic nature of PPS
    2. Patient Health Questionnaire-15 (PHQ-15) [ Time Frame: 52 weeks post randomisation ]
      Measures physical symptoms severity
    3. Patient Health Questionnaire-9 (PHQ-9) [ Time Frame: 52 weeks post randomisation ]
      Measures mood
    4. Generalized Anxiety Disorder-7 (GAD-7) [ Time Frame: 52 weeks post randomisation ]
      Measures generalised anxiety
    5. Clinical Global Impression (CGI) [ Time Frame: 52 weeks post randomisation ]
      Measures patient's perception of their general health improvement


    6. Client Service Receipt Inventory (CSRI) [ Time Frame: 52 weeks post randomisation ]
      Measures health care service receipt, direct and indirect costs of illness, and cost-effectiveness of interventions

    7. EuroQol-5D (EQ-5D) [ Time Frame: 52 weeks post randomisation ]
      Measures health outcome

    8. Cognitive Behavioural Responses Questionnaire [ Time Frame: 52 weeks post randomisation ]
      Measures beliefs and behaviours

    9. Acceptance scale [ Time Frame: 52 weeks post randomisation ]
      assesses degree of acceptance of difficult symptoms

The ones in bold (the first five) are what they report.

In the paper they claim:
WSAS measured at 9, 20- and 40 weeks post randomisation were secondary outcomes.
But I don't see this in the list of secondary outcomes.

In the published protocol https://bmcpsychiatry.biomedcentral.com/articles/10.1186/s12888-019-2297-y they have a similar list, though without the ones they added later in the registration info.


Secondary outcome measures
  1. Physical Symptoms: the Patient Health Questionnaire 15 (PHQ15) will be used to measure somatic symptoms [25]. Each item is rated on a 3-point Likert scale (0 = not bothered at all; 1 = bothered a little; 2 = bothered a lot) and the total score can range from 0 to 30, where a higher score indicates higher symptom severity. The PHQ15 is a brief, well-validated tool for detecting somatisation [26].

  2. Depression: the Patient Health Questionnaire-9 (PHQ-9) will be used to monitor and measure the severity of depression in participants [27]. Each item is rated on a 4-point Likert scale (0 = not at all; 1 = several days; 2 = more than half the days; 3 = nearly every day) and the total score can range from 0 to 27, where a higher score indicates greater depressive severity. The PHQ-9 is a reliable and well-validated measure of depression severity [26].

  3. Anxiety: the Generalised Anxiety Disorder-7 (GAD-7) questionnaire will be used to measure the severity of GAD in participants [28]. Each item is rated on a 4-point Likert scale (0 = not at all; 1 = several days; 2 = more than half the days; 3 = nearly every day) and the total score can range from 0 to 21, where a higher score indicates greater anxiety. The GAD-7 has demonstrated reliable psychometric properties in the measurement of anxiety in the general population [29].

  4. The main presenting symptom: the Persistent Physical Symptom Questionnaire comprises three scales to measure (i) severity, (ii) distress and (iii) the problematic nature of the patient's main presenting symptom (e.g., chest pain). Each item is scored on a 10-point scale (from 1 = not at all to 10 = extremely). Average scores from the three scales will be used to calculate an overall interference score. This measure was adapted from the Chest Pain Questionnaire, which has been previously used for patients with non-cardiac chest pain [30].

  5. Global Outcome: the adapted Clinical Global Impression (CGI patient) will be used to measure global change. It has been used in many previous trials of psychosocial treatments [31]. This is rated on a 9-point Likert scale where 1 is completely recovered and 9 is could not get any worse.

  6. Costs (Client Service Receipt Inventory): the self-report Client Service Receipt Inventory will be used to assess health service use, informal care, lost work time and financial benefits [32].

  7. EuroQoL 5D: the EQ-5D is a reliable and valid tool to measure health-related quality of life [33]. Each dimension (mobility, self-care, usual activity, pain/discomfort and anxiety/depression) is rated on 5 levels (1 = no problems; 2 = slight problems; 3 = moderate problems; 4 = severe problems; 5 = extreme problems). The participant will also rate their own perception of their current health on a visual analogue scale ranging from 0 to 100 (0 = the best health you can imagine to 100 = the worst health you can imagine).
 
Still lumping different groups together. "It's all one condition, eh?" By what definition?
Defined by symptoms? Symptoms aren't even the primary outcome, and they're only asked about generically. What a bunch of crap. Any process that approves such ridiculous studies needs to be rebuilt from scratch; it is completely unfit for purpose, and frankly everyone involved has no business working in such roles.
 
Started looking to see what the journal's policy is on after-the-fact deviations from the original protocol, but I'm not up to reading the full text of their publication ethics policy and the COPE policies they claim to be bound by ( https://www.cambridge.org/core/journals/psychological-medicine/information/publishing-ethics ). I am assuming, though, that the failure of peer review to pick up the outcome swapping means the editors breached their own publication ethics policy by publishing the article as it stands.
 
I wonder if it's just a lack of competence. They have no advantage in doing that, so it feels like a lack of rigour, and shows they don't believe in (or read) their own protocols.

I agree. They are savvy and competent enough to know what needs to be in a proposal for it to be accepted. So they write something acceptable and then go off and do the work (not sure at this stage whether they follow the protocol and ignore parts of it in the write-up, or only do half the protocol).

Either way what makes it to the report stage is what was 'meant' to be there all along.
 
I wonder if it's just a lack of competence. They have no advantage in doing that, so it feels like a lack of rigour, and shows they don't believe in (or read) their own protocols.

I don't know, but it is certainly worth a letter bringing it to the journal's attention. These people seem incapable of doing studies without outcome-switching, made-up endpoints, and the like. Or claiming success based on failed primary outcomes but marginal secondary outcome measures. It really is mind-blowing.
 
Started looking to see what the journal's policy is on after-the-fact deviations from the original protocol, but I'm not up to reading the full text of their publication ethics policy and the COPE policies they claim to be bound by ( https://www.cambridge.org/core/journals/psychological-medicine/information/publishing-ethics ). I am assuming, though, that the failure of peer review to pick up the outcome swapping means the editors breached their own publication ethics policy by publishing the article as it stands.

absolutely.
 
Wow, this is really a textbook example of how to misrepresent results: highlight a couple of outcomes, out of a long list of secondary outcomes, that reached statistical significance, even though the differences were very minor and not clinically significant.

Also: this study had an A versus A + B design. Patients in the control group got no intervention beyond standard medical care, therapists were not blinded, and subjective outcomes were used; and still there was no meaningful difference between groups.
 
And the WSAS is a 40-point scale. So even if the 1.5-point difference for the primary outcome were statistically significant, it would be unlikely to be clinically significant. This is just trash. I'm going to write to the journal after I finish my post about Peter White's most recent GET defense in the Journal of Psychosomatic Research.
 
Wow, this is really a textbook example of how to misrepresent results: highlight a couple of outcomes, out of a long list of secondary outcomes, that reached statistical significance, even though the differences were very minor and not clinically significant.

Also: this study had an A versus A + B design. Patients in the control group got no intervention beyond standard medical care, therapists were not blinded, and subjective outcomes were used; and still there was no meaningful difference between groups.

And yet more research is needed because they have some indications of improvement in vague secondary outcomes.
 
even if the 1.5 difference for the primary outcome were statistically significant, it would be unlikely to be clinically significant.
The authors themselves have defined a minimum clinically important difference for the WSAS at 3.6 points. So 1.5 isn't even close. They measured the WSAS at 9 weeks (difference of 0.19), at 20 weeks (difference of 2.41), at 40 weeks (difference of 1.32), and at 52 weeks (difference of 1.48).

At no point was the difference close to what they defined as the minimum clinically important difference (a difference of 3.6 points). Yet the authors were allowed to highlight the difference of 2.41 at 20 weeks because it reached statistical significance. I don't understand why the peer reviewers allowed this.
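Putting the figures quoted above side by side makes the point starkly. A quick sketch comparing each reported WSAS between-group difference with the 3.6-point minimum clinically important difference the authors themselves defined:

```python
# WSAS between-group differences at each measurement point, as quoted
# in the discussion above, versus the authors' own MCID of 3.6 points.
MCID = 3.6
differences = {9: 0.19, 20: 2.41, 40: 1.32, 52: 1.48}

for weeks, diff in sorted(differences.items()):
    meets_mcid = diff >= MCID
    shortfall = MCID - diff
    print(f"week {weeks}: difference {diff}, "
          f"meets MCID: {meets_mcid} (shortfall {shortfall:.2f} points)")
# None of the four time points reaches the 3.6-point threshold; even the
# highlighted 20-week result falls short by more than a point.
```

Note that the largest difference (2.41 at 20 weeks, the one highlighted as statistically significant) is still barely two-thirds of the authors' own clinical-importance threshold.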
 