PACE trial TSC and TMG minutes released

This from Wessely's book on Clinical Trials...

This piece talks about a state of scientific "equipoise" between antidepressants and psychotherapeutic interventions.

Whether or not that's correct - that there is genuinely equal evidence for the efficacy of both treatments - I don't know. But people so quickly forget that there's a huge elephant in the room when you try to directly compare drugs and talking therapies. The drugs are trialled in fully blinded trials, whereas the talking therapies are trialled in non-blinded studies, so outcomes are likely to benefit enormously from expectancy effects.

Not a fair competition.

What does it take for people to notice that elephant in the room?
 
Are there any "immune or metabolic treatments" that are commonly used in other countries for ME/CFS? I'm just wondering why they mentioned them at all, if they really do deny any involvement of immune or metabolic processes in the disease?
 
Is this clinical trial design a fair comparison of a pharmacological intervention and CBT?

Immunologic and psychologic therapy for patients with chronic fatigue syndrome: a double-blind, placebo-controlled trial.

https://www.ncbi.nlm.nih.gov/pubmed/8430715
It seems better than most, I like the way they had a "clinical" control, that was designed to incorporate all the nonspecific effects you might get alongside CBT, such as support, validation, etc. Good find!

But there are no manuals or details given, you just have to trust the authors that they made the "clinical" control condition to be as persuasive as possible. In this instance, there was no significant difference between the treatment arms, but if there had been a difference, you'd need to know more about what was actually done in the "clinical" control to make sense of it.
 
It is an interesting study from 1993 and fairly big at 90 patients. I've only seen the abstract. So, an immunological treatment and CBT did no better than the control.

The lead author, AR Lloyd, is presumably the Lloyd from Sydney University, the one who did the good Dubbo study also. And, oddly enough, the same one who runs a CBT/GET clinic for CFS patients and believes he is a leading light of CFS treatment in Australia.
 
Could Wessely and co possibly have been referring to the "study" of which he said, after MS called it a "well conducted case series",

"You were in fact too kind to our study, Dr Sharpe. This was not a study at all; we were just trying to treat people (Butler et al 1991). We started this treatment at a time when the view was that CFS patients are untreatable; not only that but this kind of approach was considered harmful. We did everything we could, in a completely uncontrolled fashion using antidepressant drugs, and behaviour and cognitive therapy, just to demonstrate that something would work. This enabled us to get funding for a controlled trial."

I think it was. Strange how often this "study" was quoted.
 
Might be an interesting study to look at how many times in medicine a single poor quality (or at least a far from definitive) study somehow ends up being uncritically accepted and endlessly cited as the one that established a field/approach/idea/treatment.

The original paper on placebo for example, that has been cited many many times for a supposed 30% effect size.
 
Yea:

The powerful placebo
HK Beecher - Journal of the American Medical Association, 1955 - jamanetwork.com
Cited by 2257

I can't think of any other medical ones, but there's this from Psychology:

The Stanford Prison experiment:
Interpersonal dynamics in a simulated prison

C Haney, C Banks, P Zimbardo - 1972 - dtic.mil
Cited by 1591
(participants were assigned to the role of guard or prisoner, and the study reports the guards became cruel and the prisoners hopeless. The study was supposed to have shown that people act according to their group identity, but it turns out the researchers instructed the guards to be cruel!).
 
Yesterday I read that one of the prisoners admitted that he had faked an emotional breakdown (or a similar event, I don't remember the details).
 
It was clear from the original protocol that CBT and GET were expected to outperform APT - the hypotheses are unidirectional, asking whether CBT/GET were more effective than APT.

Primary objectives
  1. Is APT and SSMC more effective than SSMC alone in reducing (i) fatigue, (ii) disability, or (iii) both?
  2. Is CBT and SSMC more effective than APT and SSMC in reducing (i) fatigue, (ii) disability or (iii) both?
  3. Is GET and SSMC more effective than APT and SSMC in reducing (i) fatigue, (ii) disability, or (iii) both?
  4. Are the active rehabilitation therapies (of either CBT or GET) more effective than the adaptive approach of APT when each is added to SSMC, in reducing (i) fatigue and/or (ii) disability?
Is it likely that all the therapists and assessors would have been unaware of what was in the protocol?

Good spot! They cannot claim equipoise if the hypotheses are unidirectional.
 
I had not come across this equipoise term before, but from what I have read recently I think you can have equipoise even if you have a hypothesis that favours one treatment. As I understand it, equipoise is about the ethics of a trial. For a trial to be ethical there has to be genuine doubt, amongst those expected to know, whether A is better than B (or nothing). I think there remains genuine doubt for the PACE treatments since we still do not know. The problem with having a hypothesis that alters body language and thereby biases outcomes is, I think, separate.
 
So this quote from Sharpe could just mean he's not 100% certain CBT/GET are better than nothing?

"Furthermore, I would like to take this opportunity to emphasise that despite my previous research into particular treatment approaches, I am in a position of equipoise as regards the relative efficacy of the treatments being evaluated in this trial."

Given the difficulty of being certain about anything, is 'equipoise' between trial arms something that can be claimed for almost anything?

As it's a bit relevant to the discussion, I thought I'd also post the assumptions about efficacy from the full PACE trial protocol: Final version 5.0, 01.02.2006

11.1 Assumptions
At one year we assume that 60% will improve with CBT, 50% with GET, 25% with APT and 10% with SSMC. The existing evidence suggests that at one year follow up, 50 to 63% of participants with CFS/ME had a positive outcome, by intention to treat, in the three RCTs of rehabilitative CBT, [18, 25, 26] with 69% improved after an educational rehabilitation that closely resembled CBT. [43] This compares to 18 to 63% improved in the two RCTs of GET, [23, 24] and 47% improvement in a clinical audit of GET. [54] Having usual medical care allowed 6% to 17% to improve by one year in two RCTs. [18, 25] There are no previous RCTs of APT to guide us, [11, 12] but we estimate that APT will be at least as effective as the control treatments of relaxation and flexibility used in previous RCTs, with 26% to 27% improved on primary outcomes. [23, 26] We propose that a clinically important difference would be between 2 and 3 times the improvement rate of SSMC.
 
Surely those are predictions, not assumptions. And if they are assumptions, they can't claim to be in "a position of equipoise".

So it's basically, previous trials showed this, so we'll repeat them, make all the same mistakes and assumptions and get the same result. Bingo!
 
They were from the Sample Size section of the full protocol and led into their power analyses:

11.2 Power analyses
Our planned intention to treat analyses will compare APT against SSMC alone, and both CBT and GET against APT. Assuming α = 5% and a power of 90%, we require a minimum of 135 participants in the SSMC alone and APT groups, 80 participants in the GET group and 40 in the CBT group. [55] However these last two numbers are insufficient to study predictors, process, or cost-effectiveness. We will not be able to get a precise estimate of the difference between CBT and GET, though our estimates will be useful in planning future trials. As an example, to detect a difference in response rates of 50% and 60%, with 90% power, would require 520 participants per group; numbers beyond a realistic two-arm trial. Therefore, we will study equal numbers of 135 participants in each of the four arms, which gives us greater than 90% power to study differences in efficacy between APT and both CBT and GET. We will adjust our numbers for dropouts, at the same time as designing the trial and its management to minimise dropouts. Dropout rates were 12 and 33% in the two studies of GET [23, 24] and 3, 10, and 40% in the three studies of rehabilitative CBT. [18, 25, 26] On the basis of our own previous trials, we estimate a dropout rate of 10%. We therefore require approximately 150 participants in each treatment group, or 600 participants in all. Calculation of the sample size required to detect economic differences between treatment groups requires data of cost per change in outcome, which is not currently available.

I don't really know how these things should be best described.
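For what it's worth, the protocol's numbers can be sanity-checked with the standard normal-approximation formula for comparing two proportions (this is my own sketch, not anything from the protocol itself; the function name and defaults are mine):

```python
from math import ceil, sqrt
from statistics import NormalDist


def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Per-arm sample size to detect a difference between two proportions,
    using the usual normal-approximation formula with a two-sided alpha."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)


# Protocol example: detecting 50% vs 60% response rates at 90% power
print(n_per_group(0.50, 0.60))  # 519 per group, in line with the ~520 quoted

# APT (25%) vs SSMC (10%), the assumed rates from section 11.1
print(n_per_group(0.10, 0.25))  # 133, close to the minimum of 135 quoted

# Dropout adjustment: inflate 135 per arm for the assumed 10% dropout
print(ceil(135 / (1 - 0.10)))   # 150 per arm, i.e. 600 in total
```

So the arithmetic itself reproduces roughly what the protocol states; the contentious part is where the assumed proportions came from, not the power calculation.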
 
Sample size calcs should be based on what would be considered clinically important, not just statistically significant. They seem to have based it on the results that previous trials got, and haven't looked at whether any of that was clinically relevant or not.

I would have thought that for true equipoise, they should have stated that this is how much one (unspecified) treatment should be better than all the other (unspecified) treatments, and on that basis (of meaningful clinical difference) this is what their power calcs should look like. I really don't think they can claim equipoise and then state which treatments they are expecting to do best, and by how much.

If I were them, I would have kept the trial as simple as possible and done a straight comparison between what is offered in the clinic and what patients say works for them. They "controlled" it in completely the wrong way by expecting pts to attend clinic for their "pacing" intervention. But that just shows how poorly they understand the condition.
 