Active placebos versus antidepressants for depression, 2004, Moncrieff, Wessely, Hardy

Woolie

Senior Member
Just came across this and thought it might be of interest (it's not new, it's from 2004)

Moncrieff, J., Wessely, S., & Hardy, R. (2004). Active placebos versus antidepressants for depression. Cochrane Database of Systematic Reviews, CD003012.pub2.

https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003012.pub2/full

Abstract:
Background. Although there is a consensus that antidepressants are effective in depression, placebo effects are also thought to be substantial. Side effects of antidepressants may reveal the identity of medication to participants or investigators and thus may bias the results of conventional trials using inert placebos. Using an 'active' placebo which mimics some of the side effects of antidepressants may help to counteract this potential bias.

Objectives. To investigate the efficacy of antidepressants when compared with 'active' placebos.

Search methods. CCDANCTR-Studies and CCDANCTR-References were searched on 12/2/2008. Reference lists from relevant articles and textbooks were searched.

Selection criteria. Randomised and quasi-randomised controlled trials comparing antidepressants with active placebos in people with depression.

Data collection and analysis. Since many different outcome measures were used a standard measure of effect was calculated for each trial. A subgroup analysis of inpatient and outpatient trials was conducted. Two reviewers independently assessed whether each trial met inclusion criteria.

Main results. Nine studies involving 751 participants were included. Two of them produced effect sizes which showed a consistent and statistically significant difference in favour of the active drug. Combining all studies produced a pooled estimate of effect of 0.39 standard deviations (confidence interval, 0.24 to 0.54) in favour of the antidepressant measured by improvement in mood. There was high heterogeneity due to one strongly positive trial. Sensitivity analysis omitting this trial reduced the pooled effect to 0.17 (0.00 to 0.34). The pooled effect for inpatient and outpatient trials was highly sensitive to decisions about which combination of data was included but inpatient trials produced the lowest effects.

Authors' conclusions. The more conservative estimates from the present analysis found that differences between antidepressants and active placebos were small. This suggests that unblinding effects may inflate the efficacy of antidepressants in trials using inert placebos. Further research into unblinding is warranted.
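(For anyone curious how the pooled estimate and sensitivity analysis above work mechanically, here is a minimal sketch of fixed-effect inverse-variance pooling of standardised mean differences. All per-trial numbers are invented for illustration; they are not the review's actual trial data.)

```python
# Minimal sketch of fixed-effect inverse-variance pooling of standardised
# mean differences (SMDs). All numbers are hypothetical, NOT the review's data.

smds = [0.10, 0.15, 0.20, 0.05, 0.25, 0.12, 0.18, 0.08, 1.10]  # per-trial SMDs
ses  = [0.20, 0.25, 0.18, 0.22, 0.30, 0.21, 0.19, 0.24, 0.20]  # standard errors

def pool(smds, ses):
    """Pooled SMD: each trial weighted by 1/SE^2 (more precise trials count more)."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

est, ci = pool(smds, ses)
print(f"All trials:      SMD = {est:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")

# Sensitivity analysis: omit the strongly positive outlier and re-pool,
# mirroring how dropping one trial shrank the review's 0.39 to 0.17.
est2, ci2 = pool(smds[:-1], ses[:-1])
print(f"Outlier omitted: SMD = {est2:.2f}, 95% CI {ci2[0]:.2f} to {ci2[1]:.2f}")
```

The point of the sensitivity analysis is visible immediately: one strongly positive trial can pull the pooled estimate well above what the remaining trials support.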
 
This suggests that unblinding effects may inflate the efficacy of antidepressants in trials using inert placebos. Further research into unblinding is warranted.
Oh, the irony.

He just doesn't seem to see the self-contradiction at all. Fussing about whether a blinded trial was fully blinded when most of his own trials are completely non-blinded and this is never even mentioned in the reports.
 
Oh, the irony.

He just doesn't seem to see the self-contradiction at all. Fussing about whether a blinded trial was fully blinded when most of his own trials are completely non-blinded and this is never even mentioned in the reports.

It is even more ironic in that BPS advocates argue that because CFS is assessed and diagnosed solely on the basis of self-reported subjective measures, it is only possible to evaluate it by subjective measures, and there is therefore no need to be concerned about potential bias in unblinded trials. What is never mentioned is that it is the BPS advocates themselves who choose to define ME/CFS as a purely subjective phenomenon and who deliberately ignore any potential objective measures in reporting their studies, either not including them or selectively excluding the objective measures used from their conclusions, as in the PACE write-ups. Even though here, where depression might more reasonably than ME/CFS be described as a purely subjective phenomenon, Wessely understands the potential bias in using subjective outcomes without blinding.

It is bizarre that intelligent people can be aware of the logical and methodological issues in contexts that suit them, but be totally blind in other contexts when it does not suit them, even though they are obviously aware of the criticisms that their research in ME/CFS is at particular risk of bias. Is this selective blindness a conscious choice, or an unfortunate accident resulting from 'science' being used or misused not to advance knowledge or answer questions of fact, but rather as a political or ideological weapon?
 
I must be missing something here:

The review was published in 2004, but their evidence search was in 2008?

"To investigate the efficacy of antidepressants when compared with 'active' placebos. Search methods CCDANCTR-Studies and CCDANCTR-References were searched on 12/2/2008."
 
He just doesn't seem to see the self-contradiction at all.
I don't know if he's unaware of the self-contradiction, or whether he is fully aware and knows he can get away with it. Forum rules prevent me from comparing him with any particular politicians, but there seem to be parallels with populists who know they can say things which are untrue and self-contradictory but don't care, because they know they can get away with it.

In imperfect democratic systems, politicians can get away with it because they are elected by people (usually a minority of the electorate) who don't need to understand the issues they are voting on, and who in many cases may only vote for someone because they're not as bad as the alternative. But how can a scientist not only get away with such self-contradiction for so long, but also be invited to join the Royal Society? These are not uneducated, disillusioned or disenfranchised voters he has bewitched, but fellows of arguably the most prestigious scientific institution in the world.
 
From the Authors' Conclusions:
The more conservative estimates from the present analysis found that differences between antidepressants and active placebos were small. This suggests that unblinding effects may inflate the efficacy of antidepressants in trials using inert placebos.

In this post @Lucibee quotes SW (I think) as saying "Unfortunately some people decide to monitor their symptoms and can get trapped in vicious circles..." (I think that comes from this article which I can’t access: https://www.newscientist.com/article/mg20126997-000-mind-over-body/).

From memory, PACE required participants assigned to APT to keep extensive symptom diaries (i.e. continuously monitor their symptoms). Here SW seems to be suggesting that that would have a negative impact on self-reported symptoms. If that is correct, does it suggest that, along with all the other problems (e.g. not being told how wonderful and effective the treatment is), APT may not only have been an inert placebo but may (possibly deliberately) have been designed as a negative placebo, or nocebo?
 
This appears to explain why Wessely advances the view that placebos are an effective treatment. If placebos were considered ineffective, there would be few treatments in psychiatry that meaningfully exceed placebos in effectiveness.
I still remember Wessely replying to someone who criticized the PACE trial and asked him whether they had considered the placebo effect (which, frankly, as far as I am concerned, is simply the act of responding differently on a questionnaire from what reality actually is). He said something to the effect of "the placebo is one of the most powerful interventions we have".

I mean what are we even supposed to do with that? Here this guy who managed to grab complete control over millions of lives is gushing over the effectiveness of what is literally the controlling factor for... nothing. And he is the voice of science and we are the irrational ones? Mercy.
 
He just doesn't seem to see the self-contradiction at all. Fussing about whether a blinded trial was fully blinded when most of his own trials are completely non-blinded and this is never even mentioned in the reports.
It would be interesting to know whether there is some sort of chronology linking his conclusions from this and similar trials to the subject matter and trial design of his subsequent trials.

I just began to re-read SW's National Elf Service blog article on the PACE trial.

In addition to the ocean liner rhetoric that was called out by Steven Lubet, it is so obvious that he tries to talk the methodological shortcomings away, especially the lack of blinding -- addressed as point g) of his list of what makes an RCT a good RCT:

"This is not unique to PACE. It is true in any trial of a psychological, behavioural or surgical intervention for example. Indeed, it turns out to be true in many trials of drug treatments as well, since it is difficult and sometimes impossible to remove recognition of a treatment medicine because of the impact of side effects."

"So patients knew what they were getting. This is what would happen in real life, which is what the PACE trial was trying to recreate. Did this matter? One way is to see whether there were differences in what patients thought of the treatment, to which they were allocated, before they started them. There might be problems if one treatment was thought to be better than another, whether rightly or wrongly."

"Expectations can influence the outcomes, especially in psychological treatments, which is why so called patient preference trials, in which patients chose the intervention they prefer – give results that can be difficult to interpret, which indeed is an issue around the longer term outcomes of PACE after the end of the formal follow up (see the references below)."

"Randomisation removes the worst of this problem, since patients by definition cannot select what they get. But if they still have higher or lower expectations of one treatment over another, it can still matter."

"And that did happen in the PACE trial itself. One therapy was rated beforehand by patients as being less likely to be helpful, but that treatment was CBT. In the event, CBT came out as one of the two treatments that did perform better. If it had been the other way round; that CBT had been favoured over the other three, then that would have been a problem. But as it is, CBT actually had a higher mountain to climb, not a smaller one, compared to the others."

https://www.nationalelfservice.net/...syndrome-choppy-seas-but-a-prosperous-voyage/

Even if they had surveyed the participants' expectations at the beginning and during the trial (did they?) -- could that really compensate for the lack of blinding?

And his mentioning of potentially recognizable side effects of drugs -- didn't he show with the review on antidepressants discussed here that it's possible to use active placebos?
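On the expectations question above, a toy simulation (my own construction, with made-up numbers, nothing from PACE itself) makes the worry concrete: randomise patients to a treatment with zero true effect, let the unblinded arm shift its questionnaire answers slightly because patients know what they got, and the subjective outcome shows a "benefit" that an objective outcome does not. Surveying expectations can tell you bias is plausible, but it cannot subtract the bias out.

```python
# Toy simulation of unblinding bias (hypothetical; not PACE data): a
# treatment with ZERO true effect, where patients who know they received
# the favoured arm nudge their subjective ratings upward by 0.4 SD.
import random
import statistics

random.seed(1)
N = 200  # patients per arm

def patient(reporting_bias):
    true_change = random.gauss(0, 1)           # actual change: identical in both arms
    subjective = true_change + reporting_bias  # questionnaire answer
    objective = true_change                    # e.g. an actometer reading
    return subjective, objective

control = [patient(0.0) for _ in range(N)]
treated = [patient(0.4) for _ in range(N)]

for label, i in [("Subjective", 0), ("Objective", 1)]:
    diff = statistics.mean(p[i] for p in treated) - statistics.mean(p[i] for p in control)
    print(f"{label} outcome: between-arm difference = {diff:.2f} SD")
# Prints roughly 0.4 SD of "benefit" on the subjective outcome and ~0.0 on the
# objective one. Randomisation cannot remove this, because the bias arises
# only AFTER patients learn their allocation.
```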

Edit: wording
 
Even though here, where depression might more reasonably than ME/CFS be described as a purely subjective phenomenon, Wessely understands the potential bias in using subjective outcomes without blinding.
But does he really?

I don't have access to the review from here and so don't know if the authors explicitly discuss the issue with subjective measures.

The only outcome mentioned in the abstract is "improvement in mood" -- that seems insufficient to measure the symptoms of depression anyway?

So are the authors really aware that it's the combination of subjective outcomes and insufficient blinding that is problematic? Do they even consider the possibility of looking for more reliable outcomes when reliable blinding is not possible?

(Edited to add: And I think it's important to distinguish between the subjectivity of symptoms and the impact of symptoms on outcomes that can be objectively measured.)

Or was it perhaps the goal to demonstrate that even drug trials can't be properly blinded, so let's not bother too much about the lack of blinding in therapist-delivered treatment trials?

Didn't read the comment and the reply by one of the authors -- perhaps interesting, too:

https://www.cochranelibrary.com/cds....pub2/detailed-comment/en?messageId=314047782

(After all, I think it is interesting that it is a Cochrane review.)
 
From memory, PACE required participants assigned to APT to keep extensive symptom diaries (i.e. continuously monitor their symptoms). Here SW seems to be suggesting that that would have a negative impact on self-reported symptoms. If that is correct, does it suggest that, along with all the other problems (e.g. not being told how wonderful and effective the treatment is), APT may not only have been an inert placebo but may (possibly deliberately) have been designed as a negative placebo, or nocebo?

Yes. Whether it was designed that way or not, encouraging patients to be more conscious and aware of their symptoms means they're much less likely to report improvements.
 
For APT, from what I read in the handbooks (https://me-pedia.org/wiki/PACE_trial_documents), the general idea seemed to be to focus on symptoms more; the participants were told pacing itself wouldn't help them (rather, it would give space for the body to heal on its own), and generally the focus seemed to be on limiting energy expenditure and doing less. So it's no great wonder they'd report more symptoms in questionnaires and not report improving, if that's what they were told. Compare that with being repeatedly told, by therapists the participants likely formed a connection with, that GET and CBT would make them recover, and being told not to focus on symptoms.

From my own experience, pacing doesn't really mean focusing on symptoms more (it does involve understanding your symptoms better, but that's different to focusing on them more), but rather figuring out what your limit is, and also pacing in such a way that you can do more of things that you actually need to or want to do.
 
Yes. Whether it was designed that way or not, encouraging patients to be more conscious and aware of their symptoms means they're much less likely to report improvements.
I looked into this some time ago, and got the impression that daily experience sampling (e.g. noting your symptoms daily) generally resulted in higher rates of reported symptoms than retrospective sampling (being asked to recall your symptoms for a prespecified period in the past). At least for non-depressed individuals.

These are the only articles I have at hand right now to back this up, but I know there are others.

edited to add: Dawson, E. G., Kanim, L. E., Sra, P., Dorey, F. J., Goldstein, T. B., Delamarter, R. B., & Sandhu, H. S. (2002). Low back pain recollection versus concurrent accounts: outcomes analysis. Spine, 27(9), 984-993. Retrospective reports underestimate some features of spinal pain when compared to real-time reporting.

Redelmeier, D. A., & Kahneman, D. (1996). Patients' memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures. Pain, 66(1), 3-8. Suggests that when people recall overall pain for a given period, they are biased in favour of the most intense (peak) and final (end) moments of the period.

Aleem, I., Duncan, J. S., Ahmed, A. M., Zarrabian, M., Eck, J. C., Rhee, J. M., ... & Nassr, A. N. (2016). Do lumbar decompression and fusion patients recall their preoperative status? A cohort study of recall bias in patient-reported outcomes. The Spine Journal, 16(10), S370. Shows that people recall their pain before an intervention as being more severe than it was (this could contribute to the placebo effect, and to exaggerated self-reported outcomes).

(See also Rodrigues, R., Silva, P. S., Cunha, M., Vaz, R., & Pereira, P. (2018). Can We Assess the Success of Surgery for Degenerative Spinal Diseases Using Patients' Recall of Their Preoperative Status?. World Neurosurgery, 115, e768-e773. Aleem, I. S., Currier, B. L., Yaszemski, M. J., Poppendeck, H., Huddleston, P., Eck, J., ... & Nassr, A. (2018). Do Cervical Spine Surgery Patients Recall Their Preoperative Status?. Clinical Spine Surgery, 31(10), E481-E487.)
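(As a rough illustration of the Redelmeier & Kahneman finding above, here is a toy sketch, with invented numbers, of the "peak-end" heuristic: retrospective ratings track the average of the worst and the final moments rather than the duration-weighted average that real-time sampling records.)

```python
# Toy illustration of the peak-end heuristic (after Redelmeier & Kahneman
# 1996). Pain ratings per time slice are invented for illustration.

def real_time_mean(ratings):
    """What concurrent (real-time) sampling would record on average."""
    return sum(ratings) / len(ratings)

def peak_end_recall(ratings):
    """Crude model of retrospective recall: average of peak and final moments."""
    return (max(ratings) + ratings[-1]) / 2

short_ends_high = [2, 8, 7]            # brief episode that ends near its peak
long_tapers_off = [2, 8, 7, 4, 3, 2]   # same start, then a gentle taper

for name, episode in [("short, ends high", short_ends_high),
                      ("long, tapers off", long_tapers_off)]:
    print(f"{name}: real-time mean = {real_time_mean(episode):.1f}, "
          f"peak-end recall = {peak_end_recall(episode):.1f}")
# The longer episode contains MORE total pain (sum 26 vs 17) yet is recalled
# as milder (5.0 vs 7.5), because recall favours the peak and the end.
```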
 
I just found this, a review which mentions a study of fatigue reports in "CFS" (by Friedberg and co), which found the opposite effect: real-time reports of fatigue were lower than retrospective reports.

Stull, D. E., Leidy, N. K., Parasuraman, B., & Chassany, O. (2009). Optimal recall periods for patient-reported outcomes: challenges and potential solutions. Current medical research and opinion, 25(4), 929-942.

https://sci-hub.se/10.1185/03007990902774765
 
It is even more ironic in that BPS advocates argue that because CFS is assessed and diagnosed solely on the basis of self-reported subjective measures, it is only possible to evaluate it by subjective measures, and there is therefore no need to be concerned about potential bias in unblinded trials. What is never mentioned is that it is the BPS advocates themselves who choose to define ME/CFS as a purely subjective phenomenon and who deliberately ignore any potential objective measures in reporting their studies, either not including them or selectively excluding the objective measures used from their conclusions, as in the PACE write-ups. Even though here, where depression might more reasonably than ME/CFS be described as a purely subjective phenomenon, Wessely understands the potential bias in using subjective outcomes without blinding.

It is bizarre that intelligent people can be aware of the logical and methodological issues in contexts that suit them, but be totally blind in other contexts when it does not suit them, even though they are obviously aware of the criticisms that their research in ME/CFS is at particular risk of bias. Is this selective blindness a conscious choice, or an unfortunate accident resulting from 'science' being used or misused not to advance knowledge or answer questions of fact, but rather as a political or ideological weapon?

Over the years, it has become clear that they know exactly what they are doing, because they dismiss biomedical research for flaws that are more apparent in their own work.

Both White and Wessely have done what looks like biomedical research into ME, but it was apparent at the time that it was simply to make out they were open-minded about the cause: a "we looked, but sadly there were no signs of physical effects" sort of thing.

When the PACE trial was first mooted there were many objections; in fact Wessely used the number of objections to mock us, in the same way Crawley did about the LP. The use of objective outcomes made it seem like a proper, unbiased trial at the beginning, but then they were dropped, not reported, or glossed over at the end.

When you look at a lot of BPS trials, they do not follow the initial protocols but change them willy-nilly to get the result they want to report. It looks very much as if they get the trials authorised by making them sound scientific and then do whatever they like after that.
 