Comment: Psychiatry’s stance towards scientifically implausible therapies: are we losing ground?, 2019, Rosen et al

Andy

Around 235 years ago, Franz Mesmer's theories on animal magnetism fell into disrepute when a Royal Commission (headed by Benjamin Franklin, then the US ambassador to France) concluded that claims of cure resulted from suggestion and imagination.

In its report of 1784, the Commission described a placebo-controlled experiment done at Franklin's Paris residence, with the cooperation of Mesmer's understudy, Charles D'Eslon. While D'Eslon was occupied magnetising a tree, a blindfolded 12-year-old boy showed striking reactions to four inert trees some distance away. D'Eslon responded that all trees were magnetised and his presence, however distant, increased that natural phenomenon. D'Eslon's ad hoc explanation was rightly rejected by Franklin's Commission based on logic and everyday experience.
Paywall: https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(19)30276-7/
Sci-hub: https://sci-hub.se/10.1016/S2215-0366(19)30276-7
 
One must ask how it has come to pass that large sectors of the scientific community appear more credulous toward scientifically implausible treatments today than they were in 1784.

Ouch!
We believe blind allegiance to randomised controlled trial outcome data has produced this result and offer the following recommendations.

First, authorities in mental health research and all who read the scientific literature must move beyond randomised controlled trials alone and adopt broader science-based criteria that consider the plausibility—or lack thereof—of therapeutic rationales and proposed change mechanisms. Consistent with Bayesian approaches to evidence evaluation, such criteria consider all scientific evidence bearing on an intervention, not merely outcome evidence.
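To see the Bayesian point in numbers, here is a minimal sketch (the figures are illustrative assumptions, not from the paper): apply Bayes' rule to a single positive trial of a treatment whose proposed mechanism is scientifically implausible.

```python
# Illustrative only: all numbers are assumptions, not from Rosen et al.
prior = 0.001   # prior probability the treatment really works (implausible mechanism)
power = 0.80    # P(positive trial | treatment works)
alpha = 0.05    # P(positive trial | treatment is inert), assuming ideal methods

# Bayes' rule: P(treatment works | positive trial)
posterior = power * prior / (power * prior + alpha * (1 - prior))
print(f"Posterior probability the treatment works: {posterior:.3f}")  # ~0.016
```

Even with a methodologically flawless trial, the posterior stays below 2%; with unblinded designs and subjective endpoints the real false-positive rate is well above 0.05, and the posterior is lower still.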

I don't think CBT/GET would pass these criteria, though the main problem is that PACE is presented as a controlled clinical trial when it isn't really one.

Second, journal reviewers and editors should adopt these recommended criteria and be sceptical of theoretically implausible ad hoc hypotheses.
 
To enhance evidentiary rigor, the EST (empirically supported treatment) criteria must accommodate the full body of treatment outcome data, both positive and negative, published and unpublished. They must also account for the methodological quality of included studies, such as sources of potential experimental bias (e.g., differential group attrition, imperfect randomization to conditions).

Second, evidence-based guidelines must move beyond reliance on measures of symptomatic improvement, emphasized in EST criteria, to incorporate objective and subjective criteria of everyday life functioning [1, 7]. Some patients with major depression, for example, may display significant improvement in depressive signs and symptoms (e.g., anhedonia, guilt), yet remain impaired in work and interpersonal relationships.

Third, provisional but burgeoning data from experimental and quasi-experimental studies suggest that certain treatments, such as crisis debriefing following trauma, scared-straight interventions for conduct disorder, and suggestive techniques to recover ostensible memories of sexual abuse, are iatrogenic for some patients. Nevertheless, most evidence-based guidelines, including those for ESTs, overlook the possibility of harm. One challenge in addressing this omission is that many psychotherapy studies rely on unipolar outcome measures, which range from no improvement to substantial improvement; they must instead administer bipolar outcome measures, which can detect patient deterioration during and after treatment.
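A crude way to see the unipolar/bipolar distinction in numbers (an illustrative simulation, not from the quoted paper): if the scale floors at "no improvement", deterioration becomes invisible, and a treatment that harms patients on average can appear mildly beneficial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true change scores: the treatment is net harmful on average.
true_change = rng.normal(loc=-0.2, scale=1.0, size=1000)

# A unipolar measure floors at "no improvement": deterioration is recorded as zero.
unipolar = np.clip(true_change, 0, None)

print(f"Bipolar mean change:  {true_change.mean():+.2f}")  # negative: net harm visible
print(f"Unipolar mean change: {unipolar.mean():+.2f}")     # positive: harm hidden
```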
 
The truth is that psychiatry and alternative medicine have been up to exactly the same tricks to game RCTs and guarantee positive outcomes. Neither properly controls for placebo effects or bias; both rely on self-reported measures, deliberately tell participants the result they are expecting, cherry-pick data, hide negative results, have like-minded colleagues uncritically peer review their work, and report conclusions that go beyond what the data support.

Now the psychiatrists are doing mental gymnastics to come up with reasons to dismiss the alternative medicine papers without pointing to any of the obvious dirty tricks that they are equally guilty of.
 
Now the psychiatrists are doing mental gymnastics to come up with reasons to dismiss the alternative medicine papers without pointing to any of the obvious dirty tricks that they are equally guilty of.

:rofl: :rofl: :rofl: I may be laughing, but it isn't really funny. Better to laugh than cry, maybe - perhaps we can laugh them out of the research room. After all, most of their results are laughable.
 
First, authorities in mental health research and all who read the scientific literature must move beyond randomised controlled trials alone and adopt broader science-based criteria that consider the plausibility—or lack thereof—of therapeutic rationales and proposed change mechanisms. Consistent with Bayesian approaches to evidence evaluation, such criteria consider all scientific evidence bearing on an intervention, not merely outcome evidence.
I think this misses the point. If an RCT shows a positive effect for an intervention that we know is totally implausible (e.g., homeopathy), then there must be something wrong with the way the trial was conducted (e.g., subjective endpoints in an unblinded design). What is needed is for these bad methodologies to be rooted out and instantly recognised for the crap that they are. Saying we should reject some trials because they use woo is putting the cart before the horse. We should reject the trial because of the crappy methods used. The fact that crappy methodology is not recognised as such is where the problem lies, and that is where the effort to improve should go.
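A quick illustration of how that happens (assumed numbers, not from any real trial): give a completely inert treatment, but let unblinded patients in the treatment arm shade their self-reports slightly upward, and a "significant" result follows more often than not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50  # participants per arm; purely illustrative

# The treatment does nothing: both arms share the same true outcome distribution.
control = rng.normal(size=n)

# Unblinded self-report: treated patients report outcomes ~0.5 SD better.
treated = rng.normal(size=n) + 0.5

t, p = stats.ttest_ind(treated, control)
print(f"p = {p:.3f}")  # frequently < 0.05 despite zero real effect
```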
 
I think this misses the point. If an RCT shows a positive effect for an intervention that we know is totally implausible (e.g., homeopathy), then there must be something wrong with the way the trial was conducted (e.g., subjective endpoints in an unblinded design). What is needed is for these bad methodologies to be rooted out and instantly recognised for the crap that they are. Saying we should reject some trials because they use woo is putting the cart before the horse. We should reject the trial because of the crappy methods used. The fact that crappy methodology is not recognised as such is where the problem lies, and that is where the effort to improve should go.

This is true, but I think the overall point remains: a randomised controlled trial (with a blinded comparison group) is the minimum level of evidence, and we need more evidence than that to determine relevance.

The classic example is antidepressants. We don't really know why they are effective for some people with depression but not others. If we knew why, we could target them better and reduce both cost and harm.
 
I am not good at getting my head round statistics, so I find it difficult to explain what I mean, but significance is just a consensus figure: it means the result is not likely to be random. But if you look at a lot of outcomes, you could get a figure that looks significant just by chance! That is one reason why you prespecify which outcomes you are going to look at; then, if one of them is significant, it is less likely to be random.

If you look at enough outcomes, it is likely you will find a significant one, but that is all to do with the numbers, not with any specific thing (see the simulation sketch at the end of this post). Bayesian theory says you have to add another layer to your analysis by looking at your specific experiment and deciding whether the results are plausible.

For instance, one study of childhood trauma found that something like ill health in adulthood was significantly associated with inappropriate sexual interactions but not with rape, which makes no sense, as the more serious trauma includes the less serious.
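To put the multiple-outcomes point in numbers, here is a minimal simulation (an assumed setup, not from any study discussed here): test 20 independent null outcomes at p < 0.05 and count how often at least one comes up "significant" purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_outcomes = 100_000, 20

# Under the null hypothesis, p-values are uniform on [0, 1].
p_values = rng.uniform(size=(n_studies, n_outcomes))

# Fraction of simulated studies with at least one p < 0.05.
frac = (p_values < 0.05).any(axis=1).mean()
print(f"Studies with >=1 'significant' outcome: {frac:.1%}")  # ~64%, i.e. 1 - 0.95**20
```

This is why prespecifying the primary outcome matters: it fixes, before the data are seen, which single test counts.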
 