Bias due to a lack of blinding: a discussion

Interesting stuff, given that for something like insomnia there could surely be legal implications if someone gave incorrect advice to, e.g., truck drivers; or worse, if the therapy made someone merely think they had slept better than they objectively had, and they were then involved in a road traffic accident or injured themselves.

Surely the risk is that this is what the study might be 'proving', or at least warning of, as an effect of said therapy: that it reduces accuracy in self-assessing sleep?
 
The FDA declined to approve MDMA-assisted therapy for PTSD.
https://www.bbc.com/news/articles/cl4465dpmrro

Although randomized trials showed a clinically significant improvement, there were concerns about unblinding as most participants were able to guess which trial arm they were in. So lack of blinding was one of the main reasons (in addition to safety concerns) why the FDA panel voted against approval.

The FDA document reviewing the evidence is available here and reads:
https://www.fda.gov/media/178984/download
midomafetamine produces profound alterations in mood, sensation, suggestibility, and cognition. As a result, studies are nearly impossible to blind. Although participants were randomized to either drug or placebo, the vast majority (approximately 90% of those assigned to drug and 75% of those assigned to placebo per a poststudy survey) were able to accurately guess their treatment assignment—the study was designed and conducted as a double-blind trial, but participants experienced functional unblinding due to the effects of the drug itself. Functional unblinding can introduce bias in clinical studies. Along with bias from functional unblinding, there may also be expectation bias in which those who believed that they received active treatment expected that they would experience a clinical benefit, those who received placebo fared worse due to disappointment when they did not experience anticipated effects from the treatment, or some combination of both. In addition, it is likely that the in-session monitors could deduce a participant’s treatment assignment based on that participant’s behavior during the session. Thus, both the participant and the study staff were likely aware to which treatment arm a given participant was assigned. It is reasonable to assume that functional unblinding and expectation bias has impacted treatment effects observed in the clinical trials with MDMA to some extent

Some comments about the decision:
https://twitter.com/user/status/1822391311321755712

https://twitter.com/user/status/1822576212679500276
 
Everyone in the psych-skeptic community accepts these points as given when applied to various scammy clinical trials of biological interventions. Yet none of them has ever spoken out against the exact same biases affecting trials of BPS/functional-disorder interventions.
 
IIRC, the FDA has approved a number of apps, like Mahana's CBT-for-IBS one, on the basis of the very kind of evidence they reject here.

I don't know on what basis the burden of evidence should be different. It truly makes zero sense. If anything, the burden should be even higher, given that there is far more bias involved in those trials. Instead, they are enabling grift and fraud in some places, while essentially making it impossible for some treatments to be approved.

At the very least the burden should be the same. There is nothing special about drugs other than the fact that they can, in fact, be blinded for testing, even though blinding is usually done relatively poorly. There are so many other factors that can bias outcomes, and this is exactly why the double-blinded, controlled requirement exists. Even so, it still fails very often.

Given that it's becoming harder and more expensive to develop new drugs, this pretty much sets up a future where most approved treatments are pure grift that would be systematically rejected if they were drugs. What an absurd society we live in.
 
That youtube video by Eiko Fried linked in the tweet above is great, well worth a listen. I'll put another link here in case something happens to the tweet:


In the last section, he comments that the same criticisms have been made about research into other sorts of therapies and mentions CBT. He says that we have known about these problems for a long time.
 
I think it was @Peter Trewhitt who pointed out a while back that the basics of experimental psychology were figured out fifty years ago.

The BPS club have completely failed to deliver a robust explanatory and therapeutic model by those standards, so they have simply downgraded standards until they can claim a 'result'.
 
Trial participant in ecstasy for PTSD makes serious allegations regarding reporting of serious adverse events:

Of note, three MAPP1 trial participants from the MDMA arm have reported significant worsening of suicidality in the weeks following the trial, which they have attributed to the trial, and one MAPP1 participant reported the emergence of psychotic symptoms within 1-2 days of one of their dosing sessions (McNamee, et al., 2022; Nickels & Ross, 2021). These very serious adverse effects are neither in the published journal article for MAPP1, nor is the JAMA Psychiatry article in which we reported them cited in the MAPP2 journal article.

Psychotherapy is regulated because psychotherapy can do harm. Mixed with drugs that increase suggestibility and amplify experience, I cannot stress enough the possibility that the drugs can also amplify psychotherapy’s potential for harm.

While I was in the study, there were many things my trial therapists did - things I accepted because I thought they were experts and I wanted to heal — and because they said this was a “paradigm shifting treatment” (i.e., a cue to release previously held beliefs about what therapy or medicine “should” look like)

But, it includes things like encouraging me to view my worsening symptoms as evidence of healing and “spiritual awakening;” seeding mistrust in mainstream psychiatry; talking to me about past life traumas; encouraging and, one time, pressuring me to cuddle with them; repeatedly telling me I was “helping make history” and that I was “part of a movement;” and letting me know how my responses and behaviours during and after the trial could jeopardize legalization.

When I tried to tell my therapists about my emerging and worsening mental health symptoms at the end of the trial, one of them responded that he predicted that I would be feeling better in six months time. I later found out that another participant, at a different site, was given a similar “prediction,” using the same vocabulary, in the face of their mounting distress.

https://twitter.com/user/status/1822887395165192620
 


Interesting insight

I can't help but think this is what happens when you have lax methodological regulations for therapy-based treatments being trialled, and then mix them with drugs (which would normally be assessed with objective measures and have yellow-card reporting)...

Basically it depends on who is running it; if it's therapy-based, the researchers are used to these practices and don't see the errors in what they regard as their norms.

Some of the things flagged sound rather familiar, e.g. the point brought up about the PACE trial protocol, where participants were sent newsletters with testimonials suggesting the treatment was working for lots of other people.


Just because some people go above and beyond, or wouldn't even think of running a trial in these ways, doesn't mean there isn't an issue when regulations leave the door open for that behaviour. Worse, of course, those who do it keep doing it because it benefits them, and the competitive advantage it gives means it becomes the culture, so even those who are 'good' have no choice but to play the game.
 

Odd how all it takes to depart from biopsychosocial standards is to introduce anything biological. Because the descriptions above are pretty standard biopsychosocial methodology for the most part.
 
Here's a tricky one -- does that trial show that both modafinil and CBT are effective in relieving MS related fatigue, and that each treatment alone as well as their combination are equally effective?

Full title:
Comparative effectiveness of cognitive behavioural therapy, modafinil, and their combination for treating fatigue in multiple sclerosis (COMBO-MS): a randomised, statistician-blinded, parallel-arm trial

Background
Fatigue is one of the most disabling symptoms reported by people with multiple sclerosis. Although behavioural and pharmacological interventions might be partly beneficial, their combined effects have not been evaluated for multiple sclerosis fatigue, or examined with sufficient consideration of characteristics that might affect treatment response. In this comparative effectiveness research trial, we compared the effectiveness of cognitive behavioural therapy (CBT), modafinil, and their combination for treating multiple sclerosis fatigue.

Methods
This randomised, analyst-blinded, parallel-arm, comparative effectiveness trial was done at two universities in the USA. Adults (aged ≥18 years) with multiple sclerosis and problematic fatigue (Fatigue Severity Scale [FSS] score ≥4) were randomly assigned (1:1:1), using a web-based treatment assignment system with minimisation, to receive CBT, modafinil, or both for 12 weeks. Statisticians were masked to group assignment, but participants, study neurologists, CBT interventionalists, and coordinators were not masked to treatment assignment. The primary outcome was the change in Modified Fatigue Impact Scale (MFIS) from baseline to 12 weeks, assessed using multiple linear regression, adjusted for age, sex, study site, anxiety, pain, baseline MFIS score, and physical activity. Analyses were done by intent to treat. The trial was registered with clinicaltrials.gov, NCT03621761, and is completed.

Findings
Between Nov 15, 2018, and June 2, 2021, 336 participants were randomly assigned treatment (114 assigned to CBT, 114 assigned to modafinil, and 108 assigned to combination therapy). At 12 weeks, CBT (n=103), modafinil (n=107), and combination therapy (n=102) were associated with clinically meaningful within-group MFIS reductions of 15·20 (SD 11·90), 16·90 (15·90), and 17·30 (16·20) points, respectively. Change in MFIS scores from baseline to 12 weeks did not differ between groups: relative to combination therapy, the adjusted total mean difference in MFIS change score was 1·88 (95% CI –2·21 to 5·96) for CBT and 1·20 (–2·83 to 5·23) for modafinil. Most common adverse events for modafinil-containing treatment groups included insomnia (eight [7%] for modafinil and eight [7%] for combination therapy) and anxiety (three [3%] for modafinil and nine [8%] for combination therapy).

Interpretation
Modafinil, CBT, and combination therapy were associated with similar reductions in the effects of multiple sclerosis fatigue at 12 weeks. Combination therapy was not associated with augmented improvement compared with the individual interventions. Further research is needed to determine whether effects of these interventions on multiple sclerosis-related fatigue is influenced by sleep hygiene and sleepiness. No serious adverse events related to the study drug were encountered.

Funding
Patient-Centered Outcomes Research Institute and National Multiple Sclerosis Society.

https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(24)00354-5/abstract

(Paywalled)

"The primary outcome was the change in Modified Fatigue Impact Scale (MFIS) from baseline to 12 weeks, assessed using multiple linear regression, adjusted for age, sex, study site, anxiety, pain, baselines MFIS score, and physical activity."

"Change in MFIS scores from baseline to 12 weeks did not differ between groups: relative to combination therapy, the adjusted total mean difference in MFIS change score was 1·88 (95% CI –2·21 to 5·96) for CBT and 1·20 (–2·83 to 5·23) for modafinil."


Some very quick thoughts:

Would have been interesting to add two more groups: 4) Active Placebo (drug containing another stimulating substance, e.g. caffeine) and 5) Sham CBT

Paywalled so was not able to skim-read beyond abstract.

Edit: a quick online search turned up reviews stating modafinil was effective for MS fatigue, but I'm not sure how good the blinding was, as the short-term stimulating effects of modafinil are pretty obvious.



For what seems to be the current mainstream view on the evidence for modafinil in multiple sclerosis-related fatigue --

From Wikipedia:


The National Institute for Health and Care Excellence (NICE) in the UK, along with various non-governmental organizations focused on multiple sclerosis (MS), endorse the off-label use of modafinil to alleviate fatigue associated with MS.[20][37][38]

When prescribed for MS-related fatigue management, modafinil works by promoting wakefulness and increasing alertness without causing drowsiness or disrupting nighttime sleep. People with multiple sclerosis often report increased energy levels, reduced feelings of tiredness, improved cognitive function, and an overall improvement in their quality of life when taking modafinil.[43]

The primary goal of using modafinil in MS is symptom management and improving daily functioning.[41][42][44][45]
The effects of modafinil on other aspects of MS-related fatigue, such as severity and cognitive function, are less clear.[45][43]


Any thoughts / discussion appreciated.

Thread on the trial:
Comparative effectiveness of [CBT], modafinil, and their combination for treating fatigue in multiple sclerosis (COMBO-MS), 2024, Braley et al
 
Any thoughts / discussion appreciated.

Only had a quick look, but I think it pretty certain that the trial tells us nothing other than that nothing probably works much; otherwise the results would have differed between arms.

I am surprised that NICE endorses off label usage here. Presumably the committee was composed of believers.

If I had MS I am pretty sure I would not want to be taking all sorts of extra drugs because some physician thought he/she was clever to prescribe something for my fatigue. In my experience any drug that alters brain function is likely to produce more dysphoria than anything useful.
 
I haven't read the paper, comments are based on the abstract.

The primary outcome was the change in Modified Fatigue Impact Scale (MFIS) from baseline to 12 weeks, assessed using multiple linear regression, adjusted for age, sex, study site, anxiety, pain, baseline MFIS score, and physical activity.
Goodness knows what jiggery pokery happened in the 'adjustments'. That's a lot of factors to be adjusting for, and most of them don't seem to have a clear rationale for influencing fatigue reductions. For example, why would you adjust for study site? So, it is possible that we have heavily biased data before we get into considering the effect of the treatments.
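For what it's worth, the generic shape of such a model is just a linear regression of the change score on treatment dummies plus the covariates; the reported between-group differences are the dummy coefficients rather than raw arm means. Here's a minimal sketch with entirely hypothetical data (numpy only; the variable names are mine, not the paper's):

```python
import numpy as np

# Hypothetical data only -- NOT the trial's dataset, just the generic form of
# a covariate-adjusted linear model (change score ~ treatment + covariates).
rng = np.random.default_rng(0)
n = 300
group = rng.integers(0, 3, n)       # 0 = CBT, 1 = modafinil, 2 = combination
age = rng.normal(45, 10, n)
baseline = rng.normal(45, 10, n)    # hypothetical baseline MFIS score

# Simulate a change score that depends on baseline (regression to the mean)
# but not on treatment group -- i.e. "nothing works" in this toy world.
change = -15 + 0.2 * (baseline - baseline.mean()) + rng.normal(0, 12, n)

# Design matrix: intercept, two treatment dummies (combination as reference),
# and the covariates being adjusted for.
X = np.column_stack([
    np.ones(n),
    (group == 0).astype(float),     # CBT vs combination
    (group == 1).astype(float),     # modafinil vs combination
    age,
    baseline,
])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
print(beta[1], beta[2])             # adjusted mean differences vs combination
```

Because treatment has no effect in the simulated data, the two dummy coefficients come out near zero, much like the trial's reported 1.88 and 1.20 point differences.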

This randomised, analyst-blinded, parallel-arm, comparative effectiveness trial was done at two universities in the USA. Adults (aged ≥18 years) with multiple sclerosis and problematic fatigue (Fatigue Severity Scale [FSS] score ≥4) were randomly assigned (1:1:1), using a web-based treatment assignment system with minimisation, to receive CBT, modafinil, or both for 12 weeks. Statisticians were masked to group assignment, but participants, study neurologists, CBT interventionalists, and coordinators were not masked to treatment assignment.
Really, the word 'blinded' hardly deserves to be used here. Everyone except the people analysing the data knew what treatment participants had, and the outcome looks to have been a PROM. Perhaps that serves to reduce the suspicion about the adjustments for the seven factors. But, well, I remain suspicious.

Certainly, a scenario in which neither modafinil nor CBT works is compatible with the data reported, with the reported fatigue improvements being the result of regression to the mean and expectation bias. I'm not sure how they dealt with the data of dropouts (they mention that the analysis is 'intention to treat', but it is possible that the dropouts contributed to a lifting of the mean change by their absence at follow-up).

The trial results are also compatible with modafinil having a small positive effect (as we would expect with a stimulant). As we would also expect, there were some side effects (insomnia, feelings of anxiety), and who knows if it is really a sustainable approach to fatigue management. But, say modafinil did improve fatigue a bit for the 12-week trial, then what was happening with the CBT? As we know, the placebo effect of CBT could result in a reduction of reported fatigue without any real improvement, and that would also show up as a small improvement on the PROM.

I can easily imagine a situation where there was a small real improvement from the modafinil alone, a small reported improvement from the CBT alone, and only a small real improvement from the combined treatment. The real improvement of the modafinil in the combined treatment would essentially overlap with the expectation bias created by the CBT. The participants don't have a reason to inflate the improvement in fatigue beyond that small improvement that they actually experience.
 
I can easily imagine a situation where there was a small real improvement from the modafinil alone, a small reported improvement from the CBT alone, and only a small real improvement from the combined treatment.

Real improvements aren't always the same as meaningful improvements, either. You shouldn't have to rely on statistical fudging to show the latter if you've set the outcomes properly, because they'll light up in that part of the cohort.
 
It's certainly an interesting example. MSEspe, were you wondering, as I am, whether this design (A, B, A+B) might actually be a way to test whether CBT and all sorts of other interventions that can't be blinded actually help? Because, if they do help, then, barring some particular situations, the real benefits of the two treatments should be additive. It's a bit like a dose response trial, but with different interventions.

I mean, if you give people 'stomach rubbing' and there's a reported benefit of X and you give people 'gargling' and there's a reported benefit of Y, then, if the benefits are real, if you give people both stomach rubbing and gargling, I think there should be a benefit of X+Y*. If there isn't, I reckon it is likely that at least one of the treatments is ineffective.

*unless, for example, the biological (rather than placebo) mechanism of the two treatments is the same and there is an expected ceiling on the amount of benefit possible.

A drawback is that a trial like that is likely to be trumpeted in the media as 'stomach rubbing and gargling both are effective!'.

Does this make sense?
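The additivity argument can be put in toy numbers. Everything below is hypothetical, and it assumes expectation bias is roughly constant per unblinded arm rather than stacking per treatment:

```python
# All numbers are made up for illustration, in points on a fatigue PROM.
real_modafinil = 5.0       # genuine improvement from the drug
real_cbt = 0.0             # suppose CBT has no genuine effect
expectation_bias = 10.0    # self-report inflation in any unblinded arm
regression_to_mean = 5.0   # affects every arm equally

def reported_change(real_effects, unblinded=True):
    """Reported improvement = shared artefacts + sum of genuine effects."""
    bias = expectation_bias if unblinded else 0.0
    return regression_to_mean + bias + sum(real_effects)

arm_a = reported_change([real_modafinil])                 # 20.0
arm_b = reported_change([real_cbt])                       # 15.0
arm_ab = reported_change([real_modafinil, real_cbt])      # 20.0

# If both treatments were genuinely effective, the combined arm should beat
# each single arm by the other's real effect. Here A+B does not beat A at
# all, consistent with CBT contributing nothing real.
print(arm_a, arm_b, arm_ab)
```

Under this additive reading, three arms all reporting similar changes (as in COMBO-MS) suggests that neither treatment is adding much on top of the shared artefacts.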
 
whether this design (A, B, A+B) might actually be a way to test whether CBT and all sorts of other interventions that can't be blinded actually help? Because, if they do help, then, barring some particular situations, the real benefits of the two treatments should be additive.

Intriguing! My initial response is that, if a person's symptoms are not rooted in disordered thinking, CBT by definition cannot work.

But there are other difficult-to-blind therapies that might work, such as pacing. Maybe it would be a useful approach there?
 
Also curious that in the ecological momentary assessments both treatments resulted in improvements in fatigue intensity & interference but no treatment resulted in significant improvements in perceived fatiguability.

The watch-like device ("PRO-Diary") that they used for the EMAs has both actimetry functionality and can ask the user self-report questions; something like that may be quite useful for ME/CFS trials.

[Edited to remove a potentially misleading sentence which needs much more thorough explanation. I shouldn't post at 2AM...]
 