The effectiveness of specialist cognitive behavioural therapy for functional neurological disorder: a service evaluation 2026 White, Pick and Chalder

Andy

Introduction:

Functional neurological disorder (FND) is a common and disabling condition associated with high levels of functional impairment and psychological distress. Psychological therapies such as cognitive behavioural therapy (CBT) are increasingly recommended as part of multidisciplinary care, but evidence from real-world clinical settings remains limited.

Methods:

This retrospective observational cohort study evaluated outcomes from a specialist CBT programme for adults with FND using routinely collected clinical data. Self-report measures of depression, anxiety, psychological distress, functional impairment, physical functioning, pain, and six cognitive-behavioural responses were collected before, during, and at the end of treatment. Linear mixed-effects models were used to examine change over time, with correction for multiple testing.

Results:

Data from 234 patients were analyzed (70.5% female; mean age 40.9 years). Despite low completion of measures at follow up, significant improvements were observed in psychological distress, functional impairment, and five of the six cognitive-behavioural response domains across treatment, whereas no significant change was seen in physical functioning or pain measures. Sensitivity analyses excluding patients who received three or fewer sessions produced a consistent pattern of results.

Conclusions:

These findings suggest that specialist CBT for FND delivered in routine clinical practice is associated with meaningful improvements in distress, functioning, and key cognitive-behavioural maintenance processes, supporting its role within multidisciplinary care.

 
These findings suggest that specialist CBT for FND delivered in routine clinical practice is associated with meaningful improvements in distress, functioning, and key cognitive-behavioural maintenance processes, supporting its role within multidisciplinary care.
Typical overclaiming of effectiveness from this research group. Their claim is surely scuppered by the very high dropout rate at follow-up, especially as they admit it made no difference to the patients' FND:

no significant change was seen in physical functioning or pain measures.
 
five of the six cognitive-behavioural response domains across treatment
When a measure becomes a target, it ceases to be a good measure. Far worse when it's not even a measure, and the whole process of trying to move the target involves explicitly trying to convince participants to report that the target has moved.

It's so absurd that the entire process here involves getting people to report differently, and this isn't considered a source of bias, when it might be the most extreme form of bias in the entire history of professions. Even the Stanford prison experiment had less bias than this.
184 participants provided pre-treatment data. 71 participants provided mid-treatment data and 53 provided end-treatment data.
Textbook cherry-picking, after a selective process that weeds out the cherries they don't want to pick. The performance is so atrocious it actually makes clear that this approach is not acceptable to most participants. Literally.
 
They actually cite CODES as evidence of success:
Similarly, mediation analysis of the CODES trial demonstrated that CBT-related improvements in avoidance behaviour and unhelpful cognitions were associated with improvements in functional outcomes and quality of life in patients with dissociative seizures (19).
The CODES trial (23), demonstrated that CBT plus standardised medical care (SMC) was superior to SMC alone at 6 months in significantly reducing monthly seizure frequency for adults with functional seizures. However, this was not sustained at 12 months follow up.
By helping patients to reframe self-defeating illness beliefs, shift attention away from symptoms, and develop more adaptive strategies for emotional and physiological regulation, CBT may directly address the psychological and physiological processes sustaining functional symptoms
But all it gets you is people reporting things slightly differently. It is as biased as an aggressive sales pitch where they don't let you leave until you sign something, except it isn't binding, so most people just sign to get out and never return. In business that means either 1) you get fired or 2) the company goes bankrupt.
Participants underwent an initial 2-hour assessment, followed by up to 12 one-to-one treatment sessions lasting one hour.
The mean number of sessions across all participants was 13.6.
With dropouts like this, the math ain't mathing. Or I guess most participants actually did complete the treatment but didn't bother filling in the questionnaires.
To assess whether the participants providing mid or post-treatment in addition to pre-treatment data differed systematically to those who only provided pre-treatment data, group comparisons were made. Independent samples t-tests for each of the twelve outcome measures, and none showed any significant difference between those providing data at pre-treatment only compared to those providing additional follow-up data (p <0.05).
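For anyone curious what the check they describe amounts to, here is a minimal sketch: an independent-samples t-test on baseline scores, comparing those who provided follow-up data with those who did not. The data and group sizes here are entirely made up for illustration, not taken from the paper.

```python
# Illustrative sketch of a per-measure baseline comparison between
# pre-treatment-only participants and those who also provided
# follow-up data. All numbers below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_only = rng.normal(20, 5, size=130)       # hypothetical baseline scores
with_followup = rng.normal(20, 5, size=54)   # hypothetical baseline scores

# One such test would be run per outcome measure.
t, p = stats.ttest_ind(pre_only, with_followup)
print(f"t = {t:.2f}, p = {p:.3f}")
```

Worth noting that a check like this can only show the two groups looked similar on measures taken at baseline; it cannot tell you whether the people who stopped providing data would have reported different outcomes later.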
With a 9% completion rate, they effectively filled in the missing data by, I guess, estimating what the values could have been if they resembled the rest of the data, and then found they resembled the rest of the data, because that's how they were estimated. They don't seem to find the abysmal completion rate a problem. After all, it's not as if any of this affects the funding the clinic gets. Actually, they frame it as a strength, arguing that it makes a strong case for them to keep doing the same thing:
A substantial proportion of participants provided data at only one timepoint, with fewer contributing mid- or end-treatment assessments. Importantly, those who provided follow-up data did not differ at baseline from those who contributed only pre-treatment data, suggesting that missingness was unlikely to be driven by initial symptom severity or functioning. The mixed-effects modelling approach was selected to allow inclusion of all available data, minimising selection bias that may result from restricting analysis to complete cases. However, the high proportion of missing data nonetheless limits our ability to accurately describe clinical outcomes in the treatment group overall. This highlights the need to further improve data collection procedures within time-limited clinical settings; for example, integrating digital measures, automated reminders, or therapist-led completion of outcome measures during sessions.
Funny how that never actually works, but these people are completely insulated from having to show anything. They use the "time-limited clinical setting" as an excuse, but this completion rate is consistent with every other trial of this kind.
Furthermore, although the study was conceptually grounded in a biopsychosocial framework, outcome measurement primarily focused on psychological and behavioural process-level mechanisms and functional impairment.
Yeah that's standard biopsychosocial.
Consistent with existing findings in the field, there was no significant change in physical functioning and pain.
Yeah that's all very useful. It changes nothing but it doesn't matter.

Sometimes it's lies, damned lies and statistics all at once.
 
Despite low completion of measures at follow up,
There's an understatement.

184 participants provided pre-treatment data. 71 participants provided mid-treatment data and 53 provided end-treatment data.
That's 39% providing data at ~7 weeks and 29% at ~14 weeks (taking the 184 with pre-treatment data as the denominator).
But it's worse:

176 participants provided data at only one timepoint, 42 provided data at exactly two timepoints, and 16 provided data for all three timepoints.
So only 23% provided data at exactly two timepoints, and just 9% at all three.
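For what it's worth, the percentages check out, assuming the 184 participants with pre-treatment data are used as the denominator (my assumption; the paper's own denominator isn't stated in the quoted passages):

```python
# Quick check of the completion-rate arithmetic, using participant
# counts quoted from the paper: 184 pre-treatment, 71 mid-treatment,
# 53 end-treatment; 42 at exactly two timepoints, 16 at all three.
pre = 184

def pct(n, d=pre):
    """Percentage of the baseline sample, rounded to whole percent."""
    return round(100 * n / d)

print(pct(71))  # mid-treatment -> 39
print(pct(53))  # end-treatment -> 29
print(pct(42))  # exactly two timepoints -> 23
print(pct(16))  # all three timepoints -> 9
```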

The overwhelming finding is that participants didn't participate!

Baseline physical functioning is a bit higher in this FND group than in the corresponding study for ME/CFS at the same clinic, despite similar age and % female:
  • FND mean SF36 PF 57.4 (SD 31.3) vs ME/CFS mean 47.6 (standard error 0.95)
  • FND mean age 40 vs ME/CFS mean age 39
  • FND 71% female vs ME/CFS 73% female
 
What's unclear to me is whether they did the course of treatment and just didn't fill out the forms, or whether they didn't finish the course of treatment. They say the average number of sessions was 13.6, even though the course of treatment was up to 12 sessions. But one person had 60+ sessions. So it's all a bit of a mystery.
 
What's unclear to me is whether they did the course of treatment and just didn't fill out the forms, or whether they didn't finish the course of treatment. They say the average number of sessions was 13.6, even though the course of treatment was up to 12 sessions. But one person had 60+ sessions. So it's all a bit of a mystery.
Yeah, I went down the same rabbit hole!

In the discussion, they treat it as a data collection issue:
the high proportion of missing data nonetheless limits our ability to accurately describe clinical outcomes in the treatment group overall. This highlights the need to further improve data collection procedures within time-limited clinical settings; for example, integrating digital measures, automated reminders, or therapist-led completion of outcome measures during sessions. Recent qualitative work (37) provides important context for these challenges, highlighting stakeholder concerns regarding the relevance, burden, and timing of commonly used outcome measures in FND. Participants emphasised the importance of capturing changes in coping, understanding, participation, and quality-of-life, and identified measurement burden as a barrier to consistent completion, highlighting the need for outcome frameworks and data collection procedures that are both clinically meaningful and feasible in routine care.

Elsewhere they explain:
To assess whether the participants providing mid or post-treatment in addition to pre-treatment data differed systematically to those who only provided pre-treatment data, group comparisons were made. Independent samples t-tests for each of the twelve outcome measures, and none showed any significant difference between those providing data at pre-treatment only compared to those providing additional follow-up data (p <0.05).
and
In view of the wide range of sessions participants received, a sensitivity analysis was conducted after removal of data points (n = 33) provided by participants who received three or fewer treatment sessions, or for whom the number of treatment sessions was unknown.
...and the results were the same. The latter echoes what they found in the corresponding ME/CFS study, Adamson et al. 2020, where they wrote:
The largest change appears to happen between the start of treatment and session 4 [mean difference=4.74, 95% confidence interval (3.73–5.75)] with subsequent time point differences not meeting significance,

That argues against explaining away the not-positive outcomes of rehab in ME/CFS as being due to an inadequate number of sessions. And it suggests that whatever is happening for the people who are helped has nothing to do with, say, reversing deconditioning, because you wouldn't reverse deconditioning in 3 or 4 weeks.

Back to the point, I think people who are getting something out of treatment - and feeling well enough to do optional things - are likely to be more cooperative when it comes to filling out forms.
 
It's really stunning: not only does this prove that the service is worthless, it even debunks the entire model in the process. Yet they can still describe it as a success, no one involved even cares, and they are allowed to pretend that up is down because direction doesn't even matter.

What we have here is a service requiring participation in an implausible treatment approach, based on a model of lack of motivation, fear and other nonsensical things that should make most patients not even complete the course. But most did complete it, even though the whole thing is a farce: people who understand nothing about a problem going on to 'teach' about it is pure cult stuff. They completed the treatment, proving that they were trying hard enough, especially in a context where the concept of 'dosage' doesn't even make sense.

But most of them did not even bother evaluating the service, on the obvious basis that it's worthless. They could easily confirm this by simply allowing such comments, instead of using pre-screened questionnaires where such answers aren't even possible. This is the ultimate test of whether something is acceptable, whether it has any worth. And it clearly doesn't, but it doesn't matter: the services don't exist for the benefit of the patients, they merely exist so that failed systems can pretend they are doing normal, competent things.
Participants emphasised the importance of capturing changes in coping, understanding, participation, and quality-of-life, and identified measurement burden as a barrier to consistent completion, highlighting the need for outcome frameworks and data collection procedures that are both clinically meaningful and feasible in routine care.
The above is simple and obvious: they aren't asking relevant questions or keeping track of relevant things, but they can't accept that without throwing the model away, so they instead blame the patients for their own failures, because health care systems are biased in favor of failing and not caring about such problems. The patients are basically telling them it's not relevant to them, but that doesn't matter, because it's supposed to be relevant to the failing system and to the ideologues who locked it in a state of perpetual failure.

They can never fail, they can only be failed.
 