Functional Neurological Disorder (FND) - articles, social media and discussion

"Bizarrely unbalanced group - not a single flat earth conspiracy theorist, climate change denier or psychic healer."
Thanks. I needed that laugh. :)
Carson’s statements are especially false and misleading because CBT has been conclusively demonstrated not to work, not only for ME/CFS (PACE) but also for FND. The CODES trial for FND was a PACE-like effort led by the usual KCL crowd, and the primary endpoint was negative despite the huge sample size. It was negative because they arrogantly chose an objective outcome measure (reduction in the number of nonepileptic seizures). Of course, a host of crappy unblinded self-report questionnaires favoured the CBT group, as you’d expect due to bias.
Yet another in the long tedious trail of failures that I had forgotten about. There are so many of them.
 
I’ve noticed that David Perez is receiving deserved backlash for his sermonizing response to Walter Koroshetz’s tweet on ME awareness day. Undoubtedly, this will be categorized as an attack on, or harassment of, a magnanimous researcher.

It’s hard not to see Carson and Perez’s statements as deliberately provocative. Koroshetz wrote an anodyne, necessary statement on ME awareness day. This perfunctory solidarity, on a day the community comes together, is then interrupted by figures known to be antagonistic to the patient community. It’s like a belligerent former romantic partner crashing a wedding. Of course people will be agitated!
 
Trial By Error: Questions About the Prevalence of Functional Neurological Disorder and the Research on Hoover’s Sign for Functional Leg Weakness

"I have great sympathy for patients diagnosed with functional neurological disorder (FND). Their symptoms can be seriously disabling and their plight has long been neglected and dismissed by the medical establishment. When I post about FND, I like to recommend this well-written essay by a patient who goes by the moniker FNDPortal. The article provides a harrowing portrait of the experience of living with FND as well as a cogent account of the history of the construct.

I have, however, raised issues with how FND experts and investigators have made claims that do not seem to conform to the evidence cited. That includes the routine and unwarranted tripling of the reported FND prevalence rate from a 2010 study from Stone et al called “Who is referred to neurology clinics?—the diagnoses made in 3781 new patients,” published in the journal Clinical Neurology and Neurosurgery. FND, formerly called conversion disorder, was redefined in 2013 in the fifth edition of the Diagnostic and Statistical Manual (DSM-5), often referred to as the “psychiatric bible.” Among the changes in the new definition of the diagnosis was that it required the presence of a clinical sign incompatible with neurological disease.

As I’ve blogged here, here and here, Stone et al has repeatedly been referenced for the claim that FND–as re-defined in DSM-5–is the second-most common reason, after headache, for patients to see a neurologist, and/or that it has a 16% prevalence among new presentations at neurology clinics. That is simply not what the paper reported, as should be apparent to anyone reading it."

https://virology.ws/2023/05/22/tria...-on-hoovers-sign-for-functional-leg-weakness/
 
They've got a lot of theories, just no evidence for any of it.

But the "theories" are beloved and infinitely believed. And they call it evidence-based medicine. Frankly, I can't wait enough for our medical AI overlords, even with all the risk to civilization, they can't do any worst than this sorry excuse for a "system" that is regressing before our eyes. While technology is accelerating at unprecedented pace, no less. Everything political is regressing, everything technological is progressing rapidly. And medicine is mostly regressing, despite some progress at the cutting edge. Says a lot about where it sits.

Loosely related, but I'm honestly at the point where if anyone talks about "gold standard" anything in healthcare, I just assume it's quackery and process all of it as fart noises in my head. Words rarely come this empty.
 
They've got a lot of theories, just no evidence for any of it.

Assuming this is a reference to the famous line reported to have been said by Rudy Giuliani during his campaign to overturn the 2020 US election... Yes, this is largely the point. I thought of citing Giuliani in the post but decided against it, not because I didn't find the analogy appropriate but because using it might come across as a cheap and unnecessary shot. I think it's clear that they have lots of theories with no evidence--or, to be fair, let's say pretty thin evidence.
 
(Specificity and sensitivity are complicated. In brief, the first is a measure of whether a true positive case is correctly identified by a positive test and the second is a measure of whether a true negative case is correctly identified by a negative test. There is often a trade-off between the two, but the best tests are those that measure close to 100% on both. I realize this mini-explanation will leave many a bit perplexed. Sorry!!)”

If Hoover’s sign has high diagnostic specificity for functional leg weakness, the corollary is that other conditions would rarely generate a positive result—or never, if the specificity were 100%. But if clinicians are relying on a claim of specificity that is inflated or exaggerated, other diagnoses that might explain a positive Hoover’s sign could potentially be overlooked and missed.
Dave, I think you have specificity and sensitivity the wrong way around. Or at least, not everything you have written there can logically be true.
Google said:
Sensitivity refers to a test's ability to designate an individual with disease as positive. A highly sensitive test means that there are few false negative results, and thus fewer cases of disease are missed. The specificity of a test is its ability to designate an individual who does not have a disease as negative.


Some terrific stuff in the blog e.g.
These implications raise a key question: Is the research into the diagnostic reliability of Hoover’s sign robust? As it turns out, the answer is—not really, despite the sign’s venerable history. The evidence base is very thin—as I explain below. Two issues are immediately apparent. First, the few studies that have been done only included handfuls of FND patients; the most authoritative validation study of Hoover’s sign had eight FND patients. Beyond that, studies were designed in a circular fashion, with Hoover’s sign apparently serving in many or all cases as a diagnostic tool initially as well as being the object of epidemiological investigation.

Dr Putrino, whose interview with me prompted Dr Perez’ tweets, said this:

“A positive Hoover’s sign basically shows us that, for whatever reason, someone is unable to initiate a voluntary muscle contraction but that they have intact spinal reflexes. There are so many things that can go wrong with the nervous system to cause this that are easily missed during a mainstream neurological exam, especially if you have a bias towards diagnosing ‘conversion disorder.’ So to immediately and over-confidently assume that a positive Hoover’s sign means ‘functional neurological disorder’ is emblematic of the sort of thinking that we would associate with a clinician who is light on anatomical knowledge.”

Jonathan Edwards, an emeritus professor of medicine at University College London, agreed that Hoover’s sign could play a role in patient assessment but that it was unwarranted to suggest it had such high specificity:

“There is no doubt that there are people with neurological symptoms that have to be assigned to unexplained central problems. There is also no doubt that in some cases the defect seems to relate more to conscious conceptions than any neuroanatomy. Sometimes signs like Hoover’s sign are quite remarkably salient. From my perspective here the problem is not with the idea that neurological symptoms can occur as a result of conscious or unconscious mental processes. The problem is the claim that anyone understands what is going on or that any such mysterious goings on can be reliably recognised with such signs.”
 
Dave, I think you have specificity and sensitivity the wrong way around. Or at least, not everything you have written there can logically be true.

Thanks for the comments and highlights! It took a while to write. I'm pretty sure that Google quote has it backwards. Specificity is about positives and sensitivity is about negatives. But it is very complicated, and even the epidemiologists I know sometimes phrase it wrong.
 
Wikipedia:
Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.

Dave Tuller's blog said:
(Specificity and sensitivity are complicated. In brief, the first is a measure of whether a true positive case is correctly identified by a positive test and the second is a measure of whether a true negative case is correctly identified by a negative test. There is often a trade-off between the two, but the best tests are those that measure close to 100% on both. I realize this mini-explanation will leave many a bit perplexed. Sorry!!)”

If Hoover’s sign has high diagnostic specificity for functional leg weakness, the corollary is that other conditions would rarely generate a positive result—or never, if the specificity were 100%. But if clinicians are relying on a claim of specificity that is inflated or exaggerated, other diagnoses that might explain a positive Hoover’s sign could potentially be overlooked and missed.
Regardless of which way round the terms go, I don't think all of what you have written there can be correct.
Even if the claim of a high ability to identify positive cases as positive was true, it could still be the case that lots of people with something else are being incorrectly diagnosed.
 
100% specific means every positive is a true positive. There should be no false positives. There could be false negatives. ADDED: But if I'm wrong I'm sure it will be pointed out soon enough by those who will challenge the larger point!!
 
Regardless of which way round the terms go, I don't think all of what you have written there can be correct.
Even if the claim of a high ability to identify positive cases as positive was true, it could still be the case that lots of people with something else are being incorrectly diagnosed.


Yep, they sort of need to get, somehow, 200 people in who have a positive Hoover's sign and check longitudinally that it isn't explained by something else, rather than attaching an FND diagnosis that means those investigations, follow-up and annotation of the history never happen.

Of course, that latter part is exactly what the CFS label did for those labelled with it, and for the medical information recorded about that condition.

So one could almost say that such claims are precursors to a self-fulfilling prophecy. I mean, for those whose leg weakness turned out to be CJD when they were finally diagnosed, does anyone go back and amend the notes and the stats? Does any of that happen with the BPS CFS research?

I think science needs to close this loophole and require that specificity can only be certified by people outside the specialism, with conflicts of interest checked. It isn't as if they'd necessarily be capable of, never mind interested in, finding another explanation that sits in a different area of medicine. And they should certainly rely on longitudinal registers, not on someone selecting a sample of their own.
 
They've got a lot of theories, just no evidence for any of it.

But the "theories" are beloved and infinitely believed. And they call it evidence-based medicine. Frankly, I can't wait enough for our medical AI overlords, even with all the risk to civilization, they can't do any worst than this sorry excuse for a "system" that is regressing before our eyes. While technology is accelerating at unprecedented pace, no less. Everything political is regressing, everything technological is progressing rapidly. And medicine is mostly regressing, despite some progress at the cutting edge. Says a lot about where it sits.

Loosely related, but I'm honestly at the point where if anyone talks about "gold standard" anything in healthcare, I just assume it's quackery and process all of it as fart noises in my head. Words rarely come this empty.

I wonder whether this is another area that flags the issue brought up on another thread: that exploratory and descriptive research is underfunded. It would surely still need close attention from funders, with defined goals for the investigation and standards that let the literature conclude when a certain area had been 'well looked into' in certain ways and had 'come up clean', or whatnot. Basically, established diagrams and data shouldn't count for so much less than inference-based trials, when they can provide a bedrock that future findings keep referring back to in order to unpick mechanisms and systems.

But from this angle, astoundingly, the lack of any requirement to do proper exploratory research means theories come from belief systems. Even in the old days of scientific psychology, models had to have some logic that traced back to decent, proper studies of symptoms and mechanisms. That is a very different source of explanation from 'I reckon they all just think it, because they seem to me the type of people who let stress get to them' or 'well, if you go around telling a load of women/people that there might be something infectious, then lots will start looking out for it and thinking they have it'. Which isn't really a sound basis for 'theory'.
 
Even if the claim of a high ability to identify positive cases as positive was true, it could still be the case that lots of people with something else are being incorrectly diagnosed.

You mean they would be getting false positives? Not if specificity is 100% or close to it. In that case you would have people who purportedly have FND but were not being found. For example, the eight with FND in the 2011 study: five of them were positive on Hoover's, and no one who hadn't been diagnosed with FND had a positive Hoover's sign. So the specificity there was 100%. But because three of those with FND were not identified by Hoover's sign, its sensitivity was much lower.
 
It doesn't matter much at all; it's just that if someone opposed to your whole message wants to pick holes in what you have written, they might start with this.
Wikipedia said:
Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.

the blog said:
(Specificity and sensitivity are complicated. In brief, the first is a measure of whether a true positive case is correctly identified by a positive test and the second is a measure of whether a true negative case is correctly identified by a negative test. There is often a trade-off between the two, but the best tests are those that measure close to 100% on both. I realize this mini-explanation will leave many a bit perplexed. Sorry!!)”

If Hoover’s sign has high diagnostic specificity for functional leg weakness, the corollary is that other conditions would rarely generate a positive result—or never, if the specificity were 100%. But if clinicians are relying on a claim of specificity that is inflated or exaggerated, other diagnoses that might explain a positive Hoover’s sign could potentially be overlooked and missed.
Specificity is, if you have 100 people without a disease, how many of them will be correctly identified as not having the disease on the basis of the test? How specific is the thing that is being measured as a sign of the disease? If the test is diagnosing everyone with funny walking as having FND, then it's not very specific to FND. So, it's not a 'measure of whether a true positive case is correctly identified by a positive test', as you wrote.

Sensitivity is, if you have 100 people with "FND", how many of them will be correctly identified as having the disease on the basis of the test? How sensitive is the test to the expression of the disease? If you made the test more specific (adding a requirement that the person also has to be young and female, for example), it would become less sensitive, because it would miss some people with "FND".
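
To make that concrete, here is a minimal sketch in Python (the function name and the numbers are purely illustrative, not from any study) of how the two rates fall out of the four cells of a 2x2 table:

def rates(true_pos, false_neg, true_neg, false_pos):
    # Sensitivity: of everyone who truly has the condition,
    # what fraction does the test flag as positive?
    sensitivity = true_pos / (true_pos + false_neg)
    # Specificity: of everyone who truly does not have the condition,
    # what fraction does the test correctly call negative?
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Illustrative trade-off: a test that calls everyone positive catches every
# true case (sensitivity 1.0) but rules nothing out (specificity 0.0).
print(rates(true_pos=100, false_neg=0, true_neg=0, false_pos=100))  # (1.0, 0.0)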

(Sensitivity is probably also 'not needlessly banging on about something that most people don't care about'. And I've been banging on. Sorry. Perhaps I've misunderstood something, got something wrong. Either way, don't worry about it.)
 
The Wikipedia diagram is drawn with the left half = "people with condition" and right half = "people without the condition". So the left half fraction at the bottom of the image is sensitivity and could be written as "test says positive / actually positive". The right half fraction is specificity and could be written as "test says negative / actually negative". Simpler English is: sensitivity = "true positive rate"; and specificity = "true negative rate".

This is what @Hutan has described above. When I'm thinking about this, I usually think of sensitivity followed by specificity, i.e. the left then the right side of the diagram below, so when I first read that I reversed the order and it read "correctly" to me. Would it work for the article simply to swap the initial order, so it says "sensitivity and specificity are complicated...", and then the first and second would be correct? Or do any of the follow-on points also need refinement?

[Wikipedia diagram: Sensitivity and specificity]
 

Ha! No reason to apologize! I am very capable of making mistakes and I always pay attention to your points. That paragraph describing the two constructs was written by my epi colleague and I'm 99.5% sure it's right--and that's Hoover's sign levels!! I think Wikipedia has it backwards: specificity is about whether a positive is a true positive, and sensitivity is about whether a negative is a true negative. Again, look at the two in the 2011 study. ADD: But I should dig out my epi textbooks.
 
The Wikipedia definitions don't seem to fit with the findings in the 2011 study. Five out of eight FND patients had a positive Hoover's sign. None of the 116 FND-negatives had a positive Hoover's sign. The specificity was 100% because a positive Hoover's sign indicated they definitely had FND.

Three of the eight FND patients had a negative Hoover's sign. All of the 116 others also had a negative Hoover's sign, of course. So the sensitivity of the test was low--in other words, the test produced false negatives.
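
For what it's worth, plugging the 2011 counts quoted above (five of eight FND patients positive on Hoover's, none of the 116 others positive) into the standard definitions gives, as a quick sketch:

# Counts as reported in this thread for the 2011 study.
true_pos, false_neg = 5, 3      # FND patients with and without a positive Hoover's sign
true_neg, false_pos = 116, 0    # non-FND patients, none of whom had a positive sign

sensitivity = true_pos / (true_pos + false_neg)   # 5/8 = 0.625: three FND patients missed
specificity = true_neg / (true_neg + false_pos)   # 116/116 = 1.0: no false positives
print(sensitivity, specificity)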
 