Prevalence of symptom exaggeration among North American independent medical evaluation examinees: systematic review ..., 2025, Darzi, Guyatt, Busse +

Their arguments mostly amount to a word-salad Gish gallop to me. They take zero care to be coherent or reasonable; it's all the naked pursuit of a goal, and the ends justify all the means.
Gish gallop said:
The Gish gallop is a rhetorical technique in which a person in a debate attempts to overwhelm an opponent by presenting an excessive number of arguments, without regard for their accuracy or strength, with a rapidity that makes it impossible for the opponent to address them in the time available. Gish galloping prioritizes the quantity of the galloper's arguments at the expense of their quality.
It's criminal to have so little regard for such a serious matter. I can't understand how the medical profession is so careless, and how no one seems to find anything objectionable here. It's as if absolutely nothing has ever been learned from past mistakes; as if humans are incapable of making progress ourselves, only ever able to develop technologies that take care of those problems for us.

These days I keep seeing things that make me turn my head as if I were looking at some "4th wall" camera, unable to believe what the hell I'm seeing and/or reading. And it's happening just as much with the medical profession as with the other horrible stuff going on. The dishonesty, the corruption, the impotence: it's everywhere, and not only is medicine not escaping it, it's especially awful outside of pure science and applied technologies. It's mind-boggling how incompetent humans are when left to our own devices.
 
I think this sentence shows how they view this issue:

EDIT: So they think that the biomedical model is causing patients to exaggerate their symptoms. I suspect that might also be the reason why Busse got involved in ME/CFS research and criticised the NICE guidance quite aggressively.
Which is especially ridiculous, as 98-99% of humans know almost nothing about that model; all patients do is describe what they are going through to someone whose profession it is to deal with it.

So their own biases bias their own biased perspectives. It's three biases in a trenchcoat: only one can see where they're going, but the other two aren't listening.
 
Continuing with the Methods section

Search strategy

The search strategies were developed using a validation set of known relevant articles and included a combination of MeSH headings and free text key words, such as malinger* or litigation or litigant or “insufficient effort” and “independent medical examination” or “independent medical evaluation” or “disability” or “classification accuracy”.
They are vague about what words were used for their search strategy. If you were trying to determine the prevalence of symptom exaggeration in independent medical exams, I think there are problems both with the words reported as chosen and with the words not reported as chosen. (That's setting aside the problems with how you work out whether someone is exaggerating symptoms that can only be assessed by self-report.)

I'm not sure how the search works - is it run on words that study authors have specifically chosen as 'key words' for their study, or on all of the words in a paper? Assuming it's done on key words, I think a paper with the key word combination of 'insufficient effort' and 'independent medical exam' might have a particular bias compared with another paper about the accuracy of independent medical exams that did not have 'insufficient effort' as a key word.
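For what it's worth, searches like this are usually not limited to author-chosen key words: in databases like MEDLINE/PubMed, free-text terms are typically matched against titles and abstracts, and MeSH headings are subject terms assigned by indexers rather than by the authors. Here's a hypothetical sketch of how the fragment they report might be assembled into a PubMed-style query; the grouping and the [tiab] (title/abstract) field tags are my assumptions, since the full strategy isn't given:
Code:
# Hypothetical reconstruction of the kind of boolean query the review
# describes (MeSH headings plus free-text key words). The grouping and
# [tiab] field tags are assumptions; the full strategy is not reported.
exaggeration_terms = (
    'malinger*[tiab] OR litigation[tiab] OR litigant[tiab] '
    'OR "insufficient effort"[tiab]'
)
setting_terms = (
    '"independent medical examination"[tiab] OR '
    '"independent medical evaluation"[tiab] OR '
    'disability[tiab] OR "classification accuracy"[tiab]'
)
query = f"({exaggeration_terms}) AND ({setting_terms})"
print(query)
Either way, the bias concern stands: a paper that uses a phrase like 'insufficient effort' anywhere in its title or abstract is probably already framing the question in a particular way.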

By not accurately reporting their search terms, they make it hard to know whether studies that might not have supported their intent were left out.

Eligible studies
Eligible studies: (i) enrolled individuals presenting for an IME in North America, (ii) in the presence of external incentive (e.g., insurance claims), and (iii) assessed the prevalence of symptom exaggeration using a known group design or multi-modal determination method [19,20].
A study had to involve assessment by a known method. The Slick, Sherman and Iverson criteria are one such method, and the one most of the included papers used. As we have seen, this 'method' is actually a mish-mash of cognitive tests that the person claiming disability has to do poorly on, direct evaluation by an assessor, and other evidence from places such as social media and the testimony of people who know the person. It's not really a method, more a set of ideas about approaches that can be used.

No doubt it sounds good, when you are an expert providing testimony in a law court, to say you followed the 'Slick, Sherman and Iverson method of determining Malingered Neurocognitive Dysfunction'. But that doesn't mean much - one assessor could legitimately claim it and have done something completely different from another assessor also claiming it. And Slick, Sherman and Iverson themselves acknowledged in a 2020 paper that the criteria they set out in 1999 had substantial problems.

We excluded studies that used only beyond-chance scores on symptom validity tests as an indicator of symptom exaggeration, since beyond-chance scores are infrequent and likely to result in underestimates [25-27].
So they excluded studies that did not refer to named methods (like Slick, Sherman and Iverson), specifically because they wanted to eliminate papers that might underestimate the reported prevalence of symptom exaggeration. I suspect that if we looked carefully at what the included studies and the excluded studies actually did to assess people, we would find some messy overlap. And messy overlap is where investigator bias thrives.
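Their underestimation logic is at least easy to state: if only a small fraction of people who exaggerate ever produce a below-chance score, then using below-chance scores as the sole criterion scales the measured prevalence down by that fraction. A toy sketch, with made-up numbers that don't come from the review:
Code:
# Toy illustration of why a rare indicator underestimates prevalence.
# All numbers here are invented for illustration only.
true_prevalence = 0.30  # assumed true rate of symptom exaggeration
sensitivity = 0.10      # assumed fraction of exaggerators scoring below chance
# Assuming below-chance scores essentially never occur otherwise,
# the measured prevalence is the product of the two:
measured = true_prevalence * sensitivity
print(f"Measured prevalence: {measured:.0%} vs true: {true_prevalence:.0%}")
The same logic cuts the other way, of course: a looser, messier criterion raises the measured prevalence, which is exactly where the room for investigator bias comes in.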

The authors of this review actually seem to acknowledge this messy overlap.
We categorized the reference standard and rated our confidence in it as either: (i) ‘weak’ when the study declared a known-group design, however its only criterion for identifying symptom exaggeration was below-chance performance on forced-choice symptom validity testing without any corroborating clinical observations or inconsistencies in medical records. For example, a patient with a mild ankle sprain labeled as exaggerating exclusively because they failed a below‐chance forced‐choice test of pain threshold, with no clinical exam or review of documented pain or functional abilities;
A study that mentions having used a known assessment method (a known-group design) such as Slick, Sherman and Iverson is included in the review, but if its only criterion seemed to be below-chance performance, the evidence was rated as weak. Studies that didn't mention a known assessment method and only seemed to use below-chance performance were not included in the review at all. Note that below-chance performance doesn't just mean scoring much lower than expected: it means scoring significantly worse than random guessing would produce on a forced-choice test, the rationale being that you would have to recognise the correct answers in order to avoid them that consistently.
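A minimal sketch of that arithmetic, assuming a two-alternative forced-choice test where pure guessing averages 50% correct (the item count and score are invented for illustration; they don't come from the review or any named test):
Code:
# Minimal sketch of the below-chance logic on a two-alternative
# forced-choice symptom validity test. Item count and score are
# made-up illustration values, not from the review or a real test.
from scipy.stats import binom

n_items = 72     # hypothetical number of forced-choice items
n_correct = 20   # hypothetical score; guessing alone averages 36/72

# Probability of scoring this low or lower by pure guessing (p = 0.5 per item)
p_value = binom.cdf(n_correct, n_items, 0.5)
print(f"P(score <= {n_correct} by chance) = {p_value:.1e}")
# A probability this small is taken as evidence that the examinee
# recognised the correct answers and deliberately avoided them.
The flip side, which the review itself notes, is that only the most blatant cases fail this badly, so below-chance scores alone catch very few people - hence their worry about underestimates.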
 