Prevalence of symptom exaggeration among North American independent medical evaluation examinees: systematic review ..., 2025, Darzi, Guyatt, Busse +

Mij

Senior Member (Voting Rights)

Abstract

Background

Independent medical evaluations (IMEs) are commonly acquired to provide an assessment of impairment; however, these assessments show poor inter-rater reliability. One potential contributor is symptom exaggeration by patients, who may feel pressure to emphasize their level of impairment to qualify for incentives. This study explored the prevalence of symptom exaggeration among IME examinees in North America, which if common may represent an important consideration for improving the reliability of IMEs.

Methods

We searched CINAHL, EMBASE, MEDLINE and PsycINFO from inception to July 08, 2024. We included observational studies that used a known-group design or multi-modal determination method. Paired reviewers independently assessed risk of bias and extracted data. We performed a random-effects model meta-analysis to estimate the overall prevalence of symptom exaggeration and explored potential subgroup effects for sex, age, education, clinical condition, and confidence in the reference standard. We used the GRADE approach to assess the certainty of evidence.

Results

We included 44 studies with 46 cohorts and 9,794 patients. The median of the mean age was 40 (interquartile range [IQR] 38–42). Most cohorts included patients with traumatic brain injuries (n = 31, 67%) or chronic pain (n = 11, 24%). Prevalence of symptom exaggeration across studies ranged from 17% to 67%. We found low certainty evidence suggesting that studies with a greater proportion of women (≥40%) may be associated with higher rates of exaggeration (47%, 95%CI 36–58) vs. studies with a lower proportion of women (<40%) (31%, 95%CI 28–35; test of interaction p = 0.02). Possible explanations include biological differences, greater bodily awareness, or higher rates of negative affectivity. We found no significant subgroup effects for type of clinical condition, confidence in the reference standard, age, or education.

Conclusion

Symptom exaggeration may occur in almost 50% of women and in approximately a third of men undergoing IMEs. The high prevalence of symptom exaggeration among IME attendees provides a compelling rationale for clinical evaluators to formally explore this issue. Future research should establish the reliability and validity of evaluation criteria for symptom exaggeration and develop a structured IME assessment approach.
LINK
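For anyone unfamiliar with the method named in the abstract, a random-effects meta-analysis of prevalence pools each study's proportion while allowing for between-study variation. Below is a minimal sketch, assuming logit-transformed proportions and a DerSimonian-Laird estimate of the between-study variance; the study values are hypothetical placeholders, not numbers from the review.

```python
# Minimal sketch, not the authors' code: pooling study-level prevalence estimates
# with a DerSimonian-Laird random-effects model on the logit scale.
# The three study values below are hypothetical placeholders, not data from the review.
import numpy as np

prev = np.array([0.17, 0.35, 0.67])           # hypothetical per-study prevalence
n = np.array([150, 400, 90])                  # hypothetical sample sizes

y = np.log(prev / (1 - prev))                 # logit-transformed prevalence
v = 1 / (n * prev * (1 - prev))               # delta-method within-study variance of the logit

w_fixed = 1 / v
y_bar = np.sum(w_fixed * y) / np.sum(w_fixed)
q = np.sum(w_fixed * (y - y_bar) ** 2)        # Cochran's Q (heterogeneity)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance (DL estimator)

w = 1 / (v + tau2)                            # random-effects weights
pooled_logit = np.sum(w * y) / np.sum(w)
pooled_prev = 1 / (1 + np.exp(-pooled_logit)) # back-transform to a proportion
print(f"Pooled prevalence: {pooled_prev:.1%}")
```

The subgroup comparison reported in the abstract amounts to running this kind of pooling separately for studies above and below the 40% women cut-off and testing whether the two pooled estimates differ.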
 
Of the 32% of studies that described their sampling method (14 of 44), 13 used consecutive sampling and one used random sampling methods to identify IME referrals.
Huge red flag right from the start.
None of the known group designs used to evaluate symptom exaggeration provided evidence of reliability and validity testing; however, there has been formal evaluation of psychometric properties of forced-choice tests that were administered in eligible studies (See S4 Table in supplementary material for details). (S2 Table).
And another one.

They also used GRADE, and the studies had multiple serious risks of bias (all were of low quality and should be scrapped).

The conclusion in this study should have been:
We found no studies with sufficient methodological quality to assess the research question.
 
Symptom exaggeration may occur in almost 50% of women and in approximately a third of men undergoing IMEs. Assessors should evaluate symptom exaggeration when conducting IMEs using a multi-modal approach that includes both clinical findings and validated tests of performance effort, and avoid conflation with malingering which presumes intent. Priority areas for future research include establishing the reliability and validity of current evaluation criteria for symptom exaggeration, and development of a structured IME assessment approach that includes consideration of symptom exaggeration.
Their conclusion is laughable. They found that there are no good tools for measuring symptom exaggeration, yet they conclude that it was very present, especially among women.
 
Wow. That is really prejudice laid bare. If we were in any doubt about what we are dealing with here, this dispels it.

Information about the three senior authors here:
Gordon Guyatt, Jason Busse, both from McMaster University, both associated with "evidence-based medicine" and Cochrane. Regina Kunz leads Cochrane Insurance Medicine and is based in Basel, Switzerland - Switzerland of course having a lot to do with the insurance industry and the insurance industry having deep pockets.

AI on Gordon Guyatt said:
Gordon Guyatt is a prominent figure in the field of evidence-based medicine, notably recognized for coining the term "evidence-based medicine" and for his leadership in developing its methods. He is a Distinguished University Professor at McMaster University and has significantly contributed to the development of the GRADE approach for assessing the quality of evidence and strength of recommendations. He is also associated with Cochrane, an organisation known for producing high-quality systematic reviews.
AI on Jason Busse said:
Jason Busse is an Assistant Professor in the Departments of Anesthesia and Clinical Epidemiology & Biostatistics at McMaster University. He has authored over 150 peer-reviewed publications with a focus on chronic pain, disability management, predictors of recovery, and methodological research.
AI on Regina Kunz said:
Regina Kunz is a key figure associated with Cochrane, specifically known for her work in Cochrane Insurance Medicine. She is the founder and director of this unit, which focuses on applying evidence-based medicine principles to insurance and disability evaluations. Kunz's work emphasizes the importance of rigorous methodology, including the use of Cochrane reviews, to ensure reliable and consistent assessments in these areas.


There is no statement of Conflicts of Interest, only this Acknowledgement.

Acknowledgments

We would like to thank Michael Bagby from the Departments of Psychology and Psychiatry at University of Toronto for his contributions to the initial discussions around conceptualization and design of this study. No financial compensation was provided to any of these individuals.

It is not at all clear what individuals are being referred to in that statement about not receiving financial compensation. And, even so, there are many ways that the authors could benefit without directly receiving a payment for this study. The insurance industry and governments have much to gain by suggesting that half of women undergoing assessments of impairment exaggerate their symptoms. Institutions can be funded, people can be appointed to well-remunerated advisory positions.

There's a comment facility.
 
Possible explanations include biological differences, greater bodily awareness, or higher rates of negative affectivity. We found no significant subgroup effects for type of clinical condition, confidence in the reference standard, age, or education.

If you ask a dozen normal people none of them would pick this list.

Why study human behaviour if you have no understanding of human nature, or at least wish to write an 'academic' paper (for some reason) that gives that impression, even if unintentionally.

Above all else what comes across is lack of any sort of common sense.
 
We found low certainty evidence suggesting that studies with a greater proportion of women (≥40%) may be associated with higher rates of exaggeration (47%, 95%CI 36–58) vs. studies with a lower proportion of women (<40%) (31%, 95%CI 28–35; test of interaction p = 0.02).

Symptom exaggeration may occur in almost 50% of women and in approximately a third of men undergoing IMEs.

I have yet to read the study, but the abstract suggests that these people may have a very loose grasp of mathematics.

Putting aside the 'low certainty of evidence' for a moment, they found that studies where at least 40% of the participants were women were associated with higher rates of exaggeration. So, the studies with at least 40% women had a mean reported 'exaggeration rate' of 47%. The studies with less than 40% women had a mean reported 'exaggeration rate' of 31%.

Then the conclusion appears to turn these figures for mixed cohorts into data applying purely to women and to men. So, 'almost 50% of women' and 'approximately a third of men'.

It is possible that they have actually got sex-specific data from each of the included 44 studies. It will be interesting to look at the detail of the paper to see if they have made an unjustified assumption.

It will also be interesting to see why they chose '40% women' as the point to differentiate the two lots of studies. I imagine lots of other things could have been chosen - other % rates, a regression between % of women in a study and reported exaggeration rates.
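On that last point, a study-level regression is easy to sketch once the per-study numbers are extracted. The code below is only an illustration of the idea, with made-up study values (not data from the review), regressing each study's logit-transformed 'exaggeration' prevalence on its proportion of women with inverse-variance weights:

```python
# Minimal sketch (not the authors' analysis): instead of dichotomising studies at
# 40% women, regress each study's logit-transformed prevalence on the proportion
# of women, weighted by inverse variance. All study values are hypothetical.
import numpy as np
import statsmodels.api as sm

prev = np.array([0.20, 0.31, 0.45, 0.52, 0.38, 0.27])   # hypothetical prevalence
n = np.array([120, 200, 90, 150, 310, 80])               # hypothetical sample sizes
prop_women = np.array([0.25, 0.35, 0.48, 0.55, 0.42, 0.30])

logit = np.log(prev / (1 - prev))             # logit transform of the proportions
var_logit = 1 / (n * prev * (1 - prev))       # delta-method variance of each logit

X = sm.add_constant(prop_women)
fit = sm.WLS(logit, X, weights=1 / var_logit).fit()
print(fit.summary())   # slope: change in log-odds per unit change in the proportion of women (0-1 scale)
```

Even a clear slope in such a model would describe study composition rather than individuals; turning it into 'almost 50% of women' would still require sex-specific data within each study.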
 
One potential contributor is symptom exaggeration by patients, who may feel pressure to emphasize their level of impairment to qualify for incentives
Whew, the idea that it's for "incentives" is just plain silly, but obviously emphasizing is not at all the same thing as exaggerating.

And then you get to the thorny fact that it's literally impossible to reliably assess whether someone is exaggerating, and it only goes downhill from there. This is one reason why, despite truth-telling being critical to it, judges aren't allowed to make such, uh, judgments. Because it's literally not a human ability, there is no technology for it, certainly no back-propagation (the action of correcting a system's judgment based on having an accurate answer), and it's riddled with assumptions and even more unreliable biases.

The creep of pseudoscience into health care has long since passed alarming levels, and is getting rather close to destroy-all-credibility level. The main outcome of this is likely to be that AI medicine gets adopted so fast they won't even know what's happening, but that's frankly a best-case scenario, because so far it's literally just enshittification.
 
Wow. That is really prejudice laid bare. If we were in any doubt about what we are dealing with here, this dispels it.

Information about the three senior authors here:
Gordon Guyatt, Jason Busse, both from McMaster University, both associated with "evidence-based medicine" and Cochrane. Regina Kunz leads Cochrane Insurance Medicine and is based in Basel, Switzerland - Switzerland of course having a lot to do with the insurance industry and the insurance industry having deep pockets.

There is no statement of Conflicts of Interest, only this Acknowledgement.


It is not at all clear what individuals are being referred to in that statement about not receiving financial compensation. And, even so, there are many ways that the authors could benefit without directly receiving a payment for this study. The insurance industry and governments have much to gain by suggesting that half of women undergoing assessments of impairment exaggerate their symptoms. Institutions can be funded, people can be appointed to well-remunerated advisory positions.

There's a comment facility.
It only covers ‘direct’ compensation too, I assume?

If someone got a promotion or pay rise based, eg, on participation in certain things, then is that included?
 
If you ask a dozen normal people none of them would pick this list.

Why study human behaviour if you have no understanding of human nature, or at least wish to write an 'academic' paper (for some reason) that gives that impression, even if unintentionally.

Above all else what comes across is lack of any sort of common sense.
Are they studying it, or trying to insert what equates to propaganda-level documents into the literature to distort/reframe said ‘understanding’ of humans (replace with either disability/illness)?
 
I have yet to read the study, but the abstract suggests that these people may have a very loose grasp of mathematics.

Putting aside the 'low certainty of evidence' for a moment, they found that studies where at least 40% of the participants were women were associated with higher rates of exaggeration. So, the studies with at least 40% women had a mean reported 'exaggeration rate' of 47%. The studies with less than 40% women had a mean reported 'exaggeration rate' of 31%.

Then the conclusion appears to turn these figures for mixed cohorts into data applying purely to women and to men. So, 'almost 50% of women' and 'approximately a third of men'.

It is possible that they have actually got sex-specific data from each of the included 44 studies. It will be interesting to look at the detail of the paper to see if they have made an unjustified assumption.

It will also be interesting to see why they chose '40% women' as the point to differentiate the two lots of studies. I imagine lots of other things could have been chosen - other % rates, a regression between % of women in a study and reported exaggeration rates.
Are they actually studying or reporting exaggeration, or is what they are documenting and claiming really misogyny, equivalent to what we now commonly hear about regarding racist presumptions on pain levels, eg that black people get their pain underestimated?

Certainly there’s obviously a major issue with the area of study called ‘pain management’ having been infiltrated by non-methods, to such an extent that I’d say its qualifications need to be written off as medical or scientific until some better outside verifier, not Cochrane, can go through it and probably come up with new exams that check people are able to critically evaluate the dross when reading that literature.

But can this whole swathe of dross be allowed to stand? Have we reached a point where it needs red-lining, with such propaganda papers that are non-research being inserted merely as Trojan horses, used to justify that black is white and, eg, to provide justification for inaccurate policy changes or decisions regarding how the disabled are treated? I think this is so poor and dangerous it obviously needs to be kept for the purposes of court cases and of outing the authors for what they are trying to sell. But they are obviously using the war of attrition of such a court system, where dragging out proving what dross it all is against, eg, a rich company is made so long and arduous that the truth becomes something citizens no longer have a right to, accessible only via a ten-year case to prove the made-up methods are nonsense.

And everyone working in those careers, or learning wider areas like medicine, has their education distorted by this sort of naff attempt being stuffed through certain literatures, with ‘ist’ ideologies served up for them to soak up as if they had a basis?
 
One thing I think is made abundantly clear by this kind of research is how little the authors understand of how flawed measurements, questionnaires and observations are.

They essentially define exaggeration as:
«I don’t believe that your symptoms are as bad as you say»

And they ignore every possible confounder and alternative explanation, and make a general assumption that all deviations between «reported» symptom burden and «actual» symptom burden are caused by malice.

They simultaneously assume that everything they do is perfect, and that everything «the others» do is flawed.

They are outright accusing a large proportion of people of being morally and ethically corrupt. Ironically, this article puts the moral and ethical corruption of the authors on full display for everyone to see.
 
One thing I think is made abundantly clear by this kind of research is how little the authors understand of how flawed measurements, questionnaires and observations are.

They essentially define exaggeration as:
«I don’t believe that your symptoms are as bad as you say»

And they ignore every possible confounder and alternative explanation, and make a general assumption that all deviations between «reported» symptom burden and «actual» symptom burden are caused by malice.

They simultaneously assume that everything they do is perfect, and that everything «the others» do is flawed.

They are outright accusing a large proportion of people of being morally and ethically corrupt. Ironically, this article puts the moral and ethical corruption of the authors on full display for everyone to see.
Oh, they know exactly what they are doing: they are serving the insurance industry, enabling their justification for claims denial. What judge or arbitrator is going to wade through mountains of trash-level papers in order to come to a fair assessment on the part of the claimant?
 
Introduction
Despite their widespread use and far-reaching consequences, the consistency and reliability of IMEs has been challenged. The most recent systematic review found that clinical experts assessing the same patients often dissented on whether they were disabled from working (median inter-rater reliability 0.45) [7]. Although this review suggested that standardization of the assessment process may improve the reliability of IMEs [7], two subsequent studies failed to support this hypothesis [8]. Another potential source of variability in IME assessments is symptom exaggeration [3]. IME assessors may focus too narrowly on a biomedical model to explain symptoms, without giving sufficient attention to psychosocial and work-related factors that may influence how individuals present their symptoms [3,9].
In other words, 'independent medical evaluations of the same individual are coming up with quite different answers. It's not a problem with the evaluation process. The problem is that some doctors are failing to recognise that many of the people making claims have reasons other than being genuinely disabled to want to be judged as disabled.'

Also, terminology such as exaggeration, malingering, or over-reporting are defined inconsistently across studies, making it difficult to distinguish intentional deception from psychological amplification of distress [4,14].
In other words, 'we aren't saying that all these people are malingerers (although some definitely are), it's just that perhaps some of these people just can't stop themselves from lying'.

We undertook the first systematic review of observational studies to explore the prevalence of symptom exaggeration among IME examinees in North America.
'and we did this, despite it being mostly impossible to determine if someone is exaggerating their pain in an IME.'

***
I understand that this is a real problem for the insurance industry. But, if you can't measure pain or fatigue objectively, then you are left with self-reporting. Assuming that people who had a rough childhood, or who don't like their boss or, well, who are women, are probably not genuinely disabled can't be allowed to be the answer. It's not fair. There are other ways to approach the problem.
 
Methods
We registered our protocol on the Open Science Framework (Registration DOI: https://doi.org/10.17605/OSF.IO/64V2B) [17]. After registration but prior to data analysis, we included five meta-regressions/subgroup analyses to explore variability among studies reporting the prevalence of symptom exaggeration: (1) proportion of female participants, (2) older age, (3) level of formal education, (4) clinical condition, and (5) level of confidence in the reference standard used in the approach for evaluating symptom exaggeration.
They registered a protocol. But then, some time after that, they decided how they would analyse the data. They say it was before they looked at their data. The lack of a pre-registered analysis plan increases the likelihood that they cherry-picked the analysis approach to give them the answer they want.

The search strategy:
The search strategies were developed using a validation set of known relevant articles and included a combination of MeSH headings and free text key words, such as malinger* or litigation or litigant or “insufficient effort” and “independent medical examination” or “independent medical evaluation” or “disability” or “classification accuracy”.

Assessment of the reliability of assessment of exaggeration:
Eligible studies: (i) enrolled individuals presenting for an IME in North America, (ii) in the presence of external incentive (e.g., insurance claims), and (iii) assessed the prevalence of symptom exaggeration using a known group design or multi-modal determination method [19,20]. As there is no singular reliable and valid criteria (reference standard) in the literature that is used to assess for symptom exaggeration, we included known group study designs that defined their reference standard based on criteria incorporating both clinical findings and performance on psychometric testing to classify individuals as exaggerating (within diagnostic test terminology, the target positive group), or not exaggerating (the target negative group) their symptoms [21,22].
That's not making much sense to me yet.

Examples of two commonly used known group designs are the Slick, Sherman, and Iverson criteria for malingered neurocognitive dysfunction [23] and the Bianchini, Greve, & Glynn criteria for malingered pain-related disability [24]. We excluded studies that used only beyond-chance scores on symptom validity tests as an indicator of symptom exaggeration, since beyond-chance scores are infrequent and likely to result in underestimates [25–27]. We restricted our focus to North America as there may be important differences between IMEs conducted within North America where social insurance for disability is limited and Europe where social insurance is prominent. In cases where multiple studies had population overlap, we included only the study with the larger sample size.
So, it looks like we need to understand those criteria they mention for malingered neurocognitive dysfunction and malingered pain-related disability. It's not clear to me yet how, by using studies with scales assessing malingering, they are going to identify people who aren't in fact malingerers but who exaggerated their pain because of distress.
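For what it's worth, the mechanics of a 'known group' prevalence study seem to reduce to something like the sketch below: each examinee is classified against a composite reference standard (some combination of clinical findings and failed validity/effort tests), and the reported prevalence is simply the proportion classified as target positive. The rule, thresholds and data here are invented for illustration, not the actual criteria of Slick et al. or Bianchini et al.

```python
# Hypothetical illustration of a known-group classification rule; the criteria,
# thresholds and cohort are invented, not taken from any of the included studies.
from dataclasses import dataclass

@dataclass
class Examinee:
    external_incentive: bool      # e.g. an active insurance or compensation claim
    failed_validity_tests: int    # number of performance/symptom validity tests failed
    clinical_inconsistency: bool  # assessor-judged discrepancy between findings and report

def target_positive(e: Examinee) -> bool:
    """Composite reference standard (invented): incentive present AND
    (>= 2 failed validity tests, OR 1 failure plus clinical inconsistency)."""
    return e.external_incentive and (
        e.failed_validity_tests >= 2
        or (e.failed_validity_tests >= 1 and e.clinical_inconsistency)
    )

cohort = [
    Examinee(True, 0, False),
    Examinee(True, 2, False),
    Examinee(True, 1, True),
    Examinee(True, 1, False),
]
prevalence = sum(target_positive(e) for e in cohort) / len(cohort)
print(f"Estimated prevalence of 'exaggeration' in this toy cohort: {prevalence:.0%}")  # 50%
```

The obvious weakness, which this thread keeps returning to, is that the prevalence estimate is only as good as the composite rule itself, and none of these rules has demonstrated reliability and validity.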
 
Their conclusion is laughable. They found that there are no good tools for measuring symptom exaggeration, yet they conclude that it was very present, especially among women.
One thing I think is made abundantly clear by this kind of research is how little the authors understand of how flawed measurements, questionnaires and observations are.

They essentially define exaggeration as:
«I don’t believe that your symptoms are as bad as you say»

And they ignore every possible confounder and alternative explanation, and make a general assumption that all deviations between «reported» symptom burden and «actual» symptom burden are caused by malice.

They simultaneously assume that everything they do is perfect, and that everything «the others» do is flawed.

They are outright accusing a large proportion of people of being morally and ethically corrupt. Ironically, this article puts the moral and ethical corruption of the authors on full display for everyone to see.
Worth repeating.
Oh, they know exactly what they are doing: they are serving the insurance industry, enabling their justification for claims denial. What judge or arbitrator is going to wade through mountains of trash-level papers in order to come to a fair assessment on the part of the claimant?
Yep, they know. There are no more excuses left. They are fully culpable now, in every sense.
 
I've made a thread to discuss criteria for determining if someone is a malingerer, including the Slick, Sherman, and Iverson criteria that were used by a lot of the studies included in the review.
Diagnostic criteria for malingering

The authors themselves, in a 2020 review of their 1999 criteria, suggest that there were serious problems with the operationalising of the original criteria. They note that a lot of the tests that were supposed to identify if someone is a malingerer don't really work, and are more likely to catch genuinely impaired people than people pretending to be impaired. There's quite a bit of effort to make things seem precise and evidence-based.
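The base-rate problem behind that observation is easy to show with arithmetic. A minimal sketch with hypothetical sensitivity, specificity and base-rate values (not figures from the 2020 review):

```python
# Hypothetical numbers only: even a validity test that clears most genuine patients
# can produce a flagged group made up mostly of genuine patients when the true
# base rate of feigning is modest.
sensitivity = 0.60    # assumed: probability the test flags a true feigner
specificity = 0.85    # assumed: probability the test clears a genuine patient
base_rate = 0.10      # assumed true prevalence of feigning among examinees

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)
print(f"Share of flagged examinees who are actually feigning: {ppv:.0%}")
# ~31% here: most of those labelled as failing would be genuinely impaired, not feigning.
```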

The field is a joke. I can see that there are sometimes legitimate reasons to try to determine if an individual is feigning reduced capacity, not least because some parts of the world create enormous financial incentives to feign acquired incapacity after an accident, incentives not just for the claimant but also for the legal machinery around them. I think those incentives corrode people's trust in one another and foster an industry in proving others are not as sick as they say they are.

The determination that someone is feigning may sometimes be supported by decent evidence, but is likely to be significantly impacted by the bias of the assessor - by their social prejudices and by what answer the person who is paying them wants. Reported percentages of people feigning in studies are likely to be hopelessly incorrect, making this review of those studies hopelessly flawed.
 