Summary
Neurologists often evaluate patients whose symptoms cannot be readily explained even after thorough clinical and diagnostic testing. Such medically unexplained symptoms are common, occurring at a rate of 10%–30% among several specialties.
These patients are frequently diagnosed as having somatoform, functional, factitious, or conversion disorders. Features of these disorders may include symptom exaggeration and inadequate effort. Symptom validity tests (SVTs) used by psychologists when assessing the validity of symptoms and impairments are structured, validated, and objectively scored. They could detect poor effort, underperformance, and exaggeration. In settings with appropriate prior probabilities, detection rates for symptom exaggeration have diagnostic utility. SVTs may help in moderating expensive diagnostic testing and redirecting treatment plans. This article familiarizes practicing neurologists with their merits, shortcomings, utility, and applicability in practice.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5764424/
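Worth spelling out what "in settings with appropriate prior probabilities" actually implies, because the arithmetic is brutal. A minimal sketch, assuming a hypothetical SVT with 90% sensitivity and 90% specificity (made-up illustrative numbers, not taken from the paper):

```python
# Minimal sketch: positive predictive value (PPV) of a hypothetical SVT.
# All numbers are illustrative assumptions, NOT taken from the paper.

def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(truly exaggerating | flagged as 'poor effort'), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# The same test looks very different depending on the assumed prevalence
# of genuine symptom exaggeration in the clinic population:
for base_rate in (0.40, 0.10, 0.02):
    flagged_wrongly = 1 - ppv(0.90, 0.90, base_rate)
    print(f"base rate {base_rate:.0%}: "
          f"{flagged_wrongly:.0%} of flagged patients are false positives")
```

Under those assumptions, about 14% of flagged patients are false positives at a 40% base rate, 50% at a 10% base rate, and roughly 85% at a 2% base rate. So the claimed "diagnostic utility" rests entirely on an assumed prior that nobody has independently established, which is exactly the collateral-damage question raised below.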
The manifesto justifying the 10-30% write-off continues... I wonder what really explains that number, and whether it is 'in the mind/ideology/attitude of the neurologist' [perhaps truthfully a post hoc justification for, sadly, certain demographics, or 'don't like the face of'], communication styles they simply tend not to understand, or new conditions they are too lazy/ignorant/stupid/and so on to be curious about.
Given Popper's falsifiability principle, they simply can't prove claims about the patient population whose behaviour they claim to be expert in... so this just seems like contradictory nonsense. It's backwards: finding criteria to 'prove' such gut-instinct categorisation is right, and then searching for fake post hoc justifications for why the categorisations were made in the first place. Whether or not these are valid or consistent measures (a neurologist's 'objective' rating?), their apparent validity rests on 'does it match?'
The most important part of this research would be the 1-, 3-, 5-, 10- and yes 20-year follow-up, by a genuinely independent party, to see how many patients turned out to have something else, and what difference the labels directly caused to quality of life and life expectancy versus those who didn't have such misfortune. Weird that they never want to know the real results, when that's the blindingly obvious 'test'? Shouldn't that list be the most fundamental thing that haunts such people, asked about to the point that it becomes a tracked outcome, just as much as other measures are for departments and conditions?
This reminded me of a really good comment @rvallee made relatively recently in another thread, noting that [relevant to that particular thread] there seemed to be a potentially disturbing trend of people claiming the term 'psychometric' - which means something quite specific, about measurement against a population - applies just because their job title has the word 'psych' in it. Except these individuals don't even have that, if they're neurologists.
I daren't open this in case it turns out to be a bunch of fools running a test with ambiguous questions about 'how many times a day' or 'how someone feels when' on a Likert scale, instead of, e.g., a properly, scientifically designed challenge (with all impacting factors, environmental ones for instance, at least accounted for if not controlled and measured) of reaction times, or a task whose cognitive elements they can label, done in the form of a 2-day CPET.
I love the fact that they think they can wield poisonous terms like 'poor effort' on a population that will include people with serious illness and potentially severe symptoms, as if there is no need to rate effort relative to what 100% effort could actually achieve for that person (or more than 100% for pwME, given how PEM works).
And that they think 'rated by someone else' doesn't mean their data is actually more a psychometric measure of the raters' tendency to rate certain individuals than of those paraded in front of them, given the raters are the ones 'doing the action', i.e. part of the measurement, if they are the ones 'rating'. Can people really be that confused about methodological terms?
It also makes me laugh that whilst their supposedly beloved 'BPS' model claims to holistically bear in mind the many things that impact on 'the person', they can't imagine that someone could have had something else [bad] happen on one day versus another.
The words 'validated' and 'scored' make me very suspicious that this isn't because they needed to rank continuous data into discrete categories for some important statistical reason, but because it isn't psychometric at all. Indeed, psychometrics would normally demand data in a continuous form because of the nature of distributions.
And I balk at the word 'objectively' being included: I have a horrible feeling that 'objectively scored', coming from a principal investigator who uses terms like factitious and conversion disorder, is double-speak.
Either way, I guess the underlying causation for any correlations they have might sadly be things related to gender, peer pressure, and various inaccurate presumptions and tropes about what a disabled person versus a malingerer would answer in a given situation... all 'scientifically' gleaned from the mind of one person, backed up by other people who hold the same presumptions because they all chat together in the tea room about what they think malingerers are. And they don't realise that this is neither scientific nor knowledge-based, often because... well, 'we're experts'? Erm, not if you don't know how to test things, or can't accept the results of your own test when it contradicts your gut feelings.
I remember watching a talk by an expert in university league tables who noted that, after all the careful thinking about which criteria matter, how they are best measured, checking the weighting and modelling it through for flaws... the important thing was that unless Oxford or Cambridge came out at or very near the top, no one would believe it. Which of course = calibration to expectations, rather than triangulation (a process that does need to be used to check validity, and which seems to be highly absent here and in most BPSM research... but then they don't like using well-defined patient criteria either).
I have a grim feeling these people know their audience, and that the audience isn't checking the scientific rigour or giving awards for ingenuity in robust measurement... and of course 'calibration', in the context of people who demand to see confirmation of what they already presume, is a scary example of reorganising reality to back up deluded beliefs, making them feel 'more feasible' because there is now a scale for them, with the right amount of people landing in the different groups.
Wouldn't someone who really wanted to calibrate begin with the few famous 'proven' frauds, and include huge numbers of people with proper diagnoses, particularly a group who might have gone through the hell of being doubted or put under a MUS-type approach (there are enough of them who end up finding out they have cancer or RA or PA or thyroid issues and so on), so that the impact of gaslighting, and the horrendous effect it has on your confidence when talking about symptoms, is accounted for? That is, IF they genuinely wanted to get the right people, and not just the right numbers (and who cares about the collateral damage), into this apparent 'holy grail' of tools?
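If anyone did want to do that calibration properly, the tally is simple enough to sketch. Everything below is hypothetical - invented group labels, invented scores, an invented cutoff - just to show what a known-groups check would actually report:

```python
# Minimal sketch of known-groups validation for a hypothetical SVT cutoff.
# Groups, scores, and the cutoff are invented for illustration only.

CUTOFF = 45  # hypothetical threshold: scoring below it = flagged 'poor effort'

groups = {
    "proven_fraud":             [30, 35, 28, 41],  # should be flagged
    "confirmed_diagnosis":      [48, 52, 44, 50],  # should NOT be flagged
    "previously_mislabelled":   [43, 47, 39, 46],  # the group that matters most
}

for name, scores in groups.items():
    flagged = sum(score < CUTOFF for score in scores)
    print(f"{name}: {flagged}/{len(scores)} flagged as 'poor effort'")

# The tool is only defensible if the flag rate is high for proven frauds
# AND near zero for the verified-ill groups, including those whose test
# behaviour may already be shaped by years of being disbelieved.
```

That per-group false-flag rate, on independently verified groups, is the number that should be published, not just a headline detection rate on whoever the rater already suspected.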
Isn't doing it this way - by asking those who might be getting it wrong based on their own individual biases - just a way of ingraining problems, flaws, and error throughout an industry, instead of rooting them out? But 'as long as the low-hanging fruit = the right number' seems to be the focus here, which is scary.