Explaining persistent physical symptoms to patients in general practice: can tests to measure central sensitisation add value? 2024, den Boer et al

A positive test result may facilitate the explanation to the patient that their symptoms are possibly related to CS. Importantly, GPs found all three tests to be more valuable when the result was positive rather than negative. They considered a positive test to be more valuable for the patient, whereas the patients’ questionnaires indicated that it made no significant difference to them (Appendix 5).

When faced with a negative test result, GPs used various approaches. First, they clarified before testing that a negative test did not necessarily imply that the symptoms were unrelated to CS. Second, they emphasized that the tests were still in the research stage, which could affect the reliability of the outcomes. Third, the GPs maintained their explanation regardless of the negative test result.

However, some GPs and patients expressed confusion regarding negative test results. Some GPs adjusted their explanations, while certain patients struggled to accept that their symptoms might still be related to CS.

When a test was negative:
GP 13: “Not that I think it will make me doubt the diagnosis, but I did have a story in mind and then I couldn’t explain it that way anymore. So yes, then I had to stop and think what to say instead.”

:rofl: What an utter embarrassment - this is modern medicine.
 
So they started with a group of patients the researchers believed to have “central sensitisation”, then performed tests which showed that fewer than half of them did have a heightened physical response to stimuli, and so they decided to administer a questionnaire instead in which anyone with physical or emotional symptoms is assumed to have central sensitisation, and still only managed to get 3/4 patients coming out with CS? Did I get that right?

Wouldn’t a more honest summation of this research have been: “Most patients assumed to have CS do not show measurably heightened responses to stimuli”? These self-undermining papers are a very weird phenomenon of BPS medicine.
 
Who pays for such artfully blinding and profiteering nonsense, and why? Are all these papers open access, or making pocket money from downloads? Or am I the bitterly paranoid and hostile toerag who doesn't want people to benefit from scientific discovery? And did the writers all attend those notorious international online university courses on psychosomatics (memo to self: find that link)? If so, can their lecturers with tenure there and elsewhere be disciplined? (Seven questions in one.)
 
So they started with a group of patients the researchers believed to have “central sensitisation”, then performed tests which showed that fewer than half of them did have a heightened physical response to stimuli, and so they decided to administer a questionnaire instead in which anyone with physical or emotional symptoms is assumed to have central sensitisation, and still only managed to get 3/4 patients coming out with CS? Did I get that right?
Essentially yes, although the details are slightly different.

GP selection said:
We invited 30 GPs from West-Friesland in North-Holland, the Netherlands, who had previously participated in our study on explaining CS (which did not involve testing CS) [19]: 16 GPs agreed to participate. We recruited nine additional GPs who already had (some) experience with explaining CS from the professional network of researcher CdB, who is involved in the training of GP residents and has an extensive regional network. Some experience with explaining CS was required to be able to answer the central question in the focus groups: what is the added value of the tests measuring CS compared to providing only the explanation? In total, 25 GPs participated in this study.
Substantial effort went into selecting GPs who supported the idea of central sensitisation and who had experience in explaining it to patients.

It was the GPs who selected willing patients and who decided which one (or more) of three tests to use for each patient.
The researchers selected patients for in-depth interviews, applying a high degree of selectivity. It looks as though patients probably paid a GP consultation fee for a separate appointment in order to undertake the test.
patient selection said:
During consultations, GPs invited adult patients with PPS to participate in the study. We included patients who agreed to discuss their PPS symptoms with the GPs and consented to undergo one of the tests. No exclusion criteria were applied. GPs determined whether they considered the patient capable to understand the explanation, for instance, deciding whether to invite patients with linguistic challenges or low IQ to participate. When patients agreed, the GPs provided them with study information and informed consent forms. Upon deciding to participate, patients scheduled a follow-up appointment with their GP, signed the informed consent forms, and received the explanation of CS along with one or more tests, depending on the GP’s access to the testing materials.

We purposively selected 30 patients for in-depth interviews at the end of the study, of whom 17 agreed to participate. Using purposive sampling, we primarily selected patients with either high or low test scores, as well as those with conflicting opinions about the test (e.g., patients who found the test valuable but not clarifying), to enable a focused exploration of their experiences.

number of tests said:
The distribution of test applications among the GPs was as follows: eight GPs employed all three available tests, six GPs applied two different tests, and seven GPs administered only one test. Notably, four GPs did not use any tests at all. Furthermore, two GPs combined two different tests for a single patient, while one GP applied all three tests to another patient. Among the tests used, the algometer was administered in 37 instances, the monofilament in 28 instances, and the CSI in 19 instances.
So the CSI, the survey test, which was promoted as the best, was only provided to 19 patients.

percentage of positive tests said:
We analysed 84 questionnaires completed by the GPs. (Table 1) They mainly applied the tests to patients with moderate and severe symptoms. The GPs estimated that CS was likely in approximately two-thirds of the patients and possibly in one-third. Positive test results were found in 57% of cases with the algometer, 45% with the monofilament, and 63% with the CSI.
Yes, the number of positive cases as assessed by the tests was quite low. That's particularly striking given that the doctors must surely have been deciding a patient had 'central sensitisation' based on their symptoms, and only gave a patient the test when they were pretty sure the patient would meet the criteria. And yet the CSI, which asks about symptoms, only found 63% of people qualified for the diagnosis.
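As a quick sanity check on the figures quoted above: the three per-test counts sum exactly to the 84 questionnaires analysed, and the quoted positive rates imply only around 46 positive results overall. A rough Python sketch (the implied positive counts are my own rounded estimates, not figures from the paper):

```python
# Per-test administration counts and positive-result rates, as quoted above.
tests = {
    "algometer":    {"n": 37, "positive_rate": 0.57},
    "monofilament": {"n": 28, "positive_rate": 0.45},
    "CSI":          {"n": 19, "positive_rate": 0.63},
}

# Total test administrations: 37 + 28 + 19 = 84, matching the number
# of GP questionnaires the paper says were analysed.
total = sum(t["n"] for t in tests.values())
print(total)  # 84

# Rough implied positive counts per test, rounded from the quoted percentages.
positives = {name: round(t["n"] * t["positive_rate"]) for name, t in tests.items()}
print(positives)  # roughly {'algometer': 21, 'monofilament': 13, 'CSI': 12}
print(sum(positives.values()), "of", total)  # roughly 46 of 84, i.e. ~55% positive overall
```

So even in a sample the GPs had pre-selected as likely CS cases, only a little over half of the administered tests came back positive.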

The medical ethical committee of VUmc (METC VUmc) confirmed that the Medical Research Involving Human Subjects Act (WMO) did not apply to our study.
I think this is outrageous. This was medical research with significant risks (as others have noted - labelling someone as basically an unreliable witness to their symptoms is not a risk-free undertaking). I think this might be an example of a study where it is appropriate to make a complaint to the ethics committee.
 
I think it is worth noting that the CSI is biased to diagnosing women as having central sensitisation.

There are two questions that relate to urinary tract infections:
I feel discomfort in my bladder and/ or burning when I urinate.
I have to urinate frequently.
and one on pelvic pain
I have pain in my pelvic area.

Women are more likely than men to experience bladder infections, endometriosis and period pain - all those could increase the scoring from Never to Rarely, or even to Sometimes.
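For context, the CSI Part A is generally described as 25 items, each scored 0 ('never') to 4 ('always') and summed to a 0-100 total, with 40 the commonly cited cutoff. A minimal sketch of the scoring arithmetic (the item positions used here are illustrative, not the questionnaire's actual ordering) shows how shifting just the three items above from 'never' to 'sometimes' moves the total:

```python
# CSI-style scoring sketch: 25 items, each scored 0 (never) to 4 (always),
# summed to a 0-100 total; 40 is the commonly cited cutoff.
NEVER, RARELY, SOMETIMES = 0, 1, 2

def csi_total(item_scores):
    """Sum of the 25 item scores (range 0-100)."""
    assert len(item_scores) == 25
    return sum(item_scores)

# A hypothetical respondent answering "never" to every item...
baseline = [NEVER] * 25

# ...except the three items discussed above (bladder discomfort, urinary
# frequency, pelvic pain), bumped to "sometimes" by, say, a bladder
# infection or period pain. Indices 0-2 are illustrative placements.
shifted = list(baseline)
for i in (0, 1, 2):
    shifted[i] = SOMETIMES

print(csi_total(shifted) - csi_total(baseline))  # 6: a 6-point shift from those items alone
```

Six points out of 100 is modest against a cutoff of 40, but it is a systematic shift that applies to one sex far more than the other, which is the bias being pointed out.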
 
Yes, the survey is circular nonsense.
1. "Central sensitisation" (having symptoms without any identifiable pathology) has this set of symptoms with no identifiable medical cause.
2. You have these symptoms and I haven't identified any medical cause.
3. Therefore you have central sensitisation.
I don't understand why people don't immediately see through it. I guess a lot of patients do. Certainly it's a way to erode trust in a GP's professional competence.

On the method for determining pain thresholds:

"patients were sometimes unsure where to indicate that the feeling of pressure changed into pain"
We've seen a study recently in which women were found to indicate feeling pain at lower levels of noxious exposure than men did, but rated that pain as less severe than the men did at the point they reported it. So, just as an example, one person can report feeling pain with a mild to moderately painful stimulus but rate the severity as only 3/10, whereas another can wait until the stimulus is moderately painful but rate the severity as 5/10. It's extremely subjective as to when a person reports feeling pain. It tells you little about their pain thresholds, and rather more about what they have been taught is appropriate when acknowledging pain.

"GPs reported occasional confusion regarding negative test results, particularly when they assumed the patient had CS-related symptoms but the patients exhibited very high [pressure pain thresholds]"
Yes, I can see that would be rather difficult. A performative test aiming to convince patients that they have central sensitisation that doesn't actually work. Perhaps that's why the ridiculous circular survey turned out to be the test of choice - nothing else worked.
This is like using measurement tools that have not been calibrated, when in fact it's known that unless you do careful calibration each time, you will not get an accurate measure.

Something that would get flagged in most cases where an objective measurement is made. In fact this is where most interesting results die: in the details.

Everyone in this profession knows this, they simply choose to overlook this because otherwise they have nothing. In itself that's not a problem, not knowing is normal and fine, but they've been making stuff up for so long that they can't do that, so they all play pretend with things they know to be total BS, as if they're just playing with Monopoly money.
 
When a test was negative:
GP 13: “Not that I think it will make me doubt the diagnosis, but I did have a story in mind and then I couldn’t explain it that way anymore. So yes, then I had to stop and think what to say instead.”

:rofl: What an utter embarrassment - this is modern medicine.
Same thing with the recent pretense at "rule-in signs" for functional disorders. When the test is "positive", this is validation. If the test is negative, well, those tests aren't 100% reliable, and there are other ways they can determine that. Ways such as: making stuff up, and then not being bothered that they are using made-up stuff when making critical life-and-death decisions about real people in real life.
 
The medical ethical committee of VUmc (METC VUmc) confirmed that the Medical Research Involving Human Subjects Act (WMO) did not apply to our study.

So it seems to me that too many medical ethics committees too often fail to do what they are supposed to do for the trade journals' satisfaction: act as a supra-peer preview that screens, quality-controls, filters and sieves, but first of all assesses.

And it seems to me that, since there might be some thoughtless or witless, careless or conflicted and very interested rubber-stamping of the document conveyor belt in process as we speak, it is worth repeating what, after a thorough examination, @Hutan deduced:

"I think this is outrageous. This was medical research with significant risks (as others have noted - labelling someone as basically an unreliable witness to their symptoms is not a risk-free undertaking). I think this might be an example of a study where it is appropriate to make a complaint to the ethics committee".


It is the WMA setting the recently updated global standard for the industry: to inform, guide, direct and mandate the world's medical ethics committees, health organisations, institutions, commissions, funders, governments, clinics, researchers, trade journals, review trade journals, patients, and other wildlife.

Warning: these links can wait, maybe until next year. Once you delve into them there is a fascinating syllabus that scrolls down and down and down, so it is worth copying to study at leisure, but then it's a dismal few hours of overwhelming impotency. See, it took me six months to get back onto this job after looking into this abyss:

It is worth looking up the lecturers. These people need students, and students need jobs lined up to make their outlay pay, and the most inhumanly programmed government-traded Ministers need to spin delusions about getting everyone back on their feet in time for work or school, or what's the point?

Schwannauer, Matthias - TECH United Kingdom

Benito de Benito, Luis - TECH United Kingdom

Espinoza Vázquez, Óscar - TECH United Kingdom

Segovia Garrido, Domingo - TECH United Kingdom

The academic institutions are not going to fund and hire enough to sustain the industry and expand it so very assiduously, and PS I doubt any landlubber academia has such an extensive syllabus; maybe it's funded by JKR and associates.

Curiouser and curiouser

May I call it a cash cow for highly organised, highly educated cashiers, ker-ching, fascinated by human modification ops and opportunities, while waiting for the genome-tailoring breakthrough that will render their costly, time-consuming skills obsolete and put them all back on the scrap heap, unless they can rediscover integrity. But may it be doing some good to the few, the few that did not warrant scaling all this up.

How many debunked professionals have Techtitute on their CV, or where did these alumni go? Onto integrative medical ethics committees, maybe.

Maybe the "ethics" also get presented to the WHO ethics committee for grants for "integrative" research / clinics / rehabs, and maybe that's how Wallit found his "integrative" métier (but also his master, A. Nath):



https://www.uclh.nhs.uk/our-service...ion-collaborating-centre-integrative-medicine

- RLHIM links related to ME / CFS might be unfound due to lack of character needed to be found

https://www.uclh.nhs.uk/our-service...n-hospital-integrated-medicine/research-rlhim

Also, RLHIM Education gives CBT work experience, a big national Insomnia & Sleep Disorder Clinic in-house, a germinal Self-Care online outreach, and the ....tbc

1st EDIT: to re-insert the italicised bit re: our Wally, see above

2nd EDIT: to re-insert the bit re: Techtitute maybe being based on Hogwarts, the skool for vicious 'cool', see:
Not sure if already posted, but "Harry Potter is also Ableist" by Ember Green:



Also from 2022: https://www.theonceandfuturecripple.com/en/rowlings-new-book-is-ableist-and-so-was-harry-potter/


thanks for the movie @Snow Leopard

3rd EDIT: to add Education to the list of what a WHO ethics committee (under Helsinki standards) might promote WHO grants for, e.g. "integrative" research / clinics / rehabs .... and "integrative" education. ALSO maybe our charities can get WHO "integration" grants vetted; certainly the RLHIM collaborators might.
 
I think it is worth noting that the CSI is biased to diagnosing women as having central sensitisation.

There are two questions that relate to urinary tract infections:
I feel discomfort in my bladder and/ or burning when I urinate.
I have to urinate frequently.
and one on pelvic pain
I have pain in my pelvic area.

Women are more likely than men to experience bladder infections, endometriosis and period pain - all those could increase the scoring from Never to Rarely, or even to Sometimes.
Such awful construct validity!
 
Did the vetting of standardised methodology get delegated to the ethics committees? I get the gist, but I am vague on the geography of these fields.

The global WMA's ethical update seems to make all these ethics committees responsible for vetting the standards of methodology and productivity* too.

Do ethics committees have the competence to vet methodology, and do they have ethics specialists on board too? What CV is required for the bunch on board?

Which bodies are supposed to be vetting the methodology of research in advance, and after, and to date? And how has it somehow been made mostly optional?

* I get my definition of productivity from the WMA ethics update - it sounds something like: must probably discover and produce useful and applicable scientific knowledge and data, probably.

I am reminded of the UK White Paper on how the researched CBT statistics produced were not actual statistics but were all probable statistics, to justify the Return to Health & Work Budgets, probably.
 