The effects of a structured communication tool in patients with medically unexplained physical symptoms: a cluster randomized trial 2023 Abrahamsen

Just read this paper. They found a remarkably strong effect on sick leave (GPs recorded sick leave from the participants’ medical records).

In the intervention group, sick leave dropped from 52% to 25.2%, while in the control group it only went from 49.7% to 45.7%.
They are comparing care as usual, which often includes granting sick leave, with an intervention explicitly designed to avoid sick leave. Of course there will be a massive difference.
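A quick difference-in-differences check of those reported percentages makes the size of the gap explicit (a rough sketch of my own arithmetic, not the paper's analysis model):

```python
# Reported sick-leave proportions (SAAPSL, %) from the paper
intervention_pre, intervention_post = 52.0, 25.2
control_pre, control_post = 49.7, 45.7

# Within-group changes in percentage points
d_int = intervention_post - intervention_pre   # -26.8 pp
d_ctl = control_post - control_pre             # -4.0 pp

# Crude difference-in-differences estimate
did = d_int - d_ctl                            # -22.8 pp
print(f"Intervention change: {d_int:+.1f} pp")
print(f"Control change:      {d_ctl:+.1f} pp")
print(f"Difference-in-differences: {did:+.1f} pp")
```

A 22.8 percentage-point gap is exactly what you would expect when one arm's protocol discourages the very outcome being measured.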
 
It gets worse.

This study won a prize from the GP research fund of the Norwegian Association of GPs for its «high quality»:
Convincing results

The evaluation committee for the travel grant consisted of Anette Fosse, Anja Brænd and Trygve Skonnord. They write this in the justification for the award:

"Based on her own practice and her own perceived inadequacy towards a common patient group, the prize winner has developed a structured conversation tool that she is exploring the effect of in her PhD project. The results of the intervention are convincing, with effects on function, symptoms, quality of life and sick leave, and the study was published in a highly ranked international journal. The project has obvious general medical relevance, maintains high quality and originality, and the prize winner has a good ability to pedagogically communicate the tool and the research."
And apparently, 1/3 of all GPs in Norway have taken the ICIT course.

 
I simply don't believe that figure! But then, they do use the weasel words "up to" about the number.

In one sense, prior to diagnosis, all medical consultations are for symptoms not yet explained. However, ‘MUPS’ is a specific diagnosis the validity of which quite rightly requires serious examination, not least to establish whether or not it has an objective reality beyond the minds of some researchers and clinicians.

Certainly the advocates of MUPS as a meaningful clinical grouping, rather than as a description of the current limits of clinical knowledge, have a record of misrepresenting the figures despite @dave30th's many letters to journals pointing this out.
 
A level of bias this high and quality this low means most professions wouldn't even look at the study, because every single piece of evidence they use is vastly better than this. They are really showing how poor their judgment is: they value a study's quality based on whether they like what it says, rather than on actual quality.

Honestly it feels like a joke. It's too surreal to see stuff like this. Most professions wouldn't even wipe their asses with studies like this, and they give it prizes for high quality. Again they are really showing why they struggle so hard to deliver better outcomes. The bubble of medicine has been closed off for too long, it completely lacks any diversity of perspective and it shows.

Like someone who just put a bag of hot-dog buns next to a pack of sausages winning a prestigious cooking prize. Completely absurd.
 
It's also completely expected! Of course they regularly see problems they don't understand. This is perfectly normal and probably a huge underestimate, in that they don't count the many problems they just wave off unrecorded because they are completely inconsequential.

I'd say it's totally natural that most problems they encounter on a daily basis will not be explained, but they can't seem to deal with that fact, even though it is acknowledged at times. What a bizarre profession, impaired by an aristocratic culture detached from accountability.
 
Trygve Skonnord, one of the members of the committee for that prize, did his PhD on acupuncture for acute lower back pain and claims he’s «interested in method development in randomized controlled trials (RCTs) in primary health care».

At the very least, his open-label study with mostly subjective outcomes, where the intervention was a single 8-9 minute session of acupuncture for acute LBP, didn't find any differences between the acupuncture and care-as-usual groups.

 
I think that sick leave outcome can be explained by the bias in participant selection.



If a GP started suggesting to a patient with persisting symptoms that they could just think better and be well, there's a reasonable chance that that patient isn't going to turn up for the second session of being told that. They might even decide that it's time for a new GP.

It's very likely that the participants who stuck with the doctor were either improving anyway or were able to push through their symptoms when encouraged to do so, at least for a while. So I think it would be wrong to assume that that apparent benefit of reduced sick leave could be applied to the general population of people with persisting symptoms.
But they were randomized ....
 

I don't think that impacts on Hutan's point. If you end up with a sample of patients that is not representative of the population targeted then you cannot extrapolate to that population. If the treatment is not blinded then bias can come in.

Randomisation is a bare minimum requirement for trials but is not the main determinant of reliability of results - blinding and representative sampling are likely to be much bigger issues I think.
 
But they were randomized ....
Sorry, I can't tell if that is sarcasm.

There was intense self-selection of the GPs involved - they had signed up to a course on this. And there will have been some selection bias in what patients GPs included in their assessments. There is a ridiculous level of effort put into randomising the GPs to the intervention or treatment as usual, but it doesn't really reduce the huge amount of bias in this study.
In Norway, a total of 129 GPs enrolled in an open enrollment course that was promoted through the Norwegian Medical Association's class program and a Facebook group for Norwegian GPs. Out of the 129 GPs who enrolled in the course, a total of 103 GPs from various locations throughout Norway met the eligibility criteria of practicing in primary care and willingly provided written informed consent prior to the start of the study. GPs either joined the course individually as the sole representatives from their clinics or participated as a group from the same clinic. To maintain the study's integrity and prevent any cross-contamination, GPs from the same clinic were assigned to the same cluster. For those GPs who joined individually, they were randomly distributed among ten clusters to achieve a balanced distribution of GPs across each cluster, which served as the units of randomization.
The randomization process was conducted in the following manner:

The 103 General Practitioners (GPs) were divided into ten clusters based on the previously mentioned grouping prior to randomization.

The names of the GPs within each cluster were securely sealed inside envelopes to ensure confidentiality and integrity of the process.

To ensure impartiality, an independent staff member from the University of Oslo, who had no affiliation with the research team, was responsible for the selection of envelopes. This staff member alternately chose envelopes to assign the clusters of GPs to either the usual care or intervention group.
It is important to note that although the randomization process determined the allocation of the ten clusters to either the intervention or usual care, each individual GP represented a cluster within the study. Furthermore, eligible participants with MUPS were enrolled in the study by their respective GPs.
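The envelope-drawing procedure described above can be sketched roughly like this (a hypothetical illustration of alternating cluster allocation; cluster names and the seed are made up, and this is not the study's actual code):

```python
import random

def randomize_clusters(clusters, seed=None):
    """Alternately assign shuffled clusters to two arms,
    mimicking the envelope-drawing procedure described."""
    rng = random.Random(seed)
    envelopes = list(clusters)
    rng.shuffle(envelopes)  # draw the sealed envelopes in random order
    allocation = {}
    for i, cluster in enumerate(envelopes):
        # Alternate draws between the two arms
        allocation[cluster] = "intervention" if i % 2 == 0 else "usual care"
    return allocation

# Ten clusters of GPs, as in the study
clusters = [f"cluster_{k}" for k in range(1, 11)]
alloc = randomize_clusters(clusters, seed=42)
print(alloc)  # five clusters per arm, whatever the draw order
```

Note how elaborate this allocation machinery is compared with the complete absence of blinding downstream of it.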


And, some patients will have given up on their doctor when they heard the spiel from the clipboard, and not returned to participate in the followup.

To keep the study as close to current practice as possible, patients were selected from the standard patient booking system. No patients were invited to see the GP for the purpose of the study. GPs were instructed to enroll the first ten eligible and willing patients who made appointments in the first four weeks of the study. Participants gave informed consent and completed a pre-consultation questionnaire before their first appointment.
103 GPs, so around 50 allocated to the intervention. Each of them was to enrol ten eligible and willing people, so potentially ~500 patients in the intervention arm. However, data are provided for only 238 patients in the intervention.
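Back-of-envelope arithmetic on those enrollment figures (my own rough numbers, not the paper's accounting):

```python
# ~50 GPs in the intervention arm, each instructed to enrol
# the first ten eligible and willing patients
potential = 50 * 10        # ~500 potential intervention participants
reported = 238             # patients with data in the intervention arm
unaccounted = potential - reported
print(f"~{unaccounted} of ~{potential} potential participants unaccounted for")
# → ~262 of ~500
```

More than half the potential participants are simply invisible, and as the paper itself concedes below, nobody recorded how many declined or why.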

Ethical restrictions prevented registration of the number and reason of participants who chose not to participate. Given that patients with MUPS may be considered a vulnerable population, we acknowledge that it may have been difficult for them to decline their physician's invitation to participate.
 
85% of the patients were women. Did doctors think it was easier to apply their clipboard inspired instruction to women? Were women more agreeable when it came to signing up for the study and complying with instructions to return for followup? Are doctors less likely to undertake proper investigation of women with difficult to diagnose symptoms? Or are women just more likely to make a big deal about nothing?
 
Of the 103 GPs recruited, an extraordinary 16 were removed from the study, the reason being they no longer worked in primary care:
However, 16 GPs were excluded from the study due to their transition out of primary care at the time of recruitment, which we refer to as post-randomization exclusions.
I don't know what is going on with that. The Supplementary Information with its promised more information isn't opening for me.
 
The GPs strive to achieve these objectives by utilizing the tool ICIT, following a series of steps: (1) validating patients' feelings, (2) presenting an explanatory model of MUPS, deliberately created to explicate the concept of allostatic overload,17 to establishes a mutual understanding of the patients’ complaints, and (3) jointly formulating a written activity plan, such as a "job list," "problem list," or "list of opportunity," depending on the patient’s specific issue. The ICIT's condensed version (Supplementary Material S1) were made available to physicians in a laminated manual, allowing for easy access during patient consultations.
It really is extraordinary stuff. They seem quite upfront about presenting a non-evidence-based story to explain the patient's condition. I don't know how that gets through an ethics committee.

Does anyone in Norway know if this laminated manual with its 'deliberately created' story is still in use?
 
I'm not really sure what they did for the sick leave measure. It's pretty opaque.
The GPs recorded sick leave from the participants’ medical records at baseline, after the final session of the study, and at the 11-week follow-up. At each time point, a participant could either be assigned a value for full time sick leave (yes or no) or a value for partial sick leave. The latter was quantified in percentage points. Both variables were unrelated to the number of hours worked per week. We then defined a joint variable, “sickness absence adjusted for partial sick leave” (SAAPSL), where the scale for full time sick leave was aligned with the percentage scale for partial sick leave (i.e., yes = 100%, no = 0%). Thus, each participant was assigned a value ranging from 0% to 100% for the joint variable. Therefore, "Full-time sick leave" and "Partial sick leave" mentioned in Table 2 are variables that are not further analyzed. Consequently, when we subsequently refer to "sick leave," it specifically pertains to the SAAPSL variable.

I'm not sure that hours actually worked is considered or compared at any point.
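As I read the quoted description, the joint SAAPSL variable collapses the two recorded measures something like this (a guess at their coding scheme, not their actual code; the function name is mine):

```python
def saapsl(full_time_sick_leave=None, partial_sick_leave=None):
    """Combine full-time sick leave (yes/no) and partial sick leave (%)
    into one 0-100% variable, per the paper's description."""
    if full_time_sick_leave is not None:
        return 100.0 if full_time_sick_leave else 0.0
    if partial_sick_leave is not None:
        return float(partial_sick_leave)  # already in percentage points
    raise ValueError("one of the two sick-leave values must be recorded")

print(saapsl(full_time_sick_leave=True))   # → 100.0
print(saapsl(full_time_sick_leave=False))  # → 0.0
print(saapsl(partial_sick_leave=40))       # → 40.0
```

Whatever the exact coding, the paper says both inputs were "unrelated to the number of hours worked per week", so someone on 40% sick leave from a 10-hour week and someone on 40% from a 40-hour week score identically.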
 
The study adhered to the established protocol. Unfortunately, a regrettable oversight occurred as the trial registration was not updated before commencement, leading to a minor discrepancy between the trial registration and the manuscript.
It's not clear what the regrettable oversight was.
 