Trial Report Exploring the content validity of the Chalder Fatigue Scale using cognitive interviewing in an ME/CFS population, 2024, Gladwell

The patients' responses are thoughtful, and if the researchers took them properly into consideration, they would learn something. However, the paper doesn't look at the CFQ's overall usefulness in evaluating fatigue and in distinguishing fatigue from depression/anxiety. There is also no recognition of the need to evaluate PEM.

It's not ethical to sit on this for so long when the researchers have concluded revision is required. Thanks to @bobbler and @Maat's excellent sleuthing on the other thread, it does seem that replacing the CFQ with a new PROM that the bps lobby have devised is a strategy that has been brewing for a long time. However, if patients draw attention to that, we'll be the conspiracy theorists!

I've lost track of who made the astute point that usually the only evidence they can cite is improvements on the CFQ. How will that be squared?
 
That seems to be the fundamental problem with PROMs: they can't be calibrated. Instead they get 'validated' by some convoluted process involving acceptability, reproducibility and a few other hoops, but none of those actually provide any reliability or accuracy. More often than not they're not even assessing what they're supposed to; they're barely more than Myers-Briggs/horoscopes of health.
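
To make that point concrete, here is a minimal sketch (my own illustration, not anything from the paper; the function name and the simulated answers are hypothetical) of one of the standard validation hoops, internal-consistency reliability via Cronbach's alpha. Answers driven largely by a consistent response style still come out looking highly 'reliable', which is exactly the sense in which these checks say nothing about accuracy.

```python
# Sketch only: Cronbach's alpha measures how consistently the items agree
# with each other; it says nothing about whether the scale tracks the
# thing it claims to measure.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 50 respondents whose answers mostly reflect a personal
# response style (e.g. ticking "more than usual" across the board) plus a
# little per-item wobble, on 11 items scored 0-3.
rng = np.random.default_rng(0)
response_style = rng.integers(1, 4, size=(50, 1))   # per-person tendency
noise = rng.integers(-1, 2, size=(50, 11))          # small per-item wobble
answers = np.clip(response_style + noise, 0, 3)
print(f"Cronbach's alpha: {cronbach_alpha(answers):.2f}")  # comes out high
```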

Maybe if it were at least understood that PROMs are a bit better than nothing, may be useful in some contexts, but cannot be used to reliably build evidence, it could work out. But that's not what we see happening. PROMs are systematically misused, starting with the misinterpretation that they are measures, when in reality they're only patient-reported outcome assessments or evaluations, not measures in a scientific sense.

EBM has been misusing PROMs for decades, in large part because they are so easy to misuse, and everything biopsychosocial is built on them; their whole evidence base vanishes otherwise. The CFQ has been misused for decades even though it's probably one of the most invalid, unreliable ones out there, precisely because it allowed ideologues to misrepresent reality.

They're basically an imperfect solution to a different problem: that MDs inherently don't trust patient input, and they are not entirely wrong about that. So they built this convoluted process where they get patient outcomes, but in a structured way, except the instruments are mostly arbitrary, usually biased, often weird, and in no way any more valid than simply asking the right handful of questions. But that's always the hard part: which questions to ask, and how to interpret the answers?

But of course having no PROMs at all is also bad, since most illnesses and their impacts can't be assessed by some biological test. So we're stuck in this weird worst-of-both-worlds situation where the problem is amplified instead of corrected.

I wonder about that, and the references people make to Myers-Briggs probably apply more generally, but in the one job interview I've ever had where someone had added something like that in, I was intrigued that it seemed to be more about asking the same questions (about who you are and your personality type) several times in slightly different ways to see if your answers were consistent, i.e. more about catching out fibs/people who are just giving the answer they think people want to hear (which most think is the name of the game in job interviews).

The issues with trying to use this sort of thing in the arena of health are endless, and then add in the problem of something where you are exhausting people and the repetitive nature itself has physical and psychological impacts. And yes, there is research showing that if you keep asking people certain things, you change their answers just by the process of re-presenting them with the same questionnaire (just as when someone is put in a police interview and asked the same question over and over, there is a reason for it).

I simply don't believe a PROM in this circumstance is about assessing a service or any of the things listed. The very fact they think they can use the same tool for checking whether outsourced delivery of knee operations done to a specification worked, along with the other things PROMs were developed for, gives insight into what they do and don't work for. This one is supposed to be repeated to assess the service over time, whereas those were one-offs, so a measure chosen for the latter will necessarily be weak for the former. And that is just one example of the difference.

That's before you even begin thinking about the fact that those referred for the knee ops will have been diagnosed by a clinician beforehand and triaged by professionals, then likely checked afterwards at some point, certainly by the GP putting in relevant updates if there are still ongoing issues, using a totally different surveillance mechanism.

So there is a sleight of hand where the real conclusion is suggesting 'only PROMs', and to be honest all they really concluded is that the CFQ is pointless and adds little. Without identifying what is actually missing, they are suggesting taking the other bits away and replacing them with more of the same thing they've just criticised? So the conclusion is inaccurate and disingenuous.
 

Consumer rights

Live seminar: Mind-body, neural pathway disorders as a way to explain chronic fatigue syndromes. (coffi-collaborative.com)

"As informed consumers, we were staggered to find that there is a huge literature out there on mind-body, neural pathway disorders. This is in both popular self-help books, but also in specialist medical journals, and there isn’t a comprehensive synthesis that takes this all into account.


On top of this, there is also a diverse scientific literature on how the brain and body interact with whole journals on topics such as psychoneuroimmunology, psychoneuroendocrinology; and studies in mainstream basic science journals, brain science journals, and psychology. If indeed chronic fatigue syndromes are related to complex relationships between the brain, the endocrine, nervous, muscular, gut and other body systems, then understanding this biology, and the way thoughts and the mind may interact with these systems, is important for everyone. For research groups, this is obviously critical in investigating fatigue, and there will be high levels of knowledge in such a group, so being able to develop a narrative that is a bit more rigorous than the self-help books but also accessible to the informed consumer would be a fantastic long-term goal. For consumers, we know there is literature that suggests simply having an explanation of symptoms sometimes helps patients, so this makes getting accurate narratives developed important."

For consumers, we know there is literature that suggests simply having an explanation of symptoms sometimes helps patients, so this makes getting accurate narratives developed important.

'Accurate narratives' rather than accurate information, science, prognosis, diagnosis: the list of important things they don't mention is endless.

It says it all really about a field if the most important bit is the sales spiel, and even a marketer doing that writing would, in any other area, be legally required to base the marketing claims on real/proper evidence. Is this just about people who went into one profession because they fancied the pay packet and power etc., and actually want to be doing something different as their day job now?
 
Seems like a useful paper that identified many of the problems with the popular Chalder Fatigue Scale that have been mentioned multiple times here on the forum.

Some quotes from the paper:

One challenge relates to the initial instruction: ‘If you have been feeling tired for a long while, compare yourself to how you felt when you were last well’. Most participants had longstanding ME/CFS so were being asked to recall how they felt many years ago: they doubted their ability to do this accurately

Another challenge is that this instruction seems to offer a choice: that only those who had longstanding symptoms should compare themselves to when they were last well

As it stands, the CFQ does not allow participants to represent their variable experience over the past month, and the impact of PEM. In addition, the questionnaire does not capture information about the cyclical nature of the condition over a longer period

The response options also raised questions for some participants who indicated that endorsing the response items ‘more than usual’ or ‘much more than usual’ might indicate an increase in severity, or frequency, or both

The findings indicate that the CFQ consists of one item clearly related to physical symptoms (6), four items clearly related to cognitive function (8, 9, 10, 11) and one item relating to fatigue (5) which could be interpreted as cognitive and/or physical fatigue. The other five items have been identified by participants as lacking clarity (1, 7), relating to behaviour not symptoms (1, 4), or relating to sleepiness not fatigue (3).
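
For reference on the response options discussed in the quotes above, here is a minimal sketch (my own illustration based on the commonly reported scoring conventions for the 11-item CFQ, not taken from this paper) of how those responses are usually turned into a total, under both the Likert (0-1-2-3) and bimodal (0-0-1-1) conventions. Whatever a respondent means by 'more than usual' (worse severity, higher frequency, or both), it collapses to the same number either way.

```python
# Sketch only: the two scoring conventions usually reported for the CFQ,
# showing how the ambiguous response options are collapsed into one score.
from typing import Dict, List

RESPONSES = ["less than usual", "no more than usual",
             "more than usual", "much more than usual"]

LIKERT: Dict[str, int] = {r: i for i, r in enumerate(RESPONSES)}         # 0, 1, 2, 3
BIMODAL: Dict[str, int] = {r: int(i >= 2) for i, r in enumerate(RESPONSES)}  # 0, 0, 1, 1

def score(answers: List[str], scheme: Dict[str, int]) -> int:
    """Sum one scoring scheme over all 11 item responses."""
    assert len(answers) == 11
    return sum(scheme[a] for a in answers)

# Hypothetical respondent: severity and frequency are indistinguishable here.
answers = ["more than usual"] * 7 + ["much more than usual"] * 4
print(score(answers, LIKERT))   # Likert total, range 0-33
print(score(answers, BIMODAL))  # bimodal total, range 0-11
```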

When we compare this to the PROMs, I'm curious whether they have gone out of their way to fix this (i.e. whether this happens to coincidentally justify the changes, or whether they've done the same thing with the PROM measure that's replacing it anyway)?
 