Cochrane Review: 'Exercise therapy for chronic fatigue syndrome', Larun et al. - New version October 2019 and new date December 2024

None of this is going to change without legal action!!

The Lancet won't change, the BMJ won't, and Cochrane won't either. I doubt NICE will throw themselves under the bus either, especially with Cochrane now claiming there is evidence of efficacy.

This needs to go to the Supreme Court. Simple as that.
Unfortunately, medicine is pretty much sovereign and its institutions can do whatever they want in this regard. They can and do simply declare us, without evidence, to be raving lunatics, and that is all that is needed to treat us like second-class citizens, deprived of fundamental human rights and without legal standing in regard to our own lives. We are several layers beyond violation of informed consent; we are essentially considered rejects unworthy of even being listened to at all, subhuman for all practical purposes.

Looking at it from within, that's a really freaking huge blind spot that will need some serious fixing, especially when it all rests on the mere implication from people who are utterly clueless about us. No one has actually come out and said that or put their name to it, and yet it may as well be the law that we be denied the very right to speak for ourselves.

Those flaws have nothing to do with us; they are systemic and will need system-wide reform. Courts can't do much about it until those flaws are patched up. Not sure how we can do that beyond being the definitive case that, with time, will force the issue of patients' rights to have legal standing.
 
Courts can't do much about it until those flaws are patched up.

That's not true. Courts have fined pharmaceutical companies billions in the past, so if a case for misrepresentation is put in front of them, there's no reason why they shouldn't do the same to the psych lobby and the medical journals.

Not sure how we can do that beyond being the definitive case that, with time, will force the issue of patients' rights to have legal standing.

There's another route too: people will just have to start understanding that they can't always hide behind organisations, and that they can be named as individuals, in their own name, in civil damages cases.

There is enough provable contradiction of Cochrane's own policies and procedures, which are supposedly in place to help inform healthcare policy, even if they are a private organisation.
 
It has been added to Wikipedia:
A 2019 Cochrane review stated that exercise therapy probably has a positive effect on fatigue in adults, and slightly improve sleep, however the long term effects are unknown.[6] The Cochrane review also noted that research was inconclusive as to which, if any, type of exercise therapy was superior, and concluded that the evidence regarding adverse effects is uncertain.[6]

Time it was scrapped altogether.
 
Re Chalder fatigue scale and the ceiling effect:
But there is a counter argument that people on the ceiling could "improve" without showing a fall in their score, since they are already off the scale.
So people fill in the questionnaire a little 'better.'

The Chalder Fatigue Scale scoring doesn't quite work like that.

It is a bit weird, see below.

"If you have been feeling tired for a long while, then compare yourself to how

you felt when you were last well. [score each of 11 items] Less /same/More than usual/Much more than usual [scored 0-3 for Likert]

e.g. Do you have problems with tiredness"

More: https://studylib.net/doc/11997962/chalder-fatigue-scale

One problem is that zero is "less than usual", which would equate to "less than when you last felt well"! So effectively the minimum score per item is 1, making it an 11 to 33 point scale, a range of just 22 points (23 possible scores).
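For anyone who wants to see the arithmetic, here is a rough Python sketch of the 11-item Likert scoring as described above; the response labels and function names are just mine for illustration, not from any published scoring code, and the exact wording varies between versions of the questionnaire.

```python
# Rough sketch of the 11-item Likert scoring described above.
# Response labels are approximate; exact wording varies between CFQ versions.
RESPONSES = {
    "Less than usual": 0,
    "No more than usual": 1,
    "More than usual": 2,
    "Much more than usual": 3,
}

def likert_total(answers):
    """Sum the 0-3 item scores over all 11 items."""
    assert len(answers) == 11
    return sum(RESPONSES[a] for a in answers)

# Nominal range is 0-33, but for someone comparing themselves with "when
# they were last well", the 0 option ("less than usual") hardly applies,
# so the practical floor is 11 and the usable span only 22 points.
print(likert_total(["Much more than usual"] * 11))  # 33 (maximum)
print(likert_total(["No more than usual"] * 11))    # 11 (practical floor)
```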

The maximum score of 3 is for "much more than usual". This is where the ceiling problem comes in, when many patients would say their tiredness is "way, way more than usual".

Somebody who at the start of the trial regards themselves as "way, way more tired than usual" can still only score the maximum 3 points. At the end they may feel they have improved (whether this is due to response bias or not is irrelevant) and feel that their tiredness is now only "much more than usual". As a result the score for that question remains at 3: this is the ceiling effect of not being able to measure a change. That's the very long explanation of what I meant by my rather brief original comment!
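To make that ceiling concrete, here is a tiny, entirely hypothetical continuation of the sketch above: imagine a "true" per-item severity that can go beyond the top option, which the questionnaire has no way to record.

```python
def recorded_item_score(true_severity):
    # The questionnaire truncates everything at the top option (3).
    return min(round(true_severity), 3)

before = recorded_item_score(5)  # "way, way more than usual" -> still 3
after = recorded_item_score(3)   # improved to "much more than usual" -> 3
print(before - after)            # 0: the improvement (or any further worsening) is invisible
```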

The Chalder fatigue scale has many flaws, regardless of the response bias issue. And the ceiling effect is one of those flaws.

I find it difficult to believe that many patients would consider a 3.4-point change to be important in trials of the sort assessed in this review. But who knows? It seems no-one bothered to ask us.

This is a really good point. Some studies attempt to "anchor" the claimed "minimal useful difference" or whatever by comparing it to other questionnaires, such as overall change in health (to anchor a change in a pain scale, for instance).

A better way to do that would be to ask patients if the change they report on a fatigue scale corresponds to what they consider a useful change in fatigue (you would need to do this as part of a double-blind study, e.g. for a rheumatoid arthritis drug, to avoid problems of response bias).

That might give a much better measure of what counts as a useful change for a subjective symptom (leaving aside response bias issues).
 
That's not true. Courts have fined pharmaceutical companies billions in the past, so if a case for misrepresentation is put in front of them, there's no reason why they shouldn't do the same to the psych lobby and the medical journals.
Oh we will get to that. Just not yet because it would be too easy to find medical experts arguing we are delusional. And we're not organized anyway.

It will definitely get to that, but right now we are guilty until proven innocent and it's too unusual to work out. It's one of those cases that should not exist and so there is no process to work it out. The harm is clear but cannot be demonstrated objectively so until then it's our word against, well, the whole of medicine.

It's the public record that will make the difference with time. Because it is ridiculously, absurdly damning and self-evident.
 
The Chalder Fatigue Scale scoring doesn't quite work like that.
I see, but I really meant something else. When I said 'better' I was referring to response bias and patients filling in the questionnaire in a way that pleases the investigators.

Statistics isn't really my thing but here's basically the point I'm trying to make: the effect size is a difference in means, expressed in standard deviations. With the Chalder Fatigue Scale, especially the 11-point version, there seem to be ceiling effects. A lot of patients are close to the maximum score. Some, as you said, are basically off the scale: if they improve or deteriorate, that would not be visible on the scale. So most patients have pretty much the same score with little variation. My guess is that would result in a relatively small standard deviation (please correct me if I'm wrong). That's what determines the bottom part of the effect size equation.

The upper part, however, is determined by different rules, because I think that changes on the Chalder Fatigue Scale in the GET trials are not so much determined by patients' health but by response bias, optimism, placebo effects, unwillingness to admit that 12 weeks of therapy were a waste of time, etc. In that case it doesn't really matter much that the scale doesn't make much sense or that a score of 11 is the minimum score. That CBT trial on patients with multiple sclerosis (Van Kessel et al. 2008) showed that patients reported scores lower than 11 and better than healthy controls. Patients report a score that looks 'better' to the trial hypothesis and is in accordance with their own hopes and expectations. So I think that determines the upper part of the effect size equation.

What I think may be happening here is that the small changes that bias causes on the CFQ look bigger when expressed as an SMD, because CFS patients pretty much all have similar high scores with little variation. Because the scale doesn't work, it makes the small changes caused by bias look bigger than would be the case if the scale reflected the whole spectrum of fatigue severity in CFS patients.
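As a rough numerical illustration of that suspicion (the numbers below are made up, not taken from any trial): the standardised mean difference divides the raw difference in means by the pooled standard deviation, so the same small raw shift looks much bigger when ceiling effects squash the spread of scores.

```python
import statistics

def smd(group_a, group_b):
    """Standardised mean difference using a pooled SD (Cohen's d style)."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Invented scores: everyone bunched near the 33-point ceiling (small SD)...
control_ceiling = [31, 32, 33, 33, 32, 31, 33, 32]
treated_ceiling = [29, 30, 31, 31, 30, 29, 31, 30]   # a 2-point raw shift

# ...versus the same 2-point raw shift on a scale with more spread.
control_spread = [20, 25, 33, 28, 31, 22, 33, 26]
treated_spread = [18, 23, 31, 26, 29, 20, 31, 24]

print(round(smd(control_ceiling, treated_ceiling), 2))  # ~2.4: looks huge
print(round(smd(control_spread, treated_spread), 2))    # ~0.4: much smaller
```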

If it were equally difficult for treatment effects to cause a change on the scale, then the problems would be balanced. But I don't think that's the case in the GET trials, where improvement is determined by other factors, things that have little to do with how sick patients are.

That's sort of the idea/suspicion I have.
 
The Chalder fatigue scale has many flaws, regardless of the response bias issue. And the ceiling effect is one of those flaws.
As a quick experiment I ran this by my wife. She is mild/moderate and has had ME for around 12 years. I did not lead her in the slightest, but the answers are exactly as I would have predicted, based on how she is now compared to before she had ME.

Just one answer was "No more than usual", to the question "Do you make slips of the tongue when speaking?" Every single other response was unequivocally "Much more than usual". So if my wife had done the PACE trial, and been made worse due to GET, on this score it could never have shown any significant difference. And they call it science.
 
Some references if anyone ever wants to highlight the ceiling effect, from my 2011 paper, "Reporting of Harms Associated with Graded Exercise Therapy and Cognitive Behavioural Therapy in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome" https://iacfsme.org/PDFS/Reporting-of-Harms-Associated-with-GET-and-CBT-in.aspx

Furthermore, the instruments used to measure fatigue often suffer from ceiling/floor effects so it is not possible to ascertain whether some participants experienced a worsening of their fatigue (69-71).

(69) Stouten B. Identification of ambiguities in the 1994 chronic fatigue syndrome research case definition and recommendations for resolution. BMC Health Serv Res. 2005;5:37.

(70) Morriss RK, Wearden AJ, Mullis R. Exploring the validity of the Chalder Fatigue scale in chronic fatigue syndrome. J Psychosom Res. 1998;45:411-7.

(71) Goudsmit EM, Stouten B, Howes S. Fatigue in Myalgic Encephalomyelitis. Bulletin of the IACFS/ME 2008;16(3). Available at: http://www.iacfsme.org/BULLETINFALL2008/Fall08GoudsmitFatigueinMyalgicEnceph/tabid/292/Default.aspx. Accessed September 16, 2011.
 
Some references if anyone ever wants to highlight the ceiling effect, from my 2011 paper, "Reporting of Harms Associated with Graded Exercise Therapy and Cognitive Behavioural Therapy in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome" https://iacfsme.org/PDFS/Reporting-of-Harms-Associated-with-GET-and-CBT-in.aspx
I really cannot get my head around how supposed scientists can devise and use measuring instruments with such glaring limitations, especially as those limitations have such a high probability of being encountered. It would be hysterical if it weren't so serious.

It's not even science, just basic engineering - very basic.
 
The Chalder fatigue scale has many flaws, regardless of the response bias issue. And the ceiling effect is one of those flaws.

Diagnostic tools should not be used as endpoint measures in this way. It certainly shouldn't have been used as a repeated-measures tool with such a poorly defined baseline comparison point (as I keep saying - sorry for sounding like a stuck record). And to use it as a repeated-measures tool in trials where the researchers are trying to capture *improvement* is pretty much useless. Why? Because most of the scoring emphasis is about "getting worse", not better. And yet they have ploughed on with it for 30+ years without realising that. It beggars belief really. :banghead:
 
Why? Because most of the scoring emphasis is about "getting worse", not better.

This is important to emphasise, because they are essentially recording "getting worse" as an improvement across the board, simply because the score looks slightly better.

I simply do not accept that trial participants fill in this questionnaire the way that folks here think they should, and genuinely compare themselves with "when they were last well", particularly not if they had already filled it in 3 months previously, and also because of the ceiling effect.

It is clear from the PACE trial data itself that there is a resetting process. Just look at the graphs. Everyone's score drops at 3 months into the trial, including those in the SMC (no treatment) group. They are resetting, and comparing with the start of the trial, at the very least. Pretty much no-one then scores 11 or less (a score that indicates no change or any kind of improvement), so they are all... getting... worse.

Without knowing *how* people are filling in this questionnaire, each time they fill it in, we cannot infer anything useful about it, even within individuals. The ambiguous baseline is its main and fatal flaw.

And as @Jonathan Edwards says, "none of this pseudo statistics has any bearing on reality", because every analysis of this data is pseudo-statistics. It is uninterpretable.
 
I would also be very interested to know what instructions would have been given to a PACE participant if, having scored themselves "Much more than usual" at baseline, they later asked how they could record the fact that they felt even worse than before. I strongly suspect the instruction would have been along the lines of "Well, we are still comparing to before you got ill, so 'much more than usual' must still apply".

If I use a voltmeter to measure a voltage of, say, 20V, with the meter set to a 0-10V scale, the meter will not give me a reading of 10V! It will give some sort of over-range indication, so that even though it cannot give the correct reading on that range, I will know the voltage cannot be correctly read on that range, and so know to switch to, say, a 0-100V range and try again.
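To spell out the analogy in code terms (a purely hypothetical sketch, not any real instrument's interface): silently clamping an out-of-range reading and reporting it as valid is essentially what the fatigue scale does, whereas a decent meter refuses and tells you to change range.

```python
FULL_SCALE = 10.0  # e.g. the 0-10V range

def clamped_reading(true_voltage):
    # What the fatigue scale effectively does: truncate and report as valid.
    return min(true_voltage, FULL_SCALE)

def honest_reading(true_voltage):
    # What a decent meter does: refuse to report a truncated value as real.
    if true_voltage > FULL_SCALE:
        raise OverflowError("over-range: switch to a higher range")
    return true_voltage

print(clamped_reading(20.0))  # 10.0, indistinguishable from a true 10V
try:
    honest_reading(20.0)
except OverflowError as e:
    print(e)                  # over-range: switch to a higher range
```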

The idea that readings can be taken that are out of range, without any indication that they are out of range, but instead misconstrued as valid readings, is the height of incompetence at best, and wilful deception at worst. It really beggars belief. And I repeat, it is so, so basic. But of course SW admits to devising this scale.
 