Lancet Infectious Diseases: Editorial, "A proper place for retraction", 2017, mentions PACE in passing

I was asked to submit my letter in order to "continue the debate". But so far, I can see no evidence of that debate continuing at all. It seems to have been stifled!
This is why I am not a fan of journal Correspondence. It very rarely furthers debate, and in this day and age, it is way too slow.

People can try to use blather about 'furthering the debate' in order to avoid actually engaging in real debate about the merits of the claims they've made. It's a little annoying when this happens.
 
Or are they saying that they do see a problem but they see it with the conclusions/results - in that the numbers reported were hugely inflated.
They seemed to be saying they saw a problem only in the results, which is a very flawed observation. A major contributor to the rubbish results was the rubbish trial methodology underpinning them, though I don't doubt the interpretations that formed the results were also dubious.
 
An application of my proposed Bayes' Alchemy in action again @Woolie ?
Yes, rather like some archaeological documentaries, which merge in various speculations along the way, and then with a magical wave of the documentary wand say something along the lines of "Now that we know all this, we can see such and such must have been the case". Now that I think of it, it really does have a lot in common.
 
They seemed to be saying they saw a problem only in the results, which is a very flawed observation.
I took it to mean that they saw no problems with the methodology, and the complaints only arise because we (the patients) don't agree with the results of the study. I see nothing in what they have written that says that they (The Lancet) acknowledge any issues with PACE at all.
 
I took it to mean that they saw no problems with the methodology, and the complaints only arise because we (the patients) don't agree with the results of the study. I see nothing in what they have written that says that they (The Lancet) acknowledge any issues with PACE at all.
My comment was obviously unclear, because my intent was to say the same thing in effect. When I said they "seemed to be saying they saw a problem only in the results", I meant they did not seem to see a problem anywhere else, other than in the interpretations forming the results - which of course is immensely blinkered.
 
They seemed to be saying they saw a problem only in the results, which is a very flawed observation. A major contributor to the rubbish results was the rubbish trial methodology underpinning them, though I don't doubt the interpretations that formed the results were also dubious.
I think they are trying to manage the cognitive dissonance as best they can, they believe in CBT/GET but can't use PACE as evidence so they are dithering.
 
Or to put it another way: "they believe in CBT/GET but can't use PACE as evidence so they are dithering screwed"
Not yet, it's still not retracted, and they can still take children away from their families using the power of the state and torture them - there is one case being fought right now (and losing) :emoji_cry:

Well, the actual truth is that they are desperate if the best they can come up with is that "patients didn't like the conclusion".
In a rational world that would open up a Pandora's box: if patients don't like the results, then it follows that the study is likely messed up, because people like being healthy. However, in our world, victim blaming, ableism and ethically challenged scientists/doctors are rampant :emoji_face_palm:
 
OK. This is what I really think:

Retraction (and/or correction) will only happen if both the authors and the journal agree it should happen. That is unlikely with PACE because neither group has the will to do so. The only other way is if a bigger, better study is done that shows CBT/GET is harmful (which would be unethical to run if the researchers believed that either of those treatments might cause harm), or if you can show that those involved in the original trial were harmed by it.

I would love to think that pointing out the methodological flaws would have some effect, but I suspect there is too much vested interest in allowing those sorts of flaws to continue to exist (unblinded interventions; subjective, complex, composite outcomes), particularly in psychological research, which is regrettable.

The problem is that publication of research has not kept up with technology. There needs to be a better way of linking research with criticism. Maybe it would help if journals were obliged to publish a list of all subsequent citations of an article. Currently, it is only tracked through subsequent Correspondence, which as I've said is totally inadequate, particularly if the journal and authors choose to be evasive.
 
There's also the additional problem that even in the GET group there may not have been significant harms if the patients did not comply with the instructions to keep increasing activity and push through symptoms. Those with ME including PEM, rather than Oxford fatigue, may have protected themselves once they realised the exercise set off PEM. Since actual activity was not tracked, we will never know.
So PACE is a non starter in assessing harms in my opinion.
 
The problem is that publication of research has not kept up with technology. There needs to be a better way of linking research with criticism. Maybe it would help if journals were obliged to publish a list of all subsequent citations of an article. Currently, it is only tracked through subsequent Correspondence, which as I've said is totally inadequate, particularly if the journal and authors choose to be evasive.

It seems that this problem is far from insurmountable, if there is the will to manage it.

Even a simple traffic light system would do: green - it's all good; amber - there are criticisms that need to be considered; red - the conclusions cannot be relied upon. Links to the criticisms could easily be attached.

Such a system would automatically encourage better methodology, improve accuracy and provide great teaching cases.
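For what it's worth, the traffic light idea above could be sketched in a few lines of code. Everything here is hypothetical - the class, the statuses and the DOI are just illustrations of how an article record might carry its criticisms with it:

```python
# Toy sketch of the traffic-light annotation idea. All names are
# hypothetical; no journal currently exposes anything like this.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    GREEN = "no unresolved criticisms"
    AMBER = "criticisms need to be considered"
    RED = "conclusions cannot be relied upon"

@dataclass
class ArticleRecord:
    doi: str
    status: Status = Status.GREEN
    criticisms: list = field(default_factory=list)  # links to critiques

    def add_criticism(self, link: str, fatal: bool = False) -> None:
        """Attach a criticism and downgrade the status accordingly."""
        self.criticisms.append(link)
        if fatal:
            self.status = Status.RED
        elif self.status is not Status.RED:
            self.status = Status.AMBER

record = ArticleRecord(doi="10.1000/example-doi")
record.add_criticism("link-to-a-critique")
print(record.status.name)  # AMBER
```

The point is only that the bookkeeping is trivial: one status field and a list of links per article would be enough for a journal to surface criticism alongside the paper.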

No. The problem, as I see it, is there is no will to do anything about it. Too many people are benefiting personally from the current system. It's a very fine example of the old school tie club.
 
There's a great model to follow in the example of open source software.

You write your program or code and publish it in a repository. Anyone is free to examine the code, copy it, and mess about with it.

If you find a problem, you let the author know in a public post. Anyone can add either a solution, a hint, advice or confirmation of the problem, proposed methodology etc.

If you find there was a better way to do something you tell the author and they can choose to amend the original.

If you like the idea but not the solution, or only part of the solution you can 'fork' the project and work away on it, leaving the original author to keep their project.

Nobody runs around worrying that idiots are messing about with their code; everyone is open with their data and ideas; people join in to help; word spreads if it is a good idea, and it gets ignored if not. Everything is there for anyone else to see, and the whole thing is a social experience.
 
Retraction (and/or correction) will only happen if both the authors and the journal agree it should happen. That is unlikely with PACE because neither group has the will to do so. The only other way is if a bigger, better study is done that shows CBT/GET is harmful (which would be unethical to run if the researchers believed that either of those treatments might cause harm), or if you can show that those involved in the original trial were harmed by it.

I would love to think that pointing out the methodological flaws would have some effect, but I suspect there is too much vested interest in allowing those sorts of flaws to continue to exist (unblinded interventions; subjective, complex, composite outcomes), particularly in psychological research, which is regrettable.

I'd agree with the Lancet paper (although the accompanying commentary claiming a 30% recovery rate certainly needs a correction), but I think that the recovery paper in Psychological Medicine is even more seriously flawed. If we could get any debate going on the merits of retracting that paper, I do not see how it could be defended. It includes a clear falsehood which, if corrected, would undermine the central findings of the paper.

edit: here's an old summary of the two clear inaccuracies:

Just to be clear, here are at least two factual errors published in the recovery paper:

We changed our original protocol’s threshold score for being within a normal range on this measure from a score of >=85 to a lower score as that threshold would mean that approximately half the general working age population would fall outside the normal range. The mean (S.D.) scores for a demographically representative English adult population were 86.3 (22.5) for males and 81.8 (25.7) for females (Bowling et al. 1999). We derived a mean (S.D.) score of 84 (24) for the whole sample, giving a normal range of 60 or above for physical function.

FAIL. In the general working age population, the median (middle score) is 100 and the 1st quartile (25th percentile) is 90. A threshold of >=85 only excludes about 18% of the general working age population, and 8% of the working age population without long-term health problems - the population to which 'recovered' participants should be compared. White et al. appeared to wrongly assume that the mean (average score) was about the same as the median.

So the stated justification for changing the threshold is based on a falsehood and/or misinterpretation. Unfortunately, those figures are derived from raw data from the UK Data Archive and not published in a paper (however, one can estimate from the histogram in the Bowling paper that about 28% of the general population score 85 or higher). Psychological Medicine were made aware of this error but did not publish the submitted letter with this information, claiming that the same point had already been made in one of the other letters to be published, which is clearly false. No correction was issued either. Another letter made a different point about the threshold, but this is a factual error which they are obliged to investigate and correct.
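The arithmetic behind the quoted 'normal range' is easy to reproduce. The male and female figures are those cited from Bowling et al. 1999; pooling them by a simple average is my assumption, but it reproduces the paper's derived mean (S.D.) of 84 (24) and the threshold of 60:

```python
# Reproducing the recovery paper's 'normal range' arithmetic.
# Male/female figures are from Bowling et al. 1999 as quoted above;
# pooling by simple average is an assumption, not stated in the paper.
male_mean, male_sd = 86.3, 22.5
female_mean, female_sd = 81.8, 25.7

pooled_mean = round((male_mean + female_mean) / 2)  # 84
pooled_sd = round((male_sd + female_sd) / 2)        # 24

# 'Normal range' defined as mean - 1 S.D.
threshold = pooled_mean - pooled_sd                 # 60
print(threshold)
```

The problem is not the arithmetic but the premise: SF-36 physical function scores are heavily skewed, so mean - 1 S.D. does not mark off the bottom ~16% the way it would for a normal distribution. With a median of 100 and a 1st quartile of 90 in the working age population, a threshold of 60 sits far below anything resembling 'normal'.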

How do these results compare with previous studies? We are not aware of any previous studies that have compared comprehensively defined recovery between different treatments. Two studies of recovery in adults after CBT found similar proportions in recovery: 23% and 24% (Deale et al. 2001; Knoop et al. 2007), compared with 22% in the PACE trial. [...] The other study used similar criteria and domains for recovery (Knoop et al. 2007), but the definition for normal range used was the more liberal population mean -2S.D. rather than the more conservative 1 S.D. that we used; the treatment was delivered by therapists in one specialist CFS centre and outside of a trial setting.

FAIL. The PACE Trial recovery thresholds were not 'more conservative' than those in the Knoop et al. paper. Both used mean -1 S.D. The threshold for normal physical function was 80 in Knoop et al. but only 60 in the PACE Trial. White was a co-author of both papers, which makes the blunder more baffling. Fortunately, the above error is very easy to confirm. I don't know whether Psychological Medicine know of this error, but they don't seem interested in corrections anyway.

PS: PACE referred to Bowling et al for their normative data (general population), which was based on the 1992 ONS Omnibus Survey sample. The mean±S.D. and median(IQR) physical function score for the working age population without chronic illness in this sample is 95.0±10.2 and 100(95-100) points respectively. Presenting 60 as a 'conservative' threshold for complete recovery is scandalous, particularly as it overlapped with trial criteria for 'significant disability' i.e. 65 or less.
 