More PACE trial data released

Something to hide is a bold claim
That's pretty much already established. The first release of data showed they lowered their threshold to below entry. That was something critical to hide as it demolishes the whole trial. They already hid things that make them look bad, that's beyond dispute, we can be bold about it and call it out for the garbage that it is.
 
Something has been bugging me about the whole issue of the PACE trial and how the findings were/are reported.
Yes, we know they changed criteria etc., and the reanalysis of the data is important, but for the average person with no understanding of the ins and outs of research trials it is too complicated.

For me the 'elephant in the room' that not only applies to PACE but many of the other studies, is their definition of recovery.

I would like to see some exposure of what they deemed 'recovered'; namely, that it roughly equates to the health/fitness of someone in their eighties with rheumatoid arthritis or congestive heart failure, when these were previously active people in their 30s and 40s.

If the journalists can't get to grips with the scientific methodology issues, surely this is one thing they can see is so wrong.
 
Now you mention it… however did they hit upon such a definition? Is there any precedent in the literature, or prior agreement? Who was party to the discussions that set such anomalous figures? Was Aylward involved? Which hat was White wearing at the time?

Something has been bugging me about the whole issue of the PACE trial and how the findings were/are reported.
Yes, we know they changed criteria etc., and the reanalysis of the data is important, but for the average person with no understanding of the ins and outs of research trials it is too complicated.

For me the 'elephant in the room' that not only applies to PACE but many of the other studies, is their definition of recovery.

I would like to see some exposure of what they deemed 'recovered'; namely, that it roughly equates to the health/fitness of someone in their eighties with rheumatoid arthritis or congestive heart failure, when these were previously active people in their 30s and 40s.

If the journalists can't get to grips with the scientific methodology issues, surely this is one thing they can see is so wrong.
The overlap between entry and recovery criteria is also something that is easily understandable by non-scientific people. (At entry, a score of 65 is considered to correspond to great disability, but the same score of 65 is used as a sign of recovery at the end of the trial.)
 
That's pretty much already established. The first release of data showed they lowered their threshold to below entry. That was something critical to hide as it demolishes the whole trial. They already hid things that make them look bad, that's beyond dispute, we can be bold about it and call it out for the garbage that it is.
We knew about the lowered threshold before the release of the data. We just didn’t know whether anyone deteriorated from entry and was counted as recovered.
 
For me the 'elephant in the room' that not only applies to PACE but many of the other studies, is their definition of recovery.

I would like to see some exposure of what they deemed 'recovered'; namely, that it roughly equates to the health/fitness of someone in their eighties with rheumatoid arthritis or congestive heart failure, when these were previously active people in their 30s and 40s.

If the journalists can't get to grips with the scientific methodology issues, surely this is one thing they can see is so wrong.
There is an ongoing debate in biomedical and psychological research about exactly this issue. Publishing a paper mostly involves showing that your results are statistically significant (which means, roughly, that there would be less than a 5% chance of seeing results at least as strong as yours if the treatment had no effect). But it very often doesn't involve looking at whether the results were clinically relevant.

Let's suppose that CBT and GET have no influence on spontaneous physical functioning in the environment, but that they boost a patient's self-belief to the point where they feel slightly more accepting of their condition, and so give slightly better replies on self-report questionnaires, or push themselves a little bit more when given a physical test. If that happened for a decent number of people, the results would be statistically significant, but the improvements in daily functioning could be negligible.
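The scenario above can be sketched numerically. This is a toy simulation with invented numbers, not PACE data: with enough participants, even a one-point average shift on a hypothetical 0–100 self-report scale clears the conventional significance bar, despite being clinically negligible.

```python
# Toy simulation (invented numbers, not PACE data): a tiny shift on a
# self-report scale becomes "statistically significant" with a large
# enough sample, even though the improvement is clinically negligible.
import math
import random

def simulate(n=10000, true_effect=1.0, sd=10.0, seed=42):
    """Simulate two groups on a 0-100 questionnaire: a control group,
    and a treatment group whose scores shift by just `true_effect`."""
    rng = random.Random(seed)
    control = [rng.gauss(50.0, sd) for _ in range(n)]
    treated = [rng.gauss(50.0 + true_effect, sd) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = sd * math.sqrt(2.0 / n)   # standard error of the difference
    z = diff / se                  # z-statistic for the group difference
    return diff, z

diff, z = simulate()
print(f"mean improvement: {diff:.2f} points on a 0-100 scale")
print(f"z = {z:.1f}  (|z| > 1.96 means 'significant' at p < 0.05)")
```

With these made-up parameters the difference is about one point, yet the z-statistic comfortably exceeds 1.96 — "significant", but not obviously meaningful to any individual patient.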

It's a real limitation of science that, in their understandable wish to eliminate noise and have reliable measures, researchers tend to reduce outcomes to what they can measure. Meanwhile in the real world, people would (completely legitimately) all have their own slightly different definition of recovery (being able to hold down a job, or kick a football around with the kids, etc; even "getting back to where I was before" is hard to measure because we didn't evaluate their life before they became ill) and so that becomes hard to use for the trial outcome.

One of the difficulties is that science of this kind is proper hard to do, and the journalists know it (I suspect that many science journalists are people who didn't make it through graduate school), so when the scientists say "Ah, well, yes, but of course we have to use objective criteria because <jargon, much of it actually legitimate>", the journalists are reluctant to say "Yeah, but that's not much help to the individual patients, is it?". At that point the researchers would look pained and say "Well, we're trying our best". And that's if they're acting in good faith too.
 
It's a real limitation of science that, in their understandable wish to eliminate noise and have reliable measures, researchers tend to reduce outcomes to what they can measure. Meanwhile in the real world, people would (completely legitimately) all have their own slightly different definition of recovery (being able to hold down a job, or kick a football around with the kids, etc; even "getting back to where I was before" is hard to measure because we didn't evaluate their life before they became ill) and so that becomes hard to use for the trial outcome.

It's only a limitation of science if the scientists have limited initiative. As I have mentioned before I published a trial in lupus using exactly the sort of outcome measure you are advocating - each patient having a different definition of minor or major improvement. There were no complaints from the referees - in fact the opposite, they thought it a rather good idea. Unfortunately, none of my colleagues seem to feel they can be as imaginative. Something about a certain foolish consistency ... (Ralph Waldo Emerson).

The problem with PACE is that measures were limited to what the authors thought they could be sure would give a positive result for their treatments.
 
their own slightly different definition of recovery
well, if the researchers don't know then why not ask the patients/participants?
On one of the dozens of questionnaires that they have to fill in, surely it wouldn't be hard to include somewhere: 'Would you say that you have recovered?'

and why try to 'invent' ways of measuring things that are immeasurable (e.g. fatigue) rather than use objective, clearly measurable test methods?

and then, having ignored all that, pick a level that is equivalent to
the health/fitness of someone in their eighties with rheumatoid arthritis or congestive heart failure
and call it 'recovered'?

Sorry, but saying that 'it's complicated' doesn't wash.
 
It's a real limitation of science that, in their understandable wish to eliminate noise and have reliable measures, researchers tend to reduce outcomes to what they can measure. Meanwhile in the real world, people would (completely legitimately) all have their own slightly different definition of recovery (being able to hold down a job, or kick a football around with the kids, etc; even "getting back to where I was before" is hard to measure because we didn't evaluate their life before they became ill) and so that becomes hard to use for the trial outcome.

As a general point, that's a reasonable one. But the specific problems with the way recovery was redefined in PACE mean that this point offers no real defence to the researchers imo.

They made provably false (and still uncorrected) claims in their recovery paper to try to justify their post-hoc revisions (which they failed to make clear were post-hoc), and created recovery thresholds for the SF-36 physical function and Chalder Fatigue Scale that allowed patients to be classed as 'recovered' with worse scores than were needed at baseline to enter the trial with "severe and disabling" fatigue. (Details available on request!)

Also, their post-hoc recovery criteria were no easier to measure than those laid out in the trial protocol.

There are real difficulties with defining 'recovery' in trials, but the PACE recovery spin is such BS that those real difficulties aren't particularly relevant (other than as something PACE researchers can refer to as a way of muddying the water).
 
It certainly wasn't my intention to defend the researchers here.

Yeah, I assumed that, but I can see how my reply might have been misleading. It's just that I have seen arguments like that used to try to defend the PACE recovery paper, so I wanted to explain why I thought it was a weak argument in relation to PACE, partly in case others thought it might apply.
 
Let's suppose that CBT and GET have no influence on spontaneous physical functioning in the environment, but that they boost a patient's self-belief to the point where they feel slightly more accepting of their condition, and so give slightly better replies on self-report questionnaires, or push themselves a little bit more when given a physical test. If that happened for a decent number of people, the results would be statistically significant, but the improvements in daily functioning could be negligible.

A point to consider is that there is a fundamental difference in worldview between the psychiatric view and other views. The psychiatric view puts self-belief and self-report as fundamental. Sharpe and White have hinted as much over the years, when they say things like: patients don't care about whether a blood test says they're better, they care about whether they feel better. (Of course this is usually politician-style evasion of the question: when patient advocates ask for objective outcome measures, they mean objective measures of functioning, not blood tests.) Either they are deliberately ignoring patients' views or they don't understand them, because they have a fundamentally different worldview.

One of the difficulties is that science of this kind is proper hard to do, and the journalists know it (I suspect that many science journalists are people who didn't make it through graduate school), so when the scientists say "Ah, well, yes, but of course we have to use objective criteria because <jargon, much of it actually legitimate>", the journalists are reluctant to say "Yeah, but that's not much help to the individual patients, is it?". At that point the researchers would look pained and say "Well, we're trying our best". And that's if they're acting in good faith too.

Ah, all those science journalists on twitter who say they were former grad students who gave up due to the dismal job market...

As patients, the most frustrating thing is that researchers don't bother to ask patients what the most relevant outcome measures are. Most PROMs for most illnesses have never been tested for cross-cultural understandability and relevance for patients. When questioned, researchers love to quote Cronbach's alpha, or mention that such PROMs are widely used in the field. But a high Cronbach's alpha doesn't necessarily imply anything about real-world relevance or validity; it may simply mean that the same bias applies to every question in the set.
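A toy illustration of that last point, using hypothetical data (all names and numbers here are invented): Cronbach's alpha only measures how much the items in a questionnaire cohere with each other. If every answer is driven by the same response tendency, the items cohere strongly and alpha is high, even though none of them tracks actual functioning.

```python
# Hypothetical data: every questionnaire item is driven by one shared
# "response style" per respondent plus noise. The items cohere, so
# Cronbach's alpha is high, yet none of them measures real functioning.
import random

def cronbach_alpha(items):
    """items: one list of scores per question, same respondent order."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

rng = random.Random(1)
n_respondents, k_items = 500, 10
# One shared "response style" per person drives every answer...
style = [rng.gauss(0, 1) for _ in range(n_respondents)]
items = [[style[i] + rng.gauss(0, 0.5) for i in range(n_respondents)]
         for _ in range(k_items)]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")  # well above 0.9
```

High internal consistency, in other words, is compatible with the whole instrument measuring nothing but a shared bias.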
 
It is perhaps worth recalling how some of the problems over "recovery" initially arose.

It may be difficult to fulfil the patients' expectations of what their level of fitness should be since patients have an exaggerated perception of their pre-morbid level of fitness.

McBride SJ, McCluskey DR. Treatment of chronic fatigue syndrome. British Medical Bulletin (1991) 47(4): 895–907, at p. 900.

Certainty is a marvellous quality amongst "scientists".
 
Another thing that I think confuses issues is using the word 'recovery' when actually what is meant is 'in recovery' or in other words in remission as opposed to recovered, as in, no longer symptomatic.

The use of the word 'recovery' for psychotherapies (see stuff on IAPT) seems to be quite different from how most people would understand it.

Although, in PACE as I remember, they state that the 'recovered' patients no longer met the diagnostic criteria for CFS(?) (someone correct me if I'm wrong), which then takes us full circle back to how they diagnosed it in the first place!
 
They used the Oxford criteria, which require 6 months of fatigue such that you can only do 50% of your normal activity. How that level of fatigue is measured is another mystery of course. If it was by some preordained level on the SF-36, then it probably meant that you could count as recovered at the new 60-point threshold.

I suspect it is a circular answer.
 
It is perhaps worth recalling how some of the problems over "recovery" initially arose.

It may be difficult to fulfil the patients' expectations of what their level of fitness should be since patients have an exaggerated perception of their pre-morbid level of fitness.

McBride SJ, McCluskey DR. Treatment of chronic fatigue syndrome. British Medical Bulletin (1991) 47(4): 895–907, at p. 900.

Certainty is a marvellous quality amongst "scientists".
Good grief! Not seen that before. I wonder where they found the 'evidence' for that. (In case not obvious, that was sarcasm on my part). They just dream up these fancy-sounding but completely spurious justifications for their equally spurious claims of treatment successes. It is they who seem to have the grandiosely exaggerated perceptions.
 