Why would using a blinded evaluator help if you are recording said subjective responses using a paper questionnaire?


PACE did have some doctor-based evaluations, and I think for the CGI score they substituted in the assessor's value when the patient's value was missing.
 
https://ora.ox.ac.uk/catalog/uuid:b8a4b340-4d03-41dc-8351-9cb462ef1ba4/download_file?file_format=pdf&safe_filename=Controversy+over+exercise+therapy+for+chronic+fatigue+syndrome+-+Continuing+the+debate.pdf&type_of_work=Journal+article

You can see how the PACE authors respond to criticism.

They refer to the Wilshire et al paper as "results of a secondary analysis of the data" that "reports the proportions of participants meeting various criteria for recovery". They try to obfuscate the fact that Wilshire et al used the PACE authors' own published protocol for this analysis.

They dismiss the criticism regarding the newsletter, essentially saying that it didn't unduly favour any treatment. It's still a mistake to bias participant expectations at all, and whether it truly didn't favour certain treatments more than others is unclear and very debatable. "Therapists" are mentioned several times in glowing comments which cannot refer to the control group. A doctor praises the therapy. It's also mentioned that NICE recommends CBT, GET and activity management (but activity management is not adaptive pacing therapy or pacing; NICE describes it as a form of graded activity). Yet they base their claims of efficacy on the difference between the control group and the CBT/GET groups.

We also measured participant expectation of their allocated treatment after they had been informed of it and, as reported in the main paper, most participants considered adaptive pacing therapy (APT) and GET to be most likely to help them, whereas the trial found CBT and GET were most effective (White et al, 2011)

Here they pretend that a criticism concerning the introduction of bias during the trial can be refuted by referring to baseline expectations. It cannot. They also fail to mention that the control group had much lower expectations even at baseline. Again, they base their claims of efficacy on the difference between the control group and the CBT/GET groups.
 
PACE did have some doctor-based evaluations, and I think for the CGI score they substituted in the assessor's value when the patient's value was missing.

Arguably that could create more bias, not less.

CGI and attendance, although filled in on anonymised forms, were recorded by the therapist - they had to know the participant to be able to do that.
It was not a blinded assessment.
 
@Lucibee what do you think about the original PACE protocol allowing for a later formulation of an exact statistical analysis plan? That sounds like the authors giving themselves permission to make unspecified changes to the statistical analysis.

They also only published this statistical analysis plan in 2013, and we are asked to trust that they never peeked at the data before finalising their plan. They probably didn't need to: they knew from the FINE trial that the rehabilitation programme it tested did nothing (and it included graded return to activity).
 
There were so many problems with the original protocol anyway that I'm really not sure it makes much difference. We sometimes get a bit obsessed that they haven't followed the rules, and forget that what they *did* intend to do didn't actually stand up to much either.
 
Why would using a blinded evaluator help if you are recording said subjective responses using a paper questionnaire?

Yeah, that's a weird one. It's still a self-evaluation. The mere fact that an evaluator is involved at all influences the outcome, which is of course the whole point when the treatment itself is gaslighting.
 
I was originally diagnosed as having migraine "with more of the funny symptoms and less of the headache". In about 1982 I was part of a clinical trial in the psychology department of my local hospital. I think it might have been an early form of CBT tested against more traditional treatments, as we made a list of certain concerns and then looked at them one at a time.

Apart from needing to be reminded that my problems in shops were not because of agoraphobia but headaches from the strip lighting, it was not too bad, and it was useful for some things, like making plans and lists (which have helped me live with ME) and the usual relaxation exercises.

(When we were done, I asked him if being on the trial would mean that every ache I had from now on would be put down to psychological problems, and he said maybe, but that I would know they weren't - which I have kept in mind in the dark days of ME :))

Anyway, back to the point. At the start I was assessed by a different psychologist with a standard set of questions. This was repeated at the end by the same guy. He did not know which arm of the trial I had been in, so I always felt it was properly blinded, though there might be flaws with it that I can't see.
 
I know that people have already reported that some of the links to the documentation for the PACE trial no longer work.
Just going through the Sharpe/Wessely/Mike Godwin Twitter thread here on S4ME, I picked up that the link Wessely gives for the PACE trial is now a dead link.
The current link is here:
https://www.qmul.ac.uk/wolfson/research-projects/current-projects/projects/pace-trial.html

(some of the links still don't work, e.g. the one to the Cochrane review on exercise)
 
I know that people have already reported that some of the links to the documentation for the PACE trial no longer work.
Just going through the Sharpe/Wessely/Mike Godwin Twitter thread here on S4ME, I picked up that the link Wessely gives for the PACE trial is now a dead link.
The current link is here:
https://www.qmul.ac.uk/wolfson/research-projects/current-projects/projects/pace-trial.html

(some of the links still don't work, e.g. the one to the Cochrane review on exercise)

Interesting that the top 2 items under "Latest news" have lost their links now...

9 September 2016 Statement: Release of individual patient data from the PACE trial

8 September 2016: PACE trial team analyse main outcome measures according to the original protocol

I seem to remember that analysis getting pretty similar results to those of Wilshire et al, but it disappeared from the website soon afterwards. Did anyone capture it before it went?
 
Interesting that the top 2 items under "Latest news" have lost their links now...



I seem to remember that analysis getting pretty similar results to those of Wilshire et al, but it disappeared from the website soon afterwards. Did anyone capture it before it went?
 

Attachments

Thanks @Tom Kindlon. I had misremembered. It was improvement they looked at. The results are somewhat affected by their definition of "improvement", which also included a 50% increase from baseline (in PF) - a threshold that is very sensitive to regression to the mean, particularly in those with low scores at baseline.
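
To illustrate why (my own hypothetical numbers, not figures from the trial, and assuming the SF-36 PF subscale scored 0-100):

baseline 30 → counts as "improved" at 30 × 1.5 = 45, a 15-point rise
baseline 60 → counts as "improved" at 60 × 1.5 = 90, a 30-point rise

The lowest scorers get the smallest absolute target to hit, and they are also the participants most likely to drift back up towards the mean on retest, so a proportional threshold flatters the apparent improvement rate in that group.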

Looking back at this, I'm also struck by this combination of info:

In the Protocol (Background, Introduction [section 4.1]), they say,
The prognosis is poor: in primary care only a third improve by one year, and of those referred to secondary care less than 10% return to pre-morbid functioning.

Yet, in their per-protocol analysis, only a fifth of those on CBT or GET improved by one year, even with subjective enhancement. And their study provided no way to determine whether anyone returned to pre-morbid functioning.
 
Thanks

In the Protocol (Background, Introduction [section 4.1]), they say,

The prognosis is poor: in primary care only a third improve by one year, and of those referred to secondary care less than 10% return to pre-morbid functioning [3, 9].

Yet, in their per-protocol analysis, only a fifth of those on CBT or GET improved by one year, even with subjective enhancement. And their study provided no way to determine whether anyone returned to pre-morbid functioning.

Good point.

For anyone interested, these are the references:

3. Wessely SC, Hotopf M, Sharpe M. Chronic fatigue and its syndromes. Oxford: Oxford University Press; 1998.

9. Joyce J, Hotopf M, Wessely S. The prognosis of chronic fatigue and chronic fatigue syndrome: a systematic review. Q J Med. 1997;90:223-233.

I'd assumed that they'd have cited this 2005 review on prognosis that had similar figures on recovery (https://academic.oup.com/occmed/article/55/1/20/1392403), but maybe that came out after the protocol was written, and I've not read that 1997 one. The evidence in the 2005 one was still less than overwhelming.
 
I'd assumed that they'd have cited this 2005 review on prognosis that had similar figures on recovery (https://academic.oup.com/occmed/article/55/1/20/1392403), but maybe that came out after the protocol was written, and I've not read that 1997 one. The evidence in the 2005 one was still less than overwhelming.

Interesting that Hotopf's review abstract makes the point that this is improvement *without* any systematic intervention, so to find such a poor response in the PACE per-protocol analysis is doubly damning.
 
In the Protocol (Background, Introduction [section 4.1]), they say,
The prognosis is poor: in primary care only a third improve by one year, and of those referred to secondary care less than 10% return to pre-morbid functioning.

Yet, in their per-protocol analysis, only a fifth of those on CBT or GET improved by one year, even with subjective enhancement. And their study provided no way to determine whether anyone returned to pre-morbid functioning.

It's clear as mud. They need to be much clearer about what they are referring to.

The key is the timeframe: the claim that one third improve by one year is most likely measured from the time of acute onset - a third improve somewhat until their illness plateaus.

But the PACE trial did not capture patients at acute onset and diagnosis in a community/population sample and then follow their treatment, so the two improvement claims are not comparable.
 