Controversy over exercise therapy for chronic fatigue syndrome: Continuing the debate (from 2017)

Discussion in 'Psychosomatic news - ME/CFS and Long Covid' started by Sly Saint, Apr 25, 2021.


    Part of: BJPsych Advances Cochrane Corner and Round the Corner Collection
    Published online by Cambridge University Press: 11 April 2018

    Editor's summary
    In a recent Round the Corner, Mitchell commented on a Cochrane Review of exercise therapy for chronic fatigue syndrome (CFS). One of the trials included in that review, and discussed by Mitchell, was the PACE trial. In this month's Round the Corner we are publishing a response we received from authors of the PACE trial (Chalder, White & Sharpe), together with Mitchell's reply. Ed.

    Response by Chalder, White & Sharpe
    In his Round the Corner commentary on a Cochrane Review of exercise therapy for chronic fatigue syndrome (CFS), Mitchell made a number of criticisms of the PACE trial (Mitchell 2017). However, the criticisms are based on inaccurate information and are consequently misleading.

    First, Mitchell suggests that a re-analysis of some of the data related to the PACE trial found the effect sizes to be smaller than those which we originally reported (White 2011). This is incorrect. He has confused the effect size reported in the main trial paper (which was calculated using scores of the two primary outcomes) with results of a secondary analysis of the data. The latter reports the proportions of participants meeting various criteria for recovery (see below) (White 2013; Wilshire 2017).

    Second, Mitchell implies that we only released certain results, such as objective metrics from the 6-minute walking test data, as a consequence of data release that was forced on us following an information tribunal hearing in 2016. These results were in fact published in our main results paper 5 years earlier (White 2011).

    Third, Mitchell states: ‘it is also alleged that the investigators (perhaps inadvertently) influenced participants’ self-reports with indiscriminate encouragement in newsletters sent out during the trial’ (Mitchell 2017). It has indeed been alleged, but the allegation is incorrect. As in all well-run trials, we engaged with participants by sending them regular newsletters about trial progress. As part of that, we included quotations of positive feedback about the trial and the treatments that participants had received. The newsletters (which readers can review) did not name any treatment and included positive quotations about all four treatments being evaluated in the trial. We also measured participants’ expectations of their allocated treatment after they had been informed of it and, as reported in the main paper, most participants considered adaptive pacing therapy (APT) and graded exercise therapy (GET) to be most likely to help them, whereas the trial found that cognitive–behavioural therapy (CBT) and GET were most effective (White 2011).

    Fourth, Mitchell says, ‘It is also alleged that the investigators switched their own scoring methods mid-trial’ (Mitchell 2017). As is common practice in most trials, and as we agreed to do in our original protocol (White 2007), the outline analysis plan was reviewed and a detailed analysis plan was written and subsequently published (Walwyn 2013). This was approved by two independent oversight committees before any outcome data were analysed. The detailed plan used the same primary outcomes. The change Mitchell is referring to was in the scoring method of one of the primary outcome measures. A binary scoring method (0, 0, 1, 1) was changed to a Likert scoring method (0, 1, 2, 3) in order to provide a more accurate measure of efficacy. This change and the reason for it were clearly reported in the papers (White 2011; Walwyn 2013). Re-analysing the data using the binary scoring made no difference to our conclusions that both CBT and GET are effective treatments (Goldsmith 2016).

    Fifth, Mitchell criticises us and one of our universities for not releasing more data, and earlier. This criticism is misleading. We have already explained that we simply did not have participants’ consent to release their individual patient data (White 2016). This is because the public release of data, which has now occurred as a result of an information tribunal hearing, and which Mitchell promotes in providing a link, was explicitly proscribed by our research ethics committee. We have, however, shared data with other researchers, including a Cochrane Collaboration team, who agreed to keep the data confidential.

    Finally, Mitchell suggests that a re-analysis of the proportions of participants meeting criteria for recovery suggests that few participants recovered with CBT and GET (Wilshire 2017). We have already pointed out that our recovery (as opposed to improvement) estimates depended on assumptions (White 2013; Sharpe 2017). The Wilshire re-analysis simply makes different assumptions, using more stringent thresholds to determine recovery. That said, our recovery rates were similar to those found in previous studies (22% recovered after CBT and GET) (Sharpe 2017).

    We agree with Mitchell that there are lessons to be learnt from the PACE trial, but they are not the lessons he suggests.

    Mitchell says: ‘First and foremost, it is imperative for researchers to publish studies in the most open and transparent manner possible’. In fact almost all our papers were published with open access, and we have responded to scientific queries and criticisms appropriately and repeatedly in papers cited here, in journal correspondence, and in over 100 frequently asked questions available on the trial website. We have also shared data when ethically possible (White 2016).

    Mitchell says: ‘A second lesson is that clinicians and researchers should work more closely with patients…’. In fact a patient charity and a patient were involved early on in designing the trial, and were full members of our trial steering and/or trial management committees (White 2015).

    Mitchell says: ‘The third lesson is that, to promote acceptability, psychosocial treatments should be integrated into medical care’. In fact the PACE trial treatments were integrated with medical care and all participants in the PACE trial received appropriate medical care provided by CFS specialists.

    We suggest that the most obvious lesson from our experience of the PACE trial is that science can sometimes provide answers that are not popular with everyone (Lancet 2011; Hawkes 2011; Wessely 2015; Sharpe 2016). However, such answers should stand or fall by independent replication, not by unreasonable criticism and demands for retraction. We note that the PACE trial replicated findings from many earlier randomised controlled trials, many of which were conducted by independent researchers in different countries (Castell 2011; Larun 2016).

    Riposte by Mitchell
    I thank Chalder et al for their response to my commentary (Mitchell 2017), which included discussion of the PACE trial. Since publication of this commentary I have been contacted by various individuals claiming I was too lenient and various others stating I was not lenient enough in my evaluation of the PACE trial. All the points raised by Chalder et al have been extensively discussed online already (Wilshire 2017; Sharpe 2017), but I will take this opportunity to look at three key points. The first is how many patients improved in the PACE trial, the second is how many recovered in the PACE trial and the third is the general point of whether original, suitably anonymised data should be published along with primary studies, an initiative called ‘open data’.

    full reply at link