Wilshire et al. suggest that, because the primary outcomes in PACE were participant-rated (CFS is defined by self-report, as there are no reliable objective measures) and the participants inevitably knew which treatment they had received (the therapies tested could not be blinded, as they were collaborative between therapist and participant), the trial findings can be dismissed as simply the product of participant bias in rating outcomes. We disagree with this proposition. Whilst participant-rated outcomes potentially pose a risk of bias in any trial of unblindable treatments, we do not agree that this is a convincing alternative explanation for the PACE trial findings, for three reasons. First, participants did not just give global ratings at the end of treatment; they answered specific questions about fatigue and function as long as six months after therapy was completed, making a transient ‘placebo’-type effect very unlikely. Second, the majority of the trial’s secondary measures showed a similar pattern to the primary outcomes. Third, and most importantly, the trial design controlled for non-specific effects of treatment by comparing three therapies (CBT, GET and APT) that were all matched for attention and credibility. Credibility matching was assessed by asking participants, before they received treatment, how logical they found the therapy they had been allocated to and how confident they were that it would help them; the credibility rating of APT was higher than that of CBT and similar to that of GET. Despite this control, the biggest difference in outcomes was between APT and both CBT and GET. Indeed, one could argue that if the participant-rated outcomes had been biased by non-specific factors such as perceived credibility, APT should have performed best, when in fact it performed worst.