The failure of the PACE authors to run a sensitivity analysis for their change of outcomes is one of their worst failings. Doing so could very well have exposed the fundamental problems much more starkly. Which, presumably, is why they didn't do it (or at least didn't publish it if they did).
Indeed.
I find it even less excusable for a review, at this point, to ignore doing this, given the benefit of these patient surveys of harm. Whatever issues they want to claim those surveys have, the surveys are better than 'pondering': they are data, and at the very least they highlight that this is a big and relevant issue underlying all of it.
The drop-out rate on PACE and similar trials hits anyone who looks at it closely over the head with a huge question: was the treatment effectively used as a filter to produce a 'fitter population', by putting ill people through a regimen that only the least unwell could complete? Certainly by this point it should.
If you have data that explains those drop-outs as being due to harm, and you want to critique that data, then replace it with something more extensive as a check (yellow-card reporting, a bigger reporting survey, along the lines of how harms would be collected for any other illness, with no 'tricks' behind it). You don't dismiss it and fall back on the original authors' old top-of-the-head explanations that explain it less well.
Otherwise, what is the point of any review or retrospective if it neither picks up on these things nor uses data and knowledge discovered since that could help explain these issues? And I think the choice both to include PACE-type trials and to leave them un-updated with knowledge now known, if it stands as it does, simply leaves IQWiG with no leg to stand on as a proper review. Particularly when the NICE guidelines have pointed all these issues out and IQWiG simply chose to remove them rather than address them in any other way, it's pretty astounding.