For me, the 'elephant in the room', which applies not only to PACE but to many of the other studies, is their definition of recovery.
I would like to see some exposure of what they deemed 'recovered'; namely that it roughly equates to the health and fitness of someone in their eighties with rheumatoid arthritis or congestive heart failure, when these were previously active people in their 30s and 40s.
If the journalists can't get to grips with the issues in the scientific methodology, surely this is one thing they can see is plainly wrong.
There is an ongoing debate in bio/medical/psychological research about exactly this issue. Publishing a paper mostly involves showing that your results are statistically significant (which means, roughly, that there is less than a 5% chance that you would have seen results at least this strong if the treatment had no effect). But it very often doesn't involve asking whether the results were clinically relevant.
Let's suppose that CBT and GET have no real influence on how patients physically function in their everyday lives, but that they boost a patient's self-belief to the point where they feel slightly more accepting of their condition, and so give slightly better replies on self-report questionnaires, or push themselves a little harder when given a physical test. If that happened for a decent number of people, the results would come out statistically significant, yet the improvement in daily functioning could be negligible.
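To make that concrete, here is a rough back-of-the-envelope simulation in Python. All the numbers in it (the 3-point average bump on a 0-100 questionnaire, the 15-point spread, the 8-point "clinically meaningful" threshold, the 500-per-arm sample size) are invented purely for illustration and have nothing to do with the actual PACE data or its outcome measures; it just shows how a tiny average shift can come out "statistically significant" in most trials while staying well below anything a patient would notice.

```python
# Toy simulation (illustrative numbers only, not the real PACE data):
# a small average bump in self-report scores, well below what a patient
# would actually notice, is still declared "statistically significant"
# in most simulated trials once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_arm = 500              # hypothetical trial size per arm (assumed)
true_shift = 3.0             # tiny average improvement on a 0-100 questionnaire (assumed)
sd = 15.0                    # spread of individual scores (assumed)
clinically_meaningful = 8.0  # assumed smallest difference a patient would notice
n_trials = 2000              # number of simulated trials

significant = 0
for _ in range(n_trials):
    control = rng.normal(50.0, sd, n_per_arm)
    treatment = rng.normal(50.0 + true_shift, sd, n_per_arm)
    _, p = stats.ttest_ind(treatment, control)  # two-sample t-test
    if p < 0.05:
        significant += 1

print(f"true average improvement: {true_shift} points "
      f"(assumed clinically meaningful threshold: {clinically_meaningful})")
print(f"simulated trials declaring 'statistical significance': "
      f"{significant / n_trials:.0%}")
```

With these made-up numbers, something like 85-90% of the simulated trials cross the p < 0.05 line, even though the assumed true improvement is barely a third of the threshold a patient would notice.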
It's a real limitation of science that, in their understandable wish to eliminate noise and have reliable measures, researchers tend to reduce outcomes to what they can measure. Meanwhile, in the real world, people would (completely legitimately) each have their own slightly different definition of recovery: being able to hold down a job, or kick a football around with the kids, and so on. Even "getting back to where I was before" is hard to measure, because nobody evaluated their lives before they became ill. So those definitions become very hard to use as trial outcomes.
One of the difficulties is that science of this kind is proper hard to do, and the journalists know it (I suspect that many science journalists are people who didn't make it through graduate school), so when the scientists say "Ah, well, yes, but of course we have to use objective criteria because <jargon, much of it actually legitimate>", the journalists are reluctant to say "Yeah, but that's not much help to the individual patients, is it?". At that point the researchers would look pained and say "Well, we're trying our best". And that's if they're acting in good faith too.