Caroline Struthers' correspondence and blog on the Cochrane Review 'Exercise therapy for chronic fatigue syndrome', 2017 and 2019, Larun et al.

I have vague memories of Wessely saying something incriminating, probably in one of his "Standing up for science / poor little me, all those militant ME activists are out to get me" speeches. He said something like "well we had to change the criteria otherwise no-one would have got better", and then referred to this with a mixed metaphor: something like "I seem to have let the hare out of the bag...."

Does any of this resonate with anyone, or am I just having weird dreams? I think @Nathalie Wright might have been at the relevant talk.
https://www.s4me.info/threads/iime-...recommended-treatments.1949/page-8#post-35003


https://www.s4me.info/threads/prof-...tific-bbc-radio-4-14-feb-2017.991/#post-29947

Does he really not see what a joke he is?

Edit: Added second link.
 
Wessely:
In essence though they decided they were using an overly harsh set of criteria that didn't match what most people would consider recovery and were incongruent with previous work, so they changed their minds - before a single piece of data had been looked at of course. Nothing at all wrong in that - happens in vast numbers of trials. The problem arises, as studies have shown, when these changes are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right - the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many.


https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/comment-page-6/#comments

What Wessely is saying is a load of rubbish. He was one of the co-authors of Deale et al.
For example, the definitions of recovery used in the studies by Deale et al. [8] and Knoop et al. [9] were a lot stricter than the revised criteria used in PACE. If the Deale et al. recovery criteria are applied to the PACE data, for example (it is possible to use three of the four criteria), the PACE recovery rates fall to a maximum of 9% for CBT, which is very different from the 24% for CBT cited in Deale et al.

Carolyn Wilshire, Tom Kindlon & Simon McGrath (2017) PACE trial claims of recovery are not justified by the data: a rejoinder to Sharpe, Chalder, Johnson, Goldsmith and White (2017), Fatigue: Biomedicine, Health & Behavior, 5:1, 62-67, DOI: 10.1080/21641846.2017.1299358
 
"As long as it's properly reported (ie, CONSORT), you can do what you like."

Well, no, you can't actually!
Yep. (Post #89)
And where is he coming from, suggesting that so long as changes are reported properly they must be OK? If that were the case, they could have changed the criteria so that corpses were deemed recovered.
 
"As long as it's properly reported (ie, CONSORT), you can do what you like."

Well, no, you can't actually!

I thought CONSORT involved reporting all secondary outcomes, and thus they didn't follow CONSORT. They may claim they did because they silently dropped some of the secondary outcomes in the SAP, but that is not a new protocol and they give no reasoning.
 
Why did the results from PACE have to be CONGRUENT with other studies?
Surely every study should be independent? Unless of course PACE was about to overturn the results from all the previous trials, and void the GET hypothesis.
 
According to the BPS crowd, PACE had to be done because we didn't previously have a definitive answer from a large enough trial.

And now that PACE is hanging by a thread, it's Wessely's position that PACE isn't all that important (unless he or his colleagues are being given awards for it as their crowning achievement, of course) because there are other small studies showing the same thing, which we now know is because they deliberately aligned PACE with past results. Never mind that PACE was literally the final trial meant to put those past results to their ultimate test, and that it failed just as they all failed, with nothing in the objective data to show for it.

This kind of nonsense would be ridiculed in most fields of science. It's beyond amateur; it's nakedly fraudulent, with barely any effort to hide the fact, because they control the message and the institutions. The data show it's useless at best and harmful at worst, the protocol-specified analysis shows a null result, the conclusion claims a positive outcome, and the media coverage boasted of a full cure for anyone who wanted it.

This will seriously be THE textbook case of fraudulent research for decades to come. And Wessely actually mocks us by joking that he knew that if they didn't fix the results it would have been a failure. It's not even a secret; it's right there for all to see, if it weren't for prejudice maintaining the suspension of disbelief.
 
Just thought it could be helpful to have the following quotes in one post.

I just thought I'd see how the protocol changes were described in the PACE FAQ (there now seems only to be an annoying link for this: https://www.qmul.ac.uk/wolfson/research-projects/current-projects/projects/#faq )

For their primary outcomes they say the change was : "before any data was analysed".

For the recovery outcome they say the change was: "before the analysis [occurred]".

I think that this is another example of White trying to be clever with his language.
Indeed. Plus, it was not necessary to have analysed the data; it would have been sufficient just to have had a look at some of it to become afraid that the gathered data might not show what it was supposed to show.

While Wessely's language is too obviously self-revealing:
Wessely:
In essence though they decided they were using an overly harsh set of criteria that didn't match what most people would consider recovery and were incongruent with previous work, so they changed their minds - before a single piece of data had been looked at of course. Nothing at all wrong in that - happens in vast numbers of trials. The problem arises, as studies have shown, when these changes are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right - the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many.
*(1)

Why did the results from PACE have to be CONGRUENT with other studies?
Surely every study should be independent? Unless of course PACE was about to overturn the results from all the previous trials, and void the GET hypothesis.

Yep. If he had said "comparable", but no, "congruent"....

And then, in plainest language...
"They changed the recovery measure because they realised they had gone too extreme and they would have the problem that nobody would recover." *(2)
---

*(1) S. Wessely, Sept 23, 2016 at 7:13 am, comment on J. Rehmeyer: Bad science misled millions with chronic fatigue syndrome. Here's how we fought back, Sept 21, 2016, https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/comment-page-6/#comment-56390 , posted by @large donner


*(2) S. Wessely, Standing up for Science panel discussion, March 2017, tweet by Janet Eastham, posted by @Sly Saint / @Barry
 
had gone too extreme

Which is absurd on its face. 85 is still below the population average. This is implausible deniability built on the fiction of a normal distribution on a scale that is heavily skewed.

Wessely absolutely did give the game away. And people laughed. Haha, so funny, we published fraudulent research that is destroying millions of lives, leaving them in despair and isolation. So funny, hahahaha, it's just a modest proposal, hahahaha.
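The skew point above can be sketched with a toy example. The numbers below are invented for illustration (not SF-36 data): most of a healthy sample sits at the ceiling of a 0-100 scale, a minority with health problems drag the mean down, and a cut-off "below the mean" can still be one that the large majority of healthy people meet:

```python
import statistics

# Hypothetical scores for a healthy working-age sample on a 0-100
# physical-function-style scale (invented numbers, illustration only).
# Most people cluster at the ceiling; a minority with health problems
# pull the mean down, so the distribution is heavily left-skewed.
scores = [100] * 70 + [95] * 10 + [85] * 5 + [70] * 5 + [50] * 5 + [20] * 5

mean = statistics.mean(scores)      # dragged down by the skewed tail
median = statistics.median(scores)  # what a typical person scores

share_at_or_above_85 = sum(s >= 85 for s in scores) / len(scores)

print(f"mean={mean}, median={median}, share >= 85: {share_at_or_above_85:.0%}")
# In this toy sample the mean is 90.75 but the median is 100, so a
# threshold of 85 is "below the mean" yet met by 85% of the sample.
```

So on a ceiling-skewed scale, "below the population mean" says very little: a threshold can sit below the mean and still be reached by the great majority of healthy people.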
 
You shouldn't be so swift to condemn.

EDIT: sorry, couldn't resist the response to that allusion.
 
A further thought on Wessely's infamous quote ...
[attached screenshot of the quoted statement]
They could of course only have realised that by peeking at the data beforehand, or at least by having a very good idea what that data was going to show. SW clearly concedes as much, else he could not have made that statement.
 
Today I managed to canvass the opinion of the members of the University College London Department of Medicine at Grand Rounds. The attendance was good, about 80, for a presentation on medical negligence. I asked of the statement:

An experiment with subjective outcome measures not blinded to test versus control is unreliable and therefore unsatisfactory.

Do you agree or disagree?

Nobody disagreed. All agreed bar one abstention.
I deliberately made the statement general because this is a general principle for science, not just for trials. The abstainer pointed out that more detail might affect his opinion and it is true that there can be mitigating factors.

So although this is an opinion it is more or less universally held by a body of academics with no vested interest in any particular case.
 
Would they equally have agreed with this version: "An experiment with subjective outcome measures not blinded to test versus control is unreliable and therefore unsatisfactory unless the experiment involves ME"?

I cannot answer without the evidence. However, if the previous head of department had been present he would have been more likely to agree if 'unless' was replaced by 'especially if'.

Maybe followed by (laughter from audience)
 