Michael Sharpe skewered by @JohntheJack on Twitter

I wonder how Michael Sharpe felt reading that? He'll know it's a huge porkie, and that he's being abandoned.

He will be on the phone to Wessely now saying, 'How can you lie like that? You should show some integrity.' Then they will both laugh, then there will be a three-second silence, then Sharpe will say, 'You know I'll take you down with me, you bastard!' Then another three-second silence, followed by the dialling tone being heard at Sharpe's end.
 
But what we do know is that the field has been clique-ified for decades - as long as the research is presented within a "psychological" framework, they know they can get the "right" reviewers for their work.
There is a concept of abuse in research called a citation circle. It's frowned upon. For years I have been talking about review circles, where authors review each other's work but never publish together. In small fields this is easy to do, and it results in research largely unaccountable to the scientific community ... until, of course, somebody with influence notices.
 
This was a post three days ago, before I prodded Godwin to reply to Sharpe’s tweet...

Wessely is too savvy to get into a public fight with an old friend who happens to think his beloved trial is a pile of crap. The probability that Sharpe would provoke a skirmish was always much higher. I’m so pleased he didn’t let us down.
 
It's great that Godwin is weighing in, although I don't really understand why in PACE the model was not falsifiable--it was falsifiable. The results per the protocol assessments proved that the model of the illness and treatments was wrong--or at least was not borne out in this experiment. They just disregarded their own results, as @TiredSam notes, and published bogus ones.
No. In one sense, Sharpe is correct. A null result would not have demonstrated that their underlying behavioural model was wrong. It still could have been right, but patients' beliefs were simply unshakeable, and therefore they simply didn't respond to the intervention.

But a null result certainly doesn't provide any support for the model either.
 
Some of Sharpe's tweets to patients in the early stages were quite patronising. Using capitals to explain to patients who were much cleverer and better-informed than him that a clinical trial is all about the "DIFFERENCE between arms" (the capitalisation is his!). This is a fascinating occurrence that seems to reflect two psychological phenomena.

The first is the "curse of knowledge". This refers to the finding that people who know a particular fact overestimate the likelihood that other people will also know it; if they don't know a fact, they underestimate the number of people who do. I suspect Sharpe has a shaky knowledge of trial procedure, so he can't imagine anyone knowing more than him.

That's why he talks down to us.

Another phenomenon, less well documented, is what I'd like to call the "Goop effect". This happens when a person enjoys widespread, unquestioning adulation for a long time. Their ego now hugely inflated, they start to believe they are an authority on everything. They then overestimate their own capabilities in areas that have nothing to do with what gained them adulation and respect in the first place (in Goop's case, selling alternative medicine BS). I think a similar thing happens to high-profile psychiatrists. Years of unquestioning adulation. I don't think they have any grasp of the fact that they are amateurs at doing science. They've had no proper formal training in research. They genuinely think they know so much more than those idiot patients.
 
@Woolie, I believe you're referencing the DUNNING-KRUGER EFFECT*
*(sorry, I couldn't resist the ironic dumbsplaining and capitalisation :bag:)
:rofl::rofl::rofl::rofl::rofl:!!!!

No, Dunning-Kruger is different. That's about how you think you know a lot about a subject when you're starting out (because you're unaware of all the complexities), but then, as you get more familiar with the subject, you appreciate there are huge gaps in your knowledge.

The curse of knowledge is about assuming people think the same as you; it's a theory-of-mind failure. It's what makes teachers annoyed when their students don't get the simplest problems (because honestly, it's SO basic, how could you not know that!):

https://en.wikipedia.org/wiki/Curse_of_knowledge
 
Actually I have it on good authority that (*literally*) NOBODY knew that health care could be so complicated.

...

A null result would not have demonstrated that their underlying behavioural model was wrong. It still could have been right, but patients' beliefs were simply unshakeable, and therefore they simply didn't respond to the intervention.
Or that the treatments needed to be refined to better 'reach' patients.

I take PACE as 'just' more circumstantial evidence against the unhelpful cognition/deconditioning 'model'. You can't completely disprove the model until you can objectively measure beliefs - even if it was cancer under investigation rather than ME/CFS.

For me the issue you can't avoid with PACE and the like is that it's basically asserting that patients are sufficiently delusional to produce such a severe illness but at the same time amenable to CBT. That's not a thing. Maybe I'm missing something here?
 
No. In one sense, Sharpe is correct. A null result would not have demonstrated that their underlying behavioural model was wrong. It still could have been right, but patients' beliefs were simply unshakeable, and therefore they simply didn't respond to the intervention.

I am confused now. In these terms Godwin is right. An experiment that is unable to falsify a hypothesis is of no value as a test of that hypothesis. I don't think Sharpe even understands what Godwin is going on about here.

But I think David is right to suggest that Godwin may be focusing on the wrong objective for the experiment. PACE was not primarily designed to test an underlying behavioural model. It was designed, as Sharpe implies, to test a practical implication of that model: that CBT and GET would be effective treatments.

If it were not for the fact that the unblinded nature of the study makes it hard to make anything of, David would be right to say that this more limited 'model' is falsifiable, and was pretty much falsified. The fact that this does not test the underlying theory in a 'dangerous' way does not really matter, because a strongly positive result with a more robust assessment methodology would have provided useful corroboration for the theory, and that is not something that even Popper discounts. In terms of the underlying theory, the trial is fatally flawed because, with the unblinded methodology, it could not have provided reliable corroboration.
 