We were actually invited to write a reply to this, which Tom Kindlon and I did. I'm surprised it hasn't come out at the same time as the Sharpe response. This is very distressing, as obviously it's much better to be able to reply at the same time. Hopefully, because the journal is fully online, the reply can be linked as soon as it's released.
There was some confusion about the process we were supposed to follow. We were invited to reply and to send the manuscript to the editor via email, which we did. It was reviewed and we were asked to make a few small changes. I was then told the plan had changed, and that the Sharpe piece and our reply would now be published as separate pieces. I was asked to resubmit my reply via the online portal (the usual way to submit a new article). I wonder whether that means it will also go through the full review process - hence the delay?
I see absolutely no reason why I shouldn't share the reply with you all now. Here's a PDF. As soon as I get a chance, I'll reformat it into a prettier version and update the file here.
We were actually invited to write a reply to this, which Tom Kindlon and I did.
Sharpe once claimed that the reason The Lancet fast-tracked it is that they pre-registered the protocol.
It also brings into question The Lancet's fast-track process, whereby trials that have lodged a protocol can then undergo an expedited review. This makes no sense if crucial elements of the protocol are then amended at publication.
Has this issue been raised with The Lancet?
[eta - apart from me banging on about it on Twitter that is]
Sharpe and Vogt cried foul about how long it took for their letter to be published.
Excellent work @Carolyn Wilshire and @Tom Kindlon. Thank you. I hope the journal publishes it as promised and as soon as possible.
As always, it just confirms that the trial was a formality.
Oh, I don't mean it was easy. It's pretty hard to cheat in a way that seems legit, and you're right about the TMG minutes showing how much trouble they had keeping up the appearance of doing a serious trial. Reality really didn't want to conform to their expectations.
I'm not sure that was the case, though. They made some things damn hard for themselves. It was massively over-complex and a logistical nightmare, as the TMG minutes recount. They were clearly trying to cover all bases and not really thinking about what the participants were going to have to put up with as a consequence.
... and if you include the recovery paper, are also consistent with regression to the mean ... all results, including basically doing nothing, have the same outcome.
It means they are in denial about the fact that the findings reported in PACE are entirely consistent with placebo effects.
In good science this ends all their claims. Since they won't accept that ...
It was the fact that there was no longer any difference at long-term follow-up...
Ditto.
The term CFS was in use in Australia no later than 1988/89, because that was when I was diagnosed. It was also being used in the media here.
I have always thought this. Plus not disclosing in the papers the fact that 13% met the fraudulent "normal range"/"recovery" thresholds. No one has held them to account for the fact that this was in no way a "normal range" given their population data. It is a seriously overlooked point.
I still think the deliberate use of SD on SF36PF data, given the PDW 2007 paper, is evidence of deliberate scientific misconduct and needs to be formally investigated by independent and qualified investigators.
With good reason:
Sharpe seemed particularly sensitive to this point.
We were actually invited to write a reply to this [...] Here's a PDF.
My favorite is dismissing the reanalysis because it only used partial data.
Such a great response, with many well-reasoned points that ought to be obvious.
One point in particular that never ceases to amaze me is where they accuse the reanalysis of 'not using an a priori analysis plan'. So... their change is fine because they simply felt it was better that way, but if anyone else does it (which was not even the case), it is bad? I mean, yes, it is bad for all the reasons mentioned in response to THEIR changes, but come on. You cannot have your cake and eat it too.
My favorite is dismissing the reanalysis because it only used partial data.
Because they withhold the data.
I don't understand how an editor finds that acceptable. Completely ridiculous.