They seem to have left out the 'controlled' bit of RCT.
No amount of randomisation impacts on the high risk of bias when using subjective measures in unblinded trials.
As I understand it, randomisation is to avoid selection bias. Other forms of bias need other strategies to deal with them. Randomisation is just one important tool amongst a good many important tools needed for properly run trials; the integrity of a trial is presumably only as strong as the weakest link in the toolchain.
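(Not part of any post above, just a toy simulation sketch in Python/NumPy with made-up numbers, to illustrate the point: randomisation balances the two groups, but if the self-reported outcome in the unblinded arm absorbs even a modest response bias, the analysis shows an apparent "effect" when the true effect is zero.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # participants per arm

# Randomised allocation: both arms are drawn from the same population,
# so the true treatment effect is zero.
control = rng.normal(50, 10, n)
treatment = rng.normal(50, 10, n)

# Unblinded, self-reported outcome: participants who know they received the
# "active" therapy rate themselves a few points higher (response bias).
response_bias = 5  # hypothetical size of the bias, in scale points
reported_treatment = treatment + response_bias

print("objective difference :", round(treatment.mean() - control.mean(), 2))
print("reported difference  :", round(reported_treatment.mean() - control.mean(), 2))
```

Blinding or an objective outcome measure would remove that gap; randomisation on its own cannot.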
No matter how good or sophisticated your statistical analysis, if your data is subject to bias, your results are worthless. Not sure this language is acceptable, but there is no way of getting around it: shit in means shit out.
This is nothing but a cynical attempt to lend spurious credibility to bad research.
They seem to have left out the 'controlled' bit of RCT.
There is a very good reason for this I'm sure, as this link will help to clarify:
For example, Robert Courtney points out:
“Although it was a large and expensive government-funded trial, the PACE trial, as with most cognitive-behavioural research, was open-label and failed to control for placebo effects and biases such as response bias.”
It gets better:
Which is why, as I understand, the 2011 PACE paper did not describe it as controlled:
"Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial"
PACE was originally described as 'controlled' in the PACE protocol paper (2007), but not in the main paper (2011).
Yes, I didn't want to load my post with too much. I suspect there must have been rumblings between 2007 and 2011 that brought them up short. I've a vague recollection a reviewer maybe brought the issue up.
.... Does Cochrane know that LP literally entails standing on paper and shouting at it?.....