Absolutely. One thing that astonishes me is the number of people involved, the international meetings involved, the time involved (first draft 2016), and the cost of all those people's time. And they produce this crap. We could have done better here in a week.

And they wonder why we question their integrity.
Is there such a thing as a large (or even a small) trial without any flaws? That seems either incredibly naive, or very devious, as it's completely subjective and therefore opens the door to open-ended arguments about how one obviously flawed trial is more significant than another just because the outcome is preferred. We're already at the point where this argument is actually made and accepted.
On the face of it that makes some sense to me, the gold standard for comparison being "a large randomised trial without any flaws". So all we need now is either the authors' definition of that gold standard, or a reference in the paper to a document defining it.
But I see no definition in this paper of any such gold standard for flawless large randomised trials, nor any reference to such a definition. (If I've missed it then please someone correct me.) Without it the whole paper is worthless. The tool is for identifying bias (discrepancies from a gold standard), yet the gold standard is not identified; so how can the validity of the tool possibly be assessed from this paper? Without a definition it is merely a muddy standard.
I actually think it is valid to consider a hypothetically flawless ideal whatever-it-is-under-consideration, from which to then assess how the reality deviates from that ideal. In reality everything will inevitably deviate from it to some degree, in some aspects, but to what degree, and in what aspects, are key.
I do appreciate this. But in engineering there is no such thing as a perfect component either, even though there is a theoretical ideal against which its deviations will be measured.

I can see both sides of the argument here. But I tend to come down on the idea that there is no such thing as a perfect trial most of the time.
Let us say we have an unblindable treatment. And our key outcome is subjective. How can there ever be a perfect trial of how well the treatment improves the outcome measure? Every time you think of a clever way to optimise things you will find you sacrifice something.
Having spent days in trial planning meetings I am pretty sure that in most situations there is no perfect trial. Everything is a compromise. Which is why I tackle the problem from the other end: just ask the question 'do we have empirical evidence indicating that this method is unreliable?' If the answer is yes then nothing more needs to be said.
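To make the unblindable-treatment, subjective-outcome problem above concrete, here is a toy sketch. Nothing in it comes from the paper; every number is invented purely for illustration. Even with zero real effect, a modest expectation or reporting bias in the unblinded arm shows up as an apparent benefit.

```python
# Toy sketch, not from the paper: all numbers are invented for illustration.
import random

random.seed(1)

def observed_difference(n=200, true_effect=0.0, reporting_bias=0.5):
    # Control arm: the subjective score is just noise around zero change.
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    # Treatment arm: the same true effect (zero here), plus a reporting bias
    # because participants know which arm they are in.
    treated = [random.gauss(true_effect, 1.0) + reporting_bias for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# With no real effect at all, the unblinded comparison still reports ~0.5.
print(round(observed_difference(), 2))
```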
It's definitely valid to aspire to an ideal, the best that we can achieve, but in practice this is unquantifiable, unlike a precise number given for a specific system of measurement that is universal and standardized. In the case of PACE, a perfect flawless trial is one that makes claims promoted for decades seem credible; being objectively accurate is actually undesired. In every non-discriminated disease it would have been laughed out of any room. A "perfect trial" is a very flexible notion depending on bias and circumstances. A clinical trial for peptic ulcers in 2019 would have very different notions of flawlessness from one in the 1960s, all other things being equal.
It's why systems of fits and tolerances came into being, and is at the heart of just about every manufactured product today. You might ideally want a 10.00 mm shaft fitting into a 10.05 mm hole, but manufacturing tolerances make it impossible to reliably and consistently achieve precisely that. So it might be decided that the fit is within acceptable limits if the minimum clearance is 0.03 mm, and the maximum clearance 0.07 mm. And then specify that the shaft has to be manufactured to within 9.99 mm to 10.01 mm, and the hole to within 10.04 mm to 10.06 mm.
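Purely as an illustration, the worked arithmetic behind that example can be written out directly. This is a minimal sketch using only the numbers already given above:

```python
# Minimal sketch of the tolerance arithmetic above (all dimensions in mm,
# using the same numbers as the example).
shaft_min, shaft_max = 9.99, 10.01    # allowed shaft diameters
hole_min, hole_max = 10.04, 10.06     # allowed hole diameters

# Worst cases: largest shaft in smallest hole, smallest shaft in largest hole.
min_clearance = hole_min - shaft_max  # 0.03 mm
max_clearance = hole_max - shaft_min  # 0.07 mm

# Every in-tolerance pairing must stay within the specified clearance limits
# of 0.03 mm to 0.07 mm for the fit to be acceptable.
assert 0.03 - 1e-9 <= min_clearance <= max_clearance <= 0.07 + 1e-9
print(f"clearance: {min_clearance:.2f} mm to {max_clearance:.2f} mm")
```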
It's normal and commonplace in engineering to have a nominal, flawless, and hypothetical datum, from which the majority of instances in reality will inevitably deviate. But you have to have the nominal datum in the first place, even though it may be hypothetical.
So I don't have a problem with the notion of a flawless hypothetical ideal. In my trivial example above, if the nominal 10.00 mm and 10.05 mm, with 0.05 mm clearance, were not specified up front, along with the tolerances, the implementation could be any old rubbish with no way to know what it was supposed to be. You'd buy a new car that sounded like a can of old nails from the outset, and lasted 5 minutes if you were lucky.
In the case of this paper they claim their nominal hypothetical ideal is a perfectly run trial; unachievable in practice, but the yardstick against which to check real-world deviations, and whether those deviations are within acceptable limits or not. But how can you assess whether something is within acceptable limits if you do not identify what the datum is?
So taking, for instance, the importance of objective outcomes in minimising bias: if you do not specify your ideal trial conditions up front, and include in them the relevance of objective outcomes, then it's a shambles. They dare not do that, because they need there to be no precise definition: a) if it is their own definition, it would be ripped to shreds by real scientists, and b) if it is a proper definition, then the tool they have developed would be ripped to shreds by real scientists.
Yes, I do appreciate this. To me it is akin to engineering before any fits and clearance systems existed. In the Napoleonic wars, if a gun component failed there was no notion of ordering a replacement part and just fitting it, because even the same component varied greatly from one gun to the next. Each one had to be individually worked by a craftsman on the spot, to fit whatever sizes the rest of the gun's components had been made to; the variation would have been considerable by modern standards. So there would have been some excellent guns made/repaired by excellent craftsmen, and some lousy guns made/repaired by lousy craftsmen.

What we are working with instead is things like a response from Cochrane saying something (memory faulty) to the effect that it's just a matter of opinion that an objective outcome is preferable to a subjective one, which is obviously only meant as an exemption to allow poor-quality psychosocial research like PACE to stay relevant.
My design experience is only in software, albeit with a previous technician-level background in mechanical engineering. But I'm certain any design endeavour encounters indeterminism, no matter how hard you try to pin everything down up front. It's impossible to foresee everything.

But in engineering do you have paradoxes? As for instance in the question:
Where will you be after you have moved from there?
Trial design tends to throw up just this sort of paradox.
Missing data is missing data. Having documented reasons for it being missing is nice, but it is still missing.

"If all missing outcome data occurred for documented reasons that are unrelated to the outcome then the risk of bias due to missing outcome data will be low (for example, failure of a measuring device or interruptions to routine data collection)."
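For what it's worth, here is a toy sketch (invented numbers, not from the RoB 2 paper) of the distinction the quoted sentence leans on: dropout genuinely unrelated to the outcome leaves the estimate roughly where it should be, while dropout driven by the outcome itself does not. Whether a documented reason really is unrelated to the outcome is, of course, the part that can rarely be verified.

```python
# Toy sketch with invented numbers (not from the RoB 2 paper).
import random

random.seed(1)

def observed_mean(outcome_related_dropout, n=10_000):
    scores = [random.gauss(0.0, 1.0) for _ in range(n)]  # true mean is 0
    kept = []
    for s in scores:
        if outcome_related_dropout:
            missing = s < -0.5 and random.random() < 0.8  # worse scores go missing
        else:
            missing = random.random() < 0.2               # e.g. device failure
        if not missing:
            kept.append(s)
    return sum(kept) / len(kept)

print(round(observed_mean(False), 2))  # close to 0: roughly unbiased
print(round(observed_mean(True), 2))   # noticeably above 0: flatters the result
```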
https://twitter.com/CochraneUK/status/1168170398976004097
Irony?
Guess who else retweeted this.
Good summary of the problems with Cochrane. Would be nice if they were aware of being guilty on all counts.
Looks like the perfect example of denialism!
http://www.virology.ws/2019/09/04/trial-by-error-more-on-cochranes-new-risk-of-bias-tool/

As Virology Blog has reported, the lead author of the revised version of Cochrane’s Risk of Bias tool, published last week in BMJ, is a long-time Bristol University colleague of Professor Esther Crawley. In that capacity, he is a co-author of two high-profile studies that violated key principles of scientific investigation—the Lightning Process study, published by Archives of Disease in Childhood two years ago, and the 2011 school absence study published in BMJ Open.
"randomised trial, a study design that is scientifically strong, well understood, and often well implemented in practice"

Can these people really not hear themselves! I am no scientist, but I could come up with a randomised trial, and it would be total junk, because I would not understand all the other essential aspects of trial methodology.