RoB 2: a revised tool for assessing risk of bias in randomised trials (2019) Sterne et al.

Discussion in 'Research methodology news and research' started by ME/CFS Skeptic, Aug 29, 2019.

  1. Trish

    Trish Moderator Staff Member

    Messages:
    52,340
    Location:
    UK
    One thing that astonishes me is the number of people involved, the international meetings involved, the time involved (first draft 2016), the cost of all those people's time. And they produce this crap. We could have done better here in a week.
     
    Hutan, Cheshire, Daisybell and 27 others like this.
  2. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Absolutely.
     
  3. Sean

    Sean Moderator Staff Member

    Messages:
    7,213
    Location:
    Australia
    And they wonder why we question their integrity.

    This is Clinical Trials 101 they are repeatedly failing. There are no excuses.
     
    ukxmrv, Hutan, ladycatlover and 11 others like this.
  4. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,829
    Location:
    Australia
    Because patients don't know anything about methodological biases. We're just here to be experimented on!
     
  5. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,255
    It appears that having published research is a conflict of interest in some cases.
     
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    Is there such a thing as a large (or even a small) trial without any flaws? That seems either incredibly naive or very devious: it's completely subjective, and therefore opens the door to open-ended arguments about how one obviously flawed trial is more significant than another just because its outcome is preferred. We're already at the point where this argument is actually made and accepted.

    It's basically the argument around PACE: it's flawless. None of the criticism is even acknowledged, everything is dismissed; it supposedly has no faults and deserves as much credit as quantum field theory. And yet it might just be the most flawed trial of its size ever, maximally biased and cherry-picked, with ample cheating and fraudulent claims. All because of outlandish claims, not even backed by the data, that align with expectations and prejudices. It's "flawless" to some because they like the lies the authors spin from it, that's it.

    This is injecting maximum subjectivity into research, since all it takes is for enough people to declare a trial to be flawless to dismiss any and all points of criticism. It gives the gatekeepers unlimited and arbitrary influence to promote viewpoints without challenge, since it's all down to opinion. And this is already established, as we know with the Cochrane exercise review, where all points of criticism from the brutal peer review were dismissed and a "low bias, high impact" label was slapped on it.

    If this passes and becomes normalized, it may just become the largest regression in the history of science. Something's terribly broken at the heart of medicine that this seems popular enough to go ahead.
     
    Last edited: Sep 1, 2019
  7. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    I actually think it is valid to consider a hypothetically flawless ideal whatever-it-is-under-consideration, against which to then assess how reality deviates from that ideal. In reality everything will inevitably deviate from it to some degree, in some aspects, but to what degree, and in which aspects, are the key questions.

    It's why systems of fits and tolerances came into being; they are at the heart of just about every manufactured product today. You might ideally want a 10.00 mm shaft fitting into a 10.05 mm hole, but manufacturing tolerances make it impossible to reliably and consistently achieve precisely that. So it might be decided that the fit is within acceptable limits if the minimum clearance is 0.03 mm and the maximum clearance 0.07 mm. The shaft is then specified to be manufactured to between 9.99 mm and 10.01 mm, and the hole to between 10.04 mm and 10.06 mm.
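
    To make the arithmetic concrete, here is a minimal sketch in Python (a rough illustration using the hypothetical 10 mm shaft and hole figures above; values are in micrometres so the numbers stay exact):

        # Fits-and-tolerances check (illustrative values from the example above,
        # in micrometres to avoid floating-point rounding).
        SHAFT_MIN, SHAFT_MAX = 9_990, 10_010   # shaft tolerance band (um)
        HOLE_MIN, HOLE_MAX = 10_040, 10_060    # hole tolerance band (um)
        CLEAR_MIN, CLEAR_MAX = 30, 70          # acceptable clearance (um)

        # Worst cases: tightest fit is the largest shaft in the smallest hole;
        # loosest fit is the smallest shaft in the largest hole.
        tightest = HOLE_MIN - SHAFT_MAX        # 30 um
        loosest = HOLE_MAX - SHAFT_MIN         # 70 um

        assert CLEAR_MIN <= tightest and loosest <= CLEAR_MAX
        print(f"clearance range: {tightest}-{loosest} um, within spec")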

    It's normal and commonplace in engineering to have a nominal, flawless, and hypothetical datum, from which the majority of instances in reality will inevitably deviate. But you have to have the nominal datum in the first place, even though it may be hypothetical.

    So I don't have a problem with the notion of a flawless hypothetical ideal. In my trivial example above, if the nominal 10.00 mm and 10.05 mm, with 0.05 mm clearance, were not specified up front, along with the tolerances, the implementation could be any old rubbish with no way of knowing what it was supposed to be. You'd buy a new car that sounded like a can of old nails from the outset, and lasted five minutes if you were lucky.

    In the case of this paper they claim their nominal hypothetical ideal is a perfectly run trial: unachievable in practice, but the yardstick against which to check real-world deviations, and whether those deviations are within acceptable limits or not. But how can you assess whether something is within acceptable limits if you do not identify what the datum is?

    So take, for instance, the importance of objective outcomes in minimising bias: if you do not specify your ideal trial conditions up front, and include in them the relevance of objective outcomes, then it's a shambles. They dare not do that, because they need there to be no precise definition: a) if it is their own definition, it would be ripped to shreds by real scientists, and b) if it is a proper definition, then the tool they have developed would be ripped to shreds by real scientists.
     
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,518
    Location:
    London, UK
    I can see both sides of the argument here. But I tend to come down on the side that, most of the time, there is no such thing as a perfect trial.

    Let us say we have an unblindable treatment. And our key outcome is subjective. How can there ever be a perfect trial of how well the treatment improves the outcome measure? Every time you think of a clever way to optimise things you will find you sacrifice something.

    Having spent days in trial planning meetings I am pretty sure that in most situations there is no perfect trial. Everything is compromise. Which is why I tackle the problem from the other end. Just ask the question 'do we have empirical evidence indicating that this method is unreliable'. If the answer is yes then nothing more needs to be said.
     
  9. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    I do appreciate this. But in engineering there is no such thing as a perfect component either, even though there is a theoretical ideal against which its deviations will be measured.

    If a particular trial has no choice but to deviate a considerable way from the ideal, that simply means that the tolerances for it may have to be significantly wider in certain aspects, and possibly asymmetrical.

    But I do appreciate I'm probably digressing a long way from trials' methodology.
     
    alktipping likes this.
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,518
    Location:
    London, UK
    But in engineering do you have paradoxes? As for instance in the question:

    Where will you be after you have moved from there?

    Trial design tends to throw up just this sort of paradox.
     
    alktipping, Barry and Simbindi like this.
  11. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    It's definitely valid to aspire to an ideal, the best that we can achieve, but in practice this is unquantifiable, unlike a precise number given in a specific system of measurement that is universal and standardised. In the case of PACE, a perfect, flawless trial is one that makes claims promoted for decades seem credible; being objectively accurate is actually undesired. In any disease that is not discriminated against, it would have been laughed out of any room. A "perfect trial" is a very flexible notion depending on bias and circumstances. A clinical trial for peptic ulcers in 2019 would have very different notions of flawlessness from one in the 1960s, all other things being equal.

    An ideal of perfection only works if you can quantify everything using a measurement system that everyone agrees with. 10 cm is 10 cm at ground level on Earth or in Jupiter's core; even accounting for the curvature of space there, which changes what 10 cm means, it will still be locally measured as 10 cm no matter who measures it or how many measurements are taken.

    What we are working with instead is things like a response from Cochrane saying something (from faulty memory) to the effect that it is just a matter of opinion that an objective outcome is preferable to a subjective one, which is obviously only meant as an exemption to allow poor-quality psychosocial research like PACE to stay relevant. The same person who replied that would never agree it is a valid statement in diseases they consider important.
     
  12. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Yes, I do appreciate this. To me it is akin to engineering before any system of fits and clearances existed. In the Napoleonic wars, if a gun component failed there was no notion of ordering a replacement part and just fitting it, because even the same component varied greatly from one gun to the next. Each one had to be individually worked by a craftsman on the spot to fit whatever sizes the rest of the gun's components had been made to - the variation would have been considerable by modern standards. So there would have been some excellent guns made or repaired by excellent craftsmen, and some lousy guns made or repaired by lousy craftsmen.

    At this time we have some excellent scientists running excellent trials (because they know what needs doing and how to do it), and some lousy scientists running lousy trials (because they don't have a f' clue). The considerable discrepancy is partly because what constitutes good trial methodology, although well understood by good scientists, is insufficiently pinned down to ensure the poor scientists either a) buck up and pick up their game, or b) get out of the game because they are never going to master it.

    It just feels to me that what the ideals are should be written in tablets of stone, and alongside that, what the various tolerances are for different applications. If everything can be blinded and objective, then the tolerances should be tight: nothing will be ideal, but a fair approximation to it. If trial conditions are not so good, then the tolerances are adapted to suit. If everything is as bad as it gets - fully unblinded, fully subjective, etc. - then maybe there are no acceptable tolerances, and it would fall way outside even the loosest acceptable limits ... and thereby be rejected.
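
    As a rough sketch (with hypothetical rules of my own, purely to illustrate the idea rather than any established instrument), it might look like:

        # Map basic design features to how much deviation from the ideal is
        # tolerable; the worst combination falls outside any acceptable limit.
        def acceptable_tolerance(blinded: bool, objective_outcome: bool) -> str:
            if blinded and objective_outcome:
                return "tight"     # close to the ideal; only small deviations allowed
            if blinded or objective_outcome:
                return "widened"   # one safeguard missing; looser limits apply
            return "rejected"      # unblinded AND subjective: no acceptable tolerance

        print(acceptable_tolerance(blinded=False, objective_outcome=False))  # rejected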
     
    alktipping, Annamaria and Keela Too like this.
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    My design experience is only in software, albeit with earlier technician-level experience in mechanical engineering. But I'm certain any design endeavour encounters indeterminism, no matter how hard you try to pin everything down up front. It's impossible to foresee everything.

    I'm not saying every trial should be stipulated to run in exactly the same way, because I'm sure that would do much more harm than good. But there must be fundamentals that could and should be laid down: what to strive for, and especially how to assess what are and are not acceptable limits when reality is far from the ideal.
     
    alktipping and Annamaria like this.
  14. Sean

    Sean Moderator Staff Member

    Messages:
    7,213
    Location:
    Australia
    Missing data is missing data. Having documented reasons for it being missing is nice, but it is still missing.
     
  15. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,861
    Location:
    betwixt and between
    Last edited: Sep 2, 2019
    Hutan, Joh, Amw66 and 10 others like this.
  16. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,469
    Location:
    Canada
    Good summary of the problems with Cochrane. Would be nice if they were aware of being guilty on all counts.

    Oh well, it's not like lives are at stake or anything like that.
     
    alktipping, MSEsperanza, Sean and 5 others like this.
  17. Anna H

    Anna H Senior Member (Voting Rights)

    Messages:
    241
    Location:
    Sweden
     
    Last edited: Sep 2, 2019
    alktipping, MSEsperanza, Sean and 3 others like this.
  18. Andy

    Andy Committee Member

    Messages:
    21,963
    Location:
    Hampshire, UK
    Trial By Error: More on Cochrane’s New Risk of Bias Tool
    http://www.virology.ws/2019/09/04/trial-by-error-more-on-cochranes-new-risk-of-bias-tool/
     
    Annamaria, rvallee, Skycloud and 8 others like this.
  19. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Can these people really not hear themselves? I am no scientist, but I could come up with a randomised trial, and it would be total junk, because I would not understand all the other essential aspects of trial methodology.

    "Often well implemented" is basically code for "often badly implemented", thereby completely undermining any point of making their statement ... or writing their paper ... or developing their tool. It's like taking a modern car and 'enhancing' it to the level of something from the 1920s.
     
    Last edited: Sep 5, 2019
  20. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    I really don't understand how these people can live with themselves. They talk the most blatant drivel, and think themselves to be scientists. The world seems to be overrun with self-delusional, high IQ morons at the moment.
     
