Outcome Reporting bias in Exercise Oncology trials (OREO): a cross-sectional study, 2021, Singh, Twomey et al

Discussion in 'Research methodology news and research' started by rvallee, Mar 15, 2021.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,390
    Location:
    Canada
    Background
    Despite evidence of selective outcome reporting across multiple disciplines, this has not yet been assessed in trials studying the effects of exercise in people with cancer. Therefore, the purpose of our study was to explore prospectively registered randomised controlled trials (RCTs) in exercise oncology for evidence of selective outcome reporting.

    Methods
    Eligible trials were RCTs that 1) investigated the effects of at least partially supervised exercise interventions in people with cancer; 2) were preregistered (i.e. registered before the first patient was recruited) on a clinical trials registry; and 3) reported results in a peer-reviewed published manuscript. We searched the PubMed database from inception to September 2020 to identify eligible exercise oncology RCTs and their linked clinical trial registrations. Eligible trial registrations and linked published manuscripts were compared to identify the proportion of sufficiently preregistered outcomes reported correctly in the manuscripts, and cases of outcome omission, outcome switching, and silent introduction of novel (non-preregistered) outcomes.

    Results
    We identified 31 eligible RCTs and 46 that were ineligible due to retrospective registration. Of the 405 total prespecified outcomes across the 31 eligible trials, only 6.2% were preregistered with complete methodological detail. Only 16% (n=148/929) of outcomes reported in the published results manuscripts were linked to sufficiently preregistered outcomes without outcome switching. We found 85 total cases of outcome switching. A high proportion (41%) of preregistered outcomes were omitted from the published results manuscripts, and many published outcomes (n=394; 42.4%) were novel outcomes that had been silently introduced (median 10 per trial; range 0-50). We found no examples of preregistered efficacy outcomes that were measured, assessed, and analysed as planned.

    Conclusions
    We found evidence suggestive of widespread selective outcome reporting and non-reporting bias (omitted preregistered outcomes, outcome switching, and silently introduced novel outcomes). The existence of such reporting discrepancies has implications for the integrity and credibility of RCTs in exercise oncology.


    https://www.medrxiv.org/content/10.1101/2021.03.12.21253378v1
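
    A minimal sketch, using made-up outcome names, of the kind of registry-versus-manuscript comparison the Methods describe (classifying outcomes as reported as planned, omitted, or silently introduced). The set logic here is my own illustration, not the authors' actual extraction or coding scheme:

```python
# Toy comparison of preregistered vs published outcomes, in the spirit of the
# Methods above. Outcome names and the simple name-matching are illustrative
# assumptions; the authors' real audit was far more detailed.
preregistered = {"6-min walk distance", "VO2 peak", "fatigue (FACIT-F)", "grip strength"}
published     = {"6-min walk distance", "fatigue (FACIT-F)", "quality of life (EORTC QLQ-C30)"}

reported_as_planned = preregistered & published   # in both registry and paper
omitted             = preregistered - published   # preregistered, never reported
silently_introduced = published - preregistered   # reported, never preregistered

print("Reported as planned:", sorted(reported_as_planned))
print("Omitted:            ", sorted(omitted))
print("Silently introduced:", sorted(silently_introduced))
print(f"Omission rate: {len(omitted) / len(preregistered):.0%}")
```

    Outcome switching (e.g. a secondary outcome quietly promoted to primary, or a changed time point) can't be caught by name matching alone, which is part of why audits like this are so laborious.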
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,390
    Location:
    Canada
    Those problems are clearly fundamental to EBM. They probably have a higher impact on chronic illnesses that face discrimination, because there is nothing else to offer, which maximizes the harm. But the practices appear to be widespread to the point of being normal: putting several fingers on the scale is basically expected.

    It's hard to justify continuing to put resources into this system. It is so completely unfit for purpose that cheating is basically standard, and medicine is paralyzed over what to do: accept that the whole thing has been a bust (which means acknowledging decades of mismanaged failure) or just keep harming people because it's the only way to keep promoting easy-but-wrong solutions to complex problems.
     
    MEMarge, Cheshire, alktipping and 8 others like this.
  3. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,390
    Location:
    Canada
    Cheshire, alktipping, Hutan and 4 others like this.
  4. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    Nice mix of authors from US, UK, Sweden, Denmark, Australia, and Canada.

    Don't know if that helps it reach a wide audience.
     
    alktipping, Sean, Hutan and 4 others like this.
  5. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    688
    Location:
    Warton, Carnforth, Lancs, UK
    That's pretty shocking. The pressure to find a positive at all costs reigns supreme over objective science. Kinda makes you wonder why journals are publishing these, or at least why they aren't being clearer about the limitations of the studies and findings.
     
  6. Andy

    Andy Committee Member

    Messages:
    22,814
    Location:
    Hampshire, UK
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    I am not sure that they are fundamental to EBM. I think it is more a case that they are rife, as failures of EBM. Yes, the current machinery popular amongst EBM enthusiasts (e.g. GRADE) is rubbish, but the basic philosophy of EBM that we need good evidence on which to base medicine must surely be right?
     
    Hutan, alktipping, Sean and 7 others like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    This makes me think of the power of inverted funnel plots.

    The thing about inverted funnel plots is that they can prove that on average people are fiddling their results, when of course you can never prove that in an individual case the results are fiddled, unless you had video cameras in place.

    This study seems to be doing something similar, suggesting that on average you can expect trials of exercise therapy to be gerrymandered. In any individual case it may be hard to argue that a nice statistically significant result for a silently introduced outcome is not at least interesting, but it is important if you can show that the whole field is so rife with gerrymandering that we can reasonably assume that these nice results are nothing more than chance findings - at very best.

    It compares in a way to another sort of analysis - looking up all the other papers written by the authors of a study. You never hear about this in peer review and I am sure it does not feature in the GRADE system, but statistically it must be entirely valid. If author TC has, in addition to the paper under scrutiny, written a whole lot of really awful papers showing a complete lack of understanding of bias, then is that not significant?
     
  9. Andy

    Andy Committee Member

    Messages:
    22,814
    Location:
    Hampshire, UK
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
  11. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,953
    Location:
    Belgium
    Made me think of the following: almost all GET/CBT trials report positive findings, which is a bit weird. Such trials usually have less than 80% power to detect a moderate effect size, so even if GET/CBT were effective and produced such a moderate effect, we wouldn't expect to find so many positive results.

    That shows that the literature must suffer from publication or reporting bias.
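
    To put rough numbers on that reasoning, here's a quick excess-significance check in the spirit of the Ioannidis & Trikalinos test. The figures (20 trials, 19 positive, 60% power) are my own illustrative assumptions, not counts from the GET/CBT literature:

```python
# Excess-significance sketch: if every trial truly had 60% power, how surprising
# is it that 19 of 20 came out positive? All numbers are illustrative assumptions.
from scipy.stats import binom

n_trials   = 20    # published trials (assumed)
n_positive = 19    # trials reporting a significant benefit (assumed)
power      = 0.60  # assumed power of each trial to detect a true moderate effect

expected = n_trials * power
p_excess = binom.sf(n_positive - 1, n_trials, power)  # P(>= n_positive positive trials)

print(f"Expected positives under {power:.0%} power: {expected:.1f}; observed: {n_positive}")
print(f"P(this many or more positives): {p_excess:.4f}")
```

    A tiny probability here means the literature is more uniformly positive than the trials' own power allows, pointing to publication or reporting bias rather than a remarkably consistent treatment effect.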
     
    Michelle, MEMarge, Helene and 11 others like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    I have never gone into the methodology in detail, but the power of the funnel plot is that it can show that skewed results across a set of trials are not even due to reporting or publication bias but to manipulation of data. With reporting bias you get something called a right-hand half funnel. With manipulation you get a distortion of the funnel shape. Maybe it would be worth finding someone who is into these things to do a formal analysis.

    There was a very nice study of trials of injections of hyaluronic acid into joints that demonstrated that on average trials must have manipulated data, if I remember rightly. I made use of that when looking at inverse funnel plots derived for so-called telepathic effects. The really intriguing thing there was that the results were manipulated to consistently show a very slight effect - presumably because investigators thought that finding anything more dramatic would not be believed, or would be easily refuted. The key point was that the variance in results should have depended on the sample size, and yet there was a straight line with the same variance for all sample sizes.
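
    For anyone who wants to see why that last point bites: with honest data the standard error of a trial's estimate shrinks roughly as 1/√n, so small trials should fan out at the bottom of the funnel and large ones cluster near the top. A simulated sketch of the expected shape (entirely made-up trials, just to illustrate):

```python
# Simulated funnel plot under honest reporting: effect estimates scatter around
# the true effect with SE ~ sd*sqrt(2/n), so the spread shrinks as sample size
# grows. A literature whose spread does NOT shrink with n (similar deviations at
# every size) is the red flag described above. All trials here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect, sd = 0.2, 1.0
n_per_arm = rng.integers(20, 400, size=60)        # 60 hypothetical two-arm trials

se = sd * np.sqrt(2 / n_per_arm)                  # SE of a mean difference between arms
estimates = rng.normal(true_effect, se)           # honestly reported effect estimates

plt.scatter(estimates, se)
plt.axvline(true_effect, linestyle="--")
plt.gca().invert_yaxis()                          # convention: most precise trials at the top
plt.xlabel("Estimated effect")
plt.ylabel("Standard error")
plt.title("Expected funnel shape under honest reporting (simulated)")
plt.show()
```

    Selective publication chops results off one side of this funnel (the right-hand half funnel); results sitting at roughly the same small distance from zero at every sample size can't be produced by selective publication alone, which is what makes that pattern look like manipulation towards a target.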
     
  13. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,812
    Yea, reminds me of an "issue" re fish length/weight --- the data causing "concern" didn't show the seasonal effect --- if you're going to cheat then you actually need to have a good understanding of what you should find!
     
  14. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,812
    EDIT - just realised you mean biomedical research so had to redraft ("charlatans" removed etc!)! Obviously there's a question of the validity of the outcome measure. You really shouldn't use subjective indicators, i.e. questionnaires; you should use objective outcomes, i.e. activity monitors. You should blind your trial or do a dose-response curve analysis (plagiarised from Jonathan).
    Consistent +ve findings where questionnaires are used, and the intervention isn't blinded, might just be demonstrating the (remarkable) consistency of the Hawthorne effect [https://en.wikipedia.org/wiki/Hawthorne_effect]. And yes, only studies which show +ve outcomes are published!

    I've spent just long enough around labs to realise that @Jonathan Edwards is probably right - in some cases results are just made up!
     
    Last edited: Mar 15, 2021
    Joan Crawford, MEMarge and alktipping like this.
  15. Colin

    Colin Established Member (Voting Rights)

    Messages:
    92
    Location:
    Brisbane, Australia
    But the British tradition is that priors aren't introduced into court until it's time to sentence the miscreant...
     
  16. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    But if a witness for the prosecution has a long history of making misleading or incompetent statements I think a judge is allowed to say that can be taken into consideration.
     
    Joan Crawford, MEMarge, FMMM1 and 5 others like this.
  17. Trish

    Trish Moderator Staff Member

    Messages:
    54,804
    Location:
    UK
    @dave30th have you seen this? Looks like PACE and SMILE are just the tip of a very big iceberg of research misconduct.
     
  18. Hutan

    Hutan Moderator Staff Member

    Messages:
    28,874
    Location:
    Aotearoa New Zealand
    An iceberg is altogether too pure for this metaphor, I'm thinking 'fatberg' might be more appropriate.
     
    Last edited: Mar 16, 2021
  19. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,836
    Location:
    UK
     
  20. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,836
    Location:
    UK
    "One potential 772 solution to improve preregistration quality is to preregister using SPIRIT reporting guidelines 773 [www.spirit-statement.org; (93)] and platforms that guide and support detailed preregistration 774 such as the Open Science Framework (www.osf.io) and AsPredicted (www.aspredicted.org)."
    just noting these as I haven't heard of them before.
     

Share This Page