cassava7
Senior Member (Voting Rights)
Compelling evidence from meta-epidemiological studies demonstrates overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors
[A number of authors are from Gordon Guyatt’s group at McMaster University in Canada]
Objective
To investigate the impact of potential risk of bias elements on effect estimates in randomized trials.
Study Design and Setting
We conducted a systematic survey of meta-epidemiological studies examining the influence of potential risk of bias elements on effect estimates in randomized trials. We included only meta-epidemiological studies that either preserved the clustering of trials within meta-analyses (compared effect estimates between trials with and without the potential risk of bias element within each meta-analysis, then combined across meta-analyses; between-trial comparisons), or preserved the clustering of sub-studies within trials (compared effect estimates between sub-studies with and without the element, then combined across trials; within-trial comparisons). Separately for studies based on between- and within-trial comparisons, we extracted ratios of odds ratios (RORs) from each study and combined them using a random-effects model. We made overall inferences and assessed certainty of evidence based on GRADE and ICEMAN.
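The random-effects pooling of ratios of odds ratios described above can be sketched as follows. This is a minimal illustration of a standard DerSimonian–Laird random-effects model applied on the log scale, with made-up per-study log-RORs and standard errors; it is not the paper's actual data or analysis code.

```python
import math

# Hypothetical per-study RORs and standard errors of the log-RORs
# (illustrative only, not taken from the paper).
log_rors = [math.log(r) for r in (0.90, 0.95, 0.88, 1.02)]
ses = [0.04, 0.06, 0.05, 0.07]

# Fixed-effect (inverse-variance) pooled estimate, needed for Q
w = [1 / se**2 for se in ses]
theta_fe = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2
q = sum(wi * (y - theta_fe) ** 2 for wi, y in zip(w, log_rors))
df = len(log_rors) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled log-ROR and 95% CI, back-transformed
w_re = [1 / (se**2 + tau2) for se in ses]
theta_re = sum(wi * y for wi, y in zip(w_re, log_rors)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
pooled = math.exp(theta_re)
ci = (math.exp(theta_re - 1.96 * se_re), math.exp(theta_re + 1.96 * se_re))
print(f"pooled ROR {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

Pooling is done on the log scale because log odds ratios are approximately normally distributed; the pooled estimate and its confidence limits are exponentiated back to the ROR scale at the end.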
Results
Forty-one meta-epidemiological studies (34 of between-, 7 of within-trial comparisons) proved eligible. Inadequate random sequence generation (ROR 0.94, 95% CI 0.90 to 0.97) and allocation concealment (ROR 0.92, 95% CI 0.88 to 0.97) probably lead to effect overestimation (moderate certainty). Lack of patient blinding probably overestimates effects for patient-reported outcomes (ROR 0.36, 95% CI 0.28 to 0.48; moderate certainty). Lack of blinding of outcome assessors results in effect overestimation for subjective outcomes (ROR 0.69, 95% CI 0.51 to 0.93; high certainty). The impact of blinding of patients or outcome assessors on other outcomes, and the impact of blinding of healthcare providers, data collectors, or data analysts, remain uncertain. Trials stopped early for benefit probably overestimate effects (moderate certainty). Trials with imbalanced co-interventions may overestimate effects, while trials with missing outcome data may underestimate effects (low certainty). The influence of baseline imbalance, compliance, selective reporting, and intention-to-treat analysis remains uncertain.
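To make the RORs above concrete: an ROR below 1 means that trials with the risk-of-bias element yield smaller (i.e. more favorable, when an OR below 1 indicates benefit) odds ratios than trials without it. A short sketch with a hypothetical "true" effect shows the arithmetic; the 0.92 comes from the allocation-concealment result quoted above, while the unbiased OR of 0.70 is invented for illustration.

```python
# ROR = OR(biased trials) / OR(unbiased trials), pooled across comparisons.
ror = 0.92               # pooled ROR for inadequate allocation concealment
true_or = 0.70           # hypothetical unbiased treatment effect (invented)
biased_or = true_or * ror  # what flawed trials would report on average
exaggeration_pct = (1 - ror) * 100  # effect appears ~8% more extreme
print(round(biased_or, 3), round(exaggeration_pct, 1))
```

The same logic explains why the ROR of 0.36 for lack of patient blinding on patient-reported outcomes is described as a substantial overestimate: reported odds ratios would be roughly a third of their unbiased counterparts.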
Conclusion
Failure to ensure adequate random sequence generation or allocation concealment probably results in modest overestimates of effects. Lack of patient blinding probably leads to substantial overestimates of effects for patient-reported outcomes. Lack of blinding of outcome assessors results in substantial effect overestimation for subjective outcomes. For other elements, though evidence of consistent systematic overestimation of effects remains limited, failure to implement these safeguards may still introduce important bias.
Plain Language Summary
Failure to optimize randomization and to blind patients and outcome assessors in randomized trials probably leads to overestimation of effects.
Link (Journal of Clinical Epidemiology): https://www.sciencedirect.com/science/article/abs/pii/S0895435623002950