(…) overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors, Wang, Guyatt+, 2023

cassava7

Senior Member (Voting Rights)
Compelling evidence from meta-epidemiological studies demonstrates overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors

[A number of authors are from Gordon Guyatt’s group at McMaster University in Canada]

Objective

To investigate the impact of potential risk of bias elements on effect estimates in randomized trials.

Study Design and Setting

We conducted a systematic survey of meta-epidemiological studies examining the influence of potential risk of bias elements on effect estimates in randomized trials. We included only meta-epidemiological studies that either preserved the clustering of trials within meta-analyses (compared effect estimates between trials with and without the potential risk of bias element within each meta-analysis, then combined across meta-analyses; between-trial comparisons), or preserved the clustering of sub-studies within trials (compared effect estimates between sub-studies with and without the element, then combined across trials; within-trial comparisons). Separately for studies based on between- and within-trial comparisons, we extracted ratios of odds ratios (RORs) from each study and combined them using a random-effects model. We made overall inferences and assessed certainty of evidence based on GRADE and ICEMAN.
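The pooling step described above (extracting a ratio of odds ratios from each meta-epidemiological study and combining them with a random-effects model) can be sketched as below. This is a generic illustration of standard DerSimonian-Laird random-effects pooling on the log scale, with hypothetical input data; it is not the paper's actual analysis code, and the function name is invented for illustration.

```python
import math

def pool_random_effects(log_rors, ses):
    """DerSimonian-Laird random-effects pooling of per-study log-RORs.

    log_rors: log ratio of odds ratios from each meta-epidemiological study
    ses: their standard errors
    Returns the pooled ROR and an approximate 95% CI, both back-transformed
    to the ROR scale. Illustrative sketch only.
    """
    w = [1 / se ** 2 for se in ses]                     # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))  # Cochran's Q
    df = len(log_rors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rors)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Hypothetical per-study RORs (0.90, 0.95, 0.85) with hypothetical SEs:
ror, ci = pool_random_effects(
    [math.log(0.9), math.log(0.95), math.log(0.85)],
    [0.05, 0.08, 0.06],
)
```

The log transformation is used because ratio measures are approximately normally distributed on the log scale; the pooled estimate and CI are exponentiated back to the ROR scale at the end.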

Results

Forty-one meta-epidemiological studies (34 of between-, 7 of within-trial comparisons) proved eligible. Inadequate random sequence generation (ROR 0.94, 95% CI 0.90 to 0.97) and allocation concealment (ROR 0.92, 95% CI 0.88 to 0.97) probably lead to effect overestimation (moderate certainty). Lack of patient blinding probably leads to overestimation of effects for patient-reported outcomes (ROR 0.36, 95% CI 0.28 to 0.48; moderate certainty). Lack of blinding of outcome assessors results in effect overestimation for subjective outcomes (ROR 0.69, 95% CI 0.51 to 0.93; high certainty). The impact of blinding patients or outcome assessors on other outcomes, and the impact of blinding healthcare providers, data collectors, or data analysts, remain uncertain. Trials stopped early for benefit probably overestimate effects (moderate certainty). Trials with imbalanced co-interventions may overestimate effects, while trials with missing outcome data may underestimate effects (low certainty). The influence of baseline imbalance, compliance, selective reporting, and intention-to-treat analysis remains uncertain.
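To make the ROR figures above concrete: assuming the usual convention that ROR = OR in trials with the bias element divided by OR in trials without it (with OR < 1 indicating benefit), an ROR below 1 means the flawed trials produce more extreme, apparently more beneficial estimates. A minimal worked example, with a hypothetical "true" odds ratio:

```python
# Illustrative arithmetic only, assuming ROR = OR_biased / OR_unbiased.
def biased_or(unbiased_or: float, ror: float) -> float:
    """Expected odds ratio in trials carrying the risk-of-bias element."""
    return unbiased_or * ror

# With the paper's ROR of 0.69 for unblinded outcome assessors on subjective
# outcomes, a hypothetical true OR of 0.50 would on average be estimated as:
print(biased_or(0.50, 0.69))  # ≈ 0.345, i.e. a substantially exaggerated benefit
```

Under this convention, the ROR of 0.36 for lack of patient blinding on patient-reported outcomes implies an even larger exaggeration, which is consistent with the abstract calling it "substantial".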

Conclusion

Failure to ensure adequate random sequence generation or allocation concealment probably results in modest overestimation of effects. Lack of patient blinding probably leads to substantial overestimation of effects for patient-reported outcomes. Lack of blinding of outcome assessors results in substantial effect overestimation for subjective outcomes. For the other elements, though evidence of a consistent systematic overestimation of effects remains limited, failure to implement these safeguards may still introduce important bias.

Plain Language Summary

Failing to optimize randomization and to blind patients and outcome assessors in randomized trials probably leads to overestimation of effects.

Link (Journal of Clinical Epidemiology): https://www.sciencedirect.com/science/article/abs/pii/S0895435623002950
 
Thanks for tagging @cassava7

I don't have access to the text but this looks like a summary of both the meta-epidemiological studies and the within-trial comparisons that have been done thus far.

Both have serious weaknesses. The meta-epidemiological studies compare trials that might have a different design and selection criteria, while the within-trial comparisons (which compare a blinded and non-blinded treatment arm) are almost nonexistent (most are from acupuncture trials).

I'm surprised though that the effect for blinding is so large (and that this comes from Guyatt). I wonder if they included the MetaBlind study.
 
Someone kindly shared the paper with me and it seems that the MetaBlind study was included. I'm not fully sure how the treatment effect for blinding patients and outcome assessors became so large as many previous meta-epidemiological studies found uncertain results, but I expect it was because of the inclusion of within-trial comparisons (summarised in this paper by Hrobjartsson et al.).

Thanks for posting this. Relevant for letters to editors for sure :)
At the end the paper states (my bolding):

"Our results have clear and important implications for the conduct and interpretation of clinical trials. All trials can and should appropriately generate random sequence and ensure concealment through central randomization wherever possible and numbered, sealed envelopes when impossible. When outcomes are subjective, investigators should ensure wherever possible patients and outcome assessors are blinded, and acknowledge in their study limitations that overestimation of effects is likely."​
 
Thanks for tagging @cassava7

I don't have access to the text but this looks like a summary of both the meta-epidemiological studies and the within-trial comparisons that have been done thus far.

Both have serious weaknesses. The meta-epidemiological studies compare trials that might have a different design and selection criteria, while the within-trial comparisons (which compare a blinded and non-blinded treatment arm) are almost nonexistent (most are from acupuncture trials).

I'm surprised though that the effect for blinding is so large (and that this comes from Guyatt). I wonder if they included the MetaBlind study.
I agree with this and I remember your published criticisms of the MetaBLIND study. However, as we have seen, the authors have not addressed your concerns and the study is still being cited. Even though this study may be flawed for the same reasons, and even though this may be quite unscientific, it can be used as "ammunition" for advocacy purposes, especially since it comes from Guyatt himself. For instance, as @Hutan has pointed out, it can be presented to Cochrane in the upcoming consultation to support criticisms of their review of exercise for CFS.
 
When outcomes are subjective, investigators should ensure wherever possible patients and outcome assessors are blinded,..."

Or use objective outcome measures alongside the unblinded subjective measures.

Either (or both) is fine. Neither is not.
 