Guidance to best tools and practices for systematic reviews, 2023

Sly Saint

Abstract
Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these resources is encouraged, but we caution against their superficial application and emphasize that their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

https://content.iospress.com/articles/journal-of-pediatric-rehabilitation-medicine/prm230019
 
Article in the Journal of Pediatric Rehabilitation Medicine is a first step towards combating a flawed system
The goal of a Guidance article in the Journal of Pediatric Rehabilitation Medicine is to improve how systematic reviews impacting medical practice are conducted, reported, and evaluated. Systematic review authors, and editors who publish these reviews, are encouraged to follow established guidelines and safeguards to ensure the trustworthiness of medical evidence syntheses.

Ideally, a systematic review should summarize empirical research data about different clinical treatment options and their possible side effects. Systematic reviews form the basis for clinical practice guidelines that drive specific recommendations for patient care. However, data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative.

Anyone can publish a systematic review on any topic, whether qualified to do so or not. In addition, journals do not follow uniform standards for publication of systematic reviews. Funding bodies might also seek or commission a systematic review without having the purest of intentions. They may exert influence on review authors, and as a result, the data the authors select to summarize may support the funders' desired conclusions with little regard for how this could impact patient care.

Kat Kolaski, MD, Wake Forest University School of Medicine; Lynne Romeiser Logan, PT, PhD, SUNY Upstate Medical University; and world-renowned scientist John Ioannidis, MD, PhD, Stanford University School of Medicine, shared a concern about the high number of untrustworthy systematic reviews published in the medical literature and decided to do something about it.
https://www.iospress.com/news/impor...d-to-make-systematic-reviews-more-trustworthy
 