Nature: comment article
Replication games: how to make reproducibility research more systematic
Abel Brodeur, Anna Dreber, Fernando Hoces de la Guardia & Edward Miguel
In some areas of social science, around half of studies can’t be replicated. A new test-fast, fail-fast initiative aims to show what research is hot — and what’s not.
In October last year, one of us (A.B.) decided to run an ad hoc workshop at a research centre in Oslo, to try to replicate papers from economics journals. Instead of the handful of locals who were expected to attend, 70 people from across Europe signed up. The message was clear: researchers want to replicate studies.
Replication is sorely needed. In some areas of the social sciences, such as economics, philosophy and psychology, studies suggest that between 35% and 70% of published results cannot be replicated when tested with new data (refs 1–4). Often, researchers cannot even reproduce results when using the same data and code as the original paper, because key information is missing.
Yet most journals will not publish a replication unless it refutes an impactful paper. In economics, fewer than 1% of papers published in the top 50 journals between 2010 and 2020 were replications of some type (ref. 5). That suggests that many studies with errors are going undetected.
Open: https://www.nature.com/articles/d41586-023-02997-5