Abstract
This study investigates researcher variability in computational reproduction, an activity where it is least expected. Eighty-five independent teams attempted a numerical replication of the results of an original study of policy preferences and immigration. Reproduction teams were randomly assigned to a 'transparent group', which received the original study and code, or an 'opaque group', which received only a description of methods and results and no code. The transparent group mostly verified the original results (95.7% matched the original sign and p-value cutoff), while the opaque group was less successful (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating out mistakes, we find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more. Institutional checks and reducing the subjective difficulty researchers face when 'doing reproduction' would help, implying a need for better training. We also urge increased awareness of complexity in the research process and in 'push button' replications.
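For intuition, the following is a minimal, hypothetical sketch (not the authors' code) of the two success criteria the abstract refers to: the looser criterion of agreement in sign and in the p-value cutoff, and the stricter criterion of exact agreement to the second decimal place. All names and numbers are illustrative assumptions.

```python
# Hypothetical illustration of the two reproduction-success criteria
# mentioned in the abstract; the estimates and p-values below are made up.

def same_sign_and_cutoff(orig_est, orig_p, repro_est, repro_p, alpha=0.05):
    """Loose criterion: same direction of effect and same side of the p-value cutoff."""
    same_sign = (orig_est > 0) == (repro_est > 0)
    same_cutoff = (orig_p < alpha) == (repro_p < alpha)
    return same_sign and same_cutoff

def exact_to_second_decimal(orig_est, repro_est):
    """Strict criterion: point estimates identical when rounded to two decimals."""
    return round(orig_est, 2) == round(repro_est, 2)

# Illustrative values only
original = (0.314, 0.021)    # (estimate, p-value) reported in the original study
reproduced = (0.286, 0.030)  # a team's reproduced estimate and p-value

print(same_sign_and_cutoff(*original, *reproduced))          # True: counts as a loose success
print(exact_to_second_decimal(original[0], reproduced[0]))   # False: not exact to two decimals
```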
| Original language | English |
|---|---|
| Article number | 241038 |
| Number of pages | 23 |
| Journal | Royal Society Open Science |
| Volume | 12 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Mar 2025 |
Keywords
- Reliability
- Replications
- Computational reproduction
- Social and behavioural sciences
Datasets
- Supplementary material from "The Reliability of Replications: A Study in Computational Reproductions"
  Raes, L. (Contributor), Figshare, 3 Feb 2025
  DOI: 10.6084/m9.figshare.c.7655134.v1, https://doi.org/10.6084/m9.figshare.c.7655134.v1
  Code repository: https://github.com/nbreznau/how_many_replicators