Abstract
Employing two vignette studies, we examined how psychology researchers interpret the results of a set of four experiments that all test a given theory. In both studies, we found that participants’ belief in the theory increased with the number of statistically significant results, and that the result of a direct replication had a stronger effect on belief in the theory than the result of a conceptual replication. In Study 2, we additionally found that participants’ belief in the theory was lower when they assumed the presence of p-hacking, but that belief in the theory did not differ between preregistered and non-preregistered replication studies. In analyses of individual participant data from both studies, we examined the heuristics academics use to interpret the results of four experiments. Only a small proportion (Study 1: 1.6%; Study 2: 2.2%) of participants used the normative method of Bayesian inference, whereas many of the participants’ responses were in line with generally dismissed and problematic vote-counting approaches. Our studies demonstrate that many psychology researchers overestimate the evidence in favor of a theory if one or more results from a set of replication studies are statistically significant, highlighting the need for better statistical education.
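The abstract contrasts the normative method (Bayesian inference over the full pattern of results) with vote-counting heuristics. A minimal sketch of that contrast, assuming illustrative values for statistical power (0.8), the significance level (0.05), and a 50% prior in the theory — none of which are taken from the paper itself:

```python
def posterior_prob(k_sig, n=4, power=0.8, alpha=0.05, prior=0.5):
    """Posterior probability that the theory is true, given k_sig of n
    experiments were statistically significant.

    Assumes: if the theory is true, each study is significant with
    probability `power`; if false, with probability `alpha` (a false
    positive). All values here are illustrative assumptions.
    """
    like_true = power**k_sig * (1 - power)**(n - k_sig)
    like_false = alpha**k_sig * (1 - alpha)**(n - k_sig)
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

for k in range(5):
    print(f"{k} of 4 significant -> posterior = {posterior_prob(k):.3f}")
```

Under these assumptions, a single significant result out of four leaves the posterior well below 0.5, whereas a naive vote-counting reader might already count it as support — a toy illustration of the overestimation the studies document.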
| Original language | English |
|---|---|
| Pages (from-to) | 1609-1620 |
| Journal | Psychonomic Bulletin & Review |
| Volume | 30 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- Bayesian inference
- Heuristics
- Hypothesis
- Multi-study paper
- Prevalence
- Publication bias
- Replication
- Science
- Statistical misinterpretation
- Vote counting