The rules of the game called psychological science

M. Bakker, A. van Dijk, J.M. Wicherts

Research output: Contribution to journal › Article › Scientific › peer-review



If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results, but that the typical studies are insufficiently powerful for such a track record. We explain this paradox by showing that the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample. Publication bias and this most efficient strategy lead to inflated effects and high rates of false positives, especially when researchers also resort to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of biases and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and changes in journal policies.

Keywords: replication, sample size, power, publication bias, false positives
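The abstract's central claim, that splitting a fixed participant budget into several small studies can yield a higher chance of at least one p < .05 than running one large study, can be checked with a short Monte Carlo simulation. The sketch below is illustrative only, not the authors' own simulation code; it assumes two-group t-tests, a small true effect of d = 0.2, and a budget of 200 participants per arm split either into one study (n = 100 per group) or five studies (n = 20 per group).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_value(n_per_group, d):
    """Simulate one two-group study with true standardized effect d."""
    a = rng.normal(d, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    return stats.ttest_ind(a, b).pvalue

def prob_any_significant(n_per_group, k, d, reps=2000):
    """Fraction of runs in which at least one of k studies reaches p < .05."""
    hits = sum(
        any(p_value(n_per_group, d) < 0.05 for _ in range(k))
        for _ in range(reps)
    )
    return hits / reps

# Same total budget, true effect d = 0.2:
one_large = prob_any_significant(100, 1, 0.2)  # one study, n = 100/group
five_small = prob_any_significant(20, 5, 0.2)  # five studies, n = 20/group
print(f"one large study : {one_large:.2f}")
print(f"five small ones : {five_small:.2f}")

# Under the null (d = 0) the multiple-small-studies strategy also
# inflates the chance of a false positive well beyond the nominal .05:
null_rate = prob_any_significant(20, 5, 0.0)
print(f"false-positive rate, five small: {null_rate:.2f}")
```

With these assumed numbers, the five-small-studies strategy beats the single large study at producing at least one significant result, while under a true null its false-positive rate approaches 1 − 0.95⁵ ≈ .23, illustrating why publication bias plus this strategy inflates both effect sizes and false-positive rates in the literature.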
Original language: English
Pages (from-to): 543-554
Journal: Perspectives on Psychological Science
Issue number: 6
Publication status: Published - 2012


