Abstract
In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
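The power figures above follow from a standard Fisher-z power calculation for a correlation test. As a rough sketch only (the paper's medians aggregate over studies with varying sample sizes, so fixing n at the median of 60 with a two-sided α = .05 — both assumptions here — only approximately tracks the reported numbers):

```python
from math import atanh, sqrt
from statistics import NormalDist  # Python 3.8+ stdlib

def correlation_power(r: float, n: int, alpha: float = 0.05) -> float:
    """Approximate two-sided power to detect a true Pearson correlation r
    with sample size n, using the Fisher z transformation."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)       # critical z, ~1.96 for alpha = .05
    ncp = atanh(r) * sqrt(n - 3)          # noncentrality of the Fisher z statistic
    return z.cdf(-crit - ncp) + z.cdf(ncp - crit)

# Power at the abstract's median sample size (n = 60) for conventional
# small / medium / large correlations:
for r in (0.1, 0.3, 0.5):
    print(f"r = {r}: power = {correlation_power(r, 60):.1%}")
```

At n = 60, this gives roughly 12% power for a small effect (r = .1), close to the 11.9% median the abstract reports, and illustrates why small effects routinely go undetected at typical intelligence-research sample sizes.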
| Original language | English |
|---|---|
| Article number | 36 |
| Number of pages | 24 |
| Journal | Journal of Intelligence |
| Volume | 8 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2020 |
Keywords
- Bias
- Effect size
- Intelligence
- Meta-meta-analysis
- Meta-science
- Power
Datasets
- Effect sizes, power, and biases in intelligence research: A meta-meta-analysis. Nuijten, M. (Contributor), van Assen, M. (Contributor), Augusteijn, H. (Contributor), Crompvoets, E. (Contributor) & Wicherts, J. (Contributor). OSF, 2020. Dataset.