Abstract
Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies.
Keywords: power, survey, methodology, sample size, effect size, open data, open materials
| Original language | English |
|---|---|
| Pages (from-to) | 1069-1077 |
| Journal | Psychological Science |
| Volume | 27 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2016 |
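The gap the abstract reports (researchers underestimating the sample size needed for .80 power on a small effect) can be made concrete with a short calculation. The sketch below uses the normal approximation for a two-sample t-test; it is not the survey's method, and the effect size d = 0.2 and two-tailed alpha = .05 are conventional "small effect" benchmarks assumed for illustration, not values taken from the paper.

```python
# Sketch of two-sample t-test power arithmetic via the normal approximation.
# d = 0.2 ("small" effect) and alpha = .05 are assumed conventional values.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group needed to reach the target power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-tailed test
    z_beta = z.inv_cdf(power)           # quantile for the target power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

def achieved_power(n: int, d: float, alpha: float = 0.05) -> float:
    """Approximate power with n subjects per group and true effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d * sqrt(n / 2) - z_alpha)

print(n_per_group(0.2))          # ~393 per group for a small effect
print(achieved_power(50, 0.2))   # ~0.17 with 50 subjects per cell
```

Under these assumptions, roughly 393 subjects per group are needed for .80 power on a small effect, while a cell size of 50 yields power of only about .17 — the kind of shortfall the survey respondents failed to anticipate.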