Abstract
Self-report scales are widely used in psychology to compare means on latent constructs across groups, experimental conditions, or time points. For these comparisons to be meaningful and unbiased, however, the scales must demonstrate measurement invariance (MI) across the compared (experimental) groups or time points; MI testing determines whether the latent constructs are measured equivalently across groups or time. We conducted a systematic review of 426 psychology articles with openly available data to (a) examine common practices in conducting and reporting MI testing, (b) assess whether we could reproduce the reported MI results, and (c) conduct MI tests for the comparisons that enabled sufficiently powerful MI testing. We identified 96 articles containing a total of 929 comparisons. Only 4% of the 929 comparisons underwent MI testing, and these tests were generally poorly reported. None of the reported MI tests were reproducible, and of the 174 newly performed MI tests, only 26% reached sufficient (scalar) invariance, whereas MI failed completely in 58%. Exploratory analyses suggested that in nearly half of the comparisons where configural invariance was rejected, the number of factors differed between groups. These results indicate that MI tests are rarely conducted and poorly reported in psychological studies, and that violations of MI are frequent, implying that reported differences between (experimental) groups cannot be attributed solely to group differences in the latent constructs. We offer recommendations aimed at improving reporting and computational reproducibility practices in psychology.
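For readers unfamiliar with the terminology, the invariance levels mentioned in the abstract (configural, scalar) can be sketched with the conventional multi-group confirmatory factor analysis formulation. The notation below is a standard textbook sketch, not reproduced from the article itself.

```latex
% Multi-group CFA model for the item-response vector x of person i in group g
% (conventional notation; assumed here for illustration, not taken from the article):
x_{ig} = \tau_g + \Lambda_g \, \eta_{ig} + \varepsilon_{ig}
% Configural invariance: the same factor structure (pattern of zero/non-zero loadings in \Lambda_g)
%                        holds in every group.
% Metric (weak) invariance: \Lambda_g = \Lambda \ \text{for all } g \quad (equal loadings).
% Scalar (strong) invariance: \Lambda_g = \Lambda \ \text{and} \ \tau_g = \tau \ \text{for all } g
%                        (equal loadings and intercepts); this is the level typically required
%                        before latent means can be compared across groups.
```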
Original language | English |
---|---|
Publisher | American Psychological Association |
DOIs | |
Publication status | E-pub ahead of print - 2023 |
Keywords
- measurement invariance
- reproducibility
- reporting standards
- construct validity
- psychometrics
Datasets
- Replication data for: The dire disregard of measurement invariance testing in psychological science
  D'Urso, D. (Creator), Maassen, E. (Creator), Van Assen, M. (Creator), Nuijten, M. (Creator), De Roover, K. (Creator) & Wicherts, J. (Creator), DataverseNL, 15 Dec 2022
  DOI: 10.34894/9euwvm, https://dataverse.nl/citation?persistentId=doi:10.34894/9EUWVM
  Dataset