Abstract
One major threat to revealing cultural influences on psychological states or processes is the presence of bias (i.e., systematic measurement error). When quantitative measures do not target the same construct or differ in metric across cultures, the validity of inferences about cultural variability (and universality) is in doubt. The objectives of this article are to review what can be done about bias and what is currently being done about it. To date, a multitude of useful techniques and methods to reduce or assess bias in cross-cultural research have been developed. We explore the limits of invariance/equivalence testing and suggest more flexible means of dealing with bias. First, we review currently available established and novel methods that reveal bias in cross-cultural research. Second, we analyze current practices in a systematic content analysis. The content analysis of more than 500 culture-comparative quantitative studies (published from 2008 to 2015 in three outlets in cross-cultural, social, and developmental psychology) gauges current practices and approaches in the assessment of measurement equivalence/invariance. Surprisingly, the analysis revealed a rather low penetration of invariance testing in cross-cultural research. Although a multitude of classical and novel approaches for invariance testing is available, these are employed infrequently rather than habitually. We discuss reasons for this hesitation and derive suggestions for creatively assessing and handling biases across different research paradigms and designs.
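The abstract refers to classical techniques for detecting construct bias across cultural groups. One widely used index in this literature is Tucker's congruence coefficient, which quantifies the agreement of factor-loading patterns obtained in different groups. The sketch below is a minimal illustration only: the simulated data, item loadings, group sizes, and the scikit-learn-based extraction are assumptions for demonstration, not the authors' procedure or data.

```python
# Minimal sketch: comparing the one-factor loading pattern of a scale across
# two cultural groups with Tucker's congruence coefficient (all data simulated).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

def simulate_group(n, loadings):
    """Generate item scores from a single latent factor plus noise."""
    factor = rng.normal(size=(n, 1))
    noise = rng.normal(scale=0.6, size=(n, len(loadings)))
    return factor @ np.asarray(loadings)[None, :] + noise

def one_factor_loadings(scores):
    """Extract the loading vector of a one-factor solution."""
    fa = FactorAnalysis(n_components=1).fit(scores)
    return fa.components_.ravel()

def tuckers_phi(a, b):
    """Tucker's congruence coefficient between two loading vectors.
    Absolute value is taken to ignore the arbitrary sign of factor solutions."""
    return float(abs(np.sum(a * b)) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

# Hypothetical 6-item scale administered in two cultures;
# group B has one weaker-loading (potentially biased) item.
group_a = simulate_group(400, [0.80, 0.70, 0.75, 0.80, 0.70, 0.65])
group_b = simulate_group(400, [0.80, 0.70, 0.75, 0.80, 0.70, 0.20])

phi = tuckers_phi(one_factor_loadings(group_a), one_factor_loadings(group_b))
print(f"Tucker's phi = {phi:.3f}")  # values of roughly .95 or higher are usually read as structural equivalence
```

Congruence coefficients of this kind address structural (configural-level) agreement only; stricter forms of invariance (metric, scalar) require the multi-group latent-variable models discussed in the article.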
| Original language | English |
|---|---|
| Pages (from-to) | 713-734 |
| Journal | Journal of Cross-Cultural Psychology |
| Volume | 49 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2018 |
Keywords
- COMPARABILITY
- COUNTRIES
- ISSUES
- ITEM
- MODELS
- PROFILES
- REPRESENTATION
- SCALAR
- SPACE
- VALIDITY
- bias
- cross-cultural comparability
- invariance
- measurement error