Assessing students’ personal characteristics, as well as the structures and processes of teaching and learning, is an integral part of the Programme for International Student Assessment (PISA). Providing input for solid evidence-based educational policies, one of the main aims of PISA, creates major methodological challenges: various biases in self-reported data across cultures pose a persistent obstacle to unpacking the black box of student learning, and these biases jeopardize PISA’s scope for evidence-based policy making. This chapter focuses on challenges in the design and analysis of the PISA background questionnaires, especially for noncognitive outcome measures. Our conceptual background, however, is not primarily PISA-specific but draws on comparative work in the social and behavioral sciences more broadly. We first review sources of bias at the construct, method, and item levels, as well as levels of equivalence (construct, metric, and scalar invariance), using examples from educational surveys. We then illustrate the strategies used in the PISA project to deal with different types of bias. Specifically, we outline qualitative, non-statistical strategies, such as instrument development and adaptation and the standardization of assessment procedures, as well as statistical strategies to mitigate bias. State-of-the-art psychometric procedures for examining the comparability of these noncognitive outcome data, including partial invariance and approximate invariance, are also discussed. We conclude by suggesting topics for future research.
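The levels of equivalence mentioned above can be illustrated with a minimal simulation. The sketch below is not from the chapter; it uses NumPy and a first-principal-component estimate as a crude stand-in for confirmatory factor analysis loadings. Two simulated groups share factor loadings (consistent with metric invariance) but differ in one item intercept (violating scalar invariance), so latent means could not be compared directly across these groups. All variable names and the simulation parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, loadings, intercepts):
    # One-factor model: x = intercepts + loadings * eta + noise
    eta = rng.normal(size=(n, 1))
    noise = rng.normal(scale=0.5, size=(n, len(loadings)))
    return intercepts + eta * loadings + noise

loadings = np.array([0.8, 0.7, 0.6, 0.9])
# Group B shares the loadings (metric invariance holds) but has a
# shifted intercept on item 0 (scalar invariance is violated).
a = simulate_group(2000, loadings, np.zeros(4))
b = simulate_group(2000, loadings, np.array([0.5, 0.0, 0.0, 0.0]))

def estimated_loadings(x):
    # First principal component of the covariance matrix, scaled by
    # the root eigenvalue: a rough proxy for CFA loading estimates.
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, -1] * np.sqrt(vals[-1])
    return v if v.sum() > 0 else -v  # fix arbitrary eigenvector sign

la, lb = estimated_loadings(a), estimated_loadings(b)
print(np.round(la, 2), np.round(lb, 2))        # similar loading patterns
print(a.mean(0).round(2), b.mean(0).round(2))  # item 0 means diverge
```

In practice such comparisons are made with multigroup confirmatory factor analysis, constraining loadings (metric) and then intercepts (scalar) to equality across groups and testing the loss of fit; this toy version only makes the distinction between the two levels visible.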
Title of host publication: Assessing contexts of learning world-wide
Subtitle of host publication: An international perspective
Editors: S. Kuger, E. Klieme, N. Jude, D. Kaplan
Place of publication: New York
Number of pages: 24
Publication status: Published - 20 Dec 2016