Can IRT Solve the Missing Data Problem in Test Equating?

Maria Bolsinova*, Gunter Maris

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


In this paper, test equating is treated as a missing data problem. The unobserved responses of the reference population to the new test must be imputed in order to set a new cutscore: the cutscore is chosen such that the proportion of students from the reference population who would have failed the new exam approximately equals the proportion who failed the reference exam. We investigate whether item response theory (IRT) makes it possible to identify the distribution of these missing responses, and hence the distribution of test scores on the new test, from the observed data without parametric assumptions about the ability distribution. We show that although the score distribution is not fully identifiable, the uncertainty about the score distribution on the new test due to non-identifiability is very small. Moreover, ignoring the non-identifiability issue and assuming a normal distribution for ability may bias the equating, as we illustrate with simulated and empirical data examples.
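The cutscore-matching idea in the abstract can be sketched in a few lines. The following is an illustrative simulation only, not the authors' method: it imputes the reference population's responses to the new test by simulating from a Rasch model, then picks the new cutscore whose failure rate best matches the reference exam's. All numbers (item difficulties, the reference cutscore of 23) are hypothetical, and abilities are drawn from a normal distribution purely for convenience, which is exactly the parametric assumption the paper warns can bias equating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a reference population, a reference test, and a new test.
n_students = 10_000
theta = rng.normal(0.0, 1.0, n_students)   # latent abilities (normality assumed here)
b_ref = rng.normal(0.0, 0.8, 40)           # reference-test item difficulties
b_new = rng.normal(0.3, 0.8, 40)           # new-test item difficulties (slightly harder)

def rasch_scores(theta, b, rng):
    """Simulate sum scores under the Rasch model: P(x=1) = 1 / (1 + exp(-(theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

scores_ref = rasch_scores(theta, b_ref, rng)
scores_new = rasch_scores(theta, b_new, rng)  # the "missing" responses, imputed by simulation

cut_ref = 23                                  # hypothetical cutscore on the reference exam
fail_rate_ref = np.mean(scores_ref < cut_ref)

# Choose the new cutscore whose failure rate is closest to the reference failure rate.
candidates = np.arange(0, len(b_new) + 1)
fail_rates_new = np.array([np.mean(scores_new < c) for c in candidates])
cut_new = candidates[np.argmin(np.abs(fail_rates_new - fail_rate_ref))]

print(f"reference failure rate: {fail_rate_ref:.3f}")
print(f"equated new cutscore:   {cut_new}")
```

In practice the abilities and item parameters are not known but estimated, and the paper's point is precisely that the score distribution on the new test is not fully identified from the observed data, so the normal-ability shortcut used above is a convenience, not a justification.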

Original language: English
Article number: 1956
Number of pages: 13
Journal: Frontiers in Psychology
Publication status: Published - 5 Jan 2016
Externally published: Yes


Keywords:
  • item response theory
  • incomplete design
  • marginal Rasch model
  • missing data
  • non-identifiability
  • test equating


