Assessing the item response theory with covariates (IRT-C) procedure for ascertaining differential item functioning

L. Tay, J.K. Vermunt, C. Wang

Research output: Contribution to journal › Article › Scientific › peer-review

15 Citations (Scopus)

Abstract

We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without prior knowledge of anchor items (Tay, Newman, & Vermunt, 2011). The procedure begins with a fully constrained baseline model; candidate items, selected in turn on the basis of high unconditional bivariate residual (UBVR) values, are then tested for uniform and/or nonuniform DIF using the Wald statistic. This iterative process continues until no further DIF is detected or the Bayes information criterion (BIC) increases. We extend the procedure by examining the use of conditional bivariate residuals (CBVR) to flag DIF and by considering alternative stopping criteria beyond the BIC. Simulation results showed that the IRT-C approach performed well, with CBVR yielding slightly better power and Type I error rates than UBVR. Additionally, using no information criterion as a stopping rule yielded higher power than using the BIC, although Type I error rates were generally well controlled in both cases. Across the simulation conditions, the IRT-C procedure produced results similar to those of the Mantel-Haenszel and MIMIC procedures.

Keywords: differential item functioning, item response theory, multiple covariates, simulation
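The abstract describes an iterative forward search: fit a fully constrained model, rank candidate items by a residual diagnostic, Wald-test the top candidate for DIF, and stop when no DIF is detected or the BIC worsens. The Python sketch below illustrates only that search logic; it is not the authors' IRT-C implementation (which fits a latent-variable model with covariates). As labeled assumptions, it proxies the latent trait with a standardized rest-score in a per-item logistic regression, ranks candidates by their joint Wald statistic rather than by bivariate residuals, uses a 0.05 significance threshold, and approximates the overall BIC as the sum of per-item BICs.

```python
# Minimal sketch of the iterative DIF search described in the abstract.
# NOTE: not the authors' IRT-C implementation. The rest-score proxy, the
# Wald-based candidate ranking (stand-in for bivariate residuals), the 0.05
# threshold, and the summed per-item BIC are all illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate dichotomous responses for two groups; inject uniform DIF on item 0.
n, n_items = 2000, 10
group = rng.integers(0, 2, n)                 # observed covariate (e.g., gender)
theta = rng.normal(size=n)                    # latent trait
b = np.linspace(-1.0, 1.0, n_items)           # item difficulties
logits = theta[:, None] - b[None, :]
logits[:, 0] += 0.8 * group                   # uniform DIF on item 0
Y = (rng.random((n, n_items)) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

def fit_item(j, free_dif):
    """Fit a per-item logistic model; the standardized rest-score stands in
    for theta. free_dif=True adds a group main effect (uniform DIF) and a
    group x trait interaction (nonuniform DIF)."""
    rest = Y.sum(axis=1) - Y[:, j]
    z = (rest - rest.mean()) / rest.std()
    cols = [np.ones(n), z]
    if free_dif:
        cols += [group, group * z]
    return sm.Logit(Y[:, j], np.column_stack(cols)).fit(disp=0)

flagged = set()
bic_total = sum(fit_item(j, False).bic for j in range(n_items))

while len(flagged) < n_items:
    # Rank remaining items by the joint Wald statistic of their DIF terms.
    best_j, best_stat, best_p, best_res = None, -np.inf, None, None
    for j in set(range(n_items)) - flagged:
        res = fit_item(j, True)
        wt = res.wald_test(np.eye(4)[2:])     # jointly test both DIF terms
        stat = float(np.squeeze(wt.statistic))
        if stat > best_stat:
            best_j, best_stat = j, stat
            best_p, best_res = float(np.squeeze(wt.pvalue)), res

    if best_p >= 0.05:                        # stop: no further DIF detected
        break
    # Free the candidate item only if the (summed) BIC does not increase.
    new_bic = bic_total - fit_item(best_j, False).bic + best_res.bic
    if new_bic >= bic_total:                  # stop: BIC got worse
        break
    flagged.add(best_j)
    bic_total = new_bic

print("Items flagged for DIF:", sorted(flagged))   # expect item 0 only
```

Dropping the BIC check (the "no information criterion" condition in the abstract) lets the loop free items until the Wald test is no longer significant, which is the higher-power variant the simulations compare against.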
Original language: English
Pages (from-to): 201-222
Journal: International Journal of Testing
Volume: 13
Issue number: 3
Publication status: Published - 2013
