On the bias and stability of the results of comparative judgment

Elise Crompvoets*, Anton Béguin, K. Sijtsma

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)
57 Downloads (Pure)

Abstract

Comparative judgment is a method that measures a competence by comparing items with other items. In educational measurement, where comparative judgment is becoming an increasingly popular assessment method, the items are mostly students’ responses to an assignment or an examination. For assessments using comparative judgment, the Scale Separation Reliability (SSR) is used to estimate the reliability of the measurement. Previous research has shown that the SSR may overestimate reliability when the pairs to be compared are selected with certain adaptive algorithms, when raters use different underlying models/truths, or when the true variance of the item parameters is below one. This research investigated the bias and stability of the components of the SSR in relation to the number of comparisons per item, to increase understanding of the SSR. We showed that many comparisons are required to obtain an accurate estimate of the item variance, but that the SSR can be useful even when the item variance is overestimated. Lastly, we recommend adjusting the general guideline for the required number of comparisons per item to 41 comparisons per item. This recommendation depends partly on the number of items and the true variance used in our simulation study, and needs further investigation.
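To make the SSR concrete, the following is a minimal sketch of a comparative-judgment simulation in Python. The setup (20 items, true variance 1.0, random pairing, a Bradley–Terry model fit by damped diagonal-Newton ascent, and standard errors from the diagonal of the information) is illustrative and not the authors' actual simulation design; the SSR is computed with the common form (observed variance − mean squared standard error) / observed variance.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup (illustrative, not the paper's design):
# 20 items, true item variance 1.0, and the 41 comparisons
# per item that the abstract recommends.
n_items, n_per_item = 20, 41
beta_true = rng.normal(0.0, 1.0, n_items)

# --- simulate random pairwise comparisons under a Bradley-Terry model ---
wins = np.zeros((n_items, n_items))
for _ in range(n_items * n_per_item // 2):
    i, j = rng.choice(n_items, size=2, replace=False)
    p_ij = 1.0 / (1.0 + np.exp(-(beta_true[i] - beta_true[j])))
    if rng.random() < p_ij:
        wins[i, j] += 1          # item i beats item j
    else:
        wins[j, i] += 1

# --- estimate item parameters by damped diagonal-Newton ascent ---
n_ij = wins + wins.T             # comparisons per pair
beta_hat = np.zeros(n_items)
for _ in range(400):
    P = 1.0 / (1.0 + np.exp(-(beta_hat[:, None] - beta_hat[None, :])))
    grad = wins.sum(axis=1) - (n_ij * P).sum(axis=1)   # score function
    info = (n_ij * P * (1.0 - P)).sum(axis=1) + 1e-9   # diagonal information
    beta_hat += 0.5 * grad / info
    beta_hat -= beta_hat.mean()  # fix the location of the scale

# --- Scale Separation Reliability ---
se2 = 1.0 / info                 # squared standard errors (diagonal approx.)
obs_var = beta_hat.var(ddof=1)   # observed variance of the estimates
ssr = (obs_var - se2.mean()) / obs_var
print(f"SSR = {ssr:.3f}")
```

Because the observed variance of the estimates includes estimation error, it tends to overestimate the true variance when comparisons are few, which is why the SSR's components, rather than only its final value, are worth examining.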
Original language: English
Article number: 788202
Number of pages: 10
Journal: Frontiers in Education
Volume: 6
DOIs
Publication status: Published - 2022

Keywords

  • bias
  • comparative judgment (CJ)
  • pairwise comparison (PC)
  • reliability
  • stability
