Improving the discriminant validation of multi-item scales

Constant Pieters, H. Baumgartner, Rik Pieters

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Discriminant validation examines the extent to which constructs that are hypothesized to be conceptually distinct, and that are measured with multi-item scales, are also empirically distinct. A literature review of published scale development studies shows that a variety of criteria and approaches for assessing discriminant validity are in use. However, the requirements for an appropriate criterion have not been spelled out, which has led to the use of problematic criteria. The present research introduces three requirements that an appropriate discriminant validation criterion should satisfy, concerning the correlation, the comparison standard, and the comparison method. It shows that the common Fornell and Larcker criterion is based on an inappropriate comparison standard and method and that alternative criteria have weaknesses as well. The authors therefore propose an improved comparison standard, congeneric reliability (CR), and develop a systematic discriminant validation procedure based on CR and an existing criterion (Phi), both of which satisfy the three requirements. The procedure provides continuous measures of support for discriminant validity and accounts for measurement and sampling error. A detailed case study and reanalyses of seven published scale development articles demonstrate the application and strengths of the procedure. Example code and an online application facilitate its implementation.
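
The criteria summarized above can be sketched briefly. The following Python snippet is a hypothetical illustration, not the article's example code or online application: it computes congeneric reliability (CR) and average variance extracted from assumed standardized loadings, uses CR to disattenuate an observed sum-score correlation, and contrasts the Fornell-Larcker comparison with the distance of the construct correlation from 1 as a continuous indication of distinctness (the article's procedure additionally accounts for sampling error).

```python
# A minimal illustrative sketch, not the authors' published example code.
# All loadings and the observed correlation below are hypothetical.

import numpy as np

def congeneric_reliability(loadings):
    """CR from standardized loadings, assuming uncorrelated errors:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l = np.asarray(loadings, dtype=float)
    return l.sum() ** 2 / (l.sum() ** 2 + np.sum(1.0 - l ** 2))

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    l = np.asarray(loadings, dtype=float)
    return float(np.mean(l ** 2))

def disattenuated_correlation(r_obs, loadings_x, loadings_y):
    """Correct an observed sum-score correlation for measurement error
    using the CR of each scale (classical correction for attenuation)."""
    return r_obs / np.sqrt(congeneric_reliability(loadings_x) *
                           congeneric_reliability(loadings_y))

# Hypothetical two 4-item scales with an observed sum-score correlation of .62.
lx = [0.78, 0.74, 0.81, 0.70]
ly = [0.69, 0.73, 0.76, 0.71]
phi = disattenuated_correlation(0.62, lx, ly)  # estimated construct correlation

# Fornell-Larcker: the AVE of each construct should exceed the squared
# construct correlation.
fl_ok = ave(lx) > phi ** 2 and ave(ly) > phi ** 2

# A continuous indication of distinctness: how far the construct correlation
# falls below 1 (a confidence interval would additionally reflect sampling error).
print(f"Construct correlation (disattenuated): {phi:.3f}")
print(f"Fornell-Larcker satisfied: {fl_ok}")
print(f"Distance of correlation from 1: {1 - phi:.3f}")
```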

Original language: English
Journal: Journal of Marketing Research
Publication status: E-pub ahead of print - Oct 2025

Keywords

  • scale development
  • discriminant validity
  • congeneric reliability
  • Fornell-Larcker criterion
