Towards competitive instead of biased testing of heuristics

A reply to Hilbig and Richter (2011)

Henry Brighton, Gerd Gigerenzer

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth's (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.

Original language: English
Pages (from-to): 197-205
Number of pages: 9
Journal: Topics in Cognitive Science
Volume: 3
Issue number: 1
ISSN: 1756-8757
DOI: 10.1111/j.1756-8765.2010.01124.x
Publisher: John Wiley & Sons Inc.
Publication status: Published - Jan 2011
Externally published: Yes
Note: Copyright © 2011 Cognitive Science Society, Inc.

Keywords

  • Cognition/physiology
  • Cues
  • Decision Making/physiology
  • Humans
  • Judgment/physiology
  • Models, Psychological
  • Problem Solving/physiology
  • Recognition (Psychology)/physiology

Cite this

Towards competitive instead of biased testing of heuristics: A reply to Hilbig and Richter (2011). / Brighton, Henry; Gigerenzer, Gerd.

In: Topics in Cognitive Science, Vol. 3, No. 1, 01.2011, p. 197-205.

Research output: Contribution to journal › Article › Scientific › peer-review
