Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints

Peder Mortvedt Isager*, Robbie C. M. van Aert, Stepan Bahnik, Mark J. Brandt, K. Andrew DeSoto, Roger Giner-Sorolla, Joachim Krueger, Marco Perugini, Ivan Ropovik, Anna E. van 't Veer, Marek Vranka, Daniel Lakens

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

21 Citations (Scopus)
174 Downloads (Pure)

Abstract

Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources, and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which out of all candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal researchers wish to achieve is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim, and (b) uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built.
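As a purely illustrative sketch (not the authors' formal causal decision model), the core idea of the abstract can be expressed in a few lines of code: each candidate claim is scored by combining (a) the value of being certain about the claim and (b) current uncertainty about it, and the claim with the highest score is selected. The function names, the product operationalization, and the example numbers below are all hypothetical choices made for illustration.

```python
def replication_value(value_of_certainty: float, uncertainty: float) -> float:
    # Hypothetical operationalization: expected utility gain scales with both
    # how much certainty about the claim is worth and how uncertain we currently
    # are about it (0 = fully certain, 1 = maximally uncertain).
    return value_of_certainty * uncertainty

def select_study(candidates: dict) -> str:
    # Pick the candidate claim with the highest replication value.
    return max(candidates, key=lambda c: replication_value(*candidates[c]))

candidates = {
    "claim_A": (10.0, 0.2),   # valuable but already well established
    "claim_B": (6.0, 0.9),    # moderately valuable, highly uncertain
    "claim_C": (3.0, 0.95),   # very uncertain but of little value
}
print(select_study(candidates))  # -> claim_B
```

Note how the rule trades off the two components: claim_A loses out despite its high value because the evidence is already firm, while claim_C loses out despite high uncertainty because certainty about it is worth little.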

Translational Abstract

Replication, redoing a study using the same procedures, is an important part of checking the robustness of claims in the psychological literature. The practice of replicating original studies was woefully devalued for many years, but this is now changing. Recent calls for improving the quality of research in psychology have generated a surge of interest in funding, conducting, and publishing replication studies. Because many studies have never been replicated, and researchers have limited time and money to perform replication studies, researchers must decide which studies are the most important to replicate. This way, scientists learn the most given limited resources. In this article, we lay out what it means to ask which study is the most important to replicate, and we propose a general decision rule for picking a study to replicate. That rule depends on a concept we call replication value. Replication value is a function of the importance of the study and how uncertain we are about its findings. In this article we explain how researchers can think precisely about the value of replication studies. We then discuss when and how it makes sense to use replication value as a measure of how valuable a replication study would be, and we discuss factors that funders, journals, or scientists could consider when determining how valuable a replication study is.

Original language: English
Pages (from-to): 438-451
Journal: Psychological Methods
Volume: 28
Issue number: 2
DOIs
Publication status: Published - 2023

Keywords

  • expected utility
  • replication
  • replication value
  • study selection

