A comparison of procedures to test for moderators in mixed-effects meta-regression models

W. Viechtbauer, Jose Antonio Lopez-Lopez, Julio Sanchez-Meca, Fulgencio Marin-Martinez

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Several alternative methods are available when testing for moderators in mixed-effects meta-regression models. A simulation study was carried out to compare these methods in terms of their Type I error and statistical power rates. The simulation study included the standard (Wald-type) test, 2 different versions of the method proposed by Knapp and Hartung (2003), the Huber-White method, the likelihood ratio test, and the permutation test. These methods were combined with 7 estimators for the amount of residual heterogeneity in the effect sizes. Our results show that the standard method, applied in most meta-analyses to date, does not control the Type I error rate adequately, sometimes leading to overly conservative, but usually to inflated, Type I error rates. Of the different methods evaluated, only the Knapp and Hartung method and the permutation test provide adequate control of the Type I error rate across all conditions. Due to its computational simplicity, the Knapp and Hartung method is recommended as a suitable option for most meta-analyses.
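
To make the comparison concrete, the sketch below illustrates two of the procedures named in the abstract; it is not the authors' simulation code. It fits a mixed-effects meta-regression by weighted least squares, estimating residual heterogeneity with a DerSimonian-Laird-type method-of-moments estimator (one of the family of estimators the paper evaluates), and then computes the standard Wald-type z test alongside the Knapp and Hartung (2003) t test in two variants (with and without truncating the scale factor at 1), which is one plausible reading of the "2 different versions" mentioned above. The effect sizes, sampling variances, and moderator values in the usage example are made up for illustration.

```python
import numpy as np
from scipy import stats

def meta_regression_tests(y, v, X):
    """y: observed effect sizes, v: sampling variances, X: k x p design matrix."""
    y = np.asarray(y, dtype=float)
    v = np.asarray(v, dtype=float)
    X = np.asarray(X, dtype=float)
    k, p = X.shape

    # Method-of-moments (DerSimonian-Laird-type) estimate of residual tau^2.
    W = np.diag(1.0 / v)                                   # fixed-effects weights
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    QE = float(y @ P @ y)                                  # residual Q statistic
    tau2 = max(0.0, (QE - (k - p)) / np.trace(P))

    # Weighted least squares fit of the mixed-effects meta-regression model.
    w = 1.0 / (v + tau2)
    Wm = np.diag(w)
    cov = np.linalg.inv(X.T @ Wm @ X)                      # model-based vcov of b
    b = cov @ X.T @ Wm @ y
    resid = y - X @ b

    # Standard Wald-type test: z statistic referred to the normal distribution.
    se = np.sqrt(np.diag(cov))
    z = b / se
    out = {"tau2": tau2, "wald": (z, 2 * stats.norm.sf(np.abs(z)))}

    # Knapp-Hartung adjustment: rescale the variance-covariance matrix by s^2
    # and refer the statistic to a t distribution with k - p degrees of freedom.
    # The truncated variant bounds the scale factor below at 1.
    s2 = float(w @ resid**2) / (k - p)
    for label, factor in (("knha", s2), ("knha_truncated", max(1.0, s2))):
        se_kh = np.sqrt(factor * np.diag(cov))
        t = b / se_kh
        out[label] = (t, 2 * stats.t.sf(np.abs(t), df=k - p))
    return out

# Toy usage with made-up standardized mean differences and one moderator.
y = [0.30, 0.12, 0.45, 0.25, 0.55, 0.10, 0.40, 0.35]
v = [0.04, 0.03, 0.05, 0.02, 0.06, 0.03, 0.04, 0.05]
x = [1, 2, 3, 4, 5, 6, 7, 8]
X = np.column_stack([np.ones(len(y)), x])                  # intercept + moderator
print(meta_regression_tests(y, v, X))
```

In practice these tests would not be hand-rolled; Viechtbauer's metafor package for R, for example, implements the Knapp and Hartung adjustment and a range of residual heterogeneity estimators.
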
Original language: English
Pages (from-to): 360-374
Journal: Psychological Methods
Volume: 20
Issue number: 3
DOIs: https://doi.org/10.1037/met0000023
Publication status: Published - 2015

Keywords

  • meta-analysis
  • meta-regression
  • moderator analysis
  • heterogeneity estimator
  • standardized mean difference

Cite this

Viechtbauer, W., Lopez-Lopez, J. A., Sanchez-Meca, J., & Marin-Martinez, F. (2015). A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychological Methods, 20(3), 360-374. https://doi.org/10.1037/met0000023