Data aggregation can lead to biased inferences in Bayesian linear mixed models

Daniel J. Schad, Bruno Nicenboim, Shravan Vasishth

Research output: Contribution to journal › Article › Scientific


Bayesian linear mixed-effects models are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, it is often unclear how to specify the data and the model correctly. In Bayesian approaches, many authors recommend aggregating data at the by-subject level and computing Bayes factors on the aggregated data. Here, we use simulation-based calibration for model inference to demonstrate that null hypothesis tests can yield biased Bayes factors when computed from aggregated data. Specifically, when random slope variances differ (i.e., the sphericity assumption is violated), Bayes factors are too conservative for contrasts where the variance is small and too liberal for contrasts where the variance is large. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item variance is present but ignored in the analysis. We also perform corresponding frequentist analyses (type I and II error probabilities) to illustrate that the same problems exist and are well known from frequentist tools. These problems can be circumvented by running Bayesian linear mixed-effects models on non-aggregated data, such as individual trials, and by explicitly modeling the full random effects structure. Reproducible code is available from
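The frequentist analogue of the second problem (by-subject aggregation while random item variance is ignored) can be sketched with a small Monte Carlo simulation. This is not the authors' code, and all parameter values (number of subjects and items, variance magnitudes) are invented for illustration: under a true null effect, every subject sees the same realized item slopes, so aggregating over items and then running a by-subject test treats shared item variability as evidence for an effect.

```python
# Sketch of the aggregation bias: the true condition effect is zero, but
# random item slopes are shared across subjects. Aggregating over items
# and testing the by-subject means inflates the type I error rate.
# All parameter values here are hypothetical, chosen for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def aggregated_pvalue(n_subj=30, n_item=16, item_slope_sd=0.5, resid_sd=1.0):
    # Each item's idiosyncratic condition effect, identical for all
    # subjects; the population-level effect is zero (null is true).
    item_slope = rng.normal(0.0, item_slope_sd, n_item)
    diffs = np.empty(n_subj)
    for s in range(n_subj):
        noise = rng.normal(0.0, resid_sd, (n_item, 2))
        cond_a = -item_slope / 2 + noise[:, 0]  # condition coded -0.5
        cond_b = +item_slope / 2 + noise[:, 1]  # condition coded +0.5
        diffs[s] = (cond_b - cond_a).mean()     # by-subject aggregation
    # One-sample t-test on the aggregated means, ignoring items
    return stats.ttest_1samp(diffs, 0.0).pvalue

pvals = np.array([aggregated_pvalue() for _ in range(500)])
false_positive_rate = (pvals < 0.05).mean()
print(f"Type I error with by-subject aggregation: {false_positive_rate:.2f}")
# Substantially above the nominal 0.05 level, because the shared item
# slope variance was averaged into the by-subject means.
```

Modeling items explicitly (crossed random effects for subjects and items on non-aggregated trials) removes this shared-variance confound, which is the remedy the abstract describes for the Bayesian case as well.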
Original language: English
Pages (from-to): 1-38
Publication status: Submitted - 2022


  • Methodology (stat.ME)
  • FOS: Computer and information sciences
  • Data aggregation
  • Bayes factors
  • Bayesian model comparison
  • Simulation-based calibration


