On the prevalence of information inconsistency in normal linear models

Joris Mulder*, James O. Berger, Víctor Peña, M. J. Bayarri

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

4 Citations (Scopus)
68 Downloads (Pure)


Informally, ‘information inconsistency’ is the property, observed in some Bayesian hypothesis testing and model selection scenarios, whereby the Bayesian conclusion does not become definitive when the data seem to become definitive. An example is that, when performing a t test using standard conjugate priors, the Bayes factor of the alternative hypothesis to the null hypothesis remains bounded as the t statistic grows to infinity. The goal of this paper is to thoroughly investigate information inconsistency in various Bayesian testing problems. We consider precise hypothesis tests, one-sided hypothesis tests, and multiple hypothesis tests under normal linear models with dependent observations. Standard priors are considered, such as conjugate and semi-conjugate priors, as well as variations of Zellner’s g prior (e.g., fixed g priors, mixtures of g priors, and adaptive (data-based) g priors). It is shown that information inconsistency is a widespread problem when standard priors are used, while certain theoretically recommended priors, including scale mixtures of conjugate priors and adaptive priors, are information consistent.
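The boundedness phenomenon from the abstract can be illustrated numerically. The sketch below is not from the paper itself; it assumes the standard closed-form Bayes factor for a fixed Zellner g prior in simple linear regression (one predictor), BF10 = (1+g)^((n-2)/2) / (1 + g(1-R^2))^((n-1)/2), together with the identity R^2 = t^2/(t^2 + n - 2). As the t statistic grows, BF10 approaches the finite bound (1+g)^((n-2)/2) rather than diverging, which is exactly information inconsistency:

```python
def bf10_fixed_g(t, n, g):
    """Bayes factor BF10 for H1: beta != 0 vs H0: beta = 0 in simple
    linear regression under Zellner's g prior with a fixed g.

    Uses the standard closed form
        BF10 = (1+g)^((n-2)/2) / (1 + g*(1 - R^2))^((n-1)/2),
    with R^2 = t^2 / (t^2 + n - 2) for a single predictor.
    """
    r2 = t**2 / (t**2 + n - 2)
    return (1 + g) ** ((n - 2) / 2) / (1 + g * (1 - r2)) ** ((n - 1) / 2)

n, g = 10, 10                       # g = n is the common "unit information" choice
bound = (1 + g) ** ((n - 2) / 2)    # finite limit of BF10 as |t| -> infinity

for t in (2, 5, 20, 100, 1e6):
    print(f"t = {t:>9}: BF10 = {bf10_fixed_g(t, n, g):12.2f}")
# BF10 increases with t but plateaus at (1+g)^((n-2)/2) = 11**4 = 14641,
# so even overwhelming data cannot push the evidence past this ceiling.
print(f"bound = {bound:.2f}")
```

By contrast, under the information-consistent priors the paper recommends (e.g., scale mixtures of g such as the Zellner–Siow prior), the Bayes factor diverges as t grows, so no such ceiling exists.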
Original language: English
Pages (from-to): 103-132
Issue number: 1
Publication status: Published - 2021


Keywords
  • Bayes factors
  • Conjugate priors
  • Information inconsistency
  • Regression models


