In order to accurately control the Type I error rate (typically .05), a p value should be uniformly distributed under the null model. The posterior predictive p value (ppp), which is commonly used in Bayesian data analysis, generally does not satisfy this property. For example, there have been reports where the sampling distribution of the ppp under the null model was highly concentrated around .50. In that case, a ppp of .20 would already indicate model misfit, but when compared with a significance level of .05, as is standard statistical practice, the null model would not be rejected. Consequently, the ppp has very little power to detect model misfit. A solution proposed in the literature is to calibrate the ppp using the prior distribution of the parameters under the null model. A disadvantage of this “prior-cppp” is that it is very sensitive to the prior of the model parameters. In this article, an alternative solution is proposed in which the ppp is calibrated using the posterior under the null model. This “posterior-cppp” (a) can be used when prior information is absent, (b) allows one to test any type of misfit by choosing an appropriate discrepancy measure, and (c) has a uniform distribution under the null model. The methodology is applied in various testing problems: testing independence of dichotomous variables, checking misfit of linear regression models in the presence of outliers, and assessing misfit in latent class analysis.
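The calibration idea can be illustrated with a minimal Monte Carlo sketch. Everything below is an assumption chosen for illustration, not the article's actual implementation: a Normal(mu, 1) null model with a Normal(0, 100) prior on mu, and the sample variance as the discrepancy measure. The calibrated p value is the proportion of ppp values, computed for datasets generated under the null model with parameters drawn from the posterior, that are at least as extreme as the observed ppp:

```python
import numpy as np

def discrepancy(y, axis=-1):
    """Discrepancy measure (illustrative choice): sample variance,
    which is sensitive to over- or underdispersion under the null."""
    return np.var(y, axis=axis, ddof=1)

def posterior_mu(y, prior_var=100.0):
    """Posterior mean and variance of mu for the conjugate model
    y_i ~ Normal(mu, 1), mu ~ Normal(0, prior_var)."""
    n = len(y)
    post_var = 1.0 / (n + 1.0 / prior_var)
    return post_var * n * np.mean(y), post_var

def ppp(y, rng, n_draws=300):
    """Posterior predictive p value: proportion of replicated datasets
    whose discrepancy is at least as large as that of the data."""
    m, v = posterior_mu(y)
    mu = rng.normal(m, np.sqrt(v), size=n_draws)
    y_rep = rng.normal(mu[:, None], 1.0, size=(n_draws, len(y)))
    return np.mean(discrepancy(y_rep) >= discrepancy(y))

def posterior_cppp(y, rng, n_cal=200):
    """Calibrated ppp: compare the observed ppp with ppp values for
    datasets generated under the null model, with parameters drawn
    from the posterior given the observed data."""
    p_obs = ppp(y, rng)
    m, v = posterior_mu(y)
    p_null = np.empty(n_cal)
    for k in range(n_cal):
        # Draw mu from the posterior, then a dataset from the null model.
        y_k = rng.normal(rng.normal(m, np.sqrt(v)), 1.0, size=len(y))
        p_null[k] = ppp(y_k, rng)
    # Fraction of null-model ppp values at least as extreme (small)
    # as the observed ppp; uniform under the null by construction.
    return np.mean(p_null <= p_obs)

rng = np.random.default_rng(1)
y = rng.normal(0.0, 2.0, size=50)  # overdispersed relative to the null
print(posterior_cppp(y, rng))
```

For the overdispersed data above, the observed ppp is already near zero and the calibrated p value is small, so the misfit is detected; under data actually generated from the null model, the calibrated value behaves approximately uniformly on (0, 1).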