# Bayesian inference: an approach to statistical inference

Overview

Published Online: Jul 15 2010

DOI: 10.1002/wics.102


**Abstract**

Bayes's original argument used an analogy involving an invariant prior together with a statistical model, and held that the resulting combination of prior and likelihood provides a probability description of an unknown parameter value in an application. In particular contexts with invariance, this combination can now be called a confidence distribution, and it is subject to some restrictions when used to construct confidence intervals and regions. The procedure of combining a prior with the likelihood has, however, been widely generalized, with invariance relaxed to less restrictive criteria such as non-informative and reference priors, among others. Other generalizations allow the prior to represent various forms of background information that is available or elicited from those familiar with the statistical context; such priors can reasonably be called subjective priors. Still further generalizations address an anomaly in which marginalization with a vector parameter yields results that contradict the term probability; these are the Dawid, Stone, and Zidek marginalization paradoxes, and priors designed to address them are called targeted priors. The special case in which the prior describes a genuine random source for the parameter value is simply probability analysis, although it is frequently treated as a Bayes procedure. We survey the argument in support of probability characteristics and outline various generalizations of the original Bayes proposal.

Copyright © 2010 John Wiley & Sons, Inc.

This article is categorized under: Statistical and Graphical Methods of Data Analysis > Bayesian Methods and Theory
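The combination the abstract describes, posterior proportional to prior times likelihood, can be illustrated numerically. The following is a minimal sketch, not the article's method: it evaluates a flat prior (standing in for Bayes's invariant prior) against a binomial likelihood on a grid; all function and variable names are illustrative.

```python
# Grid illustration of Bayes's combination: posterior ∝ prior × likelihood.
# Model: k successes in n binomial trials with unknown success probability theta.
from math import comb

def posterior_grid(k, n, num_points=101):
    """Return (grid, posterior) for a uniform prior on theta in [0, 1]."""
    grid = [i / (num_points - 1) for i in range(num_points)]
    prior = [1.0] * num_points                                   # flat prior over theta
    like = [comb(n, k) * t**k * (1 - t)**(n - k) for t in grid]  # binomial likelihood
    unnorm = [p * l for p, l in zip(prior, like)]                # prior × likelihood
    total = sum(unnorm)
    return grid, [u / total for u in unnorm]                     # normalize to sum to 1

grid, post = posterior_grid(k=7, n=10)
mode = grid[post.index(max(post))]   # with a flat prior the mode sits at k/n = 0.7
```

Under a flat prior the posterior mode coincides with the maximum-likelihood estimate, which is one concrete sense in which the prior-likelihood combination reduces to ordinary likelihood analysis when the prior is uninformative.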