This course presents the motivations for Bayesian statistical analysis, both in relation to decision theory and its associated notions of optimality (minimaxity, admissibility, invariance) and in terms of the use of prior information. It then covers methods for prior modelling and for computing Bayes estimators, for both point estimation and hypothesis testing. The various concepts are illustrated in the framework of generalised linear models in order to demonstrate the applicability and relevance of the Bayesian approach. Each session examines one particular point in detail, whose foundations must be acquired beforehand by reading the corresponding chapter.
- Decision theory. Definitions, models and motivations. Minimaxity, maximin rules and least favourable priors. Admissibility and complete classes. Invariance and best equivariant estimators.
- Prior modelling. Probabilistic representation of prior information. Choice of conjugate priors. Extension to mixtures of conjugate priors. Noninformative settings and reference priors. Sensitivity of the inference to the choice of prior.
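The conjugacy mentioned above can be illustrated with the standard Beta-Binomial pair: a Beta prior combined with a Binomial likelihood yields a Beta posterior, so the update reduces to adding counts to the prior parameters. The following sketch is illustrative (the parameter names and the uniform-prior example are our own, not taken from the course material):

```python
def beta_binomial_update(a: float, b: float, successes: int, trials: int):
    """Posterior of a Beta(a, b) prior after a Binomial observation.

    Conjugacy: Beta prior x Binomial likelihood -> Beta posterior with
    parameters (a + successes, b + failures).
    """
    return a + successes, b + (trials - successes)

# Example: uniform prior Beta(1, 1), 7 successes out of 10 trials.
a_post, b_post = beta_binomial_update(1.0, 1.0, 7, 10)
posterior_mean = a_post / (a_post + b_post)  # 8 / 12, i.e. about 0.667
```

The posterior mean 8/12 sits between the prior mean 1/2 and the empirical frequency 7/10, moving toward the data as the sample grows, which is one way to read the sensitivity of the answer to the prior.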
- Bayesian inference. Point estimation. The specific case of regression models. Hypothesis testing and comparison with the Neyman-Pearson approach. Computational methods for Bayes estimators.
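As a concrete instance of a Bayes estimator: under squared-error loss the Bayes estimator is the posterior mean, which in the conjugate Normal model with known variance is a precision-weighted average of the prior mean and the sample mean. A minimal sketch, with our own illustrative notation (not the course's):

```python
# Model: x_i ~ N(theta, sigma2) with known sigma2, prior theta ~ N(mu0, tau2).
# Under quadratic loss, the Bayes estimator is the posterior mean.

def normal_posterior_mean(xs, sigma2, mu0, tau2):
    """Posterior mean of theta, i.e. the Bayes estimator for squared-error loss."""
    n = len(xs)
    xbar = sum(xs) / n
    # Weight on the data: data precision n/sigma2 versus prior precision 1/tau2.
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    return w * xbar + (1 - w) * mu0

# Example: prior N(0, 1), four observations with sample mean 2.0, sigma2 = 1.
est = normal_posterior_mean([1.5, 2.5, 2.0, 2.0], 1.0, 0.0, 1.0)
# Here w = 4/5, so the estimate shrinks the sample mean toward 0: est = 1.6.
```

In non-conjugate models the posterior mean has no closed form and must be approximated numerically, which is where the computational methods listed above come in.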
"The Bayesian Choice : from Decision-Theoretic Motivations to Computational Imple-
mentation", C. Robert, Springer-Verlag, New York (2001).
"Statistical Decision Theory and Bayesian Analysis", J. Berger, Springer-Verlag, New
"Bayesian nonparametrics", Hjort et al. eds., Cambridge University Press, (2010).