Advanced statistics: statistical inference
To infer is to make general statements on the basis of specific observations. The central topic of this half course is how probabilistic models can be used to draw such general statements from an observed set of data.
From an early age, human beings are experts at inference. It is such a fundamental part of our intelligence that we do it without even thinking about it: we learn to classify objects on the basis of a very limited set of examples. In statistical inference, we go from the specific to the general via a mathematical model. The specific observations come from a data set, that is, a collection of numbers or, at least, of information that can be represented numerically. The mathematical models we use draw on the probability distributions described in the companion half course Advanced statistics: distribution theory.
- Data reduction: Sufficiency, minimal sufficiency. Likelihood. (A worked factorization example follows this list.)
- Point estimation: Bias, consistency, mean squared error. Central limit theorem. Rao-Blackwell theorem. Minimum variance unbiased estimators, Cramér-Rao bound. Properties of maximum likelihood estimators. (See the simulation sketch below.)
- Interval estimation: Pivotal quantities. Size and coverage probability. (See the pivotal-quantity example below.)
- Hypothesis testing: Likelihood ratio test. Most powerful tests. Neyman-Pearson lemma. (See the test construction sketched below.)
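For concreteness, here is a minimal worked example of data reduction, using the standard Bernoulli case rather than anything specific to this outline: the factorization theorem exhibits the sample total as a sufficient statistic, so the likelihood depends on the data only through it.

```latex
% Sufficiency via the factorization theorem: X_1,...,X_n iid Bernoulli(p).
% Writing t = x_1 + ... + x_n, the joint mass function factorises as
\[
  f(x_1,\dots,x_n;p)
    = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i}
    = \underbrace{p^{\,t}(1-p)^{\,n-t}}_{g(t;\,p)}
      \cdot \underbrace{1}_{h(x)},
  \qquad t=\sum_{i=1}^{n} x_i,
\]
% so T = X_1 + ... + X_n is sufficient for p, and the likelihood
% L(p) = p^t (1-p)^{n-t} depends on the data only through t.
```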
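The point-estimation ideas can also be checked numerically. The Python sketch below (the parameter value, sample size, and seed are illustrative choices, not course material) estimates the bias and mean squared error of the Bernoulli maximum likelihood estimator by Monte Carlo and compares the MSE with the Cramér-Rao bound p(1-p)/n, which this estimator attains.

```python
# Illustrative Monte Carlo check: the Bernoulli MLE p_hat = sample mean
# is unbiased, and its MSE matches the Cramer-Rao bound p(1-p)/n.
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 50, 100_000

samples = rng.binomial(1, p, size=(reps, n))
p_hat = samples.mean(axis=1)      # MLE of p in each replication

bias = p_hat.mean() - p           # close to 0: the MLE is unbiased here
mse = ((p_hat - p) ** 2).mean()   # close to the Cramer-Rao bound
cr_bound = p * (1 - p) / n        # lower bound for unbiased estimators

print(f"bias ~ {bias:.5f}, MSE ~ {mse:.5f}, CR bound = {cr_bound:.5f}")
```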
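For interval estimation, the textbook pivotal-quantity example (assuming a normal sample with known variance) shows how a pivot delivers an interval with exact coverage probability.

```latex
% Pivotal quantity for X_1,...,X_n iid N(mu, sigma^2) with sigma known:
% Z below has a N(0,1) distribution whatever the value of mu, so it is a pivot.
\[
  Z=\frac{\bar{X}-\mu}{\sigma/\sqrt{n}} \sim N(0,1)
  \;\Longrightarrow\;
  P\!\left(\bar{X}-z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}
     \le \mu \le
     \bar{X}+z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right)=1-\alpha,
\]
% giving an interval with exact coverage probability 1 - alpha, where
% z_{alpha/2} is the upper alpha/2 point of the standard normal distribution.
```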
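Finally, the Neyman-Pearson construction for simple hypotheses can be sketched concretely. In the fragment below (the hypotheses, n, and alpha are invented for illustration), the likelihood ratio for Bernoulli data is monotone in the sample total, so the most powerful test rejects for large totals; SciPy's binomial distribution gives the critical value, size, and power.

```python
# Sketch of a Neyman-Pearson most powerful test for the simple hypotheses
# H0: p = 0.5 vs H1: p = 0.7 based on n Bernoulli trials. The likelihood
# ratio is increasing in t = sum(x), so the test rejects for large t; the
# critical value comes from the Binomial(n, p0) null distribution.
import numpy as np
from scipy.stats import binom

n, alpha = 50, 0.05
p0, p1 = 0.5, 0.7

# Smallest c with P(T >= c | H0) <= alpha (non-randomised version).
c = int(binom.ppf(1 - alpha, n, p0)) + 1
size = binom.sf(c - 1, n, p0)    # actual size of the test
power = binom.sf(c - 1, n, p1)   # power against the alternative H1

print(f"reject when T >= {c}: size = {size:.4f}, power = {power:.4f}")
```

Because T is discrete, the achievable size falls slightly below alpha; the Neyman-Pearson lemma's exact-size test would randomise at the boundary point c - 1.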
If you complete the course successfully, you should be able to:
- Explain the principles of data reduction
- Judge the quality of estimators
- Choose appropriate methods of inference to tackle real problems
Recommended reading:
- Casella, G. and R.L. Berger. Statistical Inference. Duxbury.
- Hogg, R.V. and E.A. Tanis. Probability and Statistical Inference. Pearson/Prentice Hall.