Proof of the Rao–Blackwell Theorem
I walk the reader through a proof of the Rao–Blackwell Theorem.
The Rao–Blackwell Theorem (Rao, 1992; Blackwell, 1947) states:
Let $\hat{\theta}$ be an unbiased estimator of a parameter $\theta$ with a finite second moment for all $\theta$. Let $T$ be a sufficient statistic for $\theta$, and define $\hat{\theta}^* := \mathbb{E}[\hat{\theta} \mid T]$. Then for all $\theta$,
$$\mathbb{E}[\hat{\theta}^*] = \theta,$$
$$\mathrm{Var}(\hat{\theta}^*) \leq \mathrm{Var}(\hat{\theta}).$$
This is a remarkably general result. In words, it says: if we have an unbiased estimator $\hat{\theta}$ of our statistical parameter $\theta$ and a sufficient statistic $T$ for that parameter, then we can construct another estimator $\hat{\theta}^* = \mathbb{E}[\hat{\theta} \mid T]$ such that this new estimator is still unbiased and may have less variance. Sufficiency matters here: it guarantees that the conditional expectation $\mathbb{E}[\hat{\theta} \mid T]$ does not depend on $\theta$, so $\hat{\theta}^*$ is a genuine estimator.
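To make this concrete, here is a small simulation of my own (not part of the original proof). For i.i.d. Bernoulli$(p)$ data, the first observation $X_1$ is an unbiased but noisy estimator of $p$, and $T = \sum_i X_i$ is sufficient; conditioning gives $\mathbb{E}[X_1 \mid T] = T/n$, the sample mean.

```python
import numpy as np

# Rao-Blackwellization in the Bernoulli(p) model (illustrative setup):
# crude estimator = first observation; conditioning on the sufficient
# statistic T = sum of successes yields E[X_1 | T] = T / n.
rng = np.random.default_rng(0)
p, n, trials = 0.3, 10, 100_000

X = rng.binomial(1, p, size=(trials, n))   # each row: one sample of size n
crude = X[:, 0].astype(float)              # unbiased but high-variance
T = X.sum(axis=1)                          # sufficient statistic
rb = T / n                                 # Rao-Blackwellized estimator

print(crude.mean(), rb.mean())   # both close to p = 0.3 (unbiasedness)
print(crude.var(), rb.var())     # near p(1-p) = 0.21 vs p(1-p)/n = 0.021
```

Both estimators are unbiased, but the Rao–Blackwellized one has roughly $1/n$ the variance in this example.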
The proof of the first claim is
$$\mathbb{E}[\hat{\theta}^*] = \mathbb{E}\big[\mathbb{E}[\hat{\theta} \mid T]\big] = \mathbb{E}[\hat{\theta}] = \theta.$$
The first equality just applies our definition of the new estimator $\hat{\theta}^*$. The next applies the law of total expectation. The last holds because $\hat{\theta}$ is unbiased.
The proof of the second claim is
$$\begin{aligned}
\mathrm{Var}(\hat{\theta}^*) &= \mathbb{E}\big[(\hat{\theta}^*)^2\big] - \mathbb{E}[\hat{\theta}^*]^2 \\
&= \mathbb{E}\big[\mathbb{E}[\hat{\theta} \mid T]^2\big] - \mathbb{E}\big[\mathbb{E}[\hat{\theta} \mid T]\big]^2 \\
&= \mathbb{E}\big[\mathbb{E}[\hat{\theta} \mid T]^2\big] - \mathbb{E}[\hat{\theta}]^2 \\
&\leq \mathbb{E}\big[\mathbb{E}[\hat{\theta}^2 \mid T]\big] - \mathbb{E}[\hat{\theta}]^2 \\
&= \mathbb{E}[\hat{\theta}^2] - \mathbb{E}[\hat{\theta}]^2 \\
&= \mathrm{Var}(\hat{\theta}).
\end{aligned}$$
Once again, the first two equalities just use the definition of $\hat{\theta}^*$ and the definition of variance. The third equality holds by the law of total expectation, since $\mathbb{E}[\mathbb{E}[\hat{\theta} \mid T]] = \mathbb{E}[\hat{\theta}]$. The inequality holds because $\mathbb{E}[\hat{\theta} \mid T]^2 \leq \mathbb{E}[\hat{\theta}^2 \mid T]$ by the conditional Jensen's inequality; equivalently, the conditional variance $\mathrm{Var}(\hat{\theta} \mid T)$ is nonnegative. The final equalities apply the law of total expectation once more and the definition of variance.
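The inequality amounts to dropping the nonnegative term $\mathbb{E}[\mathrm{Var}(\hat{\theta} \mid T)]$ from the law of total variance, $\mathrm{Var}(\hat{\theta}) = \mathbb{E}[\mathrm{Var}(\hat{\theta} \mid T)] + \mathrm{Var}(\mathbb{E}[\hat{\theta} \mid T])$. As a quick sanity check of that decomposition (my own illustration, in a Bernoulli$(p)$ setup): conditional on $T = t$, $X_1$ is Bernoulli$(t/n)$ by exchangeability, so both conditional moments have closed forms.

```python
import numpy as np

# Numeric check of the law of total variance for X_1 given T = sum(X),
# with X ~ Bernoulli(p)^n (illustrative setup). Conditional on T, X_1 is
# Bernoulli(T/n), so Var(X_1 | T) = (T/n)(1 - T/n) in closed form.
rng = np.random.default_rng(1)
p, n, trials = 0.3, 10, 200_000
X = rng.binomial(1, p, size=(trials, n))
T = X.sum(axis=1)

cond_mean = T / n                       # E[X_1 | T]
cond_var = cond_mean * (1 - cond_mean)  # Var(X_1 | T)

lhs = X[:, 0].astype(float).var()        # Var(X_1), about p(1-p) = 0.21
rhs = cond_var.mean() + cond_mean.var()  # E[Var(X_1|T)] + Var(E[X_1|T])
print(lhs, rhs)  # the two sides agree up to Monte Carlo error
```

Dropping `cond_var.mean()` from the right-hand side leaves exactly the variance of the Rao–Blackwellized estimator, which is why it can never exceed the original.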
In my mind, the Rao–Blackwell Theorem is remarkable in that (1) the proof is quite simple and (2) the result is quite general.
Acknowledgements
This proof is based on a blackboard proof by Matias Cattaneo in Princeton’s Statistical Theory and Methods.
- Rao, C. R. (1992). Information and the accuracy attainable in the estimation of statistical parameters. In Breakthroughs in statistics (pp. 235–247). Springer.
- Blackwell, D. (1947). Conditional expectation and unbiased sequential estimation. The Annals of Mathematical Statistics, 105–110.