Introduction

A Brief History of Statistics

Statistics, the science of learning from data, is a relatively new discipline. One can divide the history of Statistics into three periods using the years 1900 and 1970.

In the early days of Statistics (before 1900), much of the statistical work was devoted to data analysis, including the construction of graphical displays. There was little work done on inferential statistics, although the foundations of Bayesian inference had been developed by Bayes and Laplace in the 18th century.

The foundations of statistical inference were developed in the period between 1900 and 1970. Karl Pearson developed the chi-square goodness-of-fit procedure around the year 1900, and R. A. Fisher developed the notions of sufficiency and maximum likelihood in this period. Statistical procedures are evaluated in terms of their long-run behavior in repeated sampling; for this reason, these procedures are known as frequentist methods. Properties such as unbiasedness and mean squared error are used to evaluate procedures. Some prominent Bayesians, such as Harold Jeffreys, Jimmie Savage, and I. J. Good, made substantial contributions during this period, but frequentist methods became the standard inferential methods in the statistician's toolkit.

In the last 40 years, there has been great development of new statistical methods, especially computationally demanding methods such as the bootstrap and nonparametric smoothing. Due to the availability of high-speed computers together with new simulation-based fitting algorithms, Bayesian methods have become increasingly popular. In contrast to the middle period of Statistics, when frequentist methods were dominant, we currently live in a frequentist/Bayesian world in which statisticians routinely use Bayesian methods in situations where this inferential perspective has particular advantages.
An Example

One fundamental inference problem is learning about the association pattern in a 2 by 2 contingency table. Suppose we sample data values that are categorized with respect to the presence and absence of two variables A and B, and we observe the following table of counts....
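To make the 2 by 2 table problem concrete, here is a minimal sketch, using made-up illustrative counts (the counts and variable labels are assumptions, not taken from the text), of two standard frequentist summaries of association in such a table: the sample odds ratio and the Pearson chi-square statistic.

```python
# Hypothetical 2 x 2 table of counts (illustrative values only):
#
#                B present   B absent
#   A present       a = 20     b = 15
#   A absent        c = 10     d = 30

a, b, c, d = 20, 15, 10, 30
n = a + b + c + d

# Sample odds ratio: a common summary of association in a 2 x 2 table.
# Values far from 1 suggest association between A and B.
odds_ratio = (a * d) / (b * c)

# Pearson chi-square statistic for independence: compare each observed
# count with the count expected under independence of A and B.
observed = [[a, b], [c, d]]
row_totals = (a + b, c + d)
col_totals = (a + c, b + d)

chi_sq = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi_sq += (observed[i][j] - expected) ** 2 / expected

print(f"sample odds ratio = {odds_ratio:.2f}")
print(f"chi-square statistic = {chi_sq:.2f}")
```

A Bayesian analysis of the same table would instead place a prior on the cell probabilities and summarize the posterior distribution of the association; the text develops that perspective in what follows.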
This note was uploaded on 01/01/2011 for the course STAT 665 taught by Professor Albert during the Spring '10 term at Bowling Green.