Stat 5102 Lecture Slides: Deck 4
Charles J. Geyer, School of Statistics, University of Minnesota

Bayesian Inference

Now for something completely different. Everything we have done up to now is frequentist statistics. Bayesian statistics is very different. Bayesians don't do confidence intervals and hypothesis tests. Bayesians don't use sampling distributions of estimators. Modern Bayesians aren't even interested in point estimators.

So what do they do? Bayesians treat parameters as random variables. To a Bayesian, probability is the only way to describe uncertainty. Things not known for certain, like values of parameters, must be described by a probability distribution.

Bayesian Inference (cont.)

Suppose you are uncertain about something. Then your uncertainty is described by a probability distribution called your prior distribution.

Suppose you obtain some data relevant to that thing. The data change your uncertainty, which is then described by a new probability distribution called your posterior distribution. The posterior distribution reflects the information in both the prior distribution and the data.

Most of Bayesian inference is about how to go from prior to posterior.

Bayesian Inference (cont.)

The way Bayesians go from prior to posterior is to use the laws of conditional probability, sometimes called in this context Bayes' rule or Bayes' theorem.

Suppose we have a PDF g for the prior distribution of the parameter θ, and suppose we obtain data x whose conditional PDF given θ is f. Then the joint distribution of data and parameter is conditional times marginal:

    f(x | θ) g(θ)

This may look strange because, up to this point in the course, you have been brainwashed in the frequentist paradigm. Here both x and θ are random variables.

Bayesian Inference (cont.)
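The prior-to-posterior recipe can be sketched numerically. The following is a minimal illustration (mine, not from the slides) that discretizes the parameter onto a grid, so the integral in the denominator of Bayes' rule becomes a sum; the data values n = 10, x = 7 and the flat prior are arbitrary choices for the sketch.

```python
# Prior-to-posterior updating on a discrete grid (illustrative sketch).
# The parameter p is restricted to a grid, so "joint divided by marginal"
# becomes an elementwise product followed by a normalizing sum.
from math import comb

n, x = 10, 7                              # hypothetical data: 7 successes in 10 trials
grid = [i / 100 for i in range(1, 100)]   # candidate values of p in (0, 1)
prior = [1 / len(grid)] * len(grid)       # flat prior g(p) on the grid

# likelihood f(x | p) at each grid point: the binomial PMF
like = [comb(n, x) * p ** x * (1 - p) ** (n - x) for p in grid]

# joint = conditional times marginal; divide by the "integral" (here a sum)
joint = [f * g for f, g in zip(like, prior)]
marginal = sum(joint)
post = [j / marginal for j in joint]      # posterior h(p | x) on the grid

post_mean = sum(p * h for p, h in zip(grid, post))
```

With a flat prior, this grid posterior approximates a Beta(x + 1, n − x + 1) density, so post_mean lands near (x + 1)/(n + 2) ≈ 0.667, in agreement with the conjugate analysis below.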
The correct posterior distribution, according to the Bayesian paradigm, is the conditional distribution of θ given x, which is joint divided by marginal:

    h(θ | x) = f(x | θ) g(θ) / ∫ f(x | θ) g(θ) dθ

Often we do not need to do the integral. If we recognize that f(x | θ) g(θ) is, except for constants, the PDF of a brand-name distribution, then that distribution must be the posterior.

Binomial Data, Beta Prior

Suppose the prior distribution for p is Beta(α₁, α₂) and the conditional distribution of x given p is Bin(n, p). Then

    f(x | p) = (n choose x) p^x (1 − p)^(n − x)

    g(p) = [Γ(α₁ + α₂) / (Γ(α₁) Γ(α₂))] p^(α₁ − 1) (1 − p)^(α₂ − 1)

and

    f(x | p) g(p) = (n choose x) [Γ(α₁ + α₂) / (Γ(α₁) Γ(α₂))] p^(x + α₁ − 1) (1 − p)^(n − x + α₂ − 1)

and this, considered as a function of p for fixed x, is, except for constants, the PDF of a Beta(x + α₁, n − x + α₂) distribution.
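The "except for constants" argument above can be checked numerically: if f(x | p) g(p) really is a Beta(x + α₁, n − x + α₂) density up to a constant, then dividing the former by the latter should give the same number at every p. A small self-check (my sketch, not from the slides; the hyperparameter and data values are arbitrary):

```python
# Verify the Beta-binomial conjugacy claim at several values of p.
from math import comb, gamma

def beta_pdf(p, a1, a2):
    # Beta(a1, a2) density at p
    return gamma(a1 + a2) / (gamma(a1) * gamma(a2)) * p ** (a1 - 1) * (1 - p) ** (a2 - 1)

a1, a2 = 2.0, 3.0   # hypothetical prior: Beta(alpha1, alpha2)
n, x = 10, 7        # hypothetical data: 7 successes in 10 trials

ratios = []
for p in (0.2, 0.4, 0.6, 0.8):
    unnorm = comb(n, x) * p ** x * (1 - p) ** (n - x) * beta_pdf(p, a1, a2)
    ratios.append(unnorm / beta_pdf(p, x + a1, n - x + a2))

# If the conjugacy claim holds, every entry of `ratios` is the same
# constant: the normalizing constant the proportionality argument drops.
```

This is exactly the workflow of the "recognize the brand-name distribution" trick: match the p-dependent factors and let the constants take care of themselves.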
This note was uploaded on 10/28/2010 for the course STAT 2102 taught by Professor Geyer during the Spring '09 term at Minnesota.