Stat C180/C236/Sanchez
Introduction to Bayesian Statistics: Introduction and Examples
Fall 2010
Reading: Hoff, chapter 1 and Back Matter (list of probability distributions)

Bayesian Learning (1)

1. Statistical induction is the process of learning about the general characteristics of a population from a subset of members of that population.
2. Numerical values of population characteristics are typically expressed in terms of a parameter θ.
3. Numerical descriptions of the subset make up a dataset y.

Bayesian Learning (2)

- Before a dataset is obtained, the numerical values of both the population characteristics and the dataset are uncertain.
- After a dataset y is obtained, the information it contains can be used to decrease our uncertainty about the population characteristics.
- Quantifying this change in uncertainty is the purpose of Bayesian inference.

Bayesian Learning (3)

- The sample space Y is the set of all possible datasets, from which a single dataset y will result.
- The parameter space Θ is the set of possible parameter values, from which we hope to identify the value that best represents the true population characteristics.
- The idealized form of Bayesian learning begins with a numerical formulation of joint beliefs about y and θ, expressed in terms of probability distributions over Y and Θ.

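As a concrete illustration (in the spirit of the binomial examples in Hoff, chapter 1, with the specifics assumed here rather than taken from the slides): suppose we record a yes/no response from each of n surveyed individuals and summarize the data by the number of "yes" responses. One natural choice of spaces is then

    Y = {0, 1, ..., n},    Θ = [0, 1],

where a value θ ∈ Θ is interpreted as the proportion of the population that would answer "yes".
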
Bayesian Learning (4)

1. For each numerical value θ ∈ Θ, our prior distribution p(θ) describes our belief that θ represents the true population characteristics.
2. For each θ ∈ Θ and y ∈ Y, our sampling model p(y | θ) describes our belief that y would be the outcome of our study if we knew θ to be true.

Once we obtain the data y, the last step is to update our beliefs about θ:

3. For each numerical value θ ∈ Θ, our posterior distribution p(θ | y) describes our belief that θ is the true value, having observed dataset y.

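To make steps 1 and 2 concrete, here is a minimal Python sketch (an illustration added here, not part of the slides) encoding a prior and a sampling model for the hypothetical binomial survey above, with Θ discretized onto a finite grid; the sample size n = 20 and the names theta_grid and sampling_model are assumptions of this example.

    import numpy as np
    from scipy.stats import binom

    n = 20                                   # assumed sample size, for illustration only
    theta_grid = np.linspace(0.0, 1.0, 101)  # discretized parameter space Θ = [0, 1]

    # Prior p(θ): a uniform prior over the grid (a modeling assumption, not a rule).
    prior = np.full_like(theta_grid, 1.0 / len(theta_grid))

    # Sampling model p(y | θ): probability of y "yes" responses out of n, given θ.
    def sampling_model(y, theta):
        return binom.pmf(y, n, theta)

Discretizing Θ is only a convenience: it turns every distribution into a plain array, so the update on the next slide can be written as elementwise arithmetic.
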
Bayesian Learning (5)

The posterior distribution is obtained from the prior distribution and the sampling model via Bayes' rule:

    p(θ | y) = p(y | θ) p(θ) / ∫_Θ p(y | θ̃) p(θ̃) dθ̃        (1)

It is important to note that Bayes' rule does not tell us what our beliefs should be; it tells us how they should change after seeing new information.

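Equation (1) can be evaluated directly on the discretized Θ from the sketch above: the integral in the denominator becomes a sum over the grid. A minimal, self-contained illustration follows; the observed value y_obs = 7 is assumed purely for this example.

    import numpy as np
    from scipy.stats import binom

    n, y_obs = 20, 7                          # assumed data: 7 "yes" responses out of 20
    theta_grid = np.linspace(0.0, 1.0, 101)   # discretized Θ = [0, 1]
    prior = np.full_like(theta_grid, 1.0 / len(theta_grid))  # uniform prior p(θ)

    likelihood = binom.pmf(y_obs, n, theta_grid)  # p(y | θ) at each grid value of θ

    # Bayes' rule (1): posterior ∝ p(y | θ) p(θ); the grid sum stands in for the
    # integral over Θ in the denominator.
    unnormalized = likelihood * prior
    posterior = unnormalized / unnormalized.sum()

    print(theta_grid[np.argmax(posterior)])   # posterior mode; 0.35 = 7/20 here

With a uniform prior the posterior is proportional to the likelihood, so its mode sits at the sample proportion; a non-uniform prior would shift the whole curve through the same two multiply-and-normalize lines.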