Lecture 11: Bayesian Data Analysis


Until now, we have been fitting logistic (logit) and loglinear regression models to categorical data. The utility of these models is that certain parameters have interpretations as odds ratios or relative risks on the log scale.

When you look at the output produced by either R or SAS, you see that the parameter estimates and their standard errors are calculated using the Fisher scoring algorithm. Based on the data that you input and the model assumptions that you specify (e.g., binomial, Poisson), the Fisher scoring algorithm maximizes a likelihood function. The likelihood function is a probability-based function of the data and the parameters in your model. Fisher scoring maximizes this function with respect to the parameters, which is why the parameter estimates are called maximum likelihood estimates (MLEs). Under the probability model, the MLEs are the most likely parameter values, given the data. For the logit and loglinear models we have considered, the likelihood function is nonlinear in the parameters, and the Fisher scoring algorithm iterates until convergence to the MLEs. For example, you may have noticed that R displays the number of iterations until convergence of the Fisher scoring algorithm.

From our applied perspective, the estimation procedure is transparent to us since R or SAS does all the work. However, it is important for us to understand, at least in some sense, the technical aspects of the fitting process, so that we are aware of the differences between classical or frequentist data analysis, which we have studied so far, and Bayesian data analysis, which we will study now. Instead of using R or SAS, we will now be using WinBUGS (Bayesian Inference Using Gibbs Sampling) to perform estimation.
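To make the iteration concrete, here is a minimal sketch of Fisher scoring for the simplest possible case: a one-parameter logistic model P(y = 1 | x) = 1 / (1 + exp(-beta * x)) with no intercept. This is an illustrative toy, not R's or SAS's actual implementation; the function name and data are hypothetical. Each iteration computes the score (derivative of the log-likelihood) and the Fisher information, then takes a Newton-type step, stopping at convergence, just as R's iteration count reflects.

```python
import math

def fisher_scoring_logit(x, y, beta=0.0, tol=1e-8, max_iter=25):
    """Fit P(y=1|x) = 1/(1+exp(-beta*x)) by Fisher scoring.

    Returns the MLE of beta and the number of iterations used.
    (Toy sketch: one parameter, no intercept, no separation checks.)
    """
    for it in range(1, max_iter + 1):
        # Fitted probabilities at the current beta
        p = [1.0 / (1.0 + math.exp(-beta * xi)) for xi in x]
        # Score: derivative of the log-likelihood w.r.t. beta
        score = sum(xi * (yi - pi) for xi, yi, pi in zip(x, y, p))
        # Fisher information: expected negative second derivative
        info = sum(xi * xi * pi * (1.0 - pi) for xi, pi in zip(x, p))
        step = score / info
        beta += step
        if abs(step) < tol:  # converged to the MLE
            return beta, it
    return beta, max_iter
```

With several parameters the same update becomes a matrix equation (information matrix times step equals score vector), and the inverse information matrix supplies the standard errors that R and SAS report.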

This note was uploaded on 02/07/2010 for the course STATS 620-202 taught by Professor R during the Two '09 term at University of Melbourne.

