note10_regression



MATH-440 Linear Regression

Slide 18

    ... breaks=50)
    hist(sigma2, main="Variance", xlab=expression(sigma^2), prob=T, breaks=50)

    [Figure: posterior density histograms of the coefficients beta1 (NESTING),
    beta2 (SIZE), beta3 (STATUS), and the variance sigma^2]

Slide 19

Below are some Monte Carlo summary statistics for each parameter:

    apply(beta, 2, mean)
    [1]  0.4307409  0.2651380 -0.6522922  0.5046479

    apply(beta, 2, quantile, c(0.025, 0.5, 0.975))
                [,1]      [,2]       [,3]      [,4]
    2.5%  0.01606187 0.1913462 -0.9872335 0.1387419
    50%   0.43047972 0.2651018 -0.6513544 0.5053110
    97.5% 0.84611036 0.3388107 -0.3179631 0.8671088

    quantile(sigma2, c(.025, 0.5, 0.975))
         2.5%       50%     97.5%
    0.3042492 0.4304378 0.6357527

Since we used a non-informative prior, the posterior summaries are equivalent
to the ordinary regression estimates.

Slide 20

Next, suppose we are interested in estimating the mean log extinction time for
four new sets of observations with the following covariates:

    Covariate set   Nesting pairs   Size    Status
    A               4               small   migrant
    B               4               small   resident
    C               4               large   migrant
    D               4               large   resident

Slide 21

For the classical approach, we can use the R function predict:

    predict(fit, data.frame(NESTING=rep(4,4), SIZE=c(rep(0,2),rep(1,2)),
            STATUS=rep(c(0,1),2)), interval="prediction")

            fit         lwr      upr
    1 1.4909277  0.13467449 2.847181
    2 1.9950931  0.66191816 3.328268
    3 0.8387294 -0.51147467 2.188934
    4 1.3428949  0.01251361 2.673276

Slide 22

For Bayesian inference, we can simulate draws from the posterior predictive
distribution (rmnorm below is provided by the mnormt package):

    xnew = as.matrix(rbind(c(1,4,0,0), c(1,4,0,1),
                           c(1,4,1,0), c(1,4,1,1)))
    nnew = dim(xnew)[1]
    ytilde = matrix(NA, T, nnew)
    for(i in 1:T) {
      ytilde[i,] = rmnorm(1, xnew %*% beta[i,], sigma2[i]*diag(nnew))
    }
    c.labels = c("A", "B", "C", "D")
    par(mfrow=c(2,2))
    for(j in 1:4) {
      hist(ytilde[,j], main=paste("Covariate set", c.labels[j]),
           xlab="log time", prob=T, breaks=20)
    }

    [Figure: posterior predictive histograms of log extinction time for
    covariate sets A-D]
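The matrices beta and sigma2 summarized above are posterior draws generated on earlier slides not shown in this preview. Under the flat prior p(beta, sigma^2) proportional to 1/sigma^2, they can be simulated directly without MCMC: sigma^2 follows a scaled inverse chi-square distribution, and beta given sigma^2 is multivariate normal around the least-squares estimate. The sketch below illustrates this with simulated data; the design matrix, response, and sample sizes are made up for illustration and are not the extinction dataset from the slides.

```r
## Direct posterior simulation for linear regression under the
## non-informative prior p(beta, sigma^2) proportional to 1/sigma^2.
## X, y, n, and p here are illustrative, not the course's dataset.
set.seed(1)
n <- 62; p <- 4
X <- cbind(1, rpois(n, 4), rbinom(n, 1, 0.5), rbinom(n, 1, 0.5))
y <- X %*% c(0.4, 0.25, -0.65, 0.5) + rnorm(n, sd = 0.65)

XtXinv  <- solve(crossprod(X))            # (X'X)^{-1}
betahat <- XtXinv %*% crossprod(X, y)     # least-squares estimate
s2      <- sum((y - X %*% betahat)^2) / (n - p)

T <- 5000                                 # number of posterior draws
# sigma^2 | y  ~  Scaled-Inv-chi^2(n - p, s2)
sigma2 <- (n - p) * s2 / rchisq(T, df = n - p)
beta   <- matrix(NA, T, p)
for (i in 1:T) {
  # beta | sigma^2, y  ~  N(betahat, sigma^2 * (X'X)^{-1})
  beta[i, ] <- betahat + t(chol(sigma2[i] * XtXinv)) %*% rnorm(p)
}

round(apply(beta, 2, mean), 3)  # posterior means, close to betahat
```

With a flat prior the posterior means track the least-squares estimates, which is why the slides note that the Bayesian summaries match the ordinary regression output.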

This note was uploaded on 02/22/2013 for the course MATH 440 taught by Professor Tadesse during the Spring '13 term at Georgetown.
