note10_regression


MATH-440 Linear Regression

Slide 23

[Figure: density plots of log extinction time for covariate sets A–D.]

Slide 24

cbind(apply(ytilde, 2, mean), t(apply(ytilde, 2, quantile, c(0.025, 0.975))))

                    2.5%    97.5%
[1,] 1.4926618  0.14143269 2.839166
[2,] 1.9932358  0.66109645 3.344944
[3,] 0.8362441 -0.51500608 2.182603
[4,] 1.3452031  0.01216690 2.677539

Slide 25

Let us check if the observations are consistent with the fitted model.

• Let y*_i denote a future log extinction time for a bird with covariate vector x_i.
• We can simulate draws of the posterior predictive distributions for all y*_1, ..., y*_62.

ystar = matrix(NA, T, n)
for(i in 1:T) {
  # one draw of (y*_1, ..., y*_n) for each retained (beta, sigma2) draw
  ystar[i,] = rmnorm(1, x %*% beta[i,], sigma2[i] * diag(n))
}
cbind(y, apply(ystar, 2, mean), t(apply(ystar, 2, quantile, c(0.025, 0.975))))

Slide 26

pred.ci = apply(ystar, 2, quantile, c(0.025, 0.975))
par(mfrow=c(1,1))
matplot(rbind(1:n, 1:n), pred.ci, type="l", lty=1, xlab="INDEX", ylab="Log time")
points(1:n, LOGTIME, pch=19)
out = (LOGTIME > pred.ci[2,])
text((1:n)[out], LOGTIME[out], label=SPECIES[out], pos=4)

• We summarize each predictive distribution by its 95% credible interval and graph these intervals with the matplot command.
• The actual observed log extinction times y_1, ..., y_n are plotted as solid dots.
• Any point that falls outside its 95% interval is a possible outlier.

Slide 27

[Figure: 95% posterior predictive intervals plotted against INDEX, with the observed log extinction times as dots; the Raven and Skylark observations lie above their intervals.]

Slide 28: Zellner's g-prior

  β | σ² ∼ N_k(β0, g σ² (XᵀX)⁻¹)
  σ² ∼ Inv-Gamma(ν0/2, ν0 σ0²/2)

• The constant g reflects the amount of information in the data relative to the prior; if one believes strongly in the prior guess, one would choose a small value for g.
• A nice feature of the g-prior is that the posterior distribution has a relatively simple functional form:

  p(β, σ² | y) = p(β | y, σ²) p(σ² | y)

Slide 29

where

  β | y, σ² ∼ N_k(m, V)
  m = (1/(g+1)) β0 + (g/(g+1)) (XᵀX)⁻¹ Xᵀ y
  V = (g/(g+1)) σ² (XᵀX)⁻¹

and

  σ² | y, X ∼ Inv-Gamma( (ν0 + n)/2, (ν0 σ0² + S_g)/2 ),  where
  S_g = yᵀ ( I − (g/(g+1)) X (XᵀX)⁻¹ Xᵀ ) y

Slide 30

Using a semi-conjugate prior distribution: β ∼ ..., σ² ∼ ...
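The g-prior posterior on Slides 28–29 can be sampled directly, without MCMC. Below is a minimal sketch, not taken from the slides: it assumes x and y are the design matrix and response vector from the earlier code, takes the prior guess β0 = 0 (the case in which S_g has the simple form shown on Slide 29), and uses illustrative values for g, ν0, and σ0²; rmnorm() is from the mnormt package used above.

## Direct Monte Carlo from the g-prior posterior (sketch; beta0 = 0 assumed)
library(mnormt)

T   = 1000                 # number of posterior draws
n   = nrow(x); k = ncol(x)
g   = n                    # illustrative choice of g
nu0 = 1; s20 = 1           # illustrative prior parameters for sigma^2

xtxinv = solve(t(x) %*% x)
m  = (g / (g + 1)) * xtxinv %*% t(x) %*% y                          # posterior mean of beta
Sg = t(y) %*% (diag(n) - (g / (g + 1)) * x %*% xtxinv %*% t(x)) %*% y

## sigma^2 | y ~ Inv-Gamma((nu0 + n)/2, (nu0*s20 + Sg)/2)
sigma2 = 1 / rgamma(T, (nu0 + n) / 2, (nu0 * s20 + c(Sg)) / 2)

## beta | y, sigma^2 ~ N_k(m, (g/(g+1)) * sigma^2 * (X'X)^{-1})
beta = matrix(NA, T, k)
for (i in 1:T) {
  beta[i, ] = rmnorm(1, c(m), (g / (g + 1)) * sigma2[i] * xtxinv)
}

The resulting beta and sigma2 draws can then be plugged into the posterior predictive check of Slides 25–27 in place of the earlier MCMC output.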