Using this result, a $(1-\alpha)$ prediction interval for a new observation $Y_{h(\text{new})}$ is
\[
\hat{Y}_h \pm t_{1-\alpha/2,\, n-2}\, \mathrm{se}_{\mathrm{pred}} .
\]
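As a numerical illustration, here is a minimal sketch of this interval for simple linear regression. The data, the new point $x_h$, and the level $\alpha$ are invented for illustration, and the sketch assumes the usual prediction standard error $\mathrm{se}_{\mathrm{pred}} = \sqrt{\hat\sigma^2 \left(1 + 1/n + (x_h - \bar x)^2 / \sum_i (x_i - \bar x)^2\right)}$ with $\hat\sigma^2 = S^2/(n-2)$, consistent with the notes.

```python
import numpy as np
from scipy import stats

# Hypothetical data; x, Y, x_h and alpha are assumptions for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
x_h, alpha = 3.5, 0.05
n = len(x)

# Least squares fit of Y_i = beta0 + beta1 * x_i + eps_i.
beta1 = np.sum((x - x.mean()) * (Y - Y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = Y.mean() - beta1 * x.mean()
Y_hat_h = beta0 + beta1 * x_h

# Unbiased error variance estimate and the standard prediction standard error.
sigma2_hat = np.sum((Y - beta0 - beta1 * x) ** 2) / (n - 2)
se_pred = np.sqrt(sigma2_hat * (1 + 1 / n
                                + (x_h - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)))

# (1 - alpha) prediction interval: Y_hat_h +/- t_{1-alpha/2, n-2} * se_pred.
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
print(Y_hat_h - t_crit * se_pred, Y_hat_h + t_crit * se_pred)
```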
2.4.5 Inference about both $\beta_0$ and $\beta_1$ simultaneously

Suppose that $\beta_0^*$ and $\beta_1^*$ are given numbers and we are interested in testing the following hypothesis:
\[
H_0:\ \beta_0 = \beta_0^* \text{ and } \beta_1 = \beta_1^* \quad \text{versus} \quad H_1:\ \text{at least one is different.} \tag{9}
\]
We shall derive the likelihood ratio test for (9). The likelihood function (7), when maximized over the unconstrained space, yields the MLEs $\hat\beta_0, \hat\beta_1, \hat\sigma^2$. Under the constrained space, $\beta_0$ and $\beta_1$ are fixed at $\beta_0^*$ and $\beta_1^*$, and so
\[
\hat\sigma_0^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2 .
\]
The likelihood ratio statistic reduces to
\[
\Lambda(\mathbf{Y}, \mathbf{x}) = \frac{\sup_{\sigma^2} L(\beta_0^*, \beta_1^*, \sigma^2)}{\sup_{\beta_0, \beta_1, \sigma^2} L(\beta_0, \beta_1, \sigma^2)}
= \left( \frac{\hat\sigma^2}{\hat\sigma_0^2} \right)^{n/2}
= \left[ \frac{\sum_{i=1}^n (Y_i - \hat\beta_0 - \hat\beta_1 x_i)^2}{\sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2} \right]^{n/2} .
\]
The LRT procedure specifies rejecting $H_0$ when $\Lambda(\mathbf{Y}, \mathbf{x}) \le k$, for some $k$ chosen to satisfy the level condition.

Exercise: Show that
\[
\sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2 = S^2 + Q^2 ,
\]
where
\[
S^2 = \sum_{i=1}^n (Y_i - \hat\beta_0 - \hat\beta_1 x_i)^2 , \qquad
Q^2 = n (\hat\beta_0 - \beta_0^*)^2 + \left( \sum_{i=1}^n x_i^2 \right) (\hat\beta_1 - \beta_1^*)^2 + 2 n \bar{x}\, (\hat\beta_0 - \beta_0^*)(\hat\beta_1 - \beta_1^*) .
\]
Thus,
\[
\Lambda(\mathbf{Y}, \mathbf{x}) = \left( \frac{S^2}{S^2 + Q^2} \right)^{n/2} = \left( 1 + \frac{Q^2}{S^2} \right)^{-n/2} .
\]
It can be seen that this is equivalent to rejecting $H_0$ when $Q^2 / S^2 \ge k'$, which in turn is equivalent to
\[
U^2 := \frac{Q^2 / 2}{\tilde\sigma^2} \ge \gamma ,
\]
where $\tilde\sigma^2 = S^2 / (n-2)$.
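For concreteness, the following sketch carries out this test numerically. The data and the null values $\beta_0^*, \beta_1^*$ are invented for illustration; the sketch computes $S^2$, $Q^2$, and $U^2 = (Q^2/2)/\tilde\sigma^2$, and rejects $H_0$ at level $\alpha$ when $U^2$ exceeds the $F_{2,n-2}$ quantile $\gamma = F^{-1}_{2,n-2}(1-\alpha)$ derived below.

```python
import numpy as np
from scipy import stats

# Hypothetical data and null values; all assumptions for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
beta0_star, beta1_star, alpha = 0.0, 2.0, 0.05
n = len(x)

# Unconstrained MLEs (least squares estimates) of beta0 and beta1.
beta1_hat = np.sum((x - x.mean()) * (Y - Y.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = Y.mean() - beta1_hat * x.mean()

# S^2: residual sum of squares at the MLEs; Q^2: the quadratic form above.
S2 = np.sum((Y - beta0_hat - beta1_hat * x) ** 2)
Q2 = (n * (beta0_hat - beta0_star) ** 2
      + np.sum(x ** 2) * (beta1_hat - beta1_star) ** 2
      + 2 * n * x.mean() * (beta0_hat - beta0_star) * (beta1_hat - beta1_star))

# U^2 = (Q^2 / 2) / sigma_tilde^2; under H0, U^2 ~ F_{2, n-2}.
sigma2_tilde = S2 / (n - 2)
U2 = (Q2 / 2) / sigma2_tilde
gamma = stats.f.ppf(1 - alpha, dfn=2, dfd=n - 2)
print("U^2 =", U2, "gamma =", gamma, "reject H0:", U2 >= gamma)
```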
Exercise: Show that, under $H_0$, $Q^2 / \sigma^2 \sim \chi^2_2$. Also show that $Q^2$ and $S^2$ are independent. We know that $S^2 / \sigma^2 \sim \chi^2_{n-2}$. Thus, under $H_0$, $U^2 \sim F_{2, n-2}$, and thus $\gamma = F^{-1}_{2, n-2}(1-\alpha)$.

3 Linear models with normal errors

3.1 Basic theory

This section concerns models for independent responses of the form
\[
Y_i \sim N(\mu_i, \sigma^2), \quad \text{where } \mu_i = x_i^\top \beta
\]
for some known vector of explanatory variables $x_i^\top = (x_{i1}, \ldots, x_{ip})$ and unknown parameter vector $\beta = (\beta_1, \ldots, \beta_p)^\top$, where $p < n$. This is the linear model and is usually written (in vector notation) as
\[
Y = X \beta + \varepsilon ,
\]
where
\[
Y_{n \times 1} = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}, \quad
X_{n \times p} = \begin{pmatrix} x_1^\top \\ \vdots \\ x_n^\top \end{pmatrix}, \quad
\beta_{p \times 1} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
\varepsilon_{n \times 1} = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}, \quad
\varepsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2) .
\]
Sometimes this is written in the more compact notation
\[
Y \sim N_n(X \beta, \sigma^2 I) ,
\]
where $I$ is the $n \times n$ identity matrix. It is usual to assume that the $n \times p$ matrix $X$ has full rank $p$.
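To make the notation concrete, here is a small simulation sketch of one draw from $Y \sim N_n(X\beta, \sigma^2 I)$; the design matrix, $\beta$, and $\sigma$ are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3

# Hypothetical design matrix of full rank p: intercept column plus two covariates.
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([1.0, 2.0, -0.5])   # assumed true parameter vector
sigma = 1.5                          # assumed error standard deviation

# Y = X beta + eps with eps_i i.i.d. N(0, sigma^2), i.e. Y ~ N_n(X beta, sigma^2 I).
eps = rng.normal(scale=sigma, size=n)
Y = X @ beta + eps
print(Y[:5])
```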
3.2 Maximum likelihood estimation

The log-likelihood (up to a constant term) for $(\beta, \sigma^2)$ is
\[
\ell(\beta, \sigma^2) = -\frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^n (Y_i - x_i^\top \beta)^2
= -\frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^n \left( Y_i - \sum_{j=1}^p x_{ij} \beta_j \right)^2 .
\]
An MLE $(\hat\beta, \hat\sigma^2)$ satisfies
\[
0 = \frac{\partial \ell}{\partial \beta_j}(\hat\beta, \hat\sigma^2) = \frac{1}{\hat\sigma^2} \sum_{i=1}^n x_{ij} (Y_i - x_i^\top \hat\beta) \quad \text{for } j = 1, \ldots, p,
\]
i.e.,
\[
\sum_{i=1}^n x_{ij}\, x_i^\top \hat\beta = \sum_{i=1}^n x_{ij} Y_i \quad \text{for } j = 1, \ldots, p,
\]
so $(X^\top X) \hat\beta = X^\top Y$. Since $X^\top X$ is non-singular if $X$ has rank $p$, we have
\[
\hat\beta = (X^\top X)^{-1} X^\top Y .
\]
The least squares estimator of $\beta$ minimizes $\| Y - X \beta \|^2$. Check that this estimator coincides with the MLE when the errors are normally distributed.
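Continuing the simulated example above, the sketch below solves the normal equations $(X^\top X)\hat\beta = X^\top Y$ directly and checks that the result agrees with a generic least-squares solver, in the spirit of the check the text asks for. The data-generating values are the same invented ones as before.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=1.5, size=n)

# MLE / least squares estimate via the normal equations: (X^T X) beta_hat = X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check: the same estimate from a numerical least-squares routine,
# which minimizes ||Y - X beta||^2 directly.
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)

# MLE of sigma^2 (note: divides by n, not n - p).
sigma2_hat = np.sum((Y - X @ beta_hat) ** 2) / n
print(beta_hat, sigma2_hat)
```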