Using this result, a $(1-\alpha)$ prediction interval for a new observation $Y_{h(\mathrm{new})}$ is
\[
\hat{Y}_h \mp t_{1-\alpha/2,\,n-2}\, \mathrm{se}_{\mathrm{pred}}.
\]
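As a numerical sketch of this interval (the toy data and variable names are illustrative, and `se_pred` uses the standard simple-regression formula $s\sqrt{1 + 1/n + (x_h - \bar{x})^2/S_{xx}}$, assumed to agree with the result referenced above):

```python
import numpy as np
from scipy import stats

# Toy data (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
n = len(x)

# Least-squares fit of y = b0 + b1 * x
Sxx = np.sum((x - x.mean())**2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)
s2 = np.sum(resid**2) / (n - 2)      # unbiased estimate of sigma^2

xh = 3.5                             # new covariate value
se_pred = np.sqrt(s2 * (1 + 1/n + (xh - x.mean())**2 / Sxx))

alpha = 0.05
tcrit = stats.t.ppf(1 - alpha/2, df=n - 2)
yhat = b0 + b1 * xh
lo, hi = yhat - tcrit * se_pred, yhat + tcrit * se_pred
print(lo, yhat, hi)
```

The interval is wider than the corresponding confidence interval for the mean response because of the extra "1" inside the square root, reflecting the variability of the new observation itself.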
2.4.5  Inference about both $\beta_0$ and $\beta_1$ simultaneously

Suppose that $\beta_0^*$ and $\beta_1^*$ are given numbers and we are interested in testing the following hypothesis:
\[
H_0 : \beta_0 = \beta_0^* \text{ and } \beta_1 = \beta_1^* \quad \text{versus} \quad H_1 : \text{at least one is different.} \tag{9}
\]
We shall derive the likelihood ratio test for (9).

The likelihood function (7), when maximized over the unconstrained parameter space, yields the MLEs $\hat\beta_0$, $\hat\beta_1$, $\hat\sigma^2$. Under the constrained space, $\beta_0$ and $\beta_1$ are fixed at $\beta_0^*$ and $\beta_1^*$, and so
\[
\hat\sigma_0^2 = \frac{1}{n} \sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2.
\]
The likelihood ratio statistic reduces to
\[
\Lambda(Y, x) = \frac{\sup_{\sigma^2} L(\beta_0^*, \beta_1^*, \sigma^2)}{\sup_{\beta_0, \beta_1, \sigma^2} L(\beta_0, \beta_1, \sigma^2)}
= \left( \frac{\hat\sigma^2}{\hat\sigma_0^2} \right)^{n/2}
= \left[ \frac{\sum_{i=1}^n (Y_i - \hat\beta_0 - \hat\beta_1 x_i)^2}{\sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2} \right]^{n/2}.
\]
The LRT procedure rejects $H_0$ when $\Lambda(Y, x) \le k$, for some $k$ chosen to satisfy the level condition.

Exercise: Show that
\[
\sum_{i=1}^n (Y_i - \beta_0^* - \beta_1^* x_i)^2 = S^2 + Q^2,
\]
where
\[
S^2 = \sum_{i=1}^n (Y_i - \hat\beta_0 - \hat\beta_1 x_i)^2, \qquad
Q^2 = n(\hat\beta_0 - \beta_0^*)^2 + \left( \sum_{i=1}^n x_i^2 \right)(\hat\beta_1 - \beta_1^*)^2 + 2 n \bar{x}\, (\hat\beta_0 - \beta_0^*)(\hat\beta_1 - \beta_1^*).
\]
Thus,
\[
\Lambda(Y, x) = \left( \frac{S^2}{S^2 + Q^2} \right)^{n/2} = \left( 1 + \frac{Q^2}{S^2} \right)^{-n/2}.
\]
It can be seen that rejecting $H_0$ when $\Lambda(Y, x) \le k$ is equivalent to rejecting when $Q^2/S^2 \ge k'$, which in turn is equivalent to
\[
U^2 := \frac{Q^2/2}{\tilde\sigma^2} \ge \gamma,
\]
where $\tilde\sigma^2 = S^2/(n-2)$.
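The decomposition in the exercise can be checked numerically. A minimal sketch (the simulated data, seed, and parameter values are my own choices; the data are generated under $H_0$ so that the true coefficients equal $\beta_0^*, \beta_1^*$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 1.0, n)
beta0_star, beta1_star = 1.0, 2.0
y = beta0_star + beta1_star * x + rng.normal(0, 0.5, n)  # data generated under H0

# Unconstrained MLEs (least squares)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()

# The two pieces of the decomposition
S2 = np.sum((y - b0 - b1 * x)**2)
Q2 = (n * (b0 - beta0_star)**2
      + np.sum(x**2) * (b1 - beta1_star)**2
      + 2 * n * x.mean() * (b0 - beta0_star) * (b1 - beta1_star))

# Left-hand side: constrained residual sum of squares
total = np.sum((y - beta0_star - beta1_star * x)**2)

Lambda = (S2 / (S2 + Q2))**(n / 2)       # likelihood ratio statistic
U2 = (Q2 / 2) / (S2 / (n - 2))           # the F-type statistic
print(total, S2 + Q2, Lambda, U2)
```

The identity `total == S2 + Q2` holds because the least-squares residuals are orthogonal to both the constant vector and $x$, so the cross terms between residuals and the fitted-minus-hypothesized part vanish.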
Exercise: Show that, under $H_0$, $Q^2/\sigma^2 \sim \chi^2_2$. Also show that $Q^2$ and $S^2$ are independent.

We know that $S^2/\sigma^2 \sim \chi^2_{n-2}$. Thus, under $H_0$,
\[
U^2 \sim F_{2,\,n-2},
\]
and thus $\gamma = F^{-1}_{2,\,n-2}(1-\alpha)$.

3  Linear models with normal errors

3.1  Basic theory

This section concerns models for independent responses of the form
\[
Y_i \sim N(\mu_i, \sigma^2), \quad \text{where } \mu_i = x_i^\top \beta,
\]
for some known vector of explanatory variables $x_i^\top = (x_{i1}, \ldots, x_{ip})$ and unknown parameter vector $\beta = (\beta_1, \ldots, \beta_p)^\top$, where $p < n$. This is the linear model, and it is usually written as
\[
Y = X\beta + \varepsilon
\]
(in vector notation), where
\[
Y_{n\times 1} = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}, \quad
X_{n\times p} = \begin{pmatrix} x_1^\top \\ \vdots \\ x_n^\top \end{pmatrix}, \quad
\beta_{p\times 1} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
\varepsilon_{n\times 1} = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}, \quad
\varepsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2).
\]
Sometimes this is written in the more compact notation
\[
Y \sim N_n(X\beta, \sigma^2 I),
\]
where $I$ is the $n \times n$ identity matrix. It is usual to assume that the $n \times p$ matrix $X$ has full rank $p$.
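The matrix form $Y = X\beta + \varepsilon$ translates directly into a simulation. A minimal sketch (the dimensions, parameter values, and seed are illustrative choices, not from the notes; the first column of ones plays the role of an intercept):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
# Full-rank n x p design matrix: intercept column plus random covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, -2.0, 0.5])          # p x 1 parameter vector
sigma = 0.3
eps = rng.normal(0, sigma, size=n)         # iid N(0, sigma^2) errors
Y = X @ beta + eps                         # Y ~ N_n(X beta, sigma^2 I)
print(Y.shape, np.linalg.matrix_rank(X))
```

With continuous random covariates the design has full rank $p$ with probability one, matching the standard assumption.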
3.2  Maximum likelihood estimation

The log-likelihood (up to a constant term) for $(\beta, \sigma^2)$ is
\[
\ell(\beta, \sigma^2) = -\frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (Y_i - x_i^\top\beta)^2
= -\frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n \left( Y_i - \sum_{j=1}^p x_{ij}\beta_j \right)^2.
\]
An MLE $(\hat\beta, \hat\sigma^2)$ satisfies
\[
0 = \frac{\partial}{\partial\beta_j}\,\ell(\hat\beta, \hat\sigma^2) = \frac{1}{\hat\sigma^2}\sum_{i=1}^n x_{ij}(Y_i - x_i^\top\hat\beta), \quad \text{for } j = 1, \ldots, p,
\]
i.e.,
\[
\sum_{i=1}^n x_{ij}\, x_i^\top\hat\beta = \sum_{i=1}^n x_{ij} Y_i \quad \text{for } j = 1, \ldots, p,
\]
so
\[
(X^\top X)\hat\beta = X^\top Y.
\]
Since $X^\top X$ is non-singular when $X$ has rank $p$, we have
\[
\hat\beta = (X^\top X)^{-1} X^\top Y.
\]
The least squares estimator of $\beta$ minimizes $\lVert Y - X\beta \rVert^2$. Check that this estimator coincides with the MLE when the errors are normally distributed.
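The normal equations $(X^\top X)\hat\beta = X^\top Y$ can be solved directly and compared against a least-squares routine, illustrating numerically that the two estimators coincide. A minimal sketch (the simulated design, coefficients, and seed are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([0.5, 1.0, -1.5])
Y = X @ beta + rng.normal(0, 0.2, size=n)

# MLE via the normal equations (X^T X) beta_hat = X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Least squares estimate: minimizes ||Y - X beta||^2
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)

# MLE of sigma^2 divides the residual sum of squares by n (not n - p)
sigma2_hat = np.sum((Y - X @ beta_hat)**2) / n
print(beta_hat, beta_ls, sigma2_hat)
```

In practice one solves the linear system (or uses a QR decomposition, as `lstsq` does) rather than forming $(X^\top X)^{-1}$ explicitly, which is numerically more stable.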