
# MLEpart1_loglikelihoodgradientAndDefOfWmatrix - Full log-likelihood


## Full log-likelihood

The log-likelihood of $\beta$ based on all $n$ observations is

$$
l \triangleq l(\beta \mid y_1, \ldots, y_n) = \ln L(\beta \mid y_1, \ldots, y_n) = \sum_{i=1}^n l_i = \sum_{i=1}^n \left[ \frac{y_i \theta_i - b(\theta_i)}{a_i(\phi)} + c(y_i, \phi) \right].
$$

## Score statistic

The $j$th component of the score statistic vector:

$$
u_j \triangleq \frac{\partial l}{\partial \beta_j} = \sum_{i=1}^n \frac{\partial l_i}{\partial \beta_j}
= \sum_{i=1}^n \frac{\partial l_i}{\partial \theta_i} \frac{d\theta_i}{d\mu_i} \frac{d\mu_i}{d\eta_i} \frac{\partial \eta_i}{\partial \beta_j}
= \sum_{i=1}^n \frac{\varpi_i}{a_i(\phi)} \, (y_i - \mu_i) \, g'(\mu_i) \, x_{ij},
\qquad \text{where } \varpi_i = \frac{1}{V(\mu_i)\,(g'(\mu_i))^2}.
$$

## Matrix form

$$
u \triangleq \begin{pmatrix} u_1 \\ \vdots \\ u_p \end{pmatrix} = \frac{\partial l}{\partial \beta}
= \begin{pmatrix} x_{11} & \cdots & x_{n1} \\ \vdots & \ddots & \vdots \\ x_{1p} & \cdots & x_{np} \end{pmatrix}
\begin{pmatrix} \frac{\varpi_1}{a_1(\phi)} & & \\ & \ddots & \\ & & \frac{\varpi_n}{a_n(\phi)} \end{pmatrix}
\begin{pmatrix} (y_1 - \mu_1) g'(\mu_1) \\ \vdots \\ (y_n - \mu_n) g'(\mu_n) \end{pmatrix}
= X^T W \begin{pmatrix} (y_1 - \mu_1) g'(\mu_1) \\ \vdots \\ (y_n - \mu_n) g'(\mu_n) \end{pmatrix},
$$

where $W \triangleq \operatorname{diag}\!\left( \frac{\varpi_1}{a_1(\phi)}, \ldots, \frac{\varpi_n}{a_n(\phi)} \right)$.

## Score equation

We want to estimate $\beta$ by solving the score equation

$$
u = X^T W \begin{pmatrix} (y_1 - \mu_1) g'(\mu_1) \\ \vdots \\ (y_n - \mu_n) g'(\mu_n) \end{pmatrix} = 0. \tag{2}
$$

There is dependence on $\beta$ in several places on the left-hand side of equation (2): each $\mu_i = g^{-1}(x_i^T \beta)$ appears both in the residual terms and inside $W$, so (2) is a non-linear system. Some of you saw weighted least squares in PSTAT 126, but most did not, so we will discuss the intuition for weighted least squares for Gaussian regression with independent errors of non-constant variance during the 127 lecture.

## Comparison to weighted least squares - review

Consider the LM

$$
Y = X\beta + \epsilon, \qquad \epsilon \sim N(0, \sigma^2 W^{-1}),
$$

where $W^{-1}$ is a known weight matrix. We can estimate $\beta$ by minimizing the weighted least-squares criterion

$$
(y - X\beta)^T W (y - X\beta),
$$

which leads to the normal equation

$$
X^T W (y - X\beta) = 0. \tag{3}
$$

## Numerical solution

The score equation (2) is similar to the normal equation (3) of weighted least squares. The difference is that the weight matrix $W$ in (2) is unknown and may depend on $\beta$. Therefore, (2) is a non-linear system of equations and cannot be solved analytically; we must solve it numerically with an iterative scheme. A common approach is the Newton-Raphson procedure.
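To make the weighted least-squares review concrete, here is a minimal NumPy sketch (the simulated data and variable names are my own illustration, not from the notes). Because $W$ is known here, the normal equation (3) is linear in $\beta$ and solves in one step as $\hat\beta = (X^T W X)^{-1} X^T W y$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Gaussian regression with known, non-constant error variances
# (illustrative setup, not from the notes).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
var = 0.5 + rng.uniform(size=n)            # heteroscedastic variances
y = X @ beta_true + rng.normal(scale=np.sqrt(var))

# W is the inverse of the (diagonal) error covariance matrix.
W = np.diag(1.0 / var)

# Normal equation X^T W (y - X beta) = 0  =>  (X^T W X) beta = X^T W y.
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Solving the linear system with `np.linalg.solve` is preferred over forming the matrix inverse explicitly, for both speed and numerical stability.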
We may return to the Newton-Raphson and Fisher scoring algorithms for finding the MLE of $\beta$ later, but to help you analyze data in R right away we will continue with the inference discussion, and then residual diagnostics. At this stage we simply assume that we can obtain the MLE of $\beta$ using an iterative algorithm, provided the matrix $X$ has full column rank.