SEM with observed variables: estimation
Psychology 588: Covariance structure and factor models
Feb 5, 2010
Estimation

• Estimation tries to find a solution whose implied covariance matrix best approximates, ideally, the population covariance matrix $\Sigma$, but in reality a sample estimate $\mathbf{S}$:

  $F = f(\hat{\Sigma}, \mathbf{S})$, where $\hat{\Sigma} = \Sigma(\hat{\theta})$ is the model-implied covariance matrix (a toy numerical sketch follows below)

• The "best" approximation is defined in various ways, leading to different fitting functions
• Our job is to figure out which fitting function is best under which data conditions
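As a concrete illustration (the one-predictor model and the numbers below are assumptions for this sketch, not part of the lecture), here is a minimal Python sketch of a model-implied covariance matrix $\Sigma(\hat{\theta})$ that a fitting function would compare against $\mathbf{S}$:

```python
import numpy as np

def implied_cov(theta):
    """Model-implied covariance Sigma(theta) for a hypothetical
    one-predictor model y = gamma*x + zeta, with theta = [gamma, phi, psi],
    Var(x) = phi and Var(zeta) = psi."""
    gamma, phi, psi = theta
    return np.array([[phi,         gamma * phi],
                     [gamma * phi, gamma**2 * phi + psi]])

# A made-up sample covariance matrix S for illustration
S = np.array([[2.0, 1.0],
              [1.0, 1.5]])

# Any fitting function F = f(Sigma(theta_hat), S) quantifies the
# discrepancy between the implied and observed covariance matrices
print(implied_cov([0.5, 2.0, 1.0]))   # these values happen to reproduce S exactly
```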
Desirable fitting functions

• A desirable fitting function has the following properties:
  - $f(\hat{\Sigma}, \mathbf{S})$ is a scalar
  - $f(\hat{\Sigma}, \mathbf{S}) \ge 0$
  - $f(\hat{\Sigma}, \mathbf{S}) = 0$ if and only if $\hat{\Sigma} = \mathbf{S}$
  - $f(\hat{\Sigma}, \mathbf{S})$ is continuous in both $\hat{\Sigma}$ and $\mathbf{S}$
• Minimizing such fitting functions provides a consistent estimator of $\theta$ (e.g., ML, ULS, and GLS) --- true for all functions to be considered (a quick ULS sketch follows below)
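A minimal sketch, assuming the standard ULS fitting function $F_{ULS} = \tfrac{1}{2}\,\mathrm{tr}[(\mathbf{S} - \hat{\Sigma})^2]$ as the example (ULS is one of the functions named above; the matrices are made up), showing that it returns a nonnegative scalar that is zero only when $\hat{\Sigma} = \mathbf{S}$:

```python
import numpy as np

def f_uls(sigma_hat, S):
    """ULS fitting function 0.5 * tr[(S - Sigma_hat)^2]: a scalar,
    nonnegative, zero iff Sigma_hat equals S, and continuous in both."""
    d = S - sigma_hat
    return 0.5 * np.trace(d @ d)

S = np.array([[2.0, 1.0],
              [1.0, 1.5]])

print(f_uls(S, S))                    # 0.0 -- exact fit
print(f_uls(S + 0.1 * np.eye(2), S))  # > 0 -- any discrepancy is penalized
```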
Desirable asymptotic properties of estimators

• Unbiased
• Consistent
• Efficient

Note: "asymptotic" means $N \to \infty$ by definition, but its practical meaning is "as N becomes sufficiently large" --- how large is sufficient will depend on many things, such as the complexity of the model, the size of measurement errors, etc.
• Notation: $\theta$ denotes the parameters in the population; $\hat{\theta}_N$ denotes the estimate of $\theta$ from a sample of size $N$
• If $E(\hat{\theta}_N) = \theta$, $\hat{\theta}_N$ is unbiased
• If $E(\hat{\theta}_N) \to \theta$ as $N \to \infty$, $\hat{\theta}_N$ is asymptotically unbiased
• If $P(|\hat{\theta}_N - \theta| < \alpha) \to 1$ as $N \to \infty$ for any $\alpha > 0$, $\hat{\theta}_N$ is consistent --- also written "$\mathrm{plim}\, \hat{\theta}_N = \theta$"
• $\hat{\theta}_N$ is efficient if its asymptotic variance is the minimum of all consistent estimators of $\theta$
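As a rough illustration of consistency (a sketch with made-up population values, not from the lecture): the sample covariance matrix $\mathbf{S}_N$ gets closer to the population $\Sigma$ as $N$ grows, i.e., $\mathrm{plim}\, \mathbf{S}_N = \Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population covariance matrix (illustrative values only)
Sigma = np.array([[2.0, 1.0],
                  [1.0, 1.5]])

for N in (50, 500, 5000, 50000):
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=N)
    S_N = np.cov(X, rowvar=False)              # estimator based on N observations
    print(N, np.max(np.abs(S_N - Sigma)))      # discrepancy typically shrinks as N grows
```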
Maximum likelihood

• ML assumes:
  - a satisfactorily large sample
  - all observed variables distributed multivariate normal --- we will consider later a relaxed alternative to this for exogenous x
  - all observations independent and identically distributed
• Minimizing its fitting function $F_{ML}$ maximizes the joint (log) likelihood of the model parameters $\theta$ given the observed data $\mathbf{S}$:

  $F_{ML} = \log|\hat{\Sigma}| + \mathrm{tr}(\mathbf{S}\hat{\Sigma}^{-1}) - \log|\mathbf{S}| - (p + q)$

  Obviously, both $\mathbf{S}$ and $\hat{\Sigma}$ must be nonsingular for $F_{ML}$ to be defined (a computational sketch follows below)
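A minimal computational sketch of $F_{ML}$ (the function name and the example matrices are assumptions for illustration; $p + q$ is the total number of observed variables):

```python
import numpy as np

def f_ml(sigma_hat, S):
    """ML fitting function:
    log|Sigma_hat| + tr(S Sigma_hat^{-1}) - log|S| - (p + q).
    Both S and Sigma_hat must be nonsingular (here, positive definite)
    for the log-determinants and the inverse to exist."""
    pq = S.shape[0]                            # number of observed variables, p + q
    logdet_hat = np.linalg.slogdet(sigma_hat)[1]
    logdet_S = np.linalg.slogdet(S)[1]
    return logdet_hat + np.trace(S @ np.linalg.inv(sigma_hat)) - logdet_S - pq

S = np.array([[2.0, 1.0],
              [1.0, 1.5]])

print(f_ml(S, S))            # 0.0 at exact fit
print(f_ml(np.eye(2), S))    # > 0 for a misspecified Sigma_hat
```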
How to optimize a fitting function?

• Find the partial derivatives of $F$ with respect to all free model parameters $\theta$ and solve $\partial F / \partial \theta = 0$ --- necessary for minimization
• The second-derivative matrix is positive definite (nonsingular) at the $\hat{\theta}$ that minimizes $F$ --- sufficient for minimization
• Usually, there is no closed-form solution to this problem; the minimum must be found by iterative numerical optimization (a sketch follows below)
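A minimal sketch of such an iterative numerical minimization, reusing the hypothetical one-predictor model and $F_{ML}$ from the earlier sketches; scipy's general-purpose Nelder-Mead optimizer stands in here for the specialized algorithms that SEM software actually uses:

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[2.0, 1.0],
              [1.0, 1.5]])

def implied_cov(theta):
    # Hypothetical one-predictor model y = gamma*x + zeta, theta = [gamma, phi, psi]
    gamma, phi, psi = theta
    return np.array([[phi,         gamma * phi],
                     [gamma * phi, gamma**2 * phi + psi]])

def objective(theta):
    # F_ML evaluated at Sigma(theta); returning inf keeps the search
    # inside the region where Sigma(theta) is nonsingular
    sigma_hat = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma_hat)
    if sign <= 0:
        return np.inf
    return (logdet + np.trace(S @ np.linalg.inv(sigma_hat))
            - np.linalg.slogdet(S)[1] - S.shape[0])

# No closed-form solution, so minimize F_ML by iterative numerical search
result = minimize(objective, x0=[0.1, 1.0, 1.0], method="Nelder-Mead")
print(result.x)   # approaches gamma = 0.5, phi = 2.0, psi = 1.0 for these made-up data
```

Because this toy model is just-identified, the minimizer reproduces S exactly and $F_{ML}$ reaches 0; an overidentified model would generally leave $F_{ML} > 0$ at its minimum.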