scores
If TRUE, find component scores
missing
If scores=TRUE and missing=TRUE, then impute missing values using either
the median or the mean (as specified by impute)
impute
"median" or "mean" values are used to replace missing values
oblique.scores
If TRUE (default), then the component scores are based upon the structure matrix. If FALSE, upon the pattern matrix.
method
Which way of finding component scores should be used. The default is "regression".
weight
If not NULL, a vector of length n.obs that contains weights for each observation.
The NULL case is equivalent to all cases being weighted 1.
use
How to treat missing data; use="pairwise" is the default. See cor for other options.
cor
How to find the correlations: "cor" is Pearson, "cov" is covariance, "tet" is
tetrachoric, "poly" is polychoric, "mixed" uses mixedCor for a mixture of
tetrachorics, polychorics, Pearsons, biserials, and polyserials, "Yuleb" is
Yule-Bonett, and "Yuleq" and "YuleY" are the corresponding Yule coefficients, as appropriate
correct
When doing tetrachoric, polychoric, or mixed correlations, how should we treat empty
cells? (See the discussion in the help for tetrachoric.)
...
other parameters to pass to functions such as factor.scores or the various rotation
functions.
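As a quick illustration, several of the arguments above might be combined in one call (a sketch, assuming this page documents the psych package's principal function and that the built-in bfi data are available; the variable names are illustrative):

```r
library(psych)

# Five Agreeableness items from the built-in bfi data (these contain NAs)
items <- bfi[, 1:5]

# One component; polychoric correlations; regression-based scores,
# with missing values imputed by the median before scoring
pc <- principal(items, nfactors = 1, cor = "poly",
                method = "regression",
                scores = TRUE, missing = TRUE, impute = "median")

head(pc$scores)   # component scores, one column per component
```

Because missing=TRUE and impute="median" are set, every observation receives a score even when some of its items are NA.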
Details
Useful for those cases where the correlation matrix is improper (perhaps because of SAPA techniques).
There are a number of data reduction techniques including principal components analysis (PCA) and
factor analysis (EFA). Both PC and FA attempt to approximate a given correlation or covariance
matrix of rank n with a matrix of lower rank (k):
${}_{n}R_{n} \approx {}_{n}F_{k}\,{}_{k}F'_{n} + U^2$
where k is much less
than n. For principal components, the item uniqueness is assumed to be zero and all elements of
the correlation or covariance matrix are fitted. That is,
${}_{n}R_{n} \approx {}_{n}F_{k}\,{}_{k}F'_{n}$
The primary empirical
difference between a components versus a factor model is the treatment of the variances for each
item.
Philosophically, components are weighted composites of observed variables while in the
factor model, variables are weighted composites of the factors. As the number of items increases,
the difference between the two models gets smaller. Factor loadings are the asymptotic component
loadings as the number of items gets larger.
For a n x n correlation matrix, the n principal components completely reproduce the correlation
matrix. However, if just the first k principal components are extracted, this is the best k dimensional
approximation of the matrix.
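This claim can be checked directly with base R's eigen: the full set of n components reproduces R exactly, while the first k components give a rank-k approximation (a sketch with simulated data; all names are illustrative):

```r
set.seed(1)
X <- matrix(rnorm(500 * 6), 500, 6)
X[, 2] <- X[, 1] + rnorm(500, sd = 0.5)   # make two items correlate
R <- cor(X)                                # 6 x 6 correlation matrix

e <- eigen(R)
# Component loadings: eigenvectors scaled by sqrt(eigenvalues)
L_all <- e$vectors %*% diag(sqrt(e$values))
max(abs(L_all %*% t(L_all) - R))           # ~ 0: all 6 PCs reproduce R exactly

k <- 2
L_k <- L_all[, 1:k]                        # keep only the first k components
R_k <- L_k %*% t(L_k)                      # rank-k approximation of R
max(abs(R_k - R))                          # small, but no longer zero
```

Note that the uniquenesses play no role here: the PCA loadings fit all elements of R, including the diagonal, which is why the full solution is exact.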
It is important to recognize that rotated principal components are not principal components (the
axes associated with the eigenvalue decomposition) but are merely components. To point this out,
unrotated principal components are labeled as PCi, while rotated PCs are labeled as RCi (for
rotated components) and obliquely transformed components as TCi (for transformed components).
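The labeling convention shows up directly in the column names of the loadings (a sketch, again assuming psych's principal and the Harman74.cor correlation matrix shipped with R; the oblique rotation additionally requires the GPArotation package):

```r
library(psych)

R <- Harman74.cor$cov   # a 24 x 24 correlation matrix from the datasets package

pc_un <- principal(R, nfactors = 3, rotate = "none")      # unrotated: PC1, PC2, ...
pc_ro <- principal(R, nfactors = 3, rotate = "varimax")   # rotated:   RC1, RC2, ...
pc_ob <- principal(R, nfactors = 3, rotate = "oblimin")   # oblique:   TC1, TC2, ...
                                                          # (needs GPArotation)
colnames(pc_un$loadings)
colnames(pc_ro$loadings)
colnames(pc_ob$loadings)
```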
