Chapter 10. Supplemental Text Material

S10-1. The Covariance Matrix of the Regression Coefficients

In Section 10-3 of the textbook, we show that the least squares estimator of $\beta$ in the linear regression model $y = X\beta + \varepsilon$,

$$\hat{\beta} = (X'X)^{-1}X'y$$

is an unbiased estimator. We also give the result that the covariance matrix of $\hat{\beta}$ is $\sigma^{2}(X'X)^{-1}$ (see Equation 10-18). This last result is relatively straightforward to show. Consider

$$V(\hat{\beta}) = V\left[(X'X)^{-1}X'y\right]$$

The quantity $(X'X)^{-1}X'$ is just a matrix of constants, and $y$ is a vector of random variables. Now remember that the variance of the product of a scalar constant and a scalar random variable is equal to the square of the constant times the variance of the random variable. The matrix equivalent of this is

$$V(\hat{\beta}) = V\left[(X'X)^{-1}X'y\right] = (X'X)^{-1}X'\,V(y)\,\left[(X'X)^{-1}X'\right]'$$

Now the variance of $y$ is $\sigma^{2}I$, where $I$ is an $n \times n$ identity matrix. Therefore, this last equation becomes

$$V(\hat{\beta}) = (X'X)^{-1}X'(\sigma^{2}I)X(X'X)^{-1} = \sigma^{2}(X'X)^{-1}X'X(X'X)^{-1} = \sigma^{2}(X'X)^{-1}$$

We have used the result from matrix algebra that the transpose of a product of matrices is the product of the transposes in reverse order, and the fact that since $(X'X)^{-1}$ is symmetric, its transpose is equal to itself.

S10-2. Regression Models and Designed Experiments

In Examples 10-2 through 10-5 we illustrate several uses of regression methods in fitting models to data from designed experiments. Consider Example 10-2, which presents the regression model for the main effects from a $2^{3}$ factorial design with four center runs. Since the design is orthogonal, the $X'X$ matrix is diagonal, so all covariance terms between the regression coefficients are zero. Furthermore, the variances of the regression coefficients are
$$V(\hat{\beta}_{0}) = \sigma^{2}/12 = 0.0833\,\sigma^{2}, \qquad V(\hat{\beta}_{i}) = \sigma^{2}/8 = 0.125\,\sigma^{2}, \quad i = 1, 2, 3$$

In Example 10-3, we reconsider this same problem but assume that one of the original 12 observations is missing. It turns out that the estimates of the regression coefficients do not change very much when the remaining 11 observations are used to fit the first-order model, but the $(X'X)^{-1}$ matrix reveals that the missing observation has had a moderate effect on the variances and covariances of the model coefficients. The variances of the regression coefficients are now larger, and there are some moderately large covariances between the estimated model coefficients. Example 10-4, which investigated the impact of inaccurate design factor levels, exhibits similar results.
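These covariance results can be checked numerically. The sketch below is a minimal illustration, not the textbook's data: it assumes a coded design matrix of eight $\pm 1$ factorial runs plus four center runs ($n = 12$) and uses NumPy to form $(X'X)^{-1}$ for the full design and again with one run deleted, mirroring the situations in Examples 10-2 and 10-3.

```python
# Numerical check of V(beta_hat) = sigma^2 * (X'X)^{-1} for a 2^3 factorial
# with four center runs (n = 12). The coded design matrix is an assumed
# illustration, not the data table from the textbook examples.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))  # 8 factorial runs
center = np.zeros((4, 3))                                             # 4 center runs
X = np.column_stack([np.ones(12), np.vstack([factorial, center])])    # prepend intercept

C_full = np.linalg.inv(X.T @ X)

# Orthogonality makes (X'X)^{-1} diagonal, so every coefficient covariance is
# zero, V(b0) = sigma^2/12, and V(bi) = sigma^2/8 for i = 1, 2, 3.
print(np.diag(C_full))                                 # approximately [1/12, 1/8, 1/8, 1/8]
print(np.allclose(C_full, np.diag(np.diag(C_full))))   # True: no covariances

# As in Example 10-3, delete one observation: every coefficient variance
# grows and nonzero covariances appear between the estimates.
C_11 = np.linalg.inv(X[1:].T @ X[1:])                  # drop the first factorial run
print(np.all(np.diag(C_11) > np.diag(C_full)))         # True: variances are larger
print(np.allclose(C_11, np.diag(np.diag(C_11))))       # False: covariances now nonzero
```

The diagonal entries of `C_full` multiply $\sigma^{2}$ to give the coefficient variances, so this reproduces the $0.0833\,\sigma^{2}$ and $0.125\,\sigma^{2}$ values above, and the 11-run fit shows the variance inflation the text describes.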

This note was uploaded on 03/20/2011 for the course STATISTIC 101 taught by Professor Fandia during the Spring '10 term at UCLA.
