Chapter 10. Supplemental Text Material

S10-1. The Covariance Matrix of the Regression Coefficients

In Section 10-3 of the textbook, we show that the least squares estimator of \beta in the linear regression model y = X\beta + \varepsilon,

    \hat{\beta} = (X'X)^{-1} X'y

is an unbiased estimator. We also give the result that the covariance matrix of \hat{\beta} is \sigma^2 (X'X)^{-1} (see Equation 10-18). This last result is relatively straightforward to show. Consider

    V(\hat{\beta}) = V[(X'X)^{-1} X'y]

The quantity (X'X)^{-1} X' is just a matrix of constants, and y is a vector of random variables. Now remember that the variance of the product of a scalar constant and a scalar random variable is equal to the square of the constant times the variance of the random variable. The matrix equivalent of this is

    V(\hat{\beta}) = V[(X'X)^{-1} X'y]
                   = (X'X)^{-1} X' \, V(y) \, [(X'X)^{-1} X']'

Now the variance of y is \sigma^2 I, where I is an n × n identity matrix. Therefore, this last equation becomes

    V(\hat{\beta}) = (X'X)^{-1} X' (\sigma^2 I) [(X'X)^{-1} X']'
                   = \sigma^2 (X'X)^{-1} X' X (X'X)^{-1}
                   = \sigma^2 (X'X)^{-1}

We have used the result from matrix algebra that the transpose of a product of matrices is the product of the transposes in reverse order, and that since X'X is symmetric, its inverse (X'X)^{-1} is also symmetric, so [(X'X)^{-1}]' = (X'X)^{-1}.

S10-2. Regression Models and Designed Experiments

In Examples 10-2 through 10-5 we illustrate several uses of regression methods in fitting models to data from designed experiments. Consider Example 10-2, which presents the regression model for the main effects from a 2^3 factorial design with three center runs. Since the (X'X)^{-1} matrix is diagonal because the design is orthogonal, all covariance terms between the regression coefficients are zero. Furthermore, the variance of each regression coefficient is \sigma^2 times the corresponding diagonal element of (X'X)^{-1}.
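The derivation above can be checked numerically. The following is a minimal sketch (not from the textbook) using numpy and a small hypothetical model matrix: it verifies the algebraic identity (X'X)^{-1} X' (\sigma^2 I) X (X'X)^{-1} = \sigma^2 (X'X)^{-1} directly, and then confirms by Monte Carlo simulation that the empirical covariance of the least squares estimator matches \sigma^2 (X'X)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model matrix: intercept plus two regressors, n = 8 runs.
n = 8
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([1.0, 2.0, -0.5])   # hypothetical true coefficients
sigma2 = 0.25                        # hypothetical error variance

XtX_inv = np.linalg.inv(X.T @ X)

# Algebraic identity from the derivation:
# (X'X)^{-1} X' (sigma^2 I) X (X'X)^{-1} = sigma^2 (X'X)^{-1}
cov_algebra = XtX_inv @ X.T @ (sigma2 * np.eye(n)) @ X @ XtX_inv
assert np.allclose(cov_algebra, sigma2 * XtX_inv)

# Monte Carlo check: empirical covariance of beta-hat over simulated data.
n_sim = 20000
betas = np.empty((n_sim, 3))
for i in range(n_sim):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    betas[i] = XtX_inv @ X.T @ y     # least squares estimate for this y
emp_cov = np.cov(betas, rowvar=False)

# The empirical covariance should be close to sigma^2 (X'X)^{-1}.
print(np.max(np.abs(emp_cov - sigma2 * XtX_inv)))
```

The algebraic check holds exactly (up to floating point), while the Monte Carlo comparison agrees only up to simulation error, which shrinks as the number of simulated datasets grows.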
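The orthogonality claim for the design in Example 10-2 can also be verified directly. The sketch below (an illustration, not the textbook's calculation) builds the coded main-effects model matrix for a 2^3 factorial with three center runs and shows that X'X, and hence (X'X)^{-1}, is diagonal, so the coefficient estimates are uncorrelated.

```python
import numpy as np
from itertools import product

# Coded design: the 8 runs of a 2^3 factorial at +/-1, plus 3 center runs at 0.
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
center = np.zeros((3, 3))
x = np.vstack([factorial, center])          # 11 runs, 3 factors

# Main-effects model matrix: intercept column plus x1, x2, x3.
X = np.column_stack([np.ones(len(x)), x])

XtX = X.T @ X
print(XtX)
# X'X = diag(11, 8, 8, 8): every off-diagonal cross product sums to zero
# because the design is orthogonal.

# Therefore (X'X)^{-1} is diagonal and the covariances between
# coefficients are all zero; the variances are sigma^2 times the
# diagonal entries: Var(b0) = sigma^2/11, Var(bj) = sigma^2/8, j = 1,2,3.
XtX_inv = np.linalg.inv(XtX)
assert np.allclose(XtX_inv, np.diag([1/11, 1/8, 1/8, 1/8]))
```

Note that the intercept's diagonal entry is 11 (the total number of runs) while each main effect's entry is 8 (the number of factorial runs), since the center runs contribute zeros to the factor columns.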