Chapter 10.
Supplemental Text Material
S10-1. The Covariance Matrix of the Regression Coefficients
In Section 10-3 of the textbook, we show that the least squares estimator of $\beta$ in the linear regression model $y = X\beta + \varepsilon$,

$$\hat{\beta} = (X'X)^{-1}X'y,$$

is an unbiased estimator. We also give the result that the covariance matrix of $\hat{\beta}$ is $\sigma^2 (X'X)^{-1}$ (see Equation 10-18). This last result is relatively straightforward to show. Consider

$$V(\hat{\beta}) = V[(X'X)^{-1}X'y]$$
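As a concrete check on the estimator above, $\hat{\beta} = (X'X)^{-1}X'y$ can be computed directly from the normal equations and compared against NumPy's least squares solver. This is a minimal sketch using a made-up data set (none of the numbers come from the textbook):

```python
import numpy as np

# Hypothetical data set (illustrative only, not from the textbook)
rng = np.random.default_rng(0)
n, p = 20, 3
# Model matrix with an intercept column and two random regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal-equations form of the least squares estimator
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Cross-check against NumPy's built-in least squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```

In practice `np.linalg.lstsq` (or a QR decomposition) is preferred numerically over explicitly inverting $X'X$, but the normal-equations form matches the derivation in the text.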
The quantity $(X'X)^{-1}X'$ is just a matrix of constants and $y$ is a vector of random variables.
Now remember that the variance of the product of a scalar constant and a
scalar random variable is equal to the square of the constant times the variance of the
random variable. The matrix equivalent of this is
$$V(\hat{\beta}) = V[(X'X)^{-1}X'y] = (X'X)^{-1}X'\,V(y)\,[(X'X)^{-1}X']'$$
Now the variance of $y$ is $\sigma^2 I$, where $I$ is an $n \times n$ identity matrix. Therefore, this last equation becomes
$$\begin{aligned}
V(\hat{\beta}) &= V[(X'X)^{-1}X'y] \\
&= (X'X)^{-1}X'\,V(y)\,[(X'X)^{-1}X']' \\
&= \sigma^2 (X'X)^{-1}X'X\,[(X'X)^{-1}]' \\
&= \sigma^2 (X'X)^{-1}X'X(X'X)^{-1} \\
&= \sigma^2 (X'X)^{-1}
\end{aligned}$$
We have used the result from matrix algebra that the transpose of a product of matrices is just the product of the transposes in reverse order, and that since $X'X$ is symmetric, its inverse $(X'X)^{-1}$ is also symmetric, so $[(X'X)^{-1}]' = (X'X)^{-1}$.
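The result $V(\hat{\beta}) = \sigma^2 (X'X)^{-1}$ can also be verified by simulation. The sketch below (with an arbitrary made-up design matrix, not the textbook's data) draws many response vectors $y = X\beta + \varepsilon$ and compares the empirical covariance of the resulting $\hat{\beta}$'s with the theoretical covariance matrix:

```python
import numpy as np

# Monte Carlo check that Cov(beta_hat) = sigma^2 (X'X)^{-1}
# using an arbitrary illustrative design (not from the textbook).
rng = np.random.default_rng(1)
n, sigma = 30, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, -2.0, 0.5])

H = np.linalg.inv(X.T @ X) @ X.T  # the constant matrix (X'X)^{-1} X'

# Simulate many responses y = X beta + eps, eps ~ N(0, sigma^2 I),
# and compute beta_hat = H y for each replicate (one per row).
reps = 100_000
eps = rng.normal(scale=sigma, size=(reps, n))
beta_hats = (X @ beta + eps) @ H.T

empirical = np.cov(beta_hats, rowvar=False)
theoretical = sigma**2 * np.linalg.inv(X.T @ X)
print(np.max(np.abs(empirical - theoretical)))  # max entrywise difference
```

With this many replicates the empirical and theoretical covariance matrices agree to within Monte Carlo error.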
S10-2. Regression Models and Designed Experiments
In Examples 10-2 through 10-5 we illustrate several uses of regression methods in fitting
models to data from designed experiments. Consider Example 10-2, which presents the
regression model for main effects from a $2^3$ factorial design with three center runs.
Since the $X'X$ matrix is diagonal because the design is orthogonal, all covariance terms between the regression coefficients are zero. Furthermore, the variances of the regression coefficients are the diagonal elements of $\sigma^2 (X'X)^{-1}$.
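The diagonal structure of $X'X$ for this kind of design is easy to confirm. The sketch below reconstructs a main-effects model matrix for a $2^3$ factorial in coded $\pm 1$ units augmented with three center runs, following the description above (the matrix is built from that description, not copied from the textbook's data table):

```python
import numpy as np
from itertools import product

# 2^3 factorial runs in coded (+/-1) units, plus three center runs
factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
center = np.zeros((3, 3))
runs = np.vstack([factorial, center])

# Main-effects model matrix: intercept column plus x1, x2, x3
X = np.column_stack([np.ones(len(runs)), runs])

XtX = X.T @ X
print(XtX)
# Orthogonality makes X'X diagonal, so the off-diagonal entries of
# sigma^2 (X'X)^{-1} -- the coefficient covariances -- are all zero.
print(np.allclose(XtX, np.diag(np.diag(XtX))))  # True
```

Here $X'X = \mathrm{diag}(11, 8, 8, 8)$: the intercept column sums eleven ones, each factor column has eight $\pm 1$ entries, and every cross product vanishes.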
