Chapter 11.
Supplemental Text Material
S11-1.
The Method of Steepest Ascent
The method of steepest ascent can be derived as follows.
Suppose that we have fit a first-order model

$$\hat{y} = \hat{\beta}_0 + \sum_{i=1}^{k} \hat{\beta}_i x_i$$
and we wish to use this model to determine a path leading from the center of the design
region $\mathbf{x} = \mathbf{0}$ that increases the predicted response most quickly.
Since the first-order model is an unbounded function, we cannot just find the values of the
$x$'s that maximize the predicted response.
Suppose that instead we find the $x$'s that maximize the predicted response at a point on a hypersphere of radius $r$.
That is,

$$\max \; \hat{y} = \hat{\beta}_0 + \sum_{i=1}^{k} \hat{\beta}_i x_i \quad \text{subject to} \quad \sum_{i=1}^{k} x_i^2 = r^2$$
This can be formulated as

$$\max \; G = \hat{\beta}_0 + \sum_{i=1}^{k} \hat{\beta}_i x_i - \lambda \left[ \sum_{i=1}^{k} x_i^2 - r^2 \right]$$
where $\lambda$ is a Lagrange multiplier.
Taking the derivatives of $G$ yields

$$\frac{\partial G}{\partial x_i} = \hat{\beta}_i - 2\lambda x_i, \quad i = 1, 2, \ldots, k$$

$$\frac{\partial G}{\partial \lambda} = -\left[ \sum_{i=1}^{k} x_i^2 - r^2 \right]$$
Equating these derivatives to zero results in

$$x_i = \frac{\hat{\beta}_i}{2\lambda}, \quad i = 1, 2, \ldots, k$$

$$\sum_{i=1}^{k} x_i^2 = r^2$$
Now the first of these equations shows that the coordinates of the point on the
hypersphere are proportional to the signs and magnitudes of the regression coefficients
(the quantity $2\lambda$ is a constant that just fixes the radius of the hypersphere).
The second
equation just states that the point satisfies the constraint.
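These two conditions can also be checked numerically: scaling the coefficient vector onto the sphere of radius $r$ (which corresponds to choosing $2\lambda = \|\hat{\beta}\|/r$) should give a larger predicted response than any other point on that sphere. A minimal sketch, where the coefficient values and intercept are hypothetical illustrations, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order coefficients (hypothetical values); the
# intercept does not affect the direction, so it enters y_hat only.
beta = np.array([2.0, -1.5, 0.5])
r = 1.0

# Closed-form solution of the two conditions: x_i = beta_i / (2*lambda),
# with 2*lambda = ||beta|| / r so that sum(x_i^2) = r^2.
x_star = r * beta / np.linalg.norm(beta)

def y_hat(x, beta0=10.0):
    """Predicted response of the fitted first-order model."""
    return beta0 + beta @ x

# No randomly sampled point on the same hypersphere should beat x_star.
best_random = max(
    y_hat(r * z / np.linalg.norm(z))
    for z in rng.normal(size=(10_000, beta.size))
)
print(y_hat(x_star) >= best_random)  # True, by the derivation above
```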
Therefore, the heuristic
description of the method of steepest ascent can be justified from a more formal
perspective.
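In practice the result is used to generate runs along the path of steepest ascent: starting at the design center, one moves in the direction $\hat{\beta}/\|\hat{\beta}\|$ in equally spaced increments. A brief sketch, where the step size, number of steps, and coefficient values are all illustrative assumptions:

```python
import numpy as np

def steepest_ascent_path(beta, n_steps=5, step=0.5):
    """Points along the path of steepest ascent from the design center.

    beta : fitted first-order coefficients (intercept excluded).
    step : spacing between successive points, in coded units.
    Returns an (n_steps, k) array; point j lies at distance
    (j + 1) * step from the origin, in the direction of beta.
    """
    direction = np.asarray(beta, dtype=float)
    direction = direction / np.linalg.norm(direction)
    return np.outer(step * np.arange(1, n_steps + 1), direction)

# Hypothetical fitted coefficients.
path = steepest_ascent_path([2.0, -1.5, 0.5], n_steps=4, step=0.5)
print(path.shape)  # (4, 3)
```

Each row can then be decoded back to natural units and run as a confirmation experiment until the response stops improving.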
