2. The constraint matrix $W_t$ is block diagonal:

$$
W_t = \begin{pmatrix} A_t & 0 \\ 0 & B_t \end{pmatrix}. \tag{5.5}
$$
3. The other components of the constraints are random, but we assume that, for each realization of $\omega$, $T_t(\omega)$ and $h_t(\omega)$ can be written:

$$
T_t(\omega) = \begin{pmatrix} R_t(\omega) & 0 \\ S_t(\omega) & 0 \end{pmatrix}
\quad \text{and} \quad
h_t(\omega) = \begin{pmatrix} b_t(\omega) \\ d_t(\omega) \end{pmatrix}, \tag{5.6}
$$

where the zero components of $T_t$ correspond to the detailed level variables.
Notice that (3) in the definition implies that detailed level variables have
no direct effect on future constraints. This is the fundamental advantage
of block separability.
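As a small numeric sketch of this structure (the matrices below are hypothetical illustrations, not data from the text), the zero columns of $T_t$ mean that changing the detailed variables $y_t$ cannot change the right-hand sides passed to the next stage:

```python
import numpy as np

# Hypothetical stage-t data illustrating (5.5)-(5.6).
A_t = np.array([[1.0, 2.0]])   # aggregate block: 1 constraint, 2 aggregate vars
B_t = np.array([[3.0]])        # detailed block: 1 constraint, 1 detailed var

# W_t is block diagonal: aggregate and detailed constraints never mix.
W_t = np.block([
    [A_t, np.zeros((A_t.shape[0], B_t.shape[1]))],
    [np.zeros((B_t.shape[0], A_t.shape[1])), B_t],
])

# T_t has zero columns in the positions of the detailed variables.
R_t = np.array([[0.5, 0.0]])   # couples aggregate vars to the next b
S_t = np.array([[0.0, 1.0]])   # couples aggregate vars to the next d
T_t = np.block([
    [R_t, np.zeros((R_t.shape[0], B_t.shape[1]))],
    [S_t, np.zeros((S_t.shape[0], B_t.shape[1]))],
])

x_t = np.array([4.0, 1.0, 7.0])      # (w_t, y_t) with y_t = 7
x_t_alt = np.array([4.0, 1.0, 0.0])  # same w_t, different y_t

# T_t @ x_t is unchanged: detailed decisions have no direct effect
# on future constraints.
print(T_t @ x_t, T_t @ x_t_alt)
```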
With block separable recourse, we may rewrite $Q_t(x_{t-1}, \xi_t(\omega))$ as the sum of two quantities, $Q_t^w(w_{t-1}, \xi_t(\omega)) + Q_t^y(w_{t-1}, \xi_t(\omega))$, where we need not include the $y_{t-1}$ terms in $x_{t-1}$:

$$
\begin{aligned}
Q_t^w(w_{t-1}, \xi_t(\omega)) = \min\ & r_t(\omega) w_t(\omega) + Q_{t+1}(x_t) \\
\text{s.t.}\ & A_t w_t(\omega) = b_t(\omega) - R_{t-1}(\omega) w_{t-1}, \\
& w_t(\omega) \ge 0,
\end{aligned} \tag{5.7}
$$
and
$$
\begin{aligned}
Q_t^y(w_{t-1}, \xi_t(\omega)) = \min\ & q_t(\omega) y_t(\omega) \\
\text{s.t.}\ & B_t y_t(\omega) = d_t(\omega) - S_{t-1}(\omega) w_{t-1}, \\
& y_t(\omega) \ge 0.
\end{aligned} \tag{5.8}
$$
The great advantage of block separability is that we need not consider nesting among the detailed level decisions. In this way, the $w$ variables can all be pulled together into a first stage of aggregate level decisions. The second stage is then composed of the detailed level decisions. Note that if the $b_t$ and $R_t$ are known, then the block separable problem is equivalent to a similarly sized two-stage stochastic linear program.
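A minimal numeric sketch of the decomposition in (5.7)-(5.8) follows (all data are hypothetical; the blocks $A_t$ and $B_t$ are taken square and invertible so each subproblem's equality constraint pins down its unique feasible point, which lets us evaluate the minimum without an LP solver, and the terminal stage is assumed, so $Q_{t+1} = 0$):

```python
# Hypothetical scalar stage: A_t w = b - R w_prev,  B_t y = d - S w_prev.
# With invertible blocks the feasible point is unique, so the "min" in
# (5.7)-(5.8) reduces to evaluating the cost at that point.

def Q_w(w_prev, b_t, R_prev, A_t=2.0, r_t=3.0):
    """Aggregate-level recourse cost Q_t^w (terminal stage: Q_{t+1} = 0)."""
    w_t = (b_t - R_prev * w_prev) / A_t
    assert w_t >= 0, "feasibility: w_t(omega) >= 0"
    return r_t * w_t

def Q_y(w_prev, d_t, S_prev, B_t=4.0, q_t=5.0):
    """Detailed-level recourse cost Q_t^y."""
    y_t = (d_t - S_prev * w_prev) / B_t
    assert y_t >= 0, "feasibility: y_t(omega) >= 0"
    return q_t * y_t

# Two equally likely scenarios for xi_t(omega) = (b_t, d_t, R_{t-1}, S_{t-1}).
scenarios = [
    {"b": 10.0, "d": 8.0,  "R": 1.0, "S": 0.5},
    {"b": 6.0,  "d": 12.0, "R": 0.5, "S": 1.0},
]

w_prev = 2.0
# Q_t separates: each scenario's cost is Q_t^w + Q_t^y, computed independently,
# and neither subproblem needs the previous detailed decision y_{t-1}.
expected_Q = sum(
    0.5 * (Q_w(w_prev, s["b"], s["R"]) + Q_y(w_prev, s["d"], s["S"]))
    for s in scenarios
)
print(expected_Q)  # 0.5 * 20.75 + 0.5 * 20.0 = 20.375
```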
Separability is indeed a very useful property for stochastic programs.
Computational methods should try to exploit it whenever it is inherent
in the problem because it may reduce work by orders of magnitude. We
will also see in Chapter 11 that separability can be added to a problem
(with some error that can be bounded). This approach opens many possible
applications with large numbers of random variables.
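As a back-of-envelope illustration of why separability may reduce work by orders of magnitude (the counts below are a stylized sketch under hypothetical sizes, not a complexity result from the text): nesting forces one subproblem per node of the scenario tree, while a separable structure lets each stage's detailed subproblems depend only on the aggregate decision and the current outcome.

```python
# Stylized size comparison with hypothetical numbers.
k, T = 10, 6  # 10 outcomes per stage, 6 stages

# Nested multistage: one subproblem per scenario-tree node.
nested_nodes = sum(k**t for t in range(1, T + 1))

# Separable sketch: one detailed subproblem per (stage, outcome) pair.
separable_subproblems = k * T

print(nested_nodes, separable_subproblems,
      nested_nodes / separable_subproblems)
```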
Another modeling approach that may have some computational advantage appears in Grinold [1976]. This approach extends from analyses of stochastic programs as examples of Markov decision processes. He assumes that $\omega_t$ belongs to some finite set $\{1, \ldots, k_t\}$, that the probabilities are determined by $p_{ij} = P\{\omega_{t+1} = j \mid \omega_t = i\}$ for all $t$, and that $T_t = T_t(\omega_t, \omega_{t+1})$.
In this framework, he can obtain an approximation that again obtains a
form of separability of future decisions from previous outcomes. We discuss
more approximation approaches in Chapter 11.
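A small sketch of the Markov assumption on the outcomes (the transition matrix below is hypothetical, with $k_t = 3$ outcomes at every stage):

```python
# Hypothetical row-stochastic matrix: p[i][j] = P{omega_{t+1} = j | omega_t = i}.
p = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.2, 0.7],
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in p)

def propagate(dist, p):
    """One-step distribution of omega_{t+1} given the distribution of omega_t."""
    k = len(p)
    return [sum(dist[i] * p[i][j] for i in range(k)) for j in range(k)]

dist = [1.0, 0.0, 0.0]   # omega_1 = state 0 with certainty
for _ in range(3):       # distribution of omega_4
    dist = propagate(dist, p)
print(dist)
```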
We now consider generalizations into nonlinear functions and infinite horizons. The general results of the previous section can be extended here directly. We concentrate on some areas where differences may occur and, for notational convenience, concentrate just on the description of problems in the form with explicit nonanticipativity constraints. More detailed descriptions of these problems appear in the papers by Rockafellar and Wets [1976a, 1976b], Dempster [1988], Flåm [1985], and Birge and Dempster [1992].