completely fair game. Then a choice of strategy j by X and i by Y wins a_ij for X, and a choice of j by Y and i by X wins the same amount for Y (because a_ji = −a_ij). The optimal strategies x* and y* must be the same, and the expected payoff must be y*Ax* = 0.
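This zero payoff can be checked directly: for any antisymmetric matrix (a_ji = −a_ij), the quantity xᵀAx is identically zero, so when both players use the same mixed strategy the average payoff vanishes. A minimal sketch; the payoff matrix below is an invented illustration, not one from the text:

```python
from fractions import Fraction

def expected_payoff(y, A, x):
    """Compute y^T A x, the average winnings of X when Y mixes with y and X mixes with x."""
    return sum(y[i] * A[i][j] * x[j] for i in range(len(y)) for j in range(len(x)))

# An antisymmetric (completely fair) payoff matrix: a_ji = -a_ij.
A = [[0, 1, -2],
     [-1, 0, 3],
     [2, -3, 0]]

# Any strategy played against itself gives payoff 0, since x^T A x = 0 when A^T = -A.
x = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
payoff = expected_payoff(x, A, x)
print(payoff)  # 0
```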
equation (1). In fact, x2 is the only other vector in the string, and the corresponding block J1 is of order 2. Equation (2) describes two different strings, one in which x4 follows x3, and another in which x5 is alone; the blocks J2 and J3 are 2 by 2 and 1 by 1.
answer the key question: How does the minimum cost cx* = y*b change if we change b or c? This is a
question in sensitivity analysis. It allows us to squeeze
extra information out of the dual problem. For an
economist or an executive, these questions about
are all similar to each other (and to J), but no matrices in different families are similar. In every family, J is the most beautiful, if you like matrices to be nearly diagonal. With this classification into families, we stop. Example 1.
A = [ 0 1 2
      0 0 1
transformations from V to W. If V is only a line in R^2, and W is only a line in R^3, then V⊗W is only a line in matrix space. The dimensions are now 1 · 1 = 1. All the rank-1 matrices vwᵀ will be multiples of one matrix.
Appendix A Intersection, Sum,
simplex step picked m columns of the long matrix [A I]
to be basic, and shifted them (theoretically) to the front.
This produced [B N]. The same shift reordered the long
cost vector [c 0] into [cB cN]. The stopping condition,
which brought the simplex method
Some of those sixteen couples are compatible, others
regrettably are not. When is it possible to find a complete
matching, with everyone married? If linear algebra can
work in 20-dimensional space, it can certainly handle the
trivial problem of marriage.
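Whether a complete matching exists can be decided by the standard augmenting-path algorithm for bipartite matching. A small sketch; the compatibility lists are a made-up illustration, not the sixteen couples of the text:

```python
def complete_matching(compat, n):
    """Try to match each of n 'girls' 0..n-1 to a distinct compatible 'boy'.
    compat[g] lists the boys girl g could marry. Returns the matching
    (a dict boy -> girl) if everyone can be married, else None."""
    match = {}  # boy -> girl

    def augment(g, seen):
        for b in compat[g]:
            if b not in seen:
                seen.add(b)
                # Boy b is free, or his current partner can be re-matched elsewhere.
                if b not in match or augment(match[b], seen):
                    match[b] = g
                    return True
        return False

    for g in range(n):
        if not augment(g, set()):
            return None  # Hall's condition fails: some girls share too few boys.
    return match

compat = {0: [0, 1], 1: [0], 2: [2, 3], 3: [2]}
m = complete_matching(compat, 4)
print(m)
```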
impossible. That case is now our main concern. We repeat the theorem that is to be proved: If a matrix A has s linearly independent eigenvectors, then it is similar to a matrix J that is in Jordan form, with s square blocks on the diagonal: J = M⁻¹AM, with blocks J1, . . . , Js down the diagonal.
the eigenspace for λ). Multiplier ℓ_ij The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate)/(jth pivot). Network A directed graph that has constants c1, . . . , cm associated with the edges. Nilp
columns plus free columns) always equals the total number of columns. When [A B] has k + ℓ columns, with k = dim V and ℓ = dim W, we reach a neat conclusion: Dimension formula dim(V+W) + dim(V∩W) = dim(V) + dim(W). (3) Not a bad formula. The overlap of V and W
i
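Formula (3) can be verified numerically: dim(V+W) is the rank of [A B], and the overlap then comes out as dim V + dim W − rank[A B]. A sketch with exact arithmetic; the particular subspaces are an invented example:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    cols = len(M[0]) if M else 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            l = M[i][c] / M[r][c]
            M[i] = [a - l * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# V is spanned by the columns of A, W by the columns of B (4-dimensional example).
A = [(1, 0), (0, 1), (0, 0), (0, 0)]    # V = span(e1, e2)
B = [(0, 0), (1, 0), (0, 1), (0, 0)]    # W = span(e2, e3)
AB = [ra + rb for ra, rb in zip(A, B)]  # rows of the block matrix [A B]

dimV, dimW = rank(A), rank(B)
dim_sum = rank(AB)                # dim(V + W)
dim_int = dimV + dimW - dim_sum   # dim(V ∩ W) from formula (3)
print(dimV, dimW, dim_sum, dim_int)
```

Here V and W share the line through e2, so the formula gives 2 + 2 = 3 + 1.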
product (tensor product) A⊗B Blocks a_ij B, eigenvalues λ_p(A)λ_q(B). Krylov subspace K_j(A,b) The subspace spanned by b, Ab, . . . , A^(j−1)b. Numerical methods approximate A⁻¹b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication
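Why A⁻¹b lies in a Krylov subspace at all can be seen from the Cayley-Hamilton theorem: a 2 by 2 matrix satisfies A² − (trace A)A + (det A)I = 0, so A⁻¹b = ((trace A)b − Ab)/det A is a combination of b and Ab, a vector in K_2(A, b). A sketch with an invented matrix:

```python
from fractions import Fraction

def matvec(A, v):
    return [sum(Fraction(a) * x for a, x in zip(row, v)) for row in A]

A = [[2, 1],
     [1, 3]]                      # invented example; trace = 5, det = 5
b = [Fraction(1), Fraction(2)]

trace = Fraction(A[0][0] + A[1][1])
det = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])

# The Krylov basis for K_2(A, b) is just b and Ab.
Ab = matvec(A, b)

# Cayley-Hamilton gives A^{-1} b = (trace*b - Ab)/det, a combination of b and Ab.
x = [(trace * bi - abi) / det for bi, abi in zip(b, Ab)]

# The residual b - Ax is exactly zero for this x.
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(x, residual)
```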
terms, one term for each permutation P of the columns. That term is the product a_1α · · · a_nω down the diagonal of the reordered matrix, times det(P) = ±1. Block matrix A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Bl
other two involve the generalized eigenvectors x2 and x4: u2 = e^(8t)(t x1 + x2) and u4 = e^(0t)(t x3 + x4). (8) The most general solution to du/dt = Au is a combination c1u1 + · · · + c5u5, and the combination that matches u0 at time t = 0 is again u0 = c1x1 + · · · + c5x5,
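The formula u2 = e^(8t)(t x1 + x2) can be checked directly on the model 2 by 2 Jordan block with eigenvalue 8, taking x1 = (1,0) as eigenvector and x2 = (0,1) as generalized eigenvector (this small block is a stand-in, not the book's 5 by 5 example). A centered difference quotient approximates du/dt and is compared with Ju:

```python
import math

J = [[8.0, 1.0],
     [0.0, 8.0]]                  # Jordan block with eigenvalue 8

def u(t):
    # u(t) = e^{8t} (t*x1 + x2) with x1 = (1, 0), x2 = (0, 1).
    return [math.exp(8 * t) * t, math.exp(8 * t)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

t, h = 0.3, 1e-6
dudt = [(a - b) / (2 * h) for a, b in zip(u(t + h), u(t - h))]
Ju = matvec(J, u(t))
err = max(abs(a - b) for a, b in zip(dudt, Ju))
print(err)  # tiny: u(t) really solves du/dt = Ju
```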
c. Because feasibility also includes x ≥ 0 and y ≥ 0, we can take inner products without spoiling those inequalities (multiplying by negative numbers would reverse them): yAx ≥ yb and yAx ≤ cx. (1) Since the left-hand sides are identical, we have weak duality yb ≤ cx
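Weak duality is easy to confirm on a tiny example. With an invented A, b, c, any primal-feasible x (Ax ≥ b, x ≥ 0) and dual-feasible y (yA ≤ c, y ≥ 0) satisfy yb ≤ yAx ≤ cx:

```python
from fractions import Fraction as F

A = [[F(2), F(1)],
     [F(1), F(3)]]               # invented data for illustration
b = [F(4), F(6)]
c = [F(3), F(5)]

x = [F(2), F(2)]                 # primal feasible: Ax = (6, 8) >= b, x >= 0
y = [F(1), F(1)]                 # dual feasible: yA = (3, 4) <= c, y >= 0

Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
yA = [sum(yi * A[i][j] for i, yi in enumerate(y)) for j in range(2)]
assert all(v >= bi for v, bi in zip(Ax, b)) and all(v <= ci for v, ci in zip(yA, c))

yb = sum(yi * bi for yi, bi in zip(y, b))
yAx = sum(yi * v for yi, v in zip(y, Ax))
cx = sum(ci * xi for ci, xi in zip(c, x))
print(yb, yAx, cx)   # yb <= yAx <= cx, the weak duality chain
```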
EA = R with an invertible E. Elimination matrix = Elementary matrix E_ij The identity matrix with an extra −ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i. Ellipse (or ellipsoid) xᵀAx = 1 A must be positive definite
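The E_ij entry can be checked with invented numbers: placing −ℓ in the (i, j) position of I produces a matrix whose product with A subtracts ℓ times row j from row i.

```python
def eye(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def elimination_matrix(n, i, j, l):
    """E_ij: the identity matrix with an extra -l in the (i, j) entry, i != j."""
    E = eye(n)
    E[i][j] = -l
    return E

A = [[2, 4], [6, 10]]
E = elimination_matrix(2, 1, 0, 3)   # subtract 3 times row 0 from row 1
EA = matmul(E, A)
print(EA)  # [[2, 4], [0, -2]]
```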
knight out and back and then follow that strategy, leading
to the impossible conclusion that both would win. 13. If X
chooses a prime number and simultaneously Y guesses
whether it is odd or even (with gain or loss of $1), who
has the advantage? 14. If X
pure two-hand strategy, the more X will move toward
one hand. The fundamental problem is to find the best
mixed strategies. Can X choose probabilities x1 and x2
that present Y with no reason to move his own strategy
(and vice versa)? Then the average payoff
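For a 2 by 2 game this equalizing choice has a closed form: X picks x1 so that Y's two columns give the same expected payoff, leaving Y no reason to move. A sketch with an invented payoff matrix:

```python
from fractions import Fraction as F

# Invented 2x2 payoff matrix: A[i][j] is X's winnings when X plays i and Y plays j.
A = [[F(1), F(3)],
     [F(4), F(2)]]

# Choose x1 so that Y is indifferent between the two columns:
# x1*A[0][0] + x2*A[1][0] == x1*A[0][1] + x2*A[1][1], with x2 = 1 - x1.
den = A[0][0] - A[0][1] - A[1][0] + A[1][1]
x1 = (A[1][1] - A[1][0]) / den
x = [x1, 1 - x1]

col0 = x[0] * A[0][0] + x[1] * A[1][0]
col1 = x[0] * A[0][1] + x[1] * A[1][1]
print(x, col0, col1)  # both columns cost Y the same, so Y cannot improve
```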
orders of 1, . . . , n; the n! Ps have the rows of I in those orders. PA puts the rows of A in the same order. P is a product of row exchanges P_ij; P is even or odd (det P = +1 or −1) based on the number of exchanges. Pivot columns of A Columns that contain pivots
find a shortest spanning tree for the network of Problem
2. 14. (a) Why does the greedy algorithm work for the
spanning tree problem? (b) Show by example that the
greedy algorithm could fail to find the shortest path from
s to t, by starting with the shor
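The greedy (shortest-edge-first) rule for spanning trees can be sketched with a union-find structure; the edge list below is an invented example, not the network of Problem 2.

```python
def greedy_spanning_tree(n, edges):
    """Kruskal's greedy algorithm: take edges in order of length,
    skipping any edge that would close a loop."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    tree = []
    for length, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # u and v lie in different trees: no loop is formed
            parent[ru] = rv
            tree.append((length, u, v))
    return tree

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
tree = greedy_spanning_tree(4, edges)
print(tree, sum(e[0] for e in tree))
```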
beautiful.

Problem Set 8.4

1. In Figure 8.5, add 3 to every capacity. Find by inspection the maximal flow and minimal cut.
2. Find a maximal flow and minimal cut for the following network:

Chapter 8 Linear Programming and Game Theory

3. If you could i
Confess and you are free, provided your accomplice does
not confess (the accomplice then gets 10 years). If both
confess, each gets 6 years. If neither confesses, only a
minor crime (2 years each) can be proved. What to do?
The temptation to confess is ve
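The dilemma's logic can be enumerated directly: whatever the accomplice does, confessing gives the shorter sentence, so confess/confess is the stable outcome, even though both prisoners would prefer the outcome where neither confesses.

```python
# years[(me, other)] = my sentence; 'C' = confess, 'S' = stay silent.
years = {('C', 'C'): 6, ('C', 'S'): 0, ('S', 'C'): 10, ('S', 'S'): 2}

def best_response(other):
    # Choose my move to minimize my own sentence, given the other's move.
    return min(['C', 'S'], key=lambda me: years[(me, other)])

# Confessing is a dominant strategy: it is best against either choice.
print(best_response('C'), best_response('S'))  # C C
# Yet (S, S) would give both prisoners fewer years than (C, C).
print(years[('S', 'S')], years[('C', 'C')])    # 2 6
```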
0. In the language of subspaces, either b is in the column
space, or it has a component sticking into the left
nullspace. That component is the required y. For
inequalities, we want to find a theorem of exactly the
same kind. Start with the same system Ax = b
I) = 0 but only (A − λI)² = 0, an equation with a repeated root.

Appendix C Matrix Factorizations

1. A = LU = (lower triangular L, 1s on the diagonal)(upper triangular U, pivots on the diagonal). Requirements: No row exchanges as Gaussian elimination reduces A to U.
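A minimal sketch of this factorization under the stated requirement (no row exchanges, so every pivot is nonzero when it is reached), using exact rational arithmetic:

```python
from fractions import Fraction

def lu(A):
    """A = LU by Gaussian elimination, assuming no row exchanges are needed.
    L is lower triangular with 1s on the diagonal; U carries the pivots."""
    n = len(A)
    U = [[Fraction(v) for v in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n):
        assert U[j][j] != 0, "zero pivot: a row exchange would be required"
        for i in range(j + 1, n):
            l = U[i][j] / U[j][j]          # multiplier l_ij
            L[i][j] = l
            U[i] = [a - l * b for a, b in zip(U[i], U[j])]
    return L, U

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu(A)
print(L)
print(U)
```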
Union. With vector spaces, this is not natural. The union V ∪ W of two subspaces will not in general be a subspace. If V and W are the x-axis and the y-axis in the plane, the two axes together are not a subspace. The sum of (1,0) and (0,1) is not on either axis.
cost = (cN − cB B⁻¹N)xN + cB B⁻¹b = r xN + cB B⁻¹b. If egg is the first free variable, then increasing the first component of xN to δ will increase the cost by r1δ. The real cost of egg is r1. This is the change in diet cost as the zero lower bound (nonnegativity
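The reduced-cost formula r = cN − cB B⁻¹N can be evaluated on a tiny problem; r1 is then exactly the rate at which the cost grows as the first nonbasic variable leaves its zero bound. The basis B, nonbasic columns N, and costs below are invented for illustration:

```python
from fractions import Fraction as F

# Invented 2x2 basis B, nonbasic columns N, and costs cB, cN.
B = [[F(2), F(0)],
     [F(1), F(1)]]
N = [[F(1), F(3)],
     [F(2), F(1)]]
cB = [F(3), F(1)]
cN = [F(4), F(2)]

def solve2(M, rhs):
    """Solve a 2x2 system M z = rhs by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    z0 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
    z1 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
    return [z0, z1]

# y = cB B^{-1}: solve y B = cB, i.e. B^T y = cB.
BT = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]
y = solve2(BT, cB)

# r = cN - y N: one reduced cost for each nonbasic column.
yN = [y[0] * N[0][j] + y[1] * N[1][j] for j in range(2)]
r = [cn - v for cn, v in zip(cN, yN)]
print(y, r)
```

A negative component of r signals a column whose entry into the basis would lower the cost; the simplex method stops when r ≥ 0.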
from that tree. The steps depend on the selection order
of the trees. To stay with the same tree is algorithm 1. To
take the lengths in order is algorithm 2. To sweep through
all the trees in turn is a new
algorithm. It sounds so ea
maximized. The feasible sets for the primal and dual problems look completely different. The first is a subset of R^n, marked out by x ≥ 0 and Ax ≥ b. The second is a subset of R^m, determined by y ≥ 0 and yA ≤ c. The whole theory
row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = I_m.

Appendix D Glossary: A Dictionary for Linear Algebra

Rotation matrix R = [cos θ −sin θ; sin θ cos θ] rotates the plane by θ, and R⁻¹ = Rᵀ rotates back by −θ. Orthogonal matrix, eigenvalues e^(iθ) and e^(−iθ), eigenvectors
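The rotation entry is easy to verify numerically: R(−θ) equals R(θ)ᵀ, and either one undoes R(θ).

```python
import math

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

theta = 0.7
R = rotation(theta)
back = matmul(rotation(-theta), R)   # R(-theta) R(theta) should be I
err = max(abs(back[i][j] - (i == j)) for i in range(2) for j in range(2))
print(err)  # essentially zero: rotating back undoes the rotation
```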
any other strategies x and y, Ax* ≥ b implies yAx* ≥ yb = 1 and y*A ≤ c implies y*Ax ≤ cx = 1. The main point is that y*Ax ≤ 1 ≤ yAx*. Dividing by S, this says that player X cannot win more than 1/S against the strategy y*/S, and player Y cannot lose less than 1/S against x*/S.
But the capacity n − p + r is below n exactly when p > r, and Hall's condition fails.

Spanning Trees and the Greedy Algorithm

A fundamental network model is the shortest path problem, in which the edges have lengt
good. You can see how the duality has become complete. These optimality conditions are easy to understand in matrix terms. From equation (1) we want y*Ax* = y*b at the optimum. Feasibility requires Ax* ≥ b, and we look for any components in which equality fai
at y*, this is a saddle point from which nobody wants to move: y*Ax ≤ y*Ax* ≤ yAx* for all x and y. (4) At this saddle point, x* is at least as good as any other x (since y*Ax ≤ y*Ax*). And the second player Y could only pay more by leaving y*.
strings are Aw1 = 8w1, Aw2 = 8w2 + w1, Aw3 = 0w3, Ay = 0y + w3, Az = 0z. Comparing with equations (1) and (2), we have a perfect match: the Jordan form of our example will be exactly the J we wrote earlier. Putting the five vectors into the columns of M must g
smaller and smaller barriers, given by the size of θ. In reality, those nonlinear equations are approximately solved by Newton's method (which means they are linearized). The nonlinear term is s = θX⁻¹e. To avoid 1/xi,
the y-direction (examples below). If A and B are square, so m = n and p = q, then the big matrix A⊗B is also square. Example 8. (Finite differences in the x and y directions) Laplace's partial differential equation ∂²u/∂x² + ∂²u/∂y² = 0 is replaced by finite
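The finite-difference matrix for Laplace's equation is conveniently built from Kronecker products: with T the 1-D second-difference matrix and I the identity, the 2-D five-point matrix is I⊗T + T⊗I. A sketch with a pure-Python kron; the 3-point grid per direction is an invented size:

```python
def kron(A, B):
    """Kronecker product: the block matrix with blocks a_ij * B."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 3
I = [[int(i == j) for j in range(n)] for i in range(n)]
# 1-D second-difference matrix: 2 on the diagonal, -1 on the off-diagonals.
T = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

# 2-D five-point Laplacian on an n-by-n grid: differences in x plus differences in y.
L2 = add(kron(I, T), kron(T, I))
print(L2[4])  # center point: 4 on the diagonal, -1 for each of its 4 neighbors
```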