left of x_j, and 1/h in the interval to the right. If these double intervals do not overlap, the product V_i′V_j′ is zero and A_ij = 0. Each hat function overlaps itself and only two neighbors:

Diagonal i = j: A_ii = ∫ V_i′V_i′ dx = ∫ (1/h)² dx + ∫ (−1/h)² dx = 2/h
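These overlap integrals give a tridiagonal matrix: the same computation for neighboring hats gives A_i,i±1 = −1/h, since the slopes −1/h and 1/h share one interval of width h. A minimal NumPy sketch of the assembly (the size n = 4 below is an illustrative assumption):

```python
import numpy as np

def stiffness_matrix(n):
    """A_ij = integral of V_i' V_j' for n hat functions on (0,1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 / h            # each hat overlaps itself on two intervals
        if i + 1 < n:
            A[i, i + 1] = -1.0 / h   # slopes -1/h and 1/h share one interval
            A[i + 1, i] = -1.0 / h
    return A
```

With n = 4 (so h = 1/5) the diagonal entries are 10 and the off-diagonal entries are −5.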
feasible set is the intersection of the three halfspaces x + 2y ≥ 4, x ≥ 0, and y ≥ 0. A feasible set is composed of the solutions to a family of linear inequalities like Ax ≥ b (the intersection of m halfspaces). When we also require that every component of x is nonnegative
become a complex conjugate pair; both have |λ| = ω − 1, which is now increasing with ω. The discovery that such an improvement could be produced so easily, almost as if by magic, was the starting point for 20 years of enormous activity in numerical analysis. The f
normalization for b; our problem is to compare the relative change ‖δb‖/‖b‖ with the relative error ‖δx‖/‖x‖. The worst case is when ‖δx‖ is large (with δb in the direction of the eigenvector x₁) and when ‖x‖ is small. The true solution x should be as small as
M_ij = ∫ V_iV_j dx for n hat functions with h = 1/(n+1)?
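A quadrature check is one way to approach this question. The sketch below (grid size and n = 4 are illustrative assumptions) integrates products of hat functions numerically; the values it produces are the standard mass-matrix entries 2h/3 on the diagonal and h/6 for neighbors:

```python
import numpy as np

def hat(i, x, h):
    """Hat function V_i with peak 1 at node x_i = (i+1)h, support of width 2h."""
    return np.maximum(0.0, 1.0 - np.abs(x - (i + 1) * h) / h)

n = 4
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, 200001)   # fine grid for quadrature
dx = x[1] - x[0]

def integrate(f):
    """Trapezoidal rule on the fine grid."""
    return float(np.sum((f[:-1] + f[1:]) * dx / 2.0))

Mii = integrate(hat(0, x, h) * hat(0, x, h))   # diagonal entry, 2h/3
Mij = integrate(hat(0, x, h) * hat(1, x, h))   # neighbor entry, h/6
```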
Chapter 7 Computations with Matrices

7.1 Introduction

One aim of this book is to explain the useful parts of matrix theory. In comparison with older texts in abstract linear algebra, the underlying theo
largest amount by which any vector (eigenvector or not) is amplified by matrix multiplication: ‖A‖ = max(‖Ax‖/‖x‖). The norm of the identity matrix is 1. To compute the norm, square both sides to reach the symmetric AᵀA: ‖A‖² = max(‖Ax‖²/‖x‖²) = max(xᵀAᵀAx/xᵀx)
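This recipe can be checked numerically; the 2 by 2 matrix below is an arbitrary illustration, not one from the text:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])           # arbitrary sample matrix

# ||A|| is the square root of the largest eigenvalue of the symmetric AtA
lam_max = np.linalg.eigvalsh(A.T @ A).max()
norm_A = float(np.sqrt(lam_max))
```

This agrees with NumPy's built-in 2-norm `np.linalg.norm(A, 2)`, and gives 1 for the identity matrix.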
Then Section 8.4 is about problems (like marriage) in
which the solution is an integer. Section 8.5 discusses
poker and other matrix games. The MIT students in
Bringing Down the House counted high cards to win at
blackjack (Las Vegas follows fixed rules,
connected to the absolutely original A₀ by Q⁻¹A₀Q = A₁. As it stands, the QR algorithm is good but not very good. To make it special, it needs two refinements: we must allow shifts to A_k − α_kI, and we must ensure that the QR factorization at each step is very fast
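The shifted iteration can be sketched in a few lines. The Wilkinson-style shift below (taken from the trailing 2 by 2 block) is one standard choice, and the dense QR factorization stands in for the fast Hessenberg version; this is an illustration, not an optimized implementation:

```python
import numpy as np

def wilkinson_shift(a, b, c):
    """Shift from the trailing 2 by 2 block [[a, b], [b, c]]."""
    if b == 0.0:
        return c
    d = (a - c) / 2.0
    s = 1.0 if d >= 0 else -1.0
    return c - s * b * b / (abs(d) + np.hypot(d, b))

def qr_eigenvalues(A, iters=200):
    """Shifted QR: factor A_k - s_k I = Q_k R_k, then A_{k+1} = R_k Q_k + s_k I.
    Each A_{k+1} is similar to A_k, so the eigenvalues are preserved."""
    Ak = np.array(A, dtype=float)
    n = Ak.shape[0]
    for _ in range(iters):
        s = wilkinson_shift(Ak[-2, -2], Ak[-1, -2], Ak[-1, -1])
        Q, R = np.linalg.qr(Ak - s * np.eye(n))
        Ak = R @ Q + s * np.eye(n)
    return np.sort(np.diag(Ak))
```

On the symmetric tridiagonal matrix with diagonal 2 and off-diagonal −1, the diagonal of A_k settles down to the eigenvalues 2 − √2, 2, 2 + √2.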
y_nV_n(x), we look for the particular combination (call it U) that minimizes P(V). This is the key idea: to minimize over a subspace of V's instead of over all possible v(x). The function that gives the minimum is U(x). We hope and expect that U(x) is near t
outstanding successes of numerical analysis. It is clearly defined, its importance is obvious, but until recently no one knew how to solve it. Dozens of algorithms have been suggested, and everything depends on the size and the properties of A (and on th
P₃₂P₃₁P₂₁A = I, the three robot turns are in A = P₂₁⁻¹P₃₁⁻¹P₃₂⁻¹. The three angles are Euler angles. Choose the first so that

P₂₁A = [cos θ, sin θ, 0; −sin θ, cos θ, 0; 0, 0, 1] · (1/3)[−1, 2, 2; 2, −1, 2; 2, 2, −1]

is zero in the (2,1) position.

7.4 Iterative Methods for Ax = b

In c
a general 2 by 2 matrix A = [a, b; c, d], find the Jacobi iteration matrix S⁻¹T = D⁻¹(L + U) and its eigenvalues λᵢ. Find also the Gauss-Seidel matrix (D + L)⁻¹U and its eigenvalues μᵢ, and decide whether μ_max = λ²_max. 8. Change Ax = b to x = (I − A)x + b. What a
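The claim that the Gauss-Seidel radius is the square of Jacobi's can be tested numerically; the entries a, b, c, d below are arbitrary sample values, and the sign convention chosen for the splitting does not affect the spectral radii:

```python
import numpy as np

def spectral_radius(M):
    """Largest |eigenvalue| of M."""
    return max(abs(np.linalg.eigvals(M)))

a, b, c, d = 4.0, 1.0, 2.0, 5.0             # arbitrary sample entries
A = np.array([[a, b], [c, d]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

jacobi = -np.linalg.inv(D) @ (L + U)        # one common sign convention
gauss_seidel = -np.linalg.inv(D + L) @ U    # sign does not change |eigenvalues|

rho_j = spectral_radius(jacobi)
rho_gs = spectral_radius(gauss_seidel)      # equals rho_j**2 for 2 by 2
```

For 2 by 2 matrices the radii come out as √|bc/ad| and |bc/ad|, confirming μ_max = λ²_max.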
prove a lower bound: ‖δx‖/‖x‖ ≥ (1/c)‖δb‖/‖b‖. (Consider A⁻¹δb = δx instead of Ax = b.) 12. Find the norms λ_max and condition numbers λ_max/λ_min of these positive definite matrices: [100, 0; 0, 2], [2, 1; 1, 2], [3, 1; 1, 1]. 13. Find the norms and condition numbers from
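For positive definite matrices the norm is λ_max and the condition number is λ_max/λ_min, so the three matrices above can be checked directly (a sketch using NumPy's symmetric eigenvalue routine):

```python
import numpy as np

matrices = [np.array([[100.0, 0.0], [0.0, 2.0]]),
            np.array([[2.0, 1.0], [1.0, 2.0]]),
            np.array([[3.0, 1.0], [1.0, 1.0]])]

results = []
for A in matrices:
    lam = np.linalg.eigvalsh(A)                 # eigenvalues in ascending order
    results.append((lam[-1], lam[-1] / lam[0])) # (norm, condition number)
```

The first matrix has norm 100 and condition number 50; the second has eigenvalues 1 and 3; the third has eigenvalues 2 ± √2, so its condition number is 3 + 2√2.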
needed in Chicago at a distance of 1000, 2000, and 3000
miles from the three producers, respectively; and
2,200,000 barrels are needed in New England 1500,
3000, and 3700 miles away. If shipments cost one unit for
each barrel-mile, what linear program wit
row r. If B⁻¹u ≤ 0, the next corner is infinitely far away and the minimal cost is −∞ (this doesn't happen here). Our example will go from the corner P to Q, and begin again at Q. Example 3. The original cost function x + y and constraints Ax = b = (6,6) give
and a Cadillac in 3 minutes. What is the maximum profit in 8 hours (480 minutes)? Problem: Maximize the profit 200x + 300y + 500z subject to 20x + 17y + 14z ≥ 18(x + y + z), x + 2y + 3z ≤ 480, x, y, z ≥ 0. 2. Portfolio Selection. Federal bonds pay 5%, municipals pay 6%, and junk
right away, on the first column of A. The final Q⁻¹AQ is allowed one nonzero diagonal below the main diagonal (Hessenberg form). Therefore only the entries strictly below the diagonal will be involved:

x = (a₂₁, a₃₁, …, aₙ₁)ᵀ, z = (1, 0, …, 0)ᵀ, Hx = (±‖x‖, 0, …, 0)ᵀ.
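The full reduction by these reflections can be sketched as follows; H = I − 2uuᵀ is the usual Householder form, and this dense version is illustrative rather than optimized:

```python
import numpy as np

def hessenberg(A):
    """Reduce A to Hessenberg form by Householder reflections H = I - 2uu^T,
    applied from both sides so the result stays similar to A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(n - 2):
        x = A[j + 1:, j].copy()               # entries strictly below the diagonal
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:
            continue
        v = x.copy()
        v[0] += np.copysign(norm_x, x[0])     # stable choice of sign
        u = v / np.linalg.norm(v)
        H = np.eye(n - j - 1) - 2.0 * np.outer(u, u)
        A[j + 1:, :] = H @ A[j + 1:, :]       # act on rows j+1..n
        A[:, j + 1:] = A[:, j + 1:] @ H       # H is symmetric and orthogonal
    return A
```

Each two-sided application is a similarity transformation, so the eigenvalues are untouched while everything below the first subdiagonal is zeroed out.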
contained some component of the eigenvector xₙ, so that cₙ ≠ 0, this component will gradually dominate in u_k:

u_k/λₙᵏ = c₁(λ₁/λₙ)ᵏ x₁ + ⋯ + cₙ₋₁(λₙ₋₁/λₙ)ᵏ xₙ₋₁ + cₙxₙ.  (1)

The vectors u_k point more and more accurately toward the direction of xₙ. Their convergence fac
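Equation (1) is the power method; a minimal sketch, with an arbitrary 2 by 2 example whose dominant eigenvalue is (5 + √5)/2:

```python
import numpy as np

def power_method(A, iters=200):
    """u_{k+1} = A u_k, renormalized; u_k swings toward the eigenvector with
    the largest |eigenvalue|, at the rate |lambda_{n-1}/lambda_n| per step."""
    u = np.ones(A.shape[0])          # must contain some of that eigenvector
    for _ in range(iters):
        u = A @ u
        u = u / np.linalg.norm(u)
    return u @ A @ u, u              # Rayleigh quotient and direction

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, u = power_method(A)
```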
Karmarkar's Method

We come now to the most sensational event in the recent history of linear programming. Karmarkar proposed a method based on two simple ideas, and in his experiments it defeated the simplex method. The choice of problem and the details of
b. (2) Every important quantity appears in the fully reduced tableau R. We can decide whether the corner is optimal by looking at r = c_N − c_B B⁻¹N in the middle of the bottom row.

428 Chapter 8 Linear Programming and Game Theory

If any entry in r is negative
the corner R is optimal. 6. Phase I finds a basic feasible solution to Ax = b (a corner). After changing signs to make b ≥ 0, consider the auxiliary problem of minimizing w₁ + w₂ + ⋯ + wₘ, subject to x ≥ 0, w ≥ 0, Ax + w = b. Whenever Ax = b has a nonnegative solu
multiply the old B⁻¹ by E⁻¹, where E is the identity matrix with its kth column replaced by v = (v₁, …, vₙ); its inverse is the identity with that column replaced by (−v₁/vₖ, …, 1/vₖ, …, −vₙ/vₖ). (5) Many simplex codes use the product form of the inverse, which saves these simple matrices E⁻¹ instead of directly updating B⁻¹. When needed, they are applied to b and c_B. At regula
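The closed form for E⁻¹ is easy to verify numerically; the vector v and the position k below are arbitrary sample choices:

```python
import numpy as np

n, k = 4, 2
v = np.array([3.0, -1.0, 2.0, 5.0])    # sample entering column, v_k = 2

E = np.eye(n)
E[:, k] = v                            # eta matrix: identity except column k

Einv = np.eye(n)
Einv[:, k] = -v / v[k]                 # off-diagonal entries -v_i / v_k
Einv[k, k] = 1.0 / v[k]                # diagonal entry 1 / v_k
```

Multiplying by E⁻¹ costs only n operations, which is the point of the product form.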
Ax = b.

Problem Set 7.4

1. This matrix has eigenvalues 2 − √2, 2, and 2 + √2:

A = [2, −1, 0; −1, 2, −1; 0, −1, 2].

Find the Jacobi matrix D⁻¹(L + U) and the Gauss-Seidel matrix (D + L)⁻¹U and their eigenvalues, and the numbers ω_opt and λ_max for SOR. 2. For this n by n matr
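Problem 1 can be checked numerically. The sketch below uses one common sign convention for the splitting (the spectral radii are unaffected) and the standard SOR formula ω_opt = 2/(1 + √(1 − λ²_max)) for this class of matrices:

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

jacobi = -np.linalg.inv(D) @ (L + U)
gauss_seidel = -np.linalg.inv(D + L) @ U

rho_j = max(abs(np.linalg.eigvals(jacobi)))         # sqrt(2)/2 here
rho_gs = max(abs(np.linalg.eigvals(gauss_seidel)))  # the square: 1/2
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_j**2))
```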
matrix AD takes the place of A, and the vector cᵀD takes the place of cᵀ. The second step projects the new c onto the nullspace of the new A. All the work is in this projection, to solve the weighted normal equations:

(AD²Aᵀ)y = AD²c.  (7)

The normal w
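Equation (7) can be exercised on a tiny example. The matrices A and c and the current iterate d below are arbitrary illustrative data; after solving for y, the residual Dc − (AD)ᵀy lands in the nullspace of AD, as the projection requires:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])        # sample constraint matrix
c = np.array([2.0, 3.0, 1.0])          # sample cost vector
d = np.array([0.5, 1.0, 2.0])          # current positive iterate -> D = diag(d)
D = np.diag(d)

AD = A @ D
# Weighted normal equations (A D^2 A^T) y = A D^2 c
y = np.linalg.solve(AD @ AD.T, A @ D @ D @ c)
p = D @ c - AD.T @ y                   # projected direction in nullspace of AD
```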
this cause in x? 24. If you know L, U, Q, and R, is it faster to solve LUx = b or QRx = b? 25. Choosing the largest available pivot in each column (partial pivoting), factor each A into PA = LU: A = [1, 0; 2, 2] and A = [1, 0, 1; 2, 2, 0; 0, 2, 0].

7.3 Computation o
vector x (the old x and w) are zero. These n components
of x are the free variables in Ax = b. The remaining m
components are the basic variables or pivot variables.
Setting the n free variables to zero, the m equations Ax =
b determine the m basic variab
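Corners can be enumerated exactly this way: choose m basic columns, set the n free variables to zero, and solve for the m basic variables. The constraint data below (2x + y ≤ 6 and x + 2y ≤ 6 with slack variables, as in the example of Figure 8.3) recovers the corners of that feasible set:

```python
import numpy as np
from itertools import combinations

# Constraints 2x + y <= 6 and x + 2y <= 6 written as Ax = b with slacks
A = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([6.0, 6.0])
m, n = A.shape

corners = []
for basis in combinations(range(n), m):
    cols = list(basis)
    B = A[:, cols]                        # candidate basic columns
    if abs(np.linalg.det(B)) < 1e-12:
        continue                          # dependent columns: no corner
    xB = np.linalg.solve(B, b)            # free variables set to zero
    if (xB >= -1e-12).all():              # nonnegative: basic feasible solution
        x = np.zeros(n)
        x[cols] = xB
        corners.append(x)
```

Of the six possible bases, four give nonnegative solutions: the corners (0,0), (3,0), (0,3), and (2,2) in the (x, y) plane.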
methods approach that optimal solution from inside the
feasible set. Note. With a different cost function, the
intersection might not be just a single point. If the cost
happened to be x+2y, the whole edge between B and A
would be optimal. The minimum cos
The condition number is approximately c(A) = n²/2, and this time the dependence on the order n is genuine. The better we approximate −u″ = f by increasing the number of unknowns, the harder it is to compute the approximation. At a certain crossover
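The quadratic growth is easy to observe on the second-difference matrix itself (a sketch; the sizes 10, 20, 40 are arbitrary):

```python
import numpy as np

def second_difference(n):
    """The n by n matrix from -u'' = f: 2 on the diagonal, -1 beside it."""
    return (2.0 * np.eye(n)
            - np.eye(n, k=1)
            - np.eye(n, k=-1))

conds = {n: np.linalg.cond(second_difference(n)) for n in (10, 20, 40)}
# Doubling n roughly quadruples the condition number
```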
impractical for large m and n.

[Figure 8.3: The corners P, Q, R, and the edges of the feasible set, bounded by 2x + y = 6, x + 2y = 6, y = 0, and x = 0.]

It is the task of Phase I either to fi
perfectly conditioned: c(Q) = 1. The change in the eigenvalues is no greater than the change in A. Therefore the best case is when A is symmetric, or more generally when AAᵀ = AᵀA. Then A is a normal matrix; its diagonalizing S is an orthogonal Q (Section 5