left of $x_j$, and $1/h$ in the interval to the right. If these double intervals do not overlap, the product $V_i'V_j'$ is zero and $A_{ij} = 0$. Each hat function overlaps itself and
only two neighbors: Dia
feasible set is the intersection of the three halfspaces $x + 2y \ge 4$, $x \ge 0$, and $y \ge 0$. A feasible set is composed of the solutions to a family of linear inequalities like $Ax \ge b$ (the intersection of m halfspace
become a complex conjugate pair; both have $|\lambda| = \omega - 1$, which is now increasing with $\omega$. The discovery that such
an improvement could be produced so easily, almost as if
by magic, was the starting point for 20
normalization for b; our problem is to compare the relative change $\|\delta b\|/\|b\|$ with the relative error $\|\delta x\|/\|x\|$. The worst case is when $\|\delta x\|$ is large, with $b$ in the direction of the eigenvector $x_1$, and when
$M_{ij} = \int V_iV_j\,dx$ for $n$ hat functions with $h = \frac{1}{n+1}$?
Chapter 7 Computations with Matrices

7.1 Introduction

One aim of this book is to explain the useful parts of matrix theory. In comparison with olde
largest amount by which any vector (eigenvector or not)
is amplified by matrix multiplication: $\|A\| = \max(\|Ax\|/\|x\|)$. The norm of the identity matrix is 1. To
compute the norm, square both sides to reac
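This maximum-amplification definition can be checked numerically. A minimal sketch (the 2 by 2 matrix is made up, not taken from the text): sampled ratios $\|Ax\|/\|x\|$ never exceed the largest singular value, which equals $\|A\|$.

```python
import numpy as np

# Made-up example matrix: ||A|| is the largest ratio ||Ax|| / ||x||,
# which equals the largest singular value of A.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# The norm, computed directly as the largest singular value.
norm_A = np.linalg.norm(A, 2)

# Estimate max ||Ax|| / ||x|| by sampling many random directions x.
rng = np.random.default_rng(0)
ratios = []
for _ in range(10000):
    x = rng.standard_normal(2)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print(norm_A)        # the true norm
print(max(ratios))   # never exceeds norm_A, and comes close
```

The sampled maximum approaches $\|A\|$ from below; the exact value comes from the singular value decomposition.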
Then Section 8.4 is about problems (like marriage) in
which the solution is an integer. Section 8.5 discusses
poker and other matrix games. The MIT students in
Bringing Down the House counted high car
connected to the absolutely original $A$ by $Q^{-1}AQ = A'$.
As it stands, the QR algorithm is good but not very good.
To make it special, it needs two refinements: We must
allow shifts to $A_k - \alpha_k I$, and we must
$y_nV_n(x)$, we look for the particular combination (call it $U$) that minimizes $P(V)$. This is the key idea, to minimize over a subspace of $V$'s instead of over all possible $v(x)$. The
function that gives the
standing successes of numerical analysis. It is clearly
defined, its importance is obvious, but until recently no
one knew how to solve it. Dozens of algorithms have
been suggested, and everything
$P_{32}P_{31}P_{21}A = I$, the three robot turns are in $A = P_{21}^{-1}P_{31}^{-1}P_{32}^{-1}$. The three angles are Euler angles. Choose the first so that

$$P_{21}A = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \frac{1}{3}\begin{bmatrix} -1 & 2 & 2 \\ 2 & -1 & 2 \\ 2 & 2 & -1 \end{bmatrix}$$

is zero in the (2,1) entry.
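Zeroing the (2,1) entry with a plane rotation can be illustrated numerically. The matrix entries below follow my reconstruction of the garbled text, so treat them as an assumption; the rotation angle itself is the standard Givens choice.

```python
import numpy as np

# Assumed matrix (reconstructed from the garbled text): an orthogonal A.
A = np.array([[-1.0,  2.0,  2.0],
              [ 2.0, -1.0,  2.0],
              [ 2.0,  2.0, -1.0]]) / 3.0

# Choose the angle of the plane rotation P21 so that (P21 @ A)[1, 0] = 0.
theta = np.arctan2(A[1, 0], A[0, 0])
c, s = np.cos(theta), np.sin(theta)
P21 = np.array([[  c,   s, 0.0],
                [ -s,   c, 0.0],
                [0.0, 0.0, 1.0]])

B = P21 @ A
print(B[1, 0])   # ~0: the (2,1) entry has been rotated away
```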
a general 2 by 2 matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, find the Jacobi iteration matrix $S^{-1}T = -D^{-1}(L+U)$ and its eigenvalues $\mu_i$. Find also the Gauss-Seidel matrix $-(D+L)^{-1}U$ and its eigenvalues $\lambda_i$, and decide whether
prove a lower bound: $\frac{\|\delta x\|}{\|x\|} \ge \frac{1}{c}\frac{\|\delta b\|}{\|b\|}$. (Consider $A^{-1}\delta b = \delta x$ instead of $Ax = b$.) 12. Find the norms $\lambda_{\max}$ and condition numbers $\lambda_{\max}/\lambda_{\min}$ of these positive definite matrices: $\begin{bmatrix} 100 & 0 \\ 0 & 2 \end{bmatrix}$, $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, $\begin{bmatrix} 3 & \cdots \end{bmatrix}$
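These can be checked numerically. A sketch covering the two matrices that survive intact in the text: for a symmetric positive definite matrix the norm is $\lambda_{\max}$ and the condition number is $\lambda_{\max}/\lambda_{\min}$.

```python
import numpy as np

# For symmetric positive definite A: norm = lambda_max,
# condition number = lambda_max / lambda_min.
results = []
for A in (np.array([[100.0, 0.0], [0.0, 2.0]]),
          np.array([[2.0, 1.0], [1.0, 2.0]])):
    lam = np.linalg.eigvalsh(A)                  # eigenvalues, ascending
    results.append((lam[-1], lam[-1] / lam[0]))  # (norm, condition number)

print(results)   # [(100, 50), (3, 3)] up to rounding
```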
needed in Chicago at a distance of 1000, 2000, and 3000
miles from the three producers, respectively; and
2,200,000 barrels are needed in New England, 1500,
3000, and 3700 miles away. If shipments cost
row r. If $B^{-1}u \le 0$, the next corner is infinitely far away and the minimal cost is $-\infty$ (this doesn't happen here).
Our example will go from the corner P to Q, and begin
again at Q. Example 3. The original co
and a Cadillac in 3 minutes. What is the maximum profit in 8 hours (480 minutes)? Problem Maximize the profit $200x + 300y + 500z$ subject to $20x + 17y + 14z \le 18(x + y + z)$, $x + 2y + 3z \le 480$, $x, y, z \ge 0$. 2. Portfolio Select
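The problem can be handed to a solver directly. A sketch with scipy; the inequality directions are my reading of the garbled constraints, so this is an illustration rather than the book's certified answer.

```python
from scipy.optimize import linprog

# Assumed reading of the constraints:
#   20x + 17y + 14z <= 18(x + y + z)  ->  2x - y - 4z <= 0
#   x + 2y + 3z <= 480,  x, y, z >= 0
res = linprog(c=[-200, -300, -500],            # negate profit to maximize
              A_ub=[[2, -1, -4], [1, 2, 3]],
              b_ub=[0, 480],
              bounds=[(0, None)] * 3)

# With these readings the optimum is a profit of 86400 at (x, y, z) = (192, 0, 96).
print(res.x, -res.fun)
```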
right away, on the first column of A. The final $Q^{-1}AQ$ is
allowed one nonzero diagonal below the main diagonal
(Hessenberg form). Therefore only the entries strictly
below the diagonal will be involved
contained some component of the eigenvector $x_n$, so that $c_n \ne 0$, this component will gradually dominate in $u_k$:

$$\frac{u_k}{\lambda_n^k} = c_1\left(\frac{\lambda_1}{\lambda_n}\right)^k x_1 + \cdots + c_{n-1}\left(\frac{\lambda_{n-1}}{\lambda_n}\right)^k x_{n-1} + c_n x_n. \tag{1}$$

The vectors $u_k$ point more and more accur
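The dominance described by (1) is easy to watch numerically. A minimal power-method sketch with a made-up symmetric matrix:

```python
import numpy as np

# Power method: repeated multiplication u_{k+1} = A u_k (normalized)
# turns u_k toward the eigenvector of the largest eigenvalue, as in (1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # made-up example; eigenvalues 1 and 3

u = np.array([1.0, 0.0])         # must contain some x_n component (c_n != 0)
for _ in range(50):
    u = A @ u
    u /= np.linalg.norm(u)       # normalize to avoid overflow

# The Rayleigh quotient converges to the dominant eigenvalue (3 here).
print(u @ A @ u)
```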
Karmarkar's Method

We come now to the most
sensational event in the recent history of linear
programming. Karmarkar proposed a method based on
two simple ideas, and in his experiments it defeated the
s
b. (2) Every important quantity appears in the fully reduced tableau R. We can decide whether the corner is optimal by looking at $r = c_N - c_B B^{-1}N$ in the middle of the bottom row. If $r \ge 0$, the corner R is optimal. 6. Phase I finds a basic feasible
solution to Ax = b (a corner). After changing signs to make
$b \ge 0$, consider the auxiliary problem of minimizing $w_1 + w_2 + \cdots + w_m$, subject to $x \ge 0$,
multiply the old $B^{-1}$ by

$$E^{-1} = \begin{bmatrix} 1 & & v_1 & & \\ & \ddots & \vdots & & \\ & & v_k & & \\ & & \vdots & \ddots & \\ & & v_n & & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & & -v_1/v_k & & \\ & \ddots & \vdots & & \\ & & 1/v_k & & \\ & & \vdots & \ddots & \\ & & -v_n/v_k & & 1 \end{bmatrix} \tag{5}$$

Many simplex codes use the product form of the inverse, which saves these simple matrices $E^{-1}$ instead of directly updating $B^{-1}$
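Equation (5) is easy to verify numerically. A sketch with a made-up column $v$ (any column with $v_k \ne 0$ works):

```python
import numpy as np

# E is the identity with column k replaced by v; its inverse (eq. 5) is
# also an eta matrix, so B^{-1} can be updated by one cheap multiplication.
n, k = 4, 2
v = np.array([0.5, -1.0, 4.0, 2.0])   # made-up column; v[k] must be nonzero

E = np.eye(n)
E[:, k] = v

E_inv = np.eye(n)
E_inv[:, k] = -v / v[k]
E_inv[k, k] = 1.0 / v[k]

print(np.allclose(E @ E_inv, np.eye(n)))   # True: eq. (5) checks out
```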
$Ax = b$. Problem Set 7.4 1. This matrix has eigenvalues $2-\sqrt{2}$, $2$, and $2+\sqrt{2}$:

$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}.$$

Find the Jacobi matrix $D^{-1}(-L-U)$ and the Gauss-Seidel matrix $(D+L)^{-1}(-U)$ and their eigenvalues, and t
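These iteration matrices can be checked numerically, assuming the usual $-1, 2, -1$ second-difference pattern for $A$ (consistent with the stated eigenvalues $2 \pm \sqrt{2}$):

```python
import numpy as np

# Build A, split into D + L + U, and compare the spectral radii of the
# Jacobi and Gauss-Seidel iteration matrices.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

jacobi = np.linalg.inv(D) @ (-L - U)
gauss_seidel = np.linalg.inv(D + L) @ (-U)

rho = lambda M: max(abs(np.linalg.eigvals(M)))   # spectral radius
print(rho(jacobi), rho(gauss_seidel))            # ~0.7071 and 0.5
```

The Gauss-Seidel radius $0.5$ is the square of the Jacobi radius $\sqrt{2}/2$, the classical relation for such tridiagonal matrices.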
matrix $AD$ takes the place of $A$, and the vector $c^TD$ takes the place of $c^T$. The second step projects the new $c$ onto
the nullspace of the new A. All the work is in this
projection, to solve the weighted
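A simplified sketch of that projection step, with made-up dimensions and an unweighted system (the weighted system mentioned in the text is not reproduced here): projecting a cost vector $c$ onto the nullspace of $A$ means solving $AA^Ty = Ac$ and subtracting $A^Ty$.

```python
import numpy as np

# Project a cost vector c onto the nullspace of A: p = c - A^T y,
# where y solves the normal equations A A^T y = A c.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))   # made-up: 2 constraints, 5 unknowns
c = rng.standard_normal(5)

y = np.linalg.solve(A @ A.T, A @ c)
p = c - A.T @ y                   # the projected cost

print(np.allclose(A @ p, 0))      # True: p lies in the nullspace of A
```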
this cause in x? 24. If you know $L$, $U$, $Q$, and $R$, is it faster to solve $LUx = b$ or $QRx = b$? 25. Choosing the largest available pivot in each column (partial pivoting), factor each $A$ into $PA = LU$: $A =$
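The exercise's matrices are cut off above, but the factorization itself can be sketched with scipy on a made-up matrix. Note scipy's convention: `lu` returns factors with $A = PLU$, so $P^TA = LU$.

```python
import numpy as np
from scipy.linalg import lu

# Partial pivoting on a made-up 2x2 matrix: the 3 is chosen as first pivot.
A = np.array([[1.0, 4.0],
              [3.0, 2.0]])
P, L, U = lu(A)                       # scipy returns A = P L U
print(np.allclose(P.T @ A, L @ U))    # True: P^T A = L U
```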
vector x (the old x and w) are zero. These n components
of x are the free variables in Ax = b. The remaining m
components are the basic variables or pivot variables.
Setting the n free variables to ze
methods approach that optimal solution from inside the
feasible set. Note. With a different cost function, the
intersection might not be just a single point. If the cost
happened to be x+2y, the whole
The condition number is approximately $c(A) = \frac{1}{2}n^2$, and this time the dependence on the order $n$ is genuine. The better we approximate $-u'' = f$, by increasing the
number of unknowns, the harder it
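The quadratic growth can be checked numerically, assuming the standard $-1, 2, -1$ second-difference matrix for this problem:

```python
import numpy as np

# Condition numbers of the second-difference matrix for increasing n:
# the condition number roughly quadruples each time n doubles.
conds = []
for n in (10, 20, 40):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    conds.append(np.linalg.cond(A))
    print(n, conds[-1])
```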
impractical for large m and n. It is the task of Phase

[Figure 8.3 (Chapter 8, Linear Programming and Game Theory): the feasible set bounded by $2x + y = 6$, $x + 2y = 6$, $y = 0$, and $x = 0$, with corners P, Q, R]
perfectly conditioned: $c(Q) = 1$. The change in the eigenvalues is no greater than the change $\delta A$. Therefore
the best case is when A is symmetric, or more generally
when $AA^T = A^TA$. Then $A$ is a normal ma