3.6 Review and Preview

1. The intersection of two vector spaces

The key idea goes back to the definition
of a vector space and a subspace. New questions arise from considering not just
a single subspace or a single matrix A, but the interconnections between two subspaces or two matrices. The first point is the most important: if V and W are subspaces, then so is their intersection V ∩ W. The proof is immediate. Suppose x and y belong to V ∩ W; in other words, they
are vectors in V and also in W. Then, because V and W are vector spaces in their
own right, x + y and cx are in V and in W. The results of addition and scalar
multiplication stay within the intersection. Geometrically, the intersection of two
planes through the origin (or "hyperplanes" in R^n) is again a subspace. The same will be true of the intersection of several subspaces, or even of infinitely many.

EXAMPLE 1 The intersection of two orthogonal subspaces V and W is the one-point subspace {0}. Only the zero vector is orthogonal to itself.

EXAMPLE 2 If the sets of n by n upper and lower triangular matrices are the subspaces V and W, their intersection is the set of diagonal matrices. This is certainly
a subspace. Adding two diagonal matrices, or multiplying by a scalar, leaves us
with a diagonal matrix.

EXAMPLE 3 Suppose V is the nullspace of A and W is the nullspace of B. Then V ∩ W is the smaller nullspace of the larger matrix

    C = [ A ]
        [ B ].

Cx = 0 requires both Ax = 0 and Bx = 0, so x has to be in both nullspaces.

2. The sum of two vector spaces

Usually, after discussing and illustrating the
intersection of two sets, it is natural to look at their union. With vector spaces, however, it is not natural. The union V ∪ W of two subspaces will not in general be a subspace. Consider the x axis and the y axis in the plane. Each axis by itself is a subspace, but taken together they are not. The sum of (1, 0) and (0, 1) is not on either axis. This will always happen unless one of the subspaces is contained in the other; only then is their union (which coincides with the larger one) again a subspace.
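Looking back at Example 3, the claim that stacking A above B intersects the two nullspaces can be checked numerically. This is only a sketch: the matrices A and B are made up for illustration, and the `null_space` helper (an orthonormal nullspace basis read off the SVD) is an assumption of this sketch, not something from the text.

```python
import numpy as np

def null_space(M, tol=1e-12):
    """Orthonormal basis for N(M), read off the SVD of M."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T            # rows of V^T beyond the rank span N(M)

# Made-up matrices for illustration (not from the text).
A = np.array([[1.0, 1.0, 0.0]])   # N(A): the plane x1 + x2 = 0
B = np.array([[0.0, 1.0, 1.0]])   # N(B): the plane x2 + x3 = 0

C = np.vstack([A, B])             # stack A above B, as in Example 3
N = null_space(C)                 # N(C) = N(A) intersect N(B)

# Every basis vector of N(C) is killed by both A and B.
assert np.allclose(A @ N, 0) and np.allclose(B @ N, 0)
print(N.shape[1])                 # the two planes meet in a line: dimension 1
```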
Nevertheless, we do want to combine two subspaces, and therefore in place of their union we turn to their sum V + W, containing all vectors v + w with v in V and w in W. This is nothing but the space spanned by V ∪ W. It is the smallest vector space that contains both V and W. The sum of the x axis and the y axis is the whole xy plane; so is the sum of any two different lines, perpendicular or not. If V is the x axis and W is the 45° line x = y, then any vector like (5, 3) can be split into v + w = (2, 0) + (3, 3). Thus V + W is all of R^2.

EXAMPLE 4 Suppose V and W are orthogonal complements of one another in R^n. Then their sum is V + W = R^n. Every x is the sum of its projection v in V and its projection w in W.

EXAMPLE 5 If V is the space of upper triangular matrices, and W is the space
of lower triangular matrices, then V + W is the space of all matrices. Every matrix can be written as the sum of an upper and a lower triangular matrix, in many ways, because the diagonals are not uniquely determined.

EXAMPLE 6 If V is the column space of a matrix A, and W is the column space of B, then V + W is the column space of the larger matrix D = [A B]. The dimension of V + W may be less than the combined dimensions of V and W (because the two spaces may overlap), but it is easy to find:

    dim(V + W) = rank of D.    (1)
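Formula (1) is easy to try numerically. In this sketch the matrices A and B are made-up examples (not from the text), chosen so that the two column spaces overlap:

```python
import numpy as np

# Hypothetical bases: V = column space of A, W = column space of B.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # V is the xy plane in R^3
B = np.array([[1.0],
              [1.0],
              [0.0]])               # W is a line that already lies inside V

D = np.hstack([A, B])               # columns of D span V + W
print(np.linalg.matrix_rank(D))     # dim(V + W) = rank of D: 2, not 2 + 1 = 3
```

The rank comes out smaller than the combined dimensions precisely because the spaces overlap, which is the point of formula (1).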
Surprisingly, the computation of V ∩ W is much more subtle. Suppose we are given the two bases v1, . . . , vk and w1, . . . , wl; this time we want a basis for the intersection of the two subspaces. Certainly it is not enough just to check whether any of the v's equal any of the w's. The two spaces could even be identical, V = W, and still the bases might be completely different.

The most efficient method is this. Form the same matrix D whose columns are v1, . . . , vk, w1, . . . , wl, and compute its nullspace N(D). We shall show that a basis for this nullspace leads to a basis for V ∩ W, and that the two spaces have the same dimension. The dimension of the nullspace is called the "nullity," so

    dim(V ∩ W) = nullity of D.    (2)
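The recipe behind formula (2) can be sketched in code: form D from the two bases, compute its nullspace, and read a basis for V ∩ W off the v-part of each null vector. The bases below are illustrative (the xy and xz planes in R^3), and the `null_space` helper is an assumption of this sketch:

```python
import numpy as np

def null_space(M, tol=1e-12):
    """Orthonormal basis for N(M), read off the SVD of M."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

# Illustrative bases: V = xy plane, W = xz plane in R^3.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])        # columns v1, v2
W = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])        # columns w1, w2

D = np.hstack([V, W])             # D = [v1 v2 w1 w2]
N = null_space(D)                 # nullity of D = dim(V intersect W), formula (2)

# The v-part of each null vector gives a vector in the intersection.
k = V.shape[1]
Y = V @ N[:k, :]                  # columns of Y span V intersect W
print(N.shape[1])                 # 1: the intersection is the x axis
```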
This leads to a formula which is important in its own right. Adding (1) and (2),

    dim(V + W) + dim(V ∩ W) = rank of D + nullity of D.

From our computations with the four fundamental subspaces, we know that the rank plus the nullity equals the number of columns. In this case D has k + l columns, and since k = dim V and l = dim W, we are led to the following conclusion:

    dim(V + W) + dim(V ∩ W) = dim V + dim W.    (3)

Not a bad formula.

EXAMPLE 7 The spaces V and W of upper and lower triangular matrices both
have dimension n(n + 1)/2. The space V + W of all matrices has dimension n^2, and the space V ∩ W of diagonal matrices has dimension n. As predicted by (3),

    n^2 + n = n(n + 1)/2 + n(n + 1)/2.

We now look at the proof of (3). For once in this book, the interest is less in the actual computation than in the technique of proof. It is the only time we will use the trick of understanding one space by matching it with another. Note first that the nullspace of D is a subspace of R^(k+l), whereas V ∩ W is a subspace of R^m. We have to prove that these two spaces have the same dimension. The trick is to show that these two subspaces are perfectly matched by the following correspondence.

Given any vector x in the nullspace of D, write the equation Dx = 0 in terms
of the columns as follows:

    x1 v1 + ... + xk vk + x_{k+1} w1 + ... + x_{k+l} wl = 0,    (4)

or

    x1 v1 + ... + xk vk = -x_{k+1} w1 - ... - x_{k+l} wl.    (5)

The left side of this last equation is in V, being a combination of the v's, and the
right side is in W. Since the two are equal, they represent a vector y in V ∩ W. This provides the correspondence between the vector x in N(D) and the vector y in V ∩ W. It is easy to check that the correspondence preserves addition and scalar multiplication: If x corresponds to y and x' to y', then x + x' corresponds to y + y' and cx corresponds to cy. Furthermore, every y in V ∩ W comes from one and only one x in N(D) (Exercise 3.6.18).

This is a perfect illustration of an isomorphism between two vector spaces. The
spaces are different, but for all algebraic purposes they are exactly the same. They
match completely: Linearly independent sets correspond to linearly independent
sets, and a basis in one corresponds to a basis in the other. So their dimensions
are equal, which completes the proof of (2) and (3). This is the kind of result an
algebraist is after, to identify two different mathematical objects as being fundamentally the same.† It is a fact that any two spaces with the same scalars and the same (finite) dimension are always isomorphic, but this is too general to be very exciting. The interest comes in matching two superficially dissimilar spaces, like N(D) and V ∩ W.

† Another isomorphism is between the row space and column space, both of dimension r.

EXAMPLE 8 V is the xy plane and W is the xz plane:

    D = [ 1  0  1  0 ]    first 2 columns: basis for V
        [ 0  1  0  0 ]
        [ 0  0  0  1 ]    last 2 columns: basis for W

The rank of D is 3, and V + W is all of R^3. The nullspace contains x = (1, 0, -1, 0), and has dimension 1. The corresponding vector y is 1(column 1) + 0(column 2), pointing along the x axis, which is the intersection V ∩ W. Formula (3) for the dimensions of V + W and V ∩ W becomes 3 + 1 = 2 + 2.

EXERCISES

3.6.1 Suppose S and T are subspaces of R^n, with dim S = 7 and dim T = 8.
(a) What is the largest possible dimension of S ∩ T?
(b) What is the smallest possible dimension of S ∩ T?
(c) What is the smallest possible dimension of S + T?
(d) What is the largest possible dimension of S + T?

3.6.2 What are the intersections of the following pairs of subspaces?
(a) The xy plane and the yz plane in R^3.
(b) The line through (1, 1, 1) and the plane through (1, 0, 0) and (0, 1, 1).
(c) The zero vector and the whole space R^3.
(d) The plane perpendicular to (1, 1, 0) and the plane perpendicular to (0, 1, 1) in R^3.
What are the sums of those pairs of subspaces?

3.6.3 Within the space of all 4 by 4 matrices, let V be the subspace of tridiagonal matrices and W the subspace of upper triangular matrices. Describe the subspace V + W, whose members are the upper Hessenberg matrices, and the subspace V ∩ W. Verify formula (3).

3.6.4 If V ∩ W contains only the zero vector, then (3) becomes dim(V + W) = dim V + dim W. Check this when V is the row space of A, W is the nullspace, and A is m by n of rank r. What are the dimensions?

3.6.5 Give an example in R^3 for which V ∩ W contains only the zero vector, but V is not orthogonal to W.

3.6.6 If V ∩ W = {0}, then V + W is called the direct sum of V and W, with the special
notation V ⊕ W. If V is spanned by (1, 1, 1) and (1, 0, 1), choose a subspace W so that V ⊕ W = R^3.

3.6.7 Explain why any vector x in the direct sum V ⊕ W can be written in one and only one way as x = v + w (with v in V and w in W).

3.6.8 Find a basis for the sum V + W of the space V spanned by v1 = (1, 1, 0, 0), v2 = (1, 0, 1, 0) and the space W spanned by w1 = (0, 1, 0, 1), w2 = (0, 0, 1, 1). Find also the dimension of V ∩ W and a basis for it.

3.6.9 Show by example that the nullspace of AB need not contain the nullspace of A,
and the column space of AB is not necessarily contained in the column space of B.

3.6.10 Find the largest invertible submatrix and the rank of the matrices A1 and A2.

3.6.11 Suppose A is m by n and B is n by m, with n < m. Prove that their product AB
is singular.

3.6.12 Prove from (3) that rank(A + B) ≤ rank(A) + rank(B).

3.6.13 If A is square and invertible, prove that AB has the same nullspace (and the same row space and the same rank) as B itself. Hint: Apply relationship (i) also to the product of A^(-1) and AB.

3.6.14 Factor A into an m by r matrix L times an r by n matrix Q:

    A = [ 0 1 4 0 ]    and also    A = [ 1 0 0 ]
        [ 0 2 8 0 ]                    [ 0 1 0 ]
                                       [ 0 0 0 ].

3.6.15 Multiplying each column of L by the corresponding row of Q, and adding, gives
the product A = LQ as the sum of r matrices of rank one. Construct L and Q and the two matrices of rank one that add to

    A = [ 1 -1  0 ]
        [ 0  1 -1 ]
        [ 1  0 -1 ].

3.6.16 Prove that the intersection of three 6-dimensional subspaces of R^8 is not the single point {0}. Hint: How small can the intersection of the first two subspaces be?

3.6.17 Find the factorization A = LDL^T, and then the two Cholesky factors in (LD^(1/2))(LD^(1/2))^T, for

    A = [  4 12 ]
        [ 12 45 ].

3.6.18 Verify the statement that "every y in V ∩ W comes from one and only one x in
N(D)". Do this by describing, for a given y, how to go back to equation (5) and find x.

3.6.19 What happens to the weighted average x̂_W = (w1^2 b1 + w2^2 b2)/(w1^2 + w2^2) if the first weight w1 approaches zero? The measurement b1 is totally unreliable.

3.6.20 From m independent measurements b1, . . . , bm of your pulse rate, weighted by w1, . . . , wm, what is the weighted average that replaces (6)? It is the best estimate when the statistical variances are σi^2 = 1/wi^2.

3.6.21 For a given 2 by 2 weighting matrix W, find the W-inner product of x = (2, 3) and y = (1, 1) and the W-length of x. What line of vectors is W-perpendicular to y?

3.6.22 Find the weighted least squares solution x̂_W to Ax = b.
Check that the projection Ax̂_W is still perpendicular (in the W-inner product!) to the error b − Ax̂_W.

3.6.23 (a) Suppose you guess your professor's age, making errors e = −2, −1, 5 with
probabilities 1/2, 1/4, 1/4. Check that the expected error E(e) is zero and find the variance E(e^2).
(b) If the professor guesses too (or tries to remember), making errors −1, 0, 1 with probabilities 1/8, 3/4, 1/8, what weights w1 and w2 give the reliability of your guess and the professor's guess?

3.6.24 Suppose p rows and q columns, taken together, contain all the nonzero entries of A.
Show that the rank is not greater than p + q. How large does a square block of
zeros have to be, in a corner of a 9 by 9 matrix, to guarantee that the matrix is
singular?