Lecture #2

1) Linear Transformations

The next topic is functions from one vector space to another that preserve their linear structure.

Definition: If A and B are nonempty sets, a function f from A to B assigns a single point in B to every point in A. Such a function is written as f: A → B. The set A is called the domain of the function and B is called the codomain. A function is also called a mapping.

Definition: If V and W are vector spaces, T: V → W is linear or a linear transformation if T(a v_1 + b v_2) = a T(v_1) + b T(v_2), for all numbers a and b and for all vectors v_1 and v_2 in V.

Example: Let T: R^N → R^M be defined by T(x) = Ax, where A is an M×N matrix. Then T(ax + by) = A(ax + by) = aAx + bAy = aT(x) + bT(y), so that T is linear.

Remark: If V is a finite dimensional vector space and W is a vector space, then to define a linear transformation from V to W, it is sufficient to define T(v_n), for every member v_n of a basis v_1, ..., v_N for V.

The Matrix Representation of Linear Transformations

Matrices can be used to represent linear transformations from one finite dimensional vector space to another. Let T: V → W be linear, let v_1, ..., v_N be a basis for V, and let w_1, ..., w_M be a basis for W. If v ∈ V, there are unique numbers x_1, ..., x_N such that v = x_1 v_1 + ... + x_N v_N. Since T(v) ∈ W, there are unique numbers y_1, ..., y_M such that T(v) = y_1 w_1 + ... + y_M w_M. Since T(v_n) ∈ W, for each n, there are unique numbers a_{1n}, ..., a_{Mn} such that T(v_n) = a_{1n} w_1 + ... + a_{Mn} w_M. By the linearity of T and what has been said,

    Σ_{m=1}^{M} y_m w_m = T(v) = T(Σ_{n=1}^{N} x_n v_n) = Σ_{n=1}^{N} x_n T(v_n) = Σ_{n=1}^{N} x_n Σ_{m=1}^{M} a_{mn} w_m = Σ_{m=1}^{M} (Σ_{n=1}^{N} a_{mn} x_n) w_m,

so that y_m = Σ_{n=1}^{N} a_{mn} x_n, for all m. Let y = (y_1, ..., y_M) and x = (x_1, ..., x_N), and let A = (a_{mn}) be the M×N matrix with (m, n)th entry a_{mn}, for all m and n. Then y = Ax. The matrix A represents T in that there is one and only one linear transformation T corresponding to A and one and only one matrix A corresponding to the linear transformation T, given the bases v_1, ..., v_N for V and w_1, ..., w_M for W. The advantage of the matrix representation is that it translates calculations
1 M involving T into simple arithmetic computations. Suppose that in addition to the linear transformation T: V aW we are given a linear
transformation S: W ~Q where Q is a vector space with basis q , ,q. Let B: (b ) be the
1 J jm
JXM matrix representing 8 with respect to the bases w , , w and q , , q, forW and Q,
1 M 1 J
respectively, so that, for all m, J
S(W) = 2b q.
m i=1 1m] Define SOT: V 40 to be the linear transformation SoT(v) = S(T(v) ). The transformation SoT
is called the composition of T and S. Then M
where c. = 2 b a . The JxN matrix C: (c ) represents SoT. In "1:1 jm mn jn Definition: If A is an MxN matrix and B is a JxM matrix, the product of B and A is the
M
JxN matrix C = BA with typical entry c. = X b a in m=1 jm mn m:
Example:

    [2 3 −1] [1 1 0]   [1 5 0]
    [0 0  1] [0 1 0] = [1 0 0]
             [1 0 0]

Remark: If the M×N matrix A represents the linear transformation T and the J×M matrix
B represents the linear transformation S, then the J×N matrix C = BA represents the linear transformation S∘T.

Remark: The order in which repeated matrix multiplications are carried out does not affect the product, in that if A is an M×N matrix, B is a J×M matrix, and C is a K×J matrix, then (CB)A = C(BA), so that the meaning of the product CBA is unambiguous.

Remark: If A is an N×N matrix and I is the N×N identity matrix, then IA = A = AI.

Remark: The N×N identity matrix I represents the identity function id_V: V → V from an N-dimensional vector space V to itself, where id_V(v) = v, for all v ∈ V.
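The correspondence between composition and the matrix product, and the remark on the grouping of products, can be checked numerically. The following Python sketch is my own illustration (the helper functions and the particular matrices are not from the notes): applying A and then B agrees with applying the single matrix BA, and (CB)A comes out equal to C(BA).

```python
def matmul(B, A):
    # (BA)_{jn} = sum over m of b_{jm} a_{mn}, as in the definition of the product.
    return [[sum(B[j][m] * A[m][n] for m in range(len(A)))
             for n in range(len(A[0]))] for j in range(len(B))]

def apply(A, x):
    # The coordinate action y = Ax.
    return [sum(A[m][n] * x[n] for n in range(len(x))) for m in range(len(A))]

A = [[1, 2], [3, 4]]          # 2x2
B = [[0, 1], [1, 1]]          # 2x2
C = [[2, 0], [1, 1], [0, 3]]  # 3x2

# Applying A and then B agrees with applying the single matrix BA.
x = [5, -2]
composed = apply(B, apply(A, x))
via_product = apply(matmul(B, A), x)

# Grouping does not matter: (CB)A = C(BA), so CBA is unambiguous.
assoc_left = matmul(matmul(C, B), A)
assoc_right = matmul(C, matmul(B, A))
```

Here `composed` and `via_product` agree, as do `assoc_left` and `assoc_right`.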
Invertible Matrices and Linear Transformations

Definition: An N×N matrix A is invertible if there is an N×N matrix A^{-1} such that A^{-1}A = AA^{-1} = I, where I is the N×N identity matrix.

Lemma 2.1: If A and B are invertible N×N matrices, then AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.

Proof: (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I, and (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I. ∎

Definition: A function f: V → W is invertible, if there exists f^{-1}: W → V such that f∘f^{-1} = id_W and f^{-1}∘f = id_V, where id_W and id_V are the identity functions on W and V, respectively.
W V W V That is, f(f"(w)) = w, for all w e W, and f“(f(v)) = v, for all v e V. Definition: A function f: V ~ W is onto, it for every w e W, there exists a v e V such
that f(v) = w. Definition: A function f: V a W is one to one, if for every 1 e V and 7 e V such that _v_¢v,f(v)¢f(v). Remarks: 1) f: V ~ W is onto if and only if there exists a function 9: W « V such that
f(g(w)) = w, for all w e W. 2) f: V a W is one to one if and only if there exists a function h: f(V) a V such that h(f(v)) = v,
for all v e V, where f( V) = {f(v)  v e V} is the image or range of f. 3) f: W a V is invertible if and only if it is one to one and onto. Theorem 2.2: If T : V a W is an invertible linear transformation from the vector space V
to the vector space W, then T‘1 is linear. Proof: Let w and w belong to W and let c
1 2 1 ). SinceTis linear, T(cv +cv) =cT(v
11 22 1 andc be numbers. Letv = T"(w) and
2 1 1 )=cw +cw. Hence
1 v =T“(w
2 1 2 2 ) + c2T(v 1 2 2 cT“(w) +cT“(w) =cv +cv =T“‘°T(cv +cv) =T“(cw +cw),
1 2 2 1 1 1122 122 1122 and so T‘1 is linear. I Theorem 2.3: Let T : V ~ V be a linear transformation and let v , , v be a basis for V.
If A is the N×N matrix representing T with respect to the basis v_1, ..., v_N, then T is invertible if and only if A is invertible, and the matrix representation of T^{-1} is A^{-1}.

Proof: If T is invertible and B is the N×N matrix representing T^{-1} with respect to the basis v_1, ..., v_N, then id_V = T^{-1}∘T, BA represents T^{-1}∘T, and the N×N matrix I represents id_V. Hence BA = I. Similarly AB = I, so that B = A^{-1}.

If A is invertible, then A^{-1} is the matrix representation with respect to the basis v_1, ..., v_N of a linear transformation S: V → V. Since A^{-1}A = AA^{-1} = I, it follows that S∘T = T∘S = id_V. Hence S = T^{-1}, and so T is invertible. ∎
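Theorem 2.3 can be illustrated concretely. In the sketch below (the particular transformation is an invented example, not from the notes), T(x_1, x_2) = (x_1 + x_2, x_2) has matrix A relative to the standard basis of R^2, its inverse T^{-1}(y_1, y_2) = (y_1 − y_2, y_2) has matrix A^{-1}, and both products BA and AB represent the identity, as the proof requires.

```python
def matmul(B, A):
    # (BA)_{jn} = sum over m of b_{jm} a_{mn}
    return [[sum(B[j][m] * A[m][n] for m in range(len(A)))
             for n in range(len(A[0]))] for j in range(len(B))]

# T(x1, x2) = (x1 + x2, x2) relative to the standard basis of R^2.
A = [[1, 1], [0, 1]]
# T^{-1}(y1, y2) = (y1 - y2, y2); its matrix is the inverse of A.
A_inv = [[1, -1], [0, 1]]

I2 = [[1, 0], [0, 1]]
check1 = matmul(A_inv, A)   # represents T^{-1} o T = id_V
check2 = matmul(A, A_inv)   # represents T o T^{-1} = id_V
```

Both `check1` and `check2` come out to the identity matrix.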
Proposition 2.4: Let T: V → W be an invertible linear transformation from the vector space V to the vector space W. The vectors v_1, ..., v_N are a basis for V if and only if T(v_1), ..., T(v_N) are a basis for W.

Proof: Please prove this theorem yourself as an exercise.

Corollary 2.5: Let T: V → W be an invertible linear transformation from the finite dimensional vector space V to the vector space W. Then V and W have the same dimension.

Theorem 2.6: If A is an N×N matrix, then the following are equivalent:
1) A is invertible,
2) there is an N×N matrix B such that BA = I, and
3) the system Ax = 0 has no nonzero solution.

Proof: (1) implies (2): Let B = A^{-1}.
(2) implies (3): If BA = I and Ax = 0, then 0 = BAx = Ix = x.

(3) implies (1): By corollary 1.5, A is row equivalent to the N×N identity matrix I. Each elementary row operation on A corresponds to left multiplication by an invertible matrix P. I now check this statement for each such operation.

a) Multiplication of the kth row of A by the nonzero number c corresponds to replacing A by PA, where P = (p_{mn}) is the N×N matrix defined by

    p_{mn} = 1, if m = n ≠ k,
             c, if m = n = k,
             0, if m ≠ n.

P^{-1} = (q_{mn}) is the matrix defined by

    q_{mn} = 1, if m = n ≠ k,
             c^{-1}, if m = n = k,
             0, if m ≠ n.

b) Interchange of rows k and r of A corresponds to replacing A by PA, where P = (p_{mn}) is the N×N matrix defined by

    p_{mn} = 1, if m = n, m ≠ k, m ≠ r,
             1, if m = k and n = r,
             1, if m = r and n = k,
             0, otherwise.

P^{-1} = P.

c) Replacement of the kth row of A by row k plus c times row r corresponds to replacing A by PA, where P = (p_{mn}) is the N×N matrix defined by

    p_{mn} = 1, if m = n,
             c, if m = k and n = r,
             0, otherwise.

P^{-1} = (q_{mn}) is the matrix defined by

    q_{mn} = 1, if m = n,
             −c, if m = k and n = r,
             0, otherwise.

It follows that I = P_Q P_{Q−1} ⋯ P_1 A, where each P_q is invertible. Let P = P_Q P_{Q−1} ⋯ P_1, so that I = PA. Then P is invertible, since P^{-1} = P_1^{-1} P_2^{-1} ⋯ P_Q^{-1}. Then P^{-1} = P^{-1}PA = (P^{-1}P)A = IA = A, so that P = A^{-1} and A is invertible. ∎

Corollary 2.7: If A is an N×N matrix and BA = I, for some N×N matrix B, then B = A^{-1}.

Proof: By the theorem, A is invertible. Therefore the equation BA = I implies that (BA)A^{-1} = A^{-1}, so that B = BI = B(AA^{-1}) = (BA)A^{-1} = A^{-1}. ∎

The Range, Rank, Kernel, and Nullity of Linear Transformations

For the rest of this lecture, let V and W be finite dimensional vector spaces.
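The invertibility results just proved (theorem 2.6 and corollary 2.7) can be spot-checked on a small example before moving on. In the Python sketch below (the matrices are my own illustration), B is a left inverse of A, and it turns out to be a right inverse as well, exactly as corollary 2.7 asserts.

```python
def matmul(B, A):
    # (BA)_{jn} = sum over m of b_{jm} a_{mn}
    return [[sum(B[j][m] * A[m][n] for m in range(len(A)))
             for n in range(len(A[0]))] for j in range(len(B))]

A = [[2, 1], [5, 3]]    # invertible: det = 2*3 - 1*5 = 1
B = [[3, -1], [-5, 2]]  # a left inverse of A

left = matmul(B, A)     # BA = I
right = matmul(A, B)    # corollary 2.7: B is also a right inverse

# (2) implies (3): if Ax = 0, then x = Ix = (BA)x = B(Ax) = B0 = 0,
# so the only solution of Ax = 0 is x = 0.
```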
Definition: If f: A → B is a function, the image or range of f is {f(x) | x ∈ A}.

Definition: If T: V → W is a linear transformation, the null space or kernel of T is {v ∈ V | T(v) = 0}.

Theorem 2.8: If T: V → W is a linear transformation, then the range of T is a subspace of W and the kernel of T is a subspace of V.

Proof: If T(v_1) and T(v_2) belong to the range of T and c_1 and c_2 are numbers, then c_1 T(v_1) + c_2 T(v_2) belongs to the range of T, because, by the linearity of T, c_1 T(v_1) + c_2 T(v_2) = T(c_1 v_1 + c_2 v_2). Hence the range of T is a subspace of W.

If v_1 and v_2 belong to the kernel of T and c_1 and c_2 are numbers, then c_1 v_1 + c_2 v_2 belongs to the kernel of T, because T(c_1 v_1 + c_2 v_2) = c_1 T(v_1) + c_2 T(v_2) = c_1 0 + c_2 0 = 0. ∎

Definition: If T: V → W is a linear transformation, the rank of T is the dimension of the
range of T and the nullity of T is the dimension of the null space of T.

Theorem 2.9: Let T: V → W be a linear transformation. Then rank T + nullity T = dim V.

Proof: Let N = dim V and let K be the nullity of T. I must show that rank T = N − K. Let v_1, ..., v_K be a basis for the null space of T. Let v_1, ..., v_K, v_{K+1}, ..., v_N be an extension of v_1, ..., v_K to a basis for V. I show that T(v_{K+1}), ..., T(v_N) is a basis for the range of T. The vectors T(v_1), ..., T(v_N) span the range of T. Since T(v_1) = T(v_2) = ... = T(v_K) = 0, it follows that T(v_{K+1}), ..., T(v_N) span the range of T. I show that T(v_{K+1}), ..., T(v_N) are linearly independent, so that T(v_{K+1}), ..., T(v_N) is a basis for the range of T and hence rank T = N − K.

Suppose that Σ_{n=K+1}^{N} c_n T(v_n) = 0. Then the linearity of T implies that

    T(Σ_{n=K+1}^{N} c_n v_n) = Σ_{n=K+1}^{N} c_n T(v_n) = 0,

so that Σ_{n=K+1}^{N} c_n v_n belongs to the kernel of T. Since v_1, ..., v_K is a basis for the kernel of T, Σ_{n=K+1}^{N} c_n v_n = Σ_{n=1}^{K} b_n v_n, for some numbers b_1, ..., b_K. Therefore Σ_{n=1}^{K} b_n v_n − Σ_{n=K+1}^{N} c_n v_n = 0. Since v_1, ..., v_N form a basis for V, they are linearly independent, and hence b_1 = ... = b_K = 0 = c_{K+1} = ... = c_N. ∎

Theorem 2.10: Let T: V → W be a linear transformation and suppose that A is the M×N
matrix representing T with respect to the bases v_1, ..., v_N for V and w_1, ..., w_M for W. Then rank T equals the column rank of A and nullity T equals N minus the column rank of A, which equals N minus the row rank of A.

Proof: Let S_V: R^N → V be the linear transformation such that S_V(e_n^N) = v_n, for n = 1, ..., N, where e_n^N is the nth standard basis vector for R^N. Let S_W: R^M → W be the linear transformation such that S_W(e_m^M) = w_m, for m = 1, ..., M, where e_m^M is the mth standard basis vector for R^M. The transformations S_V and S_W are invertible. Let Q = S_W^{-1}∘T∘S_V. Then Q is a linear transformation from R^N to R^M, and the matrix representation of Q with respect to the standard bases of R^N and R^M is A.

The range of Q is the column space of A, so that rank Q equals the column rank of A. The range of T equals the range of T∘S_V, which equals S_W(range Q), so that by corollary 2.5, rank Q = rank T∘S_V, which equals rank T. Putting all this together, we see that the column rank of A equals rank Q equals rank T, as was to be proved.

By theorem 2.9, nullity Q = N − rank Q. It was just proved that rank Q equals the column rank of A. By theorem 1.18, the column rank of A equals the row rank of A. Therefore nullity Q equals N minus the row rank of A. Since Q(x) = 0 if and only if T∘S_V(x) = 0, the null space of T equals S_V(null space of Q), so that by corollary 2.5 nullity T = nullity Q, which equals N minus the row rank of A. ∎

If T: V → W is a linear transformation, then for any w ∈ W, T^{-1}(w) = v + T^{-1}(0), where
v is any vector in V such that T(v) = w, and where v + T^{-1}(0) = {v + z | z ∈ T^{-1}(0)}. In order to see that this is so, let z ∈ T^{-1}(w). Then T(z − v) = T(z) − T(v) = w − w = 0. Hence z = v + (z − v) ∈ v + T^{-1}(0). Similarly any point in v + T^{-1}(0) belongs to T^{-1}(w). It is possible to visualize the meaning of the assertion T^{-1}(w) = v + T^{-1}(0) by graphing a linear transformation T: R^2 → R in the manner shown below. The function portrayed in the diagram may be thought of as a projection of R^2 onto the vertical axis of R^2 followed by a linear function from the vertical axis onto R. This way of visualizing a linear function may help you understand the implicit function theorem, when we get to it.

[Figure: graph of a linear transformation T: R^2 → R, with T^{-1}(0) = kernel of T]

Singular and Nonsingular Linear Transformations

Definition: If T: V → W is a linear transformation, T is nonsingular if the kernel of T
equals {0}.

Remark: The linear transformation T is nonsingular if and only if T is one to one, since T(v_1) = T(v_2) if and only if T(v_1 − v_2) = 0.

Lemma 2.11: If T: V → W is a linear transformation, then T is nonsingular if and only if T(v_1), ..., T(v_N) are linearly independent whenever v_1, ..., v_N are linearly independent.

Proof: Suppose that T is nonsingular. Since T is linear, the equation c_1 T(v_1) + ... + c_N T(v_N) = 0 implies that T(c_1 v_1 + ... + c_N v_N) = 0, which implies that c_1 v_1 + ... + c_N v_N = 0, since T is nonsingular. Since v_1, ..., v_N are linearly independent, it follows that c_1 = ... = c_N = 0 and hence that T(v_1), ..., T(v_N) are linearly independent.

Conversely, suppose that T carries independent vectors to independent vectors. Let v ≠ 0, where v ∈ V. Since the vector v by itself is linearly independent, T(v) is independent and hence T(v) ≠ 0. (The vector 0 is linearly dependent.) Therefore the kernel of T is {0}. ∎

Theorem 2.12: If T: V → W is a linear transformation and dim V = dim W, then the
following are equivalent.
a) T is invertible.
b) T is nonsingular.
c) T is onto.
d) If v_1, ..., v_N is a basis for V, then T(v_1), ..., T(v_N) is a basis for W.
e) There is a basis v_1, ..., v_N for V such that T(v_1), ..., T(v_N) is a basis for W.

Proof: (a) implies (b). This assertion is obvious.

(b) implies (c). Suppose that T is nonsingular. Let v_1, ..., v_N be a basis for V. By lemma 2.11, T(v_1), ..., T(v_N) are linearly independent. Since dim W = N, corollary 1.11 implies that T(v_1), ..., T(v_N) is a basis for W. If w ∈ W, then w = c_1 T(v_1) + ... + c_N T(v_N), for some numbers c_1, ..., c_N. Since T is linear, c_1 T(v_1) + ... + c_N T(v_N) = T(c_1 v_1 + ... + c_N v_N), so that w = T(c_1 v_1 + ... + c_N v_N) and hence T is onto.

(c) implies (d). Let v_1, ..., v_N be a basis for V. Since these vectors span V and T is onto, the vectors T(v_1), ..., T(v_N) span W. Since dim W = N, corollary 1.8 implies that T(v_1), ..., T(v_N) form a basis for W.

(d) implies (e). This assertion is obvious.

(e) implies (a). Suppose there is a basis v_1, ..., v_N for V such that T(v_1), ..., T(v_N) is a basis for W. Then rank T = dim W = dim V. Therefore by theorem 2.9, nullity T = 0, so that T is one to one. Since rank T = dim W, T is onto. Therefore T is invertible. ∎

The Inner Product

Definition: The standard inner product or dot product on R^N is the function x·y from
R^N × R^N to R defined by x·y = Σ_{n=1}^{N} x_n y_n. In this definition, the symbol × stands for the Cartesian product, defined as follows.

Definition: If A and B are sets, the Cartesian product of A and B is A×B = {(a, b) | a ∈ A and b ∈ B}.

Definition: If x ∈ R^N, the length of x or the norm of x is ‖x‖ = √(x·x) = √(x_1^2 + ... + x_N^2).

Remarks:
1) If a and b are numbers and x, y, and z belong to R^N, then x·y = y·x and x·(ay + bz) = a x·y + b x·z.
2) If x ∈ R^N and y ∈ R^N and if θ is the angle between x and y, then cos θ = x·y/(‖x‖ ‖y‖).
3) |x·y| ≤ ‖x‖ ‖y‖. This is called the Cauchy–Schwarz inequality and follows from (2).
4) x is perpendicular or orthogonal to y if and only if cos θ = 0, which is true if and only if x·y = 0.
5) x·x = Σ_{n=1}^{N} x_n^2 ≥ 0, and x·x = 0 implies that x = 0.
6) If y ∈ R^N, the function f(x) = y·x is a linear transformation from R^N to R.

Orthonormal Bases and Orthogonal Complements

Definition: A set of vectors v_1, ..., v_M in R^N is said to be orthogonal if v_n·v_m = 0 whenever n ≠ m.

Theorem 2.13: Orthogonal nonzero vectors are linearly independent.
Proof: Let v_1, ..., v_M be orthogonal and suppose that Σ_{m=1}^{M} c_m v_m = 0, for some numbers c_1, ..., c_M. For k = 1, ..., M,

    0 = v_k·0 = v_k·(Σ_{m=1}^{M} c_m v_m) = Σ_{m=1}^{M} c_m v_k·v_m = c_k v_k·v_k.

Since v_k·v_k > 0, c_k = 0. Hence c_k = 0, for all k, and so v_1, ..., v_M are linearly independent. ∎

Definition: A basis v_1, ..., v_M for a subspace V of R^N is said to be orthonormal if it is orthogonal and if v_m·v_m = 1, for all m.

Lemma 2.14: If v_1, ..., v_M is an orthonormal basis for V, then for any v ∈ V, v = Σ_{m=1}^{M} (v·v_m) v_m.

Proof: If v = Σ_{m=1}^{M} c_m v_m, then, for any k, v·v_k = Σ_{m=1}^{M} c_m v_m·v_k = c_k v_k·v_k = c_k. ∎

Theorem 2.15: Every vector space V that is a subspace of R^N has an orthonormal basis.

Proof: Let y_1, ..., y_M be a basis for V. I define an orthonormal basis v_1, ..., v_M by
induction on the index k of the successive basis vectors. Let v_1 = y_1/√(y_1·y_1). Then v_1·v_1 = 1.

Suppose that for k between 1 and M we are given v_1, ..., v_k such that v_n·v_n = 1, if n ≤ k, v_1, ..., v_k are orthogonal, and v_n is a linear combination of y_1, ..., y_n, for n = 1, ..., k. Let

    w_{k+1} = y_{k+1} − (y_{k+1}·v_1) v_1 − ... − (y_{k+1}·v_k) v_k.

Then w_{k+1} is a linear combination of y_1, ..., y_{k+1}. Also w_{k+1} ≠ 0, for otherwise y_1, ..., y_{k+1} would be linearly dependent, which would contradict the linear independence of y_1, ..., y_M. If n ≤ k, then w_{k+1}·v_n = y_{k+1}·v_n − (y_{k+1}·v_n) v_n·v_n = y_{k+1}·v_n − y_{k+1}·v_n = 0. Let v_{k+1} = w_{k+1}/√(w_{k+1}·w_{k+1}). Then v_{k+1}·v_n = 0, if n ≤ k, and v_{k+1}·v_{k+1} = 1. This completes the induction and hence the definition of v_1, ..., v_M. Since v_1, ..., v_M are independent and dim V = M, corollary 1.11 implies that these vectors form a basis for V. ∎

The construction used in the previous proof is called the Gram–Schmidt
orthogonalization process.

Definition: If S is a subset of V, which is a subspace of R^N, the orthogonal complement of S in V is S^⊥ = {y ∈ V | y·x = 0, for all x ∈ S}.

Remark: S^⊥ is a subspace of V.

Theorem 2.16: If W is a subspace of V and V is a subspace of R^N, then dim W + dim W^⊥ = dim V.

Proof: Let dim V = M. By theorem 2.15, W has an orthonormal basis v_1, ..., v_K. If K = M = dim V, then dim W = dim V and so W = V. Hence W^⊥ = {0} and dim W^⊥ = 0, and so dim W + dim W^⊥ = dim V = M. Suppose that K < M. Use the Gram–Schmidt orthogonalization process to extend v_1, ..., v_K to an orthonormal basis v_1, ..., v_K, v_{K+1}, ..., v_M of V. Then v_{K+1}, ..., v_M belong to W^⊥. Since these vectors are nonzero and orthogonal, theorem 2.13 implies that they are independent. I show that they span W^⊥ and so form a basis for W^⊥. Let v ∈ W^⊥. By lemma 2.14, v = Σ_{n=1}^{M} (v·v_n) v_n. Since v ∈ W^⊥, v·v_n = 0, if n ≤ K, and so v = Σ_{n=K+1}^{M} (v·v_n) v_n. This proves that v_{K+1}, ..., v_M form a basis for W^⊥ and hence that dim W^⊥ = M − K = dim V − dim W. ∎
such that v — TE(V) e Wi, for all v e V. That is, for all v e V, [v — n(v)].w = O, for all w e W. Example: Let V = R2. Let W = {(—t, t)  t is a number}. Then Wi = {(t, t)  t e R} and n(4, 14) = (—5, 5), since (4, 14) — (—5, 5) = (9, 9), which belongs to Wi. The example is
illustrated in the diagram below. Theorem 2.17: If W is a subspace of V, which is a subspace of B“, there exists a unique
orthogonal projection from V onto W. Proof: By theorem 2.15, W has an orthonormal basis v , , v . Use the GramSchmidt
1 K
orthogonalization process to extend v1, , vK to an orthonormal basis v , , v , v , , v
1 K K+1 M
for V. By an argument in the proof of theorem 2.16, v , , vM is a basis for Wi. If v e V,
K+1 M K
thenv: Z (v,v )v , by lemma 2.14. Letn(v) : [(v,v )v . Then m m m m=1 m=1 12 M
v—n(v) = Z (v, v )v , which belongs to Wi. Since 7: is clearly linear, it is a linear m=K+1 m m projection and so a linear projection exists.
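Before turning to uniqueness, the existence formula π(v) = Σ_{m=1}^{K} (v·v_m) v_m can be sketched in code. The Python fragment below (the helper names are my own) reproduces the example above, where W = {(−t, t)} has orthonormal basis {(−1, 1)/√2} and π(4, 14) = (−5, 5).

```python
from math import sqrt

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def project(v, onb):
    # Project v onto the subspace spanned by the orthonormal basis onb,
    # using pi(v) = sum over m of (v . v_m) v_m.
    out = [0.0] * len(v)
    for u in onb:
        c = dot(v, u)
        out = [oi + c * ui for oi, ui in zip(out, u)]
    return out

# W = {(-t, t)} has orthonormal basis {(-1, 1)/sqrt(2)}.
w_basis = [[-1 / sqrt(2), 1 / sqrt(2)]]
p = project([4.0, 14.0], w_basis)
residual = [4.0 - p[0], 14.0 - p[1]]  # should lie in W-perp = {(t, t)}
```

Up to rounding error, `p` is (−5, 5) and the residual (9, 9) has equal coordinates, so it lies in W^⊥.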
In order to show that π is unique, let v ∈ V and w ∈ W be such that v − w ∈ W^⊥. Then

    v − w = Σ_{m=K+1}^{M} ((v − w)·v_m) v_m = Σ_{m=K+1}^{M} (v·v_m) v_m − Σ_{m=K+1}^{M} (w·v_m) v_m = Σ_{m=K+1}^{M} (v·v_m) v_m,

where the last equation follows because w belongs to W and v_{K+1}, ..., v_M belong to W^⊥. Therefore

    w = v − (v − w) = Σ_{m=1}^{M} (v·v_m) v_m − Σ_{m=K+1}^{M} (v·v_m) v_m = Σ_{m=1}^{K} (v·v_m) v_m = π(v). ∎
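As a closing spot-check, the rank–nullity relation of theorems 2.9 and 2.10 can be verified numerically for a particular matrix. The sketch below (the row-reduction routine and the example matrix are my own illustrations) computes the rank of a 3×4 matrix by Gaussian elimination, so that nullity = N − rank.

```python
def row_rank(A, tol=1e-9):
    # Rank via Gaussian elimination; this equals the column rank by theorem 1.18.
    M = [row[:] for row in A]
    rank = 0
    rows, cols = len(M), len(M[0])
    for c in range(cols):
        # Find a pivot row for column c among the unused rows.
        pivot = next((r for r in range(rank, rows) if abs(M[r][c]) > tol), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        piv = M[rank][c]
        for r in range(rows):
            if r != rank:
                f = M[r][c] / piv
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# A 3x4 matrix whose third row is the sum of the first two, so its rank is 2
# and, by theorem 2.9, the nullity of the corresponding T: R^4 -> R^3 is 4 - 2.
A = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 1.0],
     [1.0, 2.0, 1.0, 2.0]]
r = row_rank(A)
nullity = 4 - r
```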
This note was uploaded on 02/22/2012 for the course EE 441 taught by Professor Neely during the Spring '08 term at USC.