Properties of Matrix Transformations
Theorem 4.9.1: For every matrix A, the matrix transformation T_A : R^n -> R^m has the following properties for all vectors u and v in R^n and for every scalar k:
(a) T_A(0) = 0
(b) T_A(ku) = k T_A(u) (Homogeneity property)
(c) T_A(u + v) = T_A(u) + T_A(v) (Additivity property)
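These properties can be checked numerically. A minimal pure-Python sketch (the helper `mat_vec` and the particular matrix A are illustrative choices, not from the notes):

```python
def mat_vec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # A is 3x2, so T_A : R^2 -> R^3
u, v, k = [1, -1], [2, 5], 3.0

# (a) T_A(0) = 0
assert mat_vec(A, [0, 0]) == [0, 0, 0]
# (b) homogeneity: T_A(k u) = k T_A(u)
assert mat_vec(A, [k * ui for ui in u]) == [k * t for t in mat_vec(A, u)]
# (c) additivity: T_A(u + v) = T_A(u) + T_A(v)
assert mat_vec(A, [a + b for a, b in zip(u, v)]) == \
       [a + b for a, b in zip(mat_vec(A, u), mat_vec(A, v))]
```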

Back to matrix multiplication: Recall that for matrix addition we have the zero matrix 0, with A + 0 = 0 + A = A for any matrix A. For multiplication we have a similar element called the identity matrix I, which has 1's on the main diagonal and 0's everywhere else, such that AI = A. The size of I must be chosen so that the product is defined: if A is m x n, then A I_n = A and I_m A = A.
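A quick sketch of both identities (the helpers `identity` and `mat_mul` are illustrative, not from the notes):

```python
def identity(n):
    """Build the n x n identity matrix."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    """Multiply matrices A and B (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]           # A is 2 x 3
assert mat_mul(A, identity(3)) == A  # A I_3 = A
assert mat_mul(identity(2), A) == A  # I_2 A = A
```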

Theorem 1.5.1 suggests that reducing a matrix A to (reduced) row echelon form is the same as multiplying A from the left by the appropriate elementary matrices. Hence if B is a matrix obtained from a matrix A by performing a finite sequence of elementary row operations, then B = E_k ... E_2 E_1 A for some elementary matrices E_1, E_2, ..., E_k.
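To illustrate with one row operation: applying "R2 <- R2 - 3 R1" directly to A gives the same result as multiplying A on the left by the elementary matrix E obtained by applying that operation to the identity. (The matrices here are illustrative choices.)

```python
def mat_mul(A, B):
    """Multiply matrices A and B (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[1, 0], [-3, 1]]   # "R2 <- R2 - 3 R1" applied to the 2x2 identity
A = [[1, 2], [3, 4]]

# The same row operation performed directly on A:
direct = [A[0], [A[1][j] - 3 * A[0][j] for j in range(2)]]
assert mat_mul(E, A) == direct   # left-multiplying by E matches the row op
```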

Let's see some more examples of finding the standard matrix of a matrix transformation. Example: Find the standard matrix of the given operators: 1. T : R^3 -> R^3, reflection through the xy-plane 2. T : R^3 -> R^3, reflection through the plane x = z 3. T : R^3 -> R^3, dilation with factor k
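The columns of each standard matrix are the images of the standard basis vectors e1, e2, e3. The matrices below are the standard answers for these three operators (with the dilation factor taken as k = 2 for illustration), checked on a sample vector:

```python
def mat_vec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

refl_xy     = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]  # reflection through xy-plane
refl_x_eq_z = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # reflection through plane x = z
k = 2
dilation    = [[k, 0, 0], [0, k, 0], [0, 0, k]]   # dilation with factor k

v = [1, 2, 3]
assert mat_vec(refl_xy, v) == [1, 2, -3]      # z-coordinate flips sign
assert mat_vec(refl_x_eq_z, v) == [3, 2, 1]   # x and z swap
assert mat_vec(dilation, v) == [2, 4, 6]      # every coordinate scales by k
```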

Definition: A matrix transformation T : R^n -> R^m is said to be onto if every vector in R^m is the image of at least one vector in R^n. Theorem 8.2.2: If T : R^n -> R^n is a matrix transformation, then the following are equivalent: (a) T is one-to-one (b) T is onto

Theorem 1.7.1: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product of lower triangular matrices is lower triangular, and the product of upper triangular matrices is upper triangular.
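A quick check of part (b) for the lower triangular case (the helper names and the two sample matrices are illustrative):

```python
def mat_mul(A, B):
    """Multiply matrices A and B (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_lower_triangular(A):
    """True if every entry above the main diagonal is zero."""
    return all(A[i][j] == 0
               for i in range(len(A)) for j in range(i + 1, len(A)))

L1 = [[1, 0, 0], [2, 3, 0], [4, 5, 6]]
L2 = [[7, 0, 0], [8, 9, 0], [1, 2, 3]]
assert is_lower_triangular(L1) and is_lower_triangular(L2)
assert is_lower_triangular(mat_mul(L1, L2))   # the product stays lower triangular
```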

Section 5.2 Diagonalization Definition: If A and B are square matrices, then we say that B is similar to A if there is an invertible matrix P such that B = P^{-1}AP. Facts: 1. A and B have the same determinant 2. A is invertible if and only if B is invertible
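Fact 1 can be illustrated for 2x2 matrices; the particular A and P below are arbitrary choices (with P_inv worked out by hand from det(P) = 1):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mat_mul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]
P = [[1, 1], [1, 2]]            # det(P) = 1, so P is invertible
P_inv = [[2, -1], [-1, 1]]      # inverse of P, computed by hand
B = mat_mul(mat_mul(P_inv, A), P)   # B = P^{-1} A P is similar to A
assert det2(B) == det2(A) == 6      # similar matrices share the determinant
```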

Section 4.3 Linear Independence Linear independence of vectors will be used to define a basis of a vector space, which we will see in Section 4.4, and to determine the dimension of a space, which we will see in Section 4.5. Definition: A non-empty set of vectors S = {v_1, v_2, ..., v_r} in a vector space V is said to be linearly independent if the only scalars satisfying c_1 v_1 + c_2 v_2 + ... + c_r v_r = 0 are c_1 = c_2 = ... = c_r = 0.

Section 5.3 Complex Vector Spaces & Appendix B Definition: A complex number is an ordered pair of real numbers, denoted either by (a, b) or by a + bi, where i^2 = -1. The usual notation is z = a + bi; a = Re(z) is called the real part of z, and b = Im(z) is called the imaginary part of z.

Section 2.2: Evaluating Determinants by Row Reduction We will look at the relationship between the determinants of row-equivalent matrices. Recall we have 3 elementary row operations: 1. Interchange two rows 2. Multiply a row by a nonzero constant 3. Add a multiple of one row to another row
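The effect of each operation on the determinant, sketched for a 2x2 matrix: interchanging rows flips the sign, scaling a row by k scales the determinant by k, and adding a multiple of one row to another leaves it unchanged. (The sample matrix and constants are illustrative.)

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]                      # det(A) = -2

swapped = [A[1], A[0]]
assert det2(swapped) == -det2(A)          # row interchange flips the sign

scaled = [[5 * a for a in A[0]], A[1]]
assert det2(scaled) == 5 * det2(A)        # scaling a row scales the det by 5

added = [A[0], [A[1][j] + 7 * A[0][j] for j in range(2)]]
assert det2(added) == det2(A)             # adding a multiple changes nothing
```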

Polar form of a Complex number: z = |z| e^{iθ}, where θ is the argument of z.
Example: Let z = 1 + i. For z = 1 + i, |z| = sqrt(1^2 + 1^2) = sqrt(2) and θ = π/4. So z = sqrt(2) e^{iπ/4}.
Multiplication: Let z_1 = |z_1| e^{iθ_1} and z_2 = |z_2| e^{iθ_2}. Then z_1 z_2 = |z_1||z_2| e^{i(θ_1 + θ_2)} = |z_1||z_2| (cos(θ_1 + θ_2) + i sin(θ_1 + θ_2)).
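The standard library can check both facts: `cmath.polar` returns (|z|, θ), and multiplying complex numbers multiplies the moduli and adds the arguments.

```python
import cmath
import math

z = 1 + 1j
r, theta = cmath.polar(z)
assert math.isclose(r, math.sqrt(2))        # |1 + i| = sqrt(2)
assert math.isclose(theta, math.pi / 4)     # arg(1 + i) = pi/4

z1, z2 = 1 + 1j, 2j                         # arguments pi/4 and pi/2
product = z1 * z2                           # (1 + i)(2i) = -2 + 2i
assert math.isclose(abs(product), abs(z1) * abs(z2))   # moduli multiply
assert math.isclose(cmath.phase(product),
                    math.pi / 4 + math.pi / 2)         # arguments add
```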

Relation of row space and column space of A to Ax = b Theorem 4.7.1: A system of linear equations Ax = b is consistent if and only if b is in the column space of A. Idea: Consider Ax = b where x = (x_1, ..., x_n)^T and A = [c_1 | c_2 | ... | c_n], where c_1, ..., c_n denote the columns of A. Then Ax = x_1 c_1 + x_2 c_2 + ... + x_n c_n, so Ax = b holds exactly when b is a linear combination of the columns of A.

Chapter 4, General Vector Spaces Section 4.1, Real Vector Spaces In this chapter we will call objects that satisfy a set of axioms vectors. This can be thought of as generalizing the idea of vectors to a class of objects. Vector space axioms: Definition: Let V be a non-empty set of objects on which two operations are defined: addition, and multiplication by scalars.

Linear Algebra 1600a Midterm
October 30, 2009, 7:00-10:00 pm

Last Name / First Name / Student ID

CIRCLE LECTURE AND LAB SECTIONS:
LECTURE: 001 MWF 8:30   002 MWF 10:30
LAB: 003 W 9:30   005 Th 11:30   006 W 3:30

[Per-problem marks table not recoverable; 70 marks total]

This exam has 11 problems.

Example: Find a matrix P that orthogonally diagonalizes A, and determine P^{-1}AP, where

A = [  2  -1  -1 ]
    [ -1   2  -1 ]
    [ -1  -1   2 ]

Solution: Find the eigenvalues of A. Consider the characteristic polynomial

det(λI - A) = | λ-2    1    1 |
              |   1  λ-2    1 | = λ(λ - 3)^2
              |   1    1  λ-2 |

Hence λ = 0 and λ = 3 are the eigenvalues of A.
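A numeric sanity check, taking A to be the matrix with 2's on the diagonal and -1's elsewhere (the sign pattern consistent with the stated eigenvalues 0 and 3); the helper names are illustrative:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion on the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]

def char_poly(lam):
    """Evaluate det(lam*I - A) at a given lam."""
    M = [[(lam - A[i][j]) if i == j else -A[i][j] for j in range(3)]
         for i in range(3)]
    return det3(M)

assert char_poly(0) == 0     # lambda = 0 is an eigenvalue
assert char_poly(3) == 0     # lambda = 3 is an eigenvalue (multiplicity 2)
assert char_poly(1) != 0     # lambda = 1 is not an eigenvalue
```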

Chapter 3: Euclidean Vector Spaces Section 3.1: Vectors in 2-Space, 3-Space, and n-Space A vector is described by a numerical value (magnitude) with a direction. Representation of vectors: 1. Geometrically: described by initial and terminal points 2. Algebraically: described by its components

Theorem 4.4.1 (Uniqueness of Basis Representation) If S = {v_1, v_2, ..., v_n} is a basis for a vector space V, then every vector v in V can be expressed in the form v = c_1 v_1 + c_2 v_2 + ... + c_n v_n in exactly one way. Proof: Assume there is another representation v = k_1 v_1 + k_2 v_2 + ... + k_n v_n. Subtracting the two expressions gives 0 = (c_1 - k_1) v_1 + ... + (c_n - k_n) v_n. Since S is linearly independent, c_i - k_i = 0 for each i, so c_i = k_i and the two representations agree.

Two vectors in R^n are said to be parallel or collinear if one of them is a scalar multiple of the other. Example: 1. 0 is parallel to every vector in R^n 2. (2,6) is parallel to (1,3) Forming new vectors from old ones: addition, subtraction, and scalar multiplication
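A sketch of a parallelism test and the three operations (the helper `is_parallel` and its cross-ratio trick are an illustrative implementation, not from the notes):

```python
def is_parallel(u, v):
    """True if one of u, v is a scalar multiple of the other."""
    # The zero vector is parallel to every vector.
    if all(x == 0 for x in u) or all(x == 0 for x in v):
        return True
    # u = k*v for a single k iff all cross-ratios a1*b2 - a2*b1 vanish.
    pairs = list(zip(u, v))
    return all(a1 * b2 == a2 * b1
               for (a1, b1) in pairs for (a2, b2) in pairs)

assert is_parallel((2, 6), (1, 3))        # (2,6) = 2*(1,3)
assert is_parallel((0, 0), (5, -1))       # zero vector case
assert not is_parallel((1, 2), (2, 1))

u, v, k = (1, 2), (3, 4), 2
assert tuple(a + b for a, b in zip(u, v)) == (4, 6)    # addition
assert tuple(a - b for a, b in zip(u, v)) == (-2, -2)  # subtraction
assert tuple(k * a for a in u) == (2, 4)               # scalar multiplication
```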

Gram-Schmidt Process This process consists of steps that describe how to obtain an orthonormal basis for any finite dimensional inner product space. Let V be any nonzero finite dimensional inner product space and suppose that {u_1, u_2, ..., u_n} is any basis for V.
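A sketch of the process for R^n with the standard dot product (the general inner-product version just swaps `dot` for the given inner product); function names and the sample basis are illustrative:

```python
import math

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return an orthonormal basis spanning the same space."""
    ortho = []
    for u in basis:
        # Subtract from u its projection onto each unit vector found so far.
        w = list(u)
        for q in ortho:
            c = dot(u, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        ortho.append([wi / norm for wi in w])   # normalize to unit length
    return ortho

q1, q2 = gram_schmidt([[1, 1, 0], [1, 0, 1]])
assert math.isclose(dot(q1, q2), 0, abs_tol=1e-12)   # orthogonal
assert math.isclose(dot(q1, q1), 1)                  # unit length
assert math.isclose(dot(q2, q2), 1)
```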

Theorem 3.2.1: If v is a vector in R^n, and if k is any scalar, then (a) ||v|| >= 0 (b) ||v|| = 0 if and only if v = 0 (c) ||kv|| = |k| ||v|| Proof: Exercise. (Hint: express v as v = (v_1, v_2, ..., v_n) and use the definition of norm.) Applying the definition of length ...

To determine the direction of u x v we have the right-hand rule: the fingers point along the first vector, the palm points toward the second vector, and the thumb gives the direction of the cross product.
Theorem 3.5.1 (Relationships involving Cross Product and Dot Product): If u, v, and w are vectors in 3-space, then
(a) u . (u x v) = 0 (u x v is orthogonal to u)
(b) v . (u x v) = 0 (u x v is orthogonal to v)
(c) ||u x v||^2 = ||u||^2 ||v||^2 - (u . v)^2 (Lagrange's identity)
(d) u x (v x w) = (u . w)v - (u . v)w
(e) (u x v) x w = (u . w)v - (v . w)u
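A sketch checking identities (a), (b), and Lagrange's identity on a sample pair of vectors (the helpers `cross` and `dot` are illustrative):

```python
def cross(u, v):
    """Cross product of two vectors in R^3."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    """Standard dot product."""
    return sum(a * b for a, b in zip(u, v))

u, v = [1, 2, 3], [4, 5, 6]
w = cross(u, v)
assert dot(u, w) == 0          # (a): u x v is orthogonal to u
assert dot(v, w) == 0          # (b): u x v is orthogonal to v
# (c) Lagrange's identity: |u x v|^2 = |u|^2 |v|^2 - (u . v)^2
assert dot(w, w) == dot(u, u) * dot(v, v) - dot(u, v) ** 2
```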

Section 4.6 Change of Basis Let B = {v_1, v_2, ..., v_n} be a basis for a finite dimensional vector space V. Let v be in V; then we can express v as v = c_1 v_1 + c_2 v_2 + ... + c_n v_n. Recall the coordinate vector of v, which was denoted as (v)_B = (c_1, c_2, ..., c_n).