Equation sheet

MATH 133 - Formula Sheet

Definition (Norm of a vector)
If v = (v_1, v_2, \dots, v_n) is a vector in R^n, the norm of v is given by
\|v\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.

Definition (Dot product)
If u = (u_1, u_2, \dots, u_n) and v = (v_1, v_2, \dots, v_n), then the dot product of u and v is defined by
u \cdot v = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.
If \theta is the angle between u and v, then u \cdot v = \|u\| \|v\| \cos\theta.

The following are equivalent:
1. u is orthogonal to v.
2. u \cdot v = 0.
3. \|u + v\|^2 = \|u\|^2 + \|v\|^2.

Definition (Orthogonal projection)
Let u \neq 0 and v be two vectors in R^n. The projection of v onto u is given by
\mathrm{proj}_u v = \frac{u \cdot v}{u \cdot u}\, u.

Equation of a line in R^2
If L is a line in R^2, its general equation is ax + by = c, where n = (a, b) is a normal vector for L.

Equation of a line in R^3
In R^3 the vector equation of a line is (L): x = p + t d, where p = (x_0, y_0, z_0) is a point on the line and d = (a, b, c) is its direction vector. In parametric form we write
(L): x = x_0 + at, \quad y = y_0 + bt, \quad z = z_0 + ct.

Equation of a plane in R^3
Let p = (x_0, y_0, z_0) be a point in the plane, n = (a, b, c) a vector normal to the plane, u and v two vectors parallel to the plane (but not parallel to each other), and x = (x, y, z) any point in the plane. Then:
• Normal form: n \cdot (x - p) = 0
• General form: ax + by + cz = d
• Vector form: x = p + su + tv

Some distances
• Distance from a point B to a line L: take a point A on the line, let d be the direction vector of the line, and write v for the vector AB. Then
  \mathrm{dist}(B, L) = \|v - \mathrm{proj}_d v\|.
• Distance from a point B(x_0, y_0) to a line (L): ax + by = c (in R^2):
  \mathrm{dist}(B, L) = \frac{|ax_0 + by_0 - c|}{\sqrt{a^2 + b^2}}.
• Distance from a point B(x_0, y_0, z_0) to a plane (P): ax + by + cz = d:
  \mathrm{dist}(B, P) = \frac{|ax_0 + by_0 + cz_0 - d|}{\sqrt{a^2 + b^2 + c^2}}.

Theorem (Number of solutions of a system of linear equations)
A system of linear equations has exactly one of the following:
• No solution (inconsistent system).
• A unique solution (consistent system).
• An infinite number of solutions (consistent system).

Definition (Elementary Row Operations, ERO)
The three elementary row operations are:
1. Interchange two rows.
2. Multiply (or divide) a row by a non-zero constant.
3. Add a multiple of one row to another.

Definition (Reduced Row Echelon Form, RREF)
A matrix is in Reduced Row Echelon Form if it satisfies the following 4 conditions:
1. All zero rows are at the bottom.
2. The first non-zero entry of every non-zero row is a 1 (a leading one).
3. Leading ones go from left to right (each leading one lies to the right of the leading ones in the rows above it).
4. All entries above and below any leading one are zero.
If a matrix satisfies only the first 3 conditions, we say it is in Row Echelon Form (REF).

Definition (Gauss-Jordan elimination)
The process of applying ERO's to a matrix to bring it to RREF.

Definition (Rank of a matrix)
The rank of a matrix is the number of non-zero rows in its RREF (or REF).

Definition (Linear combination)
A vector u is a linear combination of the vectors v_1, v_2, \dots, v_n if we can find scalars a_1, a_2, \dots, a_n such that u = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n.

Definition (Span, Spanning set)
Given a set S = \{v_1, v_2, \dots, v_n\} of vectors in R^n:
• span(S) = the set of all linear combinations of the vectors in S.
• If span(S) = R^n, we say S is a spanning set for R^n.

Definition (Linear independence)
A set v_1, v_2, \dots, v_n of vectors in R^n is said to be linearly independent if the only solution to the equation c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0 is c_1 = c_2 = \cdots = c_n = 0. Otherwise the vectors are called linearly dependent (which also means that at least one of them can be written as a linear combination of the others).
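The formulas above translate directly into short computations. The sketch below uses Python with NumPy (not part of the course material; the vectors, plane, and matrix are made-up examples) to evaluate a norm, a dot product, the angle between two vectors, a projection onto a vector, a point-to-plane distance, and a rank-based test of linear independence.

```python
import numpy as np

# Made-up example vectors in R^3
u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

norm_v = np.linalg.norm(v)           # ||v|| = sqrt(v1^2 + ... + vn^2)
dot_uv = np.dot(u, v)                # u . v = u1*v1 + ... + un*vn
theta = np.arccos(dot_uv / (np.linalg.norm(u) * norm_v))   # angle between u and v

# proj_u(v) = ((u . v) / (u . u)) u
proj_u_v = (dot_uv / np.dot(u, u)) * u

# Distance from B = (x0, y0, z0) to the plane ax + by + cz = d
a, b, c, d = 1.0, -2.0, 2.0, 5.0
x0, y0, z0 = 3.0, 1.0, -1.0
dist = abs(a * x0 + b * y0 + c * z0 - d) / np.sqrt(a**2 + b**2 + c**2)

# Linear independence: vectors placed as columns are independent exactly
# when the rank of the matrix equals the number of vectors.
A = np.column_stack([u, v, np.array([0.0, 1.0, 1.0])])
independent = np.linalg.matrix_rank(A) == A.shape[1]

print(norm_v, theta, proj_u_v, dist, independent)
```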
Definition (Symmetric matrix)
A square matrix A is symmetric if A = A^T.

Definition (Inverse of a square matrix)
Given a square matrix A, its inverse (if it exists) is the matrix denoted A^{-1} such that AA^{-1} = A^{-1}A = I. If A is a 2 × 2 matrix we use the formula
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
provided that the determinant of A, det(A) = ad − bc, is not 0. For a matrix of higher dimension the process looks like this:
[A | I] → Gauss-Jordan process → [I | A^{-1}].
If the matrix is not invertible (i.e. does not have an inverse), we will not get the identity on the left side after applying the Gauss-Jordan process.

Definition (Elementary matrix)
An elementary matrix is a matrix that can be obtained by applying one elementary row operation to the identity matrix.

Definition (Row space, Column space, Null space)
Let A be an m × n matrix.
• The row space of A = span(rows of A).
• The column space of A = span(columns of A).
• The null space of A is the subspace of R^n consisting of the solutions of the homogeneous system Ax = 0.

Definition (Basis)
A basis of a subspace S of R^n is a set of vectors that spans S and is linearly independent.

Definition (Rank)
The rank of a matrix A, denoted rank(A), is the dimension of its row space (equivalently, its column space, since the two dimensions are equal).

Definition (Nullity)
The nullity of a matrix A, denoted nullity(A), is the dimension of its null space.

Theorem (The Rank Theorem)
For any m × n matrix A, rank(A) + nullity(A) = n.

Definition (Linear transformation)
A transformation T: R^n → R^m is called a linear transformation if it satisfies
1. T(u + v) = T(u) + T(v)
2. T(ku) = kT(u)
We usually check that T is a linear transformation by checking that T(c_1 v_1 + c_2 v_2) = c_1 T(v_1) + c_2 T(v_2) for scalars c_1, c_2 and vectors v_1, v_2 in R^n.

Definition (Minor)
Given an n × n matrix A, the minor of entry ij, denoted A_{ij}, is the determinant of the matrix obtained from A by removing row i and column j.

Definition (Cofactor)
C_{ij} = (−1)^{i+j} A_{ij}.

Definition (Determinant of an n × n matrix)
Given an n × n matrix A (n ≥ 2),
det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} (expanding along the ith row),
det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} (expanding along the jth column).

Properties of the determinant function
Given an n × n matrix A:
• If A has a zero row or a zero column, then det(A) = 0.
• If we get matrix B by interchanging two rows of A, then det(B) = −det(A).
• If we get matrix B by multiplying one row of A by k ≠ 0, then det(B) = k det(A).
• If we get matrix B by adding a multiple of one row of A to another, then det(B) = det(A).
• det(kA) = k^n det(A).
• det(A^T) = det(A).
• det(AB) = det(A) det(B).
• det(A^{-1}) = 1 / det(A).

Definition (Eigenvalue, Eigenvector, Eigenspace)
Given an n × n matrix A, a scalar λ is an eigenvalue of A if there is a non-zero vector x such that Ax = λx. The eigenvalues of A are the roots of the characteristic polynomial det(A − λI) (we solve det(A − λI) = 0). In this case x is called an eigenvector of A corresponding to λ. The collection of all eigenvectors corresponding to λ, together with the zero vector, forms the eigenspace of λ, denoted E_λ.

Definition (Similar matrices)
Given two n × n matrices A and B, A is said to be similar to B (written A ∼ B) if there is an invertible matrix P such that P^{-1}AP = B.

Definition (Diagonalizable matrix)
An n × n matrix A is diagonalizable if there is a diagonal matrix D that is similar to A, i.e. if there is a diagonal matrix D and an invertible matrix P such that D = P^{-1}AP.
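The quantities defined above are easy to check numerically. The sketch below (Python with NumPy, using a made-up 2 × 2 matrix that is not from the sheet) computes a determinant, an inverse, the rank and nullity, and the eigenvalues, then verifies the diagonalization D = P^{-1}AP; NumPy's eig stands in for hand-solving the characteristic polynomial.

```python
import numpy as np

# Made-up 2x2 matrix for illustration
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

det_A = np.linalg.det(A)             # ad - bc = 4*3 - 1*2 = 10
A_inv = np.linalg.inv(A)             # exists because det(A) != 0
rank_A = np.linalg.matrix_rank(A)    # dimension of the row (= column) space
nullity_A = A.shape[1] - rank_A      # Rank Theorem: rank(A) + nullity(A) = n

# Eigenvalues and eigenvectors: the columns of P are eigenvectors, A x = lambda x
eigvals, P = np.linalg.eig(A)

# A has 2 distinct eigenvalues, so it is diagonalizable: D = P^{-1} A P
D = np.linalg.inv(P) @ A @ P
print(det_A, rank_A, nullity_A, eigvals)
print(np.round(D, 10))               # approximately diag(eigvals)
```

On an exam the eigenvalues would of course be found exactly by solving det(A − λI) = 0; the code is only a sanity check.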
Theorem (When is a matrix diagonalizable?)
An n × n matrix A is diagonalizable if one of the following is true:
• A has n distinct eigenvalues.
• For each eigenvalue, the geometric multiplicity is equal to the algebraic multiplicity.

Definition (Orthogonal set)
A set of vectors {v_1, v_2, \dots, v_n} is an orthogonal set if any two distinct vectors in the set are orthogonal (i.e. v_i · v_j = 0 for all i ≠ j).

Definition (Orthogonal basis)
An orthogonal basis is a basis that is also an orthogonal set.

Definition (Orthogonal matrix)
An m × n matrix Q is called orthogonal if Q^T Q = I_n (the columns of Q form an orthonormal set).

Theorem (Important property of orthogonal matrices)
If Q is a square orthogonal matrix, then Q^T = Q^{-1}.

Definition (Orthogonal complement)
Let W be a subspace of R^n. We say that a vector v in R^n is orthogonal to W if v is orthogonal to every vector in W. The set of all vectors that are orthogonal to W is called the orthogonal complement of W and is denoted W^⊥.

Theorem (Finding W^⊥)
If A is an m × n matrix, then (row(A))^⊥ = null(A) and (col(A))^⊥ = null(A^T).

Definition (Orthogonal projection of v onto W)
Let W be a subspace of R^n and let {u_1, u_2, \dots, u_k} be an orthogonal basis for W. For any vector v in R^n, the orthogonal projection of v onto W is given by
\mathrm{proj}_W v = \frac{u_1 \cdot v}{u_1 \cdot u_1} u_1 + \cdots + \frac{u_k \cdot v}{u_k \cdot u_k} u_k.

Definition (The Gram-Schmidt process)
The Gram-Schmidt process is the process we use to transform a basis into an orthogonal basis. It works as follows: given a basis {x_1, x_2, \dots, x_k} for a subspace W of R^n,
v_1 = x_1
v_2 = x_2 − \mathrm{proj}_{v_1} x_2
v_3 = x_3 − \mathrm{proj}_{v_1} x_3 − \mathrm{proj}_{v_2} x_3
⋮
v_k = x_k − \mathrm{proj}_{v_1} x_k − \mathrm{proj}_{v_2} x_k − \cdots − \mathrm{proj}_{v_{k−1}} x_k
Finally, {v_1, v_2, \dots, v_k} is an orthogonal basis for W.
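The Gram-Schmidt recipe above maps almost line for line onto code. Below is a minimal sketch (Python with NumPy; the basis vectors are a made-up example): each new v_i is the corresponding x_i minus its projections onto the previously constructed v_j.

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a basis x1, ..., xk (given as rows) into an orthogonal basis
    v1, ..., vk via  v_i = x_i - proj_{v_1}(x_i) - ... - proj_{v_{i-1}}(x_i)."""
    basis = []
    for x in vectors:
        v = np.array(x, dtype=float)
        for w in basis:
            v = v - (np.dot(w, x) / np.dot(w, w)) * w   # subtract proj_w(x)
        basis.append(v)
    return np.array(basis)

# Made-up basis of a subspace of R^3
X = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
V = gram_schmidt(X)
print(np.round(V @ V.T, 10))   # off-diagonal entries are ~0: the rows are pairwise orthogonal
```

Dividing each v_i by its norm afterwards would give an orthonormal basis, i.e. the columns of an orthogonal matrix Q with Q^T Q = I.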