CHAPTER 3  Matrix Inverses

Section 3.6  A Note on LU-Decomposition

One of the most basic and useful ideas in mathematics is the concept of a factorization of an object. You have already seen that it can be very useful to factor a number into primes, or to factor a polynomial. Similarly, in many applications of Linear Algebra one wishes to decompose a matrix into factors that have certain properties. Here we will look at our first matrix factorization, often called a matrix decomposition, and will look at other matrix factorizations in Math 235.

In many areas of applied Linear Algebra one often needs to solve multiple systems A~x = ~b where the coefficient matrix A remains the same, but the vector ~b changes. Our goal is to derive a matrix factorization which allows us to quickly solve such a system for any ~b. Observe that for each such system A~x = ~b we can use the same row operations to row reduce [A | ~b] to row echelon form and then solve the system using back substitution. The only difference between the systems will then be how the row operations affect ~b. In particular, we see that the two important pieces of information we require are the row echelon form of A and the elementary row operations used.

For our purposes, we will assume that our m × n coefficient matrix A can be brought into row echelon form without swapping rows. At the end of the section, we will discuss the case where swapping rows is required. Moreover, if we do not require the row echelon form of A to have leading ones, then we see that we can row reduce A to a row echelon form U using only the row operation "Add a multiple of one row to another".
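The idea above can be sketched numerically. The following is a minimal illustration (not the text's own notation) of factoring a square matrix A into a lower-triangular L and a row echelon form U using only the row operation "add a multiple of one row to another", then solving A~x = ~b by forward and back substitution; it assumes, as the section does, that no row swaps are needed (every pivot encountered is nonzero). The function names `lu_no_pivot` and `solve_lu` are ours, chosen for illustration:

```python
import numpy as np

def lu_no_pivot(A):
    """Factor A = LU using only the row operation
    'add a multiple of one row to another' (no row swaps).
    Assumes every pivot encountered is nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]      # multiplier in the row operation
            U[i, j:] -= m * U[j, j:]   # R_i <- R_i - m * R_j
            L[i, j] = m                # record the multiplier in L
    return L, U

def solve_lu(L, U, b):
    """Solve Ax = b given A = LU: forward-substitute Ly = b,
    then back-substitute Ux = y."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Once L and U are computed for A, each new right-hand side ~b costs only one forward substitution and one back substitution, which is exactly why the factorization pays off when many systems share the same coefficient matrix.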