Every Gauss–Jordan step is a multiplication on the left by an elementary matrix. We are allowing three types of elementary matrices:

1. $E_{ij}$ to subtract a multiple of row $j$ from row $i$
2. $P_{ij}$ to exchange rows $i$ and $j$
3. $D$ (or $D^{-1}$) to divide all rows by their pivots.

The Gauss–Jordan process is really a giant sequence of matrix multiplications:

$$(D^{-1}\cdots E\cdots P\cdots E)\,A = I. \qquad (6)$$

That matrix in parentheses, to the left of $A$, is evidently a left-inverse! It exists, and it equals the right-inverse by Note 2, so every nonsingular matrix is invertible.

The converse is also true: if $A$ is invertible, it has $n$ pivots. In an extreme case that is clear: $A$ cannot have a whole column of zeros, because the inverse could never multiply a column of zeros to produce a column of $I$. In a less extreme case, suppose elimination starts on an invertible matrix $A$ but breaks down at column 3:

Breakdown (no pivot in column 3):
$$A = \begin{bmatrix} d_1 & x & x & x \\ 0 & d_2 & x & x \\ 0 & 0 & 0 & x \\ 0 & 0 & 0 & x \end{bmatrix}.$$

This matrix cannot have an inverse, no matter what the $x$'s are. One proof is to use column operations (for the first time?) to make the whole third column zero. By subtracting multiples of column 2 and then of column 1, we reach a matrix that is certainly not invertible. Therefore the original $A$ was not invertible. Elimination gives a complete test:

An $n$ by $n$ matrix is invertible if and only if it has $n$ pivots.

The Transpose Matrix

We need one more matrix, and fortunately it is much simpler than the inverse. The transpose of $A$ is denoted by $A^{T}$. Its columns are taken directly from the rows of $A$: the $i$th row of $A$ becomes the $i$th column of $A^{T}$:

Transpose: If $A = \begin{bmatrix} 2 & 1 & 4 \\ 0 & 0 & 3 \end{bmatrix}$ then $A^{T} = \begin{bmatrix} 2 & 0 \\ 1 & 0 \\ 4 & 3 \end{bmatrix}$.

At the same time the columns of $A$ become the rows of $A^{T}$. If $A$ is an $m$ by $n$ matrix, then $A^{T}$ is $n$ by $m$. The final effect is to flip the matrix across its main diagonal, and the entry in row $i$, column $j$ of $A^{T}$ comes from row $j$, column $i$ of $A$:

Entries of $A^{T}$: $\quad (A^{T})_{ij} = A_{ji}. \qquad (7)$
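Equation (6) can be checked numerically: build the elementary matrices for a small elimination by hand, multiply them together, and confirm that their product is a left-inverse of $A$ that agrees with $A^{-1}$. This is a minimal NumPy sketch on a $2 \times 2$ matrix of my own choosing (not an example from the text); no row exchange $P$ is needed here.

```python
import numpy as np

# A small invertible matrix (chosen for illustration, not from the text).
A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# E: subtract 3 times row 1 from row 2 (the elementary matrix E_21).
# After E @ A the matrix is upper triangular: [[2, 1], [0, 5]].
E = np.array([[ 1.0, 0.0],
              [-3.0, 1.0]])

# E2: subtract (1/5) times row 2 from row 1, clearing the entry above pivot 2.
E2 = np.array([[1.0, -1.0/5.0],
               [0.0,  1.0]])

# D^{-1}: divide each row by its pivot (the pivots are 2 and 5).
D_inv = np.diag([1.0/2.0, 1.0/5.0])

# The product of all the Gauss-Jordan steps, as in equation (6).
left_inverse = D_inv @ E2 @ E

assert np.allclose(left_inverse @ A, np.eye(2))     # it is a left-inverse
assert np.allclose(left_inverse, np.linalg.inv(A))  # and it equals A^{-1}
```

As the text says, the parenthesized product acts as a left-inverse, and since a left-inverse equals the right-inverse for square matrices, it is $A^{-1}$ itself.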
[Strang-5060book, April 25, 2005, page 50, Chapter 1: Matrices and Gaussian Elimination]

The transpose of a lower triangular matrix is upper triangular. The transpose of $A^{T}$ brings us back to $A$.

If we add two matrices and then transpose, the result is the same as first transposing and then adding: $(A+B)^{T}$ is the same as $A^{T}+B^{T}$. But what is the transpose of a product $AB$ or an inverse $A^{-1}$? Those are the essential formulas of this section:

1M (i) The transpose of $AB$ is $(AB)^{T} = B^{T}A^{T}$.
   (ii) The transpose of $A^{-1}$ is $(A^{-1})^{T} = (A^{T})^{-1}$.

Notice how the formula for $(AB)^{T}$ resembles the one for $(AB)^{-1}$. In both cases we reverse the order, giving $B^{T}A^{T}$ and $B^{-1}A^{-1}$. The proof for the inverse was easy, but this one requires an unnatural patience with matrix multiplication. The first row of $(AB)^{T}$ is the first column of $AB$. So the columns of $A$ are weighted by the first column of $B$. That amounts to the rows of $A^{T}$ weighted by the first row of $B^{T}$, which is exactly the first row of $B^{T}A^{T}$. The other rows of $(AB)^{T}$ and $B^{T}A^{T}$ also agree.
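These transpose rules are easy to confirm numerically. A minimal NumPy sketch, using the $2 \times 3$ matrix from the text for the definition and entry rule (7), and two arbitrary random $4 \times 4$ matrices (assumptions of mine, not from the text) for the product and inverse formulas:

```python
import numpy as np

# The transpose example from the text: a 2 by 3 matrix becomes 3 by 2.
A = np.array([[2, 1, 4],
              [0, 0, 3]])
assert A.T.shape == (3, 2)
assert np.array_equal(A.T, np.array([[2, 0], [1, 0], [4, 3]]))

# Entry rule (7): (A^T)_{ij} = A_{ji}.
assert A.T[2, 0] == A[0, 2]

# (AB)^T = B^T A^T and (A^{-1})^T = (A^T)^{-1}, checked on random
# 4 by 4 matrices (invertible with probability 1).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.allclose((M @ B).T, B.T @ M.T)                 # reversed order
assert np.allclose(np.linalg.inv(M).T, np.linalg.inv(M.T))
```

Note the order reversal in `B.T @ M.T`: just as with the inverse of a product, transposing swaps the factors.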