STATS 135B



If $X = QR$ with $Q$ orthonormal, the projection on the column space of $X$ is $P = QQ'$, and

$$\operatorname{tr} P = \operatorname{tr} QQ' = \operatorname{tr} Q'Q = r.$$
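This trace identity can be checked numerically in R. A minimal sketch (the seed and matrix here are my own, not from the slides):

```r
# Check tr(P) = tr(QQ') = tr(Q'Q) = r for a random 5 x 3 matrix of rank 3.
set.seed(1)
x <- matrix(rnorm(15), 5, 3)
q <- qr.Q(qr(x))          # thin Q factor: 5 x 3, orthonormal columns
p <- q %*% t(q)           # projection onto the column space of x
sum(diag(p))              # trace of P equals the rank, here 3
sum(diag(t(q) %*% q))     # tr(Q'Q) = tr(I_3) = 3 as well
```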


> x <- matrix(rnorm(15), 5, 3)
> y <- qr(x)
> y
$qr
            [,1]       [,2]        [,3]
[1,] -1.72567945 -0.5367561 -0.14277359
[2,]  0.06905176  1.1188063 -0.06029126
[3,]  0.27293972 -0.6187797  0.44352349
[4,] -0.45991800 -0.1067149  0.18713748
[5,] -0.44927581  0.5819942  0.78630026

$rank
[1] 3

$qraux
[1] 1.712294 1.516727 1.588822

$pivot
[1] 1 2 3

attr(,"class")
[1] "qr"
> qr.Q(y)
            [,1]       [,2]        [,3]
[1,] -0.71229379 -0.3456045 -0.26525534
[2,] -0.06905176 -0.5306637  0.06260371
[3,] -0.27293972  0.5636904 -0.66100833
[4,]  0.45991800  0.1995434 -0.12104787
[5,]  0.44927581 -0.4913137 -0.68857518
> qr.R(y)
          [,1]       [,2]        [,3]
[1,] -1.725679 -0.5367561 -0.14277359
[2,]  0.000000  1.1188063 -0.06029126
[3,]  0.000000  0.0000000  0.44352349
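As a check, the Q and R factors extracted by qr.Q() and qr.R() multiply back to x, and Q has orthonormal columns. A sketch, using a fresh random matrix rather than the one printed above:

```r
# Verify that qr.Q() and qr.R() reproduce x and that Q'Q = I.
set.seed(2)
x <- matrix(rnorm(15), 5, 3)
y <- qr(x)
q <- qr.Q(y)
r <- qr.R(y)
max(abs(q %*% r - x))            # essentially zero: QR reproduces x
max(abs(t(q) %*% q - diag(3)))   # essentially zero: columns of Q are orthonormal
```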


6: LDU Decomposition and Matrix Inverse
In this chapter we redefine the rank and the inverse of a square matrix using the LDU decomposition. In a matrix decomposition a matrix is written as a product of simpler matrices. This reveals information about the matrix and can be used to guide subsequent computations, much like factoring a polynomial reveals its roots and general properties.


LDU decomposition. Each square matrix $A$ can be written as a product $A = LDU$, where $L$ is lower-triangular, $U$ is upper-triangular, and $D$ is diagonal. We can choose $L$ and $U$ such that they have ones on the diagonal (and are thus non-singular).

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{pmatrix} \begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix}.$$
More in detail:

$$\begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix} \begin{pmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{pmatrix} \begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} d_{11} & d_{11}u_{12} & d_{11}u_{13} \\ d_{11}l_{21} & d_{22} + d_{11}l_{21}u_{12} & d_{11}l_{21}u_{13} + d_{22}u_{23} \\ d_{11}l_{31} & d_{11}l_{31}u_{12} + d_{22}l_{32} & d_{11}l_{31}u_{13} + d_{22}l_{32}u_{23} + d_{33} \end{pmatrix}.$$

We can easily solve for the unknowns from here. NB: this process can go astray (suppose for instance $a_{11} = 0$ and $a_{12} \neq 0$). We then need to pivot on off-diagonal elements (and use permutation matrices).
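Matching the product entry by entry gives the unknowns in order: $d_{11} = a_{11}$, then the first row and column of $U$ and $L$, then $d_{22}$, and so on. A minimal sketch for a concrete $3 \times 3$ matrix (my own example, chosen so that no pivot is zero):

```r
# Solve for L, D, U of a 3 x 3 matrix entry by entry, following the
# product formula above (assumes all diagonal pivots are nonzero).
a <- matrix(c(2, 4, -2,
              1, 3,  5,
              3, 7,  1), 3, 3, byrow = TRUE)
d11 <- a[1, 1]
u12 <- a[1, 2] / d11; u13 <- a[1, 3] / d11
l21 <- a[2, 1] / d11; l31 <- a[3, 1] / d11
d22 <- a[2, 2] - d11 * l21 * u12
u23 <- (a[2, 3] - d11 * l21 * u13) / d22
l32 <- (a[3, 2] - d11 * l31 * u12) / d22
d33 <- a[3, 3] - d11 * l31 * u13 - d22 * l32 * u23
L <- rbind(c(1, 0, 0), c(l21, 1, 0), c(l31, l32, 1))
D <- diag(c(d11, d22, d33))
U <- rbind(c(1, u12, u13), c(0, 1, u23), c(0, 0, 1))
max(abs(L %*% D %*% U - a))   # essentially zero: LDU reproduces a
```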


In matrix-vector notation LDU can also be written as follows. If $A$ is a square matrix and $e_i$ is a unit vector (a vector with element $i$ equal to one and zero elsewhere), then $Ae_i$ is column $i$ of $A$ and $e_i'A$ is row $i$. Now compute

$$A - \frac{Ae_i\,e_i'A}{e_i'Ae_i}.$$

This matrix has both row $i$ and column $i$ equal to zero.
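A quick numerical check of this step, as a sketch with a random matrix of my own choosing:

```r
# Subtracting (A e_i)(e_i' A) / (e_i' A e_i) zeroes out row i and column i.
set.seed(3)
a <- matrix(rnorm(9), 3, 3)
i <- 2
e <- rep(0, 3); e[i] <- 1   # unit vector e_i
b <- a - (a %*% e) %*% (t(e) %*% a) / as.numeric(t(e) %*% a %*% e)
b[i, ]   # row i is now zero
b[, i]   # column i is now zero
```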
If we apply this result recursively along the diagonal we get a decomposition of the form

$$A = \frac{Ae_1\,e_1'A}{e_1'Ae_1} + A_1, \qquad A_1 = \frac{A_1e_2\,e_2'A_1}{e_2'A_1e_2} + A_2,$$

and so on, which can be written as $A = LDU$, with $L$ lower-triangular with unit diagonal, $U$ upper-triangular with unit diagonal, and $D$ diagonal.


pivot <- function(x, i, j) {
  p <- x[i, j]; r <- x[i, ] / p; c <- x[, j] / p
  return(x - p * outer(c, r))
}

pivot_along_diagonal <- function(x) {
  n <- nrow(x); d <- diag(n); l <- r <- matrix(0, n, n)
  for (i in 1:n) {
    p <- x[i, i]; u <- x[i, ] / p; v <- x[, i] / p
    x <- x - p * outer(v, u)
    l[, i] <- v; r[i, ] <- u; d[i, i] <- p
  }
  return(list(l = l, r = r, d = d))
}
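The diagonal-pivoting routine can be exercised directly. A sketch, restating the slide's pivot_along_diagonal() so the snippet is self-contained, and using a positive definite test matrix of my own so every pivot is nonzero:

```r
# Check that the factors returned by pivot_along_diagonal() multiply
# back to the input matrix: a = l %*% d %*% r.
pivot_along_diagonal <- function(x) {
  n <- nrow(x); d <- diag(n); l <- r <- matrix(0, n, n)
  for (i in 1:n) {
    p <- x[i, i]; u <- x[i, ] / p; v <- x[, i] / p
    x <- x - p * outer(v, u)
    l[, i] <- v; r[i, ] <- u; d[i, i] <- p
  }
  return(list(l = l, r = r, d = d))
}
a <- matrix(c(4, 2, 1,
              2, 5, 3,
              1, 3, 6), 3, 3)
z <- pivot_along_diagonal(a)
max(abs(z$l %*% z$d %*% z$r - a))   # essentially zero
```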
pivot <- function(x) {
  n <- nrow(x); m <- ncol(x); ii <- rep(0, n); jj <- rep(0, m)
  l <- u <- d <- matrix(0, n, m); k <- 1
  repeat {
    print(x)
    if (substr(readline("Continue ? "), 1, 1) == "n") break()
    i <- as.integer(readline("Row Index ? "))
    j <- as.integer(readline("Column Index ? "))
    ii[k] <- i; jj[k] <- j
    p <- x[i, j]; r <- x[i, ] / p; c <- x[, j] / p
    x <- x - p * outer(c, r)
    l[, i] <- c; u[j, ] <- r; d[i, i] <- p
    k <- k + 1
  }
  return(list(l = l, u = u, d = d))
}


> a
      [,1]  [,2]  [,3]
[1,] -0.17  0.15  0.46
[2,]  0.92 -0.57  0.95
[3,] -1.89 -0.41 -1.22
> a1
     [,1]  [,2] [,3]
[1,]    0  0.00  0.0
[2,]    0  0.26  3.4
[3,]    0 -2.12 -6.3
> a2
     [,1] [,2] [,3]
[1,]    0    0    0
[2,]    0    0    0
[3,]    0    0   22
> l
       l1   l2 l3
[1,]  1.0  0.0  0
[2,] -5.4  1.0  0
[3,] 11.1 -8.2  1
> u
      u1 u2 u3
[1,]  1.0  0  0
[2,] -0.9  1  0
[3,] -2.7 13  1
> d
      [,1] [,2] [,3]
[1,] -0.17 0.00    0
[2,]  0.00 0.26    0
[3,]  0.00 0.00   22


For symmetric matrices the LDU decomposition becomes the LDL' decomposition. The Cholesky decomposition is the LDL' decomposition applied to positive semi-definite matrices.
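In R a Cholesky factor is computed by the built-in chol(), which returns an upper-triangular $R$ with $A = R'R$ (so $R$ plays the role of $\sqrt{D}\,L'$). A sketch, using a positive definite example of my own:

```r
# chol() returns an upper-triangular factor R with a = t(R) %*% R.
a <- matrix(c(4, 2, 2,
              2, 5, 3,
              2, 3, 6), 3, 3)   # symmetric positive definite
r <- chol(a)
r                                # upper triangular
max(abs(t(r) %*% r - a))         # essentially zero
```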