LU Factorisation

Dhavide Aruliah
UOIT
MATH 2070U

(c) D. Aruliah (UOIT), LU Factorisation, MATH 2070U, 30 slides


Outline

1. LU factorisation
   - Reminder: Gaussian elimination
   - Computing LU factorisations
2. Positive definite matrices and the Cholesky factorisation
   - Positive definiteness
   - Cholesky factorisation
3. Solving linear systems with matrix factorisations
   - Pivoting in practice


Reading Assignment 10: Q1

Let {u}, {v}, and {w} be n-dimensional column vectors. Then, since matrix
products are associative, there are two different ways to compute the product
{u}{v}^T{w}, namely ({u}{v}^T){w} or {u}({v}^T{w}). Run the MATLAB code
segment below. Try it again for several values of n (e.g., n = 1500, 3000,
6000, 12000, 24000).

    n = 750;
    u = randn(n,1); v = randn(n,1); w = randn(n,1);
    t1 = cputime; y1 = (u*v')*w; t1 = cputime - t1
    t2 = cputime; y2 = u*(v'*w); t2 = cputime - t2

Based on your experiments, which takes longer to compute: y1 or y2? What
would the operation count be (in flops, as a function of n) for computing
y1? What would the operation count be (in flops, as a function of n) for
computing y2? Are the times you compute consistent with what you would
expect from simply counting flops?
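The experiment above can also be reproduced outside MATLAB. The following is a NumPy translation of the slide's snippet (a sketch, not part of the original assignment); the two parenthesisations give the same vector but very different costs:

```python
import time
import numpy as np

n = 750
rng = np.random.default_rng(0)
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
w = rng.standard_normal((n, 1))

# y1: form the n-by-n outer product u v^T first, then multiply by w
t1 = time.process_time()
y1 = (u @ v.T) @ w
t1 = time.process_time() - t1

# y2: form the scalar inner product v^T w first, then scale u
t2 = time.process_time()
y2 = u @ (v.T @ w)
t2 = time.process_time() - t2

# Associativity guarantees the same result up to rounding
print(np.allclose(y1, y2))  # True
```

For large n the first version also allocates an n-by-n intermediate array, which is why it can exhaust memory while the second still succeeds.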
Reading Assignment 10: Q1, My answer

Computing y1 is extremely inefficient:
- n^2 multiplications to compute the matrix [B] := ({u}{v}^T)
- 2n^2 - n flops to compute the matrix-vector product ({u}{v}^T){w} = [B]{w}
Computing y1 requires 3n^2 - n = O(n^2) flops (and storage).

By contrast, y2 is very efficient:
- 2n - 1 flops to compute the inner product alpha := {v}^T{w}
- n flops to compute {u}({v}^T{w}) = {u}alpha
Computing y2 requires 3n - 1 = O(n) flops (and storage).

Notice that computation of y1 caused an out-of-memory error for some values
of n even when y2 succeeded.


Reminder: Gaussian elimination example

Consider solving the linear system of equations

    2x1 +  x2 +  x3       =  4
    4x1 + 3x2 + 3x3 +  x4 = 11
    8x1 + 7x2 + 9x3 + 5x4 = 29
    6x1 + 7x2 + 9x3 + 8x4 = 30

Write the system as [A]{x} = {b} with

    [A] = [ 2 1 1 0        {x} = [ x1        {b} = [  4
            4 3 3 1                x2                11
            8 7 9 5                x3                29
            6 7 9 8 ],             x4 ],             30 ]

Reminder: Gaussian elimination example (cont....
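The 4-by-4 example can be checked numerically before working the elimination by hand. The sketch below (a NumPy illustration, not from the slides) solves [A]{x} = {b}; numpy.linalg.solve itself uses an LU factorisation with partial pivoting internally, via LAPACK:

```python
import numpy as np

# Coefficient matrix and right-hand side from the example above;
# the missing x4 term in the first equation gives the 0 entry
A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])
b = np.array([4., 11., 29., 30.])

# Solve A x = b (internally: LU factorisation with partial pivoting)
x = np.linalg.solve(A, b)
print(x)  # [1. 1. 1. 1.]

# Verify the residual A x - b is (numerically) zero
print(np.allclose(A @ x, b))  # True
```

Knowing the exact solution {x} = (1, 1, 1, 1)^T makes it easy to check each step of the hand elimination that follows.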
Term: Winter '10
Professor: aruliahdhavidhe
Tags: Determinant, Linear Systems, Triangular matrix, LU factorisation, Cholesky factorisation
