# cs240a-sparse - CS 240A: Solving Ax = b in parallel


## Solving Ax = b in parallel

- Dense A: Gaussian elimination with partial pivoting (LU); see the April 15 slides
  - Same flavor as matrix * matrix, but more complicated
- Sparse A: Gaussian elimination (Cholesky, LU, etc.)
  - Graph algorithms
- Sparse A: iterative methods (conjugate gradient, etc.)
  - Sparse matrix times dense vector
- Sparse A: preconditioned iterative methods and multigrid
  - Mixture of lots of things

## Matrix and Graph

- Edge from row i to column j for each nonzero A(i,j)
- No edges for diagonal nonzeros
- If A is symmetric, G(A) is an undirected graph
- A symmetric permutation PAP^T renumbers the vertices

(Figure: a 7-vertex example matrix A and its graph G(A).)
## Compressed Sparse Matrix Storage

Full storage: a 2-dimensional array, using nrows*ncols memory. Example:

    [ 31   0  53 ]
    [  0  59   0 ]
    [ 41  26   0 ]

Sparse storage: compressed storage by columns (CSC), using three 1-dimensional arrays and (2*nzs + ncols + 1) memory; CSR (by rows) is similar. For the matrix above, with 1-based indices:

    value:     31  41  59  26  53
    row:        1   3   2   3   1
    colstart:   1   3   5   6
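The CSC layout above can be sketched in a few lines of plain Python. This is an illustrative sketch only (0-based indices, unlike the slide's 1-based example), not any particular library's format; the key property is that a matrix-vector product walks each column's nonzeros exactly once.

```python
# The slide's 3x3 example in CSC form, using 0-based indices.
value    = [31, 41, 59, 26, 53]   # nonzeros, stored column by column
row      = [0, 2, 1, 2, 0]        # row index of each nonzero
colstart = [0, 2, 4, 5]           # where each column begins in value/row

def csc_matvec(value, row, colstart, x):
    """y = A*x for A stored in CSC: scan each column's nonzeros once."""
    n = len(colstart) - 1
    y = [0.0] * (max(row) + 1)
    for j in range(n):
        for p in range(colstart[j], colstart[j + 1]):
            y[row[p]] += value[p] * x[j]
    return y

print(csc_matvec(value, row, colstart, [1, 1, 1]))  # [84.0, 59.0, 67.0]
```

Note that the inner loop touches only stored entries, which is exactly why "sparse matrix times dense vector" costs O(nzs) rather than O(nrows*ncols).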

## The Landscape of Ax = b Solvers

|                             | Direct: A = LU | Iterative: y' = Ay   |
|-----------------------------|----------------|----------------------|
| Nonsymmetric                | Pivoting LU    | GMRES, BiCGSTAB, ... |
| Symmetric positive definite | Cholesky       | Conjugate gradient   |

Direct methods are more robust; iterative methods need less storage (if sparse). Nonsymmetric solvers are more general; symmetric positive definite solvers are more robust.
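For the symmetric positive definite corner of this landscape, the conjugate gradient method needs nothing from A except matrix-vector products. A minimal unpreconditioned sketch in plain Python (the function names and defaults here are illustrative, not any library's API):

```python
def cg(matvec, b, tol=1e-10, maxiter=100):
    """Conjugate gradient for SPD A, given only as y = matvec(x)."""
    n = len(b)
    x = [0.0] * n
    r = [bi - yi for bi, yi in zip(b, matvec(x))]   # residual b - A*x
    p = r[:]                                         # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        # Next direction is A-conjugate to the previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD example: A = [[4, 1], [1, 3]], b = [1, 2]; exact x = [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
mv = lambda v: [sum(a * vi for a, vi in zip(arow, v)) for arow in A]
x = cg(mv, [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations; in practice the iteration count depends on the conditioning of A, which is what preconditioning (later in these notes) improves.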

## Gaussian elimination to solve Ax = b

For a symmetric, positive definite matrix:

1. Matrix factorization: A = LL^T (Cholesky factorization)
2. Forward triangular solve: Ly = b
3. Backward triangular solve: L^T x = y

For a nonsymmetric matrix:

1. Matrix factorization: PA = LU (partial pivoting)
2. . . .
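The two triangular solves in the SPD case can be sketched as plain forward and backward substitution. This is a dense, illustrative sketch; a sparse code would only visit stored nonzeros.

```python
def forward_solve(L, b):
    """Solve L y = b for lower-triangular L: row i needs only y[0..i-1]."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def backward_solve(L, y):
    """Solve L^T x = y, reading the same lower-triangular L column-wise."""
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[j][i] * x[j] for j in range(i + 1, n))) / L[i][i]
    return x

# If L = [[2, 0], [1, 1]] then A = L L^T = [[4, 2], [2, 2]], and b = [6, 4]
# corresponds to x = [1, 1].
L = [[2.0, 0.0], [1.0, 1.0]]
x = backward_solve(L, forward_solve(L, [6.0, 4.0]))  # [1.0, 1.0]
```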
## Sparse Column Cholesky Factorization

```matlab
for j = 1 : n
    L(j:n, j) = A(j:n, j);
    for k < j with L(j, k) nonzero
        % sparse cmod(j, k)
        L(j:n, j) = L(j:n, j) - L(j, k) * L(j:n, k);
    end;
    % sparse cdiv(j)
    L(j, j) = sqrt(L(j, j));
    L(j+1:n, j) = L(j+1:n, j) / L(j, j);
end;
```

Column j of A becomes column j of L.

(Figure: the columns of L and L^T used to form column j of the factor of A.)
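The pseudocode above transcribes almost line for line into Python. This sketch uses dense storage for clarity, so only the skipped cmod calls hint at the sparse savings; a real sparse code would also store and traverse only the nonzeros of each column.

```python
import math

def column_cholesky(A):
    """Left-looking column Cholesky, following the slide's pseudocode.
    Dense storage; the 'if' skips cmod(j, k) whenever L(j, k) is zero,
    which is where a sparse implementation saves its work."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        col = [A[i][j] for i in range(j, n)]       # L(j:n, j) = A(j:n, j)
        for k in range(j):
            if L[j][k] != 0.0:                     # sparse cmod(j, k)
                for i in range(j, n):
                    col[i - j] -= L[j][k] * L[i][k]
        col[0] = math.sqrt(col[0])                 # sparse cdiv(j)
        for i in range(j + 1, n):
            col[i - j] /= col[0]
        for i in range(j, n):
            L[i][j] = col[i - j]
    return L

# A = [[4, 2], [2, 2]] factors as L = [[2, 0], [1, 1]].
L = column_cholesky([[4.0, 2.0], [2.0, 2.0]])
```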

## Irregular mesh: NASA Airfoil in 2D

(Figure: an irregular triangular mesh around the NASA Airfoil in 2D.)
## Graphs and Sparse Matrices: Cholesky factorization

Symmetric Gaussian elimination:

    for j = 1 to n
        add edges between j's higher-numbered neighbors

Fill: new nonzeros in the factor. The filled graph G+(A) is chordal.

(Figure: a 10-vertex example of G(A) and its filled graph G+(A).)
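The elimination loop above can be simulated directly on the graph to predict fill before touching any numerical values. An illustrative sketch (0-based vertices, elimination in natural order):

```python
def fill_edges(n, edges):
    """Symbolic symmetric Gaussian elimination: for each vertex j in order,
    connect all of j's higher-numbered neighbors pairwise. Returns the set
    of fill edges, i.e. the new nonzeros that appear in the factor."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    fill = set()
    for j in range(n):
        higher = sorted(v for v in adj[j] if v > j)
        for a in higher:
            for b in higher:
                if a < b and b not in adj[a]:
                    adj[a].add(b)          # new edge: fill in the factor
                    adj[b].add(a)
                    fill.add((a, b))
    return fill

# A 4-cycle 0-1-2-3-0: eliminating vertex 0 connects its neighbors 1 and 3.
print(fill_edges(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # {(1, 3)}
```

A path graph ordered end to end produces no fill at all, which is one way to see why the ordering (permutation) matters so much for sparse Cholesky.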

## Permutations of the 2-D model problem

- Theorem: With the natural permutation, the n-vertex model problem has Θ(n^(3/2)) fill. ("order exactly")
- Theorem: With any permutation, the n-vertex model problem has Ω(n log n) fill. ("order at least")
- Theorem: With a nested dissection permutation, the n-vertex model problem has O(n log n) fill. ("order at most")
## Nested dissection ordering

A separator in a graph G is a set S of vertices whose removal leaves at least two connected components.
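In the simplest case, a path graph, nested dissection can be sketched recursively: the middle vertex is a one-vertex separator, so number the two halves first and the separator last. This toy sketch only illustrates the "separator numbered last" recursion; real nested dissection finds small separators in a mesh or general graph.

```python
def nested_dissection_order(lo, hi, order=None):
    """Nested dissection ordering of the path graph on vertices lo..hi-1:
    recursively order each half, then append the separator (the midpoint),
    so separators get the highest numbers."""
    if order is None:
        order = []
    if hi - lo <= 0:
        return order
    mid = (lo + hi) // 2                      # one-vertex separator
    nested_dissection_order(lo, mid, order)   # left half first
    nested_dissection_order(mid + 1, hi, order)
    order.append(mid)                         # separator numbered last
    return order

# e.g. nested_dissection_order(0, 7) -> [0, 2, 1, 4, 6, 5, 3]
order = nested_dissection_order(0, 7)
```

Because each eliminated vertex of a path has at most one higher-numbered neighbor under this ordering, the factor incurs no fill, consistent with the O(n log n) bound quoted above for the 2-D model problem.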

