Foundations of Computational Math I
Exam 2: Takehome Exam
Open Notes, Textbook, Homework Solutions Only
Due at the beginning of class, Wednesday, December 1, 2010

Question                              Points Possible    Points Awarded
1. Iterative Methods for Ax = b              25
2. Iterative Methods for Ax = b              30
3. Nonlinear Equations                       30
4. Nonlinear Equations                       25
Total                                       110

Name:
Alias: (to be used when posting the anonymous grade list)

Problem 1 (25 points)

Suppose $B \in \mathbb{R}^{n \times n}$ is a symmetric positive definite tridiagonal matrix of the form
$$
B = \begin{pmatrix} D_r & L \\ L^T & D_b \end{pmatrix}
$$
where $n = 2k$, $D_r$ and $D_b$ are diagonal matrices of order $n/2$, and $L$ is a lower triangular matrix with nonzeros restricted to its main diagonal and its first subdiagonal. Assume that $Bx = b$ can be solved using Jacobi's method, i.e., the iteration converges acceptably fast. Partition each iterate $x_i$ into a top half and a bottom half, i.e.,
$$
x_i = \begin{pmatrix} x_i^{(top)} \\ x_i^{(bot)} \end{pmatrix}.
$$

1.a. Assume an initial guess $x_0$ is given and identify what information, i.e., which pieces of $x_i$ for $0 \le i \le j$, determines the values found in the vectors $x_j^{(top)}$ and $x_j^{(bot)}$ for any $j > 0$.

1.b. Can the relationships from 1.a be used to design an iteration that approximates the solution essentially as well but requires only half of the work of Jacobi's method?

1.c. Relate your new method from 1.b to applying Gauss-Seidel to solve $Bx = b$ starting from the same initial guess $x_0$.

Solution:

The system is of the form $Bx = b$:
$$
\begin{pmatrix} D_r & L \\ L^T & D_b \end{pmatrix}
\begin{pmatrix} x^{(top)} \\ x^{(bot)} \end{pmatrix}
=
\begin{pmatrix} b^{(top)} \\ b^{(bot)} \end{pmatrix}.
$$
Given an initial guess
$$
x_0 = \begin{pmatrix} x_0^{(top)} \\ x_0^{(bot)} \end{pmatrix},
$$
the key fact about Jacobi is that it splits into two independent sequences.
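As a numerical sanity check, one Jacobi sweep on the partitioned system can be sketched as below. All data here is a made-up small example ($k = 4$, diagonals chosen large enough for diagonal dominance so Jacobi is guaranteed to converge); `dr`, `db` hold the diagonals of $D_r$, $D_b$ and `L` is lower bidiagonal as in the problem statement.

```python
import numpy as np

# Hypothetical example data: n = 8, k = 4, with diagonally dominant
# diagonals (entries in [4, 5] vs. off-diagonal row sums < 2) so that
# Jacobi converges.
rng = np.random.default_rng(0)
k = 4
dr = rng.uniform(4.0, 5.0, k)                  # diagonal of D_r
db = rng.uniform(4.0, 5.0, k)                  # diagonal of D_b
# L: lower triangular, nonzeros on its main and first subdiagonal only.
L = np.diag(rng.uniform(-1.0, 1.0, k)) + np.diag(rng.uniform(-1.0, 1.0, k - 1), -1)

B = np.block([[np.diag(dr), L], [L.T, np.diag(db)]])
b = rng.standard_normal(2 * k)
b_top, b_bot = b[:k], b[k:]

x_top, x_bot = np.zeros(k), np.zeros(k)
for _ in range(200):
    # One Jacobi sweep: each half is updated using only the OTHER half's
    # value from the previous sweep (tuple assignment evaluates the
    # right-hand side before either name is rebound).
    x_top, x_bot = (b_top - L @ x_bot) / dr, (b_bot - L.T @ x_top) / db

residual = np.linalg.norm(B @ np.concatenate([x_top, x_bot]) - b)
print(residual)   # residual at roundoff level after convergence
```

The update formulas make the claimed decoupling visible: the new top half reads only the old bottom half, and vice versa.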
The Jacobi iteration exploiting the given partitioning is
$$
\begin{pmatrix} x_{k+1}^{(top)} \\ x_{k+1}^{(bot)} \end{pmatrix}
= -\begin{pmatrix} 0 & D_r^{-1}L \\ D_b^{-1}L^T & 0 \end{pmatrix}
\begin{pmatrix} x_{k}^{(top)} \\ x_{k}^{(bot)} \end{pmatrix}
+ \begin{pmatrix} \tilde b^{(top)} \\ \tilde b^{(bot)} \end{pmatrix}
= \begin{pmatrix} -D_r^{-1}L\, x_k^{(bot)} \\ -D_b^{-1}L^T x_k^{(top)} \end{pmatrix}
+ \begin{pmatrix} \tilde b^{(top)} \\ \tilde b^{(bot)} \end{pmatrix},
$$
where $\tilde b^{(top)} = D_r^{-1} b^{(top)}$ and $\tilde b^{(bot)} = D_b^{-1} b^{(bot)}$.

Writing out the partitioned matrix-vector operations defining each step shows how the information from earlier steps that determines $x_j^{(top)}$ and $x_j^{(bot)}$ flows:

Sequence 1: $x_0^{(top)} \rightarrow x_1^{(bot)} \rightarrow x_2^{(top)} \rightarrow x_3^{(bot)} \rightarrow \cdots$

Sequence 2: $x_0^{(bot)} \rightarrow x_1^{(top)} \rightarrow x_2^{(bot)} \rightarrow x_3^{(top)} \rightarrow \cdots$

So if we follow only one of them we halve the number of operations, and since we know by assumption that Jacobi is converging, we can construct the estimate
$$
x_i = \begin{pmatrix} x_i^{(top)} \\ x_{i-1}^{(bot)} \end{pmatrix}
\quad\text{or}\quad
x_i = \begin{pmatrix} x_{i-1}^{(top)} \\ x_i^{(bot)} \end{pmatrix},
$$
depending on the choice of sequence. ...
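The connection to block Gauss-Seidel (part 1.c) can be checked numerically: a Gauss-Seidel sweep that updates the top half and immediately reuses it for the bottom half reproduces exactly the iterates of Sequence 2, skipping the redundant sequence. The sketch below uses the same kind of made-up example data as before (`dr`, `db`, `L` are assumptions, not the exam's data).

```python
import numpy as np

# Hypothetical example data (k = 4), as in the problem's structure:
# dr, db are the diagonals of D_r, D_b; L is lower bidiagonal.
rng = np.random.default_rng(0)
k = 4
dr = rng.uniform(4.0, 5.0, k)
db = rng.uniform(4.0, 5.0, k)
L = np.diag(rng.uniform(-1.0, 1.0, k)) + np.diag(rng.uniform(-1.0, 1.0, k - 1), -1)
b_top = rng.standard_normal(k)
b_bot = rng.standard_normal(k)

# Full Jacobi, recording every half-iterate so one sequence can be extracted.
tops, bots = [np.zeros(k)], [np.zeros(k)]
for _ in range(6):
    tops.append((b_top - L @ bots[-1]) / dr)    # new top from previous bottom
    bots.append((b_bot - L.T @ tops[-2]) / db)  # new bottom from previous top

# Block Gauss-Seidel: the freshly computed top half is used immediately.
gt, gb = np.zeros(k), np.zeros(k)
for _ in range(3):
    gt = (b_top - L @ gb) / dr
    gb = (b_bot - L.T @ gt) / db

# After i GS sweeps, (gt, gb) equals (tops[2*i - 1], bots[2*i]) from Jacobi:
# GS walks down Sequence 2, doing half the work per equivalent iterate.
print(np.allclose(gt, tops[5]), np.allclose(gb, bots[6]))   # prints: True True
```

Since both loops perform the identical floating-point operations on the chain $x_0^{(bot)} \rightarrow x_1^{(top)} \rightarrow x_2^{(bot)} \rightarrow \cdots$, the agreement is exact, not merely approximate.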
Spring '11, Gallivan