


Since the local discretization error has an expansion (8.3.5a) in even powers of the mesh spacing, we can construct higher-order approximations by Richardson extrapolation. Thus, we calculate two solutions $y^h_i$, $i = 0, 1, \ldots, N$, and $y^{h/2}_i$, $i = 0, 1, \ldots, 2N$, with spacings $h$ and $h/2$, respectively. Using (8.3.5a) we have

$$y(x_i) - y^h_i = \sum_{k=1}^{K} d_k(x_i)\, h^{2k} + O(h^{2K+2}), \qquad i = 0, 1, \ldots, N,$$

$$y(x_i) - y^{h/2}_i = \sum_{k=1}^{K} d_k(x_i) \left(\frac{h}{2}\right)^{2k} + O(h^{2K+2}), \qquad i = 0, 1, \ldots, N.$$

Subtracting the two results, we obtain an approximation of the error as

$$d_1(x_i)\, h^2 = \frac{4}{3}\left(y^{h/2}_i - y^h_i\right) + O(h^4). \eqno(8.3.7a)$$

The approximation can be added to, e.g., $y^{h/2}_i$, $i = 0, 1, \ldots, N$, to obtain the higher-order solution

$$\hat{y}^{h/2}_i = y^{h/2}_i + \frac{y^{h/2}_i - y^h_i}{3} = y(x_i) + O(h^4). \eqno(8.3.7b)$$

This process can be repeated to eliminate successively higher-order terms in the error expansion (8.3.5a). Instead of doing this, however, we'll describe the alternate approach of deferred corrections.

Consider a solution $y_i$, $i = 0, 1, \ldots, N$, of (8.2.3). Using (8.3.1), we know that this solution satisfies

$$L_i(y(x_i)) = \tau_i(y). \eqno(8.3.8a)$$

Suppose that $\hat\tau_i(y)$ is an $O(h^p)$ approximation of $\tau_i$; then the solution of

$$L_i(\hat{y}) = \hat\tau_i(y), \quad i = 1, 2, \ldots, N-1, \qquad g_L(\hat{y}_0) = g_R(\hat{y}_N) = 0, \eqno(8.3.8b)$$

is an $O(h^p)$ approximation of $y(x)$. This process can be repeated, as shown in Figure 8.3.2, with successively better approximations of the local discretization error. Some comments about the procedure follow.

1. Unlike Richardson extrapolation, the same mesh is used for the entire sequence of computations.

2. $\hat\tau^{(K)}$ is an $O(h^{2K+2})$ approximation of the local discretization error; hence, it is an $O(h^{2K+2})$ approximation of the first $K$ terms in (8.3.2a).

3. Likewise, $y^{(K)}_i$, $i = 0, 1, \ldots, N$, is an $O(h^{2K+2})$ approximation of $y(x)$.

4. $\hat\tau^{(K+1)}(y^{(K)})$ is an a posteriori estimate of the local discretization error of the solution $y^{(K)}_i$, $i = 0, 1, \ldots, N$.
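The extrapolation step (8.3.7) can be illustrated numerically. The following is a minimal sketch applied, for simplicity, to a second-order central-difference approximation of a derivative rather than to a full boundary value problem; the function names and the test point are our own illustrative choices, not from the text. The error of the central difference has an expansion in even powers of $h$, so one Richardson step raises the accuracy from $O(h^2)$ to $O(h^4)$, exactly as in (8.3.7a)-(8.3.7b).

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_step(f, x, h):
    """One Richardson step: combine the h and h/2 results to cancel
    the leading h^2 error term, cf. (8.3.7a)-(8.3.7b)."""
    d_h  = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2.0)
    err_est = (d_h2 - d_h) / 3.0      # estimate of the error in d_h2
    return d_h2 + err_est             # O(h^4) accurate value

x, h = 1.0, 0.1
exact = math.cos(x)                   # derivative of sin at x
e2 = abs(central_diff(math.sin, x, h) - exact)    # O(h^2) error
e4 = abs(richardson_step(math.sin, x, h) - exact) # O(h^4) error
```

Halving `h` should reduce `e2` by roughly a factor of 4 and `e4` by roughly a factor of 16, confirming the orders claimed above.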
procedure deferred correction
begin
    $\hat\tau^{(0)}_i(y^{(-1)}) := 0$, $i = 1, 2, \ldots, N-1$; $K := 0$
    while accuracy not sufficient do
    begin
        Solve $L_i(y^{(K)}) = \hat\tau^{(K)}_i(y^{(K-1)})$, $i = 1, 2, \ldots, N-1$,
              $g_L(y^{(K)}_0) = g_R(y^{(K)}_N) = 0$
        Calculate $\hat\tau^{(K+1)}(y^{(K)})$
        $K := K + 1$
    end
end { deferred correction }

Figure 8.3.2: Algorithm for deferred corrections.

5. Lentini and Pereyra [7] implemented this procedure in a finite-difference code called PASVA for the solution of two-point boundary value problems.

It remains to compute the approximation $\hat\tau^{(K)}(y^{(K-1)})$, $i = 0, 1, \ldots, N$. This can be done by passing a $(2K+1)$st-degree polynomial through the $2K + 2$ points neighboring $x_{i-1/2}$, interpolating $(x, f(x, y^{(K-1)}))$ at these points. We obtain an approximation $\hat{T}_k(y^{(K-1)})$ of $T_k(y(x_{i-1/2}))$ by differentiating the interpolating polynomial. The result is

$$\hat\tau^{(K)}_i(y^{(K-1)}) = \sum_{k=1}^{K} h^{2k}\, \hat{T}_k\!\left(y^{(K-1)}(x_{i-1/2})\right).$$

The interpolation formulas can be complex and, typically, a mixture of centered, forward, and backward difference approximations is used.

Example 8.3.2. When $K = 1$ we require $\hat\tau^{(1)}_i(y^{(0)})$ to be an $O(h^4)$ accurate approximation of $\tau_i$. This can be done with a cubic polynomial approximation of

$$T_1(y(x_{i-1/2})) = -\frac{1}{12}\, f''(x_{i-1/2}, y(x_{i-1/2})).$$

Let the cubic polynomial...
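The deferred-correction loop can be sketched for a concrete model problem. The sketch below is ours, not the PASVA implementation: we take $y'' = f(x)$ on $[0,1]$ with zero boundary values and exact solution $y = \sin \pi x$, discretized with standard second-order central differences. For this model problem the leading truncation term $(h^2/12)\,y''''$ equals $(h^2/12)\,f''$, which we approximate by a second difference of the known $f$; one correction pass then raises the accuracy from $O(h^2)$ to $O(h^4)$.

```python
import numpy as np

def solve_tridiag(rhs, h):
    """Solve (y[i-1] - 2 y[i] + y[i+1]) / h^2 = rhs[i] at the n
    interior points, with zero Dirichlet boundary conditions."""
    n = len(rhs)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, rhs)

# Model problem (illustrative choice): y'' = f, y(0) = y(1) = 0,
# exact solution y = sin(pi x), so f = -pi^2 sin(pi x).
N = 20
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)
exact = np.sin(np.pi * x[1:-1])

# Step K = 0: basic solve with tau_hat = 0.
y0 = solve_tridiag(f[1:-1], h)

# Step K = 1: estimate the leading truncation term
# (h^2/12) y'''' = (h^2/12) f'' by a second difference of f,
# and re-solve with the corrected right-hand side.
tau = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / 12.0
y1 = solve_tridiag(f[1:-1] + tau, h)

err0 = np.max(np.abs(y0 - exact))   # O(h^2)
err1 = np.max(np.abs(y1 - exact))   # O(h^4)
```

Note that the corrected right-hand side $(f_{i-1} + 10 f_i + f_{i+1})/12$ reproduces the classical Numerov scheme, which is a convenient check that the correction was formed consistently; the same mesh is reused for both solves, in line with comment 1 above.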

This document was uploaded on 03/16/2014 for the course CSCI 6820 at Rensselaer Polytechnic Institute.
