…of Example 2.1.1,
\[
  y' = \lambda y, \qquad y(0) = 1.
\]
Since we know the exact solution to this problem, we may use (2.1.12) to calculate
\[
  d \le \frac{h^2}{2} \max_{0 \le t \le T} |y''(t)| = \frac{(h\lambda)^2}{2}.
\]
With $\lambda = -100$ and $h = 0.001$, this bound equals 0.005. This value may be compared with the actual local error 0.00484 that appears in the second line of Table 2.1.1. (Computed local error values are not available for the other steps recorded in Table 2.1.1. Why?) In this case, the bound compares quite favorably with the exact local error. However, bear in mind that we have very precise data for this example, including an exact solution.

With $f(t, y) = \lambda y$, we have
\[
  |f(t, y) - f(t, z)| = |\lambda|\,|y - z|.
\]
Hence, we may take $L = |\lambda| = 100$ as a Lipschitz constant. Then, using (2.1.13) with $T = 0.01$ yields
\[
  |e_n| \le \frac{0.001}{2(100)} \left( e^{100(0.01)} - 1 \right) 10^4 \approx 0.0859.
\]
Comparing this bound with the actual error on the last line of Table 2.1.1, we see that the bound is about 4.5 times the actual error. Comparing formula (2.1.13) with the results of Table 2.1.2 reveals that both predict a linear convergence rate in $h$.

Example 2.1.4. Let's solve the problem of Examples 2.1.1 and 2.1.3 with a different choice of $h$. Thus, we'll keep $\lambda = -100$ and choose $h = 0.05$ to obtain the difference equation (Example 2.1.1)
\[
  y_n = (1 + h\lambda)\, y_{n-1} = -4\, y_{n-1}, \qquad n > 0, \qquad y_0 = 1.
\]
Results, shown in Table 2.1.3, are very puzzling. As $t_n$ increases, the solution increases exponentially in magnitude while oscillating from time step to time step. The results are clearly not approximating the decaying exponential solution of $y' = -100y$.

  n    t_n      y_n
  0    0.00        1
  1    0.05       -4
  2    0.10       16
  3    0.15      -64
  4    0.20      256
  5    0.25    -1024
  6    0.30     4096
  7    0.35   -16384
  8    0.40    65536

Table 2.1.3: The solution of $y' = \lambda y$, $y(0) = 1$, by Euler's method for $\lambda = -100$ and $h = 0.05$.

The results of Example 2.1.4 display a "numerical instability." Stability of a numerical method is analogous to "well posedness" of a differential equation. Thus, we would like to know whether or not small changes in, e.g., the initial data or $f(t, y)$ produce bounded changes in the solution of the difference scheme.

Definition 2.1.5. Let $y_n$, $n \ge 0$, be the solution of a one-step numerical method with initial condition $y_0$ and let $z_n$ be the solution of the same numerical method with a perturbed initial condition $z_0 = y_0 + \delta_0$. The one-step method is stable if there exist positive constants $\hat{h}$ and $k$ such that
\[
  |y_n - z_n| \le k\,|\delta_0| \qquad \text{whenever } nh \le T,\ h \in (0, \hat{h}).
  \tag{2.1.14}
\]

Remark 2. A one-step method, like Euler's method, only requires information about the solution at $t_{n-1}$ to compute a solution at $t_n$. We'll have to modify this definition when dealing with multi-step methods.

Remark 3. The definition is difficult to apply in practice. It is too dependent on $f(t, y)$.

Example 2.1.5. Although Definition 2.1.5 is difficult to apply, we can apply it to Euler's method. Consider the original and perturbed problems
\[
  y_n = y_{n-1} + h f(t_{n-1}, y_{n-1}),
\]
\[
  z_n = z_{n-1} + h f(t_{n-1}, z_{n-1}), \qquad z_0 = y_0 + \delta_0.
\]
Let $\delta_n = z_n - y_n$, $n \ge 0$, …
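To make the behavior of Example 2.1.4 concrete, here is a minimal Python sketch (my own illustration, not code from these notes) that applies forward Euler to y' = λy with λ = -100. With h = 0.05 each step multiplies the solution by 1 + hλ = -4, reproducing the oscillating, growing values of Table 2.1.3; with h = 0.001 the amplification factor is 0.9 and the computed solution decays, as in the regime of Table 2.1.1. The function name `euler` and the printed output are illustrative choices.

```python
# A minimal sketch (not from these notes) of forward Euler applied to
# y' = lam*y, y(0) = 1, with lam = -100.

import math


def euler(f, y0, h, n_steps):
    """Forward Euler: y_n = y_{n-1} + h*f(t_{n-1}, y_{n-1})."""
    t, y = 0.0, y0
    history = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        history.append((t, y))
    return history


lam = -100.0


def f(t, y):
    return lam * y          # exact solution: y(t) = exp(lam * t)


# Unstable choice: h = 0.05 gives y_n = (1 + h*lam)^n = (-4)^n,
# reproducing Table 2.1.3.
for t, y in euler(f, 1.0, 0.05, 8):
    print(f"t = {t:4.2f}   y_n = {y:10.0f}   exact = {math.exp(lam * t):.3e}")

# Convergent choice: h = 0.001 gives |1 + h*lam| = 0.9 < 1, so the
# numerical solution decays like the exact one.
t10, y10 = euler(f, 1.0, 0.001, 10)[-1]
print(f"h = 0.001: y_10 = {y10:.5f}, exact y({t10:.3f}) = {math.exp(lam * t10):.5f}")
```

For this model problem, the qualitative difference between the two runs comes down to the amplification factor: the Euler iterates stay bounded only when |1 + hλ| ≤ 1, i.e., h ≤ 2/|λ| = 0.02 here, which h = 0.05 violates.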
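Example 2.1.5 is cut off in this preview, but the perturbation experiment it sets up is easy to run numerically. The sketch below (my own illustration, not the notes' derivation) integrates the original and perturbed problems with Euler's method for f(t, y) = λy, λ = -100, h = 0.001, and compares |z_n - y_n| with the generic Lipschitz-based bound (1 + hL)^n |δ_0| ≤ e^{L t_n} |δ_0|, using L = 100. The perturbation size `delta0` is an assumed value chosen for illustration.

```python
# A hedged illustration (not from the notes) of the perturbation setup in
# Example 2.1.5: Euler applied to the original and perturbed initial values.

lam, L = -100.0, 100.0      # f(t, y) = lam*y; L = |lam| is a Lipschitz constant
h, n_steps = 0.001, 10
delta0 = 1.0e-3             # assumed size of the initial-condition perturbation


def f(t, y):
    return lam * y


y, z = 1.0, 1.0 + delta0
for n in range(1, n_steps + 1):
    t_prev = (n - 1) * h
    y = y + h * f(t_prev, y)
    z = z + h * f(t_prev, z)
    # Generic one-step estimate: |z_n - y_n| <= (1 + h*L)^n * |delta_0|,
    # which is bounded by e^{L*t_n} * |delta_0| for n*h <= T.
    bound = (1.0 + h * L) ** n * delta0
    print(f"n = {n:2d}   |z_n - y_n| = {abs(z - y):.3e}   bound = {bound:.3e}")
```

In this run the actual separation 0.9^n |δ_0| shrinks while the bound grows modestly toward e^{LT} |δ_0|, which is consistent with the kind of stability constant k that Definition 2.1.5 asks for.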