NUMERICAL ANALYSIS NOTES

JOHN RANDALL

Contents
1. Errors
1.1. Introduction
1.2. Floating-point format
1.3. Error calculation
1.4. Errors in scientific calculation
2. Root Finding
2.1. Introduction
2.2. Bisection Method
2.3. Newton's Method
2.4. Error analysis and accelerating convergence
3. Interpolation
3.1. Introduction
3.2. Lagrange Interpolation
3.3. Cubic Spline Interpolation
4. Numerical Integration
4.1. Introduction
4.2. Basic Quadrature
4.3. Error Estimates in Basic Quadrature
4.4. Composite quadrature rules
5. Numerical Differentiation
5.1. Introduction
5.2. Formulas for numerical differentiation
6. Approximation Theory
6.1. Introduction
6.2. Linear discrete least squares approximation
6.3. Polynomial least squares approximation
7. Direct Methods for Linear Systems
7.1. Introduction
7.2. Systems of linear equations
8. Iterative Methods for Linear Systems
8.1. Introduction
8.2. Convergence of vectors

Date: September 1, 2004.

1. Errors

1.1. Introduction. In mathematics we have the luxury of working with real numbers, which can be regarded as infinite decimals. A computer can only represent a finite set of rational numbers. Fortunately, many scientific phenomena are sufficiently stable that even when playing with this extremely short deck, we can still make sensible predictions about the world.

Whenever we work with a fixed amount of precision, we lose accuracy as soon as we do a calculation. This is referred to as roundoff error. When carrying out a numerical calculation, we wish to do so as quickly and accurately as possible. These goals are, on the face of it, diametrically opposed. However, the pervasiveness of roundoff error, especially its tendency to build up over the course of a long calculation, means that doing a calculation in as few steps as possible may also be important for accuracy.
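As a small illustration of roundoff error accumulating over repeated operations (not an example from these notes, just a sketch in Python): the decimal 0.1 has no exact binary representation, so adding it to itself ten times does not give exactly 1.

```python
# Roundoff error: 0.1 is stored inexactly in binary floating point,
# so repeated addition drifts slightly away from the true value.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False
```

The error here is tiny, but in a long calculation many such small errors can combine, which is why minimizing the number of operations can improve accuracy as well as speed.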
In this chapter, we look at how computers represent numbers, and introduce the basic terminology for talking about error.

1.2. Floating-point format. The numerical types represented on a computer are integers (whole numbers) and floating-point numbers (numbers with a fractional part). These are represented completely differently. Integers are represented essentially as the binary versions of what we would write down on paper, and will not be discussed here. Floating-point numbers are represented in a variant of scientific notation.

A typical format is the IEEE 64-bit format, used on many desktop computers. A floating-point number is 64 bits long. (A bit is a binary digit, representing 0 or 1.) The 64 bits are divided into several parts, as follows:

Symbol   Length (bits)   Description
s        1               sign
c        11              characteristic
f        52              mantissa

The sign is 0 for + and 1 for −. The characteristic is an 11-bit integer that will be used to form an exponent with base 2, and the mantissa is a binary fraction.
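The three fields s, c, f described above can be extracted directly from a number's bit pattern. The following Python sketch (not part of the original notes; the helper name `decompose` is our own) uses the standard `struct` module to view a 64-bit float as an integer and mask off each field:

```python
import struct

def decompose(x):
    """Split an IEEE 754 64-bit float into its (sign, characteristic, mantissa) fields."""
    # Reinterpret the 8 bytes of the double x as an unsigned 64-bit integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    s = bits >> 63                 # top bit: sign
    c = (bits >> 52) & 0x7FF       # next 11 bits: characteristic
    f = bits & ((1 << 52) - 1)     # low 52 bits: mantissa (binary fraction)
    return s, c, f

print(decompose(-1.0))  # (1, 1023, 0): sign 1, exponent 1023 - 1023 = 0, mantissa 0
```

For -1.0 the sign bit is 1, the characteristic is 1023 (which, after subtracting the bias of 1023 discussed below, gives exponent 0), and the mantissa is 0.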