Errors and Calibration
Author: John M. Cimbala, Penn State University
Latest revision: 16 January 2008

Random vs. Systematic Errors

There are two general categories of error: systematic (or bias) errors and random (or precision) errors.

o Systematic errors (also called bias errors) are consistent, repeatable errors. For example, suppose the first two millimeters of a ruler are worn off, and the user is not aware of it. Everything he or she measures will be too short by two millimeters – a systematic error.
o Random errors (also called precision errors) are caused by a lack of repeatability in the output of the measuring system. The most common sign of random errors is scatter in the measured data. For example, background electrical noise often results in small random errors in the measured output.

Systematic (or Bias) Errors

Systematic errors are consistent, repeatable errors. They arise for many reasons; here are just a few:

o calibration errors – perhaps due to nonlinearity or to errors in the calibration method.
o loading or intrusion errors – the sensor may actually change the very thing it is trying to measure.
o spatial errors – these arise when a quantity varies in space but is measured at only one location (e.g., temperature in a room – the top of a room is usually warmer than the bottom).
o human errors – these can arise if a person consistently reads a scale on the low side, for example.
o defective equipment errors – these arise if the instrument consistently reads too high or too low due to some internal problem or damage (such as the worn ruler example above).

The systematic error of a large set of measurements is defined as the average of the measured values minus the true value.

Random (or Precision) Errors

Random errors are unrepeatable, inconsistent errors that result in scatter in the output data. The random error of one data point is defined as the reading minus the average of the readings.
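The two definitions above can be sketched in a short Python snippet (not part of the original notes; the readings and true value below are made-up illustration data):

```python
def systematic_error(readings, true_value):
    """Systematic (bias) error: average of the measured values minus the true value."""
    mean = sum(readings) / len(readings)
    return mean - true_value

def random_errors(readings):
    """Random (precision) error of each data point: reading minus the average of readings."""
    mean = sum(readings) / len(readings)
    return [r - mean for r in readings]

# Hypothetical measurements of a quantity whose true value is 10.0
readings = [10.1, 10.3, 10.2, 10.4, 10.2]
print(systematic_error(readings, true_value=10.0))  # bias of the whole set
print(random_errors(readings))                      # scatter of each point about the mean
```

Note that the random errors of a data set always sum to zero by construction, since each is a deviation from the set's own mean; only the systematic error depends on knowing the true value.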
Example

Given: Five temperature readings are taken of some snow: -1.30, -1.50, -1.20, -1.60, and -1.50 °C.

To do: Find the maximum magnitude of the random error in °C.

Solution: The mean (average) of the five temperature readings is -1.42 °C. The reading that deviates most from this mean is -1.20 °C. The random (or precision) error for that data point is the reading minus the average of the readings, or -1.20 - (-1.42) = 0.22 °C. Thus, the maximum absolute value of the random error is 0.22 °C. You can verify that the magnitude of the random error for each of the other data points is less than this.
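The arithmetic in this example can be checked with a few lines of Python (a sketch, using the readings from the example above):

```python
# Five temperature readings of the snow, in degrees C (from the example)
readings = [-1.30, -1.50, -1.20, -1.60, -1.50]

mean = sum(readings) / len(readings)            # average of the readings: -1.42 deg C
max_err = max(abs(r - mean) for r in readings)  # largest |reading - mean|

print(round(mean, 2))     # -1.42
print(round(max_err, 2))  # 0.22
```

The maximum comes from the -1.20 °C reading, matching the solution above.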

This note was uploaded on 07/23/2008 for the course ME 345 taught by Professor Staff during the Spring '08 term at Pennsylvania State University, University Park.
