Describe the difference between accuracy and precision, and identify sources of error in measurement
Accuracy refers to how closely the measured value of a quantity corresponds to its "true" value.
Precision expresses the degree of reproducibility or agreement between repeated measurements.
The more measurements you make and the better the precision, the smaller the error will be.
systematic error: An inaccuracy caused by flaws in an instrument.
precision: Also called reproducibility or repeatability; the degree to which repeated measurements under unchanged conditions show the same results.
accuracy: The degree of closeness between measurements of a quantity and that quantity's actual (true) value.
Accuracy and Precision
Accuracy is how close a measurement is to the correct value for that measurement. The precision of a measurement system refers to how close the agreement is between repeated measurements (measurements repeated under the same conditions). Measurements can be both accurate and precise, accurate but not precise, precise but not accurate, or neither.
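The four combinations above can be sketched numerically: a fixed bias in simulated measurements models inaccuracy, and random scatter models imprecision. This is a minimal illustration with made-up values (the true value, bias, and spread below are assumptions, not data from any real instrument).

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

TRUE_VALUE = 10.0  # hypothetical "true" value of the quantity


def simulate(bias, spread, n=100):
    """Simulate n repeated measurements: a fixed bias models inaccuracy,
    random scatter (standard deviation `spread`) models imprecision."""
    return [TRUE_VALUE + bias + random.gauss(0, spread) for _ in range(n)]


scenarios = {
    "accurate and precise":  simulate(bias=0.0, spread=0.05),
    "accurate, not precise": simulate(bias=0.0, spread=1.0),
    "precise, not accurate": simulate(bias=2.0, spread=0.05),
    "neither":               simulate(bias=2.0, spread=1.0),
}

for name, data in scenarios.items():
    # The mean reveals accuracy (closeness to TRUE_VALUE);
    # the sample standard deviation reveals precision (scatter).
    print(f"{name:24s} mean = {statistics.mean(data):6.2f}  "
          f"spread (s) = {statistics.stdev(data):.2f}")
```

Running this shows that a biased-but-precise instrument gives a tight cluster of values centered on the wrong number, while an unbiased-but-imprecise one scatters widely around the right number.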
Precision is sometimes separated into:
Repeatability — The variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating the measurements during a short time period.
Reproducibility — The variation arising when the same measurement process is used with different instruments and operators, and over longer time periods.
All measurements are subject to error, which contributes to the uncertainty of the result. Errors can be classified as human error or technical error. Perhaps you are transferring a small volume from one tube to another and you don't quite get the full amount into the second tube because you spilled it: this is human error.
Technical error can be broken down into two categories: random error and systematic error. Random error, as the name implies, occurs unpredictably, with no recognizable pattern. Systematic error occurs when there is a problem with the instrument. For example, a scale could be improperly calibrated and read 0.5 g with nothing on it. All measurements would therefore be overestimated by 0.5 g. Unless you account for this offset, every measurement will contain this error.
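The miscalibrated-scale example shows the key property of systematic error: because it shifts every reading by the same amount, it can be corrected once the offset is known. A brief sketch (the raw readings below are hypothetical):

```python
# The scale from the text reads 0.5 g with nothing on it,
# so every reading is overestimated by 0.5 g.
ZERO_OFFSET = 0.5  # reading with an empty pan, in grams

raw_readings = [12.7, 12.8, 12.6, 12.7]  # hypothetical raw readings (g)

# Subtracting the known offset removes the systematic error;
# the random scatter between the readings remains.
corrected = [r - ZERO_OFFSET for r in raw_readings]
print(corrected)  # each value shifted down by 0.5 g
```

Note that no amount of averaging would fix this error: the mean of the raw readings is still 0.5 g too high. Averaging reduces random error only.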
How do accuracy, precision, and error relate to each other?
Random error will be smaller with an instrument that reads in finer increments and with better repeatability or reproducibility (precision).

Consider a common laboratory experiment in which you must determine the percentage of acid in a sample of vinegar by observing the volume of sodium hydroxide solution required to neutralize a given volume of the vinegar. You carry out the experiment and obtain a value. Just to be on the safe side, you repeat the procedure on another identical sample from the same bottle of vinegar. If you have actually done this in the laboratory, you will know it is highly unlikely that the second trial will yield the same result as the first. In fact, if you run a number of replicate (that is, identical in every way) trials, you will probably obtain scattered results.
As stated above, the more measurements that are taken, the closer we can get to knowing a quantity's true value. With multiple measurements (replicates), we can judge the precision of the results, and then apply simple statistics to estimate how close the mean value would be to the true value if there were no systematic error in the system. The mean deviates from the "true value" less as the number of measurements increases.
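This last point can be demonstrated with a small simulation: if measurements carry only random error, the mean of n measurements typically lands closer to the true value as n grows. The true value and spread below are assumed for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed for a reproducible illustration

TRUE_VALUE = 5.0   # hypothetical true value of the quantity
SPREAD = 0.5       # random error of a single measurement


def mean_of_n(n):
    """Mean of n simulated measurements with random (but no systematic) error."""
    return statistics.mean(TRUE_VALUE + random.gauss(0, SPREAD) for _ in range(n))


for n in (2, 10, 100, 1000):
    # Repeat the whole experiment many times and record how far the
    # mean of n measurements typically lands from the true value.
    typical_dev = statistics.mean(abs(mean_of_n(n) - TRUE_VALUE)
                                  for _ in range(500))
    print(f"n = {n:4d}  typical deviation of the mean = {typical_dev:.3f}")
```

The printed deviations shrink as n increases (in proportion to 1/√n for purely random error), which is exactly the sense in which more replicates bring the mean closer to the true value.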
Boundless vets and curates high-quality, openly licensed content from around the Internet.