ERROR ANALYSIS FOR PHYS375
by
C. C. Chang, University of Maryland

1 Definition

Error: In a scientific measurement, an error means the inevitable uncertainty in the measured result. As such, errors are not mistakes; you cannot avoid them by being careful. The best you can hope to do is to ensure that errors are as small as reasonably possible, and to have a reliable estimate of how large they are.

Discrepancy: If two measurements of the same quantity disagree, then we say that there is a discrepancy.

Note: One of the measurements could be the so-called accepted value (a value based on previous measurements and taken as the "true" value) or a theoretically predicted value.

2 Two Types of Errors

Random or Statistical Errors: Experimental uncertainties that can be revealed by repeating the measurements are called random or statistical errors.

Systematic Errors: Experimental uncertainties that cannot be revealed by repeating the measurements are called systematic errors.

As an example, let's measure the well-defined width of a table top with a ruler. Uncertainty caused by needing to interpolate between scale markings is a random error, because when interpolating one is just as likely to overestimate as to underestimate. On the other hand, uncertainty caused by distortion of the ruler is a systematic error: if the ruler has stretched, we always underestimate; if the ruler has shrunk, we always overestimate.

The treatment of random errors is quite different from that of systematic errors. The statistical methods discussed below give a reliable estimate of the random uncertainties and, as we shall see, provide a well-defined procedure for reducing them. Systematic errors, on the other hand, have to be anticipated by the experimenter, who must make sure that all systematic errors are much less than the required precision. Doing so will involve, for example, checking the instruments against accepted standards (or calibrated ones) and correcting them, or buying better instruments if necessary.

3 The Mean and Standard Deviation

Suppose we need to measure a quantity x and have identified all sources of
systematic errors and reduced them to a negligible level. Since all remaining sources of uncertainty are random, we should be able to detect them by repeating the measurement several times. Suppose we make N (where N \to \infty) measurements of the quantity x (all using the same equipment and procedures) and find the N values

    x_1, x_2, \ldots, x_N

The best estimate for x is the average of x_1, x_2, \ldots, x_N, i.e.,

    \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i

We will also define the following quantities:

    \sigma_x^2 = \text{population variance} = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2

    \sigma_x = \text{population standard deviation, or rms (root-mean-square) deviation} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2}

For finite N, it is more appropriate to define:

    s^2 = \text{sample variance} = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2

    s = \text{sample standard deviation} = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2}

The factor (N - 1) is used in the sample variance and standard deviation instead of N because we have to use the data themselves to find the mean \bar{x}; in a certain sense, this leaves only (N - 1) independent measured values. For large N, it makes no difference whether N or (N - 1) is used. From now on, we will use \sigma_x to mean the sample standard deviation.

As an example, let's assume we have x_i = 71, 72, 72, 73, 71. Then

    \bar{x} = \frac{71 + 72 + 72 + 73 + 71}{5} = 71.8

    \sigma_x^2 = \frac{(71 - 71.8)^2 + \cdots + (71 - 71.8)^2}{5} = \frac{2.8}{5} = 0.56

    \sigma_x \approx 0.7

    s^2 = \frac{2.8}{4} = 0.7 \quad \Rightarrow \text{ write it as } \sigma_x^2

    s \approx 0.8 \quad \Rightarrow \text{ write it as } \sigma_x
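The numbers in this example can be checked with a few lines of Python. This is an illustrative sketch, not part of the original notes; Python's standard statistics module uses exactly the 1/N and 1/(N-1) definitions given above.

```python
import statistics

# Data from the worked example above
data = [71, 72, 72, 73, 71]

mean = statistics.mean(data)          # mean = (1/N) sum of x_i
pop_var = statistics.pvariance(data)  # population variance (1/N factor)
samp_var = statistics.variance(data)  # sample variance (1/(N-1) factor)
samp_std = statistics.stdev(data)     # sample standard deviation s

print(mean, pop_var, samp_var, round(samp_std, 1))
```

Running this reproduces the values in the text: mean 71.8, population variance 0.56, sample variance 0.7, and sample standard deviation 0.8.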
4 Meaning of the Standard Deviation: the Uncertainty in a Single Measurement

If we were to plot the above result as a histogram, we would have a distribution of the 5 values. [Histogram not reproduced.] Instead of just making 5 measurements, if we were to make many measurements of x, we would get a limiting distribution. [Figure not reproduced.] This limiting distribution can be represented by the so-called normal or Gaussian distribution:

    f_{X,\sigma_x}(x) = \frac{1}{\sigma_x \sqrt{2\pi}} e^{-(x - X)^2 / (2\sigma_x^2)}

Note that

    \int_{X-\sigma_x}^{X+\sigma_x} f_{X,\sigma_x}(x)\,dx \approx 0.68 \quad \Rightarrow \quad 68\%

Also

    \int_{X-2\sigma_x}^{X+2\sigma_x} f_{X,\sigma_x}(x)\,dx \approx 0.954 \quad \Rightarrow \quad 95.4\%
X—ZG'E Let’s suppose that we made N measurements of a: and obtained the values m1,ar:2,    ,mN. Let’s compute a7: and 0'35. From the discussion above we can conclude
that If our measurements are normally distributed and if we were to continue
the measurement of a: many more times (after making N measurements
and using the same equipment), then about 68 % of our new measure
ments would lie within a distance 0'3, on either side of 5:; that is, 68 %
of our new measurements would lie in the range 5: :: 0'35. We can rephrase the above conclusion as follows: Error Analysis for PHYS375 4 Suppose, as before, that we obtain the values m1,ar:2,    ,mN and com
pute a7: and 0'35. If we then make one more measurement (using the same
equipment), there is a 68 % probability that the new measurement will
be within 0'35 of 5:. Now, if the original number of measurements N was
large, then 5: should be a very reliable estimate for the actual value
of 1:. Therefore, we can say that there is a 68 % probability that a
single measurement (using the same equipment) will be within 0'35 of
the actual value. 5 The Standard Deviation of the Mean If m1,ar:2,    ,mN are the results of N measurements of the same quantity 1:,
then, as we have discussed earlier, our best estimate for the quantity 1: is their mean,
5:. We have also discussed that the standard deviation 0'35 characterizes the aver
age uncertainty of the separate measurements m1,ar:2,    ,mN. However, our answer
mbest : 5: represents a judicious combination of all N measurements, and there is
every reason to believe that 5: will be more reliable than any one of the measurements
(mi) considered separately. As we will show later, the uncertainty in the ﬁnal answer
mbest : 5: turns out to be the standard deviation 0'35 divided by \/N. This quantity is called the standard deviation of the mean, and is denoted by 0'55: We can now state our ﬁnal answer for the value of 1: (based on the N measurements of m) as
0x W (Value of m) = a7: :: 6 More on Standard Deviation and Standard Deviation of the Mean Let’s assume we have made many sets of N measurements of a: with the same
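A short simulation makes the sqrt(N) behavior concrete. This is an illustrative sketch, not part of the original notes; the true value 10.0 and width 2.0 are made-up numbers.

```python
import random
import statistics

random.seed(1)
N = 25
sigma = 2.0

# Make many sets of N normally distributed measurements and record each set's mean
means = [statistics.mean(random.gauss(10.0, sigma) for _ in range(N))
         for _ in range(4000)]

spread = statistics.stdev(means)   # observed spread of the sample means
predicted = sigma / N ** 0.5       # sigma_x / sqrt(N) = 0.4
print(spread, predicted)
```

The observed spread of the means comes out close to the predicted 0.4, i.e. a factor of sqrt(25) = 5 smaller than the single-measurement width.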
equipment: m1,m2,uuu,mN, m1,m2,uuu,mN,uu
h\/_’_\/_’ In other words, we have made many determinations of the average of N measurements.
Each set of the N measurements will be normally distributed about the true value
X with width 0'35, shown as dashed curve below. The average of each set of the
Nmeasurements will also be normally distributed about X, but with width 0'55 : ax/x/N, shown as solid curve below. Error Analysis for PHYS375 5 7 Weighted Averages It often happens that a physical quantity is measured several times, perhaps in
several separate laboratories (or by different students), and the question arises how these measurements can be combined to give a single best estimate. Suppose, for example, that two students, A and B, measure a quantity x and obtain these results:

    Student A: x = x_A \pm \sigma_A
    Student B: x = x_B \pm \sigma_B

Each of these results will probably itself be the result of several measurements, in which case x_A will be the mean of all of A's measurements and \sigma_A the standard deviation of that mean (and similarly for x_B and \sigma_B). The question is how best to combine x_A and x_B into a single best estimate for x. The answer is to use the principle of maximum likelihood, as follows.

Let's assume that both measurements are "correct" (more discussion on this later) and that they are governed by the Gaussian distribution. Let's further assume that the unknown true value of x is X. Then the probability of A obtaining the particular value x_A is

    P_X(x_A) \propto \frac{1}{\sigma_A} e^{-(x_A - X)^2 / (2\sigma_A^2)}

Similarly, for B:

    P_X(x_B) \propto \frac{1}{\sigma_B} e^{-(x_B - X)^2 / (2\sigma_B^2)}

The probability that A finds the value x_A and B the value x_B is just the product of the two probabilities:

    P_X(x_A, x_B) = P_X(x_A) P_X(x_B) \propto \frac{1}{\sigma_A \sigma_B} e^{-\chi^2 / 2}

where

    \chi^2 = \frac{(x_A - X)^2}{\sigma_A^2} + \frac{(x_B - X)^2}{\sigma_B^2}

The best estimate for X is the value that maximizes the probability, i.e., minimizes \chi^2:

    \frac{\partial \chi^2}{\partial X} = \frac{2(x_A - X)(-1)}{\sigma_A^2} + \frac{2(x_B - X)(-1)}{\sigma_B^2} = 0

    \Rightarrow X = x_\text{best} = \left( \frac{x_A}{\sigma_A^2} + \frac{x_B}{\sigma_B^2} \right) \Bigg/ \left( \frac{1}{\sigma_A^2} + \frac{1}{\sigma_B^2} \right)

Defining the weights as

    w_A = \frac{1}{\sigma_A^2}, \quad w_B = \frac{1}{\sigma_B^2}

we obtain

    x_\text{best} = \frac{w_A x_A + w_B x_B}{w_A + w_B}

This analysis can be generalized to combine several measurements of the same
quantity. Suppose we have N separate measurements of a quantity x,

    x_1 \pm \sigma_1, \quad x_2 \pm \sigma_2, \quad \ldots, \quad x_N \pm \sigma_N

Then

    x_\text{best} = \frac{\sum_{i=1}^{N} w_i x_i}{\sum_{i=1}^{N} w_i}, \quad \text{where} \quad w_i = \frac{1}{\sigma_i^2}

Note that the larger the error \sigma_i, the smaller the contribution of x_i to the mean. It can be shown that

    \sigma_{x_\text{best}} = \left( \sum_{i=1}^{N} w_i \right)^{-1/2}

A special case: if all the \sigma_i are equal, i.e., \sigma_1 = \sigma_2 = \cdots = \sigma_N = \sigma, then

    x_\text{best} = \frac{\frac{1}{\sigma^2} \sum_{i=1}^{N} x_i}{\frac{N}{\sigma^2}} = \frac{1}{N} \sum_{i=1}^{N} x_i = \bar{x}

and

    \sigma_{x_\text{best}} = \left( \frac{N}{\sigma^2} \right)^{-1/2} = \frac{\sigma}{\sqrt{N}}

This implies that if a quantity is measured N times, the error will be improved over the error of a single measurement by a factor of \sqrt{N}. This is what we learned when we discussed the standard deviation of the mean earlier.
8 Consistency of the Data

After calculating the weighted average and its error, we can then calculate

    \chi^2 = \sum_{i=1}^{N} w_i (x_i - x_\text{best})^2 = \sum_{i=1}^{N} \frac{(x_i - x_\text{best})^2}{\sigma_i^2}

and compare it with N - 1, which is the expectation value of \chi^2 if the measurements are from a Gaussian distribution. We have the following three cases:

• If \chi^2/(N - 1) is less than or equal to 1 and there are no known problems with the data, we say that the data are consistent and should accept the results.

• If \chi^2/(N - 1) is greater than 1, but not greatly so, we can still accept the weighted average, but then we need to increase the error \sigma_{x_\text{best}} by a scale factor defined as s = [\chi^2/(N - 1)]^{1/2}.

• If \chi^2/(N - 1) is very large, we say that the data are inconsistent and should
suspect that something has gone wrong in at least one of the measurements. In this case, we should examine all the measurements carefully to see whether some (or all) of them might be subject to unnoticed systematic errors (which could make the total errors larger than quoted). We may choose not to use the weighted average at all. Alternatively, we may quote the weighted average, but then make an educated guess of the error. For example, one could use the standard deviation

    \sigma_{x_\text{best}} = \sqrt{ \frac{1}{N-1} \sum_{i=1}^{N} (x_i - x_\text{best})^2 }

as the error for each measurement instead of the original individual \sigma_i, and use \sigma_{x_\text{best}}/\sqrt{N} as the error in the weighted mean, instead of the weighted error

    \sigma_{x_\text{best}} = \left( \sum_{i=1}^{N} w_i \right)^{-1/2}
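The consistency test can be sketched as follows. This is an illustrative example with made-up data, not from the original notes:

```python
def chi2_consistency(values, sigmas):
    """Weighted average plus the chi^2/(N-1) consistency ratio described above."""
    weights = [1.0 / s**2 for s in sigmas]
    x_best = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    chi2 = sum(w * (x - x_best)**2 for w, x in zip(weights, values))
    return x_best, chi2 / (len(values) - 1)

# Three made-up measurements of the same quantity, each with sigma = 0.2
x_best, ratio = chi2_consistency([10.1, 9.9, 10.3], [0.2, 0.2, 0.2])
print(round(x_best, 2), round(ratio, 2))
```

A ratio near 1 (as here) signals consistent data; a ratio much larger than 1 would suggest an unnoticed systematic error or underestimated uncertainties.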
9 Propagation of Errors

Suppose that, in order to find a value for the function q(x, y), we measure the two quantities x and y several times, obtaining N pairs of data (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N). From the N measurements x_1, x_2, \ldots, x_N we can compute the mean \bar{x} and standard deviation \sigma_x in the usual way; similarly, from y_1, y_2, \ldots, y_N we can compute \bar{y} and \sigma_y. Next, using the N pairs of measurements, we can compute N values of the quantity of interest:

    q_i = q(x_i, y_i), \quad i = 1, 2, \ldots, N

Given q_1, q_2, \ldots, q_N, we can now calculate their mean \bar{q}, which we assume gives our best estimate for q, and their standard deviation \sigma_q, which is our measure of the random uncertainty in the value of q.

We will assume, as usual, that all our uncertainties are small, and hence that all the x_1, x_2, \ldots, x_N are close to \bar{x} and all the y_1, y_2, \ldots, y_N are close to \bar{y}. We can then use a Taylor series expansion to make the approximation

    q_i = q(x_i, y_i) \approx q(\bar{x}, \bar{y}) + \frac{\partial q}{\partial x}(x_i - \bar{x}) + \frac{\partial q}{\partial y}(y_i - \bar{y})

And we have

    \bar{q} = \frac{1}{N} \sum_{i=1}^{N} q_i
            = \frac{1}{N} \sum_{i=1}^{N} \left[ q(\bar{x}, \bar{y}) + \frac{\partial q}{\partial x}(x_i - \bar{x}) + \frac{\partial q}{\partial y}(y_i - \bar{y}) \right]
            = q(\bar{x}, \bar{y}) + \frac{\partial q}{\partial x} \underbrace{\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})}_{0} + \frac{\partial q}{\partial y} \underbrace{\frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})}_{0}
            = q(\bar{x}, \bar{y})

This means that, to find the mean \bar{q}, we have only to calculate the function q(x, y) at the
point x = \bar{x} and y = \bar{y}.

The standard deviation of the N values q_1, q_2, \ldots, q_N is given by

    \sigma_q^2 = \frac{1}{N} \sum_{i=1}^{N} (q_i - \bar{q})^2
               = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{\partial q}{\partial x}(x_i - \bar{x}) + \frac{\partial q}{\partial y}(y_i - \bar{y}) \right]^2
               = \left( \frac{\partial q}{\partial x} \right)^2 \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 + \left( \frac{\partial q}{\partial y} \right)^2 \frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})^2 + 2 \frac{\partial q}{\partial x} \frac{\partial q}{\partial y} \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})

The first two terms are just \sigma_x^2 and \sigma_y^2. Let's define the so-called covariance of x and y as

    \sigma_{xy} = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})

Then we have

    \sigma_q^2 = \left( \frac{\partial q}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial q}{\partial y} \right)^2 \sigma_y^2 + 2 \frac{\partial q}{\partial x} \frac{\partial q}{\partial y} \sigma_{xy}

(Note that we have dropped the subscripts \bar{x} and \bar{y} on the derivatives.)

If x and y are independent, then \sigma_{xy} = 0. This is because, for a given value of y_i, the quantity (x_i - \bar{x}) is just as likely to be negative as it is to be positive (and this is true for any given value of y_i). Thus, after many measurements, the positive and negative terms in \sigma_{xy} should nearly balance. We then have

    \sigma_q^2 = \left( \frac{\partial q}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial q}{\partial y} \right)^2 \sigma_y^2

or

    \sigma_q = \sqrt{ \left( \frac{\partial q}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial q}{\partial y} \right)^2 \sigma_y^2 }

If there are more than two variables, then we have

    \sigma_q^2 = \sum_i \left( \frac{\partial q}{\partial x_i} \right)^2 \sigma_{x_i}^2
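The independent-variable formula can be applied numerically by approximating the partial derivatives with finite differences. This sketch (not part of the original notes; the function and numbers are hypothetical) propagates errors through an arbitrary q(x, y):

```python
def propagate(q, x, sx, y, sy, h=1e-6):
    """sigma_q for independent x, y using
    sigma_q**2 = (dq/dx)**2 * sx**2 + (dq/dy)**2 * sy**2,
    with central finite-difference estimates of the derivatives."""
    dq_dx = (q(x + h, y) - q(x - h, y)) / (2 * h)
    dq_dy = (q(x, y + h) - q(x, y - h)) / (2 * h)
    return ((dq_dx * sx) ** 2 + (dq_dy * sy) ** 2) ** 0.5

# Example: q = x*y with x = 3 +/- 0.1 and y = 4 +/- 0.2 (hypothetical numbers)
sigma_q = propagate(lambda x, y: x * y, 3.0, 0.1, 4.0, 0.2)
print(round(sigma_q, 3))
```

For q = xy this agrees with the analytic result sqrt((y*sx)^2 + (x*sy)^2) = sqrt(0.52), about 0.721.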
10 Specific Formulas

10.1 Sums and Differences

Let q = ax \pm by. Then

    \frac{\partial q}{\partial x} = a, \quad \frac{\partial q}{\partial y} = \pm b

    \sigma_q = \sqrt{a^2 \sigma_x^2 + b^2 \sigma_y^2} \quad \leftarrow \text{added in quadrature}

10.2 Products and Quotients

• For products: let q = axy. Then

    \frac{\partial q}{\partial x} = ay, \quad \frac{\partial q}{\partial y} = ax

    \sigma_q^2 = (ay)^2 \sigma_x^2 + (ax)^2 \sigma_y^2 = \frac{q^2}{a^2 x^2 y^2} \left( a^2 y^2 \sigma_x^2 + a^2 x^2 \sigma_y^2 \right)

    \Rightarrow \left( \frac{\sigma_q}{q} \right)^2 = \left( \frac{\sigma_x}{x} \right)^2 + \left( \frac{\sigma_y}{y} \right)^2

⇒ the fractional errors are added in quadrature.

• For quotients: let q = ax/y. Then

    \frac{\partial q}{\partial x} = \frac{a}{y}, \quad \frac{\partial q}{\partial y} = -\frac{ax}{y^2}

    \sigma_q^2 = \frac{a^2}{y^2} \sigma_x^2 + \frac{a^2 x^2}{y^4} \sigma_y^2 = q^2 \left[ \left( \frac{\sigma_x}{x} \right)^2 + \left( \frac{\sigma_y}{y} \right)^2 \right]

    \Rightarrow \left( \frac{\sigma_q}{q} \right)^2 = \left( \frac{\sigma_x}{x} \right)^2 + \left( \frac{\sigma_y}{y} \right)^2

⇒ again, the fractional errors are added in quadrature.
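As a quick numerical check (a sketch, not from the original notes, with hypothetical values), a Monte Carlo simulation reproduces the quadrature rule for a product:

```python
import random
import statistics

random.seed(7)

# x = 10 +/- 0.1 (1%), y = 20 +/- 0.4 (2%); predicted fractional error of q = x*y:
predicted = (0.01**2 + 0.02**2) ** 0.5   # quadrature sum of fractional errors

# Simulate many measurement pairs and look at the spread of q = x*y
qs = [random.gauss(10, 0.1) * random.gauss(20, 0.4) for _ in range(20000)]
observed = statistics.stdev(qs) / statistics.mean(qs)
print(round(predicted, 4), round(observed, 4))
```

The observed fractional spread comes out close to the predicted 2.24%.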
10.3 Powers

Let q = a x^n. Then

    \frac{\partial q}{\partial x} = a n x^{n-1} = \frac{nq}{x}

    \Rightarrow \sigma_q = \left| \frac{nq}{x} \right| \sigma_x \quad \Rightarrow \quad \frac{\sigma_q}{q} = n \frac{\sigma_x}{x}

⇒ the fractional error in x is increased by a factor n.

10.4 Exponentials

Let q = a e^{\pm bx}. Then

    \frac{\partial q}{\partial x} = \pm a b\, e^{\pm bx} = \pm bq

    \sigma_q = \sqrt{(\pm bq)^2 \sigma_x^2} = |bq| \sigma_x \quad \Rightarrow \quad \frac{\sigma_q}{q} = b\, \sigma_x

10.5 Logarithm

Let q = a \ln(\pm bx). Then

    \frac{\partial q}{\partial x} = a \frac{\pm b}{\pm bx} = \frac{a}{x}

    \sigma_q = \frac{a}{x} \sigma_x
10.6 Three Examples

1. Measurement of g, the acceleration of gravity, using a simple pendulum. We have

    T = 2\pi \sqrt{\frac{l}{g}} \quad \Rightarrow \quad g = \frac{4\pi^2 l}{T^2}

Using the formulas given above, and noting that

    \frac{\sigma_{T^2}}{T^2} = 2 \frac{\sigma_T}{T}

we have

    \frac{\sigma_g}{g} = \sqrt{ \left( \frac{\sigma_l}{l} \right)^2 + \left( 2 \frac{\sigma_T}{T} \right)^2 }

Suppose

    l = 92.95 \pm 0.1 \text{ cm}
    T = 1.936 \pm 0.004 \text{ sec}

    \Rightarrow g_\text{best} = \frac{4\pi^2 \times 92.95}{(1.936)^2} = 979 \text{ cm/sec}^2

We have

    \frac{\sigma_l}{l} = \frac{0.1}{92.95} = 0.1\%, \quad \frac{\sigma_T}{T} = \frac{0.004}{1.936} = 0.2\%

    \frac{\sigma_g}{g} = \sqrt{(0.1\%)^2 + (2 \times 0.2\%)^2} = 0.4\%

    \Rightarrow \sigma_g = 0.004 \times 979 \approx 4 \text{ cm/sec}^2 \quad \Rightarrow \quad g = (979 \pm 4) \text{ cm/sec}^2

Note: There is no need to improve the accuracy in the measurement of l, since the final error is dominated by the error in T.
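The pendulum arithmetic can be sketched in Python; this is illustrative only, using the numbers of the example above:

```python
import math

l, sigma_l = 92.95, 0.1     # length in cm
T, sigma_T = 1.936, 0.004   # period in sec

g = 4 * math.pi**2 * l / T**2                     # g = 4*pi^2*l / T^2
frac = math.hypot(sigma_l / l, 2 * sigma_T / T)   # fractional errors in quadrature
sigma_g = frac * g

print(round(g), round(sigma_g))
```

This reproduces g = (979 plus-or-minus 4) cm/sec^2, and printing the two fractional terms separately confirms that the period dominates the error budget.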
2. Acceleration of a cart down a slope. [Figure of the cart on the inclined track not reproduced.] We assume

    v_2^2 = v_1^2 + 2as \quad \Rightarrow \quad a = \frac{v_2^2 - v_1^2}{2s}

where v_1 = l/t_1 and v_2 = l/t_2 are found by timing a cart of length l past two points separated by a distance s, so that

    a = \frac{l^2}{2s} \left( \frac{1}{t_2^2} - \frac{1}{t_1^2} \right)

The measured values are

    l = 5.0 \pm 0.05 \text{ cm} \ (1\%)
    s = 100.0 \pm 0.2 \text{ cm} \ (0.2\%)
    t_1 = 0.054 \pm 0.001 \text{ sec} \ (2\%)
    t_2 = 0.031 \pm 0.001 \text{ sec} \ (3\%)

To calculate \sigma_a, let's do it in steps.

(a) First, let's find \sigma(l^2/2s):

    \frac{\sigma(l^2/2s)}{l^2/2s} = \sqrt{(2 \times 1\%)^2 + (0.2\%)^2} \approx 2\%

Note that the uncertainty in s makes no appreciable contribution.

(b) Next, let's find \sigma(1/t_1^2) and \sigma(1/t_2^2):

    \frac{\sigma(1/t_1^2)}{1/t_1^2} = 2 \frac{\sigma_{t_1}}{t_1} = 2 \times 2\% = 4\% \quad \Rightarrow \quad \frac{1}{t_1^2} = (343 \pm 14) \text{ sec}^{-2}

    \frac{\sigma(1/t_2^2)}{1/t_2^2} = 2 \frac{\sigma_{t_2}}{t_2} = 2 \times 3\% = 6\% \quad \Rightarrow \quad \frac{1}{t_2^2} = (1041 \pm 62) \text{ sec}^{-2}

(c) Next, let's find \sigma(1/t_2^2 - 1/t_1^2). The absolute errors add in quadrature:

    \sqrt{14^2 + 62^2} \approx 64 \text{ sec}^{-2} \quad \Rightarrow \quad \frac{1}{t_2^2} - \frac{1}{t_1^2} = (698 \pm 64) \text{ sec}^{-2} \ (9\%)

(d) Lastly,

    \frac{\sigma_a}{a} = \sqrt{(2\%)^2 + (9\%)^2} \approx 9\%

    \Rightarrow a = \frac{5^2}{2 \times 100} \times 698 \pm 9\% = 87.3 \text{ cm/sec}^2 \pm 9\% = (87.3 \pm 8) \text{ cm/sec}^2
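The step-by-step propagation can be sketched numerically; this is illustrative only, following the same intermediate percentages as the example:

```python
import math

l, s = 5.0, 100.0        # cm
t1, t2 = 0.054, 0.031    # sec

inv_t1sq = 1 / t1**2     # about 343 sec^-2
inv_t2sq = 1 / t2**2     # about 1041 sec^-2
a = l**2 / (2 * s) * (inv_t2sq - inv_t1sq)

# 4% and 6% errors on 1/t1^2 and 1/t2^2 (twice the 2% and 3% timing errors),
# added in quadrature as absolute errors, then combined with the 2% front factor
err_diff = math.hypot(0.04 * inv_t1sq, 0.06 * inv_t2sq)
frac_a = math.hypot(0.02, err_diff / (inv_t2sq - inv_t1sq))
sigma_a = frac_a * a

print(round(a, 1), round(sigma_a))
```

Without the intermediate rounding of the hand calculation this gives a = 87.2 cm/sec^2 with an error of about 8 cm/sec^2, consistent with the result above.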
3. Refractive index using Snell's law. We have

    n = \frac{\sin \theta_1}{\sin \theta_2}

where \theta_1 (the angle of incidence) and \theta_2 (the angle of refraction) are measured.

    \frac{\sigma_n}{n} = \sqrt{ \left( \frac{\sigma_{\sin \theta_1}}{\sin \theta_1} \right)^2 + \left( \frac{\sigma_{\sin \theta_2}}{\sin \theta_2} \right)^2 }

To find \sigma_{\sin \alpha}, let's use the original formula

    \sigma_q = \left| \frac{\partial q}{\partial \alpha} \right| \sigma_\alpha \quad \text{with} \quad q = \sin \alpha \ \Rightarrow \ \frac{\partial \sin \alpha}{\partial \alpha} = \cos \alpha

And so

    \frac{\sigma_{\sin \theta_1}}{\sin \theta_1} = \frac{\cos \theta_1}{\sin \theta_1} \sigma_{\theta_1} = \cot \theta_1\, \sigma_{\theta_1}

Similarly, we have

    \frac{\sigma_{\sin \theta_2}}{\sin \theta_2} = \cot \theta_2\, \sigma_{\theta_2}

It follows that

    \frac{\sigma_n}{n} = \sqrt{ \cot^2 \theta_1\, \sigma_{\theta_1}^2 + \cot^2 \theta_2\, \sigma_{\theta_2}^2 }

Suppose we now measure the angle \theta_2 for a couple of values of \theta_1 and get the results shown in the first two columns of the table below. [Table not reproduced; only its error footnotes survive.] Let's assume that all angles are measured to an accuracy of \pm 1°, or 0.0175 radians. The results are:

    * \cot(20°) \times 0.0175 \approx 0.05 = 5\%
    † \cot(13°) \times 0.0175 \approx 0.08 = 8\%

And

    \bar{n} = \frac{ \frac{1.52}{(0.14)^2} + \frac{1.61}{(0.08)^2} }{ \frac{1}{(0.14)^2} + \frac{1}{(0.08)^2} } = 1.59

    \sigma_{n_\text{best}} = \left[ \frac{1}{(0.14)^2} + \frac{1}{(0.08)^2} \right]^{-1/2} = 0.07

Thus, (value of n) = 1.59 \pm 0.07.
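For one row of such a table, the cotangent formula can be evaluated directly. This sketch (not part of the original notes) uses theta_1 = 20° and theta_2 = 13°, the angles appearing in the footnotes above:

```python
import math

theta1 = math.radians(20.0)   # angle of incidence
theta2 = math.radians(13.0)   # angle of refraction
sigma_theta = 0.0175          # +/- 1 degree, in radians

n = math.sin(theta1) / math.sin(theta2)
frac = math.hypot(sigma_theta / math.tan(theta1),
                  sigma_theta / math.tan(theta2))   # cot = 1/tan
sigma_n = frac * n
print(round(n, 2), round(sigma_n, 2))
```

This reproduces the first measurement used in the weighted average, n = 1.52 with an error of 0.14.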
11 Least-Squares Fit to a Straight Line

It often happens that we wish to determine one characteristic y of an experiment as a function of some other quantity x; that is, we wish to find the function f such that y = f(x). Instead of making a number of measurements of the quantity y for one particular value of x (which one would do in order to determine \sigma_y), we make a series of N measurements y_i, one for each of several values of the quantity x = x_i, where i is an index that runs from 1 to N to identify the measurements.

Probably the most important experiments of this type are those where the expected relation (function) is linear, and this is the case we consider here. In other words, our data consist of pairs of measurements (x_i, y_i), and we wish to fit the data with an equation of the form

    y = a + bx

by determining the values of the coefficients a and b such that the discrepancy between our measured values y_i and the corresponding fitted values f(x_i) is minimized. The problem is to establish and to optimize the estimates of the coefficients. We will again use the principle of maximum likelihood.

If we knew the constants a and b, then, for any given value x_i (which we assume to have no uncertainty), we could compute the true value of the corresponding y_i:

    (\text{true value for } y_i) = a + b x_i

Assume that the measurement of y_i is governed by a normal (Gaussian) distribution centered on this true value, with width \sigma_{y_i}. Then the probability of obtaining the observed value y_i is

    P_{a,b}(y_i) \propto \frac{1}{\sigma_{y_i}} e^{-(y_i - a - b x_i)^2 / (2\sigma_{y_i}^2)}

where the subscripts a and b indicate that this probability depends on the unknown values of a and b. The probability of obtaining our complete set of measurements (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) is the product:

    P_{a,b}(y_1, y_2, \ldots, y_N) \propto P_{a,b}(y_1) P_{a,b}(y_2) \cdots P_{a,b}(y_N) \propto \frac{1}{\sigma_{y_1} \sigma_{y_2} \cdots \sigma_{y_N}} e^{-\chi^2/2}

where

    \chi^2 = \sum_{i=1}^{N} \frac{(y_i - a - b x_i)^2}{\sigma_{y_i}^2}

As before, the best estimates for the unknown constants a and b, based on the given measurements, are those values of a and b for which the probability P_{a,b}(y_1, y_2, \ldots, y_N) is maximum, or for which \chi^2 (the sum of squares) is minimum (that is why this method is known as least-squares fitting). To find the values of a and b which yield the minimum value of \chi^2, we differentiate \chi^2 with respect to a and b and set the derivatives to zero:

    \frac{\partial \chi^2}{\partial a} = -2 \sum_{i=1}^{N} \frac{y_i - a - b x_i}{\sigma_{y_i}^2} = 0

    \frac{\partial \chi^2}{\partial b} = -2 \sum_{i=1}^{N} \frac{(y_i - a - b x_i)\, x_i}{\sigma_{y_i}^2} = 0

These equations can be rearranged to yield a pair of simultaneous equations:

    \sum_{i=1}^{N} \frac{y_i}{\sigma_{y_i}^2} = a \sum_{i=1}^{N} \frac{1}{\sigma_{y_i}^2} + b \sum_{i=1}^{N} \frac{x_i}{\sigma_{y_i}^2}

    \sum_{i=1}^{N} \frac{x_i y_i}{\sigma_{y_i}^2} = a \sum_{i=1}^{N} \frac{x_i}{\sigma_{y_i}^2} + b \sum_{i=1}^{N} \frac{x_i^2}{\sigma_{y_i}^2}

The solutions are:

    a = \frac{1}{\Delta} \left( \sum_{i=1}^{N} \frac{x_i^2}{\sigma_{y_i}^2} \sum_{i=1}^{N} \frac{y_i}{\sigma_{y_i}^2} - \sum_{i=1}^{N} \frac{x_i}{\sigma_{y_i}^2} \sum_{i=1}^{N} \frac{x_i y_i}{\sigma_{y_i}^2} \right)

    b = \frac{1}{\Delta} \left( \sum_{i=1}^{N} \frac{1}{\sigma_{y_i}^2} \sum_{i=1}^{N} \frac{x_i y_i}{\sigma_{y_i}^2} - \sum_{i=1}^{N} \frac{x_i}{\sigma_{y_i}^2} \sum_{i=1}^{N} \frac{y_i}{\sigma_{y_i}^2} \right)

where

    \Delta = \sum_{i=1}^{N} \frac{1}{\sigma_{y_i}^2} \sum_{i=1}^{N} \frac{x_i^2}{\sigma_{y_i}^2} - \left( \sum_{i=1}^{N} \frac{x_i}{\sigma_{y_i}^2} \right)^2

We state without proof that, in the case where all the widths are equal (\sigma_{y_1} = \cdots = \sigma_{y_N} = \sigma_y), the errors in a and b are given by

    \sigma_a = \sigma_y \sqrt{ \frac{\sum_{i=1}^{N} x_i^2}{\Delta'} }, \quad \sigma_b = \sigma_y \sqrt{ \frac{N}{\Delta'} }

where

    \Delta' = N \sum_{i=1}^{N} x_i^2 - \left( \sum_{i=1}^{N} x_i \right)^2
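The closed-form solutions can be sketched in Python. This illustration (not part of the original notes) fits noiseless points on the line y = 2 + 3x, so the fit should recover a = 2 and b = 3 exactly:

```python
def fit_line(xs, ys, sigmas):
    """Weighted least-squares fit of y = a + b*x using the closed-form
    solutions derived above, with weights 1/sigma_i**2."""
    w = [1.0 / s**2 for s in sigmas]
    S   = sum(w)
    Sx  = sum(wi * x for wi, x in zip(w, xs))
    Sy  = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 + 3.0 * x for x in xs]   # exact points on y = 2 + 3x
a, b = fit_line(xs, ys, [1.0] * len(xs))
print(round(a, 6), round(b, 6))
```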
12 Uncertainty in the Measurement of y

If each of the y_i for a given x_i is measured many times, one can certainly get an idea of \sigma_{y_i} by examining the spread in their values.