Error Analysis Notes
Introduction
Physics is an experimental science. The physicist uses mathematics, but the
models he or she constructs aren't abstract fantasies; they must describe the real
world. All physical theories are inspired by experimental observations of nature
and must ultimately agree with these observations to survive. The interplay
between theory and experiment is the essence of modern science.
The task of constructing a theory, inherently difficult, is compounded by the fact
that the observations are never perfect. Because instruments and experimenters
depart from the ideal, measurements are always slightly uncertain. This
uncertainty appears as variations between successive measurements of the same
quantity.
With better instrumentation and greater care, these fluctuations can be reduced,
but they can never be completely eliminated.¹ One can, however, estimate how
large the uncertainty is likely to be and to what extent one can trust the
measurements. Over the years powerful statistical methods have been devised
to do this. In an imperfect world we can expect no more; and once we
understand the limitations of an experiment we can, if we feel the need, try to
improve on it.
In the laboratory you, like any scientist, will have to decide just how far you can
trust your observations. Here is a brief description of experimental uncertainties
and their analysis. Your TA will indicate how far you need to carry the analysis
in each experiment.
¹ In certain instances, there is a fundamental limit imposed on this process of improvement. Quantum
mechanics, which deals with very small systems such as atoms and nuclei, suggests that there is an inherent
uncertainty in nature that even perfect instrumentation cannot overcome. Luckily, most experiments do not
confront this ultimate barrier.
6.2 Definitions
The difference between the observed value of a quantity and the true value is
called the error of the measurement. The term is misleading; it does not
necessarily imply that the experimenter has made a mistake. If that were the case,
the experimenter would simply correct it! Uncertainty is a better term, but it is
not as commonly used; we'll use the two interchangeably here.
Errors may be conveniently classified into three types:
A. Illegitimate Errors: These are the true mistakes or blunders either in
measurement or in computation. Reading the wrong scale or misplacing a
decimal point in multiplying are examples. These errors usually stand out if
the data is examined critically. You can correct such errors when you find
them by eliminating their cause and possibly by repeating the measurement.
If it's too late for that, you can at least guess where the mistake is likely to lie.
B. Systematic Errors: These errors arise from faulty calibration of equipment,
biased observers, or other undetected disturbances that cause the measured
values to deviate from the true value, always in the same direction. The
bathroom scale that reads 3 lbs before anyone steps on it exhibits a
systematic error. These errors cannot be adequately treated by statistical
methods. They must be estimated and, if possible, corrected from an
understanding of the experimental techniques used.
Systematic errors affect the accuracy of the experiment; that is, how closely the
measurements agree with the true value.
C. Random Errors: These are the unpredictable fluctuations about the average
or true value that cannot be reduced except by redesign of the experiment.
These errors must be tolerated although we can estimate their size. Random
errors affect the precision of an experiment; that is, how closely the results of
successive measurements are grouped.
Note that accuracy and precision are different. See if you can think of an example
of something that is precise but not accurate, and something that is accurate but
not precise.
The concepts of error analysis that are introduced below are strictly applicable
only to random errors.
A reasonable guess for the best value is the average, 1.1 cm, and the uncertainty
in each measurement seems to be about 0.2 cm.
1. Best Value: Statistics does show that the average, or mean, of a set of
measurements provides the best estimate of the true value. This is simply
the sum of the measurements divided by the number taken:
$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i .$$
2. Uncertainty: The spread of the measurements about the mean is characterized
by the standard deviation:

$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2} .$$

For the four measurements 5.5, 5.3, 4.9, and 4.7 cm, whose mean is 5.1 cm,

$$\sigma = \sqrt{\frac{(5.5-5.1)^2 + (5.3-5.1)^2 + (4.9-5.1)^2 + (4.7-5.1)^2}{4-1}} = 0.37\ \text{cm},$$

and the uncertainty in the mean itself is smaller:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}} = \frac{0.37}{\sqrt{4}} = 0.18\ \text{cm}.$$

The final result for the length of the line is then $5.1 \pm 0.2$ cm.
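The average, standard deviation, and uncertainty in the mean are easy to compute by hand or by machine. Here is a minimal sketch in Python using the four measurements from the example above:

```python
import math

# The four length measurements from the example above (cm)
x = [5.5, 5.3, 4.9, 4.7]
N = len(x)

# Best value: the mean
mean = sum(x) / N                                            # 5.1 cm

# Standard deviation of the measurements (note the N - 1 divisor)
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (N - 1))   # about 0.37 cm

# Uncertainty in the mean
s_mean = s / math.sqrt(N)                                    # about 0.18 cm

print(f"{mean:.1f} +/- {s_mean:.1f} cm")                     # prints 5.1 +/- 0.2 cm
```

Running it reproduces the quoted result, 5.1 ± 0.2 cm.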
Significant Figures
There is a natural shorthand for the estimate of errors and their propagation in
the use of a definite number of figures to represent a measurement. These
figures are called significant figures. The measurement 15.23 cm, for example, has
four significant figures; it is uncertain to a few one-hundredths of a centimeter
(the exact uncertainty is deliberately left vague). It is different from 15 cm, 15.2
cm, 15.230 cm, and 15.2300 cm; those numbers might all represent the same
measurement, but the uncertainty is different in each case.
For multiplication and division, the largest fractional error will dominate. It occurs in
the number with the fewest significant figures. Hence the result can have no
more significant digits than the least accurate of the factors. As an example:
$$\frac{15.23 \times 471}{19} = 380 .$$

The answer is not 377 or 377.5437. We could be even more definite by writing
$3.8 \times 10^2$.
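Rounding a result to a given number of significant figures can be automated. A small sketch (the helper `round_sig` is our own, not a library function):

```python
import math

def round_sig(value, n):
    """Round value to n significant figures."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))
    return round(value, n - 1 - exponent)

result = 15.23 * 471 / 19      # 377.5436..., but 19 has only two significant figures
print(round_sig(result, 2))    # prints 380.0, i.e. 3.8 x 10^2
```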
Propagation of Errors
In the laboratory we seldom measure directly the quantities or results of interest.
Instead we must measure others from which the results are derived. For
example, to measure the volume of a rectangular solid, we measure the three
sides and multiply these values. In the course of such a calculation, the errors in
the measured quantities propagate through the computation to affect the
result.
The basic equation which describes the propagation of errors is best expressed in
terms of partial derivatives. Suppose that we require a quantity P that is a
function of several measured quantities: $P = f(a, b, c, \ldots)$. The uncertainty in
$P$ is then given by

$$\sigma_P^2 = \left(\frac{\partial f}{\partial a}\right)^2 \sigma_a^2
             + \left(\frac{\partial f}{\partial b}\right)^2 \sigma_b^2
             + \left(\frac{\partial f}{\partial c}\right)^2 \sigma_c^2 + \cdots \qquad (1)$$
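Equation (1) can also be checked numerically. The sketch below is illustrative only: the function, measured values, and uncertainties are made up, and the partial derivatives are estimated by central finite differences rather than computed analytically:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """Apply Eq. (1): add (partial f / partial x_i)^2 * sigma_i^2 in quadrature,
    estimating each partial derivative with a central difference."""
    variance = 0.0
    for i, sigma in enumerate(sigmas):
        hi = list(values)
        lo = list(values)
        hi[i] += h
        lo[i] -= h
        dfdx = (f(*hi) - f(*lo)) / (2 * h)
        variance += (dfdx * sigma) ** 2
    return math.sqrt(variance)

# Hypothetical example: volume of a rectangular solid, V = a*b*c
sigma_V = propagate(lambda a, b, c: a * b * c,
                    [2.0, 3.0, 4.0],      # measured sides
                    [0.1, 0.1, 0.1])      # their uncertainties
```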
Consult the following references if you want to see how this result is obtained:
P. R. Bevington, Data Reduction and Error Analysis for the Physical Sciences
H. Young, Statistical Treatment of Experimental Data
Y. Beers, Introduction to the Theory of Error
The partial derivative of $f$ with respect to $a$ is written $\partial f/\partial a$. It means: take the
derivative of $f$ with respect to $a$, keeping all the other variables $b, c, \ldots$
constant.
We will show how this general formula is applied to some specific cases of error
propagation.
A. Addition: $y = a + b$

The estimated errors on the measured quantities $a$ and $b$ are $\sigma_a$ and $\sigma_b$. What
is the uncertainty in $y$? Evaluate:

$$\frac{\partial y}{\partial a} = 1; \qquad \frac{\partial y}{\partial b} = 1,$$

so that

$$\sigma_y^2 = \sigma_a^2 + \sigma_b^2 .$$
B. Subtraction: $y = a - b$

$$\sigma_y^2 = \sigma_a^2 + \sigma_b^2 .$$
C. Multiplication: $y = ab$

Evaluate:

$$\frac{\partial y}{\partial a} = b \ \ (b \text{ kept constant}); \qquad
  \frac{\partial y}{\partial b} = a \ \ (a \text{ kept constant}),$$

so that

$$\sigma_y^2 = b^2 \sigma_a^2 + a^2 \sigma_b^2 .$$

Dividing both sides by $y^2 = a^2 b^2$ shows that the fractional errors add in
quadrature:

$$\left(\frac{\sigma_y}{y}\right)^2 = \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2 .$$
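A quick numerical check that the two forms of the multiplication rule agree, with made-up values for $a$ and $b$ and their uncertainties:

```python
import math

# Hypothetical measured values and their uncertainties
a, sigma_a = 4.0, 0.2
b, sigma_b = 2.5, 0.1
y = a * b

# Absolute form: sigma_y^2 = b^2 sigma_a^2 + a^2 sigma_b^2
sigma_y = math.sqrt(b**2 * sigma_a**2 + a**2 * sigma_b**2)

# Fractional form: fractional errors added in quadrature
frac = math.sqrt((sigma_a / a)**2 + (sigma_b / b)**2)

# The two forms agree: sigma_y / y equals frac
```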
D. Division: $y = a/b$

We have:

$$\frac{\partial y}{\partial a} = \frac{1}{b}; \qquad
  \frac{\partial y}{\partial b} = -\frac{a}{b^2},$$

so that:

$$\sigma_y^2 = \frac{1}{b^2}\,\sigma_a^2 + \frac{a^2}{b^4}\,\sigma_b^2 .$$

By dividing each term by $y^2 = a^2/b^2$, we see that, again, the fractional errors are
added in quadrature:

$$\left(\frac{\sigma_y}{y}\right)^2 = \left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2 .$$
E. Power Law: $y = a^n$

We find:

$$\frac{\partial y}{\partial a} = n a^{n-1},$$

so that:

$$\sigma_y = n a^{n-1}\,\sigma_a, \qquad \text{or} \qquad
  \frac{\sigma_y}{y} = n\,\frac{\sigma_a}{a} .$$
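The power-law rule can be checked with hypothetical numbers; note how the fractional error is multiplied by the exponent $n$:

```python
# Hypothetical measurement: a = 3.0 with uncertainty 0.05, raised to the 4th power
a, sigma_a = 3.0, 0.05
n = 4

y = a**n                             # 81.0
sigma_y = n * a**(n - 1) * sigma_a   # 4 * 27 * 0.05 = 5.4

# Fractional form: sigma_y / y equals n * sigma_a / a
frac = sigma_y / y
```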
As an example, suppose $g = \dfrac{2x}{t^2}$. Then

$$\frac{\partial g}{\partial x} = \frac{2}{t^2} \qquad \text{and} \qquad
  \frac{\partial g}{\partial t} = -\frac{4x}{t^3},$$

and therefore:

$$\sigma_g^2 = \left(\frac{2}{t^2}\right)^2 \sigma_x^2 + \left(\frac{4x}{t^3}\right)^2 \sigma_t^2 .$$
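As a numerical illustration of this last result, here is a sketch with made-up free-fall data (the values of x, t, and their uncertainties are hypothetical):

```python
import math

# Hypothetical data: distance x (m) and fall time t (s), with uncertainties
x, sigma_x = 1.20, 0.005
t, sigma_t = 0.495, 0.002

g = 2 * x / t**2
sigma_g = math.sqrt((2 / t**2)**2 * sigma_x**2
                    + (4 * x / t**3)**2 * sigma_t**2)

print(f"g = {g:.2f} +/- {sigma_g:.2f} m/s^2")   # g = 9.79 +/- 0.09 m/s^2
```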