0: Systematic Error
"Physical scientists... know that measurements are never perfect and thus want to know how true a given measurement is. This is a good practice, for it keeps everyone honest and prevents research reports from degenerating into fish stories."
Robert Laughlin (1998 Physics Nobel Laureate), A Different Universe, p. 10

"A hypothesis or theory is clear, decisive, and positive, but it is believed by no one but the man who created it. Experimental findings, on the other hand, are messy, inexact things, which are believed by everyone except the man who did the work."
Harlow Shapley, Through Rugged Ways to the Stars, 1969

Perhaps the dullest possible presentation of progress in physics is displayed in Figure 1: the march of improved experimental precision with time. The expected behavior is displayed in Figure 1(d): improved apparatus and better statistics (more measurements to average) result in steady uncertainty reduction, with apparent convergence to a value consistent with any earlier measurement. However, frequently (Figs. 1(a)-1(c)) the behavior shows a final value inconsistent with the early measurements. Setting aside the possibility of experimental blunders, systematic error is almost certainly behind this odd behavior. Uncertainties that produce different results on repeated measurement (sometimes called random errors) are easy to detect (just repeat the measurement) and can perhaps be eliminated (the standard deviation of the mean ∝ 1/N^(1/2), which, as N → ∞, gets arbitrarily small). But systematic errors do not telegraph their existence by producing varying results. Without any telltale signs, systematic errors can go undetected, much to the future embarrassment of the experimenter. This semester you will be completing labs which display many of the problems of non-random errors.
Figure 1: Measured values of particle properties improve with time, but progress is often irregular. The error bars (δx) are intended to be ±1σ: the actual value should be in the range x ± δx 68.3% of the time (if the distribution were normal) and in the range x ± 2δx 95.4% of the time. These figures are from the Particle Data Group, pdg.lbl.gov.
Experiment: Measuring Resistance I

[Figure 2 also contains a circuit diagram (power supply, ammeter, voltmeter, resistor) and a plot of current (mA) vs. voltage (V); neither is reproduced here.]

 V (V)    δV (V)    I (mA)    δI (mA)
 0.9964   .005      0.2002    .001
 2.984    .007      0.6005    .003
 4.973    .009      1.0007    .005
 6.963    .011      1.4010    .007
 8.953    .013      1.8009    .009
10.942    .015      2.211     .012
 0.9962   .005      0.1996    .001
 2.980    .007      0.6000    .003
 4.969    .009      1.0002    .005
 6.959    .011      1.4004    .007
 8.948    .013      1.8001    .009
10.938    .015      2.206     .012

Figure 2: A pair of DM-441B DMMs were used to measure the voltage across (V) and the current through (I) a 4.99 kΩ resistor; δV and δI are the expected errors as reported in the devices' specifications.

Using a pair of DM-441B multimeters, I measured the current through and the voltage across a resistor. (The circuit and results are displayed in Figure 2.) Fitting the expected linear relationship (I = V/R), Lint reported R = 4.9696 ± .0016 kΩ (i.e., a relative error of 0.03%) with a reduced χ² of .11. (A graphical display showing all the following resistance measurements appears in Figure 3. It looks quite similar to the results reported in Fig. 1.)

This result is wrong and/or misleading. The small reduced χ² correctly flags the fact that the observed deviation of the data from the fit is much less than what should have resulted from the supplied uncertainties in V and I (which were calculated from the manufacturer's specifications). Apparently the deviation between the actual voltage and the measured voltage does not fluctuate irregularly; rather there is a high degree of consistency of the form:

    V_actual = a + b V_measured    (0.1)
where a is small and b ≈ 1. This is exactly the sort of behavior expected with calibration errors. Using the manufacturer's specifications (essentially δV/V ≈ .001 and δI/I ≈ .005) we would expect any resistance calculated by V/I to have a relative error of √(.1² + .5²) = .51% (i.e., an absolute error of .025 kΩ for this resistor), whereas Lint reported an error 17 times smaller. (If the errors were unbiased and random, Lint could properly report some error reduction due to averaging: using all N = 12 data points, perhaps an error reduction by a factor of N^(1/2) ≈ 3.5, but not by a factor of 17.) Lint has ignored the systematic error that was entered and is basing its error estimate just on the deviation between data and fit. (Do notice that Lint warned of this problem when it noted the small reduced χ².)
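To see the mismatch numerically, here is a minimal sketch (Python with numpy; it is not the actual Lint code, and for brevity it weights the fit by the current errors alone) that fits I = V/R to the Figure 2 data and compares the fit-reported error with the error propagated from the meter specifications:

```python
import numpy as np

# Figure 2 data: V in volts, I in mA, so R comes out in kilohms.
V  = np.array([0.9964, 2.984, 4.973, 6.963, 8.953, 10.942,
               0.9962, 2.980, 4.969, 6.959, 8.948, 10.938])
I  = np.array([0.2002, 0.6005, 1.0007, 1.4010, 1.8009, 2.211,
               0.1996, 0.6000, 1.0002, 1.4004, 1.8001, 2.206])
dI = np.array([.001, .003, .005, .007, .009, .012,
               .001, .003, .005, .007, .009, .012])

# Weighted least squares for I = b*V (one free parameter, b = 1/R):
w  = 1.0 / dI**2
b  = np.sum(w * V * I) / np.sum(w * V**2)   # best-fit slope
db = 1.0 / np.sqrt(np.sum(w * V**2))        # formal error from the weights alone
R, dR = 1.0 / b, db / b**2
chi2red = np.sum(w * (I - b * V)**2) / (len(V) - 1)
print(f"fit:  R = {R:.4f} +/- {dR:.4f} kOhm  (reduced chi^2 = {chi2red:.2f})")

# Calibration-limited error straight from the specifications:
# dR/R = sqrt((dV/V)^2 + (dI/I)^2) = sqrt(.1^2 + .5^2) % ~ .51%
print(f"spec: dR ~ {R * np.hypot(0.001, 0.005):.4f} kOhm")
```

The fit-reported error reflects only the scatter of the points about the line; the specification-based error is dominated by calibration and is more than an order of magnitude larger.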
Figure 3: Three different experiments are used to determine resistance: (A) a pair of DM-441B: V/I, (B) a pair of Keithley 6-digit DMMs: V/I, (C) a Keithley 6-digit DMM direct R. The left plot displays the results with error bars determined from Lint; the right plot displays errors calculated using each device's specifications. Note that according to the Lint errors the measurements are inconsistent, whereas they are consistent using the errors directly calculated from each device's specifications.
When the experiment was repeated with 6-digit meters, the result was R = 4.9828 ± .0001 kΩ with a reduced χ² of .03. (So calibration errors were again a problem and the two measurements of R are inconsistent.) Direct application of the manufacturer's specifications to a V/I calculation produced a 30× larger error: .003 kΩ. A direct measurement of R with a third 6-digit DMM resulted in R = 4.9845 ± .0006 kΩ. Notice that if the Lint errors are reported as accurate, I will be embarrassed by future measurements which will point out the inconsistency. On the other hand, direct use of calibration errors produces no inconsistency. (The graphical display in Figure 3 of these numerical results is clearly the best way to appreciate the problem.)

How can we know in advance which errors to report? Reduced χ² much greater or much less than one is always a signal that there is a problem with the fit (and particularly with any reported error).

Lesson: Fitting programs are designed with random error in mind and hence do not properly include systematic errors. When systematic errors dominate random errors, computer-reported errors are some sort of nonsense.

Comment: If a high-precision resistance measurement is required, there is no substitute for making sure that when the DMM reads 1.00000 V the actual voltage is also 1.00000 V. Calibration services exist to periodically (typically annually) check that the meters read true. (However, our SJU DMMs are not calibrated periodically.)

Warning: Statistics seems to suggest that arbitrarily small uncertainties can be obtained simply by taking more data. (Parameter uncertainties, like the standard deviation of the mean, will approach zero in proportion to the inverse square-root of the number of data points.) This promise of asymptotic perfection is based on the assumption that the errors are exactly unbiased, so that with a large number of data points the errors will cancel and the underlying actual mean behavior will be revealed. However, in real experiments the errors are almost never unbiased; systematic errors cannot generally be removed by averaging. Care is always required in interpreting computer-reported uncertainties. You must always use your judgment to decide if your equipment really has the ability to determine the parameters to the accuracy suggested by computer analysis. You should particularly be on your guard when large datasets have resulted in errors much smaller than those reported for the individual data points.
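The warning can be made concrete with a toy simulation (a minimal sketch in Python with numpy; the bias and noise values below are invented for illustration): the standard deviation of the mean shrinks like 1/√N, while a fixed calibration offset present in every reading is untouched by averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 1.00000   # volts (hypothetical)
bias       = 0.002     # hypothetical fixed calibration offset: a systematic error
noise      = 0.010     # random scatter of each individual reading

for N in (10, 1000, 100000):
    readings = true_value + bias + noise * rng.standard_normal(N)
    sdom = readings.std(ddof=1) / np.sqrt(N)   # standard deviation of the mean
    print(f"N={N:6d}: mean = {readings.mean():.5f} +/- {sdom:.5f}")
# The reported uncertainty heads toward zero, but every mean sits about
# 0.002 V away from the true value: averaging never reveals the bias.
```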
Problem of Definition
Often experiments require judgment. The required judgments often seem insignificant: Is this the peak of the resonance curve? Is A now lined up with B? Is the image now best in focus? Is this the start and end of one fringe? While it may seem that anyone would make the same judgments, history has shown that often such judgments contain small observer biases. "Problem of definition" errors are errors associated with such judgments.

Historical Aside: The personal equation and the standard deviation of the mean.

Historically, the first attempts at precision measurement were in astrometry (accurate measurement of positions in the sky) and geodesy (accurate measurement of positions on Earth). In both cases the simplest possible measurement was required: lining up an object of interest with a crosshair and recording the data point. By repeatedly making these measurements, the mean position was very accurately determined. (The standard deviation of the mean is the standard deviation of the measurements divided by the square root of the number of measurements. So averaging 100 measurements allowed the error to be reduced by a factor of 10.) It was slowly (and painfully: people were fired for being "poor" observers) determined that even as simple an observation as lining up A and B was seen differently by different people. Astronomers call this the "personal equation": an extra adjustment to be made to an observer's measurements to be consistent with other observers' measurements. This small bias would never have been noticed without the error-reduction produced by averaging. Do notice that in this case the mean value was not the correct value: the personal equation was needed to remove unconscious biases.

Any time you use the standard deviation of the mean to substantially reduce error, you must be sure that the random component you seek to remove is exactly unbiased, that is, that the mean answer is the correct answer.

In the bubble chamber lab, you will make path-length measurements from which you will determine a particle's mass. Length measurements (like any measurement) are subject to error, say ±0.1 mm. A computer will actually calculate the distance, but you have to judge (and mark) the beginning and end of the paths. The resulting error is a combination of instrument errors and judgment errors (problem of definition errors). Both of these errors have a random component and a systematic component (calibration errors for the machine, unconscious bias in your judgments). A relatively unsophisticated statistical treatment of these length measurements produces a rather large uncertainty in the average path length (and hence in the particle's mass calculated from this length). However, a more sophisticated treatment of the same length data produces an incredibly small estimated length error, much less than 0.1 mm. Of course it is the aim of fancy methods to give more "bang for the buck" (i.e., smaller errors for the same inputs); however, no amount of statistical manipulation can remove built-in biases, which act just like systematic (non-fluctuating) calibration errors. Personal choices about the exact location of path-beginning and path-end will bias length measurements, so while random length errors can be reduced by averaging (or fancy statistical methods), the silent systematic errors will remain.
possible parameters to adequately describe the system. A resistor subjected to extremes of voltage does not actually have a resistance. Nevertheless that single number does go a long way in describing the resistor. With luck, the fit parameters of a too-simple model have some resemblance to reality. In the case of our Ohm's law resistance experiment, the resulting value is something of an average of the high- and low-temperature resistances. However, it is unlikely that the computer-reported error in a fit parameter has any significant connection to reality (like the difference between the high- and low-temperature resistances) since the error will depend on the number of data points used. The quote often attributed to Einstein, "things should be made as simple as possible, but not simpler," I hope makes clear that part of the art of physics is to recognize the fruitful simplifications.

Lesson: We are always fitting less-than-perfect theories to less-than-perfect data. The meaning of the resulting parameters (and certainly the error in those parameters) is never immediately clear: judgment is almost always required.
crazy methods to improve milk production. At the end of his rope, he drives to Madison to consult with the greatest seer available: a theoretical physicist. The physicist listens to him, asks a few questions, and then says he'll take the assignment, and that it will take only a few hours to solve the problem. A few weeks later, the physicist phones the farmer and says, "I've got the answer. The solution turned out to be a bit more complicated than I thought and I'm presenting it at this afternoon's Theory Seminar." At the seminar the farmer finds a handful of people drinking tea and munching on cookies, none of whom looks like a farmer. As the talk begins, the physicist approaches the blackboard and draws a big circle. "First, we assume a spherical cow..." (Yes, that is the punch line.)
One hopes (as in the spherical cow story) that approximations are clearly reported in derivations. Indeed, many of the problems you'll face this semester stem from using high-accuracy test equipment to test an approximate theory. (It may be helpful to recall the 191 lab on measuring the kinetic coefficient of friction, in which you found that accurate measurement invalidated F = μk N where μk was a constant. Nevertheless "coefficient of friction" is a useful approximation.) For example, in the Langmuir's probe lab we assume that the plasma is in thermal equilibrium, i.e., that the electrons follow the Maxwell-Boltzmann speed distribution, and make a host of additional approximations that, when tested, turn out to be not exactly true. In that lab, you will find an explicit discussion of the error (20%!) in the theoretical equation Eq. 7.53:

    J_i ≈ (1/2) e n √(kT_e / M_i)    (0.3)

Again this error is not a result of a measurement, but simply a report that if the theory is done with slightly different simplifications, different equations result. Only rarely are errors reported in theoretical results, but they almost always have them! (Use of flawed or approximate parameters is actually quite common, particularly in engineering and process control, where consistent conditions rather than fundamental parameters are the main concern.)

What can be done when the model seems to produce a useful, but statistically invalid, fit to the data?

0. Use it! Perhaps the deviations are insignificant for the engineering problem at hand, in which case you may not care to explore the reasons for the small (compared to what matters) deviations, and instead use the model as a good-enough approximation to reality.

1. Find a model that works. This obvious solution is always the best solution, but often (as in these labs) not a practical solution, given the constraints.

2. Monte Carlo simulation of the experiment. If you fully understand the processes going on in the experiment, you can perhaps simulate the entire process on a computer: the computer simulates the experimental apparatus, producing simulated data sets which can be analyzed using the flawed model. One can detect differences (biases and/or random fluctuation) between the fit parameters and the actual values (which are known because they are set inside the computer program).
3. Repeat the experiment and report the fluctuation of the fit parameters. In some sense the reporting of parameter errors is damage control: you can only be labeled a fraud and a cheat if, when reproducing your work, folks find results outside of the ballpark you specify. You can play it safe by redoing the experiment yourself and finding the likely range (standard deviation) of variation in the fit parameters. In this case one wants to be careful to state that fit parameter values are being reported, not physical parameters (e.g., "indicated temperature" rather than actual temperature). Again, since systematic errors do not result in fluctuation, the likely deviation between the physical parameters and the fit parameters is not known. This was the approach used in the 191 μk experiment.

4. Use bootstrapping to simulate multiple actual experiments. Bootstrapping resamples (i.e., takes subsets) from the one in-hand data set, and subjects these subsets to the same fitting procedure (a sketch follows this list). Variation in the fit parameters can then be reported as bootstrap estimates of parameter variation. The program fit can bootstrap. (Again: report that an unknown amount of systematic error is likely to be present.)

5. Fudge the data. "In dire circumstances, you might try scaling all your x and y error bars by a constant factor until the probability is acceptable (0.5, say), to get plausible values for A and B." (Numerical Recipes by Press et al., 3rd ed., p. 787) That is, increase the size of your error bars so you get reduced χ² = 1, and then calculate errors as in the usual approach. Clearly this is the least legitimate procedure (but it is what LINFIT does). One must warn readers of the dicey nature of the resulting error estimates. The program fit can fudge.
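As an illustration of item 4, here is a minimal bootstrap sketch (Python with numpy; it does not reproduce the internals of the program fit, and fit_R stands in for any fitting routine, e.g., the weighted fit sketched earlier):

```python
import numpy as np

def fit_R(V, I, dI):
    """Weighted least-squares slope b of I = b*V; returns R = 1/b."""
    w = 1.0 / dI**2
    b = np.sum(w * V * I) / np.sum(w * V**2)
    return 1.0 / b

def bootstrap_R(V, I, dI, n_resamples=1000, seed=0):
    """Refit resampled copies of the one in-hand data set; the spread of
    the refit parameters estimates the *random* parameter variation.
    A shared calibration bias is copied into every resample, so the
    bootstrap spread says nothing about systematic error."""
    rng = np.random.default_rng(seed)
    n = len(V)
    Rs = np.empty(n_resamples)
    for i in range(n_resamples):
        k = rng.integers(0, n, size=n)   # resample indices, with replacement
        Rs[i] = fit_R(V[k], I[k], dI[k])
    return Rs.mean(), Rs.std()
```

For item 5, the corresponding fudge is a one-liner: scale every error bar by √(reduced χ²), which forces reduced χ² = 1. With reduced χ² = .11 this shrinks the error bars, which is how a fit can end up reporting errors far smaller than the meter specifications warrant.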
Special Problem: Temperature

Measuring temperature is a particular problem. (Here, the first two labs involve measuring temperatures above 1000 K in situations a bit removed from the experimenter.) You may remember from 211 that while temperature is a common part of human experience, it has a strikingly abstruse definition:

    1/(kT) = ∂ ln Ω / ∂E    (0.4)

While the usual properties of Newtonian physics (mass, position, velocity, etc.) exist at any time, temperature is a property that exists contingent on a situation: thermal equilibrium. And thermal equilibrium is an idealization only approximately achieved, never exactly achieved, in real life. Furthermore, in these experiments thermal equilibrium is not even closely approximated, so the resulting "temperatures" have somewhat restricted meanings. In the photometry lab the temperature of stars is measured. In fact, stars do not have a temperature and are not in thermal equilibrium. Nevertheless, astronomers find it useful to define an "effective temperature," which is really just a fit parameter that is adjusted for the best match between the light produced by the star and the light predicted by the model.
Special Problem: Assuming Away Variation

In the 191 μk lab, you assumed the sliding motion was characterized by one value of μk, whereas a little experimentation finds unusually slippery and sticky locations (handprints?). In the thermionic emission lab you will measure how various properties of a hot wire depend on temperature; however, the hot wire does not actually have a temperature: near the supports the wire is cooled by those supports and hence is at a lower temperature. Our spherical cow models have simplified away actual variation. The hope is that the fit model will thread between the extremes and find something like the typical value. Of course, real variations will result in deviations-from-fit which will be detected if sufficiently accurate measurements are made.
Special Problem: Derive in Idealized Geometry, Measure in Real Geometry

Often results are derived in simplified geometry (perfect spheres, infinite cylinders, flat planes), whereas measurements are made in this imperfect world. In these labs (and often in real life) these complications are set aside; instead of waiting for perfect theory, experiment can test if we have the right end of the stick. Thus a Spherical Cow is born. The theory should of course be re-done using the actual geometry, but often such calculations are extremely complex. Engineering can often proceed perfectly adequately with such a first approximation (with due allowance for a safety factor), and, practically speaking, we simply may not need accuracy beyond a certain number of sig figs. Indeed, it takes a special breed of physicist to push for the next sig fig; such folks are often found in national standards labs like nist.gov.
resistor equation (Eq. 0.2) is potentially such a dangerously free parameter: I will accept any value the computer suggests if only it improves the fit. While I have provided a story which suggests why such a term might be present, I have not actually checked that there is any truth in the story (for example, by measuring the actual temperature of the resistor at high voltage and by measuring the resistance of the resistor when placed in an oven). Skepticism about such inventions is expressed as Occam's razor⁶ and the law of parsimony.
Purpose:
In all your physics labs we have stressed the importance of error analysis. However, in this course you will have little use for that form of error analysis (because it was based on computer reports of random errors). Instead, my aim in this course is to introduce you to the problems of non-random error. In the bubble chamber lab you will see how increasingly sophisticated analysis can reveal systematic error not important or evident in more elementary analysis. In the other labs you will see how systematic error can be revealed by measuring the same quantity using different methods. In all of these labs you will use too-simple theory to extract characterizing parameters, which are not exactly the same quantity as might occur in a perfect version of the problem.
Comment:
The lesson "measure twice using different methods" is often impractical in real life. The real message is to be constantly aware that the numbers displayed on the meter may not be the truth. Be vigilant; check calibrations and assumptions whenever you can. But the opening Shapley quotation tells the truth: "Experimental findings... are messy, inexact things, which are believed by everyone except the man who did the work."
⁶ Entia non sunt multiplicanda praeter necessitatem, roughly translated (per Wikipedia) as "entities must not be multiplied beyond necessity."