Revision Notes U3
Measurement Techniques
Common instruments used in Physics are:
o Metre rules – to measure distance and length
o Balances – to measure mass
o Protractors – to measure angles
o Stopwatches – to measure time
o Ammeters – to measure current
o Voltmeters – to measure potential difference
More complicated instruments, such as the micrometer screw gauge and Vernier
calipers, can be used to measure length more accurately
When using measuring instruments like these you need to ensure that you are fully
aware of what each division on a scale represents
o This is known as the resolution
The resolution is the smallest change in the physical quantity being measured that
results in a change in the reading given by the measuring instrument
The smaller the change that an instrument can detect, the higher its resolution
For example, a standard mercury thermometer has a resolution of 1°C whereas a
typical digital thermometer will have a resolution of 0.1°C
o The digital thermometer has a higher resolution than the mercury thermometer
o Measuring Instruments Table
Random error
Random errors cause readings to be scattered around the true value, due to
unpredictable variations in the measurement (e.g. in the environment or in the
observer's judgement when reading a scale)
To reduce random errors: repeat the measurement several times and calculate a
mean average
Systematic error
Systematic errors arise from faulty instruments or from flaws in the
experimental method
This type of error is repeated every time the instrument is used or the method is
followed, which affects the accuracy of all readings obtained
To reduce systematic errors:
o Instruments should be recalibrated, or different instruments should be used
o Corrections or adjustments should be made to the technique
Systematic errors on graphs are shown by a line of best fit that is offset from
the origin when it should pass through it
Zero error
This is a type of systematic error which occurs when an instrument gives a reading
when the true reading is zero
This introduces a fixed error into readings which must be accounted for when the
results are recorded
Accuracy: this is how close a measured value is to the true value; the accuracy can be
increased by repeating measurements and finding a mean average
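The point above about repeating measurements can be sketched in a few lines of Python (the readings below are hypothetical example values, not from the notes):

```python
# Improve accuracy by repeating a measurement and taking the mean average.
readings = [21.4, 21.6, 21.5, 21.3, 21.7]  # e.g. repeated length readings in cm

mean = sum(readings) / len(readings)
print(f"Mean reading: {mean:.2f} cm")  # prints "Mean reading: 21.50 cm"
```

Averaging reduces the effect of random error, because readings scattered above and below the true value tend to cancel out; it does not remove a systematic error, which shifts every reading the same way.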
Calculating Uncertainty
There is always a degree of uncertainty when measurements are taken; the uncertainty
can be thought of as the difference between the actual reading taken (caused by the
equipment or techniques used) and the true value
Uncertainties are not the same as errors
o Errors can be thought of as issues with equipment or methodology that cause a
reading to be different from the true value
o The uncertainty is a range of values around a measurement within which the
true value is expected to lie, and is an estimate
For example, if the true value of the mass of a box is 950 g, but a systematic error
with a balance gives an actual reading of 952 g, the uncertainty is ±2 g
These uncertainties can be represented in a number of ways:
o Absolute Uncertainty: where uncertainty is given as a fixed quantity
o Fractional Uncertainty: where uncertainty is given as a fraction of the
measurement
o Percentage Uncertainty: where uncertainty is given as a percentage of the
measurement
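The three forms of uncertainty above are related by simple division. A minimal Python sketch, using the mass example from earlier (a 952 g reading with an absolute uncertainty of ±2 g):

```python
# Express one uncertainty three ways: absolute, fractional, percentage.
measured = 952.0    # g, the reading given by the balance
absolute_unc = 2.0  # g, the absolute uncertainty

fractional_unc = absolute_unc / measured  # fraction of the measurement
percentage_unc = fractional_unc * 100     # the same fraction as a percentage

print(f"Absolute:   +/- {absolute_unc} g")
print(f"Fractional: +/- {fractional_unc:.4f}")
print(f"Percentage: +/- {percentage_unc:.2f} %")
```

Here the fractional uncertainty is 2/952 ≈ 0.0021, and the percentage uncertainty is about 0.21%.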
Measuring instruments
1. Micrometer screw gauge: 1 division = 0.01 mm
Answer 1:
Reading = linear scale + (circular scale × constant)
= 5.5 + (0 × 0.01)
= 5.5 mm
Worked example 3:
Answer 3:
Reading = main scale reading + (vernier scale reading × constant)
= 11 mm + (13 × 0.05 mm)
= 11.65 mm
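Both worked examples follow the same pattern: main-scale reading plus the number of divisions on the secondary scale multiplied by the value of one division. A short Python sketch reproducing the two answers above (the function name is my own, not from the notes):

```python
def instrument_reading(main_scale_mm, divisions, div_value_mm):
    """Combine a main-scale reading with the secondary-scale divisions."""
    return main_scale_mm + divisions * div_value_mm

# Micrometer screw gauge: 1 division = 0.01 mm
micrometer = instrument_reading(5.5, 0, 0.01)   # 5.5 + 0 x 0.01 = 5.5 mm

# Vernier calipers: 1 division = 0.05 mm
vernier = instrument_reading(11.0, 13, 0.05)    # 11 + 13 x 0.05 = 11.65 mm

print(f"Micrometer reading: {micrometer:.2f} mm")
print(f"Vernier reading:    {vernier:.2f} mm")
```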