Measurement notes
Measurement is the act, or the result, of a quantitative comparison between a given quantity and a
quantity of the same kind chosen as a unit. The result of the measurement is expressed by a pointer
deflection over a predefined scale or a number representing the ratio between the unknown quantity
and the standard. A standard is defined as the physical personification of the unit of measurement or its
submultiple or multiple values. The device or instrument used for comparing the unknown quantity with
the unit of measurement or a standard quantity is called a measuring instrument. The value of the
unknown quantity can be measured by direct or indirect methods. In direct measurement methods, the
unknown quantity is measured directly rather than being computed from other measured quantities. Examples of direct
measurement are current by an ammeter, voltage by a voltmeter, resistance by an ohmmeter, power by a
wattmeter, etc. In indirect measurement methods, the value of the unknown quantity is determined by
measuring the functionally related quantity and calculating the desired quantity rather than measuring it
directly. For example, the resistance (R) of a conductor can be measured indirectly by measuring the voltage
drop (V) across the conductor and the current (I) through it, and then computing R = V/I by
Ohm's law.
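The indirect method just described can be sketched in a few lines of Python; the voltmeter and ammeter readings below are hypothetical values for illustration, and the function name is my own:

```python
def resistance_from_vi(voltage_v, current_a):
    """Indirect measurement: infer R from measured V and I via Ohm's law."""
    if current_a == 0:
        raise ValueError("current must be non-zero")
    return voltage_v / current_a

# Hypothetical readings from a voltmeter and an ammeter.
v = 12.0  # volts
i = 0.5   # amperes
r = resistance_from_vi(v, i)  # R = V / I = 24.0 ohms
```

The point is that R itself is never compared with a resistance standard; it is computed from two directly measured, functionally related quantities.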
Units of measurement are divided into two categories:
• Fundamental units
• Derived units
The fundamental units in mechanics are measures of length, mass and time. The sizes of the
fundamental units, whether foot or metre, pound or kilogram, second or hour are arbitrary and can be
selected to fit a certain set of circumstances. Since length, mass and time are fundamental to most other
physical quantities besides those in mechanics, they are called the primary fundamental units. Measures
of certain physical quantities in the thermal, electrical and illumination disciplines are also represented
by fundamental units. These units are used only when these particular classes are involved, and they
may therefore be defined as auxiliary fundamental units.
All other units which can be expressed in terms of the fundamental units are called derived units. Every
derived unit originates from some physical law defining that unit. For example, the area (A) of a
rectangle is proportional to its length (l) and breadth (b), or A = lb. If the metre has been chosen as the
unit of length, then the area of a rectangle of 5 metres by 7 metres is 35 m². Note that the numbers of
measure are multiplied as well as the units. The derived unit for area (A) is then the square metre (m²).
A derived unit is recognized by its dimensions, which can be defined as the complete algebraic formula
for the derived unit. The dimensional symbols for the fundamental units of length, mass and time are L,
M and T respectively. The dimensional symbol for the derived unit of area is L² and that for volume is
L³. The dimensional symbol for the unit of force is MLT⁻², which follows from the defining equation for
force. The dimensional formulas of the derived units are particularly useful for converting units from one
system to another. For convenience, some derived units have been given new names. For example, the
derived unit of force in the SI system is called the newton (N), instead of the dimensionally correct
kg·m/s².
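As a sketch of how a dimensional formula drives unit conversion, the snippet below converts a force from newtons (kg·m/s²) to CGS dynes (g·cm/s²) by raising the per-dimension scale factors to the powers in MLT⁻². The function name and interface are my own, not from the text:

```python
def convert_by_dimensions(value, scale_m, scale_l, scale_t, dims):
    """Convert a quantity using its dimensional formula M^a L^b T^c.

    scale_m, scale_l, scale_t are the ratios (old unit)/(new unit)
    for the fundamental units of mass, length and time.
    """
    a, b, c = dims
    return value * (scale_m ** a) * (scale_l ** b) * (scale_t ** c)

# Force has dimensions M L T^-2.  SI -> CGS: 1 kg = 1000 g, 1 m = 100 cm, 1 s = 1 s.
dynes = convert_by_dimensions(1.0, 1000.0, 100.0, 1.0, (1, 1, -2))
# 1 N = 100000 dynes
```

Each fundamental unit is scaled independently and the exponents come straight from the dimensional symbol, which is exactly why dimensional formulas are useful for converting between systems.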
Standards of measurement are classified into the following categories:
1. International standards
2. Primary standards
3. Secondary standards
4. Working standards
International Standards
The international standards are defined by international agreement. They represent certain units of
measurement to the closest possible accuracy that production and measurement technology allow.
International standards are periodically checked and evaluated by absolute measurements in terms of
the fundamental units. These standards are maintained at the International Bureau of Weights and
Measures and are not available to the ordinary user of measuring instruments for purposes of
comparison or calibration. Table 1.1 shows basic SI Units, Quantities and Symbols.
Primary Standards
The primary standards are maintained by national standards laboratories in different parts of the
world. The National Bureau of Standards (NBS, now NIST) in Washington is responsible for maintenance of the
primary standards in North America. Other national laboratories include the National Physical
Laboratory (NPL) in Great Britain and the oldest in the world, the Physikalisch-Technische Reichsanstalt
in Germany. The primary standards, again representing the fundamental units and some of the derived
mechanical and electrical units, are independently calibrated by absolute measurements at each of the
national laboratories. The results of these measurements are compared with each other, leading to a
world average figure for the primary standard. Primary standards are not available for use outside the
national laboratories. One of the main functions of primary standards is the verification and calibration
of secondary standards.
Secondary Standards
Secondary standards are the basic reference standards used in the industrial measurement laboratories.
These standards are maintained by the particular involved industry and are checked locally against other
reference standards in the area. The responsibility for maintenance and calibration rests entirely with
the industrial laboratory itself. Secondary standards are generally sent to the national standards
laboratory on a periodic basis for calibration and comparison against the primary standards. They are
then returned to the industrial user with a certification of their measured value in terms of the primary
standard.
Working Standards
Working standards are the principal tools of a measurement laboratory. They are used to check and
calibrate general laboratory instruments for accuracy and performance or to perform comparison
measurements in industrial applications. A manufacturer of precision resistances, for example, may use
a standard resistor in the quality control department of his plant to check his testing equipment. In this
case, the manufacturer verifies that his measurement setup performs within the required limits of
accuracy.
METHODS OF MEASUREMENT
• Direct methods
• Indirect methods
In direct measurement methods, the unknown quantity is measured directly. Direct methods of
measurement are of two types, namely, deflection methods and comparison methods.
In indirect methods of measurement, it is general practice to establish an empirical relation between the
actual measured quantity and the desired parameter.
The operation of a measurement system can be explained in terms of functional elements of the system.
Every instrument and measurement system is composed of one or more of these functional elements
and each functional element is made of distinct components or groups of components which perform
required and definite steps in the measurement. The various elements are the following:
• Primary sensing element
• Variable conversion element
• Variable manipulation element
• Data transmission element
• Data presentation element
Primary Sensing Element
It is the element that is sensitive to the measured variable. The physical quantity under measurement,
called the measurand, makes its first contact with the primary sensing element of a measurement
system. The measurand is always disturbed by the act of the measurement, but good instruments are
designed to minimise this effect. Primary sensing elements may have a non-electrical input and output
such as a spring, manometer or may have an electrical input and output such as a rectifier. In case the
primary sensing element has a non-electrical input and output, then it is converted into an electrical
signal by means of a transducer. The transducer is defined as a device, which when actuated by one
form of energy, is capable of converting it into another form of energy. Often, certain operations
must be performed on the signal before its further transmission so that interfering sources are
removed and the signal does not get distorted. The process may be linear, such as amplification,
attenuation, integration, differentiation, addition and subtraction or nonlinear such as modulation,
detection, sampling, filtering, chopping and clipping, etc. The process is called signal conditioning. So a
signal conditioner follows the primary sensing element or transducer, as the case may be. The sensing
element senses the condition, state or value of the process variable by extracting a small part of energy
from the measurand, and then produces an output which reflects this condition, state or value of the
measurand.
Variable Conversion Elements
After passing through the primary sensing element, the output is an electrical signal (a voltage,
current or frequency) which may or may not be acceptable to the rest of the system. For performing the
desired operation, it may be necessary to convert this output to some other suitable form while
retaining the information content of the original signal. For example, if the output is in analog form and
the next step of the system accepts only in digital form then an analog-to-digital converter will be
employed. Many instruments do not require any variable conversion unit, while some others require
more than one element.
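The analog-to-digital conversion mentioned above can be illustrated with a minimal sketch of an idealised n-bit converter; the reference voltage, bit width and function name are illustrative assumptions, not from the text:

```python
def adc_code(voltage, v_ref=5.0, bits=8):
    """Ideal n-bit ADC: map the range 0..v_ref onto integer codes 0..2**bits - 1."""
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    # Clamp to the valid code range so out-of-range inputs saturate.
    return max(0, min(levels - 1, code))

print(adc_code(2.5))  # 2.5 V on a 5 V, 8-bit converter is mid-scale: 128
```

The information content (the voltage level) is retained, but the form of the signal changes from a continuous analog value to a discrete code the next stage can accept.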
Manipulation Elements
Sometimes it is necessary to change the signal level without changing the information contained in it for
the acceptance of the instrument. The function of the variable manipulation unit is to manipulate the
signal presented to it while preserving the original nature of the signal. For example, an electronic
amplifier converts a small low voltage input signal into a high voltage output signal. Thus, the voltage
amplifier acts as a variable manipulation unit. Some of the instruments may require this function or
some of the instruments may not.
Data Transmission Elements
The data transmission elements are required to transmit the data containing the information of the
signal from one system to another. For example, satellites are physically separated from the earth
where the control stations guiding their movement are located.
Data Presentation Elements
The function of the data presentation elements is to provide an indication or recording in a form that
can be evaluated by an unaided human sense or by a controller. The information regarding measurand
(quantity to be measured) is to be conveyed to the personnel handling the instrument or the system for
monitoring, controlling or analysis purposes. Such a device may present the data in analog or digital format.
The simplest form of a display device is the common panel meter with some kind of calibrated scale and
pointer. In case the data is to be recorded, recorders like magnetic tapes or magnetic discs may be used.
For control and analysis purposes, computers may be used.
CLASSIFICATION OF INSTRUMENTS
Analog Instruments
The signals of an analog unit vary in a continuous fashion and can take on an infinite number of values in a
given range. Fuel gauges, ammeters and voltmeters, wrist watches and speedometers fall in this category.
Digital Instruments
Signals varying in discrete steps and taking on a finite number of different values in a given range are
digital signals and the corresponding instruments are of digital type. Digital instruments have some
advantages over analog meters: they have high accuracy and high speed of operation, they eliminate
human reading errors, and they can store results for future reference. A
digital multimeter is an example of a digital instrument.
Mechanical Instruments
Mechanical instruments are very reliable for static and stable conditions. They are unable to respond
rapidly to the measurement of dynamic and transient conditions due to the fact that they have moving
parts that are rigid, heavy and bulky and consequently have a large mass. Mass presents inertia
problems and hence these instruments cannot faithfully follow the rapid changes which are involved in
dynamic measurements. Also, most mechanical instruments cause noise pollution.
They are, however, reliable and accurate for the measurement of stable, time-invariant quantities.
Electrical Instruments
When the instrument pointer deflection is caused by the action of some electrical methods then it is
called an electrical instrument. The time of operation of an electrical instrument is more rapid than that
of a mechanical instrument. Unfortunately, an electrical system normally depends upon a mechanical
measurement as an indicating device. This mechanical movement has some inertia due to which the
frequency response of these instruments is poor.
Electronic Instruments
Electronic instruments use semiconductor devices. Most of the scientific and industrial instrumentations
require very fast responses. Such requirements cannot be met by mechanical and electrical
instruments. In electronic devices, since the only movement involved is that of electrons, the response
time is extremely small owing to very small inertia of the electrons. With the use of electronic devices, a
very weak signal can be detected by using pre-amplifiers and amplifiers.
Electronic instruments also offer greater flexibility.
Manual and Automatic Instruments
In the case of manual instruments, the service of an operator is required. For example, in the measurement of
temperature by a resistance thermometer incorporating a Wheatstone bridge in its circuit, an operator
is required to indicate the temperature being measured. In an automatic type of instrument, no
operator is required all the time. For example, measurement of temperature by mercury-in-glass
thermometer.
STATIC CHARACTERISTICS OF INSTRUMENTS
1. Accuracy
Accuracy is the closeness with which the instrument reading approaches the true value of the variable
under measurement. Accuracy is determined as the maximum amount by which the result differs from
the true value. It is almost impossible to determine experimentally the true value. The true value is not
indicated by any measurement system due to the loading effect, lags and mechanical problems (e.g.,
wear, hysteresis, noise, etc.).
The accuracy of a measurement also depends on whether or not the quantity is being truly impressed upon the instrument.
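Accuracy is often quoted as a percent-of-reading error, the deviation of the reading from the true value expressed as a percentage. A minimal sketch, with illustrative numbers of my own:

```python
def percent_error(measured, true_value):
    """Accuracy figure: deviation of the reading from the true value, as a percentage."""
    return abs(measured - true_value) / true_value * 100.0

# A meter reading 98.0 when the true value is 100.0 is in error by 2.0 percent.
e = percent_error(98.0, 100.0)
```

In practice the "true value" here is itself only the best available estimate, e.g. the value indicated by a higher-grade standard, since the true value can never be determined exactly.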
2. Precision
Precision is a measure of the reproducibility of the measurements, i.e., precision is a measure of the
degree to which successive measurements differ from one another. Precision is indicated from the
number of significant figures in which it is expressed. Significant figures actually convey the information
regarding the magnitude and the measurement precision of a quantity. More significant figures imply
greater precision of the measurement.
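Precision as reproducibility can be quantified by the spread of repeated readings: the smaller the sample standard deviation, the more precise the instrument. The readings below are invented for illustration:

```python
import statistics

# Hypothetical repeated readings of the same quantity.
readings = [10.02, 9.98, 10.01, 9.99, 10.00]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)  # sample standard deviation: smaller = more precise
```

Note that a small spread says nothing about accuracy; all five readings could agree closely with one another and still be systematically offset from the true value.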
3. Resolution
If the input is slowly increased from some arbitrary value it will be noticed that the output does not
change at all until the increment exceeds a certain value called the resolution or discrimination of the
instrument. Thus, the resolution or discrimination of any instrument is the smallest change in the input
signal (quantity under measurement) which can be detected by the instrument. It may be expressed as
an actual value or as a fraction or percentage of the full-scale value. Resolution is sometimes referred
to as sensitivity. The largest change of input quantity for which there is no output of the instrument is
called the dead zone of that instrument.
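The behaviour described above can be modelled as a simple threshold: input increments at or below the resolution produce no change in output. The resolution value and function name below are illustrative assumptions:

```python
def instrument_output(input_change, resolution=0.1):
    """Return the indicated change: zero until the increment exceeds the resolution."""
    if abs(input_change) <= resolution:
        return 0.0  # within the dead zone: no detectable output
    return input_change
```

For example, with a resolution of 0.1 units, an increment of 0.05 goes undetected while an increment of 0.2 is indicated in full.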
4. Sensitivity
The sensitivity gives the relation between the input signal to an instrument or a part of the instrument
system and the output. Thus, the sensitivity is defined as the ratio of output signal or response of the
instrument to a change of input signal or the quantity under measurement.
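The ratio definition can be written directly as code; here static sensitivity is estimated from the change between two points on a calibration curve. The thermocouple-like numbers are illustrative, not from the text:

```python
def sensitivity(delta_output, delta_input):
    """Static sensitivity = change in output signal / change in input signal."""
    return delta_output / delta_input

# Illustrative example: 0.40 mV of output change for a 10 degC input change.
s = sensitivity(0.40, 10.0)  # 0.04 mV per degC
```

Note that sensitivity carries units (output units per input unit), unlike a dimensionless accuracy figure.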
Expected value – the design value, i.e., the most probable value that one expects to obtain.
Error – the deviation of the measured value from the true (expected) value.
Types of Errors
Errors may originate in a variety of ways. They are categorised into three main types:
• Gross error
• Systematic error
• Random error
1. Gross Error
These errors occur because of mistakes in reading instruments, and in recording and
calculating measurement results. They usually occur because of human mistakes; they may
be of any magnitude and cannot be subjected to mathematical treatment. One common gross error
arises from improper use of the measuring instrument.
2. Systematic Errors: These errors are shortcomings of instruments, such as defective or worn parts, and
effects of the environment on the equipment or the user. They are subdivided into instrumental and
environmental errors.
(i) Instrumental Errors: These errors are inherent in the instrument itself, because of its mechanical
structure or because of misuse. They can be reduced by
(a) selecting a suitable instrument for the particular measurement application;
(b) applying correction factors after determining the amount of instrumental error; and
(c) calibrating the instrument against a standard.
(ii) Environmental Errors: These errors are due to external conditions surrounding the instruments
which affect the measurements. The surrounding conditions may be the changes in temperature,
humidity, barometric pressure, or of magnetic or electrostatic fields. Thus, a change in ambient
temperature at which the instrument is used causes a change in the elastic properties of the spring in a
moving-coil mechanism and so causes an error in the reading of the instrument. To reduce the effects of
external conditions surrounding the instruments, corrective measures such as the following may be taken:
(a) The instrument should be used in the controlled conditions of temperature and humidity in which it
was originally calibrated;
(b) certain components in the instrument should be completely closed, i.e., hermetically sealed; and
(c) magnetic or electrostatic shields may be provided.
3. Random Errors: These errors are those errors which are due to unknown causes and they occur even
when all systematic errors have been taken care of. This error cannot be corrected by any method of
calibration or other known methods of control. Few random errors usually occur in well-designed
experiments, but they become important in high-accuracy work. For example, suppose an accurately
calibrated voltmeter is being used in ideal environmental conditions to read the voltage of an electric
circuit. It will be found that the readings vary slightly over the period of observation. This
variation cannot be corrected by any method of calibration or other known method of control and it
cannot be explained without minute investigation. The only way to offset these errors is by increasing
the number of readings and using statistical methods in order to obtain the best approximation of the
true value of the quantity under measurement.
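The statistical treatment mentioned above amounts to averaging many readings: the best estimate of the true value is the arithmetic mean, and its uncertainty (the standard error) shrinks as the number of readings grows. A sketch with invented voltmeter readings:

```python
import statistics

# Hypothetical repeated voltmeter readings (volts) affected only by random error.
readings = [100.2, 99.8, 100.1, 99.9, 100.0, 100.1]

best_estimate = statistics.mean(readings)
# Standard error of the mean falls as 1/sqrt(n), so more readings give a
# tighter estimate even though individual readings still scatter.
std_error = statistics.stdev(readings) / len(readings) ** 0.5
```

This is why random errors, unlike gross and systematic errors, yield to statistical methods: they average out, while a systematic offset would survive any amount of averaging.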