
Electronic Measurements


UECG001-ELECTRONIC

MEASUREMENTS
Course Objectives
 This course aims at teaching functional elements
of instrumentation.
 This course makes the student to learn the
fundamentals of electrical and electronic
instruments.
 The students will be exposed to various
measurement techniques, storage and display
devices
Course Outcomes
CO1: Ability to measure electrical parameters
using appropriate Electronics Instruments.
CO2: Ability to make use of storage and display
devices.
CO3: Ability to select appropriate sensors in
various applications.
UNIT-I
Electronics Instruments
Functional elements of an instrument – Static and
dynamic characteristics – Errors in measurement ––
Standards and calibration – Principle and types of
analog and digital voltmeters, ammeters,
multimeters – Single and three phase wattmeters
and energy meters – Magnetic measurements –
Determination of B-H curve and measurements of
iron loss.
MEASUREMENTS
 Measurement is the act, or the result, of a quantitative
comparison between a given quantity and a quantity of
the same kind chosen as a unit.
 The result of the measurement is expressed by a pointer
deflection over a predefined scale or a number
representing the ratio between the unknown quantity and
the standard.
 The device or instrument used for comparing the
unknown quantity with the unit of measurement or a
standard quantity is called a measuring instrument
MEASUREMENTS
 The value of the unknown quantity can be measured by direct or
indirect methods.
 In direct measurement methods, the unknown quantity is measured
directly instead of comparing it with a standard.
 Examples of direct measurement are current by ammeter, voltage by
voltmeter, resistance by ohmmeter, power by wattmeter, etc.
 In indirect measurement methods, the value of the unknown quantity is
determined by measuring the functionally related quantity and
calculating the desired quantity rather than measuring it directly.
 Suppose the resistance as (R) of a conductor can be measured by
measuring the voltage drop across the conductor and dividing the
voltage (V) by the current (I) through the conductors, by Ohm’s Law
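The indirect method described above can be sketched in a few lines of Python; the function name is illustrative, not from the source.

```python
def resistance_from_vi(voltage_v, current_a):
    """Indirect measurement of resistance via Ohm's law: R = V / I."""
    if current_a == 0:
        raise ValueError("current must be non-zero")
    return voltage_v / current_a

# A 12 V drop across a conductor carrying 0.5 A implies 24 ohms.
print(resistance_from_vi(12.0, 0.5))  # 24.0
```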
Types of Measurements
Primary Measurements
Secondary measurements
Tertiary Measurements
CLASSIFICATION OF INSTRUMENTS
ABSOLUTE INSTRUMENTS
 The instruments of this type give the value of the measurand in terms of
instrument constant and its deflection.
 Such instruments do not require comparison with any other standard.
 The example of this type of instrument is tangent galvanometer, which gives
the value of the current to be measured in terms of tangent of the angle of
deflection produced, the horizontal component of the earth’s magnetic field,
the radius and the number of turns of the wire used.
 Rayleigh current balance and absolute electrometer are other examples of
absolute instruments. Absolute instruments are mostly used in standard
laboratories and in similar institutions as standardizing.
SECONDARY INSTRUMENTS
 These instruments are so constructed that the deflection of such
instruments gives the magnitude of the electrical quantity to be
measured directly.
 These instruments are required to be calibrated by comparison with
either an absolute instrument or with another secondary instrument,
which has already been calibrated before the use. These instruments
are generally used in practice.
Secondary instruments are further classified as
Indicating instruments
Integrating instruments
Recording instruments
SECONDARY INSTRUMENTS
Indicating Instruments
 Indicating instruments are those which indicate the
magnitude of an electrical quantity at the time when it
is being measured.
 The indications are given by a pointer moving over a
calibrated (pregraduated) scale.
 Ordinary ammeters, voltmeters, wattmeters, frequency
meters, power factor meters, etc., fall into this
category.
SECONDARY INSTRUMENTS
Integrating Instruments
 Integrating instruments are those which measure the
total amount of either quantity of electricity (ampere-
hours) or electrical energy supplied over a period of time.
 The summation, given by such an instrument, is the
product of time and an electrical quantity under
measurement.
 The ampere-hour meters and energy meters fall in this
class.
SECONDARY INSTRUMENTS
Recording Instruments
 Recording instruments are those which keep a continuous record of
the variation of the magnitude of an electrical quantity to be observed
over a definite period of time.
 In such instruments, the moving system carries an inked pen which
touches lightly a sheet of paper wrapped over a drum moving with
uniform slow motion in a direction perpendicular to that of the
direction of the pointer.
 Thus, a curve is traced which shows the variations in the magnitude of
the electrical quantity under observation over a definite period of time.
 Such instruments are generally used in powerhouses where the
current, voltage, power, etc., are to be maintained within certain
acceptable limit.
ANALOG AND DIGITAL INSTRUMENTS
Analog Instruments
 The signals of an analog unit vary in a continuous fashion and can take on
infinite number of values in a given range.
 Fuel gauge, ammeter and voltmeters, wrist watch, speedometer fall in this
category.
Digital Instruments
 Signals varying in discrete steps and taking on a finite number of different
values in a given range are digital signals and the corresponding instruments
are of digital type.
 Digital instruments have some advantages over analog meters, in that they
have high accuracy and high speed of operation.
 It eliminates the human operational errors.
 Digital instruments can store the result for future purposes. A digital
multimeter is the example of a digital instrument.
MECHANICAL, ELECTRICAL AND
ELECTRONICS INSTRUMENTS
Mechanical Instruments
 Mechanical instruments are very reliable for static and stable
conditions.
 They are unable to respond rapidly to the measurement of
dynamic and transient conditions due to the fact that they have
moving parts that are rigid, heavy and bulky and consequently
have a large mass.
 Mass presents inertia problems and hence these instruments
cannot faithfully follow the rapid changes which are involved in
dynamic instruments.
 Also, most of the mechanical instruments causes noise pollution.
MECHANICAL, ELECTRICAL AND
ELECTRONICS INSTRUMENTS
Electrical Instruments
 When the instrument pointer deflection is caused by the
action of some electrical methods then it is called an
electrical instrument.
 The time of operation of an electrical instrument is more
rapid than that of a mechanical instrument.
 Unfortunately, an electrical system normally depends
upon a mechanical measurement as an indicating device.
 This mechanical movement has some inertia due to which
the frequency response of these instruments is poor.
MECHANICAL, ELECTRICAL AND
ELECTRONICS INSTRUMENTS
Electronic Instruments
 Electronic instruments use semiconductor devices. Most of the
scientific and industrial instrumentations require very fast
responses.
 Such requirements cannot be met with by mechanical and
electrical instruments.
 In electronic devices, since the only movement involved is that
of electrons, the response time is extremely small owing to
very small inertia of the electrons.
 With the use of electronic devices, a very weak signal can be
detected by using pre-amplifiers and amplifiers.
BASIC REQUIREMENTS OF MEASUREMENT
i) The standard used for comparison purposes
must be accurately defined & should be commonly
accepted
ii) The apparatus used & the method adopted
must be provable.
MEASURING INSTRUMENT:
 A measurement system may be defined as a systematic arrangement
for the measurement or determination of an unknown quantity and
analysis of instrumentation.
 The generalized measurement system and its different
components/elements are shown.

FUNCTIONAL ELEMENTS OF AN INSTRUMENT:


Most of the measurement systems contain three main functional
elements. They are:
i) Primary sensing element
ii) Variable conversion element &
iii) Data presentation element.
FUNCTIONAL ELEMENTS OF AN INSTRUMENT
PRIMARY SENSING ELEMENTS
 It is an element that is sensitive to the measured variable.
 The physical quantity under measurement, called the measurand, makes its
first contact with the primary sensing element of a measurement system.
 The measurand is always disturbed by the act of the measurement, but good
instruments are designed to minimize this effect.
 Primary sensing elements may have a non-electrical input and output such as a
spring, manometer or may have an electrical input and output such as a
rectifier.
 In case the primary sensing element has a non-electrical input and output,
then it is converted into an electrical signal by means of a transducer.
 The transducer is defined as a device, which when actuated by one form of
energy, is capable of converting it into another form of energy.
PRIMARY SENSING ELEMENTS
 Many a times, certain operations are to be performed on the signal before its
further transmission so that interfering sources are removed in order that the
signal may not get distorted.
 The process may be linear such as amplification, attenuation, integration,
differentiation, addition and subtraction or nonlinear such as modulation,
detection, sampling, filtering, chopping and clipping, etc.
 The process is called signal conditioning.
 So a signal conditioner follows the primary sensing element or transducer, as
the case may be.
 The sensing element senses the condition, state or value of the process
variable by extracting a small part of energy from the measurand, and then
produces an output which reflects this condition, state or value of the
measurand.
VARIABLE CONVERSION ELEMENTS
 After passing through the primary sensing element, the output
is in the form of an electrical signal, may be voltage, current,
frequency, which may or may not be accepted to the system.
 For performing the desired operation, it may be necessary to
convert this output to some other suitable form while
retaining the information content of the original signal.
 For example, if the output is in analog form and the next step
of the system accepts only in digital form then an analog-to-
digital converter will be employed.
 Many instruments do not require any variable conversion unit,
while some others require more than one element.
MANIPULATION ELEMENTS
 Sometimes it is necessary to change the signal level without
changing the information contained in it for the acceptance of
the instrument.
 The function of the variable manipulation unit is to manipulate
the signal presented to it while preserving the original nature of
the signal.
 For example, an electronic amplifier converts a small low
voltage input signal into a high voltage output signal.
 Thus, the voltage amplifier acts as a variable manipulation unit.
Some of the instruments may require this function or some of
the instruments may not.
DATA TRANSMISSION ELEMENTS
The data transmission elements are required
to transmit the data containing the
information of the signal from one system to
another.
For example, satellites are physically
separated from the earth where the control
stations guiding their movement are located.
DATA PRESENTATION ELEMENTS
 The function of the data presentation elements is to provide an indication
or recording in a form that can be evaluated by an unaided human sense
or by a controller.
 The information regarding measurand (quantity to be measured) is to be
conveyed to the personnel handling the instrument or the system for
monitoring, controlling or analysis purpose.
 Such a device may be in the form of analog or digital format. The simplest
form of a display device is the common panel meter with some kind of
calibrated scale and pointer.
 In case the data is to be recorded, recorders like magnetic tapes or
magnetic discs may be used. For control and analysis purpose, computers
may be used.
STEPS OF A TYPICAL MEASUREMENT SYSTEM
STATIC & DYNAMIC CHARACTERISTICS

The performance characteristics of an instrument
are mainly divided into two categories:
i) Static characteristics
ii) Dynamic characteristics
Static characteristics:
The set of criteria defined for the instruments, which are used to measure the quantities
which are slowly varying with time or mostly constant, i.e., do not vary with time, is called
static characteristics.
The various static characteristics are:
i) Accuracy
ii) Precision
iii) Sensitivity
iv) Linearity
v) Reproducibility
vi) Repeatability
vii) Resolution
viii) Threshold
ix) Drift
x) Stability
xi) Tolerance
xii) Range or span
Accuracy:
It is the degree of closeness with which the reading approaches the true
value of the quantity to be measured. The accuracy can be expressed in
following ways:
Point accuracy:
Such an accuracy is specified at only one particular point of scale. It does
not give any information about the accuracy at any other point on the
scale.
Accuracy as percentage of scale span:
When an instrument has a uniform scale, its accuracy may be expressed in
terms of the scale range.
Accuracy as percentage of true value:
The best way to conceive the idea of accuracy is to specify it in terms of
the true value of the quantity being measured.
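The two ways of expressing accuracy above can be compared numerically; this is a minimal sketch with hypothetical function names, not part of the source.

```python
def error_pct_of_true(measured, true_value):
    """Error expressed as a percentage of the true value."""
    return abs(measured - true_value) / abs(true_value) * 100.0

def error_pct_of_span(measured, true_value, span):
    """Error expressed as a percentage of the full-scale span."""
    return abs(measured - true_value) / span * 100.0

# A 0-100 V voltmeter reads 49 V when the true value is 50 V:
print(error_pct_of_true(49.0, 50.0))         # 2.0 (% of true value)
print(error_pct_of_span(49.0, 50.0, 100.0))  # 1.0 (% of span)
```

Note how the same reading looks twice as accurate when quoted as a percentage of span, which is why the percentage-of-true-value form is preferred.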
Precision:
It is the measure of reproducibility, i.e., given a fixed value of a quantity,
precision is a measure of the degree of agreement within a group of
measurements. Precision is composed of two characteristics:
a) Conformity
Consider a resistor having a true value of 2385692 Ω, which is being measured
by an ohmmeter. The observer can consistently read only 2.4 MΩ due
to the non-availability of a finer scale. The error created by this
limitation of the scale reading is a precision error.
b)Number of significant figures:
The precision of the measurement is obtained from the number of
significant figures, in which the reading is expressed. The significant figures
convey the actual information about the magnitude & the measurement
precision of the quantity.
Sensitivity:
The sensitivity denotes the smallest change in the measured variable to
which the instrument responds. It is defined as the ratio of the changes in the
output of an instrument to a change in the value of the quantity to be
measured. Mathematically, it is expressed as
Sensitivity = (infinitesimal change in output) / (infinitesimal change in input)

Thus, if the calibration curve is linear, as shown, the sensitivity
of the instrument is the slope of the calibration curve.
If the calibration curve is not linear, as shown, then the
sensitivity varies with the input.
Inverse sensitivity or deflection factor is defined as the
reciprocal of sensitivity.
Inverse sensitivity or deflection factor = 1/ sensitivity
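The slope and its reciprocal can be illustrated directly; the function names below are illustrative, not from the source.

```python
def sensitivity(delta_output, delta_input):
    """Slope of the calibration curve: change in output per change in input."""
    return delta_output / delta_input

def deflection_factor(delta_output, delta_input):
    """Inverse sensitivity: change in input per change in output."""
    return delta_input / delta_output

# A meter whose pointer moves 10 mm for a 2 V change in input:
print(sensitivity(10.0, 2.0))        # 5.0 mm/V
print(deflection_factor(10.0, 2.0))  # 0.2 V/mm
```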
Linearity:
The linearity is defined as the ability to reproduce the input
characteristics symmetrically & linearly.
The curve shows the actual calibration curve & idealized straight
line
Reproducibility:
It is the degree of closeness with which a given value may
be repeatedly measured. It is specified in terms of scale
readings over a given period of time.

Repeatability:
It is defined as the variation of scale readings when the same input is
repeatedly applied; this variation is random in nature.
Drift:
Drift may be classified into three categories:
a) zero drift:
If the whole calibration gradually shifts due to slippage, permanent
set, or due to undue warming up of electronic tube circuits, zero drift
sets in.
Drift:
b) Span drift or sensitivity drift:
If there is a proportional change in the indication all along
the upward scale, the drift is called span drift or
sensitivity drift.
c) Zonal drift:
In case the drift occurs over only a portion of the span of an
instrument, it is called zonal drift.
Resolution
If the input is slowly increased from some arbitrary input value, it
will again be found that the output does not change at all until a
certain increment is exceeded. This increment is called
resolution.
Threshold:
If the instrument input is increased very gradually from zero
there will be some minimum value below which no output
change can be detected. This minimum value defines the
threshold of the instrument.
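The difference between threshold and resolution can be shown with a toy instrument model; this is an illustrative sketch only, with hypothetical names and step values, not a description of any real instrument.

```python
import math

def indicated_value(true_input, threshold=0.2, resolution=0.5):
    """Toy instrument: no detectable output below the threshold;
    above it, the output changes only in steps of one resolution."""
    if true_input < threshold:
        return 0.0
    return math.floor(true_input / resolution) * resolution

print(indicated_value(0.1))   # 0.0 -> below threshold, no output at all
print(indicated_value(1.7))   # 1.5 -> output moves only in 0.5 steps
print(indicated_value(1.99))  # 1.5 -> still within the same step
```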
Stability:
It is the ability of an instrument to retain its performance
throughout its specified operating life.
Tolerance:
The maximum allowable error in the measurement
is specified in terms of some value which is called
tolerance.

Range or span:
The minimum & maximum values of a quantity for
which an instrument is designed to measure is
called its range or span.
Dynamic characteristics:
The set of criteria defined for instruments which measure
quantities that change rapidly with time is called ‘dynamic
characteristics’.
The various dynamic characteristics are:
i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error
Speed of response:
It is defined as the rapidity with which a measurement system responds to
changes in the measured quantity.

Measuring lag:
It is the retardation or delay in the response of a measurement system to
changes in the measured quantity. The measuring lags are of two types:
a) Retardation type:
In this case the response of the measurement system begins immediately
after the change in measured quantity has occurred.
b) Time delay lag:
In this case the response of the measurement system begins after a dead
time after the application of the input.
Fidelity:
It is defined as the degree to which a
measurement system indicates changes in the
measured quantity without dynamic error.
Dynamic error:
It is the difference between the true value of the
quantity changing with time & the value indicated
by the measurement system if no static error is
assumed. It is also called measurement error.
ERRORS IN MEASUREMENT

The types of errors are as follows:
i) Gross errors
ii) Systematic errors
iii) Random errors
Gross Errors:
The gross errors mainly occur due to
carelessness or lack of experience of a human
being.
These errors also occur due to incorrect
adjustment of instruments.
These errors cannot be treated mathematically.
These errors are also called ‘personal errors’.
Ways to minimize gross errors:
• The complete elimination of gross errors is not
possible but one can minimize them by the following
ways:
• Taking great care while taking the reading, recording
the reading & calculating the result
• Without depending on only one reading, at least
three or more readings must be taken preferably by
different persons.
Systematic errors:
A constant uniform deviation of the operation of
an instrument is known as a Systematic error
The systematic errors are mainly due to the
shortcomings of the instrument and the characteristics
of the material used in the instrument, such as
defective or worn parts, ageing effects,
environmental effects, etc.
Types of Systematic errors:

There are three types of systematic errors:
i) Instrumental errors
ii) Environmental errors
iii) Observational errors
Instrumental errors:
These errors can be mainly due to the following three reasons:
a) Shortcomings of the instrument:
These are because of the mechanical structure of the instruments,
for example friction in the bearings of various moving parts, irregular
spring tensions, reduction in tension due to improper handling, hysteresis,
gear backlash, stretching of the spring, variations in the air gap, etc.
Ways to minimize this error:
These errors can be avoided by:
i) selecting a proper instrument and planning the proper procedure for
the measurement
ii) recognizing the effect of such errors and applying the proper
correction factors
iii) calibrating the instrument carefully against a standard
b) Misuse of instruments:
A good instrument used in an abnormal way gives
misleading results. Poor initial adjustment,
improper zero setting, and using leads of high
resistance are examples of misusing a
good instrument. Such things do not cause
permanent damage to the instrument, but they
definitely cause serious errors.
C) Loading effects
Loading effects due to an improper way of using the instrument
cause serious errors. The best example of such a loading-effect
error is connecting a well calibrated voltmeter across
two points of a high-resistance circuit, where it gives a misleading
reading. The same voltmeter connected in a low-resistance circuit
gives an accurate reading.

Ways to minimize this error:
The errors due to the loading effect can be avoided by
using an instrument intelligently and correctly.
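The loading effect can be quantified with a simple voltage-divider model; this sketch and its numbers are illustrative assumptions, not from the source.

```python
def indicated_voltage(v_open, r_source, r_meter):
    """Voltage indicated by a voltmeter of finite internal resistance
    r_meter connected across a source with output resistance r_source
    (simple voltage-divider model of the loading effect)."""
    return v_open * r_meter / (r_source + r_meter)

# A 10 V open-circuit voltage measured with a 1 MΩ voltmeter:
print(indicated_voltage(10.0, 1e6, 1e6))    # 5.0 -> gross error in a high-resistance circuit
print(indicated_voltage(10.0, 100.0, 1e6))  # ~9.999 -> nearly correct in a low-resistance circuit
```

The same meter is "good" or "bad" depending only on the circuit it loads, which is exactly the point made above.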
Environmental errors:
These errors are due to conditions external to the measuring instrument. The
various factors resulting in these environmental errors are temperature changes,
pressure changes, thermal emf, ageing of equipment and frequency sensitivity of
the instrument.

Ways to minimize this error:
The various methods which can be used to reduce these errors are:
i) Using the proper correction factors and using the information supplied by the
manufacturer of the instrument
ii) Using the arrangement which will keep the surrounding conditions Constant
iii) Reducing the effect of dust and humidity on the components by hermetically
sealing the components in the instruments
iv) The effects of external fields can be minimized by using magnetic or
electrostatic shields or screens
v) Using the equipment which is immune to such environmental effects.
Observational errors:
These are the errors introduced by the observer.
There are many sources of observational errors, such as parallax
error while reading a meter, wrong scale selection, etc.

Ways to minimize this error:
To eliminate such errors one should use instruments with
mirrored scales, knife-edged pointers, etc.
The systematic errors can be subdivided as static and dynamic
errors. The static errors are caused by the limitations of the
measuring device while the dynamic errors are caused by the
instrument not responding fast enough to follow the changes in
the variable to be measured.
Random errors:
Some errors still result, though the systematic and
instrumental errors are reduced or at least accounted
for. The causes of such errors are unknown and
hence the errors are called random errors.
Ways to minimize this error
The only way to reduce these errors is by increasing
the number of observations and using the statistical
methods to obtain the best approximation of the
reading.
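The statistical treatment of repeated readings can be sketched with the standard library; the readings below are hypothetical example data.

```python
import statistics

# Hypothetical repeated readings of a nominally 10 V source:
readings = [10.02, 9.98, 10.05, 9.97, 10.01]

best_estimate = statistics.mean(readings)  # best approximation of the reading
spread = statistics.stdev(readings)        # measure of the random scatter

print(round(best_estimate, 3))  # 10.006
print(round(spread, 3))
```

Increasing the number of observations shrinks the uncertainty of the mean, which is why more readings (preferably by different observers) are recommended.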
STANDARD & CALIBRATION
CALIBRATION
 Calibration is the process of making an adjustment or marking a
scale so that the readings of an instrument agree with the
accepted & the certified standard.
 In other words, it is the procedure for determining the correct
values of measured by comparison with the measured or
standard ones.
 The calibration offers a guarantee to the device or instrument
that it is operating with required accuracy, under stipulated
environmental conditions.
 The calibration procedure involves the steps like visual inspection
for various defects, installation according to the specifications,
zero adjustment etc.,
CALIBRATION
Calibration is the procedure for determining the
correct values of the measured quantity by comparison with standard
ones. The standard device with which the comparison is
made is called the standard instrument. The instrument
whose accuracy is unknown and which is to be calibrated is called the
test instrument. Thus, in calibration, the test instrument is
compared with the standard instrument.
Types of calibration methodologies
There are two methodologies for obtaining the
comparison between test instrument & standard
instrument. These methodologies are
i) Direct comparisons
ii) Indirect comparisons
Direct comparisons:
 In a direct comparison, a source or generator
applies a known input to the meter under test.
 The ratio of what meter is indicating & the known
generator values gives the meter s error.
 In such case the meter is the test instrument while
the generator is the standard instrument.
 The deviation of meter from the standard value is
compared with the allowable performance limit.
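The direct-comparison check above can be expressed in a few lines; the function names and numeric values are illustrative assumptions.

```python
def meter_error(indicated, applied):
    """Deviation of the meter under test from the known source value."""
    return indicated - applied

def within_limit(indicated, applied, limit):
    """Compare the deviation with the allowable performance limit."""
    return abs(meter_error(indicated, applied)) <= limit

# A calibrated source applies a known 5.00 V; the test meter shows 5.08 V.
print(meter_error(5.08, 5.00))        # 0.08 (approximately; floating point)
print(within_limit(5.08, 5.00, 0.1))  # True  -> within a 0.1 V limit
```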
Indirect comparisons:
In the indirect comparison, the test instrument is compared
with a standard instrument of the same type, i.e., if the
test instrument is a meter, the standard instrument is also a meter; if
the test instrument is a generator, the standard instrument is also a
generator, and so on.
If the test instrument is a meter, then the same input is
applied to the test meter as well as to the standard meter.
In the case of generator calibration, the outputs of the generator
under test and of the standard are set to the same nominal levels.
A transfer meter is then used to measure the outputs
of both the standard and the test generator.
STANDARD
 A standard of measurement is a physical representation
of a unit of measurement. A unit is realized by reference
to an arbitrary material standard or to natural
phenomena including physical and atomic constants.
 The term ‘standard’ is applied to a piece of equipment
having a known measure of physical quantity. For
example, the fundamental unit of mass in the SI system is
the kilogram, defined as the mass of the cubic decimeter
of water at its temperature of maximum of 4°C.
CLASSIFICATIONS OF STANDARDS
 International standards
 Primary standards
 Secondary standards
 Working standards
 Current standards
 Voltage standards
 Resistance standards
 Capacitance standards
 Time and frequency standards
CLASSIFICATIONS OF STANDARDS
International Standards
 The international standards are defined by international
agreement.
 They represent certain units of measurement to the closest
possible accuracy that production and measurement
technology allow.
 International standards are periodically checked and evaluated
by absolute measurements in terms of the fundamental units.
 These standards are maintained at the International Bureau of
Weights and Measures and are not available to the ordinary
user of measuring instruments for purposes of comparison or
calibration
CLASSIFICATIONS OF STANDARDS
Primary Standards
 The primary standards are maintained by national standards laboratories in different
places of the world.
 The National Bureau of Standards (NBS) in Washington is responsible for maintenance of
the primary standards in North America.
 Other national laboratories include the National Physical Laboratory (NPL) in Great
Britain and the oldest in the world, the Physikalisch Technische Reichsanstalt in
Germany.
 The primary standards, again representing the fundamental units and some of the
derived mechanical and electrical units, are independently calibrated by absolute
measurements at each of the national laboratories.
 The results of these measurements are compared with each other, leading to a world
average figure for the primary standard.
 Primary standards are not available for use outside the national laboratories.
 One of the main functions of primary standards is the verification and calibration of
secondary standards.
CLASSIFICATIONS OF STANDARDS
Secondary Standards
 Secondary standards are the basic reference standards used in the
industrial measurement laboratories.
 These standards are maintained by the particular involved industry and
are checked locally against other reference standards in the area.
 The responsibility for maintenance and calibration rests entirely with
the industrial laboratory itself.
 Secondary standards are generally sent to the national standards
laboratory on a periodic basis for calibration and comparison against
the primary standards.
 They are then returned to the industrial user with a certification of their
measured value in terms of the primary standard
CLASSIFICATIONS OF STANDARDS
Working Standards
 Working standards are the principle tools of a measurement
laboratory.
 They are used to check and calibrate general laboratory
instruments for accuracy and performance or to perform
comparison measurements in industrial applications.
 A manufacturer of precision resistances, for example, may use
a standard resistor in the quality control department of his
plant to check his testing equipment.
 In this case, the manufacturer verifies that his measurement
setup performs within the required limits of accuracy.
