
Republic of the Philippines

BATANGAS STATE UNIVERSITY

Pablo Borbon Main II

Batangas City

COLLEGE OF ENGINEERING, ARCHITECTURE & FINE ARTS

Chemical and Food Engineering Department

VALVES: DEFINITION, TYPES, PARTS, PURPOSE AND FUNCTIONS

In Partial Fulfilment

of the Requirements of
PROCESS DYNAMICS AND CONTROL

Prepared By:

Anna Bianca C. Dimapasok

Angela Ilagan

Thrisha Lopez

Arriza Millene M. Macalalad

Kathleen C. Opeña

Ramuelle Jade P. Piñon

Roy Vincent Tan

Danica Elaine Audrey D. Valdez


ChE-3102

Introduction

Valves are mechanical devices that control flow and pressure within a system or process.
They are essential components of piping systems that carry liquids, gases, vapors, sludge, and more.
Different types of valves are available: gate, globe, plug, ball, butterfly, check, diaphragm,
clamp, pressure relief, control valves, etc. Each of these types has several models, each with different
features and functional capabilities. Some valves are self-operated, while others are operated
manually or by an actuator, pneumatically or hydraulically.

The valve is one of the most basic and indispensable components of our modern
technological society. It is essential for virtually all manufacturing processes and all energy
production and supply systems. However, it is one of the oldest products known to man, with a
history of thousands of years.

There has been a need to regulate the flow of water since people began building cities and
planting crops. Before the first true pipe systems (the aqueducts), flow was controlled with trees,
tree trunks, and stones. The Romans were the first to construct something resembling a formal canal
system, and they are credited with developing the valve. The first valves were made of bronze. They
were very solid and built to be welded to the pipes already in place; these pipes were also bronze,
or occasionally lead. In design, the first valves were very basic but effective: the body of the valve
included a plug with a hole, a bottom support, and a long lever for turning the plug. Several of these
valves have been found throughout the Mediterranean region. These artifacts show the careful thought
that the Romans put into the water systems of their towns.

The industrial revolution brought the next major improvement to this otherwise basic system.
When Thomas Newcomen invented his steam engine, he needed improved valves that could manage
steam, even at high pressure. The lessons learned from steam engineering went on to make irrigation
and plumbing valves far more functional. Eventually, valves could be made in large quantities; the
ability to produce them on an assembly line allowed more and more towns, farms, and people to use
the many valves available. Valves have also become an important part of the automotive world, where
they are used in engines and other areas of the automobile.

Valves are usually made of metal or plastic and have several distinct parts. The outer casing is
the body, which contains the pressure of the system. Inside it is the seat, often a solid metal ring
with a soft rubber or plastic seal, against which the valve closes to make a completely tight seal.
The moving inner element, called the disc, presses into the seat when the valve is closed. There is
also some mechanism to open and close the valve, whether a manual lever or wheel (as in a faucet
or stopcock) or an automatic actuator (as in a car or steam engine).

Valves are often used to contain dangerous liquids or gases, perhaps toxic chemicals,
flammable oil, high pressure steam, or compressed air, which should not be allowed to escape under
any circumstances. In theory, a valve should be perfectly safe and, once closed, should never allow
liquid or gas to pass through. In practice, that is not entirely true. Sometimes it is better for a valve
to fail, intentionally, to protect some other part of a system or machine. For example, in a
steam engine powered by a boiler in which steam builds up, if the pressure suddenly rises too
high you need a valve to open, let the steam escape, and release the pressure safely before the
entire boiler explodes catastrophically. Valves that work in this way are called safety valves. They are
designed to open automatically when the liquid or gas they contain reaches a certain pressure
(although many systems and machines have safety valves that can be opened manually for the same
purpose).

 Main Parts of a Valve


1. The body is the part of a valve that is attached to the pipes and holds all the parts together.
2. The opening element is the part that opens and closes the valve.
3. The stem raises or lowers the opening element.
4. The hand wheel or handle allows the operator to turn the stem.
5. The bonnet is a separate housing that is bolted tightly to the top of the valve body.
6. The gland packing is held in place by bolts, or sometimes it is screwed into place.
 Types of Valve End Connections

1. Screwed or Threaded

2. Flanged

3. Butt Welded

4. Socket Welded

5. Wafer and Lug

There are a large variety of valves and valve configurations.


 Type of Valve According to Mechanical Motion

1. Multi-turn valves - These require several turns of a handwheel to drive the closure member
through a threaded stem; depending on the stem thread, they can be opened or closed at various speeds.

2. Quarter turn valves - Offer a full range of motion in a 90-degree turn of the handle.

 Type of Valve According to the Method of Actuation

1. Manual Valves

2. Actuated Valves

3. Automatic Valves

4. Non-Return Valves

5. Special Purpose Valves

1. Gate Valves
 Definition
A gate valve is a type of valve that uses a gate or wedge-type disk that moves perpendicular
to the flow to start or stop the flow of fluid in a pipe. It is the most common type of valve used
in any process plant. It is a linear-motion valve used to start or stop fluid flow. In service,
these valves are kept either fully open or fully closed.

Figure 1.A – Gate Valve


When the gate valve is fully open, the disk is completely removed from the flow path, so it
offers virtually no resistance to flow. As a result, there is very little pressure drop when
fluid passes through a gate valve.
Gate valves are used when a straight-line flow of fluid and minimum flow restriction
are needed. Gate valves use a sliding plate within the valve body to stop, limit, or permit full
flow of fluids through the valve. The gate is usually wedge-shaped. When the valve is wide
open, the gate is fully drawn into the valve bonnet. This leaves the flow passage through the
valve fully open with no flow restrictions. Therefore, there is little or no pressure drop or
flow restriction through the valve.

 Parts of a Gate Valve

A gate valve's main components are body, seat, gate, stem, bonnet, and actuator.

Figure 1.B – Parts of a Gate Valve

Valve's Body or Shell. A gate valve is made up of several parts that allow it to
function properly. The principal part of this valve is the body, or shell, which forms the primary
pressure boundary and connects the inlet and outlet pipes in a piping system. The body is often
cylindrical and houses the gate, also known as the valve's disk, and the valve's seats.

Bonnet. The covering over the opening at the top of the valve body is called the bonnet. A
valve's bonnet is often screwed on so that maintenance or repair work can be done without
removing the entire gate valve from the piping system, which would cause more trouble than
necessary. The bonnet houses other internal parts of the valve such as the stem, gland packing,
and gland follower.

Disk. The internal parts are known collectively as a valve’s trim and these are the
parts that enable the valve to control the flow of water and other basic actions to be made.
The first integral part of the trim is the disk. The disk is what closes and opens, thus
explaining why it is often referred to as the gate in this particular kind of valve. When the
hand wheel or actuator is turned closed, the disk also seals up, preventing any water flow
from occurring. When the hand wheel or actuator is turned open, the disk opens up,
allowing water flow to happen.

Seat. The seat is the partner of the disk. It can be either a "V"-shaped (pointed-valley) seat
for a wedge disk or a parallel seat for a parallel disk. The disk and the seat must fit together
snugly so that the valve seals and no flow can occur when the valve is closed.

Stem. The stem of the valve connects the hand wheel or actuator to the disk. The stem turns as
the hand wheel is turned, and this rotation raises or lowers the disk. There are two kinds of
stems: the rising stem and the non-rising stem. A rising stem rises above the hand wheel as the
valve is opened, while a non-rising stem does not.

Hand Wheel. The hand wheel is the circular part found at the very top of a gate
valve. It is what controls the stem, which in turn controls the disk. It is turned clockwise to
close the valve and counter-clockwise to open the valve.

Gland Packing. The gland packing is composed of a material that creates a seal
between the stem and the trim. The gland follower extends into the gland packing. It is
necessary for the gland packing to be properly compressed to ensure no breach occurs.

 Types of Gate Valves

Gate valves are divided into a number of classes, depending on the type of disc and stem.
Gate valves are classified by:

 Gate Valve types as per Type of Closing Element:


a. Parallel Disk Gate Valve: Parallel disk gate valves consist of two discs
that are forced apart against parallel seats by a spring at the point of closure. The
best-known type is the knife gate valve, which has a flat gate between two parallel seats
(an upstream and a downstream seat) to achieve the required shut-off. The application of a
parallel gate valve is limited to low pressures and low pressure drops.

Figure 1.C - Parallel Disk Gate Valve

b. Solid-Wedge Gate Valve: The solid, or single-wedge, gate valve is the
most widely used and lowest-cost type in the process industry. The purpose of the wedge shape
is to introduce a high supplementary seating load. A solid-wedge gate valve can be installed
in any position, is suitable for almost all fluids, and is practical for turbulent flow services.


Figure 1.D - Solid Wedge Gate Valve
c. Flexible Wedge Gate Valve: A flexible wedge gate valve employs a
flexible wedge, a one-piece disk with a cut around the perimeter (the cut varies in size, shape,
and depth). Thermal expansion and contraction pose no problem in this kind of gate valve because
the disk can compensate for them and remains easy to open. Flexible wedge gate valves are widely
used in steam systems to prevent thermal binding.

Figure 1.E - Flexible Wedge Gate Valve


d. Split Wedge Gate Valve: The split wedge of this type of gate valve is
made in two separate halves. This allows the wedge angle between their outer faces to fit the
seat (self-adjusting and self-aligning to both seating surfaces).
Figure 1.F - Split Wedge Gate Valve

 Gate Valve types as per Type of Stem:


a. Rising Stem Gate Valve with Outside Screw: This type of gate
valve is also known as the OS & Y type (outside screw and yoke). The stem rises
while opening and lowers while closing the valve, giving a visual indication of the
gate valve position. The stem threads never contact the flow medium, so they are not
subject to corrosion or erosion.

Figure 1.G - Rising Stem Gate Valve

b. Non-Rising Stem Gate Valve: This is also known as an inside screw
valve. The stem of a non-rising stem gate valve is threaded into the gate. The hand wheel
and stem move together, and there is no rising or lowering of the stem. The stem is in
contact with the flow medium.

Figure 1.H - Non-Rising Stem Gate Valve

 Working Principle

The working principle of a gate valve is quite simple. When the gate of the valve is lifted
out of the flow path, the valve opens; when the gate returns to its position, the valve closes.
This gate movement is achieved by manually turning the hand wheel, which rotates the valve stem,
and an internal threaded mechanism converts that rotation into vertical movement of the gate.
When the hand wheel is rotated clockwise, the stem and gate move downward across the fluid flow
line until the gate sits tightly between the two seats; there will then be no leakage of fluid
through the valve once it is completely closed. When the hand wheel is rotated counter-clockwise,
the stem and gate move upward, the valve opens from the closed position, and fluid is permitted
to flow through the gate valve. Once the gate valve is completely open, it offers little or no
resistance to the flow of fluid. A gate valve can also be used in a partially open condition,
but erosion of the gate then becomes an issue because the fluid strikes the gate. Therefore the
gate valve should be used either completely closed or completely open.
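As a rough illustration of the threaded-stem action described above, the short Python sketch below converts handwheel turns into vertical gate travel. The 6 mm thread lead and 100 mm gate travel are assumed example values for illustration only, not figures taken from this report.

    # Sketch of the rising-stem action of a gate valve (assumed example values).
    def gate_position(turns, thread_lead_mm=6.0):
        """Vertical gate travel (mm) produced by a number of handwheel turns."""
        return turns * thread_lead_mm

    def turns_to_open(gate_travel_mm=100.0, thread_lead_mm=6.0):
        """Handwheel turns needed to move the gate from fully closed to fully open."""
        return gate_travel_mm / thread_lead_mm

    if __name__ == "__main__":
        print(f"5 turns lift the gate {gate_position(5):.0f} mm")
        print(f"Fully opening the valve takes about {turns_to_open():.1f} turns")

The large number of turns needed to travel the full gate height is also why gate valves are slow to open and close, as noted under the disadvantages below.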

Figure 1.I – Working Principle of a Gate Valve

 Functions

A gate valve is designed to turn the flow of liquid through pipes on and off. It is
generally used where the valve is operated infrequently. It is also helpful in controlling the
flow and pressure through pipes and valves. Also known as a sluice valve, it controls flow by
moving a round or rectangular wedge in and out of the liquid path.

Gate valves are used to shut off the flow of fluid by inserting a rectangular gate or
wedge into the path of a flowing fluid. Gate valves require very little space along the pipe
axis and hardly restrict the flow of fluid when the gate is fully opened enabling gate valves to
offer straightway flow with very little pressure drop.

 Purpose

Gate valves are more commonly used in refineries and petrochemical plants, where
pressure remains relatively low but temperature may be very high. Gate valves are used less
in upstream oil and gas production facilities because of high operating pressures, long
opening and closing times, and the severe environmental conditions of marine atmospheres.
Gate valves are mostly used with larger pipe diameters (from 2″ up to the largest pipelines)
since they are less complex to construct than other types of valves in large sizes. More
recently, however, the larger sizes have been supplanted by butterfly valves where
installation space is limited.

 Advantages and Disadvantages


 Advantages of a Gate Valve

o Gate valves provide good on/off or shutoff capability.

o The pressure drop during operation is very low.
o Gate valves are bi-directional and can provide shut-off in either flow
direction.
o They are suitable for high-pressure and high-temperature applications and require
little maintenance.
o Gate valves tend to be slightly cheaper than ball valves of the same size and
quality.
 Disadvantages of a Gate Valve
o It cannot be used to regulate or throttle the flow.
o A gate valve is slow in operation and cannot be quickly opened or closed. This
also reduces the chance of water hammer.
o It is prone to vibration and noise in the partially open state.
o It is more subject to seat and disk wear.
o Gate valves require a large space envelope for installation, operation, and
maintenance.
o Repairs, such as lapping and grinding, are generally more difficult to
accomplish.
o Some designs of gate valves are susceptible to thermal or pressure locking,
depending upon the application.

2. Butterfly Valves
 Definition
A butterfly valve is a flow control device that incorporates a rotational disk to control
the flowing media in a process. The disk is always in the passageway, but because it is
relatively thin, it offers little resistance to flow. Butterfly valve technology has evolved
dramatically over the past half century, as has its industry popularity. This popularity can be
partly attributed to the quarter-turn operation, tight shutoff and its availability in a variety of
materials of construction.

A butterfly valve is a quarter-turn rotational-motion valve used to stop, regulate, and start
flow. Butterfly valves are easy and fast to open: a 90° rotation of the handle provides complete
closure or opening of the valve. Large butterfly valves are usually equipped with a so-called
gearbox, in which the handwheel is connected to the stem through gears. This simplifies the
operation of the valve, but at the expense of speed.

Figure 2.A – Butterfly Valve


 Parts of a Butterfly Valve
The butterfly valve consists of only four main components: body, disk, stem and
seat.

Figure 2.B – Parts of a Butterfly Valve

Body. Butterfly valves generally have bodies that fit between two pipe flanges. The
most common body designs are lug and wafer. The lug body has protruding lugs that
provide bolt holes matching those in the pipe flange. A wafer body does not have protruding
lugs. The wafer valve is sandwiched between the pipe flanges, and the flange bolts surround
the body.

Each type of body has advantages, some of which are listed:


 The wafer style is less expensive than a lug style.
 Wafer designs do not transfer the weight of the piping system directly through
the valve body.
 A lug body allows dead-end service or removal of downstream piping.

Disk. The flow closure member of a butterfly valve is the disk. Many variations of
the disk design have evolved relative to the orientation of the disk and stem in an attempt to
improve flow, sealing and/or operating torque. The disk is the equivalent of a plug in a plug
valve, gate in a gate valve or a ball in a ball valve. Rotating the disk one-quarter turn or 90
Degrees opens and closes the butterfly valve.
Stem. The stem of the butterfly valve may be a one-piece shaft or a two-piece (split-
stem) design. The stem in most resilient seated designs is protected from the media, thus
allowing an efficient selection of material with respect to cost and mechanical properties. In
high-performance designs, the stems are in contact with the media and, therefore, must be
compatible, as well as provide the required strength for seating and unseating the disk from
the seat.

Seat. The seat of a resilient-seat butterfly valve utilizes an interference fit between
the disk edge and the seat to provide shutoff. The material of the seat can be made from
many different elastomers or polymers. The seat may be bonded to the body or it may be
pressed or locked in.
In high-performance butterfly valves, the shutoff may be provided by an
interference-fit seat design or a line-energized seat design, where the pressure in the pipeline
is used to increase the interference between the seat and disk edge. The most common seat
material is polytetrafluoroethylene (PTFE) or reinforced PTFE (RTFE) because of the wider
range of compatibility and temperature range.
Metal seats are also offered in high-performance butterfly valves. These metal seats
allow a butterfly valve to be used at even higher temperatures, up to 1,000 °F. Fire-safe
designs are offered that provide the shutoff of a polymer-seat valve before a fire, with a
metal seal backup that provides shutoff during and after a fire.

 Types of Butterfly Valve


Depending on disc closure design, connection design, and actuation method,
Butterfly valves can be categorized into several types.

 Based on Disc Closure Design

a. Concentric Butterfly Valve: The concentric butterfly valve is the most basic
type of butterfly valve design. In this design, the stem passes through the
centreline of the disc. This is also known as a zero-offset valve. Concentric
butterfly valves are used for low-pressure services.
b. Eccentric Butterfly Valve: In an eccentric butterfly valve, the stem does not
pass through the centreline of the disc. There are three types of offset valves.

i. Single-Offset: It is the butterfly valve where the stem is located right


behind the centreline of the disc.

ii. Double Offset: The stem is located behind the disc with an additional
offset to one side.

iii. Triple Offset: For highly critical applications, a triple offset butterfly
valve is used. The third offset is in the geometry of the disc-to-seat contact axis.
This design results in minimal seat contact, and therefore very little wear and highly
efficient sealing.

Figure 2.C - Single, Double and triple Offset Butterfly Valve Design

 Based on Piping Connection

a. Lug type: The lug-style butterfly valve designs have threaded lugs outside the
valve body. Using two sets of studs, the valve is connected with piping flanges.
One end of the line can be disconnected without affecting the other side since
each flange has its own bolts.

b. Wafer Type: A wafer-type butterfly valve is sandwiched between two pipe
flanges, and the flange bolts surround the valve body. This is the most
economical butterfly valve. Long bolts are used, passing across both flanges and the
valve body. The wafer version of the butterfly valve is designed to give a tight seal
against a bi-directional pressure difference in the fluid flow.

c. Flanged Butterfly Valve

d. Butt-welded Butterfly Valves for high-pressure applications

 Based on Operation method


a. Manual Actuation

i. Lever Operated

ii. Gear Operated

b. Automatic Actuation

i. Electric

ii. Pneumatic

iii. Hydraulic

 Based on Seat Material

a. Soft Seated

b. Metal Seated

 Working Principle

A butterfly valve belongs to a family of valves called quarter-turn valves. Butterfly
valves have a relatively simple construction. The main components of a butterfly valve are the
body, disc, stem, and seat. In operation, the valve is fully opened or closed when the disc is
rotated a quarter turn. The "butterfly" is a metal disc mounted on a rod and positioned in the
center of the pipe. The rod passes through the disc to the outside of the valve. Rotating the
rod, through a hand wheel or actuator, turns the disc either parallel or perpendicular to the
flow. When the plane of the disc is at a right angle, or perpendicular, to the centerline of the
pipe, the butterfly valve is closed. When the disc is rotated a quarter turn so that its plane is
in line with the centerline of the pipe, the butterfly valve is fully open and allows an almost
unrestricted passage of the fluid. The valve may also be opened incrementally to throttle flow.
Butterfly valves should normally be mounted with the stem horizontal, since this
allows debris in the pipe to be swept clear as the valve is closed. Where the stem is vertical,
solids can lodge under the disc at the spindle and damage the seal. Furthermore, when the valve
is opened, the bottom of the disc should lift away from solids that may have accumulated on the
upstream side of the disc. It is important to note that pipelines containing butterfly valves
cannot be 'pigged' for cleaning.
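As a rough geometric illustration of this incremental (throttling) opening, the Python sketch below estimates the fraction of the pipe bore uncovered as the disc rotates, treating the disc as infinitely thin so that it blocks only its projected area. This 1 - cos(theta) relation is an idealization introduced here for illustration, not a formula from this report; it ignores the hub, disc thickness, and the actual installed flow characteristic.

    import math

    def open_area_fraction(theta_deg):
        """Approximate fraction of the pipe bore uncovered by a thin butterfly disc.

        theta_deg is the rotation from the closed position (0 = closed, 90 = fully open).
        The disc is idealized as infinitely thin, so the blocked area is just its
        projection, A * cos(theta); hub and disc thickness are ignored.
        """
        theta = math.radians(max(0.0, min(90.0, theta_deg)))
        return 1.0 - math.cos(theta)

    if __name__ == "__main__":
        for angle in (0, 15, 30, 45, 60, 90):
            print(f"{angle:2d} deg open -> about {open_area_fraction(angle):.0%} of the bore uncovered")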

Figure 2.D – Working Principle of a Butterfly Valve

 Functions

The main functions of a butterfly valve are:

o Flow regulation: Flow can be easily controlled or regulated simply by turning


the valve wheel. Using a flow controller the process can be automated.

o Flow Isolation

o Prevention of Backflow: Butterfly valves can be used for backflow


prevention in some situations.

 Purpose

Early use of butterfly valves focused on water applications, but new designs and
component materials have allowed them to be utilized in growing industrial fluid applications.
Presently, butterfly valves can be found in almost every chemical plant handling a variety of
diverse fluids. Butterfly valves can be used across a wide range of applications. They perform
well in large volume water applications and slurry applications. The following are some typical
applications of Butterfly valves:

o In fuel handling systems


o Water supply
o Power generation
o Wastewater treatment and slurry services
o Compressed air and gas services
o Fire protection, air and gas supply
o Steam services
o Vacuum services
o Food processing
o Lubrication systems
o Pharmaceutical
o Chemical and oil industries
o Marine systems
o Sanitary valve applications
Basically, Butterfly Valves are suitable for use in:

o Constant Load Applications

o Space-Restrictive Applications

o Throttling Valves

 Advantages and Disadvantages

 Advantages of a Butterfly Valve

o Compact design; little space requirement.

o Lightweight; easily supported by the piping system.

o Quick operation; less opening or closing time.

o Easy to Install.

o Simple operation due to low operation torque.


o Available in very large sizes

o Improved energy efficiency

o Low-pressure drop and high-pressure recovery

o Long valve life

o Relatively inexpensive to build.

o Fewer parts; hence, reduced maintenance.

 Disadvantages of a Butterfly Valve

o Throttling service is limited to low differential pressure

o Disc movement is affected by flow turbulence.

o The valve disc is always under pressure and can interrupt the flow even in an
open position.

o Possibility of Cavitation and choked flow is a concern

o Poor sealing function.

o Not suitable for high differential pressure.

3. Safety Valve

The safety valve is a valve that protects equipment against explosion or damage; it is installed
mainly on pressure equipment such as chemical plant vessels, electric power boilers, and gas
storage tanks.

The safety valve is a type of valve that is activated automatically when the pressure on the inlet
side of the valve rises to a predetermined value: the valve disc opens and the fluid (steam or gas)
is discharged, and when the pressure drops back to the prescribed value the valve disc closes again.
The safety valve is a final safety device that controls the pressure and discharges a certain amount
of fluid by itself, without any electrical power support.

Safety valves support not only the safety of the energy industry, but also the safety of our lives.
DEFINITION

 A valve that opens automatically to relieve excessive pressure.


 A part of a machine that opens to release pressure if it becomes too high
 A safety valve is a valve that acts as a fail-safe.
 An example of safety valve is a pressure relief valve (PRV), which automatically releases a
substance from a boiler, pressure vessel, or other system, when the pressure or temperature
exceeds preset limits.

FUNCTION

The main function of safety relief valves is the protection of life, property, and the
environment. A safety relief valve is a (safety) device designed to protect a pressurized vessel or
system from overpressure if all other safety systems fail.

A safety relief valve is designed to open and relieve excess pressure from vessels or equipment
and then to reclose to prevent further release of liquid, gas, or vapor once normal conditions have
been restored.

1. The nozzle inside the safety valve receives increasing pressure from the inlet side of the
valve.

2. When the pressure becomes higher than the set pressure, the disc starts to lift and the fluid
is discharged.

3. When the pressure falls back to the predetermined value, the force of the spring closes the
disc again (a force-balance sketch follows below).

COMPONENTS OF A SAFETY VALVE


DIFFERENT TYPES OF SAFETY VALVE

There are several notable differences between the terminology used in the USA and Europe.
One of the most important differences is that a valve referred to as a ‘safety valve’ in Europe is
referred to as a ‘safety relief valve’ or ‘pressure relief valve’ in the USA. In addition, the term ‘safety
valve’ in the USA generally refers specifically to the full-lift type of safety valve used in Europe.

Spring-loaded Pressure-relief Valves

Generally, the term safety valve refers to the spring-loaded pressure-relief valve, the
most common type. The load of the spring is designed to press the disc against the inlet pressure.
Depending on the fluid type, such as steam, gas, or liquid, a bellows model can be used to
compensate for the back-pressure effect.

Pilot-operated Pressure-relief Valves

Pilot-operated pressure-relief valves are composed of a pilot assembly and a main valve. Whereas
spring-loaded pressure-relief valves use the force of a spring against the inlet pressure, the
relieving pressure and reseating pressure of a pilot-operated pressure-relief valve are controlled
by the pilot assembly, which behaves much like a spring-loaded pressure-relief valve. There is no
adjusting function in the main valve. Pilot-operated pressure-relief valves are available in larger
size variations than the spring-loaded type and are applied in severe conditions such as high pressure.

Dead-Weight Pressure-relief Valves

When the design pressure of the pressure vessel is set at a very low value, a dead-weight
pressure-relief valve sets the relieving pressure by the weight of the disc alone.
The vacuum relief valve uses the same functional characteristic, relieving the pressure difference
when the inside of the pressure vessel falls to a negative (vacuum) pressure.
Full Bore Type Safety Valve

A full bore safety valve has no protrusions in the bore, and the valve lifts far enough that the
minimum flow area at any section, at or below the seat, becomes the controlling orifice. The flow
passage area at the valve seat is sufficiently larger than the nozzle throat area at the inlet
side.

Lift type safety valve

A lift-type safety valve can be used when the required discharge capacity is smaller; steam boilers
need this type. The lift of the valve is at least 1/40 and less than 1/4 of the inside diameter of
the valve seat, and the flow passage area at the valve port is the smallest flow area when the valve
disc is opened.
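The 1/40-to-1/4 lift range quoted above can be turned into a quick check. The sketch below computes the allowed lift band for a given seat bore and the corresponding curtain area (pi x seat bore x lift), the annular opening uncovered as the disc rises; the 50 mm seat bore is an assumed example value, not a figure from this report.

    import math

    def lift_limits(seat_bore_mm):
        """Allowed lift band for a lift-type safety valve: 1/40 to 1/4 of the seat bore."""
        return seat_bore_mm / 40.0, seat_bore_mm / 4.0

    def curtain_area(seat_bore_mm, lift_mm):
        """Cylindrical 'curtain' area (mm^2) uncovered when the disc lifts off the seat."""
        return math.pi * seat_bore_mm * lift_mm

    if __name__ == "__main__":
        bore = 50.0                                  # assumed example seat bore, mm
        low, high = lift_limits(bore)
        print(f"Lift must lie between {low:.2f} mm and {high:.2f} mm")
        print(f"At {low:.2f} mm lift the curtain area is {curtain_area(bore, low):.0f} mm^2")
        print(f"The seat bore area is {math.pi * (bore / 2.0) ** 2:.0f} mm^2")

At the low end of the lift range the curtain area is much smaller than the seat bore area, which is why the flow passage at the valve port is the controlling (smallest) area for this type.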

Relief valve or pressure relief valve (PRV)


A relief valve, or pressure relief valve (PRV), is a type of safety valve used to control or limit the
pressure in a system; the pressure might otherwise build up and create a process upset, instrument or
equipment failure, or fire.

The valve is mainly applied to liquids. It operates automatically, opening its valve disc when the inlet
pressure rises to the set pressure and closing the valve disc when the inlet pressure falls back below
the set pressure.

PURPOSE

Protection of life, property, and the environment is the primary objective of a safety valve. A
safety valve is designed to open and relieve excess pressure from vessels or machinery, and then to
reclose and prevent further fluid release once normal conditions have been restored.

A safety valve is a safety mechanism and, in certain circumstances, the last line of defense. It
is necessary to ensure that the safety valve is capable of working in all conditions and at all times.
A safety valve is not a pressure regulator or process valve and should not be misused as such: it is
required to work for one reason only, overpressure protection.

Reasons for excess pressure in a vessel

There are a number of reasons why the pressure in a vessel or system can exceed a predetermined
limit. API Standard 521 / ISO 23251 Sect. 4 provides detailed guidance on the causes of
overpressure. The most common are:

 Blocked discharge
 Exposure to external fire, often referred to as “Fire Case”
 Thermal expansion
 Chemical reaction
 Heat exchanger tube rupture
 Cooling system failure

Each of the events mentioned above can occur individually and separately from the others, or they
can take place concurrently. Each cause of overpressure also creates a different mass or volume flow
to be discharged, e.g. a small mass flow for thermal expansion and a large mass flow in the event of
a chemical reaction. For the sizing and selection of a suitable pressure relief system, it is the
duty of the customer to assess the worst-case scenario.

PARTS OF A SAFETY VALVE

The standard design of a safety valve has two fundamental parts: the valve body, which is set
at a right angle to the valve inlet, and the inlet itself, generally referred to as the nozzle. It is
important to know the sections of a valve so that you can use them to the fullest extent when the
need arises. Typical designs consist of a variety of components.

Valves with a spring inside are the ones you will normally encounter. The nozzle leads up to the
pressure-holding disc. The outlet connection can be screwed so that it attaches to the discharge
piping; in certain situations, if the bonnet is closed, there is no outlet connection.

The approach channel, or inlet tract, may be of full-nozzle or semi-nozzle design. Aside from
the disc, this part is the only portion of the safety valve that is exposed to the process fluid
during normal operation. Full nozzles are recommended for high-pressure applications, especially
if the fluid is very corrosive. Semi-nozzles, in which only the seat ring is replaceable rather
than the entire inlet, are the alternative where that is preferred.

In the normal operating condition, the disc is held against the nozzle seat by the spring,
which is housed in a bonnet mounted above the body. Depending on the valve's application, the
bonnet can be open or closed. In pop-action valves, the disc holder plays a key role in the
rapid-opening function. The spring controls the force applied to the disc. The springs used in
safety valves are most often made of carbon steel.

The spring adjuster is used to set the amount of compression in the spring, and therefore the
pressure at which the disc is lifted from its seat. Reviews of safety valves, such as those of
Dresser Inc. valves, typically stress the reliability of all of the sections mentioned above. No
matter how good the valve, it is of little use if you do not understand how its parts work; such
protective devices are not inexpensive, so it pays to apply them properly.

Advantages:

 Most reliable type if properly sized and operated.


 Versatile — can be used in many services.
 Wide range of material available.
 Wide range of chemical compatibility.
 High operating temperature.
 Standard Piping dimensions.
 Compatible with fouling and/or dirty service.
Disadvantages:
 Relieving pressure affected by back pressure.
 Susceptible to chatter if built-up back pressure is too high.
 Prone to leakage (if no soft seat).
 Long simmer or long flow period.
 Prone to chatter on liquid service unless with special trim.
 Vulnerable to effects of initial pressure losses.
 Sensitive to the effect on the back pressure (affects the set point and the capacity).
 Limited in pressure size ratio.

Globe Valves

 Definition

A globe valve is a linear-motion valve specifically designed to stop, start, and control
flow. The globe valve's disk can be completely removed from the flow path, or the flow path
can be completely closed. Traditional globe valves can be used for isolation and throttling
services. Globe valves are widely used to regulate flow. To avoid premature failure and ensure
adequate operation, the range of flow control, pressure drop, and duty must be taken into
account in the design of the valve. Specially designed valve trims are needed for valves
undergoing high-differential-pressure throttling service.
Figure No. – Globe Valve

The maximum differential pressure across the valve disc should usually not exceed
20 percent of the maximum upstream pressure or 200 psi (1380 kPa), whichever is less. For
applications exceeding these differential pressure limits, valves with special trim can be built.
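The 20 percent / 200 psi guideline above is easy to express as a quick check. The minimal sketch below applies it to a couple of assumed upstream pressures; the example pressures are illustrative values, not data from this report.

    def max_allowable_dp(upstream_psi):
        """Guideline limit for the differential pressure across a standard-trim globe valve
        disc: the lesser of 20 % of the maximum upstream pressure and 200 psi."""
        return min(0.20 * upstream_psi, 200.0)

    if __name__ == "__main__":
        for p in (500.0, 1500.0):        # assumed example upstream pressures, psi
            print(f"Upstream {p:.0f} psi -> limit the differential to {max_allowable_dp(p):.0f} psi "
                  "(special trim is needed beyond this)")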

 Parts of a Globe Valve


Listed are the main components of a globe valve.

Figure No. – Parts of a Globe Valve

o Bonnet: The bonnet is the outer part of the valve, which encompasses these other
parts.

o Cage: The cage surrounds the stem within the valve.

o Stem. The stem connects the disk to the valve actuator or hand wheel, transmitting
the force.

o Disk (Plug). The part of the globe valve that moves perpendicular to the seat. It's
the moveable physical barrier that blocks (or frees) the flow. When closed, the disk
sits against the seat, plugging the flow. When opened, the disk sits above the seat,
allowing liquid to pass.
Types of Globe Valve Disk

 The ball disk design. Used in low-pressure and low-temperature systems. It


is capable of throttling flow, but in principle, it is used to stop and start the
flow.

 Needle disk design. Provides better throttling compared to the ball or
composition disk designs. A wide variety of long, tapered plug disks are
available to suit different flow conditions.

 Composition disk. Used to achieve better shutoff. A hard, non-metallic


insert ring is used in composition disk design.

o Seat (Seat ring). This part of the valve provides the sealing surface against which the disk
is tightly pressed to shut off flow. It is either integral to the valve body or screwed into it.

 Types of Globe Valves

o Z types
A Z-body is the simplest design and the most common type. Within the globular
body, a Z-shaped partition contains the seat. The seat's horizontal seating
configuration allows the stem and disk to travel perpendicular to the axis of the pipe,
which results in a high pressure drop through the valve.
Z Type

o Y types

The Y-type style is a solution to the high-pressure-drop problem of Z-type valves.
In this design, the seat and stem are angled at approximately 45° to the pipe axis. Y-body
valves are used in high-pressure and other critical services where pressure drop is a concern.

Y Type

o Angle Types

The angle globe valve changes the direction of flow by 90 degrees without using an
elbow and one additional pipe weld. The disk opens against the flow. This type of globe
valve can also be used in fluctuating flow conditions, as it is able to handle the
slugging effect.
Angle Type

 Flow Direction through the globe valves

Globe valves have a specific flow direction. Depending on the application, a globe valve will have
fluid flow above or below the disc. Globe valves can be arranged so that the disk closes against the
direction of fluid flow or in the same direction as the flow. When the disk closes against the
direction of fluid flow, the kinetic energy of the fluid impedes closing but aids opening of the
valve. When the disk closes in the same direction as the flow, the kinetic energy of the fluid aids
closing but impedes opening.

For low temperature and low pressure applications, globe valves are ordinarily installed so that
pressure is under the disk. This promotes easy operation, helps protect the packing, and eliminates a
certain amount of erosive action to the seat and disk faces.

For high-temperature and high-pressure applications, such as steam, globe valves are installed so
that the pressure is above the disk. This helps keep the disc on the seat if the stem contracts as
it cools, which could otherwise lift the disc off the seat and cause leakage. If the pressure on top
of the disc is higher, a bypass valve may have to be provided so that the downstream system can be
pressurized before the globe valve is opened.

Usually, a flow-direction arrow is marked on the globe valve body to simplify installation.
 Advantages and Disadvantages of Globe valves

Advantages:

o Good shutoff capability


o Moderate to good throttling capability
o Shorter stroke (compared to a gate valve)
o Available in tee, wye, and angle patterns, each offering unique capabilities
o Easy to machine or resurface the seats
o With disc not attached to the stem, valve can be used as a stop-check valve

Disadvantages:

o Higher pressure drop (compared to a gate valve)


o Requires greater force or a larger actuator to seat the valve (with pressure under the seat)
o Throttling flow under the seat and shutoff flow over the seat

 Typical Applications of Globe valves

The following are some of the typical applications of Globe valves:

o Cooling water systems where flow needs to be regulated


o Fuel oil system where flow is regulated and leaktightness is of importance
o High-point vents and low-point drains when leaktightness and safety are major
considerations.
o Feedwater, chemical feed, condenser air extraction, and extraction drain systems
o Boiler vents and drains, main steam vents and drains, and heater drains
o Turbine seals and drains
o Turbine lube oil system and others
 Working Principle
When the valve is actuated to open, the disk moves perpendicularly away from the seat.
Compared to a gate valve, a globe valve generally yields much less seat leakage. This is because
the disk-to-seat-ring contact is closer to a right angle, which allows the closing force to seat
the disk tightly.

Globe valves can be arranged so that the disk closes against or in the same direction of fluid
flow. When the disk closes against the direction of flow, the kinetic energy of the fluid impedes
closing but aids opening of the valve. When the disk closes in the same direction of flow, the kinetic
energy of the fluid aids closing but impedes opening. This characteristic is preferable to other
designs when quick-acting stop valves are necessary.

 Applications

Duty:

o Controlling flow
o Stopping and starting flow
o Frequent valve operation

Service:

o Gases essentially free of solids


o Liquids essentially free of solids
o Vacuum
o Cryogenic
Ball Valve
A ball valve controls the flow of a fluid with a rotating ball. Rotating the ball 90 degrees
about its axis either lets the liquid or gas flow through or blocks it. In large sizes and at high
operating pressures, ball valves require manual gear or power operators.
Ball valves are used practically anywhere a fluid flow must be shut off, from a compressed-
air line to a high-pressure hydraulic system. Ball valves can provide low head-loss characteristics,
since the port can exactly match the pipe diameter. Ball valves also tend to seal better than
butterfly valves, but they can be costlier to purchase and maintain. Typically, they are actuated
with a lever, which provides a visual indication of the valve status.
Parts and function

 Shell or Body
One of the main ball valve parts is the body. The shell, or body, holds all the components of the
ball valve together and keeps them in place, and it also protects them from outside damage to some
extent. Whatever its shape or type, a ball valve always needs a shell or body to keep everything
together. This framework also serves as the pressure boundary of the ball valve, containing the
pressure and volume of the fluid and preventing them from damaging the internal parts.

 Bonnet

The bonnet covers the opening in the ball valve's body. Like the body, it serves as a
second-stage pressure boundary. The bonnet holds the ball and the stem assembly in place, and its
cap is fastened to the body for this purpose. By adjusting this unit correctly, the right
compression of the packing that provides the stem seal can be achieved.
 Ball

The ball is a spherical component with a flow path (also known as a hole or bore) through its
center. The ball has a connection point for a shaft, which permits the ball to be rotated.

 Actuator

The actuator works with the parts of the trim: it drives the stem and the disk (two parts of the
trim). There are different actuator designs to choose from, depending on the operation you need,
including handwheels, levers, motors, solenoids, pneumatic operators, and hydraulic arms. Most
commonly, the actuator is mounted to the bonnet through a yoke.

 Trim

Several ball valve parts together make up the trim: the disk, sleeves, seat (also known as the
seal ring), and stem. The trim enables the ball valve to perform its basic motions and provides
flow control. In many designs the disk is itself considered a pressure boundary at the third stage,
just as the body is the first stage and the bonnet the second. The disk has pressure-retaining
capacity, enabling it to prohibit or permit fluid flow.

In the trim, the seat and the disk together determine the performance of the ball valve system.
The seat works as the interface on which the disk is seated, and it is made in two different ways:
manufacturers either forge the seat within the body or weld it in by machine. The seat is shaped
like a doughnut; this round unit carries the discs that form a seal between the ball and the body.

Stems usually have a rectangular portion at the ball end. When the stem is turned, this
enlargement rotates the ball; the stem is not fastened to the ball in a ball valve. The stem is
also responsible for connecting the disk to the actuator through welded joints and helps position
the disk.

 Packing

The space between the bonnet and the valve stem needs something to prevent leaks, and that is
where the packing comes in. The packing seals the space between the internal parts of the ball
valve and the outside environment. Different materials can be used for the packing, such as
fibrous flax or PTFE (Teflon), which make it well suited for sealing. Since leaks can damage the
ball valve system and its surroundings, it is important that the packing be properly fitted: it
should be neither too loose nor too tight.
Working principle
To understand the working principle of a ball valve, it is important to know its five main
parts and two different operation types. The five main components can be seen in the ball valve
diagram in Figure 2. The valve stem (1) is connected to the ball (4) and is either manually
operated or automatically operated (electrically or pneumatically). The ball is supported and
sealed by the ball valve seat (5), and there are O-rings (2) around the valve stem. All are inside
the valve housing (3). The ball has a bore through it, as seen in the figure. When the valve stem
is turned a quarter turn, the bore is either open to the flow, allowing media to pass through, or
closed, preventing media flow.

 Circuit function
The valve may have two, three, or even four ports (2-way, 3-way, or 4-way). The vast
majority of ball valves are 2-way and manually operated with a lever. The lever is in line with
the pipe when the valve is open; in the closed position, the handle is perpendicular to the pipe.
For a 2-way valve, the flow direction is simply from the input to the output. Manually operated
ball valves can be closed quickly, so there is a risk of water hammer with fast-flowing media.
Some ball valves are therefore fitted with a transmission. Three-way valves have an L-shaped or
T-shaped bore, which affects the circuit function (flow direction); as a result, various circuit
functions such as distributing or mixing flows can be achieved.

 Housing assembly
The assembly of the valve housing can be divided in three commonly used designs: one-
piece, two-piece and three-piece housings. The difference is how the valve is assembled and this
affects the possibilities for maintenance or repair. The operation of the valves is the same in
each embodiment.

 One-piece: This is the cheapest variant. The two parts which enclose the ball are pressed or
welded. The valves cannot be opened for cleaning or maintenance. This type is generally
used for low-demanding applications.
 Two-piece: Two-piece valves can be disassembled for cleaning, servicing and inspection.
Often, the parts are connected via a threaded connection. The valve must be completely
removed from the pipe in order to separate the two parts.
 Three-piece: More expensive valves have often three pieces. The parts are generally
clamped together by bolt connections. The advantage of this embodiment is that the valve
can be serviced without removing the entire valve from the pipeline.

Advantages and Disadvantages


Advantages
 Ensures reliable sealing, even in the case of dirty media.
 Easy to disassemble and repair.
 The packing seal of the valve stem is not easily broken.
 Considered high-recovery valves, having a low pressure drop and relatively high flow
capacity.
Disadvantages

 can only be fully open or fully closed and cannot be used for throttling.
 not suitable for slurry applications due to cavities around the ball and seats.
 may induce water hammer or surge pressures.
 abrasive solids suspended in the fluid flow may damage the seats and ball surface
Republic of the Philippines
BATANGAS STATE UNIVERSITY
Pablo Borbon Main II, Batangas City
College of Engineering, Architecture & Fine Arts

Chemical and Food Engineering Department

FLOW MEASURING DEVICES


Written Report

Aguda, Don Jun C.
Atienza, Gianna Leigh T.
Briones, Lucia Isabelle B.
Buella, Jervin O.
Bunquin, Roselyn
Luwalhati, Ghia C.
Macatangay, Richmond Rafael
Tan, Nicole Elizabeth Y.

Group 1 & Group 2


ChE-3101

ChE 413: Process Design and Control


Engr. John Romar C. Panopio

November 2020
ORIFICE METER
Donjun C. Aguda

Definition
What is an Orifice Meter?
An orifice meter is a type of flow meter used to measure the rate of flow of a liquid or gas,
especially steam, using the differential-pressure measurement principle. It is mainly used for
robust applications, as it is known for its durability, and it is very economical.
As the name implies, it consists of an orifice plate, which is the basic element of the
instrument. When this orifice plate is placed in a line, a differential pressure develops across
it. This pressure drop increases with the flow rate of the liquid or gas (in proportion to the
square of the flow rate, as discussed below).
Since it introduces a drop in pressure, just like a turbine flow meter, the orifice meter is
used where a drop in pressure or head loss is permissible.
An orifice meter is a conduit and a restriction to create a pressure drop. A nozzle, venturi
or thin sharp edged orifice can be used as the flow restriction.
In order to use any of these devices for measurement it is necessary to empirically calibrate
them. That is, pass a known volume through the meter and note the reading in order to provide a
standard for measuring other quantities.
Due to the ease of duplicating and the simple construction, the thin sharp edged orifice has
been adopted as a standard and extensive calibration work has been done so that it is widely
accepted as a standard means of measuring fluids. Provided the standard mechanics of construction
are followed no further calibration is required.
Brief History of Orifice Meter
The first recorded use of an orifice device for fluid measurement was in 1797 by Giovanni
B. Venturi, an Italian physicist whose work led to the development of the modern Venturi meter
in 1886 by Clemens Herschel. In 1890, it was reported that an orifice meter designed by
Professor S.W. Robinson of Ohio State University was used to measure gas near Columbus, Ohio.
In 1903, T.B. Weymouth began a series of tests in Pennsylvania leading to the publication of
coefficients for orifice meters with flange taps. At the same time, E.O. Hickstein conducted a
similar series of tests at Joplin, Missouri from which he developed data for orifice meters with
integrated pipe taps.
From 1924 to 1935, a significant amount of research and experimental work was conducted by the
American Gas Association (AGA) and the American Society of Mechanical Engineers (ASME)
that resulted in the development of orifice meter coefficients and standards of construction for
orifice meters. In 1935, a joint AGA-ASME report was issued, “History of Orifice Meters and The
Calibration, Construction, and Operation of Orifices for Metering,” which remains the basis for
most present-day orifice meter measurement installations.
Advantages
 The Orifice meter is very cheap as compared to other types of flow meters.
 Less space is required to install, and hence it is ideal for space-constrained applications.
 Operational response can be designed with perfection.
 Installation direction possibilities: Vertical / Horizontal / Inclined.
Disadvantages
 Easily gets clogged due to impurities in gas or in unclear liquids
 The minimum pressure that can be achieved for reading the flow is sometimes difficult to
achieve due to limitations in the vena-contracta length for an Orifice Plate.
 Unlike the Venturi meter, downstream pressure cannot be recovered in orifice meters. The overall head loss is around 40% to 90% of the differential pressure.
 Flow straighteners are required at the inlet and the outlet to attain streamline flow thereby
increasing the cost and space for installation.
 The orifice plate can easily corrode over time, which introduces error.
 The discharge coefficient obtained is low.

Function/Purpose
How does it work?
As the fluid approaches the orifice the pressure increases slightly and then drops suddenly
as the orifice is passed. It continues to drop until the “vena-contracta” is reached and then gradually
increases until at approximately 5 to 8 diameters downstream a maximum pressure point is reached
that will be lower than the pressure upstream of the orifice.
The decrease in pressure as the fluid passes through the orifice is a result of the increased velocity of the gas passing through the reduced area of the orifice.
When the velocity decreases as the fluid leaves the orifice the pressure increases and tends
to return to its original level. All of the pressure loss is not recovered because of friction and
turbulence losses in the stream.
The pressure drop across the orifice (ΔP in Fig.) increases when the rate of flow increases.
When there is no flow there is no differential.
The differential pressure is proportional to the square of the velocity. It therefore follows that, if all other factors remain constant, the differential is proportional to the square of the rate of flow.
Orifice plates are a primary flow element, detecting the flow of a fluid passing through the
plate by sensing the pressure drop across the plate. When a fluid flows through a restriction in a
pipe, it creates a pressure difference between upstream and downstream of the restriction. This pressure difference is related to the flow rate according to Bernoulli’s principle, similar to a Pitot tube. Orifice plates are commonly used as they are simple to use, low cost, work with gases or liquids, and require low maintenance. However, they do have large pressure losses, with about 50% of the pressure drop not recoverable.
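To illustrate the square-root relationship described above, here is a minimal Python sketch (an illustration only, not part of the original report): a hypothetical coefficient C is calibrated from one known flow point, after which the flow at any other differential pressure follows from Q = C·sqrt(dP). All names and numerical values are assumptions made for the example.

import math

def orifice_flow(dP, C):
    # Flow rate from differential pressure: Q = C * sqrt(dP),
    # since the differential pressure varies with the square of the flow rate.
    return C * math.sqrt(dP)

# Calibrate C from one known point (assume Q = 10 m3/h at dP = 25 kPa) ...
C = 10.0 / math.sqrt(25.0)
# ... then estimate the flow at a new differential pressure of 16 kPa.
print(orifice_flow(16.0, C))   # prints 8.0 (m3/h)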

VENTURIMETER
Ghia C. Luwalhati
Definition
A venturi meter, also called a venturi flowmeter, is a measuring device that is usually used to measure the flow of a fluid in a pipe. It is used to calculate the velocity of fluids running through a pipeline. The fluid may be a liquid or a gas. The meter consists of a pipe with a narrowing throat that expands back to its original diameter on the other side of the choke point. The venturi meter calculates velocity by measuring the pressure head at points before and after the narrowed throat.
A venturi meter may also be used to increase the velocity of any type of fluid in a pipe at a particular point. It works on the principle of Bernoulli's theorem: the pressure of a fluid moving through a reduced cross section drops, leading to an increase in the velocity of the flow. A venturimeter consists of three main parts:
1. Converging part
It is the starting section of the venturimeter, attached to the inlet pipe. The cross-sectional area of this cone decreases, and the converging angle is about 20 degrees. Its length is 2.7(D - d), where D is the diameter of the inlet section and d is the diameter of the throat. The other end of the converging part is attached to the throat.
2. Throat
The throat is the middle portion of the venturimeter, and its cross-sectional area is the smallest. At this point the pressure decreases and the velocity increases. One end is connected to the converging part and the other end is attached to the diverging part. The diameter of the throat is ¼ to ¾ of the diameter of the inlet pipe, but most commonly it is ½ of the pipe diameter.
3. Diverging part
The diverging part is the last part of the venturimeter, and its cross-sectional area increases continuously. The angle of the diverging part is 5 to 15 degrees. One end is connected to the throat and the other end is connected to the outlet pipe. The main reason behind the low diverging angle is to avoid flow separation and the formation of eddies, which would result in a large loss of energy.
Brief History of Venturimeter
The venturi meter is a differential head type flowmeter. The principle of the venturimeter was demonstrated by Giovanni Battista Venturi (hence the name venturimeter), but it was first used in practical metering applications by Clemens Herschel.
Clemens Herschel (March 23, 1842 – March 1, 1930) was an
American hydraulic engineer. His career extended from about 1860 to
1930, and he is best known for developing the Venturi meter, which
was the first large-scale, accurate device for measuring water flow.
The first part of Herschel’s career was devoted to bridge
design, including the design of cast-iron bridges. For a time, he was
employed on the sewerage system of Boston. Herschel was influenced
by James B. Francis, who was the agent and engineer of the
Proprietors of Locks and Canals on the Merrimack River at Lowell,
Massachusetts, to switch his career path to hydraulic engineering.
About 1880, he started working for the Holyoke Water Power
Company in Massachusetts. He remained with the company until
1889. While he was there, Herschel designed the Holyoke testing flume, which has been said to
mark the beginning of the scientific design of water-power wheels. Herschel first tested his Venturi
meter concept in 1886 while working for the company. The original purpose of the Venturi meter
was to measure the amount of water used by the individual water mills in the Holyoke area.
Water supply development in northern New Jersey was an active area of investment in the
late 19th century. In 1889, Herschel was hired as the manager and superintendent of the East Jersey
Water Company, where he worked until 1900. He was responsible for the development of the
Pequannock River water supply for Newark. He also installed two of his largest Venturi meters at
Little Falls, New Jersey, on the main stem of the Rockaway River to serve Paterson, Clifton and
Jersey City.

Advantages
 High-pressure recovery. Low permanent pressure drop.
 High coefficient of discharge.
 Smooth construction and a low cone angle help solid particles flow through it, so it can be used for dirty fluids.
 It can be installed in any direction horizontal, vertical and inclined.
 They are more precise and can be used for a wide range of flows.
 More accurate than orifice and flow nozzle.

Disadvantages
 Both size and cost are high
 Difficult to inspect due to its construction
 They are large in size and, therefore, cannot be used where space is limited.
 For satisfactory operation, the venturi must be preceded by long straight pipes.
 Its maintenance is not easy
 It cannot be used in pipes of small diameter (below about 70 mm)

Function/Purpose
This device is widely used in the water, chemical, pharmaceutical, and oil & gas industries to measure the flow rates of fluids inside a pipe. Venturi meters are used in a wide variety of applications involving gas, liquids, slurries and suspended oils, and in other processes where permanent pressure loss is not tolerable. They are widely used in large-diameter pipes such as those found in waste treatment processes. Their gradually sloping, smooth design allows solid particles to flow through, so they are suitable for measuring dirty fluids. They can also be used to measure fluid velocity.
 Plumbing
Venturi meters are used in pipelines at wastewater collection systems and treatment
plants. They are used in wastewater pipes because their overall design structure allows for
solids to pass through it instead of collecting in front of it. Less buildup in the pipes allows
for more accurate readings of the pressure of the wastewater and thus its velocity.

 The Flow of Chemicals in Pipelines


The temperatures and pressures of chemicals in a pipeline do not affect the accuracy
of a Venturi flowmeter and because of this they are used in crude oil pipelines. Crude oil
pipelines, such as the ones in Alaska, are exposed to extreme temperatures during the long
arctic winter months. Another advantage of using the Venturi meter in such volatile and
frigid environments is that it has no moving parts; there is no risk of them freezing and
breaking due to thermal expansion.
 Carburetors
The venturi in carburetors is used to measure airflow in a car engine and to ensure
that a correct amount of fuel is fed to the gas combustion engine when needed during
driving. The air and fuel mixture must be evenly distributed to the engine in order for it to
work properly. The temperatures of air and fuel are constantly changing due to the shift in
temperatures that occur in an engine during idling, acceleration, high speeds, and low
speeds. The venturi meter allows the carburetor to adjust and calibrate the distribution of
fuel and air to the engine as needed.

How does it measure flow?


A Venturi meter is used to measure the flow rate through a tube. It is based on the Venturi effect, the reduction of fluid pressure that results when a fluid runs through a constricted section of pipe. It is named after Giovanni Battista Venturi (1746-1822), an Italian physicist.
Operation of venturi meter:
1. The fluid whose flow rate is to be measured enters the entry section of the venturi meter
with a pressure P1.
2. As the fluid from the entry section of venturi meter flows into the converging section,
its pressure keeps on reducing and attains a minimum value P2 when it enters the throat. That is,
in the throat, the fluid pressure P2 will be minimum.
3. The differential pressure sensor attached between the entry and throat section of the
venturi meter records the pressure difference(P1-P2) which becomes an indication of the flow rate
of the fluid through the pipe when calibrated.
4. The diverging section is provided to enable the fluid to regain its pressure as its kinetic energy decreases. The smaller the angle of the diverging section, the greater the pressure recovery.
When a fluid flows through a venturimeter, it accelerates in the convergent section and
then decelerates in the divergent section. The pressure difference between an upstream section and
throat is measured by a manometer. Using that differential pressure and applying Bernoulli's equation and the continuity equation, the volumetric flow rate can be estimated.
The venturimeter works on Bernoulli's equation, and its simple principle is that when velocity increases, pressure decreases.

 The cross-sectional area of the throat section is smaller than that of the inlet section; due to this, the velocity of flow at the throat section is higher than the velocity at the inlet section. This happens according to the continuity equation.

 The increase in velocity at the throat results in a decrease in pressure at this section; due to this, a pressure difference is developed between the inlet section and the throat of the venturimeter.

 This difference in pressure is measured by a manometer placed between the inlet section and the throat.

 Using the pressure difference value, we can easily calculate the flow rate through the pipe.

When the flow goes through the contraction it must speed up, and so the pressure must drop. By measuring the two pressures, engineers can directly calculate the velocity of the fluid. Knowing the pipe diameter, this velocity can be converted into a flow rate.
To find the pressure difference between the upstream flow and the narrowed section of the pipe, we invoke 1) the Bernoulli theorem and 2) the continuity equation. The latter assures that the rate of fluid flow through any section remains constant, i.e. mass is conserved.
Bernoulli’s principle states the relation between pressure (P), kinetic energy, and
gravitational potential energy of a fluid inside a pipe. The mathematical form of Bernoulli’s
equation is given as:
P1/(ρg) + v1²/(2g) + z1 = P2/(ρg) + v2²/(2g) + z2
 p= pressure inside the pipe
 ρ =density of the fluid
 g =gravitational constant
 v = velocity
 z=elevation or head
 a = cross-sectional area of the pipe
 d= diameter of the pipe
Suffix 1 and 2 are used to denote two different areas; 1 denotes cylindrical inlet section and 2
denotes throat section.

Now, as the pipe is horizontal, there is no difference in elevation of the pipe centerline, so z1 = z2. Rearranging the above equation, we get the following:

(P1 - P2)/(ρg) = (v2² - v1²)/(2g)

(P1 - P2)/(ρg) is the difference of the pressure heads at sections 1 and 2, which is equal to h, the reading of the differential manometer. So, considering that h = (P1 - P2)/(ρg), the above equation becomes:

h = (v2² - v1²)/(2g)
Now, applying the continuity equation between the same sections 1 and 2, we get

a1·v1 = a2·v2, or v1 = a2·v2/a1

Putting this value of v1 in the equation above and solving, we get

v2 = [a1/√(a1² - a2²)]·√(2gh)

So, the rate of flow through the throat (Q) can be calculated as Q = a2·v2. Substituting the above value of v2, we get

Q = [a1·a2/√(a1² - a2²)]·√(2gh)
This Q represents the theoretical discharge of the venturi meter under ideal conditions. In actual practice, however, there will always be some frictional loss, so the actual discharge will always be less than the theoretical discharge. To calculate the actual discharge, the above Q value is multiplied by Cd, the coefficient of discharge of the venturimeter. The actual flow rate through the throat of the venturimeter is therefore given by the following equation.
Qact = Cd·[a1·a2/√(a1² - a2²)]·√(2gh)
The coefficient of discharge for Venturimeter, Cd is defined as the ratio of the actual flow
rate through the venturi meter tube to the theoretical flow rate. So the venturi meter discharge
coefficient is given by:

Cd=Qact/Q

As Qactual will always be less than Qtheoretical due to frictional losses, the value of Cd is always
less than 1.0.

The typical range of the discharge coefficient of a Venturi meter is 0.95-0.99 but this can
be increased by proper machining of the convergent section. The value of venturimeter discharge
coefficient differs from one flowmeter to the other depending on the venturimeter geometry and
the Reynolds number.
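As a worked illustration of the discharge equation above, the following minimal Python sketch evaluates Qact for an assumed inlet diameter, throat diameter, manometer head and discharge coefficient; the numerical values are illustrative assumptions, not data from this report.

import math

def venturi_flow(D, d, h, Cd=0.98, g=9.81):
    # Actual discharge: Qact = Cd * a1*a2*sqrt(2*g*h) / sqrt(a1^2 - a2^2)
    # D, d: inlet and throat diameters (m); h: differential head (m)
    a1 = math.pi * D**2 / 4.0   # inlet area
    a2 = math.pi * d**2 / 4.0   # throat area
    return Cd * a1 * a2 * math.sqrt(2.0 * g * h) / math.sqrt(a1**2 - a2**2)

# Example: 100 mm inlet, 50 mm throat, 0.30 m of differential head
print(venturi_flow(0.10, 0.05, 0.30))   # about 0.0048 m3/s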
ROTAMETER
Gianna Leigh T. Atienza
Definition
Rotameters are also known as variable area flow meters. Rotameters are simple industrial
flow meters that measure the flow rate of liquid or gas in a closed tube. Rotameters are popular
because they have linear scales, a relatively large measurement range, low pressure drop, and are
simple to install and maintain. The term rotameter comes from the early version of the floats, which
had slots to help stabilize and center them within the fluid flow. This caused the floats to rotate.
Current float designs are a variety of shapes (spherical for example) and are constructed of stainless
steel, glass, metal, and plastic.

Brief History of Rotameter

The history of variable-area meters dates to 1908 when they were invented by German
engineer Karl Kueppers in Aachen, Germany. At that time, they were called “rotameters,” named
after the rotating float that was originally a component of these meters. Felix Meyer recognized
the commercial potential of Kueppers’ invention, and in 1909 founded “Deutsche Rotawerke” in
Aachen. The product invented by Karl Kueppers was the first variable-area flowmeter with a
rotating float.

The German company Deutsche Rotawerke was the forerunner of the company that
became known as the Rota Company. Originally, Meyer called his products “rotamesser.” In
1995, Yokogawa purchased the Rota Company and named the resulting company Rota
Yokogawa. Rota Yokogawa still manufactures its variable-area meters, which it also calls
rotameters, in Wehr, Germany.
In March 2009, Emerson Process Management acquired Solartron Mobrey, presumably
for its level, density and flow computer products. As part of the acquisition, Emerson Process
acquired the trademark to the name “rotameter” in the United Kingdom.

TYPES OF ROTAMETER
Glass Tube Rotameters
With a tapered metering tube made of borosilicate glass, this was the original rotameter. Introduced
in the mid-1940s, it is referred to as a "general-purpose" rotameter. They are typically used for
simple but reliable indication of flow rate with a high level of repeatability.

Metal Tube Rotameters


These devices, also known as armored meters, are designed for applications where the temperature
or pressure exceeds the limits of glass tubes. Designed for indication only, metal tube
meters require no external source of electric power.

Plastic Tube Rotameters


They can be an entirely suitable, very cost-effective alternative to glass or metal meters for a wide
variety of fluid measurements.

Advantages
 We can use them for both liquid and gas or steam applications.
 Their design is simple and therefore economical.
 They are light (this point depends on the measuring tube).
 They do not require power supplies.
 The flow reading can be easily performed at installation.
 Low pressure drops.

Disadvantages
 It is difficult to handle the glass tube type.
 It must be mounted vertically.
 Their accuracy is not very high.
 Require a specific calibration for each fluid.
 It is limited to low temperatures.
 It requires in-line mounting.
Purpose/Function
Rotameters are used in research and laboratory environments and in the process industries
to measure the flow of gases and air at low flowrates. They are also used when a visual indication
is sufficient, and to check on the performance of other meters.

How does a Rotameter work?

The rotameter is a type of variable area flowmeter that consists of a tapered measuring tube
and a float, which can be freely moved up and down inside the tube. The measuring tube is
mounted vertically, with the small end at the bottom. The fluid to be measured enters the bottom
of the tube, passes upwards around the float and exits at the top.

There is a small annular opening between the float and the tube. As flow increases, the pressure drop across the float increases and lifts the float. This increases the annular area between the float and the tube until the upward hydraulic force acting on the float is balanced by its weight less the buoyant force. The float thus moves up and down the tube in proportion to the fluid flow rate and the annular area between the float and the tube, and it reaches a stable position in the tube when the forces are in equilibrium.

When the float moves up towards the larger end of the conical tube, the annular opening between the tube and the float increases. As the area increases, the pressure difference across the float decreases. The float assumes a dynamic equilibrium position when the pressure differential across the float plus the buoyant force balances the weight of the float. Each float position corresponds to one particular flow rate, and no other, for a fluid of a given density and viscosity. The flow reading is taken from a calibrated scale on the tube. The position of the float in the measuring tube varies in a linear relationship with the flow rate.
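The force balance described above can be written as a short Python sketch. The relation below (weight = buoyancy + pressure force on the float) is the usual simplified form and is not taken from this report; the float volume, frontal area and densities in the example are assumptions.

def rotameter_dp(V_float, A_float, rho_float, rho_fluid, g=9.81):
    # At equilibrium: dP * A_float + rho_fluid*V_float*g = rho_float*V_float*g,
    # so the pressure differential across the float is constant regardless of flow.
    return V_float * g * (rho_float - rho_fluid) / A_float

# Example: a 1 cm3 stainless-steel float with a 1 cm2 frontal area in water
print(rotameter_dp(1e-6, 1e-4, 8000.0, 1000.0))   # about 687 Pa

Because this differential is fixed by the float itself, a higher flow rate can only be accommodated by a larger annular area, which is why the float position is what indicates the flow rate.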

TURBINE FLOW METER

Nicole Elizabeth Y. Tan

Definition

The turbine flow meter, or axial turbine, is another type of flow measuring device that provides exceptionally accurate and reliable digital outputs. The turbine flow meter principle is used for the measurement of liquids and gases, including gases at very low flow rates. It was invented by Reinhard Woltman and is an accurate and reliable flow meter for liquids and gases. In other words, the turbine flow meter translates the mechanical action of the turbine rotating in the liquid flow around an axis into a user-readable rate of flow (gpm, lpm, etc.). The turbine tends to have all of the flow traveling around it.

The turbine flow meter is composed of a multi-bladed rotor (turbine wheel) that is
supported centrally in the pipe along which the flow occurs. The turbine wheel is set in the path of
a fluid stream. The flowing fluid impinges on the turbine blades, imparting a force to the blade
surface and setting the rotor in motion. When a steady rotation speed has been reached, the speed
is proportional to fluid velocity. Fluid properties can change turbine flow meter performance. For
best performance, turbine flow meter shall be calibrated with the fluid to be metered.

Brief History of Turbine Flow Meter

Even though the earliest turbine flow meter was invented before positive displacement
(PD) flow meters came into use, the history of turbine flow meter use does not go back as far as
that of PD meters. PD meters were first used for gas applications in the early 1800s, while Reinhard
Woltman invented the first turbine meter in 1790. Even so, it was not until the early 1940s that
turbine meters were first used for fuel measurement, in part because fuel use on military planes in
World War II needed to be measured. Soon afterward, turbine meters were used in the petroleum
industry to measure the flow of hydrocarbons.

Rockwell introduced a turbine meter to the gas industry in 1963, but it took about 10 years
for it to become widely accepted by the industry. In 1981 the American Gas Association (AGA)
published its Report #7, "Measurement of Fuel Gas by Turbine Meters."

Turbine meter suppliers are now developing features to make them more efficient and
reliable. For example, new ceramic and sapphire ball bearings now last longer than other bearings.
Another development is a dual-rotor turbine meter that extends the flow range and provides higher
measurement accuracy, and also reduces the effects of swirl on flow measurement. Elster has
developed a turbine meter that offers bi-directional flow.

Advantages

 Good accuracy and excellent repeatability and range


 Fairly low pressure drop
 Easy to install and maintain
 Turbine meters can operate over a wide range of temperatures and pressures
 Can be compensated for viscosity variations
 Simple, durable construction

Disadvantages

 High cost (expensive)


 Limited use for slurry applications
 Requires constant backpressure to prevent cavitation
 Accuracy adversely affected by bubbles in liquids
 Turbine meters can be used with clean liquids and gases only (may need to install a strainer
upstream to prevent damage from particulates)
 Not applicable for measuring corrosive fluids

Function/Purpose

Fluid entering the flow meter is first conditioned by the inlet flow straightener which
reduces turbulence in the fluid. The moving fluid causes the rotor to spin at a speed that is
proportional to its flow rate. As the blades on the rotor pass through the magnetic field of the
pickup, an electronic pulse is generated. This pulse train signal can then be used to monitor the fluid's actual flow rate or the total amount of fluid that has passed through the flow meter. The number of electronic pulses generated by the meter, per unit volume, is known as its K-Factor.
Each flow meter is calibrated to find its unique K-Factor, which is supplied with the flow meter
when purchased.

Working Principle

As stated in the definition above, the turbine flow meter translates the mechanical action of the turbine rotating in the liquid flow around an axis into a user-readable rate of flow. Blades on the rotor are angled to transform energy from the flow stream into rotational energy. The rotor shaft spins on bearings. When the fluid moves faster, the rotor spins proportionally faster.

Shaft rotation can be sensed mechanically or by detecting the movement of the blades.
Blade movement is often detected magnetically, with each blade or embedded piece of metal
generating a pulse. Turbine flow meter sensors are typically located external to the flowing stream
to avoid material of construction constraints that would result if wetted sensors were used. The
flowing fluid engages the vaned rotor causing it to rotate at an angular velocity proportional to the
fluid flow rate.

The rotor sits on a shaft, which in turn is suspended in the flow by the two supports. The
rotor is supported by the ball or sleeve bearings on a shaft which is retained in the flow meter
housing by a shaft support section. The rotor is free to rotate about its axis.
As the media flows, a force is applied on the rotor wings. The angle and shape of the wings
transform the horizontal force to a perpendicular force, creating rotation. Therefore, the rotation
of the rotor is proportional to the applied force of the flow. At a steady rotational speed, the speed
of the rotor is proportional to the fluid velocity and hence to the volumetric flow rate.
The rotor rotates immediately as soon as the media induces a forward force. As the rotor cannot turn through the media on its own, it stops as soon as the media stops. This ensures an extremely fast response time, making the turbine flow meter ideal for batching applications. The speed of rotation is monitored by a magnetic pickup fitted to the outside of the meter housing. A pickup sensor is mounted above the rotor. When the magnetic blades pass by the pickup sensor, a signal is generated for each passing blade. This provides a pulsed signal proportional to the speed of the rotor, which represents pulses per volumetric unit and, as such, the flow rate.
The magnetic pick-up coil consists of a permanent magnet with coil windings which is
mounted in close proximity to the rotor but internal to the fluid channel. As each rotor blade passes
the magnetic pick-up coil, it generates a voltage pulse which is a measure of the flow rate. The
total number of pulses gives a measure of the total flow.
The electrical voltage pulses produced can be totalized, differenced or manipulated by digital techniques, so that an essentially zero-error characteristic is provided from the pulse generator to the final readout.
The number of pulses generated per gallon of flow, called the K-Factor, is given by:

K = (Tk × f)/Q

Where,
K = pulses per volume unit
Tk = time constant in minutes
f = frequency in Hz
Q = volumetric flow rate in gpm (gallons per minute)
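Using the K-Factor relation above, converting a pulse signal into a flow rate or a total volume is a one-line calculation. The Python sketch below is illustrative only; the K-Factor and pulse values are assumed, not taken from this report.

def flow_rate_gpm(f_hz, k_factor, Tk=60.0):
    # Volumetric flow rate (gpm) from pulse frequency (Hz) and K-Factor (pulses/gallon)
    return Tk * f_hz / k_factor

def total_volume_gal(pulse_count, k_factor):
    # Total volume (gallons) from an accumulated pulse count
    return pulse_count / k_factor

# Example: a meter with K = 900 pulses/gallon producing a 150 Hz pulse train
print(flow_rate_gpm(150.0, 900.0))     # 10.0 gpm
print(total_volume_gal(54000, 900.0))  # 60.0 gallons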

VORTEX-SHEDDING FLOW METER


GROUP 1
Definition
A vortex flow meter is a flow measurement device best suited for flow measurements where the introduction of moving parts presents problems. They are available in industrial grade, brass, or all-plastic construction. Sensitivity to variations in process conditions is low, and with no moving parts they have relatively low wear compared to other types of flow meters.
Vortex shedding is the process by which vortices of gas or liquid are formed around a solid object that obstructs the path of a gas or liquid stream. These “shed” vortices are carried downstream in the flow and are detected by vortex shedding and fluidic flow meters, which measure the velocity of liquids and gases such as water, cryogenic liquids, boiler feed water, hydrocarbons, chemicals, air, nitrogen, industrial gases, and steam flowing through the pipe.
A vortex flow meter comprises a flow sensor, which senses the pressure variations caused by vortex shedding of a fluid in a passage and converts them into an electrical flow sensor signal, and a signal processor, which receives the flow sensor signal and generates an output signal corresponding to those pressure variations.

Brief History of Vortex-Shedding Flow Meter


It all began in 1513, when Leonardo da Vinci made the first sketches of vortices shedding downstream of objects in a flow. Centuries later, in 1878, the Czech physicist Vincenc Strouhal gave a scientific description of the eddies created behind bluff bodies: a wire stretched tight in a jet of air will vibrate, and its frequency is directly proportional to the velocity of the air jet.
This relationship is captured by the Strouhal number, St = f·d/v, which links the vortex shedding frequency (f), the diameter of the bluff body (d), and the velocity of flow (v). In 1912, the physicist Theodore von Kármán described this pattern of shed vortices, which we now call the Kármán vortex street.

Advantages
 The vortex flowmeter has no moving parts, and the measuring component has a simple
structure, reliable performance and long service life.
 The vortex flowmeter has a wide measuring range. The turndown ratio can generally
reach 1:10.
 The volumetric flow rate of the vortex flowmeter is not affected by thermal parameters
such as temperature, pressure, density or viscosity of the fluid being measured.
 It measures the flow of liquids, gases or vapors, has very wide applications
 It causes little pressure loss.
 High accuracy, and low maintenance

Disadvantages
 It has poor anti-vibration performance. External vibrations can cause measurement errors
in the vortex flowmeter and may not even work properly.
 The high flow velocity shock of the fluid causes vibrations in the vortex body, which
reduces the measurement accuracy.
 Cannot measure dirty media
 Straight pipe requirements are high when mounting the vortex flow meter
 Not suitable for low Reynolds number fluids measurements;
 It is not suitable for the pulsating flow.

Function/Purpose
Vortex meters use a dimensioned bluff body, sometimes called a shedder bar, to generate the phenomenon known as the Kármán vortex street, in which vortices begin to form and oscillate. Using
a variety of sensor technologies, the natural frequency of these oscillating vortices is converted
into a digital signal which is then processed through the meter’s electronics to calculate flow.
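A minimal Python sketch of this frequency-to-flow conversion is shown below. It assumes a known Strouhal number (about 0.2 is typical for common shedder bar shapes) and illustrative dimensions; none of these values come from this report.

import math

def vortex_flow(f_hz, bluff_width, pipe_diameter, strouhal=0.2):
    # Strouhal relation: St = f*d/v, so v = f*d/St
    velocity = f_hz * bluff_width / strouhal
    area = math.pi * pipe_diameter**2 / 4.0   # pipe cross-section
    return velocity * area                    # volumetric flow rate (m3/s)

# Example: a 30 mm shedder bar in a 100 mm pipe shedding vortices at 20 Hz
print(vortex_flow(20.0, 0.03, 0.10))   # about 0.024 m3/s (velocity of 3 m/s)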
The vortex shedding meter provides a linear digital (or analog) output signal without the
use of separate transmitters or converters, simplifying equipment installation. Meter accuracy is
good over a potentially wide flow range, although this range is dependent upon operating
conditions. The shedding frequency is a function of the dimensions of the bluff body and, being a
natural phenomenon, ensures good long term stability of calibration and repeatability of better than
±0.15% of rate. There is no drift because this is a frequency system.
The meter does not have any moving or wearing components, providing improved reliability and reduced maintenance. Maintenance is further reduced by the fact that there are no valves or manifolds to cause leakage problems. The absence of valves or manifolds results in a particularly safe installation, an important consideration when the process fluid is hazardous or toxic.
If the sensor utilized is sufficiently sensitive, the same vortex shedding meter can be used
on both gas and liquid. In addition, the calibration of the meter is virtually independent of the
operating conditions (viscosity, density, pressure, temperature, and so on) whether the meter is
being used on gas or liquid.
The vortex shedding meter also offers a low installed cost, particularly in pipe sizes below
6 in. (152 mm) diameter, which compares competitively with the installed cost of an orifice plate
and differential pressure transmitter.
The limitation includes meter size range. Meters below 0.5 in. (12 mm) diameter are not
practical, and meters above 12 in. (300 mm) have limited application due to their high cost compared
to an orifice system and their limited output pulse resolution.
The number of pulses generated per unit volume decreases on a cube law with increasing
pipe diameter. Consequently, a 24 in. (610 mm) diameter vortex shedding meter with a typical
blockage ratio of 0.3 would only have a full scale frequency output of approximately 5 Hz at 10
ft/s (3 m/s) fluid velocity.
ULTRASONIC FLOW METER
Richmond Macatangay

Definition
An ultrasonic flow meter can be defined as a meter that uses ultrasound to measure liquid velocity and thereby determine the volume of liquid flow. This is a volumetric flow meter that needs bubbles or minute particles within the liquid flow. These meters are suitable for wastewater applications, but they will not work with drinking or distilled water. This type of flow meter is therefore ideal for applications where chemical compatibility, low maintenance, and low pressure drop are required. These meters are affected by the acoustic properties of the liquid and are also influenced by viscosity, density, temperature, etc. Unlike mechanical flow meters, these meters do not include moving parts. The price of these meters varies greatly, but they can frequently be used and maintained at low cost.

Brief History
The first ultrasonic flow meter was invented by the Japanese physicist Shigeo Satomura in 1959. This flow meter used Doppler technology, and its main purpose was to analyze blood flow. Four years later, the earliest such flow meters appeared in industrial applications. At present, many manufacturers design different types of clamp-on flow meters to measure the liquid flow within a pipe. These meters use high-frequency sensors that transmit through the pipe wall and the liquid, using either the Doppler or the transit-time propagation method, so that the fluid velocity and flow rate can be determined.

Advantages
 It does not block the path of liquid flow.
 The output of this meter is largely independent of the density, viscosity and temperature of the liquid.
 The flow of liquid is bidirectional
 The dynamic response of this meter is good.
 The output of this meter is in analog form
 Conservation of energy
 It is appropriate for large-quantity flow measurement
 It is handy to fit and maintain
 Versatility is good
 There is no contact with the liquid
 There is no leakage risk
 There are no moving parts and no pressure loss
 High accuracy

Disadvantages
 It is expensive as compared with other mechanical flow meters.
 Design of this meter is complex
 Auditory parts of this meter are expensive.
 These meters are more complicated than other meters, and thus require specialists for maintenance and repair
 It cannot measure through cement or concrete pipes once they have rusted or scaled.
 It does not work when the pipe contains holes or when bubbles are present in it
 Can’t measure cement/concrete pipe or pipe with such material lining

Function/Purpose
An ultrasonic flow meter is constructed using upstream and downstream transducers, a sensor pipe, and a reflector. The working principle of the ultrasonic flow meter is that it uses sound waves to resolve the velocity of a liquid within a pipe. There are two conditions in the pipe: no flow and flowing. In the first condition, ultrasonic waves are transmitted into the pipe and their reflections from the fluid have the same frequency. In the second condition, the reflected wave's frequency is different because of the Doppler effect. When the liquid in the pipe flows faster, the frequency shift increases linearly. The transmitter processes the signals from the transmitted wave and its reflections to determine the flow rate. Transit-time meters transmit and receive ultrasonic waves in both directions in the pipe. Under no-flow conditions, the transit times upstream and downstream between the transducers are the same. Under flowing conditions, the upstream wave travels more slowly than the downstream wave. As the liquid flows faster, the difference between the upstream and downstream times increases. The upstream and downstream times are processed by the transmitter to determine the flow rate.
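For the transit-time method, the velocity calculation can be sketched as follows in Python. The formula is the common form for an angled acoustic path; the path length, angle and transit times in the example are assumptions, not values from this report.

import math

def transit_time_velocity(t_up, t_down, path_length, angle_deg):
    # Average axial fluid velocity from upstream/downstream transit times (s),
    # acoustic path length (m) and the angle of the path to the pipe axis.
    dt = t_up - t_down   # the upstream wave takes longer than the downstream wave
    return path_length * dt / (2.0 * math.cos(math.radians(angle_deg)) * t_up * t_down)

# Example: a 0.20 m path at 45 degrees with transit times near 135 microseconds
print(transit_time_velocity(135.2e-6, 134.8e-6, 0.20, 45.0))   # about 3.1 m/s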
The applications of ultrasonic flow meters include the following.

 These meters are used in wastewater and dirty liquid applications


 These meters are used wherever chemical compatibility, less maintenance, and low-
pressure drop are required.
 These meters are used to measure the velocity of a liquid through ultrasound to analyze
volume flow.
 These meters measure the disparity between the transit time of ultrasonic pulses which
transmits with the direction of liquid flow
 The applications of these meters range from process to custody flow
 This is one kind of device for volumetric flow measurement for liquids as well as gases.
 These are excellent alternatives for both vortex & electromagnetic flowmeters.

Figure 1: Ultrasonic Flow Meter

MAGNETIC FLOW METER


Buella, Jervin Edizon O.

Definition
A magnetic flow meter is a volumetric flow meter that does not have any moving parts. It
is ideal for wastewater applications or any dirty liquid which is conductive, or water based.
Magnetic flow meters are also ideal for applications where low-pressure drop and low maintenance
are required.
Brief History
In 1832, Michael Faraday assembled a large-scale open-channel magmeter and attempted to use it to measure the flow of water passing under London’s Waterloo Bridge. In 1915, the Americans M.W. Smith and Joseph Slepian filed a patent for “a device to measure the speed of a boat by means of magnetohydrodynamics.” In 1930, the same idea was adapted to closed conduits by the Briton E.J. Williams. In 1952, the Dutch company Tobi-Meter introduced the first commercial magmeter. In 1962, the British scientist J.A. Shercliff published the “Theory of Electromagnetic Flow-Measurement.”

Advantages
They feature an obstruction-free design with no moving parts, which eliminates flow impediment, resulting in an accuracy over a wide flow range as good as ± 0.5%.

Mag meters perform extremely well in many municipal and processing applications for it
requires less maintenance and can be used on very large line sizes. It is also the meter of choice
for measuring conductive liquids such as water or slurry.

Disadvantages
These flow meters are only effective on conductive fluids. Some examples of conductive
fluids are acetic acid, blood, body lotion, hydrochloric acid, sulfuric acid etc. On the other hand,
some examples of nonconductive fluids are oil, hydrogen peroxide, corn syrup and alcohol.
Depending on their size and capacity, magnetic flow meters can be relatively heavy, and
those with higher corrosion and abrasion resistance can be expensive.
Function/Purpose
Magnetic flow meters apply a magnetic field to the liquid flowing through a pipe; the conductive liquid moving through this field generates a voltage. Electrode sensors located on the flow tube walls pick up the voltage signal and send it to the electronic transmitter, which processes the signal to determine the liquid flow.

The operation of a magnetic flow meter or mag meter is based upon Faraday's Law, which
states that the voltage induced across any conductor as it moves at right angles through a magnetic
field is proportional to the velocity of that conductor.

Faraday's Formula
e=k*B*l*v
e = Induced voltage
k = Proportionality constant
v = Velocity of processed fluid
B = Magnetic field strength
l = Length of the conductor

To apply this principle to flow measurement with a magnetic flow meter, it is necessary
first to state that the fluid being measured must be electrically conductive.
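Applying Faraday's formula above, the velocity follows directly from the induced voltage, and the volumetric flow follows from the pipe area. In this minimal Python sketch the proportionality constant k, the field strength and the example numbers are all illustrative assumptions.

import math

def magmeter_flow(e, k, B, l, pipe_diameter):
    # e = k*B*l*v  ->  v = e/(k*B*l); multiply by the pipe area for volume flow
    v = e / (k * B * l)                        # fluid velocity (m/s)
    area = math.pi * pipe_diameter**2 / 4.0    # pipe cross-section (m2)
    return v * area

# Example: 2 mV induced across 0.10 m electrodes in a 0.01 T field (k = 1)
print(magmeter_flow(2.0e-3, 1.0, 0.01, 0.10, 0.10))   # about 0.016 m3/s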

THERMAL MASS
Bunquin, Maria Roselyn N.

Definition
Thermal mass flow meters are designed to accurately monitor and measure mass flow (as
opposed to measuring volumetric flow) of clean gases. Volumetric measurements are affected by
all ambient and process conditions that influence unit volume or indirectly affect pressure drop, while
mass flow measurement is unaffected by changes in viscosity, density, temperature, or pressure.
Thermal mass flow meters are often used in monitoring or controlling mass-related processes such
as chemical reactions that depend on the relative masses of unreacted ingredients. In detecting the
mass flow of compressible vapors and gases, the measurement is unaffected by changes in pressure
and/or temperature.

Brief History
Thermal flowmeters were born on the West Coast of the United States—the result of
independent development by first two, then three separate companies. One company was Fluid
Components International, which began by developing thermal flow switches that were used in
the oil patch. The switches detected the movement of oil in oil well pipes, but they did not evolve into actual flowmeters until 1981.

The second strain of early development in the flowmeter marketplace was a result of the
collaboration of John Olin, Ph.D., and Jerry Kurz, Ph.D. Both Olin and Kurz worked for Thermo
Systems Inc. (TSI) in Minnesota from 1968 until the early 1970s. At TSI, they developed hot-wire
and hot-film anemometers for research applications in gas dynamics, turbulence, and air flows.
These instruments are based on thermal dispersion technology and have sensors consisting of
heated tungsten wires or a thin platinum film deposited on a quartz rod. The small diameter of
these sensors gave the fast time response needed for fluid mechanics research, but made them too
fragile for general industrial applications.

While Olin and Kurz were doing research using anemometers, they were more interested
in developing measurement products for industrial environments. This would require a more
rugged device than an anemometer. They approached TSI about developing industrial products,
but TSI wasn’t interested. As a result, Olin and Kurz decided to start their own company,
incorporating Sierra Instruments (www.sierrainstruments.com) in Minnesota in 1973. In 1975,
they moved the company to California, packing the business up into two trucks, driving it across
the Continental Divide to set up shop in Monterey.
In 1977, Sierra Instruments was making both air sampling products and thermal flowmeters. That
year, Jerry Kurz decided to become independent and formed Kurz Instruments. Sierra kept the air
sampling products, while Kurz Instruments kept the thermal flowmeters. However, Sierra got
back into the flowmeter market in 1983.

In the early 1980s, Sierra, Kurz, and Fluid Components were the only companies manufacturing
thermal flowmeters. However, over time, more thermal flowmeter manufacturers arrived in the
area of Monterey.

Advantages and Disadvantages


Thermal Mass Flow Meter Advantages

 Measure gas mass flow rate directly


 Suitable for applications where temperature and pressures fluctuate
 Highly accurate and repeatable measurements with a typical accuracy of ± 1% FS
 Able to measure accurately low gas flow rates or low gas velocities
 Excellent turn down ratio, typically 50:1
 No moving parts
Thermal Mass Flow Meter Limitations

 Gas mass meter use is limited to clean, non-abrasive fluids


 Presence of moisture or droplets can lead to measurement inaccuracy
 Thermal properties must be known: variation from calibrated values can cause inaccuracies
 Relatively high initial cost

Purpose/Function
Working Principle: Thermal mass flow meters employ the thermal dispersion principle whereby
the rate of heat absorbed by a fluid flowing in a pipe or duct is directly proportional to its mass
flow. In a typical thermal flow meter gas flowing over a source of heat absorbs the heat and cools
the source.

As flow increases, more heat is absorbed by the gas. The amount of heat dissipated from the heat
source is proportional to the gas mass flow and its thermal properties. Therefore, measurement of
the heat transfer supplies data from which a mass flow rate may be calculated.

Thermal mass flow meters are most often used for the regulation of low gas flows. They
operate either by introducing a known amount of heat into the flowing stream and measuring an
associated temperature change or by maintaining a probe at a constant temperature and measuring
the energy required to do so. The components of a basic thermal mass flow meter include two
temperature sensors and an electric heater between them. The heater can protrude into the fluid
stream (Figure A) or can be external to the pipe (Figure B).

In the direct-heat version, a fixed amount of heat (q) is added by an electric heater. As the
process fluid flows through the pipe, resistance temperature detectors (RTDs) measure the
temperature rise, while the amount of electric heat introduced is held constant.
The mass flow (m) is calculated on the basis of the measured temperature difference (T2 -
T1), the meter coefficient (K), the electric heat rate (q), and the specific heat of the fluid (Cp), as
follows:
m = Kq/(Cp(T2 - T1))
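A minimal Python sketch of this constant-heat calculation is given below; the meter coefficient K and the example values (a 10 W heater and a 2 K temperature rise in air) are illustrative assumptions.

def thermal_mass_flow(q_watts, T1, T2, Cp, K=1.0):
    # m = K*q / (Cp*(T2 - T1)); mass flow in kg/s for q in W and Cp in J/(kg*K)
    return K * q_watts / (Cp * (T2 - T1))

# Example: 10 W of heat, temperatures of 293.15 K and 295.15 K, air Cp ~ 1005 J/(kg*K)
print(thermal_mass_flow(10.0, 293.15, 295.15, 1005.0))   # about 0.005 kg/s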
While all thermal flowmeters inject heat into the flow stream, there are two different
methods used to measure the rate of heat dissipation. One method is called constant temperature
differential. Thermal flowmeters using this method utilize a heated RTD as a velocity sensor and
another sensor that measures the temperature of the gas. The flowmeter attempts to maintain a
constant difference in temperature between the two sensors. Mass flowrate is computed based on
the amount of electrical power added to heat the velocity sensor to maintain this constant difference
in temperature.

The second method is called constant current. It also uses a heated RTD as a velocity sensor
and another temperature sensor to measure the temperature of the flow stream. With this method,
the power to the heated sensor is kept constant. Mass flow is computed based on the difference in
temperature between the heated velocity sensor and the temperature of the flow stream. Both
methods make use of the principle that higher velocity flows produce a greater cooling effect. And
both methods measure mass flow based on measuring the amount of cooling that occurs in the
flow stream.

CORIOLIS FLOW METER


Briones, Lucia Isabelle B.

Definition
A Coriolis flow meter is a type of mass flow meter. It is also known as inertial flowmeter,
and is often referred to simply as a mass flowmeter because of its dominance in the mass flowmeter
market.

Coriolis meters are primarily used to measure the mass flow rate of liquids, although they
have also been successfully used in some gas flow measurement applications. The flowmeter
consists either of a pair of parallel vibrating tubes or else as a single vibrating tube that is formed
into a configuration that has two parallel sections. The two vibrating tubes (or the two parallel
sections of a single tube) deflect according to the mass flow rate of the measured fluid that is
flowing inside. Tubes are made of various materials, of which stainless steel is the most common.
They are also manufactured in different shapes such as B-shaped, D-shaped, U-shaped, triangular-
shaped, helix-shaped, and straight. The tubes are anchored at two points. An electromechanical
drive unit, positioned midway between the two anchors, excites vibrations in each tube at the tube
resonant frequency. The vibrations in the two tubes, or the two parallel sections of a single tube,
are 180° out of phase. The vibratory motion of each tube causes forces on the particles in the flowing
fluid. These forces induce motion of the fluid particles in a direction that is orthogonal to the
direction of flow, and this produces a Coriolis force. This Coriolis force causes a deflection of the
tubes that is superimposed on top of the vibratory motion. The net deflection of one tube relative
to the other is given by d = kfR, where k is a constant, f is the frequency of the tube vibration, and
R is the mass flow rate of the fluid inside the tube. This deflection is measured by a suitable sensor.
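Inverting the deflection relation d = kfR gives the mass flow rate directly, as in the minimal Python sketch below; the device constant k and the example numbers are purely illustrative assumptions.

def coriolis_mass_flow(deflection, k, tube_frequency):
    # d = k*f*R  ->  R = d/(k*f)
    return deflection / (k * tube_frequency)

# Example: hypothetical meter constant k and an 80 Hz tube vibration
print(coriolis_mass_flow(4.0e-6, 5.0e-7, 80.0))   # 0.1 (in the mass-flow units implied by k)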
The Coriolis flow meter transmitter can operate on either AC or DC power and requires separate wiring for the power supply and for its output signals. The Coriolis flow meter
transmitter can be integrally or remotely mounted. The transmitter controls the operation of the
driver and processes and transmits the sensor signals. The calibration factor (K) in the transmitter's
memory matches the transmitter to the particular flow tube. This calibration factor defines the
constant of proportionality between the Coriolis force and the mass flow rate for the dynamic
spring constant of the particular vibrating tubes.
The transmitter does more than convert sensor inputs into standardized output signals.
Most transmitters also offer multiple outputs, including mass flow rate, total mass flow, density,
and temperature. Analog and/or pulse outputs are both available, and intelligent transmitters can
generate digital outputs for integration into DCS systems.

History
It is designed differently and works differently than thermal or differential mass flow
meters. The first industrial Coriolis patents date back to the 1950s with the first Coriolis mass flow
meters built in the 1970s. Coriolis flowmeters were first brought onto the market by Micro Motion
in 1977. At that time, Coriolis meters all had bent tubes. Due to patent considerations, a number
of different designs were developed over the next 20 years. One issue with bent-tube meters was
pressure loss and fluid buildup around the bends in the tubes. To compensate for this, suppliers
developed straight-tube Coriolis flowmeters. Endress+Hauser introduced the first straight-tube
meter in 1987. This meter had dual tubes and later evolved into the ProMass. KROHNE introduced
the first commercially viable single-tube straight-tube Coriolis meter in 1994. This followed an
earlier design from Schlumberger that was withdrawn from the market.
Coriolis flow meters are named after Gaspard Gustave de Coriolis, a French mathematician
and engineer. In 1835, Coriolis wrote a paper in which he described the behavior of objects in a
rotating frame of reference. While this is sometimes called the Coriolis Force, it is more accurately
called the Coriolis Effect, since it is not the result of a force acting directly on the object, but rather
the perceived motion of a body moving in a straight line over a rotating body or frame of reference.

Evolution of the Coriolis flow meter

The first generation of Coriolis flow meters consisted of a single curved, thin-walled tube, in which high fluid velocities were created by reducing the tube cross-sectional area in
relation to the process pipe. The tube distortion was measured in reference to a fixed point or plane.
The tubes were excited in such a way that localized high amplitude bending forces were created at
the anchor points. This resulted in severe vibration problems, which were alleviated by two-tube
designs.

These designs reduced external vibration interference, decreased the power needed to
vibrate the tubes, and minimized the vibrational energy leaving the tube structure. One driver was
used to initiate tube vibration, and two sensors were used to detect the Coriolis deflections. While
this design greatly improved the performance of the Coriolis flow meters, the combination of
reduced bore, thin-walled tubing, and high fluid velocities (up to 50 ft/sec) still resulted in
premature meter failure, including potentially catastrophic spills when the meter was used on
corrosive and erosive services. In addition, the unrecovered head losses were high (sometimes over
50 psid), and accuracy was not high enough to allow users to convert batch processes into
continuous ones.

Purpose/Function
A Coriolis meter is based on the principles of motion mechanics. When the process fluid
enters the sensor, it is split. During operation, a drive coil stimulates the tubes to oscillate in
opposition at the natural resonant frequency. As the tubes oscillate, the voltage generated from
each pickoff creates a sine wave.
The Coriolis measuring principle is used in a wide range of different branches of industry,
such as the life sciences, chemicals, petrochemicals, oil and gas, food, and – no less importantly –
in custody transfer applications. Coriolis flowmeters can measure virtually all fluids: cleaning
agents, solvents, fuels, crude oil, vegetable oils, animal fats, latex, silicon oils, alcohol, fruit
solutions, toothpaste, vinegar, ketchup, mayonnaise, gases or liquefied gases.
Design and Working Principle of a Commercial Coriolis Meter

Inside the sensor housing, the U-shaped flow tube is vibrated at its natural frequency by a magnetic device located at the bend of the tube. The vibration is like that of a tuning fork, covering less than 0.1 in. and completing a full cycle about 80 times per second. As the liquid flows through the tube, it is forced to take on the vertical movement of the tube. When the tube is moving upward during half of its cycle, the liquid flowing into the meter resists being forced up by pushing down on the tube.
Advantages
The Coriolis flow meter's ability to measure mass flow, density and temperature simultaneously opens up entirely new perspectives for process control, quality assurance and plant safety. Volume flow, solids content in a fluid, concentrations in multiple-phase fluids, and special density values such as reference density, °Brix, °Baumé, °API, °Balling, °Plato, etc. can also be calculated from the primary variables measured.

Coriolis flow meters are used in a wide range of critical, challenging applications, in
industries including oil and gas, water and wastewater, power, chemical, food and beverage, and
life sciences.

Coriolis meters give excellent accuracy, with measurement uncertainties of 0.2% being typical. They also have low maintenance requirements. Generally, straight pipe runs upstream and downstream of the flow sensor are not required; vortex flow and non-uniform velocity distributions caused by the upstream and downstream piping have no influence on the performance of the mass flow instrument. A change in fluid viscosity has no significant effect on the measured result, and a change in fluid density has little effect on the measured value.

Coriolis flow meters are extremely important in a multitude of flow-measurement


applications. Offering a wide breadth of line sizes, flow measurement accuracy and turndown,
these meters support many industries and can measure flow rates from a few grams/hours up to
120,000 lbs/minute. Coriolis meters have a wide, dynamic range due to the linear nature of the
signal created while measuring flow.

Disadvantage
Apart from being expensive (a typical cost is $6,000), Coriolis flow meters suffer from a number of operational problems. Failure may occur after a period of use because of mechanical fatigue in the tubes. Tubes are also subject to corrosion caused by chemical interaction with the measured fluid and to abrasion caused by particles within the fluid. Diversion of the flowing fluid around the flowmeter causes it to suffer a significant pressure drop, though this is much less evident in straight-tube designs.

It cannot be used to measure media with low density, such as low-pressure gas. A slightly higher gas content in the liquid may cause a significant increase in measurement error. It is sensitive to external vibration interference. It cannot be used for larger diameters.

The pressure loss is large, especially when measuring a liquid with a high saturated vapor pressure; the pressure loss may cause vaporization of the liquid and cavitation, causing an increase in error or even measurement failure. In the selection and sizing of a Coriolis mass flowmeter, pressure loss is an indicator that must be given special attention, especially when the measured fluid viscosity is high, since the pressure loss of the instrument is much higher than that of other types of flowmeters.

TARGET
GROUP 2

Definition
Target flow meters, also known as drag force flow meters, insert a target (drag element),
usually a flat disc or a sphere with an extension rod, into the flow field. They then measure the
drag force on the inserted target and convert it to the flow velocity.
Brief History
The target flowmeter is a kind of differential pressure flowmeter that has been used in industry for decades. Electric and pneumatic target flow transmitters were developed in China in the 1970s. At that time, the force converter directly adopted the force-balance mechanism of the differential pressure transmitter, and this kind of flowmeter inevitably carried many defects caused by the force-balance mechanism itself, such as drifting of the zero position, low measurement accuracy, and poor reliability of the lever mechanism. Because of the drawbacks of the force-balance mechanism, many of the advantages of the target flowmeter itself were not effectively realized, and to this day some users retain a poor impression of the old target flowmeters.

Advantages and Disadvantages


Advantages

 Low initial set up cost


 Can be used with a wide variety of fluids, even viscous fluids and slurries.
 Can be used in abrasive, contaminated, or corrosive fluid flow
 Can be made to measure flow velocity that is sporadic or multi-directional with sphere
drag element designs
Disadvantages

 Pressure drop is inevitable due to the rod and the drag element
Purpose/Function
Working Principle: Target meters measure flow by measuring the amount of force exerted by the flowing fluid on a target suspended in the flow stream. The force exerted on the target by the flow is proportional to the pressure drop across the target.
As with differential pressure flow meters, Bernoulli's equation implies that the pressure drop across the target (and hence the force exerted on the target) is proportional to the square of the flow rate.
The deflection of the target and the force bar is measured in the instrument.
The force on the target can be expressed as :
F = cd ρ v2 At / 2
where
F = force on the target (N)
cd = overall drag coefficient obtained from empirical data
ρ = density of fluid (kg/m3)
v = fluid velocity (m/s)
At = target area (m2)
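As a rough numerical illustration of this relation (not part of any particular meter's documentation), the short Python sketch below inverts the drag equation to recover the fluid velocity from a measured target force; the drag coefficient, target size, pipe bore and force reading are hypothetical values chosen only for the example.

import math

def velocity_from_drag(force_n, drag_coeff, density_kg_m3, target_area_m2):
    """Invert F = Cd * rho * v^2 * At / 2 to obtain the fluid velocity v."""
    return math.sqrt(2.0 * force_n / (drag_coeff * density_kg_m3 * target_area_m2))

# Hypothetical example: water flowing past a 25 mm disc target in a 50 mm pipe
Cd = 1.1                                # assumed overall drag coefficient (empirical)
rho = 998.0                             # water density, kg/m^3
target_area = math.pi * 0.025**2 / 4    # 25 mm disc target, m^2
pipe_area = math.pi * 0.050**2 / 4      # 50 mm pipe bore, m^2

measured_force = 2.5                    # N, as read by the force sensor
v = velocity_from_drag(measured_force, Cd, rho, target_area)
q = v * pipe_area                       # rough volumetric flow estimate, m^3/s
print(f"velocity ≈ {v:.2f} m/s, flow ≈ {q * 1000:.2f} L/s")

In practice the conversion from force to volumetric flow relies on the meter's calibration rather than on a single drag coefficient, so the numbers above only illustrate the square-root relationship between force and flow.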

Whenever there is flow past an obstacle in a pipe, a force, commonly referred to as drag,
is generated to push or drag the obstacle in the direction of flow. Such an obstacle left unsupported
would be carried away with the fluid. If, on the other hand, the obstacle was constrained by a force
equal and opposite to the drag, the magnitude of that force could be used to determine the rate of
flow. This is the underlying principle behind the target flowmeter.

There are two primary contributors to drag. One results from the force generated by the
fluid viscosity as it slides by the obstacle. This is referred to as friction drag and has its major
influence when the flowmeter is operated in the laminar flow regime. The second contributor is
the so-called pressure drag. Pressure drag is the force resulting from the difference between the
pressures immediately upstream and immediately downstream of the obstacle. For turbulent flows,
the pressure drag is the prime contributor to the total drag on the obstacle.

The obstacle or target typically used in practice is a circular disc mounted concentrically
in a pipe. The upstream face sees a relatively high pressure since the forward motion of a large
percentage of the fluid is abruptly stopped before turning and traveling around the target. As the
fluid passes through the annulus around the target, it sees an increase in velocity and, as a result, a
decrease in pressure. At the downstream edge of the plate, this high velocity, low pressure flow
separates from the target surface, setting up turbulence downstream of the target. This results in a
relatively low-pressure region near the downstream face.
REFERENCES:

Dey, A. (2020, October 02). Venturimeter: Definition, Parts, Working, Equation, and Applications. Retrieved November 07, 2020, from https://whatispiping.com/venturimeter-definition-parts-working-equations

Eligoprojects. (2018, October 30). What is Venturimeter? Construction and Working. Retrieved November 07, 2020, from https://eligoprojects.com/what-is-venturimeter-construction-and-working-equation/

GmbH, P. (n.d.). Vortex flow meter: Types and working principle. Retrieved November 07, 2020, from https://visaya.solutions/en/article/vortex-flow-meters-basics/

Mecholic. (n.d.). Venturi Meter–Construction, Working, Equation, Application, Advantages, and Disadvantages. Retrieved November 07, 2020, from https://www.mecholic.com/2016/11/venturi-meter-construction-working-equation-application-advantages.html

Mrsoltys. (2012, April 04). Theoretical Fluid Mechanics: Venturi Meter. Retrieved November 07, 2020, from http://www.mikesoltys.com/2011/03/15/demo-for-theoretical-fluid-mechanics-venturi-meter/

Oldhand, T. (2019, March 02). Specifications of the E-Z-Go Engine. Retrieved November 07, 2020, from https://sciencing.com/specifications-ezgo-engine-7469216.html

Safedrinkingwaterdotcom. (2020, March 23). Venturi meter. Retrieved November 07, 2020, from https://thisdayinwaterhistory.wpcomstaging.com/tag/venturi-meter/

Retrieved November 18, 2020 from https://www.fierceelectronics.com/components/basics-rotameters

Retrieved November 18, 2020 from http://www.globalw.com/support/rotameter.html

Retrieved November 18, 2020 from https://www.pctflow.com/applications/how-does-a-rotameter-work/

Retrieved November 18, 2020 from https://www.flowcontrolnetwork.com/instrumentation/flow-measurement/article/15555832/considering-variablearea-flowmeters#:~:text=The%20history%20of%20variable%2Darea,a%20component%20of%20these%20meters.

Retrieved November 18, 2020 from https://automationforum.co/rotameter-characteristics-components-advantages-and-disadvantages/

Retrieved November 18, 2020 from https://www.globalspec.com/learnmore/sensors_transducers_detectors/flow_sensing/rotameters

Retrieved November 7, 2020 from https://www.flowcontrolnetwork.com/instrumentation/flow-measurement/turbine/article/15563010/turbine-flow-meters-become-more-efficient-reliable#:~:text=PD%20meters%20were%20first%20used,first%20turbine%20meter%20in%201790.&text=Soon%20afterward%2C%20turbine%20meters%20were,flow%20dates%20back%20to%201953.

Retrieved November 7, 2020 from https://en.wikipedia.org/wiki/Flow_measurement

Retrieved November 7, 2020 from https://instrumentationtools.com/turbine-flow-meter-working-principle/

Retrieved November 7, 2020 from https://www.azom.com/article.aspx?ArticleID=11999

Retrieved November 7, 2020 from https://www.emerson.com/en-us/automation/measurement-instrumentation/flow-measurement/about-liquid-turbine-flow-meters#:~:text=A%20turbine%20flow%20meter%20is,line%20of%20the%20turbine%20housing.

Retrieved November 7, 2020 from https://paktechpoint.com/turbine-flowmeter-design-requirement/

Retrieved November 7, 2020 from https://www.instrumentationtoolbox.com/2012/12/introduction-to-turbine-flowmeters.html

Retrieved November 7, 2020 from https://www.nonconmeter.com/blog/development-history-of-target-flowmeter-_b28

Retrieved November 7, 2020 from https://instrumentationtools.com/target-flowmeter-working-principle/

Retrieved November 7, 2020 from https://www.globalspec.com/reference/10750/179909/chapter-16-target-flowmeters

Retrieved November 7, 2020 from https://instrumentationtools.com/thermal-mass-flow-meter-working-principle/

Retrieved November 7, 2020 from https://sea.omega.com/ph/technical-learning/thermal-mass-flow-working-principle-theory-and-design.html

Retrieved November 7, 2020 from https://www.flowcontrolnetwork.com/instrumentation/flow-measurement/thermal/article/15559968/the-history-evolution-of-thermal-flowmeters

Morris, A. S., & Langari, R. (2016). Measurement and Instrumentation (Second Edition). Academic Press. ISBN 9780128008843. https://doi.org/10.1016/B978-0-12-800884-3.00016-2. Retrieved November 7, 2020.

Retrieved November 7, 2020 from https://www.flowcontrolnetwork.com/instrumentation/flow-measurement/coriolis/article/15562371/coriolis-flow-measurement-past-present-future#:~:text=Coriolis%20flowmeters%20were%20first%20brought,by%20Micro%20Motion%20in%201977.&text=To%20compensate%20for%20this%2C%20suppliers,later%20evolved%20into%20the%20ProMass.

Retrieved November 7, 2020 from https://www.flowcontrolnetwork.com/instrumentation/flow-measurement/coriolis/article/15555628/coriolis-effect-flow-meters-all-you-need-to-know

Retrieved November 7, 2020 from https://www.emerson.com/en-us/automation/measurement-instrumentation/flow-measurement/coriolis-flow-meters#:~:text=while%20measuring%20flow.-,Applications,and%20beverage%2C%20and%20life%20sciences.

Retrieved November 7, 2020 from https://www.silverinstruments.com/blog/advantages-and-disadvantages-of-coriolis-mass-flowmeter.html

Retrieved November 7, 2020 from https://www.instrumentationtoolbox.com/2017/02/how-coriolis-mass-flow-meter-works.html
FLOAT LEVEL SWITCHES

I. Definition

Float level switches are float-actuated devices for monitoring liquid level and activating switch contacts at predetermined liquid level thresholds.

A float liquid level switch is an instrument for monitoring the height of a liquid and
tripping an electrical contact switch when a maximum, minimum or intermediate level
has been reached. The liquid level switch output can then be utilised by other
instrumentation to open a valve, illuminate a warning lamp, activate an audible alarm, or
switch on a pump.
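To make the switching action concrete, the minimal Python sketch below models the common two-switch pump-control arrangement hinted at above: one float contact at a high level starts the pump and a second contact at a low level stops it, giving hysteresis so the pump does not rapidly cycle. The function and threshold names are hypothetical and not taken from any particular product.

def pump_command(high_switch_closed, low_switch_closed, pump_running):
    """Two-float-switch (hysteresis) pump-down control.

    high_switch_closed: the float at the high-level switch has lifted (tank nearly full)
    low_switch_closed:  the float at the low-level switch is still lifted (liquid present)
    """
    if high_switch_closed:        # level reached the high mark -> start pumping down
        return True
    if not low_switch_closed:     # level fell below the low mark -> stop the pump
        return False
    return pump_running           # between the two marks: keep the current state

# Level rises past the high switch, is pumped down, then falls below the low switch
running = False
for high, low in [(False, True), (True, True), (False, True), (False, False)]:
    running = pump_command(high, low, running)
    print(f"high={high}, low={low} -> pump {'ON' if running else 'OFF'}")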

II. Working Principle

The principle of a float-type level meter is that a float moves up and down due to buoyancy. A reed switch in the stem is actuated by a magnet in the float and outputs a detection signal.

Float liquid level switches are threshold contact-trip devices. They typically consist of a buoyancy component, usually a sealed hollow ball or cylinder that floats on the liquid, and either a shaft assembly for vertical mounting or a hinge mechanism for horizontal mounting.
As the level increases and decreases a float will be pushed up or drop down. The
float can be connected to a hinge that opens and closes or slides along a shaft.

The position of the float along the shaft or the opening and closing of the hinge
can be detected by various means.

III. Types of Float Level Switches

Horizontal liquid level float switches are used when access to the top of the
tank is not possible or ideal. Convenient mounting in the side of the tank provides the
flexibility to choose the exact desired liquid level switching point. Side mount float
switches are an economic choice and can be used in conjunction with high level alarms
or low level alarms. The floats can be made to be mounted internally or externally,
ensuring that they can be placed easily and exactly where you desire without a high
amount of difficulty.

Vertical liquid level sensors are mounted on the top or bottom of a tank or
vessel. Vertical fluid switches have two parts: a stem and a float. The stem is stationary
and attaches to the top or bottom of a tank. One or more reed switches are hermetically sealed inside the stem, with wires running to the outside of the tank. The float is a doughnut-shaped
object placed around the stem that rises and falls with the level of the liquid. When the
level of the float aligns with the level of the reed switch inside the stem, it magnetically
actuates the reed switch, sending an electrical signal through the wires to the outside of
the tank.

IV. Advantages
● Different types available for top, side or bottom entry mounting
● Cheapest type of liquid level switch
● Simple design and construction
● Not sensitive to changes in SG & density
● Some types do not require power to operate
● Surface disturbances such as ripples, foaming and bubbling will not affect
performance
● Dielectric constant or conductivity of fluid will not affect operation
● Different materials for media compatibility can be specified
● Vapours, condensation and mists will not affect operation
● Vacuum and low pressure sealed tank will not compromise buoyancy of float

V. Disadvantages
● Moving parts vulnerable to clogging, wear and tear, and damage
● Fouling can cause unreliable movement or seizure of float actuation
● Float actuation relies on liquid contact
● Turbulence in the fluid may cause the float to move
● Thick or sticky fluids may prevent the float from moving
● Regular maintenance to ensure nothing is restricting float movement
● Side-entry types provide only single-point level switching
● Float assembly protruding into liquid causing obstruction, attracting fouling and
hygienic cleanliness issues
VI. Common float switch applications

A float switch monitors the liquid level in various residential and industrial
applications and is typically connected to a pump, valve (solenoid valve, electric ball valve,
etc.), or an alarm to notify a user. Due to the variety of designs and types, they can be
used in a wide variety of applications. Common examples are:

● Sump pump float switches can detect a rising water level in the sump pit and actuate a pump to prevent overflow. They can be used in sump and sewage pits.
● Water tank float switches can help in water level control for potable water,
rainwater, wastewater, and sewage application. The pump switches on/off with
the rise/fall of the float switch in the water.
● Refrigerants and air conditioning use them for water level control.
● The beverage industry uses float switches for filling or emptying the beverage
tanks.
● Industrial washers use them to monitor the washer water level.

References:
● https://www.sensorsone.com/float-liquid-level-switches/
● http://www.ydic.co.jp/english/technology/float_E.html#:~:text=The%20principle
%20of%20float%20type,some%20viscosities%20and%20specific%20gravities.
● https://tameson.com/float-switch.html
MAGNETIC LEVEL GAUGE

I. Definition

A magnetic level gauge is a type of level sensor, a device used to measure the
level of fluids. They are used to detect and monitor the levels of liquids in demanding
environmental conditions. A magnetic level gauge includes a “floatable” device that can
float both in high and low density fluids. Magnetism is used to link the indicator in a gauge
to a float inside of a vessel in order to accurately show the level of fluid within.

Although they are similar to float devices, the communication of the liquid surface
level occurs magnetically. The float in this case is a set of strong permanent magnets,
which move in an auxiliary column that is attached to a vessel by two process
connections.

II. Function/Working Principle

The magnetic level indicator working principle is widely used in level instrumentation, especially in measuring levels of aggressive fluids and in severe
operating conditions. It functions based on the effects that one magnet has on nearby
magnets. Magnetic level gauges use magnetism to link the indicator in a gauge to a
float inside of a vessel in order to accurately show the level of fluid within. The mechanics
are simple yet very effective, yielding reliable and repeatable level information for
continuous monitoring and recording of fluid levels.

A magnetic level indicator operates on the principle of communicating vessels, which refers to separate containers that are connected via extrusion outlets to allow low-
or high-density fluids of either homogenous or heterogeneous consistencies to flow
between the vessels. Essentially, the measuring instrument shares the same fluid — and
therefore, the same level — as the vessel. The level indicator is attached to the vessel
and connects directly with the fluid to be measured. Within the chamber is a float with a
magnet assembly inside. This float rests on the fluid’s surface. The float is laterally
confined by the column so it remains close to the side wall of the chamber. As the fluid
level rises or falls, so does the float. As the float moves up or down, the magnet assembly
rotates a series of bi-color magnetic flags or flaps, changing the visual indicators mounted
just outside the chamber from one color to the other. This indicator (magnetized shuttle
or a bar graph) allows the operator to read the level.

From what is mentioned, it can be seen that magnetic level gauges consist of 3
main components: the chamber, the float, and the indicator.

● The chamber (auxiliary column) is the main component and can be constructed
of any non-magnetic material. It is typically mounted to the side of the vessel, with
the liquid level in the chamber set up to match the liquid level in the vessel.
● The float is designed to project a magnetic field through the gauge’s chamber to
an externally mounted indicator system in order to easily view internal fluid levels.
● The external indicator normally consists of rotating flags that are brightly colored
in order to easily identify when the flag is flipped. The indicator is magnetically
coupled to the float and moves up and down with the liquid level.

With their highly accurate fluid measurement capacities, magnetic level gauges are
perfect for applications where you want to measure the fluid level in boilers, tanks, and
process vessels. Beyond these applications, you can customize a magnetic level gauge to
fit virtually any process connection arrangement that you desire.

III. Purpose/Application

Magnetic level gauges are used for level measurement in severe operating conditions. They are very durable and can withstand high temperatures, high pressures and aggressive environments. Magnetic level gauges are suitable replacements for sight
glasses. They measure liquid levels with the use of magnetism and without any power
requirement. Additionally, with the addition of magnetic switches, magnetic level gauges
can also be used for pump control and alarms.

Magnetic level indicators fit most industrial and commercial applications in:

● Refinery and chemical industries
● Energy and power plant technology
● Feed water heaters and boilers
● Oil and gas industries
● Offshore exploration and drilling
● Pipeline compressor applications
● Pulp and paper
● Food and beverage
● Pharmaceutical

Specifically, they serve their purpose in measuring liquid levels in:

● Turbulent tanks
● Heat exchangers
● Boiler applications
● Separators
● Acid storage tanks

IV. Considerations

When selecting a magnetic level gauge it is important to take into account the
strength of the magnetic field. The magnetic field is the heart of the magnetic level
gauge – the stronger the field, the more reliable the instrument will function.

It’s always important to specify a proper magnetic level gauge that’s suitable for
the operating conditions of your specific process. This will be heavily influenced by:

● Vessel type
● Temperature
● Fluid density
● Working pressure

Magnetic level gauges are capable of withstanding high pressures, high temperatures
and corrosive fluids. They can accommodate severe environmental conditions up to 210
bars at 370 °C. While similar to a traditional sight glass in this way, magnetic level gauges
can handle some of the more extreme applications, such as ones with highly corrosive or
hazardous materials.

Aggressive fluid can be addressed by adjusting the construction of both the chamber
and the float to higher grade materials. When a fluid is very aggressive, the float can also
be coated with a suitable lining.
For this technology to work the chamber walls and the auxiliary column should be
made of non-magnetic materials. The float designs provided by most manufacturers are
optimized for a large selection of float materials and the specific gravity of the fluid that
is measured, be it propane, oil, acid, water, butane or interfaces between two fluids. For
example, larger floats may be used with liquids with specific gravities as low as 0.5 while
still maintaining buoyancy. Chemical compatibility, temperature, buoyancy, and viscosity
also affect the selection of the stem and the float. The choice of float material is also
influenced by temperature-induced changes in specific gravity and viscosity – changes
that directly affect buoyancy.
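A quick Archimedes check, sketched below in Python, shows why low specific gravities call for larger floats; the float dimensions, mass and liquid densities are hypothetical and serve only to illustrate the selection logic described above.

import math

def submerged_fraction(float_avg_density, liquid_density):
    """Archimedes: a closed float settles where the displaced liquid weight equals
    the float weight, i.e. submerged fraction = rho_float / rho_liquid."""
    frac = float_avg_density / liquid_density
    return frac if frac < 1.0 else None   # None means the float would sink

# Hypothetical hollow float: 60 mm diameter sphere, 90 g total mass (shell + magnets + air)
volume_m3 = 4.0 / 3.0 * math.pi * 0.030**3
avg_density = 0.090 / volume_m3           # roughly 800 kg/m^3

for liquid, rho in [("water (SG 1.0)", 1000.0), ("light hydrocarbon (SG 0.5)", 500.0)]:
    frac = submerged_fraction(avg_density, rho)
    if frac is None:
        print(f"{liquid}: this float would sink - a larger (lower-density) float is needed")
    else:
        print(f"{liquid}: float rides with about {frac:.0%} of its volume submerged")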

All the magnetic level gauges are fitted with a float. This float is standard in
stainless steel, but the float is also available in Titanium or Hastelloy. The float must have
enough buoyancy and the magnet must be fitted at the right position inside the float. So
it is always important to select a float which is suitable for the process conditions.

In order to select the correct float the following process conditions are necessary.

● Medium
● Density
● Working pressure
● Operating temperature

The float inside a magnetic level gauge can be fitted with a toroidal (360°) magnet or a magnetic bar. A float with a magnetic bar can lose its coupling with the guidance/indication rail if it moves rapidly inside the level gauge; as a result, the magnetic level gauge will not work properly for a while. Toroidal magnets are not affected by rapid movements of the float and can move freely inside the level gauge.

V. Advantages

Since the magnetic level indicator working principle relies on the interaction
between magnets, these level measuring instruments do not need a power source. The
measurements won’t be affected if you lose power in your facility. That’s a key feature
since those are some of the most important times to keep track of fluid levels.

The colored flags are also easy to see, even from a distance, and are paired with
a scale for precise readings.

This technique is advantageous as it does not require restrictive mechanical guide rails and instead relies on the limited lateral motion of the float within the column of the
magnetic level gauge. It is also suitable for fluid mixtures of varying densities, with
variable float designs and materials suitable for the specific gravity of numerous
measured fluids, including acids, butane, oils, water, and heterogenous fluid interfaces.

Magnetic level gauges are increasingly preferable to alternative level measurement systems owing to their improved thermodynamic resistances over apparatuses such as
sight gauges. They can withstand high process temperatures and are optimized for high
pressure applications. Unique process demands can also be met using bespoke magnetic
level gauges with oversized columns and floats with improved buoyant characteristics.

Magnetic level gauges are typically made from stainless steel, making their design
very durable and resistant to corrosion. This material selection allows these gauges to
succeed in difficult applications where other styles of indication may fail.

These robust gauges can hold up to the toughest conditions, including:

● System shock
● Highly corrosive environments
● Hazardous materials
● Vibrations
● High temperatures and pressures
● Excessive moisture

For most applications, magnetic level gauges often require little maintenance. No need
to constantly recalibrate your equipment – these devices provide repeatable, reliable, and
precise level information for you to continuously monitor and record fluid levels. By using
a combination of proven buoyancy principles along with the reliability of magnetism,
magnetic level gauges offer you peace of mind knowing that your readings are right.

Since the indicator magnetically locks onto the position of the float inside the sealed
gage chamber, moving with the float, you get an accurate real-time indication of the
liquid level inside the tank every time you look. Indicators can also be magnetically
stabilized to further ensure accurate level readings regardless of any system of fluid
movement. Each flap in the indicating rail is fitted with a permanent magnet which makes
this level gauge unaffected by shocks, vibrations and high temperatures. Also moisture
and/or an aggressive environment are no problem for this level gauge.

With the available “Pointers” it is possible to set the visual limits on the indicating rail
on every level you require. When the magnetic level gauge is fitted with magnetic
switches it is possible to get a signal. With more switches you can make a pump control
(pump on/off) and/or create a high/low alarm.
REFERENCES:
● https://instrumentationtools.com/magnetic-level-gauge-working-principle-
animation/
● https://www.ljstar.com/how-does-a-magnetic-level-gauge-work/
● https://en.wikipedia.org/wiki/Magnetic_level_gauge
● https://www.azom.com/article.aspx?ArticleID=12457
● https://blog.wika.us/knowhow/magnetic-level-indicator-working-principle/
● https://new.abb.com/products/measurement-products/measurement-products-
blog/outlining-the-working-principles-of-a-magnetic-level-gauge
HEAD LEVEL DEVICES

The principle of this method is based on the measurement of hydrostatic pressure, caused by a liquid head, proportional to the level of liquid. Hydrostatic
level measurement is a really simple and reliable method of measuring level. A
submersible pressure transmitter or a standard pressure transmitter is lowered to
or mounted at a specific depth (zero level). One of the most popular hydrostatic
liquid level measuring systems is the “Bubble Tube”.

BUBBLE TUBE
I. Definition

The bubbler method, also called the air-purge method, is one of the most popular hydrostatic liquid level measuring systems and is suitable for almost any type of liquid. Bubble tubes, also called "bubblers", are widely used for measuring or regulating the liquid level in a tank. The bubbler senses the hydrostatic pressure in a vessel and displays it in a more convenient location. Bubbler-type level detection has been in use since compressed air became available.
The bubble tube is a completely self-contained method, requiring only an air or gas supply connection, a dip tube and an electrical power source to provide accurate level indication. Since only the stationary dip tube and the purge gas come into contact with the liquid, this device is suitable for highly corrosive, viscous, hot (molten metal), explosive, slurry or food-stuff applications involving hazardous or dangerous locations or liquids.

II. Function of Bubble Tube

The main function of a bubble type level system is to determine the liquid level by measuring the pressure required to force a gas into a liquid
at a point beneath the surface. The system consists of a source of
compressed air, air flow restrictor, sensing tube and pressure transmitter.

1. Compressed Air- air or combination of gases that has been compressed to a pressure higher than atmospheric pressure.
2. Air Flow Restrictor- also known as orifice, a key component
frequently found in air, gas and fluid flow control applications. It is a
device used to restrict the flow of a fluid.
3. Sense Tube – also known as dip tube. It is installed in the tank,
connected to the pressure transmitter and the air supply through the
flow restrictor. It is not necessary to extend the sense tube to the
bottom of the container if you are only interested in the fluid height
in the top part of the container. Using a shorter sense tube and lower
range pressure transmitter will improve the resolution and accuracy
of the system.
4. Pressure Transmitter- also known as pressure transducer, is a
mechanical device used to measure the pressure or level of industrial
liquids and gases. It is a transducer that converts pressure into an
analog electrical signal. Pressure transmitter indicated the fluid depth
above the open end of the sense tube.

Furthermore, it is advisable to install a check valve in the air line if the fluid level is above the air supply and pressure transmitter. A check valve is a device that only allows the flow of fluid in one direction. It has two ports, one as an inlet for the media and one as the outlet. Since check valves only allow media flow in one direction, they are commonly referred to as 'one-way valves' or 'non-return valves'.
III. How does Bubble Tube Work?

This method uses a source of clean air or gas and is connected through a
restriction to a bubble tube immersed at a fixed depth into the vessel. The
restriction reduces the airflow to a very small amount. As the pressure builds,
bubbles are released from the end of the bubble tube. Pressure is maintained as
air bubbles escape through the liquid. Changes in the liquid level cause the air
pressure in the bubble tube to vary. At the top of the bubble tube is where a
pressure sensor detects differences in pressure as the level changes.

A dip tube having its open end near the vessel bottom carries a purge gas
into the tank. As gas flows down to the dip tube’s outlet, the pressure in the tube
rises until it overcomes the hydrostatic pressure produced by the liquid level at the
outlet. That pressure equals the process fluid’s density multiplied by its depth from
the end of the dip tube to the surface and is monitored by a pressure transducer
connected to the tube.
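The pressure-to-level conversion described above is just the hydrostatic relation P = ρ·g·h solved for h. A minimal Python sketch, assuming a vented (unpressurized) tank and a gauge-pressure reading on the purge line, is shown below; the pressure reading and liquid density are hypothetical example values.

G = 9.81  # gravitational acceleration, m/s^2

def level_above_dip_tube(purge_pressure_pa, liquid_density_kg_m3):
    """Hydrostatic relation P = rho * g * h, solved for the liquid head h
    above the open end of the dip tube (vented tank assumed)."""
    return purge_pressure_pa / (liquid_density_kg_m3 * G)

# Hypothetical reading: 14.7 kPa of back-pressure on the purge line, water service
h = level_above_dip_tube(14.7e3, 998.0)
print(f"liquid level above the dip-tube outlet ≈ {h:.2f} m")   # about 1.5 m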

IV. Purpose

1. The level bubbler can be used to monitor in-take screens for debris
and initiate an airburst backwash. This is done by placing a dip tube
on each side of the screen and when the pressure differential between the
two reaches a certain point, a backwash can be initiated to clean the
screens. This system is called a Differential Level Bubbler.
2. The level bubbler can be used to measure the level of the wet well
to control the intake pumps. Bubbler level sensors excel at providing continuous level measurements in applications where submerged sensors may be damaged. Models are available that include a graphical user interface (GUI) that allows pump station operators to see real-time information about the system.
3. The level bubbler can be used to measure levels in all types of
liquids such as sanitary waste stations. The bubbler provides real time
level sensing information and will require less maintenance than other
sensors that would otherwise be installed submerged in the corrosive
wastewater handled by the lift station.
4. The level bubbler can be a retrofit replacement for ultrasonic level
transmitters. Ultrasonics, and radar level systems will fail when there’s
foam on the surface of the liquid tank, therefore a level bubbler should be
used. It also works reliably in the presence of vapors, and, unlike
ultrasonics, can be used in media temperatures of more than 350°F.

V. Advantages of Bubble Tube

1. Equipment setup is relatively simple.
2. Suitable for use with corrosive fluids.
3. The sensor is not in direct contact with the liquid, offering long life and greater calibration stability.
4. Reliability is better than that of other level measurement methods.

VI. Disadvantages of Bubble Tube

1. Build-up of material on the bubble tube is not permissible.
2. Not suited to pressurized vessels.
3. Mechanical wear.
4. Requires installation of air lines.
References:
● https://instrumentationtools.com/bubble-tube-level-measurement-
principle/
● https://kingmech.com/how-bubbler-systems-work-2
● https://www.instrumentationtoolbox.com/2014/06/bubbler-tube-system-
for-level.html
● http://www.electricalidea.com/air-purge-method/
Conductive Level Measurement

Conductive level measurement provides safe and simple level detection on conductive liquids. This type of measurement is also known as electrical level. There is
no calibration required and there are no moving parts in the tank which makes it have a
long service life.

The conductivity of a liquid can vary widely. Once the liquid rises to the installation height of the electrode, the liquid medium closes the free alternating-current circuit between the two electrodes. A switching signal is produced from the sudden increase in the current consumption.

Functions

● To avoid overflow
● For maintaining a constant level to avoid material wastage
● For switching off pumps when running dry and indicating an empty tank to avoid
wear and tear and production stoppage

How does it work?

Substances containing water are conductive and are detected very well. Aggressive
liquids can be detected easily using the probes made from highly-resistant materials.
Combustible liquids such as oils, fuels and other solvents are non-conductive and cannot be measured by this device.

The electrodes are installed above the surface of the conductive liquid to be
observed. If the liquid level rises to the point where the electrodes are in contact with
the liquid, the circuit that is connected is completed activating the switch signal.
The minimum conductivity of the liquid must be 10 µS/cm. This condition is fulfilled by most conductive liquids, such as water, acids and lyes.
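As a minimal sketch of the switching logic (the probe geometry and fluids below are hypothetical), a point-level conductive switch can be modelled as a simple threshold comparison: the electrode trips only when the liquid bridging the electrodes implies a conductivity at or above the 10 µS/cm minimum mentioned above.

def electrode_covered(measured_conductance_uS, cell_constant_per_cm=1.0,
                      min_conductivity_uS_per_cm=10.0):
    """Trip the switch when the implied conductivity of the liquid bridging the
    electrodes meets the minimum; conductivity = conductance * cell constant."""
    implied_conductivity = measured_conductance_uS * cell_constant_per_cm
    return implied_conductivity >= min_conductivity_uS_per_cm

for fluid, conductance_uS in [("tap water", 450.0), ("demineralised water", 2.0), ("acid", 5000.0)]:
    state = "level DETECTED" if electrode_covered(conductance_uS) else "no detection"
    print(f"{fluid:>20}: {state}")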

If several switches are needed, multiple electrodes should be used to make the
device work. In order to avoid electrical effects in the liquid, a DC free alternating current
is used for measurement which is generated by an electrode relay or converter.

Interfacial level detection can be easily and economically achieved with this
measurement method. Particularly with oil and petrol separators, the limit value between
the water and the non conductive liquid is easy to detect.

Advantages

● Simple, cost-effective measuring principle
● Multi-point detection with one process connection
● Liquid food applications

Disadvantages: Non-conductive liquids cannot be measured with this method.

References
● http://www.nohken.com/overseas/product/level_switch/liquid_point/mt10.htm
● https://instrumentationtools.com/conductive-level-measurement-working-
principle/
● https://www.endress.com/en/field-instruments-overview/level-
measurement/Conductive-level-measurement
RADAR LEVEL TRANSMITTER

I. DEFINITION

Radar

RADAR (Radio Detection and Ranging) is a device that can detect the presence of
faraway objects by obtaining a reflection of electromagnetic waves emitted from the
device itself.

Radar Level Transmitter

Radar level transmitters are electronic devices that are used for fluid level measurement. As the name suggests, these level transmitters use radar technology for fluid level detection. They operate with electromagnetic waves in the microwave X-band, at around 10 GHz.

II. RADAR LEVEL TRANSMITTER SETUP

The setup of radar level transmitters is not complex and is easy to understand. It features only three fundamental components, which are discussed below.

Solid-state Oscillator:
A solid-state oscillator functions as an electromagnetic signal transmitter. The
solid-state oscillator sends out electromagnetic waves in the direction of the fluid surface
in order to measure the depth or level of fluid without any physical contact.
A radar Antenna:
The radar antenna in this system works as a transducer between the empty space
in the fluid container and the electromagnetic signal sources or receivers. The antenna
receives the process signals and transfers them to the receiver.

Receiver and Signal Processor System:
The receiver is a hardware microprocessor that converts the received signal into a reading. The signal processor performs the conversion of the received data into digital readings.

The operation of all radar level detectors involves sending microwave beams emitted by a sensor to the surface of the liquid in a tank. After hitting the surface of the fluid, the electromagnetic waves return to the sensor, which is mounted at the top of the tank or vessel. The time taken by the signal to return, i.e. the time of flight (TOF), is then used to determine the level of fluid in the tank.

III. WORKING PRINCIPLE OF RADAR LEVEL TRANSMITTERS

The working principle of radar fluid level transmitters is based on Time Domain Reflectometry (TDR), also known as the Time of Flight (TOF) radar measuring principle. A step-by-step description of the radar level detector's working principle follows.

● Since the transmitter itself acts as the level sensor, the electromagnetic signals are sent directly from the sensor. The solid-state oscillator installed in the radar level sensor emits electromagnetic waves toward the fluid surface. The distance is measured with the oscillator as the reference point and the fluid surface as the destination.
● Once the electromagnetic waves hit the fluid surface, the surface reflects a pulsed signal back to the radar antenna. The antenna transmits the signal to the receiver, and as the receiver collects the returning pulse reflection, the time taken for the reflection is calculated.
● The time-to-level calculation is performed by the signal processor, which translates the reflection time into the distance travelled; according to Time Domain Reflectometry, this gives the distance to the fluid surface.

This is the overall working principle of radar level sensors, but factors such as the dielectric constant (DC) of the fluid affect the sensor's performance. The pulse reflection depends strongly on the dielectric constant: a high-DC fluid produces strong reflections, whereas a low-DC fluid absorbs most of the signal.
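Both effects are easy to quantify. The Python sketch below converts a measured time of flight into distance and level, and estimates, under the usual flat-surface, normal-incidence assumption, what fraction of the pulse power a given dielectric constant reflects; the tank height, echo delay and dielectric constants are hypothetical example values.

import math

C = 299_792_458.0   # propagation speed of the microwave pulse in air, m/s (≈ speed of light)

def distance_from_tof(time_of_flight_s):
    """Time of flight: the pulse travels down to the surface and back, so divide by 2."""
    return C * time_of_flight_s / 2.0

def power_reflection(dielectric_constant):
    """Fraction of incident power reflected at a flat air/liquid interface at normal
    incidence: R = ((sqrt(er) - 1) / (sqrt(er) + 1))^2."""
    n = math.sqrt(dielectric_constant)
    return ((n - 1.0) / (n + 1.0)) ** 2

tank_height = 6.0              # m, hypothetical vessel
tof = 30.0e-9                  # s, measured echo delay (30 ns)
distance = distance_from_tof(tof)
print(f"distance to surface ≈ {distance:.2f} m, level ≈ {tank_height - distance:.2f} m")

for medium, er in [("water (high DC)", 80.0), ("oil (low DC)", 2.2)]:
    print(f"{medium}: ~{power_reflection(er):.0%} of the pulse power is reflected")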
V. TYPES OF RADAR LEVEL TRANSMITTER

There are two types of radar level transmitter:

1) Guided wave radar level transmitter
2) Pulse radar level transmitter

1. Guided wave radar transmitter


GWR is based upon the principle of Time Domain Reflectometry (TDR), which is
an electrical measurement technique that has been used for several decades in various
industrial measurement applications. With TDR, a low-energy electromagnetic pulse is guided along a probe. When the pulse reaches the surface of the medium being
measured, the pulse energy is reflected up the probe to the circuitry, which then
calculates the fluid level from the time difference between the pulse sent and the pulse
reflected. The sensor can output the analyzed level as a continuous measurement reading
through its analog output, RS-485 communications output, or optional switching output
depending on output options required. GWR technology also has the ability to measure a
liquid interface. A liquid interface is the ability to detect the top liquid level of the media
as well as the “interface level” or level of the media that is below the liquid level, which
contains a different dielectric property or physical property than the top liquid level being
measured. A typical application would be an oil and water interface, in which the oil being
of a lighter specific gravity and lower dielectric than water would be measured as the top
liquid level and the water would be the interface level. Applications in which GWR is commonly used include paint, latex, animal fat, soybean oil, sawdust, carbon black, titanium tetrachloride, salt, and grain, to name a few.

Advantages:
● Provides accurate & reliable level measurement of liquids, slurries, pastes, liquid
interface & solids.
● Suitable for low dielectric constant media.
● Unaffected by liquid turbulence, change in density, dielectric constant,
conductivity, foam, dust, temperature & pressure
● Has extreme operating capabilities and performs well under extreme
temperature (up to 600ºF)
● Capable of withstanding pressures up to 580 PSIG
● Performs well in difficult applications such as fine powders and sticky fluids

2. Non-contact (pulse) radar transmitter


Non-contact radar, or pulse level radar as it is commonly referred to in the industry, features through-the-air technology, which emits narrow microwave pulses down a cone-shaped antenna. The microwave signal comes in contact with the surface of the measured medium and reflects back to the antenna. The signal is transmitted to the electronic circuit and partly converted into a level signal (because microwaves propagate at very high speed, the waves reach the target and return to the receiver almost instantaneously). Everything inside the tank that conducts energy, such as level switches
or heater systems, can reflect the signal. If the product has a low dielectric level, then
the radar may find a false level. We may also wind up with bad readings from vapour,
foam, or other product conditions. Typical applications include corrosive or non-corrosive
liquid level monitoring, sanitary environments, caustics, small tank or process vessel,
silos, and tote tanks.

Advantages
● Suitable for liquids, solids & granules
● Reliable performance with high accuracy up to 1mm
● Very cost efficient and easy to use
● Fast response to changing levels
● Used on difficult “hard to handle” applications
● Good for corrosive and dirty application since it is not in contact with the
measured media

IV. APPLICATIONS

Because radar level transmitters are highly accurate, they are used in critical engineering environments. The industrial applications of these transmitters or level sensors are as follows.

Mining Industry
In the mining industry, these transmitters are used to check the depth or length
of mines or to check the level of the surface of the ore. Generally, FMCW radar level
transmitters are used here due to frequency continuity. Sometimes, if only an air medium is present, ultrasonic technology may be used instead.

Boiler Engineering
Boilers are hazardous equipment. It is very difficult to measure the fluid level due to the extremely high water and steam temperatures. In such cases guided wave radar level
sensors are used. These guided probes are immune to high temperatures. Therefore,
they are used in boiler technologies as fuel level or fluid level indicators.

Aeronautics Industry
In the automobile industry, fuel levels are often measured using contact-type level indicators. In aeronautics, however, the fuel storage system is more complex; therefore, radar level transmitters are used.

Paper and Pulp Industry
In the paper and pulp industry, these level transmitters are used to measure the level in slurry, water and storage tanks. Guided probes can be used in chemical atmospheres, so they are a suitable choice for this industry.

Bore Digging Technology
In the bore digging process, the devices are exposed to mud, slurry, and subsurface gases. In such cases, these level indicators are used to monitor levels at the dig surface.

Radar level measurement is a safe solution even under extreme process conditions
(pressure, temperature) and vapours. Radar level transmitters can also be used in
hygienic applications for non-contact level measurement. Radar level transmitters
versions are available for different industries like for water/wastewater, the food industry,
life sciences or the process industry. Various antenna versions for every kind of radar
application are available.

References
● https://www.enggcyclopedia.com/2011/11/level-measurement-devices/
● https://instrumentationtools.com/radar-level-
measurement/#:~:text=Radar%20level%20instruments%20measure%20the,flig
ht%20of%20a%20traveling%20wave.
● https://visaya.solutions/en/qa/radar-level-measurement/
● https://www.transmittershop.com/blog/radar-level-transmitters-setup-and-
working-principle
Ultrasonic Level Transmitter

Definition

Sonic refers to sound we can hear, while ultrasonic refers to sound above the human hearing range. Humans can hear frequencies up to about 20 kHz; ultrasonic frequencies are above 20 kHz. Ultrasonic waves are used in industry to measure the level of liquids and solid objects.

An ultrasonic level transmitter is a level measuring device mounted on the top of the tank that transmits an ultrasonic pulse down into the tank. This pulse, travelling at
the speed of sound, is reflected back to the transmitter from the liquid surface. The
transmitter measures the time delay between the transmitted and received echo signal
and the on-board microprocessor calculates the distance to the liquid surface using the
formula.

Distance = (Speed of sound in air x time delay) / 2


Once the transmitter is programmed with the bottom reference of the application
– usually the bottom of the tank – the liquid level is calculated by the microprocessor.
The basic equation for calculating the tank level is

Level = Tank Height – Distance
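The two formulas above translate directly into code. The Python sketch below also applies the common approximation c ≈ 331.3 + 0.606·T m/s for the speed of sound in air, which is roughly what the built-in temperature compensation discussed later corrects for; the tank height, echo delay and temperature are hypothetical example values.

def speed_of_sound_air(temp_c):
    """Common approximation for the speed of sound in dry air, in m/s."""
    return 331.3 + 0.606 * temp_c

def ultrasonic_level(tank_height_m, echo_delay_s, temp_c=20.0):
    """Level = Tank Height - Distance, with Distance = (speed of sound x time delay) / 2."""
    distance = speed_of_sound_air(temp_c) * echo_delay_s / 2.0
    return tank_height_m - distance

# Hypothetical 4 m tall tank, 11.6 ms round-trip echo, 25 °C head space
print(f"level ≈ {ultrasonic_level(4.0, 11.6e-3, 25.0):.2f} m")   # about 2 m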

Function

When an ultrasonic pulse is directed towards an object, it is reflected by the object and the echo returns to the sender. The travel time of the ultrasonic pulse is measured, and from it the distance to the object is found.

Purpose/Application

The ultrasonic measurement principle is also used to find the positions of fish in the ocean, to locate submarines below the water surface, and to find the position of a scuba diver in the sea. Ultrasonic level measurement is a contactless principle and is most suitable for level measurements of hot, corrosive and boiling liquids in the process industries. The normal frequency range used for ultrasonic level measurements is 40–200 kHz.
How does it work?

Ultrasonic level transmitters perform calculations to convert the distance of wave travel into a measure of level in the tank. The time lapse between firing the sound burst
and receiving the return echo is directly proportional to the distance between the
transducer and the material in the vessel. The medium is normally air over the material’s
surface but it could be a blanket of some other gases or vapours. The instrument
measures the time for the bursts to travel down to the reflecting surface and return. This
time will be proportional to the distance from the transducer to the surface and can be
used to determine the level of fluid in the tank. This basic principle lies at the heart of
the ultrasonic measurement technology and is illustrated in the equation:

Distance = (Speed of sound in air x time delay) / 2

The lower frequency instruments are used for more difficult applications; such as
longer distances and solid level measurements and those with higher frequency are used
for shorter liquid level measurements.

Ultrasonic Transducer

Ultrasonic Sensor is the heart of the ultrasonic level Transmitter instrument. This
sensor will translate electrical energy into ultrasound waves. Piezoelectric crystals are
used for this conversion process. Piezoelectric crystals will oscillate at high frequencies
when electric energy is applied to it. The reverse is also true. These piezoelectric crystals
will generate electrical signals on receipt of ultrasound. These sensors are capable of
sending ultrasound to an object and receive the echo developed by the object. The echo
is converted into electrical energy for onward processing by the control circuit.
Advantages of Ultrasonic Level Transmitter

● Ultrasonic transmitters are easy to install on empty tanks or on tanks containing liquid.
● Set-up is simple and those devices with on-board programming capability can be
configured in minutes.
● As there is no contact with the media and no moving parts, the devices are virtually
maintenance free. Wetted materials are usually an inert fluoropolymer, and
resistant to corrosion from condensing vapors.
● Because the device is non-contacting, the level measurement is unaffected by
changes in the liquid density, dielectric, or viscosity, and performs well on aqueous
liquids and many chemicals.
● Changes in process temperature will change the speed of the ultrasonic pulse
through the space above the liquid, but built-in temperature compensation
automatically corrects this.
● Changes in process pressure do not affect the measurement.

Limitations of Ultrasonic Level Transmitter

● Ultrasonic transmitters rely on the pulse being unaffected during its flight time.
Liquids which form heavy vapors, steam or vapor layers should be avoided (use a
Radar transmitter in these instances). As the pulse needs air to travel through,
vacuum applications are not possible.
● Materials of construction generally limit the process temperature to around 158 °F
(70 °C) and pressure to 43 psig (3 bar).
● The condition of the liquid surface is also important. Some turbulence can be
tolerated but foaming will often damp out the return echo.
● Obstructions in the tank, such as pipes, strengthening bars and agitators, will
cause false echoes, but most transmitters have sophisticated software algorithms
to allow masking or ignoring of these echoes.
● Ultrasonic transmitters can be used on silos containing dry products such as
pellets, grains or powders, but these are more difficult to commission. Factors such
as surface angle of repose, dusting and long ranges must be taken into account.
A Guided Wave Radar transmitter is better suited to dry product applications.

References:

● https://www.coulton.com/beginners_guide_to_ultrasonic_level_transmitters.html
● https://instrumentationtools.com/ultrasonic-level-transmitter-working-
principle/#:~:text=An%20ultrasonic%20level%20transmitter%20is,pulse%20do
wn%20into%20the%20tank.&text=The%20transmitter%20measures%20the%2
0time,liquid%20surface%20using%20the%20formula.
● https://www.supmeaauto.com/cpdetail_28_297_271.html?gclid=Cj0KCQiAhZT9B
RDmARIsAN2E-
J03SqJp3EZBbcKoFVlJf4xbWupMQxQuz9VVXzDMoETQIbGXDmNQP8MaArAkEALw
_wcB
● https://www.pipingengineer.org/process-instrumentation-level-measurement/
Capacitance Level Measurement

Capacitance level sensors are used for a wide variety of solids, aqueous and
organic liquids, and slurries. The technique is frequently referred to as RF because radio-frequency signals are applied to the capacitance circuit. Capacitance level sensors can also be used to sense the interface between two immiscible liquids with substantially different dielectric constants.

Since capacitance level sensors are electronic devices, phase modulation and the use of higher frequencies make the sensor suitable for applications in which the dielectric constants are similar.

Working Principle:

The principle of capacitive level measurement is based on a change of capacitance. An insulated electrode acts as one plate of a capacitor and the tank wall (or reference
electrode in a non-metallic vessel) acts as the other plate. The capacitance depends on
the fluid level. An empty tank has a lower capacitance while a filled tank has a higher
capacitance.
A simple capacitor consists of two electrode plates separated by a small thickness
of an insulator such as solid, liquid, gas, or vacuum. This insulator is also called as
dielectric.

The value of C depends on the dielectric used, the area of the plates and the distance between the plates:

C = E × K × A / d

Where:

C = capacitance in picofarads (pF)

E = a constant known as the absolute permittivity of free space

K = relative dielectric constant of the insulating material

A = effective area of the conductors

d = distance between the conductors

This change in capacitance can be measured using an AC bridge.
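A short Python sketch of both relations follows: the parallel-plate formula given above, and the practical calibration in which the probe's capacitance rises linearly between its 'empty' (air) and 'full' (liquid-covered) readings so the level is obtained by interpolation. The plate dimensions and calibration capacitances are hypothetical.

EPS0_PF_PER_M = 8.854   # absolute permittivity of free space, in pF/m

def parallel_plate_capacitance_pf(k_dielectric, area_m2, distance_m):
    """C = E * K * A / d, returned in picofarads."""
    return EPS0_PF_PER_M * k_dielectric * area_m2 / distance_m

def level_fraction(c_measured_pf, c_empty_pf, c_full_pf):
    """With a level probe the capacitance changes linearly as liquid replaces air,
    so interpolate between the two calibration points."""
    return (c_measured_pf - c_empty_pf) / (c_full_pf - c_empty_pf)

print(f"plate example: {parallel_plate_capacitance_pf(2.3, 0.05, 0.002):.0f} pF")
print(f"level ≈ {level_fraction(260.0, 110.0, 410.0):.0%} of the probe span")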

How does it work

Measurement is made by applying an RF signal between the conductive probe and the vessel wall.

The RF signal results in a very low current flow through the dielectric process
material in the tank from the probe to the vessel wall. When the level in the tank drops,
the dielectric constant drops causing a drop in the capacitance reading and a minute
drop in current flow.

This change is detected by the level switch’s internal circuitry and translated into
a change in the relay state of the level switch in case of point level detection.

In the case of continuous level detectors, the output is not a relay state, but a
scaled analog signal.

Level Measurement can be divided into three categories:

● Measurement of non-conductive material
● Measurement of conductive material
● Non-contact measurement
Non-conducting material:

For measuring the level of non-conducting liquids, a bare-probe arrangement is used, as the liquid's resistance is sufficiently high for it to act as the dielectric. Since the electrode and tank are fixed in place, the distance (d) is constant, so the capacitance is directly proportional to the level of the material acting as the dielectric.

Conducting Material:

In conducting liquids, the probe plates are insulated using a thin coating of glass
or plastic to avoid short circuiting. The conductive material acts as the ground plate of
the capacitor.

Proximity measurements (Non-contact type measurements):

In proximity level measurement, the area of the capacitance plates is fixed but the distance between the plates varies. Proximity level measurement does not produce a linear output and is used when the level varies by several inches.

Applications:
1. Locate position
2. Level measurement in process industries

Capacitance Level Probes are used for measuring level of

1. Liquids
2. Powdered and granular solids
3. Liquid metals at very high temperature
4. Liquefied gases at very low temperature
5. Corrosive materials like hydrofluoric acid
6. Very high-pressure industrial processes.

Advantages of Capacitive level measurement:


1. Relatively inexpensive
2. Versatile
3. Reliable
4. Requires minimal maintenance
5. Contains no moving parts
6. Easy to install and can be adapted easily for different size of vessels
7. Good range of measurement, from few cm to about 100 m
8. Rugged
9. Simple to use
10. Easy to clean
11. Can be designed for high temperature and pressure applications.

Disadvantages:

Low-density materials under 20 lb/ft³ and materials with particle sizes exceeding 1/2 in. in diameter can be a problem due to their very low dielectric constants (caused by the large amount of air space between particles).

REFERENCE

● https://instrumentationtools.com/capacitance-level-measurement-working-
principle/#:~:text=The%20principle%20of%20capacitive%20level,depends%20
on%20the%20fluid%20level.
● https://static.dc.siemens.com/datapool/industry/automation/process-
instrumentation/case-studies/Thinking-Caps.pdf
NUCLEONIC LEVEL INDICATORS

I. Definition of Nucleonic Level Measurement

Nuclear or nucleonic level measurement devices can be employed in both point and continuous level detection applications. They are usually applied where all other level measurement techniques fail to work, owing to their capability of working in hazardous situations.

The major source utilized in nucleonic level controls is gamma radiation. Gamma rays are electromagnetic radiation which behaves much like microwaves and light waves; however, they have comparatively higher energy and shorter wavelength, which allows them to pass through the walls of process vessels and the material inside. The field strength of the gamma radiation is determined by a sensor mounted on the other side of the vessel, which ultimately indicates the level of process material in the vessel.

II. Function of Nucleonic Level

Major characteristics of gamma radiations which make them useful over other
technologies for level measurement applications are listed below:

1. They can detect both solid as well as liquid levels.
2. They are resistant to obstructions in the process vessel.
3. They offer an extensive range of operating temperatures.
4. The chemical properties of the process material are not critical to the measurement.
5. Their performance does not get affected by factors like surface turbulence
or change in flow.
6. They can measure level in applications involving mist, foams, and intense
vapors too.
III. Working Principle

The nucleonic measuring principle is based on the attenuation of gamma radiation as it penetrates materials. The radioactive isotope (gamma source) is installed in a container, also referred to as shielding, which emits the radiation only in one direction.

● The source container and the transmitter detecting the radiation are usually mounted on opposite sides of a vessel or pipe.
● The emitted radiation (e.g. gamma rays) passes through the vessel walls and the medium contained in the vessel.
● The actual measuring effect results from the absorption of the radiation by the medium.
● The intelligent transmitter calculates the level, density or the concentration of the medium from the radiation received.

In general, the gamma source emitting the radiation is located external to the process vessel. The radiation passes through the vessel walls and whatever material has accumulated between the source and the detector, which is installed on the other side of the vessel. When the vessel is empty, the transmitted gamma rays arrive at the detector almost unattenuated. As the level of the contents increases, the amount of gamma radiation reaching the detector decreases, so the detected gamma intensity falls as the process level rises.
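The attenuation is exponential, following the standard Beer–Lambert relation I = I0·exp(−μm·ρ·x). The Python sketch below illustrates how quickly the detected count rate falls as more liquid lies in the beam path; the source, empty count rate and attenuation coefficient are typical textbook-style values used here only as hypothetical examples.

import math

def detected_count_rate(empty_rate, mass_attenuation_cm2_g, density_g_cm3, path_cm):
    """Beer-Lambert attenuation: I = I0 * exp(-mu_m * rho * x), where x is the
    thickness of medium crossed by the gamma beam."""
    return empty_rate * math.exp(-mass_attenuation_cm2_g * density_g_cm3 * path_cm)

# Hypothetical Cs-137 setup across a water-filled vessel, 1 m beam path at full level
I0 = 2500.0      # counts/s with the vessel empty
mu_m = 0.0857    # cm^2/g, mass attenuation coefficient of water near 662 keV (typical value)
rho = 1.0        # g/cm^3

for covered_cm in (0, 25, 50, 100):
    rate = detected_count_rate(I0, mu_m, rho, covered_cm)
    print(f"{covered_cm:3d} cm of liquid in the beam -> {rate:8.1f} counts/s")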

IV. Types of Nucleonic Level Indicators

Continuous Level Measurement or 'Full Absorption'

In this measurement principle, the radiation is fully absorbed at full level. The difference in radiation between the source and the detector varies with, and therefore gives an image of, the level. The radiation activity is calculated from the pulse rate received. Typically the pulse rate (radiation level) at 100% level is zero, meaning the gamma rays are completely absorbed by the medium (full absorption). For example, at 50% of the full-range level, only the upper part of the detector receives radiation; consequently the pulse rate increases.

Nucleonic Interface Measurement

In nucleonic interface measurement, the source may be inserted in an enclosed dip tube with a cable extension, which excludes any contact of the source with the medium. Depending on the measuring range and the application, one or several detectors are mounted on the outside of the vessel. The intelligent transmitter measures the average density of the medium between the source and the detector from the radiation received. A direct relationship to the interface layer can then be derived from this density value.

Nucleonic Density Profile Principle

The most exact information on the oil/water emulsion layer is achieved by a multi-detector solution, so-called profiling. Several transmitters are arranged on the vessel wall or inside the vessel. Each detector measures an absorption image of the density. The measuring range is subdivided into zones and an applicable density value is calculated for each zone. The density image is analyzed via an algorithm and visually provided on a monitor.

V. Advantages
● They can be used for both point and continuous level measurements in
liquids and solids as well as interface.
● They are non-contacting
● They are unaffected by high temperatures, pressures, corrosive materials,
abrasive materials, viscous materials, and agitation or clogging.

VI. Disadvantages
● Nuclear level measurement devices are very expensive
● Density changes can create measurement errors
● Material build up on vessel walls can affect measurement results
● Licensing from relevant authorities is required to use them
● Regular leak checks, a high degree of health and safety checking, and careful source handling and disposal are critical requirements.

References:

https://instrumentationtools.com/nuclear-interface-level-measurement/
https://www.instrumentationtoolbox.com/2014/08/operating-principle-of-
nuclear-level.html
http://www.chipkin.com/nuclear-level-measurement/
PRESSURE

Members:
Adame, Dan Joseph A.
Arandia, Linard Sidryx
Garcia, Allyssa Joyce O.
Macatangay, Amir M.
Macatangay, Layra Alexis
Marquez, Jan Marini M.
Mendoza, Christine Mae M.
Panapanaan, Fate Lorizze M.

LIQUID COLUMN (MANOMETER)

Definition

Pressure measurement is the analysis of an applied force by a fluid on a surface. It is


typically measured in units of force per unit of surface area. There are two types of liquid column
manometer: vertical and inclined. The vertical liquid column manometer is filled with mercury and is
the oldest type of instrument used to measure and display pressure in an integral unit; it was
invented by Evangelista Torricelli in 1643. The two types have the same function but differ in orientation.

Purpose
The vertical liquid column manometer is used to check pressure in gas networks but cannot
accurately register very low pressures. The inclined manometer is essential for obtaining the most
accurate readings at the low pressures typical of industrial gas applications. A low-pressure industrial gas system may
be used to heat or cool manufacturing processes.

Working Principle

The principle behind a manometer gas or liquid pressure gauge is extremely simple.
Hydrostatic equilibrium shows that, in a liquid at rest, the pressure is the same at any two points at
the same height. For example, if both ends of the U-tube are left open to the atmosphere, then the
pressure on each side will be equal. As a consequence, the level of the liquid on the left-hand side
will be the same as the level of the liquid on the right-hand side – equilibrium. However, if one end
of the U-tube is left open to the atmosphere and the other is connected to an additional gas/liquid
supply, the two pressures will differ and the liquid levels will no longer be equal.

Function and Process

Liquid-column gauges consist of a column of liquid in a tube whose ends are exposed
to different pressures. The column will rise or fall until its weight (a force applied due to
gravity) is in equilibrium with the pressure differential between the two ends of the tube (a
force applied due to fluid pressure). A very simple version is a U-shaped tube half-full of liquid,
one side of which is connected to the region of interest while the reference pressure (which
might be the atmospheric pressure or a vacuum) is applied to the other. The difference in
liquid levels represents the applied pressure. The pressure exerted by a column of fluid of
height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore, the
pressure difference between the applied pressure Pa and the reference pressure P0 in a U-tube
manometer can be found by solving Pa − P0 = hgρ. In other words, the pressure on either end
of the liquid must be balanced (since the liquid is static), and so Pa = P0 + hgρ.
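
As a small worked example of the relation Pa = P0 + hgρ above, the sketch below evaluates it for a mercury-filled U-tube whose reference leg is open to the atmosphere. The fluid properties and reference pressure are standard values used here as assumptions.

```python
# Standard property values used as assumptions for this worked example.
RHO_MERCURY = 13_595.0   # density of mercury, kg/m^3
G = 9.81                 # gravitational acceleration, m/s^2
P_ATM = 101_325.0        # atmospheric reference pressure on the open leg, Pa

def applied_pressure(height_m, rho=RHO_MERCURY, p_ref=P_ATM):
    """Absolute pressure on the measurement leg: Pa = P0 + h*g*rho."""
    return p_ref + height_m * G * rho

# A 100 mm column difference corresponds to roughly 13.3 kPa above atmospheric.
print(applied_pressure(0.10))
```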

In most liquid-column measurements, the result of the measurement is the height h,


expressed typically in mm, cm, or inches. The h is also known as the pressure head. When
expressed as a pressure head, pressure is specified in units of length and the measurement fluid
must be specified.
BOURDON TUBE

Definition:

The Bourdon tube is the namesake of Eugène Bourdon, a French watchmaker and engineer
who invented the Bourdon gauge in 1849. Over the years, the Bourdon tube has established itself
as the elastic element in most pressure gauges in use today.

The Bourdon tube is an elastic-element type of pressure transducer. It is relatively cheap


and is commonly used for measuring the gauge pressure of both gaseous and liquid fluids. It
consists of a specially shaped piece of oval-section, flexible, metal tube that is fixed at one end
and free to move at the other end.

Transducer - an electronic device that converts energy from one form to another; in this case, it
converts pressure into an analog electrical signal.

Purpose:

The Bourdon-tube gauge, invented about 1850, is still one of the most widely used
instruments for measuring the pressure of liquids and gases of all kinds, including steam, water,
and air up to pressures of 100,000 pounds per square inch (70,000 newtons per square cm).

Working Principle:

The Bourdon pressure gauge operates on the principle that, when pressurized, a flattened
tube tends to straighten or regain its circular form in cross-section. The Bourdon tube comes in C,
helical, and spiral shapes.

When a gauge is pressurized, the Bourdon tube deflects, and this tip travel drives the pointer to
enable pressure measurement. The higher the pressure requirement of the application, the stiffer the
Bourdon tube needs to be, which means that wall thickness and diameter are key considerations for
producing the tip travel needed for accurate pressure measurement. A standard gauge for an
industrial fluid handling application would generally call for an accuracy of 3 to 5 percent of full
scale; a Bourdon test gauge typically provides an accuracy of 0.25 to 1.0 percent of full scale.

Process:

Bourdon tubes are radially formed tubes with an oval cross-section. The pressure of the
measuring medium acts on the inside of the tube and produces a motion in the non-clamped end
of the tube.

This motion is a measure of the pressure and is indicated via the gauge movement. As the sensed
pressure increases, the tube tends to straighten and, through a linkage and gear, drives an
indicating pointer. When an elastic transducer (a Bourdon tube in this case) is subjected to a pressure,
it deflects; this deflection is proportional to the applied pressure when calibrated.
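
Since the calibrated deflection is treated as proportional to pressure, a gauge readout can be sketched as a simple linear mapping from pressure to pointer angle. The span and pointer sweep below are illustrative assumptions, not values for any specific gauge.

```python
# Illustrative assumptions for a generic gauge, not data for a real instrument.
FULL_SCALE_BAR = 10.0    # pressure at full scale
FULL_SCALE_DEG = 270.0   # pointer sweep at full scale

def pointer_angle(pressure_bar):
    """Pointer angle for a pressure within the calibrated span (clamped to range)."""
    pressure_bar = max(0.0, min(pressure_bar, FULL_SCALE_BAR))
    return pressure_bar / FULL_SCALE_BAR * FULL_SCALE_DEG

print(pointer_angle(2.5))   # 67.5 degrees: a quarter of the span gives a quarter of the sweep
```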

Types:

The C-shaped Bourdon tubes, formed into an angle of approx. 250°, can be used for
pressures up to 60 bar. For higher pressures, Bourdon tubes with several superimposed windings
of the same angular diameter (helical tubes) or with a spiral coil in the one plane (spiral tubes) are
used.

1. C-Type Bourdon Tube


One end of the tube is sealed, and the other end is connected to the source of
pressure that is being measured. The end that pressure is applied to is mounted in such a
way that it cannot move. When pressure is applied to the inside of the tube, the sealed end
of the tube will tend to straighten out, which will cause a small amount of movement at the
sealed end of the tube. This movement can be amplified directly by gears and a pointer to
make a direct-reading pressure gauge. Or the movement can be transferred to a linear
potentiometer, which would let the signal convert pressure to a change of resistance. The
potentiometer can be part of a bridge circuit so that the output signal can be represented by
a change of voltage. The image on the left is a typical Bourdon tube sensor used in a
pressure gauge.

The shape of the Bourdon tube can be modified so that a small amount of pressure
will cause the tube to move farther. One modification involves bending the tube into a
series of spirals and the second involves bending the tube into a helix.

2. Spiral Bourdon Tube

A spiral Bourdon tube is made by winding a partially flattened metal tube into a spiral
having several turns instead of a single C-bend arc. The tip movement of the spiral equals
the sum of the tip movements of all its individual C-bend arcs; therefore, it produces a
greater tip movement than a C-bend Bourdon tube. It is mainly used in low-pressure
applications.

3. Helical Bourdon Tube


A helical Bourdon tube is wound in the form of a helix. It allows the tip movement to be
converted to a circular motion.

1. Both the spiral and helical tubes are more sensitive than the C-Type tube. This
means that for a given applied pressure a spiral or helical tube will show more
movement than an equivalent C-Type tube, thus avoiding the need for a magnifying
linkage.

2. Spiral and helical tubes can be manufactured in very much smaller sizes than the
equivalent C-Type tubes. Hence, they can be fitted into smaller spaces, such as
inside recorders or controller cases where a C-Type would be unsuitable because of
the size.
BELLOWS

Definition

Bellows are thin-walled metallic cylinders, with deep convolutions, of which one end is
sealed and the other end remains open. The closed end can move freely while the open end is fixed.
A bellows gauge contains an elastic element that is a convoluted unit that expands and contracts
axially with changes in pressure. The pressure to be measured can be applied to the outside or
inside of the bellows. However, in practice, most bellows measuring devices have the pressure
applied to the outside of the bellows. Like Bourdon-tube elements, the elastic elements in
bellows gauges are made of brass, phosphor bronze, stainless steel, beryllium-copper, or other
metal that is suitable for the intended purpose of the gauge.

Function

Bellows-type gauges are used for the measurement of absolute pressure. They are more
sensitive than Bourdon gauges and may be used for measuring pressures up to 40 mm Hg.

Purpose
Although some bellows instruments can be designed for measuring pressures up to
800 psig, their primary application aboard ship is in the measurement of low pressures or small
pressure differentials. Many differential pressure gauges are of the bellows type. In some designs,
one pressure is applied to the inside of the bellows, and the other pressure is applied to the outside.
In other designs, a differential pressure reading is obtained by opposing two bellows in a single
case. Bellows elements are used in various applications where the pressure-sensitive device
must be powerful enough to operate not only the indicating pointer but also some type of
recording device.

How it Works?/Process:

When the pressure inside the bellows increases, the convoluted discs that form the bellows
thicken and the length of the bellows increases. This increase in length is the sum of the
expansion of all the discs and is a measure of the pressure inside the bellows. The movement of
the end of the bellows can be used to drive a pointer over a scale. In practice, however, the range
of pressure needed to fully extend the bellows is very small, as is the pressure required to strain
the bellows.

The bellows are used in two forms. In one arrangement, pressure is applied to one side of
the bellows and the resulting deflection is counterbalanced by a spring. This arrangement indicates
the gauge pressure. In the second arrangement, a differential pressure is indicated. In this
device, one pressure is applied to the inside of one sealed bellows while the other pressure is applied
to the inside of another sealed bellows. By suitable linkage and calibration of the scale, the pressure
difference is indicated by a pointer on the scale.
Bellows Pressure Sensors

RELATIVE PRESSURE SENSOR

The bellows pressure sensor is made of a sealed


chamber that has multiple ridges like the pleats of an
accordion that are compressed slightly when the sensor is
manufactured. When pressure is applied to the chamber, the
chamber will try to expand and open the pleats. The figure
on the left shows an example of a bellows sensor, which uses
a spring to oppose the movement of the bellows and provides
a means to adjust the amount of travel the chamber will have
when pressure is applied. In low-pressure bellows sensors,
the spring is not required. The travel of the bellows can be
converted to linear motion so that a switch can be activated,
or it can be connected to a potentiometer. This type of sensor
is used in low-pressure applications usually less than 30 psi.
The bellows sensor is also used to make a differential
pressure sensor. In this application two bellows are mounted
in a housing so that the movement of each bellows opposes
the other. This will cause the overall travel of the pair to be
equal to the difference of pressure that is applied to them.

ABSOLUTE PRESSURE SENSOR


To measure absolute pressure we need two bellows. The first one is the reference bellows,
which is provided with a perfect vacuum on the inside. The second one is the measuring bellows,
which is subjected to the process pressure. Since these absolute pressure sensors are generally used
to measure low pressures, the bellows are not equipped with calibrated springs and they are used
in expansion. The bellows will stretch with increasing process pressure. The deflection of the
bellows is transferred via the transmission mechanism to the pointer. A change in the atmospheric
pressure has no influence on the measurement in this case, since the influence of this pressure on
the two bellows is equally great.

Absolute pressure sensors exist in two different versions. On the one hand, there is the
beam balance principle and on the other hand the opposed principle, as in the illustrations shown
above.
DIFFERENTIAL PRESSURE SENSOR

Just as differential pressure can be measured with a single


bellows, as described above, we can also use dual
bellows. The low process pressure is connected to the first
bellows while the high process pressure is connected to the
second bellows.

Both of these process pressures will exert a force on


the effective area of the bellows upon which they act. The
resultant force rotates the pointer.

Bellows Pressure Sensors Working Principle

The bellows is a one-piece, collapsible, seamless metallic unit that has deep
folds formed from very thin-walled tubing. The diameter of the bellows ranges from
0.5 to 12 in. and may have as many as 24 folds. System or line pressure is applied
to the internal volume of the bellows. As the inlet pressure to the instrument varies,
the bellows will expand or contract. The moving end of the bellows is connected
to a mechanical linkage assembly. As the bellows and linkage assembly moves,
either an electrical signal is generated or a direct pressure indication is provided.
The flexibility of a metallic bellows is similar in character to that of a helical, coiled
compression spring. Up to the elastic limit of the bellows, the relation between
increments of load and deflection is linear.

However, this relationship exists only when the bellows are under
compression. It is necessary to construct the bellows such that all of the travel
occurs on the compression side of the point of equilibrium. Therefore, in practice,
the bellows must always be opposed by a spring, and the deflection characteristics
will be the combined action of the spring and the bellows.
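
A minimal sketch of that spring-opposed arrangement: the pressure force on the effective bellows area is balanced by the combined stiffness of the bellows and the opposing spring, giving a deflection proportional to pressure. The area and stiffness values are illustrative assumptions.

```python
# Illustrative assumptions; not data for a specific bellows element.
EFFECTIVE_AREA_M2 = 6.0e-4   # effective area of the bellows
K_BELLOWS = 2.0e4            # stiffness of the bellows itself, N/m
K_SPRING = 8.0e4             # stiffness of the opposing range spring, N/m

def bellows_travel(pressure_pa):
    """Deflection of the free end: force balance P*A = (k_bellows + k_spring)*x."""
    force = pressure_pa * EFFECTIVE_AREA_M2
    return force / (K_BELLOWS + K_SPRING)

print(bellows_travel(50_000))   # metres of travel at 0.5 bar gauge
```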
DIAPHRAGM PRESSURE GAUGE

Definition

The diaphragm pressure gauge is a device that, as its name suggests, uses a diaphragm: a flexible
membrane with two sides. On one side is an enclosed capsule containing air or some other fluid at a
predetermined pressure. The other side can be left open to the air or screwed into whatever system
the gauge is meant to measure. The diaphragm also attaches to some sort of meter, which shows
how high the pressure is.

Function

The diaphragm pressure gauge consists of a circular membrane, made from sheet metal of
precise dimensions, which can either be flat or corrugated. The diaphragm is mechanically
connected to the transmission mechanism which will amplify the small deflections of the
diaphragm and transfer them to the pointer.

The process pressure is applied to the lower side of the diaphragm, while the upper side is
at atmospheric pressure. The differential pressure arising across the diaphragm, lifts up the
diaphragm and puts the pointer in motion.

The deflection of the diaphragm is very small (+/- 1 mm) making it necessary to use a high-
ratio multiplying movement to rotate the pointer along the full length of the scale. The actuation
of such a high-ratio transmission mechanism is possible because diaphragm deflection is able to
generate large forces. A flat diaphragm made of metal will only be linear when the deflection is
very small, too small to have sufficient movement of the pointer.
At larger deflections, a flat diaphragm loses its linearity since more and more stress will
occur in the diaphragm. The diaphragm becomes increasingly stiffer due to the growing tension,
resulting in less deflection of the diaphragm for a similar increase in pressure.

A flexible material, such as a thin sheet of nylon, can however serve as a flat diaphragm.
The diaphragm will then be opposed by a calibrated spring which ensures the linearity and pushes
the diaphragm back to its starting position.

For industrial applications, corrugated metal diaphragms are usually used. The corrugations
make the diaphragm more elastic and are arranged such that the deflection of the
diaphragm is linear. There are different types of corrugated profiles, as shown in the figure on the
right.

Purpose

Diaphragm pressure gauge is considered only as an accessory device. The main purpose of
this pressure gauge is to use a diaphragm with a known pressure to measure pressure of a fluid.
Due to the presence of a diaphragm, these gauges are extremely suitable for use on viscous media.
For corrosive gases and liquids, the diaphragm may be coated or covered with a foil.

In addition, it has many different uses; for instance, it is used to monitor the pressure of a
canister of gas, measure atmospheric pressure, or even record the strength of the vacuum in a
vacuum pump.

How does it work?

They cover measuring spans from 10 mbar to 40 bar. The measuring element consists of
one circular diaphragm clamped between a pair of flanges. The positive or negative pressure acting
on these diaphragms causes deformation of the measuring element. The magnitude of the
deformation is proportional to the pressure to be measured, and it is coupled to the pointer
mechanism.
Working Principle

The diaphragm pressure gauge consists of a circular membrane, made from sheet metal of
precise dimensions, which can either be flat or corrugated.

The diaphragm is mechanically connected to the transmission mechanism which will amplify the
small deflections of the diaphragm and transfer them to the pointer.

The Capsule

The sensing element of a capsule pressure gauge consists of two corrugated


diaphragms welded together at their periphery to form a capsule.

As you can see in the illustrations below, there are two types of capsules: the
convex and nested. A convex capsule is formed by attaching two convex diaphragms
opposite to each other. A nested capsule consists of a convex and a concave
diaphragm, also secured to one another along their periphery.

If the process pressure is applied along the exterior of the capsule, nested
capsules have the advantage of being more resistant against overpressures.
Convex capsule

Nested capsule

For the measurement of very small pressure differences, the deflection of a single
capsule may be too small. Therefore, multiple capsules can be stacked on top of each
other until sufficient displacement is obtained to move the pointer across the full scale.
These stacks can be built with either convex or nested diaphragms.

Stacked capsules
STRAIN GAUGES

Definition

A Strain gauge (sometimes referred to as a Strain gage) is a sensor whose resistance


varies with applied force; It converts force, pressure, tension, weight, etc., into a change in
electrical resistance which can then be measured. When external forces are applied to a
stationary object, stress and strain are the result. Stress is defined as the object's internal resisting
forces, and strain is defined as the displacement and deformation that occur.

The strain gauge is one of the most important sensors of the electrical measurement
technique applied to the measurement of mechanical quantities. As their name indicates, they are
used for the measurement of strain. As a technical term "strain" consists of tensile and
compressive strain, distinguished by a positive or negative sign. Thus, strain gauges can be used
to pick up expansion as well as contraction.

Function

A strain gauge is used as a precautionary measure in many testing applications. Usually,


when a strain gauge gives a certain reading an alert will be triggered to inform the user that the
capacity has been reached, this means that the issue can be addressed before it becomes
dangerous.

The functioning of a strain gauge depends entirely on the electrical resistivity of an
object/conductor. When an object is stretched within its elastic limit and does not break
or buckle permanently, it becomes thinner and longer, resulting in higher electrical resistance. If an
object is compressed and does not deform permanently, it broadens and shortens, resulting in
decreased electrical resistance. The values obtained after measuring the electrical resistance of a
gauge help to determine the amount of stress induced.

The excitation voltage is applied at the input terminals of a gauge network, while the
output is read at the output terminals. Normally, these are connected to a load and are likely to
remain stable for longer periods, sometimes decades. The glue used for gauges depends on the
duration of a measurement system – cyanoacrylate glue is suitable for short term measurements
and epoxy glue for long term measurements.

In vehicle weighing applications, for example, a strain gauge sensor is installed to carry out a
precise and reproducible measurement of the stress changes that correspond to the weight of the vehicle.

Purpose

The main purpose of a strain gauge is to indirectly determine stress and its variation with
time, quantitatively. Change in stress is determined by multiplying the measured strain by the
modulus of elasticity.

Strain gauges are extensively used in the field of geotechnical monitoring and
instrumentation to constantly monitor dams, inner linings of tunnels, structures, buildings, cable-
stayed bridges, and nuclear power plants to avoid mishaps and accidents in case there’s any
deformity in them.

Timely actions taken can avoid accidents and loss of life due to deformities. Hence, strain
gauges are important sensors in the geotechnical field.

Strain gauges are installed on these structures and then, the complete data from them is
remotely retrievable through data loggers and readout units. They are considered as significant
measuring equipment for ensuring productivity and safety.

Working Principle

A strain gauge works on the principle of electrical conductance and its dependence on the
conductor’s geometry. Whenever a conductor is stretched within the limits of its elasticity, it
does not break but gets narrower and longer. Similarly, when it is compressed, it gets shorter and
broader, ultimately changing its resistance.

We know that resistance depends directly on the length and inversely on the cross-sectional area of
the conductor, as given by:

R = ρL/A

Where,
R = Resistance

ρ = Resistivity of the material

L = Length

A = Cross-Sectional Area

The change in the shape and size of the conductor also alters its length and the cross-
sectional area which eventually affects its resistance. For example, keeping the material
resistivity constant, the resistance of the sample can be increased by increasing the length, or
decreasing the cross-sectional area. It can also be seen from the resistivity equations that
increasing the resistivity of the material will increase the resistance assuming the same
dimensions. Similarly decreasing the resistivity will decrease the resistance.
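
In practice the relative resistance change of a gauge is usually related to strain through a gauge factor, i.e. the proportionality between ΔR/R and strain. The sketch below uses a gauge factor of 2 and a 350 Ω nominal resistance, which are typical assumed values rather than figures taken from this text.

```python
# Typical values assumed for illustration; not taken from this text.
GAUGE_FACTOR = 2.0   # proportionality between dR/R and strain for metal-foil gauges
R_NOMINAL = 350.0    # unstrained gauge resistance, ohms

def resistance_under_strain(strain):
    """Gauge resistance for a given strain, using dR/R = GAUGE_FACTOR * strain."""
    return R_NOMINAL * (1.0 + GAUGE_FACTOR * strain)

print(resistance_under_strain(500e-6))   # 500 microstrain -> about 350.35 ohm
```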

Any typical strain gauge has a long, thin conductive strip arranged in a zig-zag
pattern of parallel lines. The reason for aligning the strip in a zig-zag fashion is to fit a long
conductor into a small sensing area; the pattern itself does not increase the sensitivity, since the
percentage change in resistance for a given strain is the same for the entire conductive strip as for
any single trace.

Also, a single trace would be liable to overheating, which would change its resistance and thus
make it difficult to measure the changes precisely.

Application

Strain gauge technology has a huge amount of uses - almost unlimited. Strain gauges are
a fundamental sensing element and are used within many different types of sensors. They are
well used in industries such as rail, aerospace, mechanical engineering and research and
development. Some of the applications they have been used for includes:

o Stresses on aircraft wing deflection

o Rotational strain on turbines, wheels, fans, propellers, and motors

o Testing structural components for bridges and buildings

o Various load cells

o Dynamic strain measurement for shock inspection, and so on.

o Endurance test and measurement of various materials.

Types
1. Mechanical

2. Hydraulic

3. Electrical Resistance

4. Optical

5. Piezoelectric
PIEZORESISTIVE TRANSDUCER

Definition

A pressure transducer is a device that measures the pressure of a fluid, indicating the force
the fluid is exerting on surfaces in contact with it. Pressure transducers are used in many control
and monitoring applications such as flow, air speed, level, pump systems or altitude.

A typical piezoresistive strain gauge pressure transducer utilizes strain gauges bonded to a
flexible diaphragm so that any change in pressure causes a small deformation, or strain, in the
diaphragm material. The deformation changes the resistance of the strain gauges, typically
arranged as a Wheatstone bridge, providing a convenient conversion of the pressure measurement
into usable electrical signals.

Function

Piezoresistive transducers transform vibrations and pressure into corresponding changes in
resistance. However, such devices do not convert mechanical energy into electrical current;
unlike piezoelectric transducers, piezoresistive transducers are unable to transform resistance into
energy.
a) Piezoresistive Effect

Pressure is also the cause of the piezoresistive effect, yet the defining factor
here is the resistance change of the piezo property. Whereas the piezoelectric
effect is the charge or voltage that generates from pressure, the piezoresistive
effect is marked by changes in resistance of materials as the result of pressure.
In both cases, pressure plays a key role in the effects that surround piezo
properties.

Piezoresistive (diaphragm-based) differential pressure sensors based on


silicon, in turn, consist of a thin silicon diaphragm in which resistors in the form
of a Wheatstone bridge are embedded. If there is a pressure difference, the
diaphragm deflects and the resistors are distorted. The resistors connected
together in the measuring bridge react to this distortion, resulting in a
piezoresistive effect: their resistance changes, and with it the electrical voltage
across the bridge. Finally, a pressure-proportional measuring signal is produced.

Piezo resistors are the most fundamental devices to run on the piezoresistive
effect. Piezoresistive devices generally consist of semiconductor materials and
are made for the measurement of pressure. The degree to which pressure is
applied will help to determine the intensity of the effect and its usefulness in
select applications.

The piezoresistive effect was recognized as far back as 1856 by Irish


physicist Lord Kelvin, who noted a phenomenon in metal devices that fit the
basic description of the effect. However, a century would pass before the
piezoresistive effect was formally identified in 1954 by C.S. Smith, who detailed
examples of the process at work in samples of germanium and silicon.

Sensors and transducers are the two types of devices that use both
piezoresistive and piezoelectric effects. Measurement is among the foremost
uses of such devices; detection is another.

b) Working Principle

Piezoresistive transducers are based on the idea that a mechanical input


(pressure, force, or acceleration for example) applied to a mechanical structure
of some kind (a beam, a plate, or a diaphragm) will cause the structure to
experience mechanical strain. Small piezoresistors attached to the structure
undergo the same mechanical strain. The resulting deformation causes the
piezoresistors’ electrical resistance to change, allowing for the transducer to be
used as a sensing device. The electrical resistance of the piezoresistors is usually
not sensed directly. Rather, the resistors are wired together in an electrical circuit
configuration called a Wheatstone bridge. The bridge has a constant input
voltage and produces a measurable output voltage that is proportional to the
electrical resistance.

If the transducer mechanical input is zero, then the mechanical structure


sees zero strain, resulting in the resistors also experiencing zero strain. The
bridge output voltage is therefore zero and the bridge is said to be balanced.
When the transducer does have an applied input, however, then the mechanical
structure and the resistors undergo strain, changing the electrical resistance of
the piezoresistors. This in turn changes the currents in the bridge circuit such that
the bridge now produces an output voltage. This bridge output voltage is
proportional to the magnitude of the mechanical input. The primary physical
phenomenon that makes this possible is piezoresistance: the material property
that the electrical resistance of the material changes when the material is
subjected to mechanical deformation or strain. An electrical resistor is fabricated
from a piezoresistive material and installed in the bridge. A force f causes strain
in the material in one of several ways: bending, stretching, or twisting. The
piezoresistor is usually positioned such that its orientation makes it susceptible
to strain in one primary direction. The material’s electrical characteristics are
described by Ohm’s law, e = iR, where e is voltage, i is current, and R is
resistance—and resistance is a function of strain.
Because piezoresistive transducers are based on electrical resistance, an
energy-dissipating phenomenon, the effect is not thermodynamically reversible.
Thus the piezoresistive effect can be used for sensing but not for actuation.

A piezoresistive sensor is an electromechanical system: it contains electrical


and mechanical subsystems that respond to a mechanical input (force, pressure,
or acceleration) to produce an electrical output (a voltage signal proportional to
the mechanical input). In order to model the sensor we therefore begin by
considering its primary electrical sub-system—the Wheatstone bridge.
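
A minimal sketch of that Wheatstone-bridge sub-system, under the assumption of four equal arms with one active piezoresistor: with zero mechanical input the bridge is balanced and the output is zero, and a small resistance change produces an output of roughly V_exc·ΔR/(4R). The excitation voltage and nominal resistance are illustrative assumptions.

```python
# Illustrative assumptions for a quarter bridge with one active piezoresistor.
V_EXC = 5.0      # bridge excitation voltage, V
R_NOM = 1000.0   # nominal resistance of each bridge arm, ohms

def bridge_output(delta_r):
    """Bridge output voltage when one arm changes from R_NOM to R_NOM + delta_r."""
    v_active = V_EXC * (R_NOM + delta_r) / (2 * R_NOM + delta_r)  # half-bridge with the active arm
    v_reference = V_EXC * R_NOM / (2 * R_NOM)                     # fixed reference half-bridge
    return v_active - v_reference

print(bridge_output(0.0))   # 0 V: zero input, balanced bridge
print(bridge_output(2.0))   # about 2.5 mV, close to V_EXC * delta_r / (4 * R_NOM)
```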

Unlike relative and absolute pressure sensors, differential pressure sensors


measure the difference between any two pressures. A differential pressure sensor
has two separate pressure connections (hose, threaded or manifold). And –
depending on the calibration – it can measure both positive (p1>p2) and negative
(p1<p2) differential pressures.

Moreover, a piezoresistive pressure sensor contains several thin wafers of


silicon embedded between protective surfaces. Silicon has a monocrystalline
structure, which means that the diaphragm always returns to its original state
after stretching without becoming distorted. The surface is usually connected to
a Wheatstone bridge, a device for detecting small differences in resistance. The
Wheatstone bridge runs a small amount of current through the sensor. When the
resistance changes, less current passes through the pressure sensor. The
Wheatstone bridge detects this change and reports a change in pressure.

Application

The piezoresistive effect can be used for the measurement of pressure and also of humidity
and gas concentrations. For example, when a membrane swells on contact with moisture
or gases in the volume and exerts pressure on the piezoresistor, that pressure can be measured by a
piezoresistive sensor.

Piezoresistive sensors can also be made from polymer layers that respond to pressure
by changing their shape or volume. Piezoresistive sensors are therefore suitable for a variety of
applications, including solid-state accelerometers and bipolar transistors.

These sensors are suitable for a variety of applications because of their simplicity and
robustness. They can be used for absolute, gauge, relative and differential pressure measurement,
in both high- and low-pressure applications.

· Barometric Absolute Pressure (BAP)

· Manifold Absolute Pressure (MAP)

· Barometric and Altimetric Measurements

· Industrial Control

· Water-level Measurement for washing machines and dishwashers


Fiber Optics

Definition

Optical fibers are made from either glass or plastic. Most are roughly the diameter of a human hair, and
they may be many miles long. Light is transmitted along the center of the fiber from one end to
the other, and a signal may be imposed on it.

Measurement of pressure in industrial processes is performed by a variety of sensors, most


of which operate by converting the applied pressure to a mechanical movement. The mechanical
movement is then measured by a displacement sensor and converted to an electrical signal. In fiber
optic pressure sensors, the displacement is measured by altering light delivered by a fiber optic
transmission system to the sensing element. The intensity or another characteristic of the return
light is used to measure the displacement of the sensing element.

Purpose

Optical pressure sensors detect a change in pressure through an effect on light. In the
simplest case this can be a mechanical system that blocks the light as the pressure increases. In
more advanced sensors, the measurement of phase difference allows very accurate measurement
of small pressure changes. Because of their freedom from electromagnetic interference, fibre-optic
sensors are very useful in harsh environments. One example is the oil and gas industry. Conditions
in a well can easily reach 20,000 psi and 185 °C. Optical sensors continue to perform well under
these extremes.

Working principle

1. Total Internal Reflection

· When light traveling in an optically dense medium hits a boundary at a steep angle,
the light is completely reflected. This is called total internal reflection. This effect is
used in optical fibers to confine light in the core. Light travels through the fiber core,
bouncing back and forth off the boundary between the core and cladding. Because the
light must strike the boundary with an angle greater than the critical angle, only light
that enters the fiber within a certain range of angles can travel down the fiber without
leaking out.

2. Critical Angle

· The critical angle is the angle of incidence where the angle of refraction is 90°. The
light must travel from an optically denser medium to an optically less dense medium.
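
A small worked example of that relation, following Snell's law: the critical angle is arcsin(n_cladding / n_core) for light travelling from the denser core into the less dense cladding. The index values below are illustrative assumptions for a glass fiber.

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Critical angle of incidence (degrees) at the core/cladding boundary."""
    if n_cladding >= n_core:
        raise ValueError("total internal reflection requires n_core > n_cladding")
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative refractive indices for a glass core and its cladding.
print(critical_angle_deg(1.48, 1.46))   # roughly 80.6 degrees
```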

A transducer modulates a light signal according to the value of the process parameter being
sensed. This modulated light signal travels through a fiber optic communication link to an interface
unit. The communication link is typically in the form of a fiber optic cable or cables. Note that in
some cases the transducer may be the cable itself; this intrinsic sensing is accomplished by allowing
the process to alter the optical properties of the fiber core, resulting in direct modulation of the light
signal. The interface unit is used either to process the incoming light signal or to condition and
multiplex it with light signals from other fiber optic instrumentation. The basic operation of a fiber
optic sensor involves a light source which provides light to a transducer. The transducer modulates
the light, which is then sent to an optical detector and then to the signal processing equipment. The
light source for a fiber optic sensor is typically a light emitting diode (LED) or a laser. Both of
these sources convert electrical power into light with distinct spectral characteristics.

Types of Fiber-Optic

1. Intensity-Modulated Sensors

In intensity-type sensors, the light emitted from an optical source is carried along a fiber,
its intensity is modified at the transducer and the light is returned to an optical detector. These
sensors are analog in nature, as the light intensity detected is proportional to the measured variable.
Intensity-modulated sensors can be classified as using one of two general modulation mechanisms:
transmission and reflection.

a. Transmissive Intensity Sensors

· The effect of bending loss is utilized. The fiber is distorted


over a length by pressing it between grooved plates with a deformation
force. The wavelength of the grooves is approximately equal to the
resonance wavelength. The light intensity transmitted by the optical fiber
decreases as the liquid pressure increases.
b. Reflective Intensity Sensors

· The fiber optic cables in a reflective intensity sensor typically


consist of several optical fibers in a bundle. Some of the fibers serve to
transmit the light to the sensor, and the rest return the modulated light to
the optical detector. The sensor consists of a reflective diaphragm or
membrane that is allowed to deflect with the applied pressure. The light
beam is "bounced" off of the reflective diaphragm and picked up by the
receiving fibers. As the diaphragm is displaced, the intensity of the
reflected light is modulated.

2. Phase-Modulated Sensors

Phase-modulated sensors use interferometric methods to sense the measured variable.


Interferometry is the use of interference phenomena, based on the wave properties of light, to
perform measurements. In phase-modulated sensors, changes in the measurand result in a phase
difference between the modulated light and a reference light beam.

a. Mach-Zehnder Interferometer
· The light source is split into a reference leg and a measurement leg.
The measurement leg experiences both a length change and change in
refractive index due to the pressure applied directly to the fiber. The two
beams are then recombined and the phase modulation is detected by
measuring the intensity of the recombined light.

b. Fabry-Perot Interferometer

· Fabry-Perot interferometric pressure sensors incorporate a


resonance cavity, also referred to as an etalon, consisting of two partial
reflectors on either side of an optically transparent medium. One of the
reflectors, or mirrors, is attached to a diaphragm, and the cavity length is
allowed to vary with the applied pressure.

c. Michelson Interferometer

· This interferometer is very similar to the Mach-Zehnder


configuration, except that the sensing and reference legs are terminated
with a reflective mirror. This results in the elimination of one coupler, but
also introduces a significant disadvantage. In the Michelson
interferometric configuration, the coupler feeds light back both into the
detector and the laser. Feedback into the laser creates a source of optical
noise which reduces the sensitivity of the interferometer.
PIEZOELECTRIC TRANSDUCERS
Definition
The piezoelectric transducer is an electroacoustic transducer used for the conversion of
pressure or mechanical stress into an alternating electrical signal. Piezoelectric transducers are
small in size and have rugged construction.
The piezoelectric transducer uses the piezoelectric material which has a special property,
i.e. the material induces voltage when the pressure or stress is applied to it. The material which
shows such property is known as the electro-resistive element. The materials used for the
measurement purpose should possess desirable properties like stability, high output, insensitive to
the extreme temperature and humidity and ability to be formed or machined into any shape. But
none of the materials exhibiting piezoelectric effect possesses all the properties. All piezoelectric
materials are non-conductive in order for the piezoelectric effect to occur and work. They can be
separated into two groups: crystals and ceramics. Originally, crystals made from quartz took hold
as the primary material for piezoelectric crystal transducers. In the early 1950s, quartz crystals
began to give way to piezoelectric ceramic as the primary transducer material. The advantages
offered by a ceramic transducer when compared to other materials include ceramic’s ability to be
manufactured in a wide variety of shapes and sizes, its capability of operating efficiently at low
voltage, and its ability to function at temperatures up to 300 degrees Celsius. Some of the materials
that exhibit the piezoelectric effect are quartz, Rochelle salt, polarized barium titanate, ammonium
dihydrogen phosphate, ordinary sugar, etc.
Piezoelectric Ceramics:
● Barium titanate is a ferroelectric ceramic material with piezoelectric properties. For
that reason, barium titanate has been used as a piezoelectric material longer than
most others. Its chemical formula is BaTiO3. Barium titanate was discovered in
1941 during World War II.
● Lithium niobate is a compound that combines oxygen, lithium, and niobium. Its
chemical formula is LiNbO3. Also a ferroelectric ceramic material, it is just like
barium titanate in that it has piezoelectric properties, too.
Piezoelectric Crystals:
● Quartz, which is a natural crystal, is highly stable but the output obtained from it is
very small. Due to its stability, quartz is used commonly in the piezoelectric
transducers. It is usually cut into rectangular or square plate shape and held between
two electrodes. The crystal is connected to the appropriate electronic circuit to
obtain sufficient output.
● Rochelle salt, a synthetic crystal, gives the highest output amongst all the materials
exhibiting the piezoelectric effect. However, it has to be protected from moisture
and cannot be used at temperatures above 115 °F. Overall, the synthetic
crystals are more sensitive and give greater output than the natural crystals.

Function
Piezoelectric transducers can be utilized for different applications and perform various
functions.
● Industrial equipment: Piezoelectric transducers have a number of important uses in
industrial processes. Piezoelectric transducers are commonly used for individual
part cleaning, welding plastics and drilling, or milling ceramics and other difficult
materials.
● Automotive: Second only to industrial machinery, automobile manufacturing is
one of the largest markets for piezoelectric devices. Piezoelectric transducers are
behind some of the most important advances in automotive technology. This
includes the use of a transducer to measure detonations in engines.
● Aerospace: Aerospace continues to be one of the most important piezoelectric
markets. Piezoelectric transducers offer a precise, cost-effective method of
monitoring and controlling structural vibrations. For any application that requires
an extremely small and precise mechanism, piezoelectric devices offer greater
efficiency and a higher power to weight ratio than electromagnetic motors.
● Commercial sonar: Commercial sonar is another important piezo ceramic use.
Piezo transducer technology has been used in sonar since the First World War,
when echoes were used to detect the presence of enemy ships. Small piezo
transducers were also commonly found in landline phones, where they sat inside the ringer
and helped to generate a noticeable noise to alert people to incoming calls.
● Medical: Piezoelectric transducers can be found in ultrasound and other medical
imaging technology.

Purpose
The piezoelectric transducer converts a physical quantity into a proportional
electrical quantity, either a voltage or an electric current, which is easily
measured by analogue and digital meters.
Aside from this, piezoelectric transducers are used for several purposes. These
include the following:
● High frequency response: They offer very high frequency response that means the
parameter changing at very high speeds can be sensed easily.
● High transient response: The piezoelectric transducers can detect the events of
microseconds and also give the linear output.
● High dynamic response: The piezoelectric transducer has the ability to sense
pressure oscillations at frequencies from tens of hertz to tens of megahertz.
● High output: They offer high output that can be measured in the electronic circuit.
● These devices are used in the measurement of strain, vibration and deformation.

Working Principle
The piezoelectric transducers work on the principle of piezoelectric effect. This effect was
discovered in the year 1880 by Pierre and Jacques Curie. There are certain materials that generate
electric potential or voltage when mechanical strain is applied to them or conversely when the
voltage is applied to them, they tend to change the dimensions along certain planes.

Piezoelectric Effect

The faces of piezoelectric material, usually quartz, is coated with a thin layer of conducting
material such as silver. When stress is applied, the ions in the material move towards one of the
conducting surfaces while moving away from the other. This results in the generation of charge.
This charge is used for calibration of stress. The polarity of the produced charge depends upon the
direction of the applied stress. Stress can be applied in two forms as compressive stress and tensile
stress as shown below.

Two Forms of Stress

Compressive stress is due to the application of external compressive force. Compressive


stress results in the shortening of the solid. Tensile stress is due to the application of an external
stretching force. Tensile stress results in elongation of the solids.
When a force is applied to a piezoelectric material, an electric charge is generated across
the faces of the crystal. This can be measured as a voltage proportional to the pressure.

A given static force results in a corresponding charge across the sensor. However, this will
leak away over time due to imperfect insulation, the internal sensor resistance, the attached
electronics, etc. As a result, piezoelectric sensors are not normally suitable for measuring static
pressure. The output signal will gradually drop to zero, even in the presence of constant pressure.
They are, however, sensitive to dynamic changes in pressure across a wide range of frequencies
and pressures. This dynamic sensitivity means they are good at measuring small changes in
pressure, even in a very high-pressure environment.
Unlike piezoresistive and capacitive transducers, piezoelectric sensor elements require no
external voltage or current source. They generate an output signal directly from the applied strain.
The output from the piezoelectric element is a charge proportional to pressure. Detecting this
requires a charge amplifier to convert the signal to a voltage.
Here, quartz crystal coated with silver is used as a sensor to generate a voltage when stress
is applied on it. A charge amplifier is used to measure the produced charge without dissipation. To
draw very low current the resistance R1 is very high. An internal amplifier makes the sensor
simpler to use. For example, it makes it possible to use long signal cables to connect to the sensor.
The amplifier can also include signal-conditioning circuitry to filter the output, adjust for
temperature and compensate for the changing sensitivity of the sensing element. The capacitance
of the lead wire that connects the transducer and piezoelectric sensor also affects the calibration.
So the charge amplifier is usually placed very near to the sensor. So in a piezoelectric transducer
when mechanical stress is applied a proportional electric voltage is generated which is amplified
using a charge amplifier and used for calibration of applied stress.
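
As a rough numerical illustration of that charge-to-voltage conversion, the sketch below treats the sensor as producing a charge proportional to the pressure change and an ideal charge amplifier as dividing that charge by its feedback capacitance. The sensitivity and capacitance values are illustrative assumptions, not data for a real sensor or amplifier.

```python
# Illustrative assumptions only; not the data of any real sensor or amplifier.
SENSITIVITY_PC_PER_BAR = 20.0   # charge produced per unit pressure change, pC/bar
C_FEEDBACK_PF = 100.0           # feedback capacitance of the charge amplifier, pF

def charge_amp_output(pressure_step_bar):
    """Ideal charge-amplifier output for a dynamic pressure step (sign ignored)."""
    charge_pc = SENSITIVITY_PC_PER_BAR * pressure_step_bar
    return charge_pc / C_FEEDBACK_PF   # pC divided by pF gives volts

print(charge_amp_output(5.0))   # 1.0 V for a 5 bar pressure change
```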

References:

American Piezo. (n.d.) Piezoelectric Market. Retrieved November 7, 2020 from


https://www.americanpiezo.com/markets.html

Bright Hub Engineering. (2009, October 12). What are Piezoelectric Transducers? Retrieved
November 7, 2020 from https://www.brighthubengineering.com/hvac/52190-piezoelectric-
transducers/
Circuit Globe. (n.d.) What is Piezo-Electric Transducer? Retrieved November 7, 2020 from
https://circuitglobe.com/piezo-electric-transducer.html

Electrical Voice. (2017, November 13). Piezoelectric Transducer. Retrieved November 7, 2020
from https://electricalvoice.com/piezoelectric-transducer-advantages-applications/

Elprocus. (n.d.) Piezoelectric Transducer - Working, Circuit, Advantages and Applications.


Retrieved November 7, 2020 from https://www.elprocus.com/what-is-a-piezoelectric-transducer-
circuit-diagram-working-and-applications/

Hashemian, H. M., Black, C. L., & Farmer, J. P. (1995). Assessment of Fiber Optic.

Pressure Sensors: The Design Engineer's Guide. (2020, November 6). Retrieved from AVNET:
https://www.avnet.com/wps/portal/abacus/solutions/technologies/sensors/pressure-
sensors/core-technologies/optical/

Adams, T., & Layton, R. (1970, January 01). Piezoresistive transducers. Retrieved November
07, 2020, from https://link.springer.com/chapter/10.1007/978-0-387-09511-0_8

Engineering, O. (2020, July 07). How Does a Pressure Transducer Work? Retrieved November
07, 2020, from https://www.omega.com/en-us/resources/pressure-transducers-how-it-
works

Ghosh, A. (2013, February 14). Piezoresistive Sensor : How They Works. Retrieved November
07, 2020, from https://thecustomizewindows.com/2013/02/piezoresistive-sensor-how-they-
works/

Niedenführ, P. (2019, February). The Piezoresistive Effect and Measuring Pressure. Retrieved
November 07, 2020, from https://blog.first-sensor.com/en/piezoresistive-effect

APC, I. (2017, October 16). Piezoelectric Effect vs. Piezoresistive Effect. Retrieved November
07, 2020, from https://www.americanpiezo.com/blog/piezoelectric-vs-piezoresistive/

Benson, M. (2019, January 10). Manometer Types and Working Principle. Retrieved from


https://www.engineeringclicks.com/manometer/

https://en.wikipedia.org/wiki/Pressure_measurement#:~:text=Pressure%20range%2C%20sensi
tivity%2C%20dynamic%20response,by%20Christiaan%20Huygens%20in%201661.

Rodriguez, A. (2017, April 25). Inclined Manometer Advantages. Retrieved from


https://sciencing.com/instrument-measures-pressure-gas-vapor-8302.html

https://instrumentationtools.com/bellows/
https://instrumentationtools.com/bellows-pressure-sensors-working-principle-animation/

http://users.telenet.be/instrumentatie/pressure/bellows-pressure-
gauge.html#:~:text=Bellows%20are%20thin%2Dwalled%20metallic,the%20bellows%20will%2
0be%20compressed.

https://automationforum.co/bellow-type-pressure-
gauge/#:~:text=Introduction,up%20to%2040%20mm%20Hg.


http://www.fairprene.com/terminology.htm#:~:text=In%20a%20convoluted%20diaphrag
m%2C%20the,of%20a%20non%2Dconvoluted%20diaphragm.

https://instrumentationtools.com/pressure-gauges-with-diaphragm-sensor-principle/

https://www.sika.net/en/products/sensors-and-measuring-instruments/mechanical-
pressure-gauges/diaphragm-pressure-gauges.html

http://users.telenet.be/instrumentatie/pressure/diaphragm-pressure-
gauge.html#:~:text=The%20diaphragm%20pressure%20gauge%20consists,transfer%20them%2
0to%20the%20pointer.

https://www.omega.co.uk/prodinfo/StrainGauges.html

https://www.explainthatstuff.com/straingauge.html

https://encardio.medium.com/strain-gauge-principle-types-features-and-applications-
357f6fed86a5

https://www.variohm.com/news-media/technical-blog-archive/what-is-a-strain-gauge-

https://www.sciencedirect.com/topics/engineering/strain-gauge

https://www.electronics-notes.com/articles/basic_concepts/resistance/electrical-resistivity.php

https://www.eminebea.com/en/product/mcd/straingage.shtml#:~:text=Strain%20gages%20consis
t%20of%20a,is%20proportional%20to%20the%20strain.


Xuejin, L., Yuanlong, D., Yongqin, Y., Xinyi, W., & Jingxian, L. (2008). Microbending optical
fiber sensors and their applications. Proceedings of the 2008 International Conference
on Advanced Infocomm Technology - ICAIT ’08. doi:10.1145/1509315.1509404
PDC REPORTING
CHE- 3101
GROUP 7 AND 8
MEMBERS:
Ericka P. Canarias
Mary Grace D. Castillo
Camille C. Catena
Mariel M. Gonzales
Kyla Marie T. Plaza
Sophia Lourdes Rallos
John Emmanuel Ramos
Twinkle Anne G. Rosales

ON-LINE MEASUREMENT OPTIONS FOR PROCESS CONTROL


Temperature

THERMOCOUPLE
What is Thermocouple?
Thermocouple, also called thermal junction, thermoelectric thermometer,
or thermel, a temperature-measuring device consisting of two wires of different metals joined at
each end. One junction is placed where the temperature is to be measured, and the other is kept at
a constant lower temperature. A measuring instrument is connected in the circuit. The
temperature difference causes the development of an electromotive force (known as the Seebeck
effect) that is approximately proportional to the difference between the temperatures of the two
junctions. Temperature can be read from standard tables, or the measuring instrument can
be calibrated to read temperature directly.
Unlike a thermometer, which relies on the thermal characteristics of a material like
mercury, a thermocouple measures temperature by generating an electrical voltage. That makes it
useful for signaling electronic systems that control household gas devices, such as water heaters
and boilers. Various types of thermocouples are available, but your furnace thermocouple or the
one on your gas water heater is most likely a K type thermocouple. Its purpose is to keep the pilot
lit.
Thermocouples are known for their versatility as temperature sensors and are therefore
commonly used in a wide range of applications, from industrial thermocouples to the
regular thermocouples found on utilities and everyday appliances. Because of the wide range of
models and technical specifications, it is extremely important to understand a thermocouple's basic
structure, how it works, and its ranges, so as to better determine the right type and material of
thermocouple for your application.
The Seebeck effect
Thermocouples rely on the Seebeck effect. In 1821 Thomas Seebeck discovered that a
continuous current flows in a thermoelectric circuit when two wires of dissimilar metals are
joined at both ends and one of the ends is heated. The magnitude of the voltage depends on the
magnitude of the temperature change and on the characteristics of the metals. A thermocouple
may consist of a pair of insulated wires, each made of different metals, joined together on one
end and connected to a measuring device at the other. It may also consist of coaxial sheathings
separated from each other by insulating material.

Functions of Thermocouples:

• Thermocouples are widely used in science and industry


• They can be used for gas turbine exhaust, kilns, diesel engines, and other industrial processes
• These are used as the temperature sensors in thermostats in homes, offices &
businesses
• Thermocouples are used in industry for monitoring the temperatures of metals in iron,
aluminum, and other metal processing
• They are used in the food industry for cryogenic and low-temperature applications
• Thermocouples are used as a heat pump for performing thermoelectric cooling
• These are used to test temperatures in chemical plants and petroleum plants
• These are used in gas machines for detecting the pilot flame.

Thermocouple types
Thermocouples are available in different combinations of metals or calibrations. The
most common are the “Base Metal” thermocouples known as Types J, K, T, E and N. There are
also high temperature calibrations - also known as Noble Metal thermocouples - Types R, S, C
and GB.
Each calibration has a different temperature range and environment, although the
maximum temperature varies with the diameter of the wire used in the thermocouple. Although
the thermocouple calibration dictates the temperature range, the maximum range is also limited
by the diameter of the thermocouple wire. That is, a very thin thermocouple may not reach the
full temperature range. The K type thermocouple is known as a general purpose thermocouple due
to its low cost and wide temperature range.

How to choose a Thermocouple

1. Determine the application where the thermocouple will be used


2. Analyze the temperature ranges the thermocouple will be exposed to
3. Consider any chemical resistance needed for the thermocouple or sheath material
4. Evaluate the need of abrasion and vibration resistance
5. List any installation requirements
Beaded Wire Thermocouple
A beaded wire thermocouple is the simplest form of thermocouple. It consists of two
pieces of thermocouple wire joined together with a welded bead. Because the bead of the
thermocouple is exposed, there are several application limitations. The beaded wire
thermocouple should not be used with liquids that could corrode or oxidize the thermocouple
alloy. Metal surfaces can also be problematic. Often metal surfaces, especially pipes, are used to
ground electrical systems; the indirect connection to an electrical system could impact the
thermocouple measurement. In general, beaded wire thermocouples are a good choice for the
measurement of gas temperature. Since they can be made very small, they also provide very fast
response time.
Thermocouple Probe

A thermocouple probe consists of thermocouple wire housed inside a metallic tube. The
wall of the tube is referred to as the sheath of the probe. Common sheath materials include
stainless steel and Inconel®. Inconel supports higher temperature ranges than stainless steel,
however, stainless steel is often preferred because of its broad chemical compatibility. For very
high temperatures, other exotic sheath materials are also available.
The tip of the thermocouple probe is available in three different styles: grounded, ungrounded, and exposed. With a grounded tip, the thermocouple is in contact with the sheath
wall. A grounded junction provides a fast response time but it is most susceptible to electrical
ground loops. In ungrounded junctions, the thermocouple is separated from the sheath wall by a
layer of insulation. The tip of the thermocouple protrudes outside the sheath wall with an
exposed junction. Exposed junction thermocouples are best suited for air measurement.

Surface Probe

Measuring the temperature of a solid surface is difficult for most types of temperature
sensors. In order to assure an accurate measurement, the entire measurement area of the sensor
must be in contact with the surface. This is difficult when working with a rigid sensor and a rigid
surface. Since thermocouples are made of pliable metals, the junction can be formed flat and thin
to provide maximum contact with a rigid solid surface. These thermocouples are an excellent
choice for surface measurement. The thermocouple can even be built in a mechanism which
rotates, making it suitable for measuring the temperature of a moving surface.
Wireless Thermocouples

Wireless thermocouples are Bluetooth wireless transmitters that connect with smartphones or tablets to log and monitor temperature measurements. These transmitters accept different sensor inputs, including but not limited to thermocouple, RTD, pH, and relative humidity. The data are transmitted via Bluetooth wireless technology to a smartphone or tablet with the companion app installed. The app allows the smartphone to pair with and set up multiple transmitters.

How Thermocouple works


When two wires composed of dissimilar metals are joined at both ends and one of the
ends is heated, there is a continuous current which flows in the thermoelectric circuit.

If this circuit is broken at the center, the net open circuit voltage (the Seebeck voltage) is
a function of the junction temperature and the composition of the two metals. This means that when the junction of the two metals is heated or cooled, a voltage is produced that can be
correlated back to the temperature.
Considerations for Accurate Thermocouple Measurements
Thermocouple output signals are typically in the millivolt range, and generally have a very low
temperature to voltage sensitivity, which means that you must pay careful attention to the
sources of errors that can impact your measurement accuracy. The primary sources of errors for
the thermocouple measurement to take into consideration are noise, offset and gain errors, cold-
junction compensation (CJC) accuracy, and thermocouple errors.
CJC Errors
CJC errors represent the difference between the actual temperature at the point where the
thermocouple is connected to the measurement device (the cold-junction temperature), and the
measured temperature by the device. The CJC error is roughly a 1 to 1 contributor to the
accuracy of the temperature measurement of the thermocouple, and is often one of the largest
single contributors to the overall accuracy. The overall CJC error includes the error from the CJC
temperature sensor (often a thermistor) used to sense the cold-junction temperature, the error
from the device measuring the CJC sensor, and the temperature gradient between the cold-
junction and the CJC sensor.
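To make the compensation concrete, the sketch below walks through the usual voltage-domain CJC procedure under a simplifying assumption: a single average sensitivity stands in for the thermocouple's reference table, so the numbers are only illustrative.

```python
# Minimal sketch of cold-junction compensation (CJC), assuming a single
# average sensitivity instead of the standard thermocouple reference tables.

SENSITIVITY_UV_PER_C = 41.0  # assumed Type K average, microvolts per degC

def temp_to_uv(temp_c):
    """Thermocouple voltage relative to a 0 degC reference (linear approximation)."""
    return SENSITIVITY_UV_PER_C * temp_c

def uv_to_temp(voltage_uv):
    """Inverse of the linear approximation."""
    return voltage_uv / SENSITIVITY_UV_PER_C

def compensated_temperature(measured_uv, cold_junction_c):
    # 1. Convert the cold-junction temperature (from the CJC sensor) to an
    #    equivalent thermocouple voltage.
    cj_uv = temp_to_uv(cold_junction_c)
    # 2. Add it to the measured voltage to refer the reading to 0 degC.
    total_uv = measured_uv + cj_uv
    # 3. Convert the total voltage back to a temperature.
    return uv_to_temp(total_uv)

if __name__ == "__main__":
    # Example: 9.2 mV measured with the terminals at 23 degC.
    print(f"Hot junction ~ {compensated_temperature(9200.0, 23.0):.1f} C")
```

Note how an error in the assumed cold-junction temperature passes straight through to the result, which is why the CJC error is described above as a roughly 1-to-1 contributor to overall accuracy.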
Offset and Gain Errors
Because thermocouples often output signals very close to 0 V and have a full input range
that is measured in millivolts, offset errors from the measurement device can be a large
contributor to overall accuracy. Many devices support a built-in autozero function that measures
the internal offset of the circuit automatically. If a device supports built-in autozero, this is often
the best way to compensate for offset errors and offset drift in the measurement device. Read the
device documentation to determine if autozero is supported. If autozero is not supported, pay
careful attention to the contribution of offset error specification to the overall accuracy of the
measurement device, and ensure that the device is regularly calibrated. Gain errors are
proportional to the input voltage, so they generally have the largest impact when thermocouples
are measuring temperatures at the edge of their supported range.
Noise Errors
Thermocouple output signals are typically in the millivolt range, making them susceptible
to noise. Noise can be introduced either by the external environment or by the measurement
device. Lowpass filters are commonly used in thermocouple data acquisition systems to
effectively eliminate high-frequency noise in thermocouple measurements. For instance, lowpass
filters are useful for removing the 50 and 60 Hz power line noise that is prevalent in many
laboratory and manufacturing settings.
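As a sketch of the filtering idea, the code below applies a low-order Butterworth lowpass filter to a simulated thermocouple signal contaminated with 50 Hz pickup; the cutoff frequency and sample rate are arbitrary choices for illustration, not values taken from the text.

```python
# Minimal sketch: removing 50/60 Hz pickup from a slowly varying
# thermocouple signal with a lowpass filter (illustrative parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # 2 seconds of data

signal = 9.2 + 0.05 * t                       # slowly drifting "temperature" signal (mV)
noise = 0.5 * np.sin(2 * np.pi * 50.0 * t)    # 50 Hz power-line pickup
measured = signal + noise

# 2nd-order Butterworth lowpass with a 5 Hz cutoff, applied forward and
# backward (filtfilt) so the filter adds no phase delay.
b, a = butter(N=2, Wn=5.0, btype="low", fs=fs)
filtered = filtfilt(b, a, measured)

print("residual 50 Hz ripple:", np.std(filtered - signal))
```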
Another source of noise is due to thermocouples being mounted or soldered directly to a
conductive material such as steel, or submerged in conductive liquids such as water. When
connected to a conductive material, thermocouples are particularly susceptible
to common-mode noise and ground loops. Isolation helps prevent ground loops from occurring,
and can dramatically improve the rejection of common-mode noise. With conductive materials
that have a large common-mode voltage, isolation is required as nonisolated amplifiers cannot
measure signals with large common-mode voltages.

Thermocouple Errors
These errors are introduced by the thermocouple. The voltage generated by the
thermocouple is proportional to the temperature difference between the point where the
temperature is measured and the point where it connects to the device. Temperature gradients along the thermocouple wire, combined with impurities in the metals, can introduce errors that are large relative to the errors of most measurement devices.
RESISTANCE TEMPERATURE DETECTOR (RTD)
A Resistance Temperature Detector is a temperature sensor that determines
temperature changes using a metal resistor. The resistance value of the metal wire resistor that
comprises the operating element of the RTD varies as the temperature changes.

RTDs are sometimes generally referred to as resistance thermometers. The American


Society for Testing and Materials (ASTM) has defined the term resistance thermometer as
follows as “a temperature-measuring device composed of a resistance thermometer element,
internal connecting wires, a protective shell with or without means for mounting a connection
head, or connecting wire or other fittings, or both”.
The most popular RTD is the Pt100. The Pt100 is one of the most accurate temperature
sensors. Not only does it provide good accuracy, it also provides excellent stability and
repeatability. Most OMEGA standard Pt100 sensors comply with DIN-IEC Class B. Also, Pt100 sensors are relatively immune to electrical noise.

Widely Used Metals for RTD Devices


 Platinum
 Nickel
 Copper

These three metals have different resistance variations with respect to temperature. Platinum has a usable temperature range of up to about 650 °C, while copper and nickel reach about 120 °C and 300 °C, respectively. In addition, platinum is best suited for the purpose because of its stable temperature-resistance relationship.
Standard RTD Elements Specification

The stated operating ranges are typical values and are dependent upon the sensing
element and the construction of the sensor assembly.

Types of Resistance Temperature Detectors

Wire wound RTDs are built using a small diameter wire, typically platinum, which is
wound into a coil and packaged inside a ceramic insulator. Larger extension wires are then spot
welded to the platinum wire. Conversely, the small diameter wire can also be wound around the
outside of a ceramic mandrel and coated with an insulating material such as glass, with extension
wires then being spot welded to the winding wires.
Thin film RTDs are made by depositing a thin layer of resistive material, typically
platinum film, onto a ceramic substrate. A pattern is then etched onto the element, creating the
electrical circuit. Platinum Thin-film RTDs (Pt-RTDs) offer a nearly linear temperature vs
resistance relationship as well as very high accuracy over wide temperature ranges.
RTD Wiring Configurations
There are three types of wire configurations, 2 wire, 3 wire, and 4 wire, that are
commonly used in RTD sensing circuits.
2 Wire RTD Connection

The 2 wire RTD configuration is the simplest among RTD circuit designs. In this serial
configuration, a single lead wire connects each end of the RTD element to the monitoring device.
Because the resistance calculated for the circuit includes the resistance in the lead wires and
connectors as well as the resistance in the RTD element, the result will always contain some
degree of error.

3 Wire RTD Connection

The 3 wire RTD configuration is the most commonly used RTD circuit design and can be
seen in industrial process and monitoring applications. In this configuration, two wires link the
sensing element to the monitoring device on one side of the sensing element, and one links it on
its other side.

4 Wire RTD Connection


This configuration is the most complex and thus the most time-consuming and expensive
to install, but it produces the most accurate results. In a 4-wire RTD configuration, two wires
link the sensing element to the monitoring device on both sides of the sensing element. One set
of wires delivers the current used for measurement, and the other set measures the voltage drop
over the resistor.

Function and Purpose of Resistance Temperature Detectors


RTDs are one of the most common temperature sensor types used in industrial
applications.
Resistance temperature detectors are frequently used in the plastics industry and many others. Care must be taken to eliminate moisture, and vibration effects can be troublesome as well.
They have been used for many years to measure temperature in laboratory and industrial
processes, and have developed a reputation for accuracy, repeatability, and stability.
Their applications also include:
 Used in automotive to measure the temperature of engine oil and intake air temperature
 Used in communication and instrumentation to measure the temperatures of amplifiers,
stabilizers, etc.
 Used in food handling and processing, power electronics, and aerospace engineering
 Used in computer, consumer electronics, industrial electronics, medical electronics, and
military.

Working Principle of Resistance Temperature Detector


How does RTDs measure temperature?
An RTD measures temperature using the principle that the resistance of a metal changes
with temperature. As the temperature of a metal increases, the vibrational amplitude of the metal atoms increases. This increases the probability of collisions between the free electrons that carry the current and the vibrating atoms, which in turn increases the resistance to the flow of electricity.
In practice, an electrical current is passed through the sensor, the resistance element is
used to measure the resistance of the current being passed through it. As the temperature of the
resistance element increases the electrical resistance also increases. The electrical resistance is
measured in Ohms. The resistance value can then be converted into temperature based on the
characteristics of the element. Typical response time for an RTD is between 0.5 and 5 seconds
making them suitable to applications where an immediate response is not required.
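The resistance-to-temperature conversion mentioned above is standardized for platinum RTDs by the Callendar-Van Dusen equation of IEC 60751. The sketch below implements the common case of a Pt100 (R0 = 100 Ω) at temperatures of 0 °C and above, where R(T) = R0(1 + A·T + B·T²) with A = 3.9083e-3 and B = -5.775e-7; temperatures below 0 °C need an extra term and are not handled here.

```python
# Minimal sketch: convert a Pt100 resistance reading to temperature using the
# IEC 60751 Callendar-Van Dusen coefficients, valid here for T >= 0 degC only.
import math

R0 = 100.0        # ohms at 0 degC for a Pt100
A = 3.9083e-3
B = -5.775e-7

def pt100_resistance(temp_c):
    """Resistance of a Pt100 element at temp_c (0 degC <= temp_c <= 850 degC)."""
    return R0 * (1.0 + A * temp_c + B * temp_c ** 2)

def pt100_temperature(resistance_ohm):
    """Invert the quadratic R(T) = R0(1 + A*T + B*T^2) for T >= 0 degC."""
    return (-A + math.sqrt(A ** 2 - 4.0 * B * (1.0 - resistance_ohm / R0))) / (2.0 * B)

if __name__ == "__main__":
    r = pt100_resistance(100.0)            # about 138.5 ohms at 100 degC
    print(f"R(100 C)  = {r:.2f} ohm")
    print(f"T(138.51) = {pt100_temperature(138.51):.2f} C")
```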
How does it transmit signal?
RTDs produce very small signals. These temperature sensors can be connected to a two-
wire transmitter that will amplify and condition the small signal. RTDs make use of a
standardized transmitter which will take the changing resistance on the device and change it to a
standardized signal. Specifically, the transmitter will take the small voltage readings from the
RTD, and convert them to a standard 4 to 20mA signal, or to a digital fieldbus output such as
HART, Foundation Fieldbus, Profibus. Either of these outputs can then be transmitted a large
distance on standard twisted-pair instrumentation cables.
Once conditioned to a usable level, this signal can be transmitted through ordinary copper
wire and used to drive other equipment such as meters, dataloggers, chart recorders, computers
or controllers. An example of a transmitter that accepts an RTD temperature input signal and outputs a 4–20 mA signal is the TM-2HL. It is a DIN B style head-mount transmitter that accepts a 2-wire or 3-wire PT100/1000 input signal and converts it to an industry-standard 4–20 mA output.
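The 4–20 mA scaling a transmitter performs is a simple linear map from a configured temperature range to the current loop. The sketch below shows that map for an assumed 0–150 °C range; the range is an illustrative assumption, not something taken from the TM-2HL description above.

```python
# Minimal sketch of 4-20 mA scaling for a temperature transmitter,
# assuming an illustrative configured range of 0-150 degC.

RANGE_LO_C = 0.0     # temperature that maps to 4 mA (assumed)
RANGE_HI_C = 150.0   # temperature that maps to 20 mA (assumed)

def temp_to_loop_current_ma(temp_c):
    """Linearly map a temperature to the 4-20 mA loop current."""
    span = RANGE_HI_C - RANGE_LO_C
    fraction = (temp_c - RANGE_LO_C) / span
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to the configured range
    return 4.0 + 16.0 * fraction

def loop_current_to_temp(current_ma):
    """Recover the temperature a receiver would infer from the loop current."""
    return RANGE_LO_C + (current_ma - 4.0) / 16.0 * (RANGE_HI_C - RANGE_LO_C)

if __name__ == "__main__":
    print(temp_to_loop_current_ma(75.0))   # 12.0 mA at mid-range
    print(loop_current_to_temp(12.0))      # 75.0 degC
```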

Advantages of Resistance Temperature Detector


 Wide temperature operating range
 Linearity over wide operating range
 Easy to verify and recalibrate
 Interchangeability over wide range
 Good stability at high temperature
 High accuracy

Disadvantages of Resistance Temperature Detector

 Low sensitivity
 Higher cost than thermocouples
 Slower response time than a thermocouple
 Vibration requires special construction
REFERENCES
DeLancey (2017). FAQs about Resistance Temperature Detectors: What You Need to Know
When Choosing an RTD. Retrieved from https://blog.wika.us/products/temperature-
products/faqs-resistance-temperature-detectors-choosing-rtd/
Defineinstruments.com (n.d.). Temperature Transmitters.
http://www.defineinstruments.com/pages/temperature-transmitters
Electrical4U (2020). Resistance Temperature Detector or RTD | Construction and Working
Principle. Retrieved from https://www.electrical4u.com/resistance-temperature-detector-or-rtd-
construction-and-working-principle/
Elprocus.com (n.d.). RTD Sensor Working Principle and Its Applications. Retrieved from
https://www.elprocus.com/a-memoir-on-rtd-
sensors/#:~:text=Applications%20of%20RTD,sensor%2C%20intake%20air%20temperature%20
sensors.&text=RTD%20is%20used%20in%20power,electronics%2C%20military%2C%20and%
20aerospace.
NetworkTech Expert (2013). Benefits of Using RTD Sensors in Industrial Applications.
Retrieved from http://www.networktechinc.com/blog/benefits-of-using-rtd-sensors-in-industrial-
applications/258/
Omega Engineering (2019). What is the Difference Between a 2, 3, and 4 Wire RTD. Retrieved
from https://www.omega.com/en-us/resources/rtd-2-3-4-wire-connections
Variohm (2019). How Does an RTD Work. Retrieved from https://www.variohm.com/news-
media/technical-blog-archive/how-does-an-rtd-work-
WatElectronics (2020). What is Resistance Thermometer: Construction & Its Working.
Retrieved from https://www.watelectronics.com/what-is-resistance-thermometer-construction-
its-working/
FILLED SYSTEM THERMOMETER

Figure 1. Schematic Diagram of Filled-System Thermometer

Figure 2. Filled-System Thermometer

Definition
A filled system thermometer is a temperature measuring instrument comprising a
Thermal System and associated means for indicating or recording.
Purpose
A device for temperature measurement.
Function
It is commonly used in industry in the temperature range from –60 °C to 550 °C.
How it works?
Filled-system thermometers use the phenomenon of thermal expansion of matter to
measure temperature change.
Filled-system temperature measurement methods depend upon three well-known physical
phenomena:
 A liquid will expand or contract in proportion to its temperature and in
accordance to the liquid’s coefficient of thermal/volumetric expansion.
 An enclosed liquid will create a definite vapor pressure in proportion to its
temperature if the liquid only partially occupies the enclosed space.
The pressure of a gas is directly proportional to its temperature in accordance with the
basic principle of the universal/perfect gas law: PV = nRT where P = absolute pressure, V =
volume, T = absolute temperature, R = universal gas constant and n = number of gas particles
(moles).
If the volume of gas in the measuring instrument is kept constant, then the ratio of the gas pressure to its absolute temperature is constant, so that

P1/T1 = P2/T2 (at constant V and n)
The only restrictions are that the temperature must be expressed in degrees Kelvin and
the pressure must be in absolute units.
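Under those restrictions, a gas-filled (constant-volume) system can infer temperature directly from a pressure reading, as in this small sketch; the reference pressure and temperature values are invented for illustration.

```python
# Minimal sketch of a constant-volume gas-filled thermometer:
# with V and n fixed, P/T is constant, so T2 = T1 * (P2 / P1).
# Reference values below are illustrative, not from a real instrument.

P_REF_KPA = 101.325   # absolute pressure at the reference condition (assumed)
T_REF_K = 293.15      # absolute temperature at the reference condition (20 degC)

def temperature_from_pressure(p_abs_kpa):
    """Return the measured temperature in degC from an absolute pressure reading."""
    t_kelvin = T_REF_K * (p_abs_kpa / P_REF_KPA)
    return t_kelvin - 273.15

if __name__ == "__main__":
    print(f"{temperature_from_pressure(135.0):.1f} C")  # higher pressure -> higher temperature
```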

References:
https://encyclopedia2.thefreedictionary.com/Filled-System+Thermometer
http://www.instrumentationtoday.com/filled-system-temperature-measurement/2011/09/
https://www.globalspec.com/reference/10945/179909/chapter-7-temperature-measurement-
filled-system-thermometers
http://navyaviation.tpub.com/14113/Filled-System-Thermometers-102.html
https://themcaa.org/wp-content/uploads/filledsystemthermometers.pdf
BIMETAL THERMOMETER
DEFINITION
Bimetallic (Bimetal) Thermometers are reliable and accurate temperature sensors
requiring no electricity or wiring. Bimetal thermometers are ideal for local, eye-level temperature
readings in most process applications. They can be recalibrated with a turn of the calibration
screw on the back of the dial. It is a thermometer that uses two different metal strips to convert a change in temperature into a mechanical displacement. The metals typically used are steel, copper, and brass. The strips are bonded together and expand at different rates when heated. This movement corresponds to the actual temperature and moves a needle along the scale. These thermometers are low-cost, simple, and robust. The bimetallic thermometer is a kind of field instrument for measuring low and medium temperatures, used especially in industry.
TYPES OF BIMETAL THERMOMETER
1. Spiral type bimetallic thermometer
The simplest design of a bimetal
thermometer is to wrap the bimetallic strip into a
spiral. The inner end of the spiral is firmly
connected to the housing. A pointer is attached to
the outer end of the spiral. The measured
temperature can then be read off a calibrated
scale.
Such a design using a bimetallic spiral is not only very space saving but also cost-
effective. However, the disadvantage is that the dial and the temperature sensor are not
separated from each other. The entire bimetal thermometer must therefore be located directly
in the medium whose temperature is to be measured. Such thermometers are used, for
example, in refrigerators or freezers or to determine the room temperature.
2. Helical type bimetallic thermometer
In many cases it is necessary to spatially separate the indicator
(pointer) from the sensor (bimetallic coil). For example, if the water
temperature is to be measured in a heating pipe, as is usual in heating
systems. The temperature sensor must then be located inside the pipe,
while the display for the temperature must be outside the pipe. Or in
the food industry it is also necessary to separate the display from the
sensor if, for example, the temperature inside the food has to be
measured (“piercing thermometer”).
In these cases, bimetal thermometers are equipped with a
bimetal strip wrapped into a helical coil. The helical bimetal is firmly
connected at one end to the inside of a measuring tube (the bimetal is attached to a
cylindrical pin, which is pressed firmly into the stem). A rotatable metal rod is guided
through this helical coil, which is connected to it at the loose end. A pointer is attached to the
upper end of the metal rod. If the measuring tube is now heated, the helical bimetal winds up
and rotates the metal rod. On a calibrated scale the corresponding temperature can be read
off.
Parts of Helical Type Bimetallic Thermometer

APPLICATIONS OF BIMETAL THERMOMETER


Bimetallic strips are one of the oldest techniques to measure temperature. Major application
areas of a bimetallic strip thermometer include:
 For various household appliances such as ovens etc.
 Thermostat switches
 Wall thermometers
 Grills
 Circuit breakers for electrical heating devices
 Used in control devices
 A spiral strip-type thermometer is utilized in AC thermostats
 The helix strip type is used in refineries, tire vulcanizers and oil burners
 These thermometers are utilized in household devices, which include AC (air
conditioner), oven, and apparatus in industries like hot wires, refineries, tempering
tanks, heater, etc.
FUNCTION
This is a simple, durable and inexpensive way to measure temperature. It can directly
measure the temperature of the liquid, vapor and gas within -80℃~+500℃ in a variety of
production processes. Bimetal thermometers work on the principle that different metals expand
at different rates as they are heated. By using two strips of different metals in a thermometer, the
movement of the strips correlates to temperature and can be indicated along a scale. The
bimetallic thermometer element is made of a bonded metal sheet formed into a coil or helix. When one end is heated and expands, it causes the pointer to rotate, and the instrument indicates the corresponding temperature value.

The bimetallic strip is constructed by bonding together the two thin strips of different
metals. The metals are joined together at one end with the help of the welding. The bonding is
kept in such a way that there is no relative motion between the two metals. The physical
dimension of the metals varies with the variation in temperature. Since the bimetallic strip of the thermometer is constructed from different metals, the lengths of the metals change at different rates. When the temperature increases, the strip bends towards the metal which has a
low-temperature coefficient. In addition, when the temperature decreases, the strip bends towards
the metal, which has a high-temperature coefficient.

Working Principle of Bimetallic Thermometer


The working principle of bimetallic thermometer depends on the two fundamental properties of
the metal.

1. The metal has the property of thermal expansion, i.e., the metal expand and contract
concerning the temperature.

2. The temperature coefficient is not the same for all metals; different metals expand or contract by different amounts at the same temperature.

The working of the bimetallic strip depends on the thermal expansion property of the
metal. The thermal expansion is the tendency of metal in which the volume of metal changes
with the variation in temperature. Every metal has a different temperature coefficient. The
temperature coefficient shows the relation between the change in the physical dimension of
metal and the temperature that causes it. The expansion or contraction of metal depends on the
temperature coefficient, i.e., at the same temperature the metals have different changes in the
physical dimension.

Once the temperature changes, then there will be a change in the physical dimension of
the metals. Whenever the temperature rises, the metal strip turns in the direction of the less
temperature coefficient metal. Similarly, when the temperature reduces, then the strip turns in the
direction of a high-temperature coefficient metal.
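To give a feel for the magnitudes involved, the sketch below estimates the curvature and tip deflection of a cantilevered bimetallic strip using the common simplified form of Timoshenko's bimetal formula, which assumes the two layers have equal thickness and similar elastic moduli; the dimensions and expansion coefficients used are illustrative assumptions.

```python
# Minimal sketch: curvature and tip deflection of a bimetallic strip.
# Simplified Timoshenko result for equal layer thickness and similar moduli:
#   curvature k ~ 3 * (a2 - a1) * dT / (2 * t),  tip deflection ~ k * L^2 / 2.
# Dimensions and expansion coefficients below are illustrative assumptions.

ALPHA_BRASS = 19e-6     # 1/degC, high-expansion layer (approximate)
ALPHA_STEEL = 12e-6     # 1/degC, low-expansion layer (approximate)
THICKNESS_M = 0.5e-3    # total strip thickness, 0.5 mm (assumed)
LENGTH_M = 50e-3        # free (cantilever) length, 50 mm (assumed)

def tip_deflection_m(delta_t_c):
    """Approximate tip deflection of the free end for a temperature rise delta_t_c."""
    curvature = 3.0 * (ALPHA_BRASS - ALPHA_STEEL) * delta_t_c / (2.0 * THICKNESS_M)
    return curvature * LENGTH_M ** 2 / 2.0

if __name__ == "__main__":
    for dt in (10, 50, 100):
        print(f"dT = {dt:3d} C -> deflection ~ {tip_deflection_m(dt) * 1000:.2f} mm")
```

In a real dial thermometer this small deflection is amplified mechanically by the spiral or helical geometry and the pointer linkage described above.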

The advantages of Bimetal Thermometer


 Installation is easy
 Simple maintenance
 Accuracy is good
 Less cost
 Temperature range is wide
 Linear response
 Robust and simple
The disadvantages of Bimetal Thermometer
 If used to measure low temperatures, they give a less accurate result.
 If handled roughly, the calibration can be disturbed.
 They are not recommended for temperatures above 500 ℃.
 When these thermometers are used frequently, the bimetallic element may bend permanently, so errors will occur.
REFERENCES

 https://reotemp.com/temperature/bimetal-thermometers/
 https://www.instrumart.com/categories/6111/bimetal-thermometers
 https://circuitglobe.com/bimetallic-thermometer.html
 https://www.elprocus.com/what-is-a-bimetallic-thermometer-construction-and-its-
working/
 https://store.chipkin.com/articles/bimetallic-thermometers
 https://www.tec-science.com/thermodynamics/temperature/how-does-a-bimetallic-strip-
thermometer-work/
PYROMETER

A pyrometer is an instrument that measures temperature remotely, i.e. by measuring


radiation from the object, without having to be in contact. It is also known as an infrared thermometer, radiation thermometer, or non-contact thermometer, and it detects an object's surface temperature from the radiation (infrared or visible) emitted by the object. Pyrometry is typically used above 900 °C (e.g., in high-temperature applications such as combustion).

The basic pyrometer, though it comes in a variety of models and types, has two basic
components. It consists of optical systems and detectors. A pyrometer’s optical system will focus
on the energy emittance of an object. It sends radiation to the detector, the component very
sensitive to waves of radiation. The detector then outputs data on the radiation, notably the
temperature of the object from which the radiation came. The detector infers this temperature by analyzing the energy of the received radiation, which increases with the temperature of the object.
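One simple way to see how radiated energy maps back to temperature is the Stefan-Boltzmann law, M = ε·σ·T⁴, which a broadband (total-radiation) detector approximately follows. The sketch below inverts that relationship; the emissivity and flux values are illustrative assumptions, and real pyrometers use detector-specific calibrations rather than this ideal law.

```python
# Minimal sketch: infer surface temperature from total radiated flux using the
# Stefan-Boltzmann law M = emissivity * sigma * T^4 (idealized broadband case).

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_flux(flux_w_m2, emissivity=0.95):
    """Return temperature in degC for a measured radiant exitance (assumed emissivity)."""
    t_kelvin = (flux_w_m2 / (emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

def flux_from_temperature(temp_c, emissivity=0.95):
    """Forward calculation, useful for sanity checks."""
    return emissivity * SIGMA * (temp_c + 273.15) ** 4

if __name__ == "__main__":
    flux = flux_from_temperature(900.0)             # a hot furnace wall, ~102 kW/m^2
    print(f"flux ~ {flux / 1000:.1f} kW/m^2")
    print(f"recovered T ~ {temperature_from_flux(flux):.0f} C")
```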

Purpose

Since pyrometers measure objects from a distance, you'll find it is most beneficial to use
them for objects dangerous to touch with standard thermometer devices, or for objects that are
out of reach or moving.

Pyrometers are used in different applications such as:

 To measure the temperature of moving objects or constant objects from a greater


distance.
 In metallurgy industries
 In smelting industries
 Hot air balloons to measure the heat at the top of the balloon
 Steam boilers to measure steam temperature
 To measure the temperature of liquid metals and highly heated materials.
 To measure furnace temperature.

How does it measure the temperature?

Infrared thermometers typically work by sampling two different wavelengths from a hot
object and comparing them.

Other infrared thermometers, like the one illustrated here, compare the heat radiation
from the object whose temperature you're trying to measure with the radiation produced by an
internal heat source (whose temperature is precisely known) or the background level of infrared
inside the pyrometer's casing.

Artwork: How an infrared thermometer works. This one uses a simple, tilting mirror to
compare a reference heat source inside the case with a hot object outside. It's loosely based on
the design described in US Patent: 4,005,605: Remote reading infrared thermometer by Donald
S. Michael, Mikron Instrument Company.

1. You press in the trigger to put the detector into "reference" mode.
2. An internal heat source, whose temperature is known, fires out infrared radiation.
3. A mirror picks up the infrared.
4. A detector picks up the reflected infrared from the mirror.
5. A microchip notes the reading of the internal reference source.
6. Now you release the trigger to put it in operating mode.
7. The mirror swings back to face the front of the detector.
8. The hot sample you're interested in gives off its own pattern of infrared radiation.
9. The infrared fires in through the front and bounces off the mirror into the detector.
10. The chip compares the infrared wavelengths from the reference source and the sample
and figures out the sample's temperature.

How can this instrument transmit the signal to the transmitter?


For a process to be adequately controlled and manipulated, the variable of interest, in this case temperature, often called the Process Variable (PV), needs to be measured by a sensor (here, the pyrometer), converted into a suitable signal format, and then transmitted to a controller, which makes the control decision and finally acts on a final control element in the control loop. The device that performs this signal transmission is referred to as a transmitter.

There are many types of Pyrometers such as:

1. Total Radiation Pyrometer

The total radiation pyrometer involves the radiation from the hot object being focused
onto a radiation detector. The figure below shows the basic form of an instrument which uses a
mirror to focus the radiation onto the detector. The detector is said to be broad band since it
detects radiation over a wide band of frequencies and so the output is the summation of the
power emitted at every wavelength. The time constant (a measure of how fast the system
responds to a change in temperature and is the time taken to reach about 63% of the final value)
for the instrument varies from about 0.1 s when the detector is just one thermocouple or small
bead thermistor to a few seconds with a thermopile involving many thermocouples.
These pyrometers are designed to detect thermal radiation in the infrared region, usually in the 2–14 µm wavelength range. A radiation pyrometer measures the temperature of a targeted object from its emitted radiation. This radiation can be directed onto a thermocouple (or thermopile) to convert it into an electrical signal, because the thermocouple generates a signal proportional to the heat it receives.

Purpose:

This device is used in places where physical contact temperature sensors


like Thermocouple, RTD, and Thermistors would fail because of the high temperature of the
source.

How does it measure the temperature?

As shown in the figure, the radiation pyrometer has an optical system, including a lens, a
mirror and an adjustable eye piece. The heat energy emitted from the hot body is passed on to the
optical lens, which collects it and is focused on to the detector with the help of the mirror and eye
piece arrangement. The heat energy is converted to its corresponding electrical signal by the
detector and is sent to the output temperature display device.

In many Waste-to-Energy (WTE) boilers infrared radiation (IR) pyrometers are used for
measuring the flue-gas temperature. The IR pyrometer is a measuring transducer, which receives
the infrared radiation emitted by the measuring object itself and converts it into a standardized
output signal.

All IR pyrometers display some sensitivity to the environmental conditions. The main
sources of errors in this regard are given by reflections from flames, furnace and post-
combustion membrane walls. In order to minimize these errors, IR pyrometers have to be
installed in proper places adjusting the shape and the length of the measurement volume and
eventually shielding the thermometer detector from boundary interferences.
IR pyrometers are more expensive than thermocouples and the errors that arise are
usually smaller than those obtained with a bare thermocouple; on the other hand, the temperature
measured with a radiation pyrometer could be lower or higher than the one measured with a bare
thermocouple.

External view of an infrared radiation (IR) pyrometer installed on a WTE boiler.

2. Photoelectric Pyrometer

Photoelectric pyrometer is one of the important radiation thermometers for non-contact


temperature measurement. It has an important application in the field of high temperature
measurement, and its performance directly affects the accuracy of temperature measurement.

The photoelectric pyrometer belongs to the brightness-thermometer class of radiation thermometers. A brightness thermometer uses Planck's law to determine the brightness of the object at a certain wavelength; a brightness temperature is obtained from that brightness, and the true temperature is then calculated using a correction formula.

Purpose:

An instrument used to measure the temperature of a source through the use of


photoelectric cells to detect and measure the intensity of the light emitted by the source.

How does it measure the temperature?

The working principle can be summarized as follows: the measured object is imaged by a movable objective lens into the field of view; the center of the field diaphragm is a circular hole, and the area surrounding it is a mirror used for aiming. The object is imaged onto the circular hole, and the incoming light is then divided into a measuring light path and an aiming light path.
3. Ratio Pyrometer

Two-color pyrometers use what is called a “sandwich detector” using a fixed set of
wavelengths, meaning that two wavelength filters are laid one on top of the other. It consists of
two one-color pyrometers in the same package. It uses two detectors, operating at two separate
wavelengths, but both detectors see the same hot target.

 General purpose wavelength set


 Can tolerate modest optical obstructions, misalignment and partial field of view
 Compensate for variable emissivity
 Used when there is a clear optical path between the pyrometer and target
 Can only measure temperatures above 1100°F / 600°C

Purpose:

 Measures the radiated energy of an object between two narrow wavelength bands and
then calculates the ratio of the two energies.
 Used for measuring high temperature.

The ratio pyrometer is largely resistant to dust, steam and dirty observation
windows.

How does it measure the temperature?
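In outline, the instrument measures the radiant signal in two narrow wavelength bands and forms their ratio; for a gray body the (equal) emissivities cancel, and in the Wien approximation of Planck's law the ratio depends only on temperature. The sketch below inverts that ratio for two assumed wavelengths; the wavelengths, the second radiation constant c2 ≈ 0.014388 m·K, and the gray-body assumption are stated assumptions, not values taken from the text.

```python
# Minimal sketch of two-color (ratio) pyrometry under the Wien approximation,
# assuming equal emissivity in both bands (gray body) so emissivity cancels.
import math

C2 = 1.4388e-2          # second radiation constant, m*K
LAMBDA_1 = 0.95e-6      # shorter wavelength band, m (assumed)
LAMBDA_2 = 1.05e-6      # longer wavelength band, m (assumed)

def signal_ratio(temp_k):
    """Ratio of Wien-approximation radiances L(lambda1)/L(lambda2) at temp_k."""
    return ((LAMBDA_2 / LAMBDA_1) ** 5 *
            math.exp(-C2 / temp_k * (1.0 / LAMBDA_1 - 1.0 / LAMBDA_2)))

def temperature_from_ratio(ratio):
    """Invert the ratio for temperature (kelvin)."""
    return (C2 * (1.0 / LAMBDA_1 - 1.0 / LAMBDA_2) /
            (5.0 * math.log(LAMBDA_2 / LAMBDA_1) - math.log(ratio)))

if __name__ == "__main__":
    r = signal_ratio(1500.0)                       # a 1500 K target
    print(f"ratio = {r:.4f}")
    print(f"recovered T = {temperature_from_ratio(r):.1f} K")
```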


Advantages/Disadvantages

Usually, Pyrometers are compared with thermometers and also have some advantages
and disadvantages while using.

The advantages of pyrometer are


 It can measure the temperature of the object without any contact with the object. This is
called Non-contact measurement.
 It has a fast response time
 Good stability while measuring the temperature of the object.
 It can measure different types of object’s temperature at variable distances.
The disadvantages of pyrometer are
 Pyrometers are generally rugged and expensive
 Accuracy of the device can be affected due to the different conditions like dust,
smoke, and thermal radiation.

References:

Bartelt, T. (2020). Ratio Pyrometers. Wisc-Online OER. https://www.wisc-


online.com/learn/technical/process-control/ele3508/ratio-pyrometers
Definition of photoelectric pyrometer. (2020). Photonics.com.
https://www.photonics.com/EDU/photoelectric_pyrometer/d6103#:~:text=An%20instrume
nt%20used%20to%20measure,Silicon%20Devices%20Nov%205%2C%202020
Fluke Process Instruments. (2020). Flukeprocessinstruments.com.
https://www.flukeprocessinstruments.com/en-us/service-and-support/knowledge-
center/infrared-technology/how-do-ratio-pyrometers-work
Han, X., Huan, K., & Sheng, S. (2020). Performance evaluation and optimization design of
photoelectric pyrometer detection optical system. Defence Technology, 16(2), 401–407.
https://doi.org/10.1016/j.dt.2019.07.019
Woodford, C. (2009, September 11). How do pyrometers work? Explain That Stuff. https://www.explainthatstuff.com/how-pyrometers-work.html
John. (2011, August 25). Radiation Pyrometer-Working Principle,Advantages,Block Diagram.
Instrumentation-Electronics. http://www.instrumentationtoday.com/radiation-
pyrometer/2011/08/#:~:text=The%20main%20theory%20behind%20a,a%20function%20o
f%20its%20temperature.&text=Total%20Radiation%20Pyrometer%20%E2%80%93%20I
n%20this,is%20measured%20at%20all%20wavelengths.
Operating Principles of Pyrometers. (2011). Sciencing. https://sciencing.com/info-8728275-
operating-principles-pyrometers.html
Optris. (2015). Optris.Global. https://www.optris.global/optris-ctratio-1m
Pyrometer : Working Principle, Types, Advantages and Disadvantages. (2020, February 3).
ElProCus - Electronic Projects for Engineering Students. https://www.elprocus.com/what-
is-pyrometer-working-principle-and-its-types/
Ratio Pyrometer Technology - Williamson IR. (2019, December 17). Williamson IR.
https://www.williamsonir.com/ratio-pyrometers/
Rinaldi, F., & Najafi, B. (2013). Temperature Measurement in WTE Boilers Using Suction
Pyrometers. Sensors, 13(11), 15633–15655. https://doi.org/10.3390/s131115633
Tempsens Instruments. (2016, May 20). Basics Of Pyrometers. Slideshare.net.
https://www.slideshare.net/TempsensInstruments1/basics-of-
pyrometers?from_action=save
Transmitters Used in Process Instrumentation. (2013). Instrumentationtoolbox.com.
https://www.instrumentationtoolbox.com/2013/06/transmitters-used-in-process.html
LASER

LASER stands for Light Amplification by Stimulated Emission of Radiation. It is a narrow beam
of light that serves multiple purposes in technology and instruments. The light emitted from a
laser is monochromatic meaning that it is of a single wavelength (color). In the case of
thermometers, laser thermometers are actually infrared thermometers. The laser in the thermometer simply provides a means of aiming.

Definition
Laser thermometer or often called infrared thermometer is a digital thermometer that measures
the temperature of a surface from a distance. It can detect hot or cold spots on surfaces. It is
sometimes called a laser thermometer because a laser is attached to the thermometer. The laser in
the thermometer plays no role in the measurement of temperature but provides a means of aiming.
Infrared thermometer is used to measure the temperature of an object from a distance. It is often
used when other thermometers are not practical. Some objects are very fragile or hazardous;
since infrared thermometer does not require having it in contact with the object it would be more
practical and safer to use this thermometer.
Application
1. Distance – measuring the temperature of a subject from a distance
2. Dangerous – the temperature of an object can be measured without direct contact
3. Movement – can be used to measure objects that are in constant motion

Infrared thermometers are highly valuable for determining temperature and:


 The source object is moving, surrounded by an electromagnetic field or contained in a
vacuum
 A fast reading is required
 The temperature of the source object is too high for the use of contact sensors
The devices are popular among meteorologists, laboratory scientists and kitchen workers for
their variety of uses. Typical applications include recording temperatures of:
 Mechanical equipment or electrical circuit breaker boxes
 Potential or actual hot spots in firefighting situations
 A variety of research and development or manufacturing quality control
circumstances such as examining the temperature of heat-producing devices to test
calibration
 Materials that are heated or cooled for monitoring purposes
 People with highly infectious diseases
 A contaminated site
 Hard to reach places such as inside HVAC systems
 Foods such as soup where the surface temperature is imperative
 Cooking oil in a skillet or the surface of the skillet itself

Function
The function of an infrared thermometer is based on a phenomenon called black body radiation.
Any object with a temperature above absolute zero has molecules moving around inside it. An increase in temperature causes the molecules to move faster, and as they move they emit infrared radiation (a type of electromagnetic radiation below the visible spectrum of light).
The higher the temperature of an object, the more infrared radiation it emits and at very high
temperature, objects start to emit visible light. The infrared thermometer detects and measures
the infrared radiation emitted by the object.
Features
 Recording minimum and maximum read values during a defined period of time
 Battery powered
 Memory to record and store data values over a course of time or ranges
 Performing other math and statistical functions
 Self-test, self-calibration and diagnostic capabilities
 Zero-point reset
 Laser spot aiming and sighting with the device’s laser, allowing the source of the
measurement to be revealed
 Automatic emissivity adjustment
 Waterproof, submersible and washable
 Explosion proof

Working Principle
Infrared light, like visible light, can be focused, reflected, or absorbed. An infrared thermometer has a lens that focuses infrared light from an object onto a detector called a thermopile. The thermopile absorbs the infrared radiation and turns it into heat. The heat is then converted into an electrical signal that is amplified and turned into a voltage output. The central processing unit of the thermometer then solves for a temperature based on Planck's radiation law.
The equation describes the spectral radiance of a body, i.e., how much energy it emits at each frequency. A full measurement involves several quantities, including the power emitted per unit area of the body and per unit solid angle into which the radiation travels, at each frequency.

Variants of the equation are expressed in terms of wavelength rather than frequency. A primary factor in these equations is the speed of light in a vacuum or in a material medium.
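For reference, the wavelength form of Planck's law is B(λ, T) = (2hc²/λ⁵) / (exp(hc/(λkT)) − 1); the sketch below evaluates it and shows how the radiance in a detector's band grows with temperature. The 8–14 µm band is an assumption chosen because it is a common thermopile window, not a value from the text.

```python
# Minimal sketch: Planck's law (wavelength form) and the in-band radiance a
# thermopile with an assumed 8-14 micrometre window would see.
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W * m^-2 * sr^-1 * m^-1."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0
    return a / b

def band_radiance(temp_k, lam_lo=8e-6, lam_hi=14e-6, steps=500):
    """Numerically integrate the radiance over the detector band (assumed 8-14 um)."""
    dlam = (lam_hi - lam_lo) / steps
    lams = (lam_lo + (i + 0.5) * dlam for i in range(steps))
    return sum(spectral_radiance(lam, temp_k) for lam in lams) * dlam

if __name__ == "__main__":
    for t_c in (0, 37, 100):
        print(f"{t_c:4d} C -> in-band radiance {band_radiance(t_c + 273.15):.1f} W/(m^2 sr)")
```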

References:
https://encyclopedia2.thefreedictionary.com/Infrared+thermometer
https://sciencing.com/infrared-thermometers-work-4965130.html
https://www.omega.com/en-us/resources/infrared-thermometer-applications
https://www.globalspec.com/learnmore/sensors_transducers_detectors/temperature_sensing/infra
red_thermometer
SURFACE ACOUSTIC WAVE

Definition

The surface acoustic wave was originally described by the British physicist Lord Rayleigh in the 1880s. During his research on seismic waves, he found a wave that concentrates at, and propagates along, the surface of the earth. In 1965, the Americans R. M. White and F. W. Voltmer invented the interdigital transducer (IDT), which can excite a SAW in a piezoelectric material. This accelerated the development of SAW technology, and many kinds of SAW devices with different characteristics appeared. Research and development of SAW sensors began in the 1980s. Researchers found that external factors (temperature, pressure, magnetic field, electric field, gas concentration) affect the propagation characteristics of the SAW, and they studied the relationships between these effects and the external factors. Based on these relationships, a variety of structures were designed for various physical and chemical measurements.

A Surface Acoustic Wave (SAW) is a wave propagating along the surface of an elastic
substrate with an amplitude that typically decays exponentially with depth into the substrate. To
generate SAWs, an Interdigital Transducer (IDT) is used which can also act as a source or
receiver of SAW.

The SAW sensor has two key characteristics, passive operation (no need for battery power) and wireless reading, which is why it is also known as a passive wireless sensor.

Moreover, many piezoelectric single-crystal substrates, one of which is LiTaO3, show a non-vanishing temperature coefficient of delay. Thus, by analyzing the impulse response of a SAW device built on these materials, it is possible to determine the temperature of the SAW chip.
At present, a typical Surface Acoustic Wave (SAW) temperature sensor consists of a single SAW device, which may have either a resonator structure or a delay-line structure; in both cases it is formed on a piezoelectric substrate on which the SAW interdigital transducer and reflecting gratings are placed.
Temperature is a notable factor that influences wave propagation in a SAW device. The frequency of the acoustic device changes as a function of temperature, which sets the foundation for developing SAW temperature sensors.

Purpose

Among power-system accidents, equipment explosions or fires caused by overheating account for a large proportion. The parts that heat up are mainly contacts, connectors, plugs, and rotating parts of power equipment, and the main causes are aging, corrosion, loose connections, overload, and poor ventilation. Before such accidents, these sites often show abnormal temperature rises, so preventing accidents by monitoring the temperature of these sites is of great importance. Moreover, in recent years the concept of the smart grid has been proposed and is developing rapidly. An important feature of the smart grid is informatization and automation, which means acquiring status information efficiently and automatically. This trend promotes the application of online monitoring technology in the power grid, and power facilities are gradually becoming unmanned and remotely supervised. Among these new application technologies, the surface acoustic wave (SAW) sensing technique has become a bright spot.

The purpose of the SAW testing unit (reader) is to generate and transmit a test signal and to receive and analyze the echo signal. The testing unit includes a test-signal generating module, a switch module, and a frequency detection module. The test-signal generating module also pre-processes the echo signal, while the frequency detection module extracts the temperature by detecting the frequency of the echo signal.

Function

SAW devices use IDTs to convert electric signals to acoustic waves, which can be altered
by sensing event and sent to a computational device for analysis. The IDTs generate a type of
shear wave, called a Rayleigh wave, which propagates across the surface of the piezoelectric
substrate. Since these waves have transverse motion even though they are located at the surface,
they must penetrate a depth of around one wavelength. The finger spacing and length of the IDTs
primarily determine the shape of the waves. The wavelengths are typically ~ 20 μm.

Surface Acoustic Wave device is manufactured with semiconductor IC technology, such


as deposition and lithography, to form specific shape and size of metal (aluminum, et. al.) film
on a piezoelectric substrate material. Piezoelectric material selection, intervals of the IDT fingers
and reflecting gates, et. al. determine the characteristics of the SAW device, including operating
frequency, quality factor, parasitic suppression, insertion loss and sensitivity coefficient.
According to the device structure and signal characteristics, SAW sensor devices can be divided
into the delay line type and resonant type.
For the delay line type, the reflectors distribute at different position to one side of the
IDT. Due to the different positions, the reflectors reflect the acoustic signals with different delay
time. The delay time can also be affected by environmental factors. By detecting the change in the delay time, measurement of these environmental factors can be realized. The positions of the reflectors can also serve as an ID function, which is the principle of RFID.

For the resonant type, the reflectors distribute periodically on both sides of the IDT,
forming a resonant cavity to the surface acoustic wave signal with certain frequency. So the
return signal is a resonance signal. The resonant frequency is not only determined by the size, but
also influenced by environmental factors. By detecting the change in the resonant frequency, sensing can be realized. In addition, if an antenna is connected to the IDT, it becomes a wireless
SAW sensor which can receive and reflect electromagnetic wave. The electromagnetic signal is
received by the antenna and converted into acoustic signal by the IDT. The acoustic wave
propagates along the substrate, is reflected by the reflectors back to the IDT, and is then reconverted into an electrical form and re-transmitted by the antenna. As can be seen, the whole process is completely passive, i.e., no power supply is needed. This is why the SAW wireless sensor is
also called the passive wireless sensor.
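Since the resonant frequency shifts nearly linearly with temperature over a modest range, a reader can recover temperature from the detected echo frequency with a simple first-order model, as sketched below. The reference frequency and the temperature coefficient of frequency (TCF) used here are illustrative assumptions; real devices are characterized individually and depend on the substrate material.

```python
# Minimal sketch: recover temperature from a SAW resonator's echo frequency
# using a first-order model f(T) = f0 * (1 + TCF * (T - T0)).
# f0 and TCF below are assumed, illustrative values.

F0_HZ = 433.92e6        # resonant frequency at the reference temperature (assumed)
T0_C = 25.0             # reference temperature, degC
TCF_PER_C = -30e-6      # assumed temperature coefficient of frequency, 1/degC

def frequency_at(temp_c):
    """Forward model: resonant frequency at a given temperature."""
    return F0_HZ * (1.0 + TCF_PER_C * (temp_c - T0_C))

def temperature_from_frequency(freq_hz):
    """Invert the first-order model to estimate the sensor temperature."""
    return T0_C + (freq_hz / F0_HZ - 1.0) / TCF_PER_C

if __name__ == "__main__":
    f = frequency_at(85.0)                      # a hot contactor at 85 degC
    print(f"echo frequency ~ {f / 1e6:.4f} MHz")
    print(f"recovered temperature ~ {temperature_from_frequency(f):.1f} C")
```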

How does it work?

A schematic diagram of the SAW temperature monitoring system. In order to be suitable


for power switch cabinet, the antenna and reader are connected by a cable, and the antenna has a
strong magnet at its bottom which can fix the antenna on the iron shell. Generally, the reading
distance of 0.2–2 meters is sufficient for the application. If a longer distance is needed, there are three ways: 1) increase the reader transmitting power; 2) replace the reader antenna with a high-gain one; 3) replace the sensor antenna with a high-gain one. However, a high-gain antenna has a large size, which may be unacceptable in the installation environment.

In one field installation, there were 3 installed sensors and a reading antenna. The sensors were fixed behind the static contactors of the switch cabinet and were used to detect the contactors' temperature.

The antenna is installed on the wall of cabinet. The reader can be installed freely, for example
mounted on the low voltage side of the switch cabinet.

Moreover, most installation work is retrofit work on existing equipment. The power is turned off for only a few hours, during which tens of SAW temperature sensing systems are to be installed, which is a great challenge. Besides the power switch cabinet, there are other kinds of power equipment that require temperature detection, so it is necessary to develop more convenient and varied installation structures.
SEMICONDUCTOR

Semiconductor temperature sensors are devices which come in the form of integrated circuits and are hence popularly known as IC temperature sensors. They are electronic devices fabricated in a similar way to other modern semiconductor components such as microprocessors. Typically, hundreds or thousands of devices are formed on thin silicon wafers. Before the wafer is scribed and cut into individual chips, they are usually laser trimmed.
These sensors are available from a number of manufacturers. There are no generic types
as with thermocouple and RTDs, although a number of devices are made by more than one
manufacturer. The AD590 and the LM35 have traditionally been the most popular devices, but
over the last few years better alternatives have become available.
Like other types of sensors such as thermocouples and RTDs, a semiconductor sensor measures a physical value and then records or otherwise reacts to it. The sensor, as part of a probe, measures its
surrounding temperature. These low-cost sensors are ideal for many temperature monitoring
applications, especially for medical temperature monitoring in hospitals and clinics.
Major characteristics of semiconductor thermometers includes:
 They provide reasonably linear output.
 They are available in moderately small sizes
 They are not capable enough to measure high temperatures. Their temperature range is
typically limited between -40 to +120°C.
 They give fairly accurate temperature readings if properly calibrated.
 They offer very small interchangeability.
 Semiconductor temperature sensors are not suitably designed for making well thermal
contact with external surfaces.
 Unlike other temperature sensors like thermocouples and RTDs, their electrical and
mechanical performance is not very robust.

Working Principle
A semiconductor-based temperature sensor works with dual integrated circuits (ICs).
They contain two similar diodes with temperature-sensitive voltage and current characteristics to
measure temperature changes effectively. They give a linear output but are less accurate, typically to within 1 °C to 5 °C. They also exhibit the slowest responsiveness (5 s to 60 s) across the
narrowest temperature range (-70 °C to 150 °C).

Semiconductor temperature sensors commonly used a bandgap element which measures


variations in the forward voltage of a diode to determine temperature. To achieve reasonable
accuracy, these are calibrated at a single temperature point, typically 25 °C. Therefore, highest
accuracy is achieved at the calibration point and accuracy then deteriorates for higher or lower
temperatures. For higher accuracy across a wide temperature range, additional calibration points
or advanced signal processing techniques can be employed.

Manufacturers of semiconductor temperature sensors will specify typical and maximum


temperature accuracy within certain temperature ranges. While typical values can give some idea
of the accuracy for a few devices under ideal conditions, customers should rely on the maximum
values for a true indication of accuracy across multiple devices and under a variety of conditions.

Power supply voltage can also affect temperature accuracy in a semiconductor sensor.
Sensor devices with a lower level of internal voltage regulation will exhibit greater reductions in
accuracy when the power supply deviates from nominal voltages. Most manufacturers will
include this in their datasheet specifications, with maximum values in the range of ±0.2°C/V to
±0.3°C/V.
In higher accuracy devices with <±0.5°C error, secondary effects will begin to emerge that can
also play a role in overall accuracy.

The Sensing Element

A semiconductor temperature sensor is an IC that combines a temperature-sensing


element with signal conditioning, output, and other types of circuitry on one chip. It relies on the
change of voltage across a p-n junction, essentially a silicon diode, in response to a temperature
change to determine the ambient temperature. The bipolar IC substrate is designed to build p-n-p
and n-p-n transistors, so in practice, the sensing diode is usually formed using a transistor with
the base and collector shorted.

All semiconductor temperature sensors make use of the relationship between a bipolar
junction transistor’s (BJT) base-emitter voltage to its collector current:

VBE = (kT / q) · ln(Ic / Is)

Where k is Boltzmann’s constant, T is the absolute temperature, q is the charge of an electron,


and Is is a current related to the geometry and the temperature of the transistors. The equation
assumes a voltage of at least a few hundred mV on the collector, and ignores the Early effect.
If the total current Ic is instead shared equally among N identical transistors, the new base-emitter voltage is given by the equation:

VN = (kT / q) · ln(Ic / (N · Is))
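The difference between these two voltages, ΔVBE = VBE − VN = (kT/q)·ln(N), depends only on temperature and the known ratio N, which is what makes this technique attractive. A small sketch of that relationship, with an illustrative choice of N = 8, follows.

```python
# Minimal sketch of the delta-VBE principle behind semiconductor temperature
# sensors: dVBE = VBE - VN = (k*T/q) * ln(N), so T = q * dVBE / (k * ln(N)).
import math

K = 1.380649e-23     # Boltzmann constant, J/K
Q = 1.602176634e-19  # electron charge, C
N = 8                # current-density ratio between the two transistors (illustrative)

def delta_vbe(temp_c):
    """Forward model: base-emitter voltage difference at temp_c, in volts."""
    t_kelvin = temp_c + 273.15
    return (K * t_kelvin / Q) * math.log(N)

def temperature_from_delta_vbe(dvbe_volts):
    """Invert dVBE to recover the die temperature in degC."""
    t_kelvin = Q * dvbe_volts / (K * math.log(N))
    return t_kelvin - 273.15

if __name__ == "__main__":
    dv = delta_vbe(25.0)                          # about 53.4 mV at 25 degC for N = 8
    print(f"dVBE at 25 C ~ {dv * 1000:.2f} mV")
    print(f"recovered T  ~ {temperature_from_delta_vbe(dv):.2f} C")
```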
Types of Semiconductor Sensors

1. Voltage Output Temperature Sensors


These types of sensors usually need a source of power supply for excitation. They
give an effective linear output in the form of voltage signals. Besides, they offer quite
low output impedance.

2. Current Output Temperature Sensors


As opposed to voltage output temperature sensors, the output impedance of these
sensors is very high. They usually function as constant current regulators which are
designed to pass 1 micro-amp per degree Kelvin. They also need an input voltage which
can vary between 4 and 30 V.
3. Digital Output Temperature Sensors
These are the foremost sensors designed for the integration of a sensor and an
analog to digital converter on an IC chip. These sensors don’t provide standard digital
interfaces. Hence, they cannot be employed for measurement with standard measuring
devices. Some of them are specially fabricated to enable their use with microprocessors
for thermal management.

4. Resistant Output Silicon Temperature Sensors


These are simple temperature sensors designed with the help of typical
semiconductor manufacturing equipment. The usual temperature resistance
characteristics of semiconductor materials make their use simpler. Besides, these sensors
offer a high tolerance to ion migration and are hence more stable than other semiconductor temperature sensors. However, extra care must be exercised when employing these sensors owing to their other characteristics.

5. Diode Temperature Sensors


These sensors are made using regular PN junction diodes. They are the most inexpensive type of temperature sensor and can provide adequate results if a constant, steady excitation current is supplied to them. Also, they need a two-point calibration for satisfactory operation. An ordinary semiconductor diode provides a reasonably linear forward-bias voltage whose temperature coefficient is around -2.3 mV/°C. A typical diode temperature sensor is shown in the figure below.
Advantages of Using Semiconductor Sensors
 They are very linear with accuracies of ±1°C or better.
 They are inexpensive.
 Use of these temperature sensors enables simple interfacing with other electronic devices
like amplifiers, regulators, Digital signal processors, and microcontrollers etc.
 These types of temperature sensors are considered ideal for embedded applications where
they are installed within the equipment itself.
 Semiconductor devices are rugged with good longevity.

Disadvantages of Using Semiconductor Sensors


 Internal dissipation can cause up to 0.5°C offset resulting in temperature errors.
 Limited range of operation.

Republic of the Philippines

BATANGAS STATE UNIVERSITY

Pablo Borbon Main II

Batangas City

College of Engineering, Architecture & Fine Arts

CHEMICAL AND FOOD ENGINEERING DEPARTMENT

PROCESS DYNAMICS AND CONTROL

COMPOSITION

GROUP 9

Esteban, Mica Ella

Fruelda, Kimberly

Jumalon, Szairah Madel

Matira, Maria Jobel

Payabyab, Wingel Ingrid

Recio, Celine Joy

Silang, Jeoh Ysrael

Vidal, Joachim
ELECTROCHEMICAL

What is electrochemistry?
We may not be aware of it, but we encounter electrochemistry every single day, from the batteries that power our phones to the different modes of transportation that we use. Electrochemistry is the study of chemical processes that cause electrons to move. This movement of electrons is what we call electricity, and in electrochemical systems it is driven by chemical reactions.
Electrochemical analysis in liquid solutions is concerned with the measurement of
electrical quantities, such as potential, current, and charge, to gain information about the
composition of the solution and the reaction kinetics of its components.
To put this in simpler words, the amount of electricity that is generated or measured in these analyses is related to, and affected by, changes in the chemical composition of the liquid solution.

VOLTAMMETRY

Voltammetry is a category of electroanalytical methods used in analytical chemistry and


various industrial processes. In voltammetry, information about an analyte is obtained by
measuring the current as the potential is varied.

Anodic stripping voltammetry is a voltammetric method for quantitative determination of


specific ionic species.

As an example, suppose you want to determine the concentrations of cadmium and lead in your sample. Cadmium is a carcinogenic metal, and the greater your exposure to it, the greater your risk of developing cancer. Lead, on the other hand, is a known neurotoxin that is especially harmful to children, and it has also been used as an additive in gasoline. Anodic stripping voltammetry involves two major steps. The first is the deposition step, where metals from your sample are deposited onto the electrode; the second is the stripping step, where each deposited metal is selectively removed from the electrode by oxidizing it as the potential is scanned. Each metal strips at a characteristic potential, and the resulting current peak quantitatively tells you how much of each metal was in your sample.

POTENTIOMETRY

Potentiometry passively measures the potential of a solution between two electrodes,


affecting the solution very little in the process. One electrode is called the reference electrode and
has a constant potential, while the other one is an indicator electrode whose potential changes
with the composition of the sample. Therefore, the difference of potential between the two
electrodes gives an assessment of the composition of the sample.
Instrumentation:

ION-SELECTIVE ELECTRODES (ISES)

An ion-selective electrode (ISE) is an example of an electrochemical sensor utilizing the


principle of potentiometry, or measurement of the cell potential (i.e., ISE against a standard
reference electrode) at near-zero current. The information on the composition of the sample is obtained through the measurement of the potential difference across the two electrodes.
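As an illustration of how a measured ISE potential maps to composition, the sketch below assumes an ideal Nernstian response; real electrodes are calibrated against standard solutions, and the E0 and activity values here are purely illustrative.

```python
import math

R = 8.314462618      # gas constant, J/(mol*K)
F = 96485.33212      # Faraday constant, C/mol

def ise_potential(activity, z, e0=0.0, temp_c=25.0):
    """Ideal Nernstian ISE response, E = E0 + (RT/zF)*ln(a).
    E0 and the activities below are hypothetical illustration values."""
    T = temp_c + 273.15
    return e0 + (R * T / (z * F)) * math.log(activity)

# A tenfold change in activity of a monovalent ion shifts E by ~59 mV at 25 degC
shift = ise_potential(1e-3, 1) - ise_potential(1e-4, 1)
print(f"{shift * 1e3:.1f} mV per decade")
```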

CONDUCTOMETRY

In addition to potentiometry, conductometric analysis represents the most important non-


faradaic method. Conductometry is based on the measurement of the electrical conductance of
an electrolyte solution, which directly depends on the number of positively and negatively charged
species in the solution. This analysis method is limited due to its nonselective nature, because all
ions in the solution will contribute to the total conductance. Nevertheless, direct conductance
measurements play an important role in the analysis of binary water/electrolyte mixtures, for
example, in chemical water monitoring. The technique can also be applied to ascertain the
endpoint detection in conductometric titrations for the determination of numerous substances.

Conductimetry (the measurement of conductivity) is a physical chemical measurement


that provides information about the total ionic content of aqueous solutions.

Instrumentation:

CONDUCTIVITY METER

Distilled water is a poor electrical conductor. The substances (or salts) dissolved in the
water determine how conductive the solution will be. As the number of dissolved ions increases,
so does the solution's ability to carry an electrical charge. This electrical charge is what allows a
conductivity meter to measure the conductance of a solution.
The conductivity meter reports conductance, i.e., it measures the electrical conductivity of a solution. It has multiple applications in research and engineering, with common usage in hydroponics, aquaculture, aquaponics, and freshwater systems to monitor the amount of nutrients, salts, or impurities in the water. Conductivity measurement is a versatile tool in process control: the measurement is simple and fast, and most advanced sensors require only a little maintenance. The measured conductivity reading can be used to make various inferences about what is happening in the process, and in some cases it is possible to develop a model to calculate the concentration of the liquid, as the sketch below illustrates.
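A minimal sketch of that idea, assuming a probe cell constant of 1 cm⁻¹ and treating the sample as dilute NaCl (both assumptions, not values from the text):

```python
def conductivity(conductance_S, cell_constant_per_cm=1.0):
    """Convert a measured conductance (S) into conductivity (S/cm) using
    the probe's cell constant (the 1 cm^-1 value is a placeholder)."""
    return conductance_S * cell_constant_per_cm

def approx_concentration(kappa_S_per_cm, molar_conductivity=126.4):
    """Very rough concentration estimate (mol/L) for a dilute strong
    electrolyte, kappa ~= molar_conductivity * c. The default value of
    126.4 S*cm^2/mol is roughly that of dilute NaCl at 25 degC."""
    return kappa_S_per_cm * 1000.0 / molar_conductivity

kappa = conductivity(1.41e-3)                       # 1.41 mS/cm reading
print(f"~{approx_concentration(kappa):.3f} mol/L")  # ~0.011 mol/L as NaCl
```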

COULOMETRY

Coulometry is an analytical method for measuring an unknown concentration of an


analyte in solution by completely converting the analyte from one oxidation state to
another. Coulometry is an absolute measurement similar to gravimetry or titration and requires
no chemical standards or calibration. It is therefore valuable for making
absolute concentration determinations of standards.
COULOMETRIC TITRATION (CONTROLLED-CURRENT COULOMETRY)

It is a procedure in which a known, constant current is applied to a solution containing an unknown species. That current completely oxidizes or reduces the species until all of it has been converted to a different oxidation state. The magnitude of the applied current and the elapsed time can then be used to find the number of moles, the concentration, or whatever other information is wanted. Controlled-current coulometry maintains a constant current throughout the reaction period. Here, an excess
of a redox buffer substance must be added in such a way that the potential does not cause any
undesirable reaction. That means the product of the electrolysis of the redox buffer must react
quantitatively with the unknown substance to be determined. Coulometric titrations need an
electrolytically generated titrant that reacts stoichiometrically with the analyte to be determined.
As in controlled-potential coulometry, 100% current efficiency is required. The current is
accurately fixed at a constant value and the quantity of electricity can be calculated by the product
of the current (in A) and the time (in s) using endpoint detection. In principle, any endpoint
detection system that fits chemically can be used, for example, chemical indicators (color change)
and potentiometric, amperometric, or conductometric procedures. For coulometric titrations the
instrumentation consists of a titrator (constant-current source, integrator) and a cell.
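Since the quantity of electricity is just the product of current and time, the amount of analyte follows directly from Faraday's law; a minimal sketch with illustrative numbers:

```python
F = 96485.33212   # Faraday constant, C/mol

def moles_titrated(current_A, time_s, n_electrons):
    """Controlled-current coulometry: the charge passed is Q = I*t and,
    assuming 100 % current efficiency, the amount of substance converted
    is Q / (n*F)."""
    return current_A * time_s / (n_electrons * F)

# Illustrative numbers: 10.00 mA applied for 120.0 s in a 2-electron process
print(f"{moles_titrated(10.00e-3, 120.0, 2) * 1e6:.3f} micromol")   # ~6.22 micromol
```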

APPLICATIONS

● Digital cameras (lithium batteries)

● Digital watches (mercury/silver-oxide batteries)

● Food Analysis

● Environmental monitoring

● Electrochemical sensors can be used in automobiles, aircraft, mobile phones


etc.

● Biomedical applications

ELECTROPHORESIS
Electrophoresis is a quick and easy molecular technique used to analyze and separate nucleic acids based on their size (i.e., how many base pairs a molecule is composed of). It is one of the most important methods for separating colloidal particles and biological molecules such as:

-proteins

-carbohydrates

-nucleic acids
-polysaccharides

-peptides

-amino acids

-oligosaccharides

-nucleosides

-organic acids

-small anions and cations of body fluids.

Electrophoresis takes advantage of the fact that DNA’s phosphate backbone is negatively
charged. Thus, when DNA is placed in an electric field, it will migrate toward the positive electrode.
The differential ability of DNA fragments to move through a gel according to their size does not depend so much on the electric field or on the charge of the DNA as on the composition of the gel. It is usually performed in labs to analyze DNA, RNA, or protein samples from various sources.
PRINCIPLES OF GEL ELECTROPHORESIS

The gel electrophoresis technique exploits the difference in size and charge of different molecules
in a sample. The DNA or protein sample to be separated is loaded onto a porous gel placed in an
ionic buffer medium. On application of electric charge, each molecule having different size and
charge will move through the gel at different speeds.

The porous gel used in this technique acts as a molecular sieve that separates bigger molecules
from the smaller ones. Smaller molecules move faster across the gel while the bulkier ones are
left behind. The mobility of the particles is also controlled by their individual electric charge. Two
oppositely charged electrodes that are part of the system pull molecules towards them on the
basis of their charge.

HOW DOES IT WORK?


The gel used in gel electrophoresis is usually made of a material called agarose, which is a
complex polymer that forms a matrix through which DNA travels when subjected to an electric
field. It is also a gelatinous substance extracted from seaweed. This porous gel could be used to
separate macromolecules of many different sizes. The gel is submerged in a salt buffer solution
in an electrophoresis chamber.

Tris-borate-EDTA (TBE) is commonly used as the buffer. Its main function is to control the pH of
the system. The chamber has two electrodes – one positive and another negative - at its two
ends.

Samples that need to be analyzed are then loaded into tiny wells in the gel with the help of a
pipette. Once loading is complete, an electrical current of 50–150 V is applied. Now, charged
molecules present in the sample start migrating through the gel towards the electrodes.
Negatively charged molecules move towards the positive electrode and positively charged
molecules migrate towards the negative electrode.
The speed at which each molecule travels through the gel is called its electrophoretic mobility and is determined mainly by its net charge and size. Strongly charged molecules move faster than weakly charged ones, and smaller molecules run faster, leaving the larger ones behind. Thus, strong charge and small size increase a molecule's electrophoretic mobility, while weak charge and large size decrease it. When all molecules in a sample carry the same charge (as DNA fragments effectively do), the separation is based solely on their size.
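To illustrate the charge/size trade-off only, here is a crude free-solution estimate with hypothetical particle sizes; it deliberately ignores the sieving effect of the gel, which, as noted above, is what actually dominates in gel electrophoresis.

```python
import math

e = 1.602176634e-19   # elementary charge, C

def free_solution_mobility(charge_C, radius_m, viscosity_Pa_s=1.0e-3):
    """Crude free-solution electrophoretic mobility of a spherical particle,
    mu = q / (6*pi*eta*r). This ignores the sieving effect of the gel, so it
    only illustrates the charge/size trade-off described above, not actual
    gel behaviour. Water's viscosity (~1 mPa*s) is used by default."""
    return charge_C / (6.0 * math.pi * viscosity_Pa_s * radius_m)

# Same net charge, different sizes: the smaller particle is more mobile
for r_nm in (2.0, 10.0):
    mu = free_solution_mobility(10 * e, r_nm * 1e-9)
    print(f"r = {r_nm} nm -> mu = {mu:.2e} m^2/(V*s)")
```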

Once the separation is complete, the gel is stained with a dye to reveal the separation bands.

Ethidium bromide is a fluorescent dye commonly used in gel electrophoresis. The gel is soaked
in a diluted ethidium bromide solution and then placed on a UV transilluminator to visualize the
separation bands.

The bands are immediately examined or photographed for future reference, as they will diffuse
into the gel over time. The dye can also be loaded into the gel well in advance to track the
migration of the molecules as it happens.

APPLICATIONS

Gel electrophoresis is widely used in the molecular biology and biochemistry labs in areas such
as forensic science, conservational biology, and medicine.

Some key applications of the technique are listed below:


To analyze results of polymerase chain reaction

In the separation of DNA fragments for DNA fingerprinting to investigate crime scenes
To analyze genes associated with a particular illness

In DNA profiling for taxonomy studies to distinguish different species


In paternity testing using DNA fingerprinting

In the study of structure and function of proteins


In the analysis of antibiotic resistance

Study of evolutionary relationships by analyzing genetic similarity among populations or species


RAMAN SPECTROSCOPY

Raman Spectroscopy is a non-destructive chemical analysis technique which provides


detailed information about chemical structure, phase and polymorphy, crystallinity and molecular
interactions. It is based upon the interaction of light with the chemical bonds within a material.
Raman is a light scattering technique, whereby a molecule scatters incident light from a
high intensity laser light source. Most of the scattered light is at the same wavelength (or color)
as the laser source and does not provide useful information – this is called Rayleigh Scatter.
However, a small amount of light (typically 0.0000001%) is scattered at different wavelengths (or
colors), which depend on the chemical structure of the analyte – this is called Raman Scatter.

Principle of Raman Spectroscopy

When light interacts with molecules in a gas, liquid, or solid, the vast majority of the
photons are dispersed or scattered at the same energy as the incident photons. This is described
as elastic scattering, or Rayleigh scattering. A small number of these photons, approximately 1
photon in 10 million will scatter at a different frequency than the incident photon. This process is
called inelastic scattering, or the Raman effect, named after Sir C.V. Raman who discovered this
and was awarded the 1930 Nobel Prize in Physics for his work. Since that time, Raman has been
utilized for a vast array of applications from medical diagnostics to material science and reaction
analysis. Raman allows the user to collect the vibrational signature of a molecule, giving insight
into how it is put together, as well as how it interacts with other molecules around it.

How does Raman Spectroscopy Work?

Unlike FTIR spectroscopy, which looks at changes in dipole moments, Raman looks at changes in the polarizability of molecular bonds. Interaction of light with a molecule can induce a deformation of its electron cloud; this deformation is known as a change in polarizability.
Molecular bonds have specific energy transitions in which a change of polarizability occurs, giving
rise to Raman active modes. As an example, molecules that contain bonds between homonuclear
atoms such as carbon-carbon, sulfur-sulfur, and nitrogen-nitrogen bonds undergo a change in
polarizability when photons interact with them. These are examples of bonds that give rise to
Raman-active spectral bands but would be weak or not seen at all in FTIR.
Because Raman is an inherently weak effect, the optical components of a Raman
Spectrometer must be well matched and optimized. Also, since organic molecules may have a greater tendency to fluoresce when shorter-wavelength radiation is used, longer-wavelength monochromatic excitation sources, such as solid-state laser diodes that produce light at 785 nm, are typically used.

Information provided by Raman spectroscopy


● Chemical structure and identity
● Phase and polymorphism
● Intrinsic stress/strain
● Contamination and impurity

Raman spectra of ethanol and methanol, showing the significant spectral differences
which allow the two liquids to be distinguished.

Typically, a Raman spectrum is a distinct chemical fingerprint for a particular molecule or


material and can be used to very quickly identify the material, or distinguish it from others. Raman
spectral libraries are often used for identification of a material based on its Raman spectrum –
libraries containing thousands of spectra are rapidly searched to find a match with the spectrum
of the analyte.

In combination with mapping (or imaging) Raman systems, it is possible to generate


images based on the sample’s Raman spectrum. These images show distribution of individual
chemical components, polymorphs and phases, and variation in crystallinity.
(Figure: mineral distribution)

Application of Raman Spectroscopy

Raman spectroscopy is used in many varied fields – in fact, it can be used in any
application where non-destructive, microscopic, chemical analysis and imaging is required.
Whether the goal is qualitative or quantitative data, Raman analysis can provide key information
easily and quickly. It can be used to rapidly characterize the chemical composition and structure
of a sample, whether solid, liquid, gas, gel, slurry or powder.

● Pharmaceutical and Cosmetics


● Geology and Mineralogy
● Carbon Materials
● Semiconductors
● Life Sciences

Advantages of Raman Spectroscopy

Raman microscopy is the smart combination of confocal optical microscopy with Raman spectroscopy. One big advantage of light microscopes is the ability to observe living cells: a wide range of biological activity, such as the uptake of food, cell division and movement, can be followed, and in-vivo staining techniques can be used to observe the uptake of colored pigments by the cells. Such processes can hardly be observed in real time with Raman microscopes, because of their long acquisition times, or with electron microscopes, because the specimen has to be fixed and dehydrated (and is therefore often dead). The low cost of optical microscopes also makes them useful in a wide range of areas, such as education and medical applications.
● Detailed Chemical/Molecular Analysis
● Subtle information (crystallinity, polymorphism, phase)
● Speed
● No sample preparation
● Non-destructive
● Microscopic spatial resolution
● Confocal Analysis
● Suitable for in situ, in vitro, in vivo analysis

MASS SPECTROMETRY
● Mass Spectrometry is widely used to determine and identify the elements present in
samples and to determine their concentrations.
● Mass Spectrometry is also used to measure the relative atomic mass of an element and
to measure the relative molecular mass of a substance.

Basic Principle

A mass spectrometer generates multiple ions from the sample under investigation, it then
separates them according to their specific mass-to-charge ratio (m/z), and then records the
relative abundance of each ion type.

· In a typical procedure, a sample, which may be solid, liquid, or gas, is ionized, for
example by bombarding it with electrons.
· This may cause some of the sample’s molecules to break into charged fragments.
These ions are then separated according to their mass-to-charge ratio, typically by
accelerating them and subjecting them to an electric or magnetic field:
· Ions of the same mass-to-charge ratio will undergo the same amount of deflection.
· The ions are detected by a mechanism capable of detecting charged particles, such
as an electron multiplier. Results are displayed as spectra of the relative abundance of
detected ions as a function of the mass-to-charge ratio.
· The atoms or molecules in the sample can be identified by correlating known masses
(e.g. an entire molecule) to the identified masses or through a characteristic fragmentation
pattern.
Four Key Stages

1. Ionization
2. Acceleration
3. Deflection
4. Detection

Ionization

- Atoms are ionized by knocking one or more electrons off, by bombardment with a stream of electrons, to give positive ions. Most of the positive ions formed will carry a charge of +1.
- Ionization can be achieved by :
o Electron Ionization (EI-MS)
o Chemical Ionization (CI-MS)
o Desorption Technique (FAB)

Acceleration
- Ions are accelerated so that they all have the same kinetic energy.
- The positive ions pass through three slits held at progressively lower voltages.
- The middle slit is at an intermediate voltage and the final slit is at zero volts.

Deflection
- Ions are deflected by a magnetic field according to their masses.
- The lighter the ion, the more it is deflected.
- The deflection also depends on the number of positive charges an ion carries; the more positive charge it has, the more it is deflected (a rough illustration is given below).
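A rough illustration of this mass- and charge-dependent deflection, assuming a simple magnetic-sector geometry with a hypothetical accelerating voltage and field strength:

```python
import math

e = 1.602176634e-19       # elementary charge, C
amu = 1.66053906660e-27   # atomic mass unit, kg

def sector_radius(mass_amu, charge, accel_volts, b_tesla):
    """Radius of curvature of an ion in a magnetic-sector analyser after
    acceleration through `accel_volts`: r = sqrt(2*V*m/q) / B. Lighter
    (or more highly charged) ions follow a tighter curve, i.e. they are
    deflected more, as described above."""
    m = mass_amu * amu
    q = charge * e
    return math.sqrt(2.0 * accel_volts * m / q) / b_tesla

# Hypothetical settings: 2000 V accelerating voltage, 0.5 T field
for mz in (28, 44, 100):
    print(f"m/z {mz}: r = {sector_radius(mz, 1, 2000.0, 0.5) * 100:.1f} cm")
```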
Detection
- The beam of ions passing through the mass analyzer is detected by the detector on the basis of the m/z ratio.
- When an ion hits the metal box, its charge is neutralized by an electron jumping from the metal onto the ion.
- Types of analyzers:
o Magnetic sector mass analyzers
o Double-focusing analyzers
o Quadrupole mass analyzers
o Time-of-flight (TOF) analyzers
o Ion trap analyzers
o Ion cyclotron analyzers

APPLICATIONS

● Environmental monitoring and analysis (soil, water and air pollutants, water quality, etc.)
● Geochemistry – age determination, soil and rock composition, oil and gas surveying
● Chemical and Petrochemical industry – Quality control
● Identify structures of biomolecules, such as carbohydrates, nucleic acids
● Sequence biopolymers such as proteins and oligosaccharides
● Determination of molecular mass of peptides, proteins, and oligonucleotides.
● Monitoring gases in patients' breath during surgery.
● Identification of drug abuse and metabolites of drugs of abuse in blood, urine, and saliva.
● Analysis of aerosol particles.
● Determination of pesticides residues in food

INFRARED (IR) SPECTROSCOPY

Infrared spectroscopy (IR spectroscopy) is the spectroscopy that deals with the infrared
region of the electromagnetic spectrum, that is light with a longer wavelength and lower frequency
than visible light. It covers a range of techniques, mostly based on absorption spectroscopy.
Similar to other spectroscopic techniques, IR spectroscopy can also be used to identify and study
chemicals.
Infrared (IR) spectroscopy is one of the most common spectroscopic techniques used by
organic and inorganic chemists. Simply, it is the absorption measurement of different IR
frequencies by a sample positioned in the path of an IR beam. Generally, stronger bonds and
light atoms will vibrate at a high stretching frequency (wavenumber).
Different functional groups absorb characteristic frequencies of IR radiation. Using various
sampling accessories, IR spectrometers can accept a wide range of sample types such as gases,
liquids, and solids. Thus, IR spectroscopy is an important and popular tool for structural
elucidation and compound identification.
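The earlier statement that stronger bonds and lighter atoms vibrate at higher wavenumbers can be made quantitative by treating the bond as a harmonic oscillator; the sketch below uses approximate, textbook-level force constants, so the numbers are only indicative.

```python
import math

c_cm = 2.99792458e10      # speed of light, cm/s
amu = 1.66053906660e-27   # atomic mass unit, kg

def stretch_wavenumber(k_N_per_m, m1_amu, m2_amu):
    """Harmonic-oscillator estimate of a stretching wavenumber (cm^-1):
    wavenumber = (1 / (2*pi*c)) * sqrt(k / mu), with mu the reduced mass.
    Stiffer bonds (larger k) and lighter atoms (smaller mu) give higher
    wavenumbers, as stated above."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * amu
    return math.sqrt(k_N_per_m / mu) / (2.0 * math.pi * c_cm)

# Rough force constants: C-H single bond ~500 N/m, C=O double bond ~1200 N/m
print(f"C-H stretch ~{stretch_wavenumber(500, 12.0, 1.0):.0f} cm^-1")    # ~3000
print(f"C=O stretch ~{stretch_wavenumber(1200, 12.0, 16.0):.0f} cm^-1")  # ~1700
```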

MAIN GOAL: to determine the chemical functional groups in the sample.

INFRARED SPECTROMETER
- Instrument used in infrared spectroscopy
- Used to produce an infrared spectrum

HOW DOES IT WORK?


THE INFRARED SPECTROSCOPIC PROCESS

- The quantum mechanical energy levels observed in IR spectroscopy are those of


molecular vibration
- When we say a covalent bond between two atoms is of a certain length, we are citing an
average because the bond behaves as if it were a vibrating spring connecting the two
atoms
- For a simple diatomic molecule, this model is easy to visualize:

There are two types of bond vibration:


- STRETCH
- Symmetric
- Asymmetric

- BEND
- Scissor
- Rock
- Twist
- Wag

Stretch – Vibration or oscillation along the line of the bond


Bend – Vibration or oscillation not along the line of the bond

THE IR SPECTRUM AND GROUP ANALYSIS

- There are 4 primary regions of the IR spectrum

It is important to make note of peak intensities to show the effect of these factors:

Strong (s) – peak is tall, transmittance is low (0–35%)

Medium (m) – peak is mid-height (35–75%)

Weak (w) – peak is short, transmittance is high (75–90%)

Broad (br) – if the Gaussian distribution is abnormally broad


APPLICATIONS

● Identification of functional group and structure elucidation


● Identify unknown materials and substances
● Studying the progress of the reaction
● Detection of impurities
● Determine the amount of components in a mixture
● Quantitative analysis

ULTRAVIOLET (UV) SPECTROSCOPY

UV spectroscopy is a type of absorption spectroscopy in which light of the UV region (200–


400 nm) is absorbed by the molecule. Absorption of the UV radiations results in the excitation of
the electrons from the ground state to a higher energy state. The energy of the UV radiation that
is absorbed is equal to the energy difference between the ground state and the higher energy
states (ΔE = hν). Generally, the most favored transition is from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO). Such electron transfer processes can take
place in metal transition ions and inorganic and organic molecules. This absorption spectroscopy
uses electromagnetic radiation between 190 and 800 nm and is divided into UV regions (190–
400 nm) and visible regions (400–800 nm). Since absorption of UV or visible radiation by a
molecule leads to a change between the electronic energy levels of the molecule, it is also
sometimes referred to as electronic spectroscopy.

UV spectroscopy obeys the Beer–Lambert law, which states that when a beam of
monochromatic light is passed through a solution of an absorbing substance, the rate of decrease
of intensity of the radiation with thickness of the absorbing solution is proportional to the incident
radiation as well as the concentration of the solution. From the Beer–Lambert law it is clear that
the greater the number of molecules capable of absorbing light of a given wavelength, the greater
the extent of light absorption. This is the basic principle of UV spectroscopy.
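A minimal worked example of the Beer–Lambert relation described above (the molar absorptivity and concentration are hypothetical values):

```python
def absorbance(epsilon, path_cm, conc_mol_per_L):
    """Beer-Lambert law: A = epsilon * b * c (epsilon in L/(mol*cm))."""
    return epsilon * path_cm * conc_mol_per_L

def concentration(A, epsilon, path_cm):
    """Invert Beer-Lambert to get concentration from a measured absorbance."""
    return A / (epsilon * path_cm)

# Hypothetical analyte: epsilon = 12000 L/(mol*cm), 1 cm cuvette
A = absorbance(12000, 1.0, 2.5e-5)
print(f"A = {A:.3f}")                                      # 0.300
print(f"c = {concentration(0.300, 12000, 1.0):.2e} mol/L") # 2.50e-05
```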

Principles

1. Basically, spectroscopy is related to the interaction of light with matter.


2. As light is absorbed by matter, the result is an increase in the energy content of the
atoms or molecules.
3. When ultraviolet radiations are absorbed, this results in the excitation of the electrons
from the ground state towards a higher energy state.
4. Molecules containing π-electrons or non-bonding electrons (n-electrons) can absorb
energy in the form of ultraviolet light to excite these electrons to higher anti-bonding
molecular orbitals.
5. The more easily the electrons are excited, the longer the wavelength of light the molecule can absorb. There are four possible types of transitions (π–π*, n–π*, σ–σ*, and n–σ*), which can be ordered by the energy required as follows: σ–σ* > n–σ* > π–π* > n–π*.
6. The absorption of ultraviolet light by a chemical compound will produce a distinct
spectrum which aids in the identification of the compound.

Transitions between electronic states can be divided into the following categories:

● π → π* transitions: For molecules that possess π bonds, like alkenes, alkynes, aromatics, acryl compounds or nitriles, light can promote electrons from a π bonding molecular orbital to a π anti-bonding molecular orbital. This is called a π → π* transition and is usually strong (high extinction coefficient). Groups of atoms involved in π bonding are thus often called chromophores. The transition energy (or absorption wavelength) can be an indication of different types of π bonds (carbon-carbon, carbon-oxygen or carbon-nitrogen in a nitrile group).

● n → π* transitions: Lone-pair electrons that exist on oxygen and nitrogen atoms may be promoted from their non-bonding molecular orbital to a π anti-bonding molecular orbital. This is called an n → π* transition and requires less energy (longer wavelength) compared to a π → π* transition within the same chromophore. However, the transition probability is usually much lower.

● n → σ* transitions: Saturated compounds with substituents containing lone pairs, such as water, ammonia and hydrogen disulfide, have only n → σ* and σ → σ* transitions in the UV-visible range.

● d–d transitions: Many transition metal ion solutions are colored as a result of their partially filled d-levels, which allows promotion of an electron to an excited state (a change of d-level occupation) by the absorption of relatively low-energy visible light. The bands are often broad and strongly influenced by the chemical environment. They are also usually very weak.

● Charge-transfer transitions: Much stronger absorption is found when complexing the metal ion with a suitable organic chelating agent to produce a charge-transfer complex. Electrons may be transferred from the metal to the ligand or vice versa. The high transition probability is exploited to quantitatively detect ions in solution. There are numerous chelating agents available, which may or may not complex selectively when more than one type of metal ion is present. For example, 1,10-phenanthroline is a common chelate for the analysis of Fe(II).

Instrumentation

Optical spectrometry is the technique of measuring the intensity of absorption or emission of


radiation in the ultraviolet-visible region of the spectrum. In analytical applications, these
measurements are made by exciting, in various ways, transitions of electrons between outer
orbitals of atoms, ions or molecular species. In most applications, it is necessary to dissolve the
sample before analysis.

1. Light sources
Radiation sources need to be continuous over the range of wavelengths of interest.
The earliest sources were simply tungsten filament lamps (light-bulbs!) but these have
since been replaced by tungsten-halogen lamps. Such light sources cover the wavelength
range from 300-900 nm. To reach further into the UV an additional source is needed. This
is usually a deuterium arc lamp, which has a continuous spectrum below 400 nm.

2. Monochromator
A monochromator is used to select the wavelength at which an absorption
measurement is made. In fact, it is not possible to select a ’single’ wavelength, but rather
a narrow range of wavelengths, which defines the spectral resolution of the spectrometer.
There are two main choices for dispersing light into its different components: a prism, or
a diffraction grating. Most modern instruments employ gratings, because it is easier to
achieve high spectral resolution. However, gratings have the disadvantage of giving rise
to more than one order of diffraction. This means that if the monochromator is set to 600
nm for example, then it will also pass 300 nm (second order) radiation. This problem is
easily overcome by the use of additional filters to remove the unwanted radiation. A typical
monochromator design is shown in Figure 4. It consists of the diffraction grating
(dispersing element), slits, and curved mirrors, which image the entrance slit onto the exit
slit and produce a parallel beam at the grating. During a scan, the grating is slowly rotated,
and light of different wavelengths will emerge from the exit slit and pass through the
sample to the detector. Thus the spectrum is obtained sequentially as the grating is rotated
to select the wavelength and the detector observes the transmitted radiation intensity. The
spectral resolution can be varied by changing the size of the slits. Narrower slits allow for
higher resolution at the expense of light intensity, which can result in larger noise.
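The order-overlap problem mentioned above (600 nm in first order coinciding with 300 nm in second order) can be checked with a one-line calculation; the 190 nm cutoff below is an assumed lower wavelength limit for the instrument.

```python
def overlapping_orders(set_wavelength_nm, max_order=3, cutoff_nm=190):
    """For a fixed grating angle, every wavelength with the same value of
    m * lambda emerges at the same angle, so a monochromator set to
    `set_wavelength_nm` in first order also passes the shorter
    wavelengths returned here; these are what the order-sorting filter
    must block. The 190 nm cutoff is an assumed instrument limit."""
    return [set_wavelength_nm / m
            for m in range(2, max_order + 1)
            if set_wavelength_nm / m >= cutoff_nm]

print(overlapping_orders(600))   # [300.0, 200.0] -> 2nd and 3rd order
```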

3. Detectors
The following detectors are commonly used in UV/Vis spectroscopy:
● Photomultipliers: A photomultiplier consists of a photocathode and a series of
dynodes in an evacuated glass enclosure. Light that strikes the photo cathode
causes the ejection of electrons due to the photoelectric effect. The electrons are
accelerated towards a series of additional electrodes called dynodes. These
electrodes are each maintained at a more positive potential. Additional electrons
are generated at each dynode. This cascading effect creates 10⁵ to 10⁷ electrons
for each photon hitting the first cathode depending on the number of dynodes and
the accelerating voltage. This amplified signal is finally collected at the anode
where it can be measured.
● Semiconductor Photodiodes: When a photon strikes a semiconductor, it can
promote an electron from the valence band (filled orbitals) to the conduction band
(unfilled orbitals) creating an electron(-) - hole(+) pair. The concentration of these
electron-hole pairs is dependent on the amount of light striking the semiconductor,
making the semiconductor suitable as an optical detector. Photovoltaic detectors
contain a p-n junction that causes the electron-hole pairs to separate to produce a
voltage that can be measured. Photodiode detectors are not as sensitive as PMTs
but they are small, cheap and robust.
● Charge-coupled devices (CCD): A CCD is an integrated-circuit chip that contains
an array of capacitors that store charge when light creates electron-hole pairs. The
charge accumulates and is read in a fixed time interval. CCDs are used in similar
applications as arrays of photodiodes but the CCD is much more sensitive for
measurement of low light levels. They can replace the exit slit of a monochromator
which disperses light only after it has passed a sample. In this way, full spectra
can be accumulated very quickly without moving any optics.

4. Dual Beam Spectrophotometers


A diagram of the components of a typical dual beam spectrometer is shown in
Figure 5. A beam of light from either the visible or UV light source is separated into its component wavelengths in a monochromator. An additional filter suppresses light at shorter
wavelengths to avoid interference from second order diffraction. The monochromatic
(narrow bandwidth) beam is then split into two beams of equal intensity by a half-mirror or
beam splitter. One beam, the sample beam, passes through the cuvette containing a
solution of the compound being studied. The other beam, the reference, passes through
an identical cuvette containing only the solvent. The intensities of these light beams are
then measured by photo detectors and compared. The intensity of the reference beam,
which should have suffered little or no light absorption but the same reflection losses as
the sample beam, is defined as I0. The intensity of the sample beam is defined as I. During
a wavelength scan, intensity changes and fluctuations are equally sensed by the two
detectors and normalized out by the division of I by I0. However, even if both cuvettes
contain the same solution, these two intensities may not be exactly the same, for example
because of different detector efficiencies or spatial beam drifts. This leads to a small
background spectrum, which can even be negative in some frequency ranges. Like with
a single beam spectrometer (no reference beam) it is thus important to first record the
background spectrum with only solvent in the sample cell. This spectrum must then be
subtracted from the one recorded with the sample solution. If you do this, the reference
compartment may even be left empty.

5. Data acquisition
The earliest instruments simply directly connected the amplified detector signal to
a chart recorder. Today, all experimental settings are controlled by a computer and the
detector signals are digitized, processed and stored. Nevertheless, it is important that you
note parameters which you set via the instrument software (slit width, scan range, scan
speed, single beam/dual beam) into your laboratory journal, along with the name of the
file containing the data (and its path). Otherwise it can become very difficult to find or
reproduce a measurement after other users have changed these settings!

Applications

1. Detection of Impurities
UV absorption spectroscopy is one of the best methods for determination of
impurities in organic molecules. Additional peaks can be observed due to impurities in the
sample and it can be compared with that of standard raw material. By also measuring the
absorbance at specific wavelengths, the impurities can be detected. Benzene appears as
a common impurity in cyclohexane. Its presence can be easily detected by its absorption
at 255 nm.

2. Structure elucidation of organic compounds


UV spectroscopy is useful in the structure elucidation of organic molecules, for example in establishing the presence or absence of unsaturation or of heteroatoms. From the location and combination of peaks, it can be concluded whether the compound is saturated or unsaturated and whether heteroatoms are present.

3. Quantitative analysis
UV absorption spectroscopy can be used for the quantitative determination of
compounds that absorb UV radiation. This determination is based on Beer’s law which is
as follows.
A = log(I0/It) = log(1/T) = –log T = abc = εbc
where ε is the extinction coefficient, c is the concentration, and b is the path length of the cell used in the UV spectrophotometer.
Other methods for quantitative analysis are as follows.
a. calibration curve method
b. simultaneous multicomponent method
c. difference spectrophotometric method
d. derivative spectrophotometric method

4. Qualitative analysis
UV absorption spectroscopy can characterize those types of compounds which
absorb UV radiation. Identification is done by comparing the absorption spectrum with the
spectra of known compounds.
UV absorption spectroscopy is generally used for characterizing aromatic
compounds and aromatic olefins.

5. Dissociation constants of acids and bases


pH = pKa + log([A–]/[HA])
From this equation, the pKa value can be calculated if the ratio [A–]/[HA] is known at a particular pH, and this ratio can be determined spectrophotometrically from graphs of absorbance against wavelength recorded at different pH values.
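A small worked example of that calculation, with a hypothetical pH and absorbance-derived ratio:

```python
import math

def pKa_from_ratio(ph, ratio_A_to_HA):
    """Rearranged form of the relation above:
    pKa = pH - log10([A-]/[HA]), with the ratio taken from the
    absorbance spectra recorded at different pH values."""
    return ph - math.log10(ratio_A_to_HA)

# Hypothetical point: at pH 5.00 the spectra give [A-]/[HA] = 2.40
print(f"pKa = {pKa_from_ratio(5.00, 2.40):.2f}")   # 4.62
```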

6. Chemical kinetics
Kinetics of reaction can also be studied using UV spectroscopy. The UV radiation
is passed through the reaction cell and the absorbance changes can be observed.

7. Quantitative analysis of pharmaceutical substances


Many drugs exist either as raw materials or as formulations. They can be assayed by making a suitable solution of the drug in a solvent and measuring the absorbance at a specific wavelength. Diazepam tablets, for example, can be analyzed in 0.5% H2SO4 in methanol at a wavelength of 284 nm.

8. Molecular weight determination


Molecular weights of compounds can be measured spectrophotometrically by preparing suitable derivatives of these compounds.
For example, to determine the molecular weight of an amine, it is first converted into its amine picrate. A known weight of the amine picrate is then dissolved in a litre of solution and its optical density is measured at λmax = 380 nm. The concentration of the solution in gram-moles per litre can then be calculated from Beer's law, c = A/(εb).
Once "c" has been calculated from this equation, and the weight "w" of amine picrate taken is known, the molecular weight of the amine picrate follows from "w" and "c", and the molecular weight of the amine can in turn be obtained from that of the amine picrate.
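A sketch of that calculation, assuming Beer's law (c = A/εb) is the relation referred to above; the absorbance, molar absorptivity and mass concentration are placeholder values:

```python
def molecular_weight(absorbance, epsilon, path_cm, grams_per_litre):
    """Spectrophotometric molecular-weight estimate along the lines
    described above: Beer's law gives c = A/(epsilon*b) in mol/L, and
    dividing the known mass concentration w (g/L) by c gives the molar
    mass of the derivative. epsilon for the amine picrate at 380 nm is
    a placeholder value here."""
    c = absorbance / (epsilon * path_cm)   # mol/L
    return grams_per_litre / c             # g/mol

# Hypothetical data: A = 0.525, epsilon = 1.30e4 L/(mol*cm), b = 1 cm, w = 0.0150 g/L
print(f"{molecular_weight(0.525, 1.30e4, 1.0, 0.0150):.0f} g/mol")   # ~371 g/mol
```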

9. As HPLC detector
A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence
of an analyte gives a response which can be assumed to be proportional to the
concentration. For more accurate results, the instrument's response to the analyte in the
unknown should be compared with the response to a standard; as in the case of the
calibration curve.

CHEMILUMINESCENCE

Luminescence is the emission of light by certain materials when they are relatively cool. It
is in contrast to light emitted from incandescent bodies, such as burning wood or coal, molten
iron, and wire heated by an electric current. Luminescence may be seen in neon and fluorescent
lamps; television, radar, and X-ray fluoroscopy screens; organic substances such as luminol or
the luciferins in fireflies and glowworms; certain pigments used in outdoor advertising; and also,
natural electrical phenomena such as lightning and the aurora borealis. In all these phenomena,
light emission does not result from the material being above room temperature, which is why
luminescence is often referred to as cold light. The practical value of luminescent materials lies in
their capacity to transform invisible forms of energy into visible light.

EARLY INVESTIGATIONS
Although lightning, the aurora borealis, and the dim light of
glowworms and of fungi have always been known to mankind, the first
investigation of luminescence began with a synthetic material.
Vincenzo Cascariolo, an alchemist and cobbler in Bologna, Italy,
heated a mixture of barium sulfate (in the form of barite, heavy spar)
and coal. The powder obtained after cooling exhibited a bluish glow
at night, and Cascariolo observed that this glow could be restored by
exposure of the powder to sunlight.
The name lapis solaris or “sunstone” was given to the material
because alchemists at first hoped it would transform baser metal into
gold. The afterglow aroused the interest of other people, who gave the
material other names, including phosphorus, meaning “light bearer”, which was applied to any
material that glowed in the dark.

SOURCES AND PROCESS

Luminescence emission occurs after an appropriate material has absorbed energy from a
source such as ultraviolet or X-ray radiation, electron beams, and chemical reactions. The energy
lifts the atom of the material into an excited state and because excited states are unstable, the
material undergoes another transition, back to its unexcited ground state, and the absorbed
energy is liberated in the form of either light or heat.

The excitation involves only the outermost electrons orbiting around the nuclei of the
atoms. Luminescence efficiency depends on the degree of transformation of excitation energy
into light. Luminescence phenomena can be classified as photoluminescence (fluorescence and phosphorescence) when the excitation energy comes from absorbed light, chemiluminescence when it comes from chemical reactions, and bioluminescence when it comes from biologically catalyzed reactions.

INSTRUMENTATION
Light consists of billions of tiny packets of energy called photons. Photons emitted from
bioluminescent and chemiluminescent reactions are typically measured using a luminometer.

1. Luminometers are simple, relatively inexpensive instruments designed to measure sample


light output. Light output is measured by integrating, or measuring the area under the
chemical reaction’s light emission curve for a period of time. Luminometers consist of a
sample chamber, detector, signal processing method, and signal output display.

a. Sample Chamber - The luminometer sample chamber, which holds a test tube,
microplate, or other type of sample container, presents the luminescent sample to
the detector. The chamber must be sealed from ambient light in order to minimize
potential interferences. The sample chamber should be positioned as close to the
detector as possible to maximize optical efficiency. High optical efficiency is
desirable for an optimum signal-to-noise ratio, which allows rapid and precise
measurements.

b. Detector - Photodiodes and photomultiplier tubes (PMTs) are the detection devices
commonly found in commercial luminometers. Improvements in photodiodes have
made them effective for some applications, however PMTs continue to be the
detector of choice for measuring extremely low levels of light. PMTs are positioned
either to the side (“side-on” configuration) or underneath (“end-on” configuration)
the sample cell. The bottom-viewing or end-on configuration of the photomultiplier
tube ensures uniformity of light collection from even the smallest sample.

c. Photon counting vs. Current Measuring - Most of today’s commercial luminometers


are either photon-counting or current-measuring in their signal processing and
readout design. A photon-counting luminometer counts individual photons with a
PMT and a current-measuring luminometer measures the electrical current that
results when photons strike the PMT. A photon counter will read in “photons per
second”, while a current-measuring luminometer will read in arbitrary light units,
usually referred to as “relative light units” or RLUs.

THE NATURE OF CHEMILUMINESCENCE REACTIONS

Chemiluminescence is the production of light from a chemical reaction. Two chemicals


react to form an excited (high-energy) intermediate, which breaks down, releasing some of its
energy as photons of light to reach its ground state.
APPLICATIONS

Chemiluminescent reactions do not usually release much heat, because energy is


released as light instead.

1. Luminol
Luminol, a glow-in-the-dark chemical, produces light when it reacts with an oxidizing agent. The release of a photon of light from a molecule of luminol is a fairly complex, multi-stage
process. In an alkaline solution, luminol exists in equilibrium with its anion, which bears a
charge of -2. The anion can exist in two forms (or tautomers), with the two negative
charges delocalized on either the oxygens (the enol-form) or on the nitrogens (the ketol-
form)

Molecular oxygen (O2) combines with the enol form of the luminol anion, oxidizing it to a cyclic peroxide. The required oxygen is produced in a redox reaction involving hydrogen peroxide (H2O2), potassium hydroxide and potassium hexacyanoferrate(III) (K3[Fe(CN)6], also known as potassium ferricyanide).

2. Forensics
Forensic scientists use the reaction of luminol
to detect blood at crime scenes. A mixture of luminol
in a dilute solution of hydrogen peroxide is sprayed
onto the area where the forensic scientists suspect
that there is blood. One of the drawbacks of using
luminol is that the reaction can be catalysed by other
chemicals that may be present at the crime scene, for
example, copper-containing alloys, some cleaning
fluids such as bleach, and even horseradish. Clever criminals can clean up the blood with
bleach, which destroys the evidence of the blood. Once luminol has been applied to the
area, it may prevent other tests from being performed there. Despite these drawbacks,
luminol is still used by forensic scientists as a tool to solve crime.

3. Glow sticks
When you snap a glow stick and it begins to glow, the
light produced is an example of chemiluminescence. Glow
sticks are a plastic tube containing a mixture including diphenyl
oxalate and a dye (which gives the glow stick its colour). Inside
the plastic tube is a smaller glass tube containing hydrogen
peroxide. When the outer plastic tube is bent, the inner glass
tube snaps, releasing the hydrogen peroxide and starting a
chemical reaction that produces light. The colour of light that a
glow stick produces is determined by the dye used.

Chemiluminescence reactions, such as those in glow sticks, are temperature-


dependent. The reaction speeds up as the temperature rises – snapping your glow stick
in hot water will produce a fantastic glow, but it will not last as long as it would at room
temperature. Conversely, the reaction rate slows down at low temperature; this is why
keeping your glow stick in the freezer for several hours can allow the stick to glow brightly
again when it is removed and warmed up.
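The temperature dependence can be illustrated with an Arrhenius-type estimate; the activation energy used below is an assumed, illustrative value rather than a measured one for the glow-stick reaction.

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)

def rate_ratio(t1_c, t2_c, ea_J_per_mol=50e3):
    """Arrhenius estimate of how much faster the reaction runs at t2 than
    at t1. The 50 kJ/mol activation energy is an assumed, illustrative
    value, not a measured one for the glow-stick chemistry."""
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-ea_J_per_mol / R * (1.0 / T2 - 1.0 / T1))

print(f"{rate_ratio(20, 60):.1f}x faster in hot water")      # roughly an order of magnitude
print(f"{rate_ratio(20, -18):.2f}x as fast in the freezer")  # well below 1, i.e. much slower
```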

Chemistry of Glow Sticks

When diphenyl oxalate reacts with hydrogen


peroxide (H2O2), it is oxidized to give phenol and a cyclic
peroxide. The peroxide reacts with a molecule of dye to
give two molecules of carbon dioxide and in the process,
an electron in the dye molecule is promoted to an excited
state. When the excited (high-energy) dye molecule
returns to its ground state, a photon of light is released.
The reaction is pH-dependent. When the solution is slightly alkaline, the reaction produces
a brighter light.

Phenol is toxic, so if the glow stick leaks, do not let the liquid get onto your hands.

The dyes used in glow sticks are conjugated aromatic compounds (arenes). The
degree of conjugation is reflected in the different color of the light emitted when an electron
drops down from the excited state to the ground state.

GAS-LIQUID CHROMATOGRAPHY (GLC)


Gas – Liquid chromatography (GLC) is one of the most useful techniques in analytical
chemistry. Claesson published one of the first important accounts of gas liquid chromatography
in 1946. Gas-liquid chromatography is a form of partition chromatography in which the stationary phase is a liquid film coated on a solid support and the mobile phase is an inert gas such as nitrogen (N2), called the carrier gas, flowing over the surface of the liquid film in a controlled fashion. The sample
under analysis is vaporized under conditions of high temperature programming. The components
of the vaporized sample are fractionated as a result of partitioning between a mobile gaseous
phase and a liquid stationary phase held in a column.

PRINCIPLE:

When the vapours of the sample mixture move between the stationary phase (liquid) and the mobile phase (gas), the different components of the mixture separate according to their partition coefficients between the gas and the liquid stationary phase, as the sketch below illustrates.
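A simplified sketch of how the partition coefficient translates into different retention times on an idealized column (the dead time, phase ratio and K values are purely illustrative):

```python
def retention_time(dead_time_s, partition_coeff, beta_phase_ratio):
    """Idealized link between a solute's gas-liquid partition coefficient K
    and its retention time: retention factor k = K / beta (beta = mobile /
    stationary phase volume ratio) and t_R = t_M * (1 + k). All numbers
    below are purely illustrative."""
    k = partition_coeff / beta_phase_ratio
    return dead_time_s * (1.0 + k)

# Two solutes on the same hypothetical column (t_M = 60 s, beta = 100)
for name, K in (("weakly retained", 150.0), ("strongly retained", 600.0)):
    print(f"{name}: t_R = {retention_time(60.0, K, 100.0):.0f} s")   # 150 s and 420 s
```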

APPLICATIONS OF GLC:

Gas-liquid chromatography is generally used for both qualitative and quantitative analysis of organic compounds. The technique is much sought after in agricultural science, the agriculture and food industries, environmental and forensic work, biotechnology, the perfume and fragrance (cosmetic) industry, and the chemical industry. It is very useful for the estimation of (i) pesticide and insecticide residues in food and other consumables, (ii) pollutants in water and other foodstuffs, and (iii) banned and controlled drugs in urine, blood, tablets, energy drinks, etc.
CARRYING OUT GAS-LIQUID CHROMATOGRAPHY
All forms of chromatography involve a stationary phase and a mobile phase. In all the
other forms of chromatography you will meet at this level, the mobile phase is a liquid. In gas-
liquid chromatography, the mobile phase is a gas such as helium and the stationary phase is a
high boiling point liquid adsorbed onto a solid.
How fast a particular compound travels through the machine will depend on how much of
its time is spent moving with the gas as opposed to being attached to the liquid in some way.

A flow scheme for gas-liquid chromatography:

Injection of the sample


Very small quantities of the sample that you are trying to analyze are injected into the
machine using a small syringe. The syringe needle passes through a thick rubber disc (known as
a septum) which reseals itself again when the syringe is pulled out.
The injector is contained in an oven whose temperature can be controlled. It is hot enough
so that all the sample boils and is carried into the column as a gas by the helium (or other carrier
gas).
HOW THE COLUMN WORKS

The packing material


There are two main types of column in gas-liquid chromatography. One of these is a long
thin tube packed with the stationary phase; the other is even thinner and has the stationary phase
bonded to its inner surface.
To keep things simple, we are just going to look at the packed column.
The column is typically made of stainless steel and is between 1 and 4 meters long with
an internal diameter of up to 4 mm. It is coiled up so that it will fit into a thermostatically controlled
oven.
The column is packed with finely ground diatomaceous earth, which is a very porous rock.
This is coated with a high boiling liquid - typically a waxy polymer.
The column temperature
The temperature of the column can be varied from about 50°C to 250°C. It is cooler than
the injector oven, so that some components of the mixture may condense at the beginning of the
column.

In some cases, as you will see below, the column starts off at a low temperature and then
is made steadily hotter under computer control as the analysis proceeds.

How separation works on the column


One of three things might happen to a particular molecule in the mixture injected into the
column:
• It may condense on the stationary phase.
• It may dissolve in the liquid on the surface of the stationary phase.
• It may remain in the gas phase.

None of these things is necessarily permanent.


A compound with a boiling point higher than the temperature of the column will obviously
tend to condense at the start of the column. However, some of it will evaporate again in the same
way that water evaporates on a warm day - even though the temperature is well below 100°C.
The chances are that it will then condense again a little further along the column.
Similarly, some molecules may dissolve in the liquid stationary phase. Some compounds
will be more soluble in the liquid than others. The more soluble ones will spend more of their time
absorbed into the stationary phase; the less soluble ones will spend more of their time in the gas.
The process where a substance divides itself between two immiscible solvents because it
is more soluble in one than the other is known as partition. Now, you might reasonably argue that
a gas such as helium can't really be described as a "solvent". But the term partition is still used in
gas-liquid chromatography.
You can say that a substance partitions itself between the liquid stationary phase and the
gas. Any molecule in the substance spends some of its time dissolved in the liquid and some of
its time carried along with the gas.

Operational Procedure:
● The sample to be analyzed is injected into the gas stream just before it enters the column.
● The components of the mixture are then carried through the column in a stream of gas.
● Each compound distributes itself between the phases to different extents and therefore
emerges from the column at a different time.
● Some of the compounds dissolve in the stationary solvents more readily than others; these
travel through the column slower and so emerge last.
● The most volatile compounds usually emerge first.
● A detector on the outlet tube monitors compounds emerging from the column. Signals
from the detector are plotted out by a recorder as a chromatogram
● The chromatogram shows the recorder response against the time which has elapsed since
the sample was injected into the column.
● Each component of the mixture gives rise to a peak on the chromatogram.
APPLICATIONS

● Gas chromatography is a physical separation method in which volatile mixtures are


separated. It can be used in many different fields such as pharmaceuticals, cosmetics and
even environmental toxins. Gas-liquid chromatography is very sensitive and can be used
to detect small quantities of substances; it is often used in forensic tests.
● Since the samples have to be volatile, human breath, blood, saliva and other secretions
containing large amounts of organic volatiles can be easily analyzed using GC. Knowing
the amount of which compound is in a given sample gives a huge advantage in studying
the effects of human health and of the environment as well.
● GC/MS is also another useful method which can determine the components of a given
mixture using the retention times and the abundance of the samples. This method is
applied to many pharmaceutical applications such as identifying the amount of chemicals
in drugs. Moreover, cosmetic manufacturers also use this method to effectively measure
how much of each chemical is used for their products.

