
Module 5

Mechatronics in Robotics-Electrical drives: DC, AC, brushless, servo and stepper motors.

Harmonic drive. Force and tactile sensors. Range finders: ultrasonic and light-based range finders

Robotic vision system - Image acquisition: Vidicon, charge coupled device (CCD) and charge injection
device (CID) cameras. Image processing techniques: histogram processing: sliding, stretching,
equalization and thresholding.

Electrical Drives-Electric motors
An electric motor is a device that converts electrical energy into mechanical energy. The motor
commonly consists of two basic parts: an outside stator and an inside rotor attached to the output shaft.
The motor rotates because of the attraction and repulsion between the magnetic fields generated by the
stator and the rotor.

DC motor
 Brushed DC motor
The stator is either a permanent magnet or an electromagnet with a DC supply, and the rotor is supplied with DC.
 BLDC motor
The stator is supplied with DC and the rotor is a permanent magnet.
AC motors
 Asynchronous motor (induction motor)
The stator is supplied with a 3-phase supply; the rotor magnetic field is due to currents induced in the rotor.
 Synchronous motor
The stator is supplied with 3-phase AC and the rotor is either a permanent magnet or an electromagnet.
Induction motor Working (asynchronous motor)
A three-phase induction motor has a stator and a rotor. When a three-phase supply is given to the
stator, a rotating magnetic field is produced. This rotating flux cuts the stationary rotor conductors
because of the relative speed between the rotating flux and the stationary rotor, so EMFs are induced
in the rotor conductors. The interaction of the stator magnetic field with the resulting rotor magnetic
field generates torque, and the rotor starts rotating. The rotor never runs at the speed of the rotating
magnetic field, which is why the induction motor is also known as an asynchronous motor.

Advantages of Induction Motor


1. Construction is quite simple and reliable.
2. Robust and mechanically strong; it can be operated even in dusty environments.
3. Due to the absence of brushes, there is no sparking in the motor.
4. Low maintenance cost.

Synchronous Motor - Working
A synchronous motor is one in which the rotor normally rotates at the same speed as the revolving
field in the machine. Its working depends on the interaction of the magnetic field of the stator with
the magnetic field of the rotor. The stator contains 3-phase windings and is supplied with 3-phase
power, so the stator winding produces a rotating magnetic field. DC supply is given to the rotor,
which creates north (N) and south (S) poles. The locking of the rotating stator magnetic field with
the rotor magnetic field results in rotor rotation. The speed of the motor therefore depends on, and
is controlled by, the frequency of the applied supply.
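
Since the speed is set by the supply frequency and the number of poles, it can be computed directly from the standard relation Ns = 120 f / P (in rpm). A minimal Python sketch, with an assumed 4-pole machine on a 50 Hz supply purely for illustration:

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed in rpm: Ns = 120 * f / P."""
    return 120.0 * frequency_hz / poles

# Assumed example values: 50 Hz supply, 4-pole rotor.
print(synchronous_speed_rpm(50.0, 4))   # 1500.0 rpm
```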

Synchronous Motor-using permanent magnet


In this type of motor, permanent magnets are mounted on the rotor and the rotor does not have
any winding. The permanent magnets embedded in the rotor create a constant magnetic field. The
stator carries windings connected to an AC supply to produce a rotating magnetic field. At
synchronous speed the rotor poles lock onto the rotating magnetic field.

Working Principle of DC Motor


The basic working principle of the DC motor is that whenever a current-carrying conductor is placed
in a magnetic field, it experiences a mechanical force. When the armature winding is connected to a
DC supply, an electric current is set up in the winding. Permanent magnets or a field winding
(electromagnet) provide the magnetic field. The current-carrying armature conductors then
experience a force due to the magnetic field, according to the principle stated above.
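
The force in question follows the familiar relation F = B I L for a conductor of length L carrying current I at right angles to a field of flux density B. A small illustrative Python sketch with assumed values:

```python
def conductor_force_n(b_tesla: float, current_a: float, length_m: float) -> float:
    """Force on a straight current-carrying conductor perpendicular to the field: F = B * I * L."""
    return b_tesla * current_a * length_m

# Assumed example values: 0.5 T field, 10 A armature current, 0.2 m active conductor length.
print(conductor_force_n(0.5, 10.0, 0.2))   # 1.0 N
```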
Brushless DC Motor (BLDC)
When the stator coils get a supply from the source, they become electromagnets and produce a
field in the air gap. Although the supply source is DC, electronic switching generates an AC voltage
waveform in the windings. Due to the force of interaction between the electromagnet stator and the
permanent-magnet rotor, the rotor continues to rotate. BLDC motors are widely used in industrial
robots and CNC machine tools. A BLDC motor is a type of synchronous motor in the sense that the
magnetic fields generated by the stator and the rotor revolve at the same frequency.

Advantages of Brushless DC motor


1. Less overall maintenance due to absence of brushes
2. Higher speed range and lower electric noise generation.
3. High efficiency and high output power to size ratio due to use of permanent magnet rotor
4. Smaller in size and lighter in weight than both brushed DC motors and induction motors.
Switched reluctance motors (SRM)
Reluctance is the opposition offered to magnetic flux. The switched reluctance motor (SRM)
does not use permanent magnets. The rotor has no magnets or coils attached; it is a solid piece of
soft magnetic material, often laminated steel or soft iron. Each stator winding is connected in series
with that of the opposite pole to increase the MMF of the circuit. The minimum-reluctance portion
of the rotor tries to align itself with the stator magnetic field, and hence reluctance torque is
developed in the rotor.

Stepper motor
A stepper motor is a pulse-driven motor that changes the angular position of the rotor in steps.
When the stepper driver receives a pulse signal, it causes the stepper motor to rotate at a
predetermined fixed angle, known as the “step angle,” in the specified direction. The motor rotates
step-by-step at the fixed step angle. Accurate positioning can be achieved by controlling the number
of pulses, and regulating the speed and acceleration of motor rotation can be accomplished by
controlling the pulse frequency.
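
As a rough illustration of pulse-count positioning and pulse-rate speed control, the sketch below assumes a hypothetical motor with a 1.8 degree step angle (200 steps per revolution); the step angle of a real motor comes from its datasheet.

```python
STEP_ANGLE_DEG = 1.8   # assumed step angle (200 steps per revolution)

def pulses_for_angle(target_deg: float) -> int:
    """Number of pulses needed to rotate the shaft by target_deg."""
    return round(target_deg / STEP_ANGLE_DEG)

def pulse_rate_for_speed(rpm: float) -> float:
    """Pulse frequency (steps per second) needed to run the shaft at the given rpm."""
    steps_per_rev = 360.0 / STEP_ANGLE_DEG
    return rpm * steps_per_rev / 60.0

print(pulses_for_angle(90.0))       # 50 pulses for a quarter turn
print(pulse_rate_for_speed(300.0))  # 1000.0 pulses per second for 300 rpm
```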
Types of stepper motors
Permanent magnet synchronous stepper motor
When a stator coil is energized, it develops electromagnetic poles. The magnetic rotor aligns with the
magnetic field of the stator. The other stator coils are then energized in sequence so that the
rotor moves and aligns itself to the new magnetic field. Energizing the stator coils in a fixed
sequence in this way rotates the stepper motor through fixed angles.

Variable reluctance stepper motor


The variable reluctance stepper has a toothed soft-iron rotor that is not permanently magnetized.
When a stator coil is energized, the rotor moves so that there is a minimum gap between the stator
and its teeth. The rotor teeth are designed so that when they are aligned with one stator pole group
they are misaligned with the next. When the next stator coil is energized, the rotor moves to align its
teeth with that stator. Energizing the stator coils in a fixed sequence in this way produces the
step-by-step rotation of the motor.

Servo Motors
The servo motor is usually a simple motor controlled for specific angular rotation with the help of an
additional servomechanism (a typical closed-loop feedback control system). A servo system primarily
consists of three basic components: a controlled device, an output sensor and a feedback system. The
main reason for using a servo is that it provides angular precision, i.e. it rotates only as much as
we want and then stops and waits for the next signal before taking further action. A servo motor is a
DC, AC or brushless DC motor combined with a position-sensing device. Servo motors are used in
closed-loop servo systems; in many servo systems, both velocity and position are monitored.

Advantages of servo motors


1. Provides high intermittent torque, high torque to inertia ratio, and high speeds
2. Work well for velocity control
3. Available in all sizes
4. Smoother rotation at lower speeds
Disadvantages of servo motors
1. More expensive than stepper motors
2. Require tuning of control loop parameters
3. Not suitable for hazardous environments or in vacuum

Stepper motors are generally used for small machines where speeds or torques are low. Most stepper
motors run open-loop, which eliminates the added cost and complexity of adding an encoder or a
resolver. They are simple to operate, inexpensive compared with servo motors and offer good
positioning accuracy.
Stepper motors typically have a lower efficiency than servo motors. Loads do not accelerate rapidly
because of the low torque-to-inertia ratio. They can be noisy and overheat when pushed to high
performance, and they have a relatively low power output for their weight and size.
Servo motors have high accuracy and resolution owing to the encoder fitted to the motor. The motor
is powered by the servo amplifier, which also counts the steps made. Their high torque-to-inertia ratio
enables rapid load acceleration. With lighter loads, efficiency may reach up to 90 percent. Servo
motors are generally costlier than stepper motors and more complicated to operate.

Harmonic drive
The harmonic drive, also called Harmonic Drive gear, is a mechanical speed-changing device, invented
in the 1950s, that reduces the output speed of a rotary machine in order to increase torque. It operates
on a principle different from that of conventional speed changers.
The advantages of a harmonic drive include high precision, zero backlash, high torque capacity, and
compact size. These characteristics make harmonic drives suitable for a wide range of applications,
including robotics, aerospace, medical equipment, and industrial machinery.
The basic design of a harmonic drive consists of three main components: a rigid circular spline, a
flexible thin-walled flex-spline, and an elliptical wave generator. The wave generator is connected to
the input shaft and generates a wave-like motion as it rotates. The flex-spline is placed around the
wave generator and meshes with the circular spline, creating a high-ratio reduction gear system.
The circular spline has internal teeth that mesh with external teeth on the flex-spline. The flex-spline
has fewer teeth and consequently a smaller effective diameter than the circular spline. The wave
generator is elliptical in shape and acts as a link with two rollers that rotates within the flex-spline,
causing it to mesh with the circular spline progressively at diametrically opposite points. If the wave
generator (the input) rotates clockwise while the circular spline is fixed, the flex-spline (the output)
rotates, or rolls, inside the circular spline at a much slower rate in the counterclockwise direction.

The ratio of the input speed to the output speed depends on the difference in the number of teeth
between the circular spline and the flex-spline. Speed ratios as high as 320 to 1 can be produced in a
single reduction.
One of the key features of a harmonic drive is its ability to achieve high gear ratios in a relatively
compact space. For example, a typical harmonic drive may have a gear ratio of 100:1, which means
that the input shaft must rotate 100 times for every one rotation of the output shaft.
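
A minimal sketch of this ratio calculation, assuming illustrative tooth counts (a 202-tooth circular spline and a 200-tooth flex-spline, which gives the 100:1 example above); with the circular spline fixed, the reduction is Zf / (Zc - Zf) and the output turns opposite to the input.

```python
def harmonic_drive_reduction(circular_teeth: int, flex_teeth: int) -> float:
    """Reduction ratio with the circular spline fixed: |ratio| = Zf / (Zc - Zf)."""
    return flex_teeth / (circular_teeth - flex_teeth)

# Assumed tooth counts for illustration: 202-tooth circular spline, 200-tooth flex-spline.
print(harmonic_drive_reduction(202, 200))   # 100.0 -> a 100:1 reduction
```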
Advantages
 No backlash,
 High compactness
 Light weight and small
 High gear ratios
 Versatile and efficient
 Reconfigurable ratios within a standard housing,
 Good resolution and excellent repeatability
Tactile sensors
Tactile sensors are devices used to detect and measure physical contact between two surfaces. A
tactile sensor, also called a touch sensor, works primarily when an object or individual comes into
physical contact with the sensor. Touch sensors are sensitive to contact, pressure, or force, and
enable a device to detect contact or close proximity. They are used in everyday applications such as
the touch screens of mobile phones, biometric security systems, pressure measurement, force
measurement and robotics.
 Capacitance – a change in capacitance is proportional to the applied pressure.
 Piezoresistivity – the resistance varies with the applied force because of deformation of the
sensing element.
 Piezoelectricity – a voltage is generated when the crystal lattice is deformed, with a magnitude
related to the applied force.

The functioning of a tactile sensor generally depends on the conversion of mechanical energy
(pressure, force, or deformation) into an electrical signal that can be interpreted by a computer or
controller.
The sensor detects the change in capacitance, resistance or light intensity, and this change is used to
form a virtual image. The virtual image contains information such as how much pressure the foreign
body exerts at each point of contact with the sensor, and the shape and size of the foreign body. A
fingerprint sensor is a familiar example and can be regarded as a type of tactile sensor.
Force sensors
Force sensors are devices that measure the amount of force being applied to them. They are used in
a variety of applications, such as in manufacturing, robotics, medical devices, and consumer
electronics.
There are several types of force sensors, including strain gauge sensors, piezoelectric sensors,
capacitive sensors, and optical sensors.
Force sensors have many practical applications, such as in weight measurement, force
measurement, and pressure sensing. They are also used in the automotive industry for measuring
tire pressure, and in the medical field for measuring the force of blood flow or the pressure inside
the skull.
Strain gauge Load cell sensor working
Strain gauge sensors (load cells) are the most common type of force sensor. They work by measuring
the deformation of a material in response to an applied force. The strain gauge is typically made of a
thin metal foil; as the material is deformed, the resistance of the gauge changes, which can be
measured and used to calculate the applied force.

A load cell converts a force such as tension, compression, pressure, or torque into an electrical signal
that can be measured and standardized. It is a force transducer: as the force applied to the load cell
increases, the electrical signal changes proportionally.
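
Because the output changes proportionally with load, converting a reading to force is a simple linear scaling against the cell's calibration data. A minimal sketch, assuming a hypothetical cell rated at 1000 N full scale with a 2.0 mV/V rated output:

```python
RATED_CAPACITY_N = 1000.0      # assumed full-scale load of the cell
RATED_OUTPUT_MV_PER_V = 2.0    # assumed rated output at full scale

def load_cell_force_n(reading_mv_per_v: float) -> float:
    """Linear calibration: force = (reading / rated output) * rated capacity."""
    return (reading_mv_per_v / RATED_OUTPUT_MV_PER_V) * RATED_CAPACITY_N

print(load_cell_force_n(0.5))   # 250.0 N for a 0.5 mV/V reading
```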

Piezoelectric sensors working


Piezoelectric sensors work by converting mechanical stress into an electrical charge. They use the
piezoelectric effect to measure the electrical potential caused by applying mechanical force to a
piezoelectric material. They are based on the principle of electromechanical energy conversion and
primarily measure force by converting it into an electrical charge.
Piezoelectricity is the phenomenon by which some materials produce an electrical voltage when
subjected to mechanical stress, and vice versa.
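
For a piezoelectric force sensor loaded along its polar axis, the generated charge is approximately Q = d33 * F, where d33 is the piezoelectric charge coefficient of the material. A small sketch with an assumed quartz-like coefficient of about 2.3 pC/N:

```python
D33_PC_PER_N = 2.3   # assumed charge coefficient, roughly that of quartz

def piezo_charge_pc(force_n: float) -> float:
    """Charge generated by an axially loaded piezoelectric element: Q = d33 * F."""
    return D33_PC_PER_N * force_n

print(piezo_charge_pc(100.0))   # ~230 pC for a 100 N load
```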

Capacitive sensors use changes in capacitance to measure force, and they are often used in
applications where high accuracy is required.
Range finders in robotics
A rangefinder is a device used to determine the distance between the user and an object or target.
Rangefinders are commonly used in various fields, including hunting, golfing, surveying,
photography, and military applications. There are different types of rangefinders, including optical
rangefinders, laser rangefinders, ultrasonic rangefinders, and radar rangefinders.
Types of range finders are:
1. Ultrasonic
2. Light based range finders
• Ultrasonic rangefinders emit sound waves that bounce off the target and return to the
rangefinder, where the time it takes for the sound wave to return is used to calculate the
distance.
• Laser rangefinders use a laser beam to measure the distance to the target. The laser beam is
emitted from the rangefinder, and it reflects off the target and returns to the rangefinder, where
it is measured to determine the distance.
Overall, rangefinders are useful tools that can provide accurate distance measurements in a variety
of applications.
Ultrasonic range finders
Ultrasonic range finders are commonly used in robotics for detecting the distance between objects
and the robot. These sensors work by emitting high-frequency sound waves and measuring the time
it takes for the waves to bounce back from an object and return to the sensor. By calculating the
time delay, the distance to the object can be determined. The operation of the sensor is best
understood by analyzing the waveforms used for both transmission and detection of acoustic energy
signals.
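
The distance calculation itself is a simple time-of-flight relation: the sound travels to the object and back, so d = v * t / 2. A minimal sketch, assuming sound travels at about 343 m/s (air at roughly 20 °C):

```python
SPEED_OF_SOUND_M_S = 343.0   # assumed speed of sound in air at ~20 degrees C

def ultrasonic_distance_m(echo_time_s: float) -> float:
    """Round-trip time of flight to distance: d = v * t / 2."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

print(ultrasonic_distance_m(0.0058))   # ~0.99 m for a 5.8 ms echo delay
```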
In robotics, ultrasonic range finders can be used for a variety of applications, such as obstacle
detection and avoidance, localization, and mapping. For example, a robot can use ultrasonic range
finders to detect obstacles in its path and adjust its trajectory to avoid them. Ultrasonic range finders
can also be used to determine the position of objects relative to the robot, which can be used for
mapping and navigation.

One limitation of ultrasonic range finders is that they can only detect objects that are within their
range and within their field of view. Additionally, they may be affected by environmental factors
such as ambient noise and acoustic reflection, which can impact their accuracy. However, these
limitations can be mitigated through proper sensor placement and calibration, as well as through the
use of multiple sensors to provide more comprehensive coverage.
Light-based range finders
Light-based range finders are devices that use light waves to measure the distance between the
device and a target object. Laser range finders use a laser beam to illuminate the target object and
measure the time it takes for the laser light to bounce back to the device. The device then calculates
the distance based on the time-of-flight of the laser beam. Light-based range finders, also known as
LIDAR (Light Detection and Ranging), are commonly used in robotics for distance sensing and object
detection.
One of the key advantages of LIDAR systems in robotics is their ability to provide 3D maps of the
environment, which can be used for localization and navigation. For example, autonomous vehicles
often use LIDAR sensors to build detailed maps of their surroundings, which they can then use to
plan safe routes.
There are several types of LIDAR sensors used in robotics, including time-of-flight (ToF) sensors and
phase-shift sensors. ToF sensors work by measuring the time it takes for a laser beam to travel to an
object and back, while phase-shift sensors use the phase difference between the emitted and
reflected laser beams to determine the distance to an object.
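
The two measurement principles reduce to simple formulas: for time-of-flight, d = c * t / 2; for phase shift, d = c * phi / (4 * pi * f) for a modulation frequency f, unambiguous up to c / (2 f). A small illustrative sketch with assumed example values:

```python
import math

C_M_S = 299_792_458.0   # speed of light in vacuum

def tof_distance_m(round_trip_time_s: float) -> float:
    """Time-of-flight ranging: d = c * t / 2."""
    return C_M_S * round_trip_time_s / 2.0

def phase_shift_distance_m(phase_rad: float, mod_freq_hz: float) -> float:
    """Phase-shift ranging: d = c * phi / (4 * pi * f), unambiguous up to c / (2 f)."""
    return C_M_S * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Assumed example values for illustration.
print(tof_distance_m(66.7e-9))                    # ~10 m for a 66.7 ns round trip
print(phase_shift_distance_m(math.pi / 2, 10e6))  # ~3.75 m at a 10 MHz modulation frequency
```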
Overall, LIDAR sensors are an important technology in robotics, allowing accurate distance sensing
and 3D mapping of the environment, which are critical for autonomous navigation and obstacle
avoidance. They are generally very accurate and can measure distances over a wide range, from a
few centimetres to several kilometres, depending on the type of range finder used.

 Robotic vision system - Image acquisition: Vidicon, charge coupled device (CCD) and
charge injection device (CID) cameras.
 Image processing techniques: histogram processing: sliding, stretching, equalization
and thresholding.
Machine vision system
Robot vision (machine vision) may be defined as the process of extracting, characterizing and
interpreting information from images of a three-dimensional world. In the human vision system, the
eyes sense the images and the brain analyses the information and takes action on the basis of the
analysis. Similarly, in a robotic vision system, visual sensing of information is done by forming an
image of the object, and the information is analysed to make a useful decision about its content. A
robotic vision (machine vision) system thus needs both visual sensing and interpretive capabilities:
the ability for machines to "see" and interpret the world around them.
Machine vision systems are used for functions like gauging of dimensions, identification of shapes,
measurement of distances, determining orientation of parts, quantifying motion, detecting surface
shading etc.
In a machine vision system, the cameras are responsible for taking the light information from a
scene and converting it into digital information i.e. pixels using CMOS or CCD sensors. Image sensing
device may consist of vidicon camera or a charge coupled device (CCD) camera or charge injection
device (CID) camera. Image sensing device receives light through a lens system and converts this
light into electrical signals.
Machine vision is the technology that enables robots and other machines, such as autonomous
vehicles, to see and recognize objects in their surrounding environment. By pairing optical sensors
with artificial intelligence and machine learning tools that can analyse and process image data,
robots and autonomous vehicles equipped with machine vision systems are able to perform more
complex tasks, such as navigating downtown traffic.
Steps in Machine Vision Process
The machine vision system involves following four basic steps:
o Image formation
o Processing of image in a form suitable for analysis by computer
o Defining and analysing the characteristics of image
o Interpretation of image and decision making.
Image Acquisition (Image Formation)
Image acquisition is the creation of digital images, typically from a physical scene. The first
link in the vision chain is the camera. It plays the role of robotic eye or the sensor. The visual
information is converted in to electric signals in the camera.

An image sensor such as a vidicon, CCD or CID camera is used to generate the electronic signal
representing the image. The image sensor collects light from the scene through a lens and, using a
photosensitive target, converts it into an electronic signal.


Vidicon camera
A camera is an optical instrument that captures a visual image; it is analogous to the human eye.
Cameras are based on the fact that each picture may be assumed to be composed of small elements
of different light intensity.
A vidicon is a type of camera tube whose working is based on photoconductivity, i.e. a decrease in
resistance with the amount of incident light. In a vidicon, a charge image pattern is formed on a
photoconductive surface (target), which is then scanned by a beam of low-velocity electrons.
Basically, it converts optical energy into electrical energy: the variation in the resistance of the
material corresponds to the incident optical image.

Working
Initially, light from a scene is allowed to fall on the faceplate through a lens system. When the light
reaches the photoconductive material (target), free electrons are generated by absorption of the
light energy. Due to the externally supplied positive potential at the signal plate, the electrons
migrate towards it, causing a deficiency of electrons in the photoconductive region and thereby
generating positive charges over the surface of the material. Each point on the photoconductive
layer thus acquires a positive charge, and a charge image that corresponds to the incident optical
image is produced.
As the electron beam from the gun scans the charge image, a drop in voltage takes place and a
varying current is produced. This current forms the video-signal output of the camera. The electron
beam is scanned horizontally and vertically so that the entire image is read many times per second,
producing an electrical signal that can be conveyed to a display device to reproduce the image.

Advantages:
 Light weight & Small size
 Longer life
 Low power consumption
Charge coupled device (CCD) camera
A charge-coupled device (CCD) is a type of image sensor widely used in digital cameras and other
imaging devices. It is designed to convert incoming light into an electrical signal, allowing the
capture and storage of digital images.

When light enters the CCD, it passes through a lens system and falls onto an array of photosensitive
sites called pixels. Each pixel consists of a photosensitive element, typically a silicon photodiode.
When a photon strikes the photosensitive element, it generates an electron-hole pair; the number of
electron-hole pairs generated is proportional to the intensity of the light.
By applying voltage pulses to the electrodes (MOS capacitors) above the pixels, the accumulated
charges are shifted row by row or column by column in a controlled manner. This shifting process is
known as charge transfer. The shifted charges are transferred sequentially to a serial register, where
they are read out as charge packets. The signal is then converted from analog to digital form using
an analog-to-digital converter (ADC).
Analogy of data transfer in a CCD
(Think of collecting rainwater in an array of buckets and transferring each bucket to a collection
point on a conveyor system; likewise, each pixel's charge is transferred out in turn and finally
converted into a sequence of electrical signals.)
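
A toy sketch of this bucket-brigade idea (all values hypothetical): each row of "charge" is shifted into a serial register and clocked out pixel by pixel, which is essentially the sequence of values the ADC then digitizes.

```python
# Toy "charge image": each entry is the charge collected at one pixel (arbitrary units).
charge_image = [[3, 7, 1],
                [4, 0, 9],
                [2, 5, 6]]

readout = []
frame = [row[:] for row in charge_image]
while frame:
    serial_register = frame.pop()     # bottom row shifts into the serial register
    readout.extend(serial_register)   # the register is clocked out one pixel at a time

print(readout)   # [2, 5, 6, 4, 0, 9, 3, 7, 1] -> sequence of values sent to the ADC
```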

Charge injection device (CID) cameras
A Charge Injection Device (CID) is an electronic device used for sensing and capturing images. It is
similar to a Charge-Coupled Device (CCD). The basic difference between the CID and the CCD is in
the way the electric charge is transferred before it is recorded.
CIDs are made up of a light-sensitive surface subdivided into several thousand pixels, each
addressable by column and row electrodes, allowing collection and readout of signals. Incident light
on the surface of a pixel is converted to a proportional amount of signal charge that is collected and
stored beneath the pixel surface. After readout, the charge is injected into the substrate, which
clears the pixel.

When a photon of sufficient energy strikes the silicon surface, a hole and an electron are created.
The electron migrates to the positively charged substrate and is removed from the system. Negative
voltages are applied to both the sense and storage electrodes, with the storage electrode held at the
more negative potential (e.g. sense gate at -5 V, storage gate at -10 V), thereby creating a deeper
potential well for holes under the storage electrode.
Steps in photon collection and readout in a CID
1. Holes generated by the incident light accumulate below the storage electrode (held at the
larger potential).
2. Readout before transfer: the sense electrode is disconnected from its reference potential
(left floating, not connected to any charge) and its output is read.
3. Readout after transfer: the storage electrode potential is collapsed so the holes transfer
under the sense electrode, and the output is read again. The difference between the two
readings is proportional to the amount of photon-generated charge at the pixel site.
4. To clear the accumulated charge, the potentials on both electrodes are collapsed and the
charge is injected into the substrate, where recombination occurs.
Image processing
Image processing is a method of converting an image into digital form and performing operations
on it in order to obtain an enhanced image or to extract useful information from it. Computer
algorithms are used to perform image processing on digital images.
An image is an array, or matrix, of square pixels (picture elements) arranged in columns and rows.
Images are obtained using image acquisition devices such as cameras and are stored in computer
memory as digitized images. Acquired images contain errors introduced during acquisition;
removing these errors to make the image clearer and brighter is part of image processing. A camera
may typically form an image 30 times per second, and at each time interval the entire image has to
be captured and frozen for processing by an image processor.
Digital image processing has a broad range of applications such as image restoration, medical
imaging, remote sensing and image segmentation, and each application requires a different
technique.

Applications of digital image processing


Digital image processing has a broad spectrum of applications, such as
1. Remote Sensing
2. Medical Imaging
3. Non-destructive Evaluation
4. Forensic Studies
5. Textiles
6. Material Science.
7. Military
8. Film industry
9. Document processing
10. Graphic arts
11. Printing Industry

Binary and Gray scale for image processing
Binary Images
It is the simplest type of image, taking only two values: black and white, or 0 and 1. A binary image
is a 1-bit image, since only 1 binary digit is needed to represent each pixel. Binary images are mostly
used to represent general shape or outline, and are generated using a threshold operation: pixels
above the threshold value are turned white ('1') and pixels below it are turned black ('0'). Binary
images are images whose pixels have only two possible intensity values; they are normally displayed
as black and white, and numerically the two values are often 0 for black and either 1 or 255 for
white.

Binary Representation of Images


The binary image consists of only black and white and thus can also be called a black-and-white
image. There are no gray levels in it; only the two colors, black and white, are found. Thresholding
turns grayscale images into binary ones.
Grayscale image
A grayscale image has gray values ranging from 0 to 255, where 0 is black and 255 is white. A gray
scale system assigns up to 256 different values to each pixel depending on intensity. Gray scale
systems are used in applications requiring a higher degree of image refinement; for simple
inspection tasks, a binary system may serve the purpose. In an 8-bit gray scale image, the value of
each pixel lies between 0 and 255, and the bit depth determines the number of gray levels available.
Gray scale images are also referred to as monochrome, or one-colour, images. They contain
brightness information only, with no colour information. The value of a pixel at any point
corresponds to the intensity of the light photons striking that point, and the number of different
gray levels (the depth of the image) depends on the number of bits per pixel.
Histogram processing
In digital image processing, the histogram is used for the graphical representation of a digital image.
An image histogram depicts how often each image intensity value occurs in the image; it represents
the relative frequency of occurrence of the various gray levels (colour tones or intensities) in the
image.
The horizontal axis of the graph represents the tonal variations (relative brightness or colour of
objects in the image), while the vertical axis represents the total number of pixels in each particular
tone. The x-axis has the gray levels/intensity values and the y-axis has the number of pixels at each
gray level. The left side of the horizontal axis represents the dark areas, the middle represents
mid-tone values and the right-hand side represents light areas; the vertical axis represents the size
of the area (total number of pixels) captured in each of these zones. The histogram of a digital
image with gray levels in the range [0, L-1] is the discrete function h(r_k) = n_k, where r_k is the
k-th gray level and n_k is the number of pixels in the image having gray level r_k.
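
A minimal sketch of computing h(r_k) = n_k for an 8-bit image (the tiny 3x3 image is made up purely for illustration):

```python
import numpy as np

def gray_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """h(r_k) = n_k: the count of pixels at each gray level r_k in [0, L-1]."""
    hist = np.zeros(levels, dtype=np.int64)
    for value in image.ravel():
        hist[value] += 1
    return hist

# Hypothetical 3x3 image with 8-bit gray levels.
img = np.array([[0, 50, 50],
                [200, 200, 200],
                [255, 50, 0]], dtype=np.uint8)
print(gray_histogram(img)[[0, 50, 200, 255]])   # [2 3 3 1]
```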

Applications of Histograms for image processing


In digital image processing, histograms are used for simple calculations in software.
1. It is used to analyze an image, enhance the image, change the image.
2. Properties of an image can be predicted by the detailed study of the histogram.
3. The brightness of the image can be adjusted by having the details of its histogram.
4. The contrast of the image can be adjusted according to the need by having details of
the x-axis of a histogram.
5. It is used for image equalization.
6. Histograms are used in thresholding as it improves the appearance of the image.
7. If we have the input and output histograms of an image, we can determine which type of
transformation was applied in the algorithm.
Histogram Processing Techniques
Histogram Sliding
In histogram sliding, we simply shift the complete histogram to the right or to the left. Shifting the
histogram towards the right or left produces a clear change in the image: it becomes brighter or
darker. Brightness can be defined as the intensity of light emitted by a particular light source, and
we can increase brightness using histogram sliding.
To brighten an image, we slide its histogram towards the right, i.e. towards the whiter portion; to
darken it, we slide the histogram towards the left. Histogram sliding is performed by adding a
constant to, or subtracting a constant from, each pixel value of the image.

As can be seen from the new histogram, all the pixel values have been shifted to the right, and the
brightening effect is visible in the corresponding image.
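
A minimal sketch of histogram sliding on an 8-bit image: a constant offset is added to every pixel and the result is clipped to the valid 0-255 range (example values are made up).

```python
import numpy as np

def slide_histogram(image: np.ndarray, offset: int) -> np.ndarray:
    """Shift every pixel by a constant; positive offsets brighten, negative offsets darken."""
    shifted = image.astype(np.int16) + offset
    return np.clip(shifted, 0, 255).astype(np.uint8)

img = np.array([[10, 60], [120, 250]], dtype=np.uint8)
print(slide_histogram(img, 50))    # [[ 60 110] [170 255]] -- 250 clips at 255
print(slide_histogram(img, -50))   # [[  0  10] [ 70 200]] -- 10 clips at 0
```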
Histogram Stretching
In histogram stretching, the contrast of an image is increased. To increase the contrast of an image,
its histogram is stretched so that it covers the full dynamic range.
The contrast of an image is defined by the difference between the maximum and minimum pixel
intensity values. Low contrast may be due to poor lighting, a limited sensor range or a faulty camera.
The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image
being processed.

Contrast stretching (often called normalization) is a simple image enhancement technique that
attempts to improve the contrast in an image by 'stretching' the range of intensity values it contains
to span a desired range of values, e.g. the full range of pixel values that the image type concerned
allows. Find the minimum and maximum pixel values in the image, and then map each pixel from
source to destination as ((pixel - min) / (max - min)) * 255. It is a simple gray scale transformation in
which the lowest input of interest becomes 0 and the highest input of interest becomes 255.

Contrast stretching is also known as normalization. The quality of the image is enhanced by
stretching its intensity values: the process expands the range of intensity levels in an image so that it
spans the full intensity range of the display device.
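
A minimal sketch of the ((pixel - min) / (max - min)) * 255 mapping described above, on a made-up low-contrast 8-bit image:

```python
import numpy as np

def contrast_stretch(image: np.ndarray) -> np.ndarray:
    """Map the input range [min, max] linearly onto the full range [0, 255]."""
    lo, hi = int(image.min()), int(image.max())
    if hi == lo:                      # flat image: nothing to stretch
        return image.copy()
    out = (image.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.rint(out).astype(np.uint8)

img = np.array([[50, 100], [150, 250]], dtype=np.uint8)   # low-contrast example
print(contrast_stretch(img))   # [[  0  64] [128 255]]
```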
Histogram Equalization
Histogram equalization is an image processing technique that adjusts the contrast of an image. It
alters the input histogram to produce an output whose histogram is approximately uniform, with
the pixel intensities distributed over the entire dynamic range. It stretches the histogram to fill the
dynamic range while trying to keep the histogram uniform, allowing areas of the image with lower
contrast to gain higher contrast. As a result, the output image has a high-contrast appearance and
exhibits a large variety of gray tones. Equalization attempts to produce a histogram with equal
numbers of pixels at each intensity level.

Histogram equalization increases the dynamic range of pixel values and tends to equalize the count
of pixels at each level, producing an approximately flat histogram and a high-contrast image. In
histogram stretching the shape of the histogram remains the same, whereas in histogram
equalization the shape of the histogram changes, and equalization generates only one possible
output image. It can enhance contrast for brightness values close to the histogram maxima and
decrease contrast near the minima.
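
A minimal sketch of histogram equalization for an 8-bit image: each gray level is remapped through the normalized cumulative histogram (CDF), s_k = (L - 1) * CDF(r_k). The example image is made up.

```python
import numpy as np

def equalize_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Remap gray levels through the normalized cumulative histogram to flatten it."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum() / image.size                       # cumulative distribution, 0..1
    lookup = np.rint(cdf * (levels - 1)).astype(np.uint8)  # s_k = (L - 1) * CDF(r_k)
    return lookup[image]

img = np.array([[52, 55, 61],
                [59, 79, 61],
                [85, 170, 255]], dtype=np.uint8)
print(equalize_histogram(img))   # gray levels spread out over the full 0-255 range
```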

Thresholding
Thresholding is the simplest and most popular method of image segmentation. Thresholding creates
binary images from gray-level ones by turning all pixels below some threshold to zero and all pixels
above that threshold to one. The key parameter in the thresholding process is the choice of the
threshold value. The major problem with thresholding is that it considers only the intensity, not any
relationships between the pixels, so there is no guarantee that the pixels identified by the
thresholding process are contiguous.
Several different methods for choosing a threshold exist: users can manually choose a threshold
value, or a thresholding algorithm can compute a value automatically, which is known as automatic
thresholding. The threshold is often chosen using the image histogram, since regions of uniform
intensity give rise to strong peaks in the histogram.
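
A minimal sketch of global thresholding on an 8-bit image (threshold value and pixel values are made up):

```python
import numpy as np

def threshold(image: np.ndarray, t: int) -> np.ndarray:
    """Pixels at or above the threshold become 1 (white); the rest become 0 (black)."""
    return (image >= t).astype(np.uint8)

img = np.array([[12, 200], [90, 250]], dtype=np.uint8)
print(threshold(img, 128))   # [[0 1] [0 1]]
```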

Segmentation with histogram


Segmentation refers to the process of partitioning a digital image into multiple segments. One of
the most widely applied techniques for image segmentation is histogram-based thresholding, which
gives a segmentation into two classes depending on the intensity of the pixels of a grayscale image.
The goal of segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyse. It is widely used in applications such as
computer vision, digital pattern recognition and robot vision. We usually try to segment regions by
identifying common properties, and the simplest property that pixels in a region can share is
intensity. A natural way to segment such regions is therefore thresholding: the separation of light
and dark regions.
Following are the primary types of image segmentation techniques:
 Thresholding Segmentation- divides the pixels in an image by comparing the pixel’s
intensity with a specified value (threshold)
 Edge-Based Segmentation- identifying the edges of different objects in an image
 Region-Based Segmentation- divide the image into sections with similar features.
Edge-Based Segmentation
Edge-based segmentation is one of the most popular approaches to segmentation in image
processing. It focuses on identifying the edges of different objects in an image. This is a crucial step,
as it helps find the features of the various objects present in the image, since edges contain a lot of
information. In edge-based segmentation, the boundaries or edges of objects are significantly
different from each other and from the background of the image; this fact is used to perform edge
detection on images with different intensity levels and discontinuities at the edges.
Edge detection is widely popular because it helps remove unwanted and unnecessary information
from the image, reducing the image's size considerably and making it easier to analyse.

******************************************

