Module 5
Mechatronics in Robotics-Electrical drives: DC, AC, brushless, servo and stepper motors.
Harmonic drive. Force and tactile sensors. Range finders: ultrasonic and light-based range finders
Robotic vision system - Image acquisition: Vidicon, charge coupled device (CCD) and charge injection
device (CID) cameras. Image processing techniques: histogram processing: sliding, stretching,
equalization and thresholding.
Electrical Drives-Electric motors
An electric motor is a device that converts electrical energy into mechanical energy. The motor commonly consists of two basic parts: an outside stator and an inside rotor attached to the output shaft. The motor rotates because of the attraction and repulsion between the magnetic fields generated by the stator and the rotor.
DC motor
Brushed DC motor
Stator is either a permanent magnet or an electromagnet with a DC supply; the rotor winding is supplied with DC through brushes.
BLDC motor
Stator windings are supplied with electronically switched DC; the rotor is a permanent magnet.
AC motors
Asynchronous motor (induction motor)
Stator is supplied with three-phase AC; the rotor magnetic field is due to currents induced in the rotor.
Synchronous motor
Stator is supplied with three-phase AC; the rotor is either a permanent magnet or an electromagnet.
Induction motor Working (asynchronous motor)
A three-phase induction motor has a stator and a rotor. When the three-phase supply is given to the stator, a rotating magnetic field is produced which cuts the stationary rotor; due to the relative speed between the rotating flux and the stationary rotor, EMFs are induced in the rotor conductors. Interaction of the stator magnetic field and the rotor magnetic field generates torque, and the rotor starts rotating. The rotor never runs at the speed of the rotating magnetic field, which is why the induction motor is also known as the asynchronous motor.
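The speed relationship described above can be put into numbers with the standard synchronous-speed and slip formulas (the 4-pole, 50 Hz example below is illustrative, not taken from the text):

```python
# Synchronous speed Ns = 120 * f / P (rpm) and slip s = (Ns - Nr) / Ns.

def synchronous_speed(f_hz, poles):
    """Speed of the rotating stator field, in rpm."""
    return 120.0 * f_hz / poles

def slip(ns_rpm, nr_rpm):
    """Fractional slip between field speed and rotor speed."""
    return (ns_rpm - nr_rpm) / ns_rpm

# A 4-pole motor on a 50 Hz supply:
ns = synchronous_speed(50, 4)   # 1500 rpm
s = slip(ns, 1440)              # rotor at 1440 rpm -> slip = 0.04
print(ns, s)
```

Because the rotor EMF exists only while there is relative motion, the slip is always greater than zero in motoring operation, which is exactly why the rotor never reaches synchronous speed.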
Stepper motor
A stepper motor is a pulse-driven motor that changes the angular position of the rotor in steps.
When the stepper driver receives a pulse signal, it causes the stepper motor to rotate at a
predetermined fixed angle, known as the “step angle,” in the specified direction. The motor rotates
step-by-step at the fixed step angle. Accurate positioning can be achieved by controlling the number
of pulses, and regulating the speed and acceleration of motor rotation can be accomplished by
controlling the pulse frequency.
Types of stepper motors
Permanent magnet synchronous stepper motor
When a stator coil is energized, it develops electromagnetic poles. The permanent-magnet rotor aligns along the magnetic field of the stator. The other stator coils are then energized in sequence so that the rotor moves and aligns itself to the new magnetic field. Energizing the stator coils in a fixed sequence thus rotates the stepper motor by fixed angles.
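The pulse-to-angle relationship described above can be sketched numerically. The 1.8° step angle below is an assumption (a common value for hybrid steppers), not something stated in the text:

```python
# Relating drive pulses to rotation for a stepper motor.
STEP_ANGLE_DEG = 1.8  # assumed step angle (200 steps per revolution)

def pulses_for_angle(angle_deg, step_angle=STEP_ANGLE_DEG):
    """Number of pulses needed to rotate by angle_deg."""
    return round(angle_deg / step_angle)

def speed_rpm(pulse_freq_hz, step_angle=STEP_ANGLE_DEG):
    """Rotational speed produced by a given pulse frequency."""
    steps_per_rev = 360.0 / step_angle
    return pulse_freq_hz / steps_per_rev * 60.0

print(pulses_for_angle(90))  # 50 pulses for a quarter turn
print(speed_rpm(200))        # 200 Hz pulse train -> 60 rpm
```

This is why position is controlled by the pulse count and speed by the pulse frequency, as stated above.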
Servo Motors
The servo motor is usually a simple motor controlled for specific angular rotation with the help of
additional servomechanism (a typical closed-loop feedback control system). A servo system primarily consists of three basic components: a controlled device, an output sensor, and a feedback system. The main reason for using a servo is that it provides angular precision, i.e. it rotates only as much as we want and then stops, waiting for the next signal. A servo motor is a DC, AC, or brushless DC motor combined with a position-sensing device. Servo motors are used in closed-loop servo systems; in many servo systems, both velocity and position are monitored.
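The closed-loop idea can be sketched with a minimal proportional controller. The motor and sensor here are idealized assumptions (the "motor" simply moves by the commanded amount each cycle), not a real device API:

```python
# Minimal sketch of servo closed-loop control: the controller compares the
# sensed angle with the commanded angle and drives the motor until the
# error vanishes.

def servo_step(position, target, kp=0.5):
    """One control cycle: error -> proportional command -> new position."""
    error = target - position
    command = kp * error        # controller output
    return position + command   # idealized motor response

position = 0.0
for _ in range(20):
    position = servo_step(position, target=90.0)
print(round(position, 2))  # converges toward the 90-degree command
```

Each cycle halves the remaining error, so the motor settles at the commanded angle and then "waits for the next signal", mirroring the behaviour described above.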
Stepper motors are generally used in small machines where speeds and torques are low. Most stepper motors run open-loop, which eliminates the added cost and complexity of an encoder or a resolver. They are simple to operate, inexpensive compared to servo motors, and have good positioning accuracy.
Stepper motors typically have lower efficiency than servo motors. Loads do not accelerate rapidly because of the low torque-to-inertia ratio. Stepper motors can be noisy and can overheat at high performance, and they have a low power output for their weight and size.
Servo motors have high accuracy and resolution owing to the encoder fixed to the shaft. The motor is powered by the servo amplifier, which also counts the steps made. Their high torque-to-inertia ratio enables rapid load acceleration, and with lighter loads efficiency may reach up to 90 percent.
Harmonic drive
A harmonic drive is a compact gear mechanism built from a circular spline, a flex-spline, and a wave generator. The ratio of the input speed to the output speed depends on the difference in the number of teeth on the circular spline and on the flex-spline. Speed ratios as high as 320:1 can be produced in a single reduction.
One of the key features of a harmonic drive is its ability to achieve high gear ratios in a relatively
compact space. For example, a typical harmonic drive may have a gear ratio of 100:1, which means
that the output shaft will rotate 100 times for every one rotation of the input shaft.
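The tooth-count relationship can be sketched as follows. This uses the standard reduction formula (flex-spline teeth divided by the tooth difference) and reports the ratio magnitude only; the flex-spline output actually turns opposite to the input:

```python
# Harmonic drive reduction ratio from tooth counts (magnitude only).

def harmonic_ratio(flex_teeth, circ_teeth):
    """Reduction ratio = flex-spline teeth / tooth difference."""
    return flex_teeth / (circ_teeth - flex_teeth)

print(harmonic_ratio(200, 202))  # 100.0 -> the 100:1 example above
print(harmonic_ratio(160, 162))  # 80.0
```

Because the tooth difference is usually only two, even modest tooth counts yield the very high single-stage ratios mentioned above.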
Advantages
No backlash
High compactness
Light weight and small size
High gear ratios
Versatile and efficient
Reconfigurable ratios within a standard housing
Good resolution and excellent repeatability
Tactile sensors
Tactile sensors are devices used to detect and measure physical contact between two surfaces. A tactile sensor, also called a touch sensor, works primarily when an object or individual comes into physical contact with it. Touch sensors are sensitive to contact, pressure, or force, and enable a device to detect contact or close proximity. They are used in day-to-day applications such as the touch screens of mobile phones, biometric security systems, pressure measurement, force measurement, and robotics.
Capacitance – a change in capacitance is proportional to the applied pressure.
Piezoresistivity – the resistance varies with the applied force because of deformation of the shape.
Piezoelectricity – a voltage whose magnitude depends on the applied force is generated during deformation of the crystal lattice.
The functioning of a tactile sensor generally depends on the conversion of mechanical energy (pressure, force, or deformation) into an electrical signal that can be interpreted by a computer or controller.
The change in capacitance, resistance, or light intensity is detected by the sensor and used to form a virtual image of the contact. The virtual image contains information such as how much pressure the foreign body exerts at each point of contact with the sensor, as well as the shape and size of the foreign body. A fingerprint sensor is a familiar example of this kind of tactile sensor.
Force sensors
Force sensors are devices that measure the amount of force being applied to them. They are used in
a variety of applications, such as in manufacturing, robotics, medical devices, and consumer
electronics.
There are several types of force sensors, including strain gauge sensors, piezoelectric sensors,
capacitive sensors, and optical sensors.
Force sensors have many practical applications, such as in weight measurement, force
measurement, and pressure sensing. They are also used in the automotive industry for measuring
tire pressure, and in the medical field for measuring the force of blood flow or the pressure inside
the skull.
Strain gauge (load cell) sensor working
Strain gauge sensors (load cell) are the most common type of force sensor, and they work by
measuring the deformation of a material in response to an applied force. The strain gauge is typically made of a thin metal foil; as the material is deformed, the resistance of the gauge changes, and this change in resistance can be measured to determine the applied force.
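The resistance-to-force conversion can be sketched with the textbook gauge relations. The gauge factor, Young's modulus, and cross-section values below are illustrative assumptions:

```python
# Strain-gauge force estimate using the standard relations:
#   delta_R / R = GF * strain,  stress = E * strain,  force = stress * area

def force_from_gauge(delta_r, r_nominal, gauge_factor, youngs_modulus, area_m2):
    strain = (delta_r / r_nominal) / gauge_factor
    stress = youngs_modulus * strain
    return stress * area_m2

# 0.24 ohm change on a 120 ohm gauge (GF = 2) bonded to steel (E = 200 GPa),
# load-bearing cross-section 1e-4 m^2:
f = force_from_gauge(0.24, 120.0, 2.0, 200e9, 1e-4)
print(f)  # 20000 N
```

In practice the tiny resistance change is read out with a Wheatstone bridge rather than measured directly, but the proportionality above is the same.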
Capacitive sensors use changes in capacitance to measure force, and they are often used in
applications where high accuracy is required.
Range finders in robotics
A rangefinder is a device used to determine the distance between the user and an object or target.
Rangefinders are commonly used in various fields, including hunting, golfing, surveying,
photography, and military applications. There are different types of rangefinders, including optical
rangefinders, laser rangefinders, ultrasonic rangefinders, and radar rangefinders.
Types of range finders are:
1. Ultrasonic
2. Light based range finders
• Ultrasonic rangefinders emit sound waves that bounce off the target and return to the
rangefinder, where the time it takes for the sound wave to return is used to calculate the
distance.
• Laser rangefinders use a laser beam to measure the distance to the target. The laser beam is
emitted from the rangefinder, and it reflects off the target and returns to the rangefinder, where
it is measured to determine the distance.
Overall, rangefinders are useful tools that can provide accurate distance measurements in a variety
of applications.
Ultrasonic range finders
Ultrasonic range finders are commonly used in robotics for detecting the distance between objects
and the robot. These sensors work by emitting high-frequency sound waves and measuring the time
it takes for the waves to bounce back from an object and return to the sensor. By calculating the
time delay, the distance to the object can be determined. The operation of the sensor is best
understood by analyzing the waveforms used for both transmission and detection of acoustic energy
signals.
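The time-of-flight calculation described above is simple: the measured echo time covers the round trip, so it is halved before multiplying by the speed of sound. The speed of sound value is an assumption (about 343 m/s at 20 °C; it varies with temperature):

```python
# Echo time to distance for an ultrasonic range finder.
SPEED_OF_SOUND = 343.0  # m/s, assumed value at ~20 degrees C

def distance_m(echo_time_s, c=SPEED_OF_SOUND):
    """The pulse travels out and back, so the round trip is halved."""
    return c * echo_time_s / 2.0

print(distance_m(0.01))  # 10 ms round trip -> 1.715 m
```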
In robotics, ultrasonic range finders can be used for a variety of applications, such as obstacle
detection and avoidance, localization, and mapping. For example, a robot can use ultrasonic range
finders to detect obstacles in its path and adjust its trajectory to avoid them. Ultrasonic range finders
can also be used to determine the position of objects relative to the robot, which can be used for
mapping and navigation.
One limitation of ultrasonic range finders is that they can only detect objects that are within their
range and within their field of view. Additionally, they may be affected by environmental factors
such as ambient noise and acoustic reflection, which can impact their accuracy. However, these
limitations can be mitigated through proper sensor placement and calibration, as well as through the
use of multiple sensors to provide more comprehensive coverage.
Light-based range finders
Light-based range finders are devices that use light waves to measure the distance between the
device and a target object. Laser range finders use a laser beam to illuminate the target object and
measure the time it takes for the laser light to bounce back to the device. The device then calculates
the distance based on the time-of-flight of the laser beam. Light-based range finders, also known as
LIDAR (Light Detection and Ranging), are commonly used in robotics for distance sensing and object
detection.
One of the key advantages of LIDAR systems in robotics is their ability to provide 3D maps of the
environment, which can be used for localization and navigation. For example, autonomous vehicles
often use LIDAR sensors to build detailed maps of their surroundings, which they can then use to
plan safe routes.
There are several types of LIDAR sensors used in robotics, including time-of-flight (ToF) sensors and
phase-shift sensors. ToF sensors work by measuring the time it takes for a laser beam to travel to an
object and back, while phase-shift sensors use the phase difference between the emitted and
reflected laser beams to determine the distance to an object.
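The two measurement principles can be sketched with their idealized formulas (ignoring the range ambiguity a phase-shift sensor has beyond one modulation period):

```python
import math

# Idealized distance formulas for the two LIDAR principles described above.
C = 3.0e8  # speed of light, m/s

def tof_distance(round_trip_s):
    """Time-of-flight: d = c * t / 2 (t is the round-trip time)."""
    return C * round_trip_s / 2.0

def phase_distance(phase_rad, mod_freq_hz):
    """Phase shift: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

print(tof_distance(1e-6))             # 1 microsecond round trip -> 150 m
print(phase_distance(math.pi, 10e6))  # half-cycle shift at 10 MHz -> 7.5 m
```

The factor of two (or the 4π in the phase version) again accounts for the light travelling to the target and back.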
Overall, LIDAR sensors are an important technology in robotics, allowing for accurate distance
sensing and 3D mapping of the environment, which are critical for autonomous navigation and
obstacle avoidance. They are generally very accurate and can measure distances over a wide range,
from a few centimeters to several kilometers, depending on the type of range finder used.
Machine vision system
Robot vision (machine vision) may be defined as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. In the human vision system, the eyes sense images and the brain analyses the information and acts on the basis of that analysis. Similarly, in a robotic vision system, visual information is sensed by forming an image of the object, and the information is analysed to make a useful decision about its content. A robotic (machine) vision system thus needs both visual sensing and interpretive capabilities: the ability for machines to "see" and interpret the world around them.
Machine vision systems are used for functions like gauging of dimensions, identification of shapes,
measurement of distances, determining orientation of parts, quantifying motion, detecting surface
shading etc.
In a machine vision system, the cameras are responsible for taking the light information from a
scene and converting it into digital information i.e. pixels using CMOS or CCD sensors. Image sensing
device may consist of vidicon camera or a charge coupled device (CCD) camera or charge injection
device (CID) camera. Image sensing device receives light through a lens system and converts this
light into electrical signals.
Machine vision is the technology that enables robots and other machines like autonomous vehicles
to see and recognize objects in their surrounding environment. By pairing optic sensors with artificial
intelligence and machine learning tools that can analyze and process image data, robots and
autonomous vehicles equipped with machine vision systems are able to perform more complex
tasks, like navigating downtown traffic.
Steps in Machine Vision Process
The machine vision system involves following four basic steps:
o Image formation
o Processing of image in a form suitable for analysis by computer
o Defining and analysing the characteristics of image
o Interpretation of image and decision making.
Image Acquisition (Image Formation)
Image acquisition is the creation of digital images, typically of a physical scene. The first link in the vision chain is the camera, which plays the role of the robotic eye or sensor; the visual information is converted into electrical signals in the camera.
An image sensor such as a vidicon, CCD, or CID camera is used to generate the electronic signal representing the image. The image sensor collects light from the scene through a lens system and converts it into an electrical signal.
Vidicon camera – Working
Initially, light from the scene is focused onto the faceplate through a lens system. When the light reaches the photoconductive material (the target), free electrons are generated by absorption of the light energy. Because of the externally supplied positive potential at the signal plate, these electrons migrate towards it, leaving a deficiency of electrons in the photoconductive region and hence a positive charge over the surface of the material. Each point on the photoconductive layer thus acquires a positive charge, producing a charge image that corresponds to the incident optical image. As the electron beam from the gun scans the charge image, a drop in voltage takes place and a varying current is produced; this current forms the video-signal output of the camera. The electron beam scans the image horizontally and vertically so that the entire image is read many times per second, producing an electrical signal that can be conveyed to a display device to reproduce the image.
Advantages:
Light weight and small size
Longer life
Low power consumption
Charge coupled device (CCD) camera
A charge-coupled device (CCD) is a type of image sensor widely used in digital cameras and other imaging devices. It is designed to convert incoming light into an electrical signal, allowing the capture and storage of digital images.
When light enters the CCD, it passes through a lens system and falls onto an array of photosensitive sites called pixels. Each pixel consists of a photosensitive element, typically a silicon photodiode.
When a photon strikes the photosensitive element, it generates an electron-hole pair; the number of pairs generated is proportional to the intensity of the light. The charge from each pixel is held in a potential well, which behaves like a capacitor.
By applying voltage pulses to these capacitors, the charges are shifted row by row or column by column in a controlled manner. This shifting process is known as charge transfer. The shifted charges are sequentially transferred to a serial register, where each accumulates as a charge packet. The signal is then converted from analog to digital form using an analog-to-digital converter (ADC).
Analogy of data transfer in a CCD: rain water is collected in an array of buckets and carried to a collection point on conveyors; likewise, each pixel's charge is shifted out to the readout electronics and finally converted into a sequence of electrical signals.
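The bucket-brigade readout can be sketched as a toy simulation. The readout order here (bottom row first, last pixel of each row first) is one illustrative convention; real sensors fix the direction by their electrode geometry:

```python
# Toy sketch of CCD readout: whole rows shift into a serial register,
# which then shifts charges out one pixel at a time.

def ccd_readout(pixel_array):
    """Return the pixel charges in the order a CCD would deliver them."""
    rows = [list(r) for r in pixel_array]
    output = []
    while rows:
        serial_register = rows.pop()   # bottom row shifts into the register
        while serial_register:
            output.append(serial_register.pop())  # shift out pixel by pixel
    return output

charges = [[1, 2],
           [3, 4]]
print(ccd_readout(charges))  # [4, 3, 2, 1]
```

Each value in the output list would then go through the ADC, one sample per pixel, exactly as in the bucket/conveyor analogy above.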
Charge injection device (CID) cameras
A Charge Injection Device (CID) is an electronic device used for sensing and capturing images. It is
similar to a Charge-Coupled Device (CCD). The basic difference between the CID and the CCD is in
the way the electric charge is transferred before it is recorded.
CIDs are made up of a light-sensitive surface subdivided into several thousand pixels, each addressable by column and row electrodes, allowing collection and readout of signals. Incident light on the surface of a pixel is converted to a proportional amount of signal charge that is collected and stored beneath the pixel surface. After readout, the charge is injected into the substrate, which clears the pixel.
When a photon of sufficient energy strikes the silicon surface, a hole and an electron are created. The electron migrates to the positively charged substrate and is removed from the system. Negative voltages are applied to both the Sense and Storage electrodes; the Storage electrode is set at a more negative potential than the Sense electrode, creating a deeper potential well for the photo-generated holes under the Storage electrode (e.g. Sense gate at −5 V, Storage gate at −10 V).
Steps in photon collection and readout in a CID
1. Holes generated by the incident light accumulate below the Storage electrode (held at the deeper-well bias).
2. Readout before transfer – the Sense electrode is disconnected from its reference potential (left floating, not connected to any charge) and its output is read.
3. Readout after transfer – the Storage electrode potential is collapsed towards positive so that the holes move under the Sense electrode, and the output is read again. The difference of the two readings is proportional to the amount of photon-generated charge at the pixel site.
4. To clear the accumulated charge, the potentials on both electrodes are collapsed, the charge is injected into the substrate, and recombination occurs.
Image processing
Image processing is a method of converting an image into digital form and performing operations on it, in order to get an enhanced image or to extract useful information from it. Computer algorithms are used to perform image processing on digital images.
An image is an array or matrix of square pixels (picture elements) arranged in columns and rows. Images are obtained using various image acquisition devices such as cameras, and after acquisition they are stored in computer memory as digitized images. Acquired images may contain errors introduced during acquisition; removing these errors to make the image brighter and clearer is part of image processing. A camera may typically form an image 30 times per second, and at each time interval the entire image has to be captured and frozen for processing by an image processor.
Digital image processing has a broad range of applications such as image restoration, medical
imaging, remote sensing, image segmentation, etc. Every process requires a different
technique.
The value of a pixel at any point corresponds to the intensity of the light photons striking that point. The number of different colors, or the depth of color, in an image depends on the number of bits per pixel. In an 8-bit grayscale image, each pixel takes a value between 0 and 255.
Histogram processing
In digital image processing, the histogram is used for graphical representation of a digital image. An image histogram depicts how often each intensity value occurs in the image; it represents the relative frequency of occurrence of the various gray levels (color tone or intensity) in an image.
The horizontal axis of the graph represents the tonal variations (relative brightness or colour of objects in the image), while the vertical axis represents the number of pixels in each particular tone. The left side of the horizontal axis represents the dark areas, the middle represents mid-tone values, and the right-hand side represents light areas. The histogram of a digital image with gray levels in the range [0, L−1] is the discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels having that gray level.
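The discrete histogram h(r_k) = n_k can be computed directly by counting pixels per gray level; the tiny 2×3 image below is an illustrative example:

```python
# Compute the histogram of a small 8-bit grayscale image.

def histogram(image, levels=256):
    """Count how many pixels fall in each gray level."""
    h = [0] * levels
    for row in image:
        for pixel in row:
            h[pixel] += 1
    return h

image = [[0, 0, 128],
         [128, 128, 255]]
h = histogram(image)
print(h[0], h[128], h[255])  # 2 3 1
```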
Histogram Sliding
In histogram sliding, a constant is added to (or subtracted from) every pixel value, which slides the whole histogram to the right or left. When the histogram slides to the right, all pixel values increase and the image becomes brighter; sliding it to the left darkens the image.
Histogram Stretching
In histogram stretching, the contrast of an image is increased. To increase the contrast, the histogram of the image is stretched so that it covers the full dynamic range.
The contrast of an image is defined by the difference between the maximum and minimum pixel intensity. Low contrast may be due to poor lighting, a limited sensor range, or a fault of the camera. The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.
Contrast stretching remaps the gray levels of the image so that it spans the full intensity range of the display device.
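The remapping is the standard linear stretch s = (r − r_min) · (L−1) / (r_max − r_min); the low-contrast sample image below is illustrative:

```python
# Contrast stretching: linearly remap [r_min, r_max] to the full [0, L-1] range.

def stretch(image, levels=256):
    flat = [p for row in image for p in row]
    r_min, r_max = min(flat), max(flat)
    scale = (levels - 1) / (r_max - r_min)
    return [[round((p - r_min) * scale) for p in row] for row in image]

low_contrast = [[100, 110],
                [120, 150]]
print(stretch(low_contrast))  # [[0, 51], [102, 255]]
```

Note that the shape of the histogram is preserved; the gray levels are only spread over a wider range.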
Histogram Equalization
Histogram equalization is an image processing technique that adjusts the contrast of an image. It alters the input histogram to produce an output whose histogram is uniform, with the pixel intensities equally distributed over the entire dynamic range. It stretches the histogram to fill the dynamic range and at the same time tries to keep the histogram uniform, allowing areas of the image with lower contrast to gain higher contrast. The resultant image therefore appears high in contrast and exhibits a large variety of grey tones, as equalization attempts to produce a histogram with equal numbers of pixels in each intensity level.
Histogram equalization increases the dynamic range of pixel values and evens out the count of pixels at each level, producing an approximately flat histogram and a high-contrast image. In histogram stretching the shape of the histogram remains the same, whereas in histogram equalization the shape of the histogram changes, and equalization produces a single, unique result. It can enhance contrast for brightness values close to histogram maxima and decrease contrast near minima.
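Equalization is usually implemented through the cumulative distribution function (CDF), using the standard discrete mapping s_k = round((L−1) · cdf(r_k)); the 2×2 dark image below is an illustrative example:

```python
# Histogram equalization via the cumulative distribution function.

def equalize(image, levels=256):
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [0.0] * levels, 0
    for k in range(levels):
        running += hist[k]
        cdf[k] = running / n
    # Map each gray level through its CDF value.
    return [[round((levels - 1) * cdf[p]) for p in row] for row in image]

dark = [[10, 10],
        [20, 30]]
print(equalize(dark))  # pixel values spread across the full 0..255 range
```

Because the mapping is fixed by the image's own histogram, equalization needs no tuning parameter, unlike stretching.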
Thresholding
Thresholding is the simplest and most popular method of image segmentation. Thresholding creates a binary image from a grey-level one by turning all pixels below some threshold to zero and all pixels above that threshold to one. The key parameter in the thresholding process is the choice of the threshold value. The major problem with thresholding is that it considers only the intensity, not any relationships between the pixels, so there is no guarantee that the pixels identified by the thresholding process are contiguous.
Several methods for choosing a threshold exist: users can choose a threshold value manually, or a thresholding algorithm can compute a value automatically, which is known as automatic thresholding. The threshold is often chosen from the image histogram, since regions of uniform intensity give rise to strong peaks in it.
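The operation itself is a single comparison per pixel, as this short sketch shows (the sample image and threshold of 128 are illustrative):

```python
# Thresholding: pixels above the threshold become 1, the rest become 0.

def threshold(image, t):
    return [[1 if p > t else 0 for p in row] for row in image]

gray = [[12, 200],
        [90, 240]]
print(threshold(gray, 128))  # [[0, 1], [0, 1]]
```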
******************************************