IVP Practical Manual
Image Inverse: The negative or inverse of an image with intensity levels in the range [0, L−1] is obtained by using the negative transformation, which is given by the expression

s = L − 1 − r

where r is the intensity of a pixel in the input image. In terms of intensity, this means that true black becomes true white and vice versa; for an 8-bit image (L = 256), r = 0 maps to s = 255.
Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray details embedded in dark regions of an image, especially when the black areas are dominant in size.
Code:
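A minimal Octave sketch of the negative transformation for an 8-bit grayscale image (the input file name is an assumption):

input_image = imread('hawk.png');   # assumed input file
x = rgb2gray(input_image);
N = 255 - x;                        # s = L - 1 - r with L = 256
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Negative Image');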
Output:
Original Image:
Negative Image:
Log Transformation: The log transformation maps a narrow range of low intensity values in the input into a wider range of output levels. We use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values.

s = c log(1 + r)

where c is a scaling constant.
Code:
input_image = imread('hawk.png');       # assumed input file
x = im2double(rgb2gray(input_image));   # grayscale, values in [0, 1]
[row, col] = size(x);
c = 1/log(2);                           # scaling constant that keeps the output in [0, 1]
N = zeros(row, col);
for i = 1:row   # iterate over the rows
for j = 1:col   # iterate over the columns
N(i,j) = c*log(1 + x(i,j));   # log transformation of each pixel, stored in N
endfor
endfor
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Log Transformed Image');
Output:
Original Image:
Power-Law Transformation: Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
The nth power and nth root curves shown in the figure below are given by the expression s = c·r^γ. This transformation function is also called gamma correction. Different levels of enhancement can be obtained for different values of γ, and it is used to correct power-law response phenomena. Different display monitors display images at different intensities and clarity; that is, every monitor has built-in gamma correction with a certain gamma range, so a good monitor automatically corrects all the images displayed on it for the best contrast, giving the user the best experience. In color images, varying gamma changes the ratio of red, green, and blue along with the intensity. The difference between the log-transformation function and the power-law functions is that with the power-law function a family of possible transformation curves can be obtained simply by varying γ.

s = c·r^γ

where
s is the output pixel value,
r is the input pixel value, and
c and γ are positive real constants.
Code:
input_image = imread('hawk.png');       # assumed input file
x = im2double(rgb2gray(input_image));   # grayscale, values in [0, 1]
[row, col] = size(x);
c = 1;        # scaling constant
gamma = 0.5;  # gamma < 1 expands the dark values
N = zeros(row, col);
for i = 1:row   # iterate over the rows
for j = 1:col   # iterate over the columns
N(i,j) = c*(x(i,j)^gamma);   # power-law transformation of each pixel, stored in N
endfor
endfor
figure
imshow(x);
title('Original Image');
figure
imshow(N);
title('Power-Law Transformed Image');
Output:
Original Image:
Practical 2: Piecewise Transformation.
Image enhancement using the Contrast Stretching transformation technique, also shown on a histogram.
Contrast stretching differs from the more sophisticated histogram equalization in that it can only apply a linear scaling function to the image pixel values. As a result, the 'enhancement' is less harsh.
Program:
Program:
r = im2double(rgb2gray(imread('hawk.png')));   # assumed input file; values in [0, 1]
[m, n] = size(r);
r1 = input("Enter R1: ");   # breakpoints and target values, all assumed in [0, 1]
r2 = input("Enter R2: ");
s1 = input("Enter S1: ");
s2 = input("Enter S2: ");
a = s1/r1;                  # slope of the first segment
b = (s2 - s1)/(r2 - r1);    # slope of the middle (stretching) segment
c = (1 - s2)/(1 - r2);      # slope of the last segment
s = zeros(m, n);
for i = 1:m
for j = 1:n
if r(i,j) < r1
s(i,j) = a*r(i,j);
elseif r(i,j) <= r2
s(i,j) = b*(r(i,j) - r1) + s1;
else
s(i,j) = c*(r(i,j) - r2) + s2;
endif
endfor
endfor
figure;
subplot(1,2,1)
imshow(r);
title('Original Image');
subplot(1,2,2)
imhist(r);
title('Histogram Of Original Image');
figure;
subplot(1,2,1)
imshow(s);
title("Contrast Stretched Image");
subplot(1,2,2)
imhist(s);
title('Histogram Of Contrast Stretched Image');
Output:
Original Image:
Image enhancement by using Thresholding transformation techniques and also show on
histogram.
The input to a thresholding operation is typically a grayscale or color image. In the simplest
implementation, the output is a binary image representing the segmentation. Black pixels
correspond to background and white pixels correspond to foreground (or vice versa). In
simple implementations, the segmentation is determined by a single parameter known as
the intensity threshold. In a single pass, each pixel in the image is compared with this
threshold. If the pixel's intensity is higher than the threshold, the pixel is set to, say, white in
the output. If it is less than the threshold, it is set to black.
Code:
r = rgb2gray(imread('hawk.png'));   # assumed input file
[m, n] = size(r);
threshold = input("Enter Threshold: ");
OutImage = zeros(m, n);
for i = 1:m
for j = 1:n
if (r(i,j) > threshold)   # pixel brighter than the threshold becomes 1 (white)
OutImage(i,j) = 1;
else                      # otherwise the pixel becomes 0 (black)
OutImage(i,j) = 0;
endif
endfor
endfor
figure;
subplot(1,2,1)
imshow(r);
title('Original Image');
subplot(1,2,2)
imhist(r);
title('Histogram Of Original Image');
figure;
subplot(1,2,1)
imshow(OutImage);
title("Threshold Image");
subplot(1,2,2)
imhist(OutImage);
title('Histogram Of Threshold Image');
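For comparison, a similar segmentation can be obtained with built-in functions; a brief sketch in which graythresh chooses the threshold automatically by Otsu's method:

level = graythresh(r);   # normalized threshold in [0, 1]
bw = im2bw(r, level);    # binary image, equivalent to the loop above
figure, imshow(bw); title('Otsu Threshold Image');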
Output:
Original Image:
Threshold Image:
Bit Plane Slicing:
Bit plane slicing is a well-known technique in image processing and is used in image compression. Bit plane slicing is the conversion of an image into a set of binary images, one per bit; these binary images are then compressed using different algorithms. With this technique, the significant bits of a gray scale image can be separated, which is useful for processing this data with very low time complexity.
Digitally, an image is represented in terms of pixels, and these pixels can be expressed further in terms of bits. Separating a digital image into its bit-planes is useful for analyzing the relative importance of each bit of the image, a process that aids in determining the adequacy of the number of bits used to quantize each pixel. This type of decomposition is also useful for image compression. In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the (binary) image for bit-plane 7 can be obtained by processing the input image with a thresholding gray-level transformation function.
bitget:
C = bitget(A,bit)
C = bitget(A,bit) returns the value of the bit at position bit in A. Operand A must be an unsigned integer or a double, or an array containing unsigned integers, doubles, or both. The bit input must be a number between 1 and the number of bits in the unsigned integer class of A (e.g., 32 for the uint32 class).
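A small illustrative check of bitget (the value 12 is arbitrary):

>> bitget(uint8(12), 1:8)   % 12 is 00001100 in binary; bit 1 is the LSB
ans =
  0  0  1  1  0  0  0  0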
Code:
a = rgb2gray(imread('hawk.png'));   # assumed input file
# Extract the bit planes; logical() makes imshow render 0/1 as black/white
g1 = logical(bitget(a,1));   # least significant bit
g2 = logical(bitget(a,2));
g3 = logical(bitget(a,3));
g5 = logical(bitget(a,5));
g6 = logical(bitget(a,6));
g7 = logical(bitget(a,7));
g8 = logical(bitget(a,8));   # most significant bit
figure;
subplot(2,2,1)
imshow(a);   # first panel assumed to show the original image
title('Original Image');
subplot(2,2,2)
imshow(g1);
title('Bit 1');
subplot(2,2,3)
imshow(g2);
title("Bit 2");
subplot(2,2,4)
imshow(g3);
title('Bit 3');
figure;
subplot(2,2,1)
imshow(g5);
title("Bit 5");
subplot(2,2,2)
imshow(g6);
title('Bit 6');
subplot(2,2,3)
imshow(g7);
title("Bit 7");
subplot(2,2,4)
imshow(g8);
title('Bit 8');
Output:
Practical 3: Histogram Equalization
The process of adjusting intensity values can be done automatically using histogram
equalization. Histogram equalization involves transforming the intensity values so that the
histogram of the output image approximately matches a specified histogram. By default, the
histogram equalization function, histeq, tries to match a flat histogram with 64 bins, but we
can specify a different histogram instead.
Syntax:
X=histeq(y);
histeq enhances the contrast of images by transforming the values in an intensity image, or
the values in the colormap of an indexed image, so that the histogram of the output image
approximately matches a specified histogram.
Program:
x = rgb2gray(imread('hawk.png'));   # assumed input file
r = histeq(x);
figure;
subplot(1,2,1)
imshow(x);
title("Original Image");
subplot(1,2,2)
imhist(x);
title('Histogram Of Original Image');
figure;
subplot(1,2,1)
imshow(r);
title("Histogram Equalized Image");
subplot(1,2,2)
imhist(r);
title('Histogram Of Equalized Image');
Output:
Original Image:
Histogram Equalization:
Practical 4: Image Filtering in Spatial Domain.
Low pass filtering (smoothing) is employed to remove high spatial frequency noise from a digital image. Low-pass filters usually employ a moving-window operator which affects one pixel of the image at a time, changing its value by some function of a local region (window) of pixels. The operator moves over the image to affect all the pixels in the image.
Average Filter
Syntax:
>> mf = ones(3,3)/9
mf =
   0.1111   0.1111   0.1111
   0.1111   0.1111   0.1111
   0.1111   0.1111   0.1111
filter2() is defined as:
Y = filter2(h,X) filters the data in X with the two-dimensional FIR filter in the matrix h. It computes the
result, Y, using two-dimensional correlation, and returns the central part of the correlation that is the
same size as X.
Y = filter2(h,X,shape)
It returns the part of Y specified by the shape parameter. shape is a string with one of these values:
'full' : Returns the full two-dimensional correlation. In this case, Y is larger than X.
'same' : (default) Returns the central part of the correlation. In this case, Y is the same size as X.
'valid' : Returns only those parts of the correlation that are computed without zero-padded edges. In
this case, Y is smaller than X.
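A quick way to see the effect of the shape parameter (magic(4) is just a convenient 4×4 test matrix):

>> A = magic(4);
>> h = ones(3,3)/9;
>> size(filter2(h, A, 'full'))    % 6 6 (larger than A)
>> size(filter2(h, A, 'same'))    % 4 4 (same size as A)
>> size(filter2(h, A, 'valid'))   % 2 2 (smaller than A)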
Code:
input_image=imread("hawk.png");
input_image=rgb2gray(input_image);
input_image1= im2double(input_image);
f=ones(3,3)/9;
average_image = filter2(f, input_image1);
#In this figure displaying the original and average filtered image
figure
subplot(1,2,1)
imshow(input_image);
title('Original Image');
subplot(1,2,2)
imshow(average_image);
title("Avarage filtred image");
Output:
Weighted Average:
The second mask is a little more interesting. This mask yields a so-called weighted average, terminology used to indicate that pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others. In this mask, the pixel at the center is multiplied by a higher value than any other, thus giving this pixel more importance in the calculation of the average.
The weighted filter mask is as follows (a commonly used 3×3 weighted-average mask of this kind):

(1/16) ×
1  2  1
2  4  2
1  2  1
Code:
% Read the image and add salt & pepper noise (file name is an assumption)
Im = rgb2gray(imread('hawk.png'));
NIm = imnoise(Im, 'salt & pepper', 0.05);
% Weighted-average mask: the center pixel is weighted most heavily
w = [1 2 1; 2 4 2; 1 2 1]/16;
[ma, na] = size(NIm);   % image size
[mb, nb] = size(w);     % mask size
% To do convolution (full 2-D convolution by shift-and-add)
c = zeros( ma+mb-1, na+nb-1 );
for i = 1:mb
for j = 1:nb
r1 = i;
r2 = r1 + ma - 1;
c1 = j;
c2 = c1 + na - 1;
c(r1:r2,c1:c2) = c(r1:r2,c1:c2) + w(i,j) * double(NIm);
end
end
figure
subplot(1,2,1)
imshow(NIm);
title('Noisy Image (Salt & Pepper Noise)');
subplot(1,2,2)
imshow(uint8(c));
title('Denoised Image using Weighted Average Operation of Box Filter');
Output:
Median:
The best-known example in this category is the median filter, which, as its name implies,
replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel
(the original value of the pixel is included in the computation of the median). Median filters
are quite popular because, for certain types of random noise, they provide excellent noise-
reduction capabilities, with considerably less blurring than linear smoothing filters of similar
size. Median filters are particularly effective in the presence of impulse noise, also called salt-
and-pepper noise because of its appearance as white and black dots superimposed on an
image.
How It Works:
Like the mean filter, the median filter considers each pixel in the image in turn and looks at
its nearby neighbors to decide whether or not it is representative of its surroundings. Instead
of simply replacing the pixel value with the mean of neighboring pixel values, it replaces it
with the median of those values. The median is calculated by first sorting all the pixel values
from the surrounding neighborhood into numerical order and then replacing the pixel being
considered with the middle pixel value. (If the neighborhood under consideration contains an
even number of pixels, the average of the two middle pixel values is used.)
Figure 1 illustrates an example calculation.
Figure 1 Calculating the median value of a pixel neighborhood. As can be seen, the central
pixel value of 150 is rather unrepresentative of the surrounding pixels and is replaced with
the median value: 124. A 3×3 square neighborhood is used here; larger neighborhoods will produce more severe smoothing.
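The calculation can be checked directly; the neighborhood values below are assumed, chosen only to be consistent with the caption:

>> v = [124 126 127 120 150 125 115 119 123];   % 3x3 neighborhood, center = 150
>> median(v)
ans = 124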
Syntax:
B = medfilt2(A) performs median filtering of the matrix A using the default 3-by-3
neighborhood.
Code:
input_image = imread('hawk.png');        # assumed input file
input_image = rgb2gray(input_image);
input_image1 = im2double(input_image);
median_image = medfilt2(input_image1);
#In this figure displaying the original and median filtered image
figure
subplot(1,2,1)
imshow(input_image1);
title('Original Image');
subplot(1,2,2)
imshow(median_image);
title("Median filtered image");
Output:
Gaussian Filter:
Code:
I = rgb2gray(imread('hawk.png'));   # assumed input file
sz = 2;      # kernel half-width: the kernel is (2*sz+1) x (2*sz+1)
sigma = 1;   # standard deviation of the Gaussian
[X,Y] = meshgrid(-sz:sz, -sz:sz);
M = size(X,1) - 1;
N = size(Y,1) - 1;
exp_comp = -(X.^2 + Y.^2)/(2*sigma*sigma);
kernel = exp(exp_comp)/(2*pi*sigma*sigma);   # sampled 2-D Gaussian
output = zeros(size(I));
I = padarray(I, [sz, sz], 'both');           # pad so the kernel fits at the borders
for i = 1:size(I,1)-M
for j = 1:size(I,2)-N
temp = double(I(i:i+M, j:j+N)).*kernel;      # windowed product
output(i,j) = sum(temp(:));                  # convolution sum
end
end
output = uint8(output);
figure, imshow(output);
Output:
Original Image:
Gaussian Low Pass Filtered Image:
Laplacian Filter:
The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The
Laplacian of an image highlights regions of rapid intensity change and is therefore often used
for edge detection.
The Laplacian is often applied to an image that has first been smoothed with something
approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise, and
hence the two variants will be described together here. The operator normally takes a single
gray-level image as input and produces another gray-level image as output.
How It Works:
The Laplacian L(x,y) of an image with pixel intensity values I(x,y) is given by:

L(x,y) = ∂²I/∂x² + ∂²I/∂y²
This can be calculated using a convolution filter.
Since the input image is represented as a set of discrete pixels, we have to find a discrete convolution kernel that can approximate the second derivatives in the definition of the Laplacian. Two commonly used small kernels are shown below:

 0  1  0        1  1  1
 1 -4  1        1 -8  1
 0  1  0        1  1  1
Code:
a = imread('moon.jpg');
a = rgb2gray(a);
[r c] = size(a);
a = im2double(a);
lap = [0 1 0; 1 -4 1; 0 1 0];   # Laplacian kernel (the 4-neighbor mask shown above)
result = zeros(r, c);
for i = 2:r-1
for j = 2:c-1
sum = 0;
row = 0;
for k = i-1:i+1   # walk over the 3x3 neighborhood rows
row = row + 1;
col = 1;
for l = j-1:j+1   # walk over the 3x3 neighborhood columns
sum = sum + a(k,l)*lap(row,col);
col = col + 1;
end
end
result(i,j) = sum;
end
end
subplot(1,2,1)
imshow(a);
subplot(1,2,2)
imshow(result, []);
Output:
Edge Detection Filters:
Code:
img = rgb2gray(imread('hawk.png'));   # assumed input file
# The original filtering step was not reproduced; the built-in edge()
# function is one standard way to compute these binary edge maps
sobel = edge(img, 'sobel');
robert = edge(img, 'roberts');
prewitt = edge(img, 'prewitt');
figure;
subplot(1,2,1)
imshow(img);
title('Original Image');
subplot(1,2,2)
imshow(sobel);
title("Edge detection using sobel filter");
figure;
subplot(1,2,1)
imshow(robert);
title('Edge detection using robert filter');
subplot(1,2,2)
imshow(prewitt);
title("Edge detection using prewitt filter");
Output:
Original Image and Sobel edge detected Image:
Robert and Prewitt edge detected Image:
Practical 5: Analyze Image in Frequency Domain
Gaussian Smoothing:
The Gaussian smoothing operator is a 2-D convolution operator that is used to 'blur' images and remove detail and noise. In this sense it is similar to the mean filter, but it uses a different kernel that represents the shape of a Gaussian ('bell-shaped') hump. This kernel has some special properties which are detailed below.
How It Works:
In 1-D, the Gaussian has the form:

G(x) = (1/(√(2π)·σ)) · e^(−x²/(2σ²))

where σ is the standard deviation of the distribution. We have also assumed that the distribution has a mean of zero (i.e. it is centered on the line x = 0). The distribution is illustrated in the figure below.
In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form:

G(x,y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²))

The idea of Gaussian smoothing is to use this 2-D distribution as a 'point-spread' function, and this is achieved by convolution. Since the image is stored as a collection of discrete pixels, we need to produce a discrete approximation to the Gaussian function before we can perform the convolution. In theory, the Gaussian distribution is non-zero everywhere, which would require an infinitely large convolution kernel, but in practice it is effectively zero more than about three standard deviations from the mean, and so we can truncate the kernel at this point. The figure below shows a suitable integer-valued convolution kernel that approximates a Gaussian with a σ of 1.0. It is not obvious how to pick the values of the mask to approximate a Gaussian. One could use the value of the Gaussian at the centre of a pixel in the mask, but this is not accurate because the value of the Gaussian varies non-linearly across the pixel. We integrated the value of the Gaussian over the whole pixel (by summing the Gaussian at 0.001 increments). The integrals are not integers: we rescaled the array so that the corners had the value 1. Finally, the 273 is the sum of all the values in the mask.
(1/273) ×
  1   4   7   4   1
  4  16  26  16   4
  7  26  41  26   7
  4  16  26  16   4
  1   4   7   4   1

Figure: Discrete approximation to a Gaussian function with σ = 1.0 (the standard integer kernel whose entries sum to 273).
Once a suitable kernel has been calculated, the Gaussian smoothing can be performed using standard convolution methods. The convolution can in fact be performed fairly quickly, since the equation for the 2-D isotropic Gaussian shown above is separable into x and y components. Thus the 2-D convolution can be performed by first convolving with a 1-D Gaussian in the x direction, and then convolving with another 1-D Gaussian in the y direction. (The Gaussian is in fact the only completely circularly symmetric operator which can be decomposed in such a way.) Figure 4 shows the 1-D x component kernel that would be used to produce the full kernel shown in Figure 3 (after scaling by 273, rounding, and truncating one row of pixels around the boundary because they mostly have the value 0; this reduces the 7×7 matrix to the 5×5 shown above). The y component is exactly the same but is oriented vertically, as the sketch below demonstrates.
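The separability claim can be verified numerically; a minimal Octave sketch (the file name and the value of σ are assumptions):

pkg load image
I = im2double(rgb2gray(imread('hawk.png')));   # assumed input file
g = fspecial('gaussian', [1 7], 1.0);          # 1-D Gaussian, sigma = 1.0
sep = conv2(conv2(I, g, 'same'), g', 'same');  # x pass, then y pass
G2 = g' * g;                                   # equivalent 2-D kernel (outer product)
full2d = conv2(I, G2, 'same');
max(abs(sep(:) - full2d(:)))                   # difference is numerically negligible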
Program:
dftuv.m
function [U, V] = dftuv(M, N)
%DFTUV Computes meshgrid frequency matrices.
% [U, V] = DFTUV(M, N) computes meshgrid frequency matrices U and
% V. U and V are useful for computing frequency-domain filter
% functions that can be used with DFTFILT. U and V are both M-by-N.
u = 0:(M - 1);
v = 0:(N - 1);
% Shift frequencies above the midpoint so the origin is at (0,0)
idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;
[V, U] = meshgrid(v, u);
lpfilter.m
function H = lpfilter(type, M, N, D0, n)
%LPFILTER Computes frequency domain lowpass filters.
% The body below is a sketch of the standard implementation
% (ideal, Butterworth, and Gaussian lowpass filters).
[U, V] = dftuv(M, N);
D = sqrt(U.^2 + V.^2);   % distance from the origin of the transform
switch type
case 'ideal'
H = double(D <= D0);
case 'btw'
if nargin == 4
n = 1;
end
H = 1./(1 + (D./D0).^(2*n));
case 'gaussian'
H = exp(-(D.^2)./(2*(D0^2)));
otherwise
error('Unknown filter type.')
end
lowpassfrequncy.m
footBall = imread('football.jpg');   %assumed file name
%Convert to grayscale
footBall = rgb2gray(footBall);
%Gaussian lowpass in the frequency domain; D0 = 30 is an assumed cutoff
[M, N] = size(footBall);
H = lpfilter('gaussian', M, N, 30);
LPF_football = real(ifft2(H.*fft2(double(footBall))));
subplot(1,2,1), imshow(footBall), title('Original Image');
subplot(1,2,2)
imshow(LPF_football, [])
title('Filtered Image');
Output:
Applied Butterworth low-pass filter:
Image enhancement by high-pass/sharpening filter techniques.
A high-pass filter can be used to make an image appear sharper. These filters emphasize fine details in the image, exactly the opposite of the low-pass filter. High-pass filtering works in exactly the same way as low-pass filtering; it just uses a different convolution kernel.
A high-pass filter is a filter that passes high frequencies well but attenuates frequencies lower than the cut-off frequency. Sharpening is fundamentally a high-pass operation in the frequency domain. There are several standard forms of high-pass filters, such as the Ideal, Butterworth, and Gaussian high-pass filters. A high-pass filter (Hhp) is often represented by its relationship to the low-pass filter (Hlp):

Hhp = 1 − Hlp
Program:
hpfilter.m
function H = hpfilter(type, M, N, D0, n)
%HPFILTER Computes frequency domain highpass filters from the
% corresponding lowpass filters, using Hhp = 1 - Hlp.
if nargin == 4
n = 1; % Default value of n.
end
H = 1 - lpfilter(type, M, N, D0, n);
highpassfrequncy.m
footBall = imread('football.jpg');   %assumed file name
footBall = rgb2gray(footBall);
%Gaussian highpass in the frequency domain; D0 = 30 is an assumed cutoff
[M, N] = size(footBall);
H = hpfilter('gaussian', M, N, 30);
HPF_football = real(ifft2(H.*fft2(double(footBall))));
figure
subplot(1,2,1)
imshow(footBall);
title('Original Image');
subplot(1,2,2)
imshow(HPF_football, [])
title('High-pass Filtered Image');
Output:
Applied Gaussian high-pass filter:
Applied Butterworth high-pass filter:
Practical 6: Color Image Processing
The human visual system can distinguish hundreds of thousands of different color shades and
intensities, but only around 100 shades of grey. Therefore, in an image, a great deal of extra
information may be contained in the color, and this extra information can then be used to simplify
image analysis, e.g. object identification and extraction based on color.
Three independent quantities are used to describe any particular color. The hue is determined by the
dominant wavelength. Visible colors occur between about 400nm (violet) and 700nm (red) on the
electromagnetic spectrum, as shown in figure.
The saturation is determined by the excitation purity, and depends on the amount of white light mixed
with the hue. A pure hue is fully saturated, i.e. no white light mixed in. Hue and saturation together
determine the chromaticity for a given color. Finally, the intensity is determined by the actual amount
of light, with lighter corresponding to more intense colors.
Color model:
The purpose of a color model is to facilitate the specification of colors in some standard, generally
accepted way. In essence, a color model is a specification of a coordinate system and a subspace within
that system where each color is represented by a single point.
Separating the RGB Channels:
Code:
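A minimal Octave sketch of one common approach, which keeps one channel and zeroes the other two (the input file name is an assumption):

input_image = imread('hawk.png');              # assumed input file
R = input_image; R(:,:,2) = 0; R(:,:,3) = 0;   # red channel only
G = input_image; G(:,:,1) = 0; G(:,:,3) = 0;   # green channel only
B = input_image; B(:,:,1) = 0; B(:,:,2) = 0;   # blue channel only
figure, imshow(input_image); title('Original Image');
figure, imshow(R); title('Red channel');
figure, imshow(G); title('Green channel');
figure, imshow(B); title('Blue channel');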
Output:
Original Image:
Red channel:
Green channel:
Blue channel:
Pseudocolor Image Processing:
Pseudo color image processing is an important tool in digital image processing, for example of breast images. Pseudo coloring consists of assigning colors to gray values based on a specific criterion. The term pseudo color or false color is used to differentiate the process of assigning colors to monochrome images from the process associated with true color images. The first and foremost use of pseudo color is for human visualization and interpretation of gray scale events in an image or sequence of images.
Code:
#clc;
clear all;
#im=input('Enter the file name: ');
input_image=imread('hawk.png');
k=rgb2gray(input_image);
[x y z]=size(k);
% z is one for the grayscale image
k=double(k);
m=zeros(x, y, 3);   # initialize the output color image
for i=1:x
for j=1:y
if k(i,j)>=0 && k(i,j)<50          # dark pixels get one color tint
m(i,j,1)=k(i,j)+5;
m(i,j,2)=k(i,j)+10;
m(i,j,3)=k(i,j)+10;
elseif k(i,j)>=200 && k(i,j)<=255  # bright pixels get another tint
m(i,j,1)=k(i,j)+120;
m(i,j,2)=k(i,j)+60;
m(i,j,3)=k(i,j)+45;
else                               # mid-range pixels stay gray
m(i,j,:)=k(i,j);
end
end
end
figure,
imshow(uint8(k),[]);
title('Original Image');
figure,
imshow(uint8(m),[]);
title("Pseudo filtered image");
Output:
Original Image:
Pseudo Filtered image:
Color Slicing:
Highlighting a specific range of colors in an image is useful for separating objects from their surroundings. Color slicing is used either to:
• display the colors of interest so that they stand out from the background, or
• use the region defined by the colors as a mask for further processing.
The most straightforward approach is to extend the gray-level slicing techniques. Because a color pixel is an n-dimensional quantity, however, the resulting color transformation functions are more complicated than their gray-scale counterparts. In fact, the required transformations are more complex than the color component transforms considered thus far, because all practical color slicing approaches require each pixel's transformed color components to be a function of all n of the original pixel's color components.
One of the simplest ways to "slice" a color image is to map the colors outside some range of interest to a non-prominent neutral color.
Code:
input_image=imread('hawk.png');   # assumed input file
#Here we are converting the RGB image to HSV and displaying the hue plane with a colorbar
HSV = rgb2hsv(input_image);
H = HSV(:,:,1);
figure,imshow(H);colorbar;
#Keep only a hue range of interest (the limits below are assumptions) and
#desaturate everything else to a neutral gray
mask = (H < 0.55) | (H > 0.75);
S = HSV(:,:,2);
S(mask) = 0;   # zero saturation gives a neutral color
HSV(:,:,2) = S;
#At the end we convert HSV back to RGB and display the output
C = hsv2rgb(HSV);
figure,imshow(C);title('Sliced Image');
Output:
Original Image:
Sliced Image:
Practical 7: Image Compression Techniques and Watermarking
Decreasing the irrelevance or redundancy of an image is the fundamental aim of image compression techniques, so that data can be stored and transmitted effectively. The initial step in this technique is to convert the image from its spatial-domain representation into a different type of representation using known transforms, and then to encode the transformed values, i.e., the coefficients. This technique allows much greater compression of the data compared to predictive techniques, though at the cost of greater computational needs.
Compression is obtained by eliminating one or more of the three fundamental data redundancies:
1. Coding redundancy: This is present when less than optimal (i.e., smallest-length) code words are used.
2. Inter-pixel redundancy: This results from the correlations between the pixels of an image.
3. Psycho-visual redundancy: This is due to data that is ignored by the human visual system.
Code:
pkg load communications
sig = repmat([3 3 1 3 3 3 3 3 2 3],1,50);   # test signal dominated by symbol 3
symbols = [1 2 3];
p = [0.1 0.1 0.8];                          # symbol probabilities
dict = huffmandict(symbols,p);              # build the Huffman dictionary
hcode = huffmanenco(sig,dict);              # encode the signal
dhsig = huffmandeco(hcode,dict);            # decode it again
isequal(sig,dhsig)                          # lossless: prints 1
binarySig = de2bi(sig);
seqLen = numel(binarySig)                   # bits in the plain binary representation
binaryhcode = de2bi(hcode);
encodedLen = numel(binaryhcode)             # bits after Huffman coding
Output:
Watermarking Techniques:
The methods and standards of Section 8.2 make the distribution of images on digital media and over the Internet practical. Unfortunately, the images so distributed can be copied repeatedly and without error, putting the rights of their owners at risk.
Even when encrypted for distribution, images are unprotected after decryption. One way to discourage illegal duplication is to insert one or more items of information, collectively called a watermark, into potentially vulnerable images in such a way that the watermarks are inseparable from the images themselves. As integral parts of the watermarked images, they protect the rights of their owners in a variety of ways.
Code:
#Original Image (file names are assumptions)
f=imread('hawk.png');
#Resized the Original Image
fr=im2double(imresize(f,[560 560]));
#Watermarking Image
w=imread('watermark.jpeg');
#Again Resized the Watermarking Image
wr=im2double(imresize(w,[560 560]));
#Applied watermarking: a visible alpha blend of the two images
alpha=0.3;
fw=(1-alpha)*fr+alpha.*wr;
figure, imshow(fw); title('Watermarked Image');
Output:
Original Image:
Watermarked Image:
Practical 8: Basic Morphological Transformations
Morphological techniques probe an image with a small shape or template called a structuring
element. The structuring element is positioned at all possible locations in the image and it is compared
with the corresponding neighbourhood of pixels. Some operations test whether the element "fits"
within the neighbourhood, while others test whether it "hits" or intersects the neighbourhood.
When dealing with binary images, one of the principal applications of morphology is in extracting
image components that are useful in the representation and description of shape. In particular, we
consider morphological algorithms for extracting boundaries, connected components, the convex hull,
and the skeleton of a region.
Boundary Extraction:
The boundary of a set A, denoted by β(A), can be obtained by first eroding A by B and then performing the set difference between A and its erosion. That is,

β(A) = A − (A ⊖ B)

where B is a suitable structuring element.
Thinning:
Thinning is a morphological operation that is used to remove selected foreground pixels from binary
images, somewhat like erosion or opening. It can be used for several applications, but is particularly
useful for skeletonization. In this mode it is commonly used to tidy up the output of edge detectors by
reducing all lines to single pixel thickness. Thinning is normally only applied to binary images, and
produces another binary image as output.
Thickening:
Thickening is the morphological dual of thinning: it adds selected background pixels to the foreground, growing regions rather than shrinking them.
Hole Filling:
A hole may be defined as a background region surrounded by a connected border of foreground pixels. We develop an algorithm based on set dilation, complementation, and intersection for filling holes in an image.
Let A denote a set whose elements are 8-connected boundaries, each boundary enclosing a background region (i.e., a hole). Given a point in each hole, the objective is to fill all the holes with 1s. We begin by forming an array X0 of 0s (the same size as the array containing A), except at the locations in X0 corresponding to the given point in each hole, which we set to 1.
Then, the following procedure fills all the holes with 1s:

Xk = (Xk−1 ⊕ B) ∩ A^c,   k = 1, 2, 3, …

where B is a symmetric structuring element and A^c is the complement of A; the procedure terminates at step k when Xk = Xk−1.
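A minimal Octave sketch of this iterative procedure (the input file, structuring element, and seed point are assumptions):

pkg load image
A = im2bw(rgb2gray(imread('hawk.png')));   # assumed binary input
B = strel('square', 3);                    # symmetric structuring element
X = false(size(A));
X(50, 50) = true;                          # assumed seed point inside a hole
do
Xprev = X;
X = imdilate(X, B) & ~A;                   # dilate, then intersect with A's complement
until isequal(X, Xprev)
filled = A | X;                            # union of A and the filled holes
figure, imshow(filled); title('Hole Filled Image');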
Code:
# Read the image and convert to binary (file name is an assumption)
input_image=imread('hawk.png');
bw_im=im2bw(rgb2gray(input_image));
se=strel('disk',2,0);
# performing Dilation
mydilatedimg=imdilate(bw_im,se);
# performing Erosion
myderodeimg=imerode(bw_im,se);
figure;
subplot(1,2,1);
imshow(input_image); title('Original Image');
subplot(1,2,2);
imshow(bw_im); title('Binary Image');
# internal boundary: the image minus its erosion
int_boundary=imsubtract(bw_im,myderodeimg);
figure;
subplot(1,2,1);
imshow(int_boundary); title('Internal boundary');
#performing Thinning
thin_im=bwmorph(bw_im,'thin');
figure;
subplot(1,2,1);
imshow(thin_im); title('Thin Image');
#performing Thickening
thick_im=bwmorph(bw_im,'thicken');
subplot(1,2,2);
imshow(thick_im); title('Thick Image');
#morphological gradient: dilation minus erosion
im_gradiant=imsubtract(mydilatedimg,myderodeimg);
figure;
subplot(1,2,1);
imshow(im_gradiant,[]); title('Gradient Image');
Output:
Original Image and Binary Format of Original Image:
Thinning and Thickening Images: