
(Approved by AICTE, affiliated to AU, NAAC & NBA accredited)

DEPARTMENT
OF
BIOMEDICAL ENGINEERING

DIGITAL IMAGE PROCESSING LABORATORY
(EC8762)

NAME : …………………………………………
REG NO : …………………………………………



INDEX

EXPT. NO.   DATE   NAME OF THE EXPERIMENT                                PAGE NO.   MARKS   SIGN

 1.                Analysis of images with different color models
 2.                Histogram equalization of images
 3.                Intensity transformation of images
 4.                Image sampling and quantization
 5.                Edge detection algorithm
 6.                Sharpen image using gradient mask
 7.                Spatial resolution
 8.                Image enhancement: spatial filtering
 9.                Morphological operations: erosion and dilation
10.                DCT/IDCT computation
11.                Image restoration
12.                Segmentation: simple threshold
13.                Segmentation using watershed transformation
14.                Image compression using the Haar wavelet transform
15.                Frequency domain filters



BONAFIDE CERTIFICATE

Certified that this is the bonafide record of work completed by …………………………. (Register No. .........................................) of Semester VII in the Digital Image Processing Laboratory (EC8762), B.E. degree course at S K R Engineering College, during the academic year 2022-2023.

DATE FACULTY INCHARGE HEAD OF THE DEPARTMENT

Submitted for the University Examination held on ….…………. at S K R


Engineering College.

DATE INTERNAL EXAMINER EXTERNAL EXAMINER



EXPT.NO: 1 ANALYSIS OF IMAGES WITH DIFFERENT COLOR MODELS
DATE:

AIM: To analyze images using different color models.

Apparatus Required: Computer, MATLAB Software

Syntax :

imshow(I)
imshow(I,[low high])

imshow(RGB)

imshow(BW)

imshow(X,map)

imshow(filename)

himage = imshow(...)

imshow(..., param1, val1, param2, val2,...)

THEORY:

imshow(I) displays the grayscale image I. imshow(I,[low high]) displays the grayscale image
I, specifying the display range for I in [low high]. The value low (and any value less than low)
displays as black; the value high (and any value greater than high) displays as white. Values
in between are displayed as intermediate shades of gray, using the default number of gray
levels. If you use an empty matrix ([]) for [low high], imshow uses [min(I(:)) max(I(:))]; that
is, the minimum value in I is displayed as black, and the maximum value is displayed as white.
imshow(RGB) displays the truecolor image RGB. imshow(BW) displays the binary image
BW. imshow displays pixels with the value 0 (zero) as black and pixels with the value 1 as
white. imshow(X,map) displays the indexed image X with the colormap map. A color map
matrix may have any number of rows, but it must have exactly 3 columns. Each row is
interpreted as a color, with the first element specifying the intensity of red light, the second
green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0. imshow(filename)
displays the image stored in the graphics file filename. The file must contain an image that can be
read by imread or dicomread. imshow calls imread or dicomread to read the image from the file, but
does not store the image data in the MATLAB workspace. If the file contains multiple images, imshow
displays the first image in the file. The file must be in the current directory or on the MATLAB path.
himage = imshow(...) returns the handle to the image object created by imshow. imshow(..., param1,
val1, param2, val2,...) displays the image, specifying parameters and corresponding values that
control various aspects of the image display.
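A minimal usage sketch of these forms, assuming the standard MATLAB demo images pout.tif (grayscale) and trees.tif (indexed) are on the path:

I = imread('pout.tif');            % 8-bit grayscale demo image
figure, imshow(I)                  % default display range [0 255]
figure, imshow(I, [])              % stretch display range to [min(I(:)) max(I(:))]
[X, map] = imread('trees.tif');    % indexed image together with its colormap
figure, imshow(X, map)             % display the indexed image with its colormap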




PROGRAM:

Converting RGB Image into gray scale image & extracting the color Spaces

clear all;
clc;
close all;
image1=imread('C:\Users\Welcome\Desktop\New folder\girl.png');
image2=rgb2gray(image1);     % grayscale version of the input
[r, c, d]=size(image1);
z=zeros(r,c);
tempr=image1;
tempr(:,:,2)=z;
tempr(:,:,3)=z;
subplot(221);
imshow(tempr);title('red channel');
tempg=image1;
tempg(:,:,1)=z;
tempg(:,:,3)=z;
subplot(222);
imshow(tempg);title('green channel');
tempb=image1;
tempb(:,:,1)=z;
tempb(:,:,2)=z;
subplot(223);
imshow(tempb);title('blue channel');

%HSI conversion (rgb2hsv actually returns HSV; the V channel is displayed as the intensity-like component)
HIS=rgb2hsv(image1);
figure(2)
subplot(221);
imshow(image1);title('original image');

subplot(222);
imshow(HIS(:,:,1));title('H');
subplot(223);
imshow(HIS(:,:,2));title('S');
subplot(224);
imshow(HIS(:,:,3));title('I');
%YIQ conversion (note: the coefficients below are the BT.601 YUV ones;
%classical YIQ uses I = 0.596R - 0.274G - 0.322B, Q = 0.211R - 0.523G + 0.312B)
R=double(image1(:,:,1));    % convert to double so negative I/Q values are not clipped
G=double(image1(:,:,2));
B=double(image1(:,:,3));
Y=0.299*R+0.587*G+0.114*B;
I=-0.14713*R-0.28886*G+0.436*B;



Q=0.615*R-0.51499*G-0.10001*B;
YIQ=cat(3,Y,I,Q);
figure(3)
subplot(221);
imshow(image1);title('original image');

subplot(222);
imshow(Y,[]);title('Y');
subplot(223);
imshow(I,[]);title('I');
subplot(224);
imshow(Q,[]);title('Q');

OUTPUT:



Result:

Thus, the analysis of images with different color models was performed and the results were displayed.



EXPT.NO: 2 HISTOGRAM EQUALIZATION OF IMAGES
DATE:

AIM:

To enhance contrast using Histogram Equalization.

APPARATUS REQUIRED:

Computer, MATLAB Software

SYNTAX:

J = histeq(I, hgram)

J = histeq(I, n)

[J, T] = histeq(I,...)

newmap = histeq(X, map, hgram)

newmap = histeq(X, map)

[newmap, T] = histeq(X,...)

THEORY:

histeq enhances the contrast of images by transforming the values in an intensity image, or
the values in the colormap of an indexed image, so that the histogram of the output image
approximately matches a specified histogram. J = histeq(I, hgram) transforms the intensity
image I so that the histogram of the output intensity image J with length(hgram) bins
approximately matches hgram. histeq automatically scales hgram so that sum(hgram) =
prod(size(I)). The histogram of J will better match hgram when length(hgram) is much
smaller than the number of discrete levels in I. J = histeq(I, n) transforms the intensity image
I, returning in J an intensity image with n discrete gray levels. A roughly equal number of
pixels is mapped to each of the n levels in J, so that the histogram of J is approximately flat.



(The histogram of J is flatter when n is much smaller than the number of discrete levels in I.)
The default value for n is 64.

[J, T] = histeq(I,...) returns the grayscale transformation that maps gray levels in the image I
to gray levels in J. newmap = histeq(X, map, hgram) transforms the colormap associated with
the indexed image X so that the histogram of the gray component of the indexed image
(X,newmap) approximately matches hgram. The histeq function returns the transformed
colormap in newmap. length(hgram) must be the same as size(map,1). newmap = histeq(X,
map) transforms the values in the colormap so that the histogram of the gray component of
the indexed image X is approximately flat. It returns the transformed colormap in newmap.
[newmap, T] = histeq(X,...) returns the grayscale transformation T that maps the gray
component of map to the gray component of newmap.

ALGORITHM

When you supply a desired histogram hgram, histeq chooses the grayscale transformation T to minimize |c1(T(k)) - c0(k)|, where c0 is the cumulative histogram of A and c1 is the cumulative sum of hgram, for all intensities k. This minimization is subject to the constraints that T must be monotonic and that c1(T(a)) cannot overshoot c0(a) by more than half the distance between the histogram counts at a. histeq uses the transformation b = T(a) to map the gray levels in X (or the colormap) to their new values. If you do not specify hgram, histeq creates a flat hgram:
hgram = ones(1,n)*prod(size(A))/n;
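The flat-histogram case can also be written out directly from the cumulative histogram. A minimal sketch (not the built-in histeq, and assuming an 8-bit grayscale input such as tire.tif):

I = imread('tire.tif');                 % 8-bit grayscale image
counts = imhist(I, 256);                % histogram with 256 bins
cdf = cumsum(counts) / numel(I);        % normalized cumulative histogram (c0)
T = uint8(round(255 * cdf));            % monotonic gray-level mapping
J = T(double(I) + 1);                   % apply the lookup table (MATLAB indices start at 1)
figure, imshow(J), title('manually equalized');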



PROGRAM:

Enhance the contrast of an intensity image using histogram equalization.

I = imread('tire.tif');

J = histeq(I);

imshow(I)

figure, imshow(J)

%%Display a histogram of the original image.

figure; imhist(I,64)

%%Compare it to a histogram of the processed image.

figure; imhist(J,64)



OUTPUT:

RESULT:

Thus, histogram equalization of the image was performed and the enhanced image and histograms were displayed.



EXPT.NO: 3 INTENSITY TRANSFORMATION OF IMAGES
DATE:

AIM:

To implement the following intensity transformations of images:

1. IMAGE NEGATIVES

2. LOG TRANSFORMATIONS

3. POWER LAW (GAMMA) TRANSFORMATIONS

4. PIECEWISE-LINEAR TRANSFORMATION FUNCTIONS

APPARATUS REQUIRED:

Computer, MATLAB Software

THEORY:

Basic Intensity Transformation Functions

Three basic types of functions used for image enhancement are:


1. Linear transformation
2. Logarithmic transformation
3. Power Law transformation

Consider an image r with intensity levels in the range [0, L-1].

1. Image Negatives

Equation: s = L - 1 - r

Consider L = 256 and let r be the intensity of the image (range 0 to 255).

2. Log Transformation

Equation: s = c log(1 + r), where c is a constant

Consider c = 1 and let r be the intensity of the image (range 0 to 255).



EXPLANATION:
a. Convert the image to type double
b. Apply the log transformation
c. Map the obtained values to the range [0 255] (see the sketch below)
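A minimal sketch of steps (a)-(c), assuming the demo image cameraman.tif and c = 1 as stated above:

r = im2double(imread('cameraman.tif'));   % (a) convert the image to type double, range [0 1]
c = 1;
s = c * log(1 + r);                       % (b) apply the log transformation
s = mat2gray(s);                          % (c) rescale the result to the full display range
figure, imshow(s), title('log transformed');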

3. Power-Law (Gamma) Corrections

Equation: s = c * r^gamma, where c and gamma are positive constants

Consider c = 1, gamma = 0.04 and let r be the intensity of the image (range 0 to 255).

EXPLANATION:
The transformation is plotted for different values of gamma for the intensity levels [ 0 255].
The output image intensity values are mapped to the range [0 255]

Piecewise-linear transformation is a type of gray-level transformation used for image enhancement. It is a spatial-domain method, used to manipulate an image so that the result is more suitable than the original for a specific application.

Bit Extraction:

An 8-bit image can be represented in the form of bit planes. Each plane represents one bit of all pixel values. Bit plane 7 contains the most significant bit (MSB) and bit plane 0 contains the least significant bit (LSB). The four MSB planes contain most of the visually significant data. This technique is useful for image compression and steganography.



PROGRAM: IMAGE NEGATIVE

a=imread('C:\Users\pc\Desktop\images\leena.jpg');
b=imread('C:\Users\pc\Desktop\images\apple.jpg');
c=255-a; %%% equivalently: a_Neg = imcomplement(a);
d=255-b; %%% equivalently: d_Neg = imcomplement(b);

subplot(2,2,1),imshow(a),title('original gray image');
subplot(2,2,2),imshow(c),title('Negative of gray image');
subplot(2,2,3),imshow(b),title('original color image');
subplot(2,2,4),imshow(d),title('Negative of color image');

PROGRAM: POINT TRANSFORMATION (LOG AND POWER-LAW (GAMMA) TRANSFORMATIONS)

a=imread('cameraman.tif');
ad=im2double(a);
x=ad; y=ad;
[r,c]=size(ad);
factor=6;
for i=1:r
for j=1:c
x(i,j)=factor*log(1+ad(i,j));    % log transformation
y(i,j)=factor*ad(i,j)^2;         % power-law transformation (gamma = 2)
end
end
subplot(1,2,1);imshow(ad);title('Before');
subplot(1,2,2);imshow(x);title('After');
figure,imshow(y);



PROGRAM: PIECEWISE-LINEAR TRANSFORMATION FUNCTIONS

a=imread('C:\Users\pc\Desktop\FDP image classification\gray scale2\6.jpg');


b=double(a);
f1=bitget(b,1);
f2=bitget(b,2);
f3=bitget(b,3);
f4=bitget(b,4);
f5=bitget(b,5);
f6=bitget(b,6);
f7=bitget(b,7);
f8=bitget(b,8);
subplot(331);imshow(f1);
subplot(332);imshow(f2);
subplot(333);imshow(f3);
subplot(334);imshow(f4);
subplot(335);imshow(f5);
subplot(336);imshow(f6);
subplot(337);imshow(f7);
subplot(338);imshow(f8);



OUTPUT: IMAGE NEGATIVE



OUTPUT: POINT TRANSFORMATION (LOG AND POWER-LAW (GAMMA) TRANSFORMATIONS)



OUTPUT: PIECEWISE LINEAR TRANSFORMATION:

Conclusion: Thus, the intensity transformation of the image was done successfully.



EXPT.NO: 4 IMAGE SAMPLING AND QUANTIZATION
DATE:

AIM:

To implement image sampling and quantization.

APPARATUS REQUIRED:

Computer, MATLAB Software

THEORY:

In order to become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude. Typically, a frame grabber or digitizer is used to sample and quantize the analogue video signal. Hence, to create a digital image, we need to convert continuous data into digital form. This is done in two steps:

 Sampling
 Quantization

The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of grey levels in the digitized image. The magnitude of the sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in an image that has been quantized with insufficient brightness levels.
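As a worked example of the quantizer used in the program below: quantizing an 8-bit image (B = 256 levels) to L = 4 levels gives a step size q = B/L = 64, so every input value in [64, 127] is replaced by floor(r/64)*64 + 32 = 96, the mid-point of its interval. The same idea in two lines (X is assumed to be a uint8 grayscale image already in the workspace):

q = 256/4;                                % step size for 4 output levels
Y = uint8(floor(double(X)/q)*q + q/2);    % map each pixel to the mid-point of its interval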



PROGRAM: SAMPLING

I=double(imread('rose.jpeg'));        % read the input image (filename as given)

[j,k,c]=size(I);

%scale factor for the reduced (sub-sampled) image size
scale=25;
x_new=round(j./scale);
y_new=round(k./scale);

%determine the ratio of the old dimensions compared to the new dimensions
x_scale=round(j./x_new);
y_scale=round(k./y_new);

%Declare and initialize an output image buffer
m=zeros(x_new,y_new,c);

%Generate the output image by keeping every x_scale-th / y_scale-th pixel
for ch=1:c
for count1=1:x_new
for count2=1:y_new
m(count1,count2,ch)=I(count1.*x_scale,count2.*y_scale,ch);
end
end
end

%Display the two images side by side
subplot(121); imagesc(uint8(I)); axis tight;
subplot(122); imagesc(uint8(m)); axis tight;



OUTPUT:

PROGRAM : QUANTIZATION

%image reading
X=double(imread('rose.jpeg'));
[height,width,c]=size(X);

%Gray levels of the input image
B=256;

%quantize to L levels
L=5;
q=B/L;

%Quantizer characteristics (lookup table mapping each input level to the mid-point of its interval)
Q=zeros(256,1);
for i=0:255
Q(i+1,1)=floor(i/q)*q + q/2;
end

%Output image initialization
Y=zeros(size(X));
for ch=1:c
for i=1:height
for j=1:width
Y(i,j,ch)=Q(X(i,j,ch)+1);
end
end
end

%Quantizer plot
figure; plot(0:255,Q);

%Quantized image
figure; imagesc(uint8(Y))

OUTPUT:

Conclusion: Thus, we have sampled and quantized the image.



EXPT.NO: 5 EDGE DETECTION ALGORITHM
DATE:

AIM: To implement edge detection algorithms.

OBJECTIVE: To detect edges in an image using the Roberts, Sobel, Canny, LoG and Prewitt operators.

TOOLS REQUIRED: MATLAB

THEORY: The Canny edge detector is an edge detection operator that uses a multistage
algorithm to detect a wide range of edges in images. It was developed by John F. Canny in
1986. Canny also produced a computational theory of edge detection explaining why the
technique works. The process of the Canny edge detection algorithm can be broken down into five different steps:

1. Apply Gaussian filter to smooth the image in order to remove the noise

2. Find the intensity gradients of the image

3. Apply non-maximum suppression to get rid of spurious response to edge detection

4. Apply double threshold to determine potential edges

5. Track edges by hysteresis: finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.
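A hedged sketch of steps 1-2 (smoothing and gradient computation) using toolbox built-ins; the remaining steps are performed internally by edge(a,'canny') in the program below. imgaussfilt requires R2015a or later, and peppers.png is a standard demo image:

a = rgb2gray(imread('peppers.png'));      % RGB demo image converted to grayscale
g = imgaussfilt(a, 1.4);                  % step 1: Gaussian smoothing (sigma = 1.4, illustrative)
[Gmag, Gdir] = imgradient(g, 'sobel');    % step 2: gradient magnitude and direction
figure, imshow(Gmag, []), title('gradient magnitude');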



PROGRAM:

clear all;
a=imread('C:\Users\Welcome\Desktop\New folder\brick.jpg');
a=rgb2gray(a);
b=edge(a,'roberts');
c=edge(a,'sobel');
d=edge(a,'canny');
f=edge(a,'log');
x=edge(a,'prewitt');
subplot(231),imshow(a),title('original image');
subplot(232),imshow(b),title('Roberts image');
subplot(233),imshow(c),title('sobel image');
subplot(234),imshow(d),title('canny image');
subplot(235),imshow(f),title('log image');
subplot(236),imshow(x),title('prewitt image');



OUTPUT:

Conclusion: Thus, we have detected the edges in the original image.



EXPT.NO: 6 SHARPEN IMAGE USING GRADIENT MASK
DATE:

AIM: To sharpen an image using a gradient mask.

OBJECTIVE: To compute a gradient (Sobel) mask response and add it to the original image to enhance edges.

TOOLS REQUIRED: MATLAB

THEORY: An image gradient is a directional change in the intensity or color of an image. The gradient of the image is one of the fundamental building blocks in image processing; for example, the Canny edge detector uses the image gradient for edge detection.

Mathematically, the gradient of a two-variable function (here the image intensity function)
at each image point is a 2D vector with the components given by the derivatives in the
horizontal and vertical directions. At each image point, the gradient vector points in the
direction of largest possible intensity increase, and the length of the gradient vector
corresponds to the rate of change in that direction.
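As a sketch, the two gradient components can be obtained with the Sobel masks and combined into the gradient magnitude; I is assumed to be a grayscale image already in the workspace:

h = fspecial('sobel');                     % Sobel mask (responds to vertical intensity changes)
gy = imfilter(double(I), h,  'replicate'); % rate of change in the vertical direction
gx = imfilter(double(I), h', 'replicate'); % rate of change in the horizontal direction
gmag = sqrt(gx.^2 + gy.^2);                % length of the gradient vector at each pixel
figure, imshow(gmag, []), title('gradient magnitude');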



PROGRAM:

% Program of sharpen image using gradient mask


I=imread('C:\Users\Welcome\Desktop\New folder\girl.png');
subplot(2,2,1);
imshow(I)
title('Original Image');
h=fspecial('sobel');
f=imfilter(I,h,'replicate');
subplot(2,2,2);
imshow(f)
title('filtered image by sobel mask');
s=I+f;
subplot(2,2,3);
imshow(s)
title('Final o/p Image');



OUTPUT:

Conclusion: Thus, we have performed the sharpening operation using gradient mask on the
Original image.



EXPT.NO: 7 SPATIAL RESOLUTION
DATE:

AIM: To implement a program demonstrating spatial resolution.

TOOLS REQUIRED: MATLAB

THEORY:

Image resolution

Image resolution can be defined in many ways. One type is pixel resolution, which concerns the total number of pixels and the aspect ratio. Here we define another type of resolution: spatial resolution.

Spatial resolution

Spatial resolution expresses the fact that the clarity of an image cannot be determined by the pixel resolution alone; the number of pixels in an image does not by itself determine clarity. Spatial resolution can be defined as the smallest discernible detail in an image (Digital Image Processing, Gonzalez and Woods, 2nd Edition), or alternatively as the number of independent pixel values per inch.
In short, spatial resolution means that we cannot compare two different types of images to see which one is clearer. If we have to compare two images to see which one is clearer, or which has more spatial resolution, we have to compare two images of the same size.



PROGRAM: SPATIAL RESOLUTION

clc;
clear;
grayImage = imread('C:\Users\Welcome\Desktop\New folder\cameraman.tif');
subplot(3,1,1);
imshow(grayImage);
title('Original image');
axis on;
smallImage = imresize(grayImage, 1/16, 'nearest');
subplot(3,1,2);
imshow(smallImage);
title('small image');
axis on;
bigImage = imresize(smallImage, 16, 'nearest');
subplot(3,1,3);
imshow(bigImage);
title('bigimage');
axis on;

OUTPUT:

Conclusion: Thus, we have performed the spatial resolution on the Original image.



EXPT.NO: 8 IMAGE ENHANCEMENT: SPATIAL FILTERING
DATE:

AIM: To enhance the image using spatial filtering.

TOOLS REQUIRED: MATLAB

THEORY:

Basics of Spatial Filtering

The concept of filtering has its roots in the use of the Fourier transform for signal
processing in the so-called frequency domain.

The term spatial filtering refers to filtering operations that are performed directly on the pixels of an image.

Mechanics of spatial filtering

The process consists simply of moving the filter mask from point to point in an image.

At each point (x,y) the response of the filter at that point is calculated using a predefined
relationship

Linear spatial filtering

The result is the sum of products of the mask coefficients with the corresponding pixels
directly under the mask

Nonlinear spatial filtering

Nonlinear spatial filters also operate on neighborhoods, and the mechanics of sliding a mask
past an image are the same as was just outlined.

The filtering operation is based conditionally on the values of the pixels in the neighborhood
under consideration.
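A minimal sketch contrasting a linear filter (sum of products under a 3 x 3 averaging mask) with a nonlinear one (3 x 3 median); I is assumed to be a 2-D grayscale image already in the workspace:

w = ones(3,3) / 9;                        % 3x3 averaging mask
g_lin = imfilter(I, w, 'replicate');      % linear: weighted sum of the neighbourhood
g_med = medfilt2(I, [3 3]);               % nonlinear: median of the neighbourhood
subplot(131), imshow(I),     title('original');
subplot(132), imshow(g_lin), title('3x3 average');
subplot(133), imshow(g_med), title('3x3 median');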



PROGRAM:

[file,path] = uigetfile('C:\Users\Welcome\Desktop\New folder\*.*');


fname=strcat(path,file);
cat=imread(fname);

%motion blur
h=fspecial('motion', 20, 45);
cat_motion=imfilter(cat,h);

%sharpening
h=fspecial('unsharp');
cat_sharp=imfilter(cat,h);

%horizontal edge-detection
h=fspecial('sobel');
cat_sobel=imfilter(cat,h);

subplot(221); imshow(cat)
title('original image');
subplot(222); imshow(cat_motion)
title('motion filter out image');
subplot(223); imshow(cat_sharp)
title('sharpen filter out image');
subplot(224); imshow(cat_sobel)
title('sobel filter out image');

OUTPUT:

Conclusion: Thus, we have performed the spatial filtering on the Original image.



EXPT.NO: 9 MORPHOLOGICAL OPERATIONS: EROSION AND DILATION
DATE:

AIM: To perform the morphological operations of erosion and dilation on an image.

TOOLS REQUIRED: MATLAB

THEORY:

Erosion (usually represented by ⊖) is one of the two fundamental operations (the other being dilation) in morphological image processing, on which all other morphological operations are based. It was originally defined for binary images, later being extended to grayscale images, and subsequently to complete lattices. With A and B as two sets in Z2 (2-D integer space), the dilation of A by B is defined as

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }

Here A is the image and B is called a structuring element. (B̂)z means taking the reflection of B about its origin and shifting it by z. Hence the dilation of A by B is the set of all displacements z such that (B̂)z and A overlap by at least one element. Flipping B about the origin and then moving it past image A is analogous to the convolution process; in practice the flipping of B is not always done. Dilation adds pixels to the boundaries of objects in an image; the number of pixels added depends on the size and shape of the structuring element. Based on this definition, dilation can equivalently be written as

A ⊕ B = { z | [ (B̂)z ∩ A ] ⊆ A }

Erosion, the dual operation used below together with dilation, is defined as A ⊖ B = { z | (B)z ⊆ A }, the set of all displacements z for which B, translated by z, is contained entirely within A.
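A small numeric illustration of these definitions, using a toy matrix and a cross-shaped structuring element (values chosen only for illustration):

A = zeros(7);  A(3:5, 3:5) = 1;           % a 3x3 square of foreground pixels
B = [0 1 0; 1 1 1; 0 1 0];                % cross-shaped structuring element
D = imdilate(A, B);                       % dilation: pixels added around the boundary of A
E = imerode(A, B);                        % erosion: pixels removed from the boundary of A
disp(D); disp(E);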



PROGRAM:

f=imread('C:\Users\Welcome\Desktop\New folder\girl.png');
B=[0 1 1;1 1 1;0 1 0];
f1=imdilate(f,B);
se=strel('disk',10);
f2=imerode(f,se);
figure,imshow(f)
title('input image');
figure,imshow(f1)
title('dilated image');
figure,imshow(f2)
title('eroded image');
OUTPUT:

Conclusion: Thus, we have performed the morphological operation on the Original image.



EXPT.NO: 10 DCT/IDCT COMPUTATION
DATE:

AIM: To write a program for DCT/IDCT computation.

TOOLS REQUIRED: MATLAB

THEORY: A discrete cosine transform (DCT) expresses a finite sequence of data points in
terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to
numerous applications in science and engineering, from lossy compression of audio (e.g.
MP3) and images (e.g. JPEG) (where small high-frequency components can be discarded), to
spectral methods for the numerical solution of partial differential equations. The use of
cosine rather than sine functions is critical for compression, since it turns out (as described
below) that fewer cosine functions are needed to approximate a typical signal, whereas for
differential equations the cosines express a particular choice of boundary conditions. In
particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform
(DFT), but using only real numbers. The DCTs are generally related to Fourier Series
coefficients of a periodically and symmetrically extended sequence whereas DFTs are
related to Fourier Series coefficients of a periodically extended sequence. DCTs are
equivalent to DFTs of roughly twice the length, operating on real data with even symmetry
(since the Fourier transform of a real and even function is real and even), whereas in some
variants the input and/or output data are shifted by half a sample. There are eight standard
DCT variants, of which four are common. Like for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor; combined with appropriate factors of √2, this can be used to make the transform matrix orthogonal.
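A hedged sketch of the forward and inverse transform on an image using the toolbox functions dct2 and idct2 (the program below instead builds and displays the DCT basis images; the 0.01 threshold is illustrative):

I = im2double(imread('cameraman.tif'));   % grayscale test image
C = dct2(I);                              % forward 2-D DCT
C(abs(C) < 0.01) = 0;                     % discard small coefficients (crude compression)
R = idct2(C);                             % inverse 2-D DCT reconstructs the image
figure, imshow(R), title('reconstructed from thresholded DCT');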



PROGRAM:

clc;
clear all;
close all;
m=input('Enter the basis matrix dimension: ');
% Request user input
n=m;
alpha2=ones(1,n)*sqrt(2/n);
alpha2(1)=sqrt(1/n);
alpha1=ones(1,m)*sqrt(2/m);
alpha1(1)=sqrt(1/m); % DC term uses sqrt(1/m)
for u=0:m-1
for v=0:n-1
for x=0:m-1
for y=0:n-1
a{u+1,v+1}(x+1,y+1)=alpha1(u+1)*alpha2(v+1)*...
cos((2*x+1)*u*pi/(2*n))*cos((2*y+1)*v*pi/(2*n));
end
end
end
end

mag=a;
figure(3) % Create figure graphics object
k=1;
% Code to plot the basis
for i=1:m
for j=1:n
subplot(m,n,k) % Create axes in tiled position
imshow(mag{i,j},[]); k=k+1;
end

end



Enter the basis matrix dimension: 5

RESULT:

Conclusion: Thus, we have obtained the DCT/IDCT for the image.



EXPT.NO: 11 IMAGE RESTORATION
DATE:
AIM: To write a program for image restoration.

TOOLS REQUIRED: MATLAB

THEORY: Image restoration is the operation of taking a corrupt/noisy image and


estimating the clean, original image. Corruption may come in many forms such as motion
blur, noise and camera mis-focus.[1] Image restoration is performed by reversing the process
that blurred the image and such is performed by imaging a point source and use the point
source image, which is called the Point Spread Function (PSF) to restore the image
information lost to the blurring process.
Image restoration is different from image enhancement in that the latter is designed to
emphasize features of the image that make the image more pleasing to the observer, but not
necessarily to produce realistic data from a scientific point of view. Image enhancement
techniques (like contrast stretching or de-blurring by a nearest neighbor procedure)
provided by imaging packages use no a priori model of the process that created the image.
With image enhancement noise can effectively be removed by sacrificing some resolution,
but this is not acceptable in many applications. In a fluorescence microscope, resolution in
the z-direction is bad as it is. More advanced image processing techniques must be applied
to recover the object.
The objective of image restoration techniques is to reduce noise and recover resolution loss. Image restoration techniques are performed either in the image domain or in the frequency domain. The most straightforward and conventional technique for image restoration is deconvolution, which is performed in the frequency domain: after computing the Fourier transform of both the image and the PSF, the resolution loss caused by the blurring factors is undone. Because it directly inverts the PSF, which typically has a poor matrix condition number, this deconvolution technique amplifies noise and creates an imperfect deblurred image. Also, the blurring process is conventionally assumed to be shift-invariant. Hence more sophisticated techniques, such as regularized deblurring, have been developed to offer robust recovery under different types of noise and blurring functions. Image restoration is of three types: 1. geometric correction, 2. radiometric correction, 3. noise removal.
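As a sketch of the regularized approach mentioned above, the toolbox function deconvwnr performs Wiener deconvolution given an estimate of the PSF and of the noise-to-signal ratio; the PSF and nsr values here are illustrative assumptions, not part of the program below:

I = im2double(imread('cameraman.tif'));
PSF = fspecial('motion', 21, 11);         % assumed linear motion-blur PSF
blurred = imfilter(I, PSF, 'conv', 'circular');
nsr = 0.01;                               % assumed noise-to-signal power ratio
restored = deconvwnr(blurred, PSF, nsr);  % Wiener (regularized) deconvolution
figure, imshowpair(blurred, restored, 'montage');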



PROGRAM:

% load image
X = double(imread('4.bmp'));
X = X-mean(X(:));
[m,n] = size(X);
% show image and DFT
fX = fft2(X);
figure(1)
imshow(real(X),[]);
title('original image')
figure(2)
imshow(fftshift(log(1+abs(fX))),[]);
title('log(1+|DFT|)) original image');
% model blurring filter
s = 24; t= 0; u = 1; v=0;
g = [ones(s,1);zeros(m-s-t,1); ones(t,1)];
%g = [ones(s,1); 0.99; zeros(m-s-t-2,1); 0.99; ones(t,1)];   % alternative blur kernel (commented out)
g = g/sum(abs(g));
h = [ones(u,1);
zeros(n-u-v,1);
ones(v,1)];
h = h/sum(abs(h));
f =g*h';
ff = fft2(f);
figure(3)
imshow(fftshift(log(1+abs(ff))),[])
title('amplitude: log(1+|OTF|)');
figure(4)
imshow(fftshift(angle(ff)),[]);
title('phase of OTF');
% get pseudo inverse filter
ff(find(abs(ff)==0))=NaN;
aff = abs(ff); pff = ff./aff; apiff = 1./aff; ppiff = conj(pff);
ppiff(find(isnan(ppiff))) = 0;
cap = 11;
apiff(find(apiff > cap)) = cap;
apiff(find(isnan(apiff))) = 0;
piff = apiff.*ppiff;
% deblur and show
frX = piff.*fX;
rX = real(ifft2(frX));
figure(5)
imshow(fftshift(log(1+abs(frX))),[]);
title('log(1+|DFT|) restored image');
figure(6)
imshow(rX,[]);                 % display the restored (deblurred) image
title('restored image');



Conclusion: Thus, we done the image restoration on the image.



EXPT.NO: 12 SEGMENTATION: SIMPLE THRESHOLD
DATE:
AIM: To segment an image using simple thresholding.

TOOLS REQUIRED: MATLAB

THEORY:

Image thresholding is a simple, yet effective, way of partitioning an image into a foreground
and background. This image analysis technique is a type of image segmentation that isolates
objects by converting grayscale images into binary images. Image thresholding is most
effective in images with high levels of contrast.

Definition
The simplest thresholding methods replace each pixel in an image with a black pixel if the
image intensity is less than some fixed constant T or a white pixel if the image intensity is
greater than that constant. In the example image on the right, this results in the dark tree
becoming completely black, and the white snow becoming completely white.
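In practice the constant T is often chosen automatically. A minimal sketch using Otsu's method via graythresh (imbinarize requires R2016a or later; im2bw, as used in the program below, works on older releases):

I = imread('coins.png');                  % grayscale demo image
level = graythresh(I);                    % Otsu's method returns a normalized level in [0 1]
BW = imbinarize(I, level);                % white where I > level*255, black elsewhere
figure, imshowpair(I, BW, 'montage');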



PROGRAM: SIMPLE THRESHOLD

clear all;

%%%%%%%%%%%% Enter Image 1 %%%%%%%%%%%%%%%%%%%%%


[file,path] = uigetfile('C:\Users\Welcome\Desktop\New folder\\IMAGES\*.*');
fname=strcat(path,file);
a=imread(fname);
a = rgb2gray(a);
subplot(3,3,1);
imshow(a); title('Original Image');

% Simple thresholding at 0.3


% This is equivalent to thresholding at pixel value
% 0.3 x 255 = 76.5, approximately 77
% im2bw() converts the image to binary using the normalized threshold level

level = 0.3;
%% Display the threshold image
subplot(3,3,2);
segimage1 = im2bw(a,level);
imshow(segimage1); title('Simple Thresholding at 0.3');

% Simple thresholding at 0.6


% So threshold value is 0.6 x 255 = 153
% Single Thresholding can be done like this also

%% Display the threshold image


subplot(3,3,3);
imshow(a > 153); title('Simple Thresholding at 0.6');

PROGRAM: MULTI THRESHOLDING

% Multiple thresholding Algorithm


% Let us assume that the output should be 0 if the pixel value is
% <= 0.1 x 255 = 25.5 (about 26), 204 if the pixel value is <= 0.9 x 255 = 230,
% and 0 if the pixel value is above 230.

%% Create a temporary matrix g


clear all;

%%%%%%%%%%%% Enter Image 1 %%%%%%%%%%%%%%%%%%%%%



[file,path] = uigetfile('C:\Users\Welcome\Desktop\New folder\\IMAGES\*.*');
fname=strcat(path,file);
a=imread(fname);

tmp = a;

[m n]= find(a<26);
for j = 1: length(m)
tmp(m(j),n(j))=0;
end

[m n]= find(a>26 & a <= 230);


for j = 1: length(m)
tmp(m(j),n(j))=204;   % mid-band pixels mapped to 204, as stated in the comment above
end

[m n]= find(a>230);
for j = 1: length(m)
tmp(m(j),n(j))=0;
end
subplot(1,2,1);
segimage2 = im2bw(tmp,0);
imshow(a);title('original image');
subplot(1,2,2);
imshow(segimage2); title('Multiple thresholding (between 26 and 230)');



OUTPUT:

Conclusion: Thus, we have obtained the simple and multi threshold of the image.



EXPT.NO: 13 SEGMENTATION USING WATERSHED TRANSFORMATION
DATE:

AIM: To segment an image using the watershed transformation.

TOOLS REQUIRED: MATLAB

THEORY: Simply defined, watershed is a transformation on grayscale images. The aim of this
technique is to segment the image, typically when two regions-of-interest are close to each
other — i.e, their edges touch.

This technique of transformation treats the image as a topographic map, with the intensity of
each pixel representing the height. For instance, dark areas can be intuitively considered to
be ‘lower’ in height, and can represent troughs. On the other hand, bright areas can be
considered to be ‘higher’, acting as hills or as a mountain ridge.

Various algorithms can be used to compute watersheds. One of the most popular algorithms is watershed-by-flooding, which was later improved as the priority-flood algorithm.

Watershed-by-flooding

Assume that a source of water is placed in the catchment basins — the areas with low
intensity. These basins are flooded and areas where the floodwater from different basins
meet are identified. Barriers in the form of pixels are built in these areas. Consequently, these
barriers act as partitions in the image, and the image is considered to be segmented.
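A common refinement (not used in the program below) is to impose markers as the only minima before flooding, which reduces over-segmentation. A hedged sketch assuming a binary object mask bw is already available:

D = -bwdist(~bw);                         % negative distance transform inside the objects
mask = imextendedmin(D, 2);               % markers: significant regional minima (depth 2 is illustrative)
D2 = imimposemin(D, mask);                % force the markers to be the only local minima
L = watershed(D2);                        % flood; ridge (watershed) pixels are labelled 0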



PROGRAM:

close all;

I = imread('nuclei.png');

I1=imtophat(I,strel('disk',10));

%%% imtophat performs morphological top-hat filtering on the grayscale or
%%% binary input image using the structuring element SE, where SE is
%%% returned by strel. SE must be a single structuring element object,
%%% not an array containing multiple structuring elements.

figure,imshow(I1);

I2 = imadjust(I1);

figure, imshow(I2);

level=graythresh(I2);

BW=im2bw(I2,level);

figure,imshow(BW);

c=BW;

figure,imshow(c);

d=-bwdist(c);

%bwdist computes the distance transform. The distance transform of a binary
%image gives the distance from every pixel to the nearest nonzero pixel.

d(c)=-Inf;

%modify the image so that the background pixels and the extended maxima

%pixels are forced to be the only local minima in the image

L=watershed(d);



figure,imshow(L);

wi=label2rgb(L,'hot','w');

figure,imshow(wi);

im=I;

im(L==0)=0;

figure,imshow(im);

OUTPUT:



Conclusion: Thus, we have obtained watershed segmentation of the image.



EXPT.NO: 14 IMAGE COMPRESSION USING THE HAAR WAVELET TRANSFORM
DATE:

AIM: To compress an image using the Haar wavelet transform.

TOOLS REQUIRED: MATLAB

THEORY:

Haar wavelet compression is an efficient way to perform both lossless and lossy image
compression. It relies on averaging and differencing values in an image matrix to produce a
matrix which is sparse or nearly sparse. A sparse matrix is a matrix in which a large portion
of its entries are 0. A sparse matrix can be stored in an efficient manner, leading to smaller
file sizes. In these notes we will concentrate on grayscale images; however, RGB images can be handled by compressing each of the color layers separately. The basic method is to start
with an image A, which can be regarded as an m × n matrix with values 0 to 255. In Matlab,
this would be a matrix with unsigned 8-bit integer values. We then subdivide this image into
8 × 8 blocks, padding as necessary. It is these 8 × 8 blocks that we work with.
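The averaging-and-differencing step can be seen on a single row. A toy sketch of one level of the (un-normalized) Haar transform, with hypothetical pixel values:

r = [88 88 89 90 92 94 96 97];            % one image row (hypothetical values)
a1 = (r(1:2:end) + r(2:2:end)) / 2;       % pairwise averages (approximation coefficients)
d1 = (r(1:2:end) - r(2:2:end)) / 2;       % pairwise differences (detail, mostly near zero)
row1 = [a1 d1];                           % first-level Haar decomposition of the row
disp(row1)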



PROGRAM:

a=imread('C:\Users\Welcome\Desktop\New folder\coinspng.png');
[LL,LH,HL,HH]=dwt2(im2double(a),'haar');
subplot(121),imshow([LL LH;HL HH],[]);title('First level Decomposition');
[LL1,LH1,HL1,HH1]= dwt2(im2double(LL),'haar');
c=[LL1 LH1;HL1 HH1];
subplot(122),imshow([c LH;HL HH],[]); title('second level Decomposition');

OUTPUT:

Conclusion: Thus, we have obtained the compressed image using HAAR transformation.



EXPT.NO: 15 FREQUENCY DOMAIN FILTERS
DATE:

AIM: To implement simple frequency domain filters.

TOOLS REQUIRED: MATLAB

THEORY:

Consider a matrix a with dimensions M × N and a matrix b with dimensions P × Q.

The convolution c = a ∗ b can be calculated as follows:

1. Zero-pad the matrices a and b to dimensions M + P − 1, N + Q − 1 (usually rounded up to a power of 2 to speed up the FFT).

2. Calculate 2D FFT matrix of matrices a, b (in MATLAB, using fft2). The outcome are matrices
A, B.

3. Multiply complex Fourier spectra element-wise, C = A . ∗ B.

4. The result of the convolution c is obtained by the inverse Fourier transformation (in
MATLAB using ifft2).
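A direct sketch of the four steps above for two small matrices (sizes chosen only for illustration), checked against conv2:

a = magic(8);  b = ones(3)/9;             % example image block and averaging kernel
[M, N] = size(a);  [P, Q] = size(b);
A = fft2(a, M+P-1, N+Q-1);                % steps 1-2: zero-pad and take the 2-D FFT
B = fft2(b, M+P-1, N+Q-1);
C = A .* B;                               % step 3: element-wise product of the spectra
c = real(ifft2(C));                       % step 4: inverse FFT gives the full convolution
max(abs(c(:) - reshape(conv2(a, b, 'full'), [], 1)))   % should be close to zero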



PROGRAM:

%simple implementation of frequency domain filters


clear all
%read input image

[file,path] = uigetfile('C:\Users\Welcome\Desktop\New folder\*.*');


fname=strcat(path,file);
dim=imread(fname);
cim=double(dim);
[r,c]=size(cim);

r1=2*r;
c1=2*c;

pim=zeros((r1),(c1));
kim=zeros((r1),(c1));

%padding
for i=1:r
for j=1:c
pim(i,j)=cim(i,j);
end
end

%center the transform


for i=1:r
for j=1:c
kim(i,j)=pim(i,j)*((-1)^(i+j));
end
end

%2D fft
fim=fft2(kim);

n=1; %order for butterworth filter


thresh=10; % cutoff radius in frequency domain for filters

% % function call for low pass filters


him=glp(fim,thresh); % gaussian low pass filter
% him=blpf(fim,thresh,n); % butterworth low pass filter

% % function calls for high pass filters


% him=ghp(fim,thresh); % gaussian high pass filter
% him=bhp(fim,thresh,n); % butterworth high pass filter



% % function call for high boost filtering
%him=hbg(fim,thresh); % using gaussian high pass filter
% him=hbb(fim,thresh,n); % using butterworth high pass filter

%inverse 2D fft
ifim=ifft2(him);

for i=1:r1
for j=1:c1
ifim(i,j)=ifim(i,j)*((-1)^(i+j));
end
end

% removing the padding


for i=1:r
for j=1:c
rim(i,j)=ifim(i,j);
end
end

% retaining the real parts of the matrix


rim=real(rim);
rim=uint8(rim);

%figure, imshow(rim);

figure;
subplot(2,3,1);imshow(dim);title('Original image');
subplot(2,3,2);imshow(uint8(kim));title('Padding');
subplot(2,3,3);imshow(uint8(fim));title('Transform centering');
subplot(2,3,4);imshow(uint8(him));title('Fourier Transform');
subplot(2,3,5);imshow(uint8(ifim));title('Inverse fourier transform');
subplot(2,3,6);imshow(uint8(rim));title('Cropped image');
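The helpers glp, blpf, ghp, bhp, hbg and hbb called above are not built-in MATLAB functions; they are assumed to be supplied with the lab. As an illustration only, a Gaussian low-pass helper compatible with the call him = glp(fim, thresh) might look like the following sketch (saved as glp.m on the path):

function him = glp(fim, d0)
% GLP  Apply a Gaussian low-pass filter to a centred 2-D spectrum (illustrative sketch).
%   fim : 2-D FFT of the padded, centred image
%   d0  : cutoff radius in the frequency domain
[r, c] = size(fim);
[u, v] = meshgrid(1:c, 1:r);              % frequency-plane coordinates
d2 = (u - c/2).^2 + (v - r/2).^2;         % squared distance from the centre of the spectrum
H = exp(-d2 / (2 * d0^2));                % Gaussian low-pass transfer function
him = H .* fim;                           % filter the spectrum element-wise
end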



OUTPUT:

Conclusion: Thus, we have obtained the resultant image using frequency domain filters.

