Lab Assignment 3 UCS522: Computer Vision: Thapar Institute of Engineering and Technology Patiala, Punjab
Algorithm:-
Results:-
Explanation: The tests show that the system was able to recognize images based on
their corresponding features. Some images shared similar features with other
images, and the accuracy obtained was 80%. This could be due to the fact that the
subjects made similar gestures or had similar facial dimensions. Other tests gave
excellent results, identifying the matching image with an accuracy of 100%. We
observed that recognition depends on similar facial expressions: the smile, the
raising of the eyebrows, or the width of the face. There could be other factors,
but in this case we know recognition was driven by the features of the sample
images given. The accuracy obtained also depends on the size of the database:
a smaller database gives a lower accuracy, while more images in the test folder
give a higher accuracy.
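The recognition the explanation refers to is a nearest-neighbour search in the projected feature space: the test image closest to a training image is declared a match. A minimal Python/NumPy sketch of that matching rule (array names and values are illustrative, not taken from the MATLAB code):

```python
import numpy as np

def nearest_match(query, reference):
    """Return the index of the reference column closest to the query vector.

    query:     (d,) projected test image
    reference: (d, n) projected training images, one per column
    """
    dists = np.linalg.norm(reference - query[:, None], axis=0)
    return int(np.argmin(dists))

# Tiny illustration: three 2-D "projected" training images as columns
reference = np.array([[0.0, 5.0, 1.0],
                      [0.0, 5.0, 1.0]])
query = np.array([0.9, 1.1])
best = nearest_match(query, reference)
print(best)  # column 2 is closest to the query
```

With more training images per subject, the nearest neighbour is more likely to belong to the correct person, which is consistent with the observation above that a larger database raises the accuracy.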
Code:-
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
global im
global reference
global W
global imgmean
global col_of_data
global pathname
global img_path_list
% Project the query image into the eigenface subspace
im = double(im(:));
objectone = W'*(im - imgmean);
% Nearest-neighbour search over the projected training images
distance = inf;
for k = 1:col_of_data
    temp = norm(objectone - reference(:,k));
    if distance > temp
        aimone = k;
        distance = temp;
    end
end
% Display the closest training image
aimpath = fullfile(pathname, img_path_list(aimone).name);
axes(handles.axes2)
imshow(aimpath)
% ----- Training: build the eigenface subspace -----
pathname = uigetdir;
img_path_list = dir(fullfile(pathname, '*.jpg'));
img_num = length(img_path_list);
imagedata = [];
if img_num > 0
    for j = 1:img_num
        img_name = img_path_list(j).name;
        temp = imread(fullfile(pathname, img_name));
        temp = double(temp(:));
        imagedata = [imagedata, temp]; %#ok<AGROW>
    end
end
col_of_data = size(imagedata, 2);
% Centre the data on the mean image
imgmean = mean(imagedata, 2);
for i = 1:col_of_data
    imagedata(:,i) = imagedata(:,i) - imgmean;
end
% Small (N x N) covariance matrix instead of the full pixels x pixels one
covMat = imagedata' * imagedata;
[COEFF, latent, explained] = pcacov(covMat);
% Keep enough components to explain 95% of the variance
i = 1;
proportion = 0;
while proportion < 95
    proportion = proportion + explained(i);
    i = i + 1;
end
p = i - 1;
% ----- Testing: project test images and measure accuracy -----
global W
global reference
global imgmean
col_of_data = 30;
pathname = uigetdir;
img_path_list = dir(fullfile(pathname, '*.jpg'));
img_num = length(img_path_list);
testdata = [];
if img_num > 0
    for j = 1:img_num
        img_name = img_path_list(j).name;
        temp = imread(fullfile(pathname, img_name));
        temp = double(temp(:));
        testdata = [testdata, temp]; %#ok<AGROW>
    end
end
col_of_test = size(testdata, 2);
% Centre the test images on the training mean before projecting
testdata = bsxfun(@minus, testdata, imgmean);
object = W' * testdata;
error = 0;
for j = 1:col_of_test
    distance = inf;
    for k = 1:col_of_data
        temp = norm(object(:,j) - reference(:,k));
        if distance > temp
            aimone = k;
            distance = temp;
        end
    end
    % Assumes 3 test images and 4 training images per subject;
    % a mismatch between the subject indices counts as an error
    if ceil(j/3) ~= ceil(aimone/4)
        error = error + 1;
    end
end
% calculating the accuracy
accuracy = (1 - error/col_of_test) * 100;
msgbox(['Accuracy level is : ', num2str(accuracy), '%'], 'accuracy')
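The training code above keeps just enough principal components to explain 95% of the variance. The same selection rule can be sketched in Python/NumPy; the eigenvalues here are made up for illustration:

```python
import numpy as np

def components_for_variance(eigenvalues, threshold=95.0):
    """Smallest number of leading components whose cumulative
    explained variance (as a percentage) reaches the threshold."""
    explained = 100.0 * eigenvalues / eigenvalues.sum()
    cumulative = np.cumsum(explained)
    # index of the first cumulative value >= threshold, 1-based count
    return int(np.searchsorted(cumulative, threshold) + 1)

eigs = np.array([60.0, 25.0, 10.0, 4.0, 1.0])  # already sums to 100
print(components_for_variance(eigs))  # 60 + 25 + 10 = 95 -> 3 components
```

This mirrors the MATLAB `while proportion < 95` loop: both stop at the first component whose cumulative explained variance reaches 95%.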
Algorithm:-
1. Scale-space extrema detection (difference-of-Gaussians pyramid)
2. Keypoint localization (interpolation, contrast and edge-response filtering)
3. Orientation assignment
4. Keypoint descriptor generation
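Orientation assignment builds a 36-bin histogram of gradient directions around the keypoint, weighted by gradient magnitude, and takes the dominant bin as the keypoint's orientation. A hedged Python/NumPy sketch of that histogram step (an illustration, not the lab's MATLAB code):

```python
import numpy as np

def dominant_orientation(patch):
    """Dominant gradient orientation (degrees) of an image patch,
    taken from a 36-bin magnitude-weighted orientation histogram."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, edges = np.histogram(orientation, bins=36, range=(0, 360),
                               weights=magnitude)
    peak = np.argmax(hist)
    return (edges[peak] + edges[peak + 1]) / 2.0  # centre of the peak bin

# A ramp that increases along x has gradients pointing in +x (0 degrees)
patch = np.tile(np.arange(8.0), (8, 1))
print(dominant_orientation(patch))  # 5.0 (centre of the 0-10 degree bin)
```

The full algorithm additionally weights samples by a Gaussian window and creates extra keypoints for secondary peaks above 80% of the maximum; those refinements are omitted here.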
Code:-
clear;
clc;
image = imread('images/ha1.png');
image = rgb2gray(image);
image = double(image);
keyPoints = SIFT(image,3,5,1.3);
image = SIFTKeypointVisualizer(image,keyPoints);
imshow(uint8(image))
function Descriptors = SIFT(inputImage, Octaves, Scales, Sigma)
% This function extracts SIFT features from a given image
%% Setting variables
Sigmas = sigmas(Octaves,Scales,Sigma);
ContrastThreshold = 7.68;
rCurvature = 10;
G = cell(1,Octaves); % Gaussians
D = cell(1,Octaves); % DoG
GO = cell(1,Octaves); % Gradient Orientation
GM = cell(1,Octaves); % Gradient Magnitude
P = [];
Descriptors = {}; % Key Points
%% Calculating Gaussians
for o = 1:Octaves
    [row,col] = size(inputImage);
    temp = zeros(row,col,Scales);
    for s = 1:Scales
        temp(:,:,s) = imgaussfilt(inputImage, Sigmas(o,s));
    end
    G(o) = {temp};
    inputImage = inputImage(2:2:end, 2:2:end);
end
%% Calculating DoG
for o = 1:Octaves
    images = cell2mat(G(o));
    [row,col,Scales] = size(images);
    temp = zeros([row,col,Scales-1]);
    for s = 1:Scales-1
        temp(:,:,s) = images(:,:,s+1) - images(:,:,s);
    end
    D(o) = {temp};
end
%% Interpolation - fitting a parabola through 3 points to refine the extremum
function exterma = interpolateExterma(X, Y, Z)
% Quadratic interpolation of the extremum location.
% Each input is an array with 2 values, t and f(t).
% Note: the whole denominator must be multiplied by 2, hence the extra parentheses.
exterma = Y(1)+...
    ((X(2)-Y(2))*(Z(1)-Y(1))^2 - (Z(2)-Y(2))*(Y(1)-X(1))^2)...
    /(2*((X(2)-Y(2))*(Z(1)-Y(1)) + (Z(2)-Y(2))*(Y(1)-X(1))));
end
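The interpolation above is the closed-form vertex of the parabola through three (t, f(t)) samples. A quick Python check of the same formula on a known parabola (the function name is just for this sketch):

```python
def interpolate_extremum(x, y, z):
    """Vertex location of the parabola through three (t, f(t)) points.
    Mirrors the MATLAB interpolateExterma formula."""
    num = (x[1] - y[1]) * (z[0] - y[0])**2 - (z[1] - y[1]) * (y[0] - x[0])**2
    den = 2.0 * ((x[1] - y[1]) * (z[0] - y[0]) + (z[1] - y[1]) * (y[0] - x[0]))
    return y[0] + num / den

# f(t) = (t - 1.5)^2 sampled at t = 0, 1, 2 has its minimum at t = 1.5,
# which lies between the samples -- exactly what the refinement recovers.
f = lambda t: (t - 1.5)**2
print(interpolate_extremum((0, f(0)), (1, f(1)), (2, f(2))))  # 1.5
```

In the SIFT pipeline this refinement is applied along each axis of the DoG stack to place keypoints at sub-pixel, sub-scale positions.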
Algorithm:-
The LBP feature vector, in its simplest form, is created in the following manner:
1. Divide the examined window into cells (e.g. 16x16 pixels for each cell).
2. For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e. clockwise or counter-clockwise.
3. Where the center pixel's value is greater than the neighbor's value, write "0". Otherwise, write "1". This gives an 8-digit binary number (which is usually converted to decimal for convenience).
4. Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the center). This histogram can be seen as a 256-dimensional feature vector.
5. Optionally normalize the histogram.
6. Concatenate the (normalized) histograms of all cells. This gives a feature vector for the entire window.
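The thresholding step above can be sketched for a single 3x3 neighbourhood in Python/NumPy (a minimal illustration, not the lab's MATLAB code):

```python
import numpy as np

def lbp_code(window):
    """8-bit LBP code of the centre pixel of a 3x3 window.
    Bits are taken clockwise from the top-left neighbour;
    a neighbour >= centre contributes a 1 (per the rule above)."""
    c = window[1, 1]
    # 8 neighbours, clockwise starting at top-left
    neighbors = [window[0, 0], window[0, 1], window[0, 2],
                 window[1, 2], window[2, 2], window[2, 1],
                 window[2, 0], window[1, 0]]
    code = 0
    for n in neighbors:
        code = (code << 1) | int(n >= c)
    return code

w = np.array([[9, 1, 9],
              [1, 5, 9],
              [1, 9, 1]])
print(lbp_code(w))  # bits 1,0,1,1,0,1,0,0 -> 180
```

Computing this code for every pixel of a cell and histogramming the 256 possible values yields the per-cell feature vector described in step 4.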
Result:-
Code:-
clear all;
% load image
img = rgb2gray(imread('ha.png'));
% run descriptor
filtered_img = lbp(img);
% plot results
subplot(1,2,1);
imshow(img);
subplot(1,2,2);
imshow(filtered_img);

function out = lbp(img)
% Basic 3x3 LBP: threshold each pixel's 8 neighbours against the centre
% (function body reconstructed around the neighbour list from the report)
img = double(img);
[rows, cols] = size(img);
out = zeros(rows, cols);
for i = 2:rows-1
    for j = 2:cols-1
        % 8-neighbour coordinates in raster order
        neighbors = [
            i-1, j-1;
            i-1, j;
            i-1, j+1;
            i,   j-1;
            i,   j+1;
            i+1, j-1;
            i+1, j;
            i+1, j+1
        ];
        code = 0;
        for n = 1:8
            bit = img(neighbors(n,1), neighbors(n,2)) >= img(i,j);
            code = code*2 + bit;
        end
        out(i,j) = code;
    end
end
out = uint8(out);
end
K-means/Fuzzy C-means Clustering techniques in
Image segmentation
Algorithm:-
Result:-
Code:-
% function main
clc;
clear all;
close all;
im = imread('ha.png');
subplot(2,1,1), imshow(im);
subplot(2,1,2), imhist(im(:,:,1));
title('INPUT IMAGE HISTOGRAM');
figure;
I = imnoise(rgb2gray(im), 'salt & pepper', 0.02);
subplot(1,2,1), imshow(I);
title('Noise addition and removal using median filter');
K = medfilt2(I);
subplot(1,2,2), imshow(K);
im = double(im);
s_img = size(im);
r = im(:,:,1);
g = im(:,:,2);
b = im(:,:,3);
data_vecs = [r(:) g(:) b(:)];
k = 4;
% (reconstructed) cluster the RGB vectors with k-means
[idx, C] = kmeans(data_vecs, k);
palette = round(C);
% Color mapping: replace each pixel with its cluster centre
idx = uint8(idx);
outImg = zeros(s_img(1), s_img(2), 3);
temp = reshape(idx, [s_img(1) s_img(2)]);
for i = 1:s_img(1)
    for j = 1:s_img(2)
        outImg(i,j,:) = palette(temp(i,j), :);
    end
end
cluster1 = zeros(size(r));
cluster2 = zeros(size(r));
cluster3 = zeros(size(r));
cluster4 = zeros(size(r));
figure;
cluster1(outImg(:,:,1) == palette(1,1)) = 1;
subplot(2,2,1), imshow(cluster1);
cluster2(outImg(:,:,1) == palette(2,1)) = 1;
subplot(2,2,2), imshow(cluster2);
cluster3(outImg(:,:,1) == palette(3,1)) = 1;
subplot(2,2,3), imshow(cluster3);
cluster4(outImg(:,:,1) == palette(4,1)) = 1;
subplot(2,2,4), imshow(cluster4);
cc = imerode(cluster4, [1 1]);
figure, imshow(cc);
title('result image');
% (reconstructed) keep only the largest connected component
[label_im, label_count] = bwlabel(cc);
stats = regionprops(label_im, 'Area');
for i = 1:label_count
    area(i) = stats(i).Area; %#ok<AGROW>
end
[~, maxid] = max(area);
label_im(label_im ~= maxid) = 0;
label_im(label_im == maxid) = 1;
figure, imshow(label_im);
title('largest segment');
code_end = 1;
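The colour-mapping loop above replaces each pixel with its cluster centre. A compact Python/NumPy sketch of the same assignment step, using hand-picked centres rather than a fitted k-means model (centres and pixel values are illustrative):

```python
import numpy as np

def quantize(pixels, centers):
    """Assign each RGB pixel to its nearest centre and return the
    recoloured pixels plus the cluster index per pixel."""
    # (n, 1, 3) - (1, k, 3) broadcasts to an (n, k) distance table
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    idx = np.argmin(d, axis=1)
    return centers[idx], idx

centers = np.array([[0, 0, 0], [255, 255, 255]], dtype=float)
pixels = np.array([[10, 20, 5], [250, 240, 255]], dtype=float)
out, idx = quantize(pixels, centers)
print(idx)  # [0 1]: dark pixel -> black centre, bright pixel -> white centre
```

In full k-means these assignments and the centres are refined alternately until convergence; the MATLAB `kmeans` call performs both steps, after which the double loop performs exactly this recolouring.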