Image Search Engine Using Deep Learning
1P Ganesh, 2C Radhika, 3P Kiranmai Yadav
1Assistant Professor, Department of Information Technology, Bhoj Reddy Engineering College for Women, Hyderabad, India
2,3Student, Department of Information Technology, Bhoj Reddy Engineering College for Women, Hyderabad, India
Abstract: In this paper we present an image search engine built using deep learning. Automation is everybody's dream, whether it is to ease their efforts or to make machines learn how to do something. In this paper we present a way to search for similar images on a local disk when an input image is provided. We used a CNN for feature extraction and Flask for implementing the program functionality in a local webpage. The model is used to extract the features as NumPy arrays, store them locally, and search for similar images when an input is provided.
Keywords: Image, Image search, convolutional neural network, Feature vector, Artificial intelligence.
be mathematically represented for better
understanding and easier analysis.
A. Convolutional Neural Network:
Convolutional neural networks are best suited for image processing and visual analysis. They are used because the image can be manipulated easily by convolving the image data with filter data. A CNN contains multiple layers, and the layers after the convolutional stages are connected as a multilayer neural network. The design of a CNN allows it to take a 2D image directly as input. The output is obtained by passing the image through multiple layers and weights, just as in a neural network, convolving one layer into the next and applying various pooling techniques to obtain the feature vector.
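As a rough illustration (not the architecture used in this paper), the sketch below shows how stacked convolution and pooling layers reduce a 2D image to a flat feature vector; the layer sizes are arbitrary assumptions.

# Minimal sketch: stacked convolution + pooling layers turn a 2D image
# into a flat feature vector. Layer sizes are illustrative only.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten

cnn = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),                 # pooling shrinks the spatial size
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),                            # the resulting feature vector
])
cnn.summary()                             # shows how each layer transforms the input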
In this model we used VGG16, a deep learning CNN model consisting of 16 weight layers. It is a pretrained model, trained on more than a million images from the ImageNet database, and can classify images into 1,000 categories. Here the model is used for feature extraction, and the extracted features can be further utilized for training.
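For illustration, the pretrained VGG16 model can be loaded with its ImageNet weights as in the sketch below; this is generic Keras usage, not the paper's exact code.

# Sketch: load the pretrained VGG16 model with ImageNet weights.
# include_top=True keeps the fully connected layers; for feature
# extraction the final classifier is dropped later.
from tensorflow.keras.applications.vgg16 import VGG16

vgg = VGG16(weights="imagenet", include_top=True)
vgg.summary()   # lists the 16 weight layers (13 convolutional + 3 fully connected)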
Preprocessing using CNN:
Preprocessing of data means converting the input data into a form that the computer can understand, which makes training easier. Preprocessing is used to improve performance by manipulating the image so that it can be easily understood by the machine. As VGG16 accepts only 224 x 224 images, it is important to first resize the images so that they can be fed into VGG16. This reduction in size greatly improves performance because there is less data to work on.
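A minimal sketch of this preprocessing step is given below, assuming a placeholder file name flower.jpg; it resizes the image to 224 x 224 and applies the standard VGG16 preprocessing.

# Sketch of the preprocessing step: resize an image to the 224 x 224
# input size expected by VGG16 and apply the standard VGG preprocessing.
import numpy as np
from PIL import Image
from tensorflow.keras.applications.vgg16 import preprocess_input

img = Image.open("flower.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype="float32")      # shape (224, 224, 3)
x = np.expand_dims(x, axis=0)             # add the batch dimension
x = preprocess_input(x)                   # channel-wise mean subtraction used by VGG16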
Filtering:
In a CNN, the main application of convolution is filtering. In filtering, the input image vectors are multiplied with the filter to get the modified output that we need. There are several types of filters, such as sharpening, grayscale, and blur.
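As an example of such a fixed filter, the sketch below convolves a grayscale image with a standard sharpening kernel; the use of SciPy and the file name are assumptions, not part of the paper's code.

# Sketch: classical filtering by convolving an image with a fixed kernel.
import numpy as np
from PIL import Image
from scipy.ndimage import convolve

gray = np.asarray(Image.open("flower.jpg").convert("L"), dtype="float32")  # grayscale image
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])        # predefined sharpening filter
sharpened = convolve(gray, sharpen)       # multiply-and-sum at every pixel position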
Feature extraction:
Just like the filters above, CNN filters are used to extract features from an image. Compared to the usual filters, CNN filters do not have any predefined values; these values are determined during the training period. This helps the model make filters on its own, which can result in filters that humans might never think of designing manually. A 2D convolution filter is commonly used and is referred to as Conv2D. This filter adds up all the inputs and a single output is obtained from the image.
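For comparison with the fixed kernel above, the sketch below creates a Conv2D layer whose filter values are not predefined; they are initialized randomly and would be learned during training.

# Sketch: a Conv2D layer is the learned counterpart of the fixed kernel
# above. Its filter values start random and are adjusted during training.
import numpy as np
from tensorflow.keras.layers import Conv2D

conv = Conv2D(filters=8, kernel_size=(3, 3), padding="same")
dummy = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in image batch
feature_maps = conv(dummy)            # 8 feature maps, one per learned filter
print(feature_maps.shape)             # (1, 224, 224, 8)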
B. Flask:
Flask is a Python-based micro web framework. It is referred to as a microframework because it does not necessitate the use of any specific tools or libraries [2]. It does not have a database abstraction layer, form validation, or any other components that rely on third-party libraries to perform typical tasks. Extensions, on the other hand, can be used to add application functionality as if it were built into Flask itself. Object-relational mappers, form validation, upload handling, different open authentication protocols, and other framework-related tools all have extensions.
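A minimal Flask application, shown below for illustration, conveys the microframework idea: a single route served by the built-in development server.

# Minimal Flask sketch: one route served by the development server.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Image search engine is running."

if __name__ == "__main__":
    app.run(debug=True)   # hosts the page locally at http://127.0.0.1:5000/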
II. METHODOLOGY:
We divided the project into three parts:
1. Offline.py (saves features into NumPy arrays)
2. Feature extractor.py (contains VGG16 for preprocessing)
3. Server.py (contains Flask for web interactions)
Offline.py:
This Python file consists of the code used to get the images from the local directory.
It initialises the Features folder, which contains all the features as NumPy arrays.
This file extracts features by calling the feature extractor class.
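A possible sketch of Offline.py, based on the description above, is shown below; the directory names and the FeatureExtractor class (sketched under Feature Extractor.py) are assumptions.

# offline.py sketch: walk the local image directory, extract a feature
# vector for each image, and save it as a .npy file in the Features folder.
from pathlib import Path
import numpy as np
from feature_extractor import FeatureExtractor  # assumed module name

fe = FeatureExtractor()
Path("features").mkdir(exist_ok=True)

for img_path in sorted(Path("images").glob("*.jpg")):
    feature = fe.extract(img_path)                          # NumPy feature vector
    np.save(Path("features") / (img_path.stem + ".npy"), feature)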
Feature Extractor.py:
This file consists of the code required for preprocessing the images.
Here we use VGG16, a pretrained CNN model, for feature extraction by eliminating the classification layer from the model.
The model imports weights from ImageNet.
We apply normalization and then return NumPy arrays.
This model converts the image information into NumPy arrays that are stored locally.
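A possible sketch of the feature extractor, based on the description above, is shown below; the class name, the use of the fc1 layer as the feature output, and the file name feature_extractor.py are assumptions.

# feature_extractor.py sketch: VGG16 with the classification layer removed,
# keeping the fc1 layer as the feature output; the vector is normalized so
# that distances between images are comparable.
import numpy as np
from PIL import Image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

class FeatureExtractor:
    def __init__(self):
        base = VGG16(weights="imagenet")              # ImageNet weights
        # Keep the 4096-dimensional fc1 output instead of the softmax classifier.
        self.model = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

    def extract(self, img_path):
        img = Image.open(img_path).convert("RGB").resize((224, 224))
        x = preprocess_input(np.expand_dims(np.asarray(img, dtype="float32"), 0))
        feature = self.model.predict(x)[0]            # raw fc1 activations
        return feature / np.linalg.norm(feature)      # L2 normalization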
Server.py:
This file contains the code for the web implementation of our model.
We use the Flask web framework for interfacing between the model and the user.
We make use of the feature extractor class to obtain the features of the user's input image.
This file is responsible for hosting the web page locally and displaying the output on the webpage.
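A possible sketch of Server.py, based on the description above, is shown below; the form field name, the index.html template, and the upload path are assumptions.

# server.py sketch: Flask route that accepts an uploaded query image,
# extracts its features, and ranks the stored feature files by distance.
from pathlib import Path
import numpy as np
from flask import Flask, request, render_template
from feature_extractor import FeatureExtractor  # assumed module name

app = Flask(__name__)
fe = FeatureExtractor()
paths = sorted(Path("features").glob("*.npy"))
features = np.array([np.load(p) for p in paths])           # all stored feature vectors

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        file = request.files["query_img"]                   # assumed form field name
        file.save("uploaded.jpg")
        query = fe.extract("uploaded.jpg")
        dists = np.linalg.norm(features - query, axis=1)    # distance to every image
        ids = np.argsort(dists)[:30]                        # 30 closest images
        results = [(paths[i].stem, float(dists[i])) for i in ids]
        return render_template("index.html", results=results)
    return render_template("index.html", results=[])

if __name__ == "__main__":
    app.run(debug=True)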
IV. RESULT:
A. Dataset:
The dataset consists of 8000 images of various types of flowers.
B. Results:
The model returns the first 30 closest images based on the distance score between the query image and the stored images.
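For illustration, the ranking step can be written in isolation as below, using stand-in arrays in place of the stored features and the query feature.

# Sketch of the ranking used for the results: Euclidean distance between
# the query feature and every stored feature, keeping the 30 smallest.
import numpy as np

features = np.random.rand(8000, 4096).astype("float32")   # stand-in for the stored vectors
query = np.random.rand(4096).astype("float32")            # stand-in for the query feature

distances = np.linalg.norm(features - query, axis=1)
top30 = np.argsort(distances)[:30]        # indices of the 30 closest images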
REFERENCES:
[1] Y. Liu, D. Zhang, G. Lu, and W.-Y. Ma, “A survey of content-based image retrieval with high-level semantics,” Pattern Recognition, vol. 40, no. 1, pp. 262–282, 2007.
[2] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
[4] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
[5] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[6] A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, “Neural codes for image retrieval,” in European Conference on Computer Vision. Springer, 2014, pp. 584–599.
[7] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, “Supervised hashing for image retrieval via image representation learning,” in AAAI, vol. 1, 2014, p. 2.
[8] K. Lin, H.-F. Yang, J.-H. Hsiao, and C.-S. Chen, “Deep learning of binary hash codes for fast image retrieval,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 27–35.
[9] J.-C. Chen and C.-F. Liu, “Visual-based deep learning for clothing from large database,” in Proceedings of the ASE BigData & SocialInformatics 2015. ACM, 2015, p. 42.
[10] N. Khosla and V. Venkataraman, “Building image-based shoe search using convolutional neural networks,” CS231n Course Project Reports, 2015.
[11] A. Iliukovich-Strakovskaia, A. Dral, and E. Dral, “Using pre-trained models for fine-grained image classification in fashion field,” 2016.
[12] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “DeCAF: A deep convolutional activation feature for generic visual recognition,” in ICML, vol. 32, 2014, pp. 647–655.
[13] G. Shrivakshan, C. Chandrasekar, et al., “A comparison of various edge detection techniques used in image processing,” IJCSI International Journal of Computer Science Issues, vol. 9, no. 5, pp. 272–276, 2012.
[14] A. Maurya and R. Tiwari, “A novel method of image restoration by using different types of filtering techniques,” International Journal of Engineering Science and Innovative Technology (IJESIT), vol. 3, 2014.
[15] R. Kandwal, A. Kumar, and S. Bhargava, “Review: existing image segmentation techniques,” International Journal of Advanced Research in Computer Science and Software Engineering, vol. 4, no. 4, 2014.
[16] K. Roy and J. Mukherjee, “Image similarity measure using color histogram, color coherence vector, and Sobel method,” International Journal of Science and Research (IJSR), vol. 2, no. 1, pp. 538–543, 2013.
[17] J. Shlens, “Train your own image classifier with Inception in TensorFlow,” https://research.googleblog.com/2016/03/train-your-ownimageclassifier-with.html, 2016.
[18] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “OverFeat: Integrated recognition, localization and detection using convolutional networks,” arXiv preprint arXiv:1312.6229, 2013.
[19] P. Wu, S. C. Hoi, H. Xia, P. Zhao, D. Wang, and C. Miao, “Online multimodal deep similarity learning with application to image retrieval,” in Proceedings of the 21st ACM International Conference on Multimedia. ACM, 2013, pp. 153–162.
[20] S. Liu, Z. Song, G. Liu, C. Xu, H. Lu, and S. Yan, “Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 3330–3337.
[21] K. Yamaguchi, M. H. Kiapour, L. E. Ortiz, and T. L. Berg, “Retrieving similar styles to parse clothing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 5, pp. 1028–1040, 2015.
[22] J. Wan, P. Wu, S. C. Hoi, P. Zhao, X. Gao, D. Wang, Y. Zhang, and J. Li, “Online learning to rank for content-based image retrieval,” 2015.