
10 V May 2022

https://doi.org/10.22214/ijraset.2022.43108
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue V May 2022- Available at www.ijraset.com

Emotion Detection Using Machine Learning in Python
Abhishek Kumar Singh1, Shashwat Singh2, Aditya Sahi3, Arvind Kumar Maurya4

Abstract: In this project, a person's emotion is detected from their facial expressions. The input can be drawn from the live feed of the system's camera or from any pre-existing image available in memory. Emotions possessed by humans can be detected by a machine, and this has a vast scope of study in the computer vision industry, where considerable research has already been done. The work has been implemented using Anaconda (Jupyter Notebook, 3.10), the Open Source Computer Vision Library (OpenCV) and NumPy. The code compares the video frames (testing dataset) against the training dataset, and the emotion is thereby predicted. The objective of this paper is to develop a system that can analyse images and run-time video and predict the expression of the person. The study shows that this project and code are workable and produce valid results. In this project we have improved the accuracy of the running project by using different Python and deep learning models.
Keywords: Face Recognition, Image Processing, Computer Vision, Emotion Detection, OpenCV, NumPy, MobileNet

I. INTRODUCTION TO IMAGE PROCESSING


Image processing is the method used to obtain a good image and extract useful information from it. It is the way an image is converted into digital form so that various operations can subsequently be performed on it. In techniques such as emotion detection and recognition, the input is an image or a video frame, represented as a collection of numbers ranging from 0 to 255 that denote the corresponding pixel values.
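The pixel representation described above can be illustrated with a small sketch; the array values here are made up for illustration, not taken from any dataset in the paper:

```python
import numpy as np

# A tiny 2x2 grayscale "image": every pixel is a number in the range 0-255.
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(img.min(), img.max())  # 0 255
```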
Conversion of Color Image to Gray Scale
There are two methods to convert an RGB image to grayscale:

A. Average Method
In this method, the mean of the three colour channels, i.e., Red, Green & Blue, present in a colour image is taken. Thus, we have
Grayscale = (R + G + B) / 3
However, this sometimes produces a dull or distorted image rather than the expected grayscale one, because each of Red, Green & Blue contributes an equal 33%, which does not match how humans perceive brightness.
To solve the problem of the first method, another method is used, called the Weighted Method or Luminosity Method.

B. Weighted or Luminosity Method
To overcome the problem of the Average Method, the Luminosity Method is used. In this method, Red is given a reduced weight, Green the greatest weight, and Blue the least.
Thus, by the equation [8],
Grayscale = (0.2 * R) + (0.55 * G) + (0.12 * B)
These weights reflect the wavelength and perceived-brightness patterns of the colours: Blue has the shortest wavelength while Red has the longest, and the eye is most sensitive to green.
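The luminosity equation above can be sketched in NumPy as follows. The weights are the ones from the paper's equation [8]; note, as an aside, that the widely used ITU-R BT.601 weights are 0.299, 0.587, 0.114:

```python
import numpy as np

# Weights from the paper's luminosity equation [8].
R_W, G_W, B_W = 0.2, 0.55, 0.12

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB array (values 0-255) to a grayscale array."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = R_W * rgb[..., 0] + G_W * rgb[..., 1] + B_W * rgb[..., 2]
    return gray.astype(np.uint8)

# A 1x2 "image": one pure-red pixel and one white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(to_grayscale(img))  # red -> 0.2*255 = 51; white -> 0.87*255 = 221
```

In practice OpenCV's `cv2.cvtColor` performs this conversion, but the explicit weighted sum makes the equation's effect visible.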

II. REVIEW OF LITERATURE


1) M Murugappan, M Rizon, R Nagarajan et al. - 2008 International Symposium on Information Technology - ieeexplore.ieee.org: In recent years, the need for and importance of automatically recognizing emotions from EEG signals has grown with the increasing role of brain-computer interface applications.
2) R Plutchik - 2003 - psycnet.apa.org This is a comprehensive textbook for instructors teaching a college-level or graduate course
on emotion. In this volume, the author has brought together materials that will stimulate students' interests and serve as a focus
for thought-provoking discussion and learning experiences.
3) PC Petrantonakis… - IEEE Transactions on …, 2009 - ieeexplore.ieee.org Electroencephalogram (EEG)-based emotion
recognition is a relatively new field in the affective computing area with challenging issues regarding the induction of the
emotional states and the extraction of the features in order to achieve optimum classification performance.

©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 3335

4) T Canli, JE Desmond, Z Zhao, G Glover… - …, 1998 - journals.lww.com CURRENT brain models of emotion processing
hypothesize that positive (or approach-related) emotions are lateralized towards the left hemisphere, whereas negative (or
withdrawal-related) emotions are lateralized towards the right hemisphere.
5) G Chanel, J Kronegg, D Grandjean, T Pun - International workshop on …, 2006 – Springer The arousal dimension of human
emotions is assessed from two different physiological sources: peripheral signals and electroencephalographic (EEG) signals
from the brain.
6) P Ekman - Handbook of cognition and emotion, 1999 - books.google.com In this chapter I consolidate my previous writings
about basic emotions (Ekman, 1984, 1992a, 1992b) and introduce a few changes in my thinking.
7) D Grocke, T Wigram - 2006 - books.google.com This practical book describes the specific use of receptive (listening) methods
and techniques in music therapy clinical practice and research, including relaxation with music for children and adults, the use
of visualisation and imagery, music and collage, song-lyric discussion, vibroacoustic applications, music and movement
techniques, and other forms of aesthetic listening to music.

Fig.1 Data Recording System [16]

III. INTRODUCTION TO OPENCV


OpenCV stands for Open Source Computer Vision Library. It is a free, extensive library which consists of more than 2500 algorithms specifically designed to carry out Computer Vision and Machine Learning related projects. These algorithms can be used for different tasks such as Face Recognition, Object Identification, Camera Movement Tracking, Scenery Recognition, etc. It has a large community, with an estimated 47,000 active contributors, and its usage extends to various companies, both private and public.
A feature called GPU Acceleration was added to the pre-existing modules. This feature can handle most of the operations, though it is not yet fully mature. The GPU module runs on CUDA and thus takes advantage of libraries such as NPP, i.e., NVIDIA Performance Primitives. A benefit of the module is that anyone can use GPU acceleration without strong knowledge of GPU programming. In the GPU module we cannot change the features of an image directly; rather, we have to copy the original image and then edit the copy.

IV. STEPS INVOLVED IN INSTALLING PYTHON 2.7 AND THE NECESSARY PACKAGES
Let's begin with a sample image in either .jpg or .png format and apply image processing to detect the emotion of the subject in the sample image. (The word 'Subject' refers to any living being whose emotions can be extracted.)

A. Importing Libraries
For successful implementation of this project, the following packages of Python 2.7 must be downloaded and installed: Python 2.7.x, NumPy, Glob and Random. (Glob and Random are part of the Python standard library.) Python will be installed in its default location, the C drive in this case. Open Python IDLE, import all the packages and start working.

B. NumPy
NumPy is a Python library used for complex technical computation. It implements multidimensional arrays, together with a large collection of mathematical functions to process them.
Each dimension of an array declared in a program is called an axis.
The number of axes present in an array is known as its rank.


For example, A = [1, 2, 3, 4, 5]
In the given array A, 5 elements are present and the rank is 1, because the array is one-dimensional.
Let's take another example for better understanding:
B = [[1, 2, 3, 4], [5, 6, 7, 8]]
In this case the rank is 2 because it is a 2-dimensional array. The first dimension has 2 elements, and the second dimension has 4 elements. [10]
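The two examples above can be checked directly in NumPy, where `ndim` is the library's name for the rank described in the text:

```python
import numpy as np

# The two arrays from the text.
A = np.array([1, 2, 3, 4, 5])
B = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

print(A.ndim, A.shape)  # 1 (5,)   -> rank 1, five elements
print(B.ndim, B.shape)  # 2 (2, 4) -> rank 2: 2 rows of 4 elements
```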

C. Glob
Based on the pattern rules of the Unix shell, the glob module matches a specified pattern and generates the list of files that match it. It generates full path names.
Wildcards
Wildcards are used to perform various matching operations on files or parts of a directory. Of the various wildcards [5] available, two are the most useful: `*`, which matches any sequence of characters, and `?`, which matches any single character.
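A minimal sketch of glob in use; the dataset directory and file names below are hypothetical placeholders created on the fly, not part of the paper's dataset:

```python
import glob
import os
import tempfile

# Hypothetical layout: a temporary dataset directory with a few files.
d = tempfile.mkdtemp()
for name in ("happy1.jpg", "happy2.jpg", "notes.txt"):
    open(os.path.join(d, name), "w").close()

# '*' matches any run of characters, so "*.jpg" selects only the images.
jpgs = sorted(glob.glob(os.path.join(d, "*.jpg")))
print([os.path.basename(p) for p in jpgs])  # ['happy1.jpg', 'happy2.jpg']
```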

D. Random
The random module picks or chooses a random number or an element from a given list of elements. It provides the functions which give access to such operations.
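A short sketch of the module's two most relevant calls; the emotion labels are the seven listed later in this paper, and the seed is used only to make the run repeatable:

```python
import random

emotions = ["neutral", "happy", "anger", "disgust", "surprise", "fear", "sad"]

random.seed(0)                        # seeded only for repeatability
pick = random.choice(emotions)        # one random element from the list
sample = random.sample(emotions, 3)   # three distinct random elements
print(pick, sample)
```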

Fig.3 Flowchart for Emotion Detection

V. DIFFERENT EMOTIONS THAT CAN BE DETECTED FROM A VIDEO


A. Neutral
B. Happy
C. Anger
D. Disgust
E. Surprise
F. Fear
G. Sad


VI. STEPS INVOLVED TO PERFORM EMOTION DETECTION USING OPENCV-PYTHON:


1) After successfully installing all the necessary software, we must start by creating a dataset. Here, we can create our own dataset by analysing a group of images, so that our result is accurate and there is enough data to extract sufficient information, or we can use an existing database.
2) The dataset is then organized into two different directories. The first directory will contain all the images and the second directory will contain all the information about the different types of emotions.
3) After running the sample images through the Python code, all the output images will be stored in another directory, sorted in the order of emotions and their subsequent encoding.
4) Different types of classifiers can be used in OpenCV for emotion recognition, but we will mainly be using the FisherFace classifier.
5) Extracting Faces: OpenCV provides four predefined face classifiers, so to detect as many faces as possible, we apply these classifiers in a sequence.
6) The dataset is split into a training set and a classification set. The training set is used to teach the classifier the types of emotions by extracting information from several images, and the classification set is used to estimate the classifier's performance.
7) For best results, the images should have exactly the same properties, i.e., size.
8) The subject in each image is analysed, converted to grayscale, cropped and saved to a directory.
9) Finally, we compile the training set using 80% of the data and classify the remaining 20% on the classification set. The process is repeated to improve efficiency.
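The repeatable 80/20 split of steps 6 and 9 can be sketched as follows; the file names are hypothetical placeholders standing in for the cropped grayscale faces, and the seed value is an arbitrary choice:

```python
import random

def split_dataset(files, train_frac=0.8, seed=42):
    """Shuffle the file list and split it into training and
    classification sets, as described in steps 6 and 9."""
    files = list(files)
    random.Random(seed).shuffle(files)  # seeded so each run splits the same way
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]

# Hypothetical file names for the cropped, grayscaled face images.
images = [f"face_{i}.jpg" for i in range(10)]
train, test = split_dataset(images)
print(len(train), len(test))  # 8 2
```

Re-running the split with a different seed, then retraining, is one simple way to realize the "repeat the process" suggestion in step 9.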

Fig.4 Output of a sample image

VII. CONCLUSION
1) The expected output of this project is the accurate capture of a person's facial expression using AI tools such as Python, machine learning, CNNs, OpenCV, etc.
2) The main purpose of this project is to make a significant contribution to the field and to help people recognise human facial expressions, through which human feelings can be easily understood.
3) Deep learning classification has been successfully applied to many EEG tasks, including motor imagery, seizure detection, mental workload, sleep stage scoring, event-related potential, and emotion recognition tasks. The design of these deep learning studies varied significantly in input formulation and network design.
4) Several public datasets were analysed in multiple studies, which allowed us to directly compare classification performances based on their design. Generally, CNNs, RNNs, and DBNs outperformed other types of deep networks, such as SAEs and MLPNNs.
5) Hybrid designs incorporating convolutional layers with recurrent layers or restricted Boltzmann machines showed promise in classification accuracy and transfer learning when compared against standard designs.
6) We recommend more in-depth research on these designs, particularly the number and arrangement of the different layers, including RBMs, recurrent layers, convolutional layers, and fully connected layers.


REFERENCES
[1] Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Zunaidi, I., Hazry, D.: Lifting scheme for human emotion detection using EEG. In: International Symposium on Information Technology, ITSim 2008, vol. 2 (2008)
[2] Plutchik, R.: Emotions and life: perspectives from psychology, biology, and evolution, 1st edn. American Psychological Association, Washington, DC (2003)
[3] Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine 14(2), 186–197 (2010)
[4] Canli, T., Desmond, J.E., Zhao, Z., Glover, G., Gabrieli, J.D.E.: Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport 9(14), 3233–
3239 (1998)
[5] Chanel, G., Kronegg, J., Grandjean, D., Pun, T.: Emotion assessment: Arousal evaluation using EEG’s and peripheral physiological signals (2006)
[6] Ekman, P.: Basic emotions. In: Dalgleish, T., Power, M. (eds.) Handbook of Cognition and Emotion. Wiley, New York (1999)
[7] Grocke, D.E., Wigram, T.: Receptive Methods in Music Therapy: Techniques and Clinical Applications for Music Therapy Clinicians, Educators and Students,
1st edn. Jessica Kingsley Publishers (2007)
