Abstract
Emotion recognition is widely used in many areas, such as medicine and education. Because micro- and macro-expressions differ markedly in duration and intensity, a single model cannot classify both kinds of emotion precisely. In this paper, an emotion recognition algorithm based on graph neural networks is proposed. The proposed method involves four key steps. First, data augmentation is used to increase the diversity of the original data. Second, a graph is built from facial feature points, with the Euclidean distances between feature points serving as the initial values of the adjacency matrix. Third, the Laplacian matrix is derived from this matrix. Finally, a graph neural network is used to bridge the relationship between feature vectors and emotions. In addition, a new dataset named FEC-13 is provided by subdividing the traditional six emotion categories into thirteen according to emotion intensity. Experimental results show that high accuracy is reached with a small amount of training data; on the CASME II dataset in particular, the method achieves an accuracy of 95.49%. A cross-database study indicates that the proposed method has high generalization performance, reaching an accuracy of 74.99% on the FEC-13 dataset.
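The graph-construction steps described above (Euclidean distances between feature points as initial edge weights, followed by the Laplacian) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernel used to turn distances into edge weights and the single ReLU graph-convolution layer (in the style of Kipf and Welling's GCN) are assumptions for the sake of a runnable example.

```python
import numpy as np

def build_graph_laplacian(landmarks):
    """Build a symmetric normalized graph Laplacian from facial feature points.

    landmarks: (N, 2) array of (x, y) feature-point coordinates.
    Returns L = I - D^{-1/2} A D^{-1/2}.
    """
    # Pairwise Euclidean distances between feature points,
    # used as the initial values of the weight matrix.
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))

    # Convert distances to similarities so that nearby points get
    # stronger edges; a Gaussian kernel is one common choice (assumed here).
    sigma = dist.mean() + 1e-8
    adj = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
    np.fill_diagonal(adj, 0.0)  # no self-loops in the raw adjacency

    # Degree-normalize and form the Laplacian.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-8))
    return np.eye(len(landmarks)) - d_inv_sqrt @ adj @ d_inv_sqrt

def gcn_layer(a_hat, h, w):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    return np.maximum(a_hat @ h @ w, 0.0)
```

In practice the normalized adjacency (or Laplacian) computed from the landmarks would be fixed per face, while the layer weights `w` are learned to map node features to emotion classes.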
Acknowledgment
This work was supported in part by the Planning Subject for the 13th Five Year Plan of Beijing Education Sciences (CADA18069).
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
Cite this paper
Zhang, J., Sun, G., Zheng, K., Mazhar, S., Fu, X., Yang, D. (2021). Emotion Recognition Based on Graph Neural Networks. In: Sun, F., Liu, H., Fang, B. (eds) Cognitive Systems and Signal Processing. ICCSIP 2020. Communications in Computer and Information Science, vol 1397. Springer, Singapore. https://doi.org/10.1007/978-981-16-2336-3_45
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-2335-6
Online ISBN: 978-981-16-2336-3