Abstract
Facial expressions are a prevalent way to recognize human emotions, and automatic facial expression recognition (FER) has been a significant task in cognitive science, artificial intelligence, and computer vision. A critical issue in designing FER models is the strong correlation among different emotion classes. Accuracy is further reduced by variations in how emotions are expressed, variations in lighting, and ethnic bias. Recent convolutional neural network (CNN)-based FER models have substantially improved accuracy but still struggle to distinguish micro-expressions. This paper proposes a multi-input hybrid FER model that combines hand-engineered and self-learnt features to classify facial expressions. VGG-Face and histogram of oriented gradients (HOG) features are extracted from face images to capture distinct facial expression patterns. Fusing the deep (VGG-Face) and hand-engineered (HOG) features yields higher accuracy than conventional CNN models. On three facial expression datasets, the proposed model outperforms other popular FER models: with fivefold cross-validation on the Extended Cohn–Kanade (CK+), Yale-Face, and Karolinska Directed Emotional Faces (KDEF) datasets, it achieves 98.12%, 95.26%, and 96.36% accuracy, respectively.
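To make the fusion idea concrete, the sketch below pairs a frozen VGG-Face backbone with a HOG descriptor and feeds the concatenated features to a small softmax head. This is a minimal illustration of the multi-input scheme described in the abstract, not the authors' exact pipeline: the layer sizes, the seven-class output, and the use of the third-party keras_vggface package are assumptions.

```python
# Minimal sketch of HOG + VGG-Face feature fusion for FER.
# Assumptions (not from the paper): keras_vggface backbone, 7 emotion
# classes, 256-unit fusion layer, dropout rate, and Adam defaults.
import numpy as np
from skimage.feature import hog
from keras_vggface.vggface import VGGFace  # third-party package
from tensorflow.keras import layers, Model

NUM_CLASSES = 7  # assumed: the basic emotion categories

# Frozen VGG-Face backbone used purely as a deep feature extractor.
vgg = VGGFace(model='vgg16', include_top=False,
              input_shape=(224, 224, 3), pooling='avg')
vgg.trainable = False

def hog_features(gray_face):
    """HOG descriptor for one grayscale face image (values in [0, 1])."""
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Two-input fusion head: deep embedding + HOG vector -> softmax.
hog_dim = hog_features(np.zeros((224, 224))).shape[0]
img_in = layers.Input(shape=(224, 224, 3), name='face')
hog_in = layers.Input(shape=(hog_dim,), name='hog')
deep = vgg(img_in)
fused = layers.Concatenate()([deep, hog_in])
x = layers.Dense(256, activation='relu')(fused)
x = layers.Dropout(0.5)(x)
out = layers.Dense(NUM_CLASSES, activation='softmax')(x)

model = Model(inputs=[img_in, hog_in], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Training then runs on pairs of (preprocessed face image, HOG vector) with one-hot emotion labels; freezing the backbone keeps the sketch cheap to train, whereas the deep branch can also be fine-tuned end to end.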
References
Mehrabian, A.: Nonverbal Communication. Routledge, London (2017)
Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn–Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, pp. 94–101. IEEE (2010)
Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
Lundqvist, D., Flykt, A., Öhman, A.: The Karolinska directed emotional faces (KDEF). Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet, Stockholm (1998)
Tang, Y., Zhang, X.M., Wang, H.: Geometric-convolutional feature fusion based on learning propagation for facial expression recognition. IEEE Access 6, 42532–42540 (2018)
Wang, Y., Li, M., Zhang, C., Chen, H., Lu, Y.: Weighted-fusion feature of MB-LBPUH and HOG for facial expression recognition. Soft. Comput. 24(8), 5859–5875 (2020)
Wang, X., Jin, C., Liu, W., Hu, M., Xu, L., Ren, F.: Feature fusion of HOG and WLD for facial expression recognition. In: Proceedings of the 2013 IEEE/SICE International Symposium on System Integration, pp. 227–232. IEEE (2013)
Xie, X., Lam, K.M.: Facial expression recognition based on shape and texture. Pattern Recogn. 42(5), 1003–1011 (2009)
Lin, D.T., Pan, D.C.: Integrating a mixed-feature model and multiclass support vector machine for facial expression recognition. Integr. Comput. Aid. Eng. 16(1), 61–74 (2009)
Reddy, G.V., Savarni, C.D., Mukherjee, S.: Facial expression recognition in the wild, by fusion of deep learnt and hand-crafted features. Cogn. Syst. Res. 62, 23–34 (2020)
Pan, X.: Fusing HOG and convolutional neural network spatial-temporal features for video-based facial expression recognition. IET Image Proc. 14(1), 176–182 (2020)
Breuer, R., Kimmel, R.: A deep learning perspective on the origin of facial expressions. arXiv preprint arXiv:1705.01842 (2017)
Jung, H., Lee, S., Yim, J., Park, S., Kim, J.: Joint fine-tuning in deep neural networks for facial expression recognition. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2983–2991 (2015)
Zhao, K., Chu, W.-S., Zhang, H.: Deep region and multi-label learning for facial action unit detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3391–3399 (2016)
Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., Fei-Fei, L.: Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)
Kahou, S.E., Pal, C., Bouthillier, X., Froumenty, P., Gülçehre, Ç., Memisevic, R., Vincent, P., Courville, A., Bengio, Y., Ferrari, R.C., Mirza, M.: Combining modality specific deep neural networks for emotion recognition in video. In: Proceedings of the 15th ACM on International Conference on Multimodal Interaction, pp. 543–550 (2013)
Koc, M., Ergin, S., Gülmezoğlu, M.B., Edizkan, R., Barkana, A.: Use of gradient and normal vectors for face recognition. IET Image Proc. 14(10), 2121–2129 (2020)
Liu, P., Han, S., Meng, Z., Tong, Y.: Facial expression recognition via a boosted deep belief network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1805–1812 (2014)
Lv, Y., Feng, Z., Xu, C.: Facial expression recognition via deep learning. In: IEEE International Conference on Smart Computing, pp. 303–308 (2014)
Mollahosseini, A., Chan, D., Mahoor, M.H.: Going deeper in facial expression recognition using deep neural networks. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–10 (2016)
Khorrami, P., Paine, T.L., Huang, T.S.: Do deep neural networks learn facial action units when doing expression recognition? In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), pp. 19–27 (2015)
Zhang, K., Huang, Y., Du, Y., Wang, L.: Facial expression recognition based on deep evolutional spatial-temporal networks. IEEE Trans. Image Process. 26, 4193–4203 (2017)
Kurup, A.R., Ajith, M., Ramón, M.M.: Semi-supervised facial expression recognition using reduced spatial features and deep belief networks. Neurocomputing 367, 188–197 (2019)
Datta, S., Sen, D., Balasubramanian, R.: Integrating geometric and textural features for facial emotion classification using SVM frameworks. In: Proceedings of International Conference on Computer Vision and Image Processing, pp. 619–628 (2017)
Cai, J., Meng, Z., Khan, A.S., Li, Z., O’Reilly, J., Tong, Y.: Island loss for learning discriminative features in facial expression recognition. In: 13th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 302–309 (2018)
Kim, B. K., Dong, S. Y., Roh, J., Kim, G., Lee, S.-Y.: Fusing aligned and non-aligned face information for automatic affect recognition in the wild: a deep learning approach. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 48–57 (2016)
Zia, M.S., Hussain, M., Jaffar, M.A.: A novel spontaneous facial expression recognition using dynamically weighted majority voting based ensemble classifier. Multimed. Tools Appl. 77, 25537–25567 (2018)
Cotter, S.F.: Weighted voting of sparse representation classifiers for facial expression recognition. In: IEEE 18th European Signal Processing Conference, pp. 1164–1168 (2010)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1, pp. 886–893. IEEE (2005)
Carcagnì, P., Del Coco, M., Leo, M., Distante, C.: Facial expression recognition and histograms of oriented gradients: a comprehensive study. SpringerPlus 4(1), 1–25 (2015)
Da, B., Sang, N.: Local binary pattern based face recognition by estimation of facial distinctive information distribution. Opt. Eng. 48(11), 117203 (2009)
Chen, J., Shan, S., He, C., Zhao, G., Pietikäinen, M., Chen, X., Gao, W.: WLD: a robust local image descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1705–1720 (2009)
Ullah, I., Hussain, M., Muhammad, G., Aboalsamh, H., Bebis, G., Mirza, A.M.: Gender recognition from face images with local WLD descriptor. In: 19th International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 417–420. IEEE (2012)
Ahmed, F., Hossain, E., Bari, A.H., Shihavuddin, A.S.M.: Compound local binary pattern (CLBP) for robust facial expression recognition. In: 2011 IEEE 12th International Symposium on Computational Intelligence and Informatics (CINTI), pp. 391–395 (2011)
Chen, J., Chen, Z., Chi, Z., Fu, H., et al.: Facial expression recognition based on facial components detection and hog features. In: International Workshops on Electrical and Computer Engineering Subfields, pp. 884–888 (2014)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Parkhi, O.M., Vedaldi, A., Zisserman, A.: VGG Face descriptor. Visual Geometry Group, University of Oxford. https://www.robots.ox.ac.uk/~vgg/software/vgg_face/ (2015)
Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. In: Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition (2008)
Wolf, L., Hassner, T., Maoz, I.: Face recognition in unconstrained videos with matched background similarity. In: CVPR 2011, pp. 529–534. IEEE (2011)
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... Adam, H.: Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
Zavarez, M.V., Berriel, R.F., Oliveira-Santos, T.: Cross-database facial expression recognition based on fine-tuned deep convolutional network. In: 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 405–412 (2017)
Pantic, M., Valstar, M., Rademaker, R., Maat, L.: Web-based database for facial expression analysis. In: 2005 IEEE International Conference on Multimedia and Expo, 5 pp. IEEE (2005)
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H., Hawk, S.T., Van Knippenberg, A.D.: Presentation and validation of the Radboud faces database. Cogn. Emot. 24(8), 1377–1388 (2010)
Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor wavelets. In: Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200–205. IEEE (1998)
Martinez, A., Benavente, R.: The AR face database. CVC Technical Report 24 (1998)
Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. I-511–I-518. IEEE (2001)
Serengil, S.I.: How to convert MATLAB models to Keras. https://sefiks.com/2019/07/15/how-to-convert-matlab-models-to-keras/ (2019)
Ekman, P., Friesen, W., Hager, J.: Facial Action Coding System: Research Nexus. Network Research Information, Salt Lake City (2002)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Tieleman, T., Hinton, G.: Lecture 6.5 - RMSProp. COURSERA: Neural Networks for Machine Learning. Technical Report, University of Toronto (2012)
Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12(7), 2121–2159 (2011)
Platt, J.C., Cristianini, N., Shawe-Taylor, J.: Large margin DAGs for multiclass classification. Adv. Neural Inf. Process. Syst. 12, 547–553 (1999)
Shan, C., Gong, S., McOwan, P.W.: Robust facial expression recognition using local binary patterns. In: IEEE International Conference on Image Processing 2005, vol. 2, pp. II-370. IEEE (2005)
Friedman, J.H.: Another approach to polychotomous classification. Technical Report, Statistics Department, Stanford University (1996)
Xie, S., Hu, H.: Facial expression recognition using hierarchical features with deep comprehensive multipatches aggregation convolutional neural networks. IEEE Trans. Multimed. 21(1), 211–220 (2018)
Nwosu, L., Wang, H., Lu, J., Unwala, I., Yang, X., Zhang, T.: Deep convolutional neural network for facial expression recognition using facial parts. In: IEEE 15th International Conference on Dependable, Autonomic and Secure Computing, 15th International Conference on Pervasive Intelligence and Computing, 3rd International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 1318–1321. IEEE (2017)
Ravi, R., Yadhukrishna, S.V., Prithviraj, R.: A face expression recognition using CNN and LBP. In: Proceedings of the 4th International Conference on Computing Methodologies and Communication (ICCMC), pp. 684–689 (2020)
Alshamsi, H., Kepuska, V., Meng, H.: Real time automated facial expression recognition app development on smart phones. In: 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 384–392 (2017)
Koujan, M.R., Alharbawee, L., Giannakakis, G., Pugeault, N., Roussos, A.: Real-time facial expression recognition in the wild by disentangling 3D expression from identity. arXiv preprint arXiv:2005.05509 (2020)
Melaugh, R., Siddique, N., Coleman, S., Yogarajah, P.: Facial expression recognition on partial facial sections. In: 11th International Symposium on Image and Signal Processing and Analysis, pp. 193–197 (2019)
Cite this article
Ahadit, A.B., Jatoth, R.K. A novel multi-feature fusion deep neural network using HOG and VGG-Face for facial expression classification. Machine Vision and Applications 33, 55 (2022). https://doi.org/10.1007/s00138-022-01304-y