Abstract
This work develops a real-time image and video processing system with an artificial intelligence (AI) agent that predicts a job candidate's behavioral competencies from his or her facial expressions. This is accomplished during a real-time video-recorded interview using a histogram of oriented gradients with a support vector machine (HOG-SVM) together with convolutional neural network (CNN) recognition. Departing from the classical view of recognizing emotional states, the prototype system automatically decodes a candidate's behaviors from their microexpressions, following the behavioral ecology view of facial displays (BECV), in the context of employment interviews. An experiment was conducted at a Fortune 500 company, where video records and competency scores were collected from the company's employees and hiring managers. The results indicate that the proposed system provides better predictive power than human-structured interviews, personality inventories, occupational interest tests, and assessment centers. The proposed approach can therefore serve as an effective screening method built on a personal-value-based competency model.
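As a rough illustration of the HOG feature stage named in the abstract, the sketch below computes a simplified HOG descriptor in NumPy: per-cell histograms of gradient orientations, omitting the block normalization used in full HOG. This is a hypothetical minimal example, not the paper's implementation; in a complete HOG-SVM detector, descriptors like this are fed to a linear SVM scored over a sliding window.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient
    orientations (no block normalization). `img` is a 2-D grayscale
    array; returns a flat feature vector of (h//cell)*(w//cell)*bins."""
    # Central-difference gradients (borders left at zero)
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_w = 180.0 / bins
    for i in range(ch):
        for j in range(cw):
            # Magnitude-weighted orientation histogram for this cell
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            idx = np.minimum((a // bin_w).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)
    return hist.ravel()
```

For a 16x16 image whose intensity rises left to right, the gradient is purely horizontal, so only the 0-degree orientation bin of each cell is populated.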
Change history
16 March 2021
A Correction to this paper has been published: https://doi.org/10.1007/s11554-021-01090-2
Acknowledgements
This work was supported by the Ministry of Science and Technology, Taiwan (Grant No. 109-2511-H-003-046).
Cite this article
Su, YS., Suen, HY. & Hung, KE. Predicting behavioral competencies automatically from facial expressions in real-time video-recorded interviews. J Real-Time Image Proc 18, 1011–1021 (2021). https://doi.org/10.1007/s11554-021-01071-5