Abstract
A smile is one of the key cues to the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used to amplify smile micro-expressions, along with three normalization procedures for distinguishing posed from spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform it on the overall face smile classification task. Amplifying micro-expressions with EVM did not significantly affect classification accuracy, whereas normalizing the facial features improved it. Unlike many manual or semi-automatic methodologies, our approach automatically classifies every smile as either ‘spontaneous’ or ‘posed’ using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared with other relevant methods.
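As a rough illustrative sketch of the feature-plus-SVM pipeline summarized in the abstract, the Python snippet below extracts HOG descriptors from pre-aligned face crops and trains a linear SVM using scikit-image and scikit-learn. It is not the authors' implementation: the 64x64 crop size, HOG parameters and the synthetic placeholder data are assumptions for illustration only, and the EVM amplification and normalization steps are omitted.

# Minimal sketch (not the authors' code): HOG features + linear SVM for
# posed-vs-spontaneous smile classification. Assumes face crops have already
# been detected, aligned and resized to 64x64 grayscale images.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(face_gray):
    # Histogram of oriented gradients for one 64x64 grayscale face crop
    return hog(face_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Placeholder data: replace with real normalized face crops and labels
# (1 = spontaneous, 0 = posed), e.g. extracted from smile video frames.
rng = np.random.default_rng(0)
faces = rng.random((40, 64, 64))       # 40 synthetic 64x64 "face" images
labels = rng.integers(0, 2, size=40)   # synthetic posed/spontaneous labels

X = np.stack([hog_descriptor(f) for f in faces])
clf = SVC(kernel='linear', C=1.0)
clf.fit(X[:30], labels[:30])
print("accuracy on held-out synthetic frames:", clf.score(X[30:], labels[30:]))

In the paper's setting, the same descriptor-then-classifier structure would be repeated for the other feature types (CNN face features, LPQ, dense optical flow), with the SVM making the final posed/spontaneous decision.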
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Mandal, B., Lee, D., Ouarti, N. (2017). Distinguishing Posed and Spontaneous Smiles by Facial Dynamics. In: Chen, C.-S., Lu, J., Ma, K.-K. (eds.) Computer Vision – ACCV 2016 Workshops. ACCV 2016. Lecture Notes in Computer Science, vol. 10116. Springer, Cham. https://doi.org/10.1007/978-3-319-54407-6_37
DOI: https://doi.org/10.1007/978-3-319-54407-6_37
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-54406-9
Online ISBN: 978-3-319-54407-6
eBook Packages: Computer Science, Computer Science (R0)