Pose‐invariant face recognition based on matching the occlusion free regions aligned by 3D generic model

Published: 01 July 2020
Abstract

Face recognition systems perform accurately in controlled environments, but their performance degrades dramatically in unconstrained environments. In this study, a novel pose-invariant face recognition system based on occlusion-free regions is proposed. The method utilises a gallery set of frontal face images and can handle large pose variations. For a 2D probe face image with an arbitrary pose, the head pose is first obtained using a robust head pose estimation method. The 2D face image is then normalised by a novel 3D modelling method that operates on a single input image, so pose-invariant face recognition is converted into a frontal face recognition problem. The 3D structure is reconstructed using a new method based on the estimated head pose and only one facial feature point, significantly fewer landmarks than previous methods require. According to the estimated pose, occlusion-free regions are extracted from the normalised images for feature extraction. Finally, face matching and recognition are performed using these regions from the normalised test images and the corresponding regions of the gallery images. Experimental results on the FERET and CAS-PEAL-R1 databases demonstrate that the proposed method outperforms competing methods and is both robust and efficient.
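
The pipeline described above amounts to: estimate the probe's head pose, reconstruct a 3D model and render a frontal (normalised) view, keep only the region that was not self-occluded in the original pose, and match that region against the same region of the frontal gallery images. The Python sketch below illustrates only the final region-restricted matching step under simplified assumptions: the half-face occlusion rule, the 15° near-frontal threshold and the cosine similarity on raw pixel values are illustrative stand-ins, not the descriptors or thresholds used in the paper, and all function names are hypothetical.

```python
import numpy as np

def occlusion_free_region(img: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Keep the vertical half of a pose-normalised face that was visible in
    the original rotated view; near-frontal poses keep the whole image.
    (Illustrative rule only -- the paper derives the visible region from the
    estimated pose and the 3D generic model.)"""
    w = img.shape[1]
    if abs(yaw_deg) < 15.0:                 # assumed near-frontal threshold
        return img
    half = w // 2
    # Positive yaw is assumed to self-occlude the right image half.
    return img[:, :half] if yaw_deg > 0 else img[:, half:]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two images flattened to vectors (a stand-in for
    a proper feature descriptor)."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_probe(probe_normalised: np.ndarray, yaw_deg: float,
                gallery: dict) -> str:
    """Compare the occlusion-free region of the pose-normalised probe with
    the corresponding region of every frontal gallery image and return the
    identity of the best match."""
    probe_region = occlusion_free_region(probe_normalised, yaw_deg)
    scores = {
        identity: cosine_similarity(probe_region,
                                    occlusion_free_region(frontal, yaw_deg))
        for identity, frontal in gallery.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Toy usage: random arrays stand in for normalised 64x64 face images.
    rng = np.random.default_rng(0)
    gallery = {f"subject_{i}": rng.random((64, 64)) for i in range(3)}
    probe = gallery["subject_1"] + 0.05 * rng.random((64, 64))
    print(match_probe(probe, yaw_deg=30.0, gallery=gallery))
```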

    Published In

IET Computer Vision, Volume 14, Issue 5, August 2020, 111 pages
EISSN: 1751-9640
DOI: 10.1049/cvi2.v14.5

    Publisher

John Wiley & Sons, Inc., United States

    Author Tags

    1. pose estimation
    2. feature extraction
    3. face recognition
