Abstract
We propose a novel method for training a Convolutional Neural Network, named CNN-FQ, which takes a face image and outputs a scalar summary of the image quality. The CNN-FQ is trained from triplets of faces that are automatically labeled based on the responses of a pre-trained face matcher. The quality scores extracted by the CNN-FQ are directly linked to the probability that the face matcher incorrectly ranks a randomly selected triplet of faces. We applied the proposed CNN-FQ, trained on the CASIA database, to select the best-quality image from a collection of face images capturing the same identity. The quality of the resulting single-face representation was evaluated on the 1:1 Verification and 1:N Identification tasks defined by the challenging IJB-B protocol. We show that the recognition performance obtained when using faces selected by the CNN-FQ scores is significantly higher than what can be achieved with competing state-of-the-art image quality extractors.
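The triplet-labelling idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and we assume the matcher compares faces by cosine distance between embeddings (as the evaluation notes below suggest). A triplet (anchor, positive, negative), where anchor and positive share an identity, is "ranked correctly" when the matching pair is closer than the non-matching pair; such automatic labels are what CNN-FQ is trained from.

```python
import numpy as np

def cosine_distance(u, v):
    # Cosine distance between two face embedding vectors.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def matcher_ranks_correctly(anchor, positive, negative):
    # The pre-trained matcher ranks the triplet correctly when the
    # matching pair (anchor, positive) is closer than the
    # non-matching pair (anchor, negative). The binary outcome serves
    # as an automatic training label for the quality predictor.
    return cosine_distance(anchor, positive) < cosine_distance(anchor, negative)
```

Low-quality images make such ranking errors more likely, which is what links the learned quality score to the matcher's triplet error probability.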
Notes
- 1.
RetinaFace is available at https://github.com/biubug6/Pytorch_Retinaface.
- 2.
Pre-trained SENet is available at https://github.com/ox-vgg/vgg_face2.
- 3.
The ROC is calculated from two metrics, the True Acceptance Rate (TAR) and the False Acceptance Rate (FAR). TAR corresponds to the probability that the system correctly accepts an authorised person and is estimated as the fraction of matching pairs whose cosine distance is below a decision threshold. FAR corresponds to the probability that the system incorrectly accepts a non-authorised person and is estimated as the fraction of non-matching pairs whose cosine distance is below the decision threshold.
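The TAR/FAR estimates described above can be sketched directly (the function name is hypothetical; inputs are cosine distances of matching and non-matching pairs, as in the note):

```python
import numpy as np

def tar_far(match_dists, nonmatch_dists, threshold):
    # TAR: fraction of matching pairs accepted, i.e. whose cosine
    # distance falls below the decision threshold.
    tar = np.mean(np.asarray(match_dists) < threshold)
    # FAR: fraction of non-matching pairs incorrectly accepted at the
    # same threshold.
    far = np.mean(np.asarray(nonmatch_dists) < threshold)
    return tar, far
```

Sweeping the threshold over the observed distances traces out the ROC curve.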
- 4.
DET and CMC plots are calculated in terms of two metrics, the False Positive Identification Rate (FPIR) and the False Negative Identification Rate (FNIR). FPIR is defined as the proportion of non-mate searches with any candidate below a decision threshold; only candidates at rank 1 are considered. FNIR is defined as the proportion of mate searches for which the known individual is outside the top R = 20 ranks or has a cosine distance above the threshold.
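These two identification metrics can be sketched as follows (the function and argument names are hypothetical; per the note, each non-mate search contributes its rank-1 candidate distance, and each mate search contributes the mate's rank and cosine distance):

```python
import numpy as np

def fpir_fnir(nonmate_rank1_dists, mate_dists, mate_ranks, threshold, top_r=20):
    # FPIR: proportion of non-mate searches whose rank-1 candidate
    # falls below the decision threshold (a false alarm).
    fpir = np.mean(np.asarray(nonmate_rank1_dists) < threshold)
    # FNIR: proportion of mate searches where the enrolled mate is
    # outside the top R ranks, or its cosine distance exceeds the
    # threshold (a miss).
    mate_dists = np.asarray(mate_dists)
    mate_ranks = np.asarray(mate_ranks)
    fnir = np.mean((mate_ranks > top_r) | (mate_dists > threshold))
    return fpir, fnir
```

Varying the threshold yields the DET curve (FNIR vs. FPIR), while varying the rank cutoff yields the CMC curve.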
References
Abaza, A., Harrison, M., Bourlai, T., Ross, A.: Design and evaluation of photometric image quality measures for effective face recognition. IET Biometrics 3(4), 314–324 (2014)
Best-Rowden, L., Jain, A.K.: Learning face image quality from human assessments. IEEE Trans. Inf. Forensics Secur. 13, 3064–3077 (2018)
Beveridge, J., Givens, G., Phillips, P., Draper, B.: Factors that influence algorithm performance in the face recognition grand challenge. Comput. Vis. Image Underst. 113(6), 750–762 (2009)
Beveridge, J., Givens, G., Phillips, P., Draper, B., Bolme, D., Lui, Y.: FRVT 2006: Quo vadis face quality. Image Vis. Comput. 28(5), 732–743 (2010)
Cao, Q., Shen, L., Xie, W., Parkhi, O.M., Zisserman, A.: VGGFace2: a dataset for recognising faces across pose and age. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67–74 (2018)
Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc.: Ser. B (Methodol.) 39(1), 1–22 (1977)
Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: RetinaFace: single-shot multi-level face localisation in the wild. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5202–5211 (2020)
Du, L., Ling, H.: Cross-age face verification by coordinating with cross-face age verification. In: Conference on Computer Vision and Pattern Recognition (2015)
Goswami, G., Bhardwaj, R., Singh, R., Vatsa, M.: MDLFace: memorability augmented deep learning for video face recognition. In: IEEE International Joint Conference on Biometrics (2014)
Goswami, G., Vatsa, M., Singh, R.: Face verification via learned representation on feature-rich video frames. IEEE Trans. Inf. Forensics Secur. 12(7), 1686–1689 (2017)
Grother, P., Tabassi, E.: Performance of biometric quality measures. IEEE Trans. Pattern Anal. Mach. Intell. 29(4), 531–543 (2007)
Guo, Y., Zhang, L., Hu, Y., He, X., Gao, J.: MS-Celeb-1M: a dataset and benchmark for large-scale face recognition. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 87–102. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_6
Hernandez-Ortega, J., Galbally, J., Fierrez, J.: FaceQnet: quality assessment for face recognition based on deep learning. In: International Conference on Biometrics (2019)
Abdurrahim, S.H., Samad, S.A., Huddin, A.B.: Review on the effects of age, gender, and race demographics on automatic face recognition. Vis. Comput. 34(11), 1617–1630 (2017). https://doi.org/10.1007/s00371-017-1428-z
Kingma, D., Ba, J.: ADAM: A method for stochastic optimization. In: ICLR (2014)
Lu, B., Chen, J.C., Castillo, C.D., Chellappa, R.: An experimental evaluation of covariates effects on unconstrained face verification. IEEE Trans. Biomet. Behav. Identity Sci. 1(1), 42–55 (2019)
Ferrara, M., Franco, A., Maio, D., Maltoni, D.: Face image conformance to ISO/ICAO standards in machine readable travel documents. IEEE Trans. Inf. Forensics Secur. 7(4), 1204–1213 (2012)
Poh, N., et al.: Benchmarking quality-dependent and cost-sensitive score-level multimodal biometric fusion algorithms. IEEE Trans. Inf. Forensics Secur. 4(6), 849–866 (2009)
Poh, N., Kittler, J.: A unified framework for biometric expert fusion incorporating quality measures. IEEE Trans. Pattern Anal. Mach. Intell. 34, 3–18 (2012)
Ranjan, R., et al.: Crystal loss and quality pooling for unconstrained face verification and recognition. CoRR abs/1804.01159 (2018). http://arxiv.org/abs/1804.01159
Sellahewa, H., Jassim, S.: Image-quality-based adaptive face recognition. IEEE Trans. Instrum. Measure. 59, 805–813 (2010)
Vignesh, S., Priya, K., Channappayya, S.: Face image quality assessment for face selection in surveillance video using convolutional neural networks. In: IEEE Global Conference on Signal and Information Processing (2015)
Whitelam, C., et al.: IARPA Janus Benchmark-B face dataset. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 592–600 (2017)
Wong, Y., Chen, S., Mau, S., Sanderson, C., Lovell, B.: Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition. In: CVPRW, pp. 74–81 (2011)
Yi, D., Lei, Z., Liao, S., Li, S.Z.: Learning face representation from scratch. CoRR abs/1411.7923 (2014). http://arxiv.org/abs/1411.7923
Acknowledgments
The research was supported by the Czech Science Foundation project GACR GA19-21198S.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Yermakov, A., Franc, V. (2021). CNN Based Predictor of Face Image Quality. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12666. Springer, Cham. https://doi.org/10.1007/978-3-030-68780-9_52
DOI: https://doi.org/10.1007/978-3-030-68780-9_52
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-68779-3
Online ISBN: 978-3-030-68780-9
eBook Packages: Computer Science, Computer Science (R0)