Abstract
Despite their strong performance across many image classification tasks, deep neural network classifiers remain highly sensitive to small input perturbations. In particular, adversarial examples expose a serious weakness of deep learning models in many domains, such as disease recognition. The aim of this paper is to improve the robustness of deep neural network models to small input perturbations by comparing standard training with adversarial training, which maximizes the distance between the predicted instances and the decision boundary. We study the behavior of the decision boundary during training, measuring the minimum distance of the input images from the decision boundary and how this distance evolves over the course of training. The results show that this distance decreases during standard training, whereas adversarial training increases it, which improves the robustness of the model. Our work offers a new perspective on the sensitivity problem of deep neural networks. We find a strong relationship between the robustness of a deep neural network and its training phase: robustness is created during training rather than being predetermined by the initialization or the architecture.
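As a rough illustration of the two ingredients the abstract describes, the sketch below shows (i) an estimate of an input's minimum distance to the decision boundary, obtained by bisecting along a given perturbation direction, and (ii) one adversarial training step that fits the model on perturbed inputs. This is a minimal sketch in PyTorch, not the authors' method: the paper does not specify the attack or the distance measure, so the PGD attack, the L2 bisection search, and the helper names (`pgd_attack`, `boundary_distance`, `adversarial_training_step`) are assumptions made for illustration.

```python
# Minimal sketch (assumed details, not the authors' released code).
# Assumes image batches of shape [B, C, H, W] with pixel values in [0, 1].
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on the loss inside an L-infinity ball of radius eps."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def boundary_distance(model, x, y, direction, tol=1e-3, max_radius=10.0):
    """Estimate each input's minimum L2 distance to the decision boundary by
    bisecting the largest step along `direction` that keeps the original label."""
    direction = direction / (direction.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    lo = torch.zeros(x.size(0), device=x.device)          # still classified as y
    hi = torch.full_like(lo, max_radius)                   # assumed search radius
    while (hi - lo).max() > tol:
        mid = (lo + hi) / 2
        pred = model(x + mid.view(-1, 1, 1, 1) * direction).argmax(1)
        still_correct = pred.eq(y)
        lo = torch.where(still_correct, mid, lo)
        hi = torch.where(still_correct, hi, mid)
    return (lo + hi) / 2  # per-example distance estimate along this direction


def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean inputs."""
    model.eval()                      # freeze batch-norm/dropout while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Tracking the mean of `boundary_distance` on a held-out batch after each epoch (e.g., along the direction `pgd_attack(model, x, y) - x`) is one way to observe how the margin shrinks under standard training and grows under adversarial training, as the abstract reports.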
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Ouriha, M., El Habouz, Y., El Mansouri, O. (2022). Decision Boundary to Improve the Sensitivity of Deep Neural Networks Models. In: Fakir, M., Baslam, M., El Ayachi, R. (eds) Business Intelligence. CBI 2022. Lecture Notes in Business Information Processing, vol 449. Springer, Cham. https://doi.org/10.1007/978-3-031-06458-6_4
DOI: https://doi.org/10.1007/978-3-031-06458-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06457-9
Online ISBN: 978-3-031-06458-6
eBook Packages: Computer Science, Computer Science (R0)