
Adversarial Attack and Defense in Deep Ranking

Published: 13 February 2024

Abstract

Deep neural network (DNN) classifiers are vulnerable to adversarial attacks, where an imperceptible perturbation can result in misclassification. However, the vulnerability of DNN-based image ranking systems remains under-explored. In this paper, we propose two attacks against deep ranking systems, namely the Candidate Attack and the Query Attack, which can raise or lower the rank of chosen candidates through adversarial perturbations. Specifically, the expected ranking order is first represented as a set of inequalities, and a triplet-like objective function is then designed to obtain the optimal perturbation. In response, an anti-collapse triplet defense is proposed to improve the robustness of the ranking model against all of the proposed attacks, where the model learns to prevent the adversarial perturbation from pulling the positive and negative samples close to each other. To comprehensively measure the empirical adversarial robustness of a ranking model with our defense, we propose an empirical robustness score, which is based on a set of representative attacks against ranking models. Our adversarial ranking attacks and defenses are evaluated on the MNIST, Fashion-MNIST, CUB200-2011, CARS196, and Stanford Online Products datasets. Experimental results demonstrate that our attacks can effectively compromise a typical deep ranking system, while our defense significantly improves the ranking system's robustness and simultaneously mitigates a wide range of attacks.
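To make the two core ideas above concrete, the following PyTorch-style sketch illustrates (i) a triplet-like candidate attack that turns the desired ranking inequalities into hinge terms and optimizes a bounded perturbation with projected gradient steps, and (ii) an anti-collapse triplet loss that trains the model on samples perturbed to pull a positive and a negative toward each other. The function names, the PGD-style inner loop, the Euclidean embedding distances, and all hyperparameters (eps, alpha, margin) are illustrative assumptions chosen for exposition, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def candidate_attack(model, query, candidate, others,
                     eps=8/255, alpha=1/255, steps=10, margin=0.0):
    # Illustrative PGD-style Candidate Attack sketch (not the paper's exact objective):
    # perturb `candidate` within an L-infinity ball so that it ranks above `others`
    # for `query`, i.e. enforce dist(q, c+delta) < dist(q, o_j) for every competitor o_j.
    q = model(query).detach()        # [1, D] query embedding, kept fixed
    o = model(others).detach()       # [N, D] embeddings of competing candidates
    delta = torch.zeros_like(candidate, requires_grad=True)
    for _ in range(steps):
        c = model((candidate + delta).clamp(0, 1))   # [1, D] perturbed candidate embedding
        d_qc = torch.cdist(q, c)                     # [1, 1] distance(query, candidate)
        d_qo = torch.cdist(q, o)                     # [1, N] distances(query, others)
        # Each violated inequality contributes one triplet-like hinge term.
        loss = F.relu(d_qc - d_qo + margin).sum()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()       # descend to raise the candidate's rank
            delta.clamp_(-eps, eps)                  # stay within the perturbation budget
        delta.grad.zero_()
    return (candidate + delta).clamp(0, 1).detach()

def anti_collapse_triplet_loss(model, anchor, positive, negative, eps=8/255, margin=0.2):
    # Anti-collapse defense sketch: first craft perturbations that pull the positive and
    # negative embeddings toward each other (the "collapse"), then apply the ordinary
    # triplet loss on the perturbed pair so the model learns to keep them apart.
    adv_p = positive.clone().requires_grad_(True)
    adv_n = negative.clone().requires_grad_(True)
    collapse = F.pairwise_distance(model(adv_p), model(adv_n)).sum()
    collapse.backward()
    with torch.no_grad():
        adv_p = (positive - eps * adv_p.grad.sign()).clamp(0, 1)   # move positive toward negative
        adv_n = (negative - eps * adv_n.grad.sign()).clamp(0, 1)   # move negative toward positive
    a, p, n = model(anchor), model(adv_p), model(adv_n)
    return F.triplet_margin_loss(a, p, n, margin=margin)

In a defended training loop, a loss of this kind would simply replace the standard triplet loss on each mini-batch, while the attack function can be used at evaluation time to check whether the perturbed candidate's rank for the query actually rises.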


Cited By

  • Enhancing Adversarial Robustness for Deep Metric Learning via Attention-Aware Knowledge Guidance. In Advanced Intelligent Computing Technology and Applications, 2024, pp. 103–117. DOI: 10.1007/978-981-97-5615-5_9. Online publication date: 5 August 2024.

Published In

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 46, Issue 8, August 2024, 664 pages

Publisher

IEEE Computer Society

United States
