Cited By
Roth, T., Gao, Y., Abuadbba, A., Nepal, S., and Liu, W. (2024). Token-modification adversarial attacks for natural language processing: A survey. AI Communications, 1–22. https://doi.org/10.3233/AIC-230279. Online publication date: 2-Apr-2024.