Search Results (5)

Search Parameters:
Keywords = JSMA

17 pages, 2627 KiB  
Article
Classification and Identification of Frequency-Hopping Signals Based on Jacobi Salient Map for Adversarial Sample Attack Approach
by Yanhan Zhu, Yong Li and Tianyi Wei
Sensors 2024, 24(21), 7070; https://doi.org/10.3390/s24217070 - 2 Nov 2024
Viewed by 845
Abstract
Frequency-hopping (FH) communication adversarial research is a key area in modern electronic countermeasures. To address the challenge posed by interfering parties that use deep neural networks (DNNs) to classify and identify multiple intercepted FH signals—enabling targeted interference and degrading communication performance—this paper presents a batch feature point targetless adversarial sample generation method based on the Jacobi saliency map (BPNT-JSMA). This method builds on the traditional JSMA to generate feature saliency maps, selects the top 8% of salient feature points in batches for perturbation, and increases the perturbation limit to restrict the extreme values of single-point perturbations. Experimental results in a white-box environment show that, compared with the traditional JSMA method, BPNT-JSMA not only maintains a high attack success rate but also enhances attack efficiency and improves the stealthiness of the adversarial samples.
(This article belongs to the Section Communications)
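The core step described above—selecting the most salient input points in a batch and bounding the perturbation applied to each—can be illustrated with a short PyTorch sketch. The classifier `model`, input batch `x`, the 8% fraction, and the clipping value `eps` are placeholders chosen for illustration; this is not the authors' BPNT-JSMA implementation.

```python
# Minimal sketch of batch top-8% saliency perturbation with a per-point limit,
# assuming a PyTorch classifier `model` and an input batch `x` (illustrative only).
import torch

def batch_saliency_perturb(model, x, target_class, frac=0.08, eps=0.2):
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                                 # (batch, num_classes)
    logits[:, target_class].sum().backward()          # gradient of the target score
    saliency = x.grad.abs().flatten(1)                # per-sample saliency map
    k = max(1, int(frac * saliency.shape[1]))         # keep the top 8% of points
    topk = saliency.topk(k, dim=1).indices
    mask = torch.zeros_like(saliency).scatter_(1, topk, 1.0).view_as(x)
    delta = (eps * x.grad.sign() * mask).clamp(-eps, eps)  # bounded per-point change
    return (x + delta).detach()
```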

14 pages, 1562 KiB  
Article
Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification
by Sharoug Alzaidy and Hamad Binsalleeh
Appl. Sci. 2024, 14(4), 1673; https://doi.org/10.3390/app14041673 - 19 Feb 2024
Cited by 2 | Viewed by 2546
Abstract
In the field of behavioral detection, deep learning has been used extensively; for example, deep learning models have been used to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs, causing malicious files to be misclassified. Cyber-Physical Systems (CPS) compromised by such malicious files can suffer catastrophic consequences. This paper presents a method for classifying Windows portable executables (PEs) using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). To generate adversarial examples of malware PEs, we conduct two white-box attacks, the Jacobian-based Saliency Map Attack (JSMA) and the Carlini and Wagner attack (C&W). An adversarial payload was injected into the DOS header, and a section was added to the file to preserve PE functionality. The attacks evaded the CNN model at a 91% evasion rate and the RNN model at an 84.6% rate. Two defense mechanisms, based on distillation and on training techniques, are examined in this study for overcoming adversarial example challenges. Against JSMA, distillation and training produced the largest reductions in evasion rate, 48.1% and 41.49%, respectively; against C&W, the reductions were 48.1% and 49.9%, respectively.
(This article belongs to the Special Issue Safety, Security and Privacy in Cyber-Physical Systems (CPS))
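The functionality-preserving injection mentioned in the abstract can be sketched as writing adversarial bytes into the unused DOS-stub region between the 64-byte DOS header and the offset stored in `e_lfanew`, which leaves the executable's runtime behaviour intact. This is an illustration of the general payload-injection idea, not the authors' exact procedure (which also adds a new section); the `payload` bytes would come from an attack such as JSMA or C&W.

```python
# Sketch: overwrite the unused DOS stub of a Windows PE with an adversarial payload.
import struct

def inject_dos_stub_payload(pe_bytes: bytes, payload: bytes) -> bytes:
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]  # offset of the PE header
    slack = e_lfanew - 0x40                                 # room after the DOS header
    if slack <= 0:
        return pe_bytes                                     # nowhere safe to inject
    data = bytearray(pe_bytes)
    data[0x40:0x40 + min(slack, len(payload))] = payload[:slack]
    return bytes(data)
```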

21 pages, 2898 KiB  
Article
Probabilistic Jacobian-Based Saliency Maps Attacks
by Théo Combey, António Loison, Maxime Faucher and Hatem Hajri
Mach. Learn. Knowl. Extr. 2020, 2(4), 558-578; https://doi.org/10.3390/make2040030 - 13 Nov 2020
Cited by 18 | Viewed by 4582
Abstract
Neural network classifiers (NNCs) are known to be vulnerable to malicious adversarial perturbations of inputs including those modifying a small fraction of the input features named sparse or L0 attacks. Effective and fast L0 attacks, such as the widely used Jacobian-based Saliency Map Attack (JSMA) are practical to fool NNCs but also to improve their robustness. In this paper, we show that penalising saliency maps of JSMA by the output probabilities and the input features of the NNC leads to more powerful attack algorithms that better take into account each input’s characteristics. This leads us to introduce improved versions of JSMA, named Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), and demonstrate through a variety of white-box and black-box experiments on three different datasets (MNIST, CIFAR-10 and GTSRB), that they are both significantly faster and more efficient than the original targeted and non-targeted versions of JSMA. Experiments also demonstrate, in some cases, very competitive results of our attacks in comparison with the Carlini-Wagner (CW) L0 attack, while remaining, like JSMA, significantly faster (WJSMA and TJSMA are more than 50 times faster than CW L0 on CIFAR-10). Therefore, our new attacks provide good trade-offs between JSMA and CW for L0 real-time adversarial testing on datasets such as the ones previously cited.
(This article belongs to the Section Learning)
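A schematic PyTorch sketch of the weighting idea—scaling the class gradients in a JSMA-style saliency map by the model's output probabilities—is shown below. It is a simplified illustration of the probability-weighting principle, not the full WJSMA/TJSMA attack loop, and `model`, `x`, and `target` are assumed placeholders.

```python
# Sketch: JSMA-style saliency where each class gradient is weighted by the
# model's predicted probability for that class (illustrating the WJSMA idea).
import torch

def weighted_saliency(model, x, target):
    x = x.clone().detach().requires_grad_(True)
    probs = torch.softmax(model(x), dim=1)[0]               # (num_classes,)
    grads = []
    for c in range(probs.shape[0]):                         # gradient of each class prob
        grads.append(torch.autograd.grad(probs[c], x, retain_graph=True)[0])
    grads = torch.stack(grads)                              # (num_classes, *x.shape)
    w = probs.detach().view(-1, *([1] * x.dim()))           # probability weights
    grad_t = grads[target]
    grad_others = (w * grads).sum(0) - w[target] * grad_t
    # keep points that push the target class up and the weighted other classes down
    mask = (grad_t > 0) & (grad_others < 0)
    return torch.where(mask, grad_t * grad_others.abs(), torch.zeros_like(grad_t))
```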

14 pages, 4502 KiB  
Article
Adaptive Wiener Filter and Natural Noise to Eliminate Adversarial Perturbation
by Fei Wu, Wenxue Yang, Limin Xiao and Jinbin Zhu
Electronics 2020, 9(10), 1634; https://doi.org/10.3390/electronics9101634 - 3 Oct 2020
Cited by 28 | Viewed by 5473
Abstract
Deep neural networks have been widely used in pattern recognition and speech processing, but their vulnerability to adversarial attacks has also been widely demonstrated. These attacks apply unstructured pixel-wise perturbations that fool the classifier without affecting the human visual system. The role of adversarial examples in the information security field has received increasing attention across a number of disciplines in recent years. Our approach follows the principle of “like cures like”: in this paper, we propose to use common noise and adaptive Wiener filtering to mitigate the perturbation. The method consists of two operations: noise addition, which adds natural noise to the input adversarial examples, and adaptive Wiener filtering, which denoises the images from the previous step. A study of the distribution of attacks shows that adding natural noise disrupts adversarial examples to a certain extent, after which they can be removed with an adaptive Wiener filter, an optimal estimator of the local variance of the image. The proposed improved adaptive Wiener filter automatically selects the optimal window size from several candidate windows based on the features of each image. Extensive experiments demonstrate that the proposed method can defend against adversarial attacks such as FGSM (Fast Gradient Sign Method), C&W, DeepFool, and JSMA (Jacobian-based Saliency Map Attack). In comparative experiments, our method outperforms or is comparable to state-of-the-art methods.
(This article belongs to the Special Issue Recent Advances in Cryptography and Network Security)
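The two-step defense—noise addition followed by adaptive Wiener filtering—can be sketched with NumPy and SciPy, assuming a grayscale image as a 2-D array in [0, 1]. The window-selection rule below (picking the candidate whose removed residual best matches the injected noise level) is a stand-in assumption, not the paper's exact adaptive criterion.

```python
# Sketch: add natural (Gaussian) noise, then Wiener-filter with the best of
# several candidate window sizes (illustrative stand-in for the adaptive rule).
import numpy as np
from scipy.signal import wiener

def denoise_adversarial(img, sigma=0.02, windows=(3, 5, 7)):
    noisy = img + np.random.normal(0.0, sigma, img.shape)    # natural-noise step
    best, best_score = None, np.inf
    for w in windows:                                         # candidate window sizes
        filtered = wiener(noisy, mysize=w)
        score = abs(np.var(noisy - filtered) - sigma ** 2)    # stand-in selection rule
        if score < best_score:
            best, best_score = filtered, score
    return np.clip(best, 0.0, 1.0)                            # back to valid pixel range
```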

14 pages, 1081 KiB  
Article
An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm
by Ayyaz Ul Haq Qureshi, Hadi Larijani, Mehdi Yousefi, Ahsan Adeel and Nhamoinesu Mtetwa
Computers 2020, 9(3), 58; https://doi.org/10.3390/computers9030058 - 20 Jul 2020
Cited by 19 | Viewed by 4834
Abstract
In today’s digital world, information systems are revolutionizing the way we connect. As people adopt and integrate intelligent systems into their daily lives, the risks of cyberattacks on user-specific information have grown significantly. To ensure safe communication, Intrusion Detection Systems (IDS) have been developed, often using machine learning (ML) algorithms, for their ability to detect malware and network security violations. Recently, it has been reported that IDS are vulnerable to carefully crafted perturbations known as adversarial examples. To understand the impact of such attacks, this paper proposes a novel random neural network-based adversarial intrusion detection system (RNN-ADV). The NSL-KDD dataset is used for training. For adversarial attack crafting, the Jacobian Saliency Map Attack (JSMA) algorithm is used, which identifies the features that cause the maximum change to benign samples with the minimum added perturbation. To assess the effectiveness of the proposed adversarial scheme, the results are compared with a deep neural network, indicating that RNN-ADV performs better in terms of accuracy, precision, recall, F1 score, and training epochs.
(This article belongs to the Special Issue Feature Paper in Computers)
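The JSMA-style feature selection described above—finding the single feature whose perturbation moves a benign sample the most while adding minimal change—can be sketched in PyTorch for tabular inputs. The classifier `model`, the normalised feature scaling, and the step size `theta` are assumptions for illustration, not the RNN-ADV pipeline itself.

```python
# Sketch: pick the most salient feature of a tabular sample and nudge it slightly.
import torch

def jsma_single_feature(model, x, target_class, theta=0.1):
    x = x.clone().detach().requires_grad_(True)           # x: (1, num_features) in [0, 1]
    logits = model(x)
    logits[0, target_class].backward()                     # gradient of the target score
    saliency = x.grad[0].abs()                             # per-feature saliency
    idx = int(saliency.argmax())                           # most influential feature
    x_adv = x.detach().clone()
    x_adv[0, idx] += theta * torch.sign(x.grad[0, idx])    # minimal targeted nudge
    return x_adv.clamp(0.0, 1.0), idx
```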
