
Search Results (149)

Search Parameters:
Keywords = NSL-KDD

29 pages, 4937 KiB  
Article
Whale Optimization Algorithm-Enhanced Long Short-Term Memory Classifier with Novel Wrapped Feature Selection for Intrusion Detection
by Haider AL-Husseini, Mohammad Mehdi Hosseini, Ahmad Yousofi and Murtadha A. Alazzawi
J. Sens. Actuator Netw. 2024, 13(6), 73; https://doi.org/10.3390/jsan13060073 - 2 Nov 2024
Viewed by 513
Abstract
Intrusion detection in network systems is a critical challenge due to the ever-increasing volume and complexity of cyber-attacks. Traditional methods often struggle with high-dimensional data and the need for real-time detection. This paper proposes a comprehensive intrusion detection method utilizing a novel wrapped feature selection approach combined with a long short-term memory classifier optimized with the whale optimization algorithm to address these challenges effectively. The proposed method introduces a novel feature selection technique using a multi-layer perceptron and a hybrid genetic algorithm-particle swarm optimization algorithm to select salient features from the input dataset, significantly reducing dimensionality while retaining critical information. The selected features are then used to train a long short-term memory network, optimized by the whale optimization algorithm to enhance its classification performance. The effectiveness of the proposed method is demonstrated through extensive simulations of intrusion detection tasks. The feature selection approach effectively reduced the feature set from 78 to 68 features, maintaining diversity and relevance. The proposed method achieved a remarkable accuracy of 99.62% in DDoS attack detection and 99.40% in FTP-Patator/SSH-Patator attack detection using the CICIDS-2017 dataset and an anomaly attack detection accuracy of 99.6% using the NSL-KDD dataset. These results highlight the potential of the proposed method in achieving high detection accuracy with reduced computational complexity, making it a viable solution for real-time intrusion detection. Full article
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)
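As a rough illustration of the wrapped feature selection idea this abstract describes, the sketch below runs a simple bit-flip wrapper search; the scoring function is a toy stand-in for the paper's MLP-based evaluator, and all names are illustrative rather than taken from the paper.

```python
import random

def wrapper_select(n_features, score, iters=200, seed=0):
    """Greedy bit-flip wrapper search: keep a feature mask, flip one
    bit at a time, and keep the flip only if the score improves."""
    rng = random.Random(seed)
    mask = [True] * n_features          # start with all features selected
    best = score(mask)
    for _ in range(iters):
        i = rng.randrange(n_features)
        mask[i] = not mask[i]           # tentative flip
        s = score(mask)
        if s > best:
            best = s                    # keep the improvement
        else:
            mask[i] = not mask[i]       # revert
    return mask, best

# Toy stand-in for "train a classifier and return validation accuracy":
# features 0..4 are informative, the rest only add noise.
def toy_score(mask):
    return sum(mask[:5]) - 0.1 * sum(mask[5:])

mask, best = wrapper_select(20, toy_score, seed=1)
```

In the paper the score comes from an actual trained model, and the search is driven by a hybrid GA-PSO rather than single-bit hill climbing, but the wrapper structure (candidate subset in, model-based score out) is the same.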

18 pages, 5170 KiB  
Article
An Efficient Detection Mechanism of Network Intrusions in IoT Environments Using Autoencoder and Data Partitioning
by Yiran Xiao, Yaokai Feng and Kouichi Sakurai
Computers 2024, 13(10), 269; https://doi.org/10.3390/computers13100269 - 14 Oct 2024
Viewed by 618
Abstract
In recent years, with the development of the Internet of Things and distributed computing, the “server-edge device” architecture has been widely deployed. This study focuses on leveraging autoencoder technology to address the binary classification problem in network intrusion detection, aiming to develop a lightweight model suitable for edge devices. Traditional intrusion detection models face two main challenges when directly ported to edge devices: inadequate computational resources to support large-scale models and the need to improve the accuracy of simpler models. To tackle these issues, this research utilizes the Extreme Learning Machine for its efficient training speed and compact model size to implement autoencoders. Two improvements over the latest related work are proposed: First, to improve data purity and ultimately enhance detection performance, the data are partitioned into multiple regions based on the prediction results of these autoencoders. Second, autoencoder characteristics are leveraged to further investigate the data within each region. We used the public dataset NSL-KDD to test the behavior of the proposed mechanism. The experimental results show that when dealing with multi-class attacks, the model’s performance was significantly improved, and the accuracy and F1-Score were improved by 3.5% and 2.9%, respectively, maintaining its lightweight nature. Full article
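A minimal sketch of the ELM-autoencoder anomaly detector this abstract builds on: random fixed input weights, output weights solved in closed form, and a reconstruction-error threshold. The data, sizes, and threshold quantile are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def elm_autoencoder(X, hidden=16, seed=0):
    """Extreme Learning Machine autoencoder: input->hidden weights are
    random and fixed; only hidden->output weights are fit, via a single
    least-squares solve, which is what makes training so fast."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(X.shape[1], hidden))
    H = np.tanh(X @ W)                            # fixed random projection
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # closed-form output weights
    return W, beta

def reconstruction_error(X, W, beta):
    H = np.tanh(X @ W)
    return np.mean((H @ beta - X) ** 2, axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 8)) * 0.1          # small-scale "normal" traffic
W, beta = elm_autoencoder(normal)
thresh = np.quantile(reconstruction_error(normal, W, beta), 0.99)

anomaly = rng.normal(size=(5, 8)) * 5.0           # far off the training manifold
flags = reconstruction_error(anomaly, W, beta) > thresh
```

The paper's contribution layers data partitioning on top of this: samples are routed into regions by autoencoder predictions and examined per region, rather than scored against a single global threshold.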

17 pages, 3304 KiB  
Article
MTC-NET: A Multi-Channel Independent Anomaly Detection Method for Network Traffic
by Xiaoyong Zhao, Chengjin Huang and Lei Wang
Biomimetics 2024, 9(10), 615; https://doi.org/10.3390/biomimetics9100615 - 10 Oct 2024
Viewed by 1135
Abstract
In recent years, deep learning-based approaches, particularly those leveraging the Transformer architecture, have garnered widespread attention for network traffic anomaly detection. However, when dealing with noisy data sets, directly inputting network traffic sequences into Transformer networks often significantly degrades detection performance due to interference and noise across dimensions. In this paper, we propose a novel multi-channel network traffic anomaly detection model, MTC-Net, which reduces computational complexity and enhances the model’s ability to capture long-distance dependencies. This is achieved by decomposing network traffic sequences into multiple unidimensional time sequences and introducing a patch-based strategy that enables each sub-sequence to retain local semantic information. A backbone network combining Transformer and CNN is employed to capture complex patterns, with information from all channels being fused at the final classification header in order to achieve modelling and detection of complex network traffic patterns. The experimental results demonstrate that MTC-Net outperforms existing state-of-the-art methods in several evaluation metrics, including accuracy, precision, recall, and F1 score, on four publicly available data sets: KDD Cup 99, NSL-KDD, UNSW-NB15, and CIC-IDS2017. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
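The channel-decomposition and patching step the abstract describes can be sketched as follows; the shapes and parameter names are illustrative, not taken from MTC-Net itself.

```python
import numpy as np

def to_channel_patches(x, patch_len, stride):
    """Split a multivariate sequence (time, channels) into independent
    univariate channels, then cut each channel into overlapping patches
    so local semantic information survives tokenization."""
    T, C = x.shape
    starts = range(0, T - patch_len + 1, stride)
    # result shape: (channels, num_patches, patch_len)
    return np.stack([[x[s:s + patch_len, c] for s in starts]
                     for c in range(C)])

x = np.arange(24, dtype=float).reshape(12, 2)   # 12 time steps, 2 channels
patches = to_channel_patches(x, patch_len=4, stride=2)
```

Each channel's patch sequence would then feed its own Transformer/CNN branch, with the per-channel outputs fused at the classification head.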

45 pages, 3370 KiB  
Article
Adaptive Cybersecurity Neural Networks: An Evolutionary Approach for Enhanced Attack Detection and Classification
by Ahmad K. Al Hwaitat and Hussam N. Fakhouri
Appl. Sci. 2024, 14(19), 9142; https://doi.org/10.3390/app14199142 - 9 Oct 2024
Viewed by 784
Abstract
The increasing sophistication and frequency of cyber threats necessitate the development of advanced techniques for detecting and mitigating attacks. This paper introduces a novel cybersecurity-focused Multi-Layer Perceptron (MLP) trainer that utilizes evolutionary computation methods, specifically tailored to improve the training process of neural networks in the cybersecurity domain. The proposed trainer dynamically optimizes the MLP’s weights and biases, enhancing its accuracy and robustness in defending against various attack vectors. To evaluate its effectiveness, the trainer was tested on five widely recognized security-related datasets: NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. Its performance was compared with several state-of-the-art optimization algorithms, including Cybersecurity Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO. The results demonstrated that the proposed trainer consistently outperformed the other algorithms, achieving the lowest Mean Square Error (MSE) and highest classification accuracy across all datasets. Notably, the trainer reached a classification rate of 99.5% on the Bot-IoT dataset and 98.8% on the CSE-CIC-IDS2018 dataset, underscoring its effectiveness in detecting and classifying diverse cyber threats. Full article
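The generic idea behind an evolutionary MLP trainer (optimizing a flat weight vector by mutation and selection instead of gradient descent) can be sketched as below; this is a plain (mu+lambda)-style loop with a toy objective, not the paper's specific update rule.

```python
import numpy as np

def evolve_weights(loss, dim, pop=20, gens=40, sigma=0.3, seed=0):
    """Evolve a flat weight vector: mutate the current best with
    Gaussian noise and keep the lowest-loss candidate each generation."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=dim)
    best_loss = loss(best)
    for _ in range(gens):
        cand = best + sigma * rng.normal(size=(pop, dim))
        losses = np.apply_along_axis(loss, 1, cand)
        i = losses.argmin()
        if losses[i] < best_loss:
            best, best_loss = cand[i], losses[i]
    return best, best_loss

# Toy objective standing in for "MSE of an MLP on a security dataset":
target = np.array([1.0, -2.0, 0.5])
mse = lambda w: float(np.mean((w - target) ** 2))

w, final = evolve_weights(mse, dim=3)
```

In the paper the loss would be the MLP's error on a dataset such as NSL-KDD, and the evolutionary operators are considerably more elaborate, but the weights-in, fitness-out interface is the same.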

14 pages, 3650 KiB  
Article
A Study on Network Anomaly Detection Using Fast Persistent Contrastive Divergence
by Jaeyeong Jeong, Seongmin Park, Joonhyung Lim, Jiwon Kang, Dongil Shin and Dongkyoo Shin
Symmetry 2024, 16(9), 1220; https://doi.org/10.3390/sym16091220 - 17 Sep 2024
Viewed by 540
Abstract
As network technology evolves, cyberattacks are not only increasing in frequency but also becoming more sophisticated. To proactively detect and prevent these cyberattacks, researchers are developing intrusion detection systems (IDSs) leveraging machine learning and deep learning techniques. However, a significant challenge with these advanced models is the increased training time as model complexity grows, and the symmetry between performance and training time must be taken into account. To address this issue, this study proposes a fast-persistent-contrastive-divergence-based deep belief network (FPCD-DBN) that offers both high accuracy and rapid training times. This model combines the efficiency of contrastive divergence with the powerful feature extraction capabilities of deep belief networks. While traditional deep belief networks use a contrastive divergence (CD) algorithm, the FPCD algorithm improves the performance of the model by passing the results of each detection layer to the next layer. In addition, the mix of parameter updates using fast weights and continuous chains makes the model fast and accurate. The performance of the proposed FPCD-DBN model was evaluated on several benchmark datasets, including NSL-KDD, UNSW-NB15, and CIC-IDS-2017. As a result, the proposed method proved to be a viable solution as the model performed well with an accuracy of 89.4% and an F1 score of 89.7%. By achieving superior performance across multiple datasets, the approach shows great potential for enhancing network security and providing a robust defense against evolving cyber threats. Full article
(This article belongs to the Special Issue Information Security in AI)

16 pages, 2744 KiB  
Article
VGGIncepNet: Enhancing Network Intrusion Detection and Network Security through Non-Image-to-Image Conversion and Deep Learning
by Jialong Chen, Jingjing Xiao and Jiaxin Xu
Electronics 2024, 13(18), 3639; https://doi.org/10.3390/electronics13183639 - 12 Sep 2024
Viewed by 797
Abstract
This paper presents an innovative model, VGGIncepNet, which integrates non-image-to-image conversion techniques with deep learning modules, specifically VGG16 and Inception, aiming to enhance performance in network intrusion detection and IoT security analysis. By converting non-image data into image data, the model leverages the powerful feature extraction capabilities of convolutional neural networks, thereby improving the multi-class classification of network attacks. We conducted extensive experiments on the NSL-KDD and CICIoT2023 datasets, and the results demonstrate that VGGIncepNet outperforms existing models, including BERT, DistilBERT, XLNet, and T5, across evaluation metrics such as accuracy, precision, recall, and F1-Score. VGGIncepNet exhibits outstanding classification performance, particularly excelling in precision and F1-Score. The experimental results validate VGGIncepNet’s adaptability and robustness in complex network environments, providing an effective solution for the real-time detection of malicious activities in network systems. This study offers new methods and tools for network security and IoT security analysis, with broad application prospects. Full article
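One common way to do the non-image-to-image conversion the abstract mentions is to scale a flat feature vector to pixel range, pad, and reshape to a square; the sketch below assumes that approach and an arbitrary 8x8 size, since the paper's exact mapping is not given here.

```python
import numpy as np

def features_to_image(row, side=8):
    """Min-max scale a feature vector to [0, 255], zero-pad to a square,
    and reshape to (side, side) so a CNN backbone can consume it."""
    row = np.asarray(row, dtype=float)
    lo, hi = row.min(), row.max()
    denom = (hi - lo) if hi > lo else 1.0   # guard against constant rows
    scaled = (row - lo) / denom * 255.0
    padded = np.zeros(side * side)
    padded[:scaled.size] = scaled
    return padded.reshape(side, side).astype(np.uint8)

img = features_to_image(np.arange(41))      # e.g. a 41-feature NSL-KDD-style record
```

The resulting single-channel "image" can then be stacked or tiled to match the input shape a VGG- or Inception-style module expects.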

17 pages, 1107 KiB  
Article
Explainable Deep Learning-Based Feature Selection and Intrusion Detection Method on the Internet of Things
by Xuejiao Chen, Minyao Liu, Zixuan Wang and Yun Wang
Sensors 2024, 24(16), 5223; https://doi.org/10.3390/s24165223 - 12 Aug 2024
Viewed by 886
Abstract
With the rapid advancement of the Internet of Things, network security has garnered increasing attention from researchers. Applying deep learning (DL) has significantly enhanced the performance of Network Intrusion Detection Systems (NIDSs). However, due to its complexity and “black box” problem, deploying DL-based NIDS models in practical scenarios poses several challenges, including model interpretability and being lightweight. Feature selection (FS) in DL models plays a crucial role in minimizing model parameters and decreasing computational overheads while enhancing NIDS performance. Hence, selecting effective features remains a pivotal concern for NIDSs. In light of this, this paper proposes an interpretable feature selection method for encrypted traffic intrusion detection based on SHAP and causality principles. This approach utilizes the results of model interpretation for feature selection to reduce feature count while ensuring model reliability. We evaluate and validate our proposed method on two public network traffic datasets, CICIDS2017 and NSL-KDD, employing both a CNN and a random forest (RF). Experimental results demonstrate superior performance achieved by our proposed method. Full article
(This article belongs to the Special Issue AI-Driven Cybersecurity in IoT-Based Systems)
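To illustrate explanation-driven feature selection in the simplest terms, the sketch below ranks features by permutation importance (accuracy drop when a column is shuffled). This is a generic stand-in for the paper's SHAP-and-causality scores, which are not reimplemented here; all data and names are toy examples.

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Importance of feature j = accuracy lost when column j is shuffled,
    breaking its relationship with the label."""
    base = np.mean(model(X) == y)
    imp = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        imp.append(base - np.mean(model(Xp) == y))
    return np.array(imp)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)                # only feature 0 matters
model = lambda X: (X[:, 0] > 0).astype(int)  # a "trained" toy classifier
imp = permutation_importance(model, X, y, rng)
keep = np.argsort(imp)[::-1][:1]             # keep the top-ranked feature(s)
```

The selected subset would then be used to retrain a smaller model, which is the route the paper takes to make the NIDS both lighter and more interpretable.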

16 pages, 2754 KiB  
Article
Comparative Analysis of Deep Convolutional Neural Network—Bidirectional Long Short-Term Memory and Machine Learning Methods in Intrusion Detection Systems
by Miracle Udurume, Vladimir Shakhov and Insoo Koo
Appl. Sci. 2024, 14(16), 6967; https://doi.org/10.3390/app14166967 - 8 Aug 2024
Viewed by 1103
Abstract
Particularly in Internet of Things (IoT) scenarios, the rapid growth and diversity of network traffic pose a growing challenge to network intrusion detection systems (NIDs). In this work, we perform a comparative analysis of lightweight machine learning models, such as logistic regression (LR) and k-nearest neighbors (KNNs), alongside other machine learning models, such as decision trees (DTs), support vector machines (SVMs), multilayer perceptron (MLP), and random forests (RFs) with deep learning architectures, specifically a convolutional neural network (CNN) coupled with bidirectional long short-term memory (BiLSTM), for intrusion detection. We assess these models’ scalability, performance, and robustness using the NSL-KDD and UNSW-NB15 benchmark datasets. We evaluate important metrics, such as accuracy, precision, recall, F1-score, and false alarm rate, to offer insights into the effectiveness of each model in securing network systems within IoT deployments. Notably, the study emphasizes the utilization of lightweight machine learning models, highlighting their efficiency in achieving high detection accuracy while maintaining lower computational costs. Furthermore, standard deviation metrics have been incorporated into the accuracy evaluations, enhancing the reliability and comprehensiveness of our results. Using the CNN-BiLSTM model, we achieved noteworthy accuracies of 99.89% and 98.95% on the NSL-KDD and UNSW-NB15 datasets, respectively. However, the CNN-BiLSTM model outperforms lightweight traditional machine learning methods by a margin ranging from 1.5% to 3.5%. This study contributes to the ongoing efforts to enhance network security in IoT scenarios by exploring a trade-off between traditional machine learning and deep learning techniques. Full article
(This article belongs to the Special Issue Network Intrusion Detection and Attack Identification)
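The metrics this comparison reports all derive from the binary confusion matrix; a small reference implementation (attack taken as the positive class, counts chosen arbitrarily for the example):

```python
def ids_metrics(tp, fp, tn, fn):
    """Standard IDS evaluation metrics from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)           # a.k.a. detection rate
    f1        = 2 * precision * recall / (precision + recall)
    far       = fp / (fp + tn)           # false alarm rate
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "far": far}

m = ids_metrics(tp=90, fp=5, tn=95, fn=10)
```

Reporting the false alarm rate alongside accuracy matters for IDS work: on imbalanced traffic a model can score high accuracy while still flooding operators with false alerts.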

38 pages, 6204 KiB  
Article
The OX Optimizer: A Novel Optimization Algorithm and Its Application in Enhancing Support Vector Machine Performance for Attack Detection
by Ahmad K. Al Hwaitat and Hussam N. Fakhouri
Symmetry 2024, 16(8), 966; https://doi.org/10.3390/sym16080966 - 30 Jul 2024
Cited by 1 | Viewed by 1061
Abstract
In this paper, we introduce a novel optimization algorithm called the OX optimizer, inspired by oxen animals, which are characterized by their great strength. The OX optimizer is designed to address the challenges posed by complex, high-dimensional optimization problems. The design of the OX optimizer embodies a fundamental symmetry between global and local search processes. This symmetry ensures a balanced and effective exploration of the solution space, highlighting the algorithm’s innovative contribution to the field of optimization. The OX optimizer has been evaluated on CEC2022 and CEC2017 IEEE competition benchmark functions. The results demonstrate the OX optimizer’s superior performance in terms of convergence speed and solution quality compared to existing state-of-the-art algorithms. The algorithm’s robustness and adaptability to various problem landscapes highlight its potential as a powerful tool for solving diverse optimization tasks. Detailed analysis of convergence curves, search history distributions, and sensitivity heatmaps further support these findings. Furthermore, the OX optimizer has been applied to optimize support vector machines (SVMs), emphasizing parameter selection and feature optimization. We tested it on the NSL-KDD dataset to evaluate its efficacy in an intrusion detection system. The results demonstrate that the OX optimizer significantly enhances SVM performance, facilitating effective exploration of the parameter space. Full article
(This article belongs to the Section Computer)

19 pages, 3451 KiB  
Article
A Cooperative Intrusion Detection System for the Internet of Things Using Convolutional Neural Networks and Black Hole Optimization
by Peiyu Li, Hui Wang, Guo Tian and Zhihui Fan
Sensors 2024, 24(15), 4766; https://doi.org/10.3390/s24154766 - 23 Jul 2024
Viewed by 908
Abstract
Maintaining security in communication networks has long been a major concern. This issue has become increasingly crucial due to the emergence of new communication architectures like the Internet of Things (IoT) and the advancement and complexity of infiltration techniques. For usage in networks based on the Internet of Things, previous intrusion detection systems (IDSs), which often use a centralized design to identify threats, are now ineffective. For the resolution of these issues, this study presents a novel and cooperative approach to IoT intrusion detection that may be useful in resolving certain current security issues. The suggested approach chooses the most important attributes that best describe the communication between objects by using Black Hole Optimization (BHO). Additionally, a novel method for describing the network’s matrix-based communication properties is put forward. The inputs of the suggested intrusion detection model consist of these two feature sets. The suggested technique splits the network into a number of subnets using the software-defined network (SDN). Monitoring of each subnet is done by a controller node, which uses a parallel combination of convolutional neural networks (PCNN) to determine the presence of security threats in the traffic passing through its subnet. The proposed method also uses the majority voting approach for the cooperation of controller nodes in order to more accurately detect attacks. The findings demonstrate that, in comparison to the prior approaches, the suggested cooperative strategy can detect assaults in the NSL-KDD and UNSW-NB15 datasets with an accuracy of 99.89 and 97.72 percent, respectively. This is a minimum 0.6 percent improvement. Full article
(This article belongs to the Section Internet of Things)
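The cooperation step, where controller nodes pool their verdicts, reduces to majority voting; a minimal sketch, in which the labels and the tie-breaking behaviour are illustrative choices rather than the paper's:

```python
from collections import Counter

def majority_vote(verdicts):
    """Combine per-subnet controller verdicts by simple majority."""
    return Counter(verdicts).most_common(1)[0][0]

decision = majority_vote(["attack", "normal", "attack"])
```

In the paper each verdict comes from a PCNN watching one SDN subnet, so a single noisy controller is outvoted by its peers.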

21 pages, 2560 KiB  
Article
A Network Intrusion Detection Method Based on Bagging Ensemble
by Zichen Zhang, Shanshan Kong, Tianyun Xiao and Aimin Yang
Symmetry 2024, 16(7), 850; https://doi.org/10.3390/sym16070850 - 5 Jul 2024
Viewed by 1488
Abstract
The problems of asymmetry in information features and redundant features in datasets, and the asymmetry of network traffic distribution in the field of network intrusion detection, have been identified as a cause of low accuracy and poor generalization of traditional machine learning detection methods in intrusion detection systems (IDSs). In response, a network intrusion detection method based on the integration of bootstrap aggregating (bagging) is proposed. The extreme random tree (ERT) algorithm was employed to calculate the weights of each feature, determine the feature subsets of different machine learning models, then randomly sample the training samples based on the bootstrap sampling method, and integrated classification and regression trees (CART), support vector machine (SVM), and k-nearest neighbor (KNN) as the base estimators of bagging. A comparison of integration methods revealed that the KNN-Bagging integration model exhibited optimal performance. Subsequently, the Bayesian optimization (BO) algorithm was employed for hyper-parameter tuning of the base estimators’ KNN. Finally, the base estimators were integrated through a hard voting approach. The proposed BO-KNN-Bagging model was evaluated on the NSL-KDD dataset, achieving an accuracy of 82.48%. This result was superior to those obtained by traditional machine learning algorithms and demonstrated enhanced performance compared with other methods. Full article
(This article belongs to the Section Computer)
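The bagging mechanics the abstract relies on (bootstrap resampling plus hard voting) can be sketched as below; the 1-nearest-neighbour base learner and the toy data are stand-ins for the paper's KNN/SVM/CART estimators.

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) items with replacement: the bagging resample."""
    return [rng.choice(data) for _ in data]

def bagging_predict(train, x, base_fit, n_estimators=9, seed=0):
    """Fit each base estimator on its own bootstrap sample, then combine
    the individual predictions by hard (majority) voting."""
    rng = random.Random(seed)
    votes = [base_fit(bootstrap_sample(train, rng))(x)
             for _ in range(n_estimators)]
    return max(set(votes), key=votes.count)

# Toy 1-nearest-neighbour base learner; training data is (feature, label) pairs.
def fit_1nn(sample):
    return lambda x: min(sample, key=lambda p: abs(p[0] - x))[1]

train = [(0.1, "normal"), (0.2, "normal"), (0.3, "normal"),
         (0.8, "attack"), (0.9, "attack"), (1.0, "attack")]
label = bagging_predict(train, 0.95, fit_1nn)
```

The paper additionally gives each base model its own ERT-weighted feature subset and tunes the KNN base estimator with Bayesian optimization before the final hard vote.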

18 pages, 3071 KiB  
Article
Enhancing IoT Security: Optimizing Anomaly Detection through Machine Learning
by Maria Balega, Waleed Farag, Xin-Wen Wu, Soundararajan Ezekiel and Zaryn Good
Electronics 2024, 13(11), 2148; https://doi.org/10.3390/electronics13112148 - 31 May 2024
Viewed by 1838
Abstract
As the Internet of Things (IoT) continues to evolve, securing IoT networks and devices remains a continuing challenge. Anomaly detection is a crucial procedure in protecting the IoT. A promising way to perform anomaly detection in the IoT is through the use of machine learning (ML) algorithms. There is a lack of studies in the literature identifying optimal (with regard to both effectiveness and efficiency) anomaly detection models for the IoT. To fill the gap, this work thoroughly investigated the effectiveness and efficiency of IoT anomaly detection enabled by several representative machine learning models, namely Extreme Gradient Boosting (XGBoost), Support Vector Machines (SVMs), and Deep Convolutional Neural Networks (DCNNs). Identifying optimal anomaly detection models for IoT anomaly detection is challenging due to diverse IoT applications and dynamic IoT networking environments. It is of vital importance to evaluate ML-powered anomaly detection models using multiple datasets collected from different environments. We utilized three reputable datasets to benchmark the aforementioned machine learning methods, namely, IoT-23, NSL-KDD, and TON_IoT. Our results show that XGBoost outperformed both the SVM and DCNN, achieving accuracies of up to 99.98%. Moreover, XGBoost proved to be the most computationally efficient method; the model performed 717.75 times faster than the SVM and significantly faster than the DCNN in terms of training times. The research results have been further confirmed by using our real-world IoT data collected from an IoT testbed consisting of physical devices that we recently built. Full article
(This article belongs to the Special Issue Machine Learning for Cybersecurity: Threat Detection and Mitigation)

26 pages, 1117 KiB  
Article
Enhancing Intrusion Detection in Wireless Sensor Networks Using a GSWO-CatBoost Approach
by Thuan Minh Nguyen, Hanh Hong-Phuc Vo and Myungsik Yoo
Sensors 2024, 24(11), 3339; https://doi.org/10.3390/s24113339 - 23 May 2024
Viewed by 1073
Abstract
Intrusion detection systems (IDSs) in wireless sensor networks (WSNs) rely heavily on effective feature selection (FS) for enhanced efficacy. This study proposes a novel approach called Genetic Sacrificial Whale Optimization (GSWO) to address the limitations of conventional methods. GSWO combines a genetic algorithm (GA) and whale optimization algorithms (WOA) modified by applying a new three-population division strategy with a proposed conditional inherited choice (CIC) to overcome premature convergence in WOA. The proposed approach achieves a balance between exploration and exploitation and enhances global search abilities. Additionally, the CatBoost model is employed for classification, effectively handling categorical data with complex patterns. A new technique for fine-tuning CatBoost’s hyperparameters is introduced, using effective quantization and the GSWO strategy. Extensive experimentation on various datasets demonstrates the superiority of GSWO-CatBoost, achieving higher accuracy rates on the WSN-DS, WSNBFSF, NSL-KDD, and CICIDS2017 datasets than the existing approaches. The comprehensive evaluations highlight the real-time applicability and accuracy of the proposed method across diverse data sources, including specialized WSN datasets and established benchmarks. Specifically, our GSWO-CatBoost method has an inference time nearly 100 times faster than deep learning methods while achieving high accuracy rates of 99.65%, 99.99%, 99.76%, and 99.74% for WSN-DS, WSNBFSF, NSL-KDD, and CICIDS2017, respectively. Full article
(This article belongs to the Section Sensor Networks)

42 pages, 4932 KiB  
Article
XAI-IDS: Toward Proposing an Explainable Artificial Intelligence Framework for Enhancing Network Intrusion Detection Systems
by Osvaldo Arreche, Tanish Guntur and Mustafa Abdallah
Appl. Sci. 2024, 14(10), 4170; https://doi.org/10.3390/app14104170 - 14 May 2024
Cited by 3 | Viewed by 2583
Abstract
The exponential growth of network intrusions necessitates the development of advanced artificial intelligence (AI) techniques for intrusion detection systems (IDSs). However, the reliance on AI for IDSs presents several challenges, including the performance variability of different AI models and the opacity of their decision-making processes, hindering comprehension by human security analysts. In response, we propose an end-to-end explainable AI (XAI) framework tailored to enhance the interpretability of AI models in network intrusion detection tasks. Our framework commences with benchmarking seven black-box AI models across three real-world network intrusion datasets, each characterized by distinct features and challenges. Subsequently, we leverage various XAI models to generate both local and global explanations, shedding light on the underlying rationale behind the AI models’ decisions. Furthermore, we employ feature extraction techniques to discern crucial model-specific and intrusion-specific features, aiding in understanding the discriminative factors influencing the detection outcomes. Additionally, our framework identifies overlapping and significant features that impact multiple AI models, providing insights into common patterns across different detection approaches. Notably, we demonstrate that the computational overhead incurred by generating XAI explanations is minimal for most AI models, ensuring practical applicability in real-time scenarios. By offering multi-faceted explanations, our framework equips security analysts with actionable insights to make informed decisions for threat detection and mitigation. To facilitate widespread adoption and further research, we have made our source code publicly available, serving as a foundational XAI framework for IDSs within the research community. Full article
(This article belongs to the Special Issue Network Intrusion Detection and Attack Identification)

18 pages, 878 KiB  
Article
DRL-GAN: A Hybrid Approach for Binary and Multiclass Network Intrusion Detection
by Caroline Strickland, Muhammad Zakar, Chandrika Saha, Sareh Soltani Nejad, Noshin Tasnim, Daniel J. Lizotte and Anwar Haque
Sensors 2024, 24(9), 2746; https://doi.org/10.3390/s24092746 - 25 Apr 2024
Cited by 3 | Viewed by 1311
Abstract
Our increasingly connected world continues to face an ever-growing number of network-based attacks. An Intrusion Detection System (IDS) is an essential security technology used for detecting these attacks. Although numerous Machine Learning-based IDSs have been proposed for the detection of malicious network traffic, the majority have difficulty properly detecting and classifying the more uncommon attack types. In this paper, we implement a novel hybrid technique using synthetic data produced by a Generative Adversarial Network (GAN) to use as input for training a Deep Reinforcement Learning (DRL) model. Our GAN model is trained on the NSL-KDD dataset, a publicly available collection of labeled network traffic data specifically designed to support the evaluation and benchmarking of IDSs. Ultimately, our findings demonstrate that training the DRL model on synthetic datasets generated by specific GAN models can result in better performance in correctly classifying minority classes over training on the true imbalanced dataset. Full article
(This article belongs to the Special Issue Network Security and IoT Security)
