Search Results (75)

Search Parameters:
Keywords = local differential privacy

21 pages, 2042 KiB  
Article
EdgeGuard: Decentralized Medical Resource Orchestration via Blockchain-Secured Federated Learning in IoMT Networks
by Sakshi Patni and Joohyung Lee
Future Internet 2025, 17(1), 2; https://doi.org/10.3390/fi17010002 - 25 Dec 2024
Abstract
The development of medical data and resources has become essential for enhancing patient outcomes and operational efficiency in an age when digital innovation in healthcare is becoming more important. The rapid growth of the Internet of Medical Things (IoMT) is changing healthcare data management, but it also brings serious issues like data privacy, malicious attacks, and service quality. In this study, we present EdgeGuard, a novel decentralized architecture that combines blockchain technology, federated learning, and edge computing to address those challenges and coordinate medical resources across IoMT networks. EdgeGuard uses a privacy-preserving federated learning approach to keep sensitive medical data local and to promote collaborative model training, solving essential issues. To prevent data modification and unauthorized access, it uses a blockchain-based access control and integrity verification system. EdgeGuard uses edge computing to improve system scalability and efficiency by offloading computational tasks from IoMT devices with limited resources. We have made several technological advances, including a lightweight blockchain consensus mechanism designed for IoMT networks, an adaptive edge resource allocation method based on reinforcement learning, and a federated learning algorithm optimized for medical data with differential privacy. We also create an access control system based on smart contracts and a secure multi-party computing protocol for model updates. EdgeGuard outperforms existing solutions in terms of computational performance, data value, and privacy protection across a wide range of real-world medical datasets. This work enhances safe, effective, and privacy-preserving medical data management in IoMT ecosystems while maintaining outstanding standards for data security and resource efficiency, enabling large-scale collaborative learning in healthcare. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)

28 pages, 7268 KiB  
Article
Cross-Project Software Defect Prediction Using Differential Perception Combined with Inheritance Federated Learning
by Aili Wang, Yanxiang Feng, Mingji Yang, Haibin Wu, Yuji Iwahori and Haisong Chen
Electronics 2024, 13(24), 4893; https://doi.org/10.3390/electronics13244893 - 11 Dec 2024
Viewed by 479
Abstract
Cross-project software defect prediction (CPDP) refers to the construction of defect prediction models by collecting multi-source project data, but the heterogeneity of data among projects and the modern problem of “data islands” hinder its development. In response to these challenges, we propose a CPDP algorithm based on differential perception combined with inheritance federated learning (FedDPI). Firstly, we design an efficient data preprocessing scheme, which lays a reliable data foundation for federated learning by integrating oversampling and optimal feature selection methods. Secondly, a two-stage collaborative optimization mechanism is proposed in the federated learning framework: the inheritance private model (IPM) is introduced in the local training stage, and the differential perception algorithm is used in the global aggregation stage to dynamically allocate aggregation weights, forming positive feedback for training to overcome the negative impact of data heterogeneity. In addition, we utilize the Ranger optimization algorithm to improve the convergence speed and privacy security of the model through its irreversible mixed optimization operation. The experimental results show that FedDPI significantly improves predictive performance in various defect item data combination experiments. Compared with different deep learning and federated learning algorithms, the average improvement in AUC and G-mean indicators is 0.2783 and 0.2673, respectively, verifying the practicality and effectiveness of federated learning and two-stage collaborative optimization mechanisms in the field of CPDP. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)

15 pages, 4286 KiB  
Article
A Three-Layer Scheduling Framework with Dynamic Peer-to-Peer Energy Trading for Multi-Regional Power Balance
by Tianmeng Yang, Jicheng Liu, Wei Feng, Zelong Chen, Yumin Zhao and Suhua Lou
Energies 2024, 17(24), 6239; https://doi.org/10.3390/en17246239 - 11 Dec 2024
Viewed by 334
Abstract
This paper addresses the critical challenges of renewable energy integration and regional power balance in smart grids, which have become increasingly complex with the rapid growth of distributed energy resources. It proposes a novel three-layer scheduling framework with a dynamic peer-to-peer (P2P) trading mechanism to address these challenges. The framework incorporates a preliminary local supply–demand balance considering renewable energy, followed by an inter-regional P2P trading layer and, ultimately, flexible resource deployment for final balance adjustment. The proposed dynamic continuous P2P trading mechanism enables regions to autonomously switch roles between buyer and seller based on their internal energy status and preferences, facilitating efficient trading while protecting regional privacy. The model features an innovative price update mechanism that initially leverages historical trading data and dynamically adjusts prices to maximize trading success rates. To address the heterogeneity of regional resources and varying energy demands, the framework implements a flexible trading strategy that allows for differentiated transaction volumes and prices. The effectiveness of the proposed framework is validated through simulation experiments using k-means clustered typical daily data from four regions in Northeast China. The results demonstrate that the proposed approach successfully promotes renewable energy utilization, reduces the operational costs of flexible resources, and achieves an efficient inter-regional energy balance while maintaining regional autonomy and information privacy. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

23 pages, 1526 KiB  
Article
CLDP-pFedAvg: Safeguarding Client Data Privacy in Personalized Federated Averaging
by Wenquan Shen, Shuhui Wu and Yuanhong Tao
Mathematics 2024, 12(22), 3630; https://doi.org/10.3390/math12223630 - 20 Nov 2024
Viewed by 412
Abstract
The personalized federated averaging algorithm integrates a federated averaging approach with a model-agnostic meta-learning technique. In real-world heterogeneous scenarios, it is essential to implement additional privacy protection techniques for personalized federated learning. We propose a novel differentially private federated meta-learning scheme, CLDP-pFedAvg, which achieves client-level differential privacy guarantees for federated learning involving large heterogeneous clients. The client-level differentially private meta-based FedAvg algorithm enables clients to upload local model parameters for aggregation securely. Furthermore, we provide a convergence analysis of the clipping-enabled differentially private meta-based FedAvg algorithm. The proposed strategy is evaluated across various datasets, and the findings indicate that our approach offers improved privacy protection while maintaining model accuracy. Full article
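The clipping-enabled, client-level step the abstract analyzes follows the standard DP-FedAvg recipe: clip each client's update, then add Gaussian noise calibrated to the clipping threshold. A minimal stdlib sketch of that general pattern (the function and parameter names are illustrative, not the paper's; the meta-learning aggregation itself is omitted):

```python
import math
import random

def clip_and_noise(update, clip_norm, noise_mult, rng=random):
    """Client-level DP step: scale the update so its L2 norm is at most
    clip_norm, then add Gaussian noise with per-coordinate standard
    deviation noise_mult * clip_norm before uploading for aggregation."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + rng.gauss(0.0, noise_mult * clip_norm) for x in clipped]
```

Averaging many such noised updates on the server recovers the mean update while each individual client's contribution stays bounded.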

21 pages, 717 KiB  
Article
DistOD: A Hybrid Privacy-Preserving and Distributed Framework for Origin–Destination Matrix Computation
by Jongwook Kim
Electronics 2024, 13(22), 4545; https://doi.org/10.3390/electronics13224545 - 19 Nov 2024
Viewed by 493
Abstract
The origin–destination (OD) matrix is a critical tool in understanding human mobility, with diverse applications. However, constructing OD matrices can pose significant privacy challenges, as sensitive information about individual mobility patterns may be exposed. In this paper, we propose DistOD, a hybrid privacy-preserving and distributed framework for the aggregation and computation of OD matrices without relying on a trusted central server. The proposed framework makes several key contributions. First, we propose a distributed method that enables multiple participating parties to collaboratively identify hotspot areas, which are regions frequently traveled between by individuals across these parties. To optimize the data utility and minimize the computational overhead, we introduce a hybrid privacy-preserving mechanism. This mechanism applies distributed differential privacy in hotspot areas to ensure high data utility, while using localized differential privacy in non-hotspot regions to reduce the computational costs. By combining these approaches, our method achieves an effective balance between computational efficiency and the accuracy of the OD matrix. Extensive experiments on real-world datasets show that DistOD consistently provides higher data utility than methods based solely on localized differential privacy, as well as greater efficiency than approaches based solely on distributed differential privacy. Full article
(This article belongs to the Special Issue Emerging Distributed/Parallel Computing Systems)

39 pages, 21483 KiB  
Article
SPM-FL: A Federated Learning Privacy-Protection Mechanism Based on Local Differential Privacy
by Zhiyan Chen and Hong Zheng
Electronics 2024, 13(20), 4091; https://doi.org/10.3390/electronics13204091 - 17 Oct 2024
Viewed by 929
Abstract
Federated learning is a widely applied distributed machine learning method that effectively protects client privacy by sharing and computing model parameters on the server side, thus avoiding the transfer of data to third parties. However, information such as model weights can still be analyzed or attacked, leading to potential privacy breaches. Traditional federated learning methods often disturb models by adding Gaussian or Laplacian noise, but under smaller privacy budgets, the large variance of the noise adversely affects model accuracy. To address this issue, this paper proposes a Symmetric Partition Mechanism (SPM), which probabilistically perturbs the sign of local model weight parameters before model aggregation. This mechanism satisfies strict ϵ-differential privacy, while introducing a variance constraint mechanism that effectively reduces the impact of noise interference on model performance. Compared with traditional methods, SPM generates smaller variance under the same privacy budget, thereby improving model accuracy and being applicable to scenarios with varying numbers of clients. Through theoretical analysis and experimental validation on multiple datasets, this paper demonstrates the effectiveness and privacy-protection capabilities of the proposed mechanism. Full article
(This article belongs to the Special Issue AI-Based Solutions for Cybersecurity)
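The core idea of sign perturbation can be sketched as a two-outcome randomized response on each weight's sign: keep the sign with probability e^ε/(1+e^ε), flip it otherwise, and debias on the server. This is a minimal illustration of that general mechanism only, not the paper's exact SPM (which additionally constrains the variance):

```python
import math
import random

def sign_perturb(w, eps, rng=random):
    """Keep w's sign with prob p = e^eps / (1 + e^eps), flip it otherwise.
    This two-outcome randomized response satisfies eps-LDP for the sign."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return w if rng.random() < p else -w

def debias(reported, eps):
    """Unbiased server-side estimate: E[sign_perturb(w)] = (2p - 1) * w."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return reported / (2.0 * p - 1.0)
```

Averaging `debias(...)` over many clients converges to the true mean weight; a smaller ε pushes p toward 1/2 and inflates the estimator's variance, which is exactly the accuracy cost the paper's variance-constraint mechanism targets.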

17 pages, 1483 KiB  
Article
Data Quality-Aware Client Selection in Heterogeneous Federated Learning
by Shinan Song, Yaxin Li, Jin Wan, Xianghua Fu and Jingyan Jiang
Mathematics 2024, 12(20), 3229; https://doi.org/10.3390/math12203229 - 15 Oct 2024
Viewed by 1020
Abstract
Federated Learning (FL) enables decentralized data utilization while maintaining edge user privacy, but it faces challenges due to statistical heterogeneity. Existing approaches address client drift and data heterogeneity issues. However, real-world settings often involve low-quality data with noisy features, such as covariate drift or adversarial samples, which are usually ignored. Noisy samples significantly impact the global model’s accuracy and convergence rate. Assessing data quality and selectively aggregating updates from high-quality clients is crucial, but dynamically perceiving data quality without additional computations or data exchanges is challenging. In this paper, we introduce the Data-Quality-Aware Federated Learning (FedDQA) framework. We discover that increased data noise leads to slower loss reduction during local model training. We propose a loss sharpness-based Data-Quality-Awareness (DQA) metric to differentiate between high-quality and low-quality data. Based on the DQA metric, we design a client selection algorithm that strategically selects participant clients to reduce the negative impact of noisy clients. Experiment results indicate that FedDQA significantly outperforms the baselines. Notably, it achieves up to a 4% increase in global model accuracy and demonstrates faster convergence rates. Full article

16 pages, 1480 KiB  
Article
Protecting Infinite Data Streams from Wearable Devices with Local Differential Privacy Techniques
by Feng Zhao and Song Fan
Information 2024, 15(10), 630; https://doi.org/10.3390/info15100630 - 12 Oct 2024
Viewed by 680
Abstract
The real-time data collected by wearable devices enables personalized health management and supports public health monitoring. However, sharing these data with third-party organizations introduces significant privacy risks. As a result, protecting and securely sharing wearable device data has become a critical concern. This paper proposes a local differential privacy-preserving algorithm designed for continuous data streams generated by wearable devices. Initially, the data stream is sampled at key points to avoid prematurely exhausting the privacy budget. Then, an adaptive allocation of the privacy budget at these points enhances privacy protection for sensitive data. Additionally, the optimized square wave (SW) mechanism introduces perturbations to the sampled points. Afterward, the Kalman filter algorithm is applied to maintain data flow patterns and reduce prediction errors. Experimental validation using two real datasets demonstrates that, under comparable conditions, this approach provides higher data availability than existing privacy protection methods for continuous data streams. Full article
(This article belongs to the Special Issue Digital Privacy and Security, 2nd Edition)
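The paper's pipeline perturbs sampled stream points with an optimized square wave (SW) mechanism and smooths the result with a Kalman filter. As a much simpler illustration of the same perturb-then-debias shape for a bounded value, here is a one-bit LDP mechanism in the style of Duchi et al. (the SW mechanism itself differs; this sketch only shows how a single sampled reading in [-1, 1] can be reported under ε-LDP with an unbiased estimate):

```python
import math
import random

def one_bit_perturb(t, eps, rng=random):
    """One-bit LDP mechanism for t in [-1, 1]: report +C or -C, where
    C = (e^eps + 1) / (e^eps - 1). The report is an unbiased estimate
    of t, so averaging reports over many users recovers the true mean."""
    e = math.exp(eps)
    C = (e + 1.0) / (e - 1.0)
    p = 0.5 + t * (e - 1.0) / (2.0 * (e + 1.0))
    return C if rng.random() < p else -C
```

In the stream setting, a mechanism like this would be applied only at the sampled key points, with each point's ε drawn from the adaptively allocated budget.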

16 pages, 2851 KiB  
Article
Trajectory Privacy-Protection Mechanism Based on Multidimensional Spatial–Temporal Prediction
by Ji Xi, Meiyu Shi, Weiqi Zhang, Zhe Xu and Yanting Liu
Symmetry 2024, 16(9), 1248; https://doi.org/10.3390/sym16091248 - 23 Sep 2024
Viewed by 726
Abstract
The popularity of global GPS location services and location-enabled personal terminal applications has contributed to the rapid growth of location-based social networks. Users can access social networks at any time and from anywhere to obtain services in the relevant location. While accessing services is convenient, there is a potential risk of leaking users’ private information. In data processing, the discovery of issues and the generation of optimal solutions constitute a symmetrical process. Therefore, this paper proposes a symmetry–trajectory differential privacy-protection mechanism based on multi-dimensional prediction (TPPM-MP). Firstly, the temporal attention mechanism is designed to extract spatiotemporal features of trajectories from different spatiotemporal dimensions and perform trajectory-sensitive prediction. Secondly, class-prevalence-based weights are assigned to sensitive regions. Finally, the privacy budget is assigned based on the sensitive weights, and noise conforming to localized differential privacy is added. Validated on real datasets, the proposed method enhanced usability by 22% and 37% on the same dataset compared with the other methods considered, while providing equivalent privacy protection. Full article
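The final step, assigning the privacy budget in proportion to sensitivity weights, can be sketched directly; by sequential composition the per-point budgets sum to the total ε. The function name and exact proportional rule below are illustrative assumptions, not the paper's specification:

```python
def allocate_budget(eps_total, weights):
    """Split a total privacy budget eps_total across trajectory points in
    proportion to their sensitivity weights. Sequential composition of
    differential privacy guarantees the pieces together spend eps_total."""
    total = sum(weights)
    return [eps_total * w / total for w in weights]
```

A more sensitive region gets a larger share of the budget here, i.e. less noise; an alternative convention (more noise on sensitive regions) would invert the weights.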

8 pages, 373 KiB  
Technical Note
Identity Diffuser: Preserving Abnormal Region of Interests While Diffusing Identity
by Hisaichi Shibata, Shouhei Hanaoka, Saori Koshino, Soichiro Miki, Yuki Sonoda and Osamu Abe
Appl. Sci. 2024, 14(18), 8489; https://doi.org/10.3390/app14188489 - 20 Sep 2024
Viewed by 548
Abstract
To release medical images that can be freely used in downstream processes while maintaining their utility, it is necessary to remove personal features from the images while preserving the lesion structures. Unlike previous studies that focused on removing lesion structures while preserving the individuality of medical images, this study proposes and validates a new framework that maintains the lesion structures while diffusing individual characteristics. In this framework, we apply local differential privacy techniques to provide theoretical guarantees of privacy protection. Additionally, to enhance the utility of protected medical images, we perform denoising using a diffusion model on the noise-contaminated medical images. Numerous chest X-rays generated by the proposed method were evaluated by physicians, revealing a trade-off between the level of privacy protection and utility. In other words, it was confirmed that increasing the level of personal information protection tends to result in relatively lower utility. This study potentially enables the release of certain types of medical images that were previously difficult to share. Full article

16 pages, 2533 KiB  
Article
A Personalized Federated Learning Method Based on Knowledge Distillation and Differential Privacy
by Yingrui Jiang, Xuejian Zhao, Hao Li and Yu Xue
Electronics 2024, 13(17), 3538; https://doi.org/10.3390/electronics13173538 - 6 Sep 2024
Viewed by 915
Abstract
Federated learning allows data to remain decentralized, and various devices work together to train a common machine learning model. This method keeps sensitive data local on devices, protecting privacy. However, privacy protection and non-independent and identically distributed data are significant challenges for many FL techniques currently in use. This paper proposes a personalized federated learning method (FedKADP) that integrates knowledge distillation and differential privacy to address the issues of privacy protection and non-independent and identically distributed data in federated learning. The introduction of a bidirectional feedback mechanism enables the establishment of an interactive tuning loop between knowledge distillation and differential privacy, allowing dynamic tuning and continuous performance optimization while protecting user privacy. By closely monitoring privacy overhead through Rényi differential privacy theory, this approach effectively balances model performance and privacy protection. Experimental results using the MNIST and CIFAR-10 datasets demonstrate that FedKADP performs better than conventional federated learning techniques, particularly when handling non-independent and identically distributed data. It successfully lowers the heterogeneity of the model, accelerates global model convergence, and improves validation accuracy, making it a new approach to federated learning. Full article
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems)

19 pages, 702 KiB  
Article
OFPP-GAN: One-Shot Federated Personalized Protection–Generative Adversarial Network
by Zhenyu Jiang, Changli Zhou, Hui Tian and Zikang Chen
Electronics 2024, 13(17), 3423; https://doi.org/10.3390/electronics13173423 - 29 Aug 2024
Viewed by 677
Abstract
Differential privacy techniques have shown excellent performance in protecting sensitive information during GAN model training. However, with the increasing attention to data privacy issues, ensuring high-quality output of generative models and the efficiency of federated learning while protecting privacy has become a pressing challenge. To address these issues, this paper proposes a One-shot Federated Personalized Protection–Generative Adversarial Network (OFPP-GAN). Firstly, this scheme employs dual personalized differential privacy to achieve privacy protection. It adjusts the noise scale and clipping threshold based on the gradient changes during model training in a personalized manner, thereby enhancing the performance of the generative model while protecting privacy. Additionally, the scheme adopts the one-shot federated learning paradigm, where each client uploads their local model containing private information only once throughout the training process. This approach not only reduces the risk of privacy leakage but also decreases the communication overhead of the entire system. Finally, we validate the effectiveness of the proposed method through theoretical analysis and experiments. Compared with existing methods, the generative model trained with OFPP-GAN demonstrates superior security, efficiency, and robustness. Full article
(This article belongs to the Section Artificial Intelligence)

21 pages, 475 KiB  
Article
A Secure Authentication Scheme with Local Differential Privacy in Edge Intelligence-Enabled VANET
by Deokkyu Kwon, Seunghwan Son, Kisung Park and Youngho Park
Mathematics 2024, 12(15), 2383; https://doi.org/10.3390/math12152383 - 31 Jul 2024
Cited by 1 | Viewed by 963
Abstract
Edge intelligence is a technology that integrates edge computing and artificial intelligence to achieve real-time and localized model generation. Thus, users can receive more precise and personalized services in vehicular ad hoc networks (VANETs) using edge intelligence. However, privacy and security challenges still exist, because sensitive data of the vehicle user is necessary for generating a high-accuracy AI model. In this paper, we propose an authentication scheme to preserve the privacy of user data in edge intelligence-enabled VANETs. The proposed scheme can establish a secure communication channel using fuzzy extractor, elliptic curve cryptography (ECC), and physical unclonable function (PUF) technology. The proposed data upload process can provide privacy of the data using local differential privacy and symmetric key encryption. We validate the security robustness of the proposed scheme using informal analysis, the Real-Or-Random (ROR) model, and the Scyther tool. Moreover, we evaluate the computation and communication efficiency of the proposed and related schemes using Multiprecision Integer and Rational Arithmetic Cryptographic Library (MIRACL) software development kit (SDK). We simulate the practical deployment of the proposed scheme using network simulator 3 (NS-3). Our results show that the proposed scheme has a performance improvement of 10∼48% compared to the state-of-the-art research. Thus, we can demonstrate that the proposed scheme provides comprehensive and secure communication for data management in edge intelligence-enabled VANET environments. Full article

19 pages, 1263 KiB  
Article
Robust Estimation Method against Poisoning Attacks for Key-Value Data with Local Differential Privacy
by Hikaru Horigome, Hiroaki Kikuchi, Masahiro Fujita and Chia-Mu Yu
Appl. Sci. 2024, 14(14), 6368; https://doi.org/10.3390/app14146368 - 22 Jul 2024
Viewed by 842
Abstract
Local differential privacy (LDP) protects user information from potential threats by randomizing data on individual devices before transmission to untrusted collectors. This method enables collectors to derive user statistics by analyzing randomized data, thereby presenting a promising avenue for privacy-preserving data collection. In the context of key–value data, in which discrete and continuous values coexist, PrivKV has been introduced as an LDP protocol to ensure secure collection. However, this framework is susceptible to poisoning attacks. To address this vulnerability, we propose an expectation maximization (EM)-based algorithm combined with a cryptographic protocol to facilitate secure random sampling. Our LDP protocol, known as emPrivKV, exhibits two key advantages: it improves the accuracy of statistical information estimation from randomized data, and enhances resilience against the manipulation of statistics, that is, poisoning attacks. These attacks involve malicious users manipulating the analysis results without detection. This study presents the empirical results of applying the emPrivKV protocol to both synthetic and open datasets, highlighting a notable improvement in the precision of statistical value estimation and robustness against poisoning attacks. As a result, emPrivKV improved the frequency and the mean gains by 17.1% and 25.9%, respectively, compared to PrivKV, with the number of fake users being 0.1 of the genuine users. Our findings contribute to the ongoing discourse on refining LDP protocols for key–value data in scenarios involving privacy-sensitive information. Full article
(This article belongs to the Special Issue Progress and Research in Cybersecurity and Data Privacy)
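For key-value data, a PrivKV-style local report randomizes the "do I hold this key" bit and, when the key is reported, a discretized value; the collector then debiases the observed frequency. The sketch below shows only this general shape (the names are illustrative, and PrivKV's actual value perturbation step and the paper's EM-based estimator both differ):

```python
import math
import random

def perturb_kv(pair, eps1, rng=random):
    """Simplified PrivKV-style perturbation of one <present, value> pair,
    value in [-1, 1]. Presence gets eps1 randomized response; a reported
    value is discretized to +/-1 with probability (1 + v) / 2. A fabricated
    report for a flipped 'absent' user draws its value uniformly."""
    present, v = pair
    p1 = math.exp(eps1) / (1.0 + math.exp(eps1))
    truthful = rng.random() < p1
    if present:
        if truthful:
            return (1, 1.0 if rng.random() < (1.0 + v) / 2.0 else -1.0)
        return (0, 0.0)
    if truthful:
        return (0, 0.0)
    return (1, 1.0 if rng.random() < 0.5 else -1.0)

def estimate_frequency(reports, eps1):
    """Unbiased collector-side estimate of the true fraction of users
    holding the key, correcting for the randomized response."""
    p1 = math.exp(eps1) / (1.0 + math.exp(eps1))
    obs = sum(r[0] for r in reports) / len(reports)
    return (obs - (1.0 - p1)) / (2.0 * p1 - 1.0)
```

The poisoning attacks the paper defends against exploit exactly this estimator: fake users can submit crafted reports to skew `obs`, which the proposed EM-based estimation and verified sampling are designed to resist.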

13 pages, 335 KiB  
Article
Binary Encoding-Based Federated Learning for Traffic Sign Recognition in Autonomous Driving
by Yian Wen, Yun Zhou and Kai Gao
Mathematics 2024, 12(14), 2229; https://doi.org/10.3390/math12142229 - 17 Jul 2024
Viewed by 842
Abstract
Autonomous driving involves collaborative data sensing and traffic sign recognition. Emerging artificial intelligence technology has brought tremendous advances to vehicular networks. However, it is challenging to guarantee privacy and security when using traditional centralized machine learning methods for traffic sign recognition. It is urgent to introduce a distributed machine learning approach to protect private data of connected vehicles. In this paper, we propose a local differential privacy-based binary encoding federated learning approach. The binary encoding techniques and random perturbation methods are used in distributed learning scenarios to enhance the efficiency and security of data transmission. For the vehicle layer in this approach, the model is trained locally, and the model parameters are uploaded to the central server through encoding and perturbing. The central server designs the corresponding decoding, correction scheme, and regression statistical method for the received binary string. Then, the model parameters are aggregated and updated in the server and transmitted to the vehicle until the learning model is trained. The performance of the proposed approach is verified using the German Traffic Sign Recognition Benchmark data set. The simulation results show that the convergence of the approach is better with the increase in the learning cycle. Compared with baseline methods, such as the convolutional neural network, random forest, and backpropagation, the proposed approach achieves higher accuracy in the process of traffic sign recognition, with an increase of 6%. Full article
(This article belongs to the Special Issue Artificial Intelligence Security and Machine Learning)
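The encode-perturb-aggregate loop described above can be illustrated with plain randomized response over the encoded bits: each client flips every bit of its binary-encoded parameters independently, and the server debiases the per-position frequencies before decoding. A minimal sketch under those assumptions (the paper's specific encoding, correction scheme, and regression statistics are not reproduced here):

```python
import math
import random

def rr_bits(bits, eps, rng=random):
    """Client side: keep each bit with prob e^eps / (1 + e^eps),
    flip it otherwise, so every bit satisfies eps-LDP."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return [b if rng.random() < p else 1 - b for b in bits]

def estimate_ones(reports, eps):
    """Server side: unbiased estimate of the true fraction of 1-bits
    at each position, aggregated over all client reports."""
    p = math.exp(eps) / (1.0 + math.exp(eps))
    n = len(reports)
    est = []
    for pos in range(len(reports[0])):
        obs = sum(r[pos] for r in reports) / n
        est.append((obs - (1.0 - p)) / (2.0 * p - 1.0))
    return est
```

The debiased per-bit frequencies are what the server would feed into its decoding and aggregation step to reconstruct the average model parameters.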
