- research-article, February 2025
Exponential heterogeneous anti-synchronization of multi-variable discrete stochastic inertial neural networks with adaptive corrective parameter
Engineering Applications of Artificial Intelligence (EAAI), Volume 142, Issue C. https://doi.org/10.1016/j.engappai.2024.109871
Abstract: This article departs from the conventional considerations of homogeneous and continuous structures to propose a master–slave heterogeneous frame of time-space discrete inertial neural networks with feedback control at the boundary. The aim of ...
- research-article, January 2025
Stealthiness Assessment of Adversarial Perturbation: From a Visual Perspective
IEEE Transactions on Information Forensics and Security (TIFS), Volume 20, Pages 898–913. https://doi.org/10.1109/TIFS.2024.3520016
Abstract: Assessing the stealthiness of adversarial perturbations is challenging due to the lack of appropriate evaluation metrics. Existing evaluation metrics, e.g., L_p norms or Image ...
- research-article, December 2024
VisionGuard: Secure and Robust Visual Perception of Autonomous Vehicles in Practice
CCS '24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, Pages 1864–1878. https://doi.org/10.1145/3658644.3670296
Abstract: Modern Autonomous Vehicles (AVs) implement the Visual Perception Module (VPM) to perceive their surroundings. This VPM adopts various Deep Neural Network (DNN) models to process the data collected from cameras and LiDAR. Prior studies have shown that ...
- research-article, December 2024
PhyScout: Detecting Sensor Spoofing Attacks via Spatio-temporal Consistency
CCS '24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, Pages 1879–1893. https://doi.org/10.1145/3658644.3670290
Abstract: Existing defenses against sensor spoofing attacks suffer from several limitations: they cover only specific attack types, require GPU computation, exhibit considerable detection latency, and struggle with the interpretability of corner cases. We ...
- research-article, December 2024
GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
- Kunsheng Tang,
- Wenbo Zhou,
- Jie Zhang,
- Aishan Liu,
- Gelei Deng,
- Shuai Li,
- Peigui Qi,
- Weiming Zhang,
- Tianwei Zhang,
- Nenghai Yu
CCS '24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, Pages 1196–1210. https://doi.org/10.1145/3658644.3670284
Abstract: Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but they have also been observed to magnify societal biases, particularly those related to gender. In response to this issue, several benchmarks have been ...
- research-article, November 2024
TorchGT: A Holistic System for Large-Scale Graph Transformer Training
SC '24: Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, Article No.: 77, Pages 1–17. https://doi.org/10.1109/SC41406.2024.00083
Abstract: Graph Transformer is a new architecture that surpasses GNNs in graph learning. While inspiring algorithmic advances have emerged, their practical adoption remains limited, particularly on real-world graphs involving up to millions of nodes. We observe ...
- research-article, November 2024
TPU as Cryptographic Accelerator
- Rabimba Karanjai,
- Sangwon Shin,
- Wujie Xiong,
- Xinxin Fan,
- Lin Chen,
- Tianwei Zhang,
- Taeweon Suh,
- Weidong Shi,
- Veronika Kuchta,
- Francesco Sica,
- Lei Xu
HASP '24: Proceedings of the International Workshop on Hardware and Architectural Support for Security and Privacy 2024, Pages 37–44. https://doi.org/10.1145/3696843.3696844
Abstract: Cryptographic schemes like Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKPs), while offering powerful privacy-preserving capabilities, are often hindered by their computational complexity. Polynomial multiplication, a core operation in ...
- research-article, October 2024
Model X-ray: Detecting Backdoored Models via Decision Boundary
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 10296–10305. https://doi.org/10.1145/3664647.3681075
Abstract: Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs), enabling them to operate normally on clean inputs but manipulate predictions when specific trigger patterns occur. In this paper, we consider a practical post- ...
- research-article, October 2024
EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 3657–3665. https://doi.org/10.1145/3664647.3680689
Abstract: Text-to-image (T2I) diffusion models enjoy great popularity, and many individuals and companies build their applications based on publicly released T2I diffusion models. Previous studies have demonstrated that backdoor attacks can elicit T2I diffusion ...
- Article, September 2024
Robust-Wide: Robust Watermarking Against Instruction-Driven Image Editing
Abstract: Instruction-driven image editing allows users to quickly edit an image according to text instructions in a forward pass. Nevertheless, malicious users can easily exploit this technique to create fake images, which could cause a crisis of trust and ...
- Article, September 2024
Backdoor Attacks with Input-Unique Triggers in NLP
Machine Learning and Knowledge Discovery in Databases. Research Track, Pages 296–312. https://doi.org/10.1007/978-3-031-70341-6_18
Abstract: Backdoor attacks aim to induce neural models to make incorrect predictions on poisoned data while keeping predictions on the clean dataset unchanged, which poses a considerable threat to current natural language processing (NLP) systems. Existing ...
- research-article, August 2024
FedNLR: Federated Learning with Neuron-wise Learning Rates
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3069–3080. https://doi.org/10.1145/3637528.3672042
Abstract: Federated Learning (FL) suffers from severe performance degradation due to the data heterogeneity among clients. Some existing work suggests that the fundamental reason is that data heterogeneity can cause local model drift, and therefore proposes to ...
- research-article, August 2024
Unbalanced circuit-PSI from oblivious key-value retrieval
SEC '24: Proceedings of the 33rd USENIX Conference on Security Symposium, Article No.: 360, Pages 6435–6451
Abstract: Circuit-based Private Set Intersection (circuit-PSI) empowers two parties, a client and a server holding input sets X and Y respectively, to securely compute a function f on the intersection X ∩ Y while preserving the confidentiality of X ∩ Y from both parties. ...
- research-article, August 2024
Scalable zero-knowledge proofs for non-linear functions in machine learning
SEC '24: Proceedings of the 33rd USENIX Conference on Security Symposium, Article No.: 214, Pages 3819–3836
Abstract: Zero-knowledge (ZK) proofs have recently been explored for the integrity of machine learning (ML) inference. However, these protocols suffer from high computational overhead, with the primary bottleneck stemming from the evaluation of non-linear ...
- research-article, August 2024
PENTESTGPT: evaluating and harnessing large language models for automated penetration testing
- Gelei Deng,
- Yi Liu,
- Víctor Mayoral-Vilches,
- Peng Liu,
- Yuekang Li,
- Yuan Xu,
- Tianwei Zhang,
- Yang Liu,
- Martin Pinzger,
- Stefan Rass
SEC '24: Proceedings of the 33rd USENIX Conference on Security Symposium, Article No.: 48, Pages 847–864
Abstract: Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required by human professionals. Large Language Models (LLMs) have shown significant advancements in ...
- research-article, August 2024
Compilation and fast model counting beyond CNF
IJCAI '24: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Article No.: 367, Pages 3315–3323. https://doi.org/10.24963/ijcai.2024/367
Abstract: Circuits in deterministic decomposable negation normal form (d-DNNF) are representations of Boolean functions that enable linear-time model counting. This paper strengthens our theoretical knowledge of what classes of functions can be efficiently ...
- research-article, July 2024
Purifying quantization-conditioned backdoors via layer-wise activation correction with distribution approximation
ICML'24: Proceedings of the 41st International Conference on Machine Learning, Article No.: 1095, Pages 27439–27456
Abstract: Model quantization is a compression technique that converts a full-precision model to a more compact low-precision version for better storage. Despite the great success of quantization, recent studies revealed the feasibility of maliciously exploiting ...
- research-article, July 2024
AquaLoRA: toward white-box protection for customized stable diffusion models via watermark LoRA
- Weitao Feng,
- Wenbo Zhou,
- Jiyan He,
- Jie Zhang,
- Tianyi Wei,
- Guanlin Li,
- Tianwei Zhang,
- Weiming Zhang,
- Nenghai Yu
ICML'24: Proceedings of the 41st International Conference on Machine Learning, Article No.: 538, Pages 13423–13444
Abstract: Diffusion models have achieved remarkable success in generating high-quality images. Recently, the open-source models represented by Stable Diffusion (SD) are thriving and are accessible for customization, giving rise to a vibrant community of creators ...
- research-article, July 2024
A Hitchhiker’s Guide to Jailbreaking ChatGPT via Prompt Engineering
- Yi Liu,
- Gelei Deng,
- Zhengzi Xu,
- Yuekang Li,
- Yaowen Zheng,
- Ying Zhang,
- Lida Zhao,
- Tianwei Zhang,
- Kailong Wang
SEA4DQ 2024: Proceedings of the 4th International Workshop on Software Engineering and AI for Data Quality in Cyber-Physical Systems/Internet of Things, Pages 12–21. https://doi.org/10.1145/3663530.3665021
Abstract: Natural language prompts serve as an essential interface between users and Large Language Models (LLMs) like GPT-3.5 and GPT-4, which are employed by ChatGPT to produce outputs across various tasks. However, prompts crafted with malicious intent, known ...