Adversarial Machine Learning
Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence
Article
Graph neural networks (GNNs) are models that capture the dependencies in graph data by passing messages between graph nodes, and they have been widely used to process graph data that contains relational in...
Article
Reinforcement learning is one of the most important branches of AI. Due to its capacity for self-adaptation and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple...
Article
Cooperation is an essential behavior in multi-agent systems. Existing mechanisms share two common drawbacks. The first is that malicious agents are not taken into account. Due to the diverse roles in t...
Article
Machine learning is widely deployed in society, and the advent of big data has unleashed its power across a wide range of applications. One emerging problem faced by machine learning is the discrimination from d...
Chapter
Chapter and Conference Paper
Today, multiple food delivery companies operate globally across different regions, and this expansion could put users’ data at risk. This data could be stored by a third party and could be used in further analys...
Chapter
This chapter investigates the robustness gap between machine intelligence and human perception in machine learning for cyberspace security, using game-theoretic adversarial learning algorithms. In this chapter...
Chapter
While adversarial examples (AEs) or adversarial perturbations (APs) have to date usually been treated as a security risk, they can also serve as privacy protection tools when facing deep learning-based privacy at...
Chapter
In this chapter, we explore adversarial attack surfaces. We examine how they can exploit vulnerabilities in machine learning and how to make learning algorithms robust to attacks on security and privacy of the...
Chapter
In this chapter, we explore neural network architectures, implementations, cost analysis, and training processes using game-theoretic adversarial deep learning. We also define the utility bounds of such deep ...
Book
Adversarial Machine Learning: Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence
Chapter
Deep learning is not provably secure. Deep neural networks are vulnerable to security attacks from malicious adversaries, which is an ongoing and critical challenge for deep learning researchers. This chapter ...
Chapter
This chapter summarizes the game-theoretic strategies for generating adversarial manipulations. The adversarial learning objective for our adversaries is assumed to be injecting small changes into the data d...
Chapter
During the past decades, deep neural networks (DNNs) have shown great success in a wide range of applications, including image classification in the computer vision (CV) domain and text recognition in the natu...
Article
In recent years, it has been revealed that machine learning models can produce discriminatory predictions. Hence, fairness protection has come to play a pivotal role in machine learning. In the past, most stud...
Article
Concerns about visual privacy have been raised increasingly along with the dramatic growth in image and video capture and sharing. Meanwhile, with the recent breakthrough in deep learning technologies, visual...
Article
To alleviate the traffic congestion caused by the sharp increase in the number of private cars and to save commuting costs, taxi carpooling services have become the choice of many people. Current research on taxi c...
Chapter and Conference Paper
With its extensive applications and remarkable performance, deep reinforcement learning is becoming one of the most important technologies in current research. Many applications have used reinfo...
Article
Copyright infringement is a serious problem in the music industry, especially for Internet-based platforms. The existing audio watermarking methods for copyright protection have limited robustness against the ...
Book and Conference Proceedings
20th International Conference, ICWL 2021, Macau, China, November 13–14, 2021, Proceedings