Interval universal approximation for neural networks
Proceedings of the ACM on Programming Languages, 2022
To verify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using the interval abstract domain. In this paper, we study the theoretical power and limits of the interval domain for neural-network verification.
First, we introduce the interval universal approximation (IUA) theorem. IUA shows not only that neural networks can approximate any continuous function f (universal approximation), as has been known for decades, but also that we can find a neural network, using any well-behaved activation function, whose interval bounds are an arbitrarily close approximation of the set semantics of f (the result of applying f to a set of inputs). We call this notion of approximation interval approximation. Our theorem generalizes the recent result of Baader et al. from ReLUs to a rich class of activation functions that we call squashable functions. Additionally, the IUA theorem implies that we can always construct provably robust neural networks under the ℓ∞-norm using almost any practical activation function.
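To make the interval-analysis setting concrete, the following is a minimal sketch of interval bound propagation through a small feedforward network with a sigmoid activation (sigmoid is one example of a monotone, squashable function). The layer sizes, random weights, input box, and helper names (affine_interval, network_interval) are illustrative assumptions, not constructions from the paper.

```python
# A minimal sketch of interval bound propagation (IBP) through a tiny
# feedforward network with a sigmoid activation. Weights, sizes, and the
# input box are illustrative only.
import numpy as np

def affine_interval(W, b, lo, hi):
    """Tightest interval image of x -> W @ x + b over the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network_interval(layers, lo, hi):
    """Propagate an input box through affine layers followed by sigmoid.

    Sigmoid is monotone, so its exact interval transformer is
    [sigmoid(lo), sigmoid(hi)], and each affine step is exact per layer.
    The composition still over-approximates the true set semantics,
    because correlations between neurons are lost between layers.
    """
    for W, b in layers:
        lo, hi = affine_interval(W, b, lo, hi)
        lo, hi = sigmoid(lo), sigmoid(hi)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
          (rng.standard_normal((1, 4)), rng.standard_normal(1))]
lo, hi = network_interval(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print("output bounds:", lo, hi)  # sound bounds on f([-0.1, 0.1]^2)
```

The IUA theorem concerns the converse direction: for a target function f, one can construct a network whose interval bounds, computed as above, approximate the set semantics of f arbitrarily closely.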
Second, we study the computational complexity of constructing neural networks that are amenable to precise interval analysis. This is a crucial question, as our constructive proof of IUA is exponential in the size of the approximation domain. We boil this question down to the problem of approximating the range of a neural network with squashable activation functions. We show that the range approximation problem (RA) is a Δ₂-intermediate problem, which is strictly harder than NP-complete problems, assuming coNP ⊄ NP. As a result, IUA is an inherently hard problem: no matter what abstract domain or computational tools we consider to achieve interval approximation, there is no efficient construction of such a universal approximator. This implies that it is hard to construct a provably robust network, even if we have a robust network to start with.
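The cost trade-off behind range approximation can be seen in a simple experiment: one-shot interval analysis over-approximates a network's range cheaply, and splitting the input box into sub-boxes tightens the bounds, but the number of interval evaluations grows exponentially with the input dimension. This is a hedged illustration of the blow-up, not the paper's Δ₂-hardness argument; the network, the splitting factor k, and the helper names (ibp, refined_range) are assumptions for the example.

```python
# A sketch contrasting one-shot interval analysis with grid refinement
# for range approximation. Splitting the box into k sub-intervals per
# dimension costs k**dim interval evaluations: tighter bounds,
# exponentially more work as the input dimension grows.
import itertools
import numpy as np

def ibp(layers, lo, hi):
    # One pass of interval bound propagation (sigmoid layers).
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = 1.0 / (1.0 + np.exp(-lo)), 1.0 / (1.0 + np.exp(-hi))
    return lo, hi

def refined_range(layers, lo, hi, k=8):
    # Union of IBP results over the k**dim sub-boxes of [lo, hi].
    dim = len(lo)
    grids = [np.linspace(lo[i], hi[i], k + 1) for i in range(dim)]
    out_lo, out_hi = np.inf, -np.inf
    for cell in itertools.product(range(k), repeat=dim):
        c_lo = np.array([grids[i][c] for i, c in enumerate(cell)])
        c_hi = np.array([grids[i][c + 1] for i, c in enumerate(cell)])
        l, h = ibp(layers, c_lo, c_hi)
        out_lo, out_hi = min(out_lo, l.min()), max(out_hi, h.max())
    return out_lo, out_hi

rng = np.random.default_rng(1)
layers = [(rng.standard_normal((8, 2)), rng.standard_normal(8)),
          (rng.standard_normal((1, 8)), rng.standard_normal(1))]
lo0, hi0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print("coarse range bounds: ", ibp(layers, lo0, hi0))
print("refined range bounds:", refined_range(layers, lo0, hi0))
```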