Approximation capabilities of multilayer feedforward networks
K. Hornik - Neural Networks, 1991 - Elsevier
Abstract
We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.
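As a minimal numerical sketch of the theorem's setting (not the paper's construction): the snippet below uses a single hidden layer with tanh as the bounded, nonconstant activation and approximates a continuous target uniformly on the compact set [-1, 1]. The target function, the random distributions for the hidden weights, and the least-squares fit of the output layer are all illustrative assumptions.

```python
import numpy as np

# Illustrative continuous target on the compact set [-1, 1].
def target(x):
    return np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 2000)
y = target(x)

for width in (5, 50, 500):
    # Single hidden layer; tanh is bounded and nonconstant.
    # Hidden weights and biases are drawn at random (an assumption
    # for this sketch); only the output weights are fit, here by
    # linear least squares.
    w = rng.normal(scale=5.0, size=width)
    b = rng.uniform(-5.0, 5.0, size=width)
    H = np.tanh(np.outer(x, w) + b)              # (2000, width) hidden activations
    c, *_ = np.linalg.lstsq(H, y, rcond=None)    # output-layer weights
    sup_err = np.max(np.abs(H @ c - y))          # uniform error over the grid
    print(f"hidden units = {width:4d}   sup error = {sup_err:.4f}")
```

The grid maximum only approximates the true sup norm, but the error typically shrinks as the width grows, consistent with the theorem's proviso that sufficiently many hidden units be available.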