Abstract
The problem of rejecting patterns that do not belong to identified training classes is investigated for Multilayer Perceptron (MLP) networks. The reason for the inherent unreliability of the standard MLP in this respect is explained, and some mechanisms for enhancing its rejection performance are considered. Two network configurations are presented as candidates for a more reliable structure and are compared to the so-called ‘negative training’ approach: the first is an MLP that uses a Gaussian activation function, and the second is an MLP with direct connections from the input layer to the output layer. The networks are examined and evaluated both through the technique of network inversion and through practical experiments in a pattern classification application. Finally, the Radial Basis Function (RBF) network model is also considered in this respect, and its performance is compared with that of the other networks described.
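The intuition behind the Gaussian-activation configuration can be sketched in a few lines. The example below is illustrative only (the weight and bias values are arbitrary assumptions, not taken from the paper): a sigmoid unit's output saturates toward a confident value as an input moves arbitrarily far from the training region, whereas a Gaussian unit's output is bounded and decays to zero, which is the property that supports rejection of spurious inputs.

```python
import math

def sigmoid(x):
    # Standard logistic activation: monotonic, saturates at 0 or 1
    # for large |x|, so far-away inputs still look "confident".
    return 1.0 / (1.0 + math.exp(-x))

def gaussian(x):
    # Gaussian activation: response peaks at x == 0 and decays for
    # large |x|, giving a localized ("closed") response region.
    return math.exp(-x * x)

# One hidden unit with illustrative parameters (hypothetical values).
w, b = 1.0, 0.0
for u in (0.5, 5.0, 50.0):  # inputs increasingly far from the training region
    a = w * u + b
    print(f"input {u:5.1f}: sigmoid {sigmoid(a):.3f}  gaussian {gaussian(a):.3f}")
```

For the input far from the training region (u = 50.0), the sigmoid output is essentially 1 (a spuriously confident response), while the Gaussian output is essentially 0, so a simple threshold on the unit's activation can reject the pattern.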
Cite this article
Vasconcelos, G.C., Fairhurst, M.C. & Bisset, D.L. Efficient detection of spurious inputs for improving the robustness of MLP networks in practical applications. Neural Comput & Applic 3, 202–212 (1995). https://doi.org/10.1007/BF01414645