For a variety of reasons, the relative impacts of a neural network's inputs on its output are valuable information to obtain. In particular, it is desirable to identify the significant features, or inputs, of a data-defined problem before the data is sufficiently preprocessed to enable high-performance neural-net training. We have defined and tested a technique for assessing such input impacts, which we compare with a method described in a paper published earlier in this journal. The new approach, known as the 'clamping' technique, offers efficient impact assessment of a problem's input features. Results of the clamping technique prove to be robust under a variety of network configurations: differences in architecture, training-parameter values and subsets of the data all deliver much the same impact rankings, which supports the notion that the technique ranks an inherent property of the available data rather than a property of any particular feedforward neural network. The success, stability and efficiency of the clamping technique are shown to hold for a number of different real-world problems. In addition, we subject the previously published technique, which we call the 'weight product' technique, to the same tests in order to provide directly comparable information.
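The abstract does not spell out the procedure, so the following is only a minimal sketch of one plausible clamping-style assessment, assuming "clamping" means holding a single input feature at a constant value (here, its sample mean) and measuring how much the trained network's output changes. The network, its weights and the data below are all hypothetical illustrations, not the authors' actual experiment:

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    # Single-hidden-layer feedforward network with tanh hidden units.
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def clamping_impacts(X, W1, b1, W2, b2):
    """Score each input feature by the mean absolute change in network
    output when that feature is clamped to its mean over the data set."""
    baseline = forward(X, W1, b1, W2, b2)
    impacts = []
    for j in range(X.shape[1]):
        Xc = X.copy()
        Xc[:, j] = X[:, j].mean()  # clamp feature j to a constant
        clamped = forward(Xc, W1, b1, W2, b2)
        impacts.append(np.mean(np.abs(baseline - clamped)))
    return np.array(impacts)

# Toy network whose output depends strongly on feature 0 and only
# weakly on feature 2, so the impact ranking should recover that.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
W1 = np.array([[2.0, 0.0],
               [0.5, 0.5],
               [0.0, 0.05]])
b1 = np.zeros(2)
W2 = np.array([[1.0], [1.0]])
b2 = np.zeros(1)

impacts = clamping_impacts(X, W1, b1, W2, b2)
ranking = np.argsort(impacts)[::-1]  # features, most impactful first
```

Because the scores depend only on the data and a trained network's input-output behaviour, not on inspecting individual weights, the same ranking can be computed for any architecture, which is consistent with the abstract's claim of stability across network configurations.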
Wang, W., Jones, P. & Partridge, D. Assessing the Impact of Input Features in a Feedforward Neural Network. NCA 9, 101–112 (2000). https://doi.org/10.1007/PL00009895