
A new pruning heuristic based on variance analysis of sensitivity information

Published: 01 November 2001

Abstract

Architecture selection is a very important aspect of neural network (NN) design, used to optimally balance performance and computational complexity. Sensitivity analysis has been applied successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic, based on variance analysis, is proposed to decide which units to prune. The basic idea is that a parameter whose sensitivity variance is not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units. The new pruning algorithm is also compared with standard pruning algorithms.
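The heuristic described in the abstract can be sketched in a few lines. The toy single-hidden-layer network below, the analytic input sensitivities, and the fixed variance threshold (standing in for the statistical significance test the paper actually proposes) are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer network: y = V @ tanh(W @ x).
# The sensitivity of the output to input unit i on pattern p
# is the derivative dy/dx_i evaluated at x_p.
def input_sensitivities(W, V, X):
    H = np.tanh(X @ W.T)          # hidden activations, shape (patterns, hidden)
    dH = 1.0 - H ** 2             # tanh derivative at each hidden unit
    # Chain rule: dy/dx = sum_j V_j * dH_j * W_j, one row per pattern.
    return (dH * V) @ W           # shape (patterns, inputs)

# Two relevant inputs and one irrelevant input (zero incoming weights).
W = rng.normal(size=(4, 3))
W[:, 2] = 0.0                     # input 2 contributes nothing to the output
V = rng.normal(size=4)
X = rng.normal(size=(200, 3))     # 200 training patterns

S = input_sensitivities(W, V, X)

# Pruning heuristic (simplified): a unit whose sensitivity variance over
# the training set is not significantly different from zero is deemed
# irrelevant. A plain threshold replaces the paper's statistical test.
variance = S.var(axis=0)
prune = variance < 1e-8           # input 2 is flagged for pruning
```

The same idea extends to hidden units by differentiating the output with respect to each hidden activation instead of each input.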


    Published In

    IEEE Transactions on Neural Networks  Volume 12, Issue 6
    November 2001
    307 pages

    Publisher

    IEEE Press

    Qualifiers

    • Research-article

Cited By
    • (2024) A Review on the emerging technology of TinyML. ACM Computing Surveys 56(10):1-37. doi:10.1145/3661820
    • (2024) Towards Sobolev Pruning. Proceedings of the Platform for Advanced Scientific Computing Conference, 1-11. doi:10.1145/3659914.3659915
    • (2024) Optimizing dense feed-forward neural networks. Neural Networks 171:229-241. doi:10.1016/j.neunet.2023.12.015
    • (2022) Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications. ACM Transactions on Design Automation of Electronic Systems 27(3):1-50. doi:10.1145/3486618
    • (2021) Pruning and quantization for deep neural network acceleration. Neurocomputing 461:370-403. doi:10.1016/j.neucom.2021.07.045
    • (2021) A systematic review on overfitting control in shallow and deep neural networks. Artificial Intelligence Review 54(8):6391-6438. doi:10.1007/s10462-021-09975-5
    • (2020) Retraining a Pruned Network: A Unified Theory of Time Complexity. SN Computer Science 1(4). doi:10.1007/s42979-020-00208-w
    • (2020) An online self-organizing algorithm for feedforward neural network. Neural Computing and Applications 32(23):17505-17518. doi:10.1007/s00521-020-04907-6
    • (2020) A new growing pruning deep learning neural network algorithm (GP-DLNN). Neural Computing and Applications 32(24):18143-18159. doi:10.1007/s00521-019-04196-8
    • (2020) Interval Adjoint Significance Analysis for Neural Networks. Computational Science – ICCS 2020, 365-378. doi:10.1007/978-3-030-50420-5_27
