DOI: 10.1145/2908812.2908839
Research Article · Public Access

Identifying Core Functional Networks and Functional Modules within Artificial Neural Networks via Subsets Regression

Published: 20 July 2016

Abstract

As the power and capabilities of Artificial Neural Networks (ANNs) grow, so do their size and complexity. To both decipher and improve ANNs, we need better tools for understanding their inner workings. To that end, we introduce an algorithm called Subsets Regression on network Connectivity (SRC). SRC prunes away unimportant nodes and connections in ANNs, revealing a core functional network (CFN) that is simpler and thus easier to analyze. SRC can also identify functional modules within an ANN. We demonstrate SRC's capabilities on both directly and indirectly encoded ANNs evolved to solve a modular problem. In many cases where evolution produces a highly entangled, non-modular ANN, SRC reveals that a sparse, modular CFN is hidden within the network. That finding will substantially impact the sizable and ongoing research into the evolution of modularity, and it should encourage researchers to revisit previous results on that topic. We also show that the SRC algorithm estimates the modularity Q-Score of a network more accurately than state-of-the-art approaches. Overall, SRC enables us to greatly simplify ANNs in order to better understand and improve them, and it reveals that they often contain hidden modular structure.
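
The paper's full SRC procedure is not reproduced on this page, but the abstract's central idea, regressing a network's output on subsets of its internal activations to find a minimal subset that explains the behavior, can be sketched briefly. The Python sketch below is an illustrative assumption rather than the authors' implementation: it uses greedy forward subset selection (exhaustive best-subset search is combinatorial), and every name in it (greedy_subset_regression, H, y, tol) is hypothetical.

```python
import numpy as np

def greedy_subset_regression(X, y, tol=1e-3):
    """Greedy forward subset selection: repeatedly add the column of X
    that most reduces mean-squared error in a least-squares fit of y,
    stopping when the best improvement falls below tol."""
    n, p = X.shape
    selected = []
    best_err = float(np.mean(y ** 2))  # error of the empty (all-zero) model
    while len(selected) < p:
        errs = np.full(p, np.inf)
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            errs[j] = np.mean((y - cols @ coef) ** 2)
        j_best = int(np.argmin(errs))
        if best_err - errs[j_best] < tol:
            break  # no remaining candidate explains enough extra variance
        selected.append(j_best)
        best_err = errs[j_best]
    return selected

# Hypothetical usage: rows of H are hidden-unit activations recorded over
# many inputs; y is the network's output on the same inputs. Units outside
# the selected subset barely influence the output.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 8))      # activations of 8 hidden units
y = 2.0 * H[:, 1] - 1.5 * H[:, 4]  # output actually driven by units 1 and 4
print(greedy_subset_regression(H, y))  # -> [1, 4]
```

Hidden units outside the returned subset contribute little to the output and are natural pruning candidates; applying this kind of test across nodes and connections, output by output, is the spirit of extracting a core functional network (CFN).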

Cited By

  • Evolving interpretable neural modularity in free-form multilayer perceptrons through connection costs. Neural Computing and Applications 36(3):1459-1476, 2024. DOI: 10.1007/s00521-023-09117-4
  • The Elements of Flexibility for Task-Performing Systems. IEEE Access 11:8029-8056, 2023. DOI: 10.1109/ACCESS.2023.3238872
  • Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks. PLOS ONE 12(11):e0187736, 2017. DOI: 10.1371/journal.pone.0187736

    Published In

    GECCO '16: Proceedings of the Genetic and Evolutionary Computation Conference 2016
    July 2016
    1196 pages
    ISBN:9781450342063
    DOI:10.1145/2908812
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 July 2016

    Author Tags

    1. analysis
    2. artificial neural networks
    3. functional modules
    4. modularity
    5. subsets regression

    Qualifiers

    • Research-article

    Conference

    GECCO '16: Genetic and Evolutionary Computation Conference
    July 20-24, 2016
    Denver, Colorado, USA

    Acceptance Rates

    GECCO '16 paper acceptance rate: 137 of 381 submissions (36%)
    Overall acceptance rate: 1,669 of 4,410 submissions (38%)
