
Information Geometry of Neural Networks

  • Conference paper
PRICAI 2000 Topics in Artificial Intelligence (PRICAI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1886)


Abstract

Japan has launched a large-scale Brain Science Program that includes the theoretical foundations of neurocomputing. The mathematical foundation of brain-style computation is one of the main targets of our laboratory at the RIKEN Brain Science Institute. The present talk introduces the Japanese Brain Science Program and then outlines a direction toward a mathematical foundation of neurocomputing.

A neural network is specified by a number of real free parameters (connection weights, or synaptic efficacies) that are modifiable by learning. The set of all such networks forms a multi-dimensional manifold. To understand the total capability of such networks, it is useful to study the intrinsic geometrical structure of this neuromanifold.
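For concreteness, the neuromanifold can be written as a parametric family; the notation below is a sketch supplied for illustration, not taken from the abstract itself.

% A network with n modifiable weights \theta = (\theta_1, \dots, \theta_n)
% realizes an input-output map f(x; \theta). The neuromanifold is the set
% of all such networks, with \theta serving as a coordinate system on it.
\[
  M \;=\; \bigl\{\, f(\cdot\,;\theta) \;:\; \theta \in \mathbb{R}^n \,\bigr\}.
\]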

When a network is disturbed by noise, its behavior is described by a conditional probability distribution. In such a case, information geometry provides a fundamental geometrical structure. We apply information geometry to the set of multi-layer perceptrons. Because this set is a Riemannian space, we are naturally led to the Riemannian, or natural, gradient learning method, which proves to give a strikingly fast and accurate learning algorithm. The geometry also shows that various types of singularities exist in the manifold; these are not peculiar to neural networks but are common to all hierarchical systems. The singularities strongly influence learning behavior. All of these aspects are analyzed mathematically.
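In the standard information-geometric formulation (a sketch in conventional notation, not a quotation from the paper), the noisy network defines a conditional distribution p(y | x; θ), the Fisher information matrix supplies the Riemannian metric, and natural gradient learning preconditions the ordinary gradient by its inverse:

% Fisher information metric on the neuromanifold:
\[
  g_{ij}(\theta) \;=\;
  \mathbb{E}\!\left[
    \frac{\partial \log p(y \mid x;\theta)}{\partial \theta_i}\,
    \frac{\partial \log p(y \mid x;\theta)}{\partial \theta_j}
  \right],
  \qquad
  G(\theta) = \bigl[ g_{ij}(\theta) \bigr].
\]
% Natural (Riemannian) gradient descent on a loss L(\theta) with learning
% rate \eta: the ordinary gradient is premultiplied by the inverse metric.
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\, G^{-1}(\theta_t)\, \nabla L(\theta_t).
\]

At the singularities mentioned above, G(θ) degenerates (loses rank), which is one way to see why such points affect learning dynamics.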



Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Amari, S. (2000). Information Geometry of Neural Networks. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science, vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_2


  • DOI: https://doi.org/10.1007/3-540-44533-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67925-7

  • Online ISBN: 978-3-540-44533-3

  • eBook Packages: Springer Book Archive
