Book contents
- Frontmatter
- Contents
- Preface
- Dedication
- 1 Introduction
- 2 The Basic Attractor Neural Network
- 3 General Ideas Concerning Dynamics
- 4 Symmetric Neural Networks at Low Memory Loading
- 5 Storage and Retrieval of Temporal Sequences
- 6 Storage Capacity of ANN's
- 7 Robustness - Getting Closer to Biology
- 8 Memory Data Structures
- 9 Learning
- 10 Hardware Implementations of Neural Networks
- Glossary
- Index
3 - General Ideas Concerning Dynamics
Published online by Cambridge University Press: 05 August 2012
Summary
The Stochastic Process, Ergodicity and Beyond
Stochastic equation and apparent ergodicity
In Section 1.4.1 we saw that the dynamics of an ANN is a march on the vertices of a hypercube in a space of N dimensions, where N is the number of neurons in the network. Every vertex of the cube, Figure 1.13, is a bona fide network state. Consider first the case of asynchronous dynamics, Section 2.2.3, in which a single neuron changes its state at every time step. In this case the network steps from one vertex to one of its N nearest neighbors.

The question of ergodicity, and correspondingly of the ability of the system to perform as an associative memory, is related to the dependence of trajectories on their initial states. Recall that the initial states of the network are the states imposed on it by an external stimulus. If the network entered a similar dynamical trajectory for every stimulus, no classification would be achieved. This is the sense in which we employ the term ergodic behavior. In terms of our landscape pictures, Figures 2.10 and 2.11, this would be the case of a single valley to which all trajectories flow. If, on the other hand, trajectories in the space of network states depend strongly on the initial states, and correspondingly on the incoming stimuli, then the network can recall selectively and hold a variety of items in memory.
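The distinction can be made concrete with a small simulation. The sketch below, which is not taken from the book, stores two random patterns in a Hopfield-style network via a Hebbian rule (all sizes and parameters are illustrative assumptions) and runs the asynchronous dynamics described above: one randomly chosen neuron updates per time step, so each step moves the state to one of its N nearest-neighbor vertices, or leaves it in place. Starting near either stored pattern, the trajectories separate and each settles into its own valley, the non-ergodic behavior that makes selective recall possible.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons; each network state is a vertex of the N-cube

# Two random +/-1 memory patterns, stored with a Hebbian coupling matrix
# (an illustrative choice; loading is low, so retrieval is easy).
patterns = rng.choice([-1, 1], size=(2, N))
J = (patterns.T @ patterns) / N
np.fill_diagonal(J, 0)  # no self-coupling

def run_async(s, steps=20 * N):
    """Asynchronous dynamics: one randomly chosen neuron updates per
    time step, so the state moves to one of its N nearest-neighbor
    vertices of the hypercube (or stays put)."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if J[i] @ s >= 0 else -1
    return s

def overlap(a, b):
    """Normalized overlap between two states; 1.0 means identical."""
    return (a @ b) / N

# Start near each stored pattern (flip 10% of its bits) and check that
# each trajectory is attracted back to its own memory, not to a single
# common valley.
final_overlaps = []
for mu in range(2):
    s0 = patterns[mu].copy()
    flip = rng.choice(N, size=N // 10, replace=False)
    s0[flip] *= -1
    s_final = run_async(s0)
    final_overlaps.append(overlap(s_final, patterns[mu]))
    print(f"pattern {mu}: final overlap with its own memory = "
          f"{final_overlaps[mu]:.2f}")
```

Because the two initial states flow to two distinct attractors, the final overlaps with the respective stored patterns are close to 1; a single-valley (ergodic) system would instead erase the dependence on the initial condition.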
Modeling Brain Function: The World of Attractor Neural Networks, pp. 97-154. Publisher: Cambridge University Press. Print publication year: 1989.