Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Introduction
- 2 The biology of neural networks: a few features for the sake of non-biologists
- 3 The dynamics of neural networks: a stochastic approach
- 4 Hebbian models of associative memory
- 5 Temporal sequences of patterns
- 6 The problem of learning in neural networks
- 7 Learning dynamics in ‘visible’ neural networks
- 8 Solving the problem of credit assignment
- 9 Self-organization
- 10 Neurocomputation
- 11 Neurocomputers
- 12 A critical view of the modeling of neural networks
- References
- Index
9 - Self-organization
Published online by Cambridge University Press: 30 November 2009
Summary
A neural network self-organizes if learning proceeds without evaluating the relevance of output states. Input states are the only data provided, and during the learning session no attention is paid to the network's performance. How information is embedded into the system obviously depends on the learning algorithm, but it also depends on the structure of the input data and on architectural constraints.
The latter point is of paramount importance. In the first chapter we saw that the central nervous system is highly structured, that the topologies of signals conveyed by the sensory tracts are somehow preserved in the primary areas of the cortex, and that different parts of the cortex process well-defined types of information. A comprehensive theory of neural networks must account for the architecture of the networks. Up to now this has hardly been the case, since only two types of structure have been distinguished: fully connected networks and feedforward layered systems. In reality the structures themselves are the result of the interplay between a genetically determined gross architecture (for example, the sprouting of neuronal contacts towards defined regions of the system) and the modification of this crude design by learning and experience (the pruning of those contacts). The topology of the networks, the functional significance of their structures and the form of the learning rules are therefore closely intertwined. There is as yet no global theory explaining why the structure of the CNS is the one we observe and how its different parts cooperate to produce such an efficient system, but there have been some attempts to explain at least the simplest functional organizations, those of the primary sensory areas in particular.
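To make the notion of self-organization concrete, here is a minimal sketch of one classic unsupervised rule of this kind, a one-dimensional Kohonen map. It is an illustration, not an algorithm taken from the chapter; all names and parameter values (learning rate, neighbourhood width) are arbitrary choices. Note that learning uses only the input data: at no point is the relevance of the output evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 10                         # units arranged on a 1-D chain
weights = rng.random((n_units, 2))   # each unit holds a 2-D weight vector

def train_step(x, weights, lr=0.1, sigma=1.5):
    """Move the winning unit and its chain neighbours towards the input x."""
    # Winner: unit whose weight vector is closest to the input
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighbourhood defined on the chain topology, not input space
    dist = np.abs(np.arange(len(weights)) - winner)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    # Unsupervised update: no target output, no error signal
    weights += lr * h[:, None] * (x - weights)
    return weights

# Inputs drawn uniformly from the unit square
for _ in range(2000):
    weights = train_step(rng.random(2), weights)
```

After training, neighbouring units on the chain tend to encode neighbouring regions of the input space, a toy version of the topology preservation observed in primary sensory areas.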
An Introduction to the Modeling of Neural Networks, pp. 299-324. Publisher: Cambridge University Press. Print publication year: 1992.