Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Introduction
- 2 The biology of neural networks: a few features for the sake of non-biologists
- 3 The dynamics of neural networks: a stochastic approach
- 4 Hebbian models of associative memory
- 5 Temporal sequences of patterns
- 6 The problem of learning in neural networks
- 7 Learning dynamics in ‘visible’ neural networks
- 8 Solving the problem of credit assignment
- 9 Self-organization
- 10 Neurocomputation
- 11 Neurocomputers
- 12 A critical view of the modeling of neural networks
- References
- Index
8 - Solving the problem of credit assignment
Published online by Cambridge University Press: 30 November 2009
Summary
The architectures of the neural networks we considered in Chapter 7 are made exclusively of visible units. During the learning stage, the states of all neurons are entirely determined by the set of patterns to be memorized. They are, so to speak, pinned, and the relaxation dynamics plays no role in the evolution of synaptic efficacies. How to deal with more general systems is not a simple problem. Endowing a neural network with hidden units amounts to adding many degrees of freedom to the system, which leaves room for ‘internal representations’ of the outside world. Building learning algorithms that enable general neural networks to set up efficient internal representations is a challenge that has not yet been taken up in a fully satisfactory way. Pragmatic approaches have nevertheless been developed, mainly using the so-called back-propagation algorithm. We owe the current excitement about neural networks to the surprising successes that have been obtained so far by calling upon that technique: in some cases the neural networks seem to extract the unexpressed rules that are hidden in sets of raw data. But for the moment we really understand neither the reasons for this success nor those for the (generally unpublished) failures.
The back-propagation algorithm
A direct derivation
To solve the credit assignment problem is to devise means of building relevant internal representations; that is to say, to decide which state I^{µ,hid} of the hidden units is to be associated with a given pattern I^{µ,vis} of the visible units.
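The back-propagation idea can be illustrated in a few lines of code. The sketch below is not the book's derivation but a minimal, self-contained example under stated assumptions: a 2–2–1 feedforward network with sigmoid units and a squared-error cost, trained on the XOR mapping by gradient descent. The output error is propagated backwards through the sigmoid derivatives to assign credit to the hidden-to-input weights; the network size, learning rate, and cost function are illustrative choices, not the author's.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights of a tiny 2-2-1 network: input -> hidden (W1, b1), hidden -> output (W2, b2)
W1 = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1.0, 1.0) for _ in range(2)]
b2 = 0.0

# XOR: a mapping that visible-only (single-layer) networks cannot realize
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    """Return hidden-unit states and the output for input pattern x."""
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)
    return h, y

def total_error():
    """Sum of squared errors over the training set."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(eta=0.5):
    """One sweep of gradient descent: back-propagate the output error."""
    global b2
    for x, t in data:
        h, y = forward(x)
        # Credit assigned to the output unit: error times sigmoid derivative
        delta_out = (y - t) * y * (1 - y)
        # Credit assigned to each hidden unit, propagated back through W2
        delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= eta * delta_out * h[j]
            b1[j] -= eta * delta_hid[j]
            for i in range(2):
                W1[j][i] -= eta * delta_hid[j] * x[i]
        b2 -= eta * delta_out

before = total_error()
for _ in range(2000):
    train_step()
after = total_error()
```

After training, `after` should be smaller than `before`: by descending the error surface, the hidden units have been driven toward an internal representation of the patterns that the visible units alone could not encode.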
An Introduction to the Modeling of Neural Networks, pp. 269–298. Publisher: Cambridge University Press. Print publication year: 1992.