Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- Neurons and neural networks: general principles
- Synaptic plasticity, topological and temporal features, and higher cortical processing
- Spin glass models and cellular automata
- 13 Neural networks: learning and forgetting
- 14 Learning by error corrections in spin glass models of neural networks
- 15 Random complex automata: analogy with spin glasses
- 16 The evolution of data processing abilities in competing automata
- 17 The inverse problem for neural nets and cellular automata
- Cyclic phenomena and chaos in neural networks
- The cerebellum and the hippocampus
- Olfaction, vision and cognition
- Applications to experiment, communication and control
- Author index
- Subject index
16 - The evolution of data processing abilities in competing automata
from Spin glass models and cellular automata
Published online by Cambridge University Press: 05 February 2012
Summary
Introduction
It is probably fair to say that we have not, to this day, formed a clear picture of the learning process; nor have we been able to elicit from artificial intelligence machines behavior that compares in flexibility and performance with that exhibited by human or even animal subjects.
Leaving aside the issue of what actually happens in a learning brain, research on the question of how to generate ‘intelligent’ behavior has oscillated between two poles. The first, which today predominates in artificial intelligence circles (Nilsson, 1980), takes it for granted that solving a particular problem entails repeated application, to a data set representing the starting condition, of operations chosen from a predefined set; the order of application may be either arbitrary or determined heuristically. The task is completed when the data set is found to be in a ‘goal’ state. This approach can be said to ascribe to the system, ‘from birth’, the capabilities required for a successful solution. The second approach, quite popular in its early version (Samuel, 1959), has recently been favored by physicists (Hopfield, 1982; Hogg & Huberman, 1985), and rests on the idea that ‘learning machines’ should be endowed not with specific capabilities but with some general architecture and a set of rules, which are used to modify the machines' internal states in such a way that progressively better performance is obtained upon presentation of successive sample tasks.
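To make the contrast concrete, here is a minimal sketch of the first approach: operators from a predefined set are applied, in breadth-first order, to a state representing the starting condition until a goal predicate holds. The toy state space, operator set, and goal are illustrative assumptions, not taken from the chapter.

```python
from collections import deque

# First approach: repeated application of operators from a predefined set
# to a data set (here, a single integer) until a 'goal' state is reached.
# The operators and goal below are illustrative, not from the chapter.
OPERATORS = [
    lambda n: n + 3,   # each operator maps one state to a successor state
    lambda n: n * 2,
]

def solve(start, is_goal, max_depth=10):
    """Breadth-first application of the operators until a goal state is found."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path            # sequence of operator indices applied
        if len(path) < max_depth:
            for i, op in enumerate(OPERATORS):
                nxt = op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [i]))
    return None

print(solve(1, lambda n: n == 14))  # [0, 0, 1]: ((1 + 3) + 3) * 2 = 14
```

And a sketch in the spirit of the second approach, loosely following the Hopfield (1982) model cited above: a fixed, general architecture whose internal state (a weight matrix) is modified by a Hebbian rule as sample patterns are presented, after which corrupted inputs relax toward the stored patterns. The network size, pattern count, and update schedule are assumptions chosen for the demonstration, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
patterns = rng.choice([-1, 1], size=(3, N))        # sample 'tasks' to store

# Learning: each presented pattern modifies the internal state (the weights)
# by a Hebbian increment; no task-specific capability is built in.
W = np.zeros((N, N))
for p in patterns:
    W += np.outer(p, p) / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=5):
    """Asynchronous threshold updates relax an input toward a stored pattern."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

probe = patterns[0].copy()
probe[:8] *= -1                                    # corrupt a quarter of the bits
print(np.array_equal(recall(probe), patterns[0]))  # usually True at this low load
```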
- Type: Chapter
- Information: Computer Simulation in Brain Science, pp. 249–259
- Publisher: Cambridge University Press
- Print publication year: 1988