Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Neural Networks: A Control Approach
- 2 Pseudoinverses and Tensor Products
- 3 Associative Memories
- 4 The Gradient Method
- 5 Nonlinear Neural Networks
- 6 External Learning Algorithm for Feedback Controls
- 7 Internal Learning Algorithm for Feedback Controls
- 8 Learning Processes of Cognitive Systems
- 9 Qualitative Analysis of Static Problems
- 10 Dynamical Qualitative Simulation
- Appendix 1 Convex and Nonsmooth Analysis
- Appendix 2 Control of an AUV
- Bibliography
- Index
1 - Neural Networks: A Control Approach
Published online by Cambridge University Press: 05 August 2012
Introduction
A neural network is a network of subunits, called "formal neurons," that process input signals into output signals and are coupled through "synapses." The synapses are the links of this particular kind of network; their "strength," called the synaptic weight, codes the "knowledge" of the network and controls the processing of the signals.
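For concreteness, a formal neuron can be sketched as follows; the notation (input signals $x_1, \dots, x_n$, synaptic weights $w_i$, transfer function $g$) is an illustrative choice, not necessarily the book's:

$$
y \;=\; g\Big(\sum_{i=1}^{n} w_i\, x_i\Big),
$$

where the output signal $y$ is obtained by applying the transfer function $g$ to the weighted sum of the input signals, the weights $w_i$ coding the coupling strengths of the synapses.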
Let us be clear at the outset that the resemblance of a formal neuron to an animal-brain neuron is not well established, but that is not essential at this stage of abstraction. However, this terminology can be justified to some extent and is by now widely accepted; Chapter 8 develops this issue.
Also, two basic motivations are always combined in dealing with neural networks: one attempts to model actual biological nervous systems, the other is content with implementing neural-like systems on computers. Every model lies between these two requirements: the first constrains the modeling, the second allows more freedom in the choice of a particular representation.
There are so many different versions of neural networks that it is difficult to find a common framework unifying all of them at a concrete level. One can, however, regard neural networks as dynamical systems (discrete or continuous) whose states are the signals and whose controls are the synaptic weights, which regulate the flux of transmitters from one neuron to another.
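To make this control viewpoint concrete, here is a minimal formulation under the assumption that the signals are gathered into a state vector $x$ and the synaptic weights into a matrix $W$ acting as the control; the symbols $x$, $W$, $g$, $f$ are again illustrative rather than the book's notation:

$$
x_{k+1} \;=\; g\big(W_k\, x_k\big) \quad \text{(discrete time)},
\qquad
\dot{x}(t) \;=\; f\big(x(t),\, W(t)\big) \quad \text{(continuous time)},
$$

where the state $x$ collects the signals and the control $W$ collects the synaptic weights. In this reading, learning amounts to choosing the control $W$ that steers the signal dynamics.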
- Type: Chapter
- Information: Neural Networks and Qualitative Physics: A Viability Approach, pp. 1–22
- Publisher: Cambridge University Press
- Print publication year: 1996