Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Neural Networks: A Control Approach
- 2 Pseudoinverses and Tensor Products
- 3 Associative Memories
- 4 The Gradient Method
- 5 Nonlinear Neural Networks
- 6 External Learning Algorithm for Feedback Controls
- 7 Internal Learning Algorithm for Feedback Controls
- 8 Learning Processes of Cognitive Systems
- 9 Qualitative Analysis of Static Problems
- 10 Dynamical Qualitative Simulation
- Appendix 1 Convex and Nonsmooth Analysis
- Appendix 2 Control of an AUV
- Bibliography
- Index
2 - Pseudoinverses and Tensor Products
Published online by Cambridge University Press: 05 August 2012
Summary
Introduction
The projection theorem allows construction of the orthogonal right-inverse of a linear surjective operator A, associating with any datum y the solution x to the equation Ax = y with minimal norm. In the same way, it allows construction of the orthogonal left-inverse of a linear injective operator A, associating with any datum y the solution x to the equation Ax = ȳ, where ȳ is the orthogonal projection of y onto the image of A. More generally, when A is any linear operator between finite-dimensional vector spaces, the pseudoinverse of A associates with any datum y the solution x (with minimal norm) to the equation Ax = ȳ, where ȳ is the orthogonal projection of y onto the image of A.
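The three constructions above can be checked numerically. The sketch below (an illustration, not from the book) uses NumPy's `np.linalg.pinv`, which computes the Moore-Penrose pseudoinverse; the matrix sizes and random data are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surjective case: A is 2x4 with full row rank, so Ax = y is solvable
# and pinv(A) acts as the orthogonal right-inverse.
A = rng.standard_normal((2, 4))
y = rng.standard_normal(2)
x = np.linalg.pinv(A) @ y
assert np.allclose(A @ x, y)            # x solves Ax = y

# Minimal norm: perturbing x by any kernel element n gives a longer solution.
n = rng.standard_normal(4)
n -= np.linalg.pinv(A) @ (A @ n)        # project n onto ker A
assert np.allclose(A @ n, 0)
assert np.linalg.norm(x) <= np.linalg.norm(x + n) + 1e-12

# Injective case: B is 4x2 with full column rank; B @ pinv(B) is the
# orthogonal projector onto im B, so pinv(B) @ z solves Bx = zbar.
B = rng.standard_normal((4, 2))
z = rng.standard_normal(4)
xbar = np.linalg.pinv(B) @ z
zbar = B @ xbar                          # projection of z onto im B
assert np.allclose(B.T @ (z - zbar), 0)  # residual orthogonal to im B
```

The same calls cover the general rectangular case: `A @ pinv(A)` is always the orthogonal projector onto the image of `A`, and `pinv(A) @ y` is the minimal-norm solution of `Ax = ȳ`.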
These definitions show how useful the concept of the pseudoinverse is in many situations: it is used, explicitly or implicitly, in many domains of statistics and data analysis. It is therefore quite natural that the pseudoinverse plays an important role when adaptive systems are used in pattern-learning algorithms.
This is what we do when constructing the heavy algorithm for adaptive systems that are affine with respect to the controls. Because we are looking for synaptic matrices when we deal with neural networks, we pause briefly to study the structure of the space of linear operators, of its dual, and of the tensor product of linear operators.
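The chapter itself develops tensor products of operators; as a preview, two standard identities can be verified numerically. This NumPy sketch (an illustration, not the book's construction) uses `np.kron` and row-major vectorization, both assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 5))
X = rng.standard_normal((2, 5))

# With row-major vectorization, the tensor product acts on a matrix X
# (viewed as a vector) by X -> A X B^T.
lhs = np.kron(A, B) @ X.ravel()
rhs = (A @ X @ B.T).ravel()
assert np.allclose(lhs, rhs)

# The pseudoinverse of a tensor product is the tensor product of the
# pseudoinverses, a standard Moore-Penrose identity.
assert np.allclose(np.linalg.pinv(np.kron(A, B)),
                   np.kron(np.linalg.pinv(A), np.linalg.pinv(B)))
```

The second identity is one reason pseudoinverses and tensor products fit together so naturally when searching for synaptic matrices.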
- Type: Chapter
- Information: Neural Networks and Qualitative Physics: A Viability Approach, pp. 23-43
- Publisher: Cambridge University Press
- Print publication year: 1996