Book contents
- Frontmatter
- Contents
- List of Contributors
- Preface
- 1 Introductory Information Theory and the Brain
- Part One Biological Networks
- Part Two Information Theory and Artificial Networks
- 5 Experiments with Low-Entropy Neural Networks
- 6 The Emergence of Dominance Stripes and Orientation Maps in a Network of Firing Neurons
- 7 Dynamic Changes in Receptive Fields Induced by Cortical Reorganization
- 8 Time to Learn About Objects
- 9 Principles of Cortical Processing Applied to and Motivated by Artificial Object Recognition
- 10 Performance Measurement Based on Usable Information
- Part Three Information Theory and Psychology
- Part Four Formal Analysis
- Bibliography
- Index
6 - The Emergence of Dominance Stripes and Orientation Maps in a Network of Firing Neurons
from Part Two - Information Theory and Artificial Networks
Published online by Cambridge University Press: 04 May 2010
Summary
Introduction
This chapter addresses the problem of training a self-organising neural network on images derived from multiple sources; this type of network may potentially be used to model the behaviour of the mammalian visual cortex (for a review of neural network models of the visual cortex see Swindale (1996)). The network that will be considered is a soft encoder, which transforms its input vector into a posterior probability over various possible classes (i.e. alternative possible interpretations of the input vector). This encoder will be optimised so that its posterior probability retains as much information as possible about its input vector, as measured in the minimum mean square reconstruction error (i.e. L2 error) sense (Luttrell, 1994a, 1997c).
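The soft-encoding idea above can be sketched in a few lines of Python. This is only an illustrative construction, not the chapter's actual model: the softmax-over-distances posterior, the sharpness parameter `beta`, and the class reference vectors (`centroids`) are all assumptions introduced here to show how a posterior over classes can yield an L2 reconstruction of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d-dimensional inputs, K candidate classes.
d, K = 2, 4
centroids = rng.normal(size=(K, d))   # one reference vector per class (assumed)

def posterior(x, beta=4.0):
    """Soft encoding: posterior probability over the K classes.

    A softmax over negative squared distances is one simple way to
    realise a soft encoder; the sharpness beta is a free parameter.
    """
    d2 = np.sum((centroids - x) ** 2, axis=1)
    p = np.exp(-beta * d2)
    return p / p.sum()

def reconstruction(x):
    """Posterior-weighted reconstruction of the input vector."""
    return posterior(x) @ centroids

x = rng.normal(size=d)
err = np.sum((x - reconstruction(x)) ** 2)   # L2 reconstruction error
```

Optimising the reference vectors to minimise the expected value of `err` over the input distribution is the sense in which the posterior "retains as much information as possible" about the input.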
In the special case where the optimisation is performed over the space of all possible soft encoders, the optimum solution is a hard encoder (i.e. a "winner-take-all" network, in which only one of the output neurons is active), which is an optimal vector quantiser (VQ) of the type described in Linde et al. (1980) for encoding the input vector with minimum L2 error. A more general case is one in which the output of the soft encoder is deliberately damaged by a noise process. This type of noisy encoder leads to an optimal self-organising map (SOM) for encoding the input vector with minimum L2 error, which is closely related to the well-known Kohonen map (Kohonen, 1984).
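The two limiting cases above can be illustrated with a small sketch: a winner-take-all (hard VQ) encoding, and a Kohonen-style neighbourhood update in which Gaussian "noise" on the output index causes code vectors near the winner to be updated as well. The ring topology, learning rate `eta`, and neighbourhood width `sigma` are assumptions made for this toy example, not details taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a 1-D ring of K code vectors in d dimensions.
d, K = 2, 16
codebook = rng.normal(size=(K, d))

def winner(x):
    """Hard (winner-take-all) encoding: index of the nearest code vector."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

def som_step(x, eta=0.1, sigma=2.0):
    """One SOM-style update.

    Modelling output noise as a Gaussian over neighbouring indices makes
    every code vector near the winner move towards the input, which is
    the neighbourhood update of a Kohonen map.
    """
    w = winner(x)
    idx = np.arange(K)
    ring = np.minimum(np.abs(idx - w), K - np.abs(idx - w))  # ring distance
    h = np.exp(-ring ** 2 / (2 * sigma ** 2))                # neighbourhood
    codebook[:] += eta * h[:, None] * (x - codebook)

for _ in range(1000):
    som_step(rng.normal(size=d))
```

With `sigma` shrunk to zero only the winner is updated, recovering the plain VQ training rule of Linde et al.; a non-zero `sigma` is the noisy-encoder case that yields a topographically ordered map.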
The soft encoder network that is discussed in this chapter turns out to have many of the emergent properties that are observed in the mammalian visual cortex, such as dominance stripes and orientation maps.
Type: Chapter
Information Theory and the Brain, pp. 101–121. Publisher: Cambridge University Press. Print publication year: 2000.