Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- Neurons and neural networks: general principles
- Synaptic plasticity, topological and temporal features, and higher cortical processing
- 5 Neurons with hysteresis?
- 6 On models of short- and long-term memories
- 7 Topology, structure, and distance in quasirandom neural networks
- 8 A layered network model of sensory cortex
- 9 Computer simulation of networks of electrotonic neurons
- 10 A possible role for coherence in neural networks
- 11 Simulations of the trion model and the search for the code of higher cortical processing
- 12 AND–OR logic analogue of neuron networks
- Spin glass models and cellular automata
- Cyclic phenomena and chaos in neural networks
- The cerebellum and the hippocampus
- Olfaction, vision and cognition
- Applications to experiment, communication and control
- Author index
- Subject index
9 - Computer simulation of networks of electrotonic neurons
from Synaptic plasticity, topological and temporal features, and higher cortical processing
Summary
Introduction
The fundamental aim of simulating neural nets is a better understanding of the functioning of the nervous system. Because of its complexity, simplifying assumptions have to be introduced from the outset. In many frequently used models these assumptions are:
- Each model neuron can be in one of a few discrete states (e.g. ‘on’, ‘off’, ‘refractory’).
- The totality of interactions between neurons is treated summarily by specifying a few parameters, often just one ‘synaptic strength’, which is determined randomly, by a deterministic algorithm, or by a combination of both.
- The information processing by a neuron consists of comparing the value of a certain parameter, determined by incoming signals from other neurons, with some threshold. When the value of this parameter exceeds the threshold, the neuron makes a transition to another of its possible states. A typical example of such a parameter is the membrane potential at the axon hillock, which is obtained by summing the action potentials impinging on the neuron during a certain time window and whose value determines whether the neuron ‘fires’ or not.
The hypothesis that individual neurons can be described in this simple way makes the simulation of networks containing many model neurons possible. On the other hand, there are important parts of nervous systems where model neurons simplified to this extent have little in common with reality.
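As an illustration of these assumptions, the sketch below simulates a small network of threshold neurons in Python. It is a minimal, hypothetical example: the three discrete states, the Gaussian synaptic strengths, the threshold value, and the synchronous update rule are all illustrative choices, not the particular model studied in this chapter.

```python
import numpy as np

# Discrete neuron states, as in assumption (i).
OFF, ON, REFRACTORY = 0, 1, 2

rng = np.random.default_rng(0)

N = 100            # number of model neurons (illustrative)
THRESHOLD = 0.5    # firing threshold for the summed input (illustrative)

# One 'synaptic strength' per ordered pair of neurons, drawn at random
# (assumption (ii) also allows deterministic or mixed choices).
weights = rng.normal(0.0, 1.0, size=(N, N))
np.fill_diagonal(weights, 0.0)   # no self-connections

# Random initial condition: roughly 10% of neurons start 'on'.
states = np.where(rng.random(N) < 0.1, ON, OFF)

def step(states, weights, threshold=THRESHOLD):
    """One synchronous update: sum the inputs from firing neurons and
    compare the result with the threshold (assumption (iii))."""
    firing = (states == ON).astype(float)
    potentials = weights @ firing                 # summed input ('membrane potential')
    new_states = np.empty_like(states)
    new_states[states == ON] = REFRACTORY         # a neuron that fired becomes refractory
    new_states[states == REFRACTORY] = OFF        # refractory neurons recover
    quiescent = states == OFF
    new_states[quiescent] = np.where(potentials[quiescent] > threshold, ON, OFF)
    return new_states

for t in range(20):
    states = step(states, weights)
    print(t, int((states == ON).sum()), "neurons firing")
```

Because each neuron carries only a state label and a row of synaptic strengths, one update of the whole network reduces to a single matrix–vector product and a threshold comparison, which is what makes simulations with many such model neurons tractable.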
- Type: Chapter
- Information: Computer Simulation in Brain Science, pp. 148–163. Publisher: Cambridge University Press. Print publication year: 1988