Additional tests of Amit's attractor neural networks
Published online by Cambridge University Press: 04 February 2010
Abstract
Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false-positive responses in both modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is noted.
Type: Open Peer Commentary
Copyright: © Cambridge University Press 1995