Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- Introduction
- Inference and learning in latent Markov models
- Part I State space methods for neural data
- State space methods for MEG source reconstruction
- Autoregressive modeling of fMRI time series: state space approaches and the general linear model
- State space models and their spectral decomposition in dynamic causal modeling
- Estimating state and parameters in state space models of spike trains
- Bayesian inference for latent stepping and ramping models of spike train data
- Probabilistic approaches to uncover rat hippocampal population codes
- Neural decoding in motor cortex using state space models with hidden states
- State space modeling for analysis of behavior in learning experiments
- Part II State space methods for clinical data
- Index
- References
Estimating state and parameters in state space models of spike trains
from Part I - State space methods for neural data
Published online by Cambridge University Press: 05 October 2015
Summary
Introduction
State space models for neural population spike trains

Neural computations at all scales of evolutionary and behavioural complexity are carried out by recurrently connected networks of neurons that communicate with each other, with neurons elsewhere in the brain, and with muscles through the firing of action potentials or “spikes.” To understand how nervous tissue computes, it is therefore necessary to understand how the spiking of neurons is shaped both by inputs to the network and by the recurrent action of existing network activity.

Whereas most historical spike data were collected one neuron at a time, new techniques including silicon multielectrode array recording and scanning 2-photon, light-sheet or light-field fluorescence calcium imaging increasingly make it possible to record spikes from dozens, hundreds and potentially thousands of individual neurons simultaneously. These new data offer unprecedented empirical access to network computation, promising breakthroughs both in our understanding of neural coding and computation (Stevenson & Kording 2011), and in our ability to build prosthetic neural interfaces (Santhanam et al. 2006). Fulfillment of this promise will require powerful methods for data modeling and analysis, able to capture the structure of statistical dependence of network activity across neurons and time.
Probabilistic latent state space models (SSMs) are particularly well-suited to this task. Neural activity often appears stochastic, in that repeated trials under the same controlled experimental conditions can evoke quite different patterns of firing. Some part of this variation may reflect differences in the way the computation unfolds on each trial. Another part might reflect noisy creation and transmission of neural signals. Yet more may come from chaotic amplification of small perturbations. As computational signals are thought to be distributed across the population (in a “population code”), variation in the computation may be distinguished by its common impact on different neurons and the systematic evolution of these common effects in time.
An SSM is able to capture such structured variation through the evolution of its latent state trajectory. This latent state provides a summary description of all factors modulating neural activity that are not observed directly. These factors could include processes such as arousal, attention, cortical state (Harris & Thiele 2011) or behavioural states of the animal (Niell & Stryker 2010; Maimon 2011).
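The generative picture described above can be made concrete with a minimal simulation sketch. All parameters here are hypothetical illustrations, not taken from the chapter: a low-dimensional latent state evolves as a linear dynamical system, and each neuron emits Poisson spike counts whose rate depends on the shared latent trajectory, so that unobserved common factors induce correlated variability across the population.

```python
import numpy as np

# Minimal sketch of a latent state space model for population spike trains.
# All parameter values (A, Q, C, b) are hypothetical, chosen for illustration.
rng = np.random.default_rng(0)

T, d_latent, n_neurons = 200, 2, 30   # time bins, latent dimensions, neurons

# Latent dynamics: x_t = A x_{t-1} + noise (a slowly rotating, decaying state)
theta = 0.1
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
Q = 0.05 * np.eye(d_latent)           # state noise covariance

# Observation model: spike counts y_t ~ Poisson(exp(C x_t + b))
C = rng.normal(scale=0.5, size=(n_neurons, d_latent))  # loading of each neuron
b = np.full(n_neurons, np.log(0.1))   # baseline log firing rate per bin

x = np.zeros((T, d_latent))
y = np.zeros((T, n_neurons), dtype=int)
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(d_latent), Q)
    y[t] = rng.poisson(np.exp(C @ x[t] + b))

# Because every neuron's rate depends on the same latent trajectory x,
# trial-to-trial variation in x produces structured, shared variability in y.
print(x.shape, y.shape)
```

Inference in such a model then amounts to recovering the latent trajectory x (and parameters A, Q, C, b) from the observed spike counts y alone, which is the subject of this chapter.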
- Advanced State Space Methods for Neural and Clinical Data, pp. 137-159. Publisher: Cambridge University Press. Print publication year: 2015.