7 - Dynamic models
Summary
Markov chains (Chapter 5), and extensions of them (Chapter 6), use previous information from a multistate stochastic process to improve prediction of the present state. These state dependence models condition directly on the previous state(s) in the regression function, so that the other regression parameters can only be interpreted as conditional on those specific previous states of each given process.
In contrast, the underlying or marginal trend of any process is independent of the conditional dependence on the past of a specific observed process. This trend can always be obtained by summing or integrating at each time point, but it may not always be easy or practical to do so. In a Markov chain, the underlying trend is represented by the sequence of marginal distributions π_t (see Figure 5.2), obtained by matrix multiplication, a summation over states, at each time point. In other models, however, such a representation may not be as easy to obtain.
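As a minimal sketch (not taken from the book), the marginal distributions of a Markov chain can be propagated by repeated matrix multiplication; the two-state transition matrix P and initial distribution used here are purely illustrative.

```python
import numpy as np

# Illustrative two-state Markov chain: propagate the marginal
# distribution pi_t by matrix multiplication at each time point.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])        # row-stochastic transition matrix (assumed)
pi = np.array([1.0, 0.0])         # pi_0: start in state 1

for t in range(1, 6):
    pi = pi @ P                   # pi_t = pi_{t-1} P, a summation over states
    print(f"t={t}: pi_t = {pi}")
```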
Dynamic models provide one alternative to the above approaches. They use previous information in various ways that are not directly state dependent. One simple case is serial dependence, described in Section 1.2 and developed further in Section 7.1. There, dependence is on the previous residual, an unobservable (hidden) quantity, instead of on the previous state. Dynamic models generally allow a theoretical underlying profile of the process to be plotted more easily, although it may not necessarily be a marginal distribution. However, a prediction of the actual observed process, conditional on its previous history, will also be available.
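A rough illustration of serial dependence, not the book's own notation, is a Gaussian regression with AR(1) residuals: the conditional one-step prediction adds a multiple of the previous residual to the regression mean, while the underlying profile ignores that dependence. The names and simulated values below (beta, rho, the noise scales) are assumptions for the sketch.

```python
import numpy as np

# Simulate a regression with serially dependent (AR(1)) residuals.
rng = np.random.default_rng(0)
n, beta, rho = 100, 2.0, 0.6                 # illustrative parameter values
x = rng.normal(size=n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal(scale=0.5)   # previous residual carries over
y = beta * x + eps

# Underlying profile (marginal regression mean) versus the conditional
# prediction that adds rho times the previous residual.
underlying = beta * x
pred_cond = beta * x.copy()
pred_cond[1:] += rho * (y[:-1] - beta * x[:-1])
print(pred_cond[:5])
```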
In dynamic models, certain parameters in the distribution of responses are allowed to vary randomly over time.
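One standard example of a parameter varying randomly over time, offered here only as a hedged sketch and not as the chapter's own specification, is a local level model in which the mean follows a random walk; a Kalman filter then gives the conditional one-step predictions. All names and variances below are assumptions.

```python
import numpy as np

# Local level model: the mean mu_t is a random walk, the response y_t is
# mu_t plus observation noise.  A Kalman filter tracks mu_t over time.
rng = np.random.default_rng(1)
n, sigma_obs, sigma_level = 50, 1.0, 0.3                 # assumed variances
mu = np.cumsum(rng.normal(scale=sigma_level, size=n))    # randomly varying level
y = mu + rng.normal(scale=sigma_obs, size=n)             # observed responses

m, v = 0.0, 10.0          # vague initial state estimate and variance
pred = np.zeros(n)
for t in range(n):
    v_pred = v + sigma_level**2          # predict: level variance grows
    pred[t] = m                          # one-step-ahead prediction of the level
    k = v_pred / (v_pred + sigma_obs**2) # Kalman gain
    m = m + k * (y[t] - m)               # update with the new observation
    v = (1 - k) * v_pred
print(pred[-1], mu[-1])
```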
- Statistical Analysis of Stochastic Processes in Time, pp. 151–182. Cambridge University Press, 2004.