
Markov chain models, time series analysis and extreme value theory

Published online by Cambridge University Press: 01 July 2016

D. S. Poskitt*
Affiliation:
Australian National University
Shin-Ho Chung**
Affiliation:
Australian National University
* Postal address: Department of Statistics, Australian National University, Canberra ACT 0200, Australia.
** Postal address: Department of Chemistry, Australian National University, Canberra ACT 0200, Australia.

Abstract

Markov chain processes are becoming increasingly popular as a means of modelling various phenomena in different disciplines. For example, a new approach to the investigation of the electrical activity of molecular structures known as ion channels is to analyse raw digitized current recordings using Markov chain models. An outstanding question which arises with the application of such models is how to determine the number of states required for the Markov chain to characterize the observed process. In this paper we derive a realization theorem showing that observations on a finite state Markov chain embedded in continuous noise can be synthesized as values obtained from an autoregressive moving-average data generating mechanism. We then use this realization result to motivate the construction of a procedure for identifying the state dimension of the hidden Markov chain. The identification technique is based on a new approach to the estimation of the order of an autoregressive moving-average process. Conditions for the method to produce strongly consistent estimates of the state dimension are given. The asymptotic distribution of the statistic underlying the identification process is also presented and shown to yield critical values commensurate with the requirements for strong consistency.
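As an informal illustration of the idea (not the procedure developed in the paper), the following Python sketch simulates a two-state Markov chain buried in Gaussian noise and compares low-order ARMA fits by an information criterion. The transition matrix, state levels, noise level, and the use of statsmodels' ARIMA with BIC are all arbitrary choices made for illustration only.

```python
# Rough sketch: a hidden finite-state Markov chain observed in continuous
# noise, y_t = mu(X_t) + eps_t, and a crude ARMA-order comparison as a
# stand-in for choosing the hidden state dimension. All settings are
# illustrative assumptions, not the authors' identification procedure.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Two-state Markov chain: levels 0.0 and 1.0, transition matrix P (assumed).
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
levels = np.array([0.0, 1.0])

n = 2000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])

# Observations: the chain's level embedded in Gaussian noise.
y = levels[states] + rng.normal(scale=0.5, size=n)

# Fit ARMA(p, p) models of increasing order; the order minimising BIC gives
# a rough indication of the parsimonious ARMA structure induced by the chain.
for p in range(0, 4):
    fit = ARIMA(y, order=(p, 0, p)).fit()
    print(f"ARMA({p},{p})  BIC = {fit.bic:.1f}")
```

In this toy setting a low ARMA order typically suffices, consistent with the paper's theme that a hidden finite-state chain leaves a parsimonious ARMA signature in the observed series.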

Type
General Applied Probability

Copyright
© Applied Probability Trust 1996

Footnotes

This work was supported in part by a grant from the National Health & Medical Research Council of Australia.
