Two classification theorems of states of Markov chains
Published online by Cambridge University Press: 14 July 2016
Summary
Two theorems on Markov chains, both of which already appear in the literature, are proved from the viewpoint of ergodic theory: the classification of the states into the set of all non-recurrent (transient) states together with the recurrent classes, and the corresponding classification for idempotent Markov chains due to Doob.
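For a finite state space, the first classification can be computed directly: a state is recurrent exactly when every state reachable from it can reach it back, and the recurrent states partition into closed communicating classes. The sketch below illustrates this, assuming the chain is given as a row-stochastic nested list `P`; the function names `reachable` and `classify` are illustrative, not from the paper.

```python
def reachable(P, i):
    """Return the set of states reachable from i (including i) under P."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t in range(len(P)):
            if P[s][t] > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def classify(P):
    """Split states into transient states and recurrent (closed) classes."""
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    # i is recurrent iff every state reachable from i can reach i back
    recurrent = [i for i in range(n) if all(i in reach[j] for j in reach[i])]
    transient = [i for i in range(n) if i not in recurrent]
    # group the recurrent states into communicating classes
    classes, remaining = [], set(recurrent)
    while remaining:
        i = remaining.pop()
        cls = {j for j in reach[i] if i in reach[j]}
        remaining -= cls
        classes.append(sorted(cls))
    return transient, classes

# example: states 0 and 1 form a closed class; state 2 leaks into it
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4]]
transient, classes = classify(P)
```

On this example the chain has one recurrent class, {0, 1}, and one transient state, 2. The idempotent case treated by Doob (reference [3]) concerns chains with P² = P and is not covered by this finite-state sketch.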
- Type: Short Communications
- Copyright © Applied Probability Trust 1970
References
[1] Chacon, R. V. (1962) Identification of the limit of operator averages. J. Math. Mech. 11, 961–968.
[2] Chung, K. L. (1967) Markov Chains with Stationary Transition Probabilities. 2nd ed. Springer-Verlag, New York.
[3] Doob, J. L. (1942) Topics in the theory of Markoff chains. Trans. Amer. Math. Soc. 52, 37–64.
[4] Feldman, J. (1962) Subinvariant measures for Markoff operators. Duke Math. J. 29, 71–98.
[5] Feller, W. (1968) An Introduction to Probability Theory and Its Applications. Vol. 1. 3rd ed. Wiley, New York.
[6] Foguel, S. R. (1969) The Ergodic Theory of Markov Processes. Van Nostrand Reinhold, Cincinnati.
[7] Hopf, E. (1954) The general temporally discrete Markoff process. J. Rat. Mech. Anal. 3, 13–45.
[8] Kim, C. W. (1968) A generalization of Ito's theorem concerning the pointwise ergodic theorem. Ann. Math. Statist. 39, 2145–2148.
[9] Neveu, J. (1965) Mathematical Foundations of the Calculus of Probability. Holden-Day, San Francisco.