
Some distribution and moment formulae for the Markov renewal process

Published online by Cambridge University Press:  24 October 2008

A. M. Kshirsagar
Affiliation:
Southern Methodist University, Dallas, Texas 75222
R. Wysocki
Affiliation:
Southern Methodist University, Dallas, Texas 75222

Extract

1. Introduction. A Markov Renewal Process (MRP) with m (< ∞) states is one which records, at each time t, the number of times the system has visited each of the m states up to time t, when the system moves from state to state according to a Markov chain with transition probability matrix P0 = [pij], and the time required for each successive move is a random variable whose distribution function (d.f.) depends on the two states between which the move is made. Thus, if the system moves from state i to state j, the holding time in state i has Fij(x) as its d.f. (i, j = 1, 2, ..., m).
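The definition above can be illustrated with a short simulation sketch. The function below is not from the paper; it is a minimal illustration assuming we are given the embedded transition matrix P0 = [pij] and, for each pair (i, j), a sampler drawing holding times from Fij. It returns the visit counts N_j(t) that the MRP records up to time t.

```python
import random

def simulate_mrp(P, holding_time_samplers, start, t_max, rng=None):
    """Simulate one path of a Markov renewal process (illustrative sketch).

    P: m x m transition matrix [p_ij] of the embedded Markov chain.
    holding_time_samplers: holding_time_samplers[i][j]() draws a holding
        time from F_ij (hypothetical samplers; any distributions may be used).
    start: initial state.
    t_max: observation horizon t.
    Returns a list of visit counts N_j(t_max), j = 0, ..., m-1.
    """
    rng = rng or random.Random(0)
    m = len(P)
    counts = [0] * m
    state, t = start, 0.0
    while True:
        # Choose the next state j with probability p_ij ...
        nxt = rng.choices(range(m), weights=P[state])[0]
        # ... and hold in state i for a time drawn from F_ij.
        t += holding_time_samplers[state][nxt]()
        if t > t_max:
            break
        counts[nxt] += 1
        state = nxt
    return counts
```

For example, a two-state chain that alternates deterministically with unit holding times (so every Fij is degenerate at 1) visits the states at t = 1, 2, 3, ..., and the counts up to t = 5 are recovered exactly.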

Type
Research Article
Copyright
Copyright © Cambridge Philosophical Society 1970

