
Policy Improvement and the Newton–Raphson Algorithm for Renewal Reward Processes

Published online by Cambridge University Press:  27 July 2009

J. M. McNamara
Affiliation:
School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW

Abstract

We consider a renewal reward process in continuous time. The supremum average reward, γ*, for this process can be characterised as the unique root of a certain function. We show how one can apply the Newton–Raphson algorithm to obtain successive approximations to γ*, and show that the successive approximations so obtained coincide with those produced by the policy improvement technique.
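The abstract does not state the root-finding function explicitly. The following Python sketch (not from the paper; the policy data and names are hypothetical) illustrates the standard construction for a finite set of stationary policies, where each policy π has expected cycle reward R_π and expected cycle length T_π > 0, γ* is the unique root of h(γ) = max_π (R_π − γT_π), and each Newton–Raphson step on h coincides with a policy improvement step.

```python
# A minimal numerical sketch, assuming a finite set of stationary policies,
# each with a known expected cycle reward R and expected cycle length T > 0.
# gamma* = max_pi R[pi] / T[pi] is the unique root of
#     h(gamma) = max_pi (R[pi] - gamma * T[pi]).
# The policy names and numbers below are hypothetical.

policies = {
    "a": (3.0, 2.0),   # long-run rate 1.5
    "b": (5.0, 4.0),   # long-run rate 1.25
    "c": (4.5, 2.5),   # long-run rate 1.8  <- optimal
}

def h(gamma):
    """h(gamma) = max_pi (R - gamma * T); gamma* is its unique root."""
    return max(r - gamma * t for r, t in policies.values())

def newton_policy_improvement(gamma0=0.0, tol=1e-12, max_iter=50):
    """Newton-Raphson on h. Because h is piecewise linear with slope -T[pi]
    on the piece where policy pi attains the max, the Newton step
        gamma_{n+1} = gamma_n - h(gamma_n) / h'(gamma_n) = R[pi_n] / T[pi_n]
    is exactly one policy improvement step: pick the policy maximising
    R - gamma_n * T, then evaluate its long-run average reward."""
    gamma = gamma0
    for _ in range(max_iter):
        # Policy improvement: the maximiser defines the active linear piece of h.
        name, (r, t) = max(policies.items(),
                           key=lambda kv: kv[1][0] - gamma * kv[1][1])
        new_gamma = r / t          # Newton step on that piece
        if abs(new_gamma - gamma) < tol:
            return new_gamma, name
        gamma = new_gamma
    return gamma, name

if __name__ == "__main__":
    gamma_star, best = newton_policy_improvement()
    print(f"gamma* ~= {gamma_star:.6f}, attained by policy {best!r}")
```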

Copyright © Cambridge University Press 1989

