
A simple condition for regularity in negative programming

Published online by Cambridge University Press:  14 July 2016

P. Whittle
Affiliation: University of Cambridge
Postal address: Statistical Laboratory, University of Cambridge, 16 Mill Lane, Cambridge CB2 1SB, U.K.

Abstract

A simple condition (the ‘bridging condition’) is given for a Markov decision problem with non-negative costs to enjoy the regularity properties enunciated in Theorem 1. The bridging condition is sufficient for regularity, and is not far from being necessary, in a sense explained in Section 2. In Section 8 we consider the different classes of terminal loss functions (domains of attraction) associated with different solutions of (14). Some conjectures concerning these domains of attraction are either proved or disproved by counter-example.
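For orientation, and in notation that is ours rather than the paper's: negative programming concerns a Markov decision process with instantaneous costs c(x, u) ≥ 0, in which the infimal expected total cost F satisfies the optimality equation

\[
F(x) \;=\; \inf_{u}\Bigl\{\, c(x,u) + E\bigl[\,F(X_{1}) \mid X_{0}=x,\ u\,\bigr] \Bigr\}.
\]

Regularity, loosely speaking, asserts that value iteration started from a given terminal loss function converges to the appropriate solution of this equation; the bridging condition itself, and equation (14), are not reproduced in this abstract.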

Type: Research Papers
Copyright: © Applied Probability Trust 1979

