
COMPUTING AVERAGE OPTIMAL CONSTRAINED POLICIES IN STOCHASTIC DYNAMIC PROGRAMMING

Published online by Cambridge University Press: 07 February 2001

Linn I. Sennott
Affiliation:
Department of Mathematics, Illinois State University, Normal, Illinois 61790-4520, E-mail: [email protected]

Abstract

A stochastic dynamic program incurs two types of cost: a service cost and a quality of service (delay) cost. The objective is to minimize the expected average service cost, subject to a constraint on the average quality of service cost. When the state space S is finite, we show how to compute an optimal policy for the general constrained problem under weak conditions. The development uses a Lagrange multiplier approach and value iteration. When S is denumerably infinite, we give a method for computation of an optimal policy, using a sequence of approximating finite state problems. The method is illustrated with two computational examples.
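The abstract only outlines the approach, so here is a minimal sketch, in Python, of the Lagrangian idea it describes for a finite state space: fold the quality-of-service cost d into the service cost c with a multiplier lam, run average-cost (relative) value iteration on the combined cost, and adjust lam until the resulting policy's average d-cost meets the constraint level alpha. All names (P, c, d, alpha) and the bisection-on-lam scheme are illustrative assumptions, not the paper's algorithm.

import numpy as np

def relative_value_iteration(P, cost, iters=2000, tol=1e-9):
    # Average-cost relative value iteration for a finite MDP.
    # P[a] is the n x n transition matrix under action a; cost[x, a] is the
    # one-step cost of taking action a in state x.
    n_states, n_actions = cost.shape
    h = np.zeros(n_states)
    for _ in range(iters):
        Q = np.column_stack([cost[:, a] + P[a] @ h for a in range(n_actions)])
        h_new = Q.min(axis=1)
        h_new = h_new - h_new[0]           # normalize at a reference state
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return Q.argmin(axis=1)                # greedy (deterministic) policy

def average_cost(P, cost, policy):
    # Long-run average cost of a stationary policy (assumes a unichain).
    n = len(policy)
    Ppi = np.array([P[policy[x]][x] for x in range(n)])
    cpi = np.array([cost[x, policy[x]] for x in range(n)])
    # Stationary distribution: solve pi (Ppi - I) = 0 with sum(pi) = 1.
    A = np.vstack([Ppi.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi @ cpi)

def constrained_policy(P, c, d, alpha, lam_hi=100.0, steps=60):
    # Bisect on the multiplier lam: a policy that is average-optimal for the
    # combined cost c + lam*d and satisfies the constraint on the average
    # d-cost approximates the constrained optimum (hypothetical interface).
    lam_lo = 0.0
    for _ in range(steps):
        lam = 0.5 * (lam_lo + lam_hi)
        policy = relative_value_iteration(P, c + lam * d)
        if average_cost(P, d, policy) > alpha:
            lam_lo = lam                   # constraint violated: raise lam
        else:
            lam_hi = lam                   # constraint met: try a smaller lam
    return lam, policy

The bisection relies on the average d-cost being (weakly) decreasing in lam. Near the critical multiplier the constrained optimum may require mixing two deterministic policies, and handling that case rigorously (as well as the denumerable-state approximation) is what the paper's construction addresses; this sketch simply returns the last deterministic policy found.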

Type: Research Article
Copyright: © 2001 Cambridge University Press
