
Finite-time optimal control of a process leaving an interval

Published online by Cambridge University Press: 14 July 2016

Douglas W. McBeth*
Affiliation:
Iowa State University
Ananda P. N. Weerasinghe**
Affiliation:
Iowa State University
*Postal address: Department of Industrial and Manufacturing Systems Engineering, Iowa State University, Ames, IA 50011, USA.
**Postal address: Department of Mathematics, Iowa State University, Ames, IA 50011, USA.

Abstract

Consider the optimal control problem of leaving an interval (−a, a) in a limited playing time. In the discrete-time problem, a is a positive integer and the player's position is given by a simple random walk on the integers with initial position x. At each time instant the player chooses a coin from a control set, where the probability of the coin returning heads depends on the current position and the remaining playing time, and bets a unit value on the toss: heads returns +1 and tails −1. We discuss the optimal strategy for this discrete-time game. In the continuous-time problem the player chooses infinitesimal mean and infinitesimal variance parameters from a control set which may depend upon the player's position. The problem is to find optimal mean and variance parameters that maximize the probability of leaving the interval [−a, a] within a finite time T > 0.
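
For concreteness, the two optimization problems described above can be written out as follows. This is only a sketch reconstructed from the abstract; the symbols V_n, C(x, n), X, W, τ, μ and σ are our notation, not necessarily the authors'.

In the discrete-time game, with n tosses remaining and current position x, the best exit probability V_n(x) satisfies the dynamic-programming recursion
\[
V_n(x) \;=\; \max_{p \in C(x,n)} \bigl[\, p\,V_{n-1}(x+1) + (1-p)\,V_{n-1}(x-1) \,\bigr],
\qquad V_n(x) = 1 \ \text{for } |x| \ge a, \quad V_0(x) = 0 \ \text{for } |x| < a,
\]
where C(x, n) ⊆ [0, 1] is the set of head probabilities available at position x with n tosses left.

In the continuous-time problem the controlled state evolves as the diffusion
\[
dX_s \;=\; \mu(X_s)\,ds + \sigma(X_s)\,dW_s, \qquad X_0 = x \in (-a, a),
\]
with (μ(y), σ(y)) chosen from the control set at position y and W a standard Brownian motion, and the objective is to attain
\[
\sup_{(\mu,\sigma)} \mathbb{P}_x\bigl(\tau \le T\bigr), \qquad \tau \;=\; \inf\{\, s \ge 0 : |X_s| \ge a \,\},
\]
the supremum being taken over the admissible controls.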

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1996 

