Gradient approach for recursive estimation and control in finite Markov chains
Published online by Cambridge University Press: 01 July 2016
Abstract
The problem studied is that of controlling a finite Markov chain so as to maximize the long-run expected reward per unit time. The chain's transition probabilities depend on an unknown parameter taking values in a subset [a, b] of R^n. A control policy is defined as the probability of selecting a control action in each state of the chain. A Taylor-like expansion formula for the expected reward in terms of policy variations is derived. Based on this result, a recursive stochastic gradient algorithm is presented that adapts the control policy at consecutive times. The gradient depends on the estimated transition parameter, which is itself updated recursively using the gradient of the likelihood function. Convergence with probability 1 is proved for both the control and the estimation algorithms.
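The following is a minimal sketch of the kind of estimate-and-control loop the abstract describes: a randomised policy and a transition-parameter estimate are both updated recursively from observed transitions, the former by a stochastic gradient step on the reward and the latter by a gradient step on the log-likelihood. The two-state model, the logistic parametrisation of the transition probabilities, the score-function policy gradient, the simplex projection, and the step sizes below are illustrative assumptions, not the construction in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
theta_true = 0.7                       # unknown transition parameter (assumed)
reward = np.array([[1.0, 0.2],         # reward r(s, a)
                   [0.0, 0.5]])

def trans_prob(theta, s, a):
    """P(next state = 0 | s, a; theta): a simple logistic model (assumption)."""
    p0 = 1.0 / (1.0 + np.exp(-(theta * (a + 1) - s)))
    return np.array([p0, 1.0 - p0])

def dlog_trans(theta, s, a, s_next):
    """Gradient of log P(s_next | s, a; theta) w.r.t. theta, for the likelihood update."""
    p = trans_prob(theta, s, a)
    dp0 = p[0] * (1.0 - p[0]) * (a + 1)        # d p0 / d theta
    dp = np.array([dp0, -dp0])
    return dp[s_next] / max(p[s_next], 1e-8)

def project_simplex(v):
    """Keep the policy row a probability vector (crude clip-and-renormalise)."""
    v = np.clip(v, 1e-6, None)
    return v / v.sum()

policy = np.full((n_states, n_actions), 1.0 / n_actions)   # randomised policy u(a | s)
theta_hat = 0.0                                            # parameter estimate
s, avg_reward = 0, 0.0

for t in range(1, 20001):
    gamma = 1.0 / t                                        # decreasing step size
    a = rng.choice(n_actions, p=policy[s])
    s_next = rng.choice(n_states, p=trans_prob(theta_true, s, a))
    r = reward[s, a]

    # Recursive likelihood-gradient update of the parameter estimate,
    # truncated to an assumed known interval [-2, 2].
    theta_hat = np.clip(theta_hat + gamma * dlog_trans(theta_hat, s, a, s_next), -2.0, 2.0)

    # Stochastic gradient step on the policy: a crude score-function estimate of
    # the average-reward gradient, followed by projection back onto the simplex.
    avg_reward += gamma * (r - avg_reward)
    grad = np.zeros(n_actions)
    grad[a] = (r - avg_reward) / max(policy[s, a], 1e-6)
    policy[s] = project_simplex(policy[s] + gamma * grad)

    s = s_next

print("theta_hat =", theta_hat)
print("policy =", policy)
```

In this sketch the policy update plays the role of the paper's recursive gradient step based on the expansion of the expected reward, and the parameter update plays the role of its likelihood-gradient estimator; the specific gradient estimates used here are stand-ins.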
- Type: Research Article
- Copyright: © Applied Probability Trust 1981