
Stability of linear stochastic difference equations in strategically controlled random environments

Published online by Cambridge University Press:  01 July 2016

Ulrich Horst
Affiliation: Humboldt-Universität zu Berlin
Postal address: Institut für Mathematik, Bereich Stochastik, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany. Email address: [email protected]

Abstract

We consider the stochastic sequence $\{Y_t\}_{t \in \mathbb{N}}$ defined recursively by the linear relation $Y_{t+1} = A_t Y_t + B_t$ in a random environment. The environment is described by the stochastic process $\{(A_t, B_t)\}_{t \in \mathbb{N}}$ and is under the simultaneous control of several agents playing a discounted stochastic game. We formulate sufficient conditions on the game that ensure the existence of Nash equilibria in Markov strategies with the additional property that, in equilibrium, the process $\{Y_t\}_{t \in \mathbb{N}}$ converges in distribution to a stationary regime.
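
The recursion in the abstract can be illustrated numerically. The following is a minimal sketch, not the paper's model: it replaces the strategically controlled environment $\{(A_t, B_t)\}_{t \in \mathbb{N}}$ with i.i.d. draws satisfying the classical contraction-on-average condition $E[\log A_t] < 0$, under which the recursion is known to converge in distribution to a stationary regime. The function name and the particular coefficient distributions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_affine_recursion(T=10_000, n_paths=5_000):
    """Simulate Y_{t+1} = A_t Y_t + B_t with i.i.d. coefficients.

    Simplified i.i.d. stand-in for the controlled environment {(A_t, B_t)}
    of the paper; here A_t is uniform on (0, 0.9), so E[log A_t] < 0 and
    the law of Y_T approximates the stationary law for large T.
    """
    A = rng.uniform(0.0, 0.9, size=(T, n_paths))   # multiplicative part
    B = rng.normal(0.0, 1.0, size=(T, n_paths))    # additive part
    Y = np.zeros(n_paths)                          # initial condition Y_0 = 0
    for t in range(T):
        Y = A[t] * Y + B[t]
    return Y

if __name__ == "__main__":
    Y_T = simulate_affine_recursion()
    # Rough summary of the (approximately stationary) distribution of Y_T.
    print("mean:", Y_T.mean(), "std:", Y_T.std())
```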

Type: General Applied Probability
Copyright © Applied Probability Trust 2003

