STOCHASTIC GRADIENT LEARNING AND INSTABILITY: AN EXAMPLE
Published online by Cambridge University Press: 28 January 2016
Abstract
In this paper, we investigate the real-time behavior of constant-gain stochastic gradient (SG) learning, using the Phelps model of monetary policy as a testing ground. We find that whereas the self-confirming equilibrium is stable under the mean dynamics in a very large region, real-time learning diverges for all but the very smallest gain values. We employ a stochastic Lyapunov function approach to demonstrate that the SG mean dynamics is easily destabilized by the noise associated with real-time learning, because its Jacobian contains stable but very small eigenvalues. We therefore caution against using perpetual-learning algorithms in settings with such small eigenvalues, since the real-time dynamics may diverge from an equilibrium that is stable under the mean dynamics.
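The mechanism described in the abstract can be illustrated with a minimal sketch (not the paper's Phelps-model setup): a scalar constant-gain stochastic approximation recursion whose mean dynamics are stable for any eigenvalue `lam > 0`. For small `gain * lam`, the stationary variance of the iterates is roughly `gain * sigma**2 / (2 * lam)`, so a very small stable eigenvalue lets real-time noise push the iterates far from the equilibrium unless the gain is made correspondingly tiny. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_variance(lam, gain, sigma=1.0, steps=500_000, seed=0):
    """Run the constant-gain recursion
        theta_{t+1} = theta_t + gain * (-lam * theta_t + sigma * eps_t)
    whose mean dynamics  d(theta)/dt = -lam * theta  are stable for lam > 0,
    and return the empirical variance of theta after a burn-in period."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal(steps)
    theta = 0.0
    burn = steps // 10  # discard transient so the estimate is near-stationary
    samples = []
    for t in range(steps):
        theta += gain * (-lam * theta + noise[t])
        if t >= burn:
            samples.append(theta)
    return np.var(samples)

if __name__ == "__main__":
    gain = 0.01
    # Tiny stable eigenvalue: mean dynamics stable, but iterates wander widely.
    v_small = simulate_variance(lam=0.01, gain=gain)
    # Comfortably stable eigenvalue: iterates stay near the equilibrium.
    v_large = simulate_variance(lam=1.0, gain=gain)
    print(f"variance with lam=0.01: {v_small:.3f} (theory ~ {gain / (2 * 0.01):.3f})")
    print(f"variance with lam=1.00: {v_large:.4f} (theory ~ {gain / 2:.4f})")
```

The ratio of the two variances scales like the ratio of the eigenvalues, which is why keeping real-time iterates near the equilibrium requires gains that shrink in proportion to the smallest stable eigenvalue.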
Copyright © Cambridge University Press 2016