Globally convergent stochastic optimization with optimal asymptotic distribution
Published online by Cambridge University Press: 14 July 2016
Abstract
A stochastic gradient descent method is combined with a consistent auxiliary estimate to achieve global convergence of the recursion. Using step lengths that converge to zero more slowly than 1/n and averaging the trajectories yields the optimal convergence rate of 1/√n and the optimal variance of the asymptotic distribution. Possible applications include maximum likelihood estimation, regression analysis, training of artificial neural networks, and stochastic optimization.
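The averaging scheme described in the abstract can be sketched as follows. This is a minimal illustration of a Robbins–Monro recursion with Polyak–Ruppert trajectory averaging, not the paper's full method (in particular, the consistent auxiliary estimate that ensures global convergence is omitted); the step-length exponent, constant, and toy objective are assumptions for the example.

```python
import random

def averaged_sgd(grad, x0, n_steps, gamma=0.6, a=1.0):
    """Stochastic gradient recursion with trajectory averaging.

    Step lengths a / k**gamma with 0.5 < gamma < 1 decay to zero
    more slowly than 1/k; averaging the iterates then recovers the
    optimal 1/sqrt(n) convergence rate described in the abstract.
    """
    x = x0
    avg = 0.0
    for k in range(1, n_steps + 1):
        step = a / k**gamma          # slower than 1/k
        x -= step * grad(x)          # noisy gradient step
        avg += (x - avg) / k         # running mean of the trajectory
    return avg

# Toy problem (hypothetical): minimize E[(x - Z)^2]/2 with Z ~ N(1, 1);
# the minimizer is E[Z] = 1, and a noisy gradient sample is x - Z.
random.seed(0)
noisy_grad = lambda x: x - random.gauss(1.0, 1.0)
est = averaged_sgd(noisy_grad, x0=0.0, n_steps=20000)
print(est)  # close to the true minimizer 1.0
```

The averaged iterate, rather than the last iterate, is returned: with step lengths decaying slower than 1/n the raw trajectory fluctuates too much, and it is the averaging that delivers the optimal asymptotic variance.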
Type: Research Papers
Copyright © Applied Probability Trust 1998