
SMOOTHING-BASED INITIALIZATION FOR LEARNING-TO-FORECAST ALGORITHMS

Published online by Cambridge University Press:  23 June 2017

Michele Berardi (University of Manchester)
Jaqueson K. Galimberti* (ETH Zurich)

*Address correspondence to: Jaqueson K. Galimberti, KOF Swiss Economic Institute, ETH Zurich, LEE G 116, Leonhardstrasse 21, 8092 Zurich, Switzerland; e-mail: [email protected].

Abstract

Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of agents' perceived law of motion. Obtaining appropriate initial estimates can be prohibitive under the data availability restrictions typical of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial estimates consistent with the statistical properties of the learning algorithm. Our method is formulated generically so as to cover different specifications of the learning mechanism, such as the least-squares and stochastic gradient algorithms. Using simulations, we show that our method speeds up the convergence of initial estimates at the expense of a higher computational cost.
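To fix ideas, the sketch below (in Python; the simulated data, sample sizes, and variable names are our own illustrative assumptions, not the paper's code) shows a standard decreasing-gain recursive least-squares learning update for a linear perceived law of motion, initialized from a plain ordinary-least-squares fit on a training sample. The paper's contribution is to replace such a naive initialization with a smoothing-based routine; the sketch only illustrates the objects involved and the role the initial estimates play.

```python
# Minimal sketch (assumed, not the authors' routine): decreasing-gain recursive
# least-squares (RLS) learning for a linear perceived law of motion
#     y_t = phi' x_t + e_t,
# initialized from an OLS fit on a training sample.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: y_t = 1.0 + 0.5 * x_t + noise.
T, T_train = 500, 50
x = np.column_stack([np.ones(T), rng.normal(size=T)])  # regressors: constant and x_t
true_phi = np.array([1.0, 0.5])
y = x @ true_phi + 0.1 * rng.normal(size=T)

# Initialization from the training sample (here: plain ordinary least squares).
X0, y0 = x[:T_train], y[:T_train]
R = (X0.T @ X0) / T_train                      # initial second-moment matrix
phi = np.linalg.solve(X0.T @ X0, X0.T @ y0)    # initial belief coefficients

# Recursive least-squares learning updates over the remaining observations.
for t in range(T_train, T):
    xt, yt = x[t], y[t]
    gain = 1.0 / (t + 1)                       # decreasing gain
    R = R + gain * (np.outer(xt, xt) - R)
    phi = phi + gain * np.linalg.solve(R, xt * (yt - xt @ phi))

print("final beliefs:", phi)                   # approaches true_phi as T grows
```

With the decreasing gain 1/t the recursion reproduces recursive ordinary least squares; replacing it with a small constant gain gives the constant-gain (perpetual-learning) variant often used in applied work. In either case the quality of the initial R and phi drawn from the training sample shapes the early transient of the beliefs, which is the problem the smoothing-based routine addresses.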

Type: Articles
Copyright: © Cambridge University Press 2017


Footnotes

An earlier version of this paper was presented at the 2013 Computing in Economics and Finance conference in Vancouver. We thank our discussants for helpful comments. We also gratefully acknowledge the comments provided by one Associate Editor and two referees. Finally, we thank the Editor, Professor William A. Barnett, for his prompt and responsive handling of our submission. Any remaining errors are ours.
