Since the paper by David Hendry and me, which is at the center of this discussion, was written by us separately, I think it is worth recording how this paper started. David was attending the Allied Social Sciences Association meeting in San Diego in January 2004 and stayed with my wife and me. In preparing a talk I realized that over my career I had been fortunate enough to have about eighty coauthors, but I found that the list did not include David. As he had just discussed the use of PcGets as a research tool, I proposed a paper exploring the limits of such a tool. The paper was soon born, and, after the almost obligatory journal rejection, it was picked up by Peter Phillips and this special issue resulted.
From past experience, I find that when asked to discuss the future of some field or topic, many writers seem to find that the future just happens to be exactly what they are working on now, whether it is generalized autoregressive conditional heteroskedasticity (GARCH), asymptotics, or causality, for example. This reaction is not surprising: if we could foresee an interesting topic in the future, we would be working on it now, unless data or computing constraints prevented it.
The process being discussed in our paper is how to go from data to one model (or several models) that provides a satisfactory approximation to the actual economy, so that helpful policy recommendations and forecasts can be made. The outcome of the modeling process has to be useful to decision makers in that their expected utility is improved by using the model. What is obvious is that there are many possible model specifications: linear, dynamic, or nonlinear; multivariate; and with many explanatory variables. The question being considered is how to sort through the various alternatives to find a few superior models, or just a single one.
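To make the sorting problem concrete, here is a minimal, purely illustrative sketch on synthetic data: a handful of candidate specifications (static versus dynamic, with and without an irrelevant regressor) are each fitted by least squares and ranked by the Schwarz (BIC) criterion. The function fit_bic and the candidate set are my own hypothetical constructions for illustration; they are not the algorithm used by PcGets or RETINA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on x1 and its own lag; x2 is irrelevant.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = np.empty(n)
y[0] = rng.normal()
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 1.0 * x1[t] + rng.normal()

def fit_bic(y, X):
    """OLS fit; return BIC = n*log(RSS/n) + k*log(n) (Gaussian errors)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    nobs, k = X.shape
    return nobs * np.log(rss / nobs) + k * np.log(nobs)

# Candidate specifications, all aligned on observations 1..n-1.
ones = np.ones(n - 1)
lag = y[:-1]
candidates = {
    "x1 only":          np.column_stack([ones, x1[1:]]),
    "x1 + x2":          np.column_stack([ones, x1[1:], x2[1:]]),
    "lag(y) + x1":      np.column_stack([ones, lag, x1[1:]]),
    "lag(y) + x1 + x2": np.column_stack([ones, lag, x1[1:], x2[1:]]),
}
scores = {name: fit_bic(y[1:], X) for name, X in candidates.items()}
best = min(scores, key=scores.get)
print(f"selected specification: {best}")
```

Real search spaces are of course vastly larger than four candidates, which is precisely why automated procedures are attractive.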
My personal preference is to have several high-quality methods available, such as PcGets and RETINA, and then to compare, and possibly combine, their outputs.
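As one illustration of combining outputs, the following sketch averages competing point forecasts, either with equal weights or with inverse-MSE weights estimated from past forecast errors. The helper combine_forecasts is an assumed name introduced for this example; it is not part of either package.

```python
import numpy as np

def combine_forecasts(forecasts, errors=None):
    """Combine competing point forecasts.

    Uses equal weights, or inverse-MSE weights if a history of past
    forecast errors is supplied (one column per model).
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if errors is None:
        m = forecasts.shape[-1]
        weights = np.full(m, 1.0 / m)
    else:
        mse = np.mean(np.asarray(errors, dtype=float) ** 2, axis=0)
        weights = (1.0 / mse) / np.sum(1.0 / mse)
    return forecasts @ weights

# Hypothetical forecasts from two selection methods, plus recent errors.
f = [2.1, 1.8]
e = [[0.3, -0.6], [-0.2, 0.5], [0.1, -0.4]]
print(combine_forecasts(f))      # equal-weight average
print(combine_forecasts(f, e))   # inverse-MSE weighted average
```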
Papers by Phillips, and by Pesaran and Timmermann, discuss further aspects of automatic model selection procedures, with the Pesaran and Timmermann paper also raising the possibility that the outcomes of the models may themselves influence the relevant economic variables. This possibility makes evaluation particularly difficult and has so far received little attention.