Forecasts play a central role in the development of energy infrastructure. Since building energy infrastructure is slow and costly, energy system planners try to anticipate future demand to avoid both shortages and overcapacity. But energy demand forecasts are not neutral: they represent a certain vision of the future that forecasters hope to bring into being. This article uses a historical case study to open the black box of forecasting and the world it contains. It studies electricity demand forecasts made by Hydro-Québec, one of the biggest industrial firms in North America, from the 1960s to the 1980s. Based on linear extrapolation models forecasting exponential demand and endless growth, the state-owned firm embarked on huge hydroelectric megaprojects with deep consequences for the environment and Indigenous lands. The energy crisis of the 1970s, by disturbing energy systems, led the provincial government and civil society to criticize Hydro-Québec's bullish forecasts, which justified its expansionist agenda. This uncertain context favored other methods of predicting the future, such as scenario analysis, and brought scrutiny to the hydroelectric powerhouse's business. At the crossroads of business history, energy history, and science and technology studies, the article argues that energy forecasts are used by actors such as energy suppliers and governments to produce and project power relations onto the future. They become performative when powerful interests coalesce around their vision of the future to implement it.
Scoring rules measure the deviation between a forecast, which assigns degrees of confidence to various events, and reality. Strictly proper scoring rules have the property that for any forecast, the mathematical expectation of the score of a forecast p by the lights of p is strictly better than the mathematical expectation of any other forecast q by the lights of p. Forecasts need not satisfy the axioms of the probability calculus, but Predd et al. [9] have shown that given a finite sample space and any strictly proper additive and continuous scoring rule, the score for any forecast that does not satisfy the axioms of probability is strictly dominated by the score for some probabilistically consistent forecast. Recently, this result has been extended to non-additive continuous scoring rules. In this paper, a condition weaker than continuity is given that suffices for the result, and the condition is proved to be optimal.
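The dominance phenomenon the abstract describes can be illustrated with the Brier score, a standard strictly proper additive scoring rule. The sketch below is illustrative only and not the paper's construction: it takes a two-cell partition {A, not-A}, an incoherent forecast whose credences sum to more than 1, and its Euclidean projection onto the probability simplex, and checks that the coherent projection gets a strictly lower (better) penalty whichever event occurs.

```python
def brier(forecast, outcome):
    """Brier penalty for a credence vector over a partition; lower is better.

    outcome is the index of the cell that actually occurs.
    """
    indicator = [1.0 if i == outcome else 0.0 for i in range(len(forecast))]
    return sum((f - o) ** 2 for f, o in zip(forecast, indicator))

incoherent = (0.6, 0.6)  # credences in A and not-A sum to 1.2: probabilistically inconsistent
coherent = (0.5, 0.5)    # Euclidean projection of (0.6, 0.6) onto the probability simplex

# The coherent forecast strictly dominates: it scores better in every state of the world.
for outcome in (0, 1):
    assert brier(coherent, outcome) < brier(incoherent, outcome)
```

Here the dominating forecast happens to be the Euclidean projection; Predd et al.'s result guarantees only that *some* coherent forecast dominates, with the dominating point depending on the scoring rule.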
Stastny and Lehner (2018) compared the accuracy of forecasts in an intelligence community prediction market to comparable forecasts in analysis reports prepared by groups of professional intelligence analysts. To obtain quantitative probabilities from the analysis reports, experienced analysts were asked to read the reports and state what probability they thought the reports implied for each forecast question. These were called imputed probabilities. Stastny and Lehner found that the prediction market was more accurate than the imputed probabilities and concluded that this was evidence that the prediction market was more accurate than the analysis reports. In a commentary, Mandel (2019) took exception to this interpretation. In a re-analysis of the data, Mandel found a very strong correlation between readers' personal and imputed probabilities. From this Mandel built a case that the imputed probabilities are little more than a reflection of the readers' personal views; that they do not fairly reflect the contents of the analysis reports; and that, therefore, any accuracy results are spurious. This paper argues two points. First, the high correlation between imputed and personal probabilities was not evidence of substantial imputation bias. Rather, it was the natural by-product of the fact that the imputed and personal probabilities were both forecasts of the same events. An additional analysis shows a much lower level of imputation bias that is consistent with the original results and interpretation. Second, the focus of Stastny and Lehner (2018) was on the reports as understood by readers. In this context, even if there was substantial imputation bias it would not invalidate the accuracy results; it would instead provide a possible causal explanation of those results.
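The "natural by-product" point can be made with a toy simulation. Under the assumption (mine, for illustration; the parameters are arbitrary) that personal and imputed probabilities are just two independent noisy readings of the same underlying event probabilities, with no imputation bias at all, a strong correlation between them still emerges:

```python
import random

random.seed(42)

def clamp(p):
    # Keep simulated probabilities inside (0, 1)
    return min(max(p, 0.01), 0.99)

# Hypothetical true probabilities for 200 forecast questions
truth = [random.uniform(0.05, 0.95) for _ in range(200)]

# Personal and imputed judgments: independent noisy readings of the same events,
# i.e. zero imputation bias by construction
personal = [clamp(t + random.gauss(0, 0.1)) for t in truth]
imputed = [clamp(t + random.gauss(0, 0.1)) for t in truth]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(personal, imputed)  # high, despite zero bias by construction
```

Because both series track the same signal, the correlation is high even though neither judgment was influenced by the other, which is the structure of the argument against inferring imputation bias from correlation alone.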
Recent research suggests that communicating probabilities numerically rather than verbally benefits forecasters' credibility. In two experiments, we tested the reproducibility of this communication-format effect. The effect was replicated under comparable conditions (low-probability, inaccurate forecasts), but it was reversed for low-probability accurate forecasts and eliminated for high-probability forecasts. Experiment 2 further showed that verbal probabilities convey implicit recommendations more clearly than probability information, whereas numeric probabilities convey probability information more clearly than recommendations. Descriptively, the findings indicate that the effect of probability words versus numbers on credibility depends on how these formats convey directionality differently, how directionality implies recommendations even when none are explicitly given, and how such recommendations correspond with outcomes. Prescriptively, we propose that experts distinguish forecasts from advice, using numeric probabilities for the former and well-reasoned arguments for the latter.
The present research examines the prevalence of predictions in daily life. Specifically, we examine whether spending predictions for specific purchases occur spontaneously outside of a laboratory setting. Across community and student samples, and across overall self-reports and diary reports, three studies suggest that people make spending predictions for about two-thirds of purchases in everyday life. In addition, we examine factors that increase the likelihood of spending predictions: the size of the purchase, payment form, time pressure, personality variables, and purchase decisions. Spending predictions were more likely for larger, more exceptional purchases, and for item and project predictions rather than time periods.
We present a hierarchical Dirichlet regression model with Gaussian process priors that enables accurate and well-calibrated forecasts for U.S. Senate elections at varying time horizons. This Bayesian model provides a balance between predictions based on time-dependent opinion polls and those based on fundamentals. It also provides uncertainty estimates that arise naturally from historical data on elections and polls. Experiments show that our model is highly accurate and has a well-calibrated coverage rate for vote share predictions at various forecasting horizons. We validate the model with a retrospective forecast of the 2018 cycle as well as a true out-of-sample forecast for 2020. We show that our approach achieves state-of-the-art accuracy and coverage despite relying on few covariates.
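A "well-calibrated coverage rate" means that, say, a 90% predictive interval should contain the realized vote share about 90% of the time. The sketch below is not the paper's model; it is a generic, self-contained illustration of how such a coverage check works, using simulated intervals whose noise scale is chosen (an assumption of mine) so that roughly 90% of outcomes land inside.

```python
import random

random.seed(0)

def empirical_coverage(intervals, outcomes):
    """Fraction of realized outcomes falling inside their predictive interval."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, outcomes))
    return hits / len(outcomes)

# Hypothetical point forecasts of two-party vote share for 1000 races
forecasts = [random.uniform(0.4, 0.6) for _ in range(1000)]

# Nominal 90% intervals of half-width 0.05 around each point forecast
intervals = [(f - 0.05, f + 0.05) for f in forecasts]

# Outcomes simulated with Gaussian noise scaled so P(|error| < 0.05) is about 0.90
# (0.05 / 1.645 because 1.645 is the two-sided 90% quantile of the standard normal)
outcomes = [f + random.gauss(0, 0.05 / 1.645) for f in forecasts]

coverage = empirical_coverage(intervals, outcomes)  # close to the nominal 0.90
```

If the empirical coverage were far below the nominal level, the intervals would be overconfident; far above, underconfident. Retrospective checks of this kind are how calibration claims like the one in the abstract are typically validated.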
We present a new methodology that uses professional forecasts to estimate the effects of fiscal policy. We use short-term forecasts to better identify exogenous shocks to government spending by controlling for anticipatory information already in the public domain. We use longer-term forecasts to net out expectations from the future path of other variables, which improves accuracy and efficiency by focusing on more precise measures of the impact of shocks. We show that this improves the statistical fit relative to both local projection methods and vector autoregression-based analyses that do not control for the entire future path of expectations.
We provide a new way of deriving a number of dynamic unobserved factors from a set of variables. We show how standard principal components may be expressed in state space form and estimated using the Kalman filter. To illustrate our procedure, we perform two exercises. First, we use it to estimate a measure of the current account imbalances among northern and southern euro area countries that developed during the period leading up to the outbreak of the euro area crisis, before looking at adjustment in the post-crisis period. Second, we show how these dynamic factors can improve forecasting of the euro exchange rate.
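The abstract's state-space estimation relies on the Kalman filter recursion. The following is a minimal sketch of that recursion for the simplest case, a one-dimensional local-level model; it is not the authors' multivariate principal-components formulation, just the generic predict/update cycle their procedure builds on, with noise variances chosen arbitrarily as defaults.

```python
def kalman_filter(observations, q=0.1, r=1.0):
    """Local-level Kalman filter: state x_t = x_{t-1} + w_t, observation y_t = x_t + v_t.

    q is the variance of the state noise w_t, r the variance of the
    measurement noise v_t. Returns the sequence of filtered state estimates.
    """
    x, p = observations[0], 1.0  # initial state estimate and its variance
    filtered = []
    for y in observations:
        # Predict: the state follows a random walk, so only its variance grows
        p = p + q
        # Update: blend prediction and observation via the Kalman gain
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1 - k) * p
        filtered.append(x)
    return filtered
```

Because each update is a convex combination of the previous estimate and the new observation, the filter tracks the common low-frequency movement in a noisy series, which is the mechanism that lets principal components be re-estimated recursively in state-space form.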
Naive and adaptive schemes have been used as proxies for price expectations in previous studies of supply response. Those studies employ various combinations of futures, support, and lagged prices as alternative formulations for price expectations. This study uses a conditional expected price which combines both market and support prices into one price expectations measure. It defines the total effect of available information on supply response. The results indicate the potential usefulness of formulating expected prices as conditional price expectations in supply response analysis, with support prices forming the conditioning set. Under the provisions of the 1985 Farm Bill, significant reductions in corn and soybean acreages are in prospect for 1987-90.
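One simple way a support price can enter a conditional price expectation: if the support acts as a floor, the producer's expected price is E[max(support, P)]. The sketch below is an illustration of that idea under an assumed normal market-price distribution, using the standard truncated-normal identity; it is not necessarily the exact formulation used in the study.

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_price_with_support(mu, sigma, support):
    """E[max(support, P)] for market price P ~ N(mu, sigma^2).

    With the floor at s and a = (s - mu) / sigma:
        E[max(s, P)] = s * Phi(a) + mu * (1 - Phi(a)) + sigma * phi(a)
    i.e. the support price weighted by the probability the market falls
    below it, plus the expected market price on the upper tail.
    """
    a = (support - mu) / sigma
    return support * norm_cdf(a) + mu * (1 - norm_cdf(a)) + sigma * norm_pdf(a)
```

When the support is far below the mean the result approaches the unconditional mean, and when it is far above it approaches the support itself; in between, the conditional expectation always exceeds both, which is how a support program can raise expected prices and hence planted acreage.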
We investigate how central bank forecasts of GDP growth evolve through time, and how they are adapted in the light of official estimates of actual GDP growth. Using data for 1988–2005, we find that the Federal Open Market Committee (FOMC) has typically adjusted its forecast for growth over the coming four quarters by about a third of the unexpected component of estimated growth in the four quarters most recently ended. We were unable to find any clear signs of systematic errors in the FOMC's forecasts. UK data for 1998–2005 suggest that the Bank of England Monetary Policy Committee (MPC) did not adjust its forecasts in this way, and that there were systematic forecast errors, but the evidence from the latter part of the period 2001–5 tentatively shows a behaviour pattern closer to that of the FOMC, with no clear signs of systematic errors.
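The FOMC behavior described above is a partial-adjustment rule: revise the coming-year growth forecast by a fraction (about one third) of the surprise in the most recently estimated four quarters of growth. A minimal sketch, with illustrative numbers of my own choosing:

```python
def revised_forecast(prior_forecast, expected_recent_growth,
                     estimated_recent_growth, adjustment=1 / 3):
    """Partial adjustment of a four-quarter-ahead growth forecast.

    The revision equals `adjustment` times the unexpected component of
    growth in the four quarters just ended, as the paper estimates for
    the FOMC (adjustment of roughly one third).
    """
    surprise = estimated_recent_growth - expected_recent_growth
    return prior_forecast + adjustment * surprise

# Illustrative numbers: growth was expected to be 2.0% but came in at 3.2%,
# so a prior 2.5% forecast is revised up by a third of the 1.2-point surprise.
new_forecast = revised_forecast(2.5, 2.0, 3.2)
```

An adjustment coefficient of zero would mean forecasts ignore recent surprises entirely (closer to the MPC pattern the paper finds for 1998–2005), while a coefficient of one would mean fully extrapolating the latest surprise forward.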