Anticipating future migration trends is instrumental to the development of effective policies to manage the challenges and opportunities that arise from population movements. However, anticipation is challenging. Migration is a complex system with multifaceted drivers, such as demographic structure, economic disparities, political instability, and climate change. Measurements carry inherent uncertainties, and most migration theories are either under-specified or difficult to operationalise. Moreover, forecasting approaches generally target specific migration flows, which poses challenges for generalisation.
In this paper, we present the results of a case study forecasting Irregular Border Crossings (IBCs) through the Central Mediterranean Route and Asylum requests in Italy. We applied a set of Machine Learning techniques to a suite of traditional data sources to forecast migration flows, and then used an ensemble modelling approach to aggregate the results of the different Machine Learning models and improve predictive capacity.
Our results show the potential of this modelling architecture to produce forecasts of IBCs and Asylum requests over a 6-month horizon. On a validation set, our models explain up to 80% of the variance. This study offers a robust basis for the construction of timely forecasts. In the discussion, we comment on how this approach could benefit migration management in the European Union at various levels of policy making.
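The abstract does not specify the aggregation rule; below is a minimal Python sketch of one common choice, inverse-validation-error weighting, assuming fitted models that expose a scikit-learn-style predict method. The names and data handling are illustrative only, not the authors' pipeline.

```python
import numpy as np

def ensemble_forecast(models, X_val, y_val, X_future):
    """Combine forecasts from several fitted models, weighting each by
    its inverse mean absolute error on a held-out validation set."""
    errors = np.array([np.mean(np.abs(m.predict(X_val) - y_val)) for m in models])
    weights = (1.0 / errors) / np.sum(1.0 / errors)   # normalise to sum to 1
    forecasts = np.array([m.predict(X_future) for m in models])
    return weights @ forecasts                        # weighted average forecast
```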
This final chapter demonstrates how the catastrophe (CAT) models described in previous chapters can be used as inputs for CAT risk management. CAT model outputs are risk metrics, such as the average annual loss, exceedance probability curves, and values at risk (as defined in Chapter 3), which can be translated into actionable strategies. Practical applications include risk transfer via insurance and CAT bonds, as well as risk reduction, which consists of reducing exposure, hazard, or vulnerability. The forecasting of perils (such as tropical cyclones and earthquakes) is explored, as are strategies for decision-making under uncertainty. The overarching concept of risk governance, which includes risk assessment, management, and communication between various stakeholders, is illustrated with the case study of seismic risk at geothermal plants. This scenario exemplifies how CAT modelling is central to the trade-off between energy security and public safety, and how large uncertainties impact risk perceptions and decisions.
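As a minimal illustration of the metrics named above, here is a generic sketch (not the book's code) assuming a Monte Carlo sample of simulated annual losses:

```python
import numpy as np

def cat_risk_metrics(annual_losses, var_level=0.99):
    """Average annual loss, empirical exceedance probability curve, and
    value at risk from a Monte Carlo sample of simulated annual losses."""
    losses = np.sort(np.asarray(annual_losses, dtype=float))[::-1]  # descending
    aal = losses.mean()                                  # average annual loss
    # Fraction of simulated years in which each loss level is exceeded
    ep = np.arange(1, len(losses) + 1) / len(losses)
    var = np.quantile(losses, var_level)                 # value at risk
    return aal, np.column_stack([losses, ep]), var
```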
Forecasts play a central role in decision-making under uncertainty. After a brief review of the general issues, this article considers ways of using high-dimensional data in forecasting. We consider selecting variables from a known active set (known knowns) using Lasso and One Covariate at a time Multiple Testing (OCMT), and approximating unobserved latent factors (known unknowns) by various means. This combines both sparse and dense approaches to forecasting. We demonstrate the various issues involved in variable selection in a high-dimensional setting with an application to forecasting UK inflation at different horizons over the period 2020q1–2023q1. This application shows both the power of parsimonious models and the importance of allowing for global variables.
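A minimal sketch of the sparse (Lasso) selection step on synthetic data; the OCMT step and the dense factor approximation are omitted, and all names and data here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, k = 120, 50                          # sample size, candidate predictors
X = rng.normal(size=(T, k))
y = 0.8 * X[:, 0] + rng.normal(scale=0.5, size=T)   # sparse truth: one active variable

lasso = LassoCV(cv=5).fit(X, y)         # cross-validated penalty choice
selected = np.flatnonzero(lasso.coef_)  # indices of retained variables
print("selected columns:", selected)
```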
The Conclusion chapter reiterates the book’s approach, focus and main points. It reminds the reader that the book has concentrated on local, provincial, peripatetic and otherwise relatively marginal sites of scientific activity and shown how a wide variety of spaces were constituted and reconfigured as meteorological observatories. The conclusion reiterates the point that nineteenth-century meteorological observatories, and indeed the very idea of observatory meteorology, were under constant scrutiny. The conclusion interrogates four crucial conditions of these observatory experiments: the significance of geographical particularity in justifications of observatory operations; the sustainability of coordinated observatory networks at a distance; the ability to manage, manipulate and interpret large datasets; and the potential public value of meteorology as it was prosecuted in observatory settings. Finally, the chapter considers the use of historic weather data in recent attempts by climate scientists to reconstruct past climates and extreme weather events.
The Introduction chapter reviews recent literature on the history of nineteenth-century meteorology, particularly as it relates to weather observation. It sets out the book’s argument that projects to establish meteorological observatories should be treated as observatory experiments. The chapter explains that the book presents four historical geographies of meteorological observatories: ships at sea, colonial buildings, huts on mountain tops and suburban back gardens. The remainder of the chapter considers the following questions in relation to debates in the relevant literature and in the context of the nineteenth century: What counted as a meteorological observatory? What was the right way to observe the weather? How were observatory networks configured? How were weather data managed? And what were the ends of observatory meteorology?
In 2022, the world experienced the deadliest year of armed conflict since the 1994 Rwandan genocide. The intensity and frequency of recent conflicts have drawn more attention to failures in forecasting, that is, failures to anticipate conflicts. Such failures can greatly reduce the time, motivation, and opportunities peacemakers have to intervene through mediation or peacekeeping operations. In recent years, the growth in the volume of open-source data, coupled with wide-scale advancements in machine learning, suggests that it may be possible for computational methods to help the international community forecast intrastate conflict more accurately, and in doing so reduce the rise of conflict. In this commentary, we argue for the promise of conflict forecasting under several technical and policy conditions. From a technical perspective, the success of this work depends on improvements in the quality of conflict-related data and an increased focus on model interpretability. In terms of policy implementation, we suggest that this technology should be used primarily to aid policy analysis heuristically and to help identify unexpected conflicts.
Common time series models allow for a correlation between observations that is likely to be largest for points that are close together in time. Adjustments can also be made for seasonal effects. Variation in a single spatial dimension may have characteristics akin to those of time series, and comparable models find application there. Autoregressive models, which make good intuitive sense and are simple to describe, are the starting point for discussion; the account then moves on to autoregressive moving average (ARMA) models, with possible differencing. The "forecast" package for R has mechanisms that allow automatic selection of model parameters. Exponential smoothing state space (ETS) models are an important alternative that has often proved effective in forecasting applications. ARCH and GARCH heteroskedasticity models are further classes that have been developed to handle the special characteristics of financial time series.
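The chapter's examples use R's forecast package; purely for illustration, here is a rough Python analogue using statsmodels, fitting an ARIMA model and an additive-trend exponential smoothing (ETS-style) model to a synthetic series:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
y = 10 + np.cumsum(rng.normal(size=200))        # synthetic trending series

# ARIMA(1,1,1): AR and MA terms applied to the once-differenced series
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()
print(arima_fit.forecast(steps=12))             # 12-step-ahead point forecasts

# Additive-trend exponential smoothing, an ETS-style alternative
ets_fit = ExponentialSmoothing(y, trend="add").fit()
print(ets_fit.forecast(12))
```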
This article provides a structured description of openly available news topics and forecasts for armed conflict at the national and grid cell level starting January 2010. The news topics, as well as the forecasts, are updated monthly at conflictforecast.org and provide coverage for more than 170 countries and about 65,000 grid cells of size 55 × 55 km worldwide. The forecasts rely on natural language processing (NLP) and machine learning techniques to leverage a large corpus of newspaper text for predicting sudden onsets of violence in peaceful countries. Our goals are a) to support conflict prevention efforts by making our risk forecasts available to practitioners and research teams worldwide, b) to facilitate additional research that can utilize risk forecasts for causal identification, and c) to provide an overview of the news landscape.
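As a generic illustration of text-based onset prediction (not the authors' NLP pipeline, which builds topic features from a large news corpus), a toy scikit-learn sketch with hypothetical documents and labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in: one aggregated news text per country-month, with a binary
# label marking whether violence broke out afterwards (hypothetical data).
docs = [
    "peace talks continue and markets remain stable",
    "armed groups clash near the border after disputed elections",
    "harvest festival draws record crowds to the region",
    "military deploys as protests turn violent in the capital",
]
onset = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, onset)
print(model.predict_proba(["ceasefire collapses amid rising tensions"])[:, 1])
```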
Surface ozone is an air pollutant that contributes to hundreds of thousands of premature deaths annually. Accurate short-term ozone forecasts may allow improved policy actions to reduce the risk to human health. However, forecasting surface ozone is a difficult problem as its concentrations are controlled by a number of physical and chemical processes that act on varying timescales. We implement a state-of-the-art transformer-based model, the temporal fusion transformer, trained on observational data from three European countries. In four-day forecasts of daily maximum 8-hour ozone (DMA8), our novel approach is highly skillful (MAE = 4.9 ppb, coefficient of determination $\mathrm{R}^2 = 0.81$) and generalizes well to data from 13 other European countries unseen during training (MAE = 5.0 ppb, $\mathrm{R}^2 = 0.78$). The model outperforms other machine learning models on our data (ridge regression, random forests, and long short-term memory networks) and compares favorably to the performance of other published deep learning architectures tested on different data. Furthermore, we illustrate that the model pays attention to physical variables known to control ozone concentrations and that the attention mechanism allows the model to use the most relevant days of past ozone concentrations to make accurate forecasts on test data. The skillful performance of the model, particularly in generalizing to unseen European countries, suggests that machine learning methods may provide a computationally cheap approach for accurate air quality forecasting across Europe.
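The headline metrics are standard; a small sketch of how MAE and the coefficient of determination would be computed with scikit-learn, on hypothetical observed and predicted DMA8 values:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical observed and predicted DMA8 ozone values (ppb)
y_true = np.array([45.0, 52.3, 38.1, 60.4, 48.9])
y_pred = np.array([43.8, 50.1, 40.0, 57.2, 49.5])

print("MAE (ppb):", mean_absolute_error(y_true, y_pred))
print("R^2:", r2_score(y_true, y_pred))
```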
We report the results of a forecasting experiment about a randomized controlled trial (RCT) that was conducted in the field. The experiment asks Ph.D. students, faculty, and policy practitioners to forecast (1) compliance rates for the RCT and (2) treatment effects of the intervention. The forecasting experiment randomizes the order of the questions about compliance and treatment effects, as well as the provision of information that a pilot experiment producing null results had been conducted. Forecasters were excessively optimistic about treatment effects and unresponsive both to item order and to information about the pilot. Those who declare themselves expert in the area relevant to the intervention are particularly resistant to new information that the treatment is ineffective. We interpret our results as suggesting caution when undertaking expert forecasting, since experts may have unrealistic expectations and may be inflexible in altering these even when provided with new information.
In this chapter we extend the discussion of the previous chapter to model dynamical systems with continuous state-spaces. We present statistical formulations to model and analyze trajectories that evolve in a continuous state space and whose measured output is corrupted by noise. In particular, we place special emphasis on linear Gaussian state-space models and, within this context, present Kalman filtering theory. The theory lends itself to the tracking algorithms explored later in the chapter and in an end-of-chapter project.
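A minimal sketch of the predict/update recursion of Kalman filtering for a linear Gaussian state-space model, with generic matrix inputs (illustrative, not the book's notation):

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    """Predict/update recursion for x_t = A x_{t-1} + w_t, y_t = C x_t + v_t,
    with w_t ~ N(0, Q), v_t ~ N(0, R); returns the filtered state means."""
    mu, P = mu0, P0
    means = []
    for y in ys:
        mu_pred = A @ mu                          # predict: propagate dynamics
        P_pred = A @ P @ A.T + Q
        S = C @ P_pred @ C.T + R                  # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu_pred + K @ (y - C @ mu_pred)      # update with the observation
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        means.append(mu)
    return np.array(means)
```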
This paper develops an efficient Stein-like shrinkage estimator for estimating slope parameters under structural breaks in seemingly unrelated regression models, which is then used for forecasting. The proposed method is a weighted average of two estimators: a restricted estimator that estimates the parameters under the restriction of no break in the coefficients, and an unrestricted estimator that considers break points and estimates the parameters using the observations within each regime. It is established that the asymptotic risk of the Stein-like shrinkage estimator is smaller than that of the unrestricted estimator, which is the method typically used to estimate the slope coefficients under structural breaks. Furthermore, this paper proposes an averaging minimal mean squared error estimator in which the averaging weight is derived by minimizing its asymptotic risk. Insights from the theoretical analysis are demonstrated in Monte Carlo simulations and through an empirical example of forecasting output growth of G7 countries.
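The core combination is, as the abstract states, a weighted average of the two estimators; a short sketch, treating the shrinkage weight as given (the paper derives it by minimising asymptotic risk):

```python
import numpy as np

def shrinkage_estimate(beta_restricted, beta_unrestricted, w):
    """Stein-like combination: a weighted average of the no-break
    (restricted) and per-regime (unrestricted) slope estimates,
    with shrinkage weight w in [0, 1] supplied by the user."""
    return w * np.asarray(beta_restricted) + (1 - w) * np.asarray(beta_unrestricted)
```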
This paper introduces a new software interface for eliciting belief distributions of any shape: Click-and-Drag. The interface was tested against the state of the art in the experimental literature (a text-based interface and multiple sliders) and in the online forecasting industry (a distribution-manipulation interface similar to the one used by the most popular crowd-forecasting website). By means of a pre-registered experiment on Amazon Mechanical Turk, we collected quantitative data on the accuracy of reported beliefs in a series of induced-value scenarios varying by granularity, shape, and time constraints, as well as subjective data on user experience. Click-and-Drag outperformed all other interfaces in accuracy and speed, and was self-reported as more intuitive and less frustrating, confirming the pre-registered hypothesis. Beyond the pre-registered results, Click-and-Drag generated the lowest drop-out rate from the task and scored best in a sentiment analysis of an open-ended general question. Further, the interface was used to collect homegrown predictions of temperature in New York City in 2022 and 2042. Distributions elicited with Click-and-Drag were smoother, with fewer idiosyncratic spikes. Free, open-source, ready-to-use oTree, Qualtrics, and LimeSurvey plugins for Click-and-Drag, and for all other tested interfaces, are available at https://beliefelicitation.github.io/.
People have the ability to estimate the frequencies of different behaviors, beliefs, and intentions of others, allowing them to fit into their immediate social worlds and to learn from and cooperate with others. However, psychology has produced a long list of apparent biases in social cognition. We show that this apparent contradiction can be resolved by understanding how the cognitive processes underlying social judgments interact with the properties of social and task environments. We describe our social sampling model, which incorporates this interaction and can explain biases in people’s estimates of broader populations. We also show that asking people about their social circles produces better predictions of elections than asking about their own voting intentions, provides a good description of population attributes, and helps predict people’s future voting and vaccination behavior.
We investigated the extent to which the human capacity for recognition helps to forecast political elections. We compared naïve recognition-based election forecasts, computed from convenience samples of citizens’ recognition of party names, to (i) standard polling forecasts computed from representative samples of citizens’ voting intentions, and (ii) simple, and typically very accurate, wisdom-of-crowds forecasts computed from the same convenience samples of citizens’ aggregated hunches about election results. Results from four major German elections show that mere recognition of party names forecast the parties’ electoral success fairly well. Recognition-based forecasts were most competitive with the other models when forecasting the smaller parties’ success and for small sample sizes. However, wisdom-of-crowds forecasts outperformed recognition-based forecasts in most cases. It seems that wisdom-of-crowds forecasts are able to draw on the benefits of recognition while avoiding its downsides, such as a lack of discrimination among very famous parties or recognition caused by factors unrelated to electoral success. Yet a simple extension of the recognition-based forecasts, asking people what proportion of the population would recognize a party instead of whether they themselves recognize it, also appears to eliminate these downsides.
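One simple way to turn recognition into a forecast is to predict shares proportional to recognition counts; this is an illustrative rule with hypothetical data, not necessarily the authors’ exact method:

```python
import numpy as np

def recognition_forecast(recognition_counts):
    """Naive recognition-based forecast: predicted shares proportional to
    how many respondents in a convenience sample recognise each party."""
    counts = np.asarray(recognition_counts, dtype=float)
    return counts / counts.sum()

# Hypothetical recognition counts for four parties
print(recognition_forecast([95, 88, 40, 12]))
```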
Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. However, these “knowns” about how to make a good presidential election forecast come with many unknowns due to the challenges of evaluating forecast calibration and communication. We highlight how incentives may shape forecasts, and particularly forecast uncertainty, in light of calibration challenges. We illustrate these challenges in creating, communicating, and evaluating election predictions, using The Economist and FiveThirtyEight forecasts of the 2020 election as examples, and offer recommendations for forecasters and scholars.
Predictions of magnitudes (costs, durations, environmental events) are often given as uncertainty intervals (ranges). When are such forecasts judged to be correct? We report results of five experiments showing that forecasted ranges of expected natural events (floods and volcanic eruptions) are perceived as accurate when an observed magnitude falls inside or at the boundary of the range, with little regard to its position relative to the “most likely” (central) estimate. All outcomes that fell inside a wide interval were perceived as equally well captured by the forecast, whereas identical outcomes falling outside a narrow range were deemed to be incorrectly predicted, in proportion to the magnitude of deviation. In these studies, ranges function as categories, with boundaries distinguishing between right and wrong predictions, even for outcome distributions that are acknowledged as continuous, and for boundaries that are arbitrarily defined (for instance, when the narrow prediction interval is defined as capturing 50 percent and the wide one 90 percent of all potential outcomes). However, the boundary effect is affected by the label used. When the upper limit of a range is described as a value that “can” occur (Experiment 5), outcomes both below and beyond this value were regarded as consistent with the forecast.
This article proposes an Item Response Theory (IRT) forecasting model that incorporates proper scoring rules and provides evaluations of forecasters’ expertise in relation to the features of the specific questions they answer. We illustrate the model using geopolitical forecasts obtained by the Good Judgment Project (GJP) (see Mellers, Ungar, Baron, Ramos, Gurcay, Fincher, Scott, Moore, Atanasov, Swift, Murray, Stone & Tetlock, 2014). The expertise estimates from the IRT model, which take into account variation in the difficulty and discrimination power of the events, capture the underlying construct being measured and are highly correlated with the forecasters’ Brier scores. Furthermore, our expertise estimates based on the first three years of the GJP data are better predictors of both the forecasters’ fourth-year Brier scores and their activity level than are the overall Brier scores and Merkle’s (2016) predictions based on the same period. Lastly, we discuss the benefits of using event-characteristic information in forecasting.
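For binary questions, the Brier score referenced throughout is simply the mean squared difference between probability forecasts and realised outcomes; a small sketch:

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between probability forecasts and binary
    outcomes; lower is better."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return np.mean((p - o) ** 2)

# Three resolved questions: forecast probabilities vs. realised outcomes
print(brier_score([0.9, 0.3, 0.6], [1, 0, 1]))   # ~0.087
```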
Accountability pressures are a ubiquitous feature of social systems: virtually everyone must answer to someone for something. Behavioral research has, however, warned that accountability, specifically a focus on being responsible for outcomes, tends to produce suboptimal judgments. We qualify this view by demonstrating the long-term adaptive benefits of outcome accountability in uncertain, dynamic environments. More than a thousand randomly assigned forecasters participated in a ten-month forecasting tournament in conditions of control, process, outcome or hybrid accountability. Accountable forecasters outperformed non-accountable ones. Holding forecasters accountable to outcomes (“getting it right”) boosted forecasting accuracy beyond holding them accountable for process (“thinking the right way”). The performance gap grew over time. Process accountability promoted more effective knowledge sharing, improving accuracy among observers. Hybrid (process plus outcome) accountability boosted accuracy relative to process, and improved knowledge sharing relative to outcome accountability. Overall, outcome and process accountability appear to make complementary contributions to performance when forecasters confront moderately noisy, dynamic environments where signal extraction requires both knowledge pooling and individual judgments.
A growing body of research indicates that forecasting skill is a unique and stable trait: forecasters with a track record of high accuracy tend to maintain this record. But how does one identify skilled forecasters effectively? We address this question using data collected during two seasons of a longitudinal geopolitical forecasting tournament. Our first analysis, which compares psychometric traits assessed prior to forecasting, indicates intelligence consistently predicts accuracy. Next, using methods adapted from classical test theory and item response theory, we model latent forecasting skill based on the forecasters’ past accuracy, while accounting for the timing of their forecasts relative to question resolution. Our results suggest these methods perform better at assessing forecasting skill than simpler methods employed by many previous studies. By parsing the data at different time points during the competitions, we assess the relative importance of each information source over time. When past performance information is limited, psychometric traits are useful predictors of future performance, but, as more information becomes available, past performance becomes the stronger predictor of future accuracy. Finally, we demonstrate the predictive validity of these results on out-of-sample data, and their utility in producing performance weights for wisdom-of-crowds aggregations.
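As an illustration of performance weights for wisdom-of-crowds aggregation, a generic inverse-Brier weighting sketch (the weighting schemes studied in this literature are more elaborate; names and data here are hypothetical):

```python
import numpy as np

def weighted_crowd_forecast(forecasts, past_brier_scores):
    """Aggregate individual probability forecasts, down-weighting
    forecasters with worse (higher) historical Brier scores."""
    f = np.asarray(forecasts, dtype=float)
    w = 1.0 / (np.asarray(past_brier_scores, dtype=float) + 1e-9)
    return (w / w.sum()) @ f

# Three forecasters with track records of varying accuracy
print(weighted_crowd_forecast([0.8, 0.6, 0.4], [0.05, 0.10, 0.30]))
```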