This paper adds to the literature on global inflation synchronization by distinguishing the tradable and non-tradable content of the consumption basket. First, using a novel database of monthly CPI series for 40 countries from 2000 onwards, I estimate a dynamic factor model with stochastic volatility that decomposes inflation into global, income-group, and idiosyncratic components. While synchronization has historically been most prominent in tradable goods inflation, the findings also reveal increasing synchronization in non-tradable inflation. Second, I use a time-varying parameter vector autoregressive model to investigate potential spillover effects. The results provide evidence of spillovers from tradable to non-tradable inflation, while the reverse channel is largely muted over the sample. Finally, results from local projections indicate that a tightening of US monetary policy causes a significant decline in global headline inflation, driven primarily by the heightened sensitivity of tradable goods.
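As a schematic of the decomposition described above (the notation here is mine, not the paper's), country i's tradable or non-tradable inflation splits into a global factor, an income-group factor, and an idiosyncratic term whose variance follows a stochastic-volatility process:

\pi_{i,t} = \lambda^{G}_{i}\, f^{G}_{t} + \lambda^{K}_{i}\, f^{K(i)}_{t} + \varepsilon_{i,t}, \qquad \varepsilon_{i,t} \sim N\!\left(0, e^{h_{i,t}}\right), \qquad h_{i,t} = h_{i,t-1} + \eta_{i,t},

where K(i) denotes country i's income group and the loadings \lambda measure how strongly each series comoves with the common factors.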
Bayes’ rule remains the status quo for modeling belief updating in both normative and descriptive models of behavior under uncertainty. Some recent research has questioned its use in descriptive models, presenting evidence that people overweight ‘good news’ relative to ‘bad news’ when updating ego-relevant beliefs. In this paper, we present experimental evidence testing whether this ‘good-news, bad-news’ effect is present in a financial decision-making context, a domain that is important for understanding much economic decision making. We find no evidence of asymmetric updating in this domain; on the contrary, belief updating in our experiment is close to the Bayesian benchmark on average. However, we show that this average behavior masks substantial heterogeneity in individual updating behavior, and we find no evidence of a sizeable subgroup of asymmetric updaters.
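For concreteness, a minimal sketch of the benchmark being tested (illustrative code, not the authors' experimental design): a Bayesian updater versus a Grether-style asymmetric updater who underweights bad news, for a binary state with signals of known precision. The responsiveness weights w_good and w_bad are hypothetical.

import math

def bayes_posterior(prior, signal_good, precision=0.7):
    # Posterior P(high state) after one signal of the stated precision.
    like_if_high = precision if signal_good else 1 - precision
    like_if_low = 1 - precision if signal_good else precision
    num = prior * like_if_high
    return num / (num + (1 - prior) * like_if_low)

def asymmetric_posterior(prior, signal_good, precision=0.7, w_good=1.0, w_bad=0.6):
    # Responsiveness weights on the log-likelihood ratio differ by signal
    # valence; w_good = w_bad = 1 recovers the Bayesian benchmark.
    w = w_good if signal_good else w_bad
    llr = math.log(precision / (1 - precision)) * (1 if signal_good else -1)
    log_odds = math.log(prior / (1 - prior)) + w * llr
    return 1 / (1 + math.exp(-log_odds))

print(bayes_posterior(0.5, True))        # 0.70
print(asymmetric_posterior(0.5, False))  # ~0.38 vs. Bayesian 0.30: bad news underweighted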
We describe the Bayes factor, an explicit measure of the strength of evidence: the extent to which the data increase or decrease the odds that a given hypothesis or model is true. Issues and techniques involved in deriving a Bayes factor are outlined. We illustrate the technique with data from an ultimatum game experiment that looked for an experimenter observation effect, and show that the evidence increases the odds of an effect, but not by enough to convince someone with a skeptical prior.
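As a worked illustration of the idea (a minimal sketch with made-up counts, not the paper's data): the Bayes factor for a binomial rate compares the marginal likelihood under a composite alternative with a Beta prior against a point null.

import math
from scipy.special import betaln, gammaln

def log_binom_coef(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def bayes_factor_10(k, n, a=1.0, b=1.0):
    # BF_10 = p(data | H1: theta ~ Beta(a, b)) / p(data | H0: theta = 0.5)
    log_m1 = log_binom_coef(n, k) + betaln(k + a, n - k + b) - betaln(a, b)
    log_m0 = log_binom_coef(n, k) + n * math.log(0.5)
    return math.exp(log_m1 - log_m0)

# 35 'effects' in 50 trials: BF_10 is roughly 10, so the data multiply the
# prior odds by about ten -- not enough to move a skeptic with very low priors.
print(bayes_factor_10(35, 50))

Since posterior odds equal the Bayes factor times the prior odds, a skeptic starting at odds of 1:100 ends at roughly 1:10, which is the sense in which evidence can raise the odds without convincing the skeptic.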
We propose Bayesian estimation of individuals’ risk preferences for applications of behavioral welfare economics that evaluate observed choices involving risk. Bayesian estimation provides more systematic control over the use of informative priors in inferences about the risk preferences of each individual in a sample. We demonstrate that these methods make a difference to the rigorous normative evaluation of decisions in a case study of insurance purchases. We also show that hierarchical Bayesian methods can infer welfare reliably and efficiently even with significantly reduced demands on the number of choices each subject has to make. Finally, we illustrate the natural use of Bayesian methods in the adaptive evaluation of welfare.
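To fix ideas, a minimal sketch of the welfare metric such an exercise can produce (hypothetical numbers and a stand-in posterior for the risk-aversion coefficient; the paper's estimation is a full hierarchical Bayesian model): under CRRA utility, an insurance choice is scored by the certainty-equivalent gain from insuring, averaged over the posterior.

import numpy as np

def crra(w, r):
    # CRRA utility; log utility at r = 1.
    return np.log(w) if abs(r - 1) < 1e-9 else w ** (1 - r) / (1 - r)

def certainty_equivalent(outcomes, probs, r):
    eu = np.dot(probs, crra(np.asarray(outcomes, dtype=float), r))
    return np.exp(eu) if abs(r - 1) < 1e-9 else ((1 - r) * eu) ** (1 / (1 - r))

wealth, loss, premium, p_loss = 100.0, 60.0, 10.0, 0.15
r_draws = np.random.default_rng(0).normal(0.7, 0.2, 1000)  # stand-in posterior

ce_gain = np.mean([
    certainty_equivalent([wealth - premium], [1.0], r)
    - certainty_equivalent([wealth, wealth - loss], [1 - p_loss, p_loss], r)
    for r in r_draws
])
print(ce_gain)  # positive here: insuring raises certainty-equivalent wealth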
We study how consumer preferences affect the transmission of microeconomic price shocks to consumer price index (CPI) inflation. These preferences give rise to complementarities and substitutions between goods, generating demand-driven cross-price dependencies that either amplify or mitigate the impact of price shocks. Our results demonstrate that while both effects are present, positive spillovers due to complementarities dominate. The magnitude of these cross-price effects is significant, highlighting their importance in shaping CPI inflation dynamics. Most importantly, demand-driven price linkages decisively shape the impact of producer prices on CPI inflation. These findings underscore the need to take demand-driven price dependencies into account when assessing the impact of price shocks on CPI inflation, rather than relying solely on supply-related ones.
This study analyzes the productive efficiency of crop farming in the EU, using publicly available data on crop farming from the FADN database. Standard efficiency measurement techniques based on frontier analysis indicate that the representative farms provided in the database are fully efficient, even though there is ample evidence in the literature that this is highly unlikely. We find that this is a consequence of overly restrictive assumptions about the compound error in standard stochastic frontier (SF) models. An efficiency benchmark based on the best model for the data under a generalized error specification reveals substantial differences in crop farming efficiency across the EU.
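A minimal simulation sketch of the compound error at issue (illustrative; the paper works with a generalized specification): frontier output is hit by symmetric noise v and one-sided inefficiency u, and the tell-tale negative skewness of residuals is what an overly restrictive error family can fail to accommodate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                       # log input
v = rng.normal(0.0, 0.2, n)                  # symmetric noise
u = np.abs(rng.normal(0.0, 0.3, n))          # half-normal inefficiency
log_y = 1.0 + 0.6 * x + v - u                # output below the frontier

beta = np.polyfit(x, log_y, 1)
resid = log_y - np.polyval(beta, x)
print(stats.skew(resid))   # negative: the signature of inefficiency in the data

When the assumed error specification cannot reproduce this shape, maximum likelihood tends to drive the inefficiency variance to zero and every farm is labeled fully efficient.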
We extend the growth-at-risk (GaR) literature by examining US growth risks over 130 years using a time-varying parameter stochastic volatility regression model. This model effectively captures the distribution of GDP growth over long samples, accommodating changing relationships across variables and structural breaks. Our analysis offers several key insights for policymakers. We identify significant temporal variation in both the level and determinants of GaR. The stability of upside risks to GDP growth, as seen in previous research, is largely confined to the Great Moderation period, with a more balanced risk distribution prior to the 1970s. Additionally, the distribution of GDP growth has narrowed significantly since the end of the Bretton Woods system. Financial stress is consistently associated with higher downside risks, without affecting upside risks. Moreover, indicators such as credit growth and house prices influence both downside and upside risks during economic booms. Our findings also contribute to the financial cycle literature by providing a comprehensive view of the drivers and risks associated with economic booms and recessions over time.
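A minimal constant-parameter stand-in for the GaR object itself (synthetic data; the paper's model adds time-varying parameters and stochastic volatility): growth-at-risk is a low conditional quantile of GDP growth, here estimated by quantile regression on a financial stress indicator.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
fin_stress = rng.normal(size=n)
# Toy DGP: stress widens the lower tail of growth more than it shifts the mean.
growth = 2.0 - 0.5 * fin_stress + rng.normal(0.0, 1.0 + 0.8 * np.clip(fin_stress, 0, None))

df = pd.DataFrame({"growth": growth, "fin_stress": fin_stress})
for q in (0.05, 0.5, 0.95):
    fit = smf.quantreg("growth ~ fin_stress", df).fit(q=q)
    print(q, round(fit.params["fin_stress"], 2))  # most negative at q = 0.05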
This article studies how sudden changes in bank credit supply affect economic activity. I identify shocks to bank credit supply based on firms’ aggregate debt composition, using a model in which firms fund production with bonds and loans. In the model, bank shocks are the only type of shock that implies opposite movements in the two types of debt, as firms adjust their debt composition to new credit conditions. Bank shocks account for a third of output fluctuations and are predictive of the bond spread.
We provide new evidence about US monetary policy using a model that: (i) estimates time-varying monetary policy weights without relying on stylized theoretical assumptions; (ii) allows for endogenous breakdowns in the relationship between interest rates, inflation, and output; and (iii) generates a unique measure of monetary policy activism that accounts for economic instability. Jointly incorporating endogenous time-varying uncertainty about the monetary policy parameters and the stability of the relationship between interest rates, inflation, and output materially reduces the probability of determinate monetary policy. The average probability of determinacy over the period from 1982 to 1997 is below 60% (hence well below seminal estimates of determinacy probabilities, which are close to unity). Post-1990, the average probability of determinacy is 75%, falling to approximately 60% when we allow for typical levels of trend inflation.
Much historical yield-monitor data comes from fields where a uniform rate of nitrogen was applied. A new approach is proposed that uses these data to generate site-specific nitrogen recommendations. Bayesian methods are used to estimate a linear plateau model in which only the plateau varies spatially. The model is then illustrated by using it to make site-specific nitrogen recommendations for corn production in Mississippi. The in-sample recommendations generated by this approach return an estimated $9/acre on the example field. The long-term goal is to combine this information with other sources of information, such as remote sensing measurements.
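A minimal sketch of the response function and the recommendation rule it implies (hypothetical prices and coefficients; the paper estimates the spatially varying plateau with Bayesian methods):

import numpy as np

def yield_linear_plateau(n_rate, beta0, beta1, plateau):
    # Yield rises linearly in applied N until it reaches the site's plateau.
    return np.minimum(beta0 + beta1 * n_rate, plateau)

def optimal_n(beta0, beta1, plateau, p_corn, p_n):
    # With a linear plateau, the profit-maximizing rate is the one that just
    # reaches the plateau, or zero if N does not pay at the margin.
    if p_corn * beta1 <= p_n:
        return 0.0
    return (plateau - beta0) / beta1

# Only the plateau varies across sites, so the recommendation is site-specific:
for site_plateau in (150.0, 175.0, 200.0):   # bu/acre
    print(optimal_n(beta0=80.0, beta1=0.9, plateau=site_plateau,
                    p_corn=4.5, p_n=0.5))    # lbs N/acre: ~78, ~106, ~133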
Fertility decline in human history is a complex enigma. Different triggers have been proposed, among them an increased demand for human capital that results in parents making a quantity–quality (QQ) trade-off. This is the first study to examine the existence of a QQ trade-off, and possible gender bias within it, by analyzing fertility intentions rather than fertility outcomes. We rely on unified growth theory to understand the QQ trade-off conceptually and on a discrete choice experiment conducted among 426 respondents in Ethiopia to analyze fertility intentions empirically. We confirm the existence of a QQ trade-off only when the number of children is less than six and find that intentions are gendered in two ways: (i) boys are preferred over girls, and (ii) men are willing to trade off more education in return for more children. The results imply that a focus both on stimulating intentions for education, especially girls' education, and on family size intentions is important to accelerate the demographic transition.
Longevity risk is putting ever more financial pressure on governments and pension plans worldwide, owing to pensioners’ rising life expectancy and the growing number of people reaching retirement age. Lee and Carter (1992, Journal of the American Statistical Association, 87(419), 659–671) applied a one-factor dynamic factor model to forecast the trend of mortality improvement, and the model has since become the field’s workhorse. It is, however, well known that the model overlooks cross-dependence between different age groups. We introduce Factor-Augmented Vector Autoregressive (FAVAR) models to the mortality modelling literature. The model, obtained by adding an unobserved factor process to a Vector Autoregressive (VAR) process, nests the VAR and Lee–Carter models as special cases and inherits the advantages of both frameworks. A Bayesian estimation approach, adapted from the Minnesota prior, is proposed. An empirical application to US and French mortality data demonstrates the proposed method’s efficacy in both in-sample and out-of-sample performance.
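For reference, the Lee–Carter benchmark that the FAVAR nests can be fit in a few lines via singular value decomposition (toy data; log_m stands in for an ages-by-years matrix of log central death rates):

import numpy as np

rng = np.random.default_rng(0)
# Toy data: 20 ages, 40 years of declining log mortality.
log_m = (-3.0 + np.linspace(0.0, 2.0, 20)[:, None]
         - 0.02 * np.arange(40)[None, :]
         + 0.05 * rng.normal(size=(20, 40)))

a_x = log_m.mean(axis=1)                          # age profile
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                     # age loadings, sum to 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                # single mortality index

# Forecast k_t as a random walk with drift and rebuild next year's rates:
drift = np.diff(k_t).mean()
log_m_next = a_x + b_x * (k_t[-1] + drift)

The FAVAR generalizes this by letting a VAR in the age-specific rates coexist with such latent factors, restoring the cross-age dependence the one-factor model ignores.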
Partial equilibrium models have been used extensively by policy makers to prospectively determine the consequences of government programs that affect consumer incomes or the prices consumers pay. However, these models have not previously been used to analyze government programs that inform consumers. In this paper, we develop a model that policy makers can use to quantitatively predict how consumers will respond to risk communications that contain new health information. The model combines Bayesian learning with utility-maximizing consumer choice. We discuss how this model can be used to evaluate information policies; we then test the model by simulating the impacts of the North Dakota Folic Acid Educational Campaign as a validation exercise.
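A minimal sketch of the Bayesian-learning half of such a model (hypothetical numbers): the consumer's subjective risk is a Beta-distributed belief that a risk communication updates, and the updated mean then enters the utility-maximizing choice.

prior_a, prior_b = 2.0, 8.0                  # prior belief: mean risk 0.20

# Model the campaign message as s adverse signals out of n:
n_signals, s_adverse = 10, 1
post_a = prior_a + s_adverse
post_b = prior_b + (n_signals - s_adverse)

print(prior_a / (prior_a + prior_b))         # 0.20 before the campaign
print(post_a / (post_a + post_b))            # 0.15 after: demand shifts accordingly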
This study investigates the time-varying effects of international uncertainty shocks. I use a global vector autoregressive model with drifting coefficients and factor stochastic volatility in the errors to model the G7 economies jointly. The measure of uncertainty is constructed by estimating a time-varying scalar driving the innovation variances of the latent factors, which is also included in the conditional mean of the process. To achieve regularization, I use Bayesian techniques for estimation, and rely on hierarchical global–local priors to shrink the high-dimensional multivariate system towards sparsity. I compare the obtained econometric measure of uncertainty to alternative indices and discuss commonalities and differences. Moreover, I find that international uncertainty may differ substantially compared to identically constructed domestic measures. Structural inference points towards pronounced real and financial effects of uncertainty shocks in all considered economies. These effects are subject to heterogeneities over time and the cross-section, providing empirical evidence in favor of using the flexible econometric framework introduced in this study.
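As a sketch of the global–local shrinkage idea (the horseshoe prior shown here is one common member of this family, used purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
p = 1000                                     # high-dimensional coefficient vector
tau = np.abs(rng.standard_cauchy())          # global scale: shrinks everything
lam = np.abs(rng.standard_cauchy(p))         # local scales: heavy tails let
beta = rng.normal(0.0, tau * lam)            # individual signals escape
print(np.median(np.abs(beta)), np.abs(beta).max())  # mostly tiny, a few large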
We measure the economic impact of varietal improvement and technological change in flue-cured tobacco across quantity (e.g., yield) and quality dimensions under a voluntary quality constraint. Since 1961, flue-cured tobacco breeders in the United States have been subject to the Minimum Standards Program that sets limits on acceptable quality characteristics for commercial tobacco varieties. We implement a Bayesian hierarchical model to measure the contribution of breeding efforts to changes in tobacco yields and quality between 1954 and 2017. The Bayesian model addresses limited data for varieties in the trials and allows easy generation of the necessary parameters of economic interest.
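The core of the hierarchical idea can be sketched with a normal–normal partial-pooling step (hypothetical numbers, an empirical-Bayes stand-in for the paper's full model): variety means estimated from few trial observations are shrunk toward the overall mean, which is how such a model copes with limited data per variety.

import numpy as np

variety_means = np.array([2100.0, 2350.0, 1980.0])   # lbs/acre, hypothetical
n_obs = np.array([3, 12, 2])                         # trial observations per variety
sigma2_e, sigma2_v = 200.0 ** 2, 120.0 ** 2          # within- / between-variety variance

shrink = sigma2_v / (sigma2_v + sigma2_e / n_obs)    # more data, less shrinkage
grand_mean = variety_means.mean()
posterior_means = grand_mean + shrink * (variety_means - grand_mean)
print(posterior_means.round(1))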
There is renewed interest in levelling up the regions of the UK. The combination of social and political discontent and the sluggishness of key UK macroeconomic indicators, such as productivity growth, has led to increased interest in understanding the regional economies of the UK. In turn, this has led to more investment in economic statistics. Specifically, the Office for National Statistics (ONS) recently started to produce quarterly regional GDP data for the nine English regions and Wales dating back to 2012Q1. This complements existing real GVA data for the regions, available from the ONS on an annual basis back to 1998, with the devolved administrations of Scotland and Northern Ireland producing their own quarterly output measures. In this paper we reconcile these two data sources along with UK quarterly output data that date back to 1970. This enables us to produce both more timely real-terms estimates of quarterly economic growth in the regions of the UK and a new reconciled historical time series of quarterly regional real output data from 1970. We explore a number of features of these new data, including producing a new quarterly regional productivity series and commenting on the evolution of regional productivity growth in the UK.
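The most basic building block of such a reconciliation is benchmarking a quarterly indicator to annual totals; a pro-rata sketch with made-up numbers (the paper's approach is necessarily more sophisticated, since it must also respect UK-wide quarterly totals):

import numpy as np

quarterly_indicator = np.array([24.0, 25.0, 26.0, 27.0,    # year 1
                                26.0, 27.0, 28.0, 29.0])   # year 2
annual_totals = np.array([100.0, 114.0])                   # annual benchmarks

q = quarterly_indicator.reshape(2, 4)
benchmarked = q * (annual_totals / q.sum(axis=1))[:, None]
print(benchmarked.ravel())   # each year's quarters now sum to its benchmark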
The generalized linear model (GLM) is a statistical model widely used in actuarial practice, especially for insurance ratemaking. Because property and casualty insurance claim datasets are inherently longitudinal, there have been attempts to incorporate the unobserved heterogeneity of each policyholder from the repeated observations. To achieve this goal, random effects models have been proposed, but theoretical discussion of methods to test for the presence of random effects in the GLM framework is still scarce. In this article, we explore the concept of the Bregman divergence, which has good properties for statistical modeling and can be connected to diverse model selection diagnostics, as in Goh and Dey [(2014) Journal of Multivariate Analysis, 124, 371–383]. Model diagnostics derived from the Bregman divergence can be used to test the robustness of a modeler’s chosen prior to possible misspecification, both in the naive model, which assumes that random effects follow a point-mass prior distribution, and in the proposed model, which assumes a continuous prior density for the random effects. This approach gives insurance companies a concrete framework for testing the presence of nonconstant random effects in both claim frequency and severity and, furthermore, for selecting an appropriate hierarchical model that can explain both the observed and unobserved heterogeneity of policyholders for insurance ratemaking. Both models are calibrated using a claim dataset from the Wisconsin Local Government Property Insurance Fund, which includes observed claim counts and amounts for a portfolio of policyholders.
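For reference, the Bregman divergence itself (the standard definition with two familiar special cases; this is not the article's test statistic):

import numpy as np

def bregman(x, y, phi, grad_phi):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, for convex phi.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# phi(v) = ||v||^2 recovers squared Euclidean distance:
print(bregman([1.0, 2.0], [0.0, 0.0], lambda v: np.dot(v, v), lambda v: 2 * v))

# phi(v) = sum v log v (negative entropy) recovers Kullback-Leibler divergence:
print(bregman([0.2, 0.8], [0.5, 0.5],
              lambda v: np.sum(v * np.log(v)),
              lambda v: np.log(v) + 1.0))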
This study evaluates the effects of vegetative soil conservation practices (afforestation and/or bamboo planting) on farm profit and its components, revenue and variable cost. Since farmers self-select into adopting conservation measures, evaluating their soil conservation practices is subject to selection bias, which we address using propensity score matching. We also check whether spatial spillovers exist in the adoption of vegetative conservation measures and how they affect matching. We use primary survey data from the Darjeeling district of the Eastern Himalayan region for the year 2013. Our results suggest strong spatial correlation, and we find that propensity scores estimated from the spatial model provide better matches than those from the non-spatial model. While the results show that vegetative soil conservation can lead to significant gains in revenue, it also increases costs, so that no significant gains in profit accrue to farmers.
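A minimal sketch of propensity score matching itself (synthetic data, non-spatial; the study's contribution is adding spatial terms to the selection model):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=(n, 3))                               # farm covariates
adopt = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))   # selection on x
revenue = 2.0 + 1.0 * adopt + x[:, 0] + rng.normal(size=n)

ps = LogisticRegression().fit(x, adopt).predict_proba(x)[:, 1]
treated = np.where(adopt == 1)[0]
control = np.where(adopt == 0)[0]
# Match each adopter to the nearest non-adopter in propensity-score distance:
nearest = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
att = np.mean(revenue[treated] - revenue[nearest])
print(att)   # close to the true adoption effect of 1.0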
Basis forecasting is important for producers and consumers of agricultural commodities in their risk management decisions. However, the best-performing forecasting model identified in previous studies varies substantially. Given this inconsistency, we take a Bayesian approach that addresses model uncertainty by combining forecasts from different models. Results show that model performance differs by location and forecast horizon, but the forecast from the Bayesian approach often performs favorably. In some cases, however, simple moving averages have lower forecast errors. Besides the nearby basis, we also examine the basis in a specific month and find that regression-based models outperform others at longer horizons.
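One standard way to implement such a combination (a sketch with made-up numbers, using the common BIC approximation to posterior model probabilities; the paper's weighting scheme may differ):

import numpy as np

model_forecasts = np.array([-0.32, -0.25, -0.41])   # basis forecasts, $/bu
bic = np.array([112.4, 110.9, 115.0])               # in-sample fit, lower is better

log_w = -0.5 * (bic - bic.min())                    # approximate log posterior prob.
weights = np.exp(log_w) / np.exp(log_w).sum()
print(weights.round(3))                             # [0.295 0.625 0.08]
print(round(float(weights @ model_forecasts), 3))   # combined forecast: -0.284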
Policy-critical, micro-level statistical data are often unavailable at the desired level of disaggregation. We present a Bayesian methodology for “downscaling” aggregated count data to the micro level, using an outside statistical sample. Our procedure combines numerical simulation with exact calculation of combinatorial probabilities. We motivate our approach with an application estimating the number of farms in a region, using count totals at higher levels of aggregation. In a simulation analysis over varying population sizes, we demonstrate both robustness to sampling variability and outperformance relative to maximum likelihood. Spatial considerations, implementation of “informative” priors, non-spatial classification problems, and best practices are discussed.
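A minimal sketch of the downscaling idea (an illustrative Dirichlet–multinomial stand-in, not the authors' exact procedure): combine a known aggregate count with an outside sample of sub-region shares to simulate micro-level allocations.

import numpy as np

rng = np.random.default_rng(0)
total_farms = 250                            # known count at the aggregate level
sample_counts = np.array([12, 30, 8])        # outside sample across 3 sub-regions

# Draw sub-region shares from their posterior, then allocations that always
# sum to the known total:
draws = np.array([rng.multinomial(total_farms, rng.dirichlet(sample_counts + 1.0))
                  for _ in range(5000)])
print(draws.mean(axis=0).round(1))           # posterior mean counts per sub-region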