
Bayesian state-space synthetic control method for deforestation baseline estimation for forest carbon credits

Published online by Cambridge University Press:  28 February 2024

Keisuke Takahata*
Affiliation:
sustainacraft Inc., Tokyo, Japan
Hiroshi Suetsugu
Affiliation:
sustainacraft Inc., Tokyo, Japan
Keiichi Fukaya
Affiliation:
Biodiversity Division, National Institute for Environmental Studies, Ibaraki, Japan
Shinichiro Shirota
Affiliation:
Center for the Promotion of Social Data Science Education and Research, Hitotsubashi University, Tokyo, Japan
*
Corresponding author: Shinichiro Shirota; Email: [email protected]

Abstract

Carbon credits from reducing emissions from deforestation and degradation (REDD+) projects have been criticized as "junk" credits issued against invalid ex-ante baselines. Recently, the concept of an ex-post baseline has been discussed to address this criticism, while an ex-ante baseline remains necessary for project financing and risk assessment. To address this issue, we propose a Bayesian state-space model that integrates ex-ante baseline projection and ex-post dynamic baseline updating in a unified manner. Our approach provides a tool for appropriate risk assessment and performance evaluation of REDD+ projects. We apply the proposed model to a REDD+ project in Brazil and show that it may have had a small, positive effect but has been overcredited. We also demonstrate that the 90% predictive interval of the ex-ante baseline includes the ex-post baseline, implying that our ex-ante estimation can work effectively.

Type
Application Paper
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Open Practices
Open materials
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Impact Statement

Our approach estimates deforestation baselines for forest carbon credits, simultaneously addressing criticisms of junk carbon credits issued by REDD+ projects and the problem of early financing. By integrating ex-ante baseline projection and ex-post dynamic updating, we evaluate ex-ante risks and ex-post project performance in a unified manner. Our approach can contribute to the sound allocation of funds to forest conservation projects with significant positive impacts on climate change action.

1. Introduction

Carbon credit is an incentive scheme to promote projects that provide additional benefits for climate change mitigation, and it is expected to play an important role in offsetting the gap from net-zero emissions that remains after reduction efforts (Griscom et al., 2020). Reducing deforestation and forest degradation is considered one of the most effective approaches to reducing carbon emissions, and the reducing emissions from deforestation and degradation (REDD+) mechanism is a framework to promote such efforts through the issuance of carbon credits. However, carbon credits from REDD+ have been subject to several criticisms. Credits issued for projects without actual positive effects on climate change mitigation are called "junk carbon credits", and several studies have shown that many REDD+ projects may have produced them (e.g., West et al., 2020, 2023; Guizar-Coutiño et al., 2022).

Criticism of carbon credits mainly concerns the validity of the ex-ante baseline, that is, a counterfactual scenario in the absence of a project that is estimated before the project starts. In response, the concept of an ex-post (or dynamic) baseline has recently been discussed (Verra, 2022a). For example, the use of the synthetic control method (SCM) has been studied in this context (West et al., 2020, 2023). In this framework, the baseline is sequentially updated at every observation of forest cover after the intervention, allowing the effects of changes in the external environment to be taken into account.

However, a financing issue remains: result-based payment means that project proponents must wait several years for the first credit issuance. From an investor's perspective, an ex-ante baseline projection is needed to quantify project risk for investment decisions (Verra, 2022b). These considerations point to the need to integrate ex-ante baseline forecasting before the intervention with ex-post baseline updating at each observation after it.

We propose a new model to address this problem. First, we introduce a Bayesian state-space model that naturally integrates the estimation of the ex-ante baseline and the dynamic updating of the ex-post baseline. We achieve this by combining state-space modeling for forecasting with SCM for dynamic updating. Second, we incorporate covariate balancing into the state-space model via the general Bayesian updating method for valid causal inference. We also address the small sample problem, which often arises in time-series causal inference, by introducing the regularized Horseshoe prior. Finally, we apply the proposed model to a REDD+ project in Brazil and show that both ex-ante and ex-post baselines can be effectively estimated by our model. Our approach would enable appropriate ex-ante risk assessment and ex-post performance evaluation of forest conservation projects and contribute to the sound allocation of funds to projects that have significant positive impacts on climate change mitigation.

2. Preliminaries and related work

2.1. VM0007: A REDD+ methodology for emission reduction evaluation

VM0007 (Verra, 2020) is one of the major methodologies that define the calculation of emission reductions in a REDD+ project. A key concept of VM0007 is the Reference Region for projecting the rate of deforestation (RRD), which is equivalent to a control unit in causal inference. The RRD is chosen to match the project area (PA) as closely as possible in terms of the following variables: deforestation drivers, landscape factors (e.g., forest types, soil types, elevation), and socioeconomic variables (access to infrastructure, policies, regulations, etc.). For the 10–12 years preceding the implementation of a project, deforestation rates are aggregated over the RRD and projected as a baseline for the crediting period. The projection method can be a simple historical average or a predefined linear/nonlinear model, with the former often used. After the projection, spatial modeling is applied for certain types of forest to adjust the baseline.
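As a toy illustration of the historical-average projection described above (the numbers are made up, not from any real project), the following sketch averages preproject deforestation rates and holds the result flat over the crediting period:

```python
import numpy as np

def historical_average_baseline(rates_pre, crediting_years):
    """Project a flat baseline: the mean of the preproject deforestation
    rates, repeated over every year of the crediting period."""
    return np.full(crediting_years, np.mean(rates_pre))

# Hypothetical preproject rates (% per year) and a 10-year crediting period.
baseline = historical_average_baseline([0.4, 0.6, 0.5, 0.5], crediting_years=10)
```

A linear or nonlinear projection model would replace the mean with a fitted trend, but the flat average is the form most often used in practice.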

Several studies have reported that baselines set under VM0007 were overestimated because they failed to consider a counterfactual scenario correctly or to eliminate the effect of external factors, such as policy changes (West et al., 2020, 2023). As an example, Figure 1 shows the spatial distribution and the deforestation trends of the PA and RRD of the Valparaiso project (Eaton et al., 2014), which started in 2011. The deforestation rates are calculated using MapBiomas Brazil (Souza et al., 2020), which will also be used in the case study in Section 4. The baseline reported in the Project Design Document (PDD) is included in the chart. The baseline (the red dashed line in Figure 1b) appears to have been overestimated relative to the actual deforestation level in the PA. It also failed to capture the change of trend: deforestation rates in the RRD kept decreasing from 2002 to 2012 but then turned upward again. The main cause of these changes may be the implementation and revision of Brazil's National Climate Change Plan (Ferrante and Fearnside, 2019; West et al., 2019, 2020). The current methodology cannot effectively reflect such impacts in the baseline.

Figure 1. The Valparaiso project.

2.2. Related work

The SCM (Abadie et al., 2010) is one of the most popular methods for causal inference with time-series data. It is designed for a case with one treated unit and multiple control units, which suits the evaluation of a REDD+ project. Given that the RRD consists of multiple subunits (hereinafter "control units"), SCM finds an optimal weight that matches both the preintervention deforestation trends and the covariates of the synthetic control (i.e., the weighted average of control units) to those of the PA. Note that a baseline estimated by SCM can reflect the effects of external factors that occurred after the intervention, but it cannot forecast the baseline, because it is calculated from observations after the intervention. CausalImpact (Brodersen et al., 2015) is another popular method, based on Bayesian state-space models. It shares several ideas with SCM but also differs in important ways. In contrast to SCM, CausalImpact can forecast a baseline with a small modification of the original model, since it is based on state-space modeling, but it does not include a covariate balancing mechanism between the treated unit and the synthetic control.
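The SCM weighting step can be viewed as a constrained least-squares problem. The following is a minimal NumPy/SciPy sketch of the trend-matching part only, with made-up data; it omits the covariate term of the full SCM objective:

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(Y_pre, y_pre):
    """Find simplex weights beta minimizing the preintervention fit error
    between the synthetic control Y_pre @ beta and the treated series y_pre.

    Y_pre: (T0, J) preintervention deforestation rates of J control units.
    y_pre: (T0,)   preintervention deforestation rates of the treated PA.
    """
    T0, J = Y_pre.shape
    res = minimize(
        lambda b: np.sum((y_pre - Y_pre @ b) ** 2),
        x0=np.full(J, 1.0 / J),                                  # uniform start
        bounds=[(0.0, 1.0)] * J,                                 # beta_j >= 0
        constraints=[{"type": "eq", "fun": lambda b: b.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# Toy check: if the treated series equals the first control unit,
# the weight should concentrate on that unit.
rng = np.random.default_rng(0)
Y = rng.uniform(0.0, 1.0, size=(30, 3))
beta = scm_weights(Y, Y[:, 0])
```

The simplex constraint (nonnegative weights summing to one) is what keeps the synthetic control an interpolation of the control units rather than an extrapolation.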

Sparse estimation needs to be considered to deal with small sample problems when applying time-series causal inference models to REDD+ program evaluations. In these cases, the sample size is the number of time points of deforestation data before a project starts. Since deforestation data are often updated annually and available from the 1980s at the earliest, the sample size is usually around 30–40. On the other hand, a considerable number of control units can be necessary to obtain a good synthetic control, which may lead to more parameters than samples. Several studies consider penalization methods for SCM from a frequentist perspective (Doudchenko and Imbens, 2016; Abadie, 2021; Ben-Michael et al., 2021). In CausalImpact, the spike-and-slab prior (George and McCulloch, 1993) is employed to select influential samples for the baseline. Although the spike-and-slab prior is a natural solution to variable/sample selection problems in Bayesian modeling, it leads to substantial computational costs. The Horseshoe prior is a continuous analogue of the spike-and-slab prior and is computationally more tractable (Carvalho et al., 2010). If some prior knowledge is available about the degree of sparsity, it can be incorporated by specifying a prior on the global shrinkage parameter (Piironen and Vehtari, 2017).

In the context of general carbon crediting schemes including REDD+, the issue of junk carbon credits, or overcrediting, has been discussed for a long time (e.g., Haya, 2009; Aldy and Stavins, 2012; Badgley et al., 2022), and baseline setting has been identified as one of its sources (Bento et al., 2016; Haya et al., 2020). SCM and CausalImpact have been applied in several studies evaluating REDD+ projects (Roopsind et al., 2019; Correa et al., 2020; Ellis et al., 2020; West et al., 2020, 2023) and other forest conservation activities (Sills et al., 2015; Rana and Sills, 2018; Simmons et al., 2018) to construct a counterfactual scenario. Many of these studies have reported some degree of overcrediting. To the best of our knowledge, however, no study in the literature addresses ex-ante prediction and ex-post evaluation at once.

3. Model

3.1. Bayesian state-space SCM

We propose a Bayesian state-space SCM, leveraging the time-series modeling of CausalImpact and the covariate balancing of SCM. Consider a REDD+ project that has one PA and whose RRD is divided into multiple control units. We model the preintervention deforestation rates of the PA and the control units by the following state-space model with a local linear trend:

(1) $$ {y}_t={\tilde{z}}_t^{\prime}\beta +{\varepsilon}_t,\hskip1em {\varepsilon}_t\sim N\left(0,{\sigma}_y^2\right), $$
(2) $$ {z}_t={\tilde{z}}_t+{e}_t,\hskip1em {e}_t\sim N\left(0,{\sigma}_z^2I\right), $$
(3) $$ {\tilde{z}}_{t+1}={\tilde{z}}_t+{v}_t+{\eta}_t,\hskip1em {\eta}_t\sim N\left(0,{\sigma}_{\tilde{z}}^2I\right), $$
(4) $$ {v}_{t+1}={v}_t+{\xi}_t,\hskip1em {\xi}_t\sim N\left(0,{\sigma}_v^2I\right), $$
$$ t=1,\dots, {T}_0, $$

where $ {y}_t $ is an observed deforestation rate for the PA at time $ t $ , $ {z}_t={\left({z}_{1,t},\dots, {z}_{J,t}\right)}^{\prime } $ is a vector of observed deforestation rates of the control units at time $ t $ , $ J $ is the number of the control units, $ \beta $ is a weight to be applied to the control units defined on the simplex,

(5) $$ \beta \in {S}^{J-1}=\left\{\beta \;\middle|\;\sum \limits_{j=1}^J{\beta}_j=1,\hskip0.5em {\beta}_j\ge 0,\hskip0.5em j=1,\dots, J\right\}, $$

$ {\tilde{z}}_t={\left({\tilde{z}}_{1,t},\dots, {\tilde{z}}_{J,t}\right)}^{\prime } $ is a latent state vector, and $ {v}_t={\left({v}_{1,t},\dots, {v}_{J,t}\right)}^{\prime } $ is a vector of slope components of the local linear trend. Time $ t=1 $ denotes the year when observation started and $ t={T}_0 $ the year before the intervention. Equations (1)–(2) link the observed data, $ {y}_t $ and $ {z}_t $ , to the latent state, $ {\tilde{z}}_t $ : for the control units, the deforestation rates are observed as the sum of the latent state and noise, while for the PA, the deforestation rate is written as the weighted sum of the latent states of the control units plus noise (Brodersen et al., 2015). The latter relates this model to SCM, apart from covariate balancing. Equations (3)–(4) define the temporal evolution of the latent state by a simple local linear trend model, which enables us to forecast $ {z}_t $ , and thus the baseline.
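To make the generative structure concrete, here is a small NumPy simulation of equations (1)–(4); the noise scales and initial values are made up for illustration, not the paper's fitted values:

```python
import numpy as np

def simulate_ssm(beta, T0, sig_y=0.01, sig_z=0.01, sig_zt=0.005, sig_v=0.001,
                 seed=1):
    """Simulate eqs. (1)-(4): latent local linear trends for J control units,
    noisy control observations z_t (eq. 2), and the PA observation y_t as the
    beta-weighted sum of the latent states plus noise (eq. 1)."""
    rng = np.random.default_rng(seed)
    J = len(beta)
    z_tilde = np.full(J, 0.05)   # initial latent deforestation rates
    v = np.zeros(J)              # initial slope components
    y, Z = [], []
    for _ in range(T0):
        Z.append(z_tilde + rng.normal(0.0, sig_z, J))        # eq. (2)
        y.append(z_tilde @ beta + rng.normal(0.0, sig_y))    # eq. (1)
        z_tilde = z_tilde + v + rng.normal(0.0, sig_zt, J)   # eq. (3)
        v = v + rng.normal(0.0, sig_v, J)                    # eq. (4)
    return np.array(y), np.array(Z)

y, Z = simulate_ssm(beta=np.array([0.5, 0.3, 0.2]), T0=25)
```

Because the slope $ v_t $ evolves as a random walk, continuing the recursion beyond $ T_0 $ yields forecasts of $ z_t $, which is what permits ex-ante baseline projection.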

We consider a shrinkage prior on $ \beta $ to select important control units for more interpretable and stable results. Sparse estimation is needed because the sample size $ {T}_0 $ tends to be small due to data availability, while we need a sufficiently large number of control units $ J $ so that some weight well approximates the PA. Here we introduce the regularized Horseshoe prior proposed by Piironen and Vehtari (2017): for $ j=1,\dots, J $,

(6) $$ {\displaystyle \begin{array}{c}p\left({\beta}_j|{\lambda}_j,\tau, c\right)\propto \hskip0.5em \frac{1}{2{\tilde{\lambda}}_j\tau}\exp \left(-\frac{\beta_j^2}{2{\tilde{\lambda}}_j^2{\tau}^2}\right),\\ {}{\lambda}_j\sim {C}^{+}\left(0,1\right),\hskip1em {\tilde{\lambda}}_j^2=\frac{c^2{\lambda}_j^2}{c^2+{\tau}^2{\lambda}_j^2},\hskip1em {c}^2\sim \mathrm{Inv}-\mathrm{Gamma}\left(1/2,1/2\right),\\ {}\tau \mid {\sigma}_y\sim {C}^{+}\left(0,{\tau}_0\right),\hskip1em {\tau}_0=\frac{m_0}{J-{m}_0}\frac{\sigma_y}{\sqrt{T_0}},\end{array}} $$

where $ {C}^{+}\left(0,1\right) $ is the standard half-Cauchy distribution, $ \tau $ is a global shrinkage parameter, $ {\lambda}_j $ is a local shrinkage parameter, and $ {m}_0 $ is the effective number of nonzero coefficients in $ \beta $ , given as a hyperparameter from prior knowledge. The global shrinkage parameter shrinks all coefficients toward zero, while the local shrinkage parameters control the shrinkage level of each coefficient. Although the global shrinkage parameter has a large impact on the results, a systematic way of specifying its prior had not been well discussed until Piironen and Vehtari (2017) provided one based on prior knowledge of the sparsity. Through $ {m}_0 $ , we can control the number of control units that receive significantly large positive weights.
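The prior scale $ \tau_0 $ in (6) follows directly from the hyperparameters. As a worked example, with the 57 control units of the later case study, $ m_0 = 4 $, and hypothetical values $ T_0 = 16 $ and unit noise scale:

```python
import math

def tau0(m0, J, sigma_y, T0):
    """Scale of the half-Cauchy prior on the global shrinkage parameter tau
    in eq. (6): tau0 = m0 / (J - m0) * sigma_y / sqrt(T0)."""
    return m0 / (J - m0) * sigma_y / math.sqrt(T0)

scale = tau0(m0=4, J=57, sigma_y=1.0, T0=16)  # = 1/53, roughly 0.019
```

The formula makes the intended trade-off explicit: a smaller expected number of nonzero weights $ m_0 $, or a larger pool $ J $, tightens the global shrinkage.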

For covariate balancing, we rely on the method of general Bayesian updating (Bissiri et al., 2016). With this, we reflect the distance between the covariates of the PA and those of the synthetic control as a covariate-dependent prior on $ \beta $ . Together with (6), we set the prior on $ \beta $ as

(7) $$ p\left(\beta |{\left\{{x}_j\right\}}_{j=1}^{J+1},\lambda, \tau, c\right)\propto \exp \left(- wL\left(\beta; {\left\{{x}_j\right\}}_{j=1}^{J+1}\right)\right)\prod \limits_{j=1}^Jp\left({\beta}_j|{\lambda}_j,\tau, c\right), $$

where $ {x}_j $ is a $ K\times 1 $ vector of covariates, with $ j=J+1 $ denoting the PA, $ L $ is a loss function that measures the distance between the PA and the synthetic control, and $ w $ is a tuning parameter. Here we choose $ L $ to be an SCM-like loss function for covariate balancing:

(8) $$ L\left(\beta; {\left\{{x}_j\right\}}_{j=1}^{J+1}\right)=1/2\cdot {\left({x}_{J+1}-{X}_0^{\prime}\beta \right)}^{\prime }V\left({x}_{J+1}-{X}_0^{\prime}\beta \right), $$

where $ {X}_0={\left({x}_1,\dots, {x}_J\right)}^{\prime } $ and $ V $ is the inverse of the covariance matrix of $ {X}_0 $ .
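Equation (8) translates directly into NumPy. The following sketch uses randomly generated covariates for illustration; when the synthetic control balances the PA covariates exactly, the loss vanishes:

```python
import numpy as np

def balance_loss(beta, X0, x_pa, V):
    """Quadratic covariate-balancing loss of eq. (8):
    0.5 * (x_pa - X0' beta)' V (x_pa - X0' beta),
    where X0 is the (J, K) matrix of control covariates, x_pa the PA
    covariate vector, and V the inverse covariance of the controls."""
    d = x_pa - X0.T @ beta
    return 0.5 * d @ V @ d

rng = np.random.default_rng(2)
X0 = rng.normal(size=(5, 3))       # J=5 control units, K=3 covariates
beta = np.full(5, 0.2)             # uniform simplex weights
x_pa = X0.T @ beta                 # PA covariates exactly balanced
V = np.linalg.inv(np.cov(X0, rowvar=False))
loss = balance_loss(beta, X0, x_pa, V)
```

Weighting by the inverse covariance makes the loss scale-free across covariates, so no single covariate dominates merely because of its units.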

Combining equations (1)–(8), we obtain the full posterior as

(9) $$ {\displaystyle \begin{array}{l}\hskip1em p\left(\beta, {\left\{{u}_t\right\}}_{t=1}^{T_0},\Sigma, \lambda, \tau, c|{\left\{{z}_t\right\}}_{t=1}^{T_0},{\left\{{y}_t\right\}}_{t=1}^{T_0},{\left\{{x}_j\right\}}_{j=1}^{J+1},w\right)\\ {}\propto \prod \limits_{t=1}^{T_0}f\left({y}_t,{z}_t,{u}_t|{u}_{t-1},\beta, \Sigma \right)\cdot p\left(\beta |{\left\{{x}_j\right\}}_{j=1}^{J+1},\lambda, \tau, c\right)p\left(\lambda, \tau, c\right)p\left({u}_0\right)p\left(\Sigma \right),\end{array}} $$

where $ f $ is the density function of the model (1)–(4), $ {u}_t={\left(\tilde{z}{'}_t,{v}_t^{\prime}\right)}^{\prime } $ , and $ \Sigma ={\left({\sigma}_y,{\sigma}_z,{\sigma}_{\tilde{z}},{\sigma}_v\right)}^{\prime } $ . After the observation at $ t={T}_1\left(\ge {T}_0\right) $ , we can obtain the posterior predictive distribution of the baseline up to the target period $ t={T}_2\left(\ge {T}_1\right) $ as

(10) $$ {\displaystyle \begin{array}{l}p\left({\left\{{y}_t^{\mathrm{bsl}}\right\}}_{t={T}_0+1}^{T_2}|{\left\{{z}_t\right\}}_{t=1}^{T_1},{\left\{{y}_t\right\}}_{t=1}^{T_1},{\left\{{x}_j\right\}}_{j=1}^{J+1},w\right)=\int \prod \limits_{t={T}_0+1}^{T_2}f\left({y}_t^{\mathrm{bsl}},{z}_t|{u}_t,{u}_{t-1},\beta, \Sigma \right)\\ {}\cdot p\left(\beta, {\left\{{u}_t\right\}}_{t=1}^{T_0},\Sigma, \lambda, \tau, c|{\left\{{z}_t\right\}}_{t=1}^{T_0},{\left\{{y}_t\right\}}_{t=1}^{T_0},{\left\{{x}_j\right\}}_{j=1}^{J+1},w\right)\cdot d\beta \cdot d\Sigma \cdot d\lambda \cdot d\tau \cdot dc\cdot \prod \limits_{t=1}^{T_2}{du}_t\cdot \prod \limits_{t={T}_1+1}^{T_2}{dz}_t,\end{array}} $$

where $ {\left\{{y}_t^{\mathrm{bsl}}\right\}}_{t={T}_0+1}^{T_2} $ is the estimated baseline of the PA from $ t={T}_0+1 $ to $ t={T}_2 $ . As the project proceeds, the baseline can be updated in a unified manner for ex-ante baseline ( $ {T}_1<t\le {T}_2 $ ) and for ex-post baseline ( $ {T}_0<t\le {T}_1 $ ). Note that the estimation of $ \beta $ is based on the data up to $ t={T}_0 $ , because $ \beta $ represents the relation between the PA and the control units without intervention.

3.2. Estimation algorithm

In this section, we discuss an efficient Markov chain Monte Carlo (MCMC) algorithm for sampling the parameters from (9). Although we fit our model using Stan (Stan Development Team, 2016) for the simplicity of implementation in the empirical example below (Section 4), it would be worth discussing the structures of the full conditional distributions and the relevant literature. By the construction of the model, the full conditional distribution of each parameter reduces to a standard distribution, which enables us to obtain a Gibbs sampler easily, except for the weight $ \beta $ and the parameters regarding the regularized Horseshoe prior. Therefore, we focus on the sampling from the full conditional distribution of these parameters.

The model (1) is a Gaussian sparse linear regression model, and if the prior is the basic (unregularized) Horseshoe prior, the full conditional distribution of $ \beta $ is also Gaussian and a Gibbs sampler can be derived using data augmentation techniques (Carvalho et al., 2010; Bhadra et al., 2019). However, our formulation has two problems: (1) since we consider the regularized Horseshoe prior, it is hard to find a closed-form expression for the full conditional distribution of the shrinkage parameters $ {\lambda}_j\hskip0.1em \left(j=1,\dots, J\right) $ and $ \tau $ ; and (2) since the weight $ \beta $ is defined on the simplex, its full conditional is not a simple Gaussian.

For the first problem, we need to rely on a general sampling algorithm that can be applied to densities without a closed form, for example, Hamiltonian Monte Carlo (HMC) and Langevin Monte Carlo methods (Neal, 2011). For the second problem, we can refer to the literature on sampling from truncated Gaussian distributions. It is known that sampling from a Gaussian distribution defined on the simplex $ {S}^{J-1} $ reduces to sampling from a certain truncated Gaussian distribution on $ {\mathrm{\mathbb{R}}}^{J-1} $ (Altmann et al., 2014). In our formulation, the covariate balancing term in (7) can be incorporated in this framework because the choice of the quadratic loss function in (8) makes the full conditional distribution also have a Gaussian kernel:

(11) $$ p\left(\beta |\cdot \right)\hskip0.5em \propto \hskip0.5em \exp \left(-\frac{1}{2{\sigma}_y^2}{\left(\beta -{A}^{-1}b\right)}^{\prime }A\left(\beta -{A}^{-1}b\right)\right),\hskip1em \beta \in {S}^{J-1}, $$

where

$$ {\displaystyle \begin{array}{c}A={\tilde{Z}}_0^{\prime }{\tilde{Z}}_0+w{\sigma}_y^2{X}_0V{X}_0^{\prime }+D,\hskip1em D=\operatorname{diag}\left({\tilde{\lambda}}_1^{-2},\dots, {\tilde{\lambda}}_J^{-2}\right)/{\tau}^2,\hskip1em \\ {}b={\tilde{Z}}_0^{\prime }y+w{\sigma}_y^2{X}_0V{x}_{J+1},\hskip1em {\tilde{Z}}_0={\left({\tilde{z}}_1,\dots, {\tilde{z}}_{T_0}\right)}^{\prime },\hskip1em y={\left({y}_1,\dots, {y}_{T_0}\right)}^{\prime }.\end{array}} $$

Equation (11) indicates that the full conditional distribution of $ \beta $ is the Gaussian $ N\left({A}^{-1}b,{\sigma}_y^2{A}^{-1}\right) $ constrained by the linear constraint $ \beta \in {S}^{J-1} $ , which reduces to a truncated normal distribution on $ {\mathrm{\mathbb{R}}}^{J-1} $ .

To sample from a truncated Gaussian distribution, we may use the exact Hamiltonian Monte Carlo (EHMC) method (Pakman and Paninski, 2014). A traditional approach is to combine Gibbs sampling from a Gaussian with rejection of samples that fall outside the domain, but this is known to be inefficient, particularly in high dimensions. Pakman and Paninski (2014) instead propose an efficient HMC algorithm for sampling from a multivariate Gaussian defined on a space constrained by linear and quadratic inequalities (or products thereof). In this special case, their algorithm achieves a hundred percent acceptance rate, whereas HMC methods in general must reject a proposed sample with some small probability.
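For contrast with EHMC, the traditional rejection approach mentioned above can be sketched as follows (illustrative parameters; workable in low dimension but increasingly wasteful as $ J $ grows):

```python
import numpy as np

def simplex_gaussian_rejection(mean, cov, n_draws, seed=3):
    """Rejection sampler for a Gaussian restricted to the simplex via the
    R^{J-1} parametrization: draw the first J-1 coordinates from the
    unconstrained Gaussian, set beta_J = 1 - sum of the rest, and keep the
    draw only if every coordinate is nonnegative."""
    rng = np.random.default_rng(seed)
    draws = []
    while len(draws) < n_draws:
        head = rng.multivariate_normal(mean, cov)   # beta_1 .. beta_{J-1}
        last = 1.0 - head.sum()                     # beta_J, enforcing sum = 1
        if head.min() >= 0.0 and last >= 0.0:
            draws.append(np.append(head, last))
    return np.array(draws)

# J = 3: a 2-dimensional Gaussian centered well inside the simplex.
samples = simplex_gaussian_rejection(np.array([0.3, 0.3]),
                                     0.01 * np.eye(2), 200)
```

When the Gaussian mass concentrates near a face of the simplex, or $ J $ is large, the acceptance rate of this sampler collapses, which is exactly the failure mode EHMC avoids.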

In Stan, we implement the following variable transformation so that $ \beta $ is contained in the simplex $ {S}^{J-1} $ :

$$ {\tilde{\beta}}_j={\alpha}_j\cdot {\tilde{\lambda}}_j\cdot \tau, \hskip1em {\beta}_j={\tilde{\beta}}_j/\sum \limits_{j^{\prime }=1}^J{\tilde{\beta}}_{j^{\prime }},\hskip1em j=1,\dots, J, $$

where $ {\alpha}_j $ is defined as a non-negative scalar. In this formulation, $ \alpha $ is sampled according to the full posterior (9) and then transformed into $ \beta $ . We use a truncated normal distribution as the prior for $ \alpha $ .
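The transformation itself takes only a few lines; the values of $ \alpha $, the local shrinkage factors, and $ \tau $ below are arbitrary, for illustration only:

```python
import numpy as np

def beta_from_alpha(alpha, lam_tilde, tau):
    """Map nonnegative raw parameters alpha_j to simplex weights beta_j by
    scaling with the local (lam_tilde_j) and global (tau) shrinkage factors
    and normalizing, mirroring the Stan transformation described in the text."""
    beta_tilde = alpha * lam_tilde * tau
    return beta_tilde / beta_tilde.sum()

beta = beta_from_alpha(np.array([1.0, 2.0, 0.5]),
                       lam_tilde=np.array([0.1, 0.1, 0.1]), tau=0.5)
```

Normalizing after shrinkage means a control unit whose $ {\tilde{\lambda}}_j $ is driven toward zero contributes a vanishing weight, so the simplex constraint and the Horseshoe sparsity coexist.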

4. Case study

We apply the proposed model to the Valparaiso project (Eaton et al., 2014) to demonstrate its performance. The Valparaiso project, implemented in the state of Acre, Brazil, is a REDD+ project whose main objective is avoiding unplanned deforestation, such as illegal logging or the conversion of forest to agricultural land. Since 2011, the project has implemented several countermeasures against deforestation, including community outreach and employing local community members as forest guards or in other project staff roles.

To monitor deforestation, we use MapBiomas Brazil (collection 6) (Souza et al., 2020), a land-cover dataset estimated from Landsat satellite imagery, and convert it to a binary forest map at 250 m resolution for 1995 to 2020. We followed West et al. (2020) for preprocessing. Figure 2 illustrates the progress of deforestation around the PA between 2000 and 2020. Between 2000 and 2010, the deforested area approached the PA, especially between the two boundaries. Our focus is to evaluate how much deforestation would have occurred within the PA after 2011 in the absence of the project.

Figure 2. Forest transition map calculated from MapBiomas Brazil (Forest area: green, Nonforest areas: white, deforested area during 2000–2010: yellow, deforested area during 2010–2020: red, PA boundary: blue).

Given that deforestation is caused by different drivers (Laurance et al., 2011; Busch and Ferretti-Gallon, 2017), we include the following covariates in our model to find control units with deforestation risks similar to the PA: elevation from FABDEM (Hawker et al., 2022), (pixel-based Euclidean) distance to road, and annual deforestation rates in the 20 km buffer area around the boundary before the project started. The last covariate expresses the risk of deforestation spreading into the area. Since the selection of covariates affects the accuracy of the model, we carry out covariate selection based on predictive performance within the preproject period. As a result, in addition to distance to road and elevation, we include the annual deforestation rate in the buffer area in 2010 (one year prior to project implementation). See Appendix A for details on covariate selection.

For the boundary of control units, we use the forest district polygons called CAR (Cadastro Ambiental Rural), georeferenced properties organized under Brazil's Rural Environmental Registry. For each control unit, deforestation rates and covariates are aggregated over a CAR polygon, while for the PA, they are aggregated over the project boundary found in the project registry. To reduce the pool size of CARs, we select those whose historical average deforestation rates between 1999 and 2010 (i.e., 12 years prior to the intervention) are close to those of the PA, both within the boundary (from 0.0% to 0.2%) and in the 20 km buffer area (from 0.0% to 0.4%), obtaining 57 CARs. Figure 3 shows the spatial distribution and the deforestation rates of the PA and the selected CARs.

Figure 3. Spatial distribution and deforestation rate of the PA and all the CARs used in analysis.

We fit the model using Stan (Stan Development Team, 2016). We obtain 3000 MCMC samples, of which the first 1000 are discarded as warm-up. The default settings in Stan are used for prior specifications, except for the ones described in Section 3. For the interpretability of results, we set the effective number of nonzero parameters $ {m}_0 $ to 4; this choice may also depend on the sample size, and a smaller value may be preferred when the sample size is limited. The balancing weight $ w $ needs fine-tuning: we adjust it so that the balanced covariates become close to those obtained by SCM, and set $ w=100 $ for the results below.

Figure 4 shows the estimated baseline. Our baseline is much smaller than the project developer's (see Figure 1b). Given that our baseline is naturally continuous with the preproject trend, it appears more reasonable than the developer's. Comparing Figure 4a–c, we find that the 90% interval of the ex-ante baseline includes the posterior mean of the ex-post baseline at least up to three years ahead, implying that our ex-ante estimation worked to some extent. Looking at the ex-post baseline at 2019 (Figure 4c), the project had no effect during the first four years (2011–2014) but gradually started to have a small positive effect after 2015. This may be because the baseline was lifted by the upward trend across Brazil since 2012 (Ferrante and Fearnside, 2019; West et al., 2019), while the PA was protected from that trend. Figure 4 also includes the posterior mean of the baseline estimated without covariate balancing (i.e., $ w=0 $), estimated separately. The baseline without covariate balancing is generally lower than the one with balancing, possibly because the former underestimates the risk of deforestation approaching the PA, which is represented by the deforestation rate in the 20 km buffer area. This implies the importance of covariate balancing in baseline estimation.

Figure 4. Estimated ex-ante and/or ex-post baseline for the Valparaiso project (x-axis: year; y-axis: annual deforestation rate; dotted vertical line: the time when the intervention started (2011); solid line (black): the observed deforestation rate; solid line (blue): the posterior mean of the estimated baseline; blue area: the 90% credible interval of the estimated baseline; dashed line (black): the posterior mean of the estimated baseline without the covariate balancing (i.e., $ w=0 $ ); throughout (a)–(c), $ {T}_0=2010 $ and $ {T}_2=2020 $ ).
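The coverage statement above (the 90% ex-ante interval containing the ex-post posterior mean) can be checked mechanically from the MCMC draws. A minimal sketch, assuming the draws are stored as a samples-by-years array (names and shapes are illustrative):

```python
import numpy as np

def interval_covers(draws, target, level=0.90):
    """draws: (n_samples, n_years) ex-ante baseline draws;
    target: (n_years,) ex-post posterior mean.
    Returns, per year, whether the pointwise `level` credible
    interval contains the target."""
    alpha = (1.0 - level) / 2.0
    lo = np.quantile(draws, alpha, axis=0)
    hi = np.quantile(draws, 1.0 - alpha, axis=0)
    return (lo <= target) & (target <= hi)
```

The same function applied with the observed deforestation rates as `target` gives a quick overcrediting check against the official baseline.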

Table 1 shows the mean of the covariates for the PA and the synthetic control, where the synthetic control is evaluated at the posterior mean of $ \beta $ . The synthetic control is closer to the PA than the simple average of CARs in terms of covariate similarity, suggesting that the deforestation risk of the PA is better approximated by the synthetic control than by the simple average of CARs.

Table 1. Comparison of covariates
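The comparison in Table 1 reduces to two matrix products: the $ \beta $-weighted covariate mean defining the synthetic control and the simple average over control units. A sketch of how such a table can be assembled and checked (all names are illustrative, not from our implementation):

```python
import numpy as np

def balance_table(X_controls, beta, x_pa):
    """Columns: PA covariate values, synthetic-control
    (posterior-mean-weighted) values, simple average of controls.
    Rows correspond to covariates."""
    synth = X_controls.T @ beta
    simple = X_controls.mean(axis=0)
    return np.column_stack([x_pa, synth, simple])

def synth_closer(X_controls, beta, x_pa):
    """True per covariate if the synthetic control is at least as
    close to the PA as the simple average of controls."""
    t = balance_table(X_controls, beta, x_pa)
    return np.abs(t[:, 1] - t[:, 0]) <= np.abs(t[:, 2] - t[:, 0])
```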

We also compare the proposed approach with two popular methods, SCM (Abadie et al., 2010) and CausalImpact (Brodersen et al., 2015), run via the R packages Synth (version 1.1-8) (Abadie et al., 2011) and CausalImpact (version 1.3.0) (Brodersen et al., 2015), respectively. We use the same set of covariates for SCM but no covariates for CausalImpact, because it cannot perform covariate balancing. In addition, since neither method predicts the deforestation rates of control units, we only compare ex-post results at $ {T}_1=2019 $ . Figure 5 shows the baselines estimated by SCM and CausalImpact, along with the result of the proposed method with $ w=100 $ . The proposed method yielded a baseline similar to SCM's once the ex-post data were observed, which suggests it can be regarded as a natural extension of SCM with the added capability of ex-ante prediction. By contrast, the baseline estimated by CausalImpact remained around zero. This may be because CausalImpact cannot include covariates and fits the weight parameter only by minimizing the error in annual deforestation rates, so it might underestimate the deforestation risk of the PA. This highlights the importance of the covariate-dependent prior in Eq. (7).

Figure 5. Comparison of baseline estimates between different methods (proposed method: solid blue; SCM: dashed green; CausalImpact: dash-dotted orange).

5. Discussion and limitations

Our results suggest that the baseline set under VM0007 was likely overestimated in the Valparaiso project, while the project had a small, positive effect on reducing deforestation. Implementing dynamic baseline updating would lead to more reasonable baseline estimates because it can reflect the effects of policy changes, as noted in Ferrante and Fearnside (2019) and West et al. (2019). Our results are qualitatively consistent with Guizar-Coutiño et al. (2022) (i.e., small, positive impacts), but the magnitudes of the effects differ. One reason may be that our unit of analysis is the CAR while theirs is the pixel. Our analysis follows West et al. (2020) in that their unit of analysis is also the CAR and they concluded that the baselines were overestimated; however, their results are more negative about the additionality of the project than ours. One reason for this difference could lie in the covariates considered in the model. In particular, distance to the deforestation edge is considered an important covariate (Laurance et al., 2011; Busch and Ferretti-Gallon, 2017), and our inclusion of the deforestation rate in a buffer area as a proxy for it may explain the difference in the estimated weights.

Although we assumed that all covariates except the deforestation rate in the 20 km buffer area are constant over time, it is also important to treat them as time series, especially the socioeconomic covariates. For example, distance to road is known to be an important driver; indeed, the background of the Valparaiso project is to stop deforestation accelerated by road development. Given the limited update frequency of public data, monitoring and/or modeling covariate growth using, e.g., remote sensing would be necessary. As for the modeling, many extensions are possible. One would be to capture spatial correlation between control units by introducing a transition matrix into the system equation (3), which could reduce estimation error.
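As a toy illustration of that spatial extension (not part of our model), the system equation could include a row-normalized spatial weight matrix $ A $ so that each unit's state is partially pulled toward its neighbors, e.g., $ {x}_t=\left(1-\rho \right){x}_{t-1}+\rho A{x}_{t-1}+{\varepsilon}_t $ . A hypothetical sketch of one such transition step:

```python
import numpy as np

def row_normalize(W):
    """Turn a nonnegative adjacency (or inverse-distance) matrix
    into a spatial averaging matrix with rows summing to one."""
    return W / W.sum(axis=1, keepdims=True)

def step(x, A, rho, eps):
    """One system-equation step with spatial smoothing:
    x_t = (1 - rho) * x_{t-1} + rho * A @ x_{t-1} + eps.
    rho in [0, 1] controls the strength of spatial correlation."""
    return (1.0 - rho) * x + rho * (A @ x) + eps
```

With $ \rho =0 $ this reduces to the independent random walks of the current model, so the extension nests the existing specification.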

Finally, we should mention the technical report published in January 2023 by Verra (Verra, 2023). In the report, Verra criticized West et al. (2020, 2023) for several reasons (see footnote 3). Here we discuss two of their arguments in relation to the present paper. First, they claim that the synthetic controls were constructed “by looking at only a small set of superficial physical characteristics such as initial forest cover, slope, and proximity to state capitals, while excluding the key determinants of deforestation such as forest type, agricultural practices, and in fact any socioeconomic factors whatsoever.” This may be partly true. However, if the effects of the key determinants are reflected in proxy variables directly linked to deforestation risk, such as the deforestation rate in the buffer area, then the resulting synthetic control would not underestimate the deforestation risk. Furthermore, regardless of how a baseline is estimated, it should be continuous with the preproject deforestation level to some extent. At the least, the fact that the official baseline considers those key determinants does not justify the substantial departures from the observed deforestation level in the PA/RRD shown in Figure 1. In this sense, we think SCM-type approaches, including the method proposed here, are effective for evaluating REDD+ projects.

Second, Verra argues that satellite imagery can be helpful for deforestation estimation but that depending too heavily on it can be problematic: deforestation or land-cover data estimated from satellite imagery, including MapBiomas, often disagree with survey data or visual inspection, and the results are sensitive to preprocessing and to the resolution at which they are evaluated, so they need to be handled carefully before being used for REDD+ project assessment. We agree on these points. In fact, deforestation trends differ across datasets in the Valparaiso project. Figure 6 compares the annual deforestation trends calculated from MapBiomas and TMF (Vancutsem et al., 2021) for the RRD, showing a significant difference between them. Considering this, we do not claim our results as a definitive judgment on the effectiveness of the project. Nonetheless, we still think that the substantial discrepancies between the official baseline, West et al.’s estimates, and our own cannot be attributed solely to errors inherent in satellite-based products, and further discussion is needed toward more reliable carbon credits.

Figure 6. Annual deforestation rates for the RRD of the Valparaiso project calculated by MapBiomas and TMF.
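The per-dataset trends behind Figure 6 come down to a simple computation on annual forest masks: the share of the previous year's forest pixels lost during the year. A sketch of that computation (the array layout is illustrative), which also makes concrete why resolution and preprocessing choices matter, since they change which pixels count as forest:

```python
import numpy as np

def annual_deforestation_rates(masks):
    """masks: (n_years, H, W) boolean forest/nonforest maps for one
    region (e.g., the RRD). Returns the yearly rate: pixels flipping
    forest -> nonforest, divided by the previous year's forest area."""
    masks = np.asarray(masks, dtype=bool)
    rates = []
    for prev, curr in zip(masks[:-1], masks[1:]):
        lost = np.logical_and(prev, ~curr).sum()
        rates.append(lost / prev.sum())
    return np.array(rates)
```

Running this on masks derived from two products (e.g., MapBiomas and TMF) over the same boundary reproduces the kind of dataset-dependent divergence shown in Figure 6.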

Author contribution

Conceptualization: K.T., H.S.; Data curation: K.T.; Formal analysis: K.T.; Funding acquisition: H.S.; Investigation: K.T., H.S., K.F., S.S.; Methodology: K.T., S.S.; Software: K.T., Supervision: K.F., S.S.; Validation: K.T.; Visualization: K.T., Writing–original draft: K.T.; Writing–review and editing: K.T., H.S., K.F., S.S.

Competing interest

The authors declare none.

Data availability statement

The code for this work will be made publicly available upon publication at the following GitHub repository: https://github.com/stkhat/bssscm.

Ethics statement

The research meets all ethical guidelines, including adherence to the legal requirements of the study country.

Funding statement

This paper is based on results obtained from a project, JPNP14012, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/eds.2024.5.

Footnotes

This research article was awarded an Open Materials badge for transparent practices. See the Data Availability Statement for details.

3 They also criticized Guizar-Coutiño et al. (2022) and coverage in Britain’s Guardian, but here we focus on West et al.’s studies given their proximity to the present study.

References

Abadie, A (2021) Using synthetic controls: Feasibility, data requirements, and methodological aspects. Journal of Economic Literature 59(2), 391–425.
Abadie, A, Diamond, A and Hainmueller, J (2010) Synthetic control methods for comparative case studies: Estimating the effect of California’s tobacco control program. Journal of the American Statistical Association 105(490), 493–505.
Abadie, A, Diamond, A and Hainmueller, J (2011) Synth: An R package for synthetic control methods in comparative case studies. Journal of Statistical Software 42(13), 1–17.
Aldy, JE and Stavins, RN (2012) The promise and problems of pricing carbon. The Journal of Environment & Development 21(2), 152–180.
Altmann, Y, McLaughlin, S and Dobigeon, N (2014) Sampling from a multivariate Gaussian distribution truncated on a simplex: A review. In 2014 IEEE Workshop on Statistical Signal Processing (SSP), pp. 113–116, Gold Coast, QLD, Australia.
Badgley, G, Freeman, J, Hamman, JJ, Haya, B, Trugman, AT, Anderegg, WRL and Cullenward, D (2022) Systematic over-crediting in California’s forest carbon offsets program. Global Change Biology 28(4), 1433–1445.
Ben-Michael, E, Feller, A and Rothstein, J (2021) The augmented synthetic control method. Journal of the American Statistical Association 116(536), 1789–1803.
Bento, A, Kanbur, R and Leard, B (2016) On the importance of baseline setting in carbon offsets markets. Climatic Change 137(3–4), 625–637.
Bhadra, A, Datta, J, Polson, NG and Willard, B (2019) Lasso meets horseshoe: A survey. Statistical Science 34(3), 405–427.
Bissiri, PG, Holmes, CC and Walker, SG (2016) A general framework for updating belief distributions. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 78(5), 1103–1130.
Brodersen, KH, Gallusser, F, Koehler, J, Remy, N and Scott, SL (2015) Inferring causal impact using Bayesian structural time-series models. The Annals of Applied Statistics 9(1), 247–274.
Busch, J and Ferretti-Gallon, K (2017) What drives deforestation and what stops it? A meta-analysis. Review of Environmental Economics and Policy 11(1), 3–23.
Carvalho, CM, Polson, NG and Scott, JG (2010) The horseshoe estimator for sparse signals. Biometrika 97(2), 465–480.
Correa, J, Cisneros, E, Börner, J, Pfaff, A, Costa, M and Rajão, R (2020) Evaluating REDD+ at subnational level: Amazon fund impacts in Alta Floresta, Brazil. Forest Policy and Economics 116, 102178.
Doudchenko, N and Imbens, GW (2016) Balancing, regression, difference-in-differences and synthetic control methods: A synthesis. Technical report, National Bureau of Economic Research.
Eaton, J, Dickson, R, McFarland, B, Freitas, P and Lopes, MB (2014) The Valparaiso Project: A Tropical Forest Conservation Project in Acre, Brazil. Available at https://registry.verra.org/app/projectDetail/VCS/1113. Accessed 16 Feb. 2024.
Ellis, EA, Sierra-Huelsz, JA, Ceballos, GCO, Binnqüist, CL and Cerdán, CR (2020) Mixed effectiveness of REDD+ subnational initiatives after 10 years of interventions on the Yucatan peninsula, Mexico. Forests 11(9), 1005.
Ferrante, L and Fearnside, PM (2019) Brazil’s new president and “ruralists” threaten Amazonia’s environment, traditional peoples and the global climate. Environmental Conservation 46(4), 261–263.
George, EI and McCulloch, RE (1993) Variable selection via Gibbs sampling. Journal of the American Statistical Association 88(423), 881–889.
Griscom, BW, Busch, J, Cook-Patton, SC, Ellis, PW, Funk, J, Leavitt, SM, Lomax, G, Turner, WR, Chapman, M, Engelmann, J, Gurwick, NP, Landis, E, Lawrence, D, Malhi, Y, Murray, LS, Navarrete, D, Roe, S, Scull, S, Smith, P, Streck, C, Walker, WS and Worthington, T (2020) National mitigation potential from natural climate solutions in the tropics. Philosophical Transactions of the Royal Society B 375(1794), 20190126.
Guizar-Coutiño, A, Jones, JP, Balmford, A, Carmenta, R and Coomes, DA (2022) A global evaluation of the effectiveness of voluntary REDD+ projects at reducing deforestation and degradation in the moist tropics. Conservation Biology 36(6), e13970.
Hawker, L, Uhe, P, Paulo, L, Sosa, J, Savage, J, Sampson, C and Neal, J (2022) A 30 m global map of elevation with forests and buildings removed. Environmental Research Letters 17(2), 024016.
Haya, B (2009) Measuring emissions against an alternative future: Fundamental flaws in the structure of the Kyoto Protocol’s clean development mechanism. University of California, Berkeley Energy and Resources Group Working Paper, ERG09-001.
Haya, B, Cullenward, D, Strong, AL, Grubert, E, Heilmayr, R, Sivas, DA and Wara, M (2020) Managing uncertainty in carbon offsets: Insights from California’s standardized approach. Climate Policy 20(9), 1–15.
Laurance, WF, Camargo, JL, Luizão, RC, Laurance, SG, Pimm, SL, Bruna, EM, Stouffer, PC, Williamson, GB, Benítez-Malvido, J, Vasconcelos, HL, Houtan, KSV, Zartman, CE, Boyle, SA, Didham, RK, Andrade, A and Lovejoy, TE (2011) The fate of Amazonian forest fragments: A 32-year investigation. Biological Conservation 144(1), 56–67.
Neal, RM (2011) MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, New York.
Pakman, A and Paninski, L (2014) Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. Journal of Computational and Graphical Statistics 23(2), 518–542.
Piironen, J and Vehtari, A (2017) Sparsity information and regularization in the horseshoe and other shrinkage priors. Electronic Journal of Statistics 11(2), 5018–5051.
Rana, P and Sills, EO (2018) Does certification change the trajectory of tree cover in working forests in the tropics? An application of the synthetic control method of impact evaluation. Forests 9(3), 98.
Roopsind, A, Sohngen, B and Brandt, J (2019) Evidence that a national REDD+ program reduces tree cover loss and carbon emissions in a high forest cover, low deforestation country. Proceedings of the National Academy of Sciences 116(49), 24492–24499.
Sills, EO, Herrera, D, Kirkpatrick, AJ, Brandão, A, Dickson, R, Hall, S, Pattanayak, S, Shoch, D, Vedoveto, M, Young, L and Pfaff, A (2015) Estimating the impacts of local policy innovation: The synthetic control method applied to tropical deforestation. PLoS ONE 10(7), e0132590.
Simmons, BA, Marcos-Martinez, R, Law, EA, Bryan, BA and Wilson, KA (2018) Frequent policy uncertainty can negate the benefits of forest conservation policy. Environmental Science & Policy 89, 401–411.
Souza, CM, Shimbo, JZ, Rosa, MR, Parente, LL, Alencar, AA, Rudorff, BFT, Hasenack, H, Matsumoto, M, Ferreira, LG, Souza-Filho, PWM, de Oliveira, SW, Rocha, WF, Fonseca, AV, Marques, CB, Diniz, CG, Costa, D, Monteiro, D, Rosa, ER, Vélez-Martin, E, Weber, EJ, Lenti, FEB, Paternost, FF, Pareyn, FGC, Siqueira, JV, Viera, JL, Neto, LCF, Saraiva, MM, Sales, MH, Salgado, MPG, Vasconcelos, R, Galano, S, Mesquita, VV and Azevedo, T (2020) Reconstructing three decades of land use and land cover changes in Brazilian biomes with Landsat archive and Earth Engine. Remote Sensing 12(17), 2735. https://doi.org/10.3390/rs12172735.
Stan Development Team (2016) Stan Modeling Language Users Guide and Reference Manual (ver. 2.30). Available at https://mc-stan.org.
Vancutsem, C, Achard, F, Pekel, J-F, Vieilledent, G, Carboni, S, Simonetti, D, Gallego, J, Aragão, LEOC and Nasi, R (2021) Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Science Advances 7(10), eabe1603.
Verra (2020) VM0007 REDD+ Methodology Framework (REDD+ MF), v1.6. Available at https://verra.org/methodology/vm0007-redd-methodology-framework-redd-mf-v1-6/.
Verra (2022a) Methodology for Improved Forest Management Using Dynamic Matched Baselines from National Forest Inventories, v1.0. Available at https://verra.org/methodology/methodology-for-improved-forest-management/.
Verra (2022b) Public Consultation: Projected Carbon Units. Available at https://verra.org/public-consultation-projected-carbon-units/.
Verra (2023) Technical Review of West et al. 2020 and 2023, Guizar-Coutiño 2022, and Coverage in Britain’s Guardian. Technical report. Available at https://verra.org/technical-review-of-west-et-al-2020-and-2023-guizar-coutino-2022-and-coverage-in-britains-guardian/.
West, TAP, Börner, J and Fearnside, PM (2019) Climatic benefits from the 2006–2017 avoided deforestation in Amazonian Brazil. Frontiers in Forests and Global Change 2, 52.
West, TAP, Börner, J, Sills, EO and Kontoleon, A (2020) Overstated carbon emission reductions from voluntary REDD+ projects in the Brazilian Amazon. Proceedings of the National Academy of Sciences 117(39), 24188–24194.
West, TAP, Wunder, S, Sills, EO, Börner, J, Rifai, SW, Neidermeier, AN, Frey, GP and Kontoleon, A (2023) Action needed to make carbon offsets from forest conservation work for climate change mitigation. Science 381(6660), 873–877.
Figure 1. The Valparaiso project.


Figure 2. Forest transition map calculated from MapBiomas Brazil (Forest area: green, Nonforest areas: white, deforested area during 2000–2010: yellow, deforested area during 2010–2020: red, PA boundary: blue).


Figure 3. Spatial distribution and deforestation rate of the PA and all the CARs used in analysis.
