This corrigendum identifies errors in the calculation of the equilibrium reward $ {w}^{*} $ in the original article (Rundlett and Svolik Reference Rundlett and Svolik2016). Its authors, Ashlea Rundlett and Milan Svolik, regret their inclusion and would like to thank German Gieczewski and Mehdi Shadmehr for identifying them. Milan Svolik would like to thank Tylir Fowler for research assistance.
• The correct $ {w}^{*} $ for the baseline model (p. 186) is
$$ {w}^{*}=\begin{cases}\sqrt{\dfrac{2cF(b+2\epsilon c+cF)}{1+F(1+2\epsilon +2F)}}-c & \mathrm{if}\ b>c\left(\dfrac{1+F(1-2\epsilon )}{2F}\right);\\[6pt] 0 & \mathrm{otherwise}.\end{cases} $$
• The correct $ {w}^{*} $ for the “fraud as insurance” extension (p. 188) is
$$ {w}^{*}=\begin{cases}\sqrt{\dfrac{bFc+{F}^{2}{c}^{2}+2\epsilon F{c}^{2}}{2\sigma \widehat{\theta}+\left[\widehat{\theta}+\sigma -\frac{1}{2}\right]F+{F}^{2}+F\epsilon }}-c & \mathrm{if}\ b>c\left(\dfrac{2\sigma \widehat{\theta}}{F}+\widehat{\theta}+\sigma -\frac{1}{2}-\epsilon \right);\\[6pt] 0 & \mathrm{otherwise}.\end{cases} $$
• In the “differences in competitiveness” extension (p. 189), the payoff offered by the incumbent should be $ w[{R}_i-\alpha {P}_i] $, and the correct $ {w}^{*} $ (p. 8 of the supplementary appendix) is
$$ {w}^{*}=\begin{cases}\sqrt{\dfrac{\frac{bFc}{1-\alpha }+\frac{{F}^{2}{c}^{2}}{1-\alpha }+2{c}^{2}\epsilon F}{\frac{1-\alpha }{2}+\left(1-\frac{\frac{1}{2}-\alpha \pi -F}{1-\alpha }+\epsilon \right)F}}-c & \mathrm{if}\ b>c\left[\dfrac{{\left(1-\alpha \right)}^{2}}{2F}+\dfrac{1}{2}-\alpha \left(1-\pi \right)-\epsilon \left(1-\alpha \right)\right];\\[6pt] 0 & \mathrm{otherwise}.\end{cases} $$
IMPLICATIONS
Resolving the error in the calculation of the equilibrium reward factor $ {w}^{*} $ does not change the key substantive insights of the original model and has no bearing on the paper’s empirical analysis. The two key equilibrium thresholds, $ {\theta}^{*} $ and $ {S}^{*} $, are unaffected and remain correct. The correction is, however, consequential for the indirect effect of two parameters, $ \epsilon $ and F, on the equilibrium level of fraud. After the correction, the derivative of $ {\theta}^{*} $ with respect to $ \epsilon $ is positive for sufficiently large values of b, implying that when the incumbent’s payoff exceeds a threshold, greater district “heterogeneity” reduces equilibrium levels of fraud. Meanwhile, the corrected derivative of $ {\theta}^{*} $ with respect to F is too complicated to yield politically useful insights about the indirect effect of F on $ {\theta}^{*} $.
DETAILED ANALYSIS
Baseline Model
The solution to the incumbent’s optimal choice of $ w\ge 0 $ in the baseline model presented in section A.3 of the supplementary appendix was based on the optimization problem
This formulation did not account for the fact that, as outlined on page 186, no agent engages in fraud in equilibrium when $ \theta <{S}^{*}\hskip0.3em -\hskip0.3em \epsilon $ , while all agents engage in fraud in equilibrium when $ \theta >{S}^{*}+\epsilon $ . In sum, the fraction of agents $ \phi $ that engage in fraud in equilibrium is
Accounting for the above and taking an expectation with respect to the distribution of $ \theta $ , the incumbent solves
Treating $ {\theta}^{*} $ and $ {S}^{*} $ as functions of w, the above simplifies to
where we used
as well as the solutions for $ {\theta}^{*} $ and $ {S}^{*} $ stated on p. 185,
The first-order condition for this optimization problem is
which simplifies to
Solving the above quadratic equation in w, we have two solutions,
which simplify to
Of these, only the latter can be non-negative, which is the case as long as $ b\ge c\left(\frac{1+F(1\hskip0.3em -\hskip0.3em 2\epsilon )}{2F}\right) $ .
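The boundary case can be checked numerically: at $ b=c\left(\frac{1+F(1-2\epsilon )}{2F}\right) $ the interior solution for $ {w}^{*} $ is exactly zero, and it is strictly positive for larger b. A minimal sketch of this check (the parameter values are our own and purely illustrative):

```python
from math import sqrt, isclose

def w_star_interior(b, c, F, eps):
    """Interior solution for the baseline reward factor w*."""
    return sqrt(2 * c * F * (b + 2 * eps * c + c * F)
                / (1 + F * (1 + 2 * eps + 2 * F))) - c

c, F, eps = 1.0, 0.2, 0.1
# non-negativity threshold for b: c * (1 + F(1 - 2*eps)) / (2F)
b_threshold = c * (1 + F * (1 - 2 * eps)) / (2 * F)

# at the threshold the interior solution is exactly zero; above it, positive
assert isclose(w_star_interior(b_threshold, c, F, eps), 0.0, abs_tol=1e-12)
assert w_star_interior(b_threshold + 1.0, c, F, eps) > 0
```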
The second-order condition for this optimization problem is
which holds for any (admissible) parameter values and a non-negative w.
In sum, the equilibrium reward factor $ {w}^{*} $ is
The illustration on page 186, based on parameters $ c=1 $ , $ F=\frac{2}{10} $ , $ \epsilon =\frac{1}{10} $ , and $ b=70 $ , now yields $ {S}^{*}=0.29 $ , $ {\theta}^{*}=0.34 $ , $ {\phi}^{*}=0.78 $ , and $ {w}^{*}=3.62 $ .
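The corrected value of $ {w}^{*} $ in this illustration can be reproduced directly from the baseline formula (a quick numerical sanity check, not part of the original article):

```python
from math import sqrt

c, F, eps, b = 1.0, 0.2, 0.1, 70.0

# corrected baseline w*; the b > c(1 + F(1 - 2*eps))/(2F) branch applies here
w_star = sqrt(2 * c * F * (b + 2 * eps * c + c * F)
              / (1 + F * (1 + 2 * eps + 2 * F))) - c

assert abs(w_star - 3.62) < 0.005  # matches the reported w* = 3.62
```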
The threshold $ {\theta}^{*} $ is decreasing in $ {w}^{*} $ as reported in section A.4 of the supplementary appendix. But to account for the indirect effect of the parameters b, c, F, and $ \epsilon $ on $ {\theta}^{*} $ , its total derivatives must be based on the corrected $ {w}^{*} $ . Treating $ {\theta}^{*} $ as a function of $ {w}^{*} $ , i.e. substituting the expression for $ {w}^{*} $ when it is positive into the expression for $ {\theta}^{*} $ , and differentiating with respect to each of the parameters, we get
Note that while the derivatives of $ {\theta}^{*} $ with respect to b and c have the same sign as those reported in the original appendix, the derivative of $ {\theta}^{*} $ with respect to $ \epsilon $ is positive for large enough values of b. Meanwhile, the derivative of $ {\theta}^{*} $ with respect to F is complicated.
“Fraud as Insurance” Extension
The equilibrium thresholds remain the same as in Equation 1. However, as $ \theta \sim U[\widehat{\theta}\hskip0.3em -\hskip0.3em \sigma, \widehat{\theta}+\sigma ] $ , the incumbent’s objective function becomes
Multiplying by $ 2\sigma $ and dropping terms that are constant in w, the incumbent’s objective is (equivalent to)
After the change of variables $ x:=c+w $ , the incumbent’s objective is
which is equivalent to
The first-order condition is then
which yields
or
This is positive when
We highlight that in the range of admissible $ \sigma $ , we have
so that more prior uncertainty about the incumbent’s popularity reduces the incentives the incumbent offers for fraud.
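This comparative static follows from the corrected $ {w}^{*} $ for the extension: the numerator under the square root is constant in $ \sigma $ while the denominator is increasing in it. A numerical sketch on a grid of admissible $ \sigma $ values (all parameter values here are our own, chosen only for illustration):

```python
from math import sqrt

def w_star_insurance(b, c, F, eps, sigma, theta_hat):
    """Interior w* in the 'fraud as insurance' extension."""
    num = b * F * c + F**2 * c**2 + 2 * eps * F * c**2
    den = (2 * sigma * theta_hat + (theta_hat + sigma - 0.5) * F
           + F**2 + F * eps)
    return sqrt(num / den) - c

b, c, F, eps, theta_hat = 70.0, 1.0, 0.2, 0.1, 0.5
# admissible sigmas satisfy sigma > F/2 + 2*eps = 0.3 (corrected footnote 27)
sigmas = [0.35, 0.45, 0.55, 0.65]
values = [w_star_insurance(b, c, F, eps, s, theta_hat) for s in sigmas]

# w* is strictly decreasing in sigma on this grid
assert all(x > y for x, y in zip(values, values[1:]))
```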
Furthermore, in footnote 27 (p. 188) we should require $ \sigma >\frac{F}{2}+2\epsilon $ rather than $ \sigma >\frac{F}{2} $, so that for appropriately chosen $ \widehat{\theta} $, the prior covers the interval $ \left[\frac{1}{2}-F-2\epsilon, \frac{1}{2}+2\epsilon \right] $. This guarantees that the dominance regions cover all values of $ {S}_i $ for which the agent’s posterior about $ \theta $ is not equal to $ U[{S}_i-\epsilon, {S}_i+\epsilon ] $.
“Differences in Competitiveness” Extension
The payoff offered to agent i should be $ w({R}_i-\alpha {P}_i) $ rather than $ w({R}_i-{P}_i) $ (p. 189), so that (i) agents’ rewards are correctly adjusted for their district’s popularity, instead of agents in popular districts being systematically punished, and (ii) as $ \alpha \to 0 $, the model converges to the baseline. The agent’s payoff then becomes $ w\left[(1-\alpha ){S}_i+{1}_{a_i=f}F\right] $ when the incumbent wins and $ -cF{1}_{a_i=f} $ when the incumbent loses.
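The convergence claim in (ii) can be verified numerically from the two corrected closed forms: as $ \alpha \to 0 $, the extension’s $ {w}^{*} $ approaches the baseline $ {w}^{*} $. A minimal sketch (parameter values, including $ \pi $, are our own and purely illustrative):

```python
from math import sqrt, isclose

def w_star_baseline(b, c, F, eps):
    """Corrected baseline w* (interior branch)."""
    return sqrt(2 * c * F * (b + 2 * eps * c + c * F)
                / (1 + F * (1 + 2 * eps + 2 * F))) - c

def w_star_competitiveness(b, c, F, eps, alpha, pi):
    """Corrected w* in the 'differences in competitiveness' extension."""
    num = (b * F * c / (1 - alpha) + F**2 * c**2 / (1 - alpha)
           + 2 * c**2 * eps * F)
    den = ((1 - alpha) / 2
           + (1 - (0.5 - alpha * pi - F) / (1 - alpha) + eps) * F)
    return sqrt(num / den) - c

b, c, F, eps, pi = 70.0, 1.0, 0.2, 0.1, 0.4

# as alpha shrinks, the extension's w* converges to the baseline w*
for alpha in [1e-3, 1e-6, 1e-9]:
    assert isclose(w_star_competitiveness(b, c, F, eps, alpha, pi),
                   w_star_baseline(b, c, F, eps), rel_tol=1e-2)
```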
The expressions for $ {S}^{*} $, $ {\theta}^{*} $, and $ {\phi}^{*} $ given on page 8 of the Online Appendix are correct. The incumbent’s objective is then
Dropping terms that are constant in w, this is equivalent to
After the change of variables $ x:=w+c $ , this equals
which is equivalent to
or to
The first-order condition is then
which yields
or
This is positive when
Paralleling our last remark in the previous section, in Eq. A.2 (8, Online Appendix) we should require $ F+\alpha \pi +(1\hskip0.3em -\hskip0.3em \alpha )2\epsilon \le \frac{1}{2} $ and $ \alpha (1\hskip0.3em -\hskip0.3em \pi \hskip0.3em -\hskip0.3em 2\epsilon )\le \frac{1}{2}\hskip0.3em -\hskip0.3em 2\epsilon $ to guarantee that the dominance regions cover all values of $ {S}_i $ for which the agent’s posterior about $ \theta $ is not $ U[{S}_i\hskip0.3em -\hskip0.3em \epsilon, {S}_i+\epsilon ] $ .