
Individualized Text Messages about Public Services Fail to Sway Voters: Evidence from a Field Experiment on Ugandan Elections

Published online by Cambridge University Press:  15 July 2021

Ryan S. Jablonski
Affiliation:
Department of Government, London School of Economics and Political Science
Mark T. Buntaine*
Affiliation:
Bren School of Environmental Science & Management, University of California, Santa Barbara
Daniel L. Nielson
Affiliation:
Department of Government, University of Texas at Austin
Paula M. Pickering
Affiliation:
Department of Government, William & Mary
*Corresponding author. Email: [email protected]

Abstract

Mobile communication technologies can provide citizens access to information that is tailored to their specific circumstances. Such technologies may therefore increase citizens’ ability to vote in line with their interests and hold politicians accountable. In a large-scale randomized controlled trial in Uganda (n = 16,083), we investigated whether citizens who receive private, timely, and individualized text messages by mobile phone about public services in their community punished or rewarded incumbents in local elections in line with the information. Respondents claimed to find the messages valuable and there is evidence that they briefly updated their beliefs based on the messages; however, the treatment did not cause increased votes for incumbents where public services were better than expected nor decreased votes where public services were worse than anticipated. The considerable knowledge gaps among citizens identified in this study indicate potential for communication technologies to effectively share civic information. Yet the findings imply that when the attribution of public service outcomes is difficult, even individualized information is unlikely to affect voting behavior.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Introduction

The rapid spread of mobile phones in low-income countries enables users to gain new information directly relevant to their well-being, including prevailing prices at markets, opportunities for employment, and business ideas. Indeed, there is good evidence that gaining access to a mobile phone can be an important catalyst for improved livelihoods (Dammert, Galdo and Galdo 2014). Yet there is far less evidence about whether mobile phones can help people overcome the political barriers that keep them in poverty.

Several studies have investigated interventions and circumstances that promote information sharing from citizens to public officials using mobile phones (Ferrali et al. 2020; Grossman, Humphreys and Sacramone-Lutz 2014, 2020; Buntaine, Nielson and Skaggs 2019). However, the available evidence does not indicate that this kind of information sharing causes meaningful changes in the provision of public services (Grossman, Platas and Rodden 2018; Buntaine, Hunnicutt and Komakech 2020).

Fewer studies have tested whether good-governance organizations can use mobile phones to provide information to voters so that their choices at the polls align with the programmatic performance of politicians, which would give politicians incentives to deliver good public services. Buntaine et al. (2018) report that sending text messages by mobile phone about corruption in district governments caused Ugandan voters to condition their votes on the information for the lower office of district councillor but not for the higher office of district chair. Aker, Collier and Vicente (2017) report that a civic campaign conducted partly by mobile phone increased turnout in Mozambique but did not decrease electoral irregularities. Yet neither of these interventions leveraged the full potential of mobile phones to tailor information to the preferences and circumstances of each individual voter. In contrast to other channels through which voters receive information, such as radio, television, or newspapers, mobile phones might contribute to electoral accountability by providing voters with information that is individualized to their interests and contexts.

We investigate whether individualized messages about the comparative quality of public services that individual voters deemed most important changed those voters’ choices in local Ugandan elections. Prior to the 2016 district (LC5) and sub-county (LC3) elections, local enumerators performed audits of primary schools, roads, health care, and water access in 762 Ugandan villages in a nationwide, area probability sample. Footnote 1 We completed a baseline survey with 16,083 Ugandan citizens with mobile phones in the same villages and asked each respondent which of these four public services was most important in determining their vote for local officials.

We collaborated with a non-governmental organization in Uganda to send factual messages about these public services to voters prior to local elections. The density of these treatments was intentionally kept low to minimize the probability of influencing the aggregate outcome of any election, and the voters receiving the messages provided individual informed consent. We individualized the messages in several ways. First, we shared information about the public service that each participant stated was most important when deciding how to vote. Second, we localized the information by conducting and sharing audits from each respondent’s village. Third, we contextualized the information by sharing how service quality in the respondent’s village compared to that in other villages in the district. Finally, we made the information personal by referring to the respondents by their first name and by using their preferred local language.

We predicted and preregistered that individualizing information would be an effective strategy for improving electoral accountability. Yet, while most citizens found the messages useful and there is some evidence that treated voters updated their beliefs about the comparative quality of public services, the text messages did not significantly alter voters’ choices for district or sub-county offices. Good news indicating that the quality of the selected local service – whether it be roads, health clinics, water access, or primary schools – was better than expected did not significantly increase votes for incumbent district or sub-county councillors or chairs. Bad news that the chosen service was worse than expected did not significantly decrease support for incumbents. These null results are precisely estimated and robust to numerous alternative specifications and data subsets.

The findings cast further doubt on the potential for informational campaigns to promote electoral accountability (Dunning, Grossman, Humphreys, Hyde and McIntosh 2019; Dunning et al. 2019), especially where voters struggle to attribute credit or blame. As in many decentralized environments, responsibility for Ugandan public services is shared among multiple layers of government (Martin and Raffler 2021). This makes inferring responsibility for public service outcomes challenging for voters. Footnote 2 Additionally, while we find evidence of belief updating after district elections, these effects do not persist after we sent additional messages prior to sub-county elections, reflecting the difficulty of durably shifting political beliefs. The results highlight the many challenges citizens face in sanctioning politicians for poor public services, or rewarding them for good public services, in low-information, decentralized environments.

Research expectations

Information about public performance and public goods appears to affect voting behavior when it is salient and attributable to individual politicians. Public audits of local government officials released before elections in Brazil significantly reduced the probability of reelection for politicians who engaged in above-average corruption (Ferraz and Finan 2008). Report cards on politician performance in India’s slums induced higher turnout and higher vote shares for incumbents who were rated favorably on public spending (Banerjee, Kumar, Pande and Su 2010). A civic information campaign in Mali, which included information about politician performance, appeared to increase programmatic voting (Gottlieb 2016).

However, in many contexts, sending voters information about the performance of their incumbent politicians does not seem to influence voting behavior on average. Humphreys and Weinstein (2013), for instance, found no impact of scorecards on voter choices in Uganda’s national parliamentary elections or on parliamentarians’ actions. Transparency about politician performance can even work against public accountability by inducing incumbents to hide their activity (Malesky, Schuler and Tran 2012) or discouraging voters from turning out (Chong et al. 2015). Most persuasively, a meta-analysis of six coordinated trials across five countries found no detectable average effect on vote choice from sending voters information about politician performance prior to elections (Dunning, Grossman, Humphreys, Hyde and McIntosh 2019; Dunning et al. 2019). Our study was part of this Metaketa I initiative as an alternative treatment arm, aiming to offer a contrast with the more common approach of providing similar information to all voters.

In particular, we leverage an insight of Lieberman, Posner and Tsai (2014) that information must be individually important, novel, and different from prior beliefs to change behavior. Using mobile phones enabled our team to deliver information tailored to respondents’ circumstances and expressed informational needs. Unlike most existing interventions, we provided information privately about the public service respondents stated was most relevant to their vote choice and analyzed the effects conditional on voters’ prior beliefs about politicians’ performance. Additionally, rather than informing voters about broad regional or national government performance, we tailored the information to each individual’s context. Finally, we made the information timely by sending messages in the days immediately prior to the election.

We hypothesized that providing private, individualized, and timely information would have a number of direct and heterogeneous effects. Specifically, when respondents receive positive information about incumbent politicians’ relative performance as compared to their prior beliefs – “good news” – they should be more likely to support incumbents and to strengthen beliefs about incumbent integrity and effort. We expected the opposite effects when respondents receive “bad news.” We also expected good news to enhance a sense of political efficacy and thus increase turnout and bad news to depress turnout.

We expected the informational treatment to have larger effects when respondents received information very different from their prior beliefs, were more uncertain about the performance of politicians at baseline, or placed greater importance on public services. We also expected that respondents who were not part of the same tribe as the incumbent, who received information consistent with their political alignment, and who had not been given gifts prior to the election would be more likely to respond to the messages. We preregistered all of these hypotheses in advance of treatment. Footnote 3

Research design

Context and sample

Uganda is a semi-authoritarian country, with restricted access to information and considerable inequities in public service delivery, poverty, and effective governance (Tripp 2010; Tumushabe et al. 2010). The treatments targeted the 2016 elections for chairs and councillors in districts (LC5) and sub-counties (LC3), which are the two most important levels of government for the delivery of basic public services. Elections for these offices are held every five years and won with a plurality of votes. Incumbents are not term-limited. Citizens assigned to treatment received text messages on how their preferred local service in their village compared to district averages. Citizens assigned to a placebo condition received text messages about the general importance of public services for well-being. We sent a series of reinforcing text messages by mobile phone in the days before the February 2016 district elections for chairs and councillors and before the March 2016 sub-county elections for chairs and councillors.

Several features of Uganda’s 2016 local elections make them particularly illuminating for understanding the effects of information on voting. First, frequent government interference tends to focus on national elections, allowing more opportunities for citizens to hold officials accountable in local elections. Approximately 85% of local elections were contested by at least two candidates (district chair races in our sample averaged three competitors) (Electoral Commission 2016). Candidates from seven political parties and independents contested LC5 chair elections, while candidates from 12 parties and independents contested LC3 chair elections. Most respondents (73%) in our sample expected local elections to be free and fair. Second, unlike citizens in more established democracies, citizens in Uganda face large barriers to obtaining accurate information about politics due to government control and repression of public media (Tripp 2010). As we document, most citizens lack the ability to effectively assess the comparative quality of their public services, and many have misconceptions about the functioning and performance of local governments (Bainomugisha et al. 2015). Due to the relative political openness and large service delivery remit of local councils, these are also elections where effective civic interventions are likely to have a large policy impact. We discuss the context of these elections in more detail in SI Section 1.

We completed baseline surveys with 16,083 subjects from 762 nationally representative villages in 27 of 111 Ugandan districts. Details on the sample can be found in SI Section 2. Because the experiment required that subjects possess mobile phones, the final sample skews more educated and more male than the general population (Afrobarometer 2015).

Treatment, placebo, and baseline survey

Enumerators conducted independent audits of three local public services in each sampled village: access to improved water sources; road conditions; and the cleanliness, availability of medicines, and wait times at local health facilities. In addition, we collated data from our partner Twaweza’s preexisting audit of primary schools, which independently tested student achievement. Focus groups indicated these four services would be especially salient to voters. We provide more details on each audit in SI Section 4.

From the audits, we created indices of “service quality” normalized by district. The process produced the treatment, which indicates whether services in each village are “much better,” “better,” “a little worse,” or “much worse” than in other villages in the same district. Messages were sent to subjects only for the public service they selected as most important or, when an audit was not available, their second choice (applicable to 22% of respondents; see SI Section 10.2). We intended this treatment to provide subjects with evidence about whether politicians were performing well or poorly compared to other villages in the district. This information was novel and valuable for the majority of respondents (see SI Table S4 and Section 10.2).
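
To make this construction concrete, the following is a minimal sketch, not the authors’ replication code, of how village audit scores could be normalized within each district and mapped to the four message categories. The column names ("district", "village", "water_score") and the category cut points are hypothetical assumptions.

```python
# Minimal sketch (assumed column names and cut points, not the authors' code):
# normalize each village's audit score within its district and bin the result
# into the four comparative categories used in the treatment messages.
import pandas as pd

audits = pd.DataFrame({
    "district": ["A", "A", "A", "A", "B", "B"],
    "village": ["v1", "v2", "v3", "v4", "v5", "v6"],
    "water_score": [0.9, 0.4, 0.6, 0.2, 0.7, 0.3],
})

# Standardize each village's score relative to other villages in its district.
audits["z"] = audits.groupby("district")["water_score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

def category(z: float) -> str:
    """Map a district-normalized score to a message category (cut points assumed)."""
    if z >= 0.5:
        return "much better"
    if z >= 0.0:
        return "better"
    if z >= -0.5:
        return "a little worse"
    return "much worse"

audits["category"] = audits["z"].apply(category)
print(audits[["district", "village", "z", "category"]])
```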

A baseline survey conducted before treatment collected information on prior beliefs about public services, voting intentions, and the most important public service, as well as background information about the subjects’ prior political participation and demographic characteristics (see SI Sections 10.2 and 11.1). We use these baseline data to conduct balance and attrition tests in SI Section 11. We also use these data to test for heterogeneous effects hypothesized in advance. Footnote 4

Using complete randomization, we assigned half of all subjects in each village to the public services treatment and the other half to the placebo condition. Over 5 days, treated subjects received two to four messages per day in their native language reporting factual audit information about the comparative quality of the public service that they deemed most important for voting. They were provided a series of messages about the overall score for that service relative to other villages in the district. We also explained whether the responsibility for the public service belonged to the LC3, LC5, or both. Placebo respondents received only general information – in the form of public service announcements – about the importance of quality public services without any information about the performance of their politicians. Examples of treatment and placebo messages are shown in SI Tables S2 and S3.
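
As one illustration of the assignment procedure, the sketch below implements complete randomization of subjects to treatment and placebo within each village; the subject table and seed are hypothetical, and this is not the project’s randomization code.

```python
# Minimal sketch: complete randomization within each village, assigning half of
# the village's subjects to the public-services treatment and half to placebo.
# Data layout and seed are assumptions for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2016)

subjects = pd.DataFrame({
    "subject_id": range(12),
    "village": ["v1"] * 6 + ["v2"] * 6,
})

arm = pd.Series(index=subjects.index, dtype=object)
for village, idx in subjects.groupby("village").groups.items():
    n = len(idx)
    labels = np.array(["treatment"] * (n // 2) + ["placebo"] * (n - n // 2))
    arm.loc[idx] = rng.permutation(labels)  # shuffle labels within the village

subjects["arm"] = arm
print(subjects)
```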

Voting and turnout outcomes

To produce measures of vote choice and turnout, local enumerators based at a call center conducted endline surveys following each election. The endline surveys measured (1) votes for district and sub-county council chairpersons and councillors, (2) perceptions and knowledge of the incumbents’ performance, (3) vote buying and motivations for voting, (4) engagement with elected officials, and (5) voter turnout.

We used self-reports of voting from the post-election survey, as opposed to actual votes counted at the precinct level, for several reasons. First, for ethical reasons, we desired to avoid affecting aggregate election outcomes. Given large enough treatment effects, random assignment by village may have swung local elections. Second, for analytic precision and to align with the larger meta-analytic study of which this experiment was a part (Dunning, Grossman, Humphreys, Hyde and McIntosh 2019; Dunning et al. 2019), we wanted to account for individual-level variables such as prior beliefs and voting intent. Third, treatment assignment at the village level would have considerably reduced statistical power.

The measures of vote choice from our endline survey correlate well with the official returns (see SI Section 6 and Figure S6). We employed multiple approaches to detect possible misreporting, and we show that responses do not meaningfully differ depending on whether the respondent was surveyed after the announcement of the election results or whether the respondent could consistently describe the polling station. See SI Section 5.

Estimation

As preregistered, we split the sample into two groups based on whether each subject was eligible to receive good or bad news about their preferred public service, given their prior beliefs. We then collapse all types of good news and bad news into single treatment indicators $T_i^{+}$ and $T_i^{-}$, which equal one when subject $i$ is treated and is part of the relevant subset $L^{+}$ or $L^{-}$, respectively, for good or bad news.
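
A minimal sketch of this subgroup construction follows, under the assumption (not stated in the text) that news is “good” whenever the audit category exceeds the subject’s prior belief on an ordered scale and “bad” otherwise; all variable names are hypothetical.

```python
# Minimal sketch (assumed coding): define the good-news and bad-news subsets by
# comparing each subject's prior belief to the audit category, then collapse
# treatment into the indicators T_i^+ and T_i^-.
import pandas as pd

scale = {"much worse": 0, "a little worse": 1, "better": 2, "much better": 3}

df = pd.DataFrame({
    "prior_belief": ["better", "much worse", "much better"],
    "audit": ["much better", "a little worse", "a little worse"],
    "treated": [1, 0, 1],
})

# "Good news" here means the audit is better than the prior belief (an assumption).
df["news"] = (df["audit"].map(scale) > df["prior_belief"].map(scale)).map(
    {True: "good", False: "bad"}
)

df["T_plus"] = ((df["news"] == "good") & (df["treated"] == 1)).astype(int)
df["T_minus"] = ((df["news"] == "bad") & (df["treated"] == 1)).astype(int)
print(df)
```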

Our primary estimating equation is given by Equation 1 (shown for the good news subgroup). In it, $y_{ij,t=1}$ indicates whether the subject voted for the incumbent party for political office $j$, $y_{ij,t=0}$ indicates whether the subject stated during the baseline survey that they intended to vote for the incumbent party, $\beta$ is a vector of estimated coefficients, $X_i$ is a matrix of prespecified, pretreatment covariates, $\nu_j$ is a village fixed effect, and $\varepsilon_i$ is the error term, with standard errors clustered by individual when pooling across offices and sharp-null standard errors when not pooling. We test our hypotheses on vote choice only for contested offices, with and without an extensive array of pretreatment covariates, which produced substantively similar results. SI Section 12 explains departures from our pre-analysis plan, which do not affect the substantive results reported in the main text:

(1) $$y_{ij,t=1} = \alpha + \tau_1 T_i^{+} + \vartheta\, y_{ij,t=0} + \beta X_i + \nu_j + \varepsilon_i$$
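
As a rough illustration of how Equation 1 could be estimated, the sketch below fits this ANCOVA specification on simulated data with village fixed effects and errors clustered by individual; the variable names and data-generating process are hypothetical, and this is not the replication code.

```python
# Minimal sketch of the Equation 1 specification on simulated data: endline vote
# for the incumbent regressed on the collapsed good-news treatment indicator,
# baseline voting intention, a pretreatment covariate, and village fixed effects,
# with standard errors clustered by individual (all names and data are assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_offices = 200, 2

df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subjects), n_offices),
    "village": np.repeat(rng.integers(0, 20, n_subjects), n_offices),
    "treat_good": np.repeat(rng.integers(0, 2, n_subjects), n_offices),
    "vote_intent_t0": rng.integers(0, 2, n_subjects * n_offices),
    "age": rng.integers(18, 70, n_subjects * n_offices),
})
df["vote_incumbent_t1"] = (
    0.5 * df["vote_intent_t0"] + rng.normal(0, 0.5, len(df)) > 0.3
).astype(int)

model = smf.ols(
    "vote_incumbent_t1 ~ treat_good + vote_intent_t0 + age + C(village)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})

print(model.params["treat_good"], model.bse["treat_good"])
```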

Results

We first consider four direct effects of treatment in Figure 1: vote choice for the incumbent (panels A and B), beliefs about incumbent integrity (panels C and D), beliefs about incumbent effort (panels E and F), and turnout (panels G and H). We see no consistent evidence that subjects eligible for good news who were treated responded positively on any of these outcomes, nor that subjects eligible for bad news who were treated responded negatively, compared to placebo subjects. The large sample size and its resulting strong statistical power mean that these null results are precisely estimated.

Figure 1 Direct effects of treatment with good and bad news subgroups. Notes: 95% confidence intervals derived from sharp null standard errors by randomization inference. Sample used for estimation of panels A and B excludes uncontested elections and elections where the incumbent switched parties, which is a modification from the prespecified sample.
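
The Figure 1 note refers to sharp-null standard errors obtained by randomization inference. The sketch below, on simulated data with assumed variable names, illustrates the general idea of re-randomizing treatment within villages and summarizing the permutation distribution of the difference in means; it is not the authors’ implementation.

```python
# Minimal sketch of randomization inference under the sharp null of no effect for
# any unit: permute treatment within villages, recompute the difference in means,
# and summarize the permutation distribution (simulated data, assumed layout).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

df = pd.DataFrame({
    "village": np.repeat(np.arange(50), 10),
    "treat": np.tile(np.array([1] * 5 + [0] * 5), 50),
    "vote_incumbent": rng.integers(0, 2, 500),
})

def diff_in_means(outcome: pd.Series, treat: pd.Series) -> float:
    return outcome[treat == 1].mean() - outcome[treat == 0].mean()

observed = diff_in_means(df["vote_incumbent"], df["treat"])

perm_stats = []
for _ in range(2000):
    permuted = df.groupby("village")["treat"].transform(
        lambda s: rng.permutation(s.values)  # re-randomize within each village
    )
    perm_stats.append(diff_in_means(df["vote_incumbent"], permuted))
perm_stats = np.array(perm_stats)

p_value = np.mean(np.abs(perm_stats) >= abs(observed))
sharp_null_se = perm_stats.std(ddof=1)
print(f"observed = {observed:.3f}, RI p-value = {p_value:.3f}, sharp-null SE = {sharp_null_se:.3f}")
```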

A likely explanation for the lack of a treatment effect on vote choice (Figure 1, panels A–B) is that respondents did not update their beliefs about incumbent integrity or effort in response to treatment (Figure 1, panels C–F). We conjecture that voters’ challenges in attributing public service outcomes to local politicians may explain why the informational treatment did not affect these outcomes.

Finally, the evidence suggests that neither good news nor bad news significantly affected voter turnout, as shown in panels G and H of Figure 1. The possible exception is the effect of bad news in increasing turnout for district chair, which goes against expectations; however, this result does not retain significance after adjustment for multiple comparisons.
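
The adjustment procedure is not specified in this passage; purely as an illustration of how such an adjustment works, the sketch below applies the Benjamini-Hochberg step-up procedure from statsmodels to a set of hypothetical p-values.

```python
# Illustrative only: adjust a vector of hypothetical p-values for multiple
# comparisons with the Benjamini-Hochberg procedure (the paper does not state
# which adjustment was used, so this choice is an assumption for demonstration).
from statsmodels.stats.multitest import multipletests

raw_pvalues = [0.03, 0.21, 0.47, 0.08, 0.64, 0.35]  # hypothetical p-values
rejected, adjusted, _, _ = multipletests(raw_pvalues, alpha=0.05, method="fdr_bh")

for raw, adj, rej in zip(raw_pvalues, adjusted, rejected):
    print(f"raw p = {raw:.2f} -> adjusted p = {adj:.2f}, significant: {rej}")
```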

As displayed in Figure 2, preregistered expectations regarding heterogeneous treatment effects – for large differences in priors, uncertainty, importance of services, political alignment, tribal identity, and receipt of gifts – are likewise not borne out in the results. The only significant exception is that voters who are not politically aligned with the incumbent and who are treated with bad news are more likely to turn out (panel H). Figure 2 shows treatment effects within subgroups defined by the prespecified moderator for ease of exposition. The preregistered approach without pooling across offices, which yields the same conclusions, is displayed in SI Figures S9–S12. These results provide more confidence that the treatment did not prompt programmatic voting, since the hypothesized effects are not evident among voters most likely to respond to the treatment.

Figure 2 Conditional effects of treatment with good and bad news subgroups. Notes: Estimation pools each outcome across all politicians for each individual, with 95% confidence intervals derived from robust standard errors clustered at the level of the individual, which is the level of pooling results across politician and election types for this figure. Sample used for the estimation of panels A–F and I–L excludes uncontested elections, elections where the incumbent switched parties, and redistricted constituencies.

These precisely estimated null effects are surprising in light of the successful receipt of the messages by a large majority of subjects and a post-election survey with a random sample of subjects that indicated the messages were generally considered valuable (Table S4). Additionally, we see strong evidence that treated subjects expressed beliefs about the comparative quality of public services that were either closer to the audit information (“partial updating”) or matching the audit information (“perfect updating”) following the LC5 election (Figure 3). Belief updating did not persist after the second wave of treatment messages prior to the LC3 election. These decaying effects over time may indicate that subjects changed their beliefs in response to campaigns in the 2 weeks between surveys. Subjects may also have found the treatment information more salient when they first received it prior to the LC5 election compared to when the same information was repeated prior to the LC3 election. Consistent with findings in more developed democracies, these estimates suggest political information delivered during an active campaign may have short-lived effects (Gerber et al. 2011).
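
To illustrate the distinction between partial and perfect updating described above, the sketch below classifies simulated belief trajectories on an ordered scale; the coding scheme and variable names are assumptions rather than the authors’ measure.

```python
# Minimal sketch (assumed coding): a subject "perfectly updates" when the endline
# belief matches the audit category and "partially updates" when it moves closer
# to the audit category than the baseline belief on an ordered scale.
import pandas as pd

scale = {"much worse": 0, "a little worse": 1, "better": 2, "much better": 3}

def classify(prior: str, posterior: str, audit: str) -> str:
    p0, p1, a = scale[prior], scale[posterior], scale[audit]
    if p1 == a:
        return "perfect updating"
    if abs(p1 - a) < abs(p0 - a):
        return "partial updating"
    return "no updating"

beliefs = pd.DataFrame({
    "prior": ["much worse", "better", "much better"],
    "posterior": ["a little worse", "better", "a little worse"],
    "audit": ["better", "better", "much worse"],
})
beliefs["update"] = beliefs.apply(
    lambda r: classify(r["prior"], r["posterior"], r["audit"]), axis=1
)
print(beliefs)
```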

Figure 3 Belief updating about the comparative quality of public services. Notes: Estimates with 95% confidence intervals derived from robust standard errors clustered at the district level. Sample includes subjects who were able to receive treatment messages about their priority public service and not reassigned to a different service due to missingness in audits.

Discussion

Our results provide further reason to be cautious about efforts to improve electoral accountability through information provision. The intervention we study was designed to improve on more general, blanket information campaigns that have typically yielded disappointing results on programmatic voting (Dunning, Grossman, Humphreys, Hyde and McIntosh 2019; Dunning et al. 2019). We provide well-powered evidence that even when information is novel, timely, individualized, and private, it will not necessarily affect vote choice. Because the information was tailored to individual circumstances and preferences, the results of this study provide even stronger evidence than currently exists about the impotence of information campaigns in settings where attribution for outcomes is not clear.

One possible explanation for these null results is that voters were uncertain about which politicians are most accountable for the quality of the local public services. Responsibility and financing for many local services are shared across LC3s, LC5s, and civil servants working for the national government (Manyak and Katono 2011; Martin and Raffler 2021). As a result, it is far from trivial to know whom to praise or blame for public service outcomes. Nevertheless, we do not detect treatment effects among voters who received messages attributing their chosen service to one level of government, as compared to voters who received messages about services attributed to multiple levels of government (see SI Section 9).

Our treatment and use of technology are purposefully similar to those being adopted by many good-governance organizations and thus offer practical lessons. First, our results strongly suggest that citizens in Uganda do in fact lack sufficient information to hold politicians accountable for public services (SI Figure S15). Yet they also confirm that providing information is rarely enough to close accountability gaps, particularly in decentralized contexts or where long or overlapping chains of resources and responsibilities make attributing outcomes to individual officials difficult (Dunning, Grossman, Humphreys, Hyde and McIntosh 2019; Lieberman, Posner and Tsai 2014; Banerjee et al. 2010). While our intervention provided brief information about politicians’ responsibility for services, future interventions might better pair information about performance in delivering public services with more substantial civic education about politicians’ responsibilities, which has been effective elsewhere (Gottlieb 2016; Adida et al. 2019).

Additionally, our research highlights the importance of understanding citizen beliefs about accountability and attribution as a precursor to effective interventions. While information in our study was novel and individualized, there is no evidence that the treatment caused changes in beliefs about incumbent integrity or effort, even when beliefs about the comparative quality of public services changed. This suggests that good-governance organizations need to better understand not just the institutional structure of public service delivery, but also citizen beliefs about these institutions and the nature of service delivery.

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2021.15

Data Availability Statement

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at doi:10.7910/DVN/VL4UTZ (Jablonski et al. 2021).

Acknowledgments

We thank Jacob Skaggs, Catherine Tabingwa, Immaculate Apio Ayado, Sarah Bush, and Twaweza for contributions to design and implementation. We thank Ola Pozor and Daniel Aboagye for excellent research assistance. This project was approved by the UCSB Human Subjects Committee (#15-0690); IRBs at BYU (#15381), William and Mary (2015-09-10-10589), and Temple (via IAA); the LSE Research Ethics Committee; the Uganda Mildmay Research Ethics Committee (0309-2015); the Uganda National Council for Science and Technology (SS 3943); and the Ugandan Office of the President (ADM 154/212/03). We also thank Guy Grossman, Thad Dunning, Susan Hyde, Macartan Humphreys, Craig McIntosh, and other participants in Metaketa I for invaluable contributions to the design of this research. This project was funded by an anonymous donor as part of EGAP Metaketa I.

Conflicts of Interest

The authors declare no conflicts of interest.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

The author order was determined using the American Economic Association’s author randomization tool, which generates a public, certified record of the randomized ordering (confirmation: zDDXqfYDzGtn). More information is available at: https://www.aeaweb.org/journals/policies/random-author-order/generator.

1 Audits on schools were completed by our partner NGO, Twaweza.

2 As we elaborate in SI Section 1, responsibility for most services in this study is shared by LC3 and LC5 governments. Additionally, central government bureaucrats retain control over many procurement procedures and budgets.

3 The pre-analysis plan and amendments are available at https://osf.io/t4qjx/.

4 Many questions were informed by the pre-analysis plan of the Metaketa I project (Dunning et al. 2019).

References

Adida, Claire, Gottlieb, Jessica, Kramon, Eric and McClendon, Gwyneth. 2019. Under what conditions does performance information influence voting behavior? Lessons from Benin. In Metaketa I: The Limits of Electoral Accountability, eds. Dunning, T., Grossman, G., Humphreys, M., Hyde, S. and McIntosh, C. Cambridge: Cambridge University Press, 81–117.
Afrobarometer. 2015. “Afrobarometer Data Round VI, Uganda.” Available at http://afrobarometer.org/countries/uganda-0 (Accessed March 2017).
Aker, Jenny C., Collier, Paul and Vicente, Pedro C. 2017. “Is information power? Using mobile phones and free newspapers during an election in Mozambique.” Review of Economics and Statistics 99(2): 185–200.
Bainomugisha, Arthur, Muyomba-Tamale, L., Muhwezi, Wilson W., Cunningham, Kiran, Ssemakula, Eugene G., Bogere, George, Rhoads, Russell and Mbabazi, Jonas. 2015. “Local Government Councils Scorecard Assessment 2014/15.” ACODE Policy Research Series, No. 70. http://www.acode-u.org/Files/Publications/PRS_70.pdf (Accessed December 2016).
Banerjee, Abhijit V., Banerji, Rukmini, Duflo, Esther, Glennerster, Rachel and Khemani, Stuti. 2010. “Pitfalls of participatory programs: Evidence from a randomized evaluation in education in India.” American Economic Journal: Economic Policy 2(1): 1–30.
Banerjee, Abhijit, Kumar, Selvan, Pande, Rohini and Su, Felix. 2010. “Do Informed Voters Make Better Choices? Experimental Evidence from Urban India.” Unpublished Manuscript. Available at http://www.povertyactionlab.org/node/2764 (Accessed March 22, 2017).
Buntaine, Mark T., Hunnicutt, Patrick and Komakech, Polycarp. 2020. “The Challenges of Using Citizen Reporting to Improve Public Services: A Field Experiment on Solid Waste Services in Uganda.” Journal of Public Administration Research and Theory.
Buntaine, Mark T., Jablonski, Ryan, Nielson, Daniel L. and Pickering, Paula M. 2018. “SMS texts on corruption help Ugandan voters hold elected councillors accountable at the polls.” Proceedings of the National Academy of Sciences 115(26): 6668–73.
Buntaine, Mark T., Nielson, Daniel L. and Skaggs, Jacob T. 2019. “Escaping the Disengagement Dilemma: Two Field Experiments on Motivating Citizens to Report on Public Services.” British Journal of Political Science, 1–21.
Chong, Alberto, De La O, Ana L., Karlan, Dean and Wantchekon, Leonard. 2015. “Does Corruption Information Inspire the Fight or Quash the Hope? A Field Experiment in Mexico on Voter Turnout, Choice, and Party Identification.” Journal of Politics 77(1): 55–71.
Dammert, Ana C., Galdo, Jose C. and Galdo, Virgilio. 2014. “Preventing Dengue through Mobile Phones: Evidence from a Field Experiment in Peru.” Journal of Health Economics 35: 147–61.
Dunning, Thad, Grossman, Guy, Humphreys, Macartan, Hyde, Susan D., McIntosh, Craig, Nellis, Gareth, Adida, Claire L., Arias, Eric, Bicalho, Clara, Boas, Taylor C., et al. 2019. “Voter information campaigns and political accountability: Cumulative findings from a preregistered meta-analysis of coordinated trials.” Science Advances 5(7): eaaw2612.
Dunning, Thad, Grossman, Guy, Humphreys, Macartan, Hyde, Susan and McIntosh, Craig, eds. 2019. Information, Accountability, and Cumulative Learning: Lessons from Metaketa I. Cambridge: Cambridge University Press.
Electoral Commission. 2016. The Republic of Uganda 2015/2016 General Elections Report. Technical report, Electoral Commission.
Ferrali, Romain, Grossman, Guy, Platas, Melina R. and Rodden, Jonathan. 2020. “It Takes a Village: Peer Effects and Externalities in Technology Adoption.” American Journal of Political Science 64(3): 536–53.
Ferraz, Claudio and Finan, Frederico. 2008. “Exposing Corrupt Politicians: The Effects of Brazil’s Publicly Released Audits on Electoral Outcomes.” Quarterly Journal of Economics 123(2): 703–45.
Gerber, Alan S., Gimpel, James G., Green, Donald P. and Shaw, Daron R. 2011. “How large and long-lasting are the persuasive effects of televised campaign ads? Results from a randomized field experiment.” American Political Science Review 105(1): 135–50.
Gottlieb, Jessica. 2016. “Greater expectations: A field experiment to improve accountability in Mali.” American Journal of Political Science 60(1): 143–57.
Grossman, Guy, Humphreys, Macartan and Sacramone-Lutz, Gabriella. 2014. “I wld like u WMP to extend electricity 2 our village: On Information Technology and Interest Articulation.” American Political Science Review 108(3): 688–705.
Grossman, Guy, Humphreys, Macartan and Sacramone-Lutz, Gabriella. 2020. “Information technology and political engagement: Mixed evidence from Uganda.” Journal of Politics 82(4): 1321–36.
Grossman, Guy, Platas, Melina R. and Rodden, Jonathan. 2018. “Crowdsourcing accountability: ICT for service delivery.” World Development 112: 74–87.
Humphreys, Macartan and Weinstein, Jeremy M. 2013. “Policing Politicians: Citizen Empowerment and Political Accountability in Uganda.” Unpublished Manuscript.
Jablonski, Ryan S., Buntaine, Mark T., Nielson, Daniel L. and Pickering, Paula M. 2021. “Replication Data for: Individualized text messages about public services fail to sway voters: Evidence from a field experiment on Ugandan elections.” Harvard Dataverse, V1. https://doi.org/10.7910/DVN/VL4UTZ
Lieberman, Evan S., Posner, Daniel N. and Tsai, Lily L. 2014. “Does information lead to more active citizenship? Evidence from an education intervention in rural Kenya.” World Development 60: 69–83.
Malesky, Edmund, Schuler, Paul and Tran, Anh. 2012. “The adverse effects of sunshine: A field experiment on legislative transparency in an authoritarian assembly.” American Political Science Review 106(4): 762–86.
Manyak, Terrell G. and Katono, Isaac Wasswa. 2011. “Impact of multiparty politics on local government in Uganda.” African Conflict and Peace Building Review 1(1): 8–38.
Martin, Lucy and Raffler, Pia J. 2021. “Fault Lines: The Effects of Bureaucratic Power on Electoral Accountability.” American Journal of Political Science 65(1): 210–24.
Tripp, Aili Mari. 2010. Museveni’s Uganda: Paradoxes of Power in a Hybrid Regime. Challenge and Change in African Politics. Boulder, CO: Lynne Rienner.
Tumushabe, Godber, Mushemeza, Elijah Dickens, Tamale, Lillian Muyomba, Lukwago, Daniel and Ssemakula, Eugene. 2010. “Monitoring and Assessing the Performance of Local Government District Councils in Uganda: Background, Methodology and Score Card.” ACODE Policy Research Series, No. 31. http://www.acode-u.org/Files/Publications/PRS_31.pdf (Accessed July 30, 2020).
