
Be Explicit: Identifying and Addressing Misaligned Goals in Collaborative Research Teams

Published online by Cambridge University Press: 01 August 2022

Nicholas Haas*
Aarhus University, Denmark

Abstract

The misalignment of goals among researchers, external organizational partners (OPs), and study participants is thought to pose a challenge to the successful implementation of collaborative research projects. However, the goals of different collaborative team members almost never are elicited, making identification of misalignment and its potential consequences a difficult task. In this evaluation of a United Nations Nonviolent Communication Program conducted in Bangladesh, the Maldives, and Sri Lanka, I collected qualitative and quantitative data on OPs’ and study participants’ expected program impacts. I find that there are differences in OP and participant goals and that misalignment appears to bear some responsibility for participant dissatisfaction with the program. I also observe evidence that as the program progressed, participants’ expected program impacts began to more closely approximate those of the OPs. I conclude with thoughts on the benefits of explicitly measuring research team goals and expectations and addressing their misalignment.

© The Author(s), 2022. Published by Cambridge University Press on behalf of the American Political Science Association

The shift from individuals to collaborative research teams driving knowledge production has been well documented across academic fields (McDermott and Hatemi 2010; Wagner, Park, and Leydesdorff 2015). Researchers today often find themselves working with colleagues from different countries and partnering with various external organizational partners (OPs), from governmental agencies to nonprofits and survey firms (Haas et al. 2022; Wagner, Park, and Leydesdorff 2015). For instance, Butler (2019) found that 62% of field experiments—a method on the rise in political science as in other fields—published between 2000 and 2017 in top political science journals involved some type of collaboration between a researcher and an external partner.

Whereas collaborative research offers several potential benefits, it also generates increased possibilities for confusion and misalignment among both team members and the populations they study. As Haas et al. (2022) discussed, divergences in the goals, identities, and incentives of researchers, OPs, and the populations they study are common and, when unaddressed, can result in ethical and practical challenges. These challenges are particularly exacerbated in the presence of power asymmetries, a reality of many collaborations that can lead to the privileging of some goals over others—with those under study particularly likely to lose out (Corduneanu-Huci, Dorsch, and Maarek 2022; Haas et al. 2022; Herman et al. 2022).


This article focuses on OPs’ and study participants’ goals and asks three questions.[1] First, can we identify empirically the differences in goals between members of a collaborating OP and the population under study? Second, when divergences in goals exist, are some privileged over others, and does such prioritization affect participants’ experience of the program? Third, how might researchers working in collaborative research teams effectively address divergences in goals?

To answer these questions, I drew on insights from a collaborative study on Nonviolent Communication (NVC) conducted between September and December 2021. The program—which the OP was piloting with the goal that it later could be scaled up and possibly evaluated with a randomized controlled trial—brought together me as the researcher; two United Nations (UN) organizations (i.e., UN Women and the UN Development Programme, or UNDP); three UN offices in the countries in which the program was conducted (i.e., Bangladesh, the Maldives, and Sri Lanka); numerous individuals and organizations involved in facilitating the program; and approximately 100 men and women who participated in the program (Haas 2022).

Before the program began, I collected qualitative and quantitative data on OP and study-participant program aims, which I supplemented with midline and endline surveys and interviews with study participants. First, I found evidence of misalignment of goals between the OP and study participants: whereas OP individuals prioritized gender-related issues, study participants expressed a greater desire for the program to address violent extremism in their communities. Second, I found that over the course of the program, participants became more likely to highlight gender issues as a reason to participate and less likely to highlight violent extremism—a shift potentially consistent with the OP’s goals receiving comparatively higher priority. Participants also expressed dissatisfaction with a perceived lack of attention to the community. Finally, I conclude with thoughts on how to address misalignment in team-member and study-population goals, advocating in particular for the explicit pre-program elicitation and communication of expectations and aims.

This study extends the literature on practical and ethical challenges in collaborative research in a few key ways (Haas et al. 2022; Levine 2021). Most notably, whereas scholars have highlighted potential issues with misaligned OP and study-participant goals and speculated that negative outcomes are attributable to this misalignment, data on parties’ aims almost never are elicited and reported. By explicitly measuring individuals’ goals for and experiences of a program, I explored empirically the presence of such misalignment and its impact. Such explicit elicitation, I argue, is a simple yet potentially effective way to address misalignment.

MISALIGNED GOALS AND EXPECTATIONS

Scholars have highlighted how researchers, OPs, and study participants who engage (knowingly or unknowingly) in collaborative research projects are driven by different motivations, goals, and incentives (Haas et al. 2022; Levine 2021). For example, researchers often operate according to a short time horizon and aim to maximize their chances of obtaining a positive treatment effect—and thus, under a system that tends to penalize null findings, of publication (Haas et al. 2022; Levine 2021). In contrast, OPs often look for long-term, sustainable solutions to the problems on which they are working.

Scholars often speculate that negative or, at the very least, unintended project outcomes are attributable to misaligned goals and expectations. For example, Haas et al. (2022) referenced a study by Bryan, Choi, and Karlan (2021) in which the authors partnered with a Filipino evangelical antipoverty organization to evaluate the impact of its religious education program on poor people. The authors observed that the program influenced not only participants’ incomes but also their religious identification. Although the outcome in this case may have been desired by the OP, it may have been at odds with the expectations and aims of researchers and the study population. The questions and projects that motivate researchers, OPs, and development-aid donors often have been criticized for their distance from the goals of those under study (Herman et al. 2022).

Many studies report backlash to or negative experiences with experimental treatments, indicating possible misalignment between the goals and expectations of those implementing the study and at least a portion of those randomly assigned to receive its treatment. For instance, Blair, Karim, and Morse (2019, 642) found evidence of backlash to increased police patrolling in Liberia among those who the authors argued “benefit from customary law.” Often, researchers note that they did not anticipate such backlash, indicating a possible lack of awareness of misalignment. Even when it does not cause harm, a failure to account for misalignment in goals and expectations can lead researchers and OPs to reach faulty conclusions. For instance, whereas researchers and an implementing OP might design an experiment to measure other-regarding preferences, experimental subjects instead might perceive the game as an opportunity to signal the need for development aid (Cilliers, Dube, and Siddiqi 2015).

The growth in international research collaborations may further exacerbate issues related to misalignment of goals. First, collaborators who are geographically and culturally distant may have more contrasting motivations and less prior knowledge of one another’s constraints and aims, and they may struggle to communicate and address any inconsistencies. Second, power asymmetries may privilege the goals of some individuals over others, leaving those without strong advocates—often, the population under study—particularly ill served (Corduneanu-Huci, Dorsch, and Maarek 2022; Herman et al. 2022). These asymmetries are common and not limited to international studies or to studies of low-income countries by teams based in high-income countries. As Deaton (2020, 21) observed, “Even in the US, nearly all RCTs [randomized controlled trials] on the welfare system are done by better-heeled, better-educated, and paler people on lower-income, less-educated, and darker people.”

Thus, there is good reason to be concerned about potential misalignment of goals and expectations, which can lead to negative or undesired outcomes, particularly among the study population. However, researchers often lack data on parties’ expectations and goals and therefore are left to speculate about the role of misalignment in research outcomes.


RESEARCH DESIGN

I studied a pilot NVC training program conducted in 2021 in Bangladesh, the Maldives, and Sri Lanka. The training brought together a diverse team that included a researcher, leaders from multiple UN organizations and country offices, donors, an NVC expert, facilitators and translators, and approximately 100 participants.

NVC aims to improve communication between individuals by helping them to identify their unmet needs and by endowing them with the confidence to make actionable demands for those needs to be addressed (see online appendix figure A1). The program was conducted during nine full-day sessions (10 in Sri Lanka) divided into waves, with a few weeks elapsing between waves and each successive wave becoming more advanced (see online appendix table A1). Participants were recruited by UN country offices and cooperating nongovernmental organizations.[2] Sessions included activities such as role playing and empathic-listening exercises using material from the participants’ lives.

Before the program began, I conducted a survey with OP team members and the study population. I then conducted follow-up surveys with participants after the second wave (i.e., midline survey) and the third wave (i.e., endline survey) of the program. I supplemented qualitative and quantitative data from these surveys with insights from semi-structured interviews conducted with study participants and session notes. I recorded 148 completed survey responses from study participants (table 1). I also was able to obtain survey responses from 13 members of the OP team, including representatives from all three country offices, UN Women and the UNDP, the NVC intervention leader, and program facilitators.

Table 1 Study Participant Survey Respondents by Wave and Country

Notes: Numbers indicate the total number of study respondents who started (completed) the survey. A few of the respondents’ countries could not be verified.

I used a few different measures of individuals’ expected program impacts. Both OP members and study participants were asked to rank the following areas in terms of priorities for the program: social cohesion, gender inequality, violent extremism, and gender-based violence (see online appendix section B for survey text).[3] They also were asked to reflect in open-ended questions on how a successful program impact would look. In addition, study participants were asked to select and rank from a set of possible reasons those that motivated them to participate in the program. Subsequent waves of the survey aimed to modify question wording as little as possible to effectively track changes over time. For example, in the endline survey, participants were asked to select and rank the reasons that they would give to someone else who was considering attending the program.

Individuals who expressed interest in participating in the program applied for a slot.[4] Descriptions of expected program deliverables were relatively vague and mentioned all of the priority areas described previously. Because participants voluntarily applied to take part in this program, I viewed their expected gains from joining as reflective not only of their expectations but also of their goals vis-à-vis the program.

RESULTS

What did OP team members hope to gain from the training program, and did their goals match the desired program impacts of study participants? Figure 1 compares rankings of priorities before the program began for study participants (left panel) and OP members (right panel). Although both groups appeared to highly value social cohesion, differences emerged regarding gender inequality and violent extremism. Specifically, 50% of OP members ranked gender inequality as their first priority for the program, compared with only 14% of study participants—a difference that is statistically distinguishable from zero.[5] In contrast, 50% of study participants ranked violent extremism as their first or second priority, whereas the same share of OP members ranked extremism as their lowest priority of the four issues.[6]
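The group comparisons reported in footnotes 5 and 6 rest on a standard two-sample test of proportions. A minimal sketch in Python follows; the counts and group sizes are illustrative placeholders, not the study’s raw data.

```python
# Two-sample test of proportions comparing the share of OP members and
# study participants who ranked a given issue first.
# NOTE: the counts and group sizes below are hypothetical, for illustration.
from statsmodels.stats.proportion import proportions_ztest

count = [7, 18]    # e.g., respondents ranking gender inequality first
nobs = [13, 130]   # OP members surveyed, study participants surveyed

z_stat, p_value = proportions_ztest(count, nobs)
print(f"z = {z_stat:.2f}, two-tailed Pr = {p_value:.2f}")
```

A one-tailed version, as reported in footnote 6, can be obtained by passing `alternative="larger"` (or `"smaller"`) to the same function.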

Figure 1 OP and Study Participant Program Goals

Note: This figure compares pre-program issue-area priority rankings for study participants (left panel) and members of the OP (right panel). Respondents were asked: “How would you rank the following in terms of changes you hope the program will help promote?” A ranking of first meant that a respondent ranked an issue as their first and, thus, highest priority. Based on OP feedback, an additional option, “greater common understandings,” was added to this question for study participants. To increase comparability, rankings for the same four issue areas are compared relative to one another.

I thus found evidence of misalignment of goals and expectations between study participants and OP members. Where misalignment exists, do we observe—as the literature predicts—that the aims of the OP are prioritized more heavily than those of study participants? Figure 2 displays changes in the participants’ perceived program priority areas from the pre-program baseline survey to the endline survey, providing evidence consistent with this expectation. Specifically, compared with the baseline survey, study participants in the endline survey were more likely to state gender inequality (comparatively more prioritized by the OP before the program began; see figure 1) as a program priority and were less likely to state violent extremism as a priority (comparatively more prioritized by study participants). In contrast, I did not observe any evidence of over-time changes in perceived priority for issues on which there was no misalignment between participants and the OP (i.e., social cohesion and gender-based violence).

Figure 2 Over-Time Changes in Participants’ Perceived Program Priorities

Note: This figure displays changes in individuals’ rankings of expected and perceived program issue-impact areas. For each area, rankings (dependent variable) are regressed on an indicator variable for whether data are from the endline survey (=1) or the pre-program baseline survey (=0). Higher values indicate higher perceived program priority. Regression results without (solid lines) and with (light lines) country fixed effects, as well as 90% and 95% confidence intervals, are shown.
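The specification described in this note can be sketched as follows. The long-format layout and the column names (ranking, endline, country, issue) are assumptions for illustration, not the replication file’s actual variable names.

```python
# Figure 2 specification sketch: for each issue area, regress the ranking on
# an endline indicator (endline = 1, baseline = 0), without and then with
# country fixed effects. Assumes one row per respondent-issue observation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_long.csv")  # hypothetical file name

for issue in ["social_cohesion", "gender_inequality",
              "violent_extremism", "gender_based_violence"]:
    sub = df[df["issue"] == issue]
    base = smf.ols("ranking ~ endline", data=sub).fit()
    fe = smf.ols("ranking ~ endline + C(country)", data=sub).fit()
    print(issue,
          round(base.params["endline"], 2),
          base.conf_int(alpha=0.10).loc["endline"].round(2).tolist(),  # 90% CI
          fe.conf_int(alpha=0.05).loc["endline"].round(2).tolist())    # 95% CI
```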

It is interesting that even as both OP team members and study participants prioritized social cohesion before the program began, I also observed evidence suggesting that participants over the course of the program lowered their expectations regarding its impact on the broader community. Whereas 91% of pre-program respondents chose “a desire to be involved in programs affecting my community” as a primary reason for joining the program, this percentage declined to 79% in the midline survey and to 62% in the endline survey.[7] Consistent with this decline, a large portion of endline respondents reported that the program improved outcomes primarily for participants only (8%) or for participants and their close contacts only (35%). Consequently, study participants’ prioritization of community outreach appears to have been somewhat at odds with the program design. However, it is unclear whether the shortcomings were caused by misalignment between participants and at least some members of the OP or by a challenge in implementation even in the presence of alignment.[8]

A final question concerns the effects of misaligned goals and expectations on study participants’ experience and evaluation of the program. I observed quantitative evidence that study participants for whom I expected misalignment to be greater had worse perceptions of the program. For example, only 58% of those who stated pre-program that violent extremism was a main priority—and who therefore were expected to be disappointed by the lack of program focus on the topic—later stated that the program exceeded their expectations, compared with 88% of those who did not rank violent extremism as a top priority.[9]

Many of the quantitative findings reported in this article are supported by my qualitative data. First, mirroring figure 1, OP members exhibited a greater pre-program emphasis on gender. For instance, when asked to reflect on any desired program changes, one respondent wrote that they hoped the program would change “misconceptions surrounding gender equality and how inequality many times contributes to fueling conflict.” Another stated that they hoped for changes in beliefs such as “women are inferior” and “men know best what serves the society.” In contrast, when asked what they hoped to gain from the program, study participants were far more likely to mention violence—often with an application to the community. One study participant’s response for desired impact was a common refrain: “How to deal with violent cases and how to minimize violence within the community.” Differences in focus were evident when considering the frequency of terms relating to gender, violence, and the community in open-ended responses to the question on desired program impacts: 38% of study participants mentioned violence in their answer, 29% mentioned the broader community, and 20% mentioned gender versus 14%, 21%, and 36%, respectively, of OP members.
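The term-frequency comparison just described can be approximated with simple keyword matching. The keyword lists below are illustrative guesses; the article does not publish its exact coding rules.

```python
# Share of open-ended responses mentioning each theme, via word-boundary
# keyword matching. Responses and keyword lists are illustrative only,
# not the study's coding scheme.
import re

responses = [
    "How to deal with violent cases and how to minimize violence within the community.",
    "Misconceptions surrounding gender equality can fuel conflict.",
]  # in practice: all open-ended answers for one group

themes = {
    "violence": ["violence", "violent", "extremism"],
    "community": ["community", "communities", "society"],
    "gender": ["gender", "women", "men"],
}

for theme, words in themes.items():
    hits = sum(any(re.search(rf"\b{w}\b", r, re.IGNORECASE) for w in words)
               for r in responses)
    print(f"{theme}: {hits / len(responses):.0%}")
```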

Second, when asked in the endline survey to evaluate program impacts, study participants tended to highlight issues more highly valued by the OP, which again suggests a greater prioritization of OP goals. For instance, one respondent stated that the program helped them to “know more about gender equality in my life.” Another wrote that the program was most “useful for men.” Most accounts highlighted individuals’ improvements in their personal life as opposed to the broader community. For instance, one participant shared how the program had improved their communication with their daughter; another reflected that the program had allowed them to “reduce my personal conflict with me and my mind.”

Third, study participants’ complaints about the program and suggested areas for improvement often centered on issues—violent extremism and community impact—that my findings suggest were not prioritized as highly by the program and the implementing OP. For instance, one study participant wrote, “I think we can discuss more social issues…I would love to learn how I can handle and should act in a more critical social issue like violence in the name of religion.” Many noted a desire for more community outreach. These findings provide additional evidence that misalignment of goals led to some participant dissatisfaction with the program. They also suggest that participants’ goals came to approximate OP goals at least in part because the intervention content more closely mirrored OP priorities, rather than only because participants learned what the OP wanted to hear.

CONCLUSION

This study presents evidence on the role of goals and expectations using explicit measures elicited from members of a collaborative research project. I found evidence of goal misalignment between the OP and study participants; that where misalignment existed, the goals of the OP seemingly received higher priority; and that this misalignment led to some participant dissatisfaction with program deliverables.


How can collaborative research teams address issues related to misaligned goals and expectations? I posit that the explicit elicitation and communication of team members’ goals and expectations can provide two central benefits.

First, once misalignment in individuals’ goals has been identified, collaborative research teams can aim to adjust the program design so that it better meets the needs of all parties. The program under study in this article, for instance, could have been adjusted to better meet study participants’ desires for a focus on violent extremism and the community based on pre-program goal-elicitation insights.

Second, even when misalignment in goals cannot be sufficiently addressed to satisfy all parties, the advance communication of this fact can align team-member expectations. In the example of the NVC program, perhaps OP team members could have concluded that the program could not address issues related to violent extremism in the broader community. Communication of this fact to study participants could have led them to adjust downward their expectations on this issue, thereby reducing their likelihood of disappointment.

Of course, the explicit elicitation and communication of team members’ goals is not a salve for all issues. First, direct elicitation is not always possible for all parties—for instance, when study participants are part of a project without their knowledge. In this event, it nevertheless can be valuable to elicit and address misalignment in the goals and expectations of a subset of team members (e.g., researchers or the OP). Notably, not only were there differences within the OP in the NVC study; some OP members even conceded that they did not have a clear expectation for how the program should work or via which pathway. When direct elicitation of study participants’ goals is not possible, research team members can rely on local knowledge and implementers, fieldwork, baseline survey data, and other already-available information to gain advance insight into what study participants’ goals might be. Maintaining a constant line of contact also helps to identify and address potential mismatches as they arise.

Second, explicitly acknowledged goals may not be fully informative in that they may fail to capture more implicit or sensitive aims or pressures. Researchers can take steps to gain a fuller picture: for instance, my baseline survey was conducted before individuals were exposed to what other group members might view as a socially desirable answer; I clarified that surveys were anonymous and that there were no “correct” answers; and I supplemented surveys with in-depth interviews in which I could probe further and intuit more implicit signals. Researchers additionally can consider adopting popular methods for eliciting truthful answers on sensitive survey topics (e.g., list experiments).
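For reference, a standard list experiment estimates the prevalence of a sensitive attitude as the difference in mean item counts between a treatment group (shown J baseline items plus the sensitive item) and a control group (shown only the J baseline items). A sketch on simulated data:

```python
# Difference-in-means estimator for a list experiment, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, J = 200, 4
control = rng.integers(0, J + 1, size=n)       # counts of J baseline items
holds_view = rng.random(n) < 0.30              # simulated true prevalence: 30%
treatment = rng.integers(0, J + 1, size=n) + holds_view  # J items + sensitive item

prevalence_hat = treatment.mean() - control.mean()
print(f"Estimated prevalence: {prevalence_hat:.2f}")
```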

Third, in some areas the communication of misalignment alone may be insufficient. For instance, although this article focuses on an OP and study participants—in part because I was the only researcher on the project—researchers often operate on relatively short time horizons and prioritize publication. To ensure that both parties’ needs were met, the OP and I formally agreed that I would deliver an evaluation report and that they would allow me to retain any obtained data. Formalized agreements may be necessary to address these more structural considerations (see also Haas et al. 2022).

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the Harvard Dataverse at https://doi.org/10.7910/DVN/WINLHO.

SUPPLEMENTARY MATERIALS

To view supplementary material for this article, please visit http://doi.org/10.1017/S1049096522000944.

CONFLICTS OF INTEREST

The author declares that there are no ethical issues or conflicts of interest in this research.

Footnotes

1. The second and fifth sections discuss researcher goals.

2. See online appendix table A2 for sample demographics.

3. It was theorized that, by improving communication and understanding, NVC would improve social cohesion and gender relations and—particularly because gender stereotyping was viewed as a method of encouraging radicalization—reduce violent extremism. See online appendix section A.4 for program background provided by the OP.

4. See online appendix section A.3 for a sample application form.

5. A two-sample test of proportions yields a z statistic = 2.49, Pr = 0.01.

6. A two-sample test of the proportions of participant and OP respondents who ranked violent extremism as their fourth priority yields a z statistic = 1.67, Pr = 0.09 (Pr = 0.04 for a one-tailed test).

7. A two-sample test of proportions comparing the baseline survey with the endline survey yields a z statistic = 3.51, Pr = 0.00.

8. The lack of community impact in the presence of individual learning also may reflect a difference in partial versus general equilibrium effects of the program (Barrett Reference Barrett2021).

9. A two-sample test of proportions yields a z statistic = 1.93, Pr = 0.05.

REFERENCES

Barrett, Christopher B. 2021. “On Design-Based Empirical Research and Its Interpretation and Ethics in Sustainability Science.” Proceedings of the National Academy of Sciences 118 (29): 1–10.
Blair, Robert A., Sabrina M. Karim, and Benjamin S. Morse. 2019. “Establishing the Rule of Law in Weak and War-Torn States: Evidence from a Field Experiment with the Liberian National Police.” American Political Science Review 113 (3): 641–57.
Bryan, Gharad, James J. Choi, and Dean Karlan. 2021. “Randomizing Religion: The Impact of Protestant Evangelism on Economic Outcomes.” Quarterly Journal of Economics 136 (1): 293–380.
Butler, Daniel M. 2019. “Facilitating Field Experiments at the Subnational Level.” Journal of Politics 81 (1): 371–76.
Cilliers, Jacobus, Oeindrila Dube, and Bilal Siddiqi. 2015. “The White-Man Effect: How Foreigner Presence Affects Behavior in Experiments.” Journal of Economic Behavior & Organization 118: 397–414.
Corduneanu-Huci, Cristina, Michael T. Dorsch, and Paul Maarek. 2022. “What, Where, Who, and Why? An Empirical Investigation of Positionality in Political Science Field Experiments.” PS: Political Science & Politics 1–8. DOI: 10.1017/S104909652200066X.
Deaton, Angus. 2020. “Randomization in the Tropics Revisited: A Theme and Eleven Variations.” Technical Report. Cambridge, MA: National Bureau of Economic Research.
Haas, Nicholas. 2022. “Replication Data for: ‘Be Explicit: Identifying and Addressing Misaligned Goals in Collaborative Research Teams.’” PS: Political Science & Politics. DOI: 10.7910/DVN/WINLHO.
Haas, Nicholas, Katherine Haenschen, Tanu Kumar, Costas Panagopoulos, Kyle Peyton, Nico Ravanilla, and Michael Sierra-Arévalo. 2022. “Organizational Identity and Positionality in Randomized Control Trials: Considerations and Advice for Collaborative Research Teams.” PS: Political Science & Politics 1–5. DOI: 10.1017/S1049096522000026.
Herman, Biz, Amma Panin, Elizabeth I. Wellman, Graeme Blair, Lindsey D. Pruett, Ken O. Opalo, Hannah M. Alarian, Allison N. Grossman, Yvonne Tan, Alex P. Dyzenhaus, and Nicholas Owsley. 2022. “Field Experiments in the Global South: Assessing Risks, Localizing Benefits, and Addressing Positionality.” PS: Political Science & Politics 1–4. DOI: 10.1017/S1049096522000063.
Levine, Adam S. 2021. “How to Form Organizational Partnerships to Run Experiments.” In Advances in Experimental Political Science, ed. James N. Druckman and Donald P. Green, 199–216. Cambridge: Cambridge University Press.
McDermott, Rose, and Peter K. Hatemi. 2010. “Emerging Models of Collaboration in Political Science: Changes, Benefits, and Challenges.” PS: Political Science & Politics 43 (1): 49–58.
Wagner, Caroline S., Han W. Park, and Loet Leydesdorff. 2015. “The Continuing Growth of Global Cooperation Networks in Research: A Conundrum for National Governments.” PLoS ONE 10 (7): 1–15.