
From principles to practice: methods to increase the transparency of research ethics in violent contexts

Published online by Cambridge University Press: 13 August 2021

Hannah Baron
Affiliation:
Department of Political Science, Brown University, Providence, RI, USA
Lauren E. Young*
Affiliation:
Department of Political Science, University of California, Davis, Davis, CA, USA
*Corresponding author. Email: [email protected]

Abstract

There has been a proliferation of research with human participants in violent contexts over the past ten years. Adhering to commonly held ethical principles such as beneficence, justice, and respect for persons is particularly important and challenging in research on violence. This letter argues that practices around research ethics in violent contexts should be reported more transparently in research outputs, and should be seen as a subset of research methods. We offer practical suggestions and empirical evidence from both within and outside of political science around risk assessments, mitigating the risk of distress and negative psychological outcomes, informed consent, and monitoring the incidence of potential harms. An analysis of published research on violence involving human participants from 2008 to 2019 shows that only a small proportion of current publications include any mention of these important dimensions of research ethics.

Type: Research Note
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the European Political Science Association. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

There has been a proliferation of research involving human participants on the topic of violence. Since 2008, the percentage of articles in six top political science journals that study violence and involve human participants has doubled, from about 3 percent to more than 6 percent (Figure 1).[1] The boom in field research on violence has been fueled by a number of goals: the desire to test the "microfoundations" of theories about the causes and consequences of conflict (Kertzer, 2017), the demand for research designs that identify causal claims (Gerber et al., 2004), and concerns that the perspectives of people affected by conflict should be represented (Pearlman, 2017). Most importantly, the proliferation of research in violent contexts is driven by a desire to use high-quality evidence to inform policies that mitigate and prevent violence.

Figure 1. Percent of all articles published in top six political science journals that study political violence with human participants, 2008–2019.

Yet the fact that many researchers have good intentions does not imply that doing research on violence is ethical. This letter highlights some of the main ethical principles that researchers working on violence aspire to uphold—some of which fall outside the current scope of Institutional Review Boards (IRBs)—and suggests practical solutions that could help researchers better adhere to them. The core principles that inform federal regulations for research with human participants are beneficence, justice, and respect for persons, derived from the Belmont Report (1979). Our practical suggestions are not exhaustive, and they are generally ways to validate and monitor assumptions related to ethics rather than to "solve" ethical dilemmas. There is no one-size-fits-all solution to research in violent contexts (Wood, 2006). However, these practices can help researchers come closer to maximizing the benefits of their research, protecting participant autonomy, and minimizing harm to participants and staff.

In the rest of this letter, we describe practices in four areas that could improve adherence to ethical principles, primarily by increasing the transparency of research methods around ethics. We highlight practices that other researchers have developed both within and outside political science and use examples from our own fieldwork on violence in Zimbabwe and Mexico, carried out with a number of collaborators. The first family of practices concerns better documentation of risk assessments. The second concerns avoiding distress and negative psychological outcomes. The third highlights ways to assess the adequacy of the informed consent process. The fourth considers the challenge of monitoring negative consequences during and after project implementation. The overarching logic behind our suggestions is to make research ethics more transparent and empirical, so that ethics can become part of the peer review process and be treated as a subfield of research methods, creating incentives to develop increasingly robust practices.

1. Credibly and empirically assessing risks

The first area of research ethics that can be addressed with better practices is risk assessments. There is widespread consensus that research should not put participants at an unjustifiable risk of harm (American Political Science Association, 2020). There is also increasing recognition that it should not put collaborators, particularly research implementers and academics living in violent contexts, at risk. In the social sciences, however, risk assessments are not always empirically rigorous and are almost never replicable. Although we go to enormous lengths to empirically validate other aspects of our research methodologies, the process of assessing risks for IRBs or funders is more likely to be based on general impressions. Risk assessments rarely make it into articles or online appendices, as documented in Section 5.

It is difficult to overstate the harm that can be caused by inadequate or biased risk assessments. Eck and Cohen (2021) discuss a study in Ethiopia in which 20 percent of female participants reported being beaten by their male partners because of their participation. Eriksson Baaz and Utas (2019) discuss a case in which militant groups tortured a research facilitator and killed his family member because of his participation in a research project. Risk assessments must be thorough enough that research involving such excessive harm is never carried out.

Making risk assessments part of publications that use data collected from participants living in violent contexts could move us closer to the goal of minimizing minor harms and never inflicting severe harm. For example, in a 2019 article that uses original survey data from a lab experiment conducted in Zimbabwe, the second author discusses in the article and appendix how consultations with local collaborators and quantitative data on state repression collected by domestic human rights monitors led her to assess that the risk of punishment for participation in the research was justifiably low (Young, 2019). In this case, the existence of a high-quality Zimbabwean human rights monitor and established political polling enabled her to make that assessment with confidence. Others can question whether the documented risk assessment was sufficient—and our point is exactly that the assessment should be open to review by other scholars. We know anecdotally that some other researchers are conducting similar or better risk assessments in their own projects. However, when these are not reported in research outputs, no common norms around risk assessments can develop. If scholars expect to document risk assessments in publications (and not just in IRB applications, which are almost never made public), they face incentives to make those assessments thorough and accurate.

2. Measuring and mitigating the risk of distress and negative psychological outcomes

The risk of distress, lasting negative affect, and other adverse psychological outcomes has rightfully become a major concern in violence research. "Re-traumatization" is often used by social scientists as a shorthand for various forms of distress. Yet the empirical basis for assessing the risk of adverse psychological outcomes, or any offsetting psychological benefits, in political science research is surprisingly small.

Over the past 25 years, trauma psychologists have taken an empirical approach to the risk of negative psychological and affective consequences of participation in research. Psychologists have embedded questions designed to assess the incidence and severity of different harms in instruments used with trauma survivors. This literature tends to consider separately the risk of emotional distress during research procedures and the risk of longer-term negative outcomes (Legerski and Bunnell, 2010). Two systematic reviews of the risk–benefit ratio of participation in trauma research have concluded that although some participants do report harm, those harms are often offset by perceived benefits, and in most studies regret is rare (Jaffe et al., 2015; McClinton Appollis et al., 2015). Finally, although few studies have used credible methods such as randomized controlled trials to test for longer-term effects of research participation, one review concluded that there are few signs of long-term negative effects in research on trauma in psychiatry (Jorm et al., 2007).

There remain several open questions around the emotional and psychological risks of political science research. A first-order question is how well the empirical findings from psychology and psychiatry generalize to political science. Almost all of the studies in existing meta-analyses were conducted with American or European participants, whereas much political science research on violence is conducted outside of these "WEIRD" (Western, educated, industrialized, rich, and democratic) contexts (Henrich et al., 2010). Second, open questions persist across fields. Are some sub-groups of participants more likely to experience severe or prolonged harms relative to others? What are the best ways to design questions or interview protocols so that the risk of mild distress is minimized, and severe harms are avoided? This should be a growing area of methodological innovation in the social and behavioral sciences.

A second and complementary approach to minimizing the risk of negative psychological outcomes involves the provision of psychosocial support. It has become fairly common in political science to set up a psychosocial referral system for participants; however, post-study referrals are often insufficient to protect participants from the potential negative impacts of research. In many contexts, quality services are unavailable or prohibitively costly. Furthermore, political science researchers often lack the training to identify culturally appropriate resources, so we may identify inappropriate resources or make errors when deciding whom to refer for support. Finally, it is often impossible to determine whether psychological distress was caused by the research or not. Most fundamentally, providing assistance to people who have been harmed by a study is clearly inferior to avoiding that harm in the first place.

We identify two low-cost practices that may reduce the risks of negative psychological outcomes among participants and fieldwork staff. First, more consistently measuring experiences of distress during interviews and, when possible, symptoms of negative psychological outcomes in the period following participation would enable researchers to better understand and avoid these risks. In some cases, we have asked interviewers to assess whether they think participants are experiencing distress (Young, 2019).

In one of our Mexico studies—a field experiment that involved three interviews and a community discussion—we tracked posttraumatic stress disorder (PTSD) symptoms over three rounds of measurement to test whether participants with symptoms reported higher levels of distress during the study, and whether PTSD symptoms were affected by participation in a small-group discussion about crime. We do not find evidence that participating in community discussion groups on crime led to an increase in PTSD symptoms (Baron et al., unpublished). The marginal cost of a few additional interview questions is small, as such questions can be added to the end of existing interviews or in the "back-checking" that is common in large-scale surveys.
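To illustrate, the sketch below shows how this kind of embedded symptom tracking can be analyzed, written in Python with pandas and SciPy. It is not the analysis code from our study: the file name, column names, and test are illustrative assumptions. It assumes a long-format file with columns participant_id, round (0 for baseline, 2 for the final round), treated (1 if the participant joined a discussion group), and pcl_score (a PTSD symptom checklist total).

```python
# Sketch: test whether PTSD symptom change differs between participants who
# joined a discussion group and those who did not. File and column names are
# hypothetical, not from the authors' actual study materials.
import pandas as pd
from scipy import stats

df = pd.read_csv("symptom_tracking.csv")  # hypothetical data file

# Reshape to one row per participant, one column per measurement round.
wide = df.pivot(index="participant_id", columns="round", values="pcl_score")
change = wide[2] - wide[0]  # symptom change from baseline to final round

# One treatment indicator per participant, aligned on participant_id.
treated = (
    df.drop_duplicates("participant_id")
      .set_index("participant_id")["treated"]
)

# Welch's t-test for a difference in mean symptom change across arms.
t_stat, p_val = stats.ttest_ind(
    change[treated == 1].dropna(),
    change[treated == 0].dropna(),
    equal_var=False,
)
print(f"Mean change (discussion): {change[treated == 1].mean():.2f}")
print(f"Mean change (control):    {change[treated == 0].mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3f}")
```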

These experiences also left us with questions. Is it better to rely on research staff or participants themselves to assess research-related distress? To what extent might social desirability or performance pressure lead either group to under-report distress? More rigorous research is needed to develop and validate measures appropriate for social science research on violence. Having a body of evidence on harms and benefits would enable us to identify the practices that are least likely to have negative psychological consequences. This evidence could be quickly generated if questions about psychological distress were regularly embedded in research protocols.

Second, psychosocial support professionals can be enlisted at early stages in fieldwork to prevent and mitigate the risk of distress and more extreme negative reactions. Building on practices described in Paluck and Green (2009), in our qualitative study in Mexico we worked with a local trauma specialist and certified counselor to train our interviewers on how to identify and prevent severe distress. Additionally, this specialist provided interviewers with techniques to prevent and cope with secondary trauma, and served as a counseling resource during and after fieldwork. For the field experiment involving discussion groups, we hired two counselors who regularly work with local survivors of violence as facilitators to ensure that our team had the skills and motivation to identify and avoid distress. In our case, these strategies were not costly: we offered the trauma specialist a $200 honorarium, and the decision to hire counselors instead of standard research facilitators required no additional resources. In contexts where mental health professionals are scarce, it may still be feasible to have a mental health professional with experience in the cultural context review the research protocol or offer remote support.

3. Assessing whether consent is really informed

The principle of respect for persons implies that people should be free to make an informed decision about participation in research. IRBs typically require standard study information to be provided: What tasks will participants perform? What are the potential risks and benefits? How will privacy be guaranteed? Potential participants in violence research are often told that the study involves sensitive topics, and that they might become upset. They have the chance to ask questions before they make a decision.

Violence researchers could do more to assess the adequacy of the consent process. The main concern in the informed consent process is whether participants have enough information to make a personal decision that they would be satisfied with if they had complete information. To this end, trauma researchers often include questions about regret over participation. In our research, at various points in the study process we have simply asked participants whether they are happy or unhappy that they consented, and what they are happy or unhappy about, in order to monitor the adequacy of the consent process. If the vast majority of participants say that they are happy with their decision, it suggests that the consent process is adequate.

For the recent field experiment that we carried out in Mexico, we specified in advance that if more than 5 percent of participants told us they were not happy, we would pause the study and retool the consent process to provide more information on the elements that unhappy participants mentioned. After a first 20-minute interview, 0.6 percent of participants regretted their participation. After a half-day community discussion and a second 20-minute interview, 0.3 percent expressed regret. By a final 20-minute follow-up interview, no participants expressed regret (Baron et al., unpublished).
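A minimal sketch of this kind of stopping rule is below. The 5 percent threshold matches the one we pre-specified, but the table structure and column names are illustrative assumptions, not our actual monitoring code.

```python
# Sketch: flag any interview round where the share of participants who regret
# consenting exceeds a pre-specified threshold. Column names are hypothetical.
import pandas as pd

REGRET_THRESHOLD = 0.05  # pause the study above 5 percent regret

def consent_monitoring_report(responses: pd.DataFrame) -> None:
    """One row per completed interview; `regrets_participation` is 1 if the
    participant says they are unhappy that they consented, else 0."""
    for rnd, grp in responses.groupby("round"):
        rate = grp["regrets_participation"].mean()
        action = "PAUSE AND RETOOL CONSENT" if rate > REGRET_THRESHOLD else "continue"
        print(f"Round {rnd}: {rate:.1%} regret ({len(grp)} interviews) -> {action}")
```

Applied to the rounds described above, this check would have printed regret rates of 0.6, 0.3, and 0.0 percent, all well below the threshold.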

Another empirical approach to assess informed consent involves asking participants factual questions about the consent process. The Afrobarometer surveys, for example, ask participants who they think the sponsor of the research is. Additional factual questions could test if and how participants understand important conditions of the consent process. Do participants understand that they can stop the interview or skip questions? Do participants see their participation decision as one over which they have individual autonomy, or do they experience it as determined by a local elite or relative? Ultimately, assessing the quality of the informed consent process should be an area of methodological development.

4. Monitoring the actual incidence of harms

Actively monitoring for harms, particularly coercion or retribution associated with research participation, is arguably the most important and challenging of the four practices we discuss. Robust measurement of potential harms, along with management systems to quickly communicate information and adapt research protocols, could help researchers prevent harm. Reporting on observed harms is also an accepted standard in medical trials and has been adopted as an item in the CONSORT reporting guidelines.[2] It would also build on recent efforts to measure and report harms to research collaborators in authoritarian settings (Grietens and Truex, 2020). However, the rate of unintended consequences, and how they are monitored, is rarely reported in social science research outputs.

Researchers can monitor unintended consequences in follow-up surveys with participants or through debriefs with research staff. In our Mexico experiment, we asked participants in the follow-up calls whether they experienced retaliation, emotional distress, or any other negative consequence due to participation in discussion groups on crime and responding to crime. However, not all research designs permit this kind of follow-up. In particular, this type of follow-up cannot be done in anonymous studies. In cases where we could not directly follow up with participants, we have relied on local research partners to monitor major risks. This type of monitoring is likely to pick up major incidents of retaliation, but less likely to detect emotional distress and other private harms.
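As one illustration, a CONSORT-style harms table can be produced from follow-up data with a few lines of code. The sketch below assumes a hypothetical file of follow-up call records with a study-arm column and one binary indicator per harm type; none of these names come from our study materials.

```python
# Sketch: tabulate self-reported harms from follow-up calls by study arm, in
# the spirit of CONSORT harms reporting. File and column names are hypothetical.
import pandas as pd

followups = pd.read_csv("followup_calls.csv")  # hypothetical data file
harm_cols = ["retaliation", "emotional_distress", "other_negative_consequence"]

# Percent of respondents in each arm reporting each harm type, suitable for
# inclusion in an appendix table.
harm_table = followups.groupby("arm")[harm_cols].mean().mul(100).round(1)
print(harm_table)
```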

Monitoring harms can enable researchers to modify or stop research projects if early data show that research-related harm is occurring. If research has unintended consequences, transparency is the fastest way that violence research as a field can learn about them and prevent harm in future studies. In addition, reporting monitoring practices and the incidence of harms allows ethical practices to become a more robust part of the peer review process, creating incentives for researchers to use appropriate methods in their research.

5. Assessment of current reporting practices

How transparent are methods related to ethics in violence research involving human participants? We had a team of five undergraduate and master's-level research assistants code 172 research articles on political violence involving human participants, published between 2008 and 2019 in six influential political science journals. Our research assistants used a coding guide to identify any mention of research ethics in six categories in the article or appendix: IRB review, risk assessments, consent processes, observed harms, risk to research teams, and any other information related to ethics. Each article was double-coded, and inter-rater reliability scores ranged from 0.84 for the undefined "other" category to 0.95 for an indicator of whether the IRB number was reported in the publication.
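For readers who want to run this kind of reliability check on their own double-coded data, the sketch below computes two common statistics, raw percent agreement and chance-corrected Cohen's kappa, for one binary category. The codes shown are illustrative, not our data, and we do not claim this is the exact statistic used above.

```python
# Sketch: inter-rater reliability for one double-coded binary category
# (e.g., "mentions informed consent"). Codes are illustrative only.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 0, 1, 1, 0, 1, 0]  # hypothetical codes from coder A
coder_b = [1, 0, 1, 1, 1, 0, 1, 0]  # hypothetical codes from coder B

# Raw percent agreement: share of articles on which the coders match.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
# Cohen's kappa corrects agreement for chance.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Percent agreement: {agreement:.2f}; Cohen's kappa: {kappa:.2f}")
```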

A few points are worth keeping in mind when interpreting this review. First, research assistants were given very inclusive coding guidelines: if an article or appendix mentioned anything about risks associated with the research project, it was coded as “yes” in the risk assessment category. Similarly, if it mentioned anything about informed consent (including just a passing comment that consent was obtained) it received a “yes” in the informed consent category. Thus, this is not an assessment of whether existing research is employing the practices we recommend. Second, this is not an analysis of whether existing research is violating ethical practices. We believe that many researchers are doing more to adhere to ethical principles than they report in their articles and appendices. Our goal is to assess the extent to which basic elements of research transparency around ethics have been voluntarily adopted in a setting where the stakes of ethical practices are high.

In general, information related to ethics is rarely reported in publications. Figure 2 shows the proportion of all articles based on research on violence with human participants published from 2008 to 2019 that provided at least some discussion of our six areas of transparency.

Figure 2. Percent of articles that study political violence with human participants that mention types of research ethics considerations, 2008–2019.

The first bar in each category shows the percent of all identified violence articles with human participants that provide an IRB number, discuss the risks considered, report on observed harms, discuss risks to research team members, give details of the consent process, or provide any other discussion of ethics. In total, just 11 percent of articles report anything related to observed harms, and 31 percent mention a consent process. Again, it is important to keep in mind that we coded these categories as "yes" if an area of ethics was even mentioned in passing—a much smaller proportion of the articles provide enough information to let readers assess whether the practices were adequate. The percentages of articles mentioning ethics are slightly higher when we exclude seemingly low-risk research, such as public opinion surveys on foreign conflicts or research on less sensitive topics such as clientelism in non-democratic regimes (second bar). Finally, the third bar shows that articles published in the last five years generally have more discussion of research ethics, particularly around risk to research team members and the consent process. Nevertheless, in all cases it is clear that discussion of research ethics in publications is far from the norm.

6. Conclusion

There are no silver bullets in research in violent contexts. We have suggested a few practical steps that can help researchers better adhere to the principles we have widely accepted as a research community. At their core, these suggestions are practices that increase researchers' ability to monitor their own projects for implementation failures and unintended consequences around research ethics, and that increase the transparency of decisions around ethics. They complement a recent list of questions on ethics that reviewers should ask about research in violent contexts (Cronin-Furman and Lake, 2018), a template for "ethics appendices" recently proposed for randomized controlled trials (Asiedu et al., 2021), and updated submission practices at the American Political Science Review (APSR). We also see our suggestions as in line with the ethics guidelines adopted by the APSA (American Political Science Association, 2020). In particular, these guidelines call for researchers to identify and justify potential harms in presentations and publications.

There is no reason that research ethics should not be an area of methodological innovation and research, just as other forms of research transparency have become in recent years as scholars have developed better methods for detecting p-hacking, pre-registering studies, and replicating findings (Humphreys et al., 2013; Simonsohn et al., 2014; Open Science Collaboration, 2015). In no way do these practices make ethical decisions in violent contexts easy. Is it ever worth putting a participant at some risk of harm in order to learn something about political violence? Can we ever assess risks with sufficient confidence to make research design decisions, given that violence changes unpredictably and is often shrouded in uncertainty? These questions weigh ethical imperatives against each other: the goal of using research to tackle the most important problems that societies face, the imperative to do no harm, and the need to take action with imperfect information. But making these decisions more evidence-based and transparent through methodological innovation might help us move closer to the best balance of these principles.

Acknowledgments

This note draws on research conducted with a number of collaborators, including Omar García-Ponce, Adrienne LeBas, Alexander Noyes, and Thomas Zeitzoff. We also learned significantly from research implementers in Mexico and Zimbabwe. These projects benefited from funding from the National Science Foundation, U.S. Institute of Peace, and International Peace Research Foundation. We thank Cyril Bennouna, Graeme Blair, Robert A. Blair, Dara Kay Cohen, Jennifer Freyd, Macartan Humphreys, Sabrina Karim, Emilia Simison, Tara Slough, participants at the Brown-MIT Development Workshop, and two anonymous reviewers for invaluable feedback.

Footnotes

[1] Journals in the review include the American Political Science Review, American Journal of Political Science, Journal of Politics, World Politics, Journal of Peace Research, and Journal of Conflict Resolution.

[2] For more on the CONSORT group, see http://www.consort-statement.org/.

References

Asiedu, E, Karlan, D, Lambon-Quayefio, M and Udry, C (2021) A call for structured ethics appendices in social science papers. Unpublished working paper.
American Political Science Association (2020) Principles and guidance for human subjects research. Technical report. Available at https://www.apsanet.org/Portals/54/diversityEthics/Final_Principles1740-153.
Baron, H, Garcia-Ponce, O, Young, LE and Zeitzoff, T (unpublished) The limits of deliberation: a field experiment on criminal justice preferences in Mexico. Working paper, UC Davis.
Cronin-Furman, K and Lake, M (2018) Ethics abroad: fieldwork in fragile and violent contexts. PS: Political Science & Politics 51(3), 607–614.
Eck, K and Cohen, DK (2021) Time for a change: the ethics of student-led human subjects research on political violence. Third World Quarterly 42(4), 855–866.
Eriksson Baaz, M and Utas, M (2019) Exploring the backstage: methodological and ethical issues surrounding the role of research brokers in insecure zones. Civil Wars 21, 157–178.
Gerber, AS, Green, DP and Kaplan, EH (2004) The illusion of learning from observational research. In Shapiro, I, Smith, RM and Masoud, TE (eds), Problems and Methods in the Study of Politics. New Haven, CT: Yale University Press, pp. 251–273.
Grietens, SC and Truex, R (2020) Repressive experiences among China scholars: new evidence from survey data. The China Quarterly 242, 349–375.
Henrich, J, Heine, SJ and Norenzayan, A (2010) The weirdest people in the world? Behavioral and Brain Sciences 33, 61–83.
Humphreys, M, De la Sierra, RS and Van der Windt, P (2013) Fishing, commitment, and communication: a proposal for comprehensive nonbinding research registration. Political Analysis 21, 1–20.
Jaffe, AE, DiLillo, D, Hoffman, L, Haikalis, M and Dykstra, RE (2015) Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review 40, 40–56.
Jorm, AF, Kelly, CM and Morgan, AJ (2007) Participant distress in psychiatric research: a systematic review. Psychological Medicine 37, 917–926.
Kertzer, JD (2017) Microfoundations in international relations. Conflict Management and Peace Science 34, 81–97.
Legerski, J-P and Bunnell, SL (2010) The risks, benefits, and ethics of trauma-focused research participation. Ethics & Behavior 20, 429–442.
McClinton Appollis, T, Lund, C, de Vries, PJ and Mathews, C (2015) Adolescents' and adults' experiences of being surveyed about violence and abuse: a systematic review of harms, benefits, and regrets. American Journal of Public Health 105, e31–e45.
Open Science Collaboration (2015) Estimating the reproducibility of psychological science. Science 349(6251), aac4716.
Paluck, EL and Green, DP (2009) Deference, dissent, and dispute resolution: an experimental intervention using mass media to change norms and behavior in Rwanda. American Political Science Review 103(4), 622–644.
Pearlman, W (2017) We Crossed a Bridge and It Trembled. New York: HarperCollins.
Simonsohn, U, Nelson, LD and Simmons, JP (2014) P-curve: a key to the file-drawer. Journal of Experimental Psychology: General 143, 534–547.
Wood, EJ (2006) The ethical challenges of field research in conflict zones. Qualitative Sociology 29, 373–386.
Young, LE (2019) The psychology of state repression: fear and dissent decisions in Zimbabwe. American Political Science Review 113, 140–155.
Supplementary material: Baron and Young Dataset