
The Silenced Text: Field Experiments on Gendered Experiences of Political Participation

Published online by Cambridge University Press:  18 April 2023

ALAN N. YAN*
Affiliation:
University of California, Berkeley, United States
RACHEL BERNHARD*
Affiliation:
University of Oxford, United Kingdom
Alan N. Yan, Graduate Student, Department of Political Science, University of California, Berkeley, United States, [email protected].
Rachel Bernhard, Associate Professor of Quantitative Political Science Research Methods, Nuffield College, University of Oxford, United Kingdom, [email protected].

Abstract

Who gets to “speak up” in politics? Whose voices are silenced? We conducted two field experiments to understand how harassment shapes the everyday experiences of politics for men and women in the United States today. We randomized the names campaign volunteers used to text supporters reminders to participate in a protest and call their representatives. We find that female-named volunteers receive more offensive, silencing, and withdrawal responses than male-named or ambiguously named volunteers. However, supporters were also more likely to respond and agree to their asks. These findings help make sense of prior research that finds women are less likely than men to participate in politics, and raise new questions about whether individual women may be perceived as symbolic representatives of women as a group. We conclude by discussing the implications for gender equality and political activism.

Type
Letter
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the American Political Science Association

In 2017, the Women’s Marches and #MeToo movement dramatically increased public attention paid to the harassment and silencing of women. In politics, the problem is widespread: relative to men, women politicians face more interruption (Och 2020), sexual harassment (Folke et al. 2020; Håkansson 2019), and psychological abuse (Herrick and Thomas 2022; Thomas et al. 2019). Women activists, journalists, and human rights defenders likewise face more threats and violence (Krook 2020; Sobieraj 2020). Yet the vast majority of research on violence against women in politics focuses on elite women—politicians, public figures, and federal prosecutors—those that Pitkin (1967) might call formal or substantive representatives of women.Footnote 1

This focus on elite women means that we know little about the treatment of the average woman—the voter, the volunteer, and the protester—when she enters the public sphere in the United States today. Research that considers harassment as a barrier to the average woman’s political participation focuses heavily on countries outside of the United States (e.g., Alam 2021; Krook 2020; Prillaman Forthcoming). Research on American women’s participation has examined barriers like income and family responsibilities (Bernhard, Shames, and Teele 2021; Schlozman, Burns, and Verba 1994), but this alone has been unable to explain why women are more likely than men to vote but less likely to participate in other political activities (Beauvais 2020). One possibility is that activities like protesting and canvassing make individual women into “symbolic” representatives who “stand for” women as a group in the eyes of others (Pitkin 1967, 92), increasing their risk of harassment.

Likewise, while many studies examine the consequences of incivility between elites for public opinion, here too we know little about gendered incivility between citizens (Karpowitz and Mendelberg 2014). Nor do we know whether incivility is a problem for an immensely popular new form of political activism: text canvassing. Text messages have been a boon for campaigns, allowing volunteers to contact more people, more cheaply, than ever before: more than 80 million political text messages were sent every day in September and October 2020 (Bajak and Burke 2020). In short, we know little about the experience of political activism in the United States today, let alone how women are treated when they participate. We therefore ask a simple question: are women more likely than men to receive hostile messages when they participate in politics?

To examine volunteers’ experiences, we conducted two field experiments in 2018 in which we randomized the apparent gender of volunteers during a texting (“Short Message Service,” or SMS) campaign meant to encourage a liberal organization’s supporters to attend rallies and call their representatives. Volunteers used a software program to rapidly text supporters standardized messages. Each supporter received a text message from a volunteer randomly assigned to use either a common male name, a female name, an ambiguously gendered name, or no name. Combining name manipulation with otherwise identical messages allows us to identify perceived gender as one cause of men’s and women’s different experiences in politics (Bertrand and Mullainathan 2004).

In both studies, female-named volunteers receive more offensive, silencing, and withdrawal responses than male-named or ambiguously named volunteers. This suggests backlash against women participating in political activism. In particular, these messages have a “generic” quality that Sobieraj (2020, 5) describes as a hallmark of structural abuse: abuse that treats individual women as representatives of all women. When someone texts “fuck off Jessica you’re a slut,” having no information about “Jessica,” and despite “Jessica” using exactly the same text messages as “Michael” and “Taylor,” we have evidence that the hostility is toward women as a group rather than some specific characteristic of this individual woman—who may not even be a woman volunteer, thanks to our experiment.

These findings are concerning since many barriers to participation already exist, especially for historically disadvantaged groups, and they help make sense of women’s reluctance to participate in peer-oriented political activities like canvassing. Yet we also find hints that volunteers may be more effective when assigned to use female names. Our study thus raises new questions about women’s equality in political life in the United States.

Materials and Methods

We conducted two randomized control trials in 2018 evaluating whether female-named volunteers receive more harassing SMS responses than male-named volunteers.Footnote 2 To do so, we partnered with a progressive political organization, NextGen America (NGA), to contact individuals who had previously interacted with the organization and agreed to be contacted again.Footnote 3 NGA regularly texts supporters to contact their elected officials, take part in protests, and volunteer. Because these individuals have consented to contact from NGA, we expect them to be more civil toward NGA volunteers. Our estimates of the prevalence of hostile behavior should, therefore, be lower than if we had contacted respondents without their consent.

For each study, NGA selected the largest sample sizes they believed their volunteers could plausibly contact before a given deadline (e.g., before a rally). NGA contacted 60,356 individual supporters in the first trial, and 75,231 in the second.Footnote 4 In Study 1, NGA texted supporters to encourage them to attend a local March for Our Lives rally; in Study 2, to call their representative to urge then-Environmental Protection Agency administrator Scott Pruitt to resign. Volunteers were overwhelmingly women in both studies: 77.14% in Study 1, and 85.71% in Study 2, for an overall rate of 80.95%.

Design and Procedures

Following a design common to audit studies, to vary perceived gender, we randomly assigned supporters to receive an otherwise identical text message from a volunteer using either a stereotypically female name (Jessica), male name (Michael), ambiguously gendered name (Taylor), or no name. Both studies were double-blinded to reduce possible demand effects (among volunteers) and biases common to studying sensitive behaviors, such as social desirability bias (among respondents). Respondents knew only that they were being contacted by NGA, which they had previously opted into.
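The assignment scheme can be sketched as follows. The four conditions and names are the paper’s; the helper function, equal-probability draw, and seeding are illustrative assumptions (the actual randomization was handled within NGA’s texting workflow):

```python
import random

# The four name conditions from the paper; None represents the no-name condition.
CONDITIONS = ["Jessica", "Michael", "Taylor", None]

def assign_condition(rng):
    """Draw a name condition for one supporter (equal probability assumed)."""
    return rng.choice(CONDITIONS)

# Seeded generator so an assignment list is reproducible.
rng = random.Random(42)
assignments = [assign_condition(rng) for _ in range(10)]
```

Because assignment is at the supporter level, each supporter sees exactly one name, while any given volunteer texts under all four conditions over the course of a shift.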

NGA’s texting software ensured every initial SMS message to a respondent was standardized; Appendix B.3 of the SM provides the base message text. After the volunteer successfully texted a supporter, the software prompted them to send a new message to another, until the volunteer finished texting their allotment of supporters. The average volunteer sent approximately 1,300 messages.

Dependent Variables

We measure three main outcomes for replies to these messages: average offensiveness, silencing, and withdrawal.Footnote 5 We measure offensiveness using questions we designed.Footnote 6 In Study 1, volunteers reported when they felt a text they received was offensive. Responses volunteers flagged as offensive were coded as 100 and all others as 0, allowing us to report averages in terms of percentage points. For Study 2, instead of the volunteer-reported measure, we recruited two independent coders to indicate how offensive a response was on a 5-point scale ranging from “non-offensive” (1) to “very offensive” (5).Footnote 7

To compare offensiveness across studies, we recoded Study 2’s existing 5-point offensiveness measure as a binary measure. To do so, we recoded a response as “offensive” (100) if either coder marked the response as anything other than “non-offensive,” and inoffensive (0) otherwise. We do not find a significant difference in the ratings for offensiveness between the two studies; we, therefore, depict the results for both studies below using the binary measure.Footnote 8
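The recode rule reduces to a one-line check over the two coders’ ratings; this sketch is illustrative (the function name is ours, not from the replication code):

```python
def binarize_offensiveness(coder1, coder2):
    """Collapse two 5-point offensiveness ratings (1-5) to the binary 0/100 scale.

    A reply is coded "offensive" (100) if either coder rated it anything
    other than "non-offensive" (1), and inoffensive (0) otherwise.
    """
    return 100 if coder1 > 1 or coder2 > 1 else 0
```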

NGA requires volunteers to immediately opt out anyone that asks not to be contacted or harasses a volunteer. In both studies, we use this indicator to measure withdrawals—polite or inoffensive requests not to be contacted again—and silencing: whether a respondent intimidates or harasses the volunteer into ceasing contact. Any response that requested an opt-out but was not coded as offensive is coded as 100 for withdrawal, and 0 otherwise. For a response to be coded as silencing (100), it must both opt out the respondent and be coded as offensive. This means that silencing messages represent the overlap between offensive and opt-out messages. Table 1 displays the rates (conditional on responding) and coded examples of each type of behavior. Offensive and silencing responses are relatively infrequent; withdrawals are more common.
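The three outcomes partition each reply by two flags: whether it was coded offensive and whether it triggered an opt-out. A minimal sketch of the coding rules (function and field names are ours):

```python
def code_reply(offensive, opted_out):
    """Code one reply into the paper's three binary outcomes (0/100 scale)."""
    return {
        # Offensive: any reply coded offensive, regardless of opt-out.
        "offensive": 100 if offensive else 0,
        # Withdrawal: an opt-out request that was NOT coded offensive.
        "withdrawal": 100 if opted_out and not offensive else 0,
        # Silencing: an opt-out that WAS coded offensive (the overlap).
        "silencing": 100 if opted_out and offensive else 0,
    }
```

Note that withdrawal and silencing are mutually exclusive by construction, so a single opt-out never counts toward both.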

TABLE 1. Sample Responses and Response Rates by Category

Analysis

In the main paper, we provide figures based on ordinary least squares (OLS) regressions.Footnote 9 These regressions estimate the treatment effect of assigning someone at phone number i to be contacted by a volunteer randomly assigned a particular name condition on the rate of offensive, silencing, or withdrawal replies.Footnote 10 We employ the “ambiguous-gender” condition as the control condition because it strictly varies the gender cue relative to the clearly gendered male and female names.Footnote 11 In analyses that pool both studies, we include a dummy variable for whether the respondent was in Study 1 (1) or Study 2 (0). In all tables and figures, we report 95% confidence intervals with heteroskedasticity-robust standard errors; all p-values reported are two-tailed.
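The estimation strategy can be sketched with NumPy alone: regress the 0/100 outcome on treatment dummies, omitting the ambiguous-name condition as the baseline. The data below are simulated stand-ins with invented rates; the authors additionally include a study dummy and heteroskedasticity-robust standard errors, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8000
# Simulated assignment and outcome; rates are illustrative, not the paper's.
conditions = rng.choice(["ambiguous", "female", "male", "none"], size=n)
rates = {"ambiguous": 0.002, "female": 0.004, "male": 0.001, "none": 0.003}
offensive = np.array([100.0 * (rng.random() < rates[c]) for c in conditions])

# Design matrix: intercept plus dummies for female/male/none, with the
# ambiguous-name condition as the omitted baseline category.
X = np.column_stack([
    np.ones(n),
    (conditions == "female").astype(float),
    (conditions == "male").astype(float),
    (conditions == "none").astype(float),
])
beta, *_ = np.linalg.lstsq(X, offensive, rcond=None)

# In this saturated model, each dummy coefficient equals that condition's
# mean outcome minus the ambiguous-condition mean (in percentage points).
female_effect = (offensive[conditions == "female"].mean()
                 - offensive[conditions == "ambiguous"].mean())
```

The equivalence between the dummy coefficient and the difference in group means is why the figures below can be read directly as mean differences from the ambiguous-name condition.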

Results

We conducted two field experiments to study the treatment of political volunteers. In each study, the only aspect of the message that varied was the gender of the name the volunteer used. In both studies, when volunteers are assigned a female name, they receive more offensive, silencing, and withdrawal responses than any other name condition. However, supporters were also more likely to respond and agree to asks made by female-named volunteers.

Offensiveness

In both studies, respondents were more likely to send offensive replies to volunteers assigned female names than to those assigned male or ambiguous names. Figure 1 presents the OLS regression results overall (left-hand panel) and for each study individually (middle and right-hand panels).Footnote 12 Across all three panels, we see the same pattern: respondents send more offensive messages to volunteers using female names. Overall, volunteers assigned to use female names are 0.177 percentage points more likely than ambiguously named volunteers to receive offensive messages during a campaign (two-tailed p < 0.001). Male-named volunteers were 0.097 percentage points less likely to receive offensive messages than the ambiguously named (p = 0.002). Unnamed volunteers were also more likely (by 0.148 percentage points) to receive offensive messages (p < 0.001). For every 1,000 messages a female-named volunteer sends, she receives on average 1.77 more offensive messages than an ambiguously named texter, and 2.74 more offensive messages than a male-named texter (p < 0.001 for both)—a 202% increase over the male-named base rate.
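The conversion from percentage points to counts per 1,000 messages is simple arithmetic; this sketch just reproduces it using the pooled estimates quoted above:

```python
# Pooled percentage-point effects relative to the ambiguous-name baseline.
female_vs_ambiguous = 0.177   # pp
male_vs_ambiguous = -0.097    # pp

def per_thousand(pp):
    # 1 percentage point of 1,000 messages is 10 messages.
    return round(pp * 10, 2)

extra_vs_ambiguous = per_thousand(female_vs_ambiguous)                 # 1.77
extra_vs_male = per_thousand(female_vs_ambiguous - male_vs_ambiguous)  # 2.74
```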

FIGURE 1. Mean Offensiveness

Note: The figure shows the average treatment effect by treatment condition with 95% confidence intervals estimated using ordinary least squares. The comparison category is the ambiguous name condition.

Silencing

Across both studies, supporters are more likely to silence female-named volunteers than volunteers in all other name conditions, as shown in Figure 2.Footnote 13 Overall, respondents were 0.117 percentage points more likely to silence volunteers using a female name compared with those using an ambiguously gendered name (two-tailed p < 0.001), but 0.053 percentage points less likely to silence volunteers using male names (two-tailed p = 0.034). The difference means that respondents force female-named volunteers to end 1.70 more interactions—and all subsequent outreach—out of every 1,000 text messages: a 181% increase over male-named volunteers.

FIGURE 2. Mean Silencing

Note: The figure shows the average treatment effect by treatment condition with 95% confidence intervals estimated using ordinary least squares. The comparison category is the ambiguous name condition.

Withdrawals

In both studies, supporters are more likely to politely end all future outreach when contacted by female-named volunteers compared with male-named and ambiguously named volunteers, per Figure 3.Footnote 14 Overall, respondents were 0.383 percentage points more likely to withdraw when volunteers used a female name compared with an ambiguously gendered name (two-tailed p < 0.001). In contrast, respondents were 0.334 percentage points less likely to withdraw from all future outreach when volunteers used a male name relative to those using an ambiguously gendered name (p < 0.001).

FIGURE 3. Mean Withdrawal

Note: The figure shows the average treatment effect by treatment condition with 95% confidence intervals estimated using OLS. The comparison category is the ambiguous name condition.

Looking across Outcomes

Female-named volunteers face increased hostility and decreased ability to engage in future activism, especially relative to male-named volunteers. Recall that in our sample, an average volunteer sent approximately 1,300 messages, so a volunteer consistently using a female name—as would be the case for many women—is likely to experience all of these individual behaviors. The unnamed volunteer fares similarly. Yet the volunteers assigned to use a female name are not “anonymous” like those assigned no name.

In the SM, we show that there is no significant difference in how male and female respondents treated the female-named volunteers (see Tables S7–S9 in the SM), though men are more likely to send offensive and silencing texts across all conditions, while women were more likely to withdraw across all conditions. Thus, while most hostile responses come from men, women also reserved more of their hostile responses for other women.

Notably, despite this poor treatment, we find no evidence that female-named volunteers were less effective than volunteers assigned other name conditions. Female-named volunteers obtain higher response rates and supporter commitments to call their representatives than the other name conditions, despite their higher likelihood of receiving offensive, silencing, and withdrawal responses (see Tables S13–S20 in the SM). We discuss the implications below.

Discussion

Using two field experiments embedded in real political campaigns, we find evidence that volunteers using female names receive more offensive, silencing, and withdrawal responses than volunteers using ambiguously gendered names, who in turn receive more such messages than volunteers using male names. These findings are consistent with research demonstrating that women are interrupted and harassed more (Krook 2020). However, few prior studies have documented whether such findings held for direct experiences of political participation and activism (Karpowitz and Mendelberg 2014; Sobieraj 2020). These findings underscore the importance of understanding how individual women may be seen as symbolic representatives of all women when they engage in advocacy (Pitkin 1967; Sobieraj 2020) and how this may make even non-elite women’s experiences of participation different from men’s. A woman activist who “stands for” other women may evoke more enthusiasm from those who want to see more women in politics—and more hostility from those who do not.

The findings are striking given that our sampling frame is composed of ideologically like-minded individuals who previously shared their contact information with the organization. Our estimates, therefore, depict these behaviors within a “friendly” audience. We expect rates of uncivil behaviors to be much larger when the audience is not predisposed to be friendly. Indeed, many silencing and withdrawal responses come from individuals stating that the organization has the wrong number, that is, those who have not agreed to be contacted. However, rates of antisocial behaviors may differ depending on the medium of contact (e.g., face-to-face), so studying variation across mediums seems crucial.

Understanding how voters treat political volunteers is important since political participation and activism underpin democracy. Voter-to-voter canvassing is one of the only methods proven to durably move voter attitudes on sensitive political issues (Broockman and Kalla 2016). Moreover, texting is among the few outreach tools available to campaigns—and increasingly common. Our findings suggest that it may matter a great deal who is texting (or calling, or door-knocking). Nor can we offer a simple suggestion for campaigns: even though female-named volunteers experienced worse treatment, they received more responses and were more effective in getting respondents to commit to calling their representatives.

Our finding that women respondents are, like men, more hostile to female-named volunteers also merits further scrutiny. One possibility is that both men and women respondents are more fearful of attacking men than women. Another is that men’s speech is privileged in the public sphere, such that both men and women discriminate against women who act politically (e.g., due to internalized sexism).Footnote 15 Still another is that men’s and women’s goals in responding differ. Future research should map the mechanisms responsible.

Finally, the findings contribute to the growing literature on violence against women in politics by showing the importance of studying violence against everyday voters and activists (Krook 2020), not just public figures and politicians. More women are speaking up than ever before, from protests for Mahsa Amini in Iran to Women’s Marches in the United States and #NiUnaMenos/#MeToo activism in Chile. Many—not just highly visible elites like U.S. Congresswoman Nancy Pelosi—will experience violence for doing so. If women are discouraged early on by such experiences, they may never pursue a more formal political role.Footnote 16

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/S0003055423000217.

Data Availability Statement

Research documentation and data that support the findings of this study are openly available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/UYKE7T.

Acknowledgments

We thank NextGen America for undertaking and supporting this research. We thank Richard Ashcroft, David Broockman, Amanda Clayton, Andy Eggers, Andrew Guess, Ryan Hübert, Rikio Inouye, Ole Jann, Josh Kalla, Gabe Lenz, Tali Mendelberg, Cecilia Mo, Mona Morgan-Collins, Brendan Nyhan, Lauren Peritz, Soledad Artíz Prillaman, Markus Prior, Dan Smith, Sarah Sobieraj, Laura Stoker, Lauren Young, Ana Catalano Weeks, and participants at the 2022 Gender and Political Psychology Virtual Lecture Series, the 2022 Elections, Public Opinion, and Voting and Political Psychology Virtual Junior Speakers Series, the 2020 Junior Americanist Workshop Series, Auburn University’s 2020 “Since Suffrage” Symposium, the UC Berkeley Political Behavior Workshop, the UC Davis American Politics Workshop, the Oxford DPIR Research Seminar Series, and the Nuffield College Prize Postdoctoral Research Fellow Workshop for their helpful comments and suggestions. Finally, we thank Juliet Bost, Lain Mastey, Chloe Porath, and Supreet Sandhu for their outstanding work as research assistants.

Funding Statement

This research was supported with volunteer hours by NextGen America. The authors received no funding.

Conflict of Interest

The authors declare no ethical issues or conflicts of interest in this research.

Ethical Standards

The human subjects research in this article was reviewed and approved by the University of California, Davis Institutional Review Board and determined to be exempt from review (Decision #1626820-1). The authors affirm that this article adheres to the APSA’s Principles and Guidance on Human Subject Research.

Footnotes

1 See Appendix A of the Supplementary Material (SM) for a fuller review of this scholarship.

2 The SM provides the full materials.

3 Appendix B.1 of the SM provides more information.

4 We coded supporter gender using the “gender” package in R (see Appendix B.6.1 of the SM).

5 Appendix B.4.2 of the SM reports data for another dependent variable, discouragement, piloted in Study 2. Female-named volunteers received more discouraging replies, but the findings were not robust to alternate coding strategies.

6 We situate these variables more fully within the literatures on violence against women in politics (VAWIP) and gendered incivility in Appendix A.2 of the SM.

7 With two coders, we can assess how much perceived offensiveness might vary between two individuals looking at the same response. The Krippendorff’s alpha for the two coders’ scores was 0.73, suggesting acceptable inter-coder reliability. Kenski, Coe, and Rains (2020) also show that women rate comments as less civil than men do; we find no difference. See Appendix C.3 of the SM.

8 Section C.1.1 of the SM shows that the offensiveness results hold for both studies individually, for Study 2 when using either the binary or original 5-point scale measure, and for Study 1 using the volunteer-reported measures. Recoding simply enables easy comparison.

9 Appendix C.4 of the SM shows the results hold using logistic regressions.

10 Appendix C.6 of the SM estimates the average treatment effect for treated phone numbers (those that NGA sent a message to, rather than all numbers) using instrumental variable regressions; these show substantially larger estimates.

11 The no-name condition acts as a “pure” control by varying both the presence of a name and the gender cue associated with the name. Names may seem less like spam, generating friendlier responses. Additionally, previous literature suggests that anonymity and a lack of names in particular can foster disinhibition online (Suler 2004), dehumanizing targeted individuals and leading to more toxic behaviors (Kteily et al. 2015).

12 Table S7 in the SM provides the regression estimates depicted. Table S22 in the SM replicates the analysis using logistic regression.

13 Table S8 in the SM provides the regression estimates depicted. Table S23 in the SM replicates the analysis using logistic regression.

14 Table S9 in the SM provides the regression estimates depicted. Table S24 in the SM replicates the analysis using logistic regression.

15 This may be true even with gender cues as small as names: see Elder and Hayes (2023).

16 Because volunteers send texts using all four names, our study cannot assess whether being assigned a female name influences participants’ decisions to volunteer again; future research might.

References

Alam, Zainab. 2021. “Violence Against Women in Politics: The Case of Pakistani Women’s Activism.” Journal of Language Aggression and Conflict 9 (1): 21–46.
Bajak, Frank, and Garance Burke. 2020. “Incendiary Texts Traced to Outfit Run by Top Trump Aide.” Associated Press, November 7.
Beauvais, Edana. 2020. “The Gender Gap in Political Discussion Group Attendance.” Politics & Gender 16 (2): 315–38.
Bernhard, Rachel, Shauna Shames, and Dawn Langan Teele. 2021. “To Emerge? Breadwinning, Motherhood, and Women’s Decisions to Run for Office.” American Political Science Review 115 (2): 379–94.
Bertrand, Marianne, and Sendhil Mullainathan. 2004. “Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination.” American Economic Review 94 (4): 991–1013.
Broockman, David, and Joshua Kalla. 2016. “Durably Reducing Transphobia: A Field Experiment on Door-to-Door Canvassing.” Science 352 (6282): 220–24.
Elder, Elizabeth Mitchell, and Matthew Hayes. 2023. “Signaling Race, Ethnicity, and Gender with Names: Challenges and Recommendations.” Journal of Politics. https://doi.org/10.1086/723820.
Folke, Olle, Johanna Rickne, Seiki Tanaka, and Yasuka Tateishi. 2020. “Sexual Harassment of Women Leaders.” Daedalus 149 (1): 180–97.
Håkansson, Sandra. 2019. “Do Women Pay a Higher Price for Power? Gender Bias in Political Violence in Sweden.” Journal of Politics 83 (2): 515–31.
Herrick, Rebekah, and Sue Thomas. 2022. “Not Just Sticks and Stones: Psychological Abuse and Physical Violence Among U.S. State Senators.” Politics & Gender 18 (2): 422–47.
Karpowitz, Christopher F., and Tali Mendelberg. 2014. The Silent Sex: Gender, Deliberation, and Institutions. Princeton, NJ: Princeton University Press.
Kenski, Kate, Kevin Coe, and Stephen A. Rains. 2020. “Perceptions of Uncivil Discourse Online: An Examination of Types and Predictors.” Communication Research 47 (6): 795–814.
Krook, Mona Lena. 2020. Violence Against Women in Politics. New York: Oxford University Press.
Kteily, Nour, Emile Bruneau, Adam Waytz, and Sarah Cotterill. 2015. “The Ascent of Man: Theoretical and Empirical Evidence for Blatant Dehumanization.” Journal of Personality and Social Psychology 109 (5): 901–31.
Och, Malliga. 2020. “Manterrupting in the German Bundestag: Gendered Opposition to Female Members of Parliament?” Politics & Gender 16 (2): 388–408.
Pitkin, Hanna Fenichel. 1967. The Concept of Representation. Berkeley: University of California Press.
Prillaman, Soledad Artiz. Forthcoming. The Patriarchal Political Order: The Making and Unraveling of the Gendered Political Participation Gap in India. Cambridge: Cambridge University Press.
Schlozman, Kay Lehman, Nancy Burns, and Sidney Verba. 1994. “Gender and the Pathways to Participation: The Role of Resources.” Journal of Politics 56 (4): 963–90.
Sobieraj, Sarah. 2020. Credible Threat: Attacks Against Women Online and the Future of Democracy. New York: Oxford University Press.
Suler, John. 2004. “The Online Disinhibition Effect.” Cyberpsychology & Behavior 7 (3): 321–26.
Thomas, Sue, Rebekah Herrick, Lori D. Franklin, Marcia L. Godwin, Eveline Gnabasik, and Jean R. Schroedel. 2019. “Not for the Faint of Heart: Assessing Physical Violence and Psychological Abuse Against U.S. Mayors.” State and Local Government Review 51 (1): 57–67.
Yan, Alan N., and Rachel Bernhard. 2023. “Replication Data for: The Silenced Text: Field Experiments on Gendered Experiences of Political Participation.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/UYKE7T.
