Gender diversity is an important issue for graduate training in international relations (IR) and political science (Breuning and Sanders 2007; Carpenter 2009; Lake 2013; Walter 2013). Unfortunately, there is mounting evidence that women face disadvantages in the discipline (American Political Science Association 2004; Hancock, Baum, and Breuning 2013; Maliniak, Powers, and Walter 2013). If we assume that part of what it means to encourage female students to pursue academia involves showing them examples of excellent research by women—early and often—then the presence of women-authored research on graduate syllabi has considerable importance (Nexon 2013).
This article investigates the degree to which gender bias exists in graduate IR syllabi. It builds on earlier studies of graduate training in political science (Colgan 2016; Hagmann and Biersteker 2014; Schwartz-Shea 2003; Young 1995). I find that most of the research assigned in IR graduate courses is written by men, and that the gender of the instructor matters significantly for the proportion of assigned research written by women scholars. On average, female instructors assign significantly more research by female authors than male instructors do. Some but not all of the difference between male- and female-taught courses might be explained by differences in course composition (e.g., female instructors appear relatively more likely than male instructors to teach international law courses). Although the empirical evidence collected for this article is limited to the IR field, it seems plausible that similar patterns exist in other fields of political science and in social science generally. This study uses two relatively small samples: one with 42 syllabi containing 3,343 readings and another with 73 syllabi containing 4,148 readings. Given these sample sizes, the findings should be viewed as preliminary, and the analysis could motivate further investigation in IR and other fields.
EMPIRICAL FINDINGS
My basic findings derive from a two-stage analysis. In the first stage, I used a dataset of graduate syllabi from the “core IR” course (“proseminar”) at 42 US universities. Footnote 1 This dataset was developed for a broader investigation of graduate training (Colgan 2016), which explored, among other questions, whether there is evidence of stagnation in IR theory and how Google Scholar is used in political science. The sample of universities derives from the top 65 graduate programs in political science, as ranked by US News & World Report. Inclusion in the sample was determined by whether a syllabus could be found by Internet search; therefore, not every university was included. Footnote 2 Internet availability might introduce some bias; however, there is no reason to expect that it would be significant.
The dataset includes 3,343 required “readings,” each of which could be an article, a book, or a section of a book. Ideally, data from other countries would be included as well, but the outsized influence of American academia in IR makes it the natural starting place for an in-depth investigation of this type (Hagmann and Biersteker 2014; Kristensen 2015). Unfortunately, the data do not address diversity issues other than gender, including race and ethnicity (Vitalis 2015).
Using only the data from these core IR courses in the first stage, I found that 82% of assigned readings in IR proseminars are written by all-male authors (i.e., female authors or co-ed teams account for the remaining 18%). That percentage is high, but it also is roughly consistent with the gender pattern of articles published in top IR journals, which is 81% male-authored (Maliniak, Powers, and Walter 2013). Footnote 3 Thus, the first stage of analysis did not suggest any additional gender bias at the syllabus-design stage. Moreover, the percentage of work authored by women increased over time: all-male authors were responsible for 90% of assigned readings published before 2000, compared to 73.5% of those published in 2000 or later.
One question, however, was whether the instructor’s gender mattered. It looked that way in the first-stage data. Of the 42 courses, eight were taught by women, of which one was co-taught with a male instructor. Those eight syllabi had an average rate of 78% of readings written by all-male authors, compared to an average rate of 84% in the 34 syllabi taught by male instructors. This gender difference (i.e., 78% versus 84%) is statistically significant at the p < 0.002 level. However, the sample of courses taught by female instructors was simply too small to be confident; more data were needed.
Therefore, for the second stage of analysis, my research assistant coded data from 73 additional graduate syllabi, drawn from 37 universities. Footnote 4 Female instructors taught 35 of those courses and male instructors taught the remaining 38. As in the first-stage dataset, the unit of analysis was a required reading, and this new dataset contained 4,148 observations. The courses differed from those in the first-stage dataset, however, in that (1) they were not core IR courses, and (2) we occasionally used more than one syllabus per university. Again, we used only IR courses designed for PhD students. During the first stage of the analysis, my research assistant created a pool of extra syllabi for noncore IR courses from the same set of top-65 universities (again, availability on the Internet was the only selection criterion). I reviewed this pool of syllabi in random order until I identified 35 syllabi taught by female instructors. I then reviewed the syllabi again in random order until I identified a roughly equal number of courses taught by male instructors (rejecting any courses co-taught by a male and a female instructor). The total sample size was chosen based on the tradeoff between the need for statistical power (i.e., more is better) and feasibility of coding (i.e., fewer is better).
Two key findings resulted from the second-stage analysis. First, female instructors tend to assign more readings by female authors than male instructors. “Only” 71.5% of readings in courses taught by women instructors were written by men, either individually or in all-male teams. By contrast, in courses taught by male instructors, male authors wrote 79.1% of readings. A simple t-test confirms that this difference is statistically significant (p = 0.01). Stated differently, at least one female author was included in the authorship of 20.9% of readings in male-taught classes and 28.5% of readings in female-taught classes. Substantively, this suggests that female instructors assign 36% more readings by women than male instructors, or about five readings per course.
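As a simple illustration, the reading-level comparison and the per-course arithmetic can be sketched in a few lines of Python. This is a minimal sketch, not the study’s replication code; the file name and column names (instructor_gender, all_male_authors) are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data layout): compare the share of
# all-male-authored readings in female- vs. male-taught courses.
import pandas as pd
from scipy import stats

readings = pd.read_csv("syllabi_readings.csv")  # hypothetical file and columns

female_taught = readings.loc[readings["instructor_gender"] == "F", "all_male_authors"]
male_taught = readings.loc[readings["instructor_gender"] == "M", "all_male_authors"]

# Shares of readings written entirely by men (the text reports 71.5% vs. 79.1%).
print(female_taught.mean(), male_taught.mean())

# Two-sample t-test on the reading-level indicator (Welch's version).
t_stat, p_value = stats.ttest_ind(female_taught, male_taught, equal_var=False)
print(t_stat, p_value)

# Substantive size: with roughly 57 readings per course (4,148 / 73), the gap
# between 28.5% and 20.9% is on the order of the "about five readings per
# course" figure reported in the text.
print((0.285 - 0.209) * (4148 / 73))
```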
Second, women instructors also appear to be considerably more reluctant than men to assign their own research as required reading. Female instructors assigned an average of 1.68 readings that they had written themselves (individually or as part of a team). Male instructors assigned roughly twice as much of their own work: an average of 3.18 readings. The difference again is statistically significant (p = 0.01).
This difference of 1.5 fewer reading assignments (i.e., comparing male and female instructors’ assignment of their own research) in a single course might not seem particularly important; however, when that difference plays out across many courses and institutions, the substantive effect is larger. If female instructors assigned as much of their own work as male instructors, the female-authored research taught in their courses would increase by 15%. Footnote 5 Even in the small sample of female-taught courses used in this study, that amounts to 52 fewer readings by female authors than in a scenario in which women assigned as much of their own work as men.
Combining these two findings suggests that the gap between male- and female-taught courses would be even larger if female instructors assigned their own work at the same rate as males. If female instructors added an average of 1.5 readings of their own work, without subtracting anything else, research by women authors or co-ed teams would account for 30.7% of their course readings. That rate would be 10 percentage points higher than the rate in male instructors’ classes (i.e., 30.7% versus 20.9%). That is, female instructors would be teaching 47% more readings by women than male instructors.
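The counterfactual arithmetic behind these figures is straightforward. The following is a back-of-the-envelope sketch using only the averages reported above, not a calculation drawn from the replication files.

```python
# Back-of-the-envelope check of the self-assignment counterfactual,
# using only figures reported in the text (a sketch, not replication code).
n_female_courses = 35
readings_per_course = 4148 / 73            # roughly 57 required readings per course
own_work_gap = 3.18 - 1.68                 # 1.5 additional self-assigned readings

# Roughly 52 "missing" female-authored readings across the female-taught sample.
print(own_work_gap * n_female_courses)

# Readings with at least one female author per female-taught course today...
female_readings = 0.285 * readings_per_course
# ...and under the counterfactual of adding 1.5 own readings per course without
# subtracting anything, which lands near the 30.7% reported in the text.
print((female_readings + own_work_gap) / (readings_per_course + own_work_gap))

# Relative to the 20.9% rate in male-taught courses, that is roughly 47% more.
print(0.307 / 0.209 - 1)
```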
The impact of this gendered pattern of syllabus design on the field or on individual careers is unknown; it may have no real impact. However, citations are increasingly used to gauge research productivity (Hendrix 2015; Reiter 2016), a gender bias appears to exist in citation patterns in IR (Maliniak, Powers, and Walter 2013), and some scholars suggest that citation patterns are driven by graduate syllabi (Nexon 2013). Thus, it seems reasonable to be aware of these practices.
POTENTIAL CAUSES
These findings indicate only correlation, not causation. Determining exactly why we observe this difference between male- and female-taught courses is more difficult. Before drawing conclusions about bias, it is important to consider potential confounding factors, two of which seem especially plausible: instructor age and differences in course composition.
With respect to instructor age, one possibility is that younger instructors generally tend to assign more female-authored research than older instructors and that, on average, female instructors are younger than male instructors. If both were true, they could jointly explain the observed difference between male- and female-taught courses without any instructor-level bias.
I tested this hypothesis by using the year of PhD completion as a proxy for instructor age. The findings suggest that age is not a major factor in explaining the gender composition of syllabus assignments. First, the age difference between male and female instructors is not substantial: the average PhD completion year was 1995 for men and 1998 for women. Among male instructors, 24 had completed their PhD before 2000 and 14 in 2000 or later; among female instructors, 19 had completed their PhD before 2000 and 16 in 2000 or later. Therefore, on average, female instructors are more junior than male instructors, but the difference is not large. Second, younger instructors tend to assign slightly more female-authored research; however, the instructor’s gender is much more important than age. Specifically, among instructors who completed their PhD before 2000, female instructors assigned an average of 72.7% of readings by all-male authors, compared to 80.3% for male instructors. Among instructors who completed their PhD in 2000 or later, female instructors assigned an average of 70.1% of readings by all-male authors, compared to 77.1% for male instructors. Thus, it appears that even junior male instructors assign less female-authored research than senior female instructors.
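A comparison of this kind reduces to a simple cross-tabulation. The sketch below assumes a hypothetical reading-level table with instructor_gender, instructor_phd_year, and all_male_authors columns; it illustrates the proxy comparison rather than reproducing the study’s actual code.

```python
# Sketch of the age-proxy comparison (hypothetical table and column names).
import pandas as pd

readings = pd.read_csv("syllabi_readings.csv")
readings["phd_cohort"] = readings["instructor_phd_year"].apply(
    lambda year: "2000 or later" if year >= 2000 else "before 2000"
)

# Share of all-male-authored readings by instructor gender within each cohort;
# the text reports 72.7%/80.3% (pre-2000) and 70.1%/77.1% (2000 or later).
print(
    readings.groupby(["phd_cohort", "instructor_gender"])["all_male_authors"]
    .mean()
    .unstack()
)
```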
Instructor age might not matter, but the age of the authors of assigned readings certainly does. Assigned readings typically come from tenured professors, a more male-dominated category than untenured professors. This does not explain the difference between female and male instructors, however, because both should be equally likely to assign senior professors’ work.
A second possibility is related to course composition. It is possible that men tend to teach security courses, whereas women tend to teach nonsecurity courses (e.g., political economy, environmental politics, and human rights). Also, men might be more likely to publish in security studies, whereas women publish more heavily in nonsecurity research areas. This could create the observed result (i.e., male instructors assign more male-authored research) without any real instructor-level bias: instead, it would be all about teaching and research preferences.
I tested for this possibility by asking my assistant to code the syllabi into one of five categories: (1) Security; (2) International Political Economy (IPE); (3) International Organization (IO) and International Law; (4) Comprehensive; and (5) Other. Comprehensive courses tended to follow the core IR course as part of a series; courses in the Other category tended to focus on special topics (e.g., ethnic politics) or applications of mathematical models across a range of substantive topics. The syllabi were coded on the basis of four elements, in descending order of importance: (1) the summary description and introduction of the syllabus; (2) weekly themes and readings; (3) the course title; and (4) topics for major assignments. Tables 1 and 2 show the results of this coding exercise. For most categories, male and female instructors were roughly equally likely to teach courses of that type, but there were two exceptions: female instructors taught proportionally more IO and law classes, whereas male instructors taught more courses in the Other category. This difference is important because IO and law courses tend to have more readings written by women or co-ed teams (regardless of the gender of the instructor).
Table 3 imagines a scenario in which female instructors taught the same distribution of course types as male instructors but continued to assign readings the way female instructors did in the original data sample. In that hypothetical scenario, the percentage of male-authored readings would increase from 71.5% to 74.4%.
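The reweighting exercise underlying Table 3 can be sketched as follows. The column names (course_type, syllabus_id, and the others) are hypothetical, and the snippet assumes every course type appears in both the male- and female-taught subsamples; it is an illustration of the logic rather than the study’s actual computation.

```python
# Sketch of the Table 3 counterfactual: female instructors' assignment behavior
# within each course type, weighted by male instructors' course-type mix.
import pandas as pd

readings = pd.read_csv("syllabi_readings.csv")  # hypothetical file and columns

# Female instructors' rate of all-male-authored readings within each course type.
female_rates = (
    readings[readings["instructor_gender"] == "F"]
    .groupby("course_type")["all_male_authors"]
    .mean()
)

# Male instructors' distribution of courses across the same course types
# (one row per syllabus, assuming a syllabus_id column).
male_course_shares = (
    readings[readings["instructor_gender"] == "M"]
    .drop_duplicates("syllabus_id")["course_type"]
    .value_counts(normalize=True)
)

# Counterfactual all-male share: the text reports a rise from 71.5% to 74.4%.
print((female_rates * male_course_shares).sum())
```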
In short, the data suggest that about a third of the difference between male- and female-taught courses can be explained by differences in course composition (i.e., because female instructors appear more likely to teach IO and law courses than male instructors). Most of the overall difference between male and female instructors, however, does not appear attributable to differences in course composition. Looking only at security courses, for instance, 85.8% of the readings assigned by male instructors are written by men, compared to only 74.1% of those assigned by female instructors.
Inferences should be made cautiously, however, because of the small sample size. The total sample of 73 syllabi is large enough to observe statistically significant differences between male- and female-taught courses, but observing these differences becomes more difficult within subcategories. For instance, female instructors teach more IO and law courses than male instructors in this sample, but that difference could have arisen by random chance (p = 0.21). If so, attributing a third of the difference between male- and female-taught courses to differences in course composition might be an overestimate. We can be confident, however, that course composition alone does not explain all of the difference between male- and female-taught courses: the difference is still statistically significant (p < 0.02) even when looking only at the subsample of security courses.
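One way to run checks of this kind (not necessarily the exact tests used here) is a chi-square test on the instructor-gender by course-type table, followed by a separate t-test within the security subsample, as sketched below with the same hypothetical file and column names.

```python
# Sketch of the subsample checks (hypothetical tables and column names; not
# necessarily the exact tests reported in the article).
import pandas as pd
from scipy import stats

readings = pd.read_csv("syllabi_readings.csv")
syllabi = pd.read_csv("syllabi_courses.csv")   # hypothetical: one row per syllabus

# Could the course-type difference between male and female instructors have
# arisen by chance? A chi-square test on the gender-by-course-type table is one
# check (the text reports p = 0.21 for this difference).
course_table = pd.crosstab(syllabi["instructor_gender"], syllabi["course_type"])
chi2, p_types, dof, expected = stats.chi2_contingency(course_table)
print(p_types)

# Is the instructor-gender gap still significant within security courses only?
security = readings[readings["course_type"] == "Security"]
f = security.loc[security["instructor_gender"] == "F", "all_male_authors"]
m = security.loc[security["instructor_gender"] == "M", "all_male_authors"]
print(stats.ttest_ind(f, m, equal_var=False))  # the text reports p < 0.02
```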
Another caveat is needed. It seems likely that female-authored research appears at higher rates in Feminist IR courses. However, none of the courses included in the data sample has the word feminist (or gender or women) in the title, and only one course had feminist in the subtitle, so the gap between male- and female-taught courses observed here is not driven by a disproportionate number of Feminist IR courses taught by women.
So, if instructor age and course composition do not explain this gender gap, what does? We can only conjecture, but I suspect that a combination of three factors is responsible: network effects, explicit bias, and implicit bias. Footnote 6 Network effects might matter if female instructors have closer social ties with other female instructors than with male instructors. If instructors, on average, are more likely to assign readings by authors with whom they have social ties, it could explain the gender gap evident in the data. A second factor is explicit bias against female-authored research, which I assume is relatively rare. Probably much more common is the third factor: implicit bias. Instructors may draw on what they were taught as graduate students—updated with some new material, based on what comes to mind when they are designing the syllabus. At this stage, implicit bias could be a key factor: as instructors, we might be more likely to think of male-authored research as more “essential” for our course than female-authored research.
Still unexplained is the tendency for female instructors to assign their own work more rarely than male instructors do. A female colleague of mine suggested a possible explanation: women are painfully aware of stereotypes against female instructors, so they are reluctant to assign their own work unless they are highly confident of its excellence—especially on a syllabus where it would be in the company of other excellent work, mostly authored by men. This is somewhat analogous to women being unwilling to apply for jobs unless they fulfill all of the requirements in the job listing, whereas men are more likely to apply even if they meet only some of the requirements—a trend for which there is documented evidence (Clark 2014). This explanation for syllabus design remains conjecture, however.
Even without a precise causal explanation for these findings, they are difficult to ignore. Moreover, any of the proposed explanations raises additional difficult questions about how graduate students in IR should be trained. Most instructors want to assign the “best” readings—but “best” is partly subjective, and the evidence presented in this article suggests that gender affects these judgments. After I blogged about preliminary findings in August 2015 (Colgan 2015), many instructors told me that revising their syllabi with gender in mind was not only feasible but also improved them.
CONCLUSION
In summary, the evidence suggests that male and female instructors systematically differ in the way that they design IR courses. Two differences are especially striking. First, when identifying the “best” research to assign in their courses, male instructors select substantially more research written by male authors than female instructors do. Second, male instructors also assign substantially more of their own research than female instructors do, thereby potentially giving their own work greater exposure to future scholars in the field. About a third of the difference between male- and female-taught courses can be explained by differences in course composition, but differences remain even when looking within course types (e.g., only security courses).
The appropriate response to these findings is not clear. Many female instructors, reacting to the preliminary evidence I shared in a blog post (Colgan 2015), expressed a desire to match the behavior of their male colleagues, at least with regard to assigning their own research in graduate courses. An alternative response might be to encourage male instructors to emulate their female colleagues. To the extent that even some male instructors do so, the result could be more female-authored research being taught and less self-promotion.
A striking feature of this issue is that some view syllabus design through the lens of equality of opportunity, whereas others see it as forcing an artificial equality of outcome. The equality-of-opportunity perspective focuses on how syllabi affect the motivation, interests, and incentives of today’s students, thereby shaping their opportunities for success (and the field’s demographics) in the future. The concern about equality of outcome focuses on the ideal of an intellectual meritocracy and, therefore, the desire to assign research based on content rather than an author’s gender or other characteristics; on this view, too much focus on gender threatens the meritocracy of ideas necessary for academia to function well. Both viewpoints contain some truth, making syllabus design a thorny issue.
The primary part of the diversity problem with respect to gender balance in IR syllabi and citation practices probably starts “upstream”—that is, with the loss of women from the profession before the research and publication stage. According to the American Political Science Association (APSA), more than 50% of all undergraduates and 42% of graduate students in political science are female, whereas only 26% of instructors are women (APSA 2004, 3; Sedowski and Brintnall 2007). Within political science, women are less likely than men to choose IR as a field (Sedowski and Brintnall 2007). Greater effort to include women-authored research in IR graduate seminars could help encourage higher female-student participation in the next generation of scholars.
ACKNOWLEDGMENTS
The author thanks Sarah Bush, Courtenay Conrad, Jessica Green, Nicholas Miller, and participants at the Visions in Methodology 2016 Conference and an International Studies Association 2016 panel for their feedback, as well as others who contributed in informal conversations since my initial blog post. I thank Miriam Hinthorn for her excellent research assistance.