
The Qualifying Field Exam: What Is It Good For?

Published online by Cambridge University Press: 23 December 2019

Nicole McMahon, Christopher Alcantara, and Laura B. Stephenson
The University of Western Ontario
Rights & Permissions [Opens in a new window]

Abstract

Most political scientists self-identify as comparativists, theorists, Americanists, or by another label corresponding to the qualifying field exams (QFEs) that they passed during their doctoral studies. Passing a QFE indicates that a graduate student or faculty member is broadly familiar with the full range of theories, approaches, and debates within a subfield or research theme. The value of the QFE as a form of certification, however, depends on the extent to which the subfield or theme is cohesive in and of itself, as well as on whether departmental lists draw on a common pool of publications. This article investigates the value of the QFE by examining the cohesiveness of 16 Canadian politics PhD QFE reading lists. Our findings suggest that it is problematic to assume that scholars who pass a QFE share a common knowledge base.

Copyright © American Political Science Association 2019

Most political science PhD programs in North America and Europe require their doctoral students to pass two or more qualifying field exams (QFEs) that correspond to existing subfields or to a particular research theme. These subfields and themes include comparative politics, international relations, political theory, domestic politics, methodology, gender and politics, and local government, among others. Passing these exams is an important milestone for graduate students because it means that they can finally begin serious work on their dissertations.

In their CVs, most if not all political scientists self-identify as methodologists, comparativists, theorists, or by another moniker corresponding to the QFEs that they passed in graduate school. In this sense, QFEs serve as a type of certification process and mechanism indicating that the individual has demonstrated "a broad mastery of core content, theory, and research" of a particular subfield (Mawn and Goldberg 2012, 156; see also Wood 2015, 225) and is now "ready to join the academic community"[1] (Estrem and Lucas 2003, 396; Schafer and Giblin 2008, 276). Achieving this milestone also means that one is qualified to teach a variety of courses in a particular subfield (Ishiyama, Miles, and Balarezo 2010, 521; Jones 1933; Wood 2015, 225); to read and synthesize new research (Ponder, Beatty, and Foxx 2004); and to converse effectively with others in the subfield at conferences, at job talks, and in departmental hallways. In short, passing a QFE signals that the department has certified that an individual is suitably prepared for advanced study and teaching in a particular area.[2]

Surprisingly, there is substantial variation in how departments prepare students and administer their QFEs. Some departments have formal reading lists; others leave it to the supervisors and the students to generate them; some rely solely on class syllabi and core courses to prepare their students. The exam itself varies, with some departments opting for different written formats and some requiring oral defenses. This organizational variation seems at odds with the idea of the QFE as a broadly recognized qualification.

This article analyzes an original dataset of books, articles, and chapters taken from QFE reading lists in a single subfield of political science—Canadian politics—to evaluate the extent to which QFE preparation provides comprehensive and cohesive training. Is there consistency across departments? Our main goal in answering this question is to interrogate the usefulness of the QFE as a certification tool. If the reading lists converge around numerous books and articles, then we should expect those who pass a QFE to share a common language for researching, communicating, and teaching about the subfield. If the lists are incoherent or fragmented, then the QFE certification likely will be less valuable as a heuristic for evaluating a candidate's ability in these areas. Similarly, if the reading lists privilege certain topics and voices over others, then this may have tremendous implications for the current and future direction of the discipline as it relates to various epistemological, ontological, and normative concerns (Stoker, Peters, and Pierre 2015). Our results suggest that the utility of the QFE as a certification tool may be weaker than expected: in our dataset of 2,953 items taken from the preparation materials of 16 universities, there is not a single item that appears on each list.


WHY CANADIAN POLITICS?

The Canadian politics QFE is a useful test of the certification argument because it is likely to be the most cohesive among the traditional subfields of the discipline. It focuses on the politics of one country and therefore is limited in geographic scope and substantive focus, and the body of literature is manageable given the relatively small number of individuals working in it. Moreover, the Canadian political science community has long been concerned about various external and internal intellectual threats, which suggests a self-awareness that should contribute to cohesion. In terms of external threats, scholars have expressed concerns about the Americanization of the discipline (Albaugh 2017; Cairns 1975; Héroux-Legault 2017) and a comparative turn (Turgeon et al. 2014; White et al. 2008). Others are concerned about internal threats, lamenting the fact that white, male, and English-Canadian voices have long dominated the scholarly community at the expense of French, Indigenous, and other racial and ethnic minority voices (Abu-Laban 2017; Ladner 2017; Nath, Tungohan, and Gaucher 2018; Rocher and Stockemer 2017; Tolley 2017). This introspection, coupled with the limited size of the community, is likely to increase consistency across departments; therefore, we expect the core set of readings identified in the reading lists to be more unified and comprehensive than in other subfields.

THE DATA

To test this assumption, we emailed the graduate-program directors of all political science PhD programs across Canada to request copies of their reading lists for the Canadian politics QFE.[3] Nineteen universities in Canada offer PhD training in political science and 18 offer Canadian politics as a subfield. Email requests were sent in fall 2016 and summer 2018, yielding 16 lists; two universities indicated that they had no set list. Of the lists received, four were course syllabi.[4]

Research assistants entered each reading item into a spreadsheet and coded the entry for author name(s); gender of the author(s); year of publication; type (i.e., journal article, book, book chapter, or government document); country of publication; language; type of analysis (i.e., qualitative or quantitative); whether the piece was comparative; and category title. A random sample of 10% of entries from each list was checked and verified. There were as many as 35 subheadings on the lists; therefore, we collapsed the categories into 17 broad titles, as follows: General/Classics (5/16 lists); The Discipline/Methods/Theory (10/16); Political Culture (7/16); Federalism (13/16); Media (2/16); Indigenous (8/16); Identity (11/16); Constitution/Charter Politics/Courts (12/16); Political Parties (10/16); Interest Groups/Social Movements (8/16); Political Economy (7/16); Provinces/Regionalism/Quebec (9/16); Public Policy (9/16); Gender (6/16); Multilevel Governance, Local and Urban Politics (4/16); Political Behaviour, Voting, and Elections (12/16); and Parliament (14/16).[5]
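To make the coding scheme concrete, the following sketch shows one way the entries and the category collapsing could be represented. The field names and the example heading mappings are illustrative assumptions, not our actual spreadsheet layout.

    # Illustrative coding schema; field names and example headings are
    # hypothetical, not the exact spreadsheet used in the study.
    import random
    from dataclasses import dataclass

    @dataclass
    class ReadingItem:
        authors: list[str]         # all listed authors (see note 6)
        author_genders: list[str]  # gender coded for every author
        year: int
        item_type: str             # "journal article", "book", "book chapter", "government document"
        country: str               # country of publication
        language: str              # "English" or "French"
        analysis: str              # "qualitative" or "quantitative"
        comparative: bool
        category: str              # subheading as it appears on the department's list

    # Collapsing up to 35 original subheadings into 17 broad titles
    # (the keys here are hypothetical examples of original headings).
    CATEGORY_MAP = {
        "Federal-Provincial Relations": "Federalism",
        "Fiscal Federalism": "Federalism",
        "House of Commons": "Parliament",
    }

    def broad_category(item: ReadingItem) -> str:
        return CATEGORY_MAP.get(item.category, item.category)

    def verification_sample(items: list[ReadingItem], frac: float = 0.10) -> list[ReadingItem]:
        """Draw the 10% random check-and-verify sample for one list."""
        k = max(1, round(len(items) * frac))
        return random.sample(items, k)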

DESCRIPTIVE RESULTS

Our starting point for analysis was to examine simple descriptive characteristics. It was immediately clear that the lists varied considerably, most notably in size. The average number of readings assigned on a list was 184.6, but the range, from 44 to 525, was informative (median=130). This finding suggests real differences in the workload of graduate students in different locations. There also was variation in list contents. The average number of books was 97, ranging from 21 to 339 (median=55.5). The number of articles ranged from eight to 100 (average=48; median=52). Finally, book chapters were popular inclusions, making up an average of 23.2% of a list (4.1% to 62.7%; median=22.5%). Most of the material was published in Canada (73.6%, ranging from 54.1% to 94.1%; median=72.1%), but a substantial minority was published in Europe (average=19.5%; median=20.1%).
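A minimal sketch of how these per-list summaries can be computed, assuming the coded entries live in a tidy pandas DataFrame df with one row per reading and columns such as "university" and "item_type" (the DataFrame and column names are our assumption):

    import pandas as pd

    # Per-list sizes and the summary statistics reported above.
    list_sizes = df.groupby("university").size()
    print(list_sizes.mean(), list_sizes.median(), list_sizes.min(), list_sizes.max())

    # Count of books per list.
    books = (df[df["item_type"] == "book"]
             .groupby("university").size()
             .reindex(list_sizes.index, fill_value=0))

    # Book chapters as a percentage of each list's items.
    chapter_share = (df["item_type"].eq("book chapter")
                     .groupby(df["university"]).mean() * 100)
    print(chapter_share.mean(), chapter_share.median())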

We next considered the diversity of voices present on each list. Although Canada is a bilingual country with a bilingual national association, the reading lists were decidedly not bilingual. Outside of Quebec, the average number of French-language readings was only 3.17, ranging from zero to 36 (median=0). Within Quebec, perhaps reflecting the dominance of English in political science, the average number was only 28, ranging from a low of nine to a high of 58 (median=22.5). The gender breakdown of authors was more positive: 23.58% of authors were female, ranging from 12.38% to 32.56% across lists (median=23.98%).[6] This compares with a Canadian Political Science Association membership that was 40.9% female in 2017, an increase from 25.4% in 1997. However, the percentage of lists dedicated to subheadings that consider marginalized voices (i.e., gender, Indigenous, class, race, ethnicity, multiculturalism, immigration, religion, interest groups, and social movements) was small (3.95%, ranging from zero to 18.6%; median=0.88%). This is consistent with recent research suggesting the presence of a hidden curriculum in political science that silos marginalized topics and voices while privileging approaches that use gender and race as descriptive categories rather than as analytic or theorized categories (Cassese and Bos 2013; Cassese, Bos, and Duncan 2012; Nath, Tungohan, and Gaucher 2018).
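Following note 6, the female-author percentage for a list counts every listed author of every item. A short sketch, reusing the illustrative ReadingItem schema above:

    def female_share(readings: list[ReadingItem]) -> float:
        """Percentage of female authors, counting all authors of
        multi-authored pieces (per note 6)."""
        genders = [g for item in readings for g in item.author_genders]
        return 100 * sum(g == "female" for g in genders) / len(genders)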

Because large and small departments have different faculty complements whose experiences may affect the design of comprehensive-exam lists, figure 1 reports many of the same statistics by department size, comparing small departments (i.e., 25 or fewer faculty members) with large ones. The representation of female authors is almost identical, but larger departments include slightly less quantitative political science, more articles and fewer chapters, and more international publications. The share of readings published before versus after 1990 was similar across the two groups.

Figure 1 Descriptive Statistics of Readings Lists, by Department Size (Percentages)

A COHESIVE FIELD?

Considering the descriptive overview of the reading lists, we can now answer our research question: Is the study of Canadian politics cohesive? We examined this question in three ways: topics covered, authors cited, and readings included.[7] As noted previously, the variation in subheadings across the QFE reading lists is substantial. What is most striking is that no topic is covered by every single university (see the previous discussion). For example, "federalism" is included on 13 of the 14 lists with subheadings[8]; the constitution, constitutional development, or constitutionalism on 10; and Quebec politics on eight. Not even "Parliament" (8/14) or "elections" (10/14) appears consistently.[9] Perhaps this finding is simply a matter of semantics; however, with general topics such as "federalism," it is possible to examine more closely what the different schools choose to cover. We compared the readings classified under "federalism" (233 items) with those under "Parliament" (166 items) and found minimal overlap (only four items). The use of subheadings does not seem to be creating false distinctions.

Moving beyond topic headings, which are flawed measures of content because many readings address multiple topics, we considered the authors (or viewpoints) to whom students are exposed. We constructed a score for each author that reflects the number of times they appear on each list, weighted by the number of items on that list (i.e., an additive measure that considers each list equally).[10] Figure 2 lists the top 10 cited authors, noting the number of readings cited, their weighted frequency, and the number of lists on which they appear (in parentheses next to their names).[11]
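In code, this additive, list-weighted score might look like the following sketch. The data structure is assumed, and the scaling to the percentage figures shown in figure 2 is not reproduced here.

    from collections import defaultdict

    def author_scores(lists: dict[str, list[list[str]]]) -> dict[str, float]:
        """lists maps each university to its readings, with each reading
        given as its list of author names. Multi-authored pieces count once
        as an item but credit every listed author (see note 10)."""
        scores: dict[str, float] = defaultdict(float)
        for readings in lists.values():
            n_items = len(readings)
            for authors in readings:
                for name in authors:
                    # Weight each appearance by list size so that every
                    # list contributes equally to the additive score.
                    scores[name] += 1.0 / n_items
        return dict(scores)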

Figure 2 Most-Cited Authors

Looking at the data this way is informative. The most-cited author across most measures (i.e., weighted frequency, number of lists, and number of works cited) is Alan Cairns. He appears on 15 of the 16 lists, with 24 different pieces cited, and he has the highest weighted frequency of all authors. Donald Savoie is the only author to appear on all 16 lists, but his weighted frequency is relatively low. Going down the list by weighted frequency, it is interesting to note that the three measures we report are not always correlated. For example, Elisabeth Gidengil has a weighted frequency of 28.29% (fifth overall) but appears on only 13 lists and has 16 pieces cited. Peter Russell, conversely, has a weighted frequency of 22.5% (ninth overall) yet appears on 15 lists with 17 pieces of work.

These results suggest that there is some consistency in terms of individual viewpoints that are being studied during QFE preparation (i.e., assuming that scholars are consistent across their work). However, a total of 1,188 author names are included on the lists; therefore, agreement on some of the top 10 does not indicate a substantial amount of coherence.

Finally, we considered specific readings to look for cohesiveness in QFE training. Table 1 reports, in descending order of the number of departments, all readings that appeared on a majority of lists in our database (i.e., eight or more). We were surprised to see that no reading is included on every list. The closest is "The Electoral System and the Party System in Canada" by Alan Cairns, which is included on 14 lists. Only two readings are included on 13 lists and two on 12 lists. The modal number of lists for a single reading is one: 1,747 readings appear on only one list. The 27 readings reported in table 1 vary in their dates of publication, ranging from 1966 to 2012.
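A sketch of the list-coverage count behind table 1, assuming each reading has first been de-duplicated to a canonical key (e.g., an author-year-title string; the structure and names are illustrative):

    from collections import Counter

    def list_coverage(readings_by_university: dict[str, list[str]]) -> Counter:
        """Count on how many department lists each canonical reading appears."""
        coverage: Counter = Counter()
        for readings in readings_by_university.values():
            for key in set(readings):  # a reading counts once per list
                coverage[key] += 1
        return coverage

    coverage = list_coverage(readings_by_university)
    majority = {k: n for k, n in coverage.items() if n >= 8}   # table 1 threshold
    modal_count = Counter(coverage.values()).most_common(1)    # mode is 1 in our data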

Table 1 Most Common Individual Readings

This analysis reveals that there is no substantial "canon" covered by all QFE reading lists for Canadian politics. Noteworthy, however, is the repeated appearance of Cairns in the list of top-cited readings (his 1968 article appears on 14 of 16 lists) and of Savoie's name on all 16 lists. If there is to be one "godfather" of Canadian political science, Cairns and Savoie would be strong nominees. Another point to consider is the diversity of subject matter in the most-read pieces. Parliament, the Charter of Rights and Freedoms, the courts, political culture, federalism, multinationalism, Indigenous politics, women and politics, and party politics are all popular. We find this result encouraging in the sense that it suggests that learning about the parliamentary process and/or elections is not the only feature that unites QFE training across the country. There appears to be at least majority recognition of the value of many pieces that provide alternative viewpoints on Canadian politics. Combined with our previous findings about topic headings, this makes us considerably more optimistic that students learn alternative viewpoints most of the time. Nonetheless, the fact that most of these viewpoints appear only on a majority rather than on all of the lists still gives us pause.

WHAT ARE QFES GOOD FOR?

Analyzing QFE reading lists is a useful way of understanding what “qualifying” actually means. It also provides insight into the comprehensiveness and cohesiveness of an academic field and the extent to which any type of universal training is even possible. Although we suspected that analyzing the Canadian politics subfield might be an “easy” case for finding cohesion, our results suggest otherwise. There is no single topic or reading shared by all political scientists who take comprehensive examinations in the field of Canadian politics. This finding means that looking to QFEs as evidence that job candidates and/or faculty members from various universities share a common vocabulary for teaching, communicating, and collaborating is at least somewhat flawed. There is no guarantee that PhD students or even Canadian politics faculty members working in the same department will share the same knowledge base if they received their training at different universities.


Perhaps this fact is unproblematic. Departments frequently hire candidates knowing that their home department specializes in a particular research area; therefore, the lack of overlap across reading lists in fact may be beneficial. We are normatively agnostic on this point. Our purpose was to investigate whether QFEs represented a core or canon in Canadian politics, and our results suggest that they do not. We leave it to individual departments to consider the implications of our findings for graduate training and hiring.

FOOTNOTES

1. Some see the QFE as an opportunity to "weed out" weak students who show little academic promise (Schafer and Giblin 2008, 277).

2. We recognize that QFEs are far from the only (or most important) training that graduate students receive. Exposure to and training in methodological approaches and critical thinking about how to study politics also are essential. In many ways, passing a PhD defense indicates that the student has mastered the art of doing effective research based on these skills. In this article, we chose to focus on the substantive knowledge gained from QFEs because of how they tend to be used in the job market: as an indicator of an individual’s ability to teach and supervise with expertise in a particular area of study. There are many different ways to gain exposure to research approaches but far fewer ways to gain exposure to key theories and arguments that may be considered foundational for subfields. Our focus is on understanding whether we are justified in inferring common subfield expertise based on success in QFEs.

3. One may argue that a better way of assessing commonality in training would be to assess the exam questions. We believe that doing so would not reveal whether students are exposed to common seminal works across universities (e.g., if an exam does not ask students to comment specifically on a seminal publication), and the existence of this type of "core" is what we seek to evaluate. Similarly, whereas some might argue that an analysis of questions may provide better insight into whether students are expected to be conversant on a set of common topics, this approach likely will not indicate whether students across departments share a common vocabulary or set of assumptions about the topic. If the QFE question is "What explains institutional change?", it seems plausible that in one department students might answer it using a wide range of theories and methodological approaches, whereas a student from a school such as Rochester, with its strong reputation for providing a specific type of training, might answer it using only rational choice, game theory, and formal modeling. Given these considerations, we elected not to include exam questions in our analysis.

4. Some might argue that the departments that do not include some of the classic works (see table 1) in their reading lists are those that provided syllabi rather than department-approved reading lists because those departments might expect students to prepare themselves to be fully conversant with the literature. To assess this argument, we separated out the syllabi departments and compared their top-cited readings to those found in departments that had reading lists. (This was calculated by identifying the readings that appeared on more than 50% of each sample, which resulted in 19 top-cited readings for the list departments and 16 for the syllabi departments.) We found nine readings in common between departments with syllabi and departments with reading lists. Furthermore, the mean, median, and range of the year of publication for the top-cited readings in the syllabi versus reading-list departments are as follows: 1997, 2001, and 1968–2012 versus 1996, 2000, and 1966–2012, respectively. These similarities suggest no real difference between syllabi and reading-list departments.

5. Two lists had no subheadings at all. The large number of unique headings reflects the fact that departments have complete autonomy to organize their lists however they wish. This results in some overlap of readings across categories. For instance, 18 of the 166 Parliament readings and 20 of the 233 federalism readings also appeared in the institutions category (which contained 73 readings).

6. The author count was calculated based on the total number of authors cited on our list. For example, if an article had three authors, all were entered, and the gender percentage was calculated using all three authors.

7. Some departments offer a QFE in a topic that is outside of traditional subfields, such as “gender and politics” and “local government.” In our sample, four departments had a gender and politics QFE, none had a race and politics QFE, one had an Indigenous politics QFE, and one had a local government QFE. In those departments, it is possible that literature on those topics may not appear in the traditional subfield reading lists because they are covered in the gender and politics list, for instance. If so, we might expect the Canadian politics lists to have fewer women authors in those departments that also offer a QFE in gender and politics. To test this argument, we compared the percentage of female authors in all 16 reading lists versus the 12 departments that do not have and the four departments that do have a separate gender and politics QFE. The means for these three samples are 23.58%, 22.71%, and 28.35%, respectively. These results suggest that departments that offer a QFE in gender and politics are more sensitive to the inclusion of women authors in their Canadian politics lists. We found a similar trend when we compared women and politics readings across the samples. To do so, we created a dataset of readings with titles that included “gender,” “women,” “woman,” “female,” “feminist,” “feminism,” or “child.” We found that departments with a separate women and politics QFE tended to have more women and politics readings, on average, in their Canadian politics lists than those departments that did not have a women and politics QFE (i.e., a 4.69-point difference between the means).
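The title-keyword filter described in this note can be sketched as follows (the exact matching rules, such as case folding, are our assumption):

    import re

    # Keywords used to flag women-and-politics readings by title.
    KEYWORDS = ("gender", "women", "woman", "female", "feminist", "feminism", "child")
    PATTERN = re.compile("|".join(KEYWORDS), re.IGNORECASE)

    def is_women_and_politics(title: str) -> bool:
        return bool(PATTERN.search(title))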

8. For this analysis, we looked for the French equivalents in order to incorporate the lists from Quebec universities.

9. We realize that an analysis of headings has limited value given that each department organizes and constructs their own headings and lists. As mentioned previously, for instance, we found that several Parliament and federalism readings also appeared in the institutions list (see note 5). Nevertheless, we think the headings—in combination with the other data in this article—provide a useful and convincing picture of a weakly cohesive subfield.

10. Multi-authored pieces were counted as one item; however, the piece was included in the weighted frequency for each author.

11. The minimum number of lists on which an author in the top 10 appears is 13.

REFERENCES

Abu-Laban, Yasmeen. 2017. "Narrating Canadian Political Science: History Revisited." Canadian Journal of Political Science 50 (4): 895–919.
Albaugh, Quinn M. 2017. "The Americanization of Canadian Political Science? The Doctoral Training of Canadian Political Science Faculty." Canadian Journal of Political Science 50 (1): 243–62.
Cairns, Alan C. 1975. "Political Science in Canada and the Americanization Issue." Canadian Journal of Political Science 8 (2): 191–234.
Cassese, Erin C., and Angela L. Bos. 2013. "A Hidden Curriculum." Politics & Gender 9 (2): 214–23.
Cassese, Erin C., Angela L. Bos, and Lauren E. Duncan. 2012. "Integrating Gender into the Political Science Core Curriculum." PS: Political Science & Politics 45 (2): 238–43.
Estrem, Heidi, and Brad E. Lucas. 2003. "Embedded Traditions, Uneven Reform: The Place of the Comprehensive Exam in Composition and Rhetoric PhD Programs." Rhetoric Review 22 (4): 396–416.
Héroux-Legault, Maxime. 2017. "The Evolution of Methodological Techniques in the Canadian Journal of Political Science." Canadian Journal of Political Science 50 (1): 121–42.
Ishiyama, John, Tom Miles, and Christine Balarezo. 2010. "Training the Next Generation of Teaching Professors: A Comparative Study of PhD Programs in Political Science." PS: Political Science & Politics 43 (3): 515–22.
Jones, Edward S. 1933. Comprehensive Examinations in American Colleges. New York: The Macmillan Company.
Ladner, Kiera. 2017. "Taking the Field: 50 Years of Indigenous Politics in the CJPS." Canadian Journal of Political Science 50 (1): 163–79.
Mawn, Barbara E., and Shari Goldberg. 2012. "Trends in the Nursing Doctoral Comprehensive Examination Process: A National Survey." Journal of Professional Nursing 28 (3): 156–62.
Nath, Nisha, Ethel Tungohan, and Megan Gaucher. 2018. "The Future of Canadian Political Science: Boundary Transgressions, Gender, and Anti-Oppression Frameworks." Canadian Journal of Political Science 51 (3): 619–42.
Ponder, Nicole, Sharon E. Beatty, and William Foxx. 2004. "Doctoral Comprehensive Exams in Marketing: Current Practices and Emerging Perspectives." Journal of Marketing Education 26 (3): 226–35.
Rocher, François, and Daniel Stockemer. 2017. "Langue de publication des politologues francophones du Canada." Canadian Journal of Political Science 50 (1): 97–120.
Schafer, Joseph A., and Matthew J. Giblin. 2008. "Doctoral Comprehensive Exams: Standardization, Customization, and Everywhere in Between." Journal of Criminal Justice Education 19 (2): 275–89.
Stoker, Gerry, B. Guy Peters, and Jon Pierre. 2015. The Relevance of Political Science. London: Palgrave Macmillan.
Tolley, Erin. 2017. "Into the Mainstream or Still at the Margins? 50 Years of Gender Research in the Canadian Political Science Association." Canadian Journal of Political Science 50 (1): 143–61.
Turgeon, Luc, Martin Papillon, Jennifer Wallner, and Stephen White, eds. 2014. Canada Compared: Methods and Perspectives on Canadian Politics. Vancouver: University of British Columbia Press.
White, Linda, Richard Simeon, Robert Vipond, and Jennifer Wallner, eds. 2008. The Comparative Turn in Canadian Political Science. Vancouver: University of British Columbia Press.
Wood, Patricia. 2015. "Contemplating the Value of Comprehensive Exams." GeoJournal 80: 225–29.