
Scholarship on the Middle East in Political Science and International Relations: A Reassessment

Published online by Cambridge University Press:  15 February 2023

Andrea Teti, University of Aberdeen, UK
Pamela Abbott, University of Aberdeen, UK

Abstract

A recently published dataset of Middle East and North Africa (MENA)–focused scholarship in journals selected to represent the disciplinary “core” of political science sheds empirical light on key publishing trends, from the balance between quantitative and qualitative studies to the growth in experimental and “large-N” statistical methods. Cammett and Kendall’s (2021) analysis shows that between 2001 and 2019, MENA-focused studies declined as a share of publications but that slightly less than half of that work is qualitative. However, the definition of qualitative research used in the study significantly overstates the number of qualitative articles in the Cammett and Kendall dataset. Our analysis rectifies this, distinguishing among research studies that use qualitative evidence, qualitative methods, theoretical traditions, and paradigms (i.e., positivist/post-positivist). This yields a more accurate and significantly starker picture of the marginality of MENA qualitative research in core politics journals. These results raise the question of why methodologically sophisticated scholarship outside of the “top journals” has not been published there.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the American Political Science Association

Scholars of the Middle East and North Africa (MENA) often suggest that work not explicitly drawing on quantitative methods is unlikely to be published in political science’s “top” journals (Tessler, Nachtwey, and Banda 1999). Cammett and Kendall (CK) (2021) recently published a dataset and analysis that shed empirical light on this issue. Their dataset comprises MENA-focused publications in 13 top journals that were selected using reputation- and citation-based metrics (Giles and Garand 2007) to represent the disciplinary “core” of political science and international relations (PSIR).

CK (2021, 7) analyzed the data and trends by topics, author’s gender, and methods. Regarding methods, they showed that there remains a “persistently important share of qualitative research” and that “use of qualitative methods has either remained flat over time or declined,” whereas “major growth areas” are in experimental and “large-N” statistical methods.

First, we replicated CK’s (2021) coding process for qualitative studies, identifying instances in which articles were coded incorrectly. Second, we demonstrated that the dataset’s definition of “qualitative research” classifies articles as qualitative based on their use of case studies and small-N designs rather than on their use of qualitative evidence and/or qualitative methods. We recoded the data to distinguish between what CK defined as qualitative studies and the use of qualitative evidence and/or methods (e.g., content or discourse analysis). Third, we included an additional code identifying the theoretical framework of publications—positivist or post-positivist—to probe the “paradigmatic pluralism” of publications.

At each step, we found significantly less diversity than CK did. These patterns are noteworthy in themselves but also because even a brief perusal of MENA-focused publications outside of the 13 top journals shows that studies of notably greater diversity and rigor exist in those other venues: for example, in linguistics, political communication studies, or “area studies,” in which qualitative evidence and methods are more explicitly identified and applied and which inform the coding criteria adopted in our study.

Our goal is not to lament methodological and/or paradigmatic poverty but rather to identify more precisely the magnitude of the marginality of qualitative MENA-focused scholarship and to bring into focus a puzzle: namely, if methodologically sophisticated scholarship exists outside of top journals, why has it not been published there?


CK’s (2021) study also made a much-needed contribution to debates about the so-called area-studies controversy and whether and how to “bridge the gap” between PSIR and MENA area studies (Teti 2007). Our contribution aims to expand these analyses further.

EXISTING SCHOLARSHIP

The CK dataset is particularly useful because it encompasses the 2000–2019 period, in which several major Middle East–related events occurred (e.g., the September 11, 2001, terrorist attacks; the 2003 invasion of Iraq; and the Arab Spring), events around which CK found spikes in publications.

Their selection of 13 journals[1] was based on a comparative study of reputation- and citation-based journal rankings in PSIR (Giles and Garand 2007). Overall, these journals prioritize quantitative scholarship and publish little theoretically “critical” work; in this sense, they are not representative of the field overall. However, within the field’s hierarchies, it is difficult to disagree that these journals carry the greatest reputational weight.

CK (2021) showed that Middle East–focused publications decreased during the past two decades relative to the cumulative total articles published. Whereas the latter have increased from slightly more than 300 to approximately 650 annually, the former have increased only slightly, remaining well under 20 per year across selected journals.

Within that overall trend, significant changes occurred in the types of work published. Based on an analysis of all MENA-focused articles (N=222), publications that CK classified as “qualitative” have remained constant at approximately five per year—a proportional reduction, given the increase in total articles published. They also were consistently outperformed year-on-year during the past decade by articles using large-N statistical methods. Simultaneously, experimental methods—which also are quantitative—increased to almost match the number of qualitative articles by 2019.

LIMITATIONS IN PRINCIPLE

Sobering though it is, this analysis understates the marginalization of MENA-focused scholarship in “mainstream” journals for three reasons related to the structure and configuration of the CK coding framework—in what it both excludes and includes.

The CK framework was structured in two layers: (1) “methods (disaggregated)”[2] lists a number of “research methods,” and (2) “methods (aggregated)” groups disaggregated codes—namely, qualitative, statistical, experimental, and formal methods.[3]

The greatest limitation of the CK dataset is its definition of qualitative research, which is based on Teele and Thelen (2017), who noted that qualitative investigations sometimes but not always use case studies and small-N designs. The CK dataset overextends this definition, classifying all case-study and small-N research as qualitative. Qualitative research, however, normally is understood as drawing on evidence that cannot be reduced to quantified forms (Blatter, Haverland, and van Hulst 2016). Thus, the CK definition ultimately includes more than studies using qualitative evidence and/or methods, classifying some clearly quantitative studies as qualitative and thereby overrepresenting qualitative scholarship.

Our analysis recoded the data to distinguish between the use of qualitative evidence and the use of qualitative methods, before finally considering the question of critical theoretical approaches and frameworks. This recoding filtered out several studies that contained quantitative analyses (particularly under disaggregated Codes 5–7: single, comparative, and medium-N case studies).

The CK dataset also conflated types of evidence and methods of analysis (Codes 8–10). Whereas ethnography and interviews refer to types of data and data gathering, content/discourse analysis (C/DA) suggests specific methods. These methods of analysis of linguistic or other semiotic data can be rigorous, transparent, and (often) replicable. Most CK studies in this category, however, were unsystematic or descriptive, sporadically quoting portions of interviews or documents. Consequently, the conflation between data types and methods of analysis overrepresents the minimal methodological pluralism on display in selected journals.


Additionally, the CK clustering into aggregate methods is counterintuitive (e.g., experimental studies are quantitative). By pitting qualitative research against three separate categories rather than one, it also makes it easier to understate the relative size of the difference between qualitative and quantitative publications.[4]

LIMITATIONS IN PRACTICE

This section first evaluates the impact that the limitations discussed previously have on the assessment of the relative weight of scholarship drawing on qualitative methods. Cammett and Kendall (2021, 8) noted that the trend toward experimental, formal, and large-N statistical methods “may have narrowed the types of research published in mainstream political science journals.” The following analysis reveals that their conclusion underestimates this narrowing.

Second, the broader task of assessing trends in the publishing of MENA-focused research in core PSIR journals should entail assessing both data and methods as well as “paradigms” and “theoretical traditions.” Therefore, this discussion provides a further reassessment of publication data based on the theoretical traditions displayed in MENA scholarship published by the sampled journals. Our analysis shows that the “narrowing trends” are even more significant at this level.

Method

Of the 222 MENA-focused articles overall, CK (2021) classified 100 as statistical, 22 as experimental, two as formal theory (i.e., game theory and formal models), and 98 (44%) as qualitative. The last category includes 35 (15.7%) publications classified as single case studies; 42 (18.9%) as comparative case studies; none as medium-N (Code 7); two (0.9%) as interviews (Code 8); one (0.45%) as ethnography (Code 9); and the remaining 18 (8.1%) as C/DA (Code 10).
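As a transparency check, the short Python sketch below reproduces the arithmetic of this breakdown. It is our own illustration rather than part of the CK replication files, and every count is copied from the preceding paragraph.

```python
# Sanity-check the CK aggregate and disaggregated counts reported above.

total_articles = 222

# CK aggregated classification of the 222 MENA-focused articles.
aggregated = {"statistical": 100, "experimental": 22, "formal theory": 2, "qualitative": 98}
assert sum(aggregated.values()) == total_articles

# Disaggregated composition of the 98 articles CK aggregated as "qualitative" (Codes 5-10).
qualitative_codes = {
    "single case study (Code 5)": 35,
    "comparative case study (Code 6)": 42,
    "medium-N (Code 7)": 0,
    "interviews (Code 8)": 2,
    "ethnography (Code 9)": 1,
    "content/discourse analysis (Code 10)": 18,
}
assert sum(qualitative_codes.values()) == aggregated["qualitative"]

# The "qualitative" share of all MENA-focused articles (reported as 44% above).
print(f"Qualitative share: {100 * aggregated['qualitative'] / total_articles:.0f}%")
```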

We first re-examined the CK data by checking every publication in the dataset for its coding “fit” with the original parameters. Second, we recoded all publications classified as qualitative as follows:

  1. A further distinction was introduced between publications that do and do not use qualitative methods. Articles were classified as adopting a qualitative method if (a) their data cannot be reduced to quantified forms; (b) they reported or made available their coding framework; (c) they indicated or provided the sources analyzed (i.e., corpus); (d) they specified a method of analysis; and (e) that analysis was replicable[5] (a minimal illustrative sketch of this recoding logic appears after this list).

  2. Additionally, all articles were coded for whether their referent theoretical framework was positivist (PO), post-positivist (PP), or unspecified (descriptive, DE). Entries classified as PP were also coded for whether this was implied or explicit.
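The sketch below illustrates, in Python, how criteria (a) through (e) could be operationalized. It is a minimal illustration under assumed field names (e.g., data_quantifiable, method_specified), not the authors’ actual coding instrument; the replication data themselves are available from the Dataverse repository cited in the Data Availability Statement.

```python
from dataclasses import dataclass

@dataclass
class Article:
    # Hypothetical flags standing in for criteria (a)-(e); not the authors' actual variable names.
    data_quantifiable: bool           # (a) can the evidence be reduced to quantified forms?
    coding_framework_available: bool  # (b) coding framework reported or made available
    corpus_identified: bool           # (c) sources/corpus indicated or provided
    method_specified: bool            # (d) a method of analysis is specified
    replicable: bool                  # (e) the analysis is replicable
    is_ethnography: bool = False      # replicability is relaxed for ethnographies (see note 5)
    paradigm: str = "DE"              # "PO", "PP", or "DE" (descriptive/unspecified), per item 2

def uses_qualitative_method(a: Article) -> bool:
    """Return True only if criteria (a)-(e) are met, relaxing (e) for ethnographies."""
    return (not a.data_quantifiable
            and a.coding_framework_available
            and a.corpus_identified
            and a.method_specified
            and (a.replicable or a.is_ethnography))

# Example: a descriptive piece that quotes interviews but specifies no method fails
# the test and would be recoded as descriptive/nonquantitative (New Code 12).
example = Article(data_quantifiable=False, coding_framework_available=False,
                  corpus_identified=False, method_specified=False, replicable=False)
print(uses_qualitative_method(example))  # False
```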

Analysis 1: Replication, Coding Amendments, and Reassessment

Several single case studies (i.e., CK disaggregated methods Code 5) were classified as qualitative but were based on formal modeling and/or statistical methods (i.e., 7/35, or 20%: Manekin, Kemahlioglu, Ben-Porat, Hirschl, Rahat, Bueno de Mesquita, and Penn). Two articles, both by Parkinson, presented methodologically rigorous ethnographies, including analysis of language (i.e., interviews) and could be reclassified as ethnographies (Code 9). However, the analysis of language undertaken therein was not replicable because interview data were not available.

Of the 42 publications classified as comparative case studies (Code 6), four (9.5%) drew primarily or exclusively on quantitative methods (e.g., correlations, regressions, probability testing, and formal modeling). One publication met the criterion for qualitative methods (i.e., Akcinaroglu). Eight (19%) contained some linguistic aspect relevant to the article’s analysis but did not meet the criteria for qualitative methods. Other than Akcinaroglu’s content analysis, no article classified under Code 6 met qualitative-methods criteria.

Seven single and four comparative case studies presented formal theory and modeling and/or drew on statistical analysis (e.g., single or multivariate regressions) as the primary method and therefore were reclassified. Additionally, almost all single and comparative case studies (Codes 5 and 6) in the original dataset failed to meet criteria for qualitative methods.[6]

Two articles (0.9%) (i.e., Pearlman 2016a, 2016b) were labeled as interviews (Code 8) and one (0.45%) was classified as ethnography (Code 9) (Jones 2018). All three articles, in fact, drew on both participant observation and interview analysis, making them difficult to distinguish neatly based only on data or methods. Additionally, whereas all articles focused on attitudes under authoritarianism, none framed analysis through any specific paradigm or theoretical tradition.

The CK coding for publications classified as C/DA (Code 10) included a broad range of methods such that the category elided the difference among (1) descriptive studies (e.g., several state-of-the-art reviews like Sadowski’s) or studies using only argumentation and loose reference to empirical evidence (e.g., Anderson and Bellin); (2) studies that might include analysis of linguistic data but without a clearly defined method (e.g., citing excerpts from interviews or documents); and (3) approaches analyzing qualitative linguistic/semiotic evidence using an explicitly defined method and corpus of data and sources allowing readers to check sources and replicate analysis[7] (e.g., content analysis). (For methods in discourse analysis, see Schiffrin, Tannen, and Hamilton 2001.)

Using this last criterion, formal content analysis was found in only three publications (i.e., Alimi and two by Somer). Only one article (i.e., Euben) used a form of discourse analysis. Counting content analysis and discourse analysis as falling within the qualitative methods defined previously, only four publications (4/98, 4.08%) met the specified criteria.

The remaining 14 publications used nonquantitative approaches but were almost all descriptive. Even when “using language” (e.g., quoting interviewees or documents), they did not specify a method and lacked rigor, transparency, and replicability. These publications were reclassified as descriptive/non-quantitative (New Code 12).

Across each disaggregated-method category, these reclassifications yielded the totals listed in Table 1 for articles that met specified criteria for qualitative methods.

Table 1 Recoded vs. Original Publication Classification

Analysis 2: Paradigmatic Poverty

Methods and types of evidence are not the only criteria according to which the inclusion or marginalization of a literature or a (sub)field relative to another can be assessed. Indeed, even a cursory perusal of MENA politics–focused journals suggests that one striking difference between MENA-focused scholarship in PSIR’s mainstream and the broader body of MENA-focused scholarship is the latter’s use of qualitative methods—particularly in a PP vein—as well as its rich Marxist, post-Marxist, or Marxian-inspired political economy (much of it also PP, including critical realism).

To capture differences at this level, studies in the dataset additionally were coded for whether publications were presented within a PO framework[8]; a PP framework; or as merely descriptive, not mentioning paradigmatic reference points (i.e., NM, no method presented). Of the 222 studies originally selected in the CK dataset, only two were considered to be PP. The first was Euben, whose article presented a multimodal discourse analysis (i.e., text, speech, and video) reminiscent of poststructuralist analyses. The second study was an article on information and communications technology in the Arabophone region (i.e., Murphy), although its analytical framework was Habermasian, making its classification debatable. All other studies included in the dataset either were explicitly framed by PO approaches or focused on the analysis of empirical data—which, when not quantitative, were predominantly descriptive in the manner of “stylized facts.”

DISCUSSION

With the dataset thus amended, several aspects of the included studies bring the trends in MENA-focused publishing in PSIR’s top journals into sharper focus, as follows:

  1. Most publications originally classified as qualitative do not use qualitative evidence or methods. The CK coding of qualitative research classified all small-N or single-case-study research designs as qualitative, conflating them with research based on qualitative evidence and/or methods. The C/DA category in particular is misleading: its name evokes linguistics’ attention to data and methods. However, most studies in this category—as well as in the qualitative aggregate methods category generally—do not use these or other qualitative methods, or indeed any particular method at all. Instead, most publications analyze loosely defined “stylized historical/empirical facts” and/or conduct analysis through argumentation without reference to either an explicitly defined or replicable qualitative method. In several cases—for example, articles providing principally critical reviews and assessments of literature on a topic or subfield—an explicit method is not necessary. Nonetheless, classifying such contributions as representing qualitative research overrepresents the presence of methodologically rigorous qualitative analysis in the CK dataset.

  2. MENA-focused scholarship using qualitative methods is virtually non-existent. Only four articles (1.8%) explicitly used replicable qualitative methods. A similar number of articles drew on ethnographies—often “mixed” with other types of qualitative data and/or methods—which, by definition, cannot meet the replicability criterion but nonetheless are based on rigorous evidence-gathering and analysis. These four articles represent a vanishingly small proportion within the CK dataset—a proportion even more minuscule considering the approximately 500 articles published annually in the sampled journals in 2000–2019.

  3. Paradigmatic poverty. Patterns in several “niche” sectors suggest a broader trend—namely, that heterodox data, methods, and theoretical frameworks are virtually absent from PSIR’s core journals. First, whether or not they meet the criteria for qualitative methods, language-based analyses almost never are explicitly framed within PP theoretical frameworks. Whereas several clearly do not intend to locate themselves within these traditions, several others may. Additionally, no publication in the dataset is both explicitly PP and formal in its use of qualitative methods—although certainly not for want of such scholarship in either MENA-focused research or linguistics broadly writ.

Second, poststructuralist analyses also were virtually absent; only two articles could be located within poststructuralism’s various traditions. The first was Wictorowicz, although the article was not explicitly framed in those terms. The second was Euben, which was the only article to locate itself in a post-foundationalist and poststructuralist tradition and to explicitly name and draw on those traditions as part of its analytical framework. Euben’s case is an exception to the rule for both of these first two patterns; it also stands alone on each count as the only article adopting a heterodox, non-positivist theoretical framework in an explicit and rigorous PP and poststructuralist analysis.

The third apparent pattern is the absence of explicitly Marxist/Marxian–inspired analyses. Even though much of this scholarship is itself largely PO, no publications used these approaches in the two decades covered by the data.

CONCLUSIONS: CLOSER INTEGRATION OR SELECTIVE INCLUSION?

Cammett and Kendall’s (2021, 7) important analysis of all MENA-focused publications in leading PSIR journals suggested that almost half were qualitative (44%), albeit with a clear trend toward proportional decline in qualitative MENA scholarship and an increase in experimental methods and large-N statistical analysis.

Careful consideration of their analysis suggests that CK’s (2021, 8) conclusion that the trend toward favoring quantitative methods “may have narrowed the types of research published in mainstream political science journals” significantly understates this phenomenon. Recoding the data to identify research drawing on qualitative evidence and/or methods shows that most articles previously classified as qualitative did not meet such criteria. Our analysis also suggests that work outside of PO orthodoxy in both methods and types of theoretical traditions is almost entirely absent.

Finally, beyond being rare, methods and theoretical frameworks outside of the existing disciplinary core (as defined by these journals) seldom are indicated explicitly, with Euben’s example being the sole exception. Shifting focus from methods to paradigms, PSIR’s paradigmatic poverty in MENA scholarship could hardly be starker.

Such marginality is all the more significant considering that, as CK (2021, 2) showed, “[S]ince 2000, almost half (about 45%) of MENA-focused articles have been published by three journals: Comparative Politics, Comparative Political Studies, and International Studies Quarterly.”

It is not clear a priori why this skewness in publication patterns would exist, particularly given the “well-known strengths” of qualitative methods, which CK (2021, 7–8) rightly noted, including “distinct advantages in uncovering causal processes and interpreting meaning”; in “theory-building, development/specification of concepts and measures”; and in asking questions that orthodox, quantitative approaches overlook or are not well equipped to answer.

Our analysis more precisely identifies the magnitude and configuration of this marginalization—namely, that it is specific types of scholarship on the region that are integrated into PSIR’s contemporary mainstream. Indeed, the patterns of inclusion and exclusion of MENA-focused scholarship in the field’s top journals seem selective in ways that are both highly specific and unrelated to the methodological rigor of scholarship on the region’s politics. Our point is not to lament methodological and/or paradigmatic poverty as much as it is to bring into better focus a puzzle—namely, why methodologically sophisticated scholarship that exists outside of PSIR’s top journals has not been published there.


DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the PS: Political Science & Politics Harvard Dataverse at https://doi.org/10.7910/DVN/DQ4OZV.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/S1049096522001378.

CONFLICTS OF INTEREST

The authors declare that there are no ethical issues or conflicts of interest in this research.

Footnotes

1. American Political Science Review, American Journal of Political Science, Annual Review of Political Science, Journal of Politics, World Politics, International Organization, Comparative Political Studies, Comparative Politics, British Journal of Political Science, Perspectives on Politics, Political Research Quarterly, International Studies Quarterly, and Quarterly Journal of Political Science.

2. Disaggregated codes: 1. experimental; 2. automated text analysis/machine learning/“big data”; 3. large-N statistical: cross-national; 4. large-N statistical: subnational; 5. single case study; 6. comparative case study (i.e., 1–6 cases); 7. medium-N study (i.e., N>6); 8. interviews; 9. ethnography; 10. content/discourse analysis; 11. game theory/formal models.

3. Aggregated codes: 1. experimental (disaggregated Code 1); 2. statistical (disaggregated Codes 2–4); 3. qualitative (disaggregated Codes 5–10); 4. formal theory (disaggregated Code 11).

4. The aggregated methods category formal theory (Code 4) comprises game theory and formally specified models not found in MENA scholarship outside of quantitative methods–based analyses. The incidence of formal theory is low (N=2), so this principled limitation does not affect results in practice. However, its separation from other non-qualitative research increases the impression of methodological pluralism. Analogously, 22 (9.9%) publications used experimental methods, which were not counted as quantitative despite many having significant or predominant quantitative elements (e.g., survey data and statistics/testing).

5. Some qualitative methods (e.g., ethnographies and phenomenologies) cannot satisfy replicability requirements or include qualitative datasets—not least for ethical and/or safety reasons—but can be conducted rigorously nonetheless. The replicability criterion is relaxed in these cases.

6. Among single case studies, five used linguistic data and/or some component of a qualitative method but were not rigorous enough to be classified as using qualitative methods. Of the remaining 30 studies in this category, seven used quantitative methods and were reclassified accordingly; the other 23 were classified as “N/A”—that is, they did not attempt to use qualitative methods, were descriptive, or qualitative data and/or methods were irrelevant to their analysis. Among comparative case studies, eight contained some textual/linguistic aspect relevant to their analysis but displayed no qualitative method(s) and/or were not replicable. Thirty-three were classified as “N/A.” Only one case study met qualitative-methods criteria.

7. An expansive definition of C/DA to include ethnographies (Code 9) and interviews (Code 8) matching our definition of qualitative methods would result in the inclusion of all three articles under C/DA (Code 10) but would not affect totals under the qualitative aggregate code.

8. This includes Marxist scholarship but excludes critical/continental theory. This coding neither captures the foundationalist/post-foundationalist divide nor neatly maps onto the divide between materialist and semiotic analyses.

9. Of the 35 articles, seven were reclassified as quantitative. The remaining 28 articles either did not meet the minimal standards for qualitative methods (five) or were cases in which those standards were not applicable to the aims and scope of the article (23). Most of the latter were simply descriptive (e.g., unsystematic analysis of “stylized facts”), but two were ethnographies (both by Parkinson).

10. Four articles were reclassified as quantitative. One article met the criteria for qualitative methods. The remaining 37 were classified as descriptive (in most instances) and/or the qualitative-methods criteria were not applicable to the aims and scope of the publication.

11. Reclassified as descriptive: 23 ex Code 5, 37 ex Code 6, 13 ex Code 10=73. Reclassified as quantitative: 7 ex Code 5, 4 ex Code 6=11.

12. Five single country cases, one C/DA article.

REFERENCES

Blatter, Joachim, Haverland, Markus, and van Hulst, Merlijn (eds.). 2016. Qualitative Research in Political Science, Volume IV: Interpretive and Constructivist Approaches. Los Angeles: SAGE Publications.
Cammett, Melani, and Kendall, Isabel. 2021. “Political Science Scholarship on the Middle East: A View from the Journals.” PS: Political Science & Politics 54 (3): 448–55. https://doi.org/10.7910/DVN/EIKB8U.
Giles, Michael W., and Garand, James C. 2007. “Ranking Political Science Journals: Reputational and Citational Approaches.” PS: Political Science & Politics 40 (4): 741–51.
Jones, Calvert W. 2018. “New Approaches to Citizen-Building: Shifting Needs, Goals and Outcomes.” Comparative Political Studies 51 (2): 165–96.
Pearlman, Wendy. 2016a. “Moral Identity and Protest Cascades in Syria.” British Journal of Political Science 48 (4): 877–901.
Pearlman, Wendy. 2016b. “Narratives of Fear in Syria.” Perspectives on Politics 14 (1): 21–37.
Schiffrin, Deborah, Tannen, Deborah, and Hamilton, Heidi E. (eds.). 2001. Handbook of Discourse Analysis. Oxford, UK: Blackwell Publishers.
Teele, Dawn Langan, and Thelen, Kathleen. 2017. “Gender in the Journals: Publication Patterns in Political Science.” PS: Political Science & Politics 50 (2): 433–47.
Tessler, Mark A., Nachtwey, Jodi, and Banda, Anne (eds.). 1999. Area Studies and Social Science: Strategies for Understanding Middle East Politics. Indianapolis: Indiana University Press.
Teti, Andrea. 2007. “Bridging the Gap: IR, Middle East Studies, and the Disciplinary Politics of the Area Studies Controversy.” European Journal of International Relations 13 (1): 117–45.
Teti, Andrea, and Abbott, Pamela. 2022. “Replication Data for: Scholarship on the Middle East in Political Science and International Relations: A Reassessment.” PS: Political Science & Politics. DOI: 10.1017/S1049096522001378.
