Political scientists publish their work in scholarly journals for a variety of reasons. Ideally, they want to share their knowledge with others, and prosaically, they aim to gain employment and achieve tenure and promotion within a department, as well as obtain higher status within the discipline. Publication in the profession's top journals remains an important evaluative metric for success in political science.Footnote 1
Journal rankings inform both external evaluators (e.g., hiring, promotion, and tenure committees) and the discipline itself about which journals score highest on a number of indicators, including their impact on the field and their reputation among peers. One aspect of journal publishing that remains understudied is transparency. At the most basic level of transparency, a journal will gather and make accessible summary information about its submission and review processes, such as the average amount of time it takes to inform an author of a first decision (turnaround) and the journal's acceptance rate. More transparent journals go further by releasing data on the types of submissions they receive and the personal characteristics of the authors who submit manuscripts. These data are broadly useful for both authors and editors: political scientists under pressure to publish understandably want to know which attributes improve a manuscript's chances of publication; journal editors want to ensure that no biases influence their final manuscript decisions.
This article explores two questions: (1) How transparent are the top political science journals in releasing submission data? (2) How does transparency in releasing journal submission data benefit political science journals specifically, and the profession generally? To answer these questions, we first surveyed the editors of the top 30 political science journals on their journals' record-keeping practices and then examined in greater detail the records of one political science journal, American Politics Research (APR).
Analyzing and Ranking Journal Output
Editorial Bias and Journal Transparency
One major question—Are manuscripts from particular fields or using certain methodologies privileged over others in gaining publication?—formed the basis for the Perestroika movement, our discipline's most recent tumult. This movement asserted that the discipline's top journals, particularly the American Political Science Review (APSR), discriminated against manuscripts employing qualitative methods.Footnote 2 The charge of editorial biasFootnote 3 against a journal—meaning that a particular characteristic of a submitting author or submitted manuscript prevents otherwise excellent work from being published—is serious. The Perestroika movement was ultimately successful in gaining significant representation and influence on APSA's search committees for a new APSR editor and an inaugural editor for a new APSA journal, Perspectives on Politics. Appointing scholars familiar with and friendly toward a diversity of methodologies would, ideally, create an environment more welcoming to submissions from a variety of methodological backgrounds.
Charges of editorial bias, however, did not end with implementation of these conciliatory measures (see, for example, former Perspectives on Politics editor Jim Johnson's [2009] rebuttal to charges of editorial bias at that journal). Moreover, editorial bias may extend beyond matters of methodology to the exclusion of scholarship based on the submitting author(s)' personal characteristics or topic. Political scientists have self-policed the discipline's journals to determine whether the work published represents the profession's true diversity of scholarship on, for example, Asian-Pacific Americans (Aoki and Takeda 2004), pedagogy (Orr 2004), human rights (Cardenas 2009), Latin America (Martz 1990), urban politics (Sapotichne, Jones, and Wolfe 2007), and comparative politics (Munck and Snyder 2007). Other research (Breuning and Sanders 2007; Young 1995) has studied whether the work of female political scientists is adequately represented in the discipline's journals.
These articles analyze journals' output—published articles—to determine whether a particular demographic, methodology, or topic field is underrepresented. The studies and the data employed are important—after all, hiring and promotion decisions are made on the basis of published, not submitted, work. But by analyzing published work alone, these articles may misstate editorial bias. After all, journals cannot publish work employing a certain methodology if this work is never submitted. Hill and Leighley (2005) responded in this vein to Kasza's (2005) findings of editorial bias against qualitative scholarship in the published work of the American Journal of Political Science (AJPS): "Despite our pledge to review papers in any subfield of political science, we receive more in some fields than in others. And we are captive to what is submitted to the journal for review for publication" (351).
Transparency in Practice
Despite the potential value of such data, very few studies have analyzed journal submissions. Lee Sigelman has provided the most insight into journal submission data, most recently (2009) finding that coauthor collaboration—a trend that has increased sharply in recent years (Fisher et al. 1998)—does not necessarily lead to a higher rate of article acceptance at the APSR. Lewis-Beck and Levy (1993) also analyze journal submission data, finding that, contrary to conventional wisdom, neither an author's past publishing success or field nor the timing or turnaround time of the submission strongly predicts publication in the AJPS. The few political science studies that do analyze submission data focus solely on one of the discipline's few general political science journals—American Political Science Review, Journal of Politics, Perspectives on Politics, and PS: Political Science and Politics. Because the profession is organized by subject area (Grant 2005) and field-specific journals publish the vast majority of political science scholarship, these analyses may miss the true submission experiences of most political scientists. No analysis has been conducted on the submission data of any of the profession's many field-specific journals.
The limited availability of such data may be a major reason why so few studies have assessed them. Some journals do provide submission data in published annual reports. The editors of International Studies Quarterly (ISQ), for example, post highly detailed submission data analyses on the journal's website.Footnote 4 Submission, rejection, and acceptance data are broken down by author gender, submission month, and subfield, and across years, among other divisions. Unfortunately, the public release of such data by other journals is rare, a finding reinforced by the journal editor questionnaire we present below. For example, though the AJPS maintains summary statistics on submissions, acceptances, and rejections, it provides these numbers only to members of the journal's editorial board at its annual gathering. Most journals seem to follow this model of exclusive release of submission data.
There are several good reasons why editors may opt not to release their journal's submission data. First, editors must be careful to keep the peer-review process blind when releasing these data. Confidentiality concerns may explain why the scholars who have published analyses of submission data have also been the editors of the journals under study. Second, many editors may find it too difficult to maintain detailed journal submission data. The data collection process can be time consuming, and some journals have only a limited staff that turns over every semester or academic year.Footnote 5 Additionally, journals tend to migrate to a new editor or a different institution every few years, which can lead to a loss of submission data or to unwillingness by an editor who views his or her term as temporary to keep these data. These journal migrations (both internal and external) may also lead to inconsistencies in the data collection process. Finally, many editors simply may not see the value in maintaining detailed submission data.
Assessing Journal Quality
While we do not associate journal transparency with journal quality, the growing literature on assessing journal quality does inform our work. Although there is no clear consensus on what constitutes a high-quality journal, most journal rankings employ one of two approaches: the citational and the reputational. The citational approach relies on counting the number of times that other academic articles cite a particular journal's published articles. This method has been used to rank political science journals (Christenson and Sigelman 1985; Hix 2004), individual scholars (Klingemann, Grofman, and Campagna 1989; Masuoka, Grofman, and Feld 2007b),Footnote 6 and departments (Klingemann 1986; Masuoka, Grofman, and Feld 2007a; Miller, Tien, and Peebler 1996). The impact ranking, which publishers often use to promote journals, relies on citation data (see, for example, Thomson's Institute for Scientific Information Journal Citation Reports).Footnote 7
The reputational approach relies on polling a representative sample of scholars about journal quality in a particular field. James Garand and Micheal Giles have become the standard-bearers for reputational studies of journal quality in the profession (Garand 1990, 2005; Garand and Giles 2003; Garand et al. 2009; Giles and Garand 2007; Giles, Mizell, and Patterson 1989; Giles and Wright 1975). This research fulfills a disciplinary longing for journal quality measurements.Footnote 8 Although these two approaches dominate the journal ranking literature, some scholars argue that neither is appropriate. Plümper (2007), for example, criticizes both approaches for being overly esoteric. His ranking, the Frequently Cited Articles (FCA) Score, focuses instead on journals' real-world impact.
Scholars have not yet included transparency in their ranking systems. We believe that knowing journals' degree of transparency in collecting and sharing submission data will be of interest as a point of comparison with journals' quality rankings. In addition, our effort to rank journals according to their transparency serves as an example of the difficulty of creating a standard measure for journal characteristics.
The Importance of Journal Submission Data
We believe that transparency and legitimacy are the primary reasons that scholarly journals should collect and disseminate submission data. The typical political scientist interacts with a chosen journal during the review process at only two stages: submission and decision. The author is necessarily excluded from what transpires in the two to three months between those stages, and after this period of silence, the editor's decision may strike the author as somewhat arbitrary, particularly when the reviewers' recommendations conflict.
Even if no actual bias exists in editors' decisions, the opacity of the double-blind peer-review and the final decision-making processes may foster the perception of bias among authors. Hearsay and conjecture may lead to perceptions that a journal does not publish a certain type of work or scholarship from a certain type of author. The point of such criticism is, in fact, the promotion of perestroika, or openness. In response to the charges of the Perestroika movement, APSR editorial reports under the new regime (e.g., Sigelman 2003, 2004, 2005) deluged readers with the journal's submission data as a means of proving that editorial bias no longer existed in its pages, if it ever did.
Keeping and releasing such data may help correct for perceived editorial biases. Analyses of journal publications serve a purpose, but they unduly limit the universe of scholarship under analysis by looking at only the end product of the journal publishing process—manuscripts that have cleared the hurdle of scholarly publication. These studies ignore the much larger universe of journal article submissions. Exploring, analyzing, and reporting such data will: (1) aid authors in deciding where to submit their manuscripts, (2) inform editors of potential biases in their journal's review process, and (3) allow the discipline to reassess evaluations of the quality of its journals.
Methods and Results
Survey of Journal Editors
To assess the transparency of political science journals in maintaining and releasing submission data, we first searched the websites of the 30 highest-ranked journals, as rated on their impact by Garand et al. (2009). We looked for whether these websites relayed simple record-keeping information, such as average turnaround time, as well as more detailed submission data. The search turned up little in the way of detailed submission data, and we rarely found even simpler summary statistics. Only 10 of the 30 journal websites provided basic information, and in most of these 10 cases, the websites provided only a rough estimate of average turnaround times.
Editors unwilling to publish this information on their journal's website may provide it in print or by request. To more formally assess journals' transparency in releasing submission data, in July 2009,Footnote 9 we sent an e-mail questionnaire (see figure A1 in the appendix) to the editors (or editorial staff) of the top 30 political science journals, receiving at least a partial response from 20 of the 30 journals surveyed.Footnote 10 About half of the editors of the 30 journals responded to the entire survey or directed us to print or web material to answer our questions. Table 1 summarizes the responses to the questionnaire, ranks the journals on their transparency in releasing submission data, and provides general information on and comparisons of the record-keeping practices of the top 30 journals. Figures 1 and 2 rank the responding journals by acceptance rates and turnaround time from initial submission to first decision, respectively.
Notes to table 1.
a. Web, print, and by request = 1; web and by request = 2; print and by request = 3; by request only = 4; a bar denotes that no contact was made with the journal or no information was found.
b. Number includes resubmissions.
c. 82% of manuscripts were turned around in 60 days or fewer.
The journal transparency rankings depend on the availability of journal information (e.g., acceptance rates, turnaround rates, number of submissions per year) via three different media: the web, print, and by request. Journals that provided information via all three media received a ranking of one. Journals delivering the information via two media received a ranking of either two or three. A journal that offered the information both on the web and by request was ranked higher than a journal that provided the information in print and by request, because visiting a journal's website generally imposes fewer opportunity costs than does obtaining a journal's print copy.Footnote 11 Finally, journals that provided such information only by request received a ranking of four.
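As a concrete illustration of this coding rule, the short function below is a minimal sketch in Python written by us; it is not part of the survey instrument, and the argument names are hypothetical. It simply assigns a journal's transparency ranking from the media through which it provides submission information.

```python
from typing import Optional

def transparency_rank(web: bool, in_print: bool, by_request: bool) -> Optional[int]:
    """Assign the transparency ranking described above (1 = most transparent).

    Returns None when no contact was made or no information was found
    (shown as a bar in table 1).
    """
    if web and in_print and by_request:
        return 1  # information available via all three media
    if web and by_request:
        return 2  # web is favored over print: lower opportunity cost for readers
    if in_print and by_request:
        return 3
    if by_request:
        return 4  # information available only upon request
    return None
```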
While the measure is a bit crude, much can be learned from this first attempt at assessing political science journals' transparency in releasing submission data. First, Garand et al.'s (2009) impact rankings do not necessarily correlate with the transparency rankings. Although the APSR and the Journal of Politics (ranked first and third, respectively, under Garand et al.'s system) both received a transparency ranking of one, Political Research Quarterly (ranked 16th under Garand et al.'s system) also received a top transparency score. Five journals received a ranking of two, and three received a ranking of three. Almost half of the journals (11 of 23)Footnote 12 indicated that they would be willing to provide such information only upon request. Many journals clearly keep but do not openly share their submission data.
In the spirit of transparency, table 1 also provides some of the additional survey information provided by the journal editors. These data showcase the ample variation that exists in the submission and review processes of the profession's top journals. The top political science journals vary widely in the number of submissions they receive, their acceptance rates, and their average turnaround time until first decision.
Transparency Case Study
Our examination of submission data for American Politics Research (APR)Footnote 13 from January 2006 to December 2008 highlights the many benefits of more extensive journal record-keeping. While past work has examined submission and publication data from the more general political science journals, we could find no scholarship that focused on a field-specialized journal like APR. The following assessment of the articles published in relation to the articles submitted offers a more complete view of the journal's specialization and biases.
Our examination focused on a few understudied relationships in the field of political science journal publishing, including the effect of the lead author's region, university type, and professional status on a manuscript's likelihood of acceptance. A lead author's geographic location has been shown to influence article acceptance in academic journals in the medical field (Boulos 2005; Tutarel 2002), so we expected that such a geographic bias might also favor APR's acceptance of manuscripts submitted by lead authors from the Mid-Atlantic region. Authors from these locales are likely to have a greater familiarity with the editor and editorial staff as the result of shared attendance at regional conferences or service on regional boards.
We also expected that authors who work at institutions that place a greater emphasis on research would have more success publishing their work in APR. Research institutions typically give their faculty lighter teaching loads so that they will have more time to research and publish. In academia at large, articles by authors from academic settings are accepted at scholarly peer-reviewed journals at significantly higher rates than articles by authors from nonacademic settings (Tenopir and King 1997). We expected to find a similar difference in acceptance rates of submitted manuscripts from authors in academic settings with different foci and resources.
Finally, we posited that an author's professional status would impact the likelihood of his or her article being accepted. Seasoned academics (associate and full professors) are more likely than less experienced scholars to know the norms of publishable scholarship and be better able to correctly match their manuscript to an appropriate venue (Pasco 2002). Additionally, submitting authors who have attained their doctorate but have not yet gained tenure (usually assistant professors) would likely have a higher probability of acceptance than would graduate students. These authors have the advantage of some research and publishing experience and are driven to produce high-quality work by the incessant ticking of the tenure clock. In examining these hypotheses, we controlled for manuscript turnaround time, the number of authors on a submission, the subject area of the manuscript, and whether the manuscript had a female lead author (see table A2 in the appendix for information on variable measurement).
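For readers who want to see the specification concretely, the following is a minimal sketch of the kind of probit model described above, written in Python with pandas and statsmodels. The file name, column names, and category labels are hypothetical placeholders, not APR's actual records.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file of APR submission records, one row per manuscript.
df = pd.read_csv("apr_submissions.csv")

# Dummy-code the categorical predictors, then drop the reference categories named in
# the text: Mid-Atlantic (region), graduate student (status), bachelor's institution (type).
dummies = pd.get_dummies(df[["region", "author_status", "inst_type"]])
dummies = dummies.drop(columns=[
    "region_Mid-Atlantic",
    "author_status_Graduate student",
    "inst_type_Bachelors",
])

# The published model includes indicators for the three subject categories with the most
# submissions; the labels used here are hypothetical placeholders.
for subj in ["elections", "voting behavior", "Congress"]:
    dummies[f"subject_{subj}"] = (df["subject"] == subj).astype(int)

# Controls described in the text: turnaround time, number of authors, female lead author.
controls = df[["turnaround_days", "n_authors", "female_lead"]]

X = sm.add_constant(pd.concat([dummies, controls], axis=1).astype(float))
y = df["accepted"]  # 1 = accepted, 0 = not accepted

model = sm.Probit(y, X).fit()
print(model.summary())
```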
The reasons why editorial bias may creep into editorial decision-making are easy to explain, even if an editor is trying to be conscientious and fair. Take, for example, two submissions of approximately equal quality. Reviewers return two equally critical sets of reviews, but the editor knows the author of the first manuscript and not the author of the second. The editor may be willing to hazard that the author of the first manuscript is capable of managing the revisions and give that author the benefit of the doubt, sending him or her a decision of revise-and-resubmit. But, being completely in the dark about the author of the second manuscript, the editor may not extend him or her the same benefit of the doubt and may instead send a rejection letter.
It is important to note that by “knowing” the author of the first manuscript, we do not mean that this author has to have been a student, much less an advisee of the editor. Sometimes just having been on a conference panel together or having met a time or two is enough. And, of course, editors are not always even conscious of what biases might be operating, which is why conscientious editors should be interested in serious data collection and analysis.
Case Study Findings
Over the 2006–08 period, the APR journal staff of one full-time editor and one half-time graduate student processed 491 manuscripts.Footnote 14 During this period, 111 manuscripts were accepted. Twenty-four of the manuscripts that received a decision of revise-and-resubmit were never returned (see table A1 in the appendix for additional data on submissions).Footnote 15 As hypothesized, articles submitted by lead authors from the Mid-Atlantic region have a slight advantage over articles submitted from other regions: just over 30% of manuscripts submitted by lead authors from the Mid-Atlantic region were accepted to APR (see table A3 in the appendix for the descriptive statistics). The probit model's excluded category is the Mid-Atlantic region (see table 2). The signs for nearly every region (except international) are negative, indicating that papers submitted by lead authors from Mid-Atlantic institutions are more likely to be accepted than papers from those regions. Although these relationships are not statistically significant,Footnote 16 the accompanying predicted probabilitiesFootnote 17 (see table 3) provide a better sense of their substantive significance. Lead authors from the Mid-Atlantic region generally have a 15 to 20 percentage point advantage over lead authors from other regions in terms of article acceptance at APR. International papers appear to have the next best chance of acceptance, but this finding could be an artifact of the small number of manuscripts (n = 11) that were submitted by authors working abroad.
Notes to table 2. N = 464. Log likelihood = −229.254. Pseudo-R² = .089. Coefficients and standard errors calculated using probit regression. p values are two-tailed. Graduate student is the excluded category for author status. Bachelor's degree is the excluded category for institution type. Mid-Atlantic is the excluded category for region. The model includes the three subject categories with the most submissions; a number of other categories are excluded (American political development, interest groups, media, other, parties, policy, presidency, public opinion, and subnational).
Note to table 3. Predicted probabilities calculated using the observed-values approach.
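To make the observed-values approach concrete, the sketch below continues the illustrative probit fit above (it reuses the hypothetical `model` and design matrix `X`): for each region, every manuscript's region dummies are set to that region while all other covariates keep their observed values, and the resulting predicted probabilities are averaged.

```python
# Continues the illustrative probit sketch above, reusing `model` and the design matrix `X`.
region_cols = [c for c in X.columns if c.startswith("region_")]

def mean_predicted_prob(region_col=None):
    """Average predicted acceptance probability when every manuscript is assigned one region,
    holding all other covariates at their observed values."""
    X_cf = X.copy()
    X_cf[region_cols] = 0.0      # all region dummies zero = the excluded Mid-Atlantic category
    if region_col is not None:
        X_cf[region_col] = 1.0   # assign every case to the region of interest
    return model.predict(X_cf).mean()

print("Mid-Atlantic:", round(mean_predicted_prob(), 3))
for col in region_cols:
    print(col.replace("region_", ""), round(mean_predicted_prob(col), 3))
```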
As hypothesized, articles submitted by lead authors from research institutionsFootnote 18 performed better at APR than articles submitted by scholars from other institution types. The excluded category in the model is the bachelor's degree institution type. The sign for master's degree institutions is negative, indicating that papers submitted by lead authors from these institutions are less likely to be accepted than papers submitted by authors from bachelor's degree institutions. The signs for the three research institution categories are positive, suggesting that papers submitted by lead authors from these institution types are more likely to be accepted than papers submitted by authors from bachelor's degree institutions. Although these relationships are not statistically significant, the predicted probability of acceptance at APR is generally higher for authors from institutions with a research focus (see table 3).
Somewhat surprisingly, lead authors who are graduate students, assistant professors, and associate professors all have similar likelihoods of manuscript acceptance at APR, while full professors' chances of manuscript acceptance lag slightly behind. The excluded category in the model is the status of graduate student. The manuscripts of assistant and associate professors are more likely to be accepted at APR than the work of graduate students, but full professors seem to have less success. None of these relationships reach accepted levels of statistical significance. Substantively, the most surprising finding is that graduate students have a 19% predicted probability of having their manuscript accepted at APR, while full professors have an 18% likelihood of acceptance (see table 3). Two types of selectivity bias may well be behind this finding. First, APR has a notable reputation for publishing the work of younger scholars who are just launching their careers. These scholars may well send their best work to the journal. Second, senior faculty, who are also cognizant of this reputation, may prefer to send their best work elsewhere.
The nonsignificance of many of these relationships led us to look at an additional relationship—whether authors who serve as APR board members have a higher probability of acceptance than non-board members. Indeed, manuscripts with an author who is an APR board member are 6 percentage points more likely to be accepted than manuscripts without a board member author (see table 3). Although this difference is not statistically significant, it may be explained by factors other than editorial bias. First, board members may have a greater familiarity with the types of articles that APR publishes than the average American politics-focused political scientist. Second, board members are chosen not at random, but predominantly because of their past publishing successes.
In summary, many of the relationships explored here are neither statistically nor substantively significant. Here, null findings come as a relief. While some patterns exist between the personal characteristics of APR authors and the likelihood of manuscript acceptance, none shows particularly strong evidence of editorial bias. Nevertheless, awareness of even slight tendencies toward bias can inform the editorial staff in future decision-making.
Discussion and Conclusion
Despite the variety and breadth of focus seen in the discipline's journals, all use some manner of peer review to ascertain whether a submission is of high enough quality to merit publication. This process can either be shared with a broader public or kept undisclosed. This common process across journals has been left largely unstudied as a result of editors' reserve—whether deliberate or not—in releasing journal submission data.
The lack of such data leads to inaccurate and potentially harmful conclusions about publishing in political science journals. First, all members of the profession—from graduate students to tenured faculty—should know the long odds they face when submitting a manuscript for publication in a top political science journal. We believe that our study is the first to publish journal acceptance rates (see table 1)Footnote 19 across multiple journals since the APSA last did so (Martin 2001). A simple method of ranking journals' selectivity, which often serves as a stand-in for journal quality, uses their acceptance and rejection rates. Analyzing those data can lead to some interesting insights into our discipline. For example, scholars in other disciplines have found journal acceptance rates to be correlated with peer perceptions of journal quality (Coe and Weinstock 1984), the methodological quality of the manuscripts published (Lee et al. 2002), and the level of consensus within a discipline (Hargens 1988, 147). Other disciplines mine and publish these data to further a broader understanding of their profession.Footnote 20 Our discipline could benefit from the same self-reflection.
Second, journals may unjustly bear the consequences of alleged editorial bias toward the articles they publish. Data on submitted articles' subfields and methodologies, and perhaps even on some individual characteristics of submitting authors, should be published to allow a more accurate evaluation of whether a journal engages in editorial bias. The availability of this information can create a more informed market in which editors are aware of and can address their potential for bias, and in which authors can better choose where to send their work.
Finally, and in a related vein, the absence of published submission data leads to a potentially skewed understanding of the types of scholarship being undertaken within the discipline. Counting only published articles does not accurately measure what work is being done in the profession. Editors or reviewers unfamiliar with the newest methods or fields may hesitate to recommend such manuscripts for publication, with the result that groundswells of work employing a particular methodology—such as the recent surge in studies employing field experimental designs—are not accurately captured when studying journal publications.
Fortunately, there seems to be an easy solution to this problem, as most journal editors we contacted were more than willing to share their submission data. This willingness supports our belief that journal editors serve at the behest of their authors and their audiences, a perspective that suggests that editors' only reasons for not distributing submission data are a failure to recognize that readers would find such data useful and possible time constraints. The solution is simply to educate journal editors about the value to the profession of releasing submission data.
Our research does, however, raise some difficult issues in cases in which transparency must be balanced against privacy, particularly when information about editorial bias or rejected manuscripts might inadvertently reveal author identities. Suppose that, by examining a journal's records over a limited period, researchers were to find hints that an editorial regime had favored a small number of faculty and students from one university. If the number of authors favored was sufficiently small, they would be easily identifiable, putting them in a very vulnerable position. These authors would likely have no idea that they had been favored, figuring instead that their work had been subject to the same rigorous review process as all other submissions. Should such a finding be published for the world to see? Probably not. But certainly, such a result should be shared with the editor and perhaps the editorial board to serve as a kind of mid-course correction. Editors may not even be aware that appearances of impropriety are slipping into their editorial practices. Awareness is often the best inoculation against bias.
From an editorial structure standpoint, the arrangement devised by the outgoing management at Political Behavior seems well suited to handling the potential conflicts posed by friendly submissions. By appointing one editor from the University of Pittsburgh, Jon Hurwitz, and one from the University of Kentucky, Mark Peffley, the journal easily avoided the matter. Joint-editor arrangements are still relatively rare in the field, however. Over the longer term, limiting editorial terms through regular rotation of editorships is very important. No editor should hold such a powerful position forever.
While our study faced certain limitations with regard to the difficulty of comparing the profession's wide variety of journal practices, we believe that we have made a solid case for greater transparency. Greater release of submission data may lead to multiple journal-specific datasets. Compilation of these data on one site by a central agent—perhaps the APSA—would help in two ways: (1) by encouraging the creation of a commonly accepted standard for which journal submission data are reported, and (2) by allowing comparisons across journals.Footnote 21
Appendices
Note. International = Not U.S.; Mountain = CO, ID, MT, NV, UT, WY; Mid-Atlantic = DE, DC, MD, NY, NJ, WV; Midwest = IL, IN, IA, KS, KY, MI, MN, MO, NE, ND, OH, SD, WI; New England = CT, ME, MA, NH, RI, VT; Pacific West = AK, CA, HI, OR, WA; South = AL, AR, FL, GA, LA, MS, NC, SC, TN, TX, VA; Southwest = AZ, NM, OK; other = scholars working outside of traditional college or university settings (e.g., with think tanks, interest groups, or the government).