
What Happens at the Journal Office Stays at the Journal Office: Assessing Journal Transparency and Record-Keeping Practices

Published online by Cambridge University Press:  08 April 2011

Stephen Yoder, University of Maryland
Brittany H. Bramlett, University of Maryland

Abstract

Dissemination of journal submission data is critical for identifying editorial bias, creating an informed scholarly marketplace, and critically mapping the contours of a discipline's scholarship. However, our survey and case study investigations indicate that nearly a decade after the Perestroika movement began, political science journals remain reserved in collecting and releasing submission data. We offer several explanations for this lack of transparency and suggest ways that the profession might address this shortcoming.

Type
The Profession
Copyright
Copyright © American Political Science Association 2011

Political scientists publish their work in scholarly journals for a variety of reasons. Ideally, they want to share their knowledge with others, and prosaically, they aim to gain employment and achieve tenure and promotion within a department, as well as obtain higher status within the discipline. Publication in the profession's top journals remains an important evaluative metric for success in political science.Footnote 1

Journal rankings inform both external evaluators (e.g., hiring, promotion, and tenure committees) and the discipline itself about which journals score highest on a number of indicators, including their impact on the field and their reputation among peers. One aspect of journal publishing that remains understudied is transparency. At the most basic level of transparency, a journal will gather and make accessible summary information about its submission and review processes, such as the average amount of time it takes to inform an author of a first decision (turnaround) and the journal's acceptance rate. More transparent journals go further by releasing data on the types of submissions they receive and the personal characteristics of the authors who submit manuscripts. These data are broadly useful for both authors and editors: political scientists under pressure to publish understandably want to know what attributes might reward an article with publication; journal editors want to ensure that no biases influence their final manuscript decisions.

This article explores two questions: (1) How transparent are the top political science journals in releasing submission data? (2) How does transparency in releasing journal submission data benefit political science journals specifically, and the profession generally? To answer these questions, we first surveyed the editors of the top 30 political science journals on their journals' record-keeping practices and then examined in greater detail the records of one political science journal, American Politics Research (APR).

Analyzing and Ranking Journal Output

Editorial Bias and Journal Transparency

One major question—Are manuscripts from particular fields or using certain methodologies privileged over others in gaining publication?—formed the basis for the Perestroika movement, our discipline's most recent tumult. This movement asserted that the discipline's top journals, particularly the American Political Science Review (APSR), discriminated against manuscripts employing qualitative methods.Footnote 2 The charge of editorial biasFootnote 3 against a journal—meaning that a particular characteristic of a submitting author or submitted manuscript prevents otherwise excellent work from being published—is serious. The Perestroika movement was ultimately successful in gaining significant representation and influence on APSA's search committees for a new APSR editor and an inaugural editor for a new APSA journal, Perspectives on Politics. Appointing scholars familiar with and friendly toward a diversity of methodologies would, ideally, create an environment more welcoming to submissions from a variety of methodological backgrounds.

Charges of editorial bias, however, did not end with implementation of these conciliatory measures (see, for example, former Perspectives on Politics editor Jim Johnson's [2009] rebuttal to charges of editorial bias at that journal). Moreover, editorial bias may spread beyond matters of methodology to include the exclusion of scholarship based on the submitting author(s)' personal characteristics or scholarly topic. Political scientists have self-policed the discipline's journals to determine whether the work published represents the profession's true diversity of scholarship on, for example, Asian-Pacific Americans (Aoki and Takeda 2004), pedagogy (Orr 2004), human rights (Cardenas 2009), Latin America (Martz 1990), urban politics (Sapotichne, Jones, and Wolfe 2007), and comparative politics (Munck and Snyder 2007). Other research (Breuning and Sanders 2007; Young 1995) has studied whether the work of female political scientists is adequately represented in the discipline's journals.

These articles analyze journals' output—published articles—to determine whether a particular demographic, methodology, or topic field is underrepresented. The studies and the data employed are important—after all, hiring and promotion decisions are made on the basis of published, not submitted, work. But by analyzing published work alone, these articles may misstate editorial bias. After all, journals cannot publish work employing a certain methodology if this work is never submitted. Hill and Leighley (2005) responded in this vein to Kasza's (2005) findings of editorial bias against qualitative scholarship in the published work of the American Journal of Political Science (AJPS): "Despite our pledge to review papers in any subfield of political science, we receive more in some fields than in others. And we are captive to what is submitted to the journal for review for publication" (351).

Transparency in Practice

Despite their potential, very few studies have analyzed journal submission data. Lee Sigelman has provided the most insight into journal submission data, most recently (2009) finding that coauthor collaboration—a trend that has increased sharply in recent years (Fisher et al. 1998)—does not necessarily lead to a higher rate of article acceptance at the APSR. Lewis-Beck and Levy (1993) also analyze journal submission data, finding that, contrary to conventional wisdom, neither an author's past publishing success or field, nor the timing or turnaround time of the submission, strongly predicts publication in the AJPS. The few political science studies that do analyze submission data focus solely on one of the discipline's few general political science journals—American Political Science Review, Journal of Politics, Perspectives on Politics, and PS: Political Science and Politics. As the profession is organized by subject area (Grant 2005) and field-specific journals publish the vast majority of political science scholarship, these analyses may miss the true submission experiences of most political scientists. No analysis has been conducted on the submission data of any of the profession's many field-specific journals.

The availability or lack of such data may be a major reason why so few studies have assessed submission data. Some journals do provide submission data in published annual reports. The editors of International Studies Quarterly (ISQ), for example, post highly detailed submission data analyses on the journal's website.Footnote 4 Submission, rejection, and acceptance data are broken down by author gender, submission month, and subfield, and across years, among other divisions. Unfortunately, the public release of such data by other journals is rare, a finding that the journal editor questionnaire we present below reinforces. For example, though the AJPS maintains summary statistics on submissions, acceptances, and rejections, it provides these numbers only to members of the journal's editorial board at its annual gathering. Most journals seem to follow this model of exclusive release of submission data.

There are several good reasons why editors may opt to not release their journal's submission data. First, editors must be careful to keep the peer-review process blind when releasing these data. Confidentiality issues may explain why those scholars who have published analyses of submission data have also been the editors of the respective journals under study. Second, many editors may find it too difficult to maintain detailed journal submission data. The data collection process can be time consuming, and some journals have only a limited staff that is prone to turnover every semester or academic year.Footnote 5 Additionally, journals tend to migrate to a new editor or different institution every few years, which can lead to a loss of submission data or unwillingness by an editor who views his or her term as temporary to keep these data. These journal migrations (both internal and external) may also lead to inconsistencies in the data collection process. Finally, many editors simply may not see the value in maintaining detailed submission data.

Assessing Journal Quality

While we do not associate journal transparency with journal quality, the growing literature on assessing journal quality does inform our work. Although there is no clear consensus on what constitutes a high-quality journal, most journal rankings employ one of two approaches: the citational and the reputational. The citational approach relies on counting the number of times that other academic articles cite a particular journal's published articles. This method has been used to rank political science journals (Christenson and Sigelman 1985; Hix 2004), individual scholars (Klingemann, Grofman, and Campagna 1989; Masuoka, Grofman, and Feld 2007b),Footnote 6 and departments (Klingemann 1986; Masuoka, Grofman, and Feld 2007a; Miller, Tien, and Peebler 1996). The impact ranking, which publishers often use to promote journals, relies on citation data (see, for example, Thomson's Institute for Scientific Information Journal Citation Reports).Footnote 7

The reputational approach relies on polling a representative sample of scholars about journal quality in a particular field. James Garand and Micheal Giles have become the standard-bearers for reputational studies of journal quality in the profession (Garand 1990, 2005; Garand and Giles 2003; Garand et al. 2009; Giles and Garand 2007; Giles, Mizell, and Patterson 1989; Giles and Wright 1975). This research fulfills a disciplinary longing for journal quality measurements.Footnote 8 Although these two approaches dominate the journal ranking literature, some scholars argue that neither is appropriate. Plümper (2007), for example, criticizes both approaches for being overly esoteric. His ranking, the Frequently Cited Articles (FCA) Score, focuses instead on journals' real-world impact.

Scholars have not yet included transparency in their ranking systems. We believe that knowing journals' degree of transparency in collecting and sharing submission data will be of interest as a comparison measure to journals' quality rankings. In addition, our effort to rank journals according to their transparency serves as an example of the difficulty of creating a standard measure for journal characteristics.

The Importance of Journal Submission Data

We believe that transparency and legitimacy are the primary reasons that scholarly journals should collect and disseminate submission data. The typical political scientist interacts with a chosen journal at only two stages of the review process: submission and decision. Authors are necessarily excluded from what transpires in the two to three months between those stages, and after this period of silence, they may find the editor's decision somewhat arbitrary, particularly when the reviewers' recommendations conflict.

Even if no actual bias exists in editors' decisions, the opacity of the double-blind peer-review and the final decision-making processes may foster the perception of bias among authors. Hearsay and conjecture may lead to perceptions that a journal does not publish a certain type of work or scholarship from a certain type of author. The point of such criticism is, in fact, the promotion of perestroika, or openness. In response to the charges of the Perestroika movement, APSR editorial reports under the new regime (e.g., Sigelman 2003, 2004, 2005) deluged readers with the journal's submission data as a means of proving that editorial bias no longer existed in its pages, if it ever did.

Keeping and releasing such data may help correct for perceived editorial biases. Analyses of journal publications serve a purpose, but they unduly limit the universe of scholarship under analysis by looking at only the end product of the journal publishing process—manuscripts that have cleared the hurdle of scholarly publication. These studies ignore the much larger universe of journal article submissions. Exploring, analyzing, and reporting such data will: (1) aid authors in deciding where to submit their manuscripts, (2) inform editors of potential biases in their journal's review process, and (3) allow the discipline to reassess evaluations of the quality of its journals.

Methods and Results

Survey of Journal Editors

To assess the transparency of political science journals in maintaining and releasing submission data, we first searched the websites of the 30 highest ranked journals, as rated on their impact by Garand et al. (2009). We looked for whether these websites relayed simple record-keeping information, such as average turnaround time, as well as more detailed submission data. The search turned up little in the way of detailed submission data, and we rarely found that even simpler summary statistics were being disseminated. Only 10 of the 30 journal websites provided basic information, and in most of these 10 cases, the websites provided only a rough estimate of average turnaround times.

Editors unwilling to publish this information on their journal's website may provide it in print or by request. To more formally assess journals' transparency in releasing submission data, in July 2009,Footnote 9 we sent an e-mail questionnaire (see figure A1 in the appendix) to the editors (or editorial staff) of the top 30 political science journals, receiving at least a partial response from 20 of the 30 journals surveyed.Footnote 10 About half of the editors of the 30 journals responded to the entire survey or directed us to print or web material to answer our questions. Table 1 summarizes the responses to the questionnaire, ranks the journals on their transparency in releasing submission data, and provides general information on and comparisons of the record-keeping practices of the top 30 journals. Figures 1 and 2 rank the responding journals by acceptance rates and turnaround time from initial submission to first decision, respectively.

Table 1 Summary of Political Science Journal Questionnaire Responses

Notes.

a Web, print, and by request = 1; web and by request = 2; print and by request = 3; by request only = 4; no contact made with journal or no information found denoted by bar.

b Number includes resubmissions.

c 82% of manuscripts turned around in 60 days or fewer.

Figure 1 Responding Journals' Acceptance Rates, Most Recent Year

Figure 2 Responding Journals' Average Turnaround Time in Days from Manuscript Submission to First Decision, Most Recent Year

The journal transparency rankings depend on the availability of journal information (e.g., acceptance rates, turnaround rates, number of submissions per year) via three different mediums: on the web, in print, and by request. Journals that provided information via all three mediums received a ranking of one. Journals delivering the information via two mediums received a ranking of either two or three. A journal that offered the information both on the web and by request was ranked higher than a journal that provided the information in print and by request, because visiting a journal's website generally imposes fewer opportunity costs than does obtaining a journal's print copy.Footnote 11 Finally, journals that provided such information only by request received a ranking of four.
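To illustrate how this scheme works, the following sketch maps a journal's reported release mediums onto the four-point ranking described above. The journal names and availability flags are hypothetical placeholders, not entries from table 1.

```python
# A minimal sketch of the four-point transparency ranking described above.
# Journal names and availability flags are hypothetical placeholders.

def transparency_rank(web, print_, by_request):
    """Map the three release mediums onto the 1-4 transparency ranking."""
    if web and print_ and by_request:
        return 1  # information available via all three mediums
    if web and by_request:
        return 2  # web access imposes lower opportunity costs than print
    if print_ and by_request:
        return 3
    if by_request:
        return 4  # information available only by request
    return None  # no contact made or no information found

journals = {
    "Journal A": dict(web=True, print_=True, by_request=True),
    "Journal B": dict(web=True, print_=False, by_request=True),
    "Journal C": dict(web=False, print_=False, by_request=True),
}

for name, availability in journals.items():
    print(name, transparency_rank(**availability))
```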

While the measure is a bit crude, much can be learned from this first attempt at assessing political science journals' transparency in releasing submission data. First, Garand et al.'s (2009) impact rankings do not necessarily correlate with the transparency rankings. Although the APSR and the Journal of Politics (ranked first and third, respectively, under Garand et al.'s system) both received a transparency ranking of one, Political Research Quarterly (ranked 16th under Garand et al.'s system) also received a top transparency score. Five journals received a ranking of two, and three received a ranking of three. Almost half of the journals (11 of 23)Footnote 12 indicated that they would be willing to provide such information only upon request. Many journals clearly keep but do not openly share their submission data.

In the spirit of transparency, table 1 also provides some of the additional survey information provided by the journal editors. These data showcase the ample variation that exists in the submission and review processes of the profession's top journals. The top political science journals vary widely in the number of submissions they receive, their acceptance rates, and their average turnaround time until first decision.

Transparency Case Study

Our examination of submission data for American Politics Research (APR)Footnote 13 from January 2006 to December 2008 highlights the many benefits of more extensive journal record-keeping. While past work has examined submission and publication data from the more general political science journals, we could find no scholarship that focused on a field-specialized journal like APR. The following assessment of the articles published in relation to the articles submitted offers a more complete view of the journal's specialization and biases.

Our examination focused on a few understudied relationships in the field of political science journal publishing, including the effect of the lead author's region, university type, and professional status on a manuscript's likelihood of acceptance. A lead author's geographic location has been shown to influence article acceptance in academic journals in the medical field (Boulos 2005; Tutarel 2002), so we expected that such a geographic bias might also favor APR's acceptance of manuscripts submitted by lead authors from the Mid-Atlantic region. Authors from these locales are likely to have a greater familiarity with the editor and editorial staff as the result of shared attendance at regional conferences or service on regional boards.

We also expected that authors who work at institutions that place a greater emphasis on research would have more success publishing their work in APR. Research institutions typically give their faculty lighter teaching loads so that they will have more time to research and publish. In academia at large, articles by authors from academic settings are accepted at scholarly peer-reviewed journals at significantly higher rates than articles by authors from nonacademic settings (Tenopir and King 1997). We expected to find a similar difference in acceptance rates of submitted manuscripts from authors in academic settings with different foci and resources.

Finally, we posited that an author's professional status would impact the likelihood of his or her article being accepted. Seasoned academics (associate and full professors) are more likely than less experienced scholars to know the norms of publishable scholarship and be better able to correctly match their manuscript to an appropriate venue (Pasco 2002). Additionally, submitting authors who have attained their doctorate but have not yet gained tenure (usually assistant professors) would likely have a higher probability of acceptance than would graduate students. These authors have the advantage of some research and publishing experience and are driven to produce high-quality work by the incessant ticking of the tenure clock. In examining these hypotheses, we controlled for manuscript turnaround time, the number of authors on a submission, the subject area of the manuscript, and whether the manuscript had a female lead author (see table A2 in the appendix for information on variable measurement).
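To make the modeling strategy concrete, the sketch below fits a probit model of final acceptance on dummy-coded predictors and then computes predicted probabilities using the observed-values approach (Hanmer and Kalkan 2009). It is a minimal illustration, not our actual estimation code: the file name and column names are assumptions standing in for the coding described in table A2.

```python
# A minimal sketch of the probit analysis described above, using statsmodels.
# The file name and column names (accepted, region, inst_type, status,
# subject, turnaround, n_authors, female_lead) are hypothetical stand-ins
# for the variables described in table A2.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("apr_submissions.csv")  # hypothetical submission records

# Dummy-code the categorical predictors. drop_first=True drops one category
# per variable to serve as the reference; reorder the categories first if a
# specific baseline (e.g., Mid-Atlantic for region) is wanted.
X = pd.get_dummies(
    df[["region", "inst_type", "status", "subject"]], drop_first=True
).astype(float)
X[["turnaround", "n_authors", "female_lead"]] = df[
    ["turnaround", "n_authors", "female_lead"]
]
X = sm.add_constant(X)
y = df["accepted"]  # 1 = manuscript eventually accepted, 0 = not

model = sm.Probit(y, X).fit()
print(model.summary())  # coefficients, standard errors, pseudo-R-squared

# Observed-values approach to predicted probabilities: assign every case to
# one category of interest, hold all other covariates at their observed
# values, predict, and average over the sample.
def observed_values_probability(category_dummies, active=None):
    X_cf = X.copy()
    X_cf[category_dummies] = 0.0   # reference category by default
    if active is not None:
        X_cf[active] = 1.0         # switch on the category of interest
    return model.predict(X_cf).mean()

region_dummies = [c for c in X.columns if c.startswith("region_")]
print("Reference region:", round(observed_values_probability(region_dummies), 3))
for dummy in region_dummies:
    print(dummy, round(observed_values_probability(region_dummies, dummy), 3))
```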

The reasons why editorial bias may creep into editorial decision-making are easy to explain, even if an editor is trying to be conscientious and fair. Take, for example, two submissions of approximately equal quality. Reviewers return two equally critical sets of reviews, but the editor knows the author of the first manuscript and not the author of the second. The editor may be willing to hazard that the author of the first manuscript is capable of managing the revisions and give that author the benefit of the doubt, sending him or her a decision of revise-and-resubmit. But, being completely in the dark about the author of the second manuscript, the editor may not extend him or her the same benefit of the doubt and may instead send a rejection letter.

It is important to note that by “knowing” the author of the first manuscript, we do not mean that this author has to have been a student, much less an advisee of the editor. Sometimes just having been on a conference panel together or having met a time or two is enough. And, of course, editors are not always even conscious of what biases might be operating, which is why conscientious editors should be interested in serious data collection and analysis.

Case Study Findings

Over the 2006–08 period, the APR journal staff of one full-time editor and one half-time graduate student processed 491 manuscripts.Footnote 14 During this time period, 111 manuscripts were accepted. Twenty-four of the manuscripts that received a decision of revise-and-resubmit were never returned (see table A1 in the appendix for additional data on submissions).Footnote 15 As hypothesized, articles submitted by lead authors from the Mid-Atlantic region have a slight advantage over articles submitted from other regions: just over 30% of manuscripts submitted by lead authors from the Mid-Atlantic region were accepted to APR (see table A3 in the appendix for the descriptive statistics). The probit model's excluded category is the Mid-Atlantic region (see table 2). The signs for nearly every region (except international) are negative, indicating that papers submitted by lead authors from Mid-Atlantic institutions are more likely to be accepted than papers from other regions. Although these relationships are not statistically significant,Footnote 16 the accompanying predicted probabilitiesFootnote 17 (see table 3) provide a better sense of their substantive significance. Lead authors from the Mid-Atlantic region generally have a 15 to 20 percentage point advantage over lead authors from other regions in terms of article acceptance at APR. International papers appear to have the next best chance of acceptance, but this finding could be an artifact of the small number of manuscripts (n = 11) that were submitted by authors working abroad.

Table 2 Predicting Final Acceptance at APR

Notes. N = 464. Log likelihood = −229.254. Pseudo-R² = .089. Coefficients and standard errors calculated using probit regression. p values are two-tailed. Graduate student is the excluded category for author status. Bachelor's degree is the excluded category for institution type. Mid-Atlantic is the excluded category for region. The model includes the three subject categories with the most submissions; a number of other categories are excluded (American political development, interest groups, media, other, parties, policy, presidency, public opinion, and subnational).

Table 3 Predicted Probability of Acceptance at APR

Note. Predicted probabilities calculated using the observed values approach.

As hypothesized, articles submitted by lead authors from research institutionsFootnote 18 performed better at APR than articles submitted by scholars from other institution types. The excluded category in the model is the bachelor's degree institution type. The sign for master's degree institutions is negative, indicating that papers submitted by lead authors from these institutions are less likely to be accepted than papers submitted by authors from bachelor's degree institutions. The signs for the three research institution rankings are positive, suggesting that papers submitted by lead authors from these institution types are more likely to be accepted than papers submitted by authors from bachelor's degree institutions. Although these relationships are not statistically significant, the predicted probability of acceptance at APR is generally higher for authors from institutions with a research focus (see table 3).

Somewhat surprisingly, lead authors who are graduate students, assistant professors, and associate professors all have similar likelihoods of manuscript acceptance at APR, while full professors' chances of manuscript acceptance lag slightly behind. The excluded category in the model is the status of graduate student. The manuscripts of assistant and associate professors are more likely to be accepted at APR than the work of graduate students, but full professors seem to have less success. None of these relationships reach accepted levels of statistical significance. Substantively, the most surprising finding is that graduate students have a 19% predicted probability of having their manuscript accepted at APR, while full professors have an 18% likelihood of acceptance (see table 3). Two types of selectivity bias may well be behind this finding. First, APR has a notable reputation for publishing the work of younger scholars who are just launching their careers. These scholars may well send their best work to the journal. Second, senior faculty, who are also cognizant of this reputation, may prefer to send their best work elsewhere.

The nonsignificance of many of these relationships led us to look at an additional relationship—whether authors who serve as APR board members have a higher probability of acceptance than non-board members. Indeed, manuscripts with an author who is an APR board member are 6 percentage points more likely to be accepted than manuscripts without a board member author (see table 3). Although not statistically significant, this increased rate may be explained by some factors other than editorial bias. First, board members may have a greater familiarity with the types of articles that APR publishes than the average American politics-focused political scientist. Second, board members are chosen not at random, but predominantly because of their past publishing successes.

In summary, many of the relationships explored here are neither statistically nor substantively significant. In this case, null findings come as a relief. While some patterns exist between the personal characteristics of APR authors and the likelihood of manuscript acceptance, none show particularly strong evidence of editorial bias. Nevertheless, awareness of even slight tendencies toward bias can inform the editorial staff in future decision-making.

Discussion and Conclusion

Despite the variety and breadth of focus seen in the discipline's journals, all use some manner of peer review to ascertain whether a submission is of high enough quality to merit publication. This process can either be shared with a broader public or kept undisclosed. This common process across journals has been left largely unstudied as a result of editors' reserve—whether mindful or not—in releasing journal submission data.

The lack of such data leads to inaccurate and potentially harmful conclusions about publishing in political science journals. First, all members of the profession—from graduate students to tenured faculty—should know the long odds they face when submitting a manuscript for publication in a top political science journal. We believe that our study is the first to publish journal acceptance rates (see table 1)Footnote 19 across multiple journals since the APSA last did so (Martin 2001). A simple method of ranking journals' selectivity, which often serves as a stand-in for journal quality, uses their acceptance and rejection rates. Analyzing those data can lead to some interesting insights into our discipline. For example, scholars in other disciplines have found journal acceptance rates to be correlated with peer perceptions of journal quality (Coe and Weinstock 1984), the methodological quality of the manuscripts published (Lee et al. 2002), and the level of consensus within a discipline (Hargens 1988, 147). Other disciplines mine and publish these data to further a broader understanding of their profession.Footnote 20 Our discipline could benefit from the same self-reflection.
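Ranking journals by selectivity from such data is straightforward once acceptance counts and total submissions are available; the sketch below illustrates the arithmetic with invented figures rather than the numbers reported in table 1.

```python
# A minimal sketch of ranking journals by acceptance rate (selectivity).
# The counts are invented placeholders, not figures from table 1.

counts = {
    # journal: (manuscripts accepted, manuscripts submitted) over a period
    "Journal A": (45, 520),
    "Journal B": (110, 430),
    "Journal C": (60, 280),
}

rates = {
    journal: accepted / submitted
    for journal, (accepted, submitted) in counts.items()
}

# A lower acceptance rate indicates a more selective journal.
for journal, rate in sorted(rates.items(), key=lambda item: item[1]):
    print(f"{journal}: {rate:.1%} acceptance rate")
```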

Second, journals may unjustly bear the costs of alleged editorial bias toward the articles they publish. Data on submitted articles' subfields and methodologies, and perhaps even on some individual characteristics of submitting authors, should be published to allow a more accurate evaluation of whether a journal engages in editorial bias. The availability of this information can create a more informed market in which editors are aware of and can address their potential for bias, and in which authors can better choose where to send their work.

Finally and in a related vein, the absence of published submission data leads to a potentially skewed understanding of the types of scholarship that are being undertaken within the discipline. Counting only published articles does not create accurate measurements of what work is being done in the profession. Editors or reviewers unfamiliar with the newest methods or fields may be guarded in suggesting their publication, with the result being that groundswells of work employing a particular methodology—such as the recent surge in studies employing field experimental designs—are not accurately captured when studying journal publications.

Fortunately, there seems to be an easy solution to this problem, as most journal editors we contacted were more than willing to share their submission data. This willingness supports our belief that journal editors serve at the behest of their authors and their audiences, a perspective that suggests that editors' only reasons for not distributing submission data are a failure to recognize that readers would find such data useful and possible time constraints. The solution here is simply to educate journal editors about the value to the profession of releasing submission data.

Our research does, however, raise some difficult issues in cases in which transparency must be balanced against privacy, particularly when information about editorial bias or rejected manuscripts might inadvertently reveal author identities. Suppose that by examining a journal's records over a limited period, researchers were to find hints that an editorial regime had favored a small number of faculty and students from one university. If the number of authors favored was sufficiently small, they would be easily identifiable, putting them in a very vulnerable position. However, these authors would likely have no idea that they had been favored, figuring instead that their work had been subject to the same rigorous review process as all other submissions. Should such a finding be published for the world to see? Probably not. But certainly, such a result should be shared with the editor and perhaps the editorial board to serve as a kind of mid-course correction. Editors may not even be aware that appearances of impropriety are slipping into their editorial practices. Awareness is often the best inoculation against bias.

From the standpoint of editorial structure, the arrangement devised by the outgoing management at Political Behavior seems well suited to handling the potential conflicts posed by friendly submissions. By appointing one editor from the University of Pittsburgh, Jon Hurwitz, and one editor from the University of Kentucky, Mark Peffley, the matter was easily avoided. Joint-editor arrangements are still relatively rare in the field, however. Over the longer term, limiting editorial terms through regular rotation of editorships is very important. No editor should hold such a powerful position forever.

While our study faced certain limitations with regard to the difficulty of comparing the profession's wide variety of journal practices, we believe that we have made a solid case for greater transparency. Greater release of submission data may lead to multiple journal-specific datasets. Compilation of these data on one site by a central agent—perhaps the APSA—would help in two ways: (1) by encouraging the creation of a commonly accepted standard for which journal submission data are reported, and (2) by allowing comparisons across journals.Footnote 21

Appendices

Table A1 Summary of Non-Acceptances for APR Submission Data

Table A2 Variable Measurement for APR Submission Data

Note. International = Not U.S.; Mountain = CO, ID, MT, NV, UT, WY; Mid-Atlantic = DE, DC, MD, NY, NJ, WV; Midwest = IL, IN, IA, KS, KY, MI, MN, MO, NE, ND, OH, SD, WI; New England = CT, ME, MA, NH, RI, VT; Pacific West = AK, CA, HI, OR, WA; South = AL, AR, FL, GA, LA, MS, NC, SC, TN, TX, VA; Southwest = AZ, NM, OK; other = scholars working outside of traditional college or university settings (e.g., with think tanks, interest groups, or the government).

Table A3 Descriptive Statistics for the Attributes of Submitted APR Papers

Figure A1 E-mail Questionnaire to Journal Editors

Footnotes

1 Book publication remains the other important metric for obtaining and retaining academic employment in political science. In this case, the esteem of the publisher often serves as a proxy dividing "good" academic books from "bad." See Goodson, Dillman, and Hira (1999) for a reputation-based approach to ranking political science book publishers.

2 For a summary of the movement's stances, see Mr. Perestroika's (2000) opening e-mail salvo.

3 “Editorial bias” is used here to mean a significant difference between the amount of scholarship submitted in an area or by a type of author and the amount eventually published. Some recent articles have found a different kind of bias—“publication bias”—in their analysis of the statistical underpinnings of articles published in the profession's journals. Gerber, Green, and Nickerson (2000) find that sample size matters in voter mobilization studies that employ a field experimental design: treatment effects on turnout were larger in studies with smaller sample sizes, potentially leading scholars citing this literature to overstate the effects of the treatment on turnout. Small-n studies must show larger effects than their large-n kin to pass accepted standards of statistical significance, leading to a bias against small-n studies that show smaller results. Others (Gerber and Malhotra 2008; Gerber et al. 2010) have found evidence of publication bias at the APSR and the AJPS, but they leave the disentangling of the sources of bias to others.

4 For these reports, see http://www.indiana.edu/~iuisq/.

5 The movement of journal submission processes to electronic formats should ease the laboriousness of submission data collection.

6 Alternatively, see the reputational rankings of scholars composed by Somit and Tanenhaus (1967).

7 These data can be accessed at http://www.isiwebofknowledge.com.

8 At least if the position of Giles and Garand's (2007) article at number one on the list of PS's most-downloaded articles in the past year (as of June 25, 2009) is any indication. For this list, see http://journals.cambridge.org/action/mostReadArticle?jid=PSC.

9 A follow-up e-mail was sent two weeks later to individuals who did not respond. We contacted the nonresponding journals a final time in December 2009.

10 Two journals (Political Analysis and the Journal of Conflict Resolution) responded that they were undergoing editorial transitions that made responding to our survey difficult. While we cannot be certain, editorial transitions may be one of the more significant hindrances to consistent and transparent record-keeping.

11 None of the responding journals provided the information via web and print but not by request.

12 Attentive readers will notice that this number differs from the figure that we offered earlier (i.e., that 20 of 30 journals responded with at least a partial response to our survey). Three journals responded that the information was generally available by request, but that because of an ongoing editorial transition, this information could be shared in the future, but not at present.

13 APR has been published since 1975, and while not the top journal in the political science discipline, it generally ranks among the top 30 (Giles and Garand 2007, though APR's rank varies depending on which measure is employed). APR may be considered a fairly typical example of a more specialized journal to which many American politics subfield-focused political scientists submit and publish work, as opposed to the discipline's elite, general journals. APR publishes across all branches and areas of American government; according to its website, the journal prints the “most recent scholarship on such subject areas as: voting behavior, political parties, public opinion, legislative behavior, courts and the legal process, presidency and bureaucracy, race and ethnic politics, women in politics, public policy, [and] campaign finance” (see http://www.bsos.umd.edu/gvpt/apr/).

14 Processing a manuscript includes making a record of the manuscript in the submission database; assigning, inviting, and reassigning reviewers; making a first decision on the manuscript (either reject or revise-and-resubmit); and making a second (and sometimes a third) decision if the first-round decision is a revise-and-resubmit. On occasion, authors have appealed rejection decisions. APR considers these requests on a case-by-case basis.

15 We chose to explore submission records as they relate to final decisions because at APR, a revise-and-resubmit decision generally indicates that an article has a strong potential for future acceptance. The present editor, James Gimpel, does not offer a chance to revise and resubmit a manuscript unless he believes that there is an excellent chance that a revision will successfully overcome the reviewers' reservations. It is possible that some of the 2008 revise-and-resubmits will still be returned and accepted; however, the likelihood of that occurrence lessens with time. APR normally designates a period ranging from 5 to 10 months to return a revised manuscript, so most of the outstanding revise-and-resubmits in this dataset will likely remain in that purgatory of having been neither accepted nor rejected.

16 We coded many of the variables (region, university type, author status, and gender) with respect to the lead author. The lead (or submitting) author is the individual most likely to be noted by the editorial staff and is thus the most likely source of any information on which bias could operate. When we ran the same probit model with only single-authored papers, the results did not change, even though the sample size declined dramatically.

17 Predicted probabilities were calculated using the observed values approach, as recommended by Hanmer and Kalkan (2009).

18 We used the institutional categorizations created by the Carnegie Foundation for the Advancement of Teaching, available at http://classifications.carnegiefoundation.org/lookup_listings/institution.php.

19 Journals use different methods to calculate these numbers, and we have done our best to convey the percentage of manuscripts that are eventually accepted for publication over a three-year period. These numbers may differ slightly from those that the journals provided us as we tweaked them for conformity to this standard. This process further speaks to the need for a central agency to set uniform standards for this and other journal submission measures.

20 For an example of one discipline's efforts, see the annual reports released by the American Psychological Association (APA), available at http://www.apa.org/pubs/journals/statistics.aspx. The University of North Texas has a site that links to the journal acceptance and rejection rates of journals in multiple disciplines, available at http://www.library.unt.edu/ris/journal-article-acceptance-rates.

21 The APA provides one successful example of this sort of compilation and dissemination of basic journal submission data (see note 20). Such a site would fit well as a replacement for APSA's now outdated efforts (Martin 2001) and would complement the association's recent publications on publishing (Yoder 2008) and assessment (Deardorff, Hamann, and Ishiyama 2009) within the profession.


References

Aoki, Andrew L., and Takeda, Okiyoshi. 2004. “Small Spaces for Different Faces: Political Science Scholarship on Asian Pacific Americans.” PS: Political Science and Politics 37 (3): 497–500.
Boulos, Maged. 2005. “On Geography and Medical Journalology: A Study of the Geographical Distribution of Articles Published in a Leading Medical Informatics Journal between 1999 and 2004.” International Journal of Health Geographics 4 (1): 7.
Breuning, Marijke, and Sanders, Kathryn. 2007. “Gender and Journal Authorship in Eight Prestigious Political Science Journals.” PS: Political Science and Politics 40 (2): 347–51.
Cardenas, Sonia. 2009. “Mainstreaming Human Rights: Publishing Trends in Political Science.” PS: Political Science and Politics 42 (1): 161–66.
Christenson, James A., and Sigelman, Lee. 1985. “Accrediting Knowledge: Journal Stature and Citation Impact in Social Science.” Social Science Quarterly 66 (4): 964–75.
Coe, Robert, and Weinstock, Irwin. 1984. “Evaluating the Management Journals: A Second Look.” Academy of Management Journal 27 (3): 660–66.
Deardorff, Michelle D., Hamann, Kerstin, and Ishiyama, John, eds. 2009. Assessment in Political Science. Washington, DC: American Political Science Association.
Fisher, Bonnie S., Cobane, Craig T., Vander Ven, Thomas M., and Cullen, Francis T. 1998. “How Many Authors Does It Take to Publish an Article? Trends and Patterns in Political Science.” PS: Political Science and Politics 31 (4): 847–56.
Garand, James C. 1990. “An Alternative Interpretation of Recent Political Science Journal Evaluations.” PS: Political Science and Politics 23 (3): 448–51.
Garand, James C. 2005. “Integration and Fragmentation in Political Science: Exploring Patterns of Scholarly Communication in a Divided Discipline.” Journal of Politics 67 (4): 979–1,005.
Garand, James C., and Giles, Micheal W. 2003. “Journals in the Discipline: A Report on a New Survey of American Political Scientists.” PS: Political Science and Politics 36 (2): 293–308.
Garand, James C., Giles, Micheal W., Blais, Andre, and McLean, Iain. 2009. “Political Science Journals in Comparative Perspective: Evaluating Scholarly Journals in the United States, Canada, and the United Kingdom.” PS: Political Science and Politics 42 (4): 695–717.
Gerber, Alan S., Green, Donald P., and Nickerson, David. 2000. “Testing for Publication Bias in Political Science.” Political Analysis 9 (4): 385–92.
Gerber, Alan S., and Malhotra, Neil. 2008. “Do Statistical Reporting Standards Affect What Is Published? Publication Bias in Two Leading Political Science Journals.” Quarterly Journal of Political Science 3: 313–26.
Gerber, Alan S., Malhotra, Neil, Dowling, Conor M., and Doherty, David. 2010. “Publication Bias in Two Political Behavior Literatures.” American Politics Research 38 (4): 591–613.
Giles, Micheal W., and Garand, James C. 2007. “Ranking Political Science Journals: Reputational and Citational Approaches.” PS: Political Science and Politics 40 (4): 741–51.
Giles, Micheal W., Mizell, Francie, and Patterson, David. 1989. “Political Scientists' Journal Evaluations Revisited.” PS: Political Science and Politics 22 (3): 613–17.
Giles, Micheal W., and Wright, Gerald C. 1975. “Political Scientists' Evaluations of Sixty-Three Journals.” PS: Political Science and Politics 8 (3): 254–56.
Goodson, Larry P., Dillman, Bradford, and Hira, Anil. 1999. “Ranking the Presses: Political Scientists' Evaluations of Publisher Quality.” PS: Political Science and Politics 32 (2): 257–62.
Grant, J. Tobin. 2005. “What Divides Us? The Image and Organization of Political Science.” PS: Political Science and Politics 38 (3): 379–86.
Hanmer, Michael J., and Kalkan, K. Ozan. 2009. “Behind the Curve: Clarifying the Best Approach to Calculating Predicted Probabilities and Marginal Effects from Limited Dependent Variable Models.” Working paper, University of Maryland at College Park.
Hargens, Lowell L. 1988. “Scholarly Consensus and Journal Rejection Rates.” American Sociological Review 53 (1): 139–51.
Hill, Kim Quaile, and Leighley, Jan E. 2005. “Science, Political Science, and the AJPS.” In Perestroika: The Raucous Rebellion in Political Science, ed. Monroe, Kristen Renwick, 346–53. New Haven: Yale University Press.
Hix, Simon. 2004. “A Global Ranking of Political Science Departments.” Political Studies Review 2 (3): 293–313.
Johnson, Jim. 2009. “Improving Scholarly Journals—Part 2a.” The Monkey Cage [weblog], March 24. http://www.themonkeycage.org/2009/03/post_174.html.
Kasza, Gregory J. 2005. “Methodological Bias in the American Journal of Political Science.” In Perestroika: The Raucous Rebellion in Political Science, ed. Monroe, Kristen Renwick, 342–45. New Haven: Yale University Press.
Klingemann, Hans-Dieter. 1986. “Ranking the Graduate Departments in the 1980s: Toward Objective Qualitative Indicators.” PS: Political Science and Politics 19 (3): 651–61.
Klingemann, Hans-Dieter, Grofman, Bernard, and Campagna, Janet. 1989. “The Political Science 400: Citations by Ph.D. Cohort and by Ph.D.-Granting Institution.” PS: Political Science and Politics 22 (2): 258–70.
Lee, Kirby P., Schotland, M., Bacchetti, P., and Bero, L. A. 2002. “Association of Journal Quality Indicators with Methodological Quality of Clinical Research Articles.” JAMA 287 (21): 2,805–08.
Lewis-Beck, Michael S., and Levy, Dena. 1993. “Correlates of Publication Success: Some AJPS Results.” PS: Political Science and Politics 26 (3): 558–61.
Martin, Fenton S. 2001. Getting Published in Political Science Journals: A Guide for Authors, Editors and Librarians. 5th ed. Washington, DC: American Political Science Association.
Martz, John D. 1990. “Political Science and Latin American Studies: Patterns and Asymmetries of Research and Publication.” Latin American Research Review 25 (1): 67–86.
Masuoka, Natalie, Grofman, Bernard, and Feld, Scott L. 2007a. “Ranking Departments: A Comparison of Alternative Approaches.” PS: Political Science and Politics 40 (3): 531–37.
Masuoka, Natalie, Grofman, Bernard, and Feld, Scott L. 2007b. “The Political Science 400: A 20-Year Update.” PS: Political Science and Politics 40 (1): 133–45.
Miller, Arthur H., Tien, Charles, and Peebler, Andrew A. 1996. “Department Rankings: An Alternative Approach.” PS: Political Science and Politics 29 (4): 704–17.
Mr. Perestroika. 2000. “On the Irrelevance of APSA and APSR to the Study of Political Science!” http://www.psci.unt.edu/enterline/mrperestroika.pdf.
Munck, Gerardo L., and Snyder, Richard. 2007. “Who Publishes in Comparative Politics? Studying the World from the United States.” PS: Political Science and Politics 40 (2): 339–46.
Orr, Marion. 2004. “Political Science and Education Research: An Exploratory Look at Two Political Science Journals.” Educational Researcher 33 (5): 11–16.
Pasco, Allan. 2002. “Basic Advice for Novice Authors.” Journal of Scholarly Publishing 33 (2): 75–89.
Plümper, Thomas. 2007. “Academic Heavy-Weights: The ‘Relevance’ of Political Science Journals.” European Political Science 6 (1): 41–50.
Sapotichne, Joshua, Jones, Bryan D., and Wolfe, Michelle. 2007. “Is Urban Politics a Black Hole? Analyzing the Boundary between Political Science and Urban Politics.” Urban Affairs Review 43 (1): 76–106.
Sigelman, Lee. 2003. “Report of the Editor of the American Political Science Review, 2001–2002.” PS: Political Science and Politics 36 (1): 113–17.
Sigelman, Lee. 2004. “Report of the Editor of the American Political Science Review, 2002–2003.” PS: Political Science and Politics 37 (1): 139–42.
Sigelman, Lee. 2005. “Report of the Editor of the American Political Science Review, 2003–2004.” PS: Political Science and Politics 38 (1): 137–40.
Sigelman, Lee. 2009. “Are Two (or Three or Four … or Nine) Heads Better than One? Collaboration, Multidisciplinarity, and Publishability.” PS: Political Science and Politics 42 (3): 507–12.
Somit, Albert, and Tanenhaus, Joseph. 1967. The Development of American Political Science: From Burgess to Behavioralism. Boston: Allyn and Bacon.
Tenopir, Carol, and King, Donald W. 1997. “Trends in Scientific Scholarly Journal Publishing in the United States.” Journal of Scholarly Publishing 28 (3): 135–70.
Tutarel, Oktay. 2002. “Geographical Distribution of Publications in the Field of Medical Education.” BMC Medical Education 2: 3.
Yoder, Stephen, ed. 2008. Publishing Political Science: The APSA Guide to Writing and Publishing. Washington, DC: American Political Science Association.
Young, Cheryl D. 1995. “An Assessment of Articles Published by Women in 15 Top Political Science Journals.” PS: Political Science and Politics 28 (3): 525–33.