
Reviewer Fatigue? Why Scholars Decline to Review their Peers’ Work

Published online by Cambridge University Press:  19 October 2015

Marijke Breuning, University of North Texas
Jeremy Backstrom, National Consortium for the Study of Terrorism and Responses to Terrorism (START)
Jeremy Brannon, University of North Texas
Benjamin Isaak Gross, University of North Texas
Michael Widmeier, University of North Texas

Abstract

As new academic journals have emerged in political science and existing journals experience increasing submission rates, editors are concerned that scholars experience “reviewer fatigue.” Editors often assume that an overload of requests to review makes scholars less willing to perform the anonymous yet time-consuming tasks associated with reviewing manuscripts. To date, there has not been a systematic investigation of the reasons why scholars decline to review. We empirically investigated the rate at which scholars accept or decline to review, as well as the reasons they gave for declining. We found that reviewer fatigue is only one of several reasons why scholars decline to review. The evidence suggests that scholars are willing to review but that they also lead busy professional and personal lives.

Type: The Profession
Copyright © American Political Science Association 2015

The double-blind peer-review process is an important aspect of publishing in political science journals. Therefore, an important task for journal editors is the identification of appropriate reviewers. Although reviewing generally is perceived as a vital professional service, it also is uncompensated and time-consuming. In the past decade, many journals have witnessed substantial increases in the flow of manuscripts, and new journals have emerged. Together, these developments result in an increase in the number of review requests arriving in a typical scholar’s e-mail “in-box.”

Therefore, it is not entirely surprising that potential reviewers identified by journal editors are not always eager to respond positively to requests to review. The common wisdom among journal editors is that reviewers increasingly feel overburdened by such requests—a phenomenon known as “reviewer fatigue.” A recent task force report published by the American Political Science Association (APSA) suggests that “many scholars are overwhelmed by requests to review,” which has resulted in “declining response rates [that] pose a threat to the quality of peer review” (Lupia and Aldrich 2014, 28).

Djupe’s (2015) survey questions the APSA task force’s findings. Here, we empirically investigate the self-reported reasons for declining to review for the American Political Science Review (APSR). In the conclusion, we add nuance to the reviewer-fatigue argument, speculate about the degree to which our findings generalize, and offer suggestions.

PUBLISHING IN POLITICAL SCIENCE

A number of studies evaluate trends in publishing in political science journals (e.g., Breuning and Sanders 2007; Hancock, Baum, and Breuning 2013; Hesli and Lee 2011; Hesli, Lee, and Mitchell 2012; Maliniak, Powers, and Walter 2013; Østby et al. 2013; Wilson 2014). They address the factors that make scholars productive, impediments to publishing, and patterns in citation. Miller et al. (2013) provided valuable advice about the review process, and Østby et al. (2013) examined whether published articles reflect patterns in submissions.

A recent survey concluded that journal editors frequently focus their requests on a limited range of scholars (Djupe 2015). In economics, Chetty, Saez, and Sándor (2014) defined “reviewing” as prosocial behavior, and Djupe (2015) found that political scientists concur. Chetty et al. (2014) also found that incentives facilitate the timely completion of reviews. We discuss this issue in the conclusion to this article. Our study, instead, focused on the reasons that scholars give when they decline to review: Are they overburdened with requests to review?

Additionally, we examined whether women and men behave differently. Women remain underrepresented in the discipline (Sedowski and Brintnall 2007) and their work is cited less often (Maliniak et al. 2013), which may affect the frequency with which they are identified as potential reviewers. If this is true, then the women who are identified may face a relatively higher rate of requests—especially if editors want to ensure that women are adequately represented as reviewers.

REQUESTS TO REVIEW

This study analyzes the responses of scholars who were invited to review manuscripts submitted to the American Political Science Review (APSR) in 2013. That year, the APSR assigned 893 unique manuscript numbers, 682 (76.4%) of which were sent out for review. A small subset went through two or three cycles of review after receiving invitations to revise and resubmit. These manuscripts are counted as separate submissions in the APSR’s annual report (Ishiyama 2014), but they retain the original manuscript number with a suffix added. For example, if manuscript 00525 was revised and resubmitted, it was then identified as manuscript 00525R1. We tracked requests to review through all rounds for all manuscripts initially received in 2013.

Requests to review are standardized e-mails sent through Editorial Manager. Scholars may accept or decline the request. When they decline, they have the option to provide a reason; we coded their reasons.

In total, 4,563 requests to review (i.e., our unit of analysis) were sent to 3,414 individual scholars in 2013. Almost 96% of the requests were for original submissions; the remainder were for revised papers. On average, about 6.7 requests to review were sent per manuscript.
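The per-manuscript figure follows directly from these counts. Below is a minimal sketch of the arithmetic, using only the aggregates reported in this paragraph (not the underlying dataset):

```python
# Request-volume arithmetic from the aggregates reported in the text.
total_requests = 4_563       # review requests sent in 2013
unique_scholars = 3_414      # individual scholars contacted
reviewed_manuscripts = 682   # manuscripts sent out for review

print(f"Requests per manuscript: {total_requests / reviewed_manuscripts:.1f}")  # ~6.7
print(f"Requests per scholar:    {total_requests / unique_scholars:.2f}")       # ~1.34
```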

Women received 30.6% of all requests to review, which suggests that they received requests roughly in proportion to their presence in the discipline and somewhat higher than the 24.14% reported for the American Journal of Political Science (Wilson 2014). In 2007, 26% of all political scientists and 36% of those at the assistant professor rank were women (Sedowski and Brintnall 2007). More recently, the National Science Foundation (NSF 2014) reported that women now earn slightly more than 40% of the PhDs in political science. The proportion of women in academia likely will lag behind the proportion of PhDs awarded, but it is likely to be somewhat higher than the numbers reported by Sedowski and Brintnall (2007).

Not all of the invited reviewers responded positively and some did not reply at all. The good news is that 82.8% of the scholars who received a request responded, either accepting or declining the invitation. The remaining 17.2% did not respond to our requests to review. In rare instances, potential reviewers contacted the editorial office outside of the Editorial Manager system regarding their inability to review and were unassigned by an editor. These cases are included in the “no response” category.

The data cannot indicate the reasons for non-responses; however, a request occasionally is sent to an e-mail address that is no longer in use or invitations generated by the Editorial Manager system may get caught in a spam filter. Hence, non-responsiveness is attributable, in part, to the failure of the request to review to reach the addressee. It is certainly possible that some scholars choose not to respond to requests, but this is not the only—and, possibly, not the most prevalent—reason.


Of the almost 83% of scholars who responded to requests to review, the largest proportion accepted. As shown in table 1, 60% of the requests were accepted, whereas slightly less than 23% were declined. When taken as a proportion of the 83% of requests that netted a response, 72.5% agreed to review and 27.5% declined. In other words, depending on the preferred denominator, the positive response to requests to review ranged between 60% and slightly more than 70%. Moreover, as shown in table 1, women and men accept review requests in similar proportions; the differences are not statistically significant.

Table 1 Acceptance of the Request to Review

Note: Chi-square: 1.062, df 2, p = 0.588.

Numbers may not add to 100 due to rounding.
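The test reported in the note is a standard chi-square test of independence on the gender-by-response contingency table, and the two denominators discussed above reconcile by simple division. A sketch follows; because the table as reproduced here gives only percentages, the cell counts below are hypothetical placeholders chosen to match the reported marginals, not the actual table 1 cells:

```python
from scipy.stats import chi2_contingency

# HYPOTHETICAL counts: consistent with the reported marginals (30.6% of
# requests to women; ~60% accept, ~23% decline, ~17% no response), but
# NOT the actual table 1 cells, which are not reproduced here.
observed = [[838, 317, 241],    # women: accepted, declined, no response
            [1902, 718, 547]]   # men:   accepted, declined, no response
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# The two denominators discussed in the text:
share_accepted = 0.60    # acceptances as a share of all requests
share_responded = 0.828  # requests that drew any response
print(f"Acceptance among respondents: {share_accepted / share_responded:.1%}")  # ~72.5%
```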

The data reported in table 1 use our unit of analysis—that is, the request to review. If we instead examine the responsiveness of the 3,414 individual scholars who received the 4,563 requests to review, we find that 63.6% accepted one or more invitations—slightly higher than the positive response rate when the request to review is the unit of analysis. Additionally, 73.3% of the 3,414 individual scholars received only one request to review, 20.9% received two, and the remaining 5.8% received three or more. Among the 914 individuals who received two or more requests, 14.1% of those requests were for resubmissions. Editorial Board members also were more likely than others to receive multiple requests. In summary, most of those invited to review for the APSR in 2013 received only one request. A sketch of this unit-of-analysis shift appears below.
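Shifting the unit of analysis from requests to scholars amounts to grouping request-level records by reviewer and asking whether each reviewer accepted at least once. The following sketch uses invented column names and toy data, since the Editorial Manager export is not public:

```python
import pandas as pd

# Toy request-level records; column names are assumptions for illustration.
requests = pd.DataFrame({
    "scholar_id": [101, 101, 102, 103, 103, 103],
    "accepted":   [False, True, True, False, False, False],
})

# Scholar-level view: did each scholar accept at least one request?
accepted_any = requests.groupby("scholar_id")["accepted"].any()
print(f"Scholars accepting at least once: {accepted_any.mean():.1%}")

# Distribution of requests per scholar (cf. 73.3% received only one).
print(requests.groupby("scholar_id").size().value_counts().sort_index())
```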

Of course, it is easier to choose “accept” or “decline” than to actually complete the review. Therefore, we examined the proportion of reviewers who completed their review. Table 2 indicates that the results are positive. The two right-hand columns provide data for all accepted review requests, including those for revised manuscripts. If only the requests for initial submissions are considered, the completion rate is about one percentage point lower, due to the higher completion rate for revised papers.

As the data in table 2 show, once reviewers accept, 77.6% complete and submit their assignment. In addition, at the time of the analysis, there was one outstanding review of a revised manuscript. The 22.3% who did not complete the review represent two groups. First, when two completed reviews already suggest that the editors should decline to publish the manuscript, the editors evaluate whether a decision can be made. If so, the responsible editor queries the remaining reviewers via e-mail about whether they want to complete the review or be relieved of the task. In many cases, reviewers are happy to have the editor move forward with a decision; in those cases, the responsible editor “unassigned” the remaining reviewers after obtaining their agreement.

Second, some reviewers unfortunately do not complete their reviews regardless of how many reminders they receive—whether generated by the Editorial Manager system or personal queries from the responsible editor. At some point, the editors make a decision using the submitted reviews or, if that is not possible, they invite additional reviewers. The latter option lengthens the time it takes to reach a decision. Fortunately, the data presented in table 2 show that in the largest proportion of cases, reviewers complete the reviews that they agreed to do.

Our data do not permit us to determine what percentage of the incomplete reviews occurred because the editor gave reviewers the choice to “opt out” and what percentage reflected simple nonresponsiveness. We suspect that a sizeable proportion reflects the former scenario. Overall, then, reviewers who accept an assignment to review tend to complete it.

Furthermore, women and men are equally likely to complete the review assignments they have accepted. Table 2 shows that the proportions of women and men who complete reviews are extremely similar when considering both initial and revised manuscripts. The same is true when only initial reviews are considered (not shown).

Table 2 Do Reviewers Complete the Reviews They Agree to Do?

Note: Chi-square: 2.313, df 2, p = 0.315.

THE IMPACT OF SUBFIELD AND METHODOLOGY

We also investigated whether scholars were more likely to accept review requests in some subfields versus others and whether the methodological classification of a manuscript made a difference. Both measures use the classifications provided in the Editorial Manager system and chosen by submitting authors. Those who are invited to review generally are specialists in the same subfield and have expertise regarding the methodologies used.


There are statistically significant differences in the propensity of scholars to respond positively to requests to review across subfield and methodology. Table 3 shows that the acceptance of requests to review is higher in subfields that scholars traditionally associate with the APSR (including race, ethnicity, and politics, which is perceived as closely affiliated with American politics) and lower for other subfields. The journal’s annual report (Ishiyama 2014) clearly shows a strong trend toward comparative politics, which currently accounts for about one third of all submissions, with additional gains in international relations. As perceptions change, it is possible that the propensity to accept requests to review also may shift.

Table 3 Subfield Classification and Acceptance of Request to Review

Note: Chi-square: 81.416, df 14, p = 0.000.

Additionally, table 4 shows that scholars respond differently to requests to review manuscripts using different methodologies. There is greater variability in the acceptance rate of requests to review across different methodological classifications than across subfields. Methodologically, almost two-thirds of review requests are for quantitative work, which may shape reviewers’ expectations. Just as reviewers are more likely to accept a request to review a manuscript in a subfield traditionally associated with the APSR, they also are more likely to accept requests to review papers that use methods traditionally associated with it.

Table 4 Methodological Classification and Acceptance of Request to Review

Note: Chi-square: 54.049, df 12, p = 0.000.

SELF-REPORTED REASONS FOR DECLINING TO REVIEW

We now discuss the other side of the issue: those who decline to review. Table 5 shows that 28.3% of the scholars who declined to review did not provide a reason. The remaining 71.7% did provide one; their reasons are categorized in table 5.

We define “reviewer fatigue” as statements indicating that scholars declined because they had other reviews to complete and/or could not take on an additional review (this category is in boldface italics in table 5). This definition seems to come closest to the notion that scholars feel “overwhelmed” by such requests (Lupia and Aldrich 2014), although we note that it may be rather narrow. In some cases, reviewers mentioned how many other requests they had received and/or accepted; this number varied. A few scholars commented that our request was the fifth or sixth they had recently received. In another case, a potential reviewer commented that he or she had accepted one other request and could not handle a second. The former reason was far more common than the latter, which suggests variability in what scholars consider a reasonable “review load.”

As is evident in table 5, reviewer fatigue is not the only reason that scholars decline to review. This reason accounts for 14.1% of the declines (or 19.7% of those who provided a reason). A larger proportion of scholars simply state that they are “too busy,” which is the response of 24.8% of those who declined overall (or 34.6% of those who provided a reason). We suspect that some proportion of those who only stated they are “too busy” was due to a case of reviewer fatigue. Therefore, we also completed the analysis in table 5 with the categories of “too many invitations” and “too busy” combined. The resulting combined category accounts for 38.9% of the decisions to decline a review request (i.e., 42.4% of women and 37.3% of men). The combined result probably overstates reviewer fatigue, whereas results for the more restrictive definition in table 5 likely understate it.
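The two percentage bases in this paragraph reconcile by simple division. A quick check, using only the percentages reported here:

```python
# Denominator check for the decline-reason percentages reported above.
no_reason = 0.283            # declines with no stated reason
gave_reason = 1 - no_reason  # 0.717

fatigue = 0.141    # "too many reviews," as a share of all declines
too_busy = 0.248   # "too busy," as a share of all declines

print(f"Fatigue among reason-givers:  {fatigue / gave_reason:.1%}")   # ~19.7%
print(f"Too busy among reason-givers: {too_busy / gave_reason:.1%}")  # ~34.6%
print(f"Combined share of declines:   {fatigue + too_busy:.1%}")      # 38.9%
```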

We suspect that the combined category overstates reviewer fatigue because in cases in which scholars provided additional details, they mentioned issues such as the need to prioritize their own scholarship due to impending tenure or promotion decisions, as well as teaching and related tasks (e.g., grading exams and papers). For those who provided no information beyond the fact that they were “too busy,” we simply do not know whether they had too many review requests or other preoccupying tasks. What we do know from these data is that many scholars face substantial workloads and competing demands on their time. This suggests that reviewer fatigue is certainly not the only reason why scholars decline requests to review.

The remaining categories all accounted for smaller proportions of the overall decision to decline to review. Some scholars declined because they had previously reviewed the paper for a different journal. Although this does not automatically disqualify reviewers who are confident that they can provide a fair and unbiased review, some scholars noted that they thought the author deserved to have the paper reviewed by someone else.

Other scholars declined to review due to a conflict of interest. Although editorial assistants attempt to identify these issues, it is not always easy to do. The honesty of scholars who decline for this reason is appropriate and appreciated.

Some commented that they could not complete the review within a reasonable time due to travel commitments. Because scholars did not always provide sufficiently detailed information, we were not able to distinguish reliably between professional and leisure travel. For this reason, we combined these two categories.

Furthermore, scholars occasionally declined to review because they had assumed administrative duties. One scholar was facing a steep learning curve after recently becoming department chair. Others face personal or family illness or are taking time to adjust to a growing family.

The “other” category contains comments that did not easily fit in any of the categories we developed. It included a scholar who reportedly was leaving academia, another who was retiring, and one who declined to review because the “abstract suggests a paper of little interest.”

Finally, we mention the scholars who declined to review because they are journal editors. It is possible that editors perceive a conflict of interest if they anticipate that a manuscript might be sent to them if it is rejected at the APSR (see footnote 1). The majority of editors who declined to review, however, did not report this as their reason. Instead, almost all referred to their workload as editors, usually citing the number of manuscripts they processed annually. We are aware that editors face substantial workloads; however, the loss of their expertise as reviewers is unfortunate (see footnote 2).

There are differences in the reasons given by women and men. Women were somewhat more likely to state that they were too busy, had too many requests, or were on professional or personal leave. Men were somewhat more likely to claim that they did not have sufficient expertise, had taken on administrative duties, served as journal editors, or had previously read the paper.

Table 5 Scholars’ Stated Reasons for Declining to Review

Notes:

Chi-square: 45.850, df 12, p = 0.000.

T-test: 2.003, df 1033, p = 0.045.

Taken together, the various reasons that scholars report when they decline to review suggest that they have busy professional and personal lives. Reviewer fatigue plays a role but is not the only reason to decline a request to review.

Some who declined to review provided suggestions for alternative reviewers, which is useful and welcome. Table 6 shows that those who provided a reason for declining were more likely to suggest one or more alternative reviewers. Even so, almost 60% of all who declined did not suggest alternative reviewers.

Table 6 Are Scholars Who Provide a Reason for Declining to Review More Likely to Suggest Alternative Reviewers?

Note: Chi-square: 25.976, df 1, p = 0.000.

CONCLUSION

We agree with Chetty et al. (2014, 186) that there is tremendous value in “studying the peer review process empirically.” Our analysis is based on data for one journal and one year. Variations exist among journals and across years in the rate at which scholars agree to review and then complete those reviews (Djupe 2015). Whereas scholars may prioritize requests from the most prestigious journals, it also is possible that they are more motivated to accept requests from field-specific journals in their own specialty area because they know the field and are more connected to those journals. This is an empirical question that we cannot answer with the available data, which are time-consuming to collect (Wilson 2014).

Our findings suggest that reviewer fatigue, as commonly understood, is not the only reason why scholars decline invitations to review. On the basis of the self-reported reasons, we estimated that between 14.1% and 38.9% of the reasons given are that scholars have too many requests. The remaining self-reported reasons show that scholars face many demands on their time.

Although we did not specifically code for it, many of those who declined indicated a willingness to review sometime in the future. Our experience is that these scholars often accept a subsequent request, which is noteworthy. Peer review is an uncompensated professional service, yet many scholars remain willing to give time and effort to this task.

Despite their willingness to review, however, there clearly is a limit to the “review load” that scholars can assume. Djupe (2015, 347, 349) noted that “being asked to review is a function of reputation” and recommended that journal editors search “beyond the usual suspects.” First-time reviewers are more likely to accept and to complete the task, making it important to include new PhDs and research-active scholars from a broader range of institutions. The current APSR editorial team systematically seeks to include first-time reviewers and to reduce the frequency of repeated requests to the same scholars. Various web-based search strategies—such as searching dissertation databases, recent conference programs, and recent publications on Google Scholar—help broaden the reviewer pool. Additionally, from the beginning of our editorial term in 2012, we used personal e-mails to query reviewers who were late with their reports (rather than relying solely on the automated messages generated by the online submission and review system). Reviewers react positively to a personal message, as Chetty et al. (2014) also suggest. Both strategies require an investment of time and effort but have important benefits: broadening the reviewer pool gives more scholars a voice, and communication with reviewers improves the efficiency of the review process.

We do not want to minimize the challenges that editors face in finding a sufficient number of appropriate reviewers for each submission they receive—that task can be daunting. Yet, our analysis of the self-reported reasons for declining to review suggests that tales of reviewer fatigue may be somewhat exaggerated.

Footnotes

1. We thank the anonymous reviewer who asked us to reflect on the reasons that journal editors decline invitations to review.

2. The APSR editors do not exempt themselves from completing reviews for other journals, and they appreciate other editors who act likewise.

REFERENCES

Breuning, Marijke, and Sanders, Kathryn. 2007. “Gender and Journal Authorship in Eight Prestigious Political Science Journals.” PS: Political Science and Politics 40 (2): 347–51.
Chetty, Raj, Saez, Emmanuel, and Sándor, László. 2014. “What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics.” Journal of Economic Perspectives 28 (3): 169–88.
Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science and Politics 48 (2): 346–51.
Hancock, Kathleen, Baum, Matthew, and Breuning, Marijke. 2013. “Women and Pre-Tenure Scholarly Productivity in International Studies: An Investigation into the Leaky Career Pipeline.” International Studies Perspectives 14 (4): 507–27.
Hesli, Vicki, and Lee, Jae Mook. 2011. “Faculty Research Productivity: Why Do Some of Our Colleagues Publish More than Others?” PS: Political Science and Politics 44 (2): 393–408.
Hesli, Vicki, Lee, Jae Mook, and Mitchell, Sara McLaughlin. 2012. “Predicting Rank Attainment in Political Science: What Else, Besides Publications, Affects Promotion?” PS: Political Science and Politics 45 (3): 475–92.
Ishiyama, John. 2014. “Annual Report of the Editors of the American Political Science Review, 2012–2013.” PS: Political Science and Politics 47 (2): 542–45.
Lupia, Arthur, and Aldrich, John H. 2014. “Improving Public Perceptions of Political Science’s Value: Report of the Task Force on Improving Public Perceptions of Political Science’s Value.” Washington, DC: American Political Science Association.
Maliniak, Daniel, Powers, Ryan M., and Walter, Barbara F. 2013. “The Gender Citation Gap in International Relations.” International Organization 67 (4): 889–922.
Miller, Beth, Pevehouse, Jon, Rogowski, Ron, Tingley, Dustin, and Wilson, Rick. 2013. “How to Be a Peer Reviewer: A Guide for Recent and Soon-to-Be PhDs.” PS: Political Science and Politics 46 (1): 120–23.
National Science Foundation, National Center for Science and Engineering Statistics (NCSES). 2014. Doctorate Recipients from U.S. Universities: 2012. Arlington, VA: NSF 14-305. Available at www.nsf.gov/statistics/sed/2012/data_table.cfm. Accessed on July 15, 2014.
Østby, Gudrun, Strand, Håvard, Nordås, Ragnhild, and Gleditsch, Nils Petter. 2013. “Gender Gap or Gender Bias in Peace Research? Publication Patterns and Citation Rates for Journal of Peace Research, 1983–2008.” International Studies Perspectives 14 (4): 493–506.
Sedowski, Leanne, and Brintnall, Michael. 2007. “Data Snapshot: The Proportion of Women in the Political Science Profession.” Available at http://apsanet.org/content_7589.cfm. Accessed on July 15, 2014.
Wilson, Rick. 2014. “Publishing, the Gender Gap and the American Journal of Political Science.” Available at https://rkwrice.wordpress.com/2014/09/22/publishing-the-gender-gap-and-the-american-journal-of-political-science. Accessed on May 5, 2015.