Today, scholarly journals face challenges that threaten their integrity and reputation more than in the past. Ethical problems, including the falsification or fabrication of research findings, "duplicate submissions," and plagiarism, threaten the integrity—and the perception of the integrity—of scholarly publication. Falsification of research findings in one discipline or another is reported with notable frequency in the mass media (Wade 2010). Reports of conflicts of interest among researchers are even more common. Other unethical behavior, such as publishing the same findings in multiple journals (duplicate publications), has raised concern within the scientific community at large (Errami et al. 2008; Stone 2003), and manuscript submissions of this sort are forbidden by all scholarly journals.
It is plausible that these problems occur with some frequency in political science. During the last few years, expectations that scholars will have published work before obtaining a faculty appointment have risen. Many institutions below the Carnegie Foundation for the Advancement of Teaching classification for research-designated universities also expect a body of published work before faculty can earn tenure; Rothgeb and Burger (2009) document this well for political science departments. The relatively long review process at many of our journals, discussed in more detail below, adds further pressure on scholars who need publications to earn new positions or tenure.
Under these pressures, some individuals could yield to the temptation to publish in unethical ways. Political scientists may only rarely face research circumstances that create conflicts of interest like those common in many physical and medical disciplines, and large monetary rewards are less likely to arise from our research publications. Yet many political scientists could be tempted to falsify data, commit plagiarism, fail to abide by human subjects policies, or publish the same work in multiple venues. Informal background discussions with selected political science journal editors, conducted for this article, confirmed that their journals receive ethically problematic papers for review from time to time. These limited interviews, however, cannot indicate how widespread the problem may be.
Political science journals may be particularly limited in their ability to detect and handle ethical problems with manuscripts. Some of the reasons are widely known, and the journal editors with whom we had informal discussions made many of them especially clear. Most political science journals have a single editor who receives only a modest editorial stipend, enjoys only limited release time from regular professorial duties, and has little or no professional staff to assist with journal-office review of possible ethics problems in submitted manuscripts. And, as is widely known, the number of submitted manuscripts at many journals has grown remarkably in recent years. For these reasons, most editors lament that they have little or no time for literal editing or for extensive investigation of manuscripts before they are sent out for peer review. Although a number of online or downloadable tools now exist that check manuscripts for plagiarism or for duplication of already published papers (Long et al. 2009), limited time for editorial work means that none of the editors with whom we talked used these resources. Thin managerial resources and limited systematic investigation of submitted works also plague many physical science journals (Marusic, Katavic, and Marusic 2007; Wager 2007).
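To make concrete what such screening tools do, the following toy sketch (our own illustration, not the software cited above) compares two short texts by the share of five-word "shingles" they have in common, one common way duplication detectors flag overlapping passages for human review. The example texts and function names are hypothetical.

# A toy illustration of shingle-based overlap screening; the texts and
# function names below are hypothetical, not the tools cited above.

def shingles(text, n=5):
    # Return the set of n-word sequences ("shingles") in the text.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_overlap(doc_a, doc_b):
    # Share of shingles the two documents have in common (0 to 1).
    a, b = shingles(doc_a), shingles(doc_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

submitted = ("We analyze turnout in midterm elections using a new "
             "panel survey of registered voters in three states.")
published = ("We analyze turnout in midterm elections using a new "
             "panel survey of likely voters in three states.")

# A high score only flags the pair for an editor to examine; it does not
# by itself establish plagiarism or duplicate publication.
print(round(jaccard_overlap(submitted, published), 2))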
A second defense against ethical problems, which Long et al. (2009, 1293) argue is the most important one, lies within the peer-review process itself. Reviewers occasionally detect problems such as plagiarism in manuscripts, so strong reviewer pools constitute a second check against ethical problems. Yet, as is widely known in the profession and was clear to the editors we interviewed, increases in the number of submitted manuscripts in recent years have strained reviewer pools at many journals. The increasingly limited availability of appropriate reviewer panels for many individual papers and the reluctance of many scholars to review multiple manuscripts over short periods are critical consequences of this problem.
Limited editorial office resources and reviewer pools might also make unethical author behavior more tempting at the same time that they compromise the rapid dissemination of knowledge to the professional community. The rapid promulgation of new research findings to the scientific community has always been a high professional priority. Recently, it has become even more valued, as reflected in the creation of new journals in many disciplines that promise rapid publication and of electronic databases such as the Social Science Research Network with the same stated goal. Online journals and advance online publication of accepted manuscripts at some journals also reflect this concern.
Yet the time from initial submission of a manuscript to publication is long for many papers at political science journals. Two rounds of review, as is common for manuscripts eventually accepted, can easily consume 12 to 18 months, not counting the time authors spend revising. Summary statistics on decision times from journals that release them, such as the American Journal of Political Science and the Journal of Politics, which post such data on their websites (at www.ajps.org/ and www.journalofpolitics.org/), and the American Political Science Review (Rogowski 2011), suggest at first glance a more rapid review process. Average decision times at those journals, however, include the 20% to 25% of manuscripts that are rejected by the editors and never sent out for review (and thus have literally, or nearly, zero decision times). Detailed statistics like those reported by the Journal of Politics, and comments from the editors we interviewed informally, further indicate that the tail of the distribution of decision times above the average is very long. Although decisions on many manuscripts are made expeditiously, most are under review for considerably longer periods. In addition, by our estimates and our discussions with editors, a fast time from acceptance to publication in print is today about nine months. Thus, one could conclude that many new research findings in our discipline are not reaching the scholarly community rapidly. As noted earlier, long manuscript processing times could tempt more scholars to "cut corners" in other ways, some of which might be unethical, to build a research record more quickly.
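A simple arithmetic sketch, using hypothetical counts rather than any journal's actual data, shows how desk rejections with near-zero decision times can pull a reported average well below what most refereed manuscripts experience:

# Hypothetical decision times (in days) for 100 submissions; these numbers
# are illustrative only and do not come from any journal's reports.
import statistics

desk_rejections = [7] * 25                                  # rejected without review
refereed = [60] * 30 + [90] * 25 + [150] * 15 + [240] * 5   # sent to reviewers

all_times = desk_rejections + refereed

print(statistics.mean(all_times))    # ~77 days: the headline average
print(statistics.mean(refereed))     # 100 days: average for refereed manuscripts
print(statistics.median(refereed))   # 90 days: the typical refereed manuscript

Even on these made-up numbers, the headline average falls weeks short of what a typical refereed manuscript experiences.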
Despite the potential seriousness of these concerns, we do not know the extent of such problems in the social sciences, much less in political science alone. Evidence from a meta-analysis of survey data from scholars in a wide variety of scientific and humanities disciplines suggests that research misconduct such as data falsification or fabrication may occur in the work of as many as 1 in 10,000 scientists (Fanelli 2009, 2). Further, Errami et al. (2008, 248) provide evidence that, at a minimum, about 1.35% of all published articles with citations in Medline are essentially duplicates of earlier published papers, and that about 5% of scholars admitted to having duplicate publications in a separate survey assessment. It is implausible that political science escapes these problems.
This article provides initial evidence about the scope of these problems in political science by replicating a study in the physical sciences by Wager et al. (2009). Wager et al. surveyed editors of medical and physical science journals and found that editors are concerned about publication ethics but report that ethical problems are rare at their journals. We therefore surveyed editors of political science journals about how frequently they perceive incidents of unethical conduct in publications, how confident they are in addressing problems of unethical publication, and whether managerial issues related to operating a journal may impinge on their ability to detect or respond to incidents of misconduct. Our results characterize levels of, and concern for, unethical behavior in our discipline and provide evidence comparable to that for a number of physical science disciplines.
METHODS
We used some of the questionnaire items in the Wager et al. survey of editors of physical science journals, but we tailored the instrument to add questions about a somewhat wider range of ethical concerns and about issues that editors of social science journals especially face. Fixed-answer surveys were sent by e-mail to the editors of 112 political and related social science journals. The sample included the 90 journals examined by Giles and Garand (2007), who ranked the most prominent journals in which political scientists might publish, based both on the availability of formal citation and reputational ranking data and on the recommendations of peer colleagues. We supplemented the Giles and Garand list to expand the international and subdisciplinary scope of the sample. Current e-mail addresses were taken from the websites of the individual journals.
We received usable replies from 49, or 44%, of the eligible journals, calculated according to the Response Rate 2 formula recommended by the American Association for Public Opinion Research (2011, 32–34, 44). This is an unusually high response rate both for comparable studies of which we are aware (e.g., Borkowski and Welsh 1998, 20; Wager et al. 2009, 349) and for e-mail surveys generally. The responses include five of the top six journals on Giles and Garand's reputational ranking and half of the top 20 journals on that ranking. A dummy variable for whether the editor of each journal in the full sample replied to the survey is effectively uncorrelated with both the ISI citation impact scores and Giles and Garand's measure of reputational quality. These correlations, and visual inspection of the data, indicate that our sample represents journals at all levels of quality on the two ranking variables equally well.
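For readers unfamiliar with these two checks, the sketch below shows (1) the AAPOR Response Rate 2 calculation, which counts complete and partial replies against all eligible or potentially eligible cases, and (2) a correlation between a 0/1 response dummy and a journal-ranking score. The disposition counts and ranking values are hypothetical stand-ins, not our survey data.

# A minimal sketch of the response-rate and representativeness checks;
# all counts and scores below are hypothetical, not the study's data.
import numpy as np

def aapor_rr2(complete, partial, refusal, noncontact, other, unknown):
    # AAPOR Response Rate 2: (I + P) / (I + P + R + NC + O + U)
    numerator = complete + partial
    return numerator / (numerator + refusal + noncontact + other + unknown)

# Hypothetical dispositions for 112 invited editors.
print(round(aapor_rr2(complete=47, partial=2, refusal=5,
                      noncontact=55, other=1, unknown=2), 2))   # 0.44

# Representativeness check: correlate a response dummy with a ranking score
# (a Pearson correlation with a 0/1 variable is the point-biserial correlation).
rng = np.random.default_rng(0)
responded = rng.integers(0, 2, size=112)     # 1 = editor replied
reputation = rng.normal(size=112)            # stand-in for a ranking measure
print(round(float(np.corrcoef(responded, reputation)[0, 1]), 2))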
In the survey we asked editors for their perceptions of the severity of ethical problems in the manuscript-review process, including the occurrence of falsified data, plagiarism, reviewer conflicts of interest (such as delaying reviews out of self-interest or rejecting papers for unprofessional reasons), duplicate submissions to multiple venues, and breaches of the confidentiality of the peer-review process. Editors were also asked whether each problem was increasing or decreasing over time and how confident they were in their ability to handle each of these types of problems. A few other questions, explained below, asked for descriptions of journal policies for the documentation of ethical practices and for data availability.
We also asked whether editors were concerned about the adequacy of the reviewer pools available to assess the papers submitted to their journals and whether they judged the typical time from submission to publication of accepted papers to be sufficiently rapid. Because of its topical importance in the profession and in university tenure review processes, we also asked editors about the value of reputational ranking measures for journals and of citation ranking measures like those examined by Giles and Garand (2007) and various other scholars.
RESULTS
Table 1 reports the primary results from our questions to the editors about how frequently they face specific ethical and related problems. The first five items in table 1 pertain to specific forms of possible unethical behavior, and the responses indicate that all five of these problems are generally rare. This is especially true of manuscripts with falsified data. About 80% of the respondents also conclude that plagiarism, reviewer misconduct of either of the two kinds about which we asked, and duplicate submissions are rare, and only modest percentages of the editors conclude otherwise.
(Note to table 1: the number of respondents is 49.)
The next two items in table 1 concern matters that could compromise the blind-review process and, more generally, its adequacy: how often the confidentiality of peer review might be breached and whether reviewer pools are adequate. We raised the first of these concerns in light of how frequently scholars post working papers on their websites and because so many journal submissions have previously been given as conference papers; both can be discovered through web searches. Almost 60% of our respondents deemed this confidentiality problem to be notable. Further, two-thirds of our respondents report that maintaining an adequate reviewer pool is a serious problem.
Summary results of our survey questions for each of the preceding five specific ethical problems are listed at the top of table 1. The majority of respondents who did not select the "don't know" response see all five problems as either in decline or rising at no more than a constant rate. Note, however, that 58% of our respondents thought that the problem of breaching the confidentiality of authors' identities was increasing. Although substantial majorities of the editors reported notable confidence in their ability to address ethical problems such as falsification of data, almost half of the editors who gave substantive responses (i.e., other than "don't know") to our question on confidence in handling manuscript confidentiality expressed only modest confidence in their ability to handle this problem.
Journal editors can implement a variety of policies that might mitigate some of the preceding ethical and related management problems. For example, they can require documentation, as appropriate to specific papers, that authors have followed human subjects and other ethical guidelines in their research, request that the roles of each co-author on a paper be stated, and seek, as appropriate to particular papers, replication data sets that other scholars might analyze. Editors of journals in the medical and physical sciences, such as Nature and associated Nature journals, commonly seek many of these assurances (Nature Publishing Group 2011).
Our survey queried editors about three requirements that appear with some frequency at political and social science journals: documentation that relevant human subjects research requirements were met, reporting and interpretation of survey research results in accord with the recommendations of the American Association for Public Opinion Research (AAPOR), and provision of a replication data set. Strikingly, while 17 of the 49 journal editors require replication data sets, only six require documentation of human subjects protection and only five require conformance to the AAPOR recommendations.
Finally, we were curious about editors' opinions of the ways that the prestige of scholarly journals is frequently and systematically assessed, which have led to a host of publications on the topic, such as the one by Giles and Garand (2007). In general, the editors who replied to our survey are not impressed with these measures. More than 80% of our respondents reported that reputational rankings of journals and "impact scores" derived from citations of published articles were either not at all valuable or of only modest value.
DISCUSSION
Responses to our survey suggest that editors of political and related social science journals do not consider problems such as falsification of data, plagiarism, reviewer conflicts of interest, reviewer misconduct, or duplicate submissions to be significant for their journals. Most respondents reported that these problems were rare or infrequent, and most reported feeling confident in their ability to handle them. These findings are similar to those of Wager et al. (2009, 351–52) from their survey of physical science journal editors.
The editors responding to our survey overwhelmingly indicated that maintaining manuscript confidentiality is a problem of moderate to significant severity. Confidentiality may be compromised by the availability of identifiable previous versions of papers in conference proceedings or on personal web pages. Although this issue may not be as dramatic as plagiarism in published work, it suggests that the integrity of the blind peer-review process may often be compromised. Whether reviewers seek out the identities of authors whose papers they have been asked to review blind raises, for some observers, questions about reviewers' integrity, the efficacy of disciplinary training in the norms of peer review, and the need for clearer instruction from editors on the standards of blind review. Alternatively, some members of the profession have argued recently that blind peer review cannot be sustained today, because authors' identities can so easily be learned by reviewers, and thus that the policy should be abandoned. In 2011, the editors of Political Analysis announced that they were abandoning double-blind review with the journal's next volume, as did the American Economic Association for its journals. (Reviewers will learn authors' names, but not vice versa.) These concerns and the different points of view about them merit more discussion in our profession, but that discussion is beyond the scope of this article.
Further indications that the peer-review process is troubled appear in other data from our survey. In particular, we find that about two-thirds of editors believe that the peer-reviewer pools are inadequate for the needs of their journals. Yet adequate reviewer pools and competent reviews are essential for journals to ensure both the intellectual quality and the ethical integrity of the work they publish (Resnik, Shamoo, and Krimsky 2006).
The survey results also suggest one more general consideration that merits discussion. It is encouraging that, according to our respondents, the most egregious forms of misconduct by authors and reviewers are rare in political science. The rarity of plagiarism and falsification of data could reflect the generally successful inculcation of professional values in the members of our profession.
Our research, however, could be read to support a different conclusion: we may have tapped into a difference between our discipline and the clinical or physical sciences. Replication of published results and the use of secondary data are rare, and hence the possibilities for identifying faulty research are fewer. Other mechanisms that might deter fabrication of research findings, such as the requirement that scholars archive the data from their published studies, are used by so few journals as to cast doubt on their role in mitigating incentives for research misconduct.