
Editors' Perceptions of Ethical and Managerial Problems in Political Science Journals

Published online by Cambridge University Press:  27 September 2012

Sara R. Jordan, University of Hong Kong
Kim Quaile Hill, Texas A & M University

Abstract

Evidence from medical and physical science journals suggests that problems of authorship ethics and journal management bedevil the editors of those journals. Anecdotal evidence suggests that similar problems arise in political science, but their extent within the discipline is not well established. Here we report the results of a survey of political science journal editors' perceptions of ethical and managerial issues associated with their journals. We find that, unlike in the clinical and natural sciences, ethical publication problems are of low concern among our sample, and editors report high levels of confidence in their ability to address them. Managerial problems, such as the adequacy of reviewer pools, are of greater concern to our sample.

Type: The Profession
Copyright © American Political Science Association 2012

Today, scholarly journals face challenges that present a greater threat to their integrity and reputation than in the past. Ethical problems, including the falsification or fabrication of research findings, "duplicate submissions," and plagiarism, threaten the integrity, and the perception of the integrity, of scholarly publication. Falsification of research findings in one discipline or another is reported with notable frequency in the mass media (Wade 2010). Reports of conflicts of interest by researchers are even more common. Other unethical behavior, such as publishing the same findings in multiple journals (duplicate publication), has raised concern within the scientific community at large (Errami et al. 2008; Stone 2003), and manuscript submissions of this sort are forbidden by all scholarly journals.

It is plausible that these problems occur with some frequency in political science. During the last few years, expectations that scholars have published work before obtaining a faculty appointment have risen. Many institutions below the Carnegie Foundation for the Advancement of Teaching classification for research-designated universities also expect a body of published work for faculty to earn tenure; Rothgeb and Burger (2009) document this well for political science departments. The relatively long review process at many of our journals, discussed in more detail below, adds further pressure on scholars seeking new positions or tenure to earn publications.

Under these pressures, some individuals could yield to the temptation to publish in unethical ways. Political scientists may only rarely face research circumstances that create conflicts of interest like those in many physical and medical disciplines, and remarkable monetary rewards are less likely to arise from our research publications. Yet many political scientists could be tempted to falsify data, commit plagiarism, fail to abide by human subjects policies, or publish the same work in multiple venues. Informal background discussions with selected political science journal editors confirmed that their journals receive ethically problematic papers from time to time. These limited interviews, however, cannot indicate how widespread the problem may be.

Political science journals could be at particular risk in their ability to detect and handle ethical problems with manuscripts. Some of the reasons are widely known, and the journal editors with whom we had informal discussions made many of them especially clear. Most political science journals have single editors who are paid only modest editorial stipends, enjoy limited release time from their regular professorial duties, and have little or no professional staff to assist with journal-office review of possible ethics problems in submitted manuscripts. And, as is widely known, the number of submitted manuscripts at many journals has grown remarkably in recent years. For these reasons, most editors lament that they have little or no time for literal editing or for extensive investigation of manuscripts before they are sent out for peer review. Although a number of online or downloadable tools now exist that test manuscripts for plagiarism or for duplication with already published papers (Long et al. 2009), limited time for editorial work means that none of the editors with whom we talked used these resources. Thin managerial resources and limited systematic investigation of submitted work also plague many physical science journals (Marusic, Katavic, and Marusic 2007; Wager 2007).
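To make the principle behind such screening tools concrete, the following is a minimal sketch of duplication screening based on word n-gram overlap. It is only an illustration under simplifying assumptions, not the method used by the tools catalogued by Long et al. (2009), which match manuscripts against large bibliographic databases; the function names, sample texts, and threshold here are hypothetical.

```python
# Minimal sketch of automated duplication screening: compare a submitted
# manuscript against a previously published text using word 5-gram overlap
# (Jaccard similarity). Real tools match against large databases; this only
# illustrates the idea, and the 0.30 threshold is arbitrary.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, published: str, n: int = 5) -> float:
    """Jaccard similarity of word n-grams; values near 1.0 suggest duplication."""
    a, b = ngrams(submission, n), ngrams(published, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    new_ms = "We find that reviewer pools are strained at many journals today"
    old_ms = "We find that reviewer pools are strained at many journals today"
    score = overlap_score(new_ms, old_ms)
    if score > 0.30:  # hypothetical flagging threshold
        print(f"Flag for editor review (overlap = {score:.2f})")
```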

A second defense against ethical problems, which Long et al. (2009, 1293) argue is the most important one, lies within the peer-review process. Reviewers occasionally detect problems such as plagiarism in manuscripts, so strong reviewer pools constitute a second check against ethical problems. Yet, as is widely known in the profession and as the editors we interviewed made clear, increases in the number of submitted manuscripts in recent years have strained reviewer pools at many journals. The increasingly limited availability of appropriate reviewer panels for many individual papers and the reluctance of many scholars to review multiple manuscripts over short periods are critical consequences of this problem.

Limited editorial office resources and reviewer pools might also make unethical author behavior more tempting at the same time that they compromise rapid dissemination of knowledge to the professional community. The rapid promulgation of new research findings to the scientific community has always been a high professional priority. Recently, it has become increasingly valued, with the creation of new journals in many disciplines that promise rapid publication and of such electronic databases as the Social Science Research Network with the same stated goal. Online journals and advance online issuance of manuscripts accepted for publication at some journals also reflect this concern.

Yet the time from initial submission of a manuscript to publication is long for many papers at political science journals. Two rounds of review, as is common for manuscripts eventually accepted, can easily consume 12 to 18 months, not counting the time authors need to revise. Summary statistics on decision times from the journals that issue them, such as the American Journal of Political Science and the Journal of Politics, which post such data on their websites (at www.journalofpolitics.org/ and www.ajps.org/), and the American Political Science Review (Rogowski 2011), suggest a more rapid review process at first glance. Average decision times at those journals, however, include the 20% to 25% of manuscripts that are rejected by the editors and never sent out for review (and thus have decision times at or close to zero). Detailed statistics like those reported by the Journal of Politics, as well as comments from the editors we interviewed informally, further indicate that the tail of the distribution for manuscripts with above-average decision times is very long. Although decisions on many manuscripts are made expeditiously, most are under review for considerably longer periods. In addition, by our estimates and discussions with editors, a fast time from acceptance to publication in print is today about nine months. Thus, one could conclude that many new research findings in our discipline are not rapidly reaching the scholarly community. As noted earlier, long manuscript processing times could mean that more scholars are tempted to "cut corners" in other ways, some of which might be unethical, to build a research record more quickly.
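A simple numerical illustration shows why averages of this kind can mislead: when desk rejections with near-zero decision times enter the mean, the overall figure understates how long reviewed manuscripts actually wait. The sketch below uses entirely hypothetical numbers, not data from any journal.

```python
# Hypothetical illustration of how desk rejections pull down a journal's
# average decision time while the reviewed manuscripts face a long right tail.

desk_rejects = [3] * 25                          # ~25% of 100 submissions, decided in ~3 days
reviewed = [90] * 60 + [180] * 10 + [300] * 5    # the remaining 75, with a long tail

all_decisions = desk_rejects + reviewed
overall_mean = sum(all_decisions) / len(all_decisions)
reviewed_mean = sum(reviewed) / len(reviewed)

print(f"Mean over all submissions:      {overall_mean:.0f} days")  # about 88 days
print(f"Mean over reviewed manuscripts: {reviewed_mean:.0f} days")  # about 116 days
```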

Despite the potential seriousness of these concerns, we do not know their extent in the social sciences, much less in political science alone. A meta-analysis of survey data from scholars in a wide variety of scientific and humanities disciplines suggests that research misconduct such as data falsification or fabrication may occur in the work of as many as 1 in 10,000 scientists (Fanelli 2009, 2). Further, Errami et al. (2008, 248) provide evidence that at a minimum about 1.35% of all published articles with citations in Medline are essentially duplicates of earlier published papers, and that about 5% of scholars admitted to having duplicate publications in a separate survey assessment. It is implausible that political science would escape these problems.

This article provides initial evidence about the scope of these problems in political science by replicating a study of physical science journals by Wager et al. (2009). Wager et al. surveyed editors of medical and physical science journals and found that those editors are concerned about publication ethics but report that ethical problems are rare at their journals. We therefore surveyed editors of political science journals about how frequently they perceive incidents of unethical conduct in publications, how confident they are in addressing such problems, and whether managerial issues related to operating a journal may impinge on their ability to detect or respond to misconduct. Our results characterize levels of, and concern for, unethical behavior in our discipline and provide evidence comparable to that for a number of physical science disciplines.

METHODS

We used some of the questionnaire items from the Wager et al. survey of physical science journal editors, but we tailored the instrument to add questions about a somewhat wider range of ethical concerns and about issues that editors of social science journals especially face. Fixed-answer surveys were sent by e-mail to the editors of 112 political and related social science journals (see footnote 1). The sample included the 90 journals examined by Giles and Garand (2007), who constructed a ranking of the most prominent journals in which political scientists might publish, based both on the availability of formal citation and reputational ranking data and on the recommendations of peer colleagues. We supplemented the Giles and Garand list to expand the international and subdisciplinary scope of the sample. Current e-mail addresses were taken from the websites of the individual journals.

We received usable replies from 49 journals, or 44% of the eligible journals according to the Response Rate 2 formula recommended by the American Association for Public Opinion Research (2011, 32–34, 44). This is an unusually high response rate for any comparable study of which we are aware (e.g., Borkowski and Welsh 1998, 20; Wager et al. 2009, 349) and for e-mail surveys generally. The responses also include five of the top six journals on Giles and Garand's measure of reputational ranking, and half of the top 20 journals on that ranking. A dummy variable for whether the editor of each journal in the full sample replied to the survey is effectively uncorrelated with both ISI citation impact scores and Giles and Garand's measure of reputational quality. These correlations, together with visual inspection of the data, indicate that our sample represents journals at all levels of quality on the two ranking variables equally well.
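For readers who want to reproduce this kind of sample check, the sketch below shows the two calculations in simplified form: a raw response rate (the full AAPOR Response Rate 2 formula also accounts for partial responses and estimated eligibility, which is omitted here) and the correlation between a 0/1 reply indicator and journal-quality measures. The quality scores and variable names are hypothetical placeholders; only the counts of 49 usable replies out of 112 journals come from the study.

```python
# Simplified sketch of the sample checks: response rate and the correlation
# between a reply indicator and (placeholder) journal-quality measures.
import numpy as np

usable_replies, eligible_journals = 49, 112
response_rate = usable_replies / eligible_journals
print(f"Response rate: {response_rate:.0%}")   # about 44%

rng = np.random.default_rng(0)
replied = np.zeros(eligible_journals, dtype=int)
replied[:usable_replies] = 1
rng.shuffle(replied)

impact_score = rng.normal(1.0, 0.5, eligible_journals)  # placeholder ISI-style scores
reputation = rng.normal(50, 15, eligible_journals)      # placeholder reputational ranks

# Point-biserial correlation is Pearson's r computed with a 0/1 variable.
r_impact = np.corrcoef(replied, impact_score)[0, 1]
r_reputation = np.corrcoef(replied, reputation)[0, 1]
print(f"r(replied, impact)     = {r_impact:.2f}")
print(f"r(replied, reputation) = {r_reputation:.2f}")
```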

In the survey we asked editors for their perceptions of the severity of ethical problems in the manuscript-review process, including the occurrence of falsified data, plagiarism, reviewer conflicts of interest (such as delaying reviews out of self-interest or rejecting papers for unprofessional reasons), duplicate submissions to multiple venues, and breaches of the confidentiality of the peer-review process. The editors were also asked whether each problem was increasing or decreasing over time and how confident they were in their ability to handle each type of problem. A few other questions, explained below, asked about journal policies for the documentation of ethical practices and for data availability.

We also asked about the editors' concerns for the adequacy of the reviewer pools available to assess the papers submitted to their journals and whether they judged the typical time from submission to publication of accepted papers to be sufficiently rapid. Because of its topical importance in the profession and in university tenure review processes, we also asked editors about the value of reputational ranking measures for journals and of citation ranking measures like those examined by Giles and Garand (2007) and various other scholars.

RESULTS

Table 1 reports the primary results from our questions to the editors about how frequently they face specific ethical and related problems. The first five items in the table pertain to specific forms of possible unethical behavior, and the responses indicate that all five of these problems are generally rare. This is especially true for manuscripts with falsified data. About 80% of the respondents also conclude that plagiarism, reviewer misconduct of either of the two kinds about which we asked, and duplicate submissions are rare, and only modest percentages of the editors conclude otherwise.

Table 1. Editors' Perceptions of the Severity of the Ethical and Related Problems at Their Journals (n of respondents = 49)

The next two items in table 1 concern matters that could compromise the blind-review process and, in general, the adequacy of that process—or, more specifically, how often the confidentiality of the peer-review process might be breached and the adequacy of reviewer pools. We specifically raised the first of these two concerns in light of how frequently scholars post working papers on their websites and because so many journal submissions have been given as conference papers—both of which can be discovered by searches on the World Wide Web. Almost 60% of our respondents deemed this problem of confidentiality to be notable. Further, two-thirds of our respondents report that maintaining an adequate reviewer pool is also a serious problem.

Summary results of our survey questions for each of the preceding five specific ethical problems are listed at the top of table 1. The majority of respondents who did not select the "don't know" response see all five problems as either declining or increasing only at a constant rate. Note, however, that 58% of our respondents thought that the problem of breaching the confidentiality of authors' identities was increasing. Although substantial majorities of the editors reported notable confidence in their ability to address ethical problems such as falsification of data, almost half of the editors who gave substantive responses (i.e., other than "don't know") to our question on confidence in handling manuscript confidentiality expressed only modest confidence in their ability to handle this problem.

Journal editors can implement a variety of policies that might mitigate some of the preceding ethical and related management problems. For example, they can require documentation, as appropriate to specific papers, that authors have followed human subjects and other ethical guidelines in their research, request that the roles of each co-author on a paper be stated, and seek, as appropriate to particular papers, replication data sets that other scholars might analyze. Editors of journals in the medical and physical sciences, such as Nature and associated Nature journals, commonly seek many of these assurances (Nature Publishing Group 2011).

Our survey queried editors about three requirements that appear frequently in political and social science journals: documentation of compliance with relevant human subjects research requirements, reporting and interpretation of survey research results in accord with the recommendations of the American Association for Public Opinion Research (AAPOR), and provision of a replication data set. Strikingly, while 17 of the 49 journal editors require replication data sets, only six require documentation of human subjects protection and only five require conformance to AAPOR recommendations.

Finally, we were curious about editors' opinions of the ways that the prestige of scholarly journals is frequently and systematically assessed, which have generated a host of publications such as that by Giles and Garand (2007). In general, the editors who replied to our survey are not impressed with these measures. More than 80% of our respondents reported that reputational rankings of journals and "impact scores" derived from citations of published articles were either not at all valuable or of only modest value.

DISCUSSION

Responses to our survey suggest that editors of political and related social science journals do not consider problems such as falsification of data, plagiarism, reviewer conflicts of interest, reviewer misconduct, or duplicate submissions to be significant for their journals. Most respondents reported that these problems were rare or infrequent. Likewise, most respondents reported feeling confident in their ability to handle these problems. These findings are similar to those of Wager et al. (2009, 351–52) from their survey of physical science journal editors.

The editors responding to our survey overwhelmingly indicated, however, that maintaining manuscript confidentiality is a problem of moderate to significant severity. Confidentiality can be compromised by the availability of identifiable previous versions of papers in conference proceedings or on personal web pages. Although this issue may not be as dramatic as plagiarism in published work, it suggests that the integrity of the blind peer-review process may often be compromised. Whether reviewers seek out the identities of authors whose papers they are asked to review blind raises, for some observers, questions about reviewers' integrity, the efficacy of disciplinary training in the norms of peer review, and the need for clearer instruction by editors on the standards of blind review. Alternatively, some members of the profession have argued recently that blind peer review cannot be sustained today because reviewers can so easily learn many authors' identities, and thus the policy should be abandoned. In 2011 the editors of Political Analysis announced that they were abandoning double-blind review beginning with the journal's next volume, as did the American Economic Association for its journals (reviewers will learn authors' names, but not vice versa). These concerns and the different points of view about them merit more discussion in our profession, but that discussion is beyond the scope of this article.

Further indications that the peer-review process is troubled appear in other data from our survey. In particular, we find that about two-thirds of editors believe that the peer-reviewer pools are inadequate for the needs of their journals. Yet adequate reviewer pools and competent reviews are essential for journals to ensure both the intellectual quality and the ethical integrity of the work they publish (Resnik, Shamoo, and Krimsky 2006).

The survey results also suggest a more general consideration that merits discussion. It is encouraging that, according to our respondents, the most egregious forms of misconduct by authors and reviewers are rare in political science. The rarity of plagiarism and falsification of data could reflect the generally successful inculcation of professional values in the members of our profession.

Our research, however, could be read to support a different conclusion: we may have tapped into a difference between our discipline and the clinical or physical sciences. Replication of published results and the use of secondary data are rare, and hence the possibilities for identifying faulty research are fewer. Other mechanisms that might deter fabrication of research findings, such as the requirement that scholars archive data from their published studies, are used by so few journals as to cast doubt on their role as deterrents to research misconduct.

Footnotes

1 This study was approved by the Institutional Review Board (IRB) at Texas A&M University with a waiver of written informed consent, granted on the grounds that obtaining such consent in an e-mailed survey was impractical. Participants were provided with an information sheet detailing the procedures of the survey and their rights as participants.

References

American Association for Public Opinion Research. 2011. Standard Definitions. Deerfield, IL. Available at http://www.aapor.org/Home/htm, accessed March 8, 2011.
Borkowski, S. C., and M. J. Welsh. 1998. "Ethics and the Accounting Publishing Process: Author, Reviewer, and Editor Issues." Journal of Business Ethics 17: 1785–1803.
Errami, M., J. M. Hicks, W. Fisher, D. Trusty, J. D. Wren, T. C. Long, and H. R. Garner. 2008. "Déjà Vu: A Study of Duplicate Citations in Medline." Bioinformatics 24: 243–49.
Fanelli, D. 2009. "How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data." PLoS One 4: e5738.
Giles, M. W., and J. C. Garand. 2007. "Ranking Political Science Journals: Reputational and Citational Approaches." PS: Political Science & Politics 40 (4): 741–51.
Long, Tara C., Mounir Errami, Angela C. George, Zhaohui Sun, and Harold R. Garner. 2009. "Responding to Possible Plagiarism." Science 323 (March 6): 1293–94.
Marusic, A., V. Katavic, and M. Marusic. 2007. "Role of Editors and Journals in Detecting and Preventing Scientific Misconduct: Strengths, Weaknesses, Opportunities, and Threats." Medicine and Law 26: 545–66.
Nature Publishing Group. 2011. "Editorial Policies." Available at http://www.nature.com/authors/policies/index.html, accessed March 15, 2011.
Resnik, D. B., A. Shamoo, and S. Krimsky. 2006. "Fraudulent Human Embryonic Stem Cell Research in South Korea: Lessons Learned." Accountability in Research 13: 101–09.
Rogowski, Ronald. 2011. "Report of the Editors of the American Political Science Review, 2009–2010." PS: Political Science & Politics 44 (2): 447–49.
Rothgeb, John M., and Betsy Burger. 2009. "Tenure Standards in Political Science Departments: Results from a Survey of Department Chairs." PS: Political Science & Politics 42 (3): 513–19.
Stone, W. R. 2003. "Plagiarism, Duplicate Publication, and Duplicate Submission: They Are All Wrong!" IEEE Antennas and Propagation Magazine 45: 47–49.
Wade, N. 2010. "Harvard Finds Scientist Guilty of Misconduct." New York Times, August 20. http://www.nytimes.com/2010/08/21/education/21harvard.html.
Wager, E. 2007. "What Do Journal Editors Do When They Suspect Research Misconduct?" Medicine and Law 26: 535–44.
Wager, E., S. Fiack, C. Graf, A. Robinson, and I. Rowlands. 2009. "Science Journal Editors' Views on Publication Ethics: Results of an International Survey." Journal of Medical Ethics 35: 348–53.