
Editors’ Corner

Published online by Cambridge University Press:  10 October 2017

Phillip Ardoin, Appalachian State University
Paul Gronke, Reed College
Micheal Giles, Emory University
James Garand, Louisiana State University

Copyright © American Political Science Association 2017

We conclude our 50th anniversary celebration for PS: Political Science & Politics by highlighting our PS all-star team—those authors and articles that have been cited the most over the five decades of PS.

The table of articles (table 1) provides an interesting look at the evolution of the journal over the decades. As we noted in our first Editors’ Corner for this volume, PS did not really start to publish research articles until the late 1970s. This is evident from the most-cited articles of its first decade (1966–1976), all of which deal, in one way or another, with the political science profession. It is notable that, already in its first decade, PS was an outlet for discussion of the status of women in the profession (see the listed articles by Converse, Schuck, and Jacquette). This topic remains a point of discussion and concern, as reflected in our previous issue, which featured articles by the #womenalsoknowstuff team and the Shames and Wise article on gender, diversity, and methods.

Table 1 Top 10 Most Cited Articles from PS: Political Science & Politics

By the second decade (1977–1986), research articles had moved to the fore in terms of citations. Frankovic’s article, “Sex and Politics: New Alignments, Old Issues,” is, as its title suggests, an important early piece in the gender and politics field, pointing out that, at the time at least, the political behavior literature had paid virtually no attention to gender differences in political behavior. Research into political science as a profession remained highly cited, including articles on how to rank graduate programs and other articles critiquing those rankings. There were no USA Today or Carnegie classifications to compete with our own discipline-centric evaluations.

By its third decade, PS had solidly established itself not just as an outlet for news of the discipline and articles about the discipline, but also as an attractive venue for important articles by political scientists about current issues of politics and public policy. Far and away the most widely cited article of the decade, and indeed of the entire 50-year period, is Robert Putnam’s well-known “Tuning In, Tuning Out: The Strange Disappearance of Social Capital in America.” Younger scholars who are familiar with Putnam’s book Bowling Alone and the concept of social capital may not recall that this research program actually started with a project comparing Northern and Southern Italy, and that the first article dealing with the United States appeared in the Journal of Democracy. Putnam recalls the experience well:

In late 1994 I published in an obscure journal an article with the puzzling title of “Bowling Alone.” A few months later I was invited to deliver the inaugural Ithiel de Sola Pool Lecture at the APSA convention, and I decided to use that occasion to begin working through the question of what might explain the decline in social capital and civic engagement that had been the focus of my earlier article. It would take another five years to fully solve the mystery (to my own satisfaction, at least) in the book Bowling Alone, but it turned out that the prime suspect I had fingered in the PS article was in fact guilty.

In retrospect I’m delighted that the article was published in PS, not least because it got a lot of attention both inside and outside the academy. However, that was not, strictly speaking, something I “decided,” because (at least according to my no-longer-infallible memory) it was then expected that such APSA Lectures would be published in PS. I do recall being entirely pleased with the PS editorial process.

We are also extremely pleased that Putnam decided to publish in PS, even if the decision apparently was not entirely his own!

A number of other articles from the third decade have withstood the test of time. PS is where Paul Sabatier first published his call for “better theories of the policy process,” which resulted in his seminal 1999 edited volume. PS first published Gary King’s 1995 call for “replication, replication” in political and social science research. Gabriel Almond’s short discussion “Separate Tables: Schools and Sects in Political Science,” which in many ways anticipated the eventual “perestroika” movement in the discipline, also appeared first in PS, in 1988.

Notable in the fourth decade of PS is the appearance of two important guides to methodology: the articles by Aberbach and Rockman and by Berry on elite interviewing. This is a case in which PS acted as an outlet for scholars to promote an important but often poorly understood and sometimes marginalized methodological approach.

The most cited article in our fifth and most recent decade of publication, David Collier’s guide to process tracing, was directed to PS for the same reason. As David wrote in response to an e-mail:

I wanted to publish this with PS in part because—over the decades—it has brought out so many substantive articles, as well as disciplinary debates, of great importance to political scientists. In addition, it seemed like a good way to reach colleagues who might be looking for this kind of material to use in teaching. This seemed to be an excellent combination.

We couldn’t state better or more succinctly why we enjoy our service as coeditors of this journal.

Finally, we want to close by returning to the theme of research on political science as a profession. The runaway citation leader of the journal’s first decade is the Micheal Giles and Gerald Wright article on political science journal evaluations. When reviewing this list, we couldn’t help but notice that a 2007 article on roughly the same topic, this time by James Garand and Micheal Giles, is among the most cited of the last decade, 2007–2016. We contacted Micheal and Jim to ask them about this research and why they chose to publish in PS. We have included their response as a short “Reflections” piece along with our Editors’ Corner. It once again highlights the importance of PS as an outlet for scientific research by political scientists about political science, and how this research can help us assess and evaluate ourselves more carefully and accurately.

REFLECTIONS: OUR RESEARCH ON POLITICAL SCIENCE JOURNAL RANKINGS AND WHY WE CHOSE TO PUBLISH IN PS

The Origin Story

Micheal Giles:

The 1975 article I coauthored with Gerald C. Wright (Giles and Wright 1975) had its origin in a conversation we had with Robert Huckshorn, then dean of social science at Florida Atlantic University. Huckshorn complained that the economics department was asserting that publications in regional journals of banking were equivalent to the American Political Science Review, the American Journal of Political Science (then the Midwest Journal of Political Science), and The Journal of Politics for purposes of evaluation for promotion and salary increases.

Jerry and I proposed to provide an empirical grounding for the rankings of political science journals by conducting a mail survey of the discipline. The dean provided a small grant to cover our costs. The results of our survey, along with an existing survey-based ranking for sociology journals, forced the economics department to adopt a more realistic ranking of relevant journals.

In the flush of our victory over the economists, Jerry and I boldly submitted a manuscript reporting the results to PS. It was desk rejected, in part due to a “lack of fit” with the journal and in part due to the editor’s concern that a journal ranking would be controversial. Huckshorn lobbied the editor on our behalf (obviously the review process at PS was not as formal at that time) and the editor agreed to publish a shorter version of the paper.

It was, we believe, the first survey-based ranking of political science journals. It quickly became a commonly employed reference in scholarly evaluation within the profession. In response to requests from colleagues for an update of the original ranking, I coauthored with Francie Mizell and David Paterson a second survey-based ranking of journals in the discipline. This was also published in PS, in 1989, under the title “Political Scientists’ Journal Evaluations Revisited.”

The Extension Story

James Garand:

It was at this point that I came into the picture. I had been a close reader of the original Giles and Wright article and the later replication published in PS. The idea of conducting surveys of political scientists to ask them to evaluate scholarly journals in the discipline made a great deal of sense to me. However, one of the things that struck me about both the 1975 paper and the 1989 replication was that some of the journal rankings reported in these works seemed to deviate considerably from what I understood to be the pecking order of journals, admittedly based on informal conversations with colleagues across a range of subfields. Some journals received very high evaluations (and hence were ranked very highly) based on the responses of a small group of subfield specialists. There was also the oddity that flagship journals from related social science fields (e.g., economics, sociology) outranked some of the leading political science journals.

Micheal and his colleagues reported not just the average evaluation score for each journal but also the percentage of respondents who were familiar enough with a given journal to provide an evaluation. My thought at the time was that the scholarly impact of a journal is a function not only of how it is evaluated by those who read it, but also of its reach throughout the political science discipline. This is a distinction with which some will disagree, but in my view it is not unreasonable to suggest that a journal’s impact is a function of both the quality of the scholarly work it publishes and how widely it is read within the discipline.

I developed an alternative measure of journal impact, one that combined both journal evaluations and the proportion of respondents who said they were familiar with these journals. The alternative ranking of journals had a considerable amount of face validity. I wrote a brief note that I anticipated sending to PS and sent it to Micheal for comments. He was very gracious in offering helpful suggestions on the paper, which was published in PS in 1990 (Garand 1990).
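To illustrate the underlying idea, here is a minimal Python sketch of how an evaluation score and a familiarity share might be combined into a single impact score. The multiplicative weighting, the journal names, and the numbers are all hypothetical assumptions chosen for illustration; this is not the formula or the data from the 1990 article.

```python
# Minimal sketch (hypothetical): combine a journal's mean evaluation score with
# the share of survey respondents familiar enough with it to rate it.
# The multiplicative form below is an illustrative assumption, not the
# published measure.

def combined_impact(mean_evaluation: float, familiarity_share: float) -> float:
    """Weight a mean evaluation (e.g., on a 0-10 scale) by the proportion
    of respondents (0-1) who were familiar enough to evaluate the journal."""
    return mean_evaluation * familiarity_share

# Hypothetical survey results: (mean evaluation, share of respondents familiar)
journals = {
    "Journal A": (9.2, 0.95),   # highly rated and widely known
    "Journal B": (9.5, 0.20),   # highly rated, but only by a small subfield
    "Journal C": (7.8, 0.85),   # solid rating with broad reach
}

ranking = sorted(journals.items(),
                 key=lambda item: combined_impact(*item[1]),
                 reverse=True)

for name, (score, share) in ranking:
    print(f"{name}: combined impact = {combined_impact(score, share):.2f}")
```

In this toy example, the widely read Journal A outranks the slightly higher-rated but narrowly read Journal B, which captures the intuition that reach matters alongside evaluation.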

In the early 2000s Micheal and I concluded that it was time to replicate our earlier works, so we developed a new survey with a list of 115 political science journals for political scientists to evaluate. The result was a 2003 PS article (Garand and Giles 2003), which demonstrated considerable consistency in journal impact rankings over time and considered how journal impact varied across political scientists’ subfields and methodological approaches. The fragmentation in how journal evaluations vary across subfields and methodological approaches was explored further in my SPSA presidential address (Garand 2005).

Our exploration of how political scientists evaluate scholarly journals continued with a PS article in which we explicitly compared citation-based and reputational approaches (Giles and Garand 2007). We uncovered some considerable differences in the rankings of journals produced by these two approaches. First, citation-based approaches tend to cross disciplinary boundaries, so some journals outside the realm of political science may receive more citations because they draw from outside our discipline; as a result, some highly cited interdisciplinary journals may be relatively invisible to political scientists and may draw lower reputational rankings. Second, there are differences in citation patterns across subfields in political science. Articles in international relations journals have higher citation rates than articles in other journals, perhaps reflecting a norm among international relations scholars in how they cite the work of others that differs from citation norms in other subfields. Third, there are many journals that draw relatively few citations but that still maintain high standards for publication and hence draw the respect of political scientists in the field. In the end, we find that appropriate adjustments to account for interdisciplinary and subfield citation differences result in a fairly strong correlation (r = 0.656) between impact ratings based on the citation and reputation approaches.
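As a rough illustration of this kind of adjustment, the hypothetical Python sketch below standardizes citation rates within subfields before correlating them with reputational scores. The normalization rule and all of the data are assumptions made for illustration; they are not the procedure or figures from the 2007 article.

```python
# Hypothetical sketch: standardize citation rates within subfields, then
# correlate the adjusted citation measure with reputational scores.
# The data and the normalization rule are illustrative assumptions only.
from statistics import mean, stdev

# (subfield, citations per article, mean reputational score) -- hypothetical
journals = [
    ("IR",          4.1, 3.9),
    ("IR",          3.2, 3.4),
    ("American",    2.0, 4.2),
    ("American",    1.4, 3.1),
    ("Comparative", 1.8, 3.7),
    ("Comparative", 1.1, 2.9),
]

# Group citation rates by subfield so fields with systematically higher
# citation norms (e.g., IR) can be put on a common scale.
by_field = {}
for field, cites, _ in journals:
    by_field.setdefault(field, []).append(cites)

def zscore(x, values):
    return (x - mean(values)) / stdev(values)

adjusted = [zscore(cites, by_field[field]) for field, cites, _ in journals]
reputation = [rep for _, _, rep in journals]

# Pearson correlation between subfield-adjusted citation scores and
# reputational scores.
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

print(f"adjusted citations vs. reputation: r = {pearson(adjusted, reputation):.3f}")
```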

Finally, Micheal and I joined with André Blais (University of Montreal) and Iain McLean (Oxford University) to extend the research program on journal evaluation to a comparative setting (Garand, Giles, Blais, and McLean 2009). We collected survey data on political scientists’ evaluations of journals in the United States, Great Britain, and Canada, finding considerable differences across these three countries in which journals have the greatest scholarly impact. There is some overlap in what political scientists in these three countries consider to be the leading journals, but there are sufficient differences to suggest that the scholarly communities in these countries are distinct and that scholarly communication patterns are largely country-specific. We also extended the survey-based approach used to evaluate scholarly journals to the evaluation of scholarly presses in political science (Garand and Giles 2011).

Why Publish in PS?

Garand and Giles:

There are at least three reasons why we find PS to be the most appropriate outlet for this research.

First, journals are a key element of our professional life, and their relative rankings are the equivalent of disciplinary “catnip.” Political scientists cannot resist looking at journal rankings. As an illustration of this irresistible attraction, at the APSA convention immediately following publication of the 1975 journal-ranking article in PS, one of the authors’ mentors asked if he had seen the article and immediately launched into a discussion of the rankings. The mentor was so focused on the rankings that he had read the article without noticing that it was coauthored by his former student! We regularly receive inquiries about our studies reporting rankings of scholarly journals and presses, especially requests for data.

Second, there is also a rational incentive behind this professional interest in journal rankings. Rightly or wrongly, the assessment of a scholar’s publication record is conditioned at least in part by the perceived ranking of the journals in which their articles appear. At the extreme, some evaluators for tenure, promotion, or even hiring may simply assign the ranking of the journal to the articles it contains without reading and assessing their contribution. In this light, the articles that we have published in PS may be seen by some as undermining the direct engagement with the work itself that should guide colleagues in making such decisions. In our defense, our 2007 paper makes a strong effort to properly situate the use of any form of journal ranking in the evaluation process.

Third, articles about the ranking of journals are studies of the sociology of our profession. As students of institutions and of behavior within institutions, we need to avail ourselves of the same tools we use to examine the political world in order to understand our own profession. Moreover, work in this vein provides a policy benefit to the profession. As described above, the genesis of the first journal-ranking article was an effort to remedy an inequity in cross-disciplinary evaluation.

In the journal’s first decade, hosting scholarly articles on the profession was not seen as within the purview of PS; indeed, the first journal-ranking article was initially rejected by the editor. In the following decades, however, PS has become the principal home for quantitative and non-quantitative scholarship on the profession. And we believe that the profession is the better for it.

REFERENCES

Giles, Micheal W., and Gerald C. Wright. 1975. “Political Scientists’ Evaluations of Sixty-Three Journals.” PS: Political Science & Politics 8 (3): 254–57.
Giles, Micheal W., Francie Mizell, and David Paterson. 1989. “Political Scientists’ Journal Evaluations Revisited.” PS: Political Science & Politics 22 (3): 613–17.
Garand, James C. 1990. “An Alternative Interpretation of Recent Political Science Journal Evaluations.” PS: Political Science & Politics 23 (3): 448–51.
Garand, James C., and Micheal W. Giles. 2003. “Journals in the Discipline: A Report on a New Survey of Political Scientists.” PS: Political Science & Politics 36 (2): 293–308.
Garand, James C. 2005. “Integration and Fragmentation in Political Science: Exploring Patterns of Scholarly Communication in a Divided Discipline.” Journal of Politics 67 (4): 979–1005.
Giles, Micheal W., and James C. Garand. 2007. “Ranking Political Science Journals: Reputational and Citational Approaches.” PS: Political Science & Politics 40 (4): 741–51.
Garand, James C., Micheal W. Giles, André Blais, and Iain McLean. 2009. “Political Science Journals in Comparative Perspective: Evaluating Scholarly Journals in the United States, Canada, and the United Kingdom.” PS: Political Science & Politics 42 (4): 695–717.
Garand, James C., and Micheal W. Giles. 2011. “Ranking Scholarly Publishers in Political Science: An Alternative Approach.” PS: Political Science & Politics 44 (2): 375–83.