
Ranking Political Science Programs: A View from the Lower Half

Published online by Cambridge University Press: 02 September 2013

Richard S. Katz
Affiliation: State University of New York at Buffalo

Munroe Eagles
Affiliation: State University of New York at Buffalo

Extract

In an age of diminished resources for higher education, the ranking of programs takes on special significance, particularly for programs that rate poorly in the eyes of their peers. Before cost-conscious administrators use the apparent precision of the National Research Council's 1995 ratings to justify rewarding highly rated programs and penalizing those that fared less well, an analysis of the factors that contribute to ratings success—particularly department size—draws attention to the importance of factors that lie outside the range of departmental control.

Our motivation is only partly "academic." As members of a faculty that rated poorly in this exercise (UB political science ranked 72nd out of the 98 programs surveyed on faculty quality and tied for 48th on "effectiveness"), we have a special interest in reminding our colleagues, and especially our administrators, that our less-than-stellar performance reflects a variety of considerations, only some of which are our "fault."

Rating academic departments as the NRC has done, by relying on peer perceptions (nearly 8,000 graduate faculty members participated in the ratings survey), involves tapping into a complex set of social processes that are not easily measured. We contend, however, that the outcome of this process can be modeled successfully and parsimoniously with a six-variable model of the "determinants of ratings success." In this article we estimate the parameters of this model and show that its most statistically robust and important predictor—department size—is a characteristic of departments that is beyond their control and has no direct bearing on the quality of their members, while a second important indicator—the proportion of full professors—is at least in part similarly beyond departmental control. Our confidence that this model captures central features of the ratings game is enhanced by the consistency of its performance in explaining the ratings of other social science departments.
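For concreteness, the implied specification can be sketched as a linear model estimated on the rated departments. Only two of the six predictors (department size and the proportion of full professors) are named in this extract, so the remaining terms below are placeholders rather than the authors' actual variables:

\text{RATING}_d = \beta_0 + \beta_1\,\text{SIZE}_d + \beta_2\,\text{FULLPROF}_d + \sum_{k=3}^{6} \beta_k X_{k,d} + \varepsilon_d

where RATING_d is the peer rating of department d, SIZE_d its faculty size, FULLPROF_d its proportion of full professors, and X_{3,d} through X_{6,d} stand in for the remaining predictors in the six-variable model.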

Type: Research Article

Copyright: © The American Political Science Association 1996


References

Connor, Walter D. 1987. "Mistaken Identity." PS 20(1): 9.
The Chronicle of Higher Education. 1995. "Doctoral Judgements: A Sweeping National Study Assesses the Quality of Research Programs in 41 Fields." A20–A32.
Greenberg, Edward S. 1987. "Perils in Citation-Counting." PS 20(1): 69.
Klingemann, Hans-Dieter. 1986. "Ranking the Graduate Departments: Toward Objective Qualitative Indicators." PS 19(3): 651–61.
National Research Council. 1995. Research-Doctorate Programs in the United States: Continuity and Change. Washington, DC: National Academy Press.