
Faculty Research Productivity: Why Do Some of Our Colleagues Publish More than Others?

Published online by Cambridge University Press:  08 April 2011

Jae Mook Lee
Affiliation:
University of Iowa


Type
The Profession
Copyright
Copyright © American Political Science Association 2011

The justification for studying faculty research productivity is that it affects individual advancement and reputation within academe, as well as departmental and institutional prestige (Creamer 1998, iii). Publication records are an important factor in faculty performance evaluations, research grant awards, and promotion and salary decisions. The phrase “publish or perish” encapsulates the importance of research productivity to academic careers. In addition, questions are sometimes raised about whether an individual's status as a minority within academia (e.g., being a member of an underrepresented ethnic or racial group or being female in a male-dominated profession) affects his or her ability to publish or likelihood of publishing (Cole and Zuckerman 1984; Bellas and Toutkoushian 1999). Finally, most previous work that tackles the productivity causality puzzle comes from disciplines other than political science. Thus, one of the purposes of this report is to explore whether the existing findings about research productivity in other disciplines apply equally well to research productivity in political science.

The question that we wish to answer is: What factors contribute to higher or lower research output by political scientists? We base our answer on responses to a 2009 survey sponsored by the APSA. Respondents were drawn from a sample of all faculty employed in political science departments (including departments of government and public affairs) throughout the United States. (Appendix A provides a description of the survey methodology.)

According to previous studies, several blocks of variables determine scholarly productivity. These variables include demographics and family-related factors, human capital, opportunity costs (teaching and service workload), working environment, and professional variables (table 1). Among the demographic variables listed in table 1, gender differences have received special attention. Numerous studies have revealed that women publish less than men (Fish and Gibbons 1989; McDowell and Smith 1992; Broder 1993, 123; Bellas and Toutkoushian 1999; Sax et al. 2002; Maske, Durden, and Gaynor 2003, 561; Taylor, Fender, and Burke 2006; Evans and Bucy 2010). This finding, however, remains controversial: Davis and Patterson (2001, 89) argue that women do not publish significantly less than men when source of Ph.D., type of employer, and field of specialization are held constant.

Table 1 References for Explanatory Variables for Scholarly Productivity

The second category of variables found in table 1 concerns “human capital.” Human capital addresses any contextual or individual attributes that could potentially influence the quality of an individual's research skills or training. The professional reputation of an academic's Ph.D.-granting department is consistently tied to differences in research productivity (Hansen, Weisbrod, and Strauss 1978; Davis and Patterson 2001, 88; Broder 1993; Buchmueller, Dominitz, and Hansen 1999, 71). The assumption is that top-rated schools attract the best students and then provide them with training at the frontiers of the discipline and socialization into a culture that values high-quality research (Rodgers and Neri 2007, 76).

“Opportunity cost” variables capture the time spent teaching or doing service. Given the limited amounts of time that faculty have, teaching or administrative requirements set by the employing institution may affect faculty research productivity (Fender, Taylor, and Burke 2005; Taylor, Fender, and Burke 2006; Maske, Durden, and Gaynor 2003). Studies consistently reveal that a large teaching load significantly reduces published output (Graves, Marchand, and Thompson 1982).

The category of “current working environment”—both its culture and its availability of resources—captures primarily departmental and institutional characteristics. Broader availability of resources and incentives for publishing should influence publication rates (see table 1). “Culture” relates to shared attitudes about not only the value of research, but also collegiality and interpersonal encouragement. Each academic's own research productivity is affected by the productivity of his or her departmental colleagues through “collaboration, academic discourse, peer expectations [and] peer pressure” or through colleagues' other attributes, such as “ability, integrity [and] professionalism” (Rodgers and Neri 2007, 85; see also Taylor, Fender, and Burke 2006).

As a category that is distinct from the working environment, we also consider “professional variables,” which include the achievements of an individual's academic career. For example, scholarly productivity has been associated with the ranking of the program with which an individual is affiliated (Davis and Patterson 2001, 88; Xie and Shauman 1998, 865; Garand and Graddy 1999; McCormick and Rice 2001; Youn 1988). It may be that higher ranked departments select better scientists, or perhaps these departments foster greater productivity (Broder 1993, 116). Arguably, faculty research productivity also varies according to the researcher's specific subject matter (Fish and Gibbons 1989, 98).

Faculty rank (instructor/lecturer, assistant professor, associate professor, full professor) is considered a professional variable. Some researchers find rank to be a predictor of productivity (Blackburn, Behymer, and Hall 1978; Bellas and Toutkoushian 1999; Dundar and Lewis 1998; Sax et al. 2002; Xie and Shauman 1998, 865), while others have shown that rank has no influence on faculty research productivity when other relevant variables are taken into consideration (Over 1982; Wanner, Lewis, and Gregorio 1981). Also categorized under professional variables is coauthorship, which is thought to “increase article production through the division of labor made necessary by increased complexity in the subject matter and by the need to saturate markets to increase the probability of getting papers accepted for publication” (Maske, Durden, and Gaynor 2003, 555, 561; see also Hollis 2001; Durden and Perri 1995; Davis and Patterson 2001, 90; Taylor, Fender, and Burke 2006).

The theories behind the explanations for variation in research productivity are nearly as varied as the factors studied. Behavioral reinforcement theory views the “system of faculty ranks as a reward system as well as a schedule of reinforcement” (Tien and Blackburn 1996, 5). A similar idea is proposed by the investment-motivated model of scientific productivity, which argues that “scientists engage in research because of the future financial rewards associated with the activity” (Levin and Stephan 1991, 115). Such a model implies a decline in research productivity over the course of an individual's career, given the finite time horizon (Diamond 1984). Rodgers and Neri (2007, 79) report that the most productive period is the first five years after the Ph.D. is conferred, and Davis and Patterson (2001) report that productivity generally declines after tenure.

In contrast, a consumption-motivated model that stresses the “scientist's fascination with the research puzzle itself” (Levin and Stephan 1991, 115) does not predict a decline in research productivity over time. Likewise, selection theory (Finkelstein 1984) argues that only the most productive faculty members are promoted, eliminating low producers before they reach higher ranks and thus creating a situation in which higher ranking faculty produce more. Accumulative advantage theory emphasizes the importance of resource acquisition over time (Allison and Stewart 1974). Motivational theory draws an important distinction between intrinsic motivation (e.g., interest in research) and extrinsic motivation (e.g., desire for promotion). Intrinsic motivation may account for the continued productivity of full professors, who are no longer motivated by the possibility of promotion (Finkelstein 1984, 101).

Thus, theoretical approaches to the productivity question vary, as do the factors that predict faculty research productivity. The dependent variable (productivity, or research output) can also be evaluated in a variety of ways. In our analysis of scholarly productivity, we use as our dependent variable the respondent's best estimate of the total number of articles that he or she has published in refereed academic or professional journals over his or her entire career. We also look at a summary measure that includes refereed journal articles, books, edited books, and book chapters. Finally, we create a model that uses books and book chapters published as controls when evaluating the total number of articles published. (See appendix A for a description of all variables included in the analyses.)

Descriptive Statistics

Before we embark on the multivariate analyses, we first provide descriptive statistics. Across the entire set of 1,399 respondents, the average number of articles published in refereed academic or professional journals during the respondent's entire career is 10.5. In table 2, we divide our sample into groups of men and women and compare their publication rates. On average, men publish significantly more articles than women do. We wish to note that this bivariate calculation does not control for age. The average age of women in the profession is lower than the average age for men in the profession. Thus, on average, men in the profession have more years of publishing time (based on age) than women do.

Table 2 Average Number of Articles Published by Subgroup

Notes. For all comparisons, p < .001.

a Percentages exclude one transgendered respondent.

b Percentages do not add to 100% because of the exclusion of respondents from Tier IV, unranked, and unknown departments.

c Percentages do not add to 100% because of the exclusion of respondents from departments within a two-year college and respondents without a program type specified.

d Percentages do not add to 100% because of the exclusion of instructors, lecturers, postdocs, fellows, and respondents without a rank specified.

Turning to human capital variables, table 2 shows that the average number of articles published during an individual's career is significantly higher among graduates from departments ranked among the top 25 (tier I schools) than among graduates from departments ranked 26–50 (tier II) and departments classified as tier III using the Schmidt and Chingos (2007) ranking. Thus, the ranking of an academic's Ph.D.-granting department is bivariately tied to differences in research productivity.

Looking at the opportunity cost variables, we asked respondents to report their typical teaching load each year (for the past five years). Across all respondents, the average number of courses taught is 4.3 per year. For purposes of a simple descriptive picture, we divided respondents into three groups: faculty with low teaching loads (2.5 or fewer courses per year), medium teaching loads (3 to 5.5 courses per year), and high teaching loads (6 or more courses per year). On average, faculty members with the lowest teaching loads publish 14.5 articles, while individuals with heavy teaching loads publish 4.9 articles (see table 2). These numbers reveal a major difference in research output depending on how many courses a faculty member teaches.
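The three-way grouping described above can be expressed as a simple binning function. This is a hypothetical reconstruction for illustration only; the cutoffs are those reported in the text, and the handling of loads that fall between the stated bins is our assumption.

```python
def teaching_load_group(courses_per_year):
    """Bin an annual course load into the three descriptive groups
    used in the text: low (2.5 or fewer courses), medium (3 to 5.5),
    and high (6 or more). Loads strictly between 2.5 and 3 are not
    defined by the text; here we assign them to the medium group."""
    if courses_per_year <= 2.5:
        return "low"
    elif courses_per_year < 6:
        return "medium"
    else:
        return "high"
```

With half-course increments, as in the survey, every reported load falls cleanly into one of the three groups (e.g., the sample average of 4.3 courses is "medium").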

The last two variables that we considered in table 2 are associated with working environment and professional achievement. Looking at the “current employment” rows, we can see that faculty who are employed by Ph.D.-granting departments publish dramatically more than faculty who are employed by MA-granting programs or departments in a four-year college. The numbers in table 2 also illustrate the effect of the professional achievement variables by highlighting the difference in the number of articles published by members of different professional ranks. On average, assistant professors have published 3.6 articles, while full professors have published 18.6 articles.

Analysis

In running OLS regressions in our multivariate analysis, we employed three different versions of the dependent variable. The first simply used the respondent's raw report of the total number of articles published in refereed academic or professional journals over his or her entire career. The second approach replaced missing responses to this question with the value of zero. The third approach followed a recommendation by Fox and Milbourne (1999, 256) that the number of articles published be transformed as the logarithm of one plus the original variable, with missing responses replaced with zero. This helps deal with a concentration of observations at zero and makes the distribution more closely approximate a normal distribution. The tables that we publish here all use the logarithmic transformation of the number of articles produced as the dependent variable. Analyses using the other two versions of the dependent variable are consistent with the findings reported here and are available from the authors upon request.
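The third version of the dependent variable can be sketched as a one-line transformation, following the Fox and Milbourne recommendation described above (the function name is ours):

```python
import math

def log_articles(raw_count):
    """Construct the transformed dependent variable described in the
    text: replace a missing response with zero, then take the log of
    one plus the article count."""
    count = 0 if raw_count is None else raw_count
    return math.log(1 + count)
```

A missing response thus maps to log(1) = 0, the same value as a reported count of zero, which is what concentrates observations at zero before the transformation spreads out the rest of the distribution.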

Table 3 reports regression results using the logarithm of one plus the number of articles published (with missing responses replaced with zero) as the dependent variable. Model I of table 3 contains only demographic and family-related factors. These factors explain 5% of the variation in the (log of the) number of articles produced. Two variables emerge as significant predictors: gender and number of children. According to this simple demographic model, women tend to publish less than men. Additionally, as the number of children that an individual has increases, so does the number of articles that he or she publishes.

Table 3 Log of Articles as Dependent Variable, Based on Original Responses, with Missing Responses to Predictor Variables Excluded

Note.

*** p < .001,

** p < .01,

* p < .05

Model II of table 3 incorporates the measures of human capital. Both Ph.D. program rank and number of years to complete the Ph.D. are significant. As the ranking of the program from which a faculty member received his or her Ph.D. improves, the number of articles this individual publishes increases. As the number of years to complete the doctoral degree increases, the number of articles published decreases.

The problem that appears in table 3, however, concerns Model I's loss of a large number of respondents because they neglected to report their number of children. The introduction of additional variables in Model II results in a further loss of 225 other cases. This loss occurs in part because 141 respondents did not identify the institution from which they received their Ph.D.; this prevents us from using the 2007 Schmidt and Chingos ranking variable for these respondents. The more significant missing value problem arises with the question: “In what year did you obtain your Ph.D. degree?” Within our sample, 273 respondents either did not answer or made a mistake when typing in a year. Given this missing data problem, we decided to use the multiple-imputation Amelia II program for missing data (Honaker, King, and Blackwell 2010). We used this program to impute estimates of the missing responses on each of the independent variables used in the analysis. The purpose of this approach was to increase the number of observations taken into consideration in the analysis.
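Amelia II implements a bootstrapped EM algorithm in R and produces several completed data sets whose estimates are then combined. As a rough illustration of the basic idea only, and emphatically not Amelia's actual algorithm, one can fill each missing value on a variable with a draw from that variable's observed distribution:

```python
import random

def crude_impute(values, seed=0):
    """Illustrative single imputation: replace each missing entry
    (None) with a random draw from the observed, non-missing values
    of the same variable. Amelia II is far more sophisticated: it
    uses a bootstrapped EM algorithm that conditions on the other
    variables and generates multiple completed data sets."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]
```

The payoff in either case is the same: analyses run on the completed data retain all respondents instead of listwise-deleting anyone with a missing answer.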

Table 3.1 reports the testing of exactly the same models as are tested in table 3, but using the imputed data and holding the number of cases in the analysis at 1,399 for all models. As in table 3, both human capital variables are significant in Model II of table 3.1. When using the imputed data, it is noteworthy that the significant demographics include “married.” Thus, table 3.1 reveals that women tend to publish fewer articles than men, while married and partnered persons publish more than single, divorced, or widowed academics.

Table 3.1 Log of Articles as Dependent Variable (Regressions with Imputed Data)

Note.

*** p < .001,

** p < .01,

* p < .05

In Model III of table 3, we add in the controls that we label opportunity costs: teaching load, number of new courses prepared, number of committee memberships, number of committees chaired, and amount of student advising. In this model, we jump to 29% of the variance explained in table 3 (Model III) but lose another 108 cases. According to these results, heavy teaching loads do take their toll on article production. In addition, the larger the number of new courses prepared, the lower the number of articles published. We also find, rather unexpectedly, that the more committees an individual chairs and the more advisees he or she has, the more he or she publishes.

When all respondents are considered and missing values are replaced by imputed data, the strong relationship between a higher teaching load and lower article production is confirmed, as is the negative relationship between new course preparation and article production (Model III in table 3.1). The overall count of advisees is again positively related to the number of articles published. To explain the findings regarding committees and advising, it may be wise to interpret both as measures of professional involvement rather than opportunity costs. It appears that we misclassified advising and committee service as opportunity costs, given that when these variables are significant, they appear to support rather than detract from article production. We return to this finding in our discussion section.

Model IV includes the working environment variables: collegial climate; count of overall resources; and current employment in a Ph.D. program, MA program, or private institution. Model IV in table 3 reveals that more positive evaluations of a department's “climate” are related to fewer articles published. Faculty members who evaluate their environment as more friendly, respectful, collegial, collaborative, and cooperative publish less, on average, than faculty who evaluate their home department as more hostile, disrespectful, contentious, individualistic, and competitive. This relationship emerges even more powerfully in the imputed data (Model IV, table 3.1). The negative association between departmental collegiality and research productivity is affirmed in the tables that follow. We must accept the finding that faculty members who operate in the more competitive, individualistic, and hostile departments publish more on average. The defining element of this scale is the collegial versus contentious contrast, with collegiality associated with lower total publications and contention associated with a higher number of publications.

We also see from the test of Model IV (in table 3 and table 3.1) that more resources are associated with an increase in the number of articles published. The dummy variable for employment in a Ph.D.-granting department is also significant. Table 3 and table 3.1 show contradictory results regarding whether faculty members in private institutions tend to publish more than faculty members in public institutions. Table 3.1's finding that faculty members employed in private rather than public institutions tend to publish more is consistent throughout the analyses using imputed values for missing responses (and therefore based on the largest possible number of respondents). Note that with Model IV, we can now explain 41% of the variance in the number of articles published (see table 3.1).

The last model that we report in tables 3 and 3.1 includes the following professional variables: current faculty rank, a series of dummy variables for current primary field of teaching and research, year the Ph.D. was granted, coauthorship, frequency of conference attendance, and the ranking of the department that currently employs the faculty member. This final model is extremely powerful; it explains 44% of the variance in the number of articles published. Model V in tables 3 and 3.1 shows that as an individual moves up the academic ranks, the total number of articles published in his or her career also increases. We do not find evidence that faculty members in any subfield publish significantly more than faculty members in another subfield. We do find that as more time passes since the granting of the degree, the number of articles published increases. Increased conference attendance is also positively related to greater article output (see tables 3 and 3.1).

We also tested Models III, IV, and V after adding the total number of books written or edited and book chapters published as a control. Our findings confirm the results of Maske, Durden, and Gaynor (2003, 561), who report a significant positive relationship between books published and articles published. Among our respondents, the bivariate correlation between the number of articles published and the total number of books, edited books, and book chapters is .640. The correlation between the number of articles published and the number of books published is .593. A higher number of books, edited books, and book chapters is positively associated with a higher number of articles published. Thus, the two activities are clearly complementary: they reinforce rather than detract from each other.

In a related vein, we thought it valuable to report on a different approach to measuring research output. As an alternate dependent variable, we evaluate the total number of publications. To create a total publications variable, we add together responses to four questions: (1) number of articles published in refereed academic or professional journals, (2) number of monographs (books) published, (3) number of books edited, and (4) number of book chapters published. For the results presented in table 4, missing responses to all questions are set to zero and the responses are summed (the value of one is added to the sum before the log is calculated). To save space, we do not present the results for Models I, III, and IV, and for Model II, we report only the results using the data file created by Amelia, with missing responses on the independent variables replaced with imputed values.
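The construction of the table 4 dependent variable can be sketched directly from this description (the function name is ours):

```python
import math

def log_total_productivity(articles, books, edited_books, chapters):
    """Build the total-productivity dependent variable described in
    the text: set missing responses (None) to zero, sum the four
    publication counts, and take the log of one plus the sum."""
    counts = [0 if c is None else c
              for c in (articles, books, edited_books, chapters)]
    return math.log(1 + sum(counts))
```

A respondent who skipped all four questions thus receives log(1) = 0, the same value as a respondent who reported no publications of any kind.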

Table 4 Log of Total Productivity as Dependent Variable

Note.

*** p < .001,

** p < .01,

* p < .05

For Model V of table 4, we report results using the imputed data files created by Amelia, as well as results using the log of total productivity (plus one) as the dependent variable and allowing missing cases to be deleted from the analysis. If we focus on Model V of table 4, which controls for human capital, opportunity costs, and important characteristics such as faculty rank, we see that women on average report lower total publications than men. Being a minority and having children are significant when using the imputed data, but when missing values are dropped from the analysis, these characteristics are not significantly related to overall total output.

According to table 4, the longer the time an individual spends earning the Ph.D., the lower his or her number of total publications. When opportunity costs are considered, we see that the more courses that are taught, the lower a faculty member's total number of publications. The results using imputed values to replace missing responses also reveal a strong relationship between more advisees and more publications. Among the working environment variables, the total number of publications tends to be lower when the department's collegiality is high. The total number of publications is higher among those faculty members who report receiving more of the following resources: course release time, research assistance time, discretionary funds, travel funds, and summer salary. Consistently significant predictors of total productivity among the professional variables are higher faculty rank, year of degree (i.e., more time since finishing the doctoral degree), and conference attendance.

Since promotion to a higher rank and the total number of publications are inextricably combined in academia, we thought it important to divide our sample into subgroups based on academic rank and evaluate the factors that predict different levels of productivity within each rank. Acknowledging that full, associate, and assistant professor are each heterogeneous categories, in table 5, we test Model V within ranks to identify why some faculty members are more productive than others. In table 5, we report the results of testing Model V among assistant professors only, associate professors only, and full professors only. Table 5 uses the log of one plus the number of articles published as the dependent variable. We report results based on the file created by the Amelia program, with missing responses replaced with imputed values, as well as results based on analyses with missing responses excluded from the calculations. The following findings focus only on those columns based on the data with missing responses replaced by imputed values, as we feel more confident in these results, given that they incorporate a larger number of respondents.

Table 5 Log of Articles as Dependent Variable in Subgroup Regressions

Note.

*** p < .001,

** p < .01,

* p < .05

Looking first at assistant professors only (table 5, Model VA), we find that a higher number of articles published (with the logarithmic transformation and using imputed values for missing responses) is associated with being male rather than female, having more children, taking less time to complete the doctoral degree, teaching fewer undergraduate courses, having more resources, working in a private institution, and attending more conferences. Among associate professors only (table 5, Model VB), a higher number of articles published is associated with being male rather than female, graduating from a higher ranked department, teaching fewer undergraduate courses, working in a competitive rather than a collegial climate, having more resources, and being employed in a department with a Ph.D. program. Among full professors only (table 5, Model VC), a higher number of articles published is associated with being male rather than female, having more children, having more advisees, working in a competitive versus a collegial department, having more resources, being employed in a Ph.D.-granting department, working at a private institution, having a specialty other than American politics, and more years since receipt of the Ph.D.

Thus, we can see both similarities and differences in the predictors of article publication rates across academic ranks. For example, the availability of more resources is related to more publication at all ranks. A large teaching load appears to have detrimental effects on publication rates for assistant and associate professors, although not for full professors. A larger advising load is associated with more productivity among full professors, but not among associate and assistant professors. As well, when we look within these subgroups (partially controlling for age), we find that working in a private institution is associated with higher publication rates among assistant and full professors but has no significant effect on the number of articles produced by associate professors. Being employed by a Ph.D.-granting department is positively related to article production among associate and full professors, but not among assistant professors.

Discussion

With regard to demographics, our results appear to reveal that women employed in political science departments in the United States publish less on average than their male counterparts. When we divide respondents by rank and conduct our analysis within these ranks, using imputed data, the relationship between articles published and gender is significant at all ranks. Several explanations for the existence of this gender difference have been offered: Xie and Shauman (1998), for example, argue that female scientists are less likely to hold the positions and have access to the facilitating resources that are conducive to higher rates of publication performance. This finding may be relevant for political scientists at the associate professor level, at which women are less likely than men to be employed by a top-ranked department. At the assistant professor level, however, women are more likely than men to be employed by a top-ranked department. In addition, women and men on average report equal access overall to resources.

Another explanation that has been offered in the literature is that women spend more time “mentoring” than do male faculty. Collins (1998) finds that women are more likely than men to devote time to teaching and advising, serve in part-time positions, and teach in fields unlike the ones in which they were trained. Among our respondents, we do not see significant differences between men's and women's teaching loads, either for graduate or undergraduate courses, nor do we see significant differences in committee membership, committee chairing, or overall levels of advising. We note, however, that our questions count the number of these activities but do not ask respondents to report on the amount of time spent on these activities. Female political scientists are also no more likely than male political scientists to work in part-time positions and no more likely to teach or do research in a field that differs from their major field as a graduate student.

Another explanation is that women spend more time than their male colleagues on household and childcare responsibilities (Gmelch, Wilke, and Lovrich 1986; Suitor, Mecom, and Feld 2001).Footnote 23 This explanation seems plausible if traditional divisions of labor between men and women exist within the household. We did not, however, include a question in our survey about time devoted to domestic or child-rearing chores, so we cannot test this hypothesis. Looking at our sample of political science faculty members, we see that at the level of assistant professor, men are more likely than women to have children. At the associate and full professor levels, men and women are equally likely to have children.Footnote 24

It is also important to note that the men in our sample have, on average, been in the profession longer than the women: female respondents received their doctoral degrees in 1994, on average, while male respondents received theirs in 1990. However, we do control for the year in which the degree was awarded in Model V, and we still find in much of our analysis that women publish less than men.

Findings are inconsistent regarding whether members of racial or ethnic minorities publish more or less once opportunity costs, working environment, and professional characteristics are taken into consideration. We do not find any relationship between self-identification as a minority and number of article publications when the sample is divided by rank.Footnote 25 We note that among the political science faculty who responded to our survey, racial minorities are no more or less likely to be employed by a department that offers a doctoral or an MA degree. Political science faculty who are members of a racial or ethnic minority group are also no more or less likely to be married or have children, and they have, on average, the same number of children as do nonminorities. We find no differences on average in age or year that a degree was awarded when we compare minority group respondents with nonminorities. The average teaching load at the graduate level is the same for minorities and nonminorities, while at the undergraduate level, minorities have a slightly lower teaching load.

A finding that we think particularly important to a profession that places a great deal of emphasis on publications when evaluating faculty performance is the negative effect of a heavy teaching load on research output. The opportunity costs of teaching a large number of courses and preparing new courses are significant indeed. Thus, our findings correspond to the findings of many other scholars—that time spent teaching takes away from time spent doing research (Maske, Durden, and Gaynor 2003, 561; Bellas and Toutkoushian 1999; Xie and Shauman 1998, 865; Hamovitch and Morgenstern 1977, 636; Porter and Umbach 2001; Taylor, Fender, and Burke 2006, 858).

Our findings do diverge from previous findings regarding what is generally classified as “service.” A heavy teaching burden generally has a negative effect on publishing, but advising does not. We speculate that the positive relationship between student advising and higher article production is related to the constructive effects on intellectual activity (including the possibility of coauthorship) that are associated with frequent one-on-one interaction with advisees. We believe that advising represents a measure of professional involvement and should be considered a bonus rather than a cost. A particularly strong relationship exists between the number of Ph.D. students an individual advises and the total number of articles that he or she publishes in refereed academic or professional journals. Having a publishing research group and advising appear to go hand in hand.

Finally, we would like to highlight one more finding. The presence of a collegial climate within the department tends to be associated with less productivity. In other words, a degree of competitiveness, even hostility, does not detract from productivity. Other attitudinal measures, such as one's evaluation of the research climate within the department, are positively associated with publications. We will address this evaluative dimension of the professional environment further in a follow-up report.

Appendix A: Survey Methodology

Questionnaire Design

In 2005, the APSA Committee on the Status of Women in the Profession (CSWP) proposed to the president of APSA that the association conduct research associated with the recommendations that emerged from the March 2004 Workshop on Women's Advancement in Political Science organized by Michael Brintnall and Linda Lopez (APSA), Susan Clarke (University of Colorado, Boulder), and Leonie Huddy (Stony Brook University). Once the research proposal was approved, the CSWP used questionnaires that had been employed in research published by Hesli and Burrell (1995); Hesli, Fink, and Duffy (2003); and Hesli et al. (2006) to develop a new survey instrument. Additional questions were added from questionnaires developed by the National Research Council and the University of Michigan's Fall 2001 Survey of Academic Climate and Activities, which was created for an NSF ADVANCE project. The following reports were also used to help generate questions:

  • Blau, F. 2002. “Report of the Committee on the Status of Women in the Economics Profession.” American Economic Review 92: 516–20.

  • Commission on Professionals in Science and Technology (CPST). 2000. Professional Women and Minorities: A Total Human Resource Data Compendium. 13th ed. Washington, DC: CPST.

  • Creamer, Elizabeth. 1998. Assessing Faculty Publication Productivity: Issues of Equity. ASHE-ERIC Higher Education Report, Vol. 26, No. 2. Washington, DC: George Washington University.

  • Fox, Mary Frank. 1995. “Women and Scientific Careers.” In Handbook of Science and Technology Studies, ed. S. Jasanoff, J. Markle, J. Petersen, and T. Pinch, 205–23. Newbury Park, CA: Sage.

  • Fox, Mary Frank. 1998. “Women in Science and Engineering: Theory, Practice, and Policy in Programs.” Signs: Journal of Women in Culture and Society 24: 201–23.

  • Sarkees, Meredith Reid, and Nancy E. McGlen. 1992. “Confronting Barriers: The Status of Women in Political Science.” Journal of Women, Politics & Policy 12 (4): 43–86.

A draft copy of the questionnaire was circulated to the members of the APSA status committees. The questionnaire was revised and expanded to address the concerns of the members of the status committees. The instrument was pilot-tested by distributing it to all political science faculty members at one research university and one private four-year college. The feedback from the pilot test was used to make further revisions to the questionnaire.

Sample Selection

We used as our target population the names contained in the APSA “faculty” file. From this file of 11,559 names, we created a sample file of 5,179 names. The original “faculty” file was stratified by department size. To ensure adequate representation of faculty members from medium- and small-size schools, we oversampled from these groups. Names were selected randomly from the “faculty” file for the “sample” file.

Survey Procedure

All persons in the sample file were sent, via e-mail, a letter of invitation to participate in the study from the executive director and the president of the APSA. Bad e-mail addresses (addresses that bounced back) were replaced with random selections from the “faculty” file, and these persons were also mailed an invitation letter. The cleaned “survey” file was sent to the Survey Research Center at the Pennsylvania State University (SRC).

Individuals in the sample were sent an e-mail from SRC inviting them to participate in the survey. This invitation included a link to the web-based survey containing a unique identifier for each potential participant. Only one completed survey was allowed for each identifier. The initial invitation was e-mailed to respondents on August 27, 2009. Follow-up reminders were sent to nonresponders on September 10, 2009; September 24, 2009; October 8, 2009; and October 29, 2009. Of the 5,179 original addresses, 252 were invalid; among the remainder, 1,399 individuals completed the survey, 105 refused, and 3,423 did not respond.

The distributions of the variables reported in table 2 provide an opportunity to compare the average characteristics of survey respondents to the population as a whole (from which the sample was drawn). As indicated in table 2, among the total set of respondents, 68% are male and 32% are female. With regard to faculty rank, 30% are assistant professors, 27% are associate professors, 35% are full professors, and the remainder fall into smaller categories such as instructors or administrators. Among assistant professor respondents, 44% are female; among associate professors, 29% are female; and among full professors, 24% are female. With regard to department type, 34% of respondents work in a Ph.D.-granting program, 20% work in an MA-granting program, 41% work in a department within a four-year college, and the rest are employed in some other type of academic unit.

According to APSA data, the percentage of females in the population from which we drew the sample (all political science faculty members in the United States) was 28% in 2009. Breaking this down by rank and institution type, we get the following distributions:

Appendix B: Variables Included

Dependent Variables

Article Productivity

Survey question: For your entire career, please give your best estimate of the number you have produced or have been awarded for each of the following:

______ number of articles published in refereed academic or professional journals

In one version of this variable, all missing values were set to zero. In another version, we took the logarithmic transformation of the number of articles plus one.

Total Productivity

Survey question: For your entire career, please give your best estimate of the number you have produced or have been awarded for each of the following:

______ number of articles published in refereed academic or professional journals

______ number of monographs (books) published

______ number of books edited

______ number of book chapters published

All missing values of articles, monographs, edited books, and book chapters were set to zero, and we then took a logarithmic transformation of the sum of these items plus one.
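The two dependent-variable transformations described above (zero-filling missing counts, then taking the log of the total plus one) can be sketched as follows. This is an illustrative sketch, not the authors' original code; the function names are ours.

```python
import math

def zero_fill(count):
    """Treat a missing (None) publication count as zero, as in the text."""
    return 0 if count is None else count

def log_productivity(*counts):
    """ln(1 + sum of counts): adding one keeps respondents with zero
    publications defined (ln(0) is undefined) and compresses the
    right-skewed raw counts."""
    total = sum(zero_fill(c) for c in counts)
    return math.log(1 + total)

# Article productivity uses the article count alone; total productivity
# sums articles, monographs, edited books, and book chapters.
article_score = log_productivity(12)
total_score = log_productivity(12, 2, None, 5)  # missing edited books -> 0
```

A respondent who skipped every item thus scores ln(1) = 0 rather than being dropped from the analysis.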

Independent Variables

Female

Survey question: What is your gender?

  a. Male

  b. Female

  c. Transgender

The dummy variable equals 1 if the response is b.

Minority

Survey question: Do you identify yourself as a member of an ethnic and racial minority group?

  a. Yes

  b. No

  c. Don't know

The dummy variable equals 1 if the response is a.

Married

Survey question: What is your personal status?

  a. Never married

  b. Married (first time)

  c. Married (second or third time)

  d. Member of an unmarried opposite- or same-sex partnership

  e. Separated/divorced

  f. Widowed

The dummy variable equals 1 if the response is b, c, or d.

Number of Children

Survey question: Do you or a spouse/partner of yours have any children?

  a. Yes (If yes, how many?)

  b. No

An interaction variable between a dummy for having children (response a) and the number of children specified.
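The interaction described above multiplies a has-children dummy by the reported count, so that "No" answers (and skipped follow-ups) score zero. A minimal sketch, with an illustrative function name:

```python
def children_count(has_children, n_children):
    """Interaction of a has-children dummy with the reported number.

    has_children: survey response, "a" (Yes) or "b" (No).
    n_children: the follow-up count, possibly None when skipped.
    """
    dummy = 1 if has_children == "a" else 0  # response a = Yes
    return dummy * (n_children if n_children is not None else 0)
```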

Number of Years to Complete Ph.D.

Survey questions:

  (1) In what year did you begin work on your Ph.D.?

  (2) In what year did you obtain your degree?

The variable is the year in which the Ph.D. was obtained minus the year in which work on the degree began.

Ph.D. Program Rank

Survey question: From which university did you obtain your degree?

The program is ranked based on Schmidt and Chingos's (2007) rankings, classifying the top 25 as tier 1, 26–50 as tier 2, 51–75 as tier 3, 76–86 as tier 4, and unranked programs as tier 5. Foreign degrees and degrees in majors other than political science were set as missing. The score is then reversed so that higher numbers represent higher-ranked programs.
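The tiering-and-reversal rule above can be sketched as follows (an illustrative helper, not the authors' code; the same rule applies to the current-program ranking in appendix B):

```python
def reversed_tier(rank):
    """Map a Schmidt-Chingos rank to a tier, then reverse the scale
    so that higher scores mean higher-ranked programs.

    rank: integer ranking, or None for unranked programs.
    Returns 5 for a top-25 program down to 1 for an unranked one.
    """
    if rank is None:
        tier = 5          # unranked
    elif rank <= 25:
        tier = 1
    elif rank <= 50:
        tier = 2
    elif rank <= 75:
        tier = 3
    else:
        tier = 4          # ranks 76-86
    return 6 - tier       # reverse: tier 1 -> 5, tier 5 -> 1
```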

Teaching Load

Survey question: During the past five years, what is your typical teaching load each year? (If in your current position for less than five years, base this on the period since your appointment.)

Number of New Courses Prepared

Survey question: In the past 5 years, how many new courses (courses that you have not taught previously—do not include even major revisions of courses you have taught before) have you prepared for your department or college (if you have a joint appointment, refer to your primary unit)?

Number of Committee Memberships

Survey question: In a typical year during the past five years, on how many committees do you serve?

Number of Committees Chaired

Survey question: In a typical year during the past five years, how many committees do you chair?

Amount of Student Advising

Survey question: For how many of each of the following types of individuals do you currently serve as official advisor? Undergraduates, MA students, PhD students, postdocs

The variable was generated in the following steps. First, dummy variables were created to represent higher-than-average advising for each student group. For example, a respondent received a “1” on undergraduate advising if his or her reported number of undergraduate advisees was higher than the overall mean for that question. The same coding rule was applied to the other student groups: MA students, doctoral students, and postdocs. Next, we summed the 1s across those four dummies.
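The two steps above can be sketched as follows; the dictionary data layout and the function name are our own assumptions for illustration:

```python
def advising_score(respondent, sample):
    """Count the student groups for which this respondent advises more
    students than the sample mean (i.e., sum the four dummies)."""
    groups = ("undergrad", "ma", "phd", "postdoc")
    score = 0
    for g in groups:
        mean = sum(r[g] for r in sample) / len(sample)  # overall mean
        score += 1 if respondent[g] > mean else 0        # dummy for group g
    return score

# Hypothetical two-person sample.
faculty = [
    {"undergrad": 10, "ma": 2, "phd": 1, "postdoc": 0},
    {"undergrad": 2, "ma": 0, "phd": 3, "postdoc": 0},
]
```

The first respondent is above the mean on undergraduate and MA advising only, so the score is 2 of a possible 4.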

Collegial Climate

Survey question: Please rate the climate of your unit(s)/department(s) on the following continuum by selecting the appropriate number (check the appropriate box). For example, in the first row, the value 1 indicates hostile, while the value 5 indicates friendly, and the numbers in between represent relative combinations of each.

A principal component analysis with the Varimax rotation method revealed two separate components. The Collegial Climate Scale is composed of the hostile–friendly, disrespectful–respectful, contentious–collegial, individualistic–collaborative, and competitive–cooperative items. We calculated the mean score across these five dimensions, with higher numbers indicating a more collegial climate.
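Once the five items loading on the collegiality component are identified, the scale itself is just their mean. A minimal sketch (item keys are illustrative labels for the friendly end of each continuum):

```python
def collegial_climate(ratings):
    """Mean of the five climate items loading on the collegiality
    component; each item is rated 1 (e.g., hostile) to 5 (e.g., friendly),
    so higher values indicate a more collegial department."""
    items = ("friendly", "respectful", "collegial",
             "collaborative", "cooperative")
    return sum(ratings[i] for i in items) / len(items)
```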

Count of Overall Resources

Survey question: Have you received any of the following resources as a result of your own negotiations, the terms of an award, or as part of an offer by the university, since your initial contract at your current position? If so, please check all that apply.

Using the count command, we added up the total number of checks for all rows and all columns.

Ph.D. Program

Survey question: What is the type of department where you are employed?

  a. Ph.D.-granting program

  b. MA-granting program

  c. Department within a four-year college

  d. Department within a two-year college

  e. Other academic unit (specify)

The dummy variable equals 1 if the response is a.

MA Program

Same question as above, with the dummy variable equal to 1 if the response is b.

Private Institution

Survey question: Is this a public or a private institution?

  a. Public

  b. Private

The dummy variable equals 1 if the response is b.

Faculty Rank

Survey question: What is the title of your primary current appointment?

We created an ordinal variable using the following coding: 1 (instructors, lecturers, postdocs, and fellows), 2 (assistant professors), 3 (associate professors), and 4 (full professors, emeritus, and administrative positions).

Subfield Dummies

Survey question: Which of the following best describes your current primary field of teaching and research?

  a. American

  b. Comparative

  c. International relations

  d. Theory

  e. Methods

  f. Other (please specify)

American subfield equals 1 if the response is a. Comparative subfield equals 1 if the response is b. IR subfield equals 1 if the response is c. Theory subfield equals 1 if the response is d. Methods subfield equals 1 if the response is e.

Year of Degree

Survey question: In what year did you obtain your degree?

Coauthorship

Survey question: Which of the following most accurately describes the majority of your publications?

  a. Most are sole-authored

  b. Most are coauthored with colleagues in my department

  c. Most are coauthored with scholars from other departments in my institution

  d. Most are coauthored with colleagues from outside my institution

  e. Most are coauthored with students

The dummy variable equals 1 if the response is b, c, d, or e.

Frequency of Conference Attendance

Survey question: How often have you attended political science conferences in the past three years?

Current Program Ranking

A ranking of the department in which the respondent currently works. Programs are ranked based on Schmidt and Chingos's (2007) ranking, classifying the top 25 as tier 1, 26–50 as tier 2, 51–75 as tier 3, 76–86 as tier 4, and unranked as tier 5. The score is then reversed so that higher numbers represent higher-ranked departments.

Footnotes

1 The list of prior studies of faculty research productivity is lengthy, and we present only a cursory overview here. An extensive bibliography on the topic is available from the authors upon request.

2 Prior political science studies generally address the effect of research productivity on departmental or program-level performance (e.g., Ballard and Mitchell 1998; Garand and Graddy 1999; McCormick and Bernick 1982; McCormick and Rice 2001; Miller, Tien, and Peebler 1996).

3 Fox and Milbourne (1999, 257) use a similar classification for determinants of research output.

4 Teaching load may be measured by self-reported percentages of time spent on teaching (Bellas and Toutkoushian 1999), the number of courses a faculty member teaches (Fox 1992; Maske, Durden, and Gaynor 2003), or the number of credit hours he or she teaches (Taylor, Fender, and Burke 2006). Because the amount of effort required to teach a course can vary depending on whether the course is at the undergraduate or graduate level, a few studies have distinguished between undergraduate and graduate teaching (Fox 1992; Porter and Umbach 2001, 174).

5 Studies concerning many of the factors listed in table 1 have produced mixed results. Jordan, Meador, and Walters (1988; 1989) have reported greater academic research productivity in private than in public institutions, but Golden and Carstensen (1992) have argued that this effect declines after controlling for both research support and the department's reputational ranking (see Dundar and Lewis 1998, 613, for this review).

6 Davis and Patterson report that after tenure is granted, the publication advantage of being employed at a top-tier institution disappears, although “faculty in all ranks of departments offering the Ph.D. publish more than … economists employed in departments not granting the Ph.D” (2001, 87).

7 Because academic rank is also partly determined by research productivity, Broder employs two-stage least squares and suggests a model in which rank and productivity are simultaneously determined (1993, 117).

8 This accounting is the most common measure used in studies of scholarly productivity (Dundar and Lewis 1998, 616; Broder 1993, 119; Maske, Durden, and Gaynor 2003; Rodgers and Neri 2007, 73), although Buchmueller, Dominitz, and Hansen (1999, 68) count both all publications and the number of articles published in “top” journals. Fox and Milbourne (1999, 259) count papers and notes in refereed, internationally recognized journals and authored research books (adjusting for coauthored papers by dividing the number of pages by the number of authors). Some studies limit the time frame during which a scholar's work is published (Tien and Blackburn 1996; Sax et al. 2002).

9 The average age of male respondents was 50 years, while the average age of female respondents was 45. Thus, on average, the male respondents had 5 more years of publishing time (based on age) than did the female respondents.

10 Tier IV and unranked departments are excluded from table 2.

11 All three versions of the dependent variable are highly correlated: the correlation coefficient for the original responses (answering the question of how many articles had been published) and the measure that replaces missing responses with zero is 1.0, while the correlation coefficient between the original responses and the log of one plus the measure that replaces missing responses with zero is 0.78.

12 Because some of the factors that contribute to faculty research productivity involve individual-level attributes while other variables are related to departmental or institutional environments, multi-level statistical models have been suggested to account for the nested structure of data (Porter and Umbach 2001).

13 In ranking departments, we used Schmidt and Chingos's (2007) rankings. We also tested models using the 1995 National Research Council rankings and the U.S. News and World Report rankings of graduate programs. Similar results emerge regardless of the source of the ranking.

14 Specifically, we used the standalone version of AmeliaView in the Windows environment, downloadable from the developer's website at http://gking.harvard.edu/amelia/. We did not use the multiple imputation procedures to replace missing values in our dependent variables but instead imputed only a group of independent variables. We transformed positively skewed variables by using a natural logarithm. Slight changes were made to two of the imputed variables so that they make practical sense. For the “year of Ph.D. degree” variable, we assigned the year 2009 if the imputed value was greater than 2009.3. For the “climate collegiality” variable, we assigned a value of five if the imputed value was greater than the scale maximum of five from the original dataset. For the rest of the variables, we used the imputed values unaltered, as drawn from the Amelia output dataset.
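The two post-imputation adjustments described in footnote 14 amount to clamping implausible imputed values. A minimal sketch of that rule (the function name is ours; the original processing was done outside Python, via Amelia):

```python
def postprocess_imputed(year_phd, climate):
    """Clamp two imputed variables to plausible ranges: imputed degree
    years above 2009.3 become 2009, and climate scores are capped at
    the scale maximum of 5."""
    if year_phd > 2009.3:
        year_phd = 2009
    climate = min(climate, 5)
    return year_phd, climate
```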

15 Other attitudinal variables are also related to article output, but in the interest of parsimony, we did not include these other scales in our model. More information on the different attitudinal indicators included in the survey is available from the authors on request.

16 When Model IV is tested on the original version of the question with missing responses removed, the significant professional environment variables are the count of overall resources and current employment in a department that offers the Ph.D. degree.

17 Other variables were also tested prior to settling on Model V as our full model. For example, we checked whether holding a joint appointment, being part-time, being in a tenure track position, or having received a postdoctoral fellowship affected article production, but we found each to be consistently insignificant. These results may partly be a reflection of the highly skewed distribution on these variables, given that relatively few respondents hold a joint appointment, are part-time, hold a non-tenure track position, or have received a postdoctoral fellowship. We also looked at characteristics of the undergraduate institution and major but found these to be unrelated to article output. We checked the geographic location of the Ph.D.-granting institution but found this variable as well to be unrelated to the number of articles published. Finally, we checked whether the number of graduate courses taught (in addition to the number of undergraduate courses taught) affected article production. This number is highly correlated (inversely) with the number of undergraduate courses taught and highly correlated with employment in a Ph.D.-granting department, so we dropped the number of graduate courses taught from our analysis.

18 On average, American faculty members spend six years in a rank before promotion (Bayer and Smart 1991).

19 As a measure of subfield specialization, we also asked respondents about their major field when working on their doctoral degree. Needless to say, answers to this question are highly correlated with answers to the question about current primary field of teaching and research. We decided to use current specialization in our models, as it is both more proximate and a better predictor—although in general, field of specialization is only weakly or not at all related to publication rates.

20 Age is important to productivity as both a measure of years of experience and an indicator of cohort effects, which may include the state of knowledge in a field at the time an individual is undergoing his or her education or the state of the job market at the time that the doctorate is received (Levin and Stephan 1991, 118). We tested the relationship between age and the number of articles published and found the relationship to be strong and significant. Needless to say, age and year of degree are highly correlated. We chose to use the latter as our predictor variable in the tables reported here. Additional analyses using age are available from the authors upon request. Although the general finding is that “scientists become less productive as they age” (Levin and Stephan 1991, 126; Oster and Hamermesh 1998), the total number of years in academe is correlated with the total number of articles produced (Maske, Durden, and Gaynor 2003, 561).

21 In hypothesizing differences between older and younger cohorts, Broder (1993, 121) estimates predictive models for three different samples: the full sample, individuals more than six years from obtaining their Ph.D., and assistant professors only.

22 Sax et al. conclude that “women publish less in part because they are less driven by a desire to produce numerous publications and receive professional accolades…. Women are more likely than men to view an academic career as an ‘opportunity to influence social change’” (2002, 436). Harris and Kaine (1994) find that, in general, individual motivation is as important for productivity as resource support.

23 Several other explanations for observed differences between men and women have been offered, such as the ideas that women generally do not have enough professional connections and collegial networks that can facilitate publishing (Mathew and Andersen 2001), that women specialize less than men (Leahey 2006), and that discriminatory practices exist in the publication process (Ferber and Teiman 1980). We do not have the data required to adequately test these arguments.

24 Sax et al. determine that “family factors, such as having children … have virtually no effect [on faculty research productivity]” (2002, 436). Similarly, Hamovitch and Morgenstern (1977) assert that child-rearing does not interfere with women's research productivity.

25 Race, as measured by Maske, Durden, and Gaynor (2003, 562), was found to have no effect on the total number of articles produced (see also Sax et al. 2002, 438).

References

Allison, P. D., and Stewart, J. A.. 1974. “Productivity Differences among Scientists: Evidence for Accumulative Advantage.” American Sociological Review 39: 596606.CrossRefGoogle Scholar
Ballard, M. J., and Mitchell, N. J.. 1998. “The Good, the Better and the Best in Political Science.” PS: Political Science and Politics 23: 826–28.Google Scholar
Baumann, Michael G.., Werden, Gregory J., and Williams, Michael A.. 1987. “Rankings of Economics Departments by Field.” American Economist 31 (1): 5661.CrossRefGoogle Scholar
Bayer, A. E., and Smart, J. C.. 1991. “Career Publication Patterns and Collaborative ‘Styles’ in American Academic Science.” Journal of Higher Education 62: 613–36.Google Scholar
Bellas, M. L., and Toutkoushian, R. K.. 1999. “Faculty Time Allocations and Research Productivity: Gender, Race and Family Effects.” Review of Higher Education 22: 367–90.Google Scholar
Blackburn, R. T., Behymer, C. E., and Hall, D. E.. 1978. “Research Note: Correlates of Faculty Publications.” Sociology of Education 51 (2): 132–41.CrossRefGoogle Scholar
Broder, I. E. 1993. “Professional Achievements and Gender Differences among Academic Economists.” Economic Inquiry 31: 116–27.CrossRefGoogle Scholar
Buchmueller, T. C., Dominitz, J., and Hansen, W. L. 1999. "Graduate Training and the Early Career Productivity of Ph.D. Economists." Economics of Education Review 14: 65–77.
Cole, J. R., and Zuckerman, H. 1984. "The Productivity Puzzle: Persistence and Change in Patterns of Publications of Men and Women Scientists." Advances in Motivation and Achievement 2: 217–58.
Collins, Lynn H. 1998. "Competition and Contact: The Dynamics behind Resistance to Affirmative Action in Academe." In Career Strategies for Women in Academe: Arming Athena, ed. Collins, L. H., Chrisler, J. C., and Quina, K., 45–80. Thousand Oaks, CA: Sage.
Conroy, M. E., Dusansky, R., Drukker, D., and Kildegaard, A. 1995. "The Productivity of Economics Departments in the U.S.: Publications in the Core Journals." Journal of Economic Literature 33: 1966–71.
Creamer, E. G. 1995. "The Scholarly Productivity of Women Academics." Initiatives 57 (1): 1–9.
Creamer, E. G. 1998. Assessing Faculty Publication Productivity: Issues of Equity. ASHE-ERIC Higher Education Report No. 26. Washington, DC: ASHE-ERIC/George Washington University, Graduate School of Education and Human Development.
Davis, Joe C., and Patterson, Debra Moore. 2001. "Determinants of Variations in Journal Publication Rates of Economists." American Economist 45 (1): 86–91.
Diamond, Arthur. 1984. "An Economic Model of the Life-Cycle Research Productivity of Scientists." Scientometrics 6 (3): 189–96.
Dundar, H., and Lewis, D. R. 1998. "Determinants of Research Productivity in Higher Education." Research in Higher Education 39 (6): 607–31.
Durden, G. C., and Perri, T. J. 1995. "Coauthorship and Publication Efficiency." Atlantic Economic Journal 23 (1): 69–76.
Evans, H. K., and Bucy, E. P. 2010. "The Representation of Women in Publication: An Analysis of Political Communication and the International Journal of Press/Politics." PS: Political Science and Politics 43 (2): 295–301.
Fender, B. F., Taylor, S. W., and Burke, K. G. 2005. "Making the Big Leagues: Factors Contributing to Publication in Elite Economic Journals." Atlantic Economic Journal 33: 93–103.
Ferber, M. A., and Teiman, M. 1980. "Are Women Economists at a Disadvantage in Publishing Journal Articles?" Eastern Economics Journal 6: 189–93.
Finkelstein, M. J. 1984. The American Academic Profession: A Synthesis of Social Scientific Inquiry since World War II. Columbus: Ohio State University Press.
Fish, M., and Gibbons, J. D. 1989. "A Comparison of the Publications of Male and Female Economists." Journal of Economic Education 20 (1): 93–105.
Fox, M. F. 1983. "Publication Productivity among Scientists: A Critical Review." Social Studies of Science 13: 285–305.
Fox, M. F. 1992. "Research, Teaching, and Publication Productivity: Mutuality versus Competition in Academia." Sociology of Education 65 (4): 293–305.
Fox, K. J., and Milbourne, R. 1999. "What Determines Research Output of Academic Economists?" Economic Record 75 (230): 256–67.
Garand, J. C., and Graddy, K. L. 1999. "Ranking Political Science Departments: Do Publications Matter?" PS: Political Science and Politics 32 (1): 113–16.
Gmelch, W. H., Wilke, P. K., and Lovrich, N. P. 1986. "Dimensions of Stress among University Faculty: Factor Analysis Results from a National Study." Research in Higher Education 24: 266–86.
Golden, J., and Carstensen, F. V. 1992. "Academic Research Productivity, Department Size and Organization: Further Results, Comment." Economics of Education Review 11 (2): 153–60.
Graves, P. E., Marchand, J. R., and Thompson, R. 1982. "Economics Department Rankings: Research Incentives, Constraints and Efficiency." American Economic Review 72 (5): 1131–41.
Hamovitch, W., and Morgenstern, R. D. 1977. "Children and the Productivity of Academic Women." Journal of Higher Education 47: 633–45.
Hansen, W. L., Weisbrod, B. A., and Strauss, R. P. 1978. "Modeling the Earnings and Research Productivity of Academic Economists." Journal of Political Economy 86 (4): 729–41.
Harris, G. T., and Kaine, G. 1994. "The Determinants of Research Performance: A Study of Australian University Economists." Higher Education 27: 191–201.
Hesli, V., DeLaat, Jacqueline, Youde, Jeremy, Mendez, Jeanette, and Lee, Sang-shin. 2006. "Success in Graduate School and After: Survey Results from the Midwest Region." PS: Political Science and Politics 39 (2): 317–25.
Hesli, V., Fink, Evelyn C., and Duffy, Diane. 2003. "The Role of Faculty in Creating a Positive Graduate Student Experience: Survey Results from the Midwest Region, Part II." PS: Political Science and Politics 36 (4): 801–04.
Hesli, Vicki, and Burrell, Barbara. 1995. "Faculty Rank among Political Scientists and Reports of the Academic Environment: The Differential Impact of Gender on Observed Patterns." PS: Political Science and Politics 28 (1): 101–11.
Hogan, T. D. 1981. "Faculty Research Activity and the Quality of Graduate Training." Journal of Human Resources 16 (3): 400–15.
Hollis, A. 2001. "Co-Authorship and the Output of Academic Economists." Labour Economics 19 (4): 503–30.
Honaker, J., King, Gary, and Blackwell, M. 2010. Amelia II: A Program for Missing Data. http://cran.r-project.org/web/packages/Amelia/vignettes/amelia.pdf.
Jordan, J. M., Meador, M., and Walters, S. J. K. 1988. "Effects of Departmental Size and Organization on the Research Productivity of Academic Economists." Economics of Education Review 7 (2): 251–55.
Jordan, J. M., Meador, M., and Walters, S. J. K. 1989. "Academic Research Productivity, Department Size and Organization: Further Results." Economics of Education Review 8 (4): 345–52.
Keith, B., Layne, J. S., Babchuk, N., and Johnson, K. 2002. "The Context of Scientific Achievement: Sex Status, Organizational Environments and the Timing of Publications on Scholarship Outcomes." Social Forces 80 (4): 1253–81.
Kyvik, S. 1990. "Motherhood and Scientific Productivity." Social Studies of Science 20 (1): 149–60.
Laband, David N. 1986. "A Ranking of the Top U.S. Economics Departments by Research Productivity of Graduates." Journal of Economic Education 17 (1): 70–76.
Leahey, E. 2006. "Gender Differences in Productivity: Research Specialization as a Missing Link." Gender and Society 20 (6): 754–80.
Levin, S. G., and Stephan, P. E. 1991. "Research Productivity over the Life Cycle: Evidence for Academic Scientists." American Economic Review 82 (1): 114–31.
Maske, K. L., Durden, G. C., and Gaynor, P. C. 2003. "Determinants of Scholarly Productivity among Male and Female Economists." Economic Inquiry 41 (4): 555–64.
Mathew, A. L., and Andersen, K. 2001. "A Gender Gap in Publishing? Women's Representation in Edited Political Science Books." PS: Political Science and Politics 34: 143–47.
McCormick, J. M., and Bernick, E. L. 1982. "Graduate Training and Productivity: A Look at Who Publishes." Journal of Politics 44: 212–27.
McCormick, J. M., and Rice, T. W. 2001. "Graduate Training and Research Productivity in the 1990s: A Look at Who Publishes." PS: Political Science and Politics 34 (3): 675–80.
McDowell, J. M., and Smith, J. K. 1992. "The Effect of Gender Sorting on the Propensity to Coauthor: Implications for Academic Promotion." Economic Inquiry 30 (1): 68–82.
Miller, A. H., Tien, C., and Peebler, A. A. 1996. "Department Rankings: An Alternative Approach." PS: Political Science and Politics 29: 704–17.
Oster, S. M., and Hamermesh, D. S. 1998. "Aging and Productivity among Economists." Review of Economics and Statistics 80 (1): 154–56.
Over, R. 1982. "Does Research Productivity Decline with Age?" Higher Education 11: 511–20.
Porter, S. R., and Umbach, P. D. 2001. "Analyzing Faculty Workload Data Using Multilevel Modeling." Research in Higher Education 42 (2): 171–96.
Rodgers, J. R., and Neri, F. 2007. "Research Productivity of Australian Academic Economists: Human-Capital and Fixed Effect." Australian Economic Papers 46 (1): 67–87.
Sax, Linda J., Hagedorn, Linda Serra, Arredondo, Marisol, and Dicrisi, Frank A. III. 2002. "Faculty Research Productivity: Exploring the Role of Gender and Family-Related Factors." Research in Higher Education 43 (4): 423–45.
Schmidt, B. M., and Chingos, M. M. 2007. "Ranking Doctoral Programs by Placement: A New Method." PS: Political Science and Politics 40 (3): 523–29.
Suitor, J. J., Mecom, D., and Feld, I. S. 2001. "Gender, Household Labor, and Scholarly Productivity among University Professors." Gender Issues 19 (4): 50–67.
Taylor, Susan Washburn, Fender, Blakely Fox, and Burke, Kimberly Gladden. 2006. "Unraveling the Academic Productivity of Economists: The Opportunity Costs of Teaching and Service." Southern Economic Journal 72 (4): 846–59.
Thursby, J. G. 2000. "What Do We Say about Ourselves and What Does It Mean? Yet Another Look at Economics Department Research." Journal of Economic Literature 38: 383–404.
Tien, F. F., and Blackburn, R. T. 1996. "Faculty Rank System, Research Motivation, and Faculty Productivity: Measure Refinement and Theory Testing." Journal of Higher Education 67 (1): 2–22.
Wanner, R. A., Lewis, L. S., and Gregorio, D. I. 1981. "Research Productivity in Academia: A Comparative Study of the Sciences, Social Sciences and Humanities." Sociology of Education 54: 238–53.
Xie, Y., and Shauman, K. A. 1998. "Sex Differences in Research Productivity: New Evidence about an Old Puzzle." American Sociological Review 63: 847–70.
Youn, Ted I. K. 1988. "Studies of Academic Markets and Careers: An Historical Review." In Academic Labor Markets and Careers, ed. Breneman, D. and Youn, T. I. K., 8–27. New York: Falmer.
Table 1 References for Explanatory Variables for Scholarly Productivity
Table 2 Average Number of Articles Published by Subgroup
Table 3 Log of Articles as Dependent Variable, Based on Original Responses, with Missing Responses to Predictor Variables Excluded
Table 3.1 Log of Articles as Dependent Variable (Regressions with Imputed Data)
Table 4 Log of Total Productivity as Dependent Variable
Table 5 Log of Articles as Dependent Variable in Subgroup Regressions