
Measuring the Impact of Externalities on College of Agriculture Teaching Evaluations

Published online by Cambridge University Press:  28 April 2015

Ronald A. Fleming, Department of Agricultural Economics, University of Kentucky, Lexington, KY
Ernest F. Bazen, Department of Agricultural Economics, University of Tennessee, Knoxville, TN
Michael E. Wetzstein, Department of Agricultural and Applied Economics, University of Georgia, Athens, GA

Abstract

Student evaluation of teaching (SET) is employed as an aid in improving instruction and in determining faculty teaching effectiveness. However, economic theory indicates the existence of externalities in SET scores that directly influence their interpretation. To test for these externalities, a multinomial-choice, ordered-data estimation procedure is employed to identify course externalities that influence SET. These externalities include student class standing, whether the course is required, class size, the days on which a class meets, class meeting time, classroom location, and classroom design. Results indicate that externalities have a significant impact on teaching evaluations. Thus, failure to internalize these externalities will bias SET scores and make their use as an aid in improving instruction and determining faculty effectiveness questionable.
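The "multinomial-choice, ordered data estimation procedure" the abstract refers to is typically an ordered probit or ordered logit model, in which an observed ordinal SET score is treated as arising from a latent continuous rating crossed against estimated thresholds. The sketch below is not the authors' code: it uses simulated data and invented covariate names purely to illustrate how such a model could be fit to SET scores with course-externality regressors using statsmodels.

```python
# Minimal sketch (assumed setup, not the paper's data or code): ordered probit of
# simulated 1-5 SET scores on hypothetical course-externality covariates.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500

# Hypothetical course-level data; all variable names are illustrative only.
df = pd.DataFrame({
    "class_size":    rng.integers(10, 120, n),   # enrolled students
    "required":      rng.integers(0, 2, n),      # 1 = required course
    "meets_per_wk":  rng.choice([2, 3], n),      # days per week the class meets
    "early_morning": rng.integers(0, 2, n),      # 1 = early-morning section
    "upper_level":   rng.integers(0, 2, n),      # 1 = mostly juniors/seniors
})

# Simulate a latent satisfaction index and cut it into an ordered 1-5 SET score.
latent = (-0.01 * df["class_size"] - 0.4 * df["required"]
          - 0.3 * df["early_morning"] + 0.2 * df["upper_level"]
          + rng.normal(size=n))
df["set_score"] = pd.cut(latent, bins=[-np.inf, -1.0, -0.3, 0.3, 1.0, np.inf],
                         labels=[1, 2, 3, 4, 5])

# Ordered probit: regress the ordinal SET score on the externality covariates.
exog = df[["class_size", "required", "meets_per_wk", "early_morning", "upper_level"]]
model = OrderedModel(df["set_score"], exog, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```

In this kind of specification, a negative and significant coefficient on a covariate such as class size would indicate that larger classes shift the latent rating downward and hence lower the probability of a top SET score, which is the sense in which an externality biases the evaluation independently of instructor performance.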

Type: Articles
Copyright © Southern Agricultural Economics Association 2005

