
Assessing Student Learning Outcomes and Documenting Success through a Capstone Course

Published online by Cambridge University Press:  30 June 2010

Paul E. Sum, University of North Dakota
Steven Andrew Light, University of North Dakota

Abstract

Colleges and universities are increasingly intentional about meeting well-articulated and consistent general education goals and documenting substantive learning outcomes. Institutional imperatives to document the successful teaching of essential knowledge and skill sets frequently fall to faculty and departments, posing new challenges in an environment of time and resource constraints. A capstone course is an increasingly common method to measure student learning and assess programmatic and institutional success. We provide concrete suggestions to design a capstone course and assess student learning outcomes. After describing the structure of the course and four innovative assignments, we present the results of assessment conducted through the capstone. We further the conversation on the development of best practices and how political science departments can align institutional and programmatic goals and lead the way in university assessment.

Type: The Teacher

Copyright © American Political Science Association 2010

INTRODUCTION

Assessment is more than a buzzword. Colleges and universities across the United States are increasingly intentional about meeting well-articulated and consistent general education goals and documenting substantive learning outcomes. Although few would argue with the theoretical importance of measuring and documenting that undergraduate education is working in practice, assessment is not a simple matter. Nor does it happen overnight. Valid assessment of student learning requires a significant long-term commitment by faculty and administration, staff and students alike.

University-wide institutional imperatives to document the successful teaching of essential knowledge and skill sets frequently fall to faculty and departments, posing new challenges in a time- and resource-constrained environment. Faculty are at the front lines of designing course materials and assessment mechanisms, collecting data, and making sense of results. Department chairs are tasked with encouraging faculty to collect valid data and then implementing curricular or pedagogical changes—that is, closing the loop—without overburdening all involved.

A capstone course is a flexible medium to measure student learning and assess programmatic and institutional success (Berheide 2007). In this article, we describe an easily adoptable and adaptable model for a one-credit-hour capstone course that we designed to assess goals at the programmatic and institutional levels (see footnote 1). After highlighting a “mix-and-match” menu of innovative assignments and exercises, we present the results of our assessment of critical thinking and oral and written communication (see footnote 2). In this way, we further the conversation on the development of best practices and how political science departments can align institutional and programmatic goals and lead the way in university assessment.

ASSESSMENT AND POLITICAL SCIENCE DEPARTMENTS

Palomba and Banta define assessment as “the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development” (1999, 4). Assessment in higher education “provides tools and information that enable teachers to discern whether they are achieving their personal goals and the goals of their institutions” (Skocpol 2009, xi). Virtually all administrators and most faculty agree that assessment is “increasingly important in the academic world” and is “here to stay” (Deardorff, Hamann, and Ishiyama 2009, 3).

Recent scholarship reveals the expansion and enhancement of assessment in political science, offering strategies for how to foster a culture of assessment and design or implement standard scoring instruments, portfolios, and other techniques in both conventional and virtual classrooms (Deardorff, Hamann, and Ishiyama 2009). Departments are using an increasingly broad set of direct/indirect and external/internal measures of student learning for purposes of program evaluation. External measures include nationally recognized exams (direct) and surveys, such as the National Survey of Student Engagement (indirect); internal measures include portfolios, team scoring of student work, and simulations (direct), and student interviews and “in-house” surveys exploring specific program goals (indirect; Young 2009).

The small but growing literature on assessment through capstone courses finds that many programs are turning to the capstone as a primary source of information about the quality of instruction (Black and Hundley 2004), programmatic effectiveness (Wagenaar 1993), and the extent to which institution-wide goals are met (Henscheid 2000). Through capstone assessments, departments can report student learning outcomes, make informed adjustments to pedagogy and programs (such as adding specific skills exercises or methods and theory courses), and compensate for any deficiencies they detect. Faculty report improved work lives as a result of enhanced student skills that create a better learning environment (Kelly and Klunk 2003; Leach and Lang 2006; Berheide 2007). Some teachers embrace capstones as an easily comprehensible and therefore easier method of assessment. Berheide concludes that capstones efficiently and effectively measure student learning, resulting in an inadvertent but welcome outcome: “Surprisingly, the wrong reason—minimizing the additional work—has led to the right way to do program assessment” (2007, 27).

Although about one-third of political science departments are now using capstones for assessment (Ishiyama 2009, 67), there has been little exploration of best practices in designing and implementing a capstone specific to the discipline. We recently designed a capstone at our university to perform the dual role of accomplishing programmatic goals with regard to student learning outcomes and complying with and furthering the university's institutional goals. More broadly, our capstone is intended to:

  • Expose students to a holistic review of political science as a discipline, reviewing the broader themes that link the various subfields together

  • Allow students to reflect on their experience in the major and consider future applications of the major's themes and skills to a variety of civic and professional contexts

  • Meet university general education requirements in critical thinking and oral and written communication

  • Serve as an assessment method and programmatic guidepost for the department

  • Facilitate a process for closing the loop—that is, using the assessment data to guide and implement curricular or other pedagogical changes

These goals embrace the three types of assessment identified by Earl: summative assessment, or assessment of student learning; formative assessment, or assessment for learning; and assessment as learning (2004, 22–26). Summative assessment focuses on summarizing, measuring, and judging the quality of student work to certify and report learning outcomes. Formative assessment predominantly occurs in the classroom through exercises intended to provide instructors and students with information about student progress. Assessment as learning focuses on the student, involving him or her as an assessor and fostering self-assessment (Voparil 2009, 18–19). In the latter two types of assessment, the process itself becomes a teaching tool.

We believe that the secret of our success in designing and implementing an efficient, effective, and relatively painless capstone is straightforward: we seek to foster student buy-in from day one. The capstone's design and activities invest students with an understanding and ownership of the institutional rationale for assessment, the process and expectations at the department level, and the outcomes of their participation. Students exit the capstone feeling more deeply connected to the program and committed to the enhancement of the political science major after having participated in substantive assessment exercises linking their predecessors (past), peers (present), and those who will enter the major (future).

MAKING IT WORK: INTENTIONALITY, DESIGN, AND ACTIVITIES

Student learning goals operate at multiple levels: individual courses, departmental or programmatic missions, and university missions and general requirements. A capstone accommodates multiple forms of assessment that may address different and sometimes competing goals. In line with best practices in assessment, we first evaluated goals at these different levels to find points of convergence (Palomba and Banta 1999, 6–7). Critical thinking and effective communication transcended all of these levels. To these core goals, we added specific programmatic objectives corresponding to our department's mission and overall curricular structure: exposing students to a holistic view of the discipline of political science and facilitating student reflection on experiences in the major with an eye toward future application of central themes and concepts (e.g., practices of good citizenship). Finally, we purposefully considered the capstone as a vehicle for programmatic and institutional assessment, elevating it to a goal equal to others in our department's mission and reinforcing a departmental culture of assessment.

Armed with a clearer sense of our goals in relation to the vision of the department and university, we consciously developed activities to achieve those aims, while staying mindful of a programmatic one-credit-hour constraint (see footnote 3). We sought activities that would facilitate multiple forms of assessment, developing a menu of four exercises that together would provide a coherent and cohesive capstone experience.

Simulated Academic Conference

The primary activity of our capstone simulates an academic conference in which students present their own papers to one another. In the first class session, students are introduced to the concept and format of a traditional academic conference. Students resurrect a paper they have written for a political science course during their undergraduate career and prepare it for later presentation. The instructor collects paper titles and organizes students into panels according to general topic areas. Panel sessions occupy four of the eight weeks of the course. The instructor serves as both chair and discussant for each panel to set up themes and norms for student participation. The panel sessions aim to generate discussion on broad themes in political science that are reflected in the papers, which generally come from different courses. Sometimes these themes are obvious, but often the instructor (i.e., discussant) will raise questions on a more abstract level concerning themes that transcend individual courses, such as power, citizenship, accountability, legitimacy, and institutional structure and design. Conversation is lively and rich. Student presenters are eager to extend the relevance of their own papers, and the audience members, many of whom recall the writing assignment from a particular course, enthusiastically engage.

The academic conference format nicely serves two capstone goals: expose students to a holistic review of political science as a discipline, and facilitate student reflection on the undergraduate experience in terms of the discipline's themes and acquired skill sets. The presentations become artifacts for instructors to assess for effective oral communication skills. We conduct this assessment using a rubric for oral communication developed at the university level (see appendix). The papers on which the presentations are based serve as separate, albeit related, artifacts through which we assess critical thinking and written communication, also based on rubrics developed at the university level (see appendix). Thus, the simulated academic conference facilitates direct assessment of student products on key student learning outcomes for individual courses, the program, and the university.

We limit the assessment of oral communication to the instructor. However, students are involved in the assessment of the papers for critical thinking and written communication. Students submit three copies of their paper prior to the first panel session. The instructor reads each paper, assessing it using the rubrics for each of the student learning outcomes while teasing out common themes among papers on which to base discussion. Through a random exchange, students read two of their colleagues' papers and score them using the critical thinking and written communication rubrics. This exercise pulls students into the assessment process and heightens their understanding of abstract student learning outcomes. Often, the exercise is the first time students recognize that these programmatic goals are important to the major, as well as the first time they reflect on their own abilities comparatively.
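The article specifies only that the exchange is random and that each student reads two colleagues' papers; the exact mechanics are left to the instructor. Below is a minimal sketch of one way to run such an exchange in Python. The function and student identifiers are placeholders rather than part of the course materials; the rotation trick simply guarantees that every paper receives two peer reviewers and that no student scores their own work.

import random

def assign_peer_reviews(students, reviews_per_paper=2):
    # Shuffle once, then pair each reviewer with the authors sitting
    # 1 and 2 positions "ahead" on the shuffled roster. This gives every
    # paper exactly `reviews_per_paper` reviewers and rules out self-review
    # (assuming len(students) > reviews_per_paper).
    roster = list(students)
    random.shuffle(roster)
    n = len(roster)
    assignments = {reviewer: [] for reviewer in roster}
    for offset in range(1, reviews_per_paper + 1):
        for i, reviewer in enumerate(roster):
            assignments[reviewer].append(roster[(i + offset) % n])
    return assignments

# Example: a hypothetical twelve-student capstone section.
section = [f"student_{i:02d}" for i in range(1, 13)]
for reviewer, authors in assign_peer_reviews(section).items():
    print(reviewer, "scores the papers written by", authors)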

The simulated academic conference experience, in conjunction with the peer review process, exposes students to the three types of assessment: summative, formative, and assessment as learning (Earl 2004). The assessment through the rubrics accomplishes the summarizing, measuring, and judging that are integral to summative assessment and contributes to our ability to understand the reliability of the instrument by comparing instructor and student scoring. Formative assessment, or assessment for learning, takes place within the panel discussions. Instructors can identify any gaps in skill sets and observe students' ability to play with highly abstract political science concepts that, in many cases, students do not recognize in their own papers. Finally, peer evaluation introduces assessment as learning, encouraging students to actively engage in the assessment process and, as a result, internalize the concepts of critical thinking and effective communication. The feedback we have received from students supports this interpretation of the effectiveness of the exercise, as they report a fuller appreciation of the qualities of effective writing and sophisticated analysis.

The primary benefit of the simulated academic conference is the direct assessment of artifacts for the department's key student learning goals. However, the activity has other, less tangible benefits as well. For example, the format introduces students to the academic profession. If instructors take the role of chair and discussant seriously, the panels will generate deep discussion and new knowledge. Students can realistically and comfortably benchmark their knowledge and capabilities against those of their colleagues. Indeed, students seem insatiably curious to read each other's (anonymous) papers and score them on the rubrics. The process encourages them to reflect on their own undergraduate careers and the level of skill they possess or have acquired. Student feedback regularly includes praise for the quality of each other's work (see footnote 4).

Course Mapping Exercise

The capstone includes three additional activities that allow for indirect assessment of student learning outcomes. The first is a “course mapping” exercise that charts or maps student perceptions of where they gained—or at least were exposed to—instruction and activities that enhanced a particular skill. Students rate each of the 10 courses in the political science core curriculum on a Likert scale (1 to 5, with 1 meaning the course “did not enhance this skill at all” and 5 meaning that the course “entirely enhanced this skill”) for four key learning goals found in the department mission statement: critical thinking, written and oral communication, and understanding of the discipline. The results are particularly useful for departmental triangulation relative to the direct assessment conducted through the simulated academic conference activities. If a deficiency is found in a student learning outcome through direct assessment, the mapping exercise can identify the shortcoming in the core curriculum.
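For departments that want to tabulate the mapping forms quickly, a minimal aggregation sketch in Python follows. It assumes one form per student, with each core course rated on each learning goal; the function name, course labels, and ratings are invented for illustration, and the mean is only one plausible reading of the “aggregate” figures reported in Table 2.

from collections import defaultdict
from statistics import mean

def aggregate_course_map(forms):
    # forms: one dict per student, {course: {goal: rating on the 1-5 scale}}.
    # Returns the average rating per course per learning goal.
    ratings = defaultdict(lambda: defaultdict(list))
    for form in forms:
        for course, scores in form.items():
            for goal, value in scores.items():
                ratings[course][goal].append(value)
    return {course: {goal: round(mean(values), 2)
                     for goal, values in by_goal.items()}
            for course, by_goal in ratings.items()}

# Illustrative input: two students rating two core courses on two of the goals.
forms = [
    {"POLS 115": {"critical thinking": 3, "oral communication": 2},
     "POLS 250": {"critical thinking": 4, "oral communication": 5}},
    {"POLS 115": {"critical thinking": 2, "oral communication": 2},
     "POLS 250": {"critical thinking": 4, "oral communication": 4}},
]
print(aggregate_course_map(forms))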

Open-Ended Exit Survey

As part of their exit experience, students complete an open-ended survey that asks them to anonymously and candidly evaluate the strengths and weaknesses of the program and faculty, and to make recommendations for future development. Students have the opportunity to reinforce what the department is doing well and to voice concerns about departmental or programmatic issues not included in our student learning goals. For instance, consistent recommendations for better advertisement and recruitment for the major prompted us to develop the Learning Through Teaching activity we describe next. Besides its inherent value as a feedback mechanism, the survey provides another opportunity to triangulate its results with findings from other assessment methods.

Learning Through Teaching Activity

A final capstone activity requires student “teaching teams” to leave the comfort zone of their own classroom and deliver a presentation to small breakout groups of students enrolled in a 100-level introduction to American government course, a high-enrollment lecture populated largely by first-year students who are simply seeking to fulfill a university general education requirement. On a prearranged day, instructors create breakout groups of approximately 10 students. A pair of capstone students delivers a 30-minute presentation and facilitates group discussion on the nature of political science as a discipline; the ways in which basic concepts introduced in the American government course are woven through the rest of the major's curriculum; and the soon-to-be-graduates' impressions of the major, the field, and professional opportunities. The instructors circulate among the groups to observe the teaching teams' presentation style and interaction with the American government students.

We encourage the student teams to be creative and open-minded when designing their presentation. We provide suggestions, but students have a great deal of autonomy to develop their thoughts and structure the discussion. Whatever lesson plan they develop requires the student teachers to craft a presentation that effectively conveys substantive content and directly engages their (academically less experienced) peers. American government students assess the presentations on the oral communication rubric used to evaluate the simulated academic conference presentations. We therefore acquire another direct assessment of oral communication from a significantly different forum, one that demands a presentation approach distinct from the one students used to deliver their papers at the simulated academic conference.

In addition to furthering departmental goals concerning effective communication, the Learning Through Teaching activity assists the department in recruiting and advising prospective majors and communicating professional opportunities that stem from the program. This activity encourages capstone students to carefully consider the field of political science holistically; it fully embraces the spirit of Earl's (2004) assessment as learning, facilitates summative and formative assessment, and furthers programmatic and institutional goals. For example, we can compare student oral communication skills as teachers to their skills as presenters, because the same rubric is used to assess both. The activity has the added benefit of introducing American government students to the goals and processes of assessment used by the department.
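Because the same 0–9 oral communication rubric is applied in both settings, each capstone student ends up with a conference score (from the instructor) and a teaching score (from the American government students) that can be set side by side. The sketch below shows one way to tabulate that comparison; the function name, data layout, and the scores themselves are illustrative, not the department's actual records.

from statistics import mean

def compare_oral_scores(paired_totals):
    # paired_totals: {student: (conference_total, teaching_total)},
    # each total on the 0-9 oral communication rubric.
    conference = [c for c, _ in paired_totals.values()]
    teaching = [t for _, t in paired_totals.values()]
    return {
        "mean_conference": round(mean(conference), 2),
        "mean_teaching": round(mean(teaching), 2),
        "mean_difference": round(mean(t - c for c, t in paired_totals.values()), 2),
    }

# Invented totals for a handful of students.
print(compare_oral_scores({
    "student_01": (5, 8),
    "student_02": (6, 7),
    "student_03": (4, 8),
}))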

RESULTS

The capstone generates many diverse and complementary results through its various activities and the use of multiple assessment mechanisms. As noted previously, the simulated academic conference facilitates instructor assessment of oral communication based on student presentations. The conference also serves as the conduit for instructor and peer assessment of student papers for effective written communication and critical thinking. We have assessed each of the three student learning outcomes using rubrics developed at the university level. Each rubric distinguishes different dimensions of the broader concept. For example, written communication is broken down into a sense of purpose, guidance for the reader, and clarity and use of conventions, with four levels of attainment for each dimension of the skill, from developing (0) to mastered (3). The rubric for critical thinking follows the same logic and includes three dimensions: sense of purpose, analysis, and resolution. The range for each skill differs based on the construction of the rubric: oral and written communication are measured on a scale of 0 to 9 and critical thinking on a scale of 0 to 6 (see appendix).

Table 1 reports the results of the assessments for three years. The table shows the aggregate score on each dimension of each skill rubric, comparing peer review to instructor assessment. Our department designates an expectation of 2 for each dimension on the communications rubrics and 1 for each dimension on the critical thinking rubric.
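A small sketch of how rubric scores can be checked against these departmental expectations is given below. The dimension names follow the description above, the per-dimension thresholds are treated as targets for the cohort mean, and the cohort scores themselves are invented; none of this code reproduces the department's actual scoring tools.

from statistics import mean

# Rubric dimensions and per-dimension expectations as described in the text.
RUBRICS = {
    "written communication": (["purpose", "guidance", "clarity/conventions"], 2),
    "oral communication": (["purpose", "guidance", "style"], 2),
    "critical thinking": (["purpose", "analysis", "resolution"], 1),
}

def flag_shortfalls(scores):
    # scores: {skill: {dimension: [one score per student artifact]}}.
    # Returns (skill, dimension, cohort mean) for any dimension whose
    # mean falls below the departmental expectation.
    flags = []
    for skill, (dimensions, expectation) in RUBRICS.items():
        for dim in dimensions:
            avg = mean(scores[skill][dim])
            if avg < expectation:
                flags.append((skill, dim, round(avg, 2)))
    return flags

# Invented instructor scores for a three-student cohort.
cohort = {
    "written communication": {"purpose": [2, 3, 2], "guidance": [2, 2, 3],
                              "clarity/conventions": [3, 2, 2]},
    "oral communication": {"purpose": [2, 2, 1], "guidance": [1, 2, 1],
                           "style": [2, 1, 2]},
    "critical thinking": {"purpose": [1, 2, 1], "analysis": [1, 1, 2],
                          "resolution": [1, 1, 1]},
}
print(flag_shortfalls(cohort))   # here, only the oral communication dimensions fall short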

Table 1 Results from Direct Assessment, 2007–2009

For the most part, our expectations were met for written communication and critical thinking. Occasional scores below the established threshold have occurred, but these have largely been isolated incidents, and the scores have remained very close to the expectation. However, assessment of oral communication skills has yielded scores consistently lower than departmental expectations. After reviewing the 2007 results, the department took action, as described in the following paragraphs. The scoring from the direct assessment raised the question of where students were (or were not) gaining these skills within the broader core curriculum for political science majors. We considered this question when reviewing results from the course mapping exercise. Table 2 shows the results from 2009 as an example (see footnote 5).

Table 2 Results from Course Mapping Exercise, 2009

Note: Numbers represent aggregate responses (N = 32) to rating each course according to how much each departmental mission goal was enhanced, with 1 = course did not enhance this goal at all and 5 = course entirely enhanced this goal.
Course listings are as follows: 115 = American Government; 116 = State and Local Government; 220 = International Politics; 225 = Comparative Politics; 250 = Public Administration; 300 = Research Methods; 305 = Constitutional Law: Institutional Powers (elective); 306 = Constitutional Law: Civil Rights/Liberties (elective); 310 = Political Thought; 405 = Political Behavior; 432 = Public Policy; and 495 = Senior Colloquium (Capstone).

Three years of results, from 2007 to 2009, reveal that students perceive that different courses emphasize and enhance different skill sets. The two 100-level courses show lower scores on all the skill sets, with a mixture of scores among the remaining core courses. These perceptions are plausible: there is no reason to believe that every course and instructor would emphasize and enhance all skills, although all courses received relatively high scores in terms of enhancing students' “understanding of the discipline.”

Despite the expected variance among courses, one pattern stood out from the mapping exercise. Students perceived only two required courses as enhancing oral communication: public administration (250) and the capstone (495). Although indirect measures are not the most reliable means of assessment, these results provided a compelling explanation for why oral communication measured through direct assessment fell below our expectations. When we discussed this result with our colleagues, we heard what we had already concluded. Among courses offered in the core curriculum, only one included formal presentations. Many electives also emphasized oral communication, but as an independent decision of the instructor, leaving the possibility that students would not be fully exposed to instruction and exercises that would enhance this skill. Faculty members agreed that building oral presentation exercises into their classes was difficult because of class size and time constraints.

The Learning Through Teaching activity, first used in 2009, generated intriguing results. The scores generated by the American government students across all teaching teams were extremely consistent, averaging 2.6 out of 3 across the three categories of oral communication (purpose, guidance, and style). These scores exceeded departmental expectations. There are, of course, some questions regarding the validity of the scores. Given time constraints, we had limited opportunity to explain the use of the rubric and no opportunity to have the American government students practice applying the oral communication rubric—what is sometimes referred to as “norming” the instrument. Our ability to observe the presentations of each teaching team member was also constrained by our need to circulate among breakout groups, and we therefore could not generate comparative assessment scores. However, whether the student scores were valid because of the attention paid to oral communication in the capstone's academic conference exercise or throughout the major, or inflated owing to students' lack of familiarity with the rubrics, politeness, or awe of their older, wiser peers, they were not inconsistent with our observations. Informal survey feedback from both capstone and American government students was extremely positive, with the foremost observation from both constituencies being that the activity helped them to better understand “how everything fits together” in the discipline and what one might “do” with a political science major.

A final set of results was derived from the open-ended exit surveys that students complete as part of the capstone. The questionnaire asks students to evaluate the strengths and weaknesses of the major and what, if anything, they would change about the major or department. The responses offer many insights, but we summarize here several of the major themes related to the departmental goals for student learning. Among the strengths, students have consistently appreciated the department's commitment to student writing. They also emphasize the major's strong tradition in critical thinking and analysis. Students appreciate the dedication and accessibility of the faculty. Among the weaknesses, students note the lack of opportunities to formally present their work and ideas orally throughout the major.

The results from assessment through the capstone have illuminated both programmatic strengths and weaknesses. Maintaining the status quo on the strengths is an easy task. However, taking action to address weaknesses is a more significant undertaking. The most serious issue that arose from capstone assessment was a deficiency in oral communication skills, which was apparent from one direct and two indirect methods of assessment. Our department approached this problem in three ways. First, each faculty member agreed to include more oral presentation exercises in coursework throughout the curriculum. Second, we added the Learning Through Teaching exercise to the capstone. Third, we created a plan to piggyback on our university's recent adoption of a general education requirement that all undergraduate students complete a course in public speaking. We will continue to monitor the results and make adjustments as needed.

A less tangible but no less important programmatic change has been an increased effort to speak the language of student learning goals when describing student assignments and activities within classes. The capstone demonstrates student enthusiasm for clear-cut statements regarding critical student learning objectives. Students also appreciate receiving copies of the rubrics used in the capstone as guides to help them write papers and prepare presentations with these objectives in mind. This technique has been introduced into several upper-division courses.

CONCLUSION

National trends in higher education suggest that the institutional imperative to conduct assessment will not disappear anytime soon. Assessment that is conducted correctly facilitates better student learning and therefore programmatic success. Recognizing the separate yet intersecting goals inhering in assessment—both programmatic and institutional—our department selected a capstone course as the apex not only of the substantive elements of our major, but also of our assessment efforts. We examined our mission, learning objectives, and curriculum to identify particular learning goals. We also engaged in conscious discussion of our program, identifying questions and concerns and building consensus on how best to transform our capstone to achieve the assessment imperatives that initially emanated from the university. We benefited from several factors that may or may not be present in other departments, including a highly collegial faculty, strong consensus on the meaning of and rationale for assessment, and rapid buy-in by faculty and students alike on the methods we selected to comport with both the departmental mission and institutional imperatives to assess for broad skills (rather than the content of the major or political science as a discipline). We made a number of conscious choices that worked for our department, but which may not be right for all departments. As Deardorff and Folger acknowledge, “ideal circumstances frequently do not exist” for assessment (2009, 79).

As we have continued to develop the capstone, we have become increasingly mindful that designing and incorporating activities that capture different forms of assessment—summative, formative, and as learning—maximizes the benefits of assessment not only for the program and institution, but also for students. A menu of activities, such as the simulated academic conference and the Learning Through Teaching exercise, makes the whys and wherefores of assessment transparent and encourages student participation in achieving learning outcomes. Achieving student buy-in, at first considered an unexpected but happy byproduct of our capstone design, has become an intentionally integral feature of activities that help us to “make it work.”

Using results generated by the capstone, our department is building a culture of assessment that facilitates across-the-board programmatic enhancement and boosts student learning opportunities. We expect to see an increasing return on our department's investment of time and resources in the capstone, which, ideally, students themselves will recognize. Assessment therefore is not an end in itself, but a process that achieves multiple ends. Well-designed mechanisms can help political science departments achieve assessment goals at the programmatic and institutional levels. The model capstone we describe here invests students with excitement and enthusiasm about the rationale, process, and outcomes of assessment. Faculty therefore find assessment through the capstone to be invigorating rather than enervating. The capstone in political science becomes an efficient and effective vehicle to achieve the ultimate objective of assessment in higher education: student learning.

APPENDIX: Rubrics for Assessment


Footnotes

1 The University of North Dakota is a mid-sized, Carnegie-designated “high research activity” institution that places a premium on faculty-student contact, general education, and the liberal arts. Our department has nine full-time faculty members and 120 majors in political science and public administration. The department recently won the university's teaching award, and most faculty have been individually recognized for teaching excellence. The university also has commended the department for institutional leadership in both general education and assessment.

2 We acknowledge that institutional differences may drive a department's rationale for assessment, selection of methods for data collection, and relative success in achieving articulated goals. For instance, university imperatives initially prompted our department to develop and implement assessment practices and focus on skills-based rather than content-based assessment. The department benefited from a high degree of collegiality and faculty/student buy-in throughout the process, which might be interpreted as elements of a preexisting culture of assessment. Because one size does not fit all, we encourage departments to engage in self-reflection, as well as to consider external imperatives, as they undertake assessment.

3 Our capstone meets once per week, with 100-minute sessions for the first eight weeks of a 16-week semester, thereby meeting the face-time requirements for a one-credit-hour course in half a semester. However, the course can easily be adapted to more conventional 50- or 75-minute sessions, as well as to a conventional three-credit-hour version, as we recently have done to comport with new university requirements.

4 The primary shortcoming of the conference format is its inability to generate value-added results. The one-credit-hour course constraint precludes a longitudinal design. However, our model capstone might easily be transformed into a three-credit-hour course in part by requiring students to write an original paper and present it under similar conditions. The same assessment process would be applied to the second set of papers. This adaptation allows for the comparison of scores from papers written prior to and during the course, creating a pretest/posttest data set.

5 For reasons of space and clarity, we do not show all three years of mapping exercise results. However, table 2 illustrates the usefulness of this method of assessment, and we discuss the broader results in the Results section.

References

Berheide, Catherine White. 2007. “Doing Less Work, Collecting Better Data: Using Capstone Courses to Assess Learning.” Peer Review 9: 27–30.
Black, Karen E., and Stephen P. Hundley. 2004. “Capping Off the Curriculum.” Assessment Update 16 (1): 3.
Deardorff, Michelle D., and Paul J. Folger. 2009. “Making Assessment Matter: Structuring Assessment, Transforming Departments.” In Assessment in Political Science, ed. Michelle D. Deardorff, Kerstin Hamann, and John Ishiyama, 77–95. Washington, DC: American Political Science Association.
Deardorff, Michelle D., Kerstin Hamann, and John Ishiyama, eds. 2009. Assessment in Political Science. Washington, DC: American Political Science Association.
Earl, Lorna M. 2004. Assessment as Learning: Using Classroom Assessment to Maximize Student Learning. Thousand Oaks, CA: Corwin.
Henscheid, J. M. 2000. Professing the Disciplines: An Analysis of Senior Seminars and Capstone Courses. Columbia, SC: University of South Carolina Press.
Ishiyama, John. 2009. “Comparing Learning Assessment Plans in Political Science.” In Assessment in Political Science, ed. Michelle D. Deardorff, Kerstin Hamann, and John Ishiyama, 61–75. Washington, DC: American Political Science Association.
Kelly, Marisa, and Brian E. Klunk. 2003. “Learning Assessment in Political Science Departments: Survey Results.” PS: Political Science and Politics 36: 451–55.
Leach, Melinda, and Gretchen Chesley Lang. 2006. “The Not-So-Stony Path to Program Assessment and, Along the Way, Transforming a Senior Capstone Seminar in Anthropology.” University of North Dakota Assessment Committee Newsletter, November. http://www.und.nodak.edu/dept/datacol/assessment/newsletter/2006nov_anth.pdf.
Palomba, Catherine A., and Trudy W. Banta. 1999. Assessment Essentials: Planning, Implementing, Improving Assessment in Higher Education. New York: Jossey-Bass.
Skocpol, Theda. 2009. “Foreword.” In Assessment in Political Science, ed. Michelle D. Deardorff, Kerstin Hamann, and John Ishiyama, xi–xiii. Washington, DC: American Political Science Association.
Voparil, Christopher J. 2009. “Assessing for Understanding: Toward a Theory of Assessment as Learning.” In Assessment in Political Science, ed. Michelle D. Deardorff, Kerstin Hamann, and John Ishiyama, 17–37. Washington, DC: American Political Science Association.
Wagenaar, Theodore C. 1993. “The Capstone Course.” Teaching Sociology 21: 209–14.
Young, Candace C. 2009. “Program Evaluation and Assessment: Integrating Methods, Processes, and Culture.” In Assessment in Political Science, ed. Michelle D. Deardorff, Kerstin Hamann, and John Ishiyama, 117–39. Washington, DC: American Political Science Association.