
Assessing Undergraduate Student Learning in Political Science: Development and Implementation of the “PACKS” Survey

Published online by Cambridge University Press:  03 January 2017

Emily Sydnor, Southwestern University
Nicole Pankiewicz, Miami University (Ohio)

Abstract

This article describes the creation and implementation of a new online assessment program (“PACKS”) for the department of politics at the University of Virginia. It discusses the benefits of online assessments, including the ease of administration, minimal faculty involvement, ability to link assessment data to existing student data (e.g., GPA and courses completed), and ability to track student progress over time. The assessment can be easily adapted for use by other departments in the social sciences and by other colleges and universities. The authors discuss the drawbacks to this type of assessment, including the challenge of obtaining a high response rate. They recommend using a strong incentive to ensure full participation, such as an advising hold that prevents students from registering until they complete the assessment. The authors contend that implementing survey-based assessment tools is an ideal way for departments to meet their accrediting institutions’ assessment requirements.

Copyright © American Political Science Association 2017

Colleges and universities are bound by their regional accrediting body to incorporate department-level assessments of student learning into their programs. Departments are required to identify specific learning objectives for their undergraduates and to develop methods for determining whether those objectives have been met. A 2013 APSA survey of political science departments indicated that various methods are used to assess student learning, including participation in a senior capstone course (76%), rubrics (77%), and performance assessment and culminating projects (60% each) (Young 2016).

Each approach to assessment requires different levels of commitment from faculty and staff. The four most frequently used approaches listed previously—capstones, rubrics, performance assessment, and culminating projects—demand a heavy investment of time on the part of already-overburdened faculty. Student surveys, although used less frequently by political science departments, have relatively low administrative overhead yet provide data that allow departments to measure and track not only the effects of particular courses or programs but also individual-level learning over the course of the program. This article describes how our own political science department designed and implemented an assessment program that can be adapted for use by other departments in the social sciences. We contend that survey-based assessment tools are an ideal way for departments to meet their assessment requirements.

Drawing on our experience in developing an assessment program at the University of Virginia, we demonstrate that online assessments reduce faculty and classroom time devoted to assessment, facilitate evaluation over time, and increase student participation. To further reduce the burden on faculty and to increase participation while minimizing selection bias, we encourage tying online assessment programs to registration holds, which require students to complete the assessment before they can register for courses. We found that without the registration hold, the assessment oversamples high-achieving students, skewing the department’s perception of how well its learning objectives are being met.

In short, the assessment program we developed minimizes the need for faculty to observe, conduct interviews, or assess final projects. In addition, the registration hold produces a response rate of almost 100%.

OVERVIEW

We argue for a particular method of program assessment: online surveys that can be adapted to whatever content an individual department wants to assess. We begin by providing details regarding program requirements and the logistics of our assessment—how it is administered, how students are notified, and how participation is monitored. Next, we compare several incentive structures to demonstrate selection bias in the absence of a compliance tool such as the registration hold. We also show that online survey responses can be connected to institutional student data to explore variations in student success among groups (e.g., athletes, minority students, and students in honors programs).

The article then presents ideas about how shortcomings in our assessment program might be remedied. Finally, we offer several recommendations to departments interested in implementing a similar program: obtaining buy-in from faculty members and graduate-student teaching assistants; providing strong incentives for students to participate; and, most important, utilizing technology to streamline the administration, analysis, and usefulness of the assessment.

PACKS: POLITICS ASSESSMENT OF CORE KNOWLEDGE SURVEY

We named our assessment program the Politics Assessment of Core Knowledge Survey (PACKS). We began designing PACKS in 2012 in response to a university requirement that each department create assessments of student learning. Before the development of PACKS, our department had no assessment method on a programmatic level and instead relied on individual capstone courses.

Our department’s primary learning objectives are (1) core knowledge of our four subfields: American politics, political theory, comparative politics, and international relations; (2) research-focused analytic skills (i.e., the ability to understand and conduct basic social science research); and (3) critical thinking. PACKS assesses student achievement in two of our department’s three primary learning objectives: core knowledge and research-focused analytic skills. The third objective, critical thinking, is evaluated through the work students produce in seminar courses during their third and fourth years of study and is not addressed in this article.

In creating PACKS, we turned to other assessments of factual knowledge in political science, culling questions from introductory texts in our field, from exams used in our own and other political science programs, and from the Advanced Placement tests in American government and comparative politics developed by the Educational Testing Service (ETS). Using these questions, we built a large bank of multiple-choice and short-answer political science core-knowledge questions.

The main content of PACKS consists of multiple-choice questions designed to measure objective knowledge in political science. Each administration of the survey also includes one of six questions designed to evaluate students’ ability to interpret graphical and quantitative presentations of political science-related information. These questions measure students’ social science literacy by introducing them to the type of data they might encounter in everyday life (e.g., a chart illustrating the changing perceptions of Santa Claus’s partisanship over time). Overall, this bank of questions provides multiple direct methods to assess student learning and program effectiveness.

We drew on our bank of questions to produce six short, five-question assessment surveys. Each survey contained one question from each of our four subfields (i.e., American politics, comparative politics, international relations, and political theory) and one methodological question. We administered PACKS using LimeSurvey, a free open-source survey platform that was customized by the University of Virginia’s Political Cognition Laboratory. Administering the assessment through LimeSurvey has important advantages. For example, its flexibility—especially the use of identifying tokens—allows us to link assessment scores to existing student information, such as year in school, cumulative GPA, enrollment in our methods course, and whether a student has declared as a foreign affairs or government major (i.e., the two options offered in our program). The LimeSurvey platform—or any similar online survey platform—provides a quick and easy way to send reminders, track those who have completed the assessment, and ensure that each student participates only once. Another important benefit is that using customized survey software provides complete control of the data collected.
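
To illustrate how such versions might be assembled, the sketch below draws one question per category (our four subfields plus methods) from a hypothetical question bank so that no question repeats within or across the six versions. The bank structure, question IDs, and function names are illustrative only, not our actual implementation.

```python
import random

# Hypothetical question bank: each entry tags a question ID with its category.
# The structure and IDs are illustrative only.
QUESTION_BANK = {
    "american":      ["AM01", "AM02", "AM03", "AM04", "AM05", "AM06"],
    "comparative":   ["CP01", "CP02", "CP03", "CP04", "CP05", "CP06"],
    "international": ["IR01", "IR02", "IR03", "IR04", "IR05", "IR06"],
    "theory":        ["PT01", "PT02", "PT03", "PT04", "PT05", "PT06"],
    "methods":       ["MT01", "MT02", "MT03", "MT04", "MT05", "MT06"],
}

def build_versions(bank, n_versions=6, seed=2012):
    """Assemble survey versions, each with one question per category."""
    rng = random.Random(seed)
    # Shuffle each category once so every question is used exactly once
    # across the six versions (no repeats within or across versions).
    shuffled = {cat: rng.sample(qs, len(qs)) for cat, qs in bank.items()}
    return [{cat: shuffled[cat][i] for cat in bank} for i in range(n_versions)]

for i, version in enumerate(build_versions(QUESTION_BANK), start=1):
    print(f"PACKS version {i}: {version}")
```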

Students were randomly assigned to one of the six versions of PACKS and were sent an e-mail (through LimeSurvey) explaining that they must take the assessment to have their registration hold lifted. LimeSurvey uses “tokens”—that is, unique identifiers that permit us to identify each respondent. The token system also makes it easy to send reminders to students who have not yet completed PACKS. The text of the initial e-mail and the reminders that students received are in table 1.
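
Our LimeSurvey installation was customized locally, so the details differ, but as a rough sketch of how this token workflow could be scripted, the example below uses LimeSurvey's RemoteControl 2 (JSON-RPC) API, assuming that API is enabled on the server. The URL, credentials, survey ID, and student record are placeholders.

```python
import requests

API_URL = "https://survey.example.edu/index.php/admin/remotecontrol"  # placeholder

def rpc(method, params):
    """Minimal JSON-RPC call to the LimeSurvey RemoteControl 2 API."""
    resp = requests.post(
        API_URL,
        json={"method": method, "params": params, "id": 1},
        headers={"content-type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Open an API session (credentials are placeholders).
key = rpc("get_session_key", ["admin_user", "admin_password"])
survey_id = 123456  # placeholder ID for one PACKS version

# Register students as token participants; LimeSurvey generates the unique
# token that later lets us link each response back to student records.
students = [
    {"firstname": "Jane", "lastname": "Doe", "email": "jd1ab@example.edu"},
]
rpc("add_participants", [key, survey_id, students, True])

# Send the initial invitation e-mail, then (days later) remind only those
# participants who have not yet completed the assessment.
rpc("invite_participants", [key, survey_id])
rpc("remind_participants", [key, survey_id])

rpc("release_session_key", [key])
```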

Table 1 Text of Initial and Reminder Emails Sent to Students

Online administration of assessments may raise concerns that students will collaborate or research the answers to our assessment questions or, alternatively, that they will submit answers without reading the questions. We do not believe either scenario occurs with any regularity. First, students take seriously our university’s honor code and understand that outside assistance with PACKS is a violation. Second, the average and median scores on these assessments hovered around 60%. If students were researching answers or collaborating, it is likely that the average would be much higher. If students were randomly guessing, the median score would be closer to 20%. Instead, as figure 1 shows, the scores of those taking PACKS in order to have their registration hold lifted are normally distributed. Third, online tests and quizzes are used regularly at our university; our students are familiar with the processes and rules that govern online assessment. That said, we recommend that departments adopting online assessments determine whether cheating or random answering may be affecting results by recording the amount of time students spend on each question or by including attention-check items in each survey.
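
As one illustration of that recommendation, the sketch below flags respondents in an exported response file who either finished implausibly quickly or missed an attention-check item. The file name, column names, and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical export of PACKS responses; column names are illustrative.
responses = pd.read_csv("packs_responses.csv")

# Flag respondents who finished implausibly fast (e.g., under 60 seconds
# for a five-question survey) or who missed the attention-check item.
too_fast = responses["total_seconds"] < 60
failed_check = responses["attention_check"] != "correct"

flagged = responses[too_fast | failed_check]
print(f"{len(flagged)} of {len(responses)} respondents flagged for review")
```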

Figure 1 Distribution of PACKS Scores (Using Advisor Registration Hold)

USING REGISTRATION HOLDS TO INCREASE PARTICIPATION

Compliance was the primary hurdle we faced with PACKS. Because students are inundated with university e-mails, they often do not open messages from department administrators. Even when they did read our e-mails, many students accidentally deleted the message containing their unique link to the assessment or simply did not respond in time. Compounding the problem, students have little interest in completing assessment materials and no incentive to do so, even when those materials require minimal time or effort. To improve our PACKS response rate and ensure that all of our students completed the assessment—not only those who are highly motivated, interested, or responsible—we offered different incentives during various administrations of the assessment, including the chance to win tickets to a campus event featuring Secretary of State John Kerry, gift cards to a local bagel shop, and t-shirts. Ultimately, we found that we could obtain an almost 100% response rate by preventing students from registering for classes until they completed the assessment.

We designed a system to let faculty advisors know which students had completed PACKS (see footnote 1) and emphasized to them the importance of making sure their advisees completed PACKS before removing the registration hold. Graduating seniors, however, were not subject to the registration hold. Having these students take PACKS was nonetheless important for three reasons. First, seniors should display the greatest amount of core knowledge and analytical skills. Second, from a methodological standpoint, seniors provide data on response rates for a group not tied to the registration hold. Third, having graduating seniors take the assessment helped us to determine the difficulty of the various combinations of PACKS questions. To get the most from our graduating seniors’ participation, we asked them to complete the questions from all six PACKS—a total of 31 questions—to compare the difficulty of different sets of questions for the same individual. In 2013, we offered seniors an incentive: a chance to win a department t-shirt that included the names of all graduating political science majors (a $15 value). We did not offer an incentive in 2014.

As shown in table 2, completion of PACKS varied greatly based on the incentive offered, and the registration hold was by far the most effective.

Table 2 Registration Holds Greatly Increase Response Rates

MEASURING PROGRAM EFFECTIVENESS

As students complete PACKS, we can report program-level statistics, controlling for student background and progress in our programs. As PACKS becomes institutionalized, it also will be possible to analyze individual-level student data to determine which courses and academic milestones are producing significant factual and analytic learning.

Because the assessment was online and tied to registration holds, we could easily connect it to existing academic information—such as GPA, race or ethnicity, gender, and athletic status—without inadvertently priming stereotype threat by asking students to self-report this information (Steele 2010). We can easily access a range of data points for accreditation agencies and track students’ learning as they progress through our program. For example, to investigate concerns about the academic success of minority students compared to white students, we can use our linked data to quickly compare their PACKS scores. Alternatively, we can show whether gender or race is tied to learning outcomes in our program. Using OLS regression of PACKS scores on the demographic and academic characteristics described previously, our own analysis showed that in 2014, the largest statistically significant (p < 0.01) predictors of a student’s score on PACKS were year in school (see footnote 2) and GPA (table 3). However, controlling for all other factors, minority students were also likely to perform systematically worse on PACKS than white students (p = 0.011). This suggests that we need to better understand the learning experiences of our minority students.

Table 3 GPA and Year in School Predict Assessment Scores

Notes: Cell entries are OLS regression coefficients with standard errors in parentheses. Other than GPA and year in school, each variable is coded as a dummy, where 1 indicates inclusion in the category described. For “Government Major,” the omitted category is our other major within the political science department, Foreign Affairs.

* Indicates statistical significance at p < 0.05.

** Indicates statistical significance at p < 0.01.
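
To illustrate the kind of model summarized in table 3, the sketch below merges assessment scores with registrar data on a shared student identifier and fits an OLS regression using statsmodels. The file names, column names, and model specification are illustrative rather than our exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: per-student PACKS scores and registrar data keyed by
# the same student identifier. Column names are illustrative.
scores = pd.read_csv("packs_scores.csv")        # columns: student_id, score
registrar = pd.read_csv("registrar_data.csv")   # columns: student_id, gpa, year,
                                                # minority, female, athlete, gov_major

data = scores.merge(registrar, on="student_id", how="inner")

# OLS of assessment score on academic and demographic characteristics,
# mirroring the structure (not the exact specification) of table 3.
model = smf.ols(
    "score ~ gpa + year + minority + female + athlete + gov_major",
    data=data,
).fit()
print(model.summary())
```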

Not only does this data integration facilitate cross-sectional analysis of different groups within our major, but it also enables a “within-subjects” form of assessment. After students declare their major in their sophomore or junior year, they will take PACKS multiple times. We can examine their scores over time—the equivalent of a pretest and posttest design—which allows us to establish how our program is contributing to student-learning outcomes.
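
A minimal sketch of this within-subjects comparison, assuming a long-format file with one row per student per administration (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical long-format data: one row per student per PACKS administration.
# "admin_number" orders a student's administrations (1 = first time taking PACKS).
panel = pd.read_csv("packs_panel.csv")  # columns: student_id, admin_number, score

panel = panel.sort_values(["student_id", "admin_number"])
first = panel.groupby("student_id")["score"].first()   # pretest score
latest = panel.groupby("student_id")["score"].last()   # most recent score

# Within-student change approximates a pretest/posttest comparison.
change = (latest - first).rename("score_change")
print(change.describe())
```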

In summary, PACKS is a powerful assessment tool not only because of its ties to registration holds but also because of our ability to integrate it with existing demographic data to facilitate comparisons within and across groups.

LIMITATIONS

One limitation we currently face with PACKS is content validity: it is too soon to determine whether our questions accurately reflect the knowledge that students are gaining from our courses. However, as PACKS continues, we will have pre- and post-course data that will improve the accuracy of our measures. Collaboration with faculty and graduate students on question development has improved significantly, and questions are now more frequently tied to the learning objectives and content of specific courses.

Our main limitation is the way that advisor holds are placed and released. Although we have the ability to place an advisor hold for every major in the department, releasing the hold is at the discretion of a student’s faculty advisor. Furthermore, although we have a method for informing advisors about student participation, the tracking notification system requires the time and resources of our department administrative staff. We are working with the university registrar to gain permission to place a special “PACKS hold” controlled by the assessment coordinator. It would be similar to the type of hold students are subject to if, for example, they have excessive library fines. Using a separate hold would reduce the burden on faculty by limiting the necessity for day-to-day involvement during those weeks that we administer PACKS.
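
One possible way to reduce the staff time spent on the tracking notification system is sketched below: read a nightly participant export, group completed advisees by advisor, and produce a per-advisor summary that could then be e-mailed. The file layout and column names are hypothetical.

```python
import pandas as pd

# Hypothetical nightly export from the survey platform: one row per student,
# with completion status and the assigned faculty advisor.
participants = pd.read_csv("packs_participants.csv")
# columns: student_id, student_name, completed (True/False), advisor_email

completed = participants[participants["completed"]]

# Build one summary per advisor listing which advisees have finished PACKS,
# ready to paste into (or send as) the nightly notification e-mail.
for advisor, group in completed.groupby("advisor_email"):
    names = ", ".join(sorted(group["student_name"]))
    print(f"To {advisor}: completed PACKS -> {names}")
```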

We do not contend that assessment strategies like PACKS should be the only form of assessment that a department implements—our own department uses multiple forms. As currently designed, PACKS is best at assessing factual recall, analytical thinking, and data literacy. It cannot assess a student’s writing or ability to synthesize information gathered across the range of classes in the political science major. For departments interested in examining these learning outcomes, a capstone class or culminating project is more useful. Our department uses PACKS in conjunction with a capstone course.

RECOMMENDATIONS

We make three recommendations for departments interested in designing a new assessment program. First, those in charge of assessments should attempt to obtain buy-in not only from faculty but also from graduate-student teaching assistants. We chose to create our initial question bank using multiple sources; however, subsequent iterations included more questions written by our faculty and graduate students. These questions not only improve the validity of PACKS but also demonstrate increased departmental support of the endeavor.

Second, we recommend that department assessments be tied to a strong incentive to ensure full participation. Letting students “opt in” to the assessment led to skewed results; once we instituted the advisor hold, our results were more varied and, we believe, more representative.

Third, and most important, we recommend using technology to conduct assessments. The LimeSurvey program is free and customizable, making it a good choice for departments unwilling to use their limited budget on assessments. The relatively low cost of PACKS provides an advantage over other existing survey instruments from professional organizations, such as the ETS field test. However, any online survey tool (e.g., Qualtrics or Google Forms) would work as well. By using an online assessment platform, we can connect individual assessment scores to information we already have about each student (e.g., race or ethnicity, gender, GPA, and whether they have taken a specific course). The technology provides a way to notify students about the assessment requirement and to track their participation. Having the assessment online also makes analysis easier—especially as we track students’ progress through the program—and allows us to respond quickly to requests for data from multiple stakeholders at the department and university levels. Once the online assessment is operating, it requires little maintenance, which can be handled easily by administrative assistants and/or trusted graduate students.

We recognize that department-level assessments are a controversial issue. Although we are fortunate to work with cooperative faculty, we realize that not every assessment coordinator will enjoy the same level of support. Whatever a department’s assessment needs and preferences, our online assessment program provides a solution by greatly reducing the burden on faculty while also providing administrators with the data they require.

Footnotes

1. The initial notification system involves checking LimeSurvey each evening, compiling a list of students who have completed PACKS, and e-mailing it to faculty. We are working on ways to streamline this process.

2. Readers may wonder why the year in school seems to produce such a strong drop in PACKS scores, given that students should be able to answer more questions correctly each year. Our PACKS administration for seniors contained substantially more questions (36) than that for non-graduating students (6), and we believe the length of this questionnaire resulted in survey fatigue. The second column in table 3 supports this claim: when we included a dummy variable for whether a student was a senior in our program, the year in school became statistically insignificant and the drop in scores for seniors became statistically significant.

REFERENCES

Steele, Claude M. 2010. Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. New York: W. W. Norton.
Young, Candace C. 2016. “Survey of Assessment Practices in Political Science.” PS: Political Science and Politics 49 (1): 93–98.