Assessment is a subject that often engenders little enthusiasm or excitement in students and instructors alike. The anxiety of deadlines, of not receiving a desired grade, and simply of having to grade a pile of essays are the phenomena that we likely envision when the topic of assessment arises. Furthermore, students often are frustrated with assessment norms and desire more authentic and engaging forms of assessment (Deeley et al. 2019). Getting assessment “right” is a vital undertaking in the teaching and learning of any field. As Craddock and Mathias (2009, 127) stated: “Assessing the performance of students is one of the most important activities educators undertake.” This is because assessment not only tests a student’s level of understanding and learning, impacting their long-term outcomes; it also can be a context for learning (Deeley et al. 2019, 386–87). However, despite its importance, assessment can be an aspect of teaching and learning that is slow to innovate. The Scholarship of Teaching and Learning (SoTL) literature has long noted that political science has a track record of particularly limited pedagogical innovation (Ishiyama, Breuning, and Lopez 2006). Within the field, the traditional coursework essay remains a predominant assessment format. However, as described in this article, there is a strong case to be made in favor of innovation in political science’s assessment practices.
This study explored the innovation of assessment optionality, a practice that has been highlighted for its capacity to improve the engagement with and inclusivity of assessments (Firth et al. 2023). By drawing on the findings of a participatory research project and working with political science students from a department that does not yet employ the practice, a subject-specific case was developed for implementing assessment optionality. This article therefore establishes a case for providing students with choice in their assessment in political science courses rather than reviewing the experience of assessment optionality in these courses. Co-created with the project’s student partners, the case demonstrates that the opportunity for optionality is valued for its potential to bolster authenticity, employability, agency, and inclusivity in political science assessment.
The article proceeds as follows. First, the literature on assessment optionality is reviewed, establishing the arguments made in favor of the practice. Second, the methods and ethical considerations for this participatory research project are introduced and described. Third, the results of the research are presented, including the co-created subject-specific case for utilizing assessment optionality and recommendations for its inclusion in political science education. The fourth section concludes.
ASSESSMENT OPTIONALITY
The prevailing orthodoxy in assessment practices is that students are passive participants, rather than actively engaged, in the assessment process (Deeley et al. 2019, 386). Assessment optionality therefore is distinct from this traditional approach to assessment. With optionality, students are asked to engage with the process, making decisions that will shape their experience of assessment.
The literature on assessment optionality divides the concept into two types (Firth et al. 2023, 7). The first is the “variance” type of optionality: the format of the assessment (e.g., essay, presentation, or exam) remains the same, but there is room for negotiation and choice around, for example, the assessment’s length, weighting, and submission date (Cook 2001; Wanner, Palmer, and Palmer 2024, 353). The second is the “format” type of optionality, in which students choose among different assessment formats (e.g., essay or policy report, group or individual presentation) (Craddock and Mathias 2009; O’Neill 2017).
SoTL literature highlights three broad categories of benefit to students in using assessment optionality: (1) improved student outcomes, (2) improved student motivation and engagement, and (3) improved inclusivity (Craddock and Mathias 2009; Deeley et al. 2019, 394; Firth et al. 2023; O’Neill 2017; Wanner, Palmer, and Palmer 2024). Although Jopp and Cohen (2022) found in their trial of optionality that student outcomes remained largely consistent with those under the traditional assessment framework, there was a marked improvement in student feedback and subject satisfaction. This literature therefore provides a strong basis for the introduction of optionality, albeit one that is generalized to higher education at large.
However, concerns also have been raised about the implementation of optionality. Perhaps unsurprisingly, when students have the option to be assessed in different ways for the same course, concerns about fairness are raised by both staff and students (O’Neill 2017). Wanner, Palmer, and Palmer (2024, 358; italics in original) also warned of the decision-making burden that optionality places on students. They found in their trial that “participants felt that they may have had too much flexibility in regard to the choices for their assessment, suggesting that maintaining some boundaries and guidelines would assist them in their decision making.” There also have been reservations about the potential increase in administrative work that may result from optionality in assessment (Morris, Milton, and Goldstone 2019, 442). However, one study of the experience of applying optionality concluded that it did not result in an increase in administrative work (Cook 2001, 548).
To date, reviews of the benefits and experiences of using assessment optionality primarily have focused on the general, cross-disciplinary level or on fields of study other than political science. As noted previously, because political science is a field that can be slow to embrace pedagogical innovation, there is value in exploring whether there is a subject-specific case for such innovation. The following section introduces the methodology and the ethical considerations that underpinned the research into the subject-specific case for utilizing assessment optionality in political science.
METHOD AND ETHICS
To explore whether there is a subject-specific case for utilizing assessment optionality in political science education, this research centered political science students in a process of participatory research. As Baik, Larcombe, and Brooker (2019, 677) stated, students can be vital “consultants in pedagogical explorations,” playing an active role as student partners in shaping educational practices and methods, especially when considering pedagogical innovations. There also is a strong case for student research participation for their own benefit, both as partners and as participants, because it can usefully advance their education and be an otherwise rewarding and enjoyable experience (Brewer and Robinson 2018). Political science students therefore were engaged as consultants in pedagogical explorations of assessment optionality in political science education. The study featured students in two roles. The first was as a student partner: six student partners were involved from the early stages, across two half-day staff–student partnership workshops, to help shape the project’s content, direction, delivery, and development of conclusions. The second was as a student participant, whose sole involvement was in one two-hour focus group. There was a total of 24 student participants, eight in each of the three focus groups.
Before the research commenced, ethical approval for the project was sought and granted by the University of York Institutional Review Board (Decision: 44/ELMPS/23-24). Although conclusions in the SoTL literature suggest that students do not necessarily view themselves as vulnerable participants needing specific safeguards (Innocente, Baker, and Goodwin De Faria 2022, 111), the unequal power dynamic in the student–instructor relationship requires care to minimize the potential for vulnerabilities to emerge in pedagogic research. Lees, Godbold, and Walters (2023, 57) described key provisions that should underpin ethically sound SoTL research with student participants, including “provisions for voluntariness, protection of grades, and not having their competence undermined through concerns of being judged.” Therefore, the following provisions were made for participants:
• Voluntariness: All research participation was voluntary and conducted through informed consent. Potential participants were provided with a project-information sheet, specific to their role in the study, so that they could make an informed decision about participation.
• Protection of Grades: The project-information sheet stated that participation would not affect students’ grades. This was ensured because (1) the research was not linked to the delivery of any module; (2) the researcher was not teaching any undergraduate modules during the semester in which the research was conducted; and (3) all assessments in the department are marked anonymously.
• Concerns of Being Judged: As noted by Lees, Godbold, and Walters (2023, 50), students have a “preference for participating alongside others in their class…for having familiar people within a focus group to allay feelings of vulnerability.” Therefore, the participatory research approach was adopted to alleviate the risk that students would feel judged. Additionally, all participants were provided with anonymity, a practice that is an institutional norm.
All participants were paid the same hourly rate of £12.28, which is the institutional rate for student research participants and is higher than the national living wage. The level of participation varied depending on an individual’s role: student partners’ involvement totaled 20 hours, whereas student participants took part only in their two-hour focus group. The research project was advanced through the following five stages:
1. Call for student partners.
2. First workshop with student partners.
3. Call for student participants.
4. Focus groups with student participants.
5. Second workshop with student partners.
Wider SoTL literature has noted that the benefits of student-partnership research, including its capacity to reach historically marginalized groups, are undermined if the research centers too much on engaging the “usual suspects” (Mercer-Mapstone, Islam, and Reid 2021). Therefore, the calls for student partners and student participants stated that participation from those not normally involved in departmental activities or governance (e.g., course representatives) was highly encouraged. The result of the call was a diverse student group that included “familiar faces” and those new to participation in departmental and research projects. The call went out to all undergraduate students in political science and international relations degree programs (including joint honors) at the University of York. The decision to focus on undergraduate students was made to include participants with a shared multiyear perspective on studying political science; future research into the perspectives of postgraduate-taught students also would be beneficial. The six student partners were selected to represent a range of degree programs and year groups, weighted toward final-year undergraduate students to provide retrospective insight into their experiences. Similarly, all focus groups represented a range of year groups and degree programs.
Student partners were provided with two texts to read before the first workshop: (1) a report about the application of assessment optionality in UK higher education (Firth et al. 2023); and (2) the Quality Assurance Agency for Higher Education’s (2023) Subject Benchmark Statement: Politics and International Relations. These two readings were selected to provide a shared understanding of assessment optionality and a wider understanding (i.e., beyond the student partners’ personal experiences) of the specificities of studying political science. The first workshop collaboratively established a definition of assessment optionality (created specifically by and for political science students) and a methodological approach for the remaining stages of the research, including the co-creation of questions for the focus groups. The student partners led the semi-structured focus groups of student participants, using the co-created questions and the same format in all three focus groups to triangulate the research findings. After the focus groups were completed, a final student-partner workshop was held to consolidate conclusions. The author was present throughout all stages of the research process to observe, take notes, and facilitate and lead discussions in the student-partner workshops.
RESULTS
The first workshop with student partners produced outcomes that shaped the remainder of the participatory research process. Through a process of collaborative writing with the student partners, the following definition of “assessment optionality” was created:
Assessment optionality is a practice wherein students, based on their personal preferences and needs, are provided with flexibility and control over the ways they are assessed. With optionality, students independently choose from a menu of possible assessment formats that cater to a diverse body of skills and needs. These formats include traditional and innovative styles of assessment and, guided by module convenors, reflect the module learning outcomes.
This definition specifically rejects the variance type of assessment optionality (i.e., choice in length or deadline) in favor of the format type (i.e., choice among different assessment formats) (Firth et al. 2023, 7). Echoing the concerns raised by O’Neill (2017), the group’s unanimous decision to reject the variance type of optionality was driven by concerns about fairness. The definition also centers the role of the module convenor as an important guide in setting relevant and useful assessment choices for each module. Student partners argued that any choice in assessment formats should provide a balance of options between traditional formats (e.g., essays) and innovative formats (e.g., policy reports, podcasts, and presentations). This balance has the additional benefit of ensuring that first-generation and international students, as well as others who might have less confidence with some options, would still have a “core” of traditional assessment formats from which to choose.
The questions co-created with the student partners focused on four topics: assessment norms, essays, inclusivity, and optionality. These themes were selected to provide a breadth of insight into assessment in political science while also providing a strong basis for discussions on optionality. The intended outcomes of the focus groups were twofold: (1) to assess whether there was a subject-specific case for assessment optionality on political science degrees; and (2) to create recommendations for instructors considering the implementation of assessment optionality in their courses.
The discussions in all three focus groups produced a clear conclusion in favor of introducing assessment optionality in political science courses. Introduced to the practice through the collaboratively written definition, political science students who had no prior experience of assessment optionality quickly grasped its opportunities and also noted their concerns (i.e., the fairness issue was raised again). However, although participants had not been formally exposed to the practice of assessment optionality, it was clear that they often were already exercising some degree of assessment choice. Many students discussed choosing modules based on each module’s assessment format. More than half of the participants stated that they had not chosen a module that greatly interested them because it was assessed in a way that they either did not enjoy or believed would not reflect their potential.
When reflecting on the conclusions from the focus groups in the final workshop, the student partners highlighted four themes as making the subject-specific case for assessment optionality in political science courses: authenticity, employability, agency, and inclusivity.
• Authenticity: Students stated that although the essay was a core part of studying political science, there were parts of “politics” that were not fully represented in that format alone. As one participant stated: “Politics manifests itself in many different ways, both academically and professionally, and so it should be assessed in a way that reflects that.” Another student noted of political science: “It is interdisciplinary, people end up doing very different things, so people should get to apply a variety of skills when studying it.” Having convenors select a “menu” of assessment formats relevant to each module was seen as a way to make assessment more authentic. The view that optionality increases the authenticity of assessment also is found in the wider literature on assessment optionality (e.g., Jopp and Cohen 2022).
• Employability: SoTL literature has long identified the nonvocational nature of political science as a discipline (Moulton 2024, 407). Although the subject area does not point to a single clear career path, many students have a clear notion of the type of work they want to enter after graduation. Optionality was viewed as a way for them to shape their degree to be best positioned for graduate employment. As one student stated: “Students of politics will go into a diverse range of careers…being able to tailor your degree to cater for these diverse skill sets would therefore be especially beneficial.” Participants were keen to highlight the potential opportunity of communicating to employers the specific assessment formats that they had chosen and the skills and knowledge that resulted from those choices.
• Agency: The Subject Benchmark Statement: Politics and International Relations suggests that political science students should develop “leadership and decision-making skills” (Quality Assurance Agency for Higher Education 2023, 6). Assessment optionality was viewed as a concrete way for students to develop such skills. Having to make a choice, even if the choice was to stay with an assessment format that they had used many times before, was viewed as a pathway to developing a stronger sense of agency over their degree. It was suggested that this had further beneficial implications, as one participant noted of optionality: “I like the idea that it lets you learn more about yourself in the process; you have to consider what your strengths are.” This point links well to the theme of employability because graduates often lack an awareness of the skills that they have developed during their studies (Moulton 2024, 408).
• Inclusivity: Optionality was recognized as having the potential to increase the inclusivity of assessment practices. Students stressed that there were various learning styles and abilities, something that was not properly represented in the traditional single-mode-of-assessment format. As one student noted of current assessment norms: “We shouldn’t push people to a certain way of being assessed when we know that there are different styles of learner.” Optionality was described as more inclusive for those with conditions that challenge their learning (e.g., dyslexia and ADD) and as increasing the level of enjoyment in assessment by providing options that might be preferable to individual students. Given that political science is a field that pays attention to issues of social justice and inequality (Quality Assurance Agency for Higher Education 2023, 5), it is perhaps not surprising that students were concerned about inclusivity when they considered optionality.
These four interacting themes are presented in this study as the subject-specific case for utilizing optionality in political science education. In addition to this case, the following recommendations for instructors who are considering using assessment optionality were co-created with the student partners:
1. Be module specific. The menu of assessment-optionality formats should be chosen and tailored for each module and should not be too extensive. Instructors should make a clear case for why each format is relevant to the module and its learning outcomes.
2. Be clear and consistent. Grading criteria and rubrics should be available for each assessment format, as well as guidance about what is expected in each.
3. Be supportive. Where formative assessment exists, it should be related directly to the format of the summative options. Instructors should invite discussion (e.g., in the classroom) about how students are choosing among the optional assessment formats.
4. Be varied. The menu of options should include both traditional and innovative assessment formats and reflect a range of learning styles. Students should be exposed to different formats at an early stage of their degree (e.g., in first-year core modules).
These recommendations reinforce Wanner, Palmer, and Palmer’s (2024, 358) call for “boundaries and guidelines” in instituting optionality, providing useful direction for those in other disciplines as well.
Although there was strong support for instituting assessment optionality among the student partners and participants in this study, it must be noted that this research revealed some of the costs of instituting the practice. For example, whereas Cook (2001) claimed that assessment optionality would not lead to increased administrative work, fully responding to the recommendations described herein would be a time-intensive process for instructors who want to establish the practice (e.g., creating new grading rubrics and making the case for using optionality on a given module). There also are long-term time costs (e.g., supporting students in choosing their assessment format). Therefore, in a sector already characterized by intense workload pressures, any move to institute assessment optionality must be thoroughly considered for its practical viability.
CONCLUSION
This article presents the findings of a participatory research project with political science students that explored whether there was a subject-specific case for utilizing optionality in assessment in political science courses. Through the co-creation of a definition of assessment optionality, of aspects of the research process itself, and of the project’s findings, this study reaches useful and applicable conclusions about the practice. It is clear that students who have not yet experienced optionality believe that there is a strong case for its institution in political science education. Although students were open to the general arguments in favor of optionality, they also identified reasons why its institution specifically in political science courses would be beneficial.
This study also demonstrates the benefits of participatory research with students when pedagogical issues are explored. For both ethical and practical reasons, including student partners in the research process was shown to be a beneficial innovation, adding significant value and weight to the conclusions reached.
ACKNOWLEDGMENTS
The author gratefully acknowledges funding provided by the University of York’s Learning and Teaching Fund and thanks the students for their participation in this project. The author appreciates the helpful comments of anonymous reviewers on an earlier version of this article.
CONFLICTS OF INTEREST
The author declares that there are no ethical issues or conflicts of interest in this research.