
A Study of Twitter and Clickers as Audience Response Systems in International Relations Courses

Published online by Cambridge University Press:  19 June 2014

Steven B. Rothman*
Affiliation:
Ritsumeikan Asia Pacific University

Abstract

This study conducted experiments using clickers and Twitter in international relations courses to evaluate the effectiveness of audience-response tools on students’ experiences and their performance. The study used both within-group and between-group experimental designs and evaluated the results primarily through descriptive and inferential statistics. The results show that clickers outperformed Twitter, that students enjoyed using clickers in class, and that the use of these tools had little impact on grade performance.

Type
The Teacher
Copyright
Copyright © American Political Science Association 2014 

The importance of educational technology continues to grow for teachers, students, and administrators. This study examined the use of audience-response systems (ARS) in diverse undergraduate classes in the International Relations and Peace Studies cluster at Ritsumeikan Asia Pacific University (APU). The study specifically compared Twitter and Turning Technologies clickers for both academic performance and survey results on student interactivity and attentiveness. The study found that clickers outperformed Twitter in student satisfaction; however, neither had a strong impact on grade performance.

This article describes how clickers have a greater but limited advantage over Twitter in the classroom. The success of in-class technology depends on students’ technological culture, methods of use, and available logistical resources.

AUDIENCE RESPONSE SYSTEMS

ARS refers to any system in which an audience interacts with a speaker(s) during a presentation. ARS can involve low-technology tools, such as colored cards held up during a presentation, or high-technology dials used to indicate favorability to a speech (e.g., widely used in political-campaign analysis). The purpose of ARS varies based on the goals of a presenter. However, all ARS involve interaction between the audience and the presenter(s).

Twitter is an online system in which individuals send messages composed of up to 140 characters to any number of subscribers (i.e., followers). Audiences use Twitter for back-channel communication or short messages to one another or the presenter(s) (Atkinson 2010). Sometimes the Twitter feed is broadcast on an overhead display to increase transparency and communication between the audience and the presenter(s).

Twitter attempts to take education out of the classroom by allowing students to engage in topics among networks of professionals and peers and to increase communication among students, though the effects are unclear. One study that used Twitter to democratize student involvement found both positive and negative effects (Blair 2013). Other research indicates a number of challenges in using online communication tools because of the different social-interactive mechanisms to which we are accustomed (Blair 2013). For example, in face-to-face communication, a person who is asked a question feels more pressure to answer than in online communication (Middleton 2010). At the same time, some of the social pressures have the opposite effect. In another study, students who experienced shyness or anxiety were more likely to use and prefer anonymous devices in class compared to hand-raising, and they were less likely to be subject to group conformity (Stowell, Oldham, and Bennett 2010).

Twitter uses a Bring Your Own Device (BYOD) model; in general, any device connected to the Internet can send and receive Twitter comments. Students and instructors usually bear the costs of BYOD systems themselves, and the costs vary based on the particular device used (e.g., an iPad, iPhone, or laptop).

Clickers are dedicated devices with limited functions that allow communication between an audience and a presenter(s). The clicker system requires all audience members or a university to purchase identical devices. The presenter must have dedicated software and hardware support to use this ARS. The two most popular companies currently developing and distributing clickers are iClicker and Turning Technologies. Some services involve subscription fees through which students rent access to a BYOD system; others require the purchase of dedicated devices. In both cases, the cost to students ranges from $20 to $50 depending on the type of device and the style of its use in class.

ARS are widely recognized as a way to increase interaction in class and to create an active educational environment (Draper and Brown 2004). These systems have been available since the 1960s in various forms of technology (Judson and Sawada 2006). Several studies have documented how using clickers can improve class attendance and students’ learning (Blasco-Arcas, Buil, Hernandez-Ortega, and Sese 2013; Bruff 2009; Caldwell 2007; Draper and Brown 2004). For example, one study suggested that immediate feedback and correction of errors on exams result in students’ greater retention of material (Epstein et al. 2002), whereas other studies showed either mixed results or no demonstrated improvement in student learning (Banks 2006, 3). One study indicated a correlation between students who used clickers and answered questions correctly and their achieving higher grades (Kennedy and Cutts 2005); however, this suggests that the differences are based on student ability rather than clicker use.

In summary, many of the studies on the impact of ARS indicated positive results based on students’ self-reported data. Studies that analyze grades and test performance vary, providing mixed results on the impact of these ARS technologies.

CLICKERS AND TWITTER IN CLASS

In the courses involved in this study, the teaching assistant and the instructor monitored Twitter using iPad devices, and both posted Twitter comments (tweets) during and after the daily lectures. These tweets provided additional information, including website references, clarifications, and definitions of complicated technical terms. The purpose of the Twitter comments was to expand the focus of the course discussion and to clarify important information. We also tweeted questions for discussion to prompt student thinking during the lecture using a simple AppleScript® application called keynotetweet (footnote 1), which allowed the lecturer to designate text segments within the Keynote® presentation and have them automatically posted to Twitter at specific points in the lecture.

Unlike Twitter—which generally operates in the background and does not need class time for students to ask or answer questions—clickers require explicit use of class time. At various opportunities in class, the instructor poses a question to the students, who read and think about the question and then answer it using their clickers. When the instructor views the results, he or she decides whether a discussion among the students will increase their understanding. Discussions work best when there is some disagreement among the students and they are asked to convince one another to agree with their own answer (footnote 2). For the courses examined initially, the questions had no objectively correct answer. This fostered greater discussion, analysis, and use of evidence to defend a position, thereby increasing participation and engagement with other students.


Instructors may also use clickers to assure students’ learning of content. In this study, the instructor provided questions that mimicked exam questions, which allowed students to recall and review material from previous classes. The questions generated discussions when students disagreed on the answers; however, these questions required objectively correct answers. (Samples of both discussion-based and content-based questions are available for download as supplemental materials from the Cambridge University Press/ CJO website at doi:10.1017/S1049096514000869.)

For both Twitter and clicker users in a class, some preparation time is required to learn how to operate the software and hardware tools. Logistical issues are greater for clicker use: instructors must assign devices to students to record attendance and scores, and they sometimes must distribute and collect the devices in each class (footnote 3).

Questions for students using clickers can be prepared in advance or created in class. When questions are created in class, on the fly, only a generic question stem and answer options appear on screen for students, and the same generic text is recorded in reports. Instructors therefore need to keep track of questions they create in class if they wish to review them later. The software also takes a screenshot of the computer, which helps instructors who include the question and answer text in their slides. Questions prepared and entered into the clicker software before class appear as entered, both for students and in the reports generated later.

When using Twitter, all in-class tweets must be prepared in advance. The software used to tweet during a lecture examines each slide as it appears for any Twitter-feed texts. Instructors also must observe the Twitter feed during and after class to identify and respond to student tweets, if necessary. Ideally, this outside-of-class observation requires little from the instructor if other students have high participation rates. In this case, students respond to one another, thereby reducing faculty involvement.

Students received points for participation only when the instructor asked objective questions—and only when they were answered correctly. Students earned attendance points separately from participation with Twitter or clickers in class. The lack of a grading system for using Twitter or clickers may influence the degree of participation when students focus on grade-performance indicators.

METHODOLOGY

This project used experimental methodology to determine the effects and effectiveness of using clickers and Twitter in an educational setting. The study used both within-group and between-group experimental designs. For the within-group design, each course used ARS for the second half of a seven-week course; for comparison, the first half did not use it. For the between-group design, one course used Twitter and one course used clickers; these results were compared to courses without ARS. Surveys measured student reaction each day that classes were in session.

All of the courses were taught at APU, which has a student body of approximately 6,200 undergraduate students in one of two colleges: the College of Asia Pacific Studies and the College of International Management. The courses for this study involved international relations subjects in the College of Asia Pacific Studies. The APU student body is divided between Japanese-based and English-based students. Students enroll in APU as either one or the other and are required to become proficient in the second language before graduation. The English-based students are from 80 to 100 different countries. In this study, English was the primary language for all of the courses examined; however, for most of the students taking the course, English was not their native language. Indeed, for some of the students, English was their third or fourth language.

Having primarily non-native English speakers in the class may have biased the study results; however, the direction and magnitude of the bias is unclear. Because the APU student body is diverse across many different cultures and primary languages, the study population included significant random variation in educational background, technological experience, and level of interaction in the classroom. This normal variation in the language ability of APU students is not likely to bias the results in a known direction. Future studies that include both native and non-native speakers may test the implications of using non-native speakers as a subject and the bias produced in educational studies.

The courses used in this study were taught during the spring and fall semesters of 2012 and the fall semester of 2013 and then compared to the same courses taught in 2011 without ARS. For this study, the analysis of ARS effectiveness used an introductory class and two third-year classes, but the classes were open to all students. Two courses used clickers: one for the entire course and one for half of the course (excluding two class sessions that used videos). In addition, a subsequent study examined the use of clickers in two introductory international-relations courses (i.e., in 2012 and 2013).

For those courses in which clickers and Twitter were used at least part of the time, the instructor and teaching assistant conducted daily feedback surveys regarding four questions, which allowed students to write additional comments. (Results of the surveys are available for download as supplemental materials from the Cambridge University Press/ CJO website.) The study analyzed within-course surveys by comparing the results before and after the instructor used ARS in class. Between-course results were analyzed using attendance and grade data from those courses taught one year apart.
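The before-and-after survey comparisons described here rest on difference-in-means t-tests. As an illustration only—the sample data below are invented and are not the study’s survey responses—a two-sample Welch t statistic and its degrees of freedom can be computed with the Python standard library:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample Welch t-test: returns the t statistic and the
    Welch-Satterthwaite degrees of freedom (unequal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2_a, se2_b = va / na, vb / nb     # squared standard errors of each mean
    t = (mean(a) - mean(b)) / math.sqrt(se2_a + se2_b)
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# Hypothetical survey scores before and after introducing clickers
before = [1, 2, 3, 4, 5]
after = [2, 3, 4, 5, 6]
t, df = welch_t(before, after)
print(round(t, 4), round(df, 4))  # → -1.0 8.0
```

The p-value then comes from the t distribution with `df` degrees of freedom (e.g., via a t-table or a statistics package); comparing it against 0.05 reproduces the significance test applied to each survey question.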

RESULTS AND CONCLUSIONS

The study results suggest two primary conclusions. First, the use of clickers in class slightly increases student attention and attendance. The statistical indicators shown in table 1 combine results from the first three class sessions, which did not use clickers, and the subsequent sessions, which did, to create two comparison groups.

For the first three classes in which clickers were not used and the subsequent four classes in which they were, the average survey-response scores increased slightly for all questions; however, only two of the four survey questions showed differences that were statistically significant at the 0.05 level. This suggests that students reported higher attendance rates and rated the course more highly after the instructor began using clickers. Based on this self-reported data, the within-group analysis suggests that clickers were successful in increasing class attendance and the overall evaluation.

Table 1 Survey Results with Corresponding Difference in Means t-Test p-Values for Clickers

Note: *p < 0.05.

In classes using Twitter, results for the evaluations appear to be similar. As shown in table 2, only Question 1, which asked students to self-report their attendance, provided significant results.

Table 2 Survey Results with Corresponding Difference in Means t-Test p-Values for Twitter

Note: *p < 0.05.

Overall, the evidence suggests that clickers had a greater magnitude of effect in a positive direction and a significant effect on both attendance and class evaluation (footnote 4). When evaluated statistically, attendance improved from 67.3% to 71.1% between the classes without and with clickers, but the t-test suggests that the difference between the averages is not large enough to rule out random variation at the 0.05 level (p = 0.1988). The use of Twitter resulted in a significant decline in course attendance without significant changes in other indicators. This suggests that clickers performed better than Twitter and that Twitter had a slightly negative effect on class attendance.
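The attendance comparison (67.3% versus 71.1%) is a difference between proportions. The article does not report the underlying counts, so the sketch below uses invented counts—101 and 107 attendances out of 150 possible—purely to illustrate a pooled two-proportion z-test of this kind; with hypothetical counts it will not reproduce the reported p-value:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test: returns z and the two-sided p-value,
    using the normal approximation (standard normal CDF via math.erf)."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value: 2 * P(Z > |z|) for standard normal Z
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical attendance counts: 101/150 (~67.3%) without clickers,
# 107/150 (~71.3%) with clickers
z, p = two_proportion_ztest(101, 150, 107, 150)
print(round(z, 2), round(p, 2))  # → 0.75 0.45
```

A p-value well above 0.05, as here, is exactly the situation the study reports: an observed improvement in attendance that cannot be distinguished from random variation.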

Furthermore, results of the between-group design using clickers suggested no improvement in students’ test scores. Comparing final-exam scores between the human-rights course taught in 2011 without clickers and the 2012 class with clickers revealed a negative impact on test scores, from an average of 78.46% to 66.51%, with a t-test significant at the 0.05 level. This suggests that the use of clickers in this course actually decreased grade performance.

In these classes, the instructor provided only subjective questions with discussions; however, when combined with objective questions in a subsequent class, the same results emerged. In the 2012 course (i.e., introduction to international relations), clickers were used for subjective assessment and to elicit discussion among students for only half of the course. In 2013, the same course used clickers with both subjective questions for discussion and objective questions to review and prepare for the final exam. Comparing the two courses, the final-exam score decreased from an average of 79.9% to 73.3%, a difference that was not statistically significant at the 0.05 level. In other words, the data show either no effect or a negative effect. When comparing the use of clickers with objective questions versus subjective/discussion questions, students’ grades decreased using objective questions. This result may seem counterintuitive because we expect objective questions to improve exam performance more than interactive discussion questions.


Examining students’ comments about both the Twitter and clicker courses provided insight into their perspectives on these two ARS. In the courses that used clickers, every comment but one suggested that students enjoyed using them in class. For example, students stated that combining lectures and participation increased their attention level, avoided other distractions, helped them to engage in lectures, made the class more exciting, woke them up, and allowed them to express their opinions. In the one negative comment, made orally to the instructor, the student stated that the use of clickers is “childish” and not what a college class should look like. This student expected others to have a high degree of interest and interaction regardless of the use of technology in class. Because colleges have become more audience- and business-focused in recent years, and because of the tension between the roles of “entertainer” and “teacher,” this student’s opinion is important to consider.

When students in the Twitter courses were asked for daily feedback, only two comments referenced Twitter. One student said that the Wi-Fi system in the classroom did not allow constant use of the Twitter feed. The second student did not yet use Twitter but was willing to try it. Wi-Fi severely limits the use of any ARS product in a class that requires constant Internet connectivity. When 100 or more students access the Internet through a single access point in a classroom, it is difficult to have consistent use of the technology.

A possible reason that Twitter was less effective (based on the self-evaluation and qualitative measures) involves the lack of Twitter culture among students in the class. More than 60% of those who answered the final survey had never used Twitter before taking the course. Despite our efforts to encourage their use of Twitter by tweeting responses to questions and extra information during class, students failed to adapt to the culture during the course. This suggests that Twitter does not work well when students are not accustomed to the technology. Simple technology that requires less effort by students and instructors, such as clickers, is more likely to be successful in class.

Overall, the research indicated a distinction between Twitter and clickers for increasing course participation. The qualitative results suggested that students enjoy using clickers in class and believe that it provides a means to maintain greater attention. The results also suggested that students self-reported better attendance and more enjoyment of the class with clickers. The attendance records corroborated this result, although at statistically insignificant levels. However, this attention and satisfaction in the self-assessment did not coincide with improved test scores.

Based on student reactions, clickers may have an advantage over Twitter by giving students a feeling of greater participation in and enjoyment of a class, whereas the usefulness of Twitter depends largely on the culture of use among students. When students are accustomed to a technology, they will make greater use of even advanced tools; when they are not, simplicity dominates as the most important efficacy factor. With clickers, however, there are the logistical difficulties of purchasing the devices and maintaining the technology.

Based on the findings presented in this article, the study concluded that technology has distinct benefits for students in a classroom. Educators need to give more attention to the different ways that instructors use the same technology and to the lack of performance improvement presented in this study. Future research that tests the connection between ARS use and classroom performance could yield more insight into the conditions under which the technology successfully increases performance.

ACKNOWLEDGMENTS

The author especially thanks Do Sy Huy for his help as teaching assistant and research assistant in this study. Huy created most of the handouts and instructions to students on the proper use of Twitter, and he entered data and assisted in other intellectual ways. Thanks are also extended to Zhao Yuan and the students who participated in the classes in which Twitter and clickers were used—sometimes poorly—in the study’s attempt to improve their educational experience.

A previous version of this article was presented at the Annual Conference of the International Studies Association in April 2013 in San Francisco. This research was supported by a Ritsumeikan Asia Pacific University research subsidy.

Steven B. Rothman is an associate professor of international relations and peace studies at Ritsumeikan Asia Pacific University in Beppu, Japan. He has published and presented several articles on interactive teaching and learning at universities, as well as soft power and the use of framing and rhetoric in international policy making. He can be reached at .

Footnotes

1. See http://code.google.com/p/keynotetweet/ for more information on this open source tool.

2. Thanks to the 2012 Clickers Conference, Chicago, IL. In particular, see Newbury and Heiner (2012).

3. Some universities have alternative policies, such as providing clickers to all students for the duration of their enrollment (Nanyang Technological University).

4. After an examination of student course evaluations for the classes under this study, there is a clear and strong correlation between the class size and evaluation scores. Given the very large and clearly discernable impact of class size on the evaluation scores, regressions are required to separate the effects from class size and technology. Regressions require more data than are available; therefore, the essay does not present analysis using course evaluation scores.

References


Atkinson, Cliff. 2010. The Backchannel: How Audiences Are Using Twitter and Social Media and Changing Presentations Forever. Berkeley, CA: New Riders.
Banks, David A. 2006. Audience Response Systems in Higher Education: Applications and Cases. Hershey, PA: Information Science Publishing.
Blair, Alasdair. 2013. “Democratising the Learning Process: The Use of Twitter in the Teaching of Politics and International Relations.” Politics 33 (2): 135–45.
Blasco-Arcas, Lorena, Isabel Buil, Blanca Hernandez-Ortega, and F. Javier Sese. 2013. “Using Clickers in Class: The Role of Interactivity, Active Collaborative Learning and Engagement in Learning Performance.” Computers and Education 62 (March): 102–10.
Bruff, Derek. 2009. Teaching with Classroom Response Systems: Creating Active Learning Environments. 1st ed. San Francisco: Jossey-Bass.
Caldwell, Jane. 2007. “Clickers in the Large Classroom: Current Research and Best-Practice Tips.” CBE Life Sciences Education 6 (1): 9–20.
Draper, Stephen W., and M. I. Brown. 2004. “Increasing Interactivity in Lectures Using an Electronic Voting System.” Journal of Computer Assisted Learning 20 (2): 81–94.
Epstein, Michael L., Amber D. Lazarus, Tammy B. Calvano, Kelly A. Matthews, Rachel A. Hendel, Beth B. Epstein, and Gary M. Brosvic. 2002. “Immediate Feedback Assessment Technique Promotes Learning and Corrects Inaccurate First Responses.” The Psychological Record 52 (2): 187–201.
Judson, Eugene, and Daiyo Sawada. 2006. “Audience Response Systems: Insipid Contrivances or Inspiring Tools?” In Audience Response Systems in Higher Education: Applications and Cases, ed. David A. Banks, 26–39. Hershey, PA: Information Science Publishing.
Kennedy, Gregor E., and Quintin I. Cutts. 2005. “The Association Between Students’ Use of an Electronic Voting System and Their Learning Outcomes.” Journal of Computer Assisted Learning 21 (4): 260–68.
Middleton, Dave. 2010. “Putting the Learning into E-Learning.” European Political Science 9 (1): 5–12.
Newbury, Peter, and Cynthia Heiner. 2012. “Ready, Set, React! Getting the Most out of Peer Instruction with Clickers.” Slideshow and presentation at the Clickers Conference 2012, March 4, Chicago, IL.
Stowell, Jeffrey R., Terrah Oldham, and Dan Bennett. 2010. “Using Student Response Systems (‘Clickers’) to Combat Conformity and Shyness.” Teaching of Psychology 37 (2): 135–40.
