
Racing the Clock: Using Response Time as a Proxy for Attentiveness on Self-Administered Surveys

Published online by Cambridge University Press: 15 September 2021

Blair Read*
Affiliation:
Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: [email protected]
Lukas Wolters
Affiliation:
Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: [email protected]
Adam J. Berinsky
Affiliation:
Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. E-mail: [email protected]
*Corresponding author: Blair Read

Abstract

Internet-based surveys have expanded public opinion data collection at the expense of monitoring respondent attentiveness, potentially compromising data quality. Researchers now have to evaluate attentiveness ex post. We propose a new proxy for attentiveness, response-time attentiveness clustering (RTAC), which uses dimension reduction and an unsupervised clustering algorithm to leverage variation in response time both between respondents and across questions. We advance the literature theoretically, arguing that the existing dichotomous classification of respondents as either fast or attentive is insufficient because it neglects respondents who are slow but inattentive. We validate our theoretical classification and empirical strategy against commonly used proxies for survey attentiveness. In contrast to other methods for capturing attentiveness, RTAC allows researchers to collect attentiveness data unobtrusively without sacrificing space on the survey instrument.
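The abstract names only the ingredients of RTAC: dimension reduction followed by unsupervised clustering of response times that vary across respondents and questions. The sketch below illustrates that general recipe rather than the authors' published pipeline; the simulated timing data, the log transform with within-question standardization, the use of PCA, and the four-component Gaussian mixture model are all assumptions chosen for the example.

    # Illustrative sketch only: the specific steps here (log timings, z-scoring
    # within questions, PCA, and a Gaussian mixture model) are assumptions
    # standing in for the generic "dimension reduction + unsupervised
    # clustering" of response times described in the abstract.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Hypothetical data: response times (seconds) for 500 respondents on 20 questions.
    times = rng.lognormal(mean=2.0, sigma=0.5, size=(500, 20))

    # Log and standardize within each question so slow questions do not dominate.
    log_times = np.log(times)
    z = (log_times - log_times.mean(axis=0)) / log_times.std(axis=0)

    # Reduce each respondent's question-level timing profile to a few components.
    components = PCA(n_components=3).fit_transform(z)

    # Cluster respondents on their reduced timing profiles; using more than two
    # clusters allows combinations of fast/slow and attentive/inattentive
    # rather than a single fast-versus-slow cut.
    labels = GaussianMixture(n_components=4, random_state=0).fit_predict(components)
    print(np.bincount(labels))

In this toy setup the cluster labels would then be compared against other attentiveness proxies (for example, screener passage) to interpret which timing profiles correspond to inattentive responding.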

Type
Article
Copyright
© The Author(s) 2021. Published by Cambridge University Press on behalf of the Society for Political Methodology


Footnotes

Edited by Jeff Gill

Supplementary material

Read et al. supplementary material (PDF, 3.8 MB)