
Using Response Times to Model Not-Reached Items due to Time Limits

Published online by Cambridge University Press:  01 January 2025

Steffi Pohl*
Affiliation:
Freie Universität Berlin
Esther Ulitzsch
Affiliation:
Freie Universität Berlin
Matthias von Davier
Affiliation:
National Board of Medical Examiners
*Correspondence should be made to Steffi Pohl, Methods and Evaluation/Quality Assurance, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany. Email: [email protected]

Abstract

Missing values at the end of a test typically result from test takers running out of time and can thus be understood by studying test takers’ working speed. As testing moves to computer-based assessment, response times become available, allowing speed and ability to be modeled simultaneously. Integrating research on response time modeling with research on modeling missing responses, we propose using response times to model missing values due to time limits. We identify similarities between approaches used to account for not-reached items (Rose et al. in ETS Res Rep Ser 2010:i–53, 2010) and the speed-accuracy (SA) model for the joint modeling of effective speed and effective ability proposed by van der Linden (Psychometrika 72(3):287–308, 2007). In a simulation, we show (a) that the SA model can recover parameters in the presence of missing values due to time limits and (b) that the response time model, using item-level timing information rather than a count of not-reached items, results in person parameter estimates that differ from those of missing data IRT models applied to not-reached items. We propose using the SA model to model the missing data process and using both ability and speed to describe the performance of test takers. We illustrate the application of the model in an empirical analysis.
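To make the joint modeling idea concrete, the sketch below shows how a hierarchical speed-accuracy model in the spirit of van der Linden (2007) could be specified in JAGS and fitted with rjags, the software cited in the references. This is a minimal illustration under simplified priors, not the authors' estimation code; all names (theta for effective ability, tau for effective speed, a, b, alpha, beta for item parameters) are ours.

sa_model <- "
model {
  for (j in 1:J) {
    for (i in 1:I) {
      # Response model (2PL): accuracy depends on effective ability theta[j]
      logit(p[j, i]) <- a[i] * (theta[j] - b[i])
      y[j, i] ~ dbern(p[j, i])
      # Lognormal response-time model: log time has mean beta[i] - tau[j]
      # (time intensity minus effective speed) and precision alpha[i]^2
      logt[j, i] ~ dnorm(beta[i] - tau[j], pow(alpha[i], 2))
    }
    # Second level: ability and speed are correlated across persons
    pers[j, 1:2] ~ dmnorm(mu[1:2], Omega[1:2, 1:2])
    theta[j] <- pers[j, 1]
    tau[j]   <- pers[j, 2]
  }
  for (i in 1:I) {
    a[i] ~ dlnorm(0, 4)
    b[i] ~ dnorm(0, 0.25)
    alpha[i] ~ dlnorm(0, 4)
    beta[i] ~ dnorm(0, 0.25)
  }
  mu[1] <- 0   # person means fixed to zero for identification
  mu[2] <- 0
  Omega[1:2, 1:2] ~ dwish(R[1:2, 1:2], 3)
}"

library(rjags)
# y: J x I matrix of scored responses (0/1); logt: J x I matrix of log
# response times. Not-reached items enter as NA in both matrices and are
# integrated over by the Gibbs sampler under the model's assumptions.
fit <- jags.model(textConnection(sa_model),
                  data = list(y = y, logt = logt, J = nrow(y),
                              I = ncol(y), R = diag(2)),
                  n.chains = 2)
post <- coda.samples(fit, variable.names = c("theta", "tau"), n.iter = 5000)

Because speed and ability are linked at the person level, timing information from the items a test taker did attempt informs the treatment of the items that were never reached; this is the mechanism the simulation study evaluates.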

Type
Original Paper
Copyright
Copyright © 2019 The Psychometric Society

Footnotes

This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft), Grant No. PO1655/3-1. We thank Wim van der Linden for helpful comments on the manuscript as well as the HPC service of Freie Universität Berlin for support and computing time.

References

Allen, N. L., Donoghue, J. R., & Schoeps, T. L. (2001). The NAEP 1998 technical report (NCES 2001–509). Washington, DC: National Center for Education Statistics.
Bolsinova, M., & Tijmstra, J. (2016). Modeling conditional dependence between response time and accuracy. Psychometrika, 82(4), 1126–1148. https://doi.org/10.1007/s11336-016-9537-6
Bolsinova, M., Tijmstra, J., & Molenaar, D. (2017). Response moderation models for conditional dependence between response time and response accuracy. British Journal of Mathematical and Statistical Psychology, 70(2), 257–279. https://doi.org/10.1111/bmsp.12076
Cosgrove, J., & Cartwright, F. (2014). Changes in achievement on PISA: The case of Ireland and implications for international assessment practice. Large-Scale Assessments in Education, 2(1), 2. https://doi.org/10.1186/2196-0739-2-2
Culbertson, M. (2011). Is it wrong? Handling missing responses in IRT. Paper presented at the annual meeting of the National Council on Measurement in Education, New Orleans, LA.
De Ayala, R. J., Plake, B. S., & Impara, J. C. (2001). The impact of omitted responses on the accuracy of ability estimation in item response theory. Journal of Educational Measurement, 38, 213–234. https://doi.org/10.1111/j.1745-3984.2001.tb01124.x
Drummond, A. J., Nicholls, G. K., Rodrigo, A. G., & Solomon, W. (2002). Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data. Genetics, 161(3), 1307–1320.
Duchhardt, C., & Gerdes, A. (2012). NEPS technical report for mathematics—scaling results of starting cohort 3 in fifth grade (NEPS Working Paper No. 19). Bamberg: Otto-Friedrich-Universität, Nationales Bildungspanel.
Finch, H. (2008). Estimation of item response theory parameters in the presence of missing data. Journal of Educational Measurement, 45, 225–245. https://doi.org/10.1111/j.1745-3984.2008.00062.x
Fox, J. P. (2010). Bayesian item response modeling: Theory and applications. Berlin: Springer.
Fox, J. P., & Marianti, S. (2016). Joint modeling of ability and differential speed using responses and response times. Multivariate Behavioral Research, 51(4), 540–553. https://doi.org/10.1080/00273171.2016.1171128
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7(4), 457–511.
Gelman, A., & Shirley, K. (2011). Inference from simulations and monitoring convergence. In S. Brooks, A. Gelman, G. L. Jones, & X. L. Meng (Eds.), Handbook of Markov chain Monte Carlo (pp. 163–174). London: Chapman and Hall/CRC.
Glas, C. A. W., Pimentel, J. L., & Lamers, S. M. A. (2015). Nonignorable data in IRT models: Polytomous responses and response propensity models with covariates. Psychological Test and Assessment Modeling, 57(4), 523–541.
Goegebeur, Y., De Boeck, P., Wollack, J., & Cohen, A. (2008). A speeded item response model with gradual process change. Psychometrika, 73, 65–87. https://doi.org/10.1007/s11336-007-9031-2
Goldhammer, F. (2015). Measuring ability, speed, or both? Challenges, psychometric solutions, and what can be gained from experimental control. Measurement: Interdisciplinary Research and Perspectives, 13(3–4), 133–164. https://doi.org/10.1080/15366367.2015.1100020
Goldhammer, F., & Kroehne, U. (2014). Controlling individuals’ time spent on task in speeded performance measures: Experimental time limits, posterior time limits, and response time modeling. Applied Psychological Measurement, 38(4), 255–267. https://doi.org/10.1177/0146621613517164
Holman, R., & Glas, C. A. W. (2005). Modelling non-ignorable missing-data mechanisms with item response theory models. British Journal of Mathematical and Statistical Psychology, 58, 1–17. https://doi.org/10.1111/j.2044-8317.2005.tb00312.x
Johnson, E. G., & Allen, N. L. (1992). The NAEP 1990 technical report (Rep. No. 21-TR-20). Princeton, NJ: Educational Testing Service.
Klein Entink, R. H., Fox, J. P., & van der Linden, W. J. (2009). A multivariate multilevel approach to the modeling of accuracy and speed of test takers. Psychometrika, 74(1), 21–48. https://doi.org/10.1007/s11336-008-9075-y
Köhler, C., Pohl, S., & Carstensen, C. H. (2014). Taking the missing propensity into account when estimating competence scores: Evaluation of IRT models for non-ignorable omissions. Educational and Psychological Measurement, 1–25. https://doi.org/10.1177/0013164414561785
Köhler, C., Pohl, S., & Carstensen, C. H. (2015). Investigating mechanisms for missing responses in competence tests. Psychological Test and Assessment Modeling, 57(4), 499–522.
Köhler, C., Pohl, S., & Carstensen, C. H. (2017). Dealing with item nonresponse in large-scale cognitive assessments: The impact of missing data methods on estimated explanatory relationships. Journal of Educational Measurement, 54(4), 397–419.
Kruschke, J. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. New York: Academic Press.
Kuhn, J.-T., & Ranger, J. (2015). Measuring speed, ability, or motivation: A commentary on Goldhammer (2015). Measurement: Interdisciplinary Research and Perspectives, 13(3–4), 173–176. https://doi.org/10.1080/15366367.2015.1105065
Lee, Y.-H., & Chen, H. (2011). A review of recent response-time analyses in educational testing. Psychological Test and Assessment Modeling, 53(3), 359–379.
Lee, Y.-H., & Jia, Y. (2014). Using response time to investigate students’ test-taking behaviors in a NAEP computer-based study. Large-Scale Assessments in Education, 2(8), 1–24. https://doi.org/10.1186/s40536-014-0008-1
Lord, F. M. (1974). Estimation of latent ability and item parameters when there are omitted responses. Psychometrika, 39, 247–264. https://doi.org/10.1007/BF02291471
Meng, X. B., Tao, J., & Chang, H. H. (2015). A conditional joint modeling approach for locally dependent item responses and response times. Journal of Educational Measurement, 52, 1–27. https://doi.org/10.1111/jedm.12060
Mislevy, R. J., & Wu, P.-K. (1996). Missing responses and IRT ability estimation: Omits, choice, time limits, and adaptive testing. ETS Research Report Series, 1996, i–36. https://doi.org/10.1002/j.2333-8504.1996.tb01708.x
Molenaar, D., Oberski, D., Vermunt, J., & De Boeck, P. (2016). Hidden Markov item response theory models for responses and response times. Multivariate Behavioral Research, 51(5), 606–626. https://doi.org/10.1080/00273171.2016.1192983
Molenaar, D., Tuerlinckx, F., & van der Maas, H. L. (2015). A generalized linear factor model approach to the hierarchical framework for responses and response times. British Journal of Mathematical and Statistical Psychology, 68(2), 197–219. https://doi.org/10.1111/bmsp.12042
Moustaki, I., & Knott, M. (2000). Weighting for item non-response in attitude scales by using latent variable models with covariates. Journal of the Royal Statistical Society: Series A (Statistics in Society), 163(3), 445–459. https://doi.org/10.1111/1467-985X.00177
OECD. (2009). PISA 2006 technical report. Paris: OECD.
OECD. (2017). PISA 2015 technical report. Paris: OECD.
O’Muircheartaigh, C., & Moustaki, I. (1999). Symmetric pattern models: A latent variable approach to item non-response in attitude scales. Journal of the Royal Statistical Society: Series A (Statistics in Society), 162(2), 177–194.
Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd international workshop on distributed statistical computing (Vol. 124).
Plummer, M. (2016). rjags: Bayesian graphical models using MCMC. R package version 4-6. Retrieved from https://CRAN.R-project.org/package=rjags
Pohl, S., & Carstensen, C. (2012). NEPS technical report—scaling the data of the competence tests (NEPS Working Paper No. 14). Bamberg: Otto-Friedrich-Universität, Nationales Bildungspanel.
Pohl, S., Gräfe, L., & Rose, N. (2014). Dealing with omitted and not-reached items in competence tests: Evaluating approaches accounting for missing responses in item response theory models. Educational and Psychological Measurement, 74(3), 423–452. https://doi.org/10.1177/0013164413504926
Pohl, S., Haberkorn, K., Hardt, K., & Wiegand, E. (2012). NEPS technical report for reading—scaling results of starting cohort 3 in fifth grade (NEPS Working Paper No. 15). Bamberg: Otto-Friedrich-Universität, Nationales Bildungspanel.
Pohl, S., & von Davier, M. (2018). Commentary: “On the importance of the speed-ability trade-off when dealing with not reached items” by Jesper Tijmstra and Maria Bolsinova. Frontiers in Psychology, 9, 1988. https://doi.org/10.3389/fpsyg.2018.01988
R Development Core Team. (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from http://www.r-project.org
Ranger, J., & Kuhn, J.-T. (2012). A flexible latent trait model for response times in tests. Psychometrika, 77(1), 31–47. https://doi.org/10.1007/s11336-011-9231-7
Ranger, J., & Ortner, T. (2012). The case of dependency of responses and response times: A modeling approach based on standard latent trait models. Psychological Test and Assessment Modeling, 54(2), 128–148.
Rose, N. (2013). Item nonresponses in educational and psychological measurement (Unpublished doctoral dissertation). Friedrich-Schiller-University of Jena, Germany.
Rose, N., von Davier, M., & Xu, X. (2010). Modeling nonignorable missing data with item response theory (IRT). ETS Research Report Series, 2010, i–53. https://doi.org/10.1002/j.2333-8504.2010.tb02218.x
Sachse, K., Mahler, N., & Pohl, S. (2019). When nonresponse mechanisms change: Effects on trends and group comparisons in international large-scale assessments. Educational and Psychological Measurement. https://doi.org/10.1177/0013164419829196
Schnipke, D. L., & Scrams, D. J. (1997). Modeling item response times with a two-state mixture model: A new method of measuring speededness. Journal of Educational Measurement, 34(3), 213–232.
Schnipke, D. L., & Scrams, D. J. (2002). Exploring issues of examinee behavior: Insights gained from response-time analyses. In C. N. Mills, M. Potenza, J. J. Fremer, & W. Ward (Eds.), Computer-based testing: Building the foundation for future assessments (pp. 237–266). Hillsdale, NJ: Lawrence Erlbaum Associates.
Semmes, R., Davison, M. L., & Close, C. (2011). Modeling individual differences in numerical reasoning speed as a random effect of response time limits. Applied Psychological Measurement, 35(6), 433–446. https://doi.org/10.1177/0146621611407305
Senkbeil, M., & Ihme, J. M. (2012). NEPS technical report for computer literacy—scaling results of starting cohort 4 in ninth grade (NEPS Working Paper No. 17). Bamberg: Otto-Friedrich-Universität, Nationales Bildungspanel.
Tijmstra, J., & Bolsinova, M. (2018). On the importance of the speed-ability trade-off when dealing with not reached items. Frontiers in Psychology, 9, 964. https://doi.org/10.3389/fpsyg.2018.00964
van der Linden, W. J. (2006). A lognormal model for response times on test items. Journal of Educational and Behavioral Statistics, 31, 181–204. https://doi.org/10.3102/10769986031002181
van der Linden, W. J. (2007). A hierarchical framework for modeling speed and accuracy on test items. Psychometrika, 72(3), 287–308. https://doi.org/10.1007/s11336-006-1478-z
van der Linden, W. J. (2008). Using response times for item selection in adaptive testing. Journal of Educational and Behavioral Statistics, 33, 5–20. https://doi.org/10.3102/1076998607302626
van der Linden, W. J., Breithaupt, K., Chuah, S. C., & Zhang, Y. (2007). Detecting differential speededness in multistage testing. Journal of Educational Measurement, 44(2), 117–130. https://doi.org/10.1111/j.1745-3984.2007.00030.x
van der Linden, W. J., & Glas, C. A. W. (2010). Statistical tests of conditional independence between responses and/or response times on test items. Psychometrika, 75(1), 120–139. https://doi.org/10.1007/s11336-009-9129-9
van der Linden, W. J., & Guo, F. (2008). Bayesian procedures for identifying aberrant response-time patterns in adaptive testing. Psychometrika, 73(3), 365–384. https://doi.org/10.1007/s11336-007-9046-8
van der Linden, W. J., Scrams, D. J., & Schnipke, D. L. (1999). Using response-time constraints to control for differential speededness in computerized adaptive testing. Applied Psychological Measurement, 23(3), 195–210. https://doi.org/10.1177/01466219922031329
Weeks, J. P., von Davier, M., & Yamamoto, K. (2016). Using response time data to inform the coding of omitted responses [Special issue: Current methodological issues in large-scale assessments]. Psychological Test and Assessment Modeling, 58(4), 671–701.
Wise, S. L., & DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1), 1–17. https://doi.org/10.1207/s15326977ea1001_1
Yamamoto, K., & Everson, H. (1997). Modeling the effects of test length and test time on parameter estimation using the hybrid model. In J. Rost & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. 89–98). Münster, Germany: Waxmann.
Yamamoto, K., Khorramdel, L., & von Davier, M. (2013). Scaling PIAAC cognitive data. In Organisation for Economic Cooperation and Development, Technical report of the Survey of Adult Skills (PIAAC) (pp. 406–438). Paris: OECD Publishing. Available at: http://www.oecd.org/site/piaac/_Technical%20Report_17OCT13.pdf