
An Introduction to Item Response Theory and Rasch Analysis: Application Using the Eating Assessment Tool (EAT-10)

Published online by Cambridge University Press: 07 December 2017

Jacob Kean*
Affiliation:
Population Health Sciences, University of Utah School of Medicine, Utah, USA
Erica F. Bisson
Affiliation:
Department of Neurosurgery, University of Utah School of Medicine, Utah, USA
Darrel S. Brodke
Affiliation:
Department of Orthopedics, University of Utah School of Medicine, Utah, USA
Joshua Biber
Affiliation:
Population Health Sciences, University of Utah School of Medicine, Utah, USA
Paul H. Gross
Affiliation:
Population Health Sciences, University of Utah School of Medicine, Utah, USA
Address for correspondence: Jacob Kean, Population Health Sciences, University of Utah School of Medicine, 295 Chipeta Way, Rm 1N455, Salt Lake City, Utah, USA. E-mail: [email protected]

Abstract

Item response theory (IRT) has its origins in educational measurement and is now commonly applied to health-related measurement of latent traits, such as function and symptoms. This application is due in large part to the gains in measurement precision attributable to IRT and the corresponding decreases in response burden, study costs, and study duration. The purpose of this paper is twofold: to introduce basic concepts of IRT and to demonstrate this analytic approach in a worked example, a Rasch model (1PL) analysis of the Eating Assessment Tool (EAT-10), a commonly used measure of oropharyngeal dysphagia. The results of the analysis were largely concordant with previous studies of the EAT-10 and illustrate for brain impairment clinicians and researchers how IRT analysis can yield greater measurement precision.
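
For orientation, the dichotomous Rasch (1PL) model named in the abstract expresses the probability that person j endorses item i as a logistic function of the difference between the person's trait level (theta_j) and the item's difficulty (b_i); this is the standard textbook form, not the paper's specific estimation setup:

P(X_ij = 1 | theta_j, b_i) = exp(theta_j − b_i) / (1 + exp(theta_j − b_i))

When theta_j = b_i, the endorsement probability is exactly 0.5; a person one logit above an item's difficulty endorses it with probability exp(1)/(1 + exp(1)) ≈ 0.73. Because the EAT-10 uses multi-point response options rather than binary items, its Rasch analysis would rely on a polytomous extension of this same logic.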

Type
Articles
Copyright
Copyright © Australasian Society for the Study of Brain Impairment 2017 

