Item Response Theory for Creativity Measurement

Published online by Cambridge University Press: 16 February 2024

Nils Myszkowski
Affiliation: Pace University

Summary

Item response theory (IRT) represents a key advance in measurement theory. Yet it is largely absent from curricula, textbooks, and popular statistical software, and it is often introduced through only a subset of models. This Element, intended for creativity and innovation researchers, researchers-in-training, and anyone interested in how individual creativity might be measured, aims to provide (1) an overview of classical test theory (CTT) and its shortcomings in creativity measurement situations (e.g., fluency scores, the consensual assessment technique); (2) an introduction to IRT and its core concepts, using a broad view of IRT that notably treats CTT models as particular cases of IRT; (3) a practical, strategic approach to IRT modeling; (4) example applications of this strategy from creativity research and the advantages they bring; and (5) ideas for future work on how IRT could better benefit creativity research, as well as connections with other popular frameworks.
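To make the kind of modeling the Element describes concrete, below is a minimal, hypothetical sketch in R, in the spirit of the judge-response-theory treatment of consensual assessment data: ratings of creative products by several judges are treated as item responses and fit with Samejima's graded response model using the open-source mirt package. The number of judges, the data, and all parameter values are simulated assumptions for illustration, not material from the Element itself.

```r
# Hypothetical illustration only: simulated judge ratings analyzed with
# Samejima's graded response model via the mirt package.
library(mirt)

set.seed(1)
J <- 6  # six hypothetical judges, each rating products on a 5-category scale

# Simulated "item" (judge) parameters: discriminations and ordered
# category intercepts (4 decreasing intercepts -> 5 rating categories).
a <- matrix(rlnorm(J, meanlog = 0.2, sdlog = 0.3))
d <- t(apply(matrix(rnorm(J * 4, 0, 1.5), J, 4), 1, sort, decreasing = TRUE))

# 300 simulated products, each rated by all judges
ratings <- simdata(a, d, N = 300, itemtype = "graded")

# Fit a unidimensional graded response model
fit <- mirt(ratings, model = 1, itemtype = "graded")

coef(fit, simplify = TRUE)$items   # per-judge discriminations and intercepts
theta <- fscores(fit)              # latent creativity estimate per product
head(theta)
```

A fit like this shows what averaging ratings cannot: how sharply each judge discriminates among products and where along the latent continuum each judge's rating categories operate, rather than treating all judges as interchangeable.
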
Information

Type: Element
Online ISBN: 9781009239035
Publisher: Cambridge University Press
Print publication: 14 March 2024

