
13 - Measuring Human Intelligence with Artificial Intelligence: Adaptive Item Generation

Published online by Cambridge University Press:  23 November 2009

Robert J. Sternberg, Yale University, Connecticut
Jean E. Pretz, Yale University, Connecticut

Summary

INTRODUCTION

Adaptive item generation may be the next innovation in intelligence testing. In adaptive item generation, the optimally informative item is developed anew for the examinee during the test. Reminiscent of a chess match between computer and person, the computer generates the next item based on the examinee's pattern of previous responses. Adaptive item generation requires the merger of two lines of research: psychometric methods for adaptive testing and cognitive analysis of items.

Adaptive testing is the current state of the art in intelligence measurement. In adaptive testing, items are selected individually during the test to yield optimal information about an examinee's ability. The items are selected interactively by a computer algorithm using calibrated psychometric properties: generally, harder items are selected when the examinee answers correctly, and easier items when the examinee does not. Adaptive item selection leads to shorter and more reliable tests. In a sense, optimal item selection for an examinee is measurement by artificial intelligence.
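To make this concrete, here is a minimal sketch of adaptive item selection under a two-parameter logistic (2PL) IRT model, where the next item is the one with maximum Fisher information at the current ability estimate. The item bank, parameter values, and function names are illustrative assumptions, not the procedure of any particular testing program.

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    evaluated at ability level theta: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Index of the unadministered item most informative at theta_hat."""
    infos = [item_information(theta_hat, a, b) if i not in administered
             else -np.inf
             for i, (a, b) in enumerate(item_bank)]
    return int(np.argmax(infos))

# Hypothetical calibrated bank of (discrimination, difficulty) pairs.
bank = [(1.2, -1.0), (0.9, 0.0), (1.5, 0.5), (1.1, 1.3)]
print(select_next_item(theta_hat=0.4, item_bank=bank, administered={1}))
```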

Adaptive item generation is a step beyond adaptive testing. Like adaptive testing, it estimates the psychometric properties of the items that would be optimally informative for the person. Beyond this, however, the impact of specific stimulus content on an item's psychometric properties must be known; that is, one must know how the stimulus features of specific items affect the ability construct.
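One well-studied way to encode such knowledge is a linear logistic test model (LLTM), which decomposes an item's difficulty into a weighted sum of scored stimulus features. The sketch below assumes two hypothetical features of a matrix-completion item; the weights and intercept stand in for values that would be calibrated on previously administered items.

```python
import numpy as np

def predicted_difficulty(features, weights, intercept):
    """LLTM-style prediction: difficulty as a linear combination of
    scored stimulus features (e.g., number of rules, perceptual
    complexity of a matrix item)."""
    return intercept + float(np.dot(features, weights))

weights = np.array([0.45, 0.30])  # hypothetical per-feature increments
features = np.array([3, 2])       # a generated item: 3 rules, complexity 2
b_hat = predicted_difficulty(features, weights, intercept=-1.2)  # -> 0.75
```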

This chapter describes a system for measuring ability in which new items are created while the person takes the test. Ability is measured online by a system of artificial intelligence.
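Putting the two ideas together, one possible session loop is sketched below, reusing numpy and predicted_difficulty from the previous sketch. The random search over feature vectors and the grid-based maximum-likelihood ability update are simplifying assumptions, and `respond` is a hypothetical callback that renders the generated item and returns the examinee's scored answer.

```python
def run_adaptive_generation(respond, weights, intercept, n_items=15):
    """Generate each item on the fly so its predicted difficulty matches
    the current ability estimate, then re-estimate ability from the
    accumulated responses. `respond(features)` must return 0 or 1."""
    rng = np.random.default_rng(0)
    grid = np.linspace(-4.0, 4.0, 161)   # candidate ability values
    loglik = np.zeros_like(grid)
    theta_hat = 0.0                      # start at the population mean
    for _ in range(n_items):
        # Sample candidate feature vectors; keep the one whose predicted
        # difficulty is closest to the current ability estimate.
        cands = rng.integers(1, 6, size=(50, len(weights)))
        b_hats = intercept + cands @ weights
        feats = cands[np.argmin(np.abs(b_hats - theta_hat))]
        b = predicted_difficulty(feats, weights, intercept)
        u = respond(feats)                         # scored response (0/1)
        p = 1.0 / (1.0 + np.exp(-(grid - b)))      # 1PL response curve
        loglik += u * np.log(p) + (1 - u) * np.log(1 - p)
        theta_hat = grid[np.argmax(loglik)]        # grid-based ML update
    return theta_hat
```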

Type: Chapter
Information: Cognition and Intelligence: Identifying the Mechanisms of the Mind, pp. 251–267
Publisher: Cambridge University Press
Print publication year: 2004
