Book contents
- Frontmatter
- Contents
- List of Contributors
- PART I THE BASIS OF COGNITIVE DIAGNOSTIC ASSESSMENT
- PART II PRINCIPLES OF TEST DESIGN AND ANALYSIS
- PART III PSYCHOMETRIC PROCEDURES AND APPLICATIONS
- 8 Cognitive Foundations of Structured Item Response Models
- 9 Using the Attribute Hierarchy Method to Make Diagnostic Inferences About Examinees' Cognitive Skills
- 10 The Fusion Model Skills Diagnosis System
- 11 Using Information from Multiple-Choice Distractors to Enhance Cognitive-Diagnostic Score Reporting
- 12 Directions for Future Research in Cognitive Diagnostic Assessment
- Author Index
- Subject Index
- References
11 - Using Information from Multiple-Choice Distractors to Enhance Cognitive-Diagnostic Score Reporting
Published online by Cambridge University Press: 23 November 2009
Summary
Unidimensional tests measure primarily a single proficiency trait or ability (Hambleton & Swaminathan, 1985). That is, we assume that a single proficiency trait can completely explain the response patterns observed for a population of test takers. However, most tests exhibit some multidimensionality (i.e., responses that depend on more than one proficiency trait or ability). Multidimensionality may be due to the cognitive complexity of the test items, motivational propensities of the test takers, or other, more extraneous factors (Ackerman, 2005; Ackerman, Gierl, & Walker, 2003).
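In standard IRT notation (added here for reference; the equations are not reproduced from the chapter), this assumption says that a single latent trait drives each response and that, conditional on that trait, item responses are locally independent, as in the two-parameter logistic model:

$$
P(X_{ij} = 1 \mid \theta_i) \;=\; \frac{\exp\{a_j(\theta_i - b_j)\}}{1 + \exp\{a_j(\theta_i - b_j)\}},
\qquad
P(X_{i1}, \ldots, X_{iJ} \mid \theta_i) \;=\; \prod_{j=1}^{J} P(X_{ij} = x_{ij} \mid \theta_i),
$$

where $\theta_i$ is examinee $i$'s proficiency and $a_j$ and $b_j$ are the discrimination and difficulty parameters of item $j$.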
Diagnostically useful scores that profile examinees' strengths and weaknesses require well-behaved or principled multidimensional measurement information. This presents a challenge for established test development and psychometric scaling practices, which aim to produce unidimensional tests and to maintain unidimensional score scales so that accurate summative decisions can be made over time (e.g., college admissions, placement, or granting of a professional certificate or licensure). Any multidimensionality detected during the scaling process is treated as the product of “nuisance factors” that were not accounted for when designing the test items and building test forms (e.g., passage effects due to choices of topics or method variance due to item types). In fact, most item response theory (IRT) scaling procedures regard multidimensionality and related forms of residual covariation in the response data as statistical misfit or aberrance (Hambleton & Swaminathan, 1985). Can this largely uncontrolled “misfit” or residual covariance be exploited for legitimate diagnostic purposes? Probably not.
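The residual covariation mentioned above can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not an analysis from the chapter: it generates responses from a two-dimensional compensatory model, conditions each item on a crude unidimensional summary (the rest score), and compares residual correlations within and across the two trait clusters. The item counts, loadings, and rest-score regression are all assumptions made only for this example.

```python
# Minimal sketch (not from the chapter): simulate two-dimensional item responses
# and show the residual inter-item covariation that a unidimensional summary
# (here, the simple rest score) leaves behind.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 20

# Two correlated proficiency traits; items 0-9 load on trait 1, items 10-19 on trait 2.
theta = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n_persons)
loadings = np.zeros((n_items, 2))
loadings[:10, 0] = 1.2   # discrimination on trait 1
loadings[10:, 1] = 1.2   # discrimination on trait 2
difficulty = rng.uniform(-1.0, 1.0, size=n_items)

# Compensatory two-dimensional logistic response model.
logits = theta @ loadings.T - difficulty
responses = (rng.uniform(size=(n_persons, n_items)) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Crude "unidimensional" conditioning: regress each item on its rest score
# and keep the residuals.
residuals = np.empty_like(responses, dtype=float)
for j in range(n_items):
    rest = responses.sum(axis=1) - responses[:, j]
    slope, intercept = np.polyfit(rest, responses[:, j], 1)
    residuals[:, j] = responses[:, j] - (slope * rest + intercept)

resid_corr = np.corrcoef(residuals, rowvar=False)
within = resid_corr[:10, :10][np.triu_indices(10, k=1)].mean()
between = resid_corr[:10, 10:].mean()
print(f"mean residual correlation within a trait cluster:  {within:+.3f}")
print(f"mean residual correlation between trait clusters:  {between:+.3f}")
# Item pairs that share an unmodeled trait retain clearly higher residual
# correlations than cross-cluster pairs; this is the kind of residual
# covariation that unidimensional scaling flags as misfit.
```

The rest-score regression here is only a stand-in for a fitted unidimensional IRT model; the same idea underlies residual-correlation diagnostics such as Yen's Q3, which condition on the estimated trait rather than on a raw score.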
- Type: Chapter
- Information: Cognitive Diagnostic Assessment for Education: Theory and Applications, pp. 319-340
- Publisher: Cambridge University Press
- Print publication year: 2007