Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- Part I Introduction
- Part II Representativeness
- Part III Causality and attribution
- Part IV Availability
- Part V Covariation and control
- Part VI Overconfidence
- Part VII Multistage evaluation
- Part VIII Corrective procedures
- 28 The robust beauty of improper linear models in decision making
- 29 The vitality of mythical numbers
- 30 Intuitive prediction: Biases and corrective procedures
- 31 Debiasing
- 32 Improving inductive inference
- Part IX Risk perception
- Part X Postscript
- References
- Index
28 - The robust beauty of improper linear models in decision making
Published online by Cambridge University Press: 05 May 2013
Summary
Paul Meehl's (1954) book Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence appeared 25 years ago. It reviewed studies indicating that the prediction of numerical criterion variables of psychological interest (e.g., faculty ratings of graduate students who had just obtained a Ph.D.) from numerical predictor variables (e.g., scores on the Graduate Record Examination, grade point averages, ratings of letters of recommendation) is better done by a proper linear model than by the clinical intuition of people presumably skilled in such prediction. The point of this article is to review evidence that even improper linear models may be superior to clinical predictions.
A proper linear model is one in which the weights given to the predictor variables are chosen so as to optimize the relationship between the prediction and the criterion. Simple regression analysis is the most common example of a proper linear model; the predictor variables are weighted so as to maximize the correlation between the resulting weighted composite and the actual criterion. Discriminant function analysis is another example; weights are given to the predictor variables such that the resulting linear composites maximize the discrepancy between two or more groups. Ridge regression analysis, another example (Darlington, 1978; Marquardt & Snee, 1975), attempts to assign weights such that the linear composites correlate maximally with the criterion of interest in a new set of data.
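The distinction above can be illustrated with a minimal sketch. Using synthetic data (not from the chapter), the snippet below fits a proper linear model by ordinary least squares, whose weights maximize the in-sample correlation between the weighted composite and the criterion, and compares it with an improper unit-weighted model that simply sums the standardized predictors. The data, weights, and sample size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: three standardized predictors and a noisy criterion.
# The "true" weights [0.5, 0.3, 0.2] are an arbitrary illustration.
n = 200
X = rng.standard_normal((n, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.standard_normal(n)

# Proper linear model: least-squares weights, chosen to maximize the
# correlation between the weighted composite and the criterion.
w_proper, *_ = np.linalg.lstsq(X, y, rcond=None)
proper_composite = X @ w_proper

# Improper linear model: equal (unit) weights on the standardized predictors.
improper_composite = X.sum(axis=1)

r_proper = np.corrcoef(proper_composite, y)[0, 1]
r_improper = np.corrcoef(improper_composite, y)[0, 1]
print(f"proper r = {r_proper:.3f}, improper r = {r_improper:.3f}")
```

In-sample, the proper model's correlation is necessarily at least as high as the improper one's; the chapter's point is that on new data, and against clinical intuition, the improper unit-weighted composite often holds up remarkably well.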
Judgment under Uncertainty: Heuristics and Biases, pp. 391-407. Publisher: Cambridge University Press. Print publication year: 1982.