
Why Some Situational Judgment Tests Fail To Predict Job Performance (and Others Succeed)

Published online by Cambridge University Press:  23 March 2016

Deborah L. Whetzel*
Affiliation:
Human Resources Research Organization, Alexandria, Virginia
Matthew C. Reeder
Affiliation:
Human Resources Research Organization, Alexandria, Virginia
*Correspondence concerning this article should be addressed to Deborah L. Whetzel, Human Resources Research Organization, 66 Canal Center Plaza, Suite 700, Alexandria, VA 22314. E-mail: [email protected]

Extract

Situational judgment tests (SJTs) occasionally fail to predict job performance in criterion-related validation studies, often despite considerable effort to follow scholarly recipes for their development. This commentary offers some plausible explanations for why this may occur, along with some tips for SJT development. In most cases, we frame the issue in terms of implicit trait policies (ITPs; Motowidlo, Hooper, & Jackson, 2006a, 2006b) and the measurement of general domain knowledge. In other instances, the issue has no direct tie to the ITP concept, but our experience suggests it is important enough to include in this response. The first two issues involve challenges in gathering validity evidence to support the use of SJTs, and the remaining issues deal more directly with SJT design considerations.
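As background for the criterion-related validation studies mentioned above, the validity coefficient is simply the correlation between predictor scores and a job performance criterion. The sketch below is a minimal illustration in Python using hypothetical SJT scores and supervisor ratings (the data and variable names are illustrative assumptions, not the authors' data or procedure).

```python
import numpy as np
from scipy import stats

# Hypothetical data: SJT total scores and supervisor performance ratings
# for the same ten incumbents (illustrative values only).
sjt_scores = np.array([42, 55, 48, 61, 39, 52, 58, 45, 50, 63])
performance = np.array([3.1, 3.8, 3.4, 4.2, 2.9, 3.6, 4.0, 3.2, 3.5, 4.4])

# The criterion-related validity coefficient is the Pearson correlation
# between predictor (SJT) scores and the performance criterion.
r, p_value = stats.pearsonr(sjt_scores, performance)
print(f"Validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```

In practice, studies of this kind also correct observed coefficients for criterion unreliability and range restriction, which the simple correlation above does not address.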

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2016 

References

Arthur, W., Glaze, R. M., Jarrett, S. M., White, C. D., Schurig, I., & Taylor, J. E. (2014). Comparative evaluation of situational judgment test response formats in terms of construct-related validity, subgroup differences, and susceptibility to response distortion. Journal of Applied Psychology, 99, 535–545.
Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 143–159.
Guion, R. M. (2011). Assessment, measurement, and prediction for personnel decisions (2nd ed.). New York, NY: Taylor & Francis Group.
Lievens, F., & Motowidlo, S. J. (2016). Situational judgment tests: From measures of situational judgment to measures of general domain knowledge. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 3–22.
Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181–1188.
Lievens, F., Sackett, P. R., & Buyse, T. (2009). The effects of response instructions on situational judgment test performance and validity in a high-stakes context. Journal of Applied Psychology, 94, 1096–1101.
MacKenzie, W. I., Ployhart, R. E., Weekley, J. A., & Ehlers, C. (2010). Contextual effects on SJT responses: An examination of construct validity and mean differences across applicant and incumbent contexts. Human Performance, 23, 1–21.
McDaniel, M. A., Hartman, N. S., Whetzel, D. L., & Grubb, W. L. III. (2007). Situational judgment tests, response instructions and validity: A meta-analysis. Personnel Psychology, 60, 63–91.
McDaniel, M. A., Psotka, J., Legree, P. J., Yost, A. P., & Weekley, J. A. (2011). Toward an understanding of situational judgment item validity and group differences. Journal of Applied Psychology, 96, 327–336.
Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006a). Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. Journal of Applied Psychology, 91, 749–761.
Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006b). A theoretical basis for situational judgment tests. In Weekley, J. A. & Ployhart, R. E. (Eds.), Situational judgment tests: Theory, measurement, and application (pp. 57–82). Mahwah, NJ: Erlbaum.
Weekley, J. A., Ployhart, R. E., & Harold, C. M. (2004). Personality and situational judgment tests across applicant and incumbent settings: An examination of validity, measurement, and subgroup differences. Human Performance, 17, 433–461.
Weekley, J. A., Ployhart, R. E., & Holtz, B. C. (2006). On the development of situational judgment tests: Issues in item development, scaling, and scoring. In Weekley, J. A. & Ployhart, R. E. (Eds.), Situational judgment tests: Theory, measurement, and application (pp. 157–182). Mahwah, NJ: Erlbaum.