
Assessment centers: Reflections, developments, and empirical insights

Published online by Cambridge University Press:  06 May 2024

Duncan J. R. Jackson* (King’s Business School, King’s College London, London, UK)
Michael D. Blair (U.S. Office of Personnel Management, Kansas City, MO, USA)
Pia V. Ingold (Department of Psychology, University of Copenhagen, Copenhagen, Denmark)
Corresponding author: Duncan J. R. Jackson; Email: [email protected]

Type: Focal Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

Assessment centers (ACs) are a popular evaluation approach often applied to guide employment selection and development decisions. AC participants engage in a series of work simulation exercises (e.g., role plays, group discussions, and presentations), and their performance on those exercises is rated by trained assessors. These interactions between participants and work simulations inspired early organizational interest in the AC approach in the late 1940s (Handyside & Duncan, 1954; Highhouse & Nolan, 2012), an interest that has remained into the present.

ACs continue to hold appeal in contemporary organizations, likely due, in part, to their interpersonal nature (Kleinmann & Ingold, 2019) and to the rich source of job-relevant information they provide, particularly on job candidates and for employee development (Lievens, 2009). In the same manner, ACs continue to motivate the interests of researchers, as evidenced by the volume of empirical articles on ACs published over the last 10 years (e.g., Breil et al., 2023; Dimotakis et al., 2017; Heimann et al., 2022; Hickman et al., 2023; Hoffman et al., 2015; Ingold et al., 2016, 2018; Jackson et al., 2016, 2022; Jansen et al., 2013; Kuncel & Sackett, 2014; Lievens et al., 2015; Meriac et al., 2014; Monahan et al., 2013; Oliver et al., 2016; Putka & Hoffman, 2013; Sackett et al., 2017; Speer et al., 2014; Thornton et al., 2019; Wirz et al., 2020).

A link to the inaugural issue

It is a testament to research- and practice-based interest in ACs that the first issue of Industrial and Organizational Psychology: Perspectives on Science and Practice (IOP), a journal only roughly 15 years old, included a focal article on ACs by Charles Lance along with responses to his article (see Footnote 1). Lance (2008) contributed a critique of how ratings from ACs are scored. He concluded that ACs do not measure dimensions and that attempts to use ACs to generate dimension scores should be abandoned. Points for consideration, reactions, and diverging points of view raised by Lance’s critique (e.g., Arthur et al., 2008; Howard, 2008; Rupp et al., 2008) have consistently elicited novel research on ACs.

During the 15 years that have passed since the Lance (2008) focal article and its responses, perspectives on ACs have continued to develop and have benefitted from knowledge generated by ongoing research. This special issue on ACs provides an opportunity to reflect again on the conceptual perspectives on ACs that have prevailed and emerged since 2008, as well as an opportunity to explore and showcase recent insightful empirical AC research. It moreover provides insights of value to practice and shows how ACs can continue to provide an abundant source of information for organizational researchers and decision makers.

Overview of papers in the special issue

Dewberry (2024) reviews the research literature concerned with whether ACs assess dimensions (or competencies, e.g., communication skills, tolerance) reliably and as intended. Fifteen years after Lance’s focal article, Dewberry focuses on more recent research on whether ACs measure dimensions, particularly research utilizing generalizability theory (G theory), which provides statistically controlled estimates of dimension effects. He concludes that evidence derived from G theory research confirms that ACs do not measure dimensions and concurs with Lance that attempts to measure dimensions with ACs should be abandoned. Dewberry moreover presents an argument against interactionist perspectives on ACs (e.g., trait activation theory, the mixed-model perspective), suggesting that some of the patterns used to support these perspectives may simply reflect artifacts of the AC measurement design. This work raises considerations for practitioners about whether they should cease attempts to use ACs to measure dimensions, scoring them instead only in relation to exercises and/or overall performance.
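To make the G-theory logic concrete, the sketch below, which is our own illustration and is not drawn from Dewberry (2024) or any study in this issue, simulates fully crossed AC ratings and estimates variance components for general performance, person-by-exercise effects, and person-by-dimension effects with a mixed model. The generating variances, sample sizes, and the choice of Python’s statsmodels library are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): a G-theory-style variance decomposition
# of simulated AC ratings. All variance values and sample sizes are
# hypothetical assumptions, not results from any study in this issue.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n_persons, n_exercises, n_dimensions = 150, 4, 5

# Hypothetical generating standard deviations: general performance and
# person-by-exercise effects dominate; person-by-dimension variance is small.
sd_person, sd_pxe, sd_pxd, sd_resid = 1.0, 0.8, 0.2, 0.6

person = np.repeat(np.arange(n_persons), n_exercises * n_dimensions)
exercise = np.tile(np.repeat(np.arange(n_exercises), n_dimensions), n_persons)
dimension = np.tile(np.arange(n_dimensions), n_persons * n_exercises)

rating = (
    rng.normal(0, sd_person, n_persons)[person]                            # general performance
    + rng.normal(0, sd_pxe, (n_persons, n_exercises))[person, exercise]    # exercise-specific
    + rng.normal(0, sd_pxd, (n_persons, n_dimensions))[person, dimension]  # dimension-specific
    + rng.normal(0, sd_resid, person.size)                                 # residual
)

df = pd.DataFrame({"person": person, "exercise": exercise,
                   "dimension": dimension, "rating": rating})

# Persons are the grouping factor; person-by-exercise and person-by-dimension
# effects enter as variance components, mirroring a G-theory decomposition.
model = smf.mixedlm(
    "rating ~ 1", df, groups="person", re_formula="1",
    vc_formula={"exercise": "0 + C(exercise)", "dimension": "0 + C(dimension)"},
)
print(model.fit().summary())  # compare estimated variances with the generating values
```

With generating values like these, the fitted summary recovers a small person-by-dimension component relative to the general and exercise-specific components; it is this kind of statistically controlled comparison that G theory studies of ACs rely on.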

In a conceptual paper, Nottingham and Rupp (2024) propose that ACs could be used to serve the emerging aim of selecting and developing inclusive leaders in organizations. In contrast to the Dewberry paper, Nottingham and Rupp adopt a dimension approach and argue that measuring inclusive leadership may add incremental validity to overall assessment ratings (OARs). Specifically, they suggest assessing inclusive leadership proficiency as a behavioral leadership dimension, and they develop propositions about its relationship with leadership and follower performance and with diversity among followers. Given that the evaluation of leaders and the fostering of diversity in organizations are key considerations for both research and practice, this article provides valuable guidance on how to potentially optimize evaluation criteria when aiming to assess inclusive leadership.

Prior research has illustrated that AC ratings depend on the impressions that assessors form of candidates (Ingold et al., 2018; Lance et al., 2004). Yet it is unknown to what degree these impressions remain consistent in ACs and whether their impact on AC ratings changes across and within AC exercises. Building on the thin-slice paradigm, Ingold et al. (in press) address this topic and investigate the consistency of assessors’ impressions of candidates by using different slices of video material from the beginning, middle, and end of three AC exercises. Results suggest that the impressions participants convey at different time points (i.e., the beginning, middle, and end of each exercise, but also across exercises) are consistent. Moreover, their findings suggest that these impressions predict AC performance and can also relate to participants’ job performance. This study offers research insights into the relevance of assessor impressions and advances our understanding of assessee behavior.

The topic of assessor training for ACs, which can be positioned in the broad research area of frame-of-reference training (Roch et al., 2012; Woehr & Huffcutt, 1994), is addressed by Gorman et al. (2024). Applied to ACs, frame-of-reference training provides assessors with a common set of standards for evaluating performance, with the goal of increasing rater effectiveness and consistency. In their study, Gorman et al. provide a perspective on the multifaceted structure of frame-of-reference training. The authors found that assessor training was of most assistance in the identification of low-performing AC participants. They moreover found that ratings from untrained assessors contained larger proportions of residual error than ratings from trained assessors. This study contributes knowledge to research and practice associated with AC training and provides insights into how the variance profile of AC ratings depends on whether assessors have been trained.

In her article, Roch (2024) provides evidence for perceptual differences among applicants relating to AC exercises and an ability test. Different applicant perceptions were found for different exercise types, and whether the AC was rated live or via a recording had implications for fairness perceptions. Moreover, Roch found that assessees’ previous AC experience influenced their levels of perceived self-efficacy. This study contributes to knowledge on applicant reactions, which can inform a practitioner’s choice of exercises and psychometric tests.

Procedural justice is conceptually related to considerations of ethics, a topic of major consequence to organizations. Fostering just, moral, and ethical behavior is paramount, not only for organizations but also for the wider development of society. d’Amato et al. (2024) address this issue in their paper and raise questions about how leaders can develop ethical and moral behavior using the AC method. They provide initial findings suggesting that the development of ethical leadership attitudes may result in negative, backlash-oriented repercussions. For research, this study offers insights into the application of ACs in the context of ethical leadership. For practice, it provides early warnings about some of the pitfalls of attempting to develop attitudes with ACs.

Organizational decision making based on AC results is complex and, as Rupp et al. (2024) suggest, it requires the decision maker to weigh theory, empirical contributions, and best practice. In their paper, Rupp et al. present an epistemology for the integration of these three factors. They apply their framework to present a perspective on assessment and development that is directly relevant to ACs. They conclude that there are areas of alignment among theory, empirical contributions, and best practice; however, they also highlight key gaps and areas for further development. Researchers and practitioners could apply the Rupp et al. framework to further research and to help ensure better integration across theory, research, and practice in assessment and development, as well as in other areas of complex workplace interventions.

Conclusions

This special issue offers an exploration of existing, new, and alternative lines of inquiry that showcases the progressive, enterprising, and current nature of AC research and development. In our view, modern ACs offer a wealth of knowledge and value to both individuals and organizations and will continue to stimulate research, as they have both before and since the first issue of IOP. In our reading, the current collection of works suggests that researchers and practitioners are best served by exploring, debating, and engaging with the areas for development, challenges, and controversies associated with ACs. We propose that learning more about such issues, stimulating debate around them, and allowing different perspectives to be heard is how a research area grows and develops. We hope that the current set of papers will contribute to the AC debate, reignite unresolved controversies, and stimulate new lines of enquiry. It is through such debates and discussions that we can, as researchers and practitioners, strive toward an enhanced understanding of the AC method, to the continued benefit of individuals and organizations.

Competing interests

None.

Footnotes

1 For those interested in reading Lance’s focal article and the responses to the article, Volume 1, Issue 1 of IOP is available online: https://www.cambridge.org/core/journals/industrial-and-organizational-psychology/issue/F8319F5B9E1B45CC024A74BE3AFEBB01

References

Arthur, W. Jr., Day, E. A., & Woehr, D. J. (2008). Mend it, don’t end it: An alternate view of assessment center construct-related validity evidence. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 105–111. https://doi.org/10.1111/j.1754-9434.2007.00019.x
Breil, S. M., Lievens, F., Forthmann, B., & Back, M. D. (2023). Interpersonal behavior in assessment center role-play exercises: Investigating structure, consistency, and effectiveness. Personnel Psychology, 76(3), 759–795. https://doi.org/10.1111/peps.12507
d’Amato, A., Murugavel, V., Mereiros, K., & Watts, L. L. (2024). An ethical leadership assessment center pilot: Assessing and developing moral person and moral manager dimensions. Industrial and Organizational Psychology, 17.
Dewberry, C. (2024). Assessment centers do not measure stable competencies: Why this is now beyond reasonable doubt. Industrial and Organizational Psychology, 17.
Dimotakis, N., Mitchell, D., & Maurer, T. J. (2017). Positive and negative assessment center feedback in relation to development self-efficacy, feedback seeking, and promotion. Journal of Applied Psychology, 102(11), 1514–1527. https://doi.org/10.1037/apl0000228
Gorman, C. A., Jackson, D. J. R., Meriac, J. P., & Himmler, J. R. (2024). Unpacking frame-of-reference assessor training effectiveness. Industrial and Organizational Psychology, 17.
Handyside, J. D., & Duncan, D. C. (1954). Four years later: A follow-up of an experiment in selecting supervisors. Occupational Psychology, 28, 9–23.
Heimann, A. L., Ingold, P. V., Lievens, F., Melchers, K. G., Keen, G., & Kleinmann, M. (2022). Actions define a character: Assessment centers as behavior-focused personality measures. Personnel Psychology, 75(3), 675–705. https://doi.org/10.1111/peps.12478
Hickman, L., Herde, C. N., Lievens, F., & Tay, L. (2023). Automatic scoring of speeded interpersonal assessment center exercises via machine learning: Initial psychometric evidence and practical guidelines. International Journal of Selection and Assessment. https://doi.org/10.1111/ijsa.12418
Highhouse, S., & Nolan, K. P. (2012). One history of the assessment center. In Jackson, D. J. R., Lance, C. E., & Hoffman, B. J. (Eds.), The psychology of assessment centers (pp. 25–44). Routledge/Taylor & Francis Group.
Hoffman, B. J., Kennedy, C. L., LoPilato, A. C., Monahan, E. L., & Lance, C. E. (2015). A review of the content, criterion-related, and construct-related validity of assessment center exercises. Journal of Applied Psychology, 100, 1143–1168. https://doi.org/10.1037/a0038707
Howard, A. (2008). Making assessment centers work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 98–104. https://doi.org/10.1111/j.1754-9434.2007.00018.x
Ingold, P. V., Dönni, M., & Lievens, F. (2018). A dual-process theory perspective to better understand judgments in assessment centers: The role of initial impressions for dimension ratings and validity. Journal of Applied Psychology, 103(12), 1367–1378. https://doi.org/10.1037/apl0000333
Ingold, P. V., Heimann, A. L., & Breil, S. M. (in press). Any slice is predictive? On the consistency of impressions from the beginning, middle, and end of assessment center exercises and their relation to performance. Industrial and Organizational Psychology: Perspectives on Science and Practice.
Ingold, P. V., Kleinmann, M., Konig, C. J., & Melchers, K. G. (2016). Transparency of assessment centers: Lower criterion-related validity but greater opportunity to perform? Personnel Psychology, 69(2), 467–497. https://doi.org/10.1111/peps.12105
Jackson, D. J. R., Michaelides, G., Dewberry, C., Nelson, J., & Stephens, C. (2022). Reliability in assessment centres depends on general and exercise performance, but not on dimensions. Journal of Occupational and Organizational Psychology, 95(4), 739–757. https://doi.org/10.1111/joop.12398
Jackson, D. J. R., Michaelides, M., Dewberry, C., & Kim, Y. (2016). Everything that you have ever been told about assessment center ratings is confounded. Journal of Applied Psychology, 101(7), 976–994. https://doi.org/10.1037/apl0000102
Jansen, A., Melchers, K. G., Lievens, F., Kleinmann, M., Brandli, M., Fraefel, L., & Konig, C. J. (2013). Situation assessment as an ignored factor in the behavioral consistency paradigm underlying the validity of personnel selection procedures. Journal of Applied Psychology, 98(2), 326–341. https://doi.org/10.1037/a0031257
Kleinmann, M., & Ingold, P. V. (2019). Toward a better understanding of assessment centers: A conceptual review. Annual Review of Organizational Psychology and Organizational Behavior, 6, 349–372. https://doi.org/10.1146/annurev-orgpsych-012218-014955
Kuncel, N. R., & Sackett, P. R. (2014). Resolving the assessment center construct validity problem (as we know it). Journal of Applied Psychology, 99(1), 38–47. https://doi.org/10.1037/a0034147
Lance, C. E. (2008). Why assessment centers do not work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(1), 84–97. https://doi.org/10.1111/j.1754-9434.2007.00017.x
Lance, C. E., Foster, C., Gentry, W. A., & Thoresen, J. D. (2004). Assessor cognitive processes in an operational assessment center. Journal of Applied Psychology, 89, 22–35.
Lievens, F. (2009). Assessment centres: A tale about dimensions, exercises, and dancing bears. European Journal of Work and Organizational Psychology, 18, 102–121.
Lievens, F., Schollaert, E., & Keen, G. (2015). The interplay of elicitation and evaluation of trait-expressive behavior: Evidence in assessment center exercises. Journal of Applied Psychology, 100(4), 1169–1188. https://doi.org/10.1037/apl0000004
Meriac, J. P., Hoffman, B. J., & Woehr, D. J. (2014). A conceptual and empirical review of the structure of assessment center dimensions. Journal of Management, 40, 1269–1296. https://doi.org/10.1177/0149206314522299
Monahan, E. L., Hoffman, B. J., Lance, C. E., Jackson, D. J. R., & Foster, M. R. (2013). Now you see them, now you do not: The influence of indicator-factor ratio on support for assessment center dimensions. Personnel Psychology, 66, 1009–1047. https://doi.org/10.1111/peps.12049
Nottingham, A., & Rupp, D. E. (2024). Inclusive leadership as a valid assessment center dimension. Industrial and Organizational Psychology, 17.
Oliver, T., Hausdorf, P., Lievens, F., & Conlon, P. (2016). Interpersonal dynamics in assessment center exercises: Effects of role player portrayed disposition. Journal of Management, 42(7), 1992–2017. https://doi.org/10.1177/0149206314525207
Putka, D. J., & Hoffman, B. J. (2013). Clarifying the contribution of assessee-, dimension-, exercise-, and assessor-related effects to reliable and unreliable variance in assessment center ratings. Journal of Applied Psychology, 98(1), 114–133. https://doi.org/10.1037/a0030887
Roch, S. G. (2024). Perceptions of assessment center exercises: Between exercises differences and interventions. Industrial and Organizational Psychology, 17.
Roch, S. G., Woehr, D. J., Mishra, V., & Kieszczynska, U. (2012). Rater training revisited: An updated meta-analytic review of frame-of-reference training. Journal of Occupational and Organizational Psychology, 85(2), 370–395. https://doi.org/10.1111/j.2044-8325.2011.02045.x
Rupp, D. E., Thornton, G. C. III, Bisbey, T. M., Nottingham, A., Salas, E., & Murphy, K. R. (2024). An epistemology for assessment and development: How do we know what we know? Industrial and Organizational Psychology, 17.
Rupp, D. E., Thornton, G. C., & Gibbons, A. M. (2008). The construct validity of the assessment center method and usefulness of dimensions as focal constructs. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 116–120. https://doi.org/10.1111/j.1754-9434.2007.00021.x
Sackett, P. R., Shewach, O. R., & Keiser, H. N. (2017). Assessment centers versus cognitive ability tests: Challenging the conventional wisdom on criterion-related validity. Journal of Applied Psychology, 102(10), 1435–1447. https://doi.org/10.1037/apl0000236
Speer, A. B., Christiansen, N. D., Goffin, R. D., & Goff, M. (2014). Situational bandwidth and the criterion-related validity of assessment center ratings: Is cross-exercise convergence always desirable? Journal of Applied Psychology, 99, 282–295. https://doi.org/10.1037/a0035213
Thornton, G. C., Rupp, D. E., Gibbons, A. M., & Vanhove, A. J. (2019). Same-gender and same-race bias in assessment center ratings: A rating error approach to understanding subgroup differences. International Journal of Selection and Assessment, 27(1), 54–71. https://doi.org/10.1111/ijsa.12229
Wirz, A., Melchers, K. G., Kleinmann, M., Lievens, F., Annen, H., Blum, U., & Ingold, P. V. (2020). Do overall dimension ratings from assessment centres show external construct-related validity? European Journal of Work and Organizational Psychology, 29(3), 405–420. https://doi.org/10.1080/1359432X.2020.1714593
Woehr, D. J., & Huffcutt, A. I. (1994). Rater training for performance appraisal: A quantitative review. Journal of Occupational and Organizational Psychology, 67, 189–205.