
Neurodiversity and talent measurement: Revisiting the basics

Published online by Cambridge University Press: 09 March 2023

Jeremiah T. McMillan*, WithYouWithMe Pty Ltd, Sydney, Australia
Benjamin Listyg, University of Georgia, Athens, GA, USA
Jeh Cooper, WithYouWithMe Pty Ltd, Sydney, Australia

*Corresponding author. Email: [email protected]

© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

LeFevre-Levy et al. (2023) lay out a compelling argument in support of furthering research for navigating neurodiversity in organizational settings. We contend that one notable gap in this area is the development of evidence-based practices for the effective identification, measurement, and use of job-relevant psychological constructs for staffing decisions. This gap is significant considering organizations currently bemoan a talent shortage or “talent war,” and the pressure to tap previously underrepresented pools of talent is higher than ever (Trost, 2020). As a field, industrial-organizational psychology and organizational behavior (IO/OB) has amassed a wealth of knowledge pertaining to validity, diversity, and fairness in employee selection. However, unique considerations come into focus if job analysis and psychometrically sound measurement practices are to be used effectively across the entire neurodiversity spectrum.

Certain neuroatypical employees may demonstrate different “true score” mean trait levels than neurotypical employees on job-relevant knowledge, skills, abilities, and other characteristics (KSAOs) such as social, learning, and communication skills (Krzeminska et al., 2019). We posit this could lead to a new diversity–validity dilemma, similar to the one observed between Black and White respondents on cognitive ability measures (Ployhart & Holtz, 2008). Aside from mean score differences, differential functioning of assessments (i.e., a lack of measurement invariance) may arise from differences in neurotypical and neuroatypical cognitive processes, potentially calling into question the overall validity of psychological assessments for selection purposes. Differences in social cognition and information processing among neurodiverse individuals may also impact how they react to validation research and psychological assessment broadly. Furthermore, any unchecked assumptions or biases linked to a traditional job analysis conducted by, with, and for neurotypical individuals may inadvertently disadvantage neuroatypical individuals.
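To make the practical stakes of such mean differences concrete, consider the minimal sketch below (in Python, with entirely hypothetical numbers): it simulates a d = 0.5 “true score” gap on a standardized predictor and shows how a top-down cut converts that gap into unequal selection rates, benchmarked here against the familiar four-fifths rule.

```python
# Minimal sketch: how a hypothetical subgroup mean difference on a predictor
# translates into differential selection rates under a top-down cut score.
# All values are simulated for illustration; nothing here reflects real data.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
group_a = rng.normal(loc=0.0, size=n)   # reference group, standardized scores
group_b = rng.normal(loc=-0.5, size=n)  # focal group, d = 0.5 lower mean

# Select the top 20% of the combined applicant pool
cut = np.quantile(np.concatenate([group_a, group_b]), 0.80)
rate_a = (group_a >= cut).mean()
rate_b = (group_b >= cut).mean()

print(f"Selection rate, reference group: {rate_a:.3f}")
print(f"Selection rate, focal group:     {rate_b:.3f}")
print(f"Adverse impact ratio:            {rate_b / rate_a:.2f}")  # flagged if < 0.80
```

Even a moderate gap at the mean produces an impact ratio well below the four-fifths threshold at selective cut scores, which is why validity and diversity goals can pull in opposite directions.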

We focus on three key areas to guide researchers and practitioners in the endeavor to improve neurodiverse employee selection and assessment: (a) the explicit inclusion of neurodiversity considerations in job analysis and content validation, (b) further research into the identification of appropriate psychological assessment characteristics, and (c) the inclusion of tests for construct equivalence across different neurodiverse groups. Throughout, we offer broad recommendations, aiming to spark further discussion on how they may be appropriately applied to varying organizational contexts and desired employee selection outcomes.

Job analysis

Job analysis (JA) serves as the proverbial backbone of all we do as IO/OB practitioners. As such, it is critical that the data gathered truly represent the constituent job tasks and/or KSAOs needed for the job. To our knowledge, research has not yet examined how the JA process is affected by participants and stakeholders falling along a neurological continuum.

At the crux of most JA approaches is a reliance upon subject matter experts (SMEs) to provide researchers with information about the nature of the work or worker. Thus, social and cognitive biases from SMEs may enter at multiple stages of the process, including the delineation of job tasks and the linkage between tasks and individual characteristics (Morgeson & Campion, 1997). We recommend the following targeted steps be implemented into the standard JA toolkit: educating SMEs on neurodiversity considerations, including neuroatypical participants in the JA sample, and considering how the JA medium aligns with participant characteristics and desired outcomes.

With regard to the initial identification of key work tasks (e.g., a task inventory), socially desirable but not strictly necessary traits may be overly weighted in JA (Morgeson & Campion, 1997). Thus, educating SMEs to use job requirements as ground-truth evidence, rather than what “ought to be,” may help to prevent such biases from entering the process. It may also be useful to prompt JA participants to conduct thought experiments, posing the question of what alternative or unorthodox styles or approaches could lead to job success. Simply pointing out that an individual may gravitate toward conceptualizing the job from a neurotypical-centric perspective could be helpful, similar to the manner in which calling out ethnocentrism is effective (Myers, 2016). A more advanced tactic might be to provide concrete examples of the ways an individual with an atypical skill set may bring unique strengths, serving the dual purpose of priming the pump of divergent thinking as well as reducing stigma toward those who may be seen as an outgroup.

Aside from educating JA participants, it is important to intentionally include neuroatypical incumbents and to support them throughout the JA process. This diversity of opinion may be especially valuable for disrupting shared mental models of the ideal or prototypical employee (Hong & Page, 2004). In fact, some neuroatypical individuals may be particularly gifted at seeing new patterns and mentally visualizing information (Morris et al., 2015), allowing for keen insights into the nature of the job and its linkages with KSAOs.

Job analytic data should be gathered in multiple forms where feasible. Those with low social or verbal abilities may fail to speak up in focus group settings (Paulhus & Morgan, 1997). Conversely, those with deficits in attention or difficulty processing written information may be more engaged in a focus group than in completing long written questionnaires. Methodology should be informed by the quality of data expected, given diversity in social and cognitive traits, rather than by convenience.

As a final note on JA, one area of potentially fruitful applied research would be to leverage a person-centered approach to identify unique configurations of traits that are predictive of job success (Blickle et al., 2013). This would allow researchers to identify compensatory mechanisms within different configurations of KSAOs (i.e., those more or less likely to be seen in neuroatypical vs. neurotypical populations). This is meaningfully different from the notion that a linear predictive model can include compensatory allowances. Instead, different clusters or latent profiles of individuals may be qualitatively rather than quantitatively unique but still succeed on the job.
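As an illustration of what such a person-centered analysis might look like, the sketch below fits a Gaussian mixture model (one common implementation of latent profile analysis) to simulated KSAO scores. The variable names, profile structure, and data are our own assumptions for demonstration, not an analysis from the literature.

```python
# Minimal sketch of a person-centered (latent profile) analysis using a
# Gaussian mixture model. Data are simulated: two qualitatively different
# KSAO configurations that could both plausibly succeed on the job.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical profiles: one high on social skill, one high on pattern recognition
p1 = rng.multivariate_normal([1.0, -0.5], 0.4 * np.eye(2), size=300)
p2 = rng.multivariate_normal([-0.5, 1.0], 0.4 * np.eye(2), size=300)
X = np.vstack([p1, p2])  # columns: [social_skill, pattern_recognition]

# Choose the number of profiles by BIC rather than assuming it in advance
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in (1, 2, 3, 4)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
profile = fits[best_k].predict(X)

print(f"BIC-preferred number of profiles: {best_k}")
for k in range(best_k):
    centroid = X[profile == k].mean(axis=0).round(2)
    print(f"Profile {k}: mean [social_skill, pattern_recognition] = {centroid}")
```

In applied use, the recovered profiles would then be related to job performance criteria to test whether distinct configurations are comparably successful, rather than forcing all applicants onto a single compensatory composite.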

Assessment characteristics

We use the term assessment characteristics here to refer to any element or property of psychological assessment instruments aside from the construct they are intended to tap; this includes assessment medium, speededness, and item format. From a strictly scientific perspective, all assessment characteristics should be chosen based on the anticipated or proven improvement of psychometric properties. Practically speaking, assessments often serve multiple purposes for an organization, including signaling company culture and making the applicant experience more enjoyable (Rivera, 2012). Explicit attention to the role of neurodiversity in measurement helps to facilitate these multiple purposes.

Including a time limit on an aptitude-based assessment may be useful for increasing psychological fidelity (i.e., similarity to actual on-the-job behaviors). Time limits may also be used as a tool to force greater discrimination among candidates’ ability levels. This is generally not recommended, though, as factor analytic evidence suggests that speed (response time) is better modeled as an additional factor than as an additional indicator of the ability of interest (van der Linden, 2011). Particularly for those with learning disabilities, it may take longer to arrive at the correct answer to an item. This may lead to erroneous, time-driven subgroup differences across neurotypical and neuroatypical populations. If assessments are timed, allowances should be made for exceptions, for both legal and validity purposes.
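The sketch below illustrates this confound with a toy simulation under assumed parameters: two groups with identical ability distributions but different working speeds receive different expected scores once a time limit caps the number of items attempted. The functional forms and constants are illustrative assumptions, not a model taken from van der Linden (2011).

```python
# Minimal sketch of the speed/ability confound under a time limit.
# Both simulated groups have identical ability; only working speed differs.
import numpy as np

rng = np.random.default_rng(7)
n_items, n = 30, 5_000
ability = rng.normal(size=n)               # shared ability distribution
speed_fast = rng.normal(loc=0.0, size=n)   # reference group's speed
speed_slow = rng.normal(loc=-0.5, size=n)  # focal group works more slowly

def expected_timed_score(theta, tau, base_items=20):
    """Expected number-correct when the time limit caps items attempted."""
    # Items attempted rises with speed (tau); accuracy rises with ability (theta)
    attempted = np.clip(np.round(base_items + 5 * tau), 0, n_items)
    p_correct = 1 / (1 + np.exp(-theta))   # simple logistic accuracy model
    return attempted * p_correct

print(f"Mean score, faster group: {expected_timed_score(ability, speed_fast).mean():.2f}")
print(f"Mean score, slower group: {expected_timed_score(ability, speed_slow).mean():.2f}")
```

The resulting score gap is driven entirely by speed, so treating it as an ability difference would be precisely the kind of validity error the factor analytic evidence warns about.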

Item format may additionally play a role in how neuroatypical applicants perform on and react to assessments during the hiring process. Technology-enabled and game-based assessments are quickly gaining popularity. These types of assessments not only invite the opportunity to make assessment more engaging for different groups but also introduce a bevy of psychometric concerns, not the least of which is differential functioning by neurodiversity status (Willis et al., 2021). Forced-choice items represent another format that may prove challenging for neuroatypical applicants. Such items present two statements of equal social desirability and require respondents to indicate which statement best describes them. They are commonly used to reduce applicant faking (Jackson et al., 2000). However, these same items may put an undue cognitive burden on test takers (Vasilopoulos et al., 2006) and elicit negative affective reactions, particularly when coupled with ideal point items (Harris et al., 2021). How might neuroatypicality interact with these issues? More research is required to understand how neuroatypical applicants perform on and react to different item response formats.

We recommend prehire assessments draw from the seven principles of universal design (Connell et al., 1997) during their development and validation, including flexibility in use (the design accommodates a wide range of individual preferences and abilities) and tolerance for error (the design minimizes hazards and the adverse consequences of accidental or unintended actions). Similar to our recommendations for JA, assessment designers may benefit from a mixed methods approach drawing on interviews and focus groups to ensure assessments follow these principles and are inclusive not only for neuroatypical job seekers but for all individuals seeking employment.

Tests of measurement equivalence

Common practice in employee assessment validation is to identify any fairness issues across demographic subgroups (AERA et al., 2014). Fairness issues are fundamental validity issues: a psychological construct should function the same across all intended members of a population for whom interpretations and decisions will be made. Operationally, the nature and number of demographic factors we examine are typically limited by logistical constraints (e.g., cell sample size) and legal considerations. For instance, in the United States, race/ethnicity, age, and gender are frequently flagged as factors of interest due to regulations governing adverse impact against those groups.

A similar approach could be readily applied to identifying differences in item or test functioning between neurotypical and neuroatypical individuals, leveraging classical test theory and item response theory. Cognitive and noncognitive traits may differ along the neurological continuum in terms of both subgroup means and underlying factor structure (Goldstein et al., 2008; Lodi-Smith et al., 2019). If the underlying construct being measured is different for neurotypical and neuroatypical individuals, this may lead to differential prediction and/or muddying of the explanatory mechanisms behind effective job performance. This could open up a Pandora’s box for many practitioners. However, we should not shy away from the challenge of understanding the complexity of measurement across the full range of human neurological functioning. It should be noted that, unlike the previously mentioned protected classes, neuroatypicalities can be considered a “deep” level characteristic of individuals. Thus, determining whether a measurement instrument functions differently in neurotypical versus neuroatypical individuals within a sample requires disclosure of group membership status, which may prompt fears of judgment or exclusion (Zolyomi et al., 2018). We emphasize that organizations should be cautious when requesting this information from applicants and incumbents.
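As one concrete starting point, the classical-test-theory sketch below runs a Mantel-Haenszel screen for differential item functioning (DIF) on simulated data, matching respondents on total score and comparing a reference group with a focal group. The group labels, injected DIF effect, and stratification scheme are all assumptions for illustration; statsmodels’ StratifiedTable supplies the pooled odds ratio and test.

```python
# Minimal sketch of a Mantel-Haenszel DIF screen on simulated data.
# In practice, the focal group would be defined by disclosed neuroatypical
# status, which is exactly the sensitive disclosure discussed above.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(1)
n = 2_000
group = rng.integers(0, 2, size=n)     # 0 = reference, 1 = focal (hypothetical)
total = rng.integers(0, 21, size=n)    # matching variable: total test score
# One studied item, with DIF injected as a group-specific logit offset
p = 1 / (1 + np.exp(-(0.3 * (total - 10) - 0.8 * group)))
item = rng.binomial(1, p)

# One 2x2 table (group x correct/incorrect) per coarse score stratum
tables = []
for lo in range(0, 21, 4):
    stratum = (total >= lo) & (total < lo + 4)
    t = np.array([[np.sum((group == g) & (item == c) & stratum) for c in (1, 0)]
                  for g in (0, 1)])
    if (t.sum(axis=1) > 0).all():      # skip strata missing a group entirely
        tables.append(t)

st = StratifiedTable(tables)
print(f"Pooled Mantel-Haenszel odds ratio: {st.oddsratio_pooled:.2f}")
print(f"Test of no DIF: p = {st.test_null_odds().pvalue:.4f}")
```

An item flagged this way is not automatically biased; it signals where follow-up with item response theory models or qualitative review is warranted before the item is used for high-stakes decisions.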

Conclusion

We hope this commentary sparks much research and discussion pertaining to the effective measurement of psychological constructs across the neurological continuum. Doing so requires attention at all stages of validation, including identifying and defining constructs of interest, carefully aligning assessment design with assessment purpose, and identifying any unintended consequences of assessment.

As noted by LeFevre-Levy et al., neuroatypical individuals are not homogeneous. Further research is needed to identify specific recommendations for specific neuroatypical groups (e.g., those with ASD vs. those with dyslexia). Additionally, it is worth noting that across many contexts, we run the risk of a myopic focus when identifying the core requirements of a job. Leveraging the philosophies and methodologies of work analysis (Morgeson & Dierdorff, 2011), versus job analysis per se, offers a possible solution. Work analysis involves the “systematic process of breaking down work performed into a number of separate tasks and duties…looking at several or, indeed, many jobs at the same time” and “identifies potential new jobs and a need to reorganize and restructure” (Heron, 2005, p. 7). The United Nations’ International Labour Organization specifically highlights work analysis as a tool that can benefit disabled workers, calling it “an opportunity to identify opportunities for work experience in enterprises for persons with disabilities” (Heron, 2005, p. 11). This ties into LeFevre-Levy et al.’s call for better representation of the skills that neuroatypical individuals may bring to the workplace.

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (Eds.). (2014). Standards for educational and psychological testing. American Educational Research Association.
Blickle, G., Meurs, J. A., Wihler, A., Ewen, C., Plies, A., & Günther, S. (2013). The interactive effects of conscientiousness, openness to experience, and political skill on job performance in complex jobs: The importance of context. Journal of Organizational Behavior, 34(8), 1145–1164.
Connell, B. R., Jones, M. L., Mace, R. L., Mueller, J. L., Mullick, A., Ostroff, E., Sanford, J., Steinfeld, E., Story, M., & Vanderheiden, G. (1997). The principles of universal design, version 2.0. Center for Universal Design, North Carolina State University.
Goldstein, G., Allen, D. N., Minshew, N. J., Williams, D. L., Volkmar, F., Klin, A., & Schultz, R. T. (2008). The structure of intelligence in children and adults with high functioning autism. Neuropsychology, 22(3), 301–312. doi: 10.1037/0894-4105.22.3.301
Harris, A. M., McMillan, J. T., & Carter, N. T. (2021). Test-taker reactions to ideal point measures of personality. Journal of Business and Psychology, 36, 513–532. doi: 10.1007/s10869-020-09682-8
Heron, R. (2005). Job and work analysis: Guidelines on identifying jobs for persons with disabilities. Geneva: International Labour Organization.
Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.
Jackson, D. N., Wroblewski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced choice offer a solution? Human Performance, 13(4), 371–388.
Krzeminska, A., Austin, R. D., Bruyère, S. M., & Hedley, D. (2019). The advantages and challenges of neurodiversity employment in organizations. Journal of Management & Organization, 25(4), 453–463.
LeFevre-Levy, R., Melson-Silimon, A., Harmata, R., Hulett, A. L., & Carter, N. T. (2023). Neurodiversity in the workplace: Considering neuroatypicality as a form of diversity. Industrial and Organizational Psychology: Perspectives on Science and Practice, 16(1).
Lodi-Smith, J., Rodgers, J. D., Cunningham, S. A., Lopata, C., & Thomeer, M. L. (2019). Meta-analysis of Big Five personality traits in autism spectrum disorder. Autism, 23(3), 556–565. doi: 10.1177/1362361318766571
Morgeson, F. P., & Campion, M. A. (1997). Social and cognitive sources of potential inaccuracy in job analysis. Journal of Applied Psychology, 82(5), 627–655.
Morgeson, F. P., & Dierdorff, E. C. (2011). Work analysis: From technique to theory. In Zedeck, S. (Ed.), APA handbook of industrial and organizational psychology, Vol. 2: Selecting and developing members for the organization (pp. 3–41). American Psychological Association.
Morris, M. R., Begel, A., & Wiedermann, B. (2015). Understanding the challenges faced by neurodiverse software engineering employees: Towards a more inclusive and productive technical workforce. Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, 173–184.
Myers, C. G. (2016). Where in the world are the workers? Cultural underrepresentation in I-O research. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(1), 144–152.
Paulhus, D. L., & Morgan, K. L. (1997). Perceptions of intelligence in leaderless groups: The dynamic effects of shyness and acquaintance. Journal of Personality and Social Psychology, 72(3), 581–591.
Ployhart, R. E., & Holtz, B. C. (2008). The diversity–validity dilemma: Strategies for reducing racioethnic and sex subgroup differences and adverse impact in selection. Personnel Psychology, 61(1), 153–172. doi: 10.1111/j.1744-6570.2008.00109.x
Rivera, L. A. (2012). Hiring as cultural matching: The case of elite professional service firms. American Sociological Review, 77(6), 999–1022.
Trost, A. (2020). Talent acquisition and selection. In Trost, A. (Ed.), Human resource strategies. Future of Business and Finance Series. Springer. doi: 10.1007/978-3-030-30592-5_5
Van der Linden, W. J. (2011). Setting time limits on tests. Applied Psychological Measurement, 35(3), 183–199.
Vasilopoulos, N. L., Cucina, J. M., Dyomina, N. V., Morewitz, C. L., & Reilly, R. R. (2006). Forced-choice personality tests: A measure of personality and cognitive ability? Human Performance, 19(3), 175–199.
Willis, C., Powell-Rudy, T., Colley, K., & Prasad, J. (2021). Examining the use of game-based assessments for hiring autistic job seekers. Journal of Intelligence, 9(4), 53.
Zolyomi, A., Ross, A. S., Bhattacharya, A., Milne, L., & Munson, S. A. (2018). Values, identity, and social translucence: Neurodiverse student teams in higher education. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13.