
A Critical Evaluation of the State of Assessment and Development for Senior Leaders

Published online by Cambridge University Press:  22 August 2018

Douglas H. Reynolds*
Affiliation:
Development Dimensions International
Cynthia D. McCauley
Affiliation:
Center for Creative Leadership
Suzanne Tsacoumis
Affiliation:
Human Resources Research Organization (HumRRO)
The Jeanneret Symposium Participants
Affiliation:
Development Dimensions International Center for Creative Leadership Human Resources Research Organization (HumRRO)
*
Correspondence concerning this article should be addressed to Douglas H. Reynolds, DDI, 1225 Washington Pike, Bridgeville, PA 15017. E-mail: [email protected]

Abstract

Practice and research with senior leaders can be rewarding but also challenging and risky for industrial and organizational (I-O) psychologists; the fact that much of the work with these populations is difficult to access elevates these concerns. In this article we summarize work presented by prominent researchers and practitioners at a symposium organized to share common practices and challenges associated with work at higher levels of organizational management. We review implications for research and practice with senior leaders by examining how assessments are applied at senior levels, how assessments and development practices can be linked, and the challenges associated with research and evaluation conducted with these leaders. Also, we offer suggestions for advancing research and practice at senior levels.

Type
Focal Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Society for Industrial and Organizational Psychology 2018

Senior leaders play an obvious and critical role in the organizations in which they operate. They set vision, establish strategic direction, and ensure the execution of vital activities that propel their organizations forward. Work performed with senior leaders by industrial and organizational (I-O) psychologists and related professionals can have tremendous impact by improving the capabilities of this crucial population. Despite its importance, the knowledge base for how to work effectively with this level of leadership tends to be compartmentalized, often residing with practitioners who have developed expertise over years of experience or within a research literature that makes inconsistent distinctions between senior leadership and leadership in general (Thornton, Johnson, & Church, 2017).

The practice of assessment and development with senior leaders in organizations raises challenges that differ from similar practices conducted with front-line or lower-level leaders. There are several factors that drive these differences. Senior leaders, defined here as leaders who work in the mid- to top levels of management in organizations, generally manage other leaders and have substantial spans of control. They work in a different context from the operational aspects of the organizations they manage because they may be several steps removed from where the daily functions of the organizations are performed. Senior leaders guide strategy and monitor the execution of work to gauge and adjust alignment; these core activities may require a different blend of skills and competencies when compared to front-line leaders. Failure to appreciate the differences between lower-level leaders and senior leaders in our research and application may lead to an incomplete understanding of this population and to misguided practices. Further complicating the matter, in some instances, researchers may not be able to rely on common research methods to confirm the effectiveness of our practices due to the smaller numbers of senior leaders within organizations and their limited time for discretionary organizational activities.

Because leadership work becomes increasingly complex and ambiguous at higher organizational levels, organizations must devote attention to developing leaders as they move into more senior roles. These leaders face a wider variety of relationships to manage, more diverse perspectives to integrate, longer time horizons to attend to, and more systemic change initiatives to manage as their levels in organizations increase. Any gaps in capabilities can become more visible and have wider and more significant impact in increasingly demanding roles.

The list of distinguishing features for this population could continue, but the central point would be the same: Working with senior leaders requires special consideration of how they differ from other leaders. This focal article summarizes major topics for research and practice with senior leaders with respect to two areas in which I-O psychologists tend to conduct research and to practice frequently: assessment and development. The framework for the summary is organized around three central questions:

  1. How should assessments be applied at senior levels?

  2. How should assessment and development be connected to ensure senior leaders are best prepared for future roles and organizational challenges?

  3. How should I-O psychologists be researching, evaluating, and extending our practices at senior levels?

Together these topics reflect areas with significant volumes of practice and underlying research, as well as challenges and dilemmas that would benefit from broader discussion and debate. Greater recognition of the issues associated with senior leadership will advance our understanding of leadership in general and our ability to influence this important population.

We begin each topic with a brief review of practices that we characterize as “accepted wisdom.” We prefer this term over the commonly used “best practices” because the effectiveness of given practices can vary substantially depending on the circumstances surrounding their implementation. Further, the concept of accepted wisdom leaves room for wisdom to be misguided despite its popularity. We feel this caution is justified given the tendency for best-practice hunters to adopt approaches that have worked for others without adequate consideration for how their circumstances may differ. In these sections on accepted wisdom, we present practices that I-O psychologists regularly say should be used when assessing and developing senior leaders. At the same time, we are not offering these practices as a set of prescriptions to be applied without regard to professional judgment of those familiar with the assessment context, and we encourage research to support (or refute) the effectiveness of these practices.

The content of this article largely resulted from a 2-day symposium that was organized for the dedicated purpose of describing the state of the science and practice related to the assessment and development of senior leaders (characterized at the symposium as “leaders of leaders”). All too often practice with senior-level people is hidden from mainstream vehicles for research and education: The challenges of sample size, customization to organizational environments, proprietary practices, and sensitivity to risk tend to conspire to limit the accessibility of the information that can help advance our understanding of this important realm of work. The purpose of the symposium was to assemble experts in senior leadership issues to surface best practices and needed research. These experts represented a range of leadership practice and research perspectives; a full list of participants is provided in the Appendix. We present themes and insights from participants in the symposium in this summary as prescriptions for sound practice and as suggestions for future work. We did not cite the views and findings presented by symposium participants at the session because they are joint authors of this work. We added citations where related sources are available.

To lay a foundation for this discussion, we first consider why organizations typically implement assessments for senior leaders. A variety of purposes may be served, including providing input into selection among candidates for senior positions. Moreover, data and insights drawn from formal assessment programs are valuable for high-potential identification and succession management, and to inform the ongoing development of senior leaders. Further, organizations often share and interpret assessment data with individual leaders as a first step in crafting a targeted development plan; the data may also be used for replacement planning and leadership bench-strength analyses. Additional waves of assessment offer organizations opportunities to track leader development over time and examine patterns in assessment data across leaders to identify development initiatives needed for a segment of leaders.

Simply stated, assessments are implemented to inform broader processes where the resulting data ideally provide the basis for subsequent decisions and development programs. However, we acknowledge the ideal state is not often achieved in practice, leaving assessment results occasionally disconnected from the organizational processes they were intended to support. Further complicating the matter is the fact that many assessment tools and practices are closely guarded intellectual property, so details regarding practices and the efficacy of specific programs can be challenging to evaluate. The symposium, organized by the Society for Industrial and Organizational Psychology (SIOP) and funded by a grant from the SIOP Foundation, offered a wide range of experienced practitioners and researchers an opportunity to share, in the context of open collaboration, common practices in leader assessment and development, and to begin to identify needed research in support of those practices. Below, we highlight the themes and insights from the symposium across the three questions posed above.

Assessing Senior Leaders

The assessment of senior leaders to determine their readiness for immediate or near-term selection or promotion, or to guide their development for future roles that may materialize in the longer term, should be informed by acceptable standards that apply to any assessment. Because of the challenges of working with this unique population, many operational programs may side-step essential practices that ensure assessments are accurate, fair, and appropriate for their intended application. Before we examine some of the challenges and issues of assessing senior leaders, the commonly accepted wisdom for how to best design and deploy senior-level assessments is presented as background. Because several recent collections have summarized the extensive work on assessment practices (e.g., Farr & Tippins, 2017; Scott, Bartram, & Reynolds, 2018), we describe briefly various techniques only to set the context for the discussion. Even though many practices are well understood and may be fundamental to a strong operational program, the frequency with which they are implemented may still be limited.

Accepted Wisdom for Assessing Senior Leaders

Assessment experts understand that strong implementations are based on a solid understanding of the context for targeted roles and jobs, clear ties to the constructs being assessed, reliable and valid assessment measures, clear communication to participants, and involvement of stakeholders across the organization. Accepted wisdom in each of these areas is summarized below.

Understand job context for senior roles

It is challenging to conduct traditional job analysis with senior leaders, but the design of most programs should start with the definition of a leadership blueprint that has input from a variety of sources, not just the incumbent in the target role. Often peers, subordinates, and stakeholders (e.g., directors on the board) will be asked to participate. Beyond these inputs, it is important to overlay the business context and broader environmental factors that are expected to affect the business. Is the market consolidating? Is technology changing the operational and competitive landscape? Is the company expanding through acquisition or organic growth? Any of these factors can influence the competencies required of future leaders and should thus shape the design of an assessment program that is intended to assess the ability of leaders to step up to the challenges they will face.

Depending on the anticipated timeframe for the prediction of readiness, it may be difficult for organizational leaders to see around the bend for what will be facing the business in the future. For this reason, many analyses of organizational context simply emphasize conditions such as a continual state of volatility, uncertainty, complexity, and ambiguity—factors that are prominent enough to be known by their collective acronym “VUCA.”

Determine the targets for assessment

The characteristics of leaders that can be assessed for evaluating readiness and potential are generally understood and usually fall into one or more of the following accepted construct categories.

  • Competencies: Usually defined as clusters of related behaviors that reflect strong leadership. In most organizations behavioral sets such as establishing a persuasive vision for the future, establishing strategic direction, implementing and executing corporate plans, and coaching others toward success will appear in senior-level competency models. Multiple methods are used to assess competencies, including 360 ratings, role plays and simulations, situational judgment tests, structured interviews, and combinations of these tools collected into assessment centers.

  • Motivations, values, and interests: Authentic leaders can align their personal values and motivators with the direction of the organization. Strong alignment is presumed to help the leader channel her passions in a manner that sparks the excitement and action of others. Assessment programs often include measures of these variables, typically in the form of interviews or values assessments, to provide a basis for estimating the degree of fit with the demands of future roles.

  • Cognitive ability: Strong general intelligence is generally considered to be a central requirement for senior leadership; however, the complexities of assessing it for senior leadership candidates can produce challenges. Requests to participate in cognitive testing can be insulting to those who have worked to obtain senior positions across a span of decades, and the impact of prior selection decisions has likely narrowed the range of useful variability in cognitive ability across senior leadership candidates. Nonetheless, many organizations include some measures of this type in their assessment programs at this level, though cognitive ability is sometimes inferred from advanced educational attainment, from accomplishment records, or from assessment tools that do not measure it directly, such as complex simulations.

  • Personality: Big Five variables, disadvantaging or derailing traits, and clustered patterns of personality factors are all popular and appropriate for inclusion in senior leadership assessment. Much has been written and hypothesized about the important role that personality plays in shaping leadership style (e.g., Hogan, Curphy, & Hogan, 1994), and research has supported strong and consistent relationships between these variables and leadership (e.g., Judge, Bono, Ilies, & Gerhardt, 2002). Most leadership assessment participants likely expect these elements to appear in their assessment and feedback. When less is known about the exact context and content of future roles, it is typical for organizations to emphasize personality and basic ability factors because the relevance of specific behaviors and competencies may be more difficult to ascertain. This situation may lead to too much weight being placed on the role of personality in some programs.

  • Emotional maturity and competence: The abilities to understand and control one's own emotional responses, authentically convey appropriate emotion (including nonverbal expression), and accurately read others' emotions and respond with empathy are viewed as critical assessment targets. Although accurate measurement of emotional competence through standardized assessments still requires foundational research, the application of measures that attempt to focus on these skills is common. Multirater surveys and peer interviews are often effective at uncovering deficiencies in these areas but may be less effective in identifying substantial strengths.

  • Career dimensions such as professional knowledge and occupational competence: Most senior roles require a foundational level of knowledge of the industry within which the organization operates and the professional specialties that give the organization competitive advantage. Clearly there are limits to the range of knowledge a senior leader can master, and there are many examples of senior leaders who emerge from functional career paths outside of the core of the business (e.g., finance or marketing as opposed to engineering or product management). How much industry and technical knowledge is required of a specific leader can vary across roles and organizations in the same market and may be influenced by the level of technical competence among others serving as peers to the senior leader.

Ensure the quality of assessments

Of course, good assessment practice should include the use of reliable measures that have been validated for the purpose for which they are being used. Practitioners have created and validated numerous measures of most of the critical leadership characteristics listed above. The need for novelty and alignment with the latest trends in the popular business press can drive experimentation and innovation in measurement, but it is best to be cautious about reinvention unless significant value is added while reliability and validity are at least maintained at the levels achieved with traditional tools.

Many assessment approaches involving senior leaders make use of input from others in the organization who are not trained in assessment, such as when gathering multirater input or using managers as assessors within assessment centers. When lay assessors of this type are used, it becomes even more critical to use strong practices to reduce error and bias in human judgment. Some of the more common practices include providing careful rater training, clearly defining rating dimensions, using behavioral rating scales, and separating assessor judgments from hiring and placement decisions.
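As a concrete illustration of monitoring rating quality, the minimal sketch below is our own addition with hypothetical numbers rather than a practice reported at the symposium: it computes a single-rater intraclass correlation, ICC(2,1), from a leaders-by-assessors rating matrix using the standard two-way random-effects formulas. Low values would signal that rater training or dimension definitions need attention.

```python
import numpy as np

def icc_2_1(ratings):
    """Single-rater ICC(2,1): two-way random effects, absolute agreement."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape                                   # n leaders, k assessors
    grand = x.mean()
    ssr = k * np.sum((x.mean(axis=1) - grand) ** 2)  # between-leader sum of squares
    ssc = n * np.sum((x.mean(axis=0) - grand) ** 2)  # between-assessor sum of squares
    sse = np.sum((x - grand) ** 2) - ssr - ssc       # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: six senior leaders on one dimension, four trained assessors.
ratings = [
    [4.0, 3.5, 4.0, 4.5],
    [2.5, 3.0, 2.5, 3.0],
    [3.5, 3.5, 4.0, 3.5],
    [4.5, 4.0, 4.5, 5.0],
    [2.0, 2.5, 2.0, 2.5],
    [3.0, 3.5, 3.0, 3.5],
]
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```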

Clear communication and alignment across the organization

Many well-designed programs fail to gain traction or do not survive long enough to have the intended impact because the expectations for the program have not been clearly communicated and key stakeholders have not been aligned to the purpose. Organizations should explain to all participants the purpose for the assessment, the appropriate use of the data, the boundaries of the program, and what will happen during the assessment process. These practices will help ensure that participants do not feel as though their privacy has been violated or that they are being unfairly judged. Given this risk, practitioners in organizations should maintain the centrality of professional and ethical guidelines, and should establish oversight committees that may operate in a manner similar to the institutional review boards that exist in academic and medical contexts. Oversight committees of this nature provide a mechanism for ensuring compliance with relevant guidelines and alignment with intended program goals.

Organizational leaders responsible for assessment programs should take explicit steps to create alignment across organizational stakeholders (e.g., managers, HR, executive sponsors) regarding program purpose, primary elements, appropriate use, and limitations. Program managers should ensure transparency in how data will be managed and remain vigilant about compliance once expectations are set; increasingly, they must also attend to compliance with legal requirements (e.g., Europa.EU, 2016). Assessment experts should establish program-specific rules that explicitly address how data will be shared, with whom, and under what conditions.

There can be significant resistance within organizations to providing transparency regarding the purpose of the assessment, particularly with internal participants when long-term planning is being conducted. In many cases the purposes are both evaluative and developmental, which makes it harder to communicate with clarity. This challenge can be further complicated by organizational attitudes and fears about assessment that stem from prior (and sometimes improper or questionable) experiences with similar tools. Clarifying the purpose and appropriate use of assessment information, and strictly abiding by these declarations, will build trust among users and allow for more transparency over time.

These areas of accepted wisdom generate a variety of inherent challenges and dilemmas. What's the best way to capture ever-changing organizational context? Are we focused on the right criteria for leadership success? How is the assessment of potential different from assessments designed for development planning or selection decisions? How should stakeholders be aligned and prepared? Each of these challenges needs further work to advance our practice with senior leaders.

The Emerging Role of Context

The role of context in understanding the demands of leadership is becoming recognized as an essential aspect of assessment. As a result, organizational stakeholders as well as participants are likely to value contextualized assessment more than generalized assessment. Contextualized assessment can take many forms. Common examples include the use of real job content within an assessment center or high-fidelity video of common leadership scenarios within situational judgment tests. The use of these tools raises the question of how contextual variables should best be factored into leadership assessment and whether the gains in authenticity are worth the tradeoffs that come with increasing levels of specificity. It is possible that taking this perspective too far may diminish our ability to understand leadership capabilities that generalize across situations.

Given the increasing importance of context for understanding and assessing leaders, there is a need to improve the consistency and precision with which we describe and use contextual variables. We should develop taxonomic structures for context that mirror existing taxonomies of leadership behaviors. Such a structure should consider factors at the environmental, organizational, group, and individual levels. In fact, several scholars are vigorously pursuing the development of such taxonomic structures (e.g., Parrigon, Woo, Tay, & Wang, 2017). No doubt, cross-disciplinary collaborations with fields such as strategic management that are strongly focused on context will greatly facilitate work in this arena.

There are several implications of an increased focus on context for understanding leadership behavior and designing effective assessment processes. First, the external landscape is continuously evolving. Several broad environmental shifts have a bearing on leadership assessment, such as simultaneous globalization and erosion of traditional borders, and technology advancement that leads to democratization of information, new business models, and unpredictability and volatility in the external environment. Second, the level of analysis of primary interest is usually the person level, with person-specific description or prediction as the central goal of most leadership assessment. So, what organizations want to know about leaders often comes down to very specific questions, such as how a given individual will perform in a crisis or against a challenge the business is facing. However, these competing forces create a dilemma when assessments are designed: how to address questions of readiness for specific future roles within the context that will exist for those roles while acknowledging that the context is constantly shifting, which creates a need to consider leadership capability across a range of likely leadership demands.

Evolving Criteria for Leaders

Just as the context for leadership has become more challenging, so too have the criteria that define successful leadership. Accepted wisdom dictates that organizations should first determine how they are defining an effective leader—both at the individual level and at the organizational level—before configuring an appropriate assessment. What do successful leaders do, what makes them distinct from average leaders, and are these factors consistent across different environments? Have the requirements for leadership changed or expanded as the work environment has become more complex? When identifying relevant criteria for evaluating leaders, organizations tend to consider how well the individual fits with the management team, how engaged and committed individuals are to the organization, and whether they intend to stay. At the individual level, effective leadership is often operationalized by considering judgment and decisiveness, interpersonal skills, and how well individual and team behaviors align with organizational culture and the strategic goals of the business. Given the emphasis on context fit described above, leadership criteria now often include how one responds to extreme situational characteristics such as the VUCA conditions of the modern work context.

From an organizational perspective, leaders are often evaluated by variables such as the performance of their business unit and the engagement levels of the people in their portion of the organization. Here again the rapidly changing environment for leadership may affect the criteria for success. These outcomes are likely to vary over time, and this complexity affects both the effectiveness of assessment and the process of validation when they are used as criteria.

Increasingly, research and practice related to leadership are recognizing the value of understanding leadership capability as an emergent quality of a team of leaders, not simply as a function of an individual leader's actions (Day & Dragoni, 2015). This shared capacity of leadership, which involves looking at the operation and makeup of the full leadership team, may be a better fit in complex organizations, but the work to determine how to best factor a team level of analysis into assessment and development programs is just beginning. If leadership success is conceptualized at the team level, and the composition and operating climate of the team affect what actions are required for the individual leaders on the team, can assessment processes be designed with enough flexibility to provide an adequate sense of how an individual leader will contribute to the team? When viewed as a team-level construct, the concept of equifinality becomes paramount when defining leadership success at the individual level. In contrast to the dilemma posed above related to context, the definition of leadership at the team level may push the situation-specific versus context-free pendulum back toward generalizable leadership capabilities that will enhance the fit of an individual leader with a range of other leaders who will operate as a team. This emphasis is further justified because the makeup and dynamics of a future team may be difficult to predict.

Moving Beyond Competencies: Measuring Leadership Potential

The emphasis on individual traits as antecedents of leadership has shifted over time in our field (Lord, Day, Zaccaro, Avolio, & Eagly, 2017). Current research and practice related to leadership may place an overemphasis on competencies, and the development and use of assessments may need to focus beyond them for several reasons. When considering near-term roles (within a year or two), it seems reasonable for organizations to focus on competencies to make judgments about readiness, but a broader range of variables is likely needed for making projections about the potential to grow into senior leadership roles over longer spans of time (i.e., over many years or even decades). This suggests that assessments must go beyond measuring the skills and competencies observed within the current role to effectively address one's potential, including factors that may underlie the ability to learn and adapt to future demands that are not possible to adequately define at the time of the assessment. Current models of leadership potential tend to emphasize a range of personality, values, and cognitive variables that may contribute to a growth orientation (e.g., Church & Silzer, 2014; Paese, Smith, & Byham, 2016).

Herein lies a challenge for those who design programs intended to identify leadership potential and accelerate leadership development: Conceptualizing leadership potential through the same lens as leadership readiness for selection or promotion into a new role (where criteria are defined and people are sought who match), even with a focus on different variables, can lead to exclusionary practices that anoint some leaders as having potential and others as lacking. Readiness assessments can be more precise and behavioral because the target role should be definable and within reach. Assessments designed to identify potential may need to be used less strictly because the prediction of the “potential to grow” toward unknown targets, over a broad and unspecified period of time, is inherently less certain, and leaders can take many paths to acquire the capabilities they need for future roles.

Some of the traditional assumptions that serve as the foundation for assessments used for selection may not apply when assessing leadership potential. If one assumes that leadership potential is a construct upon which individuals vary and that selection methods can be applied to identify those who have a lot of it, one runs the risk of excluding talented candidates from the development activities that could help them the most. Instead of assuming potential is a trait to be measured, a broader conceptualization may lead to more inclusive practices. For example, the notion of a “development trajectory” might underlie the application of assessments and development activities, where individuals are considered at various points on a path of developmental progress over time (Day & Sin, 2011). The slope of progress may rise or fall over time depending on a combination of individual characteristics and environmental opportunities present at a given point. Judgments regarding leadership potential are then considered in terms of where individuals are on their paths and the speed and direction of development. Leadership assessment can occur at various points along the path with a focus on how to best influence an individual's trajectory. The direction and speed of progress are then the primary outcomes to be predicted by measures of potential. This view raises the responsibility of the organization to actively manage potential across the span of leadership development and places emphasis on the skills of managers who must be exceptional developers of leadership talent within their organizations. In contrast, the traditional view simply classifies individuals as high potential and enrolls them in a development program run by the talent management function, thereby simplifying the problem and shifting the responsibility to a different part of the organization.
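To make the trajectory idea concrete, the minimal sketch below uses simulated scores to estimate each leader's current standing and rate of progress from repeated assessments. The per-leader linear fit is purely illustrative; formal treatments of developmental trajectories rely on growth modeling rather than separate regressions, and all names and numbers here are hypothetical.

```python
import numpy as np

# Hypothetical data: overall assessment scores for three leaders across four yearly waves.
years = np.array([0.0, 1.0, 2.0, 3.0])
scores = {
    "Leader A": np.array([3.1, 3.4, 3.8, 4.0]),  # steady upward trajectory
    "Leader B": np.array([3.9, 3.8, 3.9, 3.8]),  # high standing, flat slope
    "Leader C": np.array([2.6, 3.0, 3.5, 3.9]),  # lower start, steep slope
}

for name, y in scores.items():
    slope, intercept = np.polyfit(years, y, deg=1)  # simple linear trend per leader
    print(f"{name}: latest score = {y[-1]:.1f}, estimated slope = {slope:+.2f} per year")
```

In this framing, a leader with a modest current score but a steep, well-supported slope may warrant as much developmental investment as one who already scores highly.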

At an organizational level, accepted wisdom suggests that leadership bench strength should reflect more than a count of developed leaders who are “ready now” to move on to the next level. Rather, the idea of bench strength should be broadened to include the organization's overall capacity for leadership—the aggregated, organization-level capability. Assessment information could then be combined with information about the organizational context to conduct both scenario planning (what to prepare for) and succession planning (who should be prepared).

Influencing Stakeholders and Consumers of Assessment

It seems likely that the effectiveness and longevity of assessment programs deployed at any level, but especially at senior levels, are more affected by the organizational support for the program than by the specific design features of the program. As much as this fact may be recognized by long-tenured practitioners, the steps for improving the success rates of an implementation are too often overlooked. Participants at the symposium identified several elements to consider when seeking support from senior stakeholders, who often serve as sponsors for (or barriers to) assessment work inside their organizations.

At the outset, it is important to state a clear case for the program and the active use of its outputs. Sustaining an integrated assessment and development process requires that the purpose and value of the tools be made clear to stakeholders at the various levels of leadership to which they apply. It is helpful for assessments to be engaging, challenging, and realistic for the target level; they should yield specific, actionable feedback that can be incorporated into individual development plans. The organizational infrastructure and support for the use of program outputs are stronger when the purpose, content, and use of the assessment program are well aligned.

It is also advantageous to educate stakeholders about the difference between common practices and proven practices. Senior executives may not understand why they should have a vested interest in seeing that good science is applied within an assessment process (cf. McPhail & Stelly, 2010), nor are they necessarily able to distinguish good from bad scientific foundations. Executive stakeholders are seldom aware of or persuaded by academic or even practitioner-based publications, but they may be receptive to the input if it is packaged appropriately. Unless assessment experts educate executive stakeholders, stakeholders' attention will more likely be directed to fads and benchmarked practices of untested origin. This bias can be a practical barrier to doing sound applied work, including gathering the data necessary for a successful implementation.

Integrating Assessment and Development for Senior Leaders

Data generated by formal assessment programs are valuable not only for identifying and selecting senior leaders with potential for success at higher organizational levels but also for the continued development of these leaders. In fact, in large organizations with robust talent management processes, assessments are more likely to be used at senior levels to identify developmental needs than for succession management or job placement purposes (Church & Rotolo, 2013).

There is considerable accepted wisdom on how to use assessment data for motivating leaders to pursue targeted improvement goals and how to leverage multiple developmental strategies for realizing those goals (for example, see Davis & Barnett, 2010; Paese, Smith, & Byham, 2016). However, integrating assessment and development for senior leaders also creates challenges. First, there is the dilemma of using the same data for the dual purposes of making decisions about individuals and developing those individuals. Second, integrating assessment and development to grow more senior leaders does not end with adding development tools and programs post-assessment. Rather, it is an ongoing process of creating, implementing, and evolving an integrated leader development system in partnership with line managers.

Accepted Wisdom for Assessment-Driven Development Processes

Assessment processes that motivate senior leaders to change and grow should provide these leaders with feedback that clarifies improvements needed to enhance their effectiveness, help them identify the most important development goals to pursue, and hold them accountable for following through on efforts to improve. The ideal assessment-driven development process starts with an assessment report tailored to a senior leader audience—one that conveys in a straightforward manner the dimensions assessed, why these dimensions matter, and how the leader performs on each one. A coach with expertise in the measures and familiarity with the individual's context guides the leader through a process of digesting and making sense of the results, and then setting specific goals for improvement and action plans for implementing those goals. The next level of management is a key participant in this development planning process, which typically yields one to three goals that the leader is not only personally energized to pursue but that will also clearly benefit the organization. The leader and the leader's boss regularly seek feedback from others about the leader's progress, and both are held accountable for achieving visible improvement.

Yet even high-quality assessment, feedback, and development planning processes will not maximize change and growth if the targeted developmental experiences themselves are not powerful and engaging. On-the-job stretch experiences supported by developmental relationships and formal development programs should make up the bulk of the learning opportunities after assessment. Stretch assignments (via special projects, temporary assignments, or job moves) are chosen to provide new challenges and ample opportunity to practice needed skills. Appropriate people are recruited to serve in specific roles to support the leader with current developmental goals (e.g., a role model who can demonstrate targeted skills and share advice, a coach who can help change an ingrained behavior, or a supportive colleague who can encourage the leader). Individuals can also pursue formal development programs that deliver relevant knowledge, additional practice and feedback in a safe environment, and ample opportunities to connect with and learn from other leaders facing similar challenges.

Using Assessment Data to Make Decisions and to Develop

Organizations typically keep processes used to inform decision making about senior leaders (i.e., promotion) independent from those used to provide feedback to leaders for their development. This separation can be the result of different groups being responsible for different HR tasks (e.g., selection, performance management, leader development) and tailoring assessments for their specific tasks. For example, assessments in the selection context are designed to yield overall scores and profiles that predict specific criteria, whereas assessments in the development context are designed to yield in-depth feedback about behaviors and their impact. The separation is also shaped by the understanding that data collected for one purpose might not be useful or appropriate for another. For example, data on stable traits like intelligence are viewed as valuable in the selection context but less so in the development context. Or 360-feedback data collected from coworkers may be more honest when the coworkers know that the data will be used for development rather than organizational decision making.

Yet there is a growing interest in designing integrated assessment processes that can be used for multiple purposes—particularly at senior levels. One driver is efficiency, with a focus on minimizing the time needed from senior leaders and from assessment professionals. Another is the desire to maximize the return on investments made to develop high quality assessments. There is also a movement toward more integrated talent management processes that produce a coherent, shared understanding of individual leaders and their capabilities. This integrated approach contributes to aligning individual and organizational resources to support and accelerate senior leader development. Finally, for some organizations the integration is driven by culture shifts toward increased transparency or by government-mandated protections of individual privacy, both of which compel organizations to share with an individual data collected about that individual, thus making assessment data available to the individual for his or her own development.

Integrated assessment processes are not without their dilemmas. A central concern is the impact of providing assessment results to individuals when those assessments are also used in organizational decision making. For example, debriefing an assessment center exercise with participants can make explicit the behavior that would have led to higher ratings. This, in turn, makes the rating protocols more public and shareable with others, who can use that knowledge to gain higher ratings when they are assessed. Might this not invalidate or compromise the assessment? Another concern is the orientation of individuals as they engage in assessments for decision making versus development. In what ways might impression management tendencies in selection contexts make it difficult to pinpoint development needs? If coworkers are providing input to the assessment, how might the dual purpose of the assessment affect their honesty or willingness to provide critical data?

Talent management professionals who are moving toward a more integrated assessment process combining decision making and leader development are introducing practices for dealing with these dilemmas (Church, Del Giudice, & Margulies, 2017). For example, they emphasize the importance of being transparent about who has access to what assessment information, documenting and sharing the specifics of what type of data summaries will be shared with whom and for what purpose. Clearly, this arena needs research—both research that examines the validity of concerns raised about integrated processes and research that investigates how the benefits of such processes can be maximized and the downsides mitigated.

Partnering With Line Managers to Develop Leaders

Studies abound that show leader shortages across organizations and concerns about both the quality of current leaders and those in the pipeline (DDI & The Conference Board, 2014; DDI, The Conference Board, & EY, 2018). If much is known about assessing and developing senior leaders, then why do so many organizations experience leadership shortages? Why are assessment and development efforts not driving changes in the quality of leaders ready to take on senior roles?

One explanation is a version of the knowing–doing gap. Knowledge about assessment-driven development does not always lead to organizational actions consistent with that knowledge for several reasons. First, if executives rely more on their subjective evaluations of leaders than on data from objective assessments, it is unlikely that the assessment data will be used to identify appropriate development goals. Second, in building a leader development system, talent management professionals may rely too heavily on formal programs and executive coaching because such interventions are under their control and are often what stakeholders expect. To extend the development system to also include stretch assignments and informal coaching by bosses and peers, talent management professionals have to share accountability for outcomes with line managers, many of whom may be less committed to and savvy about development. Closing the knowing–doing gap requires influencing managers throughout the organization: educating them about how formal assessments improve the quality of the data that inform decisions about leaders, enhancing their awareness of human biases in making judgments about others, exploring their resistance to using assessment data, and making the politics of choosing individuals for high-stakes positions discussable. Organizations also need to develop their managers’ capacity and energy for developing others.

Another explanation for leadership shortages is the dynamic nature of senior leader roles. As organizations reinvent themselves in the face of volatile external environments and changing employee expectations, the knowledge and capabilities needed at senior levels grow broader and more complex. These challenges call for even greater intrapersonal maturity, capacity to work collaboratively with diverse others, intersystemic perspective, and the agility to respond to unpredictable demands (Bunker, Hall, & Kram, 2010). Thus, the bar for those deemed “ready” keeps rising. In this context, readiness cannot be perceived as an end state but rather an ongoing process of constant preparation. Leadership development systems that grow effective cadres of senior leaders not only deliver assessment and development processes for employees throughout their careers but are also able to fill unanticipated gaps by accelerating development for particular leaders.

The most advanced leader development systems rely on more than formal processes; they rely on an organizational culture where it is common for leaders to coach other leaders, to hold each other accountable for building a strong pipeline of leaders at all levels, and to give one another feedback (McCauley, Kanaga, & Lafferty, 2010). The field would benefit from organization-level research that examines these advanced systems more closely, and that sheds light on how organizations mature their assessment and development practices toward an integrated system embedded in a development culture.

Advancing Assessment and Development Practices

To advance senior leader assessment and development, I-O psychologists first need to regularly evaluate the tools and processes used in this practice. There is an abundance of accepted wisdom for evaluating assessment practices; however, when it comes to assessing senior leaders, this wisdom is not always easily applied. The accepted wisdom needs to be augmented with renewed efforts to generate more evaluation research aimed at addressing critical questions about the efficacy and impact of our practices. A second avenue for advancing senior leader assessment and development is to test the value of emerging conceptualizations of leadership and to experiment with new technologies that are affecting the assessment field more broadly.

Accepted Wisdom for Validating Assessment Tools

It is easy to get enthralled by the appeal of complex assessments for evaluating senior leaders; however, the fundamental criteria for evaluating these measures are their reliability and validity. Practitioners need to continue to emphasize to organizations the importance of demonstrating the psychometric quality of assessment practices and the value of adhering to professional and legal standards and guidelines. Accepted wisdom dictates taking a broader perspective when thinking about the definition and conceptualization of validity in this context. The challenge is to determine what criteria to use in the validation study, assuming there is sufficient sample size to conduct such a study (McPhail & Jeanneret, 2012). Clearly, assessments should be linked to the outcomes of effective leadership (e.g., business results), not just to the leader's displayed behaviors. Other factors to consider include employee retention, staff morale, and organizational climate.

When designing a validity study, researchers should start with the end in mind, aiming to capture criteria that reflect the impact the leader may have on the business. The organization should have procedures in place to continuously improve the assessments to ensure relevance, and there should be a conscious effort to balance effectively the interests of multiple stakeholders regarding when, how, and what to assess. The resulting information should be packaged so that it is easily understood by the relevant consumers (e.g., organization, individual, and team). That said, it is difficult to determine how best to package, for stakeholder consumption, information collected in situations with very small sample sizes.

Given the small number of vacancies at the most senior-level positions, organizations often implement individual assessments to evaluate the applicant pool (Silzer & Jeanneret, 2011). These high-touch, deep-dive approaches are aimed at offering rich, tailored insights into a person's capabilities, competencies, readiness, and potential for leadership roles. However, sample size constraints make it difficult to evaluate the effectiveness and validity of these assessments. Certainly, accepted wisdom dictates that the tools provide information on job-relevant capabilities and align with business needs, similar to large-scale assessments. Although it may be difficult to demonstrate the operational reliability and criterion-related validity of these individual assessments, researchers and practitioners should be able to use content validity evidence to support the use of assessments, particularly in situations where sample size is an issue. Under some circumstances, evidence from consortium studies and qualitative research techniques may provide useful evidence that builds our understanding of the effectiveness of specific assessment approaches.
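The sample size problem can be made tangible with a small arithmetic sketch of our own, using made-up numbers: under the Fisher z approximation, a criterion-related validity coefficient estimated on roughly 15 executives carries a confidence interval so wide that it cannot distinguish a useless predictor from an excellent one.

```python
import math

def validity_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation (Fisher z method)."""
    z = math.atanh(r)                # r-to-z transformation
    se = 1.0 / math.sqrt(n - 3)      # standard error of z
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical example: observed assessment-criterion correlation of .35 among 15 hires.
r_obs, n = 0.35, 15
low, high = validity_ci(r_obs, n)
print(f"r = {r_obs:.2f}, N = {n}, 95% CI = [{low:.2f}, {high:.2f}]")
# Prints an interval of roughly [-0.20, 0.73], illustrating why content validity
# evidence and consortium studies matter when samples are this small.
```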

Strengthen Evaluation Research on Current Practices

Senior leader assessments tend to rely on high-touch methods that many practitioners have convinced themselves are required to assess accurately a leader's potential and readiness for higher level leadership roles. There is a long track record of the effectiveness of assessment centers and a large individual assessment practice whose value is justified both in terms of assessment and in terms of providing meaningful developmental feedback (McPhail & Jeanneret, 2012). Yet, the field needs more concrete information comparing these comprehensive assessments with those that have a nimbler delivery mode while considering the accuracy of assessment results. Does an organization reap sufficient benefit from comprehensive assessments to justify their cost, time, and other potentially obtrusive effects?

In terms of research associated with traditional assessment centers, practitioners would benefit from additional information and research regarding data integration, a central feature of this assessment methodology. We understand dimensional judgments, but we need more research on integration and overall assessor decision making, including how to integrate individual scores into recommendations (e.g., the impact of adding ratings of dimensional importance) and how assessors might best bridge the gap between their conclusions and the development activities that best address weak competencies. Furthermore, additional work is needed to delineate the most critical role for human judgment in the assessment process and where mechanical aggregation is better suited to the task.
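As a simple, hypothetical illustration of mechanical aggregation, the sketch below combines averaged assessor ratings on a few dimensions into an overall recommendation score twice, once with equal weights and once with stakeholder-supplied importance weights; comparing the two orderings is one way to examine the impact of adding ratings of dimensional importance. The dimension labels, weights, and scores are invented for the example and are not drawn from any program discussed at the symposium.

```python
import numpy as np

# Hypothetical mean assessor ratings (1-5 scale) on four dimensions for three candidates.
dimensions = ["strategic direction", "executive disposition",
              "coaching others", "business acumen"]  # labels kept for reporting
candidates = {
    "Candidate 1": np.array([4.5, 3.0, 3.5, 4.0]),
    "Candidate 2": np.array([3.5, 4.5, 4.5, 3.0]),
    "Candidate 3": np.array([4.0, 4.0, 3.5, 3.5]),
}
importance = np.array([0.40, 0.20, 0.15, 0.25])  # stakeholder weights summing to 1.0

for name, ratings in candidates.items():
    equal_weight = ratings.mean()                  # equal-weight composite
    weighted = float(np.dot(ratings, importance))  # importance-weighted composite
    print(f"{name}: equal-weight = {equal_weight:.2f}, weighted = {weighted:.2f}")
# When the two composites rank candidates differently, the importance weights,
# rather than the ratings themselves, are driving the recommendation.
```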

As noted earlier, a key research arena is the potential impact of context. When are situationally specific assessments better at predicting outcomes than measures that focus primarily on the generalizable aspects of leadership? For example, there are situations specific to some job contexts that rarely occur, but when they do they are critical (e.g., data breaches, manufacturing accidents). In these instances, an organization may prefer custom measures. To what degree does information collected from these situation-specific assessments generalize to other leadership situations, or are more generic measures better suited for prediction across the range of situations leaders will eventually face on the job?

In a global economy with organizations having a presence in multiple countries, questions about the impact of local and cultural context arise (Ryan & Tippins, 2010). What is the potential impact of the local context on the scoring and interpretation of the assessment results? How do cultural differences and dynamics affect the process of engaging participants when using assessments on a global level?

More rigorous evaluation of assessment-for-development practices is also critical. Does the impact of assessment feedback vary widely based upon the assessment criteria and methods employed? Which assessment practices provide the most insight for senior leaders? What are the keys to ensuring assessments add value when developing senior leaders? What is the impact of using the same assessment results for development and decision making? What system-level factors influence the efficacy of assessment-for-development practices? What are the best evaluation criteria? Moreover, the field would benefit from more rigorous, theory-based evaluation studies of growth resulting from development interventions that yield validity evidence for different methods of development.

Addressing these research questions, of course, requires data that are often limited by either access or sample size constraints. Practitioners are more likely to have large data sets but often lack permission to publish and share results based on those data for many reasons, including concerns about legal exposure and the loss of competitive advantage associated with the disclosure of confidential intellectual property. Additionally, practice-based research does not always appeal to top journals. Scholars with access to publication outlets often have limited access to data resources; therefore, what appears in print may at times be based upon small and less generalizable samples. Such issues pose a threat to evidence-based practice and stand to sustain or widen the science–practice gap.

The path forward is likely to depend on collaboration among organizations to share data, perspectives, and lessons learned. Although data sharing may be difficult to put into practice, many researchers and practitioners alike are persistent in their calls for this level of collaboration, increasing the likelihood that it will come to fruition.

Emerging Concepts and Technology

Current conceptualizations of leadership have been shifting from conventional, “exclusive” definitions of leadership that categorize individuals as either leaders or followers toward broader, more inclusive definitions that emphasize leadership as a collective enterprise that can be shared by many individuals within the organization (Day & Dragoni, 2015; Yammarino, Salas, Serban, Shirreffs, & Shuffler, 2012). Therefore, researchers should explore the concept of collective leadership capacity at senior levels and identify appropriate criteria for evaluating it. What properties of lateral relationships and social networks characterize effective shared leadership, and how are these properties best assessed? How do leadership teams develop shared mental models and develop their collective capacity to lead? How can the impact of individual leaders be disentangled from their joint impact as a collective? From an even broader perspective, we should investigate and define the concept of community-based leadership, differentiating it from other forms of leadership.

A useful dialogue among researchers and practitioners would be to evaluate whether leadership assessment models should incorporate issues of social justice and fairness in the workplace (e.g., income inequality, fairness in selection, equitable pay, and gender-based harassment). Should “compassion capital” be added to the leader scorecard, with the expectation that management should do more to take care of and advocate for their employees as they work to meet shareholder expectations? Viewed even more broadly, this question can be expanded to an organization's efforts related to corporate social responsibility in general. Opportunities to shape organizational progress can extend far beyond issues of leadership readiness to drive business success into questions of how an organization influences social equality and global health and wellness.

Finally, the digital revolution is generating new tools for assessing individuals and predicting work performance (Chamorro-Premuzic, Winsborough, Sherman, & Hogan, 2016). For example, personality can be assessed via social media posts, computers can discern emotional states from voice and facial patterns, and text analysis can predict who is more likely to emerge as a group leader. The digital revolution is also making it easier for organizations to link sources of employee data (e.g., job performance metrics, engagement scores, workplace experiences), creating “big data” that can be mined for better prediction of outcomes important to the organization. What potential do these new tools hold for the assessment and development of senior leaders or for the validation of instrumentation? Assessment professionals need to be at the forefront of exploring ways that data from these nontraditional sources can inform leadership assessment practices. We also need to recognize that future leaders will be more comfortable with modes of measuring leadership capabilities that fall completely outside our assessment comfort zone (e.g., sole reliance on computer-based assessments devoid of any human interaction).
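As a deliberately simplified illustration of the data-linking idea, the Python sketch below uses pandas and scikit-learn with synthetic values and hypothetical column names (e.g., engagement_score, promoted_within_2_years) to merge two employee data sources on a shared identifier and cross-validate a simple predictive model. It is a sketch under those assumptions, not a recommended practice; any real application would require far larger samples and careful attention to privacy, fairness, and validity evidence.

```python
# Illustrative only: linking two (synthetic, anonymized) employee data sources
# and fitting a simple model, to show the kind of "big data" prediction
# described above. Column names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

engagement = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "engagement_score": [4.2, 3.1, 4.8, 2.9, 3.7, 4.5, 3.3, 4.0],
})
performance = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "objectives_met": [0.9, 0.6, 0.95, 0.5, 0.7, 0.85, 0.65, 0.8],
    "promoted_within_2_years": [1, 0, 1, 0, 0, 1, 0, 1],  # outcome of interest
})

# Link the two sources on a shared identifier
data = engagement.merge(performance, on="employee_id")

X = data[["engagement_score", "objectives_met"]]
y = data["promoted_within_2_years"]

# Cross-validated accuracy of a simple logistic regression; with realistic
# sample sizes one would also examine calibration, adverse impact, and
# the validity of the linked predictors.
model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=4)
print("Mean cross-validated accuracy:", round(scores.mean(), 2))
```

The design choice worth noting is the join on a common identifier: the analytic step is trivial, whereas the governance questions (consent, data protection, and appropriate use of the resulting predictions) are where assessment professionals can add the most value.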

Conclusion

Leadership may be viewed as a dynamic process among individuals, teams, and organizational environments that evolves over time rather than as an end state in which a leader is fully developed. Throughout our discussion we encourage the reader to view senior leadership through a multidimensional lens in which equifinality characterizes success more than any specific recipe of main effects and interactions does. As these expanded views of leadership become dominant, we will need to refine our practices with senior leaders to reflect this perspective rather than focusing predominantly on identifying and developing leadership competencies within individuals as if we were building toward a fixed, singular target.

The Jeanneret Symposium was organized to shed light on current practice related to senior leadership assessment and development and to identify key areas in need of further exploration. Established practitioners and researchers in this arena shared details of work that is often unseen in our usual professional outlets. They reviewed and challenged this work in open debate and discussion. This focal article highlights the central topics that were discussed, major themes across these topics, and the collective sense of where gaps in our research should be filled. We encourage participants in the symposium to contribute commentaries to elaborate points that were given short treatment in this summary and invite the field to extend, refute, or critique the observations we have summarized in this focal article. We look forward to continuing this discussion.

Appendix

Participants in the Jeanneret Symposium, February 19–20, 2016, Dallas, TX

Footnotes

This paper summarizes research, best practice guidance, and discussion points reviewed at the Jeanneret Symposium conducted in February 2016; the event was made possible through a grant from the SIOP Foundation supported by a generous donation by Dick Jeanneret. The symposium comprised 45 experts in leadership development and assessment; all participants are listed in the Appendix. The symposium planning committee included Morton McPhail (chair), Alex Alonso, Eric Braverman, Deborah Rupp, and Sharon Sackett. The authorship subcommittee for this summary included Cindy McCauley, Doug Reynolds, and Suzanne Tsacoumis. Doug Reynolds served as coordinator and corresponding author.

*Denotes student volunteer who assisted with note taking and summarization of the event.

References

Bunker, K. A., Hall, D. T., & Kram, K. E. (2010). Extraordinary leadership: Addressing the gaps in senior executive development. San Francisco, CA: Jossey-Bass.
Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 621–640.
Church, A. H., Del Giudice, M., & Margulies, A. (2017). All that glitters is not gold: Maximizing the impact of executive assessment and development efforts. Leadership & Organization Development Journal, 38, 765–779.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high-potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65, 199–223.
Church, A. H., & Silzer, R. (2014). Going behind the corporate curtain with a blueprint for leadership potential: An integrated framework for identifying high-potential talent. People & Strategy, 36, 51–58.
Davis, S. L., & Barnett, R. C. (2010). Changing behavior one leader at a time. In Silzer, R., & Dowell, B. E. (Eds.), Strategy-driven talent management (pp. 349–398). San Francisco, CA: Jossey-Bass.
Day, D. V., & Dragoni, L. (2015). Leadership development: An outcome-oriented review based on time and levels of analyses. Annual Review of Organizational Psychology and Organizational Behavior, 2, 133–156.
Day, D. V., & Sin, H. (2011). Longitudinal tests of an integrative model of leader development: Charting and understanding developmental trajectories. Leadership Quarterly, 22, 545–560.
DDI & The Conference Board. (2014). Global Leadership Forecast 2014/2015: Ready-now leaders: 25 findings to meet tomorrow's business challenges. Pittsburgh, PA: DDI.
DDI, The Conference Board, & EY. (2018). Global Leadership Forecast 2018: 25 research insights to fuel your people strategy. Pittsburgh, PA: DDI.
Europa.EU. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L119, 4 May, 1–88.
Farr, J. L., & Tippins, N. T. (Eds.). (2017). Handbook of employee selection (2nd ed.). New York, NY: Routledge.
Hogan, R., Curphy, G. J., & Hogan, J. (1994). What we know about leadership: Effectiveness and personality. American Psychologist, 49, 493–504.
Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A qualitative and quantitative review. Journal of Applied Psychology, 87, 765–780.
Lord, R. G., Day, D. V., Zaccaro, S. J., Avolio, B. J., & Eagly, A. H. (2017). Leadership in applied psychology: Three waves of theory and research. Journal of Applied Psychology, 102, 434–451.
McCauley, C. D., Kanaga, K., & Lafferty, K. (2010). Leader development systems. In Van Velsor, E., McCauley, C. D., & Ruderman, M. N. (Eds.), The Center for Creative Leadership handbook of leadership development (3rd ed., pp. 29–61). San Francisco, CA: Jossey-Bass.
McPhail, S. M., & Jeanneret, P. R. (2012). Individual psychological assessment. In Schmitt, N. (Ed.), The Oxford handbook of personnel assessment and selection (pp. 411–442). New York, NY: Oxford University Press.
McPhail, S. M., & Stelly, D. J. (2010). Validation strategies. In Scott, J. C., & Reynolds, D. H. (Eds.), Handbook of workplace assessment (pp. 671–710). San Francisco, CA: Jossey-Bass.
Paese, M. J., Smith, A. B., & Byham, W. C. (2016). Leaders ready now: Accelerating growth in a faster world. Pittsburgh, PA: Development Dimensions International.
Parrigon, S., Woo, S. E., Tay, L., & Wang, T. (2017). CAPTION-ing the situation: A lexically-derived taxonomy of psychological situation characteristics. Journal of Personality and Social Psychology, 112, 642–681.
Ryan, A. M., & Tippins, N. T. (2010). Global applications of assessment. In Scott, J. C., & Reynolds, D. H. (Eds.), Handbook of workplace assessment (pp. 577–606). San Francisco, CA: Jossey-Bass.
Scott, J. C., Bartram, D., & Reynolds, D. H. (Eds.). (2018). Next generation technology-enhanced assessment: Global perspectives on occupational and workplace testing. New York, NY: Cambridge University Press.
Silzer, R., & Jeanneret, R. (2011). Individual psychological assessment: A practice and science in search of common ground. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 270–296.
Thornton, G. C., Johnson, S. K., & Church, A. H. (2017). Executives and high potentials. In Farr, J. L., & Tippins, N. T. (Eds.), Handbook of employee selection (2nd ed., pp. 833–852). New York, NY: Routledge.
Yammarino, F. J., Salas, E., Serban, A., Shirreffs, K., & Shuffler, M. L. (2012). Collectivistic leadership approaches: Putting the “we” in leadership science and practice. Industrial and Organizational Psychology: Perspectives on Science and Practice, 5, 382–402.