
Why the Qualms With Qualitative? Utilizing Qualitative Methods in 360° Feedback

Published online by Cambridge University Press: 29 December 2016

Adam Kabins*
Affiliation:
Korn Ferry Hay Group, Dallas, Texas
Correspondence concerning this article should be addressed to Adam Kabins, Korn Ferry Hay Group, Suite 1450, 2101 Cedar Springs Road, Dallas, TX 75201. E-mail: [email protected]

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2016 

Although the authors of the focal article provide a comprehensive definition of 360° feedback, one exclusionary criterion results in an overly narrow definition. Specifically, Point 3 of their definition describes the criticality of using strictly quantitative methods in collecting 360° feedback. The authors provide a brief rationale by stating, "Data generated from truly qualitative interviews would not allow comparisons between rater groups on the same set of behaviors" (Bracken, Rose, & Church, 2016, p. 765). Although there is little doubt about the value of taking a quantitative approach to gathering 360° feedback, it is not clear why this must be the sole approach. Below, I outline three issues with this constricted methodology. First, excluding qualitative methods is not in line with the purpose of 360° feedback, which is directed at minimizing criterion deficiency. Second, qualitative methodologies (in conjunction with quantitative methodologies) are better equipped to provide and inspire a call to action (supporting the change component addressed by the authors). Third, there are qualitative methods that allow for rigorous quantitative analysis and can provide an additional source of macro, organizational-level data.

Minimizing Criterion Deficiency With Qualitative Methodologies

First and foremost, at its heart, 360° feedback is the process of gathering feedback from various rating sources to avoid taking a myopic view of one's performance. Although much debate has revolved around the meaning of rating distinctions across the various rater categories (e.g., Hoffman, Lance, Bynum, & Gentry, 2010), the inherent value of 360° feedback is that it provides the focal participant with a behaviorally based assessment of his/her performance that is less likely to be criterion deficient than a single-source method. The early works advocating the use of 360° feedback consistently argued that a single-source methodology (i.e., manager ratings) for understanding performance was likely to be criterion deficient (e.g., Edwards & Ewen, 1996; Murphy & Cleveland, 1995). That is, the criterion domain was unlikely to be fully tapped because of the limited exposure managers have to their direct reports (unlike the combined view of the focal participant's managers, peers, direct reports, and customers). As a result, researchers and practitioners pushed for a more holistic approach to feedback gathering that was less likely to be criterion deficient.

In that same light, collecting strictly quantitative data may also be deficient in that it highly limits the type of feedback provided. That is, most 360° feedback platforms are driven by organizational or role-specific competencies (broken down into static behaviors) and do not address or ask about every possible behavior that may be enacted on the job. As a result, a number of behaviors may go unaddressed in the quantitative portion of a 360° feedback, which compounds the criterion-deficiency problem. For example, I conducted a 360° feedback session with a midlevel manager at a large restaurant chain who had moderate to high scores on all of the key organizational competencies (customer service, driving performance, planning and organizing, etc.). Only after reading the comments sections (the qualitative input for this 360° feedback) was it revealed that there were rampant integrity-related issues with this manager (making inappropriate comments and jokes to his team, playing favorites, etc.). As a result, this individual's quantitative 360° feedback results appeared quite strong, and only the qualitative feedback revealed the deeper underlying issues. Although to some degree this was the fault of the organization for not treating integrity as a critical competency, it is impossible for an organization to include all possible competencies (and all associated behaviors) in a single 360° feedback. It is inevitable that something critical to uncovering a person's strengths or weaknesses, whether a behavior or a competency, is left out of the quantitative rating portion.

In essence, 360° feedback was designed as a methodology to minimize criterion deficiency; however, the authors propose a definition of 360° feedback that is inherently criterion deficient. This goes directly against the purpose and intent of taking a 360° feedback approach.

What Qualitative Methodologies Have To Offer

To that point, qualitative methods¹ offer a number of advantages that help further the goal of minimizing criterion deficiency. Specifically, anyone who has conducted a 360° feedback coaching session is familiar with the comments sections that are nearly ubiquitous in 360° feedback reports. These comments provide both the coach and the participant with detailed examples of how the behaviors rated in the quantitative portion of the report manifest on the job. A behavioral statement may read, "Provides a clear and detailed direction for his/her direct reports to develop," and raters can either agree or disagree. The comments section can reveal the nuances of how this looks on the job. For example, the focal participant may communicate the vision for development poorly, may be overly tactical and fail to provide the broader strategy, or may not initiate these conversations at all. In just this one example, there is a whole host of potential explanations for why a focal participant may receive low scores. As a result, follow-up interviews with the raters after the quantitative portion of a 360° feedback (or, at minimum, use of the comments sections) are extremely useful for uncovering the reasoning and logic behind the raters' quantitative scores.

Likewise, qualitative methods can facilitate action planning and goal setting² by providing the specific scenarios with which the participant struggles or in which he or she is most effective. As feedback becomes more specific, the focal participant is better equipped to create a detailed action plan, which is one of the key requirements for effective goal setting (Locke & Latham, 1990). Additionally, specific examples help further the believability and acceptance of ratings (Ilgen, Fisher, & Taylor, 1979). Given all that we know about the fundamental attribution error and other cognitive heuristics (e.g., Forgas, 1998), it is quite likely that focal participants can dismiss ratings of all types. Quantitative, behaviorally based scores may be dismissed through salient counterexamples supplied by the focal participant; however, specific examples of the behavior in question (culled from qualitative methods), presented alongside the quantitative ratings, serve as a "reality check" for participants who are looking to dismiss the feedback. Although comments help support this end, fully investing in comprehensive qualitative methodologies (e.g., interviews) ensures that the feedback will be actionable and integrated.

Last, qualitative methods provide a better call to action than strictly quantitative results. Although quantitative information provides directed feedback on areas of strength and weakness, it is unlikely that a focal participant will be inspired by a below-average rating on a specific competency. In the midlevel restaurant manager example cited above, the focal participant was quite shaken by the words his team used to describe his leadership style and off-color remarks. It was only after reading how individuals were affected by his (mis)management style that he was inspired to change his behaviors and set a directed action plan to avoid making the same mistakes in the future. A wealth of knowledge is emerging on the value and impact stories can have in an overly data-driven culture (e.g., see Denning, 2011; Krumboltz, Blando, Kim, & Reikowski, 1994; Pluye & Hong, 2014); it seems quite apparent that qualitative information is particularly powerful and can inspire behavior change over and beyond rank orders or mean scores.

Quantitative Aspects of Qualitative Data

Finally, the authors assume that behavioral comparisons cannot be made with qualitative methods and that qualitative methodologies are not statistically robust. Although there is no question that quantitative methodologies are more robust than qualitative methodologies, the assumption that qualitative methods do not (or cannot) meet the minimum criteria for making cross-comparisons is not fully accurate. If that were true, utilizing interviews in the hiring process would be untenable. Yet we know that behaviorally based interviews not only provide incremental validity over standard selection criteria (Cortina, Goldstein, Payne, Davison, & Gilliland, 2000) but can also be conducted with a degree of reliability and validity (Huffcutt, Conway, Roth, & Stone, 2001) that exceeds some quantitatively based methodologies (e.g., job-knowledge tests, education; Schmidt & Hunter, 1998). The same is true for qualitative 360° feedback methods. In addition, the other focal article in this very issue (Pratt & Bonaccio, 2016) provides a number of new avenues for refining and improving the current approach to qualitative methods that address this point.
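
To make the point concrete, coded qualitative responses can be subjected to the same reliability checks used for structured interview ratings. The sketch below is a hypothetical illustration (the codes, comment counts, and variable names are invented, not drawn from any actual 360° program): two trained coders independently code the same open-ended comments against a behavioral rubric, and Cohen's kappa indexes their agreement beyond chance.

```python
# Hypothetical sketch: interrater reliability for coded qualitative 360 comments.
# Assumes scikit-learn is installed; the data below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

# Two trained coders independently judge whether each of ten open-ended
# comments demonstrates the competency "provides clear direction" (1) or not (0).
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.80 for these invented codes
```

Once agreement of this kind is established against conventional benchmarks, the coded qualitative data can support exactly the sort of cross-rater-group comparisons the focal authors reserve for quantitative items.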

Last, there are a number of advanced big data analytic methods that can pair qualitative feedback with quantitative data for the organization. For example, sentiment analysis (Pang & Lee, 2008) can associate specific words found in a given focal participant's qualitative 360° feedback (e.g., comments or an interviewer's report) with the quantitative 360° feedback ratings (overall scores, competency scores, rater category scores, etc.). This provides a tremendous amount of information for an organization seeking trends across all of its employees, which can help direct training, succession planning, and other talent management initiatives. Additionally, the qualitative data could serve as a longitudinal pulse reflecting how much the organization is improving (or declining) and be matched to organizational performance metrics (sales, turnover, etc.). This is the direction toward which many of our large Fortune 100 clients are turning.
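
As a minimal sketch of what such a pairing might look like, assuming NLTK's off-the-shelf VADER sentiment scorer, the snippet below scores hypothetical 360° comments for sentiment and correlates those scores with equally hypothetical quantitative ratings; none of the comments, ratings, or variable names reflect an actual client pipeline.

```python
# Hypothetical sketch: pairing qualitative 360 comments with quantitative ratings.
# Assumes nltk and scipy are installed and that the VADER lexicon is available
# (run nltk.download("vader_lexicon") once beforehand).
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.stats import pearsonr

# Invented open-ended comments about four focal participants.
comments = [
    "Sets a clear vision and coaches the team patiently.",
    "Plays favorites and makes inappropriate jokes in meetings.",
    "Communicates priorities well but rarely follows up.",
    "Dismissive of feedback; morale on the team is suffering.",
]
# Invented overall quantitative 360 scores for the same four participants.
overall_ratings = [4.6, 2.1, 3.8, 1.9]

# VADER's compound score summarizes each comment's sentiment in [-1, 1].
sia = SentimentIntensityAnalyzer()
sentiment = [sia.polarity_scores(c)["compound"] for c in comments]

# Correlate comment sentiment with the quantitative ratings.
r, p = pearsonr(sentiment, overall_ratings)
print(f"Sentiment-rating correlation: r = {r:.2f} (p = {p:.3f})")
```

In practice, an organization would run such an analysis over hundreds or thousands of participants with a domain-tuned lexicon or model rather than four invented rows; the point is simply that open-ended 360° comments can be quantified and related to ratings, trends, and business metrics with standard tools.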

Conclusion

In sum, the authors are overly narrow in their definition of 360° feedback, relegating qualitative methods to second-class status. Utilizing qualitative methods in conjunction with quantitative methods furthers the purpose of 360° feedback: these methods offer numerous benefits to the focal participant and give the organization additional data sources for making informed decisions.

Footnotes

1 This is not to the exclusion of quantitative methods but rather in addition to them.

2 This is a key component of the 360° feedback process outlined by the authors.

References

Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 761–794.
Cortina, J. M., Goldstein, N. B., Payne, S. C., Davison, H. K., & Gilliland, S. W. (2000). The incremental validity of interview scores over and above cognitive ability and conscientiousness scores. Personnel Psychology, 53, 325–351.
Denning, S. (2011). The springboard: How storytelling ignites action in knowledge-era organizations. New York, NY: Routledge.
Edwards, M. R., & Ewen, A. J. (1996). 360° feedback: The powerful new model for employee assessment and performance improvement. New York, NY: AMACOM.
Forgas, J. P. (1998). On being happy and mistaken: Mood effects on the fundamental attribution error. Journal of Personality and Social Psychology, 75, 318–331.
Hoffman, B., Lance, C. E., Bynum, B., & Gentry, W. A. (2010). Rater source effects are alive and well after all. Personnel Psychology, 63, 119–151.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86, 897–913.
Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64, 349–371.
Krumboltz, J. D., Blando, J. A., Kim, H., & Reikowski, D. J. (1994). Embedding work values in stories. Journal of Counseling and Development, 73(1), 57–62. doi:10.1002/j.1556-6676.1994.tb01710.x
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice Hall.
Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal. Thousand Oaks, CA: Sage.
Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2, 1–135.
Pluye, P., & Hong, Q. N. (2014). Combining the power of stories and the power of numbers: Mixed methods research and mixed studies reviews. Annual Review of Public Health, 35, 29–45.
Pratt, M. G., & Bonaccio, S. (2016). Qualitative research in I-O psychology: Maps, myths, and moving forward. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 693–715.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.