In the focal article “Getting Rid of Performance Ratings: Genius or Folly? A Debate,” two groups of authors debated the merits of performance ratings (Adler et al., 2016). Despite their varied views, both sides noted the importance of including multiple raters to obtain more accurate performance ratings. As the pro side noted, “if ratings can be pooled across many similarly situated raters, it should be possible to obtain quite reliable assessments” (Adler et al., 2016, p. 236). Even the con side acknowledged, “In theory, it is possible to obtain ratings from multiple raters and pool them to eliminate some types of interrater disagreement” (Adler et al., 2016, p. 225), although this side was certainly less optimistic about the merits of multiple raters.

In the broader industrial–organizational psychology literature, authors have repeatedly heralded the benefits of additional raters for performance ratings, with some even treating them as a panacea for inaccurate ratings. Yet an important question is often omitted from these discussions: To what extent do additional raters actually improve performance ratings? Does adding a rater double the validity of performance ratings? Does each additional rater increase validity by a constant amount? Or is the answer something else altogether?
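One classical lens on these questions is the Spearman–Brown prophecy formula, under which the reliability of a pooled rating grows with the number of parallel raters but with diminishing returns rather than a constant or multiplicative gain. The following sketch illustrates that pattern; the single-rater reliability of .50 is a purely hypothetical value chosen for illustration, not a figure from Adler et al. (2016):

```python
# Spearman-Brown prophecy formula: reliability of the average of k parallel
# raters, given the reliability r of a single rater.
def pooled_reliability(r: float, k: int) -> float:
    return k * r / (1 + (k - 1) * r)

r_single = 0.50  # hypothetical single-rater reliability, for illustration only

for k in range(1, 6):
    print(f"{k} rater(s): pooled reliability = {pooled_reliability(r_single, k):.2f}")

# Output: .50, .67, .75, .80, .83 -- each added rater improves reliability,
# but by a shrinking amount, not by doubling or by a constant increment.
```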