
Validity Generalization as a Continuum

Published online by Cambridge University Press:  30 August 2017

Ernest H. O'Boyle*
Affiliation: Tippie College of Business, University of Iowa
*Correspondence concerning this article should be addressed to Ernest H. O'Boyle, Tippie College of Business, University of Iowa, 21 E. Market St., Iowa City, IA 52242. E-mail: [email protected]

Extract

Tett, Hundley, and Christiansen (2017) make a compelling case against meta-analyses that focus on mean effect sizes (e.g., rxy and ρ) while largely disregarding the precision of the estimate and true score variance. This is a reasonable point, but meta-analyses that myopically focus on mean effects at the expense of variance are not examples of validity generalization (VG)—they are examples of bad meta-analyses. VG and situational specificity (SS) fall along a continuum, and claims about generalization are confined to the research question and the type of generalization one is seeking (e.g., directional generalization, magnitude generalization). What Tett et al. (2017) successfully debunk is an extreme position along the generalization continuum significantly beyond the tenets of VG that few, if any, in the research community hold. The position they argue against is essentially a fixed-effects assumption, which runs counter to VG. Describing VG in this way is akin to describing SS as a position that completely ignores sampling error and treats every between-sample difference in effect size as true score variance. Both are strawmen that were knocked down decades ago (Schmidt et al., 1985). There is great value in debating whether a researcher should or can argue for generalization, but this debate must start with (a) an accurate portrayal of VG, (b) a discussion of different forms of generalization, and (c) the costs of trying to establish universal thresholds for VG.
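To make the distinction between these caricatures and actual VG practice concrete, the sketch below illustrates a bare-bones Hunter-Schmidt style partitioning of observed effect-size variance into sampling-error variance and residual ("true score") variance. It is a minimal illustration only: the study correlations and sample sizes are hypothetical values invented for this example, not data from any study discussed here.

```python
# Minimal sketch of bare-bones (Hunter-Schmidt style) variance partitioning.
# The study correlations and sample sizes below are illustrative, not real data.

studies = [  # (observed correlation r, sample size N) -- hypothetical values
    (0.25, 120), (0.31, 85), (0.18, 200), (0.40, 60), (0.22, 150),
]

total_n = sum(n for _, n in studies)
mean_r = sum(r * n for r, n in studies) / total_n  # sample-size-weighted mean effect

# Observed (weighted) variance of effect sizes across studies
var_observed = sum(n * (r - mean_r) ** 2 for r, n in studies) / total_n

# Variance expected from sampling error alone
mean_n = total_n / len(studies)
var_sampling_error = (1 - mean_r ** 2) ** 2 / (mean_n - 1)

# Residual ("true score") variance after removing sampling error.
# A fixed-effects caricature of VG would force this term to zero, and
# treating ALL observed variance as true variance is the SS strawman.
var_true = max(var_observed - var_sampling_error, 0.0)

print(f"mean r = {mean_r:.3f}")
print(f"observed variance = {var_observed:.4f}")
print(f"sampling-error variance = {var_sampling_error:.4f}")
print(f"estimated true variance = {var_true:.4f}")
```

The point of the sketch is only that both quantities are estimated: how much of the residual variance one is willing to tolerate before abandoning a claim of generalization is exactly the question the continuum view leaves open.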

Type: Commentaries
Copyright: © Society for Industrial and Organizational Psychology 2017


References

Cortina, J. M., Green, J. P., Keeler, K. R., & Vandenberg, R. J. (2017). Degrees of freedom in SEM: Are we testing the models that we claim to test? Organizational Research Methods, 20(3), 350–378.
Cortina, J. M., & Landis, R. S. (2011). The earth is not round (p = .00). Organizational Research Methods, 14(2), 332–349.
Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly reported cutoff criteria: What did they really say? Organizational Research Methods, 9(2), 202–220.
Le, H., Schmidt, F. L., Harter, J. K., & Lauver, K. J. (2010). The problem of empirical redundancy of constructs in organizational research: An empirical investigation. Organizational Behavior and Human Decision Processes, 112(2), 112–125.
O'Boyle, E. H., Banks, G. C., & Gonzalez-Mulé, E. (2017). The Chrysalis Effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43, 376–399.
Orlitzky, M. (2012). How can significance tests be deinstitutionalized? Organizational Research Methods, 15(2), 199–228.
Oswald, F. L., & McCloy, R. A. (2003). Meta-analysis and the art of the average. In K. Murphy (Ed.), Validity generalization: A critical review (pp. 311–338). Mahwah, NJ: Erlbaum.
Schmidt, F. L., Hunter, J. E., Pearlman, K., Hirsh, H. R., Sackett, P. R., Schmitt, N., . . . Zedeck, S. (1985). Forty questions about validity generalization and meta-analysis. Personnel Psychology, 38(4), 697–798.
Tett, R. P., Hundley, N., & Christiansen, N. D. (2017). Meta-analysis and the myth of generalizability. Industrial and Organizational Psychology: Perspectives on Science and Practice, 10(3), 421–456.