
Imperfect Corrections or Correct Imperfections? Psychometric Corrections in Meta-Analysis

Published online by Cambridge University Press:  27 May 2015

Frederick L. Oswald*, Seydahmet Ercan, Samuel T. McAbee, Jisoo Ock, and Amy Shaw
Department of Psychology, Rice University
*Correspondence concerning this article should be addressed to Frederick L. Oswald, Department of Psychology, Rice University, 6100 Main Street, MS-25, Houston, TX 77005. E-mail: [email protected]

Extract

LeBreton, Scherer, and James (2014) raise an understandable concern that psychometric corrections in organizational research amount to little more than a form of statistical hydraulics. Corrections for measurement error variance and range restriction may inappropriately ratchet observed effects upward into regions of practical significance and publication glory, yielding results of highly questionable validity.
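The corrections at issue are the classic ones from psychometric meta-analysis: the correction for attenuation due to unreliability and the Thorndike Case II correction for direct range restriction. A minimal sketch of both formulas follows; the function names and the numeric values are illustrative, not taken from the commentary:

```python
from math import sqrt

def correct_attenuation(r_xy, rel_x, rel_y):
    """Correction for attenuation: estimated true-score correlation
    given the observed correlation and each measure's reliability."""
    return r_xy / sqrt(rel_x * rel_y)

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction,
    where u = SD(unrestricted) / SD(restricted) on the predictor."""
    return (u * r) / sqrt(1 + (u**2 - 1) * r**2)

# Hypothetical values: observed validity .25, predictor reliability
# .80, criterion reliability .70, and an SD ratio of 1.5.
r_obs = 0.25
r_dis = correct_attenuation(r_obs, 0.80, 0.70)      # about .33
r_unrestricted = correct_range_restriction(r_obs, 1.5)  # about .36
```

Because each corrected value exceeds the observed correlation, compounding such corrections is exactly the "hydraulics" the commentary worries about when the artifact estimates themselves are uncertain.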

Type
Commentaries
Copyright © Society for Industrial and Organizational Psychology 2015


References

Aytug, Z. G., Rothstein, H. R., Zhou, W., & Kern, M. C. (2011). Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organizational Research Methods, 15, 103–133. doi:10.1177/1094428111403495
James, L. R., Demaree, R. G., Mulaik, S. A., & Ladd, R. T. (1992). Validity generalization in the context of situational models. Journal of Applied Psychology, 73, 673–678. doi:10.1037//0021-9010.77.1.3
Le, H., Schmidt, F. L., & Putka, D. (2009). The multifaceted nature of measurement artifacts and its implications for estimating construct-level relationships. Organizational Research Methods, 12, 165–200. doi:10.1177/1094428107302900
LeBreton, J. M., Scherer, K. T., & James, L. R. (2014). Corrections for criterion reliability in validity generalization: A false prophet in a land of suspended judgment. Industrial and Organizational Psychology: Perspectives on Science and Practice, 7, 478–500. doi:10.1111/iops.12184
Newman, D. A., & Lyon, J. S. (2009). Recruitment efforts to reduce adverse impact: Targeted recruiting for personality, cognitive ability, and diversity. Journal of Applied Psychology, 94, 298–317. doi:10.1037/a0013472
Oswald, F. L., & McCloy, R. A. (2003). Meta-analysis and the art of the average. In Murphy, K. R. (Ed.), Validity generalization: A critical review (pp. 311–338). Mahwah, NJ: Erlbaum.
Raju, N. S., Anselmi, T. V., Goodman, J. S., & Thomas, A. (1998). The effect of correlated artifacts and true validity on the accuracy of parameter estimation in validity generalization. Personnel Psychology, 51, 452–465. doi:10.1111/j.1744-6570.1998.tb00733.x
Russell, C. J., & Gilliland, S. W. (1995). Why meta-analysis doesn't tell us what the data really mean: Distinguishing between moderator effects and moderator processes. Journal of Management, 21, 813–831. doi:10.1177/014920639502100412
Sackett, P. R., Lievens, F., Berry, C. M., & Landers, R. N. (2007). A cautionary note on the effects of range restriction on predictor intercorrelations. Journal of Applied Psychology, 92, 538–544. doi:10.1037/0021-9010.92.2.538
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.