
Big Data and the Challenge of Construct Validity

Published online by Cambridge University Press:  17 December 2015

Michael T. Braun, Department of Psychology, Virginia Tech
Goran Kuljanin, Department of Psychology, DePaul University

Correspondence concerning this article should be addressed to Michael T. Braun, who is now at the Department of Psychology, University of South Florida, 4151 PCD, Tampa, FL 33620. E-mail: [email protected]

Extract

One important issue not highlighted by Guzzo, Fink, King, Tonidandel, and Landis (2015) is that simply establishing construct validity will be significantly more challenging with big data than ever before. One need only look as far as the other social sciences analyzing big data (e.g., communications, economics, industrial engineering) to observe the difficulty of making valid claims about what measured variables substantively “mean.” This presents a significant hurdle for the application of big data to organizational research questions because of the critical importance of demonstrating validity in the organizational sciences, as highlighted by Guzzo et al.

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2015 

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Anderson, T. W., & Rubin, H. (1956). Statistical inference in factor analysis. In Neyman, J. (Ed.), Proceedings of the third Berkeley Symposium on Mathematical Statistics and Probability: Vol. 5. Contributions to econometrics, industrial research, and psychometry (pp. 111–150). Berkeley, CA: University of California Press.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Construct validity in organizational research. Administrative Science Quarterly, 36, 421–458.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin, 56, 81–105.
Caputo, P., & Boyce, A. (2015, April). Data science in human capital research and analytics. Symposium presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
Guzzo, R. A., Fink, A. A., King, E., Tonidandel, S., & Landis, R. S. (2015). Big data recommendations for industrial–organizational psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(4), 491–508.
Hernandez, I., Newman, D., & Jeon, G. (2015). Twitter analysis: Methods for data management and validation of a word count dictionary to measure city-level job satisfaction. In Tonidandel, S., King, E., & Cortina, J. (Eds.), Big data at work: The data science revolution and organizational psychology. New York, NY: Routledge.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517–543.
Kennedy, D. M., & McComb, S. A. (2014). When teams shift among processes: Insights from simulation and optimization. Journal of Applied Psychology, 99, 784–815.
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563–575.
Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26, 356–376.
Marsh, H. W., Hau, K., Balla, J. R., & Grayson, D. (1998). Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivariate Behavioral Research, 33, 181–220.
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.
McGrath, J. E. (1964). Social psychology: A brief introduction. New York, NY: Holt, Rinehart and Winston.
Morgeson, F. P., DeRue, D. S., & Karam, E. P. (2010). Leadership in teams: A functional approach to understanding leadership structures and processes. Journal of Management, 36, 5–39.
Peter, J. P. (1981). Construct validity: A review of basic issues and marketing practices. Journal of Marketing Research, 18, 133–145.
Roth, P. L. (1994). Missing data: A conceptual review for applied psychologists. Personnel Psychology, 47, 537–560.
Schwab, D., Heneman, H., & DeCotiis, T. (1975). Behaviorally anchored rating scales: A review of the literature. Personnel Psychology, 28, 549–562.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Smith, P. C., & Kendall, L. M. (1963). Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. Journal of Applied Psychology, 47, 149–155.
Thorndike, E. L. (1920). A constant error in psychological ratings. Journal of Applied Psychology, 4, 25–29.
Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures. Journal of Personality and Social Psychology, 84, 608–618.
Widaman, K. F. (1985). Hierarchically nested covariance models for multitrait–multimethod data. Applied Psychological Measurement, 9, 1–26.
Zwick, W. R., & Velicer, W. F. (1986). A comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432–442.