
A Multidimensional Model to Facilitate Within-Person Comparison of Attributes

Published online by Cambridge University Press:  01 January 2025

Mark L. Davison* (University of Minnesota)
Seungwon Chung (US Food and Drug Administration)
Nidhi Kohli (University of Minnesota)
Ernest C. Davenport Jr. (University of Minnesota)

* Correspondence should be made to Mark L. Davison, Department of Educational Psychology, University of Minnesota, Minneapolis, MN 55455, USA. Email: [email protected]

Abstract

In psychological research and practice, a person's scores on two different traits or abilities are often compared. Such within-person comparisons require that the measurements have equal units (EU) and/or equal origins, an assumption that is rarely validated. We describe a multidimensional SEM/IRT model from the literature and, using principles of conjoint measurement, show that its expected response variables satisfy the axioms of additive conjoint measurement and therefore support measurement on a common scale. In an application to Quality of Life data, the EU analysis is used as a pre-processing step to derive a simple-structure Quality of Life model with three dimensions expressed in equal units. The results are used to address questions that can be answered only with scores expressed in equal units. When the EU model fits the data, scores from the corresponding simple-structure model have added validity in that they can address questions that could not otherwise be answered. Limitations and the need for further research are discussed.
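
To make the equal-units claim concrete, the following LaTeX sketch states the additive conjoint representation in its standard Luce–Tukey form. The symbols (a person parameter theta_p, an item parameter b_i, and a strictly increasing link g) are illustrative placeholders, not the notation of the specific SEM/IRT model analyzed in the article.

% Additive conjoint representation (standard Luce-Tukey form; illustrative
% notation, not the article's own). If the expected item responses can be
% written as a strictly increasing function g of the sum of a person
% parameter and an item parameter,
\[
  \mathbb{E}\!\left[X_{pi}\right] \;=\; g\!\left(\theta_p + b_i\right),
  \qquad g \ \text{strictly increasing},
\]
% then the ordering of expected responses over person-item pairs satisfies
% the axioms of additive conjoint measurement (e.g., independence and double
% cancellation), and theta_p and b_i are identified up to a shared positive
% scaling factor, i.e., they are expressed in a common (equal) unit.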

Type: Research Article

Copyright © 2024 The Author(s), under exclusive licence to The Psychometric Society.

