
28 - Psychometrics

from General Methods

Published online by Cambridge University Press:  27 January 2017

John T. Cacioppo, University of Chicago
Louis G. Tassinary, Texas A&M University
Gary G. Berntson, Ohio State University

Summary

[Image of the first page of this chapter; a PDF version is available.]
Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2016


