
7 - Experimental Design

from Part II - Basic Design Considerations to Know, No Matter What Your Research Is About

Published online by Cambridge University Press: 12 December 2024

Harry T. Reis, University of Rochester, New York
Tessa West, New York University
Charles M. Judd, University of Colorado Boulder

Summary

This chapter focuses on experimental designs, in which one or more factors are randomly assigned and manipulated. The first topic is statistical power, or the likelihood of obtaining a significant result, which depends on several aspects of design. Second, the chapter examines the factors (independent variables) in a design, including the selection of a factor's levels and their treatment as fixed or random, and then the dependent variables, including the selection of items, stimuli, or other aspects of a measure. Finally, artifacts and confounds that can threaten the validity of results are addressed, as well as special designs for studying mediation. A concluding section raises the possibility that traditional conceptualizations of design – which generally focus on a single study and on whether a manipulation has an effect – may be inadequate in a world where multiple-study research programs are the more meaningful unit of evidence and mediational questions are often of primary interest.
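
To make the notion of power concrete, the sketch below shows an a priori power calculation for a simple two-condition, between-subjects experiment. It uses Python's statsmodels package; the effect size (Cohen's d = 0.4), alpha level, sample sizes, and power target are illustrative assumptions, not values taken from the chapter.

```python
# A priori power analysis for a two-group between-subjects design.
# Assumed inputs (illustrative only): Cohen's d = 0.4, alpha = .05, target power = .80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per condition needed to detect d = 0.4 with 80% power (two-sided test).
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"Participants needed per condition: {n_per_group:.0f}")

# Conversely, the power achieved if only 50 participants per condition are run.
achieved_power = analysis.solve_power(effect_size=0.4, nobs1=50, alpha=0.05,
                                      alternative='two-sided')
print(f"Power with n = 50 per condition: {achieved_power:.2f}")
```

The same quantities can be obtained from tools such as G*Power; the point is simply that sample size, effect size, alpha, and power jointly constrain one another, so fixing any three determines the fourth.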

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


