
Part II - Basic Design Considerations to Know, No Matter What Your Research Is About

Published online by Cambridge University Press: 12 December 2024

Harry T. Reis, University of Rochester, New York
Tessa West, New York University
Charles M. Judd, University of Colorado Boulder
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024

References

Anderson, C. A., Allen, J. J., Plante, C., Quigley-McBride, A., Lovett, A., and Rokkum, J. N. (2019). The MTurkification of social and personality psychology. Personality and Social Psychology Bulletin, 45, 842–850.
Aronson, E., and Mills, J. (1959). The effect of severity of initiation on liking for a group. Journal of Abnormal and Social Psychology, 59, 177–181.
Aronson, E., Wilson, T., and Brewer, M. B. (1998). Experimentation in social psychology. In Gilbert, D., Fiske, S., and Lindzey, G. (eds) The Handbook of Social Psychology, 4th ed., vol. 1. McGraw-Hill.
Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70(9), whole no. 416.
Baron, R. M., and Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
Berkowitz, L., and Donnerstein, E. (1982). External validity is more than skin deep: Some answers to criticisms of laboratory experiments. American Psychologist, 37, 245–257.
Brewer, M. B. (1997). The social psychology of intergroup relations: Can research inform practice? Journal of Social Issues, 53(1), 197–211.
Brunswik, E. (1956). Perception and the Representative Design of Psychological Experiments, 2nd ed. University of California Press.
Bullock, J. G., and Green, D. P. (2021). The failings of conventional mediation analysis and a design-based alternative. Advances in Methods and Practices in Psychological Science, 4(4), 1–18.
Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312.
Campbell, D. T., and Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin, 56, 81–105.
Campbell, D. T., and Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In Gage, N. (ed.) Handbook of Research on Teaching. Rand McNally.
Campbell, D. T., and Stanley, J. C. (1966). Experimental and Quasi-experimental Designs for Research. Rand McNally.
Chester, D. S., and Lasko, E. N. (2021). Construct validation of experimental manipulations in social psychology: Current practices and recommendations for the future. Perspectives on Psychological Science, 16, 377–395.
Chmielewski, M., and Kucker, S. C. (2020). An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11, 464–473.
Collingwood, R. G. (1940). An Essay on Metaphysics. Clarendon.
Cook, T. D., and Campbell, D. T. (1979). Quasi-experimentation: Design and Analysis Issues for Field Settings. Rand McNally.
Cook, T. D., and Shadish, W. R. (1994). Social experiments: Some developments over the past fifteen years. Annual Review of Psychology, 45, 545–580.
Crandall, C. S., and Sherman, J. W. (2016). On the scientific superiority of conceptual replications for scientific progress. Journal of Experimental Social Psychology, 66, 93–99.
Cronbach, L. J. (1982). Designing Evaluations of Educational and Social Programs. Jossey-Bass.
Ejelov, E., and Luke, T. (2020). “Rarely safe to assume”: Evaluating the use and interpretation of manipulation checks in experimental social psychology. Journal of Experimental Social Psychology, 87, Article 103037.
Fiedler, K., Schott, M., and Meiser, T. (2011). What mediation analysis can (not) do. Journal of Experimental Social Psychology, 47, 1231–1236.
Flake, J. K., and Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456–465.
Flake, J. K., Pek, J., and Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8, 370–378.
Gasking, D. (1955). Causation and recipes. Mind, 64, 479–487.
Gerard, H. B., and Mathewson, G. C. (1966). The effects of severity of initiation on liking for a group: A replication. Journal of Experimental Social Psychology, 2, 278–287.
Greenwald, A. G. (1975). On the inconclusiveness of “crucial” cognitive tests of dissonance versus self-perception theories. Journal of Experimental Social Psychology, 11, 490–499.
Greenwood, J. D. (1982). On the relation between laboratory experiments and social behaviour: Causal explanation and generalization. Journal for the Theory of Social Behaviour, 12, 225–250.
Gruijters, S. L. K. (2022). Making inferential leaps: Manipulation checks and the roads towards strong inference. Journal of Experimental Social Psychology, 98, Article 104251.
Henrich, J., Heine, S. J., and Norenzayan, A. (2010a). Beyond WEIRD: Towards a broad-based behavioral science. Behavioral and Brain Sciences, 33(2–3), 111–135.
Henrich, J., Heine, S. J., and Norenzayan, A. (2010b). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Imai, K., and Jiang, Z. (2020). Identification and sensitivity analysis of contagion effects in randomized placebo-controlled trials. Journal of the Royal Statistical Society: Series A (Statistics in Society), 183(4), 1637–1657.
Kenny, D. A. (1995). The multitrait–multimethod matrix: Design, analysis, and conceptual issues. In Shrout, P. E. and Fiske, S. T. (eds.) Personality Research, Methods, and Theory: A Festschrift Honoring Donald W. Fiske. Erlbaum.
Kihlstrom, J. F. (2021). Ecological validity and “ecological validity.” Perspectives on Psychological Science, 16, 466–471.
Klein, R. A., et al. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1, 443–490.
Liu, X., and Wang, L. (2021). The impact of measurement error and omitting confounders on statistical inference of mediation effects and tools for sensitivity analysis. Psychological Methods, 26, 327–342.
Mackie, J. L. (1974). The Cement of the Universe. Oxford University Press.
Markus, H. R., and Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98, 224–253.
Marsh, H. W., Hau, K. T., Wen, Z., Nagengast, B., and Morin, A. J. S. (2011). Moderation. In Little, T. D. (ed.) Oxford Handbook of Quantitative Methods. Oxford University Press.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379–387.
Nadler, J., Baumgartner, S., and Washington, M. (2021). MTurk for working samples: Evaluation of data quality 2014–2020. North American Journal of Psychology, 23, 741–752.
Nosek, B. A., et al. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73, 719–748.
Orne, M. (1962). On the social psychology of the psychological experiment. American Psychologist, 17, 776–783.
Orne, M., and Holland, C. (1968). Some conditions of obedience and disobedience to authority: On the ecological validity of laboratory deceptions. International Journal of Psychiatry, 6, 282–293.
Petty, R. E., and Cacioppo, J. T. (1996). Addressing disturbing and disturbed consumer behavior: Is it necessary to change the way we conduct behavioral science? Journal of Marketing Research, 33, 1–8.
Qin, X., and Yang, F. (2022). Simulation-based sensitivity analysis for causal mediation studies. Psychological Methods, https://doi.org/10.1037/met0000340.
Rakover, S. S. (1981). Social psychology theory and falsification. Personality and Social Psychology Bulletin, 7, 123–130.
Reichardt, C. S., and Coleman, S. C. (1995). The criteria for convergent and discriminant validity in a multitrait–multimethod matrix. Multivariate Behavioral Research, 30, 513–538.
Rohrer, J. M. (2018). Thinking clearly about correlations and causation: Graphical causal models for observational data. Advances in Methods and Practices in Psychological Science, 1, 27–42.
Rosenberg, M. J. (1969). The conditions and consequences of evaluation apprehension. In Rosenthal, R. and Rosnow, R. (eds) Artifact in Behavioral Research. Academic Press.
Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. Appleton-Century-Crofts.
Sampson, E. E. (1977). Psychology and the American ideal. Journal of Personality and Social Psychology, 35, 767–782.
Sears, D. O. (1986). College sophomores in the laboratory: Influence of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51, 515–530.
Shadish, W. R., Cook, T. D., and Campbell, D. T. (2001). Experimental and Quasi-experimental Designs for Generalized Causal Inference. Houghton Mifflin.
Sherif, M. (1935). A study of some social factors in perception. Archives of Psychology, 27(187), 1–60.
Sivacek, J., and Crano, W. D. (1982). Vested interest as a moderator of attitude–behavior consistency. Journal of Personality and Social Psychology, 43, 210–221.
Toich, M. J., Schutt, E., and Fisher, D. M. (2022). Do you get what you pay for? Preventing insufficient effort responding in MTurk and student samples. Applied Psychology: An International Review, 71, 640–661.
Vazire, S., Schiavone, S. R., and Bottesini, J. G. (2022). Credibility beyond replicability: Improving the four validities in psychological science. Current Directions in Psychological Science, 31, 162–168.
Zanna, M., and Cooper, J. (1974). Dissonance and the pill: An attribution approach to studying the arousal properties of dissonance. Journal of Personality and Social Psychology, 29, 703–709.
Zhou, H., and Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111, 493–504.
Zillmann, D. (1983). Transfer of excitation in emotional behavior. In Cacioppo, J. T. and Petty, R. E. (eds) Social Psychophysiology: A Sourcebook. Guilford Press.

References

Alaei, R., Deska, J. C., Hugenberg, K., and Rule, N. O. (2022). People attribute humanness to men and women differently based on their facial appearance. Journal of Personality and Social Psychology, 123(2), 400–422.
Aronson, E., Ellsworth, P. C., Carlsmith, J. M., and Gonzales, M. H. (1990). Methods of Research in Social Psychology, 2nd ed. McGraw-Hill.
Bargh, J. A., Bond, R. N., Lombardi, W. J., and Tota, M. E. (1986). The additive nature of chronic and temporary sources of construct accessibility. Journal of Personality and Social Psychology, 50, 869–878.
Baron, R. M., and Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.
Bem, D. J., Wallach, M. A., and Kogan, N. (1965). Group decision making under risk of aversive consequences. Journal of Personality and Social Psychology, 1, 453–460.
Blake, K. R., and Gangestad, S. (2020). On attenuated interactions, measurement error, and statistical power: Guidelines for social and personality psychologists. Personality and Social Psychology Bulletin, 46(12), 1702–1711.
Brunswik, E. (1955). Perception and the Representative Design of Psychological Experiments, 2nd ed. University of California Press.
Brysbaert, M. (2019). How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. Journal of Cognition, 2(1), Article 16, https://doi.org/10.5334/joc.72.
Bullock, J. G., Green, D. P., and Ha, S. E. (2010). Yes, but what’s the mechanism? (Don’t expect an easy answer). Journal of Personality and Social Psychology, 98, 550–558.
Busemeyer, J. R., and Jones, L. E. (1983). Analysis of multiplicative combination rules when the causal variables are measured with error. Psychological Bulletin, 93, 549–562.
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145–153.
Cohen, J. (1968). Multiple regression as a general data-analytic system. Psychological Bulletin, 70, 426–443.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304–1312.
Cook, T. D., and Campbell, D. T. (1979). Quasi-experimentation. Rand McNally.
Cook, T. D., and Shadish, W. R. (1994). Social experiments: Some developments over the past fifteen years. Annual Review of Psychology, 45, 545–580.
Corneille, O., and Lush, P. (2023). Sixty years after Orne’s American Psychologist article: A conceptual framework for subjective experiences elicited by demand characteristics. Personality and Social Psychology Review, 27(1), 83–101.
Dunn, J. C., and Kirsner, K. (1988). Discovering functionally independent mental processes: The principle of reversed association. Psychological Review, 95, 91–101.
Fabrigar, L. R., Wegener, D. T., and Petty, R. E. (2020). A validity-based framework for understanding replication in psychology. Personality and Social Psychology Review, https://doi.org/10.1177/1088868320931366.
Faul, F., Erdfelder, E., Buchner, A., and Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160.
Feldt, L. S. (1958). A comparison of the precision of three experimental designs employing a concomitant variable. Psychometrika, 23, 335–354.
Fraley, R. C., and Vazire, S. (2014). The N-pact factor: Evaluating the quality of empirical journals with respect to sample size and statistical power. PLOS ONE, 9(10), e109019, https://doi.org/10.1371/journal.pone.0109019.
Gelman, A., and Carlin, J. (2014). Beyond power calculations: Assessing type S (sign) and type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641–651.
Hastie, R., and Kumar, P. A. (1979). Person memory: Personality traits as organizing principles in memory for behaviors. Journal of Personality and Social Psychology, 37, 25–38.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513–541.
Judd, C. M., and McClelland, G. H. (1989). Data Analysis: A Model-Comparison Approach. Harcourt Brace Jovanovich.
Judd, C., Westfall, J., and Kenny, D. A. (2017). Experiments with more than one random factor: Designs, analytic models, and statistical power. Annual Review of Psychology, 68(1), 601–625.
Kenny, D. A., and Judd, C. (2013). Power anomalies in testing mediation. Psychological Science, 25(2), https://doi.org/10.1177/0956797613502676.
Kenny, D. A., and Judd, C. M. (2019). The unappreciated heterogeneity of effect sizes: Implications for power, precision, planning of research, and replication. Psychological Methods, 24(5), 578–589.
Kirk, R. E. (1968). Experimental Design: Procedures for the Behavioral Sciences. Brooks/Cole.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Jr., Bahník, Š., Bernstein, M. J., … Nosek, B. A. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45, 142–152.
Lovakov, A., and Agadullina, E. R. (2021). Empirically derived guidelines for effect size interpretation in social psychology. European Journal of Social Psychology, 51, 485–504.
Lundqvist, D., Flykt, A., and Öhman, A. (1998). The Karolinska Directed Emotional Faces. Psychology Section, Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden.
McClelland, G. H. (1997). Optimal design in psychological research. Psychological Methods, 2, 3–19.
McClelland, G. H., and Judd, C. M. (1993). Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114, 376–390.
MacKinnon, D. P., Fairchild, A. J., and Fritz, M. S. (2007). Mediation analysis. Annual Review of Psychology, 58, 593–614.
Maxwell, S. E., and Delaney, H. D. (1993). Bivariate median splits and spurious statistical significance. Psychological Bulletin, 113, 181–190.
Miller, A. G. (1972). The Social Psychology of Psychological Research. Free Press.
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379–387.
Myers, D. G., and Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83, 602–627.
Newlin, D. B., and Levenson, R. W. (1979). Pre-ejection period: Measuring beta-adrenergic influences upon the heart. Psychophysiology, 16(6), 546–552.
Orne, M. (1962). On the social psychology of the psychological experiment. American Psychologist, 17, 776–783.
Paluck, E. L., and Green, D. P. (2009). Prejudice reduction: What works? A review and assessment of research and practice. Annual Review of Psychology, 60, 339–367.
Reis, H. T., and Gosling, S. D. (2010). Social psychological methods outside the laboratory. In Fiske, S., Gilbert, D., and Lindzey, G. (eds) Handbook of Social Psychology, 5th ed., vol. 1. Wiley.
Richard, F. D., Bond, C. F., Jr., and Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7(4), 331–363.
Rosenthal, R., and Rosnow, R. L. (eds) (1969). Artifact in Behavioral Research. Academic Press.
Rydell, R. J., and McConnell, A. R. (2006). Understanding implicit and explicit attitude change: A systems of reasoning analysis. Journal of Personality and Social Psychology, 91(6), 995–1008.
Schoemann, A. M., Boulton, A. J., and Short, S. D. (2017). Determining power and sample size for simple and complex mediation models. Social Psychological and Personality Science, 8(4), 379–386, https://doi.org/10.1177/1948550617715068.
Spencer, S. J., Zanna, M. P., and Fong, G. T. (2005). Establishing a causal chain: Why experiments are often more effective than mediational analyses in examining psychological processes. Journal of Personality and Social Psychology, 89, 845–851.
Strack, F. (2016). Reflection on the smiling registered replication report. Perspectives on Psychological Science, 11(6), 929–930.
Strack, F., Martin, L. L., and Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology, 54(5), 768–777.
Wagenmakers, E. J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., Jr., … Zwaan, R. A. (2016). Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928.
Westfall, J., Judd, C., and Kenny, D. A. (2015). Replicating studies in which samples of participants respond to samples of stimuli. Perspectives on Psychological Science, 10(3), 390–399.
Westfall, J., Kenny, D. A., and Judd, C. (2014). Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli. Journal of Experimental Psychology: General, 143(5), 2020–2045.
Wilson, J. P., Hugenberg, K., and Rule, N. O. (2017). Racial bias in judgments of physical size and formidability: From size to threat. Journal of Personality and Social Psychology, 113(1), 59–80.
Zajonc, R. B. (1965). Social facilitation. Science, 149(3681), 269–274.
Zhou, H., and Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493–504.

References

Angrist, J. D., Imbens, G. W., and Rubin, D. B. (1996). Identification of causal effects using instrumental variables (with discussion and rejoinder). Journal of the American Statistical Association, 91, 444–472.
Aussems, M. C. E., Boomsma, A., and Snijders, T. A. (2011). The use of quasi-experiments in the social sciences: A content analysis. Quality & Quantity, 45(1), 21–42.
Barlow, D. H., Hayes, S. C., and Nelson, R. O. (1984). The essentials of time-series methodology: Case studies & single-case experimentation; within-series elements; between-series elements; combined-series elements. In Goldstein, A. P. and Krasner, L. (eds.) The Scientist Practitioner: Research and Accountability in Clinical and Educational Settings. Pergamon Press.
Berk, R., Barnes, G., Ahlman, L., and Kurtz, E. (2010). When second best is good enough: A comparison between a true experiment and a regression discontinuity quasi-experiment. Journal of Experimental Criminology, 6(2), 191–208.
Bloom, H. S., Michalopoulos, C., and Hill, C. J. (2005). Using experiments to assess nonexperimental comparison-group methods for measuring program effects. In Bloom, H. S. (ed.) Learning More from Social Experiments. Russell Sage Foundation.
Bollen, K. A. (1989). Structural Equations with Latent Variables. John Wiley & Sons.
Bor, J., Moscoe, E., Mutevedzi, P., Newell, M. L., and Bärnighausen, T. (2014). Regression discontinuity designs in epidemiology: Causal inference without randomized trials. Epidemiology, 25(5), 729–737.
Box, G. E. P., and Jenkins, G. M. (1976). Time Series Analysis: Forecasting and Control, 2nd ed. Holden-Day.
Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312.
Campbell, D. T. (1969). Reforms as experiments. American Psychologist, 24, 409–429.
Campbell, D. T., and Stanley, J. C. (1966). Experimental and Quasi-experimental Designs for Research. Houghton Mifflin.
Cappelleri, J. C. (1991). Cutoff-based designs in comparison and combination with randomized clinical trials. Ph.D. dissertation, Cornell University, Ithaca, NY.
Cappelleri, J. C., Darlington, R. B., and Trochim, W. M. K. (1994). Power analysis of cutoff-based randomized clinical trials. Evaluation Review, 18, 141–152.
Cappelleri, J. C., and Trochim, W. M. K. (1994). An illustrative statistical analysis of cutoff-based randomized clinical trials. Journal of Clinical Epidemiology, 47, 261–270.
Chaplin, D. D., Cook, T. D., Zurovac, J., Coopersmith, J. S., Finucane, M. M., Vollmer, L. N., and Morris, R. E. (2018). The internal and external validity of the regression discontinuity design: A meta-analysis of 15 within-study comparisons. Journal of Policy Analysis and Management, 37(2), 403–429.
Chester, D. S., and Lasko, E. N. (2021). Construct validation of experimental manipulations in social psychology: Current practices and recommendations for the future. Perspectives on Psychological Science, 16(2), 377–395.
Cialdini, R. B. (2009). Influence: Science and Practice. Pearson Education.
Cochran, W. G. (1965). The planning of observational studies of human populations (with discussion). Journal of the Royal Statistical Society, Series A, 128, 134–155.
Cook, T. D., and Campbell, D. T. (1979). Quasi-experimentation: Design and Analysis Issues for Field Settings. Rand McNally.
Cook, T. D., Shadish, W. R., and Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724–750.
Cook, T. D., and Steiner, P. M. (2010). Case matching and the reduction of selection bias in quasi-experiments: The relative importance of pretest measures of outcome, of unreliable measurement, and of mode of data analysis. Psychological Methods, 15(1), 56–68.
Cook, T. D., Steiner, P. M., and Pohl, S. (2009). How bias reduction is affected by covariate choice, unreliability, and mode of data analysis: Results from two types of within-study comparisons. Multivariate Behavioral Research, 44(6), 828–847.
Cook, T. D., and Wong, V. C. (2008a). Better quasi-experimental practice. In Alasuutari, P., Bickman, L., and Brannen, J. (eds.) The Sage Handbook of Research Methods. Sage Publications.
Cook, T. D., and Wong, V. C. (2008b). Empirical tests of the validity of the regression discontinuity design. Annales d’économie et de statistique, 91–92, 127–150.
Cronbach, L. J. (1982). Designing Evaluations of Educational and Social Programs. Jossey-Bass.
Cruz, M., Bender, M., and Ombao, H. (2017). A robust interrupted time series model for analyzing complex health care intervention data. Statistics in Medicine, 36(29), 4660–4676.
Dong, N., and Lipsey, M. W. (2018). Can propensity score analysis approximate randomized experiments using pretest and demographic information in pre-K intervention research? Evaluation Review, 42(1), 34–70.
Ewusie, J. E., Soobiah, C., Blondal, E., Beyene, J., Thabane, L., and Hamid, J. S. (2020). Methods, applications and challenges in the analysis of interrupted time series data: A scoping review. Journal of Multidisciplinary Healthcare, 13, 411–423.
Fabrigar, L. R., and Wegener, D. T. (2014). Exploring causal and noncausal hypotheses in nonexperimental data. In Reis, H. T. and Judd, C. M. (eds.) Handbook of Research Methods in Social and Personality Psychology, 2nd ed. Cambridge University Press.
Fabrigar, L. R., Wegener, D. T., and Petty, R. E. (2020). A validity-based framework for understanding replication in psychology. Personality and Social Psychology Review, 24(4), 316–344.
Flake, J. K., Pek, J., and Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8(4), 370–378.
Fretheim, A., Zhang, F., Ross-Degnan, D., Oxman, A. D., Cheyne, H., Foy, R., … Soumerai, S. B. (2015). A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. Journal of Clinical Epidemiology, 68(3), 324–333.
Glass, G. V., Willson, V. L., and Gottman, J. M. (1975). Design and Analysis of Time-Series Experiments. Colorado Associated University Press.
Glazerman, S., Levy, D. M., and Myers, D. (2003). Nonexperimental versus experimental estimates of earnings impacts. Annals of the American Academy of Political & Social Science, 589, 63–93.
Gleason, P., Resch, A., and Berk, J. (2018). RD or not RD: Using experimental studies to assess the performance of the regression discontinuity approach. Evaluation Review, 42(1), 3–33.
Goldberger, A. S. (1972). Selection bias in evaluating treatment effects: Some formal illustrations. Unpublished manuscript.
Green, P. J., and Silverman, B. W. (1993). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. CRC Press.
Grimm, K. J., Helm, J., Rodgers, D., and O’Rourke, H. (2021). Analyzing cross-lag effects: A comparison of different cross-lag modeling approaches. New Directions for Child and Adolescent Development, 2021(175), 11–33.
Hagemeier, A., Samel, C., and Hellmich, M. (2022). The regression discontinuity design: Methods and implementation with a worked example in health services research. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen, 172, 71–77.
Hahn, J., Todd, P., and Van der Klaauw, W. (2001). Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica, 69(1), 201–209.
Hallberg, K., Cook, T. D., Steiner, P. M., and Clark, M. H. (2018). Pretest measures of the study outcome and the elimination of selection bias: Evidence from three within-study comparisons. Prevention Science, 19(3), 274–283.
Hamaker, E. L., Kuiper, R. M., and Grasman, R. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116.
Hart, J. D., and Wehrly, T. E. (1992). Kernel regression when the boundary region is large, with an application to testing the adequacy of polynomial models. Journal of the American Statistical Association, 87(420), 1018–1024.
Hayes, A. F. (2006). A primer on multilevel modeling. Human Communication Research, 32(4), 385–410.
Holland, P. W. (1986). Statistics and causal inference (with discussion). Journal of the American Statistical Association, 81, 945–970.
Kisbu-Sakarya, Y., Cook, T. D., Tang, Y., and Clark, M. H. (2018). Comparative regression discontinuity: A stress test with small samples. Evaluation Review, 42(1), 111–143.
Linden, A. (2018). Using forecast modelling to evaluate treatment effects in single-group interrupted time series analysis. Journal of Evaluation in Clinical Practice, 24(4), 695–700.
Liu, L. M. (1989). Identification of seasonal ARIMA models using a filtering method. Communications in Statistics: Theory and Methods, 18(6), 2279–2288.
Luellen, J. K., Shadish, W. R., and Clark, M. H. (2005). Propensity scores: An introduction and experimental test. Evaluation Review, 29(6), 530–558.
McClannahan, L. E., McGee, G. G., MacDuff, G. S., and Krantz, P. J. (1990). Assessing and improving child care: A personal appearance index for children with autism. Journal of Applied Behavior Analysis, 23(4), 469–482.
Maciejewski, M. L. (2020). Quasi-experimental design. Biostatistics & Epidemiology, 4(1), 38–47.
McKillip, J. (1992). Research without control groups: A control construct design. In Bryant, F. B., Edwards, J., Tindale, R. S., Posavac, E. J., Heath, L., and Henderson, E. (eds.) Methodological Issues in Applied Psychology. Plenum.
Magidson, J. (1977). Toward a causal model approach for adjusting for preexisting differences in the nonequivalent control group situation. Evaluation Quarterly, 1, 399–402.
Mark, M. M., and Reichardt, C. S. (2004). Quasi-experimental and correlational designs: Methods for the real world when random assignment isn’t feasible. In Sansone, C., Morf, C. C., and Panter, A. T. (eds.) The Sage Handbook of Methods in Social Psychology. Sage Publications.
Muggeo, V. M. (2008). Segmented: An R package to fit regression models with broken-line relationships. R News, 8(1), 20–25.
Musca, S. C., Kamiejski, R., Nugier, A., Méot, A., Er-Rafiy, A., and Brauer, M. (2011). Data with hierarchical structure: Impact of intraclass correlation and sample size on type-I error. Frontiers in Psychology, 2, 74, https://doi.org/10.3389/fpsyg.2011.00074.
Paluck, E. L., and Cialdini, R. B. (2014). Field research methods. In Reis, H. and Judd, C. (eds.) Handbook of Research Methods in Social and Personality Psychology, 2nd ed. Cambridge University Press.
R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved from www.R-project.org.
Ramsay, C. R., Matowe, L., Grilli, R., Grimshaw, J. M., and Thomas, R. E. (2003). Interrupted time series designs in health technology assessment: Lessons from two systematic reviews of behavior change strategies. International Journal of Technology Assessment in Health Care, 19(4), 613–623.
Reichardt, C. S. (2006). The principle of parallelism in the design of studies to estimate treatment effects. Psychological Methods, 11(1), 1–18.
Reichardt, C. S. (2019). Quasi-experimentation: A Guide to Design and Analysis. Guilford Press.
Rosenbaum, P. R., and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701.
Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100, 322–331.
Rubin, D. B. (2006). Statistical inference for causal effects, with emphasis on applications in psychometrics and education. In Rao, C. R. and Sinharay, S. (eds.) Handbook of Statistics: Psychometrics. Elsevier.
Rubin, D. B. (2007). The design versus the analysis of observational studies for causal effects: Parallels with the design of randomized trials. Statistics in Medicine, 26, 20–36.
Rubin, D. B. (2008). For objective causal inference, design trumps analysis. Annals of Applied Statistics, 2, 808–840.
Rubin, D. B., and Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52, 249–264.
St. Clair, T., Hallberg, K., and Cook, T. D. (2016). The validity and precision of the comparative interrupted time-series design: Three within-study comparisons. Journal of Educational and Behavioral Statistics, 41(3), 269–299.
Schaffer, A. L., Dobbins, T. A., and Pearson, S. A. (2021). Interrupted time series analysis using autoregressive integrated moving average (ARIMA) models: A guide for evaluating large-scale health interventions. BMC Medical Research Methodology, 21(1), 1–12.
Schneeweiss, S., Maclure, M., Carleton, B., Glynn, R. J., and Avorn, J. (2004). Clinical and economic consequences of a reimbursement restriction of nebulised respiratory therapy in adults: Direct comparison of randomised and observational evaluations. British Medical Journal, 328(7439), 560–566.
Schochet, P. Z. (2009). Statistical power for regression discontinuity designs in education evaluations. Journal of Educational and Behavioral Statistics, 34(2), 238–266.
Shadish, W. R., and Cook, T. D. (1999). Design rules: More steps towards a complete theory of quasi-experimentation. Statistical Science, 14, 294–300.
Shadish, W. R., Cook, T. D., and Campbell, D. T. (2002). Experimental and Quasi-experimental Designs for Generalized Causal Inference. Houghton Mifflin.
Shadish, W. R., Galindo, R., Wong, V. C., Steiner, P. M., and Cook, T. D. (2011). A randomized experiment comparing random and cutoff-based assignment. Psychological Methods, 16(2), 179–219.
Simon, H. A. (1954). Spurious correlation: A causal interpretation. Journal of the American Statistical Association, 49, 467–479.
Steiner, P. M., Cook, T. D., Li, W., and Clark, M. H. (2015). Bias reduction in quasi-experiments with little selection theory but many covariates. Journal of Research on Educational Effectiveness, 8(4), 552–576.
Stuart, E. A., and Rubin, D. B. (2007). Best practices in quasi-experimental designs: Matching methods for causal inference. In Osborne, J. (ed.) Best Practices in Quantitative Methods. Sage.
Tang, Y., and Cook, T. D. (2018). Statistical power for the comparative regression discontinuity design with a pretest no-treatment control function: Theory and evidence from the National Head Start Impact Study. Evaluation Review, 42(1), 71–110.
Tang, Y., Cook, T. D., and Kisbu-Sakarya, Y. (2018). Statistical power for the comparative regression discontinuity design with a nonequivalent comparison group. Psychological Methods, 23(1), 150–168.
Thistlethwaite, D., and Campbell, D. (1960). Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51, 309–317.
Trochim, W. M. (1990). The regression-discontinuity design. Research Methodology: Strengthening Causal Interpretations of Nonexperimental Data, 1, 119–130.
Trochim, W. M., and Cappelleri, J. C. (1992). Cutoff assignment strategies for enhancing randomized clinical trials. Controlled Clinical Trials, 13(3), 190–212.
Velicer, W. F., and Fava, J. L. (2003). Time series analysis. In Schinka, J. and Velicer, W. F. (eds.) Handbook of Psychology (editor in chief I. B. Weiner), vol. 2, Research Methods in Psychology. John Wiley & Sons.
Wegener, D. T., and Fabrigar, L. R. (2000). Analysis and design for nonexperimental data: Addressing causal and noncausal hypotheses. In Reis, H. T. and Judd, C. M. (eds.) Handbook of Research Methods in Social and Personality Psychology. Cambridge University Press.
Wegener, D. T., and Fabrigar, L. R. (2004). Constructing and evaluating quantitative measures for social psychological research: Conceptual challenges and methodological solutions. In Sansone, C., Morf, C. C., and Panter, A. T. (eds.) The SAGE Handbook of Methods in Social Psychology. Sage.
West, S. G., Biesanz, J. C., and Pitts, S. C. (2000). Causal inference and generalization in field settings: Experimental and quasi-experimental designs. In Judd, C. M. and Reis, H. T. (eds.) Handbook of Research Methods in Social and Personality Psychology. Cambridge University Press.
West, S. G., Cham, H., and Liu, Y. (2014). Causal inference and generalization in field settings: Experimental and quasi-experimental designs. In Reis, H. T. and Judd, C. M. (eds.) Handbook of Research Methods in Social and Personality Psychology, 2nd ed. Cambridge University Press.
West, S. G., and Thoemmes, F. (2008). Equating groups. In Brannon, J., Alasuutari, P., and Bickman, L. (eds.) Handbook of Social Research Methods. Sage.
West, S. G., and Thoemmes, F. (2010). Campbell’s and Rubin’s perspectives on causal inference. Psychological Methods, 15, 18–37.
Westfall, J., and Yarkoni, T. (2016). Statistically controlling for confounding constructs is harder than you think. PLOS ONE, 11(3), e0152719.
Wing, C., and Cook, T. D. (2013). Strengthening the regression discontinuity design using additional design elements: A within-study comparison. Journal of Policy Analysis and Management, 32(4), 853–877.
Wold, H. (1956). Causal inferences from observational data. Journal of the Royal Statistical Society, Series A, 119, 28–60.
Wong, V. C., and Steiner, P. M. (2018). Designs of empirical evaluations of nonexperimental methods in field settings. Evaluation Review, 42(2), 176–213.
Wong, V. C., Steiner, P. M., and Cook, T. D. (2013). Analyzing regression-discontinuity designs with multiple assignment variables: A comparative study of four estimation methods. Journal of Educational and Behavioral Statistics, 38, 107–141.
Zhou, H., and Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493–504.

References

Angrist, J. D., Imbens, G. W., and Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444–455.
Arnett, J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614.
Aronow, P. M., and Samii, C. (2017). Estimating average causal effects under general interference, with application to a social network experiment. Annals of Applied Statistics, 11, 1912–1947.
Astuti, R., and Bloch, M. (2010). Why a theory of human nature cannot be based on the distinction between universality and variability: Lessons from anthropology. Behavioral and Brain Sciences, 33(2–3), 83–84.
Berger, J., Meredith, M., and Wheeler, S. C. (2008). Contextual priming: Where people vote affects how they vote. Proceedings of the National Academy of Sciences, 105(26), 8846–8849.
Beshears, J., Dai, H., Milkman, K. L., and Benartzi, S. (2021). Using fresh starts to nudge increased retirement savings. Organizational Behavior and Human Decision Processes, 167, 72–87.
Blair, G., Littman, R., and Paluck, E. L. (2019). Motivating the adoption of new community-minded behaviors: An empirical test in Nigeria. Science Advances, 5(3), eaau5175.
Blair, G., and McClendon, G. (2021). Conducting experiments in multiple contexts. Advances in Experimental Political Science, 411–428.
Blair, G., Weinstein, J. M., Christia, F., Arias, E., Badran, E., Blair, R. A., … Wilke, A. M. (2021). Community policing does not build citizen trust in police or reduce crime in the global South. Science, 374(6571), eabd3446.
Broockman, D., and Kalla, J. (2016). Durably reducing transphobia: A field experiment on door-to-door canvassing. Science, 352(6282), 220–224.
Bruner, J. S. (1957). Going beyond the information given. Contemporary Approaches to Cognition, 1, 119–160.
Bullock, J., Green, D., and Ha, S. (2010). Yes, but what’s the mechanism? (Don’t expect an easy answer). Journal of Personality and Social Psychology, 98, 550–558.
Carpenter, S. M., Menictas, M., Nahum-Shani, I., Wetter, D. W., and Murphy, S. A. (2020). Developments in mobile health just-in-time adaptive interventions for addiction science. Current Addiction Reports, 7(3), 280–290.
Chang, E. H., Milkman, K. L., Gromet, D. M., Rebele, R. W., Massey, C., Duckworth, A. L., and Grant, A. M. (2019). The mixed effects of online diversity training. Proceedings of the National Academy of Sciences, 116(16), 7778–7783.
Cialdini, R. B. (1980). Full-cycle social psychology. Applied Social Psychology Annual, 1, 21–47.
Cialdini, R. B., Demaine, L. J., Sagarin, B. J., Barrett, D. W., Rhoads, K., and Winter, P. L. (2006). Managing social norms for persuasive impact. Social Influence, 1(1), 3–15.
Dai, H., Saccardo, S., Han, M. A., Roh, L., Raja, N., Vangala, S., Modi, H., Pandya, S., Sloyan, M., and Croymans, D. M. (2021). Behavioural nudges increase COVID-19 vaccinations. Nature, 597(7876), 404–409.
DiNardo, J., McCrary, J., and Sanbonmatsu, L. (2006). Constructive proposals for dealing with attrition: An empirical example. Working paper, University of Michigan.
Dolan, P., and Galizzi, M. M. (2014). Getting policy-makers to listen to field experiments. Oxford Review of Economic Policy, 30(4), 725–752.
Dunning, T. (2016). Transparency, replication, and cumulative learning: What experiments alone cannot achieve. Annual Review of Political Science, 19, 541–563.
Ferraro, P. J., and Agrawal, A. (2021). Synthesizing evidence in sustainability science through harmonized experiments: Community monitoring in common pool resources. Proceedings of the National Academy of Sciences, 118(29), e2106489118.
Gerber, A. S., and Green, D. P. (2000). The effects of canvassing, telephone calls, and direct mail on voter turnout: A field experiment. American Political Science Review, 94(3), 653–663.
Gerber, A. S., and Green, D. P. (2012). Field Experiments: Design, Analysis, and Interpretation. W. W. Norton & Company.
Gerber, A. S., Huber, G. A., Doherty, D., Dowling, C. M., and Ha, S. E. (2010). Personality and political attitudes: Relationships across issue domains and political contexts. American Political Science Review, 104(1), 111–133.
Gneezy, U., and Rustichini, A. (2000). A fine is a price. Journal of Legal Studies, 29(1), 1–17.
Graham, J. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.
Hansen, P. G., and Jespersen, A. M. (2013). Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. European Journal of Risk Regulation, 4(1), 3–28.
Hansen, J. A., and Tummers, L. (2020). A systematic review of field experiments in public administration. Public Administration Review, 80(6), 921–931.
Henrich, J. (2020). The Weirdest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous. Allen Lane.
Henrich, J., Heine, S. J., and Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Hong, L., and Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.
Hudgens, M. G., and Halloran, M. E. (2008). Toward causal inference with interference. Journal of the American Statistical Association, 103(482), 832–842.
Hussey, M. A., and Hughes, J. P. (2007). Design and analysis of stepped wedge cluster randomized trials. Contemporary Clinical Trials, 28(2), 182–191.
Inglehart, R., and Welzel, C. (2005). Modernization, Cultural Change, and Democracy: The Human Development Sequence. Cambridge University Press.
International Telecommunication Union (2021). Facts and Figures 2021: 2.9 Billion People Still Offline (November 2021), www.itu.int/hub/2021/11/facts-and-figures-2021-2-9-billion-people-still-offline/.
James, W. (1907). Pragmatism’s conception of truth. The Journal of Philosophy, Psychology and Scientific Methods, 4(6), 141–155.
Kapiszewski, D., MacLean, L. M., and Read, B. L. (2015). Field Research in Political Science: Practices and Principles. Cambridge University Press.
King, G., Pan, J., and Roberts, M. (2013). How censorship in China allows government criticism but silences collective expression. American Political Science Review, 107(2), 326–343.
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., … Sowden, W. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490.
Lewin, K. (1944/1997). Problems of research in social psychology. In Lewin, K., Resolving Social Conflicts & Field Theory in Social Science. American Psychological Association.
Lewin, K. (1947). Frontiers in group dynamics: II. Channels of group life; social planning and action research. Human Relations, 1(2), 143–153.
Liberman, V., Samuels, S. M., and Ross, L. (2004). The name of the game: Predictive power of reputations versus situational labels in determining prisoner’s dilemma game moves. Personality and Social Psychology Bulletin, 30(9), 1175–1185.
Marrow, A. J. (1977). The Practical Theorist: The Life and Work of Kurt Lewin. Teachers College Press.
Meadon, M., and Spurrett, D. (2010). It’s not just the subjects – there are too many WEIRD researchers. Behavioral and Brain Sciences, 33(2–3), 104–105.
Moore-Berg, S. L., Bernstein, K., Gallardo, R. A., Hameiri, B., Littman, R., O’Neil, S., and Pasek, M. H. (2022). Translating social science for peace: Benefits, challenges, and recommendations. Peace and Conflict: Journal of Peace Psychology, 28(3), 274–283.
Nickerson, D. W. (2008). Is voting contagious? Evidence from two field experiments. American Political Science Review, 102(1), 49–57.
Paluck, E. (2009). Reducing intergroup prejudice and conflict using the media: A field experiment in Rwanda. Journal of Personality and Social Psychology, 96, 574–587.
Paluck, E. L., and Cialdini, R. B. (2014). Field research methods. In Judd, C. M. and Reis, H. T. (eds.) Handbook of Research Methods in Social and Personality Psychology, 2nd ed. Cambridge University Press.
Paluck, E. L., and Shafir, E. (2017). The psychology of construal in the design of field experiments. In Banerjee, A. V. and Duflo, E. (eds.) Handbook of Economic Field Experiments, vol. 1. North-Holland.
Pan, J. (2019). How Chinese officials use the Internet to construct their public image. Political Science Research and Methods, 7(2), 197–213.
Pew Research Center (2021). Internet/broadband fact sheet (April 2021), www.pewresearch.org/internet/fact-sheet/internet-broadband.
Radsch, C. (2009). From cell phones to coffee: Issues of access in Egypt and Lebanon. In Sriram, C. L., King, J. C., Mertus, J. A., Martin-Ortega, O., and Herman, J. (eds.) Surviving Field Research: Working in Violent and Difficult Situations. Routledge.
Read, B. L., Kapiszewski, D., and MacLean, L. M. (2015). Field Research in Political Science: Practices and Principles. Cambridge University Press.
Roberts, S. O., Bareket-Shavit, C., Dollins, F. A., Goldie, P. D., and Mortenson, E. (2020). Racial inequality in psychological research: Trends of the past and recommendations for the future. Perspectives on Psychological Science, 15(6), 1295–1309.
Ross, L., and Nisbett, R. E. (1991). The Person and the Situation: Perspectives of Social Psychology. McGraw-Hill.
Rozin, P. (2009). What kind of empirical research should we publish, fund, and reward? A different perspective. Perspectives on Psychological Science, 4(4), 435–439.
Rozin, P. (2010). The weirdest people in the world are a harbinger of the future of the world. Behavioral and Brain Sciences, 33(2–3), 108–109.
Rubin, D. B. (2001). Using propensity scores to help design observational studies: Application to the tobacco litigation. Health Services and Outcomes Research Methodology, 2(3), 169–188.
Rubin, D. B. (2005). Causal inference using potential outcomes. Journal of the American Statistical Association, 100(469), 322–331.
Ruggeri, K., Većkalov, B., Bojanić, L., Andersen, T. L., Ashcroft-Jones, S., Ayacaxli, N., … Folke, T. (2021). The general fault in our fault lines. Nature Human Behaviour, 5(10), 1369–1380.
Shadish, W. R., Cook, T. D., and Campbell, D. T. (2002). Experimental and Quasi-experimental Designs for Generalized Causal Inference. Houghton Mifflin.
Shafer, K., and Lohse, B. (2005). How to Conduct a Cognitive Interview: A Nutrition Education Example. U.S. Department of Agriculture, National Institute of Food and Agriculture.
Sheely, A. (2013). Second-order devolution and administrative exclusion in the Temporary Assistance for Needy Families program. Policy Studies Journal, 41(1), 54–69.
Silver, L. (2019). Smartphone ownership is growing rapidly around the world, but not always equally (February 5, 2019). Pew Research Center, www.pewresearch.org/global/2019/02/05/digital-connectivity-growing-rapidly-in-emerging-economies.
Voigt, R., Camp, N. P., Prabhakaran, V., Hamilton, W. L., Hetey, R. C., Griffiths, C. M., Jurgens, D., Jurafsky, D., and Eberhardt, J. L. (2017). Language from police body camera footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences, 114(25), 6521–6526.
Walton, G. M., and Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331(6023), 1447–1451.
Willis, G. (2022). Cognitive Interviewing. Sage.
Wood, E. J. (2009). Field research. In Boix, C. and Stokes, S. C. (eds.) The Oxford Handbook of Comparative Politics. Oxford University Press.
Wu, S. J., Mai, M., Zhuang, M., and Yi, F. (2024). Having a voice in your community: A large-scale field experiment on participatory decision-making in China. Manuscript submitted for publication.
Wu, S. J., and Paluck, E. L. (2020). Participatory practices at work change attitudes and behavior toward societal authority and justice. Nature Communications, 11(1), 2633.
Wu, S. J., and Paluck, E. L. (2021). Designing nudges for the context: Golden coin decals nudge workplace behavior in China. Organizational Behavior and Human Decision Processes, 163, 43–50.
Wu, S. J., and Paluck, E. L. (2022). Having a voice in your group: Increasing productivity through group participation. Behavioural Public Policy, https://doi.org/10.1017/bpp.2022.9.
Wu, S. J., Yuhan Mei, B., and Cervantez, J. (2022). Preferences and perceptions of workplace participation: A cross-cultural study. Frontiers in Psychology, 13, 806481.
