
Experiment commensurability does not necessitate research consolidation

Published online by Cambridge University Press: 05 February 2024

Milena Tsvetkova*

London School of Economics and Political Science, London, UK
[email protected]
http://tsvetkova.me

*Corresponding author.

Abstract

Integrative experiment design promises to foster cumulative knowledge by changing how we design experiments, build theories, and conduct research. I support the push to increase commensurability across experimental research but raise several reservations regarding results-driven and large-team-based research. I argue that it is vital to preserve academic diversity and adversarial debate via independent efforts.

Type: Open Peer Commentary

Copyright © The Author(s), 2024. Published by Cambridge University Press

The proposed integrative experiment design approach consists of three steps: (1) explicitly define the design space of the experiments in terms of features of the decision situation and the population sample, (2) systematically sample from that design space, and (3) build theories by quantifying the outcome heterogeneity over that space. This approach will guarantee commensurability between different experiments and findings and foster cumulative knowledge. The authors' concept of "research cartography" is brilliant – the idea is to itemize, standardize, categorize, and quantify the information that we typically and only partially reveal in the Methodology and Discussion sections of our research papers. The image of a Wikidata-style database containing all experimental (and in fact, any other) social and behavioral knowledge is incredibly appealing! The Cooperation Databank, for instance, offers a glimpse of what such a database could look like (Spadaro et al., 2022). Developing research cartography will help identify research gaps, established findings, and controversial problems. The approach will also aid the reuse and reanalysis of existing data to answer new research questions (Almaatouq, Rahimian, Burton, & Alhajri, 2022; Rand, Greene, & Nowak, 2012; Tsvetkova, Wagner, & Mao, 2018). In short, whether retrospective or prospective, a comprehensive and systematic research cartography will help consolidate knowledge and stimulate new research.
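To make the cartography idea concrete, here is a minimal sketch, in Python, of what a machine-readable record for a single experiment might look like. The schema, field names, and all values below are hypothetical illustrations of the general idea, not the target article's proposal or the Cooperation Databank's actual format.

```python
# A hypothetical, minimal "research cartography" record: each experiment is
# itemized as a point in an explicit design space (decision-situation features
# plus population-sample features), together with its measured outcomes.
from dataclasses import dataclass, field


@dataclass
class ExperimentRecord:
    study_id: str                                    # e.g., a DOI (placeholder below)
    situation: dict = field(default_factory=dict)    # decision-situation features
    population: dict = field(default_factory=dict)   # population-sample features
    outcomes: dict = field(default_factory=dict)     # effect estimates and uncertainty


# All values are invented for illustration only.
record = ExperimentRecord(
    study_id="doi:10.xxxx/example",
    situation={"game": "public goods", "group_size": 4, "rounds": 10},
    population={"recruitment": "online panel", "country": "UK"},
    outcomes={"measure": "contribution rate", "estimate": 0.45, "se": 0.03},
)
```

Itemizing experiments in some such standardized form is what would make otherwise disparate studies directly comparable and queryable.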

The integrative experiment design approach, however, presses further – steps (2) and (3) propose to consolidate research and theory-testing efforts by sampling and generalizing over many points in the experimental design space simultaneously, rather than "one-at-a-time." Yet these steps are not necessary for commensurability and, more importantly, carry negative implications for diversity, innovation, and productive debate in academic research. There are several issues I would like to raise here.

First, the proposed paradigm threatens to entrench and exacerbate existing inequalities within and between scholarly communities. Participation in global research consortia may be open to many, but leadership of these consortia will likely fall to those with status, prestige, and funding. It is hard to overlook the fact that the authors speak from a position of privilege – they work at prestigious US universities, with access to hefty research funds and numerous PhD students and postdoctoral researchers. The large-scale research they propose is simply not accessible to many experimental researchers.

Second, the proposed paradigm aims to optimize efficiency in research, but this is a misguided ideal. Academic research is not just about results but also about exploration and discovery, critique and debate, learning and training. Consolidating research activities in hierarchically structured labs or consortia with established protocols and routines may reduce labor costs, but it will stifle entrepreneurship, critical thinking, and iconoclastic innovation. Indeed, some of the authors' own empirical examples show that the complexities of group synergies imply that different problems are best addressed by teams of different sizes and compositions (Almaatouq, Alsobay, Yin, & Watts, 2021; Mao, Mason, Suri, & Watts, 2016; Straub, Tsvetkova, & Yasseri, 2023). This calls for independence, plurality, redundancy, and diversification of research effort, not consolidation.

Third, in the social and behavioral sciences, raising the question is often more important than finding out the answer. Much like the observer effect in measurements of physical systems, studying a social system changes it. Posing a specific social research problem can shape political debate, policy decisions, organizational strategies, and collective behavior. Consolidating funding and research efforts portends a monopoly over setting research agendas and directions, the muffling of marginalized voices, the sidelining of localized problems, and the suppression of new perspectives and paradigms. Large-scale integrative experiments may be good at providing definitive evidence to integrate and reconcile existing theories but ill-suited to launching new research agendas.

Related to the latter issue, the proposed results-driven active-learning sampling strategy for experiments threatens to shift the focus to effects that are sizeable but not necessarily meaningful or important. Specifically, certain combinations of context and population features may be impossible or unlikely and hence practically irrelevant. In short, the integrative experiment design approach does not alleviate, and may even exacerbate, the thorniest problem of experimental research: external validity. Explaining all variation is not always the best strategy for good or efficient science: the power of good general theories is not that they are universally true but that they apply to statistically common situations and hence are useful.
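To illustrate the mechanism at issue, here is a stylized Python sketch of an effect-driven acquisition rule. It is a deliberately simplified caricature for exposition – the effect sizes, prevalence values, and the acquisition rule itself are assumptions, not the sampling procedure the target article actually proposes.

```python
import random

random.seed(0)

# Each cell of the design space carries a (hypothetical) true effect size and a
# real-world prevalence: how often that combination of context and population
# features actually occurs outside the laboratory.
design_space = [
    {"cell": i, "effect": random.gauss(0, 1), "prevalence": random.random()}
    for i in range(100)
]

# Results-driven acquisition: spend the experimental budget where the
# predicted effect is largest in absolute size.
chosen = sorted(design_space, key=lambda c: abs(c["effect"]), reverse=True)[:10]

# Nothing in the acquisition rule ties large effects to common situations, so
# the loop is free to spend its budget on sizeable but practically irrelevant
# corners of the design space.
mean_prevalence = sum(c["prevalence"] for c in chosen) / len(chosen)
print(f"Mean real-world prevalence of the 10 sampled cells: {mean_prevalence:.2f}")
```

The point of the sketch is only that, under such a rule, effect size and real-world relevance are decoupled: nothing steers sampling toward the statistically common situations on which good general theories earn their keep.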

I acknowledge that the authors present integrative experiment design as an additional, and not the only true, approach to experimentation in the social and behavioral sciences. I have assumed an exaggeratedly antagonistic stance here to caution against consolidation. There are alternatives, such as adversarial collaboration (Killingsworth, Kahneman, & Mellers, 2023; Mellers, Hertwig, & Kahneman, 2001), that can help reconcile contradictory findings without compromising debate, plurality, and diversity.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Almaatouq, A., Alsobay, M., Yin, M., & Watts, D. J. (2021). Task complexity moderates group synergy. Proceedings of the National Academy of Sciences of the United States of America, 118(36), e2101062118. https://doi.org/10.1073/pnas.2101062118
Almaatouq, A., Rahimian, M. A., Burton, J. W., & Alhajri, A. (2022). The distribution of initial estimates moderates the effect of social influence on the wisdom of the crowd. Scientific Reports, 12(1), Article 1. https://doi.org/10.1038/s41598-022-20551-7
Killingsworth, M. A., Kahneman, D., & Mellers, B. (2023). Income and emotional well-being: A conflict resolved. Proceedings of the National Academy of Sciences of the United States of America, 120(10), e2208661120. https://doi.org/10.1073/pnas.2208661120
Mao, A., Mason, W., Suri, S., & Watts, D. J. (2016). An experimental study of team size and performance on a complex task. PLoS ONE, 11(4), e0153048. https://doi.org/10.1371/journal.pone.0153048
Mellers, B., Hertwig, R., & Kahneman, D. (2001). Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science, 12(4), 269–275. https://doi.org/10.1111/1467-9280.00350
Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489(7416), 427–430. https://doi.org/10.1038/nature11467
Spadaro, G., Tiddi, I., Columbus, S., Jin, S., ten Teije, A., CoDa Team, & Balliet, D. (2022). The Cooperation Databank: Machine-readable science accelerates research synthesis. Perspectives on Psychological Science, 17(5), 1472–1489. https://doi.org/10.1177/17456916211053319
Tsvetkova, M., Wagner, C., & Mao, A. (2018). The emergence of inequality in social groups: Network structure and institutions affect the distribution of earnings in cooperation games. PLoS ONE, 13(7), e0200965. https://doi.org/10.1371/journal.pone.0200965