
Eliminativist induction cannot be a solution to psychology's crisis

Published online by Cambridge University Press:  05 February 2024

Mehmet Necip Tunç*
Tilburg University, Tilburg, Netherlands [email protected]

Duygu Uygun Tunç
Eindhoven University of Technology, Eindhoven, Netherlands [email protected]

*Corresponding author.

Abstract

Integrative experiment design assumes that we can effectively design a space of factors that cause contextual variation. However, it is impossible to do this in a sufficiently objective way, so observations inevitably become laden with surrogate models. Consequently, integrative experiment design may even deepen the problem of incommensurability. In comparison, one-at-a-time approaches make much more tentative assumptions about the factors excluded from experiment design and hence still seem better suited to dealing with incommensurability.

Type: Open Peer Commentary
Copyright © The Author(s), 2024. Published by Cambridge University Press

The authors address the problem of how to integrate the results of independent studies in a way that facilitates knowledge accumulation in psychological science. We agree with the authors that most experiments as currently conducted in psychology have low information value. The authors claim that this is because (1) in psychological science the phenomena to be explained are much more complex and the theories are not precise enough, so theories cannot indicate which auxiliary assumptions can safely be relegated to the ceteris paribus clause, and (2) in the absence of sufficiently precise theories, the practice of designing experiments one at a time hampers the goal of knowledge accumulation because the results of individual experiments are incommensurable. They infer from this diagnosis that reforming scientific practices in psychology toward more reliable studies is misguided, because however reliable individual studies are, they will nonetheless fail to fit together in a way that enables knowledge accumulation. They propose that instead of increasing the reliability of individual studies, we should replace the one-at-a-time paradigm with integrative experiment design, which involves constructing a design space that defines all relevant contextual factors and then systematically testing their effects.

We underline two core problems with this proposal. The first is that the proposed solution is fraught with intractable difficulties. The second is that the authors are mistaken in diagnosing incommensurability as an issue that applies only to hypothetico-deductive approaches that test alternative explanations one at a time.

The integrative experiment design strategy bears a strong resemblance to eliminativist induction, also known as the Baconian method. The essence of this method is that researchers in a (sub)discipline first construct an event space in which the context variables are defined and then, by eliminating alternative explanations, arrive at an inductive generalization. As long as the defined event space effectively covers all aspects of the target phenomenon, the inductive inference made on the basis of observed instances will be accurate.
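To make this logic concrete, here is a toy sketch of our own; the factor names, candidate generalizations, and observations are entirely hypothetical, chosen only to exhibit the structure of the inference:

```python
# Toy illustration of eliminativist (Baconian) induction. All names and
# data are hypothetical, for illustration only.

# The predefined "event space": every context variable and its levels.
factors = {"incentive": [0, 1], "time_pressure": [0, 1]}

# Candidate generalizations of the form "the effect occurs iff factor f
# is at level v".
candidates = [(f, v) for f in factors for v in factors[f]]

# Observed instances: (context, was_the_effect_observed).
observations = [
    ({"incentive": 1, "time_pressure": 0}, True),
    ({"incentive": 0, "time_pressure": 0}, False),
    ({"incentive": 1, "time_pressure": 1}, True),
]

# Eliminate every candidate contradicted by at least one observation.
surviving = [
    (f, v)
    for (f, v) in candidates
    if all((ctx[f] == v) == effect for ctx, effect in observations)
]
print(surviving)  # -> [('incentive', 1)] for these toy observations
```

The elimination step is mechanical; everything hangs on whether `factors` really exhausts the contextually relevant variables. If the phenomenon also depends on a factor missing from `factors`, the surviving generalization can be flatly wrong, which is the worry we develop next.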

However, several philosophers of science, including Goodman (1983), Popper (1959), and Quine (1951), have shown in various ways that this key assumption of eliminativist induction is almost never true: it is impossible to effectively map out the contextual variations of even a single phenomenon, because the list of potentially relevant factors has infinitely many elements. The only viable strategy, as the authors point out in line with what Bacon (1994) suggested four centuries ago, is to find the elements that make a significant difference. However, determining which factors would make a significant difference in the space of contextual variation is an even harder problem in psychology because, as the authors also admit, psychological phenomena are inherently more causally dense than the phenomena studied by natural sciences such as physics. Still, the authors suggest, again in line with the Baconian method, "conducting a small number of randomly selected experiments (i.e., points in the design space) and fitting a surrogate model" (target article, sect. 3.2, para. 31). However, since only an omniscient being could specify the full contextual space in advance, no such selection of experiments can be truly random, and researchers therefore cannot avoid the risk that their surrogate model overfits the experiments they happen to perform. The overfitted model would then reflect the experimenters' initial assumptions more than the underlying reality it purports to describe.
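This risk can be illustrated with a minimal simulation. Everything below is a hypothetical sketch of ours, not the target article's procedure: the factor names (`incentive` and the omitted `anonymity`), the functional form of the "true" effect, and the choice of a Gaussian-process surrogate are assumptions made purely for illustration.

```python
# Hypothetical sketch: a surrogate model fitted to a few "random" points
# in a misspecified design space looks accurate in-sample while encoding
# the initial sampling conditions rather than the underlying mechanism.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def true_effect(incentive, anonymity):
    # Ground truth (hypothetical): the outcome depends on BOTH factors,
    # but the researchers' design space includes only `incentive`.
    return np.sin(3 * incentive) + 0.8 * anonymity

# A small number of "randomly selected experiments" in the 1-D design
# space; the omitted factor varies uncontrolled in the background.
incentive = rng.uniform(0, 1, size=8)
hidden_anonymity = rng.uniform(0, 1, size=8)
observed = true_effect(incentive, hidden_anonymity)

# Fit the surrogate over the design space the researchers defined.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
surrogate.fit(incentive.reshape(-1, 1), observed)
print("in-sample R^2:", surrogate.score(incentive.reshape(-1, 1), observed))

# New experiments run under a shifted background distribution of the
# omitted factor: the surrogate's apparent accuracy does not transfer.
new_incentive = rng.uniform(0, 1, size=200)
new_anonymity = rng.uniform(0.8, 1.0, size=200)  # background shift
print("out-of-context R^2:",
      surrogate.score(new_incentive.reshape(-1, 1),
                      true_effect(new_incentive, new_anonymity)))
```

On a typical run the in-sample fit is nearly perfect while the out-of-context score drops sharply, because the surrogate has folded the background distribution of the omitted factor into its picture of the one factor it can see.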

An active learning perspective that updates the surrogate model with new studies is not enough to solve this problem, for two reasons. First, no matter how systematically we vary the experiments whose results we use to update the surrogate model, we will very likely ignore critical contextual variables that the prevailing scientific paradigm of the time does not consider important, and thus never include them in the design space (see Kuhn, 1977, on how paradigms shape even basic observations). Second, and relatedly, the weighing or appraisal of novel evidence during the update is itself always surrogate-model-laden. For example, because of the high heterogeneity of psychological phenomena, two alternative surrogate models built from different sets of initial experiments will most probably treat different dimensions as important, and thus may place the same experiment at radically different points in the design space. Even if one attempts to combine these two surrogate models, which observations count as valid evidence and how those pieces of evidence are weighted will remain a matter of debate among scientists advocating different surrogate models.
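The first of these points can be made concrete with a second sketch. The acquisition rule below (sampling where the surrogate is most uncertain) and all names are again our hypothetical choices rather than the target article's method; the point is structural: however the surrogate is updated, every proposed experiment is drawn from the same predefined candidate space.

```python
# Hypothetical sketch: an active-learning loop can only redistribute its
# attention within the design space it was given; a contextual variable
# that was never included cannot be proposed, measured, or modeled.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_experiment(x):
    # Data-generating process (hypothetical): an omitted factor drifts
    # in the background, invisible to the one-dimensional design space.
    hidden = rng.uniform(0, 1)
    return np.sin(3 * x) + 0.8 * hidden

# Candidate experiments live ONLY in the predefined design space.
candidates = np.linspace(0, 1, 101).reshape(-1, 1)

X, y = [], []
surrogate = GaussianProcessRegressor(alpha=1e-2)
for step in range(10):
    if step < 3:
        x_next = rng.uniform(0, 1)  # initial random experiments
    else:
        # Acquisition: run the candidate where the surrogate is most
        # uncertain. Note that the search never grows a new dimension.
        _, std = surrogate.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std), 0]
    X.append([x_next])
    y.append(run_experiment(x_next))
    surrogate.fit(np.array(X), np.array(y))

# The hidden factor stays confounded with noise instead of ever becoming
# a measured dimension of the design space.
```

Expanding `candidates` with a new dimension is not something any number of update steps can accomplish; it requires changing the design space itself, which is exactly where paradigm-dependent judgment re-enters.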

Consequently, (1) the central assumption of integrative experiment design, that one can effectively define a design space, describes an impossible task, and (2) the problems of theory-ladenness and incommensurability will not be solved by integrative experiment design. In fact, what the authors call the "one-at-a-time approach" still has a better chance of addressing the incommensurability-related issues that arise from the inherent complexity of psychological phenomena, because it does not require researchers to commit to any list of elements that cause contextual variation; on the contrary, it requires them to actively search for contextual variables that behave in ways their theory does not predict. It thereby allows researchers to devise more severe tests that can falsify the theory if it is indeed incorrect. Assuming that we can know, at any given point, which elements of contextual variation are important is possible only through an unjustified indifference to elements outside the design space we have already defined; methodologies that depend on this assumption can therefore give us only an illusion of knowledge accumulation about psychological phenomena. And since it would almost always be impossible to build consensus among scientists with different perspectives about which elements belong in the design space, the problem of incommensurability is also inevitable in integrative experiment design. Therefore, methods that depend on eliminativist induction, such as integrative experiment design, cannot be an effective solution to psychology's credibility crisis.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Bacon, F. (1994). Novum organum. Carus. (Original work published 1620)
Goodman, N. (1983). The new riddle of induction. In Fact, fiction, and forecast (pp. 59–83). Harvard University Press. (Original work published 1954)
Kuhn, T. S. (1977). Objectivity, value judgement, and theory choice. In The essential tension (pp. 320–339). University of Chicago Press.
Popper, K. R. (1959). The logic of scientific discovery. Hutchinson.
Quine, W. V. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60, 20–43. doi:10.2307/2181906