1. Introduction
Implicit in the philosophical notion of idealization is a certain idea of model construction: that models are arrived at, among other ways, through a process, or processes, of idealizing. The presence of such a process becomes manifest as soon as the question of deidealization is raised. Namely, idealization and deidealization are assumed to be potentially reversible processes. Models are thought by philosophers of science to be built by making use of processes of distortion, omission, abstraction, and approximation that are variously included in the category of idealization, depending on the philosophical theory in question. The different theories tend to assume that the idealizing features that make models less accurate representations of their targets can be, at least in principle, reversed, setting up deidealization as an opposite process (or set of processes) to that of idealization. Consequently, one central question within recent philosophical discussions of idealization concerns precisely the desirability of deidealization—thus giving deidealization a pivotal role in distinguishing between different notions of idealization (Nowak 1992, 2000; Weisberg 2007; Elliott-Graves and Weisberg 2014).
Yet, despite the extensive literature on idealization and despite the fact that the question of deidealization proves critical for many accounts of idealization, deidealization as a topic in its own right has not succeeded in attracting much explicit philosophical interest. In this article, we zoom in on this lacuna and study the challenges modelers face when attempting to deidealize their models. We have grouped such challenges within four broad categories: deidealizing as recomposing, deidealizing as reformulating, deidealizing as concretizing, and deidealizing as situating. Analyzing these challenges suggests that the idea of deidealization as a reversal process is both overly simplified and frequently misguided.
Our account, by taking the processes of deidealization seriously, highlights certain representational, conceptual, and methodological complexities of modeling that are often overlooked in treatments that focus only on idealization. Deidealization is crucial for different kinds of attempts to apply models to the world—in empirical work with statistics, in designing experiments, or in making arguments about concrete events. Such attempts unquestionably involve deidealization. But deidealization is also frequently involved in the use of models for theorizing, in different contexts and in different ways. In what follows, we analyze the processes of deidealization by drawing on examples from economics and the existing literature in the philosophy of economics on model application problems (e.g., Boumans 2005; Alexandrova 2006; Reiss 2008; Svetlova 2013). Concentrating on one discipline provides a more synoptic view than a collection of examples drawn from a multitude of disciplines. Economics, furthermore, provides a good subject for studying deidealization, offering a rich repository, and a relatively long history, of examples of idealization and deidealization. Also, and equally importantly, economics as a discipline faces the expectations of both policy makers and citizens, making deidealization of economic models both habitual and challenging.
2. What Idealization Implies about Deidealization
It is generally agreed among philosophers of science that most if not all models involve idealizations.Footnote 1 Being idealized, models give us inexact, partial, inaccurate, or distorted depictions of reality, and because of such limitations, one influential defense of idealization is provided precisely by the possibility of deidealization. According to this defense, as a science ‘advances’, the various simplifications and distortions effected by idealizations will be corrected, thus making the theoretical representations more concrete or realistic (e.g., McMullin 1985; Nowak 2000). In these discussions, one can notice a subtle slide between the notion of idealization as a process of model formation (by distorting, simplifying, or abstracting, etc.) and idealization as a quality of models (models are simple, inaccurate or distorted, mathematically formulated, etc.). This ambiguity, between whether one is talking about the process or an outcome object called ‘the model’, serves to hide the implicit assumption that models gain the quality of being idealized because of the processes that formed them.
Earlier discussion was more explicit. Nowak (e.g., 1992, 2000) offers a classic example of the supposed reversibility of the processes of both idealization and deidealization, with the deidealizing move being called concretization. In Nowak’s view, scientific breakthroughs were brought about by the method of idealization that seeks to set apart, and study, the dependencies between the most relevant magnitudes or essential components. He analyzed the work of, for example, Galileo, Darwin, and Marx, and claimed that the success of these theories was due to their proficient use of idealization. However, the application of an idealized theory would require a return from the ideal world to the real-world phenomena through the procedure of concretization. By eliminating step by step the idealizing conditions, more realistic statements could be achieved. For Nowak, this concretization process is usually completed by approximation: “Normally, after introducing some corrections the procedure of approximation is applied. That is, all idealizing conditions are removed and their joint influence is assessed as responsible for the deviations up to a certain threshold ε” (Nowak 1992, 12). In other words, Nowak assumed that in the mature natural sciences the approximative structure eventually replaces the idealized structure.
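Nowak’s ladder can be rendered schematically as follows (an illustrative reconstruction in our notation, not a quotation of Nowak’s own formalism). Each concretization step removes one idealizing condition, here written p_k(x) = 0, and corrects the functional dependence accordingly; the final approximation step declares the joint influence of any remaining conditions bounded by the threshold ε:

$$
\begin{aligned}
T_k:\;& \text{if } G(x) \text{ and } p_1(x)=0,\ \dots,\ p_k(x)=0, \text{ then } F(x)=f_k\big(H(x)\big),\\
T_{k-1}:\;& \text{if } G(x) \text{ and } p_1(x)=0,\ \dots,\ p_{k-1}(x)=0, \text{ then } F(x)=f_{k-1}\big(H(x),\,p_k(x)\big),\\
&\;\vdots\\
A_\varepsilon:\;& \text{if } G(x), \text{ then } \big|F(x)-f_j\big(H(x),\,p_k(x),\dots,p_{j+1}(x)\big)\big|\leq\varepsilon .
\end{aligned}
$$

On this scheme, deidealization looks like a well-defined descent: peel off one condition at a time and stop when the residual error is tolerably small.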
In these accounts, idealization, conceived as a process, was considered to cover various different kinds of strategies of simplifying, of leaving aside some components, and of abstraction into mathematical form.Footnote 2 In the more recent discussion, these terminologies have changed their sense such that now a clear distinction is often made between idealization and abstraction. While idealizations are thought to distort the features of the (real-world) target system in the simplification process, abstraction has been interpreted in terms of subtraction, that is, of omitting some of these features, or causal factors (e.g., Cartwright 1989; Nowak 2000; Jones 2005; Godfrey-Smith 2009; Levy and Bechtel 2013). ‘Abstractions’, then, are thought to give veridical, although partial, representations, whereas ‘idealizations’ depict something differently from what is known, or assumed to be, the case. Yet, even in this more articulated framework, the processual character of idealization is preserved. For example, Levy (2018) addresses the “process/product ambiguity” with respect to idealization, claiming that in understanding idealization the intentions of the modeler and the process of model construction are crucial; idealization involves “a deliberate introduction of falsehood into a representation.”Footnote 3
The distortions introduced by idealization have been motivated and justified on different grounds, yet these justifications hinge on the possibility and desirability of deidealization. Weisberg (2007; Elliott-Graves and Weisberg 2014) has distinguished between ‘Galilean idealization’ (which relies on the possibility of deidealization) and ‘minimalist idealization’ (which does not and so has to be justified on different grounds).Footnote 4 The pivotal role deidealization occupies within this distinction becomes apparent once we consider Galilean and minimalist idealizations a bit more closely.
Galilean idealizations are primarily introduced to make a model tractable for computational and other purposes, the ultimate goal being the deidealization of the model. The simple, hypothetical model, arrived at by idealization, is in the subsequent research rendered more accurate by deidealizing it. Thus, Galilean idealizations are thought to be corrigible in that they are supposed to be, at least in principle, reversible by adding back real-world details and correcting the distorted features.Footnote 5 From the epistemic point of view, however, as Batterman (2009, 445) points out, there is something paradoxical about such strategies of idealization, as the justification of idealizations lies in their (future) eliminability.
In minimalist idealization—according to Weisberg—simple models are not to be deidealized because this will not increase their explanatory or epistemic value. Batterman calls such a minimalist notion of idealization “the non-traditional view” and claims that “the adding of details with the goal of ‘improving’ the minimal model is self-defeating—such improvements are illusory” (2009, 430). Weisberg (2007; Elliott-Graves and Weisberg 2014) considers mainly the case in which the minimal idealized model is thought to contain only the “core causal features that give rise to a phenomenon of interest” (Elliott-Graves and Weisberg 2014, 178). This idea is articulated in the work of Strevens (2008, 2016), who puts forth a causal difference-making approach in which idealization serves to distinguish between difference-makers and non-difference-makers. The latter are set aside by assigning extreme or default values to the variables representing them. However, Batterman (2000; Batterman and Rice 2014) explicitly discards the idea that minimalist idealization depends on the assumption that models are explanatory by virtue of being able to isolate some core causal factors. Batterman underlines the positive explanatory role of idealizations: rather than merely isolating the difference-makers from the irrelevant features and assigning the explanatory work to the difference-makers, idealizations may demonstrate which details are irrelevant.
Despite these differences in accounts of minimalist idealization, Weisberg’s proposal effectively captures a distinction that had already been built into the discussion of idealization, as well as the centrality of the question of deidealization for different strategies of idealization. Further proposed distinctions serve to highlight this centrality as well. Sklar (2000) has distinguished between controllable and uncontrollable idealizations in discussing when scientists are justified in isolating the system they are studying from the interferences of other factors and assuming them to be negligible. In the case of controllable idealizations, the theory, or a background theory, informs scientists “in what ways, and to what degree, the conclusions we reach about the idealized model can be expected to diverge from the features we will find to hold experimentally in the real system in the world” (Sklar 1993, 258). Sklar points out, however, that scientists do not always know how to compensate for such limit-style idealizations, raising the question of how to legitimize such less tractable and thus ‘uncontrollable’ idealizations (2000, 63). In a somewhat similar vein, Elgin and Sober write about ‘harmless’ idealizations, where “a causal model contains an idealization when it correctly describes some of the causal factors at work, but falsely assumes that other factors that affect the outcome are absent” (Elgin and Sober 2002, 448). Such idealizations “are harmless if correcting them wouldn’t make much difference in the predicted value of the effect variable” (448).Footnote 6
What interests us in these distinctions—Galilean idealization versus minimalist idealization, and uncontrolled versus controlled or harmless idealization—is that they crucially depend, although in different ways, on the possibility and desirability of deidealization in modeling. Galilean idealizations are considered reversible and correctable by deidealization. Minimalist idealizations, however, dispense with deidealization on the assumption that such idealizations are either harmless or controllable, or that deidealizing them would be explanatorily counterproductive. However, the texts presenting these distinctions rarely mention deidealization, let alone try to articulate it. The situation is curious: How has a notion as central as deidealization escaped the explicit attention of philosophers so far?
In what follows, we consider deidealization separately from the discussion of idealization, in contrast to the major philosophical contributions on idealization that have approached deidealization as if it were a relatively uninteresting question, either conceptualizing it as a reversal or questioning its desirability altogether. Even though the existing philosophical discussion on (de)idealization has been orbiting around the question of whether to deidealize or not, it has not examined what deidealization entails and accomplishes in actual scientific practices. We focus instead on deidealization directly, drawing some inspiration from the literature on model application that points toward the complexities and wider functions of deidealization, as well as toward the creative and constructive processes involved (e.g., Alexandrova 2006; Morgan and Knuuttila 2012; Miyake 2015). Deidealization turns out to be central to the practice of modeling and illuminative of what it encompasses.
3. The Deidealization Menu: Forms, Aims, and Heuristics
In order to address deidealization on its own, freed from the traditional assumptions and distinctions, we do not begin from the processes of idealization or even model making but rather study how models are made usable in certain domains. Models function for scientists as both representing devices and artifacts on which experiments of a particular form—‘model experiments’—can be undertaken. On the one hand, then, scientists use models to represent some system of interest—real, hypothetical, or fictional—that they want to investigate. On the other hand, scientists work with models to learn more about their performance and implications. They treat models as experimentable, or explorative, devices: they ask questions of them, manipulate them, and even ‘play’ with them to study their properties and so, directly or indirectly, the possible targets that they might be used to represent. Starting with these functions of models—representing and experimenting—we see that scientists are engaged in a variety of constructive activities when deidealizing.
Our analysis begins with greater emphasis on the experimentable qualities of using models and moves toward greater emphasis on their representing qualities. As we find, the idea of deidealization as reversal seems more apt—perhaps surprisingly—when modeling work is considered in analogy to experimentation. Yet, the turn to representing issues shows that the idea of deidealization as a set of reversals is very difficult to sustain. Of course, the two functions of models can never be fully disentangled. The conceptual distinction between the two functions furnishes, however, a handy analytic tool for the introduction of the processes of deidealization under four categories: (i) recomposing, (ii) reformulating, (iii) concretizing, and (iv) situating. Recomposing refers to the reconfiguration of the parts of the model with respect to the causal structure of the world; the supposed links between the parts of the model and real elements, or causal forces, highlight the experimentable qualities of models. Reformulating and concretizing deal more directly with the issues of representing, focusing on the two different sides of the abstractness of models: their symbolic and conceptual formulation. Finally, situating addresses the applicability of models to particular situations, either in the real world or in theorizing. It is concerned not just with how a model can be deidealized to represent some determinable target situations but with how such processes enhance models’ use in theorizing, stressing also their mobility across different uses and disciplines. The proposed classification, and the associated labels, aim to render visible the positive, creative, and use-oriented aspects of deidealization and not just the challenges involved.
3.1. Deidealizing as Recomposing
One primary use of models is investigative; they are vehicles for gaining new knowledge. When scientists come to use models for investigative purposes, they treat them as experimentable objects, without of course waiving their representational status. This points us to the quality of models as experimental setups: simple and focused situations in which a very small number of causes or elements are considered and all other elements/causes are ‘shielded off’ outside the model’s boundaries. This is consistent with the minimalist modelers’ viewpoint, but the point we make here is that models are not simplified representations because scientists necessarily believe in the simplicity of the world but because these assumptions are needed for models to function as experimentable devices. As in any laboratory experimental setup, a modeler focuses on a small number of relevant factors/elements and shields the setup from other factors, including disturbances (Mäki 2005; Morgan 2005).
But to what extent does an analogy between modeling and experimental practice imply that such ‘shielding off’ idealizations could be undone in a reverse process of deidealization? To reverse the (quasi)experimental setup of a model is no simple task; it would not be in the laboratory either. One way to appreciate the difficulties of this set of deidealization processes is to think of them as reversals of the various ceteris paribus conditions. They are conditions that generally remain implicit rather than explicit. Boumans (1999b) has argued that there are three separate conditions here, not just one. Following his division, such processes of deidealization entail adding back (a) those factors that are normally assumed absent yet that do have an influence (i.e., the ceteris absentibus factors); (b) those factors normally assumed of so little weight that they can be neglected in the idealized model (the ceteris neglectis condition); and (c) variability in those factors that are present but whose effect in the model is neutral as they are assumed to be held constant (i.e., the ceteris paribus factors).
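Schematically (in our own illustrative notation, not Boumans’s), deidealizing along these three dimensions transforms an idealized relation between y and x into a richer one:

$$
y=f(x)\;\longrightarrow\;y=g(x,\,z,\,w,\,v),
$$

where z stands for the restored ceteris absentibus factors (assumed absent), w for the restored ceteris neglectis factors (assumed negligible), and v for the ceteris paribus factors now allowed to vary. Crucially, nothing guarantees that g is just f plus separable correction terms; that is where the practical difficulties begin.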
There are practical difficulties in reversing the ceteris absentibus conditions, as the set of likely causal factors to be taken back into account might be very large, impossible to specify fully, or dependent in complex ways on one another. If so, adding back these other causal factors will alter the existing contents of the model.Footnote 7 Such a model cannot be simply deisolated; it can only be recomposed by knowledge of the rest of the elements. And just as the world is unlikely to be neatly decomposable, neither is the model.
Ceteris neglectis, in turn, concerns things so small that they can be neglected—providing, as we have seen, one important defense of the minimal modeling strategy. But even if small individually, when added together, the neglected factors might make a difference to the model in application.
Within the context of economics, reversing the ceteris paribus conditions (those that hold things constant) has been discussed more than relaxing the two earlier conditions. Models have often embedded assumptions made to smooth out variety, create stability, and so enforce homogeneity, and it is not always obvious how that squashed-out variety is to be reconstituted. This might—for example—mean substituting distributions for averages, substituting messy empirical values for simplified hypothetical values, or bringing in time-dependent variations rather than assuming a static world. Such reversing may include correcting factors that have been set to ideal values for which there is no evidence of any real values, either because of absence of knowledge or because there are no possible equivalent deidealized values. But notice that deidealization may involve something easy to do that complicates the model only a little, such as replacing average values by probability distributions, or something very different, such as replacing perfect knowledge with partial ignorance.
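Even the seemingly easy case is not a mere relabeling. As a toy illustration of our own (not drawn from any particular economic model): if a model’s output depends nonlinearly on a factor X, then evaluating the model at the average value of X is not the same as averaging the model’s output over the distribution of X, by Jensen’s inequality:

$$
f\big(\mathbb{E}[X]\big)\neq\mathbb{E}\big[f(X)\big]\quad\text{for nonlinear } f;\qquad
\text{e.g., } f(x)=x^2,\; X\in\{0,2\}\text{ equiprobably: } f\big(\mathbb{E}[X]\big)=1,\;\mathbb{E}\big[f(X)\big]=2.
$$

So even this modest deidealizing step can change the model’s conclusions, not merely its notation.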
The ultimate problem of laboratory experimental work is that the world in the test tube is so restricted and isolated that it cannot immediately and easily be fitted up to be usable in the world—think only of the incredible scientific investment in pharmacology to test whether a synthesized ‘cure’ developed in the lab will work in patients. But the problems of model experiments extend beyond that and are also of a different nature—an experiment in the lab is not the same as an experiment on a model (see Morgan 2003, 2012). Models usually represent the system to be investigated in a medium different from that of their real-world target systems (e.g., real-life economic action is represented in mathematical form). These differences in material media take us to the realm of representing. The issues involved concern both the constraints of the representational languages used, which become visible in attempts to reformulate the model (sec. 3.2), and those of concretizing the theoretical concepts (sec. 3.3).
3.2. Deidealizing as Reformulating
The diversity of scientific models is astounding; they are formulated in many different modes of representation in order to convey their content. These representational means impose their own constraints on modeling that can be both enabling and limiting (Knuuttila 2011). If the model is diagrammatic, for example, it offers certain possibilities and imposes certain limitations on what can be represented, and these will be different if the model form is algebraic or geometric. There are three major considerations here: integration issues, tractability issues, and translation issues, all of which provide challenges to any process of deidealization.
Models must hold together; there must be some form of integration, a process of giving overall form to a model. Such integration may be achieved quite subtly, by what Boumans (1999a) aptly calls ‘mathematical molding’, which amounts to, for example, making mathematical formulation choices that integrate a set of elements in a certain way. Mathematical molding is a central feature of the model, yet it might not be noticed or seen as such once the choices have been made. Such mathematization choices often cannot simply be ‘undone’: deidealization involves reformulating the model, taking into account that the model might fall apart without that particular construction.
Integration operates as a strong constraint, but it is not always obvious which side it is on: the side of idealization or deidealization. For example, should the sequence of equations in a model embed a simultaneity requirement or be block recursive (modular)—a choice with very different consequences for processes of deidealization, for the former cannot easily be taken apart, whereas the latter can. This particular problematic sits at the heart of economics. In theoretical terms, it marks the difference between the statement that the world is in a state of equilibrium in all markets at all points of time and the alternative view that the system only has a tendency toward equilibrium. It is a contrast that permeates both theoretical modeling and applied modeling. Economists may believe that the modular system best represents how people plan and act, in a very complicated set of codependency relations that are also time dependent—implying only a tendency toward equilibrium. In principle, these relations could be unraveled within a model, but in practice economists can only get aggregate data for such models at such wide intervals that the application model must necessarily be written in a simultaneous form and so as an equilibrium model (Morgan 1991). Even with present computing power, treating every individual in the market as a member of a simultaneous system is not easy. As in climate science modeling, this is more than a big data problem; it is a complexity problem, and attempts to solve it in economics may start from another direction (such as agent-based modeling).
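The contrast can be made concrete with a stylized market model (a textbook-style sketch of our own, not an example from the cited literature). In the simultaneous form, price and quantity are determined jointly within the period and the equations must be solved together; in a recursive, cobweb-style form, supply responds to last period’s price, so the equations can be solved one at a time:

$$
\text{Simultaneous: }
\begin{cases}
q_t = a - b\,p_t & \text{(demand)}\\
q_t = c + d\,p_t & \text{(supply)}
\end{cases}
\qquad
\text{Recursive: }
\begin{cases}
q_t = c + d\,p_{t-1} & \text{(supply)}\\
p_t = (a - q_t)/b & \text{(inverted demand)}
\end{cases}
$$

The recursive version can be taken apart and modified equation by equation, but only because it embeds a particular temporal ordering; the simultaneous version offers no such seams.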
These representing issues become intertwined with experimentability issues when we recognize that models are built to be tractable. Frequently, this is thought to mean merely setting certain variables to certain values (e.g., to zero) in order to make the mathematics work easily. But it is often difficult to know how many of those model assumptions could be translated back into statements about real entities and processes. Alexandrova (2006) calls such assumptions ‘derivation facilitators’ and asks whether it is more realistic for agents to have discretely as opposed to continuously distributed valuations, given that it is already questionable whether people form their beliefs by drawing a value from a probability distribution.
At a granular level, then, it is difficult to see how one could easily tease apart the individual assumptions of a model and deidealize them separately. Indeed, according to Cartwright (1999), economic models are overconstrained, by which she means that the modeled situation is constructed in order to yield certain kinds of results. Morrison (2009) pays attention to this same feature of mathematical abstractions in physics, claiming that they are needed to make the model work.
Tractability of course impinges on the investigative function of models. For example, economists’ infamous ‘overlapping generations’ model, designed to get at the relationships between consumption and savings in an economy, imagined a world of two generations, who work and save in their first period and who use their savings to consume in retirement (see Hausman’s [1992] analysis). The model relates the two ‘generations’ so that each new working generation transfers resources to the current retired generation. Restricting the model made it tractable; indeed, economists often begin with modeled worlds that have only two dimensions (two consumers, two goods, or two factors of production), for ease of the mathematics. Deidealizing to increase the number of dimensions (e.g., an overlapping model of three generations: children, workers, retirees) would make their models somewhat more realistic, of course, but also more difficult to manage.
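In its standard two-period textbook form (a stylized rendering; presentations vary), each agent chooses consumption when young (c_1) and when old (c_2), given a wage w, savings s, an interest rate r, and a discount factor β:

$$
\max_{c_1,\,c_2}\; u(c_1)+\beta\,u(c_2)
\quad\text{subject to}\quad
c_1+s=w,\qquad c_2=(1+r)\,s.
$$

A three-generation version adds a further budget constraint per agent and requires tracking transfers across two overlapping cohort pairs at every date: a modest gain in realism bought at a substantial loss in manageability.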
The process of deidealizing mathematical models may also involve translations, for different scientific uses may require a formulation that is more convenient for that particular use; that is, deidealizing may involve making a choice among different representational modes, frequently a switch from one formal language to another. Whatever formal language the model is presented in, it cannot straightforwardly be translated into another formal language, for the two will likely have different semantics and syntax. Even in those cases in which the various mathematical versions of a model are ‘formally equivalent’, implying easy switching between ‘equivalent’ formal representations, scientists’ own subject-based understanding and use of the model is likely to be different (Vorms 2011; Morgan 2012). As an example, game theory in its early years had three different representations, in three different mathematical formulations, to describe or instantiate game structures: a matrix structure of payoffs (which depicts the outcome of choices), a branching tree diagram of possibilities (which depicts the decision process of choosing), and a spatial solution set (depicting the set of possible solutions; Luce and Raiffa 1957). These different formulations focused on different aspects of the relevant game for different purposes, and they imply different processes of deidealization for use.
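For instance, the matrix form of a generic two-player game (the payoffs here are our own illustration) records only the outcomes of simultaneous strategy choices:

$$
\begin{array}{c|cc}
 & \text{Left} & \text{Right}\\ \hline
\text{Up} & (2,\,2) & (0,\,3)\\
\text{Down} & (3,\,0) & (1,\,1)
\end{array}
$$

Rewriting the same game as a branching tree forces commitments about the order of moves and about what each player knows when moving, content that the matrix suppresses; switching formats is therefore never a neutral relabeling.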
These difficulties of arriving at mathematical representations and holding model elements together may explain in part the enormous success of some mathematical formulations that are applied across different disciplines. Examples of such cross-disciplinary templates (Humphreys 2004) are general mathematical forms and computational methods underlying such simple mathematical models as the Ising model and the Lotka-Volterra model, or network methods more generally, all of which have been applied to various problems within economics. But, of course, deidealizing models built on cross-disciplinary formal templates is problematic almost by definition since their application is precisely based on the tractability of their particular syntactic configurations. Moreover, the semantics are also important: the template needs to be translated from the theoretical framework of the source field to the new discipline, as well as to the new target in question. Such translation typically involves considerable theoretical effort. For example, it is not a trivial question how a formal template designed for the phenomenon of ferromagnetism can be applied to neural networks or peer pressure in socioeconomic systems (Knuuttila and Loettgers 2014, 2016). Accordingly, many problems of translation point to the issues faced by the attempt to concretize the concepts incorporated into models.
3.3. Deidealizing as Concretizing
On the representing end of the spectrum, deidealizing involves (apart from reformulating) concretizing the conceptual core of the model that may be needed for specific purposes in theorizing or in application. The idealized model is likely to embed a scientist’s theoretical or conceptual commitments about either the system or the elements of that system. While there has been some consideration of concept formation associated with modeling (e.g., Wartofsky 1968; Nersessian 2008), there has been little on the problem of deidealizing those conceptual abstractions (except for Nowak 1992, 2000). This means figuring out how such conceptual abstractions about the system, or the elements in it, are made concrete; the conceptual elements can be deidealized in different ways for different sites and for different purposes.
The concept of an economy, the economy as a whole unit, has been made concrete in many different ways: for example, as following a dynamic path with cyclical oscillation in business cycle research; as a system that relates all the inputs to all the outputs of each productive sector in ‘input-output analysis’; or as a ‘macroeconomy’ that uses an accounting framework to examine the relations of aggregates of consumption, investment, and so on. These different concretized models enable both reasoning and analysis at a theoretical level and substantive empirical investigation.
‘Utility’, in turn, is a concept more usually associated with the individual. It is one of the most abstract and ubiquitous concepts used in economics, referring to the unobservable relationship between people and the goods that they consume, and is the conceptual starting point for much of the modern economics of individual behavior. There are various versions of the concept, focusing on human need, satisfaction, enjoyment, and other relational notions. It was developed in both textual and mathematical accounts in nineteenth-century economics, and in the twentieth century there have been various attempts to deidealize it for specific and interventionist usages. One of many examples has been the development of quality-adjusted life years (QUALYS), in order to capture subjective (i.e., patient-experienced) utility from extended life after a medical intervention. These are not just processes of measurement operationalization, or of replacing the symbolic abstraction ‘U’ (for ‘utility’), but are designed to fill in the content that would fit that conceptual abstraction ‘utility’ to an equivalent substance, such as a quality of physical life (see sec. 3.5).
Being more concrete does not necessarily mean being more realistic or accurate to any particular observable objects in the world, for the concretized versions of these elements remain wedded to their conceptual framing. The implications of concretization and the choices it requires are well demonstrated in Knight’s (1921) idealized interpretation of rational economic man. The main ancestor of ‘rational economic man’ is usually taken to be ‘homo economicus’, a character associated with John Stuart Mill’s mid-nineteenth-century recipe for making economics a doable science: a wealth-seeking miser who was nevertheless held back by his desires for luxury and dislike of work. For Mill, this was an abstraction in two senses: these were the economic characteristics of man to be found universally, but they were also understood conceptually. In the late nineteenth century, economists’ economic man was portrayed as a consumer, seeking to maximize his utility according to his preferences. In order to fully explore the rationality of that notion of economic behavior, Knight endowed him with the virtues of perfect knowledge and perfect foresight, so that his economic model man had no ignorance of the present and no uncertainty about the future. The deidealization might seem obvious—return uncertainty and lack of foresight to the account of man’s behavior. Yet changing this assumption would not have been straightforward for Knight, who thought that lack of knowledge could be divided into two chunks: risk, for which we could write down a probability distribution, and genuine uncertainty, about which we have no such knowledge. Moreover, Knight later described the implications of his assumptions to mean that his idealized economic man was actually just a slot machine, with no reasoning power and no intelligence (Morgan 2006). So, deidealizing Knight’s model would mean introducing into it intelligence and reasoning power—a considerable problem in artificial intelligence, rather than a relatively contained, if difficult, task of forecasting the economic future.
3.4. Deidealizing as Situating
Models, by virtue of their simplified, ideal, or abstract qualities are not immediately applicable to any real concrete situation in the world. This final important category refers to how models often need to be explicitly situated back into the world—not just into the world in any general sense but rather to be made usable for specific situations in the world. One obvious place we can see this happening is when a simple mathematical model used in theorizing is deidealized into a statistical model as it becomes fitted to data. This is a matter not just of a change in language (i.e., ‘reformulating’, as in sec. 3.2) but of positive fitting to specific case situations. For example, in economics in the early days of modeling markets for goods, it was often assumed that statistical data could just be fitted to the equations, for the data issues and measurement requirements were considered hardly relevant to questions about the difference between corn and hog markets. More recently, such economic modeling has taken the lessons from the problems of fitting models to data and begun to retrofit mathematical models so that they are already geared toward the statistical data available (e.g., by embedding the probability assumptions into the microchoice structures faced by specific sets of individuals in economic labor markets).
The aim of deidealization as situating might be to locate a model in many different but perhaps superficially similar specific sites, using either statistical work or experimental work in lab or field. There is no reason to expect any ‘general’ deidealization, that is, one that will work everywhere. Any model is likely to need a different deidealization for every different situation: time, place, and topic. For example, poverty alleviation field experiments are usually based on some idealized model of behavior, which may be successfully situated (deidealized) for application in a particular site but then often prove not so successful when applied at another geographical site. The critical point to note here is that models made relevant by deidealizing (situating) for just one site may require idealizing again in some respects (i.e., desituating) and then deidealizing again (resituating) in order to be relevant to another site (see Cartwright 2012; Morgan 2014). Such processes, that is, the transfer of model-based experimental designs from one site to another, bear remarkable similarities to the transfer of model templates within and between different disciplines that we discussed in section 3.2.
What these similarities between the application of model-based experimental designs and the transfer of theoretical model templates show is that deidealization as (re)situating is not just a challenge for applied work but equally relevant for theoretical work in which models need to be partially deidealized to situate them in a particular domain of theorizing. These processes offer an equally open-ended and challenging agenda. A telling example is provided by the supply-and-demand model, probably the most iconic model in economics. This model in its original diagrammatic representation was a simple cross or ‘scissors’ diagram of the supply-and-demand curves cutting to capture the price and quantities exchanged in the market (there was also a version in an algebraic format). Even before the end of the nineteenth century, the diagram could be found in different versions as appropriate for thin markets like that for race horses (where the demand and supply are both limited, and value/price are difficult to determine) and for markets for spoiling goods, like fish. That diagrammatic model was also developed at the same time to picture different shapes for the supply curves as appropriate for different industrial structures on the supply side (monopolists or competitors). Thus, that very simple iconic general model was reformulated to be appropriate for categories or kinds of things in the economic world. These forms of deidealization create various generic models applicable to particular classes of things, yet they still remain theorizing objects as direct offspring of the original simpler model. Each one of them is formulated for kinds of markets, yet no one version of the model applies to every market.
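In algebraic format (a generic rendering of ours, not a historical quotation), the iconic model and one situated variant for spoiling goods, whose supply on the day is fixed whatever the price, might read:

$$
\text{General: } q^d=a-b\,p,\quad q^s=c+d\,p,\quad q^d=q^s\;\Rightarrow\;p^{*}=\frac{a-c}{b+d};
\qquad
\text{Fish market: } q^s=\bar{q}\;\Rightarrow\;p^{*}=\frac{a-\bar{q}}{b}.
$$

Each variant keeps the family resemblance of the crossing curves while committing to a different supply-side structure.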
Although deidealizing the supply-and-demand model may seem to involve steps toward more concrete accounts of markets, this does not amount to simply adding back factors or reversing assumptions, or even to reaching something like a realistic account of markets. Rather, deidealization involves shaping the ideal model in particular ways so that it becomes relevant to a subset of the domain, for both theorizing about and applications in those subdomains. Thus, the deidealizing process may involve a move from the abstract and general to a still abstract, formal model appropriate for a generic class of phenomena, or to a level of model that is evidentiarily specific.
Sometimes this situating process involves radical change, particularly changes in concepts. The supply-and-demand model had to be reinterpreted when it was moved from the market for goods to the market for labor, prompting the concept of ‘voluntary unemployment’—that is, that the unemployed chose unemployment because they valued leisure (regardless of whether the choice was real in the sense that job vacancies were available). The appropriation of the model in a very different domain required a reconceptualization of the nature of unemployment. Once again, such processes of deidealization may well accompany the transfer of models for use between subfields within a discipline or even between fields. Game theory models have been moved from economics to apply in political science and evolutionary biology, but not without changing conceptual interpretations and usages in these theoretical domains, hinting toward the often neglected conceptual dimension in deidealization and underlining the close entanglement between concretizing and situating.
3.5. The Deidealization Menu: Example
Above we categorized deidealization into four distinct processes related to the investigative and representational functions of models. Our analysis recognizes strong limitations in understanding deidealization as processes of reversal and suggests an alternative way of thinking about it. Our four categories—of recomposing, reformulating, concretizing, and situating—not only provide a useful analytical framework but now offer an array, or menu, of processes of deidealization that can be applied to a model according to the purposes at hand. These processes are exemplified in figure 1 with respect to deidealizing the utility function to arrive at the quality-adjusted life years (the specific QUALYS) relevant for a given medical procedure.
Concretization.—The symbolic abstraction U stands for the conceptual abstraction ‘utility’: the relation between a person and a ‘good’ understood as the value the person gains from consuming the good (where the notion of a ‘good’ includes anything that a person values, such as a musical performance or a hot shower or even a replacement limb, not just a box of chocolates or a cold drink). The notion of utility has been the subject of deep analysis and debate over a very long period. Its conceptual meaning has varied over history and according to specific institutional and problem situations in which it is used. At the current time, it primarily features as both a mathematical construct in equations and a theoretical entity with psychological implications underlying choice behavior. That theoretical entity refers to something that economists believe “exist(s) independently of scientists and the scientific conventions of the scientific community” (Cohen 1997, 178).
Given the ontological status of the notion of utility, economists have—with a large investment in social research—made it concrete for specific purposes in practical domains such as developing the notion and measurement of QUALYS. And in this context, health-related utility functions can include both needs and preferences.Footnote 8 One can measure patients’ preferences with respect to a particular treatment (e.g., how much it contributes to improved quality of life, convenience of use, or fewer side effects) and also with respect to their needs (e.g., survival).
Reformulating.—Models are made for reasoning with. In this case, reformulating them to make them workable instruments involves making decisions, for example, on the shapes of people’s utility curves, their choice behavior, and what it means to maximize. In the simple form shown in figure 1, it is not clear what functional form the model takes or what holds the elements together: the model is not ready for use until these commitments are made. One of the complications for these decisions is that, in order to normalize experience across a range of patients and medical interventions, QUALYS are valued from 0 to 1 (0 = death, 1 = full healthy life), and (arbitrarily set) interval scales are used between these. The interval scale, still idealized, is useful for policy makers and administrators in allowing for comparative measurements, say, between different types of medical interventions. But the presumption that increments along the interval scale from 0 to 1 are all equivalent is problematic, and such scales may not make much sense at the beginning and end of the scale. All of these questions have implications for the way the model is formulated for theoretical reasoning and for practical usages.
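A worked illustration with hypothetical numbers shows how the scale enters such comparative calculations. A treatment extending life by 10 years at quality weight 0.7 yields 7 QUALYS; if it costs 35,000 more than an alternative while delivering 2 additional QUALYS, its incremental cost-effectiveness ratio is

$$
10\times 0.7=7\ \text{QUALYS},\qquad
\text{ICER}=\frac{\Delta\,\text{cost}}{\Delta\,\text{QUALYS}}=\frac{35{,}000}{2}=17{,}500\ \text{per QUALY gained}.
$$

Note how the equal-intervals presumption does real work here: a gain from 0.2 to 0.3 on the quality scale counts exactly as much as a gain from 0.8 to 0.9.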
Recomposing.—The valuations, and weighting of needs and preferences, that people make about their utility in judging the future quality of their lives under certain circumstances of less than perfect health (in QUALYS) will vary with age, gender, nationality, family circumstances, and so on. So, if the economist’s purpose is to understand the factors that influence those valuations, such factors should be introduced separately into the model. Their introduction is challenging because the added factors are not likely to be independent of each other, and there may be unknown disturbing factors that cannot be taken into account. This marks the problem of reversing the ceteris absentibus and neglectis conditions. However, the economist can render such recompositional issues into statistical measurement problems, interrogating the extent to which the variation in unknown variables accounts for the measured differences in utility valuations. In this way, economists will gain further information about what has been left out via what are called the ‘residuals’ in such a measurement equation. Such residuals can often be very illuminating about missing factors or the forms given for those included factors, and this kind of learning from recomposing has long been a standard practice in econometrics (the statistical branch of economics). Similarly, in discussing earth sciences, Miyake (2015) has recently related the use of residuals to what he calls ‘active’ deidealization: the generation of new observations through the comparison of the actual observations with predictions of simple and “false” reference models.
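In a measurement equation of the kind gestured at here (schematic, with hypothetical regressors), the residual is precisely the repository of what has been left out:

$$
U_i=\beta_0+\beta_1\,\mathrm{age}_i+\beta_2\,\mathrm{gender}_i+\dots+\varepsilon_i,
$$

where systematic patterns in the estimated residuals signal either a missing factor or a misspecified functional form for one of the included factors.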
Situating.—Before QUALYS can be used in decisions by individuals, health providers, and payers to decide on actions that will affect individuals with a certain condition (e.g., kidney dialysis or hip replacements), the model and measurements of utility have to be situated for that group of people and their particular decisions. This situating could concern patients thinking about the impact of particular or alternative treatments, but it could also result in a decision model for health service providers who aim to compare the costs of treating alternative conditions in view of their gain for patients as a group. The model will have to be ‘peopled’ with the specific QUALYS values on the basis of survey data from a set of patients and the specific costs of different treatments. For treatment decisions at the system level, the model will also need to be shaped and adjusted for the specific country and institutional health service system (e.g., a national health insurance–based system vs. a private health care system).
Our four categories—recomposing, reformulating, concretizing, and situating, fleshed out above—provide a framework for analyzing deidealization. Other kinds of generic processes or finer-grained distinctions are surely possible. But these categories do offer a menu of processes of deidealization that can be applied to a model according to the purposes at hand. It is important to note, moreover, that this array of deidealization strategies is in many cases independent from any original idealizing strategies or purposes of the idealized model. Model users deidealize in view of their aims in particular theoretical and practical contexts, and such processes likely involve different combinations of deidealizing strategies.
The deidealizing menu, along with our economic examples, shows how problematic the notion of deidealization as a set of reversals can be. The problems do not boil down only to our limited knowledge of the omitted factors or to practical problems of tractability (admittedly often demanding); they are more endemic in nature. Examining the range of details posed by the four kinds of deidealizing processes showed that there is no easy ‘adding back’ or ‘correcting’ for previous idealizations. And just as there are no easy reversals, neither are there self-evident movements between the more or less idealized states of a model—the idea (spelled out by Nowak 1992, 2000) that scientists can go up and down the ladder from idealized to less idealized and back again. Consequently, there is no precise way to talk about the degree of (de)idealization either (cf. Levy 2018). Deidealization as a reversal is clearly an idealization of its own.
4. Modeling Reconsidered
The starting point of our article was to examine how the philosophical discussion on idealization crucially makes use of the idea of deidealization, while at the same time leaving that notion largely unarticulated and unexamined. We suggested that this neglect can be explained by the implicit assumption that deidealization amounts to a reversal of the idealizing process. In our analysis of deidealization, we did not begin from the notion of reversal and so not from that of idealization either. Rather, we set out to study the actual processes of deidealization. But the focus on the theoretical and practical challenges of deidealization also proves illuminative of modeling.
Our analysis of deidealization processes opens up two critical perspectives on modeling that largely remain hidden when only the processes of idealization are considered. The first concerns decomposability of models, and the second, modeling heuristics—the way models are actually achieved in scientific practice.
Thinking seriously about the practices and processes of deidealization leads us to ask to what extent models can be considered as entities whose parts can be teased apart from each other and edited, or corrected, as the reversal account would have it. The notion of deidealization as a reversal of idealization seems to require that models are composed of separable assumptions or components, enabling theorists to deidealize such components in a selective, controlled fashion. Furthermore, our knowledge of the bits and pieces of the real world could then be, at least in principle, mapped onto a model in a relatively unproblematic way (and vice versa). In other words, the idea of reversing step by step the idealizations made in the modeling process presupposes that models are decomposable. This seems to be a generally held view in philosophical writings on modeling and idealization.Footnote 9 Yet the problems encountered by robustness analysis show that this may often not be the case (Odenbaugh and Alexandrova 2011). Of course, minimalist idealization does not need to rely for its explanatory value on deidealization. However, the causal difference-making variant of minimalist idealization seems to be based on the decomposability assumption concerning models, supposing, moreover, that the world is causally modular, enabling the analyst to separate the difference-making causal factors from non-difference-making irrelevant ones.
But our analysis of the processes of recomposing and reformulating suggests that models are relatively inflexible to changes in their contents in many different respects. Just as an experimental protocol needs to keep the experiment shielded for it to work, so too in reasoning with models. It may not be possible to add back in certain causal factors without consequences, as these factors are related to others the scientist also wants to keep in the model. And the representing issues inherent in the deidealization processes show that a model may not be decomposable in another, more serious sense—if the model consists of the integration or molding of various elements together, then it may not be possible to tease those elements apart without collapsing the functionality of the model. In short, models may not be robust to many kinds of deidealization.
Our second point concerns modeling heuristics. We espy, in the usual assumptions concerning idealization combined with the associated neglect of deidealization problems, an insidious presumption by philosophers of science that scientists originally start from considering the real world and then arrive at their models through idealizing (and abstracting). And because that habitual assumption is made, it also seems unproblematic to assume that scientists can reverse their modeling recipes to get back down to the more fully blown situation they started with, fraught though that process might be. But many of the challenges we discussed stem from the fact that scientists did not start with a complex picture, simplifying it to get to idealized tractable models. As we know from other studies of modeling, scientists often begin with something that is already simple and abstract in content and based around some conceptual elements that they believe underlie the phenomena they want to model (Morgan 2012; Knuuttila and Loettgers 2016). That is, they often begin with an idealized simple model, not with processes of idealization to get to that model. This critical hidden point lies behind many of the challenges of deidealization outlined in the article.
If, and when, scientists start with already simple abstract models, one of the main challenges of deidealizing arises from filling in the set of unknown elements, concretizing and situating the concepts, and being able to render them into a representable form. These are ceteris paribus conditions of various kinds, assumptions related to mathematical tractability, assumptions about what is most relevant, challenges of definitional and conceptual content, and so on. Many of these assumptions might not be spelled out because they are taken for granted by those in the community working with that group of models; typically only some of them are articulated. Moreover, as far as scientific practice goes, models are not well defined by a set of assumptions that lie behind them, nor are they only derived from them in some determinate manner. They can rather be conceived as freestanding artifacts (Knuuttila 2011, 2017) with a degree of autonomy from both theory and data regimes (Morrison and Morgan 1999). And they may be constructed in various ways—through analogy and template transfer, through putting together a list of ingredients in view of some theoretical goals, or through the use of theoretical imagination (Morgan 2012). None of these standard ways of model construction starts only with a set of assumptions in order to derive a model.
It is important to realize, then, that although models appear simple, this is often not because they were suitably simplified in the modeling process but because they were chosen from the start to be ideal and abstract in certain ways. That models often ‘start seemingly simple’ has important consequences for deidealization. The challenges of deidealization we studied are generic and separable in principle, yet their difficulties may be compounded when a model is not made by any explicit process of idealization but rather a scientist has started with a simple or ideal hypothesis in model form, possibly making use of some familiar model template. This also means that even simple models are much more problematic objects than philosophers have noticed. They are more inflexible than the reversal thesis would have us believe, and so deidealization emerges as being as creative a part of modeling as any other dimension of it.