1. Introduction
In popular science writing and speeches, authors frequently refer to ‘the laws of nature’ when they want to point to something that science has established beyond any doubt. Rather than providing a specific list of such laws, they typically appeal to achievements in physics and chemistry in general. From that one might assume that chemistry, like physics, has always striven for the formulation of universal laws as the proper goal of science. However, as we will see, this is highly contested in the philosophy of chemistry. In this paper, I will argue, as the title already suggests, that laws of nature do not matter much in chemistry and that models are more important instead.
In order to do this, I first recall some important episodes in the history of the concept of laws of nature, from its Jewish-Christian origin to its modern formulation by Descartes and Newton and its decline in early twentieth-century physics (Section 2). That will help us better understand not only the epistemological and metaphysical particularities of that concept but also the very narrow field in which the concept was used for most of its history (outside of the moral sciences, which I will not consider here). In Section 3, I discuss various examples of so-called laws of nature in chemistry and physical chemistry that were proposed in the nineteenth century, from which I conclude that none of them corresponds to the original concept and that they rather comprise a variety of epistemologically different statements. Most likely, the term ‘law’ was chosen not for epistemological reasons but out of a temporary fashion that, along with the term, faded away at the beginning of the twentieth century. More recent philosophical attempts to extend the concept of laws, so as to cover chemical cases, all result in unacceptable consequences. The deeper reason for these difficulties, and for the comparatively minor importance of natural laws, as I point out in Section 4, is that chemistry, as the original epitome of the experimental or Baconian sciences, has largely followed a methodological pluralism in which a variety of models, to be chosen from for pragmatic reasons, is preferred over the universal laws of nature of mathematical physics.
2. Laws of Nature in the Mechanical Philosophy of Nature
Thanks to many historians of science, the early history of the concept of laws of nature is now quite well researched.1–3 We know that it is not of ancient Greek origin, where nature (physis) and law (nomos) were considered opposites, but that it arose out of the Jewish-Christian idea of a legislator god who defined normative rules not only for human conduct but also for the natural world. Probably the earliest allusion to that idea is in the Book of Enoch, an apocryphal book of the Old Testament that narrates the story of the ‘fallen angels’. The part that interests us here, the ‘Book of the Watchers’, probably dates from about 300 BC and describes Enoch, the grandfather of Noah, mediating between the wrathful God and a conspiracy of angels who had ‘fallen’ down to earth to reveal all kinds of technological knowledge, including the quasi-chemical secrets of the primordial Creation, to humans.30 During his mediation, Enoch had to travel several times back and forth between heaven and earth and once noticed that seven stars – most likely the six known planets (the ‘wandering stars’) with their irregular orbits plus the moon – were punished because they did not obey the command of the Lord:4 ‘And the stars which roll over the fire are they which have transgressed the commandment of the Lord in the beginning of their rising, because they did not come forth at their appointed times.’
It is that idea of a divine Lawgiver that inspired the notion of laws of nature in early modern philosophy. René Descartes (1596–1650), the founder of the mechanical philosophy, was the first to introduce the concept. In his Principia philosophiae (1644, II, 36–42) he formulated a set of three ‘laws of nature’ (leges naturae) as the particular and secondary causes of all motion to which all natural explanation should refer, the primary cause being God as the creator of all matter and motion. These laws roughly stated that (1) every body remains in its state of motion unless changed by outer causes; (2) every body's motion tends to be rectilinear; and (3) the resultant motion of two colliding bodies follows certain rules. Rather than deriving them as regularities by empirical generalization, Descartes inferred his laws a priori from the perfection and immutability of God, which endowed them with a special ontological status. Unlike hypotheses, axioms, empirical regularities, approximations, and so on, Descartes’ laws of nature were not simply epistemic ideas about nature but God's own operation in nature in his most constant and unchangeable way (quod modo quam maxime constanti et immutabili operetur, 1644, II, 36). Hence, pointing to irregularities of the laws, or even criticizing them, would have meant questioning the perfection of God.
When Isaac Newton (1642–1727) reformulated and slightly modified Descartes’ three laws of nature in his Principia mathematica (1687), he not only did so in a mathematically more rigorous Euclidean style, but also relabeled them ‘axioms or laws of motion’ (axiomata sive leges motus, p. 12). Nonetheless, it was these three mechanical laws of motion by Newton (plus sometimes his law of gravitation) that in the following century were exclusively considered the laws of nature, apart from natural laws in ethics and the legal field. That focus is obvious in all three main scientific encyclopedias of the time. In England, Ephraim Chambers, in his two-volume Cyclopædia of 1728,5 still restricted ‘Laws of Nature’ in the proper sense to the moral realm and considered the ‘Laws of Motion’ laws only in a figurative meaning. Yet, in his entry ‘Motion’ (Ref. 5, p. 587), after stating that ‘Mechanics is the Basis of all Natural Philosophy’, he concluded that ‘all the Phenomena of Nature; all the Changes that happen in the System of Bodies, are owing to Motion; and are directed according to the Laws thereof.’ The most comprehensive German encyclopedia of the eighteenth century, the 68-volume work published by Zedler in 1731–54, simply took ‘laws of nature’ (Naturgesetze), in the nonmoral sense, and ‘laws of motion’ (leges motus) as synonymous.6 In Diderot and d'Alembert's Encyclopédie (1751 ff.), the entry ‘Nature’ has a sub-entry ‘Lois de la nature’ that just reformulates Newton's three laws of motion.7
The long-time restriction of the notion of laws of nature to the mechanical philosophy is further illuminated by its particular philosophical assumptions. As Milton has pointed out,2 the Cartesian approach of explaining natural phenomena by laws of motion made sense only within the radical nominalism of the mechanical philosophy. In the corpuscularian view of the mechanical philosophers, the corpuscles of which the entire material world consisted no longer had any material qualities that would have characterized them as belonging to this or that element or to this or that substance. Instead, all corpuscles consisted of the same quality-less matter, were shaped only according to some incidental form, and were identified through their space-time position, which was supposed to be governed by the laws of nature. The radical departure of this nominalist approach from all the contemporary sciences, such as chemistry, mineralogy, meteorology, botany, zoology, geology, and so on, consisted in abandoning classification, on which every classical branch of natural philosophy and history relied. All these sciences referred in their explanations to ‘Aristotelian forms’ or at least to (natural) kinds, as they mostly still do today when, for instance, the property of a chemical substance is explained through its elemental composition. The mechanical approach sought to replace exactly that traditional mode of explanation by a new type of explanation that exclusively referred to particle motions governed by the nominalist laws of nature, as exemplified by Robert Boyle (1627–1691).8
This relates to the second important philosophical assumption that made the mechanical laws of nature peculiar. Because these laws were, by their very definition, universal (indeed guaranteed by God in Descartes’ version), without any exception in space and time, and unique, in that every scientific explanation had to refer to them and not to any other explanatory concepts, other laws of nature were strictly impossible. As Henry3 has argued, the mechanical laws of nature were composed in such a way that they were necessarily universal and reductionist. They became the basis of a hitherto unseen reductionist approach in natural philosophy, with the religious connotation of a Lawgiver attached to it well into the nineteenth century.2 For a believer in the mechanical philosophy, like the encyclopedists quoted above, there could simply be no other laws of nature than the mechanical laws of motion. For those scientists who did not believe in it, the entire notion of laws of nature, with its philosophical assumptions, remained alien. They mostly continued their research without using the term.
The history of science is full of curiosities. One is that the atomic hypothesis became broadly accepted only after atoms were shown to be composed of subatomic particles, and therefore to be no atoms in the original meaning of being indivisible. Another is that the mechanical philosophy became fruitful in the explanation of chemical phenomena only after the notion of laws of nature had become seriously questioned in mechanics. Because the laws of physics, both in statistical mechanics and in quantum mechanics, are inherently statistical laws that give up the principle of causality or strong determinism, Erwin Schrödinger argued that they are no longer laws of nature.9 Since then, physicists too have rather avoided the term ‘law’ and preferred to speak of theories, equations, hypotheses, or models.
Before dealing with chemistry, it is useful to summarize the metaphysical and epistemological characteristics of laws of nature as developed in early modern mechanical philosophy, in particular by Descartes, Newton, and Boyle. These laws governed the mechanical motion of all bodies, were guaranteed by God, were considered universally valid without exceptions (unless God changed his will), fully determined every event in the material world, and were unique, without alternatives or competitors. They presupposed a strong nominalism and required that every scientific explanation refer exclusively to them, thereby establishing a strongly reductionist mechanical program.
3. Putative Laws of Nature in Nineteenth-century Chemistry
Unlike mechanics, chemistry is also (but not only) a classificatory science that deals with a multitude of different substances, which differ from each other qualitatively and quantitatively in a great variety of properties and whose composition is based on a set of chemical elements. Even though mechanical approaches to chemistry were tried early on, particularly by Boyle, their success remained largely restricted to mechanical properties like compressibility or elasticity. In contrast, the explanation of chemical transformations has always referred to the elemental composition of the reacting compounds. Because the elements as well as the chemical substances have largely been taken as natural kinds, the strict nominalist approach of the mechanical laws of nature was impossible to apply. However, one of the crucial moves of the Chemical Revolution consisted of transforming the metaphysical concept of elements into an operational concept that Boyle had already suggested earlier. Rather than being entities theoretically conceived for explanatory purposes, the elements were now considered the ultimate experimental result of chemical decomposition according to the state of the art.10 Thus, the elements had to be experimentally isolated and characterized in the first place, before any explanatory reasoning could start (Ref. 11, pp. 121–156). To that end, a consistent system of relative atomic and molecular weights had to be developed, which occupied much of the experimental and theoretical activity of nineteenth-century chemistry, as we will soon see.
At about the time of the Chemical Revolution in Paris, great mathematicians such as Joseph-Louis de Lagrange (1736–1813) and Pierre-Simon Laplace (1749–1827) enthusiastically championed Newton's approach. In the Paris circle the notion of laws of nature soon became extended to include general quantitative relations between fundamental quantities in electricity and heat transfer, such as the laws of Coulomb, Ampère, and Fourier. In the same context, the term ‘laws of nature’ began to be widely used also in French chemistry, albeit with new and varying meanings. Most frequently at first, the term ‘law’ (loi) denoted generalized qualitative observations. For instance, in his Traité élémentaire de chimie (1789), Antoine Lavoisier (1743–1794), who collaborated with Laplace on several projects, called ‘the truth given by experience according to which elastic fluids are compressible’ a ‘law’ (Ref. 10, p. 273). Other observations, usually supported with theoretical interpretations, he called a ‘general law of nature’ (loi générale de la nature), such as the ‘law of equilibrium’ between the forces of caloric and the affinity between metals and oxygen (Ref. 10, p. 359), and the spreading of molecules by heat (Ref. 10, p. 17), which later became the law of Gay-Lussac or Charles. However, the most fundamental statement of chemistry, which came to be known as the law of conservation of matter, both quantitatively and qualitatively in terms of the conservation of elements through chemical transformations, Lavoisier called a principle (principe), employing the same term that he frequently used for the chemical elements.
The law of equilibrium most likely inspired his colleague Claude-Louis Berthollet (1748–1822) to write his Recherches sur les lois de l'affinité (1801), in which he described chemical reactions on the analogy of forming a saturated solution. That, in turn, raised opposition from Joseph Proust (1754–1826) and John Dalton (1766–1844), in the form of the ‘law of definite or fixed proportions’, according to which all chemical compounds are formed from a fixed proportion of masses of their constituent elements, rather than with varying composition as Berthollet had claimed in his law. The case is particularly interesting in the present context because the dispute between Berthollet and Proust/Dalton, argued in terms of different laws, was in essence about the definition of a chemical compound. Berthollet included what we today consider mixtures; Proust/Dalton excluded them, such that the ‘law of definite proportions’ is, logically speaking, a definition. However, since chemists in the mid-twentieth century began to include so-called berthollides, i.e., compounds with varying composition, neither the definition nor the law held anymore. Because the ‘law of multiple proportions’ by Jeremias Benjamin Richter (1762–1807) and Dalton, according to which different binary compounds of the same elements combine in mass ratios of small whole numbers, depends on the ‘law of definite proportions’, its fate has been the same, mutatis mutandis. More severely, however, as organic compounds grew tremendously in number and molecular size – note, for instance, that the protein pepsin was isolated as early as 1836 – the original idea of small whole numbers could no longer be upheld. All that notwithstanding, the two laws were the starting point for exploring the system of relative atomic and molecular weights. That system would become the foundation of modern chemistry because it allowed determining the elemental composition of every compound, formulating quantitative reaction equations (stoichiometry), developing the chemical theory of atoms (i.e., the smallest units of matter that do not change in chemical transformations) and eventually the theory of molecular structure.
In this grand project of the first half of the nineteenth century, several further laws were formulated. They included the ‘law of combining volumes’ by Gay-Lussac, which asserted that in gas reactions the ratios between the volumes of the reactant gases and the products can be expressed in simple whole numbers. That essentially transferred the ‘laws of definite and multiple proportions’ of reacting masses to the volumes involved in gas reactions. Together with Avogadro's law (1811) – which he himself actually called a hypothesis – according to which equal gas volumes of different substances contain the same number of ‘molecules’, it allowed extending the system of relative atomic and molecular weights to gases. The Dulong-Petit law (1819) stated that the product of the mass-specific heat capacity of a crystalline element and its presumed relative atomic weight is the same for all such elements, which in turn allowed calculating the relative atomic weights of solids from measurements of heat capacity. Later in the century, a number of so-called colligative properties were defined in law-like statements, which all described the behavior of solutions as depending only on the amount and not the nature of the solute, including the vapor pressure according to Raoult's law, the melting-point depression, the boiling-point elevation, and the osmotic pressure in van 't Hoff's law. By adding a known amount of an unknown substance to a solution, the corresponding effect allowed measuring the relative molecular weight of that substance.
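To make the measurement principle concrete, here is a minimal worked example based on van 't Hoff's law, assuming an ideally dilute, non-dissociating solute (the symbols are introduced here only for illustration). If a mass \(w\) of the unknown substance, dissolved in a volume \(V\) of solvent, produces the osmotic pressure \(\Pi\) at temperature \(T\), then

\[
\Pi V = nRT = \frac{w}{M}\,RT
\quad\Longrightarrow\quad
M = \frac{wRT}{\Pi V},
\]

where \(R\) is the gas constant and \(M\) the sought relative molecular weight; the other colligative effects were exploited in the same manner.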
None of these laws can claim universal truth. Even the most general one, Lavoisier's ‘principle of conservation of matter’, is violated in nuclear chemistry. Worse, strictly speaking, every one of the laws mentioned in the previous paragraph is empirically falsified by every real case, once experimental measurement is sufficiently accurate. (Philosophically speaking, it is difficult to formulate ceteris paribus conditions to save the laws.) However, the differences between the laws’ predictions and the available experimental data tend to be small or even indiscernible for certain substances (e.g. noble gases and non-dissociating gases for Avogadro's law), under certain conditions (e.g. high temperature for the Dulong-Petit law), and for infinitely diluted solutions (for all the colligative properties). Once these limitations are known, through the extensive experimental work of charting the area of useful application, the laws can be put to various uses. Historically, and most importantly, they could together be employed as tools in the grand project of developing the system of relative atomic and molecular weights. Because they provided instrumentally independent access to these weights and because they had overlapping fields of application, they could be used along with other methods to correct each other's data in order to develop a consistent overall system.11
The laws mentioned so far are no exception. It can be argued that every single law, particularly those that formed the basis of physical chemistry, is, strictly speaking, falsified by every real case, provided there is sufficient measurement accuracy. The most famous laws of nineteenth-century physical chemistry – Henry's law, Guldberg and Waage's law of mass action, Ostwald's dilution law, Arrhenius’ dissociation law, van 't Hoff's osmosis law, Raoult's law, Nernst's law of electromotive force, and so on – describe in their original form ideal systems that are only approached by real systems at infinitely small concentrations, which is why they are sometimes called ‘limiting laws’. This is also the case for Boyle's law, which he himself incidentally called a hypothesis,12 the aforementioned Charles’ law, and Avogadro's law, which together combine to form the ideal gas law, and which both individually and combined are falsified by every real gas. So, what then are all these laws good for?
First, they are of great didactic value. Expressed in neat mathematical equations, they are easy for beginners to learn and allow for disregarding the particularities of the millions of known substances. Moreover, once their status as limiting laws is understood, students learn the valuable lesson that science is much more difficult than formulating simple general truths. Second, they are still of great practical value for calculating useful data in many cases, as long as their approximate character is taken into account and errors can be estimated on the basis of extensive experience and a theoretical understanding of their assumptions and limits. Third, the nineteenth-century laws were not the last word. In fact, all of these – and many other laws of chemistry, from chemical kinetics to spectroscopy, that cannot be mentioned here for reasons of space – have been further developed with considerable sophistication in physical chemistry.
These developments follow a typical pattern that could provide new meaning to laws in chemistry as well as in experimental physics. While the laws were originally assumed to be valid independently of the nature and concentration of the particular substances and of other particular conditions, their refined versions include various coefficients (or coefficient functions) to cope with exactly those particularities. Examples are van der Waals’ refinement of the ideal gas law to cope with real gases, and the further development towards so-called ‘equations of state’ that thermodynamically describe pure substances also in the liquid and solid states. Another important example is Lewis’ replacement of concentrations by activities and fugacities (and their corresponding coefficients) to deal with real mixtures and solutions and to mathematize chemical affinity.13 The simple mathematical equations have thereby turned complex, with many parameters needing to be determined independently for each specific case. Two approaches have been pursued in parallel, supplementing each other. On the one hand, theoretical considerations, mostly from statistical thermodynamics and quantum chemical modeling, can help calculate the coefficients if they have a clear physical meaning. On the other hand, huge sets of data have been measured for particular substances under specific conditions (usually of temperature and pressure) to feed the sophisticated ‘laws’. Indeed, since the first fortunate collaboration between the chemist Hans Heinrich Landolt (1831–1910) and the experimental physicist Richard Börnstein (1852–1913) on their famous one-volume handbook of 1883 (Physikalisch-chemische Tabellen), the work had grown to 350 volumes by its last print edition in 2008, before the entire project was turned into a digital database.
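The pattern is visible in the formulas themselves. In the usual textbook presentation, the ideal gas law relates pressure \(p\), molar volume \(V_m\), and temperature \(T\) without any reference to the nature of the gas, whereas the van der Waals equation introduces two substance-specific coefficients:

\[
pV_m = RT
\qquad\text{versus}\qquad
\Bigl(p + \frac{a}{V_m^{2}}\Bigr)(V_m - b) = RT,
\]

where \(a\) accounts for intermolecular attraction and \(b\) for the finite volume of the molecules, and both must be determined anew for every gas, which is exactly where the databases come in.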
Thus, if we wanted to make sense of laws in chemistry (and experimental physics) today, we would have to draw the unexpected conclusion that databases are an inherent part thereof. Rather than being an isolated and condensed statement about nature, a law in chemistry (and experimental physics) would be a mathematical equation with a (growing) number of parameters plus an ever-growing database for these parameters, which are mostly obtained experimentally.
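A minimal sketch in Python may illustrate this reading; the van der Waals coefficients below are representative textbook values, quoted for illustration only. The ‘law’ then consists of an equation together with a lookup table of substance-specific parameters:

R = 8.314  # gas constant in J/(mol K)

# Substance-specific van der Waals coefficients (a, b); representative
# textbook values in SI units (Pa m^6/mol^2 and m^3/mol), for illustration.
VDW_PARAMS = {
    "CO2": (0.3640, 4.27e-5),
    "N2": (0.1370, 3.87e-5),
    "He": (0.00346, 2.38e-5),
}

def pressure_ideal(V_m, T):
    """Pressure of one mole of gas by the ideal gas law: p = RT/V_m."""
    return R * T / V_m

def pressure_vdw(substance, V_m, T):
    """Pressure by the van der Waals equation: p = RT/(V_m - b) - a/V_m**2."""
    a, b = VDW_PARAMS[substance]  # the 'database' lookup
    return R * T / (V_m - b) - a / V_m ** 2

# At large molar volume the two nearly coincide; at small molar volume
# they diverge markedly, the limiting-law behavior discussed above.
for V_m in (0.1, 0.001):  # molar volume in m^3/mol
    print(V_m, pressure_ideal(V_m, 298.15), pressure_vdw("CO2", V_m, 298.15))

The point of the sketch is only that the refined equation predicts nothing without its parameter table: the table, not the formula, carries most of the empirical content.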
All that seems to be at odds with the received philosophy of science, which, with its focus on mathematical physics, has almost throughout favored something like Newton's mathematical axioms. The only other candidates that might have fulfilled the philosophers’ expectations were the three laws (or better, axioms) of thermodynamics. Yet, because the mechanical philosophy in the Newtonian tradition, owing to the presumed universality, could not accept any laws other than those of Newton, tremendous efforts have been spent on showing that the laws of thermodynamics can be reduced to the laws of mechanics through statistical mechanics. However, these efforts somehow ignore that the second law of thermodynamics, which claims a steady increase of entropy in the world, establishes a directional concept of time that is unknown in mechanics of classical, quantum, or relativistic provenance.14
A few philosophers have argued for a much more liberal position, according to which every general statement in science that is not falsified should be called a law of nature, such as every statement about essential properties of natural kinds (e.g., Ref. 31). While the normative attitude of determining what scientists shall and shall not call a law is somewhat puzzling with respect to actual linguistic practice in science and its history, it is questionable whether these authors have been aware of the scale of relabeling they are asking for. In chemistry alone, with its more than 60 million different known substances,15 which are chemical kinds differing from each other in a multitude of properties, that would result in literally billions of ‘laws of nature’! It would put not only the more sophisticated chemical reaction equations, but also simple sentences such as ‘solid gold is a yellowish metal’ (which can be made universally true with specific ceteris paribus conditions), on the epistemological level of laws, while, on the other hand, that status would have to be denied to, say, the ideal gas law.
A second attempt to save the ‘laws’ would be to extend the concept so as to cover idealizations.16 In its microscopic interpretation, an ideal gas consists of matter points without extension that do not interact with each other; indeed, with those assumptions one can derive the formula of the ideal gas law from the kinetic theory of gases. However, to the best of our knowledge, all gases consist of atoms and molecules that have a certain size and structure and that interact with each other. Thus, the assumptions are wrong; the microscopic image of ideal gases is an idealization. There is nothing wrong with idealizations in science. But why call them laws if they are known to be false? And why should we call a bunch of different idealizations (like the ideal gas law and van der Waals’ equation) laws of nature if they contradict each other in their assumptions and predictions, unless we multiply nature? The coexistence of contradicting laws that can each be falsified requires a strong epistemological stretch, and a radical departure from anything that has been assumed about laws in the philosophy of science. As I will suggest in the next section, the more appropriate concept for dealing with idealizations is that of a model.
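For illustration, the derivation alluded to above can be sketched in two steps, in its standard textbook form. For \(N\) non-interacting point particles of mass \(m\) in a volume \(V\), the pressure exerted on the walls is

\[
p = \frac{N m \langle v^{2}\rangle}{3V},
\]

and identifying the mean kinetic energy with temperature, \(\tfrac{1}{2}m\langle v^{2}\rangle = \tfrac{3}{2}k_{\mathrm B}T\), immediately yields the ideal gas law \(pV = Nk_{\mathrm B}T\). Every assumption entering here – no extension, no interaction – is known to be false of real molecules, which is precisely what makes the result an idealization.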
Chemistry thus challenges the notion of laws of nature. If we wanted to save them, we would have to buy either a formula plus a huge database, billions of simple sentences, or the epistemological stretch of contradicting and falsified idealizations. It seems more reasonable to ask whether this concept still makes any sense in chemistry, keeping in mind that chemists have not used the term for new discoveries for about a century. This still leaves open the question of why chemists in the late eighteenth and nineteenth centuries so frequently used the term ‘law’ for statements of various epistemological status that failed to meet basic conditions of the original concept of laws of nature. The fact that these so-called ‘laws’ were mostly named after their inventors suggests a sociological explanation. A likely historical reason is that Newtonianism – which created an iconic figure of Newton as the epitome of science, which outside of Britain became important only in the late eighteenth century, first in France, and which then dominated much of nineteenth-century European science – established the notion of ‘laws of nature’ as something that every scientist should strive for in order to become immortal, regardless of the epistemological differences between their statements and those of Newton.
The best-researched case in support of this historiographical hypothesis is that of the Russian chemist Dmitrii Mendeleev and his ‘periodic law’, one of the last cases in which the term ‘law’ was used in chemistry. As his recent biographer Michael Gordin has argued,17,18 Mendeleev developed the periodic table of the elements originally as an educational tool for structuring the then rapidly growing field of inorganic chemistry in his introductory textbook Principles of Chemistry (1869–71). In subsequent editions and various papers and speeches, however, he turned the educational table first into a system and then into a ‘law of nature’, promoting himself as the ‘lawgiver’ of chemistry who, in the heritage of Newton, had been the first to put chemistry on an exact basis. The obsession with his ‘law’ made him unable to cope with many new discoveries, including the noble gases, the electron, radioactivity,17,18 and the rare earth elements,19 to say nothing of the numerous exceptions to the proposed strict periodicity of the chemical properties of the elements when lined up according to their atomic weights, which no introductory textbook can ignore anymore.
4. Models Instead of Laws: The Methodological Pluralism of Chemistry
As Thomas Kuhn once observed,20 the experimental or Baconian sciences, which chemistry epitomized for centuries, and the mathematical sciences, with their lead field of mathematical physics, developed largely independently of each other for most of their history, regarding both their specific subjects and their methodologies. With its focus on mathematical physics, the received philosophy of science has largely neglected chemistry and, by the same token, most of the experimental sciences, which were at best considered a kind of service institution for theory testing. In this section, I try to sketch the fundamental epistemological and metaphysical differences between the two approaches,21 starting from the notion of laws of nature.
In many regards, Descartes’ laws of nature inaugurated the tradition of mathematical physics (outside of astronomy). Their metaphysical status as God-given and universal, as well as their epistemological status as the only legitimate reference point in any scientific explanation, gave mathematical physics a strictly reductionist direction. Even though the particular laws were later modified (by Newton, Einstein, Heisenberg, and others), the goal of finding one unified mathematical Theory of Everything is still undisputed in that tradition. Indeed, reductionist steps (from electricity, magnetism, and optics to electromagnetism and finally quantum electrodynamics; from thermodynamics to statistical mechanics and quantum mechanics; and so on) are considered major achievements in the long-term project of methodological monism, according to which different approaches need to be unified or reduced to yield only one approach. Behind all that stands the metaphysical idea that the natural world is ultimately simple and comprehensible, once the correct unifying mathematical theory is found (which originally included the idea of a rational and mathematical Creator).
In contrast, the experimental sciences, such as chemistry, are dominated by methodological pluralism, which even comes with an entirely different vocabulary. Rather than being the ultimate goal of research, a theory (or hypothesis, if the theoretical proposition is preliminary or contested) is here only one of many kinds of speculation serving various purposes. Except for the temporary nineteenth-century flirtation discussed in the previous section, chemists have rarely used the term ‘law’ for any further theoretical development – there are, for example, hardly any known laws in organic chemistry or biochemistry – such that the term now carries strong historical connotations, like the term ‘principle’. Instead, the most commonly used term for theoretical concepts is ‘model’, sometimes used synonymously with ‘theory’ or, if mathematically expressed, simply ‘equation’ or ‘relation’, apart from subject-specific terms such as ‘reaction mechanism’ in organic chemistry. While models in chemistry roughly correspond to laws in mathematical physics regarding their predictive and explanatory use, they are in other epistemological regards quite different. Most importantly, it is perfectly legitimate that there can be many different models, even for the same case. Rather than extending a theory to become a Theory of Everything, the art of model building consists of restricting the field of application of a model according to the assumptions and approximations made in the modeling process as well as to empirical findings that show its limits.
Examples abound, such that any blind sample – an arbitrary choice from a chemistry textbook – would reveal the obvious.22 In the previous section, I pointed out numerous ‘laws’ of physical chemistry, which would better be called models, that all have limited but important value once the limitation is acknowledged, and that together allowed the development of a consistent system of relative atomic weights, the proper theoretical goal of that period. In inorganic chemistry, various ‘theories’ – more correctly, theoretically guided concepts or models – of what acids and bases are compete with each other, such as those by Brønsted, Lewis, Pearson, and many others. Yet the competition is not about who is right or wrong, but about where exactly which model is more useful in explanations and predictions. Similar competitions exist between ligand field theory and crystal field theory in the chemistry of complex compounds; between the models of Freundlich, Langmuir, BET, and so on, in adsorption theory (see the sketch below); between collision and transition state theory in chemical kinetics; and between molecular orbital, valence bond, and density functional theory in quantum chemistry, all of which include a variety of further modeling approaches to deal with specific cases. Frequently, though not always, mechanical ideas are used in the modeling process, both as a starting point and as theoretical guidance in tailoring the model to particular cases and in estimating the errors of approximations.
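As a minimal illustration of such peaceful coexistence, the two oldest adsorption models can be written down in a few lines of Python; the parameter values are hypothetical and would in practice be fitted to measured isotherm data for a specific adsorbent and adsorbate:

import numpy as np

def langmuir(c, q_max, K):
    """Langmuir model: monolayer adsorption on energetically equivalent sites."""
    return q_max * K * c / (1 + K * c)

def freundlich(c, k, n):
    """Freundlich model: empirical power law for heterogeneous surfaces."""
    return k * c ** n

# Hypothetical parameters, as would be obtained by fitting to data.
c = np.linspace(0.01, 10.0, 50)        # solute concentration
q_L = langmuir(c, q_max=2.0, K=1.5)    # predicted loading, Langmuir
q_F = freundlich(c, k=1.2, n=0.4)      # predicted loading, Freundlich

# Both may describe the same data equally well at moderate concentrations
# and diverge at the extremes; the choice between them is pragmatic.

Neither model refutes the other; each has a domain of substances, conditions, and questions for which it is the more useful description.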
Rather than providing an endless list, I pick one example to illustrate further how methodological pluralism through the use of models works in chemistry. Since the mid-nineteenth century, organic chemists have developed classical chemical structure theory, which assigns to each compound a molecular structure based on its elemental composition and its chemical reaction properties. In this theory, a molecular structure is not simply a spatial arrangement of atoms, but an arrangement of so-called functional groups that represent the substance's chemical reactivities, which in turn are modeled by a growing set of standardized reaction mechanisms. The theory or model thus not only provides explanations and predictions of chemical properties; it also allows planning and guiding the chemical synthesis of hitherto unknown compounds. Indeed, tens of millions of new compounds have been predicted and synthesized by that model approach. In contrast, quantum chemical modeling of molecular structure provides a unique approach to the explanation and prediction of electromagnetic and many thermodynamic properties, but is still rather poor regarding chemical transformations. There is thus not only a theoretical division of labor, in which different kinds of properties are dealt with by different approaches. The case of chemical structure theory also illustrates that chemistry is about more than explanations and predictions. Theoretical concepts are developed and judged here also according to their potential for synthesis, a major activity of chemists for various, mostly non-technological, ends.23,24 Moreover, theoretical concepts are expected to provide a basis for the classification of the tens of millions of substances,25 which necessarily requires qualitative concepts that a nominalist approach cannot provide. The various subdisciplines of chemistry, from solid state chemistry to biochemistry, have developed dozens of different kinds of molecular models, each of which serves specific disciplinary needs. In sum, because chemistry has a variety of parallel goals, methodological pluralism by way of developing a variety of models is indispensable.
There are even more fundamental epistemological reasons for methodological pluralism. Methodological monism assumes that there is a Theory of Everything that perfectly describes the world, even though we do not know it yet. However, in chemistry (and probably in any science that experimentally deals with the real world and is thus constantly faced with its complexities) there are several fundamental limits of knowledge, of which I mention only one.26 Every concept of modern chemistry, both empirical and theoretical, is based on the notion of pure substances. Yet there are no pure substances in the material world, neither inside nor outside the laboratory, owing both to practical limitations of purification procedures and to thermodynamic reasons. Because even the smallest impurity can have, through catalytic effects, a strong impact on chemical properties, there will always be an uncertainty in any specific chemical statement that essentially differs from that of approximations. Such uncertainties can only be reduced by considerations of relevance: that under such-and-such conditions and for such-and-such questions a particular impurity in a given sample is irrelevant. However, once relevance considerations are included, which they are by necessity in this approach, there is no longer any chance for a Theory of Everything. While this might appear a weakness from the viewpoint of the received philosophy of science, it results only from acknowledging the principled limits of the experimental sciences, which theoretical speculations about the world or mathematical treatments of ideal systems can more easily ignore. On the other hand, once we understand that methodological pluralism is not a matter of philosophical taste but inevitable in the experimental sciences, we can appreciate it as a fully-fledged epistemology of science that comes with the advantage of enormous flexibility: if new fields of interest, new questions, or even severe problems with one of the current approaches arise, science can adjust flexibly.
In conclusion, we may summarize the methodological differences between laws of nature in the tradition of mathematical physics, on the one hand, and models in the experimental tradition of chemistry, on the other. These differences remain even if we ignore much of the original meaning of laws from the early modern mechanical philosophy, such as the religious connotation, their a priori status, and the strong nominalism. Laws are formulated with universal claims of truth, which can later be reduced by ceteris paribus conditions or extended by the reduction of other laws. Models are developed from the approximate description of exemplary cases and can be carefully extended to other cases only by modifications and sophistications that include parameters to cover their particularities. While a law is better the more universal it is, a model is improved by precisely calculating, testing, and limiting its intended realm of application with error estimates. There can be no two or more laws of nature competing with each other for long, because there is only one nature that any law tries to describe truthfully and completely. Different models for the same field of application, by contrast, can peacefully coexist and usefully complement each other, because they might employ different approximations or put a different emphasis on different kinds of questions and aspects. Both laws and models are comparable tools for explanations and predictions, but laws claim exclusive explanatory power, while models can explain only those aspects that they have been built to explain. Laws, if confronted with serious problems, have to be dropped altogether, resulting in discontinuities of science, whereas models can be flexibly adjusted or supplemented by new models. While laws are inherently reductionist in the sense of methodological monism, models are developed in the vein of methodological pluralism.
5. Conclusion
In an influential paper in the philosophy of chemistry, Maureen Christie once pointed out that in chemistry the term ‘law’ covers theoretical concepts of quite different epistemological status from those in the ‘advanced’ fields of (the philosophy of) physics.27 In conclusion, she recommended adopting a broader notion of laws in science that can also include the ‘laws’ of chemistry. While I agree with the observation, which she has further defended against criticism,19,28,29 and to which I have added more support above, I disagree with the terminological recommendation, for the three main reasons argued in the previous sections.
First, as pointed out in Section 2, for most of its history, the modern concept of laws of nature as developed by Descartes was strictly confined to mechanical laws of motion and embedded in the metaphysical assumptions of the mechanical philosophy, in particular, nominalism, God-given universalism, determinism, and mechanistic reductionism – none of which made sense in classical chemistry or any other science outside of mechanics for that matter. The concept was so tightly linked to the mechanical philosophy that it was literally impossible to transfer it to other fields, because part of the concept was that the mechanical laws of nature were unique.
Second, as shown in Section 3, none of the theoretical concepts that nineteenth-century chemists called laws complies with the original epistemological and metaphysical criteria. Instead, they were (theoretically guided) definitions, regularities with known exceptions, uncertain hypotheses, limiting laws or idealizations with hardly any real instance, and so on. Taking them today as ‘imperfect laws of nature’ would entirely misunderstand the theoretical, experimental, practical, and educational contexts in which they were used and still are useful today. Recent suggestions to extend the concept of laws of nature would result in the unacceptable consequences of either billions of chemical laws or mutually contradicting laws. The use of the term ‘law’ appears rather to have been a temporary fashion of the nineteenth century that soon faded, such that hardly any new law has explicitly been formulated in chemistry (and physics) since the early twentieth century, even though the unspecific expression ‘the laws of nature’ is still widely used today.
Finally, and most importantly, the concept of laws of nature derives from methodological and metaphysical ideas of science that do not fit modern chemistry. As I have argued in Section 4, chemistry (like probably all the experimental sciences) largely follows methodological pluralism, in which universal laws of nature or even a Theory of Everything cannot be the primary end of science. Instead, a multitude of models is used by necessity, depending on the specific subject matter and the kinds of questions asked, which derive from a variety of scientific goals that, besides prediction and various forms of explanation, also include classification and synthesis. Reintroducing the notion of ‘laws of nature’ would misunderstand the methodologically different tradition of chemistry and would inadequately develop the philosophy of chemistry after the model of mathematical physics.
Joachim Schummer is both a philosopher and a chemist. He received his PhD and Habilitation in philosophy from the University of Karlsruhe (KIT). Since 1995 he has been the editor-in-chief of Hyle: The International Journal for Philosophy of Chemistry. He teaches at universities only by invitation, most recently in Rio de Janeiro, Manila, Hannover, Bielefeld, and Bogotá. His latest books are on nanotechnology (Nanotechnologie: Spiele mit Grenzen, Suhrkamp, 2009), on synthetic biology (Das Gotteshandwerk: Die künstliche Herstellung von Leben im Labor, Suhrkamp, 2011), and on the purposes of science (Wozu Wissenschaft?, Kadmos, 2013).