
2 - Dethroning the Queen

Published online by Cambridge University Press:  10 June 2022

Nancy Cartwright, Durham University


Preface

Look at this list of Nobel Prize winners in science in 2020:

The Nobel Prize in Physics 2020: Roger Penrose ‘for the discovery that black hole formation is a robust prediction of the general theory of relativity’, Reinhard Genzel and Andrea Ghez ‘for the discovery of a supermassive compact object at the centre of our galaxy’

The Nobel Prize in Chemistry 2020: Emmanuelle Charpentier and Jennifer A. Doudna ‘for the development of a method for genome editing’

The Nobel Prize in Physiology or Medicine 2020: Harvey J. Alter, Michael Houghton and Charles M. Rice ‘for the discovery of Hepatitis C virus’

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2020: Paul R. Milgrom and Robert B. Wilson ‘for improvements to auction theory and inventions of new auction formats’.1

Only two are in physics, and Nobel is not all there is. There are a great many other prestigious prizes across the sciences. To name just a few, there’s the Wolf Prize in agriculture; the Lasker Award, the Canada Gairdner Award and the Wolf Prize in biomedicine; the Indianapolis Prize in conservation biology; the Stockholm Prize in criminology; the Vautrin Lud Prize in geography; the Johan Skytte Prize in political science; the Grawemeyer Award and the Kurt-Koffka Medal in psychology; the Dokuchaev Award in soil science. And you don’t have to win a grand prize to do work essential to the success of science.

So we all know from the start that the sciences are many and varied – very many and very varied. So, do philosophers really believe that there is at base physics and nothing but physics? Yes; not all philosophers, but many. Here’s what one philosophy piece has to say:

The Physical Facts Fix All The Facts

What is the world really like? It’s fermions and bosons and everything that can be made up of them and nothing that can’t be made up of them. All the facts about fermions and bosons determine or ‘fix’ all the other facts about reality and what exists in this universe or any other if, as physics may end up showing, there are other ones. Another way of expressing this fact fixing by physics is to say that all the other facts – the chemical, biological, psychological, social, economic, political, cultural facts – supervene on the physical facts and are ultimately explained by them. And if physics can’t in principle fix a putative fact, it is no fact after all.2

I’ll explain later what philosophers mean by ‘supervene’, but I’m sure you can get a sense of what’s intended by the rest of the quote.

As I noted in the Introduction, this strong reductionist view is thought to provide a longed-for unity to science. It's all the same really – because it is really all physics. For instance, a very well-known piece from the late 1950s lays out the 'working hypothesis' of the unity of science as follows:

6. Social groups
5. (Multicellular) living things
4. Cells
3. Molecules
2. Atoms
1. Elementary particles

Each level is related as parts (below) to wholes (immediately above), with ‘micro-reductions’ hypothesized to obtain between theories explaining phenomena at a lower and an immediately higher level.3

I think this image of science is badly mistaken. There is scant evidence that physics can do anything like that. She might have a little to say about almost everything. But that is a far cry from getting to dictate just what happens. I urge that not only does she not dictate everything everywhere but that she is not even entirely queen in her own realm. If we want to produce an accurate explanation or description or prediction even of outcomes that fit squarely in the domain of physics, we never manage by using ‘the facts of physics’ alone – that is, true propositions using just physics terms.

I realise that many believe that we could do so if only we were more clever and had more time, more computing ability, more research funds, etc. But what warrants this big 'if only' claim that flies in the face of what we steadfastly experience? My job here is to look at science; I take it that means real science – science as it actually happens – not science as the outcome of imaginative speculation about what it would be if only … . But beyond that, I am hugely sceptical about these 'if only' claims. Why think that all of science would finally be shown to be deeply and essentially dependent on the rule of physics if only we did physics better? We have been doing physics better and better for a long time now, and the need for help from other sciences continues to be substantial.

I go into more detail in the section 'How Philosophers Talk about Reduction'. But the basic point is this. The Concise Oxford Dictionary tells us that an autocracy is 'a system of government by one person with absolute power'. I can't see physics as an autocrat even in her own domain. When you look at science as it is practised in dealing with real concrete physics outcomes, physics does not reign supreme and by herself. Rather, in getting it right about the real empirical world we live in, she works as part of a motley assembly.

And getting it right is more than just getting certain facts, particularly facts about physics, right. I’ll start with a bit of the story about how we got to the idea of physics as autocratic queen. It starts with the Scientific Revolution of the seventeenth century. After that I clarify a few philosophical terms that crop up in these discussions and then look at the arguments that support the idea of the physics takeover.

The Mechanical Philosophy and Its Legacy

The doctrine of the complete rule of physics over everything is not new. It was willed to us by the Scientific Revolution and has been a strong influence on our understanding of nature ever since. As historian of science John Henry underlines in his book The Scientific Revolution and the Origins of Modern Science:

By the end of the [seventeenth] century, the mechanical philosophy had effectively replaced scholastic Aristotelianism as the new key to understanding all aspects of the physical world, from the propagation of light to the generation of animals, from pneumatics to respiration, from chemistry to astronomy. The mechanical philosophy marks a definite break with the past and sets the seal upon the Scientific Revolution.4

What is the Mechanical Philosophy?

All phenomena were to be explained in terms of concepts employed in the mathematical discipline of mechanics: shape, size, quantity and motion … The mechanical philosophy saw the workings of the natural world by analogy with machinery; change was brought about by (and could be explained in terms of) the intermeshings of bodies, like cogwheels in a clock, or by impact and the transference of motion from one body to another.5

Hence the clockwork universe that we saw discussed in the Guardian article in the Introduction.

A good two millennia before the Scientific Revolution, Plato, in his allegory of the cave, made a sharp distinction between appearance and reality. He then trapped humanity firmly in the realm of appearance, all except the philosophers that is – philosophers can miraculously see things as they really are! The allegory involves people chained in a cave watching shadows cast by a fire, shadows of real things beyond their view. Everything they see and experience is a fuzzy shadow of the clear, clean, sharply defined objects that really exist. The Mechanical Philosophy endorses just such a divide:

A distinction was made between what were considered to be the real properties of bodies (size and shape, motion or rest) and merely secondary qualities, caused by the former, such as colour, taste, odour, hotness or coldness and the like … . The idea was that something like vinegar, for example, did not have a real quality or property, which was its ‘taste’, but its constituent particles were sharp and penetrating and pricked the tongue, so seeming to give it its acidic taste.6

For Plato, the reason that we are trapped in appearances and do not see the world of reality responsible for them is humanity’s inability to be true philosophers, to embrace the contemplation of the natural sciences, mathematics, geometry and deductive logic and the theory of the forms – those clear, clean entities that exist outside the cave somewhere in Platonic heaven. For the Scientific Revolutionaries instead, it is due to humanity’s original fall from grace. Before eating the forbidden fruit from the tree of knowledge of good and evil, all was fine. The influence of the moon upon the tides was no mystery in Adam’s philosophy, according to the mid-seventeenth-century philosopher, clergyman and ‘under-labourer’ to the Mechanical Philosophy, Joseph Glanvill.7 This echoes Francis Bacon, who is famous for early on championing the inductive method in science. For Bacon, we are led astray by a variety of errors of mental processing, which he called ‘idols of the mind’.

So, for whatever reasons, although the world is fundamentally all physics, what appears to us is not. The topics studied in all the other sciences, just like free will as discussed in the Guardian article, are all really just physics when looked at with the unclouded eyes of Plato’s philosopher.

Physicalism and Materialism

The doctrine of the universal rule of physics is a natural follow-on from an even more popular philosophical doctrine, one also shared by many scientists: physicalism or materialism. Here I will use the two words interchangeably. Physicalism is the thesis that everything is physical, at least in the sense that once all the physical facts are fixed, everything is fixed.

In the Guardian article, we see that neuroscientists are among the scientists who subscribe to materialism. ‘Now that it’s possible to observe – thanks to neuroimaging – the physical brain activity associated with our decisions, it’s easier to think of those decisions as just another part of the mechanics of the material universe, in which “free will” plays no role’.8 Although imaging the active brain is a recent development – fMRI (functional magnetic resonance imaging) studies began to proliferate in the 1990s – the notion that we are identical to our brains is nothing new in the mind and brain sciences. The historian of science Fernando Vidal traces the idea back centuries.9 In the nineteenth century, for instance, phrenologists and pathological anatomists came to localise different mental faculties to different brain regions. Perhaps the most famous case is the French anatomist Paul Broca localising language to a circumscribed area in the frontal lobe.

With the birth of the modern ‘neurosciences’ this reductionism became more extreme. As the sociologist Nikolas Rose and historian Joelle M. Abi-Rached argue in their book Neuro, it was during the 1960s that a new understanding of the brain emerged, which they call the ‘neuromolecular gaze’. As they write, ‘the structure and processes of the brain and central nervous system were made understandable as material processes of interaction among molecules in neurons and the synapses between them’.10 The mind could be reduced not just to different regions of the brain but to their component parts. For one illustration of this kind of approach, consider the work of Eric Kandel, which was awarded a Nobel Prize in Physiology or Medicine in 2000. In his book In Search of Memory, Kandel recounted his planning of experiments on associative learning in the sea slug Aplysia and its ‘neural analogs’:

In 1961, Robert Doty at the University of Michigan in Ann Arbor made a remarkable discovery about classical conditioning. He applied a weak electrical stimulus to a part of the dog’s brain governing vision and found that it produced electrical activity in neurons of the visual cortex, but no movement. Another electrical stimulus applied to the motor cortex caused the dog’s paw to move. After a number of trials in which the stimuli were paired, the weak stimulus alone elicited movement of the paw. Doty had clearly shown that classical conditioning in the brain does not require motivation: it simply requires the pairing of two stimuli.

This was a big step toward a reductionist approach to learning, but the neural analogs of learning that I wanted to develop required two further steps. First, instead of conducting experiments in whole animals, I would remove the nervous system and work on a single ganglion, a single cluster of about two thousand nerve cells. Second, I would select a single nerve cell – a target cell – in that ganglion to serve as a model of any synaptic changes that might occur as a result of learning. I would then apply different patterns of electrical pulses modeled on the different forms of learning to a particular bundle of axons extending from sensory neurons on Aplysia’s body surface to the target cell.11

For the philosopher John Bickle, neuroscientists who claim to explain behaviour in terms of cells and molecules are not just reductive. They are ‘ruthlessly reductive’. They purport to bypass, not only theoretically but experimentally, the levels that supposedly lie in-between the cellular/molecular and the behavioural. Bickle does not even privilege the cellular or molecular level. ‘How low can you go?’ he asks. Here again, we find one of the common images of science – the pyramid where everything is made of the bricks of physics – that I introduced at the beginning of the book. ‘Presumably, molecular biology reduces to biochemistry, biochemistry to general chemistry, and general chemistry to physics’, Bickle writes. If neuroscientists continue down the road of ruthless reductionism, he argues, then the ‘next step is to “intervene biophysically” … and “track behaviorally”’.12

As the Guardian article and the work of these scholars collectively suggest, the physicalism of neuroscientists knows no bounds. If free will is an illusion, then so too are ‘diseases of the will’. Writing in the American Journal of Psychiatry in 2010, the former director of the National Institute of Mental Health, Thomas Insel, announced along with his co-authors the launch of the new classification framework for mental disorders, titled the Research Domain Criteria (RDoC) project:

RDoC classification rests on three assumptions. First, the RDoC framework conceptualizes mental illnesses as brain disorders … Second, RDoC classification assumes that the dysfunction in neural circuits can be identified with the tools of clinical neuroscience, including electrophysiology, functional neuroimaging, and new methods for quantifying connections in vivo … Third, the RDoC framework assumes that data from genetics and clinical neuroscience will yield biosignatures that will augment clinical symptoms and signs for clinical management.13

This may not yet cover all the social sciences. But if not justice, democracy, money and institutional inertia, at least severe anxiety and obsession will turn out to be physical. Or so they envision.

How Philosophers Talk about Reduction

A chief motivation in philosophy for the view that there really is nothing but physics is to solve the problem of Eddington’s two tables, named after Arthur Eddington, whose tract The Nature of the Physical World begins thus:

I have settled down to the task of writing these lectures and have drawn up my two chairs to my two tables. Two tables! Yes …

One of them has been familiar to me from earliest years. It is a commonplace object of that environment which I call the world. How shall I describe it? It has extension; it is comparatively permanent; it is coloured; above all it is substantial … .

Table No. 2 is my scientific table … . My scientific table is mostly emptiness. Sparsely scattered in that emptiness are numerous electric charges rushing about with great speed; but their combined bulk amounts to less than a billionth of the bulk of the table itself.14

So, one and the same physical object is both a macroscopic table and a conglomerate of microscopic atoms and molecules. By virtue of being a macroscopic table, it is subject to the macroscopic laws of chemistry, of elasticity, of stress; by virtue of its micro-constitution it is subject to the laws of nuclear physics, of electron–proton interactions, etc. Some guarantee of consistency is required. What is to prevent macroscopic laws from moving the table one yard to the left while all of its molecular components, following the laws of microphysics, move thirty millimetres to the right? ‘It’s all physics really’ provides an answer. All laws, including all the macroscopic ones, are deducible from the laws of microphysics, and hence no incompatibility is possible. Reductionism thus has a powerful argument in its favour: it solves the consistency problem in a simple and unified way.

But just what do philosophers mean when they say, ‘it’s all physics really’? Can this really be defended? And does it really solve the problem of consistency suggested by the existence of Eddington’s two different but related tables? Over the decades philosophers have meant a number of different things, each weaker than the one before. I’m going to sketch out here this gradual retreat since it shows up good reasons for thinking that in the end it just isn’t all physics really.

Type-Type Reduction

The classic discussion of scientific reduction in philosophy is by the American philosopher Ernest Nagel, one of the giants of the logical empiricist movement. According to Nagel, ‘[a] reduction is effected when the … laws of the secondary [i.e. reduced] science … are shown to be the logical consequences of the theoretical assumptions … of the primary [i.e. reducing] science’.15

Nagel mapped out what became the exemplar of theory reduction in philosophy, the reduction of the Boyle–Charles law to statistical mechanics:

Boyle–Charles law: PV = nRT.

Here P is pressure, V is volume, n is the number of moles, R is the universal gas constant and T is temperature. This law is supposed to hold in ideal gases, ones where all collisions between molecules are perfectly elastic and there are no intermolecular attractive forces. No actual gases meet these requirements, but many – nitrogen, oxygen, hydrogen, the noble gases, heavier gases like carbon dioxide and mixtures such as air – come close enough for many purposes.
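
To see the law at work, here is a standard back-of-the-envelope check (textbook values, not part of Nagel's discussion): one mole of gas at 273 K confined to 22.4 litres should exert a pressure of

\[ P = \frac{nRT}{V} = \frac{(1\,\mathrm{mol})(8.314\,\mathrm{J\,mol^{-1}\,K^{-1}})(273\,\mathrm{K})}{0.0224\,\mathrm{m^{3}}} \approx 1.0 \times 10^{5}\,\mathrm{Pa}, \]

that is, about one atmosphere – which is very nearly what nitrogen, oxygen and air in fact do.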

There are two notable ingredients in the reduction. The first is the use of a statistical assumption like one introduced by James Clerk Maxwell in 1867, called the molecular chaos hypothesis – which I was taught to call by its German name, Stosszahlansatz, by Adolf Grünbaum, my wonderful philosophy of science instructor at the University of Pittsburgh. It is the assumption that the velocities of colliding particles are uncorrelated and independent of position. This means that the probability that a pair of particles will collide can be calculated by considering each particle separately, ignoring any correlation between finding one particle with velocity v and finding another with velocity v′ in a small region. This probabilistic assumption is not a part of classical mechanics but must be added on; hence the reduction is not to mechanics but to 'statistical mechanics'.
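
In symbols (a standard modern formulation, not Maxwell's own notation), the hypothesis says that the joint distribution of the velocities of two about-to-collide particles factorises into a product of single-particle distributions:

\[ f_2(\mathbf{v}, \mathbf{v}') = f(\mathbf{v})\, f(\mathbf{v}'), \]

so the chance of finding a colliding pair is computed as if each velocity were drawn independently, with no correlations and no dependence on position.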

The second arises from a point of logic. You can’t deduce conclusions that use concepts that don’t appear in the premises. In this case the premises are about the mechanical features of molecules and the conclusion is about macroscopic features of a gas. The solution is to make some identifications. So we say: the volume of the gas is the volume the molecules occupy, the pressure at an instant is the average of the instantaneous momenta transferred from the molecules to the walls and the temperature is the mean kinetic energy of the molecules.
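
A compressed version of how the derivation then runs (standard kinetic theory, stripped of Maxwell's statistical refinements): for \(N\) molecules of mass \(m\) in a container of volume \(V\), summing the momentum the molecules transfer to the walls gives

\[ PV = \tfrac{1}{3} N m \langle v^{2} \rangle, \]

and setting this against the empirical law \(PV = N k_{B} T\) (since \(R = N_{A} k_{B}\)) forces the bridge principle

\[ \tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k_{B} T, \]

identifying temperature with mean molecular kinetic energy.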

This results in what philosophers call ‘type-type’ reduction. Features (types) in the secondary science are identified with features (types) in the primary science. And identification is taken literally in order for the derivation to go through: temperature is mean kinetic energy, pressure is average momentum transferred. But this is puzzling. Temperature sure doesn’t seem like kinetic energy. As Nagel remarks: ‘The primary science thus seems to wipe out familiar distinctions as spurious and appears to maintain that what are prima facie indisputably different traits of things are really identical.’16 Philosophers have never offered a really satisfying understanding of this. I note this here because my notion of the dappled world that I elaborate in Chapter 3 does not homogenise everything into one uniform lot but allows that the rich diversity that we seem to see in nature can in fact be genuine.

These links between the primary and secondary concepts that we see in type-type reductions came to be called 'bridge principles'. They are part of the reason for the fall from grace of this kind of reduction in philosophy. Finding types in different sciences to identify with one another turned out to be incredibly difficult. The other big reason is connected: examples of these kinds of reductions are concomitantly hard to find, even in the case of chemistry, which I'll discuss in the section 'Why Chemistry Isn't All Physics'. For now, let us proceed with a review of the increasingly weaker kinds of reduction that philosophers have endorsed.

Token-Token Reduction

The idea here was borrowed from the philosophy of mind, where a great many philosophers, and some neuroscientists too, wanted to identify psychological states, like pain, with brain states or some other physical state, which I discussed in the section 'Physicalism and Materialism'. One of the things that seemed to stand in the way was something that got labelled 'multi-realisability': the very same psychological state can correspond to – 'be realised by' – different physical states. The physical state of a hamster in pain will surely be very different from yours or mine, though the experience of pain may well be the same. If so, there's no mapping of types of psychological states onto types of physical states. Nevertheless, each and every single occurrence of a mental state – each token of a pain or fear or of contentment – may be at one and the same time a token of some physical state. No occurrence of a mental state without some concomitant physical state. Or so the doctrine goes.

This was soon extended to the relations between all of what Nagel called ‘secondary sciences’ and physics. On this view, we can continue to maintain that every event that falls under a law of one of the secondary sciences – like chemistry or economics – is also a physics event. Every token of a table is a token of some collection of atoms and molecules. Nevertheless, secondary-science laws may not reduce to physics laws because secondary-science properties may not be identical with physics properties. Eddington’s two-tables problem is generated by supposing that the distant spacing of the molecules in the scientific table implies that the table does not have the property of solidity so that the micro laws that induce the spacing will dictate behaviour incompatible with well-known macro regularities – like that the table will keep my teacup from falling to the floor. We avoid the problem if the micro property that appears in the laws of physics does not map on to any macro property like a failure of solidity. We may accept that occasion by occasion, that is, token by token, an object’s macro and micro states are identical. But in general no micro and macro types are identical. So: as long as the properties which are governed by secondary-science laws are not identifiable with (or definable in terms of) the properties that are governed by physics laws, the two sets of laws cannot generate incompatible predictions.

This does not really imply that no inconsistencies can arise though. It is still possible that given the token special-science characteristics this table has right now (for instance, it stands in front of a fire door and staff have been instructed to get it out of the way), the table is moved one yard to the left by special-science laws (e.g. the widely relied-upon principle, 'employees follow clear instructions required by their jobs unless unable to or faced with strong incentives to the contrary') while all of its molecular components, following the laws of microphysics, move thirty centimetres to the right. This was mostly not thought to be a problem to take seriously. For instance, the laws of economics govern things like monetary exchanges; and it is, argued philosopher of mind Jerry Fodor, 'wildly implausible' to suppose that just by identifying every monetary exchange with some physical event or another one could generate an inconsistency between the laws of physics and Gresham's law or the law of marginal efficiency of capital.

But it is not as wildly improbable as may be thought. Token-token reduction is not guaranteed to rid us of the spectre of inconsistency. Even trying to stick just with token-token relations, some type-type relations are inevitable. For example, the macro table and the micro systems that compose it must occupy roughly the same space at the same time. Given sufficient detail in the theories on both sides, even a few type-type identifications like this may force new type-type identifications not initially intended or recognised. With these new type-type identifications, surprising and unexpected laws of one theory can be generated from another, and these laws may contradict ones which are already a part of the other theory. I can illustrate with an example from one of the original papers in kinetic theory, by James Clerk Maxwell.

At the beginning of his 1860 paper, 'Illustrations of the dynamical theory of gases', Maxwell makes a token-token identification familiar from Nagel's paradigm of reduction, the Boyle–Charles law (discussed in the section 'Type-Type Reduction'): every token of an ideal gas is identical to some collection of perfectly elastic molecules interacting only on impact. Maxwell describes his paper as 'an exercise in mechanics'. The first propositions deal entirely with the mechanical behaviour of a collection of perfectly elastic spheres. Nevertheless, Maxwell derives a macroscopic law about the viscosity of a gas from the laws governing the mechanics of the molecules. What is remarkable about the derivation is that it seems to employ no type-type identifications.

In fact, Maxwell does make type-type identifications. But they easily go unnoticed. Given Maxwell’s token-token identification, he had no choice but to make these. They involve purely mechanical and spatio-temporal properties. The total mass of the gas, for instance, is taken to be the same as the total mass of the molecules that constitute it. From these trivial identifications, however, in combination with his derived laws of the molecular theory and well-established empirical gas laws, Maxwell derives a law involving the novel macroscopic property of viscosity: the viscosity of a gas is independent of its density.

Maxwell found this ‘a curious result’ which might completely refute the kinetic theory of gases. Viscosity is a measure of the stickiness of a fluid; honey, for example, is more viscous than water. He expected that a gas should get more and more viscous as it got denser and denser; and this was apparently supported by the little data available then. Maxwell, however, performed his own experiments and showed that the coefficient of viscosity is indeed independent of density, over a wide range of pressures, as he predicted. But the point stands: here is a case in which mere token-token identification allows the derivation of laws in a different science, which not only may, but were actually thought to, contradict existing laws in that science.
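
Where the independence comes from can be seen in a back-of-the-envelope version of the kinetic-theory result (a simplification of Maxwell's actual derivation):

\[ \eta \approx \tfrac{1}{3}\, \rho\, \bar{v}\, \lambda, \qquad \lambda = \frac{1}{\sqrt{2}\, \pi n d^{2}}, \]

where \(\rho = nm\) is the mass density, \(\bar{v}\) the mean molecular speed, \(\lambda\) the mean free path, \(n\) the number density and \(d\) the molecular diameter. Since \(\lambda\) varies as \(1/n\) while \(\rho\) varies as \(n\), the density cancels: a denser gas has more momentum carriers, but each carries its momentum a proportionally shorter distance between collisions.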

The example points out the need to look at every case in detail. Token-token identifications, to do their job, do sometimes imply type-type identifications, and type-type identifications can more readily than expected lead to possible inconsistencies.

Supervenience

Token-token reduction soon gave way to supervenience, which you saw mention of in the Preface to this chapter. As the Stanford Encyclopedia of Philosophy explains:

A set of properties A supervenes upon another set B just in case no two things can differ with respect to A-properties without also differing with respect to their B-properties. In slogan form, ‘there cannot be an A-difference without a B-difference’.17
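
Spelled out semi-formally (a standard gloss on the slogan, not the Encyclopedia's own symbolisation): A supervenes on B just in case, for any two things x and y,

\[ \big(\forall P \in B :\ Px \leftrightarrow Py\big) \;\rightarrow\; \big(\forall Q \in A :\ Qx \leftrightarrow Qy\big), \]

that is, if x and y agree on every B-property, they must agree on every A-property.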

Token-token reductions got left behind not primarily because they can imply type-type reductions and thereby lead to inconsistencies, as I have just illustrated – which I take to be the serious scientific problem. In fact, this was not widely recognised. Rather, the primary reason in the philosophical literature was because token-token reductionism seems to merge into type-type reductionism using a simple philosophical tactic for making up new types out of old. Just take a gigantic disjunction of all the token physics states that in real fact correspond to any token of a secondary-science state – like monetary exchange – and give it a label. Presto, we have a new physics type to correspond with that secondary-science type. It may seem complex from the point of view of the initial physics states but that does not detract from the formal point that this enables type-type reduction. You may well want to object, as some philosophers do, that it is not a proper physics type, but that will take you into a metaphysical maze of philosophical argument about what is and is not a real or a proper type, far from our look at what science is like.

Of course, whether we are talking about type-type or token-token reductionism, neither would enable us to reduce the secondary sciences to physics since there is no possibility that we could ever know the corresponding bridge principles. So, when it comes to science – which is what we are primarily looking at in this book – it is not all physics really. Nor would many philosophers and scientists defend that it is. More generally, what is supposed is that in principle reduction of all science to the – ultimate and totally complete and correct – science of physics must be possible. And why is that? What’s happened here is that we have switched from talking about science to talking about the world, not about what can be done in real science but rather about how the world operates, where surely physics rules. If we are going to do that, philosophers seem to have concluded, supervenience is the easier and more plausible view to defend: every other feature that we see in nature supervenes on physics states.

What, though, argues that this is such a plausible view? It is hard to find detailed arguments for this in philosophy. The philosophical literature is much more concerned with figuring out just what supervenience might be like, not why we should believe in it.

The philosophical discussion of the relation between the features of the secondary sciences and those of physics tends to parallel that of the relation of the mental to the physical that I discussed in the section 'Physicalism and Materialism'. There, what came to be the canonical view was succinctly expressed in 1970 by Donald Davidson, who was one of the most influential analytic philosophers of the twentieth century:

[M]ental characteristics are in some sense dependent, or supervenient, on physical characteristics. Such supervenience might be taken to mean that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.18

I take it to be uncontroversial that mental characteristics can be causally affected by physical characteristics. But that is a far cry from saying they are entirely set once physical – or physics – characteristics are set. They might also be causally affected by other mental characteristics as well as by socio-economic ones, and they might in turn in part affect various socio-economic and physical features, much as we picture them doing both in everyday explanations and in serious scientific enquiries as well, for instance when negligence enters as part of the explanation of a system failure.

As with claims about supervenience of everything on physics characteristics, so too with claims of the supervenience of the mental on the physical: there’s a paucity of solid detailed arguments in favour. Those endorsing this view seem for the most part to have assumed three broad lines of defence. The first is a leap from the fact that the physical can affect the mental to the conclusion that the physical always fixes the mental. The second is a quick generalisation from a handful of cases in which a reduction has putatively been established. The third is sometimes called ‘the argument from methodological naturalism’, which, as the Stanford Encyclopedia of Philosophy describes, goes like this:

The first premise of this argument is that it is rational to be guided in one’s metaphysical commitments by the methods of natural science … . The second premise of the argument is that, as a matter of fact, the metaphysical picture of the world that one is led to by the methods of natural science is physicalism. The conclusion is that it is rational to believe physicalism, or, more briefly that physicalism is true.19

The first premise is one I adopt throughout this book since I am looking at science and then at what picture of nature science guides us to adopt. It is the second premise that my look at science picks out as faulty. Successful scientific explanations and predictions of real concrete things in the real world almost always picture a mix of different causes that are studied in different scientific disciplines and sub-disciplines and sub-sub-disciplines, acting together in a variety of different ways. The fact that these explanations and predictions are often so successful most immediately suggests that that's what's happening in the world too: that, not physicalism, is the correct metaphysical picture of the world. But I won't pursue that here because it is the central topic of Chapter 3.

Grounding

As I write this in the summer of 2021, philosophers are discussing not so much supervenience as grounding. Grounding is stronger than supervenience. To say that the facts of physics ground those of the secondary sciences is not just to say that the economics, chemistry and biology features, along with all those studied in the other secondary sciences, are fixed once the physics ones are fixed (in the supervenience sense, that the secondary features are always the same if the physics features are the same). If these are grounded in the physics, then the physics features are in some way responsible for their being as they are. Grounding is a causal or generative notion. The character of the economics and chemistry features is due to that of the physics.

Philosophers are at great pains to figure out more about what this ‘due to’ relation could consist in. One option is that it involves causation of some kind. I am happy to go with that since I think it is true to the science. We often picture physics characteristics as part of the causes of what features from economics, chemistry, biology etc. are like. What is not true to how I see science working when I look at its practices are two further assumptions philosophers are prone to make.

First is that the physics features are the sole cause. I’ve already noted that where physics characteristics enter, our models of real cases generally picture the features physics describes as working in cooperation with other kinds of features to bring about the effects modelled, even if the rhetoric about what is going on gives top billing to the physics. That’s illustrated in the example of the Millikan experiment which I described in Chapter 1. Later in this chapter, I provide another extended example – the Stanford Gravity Probe B – that I introduce specifically for this purpose.

Second is the asymmetry. Physics features are supposed to cause all the rest but never the reverse. This is another aspect of the privileging of physics: the idea that in the end, in ideal, true science, it is only physics that matters. But again, our detailed models that predict and explain physics' very own features generally include a variety of causes that are not in the domain of physics. It is true that in the equations of physics, only physics causes appear for physics effects. But these equations are not models of real concrete systems in the real world, as I lay out in my extended discussion of the Stanford Gravity Probe B test of the general theory of relativity. We use the equations to help us construct the models, but we use much else as well. Just how we should think about these equations given this fact and what we might take them to tell us about nature is a topic I take up in Chapter 3.

The point I want to underline with my two physics examples – the Gravity Probe B and the Millikan experiment – is that even physics is not all physics. Before turning to the Gravity Probe B let us consider a more conventional issue – is chemistry really all physics?

Why Chemistry Isn’t All Physics

In 1929, not long after the inception of quantum mechanics, famous theoretical physicist Paul Dirac made a comment that set the tone for ensuing discussions about the relation of chemistry to physics:

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.20

Dirac was making the claim, which became commonplace but, as I see it, without good arguments to support it, that the whole of chemistry can be reduced to physics. Such a claim has recently been challenged by philosophers of chemistry.

Generally there are two anti-reductionist approaches to chemistry. The first denies ab initio that chemistry can be reduced to quantum mechanics, or to any other branch of physics for that matter. That's because chemistry is a science of substance and its transformation and thus has its own unique classificatory and methodological features that make it irreducible to physics. The second approach does not deny reduction outright. Instead, it takes the question of reduction of chemistry to physics to be an empirical one that should be neither dogmatically asserted nor dogmatically denied but is rather to be determined in practice. Many philosophers of chemistry in the latter camp hold, contrary to popular belief, that the success of quantum physics in explaining certain chemical phenomena does not give good reason for the reduction of chemistry to physics. Before we tackle that issue head-on, let us consider some assumptions that must be in place prior to any reduction claim.

First, if the reductionist claim is that the whole of chemistry is reducible to physics, then this assumes that there are clear disciplinary boundaries between physics and chemistry. This is important because if we are claiming that a certain phenomenon which properly belongs to chemistry is explained by, and as such reduced to, physics, then we need to be sure that this phenomenon properly belongs to chemistry and does not already belong to physics. But a brief glance at the history of science reveals that the boundaries between these two disciplines have been fluid. As philosopher and historian of chemistry Hasok Chang notes:

Some topics that are very important for the identity of each discipline have shifted between the two. Two hundred years ago heat and electricity were clearly chemical subjects (and even chemical substances), even though they were also treated by physics to the extent that there was such a thing as ‘physics’. Atoms had to be made respectable in chemistry first through a long struggle in the nineteenth century, before physicists could begin to find useful ways of engaging with them.21

Another point is that when we think of the reduction of chemistry to physics, we seem wrongly to assume that physics itself is unified. But this is not so. Few branches of physics are reduced to fundamental physics, and if physics is unsuccessful in achieving its imperialist goal within its own domain, why should we be optimistic about its prospects of doing so in other domains? This is taken up further in the next section.

These points, the reductionist may argue, only make reduction a more complicated business but do not fundamentally challenge the overall project. In the context of chemistry, the anti-reductionist is beating a dead horse: the reduction of chemistry to physics has already been achieved, particularly with the advent of quantum chemistry. Anti-reductionist philosophers of chemistry argue in reply that this claim is at best naïve and at worst dogmatic. In what follows I'll outline where they see the reductionist account as going wrong.

Quantum chemistry can be broadly characterised as the branch of science that uses quantum mechanics to answer chemical questions. The reduction of chemistry to quantum mechanics is understood through quantum mechanics’ use of the Schrödinger equation to describe the chemical properties of atoms and molecules. For reduction to take place we should be able to derive the properties of atoms and molecules from their Schrödinger equation. Yet, as many philosophers of chemistry point out, it is seldom the case that this strict demand of classical reduction is met. The problem for the reductionist, as my Durham colleague philosopher of chemistry Robin Hendry notes, is that the explanatory models used in quantum chemistry only bear a loose relationship to exact atomic and molecular Schrödinger equations. There are a few situations where these demands are met, namely where there are exact analytical solutions to the Schrödinger equation, as in the case of the hydrogen atom and other one-electron systems. But, Hendry argues, these ‘are special cases on account of their simplicity and symmetry properties’.22

For the more common cases that do not have analytic solutions, solving the Schrödinger equation involves a battery of approximate methods and models. And these methods and models do not come from physics. Rather, their building and calibration are founded on presuppositions that belong to classical chemistry. For instance, Hasok Chang explains:

The typical method of quantum-mechanical treatment of molecules begins with the Born–Oppenheimer approximation, which separates out the nuclear wavefunction from the electronic wavefunction (Ψtotal = Ψnuclear × Ψelectronic). Additionally, it is assumed that the nuclei have fixed positions in space.23

This approximation treats the atomic nucleus as a classical particle. But this fundamentally violates quantum mechanics, which, following the Heisenberg uncertainty principle, maintains that we cannot have a simultaneous assignment of fixed positions and fixed momenta. The approximations that provide the reduction violate the very theory that the chemistry is being reduced to. This wouldn't count against the reductionist claims if we could suppose that this, and similar, classical approximations can be understood as 'mere' approximations which can in principle be done away with. But that's not so. For these provide the necessary stage-settings that make the quantum calculations possible. Many other examples can be given, but the point, I hope, is clear. If the success of quantum chemistry relies fundamentally on assumptions that belong to classical chemistry, then it makes no sense to claim that chemistry has been reduced to quantum mechanics.
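
A schematic of the move Chang describes, under the usual textbook simplifications: the full molecular problem has a Hamiltonian containing both nuclear and electronic kinetic energy,

\[ \hat{H} = \hat{T}_{\mathrm{nuc}} + \hat{T}_{\mathrm{el}} + V(\mathbf{r}, \mathbf{R}), \]

and the Born–Oppenheimer step drops \(\hat{T}_{\mathrm{nuc}}\) and demotes the nuclear coordinates \(\mathbf{R}\) from quantum observables to fixed classical parameters, leaving an electronic equation

\[ \hat{H}_{\mathrm{el}}(\mathbf{R})\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R}) = E_{\mathrm{el}}(\mathbf{R})\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R}) \]

to be solved separately for each nuclear arrangement. It is exactly this assignment of definite nuclear positions that the uncertainty principle forbids for a genuinely quantum system.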

Even Physics Isn’t All Physics

Several reasons argue for this conclusion. The first is that physics itself is a hotch-potch of different sets of theories and practices, and the many branches of physics themselves are not reduced to the most fundamental physics. Second, what are taken to be the most fundamental theories in physics are not unified with each other. And third, to produce physics-based technology or physics-based explanations or predictions about real concrete things in the real world, physics must work together with a great many other sources of knowledge and understanding.

For the first, consider these remarks sent to me by Mark Harris after we discussed reductionism at a recent conference. Harris is a physicist well known for his discovery of ‘spin ice’, which is currently a major research area in the physics of magnetism. He is now Professor of Natural Science and Theology at Edinburgh University:

[C]ondensed-matter physicists are not so ardently reductionist as other physicists. That’s partly because we’re basically interested in emergent phenomena, but also because we don’t have a single body of theory that unites our area (except for the basic quantum formalism). Methodologically, we pick and choose from all kinds of areas from within classical physics (thermodynamics, at any rate) and QM [quantum mechanics], since our area overlaps with chemistry, materials science, mineralogy, molecular biology, etc etc. And it’s not clear to us that there ever will be a fundamental body of law that captures all of condensed-matter physics. These points, I would say, mark us out as quite different from the other large areas of physics.24

As to the second, consider two of our most prized fundamental physical theories, quantum mechanics and general relativity.

Quantum mechanics is the fundamental theory that deals with matter and light at the atomic and subatomic levels. In particular, the theory aims to describe and explain the properties of molecules, atoms and their constituents (e.g. protons, electrons and quarks). The general theory of relativity – which the Gravity Probe B experiment I describe in this section sets out to test – is the theory concerned with ‘gravity’, one of the fundamental forces in the universe. We use it to predict and describe large-scale physical phenomena. Unlike Newtonian mechanics, which takes gravity to be a force acting at a distance, relativity takes gravity to be a geometric phenomenon that arises from the curvature of space-time. Both of these theories have proven to be very successful. The problem is that they are incompatible.

The general theory of relativity teaches that space-time curvature is constrained by the presence of mass and other forms of energy. The relation between the space-time curvature and the mass and energy is captured by Einstein’s field equation. The incompatibility arises from the fact that in general relativity mass and energy are treated in a purely classical fashion – physical quantities like the strengths and directions of various fields and the positions and momenta of particles have definite value. The problem is that our fundamental theories of matter and energy are all quantum theories that include the Heisenberg uncertainty principle. That’s a principle that, as I noted in discussing why chemistry isn’t all physics, denies that such quantities have definite values and thus outright rejects the classical picture offered by many theories in physics that we use to very good effect every day, including the general theory of relativity.
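
For reference, the field equation (ignoring the cosmological constant) reads

\[ G_{\mu\nu} = \frac{8 \pi G}{c^{4}}\, T_{\mu\nu}, \]

with the space-time curvature encoded in \(G_{\mu\nu}\) on the left and the distribution of mass and energy in \(T_{\mu\nu}\) on the right. The incompatibility sits on the right-hand side: there \(T_{\mu\nu}\) takes definite classical values, which is just what quantum theory denies.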

This incompatibility is well known and theoretical physicists are hard at work trying to solve it by developing a theory of quantum gravity. But that is still a theory under construction; there’s more than one version of it on offer and all are far from having convincing empirical tests. This is not to say that unification is not possible. We may succeed eventually – but then again we may not. It’s no good just asserting that physics is all one thing.

Fortunately, physics does not need this unification in order to be successful. Many of its brilliant successes come from constructing crafty models that incorporate theories that are not unified. A good example is GPS, whose construction drew on theories as diverse as Newtonian mechanics for the satellites, quantum mechanics for the atomic clocks and the special and general theories of relativity to correct those clocks. This success was obtained partly in virtue of, and not in spite of, these theories being disunified.
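
The standard numbers make the point vivid (approximate, widely quoted values): general relativity predicts that the satellite clocks, sitting higher in the Earth's gravitational field, run fast by roughly

\[ \Delta t_{\mathrm{GR}} \approx +45\ \mu\mathrm{s/day}, \]

while special relativity predicts that their orbital speed makes them run slow by roughly

\[ \Delta t_{\mathrm{SR}} \approx -7\ \mu\mathrm{s/day}, \]

for a net offset of about +38 μs per day. Light covers about 11 km in 38 μs, so an uncorrected system would drift by the order of ten kilometres a day – each of the disunified theories contributing its own indispensable correction.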

The third and to me most important point is that in real science about real systems in the real world, for predictions and explanations of even the purest of physics results, physics must work in cooperation with a motley assembly of other knowledge, from other sciences, engineering, economics and practical life. I was especially impressed with this in the 47-year-long, 750-million-dollar project – the Stanford Gravity Probe B (GP-B) – that I was a participant observer on for a couple of years, an experiment that Francis Everitt and his huge team undertook in order to test the general theory of relativity by putting gyroscopes into space and looking to see if they precessed due to space-time coupling as the general theory of relativity predicts. Here is part of a description of this experiment from a 2005 report in the Stanford News:

The purpose of GP-B is to test Einstein’s theory by carrying out the experiment in a pristine orbiting laboratory, thereby reducing background noise to insignificant levels and enabling the probe to examine general relativity in new ways.

Deceptively simple. Launched on April 20, 2004, from Vandenberg Air Force Base on the California coast, GP-B has been using four spherical gyroscopes to measure precisely two extraordinary effects predicted by Einstein’s theory … .

How does GP-B measure these effects? Conceptually, the experiment is simple: Place a gyroscope and a telescope in a satellite orbiting the Earth. (GP-B uses four gyroscopes for redundancy.) At the start of the experiment, align both the telescope and the spin axis of the gyroscope with a distant reference point – a guide star. Keep the telescope aligned with the guide star for a year as the spacecraft orbits the Earth more than 5,000 times. According to Einstein’s theory, over the course of a year, the geodetic warping of Earth’s local spacetime should cause the spin axis of the gyroscope to drift away from its initial guide star alignment by a minuscule angle of 6.6 arcseconds (0.0018 degrees). Likewise, the twisting of Earth’s local spacetime should cause the spin axis to drift in a perpendicular direction by an even smaller angle of 0.041 arcseconds (0.000011 degrees), about the width of a human hair viewed from 10 miles away.25

This sounds like exactly the kinds of things that advocates of 'it's all physics really' take to be universal – pure physics causes producing pure physics effects: '[s]pace-time warping produces a 6.6-arcsecond drift in a gyroscope spin axis' and '[s]pace-time twisting produces a drift of 0.041 arcseconds'. Well, yes, but – see how the report proceeds:

As the late Stanford physicist and GP-B co-founder William Fairbank once put it: ‘No mission could be simpler than Gravity Probe B. It’s just a star, a telescope and a spinning sphere.’ However, it took the exceptional collaboration of Stanford, NASA, Lockheed Martin and a host of other physicists, engineers and space scientists almost 44 years to develop the ultra-precise gyroscopes and the other cutting-edge technology necessary to carry out this deceptively ‘simple’ experiment. The ping-pong-ball-sized gyroscope rotors, for example, had to be so perfectly spherical and homogeneous that it took more than 10 years and a whole new set of manufacturing techniques to produce them. They’re now listed in the Guinness Database of Records as the world’s roundest objects. Similarly, it took two years to make the flawless roof prisms in the GP-B science telescope that tracks the guide star.26

What’s going on here is something one of the inventors of the notion of marginal utility in economics, Carl Menger, identified clearly as core to the methodology of the socio-economic sciences. Menger was concerned with theoretical relations, like those in Einstein’s theory, in theoretical economics or in any of the other sciences that offer up what he called ‘strict’ laws. These laws do not describe ‘full empirical reality’ – which is where we live and which is, after all, the only reality there is. Rather, what strict laws describe are exact relations that exist in a kind of Platonic heaven (recall Plato’s allegory of the cave from earlier in this chapter) between clear, clean, ideal types. If you want to see the relations described in physics equations exemplified, you must look not only to interactions in Platonic heaven but restrict your attention to how physics types interact when they stay firmly ensconced in the physics compound, having no truck with types living in other regions of Platonic heaven, like those where the pure types of economics live. Here is Menger himself speaking:

The theoretical sciences are … supposed to teach us the types (the empirical forms) and the typical relationships (the laws) of phenomena. By this they are to provide us with theoretical understanding, a cognition going beyond immediate experience, and, wherever we have the conditions of a phenomenon within our control, control over it …

Close examination, however, teaches us that the above idea is not strictly feasible. Phenomena in all their empirical reality are, according to experience, repeated in certain empirical forms. But this is never with perfect strictness, for scarcely ever do two concrete phenomena, let alone a larger group of them, exhibit a thorough agreement. There are no strict types in ‘empirical reality’, i.e., when the phenomena are under consideration in the totality and the whole complexity of their nature … .

But what needs no less to be emphasized is the circumstance that with this presupposition the same thing [i.e. what Menger has argued above about economics] also holds true of the results of theoretical research in all the remaining realms of the world of phenomena. For even natural phenomena in their ‘empirical reality’ offer us neither strict types nor even strictly typical relationships. Real gold, real oxygen and hydrogen, real water – not to mention at all the complicated phenomena of the inorganic or even of the organic world – are in their full empirical reality neither of strictly typical nature, nor … can exact laws even be observed concerning them.27

What Menger says of real gold, real hydrogen and oxygen and real water is equally true of real gyroscopes sent into real space with real telescopes – and I repeat, these are, after all, the only kinds of gyroscopes and telescopes there are or ever will be. These fully empirical objects have characteristics that we can use physics equations to help model but they are not the ideal characteristics that appear in the equations, despite the huge efforts of the GP-B team to make them as close as possible. The other great effort of the GP-B team went into ensuring that nothing that they couldn’t model in a physics equation could act on the gyroscope spin, or in my language from Chapter 3, constructing a sufficiently small world that only things named by physics concepts mattered.

There are of course two attitudes one can take to what I just said. The first assumes the view in question: that it is all physics really. There is nothing that could affect the gyroscope spin that can’t in principle be modelled in physics. The only problems we face are practical – we often don’t know what these other factors are or maybe we do not know how to model them properly in a physics equation. The other reserves judgement. If we can model a factor using concepts in physics properly, wonderful. That is a powerful help in prediction and explanation. If we cannot, that could be due to our ignorance, but it could be due to the facts of nature themselves – some of these additional causes just aren’t properly a part of physics. I discuss these two attitudes in some depth in the Preface to this chapter.

Whichever of these two attitudes we adopt, there is one big further question that needs to be confronted: how could the GP-B team defend their claims that nothing they couldn’t model or control would affect the gyroscopes? That followed from sundry other models employing many features not studied in physics. For instance: models of how the gyroscopes were honed so they could be appropriately represented in other models as nearly perfectly homogeneous; and models to show that the measurement itself is not creating a torque that affects the spin of the gyroscope.

The standard method of measuring the spin of a rotating sphere would be to put a tiny spot on the sphere and then track its trajectory visually. But the spot would introduce an inhomogeneity that would cause a shift in the precession of the gyroscopes. So, the Gravity Probe B team spent a mammoth effort producing a system of measurement that would not affect the spin. The gyroscopes were constructed to be superconductive. Once this physics description is applicable, we can conclude that, spinning, they generate a magnetic field, called ‘the London moment’, parallel to their spin axis. The orientation of each gyroscope is then measured using a low-noise SQUID magnetometer (a Superconducting Quantum Interference Device that can be used to measure extremely weak signals). So, you need to figure out how to produce a set-up that reads off the orientation without disturbing the spin. Then, to defend the claim that you have succeeded – that the final measurement system does not affect the precession of the gyroscope – you need models of its production that show how the way it was produced delivers the result demanded. These models again picture mixed causal inputs for a pure physics output. They show how a great mix of input causal factors from both inside and outside physics proper can produce a system that can be represented in further models under the ideal physics type ‘not a source of torque’.
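To get a feel for why a SQUID is needed at all, here is a minimal sketch – my own, not the GP-B team’s analysis code – of the size of the London field. The formula B = (2mₑ/e)ω for a spinning superconductor is standard physics; the 75 Hz spin rate is an assumption, chosen to lie in the range reported for the GP-B rotors.

```python
# A minimal sketch of the London moment field of a spinning superconductor.
# The formula B = (2 * m_e / e) * omega is standard; the spin rate is an
# assumed figure in the range reported for the GP-B gyroscopes.
import math

M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def london_field(spin_hz: float) -> float:
    """Magnitude (tesla) of the London field along the spin axis."""
    omega = 2 * math.pi * spin_hz   # angular speed, rad/s
    return (2 * M_E / E_CHARGE) * omega

b = london_field(75.0)
print(f"London field: {b:.2e} T ({b * 1e4:.2e} gauss)")
# ~5e-9 T: detecting tiny changes in the direction of so weak a field
# is what demands SQUID-level sensitivity.
```

A field of a few nanotesla is several orders of magnitude weaker than the Earth’s – exactly the kind of ‘extremely weak signal’ a SQUID exists to measure.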

You can see a nice image of the gyroscope rotors at the Gravity Probe B website.28 Here’s what that website says about them:

World’s Most Perfect Gyroscopes

To measure the minuscule angles predicted by Einstein’s theory, the GP-B team needed to build a near-perfect gyroscope – one whose spin axis would not drift away from its starting point by more than one hundred-billionth of a degree each hour that it was spinning. By comparison, the spin-axis drift in the most sophisticated Earth-based gyroscopes, found in high-tech aircraft and nuclear submarines, is seven orders of magnitude (more than ten million times) greater than GP-B could allow. ...

[A] GP-B gyroscope rotor had to be perfectly balanced and homogenous inside, had to be free from any bearings or supports, and had to operate in a vacuum of only a few molecules. After years of work and the invention of new technologies and processes for polishing, measuring sphericity, and coating, the result was a homogenous 1.5-inch sphere of pure fused quartz, polished to within a few atomic layers of perfectly smooth …

The spherical rotors are the heart of each GP-B gyroscope. They were carved out of pure quartz blocks, grown in Brazil, and then fused (baked) and refined in a laboratory in Germany. The interior composition of each gyro rotor is homogeneous to within two parts in a million. On its surface, each gyroscope rotor is less than three ten-millionths of an inch from perfect sphericity. This means that every point on the surface of the rotor is the exact same distance from the center of the rotor to within 3×10⁻⁷ inches … .

[I]f a GP-B gyroscope were enlarged to the size of the Earth, its tallest mountain or deepest ocean trench would be only eight feet!

Finally, a GP-B gyroscope is freed from any mechanical bearings or supports by levitating the spherical rotor within a precisely machined fused-quartz housing cavity. Six electrodes, evenly spaced around the interior of the housing (three in each half), keep the rotor levitated in the housing cavity.
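Two of the quoted figures are easy to check with back-of-the-envelope arithmetic. The calculation below is my own, not the GP-B team’s; it simply scales the stated 3×10⁻⁷-inch sphericity tolerance up to Earth size and multiplies the allowed drift by the quoted ten-million factor.

```python
# Back-of-the-envelope checks of two figures from the GP-B website.
# My own arithmetic; only the quoted numbers are used as inputs.

EARTH_RADIUS_M = 6.371e6   # mean Earth radius in metres
ROTOR_RADIUS_IN = 0.75     # radius of the 1.5-inch rotor
DEVIATION_IN = 3e-7        # quoted departure from perfect sphericity

# Scale the largest surface bump up to Earth size.
bump_m = EARTH_RADIUS_M * (DEVIATION_IN / ROTOR_RADIUS_IN)
print(f"Scaled bump: {bump_m:.1f} m = {bump_m / 0.3048:.1f} ft")

# Allowed spin-axis drift versus the best Earth-based gyroscopes.
gpb_drift = 1e-11            # 'one hundred-billionth of a degree' per hour
earth_based = gpb_drift * 1e7  # 'seven orders of magnitude' greater
print(f"Earth-based drift: ~{earth_based:.0e} degrees per hour")
```

The scaled bump comes out at about 2.5 metres, comfortably within the website’s ‘eight feet’.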

GP-B was a joint project between Stanford University and NASA – the US National Aeronautics and Space Administration. GP-B Principal Investigator Francis Everitt was hired in the Stanford Physics Department in 1962 to work with William Fairbank and Leonard Schiff as the first full-time researcher on the GP-B experiment. In 2005, Everitt was awarded the NASA Distinguished Public Service Award, which is the highest NASA honour given to a person outside the US government. Everitt, British by birth, was trained in low-temperature physics, which was important for this project since the spacecraft was essentially a giant floating dewar, or thermos. As the Stanford website explains:

One of the greatest technical challenges for Gravity Probe B was keeping the probe and science instrument precisely at a designated cryogenic temperature, just above absolute zero, of approximately 2.3 kelvin (−270.9 degrees Celsius or −455.5 degrees Fahrenheit) constantly for 16 months or longer. This was accomplished by integrating the probe into a special 2,441 liter (645-gallon) dewar, or thermos, nine feet tall (about the size of a mini van), that is filled with liquid helium … . The dewar and its payload inside form the main structure around which the GP-B spacecraft was built.29
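The quoted temperatures are just unit conversions, which we can verify for ourselves (my own check, using nothing beyond the standard kelvin/Celsius/Fahrenheit formulas):

```python
# Check the quoted operating temperature in all three scales.
K = 2.3
C = K - 273.15        # kelvin to Celsius
F = C * 9 / 5 + 32    # Celsius to Fahrenheit
print(f"{K} K = {C:.2f} °C = {F:.2f} °F")
# -270.85 °C and -455.53 °F, matching the quoted figures once rounded.
```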

After all this description of the huge engineering accomplishments, you will not, I think, be surprised to learn that engineer Bradford Parkinson, whom Stanford describes as the ‘chief architect of the now-ubiquitous Global Positioning System (GPS), which he led as a U.S. Air Force colonel in 1973’, served as co-principal investigator.

But tons of engineering and chemistry and physics of all ilks are not enough. Knowledge of economics and social psychology plays an essential causal role as well – predictions are clearly not possible without it. So does knowledge of management science. This was certainly the opinion of James Beggs, who was well known for championing the space shuttle:

Managing a flight program such as GP-B, in which the spacecraft contained only a single, highly integrated payload proved to be a challenge for NASA, its government overseers, Stanford University as NASA’s prime contractor, and Stanford’s subcontractor, Lockheed Martin – so much so that in 1984, the then NASA Administrator, James Beggs, remarked that GP-B was not only a fascinating physics experiment, but also a fascinating management experiment.30

There is often a tendency to dismiss the importance of the features from these higher-order secondary sciences for predicting physics outcomes. The attitude seems to be, ‘of course what actually happens depends on matters that are studied in economics and management, but those are just practical matters. If only they can get done and out of the way, all that really matters in principle is physics.’ I have already pointed out how very unusual it is for a situation to be so precisely engineered that we can suppose: every factor that can affect the outcome of interest but that can’t be represented in physics equations – like all those factors that could affect precession that we don’t know how to model properly as a torque – has been eliminated or shielded against. But beyond that, clearly all the secondary factors that cause those pesky possible interfering causes to disappear are absolutely essential. And so too is the secondary-science knowledge that helps us get those causes into place. It is trivially true that if you have in front of you one of those rare, precisely engineered systems where we know how to model all the causes that matter to a physics outcome with physics equations, then predicting the outcomes is all down to physics. But that truism doesn’t get you anywhere towards the conclusion that it is all physics really.

I hope this extended discussion has provided a sense of the almost infinite collection of scientific knowledge the GP-B team had to assemble and use in generating predictions about the results their real telescopes and real SQUID magnetometers would record of what those real fused quartz spheres would do in real space across real time – as Menger put it, in ‘full empirical reality’. If I recall correctly from my time participant-observing this experiment, the volumes describing the design of the experiment took about forty feet of shelf space. And only a small portion of the knowledge called on in them was from physics proper. To make real predictions of the kind that most amaze us and that give the strongest support to physics theories, physics does not manage on her own but rather works as a part of a motley assembly.

Unity at the Point of Action

What then of the unity of science and of Eddington’s problem of the two tables?

Start with Eddington’s problem. We know that this cannot really be a problem: everything that happens is of necessity consistent. If looking at the laws of our sciences suggests the contrary to us, there must be something wrong with those laws or, I suggest, with our interpretation of what those laws tell us. I think the problem is in our interpretation. In the case of the laws themselves, of course there is surely something wrong with many of them. We have not yet got it just right in our sciences, and never will. But I think we have ample evidence that many of our law claims are getting it right enough that they would dictate contradictory outcomes if read in the conventional way, as claims about what must happen when the factors in the laws operate. What is the alternative to this conventional reading? What might be wrong with our understanding of how these laws contribute? Here is one alternative, which I describe in Chapter 3: the laws tell us the role each variable in them plays in determining the values of the others. But they do not preclude that other factors, not represented by variables in the equation, can affect those values as well. Since I explore this along with a few other options in some detail in Chapter 3, I won’t discuss it further here.
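To make that alternative reading concrete, here is a deliberately artificial sketch. Everything in it – the functional form, the numbers, the ‘unmodelled disturbance’ – is invented for illustration; the point is purely structural.

```python
# An equation fixes how its own variables relate; on the alternative
# reading it does not claim they are the only influences on the outcome.
# All numbers here are invented for illustration.

def dictated_drift(torque: float, moment_of_inertia: float) -> float:
    """The drift rate the equation assigns to the factors it names."""
    return torque / moment_of_inertia

contribution = dictated_drift(torque=2.0e-9, moment_of_inertia=1.0e-3)

# What actually happens may also reflect a factor the equation never
# mentions -- one with, perhaps, no proper physics model at all.
unmodelled = 5.0e-7
actual = contribution + unmodelled

print(f"dictated: {contribution:.2e}, actual: {actual:.2e}")
# The law remains correct about the role its variables play, even
# though the outcome differs from what a conventional reading predicts.
```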

As to the unity of science, the pyramid pictures the sciences as united because they are all the same really. The picture I paint of the sciences as like a gigantic Meccano set is the opposite – there are many kinds of pieces that are very different. And that is their strength. We can bring the scientific results and principles together in myriad different ways to achieve myriad different purposes. This is ‘unity at the point of action’.

I borrow the phrase from one of the core members of the Vienna Circle, Otto Neurath, who died in Oxford having fled there from the Nazis. Neurath was the director for full social planning under all three of the short-lived socialist governments in Bavaria in 1919 and 1920 and he was a central figure in the housing movement and movement for workers’ education in Red Vienna throughout the Vienna Circle period. His philosophy was finely tuned to his concerns to change the world and to use science to do so, which was part of the aim of the famous Vienna Circle ‘Unity of Science’ movement that he spearheaded.

The Unity of Science movement aimed to create an Encyclopedia of Unified Science that would house information about all the scientific knowledge accumulated across all the different disciplines and sub-disciplines. The idea was that users could then take different volumes off the shelf, extract lessons from different sections and bring them together in sundry different ways, to serve the purpose of the moment – the sciences become united each time at the point of use, and each time different sciences are put together in different ways for different uses; just as we use our Meccano sets to build models of everything from fighter planes and ocean liners to Ferris wheels and windmills and tractors.
