A literal reading of Kant's Second Analogy of Experience has often allowed it to be supposed that the constitution of objectivity is conditioned by a strictly determinist application of the category of causality to phenomena manifest in space-time. This conclusion is however not self-evident, whether in its Kantian dimension or its scientific one. On the one hand, closer readings of Kant (Brittan 1994) have led to a loosening of the linkage apparently established by him between the constitution of objectivity and ‘exact causal determinism’ (Kojève 1990). On the other, a divergence between objectification and the applicability to phenomena of a rule of determinist succession has been strongly suggested by reflections associated with the developments and reinterpretations of quantum mechanics. One need only think of the classification of ‘ontologically interpretable’ hidden variable theories into causal theories and stochastic theories (Bohm and Hiley 1993) to reach the conclusion that the possibility of constituting a region of objectivity (and accessorily of hypostatising it ontologically) is independent of the direct application of determinism to spatio-temporal phenomena. After all, Brownian motion described in an exclusively stochastic mode is no less objective than Brownian motion linked back to underlying micro-level movements governed by determinist laws. If there is a genuine line of demarcation for objectivity, it is not to be drawn between phenomena anticipated in a determinist manner and phenomena anticipated in a purely probabilistic mode, but somewhere in the middle of this latter category of phenomena. Certain physical theories offering irreducibly probabilistic predictions are compatible with the hypothesis of a constituted objectivity of the spatio-temporal events on which they bear, while others are not (Pitowsky 1994). To which may be added the fact that all the known physical theories, among those that offer probabilistic predictions, are based on a law of continuous and deterministic evolution of the probabilities, or of the functions from which probabilities can be calculated. Among the laws of this type may be cited the ‘master equations’ and the BBGKY hierarchy equations of classical statistical mechanics, or Schrödinger's equation in quantum mechanics. Physical theories of a stochastic type thus effect an unexpected displacement of the locus of application of the causality principle, towards a representation space for the probabilities, rather than a pure and simple setting aside of that principle.
In the final analysis, far from the probabilistic character of a theory indicating by its very nature a distancing from the norms of objectification, the very structure of the calculation of probabilities utilised by this theory is capable of bearing the trace of a constitution of objectivity: whether a prior constitution of objectivity in space-time, elements of a formal constitution of objectivity in the abstract representational space of the instruments of probability calculation, or both at the same time.
The role and extent of involvement of the category of causality among the a priori conditions of possibility of an objective element of knowledge must therefore be thoroughly redefined, with constant reference to the case of quantum physics. The aim of such a re-examination is obviously not to fall into the extreme of a complete rejection of Kant's philosophy in the name of advances in physics, after having criticised the opposite extreme, which consists of constraining physics to conform to a rigid (and caricatural) variety of the Kantian epistemological mould. The aim is rather to contribute to a re-ordering, an extension and a systematic mobilisation of the transcendental method, whose principle remains a precious guiding thread for any reflection about physics.
In the spirit of such a re-examination as has just been described, the plan of this article will begin by conforming to the altered hierarchy of the Analogies of Experience proposed by P.F. Strawson (1966: 140–146). According to Strawson, a good number of the difficulties met by Kant in distinguishing between a subjective succession of perceptions and a succession of states attributable to a perceived object in itself can be resolved on condition of partly inverting the conventional order of application of the categories of substance, causality and reciprocity.
Following this modified order, the first operation necessary for distinguishing between a subjective series and an objective series derives from the central notion of the Third Analogy of Experience, which is only evoked in passing at the beginning of the exposition of the proof of the First Analogy. This operation consists of a setting in simultaneity arising out of a sequential empirical datum. It amounts to proceeding backwards from the succession and multiplicity of perceptions to the coexistence and unity of one or several sets of determinations. It is then a relation between these fixed and coexisting determinations and a moving subject that retrospectively explains the changing character of perceptions.
The second operation, proper to the Second Analogy, has to provide criteria allowing us to consider ‘a change in our perceptions as the perception of a change’ (Strawson 1966: 143). It presupposes the first operation, which consisted of extracting a stable kernel of coexistence from the sequence of perceptions; for the perception of a change could not emerge except as a non-eliminable residue of an attempt to fix an invariant of perceptions.
Finally, the third operation, in line with the First Analogy, is a moment of re-identification. It allows us to pass from the simple perception of a change to the perception of the change which affects a ‘substance’ which is otherwise unchanged. The success of this operation guarantees that, within a reasonable interval of time, the perceived change will not go beyond all possibility of the object being recognised. Presented in this way, the condition of substantiality appears less crucial than its situation at the culmination of the system of the Analogies of Experience would have one think. No doubt the extraction of local islands of relative permanence in the nexus of perceived change is necessary, as Kant writes, to identify relations in time against their background. But nothing imperatively requires that these points of identification should be present everywhere (and on every scale) nor that their duration should extend beyond what is required by their function as terms of comparison.
Following sections 1 and 2, devoted to the first two of these constitutive moments of objectivity and to a discussion of their relevance when applied to quantum theories, section 3 will address the question of the probabilistic character of an objectivity constituted in space-time. We will observe that the quantum calculation of probabilities generally limits itself to operating a formal constitution of objectivity within an abstract space (a Hilbert or Fock space), without carrying with it a pre-constitution of objectivity in space-time. Only processes of decoherence succeed in causing such a pre-constitution to re-emerge. Section 4 will conclude on the transcendental status of decoherence.
1. Objectivity as detachment/stabilisation
The characteristic of the object, in its purely etymological sense, is to be ‘set before’, to be disassociated from the background of the subjective, perceptive and instrumental circumstances of its manifestation. The object is identified with what has been able to be so thoroughly disassociated from the particular moments of its appearing that each of these moments becomes inversely interpretable as one of its appearances.
To take up a famous example proposed by Kant (1964: 220), ‘[…] the apprehension of the manifold in the appearance of a house which stands before me is successive’ (A190/B235). The order of this apprehension depends on the method of investigation chosen by the subject. But the designation of the house, and the system of human actions conducted in its regard, presupposes that there is something constant which holds simultaneously available for investigation an infinite number of aspects (Husserl's ‘sketches’). The separation of a perfectly stable unit endowed with coexistent determinations from the sometimes concomitant, sometimes chronologically sequential multiplicity of particular manifestations is, in other words, the primordial condition for a constitution of objectivity. It is only by reference to this initially prescribed stability that the question of rule-governed successions and of changes authenticated by physical laws can subsequently be raised.
Most contemporary research in the cognitive sciences, moreover, holds that an essential (not to say exclusive) requisite for the constitution of objectivity is a co-ordination of sensorial apprehensions sufficient to detach a coexistent unity through ‘triangulation’. This co-ordination, effected through the mediation of motor activity, is considered to function on two levels: (1) the level of each sensorial modality, which is successive, and (2) the level of sensorial intermodality, which is immediately simultaneous.
For a given sensorial modality, the co-ordination between successive apprehensions depends on a rule of motor activity that allows their reproducibility to be attested, ceteris paribus. This motor activity rule, as demonstrated by Jean Piaget following the reflections of Henri Poincaré, takes the form of a Group of transpositions (with the transpositions being limited to a single individual or extended by delegation to a social community). Among the defining features of a Group, the one which most clearly applies in the constitution of objectivity is the assignation of an inverse to each element structured by the law of composition. The reiteration of a certain sensible configuration, under the condition of invertibility of a set of translations and rotations, in effect disconnects it from the particular conditions realised at the moment of its first manifestation and thereby renders it objective. More generally, as B.C. Smith (1998, ch. 7) explains, the constitution of a world of objects requires active and systematic compensation for the variations in perceptible actuality, and the correlative emergence of stable poles lending themselves to intentional direction of perception. The process of objectification comes down, in other words, to ‘deconvolving the deixis’ (ibid. 235), to undoing the tangled skein of a here-and-now of overpowering obviousness. The object is the ‘triangulated’ focus of this deconvolution through a maximal compensation (motor or instrumental) of the factors of variation, and the subject reciprocally is ‘the long-term integral or aggregate of that which it must compensate for in order to stabilize the rest of the world’ (ibid. 240).
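The group-theoretic compensation just described can be given a minimal computational illustration. The following Python sketch (all names and numerical values are ours, purely illustrative) shows a fixed configuration of points ‘perceived’ under two rigid displacements of the subject; applying the group inverse of each displacement recovers one and the same invariant configuration, and it is this invariance under compensated displacements that qualifies the configuration as a candidate object.

```python
import numpy as np

# A configuration of points (the candidate 'object'), in homogeneous co-ordinates.
obj = np.array([[0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])

def rigid_motion(theta, t):
    """A displacement of the subject: 2D rotation by theta plus translation t."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, t[0]],
                     [s,  c, t[1]],
                     [0.0, 0.0, 1.0]])

# Successive 'perceptions': the same configuration seen under two displacements.
motions = [rigid_motion(0.3, (1.0, -2.0)), rigid_motion(-1.1, (0.2, 0.7))]
percepts = [g @ obj for g in motions]

# Compensation: each element of the group has an inverse, and applying it
# recovers one and the same invariant configuration from every percept.
for g, p in zip(motions, percepts):
    assert np.allclose(np.linalg.inv(g) @ p, obj)
print("Invariant configuration recovered under all compensated displacements.")
```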
The case of sensorial intermodality has been studied in particular by J. Proust (1997) in the context of a methodological phenomenalism inspired by the earlier work of Rudolf Carnap. Proust starts out from the remark that objectification is above all the process of dissociating something from the immediacy apprehended by the senses. But according to this position, the crucial moment of this disassociation occurs as soon as several sensorial modalities converge in one and the same spatial area. It is this ‘synergy’ of the manifold modal items of information that allows the extraction of local invariants vis-à-vis the changing of sensorial channels to be initiated. One thinks here of a famous remark of Aristotle, adopted and applied anew by Galileo and Locke, with respect to the assistance brought by the multiplicity of the senses in making plain the ‘common sensibles’ that are spatial magnitude and movement (Aristotle, On the Soul II-6 and III-1). But of course, the plurality of sensorial pathways does not suffice in itself to guarantee the fixation of objective focal sectors separate from the modes of their appearance. There needs to be added a motor and perceptive capability to calibrate the sensible entries; that is, a capability for constantly correcting these entries to make them conform to the rules of association which define the coexistent unity of the determinations within an object. The principal of these rules is called by Proust (1997: 290–291) the ‘constraint of equilocality’. It amounts to imposing a condition of coherence (or of non-contradiction) on the attribution of multiple qualities sharing one and the same spatial localisation, and, in return, to defining spatial localisation as what the manifold qualities associated by the condition of coherence have in common.
Classical physics (to say nothing of classical science in its entirety) is based on the implicit postulate, embedded in the structures of theory and in the presuppositions of experimentation, that these two moments of objectifying detachment have always already been accomplished. On the level of theory, it non-problematically describes bundles of properties linked by constraints of equilocality. The mechanical state thus consists of a list of six co-ordinates (three spatial co-ordinates and three components of momentum) attributed to one and the same material point at a given instant. Another example is that of the three quantities (pressure, volume and temperature) assigned simultaneously to a gas sample enclosed in one and the same cavity; quantities which are constrained not only by a condition of non-contradiction (just as are those composing the mechanical state), but also by a condition of thermodynamic consistency which takes the form of the Boyle-Mariotte law.
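In symbols (our notation), the two examples just cited read as follows, writing the gas law in its modern ideal-gas form, of which the Boyle-Mariotte law proper (pV = constant at fixed temperature) is the isothermal special case:

s = (x, y, z, px, py, pz),  pV = nRT,

where (x, y, z) are the spatial co-ordinates of the material point, (px, py, pz) the components of its momentum, n the quantity of gas and R the gas constant.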
On the level of its relation to experimentation, classical physics moreover presupposes a large number of (at least asymptotic) indifference clauses: indifference of each quantity with respect to the model of experimental set-up used to evaluate it; indifference of results with respect to the order in which the measurements of several quantities are successively carried out; indifference of determinations with respect to the multiplication of simultaneous experiments bearing upon several quantities. The first indifference clause allows each measured quantity to be detached from the configuration of its particular measurement method, such that henceforth it is related only to an equivalence class of measurement methods. This equivalence class can reciprocally be understood as the operative definition of the quantity concerned. The second indifference clause guarantees both the possibility of detaching each determination from the contingent circumstances of the experimental act associated with its manifestation, and the possibility of distinguishing a federative unity vis-à-vis the operative history of the modes of manifestation of the federated determinations. It is maintained right up to and including those cases where measured values are in practice sensitive to the order in which the measurements are made, via a double safeguard clause according to which: (1) the sensitivity to order is explained by the presence of a disturbing agent which, for its part, transitively satisfies all the indifference conditions, and (2) the disturbance is capable of indefinite diminution. Finally, the third indifference clause completes the detachment of a unity bearing simultaneous determinations, under the hypothesis of an asymptotically non-disturbing character of the mode of measurement.
The preceding remarks could be summed up by saying that the paradigm of classical physics automatically sets in operation, at the heart of its formalism and of the presuppositions of its attestation, the conditions for a disassociation of localised objects bearing simultaneously the sum of their available determinations, taken as experimental ‘evidence’.
In quantum theory matters are quite different. The formula of coexistence of the variables of position and momentum within a single state is here replaced by a commutator relationship which signifies the partial incompatibility (to within a term of the order of Planck's constant) of the two conjugate variables, or which indirectly formulates (through the Heisenberg relations deduced from it) a limit to the precision with which the value of each of these two variables can be fixed. These commutation relationships are not just one characteristic among others of quantum theory; they are its central constituent element. The universal procedure that permits the formulation of a quantum theory consists in effect of replacing the variables of a prior classical theory by operators bound together through a commutator relation. It therefore seems that what defines quantum theories is the exact opposite of what defines classical theories, namely the exclusion of the coexistence of certain pairs of variables.
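In standard notation (with ħ = h/2π the reduced Planck constant), the commutator relation for the conjugate pair position/momentum, and the Heisenberg relation deduced from it, read:

[x̂, p̂x] = x̂ p̂x − p̂x x̂ = iħ,  Δx · Δpx ≥ ħ/2.

The second formula expresses the limit, mentioned above, on the precision with which the values of the two conjugate variables can be jointly fixed.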
On the level of the relation to experimentation, the only indifference clause retained from the case of classical physics is the indifference of each quantity to the experimental model used to evaluate it. This is the sense of Bohr's correspondence principle and of frequent remarks of Schrödinger (1995) regarding the classical heritage in the definition of the variables of quantum mechanics. On the other hand, the multiplication of simultaneous experiments is limited (or at least the precision of their results is limited); and furthermore, indifference to the order of measurement is not generally guaranteed, even in principle.
Let's develop this latter point a little further, by considering a canonical experimental situation. Three measurements are successively effected upon the ‘same object’Footnote 1 (let's say a particle of spin 1/2), following the sequence Sx, then Sz, then once more Sx. The observables Sx and Sz correspond to the measurements of the component of spin along the Ox and Oz axes respectively. Each of the two observables possesses two eigenvalues (+1/2 and −1/2), and two eigenvectors (or eigenstates), which may be denoted respectively |x,+⟩, |x,−⟩ and |z,+⟩, |z,−⟩.
Let's now look at an example of what happens with the series of measurements (Sx – Sz – Sx). The first Sx measurement provides a certain result (say +1/2). The measurement of Sz, coming second in the sequence, supplies another result (say +1/2 again, but the reasoning would not be affected if it were −1/2). The last Sx measurement, carried out just after that of Sz in the sequence (Sx – Sz – Sx), can then, according to quantum predictions, lead just as readily to the result +1/2 as to the result −1/2, with a probability of 1/2 for each. But if the series were (Sx – Sx – Sz), the result of the second measurement of Sx would with certainty have been +1/2. In quantum mechanics, the indifference condition for the order of measurements is thus not generally satisfied. Of course, since Heisenberg and Bohr, physicists have not failed to try to apply the usual safeguard clause that consists of explaining the sensitivity to measurement order by a disturbance (here incompressible and uncontrollable) due to the intervening agency of measurement. But numerous objections can be levelled against the importation of this concept of disturbance from the classical conceptual field into the quantum domain. The first such objection, historical in nature, is that after 1935 Bohr himself no longer believed in it,Footnote 2 in consideration of the problems raised by the interpretation of the thought experiment of Einstein, Podolsky and Rosen (EPR) (Bohr 1961: 59). The second, more formal in nature, is that quantum theory incorporates sensitivity to the order of determinations as a constituent element (here again through commutation relationships) rather than as a secondary circumstance capable of being explained from more fundamental principles. The traditional ‘explanation’ by way of a disturbance would be fully coherent only within a framework quite different from that of quantum mechanics, such as that of the theory of non-local hidden variables formulated by Bohm in 1952; only this type of theory is able to put to work (supposedly) more fundamental principles in relation to which the dependence of micro-level phenomena upon the sequential order of the experiments appears as a simple derived consequence.
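The order sensitivity of the sequences (Sx – Sz – Sx) and (Sx – Sx – Sz) can be checked numerically. The following Python sketch (a simulation under the standard projective-measurement rules; the function names and the initial preparation are our own, purely illustrative) estimates, among runs whose first outcome is +1/2, the frequency with which a later measurement again yields +1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spin-1/2 observables in units of hbar; each has eigenvalues +1/2 and -1/2.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def measure(state, obs):
    """Projective measurement: draw an eigenvalue by the Born rule,
    then collapse the state onto the corresponding eigenvector."""
    vals, vecs = np.linalg.eigh(obs)
    probs = np.abs(vecs.conj().T @ state) ** 2
    k = rng.choice(len(vals), p=probs / probs.sum())
    return vals[k], vecs[:, k]

def freq_plus(sequence, index, shots=20000):
    """Among runs whose first outcome is +1/2, the frequency with which
    the outcome at position `index` is also +1/2."""
    kept = plus = 0
    for _ in range(shots):
        state = np.array([1, 0], dtype=complex)  # arbitrary preparation: spin up along Oz
        outcomes = []
        for obs in sequence:
            val, state = measure(state, obs)
            outcomes.append(val > 0)
        if outcomes[0]:
            kept += 1
            plus += outcomes[index]
    return plus / kept

print(freq_plus([Sx, Sz, Sx], index=2))  # ~0.5: the last Sx result is random again
print(freq_plus([Sx, Sx, Sz], index=1))  # 1.0: an immediately repeated Sx is stable
```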
This process of carrying over indifference clauses onto an agency of disturbance will be re-addressed in the next section. But it should from now on be kept in mind that once this resource is given up, a decisive moment of the procedure of objectification through ‘setting in simultaneity’, or, if one wishes, through stabilisation of properties out of the flux of phenomena, is no longer available. The accepted mode of expression, since von Neumann, according to which an object possesses the ‘property’ Sx = +1/2Footnote 3 when the measured value of Sx is +1/2, appears in these conditions very ill-adapted. It has only been formally preserved, via the concept of ‘consistent histories’ of Robert Griffiths (Omnès 1999: 177ff.), on condition of admitting that, between two measurements, there is not a single actual series but a multiplicity of virtual series of properties.
Another, less artificial, manner of preserving in quantum physics that foundational moment of objectivity which is the formal concept of a property was proposed by Peter Mittelstaedt (1994). It consists in retaining only ‘unsharp’ properties, obeying the Heisenberg relations not in their statistical distribution but directly in their individual definition (they are said to be ε-defined). With ‘unsharp’ properties automatically respecting the sequential indifference clause, nothing stands in the way of simultaneously attributing to a microscopic object couples of properties of this type corresponding to couples of conjugate variables. The problem that I perceive in Mittelstaedt's strategy is that his procedure of co-ordination and ‘setting in simultaneity’ is only valid for a restricted class of phenomena, obtained by means of those instruments of measurement that can provide an ε-distribution at the moment of each individual evaluation. That means that, contrary to Kant's approach, which aimed at universality, this strategy for constituting objectivity by detachment within the validity domain of quantum theory remains narrowly regional, if not lacunary.
2. Action and rules
The completion of the process of ‘setting in simultaneity’ is liable to leave behind it an irreducible residue of change: an element of change which, despite efforts directed towards this end, has not been able to be brought under a controlled variation of the established relationship between a stable object property and a cognitive apparatus. The issue is now to comprehend how one can distinguish between what, in this residual element of change, may be due to an at once contingent and uncontrollable variation in the object/cognitive apparatus relationship, and what may be attributed in proper to objects. Answering this question supposes that interest be taken in certain additional detachment/stabilisation processes, analogous in their principle to those which have led to the location of coexistent determinations in an object, but applying selectively to the residual element of change that the latter have left behind. These new processes tend to isolate an element of sequential reproducibility in the residual change. And the consistent form of the sequence is assimilated to a law, as per the elegant definition given by Moritz Schlick (1979: 8): ‘the permanent in an alteration is called its law’.
The nature of the change that is to be objectified, which is ‘residual’ with respect to an initial active process of objectification, immediately discards the strictly empiricist, passive or receptive theories of causality. Since the change to be encompassed by a law arises from a system of acts of ‘calibration’ or correction which has beforehand allowed the definition of properties, only its inscription in an additional nexus of actions can allow a discriminatory analysis to emerge, capable of determining the respective extent of the accidental and the necessary, of the uncontrollably relational and the objective. The role played by manipulative and/or experimental activity in the setting in place of causal relationships has indeed long been recognised. By Kant first of all, who, after restricting the application of the principle of causal linkage to succession, extended it to determinability in time of the relation between a simultaneous antecedent cause and a consequent effect, by means of an action of manipulation of the antecedent (Kant 1964: 228 [A203/B248]). By Jean Piaget (1970) and Georg Henrik von Wright (1974) in their turn, who designate action as a primitive concept in their exposition of the constitutive procedures of causality.
However, conceptions of causality based on the concept of action have been taxed with anthropocentrism and with circularity of argument, and it is better to reply to these charges in anticipation.
The charge of anthropocentrism, to take that first, comes from confusing the active identification of the reproducible factors of a sequence of events with a simple projection of the role of the agent as the archetypal determining factor. Yet, between the activity of detachment/stabilisation of causal factors and the activity consisting of intervening oneself as a causal factor, there is an obvious difference of both genetic and epistemological kind. On the genetic level, it is possible, as demonstrated by Piaget (1970: 260ff.), to distinguish a stage in the development of children during which the only type of cause they recognise is their own actions, and a later stage after which causal power is passed over to objects. On the epistemological level, it is clear that experimental action does not aim at serving as a substitute for the causal association, but on the contrary at conferring autonomy on it, by drawing out through systematic variation of antecedents that which, within the causal association, does not depend on an accidental situation or on a contingent relationship with the cognitive apparatus of the subject.
The charge of circularity, in second place, depends upon an almost trivial observation: that the proof of a reiteration of the same events on condition of the active reproduction of the antecedent cause, which is one of the key moments of the performative constitution of causality, presupposes that a given manipulation effected by the agent on the antecedent will each time have the same effect. Now such a presupposition seems to come down to positing in advance that the world is endowed with an elementary causal order: that which guarantees the identity of the immediate effects of similar actions. Thus the derivation of causal rules by experimentation appears to depend in circular fashion on the postulate of a causal rule applying to the relationship between the experimental acts and their outcome. To this charge, von Wright (1974: § II-2) replied by pointing out the difference between a presupposed order and an objectified order. It is true that experimental action rests upon a tacit confidence in the regularity of its effects, but it does not demand that the confidence by which it is pre-conditioned be thematised in causal or law-like assertions. It rests upon an always-already available know-how, not upon the law-like assertions that it serves to derive. If there is a circle here, it is simply a non-vicious circle of self-consistency between the presupposition concerning the determination of the results of an experimental action and the determinist law-like association by which classical mechanics describes the phenomena manifesting themselves in the space, and on the scale, of human activity.
With these objections now addressed, we can go further in developing the idea of a performative constitution of causal relationships. First of all, what linkage can properly associate the process of identifying rules of consequentiality with the prior process of detachment/stabilisation of properties? This linkage, suggested in the previous section in the context of the comparison between classical and quantum physics, is established to all appearances through the notion of disturbance. When a maximised set of calibration/correction activities has failed to render a certain local phenomenon perfectly invariant, when this phenomenon undergoes change despite the return to the initial cognitive situation through an assemblage of positional transformations endowed with a group structure, two options are available. Either to give up without further ado the attempt to attribute the phenomenon to a stable property, which signifies the failure, at least in part, of the procedure of detachment. Or to transitively displace the locus of application of the conditions of detachment/stabilisation towards another property that is capable of explaining the variation of the phenomenon. This displacement allows two important results to be obtained. On the one hand, the process of objectification by detachment/stabilisation, momentarily halted by the observation of imperfect invariance of the distal pole of the investigation, is re-activated by being carried over onto the constitution of disturbance properties. On the other hand, once related to the interference of the disturbance property, the variation of the initial phenomenon can be considered as the feature of a disturbed property. That allows for a retroactive extension of the effect of the objectification procedure onto the very phenomenon which initially presented an obstacle to its application, namely the change in this phenomenon.
Of course, the method of transitive application of the procedure of detachment/stabilisation to disturbance properties does not come without difficulties and limitations. In particular, the phenomenon that one sought to stabilise in a disturbance property may itself show irreducible variations. In the face of this challenge, the two ways out most commonly taken consist either of engaging upon an indefinite regression of disturbed properties and disturbing properties, or (in certain cases) of considering the possibility of reciprocal disturbances. A third solution appeals to the concept of event. The event may be considered as a temporal property slice (we will label it in this case a P-event) or as a property modification (that sort of event we will label an M-eventFootnote 4). The advantage of the event is that it is detachable/stabilisable by nature, even when the potential properties that it manifests are, by hypothesis, unstable. The event can effectively be attested by a set of other properties of objects irreversibly modified as a consequence of its occurrence, which are referred to as its traces. Instead of relating the alteration of a phenomenon to a disturbance property, the new strategy of objectification of change thereby consists of linking an M-event, detached/stabilised by its traces, to a prior event, equally detached/stabilised by its traces. The prior event can be a temporal property slice (a P-event), in which case one arrives at the exact equivalent, reformulated, of the sequence disturbance property/altered phenomenon. But it can equally be an M-event, which avoids having to depend on the stability of disturbance properties, or resorting systematically to an infinite regression aimed at designating an initial, truly stable, disturbance property. From the still concrete representation of the disturbance, one may then pass to the form of a law governing the succession of events.
We have seen above how the programme of investigation consisting of going back to the antecedent conditions of a change is capable of providing a response to the partial failure of a procedure of property detachment/stabilisation. But identifying the motivations for such a programme is insufficient. We need now to indicate how it might be carried through, in everyday practice as well as in that of the laboratory. The difficulty to be solved, as is well known at least since Kant's critique of Hume, is that of passing from a series of properties or events which simply is, to a sequence which must be. The difficulty consists, in other words, in not being content with an observation of succession, but in establishing a relationship of necessary consequentiality. Now, a modal concept like that of necessity puts to work, alongside an indicative-mode description of what actually happens, a conditional-mode description of what would happen in case of modification of elements of the series of properties or events. Only the renewed inclusion of the concept of action beside that of event will introduce, according to von Wright, following Piaget and many other cognitive science specialists, the component of conditionalisation indispensable for distinguishing the flatly factual from the necessary. An action effectively defines itself as that which interferes with the course of things; that is, as that which prevents something happening which otherwise would have happened, or, on the contrary, which causes to happen something which would not have happened otherwise. The rule-governed accomplishment of these two types of operation is what allows a relation of causality to be established.
Let us consider these in the order given above, which is also the order of their definitional importance. Let's suppose we are confronted with the habitual succession of an event X and an event Y. The interference consisting in preventing X from happening, in a sufficient number of sequences of events of this type, allows us to establish a counterfactual proposition valid for those sequences outside of the interference situations. More precisely, what should be said if, at this stage as at that of simple observation, one wanted to avoid the well-known difficulties around induction, is that an interference of this type leads us to act under the presupposition of the validity of a certain counterfactual proposition, or else that it leads us to anticipate series of events which would conform to what a counterfactual proposition would affirm. Whatever the case, this counterfactual proposition can take two forms: either ‘if X had not happened, Y would have happened anyway’, or ‘if X had not happened, Y would not have happened’.Footnote 5 The first proposition unambiguously affirms the accidental character of the succession. The second, according to David Lewis (1986: 167), is sufficient in itself to define a relation of causality: ‘If X and Y are two actual events such that Y would not have occurred without X, then X is a cause of Y.’ But if that is so, important consequences follow for the relationship between causality and determinism.
On the one hand, when the only criterion allowing X to be qualified as the cause of Y rests on the counterfactual proposition ‘if X hadn’t happened, Y would not have happened’, the factual proposition associated with a particular occurrence of X matters little, be it ‘X happened and Y followed’ or, on the contrary, ‘X happened and Y did not follow’. According to the type of definition proposed by Lewis and quite widely adopted, X can thus be a cause of Y without nevertheless determining it.
On the other hand, the definition of a cause appears much more plastic than that of a genuine determining factor. Designating an event X such that, had it not occurred, Y would not have followed, is an operation whose result can quite well depend on the degree of precision with which X is circumscribed. Nothing forbids conceiving that, at a first level of approximation, when the prevention of occurrence of a very roughly circumscribed X has led to the non-occurrence of Y, X is labelled the cause of Y; and then, once experimental methods have improved to the point of proving that the selective impediment of a part x of X is enough to bring about the non-occurrence of Y, that only x is labelled the cause of Y. Neither does anything forbid that a limitation of experiment (which may be incompressible) should render it impossible to refine the selective impediment procedure beyond an imperfect degree corresponding to X, without one thereby having to renounce labelling that roughly circumscribed X as, by force of circumstance, a cause of Y.
The other operation allowing a difference to be made between the necessary and the contingent aspect of changes consists of bringing about an antecedent event X which would not have happened otherwise. This operation leads to establishing (presupposing, anticipating) either the counterfactual proposition ‘if X had happened, Y would not necessarily have happened’, or the counterfactual proposition ‘if X had happened, Y would have happened’. The first of these propositions, as we have seen, is compatible with the proposition that X is a cause of Y. But the second adds something important to this affirmation: if it is valid conjointly with that which affirms the non-occurrence of Y when X does not occur, one can infer from it that X is the determinant cause of Y. As one might expect, however, the plasticity of the delimitation of the determinant cause is much slighter than that of a cause pure and simple. One is within one's rights to be satisfied, for a simple cause, with the broad definition X = x1 ∨ x2 ∨ … ∨ xn, even if it proves in the end that only xi satisfies the condition of the non-occurrence of Y in its absence. On the other hand, it is not possible to affirm that X = x1 ∨ x2 ∨ … ∨ xn is the determinant cause of Y if only xi satisfies the condition of the occurrence of Y in its presence (for in circumstances where the occurrence of X translates as that of some xj with j ≠ i, it can happen that Y does not occur). It is this characteristic of extreme selectivity that confers on the concept of determinant cause its value as a regulatory ideal of discrimination for experimental practice: an experimental protocol is habitually not reckoned to have reached a definitive conclusion until it has allowed determinant causes to be isolated. Recognising a limit of principle to the search for determinant causes would incontestably represent a departure from this regulatory ideal. But the major lesson of the preceding analysis is that the more general demand to establish relationships of (simple) causality would not necessarily be affected by it.
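The gap between a Lewis-style cause and a determinant cause can be made concrete with a toy simulation (Python; all probabilities are assumed purely for illustration): an antecedent X without which Y never occurs, but whose occurrence brings about Y only seven times out of ten, satisfies the counterfactual criterion of causality while failing the criterion of determination.

```python
import numpy as np

rng = np.random.default_rng(1)
shots = 100_000

def world(x_occurs):
    """Toy stochastic succession: Y follows X with probability 0.7,
    and never occurs in the absence of X (assumed values)."""
    return x_occurs and (rng.random() < 0.7)

# Interference 1: systematically prevent X from happening.
p_y_without_x = np.mean([world(False) for _ in range(shots)])
# Interference 2: systematically bring X about.
p_y_with_x = np.mean([world(True) for _ in range(shots)])

print(p_y_without_x)  # 0.0 -> 'if X had not happened, Y would not have happened'
print(p_y_with_x)     # ~0.7 -> but 'if X had happened, Y would have happened' fails
# X is therefore a cause of Y in Lewis's minimal sense, without determining Y.
```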
From this discussion we can better appreciate the sense (and the misunderstandings) of the debate on indeterminism and causality in quantum physics. This debate has a dual historical origin: one is found in the probabilistic interpretation of the wave function proposed by Max Born in 1926, the other in the introduction of the ‘uncertainty’ relations by Werner Heisenberg in 1927. Hypostatising his probabilistic interpretation, Born (1983: 54) was inclined to ‘[…] give up determinism in the world of atoms’. As for Heisenberg (1983: 83), he made much of his ‘uncertainty relations’ to declare that ‘[…] quantum mechanics establishes the final failure of causality’. In the face of this double challenge the responses were not long in coming. But they contained gaps, by reason of an imperfect perception of what was missing in quantum physics, whether in relation to classical ideals or with respect to a certain Kantian constituent principle. Among the responses, we will here pick out those that assert that quantum mechanics imposes no true limit on the search for a relation of strict determination between successive events, and those which, while admitting there is such a limit, recommend modifying the task of identifying causes rather than giving it up. Finer distinctions between these positions will be drawn as we proceed.
The first series of objections addressed the very strong conclusion Heisenberg had come to. After all, as Heisenberg himself recognised (ibid.), ‘what is wrong in the sharp formulation of the law of causality, “When we know the present precisely, we can predict the future,” is not the conclusion but the assumption.’ In other words, it is solely because of the incompressible limit imposed by the ‘uncertainty’ relations on the present determination of the couple of variables defining the classical state of a material point that no determined future value can be derived from it. But is that sufficient to justify ‘the ultimate failure of causality’, as Heisenberg was then proclaiming? As early as 1929, Hugo Bergmann showed that this question should be answered in the negative. It is effectively incorrect to infer the invalidity of the hypothetical assertion expressing the causal relationship between present and future states from the indeterminacy of the present state. ‘A logical implication’, Bergmann emphasized, ‘is not refuted by disproving the validity of its premise or hypothesis’ (Jammer 1974: 75). This objection was also addressed to Heisenberg at the beginning of the 1930s by Alexandre Kojève and Ernst Cassirer, apparently independently.
It remains to determine what attitude should be properly adopted when faced with this reopening of the question of causality in quantum physics.
The simplest attitude, one might well say the most reactionary, amounts to rushing into the breach left open by Heisenberg and giving free rein once again to the regulatory ideal of strict determinism. The most direct method of achieving this is to elaborate a programme of theories of hidden variables governed by determinist laws. The idea here is that the impossibility of experimentally defining the set of values of the variables relevant to the evolution of a physical system should not prevent their being held as defined in the theoretical description. This theoretical definition of the variables allows them to be linked among themselves by determinist laws, and the random character of their distribution to be referred back to the initial conditions (and to processes of determinist chaos which exponentially amplify their fluctuations). Since the so-called ‘impossibility of hidden variables’ theorems, such as that of von Neumann, have failed to definitively block this path, and other theorems such as those of Bell or of Kochen and Specker have managed only to identify the general characteristics of this type of theory, the only objections that may still reasonably be addressed to them are of a philosophical kind. What is to be made of theories which extend the field of representation into a domain that they themselves define as meta-empirical by principle? What credit should be accorded to attempts to disassociate the form of objectivity from any possibility of implementing concrete procedures of objectification?
The question of the link between the pursuit of the determinist regulatory ideal and the experimental accessibility of the postulated processes has also been the object of intense discussion among several authors who are nevertheless not declared partisans of hidden variable theories. Heisenberg, to start with, noted in his 1929 Chicago lectures that the ‘uncertainty’ relations do not apply to the past: it is perfectly conceivable to reconstitute a determinate trajectory a posteriori by carrying out two successive measurements of position, as precise as one wishes, and by inferring from them the value of the momentum prior to the second measurement. But, he added, ‘this knowledge of the past is of a purely speculative character, since it can never […] be subjected to experimental verification’ (Heisenberg 1930: 20). In contrast to this verificationist rejection of retrospective reconstructions of trajectories, Popper (1995: 232) observed that the hypothesis of a trajectory (whether retrospective or prospective) is refutable, and that it is therefore not correct to refuse it any ‘empirical significance’. But what Popper forgot to say is that reconstituted or projected trajectories, though in principle refutable, are in quantum physics affected by an extreme form of underdetermination by experiment. Not only does the actual corpus of experiments not allow one to choose in practice among the multiplicity of trajectories not refuted by this corpus, but fundamental limitations of experimentation (inscribed in the ‘uncertainty’ relations, or implied by the contextualist character of the hidden variable theories capable of reproducing the predictions of quantum theory) render this choice impossible in principle. The theoretical descriptions of trajectories by determinist law carry in the end the speculative tenor denounced by Heisenberg, and that independently of his initially verificationist option.
Overcoming this element of the arbitrary, while still having recourse to retrospective reconstructions, was the principal goal that Grete Hermann assigned herself. This young German philosopher and mathematician developed her conceptions in contact with Werner Heisenberg's research group.Footnote 6 The interest of her work resides in its being conceived from one end to the other as an attempt to save Kantian epistemology from its alleged refutation by twentieth-century physics (and in particular by Heisenberg's emphatic sidelining of the principle of causality (Heisenberg 2002)). The strategy underpinning Hermann's work consisted in clearly dissociating the linking of events by means of a rule from the possibility of using that rule to predict future events from past events. Hermann insisted that the fact that exact prediction of events is often impossible in quantum physics does not prevent their being linked together according to a rule that is identifiable a posteriori. In contrast to Heisenberg, however, her retrospective reconstruction does not bear exclusively upon a corpuscular trajectory. It combines, following the pragmatic method recommended by Bohr for the application of the concept of complementarity, descriptive moments employing the particle representation and other descriptive moments employing the wave representation (Hermann 1996: 93–94, 97–100). The advantage of such a reconstruction over Heisenberg's is that it presents itself as in part corroborable by subsequent experiment. It is corroborable because, by tracing back the chain of determinations of an agent of measurement (say a photon) from an ultimate phenomenon (say the impact of this photon on a photographic plate), one can retrospectively assign a wave function to the measured object (say an electron); and because this wave function, in its turn, leads to testable (probabilistic) predictions. The problem, as pertinently pointed out by L. Soler (ibid., 128ff.), is that, in the same manner as the Heisenberg-Popper corpuscular reconstructions, Hermann's mixed (wave and corpuscular) reconstructions are in principle underdetermined by the attestation procedures: a large number of distinct sequences of alternating corpuscular and wave processes lead to exactly the same probabilistic predictions.
The most interesting aspect of Hermann's considerations no doubt resides, then, in her attempt to explain the dissociation between strict predictability and the applicability of a principle of causality to quantum phenomena. In her view, if the consequentiality rule that governs microscopic processes cannot serve to predict them, it is because it determines them only relatively to the result of the final measurement from which one can then reconstruct them. The cause, wrote Hermann (ibid., 94; cf. 90, 100, 119), ‘belongs to a process only relatively to a link to the observation’. In other words, the reason why a predictive use of the principle of causality is impossible in microscopic physics is that one of the principal preliminary conditions of its establishment, namely the condition of detachment of a certain set of events or properties with regard to their modes of manifestation, is not fulfilled. The unpredictability of the phenomena is here a consequence of their contextuality. Unfortunately, Hermann did not properly take in the lessons provided by her own analysis concerning the order of priority among the moments of the procedure of constituting objectivity. This lack of insight led her to a purely allegorical application of the causality principle, making it bear exclusively on retrospective reconstructions of doubtful validity. But how did she get to that point?
To recapitulate, the end-point at which Hermann arrived, whether she was satisfied with it or not, was that the possibility of predicting phenomena, which alone permits the experimental corroboration of causal relations, rests on a prior constitution of objectivity in the form of a carried-through procedure of detachment/stabilisation of properties or events. However, under the influence of a residual Kantian orthodoxy, she persisted in inverting the order of priorities that she herself had established. She made the objectivity of microscopic physics depend on an application of the principle of causality to reconstructed, part-corpuscular part-wave, ‘intermediary processes’. The problem is that the intermediary processes in question are fictive, and applying the principle of causality to them is thereby itself a fiction. We should recall that constituting an objectivity, in transcendental philosophy, supposes above all operations of synthesis of perceptions, or else of synthesis of experimental phenomena. But Hermann refers only to syntheses of ‘interphenomena’ in the Reichenbachian sense; in other words, to syntheses which bear throughout on interpolated representations, partially arbitrary in so far as they are not individually testable, between the phenomena proper. She thus disengages the procedure of constituting objectivity from the phenomena proper, and minimises the fact that it is precisely because of the unavailability of the most elementary foundation for the constitution of objectivity, that is, because of the impossibility of applying the antecedent condition of detachment/stabilisation of properties through ‘triangulation’ of several simultaneous or successive phenomena, that she resorts to this extreme solution.
Drawing the lessons of Hermann's attempts, we must now restart on sounder bases by turning back to the experimental phenomena themselves. As Cassirer (1956: 125) writes, ‘From the point of view of physics, however, only those values are permissible that can be determined by a certain mode of measurement which must be accurately stipulated. Only through this limiting condition does the “causal principle” attain a physically comprehensible significance – and its legitimate application remains confined to this condition’. But of course, once one has forbidden oneself from restricting the field of application of constitutive methods to a nether world of interphenomenal processes, one cannot avoid accepting a certain limitation of the regulatory ideal of strict determination. But does that necessarily imply a destructive renunciation of the very task of constituting objectivity? The current view is that it does. Hermann thus considered that, from a Kantian perspective, abandoning the search for determinant causes was equivalent to abandoning objectification. Numerous passages from the Critique of Pure Reason can seemingly be quoted in support of this thesis, for example this one: ‘The proposition that nothing happens through blind chance […] is therefore an a priori law of nature’ (Kant 1964: 248 [A228/B280]). If one considers that a merely probabilistic association between antecedents and consequents does arise from ‘blind chance’, one is tempted to infer that, according to Kant, it is only by means of the establishment of a determinist law that there can be any question of a nature. Gordon Brittan (1994) has nevertheless warned against this assimilation and against the hasty conclusion to which it leads. According to him the proposition ‘everything that happens is hypothetically necessary’, by which Kant (ibid.) summarises the principle of causality, is sufficiently broad to accommodate a causal relationship in the minimal sense defined by David Lewis, and hence a merely probabilistic link. The ‘blind chance’ that Kant excludes does not equate to the simple fact of the intervention of probabilities less than 1, but to the absence of any possibility of linking successive events through a rule, even one that is not strictly determinant. The idea of laws bearing mediately on probabilities rather than immediately on events was moreover not completely foreign to Kant. Evidence for this is found in a reference to ‘constant natural laws’ (Kant 1971: 41) governing the overall number of marriages or changes in the weather, which are yet unpredictable on an individual level; and also in an observation according to which the calculation of probabilities contains ‘quite certain judgements on the degree of possibility of certain cases under given homogeneous conditions, in which the sum of all must happen quite infallibly according to the rule, although in respect of every particular incident the rule is not sufficiently determined’ (Kant 1953: 138).Footnote 7
To avoid the trap of a thorough-going (and sometimes misconceived) literal application of Kant's prescriptions to quantum physics, we now need to come back to the most elementary process of objectification of a change in phenomena, and then consider how (and to what extent) it can be made to function with respect to often random microscopic phenomena. The problem that presents itself here, as we have seen, is to discriminate between (a) a component of alteration due to the more or less poorly mastered interaction of the milieu explored and the apparatus of exploration, and (b) a component of alteration that nothing prevents from being treated as if it were attributable in proper to the milieu explored. In short, one seeks a separation between an element of the change that is irremediably contextual and an element that is decontextualised. When nothing stands in the way, in a certain class of experiments (let's call it the test class), of controlling the antecedents sufficiently to establish associations of strict determination between them and consequents of a given type, such a separation is relatively easily obtained. The rule established within the test class can in effect serve, in counterfactual fashion, as a discrimination criterion outside the test class. The element of alteration conforming to this previously established rule is imputable to the objects themselves, while the additional element of alteration is imputable to modifications (controlled or otherwise) of the relation between the objects and the modes of investigation. But how can the same result be arrived at when the degree of control of the antecedents is limited in principle? How can this point be reached in the validity domain of quantum physics, where the fixing of an experimental preparation involves an incompressible indetermination bearing on couples of values of conjugate variables, as per the Heisenberg relations?
Let's suppose for example that an experimental preparation (say a light source and a collimator) is characterised by a range of values around x0 for the spatial co-ordinate x, and by a corresponding range of values for the variable ‘component along Ox of the momentum’. Suppose also that we measure a certain value x* of the variable x at a point sufficiently far away from the preparation. How can a discrimination be made, in (x0 − x*), between a contextual variation of x, depending on the unanalysable momentary configuration of the totality of the experimental set-up (including the preparation and the measuring apparatus), and a decontextualised variation, justifiably qualifiable as objective, of the same variable?
The answer to this question is that one generally cannot do so if one keeps to this single measured value. But everything changes when interest is directed to a distribution of numerous values of the variable x, obtained by reiterating the 'same' experimental disposition and the 'same' distant measurement of x. The statistical parameters of the distribution (such as the mean quadratic separation) do in fact obey laws that are independent of circumstances, attested by way of a test class of maximally controlled experimental preparations.Footnote 8 These distribution laws can take the form either of Schrödinger's equation governing the evolution of a wave function, or of Hamiltonian equations bearing on self-adjoint operators called 'observables'. In the example developed here, laws of this type issue in the following expression for the mean quadratic separation of the value of x as a function of the time t that has elapsed since the fixing of the disposition:

$$\Delta x(t) = \Delta x(0)\,\sqrt{1 + \left(\frac{h\,t}{4\pi\, m\, \Delta x(0)^{2}}\right)^{2}}$$

(where h represents Planck's constant and m the mass of the particle concernedFootnote 9).
Thus one is justified in asserting that, in a given variation of the value of x between the times 0 and t, the 'objective' element of the change (that is, the element that is decontextualised and universal) is the one that contributes to the growth of the mean quadratic separation according to the above-mentioned law. And the contextual element of the change, depending unanalysably on the overall configuration of each particular experiment carried out, is all the rest.
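To make the content of this statistical law more tangible, here is a minimal numerical sketch in Python. It assumes the standard free-wave-packet spreading law in the form reconstructed above; the function name and the sample values are purely illustrative assumptions, not drawn from the article.

```python
import math

H = 6.62607015e-34  # Planck's constant, in J.s

def mean_quadratic_separation(t, delta_x0, m):
    """Spread of the co-ordinate x after a time t, for a particle of mass m
    prepared with an initial spread delta_x0 (standard free spreading law)."""
    return delta_x0 * math.sqrt(1.0 + (H * t / (4.0 * math.pi * m * delta_x0**2))**2)

# Illustrative values: an electron prepared with a 1 nm spread.
m_e = 9.109e-31  # electron mass, in kg
for t in (0.0, 1e-15, 1e-12):
    print(f"t = {t:.0e} s -> delta_x = {mean_quadratic_separation(t, 1e-9, m_e):.3e} m")
```

On such a schema, the decontextualised element of the change is read off a curve of this kind, not off any individual measured value.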
These observations on the impossibility of objectifying change on any plane but the statistical have as their theoretical correlative, as we have seen, the application of determinist laws solely to the generative symbols of probabilistic evaluations, such as state vectors. Coupled with the standard reading of the Second Analogy and with considerations on the two other Analogies of Experience (Bitbol 1998), such observations lead to an acceptance that in quantum physics the locus of objectivity has been displaced from the ordinary space in which the phenomena occur to the Hilbert space in which state vectors evolve. This purely formal status of the object in quantum physics is, it is true, far removed from the residual concrete representations still carried, despite all the associated paradoxes and difficulties, by the vehicular mode of expression of physicists in terms of 'particles' subjected to 'collisions'. This cannot but raise a certain perplexity. What is this objective entity with which experimental contact can only be established on a statistical level? Is it not possible to find for it a more direct experimental correlative which, by analogy with that of 'particles', would put in play an individual measurement process? It was long thought that this question could only be answered in the negative. But in 1993, physicists of Yakir Aharonov's team brought to light a class of 'protective' or 'adiabatic' experimental procedures potentially capable of giving access to mean values or mean quadratic separations from a single measurement (Aharonov, Anandan and Vaidman 1993; Dickson 1995). The determinist law governing the evolution of the state vectors thus translates concretely into a strict predictability of the distributive values supplied by adiabatic measurements. The probabilistic anticipation provided by the state vector henceforth has a direct and individual experimental fulfilment.
In summary, we have now shown that the second moment of the process of objectification, assured by the application of the principle of causality, has a definite counterpart in quantum physics, and this despite the failure (or the limitation to a merely 'unsharp' degree) of the first moment of the process of objectification, that of 'detachment/stabilisation'. This counterpart fulfils to a satisfactory extent its elementary function, which is to isolate a decontextualised fraction of the change by separating out its contextual fraction. It accommodates a broad, though not necessarily determinist, conception of the causal link between successive events, and delegates the shape of strict determination to the relation between successive statistical distributions in the Hilbert space of the state vector representing a given experimental preparation. It does not demand application to a meta-empirical universe, whether this be that of hidden variables and processes, or that of retrospective reconstructions of interphenomenal sequences.
3. Probabilities and objectivity
In the previous section, we identified both the likely origin and the principal consequence of the non-predictability of individual (non-adiabatic) microphysical phenomena. The origin of the non-predictability, sketched in outline by G. Hermann, illustrated by the image of the 'disturbance' of the object by the measuring agent, and confirmed on a more general theoretical level by Paulette Destouches-Février (1951: 260–280), is that the potential determinant causes of a phenomenon cannot be detached from it; they are relative to the very circumstances of its production in a measurement.Footnote 10 The double consequence of non-predictability is that: (1) the symbolism of quantum theory has an essentially probabilistic character, and (2) the procedure for the extraction of a decontextualised fraction of change is deflected from the individual phenomena to their statistical distributions. The task that remains to be achieved in this section consists of establishing a precise link between the (contextual) origin and the (structurally probabilistic) consequence of non-predictability. More precisely, it consists of showing that the form of the quantum calculation of probabilities carries the mark of the contextuality of the phenomena on which it bears.
An initial approach, which I will simply summarise here after having expounded it in detail elsewhere (Bitbol 1997, 1998), starts out from the dual constraint of the contextuality of phenomena and the demand for unicity of the intercontextual predictive tool, in order to arrive at the base structure of quantum mechanics. Within this approach, it may first be noted that the metacontextual languages with the capacity to unify the contextual experimental languages typical of microphysics are isomorphic to the 'quantum logic' of Birkhoff and Von Neumann (Heelan 1970). It can then be shown, on the basis of a generalised version of Pythagoras's theorem (Février 1956; Hughes 1989), that the formalism of the vectors of a Hilbert space, complemented by Born's rule which permits the calculation of probabilities from a state vector, takes full account of the contextuality of the phenomena to be predicted and satisfies the unicity clause of the predictive tool (it is even one of the simplest formalisms of this type).
A second approach, to which I would like to devote particular attention in this section, is the exact reciprocal of the first. Instead of ascending, as the first approach does, from the fact of contextuality to the structure of quantum mechanics, it sets itself the goal of descending from the structure of the quantum calculation of probabilities to the non-decontextualisation of the phenomena that it serves to anticipate.
As has been shown by Itamar Pitowsky (1994), this second, descending, approach to the probability–contextuality linkage can rest on the analysis undertaken by George Boole of the constraints that logic exercises on the calculation of probabilities. What Boole (1958, 1952) emphasises is that, when one assigns probabilities to a set of events, one cannot avoid taking account of the logical links between events; one must for example take into account that an event represents the conjunction of several others, or that it is implied by another. When we assign probabilities, he wrote, 'we are not at liberty to proceed arbitrarily. We are subject, first, to the formal Laws of Thought, which determine the possible conceivable combinations' (Boole 1952: 390). That means that we cannot content ourselves with imposing on probabilities the requirement that each lie individually between 0 and 1. To that must be added relations between them, which guarantee that they respect the logical constraints proper to the universe of events with which they are associated term by term. One of these relations was enunciated much later by Kolmogorov (1950), in the form of an equality, as the third axiom of the classical theory of probabilities. According to this axiom, 'the probability of the union of two events is the same as the sum of the probabilities, if the two events are mutually exclusive'. But Boole proposed a much more general type of constraint relationship between probabilities, in the form of inequalities extending to all permitted types of logical relations between events (and in particular to cases where the events are not mutually exclusive, that is, where they do have elements in common). The non-satisfaction of these inequalities would in his view indirectly reveal a major departure from the elementary logic of the events and properties upon which the probabilistic evaluations are supposed to bear. In the terms inspired by Kant that Boole chose, if these inequalities were not satisfied, that would mean that the 'conditions of a possible experience' were not respected by the chosen probabilistic assignment.
A quotation in extenso from Boole (1952: 392) is worth giving at this point in the exposition:
Let $p_1, p_2, \ldots, p_n$ represent the probabilities given in the data. As these will in general not be the probabilities of unconnected events, they will be subject to other conditions than that of being positive proper fractions, viz. to other conditions beside:

$$0 \leq p_1 \leq 1, \quad 0 \leq p_2 \leq 1, \quad \ldots, \quad 0 \leq p_n \leq 1.$$

Those other conditions will, as will hereafter be shown, be capable of expression by equations or inequations reducible to the general form:

$$a_1 p_1 + a_2 p_2 + \cdots + a_n p_n + a \geq 0,$$

$a_1, a_2, \ldots, a_n, a$ being numerical constants which differ for the different conditions in question. These, together with the former, may be termed the conditions of possible experience. When satisfied they indicate that the data may have, when not satisfied they indicate that the data cannot have resulted from actual observation.
Taken literally, these phrases mean that when someone gives us a list of values p1, p2, …, pn, we can be sure that they cannot have resulted from an experimental evaluation of frequencies of determinations in a single sample of objects if they do not respect inequalities of the above form. A difficulty of this text of Boole's is that it closely mingles, though in comprehensible fashion, considerations that are logical, gnoseological and ontological in character. The logical considerations bear upon the connections (conjunction, disjunction, implication, etc.) between propositions referring to events; the gnoseological considerations concern what it is or is not possible to observe as frequencies following random draws; and the ontological considerations, implicitly supposed as subtending the two previous sets of considerations by way of the unicity clause of the sample, relate to the articulation of the properties in the objects of the sample that are submitted to a random draw. Appropriately drawing the consequences of the third sort of considerations, Peter Mittelstaedt (1998: 93) has accentuated the transcendental significance of Boole's inequalities, declaring that they represent nothing less than the conditions to be fulfilled for numbers to be '[…] considered as probabilities for properties […] of some object of experience'. When they are satisfied, they carry the probabilistic trace of a prior constitution of objectivity in the first sense of detachment/stabilisation of properties of individual objects. Inversely, when they are not satisfied, they lead one to suspect that a constitution of objectivity of this type has not been achieved.
Two examples will allow us to understand the close linkage between Boolean inequalities and properties of objects of experience. The first, very simple, example concerns the type of experiment involving drawing balls from a barrel, which commonly serves as a paradigm for probabilistic evaluation. Suppose that coloured balls are being drawn from a broad sample, and one is interested in their colour (red, white or black) and the material they are made of (wood, metal, stone). According to the standard rules of the calculation of probabilities, the probability that a ball is red or made of wood is equal to the sum of the probability that it is red and the probability that it is wooden, minus the probability that it is both red and wooden. If we index the predicate 'red' by 1, and 'wooden' by 2, this probability for a ball to be red or wooden may be written:

$$P(e_1 \lor e_2) = p_1 + p_2 - p_{12}$$
Being a probability, P obeys the general condition of being less than or equal to 1.
One thus obtains the following inequality, constraining the relationship between p1, p2 and p12, and having exactly Boole's requisite form:

$$p_1 + p_2 - p_{12} \leq 1 \qquad [1]$$
It remains to be understood why any list of data that did not obey that inequality would depart from the Boolean domain of a 'possible experience'. To that effect, let us consider a list of frequency data that massively violates the above inequality: p1 = 1, p2 = 1, and p12 = 0. Read in the ontological mode, this list of data signifies that all the balls are red, and that all the balls are wooden, but that no ball is both red and wooden. It refuses the conjunction of the two predicates in any one ball, while imposing that each of them is attributed to every ball separately. It thus accords neither with classical (Boolean) logic, nor with the structure of objects that this latter presupposes. Only the adoption of a non-classical logic, and commitment to an ontology conforming to this alternative logic, would open the possibility of not automatically rejecting the above list of data into the domain of error. Read in the epistemological mode, however, this list lends itself to both more numerous and more nuanced interpretations. Within the context of a conception of experience prejudged in favour of a preconstituted objectivity, that is, one which holds experience obligatorily to be a faithful reflection of the pre-existent properties of permanent objects, the conclusion previously obtained on the ontological level would, it is true, be immediately transposable to the level of knowledge. Since the list of data accords neither with classical logic nor with the ontology which subtends it, no 'experience' of objects conforming to that logic and that ontology could in that case correspond to it; it would thus depart from the context of a 'possible experience' in the limited sense that it could not translate the specular experience of a world endowed with a Boolean logic and ontology. But everything changes if we consider either that the experimental phenomenon is the fruit of a reciprocal and 'disturbing' interaction between the object and the process of investigation, or, better, that one should not too rigidly prejudge a constitution of objectivity when beginning to interpret phenomena, for the sound reason that a constitution of objectivity hangs upon the universally valid type of organisation to which these phenomena can be submitted at the end of the process. If it is accepted, firstly, that the phenomena result from a reciprocal disturbing interaction, nothing prevents all the objects subjected to the experimental procedure for evaluating colour from giving the result 'red', nor all those subjected to the experimental procedure for evaluating material from giving the result 'wooden', while none of those simultaneously subjected to both experimental procedures gives the result 'red and wooden' (all that is required is to imagine that the procedure for evaluating the material affects the tonality accessible to the procedure for evaluating colour). A list of data violating Boole's inequality does not therefore depart from the context of possible experience of a world furnished with a Boolean logic and ontology, as long as the experience in question is not passive and specular but active and 'disturbing'.
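To fix ideas, here is a minimal Python sketch of Boole's criterion for the two-predicate case. The function name and the sample frequencies are illustrative assumptions; the check itself is simply inequality [1], together with the elementary coherence conditions that a conjunction be no more probable than either conjunct.

```python
def satisfies_boole_pair(p1, p2, p12):
    """Check the two-predicate 'conditions of possible experience':
    each value is a proper fraction, the conjunction is no more probable
    than either conjunct, and inequality [1] holds: p1 + p2 - p12 <= 1."""
    proper = all(0.0 <= p <= 1.0 for p in (p1, p2, p12))
    coherent = p12 <= min(p1, p2)
    return proper and coherent and (p1 + p2 - p12 <= 1.0)

# Frequencies that could come from a single sample of balls:
print(satisfies_boole_pair(0.6, 0.5, 0.3))   # True
# The massively violating list discussed above:
print(satisfies_boole_pair(1.0, 1.0, 0.0))   # False: outside 'possible experience'
```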
Nor is a list of frequency data violating Boole's inequalities the sign of a departure from the context of all possible experience if it is admitted that the phenomena are the initial material of a project of constitution of objectivity, rather than the manifestation (whether faithful or deformed by 'disturbance') of a universe of preconstituted objectivity. It can mean in this case that the modalities of the constitution of objectivity by which one must proceed from the data in question are profoundly different from those habitually applied to data that do obey the Boolean inequalities. It may also mean that no procedure of constitution of objectivity of the first order can be successfully achieved, but that a second-order objectification procedure (that is, a procedure for the objectification of the methods of production and anticipation of phenomena that are not objectifiable as properties of objects) remains conceivable. The number of degrees of freedom of a 'possible experience' is, in sum, much greater than Boole envisaged. The non-satisfaction of the probabilistic conditions posed by Boole nevertheless remains an extremely valuable indicator of the inadaptability of a domain of phenomena to the common presupposition of a preconstituted objectivity.
A second, somewhat more complex example (Pitowsky 1994) will now allow us to realise that the Boole inequalities form a 'genus' of which the Bell inequalities are a 'species' (the fortuitous play of assonance between the two proper names can provide a useful mnemonic for retaining the parallel). The example will show at the same time how the issue of non-locality came to be introduced, on the occasion of the great stocktaking imposed by the lack of conformity between (a) the frequencies of micro-level phenomena and (b) the belief that these phenomena faithfully manifest the properties of objects obeying a Boolean logic and ontology.
The first stage, from this perspective, consists of deriving a Boolean inequality holding for three events able to incorporate common elements, instead of two as previously. Following the classical calculation of probabilities, with the three events respectively indexed by 1, 2 and 3, the probability of their disjunction (e1 or e2 or e3) is:

$$P(e_1 \lor e_2 \lor e_3) = p_1 + p_2 + p_3 - p_{12} - p_{13} - p_{23} + p_{123}$$
As with all probabilities, this is less than or equal to 1:

$$p_1 + p_2 + p_3 - p_{12} - p_{13} - p_{23} + p_{123} \leq 1$$
Thus, a fortiori (since $p_{123} \geq 0$), we get:

$$p_1 + p_2 + p_3 - p_{12} - p_{13} - p_{23} \leq 1$$
Now, let us substitute for e2 its complement, denoted ¬e2, in the event universe. That implies that for p2 we substitute 1 − p2; for p12 we substitute p1 − p12; and finally for p23 we substitute p3 − p23. The preceding inequality thus becomes:

$$p_1 + (1 - p_2) + p_3 - (p_1 - p_{12}) - p_{13} - (p_3 - p_{23}) \leq 1$$
or:

$$1 - p_2 + p_{12} - p_{13} + p_{23} \leq 1$$
Or further, to attain the exact form of Boole's inequalities:

$$p_{12} + p_{23} - p_{13} - p_2 \leq 0$$
We can further simplify the above inequality by noticing that p2 − p23 is none other than the conjoint probability of the event e2 and of the complement of e3. If we call this conjoint probability p2¬3, the inequality becomes:

$$p_{12} \leq p_{13} + p_{2\neg 3} \qquad [2]$$
This latter inequality has exactly the form of a variety of Bell inequality that Bernard d'Espagnat (1975, 1978, 1979, 1984) proved from certain elementary ontological hypotheses; hypotheses as elementary as the possibility for permanent objects (for example, particles) to possess three predicates conjointly.
Let us demonstrate this. Let there be three predicates, denoted 1, 2, 3, and their negations, denoted ¬1, ¬2, ¬3. Let us denote by N(I, J, K) the number of particles possessing the three predicates I, J, K; and by N(I, J) the number of particles possessing the two determined predicates I, J, with the third being left free. We then have:

$$N(1,2) = N(1,2,3) + N(1,2,\neg 3)$$
$$N(1,3) = N(1,2,3) + N(1,\neg 2,3)$$
$$N(2,\neg 3) = N(1,2,\neg 3) + N(\neg 1,2,\neg 3)$$
From these expressions, a very simple calculation (each of the two terms composing N(1,2) figures in one of the two sums on the right-hand side) arrives at the inequality:

$$N(1,2) \leq N(1,3) + N(2,\neg 3) \qquad [3]$$
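For readers who prefer to see this 'very simple calculation' exercised mechanically, here is a brute-force Python check; the function name, population sizes and number of trials are arbitrary illustrative choices. It distributes a population at random over the eight joint predicate classes and verifies that inequality [3] is never violated:

```python
import itertools
import random

def check_inequality_3(trials=10000, seed=0):
    """Brute-force check of inequality [3]: N(1,2) <= N(1,3) + N(2,not-3),
    for random populations in which every particle jointly carries a
    truth value for each of the three predicates."""
    rng = random.Random(seed)
    classes = list(itertools.product((True, False), repeat=3))  # 8 joint classes
    for _ in range(trials):
        n = {c: rng.randrange(100) for c in classes}            # random population
        N12 = sum(v for (a, b, _), v in n.items() if a and b)
        N13 = sum(v for (a, _, c), v in n.items() if a and c)
        N2not3 = sum(v for (_, b, c), v in n.items() if b and not c)
        if N12 > N13 + N2not3:
            return False
    return True

print(check_inequality_3())  # True: no classical population violates [3]
```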
From this inequality [3] on the numbers of particles carrying three predicates there derives immediately the inequality [2] on the probabilities of triplets of events, and also another inequality of the same form on the frequencies of phenomena found over a random series of experiments. At least, these two latter inequalities are valid if certain precise conditions are fulfilled: (a) if the statistical sampling applied during the random selection is not biased; (b) if neither of the putative properties is disturbed by the operation involved in manifesting the other; (c) if it is even meaningful to posit at the outset 'a well-defined distribution of properties over some population' (Pitowsky 1994: 107). The violation of inequalities of type [2] will therefore reciprocally carry important lessons. Setting aside the still open possibility (a) of a systematic bias in the statistical sampling, it would mean either (b) that the relationship between properties and phenomena is not simply 'specular' but interactive and 'disturbing', or (c) that the very presupposition of an always-already constituted objectivity (the belief that there exists a statistical distribution of objects furnished with properties detached from the processes eliciting their 'manifestation') is no longer operative.
Now it is known that the quantum calculation of probabilities does not respect the type [2] inequalities, and that experiments in microphysics, in conformity with the quantum predictions, do not respect the corresponding inequalities on the frequencies of phenomena either. The question of what the violation of these inequalities means is thus sharply posed. Some researchers, though rather few in number, have invoked the option of statistical bias to explain their experimental violation (Selleri 1994). But besides the fact that there is no indicative evidence pointing in this direction, and that this explanation would suppose a massive invalidation of quantum theory, which is scarcely plausible at this point in time, it has recently proved possible to demonstrate that conditions of the same type as the Bell inequalities, but valid for a single random draw, are equally violated, both by the quantum calculation of probabilities and by appropriate experiments (Bouwmeester et al. 1999). In the presence of such single-random-draw experiments, the 'out' of statistical bias no longer exists. It remains therefore to explore the two other options, (b) and (c).
Explanation (b), which brings into play a 'disturbance' of one of the properties by the measuring of the other, was proposed very early on. The problem is that the quantum calculation of probabilities implies the violation of the type [2] inequalities in every case, including when no disturbance seems able to be invoked at the local level; for example, when (as in the famous EPR thought experiment) interest is no longer directed to one particle on which two measurements are effected, but to two distant particles with correlated properties, on each of which a single measurement is carried out. These quantum predictions, implying the violation of Bell inequalities right up to and including locally non-disturbing situations, have been corroborated by a fairly large number of experiments, such as those conducted by Alain Aspect and his group. Several experiments of this family have even been conceived so as not to authorise, between the two distant particles, any influence at a speed less than or equal to the speed of light in a vacuum (Aspect, Dalibard and Roger 1982). In such a situation, as is clear, the only way of continuing to support the validity of the 'disturbance' explanation is to assert the existence of 'supraluminal' disturbing influences, whose nature remains a mystery, and which can have no application as far as the instantaneous transmission of information is concerned. If one wishes to avoid introducing such a speculative element into the interpretation of quantum mechanics, there remains, to account for the violation of the type [2] inequalities, only option (c): recognising the insurmountable relativity of the phenomena vis-à-vis the set of conditions of their manifestation. It is in this sense that one can say that the quantum calculation of probabilities bears the mark of an incompleteness of the procedure of detachment/stabilisation of properties from out of (microscopic) phenomena, and that it stands in fundamental distinction on this point from the classical calculation of probabilities implemented in the majority of other stochastic theories.
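The size of the quantum violation can be illustrated numerically. The sketch below assumes the textbook singlet-state prediction that two distant spin measurements along directions separated by an angle θ both give 'up' with probability ½ sin²(θ/2), and it reads the three joint frequencies as the terms p12, p13 and p2¬3 of inequality [2], via the usual EPR identification of a distant outcome with a predicate of a single object; the angles chosen are purely illustrative.

```python
import math

def p_joint(theta):
    """Singlet-state probability that both distant measurements give 'up'
    along directions separated by the angle theta (standard quantum prediction)."""
    return 0.5 * math.sin(theta / 2.0) ** 2

# Three measurement directions spaced at 0, 60 and 120 degrees.
p12 = p_joint(math.radians(120))     # plays the role of p12 in inequality [2]
p13 = p_joint(math.radians(60))      # plays the role of p13
p2not3 = p_joint(math.radians(60))   # plays the role of p2(not 3)

print(p12, p13 + p2not3)             # 0.375 versus 0.25
print(p12 <= p13 + p2not3)           # False: inequality [2] is violated
```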
Admittedly, this family of contextualist readings of the violation of the Bell inequalities has long been known (it can be traced back to Bohr). But it has often been discussed, by its adversaries as well as by some of its supporters, outside of the framework of transcendentalist thought adopted in this article. From a transcendentalist perspective, all phenomena are held to be constitutively relative to their conditions of manifestation, and the structure of their concomitant or sequential sets in some circumstances permits a first-level objectification, while in others it does not authorise this and impels the search for secondary forms of objectification. Objectivity is conceived of here as a 'work to be achieved' on one level or another, and not as a given which can sometimes be lacking. From a more common, non-transcendentalist perspective, the conception of phenomena as a simple manifestation of a property of an object is the norm, and any attested divergence with respect to that norm is the sign that one has failed to grasp reality such as it is, independently of the methods of its exploration. Hence, if the structure of the phenomena predicted by a theory (perhaps in a probabilistic mode) does not accord immediately with the presupposition of a preconstituted objectivity of the usual type, only two readings of this theory are envisageable. Either one succeeds in conceiving a reality of a non-usual type able to be considered as described by the theory in question, and in this case one can say that the theory has been interpreted in the realist mode. Or else one finds obstacles to the conception of such a reality (for example, because the divergence that it implies with respect to familiar forms appears excessive, or because it proves too far removed from any possibility of experimental test), and in this case one has to be content with a non-realist, purely predictive interpretation of the theory, with, in the back of the mind, the thought that a future theory will manage better than the present one to arrive at a faithful description of the real that is still being sought.
From this flows the form taken by discussions of the violation of the Bell inequalities in the general context of non-transcendentalist thought. The discussion swings essentially between the conception of a massively non-local reality and a pragmatically localist and non-realist interpretation of quantum mechanics. The most complete version of the first position is Bohm's theory of non-local hidden variables, whose very largely speculative tenor we have already touched on. As for the second position, it was envisaged in the following terms by Einstein (who suspected that Bohr also supported it): 'There seems to me no doubt that those physicists who regard the descriptive methods of quantum mechanics as definitive in principle would react to this line of thought in the following way: they would drop the requirement for the independent existence of the physical reality present in different parts of space; they would be justified in pointing out that the quantum theory nowhere makes explicit use of this requirement' (Einstein 1971: 172 [letter of 5 April 1948 to Max Born]). The availability of a 'non-realist' (even 'anti-realist', in a sense close to that of Michael Dummett) option is thus recognised; but it is often perceived as being acceptable only with difficulty, for at least two types of reasons, which we are now going to evaluate from a transcendentalist point of view.
There is firstly the fear that the 'antirealist' choice would be accompanied by the loss of a value judged essential by science: the horizon of a convergence, beginning from appearances, towards a reality more profound than them. Einstein indicated unambiguously, in a comment immediately following the quotation above, that this was a loss he would not accept, and that because of this he held it necessary forthwith (even in the absence of any current threat of empirical refutation of quantum mechanics) that a search be undertaken for a post-quantum theory immediately interpretable as a description of the real.Footnote 11 But everything changes in a transcendentalist perspective, where the directing value of objectification is methodologically dissociated from potential ambitions to reveal an intrinsic reality. For here it matters little on what level the objectification has been accomplished (whether on the first level of spatio-temporal phenomena, or on the second level of the formal instruments of prediction), provided that it has been accomplished; and it equally matters little whether or not everything happens as if those formal instruments described a reality encompassing structures that are independent of the operations of experimental investigation. From this well-understood transcendental point of view, quantum mechanics emerges as an exemplary success, and there is therefore no reason to want to go beyond it, as long as new modalities of experimental exploration have not allowed the emergence of phenomena transcending the limits of the context objectified by it.
Another argument opposed to the 'antirealist' reading of quantum mechanics starts out from a reflection on Bell's theorem. Since the Bell inequalities can be derived from two major premises, commonly called the 'locality hypothesis' and the 'reality hypothesis', their violation leads to a questioning either of the one, or of the other, or of both at the same time. Researchers who reject the hypothesis of locality but retain the hypothesis of reality are referred to as 'realists'; those who reject the hypothesis of reality and retain that of locality are labelled 'localists'; and those who reject both hypotheses are called 'nihilists'. If we set aside the so-called 'nihilist' option, generally perceived as being too indiscriminate in its double rejection, there remain to be discussed the 'realist' and 'localist' options. The 'localists' can easily rely on the clearly established impossibility of using the EPR correlations for purposes of instantaneous transmission of information; if such is the case, they observe, the so-called non-locality has no other function but to explain the very EPR correlations which are thought to reveal it. Non-locality appears in this way to be a purely ad hoc explanation. But the 'realists' respond to this with an accusation of inconsistency in the 'localist' position (Chiao, Kwiat and Steinberg 1995; Chiao and Garrison 1999). The localists, they stress, establish an arbitrary dividing line between the properties concerning microscopic entities, which are purely and simply replaced by quantum 'observables', and the properties of space-time, which are treated on the macroscopic scale in the classical mode defined by the Special Theory of Relativity. This dividing line is the companion of the one semi-conventionally drawn by Bohr between the (microscopic) jurisdiction of quantum theory and the (macroscopic) jurisdiction of current language complemented by a terminology borrowed from classical physics. It comes down to being antirealist in the microscopic domain and realist in the macroscopic realm. 'Localism' is thus taxed with inconsistency and incompleteness, since the critical position it adopts on the question of realism is only partial, and leaves aside an entire (macroscopic) part of the world. However, it is simple for 'localists' to reply to these objections by relying on a transcendentalist approach. For, from this latter perspective, it would no longer be a matter for them of laying down a rigid dividing line between one (microscopic) domain where one is limited simply to the prediction of phenomena, and another (macroscopic) domain giving rise to a faithful description of the real; instead, 'localists' would assign themselves the task of distinguishing, in a plastic and stratified mode, regions of constituted objectivity and conditions of background objectification. In the case of quantum physics, the region of constituted objectivity is essentially the Hilbert space, with its state vectors, its operators, its amplitudes, etc., and the background objectification condition is the structure that a descriptive language of the type found in classical physics provides to the set of experimental set-ups and phenomena manifested on a macroscopic level.Footnote 12 On a stratum higher than the previous one, the region of constituted objectivity is the classical universe of material bodies inserted in space and in time (including the set-ups involved in the constitution of the lower stratum), and the background objectification condition is a human cognitive process endowed with the Kantian armature of categories.
What in summary constitutes the originality of the transcendental orientation is that the dividing lines it draws are not based on any assertion of an ontological type. The background objectification conditions are not always of a different nature from the region of constituted objectivity; it is simply that, on a given constitutive stratum, they fulfil a different function from the region of objects; and they can, on a different constitutive stratum, completely change function, to the extent of finding themselves on the other side of the dividing line. There is not then, from this point of view, any inconsistency or incompleteness in principle in the 'localist' and 'antirealist' position. There is no inconsistency, because the two theoretical strata as defined by the 'localist' do not have the same epistemological status. And neither is there incompleteness because, if it is true that the sector of experimental set-ups and phenomena manifested on the macroscopic level is sheltered from the critique of a language of realist form, this is only on a functional level. The macroscopic set-ups and phenomena play, in quantum physics, the role of a background condition, but this in no way implies that the mode of theorisation inaugurated by quantum physics is inapplicable to them in principle. The role of constitutive background must at all times be taken by something; but this something can vary, and can pass in part, when the need asserts itself, over to the side of the constituted foreground. One can conclude with the assertion that the 'antirealist' interpretation of the quantum formalism, invited by the non-Boolean structure of its calculation of probabilities, is rendered both consistent and complete as long as the transcendental point of view is adopted.
4. The transcendental status of decoherence
We have just seen in what sense the quantum calculation of probabilities may be said to carry the trace of the contextuality of the phenomena on which it bears: their lack of detachment with respect to the instrumental conditions of their manifestation. This trace is the violation, in the general case, of the '(probabilistic) conditions of a possible experience' as formulated by George Boole. The problem is that, being a theory of physics, quantum mechanics has a vocation of universality. It is true that it presupposes for its formulation, and no doubt for its interpretation, the setting apart of a meta-theoretical background made up of experimental set-ups and phenomena described according to the norms of classical physics. But we have seen that, in a transcendentalist context, this setting apart can have no other significance than a purely functional one. Nothing should henceforth be able to escape by right from the jurisdiction of the contextual and predictive quantum mode of theorisation. The question which then arises is how to ensure compatibility between two apparently contradictory demands: the one being that any area at all of potential physical investigation, including that of the set-ups and phenomena manifested macroscopically, should be considered as coming in principle under quantum jurisdiction; and the other being that the macroscopic set-ups and manifestations exercise the role of background objectification conditions in the classical mode. This compatibility demands nothing less than an at least approximate recovery of the validity of the '(probabilistic) conditions of a possible experience' (that is, of the Boolean inequalities) on the macroscopic scale, including when the quantum mode of theorisation is extended to this level.
It is to the theories of decoherence that the goal has been assigned of proving the approximate validity of the Boolean inequalities in a macroscopic domain covered by quantum theory. They in effect allow the demonstration that, applied to complex interactive processes involving an object, a measuring apparatus and a vast environment, the quantum calculation of probabilities aligns to a very close degree of approximation with the classical calculation of probabilities, in which, barring any analysable anomaly of sampling or 'disturbance', the Boolean inequalities are automatically satisfied. Such a convergence of the quantum calculation towards the classical calculation of probabilities is made apparent through the near-disappearance of the interference terms typical of the quantum calculation of probabilities, isomorphic to those of a wave process, to the advantage of a quasi-validation of the classical rule of additivity of the probabilities of a disjunction. It is true that few physicists have completely accepted this purely probabilistic formulation of the theories of decoherence. Some have even nurtured the hope of using decoherence as a way of explaining the emergence of a classical world out of a quantum world supposedly 'described' by a universal state vector (Gell-Mann 1997). The obstacle they have struck here is that, to reach a derivation, from a purely quantum calculation, of the classical laws and behaviours that prevail on the human scale, they have not been able to avoid introducing hypotheses containing anthropomorphic elements (Bitbol 1997: 410–418).Footnote 13 The tri-part division of the chain of measurement into an object, an apparatus and an environment is, as is recognised by W. H. Zurek himself (1982), one of these hypotheses. For this division supposes that one has tacitly admitted the universality of the classical norm of the analytical separability of objects and properties, which the quantum domain precisely calls into question. Murray Gell-Mann's (1995) recourse to a coarse graining of consistent histories is another hypothesis of this type, for this procedure is imposed only by the necessity of linking the descriptive content of consistent histories to the limited cognitive capacities of a set of anthropomorphic 'Information Gathering and Utilizing Systems'.
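The purely probabilistic content of this convergence can be illustrated by a toy model in Python. It assumes nothing more than an exponential damping of the off-diagonal element of a two-path density matrix (the damping rate and elapsed time are arbitrary illustrative values), and shows how the interference term fades so that the classical additivity of probabilities is recovered to a very close approximation:

```python
import math

def detection_probability(rho01, phi):
    """Probability of detection after recombining two equally weighted paths
    with relative phase phi, for a state whose coherence (off-diagonal
    density-matrix element) is rho01.
    P = 1/2 + Re(rho01 * e^{i phi}); the second term is the interference."""
    return 0.5 + (rho01 * complex(math.cos(phi), math.sin(phi))).real

rho01 = 0.5           # pure superposition: full interference
gamma, t = 1.0, 10.0  # toy decoherence rate and elapsed time
rho01_decohered = rho01 * math.exp(-gamma * t)  # environment damps the coherence

for phi in (0.0, math.pi):
    print(detection_probability(rho01, phi),
          detection_probability(rho01_decohered, phi))
# After decoherence both phases give ~0.5: the interference term has
# (almost) vanished and the classical additivity of the probabilities
# of the two paths is recovered.
```

In such a schema, the quasi-vanishing of the interference term is precisely what restores the Boolean behaviour of the macroscopic frequencies.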
This form of begging the question, which mars the theories of decoherence, is in reality only a fault in relation to the 'realist' reading of quantum mechanics: if one believes that a universal state vector has the capacity to describe the world, and if one aspires to see a classical sector of the world emerge via the theories of decoherence, the necessity of introducing an element of classicality into the theories of decoherence from the outset is effectively the sign of a failure. But in the context of an 'antirealist' reading of quantum mechanics in the transcendentalist spirit, the only thing that needs to be shown, as I have already emphasised, is the compatibility between the two epistemological functions assignable to the region of macroscopic experimental set-ups and manifestations: the function of a field of investigation able to be included within quantum jurisdiction, and the function of a background capable of being described in the classical mode. And since there exist certain hypotheses (those of Zurek or Gell-Mann) under which the quantum calculation of probabilities, which systematically violates the Boolean inequalities, tends towards a classical calculation of probabilities which respects them in the general case, this compatibility is assured.
In sum, if the theories of decoherence have failed to demonstrate the necessary emergence of a classical world out of the quantum world (which is what a 'realist' interpreter of quantum mechanics demands), they have managed to prove that there exist conditions which render possible the linkage between the quantum and classical modes of theorisation (which is enough to satisfy a transcendentalist interpreter of quantum mechanics, who demands simply that the function of background presupposition he assigns to macroscopic experimental set-ups and phenomena should not stand obligatorily at odds with the universality ideal of quantum theory). The reasons for which it seems to me legitimate to assign a transcendental status to decoherence become clear from this. The first reason is that decoherence (re)introduces into the calculation of probabilities the mark of the prior accomplishment of a procedure of detachment of properties of spatio-temporal objects, that is, the satisfaction of Boole's inequalities. It is (re)constitutive of objectivity on the primary level of everyday experience in space and time, emerging from the secondary level of objectivity of the elements of the Hilbert-Fock spaces. The second reason is that decoherence guarantees the compatibility between (a) the in-principle unlimited character of the quantum region of objectivity (on the secondary level of the Hilbert-Fock spaces), and (b) the prior condition for the constitution of that region of objectivity, namely the description of the macroscopic set-ups and phenomena in a classical mode. The third reason is that this compatibility has emerged as a simple possibility of co-adequation between the presuppositions and the theorems of quantum theory, rather than as a necessary formal consequence of the latter.
But could not someone, in the face of this assertion of the 'transcendental status of decoherence', question the circumstances of its experimental corroboration (Haroche, Brune and Raimond 1997; Haroche, Raimond and Brune 1997)? Is not decoherence henceforth to be considered as a tangible empirical fact rather than a transcendental precondition? Does it not arise from the a posteriori rather than from the a priori, however functional the latter might be? This objection in truth does no more than revive a common criticism addressed to transcendentalist initiatives; it raises the delicate question of the distinction between the constitutive and the empirical. The easiest way to reply to it is to draw a parallel with the case of causality, which we discussed earlier. After all, a nineteenth-century physicist could very well have raised, in opposition to the Kantian thesis of the transcendental status of causality, the experimental observation of the regular succession of the phenomena studied by classical physics; he could thus have confidently asserted an argument, decisive in his eyes, in favour of an empirical and a posteriori status for causality, rather than a transcendental and a priori one. To this argument, a philosopher of neo-Kantian persuasion would no doubt have replied (a) that an empirical observation is not sufficient in itself to establish the authenticity of a succession, and (b) that at all events the observed regularity does not bear upon things as they are in themselves, but on the phenomenal result of an investigation guided from the outset by the regulatory principle of succession in accordance with a rule. Thus, the sole lesson carried by experiment relating to the principle of causality is that it is not impossible to find conditions in which there is a satisfactory degree of mutual consistency between the transcendental presuppositions which subtend the investigation (of which one is the principle of causality) and the empirical product of this same investigation. It is not that the presuppositions in question do in fact have an empirical foundation. The case of decoherence may be treated in exactly the same fashion. Here, all that is proven by the corroboration experiments of decoherence is that there exist experimental conditions in which a satisfactory degree of mutual consistency is shown between the principal presupposition of the investigation (namely the availability of a classical environment of objects and properties within the laboratory space) and the empirical results obtained under the regime of that presupposition. It is not that the emergence, on the macroscopic scale, of the presupposed classical structure out of a quantum structure thereby possesses an authentic empirical foundation.
In this way a harmony is assured between, on the one hand, the absence in quantum mechanics of the probabilistic mark of an objectivity constituted in space-time, and, on the other, the presupposition of this level of objectivity by the very experimentation which aims to test the theory. A harmony which is internal and constitutive rather than external and ontological.
Translated from the French by Colin Anderson