1. Introduction
Recent literature discusses topological or “network” explanations in the life sciences (Bechtel 2020; Craver 2016; Darrason 2018; Green et al. 2018; Huneman 2010; Jones 2014; Kostić 2018, 2019, 2020; Kostić and Khalifa 2021; Levy and Bechtel 2013; Matthiessen 2017; Rathkopf 2018; Ross 2020). Some self-described mechanists treat topological explanations simply as a species of mechanistic explanation (Bechtel 2020; Craver 2016; DiFrisco and Jaeger 2019; Levy and Bechtel 2013). For them, topological explanations appear to rely on the time-honored scientific strategy of understanding a system by grasping how its components interact with each other. However, others—whom we shall dub “(topological) autonomists”—claim that topological explanations are markedly different from their mechanistic counterparts (Darrason 2018; Huneman 2010, 2018a; Kostić 2018, 2019, 2020; Rathkopf 2018).
These debates have far-ranging implications. Ideally, they would provide scientists with guidelines for when mechanistic information is a prerequisite for a topological model’s being explanatory. Furthermore, if autonomists are correct, the debates should provide additional guidelines for when mechanistic information is simply an added bonus to a free-standing topological explanation. In addition to their scientific upshots, these debates between mechanists and autonomists contribute to wider philosophical discussions concerning explanatory pluralism, noncausal explanation, modeling, and the applicability of mathematics. Finally, engagement with topological explanations enriches both the mechanist and autonomist programs by highlighting when and where topological explanations are mechanistic.
To make good on these promises, both mechanists and autonomists would benefit substantially from a more precise and systematic account of when topological explanations count as mechanistic. What kinds of considerations would either differentiate or unify topological and mechanistic explanations? This paper aims to fill this gap. The result is a novel argument for autonomism. To that end, Section 2 describes topological explanations in relatively neutral terms. Section 3 then presents and motivates what is, in our estimation, the most plausible and principled way of interpreting the claim that topological models are explanatory only insofar as they are mechanistic. Section 4 then provides a neuroscientific example that does not fit this mechanistic framework. Section 5 provides an autonomist alternative to the account of topological explanation offered in Section 3, which unifies topological explanations of both the mechanistic and non-mechanistic varieties. Section 6 then shows how this account of topological explanation rebuts some powerful objections to autonomism.
2. Background
Before embarking on these philosophical tasks, we review topological explanations’ basic concepts. Topological explanations describe how their respective explananda depend upon topological properties. For the purposes of this essay, we will focus on those topological properties that can be represented using the resources of graph theory.Footnote 1 A graph is an ordered pair (V, E), where V is a set of vertices (or nodes) and E is a set of edges (links, or connections) that connect those vertices. For ease of locution, we will use the term “graph” or “topological model” to denote the mathematical representation of a topological structure, and “network” to denote a real-world structure (van den Heuvel and Sporns 2013, 683).
Vertices and edges represent different things in different scientific fields. For example, in neuroscience, vertices frequently represent neurons or brain regions, while edges represent synapses or functional connections. In computer science, graphs frequently represent networks of cables between computers and routers or networks of hyperlinked web pages. In ecological food webs, vertices might be species; edges, predation relations.
Scientists infer a network’s structure from data, and then apply various graph-theoretic algorithms to measure its topological properties. For instance, clustering coefficients measure degrees of interconnectedness among nodes in the same neighborhood. Here, a node’s neighborhood is defined as the set of nodes to which it is directly connected. An individual node’s local clustering coefficient is the number of edges that actually exist within its neighborhood divided by the number of edges that could possibly exist between the members of its neighborhood. By contrast, a network’s global clustering coefficient is the ratio of closed triplets to the total number of triplets in a graph. A triplet of nodes is any three nodes that are connected by at least two edges. An open triplet is connected by exactly two edges; a closed triplet, by three. Another topological property, average (or “characteristic”) path length, measures the mean number of edges on the shortest path between any two nodes in the network.
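To fix ideas, here is a minimal sketch of these measures, assuming Python with the networkx library; the tooling and the toy graph are our choices for illustration, not part of the studies discussed below.

```python
# A minimal sketch of the measures just described, assuming Python/networkx.
import networkx as nx

# A toy graph: an ordered pair (V, E) of vertices and edges.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 1)])

# Local clustering coefficient of node 1: actual edges among node 1's
# neighbors divided by the number of edges that could exist among them.
local_cc = nx.clustering(G, 1)

# Global clustering coefficient (transitivity): closed triplets over all triplets.
global_cc = nx.transitivity(G)

# Average ("characteristic") path length: mean shortest-path length over all node pairs.
avg_path_length = nx.average_shortest_path_length(G)

print(local_cc, global_cc, avg_path_length)
```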
In their seminal paper, Watts and Strogatz (1998) applied these concepts to show how a network’s topological structure determines its dynamics. First, regular graphs have both high global clustering coefficients and high average path lengths. By contrast, random graphs have low global clustering coefficients and low average path lengths. Finally, they introduced a third type of graph, the small-world graph, with high clustering coefficients but low average path lengths (Figure 1).
Highlighting differences between these three types of graphs yields a powerful explanatory strategy. For example, because regular networks have larger average path lengths than small-world networks, things will “spread” throughout the former more slowly than throughout the latter, largely due to the greater number of edges to be traversed. Similarly, because random networks have smaller clustering coefficients than small-world networks, things will also spread throughout the former more slowly than throughout the latter, largely due to sparse interconnections within neighborhoods of nodes. Hence, ceteris paribus, propagation is faster in small-world networks. This is because the few long-range connections between highly interconnected neighborhoods of nodes shorten the distance between neighborhoods that are otherwise very distant, enabling them to behave as if they were first neighbors. For example, Watts and Strogatz showed that the nervous system of C. elegans is a small-world network, and subsequent researchers argued that this system’s small-world topology explains its relatively efficient information propagation (Bullmore and Sporns 2012; Latora and Marchiori 2001).
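The contrast Watts and Strogatz describe can be reproduced with a short script. The following sketch again assumes Python/networkx; the parameter values are ours and chosen only for illustration.

```python
# A brief illustration of the Watts-Strogatz contrast, assuming Python/networkx.
import networkx as nx

n, k = 1000, 10  # 1,000 nodes, each initially linked to its 10 nearest neighbors
for label, p in [("regular", 0.0), ("small-world", 0.1), ("random", 1.0)]:
    # p is the probability of rewiring each edge to a randomly chosen node.
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    print(label,
          round(nx.transitivity(G), 3),                  # global clustering coefficient
          round(nx.average_shortest_path_length(G), 2))  # average path length
```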
3. Mechanism and topology
Are topological explanations such as the one involving C. elegans just mechanistic explanations in fancy mathematical clothing? To answer this question, we first clarify what we mean by “mechanistic explanation” (Section 3.1), and then present a framework for interpreting topological explanations mechanistically (Section 3.2). This provides the stiffest challenge to autonomism, and thereby sets the stage for Section 4, where we show that autonomism is nevertheless unfazed by this mechanistic contender.Footnote 2
3.1. Mechanistic explanation
Before evaluating whether topological explanations are mechanistic explanations, we elucidate the latter. For our purposes, we focus on conceptions of mechanistic explanation that are minimal and nontrivial. Such conceptions provide the most formidable challenges to autonomism. Hence, when we show that some topological explanations are not mechanistic, we cannot be charged with chasing ghosts. We discuss these two facets of mechanistic explanation in turn.
To begin, Glennan (2017, 17) provides a “minimal” characterization of mechanisms that captures a widely held consensus among mechanists about conditions that are necessary for something to be a mechanism, even if they differ about, for example, the role of regularities, counterfactuals, and functions in mechanistic explanation:
A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon.Footnote 3
Because this conception is minimal, the counterexamples we raise to it below apply a fortiori to more demanding conceptions of mechanisms.
Importantly, mechanists are sometimes criticized for being so vague as to trivialize the concept of mechanism (Dupré 2015). Since minimal conceptions are less committal than other conceptions of mechanism, they are especially susceptible to this criticism. To avoid trivializing mechanistic explanation, we appeal to claims about “entities,” “activities,” “interactions,” “organization,” and “responsibility” widely endorsed by mechanists. For instance, our arguments hinge on claims that entities and activities cannot be spatiotemporal regions determined merely by convention; that interactions cannot be mere correlations, etc. We further develop these claims as they arise in the discussion of our examples of non-mechanistic topological explanation. Should mechanists deny these requirements on entities, activities, and the like, then the burden of proof falls upon them to show that their alternative conception of mechanism is nontrivial.
Finally, mechanists typically distinguish between etiological, constitutive, and contextual mechanistic explanations (e.g., Craver 2001). Etiological explanations cite the causal history of the explanandum; constitutive explanations cite the underlying mechanism of that explanandum; and contextual explanations cite an explanandum’s contribution to the mechanism of which it is a part. Since underlying and overlying mechanisms can be synchronic with the phenomena that they explain, neither constitutive nor contextual explanations must cite their respective explananda’s causal histories, i.e., the prior events that produced the phenomenon. Therefore, constitutive and contextual explanations are distinct from etiological explanations. Furthermore, because constitutive explanations explain a larger system in terms of its parts, while contextual explanations do the exact opposite, constitutive and contextual explanations are also distinct. For ease of exposition, we will first discuss constitutive mechanistic explanations. Section 4.4 discusses their etiological and contextual counterparts.
To summarize, we are trying to precisely characterize mechanistic explanations in a way that poses the stiffest challenge to our claim that some topological explanations are non-mechanistic. To that end, we think that the most defensible conception of mechanistic explanation has two features: minimality and nontriviality. Furthermore, we will first compare topological explanations to constitutive mechanistic explanations, and then turn to etiological and contextual ones.
3.2. Mechanistic interpretations of topological explanations
With a clearer conception of mechanistic explanation in hand, mechanists’ next task is to translate topological explanations’ characteristic graph-theoretic vocabulary into the language of mechanistic explanation—to provide a mechanistic interpretation of topological explanations (MITE). To that end, the preceding suggests that mechanists ought to hold that a topological model is explanatory only insofar as there exists a mechanism for which all the following conditions hold:
(1) Node Requirement: the topological model’s nodes denote the mechanism’s entities or activities.
(2) Edge Requirement: the topological model’s edges denote the interactions between the mechanism’s entities or activities.
(3) Responsibility Requirement: the topological model specifies how the mechanism’s entities, activities, and interactions are organizedFootnote 4 so as to be responsible for the phenomenon.
(4) Interlevel Requirement: the explanandum is at a higher level than the mechanism’s entities, activities, and interactions as described by the Node and Edge Requirements.
The first three requirements fall out of Glennan’s minimal conception of mechanism; the last, from our initial focus on constitutive explanations.
As an illustration of the MITE’s plausibility, consider the earlier example in which the small-world topology of the C. elegans nervous system explains its capacity to process information more efficiently than would be expected if its topology were either regular or random. Here, the neural system’s components are individual neurons, which are represented as nodes in the topological model. Hence, the Node Requirement is satisfied. Furthermore, the edges of the graph denote synapses or gap junctions between different neurons, so the Edge Requirement is satisfied. Third, the Responsibility Requirement is satisfied, though this requires more detailed discussion. Suppose (as we shall throughout) that “responsibility” is characterized in terms of counterfactual dependence:
Had the C. elegans neural network been regular or random (rather than small-world), then information transfer would have been less efficient (rather than its actual level of efficiency).
In regular or random topologies, the synaptic connections between neurons will be different than they are in the actual small-world topology exhibited in C. elegans. Consequently, each of these networks describes a different potential mechanistic structure. Thus, this counterfactual shows how mechanistic differences are responsible for differences in information transfer. Finally, note that the efficiency of information transfer is a global property of the C. elegans nervous system. That system is composed of neurons connected via synapses, so mechanists appear justified in claiming that the nodes and edges of this network are at a lower level than its explanandum property. Hence, the Interlevel Requirement is satisfied. Putting this all together, this means that the example fits the MITE.Footnote 5
Before proceeding, we note three things. First, while other MITEs are certainly possible, this one strikes us as the most plausible. We have culled it from some of the foremost mechanists’ discussions of topological explanations (Craver 2016; Levy and Bechtel 2013; Glennan 2017). Indeed, our MITE also accords nicely with the widely used “Craver diagrams” in the mechanisms literature. As Figure 2 illustrates, such diagrams entail that there is a phenomenon (“S’s Ψing”) at a higher level, and a mechanism exhibiting a graph-theoretic structure at the lower level, with nodes corresponding to entities (denoted by “X_i”) performing activities (denoted by “φ_i”), and edges corresponding to interactions (denoted by arrows). Since no other MITE has been offered, mechanists ought to propose an alternative MITE should they chafe at the one we propose here. Second, our working definition of autonomism throughout this paper is only that some topological explanations do not fit this MITE. This is consistent with some other topological explanations fitting this MITE. Third, it suffices for our purposes if only one of the MITE’s four requirements is violated. In other words, so long as our counterexample is even “partly non-mechanistic”, autonomism (sub specie this MITE) is vindicated.
4. Non-mechanistic topological explanation
Autonomists have distanced themselves from mechanistic explanations in myriad ways. For instance, Rathkopf (2018) argues that topological explanations are normally used in nearly decomposable and nondecomposable systems, whereas mechanistic explanations are typically used in decomposable systems. Another autonomist approach treats topological explanations as conferring mathematical necessity upon their explananda (Huneman 2018a; Lange 2017). Still others claim that topological explanations are frequently more abstract than mechanistic explanations (Darrason 2018; Huneman 2018a; Kostić 2019). Ross (2020) and Woodward (2013) suggest that some topological explanations are resistant to the kinds of interventions that are characteristic of causal-mechanical explanations. Finally, Huneman (2018b) and Kostić (2018) argue that topological and mechanistic explanations involve different kinds of realization relations.
We provide a new argument for autonomism. So far as we can tell, it complements rather than competes with these other autonomist arguments, and has the added virtue of engaging a more precise mechanistic foil—the MITE developed above. Specifically, we use this foil to provide an example of a non-mechanistic topological explanation: Adachi et al.’s (2011) explanation of the contribution of anatomically unconnected areas to functional connectivity in macaque neocortices. This explanation rests on the neuroscientific distinction between anatomical connectivity (AC) and functional connectivity (FC). While both kinds of connectivity are modeled using graph theory, only AC networks are naturally glossed as mechanisms. In AC networks, nodes are segregated anatomical regions of the brain (e.g., different Brodmann areas, gyri, and cortical lobes) and edges are causal relations (what is sometimes called “effective connectivity”) that are frequently identified with axonal signal flows. Since this is the topological model that figures in Adachi et al.’s explanans, we will grant that it satisfies the MITE’s Node and Edge Requirements. However, we deny that this is a mechanistic explanation, for it violates the MITE’s Responsibility and Interlevel Requirements. After describing the explanation in some detail (Section 4.1), we examine these violations in turn (Sections 4.2 and 4.3). This shows that this explanation is not a constitutive mechanistic explanation. We round out our defense of autonomism by anticipating and rebutting two possible mechanist responses to our argument (Sections 4.4 and 4.5).
4.1. Adachi et al.’s explanation
We first discuss Adachi et al.’s explanandum and then their explanans. Adachi et al.’s dependent variable is (combined) ΔFC, “the regression slope of FC on the total number of length2-AC” patterns. Footnote 6 Obviously, a better understanding of our explanandum, ΔFC, requires an understanding of both FC and length2-AC.
Begin with FC. FC networks’ edges are synchronization likelihoods (SL). Stam et al. (2006, 93) provide a useful definition:
The SL is a general measure of the correlation or synchronization between 2 time series…. The SL is then the chance that pattern recurrence in time series X coincides with pattern recurrence in time series Y.
As this definition suggests, FC networks’ nodes are time series. Depending on the study, these nodes can be interpreted in multiple ways. Many FC models are interpreted so that nodes correspond to the time series of blood oxygen level-dependent (BOLD) readings for an individual voxel in functional magnetic resonance imaging (fMRI) data. A voxel (a volumetric pixel) is a unit of imaging data that defines a three-dimensional region in space; voxels are essentially the result of dividing the brain region of interest into a three-dimensional grid. By contrast, Adachi et al. (2011, 1587) opt for a less “operational” interpretation of their FC model. They do this by mapping different clusters of voxels onto 39 different anatomical regions or “areas” in the macaque cortex. Examples include visual area 4 and the mediodorsal parietal area. Consequently, their FC and AC models have the same nodes.
Indeed, without this mapping of FC nodes onto anatomical regions, the neuroscientists could not characterize their explanandum with any precision. Specifically, they focus on “length2-AC” patterns—i.e., functionally connected pairs of regions that are only anatomically connected through some third region (see Figure 3). In other words, length2-AC patterns involve two brain areas that are functionally connected (and hence are correlated) but are also known to lack any direct causal link. Hence, the explanandum, ΔFC, describes the extent to which the totality of Patterns a, b, and c contributes to the overall FC in the macaque neocortex.
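For concreteness, here is a schematic reconstruction of this dependent variable. It assumes Python with numpy and networkx, treats the AC network as undirected, and uses made-up FC values; the function name delta_fc and the toy data are ours, and the sketch is not Adachi et al.’s actual estimation pipeline.

```python
# A schematic reconstruction of Delta-FC: the regression slope of FC on the number
# of length-2 anatomical paths between anatomically unconnected pairs of areas.
# Assumes Python/numpy/networkx; AC is treated as undirected and FC is made up.
import numpy as np
import networkx as nx

def delta_fc(AC: nx.Graph, FC: np.ndarray) -> float:
    """Slope of pairwise FC regressed on length-2 AC path counts,
    over pairs of areas that lack a direct anatomical connection."""
    A = nx.to_numpy_array(AC)
    length2 = A @ A  # entry (i, j): number of two-step anatomical paths from i to j
    xs, ys = [], []
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0:  # anatomically unconnected pair
                xs.append(length2[i, j])
                ys.append(FC[i, j])
    slope, _intercept = np.polyfit(xs, ys, 1)
    return slope

# Toy example: five "areas" standing in for the 39 macaque areas,
# with a hypothetical symmetric FC matrix.
AC = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4)])
rng = np.random.default_rng(0)
FC = rng.random((5, 5))
FC = (FC + FC.T) / 2
print(delta_fc(AC, FC))
```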
With the explanandum clarified, we turn to Adachi et al.’s explanans, which appeals to the global topological properties of the AC network. Specifically, Adachi et al. argue that the AC network’s frequency of three-node motifs (which they abbreviate as “MF3”) explains why the macaque’s ΔFC is as high as it is. In this context, a three-node motif is a triplet whose nodes denote anatomical regions and whose edges denote anatomical connections. Thus, MF3 is a measure of how many of these triplets can be found in the macaque neocortex.
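The motif count itself can be illustrated with a deliberately simplified sketch. The following assumes Python/networkx and treats the anatomical network as undirected, whereas Adachi et al.’s own analysis distinguishes directed motif classes; the function and toy graph are ours.

```python
# A simplified sketch of a three-node motif count (MF3), assuming Python/networkx
# and an undirected anatomical network; directed motif classes are ignored here.
import networkx as nx
from math import comb

def three_node_motif_count(G: nx.Graph) -> int:
    """Count triplets of nodes joined by at least two edges (open or closed)."""
    # Each open triplet has exactly one "center" node, while each triangle is
    # counted three times by the center-based sum, so subtract the excess.
    center_sum = sum(comb(d, 2) for _, d in G.degree())
    triangles = sum(nx.triangles(G).values()) // 3
    return center_sum - 2 * triangles

# Toy anatomical network standing in for the macaque AC graph.
AC = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)])
print(three_node_motif_count(AC))  # 4 triplets in this toy graph
```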
Adachi et al. establish this explanation by running simulations involving thousands of randomly generated networks. Some of these networks have the same frequency of three-node motifs as the macaque brain but differ with respect to other topological properties. These other properties include global clustering coefficient, modularity, and the frequency of two-node motifs. Each network’s ΔFC is then measured. In another run of simulations, Adachi et al. control for these other topological properties while varying the frequency of three-node motifs. Models in which the frequency of three-node motifs matches that of the macaque neocortex vastly outperform models matching its other topological properties in accounting for ΔFC. Once again using counterfactual dependence as a working definition of “responsibility,” this suggests the following:
Had macaque neocortices had a different frequency of three-node network motifs (rather than a different clustering coefficient, modularity, or frequency of two-node network motifs), then these neocortices’ ΔFC would have been different (rather than its actual amount).
In other words, MF3 is “responsible” for the length2-AC patterns’ contributions to FC.
4.2. Responsibility requirementFootnote 7
We now turn to our broader aim of arguing for autonomism, starting with the Responsibility Requirement, which states that topological models are only explanatory if they specify how the phenomenon counterfactually depends on how a mechanism’s entities, activities, and interactions are organized. We shall consider different mechanist proposals for how Adachi et al.’s explanation satisfies the Responsibility Requirement and show that each faces serious challenges.
The most prominent mechanist strategy holds that many topological explanations describe an abstract kind of mechanistic organization (e.g., Bechtel 2009; Glennan 2017; Kuorikoski and Ylikoski 2013; Levy and Bechtel 2013; Matthiessen 2017). This “organization strategy” prompts three replies. First, the most precise versions of this strategy simply assume that mechanistic organization can be spelled out entirely in terms of graph theory. This effectively concedes autonomism’s main tenets. For instance, Kuorikoski and Ylikoski (2013) define “organization” and “network structure” in terms of topological structure, as evidenced by both their examples and their acknowledgement of Watts as pioneering the “new science of networks.” However, this means that mechanistic organization just is topological structure. Since they also hold that some organization is explanatory unto itself, their position entails that some topological properties are explanatory unto themselves. This is tantamount to autonomism. Second, organization is supposed to refer to the structure of interactions between entities and activities that are constitutive of the phenomenon to be explained. Yet, in this example, the organization in question is of a system that is (partly) constituted by the phenomenon to be explained. After all, the anatomical regions that figure in each of these length2-AC patterns are parts of the anatomical network, and the latter’s frequency of three-node motifs drives the explanation. Third, different models with the same frequency of three-node motifs can posit radically different interactions between any three brain regions—radically different forms of organization—yet nevertheless predict the same FC structure.Footnote 8 This would mean that the specific interactions between the mechanistic components are explanatorily idle, which also violates the Responsibility Requirement.
Chastened by these problems, a mechanist might remain silent about organization, and instead insist that Adachi et al.’s explanation satisfies the Responsibility Requirement by specifying how ΔFC counterfactually depends on the macaque neocortex’s entities, activities, and interactions. However, this faces something we call the causal disconnection problem. In a nutshell, the problem is this: (a) mechanistic explanations require entities, activities, and interactions that are responsible for a phenomenon to be causally connected to that phenomenon,Footnote 9 yet (b) Adachi et al.’s explanation does not require these causal connections. To see why, let us distinguish different kinds of three-node anatomical motifs included in their calculation of MF3 (see Figure 4). If Adachi et al.’s explanation is mechanistic, then the only three-node anatomical motifs that can figure in it are length2-AC patternsFootnote 10 and anatomical motifs that are causally connected to a length2-AC pattern. However, Adachi et al.’s model implies that even if the only change in MF3 were to the number of three-node motifs that are causally disconnected from length2-AC patterns, ΔFC would still change. Consequently, this explanation does not satisfy the MITE’s Responsibility Requirement.
Finally, mechanists might try to avoid these problems by insisting that causally disconnected three-node AC motifs are explanatorily irrelevant and that only the length2-AC patterns and the AC motifs that are causally connected to them are responsible for ΔFC. However, this proposal faces two challenges. First, mechanists would need a non-question-begging argument as to why models such as Adachi et al.’s are incorrect to treat causally disconnected motifs as explanatorily relevant. To our knowledge, no such arguments have been offered. Second, the Responsibility Requirement is not easily satisfied even if causally disconnected three-node AC motifs are excluded from the explanation. Indeed, Adachi et al. (2011, 1589) turn to a non-mechanistic topological explanation precisely because of limitations in causal explanations of a phenomenon closely related to ΔFC: why there is any functional connectivity in a length2-AC pattern. On the current mechanistic construal, length2-AC patterns’ functional connections (i.e., the edge between nodes 1 and 2 in each of the patterns in Figure 3) should be explained by some indirect causal link; paradigmatically, via a third region serving as either an intermediate cause or a common cause of the functional connection between the two regions (Patterns b and c in Figure 3). However, the anatomical structure in Pattern b—the “two-step serial relay”—decreases FC. More importantly, Adachi et al. observe that a significant number of functional connections occur even when two functionally connected areas only share a common effect (specifically, a common efferent, as represented by Pattern a). To our knowledge, no mechanists claim that an effect can be responsible for its cause. Hence, functionally connected areas that are only anatomically connected via a common efferent (as in Pattern a) cannot satisfy the Responsibility Requirement. Consequently, even if the causal disconnection problem is bracketed, mechanisms alone cannot be responsible for ΔFC, which (to repeat) is the totality of functional connectivity for which length2-AC patterns are responsible.
In summary, we have considered three ways of trying to mechanistically interpret MF3: as describing a mechanism’s organization; as describing a mechanism’s entities, activities, and interactions while allowing for causal disconnections; and as describing a mechanism’s entities, activities, and interactions while prohibiting causal disconnections. In each case, the claim that a mechanism is responsible for ΔFC faces formidable challenges.
4.3. Interlevel requirement
Adachi et al.’s explanation also violates the Interlevel Requirement. We provide three arguments. First, constitutive mechanistic explanations require explananda to be at higher levels than the parts and activities that explain them. However, given the way that Adachi et al. interpret their FC model, both the FC and AC networks have the same brain regions as their nodes. Because the explanans and explanandum appeal to the same entities, the FC network is not at a higher level than the AC network in this explanation.Footnote 11 Moreover, the anatomical regions that figure in MF3 are at least at the same level as the anatomical regions in the length2-AC patterns that figure in ΔFC. Indeed, the explanation allows for some anatomical regions that figure in MF3 to be identical to those that figure in ΔFC.
In our estimation, this first argument understates the degree to which this explanation violates the Interlevel Requirement. This leads to our second argument. As already noted, a constitutive mechanistic explanation of ΔFC would need to appeal to the entities, activities, and interactions constitutive of (and thus at a lower level than) the anatomical regions and their axonal connections—and this explanation works in the exact opposite direction. Adachi et al. appeal to the frequency of three-node network motifs, a global property of the AC network. This network is constituted by these anatomical regions. In other words, a higher-level property is explaining the behavior of lower-level constituents. Hence, the MITE’s Interlevel Requirement has been violated.Footnote 12
Finally, one may argue that both ΔFC and MF3 are global properties of the macaque neocortex. If this is correct, then it is not even clear that Adachi et al.’s explanation appeals to levels at all.
4.4. Contextual and etiological explanation
This explanation’s violation of the Interlevel Requirement may prompt mechanists to reply that this merely shows it not to be a constitutive mechanistic explanation. However, it may still be either a contextual or etiological mechanistic explanation. We briefly show that such replies face significant challenges.
Craver (2001, 63) provides the most precise definition of contextual mechanistic explanation:
A contextual description of some X’s φ-ing characterizes its mechanistic role; it describes X (and its φ-ing) in terms of its contribution to a higher (+ 1) level mechanism. The description includes reference not just to X (and its φ-ing) but also to X’s place in the organization of S’s ψ-ing.
If Adachi et al.’s explanation is a contextual mechanistic one, then the system (S) is the macaque’s neocortex, the relevant system capacity (ψ) is its frequency of three-node motifs, the component X’s are the length2-AC patterns in the neocortex, and the activity/property (φ) of these patterns is their functional connectivity.
In Craver’s definition, a part’s activities are supposed to “contribute” to the system’s capacities. We take this to require, at a minimum, that S’s ψ-ing counterfactually depends on X’s φ-ing, i.e.,
Had it not been the case that X φ’s, then it would not have been the case that S ψ’s.
This accords well with Craver’s paradigmatic examples of contextual mechanistic explanations, e.g., the heart pumping blood.Footnote 13 This contributes to circulation and, consonant with our suggestion, the following is true:
Had the heart not pumped blood, then it would not have distributed oxygen and calories throughout the body.
By analogy, if Adachi et al.’s explanation is contextually mechanistic, then the following should figure prominently:
Had macaque neocortices’ ΔFC been different (rather than its actual amount), then these neocortices would have had a different frequency of three-node network motifs (rather than their actual frequency).
However, a quick review of Section 4.1 shows this to be precisely the converse of the counterfactual that figures in Adachi et al.’s explanation. Nor does this counterfactual accord with their methodology: They run simulations in which they vary MF3 to see how ΔFC changes—not the other way around. Consequently, ΔFC does not contribute to MF3; this is not a contextual mechanistic explanation.
As an alternative, mechanists might claim that Adachi et al.’s explanation is an etiological mechanistic explanation. Here, Bechtel’s (2009, 557–59) account of “situated mechanisms” (or mechanistic explanations that “look up”) seems especially instructive. Bechtel’s chief example is how stimuli in the environment contribute to the mechanistic explanation of visual processing. Quite plausibly, environmental stimuli provide etiological explanations of why the mechanism for vision behaves as it does. For example, an apple and various environmental conditions (lighting, the absence of smoke and mirrors, etc.) are part of the causal history of why a person comes to see an apple. So, by analogy, if Adachi et al.’s explanation involves a situated mechanism, then MF3 is a “network environment” that is part of ΔFC’s causal history.
Like Craver’s account of contextual mechanistic explanation, situated mechanisms appeal to a higher-level mechanism or environment to explain a lower-level entity. However, unlike contextual mechanisms, situated mechanistic explanations do not require the lower-level entity to “contribute” to a higher-level mechanism in the ways we have discussed. For instance, a visual system does not need to contribute to the lighting conditions in its environment. This immediately avoids the problems raised by treating Adachi et al.’s explanation as contextually mechanistic.
Despite these initial attractions for the mechanist, other autonomists have raised challenges for identifying topological explanations with etiological mechanistic explanations, and these challenges apply here. For instance, while causes precede their effects, topological explanantia need not precede topological explananda (Huneman 2010, 218–19). Since MF3 does not temporally precede ΔFC, the explanation is not etiological.Footnote 14 Thus, MF3 is not an “environmental condition” in Bechtel’s sense.Footnote 15
4.5. Defending explanatoriness
At this point, we have at least shown that Adachi et al.’s topological model is not mechanistic. Assuming our arguments are sound, mechanists’ only recourse is to deny that this model is explanatory. So far as we can tell, the most plausible argument to this effect is best glossed as a methodological concern about the design of the study: Do the statistical and computational models provide sufficient evidence for ΔFC’s counterfactual dependence upon MF3? Since these statistical and computational considerations do not preclude symmetric correlations between variables, the objection implies that the study provides equally good evidence for the following symmetric counterfactual: Had ΔFC increased, then MF3 would have increased. Since explanation is widely thought to be asymmetric, mechanists may be tempted to deny that Adachi et al.’s model is explanatory.
We will argue that this objection is inconclusive given mechanists’ and autonomists’ common ground. To see why, we underscore the importance of background knowledge in inferring explanations (Lipton 2004; Psillos 2007). For instance, even in well-designed experiments, background information is often required to infer the most plausible causal or mechanistic explanation. Similarly, we suggest that the following background knowledge helps to vindicate Adachi et al.’s inference:
AC’s Explanatory Priority: If x is statistically relevant to y, and x only represents AC, while y represents (at least some) FC, then it is prima facie more plausible to infer that x explains y than vice versa.
According to this principle, there is prima facie reason to believe that MF3 explains ΔFC rather than vice versa, because MF3 only traffics in anatomical connectivity, while ΔFC traffics in both anatomical and functional connectivity. One motivation for this principle is that interventions on brain regions, neurons, and synapses are easier to conceive of than interventions on voxels and synchronization likelihoods. Since interventions frequently serve as a guide to explanation, this gives AC networks their presumptive explanatory priority. That said, this priority is only prima facie; more rigorous testing and further theoretical considerations can overturn it. This background principle also accords with Adachi et al.’s reasoning. Based on their simulations and statistical analyses, they claim that MF3 “shapes” (Adachi et al. 2011, 1586, 1589–91) and “influences” (1586, 1591) ΔFC. They do not claim the converse, as the symmetric counterfactual would suggest.
For mechanists who are sympathetic to AC’s explanatory priority (Craver 2016; Povich 2015),Footnote 16 the objection is thereby defused: While Adachi et al.’s simulations and statistical tests are not sufficient unto themselves to license an explanatory claim, they become so in conjunction with this principle. Presumably, these mechanists take AC’s explanatory priority to be a consequence of (a) AC networks typically containing more mechanistic information than FC networks and (b) mechanistic information providing more reliable evidence for judging the plausibility of counterfactuals than information about functional connectivity. Our argument only requires AC’s explanatory priority. So, these mechanists can only reject it on pain of also rejecting (a) or (b).Footnote 17
But suppose that other mechanists would bite this bullet and reject AC’s explanatory priority. We will argue that this entails that mechanistic and topological explanations are equally (in)vulnerable to symmetry problems. If asymmetry is a requirement on all correct explanations, then this means that both mechanists and autonomists are in trouble. On the other hand, if some correct explanations are symmetric, then autonomists can learn some valuable lessons from mechanists. For instance, Craver and Bechtel (2007, 553) claim that “all of the interesting cases of interlevel causation are symmetrical: components act as they do because of factors acting on mechanisms, and mechanisms act as they do because of the activities of their lower-level components.” Indeed, Craver (2013) seems to leverage this point into taking constitutive and contextual mechanistic explanations to be simply two “perspectives” on the same system. So, seemingly harmless symmetries will exist when X constitutively explains Y and Y contextually explains X. Autonomists could devise analogues to constitutive and contextual mechanistic explanations to tolerate symmetries in a similar manner. On such a view, Adachi et al. would adopt one “perspective” in which MF3 explains ΔFC, but from another perspective, ΔFC explains MF3. So, regardless of whether AC enjoys explanatory priority over FC, autonomists are no worse off than mechanists with respect to symmetry.
In summary, while Adachi et al.’s explanation satisfies the Node and Edge Requirements, this does not make it a mechanistic explanation. The reasons for this are twofold. First, only topological properties, not mechanistic ones, are responsible for the explanandum. Second, the explanation does not invoke the appropriate levels required for a constitutive mechanistic explanation. Thus, we have a counterexample to the most plausible mechanistic interpretation of topological explanations (MITE). Hence, absent some other MITE, we conclude that some topological explanations are non-mechanistic. Furthermore, this explanation resists characterization not only as a constitutive mechanistic explanation but also as a contextual and as an etiological one. This suggests that other MITEs will face steep challenges going forward.
5. Autonomist account
A more convincing case for autonomism would provide a general account of topological explanation.Footnote 18 To that end, we take a’s being F to topologically explain why b is G if and only if:
(T1) a is F (or approximately so);
(T2) b is G (or approximately so);
(T3) F is a topological property;
(T4) G is an empirical property; and
(T5) Had a been F’ (rather than F), then b would have been G’ (rather than G).
The account appears consonant with several others. It most strongly resembles Kostić’s (2020) account, but omits certain details of his account that are not relevant to the tasks at hand. Proponents of more general counterfactual theories of noncausal explanation (e.g., those mentioned in note 22) should also be amenable to this account of topological explanation. We extend these views by using this account to unify topological explanations of both the mechanistic and non-mechanistic varieties. The former will satisfy the MITE and T1-T5; the latter will only satisfy T1-T5.Footnote 19
Let us briefly motivate this analysis of topological explanation, and then show that it achieves the desired unification. The first two conditions, T1 and T2, are standard constraints on explanations—the explanans and explanandum must be approximately true. Note that such a view still leaves ample room for idealization and other fruitful distortions of properties other than F and G. Furthermore, as T4 indicates, we assume that “b is G” is an empirical proposition, i.e., the sort of claim that can serve as a proper scientific explanandum.Footnote 20 Finally (and as our examples show), in many topological explanations “a” and “b” denote one and the same system.
The third condition, T3, distinguishes topological explanations from other kinds of explanations. Hence, it is crucial that we define a topological property. Let a predicate be topological if it correctly describes a graph (or subgraph) and occurs in some nontrivial theorem derived using only mathematical statements, including the characterization of the graph in terms of its vertices and edges. Then a graph’s topological predicates denote its corresponding network’s topological properties. Paradigmatically, topological properties concern quantifiable patterns of connectivity in a network. Clustering coefficient, average path length, and frequency of three-node motifs are examples. As the examples above show, each of these topological properties is measurable and hence also empirical.
Finally, the fifth condition, T5, guarantees that the topological model is explanatory. As many others have noted, what distinguishes explanations from other kinds of representations is the former’s capacity to support such change-relating counterfactuals,Footnote 21 or answer “what-if-things-had-been-different questions” (Jansson and Saatsi 2017; Reutlinger 2016; Woodward 2003, 2018).Footnote 22 Topological explanations also answer these questions, but they are distinctive in being underwritten by counterfactual differences in a system’s topological properties. Such counterfactuals can describe what would happen if the system exhibited another topological property (in which case F’ is contrary to F) or if it simply lacked its actual topological property (in which case F’ is contradictory of F). Furthermore, we assume that only non-backtracking counterfactuals underwrite T5.
Crucially, T1-T5 allow some topological explanations to be mechanistic, but do not require all such explanations to work this way. For instance, both Watts and Strogatz’s and Adachi et al.’s explanations readily fit our account. Begin with Watts and Strogatz. Small-worldness is a topological property that can be predicated of the nervous system of C. elegans. Thus, T1 and T3 are satisfied. Similarly, the extent to which information spreads throughout this nervous system is an empirical property that can be accurately predicated of this network. So, T2 and T4 are satisfied. Finally, in Section 3.2, we presented the counterfactual that would satisfy T5:
Had the C. elegans neural network been regular or random (rather than small-world), then information transfer would have been less efficient (rather than its actual level of efficiency).
We saw that this explanation was both topological and, in virtue of satisfying the MITE, mechanistic. Turn now to Adachi et al.’s non-mechanistic topological explanation. Frequency of network motifs is a topological property that can be accurately predicated of macaque neocortices. In this context, ΔFC is characterized by observed correlations between the BOLD time series of different brain areas and by the anatomical connections in length2-AC patterns, which were confirmed in earlier studies (e.g., Honey et al. 2007). Hence, explananda will be true statements about these empirical facts. Thus, T1-T4 are satisfied. Finally, we have already rehearsed the relevant counterfactual in Section 4.1:
Had macaque neocortices had a different frequency of three-node network motifs (rather than a different clustering coefficient, modularity, or frequency of two-node network motifs), then these neocortices’ ΔFC would have been different (rather than its actual amount).
So, Adachi et al.’s explanation also satisfies T5. However, unlike the explanation involving C. elegans, this explanation violated the MITE. Thus, we see that T1-T5 form the common core shared by both mechanistic and non-mechanistic topological explanations.
6. Functional connectivity
We have argued that some topological explanations are non-mechanistic. Moreover, we have also claimed that T1-T5 provide sufficient conditions for genuinely autonomous topological explanations, such as Adachi et al.’s. Craver provides a potential counterexample to this latter claim. To wit, he takes his counterexample to show that topological models are explanatory only insofar as they are mechanistic explanations. Hence, a more complete defense of topological explanations’ autonomy should address Craver’s challenge. To that end, we first present Craver’s challenge, and then defend our core thesis that some topological explanations are not mechanistic explanations from this challenge.
Craver (2016, 704–6) argues that FC models are examples of topological models that are not explanations, chiefly because they do not represent mechanisms. As an illustration, we discuss Helling, Petkov, and Kalitzin’s (2019) study of the relation between mean functional connectivityFootnote 23 (MFC) and the likelihood of an epileptic seizure (ictogenicity).
Craver observes that FC networks’ nodes “need not … stand for working parts,” that is, for the entities that constitute a mechanism.Footnote 24 Rather, many FC models’ nodes are conventionally determined spatiotemporal regions adopted mostly because they are “conveniently measurable units of brain tissue rather than known functional parts.” For instance, Helling et al.’s FC model’s nodes are readings from EEG channels, i.e., the electrodes measuring the brain’s electrical activity.Footnote 25 EEG channels are spaced evenly—at increments of 10 or 20 percent of the distance from the bridge of the nose to the lowest point of the skull from the back of the head. This suggests that the spatial units represented by these nodes are merely conventional. As mentioned above, nontrivial conceptions of mechanism should distinguish entities and activities from spatiotemporal regions merely determined by convention. For instance, while pistons, gears, camshafts and the like are entities in the mechanism for a car’s moving, each one-centimeter cube comprising a car is not an entity in that mechanism.
Similarly, while turning a crankshaft describes a piston’s activity, whatever happens every two seconds to a piston does not (barring extraordinary coincidence, of course). Yet the nodes in FC models pick out temporal units that are just as conventional as the spatial ones. In Helling et al.’s FC model, nodes are time series of readings from EEG channels. For each EEG channel, a time series was constructed by sampling its readings several times per second. So, in our parlance, it is quite clear that FC models violate the MITE’s Node Requirement: their nodes do not denote entities and activities constituting a mechanism.
Craver (2016, 705) also observes that FC models’ “edges do not necessarily represent anatomical connections, causal connections, or communications,” that is, they flout the MITE’s Edge Requirement. Recall that this requires a topological explanation’s edges to represent interactions between entities or activities. The edges in Helling et al.’s model are synchronization likelihoods, which are correlations between pattern recurrences in the time series data generated by two or more EEG channels. For mechanists, interactions cannot be mere correlations (Bechtel 2015; Craver and Tabery 2019; Glennan 1996).Footnote 26
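To see what such a model involves computationally, the following is a rough sketch, assuming Python with numpy and using plain Pearson correlation as a crude stand-in for the synchronization likelihood that Helling et al. actually compute; the channel count and data are hypothetical placeholders.

```python
# A rough sketch of a functional-connectivity matrix and its mean (MFC),
# assuming Python/numpy. Pearson correlation stands in for synchronization
# likelihood; the EEG data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 5000))  # 19 hypothetical EEG channels x 5,000 samples

fc = np.corrcoef(eeg)                  # pairwise correlations between channel time series
off_diag = fc[~np.eye(fc.shape[0], dtype=bool)]
mfc = np.abs(off_diag).mean()          # mean functional connectivity
print(mfc)
```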
Craver takes these two points to be the kiss of death for FC models’ explanatory status—guilt by Hempelian association:
FC matrices are network models. They provide evidence about community structure in the brain. Community structure is relevant to brain function. But the matrices do not explain brain function. They don’t model the right kinds of stuff: the nodes aren’t working parts, and the edges are only correlations. As for the barometer and the storm, A is evidence for B, and B explains C, but A does not explain C.Footnote 27
Craver is tapping into a powerful intuition: in most sciences, models consisting only of correlations are (at best) merely evidential, not explanatory, of anything they represent. Call this the barometer intuition. In the case of FC models, the barometer intuition seems even more acute, for the correlations are between spatiotemporal units that lack causal roles in virtue of being merely conventional.
However, our view contradicts the barometer intuition. Helling et al. conducted prospective studies involving subjects with focal seizures either starting treatment with an anti-epileptic drug or undergoing drug tapering over several days. They collected EEG data from each patient in order to calculate each patient’s MFC. Helling et al. found that MFC decreased for those who responded positively to their drug treatment, and increased for those who responded negatively.
Their model thereby satisfies T5. For instance, suppose that we ask why a patient responded negatively to anti-epileptic drugs. Then the relevant counterfactual would be:
Had the patient’s MFC decreased (rather than increased), then the patient would have responded positively (rather than negatively).Footnote 28
Because MFC is a topological property, positive drug response is an empirical property, and the relevant statements are true, the model also satisfies T1-T4. Thus, Section 5’s account claims that this is an explanation, which conflicts with the barometer intuition. Given how powerful the barometer intuition is, this appears to pose a serious problem for the account of topological explanation proposed in Section 5.
As we see it, the most straightforward autonomist reply embraces the barometer intuition and agrees with Craver about FC models’ lack of explanatory power. However, it does not budge one inch on autonomism’s more central point that some topological explanations are not mechanistic explanations. The argument is simple: Craver’s point is that FC networks’ topological properties are insufficient as explanantia. However, not all topological explanations work this way. For example, Adachi et al.’s topological explanation only features functional connectivity in the explanandum; other topological explanations do not use FC modeling at all. Hence, Craver’s argument does not undermine Section 4’s arguments.
Autonomists still must distinguish explanatory and non-explanatory topological models. This will require that they add further conditions to Section 5’s unified account of topological explanation. Autonomists might contrast Adachi et al.’s explanation with Helling et al.’s FC model and propose that topological models are explanatory only if they satisfy T1-T5, plus:
(T6) The topological model of a satisfies the Node and Edge Requirements.
Adachi et al.’s explanation satisfies T6, but (typical) FC models do not. Thus, the resulting view is autonomist in claiming that some of these explanations are non-mechanistic (namely, those that violate either the Responsibility or the Interlevel Requirement). However, we do not claim that this is the only way of supplementing T1-T5 or even of responding to Craver’s challenge. Other approaches are possible and should be explored in future research.Footnote 29
7. Conclusion
We began by noting that the debate between autonomists and mechanists suffered from imprecision. Specifically, the discussion lacked a clear account of how all topological explanations could be mechanistic. We have filled this gap, albeit in the service of providing examples of non-mechanistic topological explanations. We conclude that topological explanations sometimes swing free of mechanistic considerations.
Of course, there is further work to be done. For instance, we have not discussed how graphs represent their respective networks, and this feeds naturally into the vibrant literature on scientific representation (Frigg and Nguyen 2017).Footnote 30 Whether similarity, structuralist, inferentialist, or some other account of representation best accords with topological explanations promises to be an interesting topic, one that broaches longstanding issues such as the applicability of mathematics.
Refining the kinds of counterfactuals that topological explanations ought to support is another exciting avenue of further development. As we see it, this has three crucial implications for advancing the position developed here. First, while we sketched an argument as to why some topological explanations are noncausal, future work should further investigate the link between topology and etiology. For instance, a prominent view is that causal explanations differ from noncausal ones in supporting counterfactuals involving interventions (Woodward 2003). Hence, a suggestive line of research is to explore the relationship between topological explanations and interventions.
Second, some have argued that topological explanations detached from any “ontic dependence relation” will fail to respect explanation’s characteristic “directionality” (Craver 2016; Craver and Povich 2017). Whereas these ontic dependence relations used to be restricted to causal-mechanical relations, recent work has sought to broaden their range substantially (Povich 2018). Consequently, all topological explanations may still track with one of these broader ontic dependency relations. Alternatively, Kostić and Khalifa (2021) argue that nothing ontic is needed to account for topological explanations’ directionality. Clarifying the precise nature of the counterfactuals involved in topological explanations helps to circumscribe what a more liberalized conception of ontic dependency relations entails, and thus proves useful in navigating these theoretical options (cf. Povich 2019).
Finally, further attention to the counterfactuals involved in topological explanations promises to address concerns that topological explanations are especially susceptible to classic Hempelian problems (such as asymmetry, irrelevance, and the like) because of their extensive appeal to mathematical derivations. We have already made a small contribution toward assuaging this worry by showing how our account can distinguish explanatory from evidential models, thereby blocking any tight analogy with Hempel’s difficulties in preventing a barometer from “explaining” a storm. Moreover, using an analysis quite similar to our own, Kostić (2020) has outlined several ways that topological explanations are asymmetric. Nevertheless, assembling all of these points in a more systematic way would shed further light on topological explanations.
In closing, topological explanations are not merely a further chapter in the mechanist handbook. Attempts to incorporate all of them into a mechanistic framework are mistaken and fail to respect the unique features of these explanations. Moreover, doing so would foreclose several interesting questions to which philosophers of science would be well-served to attend.
Acknowledgements
We would like to thank Sara Green, Alexandros Goulas, Marc Lange, Arnon Levy, Charles Rathkopf, and two anonymous referees for their helpful discussions and comments on earlier drafts of this paper. We would also like to thank Tarja Knuuttila and her research group at the University of Vienna for their valuable feedback on this paper, which we presented in December of 2019.
Funding
Daniel Kostić would like to acknowledge funding by the Radboud Excellence Initiative. Kareem Khalifa would like to acknowledge funding from the American Council of Learned Society’s Burkhardt Fellowship, “Explanation as Inferential Practice.”