1. Introduction and main result
Perturbation analysis plays an important role in both stochastic geometry [Reference Last and Penrose14, Chapter 19] and statistical mechanics. For Gibbs point processes (grand-canonical Gibbs measures in statistical mechanics), quantities like factorial moment densities (also called correlation functions) are highly nontrivial functions of the intensity of the Gibbs point process itself (density) or the intensity of an underlying Poisson point process (activity), and they are commonly studied via power series expansions in these parameters. When interactions are pairwise, it is well known that the coefficients of these expansions are given by sums over geometric, weighted graphs. There is a vast literature addressing the convergence of these expansions; see, for example, [Reference Brydges2, Reference Malyshev and Minlos16]. Some attempts have been made at exploiting power series expansions from statistical mechanics for likelihood analysis of spatial point patterns in spatial statistics; see [Reference Ogata and Tanemura19].
The physics literature provides similar power series expansions for connectedness functions in a class of percolation models driven by Gibbs point processes, the so-called random connection models (RCMs) [Reference Coniglio, DeAngelis and Forlani6]. The expansion coefficients for the pair-connectedness function can be written in terms of a sum of certain connected graphs (see (3.1)) and the coefficients for the direct-connectedness function in terms of a sum over certain 2-connected graphs (see (4.1)). The two functions are related via the Ornstein–Zernike equation (OZE) [Reference Ornstein and Zernike20], an integral equation which is of paramount importance in physical chemistry and soft matter physics and which enters some approaches to percolation theory; see [Reference Torquato25, Chapter 10]. For Bernoulli bond percolation on $\mathbb Z^d$ , the OZE encodes a renewal structure and is used to prove Ornstein–Zernike behavior [Reference Campanino and Ioffe4], a precise asymptotic formula for pair-connectedness functions in the subcritical regime that incorporates subleading corrections to the exponential decay. The OZE also appears as a by-product of lace expansions [Reference Heydenreich, van der Hofstad, Last and Matzke10, Proposition 5.2].
The expansions for connectedness functions appearing in [Reference Coniglio, DeAngelis and Forlani6] are derived as a means of discussing the following question: is it possible to choose the notion of connectivity in such a way that the percolation transition, if it occurs at all, coincides with the phase transition in the sense of non-uniqueness of Gibbs measures? We remind the reader that the relationship between the two phenomena is rather subtle, and in general the corresponding critical parameters do not match; see [Reference Jansen12] and references therein. To the best of our knowledge, the question above has not been fully answered for continuum systems, although Betsch and Last [Reference Betsch and Last1] were recently able to show that uniqueness of the Gibbs measure follows from the non-percolation of an associated RCM driven by a Poisson point process.
Moreover, the convergence of the expansions for connectedness functions has not been treated in a mathematically rigorous way, in stark contrast with the rich theory of cluster expansions. Even in the simplest case of the RCM driven by a Poisson point process that we consider in this paper, where activity and density coincide and are called the intensity, rigorous results for the expansion of connectedness functions barely exist: the first ones were obtained by Last and Ziesche in [Reference Last and Ziesche15]. However, Last and Ziesche do not prove that their expansions coincide with the physicists’ expansion, and they do not prove quantitative bounds for the domain of convergence of the small-intensity expansion.
Our main result addresses graphical expansions of the direct-connectedness function in infinite volume. The results by Last and Ziesche [Reference Last and Ziesche15], combined with our combinatorial considerations from Section 6.2, imply that the physicists’ expansions have a positive radius of convergence; however, it is not our purpose to provide a quantitative bound for the latter. Instead, we first perform a re-summation, in finite volume, of the physicists’ expansion. Although the re-summed expansion is no longer a power series in the intensity of the underlying Poisson point process, it has the (conjectured) advantage of converging in a bigger domain than the physicists’ expansion. We provide quantitative bounds on the intensity that allow us to pass to the infinite-volume limit in the re-summed expansion of the direct-connectedness function. The proof uses the continuum BK inequality proved in [Reference Heydenreich, van der Hofstad, Last and Matzke10].
In addition, we discuss the relationship of the physicists’ and our expansion to the lace expansion for the continuum random connection model [Reference Heydenreich, van der Hofstad, Last and Matzke10]. Roughly, the lace expansion could in theory be rederived from the graphical expansion by yet another re-summation step. In fact a notion of laces similar to the laces for the self-avoiding random walk [Reference Brydges2, Reference Slade22] already enters the proof of our main result on graphical expansions (see Section 4.3). Thus, contrary to what is stated in [Reference Heydenreich and van der Hofstad9, Chapter 6.1], the denomination ‘lace expansion’ for percolation is not a misnomer, at least for continuum systems. It is unclear, however, whether the discussion offers a new angle of attack on the intricate convergence problems in the theory of lace expansions.
Let us properly introduce the RCM and state our results. The RCM depends on two parameters, namely its intensity $\lambda \geq 0$ and the (measurable) connection function $\varphi\,:\, \mathbb R^d \to [0,1]$ , satisfying
as well as the symmetry condition $\varphi(x) = \varphi({-}x)$ for all $x\in\mathbb R^d$ . The model is described informally as follows: the vertex set is taken to be a homogeneous Poisson point process (PPP) in $\mathbb R^d$ of intensity $\lambda$ , denoted by $\eta$ . For any pair $x,y\in\eta$ , we add the edge $\{x,y\}$ with probability $\varphi(x-y)$ and independently of all other pairs. We refer to [Reference Heydenreich, van der Hofstad, Last and Matzke10, Reference Meester and Roy18] for a formal construction.
The RCM is an undirected simple random spatial graph and a standard model of continuum percolation. We denote it by $\xi$ and we use $\mathbb P_\lambda$ to denote the corresponding probability measure. Its vertex set is $V(\xi)=\eta$ , and we let $E(\xi)$ denote its edge set.
For $x\in\mathbb R^d$ , we let $\xi^{x}$ be the RCM augmented by the point x. In other words, the vertex set of $\xi^x$ is $\eta \cup \{x\}$ and the edges are formed as described above. In particular, edges between x and points of $\eta$ are drawn independently and according to $\varphi$ . More generally, for a set of points $x_1, \ldots, x_k$ , we let $\xi^{x_1, \ldots, x_k}$ be the RCM with vertex set $\eta\cup \{x_1, \ldots, x_k\}$ (also here, edges between the deterministic points $x_i, x_j$ are drawn independently and according to $\varphi$ ).
We say that $x,y \in \eta$ are connected (and write $x \longleftrightarrow y\textrm { in } \xi$ ) if there is a path from x to y in $\xi$ . For $x\in\mathbb R^d$ , we let $\mathscr {C}(x) = \mathscr {C}(x,\xi^x) = \{ y \in \eta^x\,:\, x \longleftrightarrow y\textrm { in } \xi^x \}$ be the cluster of x and define the pair-connectedness (or two-point) function $\tau_\lambda\,:\,\mathbb R^d\times\mathbb R^d \to [0,1]$ to be $\tau_\lambda(x,y) \,:\!=\, \mathbb P_\lambda\big(x \longleftrightarrow y\textrm { in } \xi^{x,y}\big)$ .
Thanks to the translation-invariance of the model, we have $\tau_\lambda(x,y) = \tau_\lambda(\textbf{0},x-y)$ $\big($ where $\textbf{0}$ denotes the origin in $\mathbb R^d\big)$ , and we can also define $\tau_\lambda$ as a function $\tau_\lambda\,:\,\mathbb R^d \to [0,1]$ with $\tau_\lambda(x) = \mathbb P_\lambda\big(\textbf{0} \longleftrightarrow x\textrm { in } \xi^{\textbf{0},x}\big)$ .
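To fix ideas, the following minimal Python sketch samples $\xi^{\textbf{0},x}$ restricted to a box and estimates the corresponding finite-volume two-point function by Monte Carlo. The box, the Gaussian choice of $\varphi$ , the sample sizes, and all function names are illustrative assumptions and not part of the model's definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rcm(points, phi):
    """Random connection model on a fixed vertex set: each pair {i, j} becomes
    an edge independently with probability phi(points[i] - points[j])."""
    n = len(points)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < phi(points[i] - points[j]):
                adj[i, j] = adj[j, i] = True
    return adj

def connected(adj, i, j):
    """Depth-first search: does j lie in the cluster of i?"""
    seen, stack = {i}, [i]
    while stack:
        v = stack.pop()
        if v == j:
            return True
        for w in np.flatnonzero(adj[v]):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def estimate_tau(x, lam, phi, side=8.0, d=2, n_samples=200):
    """Monte Carlo estimate of the finite-volume two-point function
    tau_lambda^Lambda(0, x) on the cube Lambda = [-side/2, side/2]^d."""
    hits = 0
    for _ in range(n_samples):
        n_pts = rng.poisson(lam * side ** d)                      # Poisson number of points
        eta = rng.uniform(-side / 2, side / 2, size=(n_pts, d))   # PPP restricted to Lambda
        pts = np.vstack([np.zeros(d), np.asarray(x, float), eta])  # add the points 0 and x
        hits += connected(sample_rcm(pts, phi), 0, 1)
    return hits / n_samples

# Illustrative connection function (any measurable phi with values in [0, 1] works):
phi = lambda z: float(np.exp(-np.dot(z, z)))
print(estimate_tau(x=[1.0, 0.0], lam=0.4, phi=phi))
```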
We say that $x,y\in\eta$ are 2-connected (or doubly connected) and write $x \Longleftrightarrow y\textrm { in } \xi$ if there are two paths from x to y that have only their endpoints in common (or if x and y are directly connected by an edge or if $x=y$ ). We define $\sigma_\lambda(x) \,:\!=\, \mathbb P_\lambda\big(\textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0},x}\big)$ for $x\in\mathbb R^d$ .
Recall that the critical intensity for percolation is defined by
and that the identity
has been shown to hold true for connection functions $\varphi$ that are nonincreasing in the Euclidean distance (see [Reference Meester17]). It is proved in [Reference Last and Ziesche15] that for $\lambda<\lambda_c$ , there exists a uniquely defined integrable and essentially bounded function $g_\lambda\,:\,\mathbb R^d\times\mathbb R^d\to \mathbb R$ such that
This equation is known as the Ornstein–Zernike equation (OZE), and $g_\lambda$ is called the direct-connectedness function.
For two integrable functions $f,g\,:\, \mathbb R^d \to \mathbb R$ , we recall that the convolution $f\ast g$ is given by $(f\ast g)(x) = \int_{\mathbb R^d} f(y)\, g(x-y) \, \textrm{d} y$ .
We let $f^{\ast 1} = f$ and $f^{\ast m} = f^{\ast (m-1)} \ast f$ . Notice that we can interpret both the pair-connectedness function $\tau_\lambda$ and the direct-connectedness function $g_\lambda$ as functions on $\mathbb R^d$ , thanks to translation-invariance. The OZE then can be formulated as
Naturally, the question arises whether one can provide an explicit form for the direct-connectedness function $g_\lambda$ . Unfortunately, an immediate probabilistic interpretation of $g_\lambda$ is not known. One classical approach from the physics literature is to obtain explicit approximations for the solution $g_\lambda$ of (1.2) by introducing complementary equations, known as closure relations, the choice of which depends on the specifics of the model considered. Different closure relations provide different explicit approximations for $g_\lambda$ and thus also for the pair-connectedness function $\tau_\lambda$ , e.g., via a reformulation of the OZE (1.2) for the Fourier transforms of the connectedness functions. Most prominent are the Percus–Yevick closure relations [Reference Chiew and Stell5, Reference Torquato25]; other examples can be found in [Reference Hansen and McDonald7]. Another approach [Reference Coniglio, DeAngelis and Forlani6] is to directly provide an independent definition of $g_\lambda$ in terms of a graphical expansion and then argue that this expansion satisfies the OZE (1.2). We follow the spirit of the latter approach: our main result is a graphical expansion for the direct-connectedness function, with quantitative bounds on the domain of convergence.
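For orientation, we spell out the Fourier reformulation alluded to above (a purely formal manipulation at this point): writing $\hat f(q) = \int_{\mathbb R^d} \text{e}^{\text{i} q \cdot x} f(x) \, \textrm{d} x$ for the Fourier transform, the convolution form of the OZE becomes an algebraic relation,
\begin{equation*} \hat\tau_\lambda(q) = \hat g_\lambda(q) + \lambda\, \hat g_\lambda(q)\, \hat\tau_\lambda(q), \qquad \text{i.e.} \qquad \hat g_\lambda(q) = \frac{\hat\tau_\lambda(q)}{1+\lambda\, \hat\tau_\lambda(q)}, \end{equation*}
so that any explicit approximation for one of the two connectedness functions immediately yields one for the other.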
Let
It is not hard to see that $\tilde\lambda_\ast \leq \lambda_\ast \leq \lambda_c$ using (1.5) below.
We can now state our main theorem. It provides (in general dimension) the first rigorous quantitative bounds on $\lambda$ under which the direct-connectedness function admits a convergent graphical expansion.
Theorem 1.1. (Graphical expansion of the direct-connectedness function.) For $\lambda<\lambda_\ast$ , the direct-connectedness function $g_\lambda(x_1,x_2)$ is given by the expansion (4.24), which is absolutely convergent pointwise for all $(x_1,x_2)\in \mathbb{R}^{2d}$ . Moreover, for $\lambda< \tilde \lambda_\ast$ , the expansion (4.24) converges in the $L^1(\mathbb R^d,\textrm d x_2)$ -norm for all $x_1\in \mathbb R^d$ .
The convergence results for the expansion (4.24) are proved in Theorem 4.1 and Theorem 4.2; the equality with the direct-connectedness function is proved in Section 5.
Last and Ziesche show that there is some $\lambda_0>0$ such that $g_\lambda$ is given by a power series for $\lambda \in [0,\lambda_0)$ . No quantitative bounds for $\lambda_0$ are provided, however. In Section 6.2, we discuss how to relate this expansion to our expression for $g_\lambda$ . We now make several remarks on Theorem 1.1 and the quantitative nature of the bounds provided there.
-
Since $0 \leq \sigma_\lambda\leq 1$ , we can bound
(1.5) \begin{equation} \sum_{k \geq 1} \lambda^{k-1} \sigma_\lambda^{\ast k}(x) \leq \sum_{k \geq 0} \bigg( \lambda \int \sigma_\lambda(x) \, \textrm{d} x \bigg)^k = \sum_{k \geq 0} \big( \mathbb E_\lambda\big[\big|\big\{x \in \eta\,:\, \textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0}}\big\}\big| \big] \big)^k, \end{equation}where the identity is due to the Mecke equation (2.1). This shows that $\tilde\lambda_\ast \leq \lambda_\ast$ and that $\tilde\lambda_\ast$ is the point where the expected number of points in $\eta$ that are 2-connected to the origin passes 1 (i.e., we have $\mathbb E_\lambda\big[\big|\big\{x \in \eta\,:\, \textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0}}\big\}\big| \big]\geq 1$ for all $\lambda>\tilde{\lambda}_\ast$ ).
-
The argument of the geometric series in (1.5) can be further bounded from above by
\begin{equation*} \lambda \int \tau_\lambda(x)\textrm{d} x = \mathbb E_\lambda\big[\big|\big\{ x \in \eta\,:\, \textbf{0} \longleftrightarrow x\textrm { in } \xi^{\textbf{0}}\big\}\big|\big],\end{equation*}the expected cluster size (minus 1). A classical branching-process argument gives that $\tilde\lambda_\ast \geq 1/2$ (see, for example, [Reference Penrose21, Theorem 3]); a back-of-the-envelope version of this bound is spelled out after this list.
-
In high dimension, we have the following result, proven in [Reference Heydenreich, van der Hofstad, Last and Matzke10]: under some additional assumptions on $\varphi$ (see [Reference Heydenreich, van der Hofstad, Last and Matzke10, Section 1.2]), there is an absolute constant $c_0$ such that
\begin{equation*} \lambda_c \int \sigma_{\lambda_c}(x) \, \textrm{d} x \leq 1 + c_0/d \end{equation*}in sufficiently high dimension, or, for a class of spread-out models (closely related to Kac potentials in statistical mechanics; see [Reference Hara and Slade8]) with a parameter L,\begin{equation*}\lambda_c\int \sigma_{\lambda_c}(x) \, \textrm{d} x \leq 1 + c_0 L^{-d}\end{equation*}for all dimensions $d>6$ (in the spread-out case, $c_0$ is independent of L but may depend on d). As $\sigma_\lambda$ is nondecreasing in $\lambda$ , this provides a bound for the whole subcritical regime. This also implies that for every $\varepsilon>0$ , there is $d_0$ (respectively, $L_0$ ) such that $\tilde\lambda_\ast \geq 1-\varepsilon$ for all $d \geq d_0$ (respectively, $L \geq L_0$ and $d>6$ ). As we also know that $\lambda_c \searrow 1$ as the dimension becomes large, this shows that in high dimension, $\tilde\lambda_\ast$ (and thus also $\lambda_\ast$ ) gets arbitrarily close to $\lambda_c$ .
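For orientation, here is the back-of-the-envelope version of the branching-process bound from the second remark. Assuming the standard domination of the cluster of the origin by a Galton–Watson tree whose offspring distribution has mean $\lambda\int\varphi(x)\,\textrm{d} x = \lambda$ (see [Reference Penrose21] for the rigorous statement), we get, for $\lambda<1$ ,
\begin{equation*} \lambda \int \sigma_\lambda(x) \, \textrm{d} x \leq \lambda \int \tau_\lambda(x) \, \textrm{d} x \leq \sum_{k \geq 1} \lambda^k = \frac{\lambda}{1-\lambda}, \end{equation*}
which is strictly smaller than 1 precisely when $\lambda<1/2$ ; hence $\tilde\lambda_\ast \geq 1/2$ .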
Outline of the paper. The paper proceeds as follows. We introduce most of our important notation in Section 2. This allows us to demonstrate some basic (and mostly well-known) central ideas in Section 3, where the two-point function is discussed in finite volume. Section 4 contains the main body of work for the proof of Theorem 1.1 (the convergence results). The remainder of Theorem 1.1 regarding the OZE is then proved in Section 5.
We discuss our results in Section 6. In particular, we point out where many of the formulas can be found in the physics literature (not rigorously proven) and allude to generalizations to Gibbs point processes. Moreover, we highlight the connection to two other expressions for the pair-connectedness function; in particular, we show how our expansions relate to the lace expansion. Lastly, we address other percolation models very briefly in Section 6.4.
2. Fixing notation
2.1. General notation
We let $[n] \,:\!=\{1,\ldots, n\}$ and $[n]_0 \,:\!= [n] \cup \{0\}$ . For a set V, we write $\binom{V}{2} \,:\!=\{E \subseteq V\,:\, |E|=2\}$ . For $I = \{i_1, i_2, \ldots, i_\kappa\} \subset \mathbb N$ , let $\vec x_I=\big(x_{i_1}, \ldots, x_{i_\kappa}\big)$ . For compact intervals $[a,b]\subset \mathbb R$ , we write $\vec x_{[a,b]} = \vec x_I$ with $I=[a,b]\cap \mathbb N$ . If $a=1$ , we write $\vec x_{[b]} = \vec x_{[1,b]}$ . By some abuse of notation, we are going to interpret $\vec x_{[a,b]}$ both as an ordered vector and as a set.
If not specified otherwise, $\Lambda$ denotes a bounded, measurable subset of $\mathbb R^d$ .
2.2. Graph theory
We recall that a (simple) graph $G=(V,E) = (V(G), E(G))$ is a tuple with vertex set (or set of points, sites, nodes) V and edge set (or set of bonds) $E \subseteq \binom{V}{2}$ . In this paper, we will always consider graphs with $V \subset \mathbb R^d$ , and for $x,y\in\mathbb R^d$ , an edge $\{x,y\}$ will sometimes be abbreviated xy.
If $xy\in E$ , we write $x \sim y$ (and say that x and y are adjacent). We extend this notation and write $x \sim W$ for $x \in V$ and $W \subseteq V$ if there is $y \in W$ such that $x \sim y$ ; also, we write $A \sim B$ if there is $x\in A$ such that $x \sim B$ . For $W \subseteq V$ , we define the W-neighborhood $N_W(x) = \{y \in W\,:\, x \sim y\}$ and the W-degree of a vertex $x\in V$ as $\deg_{W}\!(x) = |N_W(x)|$ , and we write $N(x) = N_V(x)$ as well as $\deg\!(x) = \deg_V\!(x)$ . For two sets $A,B \subseteq V$ , we write $E(A,B) = \{xy \in E(G)\,:\, x\in A, y \in B\}$ .
Given a graph $G=(V,E)$ and $W \subseteq V$ , we denote by $G[W] \,:\!=\, (W, \{e \in E\,:\, e \subseteq W\})$ the subgraph of G induced by W. Given two simple graphs G, H, we let $G \oplus H \,:\!=\, (V(G) \cup V(H), E(G) \cup E(H))$ .
Connectivity. Given a graph G and two of its vertices $x,y\in V(G)$ , we say that x and y are connected if there is a path between x and y—that is, a sequence of vertices $x=v_0, v_1, \ldots, v_k=y$ for some $k\in\mathbb N_0$ such that $v_{i-1}v_i \in E(G)$ for $i \in [k]$ . We write $x \longleftrightarrow y$ in G or simply $x \longleftrightarrow y$ . We call $\mathscr {C}(x)=\mathscr {C}(x;\,G) = \{y \in V(G)\,:\, x \longleftrightarrow y\}$ the cluster (or connected component) of x in G. If there is only one cluster in G, we say that G is connected.
For $x \longleftrightarrow y$ in G, we let $\textsf {Piv}(x,y;\,G)$ denote the set of pivotal vertices for the connection between x and y. That is, $v\notin \{x,y\}$ is in $\textsf {Piv}(x,y;\,G)$ if every path from x to y in G passes through v. We say that x is doubly connected to y in G (and write $x \Longleftrightarrow y\textrm { in } G$ ) if $\textsf {Piv}(x,y;\,G)=\varnothing$ . We remark that in the physics literature, pivotal points are usually known as nodal points.
In the pathological case $x=y$ , we use the convention $x \longleftrightarrow x$ in G and set $\textsf {Piv}(x,x;\,G)=\varnothing$ for any graph G with $x\in V(G)$ (equivalently, $x \Longleftrightarrow x\textrm { in } G$ ).
We observe that the pivotal points $\{u_1,\ldots, u_k\}$ can be ordered in a way such that every path from x to y passes through the pivotal points in the order $(u_1, \ldots, u_k)$ . We define $\textsf{PD}(x,y,G) = \textsf{PD}(G)$ to be the pivot decomposition of G, that is, a partition of the vertex set V into a sequence, $(x,V_0, u_1, V_1, \ldots, u_k, V_k, y)$ , where $(u_1, \ldots, u_k)$ are the ordered pivotal points and $V_i$ is the (possibly empty) set of vertices that can be reached from x only by passing through $u_i$ but that remain connected to x after the removal of $u_{i+1}$ (with the conventions $u_0=x$ and $u_{k+1}=y$ ). See Figure 1.
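The following Python sketch computes these objects for a concrete finite graph using networkx; the brute-force removal test and all function names are ours, chosen for clarity rather than efficiency.

```python
import networkx as nx

def pivotal_vertices(G, x, y):
    """Piv(x, y; G): vertices v outside {x, y} through which every x-y path
    passes, i.e. whose removal disconnects x from y (x <-> y is assumed)."""
    piv = []
    for v in G.nodes:
        if v in (x, y):
            continue
        H = G.subgraph(n for n in G.nodes if n != v)
        if not nx.has_path(H, x, y):
            piv.append(v)
    return piv

def ordered_pivots(G, x, y):
    """The pivotal vertices in the order in which every x-y path meets them;
    any fixed x-y path (here a shortest one) realizes that order."""
    piv = set(pivotal_vertices(G, x, y))
    return [v for v in nx.shortest_path(G, x, y) if v in piv]

def doubly_connected(G, x, y):
    """x <=> y in G: x and y are connected and Piv(x, y; G) is empty
    (with the stated convention for x == y)."""
    return x == y or (nx.has_path(G, x, y) and not pivotal_vertices(G, x, y))

# Toy example: on the 4-cycle, 0 and 3 are joined by two vertex-disjoint paths.
G = nx.cycle_graph(4)
print(ordered_pivots(G, 0, 3), doubly_connected(G, 0, 3))  # [] True
G.remove_edge(0, 1)                                        # now 0-3-2 is the only route to 2
print(ordered_pivots(G, 0, 2))                             # [3]
```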
Classes of graphs. Given a (locally finite) set $X \subset \mathbb R^d$ , we let $\mathcal G(X)$ be the set of graphs with vertex set X. We let $\mathcal C(X)$ be the set of connected graphs on X. Moreover, for $x,y \in X$ , we let $\mathcal D_{x,y}(X)\subseteq \mathcal C(X)$ be the set of non-pivotal graphs, i.e., the set of connected graphs such that $\textsf {Piv}(x,y;\,G) = \varnothing$ .
Given m bags $X_1, \ldots, X_m \subset \mathbb R^d$ with $|X_i \cap X_j| \leq 1$ for all $1 \leq i<j \leq m$ , we let $\mathcal G(X_1,\ldots, X_m)$ denote the set of m-partite graphs on $X_1, \ldots, X_m$ , i.e., the set of graphs G with $V(G) = \cup_{i=1}^m X_i$ and $E(G[X_i]) =\varnothing$ for $i\in[m]$ . Note that we allow bags to have (at most) one vertex in common, which is a slight abuse of the notation in graph theory, where m-partite graphs have disjoint bags.
The notion of ( $\pm$ )-graphs. We introduce a ( $\pm$ )-graph as a triple $G^\pm = (V, E^+, E^-)$ ,
where V is the vertex set and $E^+,E^- \subseteq \binom{V}{2}$ are disjoint. In other words $G^\pm$ is a graph where every edge is of exactly one of two types (plus or minus). We set $E \,:\!=\, E^+ \cup E^-$ and associate to $G^\pm$ the two simple graphs $G^{|\pm|} \,:\!=\, (V,E)$ and $G^+\,:\!=\,(V^+,E^+)$ , where $V^+ \,:\!=\, \{x \in V\,:\, \exists e \in E^+\,:\, x \in e\}$ are the vertices incident to at least one $({+})$ -edge.
We extend all the notions for simple graphs to ( $\pm$ )-graphs. In particular, given $X \subset \mathbb R^d$ , we let $\mathcal G^\pm(X)$ be the set of ( $\pm$ )-graphs on X. Moreover, $\mathcal C^\pm(X)$ are the ( $\pm$ )-connected graphs on X, that is, the graphs such that $G^{|\pm|}$ is connected. Similarly, $\mathcal C^+(X)\subset \mathcal C^\pm(X)$ are the $({+})$ -connected graphs, that is, those where $G^+$ is connected and $V(G)=V^+$ . For $x,y\in X$ , we denote by $\mathcal D^\pm_{x,y}(X)$ the set of those ( $\pm$ )-connected graphs on X where $\textsf {Piv}(x,y;\,G^{|\pm|})= \varnothing$ , and by $\mathcal D^+_{x,y}(X)\subset \mathcal D^\pm_{x,y}(X)$ the set of those ( $\pm$ )-connected graphs on X where $\textsf {Piv}\big(x,y;\,G^+\big)=\varnothing$ . We also define the $({{\pm}})$ -pivot decomposition $\textsf{PD}^\pm\big(x,y,G^\pm\big) = \textsf{PD}^\pm\big(G^\pm\big) = \textsf{PD}(G^{|\pm|})$ and the $({+})$ -pivot decomposition $\textsf{PD}^+\big(x,y,G^\pm\big) = \textsf{PD}^+\big(G^\pm\big) = \textsf{PD}\big(G^{+}\big)$ . Lastly, we write $x \overset{+}{\longleftrightarrow} y$ if there is a path from x to y in $E^+$ .
Given a ( $\pm$ )-graph G and a simple graph H, we define
Weights. Given a simple graph G, a ( $\pm$ )-graph H on $X \subset \mathbb R^d$ , and the connection function $\varphi$ , we define the weights
2.3. The random connection model
The RCM $\xi$ can be formally constructed as a point process, that is, a random variable taking values in the space of locally finite counting measures $(\textbf N, \mathcal N)$ on some underlying metric space $\mathbb X$ . There are various ways to choose $\mathbb X$ . One option is to let $\mathbb X = \mathbb R^d \times \mathbb M$ for an appropriate mark space $\mathbb M$ (see [Reference Meester and Roy18]); another way can be found in [Reference Heydenreich, van der Hofstad, Last and Matzke10, Reference Last and Ziesche15]. In any case, one can reconstruct from $\xi$ the point process $\eta$ on $\mathbb R^d$ which makes up the vertex set of $\xi$ . We treat $\eta$ both as a counting measure and as a set, giving meaning to statements of the form $x \in \eta$ .
If $e=\{x,y\}$ is an edge, then we write $\varphi(e)= \varphi(x-y)$ . For a bounded set $\Lambda\subset \mathbb R^d$ , we write $\eta_\Lambda=\eta\cap\Lambda$ and let $\xi_\Lambda$ denote the RCM restricted to $\Lambda$ , that is, $\xi[\eta_\Lambda]$ . The two-point function restricted to $\Lambda$ is defined as $\tau_\lambda^\Lambda(x,y) = \mathbb P_\lambda\big(x \longleftrightarrow y\textrm { in } \xi_\Lambda^{x,y}\big)$ for $x,y \in \Lambda$ and zero otherwise.
For $V \subset W$ , there is a natural way to couple the models $\xi^V$ and $\xi^W$ , which is by deleting from $\xi^W$ all points in $W \setminus V$ along with their incident edges. We implicitly assume throughout this paper that this coupling for different sets of added points is used.
The Mecke equation. Since it is used repeatedly throughout this paper, we state the Mecke equation, a standard tool in point process theory, in its version for the RCM (see [Reference Last and Ziesche15]). For $m \in \mathbb N$ and a measurable function $f\,:\, \textbf N \times \mathbb R^{dm} \to \mathbb R_{\geq 0}$ , the Mecke equation states that
(2.1) \begin{equation} \mathbb E_\lambda\Bigg[ \sum_{\vec x_{[m]} \in \eta^{(m)}} f\big(\xi, \vec x_{[m]}\big) \Bigg] = \lambda^m \int_{(\mathbb R^d)^m} \mathbb E_\lambda\big[ f\big(\xi^{x_1, \ldots, x_m}, \vec x_{[m]}\big)\big] \, \textrm{d} \vec x_{[m]}, \end{equation}
where $\eta^{(m)}=\big\{\vec x_{[m]} \in \eta^m\,:\, x_i \neq x_j \text{ for } i \neq j\big\}$ are the pairwise distinct tuples.
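As a concrete instance (the one used in (1.5)), take $m=1$ and $f(\xi, x) = \unicode{x1D7D9}\{\textbf{0} \Longleftrightarrow x\textrm { in } \xi\}$ , with the deterministic point $\textbf{0}$ simply carried along (adding a deterministic point does not affect the Poisson process $\eta$ ); we spell this out only for orientation:
\begin{equation*} \mathbb E_\lambda\big[\big|\big\{x \in \eta\,:\, \textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0}}\big\}\big|\big] = \mathbb E_\lambda\Bigg[\sum_{x \in \eta} \unicode{x1D7D9}\big\{\textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0}}\big\}\Bigg] = \lambda \int \mathbb P_\lambda\big(\textbf{0} \Longleftrightarrow x\textrm { in } \xi^{\textbf{0},x}\big) \, \textrm{d} x = \lambda \int \sigma_\lambda(x) \, \textrm{d} x. \end{equation*}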
Rescaling. It is a standard trick in continuum percolation to rescale space in order to normalize a quantity of interest, which is $\int \varphi(x) \, \textrm{d} x$ in our case. We refer to [Reference Meester and Roy18, Section 2.2]. As a consequence, we may without loss of generality assume that $\int \varphi(x) \, \textrm{d} x=1$ .
The BK inequality. We say that $A\in\mathcal{N}$ lives on $\Lambda$ if $\unicode{x1D7D9}_{A}(\mu) = \unicode{x1D7D9}_{A}(\mu_\Lambda)$ for every $\mu \in \textbf N$ . We call an event $A \in \mathcal N$ increasing if $\mu \in A$ implies $\nu\in A$ for each $\nu\in\textbf{N}$ with $\mu\subseteq\nu$ . Let $\mathcal{R}$ denote the ring of all finite unions of half-open rectangles with rational coordinates. For two increasing events $A,B\in\mathcal{N}$ we define
Informally, this is the event that A and B take place in spatially disjoint regions. It is proved in [Reference Heydenreich, van der Hofstad, Last and Matzke10, Theorem 2.1] that for two increasing events A and B living on $\Lambda$ , we have $\mathbb P_\lambda(A \circ B) \leq \mathbb P_\lambda(A)\, \mathbb P_\lambda(B)$ .
The RCM on a fixed vertex set. Given some (finite) set $X \subset \mathbb R^d$ and a function $\varphi\,:\,\mathbb R^d\to [0,1]$ , we will often have to deal with the following random graph: its vertex set is X, and two vertices $x,y\in X$ are adjacent with probability $\varphi(x-y)$ , independently of other pairs of vertices. This is simply the RCM conditioned to have the vertex set X. To highlight the difference from $\xi$ , which depends on the PPP $\eta$ , we denote this random graph by $\Gamma_\varphi(X)$ . If $Y \subset X$ , then we write $\Gamma_\varphi(Y)$ for $\Gamma_\varphi(X)[Y]$ . Since there is no dependence on $\lambda$ , we write $\mathbb P$ for the probability measure of the RCM with fixed vertex set.
3. Fixing ideas: the two-point function in finite volume
We use this section to put the definitions of Section 2 into action and to derive a power series expansion for $\tau_\lambda$ in finite volume. We start by motivating the introduction of ( $\pm$ )-graphs by linking them to the RCM $\Gamma_\varphi$ .
Observation 3.1. (Connection between $({{\pm}})$ -graphs and probabilities.) Let $X \subset \mathbb R^d$ be finite. Let $\mathfrak P \subseteq \mathcal G(X)$ be a graph property. Then
Proof. Note that
Expanding the factor $\prod_{e \in \binom{X}{2} \setminus E(G)} (1-\varphi(e))$ into a sum proves the claim.
Note that the weight of a ( $\pm$ )-graph may also be calculated by taking the product over all its edges, with factors $\varphi({\cdot})$ and $-\varphi({\cdot})$ for edges in $E^+$ and $E^-$ , respectively. Observation 3.1 motivates that the edges in $E^+$ correspond to the edges in the random graph $\Gamma_\varphi$ .
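As a sanity check of this correspondence, the short brute-force script below verifies, for three points and the property ‘connected’, the identity established in the proof: the probability that $\Gamma_\varphi(X)$ has a property $\mathfrak P$ equals the sum of the weights over all ( $\pm$ )-graphs on X for which the simple graph $(X, E^+)$ lies in $\mathfrak P$ . The weight is taken to be the product of $\varphi(e)$ over $({+})$ -edges and $-\varphi(e)$ over $({-})$ -edges, as just described; the concrete points and the choice of $\varphi$ are ours.

```python
import itertools
import networkx as nx

X = [(0.0, 0.0), (0.6, 0.1), (0.2, 0.9)]                   # three fixed vertices
phi = lambda u, v: 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(u, v)))
pairs = list(itertools.combinations(range(len(X)), 2))

def in_property(edges):
    """The graph property P: the simple graph (X, edges) is connected."""
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    G.add_edges_from(edges)
    return nx.is_connected(G)

# P(Gamma_phi(X) in P): sum over all simple graphs on X.
lhs = 0.0
for r in range(len(pairs) + 1):
    for E in itertools.combinations(pairs, r):
        weight = 1.0
        for i, j in pairs:
            q = phi(X[i], X[j])
            weight *= q if (i, j) in E else 1.0 - q
        if in_property(E):
            lhs += weight

# Sum of the (+/-)-weights over all (+/-)-graphs for which (X, E^+) satisfies P.
rhs = 0.0
for labels in itertools.product((None, '+', '-'), repeat=len(pairs)):
    plus_edges = [e for e, lab in zip(pairs, labels) if lab == '+']
    if not in_property(plus_edges):
        continue
    weight = 1.0
    for (i, j), lab in zip(pairs, labels):
        q = phi(X[i], X[j])
        weight *= q if lab == '+' else (-q if lab == '-' else 1.0)
    rhs += weight

print(abs(lhs - rhs) < 1e-12)   # True
```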
Next we prove a power series expansion for $\tau_\lambda$ in terms of the intensity $\lambda$ . The expansion (3.1) has already been given by Coniglio, De Angelis and Forlani [Reference Coniglio, DeAngelis and Forlani6, Equation (12)], who work in the more general context of Gibbs point processes but do not prove convergence. The proposition enters the proof of Proposition 5.1.
Notice that the coefficients of power series expansions like (3.1) are given by integrals with respect to the Lebesgue measure, and it is sufficient that the integrands be defined up to Lebesgue null sets for those integrals to be well-defined. Since vectors $\vec x_{[3, n+2]}\in\mathbb R^{dn}$ with fewer than n distinct entries constitute a Lebesgue null set, we can assume that for $x_1\neq x_2$ only graphs with vertex sets of cardinality $n+2$ contribute to the nth coefficient in (3.1). The same considerations apply to all graphical expansions appearing from here on, including our main definition (4.6).
Proposition 3.1. (Graphical expansion for the two-point function.) Consider the RCM restricted to a bounded measurable set $\Lambda \subset \mathbb R^d$ , and let $x_1,x_2 \in \Lambda$ . Then
with
Note that Proposition 3.1 is valid for all intensities $\lambda\geq 0$ . This situation is completely different from familiar cluster expansions [Reference Brydges2], where the radius of convergence of relevant expansions is finite in finite volume as well.
The expansion (3.1) amounts to the physicists’ expansion in powers of the activity. The expansion in powers of the density instead involves sums over a smaller class of graphs. For PPPs, activity and density are the same and the two expansions must coincide. In our context, we point out that the sum over graphs in (3.1) can be reduced to the sum over the subset of graphs in $\mathcal C^\pm$ that contain a $({+})$ -path from $x_1$ to $x_2$ and that have no articulation points (with respect to $x_1,x_2$ ). To define articulation points, recall that a cut vertex leaves a connected graph disconnected upon its deletion. Now, an articulation point is a cut vertex that is not pivotal for the $x_1$ – $x_2$ connection. It is not difficult to see that for fixed points $x_{[n+2]}$ , the graphs with articulation points in the sum over graphs G in (3.1) exactly cancel out. This cancellation happens at fixed n and does not require any re-summations between graphs with different numbers of vertices.
The proof of Proposition 3.1 builds on yet another equivalent representation: in Equation (3.1) we can discard those graphs G for which $G^+$ is not connected and those for which not every $({-})$ -edge has at least one endpoint in $V\big(G^+\big)$ ; see Equation (3.6) below for a precise statement. To the best of our knowledge, Equation (3.6) is new.
Proof of Proposition 3.1. We write $\tau_\lambda = \tau_\lambda^\Lambda$ and $\eta=\eta_\Lambda$ . Given $x_1,x_2 \in\Lambda$ , we can partition
The second identity can be found, for example, in [Reference Last and Ziesche15, Proposition 3.1]. Set
Expanding the exponential in (3.2), we find
with
In the third line, we have used the inequality
which can be shown as follows. Let $n\in \mathbb N$ and let $0\leq a_1,\ldots, a_n\leq 1$ . Notice that the identity $ 1-\prod_{i=1}^{n} (1-a_i)= (1-a_n)\big(1-\prod_{i=1}^{n-1} (1-a_i)\big)+a_n$ and the estimate $(1-a_n)\leq 1$ hold for all $n\in\mathbb{N}$ . The inequality between the integrands in $(3.5)$ now follows by induction with the choice $a_i=\varphi(x_i-y)$ . The rescaling introduced in Section 2.3 ensures that $\int_\Lambda\varphi(x_i-y)\, \textrm{d} y\leq 1$ , $i\in[n+2]$ , yielding the second inequality.
Next we turn to a combinatorial representation of f as a sum over $({{\pm}})$ -graphs. Recall that $\mathcal C^+$ denotes sets of ( $\pm$ )-graphs that are ( $+$ )-connected. The definition of f and Observation 3.1 yield
where the last sum is over all ( $\pm$ )-graphs $G^{\prime} = G\oplus H$ in $\mathcal C^\pm\big(\vec x_{[n+2]} \cup \vec y_{[m]}\big)$ such that, first, there are no edges between points of $\vec y$ ; second, $(G \oplus H)^+$ is connected; and third, the vertices of $(G \oplus H)^+$ are precisely $\vec x_{[n+2]}$ .
We rearrange the double sum (3.3) over m, n into one sum, indexed by the value of $m+n$ , and obtain
In the second identity, we have added some graphs to the sum, namely those in which $G^+$ is not connected or where there exist edges between vertices of $V \setminus V^+$ .
We claim that the weights of these added graphs sum up to zero. To see this, first identify $[n+2]$ with the vertices $\vec x_{[n+2]}$ and fix a graph $G \in \mathcal C([n+2])$ . Now, let $C \subseteq [n+2]$ with $\{1,2\} \subseteq C$ and consider the set $\mathcal G_G(C)$ of all $({{\pm}})$ -connected graphs $G^\pm$ on $[n+2]$ such that $G^{|\pm|} = G$ and C is the vertex set of the $({+})$ -component of 1 in $G^\pm$ . If there is at least one edge e in G that has both endpoints outside of C, we partition $\mathcal G_G(C)$ into those graphs where e is in $E^+$ and those where e is in $E^-$ . This induces a pairing between the graphs of $\mathcal G_G(C)$ , and they cancel out. What remain are precisely the graphs in (3.6).
4. The direct-connectedness function
4.1. Motivation and rough outline
The expansion of the direct-connectedness function in powers of the activity given by [Reference Coniglio, DeAngelis and Forlani6], without proofs and convergence bounds, is
It is obtained from the expansion of the pair-connectedness function in Proposition 3.1 by discarding graphs that have pivotal points (i.e., graphs G where $\textsf{Piv}^\pm(G)$ is nonempty). Before we pass to the thermodynamic limit, we perform a re-summation and find another representation of $g_\lambda^\Lambda$ which has the conjectured advantage of increasing the domain of convergence.
Let $G=(V, E^+, E^-) \in \mathcal C^\pm\big(\vec x_{[n+2]}\big)$ be a $({{\pm}})$ -graph appearing in the expansion (3.6). Thus $V= \{ x_i\,:\, 1\leq i \leq n+2\}$ , the graph $G^+$ is connected, $x_1$ and $x_2$ belong to $V^+=V\big(G^+\big)$ , every vertex $y\in V(G) \setminus V\big(G^+\big)$ is linked by at least one $({-})$ -edge to $V^+$ , and there are no edges between two vertices in $ V \setminus V^+$ . We impose the additional constraint that $G^{|\pm|} = \big(\vec x_{[n+2]}, E^+\cup E^-\big)$ has no pivotal points for paths from $x_1$ to $x_2$ .
Since $x_1$ and $x_2$ are connected by a path of $({+})$ -edges, G admits a $({+})$ -pivot decomposition $\vec W= (u_0, V_0, \ldots, u_k, V_k, u_{k+1})$ (with $u_0=x_1$ and $u_{k+1}=x_2$ ), where $k \in \mathbb N_0$ is the number of pivotal points in $\textsf{Piv}^+(x_1, x_2;\,G)$ . Then, G decomposes into a core graph $G_{\text{core}} = \big(V\big(G^+\big), E^+, E_{\text{core}}^-\big)$ , with $E_{\text{core}}^-$ the set of $({-})$ -edges of G with both endpoints in $V_i \cup \{u_i, u_{i+1}\}$ for some $i \in [k]_0$ , and a shell graph $H = \big(V, \varnothing, E^-\setminus E^-_{\text{core}}\big)$ . By our choice of $E^-_{\text{core}}$ , we have $\textsf{PD}^\pm\big(G_{\text{core}}\big) = \textsf{PD}^+\big(G_{\text{core}}\big)= \vec W$ . Clearly
In the right-hand side of (4.1), we restrict to graphs that also appear in (3.6) and rewrite the resulting sum as a double sum over core graphs and shell graphs. This gives rise to the series
The outer sum is over potential pivot decompositions $\vec W$ of core vertices $\vec x_{[r+2]}$ , the second sum over $({{\pm}})$ -graphs $G_{\text{core}}= \big(\vec x_{[r+2]}, E^+,E_{\text{core}}^-\big)$ that are $({+})$ -connected and for which $\vec W$ is both the $({{\pm}})$ -pivot decomposition and the $({+})$ -pivot decomposition (in other words, the simple graph $\big(\vec x_{[r+2]}, E^+\big)$ is connected and $\textsf{PD}^\pm(x_1,x_2,G) =\textsf{PD}^+(x_1,x_2,G)=\vec W$ ). The inner sum is over $({{\pm}})$ -graphs $H = (V(H),\varnothing, E^-(H))$ with vertex set $\vec x_{[r+2]}\cup \vec y_{[m]}$ and $({-})$ -edges $\{y_i,x_j\}$ such that every vertex $y_i$ is linked to at least one vertex $x_j$ , under the additional constraint that $(\vec x_{[r+2]} \cup \vec y_{[m]}, E^+, E_{\text{core}}^-\cup E^-(H))$ has no $({{\pm}})$ -pivotal points for paths from $x_1$ to $x_2$ . Let us denote the series associated to such graphs H by $h_\lambda^\Lambda\big(G_{\text{core}}\big)$ :
The right-hand side of (4.2) depends on $G_{\text{core}}$ only through the pivot decomposition $\vec W$ . We obtain the representation
This expression, written in a slightly different form (see Definition 4.2), forms the starting point of this section. The main results of this section are the following:
-
1. Let $G_{\text{core}}$ be a $({{\pm}})$ -graph as above. Then the corresponding power series $h_\lambda^\Lambda\big(G_{\text{core}}\big)$ is absolutely convergent for all intensities $\lambda \geq 0$ (Proposition 4.1). In addition, $h_\lambda^\Lambda\big(G_{\text{core}}\big)$ can be expressed in terms of probabilities involving the random connection model on the fixed vertex set $V\big(G_{\text{core}}\big)$ and of Poisson processes in $\Lambda$ . This alternative expression is used to show that the (pointwise) limit
\begin{equation*} h_\lambda\big(G_{\text{core}}\big) = \lim_{\Lambda\nearrow \mathbb R^d} h_\lambda^\Lambda\big(G_{\text{core}}\big) \end{equation*}exists for all $\lambda>0$ (Lemma 4.5).
-
2. Then we show in Theorem 4.1 that
\begin{equation*} \sum_{r=0}^\infty \frac{\lambda^r}{r!}\int_{(\mathbb R^d)^r} \sum_{\vec W} \sum_{G_{\text{core}}} \textbf{w}^\pm \big(G_{\text{core}}\big) \Bigl|h_\lambda\big(G_{\text{core}}\big)\Bigr| \, \textrm{d} \vec x_{[3,r+3]}< \infty. \end{equation*}This allows us to define\begin{equation*} g_\lambda(x_1,x_2)\,:\!=\, \sum_{r=0}^\infty \frac{\lambda^r}{r!}\int_{(\mathbb R^d)^r} \sum_{G_{\text{core}}} \textbf{w}^\pm \big(G_{\text{core}}\big) h_\lambda\big(G_{\text{core}}\big) \, \textrm{d} \vec x_{[3,r+3]} \end{equation*}and to pass to the limit in (4.3), showing that\begin{equation*} \lim_{\Lambda\nearrow \mathbb R^d} g_\lambda^\Lambda(x_1,x_2) = g_\lambda(x_1,x_2)\end{equation*}as part of Theorem 4.1.
4.2. Definition
Here we introduce the precise definitions of core graphs and shell graphs as well as of the functions $h_\lambda^\Lambda$ and $g_\lambda^\Lambda$ . We follow the ideas outlined in the previous section but make two small changes. First, shell graphs H are defined not as $({{\pm}})$ -graphs with minus edges only but right away as standard graphs. Second, a close look reveals that the shell function $h_\lambda^\Lambda\big(G_{\text{core}}\big)$ defined in (4.2) depends on the core graph only via $\vec W$ ; accordingly we view $h_\lambda^\Lambda$ as a function of a sequence of sets. In addition we drop the index from the core graph; thus the graph G in Definition 4.1 below corresponds to $G_{\text{core}}$ in the previous section (see Figure 2).
Definition 4.1. (Core graphs and shell graphs.)
-
1. Let $x_1,x_2\in\mathbb R^d$ and let $\{x_1,x_2\}\subset W\subset\mathbb R^d$ be a finite set of vertices. We call a graph $G\in\mathcal C^+(W)$ with $\textsf{PD}^\pm(x_1,x_2,G) =\textsf{PD}^+(x_1,x_2,G) = \vec W$ a core graph with pivot decomposition $\vec W$ and denote the set of such graphs by $\mathcal G^{\vec W}_{\text{core}}$ .
-
2. Let $G\in\mathcal C^+(W)$ be a core graph with pivot decomposition $\vec W=(u_0, V_0, \ldots, V_k, u_{k+1})$ , $k\in\mathbb{N}_0$ , where we set $u_0\,:\!=\,x_1$ and $u_{k+1}\,:\!=\,x_2$ . Moreover, let $\overline V_i \,:\!=\, V_i \cup \{u_i, u_{i+1}\}$ and let Y be a finite subset of $\mathbb{R}^d$ . A shell graph on $W\cup Y$ associated to $\vec W$ is a $(k+2)$ -partite graph $H\in \mathcal G\big(\overline V_0, \overline V_1, \ldots, \overline V_k, Y\big)$ such that $G \oplus H \in \mathcal D_{x_1,x_2}^\pm(W \cup Y)$ . We call the vertices $Y\subset V(H)$ satellite vertices and write $S(H)=Y$ . Notice that the set of all shell graphs on $W\cup Y$ associated to $\vec W$ does not depend on the choice of the core graph G. We denote it by $\mathcal G^{Y, \vec W}_{\text{shell}}$ .
We define $h_\lambda^\Lambda$ and $g_\lambda^\Lambda$ by expansions similar to (4.2) and (4.3) and postpone the proof of convergence to Proposition 4.3 and Theorem 4.4. By some abuse of language, we refer to the series (4.6) as the direct-connectedness function, and we use the same letter $g_\lambda$ as in (1.2). This is justified a posteriori by the proof of Theorem 1.1, where we show that the series is indeed the expansion for the direct-connectedness function $g_\lambda$ defined as the unique solution of the OZE (1.2).
Definition 4.2. (Shell functions and direct-connectedness function.)
-
1. Let $W\subset \mathbb R^d$ be finite and let $\vec W$ be given as in Definition 4.1. For $m\in\mathbb N_0$ , define the m-shell function $h^{(m)}$ by
(4.4) \begin{equation} h^{(m)}(\vec W, Y) \,:\!=\, \sum_{\substack{H \in \mathcal G^{Y, \vec W}_{\text{shell}}}} {\textbf{w}}(H), \qquad Y=\{y_1,\ldots,y_m\}\subset \mathbb R^d, \end{equation}and the shell function $h^\Lambda_\lambda$ in finite volume $\Lambda\subset \mathbb R^d$ by (4.5) \begin{equation}h^\Lambda_\lambda\big(\vec W\big) \,:\!=\, \sum_{m \geq 0} \frac{\lambda^m}{m!} \int_{\Lambda^m} h^{(m)}\big(\vec W, \vec y_{[m]}\big) \, \textrm{d} \vec y_{[m]}.\end{equation}
-
2. Let $\lambda < \lambda_\ast$ . We define the direct-connectedness function as $g_\lambda\,:\, \mathbb R^d\times\mathbb R^d \to \mathbb R$ ,
(4.6) \begin{equation} g_\lambda^{\Lambda}(x_1,x_2) \,:\!=\, \sum_{r \geq 0} \frac{\lambda^r}{r!} \int_{\Lambda^{r}} \sum_{\vec W} \Bigg(\sum_{G \in \mathcal G^{\vec W}_{\text{core}}} \textbf{w}^\pm(G) \Bigg) h_\lambda^\Lambda\big(\vec W\big) \, \textrm{d} \vec x_{[3,r+2]}, \end{equation}where $W\,:\!=\,\{x_1,\ldots,x_{r+2}\}$ and we sum over decompositions $\vec W$ of W given as in Definition 4.1. In the pathological case $x_1=x_2$ , $(4.6)$ is to be read as $g_\lambda^{\Lambda}(x_1,x_2) \,:\!=\, 1$ . Let $g_\lambda^{\Lambda}\,:\,\mathbb R^d \to \mathbb R$ be defined by $g_\lambda^{\Lambda}(x) = g_\lambda^{\Lambda}(\textbf{0}, x)$ .
The 0-shell function $h^{(0)}$ is understood to be given in terms of shell graphs without satellite vertices, i.e.,
Note that because of translation-invariance, $g_\lambda^\Lambda(x_1,x_2) = g_\lambda^\Lambda(\textbf{0}, x_2-x_1) = g_\lambda^{\Lambda}(x_2-x_1) $ .
4.3. Analysis of the shell functions: laces
If we take a look at the graphs that are summed over in the shell function, we note that the associated minimal structures have a form which is very reminiscent of graphs that are known as laces and famously appear in the analysis of, for example, self-avoiding walks [Reference Brydges and Spencer3, Reference Slade22]. They are also what gives the lace-expansion technique its name.
Proposition 4.1 is the central result of this section. It allows us to bound the shell function by the probability that the points in a PPP $\eta$ are not connected to the core vertices W. Moreover, we introduce laces and partition the shell graphs with respect to them. For every lace, we obtain a precise expression for its contribution to the shell function.
To prove Proposition 4.1, we will need quite a few definitions (see Definitions 4.3, 4.4, and 4.5) and some intermediate results thereon.
Proposition 4.1. (Bounds on the shell functions) Let $\lambda\geq 0$ and let $\Lambda\subset \mathbb R^d$ be bounded. Let $u_0, \ldots, u_{k+1} \in \Lambda$ for $k\in\mathbb N_0$ , let $V_0, \ldots, V_k \subset \Lambda$ be finite sets, and set $\vec W=(u_0, V_0, \ldots, V_k, u_{k+1})$ . Then
Moreover,
Proposition 4.1 consists of two parts, and it is (4.8) that guarantees the well-definedness of the shell function $h_\lambda^\Lambda$ of Definition 4.2.
Proposition 4.1 is easy to prove for $k=0$ , and we mostly focus on $k\geq 1$ . Throughout the remainder of this section, we fix a pivot decomposition $\vec W = (u_0, V_0, \ldots, V_k, u_{k+1})$ and recall that $\overline{V}_i = V_i \cup \{u_i,u_{i+1}\}$ .
We now work towards a deeper understanding of the shell graphs H summed over in (4.4).
Definition 4.3. (Skeletons) Let $W\subset \mathbb R^d$ and let $\vec W=(u_0,V_0,\ldots, u_{k+1})$ be a pivot decomposition of some core graph on W. Furthermore, let $Y\subset \mathbb R^d$ be finite and let H be a shell graph associated to $\vec W$ with satellite vertices $S(H)=Y$ . Then we define the skeleton $\hat H$ of H as the following graph: its vertex set is $V(\hat H) = \{0,\ldots, k+1\}$ . A bond $\alpha\beta$ is in $E(\hat H)$ if and only if $\vert\alpha-\beta\vert\geq 2$ and there exist $s\in \{u_\alpha\}\cup V_\alpha$ , $t\in V_{\beta-1}\cup\{u_{\beta}\}$ such that
-
$st\in E(H)$ , or
-
sy, $yt\in E(H)$ for some $y\in S(H)$ .
In the first case we call $\{s,t\}$ a direct stitch, and in the second case we call it an indirect stitch. We call an edge $\alpha\beta$ in $E(\hat H)$ a bond to distinguish it from the edge of the underlying graph H.
Thus, the graph $\hat H$ has no nearest-neighbor bonds, and $\alpha\beta$ with $\vert\alpha-\beta\vert\geq 2$ is a bond in $E(\hat H)$ if and only if $\{u_\alpha\}\cup V_\alpha$ and $V_{\beta-1}\cup\{u_{\beta}\}$ are connected by a direct or indirect stitch. See Figure 3 for an illustration. We may now apply the standard vocabulary of lace expansion (for self-avoiding walks) to the graph $\hat H$ [Reference Slade22, Section 3.3].
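A minimal sketch of the skeleton construction for a shell graph given by its edge list; the vertex encoding, the function names, and the convention of recording a bond $\alpha\beta$ with $\alpha<\beta$ are ours.

```python
import itertools

def alpha_labels(v):
    """Indices alpha with v in {u_alpha} union V_alpha (empty for satellites)."""
    return {v[1]} if v[0] in ('u', 'V') else set()

def beta_labels(v):
    """Indices beta with v in V_{beta-1} union {u_beta} (empty for satellites)."""
    return {v[1]} if v[0] == 'u' else ({v[1] + 1} if v[0] == 'V' else set())

def skeleton(edges):
    """Bond set E(H_hat) of the skeleton of a shell graph H.

    Vertices are encoded as ('u', i) for pivots, ('V', i, j) for the j-th vertex
    of the sack V_i, and ('y', j) for satellites; `edges` lists the edges of H.
    A bond (a, b) with b - a >= 2 is recorded whenever a direct stitch, or an
    indirect stitch through a common satellite, joins {u_a} u V_a to
    V_{b-1} u {u_b}."""
    nbrs = {}
    for s, t in edges:
        nbrs.setdefault(s, set()).add(t)
        nbrs.setdefault(t, set()).add(s)
    stitches = {frozenset(e) for e in edges if all(v[0] != 'y' for v in e)}
    for y in (v for v in nbrs if v[0] == 'y'):
        stitches |= {frozenset(p) for p in itertools.combinations(nbrs[y], 2)}
    bonds = set()
    for stitch in stitches:
        s, t = tuple(stitch)
        for a, b in set(itertools.product(alpha_labels(s), beta_labels(t))) \
                  | set(itertools.product(alpha_labels(t), beta_labels(s))):
            if b - a >= 2:
                bonds.add((a, b))
    return bonds

# Example (k = 3): a direct stitch u_0 -- u_2 and an indirect stitch V_1 -- u_4
# through the satellite y_0 give the bonds (0, 2) and (1, 4).
H_edges = [(('u', 0), ('u', 2)), (('V', 1, 0), ('y', 0)), (('y', 0), ('u', 4))]
print(sorted(skeleton(H_edges)))   # [(0, 2), (1, 4)]
```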
Definition 4.4. (Laces)
-
The graph $\hat H$ with vertex set $\{0,\ldots, k+1\}$ is irreducible if 0 and $k+1$ are endpoints of edges in $E(\hat H)$ and for every $i\in [k]$ there exists $\alpha\beta\in E(\hat H)$ with $\alpha<i< \beta$ .
-
The graph $\hat H$ is a lace if it is irreducible and, for every bond $\alpha\beta\in E(\hat H)$ , removal of the bond destroys the irreducibility.
-
We denote by $\mathcal L_k$ the set of all laces on $\{0, \ldots, k+1\}$ .
In the context of lace expansions, usually the word ‘connected’ is used instead of ‘irreducible’, but ‘connected’ is clearly misleading in our setup; Brydges and Spencer originally called those graphs ‘primitive’ [Reference Brydges and Spencer3]. We observe that the skeleton graphs $\hat H$ arising from our shell graphs H are precisely the irreducible graphs (and so $G\oplus H$ being 2-connected corresponds to the skeleton $\hat H$ being irreducible).
We map irreducible graphs to laces by following a standard procedure [Reference Slade22, Section 3.3], performed backwards. That is, we define bonds $\alpha^{\prime}_j\beta^{\prime}_j$ with $\beta^{\prime}_1>\beta^{\prime}_2>\cdots$ inductively as follows: we set
and
The procedure terminates when $\alpha^{\prime}_j = 0$ . At the end, we let $\alpha_j\beta_j$ be a relabeling of the bonds $\alpha^{\prime}_j \beta^{\prime}_j$ from left to right.
It is well known that the algorithm maps irreducible graphs to laces; moreover, the set of irreducible graphs that are mapped to a given lace L can be characterized as follows.
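Since the displayed recursion is omitted above, the following sketch records our reading of the backward procedure; the selection rule (first bond ending at $k+1$ with smallest left endpoint, then repeatedly the bond with smallest left endpoint among those whose right endpoint exceeds the current left endpoint, ties broken by the larger right endpoint) is an assumption on our part, chosen to be consistent with the characterization of candidates used in the proof of Lemma 4.1 below.

```python
def lace_of(bonds, k):
    """Map an irreducible bond set on {0, ..., k+1} to a lace.

    Sketch of the backward procedure: start with the bond ending at k+1 that
    has the smallest left endpoint; given the left endpoint a of the bond
    chosen last, pick the next bond among all bonds whose right endpoint
    exceeds a, taking the smallest left endpoint and, among ties, the largest
    right endpoint.  Stop once the left endpoint 0 is reached."""
    bonds = set(bonds)
    al, neg_be = min((a, -b) for a, b in bonds if b == k + 1)
    lace = [(al, -neg_be)]
    while lace[-1][0] != 0:
        cur = lace[-1][0]
        al, neg_be = min((a, -b) for a, b in bonds if b > cur)
        lace.append((al, -neg_be))
    return sorted(lace)   # relabel the bonds from left to right

# The irreducible graph {(0,2), (0,3), (1,3)} is mapped to the lace {(0,3)};
# the lace {(0,2), (1,3)} is a fixed point of the procedure.
print(lace_of({(0, 2), (0, 3), (1, 3)}, k=2))   # [(0, 3)]
print(lace_of({(0, 2), (1, 3)}, k=2))           # [(0, 2), (1, 3)]
```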
Definition 4.5. (Compatible bonds and the span of a lace.)
-
1. Let L be a lace with vertex set $\{0,\ldots,k+1\}$ . A bond $\alpha\beta$ is compatible with the lace L if the algorithm described above maps the graph $(V(L), E(L)\cup \{\alpha\beta\})$ to the lace L.
-
2. Let $W\subset \mathbb R^d$ and let $\vec W=(u_0,V_0,\ldots, u_{k+1})$ be a pivot decomposition of some core graph on W. Further let $Y\subset \mathbb R^d$ be finite and let H be a shell graph associated to $\vec W$ with $S(H)=Y$ . Then we say that H belongs to the span of the lace L, and write $H\in \langle\!\langle L \rangle\!\rangle$ , if $E(L) \subseteq E(\hat H)$ and every bond $\alpha\beta \in E(\hat H) \setminus E(L)$ is compatible with L.
In other words, H is in the span of L if the above algorithm maps $\hat H$ to L. See Figure 3.
Given $\vec W$ and a lace L, we define
The series $h_\lambda^\Lambda\big(\vec W;\,L\big)$ converges absolutely for every fixed $\lambda$ . This is shown as part of the proof of (4.8) in Proposition 4.1. Now,
The following characterization of compatible bonds will be useful. We recall that the bonds of a lace with m bonds can be labeled as $\alpha_j\beta_j$ with
see [Reference Slade22, Equations (3.15) and (3.16)].
Lemma 4.1. (Characterization of compatible bonds.) Let L be a lace with vertex set $V(L) = \{0,\ldots, k+1\}$ and bonds $\alpha_j\beta_j$ , $j=1,\ldots, m$ , labeled from left to right (i.e., $\alpha_j<\alpha_{j+1}$ ). Then a bond $\alpha\beta\notin E(L)$ with $\alpha <\beta-1$ is compatible with L if and only if either
-
(a) $\alpha_i \leq \alpha <\beta \leq \beta_i$ for some $i\in[m]$ or
-
(b) $\alpha_i < \alpha<\beta \leq \alpha_{i+2}$ for some $i\in[m-1]$ (where we set $\alpha_{m+1} \,:\!=\,k$ ).
Proof. Let $\alpha\beta \notin E(L)$ be compatible with L; that is, the algorithm below Definition 4.4 maps $E(L) \cup \{\alpha\beta\}$ to E(L), which in turn means that $\alpha\beta$ is not selected to be part of the output lace. We show that then either (a) or (b) is satisfied. Assume the algorithm has already constructed the partial lace up to some $j < m$ , producing the bonds $\big(\alpha^{\prime}_i, \beta^{\prime}_i\big)_{i=1}^j$ (note that they are in reverse order and make up the last j bonds of the lace). Assume moreover that $\alpha^{\prime}_j< \beta \leq \alpha^{\prime}_{j-1}$ ; that is, $\alpha\beta$ is a potential candidate to be chosen as the next bond of the lace. Since it is not chosen, there is $\alpha^{\prime}_{j+1}\beta^{\prime}_{j+1}$ with $\beta^{\prime}_{j+1} \in \big(\alpha^{\prime}_j,\alpha^{\prime}_{j-1}\big]$ such that either
-
$\alpha^{\prime}_{j+1}<\alpha$ , or
-
$\alpha^{\prime}_{j+1} = \alpha$ and $\beta^{\prime}_{j+1} > \beta$ .
Both the second case and the first case under the additional assumption $\beta^{\prime}_{j+1} \geq \beta$ imply that $\alpha\beta$ satisfies (a). Let us thus focus on the case where $\alpha^{\prime}_{j+1}<\alpha$ and $\beta^{\prime}_{j+1} < \beta$ . Remembering the stage of the algorithm, we have $\beta \leq \alpha^{\prime}_{j-1}$ , implying (b).
Now let $\alpha\beta \notin E(L)$ be a bond that satisfies (a) or (b). We claim that $\alpha\beta$ is compatible with L. Let i be the index such that (a) or (b) is satisfied with $\alpha_i \beta_i$ . Note that in the execution of the algorithm below Definition 4.4, $\alpha\beta$ does not appear as a candidate to be added to the constructed lace up until the point where $\alpha_m\beta_m, \alpha_{m-1}\beta_{m-1}, \ldots, \alpha_{i+1}\beta_{i+1}$ have already been added to the partial lace. At this stage of the algorithm, if $\alpha\beta$ satisfies (b), then it is not picked, because the left endpoint of the bond $\alpha_i\beta_i$ has a smaller value (i.e., $\alpha_i<\alpha$ ). If $\alpha\beta$ satisfies (a), however, then either also $\alpha_i<\alpha$ , or $\alpha_i=\alpha$ , but $\alpha_i\beta_i$ has its right endpoint further to the right (i.e., $\beta<\beta_i$ , since the two bonds cannot be equal), and so again, $\alpha_i\beta_i$ is picked by the algorithm.
To prove the second result of Proposition 4.1, we need the following counting lemma, which may be of independent interest.
Lemma 4.2. (On the number of laces.) Let $f_i$ be the ith Fibonacci number with $f_1 =0$ , $f_2=1$ . Then
Proof. We first choose i vertices in $\{1, \ldots, k\}$ and then count the laces that use exactly those vertices. To this end, let $A_i$ be the set of laces L with $V(L)=\{0, \ldots, i+1\}$ so that every vertex is the endpoint of at least one stitch. We claim that $|A_i|=f_i$ for $i \geq 1$ . Clearly, $|A_0|=1$ , $|A_1|=0$ , $|A_2|=1$ . See Figure 4 for an illustration.
Let $i \geq 3$ . We now establish the Fibonacci recursion. First, note that the bond incident to 0 (the ‘first’ bond) must always have 2 as the second endpoint. Now, depending on whether or not the third bond is incident to 2, the remaining lace lives on $\{1, 2, \ldots, i+1\}$ or on $\{1, 3, 4, \ldots, i+1\}$ , and so $|A_i| = |A_{i-1}| + |A_{i-2}|$ .
The asymptotic behavior follows from the fact that $f_n \sim \Phi^n /\sqrt{5}$ , where $\Phi = \tfrac 12 \big(1+\sqrt{5}\big)$ is the golden ratio.
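A brute-force check of the counting argument for small k: we enumerate all laces on $\{0,\ldots,k+1\}$ directly from Definition 4.4 and compare with the count obtained by choosing the i interior vertices used by the lace and multiplying by $|A_i|$ from the Fibonacci recursion. The script, and the formula $\sum_{i=0}^{k}\binom{k}{i}|A_i|$ it checks against, reflect our reading of the (omitted) display in Lemma 4.2.

```python
from itertools import combinations
from math import comb

def irreducible(bonds, k):
    """Definition 4.4: 0 and k+1 are endpoints and every i in [k] is covered."""
    return (any(a == 0 for a, b in bonds) and any(b == k + 1 for a, b in bonds)
            and all(any(a < i < b for a, b in bonds) for i in range(1, k + 1)))

def count_laces(k):
    """Enumerate all laces on {0, ..., k+1}: irreducible bond sets (bonds with
    gap >= 2; nearest-neighbour bonds never occur in a lace) that lose
    irreducibility when any single bond is removed."""
    all_bonds = [(a, b) for a in range(k + 2) for b in range(a + 2, k + 2)]
    total = 0
    for r in range(1, k + 2):
        for bonds in combinations(all_bonds, r):
            bset = set(bonds)
            if irreducible(bset, k) and all(
                    not irreducible(bset - {e}, k) for e in bset):
                total += 1
    return total

def a(i):
    """|A_i| from the proof: 1, 0, 1, 1, 2, 3, 5, ... (Fibonacci for i >= 1)."""
    if i == 0:
        return 1
    f = [0, 0, 1]                        # f_1 = 0, f_2 = 1
    while len(f) <= i:
        f.append(f[-1] + f[-2])
    return f[i]

for k in range(1, 7):
    print(k, count_laces(k), sum(comb(k, i) * a(i) for i in range(k + 1)))
```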
We can now work towards finding an explicit expression for $h_\lambda^\Lambda\big(\vec W;\,L\big)$ for a fixed lace. The next lemma is in the spirit of Observation 3.1 and will help us find probabilistic factors in the shell function.
Lemma 4.3. (Bipartite graphs and probabilities.) Let $Y, A, B,C \subset \mathbb R^d$ be finite, disjoint sets.
-
1. Then
\begin{equation*} \sum_{\substack{H \in \mathcal G(A\cup C,Y)\,:\, \\ \forall y \in Y\,:\, y \sim A}} {\textbf{w}} (H) = \prod_{y \in Y} \big({-}\mathbb P(A \sim y \nsim C) \big) = ({-}1)^{|Y|} \mathbb P(\forall y \in Y\,:\, A \sim y\nsim C).\end{equation*}
-
2. Moreover,
\begin{equation*} \sum_{\substack{H \in \mathcal G(A \cup B \cup C,Y)\,:\, \\ \forall y \in Y\,:\, A \sim y \sim B}} {\textbf{w}} (H) = \prod_{y\in Y}\mathbb P\big(A \sim y \sim B, y \nsim C \big). \end{equation*}
-
3. Lastly,
\begin{equation*} \sum_{\substack{H \in \mathcal G(A,Y)\,:\, \\ E(H) \neq \varnothing}} {\textbf{w}} (H) = - \mathbb P(A \sim Y). \end{equation*}
Proof. The first part of the statement is rather straightforward. If $Y=\{y\}$ , then $\mathcal G(A\cup C, \{y\})$ is the set of star graphs (with center y). Observe first that
The first sum is over all star graphs in $\mathcal G(A,\{y\})$ except the empty one, the second is over all star graphs in $\mathcal G(C,\{y\})$ , and so
It is an easy induction to prove that for general Y, the sum factors into a product over sums over star graphs. For the second statement, assume again that $Y = \{y\}$ and observe that
where the last identity is due to independence. The statement easily extends to general Y (again, the sum factors).
For the third statement, note that we sum over every graph except the empty one.
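A quick numerical check of the first identity (for $|Y|=1$ ), assuming that the weight of a graph consisting of $({-})$ -type edges only is the product of the factors $-\varphi(e)$ , as in the computation just given; the placeholder probabilities p[v] play the role of $\varphi(v-y)$ and are chosen at random.

```python
import itertools
import random

random.seed(1)
A, C = ['a1', 'a2'], ['c1', 'c2', 'c3']
p = {v: random.random() for v in A + C}       # p[v] stands in for phi(v - y)

# Left-hand side: sum of prod(-p) over all star graphs centred at y that touch A.
lhs = 0.0
for S in itertools.chain.from_iterable(
        itertools.combinations(A, r) for r in range(1, len(A) + 1)):
    for T in itertools.chain.from_iterable(
            itertools.combinations(C, r) for r in range(len(C) + 1)):
        w = 1.0
        for v in S + T:
            w *= -p[v]
        lhs += w

# Right-hand side: -P(A ~ y, y !~ C) under independent edges y-v with prob p[v].
prob_no_A = 1.0
for a in A:
    prob_no_A *= 1.0 - p[a]
prob_no_C = 1.0
for c in C:
    prob_no_C *= 1.0 - p[c]
rhs = -(1.0 - prob_no_A) * prob_no_C

print(abs(lhs - rhs) < 1e-12)   # True
```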
Since the explicit expression for $h_\lambda^\Lambda\big(\vec W;\,L\big)$ is a lengthy product of probabilities, we first introduce some notation to represent the factors of this product compactly. Let A, B be two subsets of $[k+1]_0$ . We define the set of all possible direct stitches in H leading to bonds $\alpha \beta\in E(\hat H)$ with $\alpha\in A$ , $\beta \in B$ as
and we write $\Upsilon(A) = \Upsilon(A,A)$ . We define
and, for $\alpha_1<\alpha_2<\alpha_3$ ,
Note that these products encode the sum over all $\textbf{w}$ -weighted graphs on the set of edges multiplied over.
To lighten notation, for $0 \leq\alpha\leq \beta\leq k+1$ , set
We extend this notation further: for $a,b \in \{u_0, \ldots, u_{k+1}\}$ , let $(a,b) \,:\!=\, [a,b] \setminus \{a, b\}$ , let $[a,b) \,:\!=\, [a,b] \setminus \{b\}$ , and let $(a,b]\,:\!=\, [a,b]\setminus\{a\}$ . We set $(\!(a,b)\!) \,:\!=\, [a,b] \setminus ([a]\!] \cup [\![b]) $ and define sets $(\!(a,b]$ etc. accordingly.
Moreover, define
for $\beta \geq \alpha+2$ . We extend this notation by writing
for sets of pivotal points A, B; we abbreviate $Q_{a,[b,c]} = Q_{\{a\},[b,c]}$ .
We are now ready to state Lemma 4.4, for which we recall the definition of $h_\lambda^\Lambda\big(\vec W;\,L\big)$ in (4.9).
Lemma 4.4. (The shell function of a lace.) Let $\lambda \geq 0$ and let $\Lambda \subset \mathbb R^d$ be bounded. Let $W\subset \mathbb R^d$ be a core vertex set with pivot decomposition $\vec W=(u_0, V_0, \ldots, u_{k+1})$ . Let L be a lace with vertex set $[k+1]_0$ and m bonds $\alpha_i\beta_i$ , $i \in [m]$ . Then, setting $\alpha_{m+1}=k$ , we have
Moreover,
Proof. We abbreviate $\eta=\eta_\Lambda$ , $h = h_\lambda^\Lambda$ , and prove the statement by induction on m.
Base case. Let $m=1$ , which means that $\alpha_1=0$ and $\beta_1=k+1$ . Set $A=[u_0]\!], B=[u_1, u_k]$ , and $C=[\![u_{k+1}]$ . See Figure 5 for an illustration of A, B, C.
Note first that the edges in $\Upsilon([k+1]_0) \setminus E(A,C)$ , that is, the possible direct stitches between points of W except the direct ones between A and C, do not determine membership of H in $\langle\!\langle L \rangle\!\rangle$ . Any such edge xy may or may not be present, resulting in a factor $(1-\varphi(x-y))$ that can be extracted. In total, this produces the factor $q_{0,k+1}$ , and we can restrict to considering graphs $H\in\langle\!\langle L \rangle\!\rangle$ that do not possess any such edge. The remaining graphs H only have edges that are incident to $A\cup C\cup \mathcal S(H)$ .
We split this set of remaining graphs H into those that have a direct stitch between A and C and those that do not. Among the former, the sum over graphs factors into a sum over graphs $H^{\prime} \in \mathcal G(A,C)$ (the direct stitches) and a sum over graphs $H^{\prime\prime} \in \mathcal G(W, \mathcal S(H))$ . With Lemma 4.3,
For now the power series are treated as formal power series; convergence is proven later. To treat the sum in (4.12), we define
With these definitions, we can partition $\vec y = \mathcal S(H)= \mathcal S_1 \cup \mathcal S_2 \cup \mathcal S_3$ . Moreover, we know that $\mathcal S_1 \neq \varnothing$ . Re-summing and then applying Lemma 4.3, the sum over n in (4.12) becomes
Recognizing the exponential series in the expression above, we can rewrite the probabilities with respect to $\mathbb P$ (appearing in the exponents) as probabilities with respect to $\mathbb P_\lambda$ associated to $\xi^y$ , e.g., $\mathbb P( y \sim (A \cup B))= \mathbb P_\lambda( y \sim (A \cup B) \text{ in }\xi^y)$ . Then we can apply the univariate Mecke formula (see (2.1) for $m=1$ ) to rewrite (4.13) as
Since , we can plug this back into (4.12) and obtain
on the level of formal power series. Now we prove convergence and check that the previous computational steps are justified not only on the level of formal power series. We first revisit Equation (4.13). On the left-hand side, let us put absolute values inside the integral (but outside the sum over shell graphs H). The resulting expression is bounded by the middle part of (4.13), again with absolute values inside the integral. Each integrand is bounded in absolute value by a probability; hence it is smaller than or equal to 1. The resulting series are exponential series and, in particular, absolutely convergent. As a consequence, Equation (4.13) is justified and the last sum in (4.12) is bounded as
where for the last inequality we use the fact that the expected number of direct neighbors of any fixed element of W with respect to $\eta$ is given by $\lambda\int\varphi(x)\, \textrm{d} x$ , as well as the rescaling introduced in Section 2.3 ensuring that $\int\varphi(x)\, \textrm{d} x=1$ ; compare this bound to the one used in (3.4). For the other contribution to $h\big(\vec W;\,L\big)$ , we notice that
by the same argument as in (4.14).
Combining (4.14) and (4.15) with (4.12) and $0\leq q_{0,k+1} \leq 1$ , we deduce
Inductive step. Let $m>1$. We write the lace L in terms of its vertices $(s_i,t_i)$ in W (that is, $s_i=u_{\alpha_i}$ and $t_i=u_{\beta_i}$) and let $L^{\prime}$ be the lace on $W^{\prime} \,:\!=\, W \setminus [s_1, s_2)$ obtained from L by deleting the first stitch. We note that if $H \in \langle\!\langle L \rangle\!\rangle$, then $H[[s_2,u_{k+1}]]\in \langle\!\langle L^{\prime} \rangle\!\rangle$. Observe that
Again we first prove (4.10) and carry out the computations on the level of formal power series; we prove convergence (and thus (4.11)) at the end. We can apply the induction hypothesis to $h\big(\vec {W^{\prime}};\,L^{\prime}\big)$; it remains to deal with the second factor. We partition the vertices in $[s_1, s_3]$ as $A = [s_1]\!]$, $B=(\!(s_1,s_2)$, $C=[s_2,t_1)\!)$, $D = [\![t_1]$, and $E= (t_1,s_3]$ (see Figure 5). If $m=2$, we let $E=(t_1,u_k]$.
The graphs summed over in (4.16) must satisfy the following constraints: there must be at least one direct or indirect stitch between A and D, and there cannot be any (direct or indirect) edge between A and E. The remaining direct stitches may or may not be present and can thus be extracted as the factor $q_{\alpha_1, \alpha_2, \alpha_3}$.
We partition $\mathcal S(H) = \cup_{i=1}^4 \mathcal S_i$ , where
Again, we intend to split the sum over graphs into those that have at least one direct stitch between A and D, and those that do not. We can thus rewrite the second factor in (4.16) as
where the last identity was obtained using Lemma 4.3. Note that the factor $h\big(\vec{W^{\prime}};\,L^{\prime}\big)$ contains the factor . Together with this factor, (4.17) equals
It remains to rewrite the argument of the expectation in the exponent of (4.18). Note that
This gives two exponential terms. The first is
Similarly, the second exponential term equals $Q_{(\alpha_1,\alpha_2), (\alpha_3,k+1]}$ .
Again, we prove convergence and justify the previous computational steps. Revisiting the left-hand side of (4.17), we insert absolute values inside the integral (and outside the sum over graphs H). As in the base case, this is bounded by the middle part of (4.17) with absolute values in the integrals, and each integrand is a probability. With the Mecke equation, we obtain
arguing as in (4.14) for the last inequality.
Note that by the induction hypothesis, the term $h\big(\vec{W^{\prime}};\,L^{\prime}\big)$ with absolute values in the respective integrals is bounded by $2^{m-1} \text{e}^{3 \lambda|W^{\prime}|}$. Since $A\cup B$ and $W^{\prime}$ are disjoint, this proves (4.11).
Proof of Proposition 4.1. Again, we abbreviate $\eta=\eta_\Lambda$ and $h=h_\lambda^\Lambda$ . First, consider $k=0$ , i.e., pivot decompositions with no pivotal points. Then there are no direct stitches, and we have
Moreover,
using the same bound as in (4.15). This proves the proposition for $k=0$; we now turn to $k\geq 1$ and first prove (4.7).
We rewrite $h\big(\vec W\big)$ by explicitly writing out the sum over laces L in terms of the endpoints of their stitches in W (note that any lace can have at most k stitches). We first exhibit this for $k=2$ , where $\vec W=(u_0,V_0,u_1,V_1,u_2,V_2,u_3)$ and there are two different laces. With the abbreviation $\tilde Q_{i,j} = Q_{i,j} + \mathbb P([u_i]\!] \sim [\![u_j])$ ,
Clearly, this is unnecessarily complicated for $k=2$ , as the sum over $\alpha_2$ contains only one term and $q_{0,2}=1$ . However, this turns out to be convenient for general k. We use the convention that $Q_{[a,b],\varnothing} = Q_{\varnothing, [a,b]} =1$ . Carefully rearranging the sum over all laces yields
Note that if $\beta_i=k+1$ for some i, then the double sum following the corresponding indicator vanishes. Also, only the innermost bracketed term contains two factors of the form $q_{a,b,c}$.
We now show that, starting with the innermost square brackets, the bracketed terms are bounded by 1 in absolute value.
To lighten notation, we write the innermost sum as $\sum_{\alpha=b_1}^{b_2-1} R(\alpha)$ . We split the factor
This yields two sums
where $R^{\prime}$ and $R^{\prime\prime}$ are both nonnegative. Now, with the estimate $Q_{(\alpha_{k-1},\alpha),k+1} \leq Q_{[\beta_{k-2},\alpha),k+1} = Q_{[b_1,\alpha),k+1}$, we can bound
which is readily proven to be at most 1 by induction. Moreover,
Each of the above summands can be rewritten as the probability of the event that the direct stitch $(\alpha,k+1)$ is present, while all direct stitches $(j,k+1)$ for $j\in (\alpha,k+1)$ are not. These events are disjoint for different values of $\alpha$, and so the sum is at most 1.
In total, we rewrote $\sum_{\alpha=b_1}^{b_2-1} R(\alpha)$ as the difference of two nonnegative values, both at most 1, proving our claim.
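Explicitly, since both $\sum_{\alpha} R^{\prime}(\alpha)$ and $\sum_{\alpha} R^{\prime\prime}(\alpha)$ are nonnegative and at most 1, we have $\big|\sum_{\alpha=b_1}^{b_2-1} R(\alpha)\big| = \big|\sum_{\alpha} R^{\prime}(\alpha) - \sum_{\alpha} R^{\prime\prime}(\alpha)\big| \leq \max\big\{\sum_{\alpha} R^{\prime}(\alpha), \sum_{\alpha} R^{\prime\prime}(\alpha)\big\} \leq 1$.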
To deal with the summands for $2 \leq i <k$ , we write the double sum as
and split the term $1-\tilde Q_{\alpha_i,\beta_i} = (1- Q_{\alpha_i,\beta_i}) - \mathbb P([u_{\alpha_i}]\!] \sim [\![ u_{\beta_i}])$ so that
for nonnegative summands $R^{\prime}$, $R^{\prime\prime}$. We prove a bound on the sum over $R^{\prime}(\alpha,\beta)$ by induction on $k-b_2$. If $b_2=k$, then the bound is the same as in (4.20). For $b_2<k$, we first bound $Q_{(\alpha_i,\alpha],(\beta,k+1]} \leq Q_{[b_1,\alpha],(\beta,k+1]}$ and then extract the summand for $\beta=k+1$, yielding
By the induction hypothesis, the double sum in (4.23) is at most 1. Therefore,
where the last identity follows by an easy induction.
Turning to the second summand in (4.22), by the same argument used to treat (4.21), the summands $R^{\prime\prime}(\alpha,\beta)$ are probabilities of events which are disjoint for different values of $(\alpha,\beta)$ , and so they sum to at most 1.
Observing that the bracketed term for $i=1$ can be handled analogously finishes the proof of (4.7).
We proceed to prove (4.8) for $k>1$ . By combining Lemma 4.2 with Lemma 4.4, we obtain
Using the bound $k \leq |W|$ finishes the proof.
Lemma 4.5. (Thermodynamic limit of the shell function.) For every $\lambda \geq 0$ , the pointwise limit
along $\mathbb R^d$ -exhausting sequences exists.
Proof. Let $(\Lambda_n)_{n\in\mathbb N}$ be an $\mathbb R^d$ -exhausting sequence. For fixed $\vec W=(u_0, V_0, \ldots, u_{k+1})$ , note that
For each lace L, the limit
exists and does not depend on the precise choice of $\mathbb R^d$ -exhausting sequence. This is clear from the representation for $h_\lambda^\Lambda\big(\vec W;\,L\big)$ proven in Lemma 4.4. In particular, $h_\lambda^\Lambda\big(\vec W;\,L\big)$ is given as the finite product of $\Lambda$ -independent factors and factors that describe the probability of certain point processes containing no points (namely, and the factors $Q_{i,j}$ ). As probabilities that are decreasing in the volume, the latter admit a $\Lambda \nearrow \mathbb R^d$ limit. It follows that the limit of the shell function exists as well and is given by
4.4. The direct-connectedness function in infinite volume
In this section, we consider the limit $\lim_{\Lambda \nearrow \mathbb R^d} g_\lambda^{\Lambda}$ with $g_\lambda^\Lambda$ as in (4.6) and give sufficient conditions under which it exists, thereby proving the two convergence statements from Theorem 1.1.
The candidate limit is given by the analogue of (4.6) with $\Lambda$ replaced by $\mathbb R^d$ ; the existence of $h_\lambda^{\mathbb R^d}\equiv h_\lambda$ has been checked in Lemma 4.5. Thus,
where the inner sum is over core graphs G on $\vec x_{[r+2]}$ with pivot decomposition $\vec W$ , i.e., over $({+})$ -connected graphs G on $\vec x_{[r+2]}$ with $\textsf{PD}^+(x_1,x_2,G)= \textsf{PD}^\pm(x_1,x_2,G)=\vec W$ . Remember the quantities $0<\tilde \lambda_*\leq \lambda_*$ introduced before Theorem 1.1. We will see in (4.25) that the sum over core graphs for a given pivot decomposition is a probability, hence in particular nonnegative.
Theorem 4.1. (The thermodynamic limit of $g_\lambda^\Lambda$ : pointwise convergence.) If $\lambda< \lambda_\ast$ , then
for all $x_1,x_2\in \mathbb R^d$ . Moreover, for every $\mathbb R^d$ -exhausting sequence $(\Lambda_n)_{n\in \mathbb N}$ , we have the pointwise convergence
with $g_\lambda$ given in (4.24) (equivalently, Equation (4.6) with $\Lambda$ replaced by $\mathbb R^d$ ).
Theorem 4.2. (Integrability and convergence in the $L^1$ -norm.) If $\lambda< \tilde \lambda_\ast$ , then for all $x_1\in \mathbb R^d$ ,
Proof of Theorem 4.1. We consider a summand in (4.24) for fixed $\vec W$ and set $x_1=u_0$ as well as $x_2=u_{k+1}$. Let $\vec W=(u_0,V_0, \ldots, V_k, u_{k+1})$. Recall that $\overline{V}_i = \{u_{i}\}\cup V_i\cup \{u_{i+1}\}$. A first important observation is that the weight of a core graph with pivot decomposition $\vec W$ factors into the product over the $({{\pm}})$-subgraphs induced by the vertex sets $\overline{V}_i$. The sum over core graphs thus factors as
Hence, the core can be written as a probability. Combining this with Proposition 4.1, we get
Above, we used independence as well as the fact that for $V \subseteq W$ , the two random graphs $\Gamma_\varphi(V)$ and $\xi^W[V]$ are identical in distribution. The inequality holds true for bounded $\Lambda$ as well as $\Lambda = \mathbb R^d$ .
We now go back to (4.6) and rearrange the sum by first summing over the number of pivotal points k, giving
In the second term, the sum is over pivot decompositions $\vec W=(u_0, V_0, \ldots, V_k, u_{k+1})$ where $u_0=x_1$ , $u_{k+1} =x_2$ , and $\cup_{i=0}^k V_i = \{v_1, \ldots, v_n\}$ .
When rewriting the integrand of (4.26) as a probability, the event that, for each $i\in [k]_0$, the points $u_i$ and $u_{i+1}$ are 2-connected through the disjoint vertex sets $V_i$ becomes the event that these connection events occur disjointly within W; see Section 2 and recall the definition (2.2). The inner series can thus be bounded as
where the identity is due to the Mecke equation and the fact that summing over $\vec v$ amounts to partitioning according to the joint cluster of $\vec u_{[0,k+1]}$. We can now use the BK inequality ([Reference Heydenreich, van der Hofstad, Last and Matzke10, Theorem 2.1]) to bound (4.27) by
Inserting this back into (4.26),
The last expression is finite for $\lambda<\lambda_\ast$ , by the definition of $\lambda_\ast$ . The pointwise convergence of $g_\lambda^{\Lambda_n}$ to $g_\lambda$ follows by dominated convergence.
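For the reader's convenience, we recall the form of the BK inequality used above, as we read [Reference Heydenreich, van der Hofstad, Last and Matzke10, Theorem 2.1]: for (suitable) increasing events E and F of the RCM, $\mathbb P_\lambda(E \circ F) \leq \mathbb P_\lambda(E)\, \mathbb P_\lambda(F)$, where $\circ$ denotes disjoint occurrence as in (2.2); the connection events above are increasing, so the inequality applies.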
Proof of Theorem 4.2. Integrating over $x_2$ in (4.29) yields the upper bound
which is finite for $\lambda<\tilde\lambda_\ast$ , by definition of $\tilde\lambda_\ast$ . The theorem follows by Fubini–Tonelli and the triangle inequality.
5. The Ornstein–Zernike equation
Here we complete the proof of Theorem 1.1. In view of Theorems 4.1 and 4.2, it remains to prove that the expansion (4.24) is indeed equal to the direct-connectedness function given by the OZE (1.2). This is done by first showing that $g_\lambda^\Lambda$ from Definition 4.2 satisfies the OZE in finite volume and then passing to the limit $\Lambda\nearrow \mathbb R^d$.
The idea of the finite-volume proof is well known; the same argument works for the OZE for the total correlation function.
Proposition 5.1. (The Ornstein–Zernike equation in finite volume.) Let $\Lambda \subset \mathbb R^d$ be bounded and let $x_1, x_2 \in \Lambda$ . Then
Proof. We drop the $\Lambda$ -dependence in the superscript of $\tau_\lambda^\Lambda$ and $g_\lambda^\Lambda$ . Thanks to Proposition 3.1, we can re-sum the series expansion for $\tau_\lambda$ at will. Given a pivot decomposition $\vec W = (u_0, V_0, \ldots, u_{k+1})$ of an arbitrary core graph G with the vertex set W, define
in analogy to the shell function $h_\lambda$ in (4.5) (just like the latter, $\bar{h}_\lambda$ only depends on G through its pivot decomposition $\vec W$). More precisely, the shell function $h_\lambda$ is recovered from $\bar{h}_\lambda$ by summing in (5.1) over the smaller set of graphs H for which $G\oplus H$ contains no ($\pm$)-pivotal points for the $u_0$–$u_{k+1}$ connection. Note that
and that when replacing $h_\lambda$ with $\bar h_\lambda$ in the right-hand side of (4.6), we get $\tau_\lambda$ instead of $g_\lambda$ . We can split the sum $\bar h_\lambda^{(m)}\big(\vec W, \vec y_{[m]}\big) = h_\lambda^{(m)}\big(\vec W, \vec y_{[m]}\big) + f_\lambda^{(m)} \big(\vec W, \vec y_{[m]}\big)$ , where $f_\lambda^{(m)}$ contains the sum over those graphs H such that $G \oplus H$ does have $({{\pm}})$ -pivotal points with respect to the $u_0$ – $u_{k+1}$ connection. We set
Assume now that $u_j$ for $j \in [k]$ is the first pivotal point of $G \oplus H\in \mathcal C_{x_1,x_2}^\pm\big(W \cup \vec y_{[m]}\big)$. Furthermore, let $\vec W^{\prime}_j \,:\!=\, \big(u_0, V_0, \ldots, u_j\big)$, let $\vec W^{\prime\prime}_j\,:\!=\,(u_j, V_j, \ldots, u_{k+1})$, and let $\vec y_{[s]}$ for $s \leq m$ be the points adjacent to $\vec W^{\prime}_j$ (possibly after reordering the vertices). The weight of such a graph H then factors into the product of the weights of two graphs, namely the subgraphs of H induced by $\vec W^{\prime}_j\cup\vec y_{[s]}\subset V(H)$ and by $\vec W^{\prime\prime}_j\cup \vec y_{[m]\setminus [s]}\subset V(H)$; see Figure 6. That is,
Moreover, we see that $H\big[\vec W^{\prime}_j\cup\vec y_{[s]}\big]\oplus G\big[W^{\prime}_j\big]$ does not contain $({{\pm}})$ -pivot points (for the $u_0$ – $u_j$ connection) and $H\big[\vec W^{\prime\prime}_j\cup\vec y_{[m]\setminus[s]}\big]\oplus G\big[W^{\prime\prime}_j\big]$ is in general just $({{\pm}})$ -connected.
By partitioning over j, we thus obtain the decomposition
Since both $h_\lambda$ and $\bar h_\lambda$ converge absolutely, so does $f_\lambda$ , justifying all re-summations. Letting $x_1=u_0$ and $x_2= u_{k+1}$ ,
The re-summation with respect to j and k is justified as the resulting series converges for $\lambda<\lambda_\ast$ even when we put $h_\lambda$ in absolute values.
We can now extend the result of Proposition 5.1 to $\Lambda \nearrow \mathbb R^d$ and thus prove that the expansion (4.24) is indeed equal to the direct-connectedness function for $\lambda<\lambda_\ast$ , finalizing the proof of our main result.
Proof of Theorem 1.1. We have
where the first equality holds by the continuity of probability measures along sequences of increasing events and the second one by Proposition 5.1.
Note that the integrand in (5.3) is bounded uniformly in $\Lambda$ by
where $C= \sup_{y\in\mathbb R^d} \sum_k \lambda^k \sigma_\lambda^{\ast (k+1)}(y)$ is a constant obtained in (4.29). Since $\tau_\lambda$ is integrable for all $\lambda<\lambda_c$ , the theorem follows by dominated convergence.
6. Discussion
6.1. Connections to percolation on Gibbs point processes
The Ornstein–Zernike equation gets its name from the seminal paper [Reference Ornstein and Zernike20] and has since been a well-known formalism in liquid-state statistical mechanics. It relates the total correlation function to the direct correlation function and it naturally connects to power series expansions of these correlation functions (see [Reference Coniglio, DeAngelis and Forlani6, Reference Stell23, Reference Stell24]; the terminology is not the same in all of these references).
The correlation functions admit graphical expansions whose coefficients are again sums over connected graphs. It was observed in [Reference Hill11] that a similar formalism can be formulated for the pair-connectedness function, and a key reference for this is [Reference Coniglio, DeAngelis and Forlani6]. The pair-connectedness function is regarded as one part of the pair-correlation function. The connected graphs appearing in the expansion of the latter are referred to as ‘mathematical clusters’, and they correspond to our $({{\pm}})$-connected graphs. Isolating the $({+})$-connected components within these graphs yields the ‘physical clusters’, and the graphs in which $x_1$ and $x_2$ lie in the same physical cluster make up the expansion for $\tau_\lambda(x_1,x_2)$. In the following, we elaborate on this.
The percolation models considered in the physics literature are mostly based not on a PPP (Stell calls the Poisson setup random percolation [Reference Stell24]), but on a Gibbs point process (called correlated percolation in the language of Stell). (The term ‘random percolation’ for the Poisson setup may seem misleading to probabilists, but it reflects language common in physics, with ‘random’ understood as ‘completely random’ in the sense of completely random measures [Reference Kingman13], a class that includes the PPP.)
To define the latter, consider a nonnegative pair potential $v\,:\, \mathbb R^d \to \mathbb R_{\geq 0}$ and some finite volume $\Lambda$ . Let $\textbf N(\Lambda)$ be the set of finite counting measures on $\Lambda$ and let $\mu \in\textbf N(\Lambda)$ . Then the energy of $\{x_1, \ldots, x_n\}$ under the boundary condition $\mu$ is
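In the standard pairwise form, which is what we assume here (and writing $H_\Lambda(\,\cdot \mid \mu)$ for this energy), it reads $H_\Lambda(x_1, \ldots, x_n \mid \mu) = \sum_{1\leq i<j\leq n} v(x_i-x_j) + \sum_{i=1}^n \int v(x_i-y)\, \mu(\textrm{d} y)$, i.e., the points interact pairwise among themselves and with the boundary configuration $\mu$.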
Let $f\,:\, \textbf N(\Lambda) \to \mathbb R$ be bounded. We define a probability measure as
where the partition function $\Xi(z)$ is such that $\mathbb E_z[1] = 1$ and $z \in \mathbb R_{\geq 0}$ is called the activity. If we denote by $\eta$ a random variable with law $\mathbb E_z$ , then $\eta$ is a point process. Note that we recover the homogeneous PPP with intensity $\lambda=z$ by setting $v \equiv 0$ .
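Concretely, with the energy above, the standard grand-canonical prescription (which is what we take the definition to be) is $\mathbb E_z[f] = \Xi(z)^{-1} \sum_{n\geq 0} \frac{z^n}{n!} \int_{\Lambda^n} f\big(\sum_{i=1}^n \delta_{x_i}\big)\, \text{e}^{-H_\Lambda(x_1,\ldots,x_n\mid\mu)}\, \textrm{d} x_1\cdots \textrm{d} x_n$; for $v\equiv 0$ the exponential factor equals 1, which is why the homogeneous PPP of intensity $z$ is recovered in that case.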
We can define the RCM $\xi$ on this general point process, and we denote its probability measure by $\mathbb P_{z,\varphi}$ . We furthermore define the (one-particle) density as
and we define the pair-correlation function as
Again, in the case of a homogeneous PPP with intensity $\lambda=z$ , we have $\rho=z$ and $\rho_2= z^2$ . Defining the pair-connectedness function as
we can decompose
In [Reference Coniglio, DeAngelis and Forlani6], Coniglio et al. define the pair-connectedness function as $\tilde\tau_{z,\varphi} = \big(z^2 /\rho^2\big) \tau_{z,\varphi}$ .
The function $\tilde \tau_{z,\varphi}$ has a density expansion (note that $\tau_{z,\varphi}$ is better suited for activity expansions), which can be found in [Reference Coniglio, DeAngelis and Forlani6, Equation (12)] and can be obtained from the density expansion of the pair-correlation function. The latter is obtained by expanding the Mayer f-functions $f(x,y) = \text{e}^{-v(x,y)} - 1$ in the partition function, which is the starting point of a cluster expansion. Splitting the Mayer f-function as $f=f^+ + f^\ast$ with $f^+ = \text{e}^{-v(x,y)} \varphi(x-y)$ and carrying out the same expansion for the correlation function ‘doubles’ every edge into a $({+})$-edge and a $({\ast})$-edge. Summing only over graphs in which x and y are connected by $({+})$-edges yields the pair-connectedness function.
In general, the graphs appearing in the density expansion are a subset of those in the activity expansion, namely the ones without articulation points (articulation points were defined after Proposition 3.1). In the case of a homogeneous PPP, we have $\lambda=z=\rho$, and so the activity and density expansions coincide (and the graphs with articulation points cancel out). Moreover, $f^+(x,y) = - f^\ast(x,y) = \varphi(x-y)$, and the graphs summed over in the expansion become the $({{\pm}})$-graphs, yielding the expansion (3.1) for $\tau_\lambda$.
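As a quick consistency check of the splitting just described: from $f(x,y)=\text{e}^{-v(x,y)}-1$ and $f^+(x,y)=\text{e}^{-v(x,y)}\varphi(x-y)$ we get $f^\ast(x,y) = f(x,y)-f^+(x,y) = \text{e}^{-v(x,y)}(1-\varphi(x-y)) - 1$, and for $v\equiv 0$ this indeed reduces to $f^+(x,y)=\varphi(x-y)$ and $f^\ast(x,y)=-\varphi(x-y)$, as used above.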
It is an interesting question which ideas of this paper can be generalized to RCMs based on Gibbs point processes. While some aspects generalize without much effort, the crucial difference lies in the fact that the weight of the graphs appearing in expansions for Gibbs point processes also encodes the pair interaction induced by the potential v. Recovering probabilistic interpretations for the terms after performing re-summations and bounds is therefore much more delicate.
6.2. Connections to Last and Ziesche
In [Reference Last and Ziesche15], Last and Ziesche use a Margulis–Russo-type formula to prove analyticity of $\tau_\lambda$ in presumably the whole subcritical regime. Moreover, they show the existence of some $\lambda_0>0$ (which is not quantified) such that both $\tau_\lambda$ and $g_\lambda$ have an absolutely convergent graphical expansion in $[0,\lambda_0)$ that seems closely related to the ones discussed here. We want to illustrate how to relate the respective expressions.
The two-point function. Last and Ziesche show that $\tau_\lambda(x_1,x_2)$ is equal to
We show that the above integrand is the same as the one in (3.1). We can rewrite the one in (6.1) as
Note that now, any nonvanishing J needs to contain $\{1,2\}$ . We are now going to observe some cancellations. For a fixed graph $G\in\mathcal C\big(\vec x_{[n+2]}\big)$ ,
Note that for given G and I, defining $\mathcal I(G,I) = \{ j \in [n+2] \setminus I\,:\, x_j \nsim \vec x_{I} \}$ , we can rewrite
The only case for which (6.4) does not vanish is when $\mathcal I(G,I) = \varnothing$ . We can therefore rewrite (6.3) as
and so (6.2) becomes
In the last line, summation is over the same set of graphs as in (3.6), with the additional restriction that $V\big(G^+\big) =I$ . Resolving the partition over I gives that (6.1) is equal to (3.6).
The direct-connectedness function. In [Reference Last and Ziesche15, Theorem 5.1], it is shown that there exists $\lambda_0$ such that for $\lambda \in [0, \lambda_0)$ ,
We show that the integrand in (6.5) is equal to the one in (4.1). With the calculations (6.3) and (6.4) performed for the two-point function, letting $I^c = [n+2]\setminus I$ , we have
The two indicators imply that H is connected, and so
Note that for the second identity in (6.6), we split the edges of H into those contained in $H^{\prime}$ (the subgraph induced by I) and the remaining ones, called F.
When $E(G) \cap \binom{I^c}{2} \neq \varnothing$ , the sum over F vanishes. Hence, the sum over I can be reduced to those I such that $G[I^c]$ contains no edges. For such sets I, we have
If we insert (6.7) back into (6.6), the two factors $({-}1)^{n-|I|}$ cancel out, and so
where $G \vartriangleright H$ means that $E(H) \subseteq E(G)$ , the subgraph of H induced by the vertices incident to at least one edge (call this set $V_{\geq 1}(H)$ ) is connected, and the subgraph of G induced by $[n+2]\setminus V_{\geq 1}(H)$ contains no edges.
With the identity (6.8), and letting $X=\vec x_{[n+2]}$ , the integrand of (6.5) is equal to
The argument for the first identity in (6.9) is the same as for the identity of (3.6) and (3.7).
6.3. Connections to the lace expansion
Both the graphical power series expansions and the lace expansion provide expressions for the direct-connectedness function. In this section, we show how to get from one to the other. Note that the statements to follow hold for sufficiently small intensities and cannot replace the lace expansion, which works all the way up to $\lambda_c$ . The emphasis of this section is on the qualitative nature of the results.
We first summarize some results of [Reference Heydenreich, van der Hofstad, Last and Matzke10], where the lace expansion is applied to the RCM. We keep some of the definitions brief and informal, and we refer to [Reference Heydenreich, van der Hofstad, Last and Matzke10] for the detailed definitions in these cases.
On the lace expansion. In [Reference Heydenreich, van der Hofstad, Last and Matzke10], among other things, the OZE is proved for $\tau_\lambda$ in high dimension (and for certain classes of connection functions $\varphi$ ; see [Reference Heydenreich, van der Hofstad, Last and Matzke10, Section 1.2]). In particular, it is shown that
with $\Pi_\lambda(x) = \sum_{n \geq 0} ({-}1)^n \Pi_\lambda^{(n)}(x)$ . The functions $\Pi_\lambda^{(n)}$ are called the lace-expansion coefficients; they are nonnegative and have a quite involved probabilistic interpretation. To briefly define them, let $\big\{x \xleftrightarrow{\,\,A\,\,} y\textrm { in } \xi^{x,y}\big\}$ be the event that $x \longleftrightarrow y\textrm { in } \xi^{x,y}$ , but x is no longer connected to y in an A-thinning of $\eta^y$ . Informally, every point $z\in \eta$ survives an A-thinning with probability $\prod_{y\in A} (1-\varphi(z-y))$ . See [Reference Heydenreich, van der Hofstad, Last and Matzke10, Definition 3.2] for a formal definition. Letting
we introduce a sequence $\xi_0, \ldots, \xi_n$ of independent RCMs and define
for $n\geq 1$ (with $u_{-1}=\textbf{0}$ ). The method of proof is called the lace expansion, a perturbative technique in which one first proves via induction that
for $n \in \mathbb N_0$ and some remainder term $R_{\lambda,n}$ (see [Reference Heydenreich, van der Hofstad, Last and Matzke10, Definition 3.7]), and then shows that the partial sum converges to $\Pi_\lambda=g_\lambda-\varphi$ and that $R_{\lambda,n}\to 0$ as $n \to\infty$ .
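Schematically (in our reading of this limit, suppressing the precise formulation of [Reference Heydenreich, van der Hofstad, Last and Matzke10]), letting $n\to\infty$ in (6.11) yields $\tau_\lambda(x) = \varphi(x) + \Pi_\lambda(x) + \lambda\big((\varphi+\Pi_\lambda)\ast\tau_\lambda\big)(x) = g_\lambda(x) + \lambda\,(g_\lambda\ast\tau_\lambda)(x)$, which is the OZE (1.2) with $g_\lambda = \varphi + \Pi_\lambda$.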
The lace expansion was first devised for self-avoiding walks by Brydges and Spencer [Reference Brydges and Spencer3] and takes some inspiration from cluster expansions. It was later applied to percolation (specifically, bond percolation on $\mathbb Z^d$ ) by Hara and Slade [Reference Hara and Slade8]. While the name stems from laces that appear in the pictorial representation in [Reference Brydges and Spencer3], laces are absent in the representation for percolation models.
We show that we can rewrite $\Pi_\lambda^{(n)}$ in terms of graphs that are associated to a lace of size n. More generally, rewriting $\Pi_\lambda^{(n)}$ should serve as a bridge between the graphical expansions for $g_\lambda$ that are well known in the physics literature, and the expression for $g_\lambda$ in terms of lace-expansion coefficients.
The big advantage of the lace expansion lies in the probabilistic nature of all the terms involved, which allows one to bound most of the resulting integrals by the expected cluster size, which is finite for $\lambda<\lambda_c$. The downside is the absence of a direct expression for $g_\lambda$ and thus of a direct proof of the OZE, which is only obtained after performing the $n\to\infty$ limit in (6.11).
We now show how to re-sum the graphical expansion for $\tau_\lambda$ and how to obtain the lace-expansion coefficients by appropriate grouping of terms.
Building the connection. For $x,y\in X$ , let $\tilde{\mathcal C}_{x,y}^\pm(X) \subset \mathcal C^\pm(X)$ be the set of graphs in $\mathcal C^\pm(X)$ such that $G^+$ is connected and contains $\{x,y\}$ , and $E(G[ V \setminus V^+]) = \varnothing$ . Hence, all $({-})$ -edges are incident to at least one vertex in $V\big(G^+\big)$ . This is exactly the set of graphs summed over in (3.6). Indeed,
If we define $\tilde{\mathcal D}_{x,y}^\pm(X) \,:\!=\, \mathcal D_{x,y}^\pm(X) \cap \tilde{\mathcal C}_{x,y}^\pm(X)$ , we can express $g_\lambda(x_1,x_2)$ by replacing the graphs summed over in (6.12) by $\tilde{\mathcal D}_{x,y}^\pm\big(\vec x_{[n+2]}\big)$ .
We are going to recycle some notation from Section 4. We split G into its core $G_{\text{core}}$ and its shell H, so that
for some k (where $u_0=x$ and $u_{k+1}=y$ ). We also recall that G ‘contains’ a skeleton (see Definition 4.3), a graph on $[k+1]_0$ .
Definition 6.1. (The minimal lace.) Let G be a graph with core $G_{\text{core}}$ and shell H; let $\vec W= (u_0,V_0, \ldots, u_{k+1})$ for $k \in \mathbb N$ be its $({+})$-pivot decomposition. We define the minimal lace $L_{\text{min}}(x,y;\,G)$ as the lace with the following properties:
-
L (having bonds $\alpha_i\beta_i$ with $i \in [m]$ for some $m\in\mathbb N$ ) is contained as a subgraph in the skeleton $\hat H$ ;
-
for every $i \in [m]$ , among all the bonds $\alpha\beta$ in $\hat H$ satisfying $\alpha<\beta_{i-1}$ , the bond $\alpha_i\beta_i$ maximizes the value of $\beta$ . For $i=1$ , we take $\beta_{0}=1$ .
If $\textsf{Piv}^+(x,y;\,G)= \varnothing$ , we say that G has a minimal lace of size 0.
In other words, the first stitch $0\beta_1$ maximizes the value of $\beta_1$ among all stitches starting at 0, the second stitch has a maximal value of $\beta_2$ among the stitches with $1 \leq \alpha_2 < \beta_1$ , and so on.
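As a toy illustration (the skeleton below is hypothetical and serves only to exemplify the selection rule), take $k+1=5$ and suppose $\hat H$ has the bonds $\{02, 03, 14, 25, 35\}$ on $[5]_0$. For $i=1$, the bonds starting at 0 are 02 and 03, and the maximal value of $\beta$ is 3, so $\alpha_1\beta_1=03$. For $i=2$, the bonds $\alpha\beta$ with $\alpha<\beta_1=3$ are 02, 03, 14, 25, and the maximal value of $\beta$ is 5, so $\alpha_2\beta_2=25$. Since $\beta_2=5=k+1$, the procedure terminates, and the minimal lace is $\{03,25\}$, a lace of size 2.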
As a side remark, it is worth noting that the minimal laces offer an alternative way of partitioning the set of all shell graphs, by mapping every shell graph H onto its minimal lace. This is a standard procedure in the lace expansion for self-avoiding walks; performing it ‘backwards’ yields precisely the mapping described below Definition 4.4.
With the notion of minimal laces, we partition
where
We also set $\pi_\lambda^{(m)}(x) = \pi_\lambda^{(m)}(\textbf{0},x)$ .
We strongly expect that the (pointwise) absolute convergence of the power series on the right-hand side of (6.14) holds (at least) in the domain of absolute convergence of the physicists’ expansion (4.1) and thus, as already discussed, for sufficiently small intensities $\lambda>0$ . However, a proof would go beyond the scope of the discussion here; therefore we formulate the absolute convergence of $\pi_\lambda^{(m)}$ (in the above sense) as an assumption for the following result (Lemma 6.1).
Assumption 6.1. There exists $0<\lambda_\bigstar\leq\lambda_c$ such that the right-hand side of (6.14) is (pointwise) absolutely convergent for all $m\in\mathbb N$ and $\lambda<\lambda_\bigstar$ .
Under Assumption 6.1, we show that the coefficients defined in (6.14) are basically identical to the lace-expansion coefficients introduced in (6.10).
Lemma 6.1. (Identity for the lace-expansion coefficients.) Let $m \geq 1$ and let $\lambda<\lambda_\bigstar$ . Then
As a side note, since $\Pi_\lambda^{(m)}$ is nonnegative, Lemma 6.1 shows that the sign of $\pi_\lambda^{(m)}$ alternates, which is far from obvious from the definition in (6.14).
Next, we prove an approximate version of the OZE in analogy to [Reference Heydenreich, van der Hofstad, Last and Matzke10, Proposition 3.8]. Clearly, Lemma 6.2 follows immediately from the latter via Lemma 6.1; however, we want to present a short independent proof on the level of formal power series, which we consider instructive for the understanding of the underlying combinatorics. We emphasize that the proof presented here treats the claim of Lemma 6.2 as an identity between formal power series; in particular, we do not concern ourselves with absolute convergence of the power series appearing in (6.20) and in (6.21).
Lemma 6.2. (The lace expansion in terms of $({{\pm}})$ -graph coefficients.) Let $m \in \mathbb N_0$ , let $\lambda<\lambda_\bigstar$ , and set $\pi_{\lambda,m}(x)\,:\!=\, \sum_{i=0}^{m} \pi_\lambda^{(i)}(\textbf{0},x)$ . Then
where $R_{\lambda,m}$ is defined in [Reference Heydenreich, van der Hofstad, Last and Matzke10, Definition 3.7].
Before carrying out the proof of Lemma 6.1, we define
and $\bar\varphi(a,B) = \bar\varphi(\{a\},B)$ . Now, observe that, given a set $A \subset \mathbb R^d$ and an RCM event F,
where $\xi(\eta)$ is the RCM based on the point process $\eta$ and $\eta^v_{\langle A \rangle}$ is an A-thinning of $\eta^v$ (the usual PPP of intensity $\lambda$ with the point v added). In particular, v may be thinned out as well. We remark that $\eta_{\langle A \rangle}$ has the same distribution as a PPP of intensity $\lambda\bar\varphi(A,\cdot)$.
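For concreteness, the reading of $\bar\varphi$ that we assume in this discussion is $\bar\varphi(A,B) = \prod_{a\in A}\prod_{b\in B}(1-\varphi(a-b))$; this is consistent with the thinning description above, since each point z then survives the A-thinning independently with probability $\bar\varphi(A,\{z\}) = \prod_{y\in A}(1-\varphi(z-y))$, leading to the stated intensity $\lambda\bar\varphi(A,\cdot)$ for $\eta_{\langle A \rangle}$.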
Proof of Lemma 6.1. The statement for $m=0$ is clear. For $m>0$ , we can rewrite $\pi_\lambda^{(m)}$ as
where $\mathcal B \subseteq \tilde{\mathcal D}_{u_{-1},u_m}^\pm\big( \vec u_{[-1,m]} \cup \vec x_{[k]} \cup \vec z_{[n]} \big)$ is the set of graphs such that
-
$u_0$ is the first pivotal point in $\textsf{Piv}^+(u_{-1}, u_m;\,G)$ (i.e., $\textsf{ord}(u_0)=2$ );
-
$ \vec u_{[0,m-1]} \subseteq \textsf{Piv}^+(u_{-1},u_{m};\,G)$ and $u_{i-1} \prec u_{i}$ ;
-
there are points $p_2, \ldots, p_{m}$ such that $L_{\text{min}} \mathrel{\hat =}\{(u_{-1},u_1),(p_2, u_2), \ldots, (p_{m}, u_{m})\}$ ;
-
$\vec z_{[n]}$ are those vertices $z\notin \{ u_{m-1}, u_m\}$ in G such that $\{z\} \cup N(z)$ contains at least one vertex y with $y \succ u_{m-1}$.
Given a graph $G \in \mathcal B$ , let B denote the set of points x in $V\big(G^+\big)$ with $u_{m-2} \preccurlyeq x \prec u_{m-1}$ . See Figure 7 for an illustration of such a graph G. We integrate out the points $\vec z$ first and claim that their contribution to (6.16) is
where every $H \in \mathcal B^\star$ is the subgraph of some $G \in \mathcal B$ and has vertex set $B \cup \{u_{m-1}, u_m\} \cup \vec z_{[n]}$ and precisely those edges in G that have at least one endpoint in $\{u_m\} \cup \vec z_{[n]}$ .
We let y be the last pivotal point in $V\big(G^+\big)$, that is, $\textsf{ord}(y) = \textsf{ord}(u_m)-2$. We write $Z=\vec z_{[n]}$ and split Z once more into those vertices ‘in front of’ and ‘behind’ y; that is, $Z= S \cup T$, where T is the set of points in $G^+$ of order $\textsf{ord}(u_m)-1$ together with the points in $V\setminus V\big(G^+\big)$ that are adjacent to the former, and $S = Z \setminus T$. Possibly $y=u_{m-1}$, in which case $S = \varnothing$. See Figure 7 for an illustration of this split of the vertices in Z.
Note that there are no restrictions on the $({-})$ -edges between B and $S \cup \{y\}$ , whereas there must be at least one $({-})$ -edge between B and $T \cup \{u_m\}$ . There are no restrictions on the $({-})$ -edges between $\{u_{m-1}\} \cup S \cap V\big(G^+\big)$ and $T \cup \{u_m\}$ , whereas there cannot be any $({-})$ -edges between $S\setminus V\big(G^+\big)$ and $T \cup \{u_m\}$ . By distinguishing whether or not $S = \varnothing$ , we find that the left-hand side of (6.17) is equal to
where we abbreviate $\mathscr {C}=\mathscr {C}(u_{m-1}, \xi^{u_{m-1}})$ . Note that the inner probabilities are conditional on the random variable $\mathscr {C}$ . We now resolve the integral over y by use of the Mecke equation and incorporate the first two summands as the case $y=u_{m-1}$ . With this, (6.18) becomes
where $\mathscr {C}^{\,\prime} = \mathscr {C}\big(u_{m-1}, \xi\big(\eta^{u_{m-1}} \setminus\{y\}\big)\big)$. But the two terms in (6.19) simply amount to a partition over the last pivotal point for the connection between $u_{m-1}$ and $u_m$, and so (6.19) equals
proving (6.17). Lemma 6.1 can now be proven by iteratively applying (6.17).
Proof of Lemma 6.2. For $m \in\mathbb N_0$ , we can write
where $\mathcal A$ is the set of graphs $G \in \tilde{\mathcal C}_{x_1, x_2}^\pm\big(\vec x_{[n+2]}\big)\setminus \tilde{\mathcal D}_{x_1, x_2}^\pm\big(\vec x_{[n+2]}\big)$ together with the graphs $G \in \tilde{\mathcal D}_{x_1, x_2}^\pm\big(\vec x_{[n+2]}\big) $ where $ \| L_{\text{min}}\| >m$ . Note that if $G\in \mathcal A$ , then $\textsf{Piv}^+ (x_1,x_2;\,G) \neq\varnothing$ .
For $G \in\mathcal A$ and $u\in\textsf{Piv}^+(x_1,x_2;\,G)$ , define
that is, all the core vertices of order at most that of u together with the shell vertices adjacent to at least one vertex of strictly smaller order than u. Next, let $u^{\text{cut}}=u^{\text{cut}}(x_1,x_2;\,G)$ be the vertex in $\textsf{Piv}^+(x_1,x_2;\,G)$ such that
If such a point exists, it is unique; if no such point exists, set $u^{\text{cut}} = x_2$ . We can now partition $\mathcal A$ as
where
Now, if $x_s = u^{\text{cut}}$ and $V^{\prime} \,:\!=\,V^{\preccurlyeq}(u^{\text{cut}})$ as well as $V^{\prime\prime} \,:\!=\, \{x_s\} \cup \big( \vec x_{[n+2]} \setminus V^{\prime}\big)$ , then
that is, the weight factors. Therefore, for every $i\in[m]$ ,
Setting
we can rewrite (6.20) as
One can now check, either by hand or by employing Lemma 6.1, that $\bar R_{\lambda,m} = R_{\lambda,m}$.
6.4. Other percolation models
The results of this paper should apply in a quite analogous fashion to all other percolation models that enjoy sufficient independence, in particular to (long-range) bond and site percolation on $\mathbb Z^d$. We take bond percolation on $\mathbb Z^d$ with edge parameter p as an example. We adjust our notation by using $\mathcal C\big(x,y,\mathbb Z^d\big)$ to denote the connected subgraphs of $\mathbb Z^d$ containing x and y, and we define $\mathcal D_{x,y}(\mathbb Z^d)$ and the notions for ($\pm$)-graphs analogously. Then one can show that, if we restrict to a finite box $\Lambda \subset \mathbb Z^d$, the two-point function satisfies
One can easily observe that all graphs summed over in (6.22) that contain more than one $({+})$ -cluster cancel out, which is also what happens in the RCM. The direct-connectedness function can be defined analogously to Definition 4.2, providing a suitable setup for an analysis analogous to the one in Section 4.
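To indicate where an expansion such as (6.22) comes from, here is a sketch under the natural finite-volume convention (not a verbatim restatement of (6.22)), writing $\mathbb P_p$ for the product measure on the edges of $\Lambda$: summing over the set S of open edges and expanding every closed-edge factor as $1-p = 1 + ({-}p)$ gives $\mathbb P_p\big(x \longleftrightarrow y \text{ in } \Lambda\big) = \sum_{S\subseteq E(\Lambda)} p^{|S|}(1-p)^{|E(\Lambda)\setminus S|}\, \mathbf 1\{x \longleftrightarrow y \text{ in } (\Lambda,S)\} = \sum_{S,T\subseteq E(\Lambda),\, S\cap T=\varnothing} p^{|S|}({-}p)^{|T|}\, \mathbf 1\{x \longleftrightarrow y \text{ in } (\Lambda,S)\}$, so that the pairs $(S,T)$ of disjoint edge sets play the role of the $({+})$- and $({-})$-edges of a $({{\pm}})$-graph.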
Acknowledgements
We thank David Brydges, Tyler Helmuth, and Markus Heydenreich for interesting discussions.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process for this article.