
Stable systems with power law conditions for Poisson hail

Published online by Cambridge University Press:  22 August 2023

Thomas Mountford*
Affiliation:
École Polytechnique Fédérale de Lausanne
Zhe Wang*
Affiliation:
École Polytechnique Fédérale de Lausanne
*Postal address: MA B1 517 (Bâtiment MA), Station 8, CH-1015 Lausanne, Switzerland.

Abstract

We consider Poisson hail models and characterize up to boundaries the collection of critical moments which guarantee stability. In particular, we treat the case of infinite speed of propagation.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

1.1. Model and dynamics

In this article we treat the Poisson hail model introduced in [Reference Baccelli and Foss2]. This can be viewed as a system of interacting queues. In this system $\mathbb{Z}^d$ represents a collection of servers. As with a classical system, jobs arrive at random times and require random amounts of service. The ‘interacting’ part is that in the Poisson hail system the arriving job requires simultaneous service from a random team of servers, that is, a finite subset of $\mathbb{Z}^d$ . Each individual server operates a first-come first-served (FCFS) policy, which in this context means that server $x \in \mathbb{Z}^d$ only works on a job once all previously arrived jobs that required service from x have been completed. Once a team of servers has begun a job, they work continuously on the job until completion. The FCFS condition implies, for example, that x may be unable to work on any job at a given moment, because of the prior arrival of a job requiring a large team of servers (among them x) on which no work is possible until a yet earlier job requiring the work of some server y (also in this big team) is completed. We now define our model more precisely.

Let $\Phi=(\Phi_x)_{x \in \mathbb{Z}^d}$ be a collection of independent and identically distributed (i.i.d.) marked Poisson processes on $\mathbb{R}_+$ with marks in $\mathbb{R}_+\times 2^{\mathbb{Z}^d}$ , where the second coordinate is stipulated to be a finite subset of $\mathbb{Z}^d$ . Each point in $\Phi_x$ is denoted by $(t,\tau,B)$ , and the triple corresponds to a job ‘centered’ at x. The t stands for the arrival time of the job, while the pair $(\tau,B)$ stands for the service time $\tau$ and the team of servers $x+B$ required for the job. Here ‘centered’ has no geometrical meaning beyond requiring B to contain the origin $\mathbf{0}$ . The collection of points $N_x= \{t\;:\; \exists (t,\tau,B) \in \Phi_x \}$ is a Poisson process with rate $ \lambda$ for each x, and the pairs $(\tau,B)$ are assumed to be i.i.d. over arriving jobs. Thus, in our article, the model is translation-invariant. We make the simplifying assumption that B is almost surely always a cube, as in [Reference Baccelli and Foss2], with a center at the origin and a radius R (in $l_\infty$ -norm, $\left\vert x \right\vert = \max_{i=1}^d \!\left\vert x_i \right\vert$ ). As we are to give a sufficient condition for stability (which is to be defined in Subsection 1.3 below), this is not a very restrictive assumption. We also discuss general shapes for jobs at the end of Subsection 1.3; see Remark 1.1. Thus a job arriving at server x at time t can be thought of as a pair $(\tau,R)$ , rather than a pair $(\tau,B)$ . The pair $(\tau,R)$ gives the sizes of the job, $\tau$ being the temporal size and R the spatial. As stated, these ‘marks’ are i.i.d. over arrival points and servers. When we write ${\mathbb{P}}((\tau,R) \in A)$ or ${\mathbb{P}}(R \in B)$ we are referring to the underlying probability distribution for the marks. We write $\tilde{\mathbb{P}} $ for the law of the pair $(\tau,R)$ ( $( \tau, R) \sim \tilde{\mathbb{P}}$ ), so $\tilde {\mathbb{P}} (A) = {\mathbb{P}}((\tau,R)\in A)$ . 
Throughout the paper we assume that the system is nontrivial, in the sense that there is interaction between the servers; that is, $\tilde{\mathbb{P}} (\mathbb{R}_+ \times\{0\}) < 1$ . If $(t,\tau,R) $ is in $\Phi_x$ , this signifies that at time t, a job arrives needing $\tau$ units of service from the team $x+[\!-\!R,R]^d$ (or that a job $(\tau,R)$ arrives at time t). We caution that it is not automatic that such a system defines a queueing process; in general it does not.

We now describe how a queueing system that respects the FCFS stipulation and corresponds to the given job arrivals may be constructed. We do not claim that this is the only means of constructing a queueing system that corresponds to job arrivals $\{\Phi_x\}_{x \in \mathbb{Z}^d}$ . We first need to assume that for a single site x (and therefore for all sites, by translation-invariance), the arrival rate of jobs requiring service from x is finite, i.e.,

(1.1) \begin{equation} \lambda \sum_{y \in \mathbb{Z}^d} \mathbb{P}( \mathbf{0} \in B+y) < \infty . \end{equation}

Under the assumption of nontriviality, this rate is strictly above $\lambda $ . We take a non-random sequence $\varXi_n$ , $n=1,2,\ldots$ , of finite subsets of $\mathbb{Z}^d$ that increase up to $\mathbb{Z}^d$ . By our finite rate assumption, on the time interval [0, n] there are only a finite number, $N_n$ , of jobs arriving that require service from some server $x \in \varXi_n$ . For fixed n, the arrival times are almost surely distinct and ordered

\[0 < t^n_1 < t^n_2 < \ldots < t^n_{N_n} < n.\]

The $t^n_i $ and $N_n$ correspond to $\varXi_n $ and not to a particular x in it. We construct a (stage-n) FCFS queueing system based on this finite set of jobs in a straightforward way: the job $\big(\tau^n_r, R^n_r\big)$ arriving at $x^n_r $ at time $t^n_r $ will be served on the time interval $[\sigma^n_r, \sigma^n_r +\tau^n_r)$ by the serving team $x^n_r + [\!-\!R^n_r,R^n_r]^d$ , where the $\sigma^n_r$ are recursively defined (for n fixed) as follows: $\sigma^n_1 = t^n_1$ , and for $1<r\leq N_n$ ,

\[\sigma^n_r =\max\!\Bigg\{\max _{x \in x^n_r + [\!-\!R^n_r,R^n_r]^d} \!\left( \max _{j<r: x \in x^n_j + [\!-\!R^n_j,R^n_j]^d }\sigma^n_j + \tau^n_j \right) , t^n_r \Bigg\}.\]

For a fixed job $(\tau,R)$ arriving at a (t, x), if $t \leq n $ and $x \in \varXi_n$ , then t is on the above list, i.e., $t = t^n_{m(n)} $ for some $1 \leq m(n) \leq N_n $ . Once m(n) is well-defined, both m(n) and $\sigma^n_{m(n)} $ increase with n. If for each $x \in \mathbb{Z}^d $ and each job $(t,\tau,R)$ that arrives at x there exists finite $n_0$ such that

\begin{equation*} \forall n \geq n_0, \qquad \sigma^n_{m(n)} = \sigma^{n_0}_{m(n_0)} = \sigma,\end{equation*}

then we can define the interacting queueing system where the job is served by the team $x+[\!-\!R,R]^d$ on the time interval $[\sigma, \sigma + \tau)$ . Suppose that jobs $(s_1, \tau_1, R_1)$ and $(s_2, \tau_2, R_2)$ arrive at $x_1 $ and $x_2 $ respectively, with $s_1 < s_2$ , and that $(x_1 +[\!-\!R_1,R_1]^d )\cap ( x_2 +[\!-\!R_2,R_2]^d ) \ne \emptyset $ . Let the beginning service times for the two jobs at stage n (again for all n sufficiently large) be respectively $\sigma^n_{m^1(n)} $ and $\sigma^n_{m^2(n)} $ . Then, from the FCFS policy applied to each stage-n finite queueing system, we have $\sigma^n_{m^2(n)} \geq \sigma^n_{m^1(n)}+ \tau_1 $ . Thus the final queueing system respects FCFS. The choice of $(\varXi_n)_n$ above can be arbitrary. For any two sequences $(\varXi_n)_n$ and $(\tilde{\varXi}_n)_n$ , there exist $\bar{n}$ and $\underline{n}$ such that $\varXi_{\underline{n}} \subset \tilde{\varXi}_n \subset \varXi_{\bar{n}},$ and therefore we get from monotonicity that $\sigma_{m(\underline{n})}^{\underline{n}} \leq \tilde{\sigma}_{m({n})}^{n} \leq\sigma_{m(\bar{n})}^{\bar{n}}.$
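The stage-n construction is entirely algorithmic, and it may help to see the recursion for the start times $\sigma^n_r$ written out in code. The following is our own illustrative sketch (not part of the model's formal development), in dimension $d=1$ , with each job encoded as a tuple (t, x, tau, R):

```python
# Sketch of the stage-n FCFS construction in d = 1. Each job (t, x, tau, R)
# is served by the team [x - R, x + R]. Its start time sigma_r is the maximum
# of its arrival time t_r and the completion times sigma_j + tau_j of all
# earlier jobs j whose teams intersect its team, exactly as in the recursion
# defining sigma^n_r.

def fcfs_start_times(jobs):
    jobs = sorted(jobs)          # order jobs by arrival time t
    sigma = []
    for r, (t, x, tau, R) in enumerate(jobs):
        team = set(range(x - R, x + R + 1))
        start = t
        for j in range(r):       # scan all earlier jobs
            tj, xj, tauj, Rj = jobs[j]
            if team & set(range(xj - Rj, xj + Rj + 1)):
                start = max(start, sigma[j] + tauj)   # FCFS: wait for job j
        sigma.append(start)
    return sigma
```

For example, a job (1.0, 1, 1.0, 0) arriving after the job (0.0, 0, 2.0, 1) shares server 1 with it, so it starts service only at time 2.0.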

To see when a finite $\sigma$ might exist, we fix x and t and consider a dual model $\{{\mathcal{G}} ^{x,t}_s\}_{0 \leq s \leq t} \ = \ \{{\mathcal{G}} _s\}_{0 \leq s \leq t}$ . This is defined by the following rules:

  1. (i) ${\mathcal{G}} _0 = {\mathcal{G}} ^ \prime _0 = \{x\}$ , $T_0 = 0$ (here $ ({\mathcal{G}} ^ \prime _i)_{i \geq 0}$ is the jump chain for ${\mathcal{G}} _.$ ).

  2. (ii) For $i \geq 1, \ T_i \ = \ \inf \{s> T_{i-1} $ such that a job arrives at $t-s$ requiring service from some $ y \in {\mathcal{G}} ^ \prime _{i-1} \}$ . Let $x_i + [\!-\!R_i, R_i ] ^d$ be the service team for this job. Then $ {\mathcal{G}} ^ \prime _{i} = {\mathcal{G}} ^ \prime _{i-1} \cup \!\left(x_i + [\!-\!R_i, R_i ] ^d\right)$ .

  3. (iii) For $s \in [T_i , T_{i+1} )$ , we have ${\mathcal{G}} _{s} = {\mathcal{G}} ^ \prime _{i} $ ; for $T_\infty = \lim_i T_i$ , we have ${\mathcal{G}} _s = \mathbb{Z}^d $ on $[t \wedge T_\infty ,t]$ .

If $T_\infty \leq t$ , we say the dual explodes. This dual model is a similar object to the duals of interacting particle systems. It is easily seen from (1.1) that for all x, t, if ${\mathcal{G}} ^{x,t}$ does not explode, then for every arriving job requiring service from x in the time interval [0, t], there exists $\sigma$ such that for all n large, $\sigma^n_{m(n)} = \sigma$ , and so the queueing system is defined if ${\mathcal{G}} ^{x,t}$ does not explode for every x and t. In Subsection 1.2 below, we will see the connection between the dual ${\mathcal{G}} ^{x,t}$ and admissible paths.
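For finitely many jobs, the jump chain $({\mathcal{G}}'_i)$ of the dual can be computed by a single backward scan over arrivals. A minimal sketch in $d=1$ (our own illustration; the job encoding (s, y, tau, R) is our convention):

```python
# Sketch of the dual model's jump chain in d = 1 for a finite job list.
# Scanning backwards in time from t, the dual set absorbs the team of every
# job whose team intersects it, following rules (i)-(ii) above.

def dual_set(jobs, x, t):
    G = {x}                                          # rule (i): G'_0 = {x}
    for s, y, tau, R in sorted(jobs, reverse=True):  # decreasing arrival time
        if s >= t:
            continue                                 # only arrivals before t
        team = set(range(y - R, y + R + 1))
        if team & G:                                 # rule (ii): job touches G
            G |= team
    return G
```

The returned set is ${\mathcal{G}}^{x,t}_t$ for this job list: the collection of sites whose jobs on [0, t] can influence the service of a job at x.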

A central quantity in the model is the workload at site x and time t, which we denote by W(t, x). We discuss the workload of a system heuristically and will provide a formal definition in Subsection 1.2. Intuitively speaking, W(t, x) is the additional time required beyond time t for server x to have serviced all jobs (requiring service from x) arriving during the time interval [0, t]. We can understand the dynamics of W(t, x) using the following two equations. Suppose job $(\tau,R )$ arrives at server y at time t. Then by the FCFS rule,

(1.2) \begin{equation} W(t+,x)= \left\{\begin{array}{l@{\quad}l} \sup_{ \left\vert z-y \right\vert \leq R } W(t-,z) + \tau & \text{ if } \left\vert x-y \right\vert \leq R, \\[5pt] W(t-,x) & \text{ otherwise,} \end{array} \right. \end{equation}

where $\left\vert x-y \right\vert$ stands for the $l_\infty$ -norm in $\mathbb{Z}^d$ . By convention, we always assume that $t \rightarrow W(t,x)$ is right-continuous: $ W(t,x) = W(t+,x)$ . Suppose no job requiring service from site x arrives during time [s, t]. Then W(t, x) decreases linearly in t at rate 1 until it reaches zero:

(1.3) \begin{equation} W(t,x) = \max \{ \left(W(s,x) - (t-s)\right),0\};\end{equation}

see [Reference Baccelli, Chang-Lara and Foss1, Reference Baccelli and Foss2, Reference Foss, Konstantopoulos and Mountford4]. The following initial condition for W(t, x) is in force throughout the paper: for all x in $\mathbb{Z}^d$ ,

(1.4) \begin{equation} W(0,x)=0.\end{equation}

One way to visualize the model and the workload W(t, x) is to think of jobs as hailstones falling randomly on hot ground; the model is then similar to the well-known game Tetris. A job $(\tau,R)$ received by a server x can be viewed as a hailstone with a base $x+[\!-\!R,R]^d$ and a height $\tau$ . The hailstone falls on sites $x+[\!-\!R,R]^d$ , and the FCFS rule requires the hailstone to fall on top of all previously arrived hailstones that required service from a server in $x+[\!-\!R,R]^d$ . The hailstone starts to melt at a constant rate 1 once its base touches the hot ground. In this way, W(t, x) can be interpreted as the height function at site x evolving in time t. See Figure 1 for an illustration of how workload changes when a job arrives. In this example, we fix a time t, and a job with size $(\tau,R) = (1,2)$ arrives at site $-1$ . Before its arrival, the workload is 2 at sites $-4,-3,2$ ; it is 1 at sites $-2,-1, 0, 3$ ; and it is 3 at sites 1, 4. After the arrival of the new job, the workload is 2 at sites $-4, 2$ ; it is 4 at sites $-3,-2,-1,0,1$ ; it is 1 at site 3; and it is 3 at site 4.

Figure 1. Workloads before and after a job arrival.
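The update rules (1.2)–(1.3) are simple enough to check mechanically on the Figure 1 example. The following sketch (ours, restricted to a finite window of sites in $d=1$ ) applies (1.2) at an arrival and (1.3) between arrivals:

```python
# Sketch of the workload dynamics (1.2)-(1.3) in d = 1 on a finite window.

def arrive(W, y, tau, R):
    """Rule (1.2): a job (tau, R) arrives at site y; every site of the team
    [y - R, y + R] jumps to the team's maximal workload plus tau."""
    peak = max(W[z] for z in W if abs(z - y) <= R)
    return {x: (peak + tau if abs(x - y) <= R else W[x]) for x in W}

def relax(W, dt):
    """Rule (1.3): no arrivals for dt units of time; workloads decay at
    rate 1, stopping at zero."""
    return {x: max(w - dt, 0) for x, w in W.items()}

# Figure 1 example: workloads on sites -4..4, then a job (tau, R) = (1, 2)
# arrives at site -1, lifting sites -3..1 to max(2, 1, 1, 1, 3) + 1 = 4.
W = {-4: 2, -3: 2, -2: 1, -1: 1, 0: 1, 1: 3, 2: 2, 3: 1, 4: 3}
W = arrive(W, y=-1, tau=1, R=2)
```

After the call, the workload profile on sites $-4,\ldots,4$ is $2,4,4,4,4,4,2,1,3$ , in agreement with Figure 1.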

1.2. Admissible paths, workload, and time scales

We now rigorously define workload via admissible paths. Suppose the arrival times $\left({N_x}\right)_{x\in\mathbb{Z}^d}$ and spatial sizes R of jobs are known. For any $0\leq u\leq t$ , an admissible path is a piecewise constant, right-continuous (càdlàg) function $\gamma\;:\; [u,t] \rightarrow \mathbb{Z}^d$ such that, if $\gamma(s) \neq \gamma(s-) $ , then there is a job arriving at time $s \in [u,t]$ with center x for some $x\in\mathbb{Z}^d$ (equivalently, ${N_x(s)=N_x(s-)+1}$ ) and sizes $(\tau,R)$ , and it intersects the path $\gamma$ in the following sense:

(1.5) \begin{equation} \gamma(s),\gamma(s-) \in B(x,R)\;:\!=\;\{y\;:\;\left\vert y-x \right\vert \leq R\}.\end{equation}

We also write $\gamma_{u,t}$ when we want to emphasize that the admissible path is on a fixed interval [u, t]. So, given $x \in \mathbb{Z}^d$ and $t >0$ , the dual model $\{ {\mathcal{G}} ^{x,t}_s\}_{0\leq s \leq t}$ has $y \in {\mathcal{G}} ^{x,t}_s$ if and only if there exists an admissible path $\gamma_{t-s,t}\;:\;[t-s,t] \rightarrow \mathbb{Z}^d $ with $\gamma _{t-s,t}(t-s) = y $ and $\gamma_{t-s,t} (t) = x $ . For each admissible path $\gamma_{u,t}$ , we can define its load

(1.6) \begin{equation} U(\gamma_{u,t})=\sum \tau_i,\end{equation}

by summing over all jobs intersecting the admissible path $\gamma_{u,t}$ in the sense of (1.5), and assign it a score

(1.7) \begin{equation} V(\gamma_{u,t}) = U(\gamma_{u,t})-(t-u)=\sum \tau_i - (t-u).\end{equation}

The workload at (t, x) is the maximal score over all admissible paths $\gamma_{u,t}$ starting at some time $u\in [0,t]$ and ending at $\gamma_{u,t}(t)=x$ (see [Reference Baccelli and Foss2]):

(1.8) \begin{equation} W(t,x) = \sup_{0\leq u\leq t} \!\left( \sup_{\substack{\gamma_{u,t}(t) = x,\\ \text{$\gamma_{u,t}$ admissible }}}V(\gamma_{u,t}) \right)\!.\end{equation}

If one admits $\infty $ as a possible value, then W(t, x) is well-defined given any $\{ \Phi(x)\}_{x\in\mathbb{Z}^d}$ , whether or not a queueing model can be defined. However, in the case where the dual model does not explode, the random variable W(t, x) equals the additional time that the queueing model requires until x has served all jobs that arrived before time t and required service from x; see [Reference Baccelli, Chang-Lara and Foss1, Reference Baccelli and Foss2, Reference Foss, Konstantopoulos and Mountford4]. If one takes (1.8) as a definition of W(t, x) without reference to a queueing system, then in fact (see Remark 1.1 after Theorem 1.1) there are nontrivial models that have workload $W(t,x) =\infty$ almost surely for any time $t>0$ . However, this is not the focus of our article. Instead, we study the stability of the system and derive sufficient conditions for it; these sufficient conditions imply that the workload is almost surely finite for any positive time t. From (1.8), one can verify the properties (1.2) and (1.3). See Figure 2 for an illustration of an admissible path, its score, and the workload. In this example, horizontal intervals represent job arrivals with temporal sizes $\tau_i$ . The highlighted curve represents an admissible path $\gamma_{u,t}$ which meets jobs $\tau_2,\tau_4,\tau_6, \tau_7 $ . The load of $\gamma_{u,t}$ is $U(\gamma_{u,t})= \tau_2 +\tau_4+\tau_6+ \tau_7$ , and $\gamma_{u,t}$ has a score $V(\gamma_{u,t})= \tau_2 +\tau_4+\tau_6+ \tau_7 - (t-u)$ . The workload $W(t,\mathbf{0})$ is the supremum of these scores over all such paths starting from some (u, y) and ending at $(t, \mathbf{0})$ .

Figure 2. Admissible path, score, and workload.
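The load (1.6) and score (1.7) of a concrete path can likewise be evaluated mechanically. In the sketch below (ours), a piecewise constant path is encoded as a list of (time, site) jump pairs, and the left limit $\gamma(s-)$ is approximated by evaluating just before s, which is adequate when arrival and jump times are distinct:

```python
# Sketch of the load U (1.6) and score V (1.7) of an admissible path in d = 1.

def position(path, s):
    """Right-continuous position of the path at time s; `path` is a list of
    (time, site) pairs sorted by time."""
    pos = path[0][1]
    for t, x in path:
        if t <= s:
            pos = x
        else:
            break
    return pos

def score(path, jobs, u, t):
    """Score V = U - (t - u): sum tau over jobs (s, x, tau, R) with
    u <= s <= t that intersect the path in the sense of (1.5)."""
    load = 0.0
    for s, x, tau, R in jobs:
        if u <= s <= t:
            left = position(path, s - 1e-9)    # gamma(s-), approximately
            right = position(path, s)          # gamma(s)
            if abs(left - x) <= R and abs(right - x) <= R:
                load += tau
    return load - (t - u)
```

For instance, a path sitting at the origin on [0, 2] collects a job (0.5, 1, 3.0, 1) since $B(1,1)$ contains 0, giving score $3.0 - 2 = 1.0$ .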

We fix the initial point $\gamma(0)=\mathbf{0}$ to obtain a new random variable

(1.9) \begin{equation} \tilde{W}(t,\mathbf{0}) =\sup_{ 0\leq u\leq t} \sup_{\substack{\gamma_{0,u}(0)\,=\,\mathbf{0},\\ \text{$\gamma_{0,u}$ admissible}}} V(\gamma_{0,u}),\end{equation}

which by time-reversibility and homogeneity of Poisson processes has the same distribution as W(t, x) for any (t, x). An advantage of using $\tilde{W}(t,\mathbf{0})$ is that $\tilde{W}(t,\mathbf{0})$ is increasing in t almost surely. This monotonicity is different from the monotonicity in Remark 1.1 below, and we apply it in Lemma 3.1. Since the growth estimate in Lemma 3.1 is related to $\sup U(\gamma_{0,t})$ , we also remark that $\tilde{W}(t,\mathbf{0})$ involves all admissible paths with initial point $(0,\mathbf{0})$ and some ending time $u\leq t$ , whereas $W(t, \mathbf{0}) $ involves admissible paths with the fixed final point $(t,\mathbf{0})$ . Another advantage of $\tilde{W}(t,\mathbf{0}) $ is that it enjoys superadditivity properties.

1.3. Stability and results

We now define stability. From (1.8), given a system $\{ \Phi(x)\}_{x\in\mathbb{Z}^d}$ , we see that $W(t,\mathbf{0})$ is stochastically increasing in t. Given a law $\tilde{\mathbb{P}}$ for the pair $(\tau, R)$ , we say that the family of $\{ \Phi(x)\}_{x\in\mathbb{Z}^d}$ , as $\lambda $ varies, is stable if there exists a $\lambda_1>0$ such that for rate $0 < \lambda < \lambda_1$ , the system $\{ \Phi(x)\}_{x\in\mathbb{Z}^d}$ is such that $\{W(t,\mathbf{0})\}_{t \geq 0}$ is tight. In this case, we also say $\tilde{\mathbb{P}}$ is stable. We are interested in the stability of systems. We stress that the notion of stability depends on the law $\tilde{\mathbb{P}}$ and not on the existence of a queueing model. It is not clear that a queueing system can be defined when our dual model explodes, but for such a model to exist meaningfully, it must have its workload given by (1.8). By translation-invariance, $\tilde{\mathbb{P}} $ is stable if and only if, for any $x \in \mathbb{Z}^d$ , $\{W(t,x)\}_{t \geq 0}$ is tight. The central question in this article is to understand the stability of the system. It is natural to expect that if $\lambda$ is small, then $W(t,\mathbf{0})$ does not grow quickly in time, since it decreases when there are no job arrivals; but if a job of large spatial size follows a job of large temporal size, the system has many sites with large workload. In fact, one can show that one of three scenarios holds for a system:

  1. (a) $W(t,\mathbf{0})$ is infinite for some time $t>0$ with probability one,

  2. (b) the system is unstable but with $W(t,\mathbf{0})$ finite for any time $t>0$ with probability one, or

  3. (c) the system is stable with $W(t,\mathbf{0})$ finite for any time $t>0$ with probability one.

In [Reference Foss, Konstantopoulos and Mountford4], Foss, Konstantopoulos, and Mountford showed that a finite $(d+1+\epsilon)$ th moment for the sum of R and $\tau$ is sufficient for stability, and they also showed that for every $0<\epsilon<d+1$ , there is an unstable system with a finite $(d+1-\epsilon)$ th moment for the sum of R and $\tau$ . As a consequence, $d+1$ is a critical moment for the sum of R and $\tau$ . Their method is via techniques introduced in the study of greedy lattice animals on $\mathbb{Z}^d$ ; see for instance [Reference Cox, Gandolfi, Griffin and Kesten3, Reference Martin5]. We can interpret these two results by considering critical moments for positive random variables R and $\tau$ . Let

(1.10) \begin{equation} \alpha \;:\!=\; \sup\!\Big\{a\geq -d \;:\; \mathbb{E}\Big[R^{d+a}\Big]<\infty \Big\},\quad \beta \;:\!=\; \sup\!\Big\{b\geq -1\;:\; \mathbb{E}\Big[ \tau ^{1+b}\Big]<\infty \Big\}.\end{equation}

So every law $\tilde{\mathbb{P}}$ defines a point $(\alpha, \beta )$ in $ [\!-\!d,\infty] \times [\!-\!1,\infty].$ We say that a law $\tilde{\mathbb{P}} $ is in a region A if the corresponding pair $(\alpha,\beta)$ is in A. We divide up the space for $(\alpha,\beta)$ into regions; see Figure 3.

Figure 3. Stability and critical moments.

The results of [Reference Foss, Konstantopoulos and Mountford4] say that the system is always stable in Region I $= \{\alpha>1, \beta>d\}$ , and there exists an unstable system for every point in Region II $= \{\alpha<1, \beta<d\}$ . Stability in Region III $= \{\alpha>1, \beta<d\}$ , Region IV $= \{0\leq\alpha<1, \beta>d, \alpha\beta<d \}$ , and Region V $= \{0<\alpha<1, \alpha\beta>d\}$ is not clear from [Reference Foss, Konstantopoulos and Mountford4]. Our results are as follows.

Theorem 1.1. Consider the Poisson hail problem in any dimension $d\geq 1$ . Let $d+\alpha, 1+\beta$ be the critical moments for R and $\tau$ as defined in (1.10). Then the following hold:

  1. 1. Every nontrivial system in Regions II and III ( $\{\beta<d \}$ ) is unstable.

  2. 2. Every nontrivial system in $\{\alpha<0 \text{ or } \beta \leq 0\}$ is unstable.

  3. 3. For every point $(\alpha,\beta)$ in Region IV ${= \{\alpha \beta <d, \beta >d , 0<\alpha <1\}}$ , there is an unstable system with

    \[ \mathbb{E}\!\left[R^{d+\alpha} + \tau^{1+\beta} \right]<\infty. \]

Theorem 1.2. Consider the Poisson hail problem in dimension $d\geq1$ . Any system with parameters $(\alpha,\beta)$ in Region V $= \{\alpha \beta >d, {0<\alpha <1} \}$ or on the ray $\{(\alpha,\beta)\;:\; \beta> d,\alpha =1 \}$ is stable.

These results (and [Reference Wang6]) permit a classification of the models according to the parameters $(\alpha, \beta)$ . We do not consider boundaries between regions, though for some boundaries simple monotonicity considerations permit a classification. The proofs of these various classifications are obtained as follows:

  1. I. This is shown in [Reference Foss, Konstantopoulos and Mountford4].

  2. II. This is the content of Theorem 1.1(1).

  3. III. This is the content of Theorem 1.1(1).

  4. IV. This is the content of Theorem 1.1(3) and [Reference Wang6].

  5. V. This is the content of Theorem 1.2.

Region IV is the only one for which the stability or instability of a system with $(\alpha, \beta)$ in the given region is not determined by the region. In fact [Reference Wang6] shows that for any $(\alpha, \beta) $ in Region IV, there exist both stable models and unstable models corresponding to $(\alpha, \beta)$ .
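This classification, boundaries excluded, can be summarized in a few lines. A sketch (ours; the labels follow Figure 3 and the theorems above):

```python
# Sketch of the stability classification of Figure 3, boundaries excluded.
# I: stable; II, III: unstable; V: stable; IV: both behaviors occur.

def region(alpha, beta, d=1):
    """Classify the critical moment pair (alpha, beta) in dimension d."""
    if beta < d:
        return "II" if alpha < 1 else "III"   # unstable, Theorem 1.1(1)
    if alpha < 0:
        return "unstable"                     # Theorem 1.1(2)
    if alpha > 1:
        return "I"                            # stable, [4]
    if 0 < alpha < 1 and alpha * beta > d:
        return "V"                            # stable, Theorem 1.2
    if alpha * beta < d:
        return "IV"                           # stable and unstable models exist
    return "boundary"
```

For example, with $d=1$ the pair $(\alpha,\beta)=(1/2,3)$ lies in Region V since $\alpha\beta = 3/2 > 1$ , while $(1/4, 2)$ lies in Region IV.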

We end this subsection with a remark on monotonicity properties for the workload W(t, x), which can be found in [Reference Foss, Konstantopoulos and Mountford4]. One can also verify these properties using the formula for workload W(t, x); see (1.8) in the previous subsection.

Remark 1.1. The model has the following monotonicity properties: W(t, x) increases if we (a) increase the temporal size of the stones or (b) enlarge the spatial shape. Thanks to the monotonicity of the model, we can do the following:

  1. 1. We can treat general convex shapes as hailstones in the system. One way to do this is to generalize R to represent the maximal distance between two points in the hailstone (under a certain norm). By enlarging the spatial shape of the hailstone to a cube with diameter R, we get an upper bound for W(t, x). Using Theorem 2.1, we can get a similar upper bound for workloads for general shapes. This ‘upper bound’ may not be optimal if the shape has empty interior.

  2. 2. We can construct a new system from a generic stable system using the following strategy: for fixed increasing sequences $(S_i), (T_j)$ , we enlarge the job sizes $(\tau,R)$ to $(T_{j+1},S_{i+1})$ , if $S_i\leq R<S_{i+1}$ and $T_j\leq\tau<T_{j+1}$ for some i, j. The new system has sizes R (and $\tau$ ) with distribution satisfying (1.11) (and (2.10); see below), for some parameters $(\alpha,\beta)$ and two normalizing constants ${c_1,c_2>0}$ . The parameters $(\alpha,\beta)$ of the new system are typically smaller than the parameters $(\alpha_o,\beta_o)$ of the original system, but they can be chosen to be in Region V for appropriate $(S_i), (T_j)$ . For details, see the discussion of Theorem 1.2 in Subsection 2.2.

Systems in Regions IV and V behave differently from systems in Region I because the tail behaviors of R are different. A major difference is that the spatial growth of admissible paths can be arbitrarily fast in Regions IV and V, and the spatial growth introduces different time scales. To illustrate this, we consider a system in dimension $d=1$ belonging to the region $\{0<\alpha<1\}$ , where the spatial size R has distribution of the form

(1.11) \begin{align} {\mathbb{P}}(R = S_i) & = c_1 S_i^{-(d+\alpha)}, \quad \text{ and }\quad R \in \{S_i\}_{i\geq 1}, \end{align}

for some sequence $(S_i) $ increasing at least geometrically fast, and for a normalizing constant $c_1$ . To analyze $W(t,\mathbf{0})$ , it is equivalent to analyze $\tilde{W}(t,\mathbf{0})$ , which we now do. In dimension $d=1$ , we have the advantage of being able to define the leftmost and rightmost points, $L_t$ and $R_t$ , reached by admissible paths with initial point (0, 0) by time t:

\[ L_t = \inf_{\substack{\gamma(0)=\mathbf{0},\\ \text{$\gamma$ admissible}}} \gamma(t), \quad R_t = \sup_{\substack{\gamma(0)=\mathbf{0},\\ \text{$\gamma$ admissible}}} \gamma(t). \]

It is not hard to see that $D(t) = R_t - L_t $ can be compared to two (non-independent) compound Poisson processes,

(1.12) \begin{equation} \sum_i \frac{1}{2}S_i\cdot v_i(t) \leq D(t) \leq \sum_i 2 S_i\cdot V_i(t), \end{equation}

where, for i fixed, $v_i $ and $V_i$ are not independent, but $(v_i (t),V_i(t))_{ i\geq 1}$ are independent as i varies; for each i, $v_i(t)$ and $V_i(t)$ are Poisson random variables with means $\frac{c}{2} S_i^{-\alpha }t$ and $2c S_i^{-\alpha }t$ respectively. The lower bound indicates that D(t) grows superlinearly,

\[ \liminf_{t\to \infty} \frac{D(t)}{t} \geq \lim_{t\to \infty}\frac{v_i(t)}{2t}S_i = \frac{c}{4}S_i^{1-\alpha} \quad \text{for every } i, \]

while the upper bound indicates that D(s) has increments of sizes at most $2S_i$ at time scales $S_i^{\alpha }.$ The different speeds $S_i^{1-\alpha}$ in different time scales $S_i^{\alpha}$ indicate that the ‘domain of influence’ $\mathcal{R}_t$ for $\tilde{W}(t,{\mathbf{0}})$ is also growing at different speeds on different time scales, and this leads to nontrivial growth behaviors of $\tilde{W}(t,{\mathbf{0}})$ . We use one example in the proof of Theorem 1.1, where we construct unstable models with $\limsup_{t\to\infty}\frac{W(t,{\mathbf{0}})}{t} =\infty$ almost surely. In these models, we find jobs of large temporal sizes with high probability in time–space boxes with increasing spatial and temporal sizes. In particular, the ratio between the spatial size and the temporal size of these time–space boxes diverges as the temporal size grows.
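The separation of scales can be made numerically concrete. In the sketch below (ours, with $c=1$ , $\alpha = 1/2$ , and the superexponentially increasing sizes $S_i = 2^{2^i}$ ), jobs of scale i arrive roughly once per time $S_i^{\alpha}$ , yet scale i alone contributes mean speed of order $S_i^{1-\alpha}$ to the spread D(t):

```python
# Per-scale arrival rates and speed contributions for the spread D(t),
# taking c = 1 and alpha = 1/2: rates S_i^(-alpha) decrease to zero while
# speeds S_i^(1 - alpha) increase without bound, so D(t)/t is unbounded.
alpha = 0.5
S = [2 ** (2 ** i) for i in range(1, 6)]   # 4, 16, 256, 65536, 4294967296

rates = [s ** -alpha for s in S]           # one scale-i job per ~S_i^alpha
speeds = [s ** (1 - alpha) for s in S]     # spread speed of scale i alone

assert rates == sorted(rates, reverse=True)
assert speeds == sorted(speeds)
```

No single scale dominates: each larger scale acts on a slower clock but moves the front faster, which is the source of the nontrivial growth of $\tilde{W}(t,\mathbf{0})$ .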

1.4. Overview of proofs and outline of paper

The proof of Theorem 1.1 is similar to the proof of Theorem 2 in [Reference Foss, Konstantopoulos and Mountford4]. We look at time–space boxes of increasing size and consider the probability of obtaining jobs of large temporal sizes in these boxes connected via some admissible path. We can concatenate disjoint boxes so that the numbers of jobs inside are independent, and apply the law of large numbers for independent random variables to estimate the probability of getting an admissible path with a high score. A detailed proof of Theorem 1.1 is presented in Subsection 2.1.

The proof of Theorem 1.2 relies on two reductions. In Section 2 we reduce a generic system to a new system by enlarging job sizes to some discrete values as suggested in Remark 1.1. The new system may have smaller parameters $(\alpha,\beta)$ despite still being in Region V, and the workload of the new system dominates the workload of the original system stochastically. We choose the job sizes via two sequences of specific forms; see (2.3). These values allow us to obtain some conditions (see (3.2)–(3.8) in Section 3) under which we prove Theorem 2.1. These conditions are not necessary, but they enable the proof of Theorem 2.1. In Section 3, we further reduce the proof of Theorem 1.2 to two technical propositions. The principal task essentially reduces to showing Proposition 3.2.

The program for showing Proposition 3.2 consists of two steps, corresponding to Sections 4 and 5. The objective is to show that for fixed temporal size $T_j$ , with large probability as t tends to infinity, all admissible paths of length t with a fixed endpoint intersect at most $\frac{t}{T_j^{1+ \delta'}}$ such jobs, for $\delta'>0$ not depending on j. In Section 4 we take the first step: we choose an ‘appropriate’ spatial size $S_{i_0}$ corresponding to j and show through various percolation arguments that, with high probability, all admissible paths starting at a given place which do not ‘use’ job sizes $S_i$ for $i>i_0$ have the desired bounds. In fact, we show that the bounds hold outside exponentially small probabilities. It is here that the condition $\alpha \beta >d$ is used. In Section 5 we use an induction argument on i to show that given an appropriate bound for admissible paths that do not ‘use’ job sizes $S_k$ for $k>i$ , we can find an appropriate bound for admissible paths that do not ‘use’ job sizes $S_k$ for $k>i+1$ . The price to be paid is that the rate of the exponential bound for the bad events decreases as we pass from i to $i+1$ . Given that we have chosen the $S_i$ to be increasing superexponentially, this does not pose a problem. The paper is concluded in Section 6 by putting together the results of Sections 4 and 5 to obtain Proposition 3.2.

2. Behaviors around the critical curve $\alpha\beta=\boldsymbol{d}$

In this section, we first prove Theorem 1.1, which includes unstable cases in Regions II, III, and IV. We understand the importance of the curve $L= \{\alpha\beta = d\}$ from the construction of an unstable model in Region IV; see (2.2) below. We also look into models with parameters $(\alpha,\beta)$ above the critical curve, $\{\alpha\beta > d\}$ . We then show that the proof of Theorem 1.2 need only be given for $\tilde{\mathbb{P}}$ which are discrete with possible values (for $\tau$ and R) well separated.

2.1. Unstable models in Regions II, III; constructing unstable models in Region IV

The proof of Theorem 1.1 is similar to that in [Reference Foss, Konstantopoulos and Mountford4, Section 5]. It is an application of the graphical construction and the law of large numbers for independent random variables. We investigate any generic model in Regions II and III, $\{\beta< d\}$ , and also construct an unstable model in Region IV $=\{\alpha\beta<d,\ \beta>d,\ 0\leq \alpha<1\}$ .

Proof of Theorem 1.1.

  1. 1. (Regions II, III.) Let the arrival rate be $\lambda > 0$ . By the assumption that the system is nontrivial, we can use a large-deviation estimate for a continuous-time random walk to see that there exist a strictly positive $c_0= c_0( \lambda )$ and $n_0 < \infty $ such that for any $n \geq n_0$ ,

    \[ \inf_ {x\in [\!-\!c_0n, c_0 n]^d} {\mathbb{P}}\!\left( \text{there exists an admissible path $\gamma$ with $\gamma(n) = \mathbf{0}$, $\gamma(0) =x$} \right) > 1- e^{-c_0 n} . \]

    We choose strictly positive $\epsilon < \frac{d - \beta}{3}$ . By the definition of $\beta$ we see that $\mathbb{E}\big[\tau^{1+\beta + \epsilon}\big] = \infty $ , so there exists a sequence $n_i$ increasing to infinity such that

    \[ {\mathbb{P}}(\tau \geq 3n_i) \ \geq \ \frac{1}{n_i^{1+\beta + 2 \epsilon}}. \]
    We may assume that $n_i > n_0$ for all $i \geq 0$ .

    For any $i \geq 1$ , the event that $W(2n_i, \mathbf{0}) \leq n_i$ is contained in the union of the following two events:

    • $\{ \nexists (t,x) \in [0,n_i] \times [\!-\!c_0n_i, c_0n_i]^d$ so that a job arrives at (t, x) with $\tau \geq 3n_i \}$ ,

    • $\{ \nexists$ an admissible path $\gamma\;:\;[n_i, 2 n_i] \rightarrow \mathbb{Z}^d$ with $\gamma (n_i) = x$ and $\gamma (2n_i) = \mathbf{0}\}$ .

    But the probability of the first event is bounded by

    \begin{align*} e^{- \lambda \frac{(2c_0n_i+1)^d n_i}{n_i^{1+\beta + 2 \epsilon}}},\end{align*}
    which tends to zero as $n_i$ tends to infinity, while the second event, conditioned on the existence of such an x (with x chosen according to a fixed non-random ordering of sites), has probability bounded by $e^{-c_0 n_i} $ , by our choice of $c_0$ . Thus the distributions $\{W( 2n_i, \mathbf{0})\}_{i \geq 0}$ are not tight.
  2. (Regions outside $\{\alpha \geq 0 ,\beta >0\}$ .) The first case already covers $\beta\leq 0 $ , so we need only treat $\alpha < 0 $ . We fix ${e_1}>0$ so that ${\mathbb{P}}(\tau > {e_1}) >{e_1}.$ We fix $M \geq 4$ , and, as in the previous case, we fix $\epsilon > 0 $ strictly less than $\frac{- \alpha }{3}$ . Again we have the existence of $n_i $ increasing to infinity with

    \[ {\mathbb{P}}(R\geq 3n_i) \ \geq \ \frac{1}{n_i^{d-\epsilon}}. \]
    For every i, the event $\{ W( 1,\mathbf{0})< M{e_1} -1\}$ is contained in the union of the following two events:
    • $\{$ for some $1 \leq k \leq M,\ \nexists (x,t) \in [\!-\!n_i, n_i]^d \times \Big[1-\frac{2k-1}{2M}, 1-\frac{k-1}{M}\Big]$ such that a job arrives at (x, t) with $R \geq 3n_i \}$ ,

    • $\{ \exists 1 \leq k \leq M $ such that $\nexists \ (x,t) \in [\!-\!n_i, n_i]^d \times \Big[1-\frac{k}{M}, 1-\frac{2k-1}{2M}\Big]$ such that a job arrives at (x, t) with $\tau > {e_1} \}$ .

    So, as above, ${\mathbb{P}}(W( 1 ,\mathbf{0})< M{e_1} -1)$ is bounded by the sum

    \begin{align*} Me^{-(2n_i+1)^d \frac{\lambda}{2Mn_i^{d-\epsilon}}} + Me^{-(2n_i+1)^d \lambda {e_1}/2M} .\end{align*}
    Letting i tend to infinity shows that ${\mathbb{P}}(W( 1 ,\mathbf{0})< M{e_1} -1)=0$ ; since $M \geq 4$ is arbitrary, $W(1,\mathbf{0})$ is almost surely infinite, and in particular the workloads are not tight.
  3. (Region IV.) For any point $(\alpha,\beta)$ in Region IV $=\{\alpha \beta <d , 0<\alpha <1, \beta >d\}$ , we construct an unstable system having $(\alpha, \beta)$ as its moment parameters. Note that unlike in the previous two cases, we are not claiming that any distribution with parameters in this region is necessarily unstable.

    For this system, jobs have discrete spatial and temporal sizes. In particular, for every positive integer i, we define $R_i \;:\!=\; 2^{(1+\beta)i}$ and $\tau_i \;:\!=\; 2^{(d+\alpha) i}$ . The jobs have only sizes $(\tau_i,R_i)$ for some $i\in \mathbb{N}$ , and with probability

    (2.1) \begin{equation} \tilde {\mathbb{P}} (\{(\tau_i, R_i)\}) = {\mathbb{P}}\!\left( (\tau, R) =( \tau_i ,R_i) \right) = c \cdot 2^{-(d+\alpha)(1+\beta)i} , \end{equation}
    where c is a normalizing constant $c = \left(\sum_{i= 1}^{\infty} 2^{-(d+\alpha)(1+\beta)i} \right)^{-1}$ . It is easy to check that this system is in Region IV, and we only need to show that it is unstable. The argument is similar to that of the first case. We consider $W( \frac{\tau_i}{4} ,\mathbf{0})$ . The number $N_i$ of $(\tau_i,R_i)$ jobs arriving at a space–time point in $ [\!-\!\frac{R_i}{4},\frac{R_i}{4}]^d\times[0 ,\frac{\tau_i}{4}] $ is a Poisson random variable with parameter
    (2.2) \begin{equation} c 2^{-(d+1)}R_i^d \tau_i 2^{-(d+\alpha)(1+\beta)i} = c 2^{-(d+1)} 2^{i(d-\alpha\beta)}. \end{equation}
    Since $d>\alpha\beta$ , this parameter tends to infinity as i becomes large. By considering the admissible path $\gamma (s) \equiv \mathbf{0}$ for $0 \leq s \leq \frac{\tau_i}{4}$ , we have that $W( \frac{\tau_i}{4} ,\mathbf{0})$ is stochastically greater than $\tau_i (N_i- \frac{1}{4} )$ . This suffices to show the lack of tightness.
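As a numerical illustration (not part of the proof), the following sketch evaluates the Poisson parameter (2.2) at a sample Region IV point; the values of d, $\alpha$, $\beta$ below are arbitrary choices with $\alpha\beta<d$, not taken from the text.

```python
# Illustration of (2.2): at a sample Region IV point (alpha*beta < d), the
# Poisson parameter c * 2^{-(d+1)} * 2^{i(d - alpha*beta)} of N_i diverges in i.
d, alpha, beta = 1, 0.5, 1.5          # sample point: alpha*beta = 0.75 < d = 1
# normalizing constant c from (2.1)
c = 1.0 / sum(2.0 ** (-(d + alpha) * (1 + beta) * i) for i in range(1, 500))

def poisson_parameter(i):
    R_i = 2.0 ** ((1 + beta) * i)                      # spatial size R_i
    tau_i = 2.0 ** ((d + alpha) * i)                   # temporal size tau_i
    p_i = c * 2.0 ** (-(d + alpha) * (1 + beta) * i)   # probability (2.1)
    return 2.0 ** (-(d + 1)) * R_i ** d * tau_i * p_i  # box volume times mass

params = [poisson_parameter(i) for i in range(1, 30)]
```

The closed form $c\,2^{-(d+1)}2^{i(d-\alpha\beta)}$ makes the geometric divergence visible: each increment of i multiplies the parameter by $2^{d-\alpha\beta}>1$.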

Remark 2.1. We have the following two remarks on the workload W(t, x); the proofs are extensions of arguments in the above proof, and we omit their details:

  1. In the proof of the first case, we use the fact that the parameter $\beta$ is strictly smaller than d, which implies the $(d+1-\frac{\epsilon}{2})$ th moment of $\tau$ is infinite. In fact, with some extra work, we can show that the model is unstable with the condition that $\tau$ has an infinite $(d+1)$ th moment, so that there are also unstable models with parameters $(\alpha,\beta)$ on the boundary of Regions II, III (i.e. $\beta=d$ ). Similarly, we can also show that the model is unstable with the condition that R has an infinite dth moment, when $\alpha =0$ .

  2. When $\alpha <0$ , the workload $W(t,\mathbf{0})$ is infinite almost surely, for any $t>0$ .

2.2. Reduction of Theorem 1.2

The purpose of this subsection is to reduce the proof of Theorem 1.2 to proving corresponding results for a ‘discrete’ comparison process, stated as Theorem 2.1 below; this discretization, as mentioned, permits the use of inductive arguments. Theorem 2.1 is itself further reduced in the next section.

Recall that $( \tau', R') \sim \tilde {\mathbb{P}}$ means that $\tilde {\mathbb{P}}$ is the law of $(\tau', R')$ . We say that the law $\tilde{ \tilde {\mathbb{P}}} $ dominates $\tilde {\mathbb{P}} $ if there is a coupling

\begin{align*} ( \tau', R') \sim \tilde{\mathbb{P}} \mbox{ and } ( \tau, R) \sim \tilde {\tilde {\mathbb{P}}} \mbox{ such that } \tau' \leq \tau \mbox{ and } R' \leq R \mbox{ almost surely}.\end{align*}

The main tool which follows from Remark 1.1 is the following.

Proposition 2.1. If $\tilde{ \tilde {\mathbb{P}}} $ dominates $\tilde {\mathbb{P}}$ and $\tilde {\tilde {\mathbb{P}}}$ is stable, then so is $\tilde {\mathbb{P}}$ .

We now give a family of distributions $\tilde{\mathbb{P}}^{\theta,A}$ for $\theta, A >0$ which always dominate a given $\tilde{\mathbb{P}} $ . Given $\theta $ , A and $\tilde{\mathbb{P}}\in V$ , we choose the sequences

(2.3) \begin{equation} S_i = 2^{(1+\delta)A(1+\theta)^{2i}}, \quad T_i =2^{\alpha A (1+\theta)^{2i+1}},\end{equation}

for $i\geq 1$ . We also adopt the convention $T_0=0$ . The value $\delta > 0$ is specified as a function of $\theta $ and $\tilde{\mathbb{P}} $ below. Given these sequences, $\tilde{\mathbb{P}}^{\theta,A}$ denotes the law of $(\tau', R')$ obtained from $(\tau,R) \sim \tilde{\mathbb{P}}$ by specifying as in Remark 1.1(2). Thus, from Proposition 2.1, we have that Theorem 1.2 is implied by the following.

Proposition 2.2. For $\tilde{\mathbb{P}} $ in Region V, there exists $\theta $ sufficiently small so that there exists A sufficiently large so that $\tilde{\mathbb{P}} ^{\theta, A}$ is stable.
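For orientation, the doubly exponential growth in (2.3) is conveniently handled in $\log_2$ scale. The sketch below, with arbitrary sample values of $\theta$, A, $\delta$, $\alpha$ (none taken from the text), confirms the recursions $S_{i+1}=S_i^{(1+\theta)^2}$, $T_{j+1}=T_j^{(1+\theta)^2}$ used later, and the identity $S_i^{\alpha(1+\theta)}=T_i^{1+\delta}$ of (3.1).

```python
# Work in log2 scale: log2(S_i) = (1+delta)*A*(1+theta)^(2i) and
# log2(T_i) = alpha*A*(1+theta)^(2i+1), as in (2.3). Sample parameters only.
theta, A, delta, alpha = 0.1, 50.0, 0.05, 0.6

def log2_S(i):
    return (1 + delta) * A * (1 + theta) ** (2 * i)

def log2_T(i):
    return alpha * A * (1 + theta) ** (2 * i + 1)

checks = []
for i in range(1, 10):
    # recursion: S_{i+1} = S_i^{(1+theta)^2}, and likewise for T
    checks.append(abs(log2_S(i + 1) - (1 + theta) ** 2 * log2_S(i)) < 1e-9)
    checks.append(abs(log2_T(i + 1) - (1 + theta) ** 2 * log2_T(i)) < 1e-9)
    # identity (3.1): S_i^{alpha(1+theta)} = T_i^{1+delta}
    checks.append(abs(alpha * (1 + theta) * log2_S(i) - (1 + delta) * log2_T(i)) < 1e-9)
```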

It remains to discuss the value $\delta $ . We also introduce a fourth value $\kappa > 0$ which plays a role in our analysis. Consider a generic $\tilde{\mathbb{P}}$ in Region V. Since Region V $=\{\alpha <1,\ \alpha \beta>d \}$ is an open set, we can find parameters $\alpha_o,\beta_o$ with $\alpha_o\beta_o >d \geq 1$ and $0<\alpha_o < 1 $ , such that the job sizes $(\tau_o, R_o)$ satisfy the moment condition

(2.4) \begin{equation} \mathbb{E}\Big[ R_o^{d+\alpha_o} + \tau_o^{1+\beta_o} \Big]<\infty.\end{equation}

Although the critical moments $(\alpha_c,\beta_c)$ of $\tilde{\mathbb{P}} $ may not satisfy the strict inequality in (2.4), from the definitions (1.10), we can always approximate $(\alpha_c, \beta_c)$ by such $(\alpha_o,\beta_o)$ .

For $\theta>0$ , we define $\alpha_1,\beta_1$ parametrized by $\theta$ as follows:

(2.5) \begin{equation}\alpha_1 = \frac{d+\alpha_o}{(1+\theta)^2}-d,\text{ and } \beta_1 = \frac{1+\beta_o}{(1+\theta)^2}-1.\end{equation}

Since $\lim_{\theta \downarrow 0} \alpha_1 \beta_1 = \alpha_o \beta_o >d$ , we can find a $\theta >0$ such that the pair $(\alpha_1, \beta_1)$ is a point in Region V,

(2.6) \begin{equation} 0<\alpha_1 < \alpha_0 <1, \text{ and } \alpha_1\beta_1 >d.\end{equation}

Then we can define $\delta$ by

(2.7) \begin{equation} \delta = { \min\!\bigg({\frac{\alpha_1\beta_1 -d}{d+\alpha_1}} , \frac{\theta}{3}\bigg)}>0\end{equation}

and take a $\kappa < \min\{ \frac{1}{8}, \frac{\delta}{4(1+\delta)} \}$ . It follows immediately that $\theta\geq 3\delta> \frac{3\kappa} {1-\kappa} $ , and therefore we have

(2.8) \begin{equation} \theta > \frac{3\kappa }{1-\kappa}.\end{equation}
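The arithmetic behind the choices (2.5)-(2.8) can be checked on sample numbers; in the sketch below, $(\alpha_o, \beta_o)$ and $\theta$ are arbitrary illustrative values with $\alpha_o\beta_o>d$, not values from the text.

```python
# Sample check of (2.5)-(2.8); (alpha_o, beta_o, theta) are toy values.
d = 1
alpha_o, beta_o = 0.6, 2.0     # alpha_o*beta_o = 1.2 > d = 1, 0 < alpha_o < 1
theta = 0.01                   # small enough that (2.6) survives

alpha_1 = (d + alpha_o) / (1 + theta) ** 2 - d                   # (2.5)
beta_1 = (1 + beta_o) / (1 + theta) ** 2 - 1
delta = min((alpha_1 * beta_1 - d) / (d + alpha_1), theta / 3)   # (2.7)
kappa = 0.99 * min(1 / 8, delta / (4 * (1 + delta)))             # any such kappa
```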

By the Markov inequality and (2.4), we have for each job in the new system (i.e., $(\tau,R)$ chosen from the law $\tilde{\mathbb{P}} ^{\theta, A}$ ), for $i\geq {1}$ , $j\geq 1$ ,

\begin{align*}\mathbb{P}(R = S_{i+1} ) \leq \mathbb{P}(R_o \geq S_{i} ) \leq \mathbb{E}[R_o^{d+\alpha_o}] S_{i}^{-(d+\alpha_o)},\notag\\[5pt] \mathbb{P}(\tau = T_{j+1} ) \leq \mathbb{P}(\tau_o \geq T_{j} ) \leq \mathbb{E}[\tau_o^{1+\beta_o}] T_{j}^{-(1+\beta_o)},\end{align*}

which, from the choices of $(\alpha_1, \beta_1)$ and $S_{i+1} = S_i^{(1+\theta)^2},T_{j+1} = T_j^{(1+\theta)^2} $ , are bounded by

\begin{align*}\mathbb{E}[R_o^{d+\alpha_o}] S_{i}^{-(d+\alpha_o)} = \mathbb{E}[R_o^{d+\alpha_o}] S_{i+1}^{-\frac{d+\alpha_o}{(1+\theta)^2}} \leq c_1 S_{i+1}^{-(d+\alpha_1)},\notag\\[5pt] \mathbb{E}[\tau_o^{1+\beta_o}] T_{j}^{-(1+\beta_o)} = \mathbb{E}[\tau_o^{1+\beta_o}] T_{j+1}^{-\frac{1+\beta_o}{(1+\theta)^2}} \leq c_2 T_{{j+1}}^{-(1+\beta_1)},\end{align*}

for some $c_1, c_2>0$ .

Therefore, the distribution of $(\tau,R)$ in the new system (i.e. $\tilde {\mathbb{P}}^{\theta,A}$ ) satisfies, for $i\geq 1$ , $j\geq 1$ ,

(2.9) \begin{align} \mathbb{P}(R = S_{i} ) \leq c_1 S_{i}^{-(d+\alpha_1)},\end{align}
(2.10) \begin{align} \mathbb{P}(\tau = T_{j} ) \leq c_2 T_{j}^{-(1+\beta_1)}, \end{align}

for some $c_1, c_2>0$ . Note that while $c_1, c_2$ depend on A, the exponents $\alpha_1, \beta_1$ do not.

Our final reduction is just to suppose that in the inequalities (2.9) and (2.10) we may take equality. Given a law $\tilde{\mathbb{P}}^{\theta,A}$ satisfying the above inequalities, it is immediate that by adding mass at the points $(0,S_{i}) $ and $(T_j,0)$ , we can construct a measure $\mu $ (of mass at least 1) such that, for $\mu $ , we have equality in (2.9) and (2.10). Let $\tilde {\tilde {\mathbb{P}}}^{\theta,A} $ be $\mu $ normalized to be a probability. We now simply note, using basic Poisson process properties, that for each $\lambda > 0$ , the system $\tilde {\tilde {\mathbb{P}}}^{\theta,A}$ at rate $|\mu| \lambda $ dominates the system $\tilde{\mathbb{P}}$ at rate $\lambda$ (with the obvious meaning of ‘dominates’ in this context). Thus, to prove Proposition 2.2 (and therefore Theorem 1.2), it suffices to show the following.

Theorem 2.1. Given $\tilde{\mathbb{P}} $ in Region V, let $\theta >0$ be such that for all $A>0$ , the law $\tilde {\tilde {\mathbb{P}}}^{\theta,A}$ is in Region V, and such that for $(\tau,R) \sim \tilde {\tilde {\mathbb{P}}}^{\theta,A}$ , $\mathbb{P}(R = S_{i} ) =c_1 S_{i}^{-(d+\alpha_1)}$ and $\mathbb{P}(\tau = T_{j} ) = c_2 T_{j}^{-(1+\beta_1)}$ for some $c_1$ and $c_2$ . Then for A sufficiently large, $\tilde {\tilde {\mathbb{P}}}^{\theta,A}$ is stable.
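The padding step described above can be sketched concretely; all sizes, exponents, and masses below are toy values (the exponents stand in for $d+\alpha_1$ and $1+\beta_1$), and the added mass sits on degenerate jobs $(0,S_i)$ and $(T_j,0)$.

```python
# Toy sketch of the final reduction: pad sub-critical masses up to equality in
# (2.9)-(2.10) by adding degenerate jobs, then renormalize to a probability.
S = [4.0, 16.0, 256.0]       # toy spatial sizes S_i
T = [8.0, 64.0, 4096.0]      # toy temporal sizes T_j
c1, c2 = 1.0, 1.0
e_R, e_tau = 1.5, 2.5        # stand-ins for d + alpha_1 and 1 + beta_1

p_R = [0.5 * c1 * s ** (-e_R) for s in S]        # current masses, below the bounds
p_tau = [0.5 * c2 * t ** (-e_tau) for t in T]

pad_R = [c1 * s ** (-e_R) - p for s, p in zip(S, p_R)]        # mass at (0, S_i)
pad_tau = [c2 * t ** (-e_tau) - p for t, p in zip(T, p_tau)]  # mass at (T_j, 0)
mass_mu = 1.0 + sum(pad_R) + sum(pad_tau)        # |mu| >= 1
# normalizing by mass_mu keeps equality, with constants c1/mass_mu, c2/mass_mu
```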

3. A stable discretized model and its growth estimates

In this section, we focus on the new discrete system introduced in Theorem 2.1. Our main goal in this section is to reduce the proof of Theorem 2.1 to that of a technical result, Proposition 3.2.

We start with some conditions satisfied by the job sizes for $\tilde{\tilde{\mathbb{P}}}^{\theta,A}$ when A is sufficiently large. The specific form (2.3) of the job sizes is convenient for presenting the estimates and proofs in this section. In principle, there are many other choices for $(S_i ,T_j)$ , as long as the system belongs to Region V. The form (2.3) of $(S_i,T_j)$ and a large enough A imply the conditions (3.2)–(3.8) below, which appear in the assumptions of two technical propositions, Proposition 3.1 and Proposition 3.2. Again, these conditions are not necessary for stability, but they aid our proofs. We will discuss their roles after stating them.

For large A, the sequences $(S_i), (T_j)$ satisfy the following conditions:

  1. The choice of exponents implies that for $i>1$ ,

    (3.1) \begin{equation} S_i^{\alpha(1+\theta)} =T_i^{1+\delta}. \end{equation}
  2. The scales $ \Delta t_{2i-1}=S_i^\alpha\geq 1$ , $\Delta t_{2i}= T_i^{1+\delta}\geq 1$ are well separated: for all $i\geq 1$ ,

    (3.2) \begin{equation} \varrho^{i} {\Delta t_i}^{1+\kappa} < \varrho^{-(i+1)} {\Delta t_{i+1}}^{1-\kappa}, \end{equation}
    where $ \varrho = 10 + \frac{d}{\alpha\kappa}$ . In fact, for each i we can choose
    (3.3) \begin{equation} l_i = \varrho^{-i}\Delta t_i^{-\kappa}, \qquad r_i = {\varrho}^{i}\Delta t_i^{\kappa} \end{equation}
    so that we have an increasing sequence of times in $\mathbb{R}_+$ ,
    (3.4) \begin{equation} 0< l_1 \Delta t_{1} < r_1\Delta t_{1} < l_2 \Delta t_{2} < r_2 \Delta t_{2} <\ldots . \end{equation}
    To prevent confusion, we point out that the symbol $\Delta t_i$ means not the increment of time $t_{i+1}-t_i$ , but a scale corresponding to i.

    We prove (3.2) when i is odd, $i=2j-1$ . Indeed, there is an $A>0$ such that for all $j\geq 1$ ,

    \[(\Delta t_{2j-1})^{\kappa}= 2^{A\kappa\alpha(1+\delta)(1+\theta)^{2j}}>2^{2 A\kappa\alpha\theta(1+\delta)j}> \varrho^{4j-1}.\]
    Then by (2.8), we have that $1+\theta> \frac{1+2\kappa}{1-\kappa}$ . Therefore, from (3.1),
    \[ {\varrho}^{4j-1} {\Delta t_{2j-1}}^{1+\kappa} < {\Delta t_{2j-1}}^{1+2\kappa} < {\Delta t_{2j-1}}^{(1+\theta)(1-\kappa)}={\Delta t_{2j}}^{(1-\kappa)} , \]
    which implies (3.2). We can similarly prove the case when i is even; we omit the proof.

    In fact, (3.2) also implies the following two conditions (when A is large); see (3.5) and (3.6) below, and for a short proof see the end of Item 4:

  3. Higher scales have little influence over lower scales: for all $i\geq 1$ ,

    (3.5) \begin{equation} \sum_{j>0}\varrho^{2j} \Delta {t_{2(i+j)-1}}^{-(1-\kappa)} < \Delta{t_{2i}}^{-1}. \end{equation}
  4. Summability: for each $C>0$ ,

    (3.6) \begin{equation} \sum_{i>1}\exp\!(\!-\!C{\Delta t_{i}}^{\kappa} ) \ln (\Delta t_{i})< \infty. \end{equation}
    The inequalities (3.5) and (3.6) follow from (3.2). In fact, (3.2) implies that for all $i\geq 1$ ,
    \[ \Delta t_{i}^{-1} > \varrho^{2i+1} \Delta t_{i+1}^{-(1-\kappa)}>\varrho^{3} \Delta t_{i+1}^{-1}. \]
    Therefore, we have that for any $j>0$ ,
    \[ \varrho^{-j}\Delta t_{2i}^{-1} > \varrho^{2j} \Delta t_{2(i+j)-1}^{-(1-\kappa)}. \]
    Summing over j, we get (3.5).

    Similarly, from (3.2) we get that $\Delta t_i \geq \varrho^{3i}.$ For any $C>0$ , there is an $M>0$ such that the function $\exp\!(\!-\!C x^{\kappa} ) \ln (x)$ is decreasing in x on $[M,\infty)$ . We can get (3.6) from the inequality $\sum_{i>1}\exp\!(\!-\!C{\varrho^{3i\kappa}} ) \ln (\varrho^{3i})< \infty$ .

  5. Leading order: for all $i\geq 1$ ,

    (3.7) \begin{equation} \Big(S_i^{1-\alpha}\Delta t_{2i}\Big)^{d}\Delta t_{2i} T_i^{-(1+\beta)}<1. \end{equation}
    This is a consequence of $\alpha \beta >d $ and of the choice of $\delta$ ; see (2.7). In view of (3.1), we have that $S_i < T_i^{\frac{1+\delta}{\alpha}}$ . Therefore, we bound the left-hand side of (3.7) (strictly) by
    \begin{equation*} T_i^{\left(\left(\frac{1-\alpha}{\alpha} + 1\right)d+1\right)(1+\delta) -(1+\beta)} = T_i^{\frac{d-\alpha\beta + (d+\alpha)\delta}{\alpha}}\leq 1 . \end{equation*}
  6. Also, when A is large, we have the following inequalities for $(S_i)$ , for all $k \geq 1$ :

    (3.8) \begin{equation} 10\sum_{i < k} S_i^{1-\alpha} <S_k^{1-\alpha}, \quad 10\sum_{i>k} S_i^{-\alpha} < S_{k}^{-\alpha}. \end{equation}
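The separation condition (3.2) can be verified numerically in $\log_2$ scale. In the sketch below, the parameters d, $\alpha$, $\theta$, $\delta$, $\kappa$, A are arbitrary sample values consistent with (2.6)-(2.8), not values from the text; from (2.3), $\log_2 \Delta t_k = \alpha(1+\delta)A(1+\theta)^{k+1}$ for both parities of k.

```python
import math

# Log2-scale check of (3.2): rho^k * Dt_k^{1+kappa} < rho^{-(k+1)} * Dt_{k+1}^{1-kappa}.
d, alpha, theta = 1, 0.6, 0.1
delta, kappa, A = theta / 3, 0.008, 2000.0   # kappa < min(1/8, delta/(4(1+delta)))
rho = 10 + d / (alpha * kappa)               # the constant varrho of (3.2)
log2_rho = math.log2(rho)

def log2_dt(k):
    # covers both Delta t_{2i-1} = S_i^alpha and Delta t_{2i} = T_i^{1+delta}
    return alpha * (1 + delta) * A * (1 + theta) ** (k + 1)

margins = []
for k in range(1, 61):
    lhs = k * log2_rho + (1 + kappa) * log2_dt(k)
    rhs = -(k + 1) * log2_rho + (1 - kappa) * log2_dt(k + 1)
    margins.append(rhs - lhs)                # > 0 means (3.2) holds at index k
```

A positive margin at every index also gives the ordering $r_k\Delta t_k < l_{k+1}\Delta t_{k+1}$ of (3.4), since the two statements coincide in $\log_2$ scale.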

Before going into the major bounds in the proof of Theorem 2.1, we briefly explain the roles of the conditions (3.2)–(3.8) in the major steps. The conditions (3.2) and (3.6) are used directly in the proof of Theorem 2.1; see Subsection 3.3 below. In general, (3.2) allows us to look at different time intervals corresponding to different time scales and obtain estimates according to these scales, while (3.6) allows us to get a finite sum from these estimates and to apply a Borel–Cantelli type of argument; see Lemma 3.1 below. The conditions (3.8) enable us to estimate the probability of connecting two time–space points, which is used throughout Sections 4 and 5. In particular, (3.8) has nothing to do with temporal workload, and it controls spatial growth for admissible paths for $W(t,\mathbf{0})$ . The condition (3.7) is the cornerstone for controlling the growth of the maximal numbers of jobs (see (3.9) and (3.10) below), and it plays a central role in Proposition 5.1. This condition implies that when jobs of spatial size larger than $S_j$ are not involved, the density of jobs of temporal size $T_j$ along any admissible path is at most $T_j^{-(1+\delta)}$ , for some $\delta>0$ . The condition (3.5), on the other hand, ensures that jobs of size larger than $S_j$ do not affect the leading order $T_j^{-(1+\delta)}$ for the density of jobs of temporal size $T_j$ along any admissible paths; see Proposition 5.2. It is with (3.5) that we bound a family of tailored exponential moments (see Corollary 5.1 below) and derive from them the probabilistic estimates in Proposition 3.2.

3.1. Growth estimates

The job sizes are discrete for $\tilde{\tilde{\mathbb{P}}}^{\theta,A}$ , so we can estimate the load of an admissible path from the contributions of different job sizes. For any fixed admissible path $\gamma_{0,t}$ , we denote by $n_j(\gamma_{0,t})$ the number of jobs of temporal size $T_j$ that $\gamma_{0,t}$ intersects (in the sense of (1.5)), and by $m_i(\gamma_{0,t})$ the number of jobs of spatial size $S_i$ that $\gamma_{0,t}$ intersects:

(3.9) \begin{align} n_j(\gamma_{0,t}) \;:\!= & \text{ number of jobs of temporal size } T_j \text{ intersected by } \gamma_{0,t}, \end{align}
(3.10) \begin{align} m_i(\gamma_{0,t}) \;:\!= & \text{ number of jobs of spatial size } S_i \text{ intersected by } \gamma_{0,t}. \end{align}

The load of an admissible path can be written as

\[U(\gamma_{0,t})= \sum_j n_j(\gamma_{0,t}) T_j. \]

Taking the supremum over all admissible paths with initial point $\gamma_{0,t}(0)=\mathbf{0}$ , we get three types of maximal quantities, which are all superadditive in time t:

\begin{equation*} \sup U(\gamma_{0,t}), \qquad \sup n_j(\gamma_{0,t}), \qquad \text{and} \qquad \sup m_j(\gamma_{0,t}). \end{equation*}

The technical propositions involve bounding the growth of these quantities.

The main ingredient towards Theorem 2.1 is the following proposition on the maximal load up to time t, $\sup U(\gamma_{0,t})$ . It says that $\sup U(\gamma_{0,t})$ grows at most linearly in time t with high probability. In view of (3.4), the probabilistic estimate is related to different time scales on different time intervals. On each time interval larger than the time scale $\Delta t_{2i}$ , the probability that $\sup U(\gamma_{0,t})$ grows faster than $c\cdot t$ is exponentially small in the time scale $\Delta t_{2i}$ , up to a term which is proportional to the ratio of t and the next time scale $\Delta t_{2(i+1)}$ ; on the time interval close to the time scale $\Delta t_{2i}$ , we can modify the event slightly.

Proposition 3.1. Recall $\kappa < \min\{\frac{1}{8}, \frac{\delta}{4(1+\delta)}\}$ . Suppose the assumptions of Theorem 2.1 hold, and the two sequences of numbers $(S_i)$ , $(T_j)$ satisfy the conditions (3.2)–(3.8), for $\Delta t_{2i-1}=S_i^{\alpha}$ , $\Delta t_{2i}=T_i^{1+\delta}$ , and $l_i,r_i$ defined by (3.3), for all $i \in \mathbb{N}$ . Then there are positive constants c, C, C′, and $\lambda_d$ such that if $\lambda < \lambda_d$ , then for all $i\geq 1$ ,

(3.11) \begin{align} &\mathbb{P}\!\left( \sup U(\gamma_{0,t})> c\cdot t \right) \notag\\[5pt] <& \begin{cases} \exp\!(\!-\!C {\Delta t_{2i}}^{\kappa}) + C' {\Delta t_{{{2i+2}}}}^{-1} t& \text{ if } l_{2i} {\Delta t_{2i}} \leq t < r_{2i} {\Delta t_{2i}}, \\[5pt] \exp\!(\!-\!C {\Delta t_{2i}}^{-1}\cdot t ) + C' {\Delta t_{{{2i+2}}}}^{-1} t &\text{ if } r_{2i} {\Delta t_{2i}}\leq t < l_{2i+2} {{\Delta t_{2i+2}}}, \end{cases} \end{align}

where the supremum is taken over all admissible paths with initial point $\gamma_{0,t}(0)=\mathbf{0}$ .

Before proceeding, we remark that we can get an upper bound for C′, but because of our choices of $r_i$ , $l_i$ and the time scales $\Delta t_i$ , (3.11) converges to 0 fast enough regardless of the value of C′. In fact, we use (3.11) to prove a Borel–Cantelli type of lemma; see Lemma 3.1 below.

Notice that the time scales have been standardized, in the sense that the scale ${\Delta t_{2i} = T_i^{1+\delta}}$ is a typical time for seeing one job of temporal size $T_i$ along the maximal path. A more precise statement is given in Proposition 3.2. Let $A_{t,j}(c')$ be the event that the maximal number of jobs of temporal size $T_j$ does not exceed $c' T_j^{-(1+\delta)}\cdot t$ by time t, for any admissible path on [0, t] with initial point $(0,\mathbf{0})$ :

(3.12) \begin{equation} A_{t,j}(c')\;:\!=\; \left\{ \sup_{\substack{\gamma_{0,t}(0)={\mathbf{0}},\\ \text{$\gamma_{0,t}$ admissible}}} n_j(\gamma_{0,t}) \leq c'T_j^{-(1+\delta)}t \right\}\!.\end{equation}

Let $B_{i}(t)$ be the event that no jobs of spatial size $S_j$ , $j\geq i+2$ , occur along any admissible path on [0, t] with initial point $(0,\mathbf{0})$ :

(3.13) \begin{equation} B_{i}(t)\;:\!=\; \left\{ \sup_{\substack{\gamma_{0,t}(0)={\mathbf{0}},\\ \text{$\gamma_{0,t}$ admissible}}} m_j(\gamma_{0,t}) =0, \text{ for all $j\geq i+2$ }\right\}\!.\end{equation}

For every t in a fixed time interval $ [l_{2i} \Delta t_{2i},l_{2i+2} \Delta t_{2i+2}]$ , we consider the events described by the cases in Proposition 3.2 below, and estimate their probabilities, which are in general small. We then upper-bound the probability that $A_{t,j}(c')$ fails for some j. A small modification is required when t is close to $\Delta t_{2i}$ . In this case, the ratio ${\Delta t_{2i}}^{-1}t$ is of constant order, and we consider the event $A^c_{t,i}(c'\Delta t_{2i}^{2\kappa})$ , which also has a small probability when $B_i(t)$ occurs; see (3.17) below. In general, we can choose $c = c' \sum_{j=1}^{\infty} T_j^{-( \delta-2\kappa )}<\infty$ , and prove Proposition 3.1 from (3.18) in Proposition 3.2. So we omit the proof of Proposition 3.1.

Proposition 3.2. Under the assumptions of Proposition 3.1, there are positive constants c, C, C′, and $\lambda_d<1$ such that if $\lambda<\lambda_d$ , then for any $t\in [l_{2i} \Delta t_{2i},l_{2i+2} \Delta t_{2i+2})$ we have the following estimates on maximal numbers of jobs:

  1. $B_i(t)$ occurs with high probability:

    (3.14) \begin{equation} \mathbb{P}\!\left( B_i(t) ^c\right) \leq C' {\Delta t_{2i+2}}^{-1} t \leq C' l_{2i+2}. \end{equation}
  2. If $j>i$ , then

    (3.15) \begin{equation} \mathbb{P}\Big( A^c_{t,j}(0) \text{ and } B_i(t)\Big) < C' {\Delta t_{2j}}^{-1} t . \end{equation}
  3. If $j\leq i$ , then

    (3.16) \begin{equation} \mathbb{P}\Big( A^c_{t,j}(c') \text{ and } B_i(t)\Big) < \exp\!\Big(\!-\!C \kappa^{i-j+2} {\Delta t_{2j}}^{-1} t\Big) . \end{equation}
  4. In the case when $i=j$ , and when $t< r_{2i}\Delta t_{2i}$ ,

    (3.17) \begin{equation} \mathbb{P}\Big( {A^c_{t,i}}(c'{\Delta t_{2i}}^{2\kappa}) \text{ and } B_i(t)\Big) < \exp\!(\!-\!C \Delta t_{2i}^{\kappa}) . \end{equation}

Furthermore, we bound the probability that $A_{t,j}(c'{\Delta t_{2j}}^{2\kappa})$ fails for some j as follows:

(3.18) \begin{align} 1- \mathbb{P}&\!\left( A_{t,j}(c'{\Delta t_{2j}}^{2\kappa}) \text{ for all} \textit{j} \right) \notag \\[5pt] <& \begin{cases} \exp\!(\!-\!C {\Delta t_{2i}}^{\kappa}) + C' {\Delta t_{{{2i+2}}}}^{-1}\cdot t & \text{ if } l_{2i} {\Delta t_{2i}} \leq t < r_{2i} {\Delta t_{2i}}, \\[5pt] \exp\!(\!-\!C {\Delta t_{2i}}^{-1}\cdot t ) + C' {\Delta t_{{{2i+2}}}}^{-1} \cdot t & \text{ if } r_{2i} {\Delta t_{2i}}\leq t < l_{2i+2} {\Delta t_{{{2i+2}}}}. \end{cases} \end{align}

The proof of Proposition 3.2 is postponed to Section 6. To aid the proof, we introduce some tools in Section 4 and use them to estimate exponential moments of the maximal numbers in Section 5. By Markov's inequality, we derive the estimates in Proposition 3.2 from these exponential moments.

3.2. A Borel–Cantelli type of lemma

The next lemma shows that we can get stability from a Borel–Cantelli type of estimate (see (3.19)), and therefore, showing Theorem 2.1 is reduced to showing Proposition 3.1. The proof is elementary; it comes from extensions of admissible paths and monotonicity of $\tilde{W}(t,\mathbf{0})$ in t. A similar argument can be found in the proof of [Reference Foss, Konstantopoulos and Mountford4, Proposition 14].

Lemma 3.1. Suppose there exist positive constants $a>0$ , $b< \frac{{1}}{1+a}$ and a diverging sequence of times $(u_n)$ such that $u_n \leq u_{n+1} \leq (1+a)u_n$ , and

(3.19) \begin{equation} \sum_n \mathbb{P}\!\left( \sup U(\gamma_{0,{u_n}})> b \cdot {u_n} \right) < \infty, \end{equation}

where the supremum is over all admissible paths with initial point $({0,\mathbf{0}})$ . Then $\{\tilde{W}(t,{\mathbf{0}})\}_{t\geq 0}$ is tight.

Proof. First, for $s'<s$ , we can always extend an admissible path $\gamma_{0,s'}$ on $[0, s']$ to a new one $\gamma_{0,s}$ on [0, s] by letting ${\gamma_{0,s}(u) = \gamma_{0,s'}(u)}$ for ${0\leq u\leq s'}$ , and $\gamma_{0,s}(u) = \gamma_{0,s'}(s')$ for $s'<u\leq s$ . Therefore, $U(\gamma_{0,s}) \geq U(\gamma_{0,s'})$ . By (3.19), we get that for any $\epsilon>0$ there exists an $n_\epsilon$ such that

\begin{align*} \mathbb{P}&\!\left( U(\gamma_{0,u_n})> b \cdot {u_n}, \text{ for some } n\geq n_\epsilon, \text{ and} \right. \notag\\[5pt] &\left. \text{ some admissible path } \gamma_{0,u_n} \text{ with } \gamma_{0,u_n}(0)={\mathbf{0}} \right) < \epsilon.\end{align*}

Then, by extending admissible paths, we have

\begin{align*}\mathbb{P}&\!\left( U(\gamma_{0,s})> b(1+a) \cdot s, \text{ for some } s\geq u_{n_\epsilon}, \text{ and} \right. \\[5pt] &\left.\text{some admissible path } \gamma_{0,s} \text{ with } \gamma_{0,s}(0)={\mathbf{0}} \right) < \epsilon. \end{align*}

Since $V\!\left(\gamma_{0,s}\right) = U(\gamma_{0,s})-s$ and $b(1+a)-1<0$ , we get

(3.20) \begin{align}\mathbb{P}&\!\left( V(\gamma_{0,s})> 0, \text{ for some } s\geq u_{n_\epsilon}, \text{ and} \right. \notag\\[5pt] & \left.\text{some admissible path } \gamma_{0,s} \text{ with } \gamma_{0,s}(0)={\mathbf{0}} \right) < \epsilon. \end{align}

We also get a similar statement from (3.19) and extending admissible paths,

\[\mathbb{P}\!\left( U(\gamma_{0,u_{n_\epsilon}})> b \cdot u_{n_\epsilon}, \text{ for some admissible path } \gamma_{0,u_{n_\epsilon}} \text{ with } \gamma_{0,u_{n_\epsilon}}(0)={\mathbf{0}} \right) < \epsilon,\]

which implies that

(3.21) \begin{equation} \mathbb{P}\!\left( \tilde{W}({u_{n_\epsilon}},{\mathbf{0}})> (b+1)u_{n_\epsilon} \right) < \epsilon. \end{equation}

Now we prove tightness of $\left(\tilde{W}(t,{\mathbf{0}})\right)$ . If $t>{u}_{n_\epsilon}$ , then by (3.20)–(3.21) and the definition of $\tilde{W}(t,{\mathbf{0}})$ , we obtain that there is an $N_\epsilon= (b+1){u}_{n_\epsilon}>0$ such that

\begin{align*} \mathbb{P}\!\left( \tilde{W}(t,{\mathbf{0}})>N_\epsilon \right) &\leq \mathbb{P}\!\left( \tilde{W}({u}_{n_\epsilon},{\mathbf{0}})>N_\epsilon \right) + \mathbb{P}\!\left( V(\gamma_{0,s})>N_\epsilon, \text{for some }s \geq {u}_{n_\epsilon}, \right. \\[5pt] &\quad \left. \text{ and some admissible path } \gamma_{0,s} \text{ with } \gamma_{0,s}(0)={\mathbf{0}} \right) \\[5pt] &< 2\epsilon. \end{align*}

If $t\leq {u}_{n_\epsilon}$ , then from the fact that $\tilde{W}(t,{\mathbf{0}})$ is increasing in t, we get that

\[\mathbb{P}\!\left( \tilde{W}(t,{\mathbf{0}})>N_\epsilon \right) \leq \mathbb{P}\!\left( \tilde{W}({u}_{n_\epsilon},{\mathbf{0}})>N_\epsilon \right)<\epsilon. \]

We conclude that $(\tilde{W}(t,{\mathbf{0}}))_{t\geq 0}$ and $(W(t,{\mathbf{0}}))_{t\geq 0}$ are tight.

3.3. Proof of Theorem 2.1 given Proposition 3.2

Now we prove Theorem 2.1 by assuming Proposition 3.2, and therefore Proposition 3.1.

Proof of Theorem 2.1. In view of Lemma 3.1, we only need to find a sequence $(u_n)$ and to verify (3.19) for some $b<\frac{1}{1+a}$ . Fix $a>0$ ; we define the sequence $(u_n)$ inductively:

(3.22) \begin{equation} \begin{cases} {u_1} = l_1\cdot \Delta t_1,\\[5pt] {u_{n+1}} = \min\!\left( \{(1+a){u_n}\} \cup \{ l_k\cdot \Delta t_k \;:\; l_k\cdot \Delta t_k > {u_n} \text{ for some } k \} \right)\!. \end{cases} \end{equation}

It is immediate that $u_{i+1} = (1+a)u_i$ , if ${u_{i+1}} \neq l_k\cdot \Delta t_k$ for any k.

From Proposition 3.1, there exist positive constants c, C, C′ such that, for any k,

\begin{align*} &\sum_{l_k\cdot \Delta t_k \leq {u_i} < r_k\cdot \Delta t_{k}} \mathbb{P}\!\left( \sup U(\gamma_{0,u_i})> c\cdot {u_i} \right) \\[5pt] <&\sum_{l_k\cdot \Delta t_k \leq {u_i} < r_k\cdot \Delta t_{k}} \!\left(\exp\!(\!-\!C {\Delta t_{k}}^{\kappa}) + C' {\Delta t_{k+1}}^{-1} {u_i} \right). \end{align*}

By summing according to the constant $\exp\!(\!-\!C {\Delta t_{k}}^{\kappa})$ , and other terms linear in ${u_i}$ , we can rewrite the previous sum as

\[\exp\!(\!-\!C {\Delta t_k}^{\kappa}) \cdot \left\vert \{{u_i}\;:\; l_k\cdot \Delta t_k \leq {u_i} < r_k\cdot \Delta t_{k} \} \right\vert + C' {\Delta t_{k+1}}^{-1} \sum_{l_k\cdot \Delta t_k \leq {u_i} < r_k\cdot \Delta t_{k}} {u_i}. \]

Since $({u_i})$ grows geometrically in i on the interval $[l_k\cdot \Delta t_k,r_k\cdot \Delta t_k]$ , ${u_{i+1}} = (1+a){u_i}$ , we find that the sum is bounded by

\[\exp\!(\!-\!C {\Delta t_k}^{\kappa}) \!\left( \frac{\log(r_k) -\log(l_k)}{\log(1+a)} +1\right) + C' \frac{r_k\cdot \Delta t_k}{\Delta t_{k+1}} \frac{1}{a},\]

which, by (3.2) and (3.3), is bounded by

(3.23) \begin{align}\frac{2}{\log(1+a)} \exp\!(\!-\!C {\Delta t_k}^{\kappa}) \log\!(\varrho^k \Delta t_k) + \exp\!(\!-\!C {\Delta t_k}^{\kappa}) + C' \varrho^{-(k+1)}. \end{align}
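The two elementary facts about geometric sequences used in passing to (3.23) — the count of terms in an interval and the geometric-sum bound — can be checked directly; the values of a, l, r below are arbitrary toy choices.

```python
import math

# For u_{i+1} = (1+a)*u_i starting at l: the number of terms in [l, r) is at
# most log(r/l)/log(1+a) + 1, and the sum of those terms is at most r*(1+a)/a
# (largest term below r, times the geometric series sum (1+a)/a).
a, l, r = 0.3, 5.0, 4000.0
u = [l]
while u[-1] * (1 + a) < r:
    u.append(u[-1] * (1 + a))

count_bound = math.log(r / l) / math.log(1 + a) + 1
sum_bound = r * (1 + a) / a
```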

Similarly, for any k,

\begin{align} &\sum_{r_k\cdot \Delta t_k \leq {u_i} < l_{k+1}\cdot \Delta t_{k+1}} \mathbb{P}\!\left( \sup U(\gamma_{0,u_i})> c\cdot {u_i} \right) \notag\\[5pt] <& \sum_{r_k\cdot \Delta t_k \leq {u_i} < l_{k+1}\cdot \Delta t_{k+1}} \!\left( \exp\!(\!-\!C {\Delta t_k}^{-1}\cdot {u_i}) + C' {\Delta t_{k+1}}^{-1} {u_i} \right). \notag \end{align}

Since ${u}_{i+1} = {u_i}(1+a)$ when $u_{i+1} \neq l_k\cdot \Delta t_k$ for any k, the sequence

\begin{align*} \left(\exp\!(\!-\!C {\Delta t_k}^{-1}\cdot u_i)\right)_{r_k\cdot \Delta t_k \leq u_i < l_{k+1}\cdot \Delta t_{k+1}}\end{align*}

is bounded by a geometric sequence with its first term as $\exp\!(\!-\!C r_k)$ and a ratio at most $\exp\!(\!-\!C a\cdot r_k) \leq \exp\!(\!-\!C a)$ . Therefore, we bound the sum by

(3.24) \begin{align} & \exp\!(\!-\!C r_k) \cdot \frac{1}{1-\exp\!(\!-\!C{a}) } + C' {\Delta t_{k+1}}^{-1} \sum_{r_k\cdot \Delta t_k \leq {u_i} < l_{k+1}\cdot \Delta t_{k+1}} {u_i} \notag \\[5pt] <& \exp\!(\!-\!C r_k) \cdot \frac{1}{1-\exp\!(\!-\!C{a}) } + C' \frac{l_{k+1} }{a}. \end{align}

Summing over k and by (3.6), we get

\[\sum_{n} \mathbb{P}\!\left( \sup U(\gamma_{0,{u_n}})> c\cdot {u}_n \right) <\infty.\]

By rescaling the arrival rate of the system by a factor $r =\min\{ \frac{1}{2}, \frac{{\lambda_1}}{c(1+a)}\}$ , and choosing $u'_{\!\!n} = {r^{-1}} \cdot u_n $ , we get the estimate (3.19) with $b=r \cdot c {\leq}\frac{{\lambda_1}}{1+a}$ . Hence, the system is stable.

Thus our proof is reduced to proving Proposition 3.2.

4. Connectivity and spatial growth

In this section, we consider the growth of the range of the admissible paths, $\mathcal{R}_t$ (see (4.3) below). For this question the temporal sizes $T_j$ play no role; the results involve only $(S_n)$ . For convenience, we always assume that A is large enough so that (3.8) is valid for $(S_n)$ . As the job arrivals follow independent Poisson processes, for the law $\tilde{\tilde{\mathbb{P}}}^{\theta,A}$ in Theorem 2.1, the arrival rate $\lambda_n$ for jobs with a (positive) spatial size $S_{n}$ is

(4.1) \begin{equation} \lambda_n = c_1 \lambda S_n^{-(d+\alpha)}, \end{equation}

for some $c_1>0$ ; for convenience, we assume $c_1 =1$ for the rest of the paper. A major ingredient is to estimate the probability that two time–space points $(0,\mathbf{0})$ and (t, x) in $ \mathbb{R}_+ \times \mathbb{Z}^{d}$ are connected via admissible paths involving only jobs of spatial sizes $S_j$ , for $j\leq n$ . For $t< S_n^\alpha$ , we show that the probability decays geometrically in the spatial $l_\infty$ distance,

\[ \!\left\vert {(t,x)} \right\vert \equiv \!\left\vert x \right\vert,\]

under the scale $S_{n}$ ; see Equation (4.7) below. The estimate (4.7) is an analogue of (1.12); we will explain the similarity between these two inequalities. The value of this result is that it cuts down the number of admissible paths that we must consider.
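As a toy illustration of the range $\mathcal{R}_t$ (not used in the proofs), the following sketch simulates the reachable set in $d=1$ when only a single spatial size S is present; the rate, radius, box width and time horizon are arbitrary sample values.

```python
import random

# Toy d = 1 simulation: jobs of fixed radius S arrive at total rate
# lam * (2*width + 1) over the sites -width..width; a job at (t, x) lets the
# reachable set absorb B(x, S) whenever B(x, S) meets it, so starting from a
# single point the reachable set stays an interval [lo, hi].
def simulate_range(t_max, lam, S, width, seed=1):
    rng = random.Random(seed)
    lo = hi = 0                                   # range starts at the origin
    total_rate = lam * (2 * width + 1)
    t = rng.expovariate(total_rate)               # superposed Poisson arrivals
    while t < t_max:
        x = rng.randint(-width, width)            # site of the arriving job
        if x - S <= hi and x + S >= lo:           # B(x, S) meets the range
            lo, hi = min(lo, x - S), max(hi, x + S)
        t += rng.expovariate(total_rate)
    return lo, hi

lo, hi = simulate_range(t_max=50.0, lam=0.1, S=2, width=100)
```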

Before we make our statements more precise, we introduce some definitions and assumptions. As in Subsection 1.2, we use graphical constructions for the job arrivals. We define n-admissible paths and a pair of n-connected time–space points by restricting to jobs of spatial sizes at most $S_n$ :

  1. An n-admissible path is an admissible path ${\gamma^{(n)}\;:\; [u,t] \rightarrow \mathbb{Z}^d}$ that starts from time–space point (u, x) and ends at (t, y), with the further restriction that only jobs of spatial sizes $S_j$ , $j\leq n$ , intersect it, in the sense of (1.5); that is, if $ \gamma^{(n)}(s)\neq \gamma^{(n)}(s-)$ , then there is a job of spatial size $S_j$ with $j \leq n$ arriving at (s, x) with

    \begin{equation*} \gamma^{(n)}(s), \gamma^{(n)}(s-) \in B(x,S_j)\;:\!=\;\{y\;:\;\left\vert y-x \right\vert \leq S_j\}, \end{equation*}
    for some $j\leq n$ .

    We also write $\gamma^{(n)}_{u,t}$ or $\gamma^{(n)}_{u,t;x,y}$ when we want to emphasize that $\gamma^{(n)}$ is on the interval [u, t], or that it has endpoints (u, x) and (t, y).

  2. Two time–space points $(t_1,x_1)$ and $(t_2,x_2)$ are n-connected if there is an n-admissible path with starting and ending points $(t_1,x), (t_2,y)$ such that

    (4.2) \begin{equation} \left\vert x-x_1 \right\vert , \left\vert y-x_2 \right\vert \leq \frac{S_{n}}{2}. \end{equation}

These definitions are intended to facilitate a discretization of the model at the scale $S_n$ . We note that for a pair of n-connected points $(t_1,x_1)$ and $(t_2,x_2)$ , there may not exist any n-admissible path $\gamma^{(n)}$ ‘connecting’ them in the sense that $\gamma^{(n)}(t_1)=x_1$ and $\gamma^{(n)}(t_2)=x_2$ . We also define the domain of influence $\mathcal{R}_t$ as a collection of points in time–space which are connected (or n-connected) to $(0,\mathbf{0})$ by admissible (or n-admissible) paths:

(4.3) \begin{align} \mathcal{R}_t \;:\!=\;& \left\{(s,x)\;:\;\exists \text{ an admissible path $\gamma_{0,t}$, with $\gamma_{0,t}(0) = \mathbf{0}$, }\right.\notag\\[5pt] &\left.\text{ and $\gamma_{0,t}(s) = x$ for some $s\leq t$} \right\},\end{align}
(4.4) \begin{align} \mathcal{R}^{(n)}_t \;:\!=\;& \left\{(s,x)\;:\; \exists \text{ an }\textit{n}-\text{admissible path $\gamma^{(n)}_{0,t}$, with $\gamma^{(n)}_{0,t}(0) = \mathbf{0}$, }\right.\notag\\[5pt] &\left.\text{ and $\gamma^{(n)}_{0,t}(s) = x$ for some $s\leq t$} \right\},\end{align}
(4.5) \begin{align} \mathcal{\tilde{R}}^{(n)}_t \;:\!=\;& \left\{ (s,x)\;:\; (\textit{s},\textit{x}) \text{ and $(0,\mathbf{0})$ are} \textit{n}\text{ -connected for some $s\leq t$} \right\}.\end{align}

It is obvious that $\mathcal{R}_t = \bigcup_{n} \mathcal{R}^{(n)}_t \subset\bigcup_{n} \mathcal{\tilde{R}}^{(n)}_t $ .

Since the model is translation-invariant, we need only consider the probabilities for two time–space points $(0,\mathbf{0})$ and (t, x),

(4.6) \begin{equation} p_n(t,x) \;:\!=\; \mathbb{P}\!\left( (0,\mathbf{0}) \text{ and $(t,x)$ are $n$-connected} \right). \end{equation}

In the case when $d=1$ , we use (1.12) and large-deviation estimates to show that $p_n(t,x)$ decays geometrically in x under the scale $S_{n}$ when $t\leq S_{n}^{\alpha}$ . In fact, in any dimension $d\geq 1$ , it follows from Proposition 4.1 that this remains true.

Proposition 4.1. Recall that $(S_n)$ satisfies (3.8) when A is large enough. There are positive constants $q_n(\lambda)<1$ , depending on $\lambda$ , such that for any $t\leq S_{n}^{\alpha}$ and positive integer i, when ${(i-\frac{1}{2})S_{n} < \left\vert x \right\vert < (i+\frac{1}{2})S_{n} }$ ,

(4.7) \begin{equation} p_n(t,x) \leq q_n(\lambda) ^{i-1}. \end{equation}

Moreover, we can choose the upper bounds $q_n(\lambda)$ so that they satisfy the following properties:

  • For all $n>0$ ,

    (4.8) \begin{equation} q_{n+1}(\lambda) = c_1 \lambda + \left(C_d \cdot q_{n}(\lambda)\right)^{\frac{5S_{n+1}}{8S_n}}, \end{equation}
    for some positive constants $c_1, C_d$ depending only on the dimension d.
  • There exist $\lambda_d>0$ and $c_d < \infty $ depending only on d such that for $\lambda< \lambda_d$ ,

    (4.9) \begin{equation} q_{n}(\lambda) <c_d \lambda \end{equation}
    for all $n>0$ .

Before proving Proposition 4.1, we explain its content. It is immediate that $p_n(t,x)$ is increasing in t. Therefore, showing (4.7) amounts to showing that the probability of connecting $(0,\mathbf{0})$ to a time–space point $(S_{n}^\alpha,x)$ , with jobs of spatial sizes up to scale $S_n$ , decays geometrically in the spatial distance $\left\vert x \right\vert$ under the scale $S_{n}$ . This is not too surprising, because $S_{n}^{\alpha}$ is the scale of seeing a job of spatial size $S_{n}$ containing a particular spatial point, and it is much larger than $S_{n-1}^{\alpha}$ , which is the scale for the arrival of a job of strictly smaller spatial size containing a particular site. We expect the speed of the spatial growth for $\mathcal{R}_t$ to be of scale at most $\frac{S_{n-1}}{S_{n-1}^{\alpha}} = S_{n-1}^{1-\alpha}$ when jobs of size at most $S_{n-1}$ are allowed, while connecting two points of distance $S_n$ by a job of spatial size $S_n$ gives us a speed of scale at least $S_n^{1-\alpha}$ , which is much larger. As a consequence, we expect the speed of the spatial growth for $\mathcal{R}_t$ to be of scale $S_{n}^{1-\alpha}$ , and the geometric decay in i then follows from large-deviation estimates. The harder part of Proposition 4.1 is to show that $q_n(\lambda)< c_d \lambda$ uniformly in n, when $\lambda$ is small. Recall that when A is large, $S_n$ grows fast enough in n from (3.8): for all $n\in \mathbb{N}$ ,

\begin{equation*} \sum_{j\leq n} 10 S_j^{1-\alpha} <S_{n+1}^{1-\alpha},\end{equation*}

which implies that the ratio $\frac{5S_{n+1}}{8S_n} \geq 3$ when n is large. In fact, (4.8) provides a recursive formula (depending on the dimension d) for an upper bound on $q_n$ in terms of $\lambda$ and $S_n, S_{n+1}$ . In view of (4.8) and (3.8), we get (4.9) when $\lambda$ is small enough, and we also see that the condition (3.8) is not optimal.
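To see why a recursion of the form (4.8) keeps $q_n(\lambda)$ of order $\lambda$ once the exponent $\frac{5S_{n+1}}{8S_n}$ is at least 3, one can iterate it numerically. A minimal sketch, with assumed illustrative constants ($c_1=1$, $C_d=3$, exponent frozen at 3, starting value $q_1=6\lambda$) that are not the paper's actual constants:

```python
# Toy iteration of the recursion (4.8): q_{n+1} = c1*lam + (C_d * q_n)**e.
# Assumed illustrative constants (not the paper's): c1 = 1, C_d = 3, e = 3,
# and starting value q_1 = 6 * lam.
c1, C_d, e = 1.0, 3.0, 3.0
lam = 1e-3
q = 6.0 * lam
qs = [q]
for _ in range(20):
    q = c1 * lam + (C_d * q) ** e
    qs.append(q)

# For small lam the correction term (C_d*q)^3 is negligible, so q_n quickly
# stabilizes near c1*lam: an upper bound uniform in n, as in (4.9).
print(max(qs), qs[-1])
```

For larger $\lambda$ the power term eventually dominates and the iteration blows up, mirroring the smallness requirement $\lambda<\lambda_d$ in the proposition.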

For time $t> S_n^{\alpha}$ , we show a corollary of Proposition 4.1; see Corollary 4.1 below. The inequality (4.10) can be derived using a similar argument to the second step in the proof of Proposition 4.1 below. We give a proof at the end of this section.

Corollary 4.1. Recall that $(S_n)$ satisfies (3.8) when A is large enough. Let $\lambda_d$ be from Proposition 4.1. Then there exist positive constants $ c'_{\!\!d}, \lambda_{d,0}<\lambda_d$ , depending on d, such that $c'_{\!\!d} \lambda_{d,0}<1$ and, when $\lambda <\lambda_{d,0}$ ,

(4.10) \begin{equation} p_n(t,x) \leq \left(c'_{\!\!d} \lambda \right)^{\left({ S_{n}^{-(1-\alpha)}\frac{\left\vert x \right\vert}{t} -1}\right) } \end{equation}

for all $t>S_n^{\alpha}$ .

To prove Proposition 4.1, we use two steps involving similar ideas. The first step is induction on i, which measures the spatial distance between $(0,\mathbf{0})$ and $(S_{n}^\alpha,x)$ under the scale $S_{n}$ . We divide the lattice $\mathbb{Z}^d$ into cubes with side length $S_{n}$ . In order for an n-admissible path $\gamma^{(n)}$ to connect $(0,\mathbf{0})$ and a point $(S_{n}^\alpha,x)$ , the n-admissible path must pass through a sequence of neighboring cubes of length at least i. Therefore, there are at most $C_d^i$ (for example, $C_d= 3^d$ ) different sequences of cubes, and for each fixed sequence, we bound the probability that there is an n-admissible path connecting them. We use a stopping-time argument to reduce the problem to the case when $i=2$ ; see (4.11) below. From (4.11) and union bounds, we obtain that the probability decays geometrically in i; see (4.7). The second step is to prove (4.11) and to obtain a recursive formula for $q_n(\lambda)$ ; see (4.8). We use induction on n this time. We again divide $\mathbb{Z}^d$ into cubes with side length $S_{n}$ and consider a sequence of cubes connected by an n-admissible path. In both steps, the stopping times (which are the first times at which the next cubes are connected) are bounded by $S_n^{\alpha}$ , but the total time is different in each step: in the first step, it is $S_n^{\alpha}$ ; in the second step, it is $S_{n+1}^\alpha$ . This results in different constants due to different counting, and different bounds for the probability of a fixed n-connecting sequence of cubes; see (4.18) and (4.28) below.

Proof of Proposition 4.1. Let n be fixed. We can always extend an n-admissible path $\gamma^{(n)}$ on [0, t] to an n-admissible path $\hat{\gamma}^{(n)}$ on $[0,s+t]$ , for some $s>0$ , by letting $\hat{\gamma}^{(n)}$ be constant after time t:

\[ \hat{\gamma}^{(n)}(u) = \begin{cases}\gamma^{(n)}(u)& \text{ if $u<t$}, \\[5pt] \gamma^{(n)}(t)& \text{ if $u\geq t$}. \end{cases} \]

Hence, as claimed above, $p_n(t,x)$ is increasing in t, and we only need to estimate $p_n\!\left(S_{n}^{\alpha},x\right)$ . We divide this into two steps.

  Step 1. We first assume the following statement (see (4.11) below) but defer its proof to the second step. There is a $\lambda_d >0$ such that for any $\lambda<\lambda_d$ , we have a constant $A_n(\lambda)<\frac{1}{4\cdot 9^d}$ , depending on $\lambda$ , such that

    (4.11) \begin{equation} p_n(t,x) \leq A_n{(\lambda)} \end{equation}
    for all $\left\vert x \right\vert \in [\frac{3}{2} S_{n}, \frac{5}{2}S_{n}]$ and $t\leq S_{n}^{\alpha}$ .

    Now we take a point $y \in \mathbb{Z}^d$ with $\left\vert y \right\vert \geq 2k S_{n}$ for some $k \in \mathbb{N}$ , and we estimate $p_n(t,y)$ when $t\leq S_n^{\alpha}$ . By considering centers of cubes of scale $S_{n}$ , we see that if (t, y) and $(0,\mathbf{0})$ are n-connected, then there exist at least $k+1$ time–space points $\{(t_i,x_i)\;:\; i =0,1,\dots, k\}$ , including $(t_0,x_0)=(0,\mathbf{0})$ and $(t_k,x_k)=(t,y)$ , such that

    (4.12) \begin{equation} t_{i+1} >t_i, \quad x_i \in \left(S_{n}\mathbb{Z}\right)^d \text{ for $i=0,\dots, k-1$}, \end{equation}
    (4.13) \begin{align} \left\vert x_{i+1} - x_i \right\vert = 2S_{n} \in \!\left[\frac{3}{2} S_{n},\frac{5}{2} S_{n}\right], \end{align}
    and
    (4.14) \begin{equation} (t_i,x_i) \text{ and $(t_{i+1},x_{i+1})$ are} \textit{n}\text{-connected.} \end{equation}
    By (4.12)–(4.13), there are at most $5^{dk}$ such sequences of centers of cubes, $(x_i)_{i=1}^{k-1}$ , and these sequences are deterministic. For each deterministic sequence $(x_i)^{k}_{i=0}$ , we inductively define an increasing sequence of (bounded) stopping times $u_i$ , such that the increments are bounded by $S_{n}^\alpha$ and $(u_i,x_i), (u_{i+1},x_{i+1})$ are n-connected if $u_{i+1} -u_i <S_{n}^\alpha $ :
    (4.15) \begin{align} u_0 &\;:\!=\;0, \end{align}
    (4.16) \begin{equation} u_{i+1} \;:\!=\; \min\!\left( u_i + S_{n}^{\alpha},\; \inf\!\left\{s> u_i\;:\; (u_i,x_i) \text{ and $(s,x_{i+1})$ are $n$-connected}\right\} \right), \end{equation}
    for $i=0,\dots, k-1$ . The inequality (4.11) indicates that each difference $u_{i+1}-u_{i} $ stochastically dominates a random variable $S_n^{\alpha}B_i$ ,
    (4.17) \begin{equation} u_{i+1}-u_{i} \geq S_n^{\alpha}B_i, \end{equation}
    where $B_i$ is a Bernoulli random variable with $\mathbb{P}(B_i=1) = 1-A_n$ and $\mathbb{P}(B_i=0) = A_n$ . Then we use the strong Markov property of Poisson arrivals and properties of binomial random variables to get
    (4.18) \begin{equation} \mathbb{P}\!\left( u_{i+1}-u_{i} < S_{n}^{\alpha} \ \text{for all } i=0,\dots,k-1 \right) \leq \mathbb{P}\!\left( \sum_{i=0}^{k-1} B_i = 0 \right) = A_n^{k}. \end{equation}
    Therefore, for $\left\vert y \right\vert \geq 2k S_{n}$ and $t\leq S_{n}^{\alpha}$ , we bound $p_n(t,y)$ by
    (4.19) \begin{equation} p_n(t,y) \leq 5^{dk} A_n^{k} = q_n^{2k}. \end{equation}
    In particular, $q_n(\lambda) \;:\!=\; \left(5^d A_n(\lambda)\right)^{\frac{1}{2}} < \left(\frac{5^d}{4\cdot 9^d}\right)^{\frac{1}{2}} <\frac{1}{2}$ , by (4.11).
  Step 2. In fact, (4.11) is not surprising, since with positive probability $1-A_n$ there are no jobs of sizes $S_j$ , $j\leq n$ , with centers arriving in the time–space set $E_n\;:\!=\;\{(t,x)\;:\; S_n<\left\vert x \right\vert < 3S_n, t< S_{n}^{\alpha} \}$ . The harder part is to show that there is a $\lambda_d>0$ independent of n, such that $A_n(\lambda)$ can be bounded uniformly in n when $\lambda < \lambda_d$ . We use induction on n and obtain a recursive formula for bounds of $A_n$ in terms of $\lambda$ ; see (4.29) below.

    (a) The case when $n=1$ is obvious. Since the number of jobs of size $S_1$ with centers arriving in the time–space set $E_1$ is a Poisson random variable, we get by (4.1)

      \[ \mathbb{P}\!\left(\text{no job arrivals in $E_1$}\right) = \exp\!(\!-\!\lambda S_1^{-(d+\alpha)}|E_1|). \]
      Therefore, (4.11) holds for $n=1$ , and there is a constant $\Lambda_d$ such that if ${\lambda<\Lambda_d}$ , then
      (4.20) \begin{equation} A_1(\lambda) = 1- \exp\!(\!-\!\lambda S_1^{-(d+\alpha)}|E_1|) \leq {6^d} \lambda, \end{equation}
      which converges to 0 as $\lambda$ goes to 0.
    (b) Assume that there is a positive $\lambda_d\leq \Lambda_d$ such that for all $n\leq k$ , (4.11) holds. We also assume (3.8). We use an argument similar to that of Step 1.

      Let $\left\vert x \right\vert \in [\frac{3}{2} S_{k+1}, \frac{5}{2}S_{k+1}]$ . We get analogues of (4.12), (4.13), and (4.14). Notice that if (t, x) and $(0,{\mathbf{0}})$ are $(k+1)$ -connected, then either there is a job of spatial size $S_{k+1}$ with center in the time–space set $E_{k+1}$ , or there are at least $L+1=\left\vert x \right\vert(2S_{k})^{-1}+1$ time–space points $(t_i,x_i)$ in $E_{k+1}$ , $i=0,\dots, L$ , that are k-connected. In particular, for $i=0,\dots, L-1$ ,

      (4.21) \begin{equation} t_{i+1} >t_i, \quad x_i \in \!\left(S_{k}\mathbb{Z}\right)^d, \end{equation}
      (4.22) \begin{equation} \left\vert x_{i+1} - x_i \right\vert = 2S_{k}, \end{equation}
      and the event
      (4.23) \begin{align} G_i(s)=& \left\lbrace (t_i, x_i) \text{ and $(s,x_{i+1})$ are} \textit{k}\text{-connected} \right\rbrace \end{align}
      occurs for $s=t_{i+1}$ .

      The probability of the first event is bounded by

      (4.24) \begin{align} \mathbb{P}\!\left(\text{a job of size $S_{k+1}$ arrives in } E_{k+1}\right) = 1- \exp\!\left(-\lambda S_{k+1}^{-(d+\alpha)}\left\vert E_{k+1} \right\vert\right) \leq {6^d} \lambda. \end{align}

      For the second event, we use arguments similar to those of Step 1. On the one hand, from (4.21) and (4.22), we have at most $5^{dL}$ deterministic sequences $(x_j)$ . On the other hand, for each consecutive pair $x_i, x_{i+1}$ , we consider the probability that (4.23) happens for some $s= t_{i+1} \leq t_i+ S_k^{\alpha}$ . By the induction hypothesis, the probability is bounded above by

      \[ \mathbb{P} \!\left( G_i{(s) {\text{ occurs for some }} s\leq t_i+{S^{\alpha}_{k}} }\right) \leq A_{k}(\lambda). \]
      Therefore, we define a sequence of $L+1$ stopping times $(u'_{\!\!i})_{i=0}^L$ recursively: $u'_{\!\!0}=0$ ,
      (4.25) \begin{equation} u'_{\!\!i} \;:\!=\; \inf\{u'_{\!\!i-1}< s \leq S_{k+1}^\alpha\;:\; (u'_{\!\!i-1}, x_{i-1}) \text{ and $(s,x_{i})$ are} \textit{ k}\text{-connected}\}, \end{equation}
      for $i=1,\dots,L$ . We get that each difference $u'_{\!\!i} - u'_{\!\!i-1} $ stochastically dominates a random variable $S_k^{\alpha}B'_{\!\!i}$ ,
      (4.26) \begin{equation} u'_{\!\!i} - u'_{\!\!i-1} \geq S_k^{\alpha}B'_{\!\!i}, \end{equation}
      where $B'_{\!\!i}$ is a Bernoulli random variable with
      (4.27) \begin{equation} \mathbb{P}(B'_{\!\!i}=0) = A_k(\lambda). \end{equation}
      Hence, for each deterministic sequence $(x_j)_{j=0}^{L+1}$ , we get from the Markov property that
      (4.28) \begin{align} &\mathbb{P}\!\left(\text{exists an increasing sequence $(t_i)_{i=0}^{L}$ such that $G_i(t_{i+1})$ occurs for all $i\leq L-1$} \right) \notag \\[5pt] \leq& \mathbb{P}\!\left( \sum_{i=1}^{L} (u'_{\!\!i}-u'_{\!\!i-1}) \leq S_{k+1}^{\alpha} \right) \leq \mathbb{P}\!\left( \sum_{i=1}^{L-1} B'_{\!\!i} \leq L^{\alpha} \right) \leq {\left(6 A_k(\lambda) \right) ^{\frac{L}{2}}}, \end{align}
      where the last inequality is from applying the Markov inequality to a binomial random variable B(N, p): when $a>0$ , we have
      \begin{align*} &\mathbb{P}(B({N},1-p)\leq {K}) =\mathbb{P}(B({N},p)\geq {N-K}) \\[5pt] &\leq \mathbb{E}[e^{aB({N},p)}]e^{-a({N-K})} \leq \exp\!\left((e^{{a}}-1)Np - a(N-K) \right). \end{align*}
      When N is large (such that $N^\alpha<\frac{N}{2}$ ), for any $K\leq N^\alpha$ and $p<\frac{1}{2}$ , we bound the last term by taking $a = -\ln(2p)>0$ :
      \[ \exp{ N\!\left((e^{a}-1)p-\frac{a}{2}\right)} = \exp N\!\left(\frac{\ln(2p)}2 +\left(\frac{1}{2}-p\right) \right) \leq \!\left(6 p\right)^{\frac{N}{2}} . \]

      By (4.24) and (4.28), we have (from the union bound) that

      (4.29) \begin{equation} p_{k+1}(t,x) \leq {6^d}\lambda + 5^{dL}\!\left(6 A_k(\lambda)\right)^{\frac{L}{2}} \;=\!:\;A_{k+1}(\lambda). \end{equation}
      Choosing $q_k(\lambda)= A_k^{\frac{1}{2}}(\lambda)$ for all k, we get (4.8). Combining this with (4.20), we also see from (4.29) that there are positive constants $\lambda_d\leq \Lambda_d$ and $c_d$ depending only on d such that $A_{k+1}(\lambda) \leq c_d \lambda <1$ when $\lambda<\lambda_d$ . Therefore, (4.9) follows from induction.
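The elementary estimate $\mathbb{P}(B(N,1-p)\leq K) \leq (6p)^{N/2}$ used at the end of the proof can be checked numerically against the exact binomial tail. A minimal sketch; the values of N, p, K below are assumptions, chosen so that $K\leq N^{1/2}<\frac{N}{2}$ :

```python
from math import comb

def binom_tail_le(N, prob, K):
    """Exact P(B(N, prob) <= K) for a binomial random variable B(N, prob)."""
    return sum(comb(N, k) * prob ** k * (1 - prob) ** (N - k) for k in range(K + 1))

# Check P(B(N, 1-p) <= K) <= (6p)^{N/2} with illustrative N, p and K <= sqrt(N) < N/2.
N, p = 100, 0.05
K = int(N ** 0.5)            # K = 10, far below N/2 = 50
exact = binom_tail_le(N, 1 - p, K)
bound = (6 * p) ** (N / 2)   # (0.3)^50, roughly 7e-27
print(exact <= bound)        # the exact tail is in fact many orders of magnitude smaller
```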

We use an argument similar to the second step above to prove Corollary 4.1.

Proof of Corollary 4.1. Without loss of generality, we can assume that x and t are of the form ${\left\vert x \right\vert = 2k m S_n}$ , $t= m S_n^\alpha$ , for some positive integers k, m. We first notice that the term $ S_{n}^{-(1-\alpha)}\frac{\left\vert x \right\vert}{t} $ in the exponent of (4.10) is of order k, so we are aiming to obtain probability bounds which are geometric in k. Notice also that if (t, x) and $(0,\mathbf{0})$ are n-connected, then there exist at least 2m consecutive time–space points that are n-connected, and every two consecutive points are $kS_n$ apart. Therefore, we consider the collection of 2m (deterministic) points

\[ \mathcal{E}_m = \{ (x_i)_{i=0}^{2m}\;:\; x_0=0, x_i \in \!\left({ S_n}\mathbb{Z}\right)^d, \left\vert x_{i}- x_{i-1} \right\vert = kS_n ,\text{ for all} \textit{i} \}. \]
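Indeed, with $\left\vert x \right\vert = 2kmS_n$ and $t= m S_n^\alpha$ , the exponent appearing in (4.10) is exactly linear in k:

\[ S_{n}^{-(1-\alpha)}\,\frac{\left\vert x \right\vert}{t} \;=\; S_{n}^{-(1-\alpha)}\cdot\frac{2km\, S_{n}}{m\, S_{n}^{\alpha}} \;=\; 2k. \]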

Notice that there is a constant $C_d$ depending only on d such that the cardinality of $\mathcal{E}_m$ is bounded by

(4.30) \begin{equation} \left\vert \mathcal{E}_m \right\vert \leq \!\left(\frac{C_d {k^{d-1}}}{4} \right)^m. \end{equation}

For each sequence of $(x_i)_{i=0}^{2m} \in \mathcal{E}_m$ , we define a sequence of stopping times inductively: $u_0 =0 $ , and

\[u_i \;:\!=\; \inf\{s \geq u_{i-1}\;:\; \text{$(s,x_i)$ and $(u_{i-1},x_{i-1})$ are} \textit{ n}\text{-connected} \}. \]

By Proposition 4.1, $u_{i} - u_{i-1}$ stochastically dominates the random variable $S_n^{\alpha}B_i$ , where $B_i$ is a Bernoulli random variable with

\[1-p=\mathbb{P}(B_i=0) = q_n(\lambda)^{k-1},\]

and

\[ q_n(\lambda)< c_d \lambda<1 \]

when $\lambda<\lambda_d$ , for some $\lambda_d$ depending only on d.

Therefore, we use the union bound, the Markov property, and (4.30) to conclude that there exist some constants $c'_{\!\!d}$ and $\lambda_{d,0}<\min\{\lambda_d,\frac{1}{c_d'}\}$ , depending on the dimension d, such that when $\lambda<\lambda_{d,0}$ ,

(4.31) \begin{align}p_n(t,x) &\leq \sum_{ (x_i) \in \mathcal{E}_m} \mathbb{P}\!\left(\exists (u_i)_{i=0}^{2m}\;:\;u_{i-1}< u_i< t, \text{$(u_{i},x_{i})$ and $(u_{i-1},x_{i-1})$ are} \textit{n}\text{-connected for all} \textit{i} \right) \notag \\[5pt] &\leq \sum_{ (x_i) \in \mathcal{E}_m} \mathbb{P}\!\left( B(2m, p) \leq m \right) \notag \\[5pt] &\leq \!\left\vert \mathcal{E}_m \right\vert \cdot \mathbb{P}\!\left( B(2m, q_n(\lambda)^{k-1}) \geq m \right) \leq \!\left(C_d {k^{d-1}} q_n(\lambda) ^{(k-1)}\right)^m \leq (c'_{\!\!d} \lambda)^{k-1}, \end{align}

where in the last line we use the classical result for a binomial variable B(2n, q) with parameters q and n,

\[\mathbb{P}\!\left(B(2n,q) \geq n\right)\leq \sum_{k= n}^{2n} \binom{2n}{k} q^k \leq 2^{2n} q^n. \]

It is then standard to derive (4.10) from (4.31) by removing the assumption that $\left\vert x \right\vert = 2k m S_n$ and $t= m S_n^\alpha$ for positive integers k and m.
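The classical binomial bound invoked in the last display, $\mathbb{P}\!\left(B(2n,q)\geq n\right)\leq 2^{2n}q^n$ , can also be checked numerically; a minimal sketch in which the pairs (n, q) are arbitrary illustrative choices:

```python
from math import comb

def binom_tail_ge(n2, q, m):
    """Exact P(B(n2, q) >= m) for a binomial random variable B(n2, q)."""
    return sum(comb(n2, k) * q ** k * (1 - q) ** (n2 - k) for k in range(m, n2 + 1))

# Check P(B(2n, q) >= n) <= 2^{2n} q^n for a few illustrative pairs (n, q).
for n, q in [(5, 0.1), (10, 0.1), (10, 0.2)]:
    exact = binom_tail_ge(2 * n, q, n)
    bound = 2 ** (2 * n) * q ** n
    assert exact <= bound
    print(f"n={n}, q={q}: exact={exact:.3e} <= bound={bound:.3e}")
```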

5. Exponential moments of the maximal numbers

The main object of this section is to estimate the exponential moments of the maximal numbers of jobs with temporal size $T_j$ along n-admissible paths. The exponential moments help us in bounding the probabilities of events $A^c_{t,j}$ ; see (3.12). Let

(5.1) \begin{equation} X^{(n)}_{s,t;x,j} = \sup n_j(\gamma^{(n)}_{s,t}),\end{equation}

where the supremum is taken over all n-admissible paths $\gamma^{(n)}_{s,t}$ with initial point (s, x). Since we are mostly interested in the case when j is fixed, and $s=0$ , we focus on $X^{(n)}_{t} = X^{(n)}_{0,t;x,j}$ . Recall that when A is large enough, $(T_n ,S_n)$ satisfy the properties (3.2)–(3.8). As in (4.1), we continue to assume $c_1=1$ .

We use induction on n. There are two types of estimates for $X^{(n)}_{t} = X^{(n)}_{0,t;x,j}$ , which are exponential moments tailored to different job sizes. The first estimate (see Proposition 5.1) is the base case. It says that the maximal number of jobs with temporal size $T_j$ grows linearly with scale $T_j^{-(1+\delta)}$ , when only jobs of spatial size at most $S_j$ are in the system. The second estimate (see Proposition 5.2) is the inductive step. It implies that the growth of the maximal number is not affected much by jobs of spatial size larger than $S_{j+1}$ , when $S_{j+1}$ is large enough; see (5.5) below. We remark that as spatially large jobs are involved, the parameter of the exponential moments actually decreases. The proof of Proposition 5.1 is at the end of Subsection 5.1, and the proof of Proposition 5.2 is at the end of Subsection 5.2. Recall from Corollary 4.1 that $\lambda_{d,0}$ is a constant depending only on d.

Proposition 5.1. (Leading order.) Recall that $\delta>0$ . Assume that $T_j$ and $(S_n)_{n\leq j}$ satisfy (3.7) and (3.8). Then there exist positive constants $\lambda_{d,1}<\lambda_{d,0}$ , $C'_{d,1}$ , depending only on d, such that for any $\lambda<\lambda_{d,1}$ , for any $t>0$ ,

(5.2) \begin{equation} \mathbb{E}\!\left[e^{ X^{(j)}_t} \right] \leq e^{ b t}, \end{equation}

where $b{=C'_{\!\!d,1}} T_j^{-(1+\delta)} $ .

We fix a constant $K = \frac{d}{\alpha \kappa} > \max(\frac{d}{\alpha},3)$ , and put $g_n \;:\!=\; K^{-n}$ for all $n\geq 0$ . The second proposition says that we can estimate exponential moments of $X_t^{(n+1)}$ given an estimate for $X_t^{(n)}$ , at a cost of reducing the exponents by a factor of $K^{-1}$ and increasing the factor from $b_n$ to $b_{n+1}$ by a small amount; see (5.5) below.

Proposition 5.2. (Influence from large jumps.) Recall that $g_1 = K^{-1} < \frac{\alpha}{d}$ . Assume that for any $t>0,$ $X^{(n)}_t$ satisfies

(5.3) \begin{equation} \mathbb{E}\!\left[e^{ a X^{(n)}_t} \right] \leq e^{a b_n t} \end{equation}

for some $b_n>0$ and some $a \in (0,1]$ . Then, when $\lambda < \lambda_{d,0}$ , there exists some positive constant $C_K$ , depending on d, such that

(5.4) \begin{equation} \mathbb{E}\!\left[e^{{a}g_{{1}} X^{(n+1)}_t} \right] \leq e^{{a}g_{1} b_{n+1} t}, \end{equation}

where $b_{n+1}$ is a constant satisfying

(5.5) \begin{equation} b_{n+1}-b_n = C_K\cdot {a^{-1}K}\cdot S_{n+1}^{-(\alpha - {d}K^{-1})}. \end{equation}

Remark 5.1. Recall that in (3.9) and (5.1), a job with sizes $(R,\tau)=(S_{k+j}, T_j)$ , for some $k\leq n$ , is counted in $n_j(\gamma^{(n)}_{s,t})$ if its arrival $(x, t')$ satisfies (1.5),

\[\gamma^{(n)}_{s,t}(t'-),\gamma^{(n)}_{s,t}(t') \in B(x,S_{k+j}).\]

It is possible that the change in $\gamma^{(n)}_{s,t}$ at time $t'$ ,

\[\left\vert \gamma^{(n)}_{s,t}(t'-)-\gamma^{(n)}_{s,t}(t') \right\vert ,\]

is small compared to $S_{k+j}$ , or there is no spatial change,

\[\gamma^{(n)}_{s,t}(t'-)=\gamma^{(n)}_{s,t}(t').\]

The conclusion of Proposition 5.2 covers these two cases.

Using induction and (2.3) for a concrete example of $(S_n,T_n)$ , we get the following corollary from Propositions 5.1 and 5.2. This corollary says that a small exponential moment of $X^{(j+n)}_t$ grows linearly in t under the scale $T_j^{-(1+\delta)}$ . Since the sequence $(b_{n+j})$ is increasing, we get an upper bound $b_j'$ from (5.2), (5.5), and (3.5).

Corollary 5.1. Recall that $\kappa < \min\!\left(\frac{1}{8}, \frac{\delta}{4(1+\delta)} \right) $ and $K= \frac{d} {\alpha\kappa} \in (3,\rho)$ . Under the assumptions of Proposition 5.1, if we have $(S_{n+j})_{n\geq 1}$ with (3.5) and (3.8), then there are constants ${b'_{\!\!j}}={(C'_{d,1}+C_K)} T_j^{-(1+\delta)}$ and $\lambda_{d,1}>0$ such that for any $t>0$ , and $g_n = \left(\frac{\kappa}{d}\right)^{n}\cdot \alpha^{n} < \alpha^{n}$ ,

(5.6) \begin{equation} \mathbb{E}\!\left[e^{g_n X^{(j+n)}_t} \right] \leq e^{g_n {b'_{\!\!j}} t}, \end{equation}

when $\lambda<\lambda_{d,1}$ .

Proof. The proof is elementary. From Propositions 5.1 and 5.2, there exists an increasing sequence $(b_{j,n})_{n\geq 1}$ such that (5.6) holds for any $t>0$ if we replace $b'_{\!\!j}$ by $b_{j,n}$ for any $n\geq 1$ . In particular, this sequence $(b_{j,n})_{n\geq 1}$ satisfies (5.5) for all $n\geq 0$ when $b_{j,0}=b$ , where b is from (5.2). We only need to verify that

(5.7) \begin{equation} {b'_{\!\!j}} = \lim_{n\to \infty} b_{j,n} <{\left(C'_{d,1}+C_K\right)} T_j^{-(1+\delta)}.\end{equation}

We sum the differences and get

\[ {b'_{\!\!j}}-{b_{j,0}} < \sum_{n>j} C_K \varrho^{2(n+1)} S_{n+1}^{-\alpha(1-\kappa)}, \]

which is bounded by ${C_K} T_j^{-(1+\delta)}$ from (3.5). Therefore, (5.7) follows from (5.2).

5.1. Small jumps and leading order

We now prove Proposition 5.1 using the tools introduced in Section 4. The proof of Proposition 5.1 works for any dimension $d\geq 1$ . The argument can also be extended to deal with $(\alpha,\beta)$ in Region V. For a fixed j, we drop the subscript j and choose the increment of time $T = T_j^{1+\delta}$ . The choice of $T = T_j^{1+\delta}$ is only used in Subsection 5.1; in Subsection 5.2, we use ${T} = S_{n+1}^{\alpha}$ .

Before giving the detailed proof, we summarize the steps. We use an elementary inequality, which can be seen as a generalization of a union bound. If we have a (deterministic) countable collection I of nonnegative random variables $Z_i$ , then the exponential moment of the maximum is bounded above by the sum of the exponential moments,

(5.8) \begin{equation} \mathbb{E}\!\left[ \sup_{i\in I} e^{Z_i} \right] \leq \sum_{i\in I} \mathbb{E}\!\left[e^{Z_i}\mathbf{1}_{\{Z_i\neq 0\}}\right]+{\mathbb{P}(\sup_{i \in I} Z_i=0)} \leq \sum_{i\in I} \mathbb{E}\!\left[e^{Z_i}\right].\end{equation}
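The first inequality in (5.8) comes from a pointwise bound: since every $Z_i\geq 0$ ,

\[ \sup_{i\in I} e^{Z_i} \;\leq\; \sum_{i\in I} e^{Z_i}\mathbf{1}_{\{Z_i\neq 0\}} \;+\; \mathbf{1}_{\{\sup_{i\in I} Z_i=0\}}, \]

because on the event $\{\sup_{i} Z_i=0\}$ the left-hand side equals 1, while otherwise the supremum is attained along indices with $Z_i\neq 0$ and is bounded by the sum of those (positive) terms; taking expectations gives (5.8).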

In our case, we want to encode j-admissible paths by collections of time–space boxes whose centers are chosen according to certain rules; see (5.13), (5.14), (5.15), and (5.18) below. The way to choose centers and boxes is similar to that in the proof of the classical Vitali covering lemma or in a greedy lattice animal argument. Then, to apply (5.8), we take $Z_i$ to be the total number of jobs with temporal size $T_j$ inside the collection of time–space boxes, and take I to be the set of all sequences of chosen centers. On the one hand, we bound the exponential moment of $Z_i$ by using Poisson random variables and an upper bound on the probability of connecting two time–space points from Corollary 4.1. The exponential moment decays geometrically in the number of centers $m_k$ , at a rate which is linear in $\lambda$ ; see (5.17) below. On the other hand, we count the number of different ways of obtaining these collections of time–space boxes, which is exponential in the number of centers. The exponential rate for the number of different ways depends only on the dimension d, not on $\lambda$ . Therefore, we can estimate the exponential moment by applying (5.8) when $\lambda$ is sufficiently small; see (5.22) below. In particular, the argument and the bounds are independent of the index j.

Proof of Proposition 5.1. We fix time to be $t= L\cdot T$ , for some large integer L. Notice that $X^{(j)}_t$ is superadditive: we have

\begin{align*} \mathbb{E}\!\left[e^{ X^{(j)}_{r+s}} \right] \geq \mathbb{E}\!\left[e^{ X^{(j)}_r} \right] \mathbb{E}\!\left[e^{ X^{(j)}_s} \right]\end{align*}

for any $r,s\geq 0$ . If we can show that (5.2) holds for a large time $t= L \cdot T$ , then (5.2) also holds for any positive time. We estimate $\mathbb{E}\!\left[e^{ X^{(j)}_t} \right]$ by considering sequences of (deterministic) time–space boxes which cover j-admissible paths.

First, we partition the time–space set $[0,t] \times \mathbb{Z}^d$ into boxes of the form $(s,x)+U_j$ , where

(5.9) \begin{equation} U_j = [0,{T}]\times[\!-\!{S^{1-\alpha}_jT},{S^{1-\alpha}_jT}]^d, \quad \text{ and } \quad (s,x) \in \!\left({T\cdot} \mathbb{N} \right)\times\left(2S_j^{1-\alpha} T\cdot \mathbb{Z}\right)^d . \end{equation}

Each box has a time–space volume $\left\vert U_j \right\vert= { (2S_j^{1-\alpha})^d \cdot T^{d+1}}$ , and the number of jobs with temporal size $T_j$ and ‘centers’ of arrivals in the box is a Poisson random variable with mean

\begin{align*} \lambda2^d c_2 T_j^{-(1+\beta)} \left(S_j^{1-\alpha}T \right)^d\cdot T, \end{align*}

where $c_2$ is the normalizing constant from Theorem 2.1. By the condition (3.7), the mean is strictly smaller than

(5.10) \begin{equation} \lambda 2^d {c_2}= b_j {\cdot T < 1}, \end{equation}

where $b_j = {\lambda 2^d {c_2} T_j^{-(1+\delta)}<T_j^{-(1+\delta)}}$ when $\lambda$ is strictly smaller than $\frac{1}{2^d c_2}$ .
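The numerical identity in (5.10) is simply the choice $T=T_j^{1+\delta}$ made at the start of this subsection:

\[ b_j \cdot T \;=\; \lambda 2^d c_2\, T_j^{-(1+\delta)} \cdot T_j^{1+\delta} \;=\; \lambda 2^d c_2 \;<\; 1 \quad \text{whenever } \lambda < \frac{1}{2^d c_2}. \]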

We then have encodings of paths: given a generic j-admissible path $\gamma^{(j)}$ on the time interval $I_k= [k{T}, (k+1){T}]$ , first we cover $\gamma^{(j)}$ by a collection of boxes of the form (5.9); then we extract a subcollection of disjoint boxes; and lastly we cover the original collection of boxes by enlarging boxes in the subcollection. More precisely, we do the following. Since the spatial sizes of jobs in a j-admissible path are at most $S_j$ , we can find an integer $m_k\geq 1$ and a sequence of $m_k$ (stopping) times $s_{i,k}\in I_k $ , such that the distances between two consecutive points on $\gamma^{(j)}$ at these times are between $4{S^{1-\alpha}_jT}$ and $5{S^{1-\alpha}_jT}$ :

(5.11) \begin{align} s_{0,k}&= k\cdot T, \end{align}
(5.12) \begin{align} s_{i,k}&= \min\!\Big((k+1)\cdot T, \inf\!\Big\{{ s_{i-1,k} \leq u \leq (k+1) T}: \notag \\[5pt] & 4{S^{1-\alpha}_jT} \leq \!\left\vert {\gamma^{(j)}}(u) -{\gamma^{(j)}}({s_{i-1,k}}) \right\vert \leq 5{S^{1-\alpha}_jT} \Big\}\Big), \end{align}

for $i =1,2,\dots, m_k$ . For each time $s_{i,k}$ , $k=1,2,\dots,m_k$ , we can choose a center $x_{i,k} \in \!\left(2{S^{1-\alpha}_jT} \cdot \mathbb{Z}\right)^d$ such that the center is $S^{1-\alpha}_j T$ -close to $\gamma^{(j)}(s_{i,k})$ ,

(5.13) \begin{equation} \left\vert x_{i,k} - \gamma^{(j)} (s_{i,k}) \right\vert \leq S^{1-\alpha}_jT. \end{equation}

If there is more than one point in $\left(2{S^{1-\alpha}_jT} \cdot \mathbb{Z}\right)^d$ satisfying (5.13), we can choose according to certain rules since there are only finitely many such points. With these centers $\mathbf{x}_k=(x_{i,k})_{{i=1}}^{m_k}$ , and (5.12)–(5.13), we obtain a covering $U_{\mathbf{x}_k}$ of the admissible path $\gamma^{(j)}$ on the time interval $I_k$ by taking unions of collections of boxes with side lengths $10S^{1-\alpha}_jT$ ,

(5.14) \begin{equation} U_{\mathbf{x}_k} \;:\!=\; \bigcup_{i=1}^{m_k} \!\left((k{T},x_{i,k}) + [0,{T}]\times[\!-\!5{S^{1-\alpha}_jT},5{S^{1-\alpha}_jT}]^d \right) \supset \gamma^{(j)}(I_k). \end{equation}

Therefore, on the time interval $I_k$ , $\gamma^{(j)}$ corresponds to an encoding, which consists of $m_k$ centers $\mathbf{x}_k=(x_{i,k})_{i=1}^{m_k}$ in $\left(2{S^{1-\alpha}_jT} \cdot \mathbb{Z}\right)^d$ .

For every encoding $\mathbf{x}_k=(x_{i,k})_{{i=1}}^{m_k}$ on $I_k$ , we denote by $M_{\mathbf{x}_k}$ the number of arrivals of jobs with temporal size $T_j$ inside $U_{\mathbf{x}_k}$ ,

\[ M_{{\mathbf{x}}_k} = {\left\vert \left\{(y,t)\in U_{\mathbf{x}_k}\;:\; (t,T_j,B)\in \Phi_y \text{ for some $B \subset [\!-\!S_j,S_j]^d$} \right\} \right\vert}, \]

and denote by $E_{{\mathbf{x}}_k}$ the event depending on $\mathbf{x}_k=(x_{i,k})_{{i=1}}^{m_k}$ ,

\[ E_{{\mathbf{x}}_k} = \{ \text{the covering $U_{\mathbf{x}_k}$ contains a}\, \textit{j}\,\text{-admissible path $\gamma^{(j)}$ with (5.13)}\}. \]

We consider $\mathbb{E}\!\left[e^{ M_{{\mathbf{x}}_k}} \cdot \mathbf{1}_{E_{{\mathbf{x}}_k}} \right]$ . By (5.10) and (5.14), $M_{\mathbf{x}_k}$ is a Poisson random variable with parameter at most

\[ C_d m_k b_j {\cdot T}, \]

for some constant $C_d$ depending on the dimension. From (5.12), (5.13), and the fact that $\left(x_{i,k}\right)\subset \!\left(2{S^{1-\alpha}_jT}\cdot \mathbb{Z}\right)^d$ , we have that the distance $\left\vert x_{i,k} - x_{i-1,k} \right\vert $ between two consecutive centers is one of the numbers

(5.15) \begin{equation} 4{S^{1-\alpha}_jT} \quad \text{ or } \quad 6{S^{1-\alpha}_jT}, \end{equation}

for all $i<m_k$ , and it can be

(5.16) \begin{equation} r \cdot {S^{1-\alpha}_jT}, \end{equation}

where $r=0,1,2,3$ , when $i=m_k$ . Although these $m_k$ centers may not be distinct, there is a sequence of ${m_k-1}$ points in $ I_k\times\left(2{S^{1-\alpha}_jT}\cdot \mathbb{Z}\right)^d $ such that every two consecutive points are j-connected with spatial distance at least $4{S^{1-\alpha}_jT}$ . Therefore, from Corollary 4.1 and (5.15) we deduce that the probability $\mathbb{P}(E_{\mathbf{x}_k})$ is bounded above as follows: for all $\lambda<\lambda_{d,0}$ ,

\begin{equation*} \mathbb{P}(E_{\mathbf{x}_k}) \leq {(c'_{\!\!d} \lambda )}^{6{(m_k-1)}}, \end{equation*}

where $c'_{\!\!d},\lambda_{d,0}$ are from Corollary 4.1. By the Cauchy–Schwarz inequality, we have

(5.17) \begin{equation} \mathbb{E}\!\left[e^{ M_{{\mathbf{x}}_k}} \cdot \mathbf{1}_{E_{{\mathbf{x}}_k}} \right] \leq \exp\!\left(\frac{1}{2}(e^{2}-1)C_d m_k {b_j T} \right){(c'_{\!\!d} \lambda )}^{{3(m_k-1)}}. \end{equation}
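The Poisson computation behind (5.17) can be sanity-checked numerically: if $M$ is Poisson with mean $\mu$, then $\mathbb{E}[e^{2M}]^{1/2} = \exp\!\left(\frac{1}{2}(e^2-1)\mu\right)$, which is the first factor above with $\mu = C_d m_k b_j T$. A minimal check, not part of the argument, with an illustrative value of $\mu$ standing in for that rate:

```python
import math

def poisson_mgf(theta, mu, terms=200):
    # E[exp(theta * M)] for M ~ Poisson(mu), summed in log space for stability
    return sum(math.exp(theta * n + n * math.log(mu) - mu - math.lgamma(n + 1))
               for n in range(terms))

mu = 1.5  # illustrative stand-in for the rate C_d * m_k * b_j * T
lhs = math.sqrt(poisson_mgf(2.0, mu))         # E[e^{2M}]^{1/2} by direct summation
rhs = math.exp(0.5 * (math.e ** 2 - 1) * mu)  # closed form used in (5.17)
assert abs(lhs - rhs) / rhs < 1e-9
```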

By choosing $x_{0,k}$ to be the last point in $\mathbf{x}_{k-1} = \left(x_{i,k-1}\right)_{i=0}^{m_{k-1}}$ , for $k=1,\dots, L$ ,

(5.18) \begin{equation} x_{0,0} = {\mathbf{0}}, \text{ and } x_{0,k} = x_{m_{k-1}, k-1}, \end{equation}

we get from (5.15), (5.16), and (5.18) that the number of encodings $(\mathbf{x}_k)_{k=0}^{L-1}$ on the time interval $[0,t] = \bigcup_{k=0}^{L-1} I_k$ is at most

(5.19) \begin{equation} \prod_{k=0}^{L-1} \!\left( {h_d}^{m_k}\right), \end{equation}

for some $h_{d}\geq 2$ depending on the dimension d. Now we apply (5.17) and (5.19) to (5.8). From the independence of $\left(e^{M_{{\mathbf{x}}_k}}\cdot\mathbf{1}_{E_{{\mathbf{x}}_k}}\right)_k$ , we get

(5.20) \begin{align} \mathbb{E}\!\left[e^{ X^{(j)}_{{t}} } \right] \leq& \mathbb{E}\!\left[\sup_{({\mathbf{x}}_k)_{k=0}^{L-1}}\prod_{k=0}^{L-1} \!\left( e^{ M_{{\mathbf{x}}_k}}\cdot \mathbf{1}_{E_{{\mathbf{x}}_k}} \right) \right] \notag \\[5pt] \leq& \sum_{({\mathbf{x}}_k)_{k=0}^{L-1}} \prod_{k=0}^{L-1} \mathbb{E}\!\left[ e^{ M_{{\mathbf{x}}_k}}\cdot \mathbf{1}_{E_{{\mathbf{x}}_k}} \right] \notag \\[5pt] \leq& 1+ \sum_{(m_k) \in \mathbb{Z}_+^L} \exp\!\left(\frac{1}{2}(e^{2}-1)C_d b_j T \cdot \sum_{k=0}^{L-1}{m_k} \right)\! \cdot \notag\\[5pt] &\left({h_d} {(c'_{\!\!d} \lambda )^3}\right)^{\sum_{k=0}^{L-1} {(m_k-1)}}{\left(h_d\right)^L}. \end{align}

From (5.10), we have that $\exp\!\left(\frac{1}{2}(e^{2}-1)C_d b_j \cdot T \right)$ is bounded uniformly in j. Therefore, from (4.9), there exists some $\lambda_{d,1}\leq \lambda_{d,0}$ , depending on d, such that

(5.21) \begin{align} h_d c'_{\!\!d} \lambda_{d,1}\exp\!\left(\frac{1}{2}(e^{2}-1)C_d b_j \cdot T\right) <\frac{1}{2}. \end{align}

As a consequence, when L is large, we use the identity that for $q<1$ ,

\[\sum_{(m_k) \in \mathbb{Z}_+^L} q^{\sum_{k=0}^{L-1}(m_k-1)} = \prod_{k=0}^{L-1} \!\left(\sum_{i=0}^{\infty}q^i \right) = \left(\frac{1}{1-q}\right)^L \]

to bound (5.20) by

(5.22) \begin{align} \left(3 h_d \exp\!\left(\frac{1}{2}(e^{2}-1)C_d b_j \cdot T\right) \right)^L. \end{align}

Noticing that $b_j\cdot T = \lambda 2^d$ , we find a constant $C'_{\!\!d,1} = b \cdot T >0$ depending only on d, and bound (5.22) by

\[ \exp\!( C'_{\!\!d,1} \cdot L ) = \exp\!(b \cdot t). \]
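The series identity used above is easy to verify by brute force: truncating each $m_k$ at a finite cutoff, the sum of $q^{\sum_k (m_k-1)}$ over $(m_k)\in\{1,\dots,\text{cutoff}\}^L$ is within truncation error of $(1-q)^{-L}$. A small sketch, with illustrative values of $q$ and $L$:

```python
import itertools

q, L, cutoff = 0.3, 3, 40
# sum q^{sum_k (m_k - 1)} over all tuples (m_1, ..., m_L) with m_k >= 1
total = sum(q ** sum(m - 1 for m in ms)
            for ms in itertools.product(range(1, cutoff + 1), repeat=L))
# the identity predicts (1/(1-q))^L; the truncation error is of order q^cutoff
assert abs(total - (1.0 / (1.0 - q)) ** L) < 1e-9
```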

5.2. Influence of large jumps

The proof of Proposition 5.2 is similar to that of Proposition 5.1, and it also relies on Corollary 4.1. We divide the proof into a few steps. Throughout this subsection, we use the time scale $T = S_{n+1}^{\alpha}$.

We first introduce a new system by adding to the original system some artificial jobs without any temporal workload at fixed time–space points. We consider the maximal number $Y^{(n+1)}_t$ in the new system,

(5.23) \begin{equation} Y^{(n+1)}_{t} = \sup n_j(\gamma^{(n+1)}_{0,t}),\end{equation}

where the supremum is taken over all $(n+1)$ -admissible paths $\gamma^{(n+1)}_{0,t}$ with initial point ${(0, \mathbf{0})}$ in the new system. Equation (5.23) is the same as (5.1) except that $Y^{(n+1)}_{t}$ is for the new system. Because of the additional artificial jobs, the maximal number $Y^{(n+1)}_t$ stochastically dominates $X^{(n+1)}_t$ in the original system, and waiting times between jobs of size strictly greater than $S_{n}$ are bounded by $S_{n+1}^{\alpha}$ , which gives a natural discretization of the new system in the time scale T. Then we estimate the exponential moment of $Y^{(n+1)}_t$ ,

\begin{align*} \mathbb{E}\!\left[e^{g_{1} Y^{(n+1)}_t} \right]\!,\end{align*}

via an integral formula for a large time ${t= T\cdot L}$ , where L is a large integer. This formula corresponds to a decomposition of $Y^{(n+1)}_t$ according to the occurrences of jobs of spatial size $S_{n+1}$ and artificial jobs. We bound the terms in the formula by (4.10) from Proposition 4.1. By applying a superadditivity argument to $X^{(n+1)}_t$ , we get (5.4) for any small time t.

More precisely, we obtain the following two lemmas. The first lemma says that we can estimate $\mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_t} \right]$ by adding artificial jobs of radius $\frac{S_{n+1}}{4}$ at fixed time–space points (s, x), for $s\in T\cdot \mathbb{N}$ , ${x\in S_{n+1} \mathbb{Z}^d}$ . The maximal number of the new system has an estimate in terms of a function $w_n(s,x)$ . In the second lemma, we use Corollary 4.1 to show that $w_n(s,x)$ decays geometrically in x in the scale $S_{n+1}$ .

For convenience, we say two time–space points (u, x) and (v, y) are n-connected via some $S_{n+1}$ neighbors if there exists an n-admissible path with initial point $(u, x')$ and final point $(v, y')$ such that

(5.24) \begin{equation} {\left\vert x'-x \right\vert , \left\vert y'-y \right\vert } \leq S_{n+1}. \end{equation}

Note that a pair of n-connected points is also n-connected via some $S_{n+1}$ neighbors, but not vice versa; the difference lies in the condition on the endpoints (see (4.2) and (5.24)). Recall that $\lambda_{n+1} = \lambda S_{n+1}^{-(d+\alpha)}$ .

Lemma 5.1. (A time–space integral.) Let $Y^{(n+1)}_t$ be the maximal number when there are additional artificial jobs with radius $\frac{S_{n+1}}{4}$ , temporal size 0, and centers at time–space points (s,x), where (s,x) are time–space points in $S_{n+1}^{\alpha}\mathbb{N}\times S_{n+1}\mathbb{Z}^d $ . Then for a large time ${t= T\cdot L}= S_{n+1}^\alpha L$ , $L\in \mathbb{N}$ , the maximal number satisfies

(5.25) \begin{align} \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_t} \right] \leq \mathbb{E}\!\left[e^{g_{1} Y^{(n+1)}_t} \right] &\leq \sum_{N \in \mathbb{N}} \sum_{(x'_{\!\!i})\in \!\left(\mathbb{Z}^d\right)^{N+L} } (\lambda_{n+1}t)^N e^{-\lambda_{n+1}t}\cdot \notag \\[5pt] \idotsint_{0=t_0<\dots< t_N<t}& \,dt_1dt_2\cdots dt_N \prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},x'_{\!\!i+1}-x'_{\!\!i}), \end{align}

where $x_0 =\mathbf{0} \in \mathbb{Z}^d$ ,

(5.26) \begin{equation} w_n(s,x) = \mathbb{E}\!\left[e^{g_{1} \left(1+ \max_{|y| \leq S_{n+1}} X^{(n)}_{0,s;y}\right)} \mathbf{1}_{\{\text{$(0,\mathbf{0})$ and $(s,x)$ are $n$-connected via some $S_{n+1}$ neighbors}\}} \right], \end{equation}

and $(t'_{\!\!i})_{i=0}^{N+L}$ is the increasing sequence which is the union of $(t_i:i\leq N)$ and multiples of T up to t.

Proof. As the artificial jobs of radius $\frac{S_{n+1}}{4}$ are at fixed time–space points, every $(n+1)$ -admissible path in the original system is also an $(n+1)$ -admissible path in the new system. Therefore, ${Y^{(n+1)}_t \geq X^{(n+1)}_t}$ stochastically. To verify that the integral is an upper bound of $\mathbb{E}\!\left[e^{g_{1} Y^{(n+1)}_t} \right]$ , we use the independence of the arrivals of jobs with different sizes $S_{n+1}$ and $S_k$ , $k\leq n$ .

Recall that ${t=S_{n+1}^\alpha L}$ is a large fixed time. For every realization of arrival processes of jobs with size $S_{n+1}$ , we denote by $\mathcal{C}_{n+1}$ the (random) collection of all finite sequences of consecutive arrivals of (centers of) artificial jobs or jobs of spatial size $S_{n+1}$ within the time interval [0, t]:

(5.27) \begin{align} \mathcal{C}_{n+1} = \bigcup_{N=0}^\infty &\lbrace {(\vec{t},\vec{x})}=(t'_{\!\!i},{x'_{\!\!i}})_{i=1}^{N+L}\subset (\mathbb{R}_+ \times \mathbb{Z}^d)\;:\; t'_{\!\!i}<t'_{\!\!i+1}, t'_{N+L}= S_{n+1}^\alpha L, t_0 =0, x_0 =\mathbf{0}, \notag \\[5pt] &\text{no artificial jobs or jobs of spatial size $S_{n+1}$ with center $x'_{\!\!i+1}$ arrive} \notag \\[5pt] &\text{during $(t'_{\!\!i},t'_{\!\!i+1})$; a job of spatial size $S_{n+1}$ or an artificial job } \notag \\[5pt] & \text{with center $x'_{\!\!i}$ arrives at $t'_{\!\!i}$} \rbrace. \end{align}

Since every $(n+1)$ -admissible path $\gamma^{(n+1)}$ can be decomposed into a collection of n-admissible paths connected by a collection of jobs with spatial size $S_{n+1}$ and artificial jobs, each $\gamma^{(n+1)}$ corresponds to a finite sequence of (random) time–space points $(\vec{t},\vec{x})=(t'_{\!\!i},x'_{\!\!i})_{i=1}^{N+L}$ in $\mathcal{C}_{n+1}$ , such that $(t'_{\!\!i},x'_{\!\!i})_{i=1}^{N+L}$ encodes consecutive arrivals of artificial jobs or jobs with spatial size $S_{n+1}$ that intersect $\gamma^{(n+1)}$ . For convenience, we take the closest center of an artificial job if $\gamma^{(n+1)}$ does not intersect any artificial jobs at a time $iS_{n+1}^{\alpha}$ , for $i=1,\dots,L$ . The artificial jobs ‘intersecting’ $\gamma^{(n+1)}$ are labeled by their centers $(iS_{n+1}^{\alpha},\hat{x}_i)$ , for $i=1, 2, \dots, L$ ; the jobs of size $S_{n+1}$ intersecting $\gamma^{(n+1)}$ are labeled by a sequence of time–space points $(t_i,x_i)$ , $i =1, 2, \dots, N$ . The union $(t'_{\!\!i},x_i')_{i=1}^{N+L}$ of $(iS_{n+1}^{\alpha},\hat{x}_i)_{i=1}^L$ and $(t_i,x_i)_{i=1}^N$ belongs to $\mathcal{C}_{n+1}$ .

We obtain an upper bound on $e^{g_{1} Y^{(n+1)}_{{t}}}$ by considering the different $(n+1)$ -admissible paths characterized by the elements $(t'_{\!\!i},x_i')_{i=1}^{N+L}$ of $\mathcal{C}_{n+1}$ ,

(5.28) \begin{equation} e^{g_{1} Y^{(n+1)}_{t}} \leq \sum_{(\vec{t},\vec{x})\in \mathcal{C}_{n+1}} \prod_{i=0}^{N+L-1} e^{g_{1}\left(1+ \max_{|y-x'_{\!\!i+1}| \leq S_{n+1}} X^{(n)}_{t'_{\!\!i},t'_{\!\!i+1};y}\right)} \mathbf{1}_{\{\text{$(t'_{\!\!i},x'_{\!\!i})$ and $(t'_{\!\!i+1},x'_{\!\!i+1})$ are $n$-connected via some $S_{n+1}$ neighbors}\}}, \end{equation}

where $t_0=0$ , $x_0=\mathbf{0}$ , and the term $e^{g_{{1}}}$ is due to the fact that a job of size $S_{n+1}$ may contribute a job of temporal size $T_j$ . It is not yet clear that the right-hand side of (5.28) is finite. We take the expectation and get an upper bound which is the right-hand side of (5.25). In the proof of Proposition 5.2, we show that the integral is finite.

In fact, as job arrivals follow independent Poisson processes, we have

\begin{equation*} \mathbb{E}\!\left[e^{g_{{1}} Y^{(n+1)}_t} \vert \mathcal{C}_{n+1} \right] \leq \sum_{(\vec{t},\vec{x})\in \mathcal{C}_{n+1}} \prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},x'_{\!\!i+1}-x'_{\!\!i}), \end{equation*}

where $t_0=0$ , $x_0=\mathbf{0}$ , and $w_n$ is defined by (5.26). Also, for each fixed integer N, each fixed sequence of $\mathbb{Z}^d$ -points $\mathbf{x} =(x'_{\!\!i})_{i=0}^{N+L}$ , and each fixed sequence of increasing times $(t'_{\!\!i})_{i=0}^{N+L}$ ( $(t'_{\!\!i})_{i=0}^{N+L}$ is also the union of $(t_i)_{i= 1}^{N}$ and $\{jS_{n+1}^{\alpha}:0\leq j\leq L\}$ ), the density of obtaining N non-artificial jobs at time–space points $(t'_{\!\!i},x'_{\!\!i})_{i=1}^{N+L}$ is

\[N!\mathbb{P}(N_{{t}}=N) \,dt_1dt_2\cdots dt_N = (\lambda_{n+1}{t})^N e^{-\lambda_{n+1}{{t}}} \mathbf{1}_{\{0=t_0<t_1<\dots< t_N<{{t}}\}} \,dt_1dt_2\cdots dt_N ,\]

where $N_{{t}}$ is a Poisson random variable with rate $\lambda_{n+1}{{t}}$ . As a consequence, the expectation of (5.28) is bounded by

(5.29) \begin{align} \mathbb{E}\!\left[e^{g_{{1}} Y^{(n+1)}_{{t}}} \right] \leq & \sum_{N} \sum_{(x_i)\in \!\left(\mathbb{Z}^d\right)^{N+L} } N! \mathbb{P}(N_{{t}}=N) \cdot \notag\\[5pt] \idotsint_{0=t_0<\dots< t_N<{{t}}}& \,dt_1dt_2\cdots dt_N \prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},x'_{\!\!i+1}-x'_{\!\!i}) \notag\\[5pt] \leq& \sum_{N} \sum_{(x_i)\in \!\left(\mathbb{Z}^d\right)^{N+L} } (\lambda_{n+1}{{t}})^N e^{-\lambda_{n+1}{{t}}} \cdot \notag\\[5pt] \idotsint_{0=t_0<\dots< t_N<{{t}}}& \,dt_1dt_2\cdots dt_N \prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},x'_{\!\!i+1}-x'_{\!\!i}). \end{align}
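The density formula above encodes a standard fact: conditionally on $N_t = N$, the arrival times of a Poisson process on $[0,t]$ are distributed as the order statistics of $N$ i.i.d. uniform points, so the ordered times have joint density $N!/t^N$ on the simplex. A Monte Carlo illustration of this conditional-uniformity fact (the rate and horizon are arbitrary, not the parameters of the model): conditionally on three arrivals, the first arrival time should have mean $t/4$, the mean of the minimum of three uniforms on $[0,t]$.

```python
import random

random.seed(0)
lam, t, trials = 0.6, 5.0, 200_000
first_times = []
for _ in range(trials):
    # simulate Poisson(lam) arrivals on [0, t] via i.i.d. exponential gaps
    times, s = [], random.expovariate(lam)
    while s < t:
        times.append(s)
        s += random.expovariate(lam)
    if len(times) == 3:  # condition on exactly N = 3 arrivals
        first_times.append(times[0])

mean_first = sum(first_times) / len(first_times)
assert abs(mean_first - t / 4) < 0.05  # E[min of 3 uniforms on [0, 5]] = 1.25
```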

Thanks to the additional artificial jobs, we see that $t'_{\!\!i+1} - t'_{\!\!i}$ is at most $S_{n+1}^{\alpha}$ . We estimate $w_n(s,x)$ by considering x in scales of $S_{n+1}$ , and see that $w_n(s,x)$ decays geometrically in x under this scale. This is the content of the second lemma, and it is a consequence of Corollary 4.1.

Lemma 5.2. Recall that $K>3$ . Let ${w_n(s,x)}$ be defined by (5.26). Assuming that $X^{(n)}_t$ satisfies (5.3), we have the following estimates for $w_n(s,x)$ : for any ${s}\leq S_{n+1}^\alpha$ ,

(5.30) \begin{equation} {w_n(s,x)} \leq \begin{cases} C S_{n+1}^{\frac{{d}}{K}} e^{g_{1} (1+ b_n s)} & \text{ if $\left\vert x \right\vert \leq 3 S_{n+1}$}, \\[5pt] C S_{n+1}^{\frac{{d}}{K}} r_n^{i-1}(\lambda) e^{g_{1} (1+b_n s)} & \text{ if $(2i-1) S_{n+1}< \left\vert x \right\vert \leq (2i+1)S_{n+1}$ for some $i\geq 2$,} \end{cases} \end{equation}

where C is a constant depending on d, and $r_n(\lambda)$ is a constant depending on $S_n$ , $S_{n+1}$ , and $\lambda$ . Furthermore, there is a constant $C_{d,2}$ , depending on d, such that $r_n(\lambda)<C_{d,2} \lambda $ when $(S_n)$ satisfies (3.8) and $\lambda<\lambda_{d,0}$ , where $\lambda_{d,0} $ is from Corollary 4.1.

Proof. We divide the proof into two cases.

  1. 1. When $\left\vert x \right\vert \leq S_{n+1}$ , assuming (5.3), we have for any $z>0$ , ${s}>0$ that

    \begin{equation*} \mathbb{P}\!\left( g_n (X^{(n)}_{s} - b_n {s}) \geq z \right) \leq \exp\!(\!-\!z). \end{equation*}
    By the union bound and stochastic dominance, we get
    \begin{equation*} \mathbb{P}\!\left( g_n \max_{|y| \leq S_{n+1}} (X^{(n)}_{0,{s};y} - b_n {s}) \geq d\ln (1+2S_{n+1}) + z \right) \leq \exp\!(\!-\!z), \end{equation*}
    and $g_n \max_{|y| \leq S_{n+1}} (X^{(n)}_{0,{s};y} - b_n {s})$ is stochastically dominated by ${d}\ln(1+ 2S_{n+1}) + Z $ , where Z is an exponential random variable with rate 1. Therefore,
    (5.31) \begin{equation} w_n(s,\mathbf{0}) = \mathbb{E}\!\left[e^{g_{{1}} \max_{|y| \leq S_{n+1}} X^{(n)}_{0,{s};y}} \right] e^{g_{{1}}} \leq (1+2S_{n+1})^{\frac{{d}}{K}} \frac{1}{1-\frac{1}{K}} e^{g_{{1}} (1+b_n {s})}. \end{equation}
    Clearly, $w_n(s,x) \leq w_n(s,\mathbf{0})$ , since the indicator function always takes the value 1 when $x=\mathbf{0}$ .
  2. 2. When $(2i-1) S_{n+1}< \left\vert x \right\vert \leq (2i+1)S_{n+1}$ for some $i\geq 2$ , we first get an upper bound similar to (5.31). For any $s>0$ ,

    \[ \mathbb{E}\!\left[e^{ 2 g_{1} \max_{|y| \leq S_{n+1}} X^{(n)}_{0,s;y}} \right] \leq (1+2S_{n+1})^{\frac{2d}{K}} \frac{1}{1-\frac{2}{K}} e^{2 g_{1} b_n {s}}. \]
    Then by the Cauchy–Schwarz inequality, we have
    (5.32) \begin{equation} w_n(s,x) \leq \mathbb{E}\!\left[e^{ 2 g_{1} \max_{|y| \leq S_{n+1}} X^{(n)}_{0,s;y}} \right]^{\frac{1}{2}} e^{g_{1}}\, \tilde{p}_n(s,x), \end{equation}
    where $\tilde{p}_n(s,x) = \mathbb{P}\!\left(\text{$(s,x)$ and $(0,\mathbf{0})$ are $n$-connected via some $S_{n+1}$ neighbors}\right)^{\frac{1}{2}}$ .

    As $s\leq S_{n+1}^\alpha$ , we get an upper bound for $\tilde{p}_n(s,x)$ by Corollary 4.1. More precisely, if (s, x) and $(0,\mathbf{0})$ are n-connected via some $S_{n+1}$ neighbors, then there exist two points y, z in $\left(S_n\mathbb{Z}\right)^d$ such that $\left\vert y \right\vert , \left\vert x-z \right\vert <\frac{S_{n+1}}{2}$ , and (0, y) and (s, z) are n-connected. Since $(2i-1) S_{n+1}< \left\vert x \right\vert \leq (2i+1)S_{n+1}$ , we get $2(i-1)S_{n+1}< \left\vert y-z \right\vert \leq 2(i+1) S_{n+1}$ . Therefore, by (4.10) in Corollary 4.1, we get that there exists a constant $C_{d,2}>c'_{\!\!d}$ , depending only on d, such that

    (5.33) \begin{equation} \tilde{p}_n(s,x) \leq \sum_{y,z} p_n(s,z-y) \leq \!\left(\frac{S_{n+1}}{S_n}\right)^d (c'_{\!\!d} \lambda )^{i\left(\frac{S_{n+1}}{S_n}\right)^{1-\alpha}} \leq r_n(\lambda)^{i-1} \end{equation}
    for some $r_n(\lambda)<C_{d,2} \lambda$ when ${S^{1-\alpha}_{n+1}> 10 \sum_{j\leq n} S_j^{1-\alpha}}$ , and $\lambda <{\lambda_{d,0}}$ . By (5.32) and (5.33), we get (5.30).
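The mechanism behind (5.31) is that an exponential tail bound turns into an exponential-moment bound: if $g_n(X - b_n s)$ is dominated by $d\ln(1+2S_{n+1}) + Z$ with $Z$ an Exp(1) variable, then taking the moment with the smaller parameter $g_1$ (with $g_1/g_n \leq 1/K$, as the factors in (5.31) suggest) produces the factors $(1+2S_{n+1})^{\frac{d}{K}}$ and $\mathbb{E}[e^{Z/K}] = (1-\frac{1}{K})^{-1}$. A quick numerical check, outside the argument, of the exponential moment formula $\mathbb{E}[e^{sZ}] = (1-s)^{-1}$ for $s<1$:

```python
import math
import random

random.seed(1)
s, n = 0.25, 500_000  # s plays the role of 1/K, with K > 3
# Monte Carlo estimate of E[e^{sZ}] for Z ~ Exp(1)
mc = sum(math.exp(s * random.expovariate(1.0)) for _ in range(n)) / n
assert abs(mc - 1.0 / (1.0 - s)) < 0.01  # closed form: 1/(1-s) = 4/3
```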

Remark 5.2. It may be possible to obtain better bounds for ${w_n(s,x)}$ . However, geometric decay of ${w_n(s,x)}$ in x is sufficient for the proof of Proposition 5.2.

We now prove Proposition 5.2 from Lemmas 5.1 and 5.2 for a large time t. We also obtain the estimate for any positive time t, using that $X^{(n+1)}_t$ is superadditive.

Proof of Proposition 5.2. By a change of variables $y_{i+1} = {x'_{\!\!i+1}-x'_{\!\!i}}=(y_{i+1,j})_{j=1}^d \in \mathbb{Z}^d$ , we rewrite (5.25) as

(5.34) \begin{align} \sum_{N=0}^\infty& \sum_{(y_i)\in \!\left(\mathbb{Z}^d\right)^{(N+L)} } ({\lambda_{n+1}{t}})^N e^{-{\lambda_{n+1}{t}}} \cdot \notag\\[5pt] &\idotsint_{0=t_0<t_1<\dots< t_N<{t}}{\prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},y_{i+1}) \,dt_1dt_2\cdots dt_N} , \end{align}

where $(t'_{\!\!i})_ {i=0}^{N+L}$ is the increasing sequence, which is also the union of $(t_i)_{i= 1}^{N}$ and $\{jS_{n+1}^{\alpha}:0\leq j\leq L\}$ . In view of Lemma 5.2, the product

(5.35) \begin{equation} \prod_{i=0}^{N+L-1} w_n(t'_{\!\!i+1}-t'_{\!\!i},y_{i+1}) \leq (Ce^{g_{{1}}} S_{n+1}^{\frac{{d}}{K}})^{N+L} e^{g_{{1}} b_n t} {{r_n(\lambda)}}^{\sum {y'_{\!\!i,j}}}, \end{equation}

where

\begin{align*} y'_{\!\!i} = (y'_{\!\!i,j})_{j=1}^d = \left(\max {\!\left\{ \lfloor \frac{\left\vert y_{i,j} \right\vert-3S_{n+1}}{2S_{n+1}} \rfloor ,0\right\} } \right)_{j=1}^d. \end{align*}

Note that for each $y'_{\!\!i}$ , there are at most $(6S_{n+1}+1)^d$ and at least $(4S_{n+1})^d$ different $y_i$ corresponding to it. We get an upper bound for the terms in (5.34) by summing over $y_i$ according to $y'_{\!\!i}$ , for every fixed N:

(5.36) \begin{align} &\quad \sum_{(y'_{\!\!i})\in \mathbb{N}^{N+L} } ({\lambda_{n+1}{t}})^N e^{-{\lambda_{n+1}{t}}} \cdot \notag\\[5pt] &\quad \idotsint_{0=t_0<t_1<\dots< t_N<{t}} \,dt_1dt_2\cdots dt_N (6^{{d}}e^{g_{{1}}}CS^{{d}+\frac{{d}}{K}}_{n+1})^{N+L} e^{g_{{1}} b_n {t}} {r_n(\lambda)}^{\sum {y'_{\!\!i,j}}} \notag \\[5pt] &= e^{g_{{1}} b_n {t}} (6^{{d}}e^{g_{{1}}}CS^{{d}+\frac{{d}}{K}}_{n+1})^{N+L} \!\left( \sum_{(y'_{\!\!i})\in \mathbb{N}^{N+L} } {r_n(\lambda)}^{\sum {y'_{\!\!i,j}}} \right) \cdot \notag\\[5pt] & \quad \left( ({\lambda_{n+1}{t}})^N e^{-{\lambda_{n+1}{t}}} \idotsint_{0=t_0<t_1<\dots< t_N<{t}} \,dt_1dt_2\cdots dt_N \right) \notag \\[5pt] &= e^{g_{{1}} b_n {t}} (6^{{d}}e^{g_{{1}}}CS^{{d}+\frac{{d}}{K}}_{n+1})^{N+L} \!\left( \frac{1}{1-{r_n(\lambda)}}\right)^{{d(N+L)}} \mathbb{P}(N_{{t}} =N) , \end{align}

where $N_{{t}}$ is a Poisson random variable with rate ${\lambda_{n+1}{t}} = {\lambda} S_{n+1}^{-(d+\alpha)}{t}$ . Recall that $K>3$ . By Lemma 5.2, there exists a constant $\lambda_{d,2} =\min\{\frac{1}{2C_{d,2}}, \lambda_{d,0}\}$ , depending only on d, such that for all $\lambda< \lambda_{d,2}$ , we have that

\[{r_n(\lambda)}< C_{d,2}\lambda< \frac{1}{2}.\]

Therefore, we sum over N for (5.36) and get an upper bound for (5.25):

(5.37) \begin{align} \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_{{t}}} \right] \leq &\sum_{N=0}^\infty e^{g_{1} b_n {t}} (6^{{d}}e^{g_{1}}CS^{{d}+\frac{d}{K}}_{n+1})^{N+L} \!\left( \frac{1}{1-{r_n(\lambda)}}\right)^{d(N+L)} \mathbb{P}(N_{{t}} =N) \notag \\[5pt] \leq & e^{g_{1} b_n {t}} \mathbb{E}\!\left[ (12^de^{g_{1}}CS^{{d}+\frac{{d}}{K}}_{n+1})^{N_{{t}}}\right] (12^de^{g_{1}}CS^{{d}+\frac{{d}}{K}}_{n+1})^{L} \notag \\[5pt] \leq & e^{g_{1} b_n {t}} \exp \!\left((12^de^{g_{1}}CS^{{d}+\frac{d}{K}}_{n+1})\lambda_{n+1}{t} + S_{n+1}^{-\alpha} \ln (12^de^{g_{1}}CS_{n+1}^{d+\frac{d}{K}}) {t} \right). \end{align}

Here, in the last line, we use the fact that

\[\mathbb{E}\!\left[ e^{bN_{{t}}}\right] = \exp \!\left((e^b-1)\lambda_{n+1}{t}\right) \leq \begin{cases} \exp \!\left(3b\lambda_{n+1}{t}\right) \text{ when $b\leq 1$},\\[5pt] \exp \!\left(e^b\lambda_{n+1}{t}\right) \text{ when $b> 1$}. \end{cases} \]
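The case distinction above rests on two elementary bounds: $e^b - 1 \leq 3b$ on $[0,1]$ (indeed $e^b - 1 \leq (e-1)b$ there, by convexity) and $e^b - 1 \leq e^b$ everywhere. A minimal grid check, not part of the argument:

```python
import math

# e^b - 1 <= 3b on (0, 1]
for i in range(1, 1001):
    b = i / 1000.0
    assert math.exp(b) - 1 <= 3 * b
# e^b - 1 <= e^b on a grid of b > 1 (trivially true for all b)
for i in range(200):
    b = 1.0 + i / 10.0
    assert math.exp(b) - 1 <= math.exp(b)
```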

Since

\begin{align*} \lambda_{n+1}S_{n+1}^{d+\frac{d}{K}} =\lambda S_{n+1}^{-(\alpha-\frac{d}{K})} < \lambda_{d,2}S_{n+1}^{-(\alpha-\frac{d}{K})} \end{align*}

and

\begin{align*} \ln(12^de^{g_{1}}CS_{n+1}^{d+\frac{d}{K}}) < C_{d,2} S_{n+1}^{\frac{d}{K}} \end{align*}

for some $C_{d,2}$ depending on d, we bound (5.37) by

(5.38) \begin{equation} \exp \!\left( (g_{1}b_n + {C_{d,2}}S_{n+1}^{-(\alpha-\frac{d}{K})} ) {t}\right)\!. \end{equation}

Since $X^{(n+1)}_{{t}}$ is superadditive, we have

\begin{align*} \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_{r+s}} \right] \geq \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_r} \right] \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_s} \right]\end{align*}

for any $r,s \geq 0$ . Therefore, we also have, for any $t>0$ ,

\[ \mathbb{E}\!\left[e^{g_{1} X^{(n+1)}_t} \right] \leq \exp \!\left( \!\left(g_{1}b_n + {C_{d,2}}S_{n+1}^{-(\alpha-\frac{d}{K})}\right)t\right)\!, \]

for some $C_{d,2}$ depending on d when $K>3$ . Lastly, we use a change of variables to replace $g_1$ by $a\cdot g_1$ .

6. Proof of the growth estimate

In this section, we use Corollary 5.1 to prove Proposition 3.2. By conditioning on the occurrences of jobs of spatial size $S_{i+j}$ for $j\geq 2$ , we use the exponential moments with parameters $g_{i+2-k}$ of the maximal numbers $X_{t;k}$ of jobs with temporal size $T_k$ to bound the upper tails. Applying the union bound, we get (3.14)–(3.18).

Proof of Proposition 3.2. For a fixed time t between $l_{2i} \cdot{\Delta t_{2i}}$ and $l_{2i+2} \cdot {\Delta t_{2i+2}}$ , we first recall the event $B_i(t)$ from (3.13):

(6.1) \begin{align} B_i(t) &= \{ \text{no job of spatial size $S_{i+j}$, $j\geq 2,$ is along} \notag \\[5pt] & \qquad \text{ an admissible path starting from $(0,\mathbf{0})$} \} \notag \\[5pt] &=\{ \sup m_{i+j}(\gamma_{0,t})= 0 \text{ for all $j \geq 2$} \}, \end{align}

where the suprema are taken over all admissible paths $\gamma_{0,t}$ with initial point $(0,\mathbf{0})$ . We estimate the probability $\mathbb{P}\!\left({B_i(t)}^c\right)$ by looking at time–space boxes $E_{i+j}= [0,t]\times[\!-\!3S_{i+j},3S_{i+j}]^d $ for all $j>1$ . If ${B_i(t)}^c$ occurs, then either there is a job of spatial size $S_{i+j}$ inside $E_{i+j}$ for some $j\geq 2$ , or there is a point x in $\left(2S_{i+2}\mathbb{Z}\right)^d$ with $\left\vert x \right\vert = 2S_{i+2}$ such that $(0,\mathbf{0})$ and (t, x) are $(i+1)$ -connected. As the arrivals for jobs of different spatial sizes follow independent Poisson processes, the first event occurs with probability at most

\begin{align*} \sum_{j\geq 2} \left(1-\exp\!\left(-\left\vert E_{i+j} \right\vert\lambda_{i+j}\right)\right) \leq \lambda \cdot \sum_{j\geq 2} S_{i+j}^{-\alpha} t , \end{align*}

where $ \lambda_{i+j} = \lambda \mathbb{P}(R= S_{i+j}) \leq \lambda S_{i+j}^{-\alpha}, $ from (4.1). From Proposition 4.1 and Corollary 5.1, the second event occurs with probability at most

\begin{align*} \!\left(\frac{S_{i+2}}{S_{i+1}} \right)^d q_{i+1}(\lambda)^{\left(\frac{S_{i+2}}{S_{i+1}}\right)}. \end{align*}

When $\lambda<\lambda_d$ and $(S_i)$ satisfies (2.3), we have that for any $ t\in [l_{2i} \cdot{\Delta t_{2i}},l_{2(i+1)} \cdot{\Delta t_{2(i+1)}}] $ ,

\begin{equation*} \mathbb{P}(B^c_i(t))\leq 2C_1 S_{i+2}^{-\alpha}t, \end{equation*}

which implies (3.14).

To get (3.15), we use Corollary 5.1 and the fact that the $(X_{t;j})$ are integers. For $j>i$ ,

(6.2) \begin{align} \mathbb{P}\!\left(A^c_{t,j}(0),\text{ and } {B_i(t)}\right) &=\mathbb{P}\!\left(X^{(i+2)}_{t;j}\geq 1 \right) \notag \\[5pt] &=\mathbb{P}\!\left(X^{(j+2)}_{t;j}\geq 1 \right) \notag \\[5pt] &\leq (\exp\!( 2g_{2} T_{j}^{-(1+\delta)} t)-1)(e^{g_{2}} -1)^{-1} \notag \\[5pt] &\leq 6T_j^{-(1+\delta)} t, \end{align}

where, in the last line, we use $2g_2 T_{i+1}^{-(1+\delta)} t < l_{2i+2} <\frac{1}{2}$ and $\exp\!(x)-1\leq 3x$ for $0\leq x\leq 1$ .
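The passage from the exponential moment to the first inequality in (6.2) uses that, for an integer-valued random variable $X \geq 0$, one has $e^{gX} \geq 1 + (e^g - 1)\mathbf{1}_{\{X \geq 1\}}$, whence $\mathbb{P}(X \geq 1) \leq (\mathbb{E}[e^{gX}]-1)(e^g-1)^{-1}$. A check of this inequality with a Poisson stand-in for $X^{(j+2)}_{t;j}$ (both parameters are illustrative):

```python
import math

g, mu = 0.7, 0.2  # illustrative parameters
p_ge_1 = 1 - math.exp(-mu)              # P(X >= 1) for X ~ Poisson(mu)
mgf = math.exp(mu * (math.exp(g) - 1))  # E[e^{gX}], Poisson moment generating function
# Markov-type bound for integer-valued X
assert p_ge_1 <= (mgf - 1) / (math.exp(g) - 1)
```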

The arguments for (3.16) and (3.17) are very similar. Notice that on the event ${B_i(t)}$ , the maximal number $X_{t;j}$ coincides with $X^{(i+2)}_{t;j}$ :

\begin{align*} X_{t;j} = \sup n_j(\gamma_t) =X^{(i+2)}_{t;j}, \end{align*}

where the supremum is taken over all admissible paths $\gamma_t$ with initial point $(0,\mathbf{0})$ . Therefore, for any $y\geq 0$ , we get

(6.3) \begin{align} \mathbb{P}(X_{t;j} \geq y ) \leq & \mathbb{P}\!\left(X^{(i+2)}_{t;j} \geq y, \text{ and } {B_i(t)} \right) + \mathbb{P}\!\left(B^c_i(t)\right) \notag \\[5pt] \leq& \mathbb{P}\!\left(X^{(i+2)}_{t;j} \geq y \right) + \mathbb{P}\!\left({B_i(t)}^c\right). \end{align}

By Corollary 5.1, we can apply the Markov inequality and obtain (3.16):

\begin{align*} \mathbb{P}\!\left(A^c_{t,j}(3), \text{ and } {B_i(t)}\right) &\leq \mathbb{P}\!\left(X^{(i+2)}_{t;j} > 3 T_j^{-(1+\delta)} t \right) \notag \\[5pt] &\leq \mathbb{E}\!\left[e^{g_{i+2-j} X^{(i+2)}_{t;j}} \right] \exp\!(\!-\!3g_{i+2-j} T_j^{-(1+\delta)} t) \notag \\[5pt] &\leq \exp\!( -2g_{i+2-j}T_j^{-(1+\delta)} t ). \end{align*}

Following a similar computation, we use (3.2) to get (3.17):

\begin{align*} \mathbb{P}\!\left( A^c_{t,i}(3 \Delta t_{2i}^{2\kappa}) ,\text{ and } {B_i(t)} \right) &\leq \exp\!\left(g_2 r_{2i} - 3 l_{2i} \Delta t_{2i}^{2\kappa} \right) \notag \\[5pt] &\leq \exp\!\left( - 2\Delta t_{2i}^{\kappa}\right). \end{align*}

Lastly, we use the union bound and (3.14)–(3.17) to get (3.18). We show only the second case, when t belongs to the interval $[r_{2i} {\Delta t_{2i}}, l_{2i+2} {\Delta t_{2i+2}})$ :

\begin{align*} &1- \mathbb{P}\!\left( A_{t,j}(3 \Delta t_{2j}^{2\kappa} ) \text{ for all $j$}\right)\\[5pt] \leq& \mathbb{P}\!\left({B_i(t)}^c \right) + \sum_{j> i} \mathbb{P}\!\left({B_i(t)} \text{ and $A^c_{t,j}(3 \Delta t_{2j}^{2\kappa})$ } \right) + \sum_{j\leq i} \mathbb{P}\!\left({B_i(t)} \text{ and $A^c_{t,j}(3 \Delta t_{2j}^{2\kappa})$ } \right)\\[5pt] \leq & 2C_1 S_{i+2}^{-\alpha} t + \sum_{j> i} 6 T_j^{-(1+\delta)}t + \sum_{j\leq i} \exp\!( -g_{i+2-j}T_j^{-(1+\delta)} t ). \end{align*}

By the condition (3.2) we get, for some $C,C'>0$ ,

\begin{align*} &\sum_{j>i} 6 T_j^{-(1+\delta)} + 2C_1 S_{i+2}^{-\alpha} \leq C'T_{i+1}^{-(1+\delta)} ,\\[5pt] &\sum_{j\leq i} \exp\!\left( -g_{i+2-j}T_j^{-(1+\delta)} t \right) \leq \exp\!\left( -CT_i^{-(1+\delta)} t \right), \end{align*}

which implies (3.18) for $c'=3$ .

Funding information

This work was supported by the SNF grant 200021L–169691.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Baccelli, F., Chang-Lara, H. and Foss, S. (2016). Shape theorems for Poisson hail on a bivariate ground. Adv. Appl. Prob. 48, 525–543.
Baccelli, F. and Foss, S. (2011). Poisson hail on a hot ground. J. Appl. Prob. 48A, 343–366.
Cox, J. T., Gandolfi, A., Griffin, P. and Kesten, H. (1993). Greedy lattice animals I: upper bounds. Ann. Appl. Prob. 3, 1151–1169.
Foss, S., Konstantopoulos, T. and Mountford, T. (2018). Power law condition for stability of Poisson hail. J. Theoret. Prob. 31, 681–704.
Martin, J. (2002). Linear growth for greedy lattice animals. Stoch. Process. Appl. 98, 43–66.
Wang, Z. (2022). Stable systems with power law conditions for Poisson hail II. In preparation.
Figure 1. Workloads before and after a job arrival.

Figure 2. Admissible path, score, and workload.

Figure 3. Stability and critical moments.