
Multidimensional random motions with a natural number of finite velocities

Published online by Cambridge University Press:  22 March 2024

Fabrizio Cinque*
Affiliation:
Sapienza University of Rome
Mattia Cintoli*
Affiliation:
Sapienza University of Rome
*Postal address: Department of Statistical Sciences, Sapienza University of Rome, Italy.

Abstract

We present a detailed analysis of random motions moving in higher spaces with a natural number of velocities. In the case of the so-called minimal random dynamics, under some broad assumptions, we give the joint distribution of the position of the motion (for both the inner part and the boundary of the support) and the number of displacements performed with each velocity. Explicit results for cyclic and complete motions are derived. We establish useful relationships between motions moving in different spaces, and we derive the form of the distribution of the movements in arbitrary dimension. Finally, we investigate further properties for stochastic motions governed by non-homogeneous Poisson processes.

Type
Original Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Since the papers of Goldstein [Reference Goldstein12] and Kac [Reference Kac16], who first studied the connection between random displacements of a particle moving back and forth on the line with stochastic times and hyperbolic partial differential equations (PDEs), researchers have shown increasing interest in the study of finite-velocity stochastic dynamics. The (initial) analytic approach led to fundamental results such as the explicit derivation of the distribution of the so-called telegraph process [Reference Beghin, Nieddu and Orsingher1, Reference Iacus14, Reference Orsingher27], the progenitor of all random motions that later appeared in the literature (also see [Reference Cinque2, Reference Di Crescenzo, Iuliano, Martinucci and Zacks9] for further explicit results and Cinque [Reference Cinque3] for the description of a reflection principle holding for one-dimensional finite-velocity motions). However, as the number of possible directions increases, the order of the PDE governing the probability distribution of the absolutely continuous component of the stochastic movement increases as well; in particular, as shown for the planar case by Kolesnik and Turbin [Reference Kolesnik and Turbin20], the order of the governing PDE coincides with the number of velocities that the motion can undertake. To overcome the weakness of the analytical approach, different ways have been presented to deal with motions in spaces of higher order. One of the first explicit results for multidimensional processes concerned a two-dimensional motion moving with three velocities (see [Reference Di Crescenzo8, Reference Orsingher28]), which was extended to different rules for change of directions in [Reference Leorato and Orsingher23]. 
We also point out the papers of Kolesnik and Orsingher [Reference Kolesnik and Orsingher18], which dealt with a planar motion choosing between the continuum of possible directions in the plane, $(\cos \alpha, \sin \alpha), \ \alpha\in [0,2\pi]$ , and of De Gregorio [Reference De Gregorio7] and Orsingher and De Gregorio [Reference Orsingher and De Gregorio29], which respectively analyzed the corresponding motions on the line and on higher spaces (note that here we only consider motions with a finite number of velocities). Very interesting results concerning motions in arbitrary dimensions were also presented by Samoilenko [Reference Samoilenko33] and then further investigated by Lachal et al. [Reference Lachal, Leorato and Orsingher22] and Lachal [Reference Lachal21]; see [Reference Garra and Orsingher11, Reference Kolesnik17, Reference Pogorui32] as well. It is worth recalling that explicit and fascinating results have been derived under some specific assumptions—for example, in the case of motions moving with orthogonal directions [Reference Cinque and Orsingher4, Reference Cinque and Orsingher5, Reference Orsingher, Garra and Zeifman30, Reference Orsingher and Kolesnik31]. We also reference the papers [Reference Di Crescenzo, Iuliano and Mustaro10, Reference Iuliano and Verasani15] for motions driven by geometric counting processes. Over the years, stochastic motions with finite velocities have also been studied in depth by physicists, who have obtained interesting outcomes; see for instance [Reference Masoliver and Lindenberg24, Reference Mori, Le Doussal, Majumdar and Schehr26, Reference Santra, Basu and Sabhapandit34].

Random evolutions represent a realistic alternative to diffusion processes for suitably modeling real phenomena in several fields: in geology, to represent the oscillations of the ground [Reference Travaglino, Di Crescenzo, Martinucci and Scarpa35]; in physics, to describe the random movements of electrons in a conductor, bacterial dynamics [Reference Mertens, Angelani, Di Leonardo and Bocquet25], or the movements of particles in gases; and in finance, to model stock prices [Reference Kolesnik and Ratanov19].

In this paper we present some general results for a wide class of random motions moving with a natural number of finite velocities. After a detailed introduction to the probabilistic description of these stochastic processes, we begin our study focusing on minimal motions, i.e. those moving with the minimum number of velocities to ensure that the state space has the same dimension as the space in which they occur. In this case we derive the exact probability in terms of their basic components, generalizing known results in the current literature. The probabilities concern both the inner part and the boundary of the support of the moving particle. Furthermore, thanks to a one-to-one correspondence between minimal stochastic dynamics, we introduce a canonical (minimal) motion to help with the analysis and to show explicit results. The examples provided concern different types of motions governed by both Poisson-type processes and geometric counting processes. In Section 3 we derive the distribution of a motion moving with an arbitrary number of velocities by connecting the problem to minimal movements. Finally, in Section 4, we recover the analytic approach to show some characteristics of stochastic dynamics driven by a non-homogeneous Poisson process—in particular, the relationships between the conditional distributions of motions in higher dimensions and lower-dimensional dynamics.

1.1. Random motions with a natural number of finite velocities

Let $\bigl(\Omega, \mathcal{F},\{\mathcal{F}_t\}_{t\ge0}, P\bigr)$ be a filtered probability space and $D \in \mathbb{N}$ . In the following we assume that every random object is suitably defined on the above probability space (i.e. if we introduce a stochastic process, this is adapted to the given filtration).

Let $\{W_n\}_{n\in \mathbb{N}_0}$ be a sequence of random variables such that $W_n \ge 0$ almost surely (a.s.) for all n and $W_0 = 0$ a.s. Let us define $T_n = \sum_{i=0}^n W_i,\ n\in \mathbb{N}_0$ , and the corresponding point process $N = \{N(t)\}_{t\ge0}$ such that $N(t) = \max\{n\in\mathbb{N}_0\;:\;\sum_{i=1}^n W_i \le t\}\ \forall\ t$ . Unless differently described, we assume N such that $N(t)<\infty$ for all $t\ge0$ a.s. Also, let $V = \{V(t)\}_{t\ge0}$ be a stochastic vector process taking values in a finite state space $\{v_0,\dots, v_M\}\subset \mathbb{R}^D, \ M\in\mathbb{N}$ , and such that $\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) \not = V(t)\,|\,N(t,t+\mathop{}\!\textrm{d} t] = 0\} = 0$ , $t\ge0$ . Now we can introduce the main object of our study, the D-dimensional random motion (with a natural number of finite velocities) $X = \{X(t)\}_{t\ge0}$ with velocity given by V, i.e. moving with the velocities $v_0,\dots, v_M$ and with displacements governed by the random process N,

(1.1) \begin{equation}X(t) = \int_0^t V(s)\mathop{}\!\textrm{d} s = \sum_{i=0}^{N(t)-1} \bigl(T_{i+1} -T_i\bigr) V(T_i) + \bigl(t-T_{N(t)}\bigr) V(T_{N(t)}), \ \ t\ge0,\end{equation}

where $V(T_i)$ denotes the random speed after the ith event recorded by N, therefore after the potential switch occurring at time $T_i$ (clearly, $T_{i+1}-T_i = W_{i+1}$ ). The stochastic process X describes the position of a particle moving in a D-dimensional (real) space with velocities $v_0, \dots, v_M$ and which can change its velocity only when the process N records a new event.

For the sake of brevity we also call X a finite-velocity random motion (even though this definition would also apply to a motion with an infinite number of finite velocities).
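
The definition (1.1) lends itself to direct simulation: travel with the current velocity until the next event of N, then possibly switch. The sketch below is only illustrative (the exponential waiting times and the uniform choice of the new velocity are assumptions made for concreteness, not part of the general definition):

```python
import random

def sample_position(t, velocities, rate=1.0, seed=None):
    """Sample X(t) as in (1.1): travel with the current velocity until the
    next event of the governing point process, then possibly switch.
    Exponential waiting times and uniform switches are illustrative choices."""
    rng = random.Random(seed)
    pos = [0.0] * len(velocities[0])
    v = rng.choice(velocities)            # initial velocity V(0)
    elapsed = 0.0
    while elapsed < t:
        w = rng.expovariate(rate)         # waiting time W_{i+1}
        dt = min(w, t - elapsed)          # stop at the observation time t
        pos = [p + vi * dt for p, vi in zip(pos, v)]
        elapsed += dt
        v = rng.choice(velocities)        # velocity after the potential switch
    return pos
```

For the telegraph process of Example 1.1 below, one would instead alternate the two velocities deterministically at each event.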

Example 1.1. (Telegraph process and cyclic motions.) If $D=1$ , and N is a homogeneous Poisson process with rate $\lambda>0$ and $v_0 = -v_1= c>0$ such that these velocities alternate, i.e. $V(t) = V(0)(\!-\!1)^{N(t)}$ , $t\ge0$ , then we have the well-known symmetric telegraph process, describing the position of a particle moving back and forth on the line with exponential displacements of average length $c/\lambda$ .

In the literature, random motions where the velocities change with a deterministic order are usually called cyclic motions. If X is a D-dimensional motion with $M+1$ velocities, we say that it is cyclic if (without any loss of generality) $\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t ) = v_h\,|\,V(t)=v_j,\, N(t,t+\mathop{}\!\textrm{d} t]=1\} = 1$ for $h=j+1$ , and 0 otherwise, for all j, h, where N is the point process governing the displacements (and $v_{h+k(M+1)} = v_h$ , $k\in \mathbb{Z}$ , $h=0,\dots,M$ ). For a complete analysis of this type of motion, see [Reference Lachal21, Reference Lachal, Leorato and Orsingher22]. $\diamond$

Example 1.2. (Complete random motions.) If $\mathbb{P}\{V(0)=v_h\}>0$ and $p_{j,h}=\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) = v_h\,|\, V(t)=v_j,\, N(t, t+\mathop{}\!\textrm{d} t]= 1\} > 0$ for each $j,h = 0,\dots, M$ , we call X a complete random motion. In this case, at each event recorded by the counting process N, the particle can switch velocity to any of the available ones (with strictly positive probability). $\diamond$

Example 1.3. (Random motion with orthogonal velocities.) Put $D=2$ . Consider the motion (X, Y) moving in $\mathbb{R}^2$ with the four orthogonal velocities $v_h = \bigl(\cos\!(h\pi/2), \sin\!(h\pi/2) \bigr),\ h=0,1,2,3$ , such that $\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) =v_h\,|\, V(t) = v_j,\,N(t,t+\mathop{}\!\textrm{d} t] = 1\} =1/2$ if $j = 0,2$ and $h=1,3$ or $j=1,3$ and $h = 0,2$ (i.e. it always switches ‘to a different dimension’). (X, Y) is the so-called standard orthogonal planar random motion, which, if N is a non-homogeneous Poisson process, can be expressed as a linear function of two independent and equivalent one-dimensional (non-homogeneous) telegraph processes; see [Reference Cinque and Orsingher4]. One can also imagine other rules for the changes of velocity; we refer to [Reference Cinque and Orsingher4, Reference Cinque and Orsingher5, Reference Orsingher, Garra and Zeifman30] for further details. $\diamond$

The support of the random variable X(t) expands as time increases, and it reads

(1.2) \begin{equation}\text{Supp}\bigl(X(t)\bigr) = \text{Conv}(v_0t,\dots, v_Mt), \ \ t\ge0,\end{equation}

where Conv $(\!\cdot\!)$ denotes the convex hull of the input vectors. Therefore, the motion X moves in a convex polytope of dimension

\begin{equation*}\text{dim}\bigl(\text{Conv}(v_0,\dots,v_M)\bigr) = \text{rank}(v_1-v_0\ \cdots\ v_M-v_0) = \text{rank}\begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V}\end{array}\end{pmatrix} -1,\end{equation*}

where $\textrm{V} = (v_0\ \cdots\ v_M)$ is the matrix with the velocities as columns and $1^T$ is a row vector of all ones (with suitable dimension). For $H = 0,\dots, M$ , if the particle takes all, and only, the velocities $v_{i_0}, \dots, v_{i_H}$ in the time interval [0, t], then it is located in the set $\overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ (where $\overset{\circ}{S}$ denotes the inner part of the set $S\subset\mathbb{R}^D$ and we assume the notation $\overset{\circ}{\text{Conv}}(v) = \{v\},\, v\in \mathbb{R}^D$ ).

Our analysis involves the relationships between motions moving in spaces of different orders or with state spaces of different dimensions. From (1.1) it is easy to check that if A is an $(R\times D)$ -dimensional real matrix, then the motion $X' = \{AX(t)\}_{t\ge0}$ is an R-dimensional motion governed by N and with velocities $v'_{\!\!0},\dots,v'_{\!\!M}\in \mathbb{R}^R$ such that $v'_{\!\!h} = Av_h,\ \forall\ h$ .

In the following we will use the lemma below, from the theory of affine geometry.

Lemma 1.1. Let $v_0,\dots,v_M\in \mathbb{R}^D$ be such that $\textit{dim}\bigl(\text{Conv}(v_0,\dots,v_M)\bigr) = R$ . For $k=0,\dots, M$ , the set $I^{R,k}$ of the indices of the first R linearly independent rows of the matrix $\Bigl[v_h-v_k\Bigr]_{\substack{h=0,\dots,M\\ h\not=k}}$ exists, and $I^R = I^{R,k} = I^{R,l} \ \forall\ k,l$ .

Let $e_1,\dots,e_D$ be the vectors of the standard basis of $\mathbb{R}^D$ . Then the orthogonal projection $p_{R}\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^R,$ $p_{R}(x) = \Bigl[e_i\Bigr]^T_{i\in I^{R}}x$ , is such that, with $v_h^{R} = p_{R}(v_h)\ \forall \ h$ ,

\begin{equation*}\textit{dim}\Bigl(\textit{Conv}\Bigl(v^{R}_0,\dots,v^{R}_M\Bigr)\Bigr) = R\end{equation*}

and

\begin{equation*}\,\forall \ x^{R}\in\textit{Conv}\Bigl(v^{R}_0,\dots,v^{R}_M\Bigr) \ \ \exists\ !\ x\in \textit{Conv}(v_0,\dots,v_M) \ s.t.\ p_{R}(x) = x^{R}.\end{equation*}

See Appendix A for the proof. Also note that if $R=M<D$ , i.e. the $R+1$ vectors are affinely independent, then the projected vectors $p_R(v_0),\dots, p_R(v_R)$ are affinely independent as well. Obviously, if $R=D$ , $p_R$ is the identity function.

For our aims, the core of Lemma 1.1 is that, for a collection of vectors $v_0,\dots,v_M\in \mathbb{R}^D$ such that $\text{dim}\Bigl(\text{Conv}(v_0,\dots,v_M)\Bigr) = R$ , there exists an orthogonal projection, $p_R$ , onto an R-dimensional space such that to each element of this projection of the convex hull, $x^R\in\text{Conv}\!\left(v^R_0,\dots,v^R_M\right)$ , corresponds (one and) only one element of the original convex hull, $x\in \text{Conv}(v_0,\dots,v_M)$ .
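
A greedy scan makes Lemma 1.1 concrete: it selects the first R linearly independent rows of the difference matrix, i.e. the index set $I^{R,k}$. The helper below is an illustrative sketch (numpy assumed), and one can verify numerically that the returned indices do not depend on k:

```python
import numpy as np

def independent_row_indices(velocities, k=0):
    """I^{R,k}: indices of the first R linearly independent rows of
    [v_h - v_k]_{h != k} (Lemma 1.1); R = dim Conv(v_0,...,v_M)."""
    V = np.array(velocities, dtype=float).T        # columns v_h
    diffs = np.delete(V, k, axis=1) - V[:, [k]]    # one row per coordinate
    R = np.linalg.matrix_rank(diffs)
    idx = []
    for i in range(diffs.shape[0]):
        trial = diffs[idx + [i], :]                # candidate row set
        if np.linalg.matrix_rank(trial) == len(idx) + 1:
            idx.append(i)                          # row i is independent
        if len(idx) == R:
            break
    return idx
```

The projection $p_R$ of the lemma then simply keeps the coordinates listed in the returned index set.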

2. Minimal random motions

A random motion in $\mathbb{R}^D$ needs $D+1$ affinely independent velocities in order to have a D-dimensional state space. Therefore, we say that the D-dimensional stochastic motion X, defined as in (1.1), is minimal if it moves with $D+1$ affinely independent velocities $v_0,\dots,v_D\in \mathbb{R}^D$ .

The support of the position X(t), $t\ge0$ , of a minimal random motion is given in (1.2); it can be decomposed as follows:

(2.1) \begin{equation}\text{Supp}\bigl(X(t)\bigr) = \bigcup_{H=0}^D\ \bigcup_{i\in \mathcal{C}_{H+1}^{\{0,\dots,D\}}}\overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t),\end{equation}

where $\mathcal{C}_{k}^{S}$ denotes the set of combinations of k elements from the set S, with $0\le k\le |S|<\infty$ . Since X is a minimal motion, it lies in each convex hull appearing in (2.1) if and only if it moves with all, and only, the corresponding velocities in the time interval [0, t].

Let us denote by $T_{(h)} = \{T_{(h)}(t)\}_{t\ge0}$ the stochastic process describing, for each $t\ge0$ , the random time that the process X spends moving with velocity $v_h$ in the time interval [0, t], with $ h = 0,\dots,D$ . In formula, $T_{(h)}(t) = \int_0^t {1}(V(s) = v_h) \mathop{}\!\textrm{d} s, \ \forall\ t,h$ . Furthermore, we denote by $T_{(\!\cdot\!)} = (T_{(0)},\dots,T_{(D)})$ the vector process describing the times spent by the motion moving with each velocity, and by $T_{(k^-)} = (T_{(0)},\dots,T_{(k-1)},T_{(k+1)},\dots,T_{(D)})$ the vector process describing the time that X spends with each velocity except for the kth one, $k=0,\dots, D$ . In the next proposition we express X as an affine function of $T_{(k^-)}$ .

Proposition 2.1. Let $X = \{X(t)\}_ {t\ge0}$ be a finite-velocity random motion in $\mathbb{R}^D$ moving with velocities $v_0,\dots,v_D$ . For $k = 0,\dots,D$ ,

(2.2) \begin{equation}X(t) = g_k\Bigl(t,T_{(k^-)}(t)\Bigr) = v_kt+\Biggl[v_h - v_k\Biggr]_{\substack{h=0,\dots, D\\ h \not = k}} T_{(k^-)}(t),\ \ t\ge0,\end{equation}

where $\Bigl[v_h - v_k\Bigr]_{\substack{h=0,\dots, D\\ h \not = k}} $ denotes the matrix with columns $v_h-v_k$ , $h\not=k$ . Furthermore, for fixed $t\ge0$ , $g_k$ is bijective for all k if and only if $v_0,\dots,v_D$ are affinely independent (i.e. if and only if X is minimal).

Hereafter we will omit the direct dependence of $g_k$ (and the similar functions) on the time variable t, since we are always working with fixed $t\ge0$ ; thus, we will more briefly write, for instance, $X(t) = g_k\bigl( T_{(k^-)}(t) \bigr) $ .

Proof. Fix $t\ge0$ . By definition $\sum_{h=0}^D T_{(h)}(t) = t$ a.s., and $X(t) = \sum_{h=0}^D v_h\, T_{(h)}(t)$ ; therefore, for each $k=0,\dots,D$ , we have

\begin{equation*}X(t) = \sum_{h\not = k} (v_h-v_k) T_{(h)}(t) + v_k t,\end{equation*}

which in matrix form is (2.2).

The matrix $\Bigl[v_h - v_k\Bigr]_{h\not = k} $ is invertible for all k if and only if all the differences $v_h-v_k$ , $h\not = k$ , are linearly independent for all k, and thus if and only if the velocities $v_0,\dots,v_D$ are affinely independent.
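
The identity (2.2) and the invertibility condition can be verified numerically; a small sketch with illustrative velocities and occupation times (numpy assumed):

```python
import numpy as np

V = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])       # columns v_0, v_1, v_2 (illustrative)
T = np.array([0.5, 0.3, 0.2])         # occupation times T_(h)(t), summing to t
t = T.sum()
X = V @ T                             # X(t) = sum_h v_h T_(h)(t)
for k in range(3):
    A = np.delete(V, k, axis=1) - V[:, [k]]               # [v_h - v_k]_{h != k}
    assert np.allclose(X, V[:, k] * t + A @ np.delete(T, k))   # identity (2.2)
    assert np.linalg.matrix_rank(A) == 2                  # g_k is invertible
```

The rank check holds for every k precisely because the three velocities are affinely independent, i.e. the motion is minimal.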

Remark 2.1. Another useful representation of a finite-velocity random motion X is, with $t\ge0$ ,

(2.3) \begin{equation}\begin{pmatrix}\begin{array}{c}t\\[5pt] X(t)\end{array}\end{pmatrix} =g\Bigl(T_{(\!\cdot\!)}(t)\Bigr) = \begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V} \end{array}\end{pmatrix} T_{(\!\cdot\!)}(t),\end{equation}

and g is bijective if and only if X is minimal (indeed, $\begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V} \end{array}\end{pmatrix} $ is invertible if and only if the velocities, the columns of V, are affinely independent). In this case we write

\begin{equation*}T_{(\!\cdot\!)}(t) = g^{-1}\bigl(t, X(t)\bigr) = \Bigl(g_{\cdot, h}^{-1} \bigl(t, X(t)\bigr) \Bigr)_{h=0,\dots,D} = \begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V} \end{array}\end{pmatrix}^{-1} \begin{pmatrix}\begin{array}{c}t\\[5pt] X(t)\end{array}\end{pmatrix}.\end{equation*}

Now, for $k=0,\dots,D$ , we write the inverse of (2.2) as

(2.4) \begin{equation}T_{(k^-)}(t) = g^{-1}_k\bigl(X(t)\bigr) = \Bigl(g_{k, h}^{-1} \bigl(X(t)\bigr) \Bigr)_{h\not =k} = \Bigl[v_h-v_k\Bigr]_{h\not = k}^{-1} \bigl(X(t)-v_kt\bigr)= \Bigl(g_{\cdot, h}^{-1} \bigl(t, X(t)\bigr) \Bigr)_{h\not=k}.\end{equation}

The notation (2.4) is going to be useful below. Clearly,

\begin{equation*} t - \sum_{h\not=k} g_{\cdot, h}^{-1} \bigl(t,X(t)\bigr) = t - \sum_{h\not=k} T_{(h)}(t) = T_{(k)}(t)= g_{\cdot, k}^{-1} \bigl(t,X(t)\bigr).\end{equation*}

$\diamond$

Theorem 2.1. Let $X = \{X(t)\}_ {t\ge0}$ be a minimal random motion moving with velocities $v_0,\dots,v_D\in\mathbb{R}^D$ and whose displacements are governed by a point process N. Let $X' = \{X'(t)\}_{t\ge0}$ be a random motion with velocities $v'_{\!\!0},\dots,v'_{\!\!D}$ whose displacements are governed by a point process $N' \stackrel{d}{=}N$ and with the same rule for changes of velocity as X. Then, for $t\ge0$ ,

(2.5) \begin{equation}X'(t) \stackrel{d}{=} f\bigl( X(t) \bigr) = \textrm{V}' \begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V} \end{array}\end{pmatrix}^{-1}\begin{pmatrix}\begin{array}{c}t\\[5pt] X(t) \end{array}\end{pmatrix},\end{equation}

where $\textrm{V}' = (v'_{\!\!0},\dots, v'_{\!\!D})$ . Furthermore, f is bijective if and only if X' is minimal.

Proof. Fix $t\ge0$ . Since the changes of velocity of X and X' follow the same rule and $N(t)\stackrel{d}{=}N'(t)\ \forall\ t$ , we have $T_{(h)}(t)\stackrel{d}{=}T'_{\!\!(h)}(t)\ \forall\ h,t$ . For $k=0,\dots,D$ ,

\begin{equation*}X'(t) = \textrm{V}'T'_{\!\!(\!\cdot\!)}(t)\stackrel{d}{=} \textrm{V}' \begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{V} \end{array}\end{pmatrix}^{-1}\begin{pmatrix}\begin{array}{c}t\\[5pt] X(t) \end{array}\end{pmatrix}.\end{equation*}

Now, by Proposition 2.1, for every k, X(t) is in bijective correspondence with $T_{(k^-)}(t)$ and then with $T'_{\!\!(k^-)}(t)$ (in distribution). Therefore X(t) is in bijective correspondence with X'(t) if and only if X'(t) is in bijective correspondence with $T'_{\!\!(k^-)}(t)$ , that is, if and only if X' is minimal.

Remark 2.2. (Canonical motion.) Theorem 2.1 states that all minimal random motions in $\mathbb{R}^D$ , with displacements and changes of directions governed by the same probabilistic rules, are in bijective correspondence (in distribution). Therefore, it is useful to introduce a minimal motion $X =\{X(t)\}_{t\ge0}$ moving with the canonical velocities of $\mathbb{R}^D$ , $e_0 = 0,e_1,\dots,e_D$ , where $e_h$ is the hth vector of the standard basis of $\mathbb{R}^D$ . At time $t\ge0$ , the support of the position X(t) is given by the convex set $\{x\in \mathbb{R}^D\;:\;x\ge0,\, \sum_{i=1}^D x_i\le t\}$ . Put $t\ge0$ and $\textrm{E} = (e_0 \ \cdots\ e_D) = (0 \ I_D)$ , the matrix having the canonical velocities as columns. In view of Remark 2.1, the canonical motion can be expressed as $\begin{pmatrix}\begin{array}{c}t\\[5pt] X(t) \end{array}\end{pmatrix} = \begin{pmatrix}\begin{array}{l}1^T\\[5pt] \textrm{E} \end{array}\end{pmatrix} T_{(\!\cdot\!)}(t)$ and

(2.6) \begin{equation}T_{(\!\cdot\!)}(t) = g^{-1}\bigl(t, X(t)\bigr) = \begin{pmatrix}\begin{array}{l@{\quad}l}1 & -1^T\\[5pt] 0 & \;\;\;I_D \end{array}\end{pmatrix} \begin{pmatrix}\begin{array}{c}t\\[5pt] X(t) \end{array}\end{pmatrix} = \begin{pmatrix}\begin{array}{c}t-\sum_{j=1}^D X_j(t)\\[5pt] X_1(t)\\[5pt] \cdot \\[5pt] X_D(t) \end{array}\end{pmatrix}.\end{equation}

Keeping in mind the notation (2.4), the inverse functions $g_k^{-1},\ k=0,\dots,D$ , are given by (2.6) excluding the $(k+1)$ th term (which concerns the time $T_{(k)}(t)$ ).

Finally, if Y is a random motion with affinely independent velocities $v_0,\dots,v_D$ , under the hypotheses of Theorem 2.1, we can write

(2.7) \begin{equation}Y(t) \stackrel{d}{=} v_0 t + \Bigl[v_h - v_0\Bigr]_{h=1,\dots,D}X(t),\ \ \ t\ge0.\end{equation}
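
Relation (2.7) is a plain affine change of coordinates. A quick numerical sketch (numpy; the velocities and the point of the simplex below are illustrative and are not drawn from the actual law of the motion):

```python
import numpy as np

v = np.array([[1.0, 1.0],
              [3.0, 1.0],
              [1.0, 3.0]])                 # rows v_0, v_1, v_2 (illustrative)
T = np.array([0.4, 0.35, 0.25])            # occupation times at t = 1
Xc = T[1:]                                 # canonical position: X(t) = (T_(1), T_(2))
Y = v[0] + (v[1:] - v[0]).T @ Xc           # map (2.7)
assert np.allclose(Y, T @ v)               # equals sum_h v_h T_(h)(t)
```

The final assertion confirms that (2.7) reproduces $\sum_h v_h T_{(h)}(t)$, the position of the motion with velocities $v_0,\dots,v_D$ and the same occupation times.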

2.1. Probability law of the position of minimal motions

In order to study the probability law of the position $X(t),\, t\ge0$ , of a minimal random motion in $\mathbb{R}^D$ , we need to make some hypotheses on the probabilistic mechanisms of the process, i.e. on the velocity process V and the associated point process N, which governs the displacements.

(H1) The changes of velocity, which can only occur when the point process N records an event, depend only on the previously selected velocities (but not on the moment of the switches or the time spent with each velocity). Therefore, we assume that, for $t\ge0$ ,

(2.8) \begin{align}\mathbb{P}&\{V(t+\mathop{}\!\textrm{d} t) = v_h\,|\, N(t, t+\mathop{}\!\textrm{d} t] = 1,\, \mathcal{F}_t\}\nonumber\\[5pt] & = \mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) = v_h\,|\, N(t, t+\mathop{}\!\textrm{d} t] = 1,\, V(T_j), j=0,\dots, N(t)\}, \end{align}

where $T_0=0$ a.s. and $T_1,\dots,T_{N(t)}$ are the arrival times of the process N (see (1.1)). Note that if N and V are Markovian, then the conditional event in (2.8) can be reduced to $\{ N(t, t+\mathop{}\!\textrm{d} t] = 1,\, V\bigl(T_{N(t)}\bigr)\}$ , and (X, V) is Markovian (see for instance [Reference Davis6]).

Now, for $t\ge0,\ h=0,\dots, D$ , we define the processes $N_h(t) = \big|\{0\le j\le N(t)\;:\; V(T_{j}) = v_h\} \big|$ counting the number of displacements with velocity $v_h$ in the time interval [0, t]. Clearly $\sum_{h=0}^D N_h(t) = N(t) +1$ a.s. (because, counting the initial movement as well, the number of displacements always exceeds the number of switches by one). Let us also define the random vector $C_{N(t)+1} = \bigl(C_{N(t)+1, 0}, \dots, C_{N(t)+1,D}\bigr)\in \mathbb{N}_0^{D+1}$ , which provides the allocation of the selected velocities in the $N(t)+1$ displacements.

For $t\ge0$ and $n_0,\dots, n_D$ , we have the following relationship:

\begin{equation*}\bigcap_{h=0}^D \{N_h(t) = n_h\} \iff \{ N(t) = n_0+\dots+n_D -1,\, C_{n_0+\dots+n_D} = (n_0, \dots, n_D)\}.\end{equation*}

Example 2.1. (Complete random motions.) Consider the motion in Example 1.2 with $p_{j,h} = p_h = \mathbb{P}\{V(0) = v_h\}>0,\ j,h = 0,\dots, D$ , so that the probability of selecting a velocity does not depend on the current velocity. Then, for every t, $C_{N(t)+1}\sim Multinomial \bigl(N(t)+1, p = (p_0, \dots, p_D)\bigr)$ . $\diamond$
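
The multinomial law of $C_{N(t)+1}$ in Example 2.1 can be sampled by letting each of the $N(t)+1$ displacements choose its velocity independently with probabilities $p_0,\dots,p_D$; a minimal sketch (standard library only, names illustrative):

```python
import random

def sample_allocation(n_displacements, p, seed=None):
    """Sample C_{N(t)+1} for a complete motion with p_{j,h} = p_h:
    each displacement picks its velocity independently with probs p."""
    rng = random.Random(seed)
    counts = [0] * len(p)
    for _ in range(n_displacements):
        u, acc = rng.random(), 0.0
        for h, ph in enumerate(p):
            acc += ph
            if u < acc:
                counts[h] += 1
                break
        else:
            counts[-1] += 1        # guard against floating-point round-off
    return counts
```

The returned vector always sums to the number of displacements, matching the constraint $\sum_h N_h(t) = N(t)+1$.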

(H2) The times of the displacements along different velocities are independent; i.e. the waiting times $\{W_n\}_{n\in \mathbb{N}_0}$ (see (1.1)) are independent if they concern different velocities. For each $h= 0,\dots,D$ , let $\big\{W_n^{(h)}\big\}_{n\in \mathbb{N}}$ be the sequence of the times related to the displacements with velocity $v_h$ . Specifically, $W_n^{(h)}$ denotes the time of the nth movement with velocity $v_h$ , and $W_n^{(h)}, W_m^{(k)}$ are independent if $h\not=k, \forall\ m,n$ . Let $N_{(h)} =\{N_{(h)}(s) \}_{s\ge0}$ be the associated point process, i.e. such that $N_{(h)}(s) = \max\{n\;:\; \sum_{i=1}^n W_i^{(h)}\le s\},\ \forall \ s$ . Then $N_{(0)},\dots, N_{(D)}$ are independent counting processes.

From the hypothesis (H1) we have that the random times $W_n^{(h)}$ are independent of the allocation of the velocities among the steps, i.e. for measurable $A\subset \mathbb{R}$ ,

(2.9) \begin{equation}\mathbb{P}\!\left\{W_m^{(h)}\in A,\, V(T_n) = v_h,\, C_{n+1,h} = m\right\} = \mathbb{P}\!\left\{W_m^{(h)}\in A \right\} \mathbb{P}\{V(T_n) = v_h,\, C_{n+1,h} = m\}\end{equation}

for each $m\le n\in \mathbb{N},\, h = 0,\dots, D$ . In words, the left-hand side of (2.9) is the probability that the mth displacement with velocity $v_h$ lies in A and that this displacement is the $(n+1)$ th movement of the motion (counting the initial one as well).

Below we use the following notation: for any suitable function g and any suitable absolutely continuous random variable X with probability density $f_X$ , we write $\mathbb{P}\{X\in \mathop{}\!\textrm{d}\, g(x)\} = f_X\bigl(g(x)\bigr) |J_g(x)|\mathop{}\!\textrm{d} x$ , where $J_g$ is the Jacobian matrix of g.

Theorem 2.2. Let X be a minimal finite-velocity random motion in $\mathbb{R}^D$ satisfying (H1)–(H2). For $t\ge0,\,x\in \overset{\circ}{\textit{Supp}}\bigl(X(t)\bigr),\ n_0,\dots,n_D\in \mathbb{N}$ , and $k=0,\dots, D$ , we have

(2.10) \begin{align}& \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^D\{N_h(t) = n_h\} ,\, V(t) = v_k\Bigg\}\\[5pt] & = \prod_{\substack{h=0\\ h\not=k}}^D f_{\sum_{j=1}^{n_h} W_j^{(h)}}\Bigl(g_{k,h}^{-1}(x)\Bigr) \mathop{}\!\textrm{d} x \,\Big|\bigl[v_h-v_k\bigr]_{h\not =k}\Big|^{-1} \, \mathbb{P}\!\left\{N_{(k)}\!\left(t-\sum_{\substack{h=0\\ h\not = k}}^D g_{k,h}^{-1}(x)\right) = n_k-1\right\}\nonumber\\[5pt] &\ \ \ \times \mathbb{P}\big\{C_{n_0+\dots+n_D} = (n_0,\dots,n_D), V(t) = v_k\big\},\nonumber\end{align}

where $g_k^{-1}$ is given in (2.4).

Theorem 2.2 provides a general formula for the distribution of the position of the minimal motion at time t, jointly with the number of displacements performed with each velocity in the interval [0, t] and with the current velocity (that is, the velocity at time t).

Proof. Fix $k = 0,\dots,D$ and $t\ge0$ . We have

(2.11) \begin{align}&\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^D\{N_h(t) = n_h\} ,\, V(t) = v_k\Bigg\}\nonumber\\[5pt] & \!\!\!\!\!\!\!\!\!\!\!\!\!= \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^D\{N_h(t) = n_h\} ,\, \sum_{h=0}^D T_{(h)}(t) = t,\, V(t) = v_k\Bigg\}\nonumber \\[5pt] &\!\!\!\!\!\!\!\!\!\!\!\!\! = \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{\substack{h=0\\ h\not = k}}^D\Big\{T_{(h)}(t) = \sum_{j=1}^{n_h} W_j^{(h)}\Big\} ,\, N_{(k)}\bigl(T_{(k)}(t)\bigr) = n_k-1,\\[5pt] & \!\! C_{n_0 + \dots + n_D} =(n_0,\dots,n_D),\, V(t) = v_k\Bigg\} \nonumber \end{align}
(2.12) \begin{align} & \qquad\qquad=\mathbb{P}\Bigg\{T_{(k^-)}(t)\in \mathop{}\!\textrm{d}\, g_k^{-1}(x),\,\bigcap_{h\not = k}\Bigg\{T_{(h)}(t) = \sum_{j=1}^{n_h} W_j^{(h)}\Bigg\} ,\, N_{(k)}\bigl(T_{(k)}(t)\bigr) = n_k-1,\\[5pt] & \qquad\qquad \ \ \ \ \ \ \ \ C_{n_0 + \dots + n_D} =(n_0,\dots,n_D),\, V(t) = v_k\Bigg\} \nonumber \nonumber\end{align}
(2.13) \begin{align} & \!\!\!\!\!\!\!\!=\mathbb{P}\Bigg\{ \Biggl(\sum_{j=1}^{n_h} W_j^{(h)}\Biggr)_{h\not = k}\in \mathop{}\!\textrm{d}\, g_{k}^{-1}(x) ,\, N_{(k)}\Biggl(t-\sum_{h\not = k} g_{k,h}^{-1}(x)\Biggr) = n_k-1\Bigg\}\\[5pt] & \!\!\! \times \mathbb{P}\big\{C_{n_0 + \dots + n_D} =(n_0,\dots,n_D),\, V(t) = v_k\big\}. \nonumber\end{align}

The step (2.11) follows from considering that, in the time interval [0, t], the motion performs $n_h$ steps with velocity $v_h,\ \forall\ h$ , and it has $V(t) = v_k$ if and only if the total amount of time spent with $v_h$ is given by the sum of the $n_h$ waiting times $W_j^{(h)}$ , for $h\not = k$ , and if the point process $N_{(k)}$ is waiting for the $n_k$ th event at the time $T_{(k)}(t)$ (because $V(t) = v_k$ , so the motion has completed $n_k-1$ displacements with velocity $v_k$ and is now performing the $n_k$ th). Finally, the event $C_{n_0 + \dots + n_D} = (n_0,\dots, n_D)$ pertains to the randomness in the allocation of the velocities.

The steps (2.12) and (2.13) respectively follow from considering Equation (2.2) and the independence of the waiting times $W_{j}^{(h)}$ from the allocation of the displacements, for all j, h; see (2.9).

Note that (2.13) holds for a random motion where the hypothesis (H2) (concerning the independence of the displacements with different velocities) is not assumed. By taking into account (H2) and using (2.4), we see that (2.13) coincides with (2.10).

We point out that if $x\longrightarrow \bar{x}\in\partial \text{Supp}\bigl(X(t)\bigr)$ , then for at least one $l \in \{ 0,\dots,D \}$ , $T_{(l)}(t)\longrightarrow 0 $ . Therefore, in (2.10) either $g_{k,h}^{-1}(x) = T_{(h)}(t) \longrightarrow 0 $ for at least one $h\not=k$ or $t-\sum_{h\not=k} g_{k,h}^{-1}(x) = T_{(k)}(t)\longrightarrow 0 $ . In light of this observation, as x tends to the boundary of the support, the probability (2.10) goes to 0 whenever the density of the vanishing time $T_{(l)}(t)$ tends to 0. See Examples 2.2 and 2.3 for more details.

Remark 2.3. (Canonical motion.) If X is a canonical minimal random motion in $\mathbb{R}^D$ (see Remark 2.2), then, with (2.6) in hand, we immediately have the corresponding probability (2.10) by considering

\begin{equation*}g_{\cdot,h}^{-1}(x) = \begin{cases}\begin{array}{l@{\quad}l} t-\sum_{i=1}^Dx_i, &\text{if }h =0,\\[5pt] x_h,& \text{if }h\not =0, \end{array}\end{cases}\end{equation*}

and the Jacobian determinant is equal to 1. $\diamond$

Example 2.2. (Cyclic motions.) Let X be a cyclic (see Example 1.1) minimal motion with velocities $v_0,\dots,v_D\in \mathbb{R}^D$ , and let $v_{h+k(D+1)} = v_h, \ h=0,\dots,D,\, k\in \mathbb{Z}$ . Let N be the point process governing the displacements of X; then for fixed $t\ge0$ , the knowledge of N(t) and V(t) is sufficient to determine $N_h(t)$ for all h. Let $\mathbb{P}\{V(0) = v_h\} = p_h>0$ and $p_{h+k(D+1)} = p_h$ , for all h and $k\in \mathbb{Z}$ .

Let $n\in \mathbb{N}$ and $k=0,\dots, D$ . With $j=1,\dots,D$ , if the motion performs $n(D+1)+j$ displacements in [0, t], i.e. $N(t) = n(D+1)+j-1$ , and $V(t) = v_k$ , then $n+1$ displacements occur for each of the velocities $v_{k-j+1}, \dots, v_{k}$ (the $(n+1)$ th displacement with velocity $v_k$ is not complete), and n displacements occur for each of the other velocities $v_{k+1},\dots, v_{k+1+D-j}$ . On the other hand, if $j=0$ , then each velocity is taken n times. Hence, for $x\in \overset{\circ}{\text{Supp}}\bigl(X(t)\bigr)$ ,

(2.14) \begin{align}\mathbb{P}&\{X(t)\in \mathop{}\!\textrm{d} x\} = \sum_{j=0}^D \sum_{n=1}^\infty \sum_{k = 0}^D \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x,\, N(t) =n(D+1)+j-1,\, V(t) = v_k\}\nonumber\\[5pt] & = \sum_{j=0}^D \sum_{k = 0}^D \sum_{n=1}^\infty \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=k-j+1}^{k}\{N_h(t) = n+1\},\,\bigcap_{h=k+1}^{k+1+D-j}\{N_h(t) = n\} ,\, V(t) = v_k\Bigg\},\ \end{align}

where the probability appearing in (2.14) can be derived from (2.10) (note that, for $j=0$ , we have $\mathbb{P}\{C_{n(D+1)} = (n,\dots, n),\, V(t) = v_k\} = \mathbb{P}\{V(0) = v_{k+1}\}= p_{k+1},$ and for $j=1,\dots,D$ , with $n_h = n+1$ if $ h = k-j+1,\dots,k$ and $n_h=n$ if $h = k+1,\dots,k+1+D-j$ , we have $\mathbb{P}\{C_{n(D+1)+j} = (n_0,\dots, n_{D}),\, V(t) = v_k\}= \mathbb{P}\{V(0) = v_{k-j+1}\}= p_{k-j+1}$ ).
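The counting argument above is easy to mechanize. A small sketch (the function name is ours) assumes the cyclic rule $v_0\to v_1\to\dots\to v_D\to v_0$ and counts the current (incomplete) displacement as started:

```python
def cyclic_counts(D, N_t, k):
    """Per-velocity displacement counts for a cyclic motion.

    Given N(t) = N_t recorded switches and current velocity v_k, the
    cyclic rule v_0 -> v_1 -> ... -> v_D -> v_0 forces the starting
    velocity, and hence the number N_h(t) of displacements performed
    (or started) with each velocity v_h in [0, t].
    """
    segments = N_t + 1               # displacements started in [0, t]
    start = (k - N_t) % (D + 1)      # index of the starting velocity
    counts = [0] * (D + 1)
    for i in range(segments):
        counts[(start + i) % (D + 1)] += 1
    return counts
```

For example, with $D=2$, $n=1$, $j=2$, $k=0$ (so $N(t)=4$), the velocities $v_{k-j+1}=v_2$ and $v_0$ are each taken $n+1=2$ times and $v_1$ is taken $n=1$ time, matching the description above.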

Now we assume X is a cyclic canonical motion and we derive the probabilities appearing in (2.14). For $t\ge0$ , in light of Theorem 2.2 and Remark 2.3, by setting $x_0 = t-\sum_{i=1}^D x_i$ , we readily arrive at the following distributions, for

\begin{equation*}x \in \overset{\circ}{\text{Supp}}\bigl(X(t)\bigr) = \left\{x\in \mathbb{R}^D\;:\;x>0,\, \sum_{i=1}^D x_i < t\right\}\end{equation*}

and $k=0,\dots, D$ . For $j=1,\dots,D,$

(2.15) \begin{align}\mathbb{P}&\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)+j-1,\,V(t) = e_k \Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & = \sum_{n=1}^\infty \mathbb{P}\{V(0) = v_{k-j+1}\} \nonumber\\[5pt] & \ \ \ \times \Biggl(\prod_{h=k-j+1}^{k-1} f_{\sum_{i=1}^{n+1}W_i^{(h)}}(x_h)\Biggr)\, \mathbb{P}\{N_{(k)}(x_k) = n\} \, \Biggl(\prod_{h=k+1}^{k+1+D-j}f_{\sum_{i=1}^{n}W_i^{(h)}}(x_h) \Biggr), \end{align}

and for $j=0$ ,

(2.16) \begin{align}\mathbb{P}\Bigg\{&X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)-1,\,V(t) = e_k\Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & = \sum_{n=1}^\infty \mathbb{P}\{V(0) = v_{k+1}\} \left(\prod_{\substack{h=0\\ h\not=k}}^{D} f_{\sum_{i=1}^{n}W_i^{(h)}}(x_h)\right)\, \mathbb{P}\{N_{(k)}(x_k) = n-1\}.\end{align}

We point out that thanks to the relationship (2.7), from the probabilities (2.15) and (2.16) we immediately obtain the distribution of the position of any D-dimensional cyclic minimal random motion, Y, moving with velocities $v_0,\dots,v_D$ and governed by a Poisson-type process $N_Y\stackrel{d}{=}N$ .

We now present explicit results for two different types of point processes for N.

(a) Homogeneous Poisson-type process. Assume N is a Poisson-type process such that $W_i^{(h)}\sim Exp(\lambda_h), \ i\in \mathbb{N},\ h=0,\dots,D$ . Then the formula (2.15) turns into

(2.17) \begin{align}\mathbb{P}&\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)+j-1,\,V(t) = e_k\Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & =\sum_{n=1}^\infty p_{k-j+1} \Biggl(\prod_{h=k-j+1}^{k-1} \frac{\lambda_h^{n+1}\,x_h^{n}\,e^{-\lambda_h x_h}}{n!}\Biggr)\, \frac{e^{-\lambda_k x_k}(\lambda_k x_k)^{n}}{n!} \, \Biggl(\prod_{h=k+1}^{k+1+D-j}\frac{\lambda_h^{n}\,x_h^{n-1}\,e^{-\lambda_h x_h}}{(n-1)!} \Biggr) \nonumber\\[5pt] & =e^{-\sum_{h=0}^D\lambda_h x_h} \Biggl(\prod_{h=0}^{D} \lambda_h \Biggr) p_{k-j+1} x_k\Biggl(\prod_{h=k-j+1}^{k-1} \lambda_h x_h \Biggr)\, \tilde{I}_{j, D+1}\!\left((D+1)\sqrt[D+1]{\prod_{h=0}^{D}\lambda_hx_h}\right),\end{align}

where

\begin{equation*}\tilde{I}_{\alpha,\nu}(z) = \sum_{n=0}^\infty \Bigl(\frac{z}{\nu}\Bigr)^{n\nu} \frac{1}{n!^{\nu-\alpha}(n+1)!^{\alpha}},\end{equation*}

with $0\le\alpha\le\nu,\ z\in \mathbb{C}$ , is a Bessel-type function. Similarly, the formula (2.16) reads

(2.18) \begin{align}\mathbb{P}\Bigg\{&X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)-1,\,V(t) = e_k\Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & =e^{-\sum_{h=0}^D\lambda_h x_h} p_{k+1} \left(\prod_{\substack{h=0\\ h\not = k}}^{D} \lambda_h \right)\, \tilde{I}_{0, D+1}\!\left((D+1)\sqrt[D+1]{\prod_{h=0}^{D}\lambda_hx_h}\right).\end{align}

Note that if $x\longrightarrow \bar{x}\in \partial \text{Supp}\bigl(X(t)\bigr)$ , then there exists $l \in \{ 0, \dots, D \}$ such that the total time spent with velocity $v_l$ goes to 0, meaning that $T_{(l)}(t) = x_l\longrightarrow0$ . With this in hand, we observe that the probability (2.18) converges to

\begin{equation*}e^{-\sum_{h\not\in I_0}\lambda_h x_h} p_{k+1} \Bigg(\prod_{h\not = k} \lambda_h \Bigg),\end{equation*}

where $I_0\subset \{0,\dots,D\}$ denotes the set of indexes of the times going to 0. Hence, for all k, the distribution (2.18) never converges to 0 for x tending to the boundary of the support. Intuitively, this follows because the probability concerns the event where every velocity is taken exactly n times, with $n\ge1$ , and therefore it also includes the case $n=1$ , where the random times have an exponential density function, which is right-continuous and strictly positive at 0.

On the other hand, (2.17) can converge to 0. In fact, for fixed j, if $D+1-j$ times $T_{(h)}(t) = x_h$ tend to 0, then for each k, at least one of these times appears in

\begin{equation*}x_k\Bigg(\prod_{h=k-j+1}^{k-1} \lambda_h x_h \Bigg),\end{equation*}

leading it to 0. This follows because the event in the probability does not include the case where each velocity whose time converges to 0 is taken just once.

(b) Geometric counting process. Assume that $N_{(h)}$ , $h=0,\dots,D$ , are independent geometric counting processes with parameter $\lambda_h>0$ ; then the waiting times $W_i^{(h)}, W_j^{(k)}$ are independent for all $h\not=k$ , and they are dependent for $h=k$ and $i\not=j$ . In particular, if M is a geometric counting process with parameter $\lambda>0$ , then

(2.19) \begin{equation}\mathbb{P}\{M(s+t) - M(s) =n\} =\frac{1}{1+\lambda t}\Biggl(\frac{\lambda t}{1+\lambda t}\Biggr)^n ,\ \ \ \ s,t\ge0,\ n\in\mathbb{N}_0,\end{equation}

and its arrival times have a modified Pareto (Type I) distribution, that is,

(2.20) \begin{equation}\mathbb{P}\{T_n \in \mathop{}\!\textrm{d} t\} =\frac{n\lambda}{(1+\lambda t)^2}\Biggl(\frac{\lambda t}{1+\lambda t}\Biggr)^{n-1}\mathop{}\!\textrm{d} t ,\ \ \ \ t\ge0,\ n\in\mathbb{N}.\end{equation}

We refer to [Reference Di Crescenzo, Iuliano and Mustaro10, Reference Iuliano and Verasani15] for further details about geometric counting processes for random motions and to [Reference Grandell13] for a complete overview of mixed Poisson processes.
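As a quick numerical check of (2.19) and (2.20) (function names ours): the probabilities in (2.19) sum to 1 over n, and the density (2.20) is the derivative in t of $\mathbb{P}\{T_n\le t\} = \mathbb{P}\{M(t)\ge n\} = \bigl(\lambda t/(1+\lambda t)\bigr)^n$:

```python
def geom_pmf(n, lam, t):
    """P{M(s+t) - M(s) = n} for a geometric counting process, as in (2.19)."""
    return 1.0 / (1.0 + lam * t) * (lam * t / (1.0 + lam * t)) ** n

def arrival_density(n, lam, t):
    """Density of the n-th arrival time T_n (modified Pareto), as in (2.20)."""
    return n * lam / (1.0 + lam * t) ** 2 * (lam * t / (1.0 + lam * t)) ** (n - 1)
```

Both checks (normalization of the pmf and the finite-difference derivative of the tail of the arrival-time law) pass to high accuracy.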

In light of (2.19) and (2.20), the formula (2.15) turns into

(2.21) \begin{align}&\mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)+j-1,\,V(t) = e_k\Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & = \frac{p_{k-j+1}}{1+\lambda_k x_k}\Biggl(\prod_{h=k-j+1}^{k} \frac{\lambda_hx_h}{1+\lambda_hx_h}\Biggr)\!\!\left(\prod_{\substack{h=0\\ h\not =k}}^{D} \frac{\lambda_h}{(1+\lambda_hx_h)^2}\right)\!\sum_{n=1}^\infty n^{D+1-j}(n+1)^{j-1}\!\prod_{h=0}^{D}\Biggl(\frac{\lambda_hx_h}{1+\lambda_hx_h} \Biggr)^{n-1}. \end{align}

Similarly, the formula (2.16) reads

(2.22) \begin{align}\mathbb{P}\Bigg\{&X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{n=1}^\infty N(t) =n(D+1)-1,\,V(t) = e_k\Bigg\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & = \frac{p_{k+1}}{1+\lambda_k x_k}\left(\prod_{\substack{h=0\\ h\not =k}}^{D} \frac{\lambda_h}{(1+\lambda_hx_h)^2}\right)\sum_{n=0}^\infty (n+1)^{D}\prod_{h=0}^{D}\Biggl(\frac{\lambda_hx_h}{1+\lambda_hx_h} \Biggr)^{n}.\end{align}

Finally, for $x\longrightarrow \bar{x}\in \partial \text{Supp}\bigl(X(t)\bigr)$ , similar considerations to those in (a) apply.

We point out that from the above formulas it is easy to obtain several results appearing in previous papers such as [Reference Di Crescenzo, Iuliano and Mustaro10, Reference Iuliano and Verasani15, Reference Lachal21, Reference Lachal, Leorato and Orsingher22, Reference Orsingher28]. For instance, if we consider $\lambda_h = \lambda>0\ \forall\ h$ and $k = j-1$ , then (2.17) coincides with the distribution in [Reference Lachal21, Section 4.4]; with $D=1$ , from the formulas (2.21) and (2.22) it is straightforward to derive the elegant distributions in [Reference Di Crescenzo, Iuliano and Mustaro10, Theorem 1] (consider $k=j-1=0$ in (2.21) and $k = D=1$ in (2.22)).

For further details about the cyclic motions we refer to [Reference Lachal21, Reference Lachal, Leorato and Orsingher22]. $\diamond$

Example 2.3. (Complete motions.) Let X be a D-dimensional complete canonical (minimal) random motion with $\mathbb{P}\{V(0) = e_h\}=\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) = e_h\,|\, V(t)=e_j,\, N(t, t+\mathop{}\!\textrm{d} t] = 1\} = p_{h}> 0$ for each $j,h = 0,\dots, D,$ and governed by a homogeneous Poisson process with rate $\lambda>0$ . Now, with $t\ge0$ , in light of Remark 2.3, by setting $x_0 = t-\sum_{j=1}^D x_j$ and using Theorem 2.2, we readily arrive at the following, for $x \in \overset{\circ}{\text{Supp}}\bigl(X(t)\bigr) $ , integers $n_0,\dots,n_D\ge 1$ , and $k=0,\dots,D$ :

(2.23) \begin{align}& \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^D\{N_h(t) = n_h\} ,\, V(t) = e_k\Bigg\}/\mathop{}\!\textrm{d} x\nonumber\\[5pt] & =\left( \prod_{\substack{h=0\\ h\not=k}}^D \frac{\lambda^{n_h}\,x_h^{n_h-1}\,e^{-\lambda x_h}}{(n_h-1)!}\right)\, \frac{e^{-\lambda x_k}(\lambda x_k)^{n_k-1}}{(n_k-1)!} \, \binom{n_0+\dots+n_D-1}{n_0,\dots,n_{k-1},n_{k}-1,n_{k+1},\dots, n_D} \prod_{h=0}^D p_h^{n_h}\nonumber\\[5pt] & =\frac{e^{-\lambda t}}{\lambda}\,\Bigg(\,\sum_{h=0}^D n_h -1\Bigg)! \,n_k\prod_{h=0}^D \frac{(\lambda p_h)^{n_h}\, x_h^{n_h-1}}{(n_h-1)!\,n_h!}.\end{align}

Then it is straightforward to see that

(2.24) \begin{equation}\mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^D\{N_h(t) = n_h\}\Bigg\} /\mathop{}\!\textrm{d} x= \frac{e^{-\lambda t}}{\lambda}\,\Bigg(\,\sum_{h=0}^D n_h \Bigg)! \,\prod_{h=0}^D \frac{(\lambda p_h)^{n_h}\, x_h^{n_h-1}}{(n_h-1)!\,n_h!}.\end{equation}

Finally,

(2.25) \begin{align}\mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\}/\mathop{}\!\textrm{d} x &= \frac{e^{-\lambda t}}{\lambda} \sum_{n_0,\dots,n_D \ge1}\, \Bigg(\,\sum_{h=0}^D n_h \Bigg)! \,\prod_{h=0}^D \frac{(\lambda p_h)^{n_h}\, x_h^{n_h-1}}{(n_h-1)!\,n_h!} \\[5pt] & = \frac{e^{-\lambda t}}{\lambda}\sum_{m_0,\dots,m_D \ge0} \,\Bigg(\,\sum_{h=0}^D m_h + D+1 \Bigg)! \,\prod_{h=0}^D \frac{(\lambda p_h)^{m_h+1}\, x_h^{m_h}}{m_h!\,(m_h+1)!}\nonumber \\[5pt] & = \frac{e^{-\lambda t}}{\lambda} \prod_{h=0}^D \sqrt{\lambda p_h} \,\sum_{m_0,\dots,m_D \ge0} \,\int_0^\infty e^{-w} w^{D+1}\,\prod_{h=0}^D \frac{(\lambda p_h)^{m_h+\frac{1}{2}}\, (x_h w)^{m_h}}{m_h!\,(m_h+1)!} \mathop{}\!\textrm{d} w\nonumber \end{align}
(2.26) \begin{align} = \frac{e^{-\lambda t}}{\lambda} \prod_{h=0}^D \sqrt{\frac{\lambda p_h}{x_h}} \int_0^\infty e^{-w} w^{\frac{D+1}{2}} \prod_{h=0}^D I_1\Bigl(2\sqrt{w\lambda p_h x_h}\Bigr)\mathop{}\!\textrm{d} w,\end{align}

where

\begin{equation*}I_1(z) = \sum_{n=0}^\infty \Bigl(\frac{z}{2}\Bigr)^{2n+1} \frac{1}{n!\,(n+1)!}\end{equation*}

is the modified Bessel function of order 1, for $ z\in \mathbb{C}$ . Note that if $x\longrightarrow \bar{x}\in \partial \text{Supp}\bigl(X(t)\bigr)$ , then there exists at least one $l \in \{ 0, \dots, D \}$ such that $x_l\longrightarrow0$ . For instance, if we assume that there is just one l satisfying the given condition, then the formula (2.26) turns into

\begin{equation*} \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\}/\mathop{}\!\textrm{d} x \longrightarrow p_l e^{-\lambda t}\prod_{\substack{h=0\\ h\not=l}}^D \sqrt{\frac{\lambda p_h}{x_h}} \int_0^\infty e^{-w} w^{\frac{D}{2}+1} \prod_{\substack{h=0\\ h\not=l}}^D I_1\Bigl(2\sqrt{w\lambda p_h x_h}\Bigr)\mathop{}\!\textrm{d} w.\end{equation*}

Similarly to the cyclic case (see (a), on the limit behavior of (2.16)), the probability (2.26) never converges to 0, because we are including the event where each velocity is chosen once. This can easily be observed from the formula (2.23) by putting $n_l= 1$ .
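As a sanity check, the equality between the double series in (2.25) and the Bessel integral form (2.26) can be verified numerically. A sketch for $D=1$ (all function names and the truncation/quadrature parameters are our arbitrary choices; $I_1$ is evaluated by its series and the $w$-integral by a trapezoidal rule):

```python
import math

def I1(z, terms=40):
    """Series of the modified Bessel function of order 1."""
    return sum((z / 2.0) ** (2 * n + 1) / (math.factorial(n) * math.factorial(n + 1))
               for n in range(terms))

def density_series(x1, t, lam, p0, p1, terms=30):
    """P{X(t) in dx}/dx via the double series (2.25), specialized to D = 1."""
    x0 = t - x1
    s = 0.0
    for n0 in range(1, terms):
        for n1 in range(1, terms):
            s += (math.factorial(n0 + n1)
                  * (lam * p0) ** n0 * (lam * p1) ** n1
                  * x0 ** (n0 - 1) * x1 ** (n1 - 1)
                  / (math.factorial(n0 - 1) * math.factorial(n0)
                     * math.factorial(n1 - 1) * math.factorial(n1)))
    return math.exp(-lam * t) / lam * s

def density_integral(x1, t, lam, p0, p1, upper=50.0, steps=8000):
    """The same density via the integral representation (2.26), D = 1."""
    x0 = t - x1
    pref = (math.exp(-lam * t) / lam
            * math.sqrt(lam * p0 / x0) * math.sqrt(lam * p1 / x1))
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        w = i * h
        f = (math.exp(-w) * w  # w^((D+1)/2) with D = 1
             * I1(2.0 * math.sqrt(w * lam * p0 * x0))
             * I1(2.0 * math.sqrt(w * lam * p1 * x1)))
        total += 0.5 * f if i in (0, steps) else f
    return pref * total * h
```

For moderate values of $\lambda t$ the two evaluations agree to several significant digits.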

It is interesting to observe that

\begin{align*}\int_{\text{Supp}\bigl(X(t)\bigr)} \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\}& = \frac{e^{-\lambda t}}{\lambda} \sum_{n_0,\dots,n_D \ge1}\, \Bigg(\,\sum_{h=0}^D n_h \Bigg)! \,\prod_{h=0}^D \frac{(\lambda p_h)^{n_h}}{(n_h-1)!\,n_h!}\nonumber \\[5pt] &\ \ \ \times \int_0^{t} x_1^{n_1-1} \mathop{}\!\textrm{d} x_1\int_0^{t-x_1} x_2^{n_2-1}\mathop{}\!\textrm{d} x_2\dots \int_{0}^{t-x_1-\dots-x_{D-2}} x_{D-1}^{n_{D-1}-1}\mathop{}\!\textrm{d} x_{D-1} \nonumber\\[5pt] & \ \ \ \times\int_0^{t-x_1-\dots-x_{D-1}} x_D^{n_D-1} \Bigg(t-\sum_{j=1}^D x_j\Bigg)^{n_0-1} \mathop{}\!\textrm{d} x_D\nonumber\\[5pt] & = \frac{e^{-\lambda t}}{\lambda t} \sum_{n_0,\dots,n_D \ge1}\,\Bigg(\,\sum_{h=0}^D n_h \Bigg) \prod_{h=0}^D \frac{(\lambda t p_h)^{n_h}}{n_h!}\nonumber\end{align*}
(2.27) \begin{align}& = e^{-\lambda t} \sum_{h=0}^D p_h \sum_{n_0,\dots,n_D \ge1}\, \frac{(\lambda t p_h )^{n_h-1}}{(n_h-1)!} \prod_{\substack{j=0\\ j\not=h}}^D \frac{(\lambda t p_j )^{n_j}}{n_j!}\nonumber\\[5pt] &= e^{-\lambda t} \sum_{h=0}^D p_h e^{\lambda t p_h} \prod_{\substack{j=0\\ j\not=h}}^D \bigl(e^{\lambda t p_j}-1\bigr)\nonumber\\[5pt] & = 1-\mathbb{P}\Bigg\{\bigcup_{h=0}^D \{N_h(t) =0\}\Bigg\}. \end{align}

For details about the last equality, see Appendix B.1. If $p_0 = \dots = p_D = 1/(D+1)$ , then the probability (2.27) reduces to

\begin{equation*}e^{-\frac{\lambda t D}{D+1}} \bigl(e^{\frac{\lambda t }{D+1}}-1\bigr)^D.\end{equation*}
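The closed form in (2.27) can also be checked directly: conditionally on $N(t)=m$, the $m+1$ displacements choose their velocities independently with probabilities $p_h$, so $\mathbb{P}\bigl\{\bigcap_h\{N_h(t)\ge1\}\bigr\}$ follows from inclusion–exclusion over the set of unused velocities, mixed over the Poisson law. A sketch (function names ours):

```python
import math
from itertools import combinations

def p_all_used_closed(ps, lam, t):
    """Closed form e^{-lam t} sum_h p_h e^{lam t p_h} prod_{j!=h}(e^{lam t p_j}-1),
    as in (2.27)."""
    total = 0.0
    for h, ph in enumerate(ps):
        prod = 1.0
        for j, pj in enumerate(ps):
            if j != h:
                prod *= math.expm1(lam * t * pj)
        total += ph * math.exp(lam * t * ph) * prod
    return math.exp(-lam * t) * total

def p_all_used_incl_excl(ps, lam, t):
    """P{every velocity used in [0, t]} by inclusion-exclusion on the unused
    set A: given N(t) = m, the m+1 displacements avoid A with probability
    (1 - p_A)^(m+1); mixing over the Poisson law gives (1-p_A) e^{-lam t p_A}."""
    total = 0.0
    for r in range(len(ps) + 1):
        for A in combinations(range(len(ps)), r):
            q = 1.0 - sum(ps[i] for i in A)
            total += (-1) ** r * q * math.exp(-lam * t * (1.0 - q))
    return total
```

In the symmetric case $p_h = 1/(D+1)$ both expressions reduce to $e^{-\lambda t D/(D+1)}\bigl(e^{\lambda t/(D+1)}-1\bigr)^D$.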

We can easily obtain the distribution of the position of an arbitrary D-dimensional complete minimal random motion governed by a homogeneous Poisson process, by using the above probabilities and the relationship (2.7). $\diamond$

2.1.1. Distribution on the boundary of the support

Let X be a minimal random motion with velocities $v_0,\dots,v_D$ . Theorem 2.2 describes the joint probability in the inner part of the support of the position X(t), i.e. Conv $(v_0t,\dots, v_Dt)$ , $t\ge0$ . Now we deal with the distribution over the boundary of $\text{Supp}\bigl(X(t)\bigr)$ , which can be partitioned into $\sum_{H=0}^{D-1} \binom{D+1}{H+1}$ components, corresponding to those in (2.1) with $H<D$ .

Fix $H \in\{ 0,\dots,D-1\}$ and let $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,D\}}$ be a combination of $H+1$ indexes in $\{0,\dots, D\}$ . At time $t\ge0$ , the motion X lies on the set $\overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ if and only if it moves with all, and only, the velocities $v_{i_0},\dots, v_{i_H}$ in the time interval [0, t]. Hence, if $X(t)\in \overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ a.s. we can write the following: for $k=0,\dots,H$ ,

(2.28) \begin{equation}X(t) = \sum_{h=0}^H v_{i_h}T_{(i_h)}(t) = v_{i_k}t +\sum_{\substack{h=0\\ h\not=k}}^H (v_{i_h}-v_{i_k}) T_{(i_h)}(t) = g_k\!\left(T_{(i_k^-)}^H(t)\right) ,\end{equation}

where $ T_{(\!\cdot\!)}^{H}(t) = \bigl(T_{(i_0)}(t),\dots, T_{(i_H)}(t)\bigr)$ and

\begin{equation*} T_{(i_k^-)}^{H}(t) = \Bigl(T_{(i_h)}(t)\Bigr)_{\substack{h=0, \dots, H\\ h\not=k}}.\end{equation*}

The function $g_k\;:\;[0,+\infty)^H\longrightarrow \mathbb{R}^D$ in (2.28) is affine.

Keeping in mind that $v_0,\dots, v_D$ are affinely independent, we have that $\text{dim}\Bigl(\text{Conv}(v_{i_0}t, \dots, v_{i_H}t)\Bigr) = H$ , and from Lemma 1.1, there exists an orthogonal projection onto an H-dimensional space, $p_H\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^H$ , such that $v_{i_0}^H = p_H(v_{i_0}),\dots,v_{i_H}^H = p_H(v_{i_H})$ are affinely independent and such that we can characterize the vector X(t), when it lies on the set $\overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ a.s., through its projection $X^H(t) = p_H\bigl(X(t)\bigr)$ . Hence, we just need to study the projected motion

(2.29) \begin{equation}X^H(t) = v_{i_k}^H t+\sum_{\substack{h=0\\ h\not=k}}^H \Bigl(v_{i_h}^H-v_{i_k}^H\Bigr) T_{(i_h)}(t) = g_k^H\Bigl(T_{(i_k^-)}^H(t)\Bigr), \ \ \ t\ge0,\, k=0,\dots, H.\end{equation}

It is straightforward to see that the vector $X^{H^-}(t)$ containing the components of X(t) that are not included in $X^H(t)$ is such that, for $x\in\overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ with $x^H = p_H(x)\in \mathbb{R}^H$ and $x^{H^-}\in \mathbb{R}^{D-H}$ denoting the other entries of x,

(2.30) \begin{equation}\mathbb{P}\Bigg\{X^{H^-}(t)\in \mathop{}\!\textrm{d} y\,\Big|\, X^H(t) = x^H,\, \bigcap_{i\in\{0,\dots, D\}\setminus I_H} \{N_i(t) = 0\}\Bigg\} = \delta \bigl(y-x^{H^-}\bigr)\mathop{}\!\textrm{d} y,\end{equation}

with $y\in \mathbb{R}^{D-H}$ and $\delta$ the Dirac delta function centered in 0.

Now, the function $g_k^H\;:\;\mathbb{R}^H\longrightarrow \mathbb{R}^H$ in (2.29) is a bijection, and we can write, for all k,

(2.31) \begin{equation}T^H_{(i_k^-)}(t) = \bigl(g_k^H\bigr)^{-1}\Bigl( X^H(t)\Bigr) = \Biggl(\bigl(g_k^H\bigr)_h^{-1}\Bigl( X^H(t)\Bigr)\Biggr)_{\substack{h=0,\dots, H\\ h\not=k}} = \Biggl[v_{i}^H - v_{i_k}^H\Biggr]^{-1}_{\substack{i\in I_H\\ i\not=i_k}}\,\Bigl(X^H(t)-v_{i_k}^Ht\Bigr).\end{equation}

Note that the formula (2.31) coincides with (2.4) if $H = D$ .
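Numerically, (2.31) is just a linear solve. A minimal sketch for $H=2$ (the function name is ours; we take $k=0$ and solve the resulting $2\times 2$ system by Cramer's rule), recovering the times spent with each velocity from the projected position:

```python
def times_from_position(x, t, v):
    """Invert (2.31) for H = 2, k = 0: solve
    x = v[0]*t + (v[1]-v[0])*T1 + (v[2]-v[0])*T2 by Cramer's rule.

    v holds the three (projected) velocities as 2-vectors; their affine
    independence guarantees a nonzero determinant."""
    a = (v[1][0] - v[0][0], v[2][0] - v[0][0])   # first row: x-components
    b = (v[1][1] - v[0][1], v[2][1] - v[0][1])   # second row: y-components
    det = a[0] * b[1] - a[1] * b[0]
    r = (x[0] - v[0][0] * t, x[1] - v[0][1] * t)
    T1 = (r[0] * b[1] - r[1] * a[1]) / det
    T2 = (a[0] * r[1] - b[0] * r[0]) / det
    return T1, T2, t - T1 - T2                   # T_{(i1)}, T_{(i2)}, T_{(i0)}
```

For instance, with velocities $(-1,0)$, $(1,0)$, $(0,1)$, $t=1$, and true times $(T_{(i_0)},T_{(i_1)},T_{(i_2)})=(0.5,0.2,0.3)$, the position is $(-0.3,0.3)$ and the times are recovered exactly.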

Theorem 2.3. Let X be a minimal finite-velocity random motion in $\mathbb{R}^D$ satisfying (H1)–(H2). Let $H = 0,\dots, D-1$ and $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,D\}}$ . Then the orthogonal projection $p_H\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^H$ defined in Lemma 1.1 (there $p_R$ ) exists, and $v_{i_0}^H = p_H(v_{i_0}),\dots,v_{i_H}^H = p_H(v_{i_H})$ are affinely independent. Furthermore, for $t\ge0,\ x\in \overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t),\ n_{i_0},\dots,n_{i_H}\in \mathbb{N}$ , and $k=0,\dots, H$ ,

(2.32) \begin{align}& \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^H\{N_{i_h}(t) = n_{i_h}\},\,\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\} ,\, V(t) = v_{i_k}\Bigg\}/ \mathop{}\!\textrm{d} x\\[5pt] & = \prod_{\substack{h=0\\ h\not=k}}^H f_{\sum_{j=1}^{n_{i_h}} W_j^{(i_h)}}\biggl(\bigl(g_{k}^H\bigr)_h^{-1}\bigl(x^H\bigr)\biggr) \,\Bigg|\biggl[v_{i}^H - v_{i_k}^H\biggr]_{\substack{i\in I_H\\ i\not=i_k}}\Bigg|^{-1} \, \mathbb{P}\!\left\{N_{(i_k)}\!\left(t-\sum_{\substack{h=0\\ h\not = k}}^H \bigl(g_{k}^H\bigr)_h^{-1}\bigl(x^H\bigr)\right) = n_{i_k}-1\right\}\nonumber\\[5pt] &\ \ \ \times \mathbb{P}\big\{C_{n_{i_0}+\dots+n_{i_H}} = (n_0,\dots,n_D), V(t) = v_{i_k}\big\},\nonumber \end{align}

with $x^H = p_H(x)$ , $\bigl(g_k^H\bigr)^{-1}$ given in (2.31), $I_{H^- } = \{0,\dots, D\}\setminus I_H$ , and suitable $n_0,\dots,n_D$ .

Note that the projection defined in Lemma 1.1 is usually not the only suitable one.

Proof. In light of the considerations above, the proof follows along the same lines as that of Theorem 2.2.

Remark 2.4. (Canonical motion.) Let X be a canonical (minimal) random motion, governed by a point process N, and $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,D\}},\ H=0,\dots, D-1$ . We build the projection $p_H$ so that it selects the first H linearly independent rows of $(e_{i_0} \ \cdots\ e_{i_H})$ , if $i_0 = 0$ , and the last ones if $i_0\not=0$ . Then $\left(e_{i_0}^H \ \cdots\ e_{i_H}^H\right) = (0\ I_H)$ , and by proceeding as shown in Remark 2.2, we obtain

\begin{equation*}T_{(\!\cdot\!)}^H(t) = \Bigg(t-\sum_{h=1}^H X_{i_h}^H(t), X^H(t)\Bigg);\end{equation*}

note that in this case the indexes of the velocities ( $i_1,\dots,i_H$ ) coincide with the indexes of the selected coordinates of the motion.

Now, if Y is a minimal random motion with velocities $v_0,\dots, v_D$ and governed by $N_Y \stackrel{d}{=}N$ , for each $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,D\}},\ H=0,\dots, D-1$ , by using the arguments leading to (2.7), we can write

(2.33) \begin{equation}Y^H(t) \stackrel{d}{=} v_{i_0}^H t +\biggl[v_{i}^H - v_{i_k}^H\biggr]_{\substack{i\in I_H\\ i\not=i_k}} X^H(t).\end{equation}

We point out that the motions are related through the times of the displacements with each velocity and not directly through their coordinates. This means that $X^H$ and $Y^H$ are not necessarily obtained through the same projection, but they are respectively related to processes $T_{(\!\cdot\!)}^H$ and $T_{(\!\cdot\!)}^{Y,H}$ that have the same finite-dimensional distributions, since $N_Y \stackrel{d}{=}N$ (see the proof of Theorem 2.1). $\diamond$

Note that Remark 2.4 holds even though the hypotheses (H1)–(H2) are not assumed.

By comparing Theorem 2.2 with Theorem 2.3, we note that there is a strong similarity between the distribution of a D-dimensional minimal motion over its singularity of dimension H (in fact, dim $\Bigl(\text{Conv}(v_{i_0}t, \dots, v_{i_H}t)\Bigr)=H,\ t>0$ ) and the distribution of an H-dimensional minimal motion moving with velocities $v_{i_0}^H=p_H(v_{i_0}), \dots, v_{i_H}^H = p_H(v_{i_H})$ . These kinds of relationships are further investigated in the next sections (see also the next example); in particular, Theorem 4.1 states a result concerning a wide class of random motions.

Example 2.4. (Complete motions: distribution over the singular components.) Let us consider the complete canonical random motion X studied in Example 2.3. Let $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,D\}}$ and $I_{H^-} = \{0,\dots, D\}\setminus I_H$ , with $H=0,\dots, D-1$ . We now compute the probability density of being in $x\in\overset{\circ}{\text{Conv}}(e_{i_0}t, \dots, e_{i_H}t)$ at time $t\ge0$ . Keeping in mind Theorem 2.3 and Remark 2.4 (and proceeding as shown for the probability (2.23)), for integers $n_{i_0},\dots ,n_{i_H}\ge1$ and $k = 0,\dots,H$ , we have that

\begin{align*} \mathbb{P}&\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^H\{N_{i_h}(t) = n_{i_h}\},\,\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\} ,\, V(t) = e_{i_k}\Bigg\}/ \mathop{}\!\textrm{d} x \nonumber\\[5pt] &= \frac{e^{-\lambda t}}{\lambda}\,\Bigg(\,\sum_{h=0}^H n_{i_h} -1\Bigg)! \,n_{i_k}\prod_{h=0}^H \frac{(\lambda p_{i_h})^{n_{i_h}}\, x_{i_h}^{n_{i_h}-1}}{(n_{i_h}-1)!\,n_{i_h}!},\end{align*}

where $x_{i_0} = t-\sum_{j=1}^H x_{i_j}$ . Clearly, by working as shown in Example 2.3, we obtain

\begin{align}\mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\}\Bigg\}&/\mathop{}\!\textrm{d} x = \frac{e^{-\lambda t}}{\lambda} \sum_{n_{i_0},\dots,n_{i_H} \ge1}\, \Bigg(\,\sum_{h=0}^H n_{i_h} \Bigg)! \,\prod_{h=0}^H \frac{(\lambda p_{i_h})^{n_{i_h}}\, x_{i_h}^{n_{i_h}-1}}{(n_{i_h}-1)!\,n_{i_h}!} \nonumber\\[5pt] & = \frac{e^{-\lambda t}}{\lambda} \prod_{h=0}^H \sqrt{\frac{\lambda p_{i_h}}{x_{i_h}}} \int_0^\infty e^{-w} w^{\frac{H+1}{2}} \prod_{h=0}^H I_1\Bigl(2\sqrt{w\lambda p_{i_h} x_{i_h}}\Bigr)\mathop{}\!\textrm{d} w\nonumber\end{align}

and

(2.34) \begin{align}\int_{\text{Conv}(e_{i_0}t,\dots, e_{i_H}t)}&\mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\}\Bigg\} = e^{-\lambda t} \sum_{h=0}^H p_{i_h} e^{\lambda t p_{i_h}} \prod_{\substack{j=0\\ j\not=h}}^H \bigl(e^{\lambda t p_{i_j}}-1\bigr)\nonumber\\[5pt] & = \mathbb{P}\Bigg\{\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\}\Bigg\}-\mathbb{P}\Bigg\{\bigcup_{i\in I_H}\{N_{i}(t)=0\},\,\bigcap_{i\in I_{H^-}}\{N_{i}(t) = 0\}\Bigg\}. \end{align}

Further details about the last equality are in Appendix B.1.

Let Y be a complete minimal motion governed by a counting process $N_Y \stackrel{d}{=}N$ and moving with velocities $v_0,\dots, v_D$ . By suitably applying the relationship (2.33) and the above probabilities, we can easily obtain the distribution of the position Y(t) over its singular components. $\diamond$

3. Random motions with a finite number of velocities

Proposition 3.1. Let X be a random motion governed by a point process N and moving with velocities $v_0,\dots, v_M\in \mathbb{R}^D, \ M\in\mathbb{N}$ , such that $\textit{dim}\Bigl(\textit{Conv}(v_{0},\dots,v_{M})\Bigr) = R\le D$ . Then the orthogonal projection $p_R\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^R$ defined in Lemma 1.1 exists, and for $t\ge0$ , we can characterize X(t) through its projection $X^R(t) = p_R\bigl(X(t)\bigr)$ , representing the position of an R-dimensional motion moving with velocities $p_R(v_0), \dots, p_R(v_M)$ and governed by N.

Proof. The projection $p_R$ exists since the hypotheses of Lemma 1.1 are satisfied. By keeping in mind the characteristics of $p_R$ (see Lemma 1.1), we immediately obtain that for any $A\subset \text{Conv}(v_{0},\dots,v_{M})$ and its projection through $p_R$ , $A^R \subset \text{Conv}\bigl(p_R(v_{0}),\dots,p_R(v_{M})\bigr)$ , we have $\{\omega\in \Omega\;:\;X(\omega,t)\in A\} = \{\omega\in \Omega\;:\;X^R(\omega,t)\in A^R\}$ .

Proposition 3.1 states that if $\text{dim}\Bigl(\text{Conv}(v_{0},\dots,v_{M})\Bigr) = R\le D$ , then we can equivalently study either the process X, a random motion with $M+1$ velocities in $\mathbb{R}^D$ , or its projection $X^R$ , a random motion of $M+1$ velocities in $\mathbb{R}^R$ . This means that we can limit ourselves to the study of random motions where the dimension of the space coincides with the dimension of the state space. Clearly, for $R=D$ Proposition 3.1 is not of interest since $p_R$ is the identity function.

Remark 3.1. (Motions with affinely independent velocities.) Let X be a random motion moving with affinely independent velocities $v_0,\dots, v_H\in\mathbb{R}^D, \ H\le D$ . In light of Proposition 3.1, there exists an orthogonal projection $p_H$ , as given in Lemma 1.1, such that studying $X^H = \big\{p_H\bigl(X(t)\bigr)\big\}_{t\ge0}$ is equivalent to studying X. The process $X^H$ is a minimal random motion moving with velocities $p_H(v_0), \dots, p_H(v_H)$ , and if it satisfies (H1)–(H2), then Theorems 2.2 and 2.3 provide its probability law. $\diamond$

Example 3.1. (Motion with canonical velocities.) Let X be a D-dimensional motion moving with the first H+1 canonical velocities $e_0,\dots,e_H$ and satisfying (H1)–(H2). For $t\ge0$ , Supp $\bigl(X(t)\bigr) = \{x\in \mathbb{R}^D\;:\; x\ge0,\ x_{H+1},\dots,x_D = 0,\ \sum_{i=1}^H x_i \le t\}$ , and by following the arguments of Section 2.1.1, we can derive the probability distribution of X(t) in the inner part of its support by using the formula (2.32), which uses the connection to the projected position $p_H\bigl(X(t)\bigr)$ . In this case, the last probability of (2.32) becomes $\mathbb{P}\big\{C_{n_0+\dots+n_H} = (n_0,\dots,n_H), V(t) = v_{i_k}\big\}$ with $n_0,\dots, n_H\not=0,$ and therefore it coincides with the probability of the H-dimensional canonical motion. $\diamond$

3.1. Motions in $\mathbb{R}^D$ with D-dimensional state space

Thanks to Proposition 3.1 and Remark 3.1, in order to cover the analysis of all the possible motions (under the given assumptions), we need to deal with random motions in $\mathbb{R}^D$ moving with $M+1$ velocities, $M>D$ , and with state space of dimension D.

Proposition 3.2. Let X be a random motion governed by a point process N and moving with velocities $v_0,\dots, v_M\in \mathbb{R}^D, \ D<M\in\mathbb{N}$ , such that $\textit{dim}\Bigl(\textit{Conv}(v_{0},\dots,v_{M})\Bigr) = D$ . Then there exists a minimal random motion $\tilde{X}$ in $\mathbb{R}^M$ such that X is the marginal vector process of $\tilde{X}$ represented by its first D components.

Proof. Let V, N be the processes respectively governing the velocity and the displacements of X. Let $\pi_D\;:\;\mathbb{R}^M\longrightarrow \mathbb{R}^D, \ \pi_D(\tilde{x}) = (I_D\ 0)\tilde{x},\, \tilde{x}\in\mathbb{R}^M$ . Then there exist $\tilde{v}_0,\dots, \tilde{v}_M\in \mathbb{R}^M$ affinely independent such that $\pi_D(\tilde{v}_h) = v_h$ for all h. The random motion $\tilde{X}$ with displacements governed by N and velocity process $\tilde{V}$ , with state space $\{\tilde{v_0},\dots,\tilde{v_M}\}$ and such that $\pi_D\bigl(\tilde{V}(t)\bigr) = V(t)$ (i.e. $\{\tilde{V}(t) = \tilde{v}_h\} \iff \{V(t) = v_h\}\ \forall\ h,t$ ), is a minimal random motion in $\mathbb{R}^M$ , and $\pi_D\bigl(\tilde{X}(t)\bigr) = X(t)\ \forall\ t$ .

From the proof of Proposition 3.2 it is obvious that there exist infinitely many M-dimensional stochastic motions $\tilde{X}$ of the required form.
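For instance, in the case $D=1$, $M=2$, one possible lift (among the infinitely many; the construction below is ours) appends an extra coordinate that breaks collinearity: with $v_0=0$, $v_1=1$, $v_2=-1$ it returns exactly the velocities $\tilde v_0=(0,1)$, $\tilde v_1=(1,0)$, $\tilde v_2=(-1,0)$ used in Example 3.2 below.

```python
def lift_1d_to_2d(v):
    """Lift three distinct collinear velocities v = [v0, v1, v2] in R to
    velocities in R^2 with pi_1(v~_h) = v_h: give v~_0 a second coordinate 1
    and the others 0 (affine independence then only needs v1 != v2)."""
    return [(v[0], 1.0), (v[1], 0.0), (v[2], 0.0)]

def affinely_independent_2d(w):
    """Three points in R^2 are affinely independent iff the determinant of
    their differences is nonzero (i.e. they are not collinear)."""
    (x0, y0), (x1, y1), (x2, y2) = w
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) > 1e-12
```

Projecting the lifted velocities on the first coordinate recovers the original ones, as required in the proof above.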

Remark 3.2. (Distribution of the position of the motion.) Let X be a random motion with velocities $v_0,\dots, v_M\in \mathbb{R}^D, \ M\in\mathbb{N}$ , such that $\text{dim}\Bigl(\text{Conv}(v_{0},\dots,v_{M})\Bigr) = D$ . In light of Proposition 3.2, we provide the distribution of $X(t),\, t\ge0,$ in terms of the probabilities of the positions of minimal random motions.

Let $\tilde{X}$ be a minimal random motion as in Proposition 3.2 and $\pi_D$ the orthogonal projection in the proof above. Now, for $t\ge0,\,x\in \overset{\circ}{\text{Conv}}(v_{0}t, \dots, v_{M}t)$ , natural numbers $n_0,\dots,n_M\ge1$ , and $k=0,\dots, M$ , we can write

(3.1) \begin{align} \mathbb{P}&\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcap_{h=0}^M\{N_h(t) = n_h\} ,\, V(t) = v_k\Bigg\}\\[5pt] & = \int_{A_x}\mathbb{P}\Bigg\{\tilde{X}(t)\in \mathop{}\!\textrm{d} (x,y),\,\bigcap_{h=0}^M\{N_h(t) = n_h\} ,\, \tilde{V}(t) = \tilde{v}_k\Bigg\}, \nonumber\end{align}

where $A_x = \big\{y\in \mathbb{R}^{M-D}\;:\; (x,y)\in \text{Conv}(\tilde{v}_{0}t,\dots,\tilde{v}_{M}t)\big\}$ ; clearly, $\pi_D(x,y) =(I_D \ 0)(x,y)= x$ . Under the assumptions (H1)–(H2), the probability (3.1) can be written explicitly by means of Theorem 2.2.

Remember that, unlike in the minimal-motion case, the support of X(t) is not partitioned by the elements appearing in (2.1) (since they are not disjoint). Thus, for fixed $t\ge0$ and $x\in \text{Conv}(v_{0}t, \dots, v_{M}t)$ there may exist several combinations of velocities (and their corresponding times) such that the motion is in position x at time t. With $H=1,\dots, M$ , let $ I_{x,t,H}^{(1)},\dots, I_{x,t,H}^{(L_H)}\in \mathcal{C}^{\{0,\dots, M\}}_{H+1}$ be the $L_H\le\binom{M+1}{H+1}$ possible combinations of $H+1$ velocities such that the motion can lie in x at time t, i.e. $x\in \overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ with $i_0,\dots,i_H\in I_{x,t,H}^{(l)}, \ \forall \ l,H$ (clearly, for some H it can happen that there are no suitable combinations in $\mathcal{C}^{\{0,\dots, M\}}_{H+1}$ , so $L_H = 0$ ). In general we can write (omitting the indexes x, t of $I_{x,t,H}^{(l)}$ )

(3.2) \begin{align}&\mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\}/\mathop{}\!\textrm{d} x \nonumber \\[5pt] &=\sum_{k=0}^M \mathbb{P}\!\left\{X(t)\in \mathop{}\!\textrm{d} x,\,\bigcup_{H=1}^M\bigcup_{l=1}^{L_H} \!\left\{ \bigcap_{i\in I_{H}^{(l)}}\{N_i(t)\ge1\},\, \bigcap_{i\in I_{H^-}^{(l)}}\{N_i(t) = 0\}\right\},\, V(t) = v_k\right\} /\mathop{}\!\textrm{d} x\nonumber\\[5pt] & = \sum_{k=0}^M\sum_{H=1}^M\sum_{l=1}^{L_H} \sum_{\substack{n_h=1\\ h\in I_{H}^{(l)}}}^\infty \mathbb{P}\!\left\{X(t)\in \mathop{}\!\textrm{d} x,\, \bigcap_{i\in I_{H}^{(l)}}\{N_i(t)=n_i\},\, \bigcap_{i\in I_{H^-}^{(l)}}\{N_i(t) = 0\},\, V(t) = v_k\right\} /\mathop{}\!\textrm{d} x, \end{align}

where $I_{H^-}^{(l)} = \{0,\dots, M\}\setminus I_H^{(l)}$ for all l, H.

Now, under the hypotheses (H1)–(H2), the probabilities appearing in (3.2) can be obtained by using previous results. Consider the combination of velocities $I_{H}^{(l)} =\{i_0,\dots, i_H\}$ :

(a) If $\text{dim}\Bigl(\text{Conv}(v_{i_0},\dots,v_{i_H})\Bigr) = H (\!\le \!D)$ , then we can compute the corresponding probability in (3.2) by suitably using Theorem 2.3 (if $H=D$ , then the projection described in Theorem 2.3 turns into the identity function).

(b) If $\text{dim}\Bigl(\text{Conv}(v_{i_0},\dots,v_{i_H})\Bigr) = R < H$ , then we use the following argument. In light of Proposition 3.1, we can consider the orthogonal projection $p_R$ defined in Lemma 1.1 and study the process $X^R$ with velocities $v_{i_0}^R = p_R(v_{i_0}),\dots, v_{i_H}^R = p_R(v_{i_H})$ . Then $X^R$ is an R-dimensional motion with $H+1$ velocities, and we can proceed as shown for the probability (3.1). Let us denote by $\tilde{X}^R$ the H-dimensional minimal motion such that $\pi_R\bigl(\tilde{X}^R(t)\bigr) =(I_R\ 0)\tilde{X}^R(t) = X^R(t), \ t\ge0$ , and by $\tilde{V}^R$ the corresponding velocity process, with state space $\{\tilde{v}^R_{i_0},\dots, \tilde{v}_{i_H}^R\},$ where $\pi_R(\tilde{v}^R_{i_h}) = v_{i_h}^R\ \forall\ h$ . Now, for $n_{i_0}, \dots, n_{i_H}\in \mathbb{N}$ and $k=0,\dots, H$ ,

    (3.3) \begin{align}&\mathbb{P}\left\{X(t)\in \mathop{}\!\textrm{d} x,\, \bigcap_{i\in I_{H}^{(l)}}\{N_i(t)=n_i\},\, \bigcap_{i\in I_{H^-}^{(l)}}\{N_i(t) = 0\},\, V(t) = v_{i_k}\right\}/ \mathop{}\!\textrm{d} x\\[5pt] & = \int_{A_{x}}\mathbb{P}\left\{\tilde{X}^R(t)\in \mathop{}\!\textrm{d} (x^R,y),\, \bigcap_{i\in I_{H}^{(l)}}\{N_i(t)=n_i\},\, \bigcap_{i\in I_{H^-}^{(l)}}\{N_i(t) = 0\},\, \tilde{V}^R(t) = \tilde{v}^R_{i_k}\right\} /\mathop{}\!\textrm{d} x^R, \nonumber\end{align}
    where $A_x = \big\{y\in \mathbb{R}^{H-R}\;:\; (x^R,y) \in \text{Conv}(\tilde{v}_{i_0}t,\dots,\tilde{v}_{i_H}t)\big\}$ , and clearly $\pi_R(x^R,y) = x^R$ . $\diamond$

Example 3.2. Let X be a one-dimensional cyclic motion moving with velocities $v_0 = 0, v_1 = 1, v_2 = -1$ and $p_h = \mathbb{P}\{V(0) = v_h\}>0\ \forall\ h$ . Let N be its governing Poisson-type process such that $W^{(h)}_j\sim Exp(\lambda_h),\ h=0,1,2, \ j\in\mathbb{N}$ . We now consider the two-dimensional minimal random motion (X, Y) moving with velocities $\tilde{v}_0 = (0,1), \tilde{v}_1 =(1, 0), \tilde{v}_2 = (\!-\!1,0)$ governed by N. Let $t\ge0$ and $x\in (0,t)$ . In order to reach x, the motion must perform at least one displacement with $v_1$ . Thus, keeping in mind the cyclic routine for the velocities ( $\dots \rightarrow v_0\rightarrow v_1\rightarrow v_2\rightarrow\dots$ ), we see that the probability reads

(3.4) \begin{align}&\mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\} \nonumber\\[5pt] &=\mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x,\, N_0(t) = 1,\, N_1(t) = 1,\, N_2(t) = 0\} \nonumber\\[5pt] &\quad + \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x,\, N_0(t) = 0,\, N_1(t) = 1,\, N_2(t) = 1\} \nonumber\\[5pt] &\quad + \sum_{j=0}^{2} \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\, \bigcup_{n=1}^\infty \{N(t) = 3n+j-1\}\Bigg\}\nonumber\\[5pt] & = \mathbb{P}\left\{W_1^{(0)}\in \mathop{}\!\textrm{d}(t- x),\,V(0) = v_0\right\} + \mathbb{P}\left\{W_1^{(1)}\in \mathop{}\!\textrm{d}\Bigl(\frac{t+x}{2}\Bigr),\,V(0) = v_1\right\}\nonumber\\[5pt] &\quad +\sum_{j=0}^{2}\,\int_0^{t-x} \mathbb{P}\Bigg\{X(t)\in \mathop{}\!\textrm{d} x,\, Y(t)\in \mathop{}\!\textrm{d} y,\,\bigcup_{n=1}^\infty \{N(t) = 3n+j-1\}\Bigg\}.\end{align}

The first two terms are respectively given by $p_0 \lambda_0e^{-\lambda_0 (t-x)}\mathop{}\!\textrm{d} x$ and $p_1\lambda_1 e^{-\lambda_1 \frac{t+x}{2}}\mathop{}\!\textrm{d} x$ . By suitably applying Theorem 2.2 or Example 2.2, the interested reader can explicitly compute (3.4). Note that the integral in (3.4) is of the form

\begin{equation*}\int_0^{t-x} y^{n_0} \Bigl(\frac{t+x-y}{2}\Bigr)^{n_1}\Bigl(\frac{t-x-y}{2}\Bigr)^{n_2} \mathop{}\!\textrm{d} y\end{equation*}

with suitable natural numbers $n_0,n_1,n_2$ . $\diamond$
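Since the integrand above is a polynomial in y, the integral admits an exact evaluation. The following sketch does this with exact rational arithmetic; the function name `cyclic_integral` and the test values of $t, x, n_0, n_1, n_2$ are ours, chosen for illustration only.

```python
from fractions import Fraction as F

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (ascending powers of y)."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_pow(p, n):
    r = [F(1)]
    for _ in range(n):
        r = poly_mul(r, p)
    return r

def cyclic_integral(t, x, n0, n1, n2):
    """Exact value of int_0^{t-x} y^{n0} ((t+x-y)/2)^{n1} ((t-x-y)/2)^{n2} dy."""
    t, x = F(t), F(x)
    p = [F(0)] * n0 + [F(1)]                   # y^{n0}
    q = poly_pow([(t + x) / 2, F(-1, 2)], n1)  # ((t+x-y)/2)^{n1}
    r = poly_pow([(t - x) / 2, F(-1, 2)], n2)  # ((t-x-y)/2)^{n2}
    c = poly_mul(poly_mul(p, q), r)
    u = t - x
    # integrate term by term over [0, t-x]
    return sum(a * u ** (k + 1) / (k + 1) for k, a in enumerate(c))
```

The exact value can be cross-checked against a midpoint-rule approximation of the same integral.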

4. Random motions governed by a non-homogeneous Poisson process

Here we consider a random motion X moving with a natural number of finite velocities $v_0,\dots, v_M\in\mathbb{R}^D$, $M\in \mathbb{N}$ , whose movements are governed by a non-homogeneous Poisson process N with rate function $\lambda\;:\;[0,\infty)\longrightarrow[0,\infty)$ . In this case N cannot explode in a bounded time interval if and only if $\Lambda(t) = \int_0^t \lambda(s)\mathop{}\!\textrm{d} s < \infty$ for all $t\ge0$ . We note that the process X satisfies (H2) if and only if $\lambda(t) = \lambda>0\ \forall\ t$ .
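Sample paths of such a process N can be generated by Lewis–Shedler thinning whenever the rate function is bounded on the horizon of interest; the sketch below is illustrative (the names `nhpp_times` and `rate_max` are ours), assuming a dominating constant rate.

```python
import random

def nhpp_times(rate, rate_max, t_end, rng):
    """Event times on [0, t_end] of a non-homogeneous Poisson process
    with intensity rate(t) <= rate_max, via Lewis-Shedler thinning."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)          # candidate from a rate-rate_max HPP
        if t > t_end:
            return times
        if rng.random() * rate_max <= rate(t):  # keep with probability rate(t)/rate_max
            times.append(t)
```

For instance, with $\lambda(s) = 2s$ on [0, 1] one has $\Lambda(1) = 1$, so the mean number of events per path should be close to 1.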

Let us assume that for all t, we have $p_i =\mathbb{P}\{V(0) = v_i\}$ and $p_{i,j}=\mathbb{P}\{V(t+\mathop{}\!\textrm{d} t) = v_j\,|\, V(t)=v_i,\,N(t, t+\mathop{}\!\textrm{d} t] = 1\} \ge 0$ for each $i,j = 0,\dots, M$ . Let us also consider the notation, with $t\ge 0$ , $x\in \text{Supp}\bigl(X(t)\bigr)$ ,

\begin{equation*}p(x,t)\mathop{}\!\textrm{d} x = \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x\} = \sum_{i=0}^M \mathbb{P}\{X(t)\in \mathop{}\!\textrm{d} x,\, V(t) = v_i\} = \sum_{i=0}^M f_i(x,t)\mathop{}\!\textrm{d} x.\end{equation*}

It can be proved that the functions $f_i$ satisfy the differential problem (with $<\cdot,\cdot>$ denoting the dot product in $\mathbb{R}^D$ )

(4.1) \begin{equation}\begin{cases}\dfrac{\partial f_i}{\partial t} = -<\nabla_x f_i, v_i > -\lambda(t)f_i +\lambda(t)\sum_{j=0}^M p_{j,i}f_j,\ \ \ i=0,\dots, M,\\[8pt] f_i(x,t)\ge0,\ \ \ \forall\ i,x,t,\\[5pt] \int_{\text{Conv}(v_0t,\dots,v_Mt)} \sum_{i=0}^M f_i(x,t)\mathop{}\!\textrm{d} x = 1 - \mathbb{P}\Big\{ \bigcup_{h=0}^M \{N_h(t) = 0\}\Big\},\end{cases}\end{equation}

where $\nabla_x f$ represents the x-gradient vector of f and

\begin{equation*}\mathbb{P}\left\{ \bigcup_{h=0}^M \{N_h(t) = 0\}\right\}>0 \iff \Lambda(t)<\infty\ \forall\ t.\end{equation*}

We refer to [Reference Cinque and Orsingher4, Reference Cinque and Orsingher5, Reference Kolesnik and Turbin20, Reference Orsingher27] for proofs similar to the one leading to (4.1).

Remark 4.1. (Complete minimal motions.) Let X be a complete canonical (minimal) random motion (see Example 2.3) such that $p_{i,j} = p_j$ for all i, j. The probability law of X satisfies the differential problem

(4.2) \begin{equation}\begin{cases}\displaystyle\frac{\partial f_0}{\partial t} =\lambda(t)p_0 \sum_{j=1}^D f_j + \lambda(t)(p_0-1)f_0,\\[5pt] \displaystyle\frac{\partial f_i}{\partial t} = -\frac{\partial f_i}{\partial x_i} + \lambda(t)p_i\sum_{\substack{j=0\\ j\not=i}}^D f_j + \lambda(t)(p_i-1)f_i,\ \ i=1,\dots,D,\\[5pt] f_i(x,t)\ge0, \ \ \forall \ i,x,t, \ \ \ \int_{\text{Supp}\bigl(X(t)\bigr)} \sum_{i=0}^D f_i(x,t)\mathop{}\!\textrm{d} x = 1-\mathbb{P}\bigg\{\bigcup_{h=0}^D \{N_h(t) =0\}\bigg\}.\end{cases}\end{equation}

Through a direct calculation, it is easy to show that the probabilities obtained by suitably adapting the distributions (2.23), i.e. by summing with respect to $n_0,\dots,n_D\ge1$ , satisfy the PDEs in (4.2) with $\lambda(t) = \lambda>0\ \forall\ t$ . Furthermore, as shown in Example 2.3, the sum of these probabilities, i.e. (2.25), satisfies the condition in the system (4.2) (see (2.27)).

It is also possible to show that if $\lambda(t) = \lambda>0$ for all t, then the probability (2.25) (that is, $p=\sum_i f_i$ ) is a solution to the following Dth-order PDE:

(4.3) \begin{equation}\sum_{k=0}^D \sum_{i\in \mathcal{C}_k^{\{1,\dots,D\}}} \sum_{h=0}^{D+1-k} \lambda^{D+1-(h+k)} \Biggl[ \binom{D+1-k}{h} - \Bigl(p_0+\sum_{j\not \in i} p_j\Bigr)\binom{D-k}{h}\Biggr] \frac{\partial^{h+k} p}{\partial t^{h}\partial x_{i_1}\cdots \partial x_{i_k}} = 0.\end{equation}

The proof of this result is given in Appendix B.2. $\diamond$

The next statement concerns the distribution over the singular components when N cannot explode in finite time intervals.

Theorem 4.1. Let X be a finite-velocity random motion moving with velocities $v_0,\dots, v_M\in\mathbb{R}^D,\ M\in \mathbb{N}$ , governed by a non-homogeneous Poisson process N with rate function $\lambda\in C^{M}\bigl([0,\infty),[0,\infty)\bigr)$ such that $\Lambda(t) = \int_0^t \lambda(s)\mathop{}\!\textrm{d} s < \infty$ , $t\ge0$ . Let $p_i = \mathbb{P}\{V(0) = v_i\}>0$ and $p_{i,j} = \mathbb{P}\big\{V(t+\mathop{}\!\textrm{d} t) = v_j\,|\,V(t)=v_i,\, N(t, t+\mathop{}\!\textrm{d} t] = 1\big\} \ge 0$ for each $i,j = 0,\dots, M,\,\forall\ t$ .

Set $H = 0,\dots, M-1$ , $I_H = \{i_0,\dots,i_H\}\in \mathcal{C}_{H+1}^{\{0,\dots,M\}}$ , and $I_{H^- } = \{0,\dots, M\}\setminus I_H$ . If

(4.4) \begin{equation}\sum_{j\in I_H} p_{i_k,j} = \mathbb{P}\big\{V(t+\mathop{}\!\textrm{d} t)\in \{v_{i_0}, \dots,v_{i_H}\}\,|\, V(t) = v_{i_k}, N(t,t+\mathop{}\!\textrm{d} t] = 1\big\} = \alpha_{I_H}>0\end{equation}

for $k =0,\dots, H$ and $t\ge0$ , then, with $\textit{dim}\Bigl(\textit{Conv}(v_{i_0},\dots,v_{i_H})\Bigr) = R\le D$ , there exists an orthogonal projection $p_R\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^R$ such that, for $t\ge0,\, x\in \overset{\circ}{\textit{Conv}}(v_{i_0}t, \dots, v_{i_H}t) $ , with $x^R = p_R(x)$ ,

(4.5) \begin{align}\mathbb{P}\!\left\{X(t)\in \mathop{}\!\textrm{d} x\,\Big|\,\bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}/\mathop{}\!\textrm{d} x = \mathbb{P}\Big\{Y^R(t)\in \mathop{}\!\textrm{d} x^R\Big\}/\mathop{}\!\textrm{d} x^R,\end{align}

where $Y^R$ is an R-dimensional finite-velocity random process governed by a non-homogeneous Poisson process with rate function $\lambda\alpha_{I_H}$ , moving with velocities $v_{i_0}^R =p_R(v_{i_0}),\dots, v_{i_H}^R =p_R(v_{i_H})$ and such that $ p^Y_i = p_i/\sum_{j\in I_H} p_j$ and $p^Y_{i,j} = p_{i,j}/\alpha_{I_H}$ for all $i,j\in I_H$ .

Theorem 4.1 states that if the probability of keeping a velocity with index in $I_H$ is constant ( $\alpha_{I_H}$ ), then, with respect to the conditional measure $\mathbb{P}\Big\{\,\cdot\,|\,\bigcap_{j\in I_{H^-}}\{N_j(t) = 0\}\Big\}$ , X is equal in distribution (in terms of finite-dimensional distributions) to an R-dimensional motion governed by a non-homogeneous Poisson process with rate function $\lambda\alpha_{I_H}$ and suitably scaled transition probabilities, where $R = \text{dim}\Bigl(\text{Conv}(v_{i_0},\dots,v_{i_H})\Bigr)$ (if $R = D$ , the identity function fits $p_R$ ).

Proof. First we note that, in light of (4.4), for $t\ge0$ ,

\begin{equation*}\mathbb{P}\big\{V(t+\mathop{}\!\textrm{d} t)\in \{v_{i_0},\dots, v_{i_H}\}\,|\,\, V(t)\in \{v_{i_0},\dots, v_{i_H} \},\,N(t, t+\mathop{}\!\textrm{d} t]=1\big\} = \alpha_{I_H},\end{equation*}

and thus

(4.6) \begin{align}\mathbb{P}\!\left\{\bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}& = \mathbb{P}\big\{V(0)\in \{v_{i_0},\dots, v_{i_H}\}\big\}\sum_{n=0}^\infty \mathbb{P}\{ N(t) = n\} \, \alpha_{I_H}^n\nonumber\\[5pt] & = e^{-\Lambda(t)(1-\alpha_{I_H})}\sum_{i\in I_H} p_i. \end{align}
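The first equality in (4.6) marginalizes over the total number of events: each of the $N(t)$ switches keeps the velocity inside $\{v_{i_0},\dots,v_{i_H}\}$ with probability $\alpha_{I_H}$, and the resulting series is a Poisson generating function. This can be checked numerically (the values of $\Lambda(t)$ and $\alpha_{I_H}$ below are arbitrary).

```python
import math

def void_prob_series(Lam, alpha, n_terms=80):
    """Partial sum of sum_n P{N(t) = n} * alpha^n with N(t) ~ Poisson(Lam);
    by (4.6) this should equal exp(-Lam * (1 - alpha))."""
    return sum(math.exp(-Lam) * Lam ** n / math.factorial(n) * alpha ** n
               for n in range(n_terms))
```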

Now, by Proposition 3.1, Lemma 1.1, and the same argument used in point (b) of Remark 3.2, there exists a projection $p_R\;:\;\mathbb{R}^D\longrightarrow\mathbb{R}^R$ such that $X^R(t) = p_R\bigl(X(t)\bigr)$ and

\begin{equation*}\mathbb{P}\!\left\{X(t)\in \mathop{}\!\textrm{d} x\,\Bigg|\,\bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}/\mathop{}\!\textrm{d} x = \mathbb{P}\left\{X^R(t)\in \mathop{}\!\textrm{d} x^R\,\Bigg|\,\bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}/\mathop{}\!\textrm{d} x^R ,\end{equation*}

with $x\in \overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)$ . The R-dimensional motion $X^R$ moves with velocities $v_{0}^R =p_R(v_0),\dots, v_M^R = p_R(v_M)$ , and its probability functions

\begin{equation*}f_i(y, t)\mathop{}\!\textrm{d} y = \mathbb{P}\!\left\{X^R(t)\in \mathop{}\!\textrm{d} y,\, \bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\},\,V_{X^R}(t) = v_{i}^R\right\},\ \ i\in I_H,\end{equation*}

with $t\ge0, \,y \in \overset{\circ}{\text{Conv}}\bigr(v_{i_0}^Rt,\dots,v_{i_H}^Rt\bigr)$ , satisfy the differential system

(4.7) \begin{equation}\begin{cases}\displaystyle\frac{\partial f_i}{\partial t} = -<\nabla_y f_i, v_i^R > -\lambda(t)f_i +\lambda(t)\sum_{j\in I_H} p_{j,i}f_j,\ \ \ i\in I_H,\\[5pt] f_i(y,t)\ge0,\ \ \ i\in I_H,\, \forall\ y,t,\\[5pt] \displaystyle\int_{\text{Conv}\left(v_{i_0}^Rt,\dots,v_{i_H}^Rt\right)} \sum_{i\in I_H} f_i(y,t)\mathop{}\!\textrm{d} y= \int_{\text{Conv}\left(v_{i_0}^Rt,\dots,v_{i_H}^Rt\right)}\mathbb{P}\!\left\{X^R(t)\in \mathop{}\!\textrm{d} y,\, \bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}\\[8pt] \displaystyle\hspace{2.8cm}= \mathbb{P}\left\{\bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\} - \mathbb{P}\!\left\{\bigcup_{i\in I_H} \{N_i(t) = 0\},\, \bigcap_{j\in I_{H^-}} \{N_j(t) = 0\}\right\}.\end{cases}\end{equation}

In light of (4.6) we consider

\begin{equation*}f_i(y,t) = g_i(y,t) e^{-\Lambda(t)(1-\alpha_{I_H})}\sum_{h\in I_H} p_h,\end{equation*}

for any i, i.e.

\begin{equation*}g_i(y,t)\mathop{}\!\textrm{d} y = \mathbb{P}\Bigg\{X^R(t)\in \mathop{}\!\textrm{d} y,\,V_{X^R}(t) = v_{i}^R\,\Big|\, \bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\Bigg\}.\end{equation*}
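To see why this substitution removes the exponential prefactor, note that, since $\Lambda'(t) = \lambda(t)$, a sketch of the computation gives

```latex
\frac{\partial f_i}{\partial t}
  = \Bigl(\frac{\partial g_i}{\partial t}
          - \lambda(t)(1-\alpha_{I_H})\, g_i\Bigr)
    e^{-\Lambda(t)(1-\alpha_{I_H})} \sum_{h\in I_H} p_h ,
```

so, after dividing the first equation of (4.7) by the positive factor $e^{-\Lambda(t)(1-\alpha_{I_H})}\sum_{h\in I_H} p_h$, the term $-\lambda(t) f_i$ and the extra $+\lambda(t)(1-\alpha_{I_H})g_i$ produced by the time derivative combine into $-\lambda(t)\alpha_{I_H}\, g_i$.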

The system (4.7) becomes

(4.8) \begin{equation}\begin{cases}\displaystyle\frac{\partial g_i}{\partial t} = -<\nabla_y\, g_i, v_i^R > -\lambda(t)\alpha_{I_H}g_i +\lambda(t)\alpha_{I_H}\sum_{j\in I_H} \frac{p_{j,i}}{\alpha_{I_H}}g_j,\ \ \ i\in I_H,\\[5pt] g_i(y,t)\ge0,\ \ \ i\in I_H,\,\forall\ y,t,\\[5pt] \begin{split}\displaystyle\int_{\text{Conv}(v_{i_0}^Rt,\dots,v_{i_H}^Rt)} \sum_{i\in I_H} g_i(y,t)\mathop{}\!\textrm{d} y&= \int_{\text{Conv}(v_{i_0}^Rt,\dots,v_{i_H}^Rt)}\mathbb{P}\!\left\{X^R(t)\in \mathop{}\!\textrm{d} y\,\Big|\, \bigcap_{j\in I_{H^-}}\{N_{j}(t) = 0\}\right\}\\[8pt] \displaystyle&= 1 - \mathbb{P}\!\left\{\bigcup_{i\in I_H} \{N_i(t) = 0\}\,\Big|\, \bigcap_{j\in I_{H^-}} \{N_j(t) = 0\}\right\},\end{split}\end{cases}\end{equation}

which coincides with the system satisfied by the distribution of the position of the stochastic motion $Y^R$ in the statement.

Theorems 3.1 and 3.2 of Cinque and Orsingher [Reference Cinque and Orsingher5] are particular cases of Theorem 4.1.

Appendix A. Proof of Lemma 1.1

If $\,\text{dim}\Bigl(\text{Conv}(v_0,\dots,v_M)\Bigr) = R$ , the matrix $\textrm{V}_{(k)} = \Bigl[v_h-v_k\Bigr]_{\substack{h=0,\dots,M\\ h\not=k}}$ has R linearly independent rows for any k. Now, the matrix $\textrm{V}_{(k)}^R = \Bigl[v_h^R-v_k^R\Bigr]_{\substack{h=0,\dots,M\\ h\not=k}}$ , obtained by keeping the first R linearly independent rows of $\textrm{V}_{(k)}$ , has rank R, and therefore $\text{dim}\Bigl(\text{Conv}\bigl(v_0^R,\dots,v_M^R\bigr)\Bigr) = R$ . Thus, for any l, the matrix $\textrm{V}_{(l)}^R = \Bigl[v_h^R-v_l^R\Bigr]_{\substack{h=0,\dots,M\\ h\not=l}}$ also has rank R, and these must be the first R linearly independent rows of $\textrm{V}_{(l)}$ (otherwise, by proceeding as above for k, we would obtain that the R selected rows were not the first linearly independent rows of $\textrm{V}_{(k)}$ , which is a contradiction).

Finally, the second part of the lemma follows from the equivalence of the linear systems

(A.1) \begin{equation} x = \Bigl[ v_h\Bigr]_{h=0,\dots,M } \,a \quad \text{ and } \quad x^R = \Bigl[ v^R_h\Bigr]_{h=0,\dots,M} \,a,\end{equation}

where $a=(a_0,\dots,a_M)\in\mathbb{R}^{M+1}$ , such that $a_i\in[0,1]\ \forall\ i$ and $\sum_{i=0}^M a_i = 1$ , is the unknown variable. Indeed, for $k=0,\dots,M$ , thanks to the constraints on a, the systems in (A.1) can be written as

\begin{equation*} x - v_k = \Bigl[v_h-v_k\Bigr]_{h\not=k}a_{(k)}\ \text{ and }\ x^{R} - v_k^{R} = \Bigl[v_h^{R}-v_k^{R}\Bigr]_{h\not=k} a_{(k)},\end{equation*}

with $a_{(k)}=(a_0,\dots,a_{k-1},a_{k+1},\dots, a_M)$ .
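As a concrete illustration of the equivalence (A.1), consider the following hypothetical velocities in $\mathbb{R}^3$ (chosen by us so that their convex hull is two-dimensional): the projection onto the first two coordinates determines the barycentric coordinates, and hence the full point, uniquely.

```python
from fractions import Fraction as F

# Hypothetical velocities in R^3 spanning a 2-dimensional convex hull;
# the orthogonal projection p_R keeps the first two coordinates.
V = [(F(0), F(0), F(0)), (F(1), F(0), F(1)), (F(0), F(1), F(1))]

def bary_to_point(a):
    """x = sum_i a_i v_i for barycentric coordinates a (summing to 1)."""
    return tuple(sum(ai * v[d] for ai, v in zip(a, V)) for d in range(3))

def bary_from_projection(x1, x2):
    """Solve the projected system x^R = [v_h^R] a under a_0 + a_1 + a_2 = 1;
    for these velocities it reads x1 = a_1, x2 = a_2, so the solution is unique."""
    return (1 - x1 - x2, x1, x2)
```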

Appendix B. Complete canonical random motion

Let X be a complete canonical random motion as in Example 2.3.

B.1. Probability mass of the singularity

Before computing the probability mass of the singularities of the complete canonical random motion, we first prove some useful relationships.

Let $c_1, \dots, c_H\in \mathbb{R}$ , $H\in\mathbb{N}$ , and let $\mathcal{C}_h^{\{1, \dots, H\}}$ denote the combinations of h elements among $\{1, \dots, H\}$ , $h=1,\dots,H$ . We have that

(B.1) \begin{equation}\sum_{h=1}^H (\!-\!1)^{H-h} \sum_{i\in \mathcal{C}_h^{\{1, \dots, H\}}} (c_{i_1}+\dots+c_{i_h})^m =\begin{cases}\begin{array}{l@{\quad}l} 0, &\ m<H,\\[5pt] \displaystyle \sum_{\substack{n_1,\dots,n_H\ge1\\ n_1+\dots+n_H=m}} c_1^{n_1}\cdots c_H^{n_H}\binom{m}{n_1,\dots,n_H}, &\ m \ge H,\end{array}\end{cases}\end{equation}

and, with $\beta \in \mathbb{R}$ ,

(B.2) \begin{equation}\sum_{h=1}^H c_h e^{\beta c_h}\prod_{\substack{j = 1\\ j\not=h}}^{H} \bigl(e^{\beta c_j}-1\bigr) = \sum_{h=1}^{H} (\!-\!1)^{H-h} \sum_{i\in \mathcal{C}_h^{\{1,\dots, H\}}} (c_{i_1}+\dots +c_{i_h})\,e^{\beta(c_{i_1}+\dots+c_{i_h})}.\end{equation}

To prove (B.1), we denote by $\mathcal{C}_{h,\{i_1,\dots,i_j\}}^{\{1,\dots,H\}} $ the combinations of h elements in $\{1,\dots,H\}$ containing $i_1,\dots,i_j$ , with $1\le j\le h\le H$ and suitable $i_1,\dots,i_j$ . Then

(B.3) \begin{align}&\sum_{h=1}^H (\!-\!1)^{H-h} \sum_{i\in \mathcal{C}_h^{\{1, \dots, H\}}} (c_{i_1}+\dots+c_{i_h})^m \nonumber\\[5pt]& = \sum_{h=1}^H (\!-\!1)^{H-h} \sum_{i\in \mathcal{C}_h^{\{1, \dots,H\}}} \sum_{\substack{n_1,\dots,n_h\ge 0\\ n_1+\dots+n_h = m} } c_{i_1}^{n_1}\cdots c_{i_h}^{n_h}\binom{m}{n_1,\dots,n_h} \end{align}
(B.4) \begin{align}& = \sum_{j=1}^m \sum_{k\in \mathcal{C}_j^{\{1,\dots,H\}}} \sum_{\substack{m_1,\dots,m_j\ge 1\\ m_1+\dots+m_j= m}} c_{k_1}^{m_1}\cdots c_{k_j}^{m_j}\binom{m}{m_1,\dots,m_j} \sum_{h=j}^H (\!-\!1)^{H-h}\, \Big| \mathcal{C}_{h,\{k_1,\dots,k_j\}}^{\{1,\dots,H\}}\Big| \end{align}
(B.5) \begin{align} & = \sum_{j=1}^m \sum_{k\in \mathcal{C}_j^{\{1,\dots,H\}}} \sum_{\substack{m_1,\dots,m_j\ge 1\\ m_1+\dots+m_j= m}} c_{k_1}^{m_1}\cdots c_{k_j}^{m_j}\binom{m}{m_1,\dots,m_j} \,(\!-\!1)^{H+j}\sum_{l=0}^{H-j} (\!-\!1)^l \binom{H-j}{l} \\[5pt] &=\begin{cases}\begin{array}{l@{\quad}l} 0, &\ m<H,\\[5pt] \displaystyle \sum_{\substack{n_1,\dots,n_H\ge1\\ n_1+\dots+n_H=m}} c_1^{n_1}\cdots c_H^{n_H}\binom{m}{n_1,\dots,n_H}, &\ m \ge H.\end{array}\end{cases}\nonumber\end{align}

In fact, in (B.5), the last sum (with index l) is equal to 0 for $j\not= H$ and 1 for $j = H$ . In (B.4) we express (B.3) by summing every possible combination of indexes ( $k_1,\dots,k_j$ ) and every possible allocation of exponents ( $m_1,\dots,m_j\ge1, m_1+\dots+m_j=m$ ). Each of these elements, $c_{k_1}^{m_1}\cdots c_{k_j}^{m_j}$ , appears one time in the expansion of $(c_{i_1}+\dots+c_{i_h})^m$ for each $i\in \mathcal{C}_{h,\{k_1,\dots,k_j\}}^{\{1,\dots,H\}}$ , with $1\le j \le h\le H$ , i.e.

\begin{equation*}\Big| \mathcal{C}_{h,\{k_1,\dots,k_j\}}^{\{1,\dots,H\}}\Big| =\binom{H-j}{h-j}\end{equation*}

times.

To prove (B.2) we proceed as follows, denoting by $\mathcal{C}^{\{1,\dots,H\}}_{k,(h)}$ the combinations of k elements not containing h:

(B.6) \begin{align}\sum_{h=1}^H c_h e^{\beta c_h}\prod_{\substack{j = 1\\ j\not=h}}^{H} \bigl(e^{\beta c_j}-1\bigr)& =\sum_{h=1}^H c_h e^{\beta c_h} \sum_{k=0}^{H-1} (\!-\!1)^{H-1-k} \sum_{i\in \mathcal{C}^{\{1,\dots,H\}}_{k,(h)}} e^{\beta(c_{i_1}+\dots+c_{i_k})}\nonumber\\[5pt] &= \sum_{k=0}^{H-1} (\!-\!1)^{H-1-k} \sum_{h=1}^H c_h \sum_{i\in \mathcal{C}^{\{1,\dots,H\}}_{k,(h)}} e^{\beta(c_h+ c_{i_1}+\dots+c_{i_k})}\\[5pt] & = \sum_{k=0}^{H-1} (\!-\!1)^{H-1-k} \sum_{i\in \mathcal{C}^{\{1,\dots,H\}}_{k+1}} (c_{i_1}+\dots+c_{i_{k+1}})e^{\beta(c_{i_1}+\dots+c_{i_{k+1}})}, \nonumber\end{align}

which coincides with (B.2). The last step follows from observing that for each combination $i\in \mathcal{C}^{\{1,\dots,H\}}_{k+1}$ , the corresponding exponential term appears once for each $h\in i=(i_1,\dots,i_{k+1})$ , with h being the index of the second sum of (B.6).
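Both identities are easy to check numerically. The sketch below (function names are ours) verifies (B.1) with exact rational arithmetic and (B.2) in floating point, for arbitrarily chosen $c_1,\dots,c_H$ and $\beta$.

```python
import math
from fractions import Fraction as F
from itertools import combinations

def alt_sum_B1(c, m):
    """Left-hand side of (B.1)."""
    H = len(c)
    return sum((-1) ** (H - h) * sum(c[i] for i in idx) ** m
               for h in range(1, H + 1)
               for idx in combinations(range(H), h))

def multinomial_sum_B1(c, m):
    """Right-hand side of (B.1) for m >= H: sum over n_1,...,n_H >= 1."""
    H = len(c)
    total = F(0)
    def rec(pos, rem, ns):
        nonlocal total
        if pos == H - 1:
            ns = ns + [rem]
            coef = math.factorial(m)
            prod = F(1)
            for ci, n in zip(c, ns):
                coef //= math.factorial(n)
                prod *= ci ** n
            total += coef * prod
            return
        # leave at least 1 for each remaining index
        for n in range(1, rem - (H - 1 - pos) + 1):
            rec(pos + 1, rem - n, ns + [n])
    rec(0, m, [])
    return total

def lhs_B2(c, beta):
    H = len(c)
    return sum(c[h] * math.exp(beta * c[h])
               * math.prod(math.exp(beta * c[j]) - 1 for j in range(H) if j != h)
               for h in range(H))

def rhs_B2(c, beta):
    H = len(c)
    return sum((-1) ** (H - h) * sum(c[i] for i in idx)
               * math.exp(beta * sum(c[i] for i in idx))
               for h in range(1, H + 1)
               for idx in combinations(range(H), h))
```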

We now compute the probability mass of the event that the motion performs displacements with each of the $H+1$ velocities $v_{i_0},\dots,v_{i_H}$, and only with these, for $H = 0,\dots,D-1$ . Let $I_H = \{i_0,\dots, i_H\} \in \mathcal{C}_{H+1}^{\{0,\dots,D\}}$ ; then

(B.7) \begin{align}&\mathbb{P}\Big\{X(t)\in \overset{\circ}{\text{Conv}}(v_{i_0}t, \dots, v_{i_H}t)\Big\} = \mathbb{P}\Bigg\{ \bigcap_{i\in I_H} \{N_i(t) \ge1\},\,\bigcap_{i\not\in I_H} \{N_i(t) =0\} \Bigg\}\\[5pt] & = \sum_{n=H}^{\infty} \mathbb{P}\{N(t) = n \} \sum_{\substack{n_0,\dots, n_H\ge1\\ n_0+\dots+n_H=n+1}} p_{i_0}^{n_0}\cdots p_{i_H}^{n_H} \binom{n+1}{n_0,\dots, n_H}\nonumber \end{align}

(B.8) \begin{align}& = \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \sum_{i\in \mathcal{C}_h^{\{0, \dots, H\}}} \,\sum_{n=H}^{\infty} \mathbb{P}\{N(t) = n\}(p_{i_0}+\dots +p_{i_h})^{n+1} \end{align}
(B.9) \begin{align}& = \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \sum_{i\in \mathcal{C}_h^{\{0, \dots, H\}}} (p_{i_0}+\dots +p_{i_h})\, e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})} \end{align}
(B.10) \begin{align} &\quad - e^{-\lambda t} \sum_{n=0}^{H-1} \frac{(\lambda t)^n}{n!} \, \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \sum_{i\in \mathcal{C}_h^{\{0, \dots, H\}}} (p_{i_0}+\dots +p_{i_h})^{n+1} \\[5pt] & = (p_{0}+\dots +p_{H})\, e^{-\lambda t(1-p_{0}-\dots- p_H)} - \sum_{h=1}^{H} (\!-\!1)^{H-h} \sum_{i\in \mathcal{C}_h^{\{0, \dots, H\}}} (p_{i_0}+\dots +p_{i_h})\, e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})} \nonumber\\[5pt] & = \mathbb{P}\Bigg\{ \bigcap_{i\not\in I_H} \{N_i(t) =0\} \Bigg\} - \mathbb{P}\Bigg\{ \bigcup_{i\in I_H} \{N_i(t) =0\},\,\bigcap_{i\not\in I_H} \{N_i(t) =0\} \Bigg\}, \nonumber \end{align}

where we used the second equality of (B.1) to derive (B.8). Thanks to the first case of (B.1), it is easy to see that the term (B.10) is 0, and thus, by means of (B.2), we also obtain the equivalence between (B.9) and the probability mass (2.34).

Note that if the motion is uniform, i.e. $p_0 = \dots = p_D = 1/(D+1)$ , then the probability (B.7) reduces to

\begin{equation*} \frac{H+1}{D+1}\, e^{-\frac{\lambda t D}{D+1}} \bigl(e^{\frac{\lambda t }{D+1}}-1\bigr)^H\end{equation*}

(see also (2.34)).
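This reduction can be cross-checked numerically against the alternating sum (B.9); in the sketch below (names and parameter values are ours), each inner term of (B.9) is read as a sum of h of the probabilities $p_i$, as in the proof of (B.2).

```python
import math
from itertools import combinations

def mass_alt_sum(p, lam, t, I):
    """Alternating sum in (B.9) for the velocity index set I (|I| = H + 1)."""
    s = 0.0
    for h in range(1, len(I) + 1):
        for idx in combinations(I, h):
            ps = sum(p[i] for i in idx)
            s += (-1) ** (len(I) - h) * ps * math.exp(-lam * t * (1 - ps))
    return s

def mass_uniform(D, H, lam, t):
    """Closed form of the mass in the uniform case p_i = 1/(D+1)."""
    return (H + 1) / (D + 1) * math.exp(-lam * t * D / (D + 1)) \
        * (math.exp(lam * t / (D + 1)) - 1) ** H
```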

In light of (B.9), the probability that the motion moves with exactly $H+1$ velocities in the time interval [0, t] is

(B.11) \begin{align}\mathbb{P}&\left\{\bigcup_{I \in \mathcal{C}_{H+1}^{\{0,\dots, D\}}} \Bigg \{\bigcap_{i\in I}\{N_i(t)\ge1\},\, \bigcap_{i\not\in I}\{ N_i(t) = 0\} \Bigg\} \right\} \nonumber \\ &=\sum_{I \in \mathcal{C}_{H+1}^{\{0,\dots, D\}}} \mathbb{P}\Bigg\{ \bigcap_{i\in I} \{N_i(t) \ge1\},\,\bigcap_{i\not\in I} \{N_i(t) =0\} \Bigg\}\nonumber\\[5pt] &= \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \sum_{I \in \mathcal{C}_{H+1}^{\{0,\dots, D\}}}\sum_{i\in \mathcal{C}_h^{I}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})}\nonumber\\[5pt] & = \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \binom{D+1-h}{H+1-h}\sum_{i \in \mathcal{C}_{h}^{\{0,\dots, D\}}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})},\end{align}

where in the last step we observe that each combination $i \in \mathcal{C}_{h}^{\{0,\dots, D\}}$ appears in $ \binom{D+1-h}{H+1-h}$ combinations in $\mathcal{C}_{H+1}^{\{0,\dots, D\}}$ (i.e. all those which contain the h elements in i).
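The combinatorial observation in the last step can be checked by direct enumeration (the parameter values below are arbitrary).

```python
from itertools import combinations
from math import comb

def containment_count(D, H, subset):
    """Number of (H+1)-element subsets of {0,...,D} containing `subset` (h elements)."""
    return sum(1 for I in combinations(range(D + 1), H + 1)
               if set(subset) <= set(I))
```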

Finally, by using the expression (B.11), we obtain

(B.12) \begin{align}&\mathbb{P}\big\{X(t)\in \partial \text{Supp}\bigl(X(t)\bigr)\big\} = \mathbb{P}\Bigg\{\bigcup_{h=0}^D\{ N_h(t) = 0\} \Bigg\} \nonumber\\[5pt] & = \mathbb{P}\left\{\,\bigcup_{H=0}^{D-1}\,\bigcup_{I \in \mathcal{C}_{H+1}^{\{0,\dots, D\}}} \Bigg \{\bigcap_{i\in I}\{N_i(t)\ge1\},\, \bigcap_{i\not\in I}\{ N_i(t) = 0\} \Bigg\} \right\}\nonumber \\[5pt] &=\sum_{H=0}^{D-1} \sum_{h=1}^{H+1} (\!-\!1)^{H+1-h} \binom{D+1-h}{H+1-h}\sum_{i \in \mathcal{C}_{h}^{\{0,\dots, D\}}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})}\nonumber\\[5pt] & = \sum_{h=1}^{D} \sum_{i \in \mathcal{C}_{h}^{\{0,\dots, D\}}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})} \sum_{H=h-1}^{D-1} (\!-\!1)^{H+1-h} \binom{D+1-h}{H+1-h}\nonumber\\[5pt] & = \sum_{h=1}^{D} \sum_{i \in \mathcal{C}_{h}^{\{0,\dots, D\}}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})} \Bigl(0-(\!-\!1)^{D+1-h}\Bigr)\nonumber\\[5pt] & = \sum_{h=1}^{D} (\!-\!1)^{D-h} \sum_{i \in \mathcal{C}_{h}^{\{0,\dots, D\}}} (p_{i_0}+\dots +p_{i_h}) \,e^{-\lambda t(1-p_{i_0}-\dots- p_{i_h})}.\end{align}
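The key simplification in (B.12) is the partial alternating binomial sum $\sum_{H=h-1}^{D-1}(\!-\!1)^{H+1-h}\binom{D+1-h}{H+1-h} = 0-(\!-\!1)^{D+1-h} = (\!-\!1)^{D-h}$, which can be verified directly:

```python
from math import comb

def tail_alt_sum(D, h):
    """sum_{H=h-1}^{D-1} (-1)^{H+1-h} C(D+1-h, H+1-h), as in (B.12)."""
    return sum((-1) ** (H + 1 - h) * comb(D + 1 - h, H + 1 - h)
               for H in range(h - 1, D))
```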

Note that, with (B.12) in hand, and also keeping in mind (B.2) and the fact that $p_0+\dots+p_D = 1$ , we obtain the last step in the probability (2.27).

It is interesting to observe that if the point process governing X is a non-homogeneous Poisson process with rate function $\lambda\;:\;[0,\infty)\longrightarrow [0,\infty)$ such that $\Lambda(t) = \int_0^t \lambda(s)\mathop{}\!\textrm{d} s <\infty\ \forall \ t$ , then the above probability masses hold with $\Lambda(t)$ replacing $\lambda t$ .

B.2. PDE governing the absolutely continuous component

From the differential system (4.2) we obtain (4.3) through the following iterative argument.

First we consider $w_1 = f_0 + f_1$ and easily obtain

(B.13) \begin{align}\frac{\partial w_1}{\partial t} &= \lambda(p_0+p_1-1)w_1- \frac{\partial f_1}{\partial x_1} + \lambda(p_0+p_1)\sum_{j=2}^D f_j\nonumber\\[5pt] & = A w_1 + B f_1 + C \sum_{j=2}^D f_j,\end{align}

with A, B, C being suitable operators. Next we rewrite the equations of (4.2) by means of the operators $E_i = \Bigl(\frac{\partial }{\partial x_i} + \lambda\Bigr)$ and $G_i = \lambda p_i$ :

(B.14) \begin{align}\frac{\partial f_i}{\partial t} = -E_i f_i + G_i \sum_{j=0}^i f_j + G_i \sum_{j=i+1}^D f_j, \ \ i=1,\dots, D.\end{align}

Keeping in mind (B.13), (B.14) (for $i=1$ ), and the exchangeability of the differential operators, we can express the second-order time derivative of $w_1$ in terms of $w_1$ and $\sum_{j=2}^D f_j$ :

(B.15) \begin{align} \frac{\partial^2 w_1}{\partial t^2} & = A \frac{\partial w_1}{\partial t} + B \frac{\partial f_1}{\partial t} + C \frac{\partial}{\partial t} \sum_{j=2}^D f_j \nonumber\\[5pt] & = A \frac{\partial w_1}{\partial t} + B \Biggl(-E_1 f_1 + G_1 w_1 + G_1 \sum_{j=2}^D f_j \Biggr) + C \frac{\partial}{\partial t} \sum_{j=2}^D f_j\nonumber \\[5pt] & = \Biggl( A \frac{\partial }{\partial t} + B G_1 \Biggr) w_1 -E_1 \Biggl(\frac{\partial w_1}{\partial t} - A w_1 -C \sum_{j=2}^D f_j \Biggr) + \Biggl(B G_1 + C \frac{\partial}{\partial t} \Biggr) \sum_{j=2}^D f_j\nonumber\\[5pt] & = \Biggl( (A-E_1) \frac{\partial }{\partial t} + B G_1 +E_1 A \Biggr) w_1 + \Biggl(B G_1 + C\bigg(\frac{\partial}{\partial t} +E_1\bigg) \Biggr) \sum_{j=2}^D f_j\nonumber \\[5pt] &=\Biggl(\lambda^2(p_0+p_1-1) + \lambda(p_0+p_1-2)\frac{\partial}{\partial t} + \lambda(p_0-1) \frac{\partial}{\partial x_1} - \frac{\partial^2}{\partial t \partial x_1}\Biggr)w_1 \nonumber \\[5pt] & \ \ \ + \Biggl(\lambda (p_0+p_1)\bigg(\frac{\partial}{\partial t} + \lambda\bigg)+ \lambda p_0 \frac{\partial}{\partial x_1}\Biggr)\sum_{j=2}^D f_j \nonumber\\[5pt] & = \Lambda_1 w_1 + \Gamma_1 \sum_{j=2}^D f_j.\end{align}

By iterating the above argument, at the nth step, $n=2,\dots, D$ , we have, with $w_{n} = w_{n-1}+f_n$ (meaning that $w_i = \sum_{j=0}^i f_j,\ i = 1,\dots,D$ ),

(B.16) \begin{equation}\begin{cases}\displaystyle\frac{\partial^n w_{n-1}}{\partial t^n } = \Lambda_{n-1} w_{n-1} + \Gamma_{n-1} \sum_{j=n}^D f_j \ \implies\ \bigg(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1}\bigg)w_{n-1} = \Gamma_{n-1} f_n + \Gamma_{n-1} \sum_{j=n+1}^D f_j,\\[21pt] \displaystyle\bigg(\frac{\partial }{\partial t} + E_n\bigg) f_n = G_n w_n + G_n\sum_{j=n+1}^D f_j,\\[21pt] \displaystyle\bigg(\frac{\partial }{\partial t} +E_i\bigg) f_i= G_i w_i + G_i\sum_{j=i+1}^D f_j,\ \ i=n+1,\dots, D.\end{cases}\end{equation}

Thus, using the first two equations of (B.16), we have

(B.17) \begin{align}&\Biggl(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1}\Biggr) \Biggl(\frac{\partial }{\partial t} + E_n\Biggr) w_n \\[5pt] & = \Biggl(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1}\Biggr) G_n w_n + \Gamma_{n-1}\Biggl(\frac{\partial }{\partial t} + E_n\Biggr)f_n +\Biggl[ \Biggl(\frac{\partial }{\partial t} + E_n\Biggr)\Gamma_{n-1} + \Biggl(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1}\Biggr) G_n\Biggr ]\sum_{j=n+1}^D f_j \nonumber\\[5pt] & = \Biggl(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1} + \Gamma_{n-1}\Biggr) G_n w_n + \Biggl[ \Biggl(\frac{\partial }{\partial t} + E_n\Biggr)\Gamma_{n-1} + \Biggl(\frac{\partial^n }{\partial t^n}-\Lambda_{n-1} +\Gamma_{n-1}\Biggr) G_n\Biggr ]\sum_{j=n+1}^D f_j. \nonumber\end{align}

Hence, by reordering the terms in (B.17), for $n = 2,\dots, D$ , we see that

\begin{equation*}\Lambda_n = \Biggl(\frac{\partial }{\partial t} + \frac{\partial }{\partial x_n} +\lambda\Biggr)\Lambda_{n-1} + \lambda (p_0-1)\frac{\partial^n }{\partial t^n} + \lambda p_0(\Gamma_{n-1}-\Lambda_{n-1}) - \frac{\partial^{n+1} }{\partial t^{n}\partial x_n}\end{equation*}

and

\begin{equation*}\Gamma_n = \Biggl(\frac{\partial }{\partial t} + \frac{\partial }{\partial x_n} +\lambda\Biggr)\Gamma_{n-1} + \lambda p_n\Bigl(\frac{\partial^n }{\partial t^n} +\Gamma_{n-1}- \Lambda_{n-1} \Bigr),\end{equation*}

with $\Lambda_1, \Gamma_1$ given in (B.15).

The interested reader can check (for instance by induction) that the operators $\Lambda_n$ and $\Gamma_n$ are such that

(B.18) \begin{align}\frac{\partial^{n+1} w_n}{\partial t^{n+1}} &= \Lambda_n w_n + \Gamma_n \sum_{j=n+1}^D f_j \end{align}
(B.19) \begin{align} & = \left(\sum_{k=0}^n \sum_{i\in \mathcal{C}^{\{1,\dots, n\}}_k} \sum_{h = 0}^{n-k} \lambda^{n+1-(h+k)} \biggl[\binom{n-k}{h}\Bigl(p_0 + \sum_{\substack{j = 1\\ j\not \in i}}^n p_j\Bigr) - \binom{n+1-k}{h} \biggr] \frac{\partial ^{h+k}}{\partial t^h \partial x_{i_1}\cdots \partial x_{i_k}} \right. \end{align}
(B.20) \begin{align}\left. -\,\sum_{k=1}^{n} \sum_{i\in \mathcal{C}^{\{1,\dots, n\}}_k} \frac{\partial ^{n+1}}{\partial t^{n+1-k} \partial x_{i_1}\cdots \partial x_{i_k}} \right) w_n \end{align}
(B.21) \begin{align} & \quad + \sum_{k=0}^n \sum_{i\in \mathcal{C}^{\{1,\dots, n\}}_k} \sum_{h = 0}^{n-k} \lambda^{n+1-(h+k)} \binom{n-k}{h}\left(p_0 + \sum_{\substack{j = 1\\ j\not \in i}}^n p_j\right) \frac{\partial ^{h+k}}{\partial t^h \partial x_{i_1}\cdots \partial x_{i_k}} \sum_{j=n+1}^D f_j.\end{align}

Finally, for $n=D$ and $w_D = \sum_{j=0}^D f_j = p$ , which is the probability density of the position of the motion, the formula (B.18) reduces to (4.3); indeed, the term in (B.21) becomes 0, the $(D+1)$ th-order time derivative can be included in the sum in (B.20) as $k=0$ , and this new sum becomes the term with $h = D+1-k$ in (B.19).

Acknowledgements

We wish to thank the referees for their appreciation and valuable comments. Also, F. Cinque would like to thank his co-author M. Cintoli for his friendship, his inspiring discussions, and his support in undertaking these studies.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Beghin, L., Nieddu, L. and Orsingher, E. (2001). Probabilistic analysis of the telegrapher's process with drift by means of relativistic transformations. J. Appl. Math. Stoch. Anal. 14, 11–25.
Cinque, F. (2022). A note on the conditional probabilities of the telegraph process. Statist. Prob. Lett. 185, article no. 109431.
Cinque, F. (2023). Reflection principle for finite-velocity random motions. J. Appl. Prob. 60, 479–492.
Cinque, F. and Orsingher, E. (2023). Stochastic dynamics of generalized planar random motions with orthogonal directions. J. Theoret. Prob. 36, 2229–2261.
Cinque, F. and Orsingher, E. (2023). Random motions in $\mathbb{R}^3$ with orthogonal directions. Stoch. Process. Appl. 161, 173–200.
Davis, M. H. A. (1984). Piecewise-deterministic Markov processes: a general class of non-diffusion stochastic models. J. R. Statist. Soc. B [Statist. Methodology] 46, 353–388.
De Gregorio, A. (2010). Stochastic velocity motions and processes with random time. Adv. Appl. Prob. 42, 1028–1056.
Di Crescenzo, A. (2002). Exact transient analysis of a planar motion with three directions. Stoch. Stoch. Reports 72, 175–189.
Di Crescenzo, A., Iuliano, A., Martinucci, B. and Zacks, S. (2013). Generalized telegraph process with random jumps. J. Appl. Prob. 50, 450–463.
Di Crescenzo, A., Iuliano, A. and Mustaro, V. (2023). On some finite-velocity random motions driven by the geometric counting process. J. Statist. Phys. 190, article no. 44.
Garra, R. and Orsingher, E. (2016). Random flights related to the Euler–Poisson–Darboux equation. Markov Process. Relat. Fields 22, 87–110.
Goldstein, S. (1951). On diffusion by discontinuous movements and the telegraph equation. Quart. J. Mech. Appl. Math. 4, 129–156.
Grandell, J. (1997). Mixed Poisson Processes. CRC Press, Boca Raton.
Iacus, S. M. (2001). Parametric estimation for the standard and geometric telegraph process observed at discrete times. Statist. Infer. Stoch. Process. 11, 249–263.
Iuliano, A. and Verasani, G. (2023). A three-dimensional cyclic random motion with finite velocities driven by geometric counting processes. Preprint. Available at https://arxiv.org/abs/2306.03260.
Kac, M. (1974). A stochastic model related to the telegrapher's equation. Rocky Mountain J. Math. 4, 497–509.
Kolesnik, A. D. (2021). Markov Random Flights. CRC Press, Boca Raton.
Kolesnik, A. D. and Orsingher, E. (2005). A planar random motion with an infinite number of directions controlled by the damped wave equation. J. Appl. Prob. 42, 1168–1182.
Kolesnik, A. D. and Ratanov, N. (2013). Telegraph Processes and Option Pricing. Springer, Heidelberg.
Kolesnik, A. D. and Turbin, A. F. (1998). The equation of symmetric Markovian random evolution in a plane. Stoch. Process. Appl. 75, 67–87.
Lachal, A. (2006). Cyclic random motions in $\mathbb{R}^d$-space with n directions. ESAIM Prob. Statist. 10, 277–316.
Lachal, A., Leorato, S. and Orsingher, E. (2006). Minimal cyclic random motion in ${R}^{n}$ and hyper-Bessel functions. Ann. Inst. H. Poincaré Prob. Statist. 42, 753772.10.1016/j.anihpb.2005.11.002CrossRefGoogle Scholar
Leorato, S. and Orsingher, E. (2004). Bose–Einstein-type statistics, order statistics and planar random motions with three directions. Adv. Appl. Prob. 36, 937–970.10.1239/aap/1093962242CrossRefGoogle Scholar
Masoliver, J. and Lindenberg, K. (2020). Two-dimensional telegraphic processes and their fractional generalizations. Phys. Rev. E 101, article no. 012137.10.1103/PhysRevE.101.012137CrossRefGoogle ScholarPubMed
Mertens, K., Angelani, L., Di Leonardo, R. and Bocquet, L. (2012). Probability distributions for the run-and-tumble bacterial dynamics: an analogy to the Lorentz model. Europ. Phys. J. E 35, article no. 84.10.1140/epje/i2012-12084-yCrossRefGoogle Scholar
Mori, F., Le Doussal, P., Majumdar, S. N. and Schehr, G. (2020). Universal properties of a run-and-tumble particle in arbitrary dimension. Phys. Rev. E 102, article no. 042133.10.1103/PhysRevE.102.042133CrossRefGoogle ScholarPubMed
Orsingher, E. (1990). Probability law, flow function, maximum distribution of wave-governed random motions and their connections with Kirchoff’s laws. Stoch. Process. Appl. 34, 4966.10.1016/0304-4149(90)90056-XCrossRefGoogle Scholar
Orsingher, E. (2002). Bessel functions of third order and the distribution of cyclic planar random motion with three directions. Stoch. Stoch. Reports 74, 617631.10.1080/1045112021000060755CrossRefGoogle Scholar
Orsingher, E. and De Gregorio, A. (2007). Random flights in higher spaces. J. Theoret. Prob. 20, 769806.10.1007/s10959-007-0093-yCrossRefGoogle Scholar
Orsingher, E., Garra, R. and Zeifman, A. I. (2020). Cyclic random motions with orthogonal directions. Markov Process. Relat. Fields 26, 381402.Google Scholar
Orsingher, E. and Kolesnik, A. D. (1996). Exact distribution for a planar random motion model, controlled by a fourth-order hyperbolic equation. Theory Prob. Appl. 41, 379386.Google Scholar
Pogorui, A. (2012). Evolution in multidimensional spaces. Random Oper. Stoch. Equ. 20, 119126.10.1515/rose-2012-0006CrossRefGoogle Scholar
Samoilenko, V. (2001). Markovian evolutions in $\mathbb {R}^n$ . Random Operators Stoch. Equat. 9, 139160.Google Scholar
Santra, I., Basu, U. and Sabhapandit, S. (2020). Run-and-tumble particles in two dimensions: marginal position distributions. Phys. Rev. E 101, article no. 062120.10.1103/PhysRevE.101.062120CrossRefGoogle ScholarPubMed
Travaglino, F., Di Crescenzo, A., Martinucci, B. and Scarpa, R. (2018). A new model of Campi Flegrei inflation and deflation episodes based on Brownian motion driven by the telegraph process. Math. Geosci. 50, 961975.10.1007/s11004-018-9756-8CrossRefGoogle Scholar