
Optimal transport on gas networks

Published online by Cambridge University Press:  16 April 2025

Ariane Fazeny*
Affiliation:
Helmholtz Imaging, Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany
Martin Burger
Affiliation:
Helmholtz Imaging, Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany Fachbereich Mathematik, Universität Hamburg, Hamburg, Germany
Jan-F. Pietschmann
Affiliation:
Institute of Mathematics, Institut für Mathematik, Universität Augsburg, Augsburg, Germany Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Augsburg, Germany
*
Corresponding author: Ariane Fazeny; Email: [email protected]

Abstract

Optimal transport tasks naturally arise in gas networks, which include a variety of constraints such as physical plausibility of the transport and the avoidance of extreme pressure fluctuations. To define feasible optimal transport plans, we utilize a $p$-Wasserstein metric and similar dynamic formulations minimizing the kinetic energy necessary for moving gas through the network, which we combine with suitable versions of Kirchhoff’s law as the coupling condition at nodes. In contrast to existing literature, we especially focus on the non-standard case $p \neq 2$ to derive an overdamped isothermal model for gases through $p$-Wasserstein gradient flows in order to uncover and analyze underlying dynamics. We introduce different options for modelling the gas network as an oriented graph including the possibility to store gas at interior vertices and to put in or take out gas at boundary vertices.

Type
Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

With the aim of re-purposing and extending existing natural gas infrastructure to hydrogen networks or mixed natural gas and hydrogen networks, there is a fundamental interest in understanding the dynamics of gas transport and the influence of different network topologies.

In this work, we model such networks as metric (or quantum) graphs, where to each edge one associates a one-dimensional interval. On these, we pose a one-dimensional partial differential equation describing the evolution of the mass density of the gas. Examples of such models are the (ISO1) or (ISO3) models introduced within a whole hierarchy in [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11]. Next, we carry out a detailed modelling of possible coupling conditions at the vertices. In particular, we include the possibility of mass storage at vertices, by which we extend previous approaches. As it turns out, such models are closely related to dynamic formulations of optimal transport problems on metric graphs. This becomes evident when one only retains the conservation of mass and considers transport on networks minimizing some kinetic energy. The results of the optimal transport may give an indication of the efficiency of the gas transport on the network, in particular if we extend the purely conservative framework to possible influx or out-flux at some of the nodes, related to pumping in by the provider or extracting gas at the consumer. We will study the related optimal transport approaches and, in particular, $p$ -Wasserstein metrics on the metric graphs. We mention, however, that these are only metrics if there is no additional in- or out-flux.

In the literature, (dynamic) optimal transport is typically restricted to bounded domains that are subsets of $\mathbb {R}^n$, and the number of publications about transport problems on metric graphs (or even non-convex domains) is still quite limited and recent; see below for a discussion. Optimal transport and Wasserstein metrics are natural tools for studying gradient flows, which may also be relevant for some of the overdamped transport models. The study of gradient flows in Wasserstein spaces on metric graphs has been pioneered in [Reference Erbar, Forkert, Maas and Mugnolo14] in the case $p=2$ and without storage at nodes. In this setting, even standard logarithmic entropies are no longer displacement convex. This complicates the study of gradient flows by means of strong notions such as evolution variational inequalities; however, more general notions such as the minimizing movement scheme are still available. We are able to recover (ISO3) as a gradient flow in Wasserstein spaces with $p=3$ , and we will also highlight some issues related to defining appropriate potentials on the metric graph, which can differ in a non-trivial way from simple definitions on single edges, since different integration constants on each edge change the interface condition.

Therefore, we focus on how metric graphs can be used to model gas networks and how the $p$ -Wasserstein distance allows us to tackle optimal transport problems on such graphs, where we aim to minimize the kinetic energy necessary for moving gas through the network. Furthermore, we introduce various dynamic formulations of the $p$ -Wasserstein metric in order to derive physically feasible optimal transport plans, and we derive $p$ -Wasserstein gradient flows.

1.1. Main contributions

This paper generalizes the dynamic formulation of the $2$ -Wasserstein metric presented in [Reference Burger, Humpert and Pietschmann8] to general $p \in \left [1, \infty \right )$ . Moreover, we present two types of coupling conditions at interior vertices (classical and generalized Kirchhoff’s law) as well as time-dependent and time-independent boundary conditions. All of these conditions can be incorporated into the definition of our $p$ -Wasserstein metric and hence allow us to solve different balanced or unbalanced optimal transport tasks on metric graphs. Furthermore, we give a detailed description of classical and weak solutions of the presented optimal transport problem and derive $p$ -Wasserstein gradient flows. Finally, we extend a primal-dual gradient scheme to metric graphs, with and without additional vertex dynamics, and provide several numerical examples for the introduced transport problems.

The structure of the paper is as follows: Section 2 describes how we encode gas networks as oriented graphs, how the gas flow at individual pipes is given by the (ISO3) model of [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11], how we couple pipes and allow for storage of gas at interior vertices and how we model mass conservation at boundary vertices as well as the inflow and outflow of gas into and from the network. Section 3 includes the introduction of measures on graphs, the formal definition of the optimal transport problems on the oriented graph together with feasibility conditions for given initial and boundary data, in particular mass conservation conditions. Section 4 introduces the static formulation of the $p$ -Wasserstein distance and different dynamic formulations depending on the coupling and boundary conditions and discusses their basic mathematical properties. Moreover, we provide an outlook to gradient flow structures and their use for gas networks in Section 5, highlighting the connection to the (ISO3) model in Section 2. Finally, we discuss some numerical examples of optimal transport on networks in Section 6.

1.2. Related work

The mathematical foundation of optimal transport and Wasserstein metrics, including dynamical formulations as well as the derivation of Wasserstein gradient flows with suitable time discretization schemes, can be found in [Reference Ambrosio, Gigli and Savaré2] and [Reference Santambrogio28]. While [Reference Ambrosio, Gigli and Savaré2] focuses on gradient flows in probability spaces, [Reference Santambrogio28] gives an extensive introduction into optimal transport from the viewpoint of applied mathematics.

Details about the equivalence of the static and dynamic formulations of the Wasserstein distance can be found in the original publication [Reference Benamou and Brenier5]. This dynamic formulation was extended in many directions, e.g. including non-linear mobilities [Reference Dolbeault, Nazaret and Savaré10] or discrete as well as generalized graph structures [Reference Esposito, Patacchini, Schlichting and Slepčev15, Reference Heinze, Pietschmann and Schmidtchen19, Reference Maas23]. Of particular interest in the context of this work is [Reference Monsaingeon24], where two independent optimal transport problems (one in the interior of a domain and one on the boundary) are coupled so that mass exchange is possible. The idea of mass exchange between the boundary and the interior of the domain is applied to metric graphs modelling a network in [Reference Burger, Humpert and Pietschmann8], also utilizing the dynamic formulation of the $2$ -Wasserstein metric on networks for optimal transport tasks, including proofs of existence of minimizers via convex duality and the derivation of $2$ -Wasserstein gradient flows. The equivalence of the static and dynamic formulation of the $2$ -Wasserstein metric for metric graphs is proven in [Reference Erbar, Forkert, Maas and Mugnolo14]. In addition, [Reference Erbar, Forkert, Maas and Mugnolo14] also studies $2$ -Wasserstein gradient flows of diffusion equations on metric graphs.

The catalogue [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11] gives an overview of models for the transport of gas through networks, and in the case of the (ISO) model hierarchy it derives different possibilities for modelling isothermal gas flow in pipes starting from the original Euler equations. In our paper, we will use the (ISO3) model introduced there, which, based on a few assumptions on the gas flow, constitutes a simplified PDE system encoding mass and momentum conservation. Part of the model catalogue (especially the rigorous derivation of the (ISO3) model) is based on the work of [Reference Brouwer, Gasser and Herty7] and [Reference Osiadacz26], which use asymptotic analysis of transient gas equations to characterize different gas flow models and to perform numerical simulations on examples of those models. Coupling conditions at multiple pipe connections for the isothermal Euler equations are introduced in [Reference Banda, Herty and Klar3]. These coupling conditions resemble Kirchhoff’s law but also include an equal-pressure assumption at vertices of the gas network, and for these coupling conditions existence of solutions at T-shaped intersections is proven.

In [Reference Gugat, Leugering and Hante18], a method for the calculation of stationary states of gas networks is introduced. The gas flow in pipes is governed by (ISO) flow models of the previously mentioned model hierarchy, and the paper specifically analyzes networks containing cycles and how suitably chosen boundary conditions determine the uniqueness of stationary solutions. Furthermore, in [Reference Egger and Giesselmann12], stability estimates for friction-dominated gas transport with respect to initial conditions are proven and an asymptotic analysis of the high-friction limit is given. [Reference Gugat, Habermann, Hintermüller and Huber17] analyzes the physical validity of the (ISO2) model (a generalization of the (ISO3) model from [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11] used in this paper) for sufficiently low Mach numbers. It includes existence results for continuous solutions satisfying upper bounds on the pressure and on the magnitude of the Mach number of the gas flow, which are crucial parameters for the physical validity of solutions.

[Reference Kramar Fijavž, Mugnolo and Sikolya20] studies the standard diffusion problem on metric graphs, which, as a special case of no drift and no non-local interaction, is included in [Reference Erbar, Forkert, Maas and Mugnolo14]. In [Reference von Below and Nicaise4], the coupling of PDEs on spaces of different dimension, with coupling conditions similar to Kirchhoff’s law, is analyzed; however, the paper considers semi-linear diffusion-reaction equations. This is extended in [Reference Mugnolo and Romanelli25], which also includes dynamics at the vertices, but the authors rely on semigroup techniques to show well-posedness, properties of the generator and long-time behaviour. In [Reference Nicaise22], the spectrum of the diffusion operator on a network consisting of single, concatenated edges is studied by means of an explicit construction of eigenvalues and eigenfunctions.

2. Modelling gas networks as metric graphs

Given a gas network consisting of a system of pipes, we encode it as a metric graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ . The set of vertices

\begin{equation*} \mathcal {V} = \left \{\nu _1, \nu _2, \dots \nu _n\right \} \end{equation*}

for $n \in \mathbb {N}$ , encodes all starting and endpoints of the individual pipes, and thus also corresponds to the junction points of the pipes. Moreover, for $m \in \mathbb {N}$ , we have a set of edges

\begin{equation*} \mathcal {E} = \left \{e_1, e_2, \dots e_m\right \} \subseteq \left \{e = \left \{\nu _i, \nu _j\right \} \, \middle | \, \nu _i, \nu _j \in \mathcal {V}\right \}, \end{equation*}

which correspond to the individual pipes with a predetermined ``typical'' flow direction. To each edge $e \in \mathcal {E}$ , we also assign the length $L_e \gt 0$ and a local coordinate system $x \in \left [0,L_e\right ]$ , which follows the orientation. As usual, we consider the metric graph as the product space of all edges and the vertex points, where we identify the boundary points with the vertices, i.e. we take the quotient space of the closed edges subject to the identity relation of being the same vertex (cf. [Reference Erbar, Forkert, Maas and Mugnolo14]).

For each edge $e \in \mathcal {E}$ , a start vertex $\delta ^S(e) \in e$ and an end vertex $\delta ^E(e) \in e$ are assigned, which determine the orientation of edge $e$ . Since we exclude pipes that constitute loops from our modelling, for each edge $e \in \mathcal {E}$ it holds that

\begin{equation*} \left \{\delta ^S(e)\right \} \cup \left \{\delta ^E(e)\right \} = e \qquad \text {and} \qquad \left \{\delta ^S(e)\right \} \cap \left \{\delta ^E(e)\right \} = \emptyset . \end{equation*}

Note that in our notation the edge coordinate system starts at $\delta ^S(e)$ and ends at $\delta ^E(e)$ ; however, flows against the orientation of edges (going from $L_e$ to $0$ ) are also considered feasible and simply result in a negative velocity.

In the set of vertices, we differentiate between boundary vertices $\partial \mathcal {V}$ , where gas enters or exits the network definitively (as supply or demand), and interior vertices $\stackrel{\small\circ}{\mathcal {V}}$ . Hence, we first choose our boundary vertices, and then define $\stackrel{\small\circ}{\mathcal {V}} = \mathcal {V} \backslash \partial \mathcal {V}$ , which directly implies that each vertex $\nu \in \mathcal {V}$ is either a boundary vertex $\nu \in \partial \mathcal {V}$ or an interior vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ . In addition, within the set of boundary vertices $\partial \mathcal {V}$ , we differentiate between source vertices $\partial ^+ \mathcal {V}$ , which supply the network with gas, and sink vertices $\partial ^- \mathcal {V}$ , at which gas is taken out of the network to meet given demands. We will assume that each boundary vertex $\nu \in \partial \mathcal {V}$ is either a source vertex $\nu \in \partial ^+ \mathcal {V}$ or a sink vertex $\nu \in \partial ^- \mathcal {V}$ .

If an interior vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ is only connected to one other vertex in the graph, then we call $\nu$ a dead-end. This case could arise due to maintenance in the gas network or when pipes of the network are utilized for storage purposes rather than gas transport.

In order to guarantee well-defined expressions and to simplify the notation, we assume the following properties for our underlying gas network and the resulting oriented graph.

Remark 2.1 (Assumptions for the graph). For our oriented graph, we will always assume that the sets of vertices and edges are finite, that $\left \lvert \mathcal {E}\right \rvert \geq 1$ and that there are no loops $\left \{\nu, \nu \right \}$ for any $\nu \in \mathcal {V}$ , which automatically implies that $\left \lvert \mathcal {V}\right \rvert \geq 2$ . Moreover, we assume that our graph is connected, meaning that for every two vertices $\nu _i, \nu _j \in \mathcal {V}$ with $\nu _i \neq \nu _j$ , there exists a path from $\nu _i$ to $\nu _j$ (ignoring the orientation of edges).

Moreover, we specifically want to allow the case of no source and sink vertices ( $\stackrel{\small\circ}{\mathcal {V}} = \mathcal {V}$ ) in order to include the model of [Reference Burger, Humpert and Pietschmann8]; the cases of source vertices but no sink vertices ( $\partial ^+ \mathcal {V} = \partial \mathcal {V}$ ) or vice versa ( $\partial ^- \mathcal {V} = \partial \mathcal {V}$ ) are also considered feasible in our modelling. Furthermore, we will neglect the pipe intersection angles at interior vertices in our modelling and only consider the length of pipes.

Example 2.2 (Gas network as oriented graph). Consider a gas network consisting of five pipes, two supply vertices and one demand vertex. The oriented graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ encoding this network contains five edges

\begin{equation*} \mathcal {E} = \left \{e_1, e_2, e_3, e_4, e_5\right \} = \left \{\left \{\nu _1, \nu _4\right \}, \left \{\nu _2, \nu _4\right \}, \left \{\nu _4, \nu _5\right \}, \left \{\nu _3, \nu _5\right \}, \left \{\nu _5, \nu _6\right \}\right \}, \end{equation*}

three boundary vertices $\partial \mathcal {V}$ and three interior vertices $\stackrel{\small\circ}{\mathcal {V}}$

\begin{align*} \mathcal {V} = \partial \mathcal {V} \cup \stackrel{\small\circ}{\mathcal {V}} & = \left \{\nu _1, \nu _2, \nu _3\right \} \cup \left \{\nu _4, \nu _5, \nu _6\right \},\\ \partial ^+ \mathcal {V} & = \left \{\nu _1, \nu _2\right \},\\ \partial ^- \mathcal {V} & = \left \{\nu _3\right \}. \end{align*}

The weights of each edge $e$ are assigned according to the pipe length

\begin{equation*} L_{e_1} = L_{e_4} = 45, \qquad L_{e_2} = 72, \qquad L_{e_3} = 40, \qquad L_{e_5} = 63. \end{equation*}

Moreover, the start and end vertices of each edge are given by

\begin{equation*} \begin{array}{lll} &\nu _1 = \delta ^S\left (e_1\right ) \qquad \qquad \qquad & \nu _4 = \delta ^E\left (e_1\right ) = \delta ^E\left (e_2\right ) = \delta ^S\left (e_3\right )\\ &\nu _2 = \delta ^S\left (e_2\right ) \qquad \qquad \qquad &\nu _5 = \delta ^E\left (e_3\right ) = \delta ^S\left (e_4\right ) = \delta ^S\left (e_5\right )\\ & \nu _3 = \delta ^E\left (e_4\right ) \qquad \qquad \qquad &\nu _6 = \delta ^E\left (e_5\right ) \end{array} \end{equation*}

Thus, vertex $\nu _6$ is a dead-end of our gas network, since it is only connected to the single edge $e_5$ and is neither a source nor a sink vertex.
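For readers who prefer a computational view, the oriented graph of this example can be encoded with elementary data structures. The following Python snippet is a minimal sketch; the names Edge, edges and is_dead_end are our own illustrative choices and not notation from the paper.

# Minimal sketch of the oriented graph from Example 2.2 (illustrative only).
from dataclasses import dataclass

@dataclass
class Edge:
    start: str    # delta^S(e)
    end: str      # delta^E(e)
    length: float # L_e

edges = {
    "e1": Edge("v1", "v4", 45.0),
    "e2": Edge("v2", "v4", 72.0),
    "e3": Edge("v4", "v5", 40.0),
    "e4": Edge("v5", "v3", 45.0),
    "e5": Edge("v5", "v6", 63.0),
}

source_vertices   = {"v1", "v2"}        # partial^+ V
sink_vertices     = {"v3"}              # partial^- V
interior_vertices = {"v4", "v5", "v6"}  # interior vertices

# A vertex is a dead-end if it is interior and appears in exactly one edge.
def is_dead_end(v):
    degree = sum(v in (e.start, e.end) for e in edges.values())
    return v in interior_vertices and degree == 1

assert is_dead_end("v6") and not is_dead_end("v4")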

2.1. Gas flow in single pipes

In a given time interval $\left [0, T\right ]$ with $T \gt 0$ , on each edge $e \in \mathcal {E}$ , the physical properties of gas flow can be modelled by the isothermal Euler equations for compressible and inviscid fluids. In order to simplify the Euler equation system, we assume the pipe walls and the gas to have the same temperature $\mathcal {T}$ , which allows us to omit the energy conservation equation (see [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11]). For a mass density $\rho _e: \left [0, L_e\right ] \times \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ and a velocity $v_e: \left [0, L_e\right ] \times \left [0, T\right ] \longrightarrow \mathbb {R}$ , the system of equations reads as

(ISO1) \begin{align} \begin{split} \frac {\partial \rho _e}{\partial t} + \frac {\partial }{\partial x}\left (\rho _e v_e\right ) & = 0,\\ \frac {\partial (\rho _e v_e)}{\partial t} + \frac {\partial }{\partial x} \left (p_e \left (\rho _e\right )+ \rho _e v_e^2 \right ) & = -\frac {\lambda _e}{2\mathcal {D}_e} \rho _e v_e \left \lvert v_e\right \rvert - g \rho _e \sin {\left (\omega _e\right )},\\ \text {for} \, \left (x, t\right ) & \in \left (0, L_e\right ) \times \left (0, T\right ). \end{split} \end{align}

The first equation (continuity equation) encodes conservation of mass and the second equation ensures conservation of momentum. Here, $\lambda _e \geq 0$ denotes the pipe friction coefficient, $\mathcal {D}_e \gt 0$ the pipe diameter, $g \approx 9.81 \, m / s^2$ the gravitational acceleration and $\omega _e \in \left [0, 2\pi \right ]$ the inclination angle of the pipe $e$ . The pressure $p_e: \left [0, L_e\right ] \times \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ is given by the real gas equation of state

(RGE) \begin{equation} p_e\left (\rho _e\right ) = R \rho _e \mathcal {T} z, \end{equation}

with gas constant $R \approx 8.3 \, J / \left (K \cdot mol\right )$ , temperature $\mathcal {T} \gt 0$ of the gas and pipe walls and compressibility factor $z \geq 0$ , which we assume to be constant, but which in general depends on $p_e$ and, in the non-isothermal case, also on $\mathcal {T}$ . For $z = 1$ , the real gas equation of state reduces to the pressure law for ideal gases. For each edge $e \in \mathcal {E}$ , we furthermore define the mass flux

\begin{equation*} j_e: \left [0, L_e\right ] \times \left [0, T\right ] \longrightarrow \mathbb {R} \quad \left (x, t\right ) \mapsto j_e\left (x, t\right ) :\!= \rho _e\left (x, t\right ) v_e\left (x, t\right ). \end{equation*}

Remark 2.3 (Modelling three-dimensional pipes as one-dimensional edges). Note that in (ISO1), we model a three-dimensional pipe as a one-dimensional edge $e$ , thus the mass density, the velocity, the pressure and the mass flux are averaged in the cross-section of the pipe to derive a one-dimensional formulation. A corresponding derivation of optimal transport from three-dimensional pipe domains to one-dimensional metric graphs constitutes an interesting question for future research.

In order to further simplify the (ISO1) model, we assume small velocities $|v_e|$ , which enables us to eliminate the non-linearity on the left-hand side of the momentum equation (leading to the (ISO2) model in [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11]), as shown in [Reference Osiadacz26]. Furthermore, we assume a friction-dominated flow, which corresponds to the friction on the pipe walls dominating the inertial effects. If we also neglect the gravitational force in the asymptotic considerations, then, using the scaling approaches of [Reference Brouwer, Gasser and Herty7], we obtain an even further simplified left-hand side of the momentum equation; overall, the model then corresponds to the (ISO3) model in [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11], which is given by

(ISO3) \begin{align} \begin{split} \frac {\partial \rho _e}{\partial t} + \frac {\partial }{\partial x}\left (\rho _e v_e\right ) & = 0,\\ \frac {\partial }{\partial x} \left (p_e \left (\rho _e\right )\right ) & = -\frac {\lambda _e}{2\mathcal {D}_e} \rho _e v_e \left \lvert v_e\right \rvert - g \rho _e \sin {\left (\omega _e\right )}, \end{split} \end{align}

with the linear constitutive relation of $p_e(\rho _e)$ being based on (RGE).
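To make the structure of (ISO3) concrete, the following sketch performs one naive explicit time step on a single pipe: the algebraic momentum balance is solved pointwise for the velocity, and the continuity equation is advanced with centred differences. This is purely illustrative (no statement about stability or about the discretization used later in the paper); all parameter values are placeholders, and the constant RTz lumps together $R \mathcal {T} z$ from (RGE).

import numpy as np

def iso3_step(rho, dt, dx, lam=0.01, Diam=0.5, RTz=1.0, g=9.81, omega=0.0):
    """One naive explicit step of (ISO3) on a single pipe (illustrative sketch)."""
    p = RTz * rho                    # pressure law (RGE), constants lumped into RTz
    dpdx = np.gradient(p, dx)        # centred approximation of dp/dx
    # Momentum balance: dp/dx = -(lam / (2 D)) rho v |v| - g rho sin(omega)
    # => v |v| = -(2 D / lam) (dp/dx + g rho sin(omega)) / rho
    w = -(2.0 * Diam / lam) * (dpdx + g * rho * np.sin(omega)) / np.maximum(rho, 1e-12)
    v = np.sign(w) * np.sqrt(np.abs(w))
    j = rho * v                      # mass flux j = rho v
    # Continuity equation: d rho / dt + d j / dx = 0
    return rho - dt * np.gradient(j, dx)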

2.2. Gas flow at interior vertices

To connect the gas flow of the individual pipes into a feasible network, we need coupling conditions for the flux at vertices, which model pipe intersections. This applies to both interior vertices $\stackrel{\small\circ}{\mathcal {V}}$ and boundary vertices $\partial \mathcal {V}$ ; for interior vertices $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ , we additionally assume that it is possible to store gas at the individual vertices.

In order to formulate different coupling and boundary conditions in a unified manner, we introduce the vertex excess flux, based on a generalized version of Kirchhoff’s law, as

(VF KL) \begin{equation} f_{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R} \quad t \mapsto f_{\nu }\left (t\right ) :\!= \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j_e\left (L_e, t\right ) - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j_e\left (0, t\right ). \end{equation}

For each vertex $\nu \in \mathcal {V}$ , it captures the difference between the total influx over all incoming edges and the total outflux over all outgoing edges.

To model the possibility to store gas at interior vertices $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ , we introduce vertex mass densities $\gamma _{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ for all interior vertices $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ . At each interior vertex, we have the initial storage volume $\gamma _{\nu }\left (0\right ) \geq 0$ and for all $t \in \left (0, T\right )$ the vertex mass density has to satisfy

(1) \begin{equation} \frac {\partial \gamma _{\nu }}{\partial t} = f_{\nu }. \end{equation}

The combination of the vertex excess flux $f_{\nu }$ and the vertex mass density $\gamma _{\nu }$ ensures mass conservation at each interior vertex at all times: either gas flows through the vertex completely ( $f_{\nu } = 0$ , as accumulated inflow equals accumulated outflow), or gas is stored at the given vertex ( $f_{\nu } \gt 0$ ), or gas is taken out of storage at the given vertex ( $f_{\nu } \lt 0$ ).
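As an illustration of (VF KL) and (1), the following sketch evaluates the vertex excess flux from the edge boundary fluxes and advances the stored mass with a forward Euler step. It reuses the hypothetical Edge container from the sketch after Example 2.2; the dictionaries j_at_start and j_at_end stand for the boundary values $j_e(0, t)$ and $j_e(L_e, t)$.

def vertex_excess_flux(v, edges, j_at_start, j_at_end):
    """f_nu(t): influx over edges ending at v minus outflux over edges starting at v."""
    inflow = sum(j_at_end[name] for name, e in edges.items() if e.end == v)
    outflow = sum(j_at_start[name] for name, e in edges.items() if e.start == v)
    return inflow - outflow

def update_storage(gamma_v, f_v, dt):
    """Forward Euler step for d gamma_nu / dt = f_nu at an interior vertex."""
    return gamma_v + dt * f_v

# With classical Kirchhoff's law (C KL) one instead enforces f_nu == 0,
# so gamma_nu stays constant in time.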

Not storing gas in the interior vertices can be achieved by setting $f_{\nu } \equiv 0$ . Hence, for all $t \in \left [0, T\right ]$ we obtain the classical version of Kirchhoff’s law

(C KL) \begin{equation} 0 = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j_e\left (L_e, t\right ) - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j_e\left (0, t\right ). \end{equation}

Observation 2.4 (No storage at interior vertices). After setting $f_{\nu } \equiv 0$ , the vertex mass density $\gamma _{\nu }$ can be omitted from further calculations, since $\frac {\partial \gamma _{\nu }}{\partial t} \equiv 0$ directly implies a constant amount of stored gas in each vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$

\begin{equation*} \gamma _{\nu }\left (0\right ) = \gamma _{\nu }\left (T\right ) \equiv \gamma _{\nu }\left (t\right ), \qquad \forall t \in \left (0, T\right ). \end{equation*}

Note that continuity of the velocity function $v$ or of the mass density $\rho$ at the interior vertices is not enforced by these coupling conditions; however, it is also not expected in general.

2.3. Gas flow at boundary vertices

Boundary vertices $\nu \in \partial \mathcal {V}$ always either constitute source vertices or sink vertices of the network, which are responsible for the supply (inflow) and demand (outflow) of gas in the network. In contrast to interior vertices, we assume that storage of gas at boundary vertices is not possible. Therefore, we need a modified version of Kirchhoff’s law, which also includes the inflow and outflow of gas to and from outside of the network, to ensure conservation of mass at boundary vertices. Hence, we introduce time-dependent and time-independent boundary conditions for boundary vertices $\nu \in \partial \mathcal {V}$ .

Definition 2.5 (Time-dependent boundary conditions). To include time-dependent boundary conditions for the gas supply at source vertices $\partial ^+ \mathcal {V}$ as well as for the demand at sink vertices $\partial ^- \mathcal {V}$ , we suppose that the source vertex flux function $s_{\nu }^G: \left [0, T\right ] \longrightarrow \mathbb {R}_{\leq 0}$ encodes the amount of gas that has to enter the network at each source vertex $\nu \in \partial ^+ \mathcal {V}$ ( $s$ for supply) and that the sink vertex flux function $d_{\nu }^G: \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ describes the given outflow of gas from the network at each sink vertex $\nu \in \partial ^- \mathcal {V}$ ( $d$ for demand).

Note that in the PDE literature, this type of boundary condition is typically called “inhomogeneous”. Inspired by the coupling conditions at interior vertices, we can formulate these conditions as a system of equations.

Observation 2.6 (Time-dependent boundary conditions). With the generalized Kirchhoff’s law (VF KL), these boundary conditions can be written as

\begin{align*} \qquad s_{\nu }^G \left (t\right ) & = f_{\nu } \left (t\right ) \qquad & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left [0, T\right ], \qquad \\ \qquad d_{\nu }^G \left (t\right ) & = f_{\nu } \left (t\right ) \qquad & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left [0, T\right ], \qquad \end{align*}

because the source vertex flux $\left \lvert s_{\nu }^G \right \rvert$ acts as a flow into vertex $\nu \in \partial ^+ \mathcal {V}$ (thus an outflow $\left .j_e\right |_{x = L_e}$ of an imaginary edge $e$ ) and the sink vertex flux $d_{\nu }^G$ can be seen as a flow out of vertex $\nu \in \partial ^- \mathcal {V}$ (thus an inflow $\left .j_e\right |_{x = 0}$ of an imaginary edge $e$ ). With this interpretation, we obtain

\begin{align*} \qquad 0 & = \Bigg ( \left \lvert s_{\nu }^G \left (t\right ) \right \rvert + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j_e\left (L_e, t\right ) \Bigg ) - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j_e\left (0, t\right ) \qquad & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left [0, T\right ], \qquad & \\ \qquad 0 & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j_e\left (L_e, t\right ) - \Bigg ( d_{\nu }^G \left (t\right ) + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j_e\left (0, t\right ) \Bigg ) \qquad & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left [0, T\right ]. \qquad & \\ \end{align*}
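In the same illustrative setting as above, the balances of Observation 2.6 can be checked directly: at a source vertex the excess flux has to equal the prescribed supply $s_{\nu }^G(t)$ , and at a sink vertex the prescribed demand $d_{\nu }^G(t)$ . A minimal sketch, where f_nu is a dictionary of precomputed vertex excess fluxes and supply/demand map boundary vertices to the hypothetical flux functions:

def boundary_residuals(t, f_nu, supply, demand):
    """Residuals of (TD BC 1)/(TD BC 2); all zero iff the boundary balances hold at time t."""
    res = {v: f_nu[v] - s_G(t) for v, s_G in supply.items()}   # sources: f_nu(t) = s_nu^G(t)
    res.update({v: f_nu[v] - d_G(t) for v, d_G in demand.items()})  # sinks: f_nu(t) = d_nu^G(t)
    return res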

Another option for conditions at the boundary vertices $\partial \mathcal {V}$ is to only consider the (mandatory) total gas supply $\left \lvert S_{\nu }^G \right \rvert$ for each source vertex $\nu \in \partial ^+ \mathcal {V}$ over the time period $\left [0, T\right ]$ , which is available starting at time $t = 0$ , as well as the (mandatory) total gas demand $D_{\nu }^G$ for each sink vertex $\nu \in \partial ^- \mathcal {V}$ over the time period $\left [0, T\right ]$ , which has to be met at the latest at time $t = T$ . Since these conditions only have to be fulfilled at the end of the time period, we call them time-independent boundary conditions.

Definition 2.7 (Time-independent boundary conditions). Suppose for the time period $\left [0, T\right ]$ , the accumulated supplies $\left \lvert S_{\nu }^G \right \rvert$ (with $S_{\nu }^G \leq 0$ ) at source vertices $\nu \in \partial ^+ \mathcal {V}$ and accumulated demands $D_{\nu }^G \geq 0$ at sink vertices $\nu \in \partial ^- \mathcal {V}$ are given. Then our aim is to find source vertex flux functions

\begin{equation*} s_{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R}_{\leq 0} \quad t \mapsto s_{\nu }\left (t\right ) :\!= f_{\nu } \left (t\right ) \end{equation*}

for each source vertex $\nu \in \partial ^+ \mathcal {V}$ , such that

(2) \begin{equation} S_{\nu }^G = \int _0^T s_{\nu }\left (t\right ) \, \mathrm d t, \end{equation}

and to find sink vertex flux functions

\begin{equation*} d_{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0} \quad t \mapsto d_{\nu }\left (t\right ) :\!= f_{\nu } \left (t\right ) \end{equation*}

for each sink vertex $\nu \in \partial ^- \mathcal {V}$ , such that

(3) \begin{equation} D_{\nu }^G = \int _0^T d_{\nu }\left (t\right ) \, \mathrm d t. \end{equation}

Inspired by the coupling conditions at interior vertices, we can once again reformulate these conditions.

Observation 2.8 (Time-independent boundary conditions). If we define a source vertex mass density $S_{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ for all source vertices $\nu \in \partial ^+ \mathcal {V}$ , then constraint (2) can be rewritten as

\begin{align*} \qquad \frac {\partial S_{\nu } \left (t\right )}{\partial t} & = s_{\nu } \left (t\right ) \qquad & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left (0, T\right ), \qquad & \end{align*}

with initial and final condition $S_{\nu }\left (0\right ) = \left \lvert S_{\nu }^G \right \rvert$ and $S_{\nu }\left (T\right ) = 0$ .

In a similar manner, by introducing a sink vertex mass density $D_{\nu }: \left [0, T\right ] \longrightarrow \mathbb {R}_{\geq 0}$ for each sink vertex $\nu \in \partial ^- \mathcal {V}$ , constraint (3) can be rewritten as

\begin{align*} \qquad \frac {\partial D_{\nu } \left (t\right )}{\partial t} & = d_{\nu } \left (t\right ) \qquad & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left (0, T\right ), \qquad & \end{align*}

with initial and final condition $D_{\nu }\left (0\right ) = 0$ and $D_{\nu }\left (T\right ) = D_{\nu }^G$ .

Note that in the case of time-independent boundary conditions the non-positivity property of the source vertex flux and the non-negativity of the sink vertex flux are not automatically fulfilled.

Remark 2.9 (Non-positivity of $s_{\nu }$ and non-negativity of $d_{\nu }$ ). For time-independent boundary conditions, it is necessary to constrain $s_{\nu }\left (t\right )$ to $\mathbb {R}_{\leq 0}$ and $d_{\nu }\left (t\right )$ to $\mathbb {R}_{\geq 0}$ for all $t \in \left [0, T\right ]$ . Otherwise, $s_{\nu }\left (t\right ) \gt 0$ would indicate giving back gas supply and $d_{\nu }\left (t\right ) \lt 0$ would correspond to taking back gas demand, which are both unrealistic behaviours for real-world applications.

The source vertex mass can also be defined in the case of time-dependent boundary conditions as

\begin{equation*} S_{\nu } \left (t\right ) :\!= \big \lvert {\underbrace {S_{\nu }^G}_{\leq \, 0}} \vphantom {S_{\nu }^G} \big \rvert - \bigg \lvert \int _0^t {\underbrace {s_{\nu }^G \left (\tilde {t}\right )}_{\leq \, 0}}\vphantom {s_{\nu }^G \left (\tilde {t}\right )} \, \mathrm d \tilde {t} \bigg \rvert \quad \text {with} \quad S_{\nu }^G :\!= \int _0^T s_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \end{equation*}

for source vertices $\nu \in \partial ^+ \mathcal {V}$ , giving the remaining supply at time $t$ , which has not entered the network yet. Similarly, the sink vertex mass can be defined as

\begin{equation*} D_{\nu } \left (t\right ) :\!= \int _0^t d_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \quad \text {with} \quad D_{\nu }^G :\!= \int _0^T d_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \end{equation*}

for sink vertices $\nu \in \partial ^- \mathcal {V}$ , encoding the amount of gas which has already exited the network until time $t$ .

Proposition 2.10 (Source vertex mass density). The source vertex mass density $S_{\nu } \left (t\right )$ of a source vertex $\nu \in \partial ^+ \mathcal {V}$ at any time $t \in \left [0, T\right ]$ can be calculated by

\begin{equation*} S_{\nu }\left (t\right ) = \left \lvert \int _t^T s_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \right \rvert \end{equation*}

for time-dependent boundary conditions, or with $s_{\nu }$ instead of $s_{\nu }^G$ for time-independent boundary conditions.
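Proposition 2.10 can be evaluated numerically by a simple quadrature. The sketch below uses the trapezoidal rule on a sampled time grid; the sampling and the quadrature rule are our own assumptions and not a discretization prescribed by the paper.

import numpy as np

def remaining_supply(t_grid, s_values, t):
    """S_nu(t) = | int_t^T s(t~) dt~ | for a flux s sampled on t_grid (trapezoidal rule)."""
    mask = t_grid >= t
    return abs(np.trapz(s_values[mask], t_grid[mask]))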

With the vertex flux functions and the vertex mass densities at boundary vertices, we can also calculate the total gas supply $\left \lvert S \right \rvert$ (with $S \leq 0$ ) entering the network and the total gas demand $D$ exiting the network in the time period $\left [0, T\right ]$ .

Observation 2.11 (Total supply and total demand). In the case of time-dependent boundary conditions, the total supply and total demand of the network for the time period $\left [0, T\right ]$ are given by

\begin{align*} S & = \sum _{\nu \in \partial ^+ \mathcal {V}} - S_{\nu }\left (0\right ) = \sum _{\nu \in \partial ^+ \mathcal {V}} S_{\nu }^G = \sum _{\nu \in \partial ^+ \mathcal {V}} \int _0^T s_{\nu }^G \left (t\right ) \, \mathrm d t, \\ D & = \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (T\right ) = \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu }^G = \sum _{\nu \in \partial ^- \mathcal {V}} \int _0^T d_{\nu }^G \left (t\right ) \, \mathrm d t. \end{align*}

Alternatively, exactly the same equations hold true with $s_{\nu }$ instead of $s_{\nu }^G$ and $d_{\nu }$ instead of $d_{\nu }^G$ for time-independent boundary conditions.
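The totals of Observation 2.11 are then just sums of such time integrals over the boundary vertices; a minimal sketch with the same sampled-flux convention (one array of flux values per boundary vertex):

import numpy as np

def total_supply_and_demand(t_grid, source_fluxes, sink_fluxes):
    """S = sum_nu int_0^T s_nu dt (<= 0) and D = sum_nu int_0^T d_nu dt (>= 0)."""
    S = sum(np.trapz(s, t_grid) for s in source_fluxes.values())
    D = sum(np.trapz(d, t_grid) for d in sink_fluxes.values())
    return S, D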

Note that dead-end vertices can in general not be considered a special case of sink or source vertices, as they are interior vertices and thus provide a storage opportunity, which is not given for boundary vertices. However, the two definitions coincide in the case of a no-flux boundary condition, i.e. $f_{\nu } = s_{\nu }^G = d_{\nu }^G \equiv 0$ (or alternatively $f_{\nu } \equiv 0$ and $S_{\nu }^G = D_{\nu }^G = 0$ ).

3. Transport type problems on metric graphs

While the previous section was dedicated to the formulation of gas transport models on metric graphs, especially the coupling conditions, we now take the next step towards our goal of understanding such models as gradient flows. For this purpose, we introduce optimal transport type problems on metric graphs in this section. We obtain different variants, depending on the type of coupling and boundary conditions, all of which can be understood as extensions of the dynamic formulation of the classical $p$ -Wasserstein distance. However, only in the case of no boundary vertices do we obtain a metric or distance (for $p=2$ this is shown in [Reference Burger, Humpert and Pietschmann8] and, without mass storage at vertices, in [Reference Erbar, Forkert, Maas and Mugnolo14]). Ultimately, in Section 5, we will show that we can recover the (ISO3) model from a minimizing movement (or JKO) scheme with respect to one of these metrics. However, we start by introducing some further notation.

3.1. Measure spaces on graphs

By $\mathcal {M}_+\left (\Omega \right )$ we denote the set of Borel measures on a metric space $\left (\Omega, d\right )$ with $\Omega \subseteq \mathbb {R}^k$ and $k \in \mathbb {N}$ (and $\Sigma$ being the corresponding $\sigma$ -algebra of Borel sets of $\Omega$ ), which are non-negative and bounded, i.e. we require $0 \leq \mu \left (X\right )$ for all $X \in \Sigma$ and $\mu \left (\Omega \right ) \lt \infty$ . Based on this notation, we choose for each edge $e \in \mathcal {E}$ the mass density $\rho _e$ from the set of non-negative bounded measures on edge $e$ for the time period $\left [0, T\right ]$ with

\begin{equation*} \mathcal {M}_+(e) :\!= \mathcal {M}_+\left (\left [0, L_e\right ] \times \left [0, T\right ]\right ). \end{equation*}

Furthermore, we choose the vertex mass density $\gamma _{\nu }$ for each interior vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ , the source vertex mass $S_{\nu }$ for each $\nu \in \partial ^+ \mathcal {V}$ and the sink vertex mass $D_{\nu }$ for each $\nu \in \partial ^- \mathcal {V}$ , from the set of non-negative bounded measures on vertex $\nu$ for the time period $\left [0, T\right ]$ , so we define

\begin{equation*} \mathcal {M}_+\left (\nu \right ) :\!= \mathcal {M}_+\left (\left \{\nu \right \} \times \left [0, T\right ]\right ). \end{equation*}

For a fixed time $t \in \left [0, T\right ]$ , we define in a similar manner $\mathcal {M}_+^t(e) :\!= \mathcal {M}_+\left (\left [0, L_e\right ]\right )$ and $\mathcal {M}_+^t\left (\nu \right ) :\!= \mathcal {M}_+\left (\left \{\nu \right \}\right ) = \left \{a \delta _{\nu } \, \middle | \, a \in \mathbb {R}_{\geq 0}\right \}$ , for an edge $e \in \mathcal {E}$ and a vertex $\nu \in \mathcal {V}$ , where $\delta _{\nu }$ denotes the Dirac measure at $\nu$ .

For the velocity functions $v_e$ , the mass flux functions $j_e$ of edges $e \in \mathcal {E}$ , and for the vertex excess flux function $f_{\nu }$ of interior vertices $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ , we do not need the non-negativity of the defined measure spaces. Hence, by $\mathcal {M}\left (\Omega \right )$ we denote the set of bounded Borel measures $\mu$ on a metric space $\left (\Omega, d\right )$ , and thus we define

\begin{equation*} \mathcal {M}(e) :\!= \mathcal {M}\left (\left [0, L_e\right ] \times \left [0, T\right ]\right ) \quad \text {and} \quad \mathcal {M}\left (\nu \right ) :\!= \mathcal {M}\left (\left \{\nu \right \} \times \left [0, T\right ]\right ) \end{equation*}

for any edge $e \in \mathcal {E}$ and any vertex $\nu \in \mathcal {V}$ .

With the sets of measures being defined on a single edge or a single vertex of the graph, we can now also introduce coupled measures on a subset of edges or a subset of vertices. These coupled measures are defined on the cartesian product of $\mathcal {M}_+(e)$ , $\mathcal {M}_+^t(e)$ , $\mathcal {M} (e)$ , $\mathcal {M}_+\left (\nu \right )$ , $\mathcal {M}_+^t\left (\nu \right )$ or $\mathcal {M} \left (\nu \right )$ for edges $e \in \mathcal {E}$ and vertices $\nu \in \mathcal {V}$ . A formal definition can be found in Appendix A.

3.2. Formulation of the optimal transport problems

Depending on the existence of boundary vertices and the type of boundary conditions, as well as the coupling conditions at interior vertices, we obtain different formulations of the optimal transport problem on the graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ .

The aim of this subsection is to introduce a variation of dynamical optimal transport problems in the sense of Benamou and Brenier [Reference Benamou and Brenier5], i.e. we consider optimization problems on the graph over curves satisfying the continuity equation together with a version of Kirchhoff’s law as a coupling condition at the vertices, which ensures global mass conservation of our transport. These conditions will serve as side constraints to an optimization problem with a (for now) general cost functional $c$ , which can be thought of as dynamical costs that are chosen depending on the application. We do not yet specify the exact transport costs, as we first focus on the modelling of mass transfer between supply, demand, storage and the network. As we will see in Section 5, by building a minimizing movement scheme using these optimal transport problems as the distance, we will indeed recover the (ISO3) model.

In the case of no boundary vertices and with the classical Kirchhoff’s law as coupling condition (as in [Reference Erbar, Forkert, Maas and Mugnolo14]), the corresponding optimal transport problem can be written as:

\begin{alignat*} {3} && &\inf _{\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, v \in \mathcal {M}\left (\mathcal {E}\right )} \quad c\left (\rho, v\right ) \qquad \text {subject to}\\&& 0 & = \frac {\partial \rho _e}{\partial t} + \frac {\partial }{\partial x}\left (\rho _e v_e\right ) & \forall e \in \mathcal {E}, \, x \in \left [0, L_e\right ], \, t \in \left [0, T\right ] \\ && 0 & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . \left (\rho _e v_e\right ) \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . \left (\rho _e v_e\right ) \right |_{x = 0} \qquad \qquad \quad & \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}, \, t \in \left [0, T\right ] \\ && \left .\rho _e\right |_{t = 0} & = \left (\rho _0\right )_e, \, \left .\rho _e\right |_{t = T} = \left (\rho _T\right )_e & \forall e \in \mathcal {E} \end{alignat*}

If we rewrite the presented optimal transport problem with the mass flux $j_e$ instead of the product $\rho _e v_e$ for each edge $e \in \mathcal {E}$ , we obtain a convex problem with linear side constraints. Based on this form, we obtain the following extensions for different combinations of coupling and boundary conditions.

Definition 3.1 (Optimal transport problem). If there are no boundary vertices ( $\left \lvert \partial \mathcal {V}\right \rvert = 0$ so $\mathcal {V} = \stackrel{\small\circ}{\mathcal {V}}$ ) and thus no boundary conditions, we can formulate the optimal transport problem with coupling conditions based on the generalized Kirchhoff’s law (VF KL) as:

(OT VF KL) \begin{align} &\inf _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M}\left (\mathcal {E}\right ), \\ \gamma \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \, f \in \mathcal {M}\left (\stackrel{\small\circ}{\mathcal {V}}\right )}} \quad c\left (\rho, j, \gamma, f\right ) \qquad \text {subject to}\end{align}
(CE 1) \begin{align} 0 &= \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} & \forall e \in \mathcal {E}, \, x \in \left [0, L_e\right ], \, t \in \left [0, T\right ]\end{align}
(VF KL 1) \begin{align} f_{\nu }\left (t\right ) & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} & \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}, \, t \in \left [0, T\right ]\end{align}
(VF KL 2) \begin{align}\frac {\partial \gamma _{\nu }}{\partial t} & =\ f_{\nu } & \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}, \, t \in \left [0, T\right ]\end{align}
(CE 2) \begin{align}\left .\rho _e\right |_{t = 0} & = \left (\rho _0\right )_e, \, \left .\rho _e\right |_{t = T} = \left (\rho _T\right )_e & \forall e \in \mathcal {E}\end{align}
(VF KL 3) \begin{align} \left .\gamma _{\nu }\right |_{t = 0} & = \left (\gamma _0\right )_{\nu }, \, \left .\gamma _{\nu }\right |_{t = T} = \left (\gamma _T\right )_{\nu } & \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}\end{align}

When using the classical version of Kirchhoff’s law (C KL) at the interior vertices (corresponding to no storage of gas at interior vertices), this simplifies to:

(OT C KL) \begin{align} &\inf _{\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M}\left (\mathcal {E}\right )} \quad c\left (\rho, j\right ) \qquad \text {subject to} \end{align}
(CE 1) \begin{align} 0 & = \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} & \forall e \in \mathcal {E}, \, x \in \left [0, L_e\right ], \, t \in \left [0, T\right ] \end{align}
(C KL 1) \begin{align} 0 & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} & \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}, \, t \in \left [0, T\right ] \end{align}
(CE 2) \begin{align} \left .\rho _e\right |_{t = 0} & = \left (\rho _0\right )_e, \, \left .\rho _e\right |_{t = T} = \left (\rho _T\right )_e & \forall e \in \mathcal {E} \end{align}

The optimal transport problem (OT VF KL) is exactly the same as the model studied in [Reference Burger, Humpert and Pietschmann8], while (OT C KL) was analyzed in [Reference Erbar, Forkert, Maas and Mugnolo14]. In both cases, no boundary vertices are present and thus mass transport into or out of the network is not possible. In view of our application, we hence extend these problems by the following boundary conditions.
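To indicate how (OT C KL) becomes a finite-dimensional convex problem with linear side constraints after discretization, the following sketch assembles the residuals of the discrete continuity equation (CE 1) and of classical Kirchhoff’s law (C KL 1) on a simple space-time grid. The grid layout, the difference stencils and all names are illustrative assumptions and not the primal-dual scheme used later in the paper.

import numpy as np

def discrete_constraint_residuals(rho, j, edges, dt, dx, interior_vertices):
    """rho[name], j[name]: arrays of shape (n_x, n_t) of cell values on each edge."""
    residuals = {}
    for name in edges:
        r, f = rho[name], j[name]
        # (CE 1): forward difference in time, centred difference in space
        residuals[name] = (r[:, 1:] - r[:, :-1]) / dt + np.gradient(f, dx, axis=0)[:, :-1]
    kirchhoff = {}
    for v in interior_vertices:
        inflow = sum(j[name][-1, :] for name, e in edges.items() if e.end == v)
        outflow = sum(j[name][0, :] for name, e in edges.items() if e.start == v)
        kirchhoff[v] = inflow - outflow   # (C KL 1): should vanish for a feasible flux
    return residuals, kirchhoff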

Remark 3.2 (Extension with boundary conditions). If boundary vertices are present ( $\left \lvert \partial \mathcal {V}\right \rvert \gt 0$ so $\mathcal {V} = \partial \mathcal {V} \cup \stackrel{\small\circ}{\mathcal {V}} \supset \stackrel{\small\circ}{\mathcal {V}}$ ), then depending on the type of boundary conditions, different constraints need to be added to the optimal transport problem. For time-dependent boundary conditions, the following constraints need to be included:

(TD BC 1) \begin{align} 0 & = \Bigg ( \left \lvert s_{\nu }^G \right \rvert + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} \Bigg ) - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left [0, T\right ] \end{align}
(TD BC 2) \begin{align} 0 & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} - \Bigg ( d_{\nu }^G + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} \Bigg ) & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left [0, T\right ]\end{align}

Similarly, for time-independent boundary conditions, the following constraints, as well as initial and final conditions need to be included in the optimal transport problem instead:

(TI BC 1) \begin{align} s_{\nu } & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left [0, T\right ] \end{align}
(TI BC 2) \begin{align} d_{\nu } & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0} & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left [0, T\right ] \end{align}
(TI BC 3) \begin{align} \frac {\partial S_{\nu }}{\partial t} & = s_{\nu } & \forall \nu \in \partial ^+ \mathcal {V}, \, t \in \left [0, T\right ] \end{align}
(TI BC 4) \begin{align} \frac {\partial D_{\nu }}{\partial t} & = d_{\nu } & \forall \nu \in \partial ^- \mathcal {V}, \, t \in \left [0, T\right ] \end{align}
(TI BC 5) \begin{align} \left .S_{\nu }\right |_{t = 0} & = \left \lvert S_{\nu }^G \right \rvert, \, \left .S_{\nu }\right |_{t = T} = 0 & \forall \nu \in \partial ^+ \mathcal {V} \end{align}
(TI BC 6) \begin{align} \left .D_{\nu }\right |_{t = 0} & = 0, \, \left .D_{\nu }\right |_{t = T} = D_{\nu }^G & \forall \nu \in \partial ^- \mathcal {V} \end{align}

In order to rule out $s_{\nu }\left (t\right ) \gt 0$ or $d_{\nu }\left (t\right ) \lt 0$ for any $t \in \left [0, T\right ]$ , we require $-s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right )$ and $d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )$ . In addition, for time-independent boundary conditions, the minimization changes to

\begin{equation*} \inf _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M}\left (\mathcal {E}\right ), \\ \gamma \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \, f \in \mathcal {M}\left (\stackrel{\small\circ}{\mathcal {V}}\right ), \\ S, \, -s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right ), \\ D, \, d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )}} c\left (\rho, j, \gamma, f, S, s, D, d\right ), \qquad \text {or} \qquad \inf _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M}\left (\mathcal {E}\right ), \\ S, \, -s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right ), \\ D, \, d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )}} c\left (\rho, j, S, s, D, d\right ), \end{equation*}

depending on the type of coupling conditions at the interior vertices.

Examples of possible (action or) cost functionals will be given in Subsection 4.2, aiming at the kinetic energy necessary for the transport, which is sensible for gas networks and is a typical type of action functional used for dynamic optimal transport.

3.3. Feasibility of the optimal transport problem

Depending on whether we allow storage of gas at interior vertices $\stackrel{\small\circ}{\mathcal {V}}$ or not, and depending on the existence of boundary vertices and the type of boundary conditions, necessary conditions for the feasibility of the optimal transport problem differ slightly.

So far, the continuity equation and Kirchhoff’s law, respectively, only ensure local mass conservation; however, we are naturally interested in analyzing global mass conservation for the entire network.

Definition 3.3 (Total mass). Depending on the type of coupling conditions and whether there are any boundary vertices, the total mass $\Sigma \left (t\right )$ at time $t \in \left [0, T\right ]$ on the graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ is defined as

  • Classical Kirchhoff’s law and no boundary vertices:

    \begin{equation*}\Sigma \left (t\right ) :\!= \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x\end{equation*}
  • Classical Kirchhoff’s law and time-dependent or time-independent boundary conditions:

    \begin{equation*}\Sigma \left (t\right ) :\!= \sum _{\nu \in \partial ^+ \mathcal {V}} S_{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (t\right ) + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x\end{equation*}
  • Generalized Kirchhoff’s law and no boundary vertices:

    \begin{equation*}\Sigma \left (t\right ) :\!= \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (t\right ) + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x\end{equation*}
  • Generalized Kirchhoff’s law and time-dependent or time-independent boundary conditions:

    \begin{equation*}\Sigma \left (t\right ) :\!= \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (t\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} S_{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (t\right ) + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x\end{equation*}

Observation 3.4 (Total mass). The contextual interpretation of the different terms in the total mass $\Sigma \left (t\right )$ is given in Table 1. Moreover, note that for time-dependent boundary conditions, the source vertex mass density $S_{\nu }$ and the sink vertex mass density $D_{\nu }$ at time $t \in \left [0, T\right ]$ can easily be calculated from the given source vertex flux $s_{\nu }^G$ and sink vertex flux $d_{\nu }^G$ via

Table 1. Contributions to the total mass

\begin{equation*} S_{\nu } \left (t\right ) = \left \lvert \int _t^T s_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \right \rvert, \qquad \text {and} \qquad D_{\nu } \left (t\right ) = \int _0^t d_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t}. \end{equation*}

The condition of global mass conservation can thus be written as follows.

Definition 3.5 (Global mass conservation). The global mass conservation condition in a graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ is given by

\begin{equation*} \frac {\partial \Sigma \left (t\right )}{\partial t} = 0 \qquad \forall t \in \left (0, T\right ). \end{equation*}

Remark 3.6 (Global mass conservation). For the measures $\rho _e$ , $\gamma _{\nu }$ , $S_{\nu }$ and $D_{\nu }$ , we will always assume that they are chosen in such a way that the total mass in the network is constant for all $t \in \left [0, T\right ]$ , so

\begin{equation*} \Sigma \left (t\right ) \equiv C \in \mathbb {R}_{\geq 0}. \end{equation*}

Hence, we especially demand that the initial and final conditions fulfill

\begin{equation*} \Sigma \left (0\right ) = \Sigma \left (T\right ) = C, \end{equation*}

in order to obtain a feasible optimal transport problem.
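As a complement to Definitions 3.3 and 3.5, the following sketch evaluates the total mass in the most general case (generalized Kirchhoff’s law with boundary vertices) for densities sampled on a common time grid; checking global mass conservation then amounts to verifying that the returned value is the same constant $C$ at every time index. The data layout is an illustrative assumption.

import numpy as np

def total_mass(k, rho, gamma, S, D, edges, dx):
    """Sigma(t_k): rho[name][:, k] on edges, gamma/S/D are per-vertex time series."""
    edge_mass = sum(np.trapz(rho[name][:, k], dx=dx) for name in edges)
    stored = sum(g[k] for g in gamma.values())                       # interior storage
    boundary = sum(s[k] for s in S.values()) + sum(d[k] for d in D.values())
    return stored + boundary + edge_mass

# Global mass conservation (Definition 3.5) holds if total_mass(k, ...) is
# the same constant C for every time index k.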

The global mass conservation conditions can be used to define feasible upper bounds for the total demand of all sink vertices for the time period $\left [0, T\right ]$ .

Proposition 3.7 (Upper bound for the demand with time-independent boundary conditions). In the case of the generalized Kirchhoff’s law at interior vertices and the existence of boundary vertices with time-independent boundary conditions, the accumulated demand $D = \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (T\right )$ over the time period $\left [0, T\right ]$ is bounded from above by the inequality

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \left \lvert S \right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x \geq D, \end{equation*}

where $\left \lvert S \right \rvert = \sum _{\nu \in \partial ^+ \mathcal {V}} S_{\nu } \left (0\right )$ corresponds to the accumulated supply for the time period $\left [0, T\right ]$ .

Proof. If we compare the total mass $\Sigma \left (t\right )$ for any $t \in \left [0, T\right ]$ to the initial total mass $\Sigma \left (0\right )$ , then we obtain the equation

\begin{align*} & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \underbrace {S_{\nu } \left (0\right )}_{= \, \left \lvert S_{\nu }^G \right \rvert } + \sum _{\nu \in \partial ^- \mathcal {V}} \underbrace {D_{\nu } \left (0\right )}_{= \, 0} + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x = \\ & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (t\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} S_{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (t\right ) + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x, \end{align*}

which can be rewritten as

(GMC) \begin{align} \begin{split} & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \underbrace {\left (\left \lvert S_{\nu }^G \right \rvert - S_{\nu } \left (t\right )\right )}_{= \, \left \lvert \int _0^t s_{\nu } \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \right \rvert } + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x = \\ & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} \underbrace {D_{\nu } \left (t\right )}_{= \, \int _0^t d_{\nu } \left (\tilde {t}\right ) \, \mathrm d \tilde {t}} + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, t\right ) \, \mathrm d x. \end{split} \end{align}

The left side of the global mass conservation equation (GMC) gives the amount of gas stored in the network at time $t = 0$ (in interior vertices or in the pipes), plus the gas that has entered the network through source vertices during the time period $\left [0, t\right ]$ . The right side encodes the amount of gas in the interior vertices or the pipes at time $t$ , together with the gas that has exited the network through sink vertices during the time period $\left [0, t\right ]$ .

For $t = T$ , we obtain from (GMC)

\begin{align*} & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \left \lvert \int _0^T s_{\nu } \left (t\right ) \, \mathrm d t \right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x = \\ & \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (T\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} \int _0^T d_{\nu } \left (t\right ) \, \mathrm d t + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, T\right ) \, \mathrm d x, \end{align*}

which is equivalent to

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \left \lvert S \right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x = \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (T\right ) + D + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, T\right ) \, \mathrm d x. \end{equation*}

By utilizing the non-negativity of the vertex mass density $\gamma$ and of the mass density $\rho$

\begin{equation*} \gamma _{\nu } \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \, \rho _e \in \mathcal {M}_+ \left (\mathcal {E}\right ) \quad \Longrightarrow \quad \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (T\right ), \, \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, T\right ) \, \mathrm d x \geq 0, \end{equation*}

we obtain the upper bound for the accumulated demand of all sink vertices $\nu \in \partial ^- \mathcal {V}$

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \left \lvert S \right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x \geq D. \end{equation*}

Corollary 3.8 (Upper bound for the demand with time-dependent boundary conditions). If the graph has boundary vertices with time-dependent boundary conditions and uses generalized Kirchhoff’s law as coupling condition at interior vertices, the accumulated demand at all times $t \in \left [0, T\right ]$ is bounded above by

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \left \lvert \int _0^t s_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t}\right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x \geq \sum _{\nu \in \partial ^- \mathcal {V}} \int _0^t d_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t}, \end{equation*}

which can be used to test the given source vertex flux $s_{\nu }^G$ and sink vertex flux $d_{\nu }^G$ for plausibility.

Proof. The non-negativity of the vertex mass density $\gamma$ and the mass density $\rho$ on the right side of (GMC)

\begin{equation*} \gamma _{\nu } \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \, \rho _e \in \mathcal {M}_+ \left (\mathcal {E}\right ) \quad \Longrightarrow \quad \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (\tilde {t}\right ), \, \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, \tilde {t}\right ) \, \mathrm d x \geq 0, \end{equation*}

holds true for all $\tilde {t} \in \left [0, T\right ]$ . Therefore, we obtain the inequality

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \left (\left \lvert S_{\nu }^G \right \rvert - S_{\nu } \left (t\right )\right ) + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x \geq \sum _{\nu \in \partial ^- \mathcal {V}} D_{\nu } \left (t\right ). \end{equation*}

Plugging in the definitions of the source vertex mass densities $S_{\nu }^G$ and $S_{\nu }$ , as well as the sink vertex mass densities $D_{\nu }$ , yields

\begin{equation*} \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \gamma _{\nu } \left (0\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} \left \lvert \int _0^t s_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t}\right \rvert + \sum _{e \in \mathcal {E}} \int _0^{L_e} \rho _e \left (x, 0\right ) \, \mathrm d x \geq \sum _{\nu \in \partial ^- \mathcal {V}} \int _0^t d_{\nu }^G \left (\tilde {t}\right ) \, \mathrm d \tilde {t} \end{equation*}

for all $t \in \left [0, T\right ]$ .

The global mass conservation equation (GMC) together with the upper bound for the accumulated demand can be used to test whether given initial, final and boundary conditions lead to an infeasible optimal transport problem. However, note that these tests do not detect instances which, for example, require infeasibly fast velocities $v$ and thereby violate the upper bound on the flow rates needed when passing from (ISO1) to (ISO3).
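For illustration, the necessary condition of Proposition 3.7 translates into a simple numerical test on discretized data. The following minimal Python sketch (function name and data layout are our own choices) checks whether the initially stored mass plus the accumulated supply can possibly cover the accumulated demand; it is only a necessary check and, as noted above, says nothing about velocity bounds.

```python
import numpy as np

def demand_is_plausible(gamma0, rho0, dx, supply0, demand_T):
    """Necessary condition of Proposition 3.7:
    mass stored at t = 0 plus accumulated supply must dominate the demand.

    gamma0   : dict {interior vertex: initial vertex mass gamma_nu(0)}
    rho0     : dict {edge: samples of the initial mass density rho_e(x, 0)}
    dx       : dict {edge: grid spacing on [0, L_e]}
    supply0  : dict {source vertex: S_nu(0) = |S_nu^G|}
    demand_T : dict {sink vertex: D_nu(T) = D_nu^G}
    """
    stored = sum(gamma0.values())
    stored += sum(np.trapz(rho0[e], dx=dx[e]) for e in rho0)  # edge integrals
    supply = sum(supply0.values())
    demand = sum(demand_T.values())
    return stored + supply >= demand
```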

3.4. Strong and weak solutions of the continuity equation constraints

Assuming feasibility of the introduced optimal transport problem, we now want to define strong and weak solutions of the continuity equation constraints (CE 1), (CE 2) combined with the constraints from the coupling conditions and possibly the boundary conditions.

For the introduction of strong solutions, we need a variety of conditions:

  1. (S1) $\forall e \in \mathcal {E}: \left (x, t\right ) \mapsto \rho _e \left (x, t\right )$ continuous on $\left [0, L_e\right ] \times \left [0, T\right ]$, and $\forall e \in \mathcal {E}, \, \forall x \in \left [0, L_e\right ]: t \mapsto \rho _e \left (x, t\right )$ continuously differentiable on $\left (0, T\right )$

  2. (S2) $\forall e \in \mathcal {E}: \left (x, t\right ) \mapsto j_e \left (x, t\right )$ continuous on $\left [0, L_e\right ] \times \left [0, T\right ]$, and $\forall e \in \mathcal {E}, \, \forall t \in \left [0, T\right ]: x \mapsto j_e \left (x, t\right )$ continuously differentiable on $\left (0, L_e\right )$

    1. (S3) i) $\rho \in \mathcal {M}_+ \left (\mathcal {E}\right )$ , $j \in \mathcal {M} \left (\mathcal {E}\right )$ fulfill the constraints (CE 1), (CE 2) and (C KL 1)

    2. ii) $\rho \in \mathcal {M}_+ \left (\mathcal {E}\right )$ , $j \in \mathcal {M} \left (\mathcal {E}\right )$ , $\gamma \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right )$ and $f \in \mathcal {M} \left (\stackrel{\small\circ}{\mathcal {V}}\right )$ fulfill the side constraints (CE 1), (CE 2) and (VF KL 1) - (VF KL 3)

  3. (S4) $\forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: t \mapsto \gamma _{\nu } \left (t\right )$ continuous on $\left [0, T\right ]$ , and continuously differentiable on $\left (0, T\right )$ (which directly implies $\forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: t \mapsto f_{\nu } \left (t\right )$ continuous on $\left (0, T\right )$ )

  4. (S5) $\forall \nu \in \partial ^+ \mathcal {V}: t \mapsto S_{\nu } \left (t\right )$ continuous on $\left [0, T\right ]$ , and continuously differentiable on $\left (0, T\right )$ (which directly implies $\forall \nu \in \partial ^+ \mathcal {V}: t \mapsto s_{\nu } \left (t\right )$ continuous on $\left (0, T\right )$ )

  5. (S6) $\forall \nu \in \partial ^- \mathcal {V}: t \mapsto D_{\nu } \left (t\right )$ continuous on $\left [0, T\right ]$ , and continuously differentiable on $\left (0, T\right )$ (which directly implies $\forall \nu \in \partial ^- \mathcal {V}: t \mapsto d_{\nu } \left (t\right )$ continuous on $\left (0, T\right )$ )

  6. (S7)

    1. i) $S, -s \in \mathcal {M}_+\left (\partial ^+ \mathcal {V}\right )$ and $D, d \in \mathcal {M}_+\left (\partial ^- \mathcal {V}\right )$ fulfill the constraints (TD BC 1), (TD BC 2)

    2. ii) $S, -s \in \mathcal {M}_+\left (\partial ^+ \mathcal {V}\right )$ and $D, d \in \mathcal {M}_+\left (\partial ^- \mathcal {V}\right )$ fulfill the constraints (TI BC 1) - (TI BC 6)

Definition 3.9 (Strong solutions). A strong solution of (CE 1), (CE 2) together with the constraints of the (generalized) Kirchhoff’s law and the constraints of possible boundary conditions, consists of a tuple of functions fulfilling the following conditions:

  • Classical Kirchhoff’s law and no boundary vertices: $\left (\rho, j\right )$ fulfilling (S1), (S2) and (S3) i)

  • Classical Kirchhoff’s law and time-dependent or time-independent boundary conditions: $\left (\rho, j, S, s, D, d\right )$ fulfilling (S1), (S2), (S3) i), (S5), (S6), and (S7) i) or (S7) ii)

  • Generalized Kirchhoff’s law and no boundary vertices: $\left (\rho, j, \gamma, f\right )$ fulfilling (S1), (S2), (S3) ii) and (S4)

  • Generalized Kirchhoff’s law and time-dependent or time-independent boundary conditions: $\left (\rho, j, \gamma, f, S, s, D, d\right )$ fulfilling (S1), (S2), (S3) ii), (S4), (S5), (S6) and (S7) i) or (S7) ii)

Remark 3.10 (Vertex fluxes in strong solutions). The vertex excess flux $f$ can be calculated directly from $j$ via the generalized Kirchhoff’s law at the interior vertices, and since it is not restricted in sign, it can also be omitted from the definition of strong solutions. As the source vertex flux $s$ and the sink vertex flux $d$ need to fulfill non-positivity and non-negativity conditions, respectively, they can only be omitted from the definition of strong solutions if corresponding constraints on the sign of the time derivatives of $S$ and $D$ are included.

For the definition of weak solutions, we need suitable test functions defined on the domain of the graph

\begin{equation*} \Omega _{\mathcal {G}} :\!= \left (\times _{\nu \in \mathcal {V}} \left \{\nu \right \}\right ) \times \left (\times _{e \in \mathcal {E}} \left [0, L_e\right ]\right ). \end{equation*}

Definition 3.11 (Test functions). We call a continuous function $\varphi : \Omega _{\mathcal {G}} \times \left [0, T\right ] \longrightarrow \mathbb {R}$ a test function, if for all edges $e \in \mathcal {E}$ its restriction $\varphi _e :\!=\left . \varphi \right |_{e}$ is continuously differentiable on $\left (0, L_e\right ) \times \left (0, T\right )$ . In addition, for any vertex $\nu \in \mathcal {V}$ at any time $t \in \left [0, T\right ]$ we use the notation $\varphi _{\nu } :\!= \left .\varphi \right |_{\nu }$ , as $\varphi$ is continuous across vertices.

The following conditions for the definitions of weak solutions can be derived from the presented strong solutions. Here, $\mathcal {L}$ denotes the standard $1$ -dimensional Lebesgue measure on $\left [0, T\right ]$ .

  1. (W1) $\rho \in \mathcal {M}_+ \left (\mathcal {E}\right )$ and $j \in \mathcal {M} \left (\mathcal {E}\right )$

  2. (W2) $\rho$ fulfills the constraints (CE 2)

  3. (W3)

    1. i) $j$ fulfills the constraint

      \begin{equation*} \int _0^T \sum _{e \in \mathcal {E}} \int _0^{L_e} \left \lvert\,\, j_e \left (x, t\right ) \right \rvert \, \mathrm {d} x \, \mathrm d t \lt \infty \end{equation*}
    2. ii) $j$ , $s$ and $d$ fulfill the constraint

      \begin{equation*} \int _0^T \Bigg (\sum _{e \in \mathcal {E}} \int _0^{L_e} \left \lvert\,\, j_e \left (x, t\right ) \right \rvert \, \mathrm d x - \sum _{\nu \in \partial ^+ \mathcal {V}} s_{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} d_{\nu } \left (t\right )\Bigg ) \, \mathrm d t \lt \infty \end{equation*}
    3. iii) $j$ and $f$ fulfill the constraint

      \begin{equation*} \int _0^T \Bigg ( \sum _{e \in \mathcal {E}} \int _0^{L_e} \left \lvert\,\, j_e \left (x, t\right ) \right \rvert \, \mathrm d x + \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \left \lvert f_{\nu } \left (t\right ) \right \rvert \Bigg )\, \mathrm d t \lt \infty \end{equation*}
    4. iv) $j$ , $f$ , $s$ and $d$ fulfill the constraint

      \begin{equation*} \int _0^T \Bigg (\sum _{e \in \mathcal {E}} \int _0^{L_e} \left \lvert\, j_e \left (x, t\right ) \right \rvert \, \mathrm d x + \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} \left \lvert f_{\nu } \left (t\right ) \right \rvert - \sum _{\nu \in \partial ^+ \mathcal {V}} s_{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} d_{\nu } \left (t\right )\Bigg ) \, \mathrm d t \lt \infty \end{equation*}

  4. (W4) $\gamma \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right )$ and $f \in \mathcal {M} \left (\stackrel{\small\circ}{\mathcal {V}}\right )$

  5. (W5) $\gamma$ fulfills the constraint (VF KL 3), $\forall \nu \in \stackrel{\small\circ}{\mathcal {V}}$ and for $\mathcal {L}$ -a.e. $t \in \left (0, T\right ): \, \frac {\partial \gamma _{\nu } \left (t\right )}{\partial t} = f_{\nu } \left (t\right )$

  6. (W6) $S, -s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right )$

  7. (W7) $D, d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )$

  8. (W8) $S$ fulfills the constraint (TI BC 5), $\forall \nu \in \partial ^+ \mathcal {V}$ and for $\mathcal {L}$ -a.e. $t \in \left (0, T\right ): \, \frac {\partial S_{\nu } \left (t\right )}{\partial t} = s_{\nu } \left (t\right )$ , $D$ fulfills the constraint (TI BC 6), $\forall \nu \in \partial ^- \mathcal {V}$ and for $\mathcal {L}$ -a.e. $t \in \left (0, T\right ): \, \frac {\partial D_{\nu } \left (t\right )}{\partial t} = d_{\nu } \left (t\right )$

  9. (W9)

    1. i) $\forall$ test functions $\varphi$ and $\mathcal {L}$ -a.e. $t \in \left (0, T\right )$ :

      \begin{align*} & \sum _{e \in \mathcal {E}} \int _0^{L_e} \frac {\partial \rho _e \left (x, t\right )}{\partial t} \varphi _e \left (x, t\right ) \, \mathrm d x = \sum _{e \in \mathcal {E}} \int _0^{L_e} j_e \left (x, t\right ) \frac {\partial \varphi _e \left (x, t\right )}{\partial x} \, \mathrm d x \end{align*}
    2. ii) $\forall$ test functions $\varphi$ and $\mathcal {L}$ -a.e. $t \in \left (0, T\right )$ :

      \begin{align*} & \sum _{e \in \mathcal {E}} \int _0^{L_e} \frac {\partial \rho _e \left (x, t\right )}{\partial t} \varphi _e \left (x, t\right ) \, \mathrm d x + \sum _{\nu \in \partial ^+ \mathcal {V}} s_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} d_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) = \\ & \sum _{e \in \mathcal {E}} \int _0^{L_e} j_e \left (x, t\right ) \frac {\partial \varphi _e \left (x, t\right )}{\partial x} \, \mathrm d x \end{align*}
    3. iii) $\forall$ test functions $\varphi$ and $\mathcal {L}$ -a.e. $t \in \left (0, T\right )$ :

      \begin{equation*} \sum _{e \in \mathcal {E}} \int _0^{L_e} \frac {\partial \rho _e \left (x, t\right )}{\partial t} \varphi _e \left (x, t\right ) \, \mathrm d x + \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} f_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) = \sum _{e \in \mathcal {E}} \int _0^{L_e} j_e \left (x, t\right ) \frac {\partial \varphi _e \left (x, t\right )}{\partial x} \, \mathrm d x \end{equation*}
    4. iv) $\forall$ test functions $\varphi$ and $\mathcal {L}$ -a.e. $t \in \left (0, T\right )$ :

      \begin{align*} & \sum _{e \in \mathcal {E}} \int _0^{L_e} \frac {\partial \rho _e \left (x, t\right )}{\partial t} \varphi _e \left (x, t\right ) \, \mathrm d x + \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} f_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) + \sum _{\nu \in \partial ^+ \mathcal {V}} s_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) \\ &+ \sum _{\nu \in \partial ^- \mathcal {V}} d_{\nu } \left (t\right ) \varphi _{\nu } \left (t\right ) = \sum _{e \in \mathcal {E}} \int _0^{L_e} j_e \left (x, t\right ) \frac {\partial \varphi _e \left (x, t\right )}{\partial x} \, \mathrm d x \end{align*}

Definition 3.12 (Weak solutions). The tuples $\left (\rho, j\right )$ , $\left (\rho, j, S, s, D, d\right )$ , $\left (\rho, j, \gamma, f\right )$ or $\left (\rho, j, \gamma, f, S, s, D, d\right )$ constitute weak solutions of (CE 1), (CE 2) combined with (C KL 1) or (VF KL 1) - (VF KL 3), and possibly also (TD BC 1), (TD BC 2) or (TI BC 1) - (TI BC 6), if they fulfill the following conditions:

  • Classical Kirchhoff’s law and no boundary vertices: $\left (\rho, j\right )$ fulfilling (W1), (W2), (W3) i) and (W9) i)

  • Classical Kirchhoff’s law and time-dependent or time-independent boundary conditions: $\left (\rho, j, S, s, D, d\right )$ fulfilling (W1), (W2), (W3) ii), (W6), (W7), (W8) and (W9) ii)

  • Generalized Kirchhoff’s law and no boundary vertices: $\left (\rho, j, \gamma, f\right )$ fulfilling (W1), (W2), (W3) iii), (W4), (W5) and (W9) iii)

  • Generalized Kirchhoff’s law and time-dependent or time-independent boundary conditions: $\left (\rho, j, \gamma, f, S, s, D, d\right )$ fulfilling (W1), (W2), (W3) iv), (W4), (W5), (W6), (W7), (W8) and (W9) iv)

4. $p$ -Wasserstein distance as cost of transport

In order to find solutions of the optimal transport task on a given gas network, we first need to assign suitable costs to each feasible transport plan. Thus, we will first review different formulations of the classical $p$ -Wasserstein distances and secondly extend those to our case.

4.1. General $p$ -Wasserstein distance

Given a metric space $\left (\Omega, d\right )$ , where we assume the domain $\Omega \subseteq \mathbb {R}^k$ with $k \in \mathbb {N}$ to be bounded, the $p$ -Wasserstein distance can be defined in the form of a static formulation based on the distance function $d$ , or it can also be given in a dynamic formulation using absolutely continuous curves $\mu : \left [0, T\right ] \longrightarrow \mathcal {M}_+ \left (\Omega \right )$ .

In the literature, the study of the $p$ -Wasserstein distance mostly utilizes the distance function $d \left (x, y\right ) :\!= \left \lvert x - y\right \rvert$ , where $\left \lvert \, \cdot \, \right \rvert$ denotes the vector norm in $\mathbb {R}^k$ . The following static formulation of the $p$ -Wasserstein distance could analogously be formulated for a general distance function $d$ ; however, for the corresponding dynamic formulation of the $p$ -Wasserstein distance, the cost needs to be associated with an action, see for example [Reference Cédric29, Chapter 7] for details.

Definition 4.1 ( $p$ -Wasserstein distance [Reference Ambrosio, Gigli and Savaré2]). For two probability measures $\mu _0, \mu _T \in \mathbb {P}\left (\Omega \right )$ , the static formulation of the $p$ -Wasserstein distance for $p \in \left [1, \infty \right )$ is defined as

\begin{equation*} \widetilde {W_p} \left (\mu _0, \mu _T\right ) = \min _{\pi \in \Pi \left (\mu _0, \mu _T\right )} \left \{\int _{\Omega \times \Omega } \left \lvert x - y\right \rvert ^p \, \mathrm d \pi \left (x, y\right )\right \}^{\frac {1}{p}}, \end{equation*}

with $\Pi \left (\mu _0, \mu _T\right )$ being the set of all joint probability distributions $\pi$ on $\Omega \times \Omega$ , with the respective marginals $\mu _0$ and $\mu _T$ , i.e. $\mu _0\left (x\right ) = \int _{\Omega } \pi \left (x, y\right ) \, \mathrm d y$ , and $\mu _T\left (y\right ) = \int _{\Omega } \pi \left (x, y\right ) \, \mathrm d x$ .

Remark 4.2 ( $p$ -Wasserstein distance). As we assume $\Omega$ to be bounded, the finiteness of the $p$ -th moments of $\mu _0$ and $\mu _T$ is not required. However, for non-negative bounded measures $\mu _0, \mu _T \in \mathcal {M}_+\left (\Omega \right )$ instead of probability measures, the $p$ -Wasserstein distance is only able to assign costs to balanced data, i.e. $\mu _0$ and $\mu _T$ need to have the same total mass

\begin{equation*} \int _{\Omega } \mu _0\left (z\right ) \, \mathrm d z = \int _{\Omega } \mu _T\left (z\right ) \, \mathrm d z. \end{equation*}

In their seminal work [Reference Benamou and Brenier5], Benamou and Brenier introduced a dynamical formulation of the $2$ -Wasserstein distance that was also extended to the case $p\neq 2$ . In this formulation the task is to minimise an action functional, which corresponds to the kinetic energy of curves connecting the initial measure $\mu _0$ to the final measure $\mu _T$ . This minimization is subject to the initial and final conditions, $\left .\mu \right |_{t = 0} = \mu _0$ and $\left .\mu \right |_{t = T} = \mu _T$ , as well as the continuity equation $\frac {\partial \mu }{\partial t} + \frac {\partial j}{\partial x} = 0$ and therefore closely resembles the models introduced in Section 2.

Proposition 4.3 ( $p$ -Wasserstein distance). For $\Omega = \mathbb {R}^k$ and two probability measures $\mu_{0}, \mu_{T} \in \mathbb {P}\left (\Omega \right )$ , the dynamic formulation of the $p$ -Wasserstein distance for $p \in \left [1, \infty \right )$ is given by

\begin{equation*} \widetilde {W_p} \left (\mu_{0}, \mu_{T}\right ) = \inf _{\mu _t, v_t} \left \{\int _0^T \int _{\Omega } \left \lvert v_t\left (x\right )\right \rvert ^p \, \mathrm d \mu _t\left (x\right ) \, \mathrm d t\right \}^{\frac {1}{p}}, \end{equation*}

where the infimum runs over curves of measures $\mu _t \in \mathbb {P}\left (\Omega \right )$ and $v_t \in \mathcal {M}(\Omega ^k)$ for $t \in [0,T]$ , which satisfy the continuity equation in the sense of distributions and are such that $\left. \mu \right|_{t = 0} = \mu_0, \left. \mu \right|_{t = T} = \mu_T$ .

Note that, to increase readability, we will always denote the densities of measures with respect to a given reference measure such as the Lebesgue measure by the same symbol as the original measures themselves.

Remark 4.4 (Reference measures). For simplicity, we will assume that in all further formulations $\mu$ is absolutely continuous with respect to the Lebesgue measure. Thus there exists a density $\widetilde {\mu }$ such that $\mathrm d \mu = \widetilde {\mu } \, \mathrm d x$ . Therefore, from now on we will write $\mu \, \mathrm {d} x$ instead of $\mathrm d \mu$ . Note that this transformation works as well for more general reference measures.

The dynamic formulation of the $p$ -Wasserstein distance can also be written in terms of mass density and mass flux (instead of mass density and velocity), resulting in a convex minimization problem with linear side constraints. For this formulation we will write $\mu$ instead of $\rho$ , since later on, the measure $\mu$ will not only consist of the mass density $\rho$ , but also the vertex mass $\gamma$ , and optionally the source vertex mass $S$ and the sink vertex mass $D$ .

Furthermore, we will restrict the set of feasible mass densities to $\mu \in \mathcal {M}_+ \left (\Omega \times \left [0, T\right ]\right )$ and, in order to avoid a positivity constraint for $\mu$ when reformulating $j :\!= v \mu$ as $v = \frac {j}{\mu }$ , we define

(4) \begin{equation} h: \mathbb {R} \times \mathbb {R}_{\geq 0} \longrightarrow \mathbb {R}_{\geq 0} \cup \left \{+\infty \right \} \quad \left (a, b\right ) \mapsto h\left (a, b\right ) = \left \{ \begin{array}{lr} \frac {\left \lvert a \right \rvert ^p}{b^{p - 1}} \quad & b \gt 0, \\ 0 \quad & b = a = 0,\\ +\infty \quad & b = 0, \, a \neq 0. \end{array} \right . \end{equation}
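For concreteness, (4) can be transcribed directly into code; the following Python snippet is a literal implementation of the definition and serves purely as an illustration.

```python
import numpy as np

def h(a, b, p):
    """Action density from (4): |a|^p / b^(p-1), with the conventions
    h(0, 0) = 0 and h(a, 0) = +inf for a != 0."""
    if b > 0:
        return abs(a) ** p / b ** (p - 1)
    return 0.0 if a == 0 else np.inf

# the conventions at b = 0 encode that a non-zero flux requires positive mass
print(h(1.0, 2.0, 2))   # 0.5
print(h(0.0, 0.0, 2))   # 0.0
print(h(1.0, 0.0, 2))   # inf
```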

Applying all these reformulations results in the following dynamic formulation of the $p$ -Wasserstein distance.

Observation 4.5 (Dynamic formulation of the $p$ -Wasserstein distance). Using a reparametrization-in-time argument (see, e.g. [Reference Ambrosio, Gigli and Savaré2, Lemma 1.1.4]), the $p$ -Wasserstein distance for two probability measures $\mu _0, \mu _T \in \mathbb {P} \left (\Omega \right )$ on a convex and compact domain $\Omega$ is given as

(DYN-T) \begin{equation} \widetilde {W}_p^p \left (\mu _0, \mu _T\right ) = \end{equation}
\begin{equation*} \min _{\substack {\mu \in \mathcal {M}_+ \left (\Omega \times \left [0, T\right ]\right ), \\ j \in \mathcal {M} \left (\Omega \times \left [0, T\right ]\right )}} \left \{T^{p - 1} \int _0^T \int _{\Omega } h\left (j\left (x, t\right ), \mu \left (x, t\right )\right ) \, \mathrm d x \, \mathrm d t \, \middle | \, \begin{array}{l} \frac {\partial \mu }{\partial t} + \frac {\partial j}{\partial x} = 0, \\ \left . \mu \right |_{t = 0} = \mu _0, \, \left . \mu \right |_{t = T} = \mu _T \end{array} \right \}. \end{equation*}

Note that this formulation of the $p$ -Wasserstein distance is restricted to physically feasible solutions, as $\widetilde {W}_p^p \left (\mu _0, \mu _T\right ) \lt \infty$ is only possible if for $\mathcal {L}$ -a.e. $x \in \Omega$ and $t \in \left [0, T\right ]$ either

\begin{equation*} \mu \left (x, t\right ) \gt 0 \quad \text {or} \quad \mu \left (x, t\right ) = j \left (x, t\right ) = 0, \end{equation*}

so a non-zero flux is only possible if the mass density is positive. (Here, $\mathcal {L}$ denotes the $\left (k + 1\right )$ -dimensional Lebesgue measure on $\Omega \times \left [0, T\right ] \subseteq \mathbb {R}^{k + 1}$ .)

4.2. $p$ -Wasserstein distance on networks

In order to utilize the $p$ -Wasserstein distance as the cost functional for the transport along an edge $e \in \mathcal {E}$ , we can set $\Omega :\!= \left [0, L_e\right ]$ as well as $k :\!= 1$ , and with the notation of the previously introduced optimal transport problems we can thus write for a given initial mass distribution of gas $\left (\rho _0\right )_e \in \mathbb {P} \left (\left [0, L_e\right ]\right )$ , a given final distribution $\left (\rho _T\right )_e \in \mathbb {P} \left (\left [0, L_e\right ]\right )$ and the mass flux $j_e$ along the edge $e$

\begin{equation*} W_p^p \left (\left (\rho _0\right )_e, \left (\rho _T\right )_e\right ) = \min _{\substack {\rho _e \in \mathcal {M}_+ (e), \\ j_e \in \mathcal {M} (e)}} \left \{T^{p - 1} \int _0^T \int _0^{L_e} h\left (j_e\left (x, t\right ), \rho _e\left (x, t\right )\right ) \, \mathrm d x \, \mathrm d t \, \middle | \, \begin{array}{l} \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} = 0, \\ \left . \rho _e \right |_{t = 0} = \left (\rho _0\right )_e, \, \left . \rho _e \right |_{t = T} = \left (\rho _T\right )_e \end{array} \right \}. \\ \end{equation*}

For a single interior vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ , in the case of generalized Kirchhoff’s law as coupling condition, we could also write the cost of transport from an initial vertex mass density $\left (\gamma _0\right )_{\nu } \in \mathbb {P} \left (\left \{\nu \right \}\right )$ to a final vertex mass density $\left (\gamma _T\right )_{\nu } \in \mathbb {P} \left (\left \{\nu \right \}\right )$ with vertex excess flux $f_{\nu }$ as

\begin{equation*} W_p^p \left (\left (\gamma _0\right )_{\nu }, \left (\gamma _T\right )_{\nu }\right ) =\min _{\substack {\gamma _{\nu } \in \mathcal {M}_+ \left (\nu \right ), \\ f_{\nu } \in \mathcal {M} \left (\nu \right )}} \left \{T^{p - 1} \int _0^T h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \, \mathrm d t \, \middle | \, \begin{array}{l} \frac {\partial \gamma _{\nu }}{\partial t} = f_{\nu }, \\ \left . \gamma _{\nu } \right |_{t = 0} = \left (\gamma _0\right )_{\nu }, \, \left . \gamma _{\nu } \right |_{t = T} = \left (\gamma _T\right )_{\nu } \end{array} \right \}, \end{equation*}

where instead of the standard continuity equation, we have $\frac {\partial \gamma _{\nu }}{\partial t} = f_{\nu }$ to ensure mass conservation in the vertex.

Observation 4.6 ( $p$ -Wasserstein distance at interior vertex). The application of the dynamic formulation of the $p$ -Wasserstein distance to an interior vertex $\nu \in \stackrel{\small\circ}{\mathcal {V}}$ is not particularly interesting as $\left (\gamma _0\right )_{\nu }, \left (\gamma _T\right )_{\nu } \in \mathbb {P} \left (\left \{\nu \right \}\right )$ directly implies

\begin{equation*} \left (\gamma _0\right )_{\nu } = \left (\gamma _T\right )_{\nu } = \delta _{\nu } \end{equation*}

and as $h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \geq 0$ for all $\gamma _{\nu } \in \mathcal {M}_+ \left (\left \{\nu \right \} \times \left [0, T\right ]\right )$ and $f_{\nu } \in \mathcal {M} \left (\left \{\nu \right \} \times \left [0, T\right ]\right )$ , the unique strong solution of the dynamic formulation of the $p$ -Wasserstein distance is given by

\begin{equation*} \left (\gamma _0\right )_{\nu } = \left (\gamma _T\right )_{\nu } = \gamma _{\nu } \left (t\right ) \equiv \delta _{\nu } \qquad \text {and} \qquad f_{\nu } \left (t\right ) \equiv 0, \end{equation*}

because $f_{\nu } \left (t\right ) \neq 0$ for some $t \in \left [0, T\right ]$ would lead to

\begin{equation*} h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \in \mathbb {R}_{\gt 0} \, \text {for} \, \gamma _{\nu }\left (t\right ) \gt 0 \qquad \text {and} \qquad h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) = \infty \, \text {for} \, \gamma _{\nu }\left (t\right ) = 0. \end{equation*}

The continuity of $t \mapsto \gamma _{\nu } \left (t\right )$ and $t \mapsto f_{\nu } \left (t\right )$ would thus lead to $T^{p - 1} \int _0^T h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \, \mathrm d t \gt 0$ if there exists any $t \in \left [0, T\right ]$ with $f_{\nu } \left (t\right ) \neq 0$ .

By extending the previous remarks about the dynamic formulation of the $p$ -Wasserstein distance on a single edge or an interior vertex, we can define the $p$ -Wasserstein metric on general oriented graphs modelling network structures.

Definition 4.7 ( $p$ -Wasserstein distance on networks without boundary vertices). For an oriented graph modelling a network, the dynamic formulation of the $p$ -Wasserstein distance, depending on the coupling condition at interior vertices, is given by

  • Classical Kirchhoff’s law and no boundary vertices, with $\mu _0 :\!= \rho _0$ and $\mu _T :\!= \rho _T$ :

    \begin{align*} & W_p^p \left (\mu _0, \mu _T\right ) = \min _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \\ j \in \mathcal {M} \left (\mathcal {E}\right )}} \left \{T^{p - 1} \int _0^T \sum _{e \in \mathcal {E}} \left (\int _0^{L_e} h\left (j_e\left (x, t\right ), \rho _e\left (x, t\right )\right ) \, \mathrm d x\right ) \, \mathrm d t \, \left .\vphantom {\begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \mathcal {V} : \, 3 \, \left .ii\right ), \, \left .4\right ), \, \left .5\right ) \end{array}}\right | {\begin{array}{l} \forall e \in \mathcal {E}: \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \mathcal {V}: \, \left .3\right ) \, \left .i\right ) \end{array}} \right \} \end{align*}
    In [Reference Erbar, Forkert, Maas and Mugnolo14], this case for $p = 2$ is considered, including a proof that the $2$ -Wasserstein distance with classical Kirchhoff’s law and no boundary vertices does indeed define a metric on a given metric graph.
  • Generalized Kirchhoff’s law and no boundary vertices, with $\mu _0 :\!= \left (\rho _0, \gamma _0\right )$ and $\mu _T :\!= \left (\rho _T, \gamma _T\right )$ :

    \begin{align*} W_p^p \left (\mu _0, \mu _T\right ) = \min _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M} \left (\mathcal {E}\right ), \\ \gamma \in \mathcal {M}_+ \left (\mathcal {V}\right ), \, f \in \mathcal {M} \left (\mathcal {V}\right )}} &\left \{ T^{p - 1} \int _0^T \sum _{e \in \mathcal {E}} \left (\int _0^{L_e} h\left (j_e\left (x, t\right ), \rho _e\left (x, t\right )\right ) \, \mathrm d x\right ) \right .\\ & \left .+ \sum _{\nu \in \mathcal {V}} h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \, \mathrm d t \left .\vphantom {\begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \mathcal {V} : \, 3 \, \left .ii\right ), \, \left .4\right ), \, \left .5\right ) \end{array}}\right | \, {\begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \mathcal {V} : \, 3 \, \left .ii\right ), \, \left .4\right ), \, \left .5\right ) \end{array}} \right \} \end{align*}
    This case with $p = 2$ is analysed in [Reference Burger, Humpert and Pietschmann8], including a proof that the $2$ -Wasserstein distance with generalized Kirchhoff’s law and no boundary vertices does define a metric on a given metric graph.

The respective side constraints are given by

\begin{align*} \begin{array}{ll} \left .1\right )\,\, \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} = 0 &\qquad \qquad \left .4\right ) \frac {\partial \gamma _{\nu }}{\partial t} = f_{\nu }\\ \left .2\right )\,\, \left . \rho _e \right |_{t = 0} = \left (\rho _0\right )_e, \, \left . \rho _e \right |_{t = T} = \left (\rho _T\right )_e &\qquad \qquad \left .5\right )\,\, \left . \gamma _{\nu } \right |_{t = 0} = \left (\gamma _0\right )_{\nu }, \, \left . \gamma _{\nu } \right |_{t = T} = \left (\gamma _T\right )_{\nu }\\ 3 \left .i\right )\,\, 0 = \sum _{e \in \mathcal {E}: \, \delta ^E(e) = \nu } {\left . j_e \right |}_{x = L_e} - \sum _{e \in \mathcal {E}: \, \delta ^S(e) = \nu } {\left . j_e \right |}_{x = 0}&\\ 3 \left .ii\right )\,\, f_{\nu } = \sum _{e \in \mathcal {E}: \, \delta ^E(e) = \nu } {\left . j_e \right |}_{x = L_e} - \sum _{e \in \mathcal {E}: \, \delta ^S(e) = \nu } {\left . j_e \right |}_{x = 0}&\\ \end{array} \end{align*}
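For illustration, the coupling constraints 3 i) and 3 ii) amount to simple bookkeeping over the oriented edge list. The following minimal Python sketch (data structures and names are our own choices) computes the vertex excess flux $f_{\nu }$ at a fixed time from the endpoint values of the edge fluxes; classical Kirchhoff’s law corresponds to $f_{\nu } = 0$ at every interior vertex.

```python
from collections import defaultdict

def vertex_excess_flux(edges, j_at_0, j_at_L):
    """f_nu = sum_{e: delta^E(e)=nu} j_e|_{x=L_e} - sum_{e: delta^S(e)=nu} j_e|_{x=0}.

    edges  : dict {edge: (start vertex delta^S(e), end vertex delta^E(e))}
    j_at_0 : dict {edge: flux value j_e|_{x=0} at a fixed time t}
    j_at_L : dict {edge: flux value j_e|_{x=L_e} at the same time t}
    """
    f = defaultdict(float)
    for e, (start, end) in edges.items():
        f[end] += j_at_L[e]    # incoming contribution at the end vertex
        f[start] -= j_at_0[e]  # outgoing contribution at the start vertex
    return dict(f)

# usage on a small Y-shaped network: e1 into nu2, e2 and e3 out of nu2
edges = {"e1": ("nu1", "nu2"), "e2": ("nu2", "nu3"), "e3": ("nu2", "nu4")}
f = vertex_excess_flux(edges,
                       j_at_0={"e1": 1.0, "e2": 0.5, "e3": 0.5},
                       j_at_L={"e1": 1.0, "e2": 0.5, "e3": 0.5})
print(f["nu2"])  # 0.0 -> classical Kirchhoff's law holds at nu2
```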

This definition of the $p$ -Wasserstein distance allows for a specific type of unbalanced optimal transport, as mass can be moved from edges (the network) to vertices (storage opportunity), and vice versa, for additional costs, i.e. kinetic energy. In the literature, this is known as the Wasserstein–Fisher–Rao metric or the Hellinger–Kantorovich distance (cf. [Reference Monsaingeon24]). The concept of unbalanced optimal transport can be further generalized to also include boundary vertices and therefore the possibility to move gas from interior vertices and edges to boundary vertices (supply and demand).

Remark 4.8 ( $p$ -Wasserstein distance on networks with boundary vertices). For an oriented graph with boundary vertices, the dynamic formulation of the $p$ -Wasserstein distance, depending on the coupling condition at interior vertices, is given by

  • Classical Kirchhoff’s law and time-dependent or time-independent boundary conditions, with $\mu _0 :\!= \left (\rho _0, S_0, D_0\right )$ and $\mu _T :\!= \left (\rho _T, S_T, D_T\right )$ :

    \begin{align*} & W_p^p \left (\mu _0, \mu _T\right ) = \min _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M} \left (\mathcal {E}\right ), \\ S, -s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right ), \\ D, d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )}} \left \{T^{p - 1} \int _0^T \sum _{e \in \mathcal {E}} \left (\int _0^{L_e} h\left (j_e\left (x, t\right ), \rho _e\left (x, t\right )\right ) \, \mathrm d x\right ) \right .\\ &\left . + \sum _{\nu \in \partial ^+ \mathcal {V}} h_s\left (s_{\nu }\left (t\right ), S_{\nu }\left (t\right )\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} h_d\left (d_{\nu }\left (t\right ), D_{\nu }\left (t\right )\right ) \, \mathrm d t \, \left .\vphantom {\begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: \, 3 \, \left .i\right ) \\ \forall \nu \in \partial ^+ \mathcal {V}: \, \left .6\right ), \, \left .7\right ), \, \left .8\right ) \\ \forall \nu \in \partial ^- \mathcal {V} : \, \left .9\right ), \, \left .10\right ), \, \left .11\right ) \end{array}}\right | \, \begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: \, 3 \, \left .i\right ) \\ \forall \nu \in \partial ^+ \mathcal {V}: \, \left .6\right ), \, \left .7\right ), \, \left .8\right ) \\ \forall \nu \in \partial ^- \mathcal {V} : \, \left .9\right ), \, \left .10\right ), \, \left .11\right ) \end{array} \right \} . \end{align*}
  • Generalized Kirchhoff’s law and time-dependent or time-independent boundary conditions, with $\mu _0 :\!= \left (\rho _0, \gamma _0, S_0, D_0\right )$ and $\mu _T :\!= \left (\rho _T, \gamma _T, S_T, D_T\right )$ :

    \begin{align*} & W_p^p \left (\mu _0, \mu _T\right ) = \\ & \min _{\substack {\rho \in \mathcal {M}_+ \left (\mathcal {E}\right ), \, j \in \mathcal {M} \left (\mathcal {E}\right ), \\ \gamma \in \mathcal {M}_+ \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \, f \in \mathcal {M} \left (\stackrel{\small\circ}{\mathcal {V}}\right ), \\ S, -s \in \mathcal {M}_+ \left (\partial ^+ \mathcal {V}\right ), \\ D, d \in \mathcal {M}_+ \left (\partial ^- \mathcal {V}\right )}} \left \{T^{p - 1} \int _0^T \sum _{e \in \mathcal {E}} \left (\int _0^{L_e} h\left (j_e\left (x, t\right ), \rho _e\left (x, t\right )\right ) \, \mathrm d x\right ) + \sum _{\nu \in \stackrel{\small\circ}{\mathcal {V}}} h\left (f_{\nu }\left (t\right ), \gamma _{\nu }\left (t\right )\right ) \right .\\ &\left . + \sum _{\nu \in \partial ^+ \mathcal {V}} h_s\left (s_{\nu }\left (t\right ), S_{\nu }\left (t\right ),t\right ) + \sum _{\nu \in \partial ^- \mathcal {V}} h_d\left (d_{\nu }\left (t\right ), D_{\nu }\left (t\right ),t\right ) \, \mathrm d t \, \left .\vphantom {\begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: \, 3 \, \left .i\right ) \\ \forall \nu \in \partial ^+ \mathcal {V}: \, \left .6\right ), \, \left .7\right ), \, \left .8\right ) \\ \forall \nu \in \partial ^- \mathcal {V} : \, \left .9\right ), \, \left .10\right ), \, \left .11\right ) \end{array}}\right | \, \begin{array}{l} \forall e \in \mathcal {E} : \, \left .1\right ), \, \left .2\right ) \\ \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}: \, 3 \, \left .ii\right ), \, \left .4\right ), \, \left .5\right ) \\ \forall \nu \in \partial ^+ \mathcal {V}: \, \left .6\right ), \, \left .7\right ), \, \left .8\right ) \\ \forall \nu \in \partial ^- \mathcal {V} : \, \left .9\right ), \, \left .10\right ), \, \left .11\right ) \end{array} \right \}, \end{align*}

The function $h$ is defined in (4). As for the supply and demand contributions, we allow for different costs, $h_s$ and $h_d$ , that may also depend on time and have to be chosen depending on the application. For example, they could include effects of changing gas prices. Moreover, the additional constraints are given by

\begin{align*} \begin{array}{ll} \left .6\right )\,\, s_{\nu } = \sum _{e \in \mathcal {E}: \, \delta ^E(e) = \nu } \left . j_e \right |_{x = L_e} - \sum _{e \in \mathcal {E}: \, \delta ^S(e) = \nu } \left . j_e \right |_{x = 0} & \qquad \qquad\left .9\right )\,\, d_{\nu } = \sum _{e \in \mathcal {E}: \, \delta ^E(e) = \nu } \left . j_e \right |_{x = L_e} - \sum _{e \in \mathcal {E}: \, \delta ^S(e) = \nu } \left . j_e \right |_{x = 0}\\ \left .7\right )\,\, \frac {\partial S_{\nu }}{\partial t} = s_{\nu }&\qquad \qquad \left .10\right )\,\, \frac {\partial D_{\nu }}{\partial t} = d_{\nu }\\ \left .8\right )\,\, \left . S_{\nu } \right |_{t = 0} = \left \lvert S_{\nu }^G \right \rvert, \, \left . S_{\nu } \right |_{t = T} = 0 &\qquad \qquad \left .11\right )\,\, \left . D_{\nu } \right |_{t = 0} = 0, \, \left . D_{\nu } \right |_{t = T} = D_{\nu }^G\\ \end{array} \end{align*}

Remark 4.9 (Absolutely continuous curves). Based on the work of [Reference Ambrosio, Gigli and Savaré2], the modelling of gas networks and the corresponding analysis of the $p$ -Wasserstein metric on metric graphs can be extended by the study of absolutely continuous curves. In [Reference Erbar, Forkert, Maas and Mugnolo14], this has already been done for the case of $p = 2$ , classical Kirchhoff’s law and no boundary vertices, and we postpone a generalization of the presented results to future work.

5. $p$ -Wasserstein gradient flows

Our next goal is to study, on a formal level, gradient flows with respect to the $p$ -Wasserstein distances introduced previously. Our starting point is a time-discrete approximation using the minimizing movement scheme

(5) \begin{equation} \mu ^\tau _{\left (l + 1\right ) \tau } = \text {arg}\min _{\mu } \left ( \frac {1}{p \tau ^{p - 1}} W_p^p \left (\mu, \mu ^{\tau }_{l \tau }\right ) + E \left (\mu \right ) \right ), \end{equation}

for some $\tau \gt 0$ . In the metric setting (cf. [Reference Ambrosio, Gigli and Savaré2] and [Reference Agueh1] for the case $p\neq 2$ ), it can be shown that as $\tau \rightarrow 0$ , an appropriate interpolation of minimizers converges to a solution of the respective gradient flow.

Here, we will instead derive optimality conditions for $\tau \gt 0$ to obtain the general form of the resulting gradient flows. In a second step, we will identify the energy that recovers the (ISO3) model, interestingly for $p=3$ . We emphasize that this approach automatically yields additional coupling conditions for the Kantorovich potentials, which are necessary to obtain a well-posed optimality system.

5.1. $p$ -Wasserstein gradient flow on networks

In the following, let us derive a more concrete structure for Wasserstein gradient flows on networks (for the case of classical Kirchhoff’s law at interior vertices and no boundary vertices). To this end, we consider energies of the natural form

\begin{equation*} E(\mu ) = \sum _{e\in \mathcal {E}} E_e \left (\rho _e\right ), \end{equation*}

i.e. an additive composition of functionals on each edge $e \in \mathcal {E}$ .

With this choice, our next goal is to derive optimality conditions for (5). A formal derivation can be obtained from assuming the existence of a saddle-point of the associated Lagrange functional, whose first variations can be set to zero. We first note that

\begin{equation*} \frac {1}{p \tau ^{p-1}}W_p^p \left (\left (\rho _t\right )_e, \left (\rho _{t+\tau }\right )_e\right ) = \min _{\rho _e, j_e } \max _{\phi _e} \sum _{e \in \mathcal {E}} \int _t^{t+\tau } \int _0^{L_e} \left ( \frac {\left \lvert j_e \right \rvert ^p}{p \rho _e^{p-1}} + \frac {\partial \rho _e}{\partial t} \phi _e + \frac {\partial j_e}{\partial x} \phi _e \right ) \, \mathrm d x \, \mathrm d t \end{equation*}

for $\tau \gt 0$ , $t \in \left [0, T - \tau \right ]$ and test functions $\phi _e$ , which, when chosen as maximizing arguments, play the role of Lagrange multipliers. Here, the minimization is carried out among those $\rho _e, j_e$ that fulfill the given initial and terminal values and satisfy (9). Thus, the Lagrange functional for (5) is given as

\begin{align*} L(\rho _e, j_e, \phi _e) = \sum _{e \in \mathcal {E}} \int _t^{t+\tau } \int _0^{L_e} \left ( \frac {\left \lvert j_e \right \rvert ^p}{p \rho _e^{p-1}} + \frac {\partial \rho _e}{\partial t} \phi _e + \frac {\partial j_e}{\partial x} \phi _e \right ) \, \mathrm d x \, \mathrm d t + \sum _{e\in \mathcal {E}} E_e \left (\rho _e\right ). \end{align*}

Then, the variation with respect to $j_e$ in direction $k_e$ yields

\begin{equation*} \sum _{e \in \mathcal {E}} \int _t^{t+\tau } \int _0^{L_e} \left ( \frac {j_e \left \lvert j_e\right \rvert ^{p - 2}}{\rho _e^{p-1}} k_e + \frac {\partial k_e}{\partial x} \phi _e \right ) \, \mathrm d x \, \mathrm d t = 0. \end{equation*}

Now, let us integrate by parts and use that each feasible variation $k_e$ still needs to satisfy Kirchhoff’s law, which yields

\begin{align*} 0 =& \sum _{\nu \in \mathcal {V}} \left (\sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . (k_e \phi _e) \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . (k_e \phi _e) \right |_{x = 0}\right ) + \\& \sum _{e \in \mathcal {E}}\int _t^{t+\tau } \int _0^{L_e} \left ( \frac {j_e |j_e|^{p-2}}{ \rho _e^{p-1}} -\frac {\partial \phi _e}{\partial x} \right ) k_e\, \mathrm d x \, \mathrm d t \\ =& \sum _{\nu \in \mathcal {V}} \left (\sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . (k_e (\phi _e-\Phi _\nu )) \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . (k_e (\phi _e-\Phi _\nu )) \right |_{x = 0}\right ) + \\& \sum _{e \in \mathcal {E}}\int _t^{t+\tau } \int _0^{L_e} \left ( \frac {j_e |j_e|^{p-2}}{ \rho _e^{p-1}} -\frac {\partial \phi _e}{\partial x} \right ) k_e\, \mathrm d x \, \mathrm d t \end{align*}

with the nodal mean

\begin{equation*} \Phi _{\nu } :\!= \frac {1}{\text {deg}(\nu )} \left (\sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \phi _e + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \phi _e \right ), \end{equation*}

and the degree function

\begin{equation*} \text {deg} : \mathcal {V} \longrightarrow \mathbb {N}_0 \quad \nu \mapsto \text {deg}(\nu ) :\!= \left \lvert \left \{e \in \mathcal {E} \, \middle | \, \nu \in e\right \}\right \rvert, \end{equation*}

where for a connected graph it holds true that $\text {deg}\left (\nu \right ) \geq 1$ for all vertices $\nu \in \mathcal {V}$ .
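For illustration, both $\text {deg}(\nu )$ and the nodal mean $\Phi _{\nu }$ can be computed directly from the oriented edge list. In the minimal Python sketch below (data layout our own choice), each $\phi _e$ is evaluated at the endpoint incident to $\nu$, which is the natural discrete reading of the formula above.

```python
from collections import defaultdict

def degree_and_nodal_mean(edges, phi_at_0, phi_at_L):
    """Return deg(nu) and the nodal mean Phi_nu of the dual variables phi_e.

    edges    : dict {edge: (delta^S(e), delta^E(e))}
    phi_at_0 : dict {edge: value of phi_e at x = 0}
    phi_at_L : dict {edge: value of phi_e at x = L_e}
    """
    deg = defaultdict(int)
    acc = defaultdict(float)
    for e, (start, end) in edges.items():
        deg[start] += 1
        acc[start] += phi_at_0[e]
        deg[end] += 1
        acc[end] += phi_at_L[e]
    return dict(deg), {nu: acc[nu] / deg[nu] for nu in deg}
```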

Choosing arbitrary fluxes $k_e$ with compact support inside a single edge immediately yields

\begin{equation*} j_e \left \lvert j_e \right \rvert ^{p - 2} = \rho _e^{p - 1} \frac {\partial \phi _e}{\partial x} \end{equation*}

for each $e \in \mathcal {E}$ . Choosing further arbitrary $k_e$ with support only close to a single vertex $\nu \in \mathcal {V}$ (up to satisfying (9)), we see that further the condition

\begin{equation*} \phi _e = \Phi _\nu \qquad \forall e \in \mathcal {E} \, \text {with} \, \delta ^S(e) = \nu \text { or } \delta ^E(e) = \nu \end{equation*}

holds for the vertex $\nu$ . For dead-end vertices, i.e. deg $\left (\nu \right ) = 1$ , this yields no additional conditions, while for all other vertices this implies continuity of the dual variable.

To relate the dual variable to the energy, we can compute the variation of $L$ with respect to $\left (\rho _{t+\tau }\right )_e$ . As in the metric Wasserstein case, cf. [Reference Ambrosio, Gigli and Savaré2, Reference Santambrogio28], it turns out that the optimality condition becomes

\begin{equation*} \phi _e \left (t+\tau \right ) + E_e'\left (\rho _e\left (t+\tau \right )\right ) = 0. \end{equation*}

In the limit $\tau \downarrow 0$ , we hence obtain the following interface conditions: There exists a nodal function $\Phi : \mathcal {V} \rightarrow \mathbb {R}$ such that

(6) \begin{equation} E_e'(\rho _e)=\Phi _\nu \qquad \forall e \in \mathcal {E} \, \text {with} \, \delta ^S(e) = \nu \text { or } \delta ^E(e) = \nu, \end{equation}

for vertices $\nu \in \mathcal {V}$ . Since $\Phi _\nu$ is not a variable of physical interest, this condition effectively means the continuity of the energy variation across nodes.

To summarize, we obtain that, similar to the case of Wasserstein gradient flows in domains, the gradient flow equations are given by

(7) \begin{align} \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} & = 0, \end{align}
(8) \begin{align} j_e \left \lvert j_e \right \rvert ^{p - 2} & = - \rho _e^{p-1} \frac {\partial }{\partial x} \left (E'\left (\rho _e\right )\right ), \end{align}

together with the classical Kirchhoff condition for all $\nu \in \mathcal {V}$

(9) \begin{equation} 0 = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} \left . j_e \right |_{x = L_e} - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} \left . j_e \right |_{x = 0}, \end{equation}

and the coupling conditions (6).

5.2. The (ISO3) model for gas networks

In the following, we turn our attention to the (ISO3) model of transport in gas networks, which is derived and embedded in a full model hierarchy in [Reference Domschke, Hiller, Lang, Mehrmann, Morandin and Tischendorf11] (see also Section 2.1). Based on mass density $\rho _e$ and velocity $v_e$ functions defined on edges, the (ISO3) model is given by

\begin{align*} \frac {\partial \rho _e}{\partial t} + \frac {\partial }{\partial x} ( \rho _e v_e) & = 0, \\ \frac {\lambda _e}{2 \mathcal {D}_e} \rho _e v_e \left \lvert v_e \right \rvert & = - \frac {\partial }{\partial x} \left (p_e\left (\rho _e\right )\right ) - g \rho _e \sin \left (\omega _e\right ), \end{align*}

together with the classical Kirchhoff’s condition (C KL) on the vertices of the network.

We start by reformulating the model in terms of the mass density variable $\rho _e$ and the flux variable $j_e$ as:

\begin{align*} \frac {\partial \rho _e}{\partial t} + \frac {\partial j_e}{\partial x} & = 0 \\ j_e \left \lvert j_e \right \rvert & = - \frac {2 \mathcal {D}_e}{\lambda _e} \rho _e \frac {\partial }{\partial x} \left (p_e\left (\rho _e\right )\right ) - \rho _e^2 \frac {2 \mathcal {D}_e g}{\lambda _e} \sin \left (\omega _e\right ) \\ &= -\left ( \rho _e \frac {2 \mathcal {D}_e}{\lambda _e }\frac {\partial }{\partial x} \left (p_e\left (\rho _e\right )\right ) + \rho _e^2\frac {2 \mathcal {D}_e g}{\lambda _e} \sin \left (\omega _e\right )\right ) \end{align*}

Comparing to the general form of the Wasserstein gradient flow (7),(8) for $p = 3$ , we obtain the following expression for the derivative of the driving energy

\begin{equation*} \frac {\partial }{\partial x} E'(\mu )|_e = \frac {2 \mathcal {D}_e}{\lambda _e \rho _e }\frac {\partial }{\partial x} \left (p_e\left (\rho _e\right )\right ) + \frac {2 \mathcal {D}_e g}{\lambda _e} \sin \left (\omega _e\right ) \end{equation*}

on each edge $e \in \mathcal {E}$ , for some energy still to be determined for the (ISO3) model.

As usual, we need to introduce an entropy functional, which – due to the dependence on diameter $\mathcal {D}_e$ and friction coefficient $\lambda _e$ – may vary with the edge. The derivative of the entropy is defined as

\begin{equation*} F_e''(s) :\!= \frac {2 \mathcal {D}_e}{\lambda _e} \frac {p_e'\left (s\right )}s, \end{equation*}

and we define $F_e$ as a second primitive with $F_e \left (0\right ) = 0$ . Then we see that for $\rho _e \gt 0$

\begin{equation*} \frac {\partial }{\partial x} F_e'\left (\rho _e\right ) = \frac {2 \mathcal {D}_e}{\lambda _e \rho _e }\frac {\partial }{\partial x} \left (p_e\left (\rho _e\right )\right ) \end{equation*}

holds. Note that $F_e''(s) \sim \frac {p_e'(s)}s$ is a standard entropy–pressure relation arising in the overdamped limit of fluid models (cf. e.g. [Reference Giesselmann, Lattanzio and Tzavaras16]). In the case of a linear pressure $p_e(s) \sim s$ we arrive at the well-known log-entropy $F_e(s) = s \log s - s$ , but other choices of the pressure such as polynomial laws are possible as well. The essential property is convexity of the entropy, which is equivalent to $p_e$ being monotone.
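For instance, assuming the isothermal pressure law $p_e\left (s\right ) = c^2 s$ with a constant speed of sound $c$ (a notational choice made here only for illustration), integrating twice with $F_e\left (0\right ) = 0$ gives

\begin{equation*} F_e''\left (s\right ) = \frac {2 \mathcal {D}_e}{\lambda _e} \frac {c^2}{s}, \qquad F_e'\left (s\right ) = \frac {2 \mathcal {D}_e c^2}{\lambda _e} \log s, \qquad F_e\left (s\right ) = \frac {2 \mathcal {D}_e c^2}{\lambda _e} \left (s \log s - s\right ), \end{equation*}

up to an affine term in $s$, whose slope can be absorbed into the integration constants $d_e$ introduced below.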

With the short-hand notation

\begin{equation*} c_e:\!=\frac {2 \mathcal {D}_e g}{\lambda _e} \sin \left (\omega _e\right ), \end{equation*}

we obtain the energy functional

\begin{equation*} E(\mu ) = \sum _ e\int _0^{L_e} \left (F_e\left (\rho _e\right ) + c_e x \rho _e + d_e \rho _e\right ) \, \mathrm d x. \end{equation*}

Note that $d_e$ is an arbitrary integration constant, which, however, impacts the gradient flow as soon as there is more than one edge, because the mass per edge is then not necessarily conserved. Since a global shift by a constant does not affect the result, due to the overall mass conservation on the network, we may impose the condition

(10) \begin{equation} \sum _{e \in \mathcal {E}} d_e = 0. \end{equation}

As the derivation above shows, the gradient flow formulation immediately yields the appropriate interface conditions (6), in addition to the classical Kirchhoff’s law for (ISO3), which are given by

\begin{align*} F_e'(\rho _e) + d_e&=\Phi _\nu \qquad \forall e \in \mathcal {E} \, \text {with} \, \delta ^S(e) = \nu, \\ F_e'(\rho _e)+ c_e L_e + d_e \ &=\Phi _\nu \qquad \forall e \in \mathcal {E} \, \text {with} \, \delta ^E(e) = \nu, \end{align*}

for vertices $\nu \in \mathcal {V}$ .

The interface condition effectively means that the potential $F_e'(\rho _e) + c_e x + d_e$ is continuous across vertices. This may seem quite arbitrary, since a potential is usually specified only up to a constant. However, we have to understand the potential rather as a global quantity on the network, so clearly changes on single edges affect the global structure of the potential. Since there is quite some freedom to determine the constants $d_e$ , we could in turn specify certain interface conditions and determine the associated potential. Again, it is reasonable that different jump conditions on the vertices have the same effect as changing a potential.

5.3. Vanishing diffusion limit

As mentioned above, there are several potentials, and thus interface conditions, that lead to gradient flows. However, we may want to specify a potential such that, in the limit of vanishing diffusion ( $F_e=0$ ), we obtain a consistent system. The problem of vanishing diffusion has been investigated for linear problems on networks in [Reference Egger and Philippi13] with a focus on incompressibility rather than a gradient structure.

Consistency of the vector field with the vanishing diffusion limit means

(11) \begin{align} d_e&=\Phi _\nu & \forall e \in \mathcal {E} \, \text {with} \, \delta ^S(e) = \nu, \end{align}
(12) \begin{align} c_e L_e + d_e &=\Phi _\nu & \forall e \in \mathcal {E} \, \text {with} \, \delta ^E(e) = \nu, \end{align}

for $\nu \in \mathcal {V}$ , which – together with (10) – can be considered a system of linear equations for the variables $d_e$ , or can even be extended to a linear system for the variables $d_e$ and $\Phi _\nu$ . Hence, it is natural to investigate the (unique) solvability of this system.

The number of unknowns is $\left \lvert \mathcal {E}\right \rvert + \left \lvert \mathcal {V}\right \rvert$ , while the number of equations is $\sum _{\nu \in \mathcal {V}}$ deg $\left (\nu \right ) + 1$ . By the sum formula (also known as the Handshaking-Lemma) [Reference Bondy and Murty6, Theorem 1.1], we have $\sum _{\nu \in \mathcal {V}}$ deg $\left (\nu \right ) + 1 = 2 \left \lvert \mathcal {E}\right \rvert + 1$ . Thus, the number of equations and unknowns coincides if $\left \lvert \mathcal {E}\right \rvert + 1 = \left \lvert \mathcal {V}\right \rvert$ .

It is a simple exercise to see that $\left \lvert \mathcal {E}\right \rvert \geq \left \lvert \mathcal {V}\right \rvert$ implies the existence of a cycle (cf. [Reference Bondy and Murty6, p.15]) in a so-called simple graph (a graph without loops and parallel edges). Hence, in a simple connected graph without a cycle (which seems a suitable condition for gas networks), we have $\left \lvert \mathcal {E} \right \rvert + 1 \leq \left \lvert \mathcal {V} \right \rvert$ , while connectivity yields the reverse bound $\left \lvert \mathcal {V} \right \rvert \leq \left \lvert \mathcal {E} \right \rvert + 1$ . Thus, in this case, the number of equations and unknowns coincides and we can see that the linear system has full rank.

Proposition 5.1. Let the graph $\mathcal {G} = \left (\mathcal {V},\mathcal {E}\right )$ be a tree (simple, connected and containing no cycles). Then there exists a unique solution $(d_e,\Phi _\nu )$ of the linear system (10), (11) and (12).

Proof. Since, under the above conditions on the graph, we have verified that the number of equations (10)-(11)-(12) and the number of unknowns in the linear system coincide, we only need to show that the homogeneous problem (corresponding to $c_e = 0$ ) admits only the trivial solution (i.e. the nullspace of the associated matrix is trivial).

We first notice that in the homogeneous case, the orientation of the graph plays no role and that (11) and (12) imply that $d_e = d_{e'}$ if $e$ and $e'$ are incident to a common vertex. Since the graph is connected, there exists a path connecting two arbitrary edges, hence by transitivity we obtain $d_e = d_{e'}$ for any two edges $e,e' \in \mathcal {E}$ .

Since all $d_e$ are equal, (10) implies $d_e = 0$ for all $e \in \mathcal {E}$ , which further implies $\Phi _\nu = 0$ for all $\nu \in \mathcal {V}$ , since each vertex is connected to at least one edge.

We finally illustrate the choice of the constants with a simple example.

Example 5.2. Let us consider a very simple tree network consisting of the vertices $\mathcal {V} = \left \{\nu _1, \nu _2, \nu _3, \nu _4\right \}$ , and the edges $\mathcal {E} = \left \{e_1, e_2, e_3\right \}$ with

\begin{align*} & \delta ^S\left (e_1\right ) = \nu _1, \qquad \delta ^S\left (e_2\right ) = \nu _2, \qquad \delta ^S\left (e_3\right ) = \nu _2, \\ & \delta ^E\left (e_1\right ) = \nu _2, \qquad \delta ^E\left (e_2\right ) = \nu _3, \qquad \delta ^E\left (e_3\right ) = \nu _4. \end{align*}

Thus, we obtain the following linear system for the unknowns $d_1, d_2, d_3$ and $\Phi _{\nu _1}, \ldots, \Phi _{\nu _4}$ :

\begin{align*} d_1 &= \Phi _{\nu _1}, & c_1 L_1 + d_1 &= \Phi _{\nu _2}, \\ d_2 &= \Phi _{\nu _2}, & c_2 L_2 + d_2 &= \Phi _{\nu _3}, \\ d_3 &= \Phi _{\nu _2}, & c_3 L_3 + d_3 &= \Phi _{\nu _4}, \end{align*}

together with the normalization (10). Solving this system yields $d_1=-\frac {2}3 c_1 L_1$ and $d_2 = d_3 = \frac {1}3 c_1 L_1$ .

In this example, we see that the constants $d_e$ on the edges $e_2$ and $e_3$ depend only on the properties of edge $e_1$ , but not on $c_2, c_3, L_2, L_3$ . Indeed, it is easy to see that the constant on an edge leading to an outgoing boundary of the network is always independent of that edge's own parameters.
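The computation in Example 5.2 can be reproduced in a few lines of NumPy. The following sketch assumes, purely for illustration, that the normalization (10) amounts to $\sum _e d_e = 0$ , which is consistent with the constants reported above; the six remaining equations are exactly (11) and (12) for the three edges.

```python
import numpy as np

# Unknowns ordered as u = (d_1, d_2, d_3, Phi_1, Phi_2, Phi_3, Phi_4).
c = np.array([1.0, 2.0, 0.5])   # arbitrary test values for c_e
L = np.array([1.0, 1.5, 2.0])   # arbitrary test values for L_e

A = np.zeros((7, 7)); b = np.zeros(7)
# (11): d_e = Phi_nu at the start vertex of each edge
A[0, [0, 3]] = [1, -1]                        # e_1 starts at nu_1
A[1, [1, 4]] = [1, -1]                        # e_2 starts at nu_2
A[2, [2, 4]] = [1, -1]                        # e_3 starts at nu_2
# (12): c_e L_e + d_e = Phi_nu at the end vertex of each edge
A[3, [0, 4]] = [1, -1]; b[3] = -c[0] * L[0]   # e_1 ends at nu_2
A[4, [1, 5]] = [1, -1]; b[4] = -c[1] * L[1]   # e_2 ends at nu_3
A[5, [2, 6]] = [1, -1]; b[5] = -c[2] * L[2]   # e_3 ends at nu_4
# assumed normalization (10): sum_e d_e = 0
A[6, :3] = 1

d1, d2, d3, *Phi = np.linalg.solve(A, b)
print(d1, -2 / 3 * c[0] * L[0])        # d_1 = -2/3 c_1 L_1
print(d2, d3, 1 / 3 * c[0] * L[0])     # d_2 = d_3 = 1/3 c_1 L_1
```

Running the script with arbitrary positive values of $c_e$ and $L_e$ confirms that $d_2$ and $d_3$ are insensitive to $c_2, c_3, L_2, L_3$ .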

6. Numerical examples

This section presents several numerical examples of solutions to the transport problems introduced in section 4 above. We examine different types of boundary conditions, always for the case $p=2$ .

Our numerical algorithm is based on a variant of the primal-dual optimization algorithm for optimal transport introduced in [Reference Carrillo, Craig, Wang and Wei9]. We work with a slightly different discretization of the constraints than the one presented in [Reference Pietschmann and Schlottbom27] (recently extended to metric graphs in [Reference Krautz21]), which, however, only covers the case of no-flux boundary conditions.

The basic idea is to relax the constraints in the optimization problem by only asking them to be satisfied up to a given, fixed precision. The main advantage of this approach is that adding the relaxed constraints to the objective functional results in a simple proximal operator (i.e. a shrinkage operation), which can be implemented rather efficiently. In order to stabilize the algorithm, we add numerical diffusion by means of one half times a discretized Laplace operator.

For concreteness, we focus on (TD BC 1) and (TD BC 2) to explain the algorithm in detail.

6.1. Discretizing the constraints

We start by discretizing the interval $[0,L_e]$ associated to every edge into $N_x^e$ equidistant subintervals. The time interval $[0,T]$ is similarly discretized into $N_t$ equidistant subintervals. With $\delta _x^e = L_e/N_x^e$ and $\delta _t = T/N_t$ , we obtain the grid points

\begin{align*} x_i^e & = (i-1)\delta _x^e \qquad \text {for} \, e \in \mathcal {E}, \, i \in \{1, 2, \ldots, N_x^e+1\}, \\ t_k & = (k-1)\delta _t \qquad \text {for} \, k \in \{1, 2, \ldots, N_t+1\}. \end{align*}

Note that in this section, we will slightly deviate from the notation used before, as we will use superscript $e$ and superscript $\nu$ in order to denote the corresponding edge or vertex, respectively, and not to interfere too much with the additional indices coming from the space-time discretization.

To any continuous function $v:[0,L]\times [0,T] \to \mathbb {R}$ we associate a grid function $v_h=(v_{i,k})_{i,k}$ via

\begin{align*} v_{i,k}=v(x_i,t_k) \qquad \text {for} \, i\in \{1,2, \ldots, N_x+1\}, \, k\in \{1, 2, \ldots, N_t+1\}. \end{align*}

Similar notation is used for grid functions associated to the spatial and temporal partitions, respectively. On these functions, we define a scalar product, which we obtain by discretizing the integrals involved using the trapezoidal rule. Given grid functions $u_h=(\rho _{i,k}^u,j_{i,k}^u)$ , $v_h=(\rho _{i,k}^v,j_{i,k}^v)$ , we have

(13) \begin{align} \langle u_h,v_h\rangle = \sum _{k=1}^{N_t+1}\sum _{i=1}^{N_x+1} w_k^t w_i^x\left ( \rho _{i,k}^u\rho _{i,k}^v + j_{i,k}^u j_{i,k}^v\right ),\quad \|v_h\|=\sqrt {\langle v_h,v_h\rangle }, \end{align}

with

\begin{align*} w_i^x =\begin{cases} \frac {\delta _x}{2} &i\in \{1,N_x+1\},\\ \delta _x &\text {otherwise}. \end{cases} \end{align*}

We denote by $w_k^t$ a similar weight function for the composite trapezoidal rule associated with the temporal partition and note that $\sum _{k=1}^{N_t+1}w_k^t=T$ , which equals one for the choice $T=1$ used in the examples below. This renders the linear space of grid functions a Hilbert space.
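As a concrete illustration, the following minimal sketch (in Python/NumPy, with hypothetical helper names) assembles the trapezoidal weights and evaluates the scalar product (13) for grid functions on a single edge.

```python
import numpy as np

def trapezoidal_weights(n_intervals, step):
    """Composite trapezoidal weights on a grid with n_intervals + 1 points."""
    w = np.full(n_intervals + 1, step)
    w[0] = w[-1] = step / 2
    return w

def inner_product(u, v, w_x, w_t):
    """Discrete scalar product (13): u and v are tuples (rho, j) of arrays
    of shape (N_x + 1, N_t + 1)."""
    W = np.outer(w_x, w_t)   # tensor-product quadrature weights
    return np.sum(W * (u[0] * v[0] + u[1] * v[1]))

# small sanity check on a single edge with L_e = T = 1
N_x, N_t = 150, 75
w_x = trapezoidal_weights(N_x, 1.0 / N_x)
w_t = trapezoidal_weights(N_t, 1.0 / N_t)
rho = np.ones((N_x + 1, N_t + 1)); j = np.zeros_like(rho)
print(inner_product((rho, j), (rho, j), w_x, w_t))   # approximately L_e * T = 1
```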

We will only present the discretization of the constraints for the variant used in (TD BC 1) and (TD BC 2); the other cases require obvious modifications. We discretize the continuity equation (CE 1) using centred differencing in space and backward differencing in time for the interior grid points $i\in \{2, 3, \ldots, N_x\},\ k\in \{2, 3, \ldots, N_t+1\}$ , i.e.

(14) \begin{align} \frac {\rho ^e_{i,k}-\rho ^e_{i,k-1}}{\delta _t}+\frac { j^e_{i+1,k}-j^e_{i-1,k}}{2\delta _x}=0. \end{align}

For the boundary points $i\in \{1,N_x+1\}$ , we use a one-sided finite difference to approximate $\partial _x j^e$ , i.e. we use

(15) \begin{align} \frac {\rho ^e_{1,k}-\rho ^e_{1,k-1}}{\delta _t}+\frac { j^e_{2,k}-j^e_{1,k}}{\delta _x}&=0,\quad \text {and} \end{align}
(16) \begin{align} \frac {\rho ^e_{N_x+1,k}-\rho ^e_{N_x+1,k-1}}{\delta _t}+\frac { j^e_{N_x+1,k}-j^e_{N_x,k}}{\delta _x}&=0. \end{align}
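For illustration, a minimal sketch of the discrete continuity residual (14)–(16) on a single edge is given below; the stabilization term mentioned next is omitted for clarity, and the function name is ours.

```python
import numpy as np

def continuity_residual(rho, j, dt, dx):
    """Residual of (14)-(16) for one edge: rho and j have shape (N_x + 1, N_t + 1),
    column k being the time level t_k. Returns an array of shape (N_x + 1, N_t)."""
    drho_dt = (rho[:, 1:] - rho[:, :-1]) / dt          # backward difference in time
    dj_dx = np.empty_like(rho[:, 1:])
    dj_dx[1:-1] = (j[2:, 1:] - j[:-2, 1:]) / (2 * dx)  # centred in the interior, (14)
    dj_dx[0] = (j[1, 1:] - j[0, 1:]) / dx              # one-sided at x = 0, (15)
    dj_dx[-1] = (j[-1, 1:] - j[-2, 1:]) / dx           # one-sided at x = L_e, (16)
    return drho_dt + dj_dx
```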

We also add one half times a finite difference approximation of the Laplace operator for additional stabilization. The coupling conditions (TD BC 1) and (TD BC 2) at the boundary vertices become

\begin{alignat*} {3} && 0 & = \Bigg ( \left \lvert s^{G,\nu }_k \right \rvert + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j^e_{N_x+1,k} \Bigg ) - \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j^e_{1,k} \qquad \qquad \qquad \quad & \forall \nu \in \partial ^+ \mathcal {V}, \\ \qquad && 0 & = \sum _{\substack {e \in \mathcal {E}: \\ \delta ^E(e) = \nu }} j^e_{N_x+1,k} - \Bigg ( d_k^{G,\nu } + \sum _{\substack {e \in \mathcal {E}: \\ \delta ^S(e) = \nu }} j^e_{1,k} \Bigg ) & \forall \nu \in \partial ^- \mathcal {V}, \end{alignat*}

for all $k\in \{1, 2, \ldots, N_t+1\}$ , while the vertex ODEs (1) at the interior vertices become

(17) \begin{align} \frac {\gamma ^\nu _{k}-\gamma ^\nu _{k-1}}{\delta _t} = f^\nu _k, \quad \forall \nu \in \stackrel{\small\circ}{\mathcal {V}}. \end{align}
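A sketch of the corresponding residuals, using a hypothetical graph container that stores, for each vertex, the edges starting ( $\delta ^S$ ) and ending ( $\delta ^E$ ) there, could look as follows.

```python
import numpy as np

def vertex_residuals(j, gamma, f, s, d, graph, dt):
    """Residuals of the discrete coupling conditions and of the vertex ODE (17).
    j[e] has shape (N_x + 1, N_t + 1); gamma[nu], f[nu], s[nu], d[nu] are arrays of
    length N_t + 1; 'graph' is an assumed container listing, for each vertex, the
    edges starting (delta_S) and ending (delta_E) there."""
    res = {}
    for nu in graph.inflow_vertices:      # vertices in the inflow boundary
        res[nu] = (np.abs(s[nu]) + sum(j[e][-1] for e in graph.delta_E[nu])
                   - sum(j[e][0] for e in graph.delta_S[nu]))
    for nu in graph.outflow_vertices:     # vertices in the outflow boundary
        res[nu] = (sum(j[e][-1] for e in graph.delta_E[nu])
                   - d[nu] - sum(j[e][0] for e in graph.delta_S[nu]))
    for nu in graph.interior_vertices:    # vertex ODE (17)
        res[nu] = (gamma[nu][1:] - gamma[nu][:-1]) / dt - f[nu][1:]
    return res
```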

Since the discretization does not depend on $j_{i, 1}$ , we set $j_{i, 1} = 1$ for $i = 1, 2, \ldots, N_x + 1$ . Furthermore, the initial and final conditions for both $\rho ^e$ and $\gamma ^{\nu }$ are taken into account as follows:

(18) \begin{align} \rho ^e_{i, 1} = \rho _0^e \left (x_i\right ) \quad \text {and} \quad \rho ^e_{i, N_t+1} = \rho _T^e \left (x_i\right ) & \qquad \qquad \text {for} \, i \in \{1, 2, \ldots N_x + 1\}, \, e \in \mathcal {E}, \end{align}
(19) \begin{align} \gamma ^{\nu }_1 = \gamma _0^{\nu } \quad \text {and} \quad \gamma ^{\nu }_{N_t + 1} = \gamma _1^{\nu } & \qquad \qquad \text {for} \, \nu \in \stackrel{\small\circ}{\mathcal {V}}. \end{align}

Finally, since the continuity equation is only satisfied approximately, we explicitly add a constraint enforcing that the total mass on the graph is conserved (so that it remains a probability density)

(20) \begin{align} \sum _{e\in \mathcal {E}}\sum _{i=1}^{N_x+1} w_i^x \left (\rho ^e_{i,k}-\rho ^e_0(x_i)\right ) + \sum _{\nu \in \mathcal {V}} \left (\gamma ^{\nu }_k - \gamma ^{\nu }_0\right )&=0 \qquad \text {for} \, k\in \{2, 3, \ldots N_t+1\}. \end{align}
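A minimal sketch of the left-hand side of (20) reads as follows; for simplicity, it assumes the same spatial grid (and hence the same weights $w_i^x$ ) on every edge, and the container names are hypothetical.

```python
def mass_defect(rho, gamma, rho0, gamma0, w_x):
    """Left-hand side of (20) at every time level k >= 2: rho maps each edge e to an
    array of shape (N_x + 1, N_t + 1), gamma maps each interior vertex to an array
    of length N_t + 1; rho0 and gamma0 hold the corresponding initial data."""
    edge_mass = sum(w_x @ (rho_e[:, 1:] - rho0[e][:, None]) for e, rho_e in rho.items())
    vertex_mass = sum(g[1:] - gamma0[nu] for nu, g in gamma.items())
    return edge_mass + vertex_mass   # should vanish (up to the tolerance delta)
```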

6.2. Primal-dual algorithm

We now sketch the main ingredients of the primal-dual algorithm, referring to [Reference Carrillo, Craig, Wang and Wei9, Reference Pietschmann and Schlottbom27] for details. As mentioned above, the key idea is to enforce the constraints (14)–(20) only up to a finite precision, i.e. instead of imposing (14) exactly, we only require, in terms of the weighted norm (13), that

\begin{align*} \sum _{k=2}^{N_t+1} \sum _{i=2}^{N_x} w_k^t w_i^x \left (\frac {\rho ^e_{i,k}-\rho ^e_{i,k-1}}{\delta _t}+\frac { j^e_{i+1,k}-j^e_{i-1,k}}{2\delta _x}\right )^2 \leq \delta _1^2. \end{align*}

These weakened constraints are quadratic and can be written in the form

\begin{align*} Au \in \mathcal {C}_\delta = \left \{x \, \left .\right | \, \|x_j -b_j\|_2 \leq \delta _j, \, j = 1, \ldots, 8\right \}, \end{align*}

where the vector $u$ contains the coefficients of the grid functions $(\rho _h,m_h,\gamma _h,f_h)$ . We have a total of $8$ constraints, since we include, in addition to the continuity equation on the edges, the initial and final data for $\rho ^e$ and for $\gamma ^{\nu }$ , the coupling conditions at the vertices, the vertex dynamics and an additional mass constraint. Note that the weights $w_i^x$ and $w_k^t$ are included in the definitions of $A_j$ and $b_j$ , and the vectors $x_j$ are the slices of $x$ corresponding to the rows of $A_j$ .

We define the matrix $A$ by vertically concatenating the matrices $A_j$ for $j = 1, \ldots, 8$ . We note that $A$ is the matrix of a linear map from the Hilbert space of grid functions, equipped with the inner product (13), to a Euclidean space.
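A possible realization of this construction, and of the projection onto $\mathcal {C}_\delta$ needed later in the dual update, is sketched below; the block sizes and right-hand sides $b_j$ are assumed to be available from the assembly of the $A_j$ .

```python
import numpy as np
import scipy.sparse as sp

def assemble_A(blocks):
    """Vertically stack the constraint blocks A_1, ..., A_8 (scipy sparse matrices)."""
    return sp.vstack(blocks, format="csr")

def project_onto_C_delta(y, b_blocks, deltas, sizes):
    """Project y = A u block-wise onto C_delta: each slice y_j is mapped to the
    closest point of the Euclidean ball of radius delta_j centred at b_j."""
    out, start = np.empty_like(y), 0
    for b_j, delta_j, n_j in zip(b_blocks, deltas, sizes):
        y_j = y[start:start + n_j]
        r = y_j - b_j
        norm = np.linalg.norm(r)
        scale = 1.0 if norm <= delta_j else delta_j / norm
        out[start:start + n_j] = b_j + scale * r
        start += n_j
    return out
```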

We then aim to solve the fully discretized problem

\begin{align*} \inf _{(\rho _h,m_h,\gamma _h, f_h)} \left (\sum _{e \in \mathcal {E}}\sum _{k=1}^{N_t+1}\sum _{i=1}^{N_x+1} w^x_i w_k^t h(\rho ^e_{i,k},j^e_{i,k}) + \sum _{\nu \in {\mathcal {V}}}\sum _{k=1}^{N_t+1}h(\gamma ^{\nu }_{k},f^{\nu }_k) \right )+ \mathfrak {i}_{\mathcal {C}_\delta }(A u), \end{align*}

where $\mathfrak {i}$ denotes the convex indicator function and with $h$ as defined in (4) and $p=2$ . We now have an unconstrained optimization problem involving a convex, yet non-differentiable, functional, and thus we can employ a primal-dual algorithm in the variant given in Algorithm 1.

Therein, two proximal operators have to be computed: the one with respect to $h$ reduces, at each point of the space-time grid, to solving a third-order polynomial equation, whose roots can be obtained by explicit formulas, while for the indicator function we recover the projection onto the respective set; see again [Reference Carrillo, Craig, Wang and Wei9, Reference Pietschmann and Schlottbom27] for details.
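To make the first of these explicit: assuming that $h$ in (4) reduces for $p=2$ to the Benamou-Brenier density $h(\rho, j) = |j|^2/(2\rho )$ , the optimality conditions of the proximal problem give $j = \rho \tilde j/(\rho + \tau )$ and a cubic equation for $\rho$ . Note that the quadrature weights cancel, since they enter both the objective and the weighted norm (13), so the proximal map acts point-wise. The sketch below uses np.roots instead of the explicit Cardano-type formulas.

```python
import numpy as np

def prox_h(rho_tilde, j_tilde, tau):
    """Proximal map of tau * h with h(rho, j) = |j|^2 / (2 rho) (the p = 2 case),
    applied point-wise on the space-time grid; the optimality conditions reduce to
    the cubic (rho - rho_tilde) * (rho + tau)^2 = tau * |j_tilde|^2 / 2."""
    rho_out = np.zeros_like(rho_tilde)
    j_out = np.zeros_like(j_tilde)
    for idx in np.ndindex(rho_tilde.shape):
        coeffs = [1.0,
                  2.0 * tau - rho_tilde[idx],
                  tau ** 2 - 2.0 * tau * rho_tilde[idx],
                  -tau ** 2 * rho_tilde[idx] - 0.5 * tau * j_tilde[idx] ** 2]
        roots = np.roots(coeffs)
        real_roots = roots[np.abs(roots.imag) < 1e-12].real
        rho_star = real_roots.max() if real_roots.size else 0.0
        if rho_star > 0.0:            # otherwise keep (0, 0), i.e. no mass, no flux
            rho_out[idx] = rho_star
            j_out[idx] = rho_star * j_tilde[idx] / (rho_star + tau)
    return rho_out, j_out
```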

Algorithm 1: Primal-dual algorithm for the solution of the optimal transport problems.
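Since Algorithm 1 is only referenced by its caption here, we include a schematic sketch of a primal-dual (Chambolle-Pock type) iteration of the kind used in [Reference Carrillo, Craig, Wang and Wei9, Reference Pietschmann and Schlottbom27], under the assumption that Algorithm 1 follows this standard structure; prox_primal and project_C_delta stand for the two proximal maps discussed above, and convergence requires the usual step-size condition $\tau \sigma \lVert A\rVert ^2 \leq 1$ .

```python
import numpy as np

def primal_dual(u0, A, prox_primal, project_C_delta, tau, sigma, n_iter=5000):
    """Generic Chambolle-Pock iteration for  min_u G(u) + i_{C_delta}(A u),
    where prox_primal implements prox_{tau G} and project_C_delta the projection
    onto C_delta; A is a (sparse) matrix acting on the stacked grid functions."""
    u = u0.copy()
    u_bar = u0.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # dual step; Moreau's identity turns the prox of i_{C_delta}^* into a projection
        z = y + sigma * (A @ u_bar)
        y = z - sigma * project_C_delta(z / sigma)
        # primal step
        u_new = prox_primal(u - tau * (A.T @ y), tau)
        # over-relaxation
        u_bar = 2.0 * u_new - u
        u = u_new
    return u
```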

In all examples below, unless explicitly stated otherwise, we make the following parameter choices: $N_x = 150$ , $N_t = 75$ , $T=1$ and $L_e=1$ for all edges $e \in \mathcal {E}$ .

6.3. Example 1: Geodesics and branching

One important observation made in [Reference Erbar, Forkert, Maas and Mugnolo14] is that the logarithmic entropy on metric graphs is not geodesically convex, since geodesics may branch. This is illustrated there by an explicit example, which we reproduce here numerically, both with and without vertex dynamics, for the graph topology depicted in Figure 1. As initial and final data we chose

\begin{equation*} \rho _e^0(x):\!= \begin{cases} \mathbb{1}_{[0,0.4]}(x), &e =e_1, \\ 0, & e=e_2, \, e_3,\end{cases} \quad \text { and }\quad \rho _e^1(x) :\!= \begin{cases}0, & e= e_1, \\ \frac {1}{2} \mathbb{1}_{[1-0.4,1]} (x), & e=e_2,\,e_3, \end{cases} \end{equation*}

and, in the case when we allow for a vertex dynamic,

\begin{align*} \gamma _{\nu }^0 = \gamma _{\nu }^1 = 0 \qquad \qquad \forall \nu \in \mathcal {V}. \end{align*}
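In the discrete setting, these data are simply sampled at the grid points; the short sketch below (with our own variable names) also confirms that the initial and final configurations carry the same total mass of $0.4$ , so that the mass constraint (20) can be satisfied.

```python
import numpy as np

N_x = 150
x = np.linspace(0.0, 1.0, N_x + 1)                     # grid on each edge, L_e = 1
w_x = np.full(N_x + 1, 1.0 / N_x); w_x[[0, -1]] /= 2   # trapezoidal weights

rho0 = {"e1": (x <= 0.4).astype(float),
        "e2": np.zeros_like(x), "e3": np.zeros_like(x)}
rho1 = {"e1": np.zeros_like(x),
        "e2": 0.5 * (x >= 0.6).astype(float),
        "e3": 0.5 * (x >= 0.6).astype(float)}

print(sum(w_x @ r for r in rho0.values()))   # approximately 0.4
print(sum(w_x @ r for r in rho1.values()))   # same value as above
```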

The results are depicted in Figures 2 and 3 for the case without and with vertex dynamics, respectively. One can observe that with explicit vertex dynamics, the transport over the vertex is slowed down, delaying the whole transport.

Figure 1. Sketch of the graph used in the first example for branching geodesics. Here, no in- or outflux via the boundary is assumed (i.e. $\partial ^+ \mathcal {V} = \partial ^- \mathcal {V} = \emptyset$ ).

Figure 2. Branching geodesic without vertex dynamic: Snapshots of the dynamics of the densities $\rho _e$ at different times.

Figure 3. Branching geodesic with vertex dynamic: Snapshots of the dynamics of the densities $\rho _e$ and $\gamma _{\nu }$ at different times.

6.4. Example 2: Time-dependent in- and outflow

In this example, we study the influence of time-dependent boundary conditions on the solutions of the resulting transport problem with the corresponding vertex dynamics given in definition 3.1, combined with (TD BC 1) and (TD BC 2).

To this end, we fix the graph depicted in Figure 4 and set

\begin{equation*} \partial ^+ \mathcal {V} = \{\nu _1\},\quad \partial ^- \mathcal {V} = \{\nu _3,\,\nu _4\},\,\text { and }\,\stackrel{\small\circ}{\mathcal {V}} = \{\nu _2\}. \end{equation*}

Figure 4. Sketch of the graph used in the second example. We set $\partial ^+ \mathcal {V} = \{\nu _1\}$ , $\partial ^- \mathcal {V} = \{\nu _3,\,\nu _4\}$ and $\stackrel{\small\circ}{\mathcal {V}} = \{\nu _2\}$ .

Figure 5. Snapshots of the dynamics of the densities $\rho _e$ and $\gamma _{\nu }$ with symmetric boundary conditions (InOutSym) at different times.

Figure 6. Snapshots of the dynamics of the densities $\rho _e$ and $\gamma _{\nu }$ with asymmetric boundary conditions (InOutAsym) at different times.

For initial and final data we chose

\begin{align*} \rho _{e_1}^0 = \rho _{e_2}^0 =\rho _{e_3}^0 = \rho _{e_4}^0 = 0.225\quad \text { as well as }\quad \gamma _{\nu _2}^0 = 0.1,\,\gamma ^0_{\nu _3} =0, \end{align*}

and

\begin{align*} \rho _{e_1}^1 = \rho _{e_2}^1 =\rho _{e_3}^1 = 0,\, \rho _{e_4}^1(x) = c_1e^{-\frac {(x-1/2)^2}{(0.2)^2}}\quad \text { as well as }\quad \gamma _{\nu _2}^1 = 0.4,\,\gamma ^1_{\nu _3} =0, \end{align*}

where $c_1$ is chosen such that the total mass is normalized to $1$ . For in- and out-flow we consider two cases:

(InOutSym) \begin{align} s^G_{\nu _1}(t) = t\quad \text { and } \quad d^G_{\nu _3}(t) = d^G_{\nu _4}(t) = \frac {1}{2} t, \end{align}

as well as

(InOutAsym) \begin{align} s^G_{\nu _1}(t) = t,\quad d^G_{\nu _3}(t) = 0 \quad \text { and } \quad d^G_{\nu _4}(t) = t.\end{align}

Note that the values for $\gamma _{\nu _3}$ are only needed in the asymmetric case. The respective results, evaluated at different time steps, are depicted in Figures 5 (for (InOutSym)) and 6 (for (InOutAsym)). The results show that when only one vertex is an outflow vertex, the mass necessary to obtain the final configuration is predominantly transported via the other edge.

7. Conclusion

This paper gives a detailed introduction to the modelling of gas networks as oriented metric graphs, including two different types of coupling conditions at vertices (one which allows for storing gas in vertices, and one which does not) as well as two different kinds of boundary conditions (allowing gas to enter and exit the network as supply and demand). With this setup, we thoroughly investigate mass conservation properties on the network for given initial, final and boundary data and formulate various transport-type problems on metric graphs.

Furthermore, we generalize the dynamic formulation of the $2$ -Wasserstein metric in [Reference Burger, Humpert and Pietschmann8] to general $p$ , and we also include the aforementioned time-dependent and time-independent boundary conditions in the formulations. We then utilize the presented $p$ -Wasserstein metrics to derive gradient flows, and for the case $p = 3$ we recover the (ISO3) gas model, which is a particularly interesting result. Moreover, we highlight some difficulties in defining appropriate potentials on metric graphs when going from a single edge to a simple connected graph, which naturally occur in the study of vanishing diffusion limits.

In the last section of the paper, we present some numerical results based on a space-time discretization combined with a primal-dual optimization scheme, which allows us to compute solutions of the presented optimal transport problems for different coupling and boundary conditions. These examples give insight into how gas storage at interior vertices, as well as vertices responsible for supplying the network or meeting demand, affects the dynamics of the network and thus the optimal transport solution.

Concerning future work, we aim at generalizing the results of [Reference Burger, Humpert and Pietschmann8] and [Reference Erbar, Forkert, Maas and Mugnolo14], which prove the existence of minimizers for the $p$ -Wasserstein metrics in the case $p = 2$ , without boundary vertices and, in [Reference Erbar, Forkert, Maas and Mugnolo14], without mass at interior vertices. Furthermore, an extension to mixture models (for instance, a mix of natural gas and hydrogen) constitutes an interesting research opportunity as well.

Acknowledgements

JFP thanks the DFG for support via the Research Unit FOR 5387 POPULAR, Project No. 461909888. AF and MB thank the DFG for support via the SFB TRR 154. MB acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF.

Competing interests

The authors declare that they have no competing interests.

A Appendix

Coupled measures

A detailed introduction of coupled measures on a subset of edges or subset of vertices on the graph is given by the following definition.

Without loss of generality, we assume that the vertices $\mathcal {V}$ are ordered in such a manner that there exist indices $o, \theta \in \left \{0, 1, \dots, n\right \}$ with $o \leq \theta$ such that:

\begin{align*} \partial ^+ \mathcal {V} & = \left \{\nu _1, \nu _2, \dots \nu _o\right \}, \\ \partial ^- \mathcal {V} & = \left \{\nu _{o + 1}, \nu _{o + 2}, \dots \nu _{\theta }\right \}, \\ \stackrel{\small\circ}{\mathcal {V}} & = \left \{\nu _{\theta + 1}, \nu _{\theta + 2}, \dots \nu _n\right \}. \end{align*}

Definition A.1 (Coupled measures on graphs). On a graph $\mathcal {G} = \left (\mathcal {V}, \mathcal {E}\right )$ , we define coupled mass measures, on all edges as well as on the different subsets of vertices, as

\begin{equation*} \mathcal {M}_+\left (\mathcal {E}\right ) :\!= \mathcal {M}_+\left (e_1\right ) \times \mathcal {M}_+\left (e_2\right ) \times \dots \times \mathcal {M}_+\left (e_m\right ) \end{equation*}

and

\begin{align*} \mathcal {M}_+\left (\partial ^+ \mathcal {V}\right ) :\!= \, & \mathcal {M}_+\left (\nu _1\right ) \times \mathcal {M}_+\left (\nu _2\right ) \times \dots \times \mathcal {M}_+\left (\nu _o\right ), \\ \mathcal {M}_+\left (\partial ^- \mathcal {V}\right ) :\!= \, & \mathcal {M}_+\left (\nu _{o + 1}\right ) \times \mathcal {M}_+\left (\nu _{o + 2}\right ) \times \dots \times \mathcal {M}_+\left (\nu _{\theta }\right ), \\ \mathcal {M}_+\left (\stackrel{\small\circ}{\mathcal {V}}\right ) :\!= \, & \mathcal {M}_+\left (\nu _{\theta + 1}\right ) \times \mathcal {M}_+\left (\nu _{\theta + 2}\right ) \times \dots \times \mathcal {M}_+\left (\nu _n\right ). \end{align*}

Moreover, for a fixed time point $t \in \left [0, T\right ]$ , we define

\begin{equation*} \mathcal {M}_+^t\left (\mathcal {E}\right ) :\!= \mathcal {M}_+^t\left (e_1\right ) \times \mathcal {M}_+^t\left (e_2\right ) \times \dots \times \mathcal {M}_+^t\left (e_m\right ) \end{equation*}

and

\begin{align*} \mathcal {M}_+^t\left (\partial ^+ \mathcal {V}\right ) :\!= & \, \mathcal {M}_+^t\left (\nu _1\right ) \times \mathcal {M}_+^t\left (\nu _2\right ) \times \dots \times \mathcal {M}_+^t\left (\nu _o\right ),\\ \mathcal {M}_+^t\left (\partial ^- \mathcal {V}\right ) :\!= & \, \mathcal {M}_+^t\left (\nu _{o + 1}\right ) \times \mathcal {M}_+^t\left (\nu _{o + 2}\right ) \times \dots \times \mathcal {M}_+^t\left (\nu _{\theta }\right ),\\ \mathcal {M}_+^t\left (\stackrel{\small\circ}{\mathcal {V}}\right ) :\!= & \, \mathcal {M}_+^t\left (\nu _{\theta + 1}\right ) \times \mathcal {M}_+^t\left (\nu _{\theta + 2}\right ) \times \dots \times \mathcal {M}_+^t\left (\nu _n\right ). \end{align*}

In a similar manner, for measures without non-negativity constraints, we define

\begin{equation*} \mathcal {M}\left (\mathcal {E}\right ) = \mathcal {M}\left (e_1\right ) \times \mathcal {M}\left (e_2\right ) \times \dots \times \mathcal {M}\left (e_m\right ) \end{equation*}

and

\begin{equation*} \mathcal {M}\left (\stackrel{\small\circ}{\mathcal {V}}\right ) = \mathcal {M}\left (\nu _{\theta + 1}\right ) \times \mathcal {M}\left (\nu _{\theta + 2}\right ) \times \dots \times \mathcal {M}\left (\nu _n\right ). \end{equation*}

Therefore, the initial and final densities for a given optimal transport problem are given as coupled measures:

  • Mass densities:

    \begin{align*} \rho _0 & :\!= \left (\left .\rho _{e_1}\right |_{t = 0}, \left .\rho _{e_2}\right |_{t = 0}, \dots \left .\rho _{e_m}\right |_{t = 0}\right ) \in \mathcal {M}_+^0\left (\mathcal {E}\right ) \\ \rho _T & :\!= \left (\left .\rho _{e_1}\right |_{t = T}, \left .\rho _{e_2}\right |_{t = T}, \dots \left .\rho _{e_m}\right |_{t = T}\right ) \in \mathcal {M}_+^T\left (\mathcal {E}\right ) \end{align*}
  • Vertex mass densities:

    \begin{align*} \gamma _0 & :\!= \left (\left .\gamma _{\nu _{\theta + 1}}\right |_{t = 0}, \left .\gamma _{\nu _{\theta + 2}}\right |_{t = 0}, \dots \left .\gamma _{\nu _n}\right |_{t = 0}\right ) \in \mathcal {M}_+^0\left (\stackrel{\small\circ}{\mathcal {V}}\right ) \\ \gamma _T & :\!= \left (\left .\gamma _{\nu _{\theta + 1}}\right |_{t = T}, \left .\gamma _{\nu _{\theta + 2}}\right |_{t = T}, \dots \left .\gamma _{\nu _n}\right |_{t = T}\right ) \in \mathcal {M}_+^T\left (\stackrel{\small\circ}{\mathcal {V}}\right ) \end{align*}
  • Source and sink vertex mass densities:

    \begin{align*} S_0 & :\!= \left (\left .S_{\nu _1}\right |_{t = 0}, \left .S_{\nu _2}\right |_{t = 0}, \dots \left .S_{\nu _o}\right |_{t = 0}\right ) \in \mathcal {M}_+^0\left (\partial ^+ \mathcal {V}\right ) \\ S_T & :\!= \left (\left .S_{\nu _1}\right |_{t = T}, \left .S_{\nu _2}\right |_{t = T}, \dots \left .S_{\nu _o}\right |_{t = T}\right ) \in \mathcal {M}_+^T\left (\partial ^+ \mathcal {V}\right ) \\ D_0 & :\!= \left (\left .D_{\nu _{o + 1}}\right |_{t = 0}, \left .D_{\nu _{o + 2}}\right |_{t = 0}, \dots \left .D_{\nu _{\theta }}\right |_{t = 0}\right ) \in \mathcal {M}_+^0\left (\partial ^- \mathcal {V}\right ) \\ D_T & :\!= \left (\left .D_{\nu _{o + 1}}\right |_{t = T}, \left .D_{\nu _{o + 2}}\right |_{t = T}, \dots \left .D_{\nu _{\theta }}\right |_{t = T}\right ) \in \mathcal {M}_+^T\left (\partial ^- \mathcal {V}\right ) \end{align*}

References

Agueh, M. (2005) Existence of solutions to degenerate parabolic equations via the Monge–Kantorovich theory. Adv. Differ. Equ. 10(3), 309–360.
Ambrosio, L., Gigli, N. & Savaré, G. (2005) Gradient Flows: In Metric Spaces and in the Space of Probability Measures, Springer Science & Business Media.
Banda, M., Herty, M. & Klar, A. (2005) Gas flow in pipeline networks. Netw. Heterog. Media 1(1), 41–56.
von Below, J. & Nicaise, S. (1996) Dynamical interface transition in ramified media with diffusion. Commun. Part. Diff. Eq. 21(1–2), 255–279.
Benamou, J.-D. & Brenier, Y. (2000) A computational fluid mechanics solution to the Monge–Kantorovich mass transfer problem. Numer. Math. 84(3), 375–393.
Bondy, J. A. & Murty, U. S. R. (1976) Graph Theory with Applications, Macmillan, London, Vol. 290.
Brouwer, J., Gasser, I. & Herty, M. (2011) Gas pipeline models revisited: Model hierarchies, nonisothermal models, and simulations of networks. SIAM Multiscale Model. Simul. 9(2), 601–623.
Burger, M., Humpert, I. & Pietschmann, J.-F. (2023) Dynamic optimal transport on networks. ESAIM Control Optim. Calc. Var. 29, 54.
Carrillo, J. A., Craig, K., Wang, L. & Wei, C. (2022) Primal dual methods for Wasserstein gradient flows. Found. Comput. Math. 22(2), 389–443.
Dolbeault, J., Nazaret, B. & Savaré, G. (2008) A new class of transport distances between measures. Calc. Var. Partial Differ. Equ. 34(2), 193–231.
Domschke, P., Hiller, B., Lang, J., Mehrmann, V., Morandin, R. & Tischendorf, C. (2021) Gas Network Modeling: An Overview. https://opus4.kobv.de/opus4-trr154/frontdoor/index/index/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/gas+modeling+an+overview/docId/411
Egger, H. & Giesselmann, J. (2023) Stability and asymptotic analysis for instationary gas transport via relative energy estimates. Numer. Math. 153(4), 701–728.
Egger, H. & Philippi, N. (2021) On the transport limit of singularly perturbed convection–diffusion problems on networks. Math. Method. Appl. Sci. 44(6), 5005–5020.
Erbar, M., Forkert, D., Maas, J. & Mugnolo, D. (2021) Gradient flow formulation of diffusion equations in the Wasserstein space over a metric graph. https://arxiv.org/pdf/2105.05677
Esposito, A., Patacchini, F. S., Schlichting, A. & Slepčev, D. (2021) Nonlocal-interaction equation on graphs: Gradient flow structure and continuum limit. Arch. Ration. Mech. Anal. 240(2), 699–760.
Giesselmann, J., Lattanzio, C. & Tzavaras, A. E. (2017) Relative energy for the Korteweg theory and related Hamiltonian flows in gas dynamics. Arch. Ration. Mech. Anal. 223(3), 1427–1484.
Gugat, M., Habermann, J., Hintermüller, M. & Huber, O. (2023) Constrained exact boundary controllability of a semilinear model for pipeline gas flow. Eur. J. Appl. Math. 34(3), 532–553.
Gugat, M., Leugering, G. & Hante, F. (2016) Stationary states in gas networks. Netw. Heterog. Media 10(2), 295–320.
Heinze, G., Pietschmann, J.-F. & Schmidtchen, M. (2023) Nonlocal cross-interaction systems on graphs: Nonquadratic Finslerian structure and nonlinear mobilities. SIAM J. Math. Anal. 55(6), 7039–7076.
Kramar Fijavž, M., Mugnolo, D. & Sikolya, E. (2007) Variational and semigroup methods for waves and diffusion in networks. Appl. Math. Optim. 55(2), 219–240.
Krautz, J. (2024) The Schröder problem on metric graphs. Master thesis, TU Chemnitz, Germany.
Nicaise, S. (1985) Some results on spectral theory over networks, applied to nerve impulse transmission. Polynômes Orthogonaux et Appl. 1171, 532–541.
Maas, J. (2011) Gradient flows of the entropy for finite Markov chains. J. Funct. Anal. 261(8), 2250–2292.
Monsaingeon, L. (2021) A new transportation distance with bulk/interface interactions and flux penalization. Calc. Var. Partial Differ. Equ. 60(3), 101.
Mugnolo, D. & Romanelli, S. (2007) Dynamic and generalized Wentzell node conditions for network equations. Math. Method. Appl. Sci. 30(6), 681–706.
Osiadacz, A. (1996) Different transient models: limitations, advantages and disadvantages. PSIG Annual Meeting, PSIG-9606. https://onepetro.org/PSIGAM/proceedings-pdf/PSIG96/All-PSIG96/PSIG-9606/1957570/psig-9606.pdf
Pietschmann, J.-F. & Schlottbom, M. (2022) Data-driven gradient flows. Electron. Trans. Numer. Anal. 57, 193–215.
Santambrogio, F. (2015) Optimal Transport for Applied Mathematicians, Springer.
Villani, C. (2009) Optimal Transport: Old and New, Grundlehren der Mathematischen Wissenschaften, Vol. 338, Springer-Verlag.