1 Introduction
The lack of one distinguished standard Borel topology, with its associated Lebesgue measure, is the source of many differences between stochastic dynamics in finite and infinite dimensions. In finite dimensions, it is typical for a stochastic ordinary differential equation to have a transition law that is absolutely continuous with respect to Lebesgue measure, and equivalence of transition probabilities is the norm; in infinite dimensions, it is the exception. Of course, this fact is at the core of the difference between ordinary and partial differential equations. In the stochastic setting, it produces additional difficulties, as many of the classical ideas, such as ellipticity, smoothing and transition densities, are tied to the existence of a Lebesgue measure.
Here, we provide an analysis showing when there is a preferred topology whose associated Gaussian measure plays the role of the Lebesgue measure in infinite dimensions. We study the stochastically forced Burgers equation in a singular regime and show that the distribution of the dynamics at time t is mutually absolutely continuous with respect to the Gaussian measure associated with the linear dynamics obtained by removing the nonlinear term.
In the infinite-dimensional setting, if one only considers finite-dimensional functionals of the solution (such as evaluation at a space-time point), the existence of densities with respect to the natural reference measure – again, the Lebesgue measure – has a large literature, mostly related to Malliavin calculus. Here we point out, for instance, the monograph [SS05] and the recent paper [KM22], which contains a more thorough literature review, as well as the papers [MP06, HM11, GH19]. In particular, the setting in [MP06, HM11, GH19] is orthogonal to ours, as the authors there consider equations driven by finite-dimensional Wiener processes, while our equation is very singular with a stochastic forcing that is nondegenerate in all directions. Through that lens, those papers deal with the hypoelliptic setting but only answer finite-dimensional questions about any transition densities, while this paper considers what might be called the truly elliptic setting, where the structure of the stochastic forcing sets the relevant topology, and hence the reference measure, for the full infinite-dimensional setting (see [Mat03] for a broader, although dated, discussion of this).
A much more substantial literature is devoted to the same problem (in a smoother regime) at the level of path measures, thanks to the Girsanov Theorem. We point out, for instance, the monograph [DPZ92]. There is strong evidence that this approach is not directly applicable to our setting.
The first works we are aware of that consider the problem we are interested in are [DPD04, MS05]. In [DPD04], equivalence is proved for the invariant measures and, via the strong Feller property, for the solution at fixed times. This work takes a different tack, leveraging the Time-Shifted Girsanov Method contained in [MS05, MS08] and, in a more systematic presentation, in [Wat10]. Those works are the starting point for this investigation, but we will see that significant work is required to extend them to the singular setting.
In the case of rough but sufficiently smooth forcing, when all of the objects are classically defined, the Time-Shifted Girsanov Method contained in [MS05, MS08, Wat10] can be applied to our setting. As the roughness increases, we decompose the equation into an increasing number of levels of equations, and some stochastic objects in some levels require renormalisations in the sense of [Hai13, GIP15, GP17, MWX15, CC18]. The additional levels of decomposition are driven by our need to prove absolute continuity and not by the need for renormalisations in the sense of singular SPDEs. The analysis further illuminates the structure of the equations by underlining structural changes that occur as the roughness increases. In particular, the KPZ equation (in Burgers equation form) presents itself as a boundary case just beyond the analysis of this paper. There is strong evidence that this relates to a fundamental change in the structure of the equation in the KPZ setting.
While KPZ is the boundary case, it is still open whether our results extend to it. When the forcing is precisely the spatial derivative of space-time white noise, the invariant measure is Gaussian. However, it is unclear whether any Gaussian structure persists if the structure of the noise is perturbed. Since the semigroup is known to be strong Feller in the KPZ case [HM18] even with more general forcing, we know that the failure of our results to generalise will not be because of the appearance of a rough, random shift outside of the needed Cameron-Martin space of admissible shifts as in [BG21]. (See [DP06] and Sections 3.3 and 5.1.) In [BG21], the authors prove the singularity of the $\Phi ^4_3$ measure with respect to the Gaussian free field and absolute continuity with respect to a random shift of the Gaussian free field. A similar result is also established for the $\Phi ^3_3$ measure in the work [OOT21].
Additionally, we believe that establishing absolute continuity of the dynamics with respect to a Gaussian reference measure will open additional perspectives and approaches to analysing these rough SPDEs. Finally, we mention that in [BF16], the authors prove a connection between a nonlinear problem and a linear problem.
Outline of paper: The paper is organised as follows. Section 2 contains our main result, and Section 3 contains the main tools we use to prove it: decompositions of the solution and the Time-Shifted Girsanov Method. In Section 4, we give the basic definitions and estimates and study the regularity of the solution and of the terms appearing in the equation. In Section 5, we prove our general statements on absolute continuity and equivalence, which are used in Section 6 and Section 7 to prove absolute continuity of the decompositions. Altogether, these results prove the main theorem. In the Appendix, we recall some details on Besov spaces and paraproducts (Appendix A), we define the Gaussian objects that appear in the decompositions and prove their regularity (Appendix B), and we give a result of existence and uniqueness for the needed equations (Appendix C).
2 Main result
Consider the stochastic Burgers equation on $\mathbb {T} = [0, 2\pi ]$ with the periodic boundary condition
where $A = - \partial _{xx}$ , $\mathscr {L} = \partial _t + A$ , $B(u, v) = \partial _x (uv)$ , and we write $B(u) := B(u, u)$ . Also, W is a cylindrical Brownian motion on $L^2(\mathbb {T})$ . Since A is a positive, symmetric operator on functions in $L^2(\mathbb {T})$ with mean zero in space, we can define $A^\delta $ for any $\delta \in \mathbb {R}$ by its spectral decomposition. Assume that $Q \approx A^{\alpha /2}$ for some $\alpha \in \mathbb {R}$ , where we write $Q \approx A^{\beta /2}$ for some $\beta \in \mathbb {R}$ when A and Q have a common eigenbasis, and $A^{-\beta } QQ^{*}$ is bounded with bounded inverse.
We denote by $e^{-t A}$ the semigroup generated by $-A$. The use of the notation $\mathscr {L} u_t$ on the left-hand side of equation (2.1) is meant to be both compact and evocative of the fact that we will consider the mild or integral formulation of the equation. Namely, if $u_0$ is the initial condition, then u solves the mild form (2.2), in which the nonlinearity appears integrated in time against the heat semigroup and the noise enters through the stochastic convolution.
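For orientation, a sketch of the equation and its mild formulation in the notation just introduced; the sign with which the nonlinearity enters, and the exact ordering of the terms in the display (2.2), are assumptions here rather than quotations:
$$ \mathscr {L} u_t + B(u_t) = Q\, \dot W_t, \qquad u_t = -\int _0^t e^{-(t-s) A} B(u_s)\, ds + e^{-t A} u_0 + \int _0^t e^{-(t-s) A} Q\, dW_s . $$
The last term is the stochastic convolution referred to throughout, and the integrated nonlinearity is the term fed through the decompositions of Section 3.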
Based on the assumption on Q and the structure of the equation, if $u_0$ has spatial mean zero, then all terms in the equation will have mean zero, which is consistent with the domains of $A^\delta $ and $e^{-tA}$. We will consider the setting where $Q\approx A^{\frac {\alpha }2}$ for $\alpha < 1$, with particular interest in the case of $\alpha $ close to $1$.
We will see that when $\alpha <1$ , local solutions exist in the Hölder space $\mathcal {C}^{(\frac 12-\alpha )^-}$ up to a stopping time $\tau _\infty $ that is almost surely positive for initial conditions in $\mathcal {C}^\gamma $ for $\gamma> -1$ .Footnote 1 When $\alpha < \frac 12$ , standard energy estimates guarantee the existence of a unique global solution (that is $\tau _\infty = \infty $ almost surely). When $\alpha \geq \tfrac 12$ , the solution is no longer a function, and extra care needs to be taken as it is a priori possible that $\tau _\infty < \infty $ with positive probability.
Because, in the settings of primary interest, global solutions are not assured, we will extend our state space to include an isolated ‘death’ state and define $u_t$ to be this death state when $t \geq \tau _\infty $. This also has the advantage of underscoring the applicability of these ideas to equations with explosive solutions. With this in mind, we write $\bar {\mathcal {C}}^{(\frac 12-\alpha )^-}$ for the state space obtained by adjoining the death state to $\mathcal {C}^{(\frac 12-\alpha )^-}$. We extend the dynamics by setting $u_t$ equal to the death state for all $t>0$ if the initial condition is the death state. To state our main results, we define the Markov transition semigroup $\mathcal {P}_t$ by $(\mathcal {P}_t \phi )(u_0) = \mathbb {E}_{u_0}\big [\phi (u_t)\big ]$,
where $\phi \colon \bar {\mathcal {C}}^{(\frac 12-\alpha )^-}\rightarrow \mathbb {R}$ is a bounded measurable function. This extends in a natural way to a transition measure $\mathcal {P}_t(u_0,K)=(\mathcal {P}_t \mathbf {1}_K)(u_0)= \mathbb {P}_{u_0}(u_t \in K)$ for measurable subsets K of $\bar {\mathcal {C}}^{(\frac 12-\alpha )^-}$ and to the left action of $\mathcal {P}_t$ on probability measures $\mu $ on $\bar {\mathcal {C}}^{(\frac 12-\alpha )^-}$ given by $(\mu \mathcal {P}_t)(K)=\int \mathcal {P}_t(u_0,K)\,\mu (du_0)$.
Our main result will show that, at a fixed time t, the law of the random variable $u_t$ on the event $\{\tau _\infty> t\}$ is absolutely continuous with respect to the law of the Ornstein-Uhlenbeck process obtained by removing the nonlinearity from equation (2.1). In other words, if we define $\mathcal {Q}_t(z_0,K) =\mathbb {P}_{z_0}( z_t \in K)$, where $z_t$ solves the linear equation (2.3), namely equation (2.1) with the nonlinear term removed,
then we have the following result, which will follow from more detailed results proved in later sections.
Theorem 2.1. For any $\alpha <1$, $t>0$ and any $u_0, z_0 \in \mathcal {C}^\gamma $ with $\gamma>-1$ and zero spatial mean, the restriction of $\mathcal {P}_t(u_0,\;\cdot \;)$ to the event of nonexplosion, which has total mass $p=\mathbb {P}_{u_0}(\tau _\infty> t)$, is equivalent as a measure to $\mathcal {Q}_t(z_0,\;\cdot \;)$.Footnote 2 In other words, the law of $u_t$, conditioned on nonexplosion by time t, is equivalent as a measure to the law of $z_t$ when $u_t$ and $z_t$ start from $u_0$ and $z_0$, respectively.
Remark 2.2. The absolute continuity given in Theorem 2.1 implies that any almost sure property of the Gaussian measure $\mathcal {Q}_t(z_0,\;\cdot \;)$ is shared by $\mathcal {P}_t( u_0,\;\cdot \;)$. For example, the spatial Hölder regularity or the Hausdorff dimension of spatial level-sets of solutions of equation (2.1) are the same as those of equation (2.3). See [BR14] for results along these lines.
Unfortunately, our methods are not (yet!) powerful enough to cover the case $\alpha =1$. We remark, though, that with a little effort and the help of [HM18], one can prove that, at least when the diffusion operator in equation (2.1) is $Q=\partial _x \approx A^{\frac 12}$, the laws of the solutions of equation (2.1) and equation (2.3) at each fixed time are both equivalent to the law of white noise. Indeed, by [HM18], the transition semigroup of equation (2.1) is strong Feller. If one assumes that the transition semigroup is irreducible, then by a theorem of Khasminskii (see, for instance, [DPZ96]), the transition probabilities are equivalent. The final part of the argument is, again by [HM18], that white noise is invariant for the semigroup. We do not know if equivalence holds beyond the case $Q=\partial _x$.
3 Central ideas in Theorem 2.1
We will now give the central arc of three different (but related) arguments that can prove Theorem 2.1. Although there is overlap in the arguments, we feel each highlights a particular connection and helps to give a more complete picture.
3.1 First decomposition
The core idea used to prove Theorem 2.1 is the decomposition of the solution of equation (2.1) into the sum of different processes of increasing regularity. In equation (2.1), the smoothness of solutions is dictated solely by the stochastic convolution term, namely the last term on the right-hand side of equation (2.2).
We begin by taking the stochastic convolution as the first level in our decomposition. This first level will be fed through the integrated nonlinearity, namely the first term on the right-hand side of equation (2.2). We will then keep only the roughest component and use it to force the next level in our hierarchy. At each level, we will include a stochastic forcing term that, although smoother than the forcing at the previous level, will be sufficiently rough to generate a stochastic convolution that is less smooth than the forcing generated by the previous level through the nonlinearity. Eventually, we will reach a level where the terms in the equation can be handled by classical methods and the expansion terminated.
More concretely, fixing the number of levels n, $n \in \mathbb {N}$, we begin by positing the existence of processes $X^{(0)}, \dots , X^{(n)}$ and a remainder term $R^{(n)}$ so that
where $\overset {\tiny{\mbox{dist}}}{=}$ denotes equality in law. We define the terms in this expansion by
where the $Q_1,\dots ,Q_n, \widetilde {Q}_n$ are a collection of linear operators and the $W_t^{(0)}, \dots , W_t^{(n)}, \widetilde {W} _t^{(n)}$ are a collection of standard, independent, cylindrical Wiener processes and
with $X^{(0, -1)}_t = X^{(0, -2)}_t = 0$ . If we choose $X_0^{(0)}=\cdots =X_0^{(n)}=0$ and $R_0^{(n)}=u_0$ and require
then
and, at least formally, the condition given in equation (3.1) holds. To make the argument complete, we need to demonstrate that each of the equations in (3.2) is well defined and has at least local solutions. The number of levels n will be chosen as a function of $\alpha $ . The closer $\alpha $ is to one, the more levels are required.
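As a sketch of the hierarchy just described (the sign conventions and the exact indexing of the operators are assumptions; the drift at level k is the one identified in the feed-forward discussion below), the kth level should look roughly like
$$ \mathscr {L} X^{(k)}_t = B(X^{(0,k-1)}_t) - B(X^{(0,k-2)}_t) + Q_k\, \dot W^{(k)}_t, \qquad X^{(0,k)}_t := X^{(0)}_t + \cdots + X^{(k)}_t, $$
with the remainder $R^{(n)}$ collecting the drift left over after the telescoping, and with the operators chosen so that the independent forcings recombine in law to the original one, presumably in the form
$$ \sum _k Q_k Q_k^* + \widetilde Q_n \widetilde Q_n^* = Q Q^* . $$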
Notice that because of equation (3.4), which followed from equation (3.3), the stochastic convolution from equation (2.2) satisfies
where
with initial conditions $Z_0^{(0)}= \cdots =Z_0^{(n)}=0$ and $\widetilde {Z}_0^{(n)}=z_0$ . It is worth noting that $ Z_t^{(0)}=X_t^{(0)}$ and that all equations but $R_t^{(n)}$ are ‘feed-forward’ in the sense that the forcing drift $B(X^{(0,k-1)}_t) - B(X^{(0,k-2)}_t)$ in the kth level is adapted to the filtration $\mathcal {F}_t^{(k-1)} = \sigma ( W_s^{(j)}: j \leq k-1, s \leq t)$ . In this sense, conditioned on $\mathcal {F}_t^{(k-1)}$ , $X^{(k)}_t$ is a forced linear equation with both stochastic and (conditionally) deterministic forcing. This in turn implies that conditioned on $\mathcal {F}_t^{(k-1)}$ , $X^{(k)}_t$ is a Gaussian random variable.
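Concretely, a sketch of the levels of the stochastic convolution referred to here (the displays (3.5) and (3.6) are assumed rather than quoted): the $Z^{(k)}$ and $\widetilde Z^{(n)}$ are the Ornstein-Uhlenbeck processes
$$ \mathscr {L} Z^{(k)}_t = Q_k\, \dot W^{(k)}_t, \qquad \mathscr {L} \widetilde Z^{(n)}_t = \widetilde Q_n\, \dot {\widetilde W}^{(n)}_t, $$
so that, in view of equation (3.3), the sum $Z^{(0)}_t + \cdots + Z^{(n)}_t + \widetilde Z^{(n)}_t$ has the same law as the stochastic convolution from equation (2.2).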
We will prove Theorem 2.1 by showing that for any fixed $t>0$ and all $k=1,\dots ,n$ ,Footnote 3
We will see in Section 6.1 that the random existence time of $R^{(n)}$ is almost surely equal to that of u, and hence we will use $\tau _\infty $ in both settings. We will show in Section 6.2 how this sequence of statements about the conditional laws combined with the structure of equation (3.2) will imply Theorem 2.1.
Remark 3.1. We have chosen to structure the initial conditions in equation (3.2) with $X_0^{(0)}=\cdots =X_0^{(n)}=0$ and $R_0^{(n)}=u_0$ . This is solely for convenience, as a number of the estimates for $X_t^{(k)}$ and $Z_t$ are simpler to develop without the mild complication of initial conditions. We could have just as easily taken $X_0^{(0)}=u_0$ and the rest zero or $R_0^{(n)}=0$ and $X_0^{(0)}=\cdots =X_0^{(n)}=\frac {1}{n}u_0$ .
3.2 Second decomposition
Accepting the result of equation (3.7) from the previous section, it might seem reasonable to replace the instances of $X_t^{(j)}$ in equation (3.1) with $Z_t^{(j)}$. This would have a number of advantages. One is that the $Z_t^{(j)}$ are explicit Gaussian processes that will simplify the rigorous definition of some of the more singular terms in the decomposition. Additionally, it will emphasise the relationship between the $X_t^{(j)}$ construction in equation (3.1) and the tree constructions developed in the analysis of singular SPDEs (such as [Hai13, MWX15, GP17]), which is driven by isolating the singular objects in the solution.
Motivated by this discussion, we now consider the expansion
where
and the Qs are again as in equation (3.3) and
with $Z_t^{(1)}, \dots , Z_t^{(n)}$ again defined by equation (3.6). Again, we take $Y_0^{(0)}=\cdots =Y_0^{(n)}=0$ and $S_0^{(n)}=u_0$ , where $u_0$ was the initial condition of equation (2.1).
Although in many ways we find the X expansion in equation (3.2) more intuitive and better motivated, we will find it easier to prove Theorem 2.1 for the Y expansion in equation (3.9) first. We will then use it to deduce Theorem 2.1 for the X expansion in equation (3.2).
More concretely, we will begin by proving that for each $k \leq n$ ,
Then we will deduce that for $k=1,\dots ,n$ ,
By combining equation (3.10) with equation (3.11), we can deduce that equation (3.7) holds.
There is no fundamental obstruction to proving equation (3.7) directly, as it essentially requires the same calculation as proving equation (3.11). Similarly, we could have directly proven equation (3.7) before equation (3.10); however, along the way we would have collected most of the estimates needed to prove equation (3.10). We hope that proving all three statements, namely equations (3.7), (3.11) and (3.10), will help show the relationship between different ideas around singular SPDEs.
Remark 3.2. The careful reader has likely noticed that the absolute continuity statements in equations (3.7), (3.11) and (3.10) are stated only at a fixed time t and not on the space of trajectories from $0$ to t. Hence one is not free to prove the result for the Y expansion by simply replacing the X with Z in equation (3.2) by a change of measure on path-space to obtain equation (3.9). There is strong evidence that Z is not absolutely continuous with respect to X on path-space since the conditions we are checking are optimal in finite dimensions. It is of course possible that our estimates are suboptimal. See Remark 5.7 for a discussion of optimality. Even though we cannot prove absolute continuity on path-space, we will show that Y and X satisfy a modified version of absolute continuity on path-space relative to Z that will imply equation (3.11).
3.3 Cameron-Martin Theorem and Time-Shifted Girsanov Method
The Cameron-Martin Theorem and the closely related Girsanov Theorem are the classical tools for proving two stochastic processes are absolutely continuous. Both describe when a ‘shift’ in the drift can be absorbed into a stochastic forcing term while keeping the law of the resulting random variable or stochastic process absolutely continuous with respect to the original law.
More concretely, to prove equation (3.10), we will absorb the drift terms on the right-hand side of the equations for $Y_t^{(1)}, \dots , Y_t^{(n)}, S_t^{(n)}$ into the stochastic forcing term on the right-hand side of the same equation. In the case of the Cameron-Martin Theorem, this absorption is done using the integrated, mild form of the equation, analogous to equation (2.2). The resulting expressions are identical in form to the analogous Z expressions implied by equation (3.6). The Girsanov Theorem proceeds similarly to the Cameron-Martin Theorem, but the drift is removed instantaneously at the level of the driving equation and not in an integrated form as in the Cameron-Martin Theorem. We will see that this leads to both stronger conclusions and a need for stronger assumptions in order to apply the Girsanov Theorem.
In [MS05, MS08, Wat10], the Girsanov Theorem was recast by shifting the infinitesimal perturbation injected by the drift at one instant of time to a later instant. By shifting the drift perturbation forward in time by the flow $e^{-tA}$, it is regularised in space. This regularised, time-shifted drift can then be compensated by the noise at the later moment in time, thereby extending the applicability of Girsanov’s Theorem. This extended applicability will be critical to our results. The price of the extended applicability is that only a modified form of trajectory-level absolute continuity is proven, which nonetheless is sufficient to deduce absolute continuity at the terminal time of the path. We have dubbed this approach the Time-Shifted Girsanov Method. A full discussion with all of the details is provided in Section 5.3.
3.4 Gaussian regularity
When applying either the Cameron-Martin Theorem or the Time-Shifted Girsanov Method, there is a tension between the roughness of the stochastic forcing, set by $Q\approx A^{\frac {\alpha }2}$, and the roughness of the drift term on the right-hand side of the kth equation. The gap between these regularities cannot be too big. Hence, it is critical to understand the regularity of the solutions and of the drift term involving the nonlinearity B.
When $\alpha < \frac 12$, we will see that the Time-Shifted Girsanov Method can be applied directly to equation (2.1) to obtain the desired result following the general outline of [MS08, Wat10]. When $\alpha \in [\frac {1}{2},1)$, the multilevel decomposition from equation (3.2) and equation (3.9) will be required to make sure the jump in regularity between the stochastic forcing in an equation and the drift to be removed is not too large.
The reason for this change at $\alpha =\frac 12$ is fundamental to our discussion. When $\alpha < \frac 12$, the product of the spatial functions f and g appearing in $B(f,g)$ is classically well-defined, as the functions have positive Hölder regularity; hence, pointwise multiplication poses no difficulty. When $\alpha \geq \frac 12$, we must leverage the Gaussian structure of the specific processes being multiplied and a renormalisation procedure to make sense of the product of some of the terms in $B(f,g)$. When $\alpha \in [\frac 12,\frac 34)$, through these considerations, we will always be able to give meaning to $B(f,g)$ at each moment of time in all the needed cases. When $\alpha \in [\frac 34,1)$, at times we must consider the time-integrated version of the drift term from the mild formulation (analogous to the second term from the right in equation (2.2)) and leverage time decorrelations of the Gaussian process to make sense of the nonlinear term in its integrated form $J(f,g)_t$ defined in equation (4.4).
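A sketch of the regularity bookkeeping behind these thresholds, taking the roughest Gaussian input to lie in $\mathcal {C}^{(\frac 12-\alpha )^-}$ (the borderline cases are treated carefully in Section 4 and Appendix B):
$$ \underbrace {(\tfrac 12-\alpha )+(\tfrac 12-\alpha )>0}_{\text {classical product}} \iff \alpha <\tfrac 12, \qquad \underbrace {(\tfrac 12-\alpha )+(\tfrac 12-\alpha )>-\tfrac 12}_{\text {renormalised product at a fixed time}} \iff \alpha <\tfrac 34, $$
while for $\alpha \in [\tfrac 34,1)$ the sum of regularities drops to $-\tfrac 12$ or below, and only the time-integrated form $J(f,g)_t$ remains available.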
3.5 Relation to trees and chaos expansions
We now briefly explore, in a heuristic way, the relationship between this work and the tree representations of stochastic Gaussian objects from [Hai13, MWX15, GP17]. One way to view the trees in those works is to consider the expansion one obtains by formally substituting the integral representation of z given in equation (3.6) back into the first integral term on the right-hand side of equation (2.2). Repeated application of this substitution is one way to develop an expansion of the solution $u_t$ in terms of finite trees of z with a remainder. These tree representations of stochastic objects are key to the analysis in those works. We will later see that the drift terms of equation (3.9) can be decomposed into some of the same trees of z.
We push the idea of tree expansions further by grouping the trees formed by Z with different regularity and adding an extra stochastic forcing at each level. Here, looking back at equation (3.5), as we have subdivided our noise into n levels ( $Z^{(1)},\dots , Z^{(n)}$ ) and one remainder term ( $\widetilde {Z}^{(n)}$ ), we have a tree-like expansion mixing the Gaussian inputs of different levels. This work can be viewed as giving a more refined analysis of the stochastic objects in equation (2.1), since we finely decompose the noise into Gaussian processes of different regularity. We will see that our eventual assumptions on the Qs will imply that the regularity of the $Z^{(k)}$ increases with k.
Note that the drifts in equation (3.2) and equation (3.9) can be expanded fully in terms of the sum of trees of the Zs. Hence, we can understand the drifts in equation (3.2) and equation (3.9) as two different groupings of a subset of the tree objects from this expansion. We group them at each level in these two ways such that their regularity allows the use of the Cameron-Martin Theorem or the Time-Shifted Girsanov Method at that level. Clearly this grouping is not unique, but it makes analysis based on regularity more straightforward.
4 Preliminaries
We now collect a number of estimates and observations that will be needed to prove the versions of Theorem 2.1 based on the expansions across noise levels given in equation (3.2) and equation (3.9). We start by fixing the function-analytic setting in which we will work and recalling some basic estimates on the operator A and the semigroup it generates. We then discuss the stochastic convolution and the regularity of the solutions of equation (2.1) and of equations (3.2) and (3.9).
4.1 Function spaces and basic estimates
We shall denote by $\mathcal {C}^\gamma $, $\gamma \in \mathbb {R}$, the separable version of the Besov-Hölder space $B^\gamma _{\infty ,\infty }(\mathbb {T})$ of order $\gamma $, namely the closure of periodic smooth functions with respect to the $B^\gamma _{\infty ,\infty }$ norm. See Appendix A for some details about Besov spaces. If $f \in \mathcal {C}^{\gamma }$, we will say that f has (Hölder) regularity $\gamma $. We will write $\mathcal {C}^{\gamma ^-}$ for the intersection of all of the spaces $\mathcal {C}^{\beta }$ with $\beta < \gamma $.Footnote 4
Given a Banach space $\mathbf {X}$ of functions $f(x)$ on $\mathbb {T}$ , we will write $C_T\mathbf {X}$ for the space of time-dependent functions $f(t,x)$ on $[0,T] \times \mathbb {T}$ such that for each $t \in [0,T]$ , $f(t,\;\cdot \;) \in \mathbf {X}$ and as $s \rightarrow t$ , we have that $\|f(s,\;\cdot \;)-f(t,\;\cdot \;)\|_{\mathbf {X}}\rightarrow 0$ . We will endow this space with the norm
Typical examples we will consider are $C_T\mathcal {C}^\beta $ and $C_TL^2$ . If $f \in C_T\mathcal {C}^{\gamma }$ , we will say that f has (Hölder) regularity $\gamma $ (in space). For convenience, we will write $C_T \mathcal {C}^{\gamma ^-}$ for the intersection of all the spaces $C_T \mathcal {C}^{\beta }$ with $\beta < \gamma $ .
As we are interested in solutions that might have a finite time of existence, we will introduce the one-point compactification $\overline {\mathcal {C}}^{\gamma }$ of $\mathcal {C}^{\gamma }$, obtained by adjoining the death state. $\overline {\mathcal {C}}^{\gamma }$ is a topological space where the open neighbourhoods of the death state are given by the sets $\{ u \in \mathcal {C}^{\gamma }: \|u\|_{\mathcal {C}^\gamma }>R\}$ for $R>0$, together with the death state itself. With a light abuse of notation, we will write $C_T \overline {\mathcal {C}}^{\gamma }$ to mean the space of all continuous functions on $[0,T]$ taking values in $\overline {\mathcal {C}}^{\gamma }$. We do not place a norm on $C_T \overline {\mathcal {C}}^{\gamma }$ and view it only as a topological space. Observe that if $u \in C_T \overline {\mathcal {C}}^{\gamma }$ and $\tau \in (0,\infty ]$ are such that, for $t \in [0,T]$, $u_t$ equals the death state if $t \geq \tau $ and $u_t \in \mathcal {C}^{\gamma }$ if $t < \tau $, then for all $t \in [0,T] \cap [0,\tau )$, $u \in C_t \mathcal {C}^{\gamma }$ because u is continuous in $\overline {\mathcal {C}}^{\gamma }$. We feel this justifies the notation $ C_T \overline {\mathcal {C}}^{\gamma }$ for $\overline {\mathcal {C}}^{\gamma }$-valued continuous functions even though the space is not endowed with the supremum norm. Additionally, because of our choice of open neighbourhoods of the death state, if $\tau < \infty $, then $\|u_t\|_{\mathcal {C}^\gamma } \rightarrow \infty $ as $t \rightarrow \tau $. Again, we will write $C_T \overline {\mathcal {C}}^{\gamma ^-}$ for the intersection of all of the spaces $C_T \overline {\mathcal {C}}^{\beta }$ with $\beta < \gamma $.
We collect a few useful properties of A and its semigroup in the following proposition. Here and throughout the text, we write $a \lesssim b$ to mean there exists a positive constant c so that $a \leq c b$ . When the constant c depends on some parameters, we will denote them by subscripts on $\lesssim $ . We will write $\gtrsim $ when the reverse inequality holds for some constant and $\eqsim $ when both $\lesssim $ and $\gtrsim $ hold (for possibly different constants).
Proposition 4.1. For $\gamma \in \mathbb {R}$ , $\partial _x:\mathcal {C}^\gamma \to \mathcal {C}^{\gamma -1}$ and $A:\mathcal {C}^\gamma \to \mathcal {C}^{\gamma -2}$ are bounded linear operators. Additionally, if $\delta \in \mathbb {R}$ with $\gamma \leq \delta $ , then
In particular, for any $\epsilon>0$ and $\delta ,t>0$ ,
Using these estimates, one can obtain the regularity of the solution $z_t$ of equation (2.3).
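The smoothing estimates referred to are presumably of the standard form; a sketch of the bound in the way it is used later (for instance in Remark 4.5), with the precise statement of Proposition 4.1 possibly differing in constants and time range:
$$ \|e^{-t A} f\|_{\mathcal {C}^{\delta }} \lesssim t^{-\frac {\delta -\gamma }{2}}\, \|f\|_{\mathcal {C}^{\gamma }}, \qquad t>0,\ \gamma \leq \delta . $$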
Remark 4.2 (Regularity of stochastic convolution)
Using the results given in equation (4.1) and some classical embedding theorems, we have that if
with $\Xi \approx A^{\frac {\delta }2}$ (and hence a mild solution of $\mathscr {L} z_t = \Xi \,dW_t$ ), then $\|z_t\|_{\mathcal {C}^\gamma } < \infty $ uniformly on finite time intervals for all $\gamma <\frac 12-\delta $ . More compactly, $z \in C_t\mathcal {C}^{(\frac 12-\delta )^-}$ for any $t>0$ almost surely.
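A quick heuristic for this regularity, as a sketch in terms of Fourier modes (taking $A e_k = k^2 e_k$ on mean-zero functions and $\Xi \Xi ^* e_k \eqsim |k|^{2\delta } e_k$):
$$ \mathbb {E}\, |\langle z_t, e_k\rangle |^2 \eqsim \int _0^t e^{-2(t-s) k^2}\, |k|^{2\delta }\, ds = \frac {|k|^{2\delta }}{2 k^2}\big (1-e^{-2 t k^2}\big ) \eqsim _t |k|^{2(\delta -1)}, $$
so $z_t$ behaves like a Gaussian field with covariance comparable to $A^{\delta -1}$, whose realisations lie in $\mathcal {C}^{\gamma }$ for every $\gamma <\frac 12-\delta $ by the usual Kolmogorov/Besov embedding arguments.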
Given the structure of the nonlinearity B, we are particularly interested in the properties of the pointwise product of two functions. We now summarise the results in the classical setting and recall the results in the Gaussian setting.
Remark 4.3 (Canonical regularity of Gaussian products)
It is a classical result that if $f \in \mathcal {C}^\delta $ and $g \in \mathcal {C}^\gamma $ , then their product is well-defined if $\delta +\gamma>0$ with $fg \in \mathcal {C}^{\gamma \wedge \delta }$ . This can be summarised more completely in the statement that the pointwise product $(g,f) \mapsto gf$ is a continuous bilinear operator between $\mathcal {C}^\gamma \times \mathcal {C}^\delta $ to $\mathcal {C}^{\gamma \wedge \delta }$ if $\gamma +\delta>0$ .
When $f \in \mathcal {C}^\delta $ and $g \in \mathcal {C}^\gamma $ with $\gamma +\delta \le 0$, there is no canonical way to define the product. A critical observation for this work, and most of the recent progress in singular PDEs [Hai13, GIP15, MWX15, GP17, CC18], is that even when $\gamma +\delta \le 0$, one can often define the product in $B(f,g)$ via a renormalisation procedure to have canonical regularity by leveraging the particular Gaussian structure of f and g. We will see that this is not possible in the needed cases when $\gamma +\delta \le -\frac 12$. However, we still can make sense of $B(f,g)$ convolved in time with the heat semigroup by leveraging the specific structure of the time correlations of the specific f and g of interest.
With this fact about Gaussians in mind, we make the following definition to simplify discussions.
Definition 4.4. Given $f \in \mathcal {C}^\delta $ and $g \in \mathcal {C}^\gamma $ , we will say that the product $fg$ has the canonical regularity if it is well-defined, possibly after a renormalisation procedure, with $fg \in \mathcal {C}^{r}$ , where $r=\gamma \wedge \delta \wedge (\gamma + \delta )$ .
4.2 Regularity of the mild form of the nonlinearity
Looking back at equation (2.2), we see that the nonlinearity B integrated in time against the heat semigroup (namely, the first term on the right-hand side of this equation) will be a principal object of interest. We now pause to study the main properties of this object while postponing some more technical considerations to the Appendix.
In the sequel, it will be notationally convenient to define the bilinear operator $J(f,g)_t$ by
and $J(f)_t=J(f,f)_t$ .
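Spelled out, and up to the overall sign convention tied to equation (2.2) (an assumption here), this operator is the nonlinearity convolved in time with the heat semigroup:
$$ J(f,g)_t = \int _0^t e^{-(t-s) A}\, B(f_s, g_s)\, ds = \int _0^t e^{-(t-s) A}\, \partial _x\big (f_s\, g_s\big )\, ds . $$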
Remark 4.5 (Canonical regularity of J)
If $f \in C_t\mathcal {C}^\gamma $ and $g \in C_t\mathcal {C}^\delta $ with $\gamma + \delta>0$ , then $B(f, g) \in C_t\mathcal {C}^{r -1}$ , where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ , which implies that
Here we have used the estimate from Proposition 4.1. Since this last integral is finite when $\frac 12(\beta -r+1)<1$, that is, when $\beta < r+1$, we deduce that $J(f,g)_t \in \mathcal {C}^{(r+1)^-}$, where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$.
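Written out, the chain of bounds sketched in this remark is presumably the following (using the smoothing estimate recalled after Proposition 4.1):
$$ \|J(f,g)_t\|_{\mathcal {C}^{\beta }} \leq \int _0^t \big \|e^{-(t-s) A} B(f_s,g_s)\big \|_{\mathcal {C}^{\beta }}\, ds \lesssim \int _0^t (t-s)^{-\frac 12(\beta -r+1)}\, \|B(f_s,g_s)\|_{\mathcal {C}^{r-1}}\, ds , $$
with the integral finite exactly when $\frac 12(\beta -r+1)<1$.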
As mentioned in Remark 4.3 (and proved in Appendix B), we can prove that the product $fg$ is well-defined with canonical regularity in the specific examples needed in this work when $f \in C_t\mathcal {C}^\gamma $ and $g \in C_t\mathcal {C}^\delta $ with $\gamma + \delta> -\frac 12$. However, we will show in Appendix B that when $\gamma +\delta> -\frac 32$, $J(f,g)$ is well-defined with $J(f,g) \in C_t\mathcal {C}^{(r+1)^-}$, where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$, even though the product $fg$ might not be well-defined with its canonical regularity.
Definition 4.6. Given $f \in C_t\mathcal {C}^\delta $ and $g \in C_t\mathcal {C}^\gamma $ , we will say that $J(f,g)$ has canonical regularity if $J(f,g)$ is well-defined (possibly via a renormalisation procedure) with $J(f,g) \in C_t \mathcal {C}^{(r+1)^-}$ , where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ .
Remark 4.7 (J regularises in our setting)
Looking at equation (2.1), it is relevant to understand when the map $f \mapsto J(f)_t$ produces an image process that is more regular than the input process f. Assume that we are in the setting where $J(f)_t$ has canonical regularity. Then, for $f \in \mathcal {C}^\gamma $, the canonical regularity of $J(f)_t$ is $(\gamma +1)^-$ if $\gamma>0$ and $(2\gamma +1)^-$ if $\gamma <0$; hence $J(f)_t$ is smoother than f since $\gamma < \gamma +1$ in the first case, and whenever $\gamma < 2\gamma +1$ in the second. Thus, J is always regularising when applied to functions of positive regularity, and it will be regularising in the canonical setting for a distribution of negative Hölder regularity greater than $-1$. We will always find ourselves in one of these two settings.
Building from the above, it is also relevant to understand how the regularising effect of J interacts with products. More specifically, later we need to compute the regularity of sums of terms that are essentially like $B(z, z')$, $B(J(z), z)$, $B(J(z))$, and so on, where $z'$ is another Ornstein-Uhlenbeck process with positive Hölder regularity. In our setting, we will see that a term like $B(z, z')$ is the least regular term, which dictates the canonical regularity of the sum, while all other terms with more Js involved are more regular.
The regularising nature of J highlighted in Remark 4.7 is closely related to the use of fixed point methods to prove the existence and uniqueness of local in-time solutions with the needed regularity. This is explored further in Appendix C.
4.3 Regularity of solutions
We now turn to the regularity of the solutions of the Burgers equation (2.1) and of the equations in our decompositions (3.2) and (3.9). Most of these are forced linear equations; the exceptions are the remainder equations for $R_t$ and $S_t$ and the original Burgers equation (2.1). A complete treatment of these latter, nonlinear cases involves a fixed point argument that we postpone to Appendix C. The discussion in this section focuses on the forced linear setting, although it will still be illuminating for the nonlinear cases.
We begin by studying a more general equation that can subsume most of the equations in our decompositions in equation (3.2) and equation (3.9). Since all of the forcing drift terms on the right-hand side of the equations are a finite sum of terms of the form $B(f,g)$ for some f and g, it is enough to consider the more general equation
for some given $f \in C_t\mathcal {C}^\gamma $ and $g \in C_t\mathcal {C}^\delta $ and $\Xi \approx A^{\beta /2}$ for some $\beta , \gamma , \delta \in \mathbb {R}$. All of the forced linear equations of interest are a finite sum of equations of this form.
The solution to equation (4.5) with initial condition $v_0$ is given by
where now $z_t$ is the stochastic convolution solving equation (4.3) and J is again defined by equation (4.4). We will assume that f and g are such that $J(f,g)_t$ has canonical regularity in the sense of Definition 4.6.
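As a sketch (the sign with which $B(f,g)$ enters equation (4.5) is an assumption), the general equation and its mild solution take the form
$$ \mathscr {L} v_t = B(f_t, g_t) + \Xi \, \dot W_t, \qquad v_t = e^{-t A} v_0 + J(f,g)_t + z_t , $$
where the three terms on the right are, respectively, the propagated initial condition, the integrated drift and the stochastic convolution discussed next.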
For any $t>0$ and any reasonable $v_0$ , $e^{-t A}v_0 \in \mathcal {C}^b$ for all $b \in \mathbb {R}$ . Hence, the first term will not be the term that fixes the regularity of the equation, and either J or z will determine the maximal regularity of the system.
By Remark 4.2, the stochastic convolution $z \in C_t\mathcal {C}^{(\frac 12-\beta )^-}$ . By Definition 4.6, we have that $J(f,g) \in C_t \mathcal {C}^{(r +1)^-}$ , where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ . Hence the regularity of the solution will be set by the stochastic convolution z to be $C_t\mathcal {C}^{(\frac 12-\beta )^-}$ if $r> \frac 12 - \beta $ . We will arrange our choice of parameters so that equations (3.2) and (3.9) will always satisfy this condition. Hence, the equations will always have their regularity set by the stochastic convolution term in the equation. With this motivation, we make the following definition.
Definition 4.8. We say that an equation of the general form in equation (4.5) has canonical regularity if $v_t$ has the same Hölder regularity as the associated stochastic convolution $z_t$ uniformly on finite time intervals.
The above considerations are also relevant to assessing the regularity of the remaining equations in (3.2) and (3.9) as well as the original Burgers equation. Observe that the solution to equation (2.1) can be written as
where $z_t$ solves equation (2.3). Since $Q \approx A^{\alpha /2}$, we know from Remark 4.2 that $z \in C_t\mathcal {C}^{(\frac 12-\alpha )^-}$. Since we are interested in $\alpha < 1$, we have that $\frac 12-\alpha> -\frac 12$. In light of Remark 4.7, if $u_t$ has the same regularity in space as $z_t$, then $J(u)_t$ will be more regular in space (assuming we can show that $J(u)_t$ has the canonical regularity dictated by u). Hence, it is expected that in our setting the regularity of equation (2.1) will be set by the regularity of the stochastic convolution term, so $u \in C_t\mathcal {C}^{(\frac 12-\alpha )^-}$. For more details, see the discussion in Appendix C.
5 Absolute continuity of measures
We now turn to the main tools used to establish the absolute continuity statements required to prove Theorem 2.1 as outlined in Section 3.3.
Whether at the level of the Burgers equation (2.1) or when considering one of the levels in the expansions in equation (3.1) or equation (3.8), we are left considering when the law of $v_t$ is equivalent with respect to the law of $z_t$ , where
Here $\Xi \approx A^{\beta /2}$ for some $\beta \in \mathbb {R}$, and $F_t$ is a continuous (in time) stochastic process whose spatial regularity will be specified presently. We always assume that $F_t$ is adapted to some filtration to which $W_t$ is also adapted. In some instances, it is possible that F is independent of W.
When all the terms are well-defined and $z_0=v_0$, observe that $v_t = z_t + h_t$, where $h_t$ is the drift F integrated in time against the heat semigroup, as in equation (5.2).
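As a sketch (with the displays (5.1) and (5.2) assumed rather than quoted), the pair of equations and the associated shift are
$$ \mathscr {L} v_t = F_t + \Xi \, \dot W_t, \qquad \mathscr {L} z_t = \Xi \, \dot W_t, \qquad h_t = \int _0^t e^{-(t-s) A} F_s\, ds . $$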
5.1 The Cameron-Martin Theorem
The Cameron-Martin Theorem gives if and only if conditions describing when $\mathrm {Law}(z_t)$ is equivalent to $\mathrm {Law}(v_t)$, with $v_t=z_t+h_t$, for a fixed time t and a deterministic shift $h_t$. If $\Xi \approx A^{\beta /2}$, then the covariance operator of the Gaussian random variable $z_t$ is (up to a compact operator) $A^{\beta -1}$. Then the Cameron-Martin Theorem requires that $\|A^{\frac {1-\beta }{2}} h_t\|_{L^2} < \infty $ (see [DP06, Theorem 2.8]). If $F_t$ is random but independent of the stochastic forcing $W_t$, then we can still apply the Cameron-Martin Theorem by first conditioning on the trajectory of F. This produces the following sufficient condition for absolute continuity, which is a version of the classical Cameron-Martin Theorem adapted to our setting.
Theorem 5.1 (A version of Cameron-Martin)
In the setting of equation (5.1) with $z_0=v_0$ , let $\mathcal {G}_t$ be a filtration independent of the Brownian forcing $W_t$ . Let $h_t$ be as in equation (5.2) and adapted to $\mathcal {G}_t$ . If for some $ t>0$ , $\|A^{\frac {1-\beta }{2}} h_t\|_{L^2} < \infty $ almost surely, then $\mathrm {Law}(z_t+h_t\mid \mathcal {G}_t )$ is equivalent as a measure to $\mathrm {Law}(z_t)$ almost surely. In particular, it is sufficient that $h_t \in \mathcal {C}^\gamma $ almost surely for $\gamma +\beta>1$ .
Remark 5.2. If $F=B(f,g)$ for some $f \in C_t\mathcal {C}^\gamma $ and $g\in C_t\mathcal {C}^\delta $ such that $J(f,g)_t$ has canonical regularity, then $h_t=J(f,g)_t \in \mathcal {C}^{(r+1)^-}$ , where $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ . Hence, the condition in Theorem 5.1 becomes $r+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta>0$ .
Notice that this remark extends to the setting where
for some $ c_i \in \mathbb {R}$ , $f^{(i)} \in C_t\mathcal {C}^{\gamma _i}$ and $g^{(i)} \in C_t\mathcal {C}^{\delta _i}$ , where $r_i=\gamma _i\wedge \delta _i \wedge (\gamma _i+\delta _i)$ and the condition on the indexes becomes $r+\beta>0$ with $r=\min r_i$ .
Remark 5.3. Theorem 5.1 immediately extends to the setting where $v_t=z_t+h_t+k_t$ and both $h_t$ and $k_t$ are adapted to $\mathcal {G}_t$ , with $h_t$ satisfying the assumptions of Theorem 5.1. Then $\mathrm {Law}(z_t+h_t+k_t \mid \mathcal {G}_t )$ is equivalent as a measure to $\mathrm {Law}(z_t+k_t \mid \mathcal {G}_t)$ almost surely.
Remark 5.4. The condition that $z_0=v_0$ is only for simplicity and not needed.
Proof of Theorem 5.1
Without loss of generality, we can take $z_0=v_0=0$ since the effect of the initial condition cancels out when looking at the difference between $z_t$ and $v_t$. Since $h_t$ is adapted to $\mathcal {G}_t$, we can apply the classical Cameron-Martin Theorem with $h_t$ considered deterministic by conditioning. A direct application of the Itô isometry to equation (4.3) shows that $z_t$ from equation (5.1) has covariance operator $C_t=\int _0^t e^{-(t-s) A}\Xi \Xi ^* e^{-(t-s) A} ds$. Because $\Xi \approx A^{\beta /2}$, we have that $C_t \approx \widetilde C_t= \int _0^t e^{-(t-s) A}A^\beta e^{-(t-s) A} ds$. Hence, the classical condition from the Cameron-Martin Theorem that $\|C_t^{-\frac 12} h_t\|_{L^2} < \infty $ is equivalent to $\|A^{\frac {1-\beta }{2}} h_t\|_{L^2} < \infty $. Since this condition is assumed to hold almost surely, the classical Cameron-Martin Theorem implies that $\mathrm {Law}(z_t+h_t\mid \sigma ( h_s : s\leq t)\,) $ is equivalent as a measure to $\mathrm {Law}(z_t)$ almost surely. Since $\sigma ( W_s : s \leq t)$ is independent of $\mathcal {G}_t$, we have that the complement of $\sigma ( h_s : s\leq t)$ in $\mathcal {G}_t$ is independent of the random measure $\mathrm {Law}(z_t+h_t\mid \sigma ( h_s : s\leq t)\,)$, which implies that $\mathrm {Law}(z_t+h_t\mid \mathcal {G}_t) $ is equivalent as a measure to $\mathrm {Law}(z_t)$. To verify the last remark, observe that since $h_t \in \mathcal {C}^\gamma $ almost surely, we have $A^{\frac {1-\beta }{2}} h_t \in \mathcal {C}^{\gamma +\beta -1}$ almost surely. Now since $\|A^{\frac {1-\beta }{2}} h_t\|_{L^2} \lesssim \|A^{\frac {1-\beta }{2}} h_t\|_{\mathcal {C}^\epsilon }$, we see that if $\beta + \gamma - 1> \epsilon $ for some $\epsilon>0$, the last remark holds. This is possible because we have assumed $\beta +\gamma>1$.
5.2 The standard Girsanov Theorem
The Girsanov Theorem is essentially the specialisation of the Cameron-Martin Theorem to the path-space of a stochastic differential equation, while relaxing the assumptions to allow random shifts in the drift that are adapted to the Brownian motion forcing the SDE.
We again consider the setting of equation (5.1). Since we will be discussing path-space measures, we will write $v_{[0,t]}$ and $z_{[0,t]}$ for the random variable denoting the entire path of v and z respectively, on the time interval $[0,t]$ . We now give a version of the Girsanov Theorem adapted to our setting.
Theorem 5.5 (A version of Girsanov)
In the setting of equation (5.1) with $z_0=v_0$ , let $\mathcal {F}_t$ be a filtration to which W is an adapted Brownian motion, and let $\mathcal {G}_t$ be a filtration independent of $\mathcal {F}_t$ . Let $\tau $ be a stopping time adapted to $\mathcal {H}_t=\sigma (\mathcal {G}_t,\mathcal {F}_t)$ with $\mathbb {P}(\tau>0) > 0$ such that $v_t$ and $F_t$ are stochastic processes adapted to $\mathcal {H}_{t\wedge \tau }$ so that $v_t$ solves equation (5.1) for $t < \tau $ , and
almost surely for all $t < \tau $ . Then $\mathrm {Law}( v_{[0,t]} \mid t < \tau , \mathcal {G}_t) \ll \mathrm {Law}( z_{[0,t]})$ almost surely.
In particular, it is sufficient that $F \in C_t\mathcal {C}^\sigma $ for $\sigma +\beta>0$ and any $t < \tau $ for equation (5.4) to hold almost surely.
Remark 5.6. In the setting of Theorem 5.5, we assume that there exist stochastic processes $f_t$ and $g_t$ so that $F_t=B(f_t,g_t)$ for all $t < \tau $ with $f \in C_t\mathcal {C}^\gamma $ , $g \in C_t\mathcal {C}^\delta $ and such that $F_t=B(f_t,g_t)$ has canonical regularity, namely $F \in C_t\mathcal {C}^{r-1}$ for $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ and all $t < \tau $ . Then Theorem 5.5 applies, and the regularity assumption in equation (5.4) is implied by $r-1+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta -1>0$ or, rather, $r+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta>1$ .
The condition $r+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta>1$ from Remark 5.6 should be contrasted with the condition $r+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta>0$ from Remark 5.2. Relative to the Cameron-Martin Theorem 5.1, the basic Girsanov Theorem 5.5 does have the advantage that $F_t$ can be adapted to the forcing Brownian motion rather than independent of it. Also, the results are not directly comparable, as Theorem 5.5 gives a statement at the level of path measures while Theorem 5.1 only gives equivalence at a fixed time t.
Proof of Theorem 5.5
We begin by defining
and
Observe that $v_t^M$ is well-defined on $[0,t]$ for any $t> 0$ due to the stopping time and that $v_t= v_t^M$ on the event $\{t < \tau _M\}$. Then
almost surely for some fixed constant C. Thus, the classical Kazamaki criterion (see, for instance, [Kry09]) ensures that the local martingale in the Girsanov Theorem is an integrable martingale. Let $v_{[0,t]}^{M}$ and $z_{[0,t]}$ be the path-valued random variables over the time interval $[0,t]$. We have that $\mathrm {Law}( v_{[0,t]}^{M} )$ is equivalent as a measure to $\mathrm {Law}( z_{[0,t]})$. Since $\mathcal {G}_t$ is independent of the Brownian motion W, we have that $\mathrm {Law}( v_{[0,t]}^{M} \mid \mathcal {G}_t)$ is equivalent as a measure to $\mathrm {Law}(z_{[0,t]})$ almost surely.
We now show that we can remove the truncation level M. Now let E be a measurable subset of paths of length T such that $\mathbb {P}( z_{[0,T]} \in E ) =0$ . To prove the absolute continuity claim, we need to show that $\mathbb {P}\big (v_{[0,T]}\in E,\tau>T \mid \mathcal {G}_T\big )=0$ almost surely. If $\mathbb {P}(\tau>T \mid \mathcal {G}_T)=0$ almost surely, we are done. Hence we proceed assuming $\mathbb {P}(\tau>T \mid \mathcal {G}_T)>0$ almost surely.
Because, conditioned on $\mathcal {G}_T$ , the law of $ v^M_{[0,T]}$ is equivalent to the law of $z_{[0,T]}$ almost surely, we know that $\mathbb {P}( v^M_{[0,T]}\in E \mid \mathcal {G}_T ) =0$ for all $M>0$ almost surely. We also know from the construction of $ v^M$ that $\mathbb {P}( v^M_{[0,T]}\in E, \tau _M> T \mid \mathcal {G}_T ) =\mathbb {P}( v_{[0,T]}\in E, \tau _M > T \mid \mathcal {G}_T )$ . Now, since
we have that
as already noted, since $\mathbb {P}\big (z_{[0,T]}\in E \big )=0$ .
Remark 5.7. We believe that in the context of diffusions, namely when the F in equation (5.1) is a nonanticipative function of v, the condition given in equation (5.4) of Theorem 5.5 should be optimal in the sense that equivalence holds if and only if equation (5.4) holds. This statement is true in finite dimension; see [LS01, Theorem 7.5].
Remark 5.8. Building on Remark 5.7, by adding a condition similar to equation (5.4) for z, it should be possible to prove equivalence of the laws. The extension of these results in the framework of Theorem 5.5 goes beyond the scope of this paper and will be addressed elsewhere.
Alternatively, if one has control of some moments of the solution sufficient to imply global existence (namely $\tau _\infty =\infty $ ), one can typically prove the equivalence between the laws in Remark 5.6. For example, this can be accomplished using the relative entropy calculations given in Lemma C.1 of [MS05].
5.3 The Time-Shifted Girsanov Method
We now present the Time-Shifted Girsanov Method, which was developed in [MS05, MS08, Wat10]. It will provide essentially the same regularity conditions in our setting as in Remark 5.2 while allowing adapted shifts as in the standard Girsanov Theorem. Interestingly, we will see that the classical Cameron-Martin Theorem still holds some advantages when dealing with extremely rough Gaussian objects.
Considering the mild-integral formulation of equation (5.1)
we can understand the first term as a shift of the Gaussian measure given by the second term. We will now recast the drift term in equation (5.5) to extend the applicability of the Girsanov Theorem.
We begin with the observation that for fixed $T>0$ ,
where
Since $2s-T \leq s$ for all $s \in [\frac {T}2,T]$ , $\widetilde F_s$ is adapted to the filtration of $\sigma $ -algebras generated by the forcing Wiener process W when $F_s$ is also adapted to the same filtration. Hence, we can define the auxiliary Itô stochastic differential equation
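A sketch of the time-shift identity described here, obtained from the substitution $s \mapsto 2s-T$ and consistent with the form of $\widetilde F^{(0)}$ given below, although the exact displays (5.6) and (5.7) are assumed rather than quoted:
$$ \int _0^T e^{-(T-s) A} F_s\, ds = \int _{T/2}^{T} e^{-(T-s) A}\, \Big [ 2\, e^{-(T-s) A} F_{2s-T} \Big ]\, ds , \qquad \widetilde F_s := \mathbf {1}_{[\frac {T}2,T]}(s)\; 2\, e^{-(T-s) A} F_{2s-T} . $$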
which is driven by the same stochastic forcing as used to construct $v_t$ . Choosing the initial data to coincide with $v_0$ , the mild/integral formulation of this equation is
By comparing equation (5.5) and equation (5.9), we see that $\widetilde v_T=v_T$ , while $\widetilde v_t$ need not equal $v_t$ for $t\neq T$ . Hence, if we use the standard Girsanov Theorem to show that the law of $\widetilde v_{[0,T]}$ on path-space is absolutely continuous with respect to the law of $z_{[0,T]}$ (the solution to equation (5.1)), then we can conclude that the law of $\widetilde v_T$ (at the specific time T) is absolutely continuous with respect to the law of $z_T$ (again at the specific time T). Finally, since $v_T= \widetilde v_T$ , we conclude that the law of $v_T$ is absolutely continuous with respect to the law of $z_T$ , both at the specific time T.
The power of this reformulation is seen when we write the condition needed to apply the Girsanov Theorem to remove the drift from equation (5.8). We now are required to have control over moments of
to apply the standard Girsanov Theorem to transform the path-space law of $\widetilde v_{[0,T]}$ to that of $z_{[0,T]}$ . Comparing equation (5.4) with equation (5.10), we see that the additional semigroup $e^{-(T-s)A}$ in the integrand improves its regularity significantly.
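A sketch of why the extra semigroup buys roughly one unit of regularity, assuming $F \in C_T\mathcal {C}^{\sigma }$ and the smoothing estimate recalled after Proposition 4.1: for $s \in [\frac {T}2,T)$ and any small $\epsilon>0$,
$$ \big \|A^{-\frac {\beta }2} \widetilde F_s\big \|_{L^2} \lesssim \big \|e^{-(T-s) A} F_{2s-T}\big \|_{\mathcal {C}^{-\beta +\epsilon }} \lesssim (T-s)^{-\frac {\epsilon -\beta -\sigma }{2}}\, \|F_{2s-T}\|_{\mathcal {C}^{\sigma }} , $$
and the square of the right-hand side is integrable in s near T exactly when $\epsilon -\beta -\sigma <1$, which for small $\epsilon $ amounts to $\sigma +\beta +1>0$, matching the sufficient condition in Theorem 5.9.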
Similarly, if we want to compare the distribution at time $T>0$ of two equations starting from different initial conditions $v_0,z_0 \in \mathcal {C}^b$, for some $b \in \mathbb {R}$, then we can observe that
where $\widetilde F_s^{(0)}= \mathbf {1}_{[\frac {T}2,T]}(s)\frac {2}{T}e^{-s A} (v_0 -z_0)$. This is the observation at the core of the Bismut-Elworthy-Li formula [Bis84, EL94]. Observe that $\widetilde F^{(0)} \in C_T \mathcal {C}^b$ for any $b \in \mathbb {R}$, regardless of the initial conditions, so $A^{-\frac {\beta }2} \widetilde F^{(0)} \in C_T L^2$, and we will always be able to use the Girsanov Theorem to remove this term.
It will be convenient to consider a slightly generalised setting where $v_t$ , $\widetilde v_t$ and $\zeta _t$ , respectively, solve mild forms of the following equations,
with initial conditions $v_0$ and $z_0$ , where $\zeta _0=\widetilde v_0 = z_0$ . Here $\widetilde F_t$ is defined as in equation (5.7), $\widetilde F_t^{(0)}$ as just above, and $G_t$ and $F_t$ are some stochastic processes.
Theorem 5.9 (Time-Shifted Girsanov Method)
In the setting of equation (5.11), let $\mathcal {F}_t$ be a filtration to which W is an adapted Brownian motion, and let $\mathcal {G}_t$ be a filtration independent of $\mathcal {F}_t$. Fix initial conditions $v_0$ and $z_0$, and let $\tau $ be a stopping time adapted to $\mathcal {H}_t=\sigma (\mathcal {F}_t,\mathcal {G}_t)$ such that $\mathbb {P}(\tau>0) > 0$. Let $G_t$ be a stochastic process adapted to $\mathcal {G}_t$ and defined for all $t \geq 0$. Let $v_t$ and $F_t$ be stochastic processes adapted to $\mathcal {H}_{t\wedge \tau }$ such that $v_t$ solves equation (5.11) for $t < \tau $ and
almost surely for all $t < \tau $ , and $v_t$ and $\zeta _t$ have initial conditions $v_0$ and $z_0$ , respectively. Then $\mathrm {Law}( v_{t} \mid t < \tau , \mathcal {G}_t ) \ll \mathrm {Law}( \zeta _{t} \mid \mathcal {G}_t)$ almost surely. Additionally, there exists a solution $\widetilde v_t$ , which solves equation (5.11) for $t < \tau $ and with $\mathrm {Law}(\widetilde v_{[0,t]} \mid t < \tau , \mathcal {G}_t ) \ll \mathrm {Law}( \zeta _{[0,t]} \mid \mathcal {G}_t)$ almost surely. In particular, it is sufficient that $F \in C_t\mathcal {C}^\sigma $ almost surely for $\sigma +\beta +1>0$ and for any $t < \tau $ to ensure that equation (5.12) holds.
Remark 5.10. In the setting of Theorem 5.9, assume that there exist stochastic processes $f_t$ and $g_t$ so that $F_t=B(f_t,g_t)$ for all $t < \tau $ with $f \in C_t\mathcal {C}^\gamma $, $g \in C_t\mathcal {C}^\delta $ and such that $F_t=B(f_t,g_t)$ has canonical regularity, namely $F \in C_t \mathcal {C}^{r-1}$ for $r=\gamma \wedge \delta \wedge (\gamma +\delta )$ and all $t < \tau $. Then the regularity assumption of equation (5.12) is satisfied, provided that $r+\beta =\gamma \wedge \delta \wedge (\gamma +\delta )+\beta>0$.
As in Remark 5.2, this remark extends to the setting where
for some $ c_i \in \mathbb {R}$ , $f^{(i)} \in C_t\mathcal {C}^{\gamma _i}$ and $g^{(i)} \in C_t\mathcal {C}^{\delta _i}$ , where $r_i=\gamma _i\wedge \delta _i \wedge (\gamma _i+\delta _i)$ and the condition on the indexes becomes $r+\beta>0$ with $r=\min r_i$ .
Remark 5.11 (Comparing Theorems 5.1, 5.5 and 5.9)
Comparing the Cameron-Martin Theorem, the standard Girsanov Theorem and the Time-Shifted Girsanov Method in the setting of $F_t=B(f_t,g_t)$, we see that the Cameron-Martin Theorem and the Time-Shifted Girsanov Method impose identical regularity conditions on $f_t$ and $g_t$. The Time-Shifted Girsanov Method has the added advantage of allowing one to consider f and g that are merely adapted to the Brownian motion W, rather than independent of it as the Cameron-Martin Theorem requires. This added flexibility will be critical to proving the needed absolute continuity for the remainder variables R and S.
Both only prove equivalence at a fixed time, but this is sufficient for our applications. However, we will see that the Time-Shifted Girsanov Method's requirement that $B(f_t,g_t)$ have canonical regularity will be more restrictive than the Cameron-Martin Theorem's requirement that $J(f,g)_t$ have canonical regularity.
Proof of Theorem 5.9
Fixing a time T, we define
almost surely on the event $\{T < \tau \}$ . Fix a positive integer M. The following stopping time
is well-defined, and $\tau _M \rightarrow \tau $ monotonically as $M \rightarrow \infty $ . Let $\widetilde v_t$ be the solution to equation (5.8), and observe that it is well-defined on $[0,T]$ on the event $\{T < \tau \}$ with $v_T=\widetilde v_T$ on the same event. Now consider
with $\widetilde v_0=z_0$. Clearly, $\widetilde v_t^M= \widetilde v_t$ for $t < \tau _M$. Furthermore, because of the definition of $\tau _M$ and the fact that $\widetilde F^{(0)} \in C_T\mathcal {C}^b$ for any $b \in \mathbb {R}$, the classical Girsanov Theorem implies that the law of the trajectories of $\widetilde v^M$ on $[0,T]$, conditioned on $\mathcal {G}_T$, is equivalent (i.e., mutually absolutely continuous) to the law of $\zeta $ on $[0,T]$, conditioned on $\mathcal {G}_T$. In the sequel, we will write $\zeta _{[0,T]}$ for the random variable on paths of length T induced by the law of $\zeta $.
By the same argument as in Theorem 5.5, we remove the localisation by $\tau _M$ to obtain $ \mathrm {Law}(\widetilde v_{[0,T]}\mid \tau>T, \mathcal {G}_T) \ll \mathrm {Law}(\zeta _{[0,T]} \mid \mathcal {G}_T)$ almost surely.
To conclude the proof, we let D be any measurable subset such that $\mathbb {P}( \zeta _T \in D\mid \mathcal {G}_T)=0$, where $\zeta _T$ is the value of $\zeta $ at the fixed time T. Let $D_{[0,T]}$ be the subset of path-space consisting of trajectories that are in D at time T. Then
where the last equality follows from the fact that $ \widetilde v_T=v_T$ on the event $\{\tau> T\}$ . The chain of implications in equation (5.13) shows that the law of $v_T$ restricted to the event $\{\tau> T\}$ is absolutely continuous with respect to the law of $\zeta _T$ with both conditioned on $\mathcal {G}_T$ .
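For the reader's convenience, the chain of implications can be summarised schematically as follows (the precise statement being equation (5.13)):
$$ \begin{align*} \mathbb{P}(\zeta_T \in D \mid \mathcal{G}_T) = 0 \;&\Longrightarrow\; \mathbb{P}(\zeta_{[0,T]} \in D_{[0,T]} \mid \mathcal{G}_T) = 0 \\ &\Longrightarrow\; \mathbb{P}(\widetilde v_{[0,T]} \in D_{[0,T]} \mid \tau > T, \mathcal{G}_T) = 0 \\ &\Longrightarrow\; \mathbb{P}(\widetilde v_{T} \in D \mid \tau > T, \mathcal{G}_T) = \mathbb{P}( v_{T} \in D \mid \tau > T, \mathcal{G}_T) = 0, \end{align*} $$
where the second implication uses the absolute continuity of the conditional path laws obtained above and the final equality uses $\widetilde v_T = v_T$ on $\{\tau > T\}$.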
5.4 Range of applicability of methods
We now consider the regimes for which the Cameron-Martin Theorem and the Time-Shifted Girsanov Method can be applied directly to equation (2.1) to prove Theorem 2.1. We will proceed formally with the understanding that some neglected factors will lead to additional complications that will require more nuanced arguments.
For the moment, we assume that equation (2.1) has canonical regularity, namely the regularity dictated by the stochastic convolution term. Thus, $u \in C_t\mathcal {C}^{(\frac 12-\alpha )^-}$ , where recall that $\alpha $ is the exponent that sets the spatial regularity of the forcing.
When $\alpha <\frac 12$ , the solution $u_t$ to equation (2.1) has positive Hölder regularity with $u_t \in \mathcal {C}^{(\frac 12-\alpha )^-}$ . This implies that $B(u_t)$ has canonical regularity with $B(u_t) \in \mathcal {C}^{(-\frac 12-\alpha )^-}$ . Thus, the regularity condition to apply the Cameron-Martin Theorem or the Time-Shifted Girsanov Method to equation (2.2) becomes $\frac 12-\alpha + \alpha =\frac 12>0$ , which is always true. See Remark 5.2, Remark 5.10 and Remark 5.11.
When $\alpha \geq \frac 12$, the solution $u_t$ to equation (2.1) has negative Hölder regularity since $u_t \in \mathcal {C}^{(\frac 12-\alpha )^-}$ still. If we proceed as if the relevant terms have canonical regularity ($B(u_t)$ in the case of the Time-Shifted Girsanov Method and $J(u)_t$ in the case of the Cameron-Martin Theorem), then the regularity condition becomes $2(\frac 12-\alpha )+\alpha =1-\alpha>0$, which restricts us to the setting of $\alpha <1$. However, one cannot directly apply either the Time-Shifted Girsanov Method or the Cameron-Martin Theorem to equation (2.1) when $\alpha \geq \frac 12$. We will see that we need the multilevel decomposition to incrementally improve the regularity of the solution to the point where we can apply the Time-Shifted Girsanov Method to the last level, namely $R^{(n)}$ or $S^{(n)}$ depending on the decomposition. Along the way, we typically use the Cameron-Martin Theorem to prove equivalence of the levels in the decomposition to the appropriate Gaussian processes. This is possible since the fed-forward structure of the decomposition makes each level conditionally independent of the previous ones. When $\alpha \in [\frac 34,1)$, there is an added complication: the terms to be removed by the change of measure can be defined only when integrated against the heat semigroup. This necessitates the use of the Cameron-Martin Theorem rather than the Time-Shifted Girsanov Method.
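For reference, and under the canonical-regularity assumption, the bookkeeping of the two regimes just described can be summarised as
$$ \begin{align*} \alpha < \tfrac{1}{2}: \quad & u_t \in \mathcal{C}^{(\frac{1}{2}-\alpha)^-}, \quad B(u_t) \in \mathcal{C}^{(-\frac{1}{2}-\alpha)^-}, \quad \text{condition } \big(\tfrac{1}{2}-\alpha\big) + \alpha = \tfrac{1}{2} > 0, \\ \alpha \ge \tfrac{1}{2}: \quad & u_t \in \mathcal{C}^{(\frac{1}{2}-\alpha)^-}, \quad B(u_t) \in \mathcal{C}^{(-2\alpha)^-}, \quad \text{condition } 2\big(\tfrac{1}{2}-\alpha\big) + \alpha = 1-\alpha > 0. \end{align*} $$
In the second line, the stated regularity of $B(u_t)$ is only formal, which is precisely why the multilevel decomposition below is needed to give the products meaning.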
5.5 Interpreting the Time-Shifted Girsanov Method
It is tempting to dismiss the manipulations in equation (5.6) as a mere algebraic trick. We encourage the reader not to do so.
The standard Girsanov Theorem compares two equations of the form in equation (5.1) and asks when we can shift the noise realisation to another ‘allowed’ noise realisation so as to absorb any differences in the drift terms, namely the $F_t$ in our setting. Here, ‘allowed’ means that, across all realisations, the distribution of the resulting noise remains equivalent to that of the original noise. To keep the path measures equivalent on $[0,T]$, one needs to do this instantaneously at every moment of time.
Since the equation for $z_t$ in equation (5.1) is a forced linear equation, the linear superposition principle (a.k.a. Duhamel’s principle or the variation of constants formula) applies. It lets us move an impulse injected into the system at time s to another time t by mapping it under the linear flow from the tangent space at time s to the tangent space at time t. Through this lens, we can interpret equation (5.6) as a reordering of the impulses injected into the system by $F_s$ over the interval $s \in [0,T]$. The impulse injected at time s is moved to time $t=\frac 12(T+s)$ via $e^{-(t-s)A}=e^{-\frac 12(T-s)A}$, and $\widetilde F_t$ is the resulting effective impulse at the time $t=\frac 12(T+s)$. The Time-Shifted Girsanov Method then compensates for the time-shifted impulse $\widetilde F_t$ through the forcing, via a change of measure. The resulting $\widetilde F_t$ is more regular than $F_t$ for $t<T$, but the regularising effect of the semigroup $e^{-tA}$ vanishes as we approach T. The requirement $\beta +\gamma +1>0$ ensures that the singularity at $t=T$ is sufficiently integrable to apply the classical Girsanov Theorem to the resulting process with its forcing impulses rearranged, which was denoted by $\widetilde v$ in the Time-Shifted Girsanov discussion in Section 5.3.
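To make the reordering concrete, here is a schematic version of the computation; it assumes only the Duhamel representation of the drift’s contribution to the solution at time T and is not the precise definition of $\widetilde F_t$ from equation (5.7). Substituting $t=\frac {1}{2}(T+s)$, so that $T-s=2(T-t)$ and $ds = 2\,dt$,
$$ \begin{align*} \int_0^T e^{-(T-s)A} F_s \, ds = \int_0^T e^{-(T-t)A}\big(e^{-(t-s)A} F_s\big)\Big|_{t=\frac{1}{2}(T+s)} \, ds = 2\int_{T/2}^{T} e^{-(T-t)A}\, e^{-(T-t)A} F_{2t-T} \, dt , \end{align*} $$
so the impulse originally injected at time s enters the Duhamel formula as an effective impulse at time $t=\frac {1}{2}(T+s)$, smoothed by the extra factor $e^{-(T-t)A}$. This smoothing degenerates as $t \uparrow T$, which is the source of the integrability requirement mentioned above.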
6 A decomposition of noise and smoothness
The idea of decomposing the solution into a sum of terms of different regularity is a staple of SPDE analysis dating back at least to the pioneering work of Da Prato, Zabczyk, Flandoli, Debussche and others. See, for instance, [Reference Da Prato and ZabczykDPZ96, Reference Flandoli and GątarekFG95, Reference Da Prato and DebusscheDPD02]. The decomposition of the solution $u_t$ into $z_t+v_t$, where $z_t$ solves equation (2.3), is the starting point of many arguments. The advantage of this decomposition is that z, the rougher of the two components, is very explicit and carries all the direct stochastic forcing. In contrast, the equation for v is typically not a stochastic equation (it contains no Itô integrals) but rather a random equation in which z appears in some of the terms. This typically makes v more regular than z.
We will build on these ideas with some important distinctions. The most basic is that, since we intend to use the Cameron-Martin Theorem/Time-Shifted Girsanov Method on each level, we will leave noise in each equation. Moreover, our explorations expose additional structure in the equation. In particular, to reach values of $\alpha $ arbitrarily close to one, we will be required to divide the solution into an ever-increasing number of pieces.
As mentioned in the introduction, there are three key ingredients in our result. The first is that products of Gaussian objects can be defined via renormalisation with their canonical regularity. The second is the Cameron-Martin Theorem/Time-Shifted Girsanov Method. These two elements were discussed in the preceding two sections. We now introduce the third component, a noise decomposition across scales. With all three central ideas on the table, we can sketch the main proofs of this note.
6.1 Regularity and existence times of solutions
We begin with a simple lemma that relates the maximal time of existence of $u_t$ with those of $X^{(0)}, X^{(1)}, \ldots $ , $X^{(n)}$ , $R^{(n)}$ satisfying equations (3.1) and (3.2) and $Y^{(0)}, Y^{(1)}, \ldots $ , $Y^{(n)}$ , $S^{(n)}$ satisfying equations (3.8) and (3.9).
Lemma 6.1. Let $\tau _\infty $ be the maximal existence time of $u_t$ .
-
1. If $(X^{(0)}, X^{(1)}, \ldots , X^{(n)}, R^{(n)})$ solves equation (3.2), then $X^{(0)}_t, X^{(1)}_t, \ldots , X^{(n)}_t$ exist for all time t. Additionally, if equation (3.1) holds (or, equivalently, equation (3.3) holds), then the maximal time of existence for $R^{(n)}$ is the same as $\tau _\infty $ almost surely.
-
2. If $(Y^{(0)}, Y^{(1)}, \ldots , Y^{(n)}, S^{(n)})$ solves equation (3.9), then $Y^{(0)}_t, Y^{(1)}_t, \ldots , Y^{(n)}_t$ exist for all time t. Additionally, if equation (3.8) holds (or, equivalently, equation (3.3) holds), then the maximal time of existence for $S^{(n)}$ is the same as $\tau _\infty $ almost surely.
Proof. The argument is the same in both cases, and we detail the first. The processes $X^{(0)}_t$, $X^{(1)}_t$, $\ldots $, $X^{(n)}_t$ exist for all time t because they solve linear equations whose drifts are well-defined for all time. If equation (3.1) holds, we have
so the maximal time of existence for $R^{(n)}$ is almost surely the same as that of u. If equation (3.3) holds, then equation (3.2) combined with equation (3.3) implies equation (3.1).
Remark 6.2. Moving forward, we will take $u_t$ to be constructed by the decomposition in either equation (3.1) or equation (3.8). Hence, in light of Lemma 6.1, the existence time $\tau _\infty $ will almost surely be that of $R^{(n)}$ and $S^{(n)}$ . Recalling the definition of $C_T\overline {\mathcal {C}}^\delta $ from Section 4.1, we will see in Proposition 6.19 and Proposition 7.3 that $R^{(n)}, S^{(n)} \in C_T\overline {\mathcal {C}}^{\delta ^-}$ for some $\delta>0$ , by setting for $t \ge \tau _\infty $ and the same for $S^{(n)}$ . Both of these results follow from the rather classical existence and uniqueness theory in Appendix C, once all the more singular terms have been properly renormalised to give them meaning.
6.2 Absolute continuity via decomposition
As already indicated, we will prove Theorem 2.1 using the decomposition in either equation (3.1) or equation (3.8). In the first case, we will prove equation (3.7), and in the second case equation (3.10). In both cases, Theorem 2.1 will follow from inductively applying the following lemma.
Lemma 6.3. Let U, $U'$ , Z and $Z'$ be random variables. Let $\mathcal {G}$ be a $\sigma $ -algebra such that U and Z are $\mathcal {G}$ -measurable and $Z'$ is independent of $\mathcal {G}$ . If $\mathrm {Law}(U) \ll \mathrm {Law}(Z)$ and $\mathrm {Law}(U' \mid \mathcal {G}) \ll \mathrm {Law}(Z')$ almost surely, then $\mathrm {Law}(U' + U ) \ll \mathrm {Law}(Z' + Z )$ .
Proof of Lemma 6.3
We can assume $\mathrm {Law}(U' \mid \mathcal {G})(\omega ) \ll \mathrm {Law}(Z')$ for every $\omega $ . Let D be any measurable set with $\mathbb {P}(Z'+Z \in D)=0$ . Since Z is $\mathcal {G}$ -measurable and $Z'$ is independent of $\mathcal {G}$ , there exists a set E such that $\mathbb {P}(Z \in E) = 1$ and
for every $x \in E$ . Also, we have $\mathbb {P}(U \in E) = 1$ since $\mathrm {Law}(U) \ll \mathrm {Law}(Z)$ , and $\mathbb {P}(U' \in D - x \mid \mathcal {G}) = 0$ for every $x \in E$ since $\mathrm {Law}(U' \mid \mathcal {G}) \ll \mathrm {Law}(Z')$ . In particular, since U is $\mathcal {G}$ -measurable, the previous statement implies
which completes the proof.
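For the reader's convenience, the final step can be written compactly as
$$ \begin{align*} \mathbb{P}(U' + U \in D) = \mathbb{E}\big[\, \mathbb{P}(U' \in D - U \mid \mathcal{G})\, \mathbf{1}_{\{U \in E\}} \,\big] = 0 , \end{align*} $$
using that U is $\mathcal {G}$-measurable, that $\mathbb {P}(U \in E) = 1$ and that $\mathbb {P}(U' \in D - x \mid \mathcal {G}) = 0$ for every $x \in E$.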
Corollary 6.4. Assume that, for some n, the system of equations (3.2) (respectively, (3.9)) is well-defined and satisfies the absolute continuity conditions given in equation (3.7) (respectively, equation (3.10)). Then in the first case,
holds, and in the second,
holds.
Proof of Corollary 6.4
The proof in the two cases is the same. We give the first. Since $\mathrm {Law}(X_t^{(0)})=\mathrm {Law}(Z_t^{(0)})$ and $\mathrm {Law}(X_t^{(1)}\mid \mathcal {F}_t^{(0)}) \sim \mathrm {Law}(Z_t^{(1)})$ almost surely, where $X_t^{(0)}$ and $Z_t^{(0)}$ are adapted to $\mathcal {F}_t^{(0)}$ and $Z_t^{(1)}$ is independent of $\mathcal {F}_t^{(0)}$, Lemma 6.3 implies that $\mathrm {Law}(X_t^{(0)}+X_t^{(1)}) \sim \mathrm {Law}(Z_t^{(0)}+Z_t^{(1)})$. We proceed inductively. If we have shown that $\mathrm {Law}(\sum _{k=0}^{m-1} X_t^{(k)}) \sim \mathrm {Law}(\sum _{k=0}^{m-1} Z_t^{(k)})$, then because $\sum _{k=0}^{m-1} X_t^{(k)}$ and $\sum _{k=0}^{m-1} Z_t^{(k)}$ are adapted to $\mathcal {F}_t^{(m-1)}$ and $Z_t^{(m)}$ is independent of $\mathcal {F}_t^{(m-1)}$, the fact that $\mathrm {Law}(X_t^{(m)}\mid \mathcal {F}_t^{(m-1)}) \sim \mathrm {Law}(Z_t^{(m)})$ almost surely implies the next step in the induction, again using Lemma 6.3. For the final step in the proof, note that $\mathrm {Law}(R_t^{(n)}\mid \tau _\infty>t,\mathcal {F}_t^{(n)}) \ll \mathrm {Law}(\widetilde {Z}_t^{(n)})$. We repeat the previous steps for $1 \le m \le n$, now conditioning on $\{\tau _\infty>t\}$, and apply Lemma 6.3 once more to include $R_t^{(n)}$ and $\widetilde {Z}_t^{(n)}$ in the summations.
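For concreteness, the mth induction step applies Lemma 6.3 with the identifications
$$ \begin{align*} U = \sum_{k=0}^{m-1} X_t^{(k)}, \qquad U' = X_t^{(m)}, \qquad Z = \sum_{k=0}^{m-1} Z_t^{(k)}, \qquad Z' = Z_t^{(m)}, \qquad \mathcal{G} = \mathcal{F}_t^{(m-1)}, \end{align*} $$
which yields $\mathrm {Law}\big (\sum _{k=0}^{m} X_t^{(k)}\big ) \ll \mathrm {Law}\big (\sum _{k=0}^{m} Z_t^{(k)}\big )$; the equivalence claimed in the proof uses, in addition, that the conditional law of $X_t^{(m)}$ is equivalent, not merely absolutely continuous, to the law of $Z_t^{(m)}$, together with the equivalence of the laws of the partial sums from the induction hypothesis.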
The proof of the following corollary is completely analogous to that of Corollary 6.4.
6.3 Some informal computations
With the tools above, we can informally describe how to choose the number of levels and the $Q_i$ s in the decomposition given by equations (3.2) and (3.9). We focus on equation (3.2) because the structure of the equations for $X^{(i)}$ and for the remainder $R^{(n)}$ is aligned, which makes the discussion more intuitive. But the intuition is the same for equation (3.9), and later we will see that the result for equation (3.9) is actually more straightforward to prove rigorously. For simplicity, we do not distinguish between the Cameron-Martin Theorem and the Time-Shifted Girsanov Method, since they require the same condition on canonical regularity, as discussed in Remark 5.11.
Building on the preliminary discussion in Section 4, the first idea is to assume that every Gaussian term in equation (3.2), that is, each of the $X^{(0,i)}$, $B(X^{(0,i)})$ and $J(X^{(0,i)})$, can be well-defined with its canonical regularity. As a reminder, this means that $X^{(i)}$ has the same Hölder regularity as $Z^{(i)}$, and the B terms (and J terms) have canonical regularity following Remark 4.3 and Remark 4.5. The second idea is that we want $X^{(i)}$ to become smoother as i increases, so we take $Q_i \approx A^{\alpha _i / 2}$ with $\alpha _i$ decreasing in i, which gives $Z^{(i)} \in C_T \mathcal {C}^{(\frac {1}{2} - \alpha _i)^-} $. It is straightforward to show (and we will do so later) that we can choose $Q_i$, $\widetilde {Q}_n$ satisfying equation (3.3) with $\alpha _0 = \alpha $.
When $\alpha < \frac {1}{2}$ , as discussed in Section 5.4, we can directly apply the Time-Shifted Girsanov Method to equation (2.2) and obtain $\mathrm {Law}(u_t) \ll \mathrm {Law}(z_t)$ . In fact, since the solutions can be seen to be almost surely global with finite control of some moments of the norm, one can show that $\mathrm {Law}(u_t) \sim \mathrm {Law}(z_t)$ .
When $\alpha \ge \frac {1}{2}$, then $u_t \in \mathcal {C}^{(\frac {1}{2} - \alpha )^-}$ is a distribution with its canonical regularity, so we cannot make sense of $u_t^2$ classically. We first consider $u_t = X^{(0)}_t + X^{(1)}_t + R^{(1)}_t$ in equation (3.2) for $t < \tau _\infty $. Clearly, $X^{(0)} = Z^{(0)}$. To apply the Cameron-Martin Theorem or the Time-Shifted Girsanov Method to $X^{(1)}$ to show equation (3.7), the regularity condition is $2\alpha _0 - \alpha _1 < 1.$ On the other hand, note that the remainder $R^{(1)}$ is not a Gaussian object. We want $R^{(1)}_t$ to have positive regularity so that $B(R^{(1)}_t)$ is well-defined, so we can take $\widetilde {Q}_1 \approx A^{\beta _1/2}$ for some $\beta _1 < \frac {1}{2}.$ For convenience of computing regularity, we additionally impose $\alpha _1 < \frac {1}{2}$ so that $Z^{(1)}$ has positive regularity. Assume $R^{(1)}$ also has the canonical regularity of $\widetilde {Z}^{(1)}$. Then $B(X^{(0,1)}_t, R^{(1)}_t)$ is well-defined classically if $1 - \beta _1 - \alpha _0> 0$, and the roughest term in the drift is
so the Time-Shifted Girsanov condition for $R^{(1)}$ is $\frac {1}{2} - \alpha _0 + \beta _1> 0.$ By collecting the above constraints on $\alpha _0$ , $\alpha _1$ and $\beta _1$ , we see that as long as $\alpha = \alpha _0 < \frac {3}{4}$ , we can find $\alpha _1$ and $\beta _1$ such that all constraints are satisfied and
follows from Corollary 6.4.
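As a check on this threshold, the constraints collected above for $n=1$, namely $\alpha _1 < \frac {1}{2}$, $\beta _1 < \frac {1}{2}$, $2\alpha _0 - \alpha _1 < 1$, $1 - \beta _1 - \alpha _0 > 0$ and $\frac {1}{2} - \alpha _0 + \beta _1 > 0$, are simultaneously satisfiable precisely when one can choose
$$ \begin{align*} 2\alpha_0 - 1 < \alpha_1 < \tfrac{1}{2} \qquad \text{and} \qquad \alpha_0 - \tfrac{1}{2} < \beta_1 < \big(1-\alpha_0\big) \wedge \tfrac{1}{2} , \end{align*} $$
and for $\frac {1}{2} \le \alpha _0$ both intervals are nonempty if and only if $\alpha _0 < \frac {3}{4}$.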
Remark 6.6. The careful reader may notice that for $X^{(1)}$ and $R^{(1)}$ to have their canonical regularity, additional constraints on $\alpha _1$ and $\beta _1$ are needed to ensure that the stochastic forcing is rougher than the drifts; however, one can check (and will see later) that those constraints are implied by the constraints for the Cameron-Martin Theorem/Time-Shifted Girsanov Method.
The previous informal computation is based on the decomposition equation (3.2) when $n = 1$. Next, we consider the case $n = 2$: that is, $u_t = X^{(0)}_t + X^{(1)}_t + X^{(2)}_t + R^{(2)}_t$. Again, based on the same reasoning, we take $Q_i \approx A^{\alpha _i / 2}$, $i = 0, 1, 2$, and $\widetilde {Q}_2 \approx A^{\beta _2 / 2}$, and we assume for convenience that $R^{(2)}$ has the canonical regularity of $\widetilde {Z}^{(2)}$ and that $\alpha _2 < \frac {1}{2} \le \alpha _1 < \alpha _0$. For the same reason as in the case $n = 1$ above, we need $\beta _2 < \frac {1}{2}$, $2\alpha _0 - \alpha _1 < 1$, $1 - \beta _2 - \alpha _0> 0$ and $\frac {1}{2} - \alpha _0 + \beta _2> 0.$ Similarly, to show equation (3.7) for $X^{(2)}$, we additionally need $\alpha _0 + \alpha _1 - \alpha _2 < 1$. However, one can check that the above constraints on $\alpha _i, \beta _2$ admit no solutions if $\alpha _0 \ge \frac {3}{4}$, and the bottleneck is the constraint $1 - \beta _2 - \alpha _0> 0$, which is needed for $B(X^{(0,2)}_t, R^{(2)}_t)$ to be classically well-defined. To resolve this issue, we employ the Da Prato-Debussche trick of writing $R^{(2)} = \eta ^{(2)} + \rho ^{(2)}$, where $\eta ^{(2)}$ and $\rho ^{(2)}$ satisfy
with $\eta ^{(2)}_0 = 0$ and $\rho ^{(2)}_0 = u_0$ . Then we can interpret
where $B(X^{(0,2)}, \eta ^{(2)})$ is a well-defined Gaussian object, and one can check (and will see) that, as long as $\alpha _0 < 1$, $B(X^{(0,2)}, \rho ^{(2)})$ is classically well-defined. Now we no longer need the constraint $1 - \beta _2 - \alpha _0> 0$, and the remaining constraints can be satisfied as long as $\alpha = \alpha _0 < \frac {5}{6}.$
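As a check, once the constraint $1 - \beta _2 - \alpha _0 > 0$ has been dropped, the remaining constraints listed above amount to choosing
$$ \begin{align*} \max\big(2\alpha_0-1, \tfrac{1}{2}\big) \le \alpha_1 < \min\big(\alpha_0, \tfrac{3}{2}-\alpha_0\big), \qquad \alpha_0+\alpha_1-1 < \alpha_2 < \tfrac{1}{2}, \qquad \alpha_0-\tfrac{1}{2} < \beta_2 < \tfrac{1}{2}, \end{align*} $$
and for $\frac {3}{4} \le \alpha _0$ these ranges are nonempty exactly when $2\alpha _0 - 1 < \frac {3}{2} - \alpha _0$ and $\alpha _0 < 1$, that is, when $\alpha _0 < \frac {5}{6}$.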
Following the heuristics above, we can increase the number n of levels of the decomposition given by equation (3.2) to obtain the main result up to $\alpha < 1$ . The informal computations above will be justified in a clean and rigorous way in the next section.
6.4 Basic assumptions on the factorisation of noise into levels
We now fix additional structure in the X and Y systems (equations (3.2) and (3.9), respectively) to allow us to better characterise the regularity of the different levels. We assume that there exists a sequence of real numbers
with
such that
We will see that the effect of the assumption in equation (6.1) is to make the levels in equations (3.2) and (3.9) have increasing spatial regularity as k increases. The conditions given by equations 6.1–6.3 will be our standing structural assumption on the noise.
Remark 6.7 (Importance of Condition (6.2))
At first sight, Condition (6.2) may seem unnecessary for our main result. However, it is critical mainly for two reasons:
-
1. In later arguments, Condition (6.2) gives a clean break between terms that are functions (positive Hölder regularity) and those that are distributions (nonpositive Hölder regularity). In particular, it makes computations for regularity straightforward.
-
2. It makes sure the drift in the remainder equation, R or S, can be defined without being convolved with the heat kernel. This in turn allows us to apply the Time-Shifted Girsanov Method. This is critical as the remainder equations have drift terms that depend on the solution of the equation. As such, we cannot apply the Cameron-Martin Theorem, and tools based on the Girsanov Theorem seem the only option.
Remark 6.8 (First Note on $\alpha < 1$ )
Throughout this note, we implicitly assume $\alpha _0 = \alpha < 1$, because in Appendix B we only construct the relevant Gaussian objects in the case $\alpha < 1$. When the condition $\alpha _0 < 1$ is nevertheless stated explicitly in the assumptions of a later result, it is to highlight another nontrivial dependence on this condition.
The following lemma shows that one can choose the $\{\alpha _i : i = 0,\dots ,n\}$ and $\beta _n$ so that in addition to equations 6.1–6.3, the condition in equation (3.3) holds, which implies that the sum of the stochastic forcing in equation (3.2) or equation (3.9) has the same distribution as that of the Burgers equation in equation (2.1).
Lemma 6.9. For any sequence of real numbers as in equation (6.1) and any choice of operator Q from equation (2.1) with $Q \approx A^{\alpha /2}$ , there exist operators $Q_0, Q_1, \ldots , Q_n, \widetilde {Q}_n$ satisfying equation (6.3) and equation (3.3).
Proof of Lemma 6.9
We can take $Q_i = \frac {1}{\sqrt {n+2}}A^{\alpha _i/2}$ for $1 \le i \le n$ and $\widetilde {Q}_n = \frac {1}{\sqrt {n+2}}A^{\beta _n/2}$ . Note that the operator
is symmetric and positive definite, so it is equal to $Q_0 Q_0^*$ for some operator $Q_0$. Since $Q \approx A^{\alpha /2}$ and $\alpha = \alpha _0$ is the largest among the $\alpha _i$ and $\beta _n$, we have $Q_0 \approx A^{\alpha _0/2}$.
Remark 6.10 (Sums of $Z^{(i)}$ )
With the proof of Lemma 6.9 in hand, we note that for $0 \le i \le n$,
for some operator $Q^{(0,i)} \approx A^{\alpha _0/2}$ .
6.5 The $Y^{(i)}$ Equations
We will begin by establishing the needed structural results and the desired absolute continuity results for the $Y^{(i)}$. These will then be leveraged to prove the corresponding results for the X system.
Proposition 6.11 (Canonical regularity of drifts)
Under the standing noise factorisation assumptions in equations 6.1–6.3, one has with probability one
for $1 \le i \le n$ . In particular, $J(Z^{(0,i-1)}) - J(Z^{(0,i-2)})$ has canonical regularity. In addition, the equations for $\{Y^{(i)}: i=0,\dots ,n\}$ are well-posed with global solutions.
Proof of Proposition 6.11
By the assumptions in equations 6.1–6.3 and Proposition B.8, $Z^{(i)} \in C_T\mathcal {C}^{(\frac {1}{2} - \alpha _i)^-}$ . The expression
is well-defined and belongs to $C_T\mathcal {C}^{(2 - \alpha _0 - \alpha _{i-1})^-}$ almost surely, by Proposition B.12 and Proposition B.16, because $J(Z^{(0)}, Z^{(i-1)})$, which belongs to $C_T\mathcal {C}^{(2 - \alpha _0 - \alpha _{i-1})^-}$ almost surely, is the least regular term.
Proposition 6.12 (Constraints from canonical regularity of $Y^{(i)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if in addition $\alpha _0 + \alpha _{i - 1} - \alpha _i < \frac {3}{2}$ for all $1 \le i \le n$ , then all the $Y^{(i)}$ equations have canonical regularity
namely, that of the stochastic convolution in each equation.
Proof of Proposition 6.12
For any $1 \le i \le n$ and $t> 0$ , we only need to make sure that the drift
is smoother than the stochastic convolution
so that $Y^{(i)}$ has the same regularity as the stochastic convolution. This holds if $2 -\alpha _0 - \alpha _{i-1}> \frac {1}{2} - \alpha _i$, which is precisely the condition $\alpha _0 + \alpha _{i-1} - \alpha _i < \frac {3}{2}$.
We will apply the Cameron-Martin Theorem 5.1 in our setting to each level by conditioning on the previous levels. As mentioned in Remark 5.11 and Section 5.4, the reason we cannot apply the Time-Shifted Girsanov Method to some of the levels is that some of the terms involving $Z^{(i)}$ can only be defined when convolved with the heat kernel. For example, when $\alpha _0 \ge \frac {3}{4}$, we cannot define $B(Z^{(0)}_t)$ but only $J(Z^{(0)})_t$ (see Appendix B for more details). In this case, the Time-Shifted Girsanov Method, Theorem 5.9, cannot be applied.
Proposition 6.13 (Constraints from Cameron-Martin for $Y^{(i)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if $\alpha _0 + \alpha _{i-1} - \alpha _i < 1$ for all $1 \le i \le n$ , then the regularity conditions given in Remark 5.2, needed to apply the Cameron-Martin Theorem 5.1, hold. More concretely, it implies that for $1 \le i \le n$ , for any $t> 0$ , it holds almost surely that
where we recall $\mathcal {F}_t^{(i-1)} = \sigma ( W_s^{(j)}: j \leq i-1, s \leq t)$ .
Proof of Proposition 6.13
For each $1 \le i \le n$ , we have
Since $Q_i \approx A^{\alpha _i / 2}$, by equation (6.4), the condition from Theorem 5.1 and Remark 5.2 for the equation of $Y^{(i)}$ is exactly $\alpha _0 + \alpha _{i-1} - \alpha _i < 1$.
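Heuristically, with the same bookkeeping as in Section 5.4 (canonical regularity of the relevant product plus the exponent of the forcing must be positive), and recalling from Proposition 6.11 that the least regular contribution to the drift is of the type $J(Z^{(0)}, Z^{(i-1)})$, the count reads
$$ \begin{align*} \underbrace{\big(1-\alpha_0-\alpha_{i-1}\big)}_{\text{regularity of } Z^{(0)} Z^{(i-1)}} + \underbrace{\alpha_i}_{\text{forcing exponent of } Y^{(i)}} > 0 \quad \Longleftrightarrow \quad \alpha_0+\alpha_{i-1}-\alpha_i < 1 . \end{align*} $$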
Remark 6.14 (Redundant constraints)
It is clear that the parameter constraints in Proposition 6.13 imply those in Proposition 6.12.
With all the constraints so far, we establish the relation between the range of $\alpha $ and the corresponding number n of levels needed in the decomposition (except for the remainder).
Proposition 6.15 (Choosing the number of levels n in $\{Y^{(i)}\}$ )
Fix an n and an $\alpha $ so that $\frac {1}{2} \le \alpha < \frac {2n+1}{2n+2}$. Then there exists a sequence of real numbers $\alpha _n < \ldots < \alpha _0 = \alpha $ such that the standing noise factorisation assumptions on the $\{\alpha _j: j=0,\dots ,n\}$ in equations 6.1–6.3 hold, as do the hypotheses of Proposition 6.11, Proposition 6.12 and Proposition 6.13.
Proof of Proposition 6.15
First, we make sure the assumption in equation (6.2) is satisfied. Based on Proposition 6.12, Proposition 6.13 and Remark 6.14, we only need to ensure that $\alpha _0 + \alpha _{i-1} - \alpha _i < 1$ for every $1 \le i \le n$. In particular, we have $\alpha _0 + \alpha _{i-1} - 1 < \alpha _i$, which implies
Since $\alpha _n < \frac {1}{2}$, this forces $(n+1)\alpha _0 - n < \frac {1}{2}$, which together with $\frac {1}{2} \le \alpha _0$ gives $\frac {1}{2} \le \alpha = \alpha _0 < \frac {2n+1}{2n+2}$; and starting from this constraint, we may find the possible values of $\alpha _1, \ldots , \alpha _n$.
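For concreteness, iterating the inequality $\alpha _i > \alpha _0 + \alpha _{i-1} - 1$ starting from $\alpha _0 = \alpha $ gives
$$ \begin{align*} \alpha_1 > 2\alpha_0 - 1, \qquad \alpha_2 > \alpha_0 + \alpha_1 - 1 > 3\alpha_0 - 2, \qquad \ldots, \qquad \alpha_n > (n+1)\alpha_0 - n , \end{align*} $$
so that $\alpha _n < \frac {1}{2}$ indeed forces $(n+1)\alpha _0 - n < \frac {1}{2}$, that is, $\alpha _0 < \frac {2n+1}{2n+2}$.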
Remark 6.16 (Second Note on $\alpha < 1$ )
From Proposition 6.15, we see that when $\alpha < 1$, each level of the decomposition ‘gains’ an amount of regularity given by the gap $1 - \alpha $, and this gap is crucial for our method to work.
6.6 Analysis of remainder $S^{(n)}$ and associated constraints
Recall the remainder equation from equation (3.9):
Note that in this equation, the drift depends on the solution $S^{(n)}$ itself, so we cannot apply the Cameron-Martin Theorem 5.1 by conditioning on previous levels. However, unlike in the previous levels, the drift is regular enough to be defined without being convolved with the heat kernel; this is reflected in the facts that $\alpha + \alpha _n < \frac {3}{2}$ and $\alpha + \beta _n < \frac {3}{2}.$ We are therefore in a position to use the Time-Shifted Girsanov Method, Theorem 5.9.
We first study the well-posedness of $S^{(n)}$ and canonical regularity of the terms. We start with the term $B(Y^{(0,n)}_t) - B(Z^{(0,n-1)}_t)$ .
Proposition 6.17. Under the standing noise factorisation assumptions in equations 6.1–6.3, if the $Y^{(i)}$ equations are well-posed, with all their terms possessing canonical regularity (as guaranteed, for example, by Proposition 6.11 and Proposition 6.12), and if additionally $\alpha _0 < 1$, then, with probability one, for any $t> 0$, we have
That is to say, these terms are well-defined with their canonical regularity.
Proof of Proposition 6.17
Note that
which yields
and to leverage independence among the $Z^{(i)}$ , we note that
By $\alpha _0 < 1$ and $\alpha _n < \frac {1}{2}$, together with Remark 6.10 and Sections B.3–B.7,
almost surely. Therefore, since $\alpha _0 < 1$ ,
almost surely.
Unlike the system in equation (3.9) for the $Y^{(i)}$, where all terms are Gaussian objects, some product terms in equation (6.6) may not be a priori well-defined. Assume for the moment that $S^{(n)}$ is well-defined with its canonical regularity. From the assumptions of Proposition 6.17, we have $\beta _n < \alpha _n < \frac {1}{2}$, so $S^{(n)}_t$ is function-valued almost surely, which makes $B(S^{(n)}_t)$ well-defined, with $B(S^{(n)}_t) \in \mathcal {C}^{(-\frac {1}{2} - \beta _n)^-}$ almost surely. The only term we need to define appropriately is $B(Y^{(0,n)}_t, S^{(n)}_t)$.
As motivated in Section 6.3, we interpret the term $B(Y^{(0,n)}_t, S^{(n)}_t)$ as
where $\eta ^{(n)}$ and $\rho ^{(n)}$ solve
with $\eta ^{(n)}_0 = 0$ and $\rho ^{(n)}_0 = u_0$. Note that the stochastic forcing term $\eta ^{(n)}$ is just $\widetilde {Z}^{(n)}$ but with zero initial condition. In this case, $B(Y^{(0,n)}, \eta ^{(n)})$ can be defined using Sections B.3–B.7, since $\eta ^{(n)}$ is an Ornstein-Uhlenbeck process with zero initial condition. On the other hand, since the stochastic forcing has been removed from its equation, $\rho ^{(n)}$ has better regularity, so that $B(Y^{(0,n)}, \rho ^{(n)})$ can be defined classically for an appropriate choice of parameters.
Lemma 6.18. Assume the standing noise factorisation assumptions in equations 6.1–6.3 hold. If the $Y^{(i)}$ equations are well-posed, with all their terms possessing canonical regularity, then with probability one,
If $\alpha _0 < 1$ , then $\rho ^{(n)}$ and $B(Y^{(0,n)}, \rho ^{(n)})$ are well-defined locally in time, and for any $T < \tau _\infty $ , the maximal existence time,
In particular, $ S^{(n)} = \eta ^{(n)} + \rho ^{(n)} $ is well-defined locally in time, and for $T < \tau _\infty $ ,
Proof of Lemma 6.18
The first statement is proved as before, using Remark 6.10, Proposition B.8 and Sections B.6 and B.7.
We turn to the second statement. Note that we can rewrite equation (6.8) as
where $F^{(n)}$ is given by
By equation (6.9), we know $\eta ^{(n)} \in C_T \mathcal {C}^{(\frac {1}{2} - \beta _n)^-}$ . By Sections B.6 and B.7, since $\alpha _0 + \beta _n < \frac {3}{2}$ , it is clear that
Hence, by equation (6.7), we have $F^{(n)} \in C_T \mathcal {C}^{(-\frac {1}{2} - \alpha _0)^-}.$ By a standard fixed-point argument in Appendix C, we obtain that equation (6.8) is well-posed with local in time solutions such that $\rho ^{(n)} \in C_T \mathcal {C}^{(\frac {3}{2}-\alpha _0)^-}.$ In particular, since $\alpha _0 < 1$ , $B(Y^{(0,n)}, \rho ^{(n)}) \in C_T\mathcal {C}^{(-\frac {1}{2} - \alpha _0)^-} $ is well-defined classically.
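In summary, the regularity count behind the last two statements is
$$ \begin{align*} F^{(n)} \in C_T\mathcal{C}^{(-\frac12-\alpha_0)^-} \;\Longrightarrow\; \rho^{(n)} \in C_T\mathcal{C}^{(\frac32-\alpha_0)^-}, \qquad \underbrace{\big(\tfrac12-\alpha_0\big)}_{Y^{(0,n)}} + \underbrace{\big(\tfrac32-\alpha_0\big)}_{\rho^{(n)}} = 2 - 2\alpha_0 > 0 , \end{align*} $$
where the first implication reflects the (roughly) two derivatives gained by solving the heat equation with drift $F^{(n)}$ (Appendix C), and the positivity of the sum for $\alpha _0 < 1$ makes the product of $Y^{(0,n)}$ (which has the regularity of its roughest level $Y^{(0)}$) with $\rho ^{(n)}$, and hence $B(Y^{(0,n)}, \rho ^{(n)})$, classically well-defined, with the regularity stated above following from Remark A.9.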
We are now in a position to determine the constraints needed for $S^{(n)}$ to have its canonical regularity and for the application of the Time-Shifted Girsanov Method.
Proposition 6.19 (Constraint from canonical regularity for $S^{(n)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if the $Y^{(i)}$ equations are well-posed with all their terms possessing canonical regularity, and if in addition $\alpha _0 < 1$ and $\alpha _0 - \beta _n < 1$, then $S^{(n)}$ has canonical regularity
for any $T < \tau _\infty $ , namely that of the stochastic convolution in the equation. Setting for $t \geq \tau _\infty $ , we have that $ S^{(n)} \in C_T\overline {\mathcal {C}}^{(\frac {1}{2}-\beta _n)^-}$ for any $T>0$ (see Section 4.1 for the definition of $C_T\overline { \mathcal {C}}^\delta $ ).
Proof. By Lemma 6.18, we have $S^{(n)} = \eta ^{(n)} + \rho ^{(n)}$, where $\rho ^{(n)} \in C_T \mathcal {C}^{(\frac {3}{2}-\alpha _0)^-}$ and $\eta ^{(n)} \in C_T\mathcal {C}^{(\frac {1}{2}-\beta _n)^-}$ for $T < \tau _\infty $, the maximal existence time. Since $\eta ^{(n)}$ is exactly the stochastic convolution in the equation, it suffices to have $\eta ^{(n)}$ less regular than $\rho ^{(n)}$, which is guaranteed by the condition $\frac {3}{2} - \alpha _0> \frac {1}{2} - \beta _n$, that is, $\alpha _0 - \beta _n < 1$.
Proposition 6.20 (Constraint from Time-Shifted Girsanov for $S^{(n)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if the $Y^{(i)}$ equations are well-posed, with all their terms possessing canonical regularity, and if in addition $\alpha _0 < 1$ and $\alpha _0 - \beta _n < \frac {1}{2}$ , then the regularity conditions needed to apply Time-Shifted Girsanov Method to $S^{(n)}$ hold. More concretely, it implies that for any $t> 0$ , it holds almost surely that
where we recall $\mathcal {F}_t^{(n)} = \sigma ( W_s^{(j)}: j \leq n, s \leq t)$ .
In particular, as long as $\alpha _0 < 1$ , $\beta _n$ (and $\alpha _n$ ) can be taken close enough to $\frac {1}{2}$ to satisfy the condition $\alpha _0 - \beta _n < \frac {1}{2}$ .
Proof. In the setting of Proposition 6.17 and Lemma 6.18, we see that the roughest drift term in equation (6.6) is in $C_T\mathcal {C}^{(-\frac {1}{2}-\alpha _0)^-}$, so the Time-Shifted Girsanov condition from Remark 5.10 and the associated Theorem 5.9 is satisfied if $\frac {1}{2} - \alpha _0 + \beta _n> 0.$
Remark 6.21 (Redundant constraint)
As in Remark 6.14, the constraint $\alpha _0 - \beta _n < \frac {1}{2}$ for the Time-Shifted Girsanov Method in Proposition 6.20 also implies the constraint $\alpha _0 - \beta _n < 1$ in Proposition 6.19: that is, that $S^{(n)}$ has canonical regularity.
Remark 6.22 (Third note on $\alpha < 1$ )
We reiterate where $\alpha = \alpha _0 < 1$ is needed for the analysis of the remainder $S^{(n)}$ :
-
1. Together with equations (6.1)–(6.2) – that is, $\alpha + \alpha _n < \frac {3}{2}$ and $\alpha + \beta _n < \frac {3}{2}$ – it makes sure various $B(f, g)$ terms, such as equation (6.7) and equation (6.9), are well-defined with their canonical regularity, where f and g are Gaussian objects.
-
2. It makes sure $B(Y^{(0,n)}) - B(Z^{(0,n-1)})$ has the same regularity as $B(Z^{(0,n)}) - B(Z^{(0,n-1)})$ so that equation (6.7) holds. This corresponds to the regularisation effect of J as discussed in Remark 4.7.
-
3. It makes sure equation (6.8) is well-posed with equation (6.10).
Finally, by collecting all the results above, we prove our main result: the absolute continuity of the law of $u_t$ with respect to the law of $z_t$ defined in equation (2.3), for $\alpha < 1$.
Corollary 6.23 (The overall result on the Y system)
Fix an n and an $\alpha $ so that $\frac {1}{2} \le \alpha < \frac {2n+1}{2n+2}$. Then there exists a sequence of real numbers $\beta _n < \alpha _n < \ldots < \alpha _0 = \alpha $ such that the standing noise factorisation assumptions on the $\{\alpha _j: j=0,\dots ,n\}$ in equations 6.1–6.3 hold, as do the hypotheses of Proposition 6.11, Proposition 6.12, Proposition 6.13, Proposition 6.17, Lemma 6.18, Proposition 6.19 and Proposition 6.20. More concretely, it implies that for any $t> 0$, it holds almost surely that
where, as a reminder, $z_t$ is the linear part of $u_t$ with initial condition $z_0$ as defined in equation (2.3).
7 The X decomposition of noise and smoothness
Now we consider the X system given by equation (3.2). We note that the maximal existence time $\tau _\infty $ of solutions of equation (3.2) is the same as that of u, as in Lemma 6.1. We want to show that the same noise factorisation assumptions in equations 6.1–6.3, applied to the system given by equation (3.2), also give the desired absolute continuity result in equation (3.11). Since all computations are based on canonical regularity, which is dictated by the same stochastic forcing terms, we may follow the arguments of the previous section with minimal modifications. The main change is the need to make sense of products of the more complicated Gaussian chaos objects $X^{(0,i)}$. The idea is that when $\alpha = \alpha _0 < 1$, the singular terms have positive regularity after being convolved with the heat kernel once or twice. Hence, most products can be classically defined, and the remaining ones are exactly those that appeared before. Compared to a direct construction of the $X^{(0,i)}$, we do not need to construct objects in the nth Gaussian chaos for arbitrarily large n.
Lemma 7.1 (Canonical regularity of $X^{(i)}$ and drifts)
Under the standing noise factorisation assumptions in equations 6.1–6.3, if, in addition, $\alpha _0 < 1$ and $\alpha _0 + \alpha _{i - 1} - \alpha _i < \frac {3}{2}$ for $1 \le i \le n$ , then it holds almost surely that for $0 \le i \le n$ ,
In particular, the terms are well-defined with their canonical regularity.
Proof. By the same argument as in Proposition 6.12, as long as each term in the equation for $X^{(i)}$ is well-defined, it holds almost surely that $X^{(i)} \in C_T\mathcal {C}^{(\frac {1}{2} - \alpha _i)^-}.$ Hence, we can focus on the drifts in equation (3.2). We proceed by (finite) induction.
We start with base cases. For $i = 0$ , clearly $X^{(0, 0)} = X^{(0)} = Z^{(0)}$ , and it holds almost surely that $X^{(0)} \in C_T \mathcal {C}^{(\frac {1}{2} - \alpha _0)^-}$ . For $i = 1$ , $J(X^{(0)}) = J(Z^{(0)})$ is well-defined by Proposition B.12, and it holds almost surely that
For the purpose of induction, we also note that
is well-defined by Remark 6.10 and Sections B.4 and B.7, and it holds almost surely that
Next, we show our induction step. Assume that for $0 \le j \le i < n$ , each term in the equation for $X^{(j)}$ is well-defined, and it holds almost surely that
We want to show
We start by proving the first part of equation (7.2). Note that
We can rewrite
which gives
so it suffices to show the existence and regularity of each term in equation (7.3). For $j < i$ , using the induction hypothesis, we can define $J(X^{(0,j)})$ by the telescoping sum
Since $J(X^{(0,j)})$ has the regularity of $J(X^{(0)})$ , the roughest term in the sum, and $\alpha < 1$ , $B(J(X^{(0,j)}))$ is well-defined classically and
Also, by the induction hypothesis, we know
By Proposition 6.11, we have
Since $\alpha _{i}> \frac {1}{2}$, based on equation (7.4) and equation (7.5), $J(X^{(0,i)}) - J(X^{(0,i-1)})$ is well-defined with the same regularity as $J(Z^{(0,i)}) - J(Z^{(0,i-1)})$, which is the roughest term in equation (7.3).
To finish the induction step, we show the second part of equation (7.2). As before, we only need to work with each term in the following expansion:
Similarly, since $\alpha _0 < 1$ , $B(J^2(X^{(0,i-1)}))$ is well-defined classically and
Again, by equations 6.1–6.3, equation (6.5) and Sections B.4 and B.7,
By the induction hypothesis, we know
Again, since $\alpha _0 < 1$ and $Z^{(0,i+1)} \in C_T \mathcal {C}^{(\frac {1}{2}-\alpha _0)^-}$ , the following term is classically well-defined
because the sum of the regularities of the two terms in the product is positive. Therefore, $B(J(X^{(0,i)}), Z^{(0,i+1)})$ is well-defined with the desired regularity.
Consider proving equation (3.11) for each level $X^{(i)}$ of equation (3.2) before the remainder $R^{(n)}$ , where
In the decomposition in equation (7.3), the term $J(Z^{(0,i-1)}) - J(Z^{(0,i-2)})$ is exactly the drift in the $Y^{(i)}$ equation (3.9), which gives the same constraint as in the assumption of Proposition 6.13 for the Cameron-Martin Theorem (Theorem 5.1 and Remark 5.2). Since the remaining terms in equation (7.3) at time $t> 0$ are smoother than the term $J(Z^{(0,i-1)})_t - J(Z^{(0,i-2)})_t$ and adapted to $\mathcal {F}^{(i-1)}_t$, they also satisfy the conditions of the Cameron-Martin Theorem 5.1. Thus, we arrive at the same parameter constraint as in the assumption of Proposition 6.13.
Proposition 7.2. Under the standing noise factorisation assumptions in equations 6.1–6.3, if $\alpha _0 + \alpha _{i-1} - \alpha _i < 1$ for all $1 \le i \le n$ , then the regularity conditions needed to apply Theorem 5.1, the Cameron-Martin Theorem, hold for $X^{(i)}$ . More concretely, it implies that for $1 \le i \le n$ , for any $t> 0$ , it holds almost surely that
where we recall $\mathcal {F}_t^{(i-1)} = \sigma ( W_s^{(j)}: j \leq i-1, s \leq t)$ .
For the remainder, recall from equation (3.2) that
with initial condition $u_0$. It remains to make sense of the term $B(X^{(0,n)}) - B(X^{(0,n-1)})$ with its canonical regularity. From the proof of Lemma 7.1, in particular the decomposition in equation (7.3) with J replaced by B and i replaced by n, we can show the same regularity for each term in the decomposition except for the term $B(Z^{(0,n)}) - B(Z^{(0,n-1)})$. Under Condition (6.2), we instead have
by Sections B.3 and B.6 and an argument similar to that of Proposition 6.17. In this case, we have
Now we use essentially the same argument as in Proposition 6.17 and Lemma 6.18 to obtain the same constraint for canonical regularity.
Proposition 7.3 (Constraint from canonical regularity for $R^{(n)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if the $X^{(i)}$ equations are well-posed, with all their terms possessing canonical regularity, and if in addition $\alpha _0 < 1$ and $\alpha _0 - \beta _n < 1$ , then $R^{(n)}$ has canonical regularity
namely that of the stochastic convolution in the equation, and
for any $T < \tau _\infty $ almost surely. Setting for $t \geq \tau _\infty $ , we have that $ R^{(n)} \in C_T\overline {\mathcal {C}}^{(\frac {1}{2}-\beta _n)^-}$ for any $T>0$ .
Remark 7.4 (Fourth note on $\alpha < 1$ )
We note again the dependencies on $\alpha < 1$ :
-
1. Under this condition, the $J(Z)$ terms have positive Hölder regularity, which greatly reduces the complexity of making sense of the X equations in Lemma 7.1.
-
2. It makes sure $B(X^{(0,k)}) - B(X^{(0,k-1)})$ has the same regularity as $B(Z^{(0,k)}) - B(Z^{(0,k-1)})$ for $1 \le k \le n$ so that equation (7.1) and equation (7.7) hold. This also corresponds to the regularisation effect of J as discussed in Remark 4.7.
-
3. It makes sure the $R^{(n)}$ equation is well-posed and the Time-Shifted Girsanov Method applies in exactly the same way as the $S^{(n)}$ equation.
We observe that the terms in the $R^{(n)}$ equation have the same regularity as the corresponding terms in the $S^{(n)}$ equation, so we obtain the same result as in Proposition 6.20 for $R^{(n)}$.
Proposition 7.5 (Constraint from Time-Shifted Girsanov for $R^{(n)}$ )
Under the standing noise factorisation assumptions in equations 6.1–6.3, if the $X^{(i)}$ equations are well-posed, with all their terms possessing canonical regularity, and if in addition, $\alpha _0 < 1$ and $\alpha _0 - \beta _n < \frac {1}{2}$ , then the regularity condition needed to apply Theorem 5.9, the Time-Shifted Girsanov Method, to $R^{(n)}$ holds. More concretely, it implies that for any $t> 0$ , it holds almost surely that
where we recall $\mathcal {F}_t^{(n)} = \sigma ( W_s^{(j)}: j \leq n, s \leq t)$ .
In particular, as long as $\alpha _0 < 1$ , $\beta _n$ (and $\alpha _n$ ) can be taken close enough to $\frac {1}{2}$ to satisfy the condition $\alpha _0 - \beta _n < \frac {1}{2}$ .
Again, since the main argument of the previous section relies only on computations of the same regularities, we obtain, via Corollary 6.4, the same overall result as in the previous section.
Corollary 7.6 (The overall result on the X system)
Fix an n and an $\alpha $ so that $\frac {1}{2} \le \alpha < \frac {2n+1}{2n+2}$. Then there exists a sequence of real numbers $\beta _n < \alpha _n < \ldots < \alpha _0 = \alpha $ such that the standing noise factorisation assumptions on the $\{\alpha _j: j=0,\dots ,n\}$ in equations 6.1–6.3 hold, as do the hypotheses of Lemma 7.1, Proposition 7.2, Proposition 7.3 and Proposition 7.5. More concretely, it implies that for any $t> 0$, it holds almost surely that
where, as a reminder, $z_t$ is the linear part of $u_t$ with initial condition $z_0$ as defined in equation (2.3).
8 Discussion
The Time-Shifted Girsanov Method, described in Section 5.3, was used in [Reference Mattingly and SuidanMS05] to show that the hyper-viscous two-dimensional Navier-Stokes equation satisfied the translation of Theorem 2.1 to that setting when the forcing is smooth enough to have classical solutions but not so smooth that it is infinitely differentiable in space. This is completely analogous to the theorem proven here when $\alpha <\frac 12$. We conjecture that the translation of Theorem 2.1 for the classical two-dimensional Navier-Stokes equation does not hold for the same kind of forcing as in Section 5.3, because it is just beyond the validity of the Time-Shifted Girsanov Method. It would be interesting to compare and contrast that setting to the current one when $\alpha =1$, since both cases are right at the boundary of the Time-Shifted Girsanov Method, while the former case does not involve any singularity. In both settings, it would be interesting to understand the structure of the transition measure when $Q \approx e^{-A}$, where we expect the system to have more in common with a finite-dimensional hypoelliptic system.
A Besov spaces and Paraproducts
The results of this section can be found in [Reference Bahouri, Chemin and DanchinBCD11, Reference Gubinelli, Imkeller and PerkowskiGIP15, Reference Catellier and ChoukCC18]. We recall the definition of Littlewood-Paley blocks. Let $\chi , \varphi $ be smooth radial functions $\mathbb {R} \to \mathbb {R}$ such that
-
○ $0 \le \chi , \varphi \le 1$ , $\chi (\xi ) + \sum _{j \ge 0} \varphi (2^{-j}\xi ) = 1$ for any $\xi \in \mathbb {R}$ ,
-
○ $\text {supp}\, \chi \subseteq B(0, R)$ , $\text {supp}\, \varphi \subseteq B(0, 2R) \setminus B(0, R)$ ,
-
○ $\text {supp}\, \varphi (2^{-j} \cdot ) \cap \text {supp}\, \varphi (2^{-i} \cdot ) = \emptyset $ if $|i - j|> 1$ .
The pair $(\chi , \varphi )$ is called a dyadic partition of unity. We use the notations
for $j \ge 0$ . Then the family of Fourier multipliers $(\Delta _j)_{j \ge -1}$ denotes the associated Littlewood-Paley blocks: that is,
for $j \ge 0$ .
Definition A.1. For $s \in \mathbb {R}, p, q \in [1, \infty ]$ , the Besov space $B^s_{p,q}$ is defined as
As a convention, we denote by $\mathcal {C}^{s}$ the separable version of the Besov-Hölder space $B^s_{\infty , \infty }$ : that is, $\mathcal {C}^{s}$ is the closure of $C^\infty (\mathbb {T})$ with respect to $\|\cdot \|_{B^s_{\infty , \infty }}.$ We also write $\|\cdot \|_{\mathcal {C}^s}$ to mean $\|\cdot \|_{B^s_{\infty , \infty }}$ .
Remark A.2. To show that $u \in \mathscr {S}'$ is in $\mathcal {C}^s$ , it suffices to show $\|u\|_{B^{s'}_{\infty , \infty }} < \infty $ for some $s'> s$ .
Remark A.3. For $0 < s < 1$ , f is in the classical space of s-Hölder continuous functions if and only if $f \in L^\infty $ and $\|f\|_{\mathcal {C}^s} < \infty $ .
Proposition A.4 (Besov embedding)
Let $1 \le p_1 \le p_2 \le \infty $ and $1 \le q_1 \le q_2 \le \infty $ . For $s \in \mathbb {R}$ , the space $B^s_{p_1, q_1}$ is continuously embedded in $B^{s - (\frac {1}{p_1} - \frac {1}{p_2})}_{p_2, q_2}$ .
Definition A.5. A smooth function $\eta : \mathbb {R} \to \mathbb {R}$ is said to be an $S^m$ -multiplier if for every multi-index $\alpha $ ,
Proposition A.6. Let $m \in \mathbb {R}$ and $\eta $ be a $S^m$ -multiplier. Then for all $s \in \mathbb {R}$ and $1 \le p, q \le \infty $ , the operator $\eta (D)$ is continuous from $B^s_{p,q}$ to $B^{s-m}_{p,q}$ .
The following estimate can be found in [Reference Catellier and ChoukCC18, Lemma 2.5] and [Reference Gubinelli, Imkeller and PerkowskiGIP15, Lemma A.7].
Proposition A.7. Let A be the negative Laplacian and $\gamma , \delta \in \mathbb {R}$ with $\gamma \le \delta $ . Then
for all $u \in \mathcal {C}^\gamma $ .
For $u \in \mathcal {C}^{\gamma }$ and $v \in \mathcal {C}^{\delta }$ , we can formally decompose the product $uv$ as
where
Proposition A.8 (Bony estimates)
Let $\gamma , \delta \in \mathbb {R}$ . Then we have the estimates
-
○ $\|u \prec v\|_{\mathcal {C}^\delta } \lesssim \|u\|_{L^\infty } \|v\|_{\mathcal {C}^\delta }$ for $u \in L^\infty $ and $v \in \mathcal {C}^\delta $ .
-
○ $\|u \prec v\|_{\mathcal {C}^{\gamma + \delta }} \lesssim \|u\|_{\mathcal {C}^{\gamma }} \|v\|_{\mathcal {C}^\delta }$ for $\gamma < 0$ , $u \in \mathcal {C}^\gamma $ and $v \in \mathcal {C}^\delta $ .
-
○ $\|u \circ v\|_{\mathcal {C}^{\gamma + \delta }} \lesssim \|u\|_{\mathcal {C}^{\gamma }} \|v\|_{\mathcal {C}^\delta }$ for $\gamma + \delta> 0$ , $u \in \mathcal {C}^\gamma $ and $v \in \mathcal {C}^\delta $ .
Remark A.9. Note that the paraproduct $\prec $ is always a well-defined continuous bilinear operator. The product $\mathcal {C}^\gamma \times \mathcal {C}^\delta \to \mathcal {C}^{\gamma \wedge \delta \wedge (\gamma + \delta )}, (u, v) \mapsto uv$ is a well-defined, continuous bilinear map, provided $\gamma + \delta> 0$ . In this case, we say the product is classically well-defined. On the other hand, if we can directly show the existence of the resonant product $u \circ v$ of two terms u and v, then the product $uv$ will be well-defined. In this case, we usually have $u \circ v \in \mathcal {C}^{\gamma + \delta }$ given that $u \in \mathcal {C}^\gamma $ and $v \in \mathcal {C}^\delta $ , so we still obtain $uv \in \mathcal {C}^{\gamma \wedge \delta \wedge (\gamma + \delta )}$ without the condition $\gamma + \delta> 0$ .
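As a concrete illustration of these estimates and of Remark A.9: if $u \in \mathcal {C}^{-1/4}$ and $v \in \mathcal {C}^{3/4}$, then
$$ \begin{align*} u \prec v \in \mathcal{C}^{1/2}, \qquad v \prec u \in \mathcal{C}^{-1/4}, \qquad u \circ v \in \mathcal{C}^{1/2}, \end{align*} $$
so that the product $uv$ is classically well-defined with $uv \in \mathcal {C}^{-1/4} = \mathcal {C}^{\gamma \wedge \delta \wedge (\gamma +\delta )}$ for $\gamma = -\frac {1}{4}$ and $\delta = \frac {3}{4}$.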
B Construction of finite Gaussian chaos objects
In this section, we construct various singular processes, with their canonical regularity, that are necessary for the analysis above. We refer to [Reference HairerHai13, Reference Gubinelli and PerkowskiGP17, Reference Mourrat, Weber and XuMWX15, Reference Catellier and ChoukCC18, Reference Gubinelli, Imkeller and PerkowskiGIP15] for the relevant technical details. In particular, [Reference Gubinelli and PerkowskiGP17] provides constructions of many of these Gaussian objects in the harder case corresponding to $\alpha = 1$ in our setting. We mainly apply the unified argument of [Reference Mourrat, Weber and XuMWX15] and point out the minimal modifications needed for our setting. More specifically, we only need to change the relevant Fourier multipliers of the nonlinearity and the driving noise and compute the estimates by the same procedure. In principle, all of these objects could be constructed by invoking the abstract main result of [Reference Chandra and HairerCH16], but we find it instructive to construct them by hand as a reference for ‘pedestrians’ in the spirit of [Reference Mourrat, Weber and XuMWX15].
We first set up some notations:
-
○ We write $\mathbb {Z}_0 = \mathbb {Z} \setminus \{0\}.$
-
○ For $x \in \mathscr {S}'(\mathbb {T})$ , $k \in \mathbb {Z}$ , $\widehat {x}(k)$ denotes the kth Fourier mode of x.
-
○ For $k \in \mathbb {Z}$ , let $e_k$ be the kth Fourier basis function given by complex exponentials.
-
○ For a process x, we use the notation $x_{s, t} \,{:=}\, x_t - x_s$ .
-
○ We write $\eqsim $ to mean both $\lesssim $ and $\gtrsim $ .
-
○ We write $k \sim k'$ if $k \in \text {supp}\,\,\varphi _i$ , $k' \in \text {supp}\,\,\varphi _j$ and $|i - j| \le 1$ , and by abuse of this notation, we write $k \sim 2^j$ if $k \in \text {supp}\,\,\varphi _j$ , where $\varphi _j$ is defined in equation (A.1).
-
○ Let $\psi $ be a smooth radial function with compact support and $\psi (0) = 1$ . We regularise a process x by setting
$$ \begin{align*}x^\epsilon_t = \sum_{k \in \mathbb{Z}} \psi(\epsilon k)\hat{x}_t(k) e_k,\end{align*} $$
and for convenience, we also write
$$ \begin{align*}x^\epsilon_t = \sum_{|k| \lesssim \epsilon^{-1}} \hat{x}_t(k) e_k.\end{align*} $$
Fix $\gamma , \delta \in (0, 1)$ . Let $(z^{(\gamma )}_t:\, t \in [0, T])$ denote the Ornstein-Uhlenbeck process defined by
for some operator $Q^{(\gamma )} \approx A^{\gamma /2}$, where W is a cylindrical Brownian motion. We define $z^{(\delta )}$ in the same way, but we use different symbols to indicate that $z^{(\gamma )}$ and $z^{(\delta )}$ are driven by independent cylindrical Brownian motions and are therefore independent. It is more convenient to write $z^{(\gamma )}$ and $z^{(\delta )}$ in Fourier space: we have a family of independent, standard, complex-valued Brownian motions $(W(k)\,:\, k \in \mathbb {Z})$ with the real-valuedness constraint $\overline {W(k)} = W(-k)$ such that for $k \in \mathbb {Z}_0$,
where $q_k$ is the eigenvalue of $Q^{(\gamma )}$ corresponding to $e_k$ and $|q_k| \eqsim |k|^{\gamma }$. In particular, $q_0 = 0$, so $\widehat {z^{(\gamma )}_t}(0) = 0$, which means $z^{(\gamma )}$ has mean zero in space.
Remark B.1. Since we work with processes like $z^{(\gamma )}$ that have mean zero in space, we ignore the $0$ th Fourier mode by default: for example, in various summations involving Fourier modes.
B.1 Preliminary results
We will use Proposition 3.6, Lemma 4.1 and Lemma 4.2 from [Reference Mourrat, Weber and XuMWX15].
Proposition B.2 [Reference Mourrat, Weber and XuMWX15]
Let $x: [0, T] \to \mathscr {S}'(\mathbb {T})$ be a stochastic process in some finite Wiener chaos such that
If, for some $t \in [0, T]$ , $\mathbb {E}[|\widehat {x_t}(0)|^2] \lesssim 1$ and for all $k \in \mathbb {Z}_0$ ,
then for every $\beta < \kappa $ , $p \ge 2$ , we have
If, in addition to equation (B.1), there exists $h \in (0, 1)$ such that $\mathbb {E}[|\widehat {x_{s,t}}(0)|^2] \lesssim {|t - s|^h}$ and
uniformly in $0 < |t - s| < 1$ and $k \in \mathbb {Z}_0$, then $x \in C_T \mathcal {C}^{\beta }$, and
Remark B.3. Later, when we apply Proposition B.2, checking the condition on the $0$ th Fourier mode is straightforward, so we will omit details of that part.
Lemma B.4 [Reference Mourrat, Weber and XuMWX15]
Let $a, b \in \mathbb {R}$ satisfy $a + b> 1$ and $a, b < 1.$ We have uniformly for all $k \in \mathbb {Z}_0$
Lemma B.5 [Reference Mourrat, Weber and XuMWX15]
Let $a, b \in \mathbb {R}$ satisfy $a + b> 1.$ We have uniformly for all $k \in \mathbb {Z}_0$
The following proposition provides a way to bound pth moments of Hölder norms of a process via estimates of its Littlewood-Paley blocks.
Proposition B.6. Let $x : [0, T] \to \mathscr {S}'(\mathbb {T})$ be a stochastic process in some finite Wiener chaos such that for $h \ge 0$ small enough,
Then for any $\beta < \kappa $ and $p> 1$ ,
Proof. By Gaussian hypercontractivity (e.g., [Reference Mourrat, Weber and XuMWX15, Proposition 3.3]), for $p> 1$ , $h \ge 0$ small enough,
For any $\beta < \kappa $, taking h small enough and p large enough, by [Reference Mourrat, Weber and XuMWX15, Proposition 2.7] or the Besov embedding of Proposition A.4, we have
By a variant of the Kolmogorov continuity theorem or the standard Garsia-Rodemich-Rumsey lemma ([Reference Garsia, Rodemich, Rumsey and RosenblattGRRR70]), we obtain
B.2 Regularity and convergence of z
The following result, as in [Reference Gubinelli, Imkeller and PerkowskiGIP15, Lemma 4.4], follows from a straightforward computation and will be useful later.
Lemma B.7. The spatial Fourier transform $\widehat {z^{(\gamma )}}$ of $z^{(\gamma )}$ is a complex-valued, centred Gaussian process with covariance
where $k, k' \in \mathbb {Z}$ , $t, t' \in [0, T]$ . In particular, we have
for all $s,t,t' \in [0,T]$ , $\rho , h \in [0, 1]$ and $k \in \mathbb {Z}_0$ .
Using Proposition B.2 with the previous lemma, we have the following result.
Proposition B.8. For any $\beta < \frac {1}{2} - \gamma $ , we have $z^{(\gamma )} \in C_T \mathcal {C}^\beta .$
One may also adapt the argument of [Reference Catellier and ChoukCC18, Section 4.1], changing a few parameters, to prove the following approximation result.
Proposition B.9. For any $\beta < \frac {1}{2} - \gamma $ and any $p> 1$ , we have
B.3 Construction of $B(z^{(\gamma )})$ and $J(z^{(\gamma )})$
If $\gamma < \frac {1}{2}$, then $z^{(\gamma )}$ is a function-valued process, so $(z^{(\gamma )})^2$ and $J(z^{(\gamma )})$ are classically well-defined, and $B(z^{(\gamma )}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < -\frac {1}{2} - \gamma $.
If $\gamma \ge \frac {1}{2}$, then $z^{(\gamma )}$ is distribution-valued and $(z^{(\gamma )})^2$ is not classically well-defined, so we introduce a renormalisation procedure. For the regularised process $z^{(\gamma ), \epsilon }$, define the renormalised product
Lemma B.10. For any $s, t, t' \in [0, T]$ , we have
for any $k \in \mathbb {Z}_0$ , and $\rho , h \in [0,1]$ .
Proof. For convenience, we write $z = z^{(\gamma )}$ . Because of the renormalisation, $(z^{\epsilon })^{\diamond 2}_t$ belongs to the second homogeneous Wiener chaos. For $k \in \mathbb {Z}_0$ , $t> 0$ , by the Itô formula,
so we have
and equations (B.6) and (B.7) hold. On the other hand,
where, using both the bound $e^{-r} \le 1$ and the bound $1 - e^{-rt} \lesssim rt$ for $r \ge 0$, and computing integrals of the form $\int e^{-r}\, dr$, we obtain, for any $h \in [0,1]$,
so equation (B.8) holds by exchanging the roles of $k_1$ and $k_2$.
For $\gamma < \frac {3}{4}$ , we can show that the renormalised product $(z^{(\gamma ), \epsilon })^{\diamond 2}$ converges to a limiting process $(z^{(\gamma )})^{\diamond 2}$ with the desired regularity.
Proposition B.11. If $\frac {1}{2} \le \gamma < \frac {3}{4}$ , then there exists a process $(z^{(\gamma )})^{\diamond 2}$ such that
for any $k \in \mathbb {Z}_0$ , $0 \le t \le T$ and small enough $\eta> 0$ , and
for any $\beta < 1 - 2\gamma $ and $p> 1$ .
Proof. When $\gamma> \frac {1}{2}$ , by equation (B.6) and Lemma B.4, it holds uniformly in $\epsilon $ that
When $\gamma = \frac{1}{2}$, in order to apply Lemma B.4, we give up an arbitrarily small amount of decay in $k_1$ and $k_2$ in equation (B.10) to obtain equation (B.9).
Similarly, using equation (B.8) and Lemma B.4, for $h \ge 0$ small enough,
which implies, by Proposition B.6,
for any $\beta < 1 - 2\gamma$ and $p > 1$. We can obtain the same estimate for $(z^{\epsilon})^{\diamond 2} - (z^{\epsilon'})^{\diamond 2}$, since the terms involving $\psi(\epsilon k)$, $\psi(\epsilon' k')$ are uniformly bounded and the other terms are the same. As $\epsilon, \epsilon' \to 0$, in the $k$th Fourier mode we have a factor like
By dominated convergence, $((z^{(\gamma ), \epsilon })^{\diamond 2})_\epsilon $ is a Cauchy sequence in $L^p(\Omega , C_T \mathcal {C}^{\beta })$ , where $\Omega $ denotes the underlying probability space. We denote the limit by $(z^{(\gamma )})^{\diamond 2}$ , and the result follows.
However, if $\gamma \ge \frac{3}{4}$, then $(z^{(\gamma)})^{\diamond 2}$ is no longer a well-defined process but only a space-time distribution, so that $(z^{(\gamma)})^{\diamond 2}_t$ has no meaning at a fixed time $t > 0$. This can be recognised from the fact that the summation in equation (B.10) diverges. In this case, we instead consider the regularised process
for which we can show that, with the help of temporal regularity provided by the heat kernel, $J(z^{(\gamma ), \epsilon })^{\diamond }$ converges to a well-defined process $J(z^{(\gamma )})^{\diamond }$ with the desired regularity.
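To see the threshold $\gamma = \frac{3}{4}$ concretely, note that with the mode-variance scaling $\mathbb{E}\big|\widehat{z^{(\gamma)}_t}(k)\big|^2 \lesssim |k|^{2\gamma - 2}$ (the scaling consistent with the regularity $\frac{1}{2} - \gamma$), the variance of a Fourier mode of the Wick square is controlled, up to constants, by a discrete convolution:
\[
\mathbb{E}\Big|\widehat{\big(z^{(\gamma), \epsilon}_t\big)^{\diamond 2}}(k)\Big|^2
\;\lesssim\;
\sum_{\substack{k_1 + k_2 = k \\ k_1, k_2 \in \mathbb{Z}_0}} \frac{1}{|k_1|^{2 - 2\gamma}\, |k_2|^{2 - 2\gamma}}, % heuristic shape of equation (B.10); stated only for orientation
\]
which converges uniformly in $\epsilon$ precisely when $(2 - 2\gamma) + (2 - 2\gamma) > 1$, that is, $\gamma < \frac{3}{4}$. For $\gamma \ge \frac{3}{4}$, the fixed-time object is lost and only the time-mollified $J(z^{(\gamma)})^{\diamond}$ survives.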
Proposition B.12. If $\frac{1}{2} \le \gamma < 1$, there exists a process $J(z^{(\gamma)})^{\diamond}$ such that
for any $k \in \mathbb {Z}_0$ , $h \in [0,1]$ and $s, t \in [0,T]$ , and
for any $\beta < 2 - 2\gamma $ and $p> 1$ .
Proof. We focus on the case $\frac{3}{4} \le \gamma < 1$, since the case $\frac{1}{2} \le \gamma < \frac{3}{4}$ is already covered by Proposition B.11. Take $\rho \in (0,1)$ satisfying $2\gamma - \frac{3}{2} < \rho < \gamma - \frac{1}{2}$, and by equation (B.7) and Lemma B.4,
which implies
On the other hand, by taking $\rho \in (0,1)$ as above, using the bound
with similar computations for equation (B.8), we have
for $h \in [0,1]$ . With the above estimate, the last statement follows from the same argument as in Proposition B.11.
Remark B.13 (Renormalisation is not needed for the nonlinearity of Burgers)
Since $\partial _x: \mathcal {C}^{\beta } \to \mathcal {C}^{\beta - 1}$ is continuous and annihilates quantities that are constant in space, we note that
Also, by the Fourier expansion of $B(z^{(\gamma ), \epsilon })$ , we see that the $0$ th mode is zero, so the renormalisation procedure is not actually needed for $B(z^{(\gamma )})$ . By the same reasoning, we can interpret $J(z^{(\gamma )}) = J(z^{(\gamma )})^{\diamond }$ .
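In Fourier variables, the observation of the remark is simply that (writing, up to constants and the normalisation of the Fourier transform, $B(u) = \partial_x(u^2)$, an assumption about notation)
\[
\widehat{B(z^{(\gamma), \epsilon})_t}(k) \;=\; i k \sum_{k_1 + k_2 = k} \widehat{z^{(\gamma), \epsilon}_t}(k_1)\, \widehat{z^{(\gamma), \epsilon}_t}(k_2),
\qquad \text{so in particular } \widehat{B(z^{(\gamma), \epsilon})_t}(0) = 0;
\]
the spatially constant counterterm removed by the Wick subtraction lives only in the $0$th mode, which $\partial_x$ kills, so the renormalisation never enters $B(z^{(\gamma), \epsilon})$, and likewise $J(z^{(\gamma)}) = J(z^{(\gamma)})^{\diamond}$, as stated in the remark.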
B.4 Construction of $B(J(z^{(\gamma )}), z^{(\gamma )})$
If $\gamma < \frac {1}{2}$ , then $B(J(z^{(\gamma )}), z^{(\gamma )})$ is classically well-defined. Then $B(J(z^{(\gamma )}), z^{(\gamma )}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < ((\frac {3}{2} - \gamma ) \wedge (\frac {1}{2} - \gamma ) \wedge (2 - 2\gamma )) - 1 = -\frac {1}{2} - \gamma $ , according to Remark A.9.
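The wedge in the exponent is the usual Bony product count, presumably the content of Remark A.9: for $u \in \mathcal{C}^{\alpha_1}$ and $v \in \mathcal{C}^{\alpha_2}$ with $\alpha_1 + \alpha_2 > 0$, the product satisfies $uv \in \mathcal{C}^{\alpha_1 \wedge \alpha_2 \wedge (\alpha_1 + \alpha_2)}$. As a worked instance, for $\gamma < \frac{1}{2}$,
\[
J(z^{(\gamma)}) \in C_T \mathcal{C}^{\frac{3}{2} - \gamma - \varepsilon}, \quad
z^{(\gamma)} \in C_T \mathcal{C}^{\frac{1}{2} - \gamma - \varepsilon}
\;\Longrightarrow\;
J(z^{(\gamma)})\, z^{(\gamma)} \in C_T \mathcal{C}^{(\frac{3}{2} - \gamma) \wedge (\frac{1}{2} - \gamma) \wedge (2 - 2\gamma) - \varepsilon'},
\]
and applying $\partial_x$ costs one more derivative, giving the exponent quoted above; in this range the wedge equals $\frac{1}{2} - \gamma$, whence the value $-\frac{1}{2} - \gamma$.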
If $\frac {1}{2} \le \gamma < 1$ , we show the existence of the resonant product $J(z^{(\gamma )}) \circ z^{(\gamma )}.$
Proposition B.14. Suppose $\frac {1}{2} \le \gamma < 1$ . Let $\vartheta = J(z^{(\gamma )})$ , $z = z^{(\gamma )}$ . We have for any $k \in \mathbb {Z}_0$
and $\vartheta \circ z \in C_T \mathcal {C}^{\beta }$ for any $\beta < \frac {5}{2} - 3\gamma .$
Proof. Our approach is the same as in [Reference Mourrat, Weber and XuMWX15, Pages 29–31], and to see the argument more clearly, the reader is encouraged to write down the corresponding diagrams for our case.
For convenience, write $P_{t}(k) = e^{-|k|^2t} \mathbf {1}_{t \ge 0}$ . By the Wiener chaos decomposition (see [Reference Mourrat, Weber and XuMWX15] for a simple strategy using diagrams),
where $I^{(3)}$ belongs to the third Wiener chaos and $I^{(1)}$ belongs to the first Wiener chaos and they are given by
Consider $\mathbb{E}[|{{I^{(3)}_t}}(k)|^2]$, and expand out the integrals. We see that the inner integral has almost the same expression as $\mathbb{E}[|\widehat{\vartheta_t}(k')|^2]$, which can be bounded by $\frac{1}{|k'|^{5 - 4\gamma}}$. With this bound, the remaining outer integral has almost the same expression as $\mathbb{E}[|\widehat{z_t}(k'')|^2]$, which can be bounded by $\frac{1}{|k''|^{2-2\gamma}}$. Hence, we have by Lemma B.5,
Now consider $\mathbb {E}[|{{I^{(1)}_t}}(k)|^2]$ , and expand out the integrals. The inner integral becomes the left-hand side of the following expression, almost the same as $\mathbb {E}[|\widehat {z_t}(k)|^2]$ :
With this bound, the next outer integral has the following expression
whose size is bounded by, using Lemma B.5 again,
The remaining outer integral also has the form given by equation (B.13). We arrive at the result
For the remaining statement, we note that
We can show equation (B.2) of Proposition B.2 by replacing the bounds from equations (B.3) and (B.11) used above with equations (B.5) and (B.12).
Hence, if $\frac{1}{2} \le \gamma < 1$, $B(J(z^{(\gamma)}), z^{(\gamma)}) \in C_T \mathcal{C}^{\beta}$ for any $\beta < ((2 - 2\gamma) \wedge (\frac{1}{2} - \gamma) \wedge (\frac{5}{2} - 3\gamma)) - 1 = -\frac{1}{2} - \gamma$, according to Remark A.9.
B.5 Construction of $B(J(z^{(\gamma )}))$
Since $\gamma < 1$ , $J(z^{(\gamma )}) \in C_T \mathcal {C}^\beta $ for some $\beta> 0$ . Hence, $B(J(z^{(\gamma )}))$ is classically well-defined. By Remark A.9, if $\gamma < \frac {1}{2}$ , then $B(J(z^{(\gamma )})) \in C_T \mathcal {C}^\beta $ for any $\beta < \frac {1}{2} - \gamma ,$ and if $\frac {1}{2} \le \gamma < 1$ , then $B(J(z^{(\gamma )})) \in C_T \mathcal {C}^\beta $ for any $\beta < 1 - 2\gamma .$
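Spelled out, assuming as before that $B(u) = \partial_x(u^2)$ up to constants, the count uses only the algebra property of $\mathcal{C}^{\beta}$ for $\beta > 0$ and the loss of one derivative under $\partial_x$:
\[
\gamma < \tfrac{1}{2}:\; J(z^{(\gamma)}) \in C_T \mathcal{C}^{\frac{3}{2} - \gamma - \varepsilon} \Rightarrow B(J(z^{(\gamma)})) \in C_T \mathcal{C}^{\frac{1}{2} - \gamma - \varepsilon};
\qquad
\tfrac{1}{2} \le \gamma < 1:\; J(z^{(\gamma)}) \in C_T \mathcal{C}^{2 - 2\gamma - \varepsilon} \Rightarrow B(J(z^{(\gamma)})) \in C_T \mathcal{C}^{1 - 2\gamma - \varepsilon}.
\]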
B.6 Construction of $B(z^{(\gamma )}, z^{(\delta )})$ and $J(z^{(\gamma )}, z^{(\delta )})$
Recall that $z^{(\gamma )}$ and $z^{(\delta )}$ are independent. Then for $\gamma + \delta < \frac {3}{2}$ , it suffices to show the existence of the resonant product $z^{(\gamma )} \circ z^{(\delta )}.$
Proposition B.15. Suppose $\gamma + \delta < \frac {3}{2}$ . Then we have, for any $k \in \mathbb {Z}_0$ ,
and $z^{(\gamma )} \circ z^{(\delta )} \in C_T \mathcal {C}^{\beta }$ for any $\beta < 1 - \gamma - \delta .$
Proof. By the definition of $z^{(\gamma )}_t \circ z^{(\delta )}_t$ , independence and equation (B.3), we have for any $k \in \mathbb {Z}_0$
where we used Lemma B.5 since $\gamma + \delta < \frac {3}{2}.$ Note that
We can show equation (B.2) of Proposition B.2 in the same way as equation (B.14) by using equation (B.5) instead. Then the result follows from Proposition B.2.
Thus, when $\gamma + \delta < \frac {3}{2}$ , $B(z^{(\gamma )}, z^{(\delta )}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < ((\frac {1}{2} - \gamma ) \wedge (\frac {1}{2} - \delta ) \wedge (1 - \gamma - \delta )) - 1$ , according to Remark A.9.
When $\frac{3}{2} \le \gamma + \delta < 2$, we encounter the same situation as for $(z^{(\gamma)})^{\diamond 2}$: the product $z^{(\gamma)} z^{(\delta)}$ is not a well-defined process at fixed times but only a space-time distribution, so we instead define the process $J(z^{(\gamma)}, z^{(\delta)})$, as we did for $J(z^{(\gamma)})$.
Proposition B.16. Suppose $\frac {3}{2} \le \gamma + \delta < 2$ . Let $z = z^{(\gamma )}$ , $\tilde {z} = z^{(\delta )}$ . Then we have for any $k \in \mathbb {Z}_0$ ,
and $J(z, \tilde {z}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < 2 - \gamma - \delta .$
Proof. Let $k \in \mathbb {Z}_0$ . By independence and equation (B.4), for any $\rho _1, \rho _2 \ge 0$ ,
Recall that $\gamma , \delta < 1.$ By taking $\rho _1, \rho _2 \ge 0$ such that
we can use Lemma B.4 to obtain
where $\rho = \rho _1 + \rho _2 \in [0, 1].$ Then
By the same computation as in Proposition B.12, for any $h \in [0,1]$,
The result follows from Proposition B.2.
B.7 Construction of $B(J(z^{(\gamma )}), z^{(\delta )})$
If $\gamma < \frac {1}{2}$ , then $B(J(z^{(\gamma )}), z^{(\delta )})$ is classically well-defined. Then $B(J(z^{(\gamma )}), z^{(\delta )}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < ((\frac {3}{2} - \gamma ) \wedge (\frac {1}{2} - \delta ) \wedge (2 - \gamma - \delta )) - 1$ , according to Remark A.9.
If $\frac {1}{2} \le \gamma < 1$ , we need to show the existence of the resonant product $J(z^{(\gamma )}) \circ z^{(\delta )}.$
Proposition B.17. Suppose $\frac {1}{2} \le \gamma < 1$ . Let $\vartheta = J(z^{(\gamma )})$ , $z = z^{(\delta )}$ . We have for any $k \in \mathbb {Z}_0$ ,
and $\vartheta \circ z \in C_T \mathcal {C}^{\beta }$ for any $\beta < \frac {5}{2} - 2\gamma - \delta .$
Proof. By the definition of $\vartheta _t \circ z_t$ , independence, equation (B.3) and equation (B.11), we have for any $k \in \mathbb {Z}_0$ ,
where we used Lemma B.5, since $\gamma , \delta < 1$ . Similar to Proposition B.15, we can show equation (B.2) of Proposition B.2 by using equations (B.5) and (B.12). Then the result follows from Proposition B.2.
Thus, when $\frac {1}{2} \le \gamma < 1$ , $B(J(z^{(\gamma )}), z^{(\delta )}) \in C_T \mathcal {C}^{\beta }$ for any $\beta < ((2 - 2\gamma ) \wedge (\frac {1}{2} - \delta ) \wedge (\frac {5}{2} - 2\gamma - \delta )) - 1$ , according to Remark A.9.
C Existence and regularity of solutions
We now prove a local existence theorem by a fixed-point argument for the type of equations needed in this note. This result is quite standard, and we sketch the argument both for completeness and to highlight the structure of these equations. We will consider the following integral equation
where $c_i \in \mathbb{R}$ and, for some $T > 0$, $G \in C_T\mathcal{C}^\sigma$, $g \in C_T\mathcal{C}^\gamma$ and $v_0 \in \mathcal{C}^\sigma$ for some $\gamma$ and $\sigma$. We will assume that
Now for $v^{(1)}, v^{(2)} \in C_t\mathcal {C}^\sigma $ for some $t \in (0,T]$ , we have for $s \in (0,t]$ that
Because of our assumptions on $\sigma $ and $\gamma $ , we have that
where the $s$-dependent constant in each inequality goes to zero as $s \rightarrow 0$. Hence, there exists a $K_s$ with $K_s \rightarrow 0$ as $s \rightarrow 0$ such that
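For orientation, the estimates behind $K_s \to 0$ are the standard heat-semigroup smoothing bounds. Here is a minimal sketch, assuming (up to constants; these are assumptions about notation rather than the paper's verbatim definitions) that $J(v)_t = \int_0^t e^{(t-r)\partial_x^2}\, \partial_x(v_r^2)\, dr$ and $J(g, v)_t = \int_0^t e^{(t-r)\partial_x^2}\, \partial_x(g_r v_r)\, dr$:
\begin{align*}
\|J(v^{(1)})_s - J(v^{(2)})_s\|_{\mathcal{C}^\sigma}
&\lesssim \int_0^s (s-r)^{-\frac{1}{2}} \big\|(v^{(1)}_r)^2 - (v^{(2)}_r)^2\big\|_{\mathcal{C}^\sigma}\, dr
\lesssim s^{\frac{1}{2}} \big(\|v^{(1)}\|_{C_s\mathcal{C}^\sigma} + \|v^{(2)}\|_{C_s\mathcal{C}^\sigma}\big)\, \|v^{(1)} - v^{(2)}\|_{C_s\mathcal{C}^\sigma},
\\
\|J(g, v^{(1)})_s - J(g, v^{(2)})_s\|_{\mathcal{C}^\sigma}
&\lesssim \int_0^s (s-r)^{-\frac{1 + \sigma - (\gamma \wedge \sigma)}{2}} \|g_r\|_{\mathcal{C}^\gamma}\, \|v^{(1)}_r - v^{(2)}_r\|_{\mathcal{C}^\sigma}\, dr
\lesssim s^{\frac{1 - \sigma + (\gamma \wedge \sigma)}{2}} \|g\|_{C_s\mathcal{C}^\gamma}\, \|v^{(1)} - v^{(2)}\|_{C_s\mathcal{C}^\sigma}.
\end{align*}
The first line uses that $\mathcal{C}^\sigma$ is an algebra ($\sigma > 0$) together with $\|e^{r\partial_x^2}\partial_x w\|_{\mathcal{C}^\sigma} \lesssim r^{-\frac{1}{2}}\|w\|_{\mathcal{C}^\sigma}$; the second uses the product estimate $\|g_r v_r\|_{\mathcal{C}^{\gamma\wedge\sigma}} \lesssim \|g_r\|_{\mathcal{C}^\gamma}\|v_r\|_{\mathcal{C}^\sigma}$ (valid since $\gamma + \sigma > 0$) and the corresponding smoothing bound. The time integrals converge because $\sigma < \gamma + 1$, and the positive powers of $s$ provide the constant $K_s \to 0$.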
Hence, fixing any $R>0$ so that $\|v_0\|_{\mathcal {C}^\sigma } + \|G\|_{C_T\mathcal {C}^\sigma } <R$ and $\|g\|_{C_T\mathcal {C}^\gamma } < R$ , there exists $s>0$ such that $\Phi $ is a contraction on $\{ v \in C_s\mathcal {C}^\sigma : \|v\|_{ C_s\mathcal {C}^\sigma } \le R\}$ . This implies that there exists a fixed point with $v_t=\Phi (v)_t$ for all $t\in [0,s]$ . Since $c_1J(v) + c_2 J(g,v) \in { C_s\mathcal {C}^\sigma }$ is well-defined classically for $v \in C_s\mathcal {C}^\sigma $ , given our assumptions in equation (C.2), we have proven the following result.
Proposition C.1 (Local existence and regularity)
In the above setting with $\gamma +1> \sigma > 0$ and $\gamma +\sigma>0$ , the integral equation (C.1) has a unique local solution v with $v \in C_s\mathcal {C}^\sigma $ for some $s>0$ . In particular, if the regularity of the additive forcing $G_t$ is set by the stochastic convolution in the equation, then equation (C.1) has canonical regularity in the sense of Definition 4.8.
Remark C.2. By repeatedly applying the above result, we can extend the existence to a maximal time $\tau$ such that $\|v_t\|_{\mathcal{C}^\sigma} \rightarrow \infty$ as $t \rightarrow \tau$ when $\tau < \infty$. By extending $v$ appropriately to all $t \geq \tau$ when $\tau < \infty$, we see that $v \in C_T \overline{\mathcal{C}}^\sigma$ for all $T > 0$. (See Section 4.1 for the definition of $C_T \overline{\mathcal{C}}^\sigma$ and related discussions.)
The previous Proposition C.1 can be seamlessly extended to less regular initial conditions. Assume
set $\rho =\sigma \wedge (\gamma +1)$ , and consider $v_0\in \mathcal {C}^{\sigma _0}$ , with
Set $\theta =\frac 12(\rho -\sigma _0)$ , and for $T>0$ , define the space
with norm
With the above choice of $\sigma _0$ and under the conditions given by equations (C.4) and (C.5), we have that
with $K_T\downarrow 0$ as $T\downarrow 0$ . With these inequalities at hand, the same proof outlined above for Proposition C.1 can be adapted to this setting, yielding the following result.
Proposition C.3. Consider $\gamma ,\sigma $ as in equation (C.4), $\rho =\sigma \wedge (\gamma +1)$ , and $\sigma _0$ as in equation (C.5), and let $v_0\in \mathcal {C}^{\sigma _0}$ . Then the integral equation (C.1) has a unique local solution v in $\mathcal {X}_s^{\sigma _0,\rho }$ for some $s>0$ .
The restriction on $\rho $ can be dropped by parabolic regularisation, yielding the following result.
Corollary C.4. Consider $\gamma ,\sigma $ as in equation (C.4), $\rho =\sigma \wedge (\gamma +1)$ and $\sigma _0$ such that $-1<\sigma _0<\rho $ . If $v_0\in \mathcal {C}^{\sigma _0}$ , then the integral equation (C.1) has a unique local solution v in $C_s\mathcal {C}^{\sigma _0}$ for some $s>0$ such that $v\in C([\epsilon ,s];\mathcal {C}^\rho )$ for all $\epsilon>0$ .
In particular, if $\sigma \leq \gamma +1$ and the regularity of the additive forcing G is set by a stochastic convolution in the equation, then equation (C.1) has canonical regularity in the sense of Definition 4.8 on every closed interval included in $(0,s]$ .
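The parabolic regularisation invoked for Corollary C.4 rests on the standard smoothing estimate for the heat semigroup on Hölder-Besov spaces, namely
\[
\|e^{t\partial_x^2} v_0\|_{\mathcal{C}^{\rho}} \;\lesssim\; t^{-\frac{\rho - \sigma_0}{2}}\, \|v_0\|_{\mathcal{C}^{\sigma_0}} \;=\; t^{-\theta}\, \|v_0\|_{\mathcal{C}^{\sigma_0}}, \qquad \sigma_0 < \rho,\ t \in (0, T],
\]
so the linear part of equation (C.1) lies in $\mathcal{C}^{\rho}$ for every positive time, with a blow-up rate at $t = 0$ that the weight $\theta = \frac{1}{2}(\rho - \sigma_0)$ in $\mathcal{X}^{\sigma_0,\rho}_T$ is presumably designed to absorb; this is why the solution of Corollary C.4 is continuous with values in $\mathcal{C}^{\rho}$ only away from $t = 0$.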
Acknowledgements
MR and JCM thank MSRI for its hospitality during the 2015 program ‘New Challenges in PDE: Deterministic Dynamics and Randomness in High and Infinite Dimensional Systems’, where they began working on the multilevel decomposition to prove equivalence. While at MSRI, JCM was supported by a Simons Professorship. This work builds on an unpublished manuscript of JCM and Andrea Watkins Hairston, which treats the case analogous to $\alpha < \frac{1}{2}$ in related PDEs using the Time-Shifted Girsanov Method directly on the main equation, without the levels of decomposition needed for the singular case. JCM and MR also gratefully acknowledge a grant from the Visiting Professors programme of GNAMPA-INdAM, which allowed JCM to visit Pisa during the summer of 2016, where this work evolved closer to its current direction. JCM and LS thank the National Science Foundation for its partial support through grant NSF-DMS-1613337. LS also thanks SAMSI for its partial support through grant NSF-DMS-163852 during the 2020–2021 academic year, when all the pieces finally came together in the singular setting and this note was written.
Conflicts of Interest
None
Financial support
The National Science Foundation supported this work through grants NSF-DMS-1613337 and NSF-DMS-163852; the Simons Foundation, through its support of the Simons Professorships at MSRI; and the Italian GNAMPA-INdAM, through its support of a visiting professorship at the University of Pisa.