This paper describes results concerning the construction of low-regularity solutions of nonlinear partial differential equations that depend on a random parameter. The motivations for this study are very varied. However, in the end, the results obtained and the methods used are conceptually very similar.
1 Multiple Fourier series and Sobolev spaces on the torus
For $n \in \mathbb{Z}^d$, we set
$$\langle n \rangle = (1 + |n|^2)^{1/2}.$$
Let $\mathbb{T}^d = (\mathbb{R}/2\pi\mathbb{Z})^d$ be the torus of dimension $d$. If $f$ is a function of class $C^{\infty}$ on $\mathbb{T}^d$, then for all $x \in \mathbb{T}^d$,
$$f(x) = \sum_{n \in \mathbb{Z}^d} \hat{f}(n)\, e^{i n \cdot x},$$
where
$$\hat{f}(n) = \frac{1}{(2\pi)^d} \int_{\mathbb{T}^d} f(x)\, e^{-i n \cdot x}\, dx$$
are the Fourier coefficients of $f$. For $s \in \mathbb{R}$, the Sobolev norm $\|f\|_{H^s}$ of $f$ is defined by
$$\|f\|_{H^s}^2 = \sum_{n \in \mathbb{Z}^d} \langle n \rangle^{2s} |\hat{f}(n)|^2. \tag{1}$$
For an integer $s \geq 0$, we have the equivalence of norms
$$\|f\|_{H^s} \approx \sum_{\alpha \in \mathbb{N}^d,\ |\alpha| \leq s} \|\partial^{\alpha} f\|_{L^2(\mathbb{T}^d)}. \tag{2}$$
In (2), $\partial^{\alpha}$ represents a partial derivative of order $|\alpha|$. For $s = 0$, we recover the norm of the Lebesgue space $L^2(\mathbb{T}^d)$.
The Sobolev space $H^s(\mathbb{T}^d)$ is defined as the completion of $C^{\infty}(\mathbb{T}^d)$ with respect to the norm (1). In contrast to the case $s \geq 0$, for $s < 0$ the elements of $H^s(\mathbb{T}^d)$ are not classical functions on the torus, but can be interpreted as Schwartz distributions. Note that the Sobolev spaces are nested: the larger $s$ is, the more regular the elements of $H^s(\mathbb{T}^d)$ are; the intersection of all the $H^s(\mathbb{T}^d)$ is $C^{\infty}(\mathbb{T}^d)$. On the other hand, the smaller $s$ is, the larger $H^s(\mathbb{T}^d)$ is; the union of all the spaces $H^s(\mathbb{T}^d)$ is the space of periodic Schwartz distributions on $\mathbb{T}^d$.
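As a concrete check (an illustration added here, not part of the original argument), the norm (1) can be computed with the FFT. In the normalization below, $L^2$ is taken with the measure $dx/(2\pi)$, so that Parseval reads $\|f\|_{L^2}^2 = \sum_n |\hat{f}(n)|^2$; for $f = \sin x$ on $\mathbb{T}^1$ one then gets $\|f\|_{H^0}^2 = 1/2$ and $\|f\|_{H^1}^2 = \|f\|_{L^2}^2 + \|f'\|_{L^2}^2 = 1$, in agreement with the equivalence (2).

```python
import numpy as np

def fourier_coeffs(f_vals):
    """Fourier coefficients hat f(n) of a function sampled on a uniform grid of [0, 2*pi)."""
    return np.fft.fft(f_vals) / len(f_vals)

def sobolev_norm(f_vals, s):
    """Discrete version of the H^s norm (1): (sum_n <n>^{2s} |hat f(n)|^2)^{1/2}."""
    N = len(f_vals)
    n = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies 0, 1, ..., N/2-1, -N/2, ..., -1
    c = fourier_coeffs(f_vals)
    return np.sqrt(np.sum((1.0 + n**2) ** s * np.abs(c) ** 2))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(x)
# ||sin||_{H^0}^2 = 1/2 and ||sin||_{H^1}^2 = 1/2 + 1/2 = 1 in this normalization
print(sobolev_norm(f, 0) ** 2, sobolev_norm(f, 1) ** 2)
```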
2 Probabilistic effects in fine questions of analysis
2.1 An almost sure improvement of the Sobolev embedding
Let $(\Omega, \mathcal{F}, p)$ be a probability space. Recall that a random variable $h : \Omega \to \mathbb{R}$ belongs to the class $\mathcal{N}(0, \sigma^2)$, with $\sigma > 0$, if the image of the measure $p$ under $h$ is
$$\frac{1}{\sigma \sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)}\, d\lambda(x),$$
where $\lambda$ is the Lebesgue measure on $\mathbb{R}$. The random variable $h$ then follows the centered normal distribution with variance $\sigma^2$. Similarly, a random variable $h : \Omega \to \mathbb{C}$ belongs to the class $\mathcal{N}_{\mathbb{C}}(0, \sigma^2)$ if $h = \frac{1}{\sqrt{2}}(h_1 + i h_2)$ with $h_1, h_2 \in \mathcal{N}(0, \sigma^2)$ and independent.
Let $f \in H^s(\mathbb{T}^d)$ be a deterministic function. There exists a sequence $(c_n)_{n \in \mathbb{Z}^d}$ (which consists of the Fourier coefficients of $f$) such that
$$f(x) = \sum_{n \in \mathbb{Z}^d} c_n\, e^{i n \cdot x}, \qquad \sum_{n \in \mathbb{Z}^d} \langle n \rangle^{2s} |c_n|^2 < \infty.$$
Now consider a randomized version of $f$ given by the expression
$$f^{\omega}(x) = \sum_{n \in \mathbb{Z}^d} g_n(\omega)\, c_n\, e^{i n \cdot x},$$
where $(g_n)_{n \in \mathbb{Z}^d}$ is a sequence of independent variables of class $\mathcal{N}_{\mathbb{C}}(0,1)$. The randomization has no effect on the Sobolev regularity of $f$ (see, e.g., ). On the other hand, randomization has an important effect on regularity in Lebesgue spaces $L^p(\mathbb{T}^d)$. Using the rotation invariance of the Gaussian distribution, we obtain that $g_n c_n$ has the same distribution as $g_n |c_n|$, and then the independence of the $(g_n)$ ensures that for a fixed $x \in \mathbb{T}^d$, the random variable $f^{\omega}(x)$ is a complex Gaussian with variance $\sum_{n \in \mathbb{Z}^d} |c_n|^2$.
Since Gaussian variables have finite moments of any order, we get
$$\mathbb{E}\big[\|f^{\omega}\|_{L^p(\mathbb{T}^d)}^p\big] = \int_{\mathbb{T}^d} \mathbb{E}\big[|f^{\omega}(x)|^p\big]\, dx \leq C_p \Big(\sum_{n \in \mathbb{Z}^d} |c_n|^2\Big)^{p/2} < \infty,$$
which implies that $f^{\omega} \in L^p(\mathbb{T}^d)$ almost surely, a remarkable improvement in the regularity of $f^{\omega}$ compared to that of $f$. Note that the Sobolev embedding requires that a deterministic function be $H^{d/2}$-regular in order to conclude that it lies in $L^p(\mathbb{T}^d)$ for any $p < \infty$ (and this restriction on the regularity is essentially optimal). In descriptive terms, randomization saves $d/2$ derivatives compared with the Sobolev embedding. Like the Sobolev embedding, this probabilistic effect has been known since the beginning of the 20th century, and it may seem surprising that the interaction between these two phenomena has not been studied more in the past.
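The Gaussian moment computation above can be checked numerically. The sketch below (the coefficients $c_n = \langle n \rangle^{-3/4}$ on $\mathbb{T}^1$ are a sample choice, not taken from the text) estimates $\mathbb{E}|f^{\omega}(x)|^4$ by Monte Carlo and compares it with $2\big(\sum_n |c_n|^2\big)^2$, the fourth moment of a complex Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample coefficients c_n = <n>^{-3/4}: square-summable, so f is in L^2
# but only in H^s for s < 1/4.
N = 64
n = np.arange(-N, N + 1)
c = (1.0 + n**2) ** (-0.375)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
E = np.exp(1j * np.outer(n, x))        # matrix of e^{inx}
sigma2 = np.sum(np.abs(c) ** 2)        # variance of f^omega(x) at each fixed x

trials = 3000
fourth = 0.0
for _ in range(trials):
    g = (rng.standard_normal(len(n)) + 1j * rng.standard_normal(len(n))) / np.sqrt(2)
    fourth += np.mean(np.abs((g * c) @ E) ** 4)
fourth /= trials

# For a complex Gaussian Z with E|Z|^2 = sigma^2, one has E|Z|^4 = 2 sigma^4.
ratio = fourth / sigma2**2
print(ratio)  # close to 2
```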
Finally, thanks to the Khinchin inequality, in the preceding discussion one is allowed to replace Gaussian variables by a more general family of random variables (e.g., Bernoulli variables).
2.2 Almost sure products in Sobolev spaces of negative index
Let
$$f^{\omega}(x) = \sum_{n \in \mathbb{Z}^d} \frac{g_n(\omega)}{\langle n \rangle^{d/2}}\, e^{i n \cdot x}$$
be a random series, with $(g_n)$ as in the preceding section. It is easy to verify that almost surely $f^{\omega} \in H^{\sigma}(\mathbb{T}^d)$ for $\sigma < 0$, but almost surely $f^{\omega} \notin L^2(\mathbb{T}^d)$. In the following, we fix a number $\sigma$ such that $\sigma < 0$ (it should be assumed that this number is very close to $0$). The series $f^{\omega}$ is therefore in a Sobolev space of negative index and it is difficult to define an object like $|f^{\omega}|^2$. After renormalization, it is nevertheless possible to give a meaning to $|f^{\omega}|^2$, and even to determine its regularity in Sobolev spaces. Let us start by considering the partial sums
$$f_N^{\omega}(x) = \sum_{|n| \leq N} \frac{g_n(\omega)}{\langle n \rangle^{d/2}}\, e^{i n \cdot x},$$
which are $C^{\infty}$ functions. Now expand $|f_N^{\omega}|^2$ as
$$|f_N^{\omega}(x)|^2 = \sum_{|n| \leq N} \frac{|g_n(\omega)|^2}{\langle n \rangle^{d}} + \sum_{\substack{|n|, |m| \leq N \\ n \neq m}} \frac{g_n(\omega)\, \overline{g_m(\omega)}}{\langle n \rangle^{d/2} \langle m \rangle^{d/2}}\, e^{i (n - m) \cdot x}.$$
The first term of this expansion (the zero-order Fourier coefficient) contains all the singularity, while the second term has a limit almost surely in $H^{\sigma}(\mathbb{T}^d)$. We then set
$$\alpha_N = \mathbb{E}\Big[\sum_{|n| \leq N} \frac{|g_n|^2}{\langle n \rangle^{d}}\Big] = \sum_{|n| \leq N} \frac{1}{\langle n \rangle^{d}},$$
and we define the renormalized partial sum as
$$G_N^{\omega}(x) = |f_N^{\omega}(x)|^2 - \alpha_N. \tag{3}$$
The independence of the random variables $(g_n)$ ensures that the zero-order Fourier coefficient of $G_N^{\omega}$ is well-defined in the limit. More precisely, we obtain
$$\mathbb{E}\Big[\Big(\sum_{|n| \leq N} \frac{|g_n|^2 - 1}{\langle n \rangle^{d}}\Big)^2\Big] = \sum_{|n| \leq N} \frac{\mathbb{E}\big[(|g_n|^2 - 1)^2\big]}{\langle n \rangle^{2d}} = c \sum_{|n| \leq N} \frac{1}{\langle n \rangle^{2d}},$$
which has a limit as $N \to \infty$.
Similarly, the independence implies that the expectation
$$\mathbb{E}\big[\big\|G_N^{\omega} - \widehat{G_N^{\omega}}(0)\big\|_{H^{\sigma}}^2\big] = \sum_{0 < |k| \leq 2N} \langle k \rangle^{2\sigma} \sum_{\substack{n - m = k \\ |n|, |m| \leq N}} \frac{1}{\langle n \rangle^{d} \langle m \rangle^{d}}$$
is bounded from above by a term of the order of
$$\sum_{k \in \mathbb{Z}^d} \langle k \rangle^{2\sigma - d} \log \langle k \rangle.$$
The latter sum is convergent under the condition $2\sigma - d < -d$, which is equivalent to our assumption $\sigma < 0$. Consequently, the sequence
has a limit in $L^2\big(\Omega;\, H^{\sigma}(\mathbb{T}^d)\big)$. This limit is by definition the renormalization of $|f^{\omega}|^2$. We can also establish (using more sophisticated arguments) the almost sure convergence in the Sobolev space $H^{\sigma}(\mathbb{T}^d)$ of the sequence (3). Note that since $\sigma < 0$, the norm in $H^{\sigma}(\mathbb{T}^d)$ is weaker than the norm in $L^2(\mathbb{T}^d)$ (where the partial sums are defined).
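A toy computation (in dimension $d = 1$, with the weights $\langle n \rangle^{-d}$ for the zero-order coefficient, an assumed normalization for this illustration) shows the two phenomena at play: the expectation of the zero-order Fourier coefficient of $|f_N^{\omega}|^2$ diverges with $N$, while its renormalized fluctuation stays bounded because $\sum_n \langle n \rangle^{-2d} < \infty$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1  # dimension of the torus in this toy computation

def weights(N):
    n = np.arange(-N, N + 1)
    return (1.0 + n**2) ** (-d / 2)  # <n>^{-d}: weights of the zero-order coefficient

alphas, stds = [], []
for N in (10, 100, 1000):
    w = weights(N)
    alpha_N = np.sum(w)  # expectation of the zero-order coefficient of |f_N|^2
    # |g_n|^2 for g_n complex N_C(0,1): mean 1, variance 1
    g2 = (rng.standard_normal((400, len(w)))**2
          + rng.standard_normal((400, len(w)))**2) / 2
    renorm = g2 @ w - alpha_N  # renormalized zero mode, 400 samples
    alphas.append(alpha_N)
    stds.append(np.std(renorm))

print(alphas)  # diverges logarithmically with N
print(stds)    # stays bounded: sum <n>^{-2d} converges
```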
To sum up in a very informal way: the squared modulus of a random series of this type, which lies only in a Sobolev space of negative index, again belongs to $H^{\sigma}(\mathbb{T}^d)$ after renormalization. This is a remarkable probabilistic effect which lies at the heart of the study of evolution PDEs, in the presence of randomness, in Sobolev spaces of negative index. We will develop this topic further in the remainder of this text.
3 Solving the nonlinear wave equation with low-regularity initial data
The wave equation is a typical example of a dispersive PDE. Solving dispersive PDEs with low-regularity initial data has a long history, dating back to the seminal works of Ginibre and Velo and of Kato. Kenig, Ponce and Vega, Klainerman and Machedon, and most notably Bourgain developed tools from harmonic analysis allowing one to obtain solutions of very low regularity. The question of the optimality of these results then arose. It was Lebeau’s work that launched a series of results on the construction of counterexamples showing the optimality of the regularity assumptions in the previous results. It was in this context that the idea of proving a kind of probabilistic well-posedness at regularities where counterexamples had been constructed was introduced in , and then implemented in [5, 6].
3.1 Solving the linear wave equation with periodic distributions as initial data
Consider the linear wave equation
$$(\partial_t^2 - \Delta) u = 0, \qquad u|_{t=0} = u_0, \quad \partial_t u|_{t=0} = u_1, \tag{4}$$
where $u = u(t, x)$, $(t, x) \in \mathbb{R} \times \mathbb{T}^d$, and $\Delta$ is the Laplace operator. It is readily verified that for
$$(u_0, u_1) \in C^{\infty}(\mathbb{T}^d) \times C^{\infty}(\mathbb{T}^d),$$
the solution of (4) is given by the map $S(t)$ defined by
$$S(t)(u_0, u_1) = \cos\big(t \sqrt{-\Delta}\big)\, u_0 + \frac{\sin\big(t \sqrt{-\Delta}\big)}{\sqrt{-\Delta}}\, u_1,$$
where
$$\cos\big(t \sqrt{-\Delta}\big) u_0 = \sum_{n \in \mathbb{Z}^d} \cos(t |n|)\, \widehat{u_0}(n)\, e^{i n \cdot x}, \qquad \frac{\sin\big(t \sqrt{-\Delta}\big)}{\sqrt{-\Delta}}\, u_1 = \sum_{n \in \mathbb{Z}^d} \frac{\sin(t |n|)}{|n|}\, \widehat{u_1}(n)\, e^{i n \cdot x}.$$
For $n = 0$, the expression $\sin(t |n|)/|n|$ is naturally understood as its limit $t$.
Since $|\cos(t |n|)| \leq 1$ and $|\sin(t |n|)| \leq \min(1, |t|\, |n|)$, it follows from the above definition that
$$\|S(t)(u_0, u_1)\|_{H^s(\mathbb{T}^d)} \leq C (1 + |t|) \big( \|u_0\|_{H^s(\mathbb{T}^d)} + \|u_1\|_{H^{s-1}(\mathbb{T}^d)} \big). \tag{5}$$
Since the map $S(t)$ is linear, we can define a unique extension of $S(t)$ for
$$(u_0, u_1) \in H^s(\mathbb{T}^d) \times H^{s-1}(\mathbb{T}^d),$$
and solve (4) with initial data in $H^s(\mathbb{T}^d) \times H^{s-1}(\mathbb{T}^d)$ for arbitrary $s \in \mathbb{R}$.
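Numerically, the propagator $S(t)$ is just a pair of Fourier multipliers. The sketch below (an illustration added here, on $\mathbb{T}^1$, with the standard multipliers $\cos(t|n|)$ and $\sin(t|n|)/|n|$, the $n = 0$ multiplier of the second term being $t$) checks the implementation against the explicit solution $\cos(t)\cos(x)$:

```python
import numpy as np

def wave_propagator(u0, u1, t):
    """Apply S(t)(u0, u1) on T^1 via the Fourier multipliers cos(t|n|)
    and sin(t|n|)/|n| (with the n = 0 multiplier of the second term equal to t)."""
    N = len(u0)
    absn = np.abs(np.fft.fftfreq(N, d=1.0 / N))  # |n| on the discrete frequency grid
    safe = np.where(absn > 0, absn, 1.0)
    sinc = np.where(absn > 0, np.sin(t * absn) / safe, t)
    u_hat = np.cos(t * absn) * np.fft.fft(u0) + sinc * np.fft.fft(u1)
    return np.real(np.fft.ifft(u_hat))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
# u0 = cos(x), u1 = 0: the exact solution of (4) is cos(t) cos(x)
u = wave_propagator(np.cos(x), np.zeros_like(x), 0.7)
print(np.max(np.abs(u - np.cos(0.7) * np.cos(x))))  # machine precision
```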
3.2 The nonlinear problem. Resolution by deterministic methods
The previous discussion makes it easy to solve (4) with singular initial data (in Sobolev spaces of arbitrary negative index). The argument is based on the a priori estimate (5) and the linear nature of the map $S(t)$ (or of equation (4)). The situation changes radically if we consider a nonlinear perturbation of (4). In this text, we restrict our attention to the case of a cubic nonlinear interaction. More precisely, we consider the problem
$$(\partial_t^2 - \Delta) u + u^3 = 0, \qquad u|_{t=0} = u_0, \quad \partial_t u|_{t=0} = u_1, \tag{6}$$
posed for $u = u(t, x)$, $(t, x) \in \mathbb{R} \times \mathbb{T}^3$.
For this problem, the crucial information (5) and the linear nature of the equation are lost. Nevertheless, equation (6) is of Hamiltonian type. Therefore, formally, the solutions of (6) satisfy the algebraic relation
$$\frac{d}{dt} \int_{\mathbb{T}^3} \Big( \frac{(\partial_t u)^2}{2} + \frac{|\nabla u|^2}{2} + \frac{u^4}{4} \Big)\, dx = 0. \tag{7}$$
This relation implies that the Sobolev space $H^1(\mathbb{T}^3) \times L^2(\mathbb{T}^3)$ is one of the natural settings for the study of problem (6). The starting point of this study is given by the following classical result.
For any pair $(u_0, u_1) \in H^1(\mathbb{T}^3) \times L^2(\mathbb{T}^3)$ (real valued) there exists a unique global in time solution of (6) in the class
$$(u, \partial_t u) \in C\big(\mathbb{R};\, H^1(\mathbb{T}^3) \times L^2(\mathbb{T}^3)\big).$$
If, in addition, $(u_0, u_1) \in H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)$, for a given $s \geq 1$, then
$$(u, \partial_t u) \in C\big(\mathbb{R};\, H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)\big). \tag{8}$$
Finally, the dependence on the initial data is continuous.
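A quick numerical experiment illustrating the conservation law behind this result (in one space dimension for simplicity, and assuming the usual energy density $\frac{(\partial_t u)^2}{2} + \frac{|\nabla u|^2}{2} + \frac{u^4}{4}$): a symplectic (leapfrog) discretization of the cubic wave equation keeps the energy nearly constant over many steps.

```python
import numpy as np

# Leapfrog (Stormer-Verlet) scheme for u_tt - u_xx + u^3 = 0 on T^1,
# checking numerical conservation of the energy functional.
N, dt, steps = 256, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

def laplacian(u):
    return np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))

def energy(u, v):
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return np.mean(v**2 / 2 + ux**2 / 2 + u**4 / 4) * 2 * np.pi

u = np.cos(x)          # u_0
v = np.zeros_like(x)   # u_1
E0 = energy(u, v)
for _ in range(steps):
    v_half = v + 0.5 * dt * (laplacian(u) - u**3)
    u = u + dt * v_half
    v = v_half + 0.5 * dt * (laplacian(u) - u**3)
drift = abs(energy(u, v) - E0) / E0
print(drift)  # relative energy drift, very small for a symplectic scheme
```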
Using compactness methods (going back to the work of Leray) we can exploit (7) and obtain a much weaker version of Theorem 3.1, without uniqueness and without the propagation of regularity (8). In Theorem 3.1, uniqueness results from the Sobolev embedding $H^1(\mathbb{T}^3) \subset L^6(\mathbb{T}^3)$. The $L^6$-norm appears here naturally when we study the $L^2$-norm of the nonlinear term $u^3$. As for the propagation of the regularity, it derives from the estimate
$$\|u^3\|_{H^{s-1}(\mathbb{T}^3)} \leq C\, \|u\|_{H^{s-1}(\mathbb{T}^3)}\, \|u\|_{L^{\infty}(\mathbb{T}^3)}^2. \tag{9}$$
The details of the proof of (9) can be found in , where estimates of the type (9) are called tame. The key point in estimate (9) is that the derivatives acting on the expression $u^3$ are redistributed in such a way that, in the end, the right-hand side of (9) depends only linearly on the strong norm (the one that contains derivatives).
In view of the discussion of the linear problem (4), it is now natural to ask whether Theorem 3.1 generalizes to initial data in $H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)$ for a given $s < 1$. As we shall see below, such a generalization is possible for some $s < 1$, but not for all. By using Strichartz estimates instead of the Sobolev embedding $H^1(\mathbb{T}^3) \subset L^6(\mathbb{T}^3)$, part of Theorem 3.1 generalizes to
$$s \geq \frac{1}{2}. \tag{10}$$
More precisely, the local well-posedness of (6) can be established under the assumption (10). A more detailed description of Strichartz estimates would go beyond our objectives in this text. We merely point out that Strichartz estimates can be seen as almost-everywhere-in-time improvements of Sobolev embeddings when, instead of considering an arbitrary function, we consider a function that satisfies a dispersive PDE. We refer to  for more details on Strichartz estimates and the generalization of Theorem 3.1 under assumption (10). It can be conjectured that the global-in-time part of Theorem 3.1 remains true under assumption (10). The most advanced results towards the resolution of this conjecture are in [8, 21].
3.3 The limitations of deterministic methods
Let $0 \leq s < 1/2$. There exists a sequence $(u_{0,n}, u_{1,n})$ of smooth initial data such that
$$\lim_{n \to \infty} \|(u_{0,n}, u_{1,n})\|_{H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)} = 0,$$
but the corresponding (smooth) solutions $u_n$ of (6) satisfy, for all $t > 0$,
$$\lim_{n \to \infty} \|u_n\|_{L^{\infty}([0,t];\, H^s(\mathbb{T}^3))} = \infty.$$
Well-posedness in the sense of Hadamard requires existence, uniqueness and continuous dependence on the initial data. Theorem 3.2 shows that continuous dependence on initial data fails.
The proof of Theorem 3.2 is based on an idea of Lebeau (see for example ): if the initial data are localized at high frequency, then for small times a good approximation of the solution of (6) is given by the solution of
$$\partial_t^2 v + v^3 = 0, \qquad v|_{t=0} = u_0, \quad \partial_t v|_{t=0} = u_1, \tag{11}$$
which is obtained from (6) by neglecting the effect of the Laplacian. In other words, under the hypothesis of Theorem 3.2, nonlinear effects dominate in the regime described above. The solution of (11) manifests the phenomenon of amplification described by Theorem 3.2 and this property propagates to the solutions of (6) by a perturbative, highly non-trivial argument. A detailed proof of Theorem 3.2 can be found in .
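The mechanism behind the approximation (11) can already be seen on the ODE $v'' + v^3 = 0$: the scaling $v \mapsto \lambda v(\lambda t)$ leaves the equation invariant, so the oscillation period is inversely proportional to the amplitude, and data with nearby amplitudes decohere quickly. A sketch of this scaling (leapfrog integration, added here as an illustration):

```python
import numpy as np

def quarter_period(a, dt=1e-5):
    """Time for the solution of v'' + v^3 = 0, v(0) = a, v'(0) = 0,
    to first reach v = 0 (a quarter of the full oscillation period)."""
    v, w, t = a, 0.0, 0.0
    while v > 0:
        # one leapfrog step for the Hamiltonian system v' = w, w' = -v^3
        w_half = w - 0.5 * dt * v**3
        v = v + dt * w_half
        w = w_half - 0.5 * dt * v**3
        t += dt
    return t

T1 = 4 * quarter_period(1.0)
T2 = 4 * quarter_period(2.0)
print(T1 / T2)  # the scaling v -> lambda v(lambda t) gives period ~ 1/amplitude
```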
3.4 Resolution by probabilistic methods beyond the limitations of deterministic theory
The answer to this question is positive if we endow the space (12) with a non-degenerate probability measure such that we have existence, uniqueness and (a form of) continuous dependence almost surely with respect to this measure.
We will choose the initial data for (6) from the realizations of the following random series:
$$u_0^{\omega}(x) = \sum_{n \in \mathbb{Z}^3} c_n\, g_n(\omega)\, e^{i n \cdot x}, \qquad u_1^{\omega}(x) = \sum_{n \in \mathbb{Z}^3} \tilde{c}_n\, h_n(\omega)\, e^{i n \cdot x}, \tag{13}$$
where $(c_n)$ and $(\tilde{c}_n)$ are the Fourier coefficients of a fixed pair of functions in $H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)$.
Here $(g_n)$ and $(h_n)$ are two families of independent random variables conditioned by $g_{-n} = \overline{g_n}$ and $h_{-n} = \overline{h_n}$, so that $u_0^{\omega}$ and $u_1^{\omega}$ are real valued. Furthermore, it is assumed that for $n \neq 0$, $g_n$ and $h_n$ are complex Gaussians with distribution $\mathcal{N}_{\mathbb{C}}(0,1)$, while $g_0$ and $h_0$ are real Gaussians with distribution $\mathcal{N}(0,1)$.
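The reality constraint in this randomization can be sketched numerically as follows (on $\mathbb{T}^1$, with sample coefficients $\langle n \rangle^{-1}$ that are an assumption of this illustration, not taken from the text): imposing $g_{-n} = \overline{g_n}$ with $g_0$ real produces a real-valued realization.

```python
import numpy as np

rng = np.random.default_rng(2)

def real_random_series(N, coeffs):
    """Sample sum_n g_n c_n e^{inx} on T^1 with g_{-n} = conj(g_n) and
    g_0 real, so that the realization is real valued."""
    g = np.zeros(2 * N + 1, dtype=complex)  # index k corresponds to n = k - N
    g[N] = rng.standard_normal()            # g_0 ~ N(0, 1), real
    for n in range(1, N + 1):
        gn = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        g[N + n] = gn
        g[N - n] = np.conj(gn)              # the reality constraint
    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    n_all = np.arange(-N, N + 1)
    return x, (g * coeffs) @ np.exp(1j * np.outer(n_all, x))

N = 32
coeffs = (1.0 + np.arange(-N, N + 1)**2) ** (-0.5)  # sample choice c_n = <n>^{-1}
x, u0 = real_random_series(N, coeffs)
print(np.max(np.abs(u0.imag)))  # the symmetry makes u0 real up to rounding
```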
The randomization does not improve the Sobolev regularity: if the randomized pair in (13) does not belong to $H^{s'} \times H^{s'-1}$ for some $s' > s$, then the probability that $(u_0^{\omega}, u_1^{\omega})$ belongs to $H^{s'} \times H^{s'-1}$ is zero. It follows that for $s \geq 1$ we can apply Theorem 3.1 to the data described by (13). For $1/2 \leq s < 1$, we can apply refined deterministic results (based on Strichartz estimates). Finally, for $s < 1/2$, Theorem 3.2 applies and we get:
Let $0 < s < 1/2$ and let $(u_0^{\omega}, u_1^{\omega})$ be given by (13). For almost every $\omega$, there exists a sequence $(u_{0,n}, u_{1,n})$ of smooth initial data such that
$$\lim_{n \to \infty} \|(u_{0,n}, u_{1,n}) - (u_0^{\omega}, u_1^{\omega})\|_{H^s \times H^{s-1}} = 0,$$
but the corresponding solutions $u_n$ of (6) satisfy, for all $t > 0$,
$$\lim_{n \to \infty} \|u_n\|_{L^{\infty}([0,t];\, H^s(\mathbb{T}^3))} = \infty.$$
However, the following result also holds.
Let $0 < s < 1/2$, let $(u_0^{\omega}, u_1^{\omega})$ be given by (13), and denote by $u_N$ the smooth solution of (6) with the initial data obtained by truncating (13) to the frequencies $|n| \leq N$. Then there exists a set of probability $1$ such that for every $\omega$ in this set the sequence $(u_N)$ converges when $N \to \infty$ in $L^{\infty}_{loc}\big(\mathbb{R};\, H^s(\mathbb{T}^3)\big)$ to a (unique) limit that satisfies (6) in the sense of distributions.
Using compactness methods (à la Leray), we can hope to obtain convergence of a subsequence of $(u_N)$. The convergence of the whole sequence is beyond the reach of weak-solution techniques. The fact that the whole sequence converges already contains a form of uniqueness. In , one can find a form of uniqueness formulated in a suitable functional framework.
In , we also obtain a probabilistic continuous dependence on the initial data, the proof of which makes use of conditioned large deviation properties which seem to be of independent interest.
We can prove the result of Theorem 3.4 for more general randomizations than (13). For example, the Gaussian variables can be replaced by Bernoulli variables, and the deterministic coefficients by other coefficients with “similar” behaviour as $|n| \to \infty$ (see  for more details).
Theorem 3.4 provides a nice dense set of initial data such that for good approximations we get nice global solutions (but for bad approximations we get divergent sequences, as shown by Theorem 3.3!). On the other hand, due to , we also have a dense set of bad initial data, even for the natural approximations by Fourier truncation (or convolution).
Let $0 < s < 1/2$. Then there is a dense set $S \subset H^s(\mathbb{T}^3) \times H^{s-1}(\mathbb{T}^3)$ such that for every $(v_0, v_1) \in S$, the sequence $(u_N)$ of (smooth) solutions of (6) with the Fourier-truncated initial data
$$(u_N, \partial_t u_N)|_{t=0} = \Big( \sum_{|n| \leq N} \widehat{v_0}(n)\, e^{i n \cdot x},\ \sum_{|n| \leq N} \widehat{v_1}(n)\, e^{i n \cdot x} \Big)$$
does not converge. More precisely, for every $t > 0$,
$$\lim_{N \to \infty} \|u_N\|_{L^{\infty}([0,t];\, H^s(\mathbb{T}^3))} = \infty.$$
One may even prove that the pathological set contains a dense $G_{\delta}$ set, and consequently the good-data set is not generic in the sense of Baire.
3.5 Going even further
For $s < 0$, the random initial datum is no longer a classical function. In this case, it can be interpreted as a distribution belonging to a Sobolev space with negative index. We cannot expect a result like that of Theorem 3.4 for $s < 0$. A renormalization is necessary, as shown by the following result established in .
Let and . There exist positive constants and a divergent sequence such that for any , there exists a set such that the probability of its complement is smaller than and such that if we denote by the solution of
with initial data given by (14), then for any , the sequence converges for in . In particular, for almost every there exists such that converges in .
4 Invariant measures for the nonlinear Schrödinger equation
Let us now consider the nonlinear Schrödinger equation, posed on the torus of dimension two:
$$i \partial_t u + \Delta u = |u|^2 u, \qquad u|_{t=0} = u_0, \quad (t, x) \in \mathbb{R} \times \mathbb{T}^2. \tag{15}$$
For any $u_0 \in H^1(\mathbb{T}^2)$ there exists a unique global solution of (15) in the class $C\big(\mathbb{R};\, H^1(\mathbb{T}^2)\big)$. If moreover $u_0 \in H^s(\mathbb{T}^2)$ for some $s > 1$, then $u \in C\big(\mathbb{R};\, H^s(\mathbb{T}^2)\big)$. The dependence on the initial data is also continuous.
Equation (15) is again Hamiltonian in nature. This implies that the functional
$$H(u) = \frac{1}{2} \int_{\mathbb{T}^2} |\nabla u|^2\, dx + \frac{1}{4} \int_{\mathbb{T}^2} |u|^4\, dx \tag{16}$$
is conserved by the (formal) flow of (15). It is therefore natural to look for an invariant measure given by the formal expression
$$d\mu = e^{-H(u)}\, du, \tag{17}$$
a Gibbs measure associated with (16). Giving a rigorous meaning to (17) requires a renormalization.
This renormalization is a classic procedure in quantum field theory, which would be impossible to present in this short text. Let us just say that the measure obtained by this renormalization is absolutely continuous with respect to the Gaussian measure induced by the random series
$$u^{\omega}(x) = \sum_{n \in \mathbb{Z}^2} \frac{g_n(\omega)}{\langle n \rangle}\, e^{i n \cdot x}, \tag{18}$$
where $(g_n)_{n \in \mathbb{Z}^2}$ is a family of independent complex Gaussians with distribution $\mathcal{N}_{\mathbb{C}}(0,1)$. Once the measure (17) has been rigorously defined, the natural question is whether we can define a dynamics related to (15) that leaves this measure invariant. The answer to this question is given by the work of Bourgain . The difficulty lies in the fact that (18) does not define a classical function. The object defined by (18) almost surely belongs to the Sobolev space $H^{\sigma}(\mathbb{T}^2)$ for all $\sigma < 0$. Such a regularity implies that Theorem 4.1 cannot be applied in the context of an initial datum given by (18). This regularity is also beyond the reach of the most sophisticated deterministic techniques. Nevertheless, the following statement can be deduced from .
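A small computation illustrating this regularity (assuming, as is standard for such Gaussian series, the weights $\langle n \rangle^{-1}$ on $\mathbb{Z}^2$): the expected squared $H^{\sigma}$ norm of the truncation of (18) diverges logarithmically for $\sigma = 0$ but stabilizes for $\sigma < 0$.

```python
import numpy as np

def expected_sq_norm(N, sigma):
    """E ||truncation of (18)||_{H^sigma}^2 = sum over the square |n_i| <= N
    in Z^2 of <n>^{2 sigma - 2}, since E|g_n|^2 = 1."""
    n1, n2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    return float(np.sum((1.0 + n1**2 + n2**2) ** (sigma - 1.0)))

l2_32, l2_128 = expected_sq_norm(32, 0.0), expected_sq_norm(128, 0.0)
hs_32, hs_128 = expected_sq_norm(32, -0.5), expected_sq_norm(128, -0.5)
print(l2_32, l2_128)  # sigma = 0: grows like 2*pi*log N
print(hs_32, hs_128)  # sigma = -1/2: converges
```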
Let $u^{\omega}$ be given by (18) and, for $N \geq 1$, let $u_N$ be the solution of (15) with the initial datum obtained by truncating (18) to the frequencies $|n| \leq N$. Then for any $\sigma < 0$ the sequence
$$v_N(t) = e^{i t \gamma_N(\omega)}\, u_N(t), \qquad N \geq 1,$$
for a suitable sequence of real numbers $(\gamma_N(\omega))$, converges almost surely in $H^{\sigma}(\mathbb{T}^2)$ to a limit which satisfies (in the sense of distributions) a renormalized version of problem (15).
There is a similarity between Theorems 3.4 and 4.2. One notable difference is the need to renormalize the sequence of Theorem 4.2 in order to obtain a limit. This renormalization is linked to the construction of the measure from the formal object (17) mentioned above.
More precisely, the renormalized sequence of Theorem 4.2 solves the truncated problems
$$i \partial_t v_N + \Delta v_N = \big(|v_N|^2 - \gamma_N(\omega)\big)\, v_N,$$
with initial data (19), where $(\gamma_N(\omega))$ is a sequence of real numbers almost surely divergent to $+\infty$.
5 Singular stochastic PDEs
The issues considered in the previous sections are very close to the analysis of PDEs in the presence of a singular random source (noise). This topic has received a lot of attention in recent years (see, for example, [7, 9, 10, 12, 11, 14]). The closest equation to those in the previous sections is the nonlinear heat equation. More precisely, we consider the problem
$$(\partial_t - \Delta) u + u^3 = \xi, \qquad u|_{t=0} = u_0, \tag{20}$$
posed on $\mathbb{R}_+ \times \mathbb{T}^d$. In this equation, $\xi$ is the space-time white noise on $\mathbb{R}_+ \times \mathbb{T}^d$. The unknown $u$ is a real valued function. There are many physical motivations for considering a PDE perturbed by white noise. A serious discussion of these motivations is beyond the scope of this paper.
The source term $\xi$ represents the singular randomness in (20), while in (6) and (15) the initial datum is the source of the singular randomness. A little experience with the analysis of evolution PDEs is enough to see that the two situations are very similar, and even that, in some cases, for reasons of convenience, one can easily transform the problem with random initial data into a problem with a random source term and zero initial data.
A convenient representation of the white noise is
$$\xi(t, x) = \frac{\partial}{\partial t} \sum_{n \in \mathbb{Z}^d} \beta_n(t)\, e^{i n \cdot x}, \tag{21}$$
where $(\beta_n)$ are independent Brownian motions, conditioned by $\beta_{-n} = \overline{\beta_n}$ ($\beta_0$ is real and for $n \neq 0$, $\beta_n$ is complex valued). The derivative with respect to $t$ of the sum in (21) is taken in the sense of distributions.
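In one space dimension, where no renormalization is needed, the problem (20) can be simulated directly. The sketch below (an illustration with assumed discretization choices, not from the text) uses an explicit Euler–Maruyama scheme, with the discrete white noise $\sqrt{dt/dx}\, \mathcal{N}(0,1)$ per grid cell:

```python
import numpy as np

# Finite-difference Euler-Maruyama scheme for the d = 1 analogue of (20):
# du = (u_xx - u^3) dt + dW, with space-time white noise, on T^1.
rng = np.random.default_rng(3)
N, L = 128, 2 * np.pi
dx = L / N
dt = 0.2 * dx**2          # explicit heat scheme: dt must be O(dx^2)
steps = 500

u = np.zeros(N)
for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    noise = rng.standard_normal(N) * np.sqrt(dt / dx)  # discretized white noise
    u = u + dt * (lap - u**3) + noise
print(np.max(np.abs(u)))  # stays of order 1: dissipation and u^3 balance the noise
```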