The fractional stochastic heat equation on the circle: Time regularity and potential theory

Eulalia Nualart¹ and Frederi Viens²,³

arXiv:0710.3952v1 [math.PR] 21 Oct 2007
Abstract
We consider a system of d linear stochastic heat equations driven by an additive
infinite-dimensional fractional Brownian noise on the unit circle S 1 . We obtain sharp results on the Hölder continuity in time of the paths of the solution u = {u(t , x)}t∈R+ ,x∈S 1 .
We then establish upper and lower bounds on hitting probabilities of u, in terms of respectively Hausdorff measure and Newtonian capacity.
AMS 2000 subject classifications: Primary: 60H15, 60J45; Secondary: 60G15, 60G17.
Key words and phrases. Hitting probabilities, stochastic heat equation, fractional Brownian
motion, path regularity.
¹ Institut Galilée, Université Paris 13, 93430 Villetaneuse, France. nualart@math.univ-paris13.fr
² Department of Statistics, Purdue University, 150 N. University St., West Lafayette, IN 47907-2067, USA. viens@purdue.edu
³ The research of this author is partially supported by NSF grant no. 0606615-DMS.
1 Introduction and main results
We consider a system of d stochastic heat equations on the unit circle driven by an infinite-dimensional fractional Brownian motion B^H with Hurst parameter H ∈ (0, 1). That is,
\frac{\partial u_i}{\partial t}(t,x) = \Delta_x u_i(t,x) + \frac{\partial B_i^H}{\partial t}(t,x), \qquad t>0,\ x\in S^1,   (1.1)
with initial condition ui (0, x) = 0, for all i = 1, ..., d. Here ∆x is the Laplacian on S 1 and
B H a centered Gaussian field on R+ × S 1 defined, for all x, y ∈ S 1 and s, t ≥ 0, by its
covariance structure
E\big[B_i^H(t,x)\,B_j^H(s,y)\big] = 2^{-1}\big(t^{2H}+s^{2H}-|t-s|^{2H}\big)\,Q(x,y)\,\delta_{i,j},
where Q is an arbitrary covariance function on S 1 and δi,j is the Kronecker symbol. To
simplify our study, we assume that B H is spatially homogeneous and separable in space;
therefore Q(x, y) depends only on the difference x − y, and we denote it, abusing notation, by Q(x − y).
Note that because Q is positive definite, there exists a sequence of non-negative real
numbers {qn }n∈N such that
Q(x-y) = \sum_{n\in\mathbb N} q_n \cos(n(x-y)).
This expression may be only formal for certain choices of the sequence {qn }n , as these
pointwise values may explode, but this Fourier representation is always relevant if one
allows Q to be a Schwartz distribution. Examples will be given below where Q (0) is infinite
while all other values are finite (Riesz kernel case); another, also with Q (0) = ∞, will show
that Q may not be equal to its Fourier series at any point (fractional noise case for small
Hurst parameter), but still allows a solution to (1.1). Any case with Q(0) = ∞ corresponds to a distribution-valued noise B^H in space, for which the notation B^H(t, x) is only formal in the parameter x.
The existence and uniqueness of the solution of (1.1) was established in [TTV03]. The
“mild” or “evolution” solution of the stochastic integral formulation of equation (1.1) is
given by the evolution convolution
u_i(t,x) = \sum_{n=0}^{\infty} \sqrt{q_n}\,\bigg[\cos(nx)\int_0^t e^{-n^2(t-s)}\,\beta_{i,n}^H(ds) + \sin(nx)\int_0^t e^{-n^2(t-s)}\,\beta_{i,n}'^{\,H}(ds)\bigg],   (1.2)

where the sequences \{\beta_{i,n}^H\}_{n\in\mathbb N} and \{\beta_{i,n}'^{\,H}\}_{n\in\mathbb N}, i ∈ {1, ..., d}, are independent and each formed of independent one-dimensional standard fractional Brownian motions. [TTV03]
showed when such a solution exists; more specifically, the necessary and sufficient condition for existence of (1.2) in L^2(\Omega\times[0,T]\times S^1) (cf. [TTV03, Corollary 1]) is

\sum_{n=1}^{\infty} q_n n^{-4H} < \infty.
The particular case where B^H is white in space, that is, q_n = 1 for all n, was already studied in [DPM02], where the solution exists if and only if H > 1/4.
The aim of this paper is to develop a potential theory for the solution to the system of
equations (1.1). In particular, given A ⊂ Rd , we want to determine whether the process
{u(t , x), t ≥ 0, x ∈ S 1 } visits, or hits, A with positive probability.
Potential theory for the linear and non-linear stochastic heat equation driven by space-time white noise was developed in [1-DKN07] and [2-DKN07]. The aim of this paper is to obtain upper and lower bounds on hitting probabilities for the solution of (1.1). For this, following the approach developed in [1-DKN07], a careful analysis of the moments of the increments of the process u(t, x) is needed. In particular, this will lead us to settle an open question: the Hölder continuity in time of the solution of (1.1) when H < 1/2. The Hölder continuity in space for the solution of (1.1) was studied in [TTV04], and the Hölder continuity in time when H ≥ 1/2 is due to [SV06]. These are generalizations of earlier work done for the stochastic heat equation with time-white noise potential: [SS00], [SS02].
Let us first state, in some detail, the path continuity results we obtain for the solution of the fractional heat equation on the circle (1.1), as these are a valuable immediate
consequence of our work. Assume that for all n large enough
cn^{4H-2\alpha-1} \le q_n \le Cn^{4H-2\alpha-1},   (1.3)

for some positive constants c and C and α ∈ (0, 1] with α ≠ 2H. Our basic quantitative result is the following pair of bounds on the variance of the increments of the solution: for t₀, T > 0, for some positive constants c, C, c_{t_0}, C_{t_0}, for all x, y ∈ S¹, and all s, t ∈ [t₀, T],

c_{t_0}|x-y|^{2\alpha} \le E\|u(t,x)-u(t,y)\|^2 \le C_{t_0}|x-y|^{2\alpha},
c\,|t-s|^{\alpha\wedge(2H)} \le E\|u(t,x)-u(s,x)\|^2 \le C\,|t-s|^{\alpha\wedge(2H)}.
We then immediately get that u is β-Hölder continuous in space for any β ∈ (0, α) and is β-Hölder continuous in time for any β ∈ (0, (α/2) ∧ H), but not for β equal to the upper endpoints of these intervals. All these results are true for any H ∈ (0, 1). Moreover, these results are sharp for our additive stochastic heat equation (1.1): up to non-random constants, exact moduli of continuity can be found (see the last bullet point below).
Let us consider some examples:
• In the case where B^H is “white noise” in space, u exists if and only if H > 1/4; moreover u is β-Hölder continuous in space for any β ∈ (0, 2H − 1/2) and β-Hölder continuous in time for any β ∈ (0, H − 1/4). This follows from the above continuity results because the white noise case is the case q_n ≡ 1: the appellation “white” reflects the fact that all spatial Fourier frequencies are equally represented.
• In the case where B^H is white in time and has a covariance function in space given by the Riesz kernel, that is, Q(x − y) = |x − y|^{−γ}, 0 < γ < 1, we can prove that q_n is commensurate with n^{γ−1}. More specifically, we can show that q_n = n^{γ−1} c(n), where c(n) is a function bounded between two positive constants, because it can be written as the partial sum of an alternating series with decreasing general term and positive initial term (see Appendix A.1). Therefore, the solution of (1.1) exists if and only if H > γ/4, and u is β-Hölder continuous in space for any β ∈ (0, 2H − γ/2) and β-Hölder continuous in time for any β ∈ (0, H − γ/4).
• In the case where B^H behaves like fractional Brownian noise both in time and space with common Hurst parameter H, the solution of (1.1) exists if and only if H > 1/3. Indeed, this case can be obtained by assuming that q_n = n^{1−2H}. When H > 1/2, if one prefers to work starting from the spatial covariance function Q, one may stipulate that B^H has a Riesz kernel covariance, i.e. Q(x − y) = |x − y|^{2H−2} = |x − y|^{−γ} with γ = 2(1 − H) ∈ (0, 1), in which case one is in the situation of the last example, with q_n = c(n) n^{1−2H}. On the other hand, if H ≤ 1/2, no Riesz-kernel interpretation is possible with q_n = n^{1−2H}. Appendix A.2 contains another interpretation in this case. This interpretation, which uses a differentiation construction, also allows a justification, for all H ∈ (0, 1), of why we use the appellation “fractional Brownian noise” in the case q_n = n^{1−2H}. In all cases, i.e. for all H ∈ (1/3, 1), u is β-Hölder continuous in space for any β ∈ (0, 3H − 1) and is β-Hölder continuous in time for any β ∈ (0, (3H − 1)/2).
• Similarly to the previous example, but more generally, to obtain a B^H that behaves like a fractional Brownian noise with parameter H in time and K in space, we can set q_n = n^{1−2K} (using the same justification as in the Appendix relative to the previous example). This is equivalent to α = 2H + K − 1. We then get existence of a solution if and only if 2H + K > 1, and the solution is then β-Hölder continuous in space for any β ∈ (0, 2H + K − 1) and is β-Hölder continuous in time for any β ∈ (0, (2H + K − 1)/2).
• From Gaussian regularity results such as Dudley’s entropy upper bound (see [K02]),
we can state that if the upper bound in (1.3) holds, then the modulus of continuity
random variable

\sup_{x,y\in S^1;\ s,t\in[t_0,T]} \left( \frac{\|u(t,x)-u(t,y)\|}{|x-y|^{\alpha}\,\log^{1/2}(1+1/|x-y|)} + \frac{\|u(t,x)-u(s,x)\|}{|t-s|^{(\alpha/2)\wedge H}\,\log^{1/2}(1+1/|t-s|)} \right)
is finite almost surely. Moreover, a (near) converse also holds: if the above random
variable (with logarithmic terms moved to the numerators) is finite, then the upper
bound in (1.3) holds for some constant C < ∞ (see [TTV04, Corollary 1]).
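The exponent bookkeeping in the examples above can be checked mechanically: under hypothesis (1.3), q_n ≍ n^{4H−2α−1}, so matching this exponent to each stated choice of q_n determines α. The following sketch in exact rational arithmetic (the helper function and the sample values H = 2/5, γ = 1/3, K = 3/4 are ours, not the paper's) confirms the four values of α:

```python
from fractions import Fraction as F

def alpha_from_qn_exponent(e, H):
    # Hypothesis (1.3): q_n ~ n^{4H - 2*alpha - 1}; solve 4H - 2*alpha - 1 = e.
    return (4 * H - 1 - e) / 2

H = F(2, 5)  # sample Hurst parameter, chosen arbitrarily for the check

# White noise in space: q_n = 1, exponent 0  ->  alpha = 2H - 1/2
assert alpha_from_qn_exponent(F(0), H) == 2 * H - F(1, 2)

# Riesz kernel Q(x-y) = |x-y|^{-gamma}: q_n ~ n^{gamma-1}  ->  alpha = 2H - gamma/2
gamma = F(1, 3)
assert alpha_from_qn_exponent(gamma - 1, H) == 2 * H - gamma / 2

# Fractional noise in time and space, common H: q_n = n^{1-2H}  ->  alpha = 3H - 1
assert alpha_from_qn_exponent(1 - 2 * H, H) == 3 * H - 1

# Fractional noise, H in time and K in space: q_n = n^{1-2K}  ->  alpha = 2H + K - 1
K = F(3, 4)
assert alpha_from_qn_exponent(1 - 2 * K, H) == 2 * H + K - 1
```

Exact fractions avoid any floating-point ambiguity when comparing the exponents.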
We now state the results of potential theory that we will prove in this paper. For this,
let us first introduce some notation. For all Borel sets F ⊆ Rd we define P(F ) to be the
set of all probability measures with compact support in F . For all µ ∈ P(Rd ), we let Iβ (µ)
denote the β-dimensional energy of µ; that is,
I_\beta(\mu) := \iint K_\beta(\|x-y\|)\,\mu(dx)\,\mu(dy).
Here and throughout,

K_\beta(r) := \begin{cases} r^{-\beta} & \text{if } \beta > 0, \\ \log(N_0/r) & \text{if } \beta = 0, \\ 1 & \text{if } \beta < 0, \end{cases}   (1.4)

where N_0 is a constant whose value will be specified later in the proof of Lemma 4.1.
For all β ∈ R and Borel sets F ⊂ R^d, Cap_β(F) denotes the β-dimensional capacity of F; that is,

\mathrm{Cap}_\beta(F) := \Big[\inf_{\mu\in\mathcal P(F)} I_\beta(\mu)\Big]^{-1},

where 1/∞ := 0.
Given β ≥ 0, the β-dimensional Hausdorff measure of F is defined by

\mathcal H_\beta(F) = \lim_{\epsilon\to 0^+} \inf\left\{ \sum_{i=1}^{\infty} (2r_i)^{\beta} : F \subseteq \bigcup_{i=1}^{\infty} B(x_i, r_i),\ \sup_{i\ge 1} r_i \le \epsilon \right\},
where B(x , r) denotes the open (Euclidean) ball of radius r > 0 centered at x ∈ Rd . When
β < 0, we define Hβ (F ) to be infinite.
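To make the energy I_β concrete: for µ the uniform measure on [0,1] ⊂ R and 0 < β < 1, one has I_β(µ) = ∫₀¹∫₀¹ |x−y|^{−β} dx dy = 2/((1−β)(2−β)) < ∞, so the unit interval carries finite β-energy for every β < 1. A numerical sketch of this computation (ours, not from the paper; the substitution u = v^{1/(1−β)} removes the integrable singularity):

```python
def energy_uniform(beta, n=100_000):
    # I_beta(mu) for mu uniform on [0,1]:
    # reduce to 1D: I = 2 * int_0^1 (1-u) u^{-beta} du; substituting u = v^p with
    # p = 1/(1-beta) gives the smooth integrand I = 2p * int_0^1 (1 - v^p) dv.
    p = 1.0 / (1.0 - beta)
    h = 1.0 / n
    # composite midpoint rule
    return 2.0 * p * sum(1.0 - ((i + 0.5) * h) ** p for i in range(n)) * h

for beta in (0.25, 0.5, 0.75):
    exact = 2.0 / ((1.0 - beta) * (2.0 - beta))
    assert abs(energy_uniform(beta) - exact) < 1e-4 * exact
```

Positive β-energy for some measure on a set is exactly what makes its β-dimensional capacity positive.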
Let u(S) denote the range of S under the random map r 7→ u(r), where S is some
Borel-measurable subset of R+ × S 1 .
Theorem 1.1. Assume hypothesis (1.3). Let I ⊂ (0, T] and J ⊂ [0, 2π) ≡ S¹ be two fixed non-trivial compact intervals. Then for all T > 0 and M > 0, there exists a finite constant c_H > 0 depending on H, M, I and J such that for all compact sets A ⊆ [−M, M]^d,

c_H^{-1}\,\mathrm{Cap}_{d-\beta}(A) \le P\{u(I\times J)\cap A \neq \emptyset\} \le c_H\,\mathcal H_{d-\beta}(A),

where β := 1/α + ((2/α) ∨ (1/H)).
Remark 1.2. (a) When B^H is white in time and space, that is, H = 1/2 and q_n = 1 for all n, Theorem 1.1 gives the same hitting probability estimates as obtained in [1-DKN07, Theorem 4.6].
(b) Because of the inequalities between capacity and Hausdorff measure, the right-hand
side of Theorem 1.1 can be replaced by c Capd−β−η (A) for all η > 0 (cf. [K85, p.
133]).
A Borel set A ⊆ R^d is called polar for u if P{u(T) ∩ A ≠ ∅} = 0; otherwise, A is called nonpolar.
The following results are consequences of Theorem 1.1.
Corollary 1.3. Assume hypothesis (1.3) and let β := 1/α + ((2/α) ∨ (1/H)).
(a) A (nonrandom) Borel set A ⊂ R^d is nonpolar for u if it has positive (d − β)-dimensional capacity. On the other hand, if A has zero (d − β)-dimensional Hausdorff measure, then A is polar for u.
(b) Singletons are polar for u if d > β and are nonpolar when d < β. The case d = β is
open.
(c) If d ≥ β, then
dimH (u(R+ × S 1 )) = β, a.s.
Let us consider the same examples as we had for the regularity statements.
• In the case where B^H is white in space, then α = 2H − 1/2 and β = 6/(4H − 1).
• In the case where B^H is white in time and has a covariance function in space given by the Riesz kernel, that is, Q(x − y) = |x − y|^{−γ}, 0 < γ < 1, then α = 2H − γ/2 and β = 6/(4H − γ).
• In the case where B^H is the fractional Brownian noise with Hurst parameter H > 1/3 in time and space, then α = 3H − 1 and β = 3/(3H − 1).
• In the case where B^H is the fractional Brownian noise with Hurst parameter H in time and K in space, and 2H + K > 1, then α = 2H + K − 1 and β = 3/(2H + K − 1).
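All four values of β follow from the single formula β = 1/α + ((2/α) ∨ (1/H)) of Theorem 1.1; in each example one can check that 2/α ≥ 1/H, so β = 3/α. A sketch in exact rational arithmetic (ours; the sample parameter values are arbitrary but satisfy the respective existence conditions):

```python
from fractions import Fraction as F

def beta(alpha, H):
    # Theorem 1.1: beta = 1/alpha + max(2/alpha, 1/H)
    return 1 / alpha + max(2 / alpha, 1 / H)

H, gamma, K = F(2, 5), F(1, 3), F(3, 4)

# white in space: alpha = 2H - 1/2, so beta = 6/(4H - 1)
assert beta(2 * H - F(1, 2), H) == 6 / (4 * H - 1)
# white in time, Riesz in space: alpha = 2H - gamma/2, so beta = 6/(4H - gamma)
assert beta(2 * H - gamma / 2, H) == 6 / (4 * H - gamma)
# fractional in time and space: alpha = 3H - 1, so beta = 3/(3H - 1)
assert beta(3 * H - 1, H) == 3 / (3 * H - 1)
# fractional, H in time and K in space: alpha = 2H + K - 1, so beta = 3/(2H + K - 1)
assert beta(2 * H + K - 1, H) == 3 / (2 * H + K - 1)
```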
This paper is organized as follows. In Section 2 we prove the path continuity results of
u stated in the Introduction using fractional stochastic calculus. In Section 3 we obtain an
upper bound of Gaussian type for the bivariate density of u that will be needed for the proof
of Theorem 1.1. Finally, Section 4 is devoted to the proofs of Theorem 1.1 and Corollary
1.3.
Throughout the paper, c_H, C_H will denote universal constants depending on H whose value may change from line to line.
2 Regularity of the solution
We consider the two canonical metrics of u in the space and time parameter, respectively,
defined by
δt2 (x, y) := E[ku(t, x) − u(t, y)k2 ],
δx2 (s, t) := E[ku(t, x) − u(s, x)k2 ],
for all x, y ∈ S 1 and s, t ∈ R+ .
The aim of this section is to obtain upper and lower bounds, in terms of the differences |x − y| and |t − s|, for the two canonical metrics above. These imply, in particular, the Hölder regularity of u that we have described in detail in the introduction. We begin
by introducing some elements of fractional stochastic calculus.
2.1 Elements of fractional stochastic calculus
In this section, we recall, following [N06], some elements of stochastic integration with respect to one-dimensional fractional Brownian motion needed for the analysis of the regularity of u in time.
Fix T > 0. Let B H = (B H (t), t ∈ [0, T ]) be a one-dimensional fractional Brownian
motion with Hurst parameter H ∈ (0, 1). That is, B H is a centered Gaussian process with
covariance function given by

R(t,s) = E[B^H(t)B^H(s)] = 2^{-1}\big(t^{2H}+s^{2H}-|t-s|^{2H}\big).
Note that for H = 1/2, B^H is a standard Brownian motion. Moreover, B^H has the integral representation

B^H(t) = \int_0^t K^H(t,s)\,W(ds),
where W = (W(t), t ∈ [0,T]) is a Wiener process and K^H(t,s) is the kernel defined as

K^H(t,s) = c_H (t-s)^{H-\frac12} + s^{H-\frac12}\, F\Big(\frac{t}{s}\Big),   (2.1)

where c_H is a positive constant and

F(z) = c_H\Big(\frac12 - H\Big) \int_0^{z-1} r^{H-\frac32}\Big(1-(1+r)^{H-\frac12}\Big)\, dr.
From (2.1) we get

\frac{\partial K^H}{\partial t}(t,s) = c_H\Big(H-\frac12\Big)(t-s)^{H-\frac32}\Big(\frac{s}{t}\Big)^{\frac12-H}.   (2.2)
It is important to note that \partial K^H/\partial t is positive if H > 1/2, but is negative when H < 1/2. This negativity causes problems when evaluating the time-canonical metric’s lower bound.
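Assuming the kernel representation in (2.1), the passage to (2.2) is a chain-rule computation: with c_H = 1 and F′(z) = (1/2 − H)(z−1)^{H−3/2}(1 − z^{H−1/2}), the two terms collapse to (H − 1/2)(t−s)^{H−3/2}(s/t)^{1/2−H}. A numerical sketch of this identity, including the sign change at H = 1/2 (ours, for illustration only; the normalization c_H = 1 is an assumption of the check):

```python
# Check: d/dt [ (t-s)^{H-1/2} + s^{H-1/2} * F(t/s) ] equals
#        (H-1/2) * (t-s)^{H-3/2} * (s/t)^{1/2-H},
# where F'(z) = (1/2-H) * (z-1)^{H-3/2} * (1 - z^{H-1/2})  (taking c_H = 1).
def dK_dt_via_F(t, s, H):
    z = t / s
    Fprime = (0.5 - H) * (z - 1.0) ** (H - 1.5) * (1.0 - z ** (H - 0.5))
    # chain rule: d/dt F(t/s) = F'(t/s) / s
    return (H - 0.5) * (t - s) ** (H - 1.5) + s ** (H - 0.5) * Fprime / s

def dK_dt_closed(t, s, H):
    # right-hand side of (2.2) with c_H = 1
    return (H - 0.5) * (t - s) ** (H - 1.5) * (s / t) ** (0.5 - H)

for H in (0.3, 0.7):
    for (t, s) in ((2.0, 1.0), (1.5, 0.4), (3.0, 2.5)):
        a, b = dK_dt_via_F(t, s, H), dK_dt_closed(t, s, H)
        assert abs(a - b) <= 1e-10 * abs(b)
        # the sign of dK/dt matches the sign of H - 1/2
        assert (a > 0) == (H > 0.5)
```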
We denote by E the set of step functions on [0, T ]. Let H be the Hilbert space defined
as the closure of E with respect to the scalar product
\langle 1_{[0,t]}, 1_{[0,s]}\rangle_{H} = R(t,s).
The mapping 1_{[0,t]} ↦ B^H_t can be extended to an isometry between H and the Gaussian space H_1 associated with B^H. Then {B^H(φ), φ ∈ H} is an isonormal Gaussian process associated with the Hilbert space H. For every element φ ∈ H, B^H(φ) is called the Wiener integral of φ with respect to B^H and is denoted

\int_0^T φ(s)\, B^H(ds).
For every s < t, consider the linear operator K^* from E to L²([0,T]) defined by

K_t^* φ(s) = K^H(t,s)\,φ(s) + \int_s^t \big(φ(u)-φ(s)\big)\,\frac{\partial K^H}{\partial u}(u,s)\, du.
When H > 1/2, since K^H(t,t) = 0, this operator has the simpler expression

K_t^* φ(s) = \int_s^t φ(u)\,\frac{\partial K^H}{\partial u}(u,s)\, du.
The operator K^* is an isometry between E and L²([0,T]) that can be extended to the Hilbert space H. As a consequence, we have the following relationship between the Wiener integral with respect to the fractional Brownian motion B^H and the Wiener integral with respect to the Wiener process W:

\int_0^t φ(s)\, B^H(ds) = \int_0^t K_t^* φ(s)\, W(ds),

which holds for every φ ∈ H if and only if K_t^* φ ∈ L²([0,T]).
Recall also that when H > 1/2,

E\Big[\int_0^t φ(s)\,B^H(ds)\int_0^t ψ(s)\,B^H(ds)\Big] = H(2H-1)\int_0^t ds\int_0^t du\,φ(s)ψ(u)|s-u|^{2H-2}.   (2.3)
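Formula (2.3) can be sanity-checked on indicator functions: taking φ = 1_{[0,a]} and ψ = 1_{[0,b]}, the left-hand side is E[B^H(a)B^H(b)] = R(a,b), so the double integral must reproduce the fBm covariance. A numerical sketch (ours, not the paper's; the inner integral is evaluated in closed form first to tame the singularity):

```python
H = 0.7  # any H > 1/2 works for this check

def R(t, s):
    # fBm covariance
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def rhs_of_23(a, b, n=200_000):
    # H(2H-1) * int_0^a int_0^b |s-u|^{2H-2} du ds, with the inner integral
    # in closed form:
    #   int_0^b |s-u|^{2H-2} du = (s^{2H-1} + (b-s)^{2H-1})/(2H-1)  if s <= b
    #                           = (s^{2H-1} - (s-b)^{2H-1})/(2H-1)  if s >  b
    h = a / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        if s <= b:
            inner = s ** (2 * H - 1) + (b - s) ** (2 * H - 1)
        else:
            inner = s ** (2 * H - 1) - (s - b) ** (2 * H - 1)
        total += inner
    return H * total * h  # the factor (2H-1) cancels against the inner integral

a, b = 1.0, 0.5
assert abs(rhs_of_23(a, b) - R(a, b)) < 1e-3
```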
2.2 Space regularity
The next lemma gives a precise connection between a generic condition of the type (1.3)
and the Fourier expansion of a canonical metric for a homogeneous Gaussian field on the
circle.
Lemma 2.1. Let Y be a homogeneous, centered and separable Gaussian field on S 1 with
canonical metric δ (x, y) = δ (x − y) for some univariate function δ. Then, there exists a
sequence of non-negative real numbers {r_n}_{n∈N} such that for any r ∈ S¹,

\delta^2(r) = 2\sum_{n=1}^{\infty} r_n\,(1-\cos nr).   (2.4)
Moreover, if there exist constants c and C positive, and α ∈ (0, 1], such that for all n large
enough,
cn^{-2\alpha-1} \le r_n \le Cn^{-2\alpha-1},   (2.5)
then for all r close enough to 0,

\sqrt{k_\alpha c}\; r^{\alpha} \le \delta(r) \le \sqrt{K_\alpha C}\; r^{\alpha},   (2.6)

where k_α and K_α are constants depending only on α. More specifically, the upper bound (resp. lower bound) in (2.5) implies the upper bound (resp. lower bound) in (2.6).
Proof. We start by proving (2.4). Let C(x,y) denote the covariance function of Y, that is, for any x, y ∈ S¹,

E[Y(x)Y(y)] = C(x,y),

where C depends only on the difference x − y. Because C is positive definite, there exists a sequence of non-negative real numbers {r_n}_{n∈N} such that

C(x,y) = \sum_{n\in\mathbb N} r_n \cos(n(x-y)).

Hence, for any r ∈ S¹,

\delta^2(r) = E[(Y(0)-Y(r))^2] = 2\sum_{n=1}^{\infty} r_n(1-\cos nr).
This proves (2.4).
We now prove the second statement of the lemma. We begin by proving the upper bound statement. Assuming that the upper bound of (2.5) holds for all n > n₀ ≥ 1, we restrict r accordingly: we assume n₀ ≤ [1/r], that is, r ≤ 1/n₀. In this case, we immediately get r² ≤ r^{2α}. We write

\delta^2(r) = \sum_{n=1}^{n_0-1} r_n(1-\cos nr) + \sum_{n=n_0}^{[1/r]} r_n(1-\cos nr) + \sum_{n=[1/r]+1}^{\infty} r_n(1-\cos nr)
\le \max_{n\le n_0}\{r_n\} \sum_{n=1}^{n_0-1} (nr)^2 + \sum_{n=1}^{[1/r]} Cn^{-2\alpha-1}(nr)^2 + 2\sum_{n=[1/r]+1}^{\infty} Cn^{-2\alpha-1}
\le n_0^2 \max_{n\le n_0}\{r_n\}\, r^2 + Cr^2 \sum_{n=1}^{[1/r]} n^{-2\alpha+1} + 2\sum_{n=[1/r]+1}^{\infty} Cn^{-2\alpha-1}
\le r^{2-2\alpha}\, n_0^2 \max_{n\le n_0}\{r_n\}\, r^{2\alpha} + CC_\alpha\, r^2 (1/r)^{-2\alpha+2} + 2CC'_\alpha (1/r)^{-2\alpha}
\le 2C\big(C_\alpha + 2C'_\alpha\big) r^{2\alpha},

provided r ≤ r₁ := min\{1/n_0;\ \big(C(C_\alpha+2C'_\alpha)/(n_0^2 \max_{n\le n_0}\{r_n\})\big)^{1/(2-2\alpha)}\}, where C_α and C'_α are constants depending only on α. It is elementary to check that C'_α can be taken as 1/(2α). If α ∈ (0, 1/2), then one checks that C_α can be taken as 1; while if α ∈ [1/2, 1], and we assume moreover that r < r₂ := (1 − 2α)^{−1/(2α)}, then C_α can be taken as α^{−1}. In other words, when α < 1/2, we obtain the upper bound of (2.6) for all r ≤ r₁, with K_α = 4(α^{−1} + 1), while when α ∈ [1/2, 1], we obtain the upper bound of (2.6) for all r ≤ min{r₁; r₂} with K_α = 8α^{−1}. In fact, the formula K_α = 8α^{−1} can be used for both cases.
In order to prove the lower bound on δ(r), we write instead, still assuming r ≤ 1/n₀, that

2^{-1}\delta^2(r) = \sum_{n=1}^{\infty} r_n(1-\cos nr) \ge c\sum_{n=n_0}^{\infty} n^{-2\alpha-1}(1-\cos nr)
\ge c\sum_{n=[1/r]+1}^{[\pi/(2r)]} n^{-2\alpha-1}(1-\cos nr) \ge c\,(1-\cos 1)\sum_{n=[1/r]+1}^{[\pi/(2r)]} n^{-2\alpha-1}
\ge c\,(1-\cos 1)\Big(\frac{\pi}{2r}\Big)^{-2\alpha-1}\Big(\Big[\frac{\pi}{2r}\Big] - 1 - \Big[\frac{1}{r}\Big]\Big)
\ge r^{2\alpha}\, c\,(1-\cos 1)\Big(\frac{\pi}{2}\Big)^{-2\alpha-1}\Big(\frac{\pi}{2} - 1 - 2r\Big).

Note here that 1 − cos 1 > 0.459 and π/2 − 1 > 0.57. It is now clear that choosing r ≤ r₀ := min{0.035; 1/n₀}, we get

\delta^2(r) \ge r^{2\alpha}\, c\,(1-\cos 1)\Big(\frac{\pi}{2}\Big)^{-2\alpha},

which proves the lower bound of (2.6) with k_α = (1 − cos 1)(π/2)^{−2α} for all r ≤ r₀. The proof of the lemma is complete.
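Lemma 2.1 can be illustrated numerically: taking the pure power case r_n = n^{−2α−1} of (2.5), the ratio δ²(r)/r^{2α} computed from the series (2.4) stabilizes as r → 0, which is exactly the two-sided bound (2.6). A sketch (ours; the truncation level and the sample α are arbitrary):

```python
import math

def delta_sq(r, alpha, N=200_000):
    # delta^2(r) = 2 * sum_{n>=1} n^{-2*alpha-1} * (1 - cos(n*r)), truncated at N
    return 2.0 * sum(
        n ** (-2.0 * alpha - 1.0) * (1.0 - math.cos(n * r)) for n in range(1, N)
    )

alpha = 0.5
r1, r2 = 0.01, 0.001
ratio1 = delta_sq(r1, alpha) / r1 ** (2 * alpha)
ratio2 = delta_sq(r2, alpha) / r2 ** (2 * alpha)
# the two ratios agree up to a modest factor, i.e. delta(r) ~ r^alpha
assert 0.8 < ratio1 / ratio2 < 1.25
```

For α = 1/2 the limiting constant is in fact explicit (δ²(r) = πr − r²/2), so the ratio tends to π.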
This lemma can be applied immediately, to find sharp bounds on the spatial canonical
metric of u; the almost-sure continuity results also follow.
Corollary 2.2. Let H ∈ (0,1), t₀ > 0 and t ∈ [t₀, T] be fixed. Assume hypothesis (1.3). Then the canonical metric δ_t(x − y) for u(t, ·) satisfies, for all r close enough to 0,

\sqrt{k_\alpha\, c\, c(t_0,T,H)}\; r^{\alpha} \le \delta_t(r) \le \sqrt{K_\alpha\, C\, C(t_0,T,H)}\; r^{\alpha},

where k_α and K_α are constants depending only on α, c(t₀,T,H) and C(t₀,T,H) are constants depending only on t₀, T and H, and c, C are the constants in (1.3). In particular, u(t, ·) is β-Hölder continuous for any β ∈ (0, α). More specifically, up to a non-random constant, the function r ↦ r^α log^{1/2}(1/r) is an almost sure uniform modulus of continuity for u(t, ·).
Proof. Let (β^H(t), t ≥ 0) be a one-dimensional fractional Brownian motion. Let t₀ > 0 and t ∈ [t₀, T] be fixed. From the proof of Theorems 2 and 3 of [TTV03] we deduce that there exist positive constants c(t₀,T,H) and C(t₀,T,H) such that, for all n sufficiently large,

c(t_0,T,H)\, n^{-4H} \le E\bigg[\bigg(\int_0^t e^{-n^2(t-s)}\,\beta_n^H(ds)\bigg)^2\bigg] \le C(t_0,T,H)\, n^{-4H}.

Thus, appealing to (1.2), we find that

2c(t_0,T,H)\sum_{n} q_n n^{-4H}(1-\cos(nr)) \le \delta_t^2(r) \le 2C(t_0,T,H)\sum_{n} q_n n^{-4H}(1-\cos(nr)).

Then hypothesis (1.3) and Lemma 2.1 conclude the first result of the corollary.
The second statement of the corollary, which is a repeat of one of the continuity results
described in the introduction, is proved using the arguments described therein as well. In
fact, a simple application of Dudley’s entropy upper bound theorem is sufficient (see [K02,
Theorem 2.7.1]). We do not elaborate further on this point.
2.3 Time regularity
We now concentrate our efforts on finding sharp bounds on the time-canonical metric of u. The bounds we find for H > 1/2 were essentially already obtained in [SV06], although the result and its proof were not stated explicitly therein, an omission which we deal with here. When H < 1/2, no results were known, for either upper or lower bounds: we perform these calculations from scratch. This portion of our calculations is very delicate. As in the previous section, our new estimates can be used to also derive almost sure regularity results.
Proposition 2.3. Let H ∈ (0,1). Assume hypothesis (1.3). Let T > 0, t₀ ∈ (0,1] and s, t ∈ [t₀, T] with |t − s| ≤ t₀/2 be fixed. Then the canonical metric δ_x(t − s) for u(·, x) satisfies for every x ∈ S¹

c_{t_0,T,H}\, |t-s|^{\alpha\wedge(2H)} \le \delta_x^2(t-s) \le C_{t_0,T,H}\, |t-s|^{\alpha\wedge(2H)},   (2.7)

where c_{t_0,T,H} and C_{t_0,T,H} are positive constants depending only on t₀, T and H. In particular, u(·, x) is β-Hölder continuous for any β ∈ (0, (α/2) ∧ H). More specifically, up to a non-random constant, the function r ↦ r^{(α/2)∧H} log^{1/2}(1/r) is an almost sure uniform modulus of continuity for u(·, x).
Proof. The statement on almost-sure continuity is established using the arguments described in the introduction, or simply by applying Dudley’s entropy upper bound theorem (see [K02, Theorem 2.7.1]). We detail only the proof of (2.7), separating the cases H > 1/2 and H < 1/2.
Fix T > 0, t₀ ∈ (0,1] and s, t ∈ [t₀, T] such that |t − s| ≤ t₀/2. We assume without loss of generality that s ≤ t. Following [SV06, Section 2.1], we have
\delta_x^2(s,t) = q_0|t-s|^{2H} + \sum_{n=1}^{+\infty} q_n E\bigg[\bigg(\int_0^s \big(e^{-n^2(t-r)}-e^{-n^2(s-r)}\big)\beta_n^H(dr) + \int_s^t e^{-n^2(t-r)}\beta_n^H(dr)\bigg)^2\bigg],   (2.8)
where {(βnH (t), t ≥ 0)}n≥1 is a sequence of fractional Brownian motions.
In order to bound the last expectation we consider two different cases.
Case 1: H ≥ 1/2. In [SV06, (15)] it is proved that δ_x²(s,t) is bounded above and below by

q_0|t-s|^{2H} + \sum_{n^2(t-s)>1} \frac{c_H q_n}{n^{4H}} + \sum_{n^2(t-s)\le 1} C_H q_n |t-s|^{2H}.

Taking q_n and α ∈ (0,1] from hypothesis (1.3), we obtain that δ_x²(s,t) is bounded above and below by

c_H\big(|t-s|^{2H} + |t-s|^{\alpha}\big).

Therefore, the upper and the lower bounds of (2.7) follow for H ≥ 1/2.
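The head/tail splitting at n²(t−s) = 1 used in Case 1 (and repeatedly in Case 2 below) can be illustrated numerically: with q_n = n^{4H−2α−1} as in (1.3), the quantity Σ_{n²dt>1} q_n n^{−4H} + dt^{2H} Σ_{n²dt≤1} q_n scales like dt^{α∧(2H)}. A sketch (ours; parameter values arbitrary, chosen here with α < 2H):

```python
import math

def S(dt, H, alpha, N=300_000):
    # head: n^2*dt <= 1 contributes q_n * dt^{2H}; tail: n^2*dt > 1 contributes q_n * n^{-4H}
    m = int(1.0 / math.sqrt(dt))
    head = dt ** (2 * H) * sum(n ** (4 * H - 2 * alpha - 1) for n in range(1, m + 1))
    tail = sum(n ** (-2 * alpha - 1.0) for n in range(m + 1, N))
    return head + tail

H, alpha = 0.3, 0.4  # alpha < 2H, so the expected exponent is alpha
dt1, dt2 = 1e-4, 1e-6
slope = math.log(S(dt1, H, alpha) / S(dt2, H, alpha)) / math.log(dt1 / dt2)
assert abs(slope - min(alpha, 2 * H)) < 0.05
```

The estimated log-log slope recovers the exponent α ∧ 2H of Proposition 2.3.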
Case 2: H < 1/2. We prove the upper and lower bounds of (2.7) separately.
The upper bound. In order to prove the upper bound of (2.7), we start by estimating the expectation in (2.8). Using the results in Section 2.2, we have that

E\bigg[\bigg(\int_0^s \big(e^{-n^2(t-r)}-e^{-n^2(s-r)}\big)\beta_n^H(dr) + \int_s^t e^{-n^2(t-r)}\beta_n^H(dr)\bigg)^2\bigg] \le 2I_1 + I_2 + 2I_3,   (2.9)
where

I_1 := \int_0^s (K_s^* f(r))^2\, dr, \qquad f(r) = e^{-n^2(t-r)} - e^{-n^2(s-r)},
I_2 := \int_s^t (K_t^* g(r))^2\, dr, \qquad g(r) = e^{-n^2(t-r)},   (2.10)
I_3 := \int_0^s (K_t^* g(r) - K_s^* g(r))^2\, dr.
We start by estimating I₁. We write

I_1 \le 2\int_0^s (K(s,r)f(r))^2\, dr + 2\int_0^s \bigg(\int_r^s (f(u)-f(r))\,\frac{\partial K}{\partial u}(u,r)\, du\bigg)^2 dr := 2I_{1,1} + 2I_{1,2}.   (2.11)
Using Lemma A.1 and the change of variables 2n²(s − r) = v, we have

I_{1,1} \le c_H \int_0^s (s-r)^{2H-1} r^{2H-1} \big(e^{-n^2(t-r)} - e^{-n^2(s-r)}\big)^2\, dr
= \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2 \int_0^{2n^2 s} \Big(s-\frac{v}{2n^2}\Big)^{2H-1} v^{2H-1} e^{-v}\, dv.

By Lemma A.2, this yields

I_{1,1} \le \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2.
We now treat I_{1,2}. Using Lemma A.1 and the change of variables s − r = v, s − u = v′, we have

I_{1,2} \le c_H\big(1-e^{-n^2(t-s)}\big)^2 \int_0^s dv\,\bigg(\int_0^v dv'\,(v-v')^{H-\frac32}\big(e^{-n^2 v'} - e^{-n^2 v}\big)\bigg)^2.

By the change of variables v − v′ = u, we find

I_{1,2} \le c_H\big(1-e^{-n^2(t-s)}\big)^2 \int_0^s dv\, e^{-2n^2 v}\bigg(\int_0^v du\, u^{H-\frac32}\big(e^{n^2 u}-1\big)\bigg)^2.

Then using [TTV03, Lemma 2] with a = n² and A = H − 1/2, we conclude that

I_{1,2} \le \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2.

Putting the estimates for I_{1,1} and I_{1,2} together, we get

I_1 \le \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2.
We now separate the sum in (2.8) into two terms, according to n²(t−s) > 1 (tail) and n²(t−s) ≤ 1 (head), and take q_n and α ∈ (0,1] from hypothesis (1.3). For the tail of the series we then obtain

\sum_{n^2(t-s)>1} q_n I_1 \le c_H \sum_{n^2(t-s)>1} n^{-2\alpha-1} \le c_H |t-s|^{\alpha}.

For the head of the series, use the inequality 1 − e^{−x} ≤ x, valid for all x ≥ 0, to get

\sum_{n^2(t-s)\le 1} q_n I_1 \le \sum_{n^2(t-s)\le 1} q_n \frac{c(t_0,H)}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^{2H}\big(1-e^{-n^2(t-s)}\big)^{2-2H}
\le c_H |t-s|^{2H} \sum_{n^2(t-s)\le 1} n^{4H-2\alpha-1}
\le c_H |t-s|^{\alpha\wedge(2H)}.
We now bound I₂:

I_2 \le 2\int_s^t (K(t,r)g(r))^2\, dr + 2\int_s^t dr\,\bigg(\int_r^t du\,(g(u)-g(r))\,\frac{\partial K}{\partial u}(u,r)\bigg)^2 := 2I_{2,1} + 2I_{2,2}.
Using Lemma A.1 and the change of variables 2n²(t − r) = u, we have

I_{2,1} \le c_H \int_s^t dr\,(t-r)^{2H-1} r^{2H-1} e^{-2n^2(t-r)}
= \frac{c_H}{n^{4H}} \int_0^{2n^2(t-s)} du\,\Big(t-\frac{u}{2n^2}\Big)^{2H-1} u^{2H-1} e^{-u}.
Using Lemma A.2, we obtain for the tail of the series

\sum_{n^2(t-s)>1} q_n I_{2,1} \le c_H \sum_{n^2(t-s)>1} n^{-2\alpha-1} \le c_H |t-s|^{\alpha}.
For the head of the series, as |t − s| ≤ t₀/2, we have

\sum_{n^2(t-s)\le 1} q_n I_{2,1} \le \sum_{n^2(t-s)\le 1} q_n \frac{c(t_0,H)}{n^{4H}}\Big(\frac{t}{2}\Big)^{2H-1} \int_0^{2n^2(t-s)} du\, u^{2H-1}
\le c_H |t-s|^{2H} \sum_{n^2(t-s)\le 1} n^{4H-2\alpha-1}.
This proves that \sum_{n^2(t-s)\le 1} q_n I_{2,1} is of the same order as \sum_{n^2(t-s)\le 1} q_n I_1, which we calculated above to be of order |t-s|^{\alpha\wedge(2H)}.
We now bound I_{2,2}. Using Lemma A.1 and the change of variables t − r = v, t − u = v′, we have

I_{2,2} \le c_H \int_0^{t-s} dv\,\bigg(\int_0^v dv'\,(v-v')^{H-\frac32}\big(e^{-n^2 v'}-e^{-n^2 v}\big)\bigg)^2.
Using the change of variables n²(v − v′) = y and 2n²v = x, we find

I_{2,2} \le \frac{c_H}{n^{4H}} \int_0^{2n^2(t-s)} dx\, e^{-x} \bigg(\int_0^{x/2} dy\, y^{H-\frac32}(e^y-1)\bigg)^2.
Appealing to [TTV03, Lemma 2] with a = n² and A = H − 1/2, we obtain for the tail of the series

\sum_{n^2(t-s)>1} q_n I_{2,2} \le c_H \sum_{n^2(t-s)>1} n^{-2\alpha-1} \le c_H |t-s|^{\alpha}.
For the head of the series, we have

\sum_{n^2(t-s)\le 1} q_n I_{2,2} \le \sum_{n^2(t-s)\le 1} q_n \frac{c_H}{n^{4H}} \int_0^{2n^2(t-s)} dx\,\bigg(\int_0^{1/2} dy\, y^{H-\frac32}(e^y-1)\bigg)^2
\le c_H |t-s| \sum_{n^2(t-s)\le 1} n^{-2\alpha+1} \le c_H |t-s|^{\alpha}.
We now estimate I₃:

I_3 \le 2\int_0^s (K(t,r)-K(s,r))^2 (g(r))^2\, dr + 2\int_0^s dr\,\bigg(\int_s^t du\,(g(u)-g(r))\,\frac{\partial K}{\partial u}(u,r)\bigg)^2 := 2I_{3,1} + 2I_{3,2}.
By Lemma A.1, we get, for every r < s < t,

K(t,r) - K(s,r) = (t-s)\int_0^1 \frac{\partial K}{\partial u}(s+v(t-s),r)\, dv   (2.12)
\le c_H (t-s)\int_0^1 |s+v(t-s)-r|^{H-3/2}\, dv
\le c_H\big((s-r)^{H-1/2} - (t-r)^{H-1/2}\big).   (2.13)
(2.13)
We now separate the evaluation of the integral in I3,1 depending upon whether r is
bigger or smaller than s − (t − s) /2. In the first case, we evaluate
Z s
2
(K (t, r) − K (s, r))2 e−n (t−r) dr.
I3,1,1 :=
s−(t−s)/2
Here, we have s − r < (t − s) /2 and t − r > t − s; therefore, using ( 2.13), we have
Z s
2
2
1 + 2H−1/2 (s − r)H−1/2 e−n (t−r) dr
I3,1,1 ≤ cH
s−(t−s)/2
Z s
−n2 (t−s)
≤ cH e
(s − r)2H−1 dr
s−(t−s)/2
−n2 (t−s)
= cH e
(t − s)2H .
For the head of the series, we find

\sum_{n^2(t-s)\le 1} q_n I_{3,1,1} \le c_H (t-s)^{2H} \sum_{n^2(t-s)\le 1} n^{4H-2\alpha-1},

which is bounded above by c_H|t-s|^{\alpha\wedge(2H)}, while for the tail of the series we have

\sum_{n^2(t-s)>1} q_n I_{3,1,1} \le c_H (t-s)^{2H} \sum_{n^2(t-s)>1} n^{4H-2\alpha-1} e^{-n^2(t-s)}
\le c_H (t-s)^{2H} \int_{(t-s)^{-1/2}}^{\infty} e^{-x^2(t-s)}\, x^{4H-2\alpha-1}\, dx
= c_H (t-s)^{\alpha} \int_{1}^{\infty} e^{-y^2} y^{4H-2\alpha-1}\, dy = c_H (t-s)^{\alpha}.
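The series-to-integral comparison used for this tail estimate can be isolated and tested numerically: Σ_{n²dt>1} n^{4H−2α−1} e^{−n²dt} behaves like dt^{−(4H−2α)/2} ∫₁^∞ y^{4H−2α−1} e^{−y²} dy, so after multiplying by dt^{2H} the exponent α emerges. A sketch (ours; parameter values arbitrary):

```python
import math

def tail(dt, H, alpha):
    # dt^{2H} * sum over the tail n^2*dt > 1 of n^{4H-2*alpha-1} * exp(-n^2*dt)
    m = int(1.0 / math.sqrt(dt))
    n_max = int(20.0 / math.sqrt(dt))  # beyond this, exp(-n^2*dt) < e^{-400}
    return dt ** (2 * H) * sum(
        n ** (4 * H - 2 * alpha - 1) * math.exp(-n * n * dt) for n in range(m + 1, n_max)
    )

H, alpha = 0.3, 0.4
dt1, dt2 = 1e-4, 1e-6
slope = math.log(tail(dt1, H, alpha) / tail(dt2, H, alpha)) / math.log(dt1 / dt2)
assert abs(slope - alpha) < 0.05
```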
Second, we evaluate

I_{3,1,2} := \int_0^{s-(t-s)/2} (K(t,r)-K(s,r))^2\, e^{-n^2(t-r)}\, dr.

Here, we have s − r > (t−s)/2; we simply use (2.12), where an upper bound is obtained by replacing |s + v(t−s) − r|^{H−3/2} by |s − r|^{H−3/2}; the latter can now be bounded above by 2^{3/2−H}|t − s|^{H−3/2}. Thus

I_{3,1,2} \le c_H |t-s|^{2+2H-3} \int_0^{s-(t-s)/2} e^{-n^2(t-r)}\, dr
\le c_H |t-s|^{2H-1}\, n^{-2}\, e^{-n^2(t-s)}.
This estimate will not help us in the case n²(t−s) ≤ 1. In the other case, we have

\sum_{n^2(t-s)>1} q_n I_{3,1,2} \le c_H |t-s|^{2H-1} \sum_{n^2(t-s)>1} n^{4H-2\alpha-3} e^{-n^2(t-s)}
\le c_H |t-s|^{2H-1} \int_{(t-s)^{-1/2}}^{\infty} x^{4H-2\alpha-3} e^{-x^2(t-s)}\, dx
= c_H |t-s|^{2H-1} (t-s)^{-2H+\alpha+1} \int_{1}^{\infty} y^{4H-2\alpha-3} e^{-y^2}\, dy = c_H (t-s)^{\alpha}.
The third and last step of the estimation of I_{3,1} is the sum for n²(t−s) ≤ 1 of I_{3,1,2}. In this case, we use (2.13) and obtain an upper bound by bounding (s−r)^{H−1/2} − (t−r)^{H−1/2} above by c_H (t−s)(s−r)^{H−3/2}. Thus

I_{3,1,2} \le c_H (t-s)^2 \int_0^{s-(t-s)/2} (s-r)^{2H-3}\, dr \le c_H (t-s)^{2H}.

This proves that \sum_{n^2(t-s)\le 1} q_n I_{3,1,2} is of the same order as \sum_{n^2(t-s)\le 1} q_n I_{3,1,1}, which we calculated above to be of order |t-s|^{\alpha\wedge(2H)}.
We now bound I_{3,2}. Using Lemma A.1 and the change of variables s − r = v, s − u = v′, we have

I_{3,2} \le c_H\, e^{-2n^2(t-s)} \int_0^s dv\,\bigg(\int_{s-t}^0 dv'\,(v-v')^{H-\frac32}\big(e^{-n^2 v'}-e^{-n^2 v}\big)\bigg)^2.

Using the change of variables v − v′ = u, we find

I_{3,2} \le c_H\, e^{-2n^2(t-s)} \int_0^s dv\, e^{-2n^2 v}\bigg(\int_v^{v+(t-s)} du\, u^{H-\frac32}\big(e^{n^2 u}-1\big)\bigg)^2.
Appealing to [TTV03, Lemma 2] with a = n² and A = H − 1/2, we obtain for the tail of the series

\sum_{n^2(t-s)>1} q_n I_{3,2} \le c_H \sum_{n^2(t-s)>1} n^{-2\alpha-1} \le c_H |t-s|^{\alpha}.
In order to evaluate the head of the series, we separate the evaluation of the integral in I_{3,2} depending upon whether v is bigger or smaller than t − s, that is,

I_{3,2} \le c_H \int_0^s dv\,\bigg(\int_v^{v+(t-s)} du\, u^{H-\frac32}\bigg)^2
= c_H \int_0^{t-s} dv\,\bigg(\int_v^{v+(t-s)} du\, u^{H-\frac32}\bigg)^2 + c_H \int_{t-s}^{s} dv\,\bigg(\int_v^{v+(t-s)} du\, u^{H-\frac32}\bigg)^2
\le c_H \int_0^{t-s} dv\, v^{2H-1} + c_H \int_{t-s}^{s} dv\, v^{2H-3}(t-s)^2 \le c_H (t-s)^{2H}.

Therefore, \sum_{n^2(t-s)\le 1} q_n I_{3,2} is of the same order as \sum_{n^2(t-s)\le 1} q_n I_{3,1}, which is of order |t-s|^{\alpha\wedge(2H)}.
Using all the estimates above, together with (2.8), we conclude that

\delta_x^2(s,t) \le c_H\big(|t-s|^{2H} + |t-s|^{\alpha}\big) \le c'_H |t-s|^{\alpha\wedge(2H)}.

This proves the upper bound of (2.7) when H < 1/2.
The lower bound. We now estimate the lower bound of the expectation in the case H < 1/2. We write

E\bigg[\bigg(\int_0^s \big(e^{-n^2(t-r)}-e^{-n^2(s-r)}\big)\beta_n^H(dr) + \int_s^t e^{-n^2(t-r)}\beta_n^H(dr)\bigg)^2\bigg] = I_1 + I_2 + I_3 + I_4,   (2.14)

where I₁, I₂ and I₃ are as in (2.10), and

I_4 := \int_0^s (K_s^* f(r))\big(K_t^* g(r) - K_s^* g(r)\big)\, dr.   (2.15)

First note that I₁, I₂, I₃ ≥ 0.
We start by finding a lower bound for I₁. We have I₁ := I_{1,1} + I_{1,2} + I_{1,3}, where I_{1,1} and I_{1,2} are as in (2.11), and

I_{1,3} = 2\int_0^s dr\, K(s,r) f(r) \int_r^s du\,(f(u)-f(r))\,\frac{\partial K}{\partial u}(u,r).
The change of variables s − r = v, s − u = w, v − w = u′ gives

I_1 = \big(1-e^{-n^2(t-s)}\big)^2 \int_0^s dv\, e^{-2n^2 v}\bigg(K(s,s-v) + \int_0^v du'\,\frac{\partial K}{\partial u'}(u',0)\big(e^{n^2 u'}-1\big)\bigg)^2.
Appealing to Lemma A.1 in the appendix, and the change of variables n²u′ = u, n²v = x, we obtain

I_1 \ge \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2 \int_0^{n^2 s} dx\, e^{-2x}\bigg(x^{H-\frac12} - \Big(\frac12-H\Big)\int_0^x du\, u^{H-\frac32}\big(e^u-1\big)\bigg)^2
\ge \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2 \int_0^{t_0} dx\, e^{-2x}\bigg(x^{H-\frac12} - \Big(\frac12-H\Big)\int_0^x du\, u^{H-\frac32}\big(e^u-1\big)\bigg)^2
= \frac{c_H}{n^{4H}}\big(1-e^{-n^2(t-s)}\big)^2,   (2.16)

as the last integral is finite and positive.
Next we evaluate I₄. We write I₄ = I_{4,1} + I_{4,2} + I_{4,3} + I_{4,4}, where

I_{4,1} = \int_0^s dr\, K(s,r) f(r)\big(K(t,r)-K(s,r)\big) g(r),
I_{4,2} = \int_0^s dr\, K(s,r) f(r) \int_s^t du\,(g(u)-g(r))\,\frac{\partial K}{\partial u}(u,r),
I_{4,3} = \int_0^s dr \int_r^s du\,(f(u)-f(r))\,\frac{\partial K}{\partial u}(u,r) \int_s^t dv\,(g(v)-g(r))\,\frac{\partial K}{\partial v}(v,r),   (2.17)
I_{4,4} = \int_0^s dr\,\big(K(t,r)-K(s,r)\big) g(r) \int_r^s du\,(f(u)-f(r))\,\frac{\partial K}{\partial u}(u,r).
Now, note that I4,1 , I4,2 ≥ 0 but I4,3 , I4,4 ≤ 0.
We claim that, for some subset S_K ⊂ N,

\sum_{n\in S_K} q_n I_1 > 2\sum_{n\in S_K} q_n |I_{4,3}|,   (2.18)
\sum_{n\in S_K} q_n I_1 > 4\sum_{n\in S_K} q_n |I_{4,4}|,   (2.19)

where q_n and α ∈ (0,1] are as in hypothesis (1.3) and S_K := \{n \in \mathbb N : n^2(t-s) > K\} for some (large) constant K ≥ 1 which will be chosen later.
Assume for the moment that (2.18) and (2.19) are proved. We write, using (2.16),

\sum_{n\in S_K} q_n I_1 \ge c_H (1-e^{-1})^2 \int_{2\sqrt{K}/\sqrt{t-s}}^{\infty} dx\, x^{-2\alpha-1} := c^1_{\alpha,H}\,(t-s)^{\alpha} K^{-\alpha}.   (2.20)
Because I₂, I₃, I_{4,1}, I_{4,2} ≥ 0, and using (2.18), (2.19) and (2.20), we find

\sum_{n\in\mathbb N} q_n (I_1+I_2+I_3+I_4) \ge \sum_{n\in S_K} q_n I_1 - \sum_{n\in S_K} q_n |I_{4,3}| - \sum_{n\in S_K} q_n |I_{4,4}|
\ge \frac14 \sum_{n\in S_K} q_n I_1
\ge c_{\alpha,H,K}\,(t-s)^{\alpha}.
Therefore, by (2.8) and (2.14), we conclude that

\delta_x^2(s,t) \ge q_0|t-s|^{2H} + c_H|t-s|^{\alpha} \ge c'_H |t-s|^{\alpha\wedge(2H)}.

This proves the lower bound of (2.7) when H < 1/2.
We finally prove (2.18) and (2.19).
Proof of (2.18). Using Lemma A.1 and the change of variables s − r = r′, s − u = u′, s − v = v′, r′ − u′ = u″, r′ − v′ = v″, n²u″ = x, n²v″ = v, we find

|I_{4,3}| \le c_H\big(1-e^{-n^2(t-s)}\big)\, e^{-n^2(t-s)}\, n^{-4H}
\times \int_0^{n^2 s} dx\, e^{-2x}\bigg(\int_0^x du\, u^{H-3/2}(e^u-1)\bigg)\bigg(\int_x^{x+n^2(t-s)} dv\, v^{H-3/2}(e^v-1)\bigg).   (2.21)
Note that, with the exception of the factor $e^{-n^2(t-s)}$ in $|I_{4,3}|$, the combination of all the terms in $I_1$ and $I_{4,3}$ is in fact largely similar, which makes this portion of the proof quite delicate; in particular, to exploit the factor $e^{-n^2(t-s)}$, we must restrict the values of $n^2(t-s)$ to being relatively large, which explains the choice of $S_K$ above.
Our strategy is to bound the sum over $n \in S_K$ of $q_n|I_{4,3}|$ above as tightly as possible by performing a "Fubini", dragging the sum over $n$ all the way inside the expression for $\sum q_n|I_{4,3}|$, and evaluating it first using some Gaussian estimates. That these Gaussian estimates work has to do with the precise eigenvalue structure of the Laplacian, not with the Gaussian property of the driving noise.
We proved in (2.20) that the contribution of $I_1$ is bounded below by an expression of the form $c^1_{\alpha,H}(t-s)^{\alpha}K^{-\alpha}$, where $c^1_{\alpha,H}$ depends only on $\alpha$ and $H$. We will now show that
$$\sum_{n\in S_K} q_n|I_{4,3}| \le c^2_{\alpha,H}\,(t-s)^{\alpha}K^{-\beta} \tag{2.22}$$
for some $\beta > \alpha$, where $c^2_{\alpha,H}$ again depends only on $H$ and $\alpha$. Even if $c^2_{\alpha,H}$ is much larger than $c^1_{\alpha,H}$, one only needs to choose $K \ge \big(2c^2_{\alpha,H}/c^1_{\alpha,H}\big)^{1/(\beta-\alpha)}$ to guarantee that the contribution of $I_1$ exceeds twice the absolute value of the contribution of $I_{4,3}$, as announced in (2.18); even though the latter is negative, the sum of the two then exceeds $(c^1_{\alpha,H}/2)(t-s)^{\alpha}K^{-\alpha}$, for some $K$ depending only on $H$ and $\alpha$.
First, for fixed $x$, we perform the announced Fubini: instead of having the integration and summation limits for $n$ and $v$ as $n > \sqrt{K/(t-s)}$ first and $x < v < x + n^2(t-s)$ next, we get instead $x < v < \infty$ and
$$n > \max\Big\{\sqrt{K/(t-s)},\; \sqrt{(v-x)/(t-s)}\Big\} = (t-s)^{-1/2}\sqrt{(v-x)\vee K}.$$
Therefore, bounding $1-e^{-n^2(t-s)}$ by $1$, and $n^2 s$ by $\infty$, we have
$$\sum_{n\in S_K} q_n|I_{4,3}| \le c_H\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_x^{\infty} dv\,v^{H-3/2}\big(e^v-1\big)\,S(K, v-x, t-s), \tag{2.23}$$
where the term $S(K, v-x, t-s)$ is defined by a series, which we compare to a Gaussian integral as follows:
$$S(K, v-x, t-s) := \sum_{n > (t-s)^{-1/2}\sqrt{(v-x)\vee K}} n^{-2\alpha-1}e^{-n^2(t-s)} \le \int_{(t-s)^{-1/2}\sqrt{(v-x)\vee K}}^{\infty} dy\,y^{-2\alpha-1}e^{-y^2(t-s)}.$$
Using the change of variable $w^2 = (t-s)y^2$, we have
$$S(K, v-x, t-s) \le (t-s)^{\alpha}\int_{\sqrt{(v-x)\vee K}}^{\infty} dw\,w^{-2\alpha-1}e^{-w^2} \le (t-s)^{\alpha}\big((v-x)\vee K\big)^{-\alpha-1/2}\int_{\sqrt{(v-x)\vee K}}^{\infty} dw\,e^{-w^2}.$$
Now, using the classical Gaussian tail estimate $\int_A^{\infty} dw\,e^{-w^2} \le 2^{-1}A^{-1}e^{-A^2}$, we get
$$S(K, v-x, t-s) \le 2^{-1}(t-s)^{\alpha}\big((v-x)\vee K\big)^{-\alpha-1}e^{-(v-x)\vee K}. \tag{2.24}$$
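The classical Gaussian tail estimate invoked above follows from a one-line comparison, recorded here for completeness:

```latex
\int_A^{\infty} e^{-w^2}\,dw \;\le\; \int_A^{\infty} \frac{w}{A}\,e^{-w^2}\,dw
  \;=\; \frac{e^{-A^2}}{2A}, \qquad A > 0,
% applied with A = \sqrt{(v-x)\vee K}, so that
%   A^{-1}e^{-A^2} = ((v-x)\vee K)^{-1/2}\, e^{-(v-x)\vee K}.
```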
Combining (2.23) and (2.24), we immediately have
$$
\begin{aligned}
\sum_{n\in S_K} q_n|I_{4,3}| &\le c_H(t-s)^{\alpha}\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_x^{\infty} dv\,v^{H-3/2}\big(e^v-1\big)\big((v-x)\vee K\big)^{-\alpha-1}e^{-(v-x)\vee K}\\
&= c_H(t-s)^{\alpha}e^{-K}K^{-\alpha-1}\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_x^{x+K} dv\,v^{H-3/2}\big(e^v-1\big) \qquad (2.25)\\
&\quad + c_H(t-s)^{\alpha}\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_{x+K}^{\infty} dv\,v^{H-3/2}\big(e^v-1\big)(v-x)^{-\alpha-1}e^{-(v-x)}. \qquad (2.26)
\end{aligned}
$$
We separate the last expression into various terms. We will first calculate the term in line (2.25) by separating the $x$-integration over $x \in [0,K]$ and $x \in (K,\infty)$; we denote the two resulting terms by $J_{4,3,1}$ and $J_{4,3,2}$, respectively. The term in line (2.26), which we denote by $J_{4,3,3}$, can be dealt with more directly. We now perform these evaluations.
Term $J_{4,3,1}$. We write
$$
\begin{aligned}
J_{4,3,1} &:= c_H(t-s)^{\alpha}e^{-K}K^{-\alpha-1}\int_0^K dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_x^{x+K} dv\,v^{H-3/2}\big(e^v-1\big)\\
&\le c_H(t-s)^{\alpha}e^{-K}K^{-\alpha-1}\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\times\Big(c_H + \int_1^{2K} dv\,v^{H-3/2}\big(e^v-1\big)\Big).
\end{aligned}
$$
Now, integrating by parts, we get
$$\int_1^{2K} dv\,v^{H-3/2}\big(e^v-1\big) \le c_H\,e^K K^{H+1/2}.$$
The last two estimates imply immediately that
$$J_{4,3,1} \le c_H(t-s)^{\alpha}K^{-\alpha+H-1/2},$$
which proves the contribution of $J_{4,3,1}$ to (2.22).
Term $J_{4,3,2}$. We write
$$
\begin{aligned}
J_{4,3,2} &:= c_H(t-s)^{\alpha}e^{-K}K^{-\alpha-1}\int_K^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_x^{x+K} dv\,v^{H-3/2}\big(e^v-1\big)\\
&\le c_H(t-s)^{\alpha}e^{-K}K^{-\alpha-1}\int_K^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\times x^{H-3/2}\big(e^{x+K}-e^x\big)\\
&\le c_H(t-s)^{\alpha}K^{-\alpha-1}\int_K^{\infty} dx\,e^{-x}x^{H-3/2}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\\
&\le c_H(t-s)^{\alpha}K^{-\alpha-1}\int_K^{\infty} dx\,e^{-x}x^{H-3/2}\Big(c_H + e^x\int_1^x du\,u^{H-3/2}\Big)\\
&\le c_H(t-s)^{\alpha}K^{-\alpha-1}\big(K^{H-3/2}e^{-K} + K^{H-1/2}\big)\\
&\le c_H(t-s)^{\alpha}K^{-\alpha+H-3/2},
\end{aligned}
$$
which proves the contribution of $J_{4,3,2}$ to (2.22).
Term $J_{4,3,3}$. The last part of the estimation is that of
$$
\begin{aligned}
J_{4,3,3} &:= c_H(t-s)^{\alpha}\int_0^{\infty} dx\,e^{-2x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_{x+K}^{\infty} dv\,v^{H-3/2}\big(e^v-1\big)(v-x)^{-\alpha-1}e^{-(v-x)}\\
&\le c_H(t-s)^{\alpha}K^{H-3/2}\int_0^{\infty} dx\,e^{-x}\int_0^x du\,u^{H-3/2}\big(e^u-1\big)\int_{x+K}^{\infty} dv\,(v-x)^{-\alpha-1}\\
&= c_{\alpha,H}(t-s)^{\alpha}K^{-\alpha+H-3/2}\int_0^{\infty} du\,u^{H-3/2}\big(e^u-1\big)\int_u^{\infty} dx\,e^{-x}\\
&= c_{\alpha}(t-s)^{\alpha}K^{-\alpha+H-3/2}\int_0^{\infty} du\,u^{H-3/2}\big(e^u-1\big)e^{-u}\\
&\le c_{\alpha}(t-s)^{\alpha}K^{-\alpha+H-3/2}\Big(c_H + \int_1^{\infty} du\,u^{H-3/2}\Big)\\
&= c_{\alpha,H}(t-s)^{\alpha}K^{-\alpha+H-3/2}.
\end{aligned}
$$
Therefore, (2.22) holds taking $\beta = \alpha + \frac12 - H$, which is greater than $\alpha$ since $H < \frac12$.
The proof of (2.18) is now finished.
Proof of (2.19). By (2.12) and Lemma A.1, we have
$$|I_{4,4}| \le c_H(t-s)\int_0^s dr\,(s-r)^{H-\frac32}g(r)\int_r^s du\,(u-r)^{H-\frac32}\big(f(u)-f(r)\big).$$
Using the change of variables $s-r = r'$, $s-u = u'$, $r'-u' = v$, $n^2v = u$, $n^2r' = x$, we get
$$|I_{4,4}| \le \frac{c_H}{n^{4H-2}}(t-s)\,e^{-n^2(t-s)}\big(1-e^{-n^2(t-s)}\big)\int_0^{n^2 s} dx\,x^{H-\frac32}e^{-2x}\int_0^x du\,u^{H-\frac32}\big(e^u-1\big).$$
Bounding $1-e^{-n^2(t-s)}$ by $1$ and $n^2 s$ by $\infty$, we get
$$|I_{4,4}| \le \frac{c_H}{n^{4H-2}}(t-s)\,e^{-n^2(t-s)}.$$
We will now proceed as in the proof of (2.18); that is, we will prove that there exists a constant $c^3_H$ depending only on $H$ such that
$$\sum_{n\in S_K} q_n|I_{4,4}| \le c^3_H\,(t-s)^{\alpha}K^{-\beta} \tag{2.27}$$
for some $\beta > \alpha$. It then suffices to choose $K \ge \big(4c^3_H/c^1_{\alpha,H}\big)^{1/(\beta-\alpha)}$ to get (2.19).
We now prove (2.27). We write
$$
\begin{aligned}
\sum_{n\in S_K} q_n|I_{4,4}| &\le c_H(t-s)\int_{\sqrt{K/(t-s)}}^{\infty} dx\,x^{-2\alpha+1}e^{-x^2(t-s)} = c_H(t-s)^{\alpha}\int_{\sqrt K}^{\infty} dy\,y^{-2\alpha+1}e^{-y^2}\\
&\le c_H(t-s)^{\alpha}K^{-\alpha}\,2^{-1}\int_{\sqrt K}^{\infty} dy\,2y\,e^{-y^2} \le c_H(t-s)^{\alpha}K^{-(\alpha+1)},
\end{aligned}
$$
which proves (2.27) taking $\beta = \alpha+1$, and concludes the proof of (2.19).
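The two inequalities in the last display are elementary; explicitly:

```latex
% For y \ge \sqrt{K}: y^{-2\alpha+1} = y^{-2\alpha}\cdot y \le K^{-\alpha}\, y, hence
\int_{\sqrt K}^{\infty} y^{-2\alpha+1}e^{-y^2}\,dy
  \;\le\; K^{-\alpha}\,\frac12\int_{\sqrt K}^{\infty} 2y\,e^{-y^2}\,dy
  \;=\; \frac{K^{-\alpha}e^{-K}}{2}
  \;\le\; c\,K^{-(\alpha+1)},
% using e^{-K} \le K^{-1} for K \ge 1.
```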
This finishes the proof of the entire proposition.
3 Gaussian upper bound for the bivariate density
We denote by $p_{t,x;s,y}(\cdot,\cdot)$ the (Gaussian) probability density function of the random vector $(u(t,x), u(s,y))$ for all $s,t > 0$ and $x,y \in S^1$ such that $(t,x) \neq (s,y)$.
For every fixed real number $0 < \alpha \le 1$ we consider the metric
$$\Delta\big((t,x);(s,y)\big) = |x-y|^{2\alpha} + |t-s|^{\alpha\wedge(2H)}. \tag{3.1}$$
In this section we establish an upper bound of Gaussian type for the bivariate density
pt,x;s,y (· , ·) in terms of the metric (3.1). This will be one of the key results in order to
show the lower bound of Theorem 1.1. The estimates obtained in the previous section to
prove space and time regularity are nearly sufficient to obtain the results in this section.
The following further improvement is needed, which deals with precise joint regularity (see
[1-DKN07, (4.11)] for the space-time white noise case).
Lemma 3.1. Assume hypothesis (1.3). Fix $t_0, T > 0$. Then there exists $c_H > 0$ such that for any $s,t \in [t_0,T]$, $x,y \in S^1$ with $(t,x)$ sufficiently near $(s,y)$, and $i = 1,\dots,d$,
$$c_H^{-1}\,\Delta\big((t,x);(s,y)\big) \le E\big[(u_i(t,x)-u_i(s,y))^2\big] \le c_H\,\Delta\big((t,x);(s,y)\big). \tag{3.2}$$
Proof. The upper bound in (3.2) is a consequence of the upper bounds of Corollary 2.2 and Proposition 2.3, and the following inequality:
$$E\big[(u_i(t,x)-u_i(s,y))^2\big] \le 2\Big\{E\big[(u_i(t,x)-u_i(s,x))^2\big] + E\big[(u_i(s,x)-u_i(s,y))^2\big]\Big\}.$$
We now proceed to the proof of the lower bound in (3.2). By Corollary 2.2, there exist $c_1, c_2 > 0$ such that for all $t \in [t_0,T]$, $x,y \in S^1$ with $x$ sufficiently near $y$, and $i=1,\dots,d$,
$$c_1|x-y|^{2\alpha} \le E\big[(u_i(t,x)-u_i(t,y))^2\big] \le c_2|x-y|^{2\alpha}. \tag{3.3}$$
Moreover, Proposition 2.3 ensures the existence of $c_3, c_4 > 0$ such that for any $s,t \in [t_0,T]$, $x \in S^1$ with $t$ sufficiently near $s$, and $i=1,\dots,d$,
$$c_3|t-s|^{\alpha\wedge(2H)} \le E\big[(u_i(t,x)-u_i(s,x))^2\big] \le c_4|t-s|^{\alpha\wedge(2H)}. \tag{3.4}$$
Let us now consider three different cases.
Case 1: $|t-s|^{\alpha\wedge(2H)} < \frac{c_1}{4c_4}|x-y|^{2\alpha}$. Appealing to the lower bound in (3.3) and the upper bound in (3.4),
$$
\begin{aligned}
E\big[(u_i(t,x)-u_i(s,y))^2\big] &= E\big[(u_i(t,x)-u_i(t,y)+u_i(t,y)-u_i(s,y))^2\big]\\
&\ge \frac12 E\big[(u_i(t,x)-u_i(t,y))^2\big] - E\big[(u_i(t,y)-u_i(s,y))^2\big]\\
&\ge \frac{c_1}{2}|x-y|^{2\alpha} - c_4|t-s|^{\alpha\wedge(2H)}.
\end{aligned}
$$
Because of the inequality that defines this Case 1, this is bounded below by
$$\frac{c_1}{2}|x-y|^{2\alpha} - \frac{c_1}{4}|x-y|^{2\alpha} = \frac{c_1}{4}|x-y|^{2\alpha} \ge \frac{c_1}{8}|x-y|^{2\alpha} + \frac{c_1}{8}\,\frac{4c_4}{c_1}|t-s|^{\alpha\wedge(2H)} \ge \min\Big(\frac{c_1}{8}, \frac{c_4}{2}\Big)\Delta\big((t,x);(s,y)\big).$$
This completes the proof of the lower bound in (3.2) in Case 1.
Case 2: $|t-s|^{\alpha\wedge(2H)} > \frac{4c_2}{c_3}|x-y|^{2\alpha}$. The proof of this portion is identical to Case 1, by using the upper bound in (3.3) and the lower bound in (3.4), and writing
$$E\big[(u_i(t,x)-u_i(s,y))^2\big] = E\big[(u_i(t,x)-u_i(s,x)+u_i(s,x)-u_i(s,y))^2\big] \ge \frac12 E\big[(u_i(t,x)-u_i(s,x))^2\big] - E\big[(u_i(s,x)-u_i(s,y))^2\big],$$
which yields the lower bound $\min\big(\frac{c_3}{8}, \frac{c_2}{2}\big)\Delta\big((t,x);(s,y)\big)$. This completes the proof of Case 2.
Case 3: $\frac{4c_2}{c_3}|x-y|^{2\alpha} \ge |t-s|^{\alpha\wedge(2H)} \ge \frac{c_1}{4c_4}|x-y|^{2\alpha}$. Note that it suffices to prove that
$$E\big[(u_i(t,x)-u_i(s,y))^2\big] \ge c|t-s|^{\alpha\wedge(2H)}. \tag{3.5}$$
Indeed, because of the lower bound inequality that defines this Case 3, this is bounded below by
$$\frac{c}{2}|t-s|^{\alpha\wedge(2H)} + \frac{c}{2}\,\frac{c_1}{4c_4}|x-y|^{2\alpha} \ge c'\,\Delta\big((t,x);(s,y)\big),$$
which proves the lower bound in (3.2) in this Case 3, provided that (3.5) is proved.
Proof of (3.5). We write
$$E\big[(u_i(t,x)-u_i(s,y))^2\big] = q_0|t-s|^{2H} + \sum_{n=1}^{\infty} q_n\{W_1 + W_2\},$$
where
$$W_1 = E\bigg[\bigg(\int_0^s\big(\cos(nx)\,e^{-n^2(t-r)} - \cos(ny)\,e^{-n^2(s-r)}\big)\beta_n(dr) + \int_s^t \cos(nx)\,e^{-n^2(t-r)}\beta_n(dr)\bigg)^2\bigg],$$
$$W_2 = E\bigg[\bigg(\int_0^s\big(\sin(nx)\,e^{-n^2(t-r)} - \sin(ny)\,e^{-n^2(s-r)}\big)\beta'_n(dr) + \int_s^t \sin(nx)\,e^{-n^2(t-r)}\beta'_n(dr)\bigg)^2\bigg],$$
where $\{\beta_n\}_{n\in\mathbb N}$ and $\{\beta'_n\}_{n\in\mathbb N}$ are independent standard fractional Brownian motions.
Now, because the further calculations use fractional stochastic calculus, we need to consider two different cases, namely $H < \frac12$ and $H \ge \frac12$.
Case $H \ge \frac12$. If $H < \alpha/2$, because $E\big[(u_i(t,x)-u_i(s,y))^2\big] \ge q_0|t-s|^{2H}$, (3.5) follows directly. Therefore, we assume that $H > \alpha/2$. In this case, note that (3.5) is proved in [SV06] for the case $x = y$.
Straightforward computations using (2.3) give
$$
\begin{aligned}
E\big[(u_i(t,x)-u_i(s,y))^2\big] &= q_0|t-s|^{2H} + \sum_{n=1}^{\infty} q_n\Big[\big(e^{-2n^2t} + e^{-2n^2s} - 2\cos(n|x-y|)\,e^{-n^2(t+s)}\big)I_1 + e^{-2n^2t}I_2 + 2e^{-n^2t}\big(e^{-n^2t} - \cos(n|x-y|)\,e^{-n^2s}\big)I_3\Big]\\
&\ge q_0|t-s|^{2H} + \sum_{n=1}^{\infty} q_n\Big[\big(e^{-n^2t} - e^{-n^2s}\big)^2 I_1 + e^{-2n^2t}I_2 + 2e^{-n^2t}\big(e^{-n^2t} - e^{-n^2s}\big)I_3\Big],
\end{aligned}
$$
where
$$I_1 = \int_0^s dw\int_0^s dv\,e^{n^2(w+v)}|w-v|^{2H-2},\qquad I_2 = \int_s^t dw\int_s^t dv\,e^{n^2(w+v)}|w-v|^{2H-2},\qquad I_3 = \int_0^s dw\int_s^t dv\,e^{n^2(w+v)}|w-v|^{2H-2}.$$
Hence, using the results of [SV06, Section 2.1 and (17)] and (1.3), it follows that
$$E\big[(u_i(t,x)-u_i(s,y))^2\big] \ge q_0|t-s|^{2H} + c_H(t-s)^{2H}\sum_{n^2(t-s)\le 1} q_n \ge c_H(t-s)^{\alpha}.$$
This proves (3.5) when $H \ge \frac12$.
Case $H < \frac12$. It is elementary to see that, by (2.14), $W_1 \ge \tilde I_1 + \tilde I_4$, where $\tilde I_1$ and $\tilde I_4$ are defined, respectively, as $I_1$ and $I_4$ in the previous section (see (2.10) and (2.15)), but replacing $f$ and $g$ by
$$\tilde f(r) = \cos(nx)\,e^{-n^2(t-r)} - \cos(ny)\,e^{-n^2(s-r)},\qquad \tilde g(r) = \cos(nx)\,e^{-n^2(t-r)}.$$
Similarly, $W_2 \ge \bar I_1 + \bar I_4$, where $\bar I_1$ and $\bar I_4$ are defined, respectively, as $I_1$ and $I_4$ but replacing $f$ and $g$ by
$$\bar f(r) = \sin(nx)\,e^{-n^2(t-r)} - \sin(ny)\,e^{-n^2(s-r)},\qquad \bar g(r) = \sin(nx)\,e^{-n^2(t-r)}.$$
Therefore, the proof of (3.5) when $H < \frac12$ is similar to the control of $I_1$ from below by $|I_4|$ in the previous section; yet it is less delicate, because the hardest estimates we need to use are ones which were already obtained therein. Indeed, proceeding as in (2.16), we find
$$
\begin{aligned}
\tilde I_1 + \bar I_1 &\ge \frac{c_H}{n^{4H}}\Big\{\big(\cos(nx)\,e^{-n^2(t-s)} - \cos(ny)\big)^2 + \big(\sin(nx)\,e^{-n^2(t-s)} - \sin(ny)\big)^2\Big\}\\
&= \frac{c_H}{n^{4H}}\Big\{e^{-2n^2(t-s)} + 1 - 2\cos(n|x-y|)\,e^{-n^2(t-s)}\Big\}\\
&\ge \frac{c_H}{n^{4H}}\big(1 - e^{-n^2(t-s)}\big)^2. \qquad (3.6)
\end{aligned}
$$
Here we see that the case where $x = y$ is the worst case, in the sense that the lower bound (2.16) obtained for $I_1$ is a lower bound for all $\tilde I_1 + \bar I_1$, uniformly in $t, x, s, y$.
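The middle equality in (3.6) is simply the cosine addition formula; explicitly, writing $c = n^2(t-s)$:

```latex
\big(\cos(nx)\,e^{-c}-\cos(ny)\big)^2 + \big(\sin(nx)\,e^{-c}-\sin(ny)\big)^2
  = e^{-2c} + 1 - 2e^{-c}\big(\cos nx\cos ny + \sin nx\sin ny\big)
  = e^{-2c} + 1 - 2\cos\big(n(x-y)\big)e^{-c},
% and since \cos(n(x-y)) \le 1, this quantity is \ge (1-e^{-c})^2.
```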
Moreover, simple calculations yield very similar formulas for the four terms in $\tilde I_4 + \bar I_4$ as we had found for $I_4$ itself in (2.17); namely, we have
$$
\begin{aligned}
\tilde I_{4,1} + \bar I_{4,1} &= \int_0^s dr\,K(s,r)\,h(r)\big(K(t,r)-K(s,r)\big)g(r),\\
\tilde I_{4,2} + \bar I_{4,2} &= \int_0^s dr\,K(s,r)\,h(r)\int_s^t du\,\big(g(u)-g(r)\big)\frac{\partial K}{\partial u}(u,r),\\
\tilde I_{4,3} + \bar I_{4,3} &= \int_0^s dr\int_r^s du\,\big(h(u)-h(r)\big)\frac{\partial K}{\partial u}(u,r)\int_s^t dv\,\big(g(v)-g(r)\big)\frac{\partial K}{\partial v}(v,r),\\
\tilde I_{4,4} + \bar I_{4,4} &= \int_0^s dr\,\big(K(t,r)-K(s,r)\big)g(r)\int_r^s du\,\big(h(u)-h(r)\big)\frac{\partial K}{\partial u}(u,r),
\end{aligned}
$$
where
$$h(r) = e^{-n^2(t-r)} - \cos(n|x-y|)\,e^{-n^2(s-r)} = e^{-n^2(s-r)}\big(e^{-n^2(t-s)} - \cos(n|x-y|)\big) =: e^{-n^2(s-r)}\,h_{s,t,x,y}. \tag{3.7}$$
In other words, for each $j = 1,2,3,4$, the formula for $\tilde I_{4,j} + \bar I_{4,j}$ is identical to that of $I_{4,j}$, with $f$ replaced by $h$. Also recall that
$$f(r) = e^{-n^2(s-r)}\big(e^{-n^2(t-s)} - 1\big) = e^{-n^2(s-r)}\,h_{s,t,x,x}.$$
We see here that f is always negative, while it is much more difficult to control the sign of
h. Luckily, for any r, the sign of h (r) is the sign of the fixed coefficient hs,t,x,y defined in
(3.7). When hs,t,x,y is negative, we will be able to use calculations from the previous section
directly. When hs,t,x,y is non-negative, we will instead compare I˜1 + I¯1 with I˜4,1 + I¯4,1 and
I˜4,2 + I¯4,2 .
Case $h_{s,t,x,y} < 0$. Note that, in this case, $\tilde I_{4,1} + \bar I_{4,1} > 0$ and $\tilde I_{4,2} + \bar I_{4,2} > 0$, while the other two sums are negative. Therefore, identically to the proof of the lower bound in the previous section, we only need to show that for sufficiently large $K$, still using $S_K = \{n : n^2|t-s| \ge K\}$,
$$\sum_{n\in S_K} q_n\big(\tilde I_1 + \bar I_1\big) > 2\sum_{n\in S_K} q_n\big|\tilde I_{4,3} + \bar I_{4,3}\big|, \tag{3.8}$$
$$\sum_{n\in S_K} q_n\big(\tilde I_1 + \bar I_1\big) > 4\sum_{n\in S_K} q_n\big|\tilde I_{4,4} + \bar I_{4,4}\big|. \tag{3.9}$$
This is not difficult. Indeed, both $f$ and $h$ are decreasing, and for all $u \in [r,s]$,
$$|h(u)-h(r)| = \big(e^{-n^2(s-u)} - e^{-n^2(s-r)}\big)|h_{s,t,x,y}| \le \big(e^{-n^2(s-u)} - e^{-n^2(s-r)}\big)|h_{s,t,x,x}| = |f(u)-f(r)|.$$
Hence, exploiting the fact that all the terms in the products defining $I_{4,3}$ as well as $\tilde I_{4,3} + \bar I_{4,3}$ have constant signs, we can write
$$
\begin{aligned}
\big|\tilde I_{4,3} + \bar I_{4,3}\big| &= \int_0^s dr\int_r^s du\,|h(u)-h(r)|\,\Big|\frac{\partial K}{\partial u}(u,r)\Big|\,\Big|\int_s^t dv\,\big(g(v)-g(r)\big)\frac{\partial K}{\partial v}(v,r)\Big|\\
&\le \int_0^s dr\int_r^s du\,|f(u)-f(r)|\,\Big|\frac{\partial K}{\partial u}(u,r)\Big|\,\Big|\int_s^t dv\,\big(g(v)-g(r)\big)\frac{\partial K}{\partial v}(v,r)\Big| = |I_{4,3}|,
\end{aligned}
$$
and similarly we get $\big|\tilde I_{4,4} + \bar I_{4,4}\big| \le |I_{4,4}|$. Since the lower bound on $\tilde I_1 + \bar I_1$ in (3.6) is as large as the lower bound (2.16) on $I_1$, the proof of the lower bound in the previous section implies both (3.8) and (3.9), which finishes the proof of (3.5) when $h_{s,t,x,y} < 0$.
Case $h_{s,t,x,y} \ge 0$. Here we cannot rely on previous calculations. Indeed, in this case, $\tilde I_{4,3} + \bar I_{4,3} \ge 0$ and $\tilde I_{4,4} + \bar I_{4,4} \ge 0$, while $\tilde I_{4,1} + \bar I_{4,1}$ and $\tilde I_{4,2} + \bar I_{4,2}$ are negative, and we must therefore control their absolute values. As in the previous case, we only need to prove that for $K$ large enough,
$$\sum_{n\in S_K} q_n\big(\tilde I_1 + \bar I_1\big) > 2\sum_{n\in S_K} q_n\big|\tilde I_{4,1} + \bar I_{4,1}\big|, \tag{3.10}$$
$$\sum_{n\in S_K} q_n\big(\tilde I_1 + \bar I_1\big) > 4\sum_{n\in S_K} q_n\big|\tilde I_{4,2} + \bar I_{4,2}\big|. \tag{3.11}$$
Unlike the last section, where the full sum had to be invoked to obtain the required lower bounds, here it is possible to prove that the above two inequalities hold without the sums, i.e. for any fixed $n \in S_K$. These facts are established in Appendix A.4.
This proves (3.5) when $H < \frac12$. The proof of the lemma is thus complete.
Proposition 3.2. Assume hypothesis (1.3). Then for all $t_0, T, M > 0$, there exists a finite constant $c_H > 0$ such that for all $s,t \in [t_0,T]$, $x,y \in S^1$ and $z_1, z_2 \in [-M,M]^d$,
$$p_{t,x;s,y}(z_1, z_2) \le c_H\big(\Delta((t,x);(s,y))\big)^{-d/2}\exp\bigg(-\frac{\|z_1-z_2\|^2}{c_H\,\Delta((t,x);(s,y))}\bigg).$$
Proof. Let $p^i_{t,x;s,y}(\cdot,\cdot)$ denote the bivariate density of the random vector $(u_i(t,x), u_i(s,y))$. Note that $p^i_{t,x;s,y}(\cdot,\cdot)$ does not depend on $i$.
We follow [DN04] and [1-DKN07]. As in [DN04, (3.8)] and [1-DKN07, (4.10)], we have that
$$p^i_{t,x;s,y}(z_1, z_2) \le \frac{1}{2\pi\sigma_{s,y}\tau}\exp\bigg(-\frac{|z_1-z_2|^2}{4\tau^2}\bigg)\times\exp\bigg(\frac{|z_2|^2|1-m|^2}{4\tau^2}\bigg)\exp\bigg(-\frac{|z_2|^2}{2\sigma^2_{s,y}}\bigg), \tag{3.12}$$
where
$$\tau^2 := \sigma^2_{t,x}\big(1-\rho^2_{t,x;s,y}\big),\qquad m := \frac{\sigma_{t,x;s,y}}{\sigma^2_{s,y}},\qquad \rho_{t,x;s,y} = \frac{\sigma_{t,x;s,y}}{\sigma_{t,x}\,\sigma_{s,y}},$$
$$\sigma^2_{t,x} = E\big[(u_i(t,x))^2\big],\qquad \sigma_{t,x;s,y} = \mathrm{Cov}\big(u_i(t,x), u_i(s,y)\big).$$
We now show the analogues of (4.12) and Lemma 4.3 in [1-DKN07] in the case of the fractional heat equation. Fix $s,t \in [t_0,T]$, $x,y \in S^1$. We claim that the following estimates hold:
$$|\sigma_{t,x} - \sigma_{s,y}| \le c_H|t-s|^{2\alpha}, \tag{3.13}$$
$$c_H^{-1}\,\Delta\big((t,x);(s,y)\big) \le \sigma^2_{t,x}\sigma^2_{s,y} - \sigma^2_{t,x;s,y} \le c_H\,\Delta\big((t,x);(s,y)\big), \tag{3.14}$$
$$\big|\sigma^2_{t,x} - \sigma_{t,x;s,y}\big| \le c_H\big[\Delta\big((t,x);(s,y)\big)\big]^{1/2}. \tag{3.15}$$
Indeed, in the proof of Proposition 2.3 we have proved that
$$E\bigg[\bigg(\int_0^t e^{-n^2(t-r)}\,\beta_n^H(dr) - \int_0^s e^{-n^2(s-r)}\,\beta_n^H(dr)\bigg)^2\bigg] \le c_H|t-s|^{2\alpha}.$$
Therefore, using [1-DKN07, (4.31)], we have
$$|\sigma_{t,x} - \sigma_{s,y}| \le c_H\big|\sigma^2_{t,x} - \sigma^2_{s,y}\big| \le c_H|t-s|^{2\alpha},$$
where $c_H$ does not depend on $t \in [t_0,T]$. This proves (3.13).
We now prove (3.14). Let $\gamma^2_{t,x;s,y} := E\big[(u_i(t,x) - u_i(s,y))^2\big]$. Then, using [1-DKN07, (4.42)],
$$\sigma^2_{t,x}\sigma^2_{s,y} - \sigma^2_{t,x;s,y} = \frac14\Big[\gamma^2_{t,x;s,y} - \big(\sigma_{t,x}-\sigma_{s,y}\big)^2\Big]\Big[\big(\sigma_{t,x}+\sigma_{s,y}\big)^2 - \gamma^2_{t,x;s,y}\Big]. \tag{3.16}$$
By Lemma 3.1, $\gamma^2_{t,x;s,y} \le c\,\Delta((t,x);(s,y))$. Therefore, the second factor of (3.16) is bounded below by a positive constant when $(t,x)$ is near $(s,y)$. Furthermore, Lemma 3.1 and (3.13) yield
$$\gamma^2_{t,x;s,y} - \big(\sigma_{t,x}-\sigma_{s,y}\big)^2 \ge c_H\,\Delta\big((t,x);(s,y)\big).$$
This proves the lower bound of (3.14) provided $(t,x)$ is sufficiently near $(s,y)$.
In order to extend this inequality to all $(t,x)$ and $(s,y)$ in $[t_0,T]\times S^1$, note that by the continuity of the function $(t,x,s,y) \mapsto \sigma^2_{t,x}\sigma^2_{s,y} - \sigma^2_{t,x;s,y}$, it suffices to show that
$$\sigma^2_{t,x}\sigma^2_{s,y} - \sigma^2_{t,x;s,y} > 0 \qquad\text{if } (t,x) \neq (s,y).$$
If this last function were equal to zero, there would be $\lambda \in \mathbb R$ such that $u_i(t,x) = \lambda u_i(s,y)$ a.s., which contradicts the lower bound in (3.2) and the fact that $\Delta((t,x);(s,y))$ is zero only if $(t,x) = (s,y)$. This completes the proof of the lower bound of (3.14).
In order to prove the upper bound of (3.14), use Lemma 3.1 to see that the first factor
in (3.16) is bounded above by cH ∆((t , x) ; (s , y)). As the second factor in (3.16) is bounded
above by a constant cH , the desired upper bound follows.
It remains to prove (3.15). Use [1-DKN07, (4.47)] to find
$$\big|\sigma^2_{t,x} - \sigma_{t,x;s,y}\big| = \big|\gamma^2_{t,x;s,y} + \mathrm{Cov}\big(u_i(t,x)-u_i(s,y),\, u_i(s,y)\big)\big| \le \gamma^2_{t,x;s,y} + \gamma_{t,x;s,y}\,\sigma_{s,y} \le c_H\big[\Delta\big((t,x);(s,y)\big)\big]^{1/2},$$
where we have used Lemma 3.1 twice in the last inequality. This implies the desired bound.
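The decomposition invoked from [1-DKN07, (4.47)] is the following elementary identity, valid for any centered square-integrable pair:

```latex
% Write X = u_i(t,x), Y = u_i(s,y); then X(X-Y) = (X-Y)^2 + Y(X-Y), so
\sigma^2_{t,x} - \sigma_{t,x;s,y}
  = E\big[X(X-Y)\big]
  = E\big[(X-Y)^2\big] + E\big[(X-Y)\,Y\big]
  = \gamma^2_{t,x;s,y} + \mathrm{Cov}\big(u_i(t,x)-u_i(s,y),\,u_i(s,y)\big),
% and Cauchy--Schwarz bounds the covariance term by \gamma_{t,x;s,y}\,\sigma_{s,y}.
```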
Finally, introducing inequalities (3.14) and (3.15) into (3.12) and using the independence
of the components u1 , ..., ud , the proposition follows.
4 Proof of Theorem 1.1 and Corollary 1.3

In order to prove Theorem 1.1 we will follow the approach developed in [1-DKN07], extended to our situation. For this we shall state and prove the versions of Theorem 2.1(1), Lemma 2.2(1), Theorem 3.1(1) and Lemma 4.5 in [1-DKN07] needed in our situation.
The first result is an extension of [1-DKN07, Lemma 2.2(1)] (take α = 1/2, H = 1/2
and d = β).
Lemma 4.1. Let $I$ and $J$ be two intervals as in Theorem 1.1. Then for all $N > 0$, there exists a finite and positive constant $C = C(I,J,N)$ such that for all $a \in [0,N]$,
$$\int_I dt\int_I ds\int_J dx\int_J dy\,\frac{e^{-a^2/\Delta((t,x);(s,y))}}{\Delta^{d/2}\big((t,x);(s,y)\big)} \le C\,\mathbf K_{d-(\frac1\alpha + \frac{2}{\alpha\wedge(2H)})}(a), \tag{4.1}$$
where $\Delta((t,x);(s,y))$ is the metric defined in (3.1).
Proof. Write $\alpha_1 := 2\alpha$ and $\alpha_2 := \alpha\wedge(2H)$. Using the change of variables $\tilde u = t-s$ ($t$ fixed) and $\tilde v = x-y$ ($x$ fixed), we have that the integral in (4.1) is bounded above by
$$4|I||J|\int_0^{|I|} d\tilde u\int_0^{|J|} d\tilde v\,\big(\tilde u^{\alpha_1} + \tilde v^{\alpha_2}\big)^{-d/2}\exp\bigg(-\frac{a^2}{\tilde u^{\alpha_1} + \tilde v^{\alpha_2}}\bigg).$$
A change of variables [$\tilde u^{\alpha_1} = a^2u$, $\tilde v^{\alpha_2} = a^2v$] implies that this is equal to
$$C\,a^{\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - d}\int_0^{|I|^{\alpha_1}a^{-2}} du\int_0^{|J|^{\alpha_2}a^{-2}} dv\,\frac{u^{\frac1{\alpha_1}-1}v^{\frac1{\alpha_2}-1}}{(u+v)^{d/2}}\exp\bigg(-\frac{1}{u+v}\bigg). \tag{4.2}$$
Observe that the last integral is bounded above by
$$\int_0^{|I|^{\alpha_1}a^{-2}} du\int_0^{|J|^{\alpha_2}a^{-2}} dv\,(u+v)^{\frac1{\alpha_1}+\frac1{\alpha_2}-2-\frac d2}\exp\bigg(-\frac{1}{u+v}\bigg).$$
Pass to polar coordinates to deduce that the preceding is bounded above by $I_1 + I_2(a)$, where
$$I_1 := \int_0^{KN^{-2}} d\rho\,\rho^{\frac1{\alpha_1}+\frac1{\alpha_2}-1-\frac d2}\exp(-c/\rho),\qquad I_2(a) := \int_{KN^{-2}}^{Ka^{-2}} d\rho\,\rho^{\frac1{\alpha_1}+\frac1{\alpha_2}-1-\frac d2},$$
where $K = |I|^{\alpha_1} \vee |J|^{\alpha_2}$. Clearly, $I_1 \le C < \infty$, and if $d \neq \frac{2}{\alpha_1} + \frac{2}{\alpha_2}$, then
$$I_2(a) = K^{\frac1{\alpha_1}+\frac1{\alpha_2}-\frac d2}\,\frac{a^{d-\frac{2}{\alpha_1}-\frac{2}{\alpha_2}} - N^{d-\frac{2}{\alpha_1}-\frac{2}{\alpha_2}}}{\frac1{\alpha_1}+\frac1{\alpha_2}-\frac d2}.$$
If $d > \frac{2}{\alpha_1} + \frac{2}{\alpha_2}$, then $I_2(a) \le C$ for all $a \in [0,N]$. If $d < \frac{2}{\alpha_1} + \frac{2}{\alpha_2}$, then $I_2(a) \le C\,a^{d-(\frac{2}{\alpha_1}+\frac{2}{\alpha_2})}$. Finally, if $d = \frac{2}{\alpha_1} + \frac{2}{\alpha_2}$, then
$$I_2(a) = 2\Big(\ln\frac1a + \ln N\Big).$$
Hence, we deduce that for all $a \in [0,N]$, the expression in (4.2) is bounded above by $C\,\mathbf K_{d-(\frac{2}{\alpha_1}+\frac{2}{\alpha_2})}(a)$, provided that $N_0$ in (1.4) is sufficiently large. This proves the lemma.
The next result uses the proof of [1-DKN07, Theorem 2.1(1)] applied to our situation, and establishes the lower bound of Theorem 1.1.

Theorem 4.2. Assume hypothesis (1.3). Let $I \subset (0,T]$ and $J \subset [0,2\pi)$ be two fixed non-trivial compact intervals. Then for all $T > 0$ and $M > 0$, there exists a finite constant $c_H > 0$ such that for all compact sets $A \subseteq [-M,M]^d$,
$$c_H\,\mathrm{Cap}_{d-\beta}(A) \le P\{u(I\times J)\cap A \neq \emptyset\},$$
where $\beta := \frac1\alpha + \big(\frac2\alpha \vee \frac1H\big)$.
Proof. The proof of this result follows exactly the same lines as the proof of [1-DKN07, Theorem 2.1(1)]; therefore we will only sketch the steps that differ. It suffices to replace their $d - 6$ by our $d - \beta$ with $\beta := \frac1\alpha + (\frac2\alpha \vee \frac1H)$. Moreover, if $p_{t,x}(y)$ denotes the density of the solution $u(t,x)$ of (1.1), then we have that for all $y \in [-M,M]^d$ and $(t,x) \in I\times J$,
$$p_{t,x}(y) = \big(2\pi\sigma^2_{t,x}\big)^{-d/2}e^{-\|y\|^2/(2\sigma^2_{t,x})} \ge c_H, \tag{4.3}$$
which proves hypothesis A1 of [1-DKN07, Theorem 2.1(1)]. On the other hand, our Proposition 3.2 proves hypothesis A2 with $\Delta((t,x);(s,y))$ defined as in (3.1).
We then follow the proof of [1-DKN07, Theorem 2.1(1)]. Define, for all $z \in \mathbb R^d$ and $\epsilon > 0$, $\tilde B(z,\epsilon) := \{y \in \mathbb R^d : |y-z| < \epsilon\}$, where $|z| := \max_{1\le j\le d}|z_j|$, and
$$J_\epsilon(z) = \frac{1}{(2\epsilon)^d}\int_I dt\int_J dx\,\mathbf 1_{\tilde B(z,\epsilon)}\big(u(t,x)\big).$$
In the case $d < \beta$, instead of [1-DKN07, (2.31)] we will find, using Proposition 3.2, Lemma 4.1 and [1-DKN07, Lemma 2.3], that for all $z \in A \subseteq [-M,M]^d$ and $\epsilon > 0$,
$$
\begin{aligned}
E\big[(J_\epsilon(z))^2\big] &\le c_H\int_0^{|I|} du\int_0^{|J|} dv\,\big(u^{2\alpha} + v^{\alpha\wedge(2H)}\big)^{-d/2}\\
&\le c_H\int_0^{|I|} du\,\Psi_{|J|,\,d(\frac\alpha2\wedge H)}\big(u^{d\alpha}\big)\\
&\le c_H\int_0^{|I|} du\,\mathbf K_{1-(\frac2\alpha\vee\frac1H)/d}\big(u^{d\alpha}\big).
\end{aligned}
$$
We will then consider the different cases: $d < \frac2\alpha\vee\frac1H$, $\frac2\alpha\vee\frac1H < d < \frac1\alpha + (\frac2\alpha\vee\frac1H)$, and $d = \frac2\alpha\vee\frac1H$. This will prove the case $d < \beta$.
The case d ≥ β is proved exactly along the same lines as the proof of [1-DKN07, Theorem
2.1(1)], appealing to (4.3), Proposition 3.2 and Lemma 4.1.
The following result is an extension of [1-DKN07, Lemma 4.5].

Lemma 4.3. Assume hypothesis (1.3). For all $p \ge 1$, there exists $C_{p,H} > 0$ such that for all $\epsilon > 0$ and all $(t,x)$ fixed,
$$E\bigg[\sup_{[\Delta((t,x);(s,y))]^{1/2}\le\epsilon}\|u(t,x) - u(s,y)\|^p\bigg] \le C_{p,H}\,\epsilon^p, \tag{4.4}$$
where $\Delta((t,x);(s,y))$ is defined as in (3.1).
Proof. It suffices to prove (4.4) for each coordinate $u_i$, $i = 1,\dots,d$. We proceed as in [1-DKN07, Lemma 4.5]; that is, we will use [1-DKN07, Proposition A.1] with $S := S_\epsilon = \{(s,y) : [\Delta((t,x);(s,y))]^{1/2} < \epsilon\}$, $\rho((t,x),(s,y)) := [\Delta((t,x);(s,y))]^{1/2}$, $\mu(dt\,dx) := dt\,dx$, $\Psi(x) := e^{|x|} - 1$, $p(x) := x$, and $f := u_i$.
Moreover, by (3.2), the random variable $C$ defined in [1-DKN07, Proposition A.1] satisfies
$$E[C] \le E\bigg[\int_{S_\epsilon} dt\,dx\int_{S_\epsilon} ds\,dy\,\exp\bigg(\frac{|u_i(t,x) - u_i(s,y)|}{[\Delta((t,x);(s,y))]^{1/2}}\bigg)\bigg] \le c_H\,\epsilon^{\beta},$$
where $\beta = \frac2\alpha + \big(\frac4\alpha\vee\frac2H\big)$.
The rest of the proof follows exactly as in [1-DKN07, (4.51)] and is therefore omitted.
The next result uses the proof of [1-DKN07, Theorem 3.1(1)] applied to our situation, and establishes the upper bound of Theorem 1.1.

Theorem 4.4. Assume hypothesis (1.3). Let $I \subset (0,T]$ and $J \subset [0,2\pi)$ be two fixed non-trivial compact intervals. Then for all $T > 0$ and $M > 0$, there exists a finite constant $c_H > 0$ such that for all Borel sets $A \subseteq [-M,M]^d$,
$$P\{u(I\times J)\cap A \neq \emptyset\} \le c_H\,\mathcal H_{d-\beta}(A),$$
where $\beta := \frac1\alpha + \big(\frac2\alpha\vee\frac1H\big)$.
Proof. The proof of this result is similar to the proof of [1-DKN07, Theorem 3.1]. When $d < \beta$, there is nothing to prove, so we assume that $d \ge \beta$.
For all positive integers $n$, set $t^n_k := k2^{-(2n/\alpha)\vee(n/H)}$, $x^n_\ell := \ell 2^{-n/\alpha}$, and
$$I^n_k = [t^n_k, t^n_{k+1}],\qquad J^n_\ell = [x^n_\ell, x^n_{\ell+1}],\qquad R^n_{k,\ell} = I^n_k\times J^n_\ell.$$
Then for all $R^n_{k,\ell} \subset I\times J$, there exists a constant $c_H > 0$ such that the following hitting-small-balls estimate holds for all $z \in \mathbb R^d$ and $\epsilon > 0$:
$$P\{u(R^n_{k,\ell})\cap B(z,\epsilon) \neq \emptyset\} \le c_H\,\epsilon^d. \tag{4.5}$$
Indeed, the proof of (4.5) follows along the same lines as the proof of [1-DKN07, Proposition 4.4] for the linear stochastic heat equation driven by space-time white noise. Namely, consider the random variables
$$Y^n_{k,\ell} := \inf_{(t,x)\in R^n_{k,\ell}}\big|c^n_{k,\ell}(t,x)\,u(t^n_k, x^n_\ell) - z\big|,\qquad\text{and}\qquad Z^n_{k,\ell} := \sup_{(t,x)\in R^n_{k,\ell}}\big|u(t,x) - c^n_{k,\ell}(t,x)\,u(t^n_k, x^n_\ell)\big|,$$
where
$$c^n_{k,\ell}(t,x) := \frac{E\big[u_1(t,x)\,u_1(t^n_k, x^n_\ell)\big]}{\mathrm{Var}\,u_1(t^n_k, x^n_\ell)}.$$
Note that, because $\{u_i(t,x), u_i(t^n_k, x^n_\ell)\}$ is a 2-dimensional centered Gaussian vector, $Y^n_{k,\ell}$ and $Z^n_{k,\ell}$ are independent. Hence, the rest of the proof of (4.5) follows as in [1-DKN07, Proposition 4.4], using the fact that $\{u_i(t,x)\}_{i=1,\dots,d}$ are independent, centered Gaussian random variables, with variance bounded above and below by positive constants, and such that the upper bound in (3.2) and Lemma 4.3 hold.
Now fix $\epsilon \in (0,1)$ and $n \in \mathbb N$ such that $2^{-n-1} < \epsilon \le 2^{-n}$, and write
$$P\{u(I\times J)\cap B(z,\epsilon)\neq\emptyset\} \le \sum_{\substack{(k,\ell):\\ R^n_{k,\ell}\cap(I\times J)\neq\emptyset}} P\{u(R^n_{k,\ell})\cap B(z,\epsilon)\neq\emptyset\}.$$
The number of pairs $(k,\ell)$ involved in the sum is at most $2^{\beta n}$, where $\beta := \frac1\alpha + (\frac2\alpha\vee\frac1H)$. Because $2^{-n-1} < \epsilon$, (4.5) implies that
$$P\{u(I\times J)\cap B(z,\epsilon)\neq\emptyset\} \le c_H\,2^{-n(d-\beta)} \le c_H\,\epsilon^{d-\beta}, \tag{4.6}$$
where $c_H$ does not depend on $(n,\epsilon)$. Therefore, (4.6) is valid for all $\epsilon \in (0,1)$.
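The counting step behind (4.6), spelled out:

```latex
P\{u(I\times J)\cap B(z,\epsilon)\neq\emptyset\}
  \;\le\; 2^{\beta n}\cdot c_H\,\epsilon^d
  \;\le\; c_H\,2^{\beta n}\,2^{-nd}
  \;=\; c_H\,2^{-n(d-\beta)}
  \;\le\; c_H\,\epsilon^{d-\beta},
% using \epsilon \le 2^{-n} in the second inequality, and
% 2^{-n} < 2\epsilon together with d \ge \beta in the last one.
```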
Now we use a covering argument. Choose $\epsilon \in (0,1)$ and let $\{B_i\}_{i=1}^{\infty}$ be a sequence of open balls in $\mathbb R^d$ with respective radii $r_i \in (0,\epsilon]$ such that
$$A \subseteq \bigcup_{i=1}^{\infty} B_i\qquad\text{and}\qquad \sum_{i=1}^{\infty}(2r_i)^{d-\beta} \le \mathcal H_{d-\beta}(A) + \epsilon. \tag{4.7}$$
Because $P\{u(I\times J)\cap A\neq\emptyset\}$ is at most $\sum_{i=1}^{\infty} P\{u(I\times J)\cap B_i\neq\emptyset\}$, (4.6) and (4.7) together imply that
$$P\{u(I\times J)\cap A\neq\emptyset\} \le c_H\sum_{i=1}^{\infty} r_i^{d-\beta} \le c_H\big(\mathcal H_{d-\beta}(A) + \epsilon\big).$$
Let $\epsilon \to 0^+$ to conclude the proof of the theorem.
Proof of Theorem 1.1. Theorems 4.2 and 4.4 prove the lower and upper bounds of Theorem
1.1, respectively.
Proof of Corollary 1.3.
(a) This is an immediate consequence of Theorem 1.1.
(b) Let z ∈ Rd . If d < β, then Capd−β ({z}) = 1. Hence, the lower bound of Theorem 1.1
implies that {z} is not polar. On the other hand, if d > β, then Hd−β ({z}) = 0 and
the upper bound of Theorem 1.1 implies that {z} is polar.
(c) Theorem 1.1 implies that for $d \ge 1$, $\mathrm{codim}\big(u(\mathbb R_+\times S^1)\big) = (d-\beta)^+$, where $\mathrm{codim}(E)$ for a random set $E$ is defined in [1-DKN07, (5.12)]. Then, when $d > \beta$, [1-DKN07, (5.13)] implies the desired result.
The case $d = \beta$ follows using exactly the same argument that leads to the result in [1-DKN07, Corollary 5.3(a)] for $d = 6$, and is therefore omitted.
A Appendix

A.1 Riesz-kernel example
We consider the example of the Riesz kernel. There, we assume that $Q(x) = |x|^{-\gamma}$ for some $\gamma \in (0,1)$. We then first need to show that this is a bona fide homogeneous spatial covariance function on the circle (that this is such a function in Euclidean space is well known, but here we are restricted to the circle). In other words, we need to show that
$$Q(x) = \sum_{n=0}^{\infty} q_n\cos nx,$$
where $\{q_n\}_{n\in\mathbb N}$ is a sequence of nonnegative real numbers. Since $Q$ is integrable, we simply calculate the values $q_n$ by (inverse) Fourier transform: using the symmetry of $Q$, and some scaling, we obtain
$$q_n = \int_{-\pi}^{\pi} e^{inx}|x|^{-\gamma}\,dx = 2\int_0^{\pi}\cos(nx)\,x^{-\gamma}\,dx = 2n^{\gamma-1}\int_0^{n\pi}\cos(x)\,x^{-\gamma}\,dx = n^{\gamma-1}\sum_{k=0}^{n-1} r(k),$$
where $r(k) = 2\int_{k\pi}^{(k+1)\pi}\cos(x)\,x^{-\gamma}\,dx$. We can calculate this $r(k)$ a bit further: using an integration by parts, we get
$$r(k) = 2\gamma\int_{k\pi}^{(k+1)\pi} x^{-\gamma-1}\sin(x)\,dx = 2\gamma(-1)^k\int_{k\pi}^{(k+1)\pi} x^{-\gamma-1}|\sin(x)|\,dx.$$
Hence we do indeed have, as announced in the Riesz kernel example, that qn = nγ−1 c (n)
where c (n) is the partial sum of the alternating sequence with general term 2r (k). Also as
announced, we clearly see that r (0) > 0, and it is trivial to prove that |r (k + 1)| < |r (k)|, by
simply using the change of variable x′ = x − π, and the fact that sin (x′ + π) = − sin (x′ ).
The partial sums of such an alternating series are always positive since the first term is
positive. All the claims in the Riesz-kernel example are justified.
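For completeness, the integration by parts behind the formula for $r(k)$: the boundary terms vanish because the sine function is zero at integer multiples of $\pi$:

```latex
\int_{k\pi}^{(k+1)\pi}\cos(x)\,x^{-\gamma}\,dx
  = \Big[\sin(x)\,x^{-\gamma}\Big]_{k\pi}^{(k+1)\pi}
    + \gamma\int_{k\pi}^{(k+1)\pi}\sin(x)\,x^{-\gamma-1}\,dx
  = \gamma\int_{k\pi}^{(k+1)\pi}\sin(x)\,x^{-\gamma-1}\,dx,
% and on [k\pi,(k+1)\pi] one has \sin(x) = (-1)^k |\sin(x)|.
```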
A.2 Fractional Brownian example

In the fractional noise example, with $H < 1/2$ and where $q_n = n^{1-2H}$, the Fourier series representation $Q(x) = \sum_{n=0}^{\infty} n^{1-2H}\cos(nx)$ is only formal, because this series diverges even as an alternating series. Yet we can interpret $B^H$ as the spatial derivative of a space-time fractional Brownian sheet-type process. Indeed, consider the centered Gaussian field $Y(t,x)$ which is fractional Brownian in time with parameter $H$, and has spatial covariance equal to $R(x,y) = |x-y|^{2H}$. Using exactly the same calculations as in the Riesz-kernel case above, but this time with $\gamma = -2H$, we can still invoke the fact that $x^{-\gamma-1}$ is decreasing, since $2H-1 < 0$, and thus $R(x,y)$ can be written as $\sum_{n=0}^{\infty} c(n)\,n^{-2H-1}\cos(nx)$. It is then easy to see that $Y$ can be represented as
$$Y(t,x) = \sum_{n=0}^{\infty}\sqrt{c(n)}\,n^{-H-1/2}\cos(nx)\,B_{n,H}(t) + \sum_{n=0}^{\infty}\sqrt{c(n)}\,n^{-H-1/2}\sin(nx)\,\tilde B_{n,H}(t),$$
where $\{B_{n,H}\}_{n\in\mathbb N}$ and $\{\tilde B_{n,H}\}_{n\in\mathbb N}$ are independent sequences of IID standard fractional Brownian motions.
Brownian motions. If one then defines the noise in the heat equation formally (i.e. in the
sense of distributions) by
∂
Y (t, x) ,
BH (t, x) =
∂x
a factor n comes out in the Fourier representation, and one gets that BH can be written,
in the sense of distributions, as
BH (t, x) =
∞ p
X
c (n)n−H+1/2 cos (nx) Bn,H (t) +
n=0
∞ p
X
c (n)n−H+1/2 sin (nx) B̃n,H (t) ,
n=0
from which the formula qn = c (n) n1−2H follows, i.e. the formal expansion Q (x) =
P
∞
−2H+1 cos (nx) follows immediately. This justifies using the scale n1−2H to
n=0 c (n) n
represent the covariance’s Fourier coefficient in this fractional noise case. Note that this
justification also works when H > 1/2.
It is instructive to note that one can also formally write
$$Q(x-y) = E\bigg[\frac{\partial}{\partial x}Y(1,x)\,\frac{\partial}{\partial y}Y(1,y)\bigg] = \big(\partial^2/\partial x\,\partial y\big)|x-y|^{2H} = 2H(2H-1)|x-y|^{2H-2},$$
which is not integrable at the origin ($x = y$) when $H < 1/2$, which explains why one cannot use the pointwise Fourier and/or the Riesz-kernel representation in this case.
A.3 Estimates of the kernel $K^H$

We have the following estimates on the kernel $K^H$.

Lemma A.1. Let $t_0, T \ge 0$ be fixed. Then for any $H < \frac12$ and $s,t \in [t_0,T]$ with $s \le t$, there exist positive constants $c(t_0,T,H)$ and $C(t_0,T,H)$ such that
$$c(t_0,T,H)^{-1}(t-s)^{H-\frac12} \le K^H(t,s) \le c(t_0,T,H)(t-s)^{H-\frac12}s^{H-\frac12},$$
$$C(t_0,T,H)^{-1}\Big(H-\frac12\Big)(t-s)^{H-\frac32} \le \frac{\partial K^H}{\partial t}(t,s) \le C(t_0,T,H)\Big(H-\frac12\Big)(t-s)^{H-\frac32}.$$
Proof. These estimates follow immediately from (2.1), (2.2) and [DU97, Theorem 3.2].
The following is a technical result in two real variables that is used several times in this paper.

Lemma A.2. Let $t_0 > 0$ be fixed. Then for any $s \ge t_0$, there exists a positive constant $c(t_0,H)$ such that
$$\int_0^{2n^2 s}\Big(s - \frac{v}{2n^2}\Big)^{2H-1}v^{2H-1}e^{-v}\,dv \le c(t_0,H).$$
Proof. We write, following [TTV03, eq. (25)],
$$
\begin{aligned}
\int_0^{2n^2 s}\Big(s - \frac{v}{2n^2}\Big)^{2H-1}v^{2H-1}e^{-v}\,dv &\le \Big(\frac s2\Big)^{2H-1}\int_0^{\infty} v^{2H-1}e^{-v}\,dv + (n^2s)^{2H-1}\int_{n^2s}^{2n^2s}\Big(s - \frac{v}{2n^2}\Big)^{2H-1}e^{-v}\,dv\\
&\le c_H\,t_0^{2H-1} + (n^2s)^{2H-1}\int_0^{n^2s}\Big(\frac{v'}{2n^2}\Big)^{2H-1}e^{-(2n^2s - v')}\,dv'\\
&\le C(t_0,H) + c_H\,t_0^{2H-1}e^{-n^2s}(n^2s)^{2H}\\
&\le C(t_0,H) + C(t_0,H)\sup_{x\ge s}\big|e^{-x}x^{2H}\big|\\
&\le C(t_0,H).
\end{aligned}
$$
A.4 Further covariance calculations

Proof of (3.10). With the notation of the proof of Lemma 3.1, we will show that, for $K$ large enough and for all $n$ such that $n^2(t-s) \ge K$, when $h_{t,s,x,y} \ge 0$,
$$\tilde I_1 + \bar I_1 > 2\big|\tilde I_{4,1} + \bar I_{4,1}\big|. \tag{A.1}$$
This will prove (3.10).
Using Lemma A.1, and the trivial bound $h_{t,s,x,y} \le 2$ applied to (3.7), we have
$$
\begin{aligned}
\big|\tilde I_{4,1} + \bar I_{4,1}\big| &= \bigg|\int_0^s dr\,K(s,r)\,h(r)\bigg(\int_s^t \frac{\partial K}{\partial u}(u,r)\,du\bigg)g(r)\bigg|\\
&\le c_H\int_0^s dr\,(s-r)^{H-1/2}e^{-n^2(t+s-2r)}\big((s-r)^{H-1/2} - (t-r)^{H-1/2}\big)\\
&= c_H\,e^{-n^2(t-s)}\int_0^s dr\,r^{H-1/2}\big(r^{H-1/2} - (r+t-s)^{H-1/2}\big)e^{-2n^2 r}.
\end{aligned}
$$
We evaluate the integral above by splitting it up according to whether $r$ exceeds $n^{-2}$. We also assume that $n^2(t-s) \ge 1$, i.e. we restrict to $K \ge 1$. Hence
$$
\begin{aligned}
\int_0^{n^{-2}} dr\,r^{H-1/2}\big(r^{H-1/2} - (r+t-s)^{H-1/2}\big)e^{-2n^2 r} &\le \int_0^{n^{-2}} dr\,r^{H-1/2}\big(r^{H-1/2} - (2t-2s)^{H-1/2}\big)\\
&= \int_0^{n^{-2}} dr\,\big(r^{2H-1} - r^{H-1/2}(2t-2s)^{H-1/2}\big)\\
&\le c_H\,n^{-4H}.
\end{aligned}
$$
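The last bound is immediate from the power integral:

```latex
\int_0^{n^{-2}} r^{2H-1}\,dr = \frac{n^{-4H}}{2H},
% while the subtracted term r^{H-1/2}(2t-2s)^{H-1/2} is nonnegative,
% so the whole expression is at most c_H\, n^{-4H}.
```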
The other piece is
$$
\begin{aligned}
\int_{n^{-2}}^s dr\,r^{H-1/2}\big(r^{H-1/2} - (r+t-s)^{H-1/2}\big)e^{-2n^2 r} &\le c_H\int_{n^{-2}}^s dr\,r^{H-1/2}(t-s)\,r^{H-3/2}e^{-2n^2 r} = c_H(t-s)\int_{n^{-2}}^s dr\,r^{2H-2}e^{-2n^2 r}\\
&= c_H\,n^{-2}n^{4-4H}(t-s)\int_1^{n^2 s} dx\,x^{2H-2}e^{-2x}\\
&\le c_H\,n^{-4H}\big(n^2(t-s)\big)\int_1^{\infty} dx\,x^{2H-2}e^{-2x} = c_H\,n^{-4H}\,n^2(t-s).
\end{aligned}
$$
In conclusion, we get
\[
\tilde{I}_{4,1} + \bar{I}_{4,1} \le c_H^1 n^{-4H} \big( 1 + n^2(t-s) \big) e^{-n^2(t-s)}.
\]
Since the function x ↦ (1+x) e^{-x} decreases to 0 as x increases to ∞, we only need to choose K sufficiently large such that for all n with n^2(t-s) ≥ K,
\[
\tilde{I}_{4,1} + \bar{I}_{4,1} \le 2^{-1} c_H^1 n^{-4H} \big( 1 - e^{-n^2(t-s)} \big)^2 \le \tilde{I}_1 + \bar{I}_1,
\]
where c_H^1 is the constant in (3.6). This completes the proof of (A.1).
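The choice of K above uses only the elementary fact that x ↦ (1+x)e^{-x} is decreasing to 0 on (0, ∞); a minimal numerical illustration (not part of the proof):

```python
import math

# x -> (1 + x) * exp(-x) is decreasing on (0, inf) and tends to 0,
# so K can be chosen with (1 + K) e^{-K} as small as desired.
f = lambda x: (1.0 + x) * math.exp(-x)
xs = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
vals = [f(x) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
assert f(20.0) < 1e-7                              # decays to 0
```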
Proof of (3.11). We now show that for K large enough and for all n such that n^2(t-s) ≥ K, when h_{t,s,x,y} ≥ 0,
\[
\tilde{I}_1 + \bar{I}_1 > 2 \big( \tilde{I}_{4,2} + \bar{I}_{4,2} \big). \tag{A.2}
\]
This will prove (3.11).
Again using Lemma A.1, and the bound h_{t,s,x,y} ≤ 2 applied to (3.7), we have
\begin{align*}
\tilde{I}_{4,2} + \bar{I}_{4,2}
&= h_{t,s,x,y} \int_0^s dr\, K(s,r)\, e^{-n^2(s-r)} \int_s^t du\, \big( g(u) - g(r) \big) \frac{\partial K}{\partial u}(u,r) \\
&\le c_H \int_0^s dr\, (s-r)^{H-1/2} e^{-n^2(s+t-r)} \int_s^t du\, \big( e^{n^2 u} - e^{n^2 r} \big) (u-r)^{H-3/2}.
\end{align*}
We cut this integral into three pieces. We first calculate the piece for u > s + n^{-2}:
\begin{align*}
&\int_0^s dr\, (s-r)^{H-1/2} e^{-n^2(s+t-r)} \int_{s+n^{-2}}^{t} du\, \big( e^{n^2 u} - e^{n^2 r} \big) (u-r)^{H-3/2} \\
&\qquad \le \int_0^s dr\, (s-r)^{H-1/2} e^{-n^2(s+t-2r)} \int_{s+n^{-2}}^{t} du\, e^{n^2(u-r)} (u-r)^{H-3/2} \\
&\qquad = n^{-2H+1} \int_0^s dr\, (s-r)^{H-1/2} e^{-n^2(s+t-2r)} \int_{(s-r)n^2+1}^{(t-r)n^2} e^{x} x^{H-3/2}\, dx \\
&\qquad = n^{-4H} \int_0^{s n^2} dy\, y^{H-1/2} e^{-y} e^{-n^2(t-s)} \int_{y+1}^{y+n^2(t-s)} e^{x} x^{H-3/2}\, dx.
\end{align*}
Now, for any fixed constants y_0(H) and y_1(H) such that y_1 > y_0 + 1, the above term with the y-integral restricted to y ≤ y_0 can be bounded as follows:
\begin{align*}
&n^{-4H} \int_0^{y_0} dy\, y^{H-1/2} e^{-y} e^{-n^2(t-s)} \left( \int_{y+1}^{y_1} e^{x} x^{H-3/2}\, dx + \int_{y_1}^{y+n^2(t-s)} e^{x} x^{H-3/2}\, dx \right) \\
&\qquad \le n^{-4H} \int_0^{y_0} dy\, y^{H-1/2} e^{-n^2(t-s)} \Big( c(H, y_1) + y_1^{H-3/2} e^{y_0} \Big).
\end{align*}
We now choose y_1 and K large enough such that for all n with n^2(t-s) ≥ K and for any choice of y_0, the above expression is smaller than c_H n^{-4H} with c_H ≤ 2^{-1} c_H^1 (1 - e^{-n^2(t-s)})^2, where c_H^1 is the constant in (3.6).
For the other part of the integral in y we get
\begin{align*}
&n^{-4H} \int_{y_0}^{s n^2} dy\, y^{H-1/2} e^{-y} e^{-n^2(t-s)} \int_{y+1}^{y+n^2(t-s)} e^{x} x^{H-3/2}\, dx \\
&\qquad \le n^{-4H} \int_{y_0}^{s n^2} dy\, y^{2H-2} e^{-y} e^{-n^2(t-s)} \int_{y+1}^{y+n^2(t-s)} e^{x}\, dx \\
&\qquad \le n^{-4H} \int_{y_0}^{s n^2} dy\, y^{2H-2} \\
&\qquad \le c_H n^{-4H} y_0^{2H-1},
\end{align*}
and it is sufficient to take y_0 large enough to ensure that this last expression is smaller than c_H n^{-4H} with c_H ≤ 2^{-1} c_H^1 (1 - e^{-n^2(t-s)})^2.
Now we calculate the piece for u ∈ [s, s + n^{-2}] and r ∈ [s - n^{-2}, s]. This yields a piece bounded above by
\begin{align*}
&c_H \int_{s-n^{-2}}^{s} dr\, (s-r)^{H-1/2} e^{-n^2 t} \int_{s}^{s+n^{-2}} du\, \big( e^{n^2 s + 1} - e^{n^2 s - 1} \big) (u-r)^{H-3/2} \\
&\qquad \le c_H e^{-n^2(t-s)} \int_{s-n^{-2}}^{s} dr\, (s-r)^{H-1/2} \big[ (s-r)^{H-1/2} - (s-r+n^{-2})^{H-1/2} \big] \\
&\qquad = c_H e^{-n^2(t-s)} n^{-4H} \int_0^1 x^{H-1/2} \big[ x^{H-1/2} - (x+1)^{H-1/2} \big]\, dx,
\end{align*}
which can obviously be made smaller than 2^{-1} c_H^1 (1 - e^{-n^2(t-s)})^2, for all n such that n^2(t-s) ≥ K, provided that K is large enough.
The last piece to deal with is
\begin{align*}
&c_H \int_0^{s-n^{-2}} dr\, (s-r)^{H-1/2} e^{-n^2(s+t-r)} \int_{s}^{s+n^{-2}} du\, \big( e^{n^2 u} - e^{n^2 r} \big) (u-r)^{H-3/2} \\
&\qquad \le c_H \int_0^{s-n^{-2}} dr\, (s-r)^{H-1/2} e^{-n^2(s+t-r)} \int_{s}^{s+n^{-2}} du\, e^{n^2 u} (u-r)^{H-3/2} \\
&\qquad \le c_H e \int_0^{s-n^{-2}} dr\, (s-r)^{H-1/2} e^{-n^2(t-r)} \int_{s}^{s+n^{-2}} du\, (u-r)^{H-3/2} \\
&\qquad \le c_H e^{-n^2(t-s)} \int_0^{s-n^{-2}} dr\, (s-r)^{H-1/2} \big[ (s-r)^{H-1/2} - (s-r+n^{-2})^{H-1/2} \big] \\
&\qquad \le c_H e^{-n^2(t-s)} n^{-4H} \int_1^{\infty} x^{H-1/2} \big[ x^{H-1/2} - (x+1)^{H-1/2} \big]\, dx \\
&\qquad \le c_H e^{-n^2(t-s)} n^{-4H} \int_1^{\infty} x^{H-3/2}\, dx \\
&\qquad = c_H e^{-n^2(t-s)} n^{-4H},
\end{align*}
and the conclusion is the same as before. This finishes the proof of (A.2).
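The finiteness of the integral \(\int_1^{\infty} x^{H-1/2} ( x^{H-1/2} - (x+1)^{H-1/2} )\, dx\) used in the last piece can be sanity-checked numerically; the following is an illustration only (not part of the proof), with the arbitrary choice H = 0.25 and the tail truncated at x = 200, which is negligible since the integrand decays like x^{2H-2}:

```python
# Numerical check (H = 0.25 chosen arbitrarily with H < 1/2) that
# int_1^inf x^{H-1/2} (x^{H-1/2} - (x+1)^{H-1/2}) dx is finite and
# dominated by int_1^inf x^{H-3/2} dx = 1 / (1/2 - H).
H = 0.25

def g(x):
    # Integrand: positive for x >= 1 since t -> t^{H-1/2} is decreasing.
    return x ** (H - 0.5) * (x ** (H - 0.5) - (x + 1) ** (H - 0.5))

steps = 200000
a, b = 1.0, 200.0  # truncate the tail; the remainder is negligible
h = (b - a) / steps
approx = sum(g(a + (k + 0.5) * h) for k in range(steps)) * h
assert 0.0 < approx < 1.0 / (0.5 - H)  # below the closed-form dominating bound
```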
References

[DN04] Dalang, R.C. and Nualart, E. (2004), Potential theory for hyperbolic SPDEs, The Annals of Probability, 32, 2099-2148.

[1-DKN07] Dalang, R.C., Khoshnevisan, D. and Nualart, E. (2007), Hitting probabilities for systems of non-linear stochastic heat equations with additive noise, To appear in Latin American J. Probab. Math. Stat. See http://arxiv.org/abs/math/0702710.

[2-DKN07] Dalang, R.C., Khoshnevisan, D. and Nualart, E. (2007), Hitting probabilities for systems of non-linear stochastic heat equations with multiplicative noise, To appear in Probab. Theory and Related Fields. See http://arxiv.org/abs/0704.1312.

[DU97] Decreusefond, L. and Üstünel, A.-S. (1997), Stochastic analysis of the fractional Brownian motion, Potential Analysis, 10, 177-214.

[DPM02] Duncan, T.E., Pasik-Duncan, B. and Maslowski, B. (2002), Fractional Brownian motion and stochastic equations in Hilbert spaces, Stoch. Dyn., 2, 225-250.

[K85] Kahane, J.-P. (1985), Some random series of functions, Cambridge University Press.

[K02] Khoshnevisan, D. (2002), Multiparameter Processes, Springer-Verlag, New York.

[N06] Nualart, D. (2006), The Malliavin calculus and related topics, Second Edition, Springer-Verlag.

[SS00] Sanz-Solé, M. and Sarrà, M. (2000), Path properties of a class of Gaussian processes with applications to SPDE's, Canadian Mathematical Society Conference Proceedings, 28, 303-316.

[SS02] Sanz-Solé, M. and Sarrà, M. (2002), Hölder continuity for the stochastic heat equation with spatially correlated noise, Seminar on Stochastic Analysis, Random Fields and Applications III (Ascona, 1999), Progr. Probab., 52, 259-268.

[TTV03] Tindel, S., Tudor, C.A. and Viens, F. (2003), Stochastic evolution equations with fractional Brownian motions, Probab. Theory Related Fields, 127, 186-204.

[TTV04] Tindel, S., Tudor, C.A. and Viens, F. (2004), Sharp Gaussian regularity on the circle, and applications to the fractional stochastic heat equation, J. Funct. Anal., 217, 280-313.

[SV06] Sarol, Y. and Viens, F. (2006), Time regularity of the evolution solution to the fractional heat equation, Discrete and Continuous Dynamical Systems B, 6, 895-910.