MA250
Introduction to Partial
Differential Equations
Revision Guide
WMS
Contents
0 Introduction
1 First-Order PDEs
  1.1 Change of coordinates method
  1.2 Method of characteristics
  1.3 Initial data
3 Fourier Analysis
  3.1 Boundary Conditions and Separation of Variables
  3.2 Fourier Coefficients
  3.3 L² Convergence of Fourier Series
  3.4 Pointwise and Uniform Convergence of Fourier Series
Introduction
This revision guide for MA250 Introduction to Partial Differential Equations has been designed as an
aid to revision, not a substitute for it. PDEs is an applied course in which the emphasis is on problem-solving;
however, the course is rigorous as well. So the best way to revise is to use this revision guide
as a quick reference for the theory, and to keep trying example sheets and mock exam questions.
Various proofs have been omitted; please check your written notes or the relevant books.
Authors
Written by Matthew Hutton (matthew.hutton@warwick.ac.uk) and David McCormick
(d.s.mccormick@warwick.ac.uk) in 2007.
Based upon lectures given by Florian Theil at the University of Warwick in 2007.
Updated by Chris Midgley (c.i.midgley@warwick.ac.uk) based on lectures given by Björn Stinner at the
University of Warwick in 2012.
Updated by Matt Rigby (m.rigby@warwick.ac.uk) based on lectures given by Björn Stinner at the University of
Warwick in 2013.
Any corrections or improvements should be entered into our feedback form at http://tinyurl.com/WMSGuides
(alternatively email revision.guides@warwickmaths.org).
History
First Edition: May 23, 2007
nth Edition: February 18, 2016.
0 Introduction
Differential equations, i.e. equations relating functions and their derivatives, are the foundation on which
all of physics is built; however, their abstract study has led to many new advances in mathematics,
not least the proof of the Poincaré conjecture. In MA133 Differential Equations, we considered
ordinary differential equations, in which we only had one independent variable; these are in some sense
one-dimensional. But the world is not one-dimensional: many physical problems depend on more than
one independent variable, and so when we differentiate we get partial derivatives in the mix. We are
thus led to study partial differential equations.
To save us all some writing, we denote partial derivatives using subscripts; so for a function u(x, y, . . . ),
we write
u_x := ∂u/∂x,   u_y := ∂u/∂y,   u_xx := ∂²u/∂x².
Definitions. A partial differential equation (abbreviated PDE ) is an identity that relates the indepen-
dent variables, the dependent variable u and its partial derivatives, i.e. an equation of the form
F (x, y, . . . , u, ux , uy , . . . ) = 0. (1)
If F depends on x, y, . . . , u, ux , uy , . . . but not on the higher-order partial derivatives uxx , uxy , uyy , . . . ,
etc., then (1) is called a first-order PDE. Similarly if F depends on x, y, . . . u, ux , uy , . . . , uxx , uxy , . . . ,
but not on higher-order derivatives, then (1) is called a second-order PDE. A PDE is called linear if F
depends linearly on u, ux , uy , . . . .
As with ODEs, linear PDEs are much easier to solve; e.g. ux + uy = 0 is linear, but ux + uuy = 0 is
nonlinear. We only consider first- and second-order linear PDEs in this course. In solving such PDEs,
we will make use of many results from MA131 Analysis, MA244 Analysis III, MA134 Geometry
and Motion and MA231 Vector Analysis; make sure you are familiar with most of the major
results such as directional derivatives and the chain rule, the various generalisations of the Fundamental
Theorem of Calculus (Green’s theorem, the Divergence theorem, and Stokes’ theorem), and the various
change of variable formulae for integration. We will also call upon solution methods for ODEs from
MA133 Differential Equations.
Recall that ∂D is the boundary of D and D̄ := D ∪ ∂D is the closure of D. We use the notation
u ∈ C¹(D̄) to say that u : D → R is continuously differentiable (i.e. C¹) and that u and its derivatives
extend continuously to D̄. Unless otherwise stated, we will assume that all derivatives exist and are continuous; this means
that second derivatives commute, i.e. uxy = uyx . Furthermore, continuity of the partial derivatives allows
us to differentiate under the integral sign:
d/dt ∫_a^b f(x, t) dx = ∫_a^b ∂f/∂t (x, t) dx.
When finding a solution of an ODE of order m, we get m arbitrary constants, which can be determined
by m initial conditions. When finding a solution of a PDE, we get arbitrary functions: for example, if
u : R2 → R, the PDE uxx + u = 0 looks like an ODE, but with an extra variable t, so the solution is
u = f(t) cos x + g(t) sin x, where f(t) and g(t) are two arbitrary functions of t. We need auxiliary
conditions if we want to determine a unique solution; such conditions are usually called initial or boundary
conditions.
1 First-Order PDEs
We start with some very simple PDEs.
Example 1.1. In some sense, the simplest possible PDE is ut = 0, where u = u(x, t), which we can
integrate to get u(x, t) = f (x) as the general solution (f (x) being some arbitrary function of x). Since
the solutions don’t depend on t, they are constant on the lines x = constant in the x–t plane.
Example 1.2. A slightly more complicated first-order equation is the transport equation; for some
velocity c, the one-dimensional transport equation is ut + c(x, t)ux = s(u, x, t). This describes transport
phenomena such as a fluid moving in a pipe.
Example 1.3. Suppose u_t + u_x = 2e^{x+t}. Set x̃ = x + t, t̃ = −x + t, and set w(x̃, t̃) = u(x, t). Substituting
everything in as above, we get 2w_x̃ = 2e^{x̃}, i.e. w_x̃ = e^{x̃}. Integrating in x̃ gives w(x̃, t̃) = e^{x̃} + f(t̃) for
arbitrary f, and hence the solution u(x, t) = e^{x+t} + f(−x + t).
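As a quick sanity check on this solution, the following SymPy sketch (an illustrative addition to this guide, not part of the original) verifies that u(x, t) = e^{x+t} + f(−x + t) satisfies u_t + u_x = 2e^{x+t} for an arbitrary function f:

```python
import sympy as sp

# Symbolic check of Example 1.3: u(x, t) = exp(x + t) + f(-x + t)
# satisfies u_t + u_x = 2*exp(x + t) for any C^1 function f.
x, t = sp.symbols('x t')
f = sp.Function('f')                 # arbitrary function of one variable
u = sp.exp(x + t) + f(-x + t)
residual = sp.diff(u, t) + sp.diff(u, x) - 2 * sp.exp(x + t)
print(sp.simplify(residual))         # prints 0
```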
Suppose u solves the homogeneous transport equation u_t + b(x, t)u_x = 0, and suppose we have a family
of functions x(t) such that dx/dt = b(x(t), t). We call the functions x characteristics, and the curves
(x(t), t) characteristic curves.
Consider the function
g(t) = u(x(t), t). (4)
We have
g′(t) = u_t(x(t), t) + b(x(t), t)u_x(x(t), t) = 0. (5)
So u is constant on each characteristic curve, and so the value of u(x, t) for a given (x, t) is determined
purely by which characteristic curve this point lies on.
Example 1.4. Suppose u_t + 2xtu_x = 0. The characteristics are given by dx/dt = 2xt, so x(t) = Ce^{t²}. So
all points (x, t) where xe^{−t²} is the same are on the same characteristic curve. Hence u(x, t) = f(xe^{−t²})
for some f : R → R.
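The same kind of symbolic check works here; the sketch below (again an illustrative addition, assuming SymPy is available) confirms that any u(x, t) = f(xe^{−t²}) solves u_t + 2xtu_x = 0:

```python
import sympy as sp

# Symbolic check of Example 1.4: u(x, t) = f(x*exp(-t**2)) solves u_t + 2*x*t*u_x = 0.
x, t = sp.symbols('x t')
f = sp.Function('f')                 # arbitrary C^1 function
u = f(x * sp.exp(-t**2))
residual = sp.diff(u, t) + 2 * x * t * sp.diff(u, x)
print(sp.simplify(residual))         # prints 0
```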
Now we look at first-order PDEs with a homogeneous source term, i.e. PDEs of the form u_t + b(x, t)u_x = c(x, t)u.
Example 1.6. Suppose we have determined u(x, t) = f(−x + 2t) for some function f, and we know
u(x, 0) = sinh(x). Setting t = 0 gives us f(−x) = sinh(x). Replacing x by −x and using the oddness of
sinh tells us f(x) = −sinh(x). Then we know u(x, t) = −sinh(−x + 2t) = sinh(x − 2t).
Another quick warning about the above methods: the representation is NOT unique. For example,
e^x f(x + t) and e^{−t} g(x + t) for arbitrary f, g represent the same family of functions (take g(s) = e^s f(s)).
where A and B are arbitrary constants. Since we know that f + g = φ, we can see that A + B = 0.
Substituting s = x + ct in the formula for f and s = x − ct in the formula for g and adding leads to
d’Alembert’s formula:
u(x, t) = f(x + ct) + g(x − ct) = 1/2 [ φ(x + ct) + φ(x − ct) + (1/c) ∫_{x−ct}^{x+ct} ψ(s) ds ].
Example 2.1 (Standing Wave). For φ = 0 and ψ = cos x, the solution of the wave equation is
u(x, t) = (1/c) cos x sin(ct).
This is known as a standing wave.
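This example is easy to check symbolically; the sketch below (an illustrative addition, using the standard form u_tt = c²u_xx of the wave equation that d'Alembert's formula solves) recovers the standing wave from d'Alembert's formula and verifies it solves the wave equation:

```python
import sympy as sp

# d'Alembert's formula with phi = 0, psi = cos(s) should give u = cos(x)*sin(c*t)/c,
# and that function should solve u_tt = c**2 * u_xx.
x, t, s = sp.symbols('x t s')
c = sp.symbols('c', positive=True)

psi = sp.cos(s)
u = sp.Rational(1, 2) * (0 + 0 + sp.integrate(psi, (s, x - c*t, x + c*t)) / c)  # phi = 0

print(sp.simplify(sp.expand_trig(u - sp.cos(x) * sp.sin(c*t) / c)))   # prints 0
print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))        # prints 0
```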
3 Fourier Analysis
Fourier series are an important way of finding solutions of PDEs. In order to motivate their study, we
first consider the more physically realistic case of bounded intervals, rather than infinite ones as studied
previously.
• Neumann boundary conditions: Let u ∈ C([0, l] × [0, T ]). Neumann boundary conditions take
the form
ux (0, t) = a, ux (l, t) = b,
for some a, b ∈ R. For the wave equation, Neumann boundary conditions model the assumption
that we are pulling with a constant force on the ends of a vibrating string. In general, if u ∈ C 1 (D)
for some open set D ⊂ Rk , then Neumann boundary conditions take the form
∂u/∂n(x) = 0 for all x ∈ ∂D,
where ∂u/∂n(x) := ∇u(x) · n(x) = Σ_{i=1}^{k} (∂u/∂x_i)(x) n_i(x), and n(x) ∈ R^k is the outward normal vector of
D at x ∈ ∂D.
Then a solution is
u(x, t) = Σ_{k∈N} (A_k cos(β_k ct) + B_k sin(β_k ct)) sin(β_k x),
where
β_k = kπ/l,   A_k = 2iΦ̂(k) for k ∈ N,   B_k = (1/β_k) 2iΨ̂(k) for k ∈ N.
Theorem 3.2. Set up the wave equation with homogeneous Neumann boundary conditions. Then if
u(x, 0) = Σ_{k∈N} A_k sin(kx) = Φ(x) ∈ C⁴(R),
u_t(x, 0) = Σ_{k∈N} B_k k sin(kx) = Ψ(x) ∈ C³(R),
a solution is
u(x, t) = Σ_{k∈N₀} (A_k cos(β_k ct) + B_k sin(β_k ct)) cos(β_k x),
where
β_k = kπ/l,   A_k = 2iΦ̂(k) for k ∈ N,   B_k = (1/β_k) 2iΨ̂(k) for k > 0 and B_0 = 0.
In any case, we have φ̂(−k) = \overline{\widehat{\overline{φ}}(k)}; in the real case, this means that φ̂(−k) = \overline{φ̂(k)}.
Note that φn (x+2π) = φn (x); this implies that φn (x) will not converge to φ(x) if φ is not 2π-periodic.
How do we find the so-called Fourier coefficients φ̂(k)? It can be shown that φ̂(k) are given by the
formula
φ̂(k) = (1/2π) ∫_{−π}^{π} e^{−ikx} φ(x) dx.
If φ is an even function then φ̂(k) = φ̂(−k) and its Fourier series is a cosine series,
S_n(φ)(x) = φ̂(0) + 2 Σ_{k=1}^{n} φ̂(k) cos(kx).
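These formulae are easy to try out numerically. The sketch below is an illustrative addition (the test function φ(x) = x², the grid and the cut-off n = 30 are arbitrary choices): it approximates φ̂(k) by Riemann sums and checks that the cosine partial sum reproduces the even function φ up to a truncation error:

```python
import numpy as np

# Numerical Fourier coefficients of an even function and its cosine partial sum.
N = 4096
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
phi = x**2                                   # an even test function

def phi_hat(k):
    # (1/2pi) * integral_{-pi}^{pi} e^{-ikx} phi(x) dx, approximated by a Riemann sum
    return (phi * np.exp(-1j * k * x)).sum() / N

n = 30
Sn = phi_hat(0).real + 2 * sum(phi_hat(k).real * np.cos(k * x) for k in range(1, n + 1))
print(np.max(np.abs(Sn - phi)))   # small truncation error (roughly 4/n), shrinking as n grows
```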
Definition 3.5. Let φ_n : [−π, π] → R be a sequence of functions. We say (φ_n) converges to φ : [−π, π] → R
1. pointwise if, for each x ∈ [−π, π], lim_{n→∞} φ_n(x) = φ(x);
2. uniformly if lim_{n→∞} sup_{x∈[−π,π]} |φ_n(x) − φ(x)| = 0;
3. in the mean-square sense (or in the L² sense) if lim_{n→∞} ∫_{−π}^{π} |φ_n(x) − φ(x)|² dx = 0.
Note that uniform convergence is the strongest form and implies both L2 and pointwise convergence; in
general no other implication such as L2 =⇒ pointwise holds. Generally pointwise convergence is the
weakest and many theorems about convergence only apply to uniform convergence.
Consider first L² convergence.¹ We first show that the formula φ̂(k) = (1/2π) ∫_{−π}^{π} e^{−ikx} φ(x) dx for the
Fourier coefficients is not arbitrary, but in fact minimises the L² distance between φ and its partial
Fourier series:
Theorem 3.6. Let φ ∈ C([−π, π], C). Among all possible choices of the 2n + 1 constants c_{−n}, . . . , c_n, the
choice that minimises
∫_{−π}^{π} | φ(x) − Σ_{|k|≤n} e^{ikx} c_k |² dx
is c_k = φ̂(k) = (1/2π) ∫_{−π}^{π} e^{−ikx} φ(x) dx. Furthermore,
∫_{−π}^{π} |φ(x) − S_n(φ)(x)|² dx = ∫_{−π}^{π} |φ(x)|² dx − 2π Σ_{|k|≤n} |φ̂(k)|².
¹ This is in some sense the most general form of convergence; in fact the Fourier series of φ converges to φ in the L²
sense provided only that ∫_{−π}^{π} |φ(x)|² dx is finite. The proof of this is a deep result involving the Lebesgue integral,
and it essentially stems from the space of all square-integrable functions (i.e. functions φ such that ∫_{−π}^{π} |φ(x)|² dx < ∞)
being complete; this is proved in MA359 Measure Theory.
When we have equality in Bessel’s inequality, the Fourier series converges in the L2 sense:
Proposition 3.8 (Parseval's equality). Let φ ∈ C([−π, π], C), and let its Fourier coefficients be given
by φ̂(k) = (1/2π) ∫_{−π}^{π} e^{−ikx} φ(x) dx. The Fourier series of φ converges in the L² sense, i.e.
∫_{−π}^{π} | φ(x) − Σ_{k∈Z} e^{ikx} φ̂(k) |² dx = lim_{n→∞} ∫_{−π}^{π} | φ(x) − Σ_{|k|≤n} e^{ikx} φ̂(k) |² dx = 0,
if and only if
2π Σ_{k∈Z} |φ̂(k)|² = ∫_{−π}^{π} |φ(x)|² dx.
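Parseval's equality is easy to check numerically; the sketch below is an illustrative addition (the smooth test function φ(x) = e^{cos x}, the grid and the cut-off |k| ≤ 50 are arbitrary choices) comparing 2π Σ|φ̂(k)|² with ∫|φ|²:

```python
import numpy as np

# Numerical check of Parseval's equality for a smooth 2*pi-periodic test function.
N = 2**12
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
phi = np.exp(np.cos(x))

# Fourier coefficients phi_hat(k) = (1/2pi) * integral e^{-ikx} phi(x) dx (Riemann sums)
ks = np.arange(-50, 51)
phi_hat = np.array([(phi * np.exp(-1j * k * x)).sum() / N for k in ks])

lhs = 2 * np.pi * np.sum(np.abs(phi_hat) ** 2)
rhs = (np.abs(phi) ** 2).sum() * (2 * np.pi / N)   # integral of |phi|^2 over [-pi, pi]
print(lhs, rhs)    # the two numbers agree to many decimal places
```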
Proof. Assume, without loss of generality, that φ is real-valued. (Otherwise prove the Riemann–Lebesgue
Lemma for the real and imaginary parts of φ separately.) Then
| ∫_{−π}^{π} sin(kx) φ(x) dx | = 2π |Im φ̂(k)| ≤ 2π |φ̂(k)|
Theorem 3.12 (Pointwise convergence of the Fourier series). Let φ ∈ C¹(R) be 2π-periodic. Then for
each x ∈ [−π, π],
lim_{n→∞} Σ_{|k|≤n} e^{ikx} φ̂(k) = φ(x).
For uniform convergence, we need to control the decay of the Fourier coefficients; higher regularity of φ
is sufficient for this.
Theorem 3.14 (Uniform convergence of the Fourier series). Let φ ∈ C 2 (R) be 2π-periodic. Then Sn (φ)
converges uniformly to φ as n → ∞.
In fact, we can weaken the assumptions on φ and still (almost) get pointwise convergence:
Theorem 3.16. Let φ : R → C be a 2π-periodic, piecewise-C¹ function, i.e. [−π, π] can be decomposed
into finitely many open intervals on each of which φ is C¹. Then for each x₀, the partial Fourier
series
S_n(φ)(x₀) = Σ_{|k|≤n} e^{ikx₀} φ̂(k)
converges, as n → ∞, to
(1/2) lim_{x→x₀⁻} φ(x) + (1/2) lim_{x→x₀⁺} φ(x).
That is, if φ is piecewise C 1 , then the Fourier series converges pointwise to the function, except
at the jump discontinuities where it converges to the average of the limits from either side. At these
jump discontinuities, the Fourier series “overshoots” by approximately 18%; this is known as the Gibbs
phenomenon.
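Both the convergence to the average at a jump and the Gibbs overshoot can be seen numerically; the sketch below is an illustrative addition (the 2π-periodic square wave sign(x), whose Fourier series is (4/π) Σ_{k odd} sin(kx)/k, is an arbitrary choice of test function):

```python
import numpy as np

# Partial Fourier sums of the square wave phi(x) = sign(x) on (-pi, pi):
# near the jump at x = 0 the partial sums overshoot the one-sided limit 1.
def S_n(x, n):
    total = np.zeros_like(x)
    for k in range(1, n + 1, 2):                 # only odd k contribute
        total += 4.0 / (np.pi * k) * np.sin(k * x)
    return total

x = np.linspace(1e-4, 0.5, 2000)                 # just to the right of the jump
for n in (11, 51, 201):
    overshoot = S_n(x, n).max() - 1.0
    print(n, round(overshoot, 3))                # about 0.18, i.e. roughly 18% above the limit 1
```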
Definition 4.1. The space-time cylinder is the set V_{L,T} := {(x, t) : x ∈ (0, L), t ∈ (0, T]}. The parabolic
boundary of V_{L,T} is the set Γ_{L,T} := {(x, t) ∈ [0, L] × [0, T] : t = 0, x = 0, or x = L}.
² Accurate mathematical models of the heat equation are pretty much always non-trivial.
Theorem 4.2 (The Maximum Principle). Let u ∈ C 2 (VL,T ) be a solution of the heat equation. Then
u assumes its maximum and minimum on ΓL,T .
Proof. We prove that u attains its maximum on Γ_{L,T}; the statement for the minimum then follows by
applying this to −u.
Let M be the maximum value of u(x, t) on Γ_{L,T}; we want to show that u(x, t) ≤ M for all (x, t) ∈
[0, L] × [0, T]. Fix ε > 0 and let v(x, t) = u(x, t) + εx². Clearly v(x, t) ≤ M + εL² for t = 0, x = 0 or
x = L. Furthermore,
v_t − kv_xx = u_t − ku_xx − 2εk = −2εk < 0.
If v(x, t) assumes its maximum at an interior point (x₀, t₀), then v_t = 0 and v_xx ≤ 0 at (x₀, t₀), hence
v_t − kv_xx ≥ 0, which is a contradiction. If v(x, t) assumes its maximum for some 0 < x₀ < L and t₀ = T,
then v_x(x₀, T) = 0 and v_xx(x₀, T) ≤ 0, but as v(x₀, T) is a maximum, v(x₀, T) ≥ v(x₀, T − h) and hence
v_t(x₀, T) = lim_{h→0⁺} (1/h)[v(x₀, T) − v(x₀, T − h)] ≥ 0,
and so v_t − kv_xx ≥ 0, which is again a contradiction. So v(x, t) can only assume a maximum when
t = 0, x = 0 or x = L. As v(x, t) must assume a maximum somewhere in [0, L] × [0, T], we have that
v(x, t) ≤ M + εL² for all (x, t) ∈ [0, L] × [0, T], and hence that u(x, t) ≤ M + ε(L² − x²). As ε was
arbitrary, we have that u(x, t) ≤ M for all (x, t) ∈ [0, L] × [0, T], as required.
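The Maximum Principle is also easy to observe numerically. The sketch below is an illustrative addition (the explicit finite-difference scheme, the initial data and all parameter values are arbitrary choices, not from the guide): it evolves an approximation of u_t = ku_xx with fixed boundary values and checks that the solution never exceeds its maximum over the parabolic boundary:

```python
import numpy as np

# Explicit finite differences for u_t = k*u_xx on [0, L] x [0, T].
k, L, T = 1.0, 1.0, 0.1
nx, nt = 50, 1000
dx, dt = L / nx, T / nt
assert k * dt / dx**2 <= 0.5             # stability condition for the explicit scheme

x = np.linspace(0, L, nx + 1)
u = np.sin(np.pi * x) + 0.5 * x          # initial data u(x, 0)
boundary_vals = [u[0], u[-1]]            # Dirichlet boundary data, held fixed
parabolic_boundary_max = u.max()         # the initial data dominate the boundary here

for _ in range(nt):
    u[1:-1] += k * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0], u[-1] = boundary_vals

print(u.max() <= parabolic_boundary_max + 1e-12)   # True: no interior value exceeds it
```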
An application of the Maximum Principle shows that solutions of the heat equation are unique.
Theorem 4.4. Let u ∈ C²([0, l] × [0, T]) be a solution of the heat equation (u_t = ku_xx). If u satisfies
the initial condition
u(x, 0) = ϕ(x)
and the boundary conditions
u(0, t) = g0 (t), u(l, t) = gl (t)
then u is unique.
Proof. Let u^{(1)}, u^{(2)} ∈ C²([0, l] × [0, T]) be two solutions of the heat equation which satisfy the initial
and boundary conditions with the same functions ϕ, g₀ and g_l. Then let v(x, t) := u^{(1)}(x, t) − u^{(2)}(x, t).
Since the heat equation is linear, v(x, t) is also a solution of the heat equation. Furthermore, v(x, 0) =
v(0, t) = v(l, t) = 0. The Maximum Principle then implies that v(x, t) ≤ 0, while the Minimum Principle
implies that v(x, t) ≥ 0, hence v(x, t) = 0 for all x and t.
The above proof also holds for the so-called inhomogeneous heat equation ut = kuxx + f (x, t), since
when we subtract two solutions of this equation the f (x, t) terms cancel. (f is usually known as the heat
source.)
The second fundamental principle for the heat equation is stability. For the wave equation, we found
that a certain integral, the energy, is a constant of the motion. For the heat equation, we can show that
the following energy estimate holds:
Theorem 4.5 (Stability). Let u^{(1)}, u^{(2)} ∈ C²([0, l] × [0, ∞)) be two solutions of the inhomogeneous heat
equation
u_t = ku_xx + f(x, t).
If u^{(1)} and u^{(2)} satisfy the initial conditions u^{(1)}(x, 0) = ϕ₁(x), u^{(2)}(x, 0) = ϕ₂(x) and the boundary conditions
u^{(1)}(0, t) = u^{(2)}(0, t) = g(t),   u^{(1)}(l, t) = u^{(2)}(l, t) = h(t),
then, with the energy for the heat equation defined by E_{HE,u}(t) := ∫_0^l (1/2) u(x, t)² dx,
∫_0^l (u^{(1)}(x, t) − u^{(2)}(x, t))² dx ≤ 2E_{HE,u^{(1)}−u^{(2)}}(0) = ∫_0^l (ϕ₁(x) − ϕ₂(x))² dx. (7)
The right-hand side of the inequality (7) measures the nearness of the initial data, and the left-hand
side the nearness of the solutions at any later time; hence the solutions are, in the “square-integral”
sense, stable; if we start nearby (at t = 0), then we stay nearby. (The maximum principle also proves
stability, but in the “uniform” sense.)
If u(x, t) solves the heat equation, then so do, for example:
3. any linear combination of solutions, e.g. c₁u + c₂v (if v also solves the heat equation);
4. an integral of solutions such as ∫_0^x u(s, t) ds;
5. the dilated function f_a(x, t) := u(√a x, at) for any a > 0,
provided that ϕ decays fast enough to ensure that the integral exists.
The Poisson equation on a domain Ω ⊂ Rⁿ is
−∆u(x) = f(x),
where f ∈ C(Ω). Dirichlet boundary conditions for the Poisson equation take the form u(x) = g(x) for
all x ∈ ∂Ω, where g ∈ C(∂Ω). The special case
∆u(x) = 0
is called the Laplace equation; the solutions of the Laplace equation are called harmonic functions.
The Laplace operator has the property that it doesn’t change if we rotate the coordinate system.
Because of this the form of the Laplacian is relatively simple in both polar and spherical coordinates:
Proposition 5.2 (Laplacian in polar coordinates). In polar coordinates in two dimensions, x = r cos θ
and y = r sin θ, the Laplacian takes the form
∆₂u = u_rr + (1/r)u_r + (1/r²)u_θθ = ∂²u/∂r² + (1/r)∂u/∂r + (1/r²)∂²u/∂θ².
Proposition 5.3 (Laplacian in spherical coordinates). In spherical coordinates in three dimensions,
x = r cos ϕ sin θ, y = r sin ϕ sin θ, z = r cos θ, the Laplacian takes the form
∆₃u = u_rr + (2/r)u_r + (1/r²)( u_θθ + (cot θ)u_θ + (1/sin²θ)u_ϕϕ )
    = ∂²u/∂r² + (2/r)∂u/∂r + (1/r²)( ∂²u/∂θ² + (cot θ)∂u/∂θ + (1/sin²θ)∂²u/∂ϕ² ).
We can use polar and spherical coordinates in a separation Ansatz: the technique of separation of
variables yields very simple results in the cases of circular and spherical symmetry.
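Proposition 5.2 can be checked symbolically on a concrete function; in the sketch below (an illustrative addition) the test function u(x, y) = x³y + eˣ sin y is an arbitrary choice:

```python
import sympy as sp

# Check the polar form of the Laplacian on a concrete test function.
x, y = sp.symbols('x y')
r = sp.symbols('r', positive=True)
theta = sp.symbols('theta', real=True)

u = x**3 * y + sp.exp(x) * sp.sin(y)
lap_cartesian = sp.diff(u, x, 2) + sp.diff(u, y, 2)

# The same function written in polar coordinates
w = u.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
lap_polar = sp.diff(w, r, 2) + sp.diff(w, r) / r + sp.diff(w, theta, 2) / r**2

# Both expressions should agree once written in the same variables
difference = lap_polar - lap_cartesian.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
print(sp.simplify(difference))   # prints 0
```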
Proof. We can prove this much like we proved the Maximum Principle for the heat equation. Fix ε > 0
and let v(x) = u(x) + ε‖x‖²; then ∆v(x) = ∆(u + ε‖x‖²) = 2εn > 0. By the second derivative test,
∆v(x) ≤ 0 at an interior maximum point, hence v has no maximum in the interior of Ω. Since v is
continuous, it must attain a maximum on the compact set Ω̄, hence any maximum point x₀ must lie in ∂Ω. Then for all x ∈ Ω,
u(x) ≤ v(x) ≤ v(x₀) = u(x₀) + ε‖x₀‖² ≤ max_{x̃∈∂Ω} u(x̃) + εR²,
where R is such that Ω ⊂ {x ∈ Rⁿ : ‖x‖ ≤ R}. Since ε was arbitrary, u(x) ≤ max_{x̃∈∂Ω} u(x̃) for all
x ∈ Ω, i.e. the maximum value of u is achieved on the boundary ∂Ω.
Proposition 5.7 (Strong Maximum Principle). Let Ω ⊂ Rⁿ be open, connected and bounded, and let
u ∈ C²(Ω) be harmonic. Then u achieves its maximum in Ω if and only if u is constant.
This can be used to show uniqueness for the Dirichlet problem. Assume u^{(1)}, u^{(2)} ∈ C²(Ω) ∩ C⁰(Ω̄) are two solutions with
the same data, and set u := u^{(1)} − u^{(2)}. Then ∆u = 0 in Ω and u = 0 on ∂Ω. Using the divergence theorem we obtain
∫_Ω |∇u|² dx = ∫_Ω ∇u · ∇u dx = ∫_Ω ( ∇ · (u∇u) − u∆u ) dx = ∫_{∂Ω} u ∇u · n dS − ∫_Ω u∆u dx = 0,
so ∇u ≡ 0 in Ω; together with u = 0 on ∂Ω this gives u ≡ 0, i.e. u^{(1)} = u^{(2)}.
Define the functional
I(w) := ∫_Ω ( (1/2)|∇w|² − f w ) dx,
where w ∈ A := {v ∈ C²(Ω̄) : v = g on ∂Ω}.
Theorem 5.9 (Dirichlet's Principle). A function u ∈ A solves
−∆u = f in Ω,
u = g on ∂Ω,
if and only if it minimises I over A.
A consequence of this theorem (applied with f = 0) is that every harmonic function u satisfies E(u) ≤ E(w),
where E(w) := ∫_Ω (1/2)|∇w|² dx is the Dirichlet energy, among all functions w ∈ C²(Ω̄) with the same boundary values on ∂Ω.
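A discrete analogue of Dirichlet's Principle is easy to verify numerically: on a grid, minimising the discrete energy ½wᵀAw − fᵀw (A symmetric positive definite) is equivalent to solving Aw = f. The sketch below is an illustrative addition (grid size, source term and step size are arbitrary choices) comparing the two in one dimension with zero boundary values:

```python
import numpy as np

# 1D discrete analogue on (0, 1): minimising the discrete Dirichlet energy over interior
# values is equivalent to solving the discrete Poisson equation -w'' = f.
n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.sin(np.pi * x)                          # hypothetical source term

# Stiffness matrix for -d^2/dx^2 with homogeneous Dirichlet boundary conditions
A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2
u_solve = np.linalg.solve(A, f[1:-1])          # solve the discrete Poisson equation

# Minimise the discrete energy directly (crude gradient descent, but sufficient here)
w = np.zeros(n - 1)
for _ in range(20000):
    grad = A @ w - f[1:-1]                     # gradient of the energy (up to a factor h)
    w -= 0.3 * h**2 * grad                     # small step; h**2 keeps the iteration stable
print(np.max(np.abs(w - u_solve)))             # both approaches agree (prints a tiny number)
```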
Consider a second-order linear PDE whose second-order part is a_{11}u_xx + 2a_{12}u_xy + a_{22}u_yy.
1. If a_{12}² < a_{11}a_{22} the PDE is called elliptic and is reducible to u_xx + u_yy + l.o.t. = 0.
2. If a_{12}² > a_{11}a_{22} the PDE is called hyperbolic and can be reduced to u_xx − u_yy + l.o.t. = 0.
3. If a_{12}² = a_{11}a_{22} the PDE is called parabolic and can be reduced to u_xx + l.o.t. = 0.
Reading off only the second-order terms and interpreting the PDE as a quadratic form with symmetric matrix A,
we can equivalently say that a PDE in n variables is elliptic if all n eigenvalues of A have the same sign, hyperbolic
if n − 1 have the same sign and one has the opposite sign, and parabolic if n − 1 have the same sign and one is zero.
The prototypical elliptic equation is the Laplace equation; the prototypical hyperbolic equation is
the wave equation; and the prototypical parabolic equation is the heat equation.
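For revision purposes, the classification rule can be packaged as a small helper (a hypothetical function of my own, not from the course):

```python
# Classify a constant-coefficient second-order PDE
# a11*u_xx + 2*a12*u_xy + a22*u_yy + lower-order terms = 0
# by the sign of the discriminant a12**2 - a11*a22, mirroring the three cases above.
def classify(a11, a12, a22):
    d = a12**2 - a11 * a22
    if d < 0:
        return "elliptic"
    if d > 0:
        return "hyperbolic"
    return "parabolic"

print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0   -> elliptic
print(classify(1, 0, -1))   # wave equation    u_xx - u_yy = 0   -> hyperbolic
print(classify(1, 0, 0))    # heat equation    (only u_xx)       -> parabolic
```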
Closing Remarks
Walter Strauss's Partial Differential Equations: An Introduction has literally hundreds of questions for
those wanting practice, and will also help fill any gaps in your theoretical knowledge. Above all, practising
lots of questions is the only way to do well, so practise, practise, practise, and good luck in the exam!