
CUST 2023-2024

Maths for Physics 2

Session 8

Green’s functions
Summary

Here we introduce a mathematical object that is particularly relevant to any problem
that involves differential equations: Green's functions. We restrict ourselves to ordinary
differential equations here, and more precisely to second-order differential equations,
since these are commonly encountered in practice. We recall some basic aspects of
differential equations and give the general idea that underlies the notion of Green's
functions: namely, Green's functions aim at providing a particular solution to an
inhomogeneous differential equation.

Contents

1 Reminders of ordinary differential equations

2 General idea of Green's functions

3 Continuity and jump condition

Chapter 1

Reminders of ordinary differential equations

REMARK ON NOTATION: throughout these notes we will use the abbreviations
ODE for Ordinary Differential Equation and GF for Green's Function.

Here for concreteness we’ll focus on linear second-order ODEs. While this may
seem to be a drastic restriction, it’s actually not that bad since it turns out that a
large number of ODEs that we typically encounter in practice are actually of second
order: a prominent example is of course Newton’s second law.
Therefore, let ψ(x) be a function of the real variable x. Let ψ be a solution of the
linear second-order ODE

ψ ′′ (x) + q(x)ψ ′ (x) + r(x)ψ(x) = f (x) , (1.1)

where ψ ′ and ψ ′′ denote the first and second derivatives, respectively, of ψ, while
q(x), r(x) and f (x) are some prescribed functions of x. The function ψ is the un-
known (also called the dependent variable, the variable x being then the independent
variable) in the ODE (1.1). The ODE (1.1) is linear because ψ, ψ ′ and ψ ′′ appear
only with power 1, with no other powers such as ψ 2 (which would make the ODE
nonlinear), and it is second-order because the highest derivative is the second one.
When f = 0 in (1.1), we have a so-called homogeneous ODE

ψ ′′ (x) + q(x)ψ ′ (x) + r(x)ψ(x) = 0 . (1.2)

The solutions of the latter satisfy the superposition principle: any linear combination
of solutions of (1.2) is itself a solution of (1.2). This principle does not apply to the
solutions of the ODE (1.1), which is then called an inhomogeneous ODE. The most
general solution of (1.1) can be written as the sum of the general solution ψh of the


homogeneous equation (1.2) and of any particular solution ψp of the inhomogeneous
equation (1.1), that is

ψ(x) = ψh (x) + ψp (x) . (1.3)

It proves convenient to define the differential operator L, given by

L ≡ d²/dx² + q(x) d/dx + r(x) , (1.4)

so that the homogeneous equation (1.2) can be written as

Lψ(x) = 0 , (1.5)

while the inhomogeneous equation (1.1) reads

Lψ(x) = f (x) . (1.6)
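As a quick numerical illustration of the decomposition (1.3) and the operator form (1.6), consider the assumed example ψ′′ + ψ = x (so q = 0, r = 1, f(x) = x; this concrete ODE and the constants c1, c2 are choices made for this sketch, not taken from the notes). Its general solution is ψh(x) = c1 cos x + c2 sin x plus the particular solution ψp(x) = x, and applying L with a finite difference returns f(x) = x for any c1, c2:

```python
import math

# Illustrative ODE (an assumption for this sketch): psi'' + psi = x,
# i.e. q(x) = 0, r(x) = 1, f(x) = x in the notation of (1.1).
# General solution: psi(x) = c1*cos(x) + c2*sin(x) + x, cf. (1.3).

def psi(x, c1=2.0, c2=-0.5):
    return c1 * math.cos(x) + c2 * math.sin(x) + x

def apply_L(g, x, h=1e-4):
    """Apply L = d^2/dx^2 + 1 using a central finite difference."""
    second = (g(x + h) - 2 * g(x) + g(x - h)) / h**2
    return second + g(x)

# L psi must equal f(x) = x, whatever the constants c1, c2 are.
for x in (0.3, 1.0, 2.5):
    assert abs(apply_L(psi, x) - x) < 1e-5
```

Changing c1 and c2 only changes the homogeneous part, which L annihilates; the residual f(x) = x always comes from ψp alone.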

Now, because ψh is the general solution of a homogeneous ODE that is of second
order, it contains two undetermined constants: fixing these constants requires
complementing the ODE with additional constraints, which fix the values of the solution
ψ for some specific values of the independent variable x. The combination of the
ODE and these additional constraints defines a so-called problem. We distinguish
two classes of problems:

i) The initial-value problems: in this case we specify the values of both ψ and ψ ′
at a single point x = x0 , and then look for the solution of the ODE for any
x ⩾ x0 . For this reason, x0 is called the initial value¹ of the variable, and the
prescribed values ψ(x0 ) and ψ ′ (x0 ) specify the so-called initial conditions.

ii) The boundary-value problems: in this case we fix the values of the function
and/or its derivative at two boundary points x1 and x2 > x1 and we look
for the solution of the ODE on the interval x ∈ [x1 , x2 ] (with x1 , x2 being
possibly infinite). In this case, the values that we fix for the function and/or
its derivative are called the boundary conditions. Since we have two boundary
points, we actually have several possibilities for these boundary conditions. One
common choice is to prescribe the values of the function ψ at both points, i.e. to
fix the values of ψ(x1 ) and ψ(x2 ): such conditions are called Dirichlet boundary
conditions. Another common choice is to prescribe the values of the derivative
dψ/dx at both points, i.e. to fix the values of dψ/dx|x=x1 and dψ/dx|x=x2 : such
conditions are called Neumann boundary conditions.
¹ Because quite often the independent variable corresponds to time for such problems.
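To make class i) concrete, here is a small sketch of an initial-value problem (the ODE ψ′′ + ψ = 0 with ψ(0) = 0, ψ′(0) = 1 is an assumed example, not one treated in the notes; its exact solution is sin x). We rewrite the ODE as the first-order system y = (ψ, ψ′) and march forward from x0 = 0 with a classical RK4 integrator:

```python
import math

# Initial-value problem (assumed example): psi'' + psi = 0 with
# psi(0) = 0 and psi'(0) = 1, whose exact solution is sin(x).

def deriv(y):
    psi, dpsi = y
    return (dpsi, -psi)          # psi'' = -psi, i.e. q = 0, r = 1, f = 0

def rk4_step(y, h):
    """One classical Runge-Kutta 4 step for the system y = (psi, psi')."""
    k1 = deriv(y)
    k2 = deriv([y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = deriv([y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = deriv([y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

y, h = [0.0, 1.0], 0.01          # initial conditions psi(0)=0, psi'(0)=1
for _ in range(100):             # integrate from x0 = 0 up to x = 1
    y = rk4_step(y, h)
assert abs(y[0] - math.sin(1.0)) < 1e-6
```

Note how both pieces of initial data are imposed at the single point x0 = 0, in contrast with a boundary-value problem where data is split between the two endpoints.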

A problem is then called homogeneous when both the ODE and the initial/boundary
conditions are homogeneous. That is, the problem

Lψ(x) = 0 with ψ(x0 ) = ψ ′ (x0 ) = 0 (1.7)

is a homogeneous initial-value problem, and the problem

Lψ(x) = 0 with ψ(x1 ) = ψ(x2 ) = 0 (1.8)

is a homogeneous boundary-value problem. On the other hand, for instance, the problem

Lψ(x) = 0 with ψ(x1 ) = 0 and ψ(x2 ) = a ̸= 0 (1.9)

is an inhomogeneous boundary-value problem, just as

Lψ(x) = f (x) with ψ(x1 ) = ψ(x2 ) = 0 . (1.10)

Because we deal with second-order ODEs, the general solution ψh of the homoge-
neous equation (1.5) can always be expressed as a linear combination of two linearly
independent solutions ϕ1 and ϕ2 of (1.5), i.e.

Lϕ1 (x) = Lϕ2 (x) = 0 ,

that is

ψh (x) = c1 ϕ1 (x) + c2 ϕ2 (x) (1.11)

with c1 , c2 two constants (that are then fixed by using the initial or boundary condi-
tions). A possible test of the linear independence of two solutions ϕ1 and ϕ2 of the
homogeneous equation (1.5) is to compute their so-called Wronskian W (x), defined
by

W (x) ≡ ϕ1 (x)ϕ′2 (x) − ϕ′1 (x)ϕ2 (x) . (1.12)

If the Wronskian does not vanish identically, then the two functions ϕ1 and ϕ2 are
linearly independent.
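The Wronskian test (1.12) is easy to check numerically. For the assumed example ψ′′ + ψ = 0, the solutions cos x and sin x give W = cos²x + sin²x = 1 ≠ 0 (linearly independent), while a pair like sin x and 3 sin x gives W ≡ 0:

```python
import math

def wronskian(phi1, dphi1, phi2, dphi2, x):
    """W(x) = phi1*phi2' - phi1'*phi2, cf. eq. (1.12)."""
    return phi1(x) * dphi2(x) - dphi1(x) * phi2(x)

# cos(x) and sin(x) solve psi'' + psi = 0; their Wronskian is
# cos^2 + sin^2 = 1, which never vanishes: they are independent.
w = wronskian(math.cos, lambda x: -math.sin(x),
              math.sin, math.cos, 0.7)
assert abs(w - 1.0) < 1e-12

# sin(x) and 3*sin(x) are proportional: the Wronskian vanishes.
w_dep = wronskian(math.sin, math.cos,
                  lambda x: 3 * math.sin(x), lambda x: 3 * math.cos(x), 0.7)
assert abs(w_dep) < 1e-12
```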
After these reminders of second-order ODEs, let’s now introduce the notion of
Green’s functions.
Chapter 2

General idea of Green’s functions

Let's consider again the inhomogeneous equation (1.6), and now let's imagine that
we want to solve it for two different inhomogeneous functions f (x), say f1 (x) and
f2 (x), while keeping the same functions q(x) and r(x) in both cases. The corresponding
homogeneous equation (1.5) thus remains the same for both f1 and f2 , so that the
only difference arises from the particular solution ψp . Finding this particular solution
a priori requires solving the problem independently, first for f1 and then for f2 ,
which is tedious and may take some time. This is of course even more tedious if we
want to solve the inhomogeneous equation (1.6) for a larger number of different functions f .
Therefore, a question that naturally arises at this point is the following: can
we come up with a general method that allows us to write a general expression of
the particular solution ψp to the inhomogeneous ODE (1.6) for any inhomogeneous
function f ? Green's functions precisely allow us to give a positive answer to this
question.
Indeed, suppose that we want to solve the inhomogeneous ODE (1.6) on some
interval x1 ⩽ x ⩽ x2 (where x1 and x2 can be infinite). Then the so-called Green’s
function (GF), which we'll denote by G, allows us to write a particular solution ψp
of (1.6) in the form of the following integral:
ψp (x) = ∫_{x1}^{x2} dx′ G(x|x′ ) f (x′ ) , (2.1)

where f (x′ ) is precisely the inhomogeneous (or “source”) term of the ODE (1.6).
We can thus readily see from (2.1) that the GF G is a function of two variables,
x and x′ . As is clear from (2.1), G(x|x′ ) allows us to express the particular
solution ψp at x from the value of f at all points x′ ∈ [x1 , x2 ].

REMARK: the advantage of the expression (2.1) is that it yields a particular
solution of the inhomogeneous ODE (1.6), for any function f , in the form of an

integral. In other words, instead of solving an ODE, we "only" have to compute an
integral, which is often much simpler, even from a numerical point of view.
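The remark above can be made concrete with a minimal sketch of the integral (2.1). We assume the simplest operator L = d²/dx² on [0, 1] with Dirichlet conditions ψ(0) = ψ(1) = 0 (an example chosen for this sketch), for which the Green's function is known to be G(x|x′) = min(x, x′)·(max(x, x′) − 1); the integral then reproduces, for f = 1, the exact particular solution ψ(x) = x(x − 1)/2:

```python
# Assumed example: L = d^2/dx^2 on [0, 1] with psi(0) = psi(1) = 0.
# For this problem the GF is G(x|x') = min(x, x') * (max(x, x') - 1).

def G(x, xp):
    return min(x, xp) * (max(x, xp) - 1)

def psi_p(x, f, n=2000):
    """Evaluate psi_p(x) = int_0^1 dx' G(x|x') f(x'), eq. (2.1),
    by the midpoint rule."""
    h = 1.0 / n
    return sum(G(x, (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

# For f = 1 the exact particular solution is psi(x) = x*(x - 1)/2.
for x in (0.25, 0.5, 0.8):
    assert abs(psi_p(x, lambda t: 1.0) - x * (x - 1) / 2) < 1e-5
```

Changing f requires no new ODE solve at all: only the quadrature above is redone, which is exactly the advantage that the remark describes.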

While the interest of (2.1) is manifest, the question is now of course: how do we
actually get the GF G? To see this, let’s act on (2.1) with the differential operator
L, given by (1.4), that is associated with our ODE: since the integration limits x1
and x2 in (2.1) are fixed, we can simply put the differential operator L inside the
integral, and we get, since L only acts on the variable x,
Lψp (x) = ∫_{x1}^{x2} dx′ [LG(x|x′ )] f (x′ ) . (2.2)

Now, since we want ψp , as given by (2.1), to be a particular solution of the
inhomogeneous ODE (1.6), we must have Lψp = f , that is, from (2.2),
∫_{x1}^{x2} dx′ [LG(x|x′ )] f (x′ ) = f (x) . (2.3)

In other words, the integral of some kernel times the function f (x′ ) returns merely
the value of this function f at the point x: this is nothing but the defining property
of the Dirac δ-function! Indeed, remember that the Dirac δ-function δ(x − x′ ) satisfies
∫_{x1}^{x2} dx′ δ(x − x′ ) f (x′ ) = f (x) , (2.4)

if x ∈ (x1 , x2 ) (which is by assumption indeed the case here). Comparing (2.3)
with (2.4) readily shows that the GF G(x|x′ ), as a function of x, must satisfy the
ODE
LG(x|x′ ) = [d²/dx² + q(x) d/dx + r(x)] G(x|x′ ) = δ(x − x′ ) . (2.5)
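The defining property (2.4) can also be checked numerically by replacing the δ-function with a "nascent" δ: a narrow normalised Gaussian (the width σ below is an arbitrary small parameter chosen for this sketch). Integrating it against a test function f returns f(x) to high accuracy:

```python
import math

# A narrow normalised Gaussian as an approximation to delta(u);
# sigma is an arbitrary small width, an assumption of this sketch.
def delta_approx(u, sigma=1e-2):
    return math.exp(-u**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def integrate(g, a, b, n=20000):
    """Midpoint-rule quadrature of g on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# Property (2.4): the integral picks out f at the point x in (x1, x2).
f, x = math.cos, 0.5
val = integrate(lambda xp: delta_approx(x - xp) * f(xp), 0.0, 1.0)
assert abs(val - f(x)) < 1e-3
```

As σ → 0 the approximation sharpens and the integral converges to f(x) exactly, which is the statement of (2.4).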

The fact that the ODE (2.5) is of the second order, combined with the presence
of a Dirac δ-function on the right-hand side, ensures two important properties of
the GF: G(x|x′ ) is continuous at x = x′ , while its partial derivative with respect to
x has a finite discontinuity at x = x′ , as we now discuss.
Chapter 3

Continuity and jump condition

The continuity of G at x = x′ and the discontinuity of ∂G/∂x at x = x′ stem from
a particular feature of the derivatives of functions (or, more precisely, generalized
functions) like the Heaviside function H(x) or the Dirac δ-function. Indeed, let's
recall that:

i) the derivative of the absolute value |x| is the sign function sgn(x) = 2H(x) − 1,
where H(x) is the Heaviside function: that is, taking the derivative of the
continuous function |x| yields a function with a finite discontinuity;

ii) the derivative of the Heaviside function H(x) yields the Dirac δ-function: that is,
taking the derivative of the function H(x) that has a finite discontinuity yields
the function δ(x) that has an infinite spike at x = 0, and is zero everywhere
else.
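A one-line numerical check of point i) (purely illustrative): away from the origin, the difference quotient of |x| is ±1, i.e. it behaves as a step function with a finite discontinuity at x = 0.

```python
# Difference quotient of |x|: equal to +1 for x > 0 and -1 for x < 0,
# i.e. a function with a finite jump at the origin.
def dabs(x, h=1e-8):
    return (abs(x + h) - abs(x - h)) / (2 * h)

assert abs(dabs(0.5) - 1.0) < 1e-6    # slope +1 for x > 0
assert abs(dabs(-0.5) + 1.0) < 1e-6   # slope -1 for x < 0
```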

Such a feature can be referred to as the hierarchy of singularities: differentiating a
singular function (such as a function with a finite discontinuity) makes it even more
singular (namely the δ-function in the latter case).
This hierarchy of singularities, combined with the ODE (2.5) that the GF G
must satisfy, ensures that the most singular part of G(x|x′ ) is proportional to
the absolute value |x − x′ | (only the most singular part: nothing of course forbids
G from having smoother parts that are differentiable even at x = x′ ). This is proved
by reductio ad absurdum. Indeed, suppose that this were not the case: for
instance, assume that G(x|x′ ) has, as its most singular component, something
with a finite discontinuity, i.e. proportional to the Heaviside function H(x − x′ ).
When substituted into the ODE (2.5), the first derivative with respect to x would
yield a δ-function δ(x − x′ ), but the second derivative would then yield a derivative
of the δ-function, i.e. δ ′ (x − x′ ). The ODE (2.5) would thus not be satisfied in
such a case, because we would have an uncompensated δ ′ (x − x′ ). The exact same
argument then shows that the ODE (2.5) could not be


satisfied if G involved terms like δ, δ ′ , etc. The same argument shows that the
most singular part of G must be proportional to |x − x′ | (and is thus in particular
continuous at x = x′ ), so that the first derivative of G yields a Heaviside function
H(x − x′ ) (which hence has a finite discontinuity at x = x′ ), while the second
derivative yields the δ-function δ(x − x′ ) that matches the δ-function on the
right-hand side of (2.5).
Therefore, the continuity of G at x = x′ merely reads

G(x′+ |x′ ) − G(x′− |x′ ) = 0 , (3.1)

where we introduced the useful notation x′± , which simply means

x′+ ≡ lim_{x→x′ , x>x′ } x and x′− ≡ lim_{x→x′ , x<x′ } x . (3.2)

Next, we want to quantify precisely the finite discontinuity of the derivative
∂G/∂x at x = x′ . To do this, we simply integrate the ODE (2.5) between x′ − ϵ and
x′ + ϵ, with some ϵ > 0, and we get
∫_{x′−ϵ}^{x′+ϵ} dx ∂²G/∂x² + ∫_{x′−ϵ}^{x′+ϵ} dx q(x) ∂G/∂x + ∫_{x′−ϵ}^{x′+ϵ} dx r(x) G(x|x′ ) = ∫_{x′−ϵ}^{x′+ϵ} dx δ(x − x′ ) . (3.3)

Let’s now take the limit ϵ → 0 in (3.3): in this case we can say that q(x) and
r(x) are basically constant over the integration range (because they are assumed to
be sufficiently well-behaved), and thus equal to q(x′ ) and r(x′ ), so that we have,
because of the continuity condition (3.1),
lim_{ϵ→0} ∫_{x′−ϵ}^{x′+ϵ} dx q(x) ∂G/∂x = lim_{ϵ→0} ∫_{x′−ϵ}^{x′+ϵ} dx r(x) G(x|x′ ) = 0 . (3.4)

Furthermore, in view of our notation (3.2) we merely have


lim_{ϵ→0} ∫_{x′−ϵ}^{x′+ϵ} dx ∂²G/∂x² = ∂G/∂x|_{x=x′+} − ∂G/∂x|_{x=x′−} . (3.5)

Therefore, taking the limit ϵ → 0 in (3.3) yields, in view of (3.4) and (3.5),

∂G(x|x′ )/∂x|_{x=x′+} − ∂G(x|x′ )/∂x|_{x=x′−} = 1 . (3.6)

This is the so-called jump condition: it quantifies the finite discontinuity that the
derivative of the GF has at x = x′ .
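Both the continuity condition (3.1) and the jump condition (3.6) can be verified numerically on a concrete GF. We again assume the simple operator L = d²/dx² on [0, 1] with G(0|x′) = G(1|x′) = 0, whose Green's function is G(x|x′) = min(x, x′)·(max(x, x′) − 1) (an illustrative example, not a case worked out in the notes):

```python
# G for L = d^2/dx^2 on [0,1] with Dirichlet conditions (assumed example).
def G(x, xp):
    return min(x, xp) * (max(x, xp) - 1)

xp, eps, h = 0.4, 1e-7, 1e-6

# Continuity (3.1): G(x'+|x') - G(x'-|x') = 0.
assert abs(G(xp + eps, xp) - G(xp - eps, xp)) < 1e-6

# Jump (3.6): the one-sided slopes of G differ by exactly 1.
slope_right = (G(xp + eps + h, xp) - G(xp + eps, xp)) / h
slope_left = (G(xp - eps, xp) - G(xp - eps - h, xp)) / h
assert abs((slope_right - slope_left) - 1.0) < 1e-4
```

Here the slopes are x′ = 0.4 on the right of x′ and x′ − 1 = −0.6 on the left, so their difference is exactly 1, as (3.6) demands.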

To conclude, let’s now briefly discuss the general strategy that we typically follow
in practice in order to actually compute a GF:

i) since δ(x − x′ ) = 0 for all x ̸= x′ , we write down the ODE (2.5) separately in the two
complementary regions x < x′ and x > x′ . In each region this yields the homogeneous
equation

LG(x|x′ ) = 0 , (3.7)

which we solve separately in the two regions x < x′ and x > x′ : the solution in
the region x < x′ is, say, a function G< and the solution in the region x > x′
is, say, a function G> ;

ii) we then match our two solutions G< and G> at x = x′ by using the continuity
condition (3.1) and the jump condition (3.6), which hence read

G> (x′+ |x′ ) − G< (x′− |x′ ) = 0 (3.8)

and

∂G> (x|x′ )/∂x|_{x=x′+} − ∂G< (x|x′ )/∂x|_{x=x′−} = 1 ; (3.9)

iii) finally, we also impose the additional constraints (initial values for an initial-
value problem, and boundary values for a boundary-value problem) to get a
unique solution to our problem.
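The three steps above can be sketched end to end on the assumed example L = d²/dx² on [0, 1] with Dirichlet conditions G(0|x′) = G(1|x′) = 0 (chosen for illustration, not a problem treated in the notes). Step i): G′′ = 0 on each side gives G<(x) = a + bx and G>(x) = c + dx. Step iii): G<(0) = 0 forces a = 0, and G>(1) = 0 forces c = −d, so G>(x) = d(x − 1). Step ii): continuity b·x′ = d·(x′ − 1) and jump d − b = 1 then give d = x′ and b = x′ − 1:

```python
# Steps i)-iii) solved for L = d^2/dx^2 on [0, 1], Dirichlet conditions
# G(0|x') = G(1|x') = 0 (an assumed illustrative problem).
def green_pieces(xp):
    """Return (G_<, G_>) for the source point x' = xp, obtained from the
    matching conditions (3.8) and (3.9)."""
    b, d = xp - 1, xp               # solution of b*xp = d*(xp - 1), d - b = 1
    return (lambda x: b * x,        # G_< , valid for x <= x'
            lambda x: d * (x - 1))  # G_> , valid for x >= x'

G_less, G_greater = green_pieces(0.3)
assert abs(G_less(0.3) - G_greater(0.3)) < 1e-12     # continuity (3.8)
# slopes: dG_>/dx = 0.3 and dG_</dx = -0.7, so the jump is 1, cf. (3.9)
assert abs(0.3 - (0.3 - 1) - 1.0) < 1e-12
```

Assembling the two pieces reproduces G(x|x′) = min(x, x′)·(max(x, x′) − 1), and substituting it into (2.1) then yields a particular solution for any source f, which closes the loop with Chapter 2.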
