
Published in Image Processing On Line on 2018-07-26.
Submitted on 2018-05-18, accepted on 2018-05-31.
ISSN 2105-1232 © 2018 IPOL & the authors CC-BY-NC-SA
This article is available online with supplementary materials, software, datasets and online demo at https://doi.org/10.5201/ipol.2018.228

An Analysis and Implementation of Multigrid Poisson Solvers With Verified Linear Complexity

J. Matías Di Martino¹ and Gabriele Facciolo²
¹ IFFI, Universidad de la República, 11300 Montevideo, Uruguay (matiasdm@fing.edu.uy)
² CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235 Cachan, France (facciolo@cmla.ens-cachan.fr)

Communicated by Miguel Colom    Demo edited by Gabriele Facciolo and Matías Di Martino

Abstract

The Poisson equation is the most studied partial differential equation, and it allows many useful image processing methods to be formulated in an elegant and efficient mathematical framework. Using different variations of data terms and boundary conditions, Poisson-like problems can be developed, e.g. for local contrast enhancement, inpainting, or seamless image cloning, among many other applications. Multigrid solvers are among the most efficient numerical solvers for discrete Poisson-like equations. However, their correct implementation relies on: (i) the proper definition of the discrete problem, (ii) the right choice of interpolation and restriction operators, and (iii) the adequate formulation of the problem across different scales and layers. In the present work we address these aspects, and we provide a mathematical and practical description of multigrid methods. In addition, we present an alternative to the extended formulation of the Poisson equation proposed in 2011 by Mainberger et al. The proposed formulation is better suited to multigrid methods, in particular because it has important mathematical properties that can be exploited to define the problem at different scales in an intuitive and natural way. In addition, common iterative solvers and Poisson-like problems are empirically analyzed and compared. For example, the complexity of problems is compared when the topology of Dirichlet boundary conditions changes in the interior of the regular domain of the image. The main contribution of this work is the development and detailed description of an implementation of a multigrid numerical solver which converges in linear time.

Source Code

The reviewed source code and documentation of a Matlab implementation of multigrid Poisson solvers and of the applications described in this work are available from the web page of this article¹. Usage instructions are included in the README.txt file of the archive.

Keywords: multigrid; Poisson equation; Laplace equation; Poisson editing


¹ https://doi.org/10.5201/ipol.2018.228

J. Matías Di Martino and Gabriele Facciolo, An Analysis and Implementation of Multigrid Poisson Solvers With Verified Linear Complexity, Image Processing On Line, 8 (2018), pp. 192-218. https://doi.org/10.5201/ipol.2018.228

1 Introduction

The Poisson equation is one of the simplest and most versatile elliptic PDEs. It is at the core of many powerful image processing methods [2, 9, 11, 13, 14, 17] and has been extensively studied for its practical applications and its interesting mathematical properties. In the past decade, the volume of digital data has grown exponentially, and large amounts of high resolution digital images are uploaded and processed every day. This scenario imposes new challenges on classical image processing methods, which need to be adapted and their theory updated to meet modern computational requirements. In order to make Poisson-based algorithms practical, fast and efficient, numerical solvers must be developed. Multigrid methods are among the most reliable and efficient numerical solvers for Poisson-like problems and tend to outperform other iterative solvers as the size of the domain increases. Our main goal is to describe in detail and review this family of methods both in theory and in practice. In addition, we extend the ideas presented by Mainberger et al. [10], where a codec for the compression of cartoon-like images was proposed. This codec is implemented by keeping only the edges of the different objects in the image, which are then used to recover the remaining pixels by solving the Laplace equation. To decode edge-based compressed images, Mainberger et al. [10] proposed an efficient multigrid algorithm based on an extended formulation of the Laplace equation. Inspired by these ideas, we extend [10] to non-homogeneous problems, i.e. when a Poisson-like equation needs to be solved instead of a Laplace equation. Furthermore, the proposed formulation of the problem makes it possible to impose Dirichlet boundary conditions at arbitrary points of the domain. We show how this problem can be stated in a self-adjoint positive definite form, which guarantees the well-posedness of the presented multigrid numerical solvers. To complement the ideas presented here and to obtain a deeper understanding of multigrid, we highly recommend reading this article alongside the book by Briggs et al. [4].

2 Description of the Problem


Let us begin by analyzing the continuous formulation of the problem, which gives us the intuition and tools that are later used to introduce its discrete counterpart.

Poisson equation on a continuous domain. Let Ω be a domain with piecewise C¹ boundary on the Euclidean plane R², and let f : Ω → R and g : ∂Ω → R be two functions on Ω and ∂Ω, respectively. The Poisson equation can be defined setting different kinds of boundary conditions, e.g., Dirichlet and Neumann are two canonical cases. When Dirichlet conditions are imposed, g plays the role of setting the value of the solution on ∂Ω, i.e.

    ∆u = f   on Ω,
    u = g    on ∂Ω,                                                    (1)

where u : Ω → R is the solution to the problem, and f the Laplacian datum. On the other hand, Neumann conditions can be imposed letting g be the value of the outward partial derivative of the solution on ∂Ω, i.e.

    ∆u = f       on Ω,
    ∂u/∂ν = g    on ∂Ω,                                                (2)

where ν denotes the outward unit normal of ∂Ω.
The differential operator ∆ = ∂²/∂x² + ∂²/∂y² corresponds to the Laplace operator, and it is arguably the simplest nontrivial second order operator. The existence and uniqueness of solutions to Equations (1)-(2), provided that the data terms are regular enough, is a standard result in linear PDE theory [5].

2.1 Discrete Model


The present work focuses on the analysis of the numerical solution of discrete approximations of the Poisson equation. In particular, we will focus on image-like data, i.e. we will assume that the domain of interest can be sampled as a regular rectangular grid

    u(x, y) = uij   for x = hj, y = hi,                                (3)

where h is the distance between samples (pixels) and the discrete indexes (i, j) ∈ Ωh := {1, . . . , H} × {1, . . . , W} ⊂ Z².
A discrete approximation for the Laplacian can be obtained through a second order Taylor expansion,

    ui(j+1) = u(x + h, y) = u(x, y) + ux h + uxx h²/2 + O(h³)
    ui(j−1) = u(x − h, y) = u(x, y) − ux h + uxx h²/2 + O(h³)
    u(i+1)j = u(x, y + h) = u(x, y) + uy h + uyy h²/2 + O(h³)          (4)
    u(i−1)j = u(x, y − h) = u(x, y) − uy h + uyy h²/2 + O(h³)

    ⇒ ∆u(x, y) ≈ ∆h uij := (1/h²) (−4uij + u(i+1)j + u(i−1)j + ui(j+1) + ui(j−1)),

which leads to the standard five-point Laplacian stencil. The previous definition requires the knowledge of the values of u outside Ω (e.g. when computing ∆h u at the boundaries of Ωh). In the present work, when a value outside the domain is required, the value of the nearest pixel inside Ωh is used instead. This is an arbitrary choice that corresponds to implicitly imposing vanishing Neumann conditions. It is also equivalent to considering the graph Laplacian [3] on the graph whose vertices are the points of Ωh and whose edges are given by 4-connectivity.
A vanishing discrete Laplacian has an intuitive interpretation: if we regard the values of u as
measuring some quantity, the fact that ∆u(p) = 0 implies that the quantity at point p equals the
average quantity on the neighbors of p (thus, u(p) can be recovered by averaging the values of u on
the neighbors of p). When ∆u(p) = 0 for every p, we say that the quantity is in equilibrium.
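The five-point stencil with the replicate-boundary (vanishing Neumann) convention described above can be sketched in a few lines of NumPy. The paper's reviewed code is Matlab; this fragment is only an illustrative translation, and the function name is ours:

```python
import numpy as np

def discrete_laplacian(u, h=1.0):
    """Five-point Laplacian of a 2D array.

    Padding with the nearest interior value ("edge" mode) implements the
    vanishing Neumann condition described in the text.
    """
    up = np.pad(u, 1, mode="edge")  # replicate the nearest pixel outside the domain
    return (-4.0 * up[1:-1, 1:-1]
            + up[2:, 1:-1] + up[:-2, 1:-1]       # vertical neighbors
            + up[1:-1, 2:] + up[1:-1, :-2]) / h**2   # horizontal neighbors

# A constant image is in equilibrium: its discrete Laplacian vanishes everywhere.
u = np.full((4, 5), 3.0)
assert np.allclose(discrete_laplacian(u), 0.0)
```

For a quadratic ramp u(i, j) = j², the interior values of the discrete Laplacian equal the exact second derivative, 2, as expected from the Taylor expansion (4).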
The discrete Poisson equation on Ωh can be expressed as

    ∆h u(i, j) = f (i, j),   (i, j) ∈ Ωh,                              (5)

where f imposes the Laplacian values of the solution, and Neumann boundary conditions at ∂Ωh are implicitly met as described above. Notice that a superscript is included to make the distance between pixels explicit. This will become helpful to differentiate problems defined across different discrete grids (which is the core of multigrid analysis).
Equation (5) is the most standard way of defining the Poisson equation; however, in many practical applications [10, 13] it is necessary to impose additional conditions on the interior of the discrete set Ωh. This more general problem can be expressed as

    ∆h u(i, j) = f (i, j)   (i, j) ∈ Ωh \ Θh,
    u(i, j) = g(i, j)       (i, j) ∈ Θh,                               (6)

where Θh is a subset of Ωh indicating the positions of pixels to be set. As before, f : Ωh \ Θh → R represents the Laplacian data, and g : Θh → R the values of the solution on Θh. Again, Neumann boundary conditions on the edges of the image are automatically enforced due to the chosen definition of the discrete Laplacian.
Equation (6) is linear in the set of unknowns u(i, j), and therefore it can be expressed as

    Ah uh = bh.                                                        (7)

Abusing notation, let uh now denote the HW × 1 column vector obtained by stacking the image u(i, j) column-wise, bh is a column vector containing a mixture of the data terms f and g, and the matrix Ah is a HW × HW sparse matrix obtained from ∆h and Θh. Equation (7) can be expressed in different ways. In the following we describe three alternatives to define this linear problem: the Reduced Formulation, the Extended Formulation, and a Self-Adjoint Extended Formulation.
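As a sketch of how the system (7) can be assembled in practice, the following Python/SciPy fragment builds Lh as the negated graph Laplacian of the 4-connected grid and forms the extended system of Equation (9) below. The function names are ours and the paper's reviewed implementation is Matlab; this is only an illustrative translation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_matrix(H, W, h=1.0):
    """Sparse five-point Laplacian L^h on an H x W grid, column-stacked
    (Matlab-style) ordering. With the replicate-boundary convention it is
    the negated graph Laplacian of the 4-connected grid graph."""
    path = lambda n: sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1])
    adj = sp.kron(sp.identity(W), path(H)) + sp.kron(path(W), sp.identity(H))
    deg = sp.diags(np.asarray(adj.sum(axis=1)).ravel())
    return (adj - deg) / h**2

def poisson_system(f, g, theta, h=1.0):
    """Extended formulation, Equation (9):
    [(I - I_Theta)(-L^h) + I_Theta] u = (I - I_Theta)(-f) + I_Theta g."""
    H, W = f.shape
    L = laplacian_matrix(H, W, h)
    i_theta = sp.diags(theta.ravel(order="F").astype(float))
    i_free = sp.identity(H * W) - i_theta
    A = i_free @ (-L) + i_theta
    b = i_free @ (-f.ravel(order="F")) + i_theta @ g.ravel(order="F")
    return A.tocsr(), b

# Toy 3x3 check: with f = 0 and both Dirichlet pixels set to 5, the unique
# solution of the resulting Laplace problem is the constant image 5.
f = np.zeros((3, 3))
g = np.zeros((3, 3)); g[1, 2] = 5.0; g[2, 1] = 5.0   # pixels (2,3) and (3,2)
theta = np.zeros((3, 3), dtype=bool); theta[1, 2] = theta[2, 1] = True
A, b = poisson_system(f, g, theta)
u = spsolve(A, b)
```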

Reduced and Extended Formulation. Using the column representation introduced above, the Laplacian operator ∆h can be implemented as the matrix multiplication

    Lh uh = (1/h²) [ -2  1  0  1 ...  0  0 ] [ u11 ]
                   [  1 -3  1  0 ...  1  0 ] [ u21 ]
                   [  0  1 -4  1 ...  0  1 ] [ u31 ]                   (8)
                   [  :  :  :  :  .   :  : ] [  :  ]
                   [  0  0  0  1 ...  1 -2 ] [ uHW ]

Therefore Equation (6) can be expressed as

    [ (I − IΘh)(−Lh) + IΘh ] uh = (I − IΘh)(−f h) + IΘh g h,           (9)

where I denotes the HW × HW identity matrix, and IΘh corresponds to a diagonal matrix whose element IΘh(k, k) = 1 if the pixel of linear index k belongs to Θh, and 0 otherwise.
Equation (9) can be seen as the concatenation of the equations −∆h uh(i, j) = −f h(i, j) if (i, j) ∈ Ωh \ Θh and uh(i, j) = g h(i, j) if (i, j) ∈ Θh. The multiplication by −1 in the first set of equations is introduced to define a linear problem with nonnegative eigenvalues.

The reduced formulation is obtained when the number of unknowns is minimized by removing those associated with the trivial conditions uh(k) = g h(k) (and, of course, modifying the data terms accordingly [10]). Let us illustrate this with the following 3 × 3 toy example
    f h = [ f11 f12 f13 ]     g h = [ −   −   −  ]     uh = [ u11 u12 u13 ]
          [ f21 f22  −  ]           [ −   −  g23 ]          [ u21 u22 u23 ]
          [ f31  −  f33 ]           [ −  g32  −  ]          [ u31 u32 u33 ]

    IΘh = diag(0, 0, 0, 0, 0, 1, 0, 1, 0),

    Lh = (1/h²) [ -2  1  0  1  0  0  0  0  0 ]
                [  1 -3  1  0  1  0  0  0  0 ]
                [  0  1 -2  0  0  1  0  0  0 ]
                [  1  0  0 -3  1  0  1  0  0 ]
                [  0  1  0  1 -4  1  0  1  0 ]                         (10)
                [  0  0  1  0  1 -3  0  0  1 ]
                [  0  0  0  1  0  0 -2  1  0 ]
                [  0  0  0  0  1  0  1 -3  1 ]
                [  0  0  0  0  0  1  0  1 -2 ]
Equation (10) illustrates a Poisson-like problem in which the pixels (2, 3) and (3, 2) are set to the values g23 and g32 respectively, and a condition on the Laplacian of the remaining pixels is imposed. The concatenation of the equations −∆h uh(i, j) = −f h(i, j) for (i, j) ∈ Ωh \ Θh and uh(i, j) = g h(i, j) for (i, j) ∈ Θh leads to the Extended Formulation linear problem

    Ah_ext uh_ext = bh_ext →

    [  2 -1  0 -1  0  0  0  0  0 ] [ u11 ]   [ -h² f11 ]
    [ -1  3 -1  0 -1  0  0  0  0 ] [ u21 ]   [ -h² f21 ]
    [  0 -1  2  0  0 -1  0  0  0 ] [ u31 ]   [ -h² f31 ]
    [ -1  0  0  3 -1  0 -1  0  0 ] [ u12 ]   [ -h² f12 ]
    [  0 -1  0 -1  4 -1  0 -1  0 ] [ u22 ] = [ -h² f22 ]               (11)
    [  0  0  0  0  0  1  0  0  0 ] [ u32 ]   [   g32   ]
    [  0  0  0 -1  0  0  2 -1  0 ] [ u13 ]   [ -h² f13 ]
    [  0  0  0  0  0  0  0  1  0 ] [ u23 ]   [   g23   ]
    [  0  0  0  0  0 -1  0 -1  2 ] [ u33 ]   [ -h² f33 ]

where the matrix is [ h²(I − IΘh)(−Lh) + IΘh ] and the right-hand side is h²(I − IΘh)(−f h) + IΘh g h.

This linear system can be reduced by eliminating the unknowns associated with trivial conditions, e.g. we can set u32 = g32 and u23 = g23, obtaining a more compact Reduced Formulation

    Ah_red uh_red = bh_red →

    [  2 -1  0 -1  0  0  0 ] [ u11 ]   [ -h² f11             ]
    [ -1  3 -1  0 -1  0  0 ] [ u21 ]   [ -h² f21             ]
    [  0 -1  2  0  0  0  0 ] [ u31 ]   [ -h² f31 + g32       ]
    [ -1  0  0  3 -1 -1  0 ] [ u12 ] = [ -h² f12             ]         (12)
    [  0 -1  0 -1  4  0  0 ] [ u22 ]   [ -h² f22 + g32 + g23 ]
    [  0  0  0 -1  0  2  0 ] [ u13 ]   [ -h² f13 + g23       ]
    [  0  0  0  0  0  0  2 ] [ u33 ]   [ -h² f33 + g32 + g23 ]
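The equivalence between the extended and reduced systems of Equations (11)-(12) can be checked numerically. The sketch below (ours, not part of the reviewed Matlab code) eliminates the trivial unknowns of the extended 9 × 9 system and verifies that both solves agree; the Dirichlet values g32 = 1 and g23 = 3 are arbitrary illustrative choices, with h = 1 and f = 0:

```python
import numpy as np

# Extended system of Equation (11) for the 3x3 example (unknowns stacked
# column-wise: u11, u21, u31, u12, u22, u32, u13, u23, u33).
g32, g23 = 1.0, 3.0
A_ext = np.array([
    [ 2, -1,  0, -1,  0,  0,  0,  0,  0],
    [-1,  3, -1,  0, -1,  0,  0,  0,  0],
    [ 0, -1,  2,  0,  0, -1,  0,  0,  0],
    [-1,  0,  0,  3, -1,  0, -1,  0,  0],
    [ 0, -1,  0, -1,  4, -1,  0, -1,  0],
    [ 0,  0,  0,  0,  0,  1,  0,  0,  0],
    [ 0,  0,  0, -1,  0,  0,  2, -1,  0],
    [ 0,  0,  0,  0,  0,  0,  0,  1,  0],
    [ 0,  0,  0,  0,  0, -1,  0, -1,  2],
], dtype=float)
b_ext = np.array([0, 0, 0, 0, 0, g32, 0, g23, 0], dtype=float)

# Reduced system: drop rows/columns of the trivial equations u32 = g32 and
# u23 = g23, moving their contribution to the right-hand side.
theta = np.array([5, 7])                       # linear indices of u32 and u23
free = np.setdiff1d(np.arange(9), theta)
A_red = A_ext[np.ix_(free, free)]
b_red = b_ext[free] - A_ext[np.ix_(free, theta)] @ np.array([g32, g23])

u_ext = np.linalg.solve(A_ext, b_ext)
u_red = np.linalg.solve(A_red, b_red)
assert np.allclose(u_ext[free], u_red)         # both formulations agree
assert np.allclose(u_ext[theta], [g32, g23])   # Dirichlet values are met
```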

The Reduced Formulation is an effective and compact way of writing the linear problem associated with the discrete Poisson equation. It is also the most efficient in terms of memory usage (as only the non-trivial equations are preserved). However, the reduced formulation is not convenient when multigrid methods are considered. The main idea behind multigrid approaches consists of solving the numerical problem along a set of different layers. Each layer is associated with a specific spatial resolution of the problem. Regular grids can be sampled very naturally and represented in a pyramidal structure, which facilitates the definition of multigrid solvers. Because of this, it is useful to keep the set of trivial equations (i.e. those associated with Dirichlet conditions), which leads to the Extended Formulation of the problem [10]. As we shall see, this regular representation of the domain can be exploited to substantially simplify the implementation of hierarchical representations of the domain Ωh → {Ωh, Ω2h, . . . , ΩH}, which is crucial for multigrid numerical solvers.

Self-Adjoint Extended Formulation. The extended formulation provides a practical definition of the problem to implement hierarchical representations of the domain. However, the resulting matrix Ah_ext that characterizes the problem lacks some important properties, as we shall see. In particular, to guarantee the convergence of multigrid methods it is sufficient that the matrix Ah associated to the linear problem Ah uh = bh be self-adjoint and positive definite (see for instance Appendices A and B). This condition can be written as

    ⟨Ah uh, v h⟩ = ⟨uh, Ah v h⟩   ∀ uh, v h;
    ⟨Ah uh, uh⟩ > 0              ∀ uh ≠ 0.                             (13)

It is easy to see that (because of the lack of symmetry) the extended formulation of the problem does not satisfy the self-adjoint positive definite conditions detailed in Equation (13) (see, e.g. Equation (11)).

To obtain a self-adjoint extended formulation we start by defining g̃(i, j) = g(i, j) for (i, j) ∈ Θh and 0 otherwise. Then we apply the change of variable u′ = u − g̃ to obtain an equivalent system with homogeneous boundary conditions: the problem

    −∆h u(i, j) = −f (i, j)   (i, j) ∈ Ωh \ Θh
    u(i, j) = g(i, j)         (i, j) ∈ Θh

becomes

    −∆h u′(i, j) = −f (i, j) + ∆h g̃(i, j)   (i, j) ∈ Ωh \ Θh          (14)
    u′(i, j) = 0                            (i, j) ∈ Θh.

Then, as u′(i, j) = 0 for (i, j) ∈ Θh, we have Lh u′ ≡ Lh(I − IΘh)u′ for all the pixels in Ωh \ Θh. Hence, Equation (14) can be rewritten as the sparse linear problem

    [ (I − IΘh)(−Lh)(I − IΘh) + IΘh ] u′h = (I − IΘh)(−f h + ∆h g̃ h).  (15)

After solving this system of equations, the change of variable should be reverted as u = u′ + g̃, recovering the solution u to the original problem.
Recalling our simple 3 × 3 illustrative example, the proposed self-adjoint extended formulation corresponds to

    Ah_sa-ext u′h = bh_sa-ext →

    [  2 -1  0 -1  0  0  0  0  0 ] [ u′11 ]   [ -h²(f11 + ∆h g̃11) ]
    [ -1  3 -1  0 -1  0  0  0  0 ] [ u′21 ]   [ -h²(f21 + ∆h g̃21) ]
    [  0 -1  2  0  0  0  0  0  0 ] [ u′31 ]   [ -h²(f31 + ∆h g̃31) ]
    [ -1  0  0  3 -1  0 -1  0  0 ] [ u′12 ]   [ -h²(f12 + ∆h g̃12) ]
    [  0 -1  0 -1  4  0  0  0  0 ] [ u′22 ] = [ -h²(f22 + ∆h g̃22) ]   (16)
    [  0  0  0  0  0  1  0  0  0 ] [ u′32 ]   [         0          ]
    [  0  0  0 -1  0  0  2  0  0 ] [ u′13 ]   [ -h²(f13 + ∆h g̃13) ]
    [  0  0  0  0  0  0  0  1  0 ] [ u′23 ]   [         0          ]
    [  0  0  0  0  0  0  0  0  2 ] [ u′33 ]   [ -h²(f33 + ∆h g̃33) ]

where the matrix is [ h²(I − IΘh)(−Lh)(I − IΘh) + IΘh ] and the right-hand side is h²(I − IΘh)(−f h + ∆h g̃ h).
Proposition 1. If Θh ≠ ∅, the Reduced, Extended and Self-Adjoint discrete problems have the same (unique) solution.

It is easy to see that if a discrete function u is a solution of one of the linear problems then it is also a solution of the others. To prove that the three formulations of the problem are equivalent, we must prove that a solution for each of these problems exists and is unique.

[6, Theorem 1]. If the partitioned matrix A is block strictly diagonally dominant, or if A is block irreducible and block diagonally dominant with |Ai,i| > Σ_{j≠i} |Ai,j| for at least one i, then A is non-singular.

When Θh ≠ ∅ the previous theorem applies to the three formulations of the problem, proving that Ared, Aext, and Asa-ext are non-singular. Hence, the linear problems associated with these matrices have a well defined unique solution. This proves that the solution of the discrete Poisson problem can be obtained by solving any of the previous formulations.
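The properties claimed above can be checked numerically on the 3 × 3 example. The sketch below (ours; h = 1, f = 0, with arbitrary illustrative values g32 = 1 and g23 = 3) verifies that Ah_ext is not symmetric, that Ah_sa-ext satisfies both conditions of Equation (13), and that reverting the change of variable recovers the solution of the extended system:

```python
import numpy as np

# L^h of Equation (10): 3x3 grid, h = 1, column-stacked unknowns.
Lh = np.array([
    [-2, 1, 0, 1, 0, 0, 0, 0, 0],
    [ 1,-3, 1, 0, 1, 0, 0, 0, 0],
    [ 0, 1,-2, 0, 0, 1, 0, 0, 0],
    [ 1, 0, 0,-3, 1, 0, 1, 0, 0],
    [ 0, 1, 0, 1,-4, 1, 0, 1, 0],
    [ 0, 0, 1, 0, 1,-3, 0, 0, 1],
    [ 0, 0, 0, 1, 0, 0,-2, 1, 0],
    [ 0, 0, 0, 0, 1, 0, 1,-3, 1],
    [ 0, 0, 0, 0, 0, 1, 0, 1,-2],
], dtype=float)
I = np.eye(9)
It = np.diag([0, 0, 0, 0, 0, 1, 0, 1, 0.0])        # I_Theta: u32, u23 are set
f = np.zeros(9)
g_tilde = np.array([0, 0, 0, 0, 0, 1.0, 0, 3.0, 0])  # g32 = 1, g23 = 3

A_ext = (I - It) @ (-Lh) + It                  # Equation (9)
A_sa  = (I - It) @ (-Lh) @ (I - It) + It       # Equation (15)
b_sa  = (I - It) @ (-f + Lh @ g_tilde)

assert not np.allclose(A_ext, A_ext.T)         # extended form lacks symmetry
assert np.allclose(A_sa, A_sa.T)               # self-adjoint
assert np.linalg.eigvalsh(A_sa).min() > 0      # positive definite

u_prime = np.linalg.solve(A_sa, b_sa)
u = u_prime + g_tilde                          # revert the change of variable
b_ext = (I - It) @ (-f) + It @ g_tilde
assert np.allclose(u, np.linalg.solve(A_ext, b_ext))  # same unique solution
```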

3 Multigrid Numerical Solvers


In this section we introduce several numerical solvers that can be used to solve the Self-Adjoint Extended formulation of the discrete Poisson Equation (6). We start by introducing the basic (one-grid) iterative solvers, then we analyze how to improve their performance by applying them at two different scales. Finally, we generalize the ideas developed for two grids to multiple grids, which leads to several different multigrid approaches.

Notation. Let Au = b denote a system of linear equations. We use the variable u to denote the exact solution of this system and v to denote an approximation (obtained by some numerical solver). By uk / uij we denote the vector / matrix associated to the continuous 2D signal u(x, y); whether the discrete variable u is treated as a matrix or as a column vector will be clear from the context. To emphasize that a given variable is associated to a particular grid, e.g. Ωh, we use the notation uh, v h. Assuming that the solution of the system Au = b is unique, there are two important quantities to look at:

    e := u − v,                                                        (17)

called the error or algebraic error, and

    r := b − Av,                                                       (18)

known as the residual. In addition, we denote by ‖uh‖h the discrete L2 norm defined on a uniform grid with spacing h as

    ‖uh‖h = ( h² Σk (uhk)² )^{1/2}.                                    (19)

The scaling factor h² is introduced to make the discrete norm an approximation of the continuous L2 norm.

3.1 Basic Iterative Solvers


Jacobi and Gauss-Seidel iterative methods are very simple and play an important role in multigrid solvers. These methods can be used to solve linear problems Ax = b, and convergence is guaranteed when the matrix A is diagonally dominant [7]. For all the discrete problems formulated, this condition is met when the set Θh is not empty. An approximate expression for the rate of convergence of these methods is provided in [7] for the canonical Poisson problem, i.e. Lh u = f. In general, the convergence properties of iterative methods are closely related to the eigenvalues and eigenvectors of A (as we will illustrate in Section 4).
We denote by v (t) the approximation obtained at the iteration step t. Remember that v represents
a given approximation to the solution of the linear system Au = b with exact solution u. Let us start
by splitting the matrix A in the form,

A = D + L + U, (20)

where D is the diagonal of A, and L and U are the strictly lower and upper triangular parts of A
respectively. Isolating the diagonal part of A, we have

Du = −(L + U )u + b, (21)

which suggests the Jacobi iterative scheme

    v(t+1) = −D⁻¹(L + U) v(t) + D⁻¹ b
           = RJ v(t) + D⁻¹ b,                                          (22)

where RJ = −D⁻¹(L + U) is defined as the Jacobi iteration matrix.


A very simple but powerful modification of the previous method (see Algorithm 1) consists in considering the Jacobi iteration as an intermediate update v(t+0.5), and in defining v(t+1) as a weighted update in the direction of (v(t+0.5) − v(t)), i.e.

    v(t+0.5) = RJ v(t) + D⁻¹ b
    v(t+1)   = v(t) + ω (v(t+0.5) − v(t))
             = [(1 − ω)I + ω RJ] v(t) + ω D⁻¹ b                        (23)
             = Rω v(t) + ω D⁻¹ b.

The coefficient ω ∈ R is a weight factor that needs to be set. I denotes the identity matrix and
Rω = [(1 − ω)I + ωRJ ] is the weighted or damped Jacobi iteration matrix. (Notice that by setting
ω = 1 we recover the standard Jacobi scheme.)
The Gauss-Seidel method is very similar to the Jacobi method, but it uses the values of the new approximation v(t+1)ij as soon as they become available. This can be expressed using the matrix decomposition described above as

    (D + L) u = −U u + b,                                              (24)

which suggests the Gauss-Seidel iterative scheme (see Algorithm 2)

    v(t+1) = −(D + L)⁻¹ U v(t) + (D + L)⁻¹ b
           = RG v(t) + (D + L)⁻¹ b.                                    (25)

The matrix RG = −(D + L)⁻¹ U represents the Gauss-Seidel iteration matrix.


Algorithm 1: WeightedJacobi(A, b, x0, ω, n)

    Input : A and b   // System Au = b
    Input : x0        // Initial guess of the solution u
    Input : ω         // Weight (for ω = 1 we get the Jacobi iterative method)
    Input : n         // Number of iteration steps
    Output: x1        // Computed next approximation of u
    1  L = lowerTriangular(A)   // Strictly lower triangular part of A
    2  U = upperTriangular(A)   // Strictly upper triangular part of A
    3  D = diagonal(A)          // Diagonal part of A
    4  for k = 1 : n do
    5      x1 = (1 − ω) x0 + ω (D \ (−(L + U) x0 + b))
    6      x0 = x1
    7  return x1

Algorithm 2: GaussSeidel(A, b, x0, n)

    Input : A and b   // System Au = b
    Input : x0        // Initial guess of the solution u
    Input : n         // Number of iteration steps
    Output: x1        // Computed next approximation of u
    1  L = lowerTriangular(A)   // Strictly lower triangular part of A
    2  U = upperTriangular(A)   // Strictly upper triangular part of A
    3  D = diagonal(A)          // Diagonal part of A
    4  for k = 1 : n do
    5      x1 = (D + L) \ (−U x0 + b)
    6      x0 = x1
    7  return x1
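Algorithms 1 and 2 can be transcribed compactly into NumPy. The original pseudocode is Matlab-flavored (the backslash denotes a linear solve); the helpers below are only an illustrative sketch, and for brevity the Gauss-Seidel triangular system is solved with a generic dense solve instead of an explicit forward substitution:

```python
import numpy as np

def weighted_jacobi(A, b, x0, omega=1.0, n=1):
    """Algorithm 1: x1 = (1 - w) x0 + w D^{-1}(b - (L + U) x0); w = 1 is plain Jacobi."""
    D = np.diag(A)
    LU = A - np.diag(D)                 # strictly lower + strictly upper parts
    x = x0.astype(float).copy()
    for _ in range(n):
        x = (1.0 - omega) * x + omega * (b - LU @ x) / D
    return x

def gauss_seidel(A, b, x0, n=1):
    """Algorithm 2: solve (D + L) x1 = b - U x0 at each sweep."""
    DL = np.tril(A)                     # D + L
    U = np.triu(A, 1)                   # strictly upper part
    x = x0.astype(float).copy()
    for _ in range(n):
        x = np.linalg.solve(DL, b - U @ x)
    return x
```

On a small diagonally dominant system both iterations converge to the exact solution, Gauss-Seidel noticeably faster, in line with the convergence guarantee quoted above.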

Frequency response. To understand the key advantages of multigrid approaches, it is important to review how different iterative methods behave depending on the frequency components of the error. Let us introduce a simple example to illustrate this. Assume we want to solve the discrete Poisson problem presented in Equation (6), with Ωh = {1, · · · , N} × {1, · · · , N}, Θh being an empty set (no interior Dirichlet points are imposed), and with data term f (i, j) = 0 ∀ i, j. This problem can be written as

    Ah u = b,                                                          (26)

where Ah = Lh and b = [0, · · · , 0]T. It is obvious that u(i, j) = 0 ∀ i, j is a solution of this problem. The question we want to address now is the following: if we start with a cosine function as initial guess

    v0ν(i, j) = cos(νπ i / N),                                         (27)

how many iterations T are needed to converge to the true solution vT ≈ 0? The answer to this question will provide us intuition to understand how the evolution of the error is conditioned by its Fourier content. Figure 1 illustrates how the algebraic error evolves, starting from a sinusoidal seed v0, when the Jacobi iterative method is used. Images in the upper row illustrate the evolution of the computed solution vt when the initialization contains a low frequency sinusoid, while the bottom images show the evolution for a high frequency cosine function (for the same iteration numbers). As can be seen, simple iterative methods are very effective at reducing the high frequency components of the error, while several iteration steps are required to reduce low frequency components.

Figure 1: Evolution of the error for the homogeneous Poisson equation (A = Lh, b = 0) when cosines of two different frequencies are used as seed v0. (Panels show v0, v10 and v20.)

Figure 2: Diagram of a two grid scheme. Smooth components in a fine grid are observed as higher frequency components on coarser grids.
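The experiment of Equations (26)-(27) can be reproduced with a few lines of NumPy; the grid size (N = 16), the frequencies and the number of sweeps below are illustrative choices of ours:

```python
import numpy as np

def grid_laplacian(n):
    """Dense five-point Laplacian (h = 1) on an n x n grid: the negated graph
    Laplacian of the 4-connected grid, matching the replicate-boundary convention."""
    path = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    adj = np.kron(np.eye(n), path) + np.kron(path, np.eye(n))
    return adj - np.diag(adj.sum(axis=1))

N = 16
A = -grid_laplacian(N)            # homogeneous problem A u = 0, solution u = 0
D = np.diag(A)
i = np.arange(1, N + 1)

def jacobi_error_norm(nu, sweeps=20):
    """Jacobi sweeps on A v = 0 starting from the cosine seed of Equation (27)."""
    v = np.cos(nu * np.pi * i / N)[:, None] * np.ones((1, N))  # varies along i only
    v = v.ravel(order="F")
    off = A - np.diag(D)          # L + U
    for _ in range(sweeps):
        v = -(off @ v) / D        # Jacobi step with b = 0
    return np.linalg.norm(v)

low = jacobi_error_norm(nu=1)     # smooth seed: barely damped after 20 sweeps
high = jacobi_error_norm(nu=12)   # oscillatory seed: almost completely removed
assert high < 0.1 * low
```

After the same 20 sweeps, the oscillatory seed has shrunk by orders of magnitude while the smooth seed retains most of its norm, which is the behavior Figure 1 illustrates.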

3.2 From One Grid to Two Grids


The key idea behind multigrid methods is to solve the problem using multiple discrete grids of different sizes. A low frequency on the finer grid can then be represented as a high frequency on a coarser grid, and hence it can be effectively recovered using simple iterative schemes (as we saw in the previous section).

A basic two-grid correction scheme (as illustrated in Figure 2) consists of seven main steps:

1. Perform n1 iterations (of the Gauss-Seidel or Jacobi iterative method) over the problem (9) Ah uh = bh with a given initial seed v0h.

2. Compute the fine-grid residual rh = bh − Ah vh.

3. Restrict the residual to the coarser grid r2h = I^{2h}_h rh.
4. Solve the problem A2h w2h = r2h .

5. Interpolate the coarse-grid error to the fine grid wh = I^h_{2h} w2h.

6. Correct the fine grid approximation v h ← v h + wh .

7. Perform n2 iterations (of Gauss-Seidel or Jacobi iterative method) over the problem
Ah uh = bh starting from the refinement v h computed in the previous step.

To implement the previous simple two-grid correction scheme, we must define the interpolation/restriction operators I^h_{2h} / I^{2h}_h (Section 3.2.1), and we need to address how the problem in the coarse layer, A2h w2h = r2h, is defined (Section 3.2.2). Before proceeding with these definitions, it is worth commenting on why the seven steps described above are important for a successful definition of multigrid schemes. One of the key problems that needs to be controlled when implementing multigrid methods is aliasing. As we described above, simple iterative methods are efficient only at reducing high frequency components of the error. Therefore, if spurious low frequency components are created because of aliasing, the implemented multigrid method will no longer be efficient. This explains why the residual rh of the problem is transferred to the coarse grid (instead of vh). The iterations performed in step 1 guarantee that the residual of the problem Ah uh = bh does not contain high frequency components (as they were efficiently removed by the Gauss-Seidel or Jacobi method). It is then safe to restrict the problem to the coarse grid, r2h = I^{2h}_h rh, and no aliasing will be produced (as rh is a smooth signal). Step 7 can be explained with similar arguments: the reason why n2 iterations of the Gauss-Seidel (or Jacobi) method are performed after the solution on the fine grid is corrected is to remove the high frequency components of the error that may be introduced during the interpolation operation wh = I^h_{2h} w2h (Appendix A provides a formal discussion on this subject).
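The seven steps above can be sketched as follows. For brevity this toy version (ours) works on a 1D Dirichlet Poisson problem, uses weighted Jacobi as the smoother, and takes the Galerkin product R A P as the coarse operator, which is one common choice; the definition of the coarse-level problem for the 2D formulation is addressed in Section 3.2.2:

```python
import numpy as np

def poisson_1d(n):
    """1D Dirichlet Poisson matrix (h = 1): tridiag(-1, 2, -1) on n interior points."""
    return (2.0 * np.eye(n)
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def jacobi(A, b, x, omega=2.0 / 3.0, sweeps=3):
    """Weighted Jacobi smoother (Algorithm 1 in the equivalent form x + w D^{-1} r)."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def restriction(n):
    """Full-weighting in 1D: coarse point k averages fine points 2k, 2k+1, 2k+2
    (0-based) with weights 1/4, 1/2, 1/4."""
    nc = (n - 1) // 2
    R = np.zeros((nc, n))
    for k in range(nc):
        R[k, 2 * k:2 * k + 3] = [0.25, 0.5, 0.25]
    return R

def two_grid(A, b, x, n1=3, n2=3):
    """One two-grid correction cycle following steps 1-7 of the text."""
    R = restriction(A.shape[0])
    P = 2.0 * R.T                      # interpolation; Equation (28) with c = 1/2 in 1D
    x = jacobi(A, b, x, sweeps=n1)     # 1. pre-smoothing
    r = b - A @ x                      # 2. fine-grid residual
    r2 = R @ r                         # 3. restrict the residual
    A2 = R @ A @ P                     # coarse operator (Galerkin product)
    w2 = np.linalg.solve(A2, r2)       # 4. solve the coarse problem
    x = x + P @ w2                     # 5.-6. interpolate the error and correct
    return jacobi(A, b, x, sweeps=n2)  # 7. post-smoothing

n = 63
A = poisson_1d(n)
u_true = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))  # manufactured solution
b = A @ u_true
x = np.zeros(n)
for _ in range(5):
    x = two_grid(A, b, x)
```

A handful of cycles drives the residual far below the level that plain Jacobi would reach in the same number of fine-grid sweeps, because the coarse solve removes exactly the smooth error components that the smoother barely touches.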

3.2.1 Interpolation and Restriction Operators


To transfer information through the different grids it is necessary to define interpolation and restriction operators. The interpolation operator, denoted by I^h_{2h}, is a transformation I^h_{2h} : Ω2h → Ωh that interpolates a discrete function v2h defined on the coarse grid into the fine grid, vh = I^h_{2h} v2h. On the other hand, the restriction operator performs the inverse transformation, mapping a given discrete function on the fine grid to the coarse grid, i.e. I^{2h}_h : Ωh → Ω2h with v2h = I^{2h}_h vh.
The proper definition of interpolation and restriction operators is critical to achieve efficient multigrid implementations. In particular, there are three basic properties that need to be considered:

1. The order of the interpolation and restriction operators should be higher than or equal to the order of the differential equation we aim to solve (see [8] and Appendix A for a detailed discussion about the order of the operators).

2. The interpolation I^h_{2h} and the restriction I^{2h}_h should verify (see Appendix B and reference [4, pg. 185]) the condition

    I^{2h}_h = c (I^h_{2h})^T,   c ∈ R.                                (28)

3. Interpolation and restriction should be compatible with the boundary conditions at ∂Ω.

A restriction operator I^{2h}_h can be expressed without loss of generality as

    I^{2h}_h (vh) = Ĩ^{2h}_h (Kh ∗ vh),                                (29)

where Ĩ^{2h}_h denotes a basic sub-sampling operation, i.e. Ĩ^{2h}_h (vh)(k) = v2h(k) = v(k(2h)) = vh(2k), and Kh is a smoothing kernel (see Appendix A and reference [8] for a more detailed discussion).
The basic sub-sampling operation just selects one out of every two samples on each axis of the discrete grid. For a two dimensional discrete domain, we can arbitrarily start the sampling process at the pixel position (1, 1), (1, 2), (2, 1) or (2, 2) (the resulting operators are referred to as Ĩ1^{2h}_h, Ĩ2^{2h}_h, Ĩ3^{2h}_h and Ĩ4^{2h}_h respectively), as illustrated in Figure 3.

Figure 3: Illustration of the pixels selected by the sub-sampling operators (from left to right) Ĩ1^{2h}_h, Ĩ2^{2h}_h, Ĩ3^{2h}_h and Ĩ4^{2h}_h.

Figure 4: Illustration of the definition of the discrete convolution on the boundaries of the discrete domain.
The Poisson equation is a second order partial differential equation, which implies that the restriction and interpolation should be at least second order operators [8]. To that end, the well known full-weighting stencil was used in the present work,

    Kh = (1/4) [ 1/4  1/2  1/4 ]
               [ 1/2   1   1/2 ]                                       (30)
               [ 1/4  1/2  1/4 ].

For each discrete position $(i, j) \in \Omega^h$, the discrete convolution $K^h * v^h$ requires the value of $v^h$ at the eight neighbors $(i \pm 1, j \pm 1)$. This is not possible at the boundary of the image domain (e.g., $i = 1$, $i = H$, $j = 1$ or $j = W$). In those cases, Neumann boundary conditions are applied, which is equivalent to using the closest value of $v^h$ inside the domain $\Omega^h$, as illustrated in Figure 4.
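As a concrete sketch of Equations (29)-(30), the restriction can be applied directly to the image array by smoothing with the full-weighting kernel (with clamped, i.e. Neumann, boundaries) and then subsampling. This is our own illustration in Python/NumPy (the function name `restrict` and the use of `scipy.ndimage` are our choices, not part of the article's reference code):

```python
import numpy as np
from scipy import ndimage

# Full-weighting kernel of Eq. (30): (1/4) * [[1/4,1/2,1/4],[1/2,1,1/2],[1/4,1/2,1/4]]
K = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]]) / 16.0

def restrict(v, offset=(0, 0)):
    # Eq. (29): smooth with K, then keep one sample out of two on each axis.
    # mode="nearest" replicates the closest value inside the domain, which
    # matches the Neumann boundary handling illustrated in Figure 4.
    smoothed = ndimage.correlate(v, K, mode="nearest")
    oy, ox = offset  # which of the four basic sub-samplings of Figure 3 to use
    return smoothed[oy::2, ox::2]

v = np.arange(64, dtype=float).reshape(8, 8)
r = restrict(v)          # 4 x 4 coarse version of v
```

Since the kernel weights sum to one and clamping preserves border values, a constant image restricts to the same constant.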
Observing that Equation (29) is linear in the elements of the matrix $v^h$, it can be expressed as $I_h^{2h} v^h$, where $I_h^{2h}$ is a sparse matrix of size $(H/2)(W/2) \times HW$ and $v^h$ represents the $HW \times 1$ column vector obtained by writing the two-dimensional discrete signal in lexicographical order. The definition of the sparse sub-sampling matrices $\tilde{I}i_h^{2h}$ is straightforward. Algorithm 3 describes how to define the kernel convolution as a matrix multiplication $K * v^h = G\, v^h$ (incorporating the boundary condition at $\partial\Omega^h$). Then, the sparse restriction matrix can be defined as
$$ I_h^{2h} = \frac{1}{4} \left( \tilde{I}1_h^{2h} + \tilde{I}2_h^{2h} + \tilde{I}3_h^{2h} + \tilde{I}4_h^{2h} \right) G, \tag{31} $$
and the interpolation as
$$ I_{2h}^{h} = 4\, (I_h^{2h})^T. \tag{32} $$
The multiplicative factor included in Equation (32) was set so that the energy (norm) of discrete functions remains invariant when restriction and interpolation are applied consecutively.

J. Matı́as Di Martino and Gabriele Facciolo

Algorithm 3: ComputeG (Matrix version of the convolution kernel)


Input : H // Height of the discrete domain Ωh
Input : W // Width of the discrete domain Ωh
Output: HW × HW matrix G // matrix form of the full-weighting convolution
// Prepare sparse matrix indices
1 sh = [H,W]
2 [py, px] = ind2sub( sh, [1:HW]) // get pixel indices for the output image
3 i = sub2ind(sh , py, px) // get rows indices of the operator (y,x) (identity)
4 G = sparse(HW, HW) // initialize
// Vector containing the offsets and weights of the stencil
5 alldxdy = [[-1,-1, 1/4]; [1, 0, 1/2]; [-1, 1, 1/4];
[0, -1, 1/2]; [0, 0, 1]; [0, 1, 1/2];
[1, -1, 1/4]; [-1, 0, 1/2]; [1, 1, 1/4]];
// Loop over the vector combining the weights of the 9 positions of the stencil
6 for t =1:9 do
7 dy = alldxdy(t, 1)
8 dx = alldxdy(t, 2)
9 val = alldxdy(t, 3)
// Get corresponding input columns, clamping at the boundaries, i.e. max(min(ycoord, H), 1)
10 j = sub2ind(sh, max(min(py + dy, H), 1), max(min(px + dx, W), 1))
11 G = G + sparse(i, j, val, HW, HW)
12 G = 1/4 G // Normalization factor (See stencil def. Eq. (30))
13 return G

The definitions chosen above for the restriction and interpolation operators, presented in Equations (31)-(32), average the four different alternatives for defining the basic sub-sampling matrices $\tilde{I}1_h^{2h}$-$\tilde{I}4_h^{2h}$ illustrated in Figure 3. This is an arbitrary choice and one could also use any single one of the basic sub-sampling operators. We found in practice that averaging the four basic sub-sampling strategies reduces boundary effects. In particular, it is easy to verify that the proposed definitions keep any constant matrix invariant. More precisely, given a constant $H \times W$ matrix $C$, the equation $C = I_{2h}^h I_h^{2h} C$ holds for every pixel (even those at the boundary of the image).
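The matrix construction of Equations (31)-(32) and the constant-invariance property can be checked numerically. The following is a sketch with SciPy sparse matrices mirroring Algorithm 3 with clamped boundaries (variable and function names are ours, not the article's reference code):

```python
import numpy as np
from scipy import sparse

def compute_G(H, W):
    # Matrix form of the full-weighting convolution (Algorithm 3)
    idx = np.arange(H * W)
    py, px = np.unravel_index(idx, (H, W))       # lexicographic pixel coordinates
    rows, cols, vals = [], [], []
    for dy, dx, w in [(-1, -1, .25), (-1, 0, .5), (-1, 1, .25),
                      (0, -1, .5),  (0, 0, 1.),  (0, 1, .5),
                      (1, -1, .25), (1, 0, .5),  (1, 1, .25)]:
        qy = np.clip(py + dy, 0, H - 1)          # clamp = Neumann boundary condition
        qx = np.clip(px + dx, 0, W - 1)
        rows.append(idx)
        cols.append(np.ravel_multi_index((qy, qx), (H, W)))
        vals.append(np.full(H * W, w))
    G = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(H * W, H * W)).tocsr()
    return G / 4.0                               # normalization of Eq. (30)

def subsample_matrix(H, W, oy, ox):
    # Sparse matrix selecting pixels (oy::2, ox::2): one of the operators of Figure 3
    py, px = np.meshgrid(np.arange(oy, H, 2), np.arange(ox, W, 2), indexing="ij")
    cols = np.ravel_multi_index((py.ravel(), px.ravel()), (H, W))
    n = cols.size
    return sparse.coo_matrix((np.ones(n), (np.arange(n), cols)),
                             shape=(n, H * W)).tocsr()

def restriction(H, W):
    # Eq. (31): average of the four basic sub-samplings, applied after G
    S = None
    for oy in (0, 1):
        for ox in (0, 1):
            M = subsample_matrix(H, W, oy, ox)
            S = M if S is None else S + M
    return (S / 4.0) @ compute_G(H, W)

H, W = 8, 8
R = restriction(H, W)        # (H/2)(W/2) x HW restriction matrix
P = 4.0 * R.T                # interpolation, Eq. (32)
c = np.ones(H * W)           # a constant image, in lexicographic order
err = np.abs(P @ (R @ c) - c).max()
```

Here `err` is zero up to rounding, which is precisely the invariance $C = I_{2h}^h I_h^{2h} C$ stated above.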

3.2.2 Definition of the Problem in the Coarse Grid


The fourth step of the two-grid approach described in Section 3.2 requires solving the coarse problem $A^{2h} w^{2h} = r^{2h}$. Restriction and prolongation procedures were already addressed, which allows us to precisely define $r^{2h}$ from $r^h$ and vice-versa. However, a precise definition of $A^{2h}$ is still necessary and will be addressed in the following.
The solution $w^{2h}$ of the equation $A^{2h} w^{2h} = r^{2h}$ should provide an update to the current approximation $v^h$ such that the error (in the fine grid) is minimized, i.e.
$$ w^{2h} = \arg\min_{\tilde{w}^{2h}} \| u^h - (v^h + I_{2h}^h \tilde{w}^{2h}) \|. \tag{33} $$

Proposition 2. Given the problem in a fine grid $A^h u^h = b^h$ and a current approximation of its solution $v^h$, the optimal update $w^{2h}$ on a coarser grid can be obtained as the solution of the coarse linear problem $A^{2h} w^{2h} = b^{2h}$, where $A^{2h} = (I_{2h}^h)^T A^h I_{2h}^h$ and $b^{2h} = (I_{2h}^h)^T (b^h - A^h v^h)$.
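A quick numerical sanity check of the structure asserted by Proposition 2 (a sketch on a random small system, not the Poisson matrix itself): if $A^h$ is self-adjoint and positive definite, the coarse operator $(I_{2h}^h)^T A^h I_{2h}^h$ inherits both properties for any full-rank interpolation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4                        # fine and coarse dimensions
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # self-adjoint, positive definite by construction
I = rng.standard_normal((n, m))    # stand-in for the interpolation operator
A2 = I.T @ A @ I                   # coarse operator of Proposition 2
```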


Algorithm 4: v1 = MG(A,b,v0 ,H,W ,µ,n1 ,n2 )


Input : A, b // Matrix and data term of the problem Ax = b
Input : v0 // Initial guess (seed)
Input : H, W // Height and Width of the discrete domain Ωh
Input : µ // Number of iterative calls per layer
Input : n1 // Number of pre-smoothing steps
Input : n2 // Number of post-smoothing steps
Output: v1 // updated approximation of the solution
// (0) Initialization
1 [I2hh , Ih2h ] = ComputeRestrictionAndInterpolationOp(H,W )
// (1) Pre-smoothing Relaxation
2 v1 = GaussSeidel(A,b,v0 ,n1 ) // n1 iterations of GaussSeidel algorithm
// (2) Compute the residual
3 rh = b − A v1
// (3) Restrict the residual to a coarser layer
4 r2h = Ih2h rh
// (4) Solve the problem in the coarse layer
5 w2h = 0 // initial guess in the coarse layer
6 A2h = Ih2h Ah I2hh // Define the problem in the coarser layer
7 for i = 1:µ do
8     w2h ← MG(A2h ,r2h ,w2h ,H/2,W/2,µ,n1 ,n2 ) // recurse; stop when the grid is small enough to solve directly
// (5) Interpolate the correction function to the fine grid
9 wh = I2hh w2h
// (6) Apply the correction obtained from the coarse layers
10 v1 = v1 + wh
// (7) Post-smoothing relaxation
11 v1 = GaussSeidel(A,b,v1 ,n2 ) // Reduce high-freq distortion due to the interpolation
12 return v1

Proposition 2 is proved in Appendix B using the important properties described above: (i) that $A^h$ is a self-adjoint matrix, and (ii) that $I_{2h}^h \propto (I_h^{2h})^T$. This proposition is of crucial importance. Firstly, because it provides a precise definition of $A^{2h}$ in terms of $I_{2h}^h$ and $A^h$; and secondly, because it shows that the coarse version of the residual ($r^{2h} = (I_{2h}^h)^T (b^h - A^h v^h)$) plays the role of the data term in the coarse layer.
The two-grid scheme is now fully defined and can be implemented without ambiguity; the next step is, of course, to generalize these ideas to multiple grids.

3.3 Multigrid Approach


Recalling the seven main steps that describe the two-grid correction scheme, the fourth step consisted of solving the linear problem $A^{2h} w^{2h} = r^{2h}$ (in the coarse grid). This problem has exactly the same structure as the original problem (i.e. it is a self-adjoint linear problem); thus, it can be solved using a
two-grid approach. In other words, when the linear problem (at step 4) needs to be solved, a two-grid
scheme can be recursively applied leading to an even coarser grid and so forth.
The previous ideas motivate the definition of a multigrid solver as described in Algorithm 4. The
parameter µ sets the number of times the algorithm uses the coarser grids to improve the solution
obtained in the current grid. Even though the parameter µ can be set to an arbitrary integer number,


Figure 5: Illustration of the V-scheme for a 32 × 32 discrete problem. On the left we see the ground truth solution and the diagram of the multigrid computations. On the right are shown the corrections $w^{2^i h}$ computed at each scale $i$. Note that different grids are able to capture different components of the solution.

µ = 1 and µ = 2 are the most common choices. The first case (µ = 1) is called a V-scheme, and the case µ = 2 a W-scheme. Figure 5 illustrates the V-scheme with a simple example. The fine grid was defined as $\Omega^h = [1 \cdots 32] \times [1 \cdots 32]$ and the ground truth is $u^h(x, y) = 1 + \sin(x^2/2W)$. The Laplacian of this function was computed from the ground truth solution, and the Poisson equation was then solved (starting from $v_0(x, y) = 0$ for all $(x, y) \in \Omega^h$). This first illustrative example shows how different grids (scales) are able to capture different components of the solution. The combination of these orthogonal spectral components then allows recovering the global solution in an efficient manner.
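Algorithm 4 can be sketched end-to-end on a small model problem. The following is an illustrative Python transcription under simplifying assumptions (a 1D Dirichlet Poisson matrix, linear interpolation, grids of size $2^k - 1$, and a direct solve on the coarsest grid); it is not the article's reference implementation:

```python
import numpy as np

def gauss_seidel(A, b, v, iters):
    # lexicographic Gauss-Seidel relaxation sweeps
    for _ in range(iters):
        for i in range(len(b)):
            v[i] = (b[i] - A[i, :i] @ v[:i] - A[i, i+1:] @ v[i+1:]) / A[i, i]
    return v

def interpolation(n_fine, n_coarse):
    # 1D linear interpolation operator (n_fine x n_coarse), Dirichlet setting
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        P[2*j, j] = 0.5; P[2*j + 1, j] = 1.0; P[2*j + 2, j] = 0.5
    return P

def mg(A, b, v, mu=1, n1=1, n2=1):
    # Algorithm 4: mu = 1 gives the V-scheme, mu = 2 the W-scheme
    n = len(b)
    if n < 4:                          # coarsest grid: solve directly
        return np.linalg.solve(A, b)
    v = gauss_seidel(A, b, v, n1)      # (1) pre-smoothing
    P = interpolation(n, (n - 1) // 2)
    R = P.T                            # restriction, proportional to P^T (Eq. (32))
    r2 = R @ (b - A @ v)               # (2)-(3) restricted residual
    A2 = R @ A @ P                     # (4) Galerkin coarse operator (Proposition 2)
    w2 = np.zeros(len(r2))
    for _ in range(mu):
        w2 = mg(A2, r2, w2, mu, n1, n2)
    v = v + P @ w2                     # (5)-(6) interpolate and correct
    return gauss_seidel(A, b, v, n2)   # (7) post-smoothing

# demo: 1D Dirichlet Poisson problem A = tridiag(-1, 2, -1), ground truth sin(pi x)
n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = np.linspace(0, 1, n + 2)[1:-1]
u = np.sin(np.pi * x)
b = A @ u
v = np.zeros(n)
for _ in range(5):
    v = mg(A, b, v, mu=1, n1=1, n2=1)  # five V(1,1)-cycles
```

With `mu=1` this performs V-cycles and with `mu=2` W-cycles; a handful of V(1,1)-cycles reduces the error substantially on this model problem.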

4 Experiments and Discussions


The previous section presented some basic iterative and multigrid numerical solvers that can be used
to solve discrete Poisson problems. The goal of this section is to present a set of experiments that
will help us understand interesting properties of the problems and methods explained above. In
particular, we aim at answering: How different are the solutions of the different discrete formulations of the Poisson equation? How does the topology of the set Θh impact the efficiency of numerical solvers? And, fundamentally, can multigrid methods converge to a solution in a time that is linear in the number of unknowns?

Changes in the solutions due to differences in the formulation. To illustrate some of the differences between the Extended and the Self-Adjoint Extended formulations presented in Section 2.1, let us consider the following example. When photographers capture scenes with a very large illumination range, a common problem is to end up with photographs that have poor local contrast (either in the bright or in the dark portions of the image). For example, Figure 6(a) illustrates a challenging scene with a high luminance range which cannot be successfully captured by a camera sensor. To improve the local contrast in the dark portions of the image we define a Poisson-like problem by: setting the domain Θ as the set of bright pixels (b), the Laplacian term f as an amplified version of the original Laplacian (c), and preserving the original values of the image


(a) Input test image I  (b) Θ := {x : I(x) > 40}  (c) f := α∆I on Ω\Θ  (d) g := I on Θ
(e) SA-Ext solved with Matlab Backslash  (f) Ext solved with Matlab Backslash  (g) SA-Ext solved with V-Multigrid  (h) Ext solved with V-Multigrid

Figure 6: The first row illustrates: (a) a test input image, (b) the domain Θ, (c) the Laplacian term f , and (d) the boundary Dirichlet conditions g. In this example, the Laplacian term was defined by amplifying the Laplacian on the dark areas of the original image by a factor α = 2 (with the objective of amplifying the local contrast in those regions). The second row illustrates the solutions obtained for the Self-Adjoint Extended (e) and the Extended (f) formulations of the discrete Poisson problem defined above, using the Matlab backslash numerical solver in these first two examples. Images (g) and (h) illustrate the solutions to the same systems of sparse linear equations when a multigrid V-scheme iterative solver is used.

in the bright areas (d) (Dirichlet conditions). The second row of Figure 6 illustrates the solutions obtained for the Self-Adjoint Extended (e, g) and the Extended (f, h) formulations. Matlab backslash is used as the numerical solver in the first two examples (e-f), while images (g) and (h) illustrate the solutions to the same systems of sparse linear equations when a multigrid V-scheme iterative solver is used. The solutions (e), (f) and (g) are equivalent (we numerically checked that these images are identical). This was expected, as the Self-Adjoint and the Extended discrete formulations are mathematically equivalent to each other. Therefore, given an arbitrary numerical solver, if it converges to the solution there will be no difference between the Self-Adjoint and the Extended formulations. Despite the previous observation, after a single multigrid iteration different solutions are obtained depending on whether the Self-Adjoint or the Extended formulation is chosen. In particular, we can see in this example (results (g) and (h)) that after a single V iteration, convergence is achieved only for the self-adjoint formulation of the problem (see Appendix B).

Impact of the topology of the boundary conditions Θh on the solver efficiency. As we showed in the previous example, Poisson-like problems can be solved efficiently by solving a sparse


Figure 7: Interpolation example solving a homogeneous Poisson equation (panels from left to right: input test image, Θ, g, interpolated result). In this illustrative example, approximately 30% of the pixel values were kept (in particular those associated to edges) and the interpolation was obtained by solving the homogeneous Poisson equation.

linear system of equations $Au = b$. The effectiveness of different numerical methods depends on the structure of the matrix $A$. From now on, we will analyze the structure of the matrix associated to the Self-Adjoint Extended problem. As $A$ is symmetric and positive definite, we can decompose it into a basis of orthonormal eigenvectors $\phi_k$ with associated (strictly) positive real eigenvalues $\lambda_k$. The data term can be written as $b = \sum_k c_k \phi_k$, where the $c_k$ are scalar coefficients. Then, it is easy to prove that $u$ can be obtained as $u = \sum_k \frac{c_k}{\lambda_k} \phi_k$. When the canonical Poisson equation is considered, i.e. homogeneous Neumann/Dirichlet boundary conditions are imposed on the exterior of a regular rectangular domain, the eigenvectors $\phi_k$ become cosine/sine functions and the decomposition expressed above becomes a cosine/sine Fourier decomposition [4]. In addition, the smaller the eigenvalue $\lambda_k$, the more critical the weight of the $k$-th component $\frac{c_k}{\lambda_k}\phi_k$.
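The decomposition above can be verified on any small symmetric positive definite system (a generic sketch, not tied to a particular Poisson matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)       # symmetric positive definite by construction
b = rng.standard_normal(6)

lam, Phi = np.linalg.eigh(A)      # eigenvalues lambda_k, orthonormal eigenvectors phi_k
c = Phi.T @ b                     # coefficients c_k of b in the eigenbasis
u = Phi @ (c / lam)               # u = sum_k (c_k / lambda_k) phi_k
```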
Let us illustrate the previous ideas with a simple example. Consider a test image I that needs
to be compressed e.g. by keeping only those pixels associated to edges of the image (similar to the
method proposed in [10]). To that end, we can define the set Θ as the set of pixels xi for which the
gradient is higher than a certain threshold t. The Dirichlet data term g may be set as the values of
the image I at the given positions xi , and finally, a null Laplacian data term f = 0 may be used for
interpolation. Figure 7 shows an example of an input image, the pixels kept and the corresponding
data term g. Numerically this problem can be stated as the Poisson-like system of equations (13)
(with the data terms specified above).
As described in previous sections, the discrete formulation of the problem leads to a sparse linear
system of equations Au = b. The structure of A depends on the shape of the set Θ, which determines
the eigenvectors of A. For example, Figure 8 illustrates four different sets Θ and the corresponding
eigenvectors associated to the smaller eigenvalues of the matrix A.
Figure 8 shows that when more anchorage points g(x) for x ∈ Θ are provided, the eigenvectors of
the problem become sharper (i.e. their Fourier transforms contain higher frequency content). This
is a very important issue as the effectiveness of different iterative methods strongly depends on the
frequency content of the error vector at each iteration step. Taking this into account, and recalling the Fourier-domain properties of the iterative methods presented in Section 3, we can conclude that the more points of the solution we fix, the more effective simple relaxation methods become.
Furthermore, two cases are of particular interest. First, when the set Θ is empty; in that case the problem cannot be uniquely solved, as solutions are defined up to a global constant. This explains why in that case we obtain a null eigenvalue associated to a constant eigenvector (as illustrated in the first row of Figure 8). A second interesting case is when the value of a single pixel is set (i.e. Θ = {x1 }). In this case, the solution is unique, while the eigenvectors of A associated to the smaller eigenvalues are smooth. This implies that setting a single pixel of the solution guarantees uniqueness while leading to the numerically most challenging problem.


Row 1: λ1 = 0, λ2 = 1 × 10−4 , λ3 = 1 × 10−4 , λ4 = 3 × 10−4
Row 2: λ1 = 8 × 10−6 , λ2 = 1 × 10−4 , λ3 = 1 × 10−4 , λ4 = 3 × 10−4
Row 3: λ1 = 4 × 10−4 , λ2 = 8 × 10−4 , λ3 = 1 × 10−3 , λ4 = 1 × 10−3
Row 4: λ1 = 9 × 10−3 , λ2 = 1 × 10−2 , λ3 = 1 × 10−2 , λ4 = 1 × 10−2

Figure 8: The first column shows the different sets Θ in which the value of the solution is provided, while the other four columns illustrate the four eigenvectors associated to the smallest eigenvalues of the matrix of the problem A (the corresponding eigenvalues for each row are listed above).

Evaluating multigrid convergence rate. In a third round of experiments, we compared the multigrid iterative method with: Gauss-Seidel, Conjugate Gradient, Preconditioned Conjugate Gradient [1], Bi-Conjugate Gradient, Symmetric LQ [12], Generalized Minimum Residual [15], Transpose-Free Quasi-Minimal Residual [1], and Bi-Conjugate Gradient Stabilized [18] methods. We tested our own implementations of the Multigrid, Gauss-Seidel and Conjugate Gradient methods, and used the implementations available in Matlab2 for the rest of the cases. Given a test input image I, we solved the discrete Poisson problem given by Equation (13) with f = ∆h I, g = I and Θ obtained by randomly sampling pixels of the discrete domain. Intuitively, this problem consists in recovering the image I from its Laplacian ∆I given a set of image values on Θ.
Figure 9 shows the evolution of the norm of the error ku − v(t)k along the iterations for the selected set of iterative solvers. As in the previous experiment, we observe that the fewer pixels the domain Θ fixes, the more challenging the problem becomes. In addition, we observe that the multigrid approach achieves efficient convergence rates even in the extreme case where Θ contains a single pixel.
2 Matlab version R2016b.


Figure 9: Evolution of the norm of the error ku − v n k as a function of time for different iterative methods and boundary conditions (from top to bottom: Θ containing ≈ 5% of the pixels, ≈ 1% of the pixels, and a single pixel). The size of the input image was 256 × 512.

Complementing the previous experiment, we also tested how the time required for convergence depends on the size of the problem (i.e. the number of pixels in the domain Ω). Figure 10 shows the time required to converge to a solution with an absolute mean error below 1 gray level (input images are in the range [0, 255]). The resolution of the input images was increased exponentially. Multigrid V and W schemes were tested and compared with the Matlab “backslash”3 routine. Matlab “backslash” solves large sparse linear problems in a smart and efficient manner by analyzing the matrix associated to the problem; the actual solver executed is chosen to exploit the properties of the matrix A. In general, we observed that for modest problem sizes (i.e. less than or equal to one million pixels) backslash was the fastest way to solve the problem. However, as the size of the domain increases,

3 In order to have more meaningful comparisons, we executed Matlab in a single processor thread.


multigrid schemes become more competitive. Multigrid methods are definitely the right choice when the number of pixels exceeds 2 million. In addition, multigrid schemes can solve Poisson-like problems in linear time, as can be observed in Figure 10. This is a key feature of multigrid approaches that makes them particularly useful for solving large PDE problems.

Figure 10: Execution time versus domain size (i.e. number of pixels). The left plot shows the time to convergence (in linear scale) for the V and W multigrid schemes compared with Matlab Backslash. The right plot illustrates the same results in logarithmic scale.

Finally, we evaluated how the multigrid parameters affect its performance. Figure 11 shows the time to convergence for different numbers of pre- and post-smoothing relaxation steps for both V and W schemes. As can be seen, a few pre- and post-relaxation steps suffice to obtain efficient multigrid schemes. In particular, we found that zero pre-relaxation steps and one post-relaxation step achieved the fastest convergence.

Figure 11: Execution time versus domain size for different values of pre- and post-relaxation steps. The first number in the
legend of the plots corresponds to the number of pre-smoothing G-S relaxation steps and the second number to the number
of post-smoothing relaxation steps. Results are shown in both linear and logarithmic scale.


5 Conclusions
We described in depth, analyzed, and compared iterative numerical methods to solve Poisson-like problems. The focus of this work was on the mathematical properties of different formulations of the problem and how they impact multigrid ideas. In addition, some of the ideas presented by Mainberger et al. [10] were reviewed and extended. A discrete self-adjoint extended formulation of the Poisson equation was described, which allows imposing Dirichlet boundary conditions at arbitrary points of the domain. Moreover, we studied under which conditions the problem is well-posed and presented some experimental validation examples. The results obtained show how the characteristics of the set of boundary conditions affect the convergence properties of different iterative methods. Then, we compared iterative numerical solvers and studied the relation between the convergence time and the size of the grid. We verified that convergence of multigrid methods can be achieved in linear time. In addition, multigrid V and W schemes were tested with different parameters, and we observed that very few pre- and post-smoothing relaxation steps provide optimal and stable convergence rates.

Acknowledgment
The authors would like to thank Jean-Michel Morel for fruitful discussions, Juan Francisco Garamendi
for pointing us to useful references, and Enric Meinhardt-Llopis for his interesting suggestions and
ideas which substantially improved this work. Work partly funded by CSIC and PEDECIBA (Uruguay), the Office of Naval Research under grant N00014-17-1-2552, and the ANR-DGA project ANR-12-ASTR-0035.

Image Credits

NASA4

Standard test image.

Appendix A: Restriction and Prolongation Operator Properties
Given a continuous function $u \in L^2(\mathbb{R})$, the Fourier transform can be defined as
$$ \hat{u}(\nu) = \int_{-\infty}^{+\infty} u(x)\, e^{-i 2\pi \nu x}\, dx. \tag{34} $$

From now on we will assume that $u$ has a limited spectral band. A continuous version of the discrete sampled function $u^h(k) \stackrel{\text{def}}{=} u(kh)$ can be written as the distribution
$$ \bar{u}(x) = u(x) \cdot \sum_{k \in \mathbb{Z}} \delta(x - kh), \tag{35} $$

where h corresponds to the sampling distance (equivalent to the pixel size for images).
4 http://dragon.larc.nasa.gov/retinex/


Figure 12: Illustration of the relation between the continuous and the sampled function in the spatial and frequency domain.

The Fourier transform of $\bar{u}(x)$ can be written, using standard Fourier transform properties [16], as
$$ \hat{\bar{u}}(\nu) = \hat{u}(\nu) * \widehat{\left(\textstyle\sum_{k\in\mathbb{Z}} \delta(\cdot - kh)\right)}(\nu) = \hat{u}(\nu) * \frac{1}{h} \sum_{k\in\mathbb{Z}} \delta\!\left( \nu - \frac{k}{h} \right), \tag{36} $$
which can also be related to the Fourier transform of the discrete signal $u^h(k)$ by
$$ \hat{\bar{u}}(\nu) = \int_{-\infty}^{+\infty} u(x) \sum_{k\in\mathbb{Z}} \delta(x - kh)\, e^{-i2\pi\nu x}\, dx = \sum_{k\in\mathbb{Z}} \int_{-\infty}^{+\infty} u(x)\, \delta(x - kh)\, e^{-i2\pi\nu x}\, dx = \sum_{k\in\mathbb{Z}} u(hk)\, e^{-i2\pi\nu hk}. \tag{37} $$
The relation between Equations (36)-(37) inspires the following definition for the Fourier transform of the discrete signal $u^h(k)$,
$$ \widehat{u^h}(\nu) = \sum_{k\in\mathbb{Z}} u^h(k)\, e^{-i2\pi h\nu k}. \tag{38} $$
Figure 12 illustrates some of the relations introduced above between a continuous signal $u(x)$ and its discrete counterpart $u^h(k)$, both in the spatial and in the Fourier domain. Equation (36) shows that the spectrum of the continuous signal is spread along the frequency axis, as the convolution with the Dirac comb produces a periodic replication of the original spectrum. If the support of $\hat{u}(\nu)$ is included in the interval $[-1/2h, 1/2h]$, no aliasing is introduced and $\widehat{u^h}(\nu) = \hat{u}(\nu)$ for $\nu \in [-1/2h, 1/2h]$ (Nyquist-Shannon sampling theorem).
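The frequency folding caused by sampling can be observed directly: two discrete modes whose frequencies differ by half the fine sampling rate become indistinguishable after 2x subsampling. A small numerical illustration (the choices of N and k are ours):

```python
import numpy as np

N, k = 64, 5
n = np.arange(N)
u1 = np.cos(2 * np.pi * k * n / N)             # low-frequency mode
u2 = np.cos(2 * np.pi * (k + N // 2) * n / N)  # mode shifted by half the sampling rate
gap = np.abs(u1[::2] - u2[::2]).max()          # identical once subsampled (aliasing)
```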
We can write the basic interpolation and restriction operators as
$$ \tilde{I}_h^{2h}(u^h)(k) = u^{2h}(k) = u(k(2h)) = u^h(2k), \tag{39} $$
$$ \tilde{I}_{2h}^{h}(u^{2h})(k) = u^h(k) = \begin{cases} u^{2h}(k/2) & \text{if } k \text{ is even}, \\ 0 & \text{otherwise}. \end{cases} \tag{40} $$

Figure 13: Illustration of the basic interpolation and restriction operators.
It is easy to see that these basic interpolation and restriction operators wrap the frequency space to the intervals $[-1/2h, 1/2h]$ and $[-1/2(2h), 1/2(2h)]$ respectively. In this process, the frequencies $\{\nu, \nu \pm 1/2h\}$ in the interval $[-1/2h, 1/2h]$ are mapped to the same frequency $\nu_0 \in [-1/2(2h), 1/2(2h)]$, as illustrated in Figure 13. In order to control aliasing effects during restriction, and to avoid the artificial generation of high frequencies during interpolation, more regular interpolation and restriction operators can be defined as
$$ I_h^{2h}(u^h) = \tilde{I}_h^{2h}(K^h * u^h), \qquad I_{2h}^h(u^{2h}) = c\, K^h * \tilde{I}_{2h}^h(u^{2h}). \tag{41} $$
Here $K^h$ represents a smoothing kernel, and $c$ a scalar constant. The effect of the smoothing operation can be seen as a filtering step in the Fourier domain, which helps reducing the amplitude of the components of $u^h$ (and of $\tilde{I}_{2h}^h(u^{2h})$) in the spectral band $[-1/2h, -1/4h] \cup [1/4h, 1/2h]$.
The order of the kernel $K^h$ should be at least the order of the partial differential equation we aim to solve. For example, the Poisson equation is a second order partial differential equation and therefore a kernel of order at least two is required.
For the case of two-dimensional signals, the full-weighting (or bilinear) stencil
$$ K = \begin{pmatrix} \tfrac{1}{4} & \tfrac{1}{2} & \tfrac{1}{4} \\ \tfrac{1}{2} & 1 & \tfrac{1}{2} \\ \tfrac{1}{4} & \tfrac{1}{2} & \tfrac{1}{4} \end{pmatrix} \tag{42} $$
has order 2 (see reference [8] for a detailed discussion) and is the kernel used in the present work.

Appendix B: Optimal Problem in a Coarser Grid


Given the discrete linear numerical problem
$$ A^h u^h = b^h, \tag{43} $$

we will assume that Ah satisfies the following properties:


• $\langle A^h v_1^h, v_2^h \rangle = \langle v_1^h, A^h v_2^h \rangle$ for all $v_{1,2}^h \in \Omega^h$ (self-adjoint operator).

• $\langle A^h v^h, v^h \rangle > 0$ for any nonzero $v^h \in \Omega^h$ (positive definite operator).

Proposition 3. Solving Equation (43) is equivalent to the minimization problem
$$ u^h = \arg\min_{u^h \in \Omega^h} \frac{1}{2} \langle A^h u^h, u^h \rangle - \langle b^h, u^h \rangle \stackrel{\text{def}}{=} \arg\min_{u^h \in \Omega^h} F^h(u^h). \tag{44} $$

The previous proposition is a classic result in linear algebra; nevertheless, we include its proof for completeness and because we shall reuse its formalism. Consider the value of $F^h(u^h + v^h)$ for an arbitrary vector $v^h \in \Omega^h$,
$$ \begin{aligned} F^h(u^h + v^h) &= \frac{1}{2}\langle A^h (u^h + v^h), (u^h + v^h)\rangle - \langle b^h, u^h + v^h\rangle \\ &= \frac{1}{2}\left( \langle A^h u^h, u^h\rangle + \langle A^h u^h, v^h\rangle + \langle A^h v^h, u^h\rangle + \langle A^h v^h, v^h\rangle \right) - \langle b^h, u^h\rangle - \langle b^h, v^h\rangle \\ &= F^h(u^h) + \frac{1}{2}\left( 2\langle A^h u^h, v^h\rangle + \langle A^h v^h, v^h\rangle \right) - \langle b^h, v^h\rangle, \end{aligned} \tag{45} $$
where the self-adjoint property of the matrix $A^h$ was used to simplify the middle terms in the last step of Equation (45). Equation (45) can be expressed as
$$ F^h(u^h + v^h) = F^h(u^h) + \langle A^h u^h, v^h\rangle - \langle b^h, v^h\rangle + \frac{1}{2}\langle A^h v^h, v^h\rangle = F^h(u^h) + \langle A^h u^h - b^h, v^h\rangle + \underbrace{\frac{1}{2}\langle A^h v^h, v^h\rangle}_{>0}, \tag{46} $$
where (from the positive definite condition of $A^h$, i.e. $\langle A^h v^h, v^h\rangle > 0$ for any nonzero $v^h \in \Omega^h$) we know that the third term of Equation (46) is always positive. Then the condition $\langle A^h u^h - b^h, v^h\rangle = 0$ for all $v^h \in \Omega^h$ is a sufficient condition to guarantee that $u^h$ minimizes $F^h$.
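Proposition 3 can also be checked numerically: for a self-adjoint positive definite $A$, the solution of $Au = b$ minimizes $F$ against arbitrary perturbations (a generic sketch on a random small system):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)            # self-adjoint, positive definite
b = rng.standard_normal(5)
u = np.linalg.solve(A, b)              # solution of Au = b

def F(x):
    # the quadratic functional of Eq. (44)
    return 0.5 * x @ A @ x - b @ x
```

Every perturbation $v$ increases $F$, since $F(u + v) = F(u) + \frac{1}{2}\langle Av, v\rangle$.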
We just proved that solving Equation (43) is equivalent to finding the vector $u^h \in \Omega^h$ that minimizes the discrete functional $F^h$ defined by Equation (44) [4, Chapter 10]. This equivalence will help us understand in a very intuitive way how to define the problem in a coarse grid and how that can be used to improve a current guess in the original fine grid. To that end, let us imagine that we have a current approximation $v^h$ of the (unknown) solution $u^h$ of Equation (43), and that we are looking for a correction signal $w^{2h} \in \Omega^{2h}$ that improves the current guess $v^h$, i.e. $F^h(v^h + I_{2h}^h w^{2h}) < F^h(v^h)$. In particular, we can formally define the optimal choice for $w^{2h}$ as
$$ w^{2h} = \arg\min_{w^{2h} \in \Omega^{2h}} F^h(v^h + I_{2h}^h w^{2h}). \tag{47} $$


Proposition 2 can be proved by expanding the functional $F^h(v^h + I_{2h}^h w^{2h})$,
$$ \begin{aligned} F^h(v^h + I_{2h}^h w^{2h}) &= \frac{1}{2}\langle A^h (v^h + I_{2h}^h w^{2h}), v^h + I_{2h}^h w^{2h}\rangle - \langle b^h, v^h + I_{2h}^h w^{2h}\rangle \\ &= \frac{1}{2}\langle A^h v^h, v^h\rangle + \frac{1}{2}\langle A^h I_{2h}^h w^{2h}, v^h\rangle + \frac{1}{2}\langle v^h, A^h I_{2h}^h w^{2h}\rangle \\ &\quad + \frac{1}{2}\langle A^h I_{2h}^h w^{2h}, I_{2h}^h w^{2h}\rangle - \langle b^h, v^h\rangle - \langle b^h, I_{2h}^h w^{2h}\rangle \\ &= \frac{1}{2}\langle A^h v^h, v^h\rangle - \langle b^h, v^h\rangle \\ &\quad + \frac{1}{2}\langle A^h I_{2h}^h w^{2h}, I_{2h}^h w^{2h}\rangle + \langle v^h, A^h I_{2h}^h w^{2h}\rangle - \langle b^h, I_{2h}^h w^{2h}\rangle \\ &= F^h(v^h) + \frac{1}{2}\langle (I_{2h}^h)^T A^h I_{2h}^h w^{2h}, w^{2h}\rangle - \langle (I_{2h}^h)^T (b^h - A^h v^h), w^{2h}\rangle \\ &= F^h(v^h) + \frac{1}{2}\langle A^{2h} w^{2h}, w^{2h}\rangle - \langle b^{2h}, w^{2h}\rangle, \end{aligned} \tag{48} $$
where $A^{2h} = (I_{2h}^h)^T A^h I_{2h}^h$ and $b^{2h} = (I_{2h}^h)^T (b^h - A^h v^h)$. In the last row of Equation (48) the first term is independent of $w^{2h}$, which leads to the equivalence
$$ w^{2h} = \arg\min_{w^{2h} \in \Omega^{2h}} F^h(v^h + I_{2h}^h w^{2h}) = \arg\min_{w^{2h} \in \Omega^{2h}} \frac{1}{2}\langle A^{2h} w^{2h}, w^{2h}\rangle - \langle b^{2h}, w^{2h}\rangle \stackrel{\text{def}}{=} \arg\min_{w^{2h} \in \Omega^{2h}} F^{2h}(w^{2h}). \tag{49} $$

The deduction carried out in Equation (48) is of crucial importance to understand how the original problem should be mapped to a coarser grid. In particular, it allows deriving the following important properties:

• In the coarse grid, the data term can be defined as $b^{2h} = (I_{2h}^h)^T (b^h - A^h v^h)$, which corresponds to a coarse version of the residual $r^h = b^h - A^h v^h$. The operator $(I_{2h}^h)^T$ maps $r^h$ from the fine grid $\Omega^h$ to the coarse grid $\Omega^{2h}$; hence, it plays the role of the restriction operator $I_h^{2h} = (I_{2h}^h)^T$.

• The coarse version of the matrix $A^h$ is $A^{2h} = (I_{2h}^h)^T A^h I_{2h}^h = I_h^{2h} A^h I_{2h}^h$.

• The definition of the problem in the finer grid should guarantee that $A^h$ is a positive definite self-adjoint operator.


References
[1] R. Barrett, M. Berry, T.F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst, Templates for the solution of linear systems: building blocks for iterative methods, Society for Industrial and Applied Mathematics (SIAM), 1994. ISBN: 978-0-89871-328-2.

[2] P. Bhat, B. Curless, M. Cohen, and C. L. Zitnick, Fourier analysis of the 2D screened
Poisson equation for gradient domain problems, Lecture Notes in Computer Science (including
subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5303
LNCS (2008), pp. 114–128. https://doi.org/10.1007/978-3-540-88688-4_9.

[3] J.E. Boillat, Load balancing and Poisson equation in a graph, Concurrency: Practice and Experience, 2 (1990), pp. 289–313.

[4] W.L. Briggs, V.E. Henson, and S.F. McCormick, A Multigrid Tutorial (2nd Ed.), Soci-
ety for Industrial and Applied Mathematics (SIAM), 2000. ISBN 0-89871-462-1.

[5] L.C. Evans, Partial differential equations, Graduate studies in mathematics, American Math-
ematical Society, 1998. ISBN 0-8218-0772-2.

[6] D.G. Feingold and R.S. Varga, Block diagonally dominant matrices and generalizations of
the Gershgorin circle theorem, Pacific Journal of Mathematics, 12 (1962), pp. 1241–1250.

[7] W. H. Press, S.A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical


Recipes in C, Cambridge University Press, 1992. ISBN 0521431085.

[8] P.W. Hemker, On the order of prolongations and restrictions in multigrid procedures, Journal
of Computational and Applied Mathematics, 32 (1990), pp. 423–429.

[9] A. Levin, A. Zomet, S. Peleg, and Y. Weiss, Seamless Image Stitching in the Gradient Domain, European Conference on Computer Vision, 4 (2004), pp. 377–389. http://dx.doi.org/10.1007/978-3-540-24673-2_31.

[10] M. Mainberger, A. Bruhn, J. Weickert, and S. Forchhammer, Edge-based compression of cartoon-like images with homogeneous diffusion, Pattern Recognition, 44 (2011), pp. 1859–1873. http://dx.doi.org/10.1016/j.patcog.2010.08.004.

[11] J. McCann and N. S. Pollard, Real-time gradient-domain painting, ACM Transactions on


Graphics, 27 (2008), p. 1. http://dx.doi.org/10.1145/1360612.1360692.

[12] C.C. Paige and M.A. Saunders, Solution of sparse indefinite systems of linear equations,
SIAM Journal on Numerical Analysis, 12 (1975), pp. 617–629.

[13] P. Pérez, M. Gangnet, and A. Blake, Poisson image editing, ACM Transactions on
Graphics, 22 (2003), p. 313. http://dx.doi.org/10.1145/882262.882269.

[14] R. Raskar, A. Ilie, and J. Yu, Image Fusion for Context Enhancement and Video Surrealism, in International Symposium on Non-Photorealistic Animation and Rendering, 2004, pp. 85–152. http://dx.doi.org/10.1145/987657.987671.

[15] Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving
nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 7 (1986),
pp. 856–869.


[16] R.S. Strichartz, A guide to distribution theory and Fourier transforms, World Scientific
Publishing Co Inc, 2003.

[17] J. Sun, J. Jia, C-K. Tang, and H-Y. Shum, Poisson matting, ACM Transactions on
Graphics, 23 (2004), p. 315. http://dx.doi.org/10.1145/1015706.1015721.

[18] H.A. Van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the
solution of nonsymmetric linear systems, SIAM Journal on scientific and Statistical Computing,
13 (1992), pp. 631–644.

