Solving differential-algebraic systems of equations
(DAEs)
AM 205
Jovana Andrejevic, Catherine Ding
November 4, 2020
Table of Contents
Motivation
What are DAEs?
Definition
Example: The simple pendulum
How can we solve DAEs?
Backward differentiation formulae
Newton’s method
Practical considerations
The double pendulum (Exercise)
Applications of DAEs
DAEs arise in the mathematical modeling of a wide variety of
problems from engineering and science, such as
▶ multibody problem
▶ flexible mechanics
▶ electrical circuit design
▶ optimal control
▶ incompressible fluids
▶ molecular dynamics
▶ chemical kinetics (quasi steady state, partial equilibrium
approximations, chemical process control).
Differential-algebraic systems (DAEs)
In general, we can write any system of differential equations in
implicit form as
F (t, x, x ′ ) = 0
where x and x ′ may be vectors. For a system of ordinary
differential equations, the matrix ∂F /∂x ′ is not singular. A
differential-algebraic system arises when ∂F /∂x ′ is singular.
Another way to think about this is that some equations in F are
purely algebraic; they contain no derivative terms with respect to
t, so some rows of ∂F /∂x ′ are zero, producing a singular matrix.
Differential-algebraic systems (DAEs)
One important class of DAEs are those written in semi-explicit
form:
y ′ = f (t, y , z)
0 = g (t, y , z)
where y are the differential variables, and z are algebraic variables.
Decoupling y and z has nicer implications for numerical
integration. The DAE
y1′ + y2′ + 2y2 = 0
y1 + y2 − t² = 0
is not in semi-explicit form, but can be converted through variable
substitution. We’ll restrict our discussion today to semi-explicit
DAEs.
Differential-algebraic systems (DAEs)
Setting
y = y1 + y2 (differential variable)
z = y2 (algebraic variable)
we may obtain
y ′ = −2z
0 = y − t²
This is indeed in the form
y ′ = f (t, y , z)
0 = g (t, y , z).
Differential-algebraic systems (DAEs)
In some cases, DAEs arise naturally as limits of singularly
perturbed ODEs:
y ′ = f (t, y , z)
ϵz ′ = g (t, y , z)
where ϵ is small. The limit of ϵ → 0 results in a DAE.
Since z will change rapidly for small ϵ, our integration scheme
must resolve widely disparate time scales - a stiff problem. Since
the underlying problem is stiff, we’ll see that it’s a good idea to
consider implicit methods for integrating DAEs as well.
Example: The simple pendulum
For a pendulum of unit mass and length, the system of equations
which describes its evolution in Cartesian coordinates is
x ′ = u,
y′ = v,
u ′ = −λx,
v ′ = −λy − g ,
0 = x² + y² − 1,
where x, y are the position coordinates of the pendulum, u, v the
velocities, λ the tension per unit length, and g = 9.80665 the
acceleration due to gravity.
Hidden constraints
▶ This system has 5 unknowns to solve for: x, y , u, v , λ.
▶ It appears to have 4 independent degrees of freedom, due to
the single constraint.
▶ However, there are in fact only 2 independent degrees of
freedom in this problem, due to hidden constraints that must
be satisfied.
▶ How do we obtain the hidden constraints?
Differentiation index
▶ Differentiate the algebraic constraint, 0 = x² + y² − 1,
repeatedly with respect to t:
once: 0 = 2xx′ + 2yy′ → 0 = xu + yv
twice: 0 = u² + v² − λ(x² + y²) − gy
three times: λ′ = −3gv
▶ Reveals new constraint equations, for a total of 3 constraints
that govern our consistent initial conditions.
▶ Three differentiations were needed to obtain a pure ODE system -
this is known as the differentiation index of the DAE, and is a
measure of how far the DAE system is from its corresponding ODE
(a symbolic check of these constraints is sketched below).
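The hidden constraints can be checked symbolically, for example with sympy; the sketch below (variable names are illustrative) differentiates the length constraint and substitutes the pendulum ODEs, reproducing the equations above.

```python
import sympy as sp

t, g = sp.symbols('t g')
x, y, u, v, lam = (sp.Function(n)(t) for n in ('x', 'y', 'u', 'v', 'lam'))

# Time derivatives of the differential variables, taken from the pendulum DAE
ode = {x.diff(t): u, y.diff(t): v,
       u.diff(t): -lam * x, v.diff(t): -lam * y - g}

c = x**2 + y**2 - 1                      # index-3 (length) constraint
for k in range(1, 4):
    c = sp.expand(c.diff(t).subs(ode))   # differentiate, then substitute the ODEs
    print(f'after {k} differentiation(s): 0 =', c)

# The third derivative still contains lam'(t); imposing the earlier constraints
# x^2 + y^2 = 1 and x*u + y*v = 0 reduces it to lam' = -3*g*v.
```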
Differentiation index
▶ Any intermediate equation is a valid substitute for our
algebraic constraint:
index 3: 0 = x² + y² − 1 (length constraint)
index 2: 0 = xu + yv (tangential motion)
index 1: 0 = u² + v² − λ(x² + y²) − gy (centripetal accel.)
index 0: λ′ = −3gv (ODE)
▶ However, choosing one constraint means forgoing the guarantee of
the others; with the index 2 formulation, for example, tangential
motion is enforced exactly, but the total length becomes susceptible
to numerical drift.
Differentiation index
The index 3 pendulum DAE can in fact be regarded as a reduced
form of a singularly perturbed index 1 DAE in which the rod is
replaced by a stiff spring of spring constant k = ϵ⁻¹:
x ′ = u,
y′ = v,
u ′ = −λx,
v ′ = −λy − g ,
ϵλ = 1 − 1/√(x² + y²).
Here, λ is the spring force per unit length. In the limit ϵ → 0, the
last equation can be rearranged into our length constraint.
Summary - Pros and Cons of DAEs
+ It’s typically advantageous to work with the DAE directly,
provided we have consistent initial conditions
− Initialization can pose a challenge - have to satisfy hidden
constraints
+ DAEs allow us to explicitly enforce constraints
+ A reduced-form higher-index DAE is often simpler to solve
than a singularly perturbed ODE or lower-index DAE that is
stiff or has fast oscillations.
Numerical integration of DAEs
▶ Due to the constraints that must be satisfied at all times,
explicit methods are typically not well suited to solving DAEs.
▶ Instead, we use implicit methods to discretize the differential
part of DAEs, and solve for the algebraic variables
simultaneously.
▶ We will look at one particular family of implicit methods:
Backward differentiation formulae.
BDF Methods
The backward differentiation formulae (BDF)¹ are a family of
implicit multi-step methods which discretize y ′ = f (t, y ) as

y_{k+1} + Σ_{s=0}^{p−1} α_{k−s} y_{k−s} = βh f(t_{k+1}, y_{k+1}),

where p is the order of the method.
The first few formulae:

p = 1:  y_{k+1} − y_k = h f(t_{k+1}, y_{k+1})   (backward Euler)
p = 2:  y_{k+1} − (4/3) y_k + (1/3) y_{k−1} = (2/3) h f(t_{k+1}, y_{k+1})
p = 3:  y_{k+1} − (18/11) y_k + (9/11) y_{k−1} − (2/11) y_{k−2} = (6/11) h f(t_{k+1}, y_{k+1})
¹ See: wikipedia.org/wiki/Backward_differentiation_formula
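The coefficients above can be double-checked numerically: an order-p BDF formula is exact for all polynomials of degree ≤ p on a uniform grid, which yields a small linear system. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def bdf_coefficients(p):
    """Return (alpha_0, ..., alpha_{p-1}, beta) for the order-p BDF formula
    y_{k+1} + sum_s alpha_s y_{k-s} = beta * h * f(t_{k+1}, y_{k+1}),
    by requiring exactness for polynomials of degree <= p on a uniform grid."""
    tau = -np.arange(1.0, p + 1)            # past nodes t_{k-s}, in units of h, relative to t_{k+1}
    A = np.zeros((p + 1, p + 1))
    b = np.zeros(p + 1)
    for j in range(p + 1):                  # test polynomial phi(tau) = tau**j
        A[j, :p] = tau**j                   # the alphas multiply the past values phi(tau_s)
        A[j, p] = -1.0 if j == 1 else 0.0   # beta multiplies phi'(0)
        b[j] = -1.0 if j == 0 else 0.0      # phi(0) term moved to the right-hand side
    return np.linalg.solve(A, b)

print(np.round(bdf_coefficients(2), 6))   # approx [-1.333333, 0.333333, 0.666667]
print(np.round(bdf_coefficients(3), 6))   # approx [-1.636364, 0.818182, -0.181818, 0.545455]
```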
BDF Methods
Consider the DAE system
y ′ = f (t, y , z)
0 = g (t, y , z)
We discretize the differential equations and leave the algebraic
equations, resulting in a root-finding problem at each step:
y_{k+1} + Σ_{s=0}^{p−1} α_{k−s} y_{k−s} − βh f_{k+1} = 0
g_{k+1} = 0

with y_{k+1}, z_{k+1} as our unknowns to solve for.
Newton’s method
In general, our discretized system of equations will be nonlinear in
the unknowns, so the equations cannot be rearranged for yk+1 ,
zk+1 explicitly. Instead, our root-finding problem can be solved via
Newton’s method.
Newton’s method finds a root x* of a function r (x), i.e. r (x*) = 0,
by iterating

x_{i+1} = x_i − J⁻¹(x_i) r(x_i)

from an initial guess x_0 until convergence, where J is the Jacobian
of r. In practice we solve the matrix equation

J(x_i) ∆x_i = −r(x_i),

and set x_{i+1} = x_i + ∆x_i. It’s natural to think of r as our residual,
measuring the distance from 0.
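A minimal generic Newton solver along these lines (a sketch; the function names and the toy circle/line example are illustrative):

```python
import numpy as np

def newton(residual, jacobian, x0, tol=1e-10, maxiter=50):
    """Solve residual(x) = 0 by Newton's method.
    residual: callable returning r(x); jacobian: callable returning J(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(maxiter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(jacobian(x), -r)   # J(x_i) dx = -r(x_i)
        x = x + dx                              # x_{i+1} = x_i + dx
    return x

# Toy example: intersection of the unit circle with the line x = y
r = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton(r, J, [1.0, 0.0]))   # approx [0.7071, 0.7071]
```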
Newton’s Method
Consider a DAE system with m differential and n algebraic degrees
of freedom, given by:
y ′ = f (t, q) = f (t, y , z)
0 = g (t, q) = g (t, y , z)
where q = (y , z), and f, y ∈ ℝ^m, g, z ∈ ℝ^n. Our root-finding
problem is

r_{k+1} = ⎡ y_{k+1} + Σ_{s=0}^{p−1} α_{k−s} y_{k−s} − βh f_{k+1} ⎤ = 0
          ⎣ g_{k+1}                                             ⎦

with Jacobian

J_{k+1} = ∇_q r |_{k+1} = [ ∂r/∂y  ∂r/∂z ]_{k+1}
        = ⎡ I_{m×m} − βh ∂f/∂y   −βh ∂f/∂z ⎤
          ⎣ ∂g/∂y                 ∂g/∂z    ⎦_{k+1}
Newton’s Method
Decompose into two contributions: J_{k+1} = A + B J^{dae}_{k+1}, where

A = ⎡ I_{m×m}  0       ⎤        B = ⎡ −βh I_{m×m}  0       ⎤
    ⎣ 0        0_{n×n} ⎦ ,          ⎣ 0            I_{n×n} ⎦ ,

and

J^{dae}_{k+1} = ⎡ ∂f/∂y  ∂f/∂z ⎤
                ⎣ ∂g/∂y  ∂g/∂z ⎦_{k+1}

is the purely problem-specific part of the Jacobian, while A and B
depend on the details of the integrator (through βh).
Newton’s Method
For the simple pendulum:

f (t, q) = ⎡ u       ⎤
           ⎢ v       ⎥
           ⎢ −λx     ⎥
           ⎣ −λy − g ⎦

g (t, q) = x² + y² − 1

where q = (x, y, u, v, λ), and

J^{dae} = ⎡  0    0   1   0    0 ⎤
          ⎢  0    0   0   1    0 ⎥
          ⎢ −λ    0   0   0   −x ⎥
          ⎢  0   −λ   0   0   −y ⎥
          ⎣ 2x   2y   0   0    0 ⎦
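These problem-specific pieces could be coded, for instance, as follows (a sketch assuming the ordering q = (x, y, u, v, λ); the function names are illustrative, not those of the provided framework):

```python
import numpy as np

G = 9.80665  # acceleration due to gravity

def pendulum_f(t, q):
    """Differential part f(t, q) of the index-3 simple pendulum."""
    x, y, u, v, lam = q
    return np.array([u, v, -lam * x, -lam * y - G])

def pendulum_g(t, q):
    """Algebraic constraint g(t, q) = x^2 + y^2 - 1."""
    x, y, u, v, lam = q
    return np.array([x**2 + y**2 - 1.0])

def pendulum_jac(t, q):
    """Problem-specific Jacobian J_dae = d(f, g)/d(x, y, u, v, lambda)."""
    x, y, u, v, lam = q
    return np.array([[0.0,  0.0, 1.0, 0.0,  0.0],
                     [0.0,  0.0, 0.0, 1.0,  0.0],
                     [-lam, 0.0, 0.0, 0.0, -x ],
                     [0.0, -lam, 0.0, 0.0, -y ],
                     [2*x,  2*y, 0.0, 0.0,  0.0]])
```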
Newton’s Method
The complete Jacobian is J_{k+1} = A + B J^{dae}_{k+1} with

A = ⎡ 1  0  0  0  0 ⎤       B = ⎡ −βh   0    0    0   0 ⎤
    ⎢ 0  1  0  0  0 ⎥           ⎢  0   −βh   0    0   0 ⎥
    ⎢ 0  0  1  0  0 ⎥           ⎢  0    0   −βh   0   0 ⎥
    ⎢ 0  0  0  1  0 ⎥           ⎢  0    0    0   −βh  0 ⎥
    ⎣ 0  0  0  0  0 ⎦ ,         ⎣  0    0    0    0   1 ⎦ ,

J^{dae}_{k+1} = ⎡  0         0        1   0    0       ⎤
                ⎢  0         0        0   1    0       ⎥
                ⎢ −λ_{k+1}   0        0   0   −x_{k+1} ⎥
                ⎢  0        −λ_{k+1}  0   0   −y_{k+1} ⎥
                ⎣ 2x_{k+1}  2y_{k+1}  0   0    0       ⎦
Newton’s iterations for DAE integration
1: To solve for qk+1 at integration step t = tk+1 :
2: Initialize guess q = qk
3: Compute the residual r (t, q)
4: i =0
5: while ||r ||2 > tol and i < maxiter do
6: J = A + BJ dae (t, q)
7: Solve J∆q = −r
8: q ← q + ∆q
9: Compute the new residual r (t, q)
10: i ←i +1
11: end while
12: Solution qk+1 = q
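A possible Python rendering of this loop (a sketch, not the provided solver class; it assumes the pendulum functions from the earlier sketch, a history list y_hist = [y_k, y_{k−1}, ...] of past differential variables, and BDF coefficients alpha, beta):

```python
import numpy as np

def bdf_step(t_next, q_guess, y_hist, alpha, beta, h, f, g, jac_dae, m,
             tol=1e-12, maxiter=20):
    """One implicit BDF step for the semi-explicit DAE y' = f, 0 = g,
    with q = (y, z), m differential variables, and y_hist[s] = y_{k-s}."""
    n = len(q_guess) - m
    A = np.zeros((m + n, m + n))
    A[:m, :m] = np.eye(m)
    B = np.zeros((m + n, m + n))
    B[:m, :m] = -beta * h * np.eye(m)
    B[m:, m:] = np.eye(n)
    hist = sum(a * y for a, y in zip(alpha, y_hist))   # sum_s alpha_{k-s} y_{k-s}

    def residual(q):
        return np.concatenate([q[:m] + hist - beta * h * f(t_next, q),
                               g(t_next, q)])

    q = np.array(q_guess, dtype=float)
    r = residual(q)
    i = 0
    while np.linalg.norm(r) > tol and i < maxiter:
        J = A + B @ jac_dae(t_next, q)       # J = A + B J_dae(t, q)
        q = q + np.linalg.solve(J, -r)       # solve J dq = -r, update q
        r = residual(q)
        i += 1
    return q

# e.g. one backward-Euler (p = 1) step of the pendulum: alpha = [-1.0], beta = 1.0
# q_next = bdf_step(t + h, q, [q[:4]], [-1.0], 1.0, h,
#                   pendulum_f, pendulum_g, pendulum_jac, m=4)
```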
Exercise 1
▶ The provided framework for a DAE solver class sets up a
third-order BDF method with first and second order start-up
steps.
▶ Your task is to implement the Newton’s method routine
performed at each integration step.
▶ Then, test your solver on the provided simple pendulum
problem.
Index comparison
We can look at how well all constraints are maintained when
varying the index of the simple pendulum example:
Notice drift in constraints that are not explicitly enforced.
Practical considerations
▶ Newton’s method initialization: We can improve our initial
guess for the Newton iterations by constructing a Lagrange
interpolant of our prior solutions (featured as an optional
extension; a minimal version is sketched after this list).
▶ Convergence: Error convergence for DAEs can be
complicated, though there are analytical results for index ≤ 2
and special index 3 examples. Strict tolerance on Newton
iterations is critical to achieving expected BDF convergence
(See references for more detail on this topic).
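A minimal version of the interpolant-based initial guess, assuming arrays t_hist and q_hist of a few recent times and solutions (names are illustrative):

```python
import numpy as np
from scipy.interpolate import lagrange

def predict(t_next, t_hist, q_hist):
    """Extrapolate an initial Newton guess for q(t_next) by fitting a
    Lagrange interpolant through the last few solutions, component-wise."""
    q_hist = np.asarray(q_hist)              # shape (len(t_hist), dim)
    return np.array([lagrange(t_hist, q_hist[:, j])(t_next)
                     for j in range(q_hist.shape[1])])
```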
Convergence
There is a two-variable ODE system - the state-space form - of the
pendulum which can be solved explicitly:
ϕ′ = ω,
ω ′ = −g cos ϕ,
and maps back to the DAE variables as
x = cos ϕ,
y = sin ϕ,
u = −ω sin ϕ,
v = ω cos ϕ,
λ = ω² − g sin ϕ.
We can use an adaptive step method with strict tolerance (using
odeint) as a reference solution.
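For instance, a reference trajectory could be generated as follows (the initial condition and tolerances are illustrative):

```python
import numpy as np
from scipy.integrate import odeint

G = 9.80665

def state_space(s, t):
    phi, omega = s
    return [omega, -G * np.cos(phi)]

t = np.linspace(0.0, 10.0, 1001)
sol = odeint(state_space, [0.0, 0.0], t, rtol=1e-12, atol=1e-12)  # start horizontal, at rest
phi, omega = sol[:, 0], sol[:, 1]

# Map back to the DAE variables for comparison with the BDF solution
x, y = np.cos(phi), np.sin(phi)
u, v = -omega * np.sin(phi), omega * np.cos(phi)
lam = omega**2 - G * np.sin(phi)
```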
Convergence
Our error turns out to be O(h²):
In general, the convergence of the algebraic variables can differ from
that of the differential variables, and may be of lower order.
Practical considerations
▶ Step size adaptivity: Start-up steps for fixed step integrators
incur larger errors; in practice, we can solve DAEs with an
adaptive step method, and take small initial steps to avoid the
larger error penalty.
▶ Conditioning: The Jacobian

  J_{k+1} = ⎡ I_{m×m} − βh ∂f/∂y   −βh ∂f/∂z ⎤
            ⎣ ∂g/∂y                 ∂g/∂z    ⎦_{k+1}

  can become ill-conditioned for very small h, particularly for
  higher-index DAEs where ∂g/∂z = 0. This may call for better
  approaches to solving J∆q = −r (e.g. regularization,
  preconditioned conjugate gradient methods).
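A quick way to observe this trend is to evaluate cond(J) of the index 3 pendulum Jacobian as h shrinks; a minimal sketch, assuming backward Euler (β = 1) and an arbitrary state:

```python
import numpy as np

def pendulum_jac(q):
    """Problem-specific Jacobian of the index-3 pendulum (as in the earlier sketch)."""
    x, y, u, v, lam = q
    return np.array([[0.0,  0.0, 1.0, 0.0,  0.0],
                     [0.0,  0.0, 0.0, 1.0,  0.0],
                     [-lam, 0.0, 0.0, 0.0, -x ],
                     [0.0, -lam, 0.0, 0.0, -y ],
                     [2*x,  2*y, 0.0, 0.0,  0.0]])

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0, 0.0])   # a state on the unit circle
beta = 1.0                                                 # backward Euler, for simplicity
for h in (1e-1, 1e-3, 1e-6):
    A = np.diag([1.0, 1.0, 1.0, 1.0, 0.0])
    B = np.diag([-beta * h] * 4 + [1.0])
    J = A + B @ pendulum_jac(q)
    print(f'h = {h:.0e}, cond(J) = {np.linalg.cond(J):.2e}')   # grows as h decreases
```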
Example: Crumpling a thin sheet
▶ A thin sheet may be modeled as a network of masses and
springs:
The equations of motion for a node i of mass m are given by
ẋi = vi
mv̇i = Fi ,
where the net force Fi includes contributions from stretching,
damping, bending, self-avoidance, and external forces.
Example: Crumpling a thin sheet
▶ Typically, these equations form an ODE system that we can
solve, e.g. using the classic RK4 method.
▶ However, when acceleration is small, the equations of motion
can be approximated as the DAE
ẋi = vi
Fi = 0
▶ Solved using a 3rd order adaptive step BDF method (implicit)
▶ Turns out to be very efficient for this problem - can take large
integration steps
▶ Integrator automatically detects when DAE formulation is
appropriate, and switches to ODE formulation otherwise
The double pendulum
Once we have a general solver in place, we can easily swap out the
DAE system. As an extension, consider the double pendulum:
x1′ = u1 ,
y1′ = v1 ,
u1′ = −λ1 x1 − λ2 (x1 − x2 ),
v1′ = −λ1 y1 − λ2 (y1 − y2 ) − g ,
x2′ = u2 ,
y2′ = v2 ,
u2′ = −λ2 (x2 − x1 ),
v2′ = −λ2 (y2 − y1 ) − g ,
0 = x1² + y1² − 1,
0 = (x2 − x1)² + (y2 − y1)² − 1.
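As a starting point for the exercise, the right-hand sides can be transcribed directly (a sketch assuming the ordering q = (x1, y1, u1, v1, x2, y2, u2, v2, λ1, λ2); deriving and implementing the Jacobian is left to the exercise):

```python
import numpy as np

G = 9.80665

def double_pendulum_f(t, q):
    """Differential part: velocities and accelerations of both masses."""
    x1, y1, u1, v1, x2, y2, u2, v2, l1, l2 = q
    return np.array([u1, v1,
                     -l1 * x1 - l2 * (x1 - x2),
                     -l1 * y1 - l2 * (y1 - y2) - G,
                     u2, v2,
                     -l2 * (x2 - x1),
                     -l2 * (y2 - y1) - G])

def double_pendulum_g(t, q):
    """Algebraic part: the two unit-length constraints."""
    x1, y1, u1, v1, x2, y2, u2, v2, l1, l2 = q
    return np.array([x1**2 + y1**2 - 1.0,
                     (x2 - x1)**2 + (y2 - y1)**2 - 1.0])
```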
Exercise 2
▶ Implement the DAE system for the double pendulum, and
derive and implement its Jacobian.
▶ Use your DAE solver from Exercise 1 to integrate the double
pendulum, and visualize its chaotic motion!
References
1. Campbell, Stephen L., Vu Hoang Linh, and Linda R. Petzold.
"Differential-algebraic equations." Scholarpedia 3.8 (2008): 2849.
2. Ascher, Uri M., and Linda R. Petzold. Computer Methods for
Ordinary Differential Equations and Differential-Algebraic
Equations. Vol. 61. SIAM, 1998.
3. Eich-Soellner, Edda, and Claus Führer. Numerical Methods in
Multibody Dynamics. Vol. 45. Stuttgart: Teubner, 1998.
4. Hairer, Ernst, and Gerhard Wanner. Solving Ordinary
Differential Equations II. Springer, 1996.