Lecture 6: Introduction To Linear Dynamical Systems and ODE Review
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
They may be distributed outside this class only with the permission of the Instructor.
where as usual x(t) ∈ Rn , u(t) ∈ Rm , and y(t) ∈ Rp . A(·), B(·), C(·), D(·) are matrix-valued functions on
R+ , assumed to be piecewise continuous.
Let us define the state transition map and the response map:
x(t) = s(t, t0 , x0 , u)
y(t) = ρ(t, t0 , x0 , u)
We now study the state transition map s(t, t0 , x0 , u) as the unique solution to the state DE given by
for some initial condition (t0 , x0 ) ∈ R+ × Rn , x(t0 ) = x0 and u(·) ∈ U . Under these conditions (u.t.c.), the
above reduces to
ẋ(t) = f (x(t), t), t ∈ R, x(t0 ) = x0
where the right-hand side is a given function
Reference: A good reference for basic ODEs is Kreyszig, ’Advanced Engineering Mathematics’. Chapter
1 is basic ODE stuff. It has loads of examples.
Let’s just review how to solve a simple ordinary differential equation. The kinds of ODEs we will deal with
in this class largely fall in the class of so-called separable equations of the form
dx = f (x)g(t)dt
Definition 2.1 (Linear ODE) A first order ODE is linear if it can be written as
x′ + p(t)x = r(t)
Let us find a formula for the general solution of a linear ODE on some interval I and assuming p and r are
continuous (we will get to the formal requirements for existence and uniqueness next). For the homogeneous
equation
x0 + p(t)x = 0
this is very simple. Indeed, separating variables we have
dx/x = −p(t) dt  =⇒  ln |x| = −∫ p(t) dt + c*
and by taking the exponential of both sides we get
x(t) = c exp(−∫ p(t) dt)
∂P/∂x = ∂²h/∂x∂t,   ∂Q/∂t = ∂²h/∂t∂x
since for sufficiently smooth functions (here we just need h ∈ C²(Rⁿ × R, R)) the second-order mixed partial derivatives are equal. Equality of mixed second derivatives is Schwarz's theorem (also known as Clairaut's theorem on the equality of mixed partials).
The niceness of exactness comes from the fact that if the ODE is exact then
dh = 0
can be integrated to directly get the general solution
h(t, x) = c
We can reduce an ODE to an exact form if there exists a so-called integrating factor: a function F, in general a function of both x and t, such that
F P dt + F Q dx = 0
is exact,
where
R(x) = (1/Q)(∂P/∂x − ∂Q/∂t)
Integrating we get
exp(∫ p(t) dt) x = ∫ exp(∫ p(t) dt) r dt + c
Let h = ∫ p(t) dt and divide both sides by exp(h) to get
x(t) = exp(−h) [∫ exp(h) r dt + c]
Example. Consider
dx/dt + (3/t) x = e^t / t³
It is of the form
dx/dt + P x = Q
so the integrating factor is
exp(∫ P dt) = exp(∫ (3/t) dt) = exp(3 ln t) = exp(ln t³) = t³
Multiplying the equation by t³ gives (t³ x)′ = e^t, and integrating yields
x = (e^t + c) / t³
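As a sanity check, the closed-form solution can be verified numerically by comparing a finite-difference derivative against the right-hand side of the ODE. A minimal sketch in Python (the constant c, the step size, and the test points are arbitrary choices):

```python
import math

def x(t, c=2.0):
    # Candidate solution x(t) = (e^t + c) / t^3; c is an arbitrary constant.
    return (math.exp(t) + c) / t**3

def residual(t, c=2.0, h=1e-6):
    # Central-difference approximation of dx/dt + 3x/t - e^t/t^3,
    # which should vanish if x solves the ODE.
    dxdt = (x(t + h, c) - x(t - h, c)) / (2 * h)
    return dxdt + 3 * x(t, c) / t - math.exp(t) / t**3

# The residual is ~0 (up to finite-difference error) for any t > 0 and any c.
for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(t)) < 1e-4
```

Note that the residual vanishes for every value of c, reflecting the one-parameter family of solutions.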
Theorem 3.1 (Mean Value Theorem.) If a function f is continuous on the closed interval [a, b], and
differentiable on the open interval (a, b), then there exists a point c in (a, b) such that
f′(c) = (f(b) − f(a)) / (b − a)
Proof. We show that a differentiable function f : R → R is K-Lipschitz if and only if its derivative is bounded by K in absolute value.
(⇐=). Suppose the derivative is bounded by some K. By the mean value theorem we have that for
x, y ∈ R, there exists c ∈ R such that
f (x) − f (y) = (x − y)f 0 (c)
so that
|f (x) − f (y)| = |(x − y)f 0 (c)| ≤ K|x − y|
(=⇒). Suppose f is K-Lipschitz, so that |f(x) − f(y)| ≤ K|x − y| for all x, y ∈ R. Then taking the limit we have that
|f′(x)| = lim_{y→x} |f(x) − f(y)| / |x − y| ≤ K
Note: Lipschitz functions need not be differentiable everywhere. By Rademacher's theorem, however, they are differentiable almost everywhere (i.e., except on a set of measure zero).
Examples.
1. The function
f(x) = √(x² + 5)
defined for all real numbers is Lipschitz continuous with the Lipschitz constant K = 1, because it is
everywhere differentiable and the absolute value of the derivative is bounded above by 1. Indeed,
f′(x) = x (x² + 5)^(−1/2),  so that  |f′(x)| = |x| (x² + 5)^(−1/2)
Claim:
|x| (x² + 5)^(−1/2) ≤ 1
This is true because
|x| = (x²)^(1/2) ≤ (x² + 5)^(1/2)
2. The functions sin(x) and cos(x) are Lipschitz with constant K = 1 since their derivatives are bounded by
1.
3. Is the function sin(x²) Lipschitz? What about √x?
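The Lipschitz claims in these examples can be probed numerically by estimating the supremum of difference quotients over sampled point pairs. A rough sketch in Python (the sampling scheme, intervals, and sample count are arbitrary choices; this is evidence, not a proof):

```python
import math
import random

def lipschitz_estimate(f, lo, hi, samples=20000, seed=0):
    # Crude estimate of sup |f(x) - f(y)| / |x - y| using nearby point pairs.
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        y = x + rng.uniform(-1e-4, 1e-4)
        if y != x and lo <= y <= hi:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

# f(x) = sqrt(x^2 + 5) and sin(x): the estimated slope never exceeds K = 1.
assert lipschitz_estimate(lambda x: math.sqrt(x * x + 5), -10, 10) <= 1.0
assert lipschitz_estimate(math.sin, -10, 10) <= 1.0
# sin(x^2): the slope 2x cos(x^2) grows with |x|, so no global K exists.
assert lipschitz_estimate(lambda x: math.sin(x * x), -100, 100) > 1.0
# sqrt(x): the difference quotient near 0 behaves like 1/(2 sqrt(x)), unbounded.
assert math.sqrt(1e-12) / 1e-12 > 1e3
```

The last two checks answer the questions in example 3: sin(x²) is not globally Lipschitz, and √x is not Lipschitz on any interval containing 0.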
ẋ(t) = f (x, t)
where initial condition (t0 , x0 ) is such that x(t0 ) = x0 . Suppose f satisfies (A1) and (A2). Then,
1. For each (t0 , x0 ) ∈ R+ × Rn there exists a continuous function φ : R+ → Rn such that
φ(t0 ) = x0
and
φ̇(t) = f (φ(t), t), ∀t ∈ R+ \D
2. This function is unique. The function φ is called the solution through (t0 , x0 ) of the differential
equation.
Note that if the Lipschitz condition does not hold, it may be that the solution cannot be continued beyond
a certain time. e.g., consider
ξ̇(t) = ξ²(t),  ξ(0) = 1/c,  c ≠ 0
where ξ : R+ → R. This differential equation has the solution
ξ(t) = 1/(c − t)
which, for c > 0, blows up as t → c and cannot be continued past t = c.
Definition 3.1 (Cauchy sequence.) A sequence (v_i)_{i=1}^∞ in (V, F, ‖·‖) is said to be a Cauchy sequence in V iff for any ε > 0, ∃ N_ε ∈ N such that for any pair m, n > N_ε,
‖v_m − v_n‖ < ε
Definition 3.2 (Banach Space.) A Banach space X is a normed linear space that is complete with
respect to that norm—that is, every Cauchy sequence {xn } in X converges in X.
x_{m+1}(t) = x0 + ∫_{t0}^{t} f(x_m(τ), τ) dτ
where x_0(t0) = x0 and m = 0, 1, 2, . . .. The idea is to show that the sequence of continuous functions {x_m(·)}_{m=0}^∞ converges to (i) a continuous function φ : R+ → R^n which is (ii) a solution of ẋ = f(x, t), x(t0) = x0.
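The Picard construction can be sketched numerically for the scalar equation ẋ = x, x(0) = 1, whose solution is e^t: each iterate is stored on a uniform grid and the integral is approximated by the trapezoid rule (the grid size and iteration count are arbitrary choices):

```python
import math

# Picard iteration x_{m+1}(t) = x0 + \int_{t0}^{t} f(x_m(tau), tau) dtau,
# sketched for f(x, t) = x, t0 = 0, x0 = 1, whose solution is phi(t) = e^t.
N = 1000
x = [1.0] * (N + 1)                      # x_0(t) = 1 on a grid over [0, 1]

for m in range(20):                      # 20 Picard iterations
    acc, new = 0.0, [1.0]
    for i in range(1, N + 1):
        acc += 0.5 * (x[i - 1] + x[i]) / N   # trapezoid step of \int x
        new.append(1.0 + acc)
    x = new

# The iterates converge uniformly on [0, 1] to phi(t) = e^t.
assert abs(x[N] - math.e) < 1e-4
```

For this f the m-th iterate is exactly the degree-m Taylor polynomial of e^t, which makes the uniform convergence on [0, 1] visible.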
By the above argument, m → ∞, xm (·) → φ(·) on [t1 , t2 ]. Hence, it suffices to show that
∫_{t0}^{t} f(x_m(τ), τ) dτ → ∫_{t0}^{t} f(φ(τ), τ) dτ,  as m → ∞
then
u(t) ≤ c1 exp(∫_{t0}^{t} k(τ) dτ)
Proof. WLOG, assume t > t0. Let U(t) = c1 + ∫_{t0}^{t} k(τ)u(τ) dτ. Thus,
u(t) ≤ U(t)
resulting in
(d/dt) [ U(t) exp(−∫_{t0}^{t} k(τ) dτ) ] ≤ 0
and thus
u(t) ≤ U(t) ≤ c1 exp(∫_{t0}^{t} k(τ) dτ)
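As a quick numerical illustration of the lemma, take k(t) = 2 and c1 = 3: the function u(t) = c1 e^{2t} attains the Gronwall bound with equality, and the integral relation u(t) = c1 + ∫ k u can be checked by quadrature (the constants and grid size are arbitrary choices):

```python
import math

# Equality case of Bellman-Gronwall: with k(t) = 2 and c1 = 3, the function
# u(t) = c1 * exp(2 t) satisfies u(t) = c1 + \int_0^t k(tau) u(tau) dtau,
# so the bound u(t) <= c1 * exp(\int_0^t k(tau) dtau) holds with equality.
c1, k = 3.0, 2.0
u = lambda t: c1 * math.exp(k * t)

N, T = 100000, 1.0
h = T / N
# Trapezoid quadrature of \int_0^1 k(tau) u(tau) dtau.
integral = sum(0.5 * h * (k * u(i * h) + k * u((i + 1) * h)) for i in range(N))

lhs = u(T)                  # u(1) = c1 * e^2
rhs = c1 + integral         # c1 + \int_0^1 k u
assert abs(lhs - rhs) < 1e-6 * lhs
```

Any u satisfying the integral inequality strictly sits below this equality case, which is the content of the lemma.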
Example. Consider
ẋ(t) = A(t)x(t),  x(t0) = x0
Proof. Assume φ(t), ψ(t) are two solutions, so that φ(t0) = ψ(t0) = x0 and
φ̇(t) = A(t)φ(t),  ψ̇(t) = A(t)ψ(t)
Then
φ(t) − ψ(t) = ∫_{t0}^{t} (A(τ)φ(τ) − A(τ)ψ(τ)) dτ
so that
‖φ(t) − ψ(t)‖ ≤ ‖A‖_{∞,[t0,t]} ∫_{t0}^{t} ‖φ(τ) − ψ(τ)‖ dτ
By Bellman-Gronwall,
‖φ(t) − ψ(t)‖ ≤ c1 + ‖A‖_{∞,[t0,t]} ∫_{t0}^{t} ‖φ(τ) − ψ(τ)‖ dτ
implies
‖φ(t) − ψ(t)‖ ≤ c1 exp(‖A‖_{∞,[t0,t]} (t − t0))
This is true for all c1 ≥ 0, so letting c1 → 0 gives ‖φ(t) − ψ(t)‖ = 0, i.e., φ ≡ ψ.
Recall
with initial data (t0 , x0 ) and the assumptions on A(·), B(·), C(·), D(·), u(·) all being PC:
• A(t) ∈ Rn×n
• B(t) ∈ Rn×m
• C(t) ∈ Rp×n
• D(t) ∈ Rp×m
The input function u(·) ∈ U, where U is the set of piecewise continuous functions from R+ → Rm .
This system satisfies the assumptions of our existence and uniqueness theorem. Indeed,
1. For all fixed x ∈ R^n, the function t ∈ R+\D → f(x, t) ∈ R^n is continuous, where D contains all the points of discontinuity of A(·), B(·), C(·), D(·), u(·).
2. There is a PC function k(·) = ‖A(·)‖ such that
‖f(x, t) − f(y, t)‖ = ‖A(t)(x − y)‖ ≤ k(t)‖x − y‖,  ∀x, y ∈ R^n, t ∈ R+
Hence, by the above theorem, the differential equation has a unique continuous solution x : R+ → Rn which
is clearly defined by the parameters (t0 , x0 , u) ∈ R+ × Rn × U . Therefore, recalling the state transition map
s we have the following theorem.
Theorem 4.1 (Existence of the state transition map.) Under the assumptions and notation above,
for every triple (t0 , x0 , u) ∈ R+ × Rn × U , the state transition map
x(·) = s(·, t0 , x0 , u) : R+ → Rn
is a continuous map well-defined as the unique solution of the state differential equation
Remark: With the state transition map being well-defined, so is the response map
y(t) = ρ(t, t0, x0, u)
as the composition of the state transition map and the output function g.
The state transition function (resp. response) of a linear system is equal to the sum of its zero-input state transition function (resp. response) and its zero-state state transition function (resp. response):
Since the state transition map and the response map are linear, for fixed (t, t0) ∈ R+ × R+ the maps
s(t, t0 , ·, 0) : Rn → Rn : x0 7→ s(t, t0 , x0 , 0)
and
ρ(t, t0 , ·, 0) : Rn → Rp : x0 7→ ρ(t, t0 , x0 , 0)
are linear.
Hence by the Matrix Representation Theorem, they are representable by matrices. Therefore there exists a
matrix Φ(t, t0) ∈ R^{n×n} such that
s(t, t0, x0, 0) = Φ(t, t0)x0,  ∀x0 ∈ R^n
and
ρ(t, t0 , x0 , 0) = C(t)Φ(t, t0 )x0 , ∀x0 ∈ Rn
Definition 4.1 (State transition matrix.) Φ(t, t0 ) is called the state transition matrix.
Let X(t0 ) = X0 .
Definition 5.1 (State Transition Matrix.) The state transition matrix Φ(t, t0 ) is defined to be the
solution of the above matrix differential equation starting from Φ(t0 , t0 ) = I. That is,
(∂/∂t) Φ(t, t0) = A(t)Φ(t, t0)
and Φ(t0 , t0 ) = I.
2. for all t, t0 , t1 ∈ R+ ,
Φ(t, t0 ) = Φ(t, t1 )Φ(t1 , t0 )
3. The inverse of the state transition matrix is
(Φ(t, t0))^{−1} = Φ(t0, t)
Proof. Call the left-hand side of (6.4) LHS, and the right-hand side RHS.
1. Check first that the LHS of (6.4) and the RHS agree at t1:
RHS(t1) = LHS(t1)
Then check that both sides satisfy the same differential equation:
(d/dt) RHS(t) = A(t)RHS(t),  (d/dt) LHS(t) = A(t)LHS(t)
Hence, by uniqueness of solutions, LHS ≡ RHS.
3. First, Φ(t, s) = Φ(t, τ)Φ(τ, s) for any t, s, τ: transitioning the state from time s to time t directly, or via an intermediate time τ, yields the same result, so the corresponding diagram of maps between copies of X commutes.
Then, x(t) = Φ(t, s)a, x(t) = Φ(t, τ)x(τ) and x(τ) = Φ(τ, s)a, and hence
Φ(t, s)a = Φ(t, τ)Φ(τ, s)a
that is
(Φ(t, τ)Φ(τ, s) − Φ(t, s))a = 0
We claim that Φ(t, s) is invertible and that its inverse is given by Φ(s, t). Indeed, from Φ(t, t) = Φ(t, s)Φ(s, t) and Φ(t, t) = I we have that
I = Φ(t, s)Φ(s, t)
4. This is called the Jacobi-Liouville equation. We will take this one as given.
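For a time-invariant A the state transition matrix is Φ(t, t0) = exp(A(t − t0)), and properties 2 and 3 can be checked concretely. A sketch with the nilpotent matrix A = [[0, 1], [0, 0]], chosen because its matrix exponential terminates after the linear term:

```python
# For constant A, Phi(t, t0) = exp(A (t - t0)). With the nilpotent
# A = [[0, 1], [0, 0]] we have A^2 = 0, so exp(A s) = I + A s exactly.

def phi(t, t0):
    s = t - t0
    return [[1.0, s], [0.0, 1.0]]        # I + A * (t - t0)

def matmul(X, Y):
    # Plain 2x2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t0, t1, t = 0.5, 1.25, 3.0
# Property 2 (semigroup): Phi(t, t0) = Phi(t, t1) Phi(t1, t0).
assert matmul(phi(t, t1), phi(t1, t0)) == phi(t, t0)
# Property 3 (inverse): Phi(t, t0) Phi(t0, t) = I.
assert matmul(phi(t, t0), phi(t0, t)) == [[1.0, 0.0], [0.0, 1.0]]
```

The same checks pass for any times t0, t1, t, since for this A they reduce to (t − t1) + (t1 − t0) = (t − t0).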
[Figure 6.1: an input u(·) plotted against time, with the instants t0, t0 + dt0, and t marked on the axis.]
First, let us consider a heuristic derivation of the zero-state transition (page 35 of C&D). Consider the input in Fig. 6.1.
Then,
x(t0 ) = Φ(t0 , t0 )x0
and
x(t0 + dt0 ) = x(t0 ) + [A(t0 )x(t0 ) + B(t0 )u(t0 )]dt0
Hence,
x(t) ' Φ(t, t0 )x0 + Φ(t, t0 )B(t0 )u(t0 )dt0
Proof idea: We will use the trick that checks the equality by showing the left and right hand sides of (6.5)
satisfy the same ODE. That is, at t0 , they have the same value (initial condition) and the derivative of
the left and right hand sides is the same. The key here is that since we have the existence and uniqueness
theorem, we know then that the solution of the ODE is unique, so that means any two expressions that
satisfy it have to be equal.
Proof. We will use the same trick as before where we check the initial condition and the differential equation
and invoke the existence/uniqueness theorem for solutions to ODEs.
(d/dt) LHS(t) = A(t)LHS(t) + B(t)u(t)
LHS(t0) = x0
RHS(t0) = x0
(d/dt) RHS(t) = (d/dt)(Φ(t, t0)x0) + (d/dt) ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
= A(t)Φ(t, t0)x0 + (d/dt)(t) Φ(t, t)B(t)u(t) − (d/dt)(t0) Φ(t, t0)B(t0)u(t0) + ∫_{t0}^{t} (∂/∂t)(Φ(t, τ)B(τ)u(τ)) dτ
where, by the Leibniz rule, the boundary terms carry the factors (d/dt)(t) = 1 and (d/dt)(t0) = 0, so the third term vanishes, and Φ(t, t) = I,
so that
(d/dt) RHS(t) = A(t)Φ(t, t0)x0 + B(t)u(t) + A(t) ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
= A(t)RHS(t) + B(t)u(t)
Thus LHS and RHS have the same initial condition and satisfy the same ODE, so by uniqueness LHS ≡ RHS.
By definition, it satisfies the state transition axiom. Check that it satisfies the semi-group property:
s(t, t1, s(t1, t0, x0, u[t0,t1]), u[t1,t]) = Φ(t, t1) [ Φ(t1, t0)x0 + ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ) dτ ] + ∫_{t1}^{t} Φ(t, τ)B(τ)u(τ) dτ
= Φ(t, t0)x0 + ∫_{t0}^{t1} Φ(t, τ)B(τ)u(τ) dτ + ∫_{t1}^{t} Φ(t, τ)B(τ)u(τ) dτ
= Φ(t, t0)x0 + ∫_{t0}^{t} Φ(t, τ)B(τ)u(τ) dτ
= s(t, t0, x0, u[t0,t])
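The variation-of-constants formula for s can also be cross-checked against a direct simulation. A sketch for the scalar LTI system ẋ = −x + u with u ≡ 1 (the particular system, input, and step size are arbitrary choices), where Φ(t, τ) = e^{−(t − τ)}:

```python
import math

# Scalar LTI sketch: xdot = a x + b u with a = -1, b = 1, u(t) = 1, x(0) = 2.
# Here Phi(t, tau) = e^{a (t - tau)}, and variation of constants gives
#   x(T) = e^{a T} x0 + \int_0^T e^{a (T - tau)} b u(tau) dtau
#        = 2 e^{-T} + (1 - e^{-T}).
a, b, x0, T, N = -1.0, 1.0, 2.0, 2.0, 200000
h = T / N

x = x0
for _ in range(N):
    x += h * (a * x + b * 1.0)       # forward-Euler step of xdot = a x + b u

closed = math.exp(a * T) * x0 + (1.0 - math.exp(a * T))
assert abs(x - closed) < 1e-4        # simulation matches the formula
```

The first term of `closed` is the zero-input response Φ(T, 0)x0 and the second is the zero-state response, mirroring the decomposition above.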