Introduction to Koopman operator theory of dynamical systems

Hassan Arbabi
Last updated: January 2019

Koopman operator theory is an alternative formalism for dynamical systems theory which
offers great utility in the analysis and control of nonlinear and high-dimensional systems
using data. These notes present a brief and informal review of this theory, as well as a
list of references that contain rigorous mathematical treatments and a variety of
applications to physics and engineering problems.

0.1 Classical theory of dynamical systems

A dynamical system, in the abstract sense, consists of two things: a set of states
through which we can describe the evolution of a system, and a rule for that evolution.
Although this viewpoint is very general and may be applied to almost any system that
evolves with time, fruitful and conclusive results are often only achievable when we
impose some mathematical structure on the dynamical system; for example, we often
assume the set of states forms a linear space with nice geometric properties and the rule
of evolution has some order of regularity on that space. Prominent examples of such
dynamical systems are amply found in physics, where we use differential equations to
describe the evolution of physical variables in time. In this note, we specifically focus on
dynamical systems that can be represented as

ẋ = f (x), (0.1)

where x is the state, an element of the state space S ⊂ R^n, and f : S → R^n is a vector
field on that state space. Occasionally, we will specify some regularity conditions for f,
like being smooth or a few times differentiable.
We also consider dynamical systems given by the discrete-time map

    x_{t+1} = T(x_t),    t ∈ Z,    (0.2)

where x belongs to the state space S ⊂ R^n, T : S → S is the dynamic map and t
is the discrete time index. Just like the continuous-time system in (0.1), we may need
to make some extra assumptions on T. The discrete-time representation usually does not
show up directly in models of physical systems, but we can use it to represent
discrete-time sampling of those systems. This representation is also more practical
because the data collected from dynamical systems almost always comes in discrete-time
samples.
The study of the dynamical systems in (0.1) and (0.2) was dominated by the geometric
viewpoint for much of the last century. In this viewpoint, originally due to Henri Poincaré,
the qualitative properties of the solution curves in the state space are studied using
geometric tools, and the emphasis is put on the subsets of the state space that play a big
role in the asymptotic behavior of the trajectories. We briefly describe some concepts
from this theory here; a more comprehensive exposition can be found in [1, 2].
Assuming that the solution to (0.1) exists, we define the flow map F^t : S → S to be
the map that takes the initial state to the state at time t ∈ R, i.e.,

    F^t(x_0) = x_0 + ∫_0^t f(x(t')) dt',    (0.3)

where x(·) denotes the trajectory with x(0) = x_0.

The flow map satisfies the semi-group property, i.e., for every s, t ≥ 0,

    F^t ◦ F^s(x_0) = F^s(x_0) + ∫_0^t f(x(t')) dt'                       (trajectory starting at F^s(x_0))
                   = x_0 + ∫_0^s f(x(t')) dt' + ∫_s^{t+s} f(x(t')) dt'   (trajectory starting at x_0)
                   = x_0 + ∫_0^{t+s} f(x(t')) dt'
                   = F^{t+s}(x_0),    (0.4)

where ◦ is the composition operator.
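
As a quick numerical illustration of (0.3) and (0.4), here is a minimal sketch (assuming NumPy
and SciPy are available; the vector field, test point, and times are arbitrary choices) that
approximates the flow map of a damped pendulum by numerical integration and checks the
semi-group property F^t ◦ F^s(x_0) = F^{t+s}(x_0).

```python
import numpy as np
from scipy.integrate import solve_ivp

# An arbitrary nonlinear planar vector field, xdot = f(x) (a damped pendulum).
def f(t, x):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def flow(x0, t):
    """Approximate the flow map F^t(x0) by numerical integration."""
    if t == 0:
        return np.asarray(x0, dtype=float)
    sol = solve_ivp(f, (0.0, t), x0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

x0 = np.array([1.0, 0.0])
s, t = 0.7, 1.3

# Semi-group property: F^t(F^s(x0)) should equal F^{t+s}(x0).
lhs = flow(flow(x0, s), t)
rhs = flow(x0, s + t)
print("F^t o F^s(x0) =", lhs)
print("F^{t+s}(x0)   =", rhs)
print("difference    =", np.linalg.norm(lhs - rhs))
```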


Some of the important geometric objects in the state space of continuous-time dy-
namical systems are as follows:
Fixed point: Any point x in the state space such that f(x) = 0 (or F^t(x) = x for all t)
is a fixed point. Fixed points correspond to the equilibria of physical systems. An
important notion about fixed points is stability, that is, whether the trajectories starting
in some neighborhood of the fixed point stay in that neighborhood over time or not.
Limit cycle: Limit cycles are (isolated) closed curves in the state space which
correspond to the time-periodic solutions of (0.1). The generalized versions of limit cycles
are tori (Cartesian products of circles), which are associated with quasi-periodic motion.
Invariant set: An invariant set B in the state space satisfies F^t(B) ⊆ B for all t,
i.e., the trajectories starting in B remain in B. Invariant sets are important because we
can isolate the study of the dynamics on them from the rest of the state space, which
simplifies the analysis. They also include important objects such as fixed points, limit
cycles, attractors and invariant manifolds.
Attractor: An attractor is an attracting set with a dense orbit. An attracting set
is an invariant subset of the state space to which many initial conditions converge. A
dense orbit in a set is a trajectory that comes arbitrarily close to any point on that set.
For example, a stable limit cycle is an attractor, because if a trajectory starts sufficiently
close to it, it will come arbitrarily close to it, and the limit cycle itself is a dense orbit.
In contrast, the union of two separate stable limit cycles is an attracting set but not
an attractor, because there is no trajectory that comes arbitrarily close to points on
both cycles. Simple attractors include stable fixed points, limit cycles and tori. A more
complicated example of an attractor is the famous butterfly-shaped set of the chaotic
Lorenz system, which is called a strange attractor.
Attractors are the objects that determine the asymptotic (that is post-transient or
long-term) dynamics of dissipative dynamical systems. In fact, the mere notion of dissi-
pativity (we can think of it as shrinkage in the state space) is enough to guarantee the
existence of an attractor in many systems [3]. In some cases, the state space contains
more than one attractor, and the attractors divide the state space into basins of attrac-
tion; any point in the basin of attraction of an attractor will converge to it over infinite
time.
Bifurcation: A bifurcation is any change in the qualitative behavior of the trajectories
due to changes in the vector field f or the map T. For example, if we add some
forcing term to the vector field f, a stable fixed point might turn unstable or a limit cycle
might appear out of the blue sky. A physical example is when we add damping to an
otherwise frictionless unforced oscillator: without damping all trajectories are periodic
orbits, but with damping they decay to some fixed point.
Here is the traditional approach to the study of dynamical systems: we first discover
or construct a model for the system in the form of (0.1) or (0.2). Sometimes, if we are
very lucky, we can come up with analytical or approximate solutions and use them to
analyze the dynamics, by which we usually mean finding the attractors, invariant sets,
imminent bifurcations and so on. A lot of times, this is not possible and we have to
use various estimates or approximation techniques to evaluate the qualitative behavior
of the system, for example, constructing Lyapunov functions to prove the stability of a
fixed point. But most of the time, if we want a quantitative analysis or prediction, we
have to employ numerical computation and then extract information by looking at a
collection of trajectories in the state space.
The traditional approach has contributed a lot to our knowledge of the dynamical and
physical systems around us, but it falls short in treating the high-dimensional systems
that have arisen in various areas of science and technology. A classic set of examples,
which regularly arises in physics, is systems governed by partial differential equations.
In these systems, the state space is infinite-dimensional and the numerical models that
we use may have up to billions of degrees of freedom. Other examples with more recent
appeal include the climate system of the earth, smart cars and buildings, power networks,
and biological systems with interacting components like neural networks. The first
problem with the traditional approach is that simulating the evolution of trajectories
for these systems is prohibitively expensive due to the large size of the problem.
Moreover, unlike in two- or three-dimensional systems, the geometric objects in the state
space are difficult to realize and utilize. The second problem is the uncertainty in the
models, or even the sheer lack of a model for simulation or analysis. As a result, the
field of dynamical analysis has started shifting toward a less model-based and more
data-driven perspective. This shift is also boosted by the increasing amount of data that
is produced by today's powerful computational resources and experimental apparatus. In
the next section, we introduce the Koopman operator theory, which is a promising
framework to overcome the obstacles of large dimension and uncertainty in analysis and
control of dynamical systems.

0.2 The data-driven viewpoint toward dynamical systems and the Koopman operator

In the context of dynamical systems, we interpret data as knowledge of some
variable(s) related to the state of the system. A natural way to put this into mathematical
form is to assume that the data are evaluations of functions of the state. We call these
functions observables of the system. Let us discuss an example. The unforced motion of
functions observables of the system. Let us discuss an example. The unforced motion of
an incompressible fluid inside a box constitutes a dynamical system; one way to realize
the state space is to think of it as the set of all smooth velocity fields on the flow domain
that satisfy the incompressibility condition. The state changes with time according to
a rule of evolution which is the Euler equation. Some examples of observables on this
system are the pressure or vorticity at a given point in the flow domain, the velocity at a
set of points, and the total kinetic energy of the flow. In all these examples, knowledge of
the state, i.e. the velocity field, uniquely determines the value of the observable. We see
that this definition allows us to think of the data from most flow experiments and
simulations as values of observables. We also note that there are some types of data that
don't fit the above definition of an observable of the system. For example, the position
of a Lagrangian tracer is not an observable of the above system, since it cannot be determined
by mere knowledge of the instantaneous velocity field.
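
To make the notion of an observable concrete, here is a schematic sketch (the grid size and
the random "velocity field" are made-up stand-ins for an actual flow state): observables are
simply functions that map the state to a number or a vector.

```python
import numpy as np

# Schematic: the "state" is a velocity field sampled on an N x N grid (u, v components).
N = 64
rng = np.random.default_rng(0)
state = rng.standard_normal((2, N, N))   # stand-in for an instantaneous velocity field

# Observables are functions of the state.
def velocity_at_point(x, i=10, j=20):
    """Observable: the velocity vector at one grid point."""
    return x[:, i, j]

def kinetic_energy(x):
    """Observable: total kinetic energy of the discretized field."""
    return 0.5 * np.sum(x ** 2)

print(velocity_at_point(state), kinetic_energy(state))
```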
Using the above notion, we formulate the data-driven analysis of dynamical systems
as follows: Given the knowledge of an observable in the form of time series generated by
experiment or simulation, what can we say about the evolution of the state?
Consider the discrete-time dynamical system given in (0.2). Let g : S → R be
a real-valued observable on this dynamical system. The collection of all such
observables forms a linear vector space. The Koopman operator, denoted by U, is a linear
transformation on this vector space given by

U g(x) = g ◦ T (x), (0.5)

where ◦ denotes the composition operation. The linearity of the Koopman operator
follows from the linearity of the composition operation, i.e.,

U [g1 + g2 ](x) = [g1 + g2 ] ◦ T (x) = g1 ◦ T (x) + g2 ◦ T (x) = U g1 (x) + U g2 (x). (0.6)
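
As a small illustration of definition (0.5) and the linearity in (0.6), the following sketch
(the choice of the map T and of the observables is arbitrary) implements the Koopman operator
of a one-dimensional map by composition and checks linearity pointwise.

```python
import numpy as np

# A concrete discrete-time map T (illustrative choice: the logistic map).
def T(x):
    return 3.7 * x * (1.0 - x)

# The Koopman operator maps an observable g to the new observable U g = g o T.
def koopman(g):
    return lambda x: g(T(x))

g1 = lambda x: np.sin(x)
g2 = lambda x: x ** 2

x = np.linspace(0.1, 0.9, 5)

# Linearity (0.6): U[g1 + g2] = U g1 + U g2, evaluated pointwise.
lhs = koopman(lambda x: g1(x) + g2(x))(x)
rhs = koopman(g1)(x) + koopman(g2)(x)
print(np.allclose(lhs, rhs))   # True
```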

For continuous-time dynamical systems, the definition is slightly different: instead of a
single operator, we define a one-parameter semi-group of Koopman operators, denoted
by {U^t}_{t≥0}, where each element of this semi-group is given by

    U^t g(x) = g ◦ F^t(x),    (0.7)

and F^t(x) is the flow map defined in (0.3). The linearity of U^t follows in the same way
as in the discrete-time case. The semi-group property of {U^t}_{t≥0} follows from the
semi-group property of the flow map for autonomous dynamical systems given in (0.4):

    U^t U^s g(x) = U^t g ◦ F^s(x) = g ◦ F^t ◦ F^s(x) = g ◦ F^{t+s}(x) = U^{t+s} g(x).    (0.8)

A schematic representation of the Koopman operator is shown in Figure 0.1. We can
think of the Koopman operator viewpoint as a lifting of the dynamics from the state
space to the space of observables. The advantage of this lifting is that it provides a linear
rule of evolution, given by the Koopman operator, while the disadvantage is that the
space of observables is infinite-dimensional. In the next section, we discuss the spectral
theory of the Koopman operator, which leads to linear expansions for data generated by
nonlinear dynamical systems.

Figure 0.1: The Koopman viewpoint lifts the dynamics from the state space to the
observable space, where the dynamics is linear but infinite-dimensional.

0.3 Koopman linear expansion

A naive but somewhat useful way of thinking about linear operators is to imagine
them as infinite-dimensional matrices. Then, just like matrices, it is always good to look
at their eigenvalues and eigenvectors, since they give a better understanding of how
they act on the space of observables. Let φ_j : S → C be a complex-valued observable of
the dynamical system in (0.1) and λ_j a complex number. We call the pair (φ_j, λ_j) an
eigenfunction-eigenvalue pair of the Koopman operator if it satisfies

    U^t φ_j = e^{λ_j t} φ_j.    (0.9)

An interesting property of the Koopman eigenfunctions, that we will use later, is that
if (φi , λi ) and (φj , λj ) are eigenfunction-eigenvalue pairs, so is (φi · φj , λi + λj ), because

    U^t(φ_i · φ_j) = (φ_i · φ_j) ◦ F^t = (φ_i ◦ F^t) · (φ_j ◦ F^t) = (U^t φ_i) · (U^t φ_j) = e^{(λ_i + λ_j)t} φ_i · φ_j.    (0.10)
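
A minimal numerical check of (0.9) and (0.10) for the scalar linear system ẋ = μx, whose flow
map is F^t(x) = e^{μt}x (the value of μ and the test point are arbitrary choices): φ_1(x) = x is an
eigenfunction with eigenvalue μ, and the product φ_1 · φ_1 = x^2 is an eigenfunction with
eigenvalue 2μ.

```python
import numpy as np

mu = -0.8                                 # arbitrary: scalar system xdot = mu * x
flow = lambda x, t: np.exp(mu * t) * x    # exact flow map F^t(x)

phi1 = lambda x: x                        # eigenfunction with eigenvalue mu
phi2 = lambda x: x ** 2                   # phi1 * phi1 -> eigenvalue 2*mu, by (0.10)

x, t = 1.7, 2.3
print(np.isclose(phi1(flow(x, t)), np.exp(mu * t) * phi1(x)))       # U^t phi1 = e^{mu t} phi1
print(np.isclose(phi2(flow(x, t)), np.exp(2 * mu * t) * phi2(x)))   # U^t phi2 = e^{2 mu t} phi2
```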

Let us assume for now that all the observables of the dynamical system lie in the
linear span of such Koopman eigenfunctions, that is,


    g(x) = ∑_{k=0}^{∞} g_k φ_k(x),    (0.11)

where the g_k are the coefficients of the expansion. Then we can describe the evolution
of observables as

    U^t g(x) = ∑_{k=0}^{∞} g_k e^{λ_k t} φ_k(x),    (0.12)

which says that the evolution of g has a linear expansion in terms of Koopman
eigenfunctions. If we fix the initial state x = x_0, we see that the signal generated by
measuring g over a trajectory, which is given by U^t g(x_0) = g ◦ F^t(x_0), is a sum of (an
infinite number of) sinusoids and exponentials. This might sound a bit odd for nonlinear
systems, since sinusoids and exponentials are usually generated by linear systems.
It turns out that the Koopman linear expansion in (0.12) holds for a large class of
nonlinear systems, including the ones that have hyperbolic fixed points, limit cycles and
tori as attractors. For these systems the spectrum of the Koopman operator consists of
only eigenvalues, and the associated eigenfunctions span the space of observables. Now
we consider some of these systems in more detail. We borrow these examples from [4],
where more details on the regularity of the systems and related proofs can be found.

0.3.1 Examples of nonlinear systems with linear Koopman expansion

1. Limit cycling is a nonlinear property in the sense that there is no linear system
(ẋ = Ax) that can generate a limit cycle. If a limit cycle has time period T , then
the signal generated by measuring g(x) while x is moving around the limit cycle is
going to be T -periodic. From Fourier analysis, we have


    g(x(t)) = ∑_{k∈Z} g_k e^{ik(2π/T)t},

where the g_k are the Fourier coefficients. We can construct the eigenfunctions by
letting φ_k(x(t)) = e^{ik(2π/T)t}, and the eigenvalues by λ_k = ik(2π/T). It is easy to check
that (φ_k, λ_k) satisfy (0.9), and the above equation is the Koopman linear expansion
of g.

2. Consider a nonlinear system with a hyperbolic fixed point, that is, the linearization
around the fixed point yields a matrix whose eigenvalues don’t lie on the imaginary
axis. There are a few well-known results in dynamical systems theory, such as the
Hartman-Grobman theorem [2], which state that the nonlinear system is conjugate
to a linear system of the same dimension in a neighborhood of the fixed point.
To be more precise, they say that there is an invertible coordinate transformation
y = h(x) such that the dynamics in the y-coordinates is given by ẏ = Ay (with the
solution y(t) = e^{At} y(0)) and such that

    F^t(x) = h^{-1}(e^{At} h(x)).

In other words, to solve the nonlinear system, we can lift it to the y-coordinates,
solve the linear system, and then transform the result back to the x-coordinates. We
first show the Koopman linear expansion for linear systems, and then use the
conjugacy to derive the expansion for the nonlinear system.

Let {v_j}_{j=1}^n and {λ_j}_{j=1}^n denote the eigenvectors and eigenvalues of A. The
Koopman eigenfunctions for the linear system are simply the eigen-coordinates, that is,

    φ̃_j(y) = ⟨y, w_j⟩,

where the w_j are the normalized eigenvectors of A^* (a numerical check of this
construction is sketched after this list). To see this, note that

    U^t φ̃_j(y) = ⟨U^t y, w_j⟩ = ⟨e^{At} y, w_j⟩ = ⟨y, e^{A^* t} w_j⟩ = e^{λ_j t} ⟨y, w_j⟩ = e^{λ_j t} φ̃_j(y),

where we used that w_j is an eigenvector of A^* with eigenvalue conjugate to λ_j, together
with the conjugate-linearity of the inner product in its second argument.

It is easy to show that φ_j(x) = φ̃_j(h(x)) are eigenfunctions of the Koopman operator
for the nonlinear system. Other Koopman eigenfunctions can be easily constructed
using the algebraic structure noted in (0.10).

To find the Koopman expansion for the nonlinear system, it is easier to further
transform y into a decoupled linear system. If the matrix A is diagonalizable and
V is the matrix of its eigenvectors, then the state variables of the diagonalized system
are, not surprisingly, the Koopman eigenfunctions, i.e.,

    z = [z_1, z_2, ..., z_n]^T = V^{-1} y = [φ̃_1(y), φ̃_2(y), ..., φ̃_n(y)]^T
                                          = [φ_1(x), φ_2(x), ..., φ_n(x)]^T.

Now consider an observable of the nonlinear dynamical system, g(x) = g(h^{-1}(y)) =
g(h^{-1}(Vz)) = g̃(z), where g̃ is real analytic in z (and therefore in y as well). The
Taylor expansion of this observable in the variable z reads

    g(x) = g̃(z) = ∑_{(k_1,...,k_n)∈N^n} α_{k_1,...,k_n} z_1^{k_1} z_2^{k_2} ... z_n^{k_n}
                 = ∑_{(k_1,...,k_n)∈N^n} α_{k_1,...,k_n} φ_1^{k_1}(x) φ_2^{k_2}(x) ... φ_n^{k_n}(x).

Using the algebraic property of the Koopman eigenfunctions in (0.10), we can write
the Koopman linear expansion of g as

    U^t g = ∑_{(k_1,...,k_n)∈N^n} α_{k_1,...,k_n} e^{(k_1 λ_1 + k_2 λ_2 + ... + k_n λ_n)t} φ_1^{k_1} φ_2^{k_2} ... φ_n^{k_n}.

Recall that the original Hartman-Grobman theorem for nonlinear systems is local [2],
in the sense that the conjugacy is only known to exist in some neighborhood of
the fixed point. The results in [4], however, have extended the conjugacy to the
whole basin of attraction of stable fixed points using the properties of the Koopman
eigenfunctions.

3. Now consider the motion in the basin of attraction of a (stable) limit cycle. The
Koopman linear expansion for observables on such a system can be constructed by,
roughly speaking, combining the above two examples. That is, observables are
decomposed into Koopman eigenfunctions, and each Koopman eigenfunction is a
product of a periodic component, corresponding to the limit cycling, and a linearly
contracting component for the stable motion toward the limit cycle. The development
of this expansion is lengthy and can be found in [4].
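
To make the eigen-coordinate construction of example 2 concrete, here is a minimal numerical
check (the matrix A and the test point are arbitrary choices, and SciPy's matrix exponential is
used for the linear flow): the functions φ̃_j(y) = ⟨y, w_j⟩ built from the eigenvectors w_j of A^*
satisfy U^t φ̃_j = e^{λ_j t} φ̃_j along trajectories of ẏ = Ay.

```python
import numpy as np
from scipy.linalg import expm

# A diagonalizable matrix with stable (complex) eigenvalues; an arbitrary illustrative choice.
A = np.array([[-1.0,  2.0],
              [-2.0, -1.5]])

lam, V = np.linalg.eig(A)            # eigenvalues lam_j and (right) eigenvectors v_j
W = np.linalg.inv(V).conj().T        # columns w_j are eigenvectors of A^* (left eigenvectors of A)

def phi(j, y):
    """Eigen-coordinate phi_j(y) = <y, w_j>."""
    return W[:, j].conj() @ y

y0 = np.array([0.3, -1.1])
t = 0.9
yt = expm(A * t) @ y0                # linear flow y(t) = e^{At} y(0)

for j in range(2):
    # Koopman eigenfunction property: phi_j(e^{At} y0) = e^{lam_j t} phi_j(y0)
    print(np.isclose(phi(j, yt), np.exp(lam[j] * t) * phi(j, y0)))
```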

The major class of dynamical systems for which the Koopman linear expansion does not
hold is the class of chaotic dynamical systems. It turns out that for these systems, the
eigenfunctions of the Koopman operator (even if they exist) do not span the space of
observables, and we cannot decompose all the fluctuations of the system into exponentials
and sinusoids. In such cases the Koopman operator usually possesses a continuous
spectrum. The continuous spectrum of the Koopman operator is similar to the power
spectrum of a stationary stochastic process, where the energy content is spread over a
range of frequencies. In fact, if our dynamical system is measure-preserving (which is
typically true for evolution on attractors), the spectral density of the Koopman operator
coincides with the power spectral density of the observable evolution. For more on this,
and generally the connection between stochastic processes and the Koopman representation
of deterministic dynamics, see [5]. We also note that while chaos in measure-preserving
systems is associated with a continuous spectrum, a continuous spectrum can also be seen
in non-chaotic systems; see the cautionary tale in [4]. The continuous spectrum is further
discussed in [6, 7, 5].
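
As a rough illustration of this distinction (a sketch rather than a careful spectral estimate;
the systems, parameters, and the crude "concentration" measure are arbitrary choices), one can
compare the power spectrum of an observable sampled on a limit cycle, which concentrates on a
few sharp peaks, with that of an observable of the chaotic Lorenz system, which spreads over a
band of frequencies.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0/3.0):        # chaotic system
    return [sigma*(x[1]-x[0]), x[0]*(rho-x[2])-x[1], x[0]*x[1]-beta*x[2]]

def vdp(t, x, mu=1.0):                                       # Van der Pol: stable limit cycle
    return [x[1], mu*(1 - x[0]**2)*x[1] - x[0]]

dt, n = 0.01, 2**14
t_eval = np.arange(n) * dt

def observable_series(rhs, x0):
    """Sample the observable g(x) = x_1 along a trajectory."""
    sol = solve_ivp(rhs, (0, t_eval[-1]), x0, t_eval=t_eval, rtol=1e-9)
    return sol.y[0]

for name, rhs, x0 in [("Van der Pol (limit cycle)", vdp, [2.0, 0.0]),
                      ("Lorenz (chaotic)", lorenz, [1.0, 1.0, 1.0])]:
    g = observable_series(rhs, x0)
    g = g - g.mean()
    psd = np.abs(np.fft.rfft(g))**2
    # Crude concentration measure: fraction of power in the 10 largest frequency bins.
    frac = np.sort(psd)[-10:].sum() / psd.sum()
    print(f"{name}: fraction of power in top 10 bins = {frac:.3f}")
```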
What is more interesting is that some systems possess mixed spectra, that is, a
combination of eigenvalues and continuous spectrum. For these systems the evolution of a
generic observable is composed of two parts: one part associated with eigenvalues and
eigenfunctions, which evolves linearly in time, and a fully chaotic part corresponding to
the continuous spectrum. As such, the linear expansion (and the Koopman modes defined
below) does hold for part of the data. Examples of systems with mixed spectra are given
in [8, 9, 5].

Figure 0.2: Koopman Mode Decomposition fully describes the evolution of observables
for systems with a discrete Koopman spectrum, but not for chaotic systems, which have
a continuous spectrum.

0.4 Koopman Mode Decomposition (KMD)

A lot of times the data that is measured on a dynamical system comes to us not
from a single observable, but from multiple observables. For example, when we are
monitoring a power network, we may have access to the time series of power
generation and consumption at several nodes, or in the study of climate dynamics there
are recordings of atmospheric temperature at different stations around the globe. We
can easily integrate this multiplicity of time-series data into the Koopman operator
framework and the Koopman linear expansion.
We use g : S → R^m to denote a vector-valued observable, i.e.,

    g = [g^1, g^2, ..., g^m]^T,    g^j : S → R,  1 ≤ j ≤ m.
If we apply the linear Koopman expansion (0.12) to each g^j, we can collect all those
expansions into a vector-valued linear expansion for g,

    U^t g(x) = ∑_{k=0}^{∞} g_k e^{λ_k t} φ_k(x).    (0.13)

The above expansion is the Koopman Mode Decomposition (KMD) of the observable g,
and g_k is called the Koopman mode of the observable g at the eigenvalue λ_k. Koopman
modes are in fact the projections of the observable onto the Koopman eigenfunctions. We
can think of g_k as a structure (or shape) within the data that evolves as e^{λ_k t} with
time. Let us examine the concept of the Koopman modes in the examples mentioned above.
In the context of power networks, we can associate network instabilities with the Koopman
eigenvalues that grow in time, that is, Re λ_k > 0, and the entries of the Koopman mode
g_k give the relative amplitude of each node in the unstable growth and hence predict
which nodes are most susceptible to breakdown. In the example of climate time series,
the Koopman modes of temperature recordings give us the spatial pattern (depending on
the location of the stations) of the temperature change that is proportional to e^{λ_k t},
and therefore indicate the spots with extreme variations.
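
In practice, the eigenvalues λ_k and modes g_k in (0.13) are often approximated from snapshots
of a vector-valued observable using Dynamic Mode Decomposition, which is discussed in
Section 0.5. The following is a minimal sketch of "exact DMD" on synthetic data (the data
generation, truncation rank, and time step are arbitrary illustrative choices), recovering the
continuous-time eigenvalues used to build the data.

```python
import numpy as np

def dmd(X, Y, r):
    """Minimal exact DMD: given snapshot pairs Y ~ A X, return the eigenvalues
    and modes of the rank-r least-squares linear operator A."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s          # r x r projected operator
    evals, Wtilde = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T / s @ Wtilde / evals       # exact DMD modes (approximate g_k)
    return evals, modes

# Synthetic data: four spatial structures evolving with known decay rates and frequencies.
m, nt, dt = 100, 200, 0.05
z = np.linspace(0, 1, m)
t = np.arange(nt) * dt
data = (np.outer(np.sin(np.pi*z),   np.exp(-0.1*t)*np.cos(2*t))
      + np.outer(np.sin(2*np.pi*z), np.exp(-0.1*t)*np.sin(2*t))
      + np.outer(np.sin(3*np.pi*z), np.exp(-0.5*t)*np.cos(5*t))
      + np.outer(np.sin(4*np.pi*z), np.exp(-0.5*t)*np.sin(5*t)))

X, Y = data[:, :-1], data[:, 1:]
evals, modes = dmd(X, Y, r=4)
# Continuous-time Koopman eigenvalues lambda_k = log(mu_k) / dt
print(np.sort_complex(np.log(evals) / dt))   # should be close to -0.1 +/- 2i and -0.5 +/- 5i
```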
In some physical problems, we have a field of observables, i.e., an observable that
assigns a physical field to each element of the state space. A prominent example is a
fluid flow. The pressure field over a subdomain of the flow, or the whole vorticity field,
are two examples of fields of observables defined on a flow, since knowledge of the
flow state (e.g. the instantaneous velocity field) uniquely determines those fields. We can
formalize the notion of a field of observables as a function g : S × Ω → R, where Ω is the
flow domain and g(x, z) determines the value of the field at point z in the flow domain
when the flow is at state x. The Koopman linear expansion for g would be


    U^t g(x, z) = ∑_{k=0}^{∞} g_k(z) e^{λ_k t} φ_k(x),    (0.14)

where the Koopman mode g_k(z) is a fixed field by itself and, similar to the Koopman
mode vectors, determines a shape function on Ω whose amplitude evolves as e^{λ_k t} in
time. In a fluid flow, the Koopman modes of vorticity are steady vorticity fields, and the
whole vorticity field can be decomposed into such fields with amplitudes that evolve as
e^{λ_k t}.

0.5 History of Koopman operator theory and its application to data analysis

The Koopman operator formalism originated in the early work of Bernard Koopman
in 1931 [10]. He introduced the linear transformation that we now call the Koopman
operator, and realized that this transformation is unitary for Hamiltonian dynamical
systems (the “U” notation comes from this unitary property). This observation by Koopman
inspired John von Neumann to give the first proof of a precise formulation of the ergodic
hypothesis, known as the mean ergodic theorem [11]. The next year, Koopman and von
Neumann wrote a paper together in which they introduced the notion of the spectrum of
a dynamical system, i.e. the spectrum of the associated Koopman operator, and noted the
connection between chaotic behavior and the continuous part of the Koopman spectrum
[12].
For several decades after the work of Koopman and von Neumann, the notion of the
Koopman operator was mostly limited to the study of measure-preserving systems; you
could find it as the unitary operator in the proof of the mean ergodic theorem or in
discussions of the spectrum of measure-preserving dynamical systems [13, 14]. It seldom
appeared in other applied fields until it was brought back to the general scene of dynamical
systems by two articles in 2004 and 2005 [9, 6]. The first paper showed how we can construct
important objects like the invariant sets in high-dimensional state spaces from data. It
also emphasized the role of nontrivial eigenvalues of the Koopman operator in detecting
periodic trends of the dynamics amidst chaotic data. The second paper discussed the spectral
properties of the Koopman operator further, and introduced the notion of Koopman
modes. Both papers also discussed the idea of applying the Koopman methodology to capture
the regular components of data in systems with a combination of chaotic and regular
behavior.
In 2009, the idea of Koopman modes was applied to a complex fluid flow, namely,
a jet in a cross flow [15]. This work showed the promise of KMD in capturing the
dynamically relevant structures in the flow and their associated time scales. Unlike
other decomposition techniques in flows, KMD combines two advantageous properties:
it makes a clear connection between the measurements in the physical domain and the
dynamics in the state space (unlike proper orthogonal decomposition), and it is completely
data-driven (unlike global mode analysis). The work in [15] also showed that KMD
can be computed through a numerical decomposition technique known as Dynamic Mode
Decomposition (DMD) [16]. Since then, KMD and DMD have become immensely popular
in analyzing nonlinear flows [16, 17, 18, 19, 20, 21, 22, 23, 24]. A review of the
Koopman theory in the context of fluid flows can be found in [25].
The extent of KMD applications for data-driven analysis has grown enormously in
other fields too. Some of these applications include model reduction and fault detection
in energy systems for buildings [26, 27], coherency identification and stability assessment
in power networks [28, 29], hybrid mechanical systems [30], extracting spatio-temporal
patterns of brain activity [31], background detection and object tracking in videos [32, 33]
and design of algorithmic trading strategies in finance [34].
Parallel to the applications, the computation of Koopman spectral properties (modes,
eigenfunctions and eigenvalues) has also seen major advancements. For post-
transient systems, the Koopman eigenvalues lie on the unit circle and Fourier analysis
techniques can be used to find the Koopman spectrum and modes [9, 5]. There is also
another rigorous route to approximate the Koopman operator of measure-preserving
systems through the classical periodic approximation [35, 36]. For dissipative systems,
the Koopman spectral properties can be computed using a theoretical algorithm known
as Generalized Laplace Analysis [37, 38].
In applications involving transient behavior, DMD is the popular technique for
computation of the Koopman spectrum from data. In [39], the idea of Extended DMD was
introduced for general computation of the Koopman spectrum by sampling the state space
and using a dictionary of observables. The works in [40] and [41] discussed the linear-
algebraic properties of the algorithm and suggested new variations for better performance
and wider applications. New variants of DMD were also introduced in [42] to unravel
multi-time-scale phenomena and in [43] to account for linear input to the dynamical
system. Due to the constant growth in the size of the available data, new alterations or
improvements of DMD have also been devised to handle larger data sets [44, 45], different
sampling techniques [46, 41, 45] and noise [47, 48]. The convergence of DMD-type
algorithms for computation of the Koopman spectrum was discussed in [49], [50] and [51].
The ultimate goal of many data analysis techniques is to provide information that can
be used to predict and manipulate a system to our benefit. Applications of Koopman
operator techniques to data-driven prediction and control are just being developed, with
a few-year lag behind the above work. This lag is perhaps due to the need to account for
the effect of input in the formalism, but promising results have already appeared in this
line of research. The work in [52] showed an example of an optimal controller designed
based on a finite-dimensional Koopman linear expansion of the nonlinear dynamics.
The works in [53, 54] have developed a framework to build state estimators for nonlinear
systems based on Koopman expansions. More recent works have shown successful
examples of Koopman linear predictors for nonlinear systems [55], and optimal controllers
for Hamiltonian systems designed based on Koopman eigenfunctions [56]. More recent
applications include feedback control of fluid flows using Koopman linear models
computed from data in a model-predictive control framework [57, 58, 59].

Bibliography

[1] J. Guckenheimer and P. Holmes, Nonlinear oscillations, dynamical systems, and
bifurcations of vector fields. Springer-Verlag, New York, 1983.
[2] S. Wiggins, Introduction to applied nonlinear dynamical systems and chaos, vol. 2.
Springer Science & Business Media, 2003.
[3] R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics.
Springer-Verlag, New York, 1997.
[4] I. Mezić, Koopman operator spectrum and data analysis, arXiv preprint
arXiv:1702.07597 (2017).
[5] H. Arbabi and I. Mezić, Study of dynamics in post-transient flows using Koopman
mode decomposition, Phys. Rev. Fluids 2 (2017) 124402.
[6] I. Mezić, Spectral properties of dynamical systems, model reduction and
decompositions, Nonlinear Dynamics 41 (2005), no. 1-3 309–325.
[7] M. Korda, M. Putinar, and I. Mezić, Data-driven spectral analysis of the Koopman
operator, arXiv preprint arXiv:1710.06532 (2017).
[8] H. Broer and F. Takens, Mixed spectra and rotational symmetry, Archive for
rational mechanics and analysis 124 (1993), no. 1 13–42.
[9] I. Mezić and A. Banaszuk, Comparison of systems with complex behavior, Physica
D: Nonlinear Phenomena 197 (2004), no. 1 101–133.
[10] B. O. Koopman, Hamiltonian systems and transformation in Hilbert space,
Proceedings of the National Academy of Sciences 17 (1931), no. 5 315–318.
[11] P. R. Halmos, The legend of John von Neumann, The American Mathematical
Monthly 80 (1973), no. 4 382–394.
[12] B. O. Koopman and J. von Neumann, Dynamical systems of continuous spectra,
Proceedings of the National Academy of Sciences of the United States of America
18 (1932), no. 3 255.
[13] K. E. Petersen, Ergodic theory, vol. 2. Cambridge University Press, 1989.

[14] R. Mane, Ergodic Theory and Differentiable Dynamics. Springer-Verlag, New York,
1987.

[15] C. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. Henningson, Spectral analysis


of nonlinear flows, Journal of Fluid Mechanics 641 (2009), no. 1 115–127.

[16] P. J. Schmid, Dynamic mode decomposition of numerical and experimental data,


Journal of Fluid Mechanics 656 (2010) 5–28.

[17] P. J. Schmid, L. Li, M. Juniper, and O. Pust, Applications of the dynamic mode
decomposition, Theoretical and Computational Fluid Dynamics 25 (2011), no. 1-4
249–259.

[18] C. Pan, D. Yu, and J. Wang, Dynamical mode decomposition of gurney flap wake
flow, Theoretical and Applied Mechanics Letters 1 (2011), no. 1.

[19] A. Seena and H. J. Sung, Dynamic mode decomposition of turbulent cavity flows
for self-sustained oscillations, International Journal of Heat and Fluid Flow 32
(2011), no. 6 1098–1110.

[20] T. W. Muld, G. Efraimsson, and D. S. Henningson, Flow structures around a


high-speed train extracted using proper orthogonal decomposition and dynamic
mode decomposition, Computers & Fluids 57 (2012) 87–97.

[21] J.-C. Hua, G. H. Gunaratne, D. G. Talley, J. R. Gord, and S. Roy, Dynamic-mode


decomposition based analysis of shear coaxial jets with and without transverse
acoustic driving, Journal of Fluid Mechanics 790 (2016) 5–32.

[22] S. Bagheri, Koopman-mode decomposition of the cylinder wake, J. Fluid Mech 726
(2013) 596–623.

[23] T. Sayadi, P. J. Schmid, J. W. Nichols, and P. Moin, Reduced-order representation


of near-wall structures in the late transitional boundary layer, Journal of Fluid
Mechanics 748 (2014) 278–301.

[24] P. K. Subbareddy, M. D. Bartkowicz, and G. V. Candler, Direct numerical


simulation of high-speed transition due to an isolated roughness element, Journal of
Fluid Mechanics 748 (2014) 848–878.

[25] I. Mezić, Analysis of fluid flows via spectral properties of the Koopman operator,
Annual Review of Fluid Mechanics 45 (2013) 357–378.

[26] M. Georgescu and I. Mezić, Building energy modeling: A systematic approach to


zoning and model reduction using Koopman mode analysis, Energy and buildings
86 (2015) 794–802.
[27] M. Georgescu, S. Loire, D. Kasper, and I. Mezic, Whole-building fault detection: A
scalable approach using spectral methods, arXiv preprint arXiv:1703.07048 (2017).

[28] Y. Susuki and I. Mezić, Nonlinear Koopman modes and coherency identification of
coupled swing dynamics, IEEE Transactions on Power Systems 26 (2011), no. 4
1894–1904.

[29] Y. Susuki and I. Mezić, Nonlinear Koopman modes and power system stability
assessment without models, IEEE Transactions on Power Systems 29 (2014), no. 2
899–907.

[30] N. Govindarajan, H. Arbabi, L. van Blargian, T. Matchen, E. Tegling, et. al., An


operator-theoretic viewpoint to non-smooth dynamical systems: Koopman analysis
of a hybrid pendulum, in 2016 IEEE 55th Conference on Decision and Control
(CDC), pp. 6477–6484, IEEE, 2016.

[31] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, Extracting


spatial–temporal coherent patterns in large-scale neural recordings using dynamic
mode decomposition, Journal of neuroscience methods 258 (2016) 1–15.

[32] J. N. Kutz, X. Fu, S. L. Brunton, and N. B. Erichson, Multi-resolution dynamic


mode decomposition for foreground/background separation and object tracking, in
Computer Vision Workshop (ICCVW), 2015 IEEE International Conference on,
pp. 921–929, IEEE, 2015.

[33] N. B. Erichson, S. L. Brunton, and J. N. Kutz, Compressed dynamic mode


decomposition for background modeling, Journal of Real-Time Image Processing
(2016) 1–14.

[34] J. Mann and J. N. Kutz, Dynamic mode decomposition for financial trading
strategies, Quantitative Finance (2016) 1–13.

[35] N. Govindarajan, R. Mohr, S. Chandrasekaran, and I. Mezić, On the
approximation of Koopman spectra for measure-preserving transformations, arXiv
preprint arXiv:1803.03920 (2018).

[36] N. Govindarajan, R. Mohr, S. Chandrasekaran, and I. Mezić, On the
approximation of Koopman spectra of measure-preserving flows, arXiv preprint
arXiv:1806.10296 (2018).

[37] R. M. Mohr, Spectral Properties of the Koopman Operator in the Analysis of


Nonstationary Dynamical Systems. PhD thesis, 2014.

[38] R. Mohr and I. Mezić, Construction of eigenfunctions for scalar-type operators via
Laplace averages with connections to the Koopman operator, arXiv preprint
arXiv:1403.6559 (2014).
[39] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, A data-driven approximation
of the Koopman operator: Extending dynamic mode decomposition, Journal of
Nonlinear Science 25 (2015), no. 6 1307–1346.
[40] K. K. Chen, J. H. Tu, and C. W. Rowley, Variants of dynamic mode
decomposition: boundary condition, Koopman, and Fourier analyses, Journal of
Nonlinear Science 22 (2012), no. 6 887–915.
[41] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, On
dynamic mode decomposition: theory and applications, Journal of Computational
Dynamics 1 (2014) 391–421.
[42] J. N. Kutz, X. Fu, and S. L. Brunton, Multiresolution dynamic mode decomposition,
SIAM Journal on Applied Dynamical Systems 15 (2016), no. 2 713–735.
[43] J. L. Proctor, S. L. Brunton, and J. N. Kutz, Dynamic mode decomposition with
control, SIAM Journal on Applied Dynamical Systems 15 (2016), no. 1 142–161.
[44] M. S. Hemati, M. O. Williams, and C. W. Rowley, Dynamic mode decomposition
for large and streaming datasets, Physics of Fluids 26 (2014), no. 11 111701.
[45] F. Guéniat, L. Mathelin, and L. R. Pastur, A dynamic mode decomposition
approach for large and arbitrarily sampled systems, Physics of Fluids 27 (2015),
no. 2 025113.
[46] S. L. Brunton, J. L. Proctor, and J. N. Kutz, Compressive sampling and dynamic
mode decomposition, arXiv preprint arXiv:1312.5186 (2013).
[47] S. T. Dawson, M. S. Hemati, M. O. Williams, and C. W. Rowley, Characterizing
and correcting for the effect of sensor noise in the dynamic mode decomposition,
Experiments in Fluids 57 (2016), no. 3 1–19.
[48] M. S. Hemati, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, De-biasing the
dynamic mode decomposition for applied Koopman spectral analysis of noisy
datasets, Theoretical and Computational Fluid Dynamics 31 (2017), no. 4 349–368.
[49] H. Arbabi and I. Mezic, Ergodic theory, dynamic mode decomposition, and
computation of spectral properties of the Koopman operator, SIAM Journal on
Applied Dynamical Systems 16 (2017), no. 4 2096–2126.
[50] M. Korda and I. Mezić, On convergence of extended dynamic mode decomposition
to the Koopman operator, Journal of Nonlinear Science (2017) 1–24.
[51] I. Mezić and H. Arbabi, On the computation of isostables, isochrons and other
spectral objects of the Koopman operator using the dynamic mode decomposition, in
2017 International Symposium on Nonlinear Theory and Its Applications
(NOLTA), 2017.
[52] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, Koopman invariant
subspaces and finite linear representations of nonlinear dynamical systems for
control, PloS one 11 (2016), no. 2 e0150171.

[53] A. Surana, Koopman operator based observer synthesis for control-affine nonlinear
systems, in Decision and Control (CDC), 2016 IEEE 55th Conference on,
pp. 6492–6499, IEEE, 2016.

[54] A. Surana and A. Banaszuk, Linear observer synthesis for nonlinear systems using
Koopman operator framework, IFAC-PapersOnLine 49 (2016), no. 18 716–723.

[55] M. Korda and I. Mezić, Linear predictors for nonlinear dynamical systems:
Koopman operator meets model predictive control, Automatica 93 (2018) 149–160.

[56] E. Kaiser, J. N. Kutz, and S. L. Brunton, Data-driven discovery of Koopman
eigenfunctions for control, arXiv preprint arXiv:1707.01146 (2017).

[57] S. Peitz and S. Klus, Koopman operator-based model reduction for switched-system
control of PDEs, arXiv preprint arXiv:1710.06759 (2017).

[58] S. Peitz, Controlling nonlinear PDEs using low-dimensional bilinear approximations
obtained from data, arXiv preprint arXiv:1801.06419 (2018).

[59] H. Arbabi, M. Korda, and I. Mezic, A data-driven Koopman model predictive
control framework for nonlinear flows, arXiv preprint arXiv:1804.05291 (2018).
