
State-Space Control

J Carrasco

First version: October 2013

Revised: July 2016


Contents

1 Introduction to state-space representation 11


1.1 Dynamical systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.1 Definition of dynamical systems . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.2 Autonomous linear system and transformations . . . . . . . . . . . . . . 12
1.1.3 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.2 System modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2.1 State-space representation of a linear system . . . . . . . . . . . . . . . . 16
1.2.2 Transformation of state-space representation . . . . . . . . . . . . . . . . 18
1.3 Canonical forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3.1 Controller canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3.2 Observer canonical form . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.4 Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Solutions in the state-space 31


2.1 Modal form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.1.1 Definition of the modal form . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.1.2 Worked example: Modal form of a system . . . . . . . . . . . . . . . . . 32
2.2 Solution of a state-space representation . . . . . . . . . . . . . . . . . . . . . . . 34
2.2.1 Exponential matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.2.2 Worked Example: Computing exponential matrix eAt . . . . . . . . . . . 35
2.2.3 Autonomous case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.4 Stability of an LTI system . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3 Solution using Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.1 Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.2 Solution of the system: The autonomous case . . . . . . . . . . . . . . . 41
2.3.3 Transfer function of a state-space representation . . . . . . . . . . . . . . 43

2.3.4 Inverse “à la Rosenbrock” . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4 Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5 Further examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3 Nonlinear systems 49
3.1 Linearisation of nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1.1 Equilibrium point of a nonlinear system . . . . . . . . . . . . . . . . . . 49
3.1.2 Linearisation around equilibrium points . . . . . . . . . . . . . . . . . . . 50
3.1.3 Worked example: simple pendulum . . . . . . . . . . . . . . . . . . . . . 51
3.1.4 Linearisation around an operating point . . . . . . . . . . . . . . . . . . 53
3.1.5 Worked example: The quadruple-tank process . . . . . . . . . . . . . . . 53
3.2 Introduction to Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4 Controllability and Observability 61


4.1 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1.2 Test for controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.1.3 Proof of the main result . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.1.4 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2 Stabilizability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.1 Worked example: Jan 2013 Exam . . . . . . . . . . . . . . . . . . . . . . 68
4.3 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3.2 Derivation of the Observability matrix . . . . . . . . . . . . . . . . . . . 70
4.3.3 Test for observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.3.4 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.4 Detectability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 Worked example: Jan 2013 Exam . . . . . . . . . . . . . . . . . . . . . . 75
4.5 Final remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5.1 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5.2 Kalman’s decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.6 Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5 Design in the state-space 81
5.1 State-feedback controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.1.1 Design problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.1.2 Existence of solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.1.3 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.1.4 Worked example: Jan 2013 exam . . . . . . . . . . . . . . . . . . . . . . 84
5.1.5 Solution of the Pole Placement Problem: Ackermann’s formula . . . . . . 85
5.1.6 Worked Example: Q3 Jan 2013 Exam . . . . . . . . . . . . . . . . . . . . 86
5.1.7 Final discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2 Observer design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.2.1 Introduction to the concept of observer . . . . . . . . . . . . . . . . . . . 87
5.2.2 Observer design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2.3 Existence of solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.2.4 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.2.5 Solution of the problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.2.6 Final discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3 Output feedback design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3.1 Separation principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3.2 Design considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.4 Learning outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

6 Realisation of MIMO transfer functions 99


6.1 MISO systems: transfer function column-vector . . . . . . . . . . . . . . . . . . 99
6.1.1 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 MIMO systems: transfer function matrix . . . . . . . . . . . . . . . . . . . . . . 101
6.3 Rosenbrock system matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.4 Trivial realisation of a MIMO system . . . . . . . . . . . . . . . . . . . . . . . . 103
6.5 Minimal realisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.5.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.5.2 SISO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.5.3 MIMO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.6 Gilbert’s realisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.6.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.6.2 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.7 Some operations with systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109


6.7.1 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.8 Learning outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Notation

Symbols
Symbol Meaning
R Set of real numbers
C Set of complex numbers
Rn Real coordinate space of dimension n
Cn Complex coordinate space of dimension n
Rn×m Real matrices with n rows and m columns
Cn×m Complex matrices with n rows and m columns
Aᵀ Transpose of A
A∗ Complex conjugate of A
ẋ Time derivative dx/dt
x = (x1, x2) Stacked column vector, x = [x1ᵀ x2ᵀ]ᵀ

diag(d1 , d2 , . . . , dn ) Diagonal matrix with (d1 , d2 , . . . , dn ) as elements of the diagonal


z∗ Complex conjugate of the complex number z
v̄ Vector whose element are the complex conjugate of the elements of v,
i.e. v̄i = vi∗

Acronyms
Acronyms Meaning
ODE Ordinary Differential Equation
LTI Linear Time Invariant


Introduction
This set of notes on State-Space Control summarises and extends undergraduate concepts on
this topic. It has been developed following different textbooks for different sections. Some of
them are:

• “Feedback systems” by Åström and Murray.

• “Feedback Control of Dynamical Systems” by Franklin et al.

• “State Variables for Engineers” by DeRusso et al.

• “A Linear Systems Primer” by Antsaklis and Michel.

• “Linear Systems” by Kailath.

The last but one is recommended for students wishing to extend their knowledge of this topic.
The last one is the classical reference, but its approach is very mathematical.
I have tried to produce a self-contained set of notes, explaining some of the mathematical
concepts that we will use during the course. Nevertheless, the student may need further help
in Linear Algebra. Any undergraduate textbook in this topic should be fine, but I would
recommend “Introduction to Linear Algebra” by G. Strang, and “Matrix Analysis and Applied
Linear Algebra” by C. D. Meyer1 . For advanced students, the book “Matrix Analysis” by
R. A. Horn and C. R. Johnson should provide all required knowledge in this topic, but it is far
beyond the scope of this unit.
State-Space Control provides a mathematical framework for studying how to design controllers
and observers for dynamical systems. This representation is not as intuitive as the transfer
function representation of dynamical systems, but it provides other advantages. For example,
the state-space representation is not limited to linear systems. We will be required to use
several geometrical concepts, which are very powerful but can lead to complex and artificial
problems.
Whereas classical control was developed using transfer functions as the representation of
systems, in the 1950s–1970s there was a strong development of tools in the state-space.
Powered by the development of computers, State-Space Control became very popular. For a
while, it seemed that frequency methods were a thing of the past and the Laplace transform
suffered a temporary exile from the control realm (“The Laplace transform is dead and
buried”, R. E. Kalman, 1959). During the 1970s, frequency Jedi (watch Star Wars!) fought
to defend the simplicity and intuition of the Laplace transform. One of these Jedis was Prof.
H. H. Rosenbrock, a pioneer in computer-aided control techniques and MIMO frequency meth-
ods, who founded the Control Systems Centre at UMIST, nowadays the University of Manchester.

1 Available at http://www.matrixanalysis.com/DownloadChapters.html
Currently, sophisticated design methods, such as H∞, exploit the advantages of the state-space
representation to solve the frequency-domain design problem. In fact, other fields such as
operator theory have adopted this representation since it provides valuable advantages over
other representations.
This part of the unit has the following structure:

Chapter 1 The basic notions of state-space representations are given. Since the system is
represented by a linear map, the representation of a system is associated with the basis
used to represent this linear map. Some forms of the state-space representation will be
very important for designing controllers and observers; they are referred to as canonical
forms. We will show how to derive these canonical forms for a system represented by an
ODE or a transfer function.

Chapter 2 The solution of the state-space representation is shown, as well as an alternative
form to express the system: the modal form. It introduces the concept of modes of the
system, which are equivalent to the poles of the system in the frequency framework.
Stability will then be analysed using the dynamics of the modes, which are given by the
eigenvalues of a matrix. The Laplace transform of the state-space representation provides
the transfer function associated with the state-space representation.

Chapter 3 When the system is nonlinear, state-space representation is essential. In this unit,
we will study how to analyse nonlinear systems by linearising the system. Two linearisa-
tions will be given: around equilibrium points and around operating points.

Chapter 4 Four concepts that are essential for the design techniques in State-Space control are
introduced in this chapter: controllability, stabilizability, observability, and detectability.
These will be introduced in a mathematical framework, but their meaning will be fully
understood in Chapter 5.

Chapter 5 The problem of designing controllers and observers is solved. Concepts introduced
in Chapter 4 ensure either a solution for any arbitrary design specification or suitable
solutions with some restriction on the design specification.


Chapter 6 The last chapter introduces MIMO systems and their state-space representation.
Although it is trivial to obtain a state-space representation of a MIMO system, a new
concept turns up: minimality. Concepts presented in Chapter 4 are exploited to define
the minimality of the system. Gilbert’s realisation is required to obtain a representation
without redundant states.

[Figure 1 shows the structure: C1 (state-space representation) feeds C2 (linear systems:
solutions and stability) and C3 (nonlinear systems: linearisation); these lead to C4
(controllability and observability), which leads to C5 (design in the state-space) and C6
(realisation of MIMO transfer functions).]

Figure 1: Structure of this set of notes.

Finally, three appendices are given: Appendix A contains tutorials for each chapter; the
software lab is given in Appendix B; and the notes conclude with previous exam papers.

Chapter 1

Introduction to state-space
representation

1.1 Dynamical systems

1.1.1 Definition of dynamical systems

A dynamical system is a system that changes as time evolves. A system can be a part of the
universe, a set of memory bits, etc. A dynamical system consists of two elements:

1. A non-empty space D, e.g. R2.

2. A map from this space and time into the same space: f : D × R → D.

Then, the dynamical system would be described by the differential equation

ẋ(t) = f (x(t), t). (1.1)

Mathematically, it is a geometrical concept. It is not surprising that some of the concepts


presented in this unit are also referred to as Geometrical Control by some authors. Loosely
speaking, for every point of the space x ∈ D, the function f (x, t) provides the information
about the evolution of the system at the instant t. Given an initial condition, the trajectory
of the state follows the field of velocities f = f (x, t). When the function f does not depend on
the time, i.e. f = f (x), then the system is said to be a time-invariant system. Henceforth, we
will focus our attention on time-invariant systems.
A useful concept of a dynamical system is the state of the system. It can be defined as the
minimal information that determines the future of the system. For the dynamical system (1.1),


the state of the system is given by x(t) ∈ D. Hence the state of the system is a point of the
space D.

Example 1.1.1. Consider the free fall problem:


m d²y/dt² (t) = −mg.    (1.2)

We will see that it can be represented as (1.1) where f : R2 → R2. You should have studied at
some point of your life that the future of the system is determined by its position and speed at
some instant. Hence the state of the system is x ∈ R2.

This unit will focus on dynamical systems with this property; however, the property is not general.

Example 1.1.2. Let us consider a dynamical system with input delay; then the differential
equation is
ẋ(t) = f (x, t − τ ).    (1.3)

The state of the system is a function g : [−τ, 0] → D, i.e. we need the past of the system
over the interval [−τ, 0) in order to determine the evolution of the system at the instant
0.

As the system (1.1) is isolated from the rest of the universe, its evolution only depends on
itself, and we say that the system (1.1) is an autonomous system.

1.1.2 Autonomous linear system and transformations

During most of this unit, we will focus on linear time-invariant dynamical systems.
In this case, f is a linear map of the coordinates x, hence f (x(t)) = Ax(t) where A is a square
matrix, i.e.
ẋ(t) = Ax(t).    (1.4)

In the jargon, such systems are referred to as Linear Time-Invariant (LTI). Loosely speaking,
the function f is a linear map from Rn into Rn.¹ A natural question arises: can we express
this system using another set of coordinates?
Let us assume that we have two different bases on Rn, x and z.² Then there exists a
nonsingular square matrix T such that z = T x. The dynamical system (1.4) can also be
represented in the basis z as follows

ż = T ẋ = T Ax = T AT⁻¹ z.    (1.5)

¹ Any linear map from Rn into Rm is represented by an m-by-n matrix.
² We are abusing notation here: x and z denote the coordinates of the same point in the two bases.
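One immediate consequence of Az = T Ax T⁻¹ is that similar matrices share the same characteristic polynomial, and hence the same eigenvalues: the "speed" of the dynamics does not depend on the chosen basis. A minimal numerical illustration (Python with NumPy; the matrix A and the transformation T below are arbitrary random choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
Ax = rng.standard_normal((3, 3))     # system matrix in the x basis
T = rng.standard_normal((3, 3))      # generically nonsingular transformation

Az = T @ Ax @ np.linalg.inv(T)       # same system in the z = T x basis

# The characteristic polynomial (hence the eigenvalues) is basis-independent
assert np.allclose(np.poly(Ax), np.poly(Az))
print("characteristic polynomial is invariant under the change of basis")
```

Any nonsingular T would do; the random draw merely makes the check generic rather than tied to a special structure.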


[Figure 1.1: electrical circuit for the worked example, with capacitor charges q1, q2 and mesh currents i1, i2.]

As a result, the dynamical system can be represented in any basis of Rn . Considering two bases
x and z where z = T x, two different matrices Ax and Az will describe the same dynamical
system, and they are related as Az = T Ax T −1 .

1.1.3 Worked example

Let us consider the electrical circuit in Fig. 1.1. The dynamics of the circuit can be described
using infinitely many sets of coordinates, but two seem straightforward: the charges at the
capacitors q = (q1, q2) and the mesh currents i = (i1, i2). In this example, we are going to
model the same circuit using both sets of coordinates and check the theoretical result obtained
in the previous section.

Using q Applying Kirchhoff’s Voltage Law (see Wikipedia for more details) on the left mesh,

Σᵢ Vᵢ,left = (1/C)q1 + i1R − (1/C)q2 = 0,   i1 = −(1/(CR))q1 + (1/(CR))q2    (1.6)

and using KVL on the right mesh,

Σᵢ Vᵢ,right = i2R + (1/C)q2 = 0,   i2 = −(1/(CR))q2    (1.7)

Moreover, both charges and currents are related as follows:

q̇1 = i1 = −(1/(CR))q1 + (1/(CR))q2,    (1.8)
q̇2 = i2 − i1 = −(1/(CR))q2 + (1/(CR))q1 − (1/(CR))q2,    (1.9)

or equivalently

q̇ = [ −1/(RC)   1/(RC)
       1/(RC)  −2/(RC) ] q    (1.10)


Using i The time derivatives of (1.7) and (1.6) are given by

i̇2R + (1/C)q̇2 = 0,   q̇2 = −RC i̇2;    (1.11)

and

(1/C)q̇1 + i̇1R − (1/C)q̇2 = 0,   q̇1 = −RC i̇1 − RC i̇2.    (1.12)

The dynamical equations at the capacitors can be written as

i1 = q̇1 = −RC i̇1 − RC i̇2,    (1.13)
i2 − i1 = q̇2 = −RC i̇2.    (1.14)

Reordering the above equations, the desired result is reached:

i̇ = [ −2/(RC)   1/(RC)
       1/(RC)  −1/(RC) ] i    (1.15)

Transformation From the KVL, we can deduce the transformation between charges and
currents:

i1 = −(1/(CR))q1 + (1/(CR))q2    (1.16)
i2 = −(1/(CR))q2    (1.17)

or

i = [ −1/(RC)   1/(RC)
        0      −1/(RC) ] q    (1.18)

Therefore, applying the transformation result to the system (1.10), we should recover (1.15):

i̇ = [ −1/(RC)  1/(RC) ; 0  −1/(RC) ] [ −1/(RC)  1/(RC) ; 1/(RC)  −2/(RC) ] [ −1/(RC)  1/(RC) ; 0  −1/(RC) ]⁻¹ i    (1.19)

Trick. The inverse of a 2-by-2 matrix is given by

[ a  b ; c  d ]⁻¹ = (1/(ad − bc)) [ d  −b ; −c  a ]    (1.20)


   −1
1 1 1 1 1 1
− RC −
RC   RC

RC   RC RC 
 =
1 1 2 1
0 − RC RC
− RC
0 − RC
   −1
1 −1 1  −1 1  −1 1 
=
RC 0 −1 1 −2 0 −1
    
1 −1 1  −1 1  1 
−1 1
=
RC 0 −1 1 −2 (−1)(−1) − 0 0 −1
    
2 1
1  2 −3 −1 −1 −
  =  RC RC  . (1.21)
RC −1 2 0 −1 1
− RC1
RC

Hence, performing the basis transformation we obtain the same result as in (1.15). This
example has demonstrated the relationship between two representations of the same system.
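The chain of matrix products above can be checked numerically. The sketch below (Python with NumPy, taking R = C = 1 purely to keep the numbers simple — an assumption, not a value from the text) verifies that T Aq T⁻¹ recovers the mesh-current matrix of (1.15):

```python
import numpy as np

# State matrix in charge coordinates q, from (1.10), with R = C = 1
Aq = np.array([[-1.0,  1.0],
               [ 1.0, -2.0]])

# Transformation i = T q, from (1.18), with R = C = 1
T = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])

# Change of basis: Ai = T Aq T^{-1}, as in (1.19)
Ai = T @ Aq @ np.linalg.inv(T)

# Expected state matrix in mesh-current coordinates, from (1.15)
Ai_expected = np.array([[-2.0,  1.0],
                        [ 1.0, -1.0]])

assert np.allclose(Ai, Ai_expected)
print(Ai)
```

The same check works for any R and C, since the scalar 1/(RC) factors out of every matrix in the product.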

1.2 System modelling

The key point in control engineering and system theory is interaction. We are interested in
studying the dynamical evolution of interconnected systems. In particular, feedback systems
will attract much of our attention. Therefore, we would like to model our system as a dynamical
system including explicitly input u and output y:

ẋ = f (x, u) x ∈ Rnx , u ∈ Rnu ; (1.22)

y = h(x, u) y ∈ Rny (1.23)

where nx is the number of state coordinates, nu is the number of inputs, and ny is the number of
outputs. This representation of a system is very general and most real systems can be modelled
by (1.22) and (1.23). These equations are referred to as the system equation and the output
equation, respectively.

In contrast with the transfer function representation of a system, the state-space representa-
tion is not limited to linear systems and it can also cover time-varying systems if f = f (x, u, t)
and h = h(x, u, t).

Nevertheless, this first encounter with state-space representation will be focused on either
linear systems or how to approximate a nonlinear system with a linear system, i.e. linearisation.


1.2.1 State-space representation of a linear system

The general definition of a dynamical system can be used to describe the behaviour of a linear
system as follows:

ẋ = Ax + Bu x ∈ Rnx , u ∈ Rnu ; (1.24)

y = Cx + Du y ∈ Rny (1.25)

where A ∈ Rnx×nx, B ∈ Rnx×nu, C ∈ Rny×nx, and D ∈ Rny×nu. Equations (1.24) and (1.25)
are said to be the state-space representation of a linear system. In short, we will say that the
four matrices (A, B, C, D) represent a linear time-invariant (LTI) system.
For systems with a single input and a single output, i.e. nu = ny = 1, B is a column vector, C is
a row vector and D is a scalar. These systems are referred to as Single-Input Single-Output
(SISO). Systems with a single input but several outputs, i.e. nu = 1, ny > 1, are referred
to as Single-Input Multiple-Output (SIMO). Systems with several inputs but a single output,
i.e. nu > 1, ny = 1, are referred to as Multiple-Input Single-Output (MISO). Finally, systems
with several inputs and several outputs, i.e. nu > 1, ny > 1, are referred to as Multiple-Input
Multiple-Output (MIMO).
Restricting our attention to SISO systems, we have the following result:

Result 1.2.1. Any ordinary differential equation in the form

dⁿy/dtⁿ + a_{n−1} dⁿ⁻¹y/dtⁿ⁻¹ + · · · + a1 dy/dt + a0 y = b_m dᵐu/dtᵐ + b_{m−1} dᵐ⁻¹u/dtᵐ⁻¹ + · · · + b1 du/dt + b0 u    (1.26)

with m ≤ n has an equivalent state-space representation. □

Worked example: Ideal Mass-spring-damper system

Let us consider an ideal mass-spring-damper system where an external force F is applied on the
mass (see Fig. 1.2). The output of the system is the position of the mass y. Applying Newton’s
second law, the dynamics of the system are given by

Σᵢ Fᵢ = ma = mÿ.

There are three forces in the direction of y: the spring force (−ky), the damper force (−βẏ),
and the external force F. It follows that

F + (−ky) + (−βẏ) = mÿ,    (1.27)

[Figure 1.2: Ideal mass-spring-damper system. The mass m moves without friction along y,
with y = 0 at the rest position; the spring force −ky and the damper force −βẏ act against
the external force F.]

equivalently
ÿ + (β/m)ẏ + (k/m)y = F/m.    (1.28)

Let us define the set of states as

x1 = y,    (1.29)
x2 = ẏ;    (1.30)

then we can find a state-space representation of this system as follows. From the definition of
both coordinates, it is trivial that ẋ1 = x2; then (1.28) can be rewritten in terms of x1, x2, and
ẋ2:
ẋ2 + (β/m)x2 + (k/m)x1 = F/m.    (1.31)

As a result, the system is described by two simultaneous first-order differential equations:

ẋ1 = 0x1 + x2 + 0F,    (1.32)
ẋ2 = −(k/m)x1 − (β/m)x2 + (1/m)F.    (1.33)

Now we rewrite these two equations using matrices and the state x = (x1, x2):

ẋ = [   0      1
      −k/m   −β/m ] x + [ 0  1/m ]ᵀ F.    (1.34)

Using (1.29)-(1.30), the output equation is given by

y = x1 + 0x2 + 0F = [ 1  0 ] x + 0F.    (1.35)

In summary, the state-space representation of an ideal mass-spring-damper system is given
by

A = [   0      1
      −k/m   −β/m ],   B = [ 0  1/m ]ᵀ,   C = [ 1  0 ],   D = 0.    (1.36)


It is worth highlighting that the state space is a mathematical description of a system that
may be different from the real space where the system performs its trajectory. In this example,
the system moves along the y-axis, which is a one-dimensional space. However, the state of the
system, as a mathematical concept, evolves on a two-dimensional space.
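The representation (1.36) can also be simulated directly. The sketch below (Python with NumPy) integrates the state equation with a simple forward-Euler scheme; the parameter values m = 1, k = 1, β = 0.5 and the constant force F = 1 are illustrative assumptions, not values from the text:

```python
import numpy as np

m, k, beta = 1.0, 1.0, 0.5   # illustrative parameters (assumed)
A = np.array([[0.0,   1.0],
              [-k/m, -beta/m]])
B = np.array([0.0, 1.0/m])
C = np.array([1.0, 0.0])

x = np.zeros(2)              # start at rest: y = 0, ydot = 0
F, dt = 1.0, 0.001
for _ in range(int(40.0/dt)):        # integrate for 40 s
    x = x + dt * (A @ x + B * F)     # forward-Euler step of xdot = Ax + BF

y = C @ x
# With a constant force the damped mass settles at the static deflection F/k
print(round(y, 3))
```

The steady state is a quick sanity check on the matrices in (1.36): setting ẋ = 0 in (1.34) gives x2 = 0 and x1 = F/k.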

1.2.2 Transformation of state-space representation

In the same manner as we can use different bases to express autonomous systems, we can also
use basis transformations with input-output systems.
Let us consider the dynamical system given by

ẋ = Ax x + Bx u (1.37)

y = Cx x + Dx u (1.38)

and the new set of coordinates z = T x, where T ∈ Rnx ×nx is a nonsingular matrix. Then, we
can express the above dynamical system in the basis z as follows

ż = T ẋ = T Ax x + T Bx u = T Ax T −1 z + T Bx u (1.39)

y = Cx x + Dx u = Cx T −1 z + Dx u (1.40)

As a result, the state-space representation has been transformed as Az = T Ax T −1 , Bz = T Bx ,


Cz = Cx T −1 , and Dz = Dx .
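A useful consequence of this transformation is that the input–output behaviour is unchanged: Cz(sI − Az)⁻¹Bz + Dz = Cx(sI − Ax)⁻¹Bx + Dx for every nonsingular T. A quick numerical check (Python with NumPy; the random system matrices and the transformation T are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Ax = rng.standard_normal((n, n))
Bx = rng.standard_normal((n, 1))
Cx = rng.standard_normal((1, n))
Dx = np.array([[0.5]])
T = rng.standard_normal((n, n))   # generically nonsingular

# Transformed representation, as derived in (1.39)-(1.40)
Az = T @ Ax @ np.linalg.inv(T)
Bz = T @ Bx
Cz = Cx @ np.linalg.inv(T)
Dz = Dx

def G(A, B, C, D, s):
    """Evaluate the transfer function C (sI - A)^{-1} B + D at s."""
    return C @ np.linalg.solve(s*np.eye(len(A)) - A, B) + D

for s in (1.0 + 0.0j, 2.0 + 3.0j):
    assert np.allclose(G(Ax, Bx, Cx, Dx, s), G(Az, Bz, Cz, Dz, s))
print("transfer function invariant under z = T x")
```

This anticipates a point made formally in Chapter 2: the transfer function of a state-space representation does not depend on the basis chosen for the state.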

1.3 Canonical forms


As we have stated in Result 1.2.1, there is a state-space representation for every ODE. Usually,
we prefer to express a differential equation in the Laplace domain, and we speak about transfer
functions. In conclusion, any transfer function can be represented by an infinite number of state-
space representations. Among all possible state-space representations of a system, three are
very important: controller canonical form, observer canonical form, and modal form (Chapter
2).
Before starting with the different representations of a transfer function, we must comment that
the term D is independent of the state-space representation, as we have shown in Section 1.2.2.
Therefore, we will focus our attention on systems where D = 0. For instance, any transfer
function can be decomposed as
G(s) = Ḡ(s) + G(∞)    (1.41)


where Ḡ is a strictly proper system, i.e. Ḡ(∞) = 0. For any representation of G(s), the matrix
D will be given by D = G(∞).
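As a small illustration of this decomposition (the transfer function below is an assumed example, not one from the text), take G(s) = (s + 2)/(s + 1). Polynomial division gives G(∞) = 1 and the strictly proper part Ḡ(s) = 1/(s + 1); with NumPy:

```python
import numpy as np

# G(s) = (s + 2)/(s + 1): numerator and denominator coefficients
num = [1.0, 2.0]
den = [1.0, 1.0]

# Polynomial division splits G(s) into G(inf) plus a strictly proper remainder
quotient, remainder = np.polydiv(num, den)

print(quotient)    # D = G(inf)
print(remainder)   # numerator of Gbar(s) = remainder / den
```

Here `quotient` is the constant D = G(∞) and `remainder/den` is the strictly proper Ḡ(s) whose state-space representation carries A, B and C.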

1.3.1 Controller canonical form

Definition

Consider the system described by

dⁿy(t)/dtⁿ + a_{n−1} dⁿ⁻¹y(t)/dtⁿ⁻¹ + · · · + a1 dy(t)/dt + a0 y(t) = b_{n−1} dⁿ⁻¹u(t)/dtⁿ⁻¹ + b_{n−2} dⁿ⁻²u(t)/dtⁿ⁻² + · · · + b1 du(t)/dt + b0 u(t),    (1.42)

or equivalently,

G(s) = Y(s)/U(s) = (b_{n−1}sⁿ⁻¹ + b_{n−2}sⁿ⁻² + · · · + b1 s + b0)/(sⁿ + a_{n−1}sⁿ⁻¹ + · · · + a1 s + a0);    (1.43)

where bᵢ ≠ 0 for at least one 1 ≤ i < n.

Then its controller canonical form is given by

ẋ(t) = [  0    1    0    0   ⋯    0
          0    0    1    0   ⋯    0
          0    0    0    1   ⋯    0
          ⋮    ⋮    ⋮    ⋮         ⋮
          0    0    0    0   ⋯    1
        −a0  −a1  −a2  −a3   ⋯  −a_{n−1} ] x(t) + [ 0  0  0  ⋯  0  1 ]ᵀ u(t)    (1.44)

y = [ b0  b1  b2  ⋯  b_{n−1} ] x(t).    (1.45)

Other versions of this form can be found in the literature by renaming the states in the opposite
order. In the following, we show its development.

Development: Simplest case

First, let us consider the case where the ODE does not contain derivatives of the input:

dⁿy(t)/dtⁿ + a_{n−1} dⁿ⁻¹y(t)/dtⁿ⁻¹ + · · · + a1 dy(t)/dt + a0 y = u,    (1.46)

or equivalently, a transfer function with no finite zeros:

G(s) = Y(s)/U(s) = 1/(sⁿ + a_{n−1}sⁿ⁻¹ + · · · + a1 s + a0).    (1.47)
[Figure 1.3: Block diagram representation of the control canonical form: a chain of n
integrators with x1 = y, each state the integral of the next, and feedback gains
−a0, …, −a_{n−1} summed with u into ẋn.]

Let us define the set of state coordinates as

x1(t) = y(t),    (1.48)
x2(t) = dy(t)/dt = ẋ1(t),    (1.49)
⋮    (1.50)
xn(t) = dⁿ⁻¹y(t)/dtⁿ⁻¹ = ẋ_{n−1}(t).    (1.51)

Then, substituting the above states in (1.46), it follows that

ẋn(t) + a_{n−1}xn(t) + · · · + a1 x2(t) + a0 x1(t) = u(t),    (1.52)

where the derivative of the last state is

ẋn(t) = −a_{n−1}xn(t) − · · · − a1 x2(t) − a0 x1(t) + u(t).    (1.53)

In summary, the controller canonical form of (1.46) or (1.47) is given by

ẋ(t) = [  0    1    0   ⋯    0         0
          0    0    1   ⋯    0         0
          0    0    0   ⋯    0         0
          ⋮    ⋮    ⋮         ⋮         ⋮
          0    0    0   ⋯    0         1
        −a0  −a1  −a2   ⋯  −a_{n−2}  −a_{n−1} ] x(t) + [ 0  0  0  ⋯  0  1 ]ᵀ u(t)    (1.54)

y = [ 1  0  0  ⋯  0  0 ] x(t)    (1.55)

Since integration blocks are standard in electrical circuits, we can think of this procedure
as the practical implementation of an ODE or transfer function. Therefore, this procedure is
classically referred to as realisation.


Remark 1.3.1. Different authors use different structures for the definition of the controller
canonical form, so the interested student should consult different textbooks and note that the
different structures have a common point: only one state depends directly on the input, and
each of the remaining states depends directly on only one other state.

Development: General case

Once we have developed this particular case, we have got the tools to deal with the most general
case. Let us consider the differential equation given by
dn y(t) dn−1 y(t) dy(t) dn−1 u(t) dn−2 u(t) du(t)
n
+a n−1 n−1
+· · ·+a 1 +a 0 y(t) = b n−1 n−1
+b n−2 n−2
+· · ·+b1 +b0 u(t),
dt dt dt dt dt dt
(1.56)
or equivalently, a transfer function with no bounded zeros:
Y (s) bn−1 sn−1 + bn−2 sn−2 + · · · + b1 s + b0
G(s) = = ; (1.57)
U (s) sn + an−1 sn−1 + · · · + a1 s + a0
where bi 6= 0 for at least one 1 ≤ i < n.
Let us consider an intermediate variable ξ such that
dn ξ(t) dn−1 ξ(t) dξ(t)
+ a n−1 + · · · + a 1 + a0 ξ(t) = u(t). (1.58)
dtn dtn−1 dt
By linearity, it can be shown that
dn−1 ξ(t) dn−2 ξ(t) dξ(t)
y(t) = bn−1 n−1
+ b n−2 n−2
+ · · · + b1 + b0 ξ(t), (1.59)
dt dt dt
however, this linearity argument may not be straightforward for the student. It can be easy to
understand the following procedure is the frequency domain. Let us introduce an intermediate
variable Ξ (Laplace transform of ξ) as follows
Y (s) bn−1 sn−1 + bn−2 sn−2 + · · · + b1 s + b0 Ξ(s)
= , (1.60)
U (s) sn + an−1 sn−1 + · · · + a1 s + a0 Ξ(s)
hence we can rewrite the system between y and u as a system between u and ξ followed by a
sequence of derivatives
Ξ(s) 1
= n , (1.61)
U (s) s + an−1 sn−1 + · · · + a1 s + a0
Y (s) = (bn−1 sn−1 + bn−2 sn−2 + · · · + b1 s + b0 )Ξ(s). (1.62)

Now, it is clear that (1.61) and (1.62) are the equivalent of (1.58) and (1.59) in the Laplace
domain.
Therefore, the realization of the general case can be reduced to the previous case by using
the intermediate variable ξ. One could think that the brute force implementation of (1.62) is

indeed infeasible (see Fig. 1.4a), and hence the system (1.56) unrealisable. Nonetheless, ξ = x1
and x2 = ẋ1, so ξ̇ = x2, and so on; thus the realisation of (1.56) is carried out without the
implementation of (1.62) (see Fig. 1.4b).

[Figure 1.4: Block diagram of control canonical form. (a) Unrealisable control canonical form,
in which the output is formed by explicitly differentiating ξ up to order n−1; (b) realisable
control canonical form, in which the derivatives of ξ are read off the integrator chain as the
states x1, …, xn.]

As a result, the control canonical form of the ODE (1.56) or transfer function (1.57) is

   
$$\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(t) \qquad (1.63)$$
$$y = \begin{bmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-1} \end{bmatrix} x(t). \qquad (1.64)$$


Worked example

Let us consider the dynamical system given by the following ODE

$$\frac{d^3y(t)}{dt^3} + 9\frac{d^2y(t)}{dt^2} + 26\frac{dy(t)}{dt} + 24\,y(t) = 3\frac{d^2u(t)}{dt^2} + 6\frac{du(t)}{dt} + 4\,u(t), \qquad (1.65)$$

or equivalently, by the following transfer function

$$G(s) = \frac{3s^2 + 6s + 4}{s^3 + 9s^2 + 26s + 24}. \qquad (1.66)$$

Let us consider the system with input u and output ξ defined by

$$\frac{d^3\xi(t)}{dt^3} + 9\frac{d^2\xi(t)}{dt^2} + 26\frac{d\xi(t)}{dt} + 24\,\xi(t) = u(t). \qquad (1.67)$$

Now, let us define the state coordinates as follows:

$$x_1(t) = \xi(t), \qquad (1.68)$$
$$x_2(t) = \frac{d\xi(t)}{dt} = \dot{x}_1(t), \qquad (1.69)$$
$$x_3(t) = \frac{d^2\xi(t)}{dt^2} = \dot{x}_2(t); \qquad (1.70)$$

and replace these states in (1.67)

$$\dot{x}_3(t) + 9x_3(t) + 26x_2(t) + 24x_1(t) = u(t) \;\text{ and so }\; \dot{x}_3(t) = -24x_1(t) - 26x_2(t) - 9x_3(t) + u(t). \qquad (1.71)$$

As a result, the time derivative of the state vector x is
$$\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -24 & -26 & -9 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t). \qquad (1.72)$$

By linearity arguments³, it can be shown that the output y is given as a linear combination of $\xi = x_1$, $\frac{d\xi(t)}{dt} = x_2$, and $\frac{d^2\xi(t)}{dt^2} = x_3$:
$$y(t) = 3\frac{d^2\xi(t)}{dt^2} + 6\frac{d\xi(t)}{dt} + 4\,\xi(t) \;\text{ and so }\; y(t) = 4x_1(t) + 6x_2(t) + 3x_3(t), \qquad (1.73)$$

or, in vector form,


$$y(t) = \begin{bmatrix} 4 & 6 & 3 \end{bmatrix} x(t). \qquad (1.74)$$

³ If you are still not sure of how this argument works, use the same argument as used in (1.60)–(1.62).

Figure 1.5: Block diagram representation of the control canonical form for the ODE (1.65) or transfer function (1.66).

Exercise 1.3.2. Given the state-space realization defined by the matrices


   
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -24 & -26 & -9 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 4 & 6 & 3 \end{bmatrix}, \quad \text{and } D = 0; \qquad (1.75)$$

use the MATLAB commands ss and tf to find the transfer function of this realisation. Check whether the result corresponds with (1.66). Develop a Simulink model using: (a) the state-space representation block, (b) integrators as in Fig. 1.5, and (c) an interpreted MATLAB function to define the state derivative followed by an integrator.
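If MATLAB is not to hand, the same check can be sketched in Python with scipy (an illustration only, not the required MATLAB workflow; `scipy.signal.ss2tf` plays the role of `ss` followed by `tf`):

```python
import numpy as np
from scipy.signal import ss2tf

# State-space matrices (1.75): control canonical form of (1.66)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-24.0, -26.0, -9.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[4.0, 6.0, 3.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# Coefficients are listed highest power first; they should reproduce (1.66):
# numerator ~ [0, 3, 6, 4], denominator ~ [1, 9, 26, 24]
print(np.round(num[0], 6))
print(np.round(den, 6))
```

Small floating-point residues may appear, since the coefficients are recovered numerically rather than symbolically.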

1.3.2 Observer canonical form

Once again, two different cases could be developed. However, since the general case is simpler than in the previous section, we will develop the observer canonical form for the general case directly, without any intermediate case. To simplify the notation, we will omit the explicit dependence on t.

Definition

Consider again the system described by

$$\frac{d^ny(t)}{dt^n} + a_{n-1}\frac{d^{n-1}y(t)}{dt^{n-1}} + \cdots + a_1\frac{dy(t)}{dt} + a_0\,y(t) = b_{n-1}\frac{d^{n-1}u(t)}{dt^{n-1}} + b_{n-2}\frac{d^{n-2}u(t)}{dt^{n-2}} + \cdots + b_1\frac{du(t)}{dt} + b_0\,u(t), \qquad (1.76)$$
or equivalently,
$$G(s) = \frac{Y(s)}{U(s)} = \frac{b_{n-1}s^{n-1} + b_{n-2}s^{n-2} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}; \qquad (1.77)$$
where $b_i \neq 0$ for at least one $1 \le i < n$.


Then the observer canonical form is given by


   
$$\dot{x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 & -a_{n-2} \\ 0 & 0 & \cdots & 0 & 1 & -a_{n-1} \end{bmatrix} x + \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{n-2} \\ b_{n-1} \end{bmatrix} u \qquad (1.78)$$
$$y = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 1 \end{bmatrix} x. \qquad (1.79)$$

Development

We are going to define a more sophisticated set of state coordinates for the ODE

$$\frac{d^ny}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} + a_0\,y = b_{n-1}\frac{d^{n-1}u}{dt^{n-1}} + b_{n-2}\frac{d^{n-2}u}{dt^{n-2}} + \cdots + b_1\frac{du}{dt} + b_0\,u. \qquad (1.80)$$

Let us reorder the above equation in a fancy way: all the terms with time-derivatives on the left-hand side, and the rest on the right-hand side, so it follows
$$\frac{d^ny}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} - b_{n-1}\frac{d^{n-1}u(t)}{dt^{n-1}} - b_{n-2}\frac{d^{n-2}u(t)}{dt^{n-2}} - \cdots - b_1\frac{du(t)}{dt} = b_0\,u(t) - a_0\,y. \qquad (1.81)$$

As every term on the left-hand side has a time-derivative, then one time-derivative can be taken
as a common factor, so

$$\frac{d}{dt}\left[\frac{d^{n-1}y}{dt^{n-1}} + a_{n-1}\frac{d^{n-2}y}{dt^{n-2}} + \cdots + a_1\,y - b_{n-1}\frac{d^{n-2}u}{dt^{n-2}} - b_{n-2}\frac{d^{n-3}u}{dt^{n-3}} - \cdots - b_1\,u\right] = b_0\,u(t) - a_0\,y. \qquad (1.82)$$
The first state has magically turned up! We will consider our first state as everything inside of
the brackets on the left-hand side, i.e.

$$x_1 = \frac{d^{n-1}y}{dt^{n-1}} + a_{n-1}\frac{d^{n-2}y}{dt^{n-2}} + \cdots + a_1\,y - b_{n-1}\frac{d^{n-2}u}{dt^{n-2}} - b_{n-2}\frac{d^{n-3}u}{dt^{n-3}} - \cdots - b_1\,u, \qquad (1.83)$$

and
ẋ1 = −a0 y + b0 u. (1.84)

Let us reorder (1.83) following the same procedure, but now terms with time-derivative will
stay on the right-hand side of the equation, whereas terms without derivatives will be moved
to the left-hand side, and so

$$x_1 - a_1\,y + b_1\,u = \frac{d^{n-1}y}{dt^{n-1}} + a_{n-1}\frac{d^{n-2}y}{dt^{n-2}} + \cdots + a_2\frac{dy}{dt} - b_{n-1}\frac{d^{n-2}u}{dt^{n-2}} - b_{n-2}\frac{d^{n-3}u}{dt^{n-3}} - \cdots - b_2\frac{du}{dt}, \qquad (1.85)$$

25
State-Space Control Introduction to state-space representation

Once again, our fancy re-aggregation of terms allows us to take a common derivative on the right-hand side, so
$$x_1 - a_1\,y + b_1\,u = \frac{d}{dt}\left[\frac{d^{n-2}y}{dt^{n-2}} + a_{n-1}\frac{d^{n-3}y}{dt^{n-3}} + \cdots + a_2\,y - b_{n-1}\frac{d^{n-3}u}{dt^{n-3}} - b_{n-2}\frac{d^{n-4}u}{dt^{n-4}} - \cdots - b_2\,u\right], \qquad (1.86)$$
and the second state is born as everything inside of the bracket on the left-hand side, i.e.

$$x_2 = \frac{d^{n-2}y}{dt^{n-2}} + a_{n-1}\frac{d^{n-3}y}{dt^{n-3}} + \cdots + a_2\,y - b_{n-1}\frac{d^{n-3}u}{dt^{n-3}} - b_{n-2}\frac{d^{n-4}u}{dt^{n-4}} - \cdots - b_2\,u, \qquad (1.87)$$

and

ẋ2 = x1 − a1 y + b1 u. (1.88)

And so forth: we obtain the states x3, x4, . . . , x_{n−2}, and
$$x_{n-1} = \frac{dy}{dt} + a_{n-1}\,y - b_{n-1}\,u, \qquad (1.89)$$
which can be reordered in the same manner, i.e.
$$x_{n-1} - a_{n-1}\,y + b_{n-1}\,u = \frac{dy}{dt}, \qquad (1.90)$$

an equation that gives birth to our last state

xn = y, (1.91)

and

ẋn = xn−1 − an−1 y + bn−1 u. (1.92)

During this procedure, some students may have been worried, since one could think that the time-derivatives of the states should depend only on the states and the input, yet y appears above. However, (1.91) will have calmed these students, since it allows us to rewrite the time-derivatives of the states as follows

$$\dot{x}_1 = -a_0\,x_n + b_0\,u, \qquad (1.93)$$
$$\dot{x}_2 = x_1 - a_1\,x_n + b_1\,u, \qquad (1.94)$$
$$\vdots \qquad (1.95)$$
$$\dot{x}_{n-1} = x_{n-2} - a_{n-2}\,x_n + b_{n-2}\,u, \qquad (1.96)$$
$$\dot{x}_n = x_{n-1} - a_{n-1}\,x_n + b_{n-1}\,u; \qquad (1.97)$$

Figure 1.6: Observer canonical form of ODE (1.76) or transfer function (1.77) with the new set of coordinates.

or equivalently
$$\dot{x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 & -a_{n-2} \\ 0 & 0 & \cdots & 0 & 1 & -a_{n-1} \end{bmatrix} x + \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_{n-2} \\ b_{n-1} \end{bmatrix} u \qquad (1.98)$$
$$y = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 1 \end{bmatrix} x. \qquad (1.99)$$

This form is known as the observer canonical realization of ODE (1.76) or transfer function (1.77). See Fig. 1.6 for a block diagram.

Worked example

Let us consider the dynamical system given by the following ODE
$$\frac{d^3y}{dt^3} + 9\frac{d^2y}{dt^2} + 26\frac{dy}{dt} + 24\,y = 3\frac{d^2u}{dt^2} + 6\frac{du}{dt} + 4\,u, \qquad (1.100)$$
or equivalently, by the following transfer function
$$G(s) = \frac{3s^2 + 6s + 4}{s^3 + 9s^2 + 26s + 24}. \qquad (1.101)$$

We are going to use this example to uncover our magic trick. Let us rewrite (1.100) as follows
$$\frac{d}{dt}\left(\ddot{y} + 9\dot{y} + 26y - 3\dot{u} - 6u\right) = -24\,y + 4\,u, \qquad (1.102)$$
then we can choose
$$x_1 = \ddot{y} + 9\dot{y} + 26y - 3\dot{u} - 6u, \qquad (1.103)$$

Figure 1.7: Observer canonical form of ODE (1.100) or transfer function (1.101) with the new set of coordinates.

where it is clear that ẋ1 = −24y + 4u. Once again, let us rewrite (1.103) as follows

$$\frac{d}{dt}\left(\dot{y} + 9y - 3u\right) = x_1 - 26\,y + 6\,u, \qquad (1.104)$$

then we can choose


x2 (t) = ẏ + 9y − 3u, (1.105)

then ẋ2 = −26y + x1 + 6u. And, finally,

$$\frac{d}{dt}\,y = x_2 - 9\,y + 3\,u, \qquad (1.106)$$

then x3 = y and ẋ3 = −9y + x2 + 3u.


As a result, the final expression of the observer canonical form is
$$\dot{x} = \begin{bmatrix} 0 & 0 & -24 \\ 1 & 0 & -26 \\ 0 & 1 & -9 \end{bmatrix} x + \begin{bmatrix} 4 \\ 6 \\ 3 \end{bmatrix} u \qquad (1.107)$$
$$y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} x. \qquad (1.108)$$

Exercise 1.3.3. Given the state-space realization defined by the matrices


   
$$A = \begin{bmatrix} 0 & 0 & -24 \\ 1 & 0 & -26 \\ 0 & 1 & -9 \end{bmatrix}, \quad B = \begin{bmatrix} 4 \\ 6 \\ 3 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, \quad \text{and } D = 0; \qquad (1.109)$$

use the MATLAB commands ss and tf to find the transfer function of this realisation. Check whether the result corresponds with (1.101). Develop a Simulink model using: (a) the state-space representation block, (b) integrators as in Fig. 1.7, and (c) an interpreted MATLAB function to define the state derivative followed by an integrator.
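As before, a Python sketch with scipy can verify this realisation outside MATLAB (an illustration only). It also shows that the observer canonical form recovers the same transfer function as the control canonical form of Exercise 1.3.2, i.e. the realisation of a transfer function is not unique:

```python
import numpy as np
from scipy.signal import ss2tf

# Observer canonical form matrices from (1.109)
A = np.array([[0.0, 0.0, -24.0],
              [1.0, 0.0, -26.0],
              [0.0, 1.0, -9.0]])
B = np.array([[4.0], [6.0], [3.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# Should reproduce (1.101): (3 s^2 + 6 s + 4)/(s^3 + 9 s^2 + 26 s + 24)
print(np.round(num[0], 6))
print(np.round(den, 6))
```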


1.4 Learning Outcomes


The learning outcomes of this chapter can be summarised as follows:

• The state-space representation is a very powerful method to model any system, linear or
nonlinear.

• When the system is linear and time invariant (LTI), then the representation of the system
is given by four matrices A, B, C and D with

ẋ = Ax + Bu, (1.110)

y = Cx + Du. (1.111)

• Any transfer function or ODE has a state-space representation.

• The state of the system is the minimal information that allows us to determine the future
of the system.

• The evolution of the system is represented by the trajectory of the state in the state-space.

• There are three important forms of representing an LTI system: control canonical form,
observer canonical form and modal form (Chapter 2).

• In the control canonical form, the input only directly affects one state and the output is
a linear combination of the state coordinates.

• In the observer canonical form, the output is one of the state coordinates and the input may directly affect the dynamics of all states.

Chapter 2

Solutions in the state-space

This chapter is devoted to finding the solution of a dynamical system in the state-space representation. Two methods are presented: firstly, we will solve the ODE in the time-domain and, secondly, we will do so in the frequency-domain.

2.1 Modal form

2.1.1 Definition of the modal form

We have skipped this form in the previous chapter since this form is related with the solution
of the system. The modal form of a system is obtained when the matrix A is represented by
its diagonal form.

Definition 2.1.1. The matrix A ∈ Rn×n is said to be diagonalizable if there exist a diagonal
matrix Λ ∈ Cn×n and a nonsingular matrix V ∈ Cn×n such that

Λ = V −1 AV. (2.1)

The diagonal elements of Λ, λi , are called eigenvalues of the matrix A and they satisfy det(A −
λi I) = 0 for i = 1, 2, . . . , n. The column vectors of V are the eigenvectors of the matrix A.

There are matrices that cannot be diagonalised, but we are not going to consider these details. Interested students can check any Linear Algebra textbook, e.g. Chapter 7 in Meyer's book available online (http://www.matrixanalysis.com/DownloadChapters.html). We will restrict our attention to matrices with distinct eigenvalues, i.e., λi = λj if and only if i = j; then we can ensure that the matrix has a diagonal form.

Result 2.1.2. If A ∈ Rn×n has n distinct eigenvalues, then A is diagonalizable.


Let us consider the system with a state-space representation given by

ẋ = Ax + Bu, (2.2)

y = Cx + Du. (2.3)

where A ∈ Rn×n has n different eigenvalues. Then, A is diagonalizable, so let us find V such
that Λ = V −1 AV , with Λ diagonal. Applying the change of variable x = V q, we obtain

q̇ = Λq + V −1 Bu, (2.4)

y = CV q + Du. (2.5)

The coordinates q are referred to as system modes and the matrix Λ as the modal matrix. Note that this expression is equivalent to the change-of-variable expression of the previous chapter if we use T = V⁻¹.
Whereas x is assumed to be real, we can no longer assume that q will be real, as the eigenvalues can be complex. Some authors propose a linear combination of the modes q to recover realness; in this case Λ is no longer diagonal.
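Numerically, the diagonalisation can be automated; a Python/numpy sketch for illustration (note that `numpy.linalg.eig` normalises eigenvectors to unit length, so the modal input and output matrices will differ from a hand calculation by the eigenvector scaling, while the modal matrix Λ is unique up to ordering):

```python
import numpy as np

# Example system (the same matrices appear in the worked example below)
A = np.array([[-2.0, 1.0],
              [2.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[2.0, 1.0]])

# Columns of V are eigenvectors of A, so Lambda = V^{-1} A V is diagonal
eigvals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

Lam = Vinv @ A @ V   # modal (diagonal) state matrix, up to rounding
Bq = Vinv @ B        # input matrix in modal coordinates
Cq = C @ V           # output matrix in modal coordinates
print(np.round(Lam, 9))
```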

2.1.2 Worked example: Modal form of a system

Find the modal form of the system with the following state representation:
      
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 2 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \end{bmatrix} u \qquad (2.6)$$
$$y = \begin{bmatrix} 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad (2.7)$$

The first step is to find the eigenvalues of the matrix A, i.e., find λ such that det(A−λI) = 0:

$$\begin{vmatrix} -2-\lambda & 1 \\ 2 & -3-\lambda \end{vmatrix} = (-2-\lambda)(-3-\lambda) - 2 = \lambda^2 + 5\lambda + 4 = (\lambda+4)(\lambda+1), \qquad (2.8)$$

hence the eigenvalues of A are given by the roots of the polynomial (λ + 4)(λ + 1), i.e. −1 and
−4. The second step is to find the eigenvectors. The eigenvector associated with the eigenvalue
λ1 = −1 is given by any of the infinite solutions of the simultaneous equation Ax = λ1 x or
(A − λ1 I)x = 0, i.e.
$$(A - (-1)I)\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad (2.9)$$

32
Modal form State-Space Control

hence

−x + y = 0, (2.10)

2x − 2y = 0. (2.11)

It is clear that both equations provide the same information; thus the system of simultaneous equations has infinitely many solutions. In this case, any vector with x = y is an eigenvector associated with λ1, for instance
$$v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \qquad (2.12)$$
Following the same procedure for λ2 = −4,
$$(A - (-4)I)\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \qquad (2.13)$$

Students with basic knowledge in Linear Algebra will not be surprised by the fact that there
are infinitely many solutions again. The definition of eigenvector ensures that there exist infinitely many eigenvectors. In this case
$$2x + y = 0, \qquad (2.14)$$
$$2x + y = 0; \qquad (2.15)$$
resulting in that any vector such that y = −2x is an eigenvector associated with λ2. For instance, let us take
$$v_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}. \qquad (2.16)$$

As a result, we have obtained that the transformation matrix V is given by
$$V = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix} \implies V^{-1} = \frac{1}{-3}\begin{bmatrix} -2 & -1 \\ -1 & 1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}, \qquad (2.17)$$

hence
$$\Lambda = V^{-1}AV = \frac{1}{3}\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} -2 & 1 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} -2 & -1 \\ -4 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -4 \end{bmatrix}. \qquad (2.18)$$
So we have found a matrix V such that Λ = V⁻¹AV is diagonal. In the jargon, A is said to be similar to Λ. As V is not unique, the modal form is not unique, but Λ, i.e. the A matrix of the state-space representation, will be unique up to the ordering of its diagonal elements.
In summary, the modal form is given by applying the transformation q = V⁻¹x

Figure 2.1: Block diagram of the modal form.

      
$$\begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -4 \end{bmatrix}\begin{bmatrix} q_1 \\ q_2 \end{bmatrix} + \begin{bmatrix} 4/3 \\ -1/3 \end{bmatrix} u \qquad (2.19)$$
$$y = \begin{bmatrix} 3 & 0 \end{bmatrix}\begin{bmatrix} q_1 \\ q_2 \end{bmatrix} \qquad (2.20)$$
and its block diagram is given in Fig. 2.1. Note that each mode evolves regardless of the rest of the modes.

Exercise 2.1.3. Use the command ss2ss to find the transformation of the systems defined by
T = V −1 . The result should correspond with the system given in (2.19) and (2.20).

Exercise 2.1.4. Use the command canon to find the modal form of the system. Does this
modal form correspond with the modal form in (2.19) and (2.20)? If not, why?
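Independently of MATLAB, the worked example can be cross-checked numerically; a Python sketch for illustration, using the eigenvectors chosen in (2.12) and (2.16):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [2.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[2.0, 1.0]])
V = np.array([[1.0, 1.0],
              [1.0, -2.0]])   # columns are v1 and v2 from (2.12) and (2.16)
Vinv = np.linalg.inv(V)

print(np.round(Vinv @ A @ V, 6))   # modal matrix Lambda = diag(-1, -4), cf. (2.18)
print(np.round(Vinv @ B, 6))       # modal input matrix [4/3, -1/3], cf. (2.19)
print(np.round(C @ V, 6))          # modal output matrix [3, 0], cf. (2.20)
```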

2.2 Solution of a state-space representation

2.2.1 Exponential matrix

Before dealing with the solution of a system, we need to introduce the concept of the exponential
of a matrix.

Definition 2.2.1. Given a square matrix A ∈ Rn×n, we define the exponential of the matrix A, henceforth eᴬ, as follows
$$e^A = \sum_{k=0}^{\infty} \frac{1}{k!} A^k. \qquad (2.21)$$

The above definition is the straightforward generalization of the exponential function from
numbers to matrices. Note that students should be able to carry out all operations on the right-
hand side, but computing this infinite sum could be somehow tedious. We will be particularly
interested in the exponential matrix eAt , since it turns up in the solution of LTI systems.

34
Solution of a state-space representation State-Space Control

The exponential matrix has some properties:
$$e^{0} = I \qquad (2.22)$$
$$e^{(a+b)A} = e^{aA}e^{bA} \qquad (2.23)$$
$$e^{A}e^{-A} = I \qquad (2.24)$$
$$e^{\Lambda} = \mathrm{diag}(e^{\lambda_1}, e^{\lambda_2}, \ldots, e^{\lambda_n}) \quad \text{where } \Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) \qquad (2.25)$$
$$\frac{d}{dt}e^{At} = Ae^{At} \qquad (2.26)$$
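These properties can be checked numerically for a concrete matrix; a Python sketch using `scipy.linalg.expm` (which evaluates the series (2.21) by more robust means), for illustration:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [2.0, -3.0]])
I = np.eye(2)

checks = [
    np.allclose(expm(np.zeros((2, 2))), I),                              # (2.22)
    np.allclose(expm((2.0 + 3.0) * A), expm(2.0 * A) @ expm(3.0 * A)),   # (2.23)
    np.allclose(expm(A) @ expm(-A), I),                                  # (2.24)
    np.allclose(expm(np.diag([-1.0, -4.0])),
                np.diag(np.exp([-1.0, -4.0]))),                          # (2.25)
]
print(all(checks))
```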
Exercise 2.2.2. Using Definition 2.2.1, show that $\frac{d}{dt}e^{At} = Ae^{At}$.

Result 2.2.3. Given a square matrix A ∈ Cn×n , with Λ = V −1 AV where Λ is diagonal, then

eA = V eΛ V −1 . (2.27)

Proof. Using the definition of the exponential matrix,
$$e^A = \sum_{k=0}^{\infty} \frac{1}{k!} A^k. \qquad (2.28)$$
Let us pre-multiply by V⁻¹ and post-multiply by V:
$$V^{-1} e^A V = \sum_{k=0}^{\infty} \frac{1}{k!}\, V^{-1} A^k V = \sum_{k=0}^{\infty} \frac{1}{k!}\, \Lambda^k = e^{\Lambda}, \qquad (2.29)$$
hence
$$e^A = V e^{\Lambda} V^{-1}. \qquad (2.30)$$

Exercise 2.2.4. During the above proof, we have used that Λᵏ = V⁻¹AᵏV for any k ∈ N. Prove this result by induction. Hint: show that it is true for k = 1, then assume that it is true for k = n − 1, and show that it is true for k = n.

2.2.2 Worked Example: Computing exponential matrix eAt

The computation of the exponential matrix using (2.21) is difficult. The use of Result 2.2.3 and property (2.25) is simpler. Let us consider
$$A = \begin{bmatrix} -2 & 1 \\ 2 & -3 \end{bmatrix}. \qquad (2.31)$$

Then, we have shown that Λ = V⁻¹AV with Λ = diag(−1, −4) and
$$V = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix} \quad \text{and} \quad V^{-1} = -\frac{1}{3}\begin{bmatrix} -2 & -1 \\ -1 & 1 \end{bmatrix}. \qquad (2.32)$$

35
State-Space Control Solutions in the state-space

Then, it is trivial to check that Λt = V⁻¹(At)V. Therefore, the application of Result 2.2.3 yields
$$e^{At} = V e^{\Lambda t} V^{-1} = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-4t} \end{bmatrix}\left(-\frac{1}{3}\right)\begin{bmatrix} -2 & -1 \\ -1 & 1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2e^{-t} + e^{-4t} & e^{-t} - e^{-4t} \\ 2e^{-t} - 2e^{-4t} & e^{-t} + 2e^{-4t} \end{bmatrix}. \qquad (2.33)$$

Exercise 2.2.5. Using symbolic variables (syms) and the matrix exponential (expm) in MATLAB, reproduce the above result.
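Besides the MATLAB check suggested in the exercise, the closed form (2.33) can be verified against a numerical matrix exponential; a Python sketch for illustration:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [2.0, -3.0]])

def eAt_closed(t):
    # Closed form (2.33), obtained above from the modal decomposition
    e1, e4 = np.exp(-t), np.exp(-4.0 * t)
    return (1.0 / 3.0) * np.array([[2*e1 + e4,   e1 - e4],
                                   [2*e1 - 2*e4, e1 + 2*e4]])

# Compare at a few sample times
for t in (0.0, 0.5, 1.0, 2.0):
    assert np.allclose(expm(A * t), eAt_closed(t))
print("closed form (2.33) matches expm")
```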

2.2.3 Autonomous case

Let us start with the autonomous case.

Theorem 2.2.6. The matrix differential equation

ẋ = Ax, x(0) = x0 ; (2.34)

has a unique solution given by x(t) = eAt x0 for all t > 0.

Proof. Let us assume that the solution of the system is x(t) = eᴬᵗx₀. Using (2.22), it is clear that x(0) = e⁰x₀ = x₀. Moreover, using (2.26),
$$\dot{x}(t) = \frac{d(e^{At})}{dt}x_0 = Ae^{At}x_0 = Ax(t). \qquad (2.35)$$
Therefore, the proposed solution x(t) = eᴬᵗx₀ satisfies both conditions in (2.34). Theorems about the existence and uniqueness of the solution of an ODE give the desired result. Interested students can go to any textbook on differential equations to find such a theorem. □

Definition 2.2.7 (The state-transition matrix). From the solution of the matrix ODE (2.34),
the matrix eAt is referred to as the state-transition matrix. From any instant t0 up to the
instant t0 + t, the states are related by

x(t) = eA(t−t0 ) x(t0 ). (2.36)

Worked example: Solution in the state-space

Let us consider the dynamical system


      
ẋ (t) −2 1 x (t) 1
 1 =  1 , x(0) =   (2.37)
ẋ2 (t) 2 −3 x2 (t) 2

Figure 2.2: In solid blue, the first 10 seconds of the trajectory of the autonomous system (2.37) in the (x1, x2) plane. In dashed red, the direction of the "slower" mode. The initial value is (1, 2) and the system evolves towards the point (0, 0). From all the possible directions to approach zero, the trajectory chooses the direction given by the eigenvector associated with the eigenvalue whose real part is greater.

Then, the solution of the system is given by x(t) = eᴬᵗx(0). As we have previously computed the exponential matrix, the trajectory of the state is given by
$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2e^{-t} + e^{-4t} & e^{-t} - e^{-4t} \\ 2e^{-t} - 2e^{-4t} & e^{-t} + 2e^{-4t} \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 4e^{-t} - e^{-4t} \\ 4e^{-t} + 2e^{-4t} \end{bmatrix} \qquad (2.38)$$

Exercise 2.2.8. Plot Fig. 2.2 using MATLAB and expm.

Exercise 2.2.9. Plot Fig. 2.2 using Simulink and State-Space block.

2.2.4 Stability of an LTI system

The previous example has shown that the solution of the dynamical system (2.34) is given by a linear combination of the terms e^{λᵢt}, where λᵢ are the eigenvalues of A. In short, the eigenvalues of the matrix A contain a very important piece of information. Since the representation of the dynamical system is not unique, we should expect that all the possible realisations of the system provide the same information.

Result 2.2.10. If A is similar to B, i.e. there exists an nonsingular matrix V such that
A = V −1 BV , then the eigenvalues of A and B are the same.


The above result ensures that the dynamical behaviour defined by the eigenvalues of the
matrix A will remain unmodified if we carry out a transformation, as we should expect.
A general definition of stability may be complicated, but let us restrict our attention to
linear systems in this chapter. Then we can propose the following definition:

Definition 2.2.11 (Stability). An autonomous system ẋ = Ax is said to be asymptotically stable if limₜ→∞ x(t) = 0 for all initial conditions x₀ ∈ Rⁿ. It is said to be marginally stable if for any initial condition x₀ ∈ Rⁿ such that ‖x₀‖ < δ, there exists M ∈ R such that ‖x(t)‖ < M for all t > 0, but limₜ→∞ x(t) ≠ 0. Finally, if the system is neither asymptotically stable nor marginally stable, it is said to be unstable.

General definitions for nonlinear systems will be covered in Chapter 3. For further discussion, read Chapter 4 in "Nonlinear Systems" by H. K. Khalil, but it is beyond the scope of this unit.

Result 2.2.12. An autonomous system ẋ = Ax is asymptotically stable if all the eigenvalues of A have strictly negative real part. The system is marginally stable if it has one or more distinct poles on the imaginary axis and any remaining poles have negative real part. Finally, the system is unstable if any pole has a positive real part, or if there are repeated poles on the imaginary axis.

Stability is a very important concept, so we have a special name for the matrices with this
property in their eigenvalues.

Definition 2.2.13 (Hurwitz). A square matrix is said to be Hurwitz if all its eigenvalues have
strictly negative real part.
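The definition translates directly into a numerical test; a Python sketch for illustration (the example matrices here are chosen by hand, not taken from the notes):

```python
import numpy as np

def is_hurwitz(A):
    """True when every eigenvalue of A has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[-2.0, 1.0], [2.0, -3.0]])    # eigenvalues -1 and -4
A_unstable = np.array([[0.0, 1.0], [2.0, -1.0]])   # eigenvalues 1 and -2

print(is_hurwitz(A_stable), is_hurwitz(A_unstable))  # True False
```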

We are not going to develop a formal proof, but the argument is along these lines: if all eigenvalues have strictly negative real part, then
$$\lim_{t\to\infty} e^{At} = 0, \qquad (2.39)$$
since all elements of this matrix are linear combinations of e^{λᵢt} for i = 1, . . . , n, and these exponentials approach zero as t goes to infinity. Therefore,
$$\lim_{t\to\infty} x(t) = \lim_{t\to\infty} e^{At}x_0 = 0. \qquad (2.40)$$

Exercise 2.2.14. If the eigenvalues provide information about the stability of the system, other information is provided by the eigenvectors (see Fig. 2.2). When t approaches infinity, x(t) approaches the direction of the eigenvector associated with the eigenvalue with maximum real part. Show this property. Hint: transform the dynamical system and the initial conditions to the modal form and evolve the state.


Non-autonomous case

Once we have addressed the autonomous case, we can propose the solution for the non-
autonomous case. By analogy, the solution of a non-homogeneous ODE is given by the addition
of the solution of the homogeneous case, i.e. the autonomous case, plus the particular solution.

Theorem 2.2.15 (General Case). Consider the matrix differential equation
$$\dot{x}(t) = Ax(t) + Bu(t); \qquad x(0) = x_0. \qquad (2.41)$$
For t > 0 the state x(t) of the above system is given by
$$x(t) = e^{tA}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau. \qquad (2.42)$$

Proof. We can write (2.41) as
$$\dot{x}(t) - Ax(t) = Bu(t). \qquad (2.43)$$
Multiplying by e⁻ᵗᴬ yields
$$e^{-tA}[\dot{x}(t) - Ax(t)] = e^{-tA}Bu(t), \qquad (2.44)$$
which is equivalent to
$$\frac{d}{dt}\left[e^{-tA}x(t)\right] = e^{-tA}Bu(t). \qquad (2.45)$$
Finally, integrating over [0, t] it follows that
$$e^{-tA}x(t) = e^{-0A}x_0 + \int_0^t e^{-\tau A}Bu(\tau)\,d\tau. \qquad (2.46)$$
Hence, multiplying by eᵗᴬ the desired result is obtained. □

Once the evolution of the state is known, the output of the system can be trivially computed
as

y(t) = Cx(t) + Du(t) (2.47)

for any t > 0.


When the system has inputs and outputs, we need to define stability properties carefully. Nevertheless, for linear systems, both definitions, autonomous stability and input-output stability, are equivalent. Therefore, we will say that the system (2.41) is stable if the associated autonomous system, i.e. with u(t) = 0 for all t ∈ R, is stable.
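The variation-of-constants formula (2.42) can be checked numerically. For a constant input u(t) = 1 and invertible A, the convolution integral has the closed form A⁻¹(e^{At} − I)B; the sketch below (Python, for illustration) compares this with a direct numerical integration of the ODE:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[-2.0, 1.0], [2.0, -3.0]])
B = np.array([[1.0], [2.0]])
x0 = np.array([1.0, 2.0])
t_end = 1.5

# (2.42) with u(t) = 1: x(t) = e^{At} x0 + A^{-1} (e^{At} - I) B
x_formula = expm(A * t_end) @ x0 \
    + (np.linalg.inv(A) @ (expm(A * t_end) - np.eye(2)) @ B).ravel()

# Independent check: integrate xdot = A x + B u with u = 1
sol = solve_ivp(lambda t, x: A @ x + B.ravel(), (0.0, t_end), x0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(x_formula, sol.y[:, -1], atol=1e-6)
print(np.round(x_formula, 4))
```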


2.3 Solution using Laplace transform

2.3.1 Laplace transform

In the previous chapter, we discussed the state-space representation of systems which are defined
via ODE or transfer functions. Now we are going to derive the solution of the differential
equation via Laplace transform. Instead of using transfer functions with high order, we are
going to use a first order transfer function but with matrices instead of numbers.

Definition 2.3.1. Given a function f : R → R, its Laplace transform is defined by
$$\mathcal{L}[f(t)] = F(s) = \int_{0^-}^{\infty} f(t)e^{-st}\,dt. \qquad (2.48)$$

Moreover, we say that f (t) is the inverse Laplace transform of F (s).

See your notes on Control Fundamentals by W. P. Heath for further discussion.


We are not going to deal with the properties of this transformation here, but students should be familiar with them. For instance:
$$\mathcal{L}[\dot{f}(t)] = sF(s) - f(0^-) \qquad (2.49)$$
$$\mathcal{L}[\ddot{f}(t)] = s^2F(s) - sf(0^-) - \dot{f}(0^-) \qquad (2.50)$$
$$\mathcal{L}[f(t-t_0)] = e^{-st_0}F(s) \qquad (2.51)$$
$$\mathcal{L}[f(at)] = \frac{1}{a}F(s/a) \quad \forall a > 0 \qquad (2.52)$$
$$\mathcal{L}[f(t) * g(t)] = F(s)G(s) \qquad (2.53)$$

All these properties make the Laplace transform a very useful tool for solving ODEs. In
particular, the last item is what control engineers use every day.

Result 2.3.2. Let h(t) be the impulse response of an LTI system with zero initial conditions. Then the output of the system can be computed as
$$y(t) = u(t) * h(t) = \int_0^t u(\tau)h(t-\tau)\,d\tau. \qquad (2.54)$$
By using (2.53), the above integral becomes a product
$$Y(s) = \mathcal{L}[y(t)] = \mathcal{L}[u(t)]\,\mathcal{L}[h(t)] = H(s)U(s).$$

For the rest of the world, the Laplace transform is “just” a way for solving a differential
equation. Our world is transformed by Laplace!


2.3.2 Solution of the system: The autonomous case

Given the system defined by


ẋ = Ax, x(0) = x0 ; (2.55)

the solution of the system can be computed using the Laplace domain

sX(s) − x0 = AX(s). (2.56)

Then the solution in the frequency-domain of the system (2.55) is

X(s) = (sI − A)−1 x0 . (2.57)

But we must go back to the time domain. To this end, let us expand the above expression as follows
$$(sI - A)^{-1} = s^{-1}(I - A/s)^{-1} = \frac{1}{s}\sum_{k=0}^{\infty}\left(A/s\right)^k = \frac{I}{s} + \frac{A}{s^2} + \frac{A^2}{s^3} + \cdots + \frac{A^{n-1}}{s^n} + \cdots \qquad (2.58)$$

Then, we need to use the inverse Laplace transform tables to find that
$$\mathcal{L}[t^n] = \frac{n!}{s^{n+1}}, \qquad (2.59)$$
which is equivalent to
$$\mathcal{L}^{-1}\left[\frac{1}{s^n}\right] = \frac{t^{n-1}}{(n-1)!}. \qquad (2.60)$$
Moreover, if we are considering a matrix form, then
$$\mathcal{L}^{-1}\left[\frac{X}{s^n}\right] = \frac{Xt^{n-1}}{(n-1)!}, \qquad (2.61)$$
where X is a matrix.
where X is a matrix.
Now we can translate the frequency-domain expression (2.57) into the time-domain
$$x(t) = \mathcal{L}^{-1}[X(s)] = \left(\mathcal{L}^{-1}\left[\frac{I}{s}\right] + \mathcal{L}^{-1}\left[\frac{A}{s^2}\right] + \mathcal{L}^{-1}\left[\frac{A^2}{s^3}\right] + \cdots + \mathcal{L}^{-1}\left[\frac{A^{n-1}}{s^n}\right] + \cdots\right)x_0. \qquad (2.62)$$
And using (2.61), the final result is achieved
$$x(t) = \left(I + At + \frac{A^2t^2}{2!} + \cdots + \frac{A^{n-1}t^{n-1}}{(n-1)!} + \cdots\right)x_0 = e^{At}x_0. \qquad (2.63)$$
As expected, the result is the same as that developed in the previous section. But this offers an alternative method of computing the solution. Moreover, let us state the above result properly.

Result 2.3.3. Let A be a square matrix, the inverse Laplace transform of (sI − A)−1 is given
by eAt , in short
L−1 [(sI − A)−1 ] = eAt (2.64)


Worked example: Solution in the state-space

Let us consider the dynamical system


      
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 2 & -3 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}, \qquad x(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \qquad (2.65)$$

Then let us use (2.56):
$$X(s) = \begin{bmatrix} s+2 & -1 \\ -2 & s+3 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad (2.66)$$
hence a new possibility of solving the problem turns up. Let us invert the above matrix and then carry out the inverse Laplace transformation:
$$\begin{bmatrix} s+2 & -1 \\ -2 & s+3 \end{bmatrix}^{-1} = \frac{1}{s^2+5s+4}\begin{bmatrix} s+3 & 1 \\ 2 & s+2 \end{bmatrix} = \begin{bmatrix} \frac{s+3}{(s+1)(s+4)} & \frac{1}{(s+1)(s+4)} \\ \frac{2}{(s+1)(s+4)} & \frac{s+2}{(s+1)(s+4)} \end{bmatrix}. \qquad (2.67)$$

As an intermediate step, we need to apply the partial fraction decomposition to these fractions. After simple algebra, it follows that
$$\frac{s+3}{(s+1)(s+4)} = \frac{a_1}{s+1} + \frac{b_1}{s+4} = \frac{2/3}{s+1} + \frac{1/3}{s+4}; \qquad (2.68)$$
$$\frac{1}{(s+1)(s+4)} = \frac{a_2}{s+1} + \frac{b_2}{s+4} = \frac{1/3}{s+1} + \frac{-1/3}{s+4}; \qquad (2.69)$$
$$\frac{2}{(s+1)(s+4)} = \frac{a_3}{s+1} + \frac{b_3}{s+4} = \frac{2/3}{s+1} + \frac{-2/3}{s+4}; \qquad (2.70)$$
$$\frac{s+2}{(s+1)(s+4)} = \frac{a_4}{s+1} + \frac{b_4}{s+4} = \frac{1/3}{s+1} + \frac{2/3}{s+4}. \qquad (2.71)$$
Then,
$$\mathcal{L}^{-1}[(sI-A)^{-1}] = \begin{bmatrix} \mathcal{L}^{-1}\left[\frac{s+3}{(s+1)(s+4)}\right] & \mathcal{L}^{-1}\left[\frac{1}{(s+1)(s+4)}\right] \\ \mathcal{L}^{-1}\left[\frac{2}{(s+1)(s+4)}\right] & \mathcal{L}^{-1}\left[\frac{s+2}{(s+1)(s+4)}\right] \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2e^{-t}+e^{-4t} & e^{-t}-e^{-4t} \\ 2e^{-t}-2e^{-4t} & e^{-t}+2e^{-4t} \end{bmatrix}. \qquad (2.72)$$
As a result, we have been able to compute the exponential matrix as
$$e^{At} = \mathcal{L}^{-1}[(sI-A)^{-1}] = \frac{1}{3}\begin{bmatrix} 2e^{-t}+e^{-4t} & e^{-t}-e^{-4t} \\ 2e^{-t}-2e^{-4t} & e^{-t}+2e^{-4t} \end{bmatrix}. \qquad (2.73)$$

So the solution of the system is
$$x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2e^{-t}+e^{-4t} & e^{-t}-e^{-4t} \\ 2e^{-t}-2e^{-4t} & e^{-t}+2e^{-4t} \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 4e^{-t}-e^{-4t} \\ 4e^{-t}+2e^{-4t} \end{bmatrix} \qquad (2.74)$$
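The partial-fraction coefficients in (2.68)–(2.71) can also be checked numerically; a Python sketch using `scipy.signal.residue` on the first fraction, for illustration:

```python
import numpy as np
from scipy.signal import residue

# Partial fractions of (s+3)/((s+1)(s+4)), cf. (2.68)
r, p, k = residue([1, 3], [1, 5, 4])

order = np.argsort(p.real)         # sort poles: -4 first, then -1
print(np.round(p[order].real, 6))  # poles of the fraction
print(np.round(r[order].real, 6))  # residues: 1/3 at s = -4, 2/3 at s = -1
```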


2.3.3 Transfer function of a state-space representation

Let us consider the state-space representation

ẋ(t) = Ax(t) + Bu(t), x(0) = x0 (2.75)

y(t) = Cx(t) + Du(t). (2.76)

Firstly, let us apply the Laplace transform to the state equation as previously

sX(s) − x0 = AX(s) + BU (s), (2.77)

grouping terms
(sI − A)X(s) = BU (s) + x0 , (2.78)

hence the state trajectory in the frequency-domain is

X(s) = (sI − A)−1 BU (s) + (sI − A)−1 x0 . (2.79)

We need to keep in mind that we are working with matrices so the order is important when we
move terms from one side to the other.

Exercise 2.3.4. Use (2.79) to obtain (2.42). Hint: You will need to use Result 2.3.3 and
property (2.53).

Secondly, let us apply the Laplace transform to the output equation and use the above result

Y (s) = CX(s) + DU (s) = C((sI − A)−1 BU (s) + (sI − A)−1 x0 ) + DU (s), (2.80)

thus it follows
Y (s) = (C(sI − A)−1 B + D)U (s) + C(sI − A)−1 x0 . (2.81)

Finally, we need to consider that the transfer function of a system assumes null initial conditions,
and thus we reach the desired result
Y (s)
G(s) = = C(sI − A)−1 B + D. (2.82)
U (s)
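Formula (2.82) can be checked numerically by evaluating the state-space expression and the polynomial transfer function at a few sample points; a Python sketch for illustration, using the observer-form matrices of the worked example below:

```python
import numpy as np

# Observer canonical realization of (1.101), cf. (2.83)
A = np.array([[0.0, 0.0, -24.0],
              [1.0, 0.0, -26.0],
              [0.0, 1.0, -9.0]])
B = np.array([[4.0], [6.0], [3.0]])
C = np.array([[0.0, 0.0, 1.0]])
D = 0.0

def G_state_space(s):
    # Direct evaluation of C (sI - A)^{-1} B + D
    return (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0] + D

def G_polynomial(s):
    return (3*s**2 + 6*s + 4) / (s**3 + 9*s**2 + 26*s + 24)

for s in (1.0, 2.5, 1j, 3 + 2j):
    assert np.isclose(G_state_space(s), G_polynomial(s))
print("C(sI - A)^{-1} B + D matches (1.101) at the sample points")
```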

Worked Example: Transfer function of a state-space representation

Let us find the transfer function of the system given by the state-space representation
   
$$A = \begin{bmatrix} 0 & 0 & -24 \\ 1 & 0 & -26 \\ 0 & 1 & -9 \end{bmatrix}, \quad B = \begin{bmatrix} 4 \\ 6 \\ 3 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, \quad \text{and } D = 0. \qquad (2.83)$$

The first step is to compute the inverse of (sI − A).

Figure 2.3: Block diagram with matrices and its transfer function representation.

Result 2.3.5. The inverse of a matrix of any order can be computed as the adjoint over the determinant, i.e.
$$A^{-1} = \frac{\mathrm{Adj}(A)}{\det(A)}, \qquad (2.84)$$
where the adjoint of a matrix is the matrix of cofactors of the transpose matrix. Finally, the cofactor of the element (i, j) is the minor of A without the row i and the column j, times (−1)^{i+j}.


Thus, we first need to write the transpose of the matrix that we want to invert
$$(sI-A)^{\top} = \begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 24 & 26 & s+9 \end{bmatrix}. \qquad (2.85)$$
Let us compute some cofactors of the transpose matrix.

Element (1, 1) of the adjoint of (sI − A). It is defined by the minor of (sI − A)^⊤ without the first row and column, times (−1)^{1+1}, i.e.
$$\mathrm{Adj}(sI-A)_{1,1} = (-1)^2\begin{vmatrix} s & -1 \\ 26 & s+9 \end{vmatrix} = s^2 + 9s + 26. \qquad (2.86)$$

Element (2, 1) of the adjoint of (sI − A). It is defined by the minor of (sI − A)^⊤ without the first column and second row, times (−1)^{2+1}, i.e.
$$\mathrm{Adj}(sI-A)_{2,1} = (-1)^3\begin{vmatrix} -1 & 0 \\ 26 & s+9 \end{vmatrix} = s + 9. \qquad (2.87)$$

Element (3, 1) of the adjoint of (sI − A). It is defined by the minor of (sI − A)^⊤ without the first column and third row, times (−1)^{3+1}, i.e.
$$\mathrm{Adj}(sI-A)_{3,1} = (-1)^4\begin{vmatrix} -1 & 0 \\ s & -1 \end{vmatrix} = 1. \qquad (2.88)$$


Following this procedure,
$$\mathrm{Adj}(sI-A) = \begin{bmatrix} s^2+9s+26 & -24 & -24s \\ s+9 & s^2+9s & -26s-24 \\ 1 & s & s^2 \end{bmatrix}, \qquad (2.89)$$

and applying the above trick


 −1  
2
s 0 −24 9s + s + 26 −24 −24s
  1 
2

−1 s = 3 9s + s −26s − 24 , (2.90)
   
26  2 9+s
s + 9s + 26s + 24 

  
0 −1 s + 9 1 s s2

hence we have all elements to carry out the matrix product G(s) = C(sI − A)−1 B:

G(s) = Y (s)/U (s)
= [0 0 1] (1/(s3 + 9s2 + 26s + 24)) [s2 + 9s + 26  −24  −24s; s + 9  s2 + 9s  −26s − 24; 1  s  s2 ] [4; 6; 3]
= (1/(s3 + 9s2 + 26s + 24)) [1 s s2 ] [4; 6; 3]
= (3s2 + 6s + 4)/(s3 + 9s2 + 26s + 24). (2.91)
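This computation can be verified numerically. The sketch below is an illustrative check assuming NumPy and SciPy are available (the notes themselves use MATLAB); `scipy.signal.ss2tf` performs exactly the conversion (A, B, C, D) → G(s):

```python
import numpy as np
from scipy.signal import ss2tf

# State-space matrices from the worked example (2.83)
A = np.array([[0., 0., -24.],
              [1., 0., -26.],
              [0., 1., -9.]])
B = np.array([[4.], [6.], [3.]])
C = np.array([[0., 0., 1.]])
D = np.array([[0.]])

# num/den coefficients of G(s) in descending powers of s
num, den = ss2tf(A, B, C, D)
assert np.allclose(num[0], [0, 3, 6, 4])   # 3s^2 + 6s + 4 (leading 0 since D = 0)
assert np.allclose(den, [1, 9, 26, 24])    # s^3 + 9s^2 + 26s + 24

# Cross-check by evaluating C(sI - A)^{-1}B + D directly at a test point s = 2
s = 2.0
G = (C @ np.linalg.inv(s * np.eye(3) - A) @ B + D).item()
assert abs(G - (3 * s**2 + 6 * s + 4) / (s**3 + 9 * s**2 + 26 * s + 24)) < 1e-12
```

The denominator returned by `ss2tf` is the characteristic polynomial of A, matching det(sI − A) computed by hand above.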

2.3.4 Inverse “à la Rosenbrock”

In his book, H. H. Rosenbrock provides an interesting way of computing the most common
inverse in state-space representation. This approach is based on the Cayley-Hamilton theorem.
Several results in state-space make use of this theorem, so let us state this relevant result.
Let ∆ be the determinant of the matrix sI − A, known as the characteristic polynomial of
A, and given by

∆ = det(sI − A) = sn + an−1 sn−1 + · · · + a1 s + a0 , (2.92)

and the values of s that satisfy this characteristic equation are called eigenvalues. For the sake
of convenience, we have replaced the symbol λ by s, but this definition is the same as the one
given at the beginning of this Chapter.

Theorem 2.3.6 (Cayley-Hamilton Theorem). Every square matrix satisfies its own character-
istic equation, i.e.
An + an−1 An−1 + · · · + a1 A + a0 I = 0. (2.93)

Then the inverse of the matrix sI − A is given as follows:


Result 2.3.7. The inverse of sI − A is given by

(sI − A)−1 = (pn−1 (s)I + pn−2 (s)A + · · · + p1 (s)An−2 + An−1 )/(sn + an−1 sn−1 + · · · + a1 s + a0 ) (2.94)

where the polynomials pk are given by

pk (s) = Σj=0..k an−k+j sj (2.95)

with an = 1.

This result is insightful since it allows us to write the transfer function of a system as a
frequency-dependent linear combination of relevant matrices

G(s) = D + (pn−1 (s)/∆(s)) CB + (pn−2 (s)/∆(s)) CAB + · · · + (p1 (s)/∆(s)) CAn−2 B + (1/∆(s)) CAn−1 B. (2.96)

The matrices {CAk B} for k = 0, 1, . . . are referred to as the system Markov parameters. They
are widely used in MPC (Model Predictive Control) and system identification, where they are
usually obtained from the discrete-time state-space form. The inverse “à la Rosenbrock” shows
the meaning of these parameters in the continuous-time domain. Let us consider D = 0; then
we can rewrite (2.96) as follows:

G(s) = (bn−1 sn−1 + bn−2 sn−2 + · · · + b1 s + b0 )/(sn + an−1 sn−1 + · · · + a1 s + a0 ), (2.97)

where the coefficients bk are given by

bn−1 = CB (2.98)
bn−2 = an−1 CB + CAB (2.99)
bn−3 = an−2 CB + an−1 CAB + CA2 B (2.100)
... (2.101)
b1 = a2 CB + a3 CAB + · · · + an−1 CAn−3 B + CAn−2 B (2.102)
b0 = a1 CB + a2 CAB + · · · + an−2 CAn−3 B + an−1 CAn−2 B + CAn−1 B (2.103)
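These relations can be checked numerically on the worked example of Section 2.3.3. The sketch below assumes NumPy is available and writes (2.98)–(2.103) out for n = 3:

```python
import numpy as np

# Worked-example system (2.83); characteristic polynomial s^3 + 9s^2 + 26s + 24
A = np.array([[0., 0., -24.], [1., 0., -26.], [0., 1., -9.]])
B = np.array([[4.], [6.], [3.]])
C = np.array([[0., 0., 1.]])

a = np.poly(A)   # characteristic coefficients [1, a2, a1, a0] = [1, 9, 26, 24]

# Markov parameters CB, CAB, CA^2B
markov = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(3)]

# Numerator coefficients from (2.98)-(2.103), written out for n = 3
b2 = markov[0]                                        # CB
b1 = a[1] * markov[0] + markov[1]                     # a2*CB + CAB
b0 = a[2] * markov[0] + a[1] * markov[1] + markov[2]  # a1*CB + a2*CAB + CA^2B
print(b2, b1, b0)   # numerator of (2.91): 3s^2 + 6s + 4
```

The Markov parameters here are CB = 3, CAB = −21 and CA2 B = 115, and the formulas recover the numerator obtained earlier by direct inversion.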


2.4 Learning Outcomes


The learning outcomes of this chapter can be summarised as follows:

• The modal canonical form splits the system into independent modes via a diagonalisation
of the matrix A.

• Given an (A, B, C, D) representation with A diagonalisable and state x, the modal canon-
ical form is obtained by applying the transformation q = V −1 x, where the columns of V
are the eigenvectors of the matrix A. The set of coordinates q are modes of the system.

• Some matrices cannot be diagonalised, but the modal form can be obtained via Jordan
forms (more details about Jordan forms are beyond the scope of this course).

• The exponential of a square matrix A is defined by

eA = Σk=0..∞ Ak /k! (2.104)

and (d/dt) eAt = AeAt .

• The matrix eAt is referred to as the state transition matrix, and the evolution of the
autonomous system between the instants t0 and t1 is given by x(t1 ) = eA(t1 −t0 ) x(t0 ).

• Each entry of the state transition matrix is a linear combination of exponentials of the
eigenvalues of the matrix A (when A is diagonalisable).

• If all eigenvalues of A have strictly negative real part, A is said to be Hurwitz and
limt→∞ eAt = 0. This property ensures that the system is stable.

• The Laplace transform of the exponential matrix provides an alternative method to com-
pute it with a finite number of terms:

L(eAt ) = (sI − A)−1 or eAt = L−1 ((sI − A)−1 ). (2.105)

• The transfer function of a system with a state-space representation (A, B, C, D) is
G(s) = C(sI − A)−1 B + D.
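The series definition of the matrix exponential in (2.104) can be checked against a library implementation. The sketch below assumes SciPy is available; the matrix A is an illustrative example, not from the notes:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.], [-2., -3.]])   # example matrix (eigenvalues -1 and -2)
t = 0.5

# Truncated power series: sum_{k=0}^{29} (At)^k / k!
S = np.zeros_like(A)
term = np.eye(2)
for k in range(30):
    S += term
    term = term @ (A * t) / (k + 1)   # (At)^{k+1}/(k+1)! from (At)^k/k!

assert np.allclose(S, expm(A * t))    # the series converges to e^{At}
```

Thirty terms are more than enough here; in practice one always uses a library routine such as `expm`, never the raw series.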


2.5 Further examples


Let us find the inverse Laplace transform of the transfer function:

G(s) = 1/(s3 + 2s2 + 101s). (2.106)

The first step is to decompose this fraction in partial fractions:

1/(s3 + 2s2 + 101s) = α/s + (βs + γ)/(s2 + 2s + 101). (2.107)
After some simple algebraic manipulation, it follows that

1 = αs2 + 2αs + 101α + βs2 + γs. (2.108)

Then, we need to find the solution of the simultaneous equations

101α = 1, (2.109)

2α + γ = 0, (2.110)

α + β = 0. (2.111)

It is trivial to show that the solution is

α = 1/101, (2.112)

γ = −2/101, (2.113)

β = −1/101, (2.114)

hence

1/(s3 + 2s2 + 101s) = (1/101)(1/s) − (1/101)(s + 2)/((s + 1)2 + 100). (2.115)
The inverse Laplace transform of 1/s is 1 for all t > 0. We need further manipulations with
the second fraction. In particular, we need to use:

L−1 [(s + a)/((s + a)2 + b2 )] = e−at cos(bt) (2.116)
L−1 [b/((s + a)2 + b2 )] = e−at sin(bt) (2.117)

so let us rewrite this fraction to be able to use these expressions:

(s + 2)/((s + 1)2 + 100) = (s + 1)/((s + 1)2 + 102 ) + (1/10) · 10/((s + 1)2 + 102 ). (2.118)

Then, it follows that

L−1 [(s + 2)/(s2 + 2s + 101)] = e−t cos(10t) + (1/10) e−t sin(10t). (2.119)
As a result, the inverse Laplace transform of G(s) is given by

L−1 (G(s)) = 1/101 − (1/101)( e−t cos(10t) + (1/10) e−t sin(10t) ). (2.120)
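The result can be cross-checked numerically, since L−1 (G) is the impulse response of G(s). The sketch below assumes SciPy is available and uses `scipy.signal.impulse`:

```python
import numpy as np
from scipy.signal import impulse

# G(s) = 1 / (s^3 + 2s^2 + 101s); L^{-1}(G) is its impulse response
t = np.linspace(0.0, 5.0, 400)
_, y = impulse(([1.0], [1.0, 2.0, 101.0, 0.0]), T=t)

# Closed-form result (2.120)
y_exact = (1 - np.exp(-t) * np.cos(10 * t) - np.exp(-t) * np.sin(10 * t) / 10) / 101

assert np.max(np.abs(y - y_exact)) < 1e-6
```

The agreement confirms both the partial fraction coefficients and the completion of the square in (2.118).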

Chapter 3

Nonlinear systems

The state-space representation of a system can be used for both linear and nonlinear systems.
In this chapter, we shall only briefly deal with nonlinear systems. Students will be able to test
their endurance with Nonlinear Systems within the unit Nonlinear and Adaptive Control in
the second semester. This chapter will focus on linearisation of nonlinear systems around an
equilibrium point or around an operating point.

3.1 Linearisation of nonlinear systems

3.1.1 Equilibrium point of a nonlinear system

Let us consider the autonomous dynamical system

ẋ(t) = f (x(t)), x(0) = x0 , (3.1)

where the vectorial function f : Rn 7→ Rn has an appropriate continuity condition such that
the above differential equation has a unique solution.
The mathematical definition of an equilibrium point is the following:

Definition 3.1.1 (Equilibrium point). The point xe ∈ Rn is an equilibrium point of the
system (3.1) if f (xe ) = 0, i.e.

ẋ(t)|x=xe = f (xe ) = 0. (3.2)

Now, we can introduce a more sophisticated definition of stability.

Definition 3.1.2. An equilibrium point xe is said to be stable if, for any initial condition close
enough to the equilibrium point, the distance between the solution of (3.1) at any instant and
the equilibrium point is bounded. Mathematically, for any ε > 0 there exists δ > 0 such that

if kx(0) − xe k < δ then kx(t) − xe k < ε ∀t ≥ 0. (3.3)

Moreover, it is said to be asymptotically stable if, in addition, limt→∞ x(t) = xe . Finally, if the
equilibrium point xe is not stable, it is said to be unstable.

Loosely speaking, an equilibrium point is stable if the trajectory of the state stays close to, or
tries to recover, the equilibrium point when the system is slightly disturbed; the system is able
to keep this position even though small disturbances act on it. When the equilibrium point is
unstable, any small perturbation will lead to the loss of equilibrium, and the equilibrium
position will not be recovered. Further classifications of stability (globally, locally,
exponentially) can be found in the classical nonlinear literature (for further discussion, read
Khalil).
In the linear case, we are only interested in the trivial equilibrium point xe = 0, so the
definition of stability is given for the system. However, this definition is more fundamental
since a system such as a pendulum has two equilibrium points, where one is stable and the
other unstable. Therefore, the concept of stability is inherent to the equilibrium point.

Exercise 3.1.3. Give an example of a linear system with several equilibrium points.

3.1.2 Linearisation around equilibrium points

The nonlinear system behaves approximately as a linear system near an equilibrium point
(assuming some nice properties such as continuity of f , etc.). This fact is supported by the
Taylor expansion1 of f around the equilibrium point

f (x) ' f (xe ) + Jf (xe )(x − xe ), (3.4)

where Jf is the Jacobian matrix. Now it is clear that if f (xe ) = 0 and we define a new state as
∆x = x − xe , then f (x) becomes the matrix Jf times the vector ∆x when x − xe is close to
zero; hence the nonlinear system (3.1) becomes linear

(d/dt)(∆x) ≈ A∆x (3.5)

where A = Jf (xe ) and we have used that if ∆x = x − xe then (d/dt)(∆x) = ẋ.
We still need to define the Jacobian matrix of f :
1
Nothing to do with Taylor Swift.


Definition 3.1.4. Given a vectorial function f : Rn 7→ Rn , the Jacobian matrix Jf ∈ Rn×n is
defined by

Jf = [∂f1 /∂x1  ∂f1 /∂x2  · · ·  ∂f1 /∂xn ;
∂f2 /∂x1  ∂f2 /∂x2  · · ·  ∂f2 /∂xn ;
. . . ;
∂fn /∂x1  ∂fn /∂x2  · · ·  ∂fn /∂xn ]. (3.6)

We can analyse the equilibrium point by performing a linearisation and studying the prop-
erties of the Jacobian matrix at the equilibrium point.

Result 3.1.5. The stability of the equilibrium point xe of the system (3.1) is equivalent to the
stability of the linear system defined by ẋ = Ax, where A = Jf (xe ).

3.1.3 Worked example: simple pendulum

A simple pendulum (Fig. 3.1) is described by the equation

θ̈ + (g/l) sin(θ) = 0 (3.7)

where θ is the angle between the vertical position and the position of the pendulum and l is
the length of the pendulum. The state-space representation of this second order differential
equation can be deduced as follows.
[Figure: pendulum of length l and mass m hanging from the pivot o, with θ measured from
the vertical.]

Figure 3.1: A rigid and massless bar of length l joins the pivot o and the mass m. Let θ be the
angle between the vertical position and the bar.

Let us define the states

x1 = θ; (3.8)
x2 = θ̇. (3.9)

Then ẋ2 = −(g/l) sin(θ), so the state-space representation of this system is

ẋ1 = x2 ; (3.10)
ẋ2 = −(g/l) sin(x1 ). (3.11)


Hence the vectorial function f (x1 , x2 ) is given by

f (x1 , x2 ) = [x2 ; −(g/l) sin(x1 )], (3.12)

and the equilibrium points satisfy f (x) = 0, i.e.

x2 = 0; (3.13)
sin(x1 ) = 0. (3.14)

As a result, the equilibrium points of the system (3.7) are xe1 = (0, 0) and xe2 = (π, 0), if we
restrict our attention to θ ∈ (−π, π]. Before analysing the equilibrium points let us find the
Jacobian matrix:

Jf (x1 , x2 ) = [∂f1 /∂x1  ∂f1 /∂x2 ; ∂f2 /∂x1  ∂f2 /∂x2 ] = [0  1; −(g/l) cos(x1 )  0]. (3.15)

Equilibrium point xe1 = (0, 0) At the equilibrium point xe1 = (0, 0), the system can be
linearised as

ẋ = Jf (0, 0)x = [0 1; −g/l 0] x. (3.16)

The eigenvalues of Jf (0, 0) are given by

det(Jf (0, 0) − λI) = det[−λ 1; −g/l −λ] = λ2 + g/l = 0, and so λ = ±j √(g/l). (3.17)

Hence, the real part of both eigenvalues is 0 and the eigenvalues are distinct; therefore the
equilibrium point xe1 is marginally stable.

Equilibrium point xe2 = (π, 0) At the equilibrium point xe2 = (π, 0), we need to use a
translation of the space since the equilibrium point is not the origin, hence ∆x = x − xe2 . Thus
the system can be linearised as

∆ẋ = Jf (π, 0)∆x = [0 1; g/l 0] ∆x. (3.18)

The eigenvalues of Jf (π, 0) are given by

det(Jf (π, 0) − λI) = det[−λ 1; g/l −λ] = λ2 − g/l = 0, and so λ = ±√(g/l). (3.19)

Hence, the real part of one eigenvalue is greater than 0; therefore the equilibrium point xe2 is
unstable.
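Both conclusions can be confirmed numerically. The sketch below assumes NumPy is available; the values of g and l are illustrative:

```python
import numpy as np

g, l = 9.81, 1.0   # illustrative pendulum parameters

# Jacobian (3.15) evaluated at the two equilibrium points
J_down = np.array([[0., 1.], [-g / l, 0.]])   # x_e1 = (0, 0)
J_up   = np.array([[0., 1.], [ g / l, 0.]])   # x_e2 = (pi, 0)

eig_down = np.linalg.eigvals(J_down)
eig_up = np.linalg.eigvals(J_up)

# Downward equilibrium: purely imaginary pair +/- j*sqrt(g/l)
assert np.allclose(eig_down.real, 0.0, atol=1e-8)
assert np.allclose(np.sort(eig_down.imag), [-np.sqrt(g / l), np.sqrt(g / l)])

# Upward equilibrium: one eigenvalue with positive real part -> unstable
assert np.max(eig_up.real) > 0
```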


3.1.4 Linearisation around an operating point

When the nonlinear system is required to operate with an input that is different from zero, we
can no longer refer to an equilibrium point of the system, since this concept is related to the
autonomous system. In this case, we use the concept of operating point.
Let us consider the nonlinear system given by

ẋ = f (x, u); (3.20)

y = h(x, u); (3.21)

then if f (xo , uo ) = 0, the point (xo , uo ) is referred to as an operating point. Under this condition,
we can perform a linearisation of the system as follows. Let us define the new input, state and
output as the variation around uo , xo , and yo

∆u = u − uo , (3.22)

∆x = x − xo , (3.23)

∆y = y − yo . (3.24)

Carrying out a Taylor expansion of f (x, u) and h(x, u) around the point (xo , uo ), it follows that

f (x, u) ' f (xo , uo ) + Jfx (xo , uo )(x − xo ) + Jfu (xo , uo )(u − uo ); (3.25)

h(x, u) ' h(xo , uo ) + Jhx (xo , uo )(x − xo ) + Jhu (xo , uo )(u − uo ); (3.26)

where f (xo , uo ) = 0 and h(xo , uo ) = yo . Finally, the linearised system is given by


(d/dt)(∆x) ' Jfx (xo , uo )∆x + Jfu (xo , uo )∆u; (3.27)
∆y ' Jhx (xo , uo )∆x + Jhu (xo , uo )∆u; (3.28)

where the superscripts x and u in the Jacobian matrices indicate the parameter that is consid-
ered as a variable.
In order to compare both systems, we should implement both systems as depicted in Fig. 3.2.
One could wonder about what happens if the plant is linear. In this case, the implementation
of the operating point is not so critical since the system is the same at any point (x, u) and we
can always replace variables by increments of the variables.
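When the Jacobians are tedious to derive by hand, the linearised matrices can be approximated numerically by central finite differences. The helper below is a sketch, not part of the notes; the function name `linearise` and the pendulum-with-torque test system are illustrative, and NumPy is assumed:

```python
import numpy as np

def linearise(f, h, x_o, u_o, eps=1e-6):
    """Central-difference Jacobians of f and h at an operating point (x_o, u_o),
    giving the (A, B, C, D) matrices of the linearised model (3.27)-(3.28)."""
    n, m, p = len(x_o), len(u_o), len(h(x_o, u_o))
    A, B = np.zeros((n, n)), np.zeros((n, m))
    C, D = np.zeros((p, n)), np.zeros((p, m))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x_o + dx, u_o) - f(x_o - dx, u_o)) / (2 * eps)
        C[:, j] = (h(x_o + dx, u_o) - h(x_o - dx, u_o)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x_o, u_o + du) - f(x_o, u_o - du)) / (2 * eps)
        D[:, j] = (h(x_o, u_o + du) - h(x_o, u_o - du)) / (2 * eps)
    return A, B, C, D

# Quick check on a pendulum with an input torque: xdot = [x2, -sin(x1) + u]
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
h = lambda x, u: np.array([x[0]])
A, B, C, D = linearise(f, h, np.zeros(2), np.zeros(1))
assert np.allclose(A, [[0, 1], [-1, 0]], atol=1e-5)
assert np.allclose(B, [[0], [1]], atol=1e-5)
```

This numerical route is often used to validate hand-derived Jacobians such as those of the quadruple-tank process below.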

3.1.5 Worked example: The quadruple-tank process

This problem was introduced by Johansson in his paper: “The quadruple-tank process: a
multivariable laboratory process with an adjustable zero”. This process is interesting since it


[Figure: block diagram comparing the nonlinear plant ẋ = f (x, u), y = h(x, u) with the
linearised model ẋl = Axl + Bul , yl = Cxl + Dul , connected through the operating-point
offsets uo and yo .]

Figure 3.2: Implementation of an operating point in a nonlinear plant. In the figure, ∆x, ∆u
and ∆y have been replaced by xl , ul and yl .

provides an academic example of a plant with multivariable zero and directionality. Moreover
the process can be set up in such a way that the zero can be either in the left-half plane or in
the right-half plane. This system will be discussed in the last part of this unit.
The experiment consists of four tanks as in Fig. 3.3. The inputs of the system are the voltages
applied to two pumps. These pump water from the bottom basin into the four tanks: Pump 1
into Tank 1 and Tank 4, and Pump 2 into Tank 2 and Tank 3. It is assumed that the water
flow is proportional to the voltage applied to the pump. The flow through each pump is split
into two subflows, Outputs 1 and 2 at the top of Fig. 3.3. If the total flow is fi for i = 1, 2,
then the flow through Output 1 is fi,1 = γi fi and the flow through Output 2 is fi,2 = (1 − γi )fi ,
where 0 ≤ γi ≤ 1. It is assumed that each tank has the same section A.
The state of the system is defined by the four levels of the tanks: L1 , L2 , L3 , and L4 . The
dynamics of each tank is given by the law of conservation of mass for a liquid with constant
density

(d/dt)(volume of water) = water flow in − water flow out (3.29)

and Bernoulli’s equation provides that the discharge flow fdi at a Tank i with a hole of section
Doi in the bottom of the cylinder is given by

fdi = Doi √(2gLi ), (3.30)

where we have assumed a perfect discharge.

Tank 1 Let us apply the law of conservation of mass to Tank 1

(d/dt)(AL1 ) = (f1,2 + fd3 ) − fd1 = ((1 − γ1 )kVp1 + Do3 √(2gL3 )) − Do1 √(2gL1 ) (3.31)


[Figure: the four tanks and two pumps, with total flows f1 = kVp1 and f2 = kVp2 split into
f1,1 = γ1 kVp1 , f1,2 = (1 − γ1 )kVp1 , f2,1 = γ2 kVp2 and f2,2 = (1 − γ2 )kVp2 .]

Figure 3.3: The quadruple tank. Adapted from Quanser manual.


and we obtain the dynamics of the state L1 as

L̇1 = −(Do1 /A)√(2gL1 ) + (Do3 /A)√(2gL3 ) + ((1 − γ1 )k/A)Vp1 . (3.32)

This equation is nonlinear in the states L1 and L3 .

Tank 2 Let us apply the law of conservation of mass to Tank 2

(d/dt)(AL2 ) = (f2,2 + fd4 ) − fd2 = ((1 − γ2 )kVp2 + Do4 √(2gL4 )) − Do2 √(2gL2 ) (3.33)

and we obtain the dynamics of the state L2 as

L̇2 = −(Do2 /A)√(2gL2 ) + (Do4 /A)√(2gL4 ) + ((1 − γ2 )k/A)Vp2 . (3.34)

Tank 3 Let us apply the law of conservation of mass to Tank 3

(d/dt)(AL3 ) = f2,1 − fd3 = γ2 kVp2 − Do3 √(2gL3 ) (3.35)

and we obtain the dynamics of the state L3 as

L̇3 = −(Do3 /A)√(2gL3 ) + (γ2 k/A)Vp2 . (3.36)

Tank 4 Finally, let us apply the law of conservation of mass to Tank 4

(d/dt)(AL4 ) = f1,1 − fd4 = γ1 kVp1 − Do4 √(2gL4 ) (3.37)

and we obtain the dynamics of the state L4 as

L̇4 = −(Do4 /A)√(2gL4 ) + (γ1 k/A)Vp1 . (3.38)

As a result, the state-space representation of the quadruple tank process is

L̇1 = f1 (L1 , L2 , L3 , L4 , Vp1 , Vp2 ) = −(Do1 /A)√(2gL1 ) + (Do3 /A)√(2gL3 ) + ((1 − γ1 )k/A)Vp1 , (3.39)
L̇2 = f2 (L1 , L2 , L3 , L4 , Vp1 , Vp2 ) = −(Do2 /A)√(2gL2 ) + (Do4 /A)√(2gL4 ) + ((1 − γ2 )k/A)Vp2 , (3.40)
L̇3 = f3 (L1 , L2 , L3 , L4 , Vp1 , Vp2 ) = −(Do3 /A)√(2gL3 ) + (γ2 k/A)Vp2 , (3.41)
L̇4 = f4 (L1 , L2 , L3 , L4 , Vp1 , Vp2 ) = −(Do4 /A)√(2gL4 ) + (γ1 k/A)Vp1 . (3.42)
The output of the quadruple tank process is given by the levels of Tank 1 and Tank 2, so it
can be written in our classical linear form

[L1 ; L2 ] = [1 0 0 0; 0 1 0 0] [L1 ; L2 ; L3 ; L4 ]. (3.43)


Now, let us assume that we want to operate the plant around the input voltages Vp1o and
Vp2o . Then, from the last two equations, there exist unique levels Lo3 and Lo4 such that

0 = −(Do3 /A)√(2gLo3 ) + (γ2 k/A)Vp2o , (3.44)
0 = −(Do4 /A)√(2gLo4 ) + (γ1 k/A)Vp1o . (3.45)

Once Lo3 and Lo4 have been determined, it is clear that there exist unique levels Lo1 and Lo2 such
that

0 = −(Do1 /A)√(2gLo1 ) + (Do3 /A)√(2gLo3 ) + ((1 − γ1 )k/A)Vp1o , (3.46)
0 = −(Do2 /A)√(2gLo2 ) + (Do4 /A)√(2gLo4 ) + ((1 − γ2 )k/A)Vp2o ; (3.47)

hence, setting the voltage of the pumps, the operating point is determined by (Lo1 , Lo2 , Lo3 , Lo4 , Vp1o , Vp2o ),
in short, (Lo , V o ).

Exercise 3.1.6. Find the expression of (Lo1 , Lo2 , Lo3 , Lo4 ) as a function of (Vp1o , Vp2o ).

Let us define the new set of coordinates as follows

x1 = L1 − Lo1 , (3.48)

x2 = L2 − Lo2 , (3.49)

x3 = L3 − Lo3 , (3.50)

x4 = L4 − Lo4 . (3.51)

and the new set of inputs and outputs

u1 = Vp1 − Vp1o , (3.52)

u2 = Vp2 − Vp2o , (3.53)

y1 = L1 − Lo1 , (3.54)

y2 = L2 − Lo2 . (3.55)


The linearisation around this point is given by

[ẋ1 ; ẋ2 ; ẋ3 ; ẋ4 ] = [∂fi /∂Lj (Lo , V o )]i,j=1,...,4 [x1 ; x2 ; x3 ; x4 ] + [∂fi /∂Vpj (Lo , V o )]i=1,...,4; j=1,2 [u1 ; u2 ], (3.56)

where the first matrix collects the partial derivatives of f1 , . . . , f4 with respect to L1 , . . . , L4 ,
and the second matrix the partial derivatives with respect to Vp1 and Vp2 , all evaluated at the
operating point (Lo , V o ).

Now we need to find the derivatives of the functions given in (3.39), (3.40), (3.41), and (3.42),
and evaluate them at the operating point (Lo , V o ). Then it follows

[ẋ1 ; ẋ2 ; ẋ3 ; ẋ4 ] =
[−(Do1 /2A)√(2g/Lo1 )  0  (Do3 /2A)√(2g/Lo3 )  0;
0  −(Do2 /2A)√(2g/Lo2 )  0  (Do4 /2A)√(2g/Lo4 );
0  0  −(Do3 /2A)√(2g/Lo3 )  0;
0  0  0  −(Do4 /2A)√(2g/Lo4 )] [x1 ; x2 ; x3 ; x4 ]
+ [(1 − γ1 )k/A  0; 0  (1 − γ2 )k/A; 0  γ2 k/A; γ1 k/A  0] [u1 ; u2 ]. (3.57)

The state-space representation of the dynamical system is completed with the output equation

y = [1 0 0 0; 0 1 0 0] [x1 ; x2 ; x3 ; x4 ]. (3.58)

Exercise 3.1.7. Given the quadruple plant with values k = 0.5 · 10−4 /60 m3 s−1 V−1 , A = π · (0.03)2
m2 , Doi = 4π · 10−6 m2 for i = 1, 2, Doi = π · 10−6 m2 for i = 3, 4, and γi = 0.25 for i = 1, 2,
carry out a simulation of the nonlinear system in Simulink and compare the nonlinear system
with the linearisation at the operating point given by Pump 1 working at 6 V and Pump 2
at 4 V. Data: g = 9.81 m s−2 . Levels at the operating point: Lo1 = 6.7802 · 10−3 m;
Lo2 = 4.5388 · 10−3 m; Lo3 = 3.5862 · 10−3 m; and Lo4 = 8.0690 · 10−3 m.
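The operating-point levels quoted in the exercise can be reproduced by solving (3.44)–(3.47) sequentially. The sketch below assumes NumPy is available:

```python
import numpy as np

# Parameter values from Exercise 3.1.7
g = 9.81
k = 0.5e-4 / 60               # m^3 s^-1 V^-1
A = np.pi * 0.03**2           # tank section, m^2
Do = np.array([4e-6 * np.pi, 4e-6 * np.pi, 1e-6 * np.pi, 1e-6 * np.pi])
gamma1 = gamma2 = 0.25
Vp1, Vp2 = 6.0, 4.0

# Each equilibrium equation reduces to Do*sqrt(2gL) = inflow; solve in order
L3 = (gamma2 * k * Vp2 / Do[2])**2 / (2 * g)                                  # (3.44)
L4 = (gamma1 * k * Vp1 / Do[3])**2 / (2 * g)                                  # (3.45)
L1 = ((Do[2] * np.sqrt(2 * g * L3) + (1 - gamma1) * k * Vp1) / Do[0])**2 / (2 * g)  # (3.46)
L2 = ((Do[3] * np.sqrt(2 * g * L4) + (1 - gamma2) * k * Vp2) / Do[1])**2 / (2 * g)  # (3.47)

# Agreement with the levels quoted in the exercise
assert np.allclose([L1, L2, L3, L4],
                   [6.7802e-3, 4.5388e-3, 3.5862e-3, 8.0690e-3], rtol=1e-3)
```

This also makes a convenient starting point for simulating the nonlinear model with an ODE solver and comparing against the linearisation, as the exercise requests.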


3.2 Introduction to Lyapunov Stability


In the previous section, we studied the stability of a nonlinear system by linearising it around
an equilibrium point. However, this only provides information about a very narrow region of
the state-space. In this section we are going to introduce a result that allows us to study a
wider region of the state-space.
We restrict our attention to the autonomous system

ẋ(t) = f (x(t)), x(0) = x0 ; (3.59)

and we assume that f : Rn 7→ Rn has nice properties in such a way that the above equation
has one and only one solution and f (0) = 0. The latter assumption ensures that there is at
least one equilibrium point, which is assumed without loss of generality to be the origin of the
state space.

Theorem 3.2.1 (Lyapunov Theorem). Let V : Rn 7→ R be a continuously differentiable func-


tion such that V (0) = 0 and V (x) > 0 for all x ∈ Rn − {0}. If

V̇ (x) ≤ 0 (3.60)

for all x ∈ Rn then the equilibrium point x = 0 is stable. Moreover, if

V̇ (x) < 0 (3.61)

for all x ∈ Rn − {0}, then the equilibrium point is asymptotically stable.

The application of this theorem will be covered in the second semester in Nonlinear Control
and Optimal and Robust Control.
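For the pendulum of Section 3.1.3, the total energy is a natural Lyapunov candidate. The numerical check below is a sketch, not a proof; it assumes NumPy, uses illustrative values of g and l, and samples random states to verify V > 0 and V̇ = 0 (which gives stability, but not asymptotic stability, of the downward equilibrium):

```python
import numpy as np

g, l = 9.81, 1.0   # illustrative pendulum parameters

# Energy-like candidate: V(0) = 0 and V(x) > 0 for x != 0 near the origin
V = lambda x: 0.5 * x[1]**2 + (g / l) * (1 - np.cos(x[0]))
f = lambda x: np.array([x[1], -(g / l) * np.sin(x[0])])   # pendulum (3.10)-(3.11)

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-3, 3, size=2)
    grad = np.array([(g / l) * np.sin(x[0]), x[1]])   # gradient of V
    Vdot = grad @ f(x)                                # Vdot = grad(V) . f(x)
    assert V(x) > 0
    assert abs(Vdot) < 1e-12   # Vdot = 0 <= 0: the origin is stable
```

V̇ vanishes identically because the pendulum conserves energy; a damped pendulum would give V̇ < 0 and asymptotic stability.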


3.3 Learning Outcomes


The learning outcomes of this chapter can be summarised as follows:

• The state-space representation is able to describe nonlinear systems.

• The point xe of the dynamical system ẋ = f (x) is said to be an equilibrium point if


f (xe ) = 0.

• The linearisation around an equilibrium point is always possible. The dynamics of the
system near the equilibrium point is described by the Jacobian matrix of f evaluated at
the equilibrium point xe .

• The stability of an equilibrium point xe is given by the eigenvalues of the Jacobian matrix
evaluated at xe , as for any linear system.

• When we have a nonlinear system operating with a constant input different to zero, then
the linearisation is still possible. This linearisation is referred to as linearisation around
an operating point. The matrices of the state-space representation are given by different
Jacobian matrices, all of them evaluated at the operating point.

• Stability analysis of nonlinear systems is a beautiful and fun topic. One of the most
important approaches is the Lyapunov theorem, which will be widely studied in the
second semester.

Chapter 4

Controllability and Observability

This chapter deals with two important concepts for state-space control: controllability and
observability. The former concerns the ability of the input to “move” the system from one
state to another state. The latter concerns the ability of the output to “reveal” the state of
the system. Both concepts were developed in the fifties and sixties of the last century, when
control engineers started to use state-space techniques instead of classical transfer functions.
Since then, controllability and observability have appeared in modern control techniques using
the state-space representation.

4.1 Controllability

4.1.1 Definition

Let us start with the mathematical definition.

Definition 4.1.1 (Controllability). The pair [A, B] is said to be controllable if for any x0 , x1 ∈
Rn and any t1 > 0, there exists a control action u : [0, t1 ] 7→ Rm such that the solution of

ẋ(t) = Ax(t) + Bu(t); x(0) = x0 (4.1)

is x1 at t = t1 , i.e. x(t1 ) = x1 .

We will say that a state-space representation is controllable if its pair [A, B] is controllable.
Loosely speaking, given two points of the state space, the controllability of the system ensures
that we will be able to move between them by a correct design of the input. In the opposite
case, i.e. if the pair [A, B] is not controllable, the input will not be enough to determine a
desired trajectory of the system.


If a system is in its modal form, then it is very easy to check the controllability of the pair
[A, B], where A is diagonal. We need to check if all elements of B are non-zero. If so, the pair
[A, B] is controllable. Otherwise, the mode corresponding to the zero element of B will not
be affected by the input at all, and its fate will be fixed by the matrix A itself, i.e., as in the
autonomous mode.

4.1.2 Test for controllability

This section presents a more elegant method to test controllability than using the modal form.
The following theorem provides an essential tool to check the controllability of the pair [A, B].

Result 4.1.2. The pair [A, B] is controllable if and only if the matrix

[B  AB  A2 B  · · ·  An−1 B] (4.2)

is full rank.

This result introduces an important matrix in our lives, so let us define it properly.

Definition 4.1.3 (Controllability matrix). Given A ∈ Rn×n and B ∈ Rn×m , the controllability
matrix associated with the pair [A, B], henceforth C(A, B) ∈ Rn×nm , is given by

C(A, B) = [B  AB  A2 B  · · ·  An−1 B]. (4.3)

For the case of a SISO system, i.e., m = 1, the controllability matrix is square and full rank
is reduced to nonsingular.

Result 4.1.4. The pair [A, B] with B ∈ Rn×1 is controllable if and only if the matrix C(A, B)
is nonsingular, i.e.

det [B  AB  A2 B  · · ·  An−1 B] ≠ 0. (4.4)
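This test is easy to automate. The sketch below is a hand-rolled equivalent of the MATLAB `ctrb` command mentioned later in this chapter, assuming NumPy; the double-integrator example is illustrative and not from the notes:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B] of the pair [A, B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Double integrator: controllable when the input drives the velocity state
A = np.array([[0., 1.], [0., 0.]])
B1 = np.array([[0.], [1.]])
assert np.linalg.matrix_rank(ctrb(A, B1)) == 2   # full rank -> controllable

# Same A, but the input only enters the position state: rank deficient
B2 = np.array([[1.], [0.]])
assert np.linalg.matrix_rank(ctrb(A, B2)) == 1   # not controllable
```

Note that `matrix_rank` uses a singular-value tolerance, which is more robust than testing the determinant against zero in floating point.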

Once again, we should expect that such an important property does not depend on the basis
that we choose to express A and B. The following result ensures that this property does not
change due to a transformation.

Result 4.1.5. Given the pair [A, B] and a nonsingular matrix T , the pair [T AT −1 , T B] is
controllable if and only if [A, B] is controllable.

Exercise 4.1.6. Show that C(T AT −1 , T B) = T C(A, B). What conclusion can be drawn? Hint:
Given a nonsingular matrix T , then rank(X) = ρ if and only if rank(T X) = ρ.


Finally, now we can understand why the control canonical form has such a name.

Result 4.1.7. If the pair [A, B] is controllable, then there exists a nonsingular matrix T such
that the state space representation (T AT −1 , T B, CT −1 , D) is in the control canonical form.

We have introduced the control canonical form in Chapter 1. It is related to the controlla-
bility of the pair [A, B] by the following result:

Result 4.1.8. The pair [A, B] is controllable if and only if there exist a nonsingular matrix T
and a set of numbers ai with i = 0, . . . , n − 1 such that

T AT −1 = [0 1 0 0 · · · 0;
0 0 1 0 · · · 0;
0 0 0 1 · · · 0;
. . . ;
0 0 0 0 · · · 1;
−a0 −a1 −a2 −a3 · · · −an−1 ] and T B = [0; 0; 0; . . . ; 0; 1]. (4.5)

Finally, the following result provides an eigenvector test for controllability; it can be used to
show that the control canonical form of a transfer function with no pole-zero cancellation is
controllable:

Result 4.1.9. A pair [A, B] is uncontrollable if and only if there is a left eigenvector of A, v,
such that v ∗ B = 0.

Whereas the eigenvectors that we have used previously are also referred to as right eigenvectors,
the definition of a left eigenvector is similar:

Definition 4.1.10. The vector v is said to be a left eigenvector of A if there exists λ such that
v ∗ A = λv ∗ .

4.1.3 Proof of the main result

Students interested in controllability and observability will find more information in Chapter 2
in Kailath and Chapters 5 and 6 in Antsaklis and Michel. The aim of this Section is to show a
simplified version of the proof of Result 4.1.2.
The concept of controllability of the pair [A, B] requires that, given an initial state x0 , we
can find an input u(t) such that we reach the state x1 after some period of time. To simplify
our development, we are going to try to find a very powerful input which is able to move the
state instantaneously.


The first input that we can try with this property is the Dirac delta. Let us consider
u1 (t) = α1 δ(t) for any real value α1 ; then, applying equation (2.42)1 , the new state x1 will be
reached at time 0+ , where 0+ is the “following instant” after 0, with definition 0+ = limε→0+ (0 + ε).
Under these conditions, the new state will be given by

x1 = x(0+ ) = eA·0+ x0 + ∫[0−, 0+] eA(0+ −τ ) B(α1 δ(τ ))dτ, (4.6)

where eA·0+ = I, and using the Dirac delta property (or definition)

∫[0−, 0+] f (τ )δ(τ )dτ = f (0), (4.7)

it thus follows that

x1 = x0 + Bα1 . (4.8)

So if we want to reach a state x1 with n coordinates, the question results in a set of n
simultaneous equations (one for each state) with a single unknown α1 . Evidently, we cannot
solve these equations for an arbitrary x1 . However, this does not mean that the system is
uncontrollable, since we can propose a more sophisticated input.
Let us consider u2 (t) = α1 δ(t) + α2 δ (1) (t), where we are using the derivative of the Dirac
delta. It comes from generalised function (or distribution) theory, and you may not be familiar
with it; do not panic! The only property that we want to use is that

∫[0−, 0+] f (τ )δ (1) (τ )dτ = (−1) df (τ )/dτ |τ =0 . (4.9)

Then, using (2.42), we obtain the expression of the new state as follows:

x1 = x(0+ ) = eA·0+ x0 + ∫[0−, 0+] eA(0+ −τ ) B(α1 δ(τ ) + α2 δ (1) (τ ))dτ. (4.10)

Thus, using (4.7) and (4.9), it follows

x1 = x0 + Bα1 + (−1)(−A)e−Aτ |τ =0 Bα2 = x0 + Bα1 + ABα2 . (4.11)

As a result, given x1 we can find the required input by solving the set of simultaneous equations
given by

[B  AB] [α1 ; α2 ] = (x1 − x0 ). (4.12)

Once again, we have n equations but only 2 unknowns (α1 and α2 ), hence we cannot find a
solution for any x1 .
1
We should have used 0− in the lower limit of the integral in our development. If after the following
development, you still have questions, please, let me know!


Now, we can propose the following input

un (t) = α1 δ(t) + α2 δ (1) (t) + · · · + αn δ (n−1) (t). (4.13)

In general, the k-th derivative of the Dirac delta holds the following property

∫[0−, 0+] f (t)δ (k) (t)dt = (−1)k dk f (t)/dtk |t=0 . (4.14)

A similar development yields

x1 = x0 + Bα1 + ABα2 + · · · + An−1 Bαn . (4.15)

Finally, we have obtained a set of simultaneous equations with the same number of equations
as unknowns

[B  AB  A2 B  · · ·  An−1 B] [α1 ; α2 ; α3 ; . . . ; αn ] = (x1 − x0 ), (4.16)

and the controllability matrix turns up. Then, it will have a solution for any value of x1 (and
x0 ) if and only if the controllability matrix has rank n. As this is the maximum rank that it
can have, we say it is full rank.
If the student is not familiar with the Cayley-Hamilton theorem, they could be tempted
to introduce a new derivative to obtain more freedom if the controllability matrix is not full
rank. However, students who are familiar with the Cayley-Hamilton theorem will know that
this is useless, since the theorem states that An is a linear combination of A0 , A1 , . . . , and An−1 .
As a result, we will only be able to reach an arbitrary state x1 if the controllability matrix is
full rank.
The above development shows the sufficiency of Result 4.1.2, i.e. the pair [A, B] is con-
trollable if the controllability matrix is full rank. To prove the necessity of Result 4.1.2 it is
required to show that if the controllability matrix is not full rank, then there are states that
cannot be reached with any input. This goes beyond the scope of this unit, but this part of
the proof can be found in the reading list.

4.1.4 Worked example

Let us consider the pair


   
A = [−2 −2 0 0; 1 0 0 0; 0 0 −5 −2; 0 0 2 0],   B = [4 0; 0 0; 0 1; 0 0].    (4.17)
Then the controllability matrix is
 
C(A, B) = [B AB A²B A³B]
        = [4 0 −8  0  8   0  0   0;
           0 0  4  0 −8   0  8   0;
           0 1  0 −5  0  21  0 −85;
           0 0  0  2  0 −10  0  42].    (4.18)

Trick. The rank of a matrix X ∈ R^{n×m} is between 0 and min{n, m}. The matrix X has rank
larger than or equal to k, in short rank(X) ≥ k, if there is one minor of order k different
from zero. A minor of a matrix is the determinant of a square submatrix, roughly speaking the
intersection of the same number of rows and columns.

With this information, let us start by considering 1 as a possible rank of the matrix. It is
clear that the minor intersecting row 1 and column 1 is different from zero. As a result, the
rank of the matrix is at least 1. The next step is to find a minor of order 2 different from zero.
For instance, the minor intersecting rows 1 and 3 and columns 1 and 2 is

|4 0; 0 1| = 4 ≠ 0.    (4.19)

As a result, the rank of the matrix is at least 2. Following this procedure, the minor
intersecting rows 1, 3 and 4 and columns 1, 2 and 4 is

|4 0 0; 0 1 −5; 0 0 2| = 8 ≠ 0.    (4.20)

As a result, the rank of the matrix is at least 3. Finally, let us try minors of order 4. The
minor with the first 4 rows and columns is

|4 0 −8 0; 0 0 4 0; 0 1 0 −5; 0 0 0 2|.    (4.21)


A brute-force method can be applied, but the following tricks make this determinant a
straightforward computation.

Trick. Swapping two rows or two columns changes the sign of the determinant. The determi-
nant of a triangular matrix is the product of the diagonal elements. 

Hence

|4 0 −8 0; 0 0 4 0; 0 1 0 −5; 0 0 0 2| = −|4 0 −8 0; 0 1 0 −5; 0 0 4 0; 0 0 0 2| = −(4 · 1 · 4 · 2) = −32 ≠ 0,    (4.22)

where we have swapped rows 2 and 3 first and then applied the property of triangular matrices.

Exercise 4.1.11. Use the MATLAB command ctrb to find the controllability matrix of the
pair (A, B). Use the MATLAB command rank to find the rank of this matrix.
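If you want to check the arithmetic outside MATLAB, the same test can be sketched in Python with NumPy (my assumption; the notes use MATLAB, and the helper `ctrb` below is hand-rolled rather than a library call):

```python
import numpy as np

# Pair from the worked example (4.17)
A = np.array([[-2, -2,  0,  0],
              [ 1,  0,  0,  0],
              [ 0,  0, -5, -2],
              [ 0,  0,  2,  0]])
B = np.array([[4, 0],
              [0, 0],
              [0, 1],
              [0, 0]])

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

C_AB = ctrb(A, B)
rank = np.linalg.matrix_rank(C_AB)
print(C_AB.shape, rank)  # rank 4, i.e. full rank, so [A, B] is controllable
```

Note that `matrix_rank` uses the singular value decomposition, which is a numerically safer rank test than hunting for non-zero minors by hand.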

4.2 Stabilizability
Stabilizability is a relaxation of the controllability concept. Loosely speaking, any controllable
pair [A, B] will be stabilizable, but not every stabilizable pair [A, B] will be controllable.

Definition 4.2.1 (Stabilizability). The pair [A, B] is said to be stabilizable if there exists
K ∈ Rm×n such that the matrix (A − BK) is Hurwitz.

At first glance, it is difficult to find the connection between controllability and stabilizability.
The concept of controllability informs us about the ability to control the modes of the
system. The concept of stabilizability informs us about the possibility of stabilising the system
through a state feedback interconnection (Chapter 5). Therefore, if there are modes that cannot
be controlled, i.e. our pair [A, B] is not controllable, we need to ensure that these modes are
stable.
We can state our test for checking if the pair [A, B] is stabilizable as follows:

Result 4.2.2. Let us assume that the system A is given in the form

A = [A11 A12; 0 A22],   B = [B1; 0],    (4.23)

where the pair [A11 , B1 ] is controllable. Then the system is stabilizable if and only if A22 is
Hurwitz.


[Block diagram: the controllable subsystem (A11, [B1 A12], C1, 0) is driven by u and by x2; the uncontrollable subsystem (A22, 0, C2, 0) generates x2; the outputs C1x1 and C2x2 and the feedthrough Du sum to give y.]
Figure 4.1: In the representation (4.23), the uncontrollable system can be expressed as two
subsystems, one where the states x1 can be controlled from u, and another where u does not
have any ability to modify the trajectory of the state x2 .

Two straightforward conclusions can be drawn:

1. As commented, if [A, B] is controllable, then the system is stabilizable.

2. If A is Hurwitz, then the pair [A, B] is stabilizable for any B (by choosing K = 0).

A natural question arises: what happens if the system is not in the above form? Then, it
is complicated... The full solution of this problem is far beyond the scope of these notes, but
Section 2.4 in Linear Systems by T. Kailath and Chapter 18 in Linear System Theory by W. J.
Rugh are recommended for daredevil students. Here we will just state that it is always possible
to transform the pair [A, B] into the above desired form.

Result 4.2.3. Let A ∈ R^{n×n} and B ∈ R^{n×m}. If rank(C(A, B)) = ρ < n, then there exists a
nonsingular matrix T such that

T A T^{−1} = [X11 X12; 0 X22],   T B = [Y1; 0],    (4.24)

where X11 ∈ R^{ρ×ρ}, Y1 ∈ R^{ρ×m}, and the pair [X11, Y1] is controllable.

In this course, we will transform the system into its modal form.

4.2.1 Worked example: Jan 2013 Exam

Q4 (Jan 2013 Exam) requires us to decide whether the pair

A = [−7 6; 6 2],   B = [−2; 1]    (4.25)


is stabilizable.
The controllability matrix is

C(A, B) = [−2 20; 1 −10],    (4.26)

and det(C(A, B)) = 0, hence there is an uncontrollable mode and we need to determine if it is
stable or unstable.
To do so, we will transform this pair into the modal form. The characteristic equation
det(A − λI) = 0 is

λ² + 5λ − 50 = 0.    (4.27)

Hence the eigenvalues are 5 and −10. Now we need to find the eigenvectors.

λ = 5: We need to find a non-trivial solution of the simultaneous equations

[−7 − 5  6; 6  2 − 5][x; y] = [0; 0],    (4.28)

i.e.

−12x + 6y = 0,    (4.29)
6x − 3y = 0.    (4.30)

Hence, the solution is any vector with y = 2x, for instance, v1 = (1, 2).

λ = −10: We need to find a non-trivial solution of the simultaneous equations

[−7 − (−10)  6; 6  2 − (−10)][x; y] = [0; 0],    (4.32)

i.e.

3x + 6y = 0,    (4.33)
6x + 12y = 0.    (4.34)

Hence, the solution is any vector with x = −2y, for instance, v2 = (−2, 1).
Using these two vectors,

V = [1 −2; 2 1],    (4.36)


where diag(5, −10) = V^{−1}AV; and

V^{−1}B = (1/5)[1 2; −2 1][−2; 1] = [0; 1].    (4.37)

As a result, the uncontrollable mode is the mode λ = 5, hence the uncontrollable mode is
unstable; therefore the pair [A, B] is not stabilizable.
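The same conclusion can be cross-checked numerically. The sketch below (Python/NumPy, offered as an alternative to the notes' MATLAB workflow) diagonalises A and looks at B in the modal basis:

```python
import numpy as np

A = np.array([[-7.0, 6.0], [6.0, 2.0]])
B = np.array([[-2.0], [1.0]])

eigvals, V = np.linalg.eig(A)   # columns of V are eigenvectors of A
Bm = np.linalg.solve(V, B)      # V^{-1} B: input gain seen by each mode

# The entry of V^{-1}B paired with the eigenvalue 5 is zero, so the
# unstable mode cannot be influenced by u: [A, B] is not stabilizable.
i5 = int(np.argmin(np.abs(eigvals - 5)))
print(eigvals[i5], Bm[i5, 0])
```

The eigenvector scaling returned by `eig` is arbitrary, but a zero entry of V^{−1}B stays zero under any rescaling, so the test is unaffected.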

4.3 Observability

4.3.1 Definition

Again, let us start with the mathematical definition.

Definition 4.3.1 (Observability). The pair [A, C] is said to be observable if for any t1 > 0,
the initial condition x0 of

ẋ(t) = Ax(t); (4.38)

y(t) = Cx(t); (4.39)

can be determined from the time history of the output in [0, t1 ].

The concept of observability may be less intuitive since you may not be familiar with the
concept of an observer. Loosely speaking, if a system is observable then we can “guess” what is
happening with the state of the system from output information. This concept is independent
of the input; hence it is only determined by the pair [A, C]. We will explain why we do not
need to use the input in Chapter 5.
Similarly, if the state-space representation is in its modal form, to check that the pair [A, C]
is observable, just check that all elements of C are non-zero. If one of them is zero, this mode
is said to be unobservable. Once again, this is not a really nice form for testing observability,
since there are systems that cannot be expressed in the modal form.

4.3.2 Derivation of the Observability matrix

We are going to say that the state xō ≠ 0 is unobservable² if the output of the system ẋ(t) =
Ax(t), y(t) = Cx(t) is null for all t ≥ 0 when x(0) = xō.

²Only controllable and unobservable states can be properly defined from a mathematical point of view, as
they belong to subspaces. The sets of uncontrollable states and observable states are not subspaces.


Following Theorem 2.2.6, the solution of the autonomous system is given by x(t) = e^{At}xō,
hence

y = Ce^{At}xō = C Σ_{k=0}^{∞} (1/k!)(At)^k xō.

Applying the Cayley-Hamilton theorem, we can write the exponential matrix as a finite sum,
since the remaining terms are linear combinations of the first ones³:

e^{At} = α1(t)I + α2(t)A + α3(t)A² + · · · + αn(t)A^{n−1};

then the expression of the output of the system is

y(t) = Ce^{At}xō = C(α1(t)I + α2(t)A + α3(t)A² + · · · + αn(t)A^{n−1})xō,

or, in matrix form,

y = [α1(t) α2(t) α3(t) . . . αn(t)][C; CA; CA²; . . . ; CA^{n−1}]xō = 0.

As the above expression is true for any value of t, it implies that

[C; CA; CA²; . . . ; CA^{n−1}]xō = O[A, C]xō = 0,

hence the unobservable states belong to the kernel (or null space) of the observability matrix, i.e.
xō ∈ ker(O[A, C]). A second property that this expression gives us is that if xō ∈ ker(O[A, C]),
then e^{At}xō ∈ ker(O[A, C]) for all t ≥ 0. So if you start in the subspace of unobservable states,
then you will stay there as the system evolves. This property is called invariance with
respect to the evolution ẋ = Ax. The final conclusion is that if O[A, C] is full rank, then the
rank-nullity theorem ensures that ker(O[A, C]) = {0}.
For completeness, let us mention that the subspace of controllable states is given by the
column space of the controllability matrix, see Equation (4.16).
³Method 5 in goo.gl/2MKMux


4.3.3 Test for observability

In this section, we present a test for observability which can be used independently of the
realization of [A, C].

Result 4.3.2. The pair [A, C] is observable if and only if the matrix

[C; CA; CA²; . . . ; CA^{n−1}]    (4.40)

is full rank.

Another important matrix has been introduced in our lives, so let us define it properly.

Definition 4.3.3 (Observability matrix). Given A ∈ R^{n×n} and C ∈ R^{m×n}, the observability
matrix associated with the pair [A, C], henceforth O(A, C) ∈ R^{nm×n}, is given by

O(A, C) = [C; CA; CA²; . . . ; CA^{n−1}].    (4.41)

For the case of a SISO system, i.e. m = 1, the observability matrix is square, and full rank
reduces to nonsingular.

Result 4.3.4. The pair [A, C] with C ∈ R^{1×n} is observable if and only if the observability
matrix is nonsingular, i.e.

det [C; CA; CA²; . . . ; CA^{n−1}] ≠ 0.    (4.42)

Once again, we should expect that such an important property does not depend on the basis
that we choose to express A and C. The following result ensures that this property does not
change due to a transformation.


Result 4.3.5. Given the pair [A, C] and a nonsingular matrix T , the pair [T AT −1 , CT −1 ] is
observable if and only if [A, C] is observable.

Exercise 4.3.6. Show that O(T AT −1 , CT −1 ) = O(A, C)T −1 . What conclusion can be drawn?

Finally, now we can understand why the observer canonical form has such a name.

Result 4.3.7. If the pair [A, C] is observable, then there exists a nonsingular matrix T such
that the state space representation (T AT −1 , T B, CT −1 , D) is in the observer canonical form.

Finally, a result that we will use in Chapter 6 is given as follows:

Result 4.3.8. A pair [A, C] is unobservable if and only if Cv = 0, where v is an eigenvector of
A.

4.3.4 Worked example

Let us determine whether [A, C] is observable, where

A = [−2 −2 0 0; 1 0 0 0; 0 0 −5 −2; 0 0 2 0],   C = [4 0 0 0; 0 0 1 0].    (4.43)
Then the observability matrix is

O(A, C) = [C; CA; CA²; CA³]
        = [ 4   0   0   0;
            0   0   1   0;
           −8  −8   0   0;
            0   0  −5  −2;
            8  16   0   0;
            0   0  21  10;
            0 −16   0   0;
            0   0 −85 −42].    (4.44)

Moreover, it is easy to check that

|4 0 0 0; 0 0 1 0; −8 −8 0 0; 0 0 −5 −2| = −|4 0 0 0; −8 −8 0 0; 0 0 1 0; 0 0 −5 −2| = −64 ≠ 0,    (4.45)


hence the rank of O(A, C) = 4, i.e. O(A, C) is full-rank. As a result, the pair [A, C] is
observable.

Exercise 4.3.9. Use the MATLAB command obsv to find the observability matrix of the pair
(A, C). Use the MATLAB command rank to find the rank of this matrix.
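As with controllability, the observability test can be sketched outside MATLAB in Python with NumPy (my assumption; the helper `obsv` below is hand-rolled, not a library call):

```python
import numpy as np

# Pair from the worked example (4.43)
A = np.array([[-2, -2,  0,  0],
              [ 1,  0,  0,  0],
              [ 0,  0, -5, -2],
              [ 0,  0,  2,  0]])
C = np.array([[4, 0, 0, 0],
              [0, 0, 1, 0]])

def obsv(A, C):
    """Observability matrix: C, CA, ..., CA^{n-1} stacked row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

O_AC = obsv(A, C)
rank = np.linalg.matrix_rank(O_AC)
print(O_AC.shape, rank)  # (8, 4) with rank 4: full rank, so [A, C] is observable
```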

4.4 Detectability
Detectability is a relaxation of the observability concept, just as stabilizability is a relaxation
of controllability. Indeed, any observable pair [A, C] will be detectable, but not every detectable
pair [A, C] will be observable.

Definition 4.4.1 (Detectability). The pair [A, C] is said to be detectable if there exists L ∈
Rn×m such that the matrix (A − LC) is Hurwitz.

The concept of detectability informs us about the possibility of designing an observer where
the error between the real state and the observer state approaches zero as t goes to infinity (see
Chapter 5). Therefore, if there are modes that cannot be observed, i.e. our pair [A, C] is not
observable, we need to ensure that these modes are stable.
We can state our test for checking if the pair [A, C] is detectable as follows:

Result 4.4.2. Let us assume that the system A is given in the form

A = [A11 0; A21 A22],   C = [C1 0],    (4.46)

where the pair [A11 , C1 ] is observable. Then the system is detectable if and only if A22 is Hurwitz.

[Block diagram: the subsystem (A11, B1, C1, 0) maps u to y; the subsystem (A22, [A21 B2], 0, 0) is driven by x1 and u and does not affect y.]

Figure 4.2: The unobservable system can be expressed as two subsystems: one where the state
x1 can be determined from the history of y, and another where the state x2 does not have any
ability to modify the output y.

Two straightforward conclusions can be drawn:


1. As commented, if [A, C] is observable, then the system is detectable.

2. If A is Hurwitz, then the pair [A, C] is detectable for any C.

A similar question arises: What happens if the system is not in the above form? The answer
is the same: it is complicated but the same literature can be consulted if you want to have
some fun. Once again, here we will just state that it is always possible to transform the pair
[A, C] into the above desired form.

Result 4.4.3. Let A ∈ R^{n×n} and C ∈ R^{m×n}. If rank(O(A, C)) = ρ < n, then there exists a
nonsingular matrix T such that

T A T^{−1} = [X11 0; X21 X22],   C T^{−1} = [Y1 0],    (4.47)

where X11 ∈ R^{ρ×ρ}, Y1 ∈ R^{m×ρ}, and the pair [X11, Y1] is observable.

In this course, we will transform the system into its modal form.

4.4.1 Worked example: Jan 2013 Exam

Q4 (Jan 2013 Exam) requires us to decide whether the pair

A = [−7 6; 6 2],   C = [1 2]    (4.48)

is detectable.
The observability matrix is

O(A, C) = [1 2; 5 10],    (4.49)
and det(O(A, C)) = 0. Hence there is an unobservable mode and we need to determine if it is
stable or unstable.
Using the previous result of Section 4.2.1,

V = [1 −2; 2 1],    (4.50)

where

V^{−1}AV = [5 0; 0 −10];  and  CV = [1 2][1 −2; 2 1] = [5 0].    (4.51)
As a result, the unobservable mode is the mode λ = −10. Hence the unobservable mode is
stable, and therefore the pair [A, C] is detectable.
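A quick numerical cross-check of this worked example (a Python/NumPy sketch, not part of the exam solution):

```python
import numpy as np

A = np.array([[-7.0, 6.0], [6.0, 2.0]])
C = np.array([[1.0, 2.0]])
V = np.array([[1.0, -2.0], [2.0, 1.0]])  # eigenvectors for lambda = 5 and -10

Lam = np.linalg.solve(V, A @ V)   # V^{-1} A V = diag(5, -10)
Cm = C @ V                        # C in the modal basis: [5, 0]

# The second entry of Cm is zero, so the mode at -10 is unobservable;
# since that mode is stable, [A, C] is detectable.
print(np.diag(Lam), Cm)
```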


[Block diagram: the mode at 5 (uncontrollable and unstable) receives zero input gain; the mode at −10 (unobservable and stable) has zero output gain.]

Figure 4.3: Block diagram of the modal form in Q4 (Jan 2013) and Exercise 4.4.4. It is clear
that there is no possible connection between the input and the output.

If we sketch the block diagram of the modal form, it is very easy to understand what is
happening in the system, see Fig. 4.3.

Exercise 4.4.4. Show that the transfer function of the state-space representation

A = [−7 6; 6 2],   B = [−2; 1],   C = [1 2],   D = 0    (4.52)

is zero, see Fig. 4.3. How many zeros does this system have? Where are the zeros?
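Before doing the algebra of the exercise, the claim can be sanity-checked numerically by evaluating G(s) = C(sI − A)^{−1}B at a few test points; here is a Python/NumPy sketch (the test points are arbitrary, as long as they avoid the poles 5 and −10):

```python
import numpy as np

A = np.array([[-7.0, 6.0], [6.0, 2.0]])
B = np.array([[-2.0], [1.0]])
C = np.array([[1.0, 2.0]])

def G(s):
    """Evaluate C (sI - A)^{-1} B at a complex frequency s."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

# The input only excites the mode at -10, and only the mode at 5 is seen
# at the output, so G(s) vanishes at every test point.
vals = [G(s) for s in (1.0, 2.0 + 3.0j, -1.0j, 10.0)]
print(vals)
```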

4.5 Final remarks

4.5.1 Duality

The student can check that the developments in the two previous sections are very similar, and I
hope there are no typos due to copy and paste. The symmetry between controllability and
observability is known as duality, and it can be formally stated as follows:

Result 4.5.1 (Duality). Let A ∈ R^{n×n} and X ∈ R^{n×m}. The pair [A, X] is controllable if and
only if the pair [Aᵀ, Xᵀ] is observable.

In the same way, stabilizability and detectability are also dual concepts.

Result 4.5.2. Let A ∈ R^{n×n} and X ∈ R^{n×m}. The pair [A, X] is stabilizable if and only if the
pair [Aᵀ, Xᵀ] is detectable.

This duality will also turn up in the design of controllers and observers (pole placement in
Chapter 5, and Optimal Control and Estimation in the Optimal and Robust Control unit, Semester
2).


4.5.2 Kalman’s decomposition

The combination of Result 4.2.3 and Result 4.4.3 leads to the following result. This is the formal
procedure that we use to obtain the controllability and observability of a system. However, it
may be somewhat difficult.

Result 4.5.3. Any state-space representation can be transformed into a state-space representation
of the following form:

[ż1; ż2; ż3; ż4] = [A11 A12 A13 A14;
                    0   A22 0   A24;
                    0   0   A33 A34;
                    0   0   0   A44][z1; z2; z3; z4] + [B1; B2; 0; 0]u,    (4.53)

y = [0 C2 0 C4][z1; z2; z3; z4],    (4.54)

where the pair ([A11 A12; 0 A22], [B1; B2]) is controllable and the pair ([A22 A24; 0 A44], [C2 C4]) is
observable. Moreover, the transfer function G(s) = C(sI − A)^{−1}B + D is reducible to G(s) =
C2(sI − A22)^{−1}B2 + D.

Exercise 4.5.4. Given the system

A = [0 −1 1; 1 −2 1; 0 1 −1],   B = [1 0; 1 1; 1 2],   C = [0 1 0],  and D = 0,    (4.55)

use the MATLAB command minreal to find the Kalman's decomposition of this system.

A formal procedure for this decomposition can be found in Chapter 6 of A Linear Systems
Primer by Antsaklis and Michel, which is highly recommended. A more advanced reference is
Chapter 3 of Robust and Optimal Control by Zhou, Doyle, and Glover, where the results can be
found with formal proofs.


[Diagram (a): four blocks — “controllable but unobservable”, “uncontrollable and unobservable”, “controllable and observable”, “uncontrollable but observable” — fed from the input, with the observable blocks feeding the output and no interaction between blocks.]

(a) Diagonal Kalman's decomposition. In this case there is no interaction between the four parts of
the system.

[Diagram (b): the same four blocks with some interaction between them.]

(b) General Kalman's decomposition: some interaction between the blocks.

Figure 4.4: Kalman's decomposition. The MATLAB function minreal eliminates all states that
are not in the controllable and observable block. It is stated that minreal can also provide
Kalman's decomposition, but I recommend some critical thinking.


4.6 Learning Outcomes


The learning outcomes of this chapter can be summarised as follows:

• Definition of controllability of a linear system or the pair [A, B].

• The pair [A, B] is controllable if and only if the controllability matrix

C(A, B) = [B AB A²B · · · A^{n−1}B]    (4.56)

is full rank.

• When the pair [A, B] is not controllable but all the uncontrollable modes are stable, then
we say the pair [A, B] is stabilizable. The modal form of A can be used to test this.

• Definition of observability of a linear system or the pair [A, C].

• The pair [A, C] is observable if and only if the observability matrix

O(A, C) = [C; CA; CA²; . . . ; CA^{n−1}]    (4.57)

is full rank.

• When the pair [A, C] is unobservable but all the unobservable modes are stable, then we
say the pair [A, C] is detectable. The modal form of A can be used to test this.

• There exists a duality between controllability and observability: the pair [A, X] is controllable
if and only if the pair [Aᵀ, Xᵀ] is observable.

• Kalman's decomposition splits the system into four blocks: a part that can be controlled
and observed, a part that can only be controlled, a part that can only be observed, and a
part that can be neither observed nor controlled.

Chapter 5

Design in the state-space

Until now, the unit has covered the underpinning principles that will be needed to design
controllers and observers. From now on, we will be focused on using the previous chapters from
a control engineering point of view.
At first, we will assume that we can look inside the system, and all states can be perfectly
known. With this information, we will design a controller that modifies the original dynamics
of the system in a predefined manner. However, this assumption is not very realistic in most
systems. Hence, secondly, we will need to study how to discover what is happening inside the
system using just “input and output information”. Finally, we will combine both designs in
order to obtain a realistic controller.
A very important part of control, tracking control, will not be covered in this unit. State-
space control can cope with set-points, and we will study this issue in the second semester.

5.1 State-feedback controller

This section will propose a problem: the design of a control gain in order to obtain an appropriate
behaviour of the closed-loop system, see Fig. 5.1. The problem is then rewritten in a mathematical
fashion. As with many mathematical problems, we will find situations where the problem is
fashion. As with many mathematical problems, we will find situations where the problem is
correctly defined and we have at least one solution, but we will also easily find situations where
the problem is ill-defined and no solution can be found. When we are able to guarantee the
existence of a solution, we will study the solution of the problem.

5.1.1 Design problem

The design of a state feedback controller can be stated as the following design problem:


[Block diagram: the signal −Kx enters ẋ = Ax + Bu; the state x is fed back through the gain K; the output is y = Cx.]

Figure 5.1: State feedback problem.

Problem 5.1.1 (Pole placement). Given the state-space equation

ẋ = Ax + Bu, (5.1)

find the state feedback law u = −Kx such that the poles of the dynamical system

ẋ = (A − BK)x (5.2)

are placed at (α1 , α2 , . . . , αn ).

As we are considering only the input u = −Kx we assume that the reference signal will be
always null, and we focus our attention on achieving the steady state smoothly. This control
design problem is just a mathematical problem that can be stated as follows.

Problem 5.1.2. Given the pair [A, B], where A ∈ R^{n×n} and B ∈ R^{n×m}, find K ∈ R^{m×n} such
that the eigenvalues of A − BK are given by {α1, α2, . . . , αn}.

Remark 5.1.3. Note that we could ask for complex eigenvalues αi ∈ C, but we should include
their complex conjugates, i.e. there must exist 1 ≤ j ≤ n such that αj = αi∗, if we want K ∈ R^{m×n}.
If we do not include the complex eigenvalues in pairs, then K ∈ C^{m×n}. We will always include
complex conjugate pairs in the desired values αi, but interested students are welcome to study
other cases.

Example 5.1.4. Let us consider A = 5 and B = 1, and find K ∈ R such that the eigenvalue of
(A − BK) is −2, i.e.

5 − 1K = −2.    (5.3)

Then the solution of the problem is K = 7.

Example 5.1.5. Let us consider A = 5 and B = 0, and find K ∈ R such that the eigenvalue of
(A − BK) is −2, i.e.

5 − 0K = −2.    (5.4)

In this example, we would like to stabilise the mode of this unstable system. However, this is
clearly impossible: for any value of K, the pole of the system will be at 5.


5.1.2 Existence of solutions

The above example has provided a trivial situation where we cannot find a solution of the
control design problem 5.1.1 or the mathematical problem 5.1.2. So the first thing that we
need to figure out is when the problem is well defined.

Result 5.1.6. Problem 5.1.1 has a solution for any {α1 , α2 , . . . , αn } if and only if the pair
[A, B] is controllable. 

The mathematical notion of controllability that we have studied in Chapter 4, is the key
point to be able to solve the control problem 5.1.1. If the pair [A, B] contains a mode that
is uncontrollable, then this mode will not be modified by the selection of K. As a result, the
eigenvalues of A − BK will always contain the value of the uncontrollable mode.

Example 5.1.7. Given the pair A = 5 and B = 0, the controllability matrix is given by
C(A, B) = 0.

We can place the modes that are controllable, but we must preserve the uncontrollable
modes in our design.

Result 5.1.8. Let the pair [A, B] be uncontrollable and let {λ1, λ2, . . . , λρ} be the eigenvalues
of the uncontrollable modes. Then, Problem 5.1.1 has a solution if and only if the desired set
of eigenvalues of A − BK is of the form {α1, α2, . . . , αn−ρ, λ1, λ2, . . . , λρ}.

Example 5.1.9. Let us consider A = 5 and B = 0, and find K ∈ R such that the eigenvalue of
(A − BK) is 5, i.e.

5 − 0K = 5.    (5.5)

Hence the above problem has a solution; in fact, it has an infinite number of solutions: any
K ∈ R is a solution of the problem.

Finally, one could wonder if we could solve Problem 5.1.1 where, instead of any location of the
poles, we are just interested in placing the poles in the LHP, i.e. given the pair [A, B], we want
to find K such that A − BK is Hurwitz. Now Definition 4.2.1 seems natural. Controllability
offers us the possibility of designing the placement of all the poles, whereas stabilizability ensures
that we will be able to place all poles in the LHP.

5.1.3 Worked example

Let us consider the system

d³y/dt³ + 5 d²y/dt² + 3 dy/dt + 2y = u,


and design a state feedback controller such that the poles of the closed-loop system are located
at {−1, −2, −3}, assuming that we have access to all states.
It should not be strange that the controller canonical form of a system is desirable when
solving this control design. The controller canonical form of the above system is
   
ẋ = [0 1 0; 0 0 1; −2 −3 −5]x + [0; 0; 1]u;    (5.6)
y = [1 0 0]x.    (5.7)

Let us consider the control action u = −Kx; the closed-loop system is given by

ẋ = [0 1 0; 0 0 1; −2 −3 −5]x + [0; 0; 1](−[k1 k2 k3]x) = [0 1 0; 0 0 1; −(k1 + 2) −(k2 + 3) −(k3 + 5)]x;    (5.8)
and we need to choose k1, k2, and k3 to fulfil the design specification. To this end, the characteristic
equation is computed,

det [λ −1 0; 0 λ −1; (k1 + 2) (k2 + 3) λ + (k3 + 5)] = λ³ + (k3 + 5)λ² + (k2 + 3)λ + (k1 + 2),    (5.9)

and compared with the desired

(λ + 1)(λ + 2)(λ + 3) = λ³ + 6λ² + 11λ + 6.    (5.10)

Then, k3 = 1, k2 = 8, and k1 = 4. In summary, the designed gain is

K = [4 8 1].    (5.11)

Note that these values depend on the realisation of the system, so if we apply a transformation
z = T x, K in the new basis will be different.
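As a quick verification of the design, we can recompute the closed-loop eigenvalues numerically; the following Python/NumPy sketch plays the role of eig(A - B*K) in MATLAB:

```python
import numpy as np

# Controller canonical form (5.6) and the gain K designed above
A = np.array([[ 0.0,  1.0,  0.0],
              [ 0.0,  0.0,  1.0],
              [-2.0, -3.0, -5.0]])
B = np.array([[0.0], [0.0], [1.0]])
K = np.array([[4.0, 8.0, 1.0]])

poles = np.sort(np.linalg.eigvals(A - B @ K).real)
print(poles)  # approximately [-3, -2, -1], as specified
```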

5.1.4 Worked example: Jan 2013 exam

Q4 (Jan 2013 Exam) requires us to decide whether the poles of (A − BK) can be placed at −1 ± j,
with

A = [−7 6; 6 2],   B = [−2; 1].    (5.12)


This example was studied in Chapter 4, where we found that the system is not fully
controllable. Since the poles of A are {5, −10}, applying Result 5.1.8 we conclude that it is not
possible to place the poles of A − BK at this location, since either 5 or −10 must be included
in the desired locations. Here we are going to show that this is actually true.
Let us analyse the eigenvalues of A − BK:
     
A − BK = [−7 6; 6 2] − [−2; 1][k1 k2] = [−7 + 2k1  6 + 2k2; 6 − k1  2 − k2].    (5.13)

The eigenvalues of A − BK satisfy

det(λI − (A − BK)) = det [λ + 7 − 2k1  −6 − 2k2; −6 + k1  λ − 2 + k2] = 0.    (5.14)

Computing this determinant, we find that the characteristic equation is given by

λ² + (5 − 2k1 + k2)λ + (−50 + 10k1 − 5k2) = 0.    (5.15)

Then, solving this second-order equation and using k = −2k1 + k2,

λ = [−(5 + k) ± √((5 + k)² − 4(−50 − 5k))]/2 = [−(5 + k) ± √(k² + 30k + 225)]/2 = [−(5 + k) ± (k + 15)]/2.    (5.16)
As a result, the eigenvalues of A − BK are given by

λ1 = [−(5 + k) + (k + 15)]/2 = 5;    (5.17)
λ2 = [−(5 + k) − (k + 15)]/2 = −10 + 2k1 − k2.    (5.18)

In summary, we have demonstrated that Result 5.1.8 is satisfied for this example. Furthermore,
this has provided a less elegant procedure to find that the system is not stabilizable, since
A − BK will have a pole in the RHP for any values (k1, k2).

5.1.5 Solution of the Pole Placement Problem: Ackermann's formula

If the state-space representation is in the controller canonical form, we have shown that it is
straightforward to find the solution of the problem. However, the previous method could
be somewhat complex in other representations. The general solution to the pole placement
problem was given by Juergen Ackermann in 1972 for SISO systems. In this case, if the system
is controllable, there is one and only one solution.


Result 5.1.10. Given the pair [A, B], where A ∈ R^{n×n} and B ∈ R^{n×1}, the matrix A − BK
has eigenvalues at (α1, α2, . . . , αn) if K ∈ R^{1×n} is given by Ackermann's formula:

K = [0 0 · · · 0 1] C^{−1}(A, B) β(A),    (5.19)

where β(A) = Aⁿ + βn−1 A^{n−1} + βn−2 A^{n−2} + · · · + β1 A + β0 I, with βi the coefficients of the desired
characteristic polynomial, i.e.

β(λ) = (λ − α1)(λ − α2) . . . (λ − αn−1)(λ − αn).    (5.20)

Once again, we need to highlight that the coefficients of the characteristic polynomial will
be real if the poles are specified including complex conjugate pairs. If not, we will be able to find a
solution, but K ∈ C^{1×n}.
The MATLAB command is acker. For MIMO systems, it is possible that the simultaneous
equations to be solved are underdetermined, i.e. there are infinite solutions to the pole placement
problem. Moreover, Ackermann's formula will have issues related to inverting high-order
matrices. To solve both problems, other algorithms have been proposed and MATLAB offers an
alternative command (place). These algorithms are beyond the scope of these notes but, for
further discussion, see J. Kautsky, N. K. Nichols, and P. Van Dooren, “Robust Pole Assignment
in Linear State Feedback,” International Journal of Control, 41 (1985), pp. 1129-1155.

5.1.6 Worked Example: Q3 Jan 2013 Exam

Given the system

ẋ = [−1 −3; −3 −1]x + [3; 1]u,    (5.21)

design a state-feedback controller such that the poles of the closed-loop system are located at
{−1 ± j}.
The first step is to check whether the problem has a solution, so the controllability matrix
C(A, B) needs to be invertible:

C(A, B) = [3 −6; 1 −10]  and so  det C(A, B) = −30 + 6 = −24 ≠ 0,    (5.22)

hence this matrix is nonsingular. Moreover, we will need its inverse,

C^{−1}(A, B) = (−1/24)[−10 6; −1 3].    (5.23)


We need the desired characteristic polynomial; it is given by

β(λ) = (λ − (−1 + j))(λ − (−1 − j)) = λ² + 2λ + 2.    (5.24)

Then,

β(A) = [−1 −3; −3 −1]² + 2[−1 −3; −3 −1] + 2[1 0; 0 1]
     = [10 6; 6 10] + [−2 −6; −6 −2] + [2 0; 0 2] = [10 0; 0 10].    (5.25)
Now we are able to find K by applying Ackermann's formula as follows:

K = [0 1] C^{−1}(A, B) β(A) = [0 1] (−1/24)[−10 6; −1 3] [10 0; 0 10] = (5/12)[1 −3].    (5.26)

Exercise 5.1.11. Use Ackermann’s formula to reproduce the results found in Section 5.1.3.
Use also the command acker.
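For students who would like to see (5.19) in code, here is a hand-rolled sketch of Ackermann's formula in Python/NumPy — an illustration only, not MATLAB's acker, and it assumes a controllable SISO pair with complex poles given in conjugate pairs:

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula (5.19): K = [0 ... 0 1] C^{-1}(A, B) beta(A).
    Assumes [A, B] is a controllable SISO pair and that complex poles
    come in conjugate pairs, so that K is real."""
    n = A.shape[0]
    # Controllability matrix C(A, B) = [B, AB, ..., A^{n-1} B]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # beta(A) via Horner's rule on the desired characteristic polynomial
    coeffs = np.real(np.poly(poles))     # [1, beta_{n-1}, ..., beta_0]
    betaA = np.zeros_like(A, dtype=float)
    for c in coeffs:
        betaA = betaA @ A + c * np.eye(n)
    # Last row of C^{-1}: solve C^T x = e_n
    e_n = np.zeros(n)
    e_n[-1] = 1.0
    last_row = np.linalg.solve(C.T, e_n)
    return (last_row @ betaA).reshape(1, n)

# Data from the worked example (5.21)
A = np.array([[-1.0, -3.0], [-3.0, -1.0]])
B = np.array([[3.0], [1.0]])
K = acker(A, B, [-1 + 1j, -1 - 1j])
print(K)  # (5/12)[1, -3], matching (5.26)
```

The Horner loop builds β(A) without forming each power of A separately, which is slightly cheaper and more stable than summing βi Aⁱ term by term.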

5.1.7 Final discussion

Even though we have solved the design problem in a very nice fashion, the real problem remains:
what is a good location for the poles of the system? It will be the job of the control engineer
to understand the problem and design a correct location for the poles. If the poles are located in
such a way that the state reaches zero very fast, that is nice as long as the actuator is able to
cope with the “energy” that the control action will require to modify the state of the system.
A less demanding controller can be designed by choosing the pole locations in a slower region,
i.e. closer to the imaginary axis, but always in the LHP, evidently!

5.2 Observer design


Duality between control and observation (or estimation) was discussed at the end of
Chapter 4. Once again, we are going to exploit this duality here, and we will reproduce the
same notions as in the previous section. As some students may not be familiar with the concept
of an observer, we are going to start by explaining the notion and usefulness of observers.

5.2.1 Introduction to the concept of observer

As previously mentioned, the availability of all states in the feedback is a very strong assump-
tion. In most of the cases, some states of the plant will not be accessible. The target of an


[Figure 5.2: Observer design assuming that x(0) = x̂(0). The input u drives both the system ẋ = Ax + Bu, y = Cx, and the observer x̂˙ = Ax̂ + Bu.]

observer is to provide the information of what is happening inside the system by the use of a
virtual system, which will be referred to as an observer or estimator. This virtual system will
modify its state in such a way that, after some time, the states of the system and the state of
the observer match.
What virtual system are we going to use as an observer? Firstly, note that if both
systems have the same state at some instant, and if the dynamics and the inputs are
the same for both systems, then the states will match at any future instant. So it seems natural
to use an artificial system with the same dynamics.
Let us consider the strictly proper system

    ẋ(t) = Ax(t) + Bu(t),   x(0) = x₀;                                    (5.27)
    y(t) = Cx(t),                                                          (5.28)

and let the observer be given by the same dynamics

    x̂˙(t) = Ax̂(t) + Bu(t),   x̂(0) = x₀;                                   (5.29)

then it is trivial to see that x̂(t) = x(t) for all t ∈ R. The state x̂ is referred to as the estimate
of the state x. As a result, the use of the observer allows us to see what is happening inside
the system; one can think of the observer as a “virtual sensor” that is measuring x.
Evidently, our assumption of x(0) = x̂(0) is totally unrealistic. Hence, if it does not hold,
we need to inform the observer that something is wrong. As the only information about the
state x is given through the output y = Cx, the feedback to the observer must be given as the
difference between the real output of the system and the estimated output ŷ = C x̂. Hence,
the observer will need to modify its dynamics as a function of the difference y − ŷ, so the final
expression for the observer is

    x̂˙(t) = Ax̂(t) + Bu(t) + L(y − Cx̂),                                    (5.30)

where L ∈ Rn×ny . Then, let us define the error state as:

e(t) = x(t) − x̂(t); (5.31)


[Figure 5.3: Observer design where we inform the observer about a possible error between x and x̂. The observer x̂˙ = Ax̂ + Bu + L(y − ŷ) is driven by u and by the output error y − ŷ, where ŷ = Cx̂.]

and its dynamics equation is

    ė(t) = ẋ(t) − x̂˙(t) = Ax(t) + Bu(t) − (Ax̂ + Bu(t) + LCx − LCx̂) = (A − LC)e(t).   (5.32)

Therefore, by construction, the observer results in a “magical system” called the error system,
which is an autonomous dynamical system governed by the matrix A − LC. We say this system
is “magical” in the sense that it cannot be realised. It would require the information x that is
unavailable. Note the difference with the observer, which may only exist inside a computer or
PLC, but can be realised.
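The error dynamics (5.32) can be simulated directly, since e(t) = e^{(A−LC)t} e(0). The following Python sketch uses an illustrative second-order plant and an arbitrarily chosen stabilising gain L (assumed values, not taken from the notes):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative plant and observer gain (assumed values, chosen so A - LC is Hurwitz)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])

Aerr = A - L @ C                        # error dynamics: e' = (A - LC)e
e0 = np.array([1.0, -1.0])              # unknown initial mismatch x(0) - xhat(0)

# e(t) = expm((A - LC) t) e(0); the norm shrinks towards zero
for t in [0.0, 1.0, 5.0]:
    print(t, np.linalg.norm(expm(Aerr * t) @ e0))
```

Whatever the initial mismatch, the estimate x̂ converges to x at a rate set by the eigenvalues of A − LC.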
In the following subsection, we are going to follow the same flow as Section 5.1.

5.2.2 Observer design

Similar to the case of state-feedback design where we need to place the poles of A − BK in a
desired location, we can state a new design problem where we need to place the poles of the
error system

Problem 5.2.1 (Observer design). Given the strictly proper system

ẋ = Ax + Bu, (5.33)

y = Cx, (5.34)

and the observer system


x̂˙ = (A − LC)x̂ + Bu + Ly; (5.35)

find the value of L such that the poles of the error system

ė = (A − LC)e (5.36)

are placed at (α1 , α2 , . . . , αn ).


Once again, the observer design problem can be translated into a mathematical problem.
Although we have included the matrix B, the solution will be independent of B.

Problem 5.2.2. Given the pair [A, C] where A ∈ Rn×n and C ∈ Rm×n, find L ∈ Rn×m such
that the eigenvalues of A − LC are given by {α1, α2, . . . , αn}.

Remark 5.2.3. The same comments about the inclusion of complex conjugate poles hold here
as in the control design. Moreover, we have included B in Problem 5.2.1, but as our design
is just interested in A − LC, the solution will be independent of B. Since the observer design
is independent of B, students can understand now why we defined observability using the
autonomous system, i.e., B = 0.

The student will be able to find examples where the pair [A, C] does not allow us to fix the
poles of the error system at a desired location. Hence, the first step is to find conditions to
ensure the existence of a solution.

5.2.3 Existence of solutions

Result 5.2.4. Problem 5.2.1 has a solution for any {α1, α2, . . . , αn} if and only if the pair
[A, C] is observable.

The mathematical notion of observability that we have studied in Chapter 4 is the key
point to be able to solve the observer design problem. If the pair [A, C] contains a mode that
is unobservable, then the error system will contain this mode regardless of the selection of L.
As a result, the eigenvalues of A − LC will always contain the value of the unobservable mode.

Exercise 5.2.5. Design an unobservable pair [A, C] and then choose L randomly in MATLAB,
with adequate dimensions. Check that at least one eigenvalue of A − LC corresponds with an
eigenvalue of A.

We can place the modes that are observable but we must preserve the unobservable modes
in our design.

Result 5.2.6. Let the pair [A, C] be unobservable and let {λ1, λ2, . . . , λρ} be the eigenvalues
of the unobservable modes. Then, Problem 5.2.1 has a solution if and only if the desired set of
eigenvalues of A − LC is of the form {α1, α2, . . . , αn−ρ, λ1, λ2, . . . , λρ}.

Finally, one could wonder if we could solve Problem 5.2.1 but, instead of any location of the
poles, we are just interested in placing the poles in the LHP; i.e. given the pair [A, C], we want
to find L such that A − LC is Hurwitz. This will ensure that after some time, the state of the
error system will approach zero or, equivalently, the state x̂ will approach the state x, which is
the main target of the observer.
Now, Definition 4.4.1, the definition of detectability, seems natural. Observability allows us
the possibility of designing the error system with special dynamics, whereas detectability just
ensures that we will be able to place all poles of the error system in the LHP. The consequence
of the lack of observability is the lack of freedom in the location of the poles. This lack of
freedom becomes an issue when some poles of the error system stay in the RHP: then the state x̂
will never approach x, hence we cannot design a suitable observer.

5.2.4 Worked example

Let us consider the system

    d³y/dt³ + 5 d²y/dt² + 3 dy/dt + 2y = u,

and design an observer such that the poles of the error system are located at {−1, −1 + j, −1 − j}.
In this case, it should not be strange that the observer canonical form of a system is desired
to solve the observer design problem. The observer canonical form of the above transfer function
is

         [ 0  0  −2 ]     [ 1 ]
    ẋ =  [ 1  0  −3 ] x + [ 0 ] u                                          (5.37)
         [ 0  1  −5 ]     [ 0 ]

and the output is given by

    y = [ 0  0  1 ] x.                                                     (5.38)

Then, we need to design L such that the eigenvalues of A − LC are {−1, −1 + j, −1 − j}.
Firstly, let us analyse the location of the eigenvalues of

             [ 0  0  −2 − l1 ]
    A − LC = [ 1  0  −3 − l2 ];                                            (5.39)
             [ 0  1  −5 − l3 ]

Then, the characteristic equation is computed,

                          | λ    0   (2 + l1)     |
    det(λI − (A − LC)) =  | −1   λ   (3 + l2)     | = λ³ + (l3 + 5)λ² + (l2 + 3)λ + (l1 + 2),   (5.40)
                          | 0   −1   λ + (5 + l3) |


and compared with the desired polynomial

    (λ + 1)(λ + 1 + j)(λ + 1 − j) = λ³ + 3λ² + 4λ + 2.                     (5.41)

Then, l3 = −2, l2 = 1, and l1 = 0. In summary, the designed gain is

        [  0 ]
    L = [  1 ].                                                            (5.42)
        [ −2 ]

5.2.5 Solution of the problem

Until now, we have exploited the duality property just to follow the same steps. Now we are
going to use this duality to solve the problem. The following result is based on the fact that
a matrix and its transpose have the same eigenvalues.

Result 5.2.7. The eigenvalues of A − LC are the same as the eigenvalues of A> − C > L> .

Then, the problem of designing an observer for the pair [A, C] becomes the problem of design-
ing a controller K for the pair [A>, C>] and setting L = K>. Hence the same commands can
be used in MATLAB, L=acker(A’,C’, α)’ or L=place(A’,C’, α)’, where α is the vector of
desired pole locations.

Exercise 5.2.8. Use Result 5.2.7 to show that the observer version of Ackermann’s formula is
given by

                          [ 0 ]
                          [ 0 ]
    L = β(A) O⁻¹(A, C)    [ ⋮ ].                                           (5.43)
                          [ 0 ]
                          [ 1 ]

Exercise 5.2.9. Use Ackermann’s formula to reproduce the results in Section 5.2.4. Use also
the MATLAB command acker.

5.2.6 Final discussion

As in the previous discussion, the main problem remains: what is a good location of the poles
of the error system? It will depend on the confidence that we have in our model and on the noise
of the sensor. One could think that a slow observer design demands little confidence
in the model and output, whereas a fast design requires high fidelity in the model and a low
noise ratio. If we have a good model but a noisy output, some states could be estimated faster
than others. However, it is very difficult to quantify the above statement.

These questions and issues will be tackled in Optimal and Robust Control in the second
semester.

5.3 Output feedback design

The last section of this chapter combines the two elements that we have presented previously.
On the one hand, we will design an observer as a virtual sensor of all the states of the system.
On the other hand, we will use the estimated states to develop a control action, see Fig. 5.4.

[Figure 5.4: Output feedback controller. The control action u = −Kx̂ is computed from the observer state x̂, which is driven by u and by the output error y − ŷ. The closed-loop system between u and y contains two sets of states: the state of the plant and the state of the observer.]

5.3.1 Separation principle

The separation principle states that an output-feedback controller with the state-feedback poles
at the locations {αic} and the error-system poles at the locations {αie} can be obtained from
two independent designs.

On the one hand, design a state-feedback gain K such that the eigenvalues of A − BK are
placed at {αic} and, on the other hand, design an observer gain L such that the eigenvalues of


A − LC are at {αie }. Then, the system

ẋ = Ax + Bu, (5.44)

y = Cx, (5.45)

x̂˙ = Ax̂ + Bu + L(y − C x̂), (5.46)

u = −K x̂ (5.47)

will have the desired properties.


To this end, let us consider the state xcl = (x, e), where e = x − x̂. It follows that

    ẋ = Ax − BKx̂ = Ax − BK(x − e) = Ax − BKx + BKe = (A − BK)x + BKe,     (5.48)

and

    ė = ẋ − x̂˙ = Ax − BKx̂ − (Ax̂ − BKx̂ + L(Cx − Cx̂)) = Ae − LCe = (A − LC)e.   (5.49)

As a result, the closed-loop system behaves as the autonomous system

    d/dt [ x ] = [ A − BK    BK   ] [ x ].                                 (5.50)
         [ e ]   [   0     A − LC ] [ e ]
The result is obtained by applying a well-known result in matrix algebra:

Result 5.3.1. The set of eigenvalues of the block-triangular matrix

    [ X  Y ]
    [ 0  Z ]                                                               (5.51)

is the union of the set of eigenvalues of X and the set of eigenvalues of Z.

As a result, the eigenvalues of the matrix on the right-hand side of (5.50) are the union of the
eigenvalues of A − BK and A − LC. These problems are identical to Problems 5.1.1 and 5.2.1,
whose solutions have been presented in previous sections.
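Result 5.3.1 and the structure of (5.50) can be verified numerically; in this Python/NumPy sketch the plant and the gains K and L are illustrative values, not taken from the notes:

```python
import numpy as np

# Illustrative plant with arbitrary stabilising gains (assumed values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])
L = np.array([[3.0], [2.0]])

# Closed-loop matrix of (5.50): block upper-triangular in (x, e) coordinates
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

eig_cl = np.sort_complex(np.linalg.eigvals(Acl))
eig_union = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                            np.linalg.eigvals(A - L @ C)]))
print(np.allclose(eig_cl, eig_union))    # True: union of the two spectra
```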

5.3.2 Design considerations

The design specifications of the controller are straightforward: fast poles will reject disturbances
quickly, but the actuator will suffer to provide this control action. Evidently, the effort of the
actuator can be reduced by slowing the poles down. Pole placement is a very primitive method,
and more sophisticated methods will be considered in the second semester.
However, the observer design can be slightly more difficult. The control engineer will need
to understand the limitation of the bandwidth of the system. The design of the observer can
be as fast as we wish, since there is no physical actuation. Once we have understood how
fast the observer can be, the controller should be designed to be 5-10 times slower.


5.4 Learning outcomes


The learning outcomes of this chapter can be summarised as follows:

• State-feedback design provides a gain K such that the poles of A − BK are located at
some predetermined location of the complex plane.

• If the pair [A, B] is controllable, then we will be able to find a solution of the state-
feedback problem. If the pair [A, B] has uncontrollable modes, we will not be able to
modify these modes in A − BK regardless of the selection of the gain K.

• Ackermann’s formula provides a sophisticated manner to find the solution for SISO sys-
tems.

• The observer of a system is a “virtual sensor” that measures the state of the system. The
error between the state x and the estimated state x̂ behaves as an autonomous dynamical
system ė = (A − LC)e.

• The observer design provides a gain L such that the poles of A − LC are located at some
predetermined location of the complex plane.

• If the pair [A, C] is observable, then we will be able to find a solution of the observer
design problem.

• When the pair [A, C] is unobservable but all the unobservable modes are stable, then we
can still design an observer but we cannot “move” the unobservable modes.

• There exists a duality between state-feedback control and observer design: the matrices
A − XY and A> − Y > X > have the same eigenvalues.

• The output feedback design is the combination of the state-feedback and observer designs.

• The separation principle ensures that they can be independently designed, and we will
keep their original properties when combined.

• The designer must understand the limitations of actuators and sensors to produce a sound
design. Pole placement is a very rudimentary design technique but straightforward.

[Figure 5.5: Transformation Step 1. The output-feedback loop of Fig. 5.4 redrawn with the observer expanded into its integrator, A, B, C and L blocks.]

[Figure 5.6: Transformation Step 2. The feedback paths are regrouped into −BK and −LC blocks around the observer.]

[Figure 5.7: Transformation Step 3. The observer and the gain K collapse into a single controller block. As a result, C(s) = K(sI − (A − LC − BK))⁻¹L.]
Chapter 6

Realisation of MIMO transfer functions

Undergraduate Control Engineering textbooks introduce transfer functions from ODEs. However,
they restrict their attention to SISO systems. On the other hand, Advanced Control
Engineering books use transfer function matrices with no introduction. The following sections
provide an introduction to MIMO transfer functions.

6.1 MISO systems: transfer function column-vector

A general differential equation with k inputs and one output is very similar to the SISO version:

    φ({y^(i)}_{i=0}^{n}, {u1^(i)}_{i=0}^{m1}, {u2^(i)}_{i=0}^{m2}, . . . , {uk^(i)}_{i=0}^{mk}) = 0,   (6.1)

where mj < n for all 1 ≤ j ≤ k and z^(i) means d^i z/dt^i. Undoubtedly we have more freedom in
the forced component of the equation, but solving this differential equation is equivalent to
solving a SISO system once all the inputs have been decided.

If the system is linear, equation (6.1) becomes the ODE

    d^n y/dt^n + an−1 d^{n−1}y/dt^{n−1} + an−2 d^{n−2}y/dt^{n−2} + · · · + a1 dy/dt + a0 y =
        b1m1 d^{m1}u1/dt^{m1} + b1m1−1 d^{m1−1}u1/dt^{m1−1} + · · · + b11 du1/dt + b10 u1
      + b2m2 d^{m2}u2/dt^{m2} + b2m2−1 d^{m2−1}u2/dt^{m2−1} + · · · + b21 du2/dt + b20 u2 + · · ·
      + bkmk d^{mk}uk/dt^{mk} + bkmk−1 d^{mk−1}uk/dt^{mk−1} + · · · + bk1 duk/dt + bk0 uk.   (6.2)


Using the Laplace transform, then we obtain

sn Y + an−1 sn−1 Y + an−2 sn−2 Y + · · · + a2 s2 Y + a1 sY + a0 Y =

b1m1 sm1 U1 + b1m1 −1 sm1 −1 U1 + b1m1 −2 sm1 −2 U1 + · · · + b11 sU1 + b10 U1 +

b2m2 sm2 U2 + b2m2 −1 sm2 −1 U2 + b2m2 −2 sm2 −2 U2 + · · · + b21 sU2 + b20 U2 + . . .

bkmk smk Uk + bkmk −1 smk −1 Uk + bkmk −2 smk −2 Uk + · · · + bk1 sUk + bk0 Uk . (6.3)

Finally, we take common factors Y , U1 , U2 , . . . , and Uk , so it follows that

(sn + an−1 sn−1 + an−2 sn−2 + · · · + a2 s2 + a1 s + a0 )Y =

(b1m1 sm1 + b1m1 −1 sm1 −1 + b1m1 −2 sm1 −2 + · · · + b11 s + b10 )U1 +

(b2m2 sm2 + b2m2 −1 sm2 −1 + b2m2 −2 sm2 −2 + · · · + b21 s + b20 )U2 + . . .

(bkmk smk + bkmk −1 smk −1 + bkmk −2 smk −2 + · · · + bk1 s + bk0 )Uk . (6.4)

which provides the desired result


b1m1 sm1 + b1m1 −1 sm1 −1 + b1m1 −2 sm1 −2 + · · · + b11 s + b10
Y = U1 +
sn + an−1 sn−1 + an−2 sn−2 + · · · + a2 s2 + a1 s + a0
b2m2 sm2 + b2m2 −1 sm2 −1 + b2m2 −2 sm2 −2 + · · · + b21 s + b20
U2 + . . .
sn + an−1 sn−1 + an−2 sn−2 + · · · + a2 s2 + a1 s + a0
bkmk smk + bkmk −1 smk −1 + bkmk −2 smk −2 + · · · + bk1 s + bk0
Uk . (6.5)
sn + an−1 sn−1 + an−2 sn−2 + · · · + a2 s2 + a1 s + a0
or,

    Y(s) = [ num1(s)/den(s)   num2(s)/den(s)   · · ·   numk(s)/den(s) ] [ U1(s) ]
                                                                        [ U2(s) ]
                                                                        [   ⋮   ]     (6.6)
                                                                        [ Uk(s) ]
where we have included the dependence on s to highlight that it is a transfer function vector
and

num1 (s) = b1m1 sm1 + b1m1 −1 sm1 −1 + b1m1 −2 sm1 −2 + · · · + b11 s + b10 , (6.7)

num2 (s) = b2m2 sm2 + b2m2 −1 sm2 −1 + b2m2 −2 sm2 −2 + · · · + b21 s + b20 , (6.8)
.. ..
. .

numk (s) = bkmk smk + bkmk −1 smk −1 + bkmk −2 smk −2 + · · · + bk1 s + bk0 , (6.9)

den(s) = sn + an−1 sn−1 + an−2 sn−2 + · · · + a2 s2 + a1 s + a0 . (6.10)

On the other hand, if the differential equation (6.1) is nonlinear, then we need to apply the state-
space control techniques that we have learnt, where the trivial states x1 = y, x2 = dy/dt, . . . ,
xn = d^{n−1}y/dt^{n−1} may be the simplest way of obtaining the state-space representation.


6.1.1 Worked example

Consider the differential equation

    d³y/dt³ + 7 d²y/dt² + 14 dy/dt + 8y = d²u1/dt² + 14 du1/dt + 40u1 + 2 du2/dt + 4u2,   (6.11)

then we can apply the Laplace transform as follows

    (s³ + 7s² + 14s + 8)Y(s) = (s² + 14s + 40)U1(s) + (2s + 4)U2(s).       (6.12)

Rearranging the equation, it follows that

    Y(s) = [ (s² + 14s + 40)/(s³ + 7s² + 14s + 8)   (2s + 4)/(s³ + 7s² + 14s + 8) ] [ U1(s) ]   (6.13)
                                                                                    [ U2(s) ]

and simplifying yields

    Y(s) = [ (s + 10)/(s² + 3s + 2)   2/(s² + 5s + 4) ] [ U1(s) ].         (6.14)
                                                        [ U2(s) ]
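The pole-zero cancellations behind (6.14) can be confirmed numerically; this Python/NumPy sketch evaluates both forms of each entry at arbitrary test points:

```python
import numpy as np

den3 = [1, 7, 14, 8]                     # s^3 + 7s^2 + 14s + 8 = (s+1)(s+2)(s+4)

for s in [1.0 + 0.5j, -0.3 + 2.0j]:      # arbitrary test points
    g11_full = np.polyval([1, 14, 40], s) / np.polyval(den3, s)
    g11_red  = np.polyval([1, 10], s) / np.polyval([1, 3, 2], s)
    g12_full = np.polyval([2, 4], s) / np.polyval(den3, s)
    g12_red  = 2 / np.polyval([1, 5, 4], s)
    assert np.isclose(g11_full, g11_red) and np.isclose(g12_full, g12_red)

print("simplification verified")
```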

6.2 MIMO systems: transfer function matrix


Once we know how to tackle a system with several inputs and one output, we can think of
the most general case, where we have several outputs and several inputs. We will need as
many equations as outputs. A natural assumption is that the dynamics of every output can be
obtained independently, i.e.

    φ({y1^(i)}_{i=0}^{n1}, {u1^(i)}_{i=0}^{m1}, {u2^(i)}_{i=0}^{m2}, . . . , {uk^(i)}_{i=0}^{mk}) = 0,   (6.15)
    φ({y2^(i)}_{i=0}^{n2}, {u1^(i)}_{i=0}^{m1}, {u2^(i)}_{i=0}^{m2}, . . . , {uk^(i)}_{i=0}^{mk}) = 0,   (6.16)
        ⋮                                                                                                (6.17)
    φ({yl^(i)}_{i=0}^{nl}, {u1^(i)}_{i=0}^{m1}, {u2^(i)}_{i=0}^{m2}, . . . , {uk^(i)}_{i=0}^{mk}) = 0.   (6.18)

Then, we have again two cases. If the system is linear, we will be able to write every equation as
a transfer function vector, and combine all these vectors in a matrix as follows

    [ Y1(s) ]   [ num1,1(s)/den1(s)   num1,2(s)/den1(s)   · · ·   num1,k(s)/den1(s) ] [ U1(s) ]
    [ Y2(s) ]   [ num2,1(s)/den2(s)   num2,2(s)/den2(s)   · · ·   num2,k(s)/den2(s) ] [ U2(s) ]
    [   ⋮   ] = [        ⋮                    ⋮             ⋱              ⋮        ] [   ⋮   ]   (6.19)
    [ Yl(s) ]   [ numl,1(s)/denl(s)   numl,2(s)/denl(s)   · · ·   numl,k(s)/denl(s) ] [ Uk(s) ]


On the other hand, if these equations are nonlinear, once again the state-space representation
of the system could be obtained by using the states x1 = y1, x2 = dy1/dt, . . . , xn1 = d^{n1−1}y1/dt^{n1−1},
xn1+1 = y2, xn1+2 = dy2/dt, . . . , xn1+n2 = d^{n2−1}y2/dt^{n2−1}, and so on.
One could think of the most general case where all the outputs are in all the equations.
Then the solution of this system is a simultaneous differential equation. In the linear case,
extra conditions will be required to obtain a solution in a similar way to standard simultaneous
equations. In the nonlinear case, we can either perform a linearization and recover the previous
case or develop a state-space representation as discussed previously. However, it may not be
straightforward.

Exercise 6.2.1. Obtain the transfer function associated with the quadruple tank process using
the linearisation in Exercise 3.1.7 as

G(s) = C(sI − A)−1 B + D. (6.20)

Write the two differential equations associated with this transfer function.

6.3 Rosenbrock system matrix

The state-space methods are usually motivated by the study of MIMO systems, in contrast with
transfer function methods. The Control Systems Centre at The University of Manchester was
internationally recognised for the development of frequency-domain methods for MIMO systems,
such as the inverse Nyquist array design technique. Howard H. Rosenbrock and Alistair G. J.
MacFarlane were the leading researchers in this development.
The Rosenbrock system matrix of a state-space representation is defined as follows:

           [ sI − A   B ]
    P(s) = [            ].                                                 (6.21)
           [  −C      D ]

It provides a link between both representations, state-space and transfer function. The transfer
function between the input i and the output j is given by

                 | sI − A   bi  |
    gij(s) = det |              | / det(sI − A),                           (6.22)
                 |  −cj     dij |

where bi is column i of B, cj is row j of C, and dij is the corresponding entry of D.
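The determinant formula (6.22) can be checked against C(sI − A)⁻¹B + D; in this Python/NumPy sketch the SISO system is an illustrative example, not one from the notes:

```python
import numpy as np

# Illustrative SISO system (assumed values): G(s) = 1/(s^2 + 3s + 2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 0.0

s = 1.0 + 0.0j

# Rosenbrock ratio: det([sI - A, b; -c, d]) / det(sI - A)
P = np.block([[s * np.eye(2) - A, b], [-c, np.array([[d]])]])
g_rosenbrock = np.linalg.det(P) / np.linalg.det(s * np.eye(2) - A)

g_direct = (c @ np.linalg.solve(s * np.eye(2) - A, b))[0, 0] + d
assert np.isclose(g_rosenbrock, g_direct)
print(g_rosenbrock)                      # 1/6 at s = 1
```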


6.4 Trivial realisation of a MIMO system

A realisation of a MIMO system can be found trivially by using a realisation of each element.
For example, let us consider a 2-by-2 system as follows

           [ G1(s)  G2(s) ]
    G(s) = [              ]                                                (6.23)
           [ G3(s)  G4(s) ]

where Gi(s) = Ci(sI − Ai)⁻¹Bi + Di, for i = 1, 2, 3, 4. Then a state-space realisation of the
system is given by

        [ A1  0   0   0  ]        [ B1  0  ]
    A = [ 0   A2  0   0  ]    B = [ 0   B2 ]
        [ 0   0   A3  0  ]        [ B3  0  ]
        [ 0   0   0   A4 ]        [ 0   B4 ]
                                                                           (6.24)
    C = [ C1  C2  0   0  ]    D = [ D1  D2 ]
        [ 0   0   C3  C4 ]        [ D3  D4 ]

Exercise 6.4.1. Extend the above expression to a 3-by-3 transfer function matrix.

Worked example: Realisation of a MIMO system

Consider the differential equation given in (6.11); we have shown that the relationship between
inputs and output in the Laplace domain is given by

    G(s) = [ (s + 10)/(s² + 3s + 2)   2/(s² + 5s + 4) ].                   (6.25)

Then a state-space realisation of the above plant can be found as follows:

Element (1, 1) The transfer function is

    G11(s) = (s + 10)/(s² + 3s + 2).                                       (6.26)

We can, for instance, use the controller canonical form to represent this transfer function:

    [ ẋ1 ]   [  0   1 ] [ x1 ]   [ 0 ]
    [ ẋ2 ] = [ −2  −3 ] [ x2 ] + [ 1 ] u1;                                 (6.27)

    y = [ 10  1 ] [ x1 ].                                                  (6.28)
                  [ x2 ]

103
State-Space Control Realisation of MIMO transfer functions

Element (1, 2) The transfer function is

    G12(s) = 2/(s² + 5s + 4).                                              (6.29)

Again, we can use the controller canonical form to represent this transfer function:

    [ ẋ3 ]   [  0   1 ] [ x3 ]   [ 0 ]
    [ ẋ4 ] = [ −4  −5 ] [ x4 ] + [ 1 ] u2;                                 (6.30)

    y = [ 2  0 ] [ x3 ].                                                   (6.31)
                 [ x4 ]

With these two realisations we can conclude that a state-space realisation of (6.25) is
given by

    [ ẋ1 ]   [  0   1   0   0 ] [ x1 ]   [ 0  0 ]
    [ ẋ2 ]   [ −2  −3   0   0 ] [ x2 ]   [ 1  0 ] [ u1 ]
    [ ẋ3 ] = [  0   0   0   1 ] [ x3 ] + [ 0  0 ] [ u2 ];                  (6.32)
    [ ẋ4 ]   [  0   0  −4  −5 ] [ x4 ]   [ 0  1 ]

    y = [ 10  1  2  0 ] x.                                                 (6.33)
Exercise 6.4.2. Show in MATLAB that the state-space representation of this system corre-
sponds with the transfer function matrix (6.25).
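One possible solution of the exercise, sketched in Python/NumPy instead of MATLAB: evaluate C(sI − A)⁻¹B + D at a few test points and compare with the entries of (6.25):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-2.0, -3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -4.0, -5.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[10.0, 1.0, 2.0, 0.0]])
D = np.zeros((1, 2))

def tf_at(s):
    """Evaluate the 1-by-2 transfer function matrix C(sI - A)^{-1}B + D at s."""
    return C @ np.linalg.solve(s * np.eye(4) - A, B) + D

for s in [1.0 + 1j, 2.0 - 0.5j]:
    expected = np.array([[(s + 10) / (s**2 + 3*s + 2), 2 / (s**2 + 5*s + 4)]])
    assert np.allclose(tf_at(s), expected)

print("realisation matches (6.25)")
```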

6.5 Minimal realisation

6.5.1 Definition

When we find a state-space realisation of a SISO transfer function using the canonical forms, it
seems intuitive that all states are controllable and observable if there is no pole-zero cancellation.
This kind of realisation is called a minimal realisation.

Definition 6.5.1. The state-space representation (A, B, C, D) is said to be a minimal realisation
if and only if the pair [A, B] is controllable and the pair [A, C] is observable.

Result 6.5.2. All minimal realisations of a given transfer function are similar to each other.

Loosely speaking, if we have two minimal realisations of the same system, then there exists
a transformation matrix T such that one transforms into the other.


6.5.2 SISO

For SISO systems, the minimality can be easily associated with pole-zero cancellations.

Result 6.5.3. Any canonical form of a SISO transfer function with no pole-zero cancellation
is minimal.

Proof. Let us show that the controller canonical form is minimal. Given a general transfer
function

    G(s) = Y(s)/U(s) = (bn−1 s^{n−1} + bn−2 s^{n−2} + · · · + b1 s + b0)/(s^n + an−1 s^{n−1} + · · · + a1 s + a0) = n(s)/d(s),   (6.34)

where bi ≠ 0 for at least one 0 ≤ i < n. Then its controller canonical form is given by

        [  0     1     0     0   · · ·    0    ]        [ 0 ]
        [  0     0     1     0   · · ·    0    ]        [ 0 ]
        [  0     0     0     1   · · ·    0    ]        [ 0 ]
    A = [  ⋮     ⋮     ⋮     ⋮    ⋱       ⋮    ],   B = [ ⋮ ],             (6.35)
        [  0     0     0     0   · · ·    1    ]        [ 0 ]
        [ −a0   −a1   −a2   −a3  · · ·  −an−1  ]        [ 1 ]

    C = [ b0  b1  b2  · · ·  bn−2  bn−1 ].                                 (6.36)

The controllability of this form is trivial; however, its observability is not straightforward. Let
us denote by α a root of the numerator polynomial, i.e.

    n(α) = bn−1 α^{n−1} + · · · + b1 α + b0 = 0.                           (6.37)

We can rewrite this equation as follows

                                       [ 1       ]       [ 1       ]
                                       [ α       ]       [ α       ]
    [ b0  b1  b2  · · ·  bn−2  bn−1 ]  [ α²      ]  =  C [ α²      ] = 0.  (6.38)
                                       [ ⋮       ]       [ ⋮       ]
                                       [ α^{n−1} ]       [ α^{n−1} ]

On the other hand, let us denote by β a root of the denominator polynomial d(s), which
corresponds with the characteristic polynomial associated with the matrix A; hence β is also an
eigenvalue of A. Further, the eigenvectors of the matrix A have a very particular structure:
    
    [  0     1     0     0   · · ·    0    ] [ 1       ]     [ 1       ]
    [  0     0     1     0   · · ·    0    ] [ β       ]     [ β       ]
    [  0     0     0     1   · · ·    0    ] [ β²      ]     [ β²      ]
    [  ⋮     ⋮     ⋮     ⋮    ⋱       ⋮    ] [ ⋮       ] = β [ ⋮       ];  (6.39)
    [  0     0     0     0   · · ·    1    ] [ β^{n−2} ]     [ β^{n−2} ]
    [ −a0   −a1   −a2   −a3  · · ·  −an−1  ] [ β^{n−1} ]     [ β^{n−1} ]

By applying Result 4.3.8, the system is unobservable if and only if there exist a root α of n(s)
and a root β of d(s) such that α = β. Therefore, the controller canonical form is observable
if and only if there is no pole-zero cancellation.

6.5.3 MIMO

However, when we find the realisation of a MIMO transfer function matrix, the complexity
increases. If we realise each element of the transfer function matrix separately, then it is possible to
find that some states are uncontrollable or unobservable. In MIMO systems, the dynamics of an
output can be different for every input, but they share the same equation; hence cancellation due to
shared common states is very common, as we have shown in Section 6.1.1. When a MIMO
transfer function matrix is realised without considering this possibility of sharing states,
nonminimal realisations are obtained. Let us consider an example.

Worked example: Lack of minimality

Let us consider the realisation from the previous section

    [ ẋ1 ]   [  0   1   0   0 ] [ x1 ]   [ 0  0 ]
    [ ẋ2 ]   [ −2  −3   0   0 ] [ x2 ]   [ 1  0 ] [ u1 ]
    [ ẋ3 ] = [  0   0   0   1 ] [ x3 ] + [ 0  0 ] [ u2 ];                  (6.40)
    [ ẋ4 ]   [  0   0  −4  −5 ] [ x4 ]   [ 0  1 ]

    y = [ 10  1  2  0 ] x.                                                 (6.41)

As there is no pole-zero cancellation in the transfer function, one could assume that this is a
minimal realisation, but it is easy to show that it is not.


The controllability matrix is

    C(A, B) = [ 0  0   1   0  −3   0    7    0  ]
              [ 1  0  −3   0   7   0  −15    0  ]
              [ 0  0   0   1   0  −5    0   21  ]
              [ 0  1   0  −5   0  21    0  −85  ].                         (6.42)

It is easy to check that this matrix has rank 4, i.e. it is full rank. As a result, the pair [A, B] is
controllable.
On the other hand, the observability matrix is

    O(A, C) = [  10    1    2    0 ]
              [  −2    7    0    2 ]
              [ −14  −23   −8  −10 ]
              [  46   55   40   42 ].                                      (6.43)

The student can check that det(O(A, C)) = 0; hence there is one state that cannot be observed.
As a result, the developed (A, B, C, D) representation of (6.25) using minimal representations
of each component is not a minimal realisation.
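The rank claims can be confirmed numerically with the following Python/NumPy sketch:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-2.0, -3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -4.0, -5.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
C = np.array([[10.0, 1.0, 2.0, 0.0]])

# Controllability matrix [B, AB, A^2 B, A^3 B] and observability matrix
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(4)])

print(np.linalg.matrix_rank(ctrb))       # 4: controllable
print(np.linalg.matrix_rank(obsv))       # 3: one unobservable state
```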

Exercise 6.5.4. Use the command minreal to find the transformation T to obtain a system
with the structure given by Kalman’s decomposition.

6.6 Gilbert’s realisation

6.6.1 Procedure

With this example, we can deduce an interesting property of MIMO systems. If we construct
the realisation of a system by using controller canonical forms or observer canonical
forms of the individual elements, the minimality of the realisation is no longer necessarily
preserved. Therefore, it is natural to ask whether we can realise the MIMO transfer function
in such a way that we obtain a minimal realisation.
The answer to this question is given by Gilbert’s realisation. It provides a procedure to
obtain a minimal representation. The first step is to write the transfer function (l-by-k)-matrix
G(s) as follows:

    G(s) = D + W(s)/p(s),                                                  (6.44)

where p(s) is a scalar polynomial and W(s) is a matrix whose elements are polynomials. Let
us denote by λi, for i = 1, . . . , n, the roots of p(s) and, for simplicity’s sake, assume that they are


real and distinct. The method can be generalised to more general cases. Once we have found
all the poles, we decompose the fraction (6.44) into partial fractions as follows

               n
    G(s) = D + Σ  Wi/(s − λi).                                             (6.45)
              i=1

Note that Wi is a constant matrix. Let us denote the rank of Wi by ρi; then it is possible to
write these matrices as

    Wi = Ci Bi,                                                            (6.46)

where Bi ∈ R^{ρi×k} and Ci ∈ R^{l×ρi}.


Then, Gilbert’s realisation of the transfer function G(s) is given by

        [ λ1 Iρ1     0     · · ·     0    ]        [ B1 ]
    A = [   0     λ2 Iρ2   · · ·     0    ]    B = [ B2 ]
        [   ⋮        ⋮       ⋱       ⋮    ]        [ ⋮  ]
        [   0        0     · · ·  λn Iρn  ]        [ Bn ]
                                                                           (6.47)
    C = [ C1  C2  · · ·  Cn ]    D

6.6.2 Worked example

Once again, let us consider the transfer function

    G(s) = [ (s + 10)/(s² + 3s + 2)   2/(s² + 5s + 4) ].                   (6.48)

The first step is to decompose each transfer function into partial fractions. After some algebra,
it follows that

    (s + 10)/(s² + 3s + 2) = −8/(s + 2) + 9/(s + 1),                       (6.49)
    2/(s² + 5s + 4) = (−2/3)/(s + 4) + (2/3)/(s + 1).                      (6.50)

Then, transfer function (6.48) can be written as

    G(s) = [ −8/(s + 2) + 9/(s + 1) + 0/(s + 4)    (−2/3)/(s + 4) + (2/3)/(s + 1) + 0/(s + 2) ],   (6.51)

where all poles are introduced in each transfer function. Now, we can rewrite the transfer
function matrix as in (6.45) as follows

    G(s) = [ −8  0 ]/(s + 2) + [ 0  −2/3 ]/(s + 4) + [ 9  2/3 ]/(s + 1),   (6.52)


where W1 = [ −8  0 ], W2 = [ 0  −2/3 ], and W3 = [ 9  2/3 ]. Then it is trivial that the rank of
each matrix is one. Finally, we need to decompose these matrices as in (6.46), where Bi ∈ R^{1×2}
and Ci ∈ R^{1×1} for all i = 1, 2, 3. A trivial but correct solution is to take Bi = Wi
and Ci = 1, for all i = 1, 2, 3.
Finally, Gilbert’s realisation of the system is given by

        [ −2   0   0 ]        [ −8    0  ]
    A = [  0  −4   0 ]    B = [  0  −2/3 ]
        [  0   0  −1 ]        [  9   2/3 ]
                                                                           (6.53)
    C = [ 1  1  1 ]    D = [ 0  0 ]
Exercise 6.6.1. Show in MATLAB that this state-space representation corresponds with the
transfer function (6.48).
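For readers without MATLAB, a sketch of the same check in Python with NumPy follows: it evaluates C(sI − A)⁻¹B + D for the realisation (6.53) and compares it with (6.48) at a handful of complex frequencies (the helper names `realisation` and `G` are illustrative):

```python
import numpy as np

# State-space matrices of the Gilbert realisation (6.53)
A = np.diag([-2.0, -4.0, -1.0])
B = np.array([[-8.0, 0.0],
              [0.0, -2.0/3.0],
              [9.0, 2.0/3.0]])
C = np.array([[1.0, 1.0, 1.0]])
D = np.array([[0.0, 0.0]])

def realisation(s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(3) - A, B) + D

def G(s):
    """The original transfer function (6.48), evaluated entrywise."""
    return np.array([[(s + 10)/(s**2 + 3*s + 2), 2/(s**2 + 5*s + 4)]])

for s in [1.0 + 0j, 2.0 + 1j, -0.5 + 3j]:
    assert np.allclose(realisation(s), G(s))
print("Gilbert realisation (6.53) matches G(s)")
```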

Exercise 6.6.2. Show that the representation (6.53) is minimal.

6.7 Some operations with systems


The last part of the unit summarises some operations with state-space realisations. Let us
denote
[A, B, C, D] := C(sI − A)−1 B + D; (6.54)

then the following operations with systems can be defined:

Transformation  Given a nonsingular matrix V = T⁻¹, a state transformation produces a
different realisation of the same system, i.e. one with the same transfer function:

    [A, B, C, D] = [T⁻¹AT, T⁻¹B, CT, D].                                      (6.55)
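The invariance claimed in (6.55) can be illustrated numerically. The sketch below, in Python with NumPy, uses an arbitrary example system and transformation T (not from the notes) and compares the two transfer functions at a few frequencies:

```python
import numpy as np

# Similarity transformations change the state-space matrices but not the
# transfer function, as stated in (6.55).
def tf(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(A.shape[0]) - A, B) + D

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

T = np.array([[1.0, 1.0], [0.0, 1.0]])   # any nonsingular T will do
Tinv = np.linalg.inv(T)
At, Bt, Ct = Tinv @ A @ T, Tinv @ B, C @ T

for s in [1j, 2.0 + 0.5j, -1.0 + 4j]:
    assert np.allclose(tf(A, B, C, D, s), tf(At, Bt, Ct, D, s))
print("transformed realisation has the same transfer function")
```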

Figure 6.1: Addition of two systems (the input u drives both [A1 , B1 , C1 , D1 ] and
[A2 , B2 , C2 , D2 ] in parallel, and their outputs are summed to give y).

Addition  The addition of two systems is given by

    [A1 , B1 , C1 , D1 ] + [A2 , B2 , C2 , D2 ]
        = [[A1  0; 0  A2 ], [B1 ; B2 ], [C1  C2 ], D1 + D2 ]                  (6.56)
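The addition formula can be checked numerically on two small example systems (chosen arbitrarily for illustration); a Python/NumPy sketch:

```python
import numpy as np

# Check of the addition formula (6.56): the block-diagonal realisation must
# match the sum of the two transfer functions at every frequency.
def tf(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(A.shape[0]) - A, B) + D

A1, B1, C1, D1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]])
A2, B2, C2, D2 = np.array([[-3.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])

# Sum realisation built as in (6.56)
A = np.block([[A1, np.zeros((1, 1))], [np.zeros((1, 1)), A2]])
B = np.vstack([B1, B2])
C = np.hstack([C1, C2])
D = D1 + D2

for s in [0.0 + 1j, 1.0 + 0j, -2.0 + 2j]:
    assert np.allclose(tf(A, B, C, D, s),
                       tf(A1, B1, C1, D1, s) + tf(A2, B2, C2, D2, s))
print("addition formula (6.56) verified")
```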


Product  The product of two systems is defined as

    [A1 , B1 , C1 , D1 ] × [A2 , B2 , C2 , D2 ]
        = [[A1  B1 C2 ; 0  A2 ], [B1 D2 ; B2 ], [C1  D1 C2 ], D1 D2 ]         (6.57)

or

    [A1 , B1 , C1 , D1 ] × [A2 , B2 , C2 , D2 ]
        = [[A2  0; B1 C2  A1 ], [B2 ; B1 D2 ], [D1 C2  C1 ], D1 D2 ]          (6.58)

Figure 6.2: Product of two systems (u enters [A2 , B2 , C2 , D2 ]; its output
y2 = C2 x2 + D2 u is the input u1 of [A1 , B1 , C1 , D1 ], which produces y).

The dynamics of the state x2 is straightforward

ẋ2 = A2 x2 + B2 u. (6.59)

But the dynamics of the state x1 requires some treatment

ẋ1 = A1 x1 + B1 u1 = A1 x1 + B1 (C2 x2 + D2 u) = A1 x1 + B1 C2 x2 + B1 D2 u (6.60)

If we consider the total state x = (x1 , x2 ), both equations lead to

    d/dt [x1 ; x2 ] = [A1  B1 C2 ; 0  A2 ] [x1 ; x2 ] + [B1 D2 ; B2 ] u.      (6.61)
Finally, the output is given by

y = C1 x1 + D1 u1 = C1 x1 + D1 (C2 x2 + D2 u) = C1 x1 + D1 C2 x2 + D1 D2 u (6.62)

or

    y = [C1  D1 C2 ] [x1 ; x2 ] + D1 D2 u.                                    (6.63)
As a result, we have found the four matrices given in (6.57). The student can find the
matrices in (6.58) by choosing x = (x2 , x1 ).

As the ordering in the product does not match the left-to-right order of the block diagram
(see Fig. 6.2), the H∞ literature often reverses the direction of the arrows (see Fig. 6.3).
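The derivation above can be spot-checked numerically. The Python/NumPy sketch below builds the composite realisation (6.57) from two arbitrary example systems and compares it with the frequency-wise product G1(s)G2(s):

```python
import numpy as np

# Check of the product formula (6.57) on two small example systems.
def tf(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(A.shape[0]) - A, B) + D

A1, B1, C1, D1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.5]])
A2, B2, C2, D2 = np.array([[-2.0]]), np.array([[2.0]]), np.array([[1.0]]), np.array([[1.0]])

# Composite realisation as in (6.57), with total state x = (x1, x2)
A = np.block([[A1, B1 @ C2], [np.zeros((1, 1)), A2]])
B = np.vstack([B1 @ D2, B2])
C = np.hstack([C1, D1 @ C2])
D = D1 @ D2

for s in [1j, 1.0 + 1j, -0.5 + 2j]:
    assert np.allclose(tf(A, B, C, D, s),
                       tf(A1, B1, C1, D1, s) @ tf(A2, B2, C2, D2, s))
print("product formula (6.57) verified")
```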

Inverse If D is a square nonsingular matrix, then

([A, B, C, D])−1 = [A − BD−1 C, −BD−1 , D−1 C, D−1 ]. (6.64)


Figure 6.3: A reason for changing the direction of the arrows: with the signal flowing right
to left, u enters [A2 , B2 , C2 , D2 ], its output y2 = C2 x2 + D2 u becomes u1 , and
[A1 , B1 , C1 , D1 ] delivers y, so the blocks appear in the same order as in the product.

Figure 6.4: Inverse system (the output Cx + Du of [A, B, C, D] drives ([A, B, C, D])⁻¹,
which recovers u).

6.7.1 Worked example

Consider the product of a system with its inverse, as shown in Fig. 6.4. Let us show that
this product is equal to the identity system, with the inverse defined as in (6.64). Using
(6.61) and the systems in Fig. 6.4, i.e.

[A1 , B1 , C1 , D1 ] = [A − BD−1 C, −BD−1 , D−1 C, D−1 ], (6.65)

[A2 , B2 , C2 , D2 ] = [A, B, C, D]; (6.66)

it follows that
      
    d/dt [x1 ; x2 ] = [A − BD⁻¹C  −BD⁻¹C; 0  A] [x1 ; x2 ] + [−BD⁻¹D; B] u
                    = [A − BD⁻¹C  −BD⁻¹C; 0  A] [x1 ; x2 ] + [−B; B] u        (6.67)

and

    y = [D⁻¹C  D⁻¹C] [x1 ; x2 ] + D⁻¹Du = [D⁻¹C  D⁻¹C] [x1 ; x2 ] + Iu,       (6.68)

where it is easy to see that the matrices have a very particular structure: the input drives the
two states in exactly opposite directions, whereas the output depends on both states in the
same way. Let us transform the system into the new coordinates z1 = x1 + x2 and
z2 = x2 ; then it follows that

    ż1 = ẋ1 + ẋ2 = (A − BD⁻¹C)x1 − BD⁻¹Cx2 − Bu + Ax2 + Bu
       = (A − BD⁻¹C)(x1 + x2 ) = (A − BD⁻¹C)z1 ,                              (6.69)

while ż2 = ẋ2 = Az2 + Bu, and

    y = D⁻¹C(x1 + x2 ) + Iu = D⁻¹Cz1 + Iu.                                    (6.70)


As a result, we obtain the diagonal form of the system, which corresponds to its Kalman
decomposition, i.e.

    d/dt [z1 ; z2 ] = [A − BD⁻¹C  0; 0  A] [z1 ; z2 ] + [0; B] u              (6.71)

and

    y = [D⁻¹C  0] [z1 ; z2 ] + Iu,                                            (6.72)
where the state z1 is observable but uncontrollable and z2 is controllable but unobservable.
Therefore, from an input-output point of view, both states can be eliminated and only the
identity feedthrough I remains, i.e.

    ([A, B, C, D])⁻¹ × [A, B, C, D] = I.                                      (6.73)
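The identity (6.73) can also be confirmed numerically. The sketch below, in Python with NumPy, builds the inverse realisation as in (6.64) for an arbitrary example system with nonsingular D and checks that the frequency-wise product is the identity:

```python
import numpy as np

# Product of a system with its inverse, as in the worked example: the
# transfer function of the cascade should be the identity, (6.73).
def tf(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(A.shape[0]) - A, B) + D

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[2.0]])        # nonsingular feedthrough, as (6.64) requires
Dinv = np.linalg.inv(D)

# Inverse realisation as in (6.64)
Ai, Bi, Ci, Di = A - B @ Dinv @ C, -B @ Dinv, Dinv @ C, Dinv

for s in [1j, 1.0 + 2j, -0.5 + 1j]:
    assert np.allclose(tf(Ai, Bi, Ci, Di, s) @ tf(A, B, C, D, s), np.eye(1))
print("inverse times system is the identity, as in (6.73)")
```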

Exercise 6.7.1. Derive the state-space representation of the unity-feedback interconnection
of the system [A, B, C, D].

Exercise 6.7.2. Derive the state-space representation of the feedback interconnection between
the systems [A1 , B1 , C1 , D1 ] and [A2 , B2 , C2 , D2 ].


6.8 Learning outcomes


• A state-space realisation is minimal if and only if the pair [A, B] is controllable and the
pair [A, C] is observable.

• All minimal representations of a system are equivalent; hence they have the same number
of states.

• For multivariable systems, minimality of each element does not ensure minimality
of the whole transfer function.

• The minimal realisation of a system can be obtained using Gilbert’s realisation.

• Obtain a minimal representation using Gilbert’s realisation.

• Inverses, additions and products can be defined in the state-space representation.
