
ERASMUS + KA 203 PROJECT

Project Code:
2018-1TR01-KA203-059632

Project Name:
DEVELOPING NOVEL UAV EDUCATION SET AND TRAINING CURRICULUM IN ORDER TO
CATCH STATE OF THE ART TECHNOLOGY

INTELLECTUAL OUTPUTS OF IO1 & IO2

PREPARED BY PARTNER ISTANBUL TECHNICAL UNIVERSITY

CHAPTER 7:

OVERVIEW OF UAV SYSTEMS MODELLING

“Funded by the Erasmus+ Program of the European Union. However, the European Commission and the Turkish National Agency cannot be held responsible for any use which may be made of the information contained therein.”
CHAPTER 1

1. Overview of UAV System Modelling


1.1. Overview
Even the term UAV (Unmanned Aerial Vehicle) is controversial, and it has often been replaced by UAS (Unmanned Aircraft System). The differing requirements and concepts of civilian and military systems are reflected in these terms, which also carry regulatory and legal significance. UAS is the term preferred by the Federal Aviation Administration (FAA) instead of UAV. Remotely Piloted Vehicle (RPV) was the term used in the Vietnam War. Today, RPV or Remotely Piloted Aircraft (RPA) implies a nonautonomous aircraft (one without a self-control algorithm) that is remotely controlled by a pilot, while the United Kingdom prefers the term Remotely Piloted Air System (RPAS) to emphasize the presence of the pilot in the loop controlling the aircraft.
An unmanned aerial vehicle (also known as a drone) refers to a pilotless aircraft, a flying
machine without an onboard human pilot or passengers. As such, “unmanned” implies total
absence of a human who actively directs the aircraft. Control functions for unmanned aircraft
may be either onboard (self-control) or off-board (remote control). Various definitions of the
term UAV have been proposed, such as “A reusable aircraft designed to operate without an
onboard pilot. It does not carry passengers and can be either remotely piloted or
preprogrammed to fly autonomously”, given in 2007 by the Joint Capability Group on
Unmanned Aerial Vehicles. More recently, the Department of Defense (DoD), the FAA, and the
European Aviation Safety Agency (EASA) adopted the term UAS, which covers the ground
control stations, communication links, and launch and retrieval systems in addition to the
aircraft itself, reflecting the absence of an onboard pilot. The FAA has defined an Unmanned Aircraft as
“A device used or intended to be used for flight in the air that has no onboard pilot. This
includes all classes of airplanes, helicopters, airships, and translational lift aircraft that have
no onboard pilot. Unmanned aircraft are understood to include only those aircraft
controllable in three axes and therefore, exclude traditional balloons.” in 2008. The definition
of an unmanned vehicle given in the Unmanned Systems Roadmap 2007–2032, published in 2007
by the U.S. Department of Defense, Office of the Secretary of Defense, is “A powered vehicle
that does not carry a human operator, can be operated autonomously or remotely, can be
expendable or recoverable, and can carry a lethal or nonlethal payload. Ballistic or
semi-ballistic vehicles, cruise missiles, artillery projectiles, torpedoes, mines, satellites,
and unattended sensors (with no form of propulsion) are not considered unmanned vehicles.
Unmanned vehicles are the primary component of unmanned systems.” Similarly, EASA
(European Aviation Safety Agency) defined UAS in 2009 as “An Unmanned Aircraft System (UAS)
comprises individual system elements consisting of an unmanned aircraft, the control station
and any other system elements necessary to enable flight, i.e. command and control link and
launch and recovery elements. There may be multiple control stations, command & control
links and launch and recovery elements within a UAS.”
A UAV is controlled either manually, using piloting skills, or autonomously via an autopilot and
a preprogrammed navigation system that maintains the direction, altitude, and ground track
automatically. Manual control usually means controlling the position of the UAV by manually
adjusting the direction, altitude, speed, etc. through switches, a joystick, or some kind of
pointing device. The ground control station contains all equipment to control the UAV
remotely. Navigation systems of various types (Global Positioning System (GPS), radio,
inertial) allow for preprogrammed missions, which may be defined at that time or previously
by the mission planner. As a minimum, a typical UAV system is composed of one or more air
vehicles, one or more ground control stations (GCS) and/or mission planning and control stations
(MPCS), payload, and data link. In addition, many systems include launch and recovery
subsystems, air-vehicle carriers, and other ground handling and maintenance equipment.
UAV system overview is shown in Fig. 1.1.

Figure 1.1 UAV System Overview (Source: Fahlstrom and Gleason (2012))

A mission planning and control station (MPCS) needs tracking and a real-time data link for
visual monitoring and for the position of the UAV. Remote control from the MPCS must be supported
by a direct communications link with the remote air vehicle or by indirect satellite communications.

Figure 1.2 Mission Planning and Control Station (MPCS) (Keller (2014))

The MPCS can also serve as the command post for the person who performs mission
planning, receives mission assignments from supported headquarters, and reports acquired
data and information to the appropriate unit, be it weapon fire direction, intelligence, or
command and control. The station usually has positions for both the air vehicle and mission
payload operators to perform monitoring and mission execution functions. In some small
UAS, the ground control station is contained in a case that can be carried around in a backpack
and set up on the ground, and consists of little more than a remote control and some
sort of display, probably augmented by embedded microprocessors or hosted on a
ruggedized laptop computer. At the other extreme, some ground stations are located in
permanent structures thousands of miles away from where the air vehicle is flying, using
satellite relays to maintain communications with the air vehicle. In this case, the operator’s
consoles might be located in an internal room of a large building, connected to satellite
dishes on the roof. UAV satellite-control links can have a one-second delay, which is
inadequate for sensitive control during takeoff and landing. UAV pilots within visual range of
their aircraft usually handle takeoff and landing, and then hand off control of the UAVs to
controllers at consolidated locations, who handle much of the mission tasks via satellite
links. More detail can be found in Fahlstrom and Gleason (2012).
1.2. Classification of UAV Modelling
The range of UAV sizes and types now runs from air vehicles (AVs) small enough to land on
the palm of your hand to large lighter-than-air vehicles. Many types of UAVs have been
designed, tested, and fielded throughout the world.
“Very small UAV” is used as a term to categorize an AV with dimensions of the order of
30–50 cm, such as the Mosquito, the Skate, and the CyberQuad Mini shown in Fig. 1.3.
The Mosquito, an oval flying wing with a single tractor propeller, was developed by Israel
Aerospace Industries (IAI). The Skate, a rectangular flying wing with twin tractor engine/propeller
combinations that can be tilted to provide “thrust vectoring” for control, was developed by the
US company Aurora Flight Sciences. The CyberQuad Mini, having four ducted fans in a square
arrangement, was developed by the Australian company Cyber Technology.
The Mosquito wing is 35 cm long and 35 cm in total span. It uses an electric motor with
batteries and has an endurance of 40 minutes, and claims a radius of action of about 1.2 km.
It is hand or bungee launched and can deploy a parachute for recovery. The Skate wing has
a wingspan of about 60 cm and length of 33 cm. It folds in half along its centerline for
transport and storage. It has twin electric motors on the leading edge that can be tilted up or
down and allow vertical takeoff and landing (VTOL) and transition to efficient horizontal flight.
There are no control surfaces, with all control being accomplished by tilting the motor
assemblies and controlling the speed of the two propellers. It can carry a payload of 227 g
with a total takeoff weight of about 1.1 kg. The CyberQuad Mini has four ducted fans, each
with a diameter of less than 20 cm, mounted so that the total outside dimensions, including the
fan shrouds, are about 42 by 42 cm. The total height, including the payload and batteries,
which are located in a wing at the center of the square, is 20 cm. The CyberQuad Mini
includes a low-light-level solid-state camera or a thermal camera and a control system that allows fully
autonomous waypoint navigation.

Figure 1.3 Very Small UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))

Small UAVs have at least one dimension greater than 50 cm and go up to dimensions of a
meter or two. Many of these UAVs have the configuration of a fixed-wing model airplane and
are hand-launched by their operator by throwing them into the air. Small UAVs include the
Raven, which is developed by the US company AeroVironment, and the Bayraktar Mini, which is
developed by the Turkish company Baykar; both are conventional fixed-wing vehicles.
Figure 1.4 Small UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))

The RQ-11 Raven is an example of a UAV in the “model airplane” size range. It has a
1.4 m wingspan and is about 1 m long. It weighs a little less than 2 kg and is launched by
being thrown into the air by its operator. It uses electrical propulsion and can fly for nearly an
hour and a half. The Raven and its control “station” can be carried on its operator's back,
and it can carry visible, near-infrared (NIR), and thermal imaging systems for
reconnaissance as well as a “laser illuminator” to point out targets to personnel on the ground.
The Bayraktar Mini has a gimbaled camera for day or night use. It offers waypoint navigation
with GPS or other navigation systems. Despite its slightly greater size and weight, it is
launched much like the Raven. It can be recovered by a skidding landing on its belly or with
an internal parachute. It is fielded with small army units and has been heavily used by the
Turkish Army since it was fielded in about 2006. Some of the small UAVs are shown in Fig. 1.4.
Medium UAVs are too large to be carried by one person but are still smaller than a light
aircraft. The UAVs that sparked the present resurgence of interest, such as the Pioneer and
Skyeye, are in the medium class. They have typical wingspans of the order of 5–10 m and
carry payloads from 100 to more than 200 kg. A large number of UAVs fall into this size
group. The Israeli-US Hunter and the UK Watchkeeper are more recent examples of
medium-sized, fixed-wing UAVs. There are also a large number of rotary-wing UAVs in this
size class. A series of conventional helicopters with rotor diameters of the order of 2 m has
been developed in the United Kingdom by Advanced UAV Technologies. There are also a number
of ducted-fan systems configured much like the CyberQuad Mini but having dimensions measured
in meters instead of centimeters. The US Bell Eagle Eye is a medium-sized VTOL system that is
notable for using tiltrotor technology.
The RQ-2 Pioneer is an example of an AV that is smaller than a light manned aircraft but
larger than a model airplane. For many years it was the workhorse of the stable of US tactical
UAVs. Originally designed by the Israelis, it was built by AAI in the United States. The Pioneer
provided real-time reconnaissance and intelligence for ground commanders. High-quality
day and night imagery for artillery and naval gunfire adjustment and damage assessment were its
prime operational missions. The 205 kg, 5.2 m wingspan AV had a conventional aircraft
configuration. It cruised at 200 km/h and carried a 220 kg payload. Maximum altitude was
4,572 m and endurance was 5.5 h. The fiberglass air vehicle had a 26 hp engine, was
shipboard capable, and had piston and rotary engine options. The Pioneer could be launched
from a pneumatic or rocket launcher or by conventional wheeled takeoff from a prepared
runway. Recovery was accomplished by conventional wheeled landing with arrestment or
into a net.
The BAE Systems Skyeye R4E UAV system was fielded in the 1980s and is roughly
contemporary with the Pioneer. It uses launchers similar to the Pioneer but does not have a
net-recovery capability. It uses a ground control station similar in principle to that of the
Pioneer. The Skyeye air vehicle is constructed of lightweight composite materials and was
easy to assemble and disassemble for ground transport because of its modular construction.
It has a 7.32 m wingspan and is 4.1 m long. It is powered by a 52 hp rotary engine providing
high reliability and low vibration. Maximum launch weight is 570 kg and it can fly for 8 - 10 h
at altitudes up to 4,600 m. Maximum payload weight is about 80 kg (176 lb). The Skyeye has
no landing gear that would produce large radar echoes or obstruct the view of the payload. The
avoidance of a nose wheel is particularly significant, as a nose gear often obstructs the view
of a payload camera looking directly forward, precluding landing based on the view through
the eyes of the camera. The Skyeye also carries either a parafoil or a parachute as
alternative recovery systems. The parafoil essentially is a soft wing that is deployed in the
recovery area to allow the air vehicle to land much more slowly. The parafoil recovery can be
effective for landing on moving platforms such as ships or barges.
The RQ-5A Hunter was developed as the standard “Short Range” UAS for the US Army.
The Hunter does not require a recovery net or launcher, which significantly simplifies the
overall minimum deployable configuration and eliminates the launcher required by the
Skyeye. Under the appropriate conditions, it can take off and land on a road or runway. It
utilizes an arresting cable system when landing, with a parachute recovery for emergencies.
It is not capable of net recovery because it has a tractor (“puller”) propeller that would be
damaged or broken or would damage any net that was used to catch it. It also has a rocket-
assisted takeoff option to allow the launch to occur when no suitable road or runway is
available. The Hunter is constructed of lightweight composite materials, which afford ease of
repair. It has a 10.2 m wingspan and is 6.9 m long. It is powered by two four-stroke, two-
cylinder (V-type), air-cooled Moto Guzzi engines, which utilize fuel injection and individual
computer control. The air vehicle weighs approximately 885 kg at takeoff (maximum), has an
endurance of about 12 h, and a cruise speed of 120 knots.
The Hermes 450/Watchkeeper is an all-weather intelligence, surveillance, and target acquisition
UAV. Its dimensions are similar to those of the Hunter. The Watchkeeper is manufactured in the
United Kingdom by a joint venture of the French company Thales and the Israeli company
Elbit Systems. It has a weight of 450 kg including a payload capacity of 150 kg. A series of
rotary-wing UAVs called the AT10, AT20, AT100, AT200, AT300, and AT1000 have been
developed by the UK firm Advanced UAV Technology. They are all conventionally
configured helicopters with a single main rotor and a tail boom with a tail rotor for yaw
stability and control. The rotor diameters vary from 1.7 m in the AT10 to about 2.3 m for the
AT1000. All are intended to be launched by vertical takeoff and all claim the ability for
autonomous landings on moving vehicles.
Figure 1.5 Medium UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))

The Northrop Grumman MQ-8B Fire Scout is a conventionally configured VTOL UAV. It
looks much like a typical light helicopter. It has a length of 9.2 m, a height of 2.9 m, and a
rotor diameter of 8.4 m. It is powered by a 420 shaft hp turbine engine. The Fire Scout is similar
to the manned OH-58 Kiowa, a light observation helicopter with a two-man crew and two
passenger seats. The Kiowa has a maximum payload of about 630 kg (1,389 lb), compared with
the 270 kg (595 lb) maximum payload of the Fire Scout, but if one takes out the weight of the
crew and other items associated with the crew, the net payload capability of the Fire Scout is
similar to that of the manned helicopter. Some of the medium UAVs are
shown in Fig. 1.5.
Large UAVs are those that can fly long distances and loiter for extended periods to perform
surveillance functions. They are also capable of carrying weapons in significant quantities. The
lower range of such systems includes the US General Atomics Predator A, which has a
significant range and endurance, but can carry only two missiles of the weight presently
being used. The limitation to two missiles is serious as it means that after firing the two
missiles that are on board, the UAV either has lost the ability to deliver weapons or must be
flown back to its base to be rearmed. For this reason, a second generation of UAVs,
including a Predator B model, is larger and able to carry many more weapons on a single
sortie.
The Cassidian Harfang is an example of a system much like the Predator A and the
Talarion, also by Cassidian, is an example of the emerging successors to the Predator A. At
the high end of this size group, an example is an even larger UAV designed for very long
range and endurance and capable of flying anywhere in the world on its own, the US
Northrop Grumman Global Hawk.
The MQ-1 Predator A is larger than a light single-engine private aircraft and provides
medium altitude, real-time surveillance using high-resolution video, infrared imaging, and
synthetic aperture radar. It has a wingspan of 17 m and a length of 8 m. It adds a significantly
higher ceiling (7,620 m) and longer endurance (40 h) to the capabilities of the smaller UAVs.
GPS and inertial systems provide navigation, and control can be via satellite. Speed is 220
km/h and the air vehicle can remain on station for 24 h, 925 km from the operating base. It
can carry an internal payload of 200 kg plus an external payload (hung under the wings) of
136 kg.
The Harfang UAV is produced by Cassidian, a subsidiary of EADS. It is about the same size
as the Predator and is designed for similar missions. The
configuration is different, using a twin-boom tail structure. There are a variety of possible
payloads. Its stated performance is similar to that of the Predator, but it has a shorter
endurance of 24 h. It takes off and lands conventionally on wheels on a runway. Control can
be via satellite.
The RQ-4 Global Hawk is manufactured by Northrop Grumman Aerospace Systems. It flies
at high altitude and utilizes radar, electro-optical, and infrared sensors for surveillance
applications. It uses a turbofan engine and appears to have a shape that reduces its radar
signature, but it is not a “stealth” aircraft. It is 14.5 m long with a 40 m wingspan and has a
maximum takeoff weight of about 14,600 kg. It can loiter at 575 km/h and has an endurance of 32
h. It has a full set of potential payloads and it is routinely controlled via satellite links. Some
of these large UAVs are illustrated in Fig. 1.6.

Figure 1.6 Large UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))

According to the physical size of UAVs, the terms very small, small, medium, and large can be
used for UAV classification. A comprehensive classification of UAVs demonstrating both the wide
variety of UAV systems and capabilities as well as the multiple dimensions of differentiation
is presented in Table 1.1. More detail can be found in Fahlstrom and Gleason (2012).
Table 1.1 Categorization of Existing UAV Systems (Source: Van Blyenburgh (2006))

Category | Mass (kg) | Range (km) | Flight Altitude (m) | Endurance (h)
Micro | < 5 | < 10 | 250 | 1
Mini | < 20/25/30/150* | < 10 | 150/250/300 | < 2
Tactical
  Close range | 25-150 | 10-30 | 3000 | 2-4
  Short range | 50-250 | 30-70 | 3000 | 3-6
  Medium range | 150-500 | 70-200 | 5000 | 6-10
  Medium range endurance | 500-1500 | > 500 | 8000 | 10-18
  Low altitude deep penetration | 250-2500 | > 250 | 50-9000 | 0.5-1
  Low altitude long endurance | 15-25 | > 500 | 3000 | > 24
  Medium altitude long endurance | 1000-1500 | > 500 | 3000 | 24-48
Strategic
  High altitude long endurance | 2500-5000 | > 2000 | 20000 | 24-48
  Stratospheric | > 2500 | > 2000 | > 20000 | > 48
  Exo-stratospheric | TBD | TBD | > 30500 | TBD
Special task
  Unmanned combat | > 2500 | 1500 | 12000 | 2
  Lethal | TBD | 300 | 4000 | 3-4
  Decoys | 150-250 | 0-500 | 50-5000 | < 4
* Varies with national legal restrictions. TBD: to be decided.

1.3. Payload Design for Small UAV Modeling


Researchers and developers in academia as well as in industry have worked on designing
UAV payloads. A small UAV has limited volume and weight capacity for a payload. These
limitations drive several additional constraints:
a. Electronic components must avoid electromagnetic interference (EMI) interactions, a
risk that is increased by their close proximity.
b. Power is limited by the space and weight available for energy storage devices such
as batteries.
c. Total payload operational time is limited by the available power, and processor speed is
limited by the weight and power available.
d. Sensor capabilities such as video frame rate and image resolution are limited by the
available weight allocation.
Designing a small UAV payload involves trade-offs among the basic subsystems
shown in Fig. 1.7. Decisions made for one subsystem will affect the design of
another subsystem. While managing this balance, the overall small UAV system
constraints of weight and volume must be maintained.
[Figure: the four major payload subsystems (Communications, Power, Sensors, and Computer) surrounding the payload]
Figure 1.7 Major Payload Subsystem for Small UAVs (Source: Reproduction from Valavanis and Vachtsevanov (2014))

In the flow diagrams used in each of the subsystem design sections, a reference to WPV is used
to represent weight, power, and volume. Note that cost is another important factor in developing
a design; it is specific to each small
UAV application.
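The WPV bookkeeping can be illustrated with a small script. The sketch below, with purely hypothetical component values, simply sums the weight, power, and volume of candidate payload components and checks them against an overall budget; it is not taken from the source, only an illustration of the trade-off.

```python
# Minimal sketch of a weight/power/volume (WPV) budget check for candidate
# payload components; all component values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    weight_g: float    # grams
    power_w: float     # watts
    volume_cm3: float  # cubic centimetres

def within_budget(components, max_weight_g, max_power_w, max_volume_cm3):
    """Return True if the summed WPV of the payload stays inside the budget."""
    total_w = sum(c.weight_g for c in components)
    total_p = sum(c.power_w for c in components)
    total_v = sum(c.volume_cm3 for c in components)
    return (total_w <= max_weight_g and
            total_p <= max_power_w and
            total_v <= max_volume_cm3)

# Example candidate payload (hypothetical numbers)
payload = [Component("EO camera", 45.0, 1.2, 30.0),
           Component("SBC", 60.0, 5.0, 80.0),
           Component("radio", 20.0, 2.0, 25.0)]
print(within_budget(payload, max_weight_g=200, max_power_w=10, max_volume_cm3=200))
```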
[Flow chart: define mission constraints, derive the payload constraints on communications, develop link budgets, and define possible radios; if one or more radios meet the constraints, select the radio per WPV; otherwise check whether the distance is too great and whether a multi-hop link can be implemented, or negotiate new constraints]

Figure 1.8 Communication System Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
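As an illustration of the "develop link budgets" step, the following sketch computes a simple free-space link margin. The radio parameters (transmit power, antenna gains, receiver sensitivity, frequency, range, and miscellaneous losses) are hypothetical assumptions; a real design would also account for fading, antenna patterns, and regulatory limits.

```python
# Rough link-budget sketch: free-space path loss plus a fixed loss allowance.
# All numbers are illustrative assumptions only.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def link_margin_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   rx_sensitivity_dbm, distance_m, freq_hz, misc_losses_db=3.0):
    """Received power minus receiver sensitivity; positive means the link closes."""
    rx_power_dbm = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                    - fspl_db(distance_m, freq_hz) - misc_losses_db)
    return rx_power_dbm - rx_sensitivity_dbm

# Example: a 100 mW (20 dBm) radio at 2.4 GHz over 5 km
print(link_margin_db(20, 2, 2, -95, 5_000, 2.4e9))
```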

Note that references to the SBC (single-board computer) refer to the entire computing system,
including storage devices. Fig. 1.9 shows the design flow.
[Flow chart: define mission constraints, derive the payload constraints on the SBC, develop performance budgets, and define possible SBCs; if an SBC meets the constraints and does not create GPS interference (or a GPS solution can still be implemented), select the SBC per WPV; otherwise consider processing at the ground station or negotiate new constraints]

Figure 1.9 SBC Subsystem Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))

Though other sensors may be used, the electro-optical (EO) sensor is predominantly
employed in small UAVs. Considerations for the EO sensor are presented here, but the methods
can be readily applied to other types of sensors.
sensor hardware, image capture method, and any required gimbal mechanism for panning
the sensor. Note that an inertial measurement unit (IMU) is not listed. The aircraft attitude
can be provided by the autopilot to the image processing system to geo-register the image.
Fig. 1.10 shows the design flow.
[Flow chart: define mission constraints, derive the payload constraints on sensors, define a decision matrix, and define possible EO sensors; depending on whether an analog or digital method is available to acquire images, check for GPS interference or for analog noise in the image, then select the sensor per WPV or negotiate new constraints]

Figure 1.10 EO Sensor Subsystem Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))

The power system includes batteries and DC-DC converters. Fig. 1.11 shows the
design flow. More detail can be found in Valavanis and Vachtsevanov (2014).
[Flow chart: define mission constraints, derive the payload constraints on power, develop power budgets, define power solutions, and develop a decision matrix; if a power solution exists, implement the power system, otherwise negotiate new constraints]
Figure 1.11 Power Subsystem Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
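A power budget of the kind developed in this flow can be sketched as follows; the battery capacity, converter efficiency, and subsystem loads are hypothetical numbers used only to show how payload operating time falls out of the budget.

```python
# Sketch of the "develop power budgets" step: estimate payload operating time
# from battery capacity and summed subsystem loads (all values hypothetical).
def payload_endurance_h(battery_wh: float, loads_w: dict, dcdc_efficiency: float = 0.9) -> float:
    """Battery energy divided by total electrical load, derated by converter efficiency."""
    total_load_w = sum(loads_w.values())
    return battery_wh * dcdc_efficiency / total_load_w

loads = {"SBC": 5.0, "EO camera": 1.5, "radio": 2.5}        # watts
print(payload_endurance_h(battery_wh=24.0, loads_w=loads))  # about 2.4 h
```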

1.4. Overview of Linear Flight Control And Guidance


A flight control system is required to ensure that the UAV remains stable and, in the presence of
external disturbances, provides the desired performance, such as following the desired
trajectory with sufficient accuracy. A UAV flight control system is therefore typically safety or
mission critical, as a wrong design, a lack of robustness to modelling variations, or a component
failure may cause performance degradation or even loss of the vehicle. The purpose of the
control system is to follow the desired reference command in the presence of external
disturbances. The control system accomplishes this by acting to minimize the tracking error
between the reference commands and the measured response of the UAV. The controller uses
estimates of the UAV states, which typically include the position, velocity, angular rate, and
attitude of the UAV. These estimates are produced by the navigation system, which fuses noisy
measurements from several complementary sensors. Fig. 1.12 shows a typical UAV control system
architecture. More detail can be found in Valavanis and Vachtsevanov (2014).
[Block diagram: reference commands are compared with the state estimates to form an error; the control system Gc(s) generates control inputs to the UAV dynamics G(s), which are subject to disturbances; sensors and the navigation system, subject to sensor noise, feed the state estimates back]

Figure 1.12 A Typical UAV Control System Architecture (Source: Reproduction from Valavanis and Vachtsevanov (2014))
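The error-driven loop of Fig. 1.12 can be illustrated with a minimal discrete controller. The sketch below assumes a simple PID law for Gc(s) and illustrative gains; it is not the controller of any particular autopilot, only an example of acting on the tracking error between the commanded and estimated state.

```python
# Minimal sketch of the error-driven loop in Fig. 1.12: a discrete PID controller
# acting on the tracking error between a commanded and an estimated state.
# Gains, rates, and states below are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, command, estimate):
        error = command - estimate                           # tracking error
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: an altitude-hold style loop running at 50 Hz
controller = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)
u = controller.update(command=100.0, estimate=97.5)  # control input to the UAV dynamics
print(u)
```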

1.5. Motion Equations: Dynamics, Kinematics and Quaternions


While the aircraft can be modeled as a rigid body, the Earth can be taken to be an inertial
reference frame for most problems involving UAVs. It is customary to define a reference frame
(a body frame) that is rigidly attached to the aircraft in order to write the motion equations,
as shown in Fig. 1.13. A standard choice for the inertial
reference frame ( X, Y, Z ) is to choose the X - axis pointing north, the Y - axis pointing east,
and the Z - axis pointing down. Several choices of body frames are common, depending on
the task. There are advantages to each choice, but the stability axes are very commonly
used. In general though, the appropriate body frame ( x, y, z ) is chosen in such a way that
the y-axis is orthogonal to the aircraft’s plane of symmetry, pointing to the right. The x-axis is
chosen pointing “forward” and the z-axis points “down,” to complete the right- handed triad.

Figure 1.13 Aircraft Body Frame (Source: Valavanis and Vachtsevanov (2014))

Newton’s laws of mechanics imply that in a body frame whose origin coincides with
the aircraft’s center of mass,

$$F_b = m\dot{v}_b + \omega_b \times m v_b \quad (1.1)$$

$$T_b = J\dot{\omega}_b + \omega_b \times J\omega_b \quad (1.2)$$

In the above equations, the subscript $(\cdot)_b$ indicates vectors or tensors expressed in the
body frame, the dot indicates differentiation with respect to time, $v_b$ is the velocity of the
aircraft’s center of mass with respect to the inertial frame, $\omega_b$ is the angular velocity of the
body frame with respect to the inertial frame, $m$ is the aircraft’s mass, and $J$ is its inertia
tensor, which is constant in the body frame. The operator $\times$ denotes the cross product. On
the left-hand side, $F_b$ denotes the sum of the forces acting on the vehicle (including
aerodynamic, gravity, thrust, and buoyancy), and $T_b$ denotes the sum of the moments of these
forces about its center of mass.
In coordinates, Equations (1.1) and (1.2) are written as follows. Let $F_b = (X, Y, Z)$,
$T_b = (L, M, N)$, $v_b = (U, V, W)$, and $\omega_b = (P, Q, R)$.
In most cases of interest, the aircraft has a plane of symmetry, typically separating the
left and right halves. Assuming that the y-axis is orthogonal to the aircraft’s plane
of symmetry, the inertia tensor will have the following structure:

$$J = \begin{bmatrix} J_{xx} & 0 & -J_{xz} \\ 0 & J_{yy} & 0 \\ -J_{xz} & 0 & J_{zz} \end{bmatrix} \quad (1.3)$$
Substituting into Eqs. (1.1) and (1.2),

$$F_b \equiv \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = m\begin{bmatrix} \dot{U} + QW - RV \\ \dot{V} + RU - PW \\ \dot{W} + PV - QU \end{bmatrix} \quad (1.4)$$

$$T_b \equiv \begin{bmatrix} L \\ M \\ N \end{bmatrix} = \begin{bmatrix} J_{xx}\dot{P} - J_{xz}\dot{R} + QR(J_{zz} - J_{yy}) - PQ\,J_{xz} \\ J_{yy}\dot{Q} + PR(J_{xx} - J_{zz}) + (P^2 - R^2)J_{xz} \\ J_{zz}\dot{R} - J_{xz}\dot{P} + PQ(J_{yy} - J_{xx}) + QR\,J_{xz} \end{bmatrix} \quad (1.5)$$
Equations (1.1) and (1.2) give the evolution over time of the velocity of the aircraft’s center of
mass and of the aircraft’s angular velocity with respect to the inertial frame. In order to
reconstruct the position and attitude of the aircraft as a function of time, it is necessary to
write equations for the aircraft’s kinematics.
Let $R$ be the rotation matrix that maps vectors in the body frame to vectors in the inertial
frame. Given a vector $u \in \mathbb{R}^3$, define a “hat” operator such that $(u)^{\wedge} = \hat{u}$ is the
unique matrix satisfying $\hat{u}v = u \times v$ for all $v \in \mathbb{R}^3$. In coordinates, given $u = (u_1, u_2, u_3)$,
one gets

$$\hat{u} = \begin{bmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{bmatrix} \quad (1.6)$$
The aircraft kinematics can be written as

$$\dot{p} = R\,v_b \quad (1.7)$$

$$\dot{R} = R\,\hat{\omega}_b \quad (1.8)$$
where 𝑝 is the position of the aircraft’s center of mass. Even though the position and
attitude of the aircraft can be completely specified by Equations (1.7) and (1.8), maintaining
all components of the rotation matrix 𝑅 may be expensive from the computational point of
view, since rotation matrices are not a minimal representation of aircraft attitude. In practice,
several other methods to represent an aircraft’s attitude are commonly used.
A widely used parametrization of rotations that requires fewer parameters than rotation
matrices is based on unit quaternions, that is, unit vectors in $\mathbb{R}^4$ (Chaturvedi et al. 2011).
A quaternion $q = (q_0, \vec{q})$ is typically written in terms of a scalar part $q_0 \in \mathbb{R}$ and a
vector part $\vec{q} \in \mathbb{R}^3$. A unit quaternion is such that

$$q \cdot q = q_0^2 + \vec{q}\cdot\vec{q} = 1 \quad (1.9)$$

The set of unit vectors in $\mathbb{R}^4$ is denoted as the sphere $\mathbb{S}^3$. A unit quaternion
$q = (q_0, \vec{q}) \in \mathbb{S}^3$ defines a rotation

$$R = \mathrm{Rot}\!\left(2\arccos(q_0),\ \frac{\vec{q}}{\|\vec{q}\|}\right) \quad (1.10)$$

In other words, the vector part of a unit quaternion is parallel to the fixed axis of
the rotation, and the scalar part determines the rotation angle. Conversely, a rotation
matrix $R = \mathrm{Rot}(\theta, v)$ can be represented by the unit quaternion

$$q = \left(\cos\frac{\theta}{2},\ \sin\frac{\theta}{2}\,v\right) \quad (1.11)$$

Note that this representation is not unique, since for all $q \in \mathbb{S}^3$, $q$ and $-q$ map to the
same rotation matrix. Quaternions are composed through a multiplication operation,
indicated with $\circ$, defined as follows:

$$q_1 \circ q_2 = \left(q_{1,0}\,q_{2,0} - \vec{q}_1\cdot\vec{q}_2,\ \ q_{1,0}\,\vec{q}_2 + q_{2,0}\,\vec{q}_1 - \vec{q}_1\times\vec{q}_2\right)$$

or, in matrix form,

$$q_1 \circ q_2 = \begin{bmatrix} q_{1,0} & -\vec{q}_1^{\,T} \\ \vec{q}_1 & q_{1,0} I_3 - \hat{\vec{q}}_1 \end{bmatrix} q_2 \quad (1.12)$$

If $R_1$ and $R_2$ are rotation matrices corresponding (according to Equation (1.10)) to the unit
quaternions $q_1$ and $q_2$, respectively, then $R_1 R_2$ corresponds to $q_1 \circ q_2$.

Finally, the quaternion kinematics take the matrix form

$$\dot{q} = -\tfrac{1}{2}\,\Omega(\omega_b)\,q \quad (1.13)$$

where

$$\Omega(\omega) = \begin{bmatrix} 0 & P & Q & R \\ -P & 0 & -R & Q \\ -Q & R & 0 & -P \\ -R & -Q & P & 0 \end{bmatrix} \quad (1.14)$$

Storing and multiplying quaternions require less memory and computation time
than storing and multiplying rotation matrices, thus quaternions are often the preferred
method to model aircraft kinematics. On the other hand, the fact that
the set of unit quaternions is a double covering of the set of rotation matrices
can cause stability and robustness issues (Chaturvedi et al. 2011). Numerical
errors in quaternion propagation can result in the quaternion losing its normalization
constraint. It is common practice therefore to normalize a quaternion
to ensure ‖𝑞‖ = 1 if it is numerically integrated for control or navigation purposes. More
detail can be found in Valavanis and Vachtsevanov (2014).
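A minimal sketch of quaternion attitude propagation per Equations (1.13)-(1.14), including the renormalization step discussed above, is given below; the forward-Euler integration scheme and the sample body rates are assumptions for illustration only.

```python
# Sketch of quaternion attitude propagation, Eqs. (1.13)-(1.14), with
# renormalization after each step; a simple forward-Euler integrator is assumed.
import numpy as np

def omega_matrix(omega_b):
    """Build the 4x4 matrix of Eq. (1.14) from the body rates (P, Q, R)."""
    P, Q, R = omega_b
    return np.array([[0.0,  P,   Q,   R],
                     [-P,  0.0, -R,   Q],
                     [-Q,   R,  0.0, -P],
                     [-R,  -Q,   P,  0.0]])

def propagate_quaternion(q, omega_b, dt):
    """One Euler step of q_dot = -1/2 * Omega(omega_b) * q, then renormalize."""
    q_dot = -0.5 * omega_matrix(omega_b) @ q
    q_new = q + q_dot * dt
    return q_new / np.linalg.norm(q_new)   # enforce the unit-norm constraint

q = np.array([1.0, 0.0, 0.0, 0.0])       # (q0, q_vec): identity attitude
omega_b = np.array([0.01, 0.2, 0.0])     # body rates [rad/s], illustrative
for _ in range(100):                     # 1 s at 100 Hz
    q = propagate_quaternion(q, omega_b, dt=0.01)
print(q)
```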

1.6. Reference Frames and Coordinate Transformations


In order to accurately describe a body motion, it is required to define the forces
and moments acting on the body and thus resulting in the body motion and the
coordinate system that can be used as a reference for the motion states definition. It
is important to note that there are two types of forces acting on a body in free motion.
First, the inertial forces and moments that depend on the velocities and accelerations
relative to an inertial reference frame; the classical Newtonian dynamics equations
hold only in the inertial frame. The second group consists of the aerodynamic
forces and moments resulting from interaction of the body with the surrounding
airflow and therefore relative to the air. Since the airflow might not be stationary,
it is therefore convenient to describe the resulting aerodynamics in the coordinate
frames connected to the body and to the surrounding air. The resulting motion can be
conveniently described in terms of the position, velocity, acceleration, and attitude
coordinates which comprise the states of the moving body. Some of these states, in
turn, need to be defined with respect to a reference frame whose choice is determined
by the specifics of the UAV application. Thus, the information carried by various
reference frames is what facilitates the complete and convenient definition of the
free body motion.
Deriving equations of motion of a fixed-wing UAV requires a definition of coordinate frames
where forces and moments acting on the airplane can be conveniently
defined and where the states including the position, velocity, acceleration, and
attitude can be suitably described.
In addition to the body and wind frames that define dynamics of the body–fluid interaction,
the following set of coordinate frames used in various UAV applications are given below:
1. Earth-centered inertial frame {𝑖}
2. Earth-centered Earth-fixed frame {𝑒}
3. Geodetic coordinate system {𝜆, 𝜑, ℎ}
4. Tangent plane coordinate system {𝑢}
5. Body-carried frame {𝑛}
6. Body-fixed frame {𝑏}
7. Wind frame {𝑤}
Depending on the duration of flight and operational range, the first four frames can be
considered inertial frames, with the remaining three frames body fixed. More detail can be
found in Valavanis and Vachtsevanov (2014).
1.7. Earth-Centered and Geodetic Coordinate Frames
It is convenient to consider two coordinate frames connected to the Earth. The
Earth-centered Earth-fixed (ECEF) coordinate system is fixed to the Earth, and
therefore, it rotates at the Earth’s sidereal rate with respect to the Earth-centered
inertial (ECI) frame, which represents a nonrotating inertial frame. The ECI frame is
usually denoted {𝑖}, while the ECEF frame is denoted {𝑒}. Both frames are right-handed,
orthogonal, and have their origins at the center of the Earth.
The local geodetic {𝜆, 𝜑, ℎ} frame is usually associated with the ECEF frame
as shown in Fig. 1.14. It has the same origin placed at the center of the Earth. The frame
defines the orientation of a line normal to Earth’s surface and passing through the point of
interest. The orientation of the line is defined by two angles, 𝜆 (geodetic latitude) and 𝜑
(geodetic longitude), with the height h above the Earth’s surface; eventually these three
parameters, along with the components of the UAV velocity vector, become the major
navigation states.
Figure 1.14 ECI, ECEF and Geodetic Coordinate Frames (Source: Valavanis and Vachtsevanov (2014))

The following parameters are used for the oblate spheroid model of the Earth: equatorial radius
$r_{ea} = 6{,}378{,}137.00$ m and polar radius $r_{eb} = 6{,}356{,}752.314$ m. The resulting
transformation from the geodetic frame $\{\lambda, \varphi, h\}$ to the ECEF frame is as follows:

$$x_e = (r_N + h)\cos\lambda\cos\varphi$$
$$y_e = (r_N + h)\cos\lambda\sin\varphi \quad (1.15)$$
$$z_e = \left((1-\varepsilon^2)\,r_N + h\right)\sin\lambda$$

where the radius of curvature $r_N$ and the eccentricity $\varepsilon$ of the oblate ellipsoid are defined as

$$r_N = \frac{r_{ea}}{\sqrt{1-\varepsilon^2\sin^2\lambda}} \quad (1.16)$$

$$\varepsilon = \sqrt{1-\frac{r_{eb}^2}{r_{ea}^2}} \quad (1.17)$$
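A direct implementation of Equations (1.15)-(1.17) with the radii quoted above is sketched below; the test point is arbitrary and latitude/longitude are taken in radians.

```python
# Sketch of the geodetic-to-ECEF transformation, Eqs. (1.15)-(1.17), using the
# WGS-84 radii quoted in the text; lat/lon in radians, h in metres.
import math

R_EA = 6_378_137.0       # equatorial radius [m]
R_EB = 6_356_752.314     # polar radius [m]
ECC = math.sqrt(1.0 - (R_EB / R_EA) ** 2)   # eccentricity, Eq. (1.17)

def geodetic_to_ecef(lat, lon, h):
    """Return ECEF (x, y, z) in metres from geodetic latitude, longitude, height."""
    r_n = R_EA / math.sqrt(1.0 - (ECC * math.sin(lat)) ** 2)   # Eq. (1.16)
    x = (r_n + h) * math.cos(lat) * math.cos(lon)
    y = (r_n + h) * math.cos(lat) * math.sin(lon)
    z = ((1.0 - ECC ** 2) * r_n + h) * math.sin(lat)
    return x, y, z

# Example: a point at 41 deg N, 29 deg E, 100 m altitude (arbitrary)
print(geodetic_to_ecef(math.radians(41.0), math.radians(29.0), 100.0))
```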

1.8. Local Tangent Plane Coordinate System


The origin of the local tangent plane (LTP) is fixed to the surface of the Earth
with two of its axes attached to the plane tangent to the surface as shown in Fig. 1.15. The
frame is usually marked with the subscript {𝑢} and serves the purpose of an
inertial frame in most short-duration low-speed UAV applications. The frame’s 𝑥
and 𝑦 axes are in the tangent plane and most often aligned with the north and east
directions correspondingly; the 𝑧 axis completes the right-hand coordinate system,
thus pointing down.

Figure 1.15 Definition of the Local Tangent Plane (Source: Valavanis and Vachtsevanov (2014))
It can also be defined as an NED frame, indicating the north-east-down alignment of the
coordinate axes. When the origin of the LTP frame is defined in terms of its geodetic latitude,
longitude, and altitude above the ground surface, then Equation (1.15) can be
applied to define the kinematics of the navigation states.
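As a complement, the following sketch resolves an ECEF displacement from a reference point into the local NED tangent-plane frame using the standard rotation built from the reference latitude and longitude; the function name and interface are illustrative, not taken from the source.

```python
# Sketch of resolving an ECEF displacement in the local NED tangent plane
# anchored at a reference geodetic point; the standard ECEF-to-NED rotation is assumed.
import numpy as np

def ecef_to_ned(p_ecef, ref_ecef, ref_lat, ref_lon):
    """Rotate the ECEF vector (p - ref) into the north-east-down frame at the reference point."""
    sl, cl = np.sin(ref_lat), np.cos(ref_lat)
    so, co = np.sin(ref_lon), np.cos(ref_lon)
    R = np.array([[-sl * co, -sl * so,  cl],
                  [-so,       co,      0.0],
                  [-cl * co, -cl * so, -sl]])
    return R @ (np.asarray(p_ecef) - np.asarray(ref_ecef))
```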
1.9. Body-Carried and Fixed Frames
In flight dynamics, the body-attached reference frames usually have their origin
at the center of gravity (CG) of an airplane; therefore, these frames are moving.
The body-carried frame {𝑛} is an orthogonal frame with its origin at the CG of the
UAV. All its axes are permanently stabilized and aligned with the LTP frame axes, as if the
LTP frame were translated to the CG, as shown in Fig. 1.16. The frame is usually utilized in
defining the navigation equations, hence its subscript {𝑛}. This frame is connected to the LTP
frame by means of a plain translation $r = [r_x, r_y, r_z]^T$.

Figure 1.16 Body-Fixed Frame with respect to LTP (Source: Valavanis and Vachtsevanov (2014))

The body-fixed frame is an orthogonal frame defined with respect to the body-carried frame.
Its origin is at the CG of UAV, and its axes are rigidly connected
to the body; therefore, the frame also rotates with the body. The frame is usually
marked with the subscript {𝑏}. The alignment of the {𝑏} frame axes is a degree
of freedom that can exploit the body symmetry.
As the body moves, its attitude is defined with reference to the body-carried
frame {𝑛} by three consecutive Euler rotations by the $\psi$ (yaw), $\theta$ (pitch), and $\phi$ (roll)
angles. See their graphical illustration in Fig. 1.17, where frames {0} and {1} relate
to the frames {𝑛} and {𝑏} correspondingly. The formal definition of the Euler
angles as applied to airplane attitude specification is presented here for completeness:
$\psi$ – yaw is the angle between $x_n$ and the projection of $x_b$ on the local horizontal
plane.
$\theta$ – pitch is the angle between the local horizon and the $x_b$ axis, measured in the vertical
plane of symmetry of the UAV.
$\phi$ – roll is the angle between the body-fixed $y_b$ axis and the local horizon,
measured in the $y_b z_b$ plane.
Figure 1.17 Three Consecutive Rotations (Source: Valavanis and Vachtsevanov (2014))
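The three consecutive rotations can be collected into a single rotation matrix from the body-carried frame {n} to the body-fixed frame {b}. The sketch below assumes the standard 3-2-1 (yaw-pitch-roll) sequence described above; the example angles are arbitrary.

```python
# Sketch of the {n}-to-{b} rotation built from the three consecutive
# yaw-pitch-roll Euler rotations (3-2-1 sequence).
import numpy as np

def Rnb(psi, theta, phi):
    """Rotation matrix from the body-carried frame {n} to the body-fixed frame {b}."""
    Rz = np.array([[np.cos(psi),  np.sin(psi), 0],
                   [-np.sin(psi), np.cos(psi), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(theta), 0, -np.sin(theta)],
                   [0, 1, 0],
                   [np.sin(theta), 0, np.cos(theta)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(phi),  np.sin(phi)],
                   [0, -np.sin(phi), np.cos(phi)]])
    return Rx @ Ry @ Rz   # yaw first, then pitch, then roll

v_n = np.array([1.0, 0.0, 0.0])                                 # a vector resolved in {n}
v_b = Rnb(np.radians(30), np.radians(5), np.radians(2)) @ v_n   # same vector in {b}
print(v_b)
```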

1.10. Wind Frame


Aerodynamic forces and moments resulting from the body-air interaction as the
airframe moves through the air depend on the velocity of the vehicle relative to the wind. The
velocity vector of the UAV with respect to the possibly moving air (wind), resolved in the
inertial frame {u}, is denoted $V_a$ (see Fig. 1.18). Its magnitude is called the airspeed $V_a$,
as opposed to the velocity vector defined in the LTP with respect to the ground, the ground
speed vector $V_g$. The orientation of the wind frame {w}, defined by the direction of
$V_a$ with respect to the body-fixed frame {b}, is defined by two angles.
To generate the lift force in flight, the wing of the UAV must be oriented at
a usually positive angle $\alpha$, the angle of attack, with respect to the $V_a$ vector. The angle of attack
$\alpha$ is also one of the key parameters that define the longitudinal stability of an airplane.
Therefore, quite often, the coordinate frame that results from a single rotation of the body-
fixed frame {b} by the angle $\alpha$ is called the stability frame (Beard and McLain 2012; Etkin and
Reid 1995). As illustrated in Fig. 1.18, the angle of attack is defined between the projection of $V_a$
onto the vertical plane of symmetry of the UAV (spanned by the axes $x_b$, $z_b$ of frame {b}) and the
longitudinal axis $x_b$ of the UAV. It is positive when the leading edge of the wing rotates upward
with respect to $V_a$. In turn, the angle between the velocity vector $V_a$ projected onto the
“wing-level” plane (spanned by the axes $x_b$, $y_b$ of frame {b}) and the longitudinal axis $x_b$ of the
UAV is called the sideslip angle. It is denoted by $\beta$.
The transformation $R_{bw}$ from the body-fixed frame {b} to the wind frame {w} is then given by

$$R_{bw} = \begin{bmatrix} \cos\beta & \sin\beta & 0 \\ -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} = \begin{bmatrix} \cos\alpha\cos\beta & \sin\beta & \sin\alpha\cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \quad (1.18)$$
Figure 1.18 Wind frame and body-fixed frames. Definition of the angle of attack and the sideslip (Source: Valavanis and
Vachtsevanov (2014))

The importance of the wind frame when flying UAVs in wind conditions, where the wind may
reach up to 100% of the nominal aircraft speed, cannot be overestimated. For example,
consider an autonomous glider designed to use wind energy to maintain a long flight time.
It is therefore necessary to understand the difference between the air velocity vector $V_a$ and
the ground speed vector $V_g$, both resolved with respect to the LTP frame. The graphical
representation of the relation between these vectors is shown in Fig. 1.19. In the presence of
constant wind, these velocities are related by what is usually called the wind triangle equation:

$$V_a = V_g - V_w \quad (1.19)$$

where $V_w$ is the wind velocity defined in the LTP frame.
First, define the components of all three vectors in the body-fixed frame {b}. Let the UAV
velocity in the LTP (inertial) frame expressed in the body frame be $V_g^b = [u, v, w]^T$, and let
the wind velocity in the LTP frame expressed in the body frame be $V_w^b = [u_w, v_w, w_w]^T$.
Observe that $V_a$ can be expressed in the {w} frame as $V_a^w = [V_a, 0, 0]^T$, and let
$V_a^b = [u_a, v_a, w_a]^T$ be its components expressed in the body frame. Utilizing the definition
of the angles of attack and sideslip relating the wind frame to the body-fixed frame, the
“wind triangle” Equation (1.19) expressed in the body frame results in the following:

$$V_a^b = V_g^b - V_w^b = \begin{bmatrix} u \\ v \\ w \end{bmatrix} - \begin{bmatrix} u_w \\ v_w \\ w_w \end{bmatrix}$$

$$V_a^b = \begin{bmatrix} u_a \\ v_a \\ w_a \end{bmatrix} = R_{bw}^{T}\begin{bmatrix} V_a \\ 0 \\ 0 \end{bmatrix} = V_a\begin{bmatrix} \cos\alpha\cos\beta \\ \sin\beta \\ \sin\alpha\cos\beta \end{bmatrix} \quad (1.20)$$
Figure 1.19 Wind Triangle in 2D Plane and the Yaw (𝜓), Sideslip (𝛽), and Course (𝜒 ) Angles (Source: Valavanis and
Vachtsevanov (2014))

This last equation relates the components of $V_a$ resolved in the body frame to the airspeed
and the angles of attack and sideslip. In turn, if the wind components resolved in the body
frame are known, then inverting the last equation allows calculation of the airspeed $V_a$ and
the $\alpha$, $\beta$ angles:

$$V_a = \sqrt{u_a^2 + v_a^2 + w_a^2}, \qquad \alpha = \tan^{-1}\frac{w_a}{u_a}, \qquad \beta = \sin^{-1}\frac{v_a}{V_a} \quad (1.21)$$
More detail can be found in Valavanis and Vachtsevanov (2014).
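Equations (1.19)-(1.21) translate directly into code. The sketch below subtracts assumed body-frame wind components from the body-frame ground velocity and recovers the airspeed and the angles of attack and sideslip; the example numbers are illustrative.

```python
# Sketch of the wind-triangle computation, Eqs. (1.19)-(1.21): air-relative
# velocity in the body frame, then airspeed, angle of attack, and sideslip.
import numpy as np

def airdata_from_wind_triangle(v_g_body, v_w_body):
    """Return (V_a, alpha, beta) from body-frame ground and wind velocities."""
    u_a, v_a, w_a = np.asarray(v_g_body) - np.asarray(v_w_body)   # Eq. (1.19)
    V_a = np.sqrt(u_a**2 + v_a**2 + w_a**2)                       # Eq. (1.21)
    alpha = np.arctan2(w_a, u_a)                                  # angle of attack
    beta = np.arcsin(v_a / V_a)                                   # sideslip angle
    return V_a, alpha, beta

# Example: 20 m/s forward ground speed with a 5 m/s headwind component
print(airdata_from_wind_triangle([20.0, 1.0, 2.0], [-5.0, 0.0, 0.0]))
```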
1.11. Principles of Guidance, Navigation, and Control of UAVs
Regardless of their size and mission, all UAVs share the need for sensor or
sensor systems which provide an estimate of the vehicle’s full state vector. The
state vector normally consists of three position coordinates, three components of
the velocity vector, and anywhere between three and nine parameters which describe
the vehicle’s attitude. In addition to state sensing and estimation, UAVs (especially
one that operates autonomously) need control and guidance systems which allow
them to maneuver in a way consistent with their mission. In simple terms, the
guidance system generates instruction on what state trajectory the UAV should
follow in accomplishing its mission. The control system in turn operates the aircraft
controls (aileron, elevators, thrust, etc.) to follow the trajectory generated by the
guidance systems. A high-level depiction of a Guidance, Navigation, and Control (GNC)
system is shown in Fig. 1.20 and consists of both airborne and ground components.
Off-the-shelf GNC solutions for small UAVs exist; some of these solutions consist of the vehicle
itself, avionics, and associated ground support systems. While most of these “turnkey”
solutions allow some level of customization by the user, they can be very restrictive for use
in research environments.
In order to be able to guide and control the UAV, the state of the UAV must be
available to the controller at high fidelity and high bandwidth. Accurate position is
required to perform automatic control for precision applications (e.g., landing). The
navigation module in the GNC system implements aircraft state estimation. The advent of
powerful computers made available at relatively low cost has allowed sensor
fusion for navigation; sensor fusion is a technique of optimally blending information
from multiple different (flawed) sensors. A multi-sensor navigation framework aims to
obtain the most information about the aircraft states using a minimal combination
of sensors. This is key to UAV operations, where size, weight, and power are all
critical.
The complete state of the UAV comprises its position, velocity, attitude, airspeed,
angle-of-attack, sideslip angle, and rotation (pitch, roll, and yaw) rates. Position,
velocity, and attitude are also known as the navigation state.

[Block diagram: the airborne components (guidance module, control module, navigation module, navigation system, and airborne data link transmitter/receiver) communicate with the ground components through ground data link transmitters/receivers]
Figure 1.20 Concept of Operations for UAVs (Source: Reproduction from Valavanis and Vachtsevanov (2014))

1.12. INS/GPS Integrated Navigation System


An inertial navigation system (INS) uses the output of inertial sensors to estimate the
vehicle’s position, velocity, and attitude. A complete six degree-of-freedom inertial
sensor consists of 3-axis accelerometers and 3-axis gyros. The accelerometers measure the
specific force acting on the platform and the gyros measure its rotation.
Inertial sensors can be categorized according to their resulting navigational accuracy
(Gleason and Gebre-Egziabher 2009; Gebre-Egziabher 2001). Automotive-grade
micro-electro-mechanical system (MEMS) inertial sensors are most suitable for
low-cost UAV applications; however, when operating as a stand-alone navigator,
these sensors produce positioning errors on the order of several hundred meters
per minute. These large errors in the position, velocity, and attitude estimates are
mainly due to sensor bias and noise that corrupt the measurements. They are also a
function of the initial error on the state estimates. The position error grows linearly
with the initial velocity error estimate, quadratically with uncorrected bias, and at a
cubic rate with attitude error (Titterton and Weston 2004). Despite the unbounded
growth of the error with time, one of the important advantages of inertial navigators
is that they are self-contained; that is, they require no external signal to provide a
navigation solution. In terms of the data rate, the navigation states can be computed
at a rate limited only by the processing power of the flight computer.
In contrast to an inertial navigator, GPS requires an external signal to operate, and although
the navigation solution accuracy depends on the signal quality and the geometry of the
satellites in view, this error is not a function of time (Misra and Enge 2001). Although there are
specialized GPS receivers that can generate data at up to 100 Hz, most low-cost GPS receivers
provide output data at 1–10 Hz. In other words, although the GPS navigation solution has
long-term stability, the bandwidth of the solution is much lower than that of an inertial
navigator.
Integration of INS with GPS allows a navigation solution that has the high
bandwidth of the inertial sensors and the drift-free long-term stability of the GPS
solution. Attitude determination is an integral part of strapdown INS. This is
because the specific force measured by the accelerometer needs to be transformed
into the navigation frame in which the position and velocity are calculated. This
transformation calls for the knowledge of the orientation of the platform (defined as
the attitude).
The architecture of the INS/GPS filter using an extended Kalman filter (EKF)
(Simon 2006) is shown in Fig. 1.21. The state vector (in the Euler angle representation)
comprises the position, velocity, attitude, and the accelerometer and gyro biases:

$$x = \begin{bmatrix} L & \Lambda & h & V_N & V_E & V_D & \phi & \theta & \psi & b_{a_x} & b_{a_y} & b_{a_z} & b_{g_x} & b_{g_y} & b_{g_z} \end{bmatrix}^T \quad (1.22)$$

Figure 1.21 Block diagram of INS/GPS integration (Source: Valavanis and Vachtsevanov (2014))
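The predict/update structure of such a filter can be sketched generically as below. The process model f, the measurement model h, and their Jacobians are placeholders; an actual INS/GPS mechanization would implement the 15-state model of Equation (1.22) with GPS position and velocity measurements.

```python
# Generic extended-Kalman-filter predict/update cycle of the kind used in the
# INS/GPS architecture of Fig. 1.21. The models f, h and their Jacobians are
# placeholder callables, not a full INS mechanization.
import numpy as np

def ekf_predict(x, P, f, F_jac, Q):
    """Propagate the state with the (nonlinear) process model and its Jacobian."""
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H_jac, R):
    """Correct the INS prediction with a GPS measurement z."""
    H = H_jac(x_pred)
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

In a typical loop, the inertial prediction runs at the IMU rate and the update is applied whenever a new GPS fix arrives.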

1.13. Fuse Inertial Sensor and GPS data


An inertial navigation system (INS) consists of sensors that detect translational motion
(accelerometers), rotation (gyroscopes), and in some systems magnetic fields
(magnetometers). By fusing the sensor data continuously, an INS can dead reckon a
platform's pose without external sensors. However, INS pose estimation generally
decreases in accuracy over time and needs to be corrected using an external reference,
such as GPS readings. Common configurations for INS/GPS fusion include MARG (Magnetic, Angular Rate, and Gravity sensor system) + GPS for
aerial vehicles and Accelerometer+Gyroscope+GPS with nonholonomic constraints for
ground vehicles.
At the core of most UAV guidance and navigation systems, one can find Global
Navigation Satellite Systems (GNSS) (including the Global Positioning System – GPS) and
Inertial Navigation Systems (INS). Their complementary nature has been recognized, and as
a result, GPS and INS sensors are the preferred sensor couple for the majority of autopilot
systems. More detail can be found in Valavanis and Vachtsevanov (2014).

1.14. Electro-Optical (EO) Sensors


Nowadays, it is difficult to imagine a UAV without an EO sensor. They have
become standard fit-out onboard many aerial vehicles. The challenge is now in the
processing and interpretation of the information acquired by EO sensors. Perception
through EO sensors can be seen as one of the most important tasks in a UAV.
Whether it is performed for navigation or for surveillance (as an end application), it
defines the necessary peripherals to process the EO data. This section will introduce
some of the most common EO sensors typically found in UAVs. Some types of EO sensors
are given as follows:
a. Visible Spectrum
b. Infrared
i. Cooled Infrared Detectors
ii. Uncooled Infrared Detectors
c. Hyperspectral Imaging

1.15. Radio-Wave Sensors


1.15.1. Airborne Radio Detection and Ranging (RADAR)
Radar is a radio system used to determine range, altitude, direction, or speed of
objects. The system transmits controlled radio pulses which are reflected back by
objects. The distance to objects is estimated by measuring the signal return time.
The received power declines as the fourth power of the range, assuming transmitting
and receiving antennas are in the same location, hence the need for high transmitting
power in most cases. Speed can be estimated by tracking the change in distance with
time or by exploiting the Doppler effect (Stimson 1998) as shown in Fig. 1.22.

Figure 1.22 Airborne RADAR system

1.15.2. Light Detection and Ranging (LIDAR)


Lidar systems bring many advantages for some types of mapping projects, such as those
concerned with forestry and mining. Unlike camera sensors that typically capture data
passively across image frames, lidar systems are active systems. A lidar collects data by
emitting individual pulses of light and calculating the time that it takes for each pulse of light to
return to the lidar sensor. Many thousands of pulses of light per second will be emitted
across a swath, known as a scan line. Since the speed of light is known, the distance of
each pulse to its target (for example, the ground) can be calculated. To calculate an x, y, z
coordinate position of the target also requires that the exact position and orientation of the
aircraft be known and be dynamically assigned to each of these pulse returns. As a
result, direct georeferencing of UAV-mounted lidar is essential, using a high-grade INS
system. As with other mapping sensors such as aerial cameras, best results are obtained
when the INS and lidar system are mounted together on the UAV and when lever-arms are
calculated that account for the mounting angle of the lidar. A LIDAR sensor application is
shown in Fig. 1.23 and the working principle of a LIDAR sensor is shown in Fig. 1.24.

Figure 1.23 Sample Drone with LIDAR Sensor (by Phoenix Aerial System Design)

Figure 1.24 LIDAR Sensor Working Principle
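The time-of-flight ranging and direct georeferencing described above can be sketched as follows; the lever-arm, scan-angle, and beam-direction handling are simplified, and the example numbers are illustrative.

```python
# Sketch of time-of-flight ranging and simple direct georeferencing of a lidar
# return: range from the pulse round-trip time, then target position from the
# sensor position and attitude. Lever-arm and scan-angle effects are omitted.
import numpy as np

C_LIGHT = 299_792_458.0   # speed of light [m/s]

def pulse_range(round_trip_time_s: float) -> float:
    """One-way range from the two-way travel time of a lidar (or radar) pulse."""
    return 0.5 * C_LIGHT * round_trip_time_s

def georeference_return(sensor_pos_ned, R_body_to_ned, beam_dir_body, round_trip_time_s):
    """Target NED position: sensor position plus range along the beam, rotated by the INS attitude."""
    r = pulse_range(round_trip_time_s)
    return np.asarray(sensor_pos_ned) + r * (R_body_to_ned @ np.asarray(beam_dir_body))

# Example: nadir-pointing beam in level flight, 100 m above ground
print(georeference_return([0.0, 0.0, -100.0], np.eye(3), [0.0, 0.0, 1.0], 667e-9))
```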

1.16. Guidance
UAV guidance, navigation, and control (GNC) ultimately signifies the ability to
follow a desired trajectory through the sky. With attitude estimation established
and the inner loop stabilizing the aircraft, what is left is to guide the UAV along the desired
trajectory while rejecting disturbances such as wind. Again, while there are several ways to
implement this guidance control, this section discusses the guidance algorithms
implemented on the SLUGS autopilot (Lizarraga 2009).
The SLUGS guidance algorithm is based on an extension of a simple line-of-sight guidance
law originally developed for ground robotics. Much of the art of
waypoint guidance consists of determining which leg of the trajectory the UAV is
on and when to switch to the next leg of the trajectory. The basic approach is to define a
Serret-Frenet frame which points from the current waypoint to the next (the along-track unit
vector TE, along the current leg), with one axis pointing down (BE) and the third defined by the
cross product between the along-track, TE, and down, BE, unit vectors. The projection of the
UAV position onto the TE direction is used to determine when to switch to the next leg, as
sketched below. In practice this is better than using the distance from the next waypoint to
switch (especially in very windy conditions).
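A minimal version of the along-track switching test is sketched below; the waypoint coordinates are arbitrary and the 2D geometry is an assumption for clarity.

```python
# Sketch of the waypoint-switching test: project the UAV position onto the
# along-track unit vector of the current leg and switch when the projection
# exceeds the leg length. Waypoints below are illustrative.
import numpy as np

def should_switch_leg(p_uav, wp_from, wp_to):
    """True when the along-track projection of the UAV has passed the next waypoint."""
    leg = np.asarray(wp_to, float) - np.asarray(wp_from, float)
    t_e = leg / np.linalg.norm(leg)                       # along-track unit vector
    along_track = np.dot(np.asarray(p_uav, float) - np.asarray(wp_from, float), t_e)
    return along_track >= np.linalg.norm(leg)

print(should_switch_leg([480.0, 20.0], [0.0, 0.0], [500.0, 0.0]))   # False: not yet past
```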
1.17. General Tracking
The outer loop guidance law follows closely the work developed in Amidi and
Thorpe (1991), Park et al. (2004), and Park et al. (2007). The original derivation
is presented here for completeness. The guidance algorithm steers the velocity
vector toward the line of sight; this is one form of pursuit guidance. Making the
commanded acceleration proportional to $\sin\eta$ is only one of a number of possible
guidance laws. The basic guidance algorithm is to determine an aim point and steer
the velocity vector toward it.
Referring to Fig. 1.22, $V_g$ is the UAV’s ground speed, and C is a circular arc of
radius R that originates at the UAV and intercepts the desired path. $L_1$ is a constant
look-ahead distance from the UAV to the path in the desired direction of travel.
From elementary trigonometry:

$$\frac{|L_1|}{2} = R\sin\eta \quad (1.23)$$

Figure 1.22 Navigation control law geometry (Reproduction of Fig. 1 in Park et al. 2004)

Additionally, from elementary kinematics, it is known that the centripetal acceleration $a_c$
required for a point mass to follow the circular arc C is given by

$$a_c = \frac{V_g^2}{R} \quad (1.24)$$

Thus the UAV must command a lateral acceleration of $a_c$. Solving Equation (1.23) for $R$ and
substituting it into Equation (1.24) produces the following control law for the commanded
acceleration:

$$a_c = 2\,\frac{V_g^2}{|L_1|}\sin\eta \quad (1.25)$$
The only requirements for the implementation of this control law are to select
the look-ahead distance $|L_1|$ and to determine $\sin\eta$, the sine of the angle from the
velocity vector to $L_1$. The angle $\eta$ is sometimes called the line-of-sight angle, and $\sin\eta$
is found from the vector cross product of $V_g$ and $L_1$ as follows:

$$\sin\eta = \frac{V_g \times L_1}{|V_g|\,|L_1|} \quad (1.26)$$

For the UAV to actually track the desired trajectory, the lateral acceleration
command must be converted to an appropriate bank angle command using the
steady-state turn equation:

$$\phi_c = \tan^{-1}\frac{a_c}{g} \quad (1.27)$$

More detail can be found in Valavanis and Vachtsevanov (2014).
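Equations (1.25)-(1.27) can be coded compactly as below; the ground-speed and look-ahead vectors are illustrative 2D examples, and the 2D scalar cross product is used to obtain the signed sin η.

```python
# Sketch of the guidance law of Eqs. (1.25)-(1.27): lateral acceleration from
# the ground-speed vector and the look-ahead vector L1, then the bank-angle
# command from the steady-state turn relation. Example vectors are arbitrary.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def lateral_accel_cmd(v_g, l1):
    """a_c = 2*|Vg|^2/|L1| * sin(eta), with sin(eta) from the 2D scalar cross product."""
    vg = np.asarray(v_g, dtype=float)
    L1 = np.asarray(l1, dtype=float)
    cross_z = vg[0] * L1[1] - vg[1] * L1[0]               # signed 2D cross product
    sin_eta = cross_z / (np.linalg.norm(vg) * np.linalg.norm(L1))
    return 2.0 * np.linalg.norm(vg) ** 2 / np.linalg.norm(L1) * sin_eta

def bank_angle_cmd(a_c):
    """phi_c = atan(a_c / g), Eq. (1.27)."""
    return np.arctan(a_c / G)

v_g = [18.0, 2.0]     # ground-speed vector in the horizontal plane [m/s]
l1 = [50.0, 10.0]     # look-ahead vector from the UAV to the aim point [m]
a_c = lateral_accel_cmd(v_g, l1)
print(a_c, np.degrees(bank_angle_cmd(a_c)))
```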

CHAPTER 1 REFERENCES

Air Transport Association (ATA), Learning center (2008). http://learningcenter.airlines.org/
European Aviation Safety Agency, Airworthiness certification of unmanned aircraft
systems (UAS). Policy Statement E.Y013-01, 2009
O. Amidi, C. Thorpe, Integrated mobile robot control, in Proceedings of the SPIE, Boston,
vol. 1388, p. 504, 1991
R. Beard, T. McLain, Small Unmanned Aircraft: Theory and Practice (Princeton University
Press, Princeton, 2012)
N. A. Chaturvedi, A.K. Sanyal, N.H. McClamroch, Rigid-body attitude control. IEEE Control
Syst. 31(3), 30–51, 2011
B. Etkin, L.D. Reid, Dynamics of Flight: Stability and Control, 3rd edn. (Wiley, New York,
1995)
Federal Aviation Administration (FAA), FAA System Safety Handbook, Appendix A,
Washington, D.C., 2000
Federal Aviation Administration, Unmanned aircraft systems operations in the U. S. National
airspace system. Interim Operational Approval Guidance 08-01, 2008
M. Fishpool, International military and civilian unmanned aerial vehicle survey. Technical
report, Socolofi Research, 2010
E. Herlik, Unmanned Aerial Vehicles (UAVs) for commercial applications: global market &
technologies outlook 2011–2016. Technical report, Market Intel Group LLC, 2010
United States Air Force, Air Force Pamphlet 91-215, 1998
U.S. Department of Defense, Standard practice for system safety. MIL-STD-882D, 2000
P. G. Fahlstrom and T. J. Gleason, Introduction to UAV systems, Fourth ed., John Wiley &
Sons, Ltd, 2012. DOI:10.1002/9781118396780
D. Gebre-Egziabher, Design and performance analysis of low-cost aided dead reckoning
navigator. PhD thesis, Department of Aeronautics and Astronautics, Stanford
University, Stanford, 2004
S. Gleason, D. Gebre-Egziabher, GNSS Applications and Methods (Artech House, Boston),
2009
http://www.airportjournals.com/Display.cfm?varID=0609005. Accessed Apr 2008
J. Keller (Editor), Air Force asks General Atomics to upgrade UAV ground-control stations
for use with the Internet, March 20, 2014.
https://www.militaryaerospace.com/articles/2014/03/uav-control-upgrades.html
J.B. Kuipers, Quaternions and Rotation Sequences: A Primer with Applications to Orbits,
Aerospace, and Virtual Reality (Princeton University Press, Princeton, 2002)
M. Lizarraga, Design, implementation and flight verification of a versatile and rapidly
reconfigurable UAV GNC research platform. PhD thesis, Department of Computer
Engineering, University of California Santa Cruz, Santa Cruz, 2009
P. Misra, P. Enge, Global Positioning System, Signals, Measurements, and Performance
(GangaJamuna Press, Lincoln), 2001
S. Park, J. Deyst, J.P. How, A new nonlinear guidance logic for trajectory tracking, in AIAA
Guidance, Navigation and Control Conference and Exhibit, Portland, 2004
S. Park, J. Deyst, J.P. How, Performance and Lyapunov stability of a nonlinear path-
following guidance method. J. Guid. Control Dyn. 30(6), 1718, 2007
Range Safety Group, Range Commanders Council, Range safety criteria for unmanned air
vehicles – rationale and methodology supplement. Supplement to document 323-99,
1999
Range Safety Group, Range Commanders Council, Common risk criteria standards for
national test ranges: supplement. Supplement to document 321–07, 2007
R. Schultz, Ultralights, LSAs and kit airplanes – what’s the difference? Fla. Aviat. Bus. 2006
D. Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches (Wiley,
Hoboken), 2006
SLUGS website, http://slugsuav.soe.ucsc.edu
Small Unmanned Aircraft System Aviation Rulemaking Committee, Comprehensive set of
recommendations for sUAS regulatory development, 2009
Joint Capability Group on Unmanned Aerial Vehicles, STANAG 4671 – unmanned aerial
vehicle systems airworthiness requirements (USAR). Draft, NATO Naval Armaments
Group, 2007
D. Titterton, J. Weston, Strapdown Inertial Navigation Technology (Institution of Engineering
and Technology, Stevenage), 2004
U.S. Department of Defense Office of the Secretary of Defense, Unmanned systems
roadmap 2007–2032. Report, 2007
United States Air Force, Operational risk management (ORM) guidelines and tools. Air Force
Pamphlet 91-215, 1998
K. P. Valavanis and G. J. Vachtsevanov. Handbook of Unmanned Aerial Vehicles, Springer
Netherlands 2014
P. van Blyenburgh, UAV systems: global review. Presented at the Avionics’06 conference,
Amsterdam, 2006
CHAPTER 2

2. Estimation of the UAV Motion


2.1. Kalman Filter for State Estimation
2.1.1. Optimal Kalman Filter
The Kalman filter is an efficient recursive filter that estimates the internal state of a linear
dynamic system from a series of noisy measurements. The Kalman filter keeps track of the
estimated state of the system and the variance or uncertainty of the estimate. The estimate
is updated using a state transition model and measurements. x̂_{k/k−1} denotes the estimate of
the system's state at time step k before the k-th measurement y_k has been taken into
account; P_{k/k−1} is the corresponding uncertainty.
The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses
a series of measurements observed over time, containing noise (random variations) and
other inaccuracies, and produces estimates of unknown variables that tend to be more
precise than those based on a single measurement alone. More formally, the Kalman filter
operates recursively on streams of noisy input data to produce a statistically optimal
estimate of the underlying system state. The filter is named for Rudolf E. Kalman, one of the
primary developers of its theory. The Kalman filter has numerous applications in technology.
A common application is for guidance, navigation and control of vehicles, particularly aircraft
and spacecraft.
The algorithm works in a two-step process: the prediction step and updating step. In the
prediction step, the Kalman filter produces estimates of the current state variables, along
with their uncertainties. Once the outcome of the next measurement (necessarily corrupted
with some amount of error, including random noise) is observed, these estimates are
updated using a weighted average, with more weight being given to estimates with higher
certainty. Because of the algorithm's recursive nature, it can run in real time using only the
present input measurements and the previously calculated state and its uncertainty matrix;
no additional past information is required.
It is a common misconception that the Kalman filter assumes that all error terms and
measurements are Gaussian distributed. Kalman's original paper derived the filter using
orthogonal projection theory to show that the covariance is minimized, and this result does
not require any assumption, e.g., that the errors are Gaussian [1]. He then showed that the
filter yields the exact conditional probability estimate in the special case that all errors are
Gaussian-distributed.
Extensions and generalizations to the method have also been developed, such as the
extended Kalman filter, the unscented Kalman filter and the cubature Kalman filter which
work on nonlinear systems. The underlying model is a Bayesian model similar to a hidden
Markov model but where the state space of the latent variables is continuous and where all
latent and observed variables have Gaussian distributions.
The Kalman filter may be regarded as analogous to the hidden Markov model. There is a
strong duality between the equations of the Kalman Filter and those of the hidden Markov
model.
The Kalman filter model assumes the true state at time k is evolved from the state at (k−1)
according to

x_k = F_k x_{k−1} + B_k u_k + w_k    (2.1)

where
- F_k is the state transition model which is applied to the previous state x_{k−1};
- B_k is the control-input model which is applied to the control vector u_k;
- w_k is the process noise, which is assumed to be drawn from a zero-mean multivariate
  normal distribution with covariance Q_k.

At time k an observation (or measurement) z_k of the true state x_k is made according to

z_k = H_k x_k + v_k    (2.2)

where H_k is the observation model which maps the true state space into the observed space
and v_k is the observation noise, which is assumed to be zero-mean Gaussian white
noise with covariance R_k.

The initial state and the noise vectors at each step {x_0, w_1, ..., w_k, v_1, ..., v_k} are all assumed
to be mutually independent.
The Kalman filter is a recursive estimator. This means that only the estimated state from the
previous time step and the current measurement are needed to compute the estimate for the
current state. In contrast to batch estimation techniques, no history of observations and/or
estimates is required. In what follows, the notation x̂_{n/m} represents the estimate of x at
time n given observations up to, and including, time m ≤ n.
The state of the filter is represented by two variables:
- x̂_{k/k}, the a posteriori state estimate at time k given observations up to and including
  time k;
- P_{k/k}, the a posteriori error covariance matrix (a measure of the estimated accuracy of
  the state estimate).
A block diagram of the Kalman filter is illustrated in Fig. 2.1.

Figure 2.1 Discrete-Time Kalman Filter Block Diagram

The Kalman filter can be written as two distinct phases: "Predict" and "Update":
Predict
Predicted (a priori) state estimate:       x̂_{k/k−1} = F_k x̂_{k−1/k−1} + B_k u_k

Predicted (a priori) estimate covariance:  P_{k/k−1} = F_k P_{k−1/k−1} F_k^T + Q_k

Update

Innovation or measurement residual:        e_k = z_k − H_k x̂_{k/k−1}

Innovation (or residual) covariance:       P_{e,k} = H_k P_{k/k−1} H_k^T + R_k

Optimal Kalman gain:                       K_k = P_{k/k−1} H_k^T P_{e,k}^{−1}

Updated (a posteriori) state estimate:     x̂_{k/k} = x̂_{k/k−1} + K_k e_k

Updated (a posteriori) estimate covariance: P_{k/k} = (I − K_k H_k) P_{k/k−1}

The formula for the updated estimate and covariance above is only valid for the optimal
Kalman gain.
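The predict/update cycle above can be written compactly in code. The following is a minimal NumPy sketch; the matrices and the constant-velocity example at the end are illustrative assumptions, not taken from the text.

```python
# Compact sketch of the linear Kalman filter predict/update cycle listed above.
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    x_pred = F @ x + B @ u                 # a priori state estimate
    P_pred = F @ P @ F.T + Q               # a priori covariance
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    e = z - H @ x_pred                     # innovation (measurement residual)
    Pe = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(Pe)   # optimal Kalman gain
    x = x_pred + K @ e                     # a posteriori state estimate
    P = (np.eye(len(x)) - K @ H) @ P_pred  # a posteriori covariance
    return x, P, e, Pe

# Illustrative example: 1D constant-velocity model with noisy position measurements
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]]); B = np.zeros((2, 1)); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.25, 0.4]:
    x, P = kf_predict(x, P, F, B, np.zeros(1), Q)
    x, P, _, _ = kf_update(x, P, np.array([z]), H, R)
print(x)
```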
Invariants

If the model is accurate, and the values for x̂_{0/0} and P_{0/0} accurately reflect the distribution of
the initial state values, then the following invariants are preserved (all estimates have a
mean error of zero):

E[x_k − x̂_{k/k}] = E[x_k − x̂_{k/k−1}] = 0,   E[e_k] = 0    (2.3)

where E[·] is the expected value, and the covariance matrices accurately reflect the
covariance of the estimates:

P_{k/k} = cov(x_k − x̂_{k/k}),   P_{k/k−1} = cov(x_k − x̂_{k/k−1}),   P_{e,k} = cov(e_k)    (2.4)

2.1.2. Extended Kalman Filter


In the extended Kalman filter (EKF), the state transition and observation models need not be
linear functions of the state but may instead be non-linear (differentiable) functions:

x_k = f(x_{k−1}, u_k) + w_k    (2.5)

z_k = h(x_k) + v_k    (2.6)
The function f can be used to compute the predicted state from the previous estimate and
similarly the function h can be used to compute the predicted measurement from the
predicted state. However, f and h cannot be applied to the covariance directly. Instead a
matrix of partial derivatives (the Jacobian) is computed.
At each time step the Jacobian is evaluated with current predicted states. These matrices
can be used in the Kalman filter equations. This process essentially linearizes the non-linear
function around the current estimate.
Discrete-time predict and update equations
Predict

Predicted state estimate:        x̂_{k/k−1} = f(x̂_{k−1/k−1}, u_k)

Predicted covariance estimate:   P_{k/k−1} = F_k P_{k−1/k−1} F_k^T + Q_k

Update

Innovation or measurement residual:  e_k = z_k − h(x̂_{k/k−1})

Innovation (or residual) covariance: P_{e,k} = H_k P_{k/k−1} H_k^T + R_k

Near-optimal Kalman gain:            K_k = P_{k/k−1} H_k^T P_{e,k}^{−1}

Updated state estimate:              x̂_{k/k} = x̂_{k/k−1} + K_k e_k

Updated covariance estimate:         P_{k/k} = (I − K_k H_k) P_{k/k−1}

where the state transition and observation matrices are defined to be the following Jacobians:

F_k = ∂f/∂x evaluated at (x̂_{k−1/k−1}, u_k),   H_k = ∂h/∂x evaluated at x̂_{k/k−1}    (2.7)
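In practice the Jacobians of Eq. (2.7) are often evaluated numerically. The sketch below assumes generic differentiable functions f(x, u) and h(x) and uses simple forward differences; it is an illustrative outline under those assumptions, not a prescribed implementation.

```python
# Sketch of one EKF cycle with numerically evaluated Jacobians (forward differences).
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, u, z, f, h, Q, R):
    # Predict: propagate the state through f, the covariance through the Jacobian F_k
    Fk = numerical_jacobian(lambda s: f(s, u), x)
    x_pred = f(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update: linearize h around the predicted state
    Hk = numerical_jacobian(h, x_pred)
    e = z - h(x_pred)
    Pe = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(Pe)
    return x_pred + K @ e, (np.eye(x.size) - K @ Hk) @ P_pred
```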

2.1.3. Unscented Kalman Filter


When the state transition and observation models—that is, the predict and update functions
f and h—are highly non-linear, the extended Kalman filter can give particularly poor
performance. This is because the covariance is propagated through linearization of the
underlying non-linear model. The unscented Kalman filter (UKF) uses a deterministic
sampling technique known as the unscented transform to pick a minimal set of sample
points (called sigma points) around the mean. These sigma points are then propagated
through the non-linear functions, from which the mean and covariance of the estimate are
then recovered. The result is a filter which more accurately captures the true mean and
covariance. In addition, this technique removes the requirement to explicitly calculate
Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring
complicated derivatives if done analytically or being computationally costly if done
numerically).
Predict
As with the EKF, the UKF prediction can be used independently from the UKF update, in
combination with a linear (or indeed EKF) update, or vice versa.
The estimated state and covariance are augmented with the mean and covariance of the
process noise:

x̂^a_{k−1/k−1} = [ x̂^T_{k−1/k−1}   E[w_k]^T ]^T,
P^a_{k−1/k−1} = [ P_{k−1/k−1}  0 ; 0  Q_k ]    (2.8)

A set of 2L + 1 sigma points is derived from the augmented state and covariance, where L is
the dimension of the augmented state:

χ^0_{k−1/k−1} = x̂^a_{k−1/k−1}
χ^i_{k−1/k−1} = x̂^a_{k−1/k−1} + ( √((L+λ) P^a_{k−1/k−1}) )_i,   i = 1, ..., L    (2.9)
χ^i_{k−1/k−1} = x̂^a_{k−1/k−1} − ( √((L+λ) P^a_{k−1/k−1}) )_{i−L},   i = L+1, ..., 2L

where ( √((L+λ) P^a) )_i is the i-th column of the matrix square root of (L+λ) P^a,
using the definition: the matrix square root A of matrix B satisfies B = A A^T.
The matrix square root should be calculated using numerically efficient and stable methods
such as the Cholesky decomposition.
The sigma points are propagated through the transition function f:

χ^i_{k/k−1} = f( χ^{x,i}_{k−1/k−1}, χ^{w,i}_{k−1/k−1} ),   i = 0, ..., 2L    (2.10)

where χ^{x,i} and χ^{w,i} denote the state and process-noise components of the augmented sigma
points. The weighted sigma points are recombined to produce the predicted state and covariance:

x̂_{k/k−1} = Σ_{i=0}^{2L} W_s^i χ^i_{k/k−1}
P_{k/k−1} = Σ_{i=0}^{2L} W_c^i ( χ^i_{k/k−1} − x̂_{k/k−1} )( χ^i_{k/k−1} − x̂_{k/k−1} )^T    (2.11)

where the weights for the state and covariance are given by:

W_s^0 = λ/(L+λ),   W_c^0 = λ/(L+λ) + (1 − α² + β),   W_s^i = W_c^i = 1/(2(L+λ)),   λ = α²(L+κ) − L

α and κ control the spread of the sigma points, and β is related to the distribution of x. Typical
values are α = 10⁻³, κ = 0 and β = 2. If the true distribution of x is Gaussian, β = 2 is optimal.

Update
The predicted state and covariance are augmented as before, except now with the mean
and covariance of the measurement noise:

x̂^a_{k/k−1} = [ x̂^T_{k/k−1}   E[v_k]^T ]^T,   P^a_{k/k−1} = [ P_{k/k−1}  0 ; 0  R_k ]

As before, a set of 2L + 1 sigma points χ^i_{k/k−1} is derived from the augmented state and covariance,
where L is the dimension of the augmented state.

Alternatively, if the UKF prediction has been used, the sigma points themselves can be
augmented with additional points generated from the square root of R_k.

The sigma points are projected through the observation function h:

γ^i_k = h( χ^{x,i}_{k/k−1}, χ^{v,i}_{k/k−1} ),   i = 0, ..., 2L

The weighted sigma points are recombined to produce the predicted measurement and the
predicted measurement covariance:

ẑ_k = Σ_{i=0}^{2L} W_s^i γ^i_k,
P_{zz,k} = Σ_{i=0}^{2L} W_c^i ( γ^i_k − ẑ_k )( γ^i_k − ẑ_k )^T

The state–measurement cross-covariance matrix,

P_{xz,k} = Σ_{i=0}^{2L} W_c^i ( χ^{x,i}_{k/k−1} − x̂_{k/k−1} )( γ^i_k − ẑ_k )^T

is used to compute the UKF Kalman gain:

K_k = P_{xz,k} P_{zz,k}^{−1}

As with the Kalman filter, the updated state is the predicted state plus the innovation
weighted by the Kalman gain,

x̂_{k/k} = x̂_{k/k−1} + K_k ( z_k − ẑ_k )

and the updated covariance is the predicted covariance, minus the predicted measurement
covariance weighted by the Kalman gain:

P_{k/k} = P_{k/k−1} − K_k P_{zz,k} K_k^T
2.2. Fault Tolerant Control


Fault-tolerant control (FTC) is a collective term for a set of techniques that were
developed to increase plant availability and reduce the risk of safety hazards. Its aim is to
prevent simple faults from developing into serious failures. Fault-tolerant control merges several
disciplines to achieve this goal, including on-line fault diagnosis, automatic condition
assessment and calculation of remedial actions when a fault is detected [2].
FTC designs involve knowledge of the nature and/or occurrence of faults in the closed-loop
system either implicitly or explicitly using methods of fault detection and isolation (FDI), fault
detection and diagnosis (FDD), or fault estimation (FE). Fault tolerant controllers are
reconfigured or restructured using FDI/FDD information so that the effects of the faults are
reduced or eliminated within each feedback loop in active or passive approaches or
compensated in each control-loop using FE methods [3].
A scheme of a fault-tolerant control system is depicted in Fig. 2.2.

Figure 2.2 Scheme of fault-tolerant control system

FDI may be performed by using observer/filter structures. The estimated states are assumed
to be close to the true states until a fault occurs. After a fault takes place in the plant,
actuators, or sensors, the estimated states diverge from the true states, resulting in an
anomaly; detecting that anomaly is fault detection. If the location is determined, that is fault
isolation. If the size of the fault can be estimated, that is fault identification. After the fault is
diagnosed, the control signal is reconfigured such that the system is almost unaffected by the fault.
2.2.1. Active Fault-Tolerant Control of UAV Dynamics against Sensor-
Actuator Failures
UAVs have been attracting more attention during the last few years because of their
important contribution and cost-effective application in several tasks, such as surveillance,
search, rescue, remote sensing, geographic studies, and various military and security
applications. Modern high-performance aircraft are designed to accommodate system faults
automatically in the event of various failures, such as control surface damage, actuator
faults, and sensor faults.
Sensor Fault Detection and Isolation
Statistical Test for Fault Detection
The states of the UAV model can be estimated via a linear optimal Kalman filter (OKF). Two
hypotheses are introduced:
H0: System operates normally.
H1: Fault occurs in system.
The use of the innovation approach is suitable for detecting sensor faults [4]. To detect
failures changing the mean of the innovation sequence, the following statistical function can
be used:

η_k = Σ_{j=k−M+1}^{k} Δ̃_j^T Δ̃_j    (2.12)

where Δ̃ is the normalized innovation sequence of the OKF and M is the width of the sliding window.

The normalized innovation of the KF can be calculated as

Δ̃_k = [ H P_{k/k−1} H^T + R_k ]^{−1/2} Δ_k    (2.13)

The statistical function in Eq. (2.12) has a χ² distribution with Ms degrees of freedom, where s is
the dimension of the state vector. If the level of significance, α, is selected as

P{ χ² > χ²_{α,Ms} } = α,   0 < α < 1    (2.14)

then the threshold value χ²_{α,Ms} can be found from the table of quantiles of the χ²
distribution. Hence, when Hypothesis H1 is true, the statistical value of η_k will be greater
than the threshold value χ²_{α,Ms}, i.e.

H0: η_k ≤ χ²_{α,Ms}   ∀k
H1: η_k > χ²_{α,Ms}   ∃k    (2.15)
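A small sketch of this windowed chi-square test is given below; it assumes SciPy is available for the chi-square quantile and the matrix square root, and the function interfaces are illustrative.

```python
# Sketch of the innovation-based detection test of Eqs. (2.12)-(2.15).
import numpy as np
from scipy.stats import chi2
from scipy.linalg import sqrtm

def normalized_innovation(innovation, H, P_pred, R):
    """Eq. (2.13): whiten the innovation with the inverse square root of its covariance."""
    Pe = H @ P_pred @ H.T + R
    return (np.linalg.inv(sqrtm(Pe)) @ innovation).real

def detect_fault(norm_innov_window, s, alpha=0.05):
    """norm_innov_window: list of the last M normalized innovations (each of dimension s)."""
    M = len(norm_innov_window)
    stat = sum(d @ d for d in norm_innov_window)     # eta_k, Eq. (2.12)
    threshold = chi2.ppf(1.0 - alpha, df=M * s)      # chi^2_{alpha, Ms}
    return stat > threshold, stat, threshold
```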
2.2.2. Sensor Fault Isolation Algorithm
If a fault is detected, it is then necessary to determine which sensor set is faulty. For this
purpose, the s-dimensional normalized innovation sequence Δ̃ is transformed into s one-dimensional
sequences, and for each one-dimensional sequence Δ̃_k(i), i = 1, 2, ..., s, a corresponding
monitoring algorithm is run. The statistic of the faulty sensor set is assumed to be affected
much more than those of the other sensors. Let this statistic be denoted as S_k(i). When
max{ S_k(i) : i = 1, 2, ..., s } = S_k(m), i.e. S_k(m) > S_k(j) for all j ≠ m, it is judged that the
m-th sensor set has failed.

For the sensor fault isolation purpose, the sample variance of the one-dimensional innovation
sequences Δ̃_k(i), i = 1, 2, ..., s,

σ̂²_k(i) = [1/(M−1)] Σ_{j=k−M+1}^{k} [ Δ̃_j(i) − Δ̄_k(i) ]²,   i = 1, 2, ..., s

is used, where Δ̄_k(i) = (1/M) Σ_{j=k−M+1}^{k} Δ̃_j(i) is the sample mean of the one-dimensional
innovation sequences.

Let the statistic σ̂²_k(i)/σ²_k(i), which is the ratio of the sample and theoretical variances, be used to
verify the variances of the one-dimensional innovation sequences Δ̃_k(i), i = 1, 2, ..., s. When
Δ̃_k(i) ~ N(0, σ_k(i)), it is known that [10]:

ξ_k(i) / σ²_k(i) ~ χ²_{M−1},   ∀i, i = 1, 2, ..., s    (2.16)

where ξ_k(i) = (M − 1) σ̂²_k(i).

As σ²_k(i) = 1 for the normalized innovation sequence, it follows that

ξ_k(i) ~ χ²_{M−1},   ∀i, i = 1, 2, ..., s    (2.17)

Using Eq. (2.17), it can be shown that any change in the mean of the normalized innovation
sequence can be detected. When a fault affecting the mean or variance of the innovation
sequence occurs in the system, the statistic ξ_k(i) exceeds the threshold value χ²_{α,M−1},
which depends on the level of significance α and the number of degrees of freedom M − 1. The
decision making for isolation is done as follows: if hypothesis H1 is true, ξ_k(m) > ξ_k(j) for all
j ≠ m, and max{ ξ_k(i) : i = 1, 2, ..., s } = ξ_k(m), then it is judged that there is a fault in the
m-th channel.
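The channel-wise statistic of Eq. (2.17) is straightforward to compute over a sliding window. The sketch below assumes SciPy for the chi-square quantile; the window layout and return convention are illustrative choices.

```python
# Sketch of the channel-wise isolation statistic of Eq. (2.17): the sample variance of each
# normalized-innovation channel is scaled by (M-1) and the largest offending channel flagged.
import numpy as np
from scipy.stats import chi2

def isolate_faulty_channel(norm_innov_window, alpha=0.05):
    """norm_innov_window: array of shape (M, s) holding the last M normalized innovations."""
    W = np.asarray(norm_innov_window)
    M = W.shape[0]
    xi = (M - 1) * np.var(W, axis=0, ddof=1)      # xi_k(i) = (M-1) * sample variance
    threshold = chi2.ppf(1.0 - alpha, df=M - 1)
    m = int(np.argmax(xi))
    return (m if xi[m] > threshold else None), xi  # index of faulty channel, or None
```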

2.2.3. Adaptive TSKF Algorithm for State Estimation and Actuator Fault
Identification
Based on the linearized model of the open-loop aircraft around a trim point, and a
parameterization of two types of actuator faults, the following discrete-time model is used as
the design model of the adaptive TSKF (Two-Stage Kalman Filter):

x_{k+1} = A_k x_k + B_k u_k + E_k β_k + w_k^x
β_{k+1} = β_k + w_k^β    (2.18)
y_{k+1} = C_{k+1} x_{k+1} + v_{k+1}

where x_k ∈ R^n, u_k ∈ R^l, and y_{k+1} ∈ R^s are the state, control input, and output variables,
respectively, and β_k is the bias vector of dimension l, representing faults entering the
actuators. The noise sequences w_k^x, w_k^β and v_k are assumed to be zero-mean uncorrelated
white Gaussian noise sequences with

E{ [w_k^x; w_k^β; v_k] [ (w_j^x)^T  (w_j^β)^T  v_j^T ] } = diag( Q^x, Q^β, R ) δ_kj    (2.19)

where Q^x ≥ 0, Q^β ≥ 0, R > 0 and δ_kj is the Kronecker delta. The initial states x(0) and β(0) are
assumed to be uncorrelated with the white noise processes w^x, w^β and v, and have
covariances P_0^x and P_0^β, respectively.

The components

(2.20)

of the bias vector β_k describe the percentage reduction in control effectiveness when the
terms B_k u_k + E_k β_k are considered together, where

(2.21)
The no-fault case is represented by

(2.22)
a 100 ki % loss of control effectiveness in the ith actuator is

(2.23)
and a control surface stuck at magnitude βki degrees is represented by

(2.24)
2.2.4. Simulation Results
The detailed simulation results are presented in [4].
2.2.4.1. Simulation Results for Sensor/Actuator Fault Isolation
A pitch rate gyro fault is assumed to occur at t = 20 s. The fault in the pitch rate gyro is
simulated by adding a constant bias of 0.01 rad/s. By means of the TSKF, the squared error
between the actual states and the estimated states is plotted in Fig. 2.3.

Figure 2.3 Squared error between the actual states and the estimated states.

As the TSKF is not sensitive to actuator faults but sensitive to sensor faults, the jump in the
graph at t=20s reveals a sensor fault (the residual exceeds the selected threshold value
0.4×10⁻⁶).
2.2.4.2. Simulation Results for Sensor Fault Isolation
A Kalman Filter is used to detect and isolate the sensor faults. In the sensor fault isolation
simulations two scenarios are examined;
a. Fault in the sensor measuring the longitudinal state (pitch rate) of the UAV
b. Fault in the sensor measuring the lateral state (roll rate) of the UAV
The graphs of the statistic in Eq. (2.17) when a fault in the pitch rate gyro has occurred are shown
in Fig. 2.4. In the graphs, S_i (i = 1, 2, ..., 9) denotes ξ_k(i).
Figure 2.4 Fault isolation in the pitch rate gyro

After sensor fault detection, the decision making for sensor fault isolation is done via the
statistic in Eq. (2.17) as follows:
ξ_k(3) > ξ_k(j) for all j ≠ 3, and max{ ξ_k(i) : i = 1, 2, ..., 9 } = ξ_k(3); therefore it is judged that
there is a fault in the 3rd measurement channel (pitch rate gyro).


The graphs of statistics Eq. (2.17) when a fault in the roll rate gyro has occurred are shown
in Fig. 2.5.

Figure 2.5 Fault isolation in the roll rate gyro


In this case, max{ ξ_k(i) : i = 1, 2, ..., 9 } = ξ_k(7), so it is judged that there is a fault in the 7th
measurement channel (roll rate gyro).
2.2.4.3. Simulation Results for Actuator Fault Isolation and Identification
Throughout the simulations, the symbols seen in the figures are as follows: the state vector is
x = [x1 x2 x3 x4 x5 x6 x7 x8 x9]^T, collecting the longitudinal and lateral states of the UAV, and
xiest (i = 1, 2, ..., 9) denotes their estimates. The control surface fault parameter vector is
β = [β1 β2 β3 β4 β5 β6 β7 β8]^T, which collects the fault parameters of the individual control
channels.
The actuator faults are assumed to occur symmetrically, i.e. right and left elevators are
considered to move together.
Fault Scenario 1: Stuck fault in the elevator at 0.05 (rad) at 20 seconds.
As the TSKF is not sensitive to actuator stuck fault, there is no jump at t=20s in the graph in
contrast to sensor faults as seen in Fig. 2.6. The error is very small.

Figure 2.6 Squared Residual between Actual States and Estimated States in the case of Elevator Stuck Fault

Figure 2.7 Actual state variables and estimations when a stuck fault in the elevator occurred at 20 seconds. Since actual
and estimated state variables are too close two curves are seen as one curve.
In Fig. 2.7, it can be seen that the states are estimated correctly before and after occurrence
of the stuck fault in the elevator.
From Fig. 2.8, it can be concluded that a stuck actuator fault with magnitude 0.7 rad has
occurred in the elevator channel, because β̂1 converges to −1 and β̂5 converges to 0.7. When the
effectiveness-related fault parameter becomes −1, the estimated stuck magnitude can be read
from the corresponding control channel.

Figure 2.8 Actual Inputs and Estimations while the Elevator Stuck at 0.7 rad. at 20 seconds

Fault Scenario 2: 50% partial loss in the elevator at 20 seconds.


As the TSKF is not sensitive to actuator partial loss either, there is no jump at t=20s in the
graph in contrast to sensor faults as seen in Fig. 2.9. The error is very small.

Figure 2.9 Squared Residual between Actual States and Estimated States in the case of Actuator Partial Loss.

In Fig. 2.10, it can be seen that the states are estimated correctly before and after
occurrence of the partial loss in the elevator.
Figure 2.10 Actual state variables and estimations when 50% partial loss in the elevator occurred at 20 seconds. Since
actual and estimated state variables are too close two curves are seen as one curve.

In Fig. 2.11, β̂1 ≈ 0.5 and β̂5 ≈ 0 are obtained, which means that a 50% loss of control effectiveness
in the elevator has occurred.

Figure 2.11 Actual inputs and estimations when 50% partial loss in the elevator occurred at 20 seconds.

2.3. Estimation of the UAV Motion in the Presence of Sensor/Actuator Faults


2.3.1. Robust KF with the R-Adaptation Against Sensor Faults
2.3.1.1. RKF with Single Measurement Noise Scale Factor
In normal operating conditions, where no measurement malfunction is observed, the
optimal Kalman filter gives sufficiently good estimation results. However, when the
measurements are faulty because of malfunctions in the measurement system, such as
abnormal measurements, sudden shifts or step-like changes in the measurement channel,
the filter estimation outputs become inaccurate.

Therefore, a Robust Kalman Filter algorithm, which brings the fault tolerance to the filter and
secures accurate estimation results in case of faulty measurements without affecting the
remaining good estimation characteristic, should be introduced.

The basis of the Robust Kalman Filter (RKF) is the comparison of the real and theoretical values of
the covariance of the innovation sequence [5]. When the operational condition of the
measurement system mismatches the models used in the synthesis of the filter,
the Kalman filter gain is changed according to the deviation in the covariance matrix of the
innovation sequence. Under these circumstances, the covariance matrix of the innovation
sequence becomes

P_{e,k} = H_k P_{k/k−1} H_k^T + S_k R_k    (2.25)

and so the Kalman gain becomes

K_k = P_{k/k−1} H_k^T [ H_k P_{k/k−1} H_k^T + S_k R_k ]^{−1}    (2.26)

Here S_k is the single measurement noise scale factor.

With this approach, the Kalman gain is changed when the predicted observation H_k x̂_{k/k−1}
is considerably different from the actual observation y_k because of significant changes
in the operational condition of the measurement system. In other words, if the real value of
the filtration error exceeds the theoretical error, i.e.

tr[ e_k e_k^T ] ≥ tr[ H_k P_{k/k−1} H_k^T + R_k ]    (2.27)

the filter must be run robustly. Here tr[·] denotes the trace of the related matrix.

In order to determine the scale factor S_k, Eq. (2.25) is substituted into Eq. (2.27), taking into
account that the scale factor should be calculated at the point where the condition in Eq. (2.27)
is satisfied with equality:

tr[ e_k e_k^T ] = tr[ H_k P_{k/k−1} H_k^T ] + S_k tr[ R_k ]    (2.28)

Then, in light of the equality tr[ e_k e_k^T ] = e_k^T e_k, S_k can be written as

S_k = ( e_k^T e_k − tr[ H_k P_{k/k−1} H_k^T ] ) / tr[ R_k ]    (2.29)

In case of a malfunction in the measurement system, the adaptation of the Kalman filter is
performed by automatically correcting the Kalman gain. A sensor fault brings about an increase
in the scale factor S_k. A higher S_k causes a smaller Kalman gain, because the covariance
of the innovation sequence in Eq. (2.25) is also increased in the robust case. Consequently, the
small Kalman gain reduces the effect of the faulty innovation sequence on the state
estimation process. In all other cases, where the measurement system operates normally, the single
measurement noise scale factor (SMNSF) takes the value S_k = 1 and the filter runs
optimally.

Note that, due to the scale factor S_k, the covariance of the estimation error of the RKF
increases in comparison with the OKF. Therefore, the robust algorithm is operated only when the
measurements are faulty; in all other cases the procedure is run optimally with the regular
Kalman filter.
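A scalar version of Eq. (2.29) is computed directly from the current innovation, as sketched below. Clamping the factor at one, so the filter reverts to the optimal gain in the no-fault case, is an implementation choice consistent with the text rather than a formula from it.

```python
# Sketch of the single measurement noise scale factor, Eq. (2.29).
import numpy as np

def single_scale_factor(e, H, P_pred, R):
    """e: innovation vector; H, P_pred, R: measurement matrix, predicted covariance, noise covariance."""
    num = e @ e - np.trace(H @ P_pred @ H.T)
    return max(1.0, num / np.trace(R))   # S_k = 1 when no fault inflates the innovation
```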

2.3.1.2. RKF with Multiple Measurement Noise Scale Factors


As discussed, it is possible to adapt the filter by using a single scale factor as a corrective
term on the filter gain [5], but this is not an ideal procedure for complex multivariable
systems, since the filter performance differs for each state [6]. The preferred method is
to use a matrix built of multiple measurement noise scale factors to fix the relevant terms of
the measurement noise covariance matrix and, consequently, the Kalman gain.

In order to determine the scale matrix, an innovation-based process may be followed. The
Kalman filter innovation sequence is e_k = y_k − H_k x̂_{k/k−1}. Then, as the next step, the real and
theoretical values of the innovation covariance matrix must be compared. When there is a
measurement malfunction in the estimation system, the real error will exceed the theoretical one.
Hence, a scale matrix S_k is added to the algorithm as:

(1/μ) Σ_{j=k−μ+1}^{k} e_j e_j^T = H_k P_{k/k−1} H_k^T + S_k R_k    (2.30)

so that it can be determined by the formula

S_k = [ (1/μ) Σ_{j=k−μ+1}^{k} e_j e_j^T − H_k P_{k/k−1} H_k^T ] R_k^{−1}    (2.31)

Here, μ is the width of the moving window.

In case of normal operation, the scale matrix will be a unit matrix as S k  I . Here I
represents the unit matrix.

Nonetheless, as  is a limited number because of the number of the measurements and the
computations performed with computer implies errors such as the approximation errors and
the round off errors; S k matrix, found by the use of Eq. (2.31) may not be diagonal and may
have diagonal elements which are “negative” or lesser than “one” (actually, that is physically
impossible).

Therefore, in order to avoid such situation, composing scale matrix by the following rule is
suggested:

S   diag  s1 , s2 ,..., sn  (2.32)


where,
si  max 1, Sii  i  1, n (2.33)

Here, S_k,ii represents the i-th diagonal element of the matrix S_k. From this point on, if the
measurements are faulty, S_k* will change and so affect the Kalman gain matrix:

K_k = P_{k/k−1} H_k^T [ H_k P_{k/k−1} H_k^T + S_k* R_k ]^{−1}    (2.34)

Therefore, in case of any kind of malfunction, the related element of the scale matrix, which
corresponds to the faulty component of the measurement vector, increases, and that brings
about a smaller Kalman gain, which reduces the effect of the innovation on the state update
process. As a result, more accurate estimation results can be obtained.

Since the proposed RKF is not optimal due to the scale matrix S_k*, the robust algorithm
is again operated only when the measurements are faulty; in all other cases the procedure is
run optimally with the regular OKF.
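The construction of Eqs. (2.31)–(2.34) is sketched below; the moving-window data layout is an illustrative assumption.

```python
# Sketch of the multiple measurement noise scale factor matrix, Eqs. (2.31)-(2.34).
import numpy as np

def scale_matrix(innov_window, H, P_pred, R):
    E = np.asarray(innov_window)                     # shape (mu, s): last mu innovations
    C_hat = E.T @ E / E.shape[0]                     # (1/mu) * sum e_j e_j^T
    S = (C_hat - H @ P_pred @ H.T) @ np.linalg.inv(R)
    s_diag = np.maximum(1.0, np.diag(S))             # Eqs. (2.32)-(2.33): diagonalize, floor at 1
    return np.diag(s_diag)

def robust_gain(P_pred, H, R, S_star):
    return P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + S_star @ R)   # Eq. (2.34)
```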

2.3.1.3. Comparison of R-Adaptive RKFs


The simulations are realized in 1000 steps for a period of 100 seconds with a sampling time of
Δt = 0.1 s. As the experimental platform the ZAGI UAV is chosen, and the KF
applications are performed by regarding the dynamics and characteristics of this UAV [7].
In case of sensor fault, the simulations are also carried out for the OKF so as to compare
results with the RKF algorithms and understand efficiency of the proposed algorithm in a
better way.
In measurement malfunction scenario below, measurement fault is characterized by
multiplying the variance of the noise of the velocity component u measurement with a
constant term in between 30th and 60th seconds. Fig. 2.12, Fig. 2.13 and Fig. 2.14 show that
OKF outputs involve error while RKF algorithms achieve estimation of the states accurately.
In this case, it is not possible to detect the superiority of the RKF with MMNSF by examining the
estimation results alone, although it is a known fact that individually increasing the related scale
factors, which correspond to the faulty measurements, is more advantageous than disregarding all
measurements at the same time. Simulations give similar results when the noise increment
is implemented in the other measurement channels.

Figure 2.12 Estimation result for velocity component “u” by regular OKF in case of measurement noise increment.
Figure 2.13 Estimation Result for Velocity Component “u” by RKF with SMNSF in case of Measurement Noise Increment.

Figure 2.14 Estimation result for velocity component “u” by RKF with MMNSF in case of measurement noise increment.
Below, Table 2.1 presents the variation of the scale factor and the scalar measure of the scale
matrix.

Table 2.1 MNSF investigation in case of measurement noise increment

Parameter                                        20 s    40 s        45 s        50 s        60 s        90 s
Scale factor of RKF with SMNSF, S_k              1       279.2798    1           254.0174    6308.1023   1
Trace of scale matrix of RKF with MMNSF,
trace(S_k)                                       1       9457.1867   3973.6303   18241.576   4935.4913   1

2.3.2. Robust KF with the Q-Adaptation against Actuator Faults


An actuator fault in the system results in changes in the control distribution matrix. In a
similar manner to the R-adaptation procedure, in case of an actuator fault the real value of
the innovation covariance will exceed the theoretical one. Thus the basis of the Q-adaptation
is, likewise, obtaining an appropriate multiplier matrix for the Q matrix such that the real and
theoretical values of the innovation covariance match. Hence, a fading matrix Λ_k, built of
multiple fading factors (FFs), is added to the algorithm as

(1/μ) Σ_{j=k−μ+1}^{k} e_j e_j^T = H_k ( F_k P_{k−1/k−1} F_k^T + Λ_k G_k Q_k G_k^T ) H_k^T + R_k    (2.35)

Then the fading matrix can be determined as

Λ_k = H_k^{−1} [ (1/μ) Σ_{j=k−μ+1}^{k} e_j e_j^T − H_k F_k P_{k−1/k−1} F_k^T H_k^T − R_k ] ( G_k Q_k G_k^T H_k^T )^{−1}    (2.36)

For the specific case where all the states are measured (H_k = I), Eq. (2.36) reduces to

Λ_k = [ (1/μ) Σ_{j=k−μ+1}^{k} e_j e_j^T − F_k P_{k−1/k−1} F_k^T − R_k ] ( G_k Q_k G_k^T )^{−1}    (2.37)

In a similar manner to the R-adaptation, the obtained fading matrix should be
diagonalized, since the Q matrix must be a diagonal, positive definite matrix:

Λ_k* = diag( λ_1*, λ_2*, ..., λ_n* )    (2.38)

where

λ_i* = max{ 1, Λ_k,ii },   i = 1, ..., n    (2.39)

Here, Λ_k,ii represents the i-th diagonal element of the matrix Λ_k. From this point on, if there
is an actuator fault in the system, Λ_k* must be included in the estimation process as

P_{k/k−1} = F_k P_{k−1/k−1} F_k^T + Λ_k* G_k Q_k G_k^T    (2.40)

2.3.3. Integration of the Q and R Adaptation Filters


2.3.3.1. The Fault Detection and Isolation Process
In this section, an integration scheme for the R- and Q-adaptation methods is proposed, such
that a RAKF which is insensitive to both sensor and actuator faults is built as a result.
Because of the already proven superiority, the multiple-factors method is used for the
adaptation in both procedures.
When the adaptation is applied (either R or Q), the covariance of the estimation error of the
filter increases compared to the OKF. Therefore, the RAKF algorithm is operated only when a
fault is detected; in all other cases the procedure is run optimally with the regular
OKF. The process is controlled by the use of statistical information. At this point, the
following two hypotheses may be introduced:
- γ_0: the system is operating normally;
- γ_1: there is a malfunction in the estimation system.
To detect the failures, a statistical function may be defined as

ζ_k = e_k^T [ H_k P_{k/k−1} H_k^T + R_k ]^{−1} e_k    (2.41)

This statistical function has a χ² distribution with s degrees of freedom, where s is the
dimension of the innovation vector. If the level of significance, α, is selected as

P{ χ² > χ²_{α,s} } = α,   0 < α < 1    (2.42)

the threshold value χ²_{α,s} can be found. Hence, when the hypothesis γ_1 is correct, the
statistical value of ζ_k will be greater than the threshold value χ²_{α,s}, i.e.:

γ_0: ζ_k ≤ χ²_{α,s}   ∀k
γ_1: ζ_k > χ²_{α,s}   ∃k    (2.43)

On the other hand, after detecting the fault in the system, the key point is determining the type
of the fault (either a sensor or an actuator fault). When a regular KF is used, the decision
statistics change regardless of whether the fault is in the sensors or in the actuators.
If an RKF based on the Doyle-Stein condition is used, it is easy to distinguish between sensor and
actuator faults. The KF that satisfies the Doyle-Stein condition is referred to in [8,9] as the RKF
insensitive to actuator failures and may be used for fault isolation in this study as well.
Then, the appropriate adaptation (the R or Q adaptation) may be applied. The Doyle-Stein
condition is

K [ I + H Φ_s K ]^{−1} = B_s [ H Φ_s B_s ]^{−1}    (2.44)

Here K is the filter gain, I is the unit matrix, H is the system measurement matrix, B_s is the
control distribution matrix in continuous time and Φ_s = ( sI − F_s )^{−1}, where F_s is the system matrix
in continuous time.
The RKF is very useful for isolating the sensor and control surface failures, as it is insensitive
to the latter failures. If the KF process noise covariance matrix is chosen as

Q( q_r ) = Q̄ + q_r² B_s V B_s^T    (2.45)

then the RKF is obtained. Here Q̄ is the process noise covariance matrix for the nominal
plant, q_r is a parameter that approaches infinity, and V is any positive definite symmetric
matrix.
If a sensor fault occurs, the R-adaptation is applied; otherwise the procedure is continued with
the Q-adaptation. The algorithm architecture given in Fig. 2.15 summarizes the integrated R-
and Q-adaptation procedures. This new filter, in which the robust and self-adaptive KFs
are integrated, may be called the RAKF.
Figure 2.15 System architecture for the RAKF algorithm
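The decision logic of the RAKF can be sketched as below. The chi-square test implements Eqs. (2.41)–(2.43); routing the isolation decision through the residual of an auxiliary Doyle-Stein-based (actuator-insensitive) filter is an assumed rule consistent with the description above, and sharing the same innovation covariance for both residuals is a simplification of this sketch.

```python
# Sketch of the RAKF decision logic of Fig. 2.15 (illustrative, not a prescribed design).
import numpy as np
from scipy.stats import chi2

def rakf_decision(e_nominal, e_doyle_stein, H, P_pred, R, alpha=0.05):
    Pe = H @ P_pred @ H.T + R
    thr = chi2.ppf(1.0 - alpha, df=e_nominal.size)
    stat = e_nominal @ np.linalg.inv(Pe) @ e_nominal            # Eq. (2.41)
    if stat <= thr:                                             # Eq. (2.43): no fault detected
        return "run_OKF"
    # The Doyle-Stein-based RKF is insensitive to actuator faults: if its residual also
    # grows, the fault is attributed to the sensors; otherwise to the actuators.
    ds_stat = e_doyle_stein @ np.linalg.inv(Pe) @ e_doyle_stein
    return "R_adaptation" if ds_stat > thr else "Q_adaptation"
```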

2.3.3.2. Simulation of the RAKF Algorithm in the Presence of Sensor/Actuator Faults
The simulations are realized in 1000 steps for a period of 100 seconds with a sampling time of
Δt = 0.1 s. As the experimental platform the ZAGI UAV is chosen, and the KF
applications are performed by regarding the dynamics and characteristics of this UAV [7].
In case of sensor/actuator faults, the simulations are also carried out for the OKF, so as to
compare the results with the RAKF algorithm and understand the efficiency of the proposed
algorithm in a better way.
As the testing scenario for the RAKF algorithm, the sensor fault is characterized by
multiplying the variance of the noise of the measurement for the u velocity component with
a constant term between the 30th and 50th seconds. In the simulations, the actuator fault is
simulated by making the first-column elements of the longitudinal control distribution matrix
nearly zero between the 80th and 90th seconds.
The results are shown in Fig. 2.16 and Fig. 2.17. Another consequence of the considered fault
scenario is the good performance of the fault isolation scheme: regardless of the type of the
fault, the isolation scheme can distinguish the type of the fault effectively and the RAKF works
properly.
velocity "u" estimation
10
Kalman Estimation
u(m/s)

Actual Value
0

-10
0 10 20 30 40 50 60 70 80 90 100
5
error(m/s)

-5

-10
0 10 20 30 40 50 60 70 80 90 100
variance [(m/s) ]

1
2

0.5

0
0 10 20 30 40 50 60 70 80 90 100
time(sec)
Figure 2.16 “u” velocity estimation via OKF in case of actuator and noise increment sensor fault.

velocity "u" estimation


10
Kalman Estimation
u(m/s)

Actual Value
0

-10
0 10 20 30 40 50 60 70 80 90 100
5
error(m/s)

-5

-10
0 10 20 30 40 50 60 70 80 90 100
variance [(m/s) ]

1
2

0.5

0
0 10 20 30 40 50 60 70 80 90 100
time(sec)
Figure 2.17 “u” velocity estimation via RAKF in case of actuator and noise increment sensor fault.
CHAPTER 2 REFERENCES
[1] Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems".
Journal of Basic Engineering 82 (1): 35–45.
[2] Mogens Blanke, Christian Frei, Franta Kraus, J.Ron Patton, Marcel Staroswiecki (2000).
“What is Fault-Tolerant Control?” IFAC Proceedings, Volume 33, Issue 11, June 2000,
Pages 41-52.
[3] R. J Patton (1997). Fault-tolerant control: the 1997 situation, IFAC Fault Detection,
Supervision and Safety for Technical Processes, Kingston Upon Hull, UK.
[4] Hajiyev, Ch. M., and Caliskan, F. (2005). “Sensor and control surface/actuator failure
detection and isolation applied to F-16 flight dynamic”. Aircr. Eng. Aerosp. Technol., 77(2),
152 – 160.
[5] Hajiyev, C. (2007).”Adaptive Filtration Algorithm with the filter gain correction applied to
integrated INS/Radar altimeter”. Proceedings of the Institution of Mechanical Engineers, Part
G: Journal of Aerospace Engineering, 221, 847-885.
[6] Geng, Y., and Wang, J. (2008). “Adaptive estimation of multiple fading factors in Kalman
filter for navigation applications”. GPS Solutions, 12(4), 273-279. DOI: 10.1007/s10291-007-
0084-6

[7] Matthews, J.S. (2006). “Adaptive Control of Micro Air Vehicles, M.Sc. Thesis,
Department of Electrical and Computer Engineering”. Brigham Young University, Utah, USA,
December 2006.
[8] Hajiyev, C., and Caliskan, F. (2000). “Sensor/actuator fault diagnosis based on statistical
analysis of innovation sequence and robust Kalman filtering”. Aerospace Science and
Technology, Vol. 4, pp. 415-422. DOI: 10.1016/S1270-9638(00)00143-7.

[9] Hajiyev, Ch., and Caliskan, F. (2005). "Sensor and Control Surface/Actuator Failure
Detection and Isolation Applied to F-16 Flight Dynamics". Aircraft Engineering and
Aerospace Technology: An International Journal, 77(2), pp. 152-160.

[10] Hajiyev, Ch. (1999). Radyo Navigasyon (Radio Navigation). İstanbul: İTÜ (in Turkish)


[11] Grishin, Yu. P., and Kazarinov, Yu. M. (1985). Fault-Tolerant Dynamic Systems. Radio i
Svyaz, Moscow (in Russian)
CHAPTER 3

3. Distributed Cohesive Motion Control of Flight Vehicle Formations
3.1. Introduction
Aircraft that do not carry a pilot are called UAVs. UAV control can be performed by an
autopilot or by a pilot on the ground with the aid of a transmitter and receiver. In recent
years, UAVs have become an important building block in civilian areas such as exploration,
rescue, filmmaking, pollution and oil or gas monitoring, agriculture, hobby flying and journalism,
and in military fields such as reconnaissance, assault and defense.
When we look at nature, we can see that many animal species move in swarms. When we
examine these swarms, it can be seen that they move in specific formations. The properties of
these formations change under different circumstances, and it is seen that these formations
grant the animals certain advantages. Some examples are fish species that adjust their swarm
size depending on the danger level, and pigeons that are more successful at navigating when
they move in swarms. These formations seen in nature are a source of inspiration for creating
and maintaining formations of UAVs.
In this chapter, we discuss a decentralized cohesive motion control problem which aims to
preserve the geometric formation of swarms of autonomous aircraft moving in three
dimensions. We define the geometric formation through the distances between the vehicles. Our
control system aims to maintain the swarm formation, without causing problems such as
collisions between vehicles, while the swarm moves from a starting position to a final position.
In addition to techniques such as smooth switching and virtual target tracking, the design
benefits from graph theory: the swarm is represented by a graph that can be characterized
using the rigidity and persistence concepts. Finally, we discuss the extension of the distributed
control scheme from simple kinematic vehicle models to practical fixed-wing and quadrotor
kinematic and dynamic models, and the resulting system is validated through simulations.
3.2. Research Objective
Even though air traffic has been handled by centralized control systems throughout the
hundred-year history of aviation, recent developments show that in the near future it will not
be possible to use a central system to manage the air traffic caused by air vehicles flying
extremely close to each other above cities. Along with controlling their own flight, these vehicles
will need control mechanisms to handle cases such as multiple vehicles sharing the same
trajectory. This chapter focuses on a method for cohesive formation flight of quadrotors to
help overcome this problem.
3.3. Formation Flight
When we look at nature, we see some animals moving in formations for the benefits they
provide, such as easier navigation and easier defence against possible attacks. These formations
in nature have inspired research on decentralized UAV formations.

3.3.1. Formation in Nature


One example of formation movement in nature is pigeon flight. When analysed, it was seen that
pigeons navigate considerably better when they fly in formations. The result of an experiment
which studied this property of pigeon formations can be seen in Fig. 3.1 [4].
Figure 3.1 The effect of pigeon formations on navigation. Pigeons were released at point R and their movement to point H
was studied. Red: pigeons in formation; Blue: individual pigeons; Green: individual pigeons after one formation flight [4]

3.3.2. Formation Flight in Aerial Vehicles


Non-autonomous formation flight has been used as a military strategy since World War One.
Some of the benefits of these formations were ease of maneuvering when dodging, communication
via hand signs mid-flight, and making it easier for pilots not to lose each other in cloudy weather [6].
While formation flight is not used that often in civil aviation, there are studies showing that fuel
can be saved by reducing drag in formation flight [7].
3.4. Graph Theory and Rigid Formation Flight
We represent our vehicle swarms with graphs to study cohesive flight more easily. Graphs are
structures that consist of vertices (points) and edges/links between these vertices. In our
formations, we represent vehicles with vertices and the communication links between the
vehicles with directed edges. Every vehicle is responsible for keeping the distance between itself
and the relevant vehicles (indicated by edges). We use rigid graph theory to keep the formation
shape constant, and our graph is expected to be rigid and constraint consistent, i.e. persistent
[8][9]. Along with that, we use virtual target tracking and smooth switching to design a
decentralized system similar to [10], [8] and [11].
We use quadrotors due to their simple design, their agility in both open and closed
environments, and their low weight [12], [1]-[3].
3.4.1. Cohesive Control Model
Let vehicles A_1, ..., A_m (where m ≥ 4), which move in R³, be part of a swarm S. Our cohesive
motion control system aims to move this swarm from a starting state (position and
orientation) to a desired final state. A graph showing an example formation can be seen
in Fig. 3.2.
Figure 3.2 Leader-follower type 3-DoF graph

3.4.2. Graph Theory

We define our graph as G = (V, E), where every vehicle A_i is a member of the vehicle set
S = {A_1, ..., A_m}, V is the set of vertices, containing one vertex for every A_i in S, and E is
the set of edges, which represent control links between the vehicles. We denote the desired
distances by D = { d_ij | (i, j) ∈ E } and the formation by F = (S, G, D) [13]. The control design
can be asymmetric or symmetric. In symmetric control, both vehicles in a pair are responsible
for keeping the desired distance. In asymmetric control, only one vehicle of the pair is responsible
for keeping the desired distance; thus, in asymmetric control, communication and control are
relatively less complicated.

Graphs that follow the asymmetric design are called directed, and graphs that follow the
symmetric design are called undirected. In a directed graph, the direction of the edge that links
vertices i and j tells us which of the two vehicles is responsible for keeping the distance constant:
the vehicle at the starting vertex (the "follower") follows the vehicle the edge points to (the
"leader") [13].
3.4.3. Rigid and Persistent Formation Definitions
A formation in which the distance d_ij between every vehicle pair (A_i, A_j) joined by an edge is
kept constant is called rigid. If every vehicle in a formation satisfies its own distance constraints,
the formation is called constraint consistent. If a formation is both rigid and constraint
consistent, it is called persistent. If a formation is persistent with the minimum possible number
of links (|E| = 3|V| − 6), it is called minimally persistent. Examples of these formations are shown
in Fig. 3.3 [13].

Figure 3.3 Left to right: rigid, minimally rigid, and non-rigid graphs

3.4.4. Formation Flight Model


We use an asymmetric system [8,9] with a leader/follower hierarchy consisting of one "leader",
one "first follower", one "second follower" and any number of "nth followers". The leader does not
have any constraints on how to move. The first follower has to follow the leader, and the second
follower has to follow the leader and the first follower. Any remaining nth follower A_n has to
follow three vehicles A_j, A_k, and A_l, where j, k, l < n. An example of this hierarchy is given
in Fig. 3.2.

3.5. Real-Time Trajectory Calculation


In this subsection, we derive the three-dimensional formation flight algorithm by means of the
mathematical model of the three-sphere intersection. Then, we describe the real-time flight
trajectories of the aerial vehicles (5 vehicles).

3.5.1. Mathematical Model of Sphere Intersection

Figure 3.4 Sphere intersection model (left: perspective view, right: top view)

The visualisation of the three-sphere intersection can be seen in Fig. 3.4. The spheres can
be considered as centered at s_1 = (0,0,0), s_2 = (x_2, 0, 0), and s_3 = (x_3, y_3, 0), and the
intersection B(s_1, r_1) ∩ B(s_2, r_2) ∩ B(s_3, r_3) of the three spheres S(s_1, r_1), S(s_2, r_2), and
S(s_3, r_3) is not an empty set. The equations of the sphere surfaces are as follows:

x² + y² + z² = r_1²    (3.1)
(x − x_2)² + y² + z² = r_2²    (3.2)
(x − x_3)² + (y − y_3)² + z² = r_3²    (3.3)

The intersection coordinates x_I, y_I, and z_I are calculated as follows [13, 26]:

x_I = ( r_1² − r_2² + x_2² ) / ( 2 x_2 )    (3.4)
y² + z² = r_1² − x_I²    (3.5)
y_I = ( r_1² − r_3² + x_3² + y_3² − 2 x_3 x_I ) / ( 2 y_3 )    (3.6)
z_I = ± √( r_1² − x_I² − y_I² )    (3.7)
For sphere centers p_1, p_2, p_3 given in world coordinates, a homogeneous transformation T maps
them onto the canonical centers s_1, s_2, s_3 used above:

[s_i; 1] = T [p_i; 1],   ∀i ∈ {1, 2, 3}    (3.8)

T = [ R   −R p_1 ; 0   1 ],   R = [ R_1  R_2  R_3 ]^T    (3.9)

R_1 = ( p_2 − p_1 ) / ‖p_2 − p_1‖,
R_2 = ( (p_3 − p_1) − ((p_3 − p_1)·R_1) R_1 ) / ‖ (p_3 − p_1) − ((p_3 − p_1)·R_1) R_1 ‖,
R_3 = R_1 × R_2    (3.10)

T⁻¹ = [ R^T   p_1 ; 0   1 ]    (3.11)
The intersection of S(p_1, r_1), S(p_2, r_2), and S(p_3, r_3), as illustrated in Fig. 3.4, can be found
with the following steps:

1. To find s_1, s_2 and s_3, use Eq. (3.8) – Eq. (3.10).
2. To find x_I, y_I, and z_I, use Eq. (3.4) – Eq. (3.7).
3. To find the intersection point P_I in world coordinates, apply the inverse transformation of
   Eq. (3.11):

[P_I; 1] = T⁻¹ [ x_I, y_I, z_I, 1 ]^T    (3.12)
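The sketch below performs the same three-sphere intersection directly with unit vectors, avoiding the explicit homogeneous matrices; picking the mirror solution closest to a reference point (used later in Eq. (3.17)) and clamping a slightly negative radicand are implementation choices of this sketch.

```python
# Sketch of the three-sphere intersection of Eqs. (3.1)-(3.12) (trilateration).
import numpy as np

def sphere_intersection(p1, p2, p3, r1, r2, r3, p_ref):
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)            # R_1
    t = (p3 - p1) - np.dot(p3 - p1, ex) * ex
    ey = t / np.linalg.norm(t)                           # R_2
    ez = np.cross(ex, ey)                                # R_3 = R_1 x R_2
    x2 = np.dot(p2 - p1, ex)                             # canonical center s_2 = (x2, 0, 0)
    x3, y3 = np.dot(p3 - p1, ex), np.dot(p3 - p1, ey)    # canonical center s_3 = (x3, y3, 0)
    xI = (r1**2 - r2**2 + x2**2) / (2 * x2)                          # Eq. (3.4)
    yI = (r1**2 - r3**2 + x3**2 + y3**2 - 2 * x3 * xI) / (2 * y3)    # Eq. (3.6)
    zsq = max(r1**2 - xI**2 - yI**2, 0.0)                            # Eq. (3.7), clamped
    cands = [p1 + xI * ex + yI * ey + s * np.sqrt(zsq) * ez for s in (1.0, -1.0)]
    return min(cands, key=lambda c: np.linalg.norm(c - p_ref))       # closest to p_ref
```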


3.5.2. Trajectory Planning in Formation Flight

We aim not only to create trajectories for m ≥ 4 vehicles but also to keep the distances between
them constant and to prevent them from colliding. To reach this goal, the first three vehicles are
chosen as the main vehicles and named the leader, first follower and second follower. For every
follower A_i, the trajectory can be written in terms of a relative position:

p̄_i(t) = ( x̄_i(t), ȳ_i(t), z̄_i(t) ) = p_i(t) − p_1(t)    (3.13)

If the desired global position of A_i is denoted by p_di(t), the desired positions are generated as
follows:

p_d1(t) = p_1(t)    (3.14)

p_d2(t) = p_d1(t) + p̄_2,   p̄_2 = p_2 − p_1    (3.15)

p_d3(t) = p_d1(t) + p̄_3,   p̄_3 = p_3 − p_1    (3.16)

p_di(t) = p̄_I( t, p_i(t) ),   ∀i ∈ {4, ..., m}    (3.17)

where A_s, A_p and A_h are the vehicles that A_i follows, and p̄_I(t, p_i(t)) is the intersection point
of the spheres S(p_ds, d_is), S(p_dp, d_ip), and S(p_dh, d_ih) which is closest to p_i(t). S(·,·) is used to
represent a sphere, where the first argument is the center of the sphere and the second argument
is its radius.
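A compact sketch of this desired-position recursion, reusing the sphere_intersection() helper sketched in Section 3.5.1, is given below; the dictionary-based data layout and argument names are illustrative assumptions.

```python
# Sketch of the desired-position generation of Eqs. (3.14)-(3.17).
def desired_positions(p, offset2, offset3, followed, d):
    """p: dict i -> current position (numpy array); offset2/offset3: constant desired offsets of
    A_2 and A_3 from the leader; followed: dict i -> indices of the three vehicles A_i follows;
    d: dict (i, j) -> desired distance d_ij. Returns dict i -> desired position p_di."""
    pd = {1: p[1],                        # Eq. (3.14): the leader trajectory is unconstrained
          2: p[1] + offset2,              # Eq. (3.15)
          3: p[1] + offset3}              # Eq. (3.16)
    for i in sorted(k for k in p if k >= 4):
        a, b, c = followed[i]             # the three vehicles A_i keeps distances to
        pd[i] = sphere_intersection(pd[a], pd[b], pd[c],
                                    d[(i, a)], d[(i, b)], d[(i, c)], p[i])   # Eq. (3.17)
    return pd
```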

3.6. Distributed Control Design for Quadrotor Formations

In this section we describe the formation hierarchy in more detail, starting by modeling every
individual vehicle A_i.

3.6.1. Distributed Control Design

Our distributed control system is designed using vehicles A_i (ignoring their geometric
properties for simplicity) modeled by the following single-velocity-integrator
kinematics:

ṗ_i(t) = v_i(t)    (3.18)
‖v_i(t)‖ ≤ v̄    (3.19)

where p_i(t) = [x_i(t), y_i(t), z_i(t)]^T is the position and v_i(t) = [v_xi(t), v_yi(t), v_zi(t)]^T ∈ R³ is
the velocity of the center of mass of vehicle A_i at time t.
Since our formation design has four main types of vehicles (leader, first follower, second
follower and nth follower), each with a different number of degrees of freedom (DoF), four
different controllers are designed, all of the common form

v_i(t) = v̄ σ_i(t) ( p_di(t) − p_i(t) ) / ‖ p_di(t) − p_i(t) ‖    (3.20)

σ_i(t) = 0                        if ‖p̃_i(t)‖ < ε
σ_i(t) = ( ‖p̃_i(t)‖ − ε ) / ε     if ε ≤ ‖p̃_i(t)‖ ≤ 2ε    (3.21)
σ_i(t) = 1                        if ‖p̃_i(t)‖ > 2ε

where p̃_i(t) = p_di(t) − p_i(t) is the position error, A_1 is the leader, A_2 is the first follower, A_3 is
the second follower and A_i is the i-th follower which follows A_j, A_k, and A_l (j, k, l < i); ε > 0 is a
small design constant, v̄ is the constant maximum velocity, and σ_i(t) smoothly switches the
command off for small tolerable errors in A_i's final position. The four controllers differ only in
how the desired position p_di(t) is generated (Eqs. (3.14)–(3.17)).

Applying v_i(t) of Eq. (3.20) as the control signal of each vehicle gives

v_i(t) = v̄ σ_i(t) p̃_i(t) / ‖p̃_i(t)‖    (3.22)
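A minimal sketch of this velocity command with the smooth-switching term is given below; the default values of v̄ and ε are illustrative design constants.

```python
# Sketch of the velocity command of Eqs. (3.20)-(3.22) with smooth switching, Eq. (3.21).
import numpy as np

def velocity_command(p_i, p_di, v_bar=2.0, eps=0.1):
    err = p_di - p_i
    dist = np.linalg.norm(err)
    if dist < eps:                      # inside the tolerance ball: stop
        sigma = 0.0
    elif dist <= 2 * eps:               # smooth transition region
        sigma = (dist - eps) / eps
    else:
        sigma = 1.0
    return np.zeros(3) if dist == 0 else v_bar * sigma * err / dist
```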

3.6.2. Distributed Control Design for Quadrotor Formations

In this section we first discuss quadrotor dynamic modelling and then design control laws using
Eqs. (3.13)−(3.17).

3.6.2.1. Modeling of Quadrotor Vehicles

Figure 3.5 Quadrotor free-body diagram

To calculate the applied force F_i ∈ R³ and moment M_i ∈ R³ for every vehicle A_i, we use
the following nonlinear dynamic model [3][13][14]:

F_i = m_i p̈_i    (3.23)

M_i = I_i ẇ_i + w_i × I_i w_i    (3.24)

where m_i is the quadrotor mass, I_i ∈ R^{3×3} is the body-fixed inertia matrix, and w_i ∈ R³ is the
angular velocity in the body frame.
In Eq. (3.24), the superposition of the angular momentums of the rotors is assumed to be
zero since the momentums from the counterrotating propellers cancel each other when
yaw is constant [3]. The free body diagram in Fig. 3.5 shows the main forces and
moments of the quadrotor. Each rotor 𝑗 ∈ {1, 2, 3, 4} generates a thrust force 𝑇
perpendicular to the rotor plane and produces a moment 𝑀 .
Orientation control is handled with the methods explained in Subsection 3.6.2.2. D_i is the
quadrotor body drag and is a function of vehicle and wind speed. The acceleration due to
gravity is denoted by g. The total force F_i is given as follows:

F_i = −D_i e_v + m_i g e_D + Σ_{j=1}^{4} ( −T_j R_j e_3 )    (3.25)

where R_j is the rotation matrix from the plane of rotor j to inertial coordinates, e_v is the
instantaneous velocity direction in the Earth-fixed frame, e_D denotes the unit vector along the
down direction in North-East-Down (NED) inertial coordinates, and e_3 represents the unit
vector along the z-axis of the body-fixed coordinates of the quadrotor.
The total moment M_i is given as follows:

M_i = Σ_{j=1}^{4} ( M_j + r_j × ( −T_j R_j^b e_3 ) )    (3.26)

where R_j^b is the rotation matrix from the plane of rotor j to body coordinates and r_j is the
position vector of rotor j with respect to the center of gravity.

Eq. (3.23) – Eq. (3.26) form the nonlinear dynamics of the quadrotor vehicles. An
alternative simplified dynamic model is as follows:

m z̈ = u_1 − m g    (3.27)
m ÿ = u_1 sin φ ≅ u_1 φ    (3.28)
m ẍ = −u_1 sin θ ≅ −u_1 θ    (3.29)

where φ and θ are the roll angle and pitch angle, respectively, and u_1 represents the total
thrust on the body along the z-axis.

3.6.2.2. Trajectory Control

In this section we adapt the attitude and altitude control laws for a single quadrotor presented in
[3]. The attitude control loop can adapt to fast-changing commands and stabilizes the quadrotor
attitude. The attitude and altitude loops use the thrust of the vehicle to create the lateral and
vertical accelerations.

3.6.2.2.1. Attitude and Altitude Control

At small attitude angles (up to ±30°) and low angular speeds, the equations of motion can
be decoupled about each attitude axis. Hence, the effective control inputs u_φ, u_θ, and
u_ψ about each axis (roll, pitch, and yaw) can be considered to be applied independently
[3,14]. The effective control inputs for each axis can then be combined to generate the
corresponding thrust commands:

T_1 = −u_φ + u_θ + u_ψ,
T_2 =  u_φ − u_θ + u_ψ,    (3.30)
T_3 =  u_φ + u_θ − u_ψ,
T_4 = −u_φ − u_θ − u_ψ
that need to be applied to rotors 1, ⋯ , 4, respectively. These thrust commands are the
control inputs of the quadrotor vehicle’s attitude and trajectory tracking control system. The
time delay in thrust can be approximated as a first-order delay and has to be considered in
controlling each attitude angle. The linearized transfer function for the roll axis is as follows
[3]:

Φ(s) / U_φ(s) = l / ( I_x s² ( τ s + 1 ) )    (3.31)

where I_x is the component of the inertia matrix I for the roll axis, l is the distance of
each rotor to the center of gravity, and τ is the thrust time delay. The transfer functions for
the pitch and yaw axes can be obtained in a similar way.

We design the following tracking controller based on proportional–derivative–double-derivative
(PDD) and pole placement designs, using root-locus techniques:

U_φ(s) = C_φ(s) [ Φ_d(s) − Φ(s) ]    (3.32)

C_φ(s) = k_p + k_d s/(s + p) + k_dd s²/(s + p)²    (3.33)

where Φ_d(s), k_dd, and p are the desired roll angle, the double-derivative gain, and the filtering
term, respectively. p determines the pole locations of the filter used in the derivative and
double-derivative actions. The coefficients of C_φ(s) are tuned by applying root-locus graphical
tools to the simplified model presented in this section and the transfer function in Eq. (3.31), to
provide faster and more accurate performance. The corresponding control laws for all
angles φ, θ, and ψ (in the body frame) are obtained as follows:

u_φ = C_φ(s) ( φ_d − φ )    (3.34)
u_θ = C_θ(s) ( θ_d − θ )    (3.35)
u_ψ = C_ψ(s) ( ψ_d − ψ )    (3.36)

For the implementation of these controllers, rate gyros are used as the measurement
units, and the measured data are filtered with standard filtering techniques. At each time t,
the thrust commands T_1(t), …, T_4(t) are obtained by processing u_𝜙, u_𝜃, and u_𝜓.
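
A minimal sketch of this mixing step is given below for a plus-configuration quadrotor; the sign convention and the saturation limits are assumptions, so the rotor ordering may differ from the actual hardware.

```python
# Minimal sketch of the thrust-command mixing step described around Eq. (3.30):
# the per-axis effective inputs u_phi, u_theta, u_psi, together with a
# collective term, are combined into individual rotor thrust commands T1..T4
# for a plus-configuration quadrotor. The sign convention is an assumption.
import numpy as np

def mix(u_phi, u_theta, u_psi, u_collective, t_min=0.0, t_max=8.0):
    """Map effective axis inputs to rotor thrusts, with simple saturation."""
    T = np.array([
        -u_theta + u_psi + u_collective,   # rotor 1 (front)
         u_phi   - u_psi + u_collective,   # rotor 2 (right)
         u_theta + u_psi + u_collective,   # rotor 3 (back)
        -u_phi   - u_psi + u_collective,   # rotor 4 (left)
    ])
    return np.clip(T, t_min, t_max)        # rotors cannot produce negative thrust

# Example: hover collective with a small positive pitch correction.
print(mix(0.0, 0.3, 0.0, 2.94))
```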

Fig 3.6: A single quadrotor nested control structure

In a similar way, the altitude controller is derived for the stabilization of the quadrotor
altitude during tracking actions. The accelerations are obtained with accelerometers, and
feedback linearization is used to compensate for the gravity offset and thrust deflection.
The tolerance of the controller is limited to ±30° due to the plant model. The following
proportional–integral–derivative (PID) control law is proposed for altitude tracking:

u_1 = k_d (ż_d − ż) + k_p (z_d − z) + k_i ∫ (z_d − z) dt + T_0 (3.37)

where z is the altitude, z_d is the reference altitude, and T_0 is the nominal thrust. The
quadrotor altitude dynamics are modeled by a second-order transfer function that is
obtained by the linearization of Eq. (3.27) together with Eq. (3.26) and Eq. (3.30).
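
A discrete-time realization of the altitude law in Eq. (3.37) can be as simple as the following sketch; the gains, nominal thrust and sample time are assumed values.

```python
# Discrete-time sketch of the altitude PID law in Eq. (3.37) with the nominal
# thrust feedforward T0. Gains, nominal thrust and sample time are assumed.
class AltitudePID:
    def __init__(self, kp, kd, ki, T0, dt):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.T0, self.dt = T0, dt
        self.integral = 0.0

    def update(self, z_d, z, zdot_d, zdot):
        err = z_d - z
        self.integral += err * self.dt                 # integral of (z_d - z)
        return (self.kd * (zdot_d - zdot)
                + self.kp * err
                + self.ki * self.integral
                + self.T0)                             # gravity / nominal-thrust offset

# Example: one controller step while 0.5 m below the reference altitude.
ctrl = AltitudePID(kp=7.5, kd=7.0, ki=6.0, T0=1.2 * 9.81, dt=0.01)
print(ctrl.update(z_d=1.5, z=1.0, zdot_d=0.0, zdot=0.0))
```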

3.6.2.2.2. Reference Angle Generator


In this section, we present a scheme to generate the reference angles 𝜙_d, 𝜃_d, 𝜓_d (for the
attitude and altitude controllers) from the desired x and y positions generated by the
distributed coordination and sphere intersection algorithms described in Section 3.5. The
general process for reference angle generation can be seen in Fig. 3.6. The generation of
𝜙_d, 𝜃_d, and 𝜓_d is implemented via a PID control design, similar to that of [10], based on
Eq. (3.27) – Eq. (3.29) as follows:

U(s) = k_p + k_i / s + k_d s (3.38)
𝜙_d = U_y(s)(y_d − y) (3.39)
𝜃_d = U_x(s)(x_d − x) (3.40)
𝜓_d = 0 (3.41)

where U_x(s) and U_y(s) are PID compensators of the form in Eq. (3.38) with separately tuned gains.
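
A possible discrete-time implementation of this reference angle generator is sketched below; the gains, the sample time and the ±30° clamp (matching the small-angle validity region mentioned earlier) are assumed values.

```python
# Sketch of the reference angle generator of Eqs. (3.38)-(3.41): PID action on
# the x/y position errors produces theta_d and phi_d, and psi_d is held at
# zero. Gains, sample time and the +/-30 degree clamp are assumed values.
import numpy as np

class RefAngleGenerator:
    def __init__(self, kp, ki, kd, dt, max_angle=np.deg2rad(30)):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.max_angle = max_angle
        self.int_x = self.int_y = 0.0
        self.prev_ex = self.prev_ey = 0.0

    def update(self, x_d, x, y_d, y):
        ex, ey = x_d - x, y_d - y
        self.int_x += ex * self.dt
        self.int_y += ey * self.dt
        dex = (ex - self.prev_ex) / self.dt
        dey = (ey - self.prev_ey) / self.dt
        self.prev_ex, self.prev_ey = ex, ey

        theta_d = self.kp * ex + self.ki * self.int_x + self.kd * dex   # Eq. (3.40)
        phi_d   = self.kp * ey + self.ki * self.int_y + self.kd * dey   # Eq. (3.39)
        psi_d   = 0.0                                                   # Eq. (3.41)
        clamp = lambda a: float(np.clip(a, -self.max_angle, self.max_angle))
        return clamp(phi_d), clamp(theta_d), psi_d

# Example: one update step with a 1 m error along x (assumed gains).
gen = RefAngleGenerator(kp=0.72, ki=0.3, kd=0.72, dt=0.01)
print(gen.update(x_d=1.0, x=0.0, y_d=0.0, y=0.0))
```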

3.6.2.3. Summary of the Controllers

We assume that each quadrotor agent A_i knows its current position p_i(t) and that its desired
position p_di(t) is generated online using Eq. (3.14) – Eq. (3.17). At each time t, A_i takes
its current position p_i(t) as the last visited waypoint and the desired position p_di(t) as
the next waypoint. Based on this setting, the low-level control scheme of A_i is
shown in Fig. 3.6. In this scheme, the reference angle generator is given by Eq. (3.38) –
Eq. (3.41), and the attitude and altitude controller is given by Eq. (3.30) and Eq. (3.33) –
Eq. (3.36), where z_d is taken as the z-component of p_di(t) at each time instant t and the
yaw channel is set to zero.
Considering the linearized model of the quadrotor in Eq. (3.31) and the PDD
control law in Eq. (3.32), it can be seen that the closed-loop system is stabilized with
zero steady-state error. A generic proof of the asymptotic stability of PID-type controllers
utilizing the Lyapunov-based event-triggering scheme can be found in [15]. The
quadrotor control parameters are tuned using root-locus techniques to obtain a stable and
desirable step response, similar to [15]. A summary of the theoretical tracking and error
performance analysis of PID-based quadrotor controllers is provided in [15] and [16].
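
To make the nested structure of Fig. 3.6 concrete, the following self-contained sketch simulates only the outer loops: position errors generate 𝜙_d and 𝜃_d, an altitude loop generates the thrust, and the inner attitude loop is replaced by a first-order lag. The gains, mass and lag time constant are assumed values, so this is an illustration of the loop ordering rather than the chapter's tuned design.

```python
# Highly simplified, self-contained sketch of the nested structure in Fig. 3.6:
# an outer position loop generates phi_d/theta_d, the altitude loop generates
# the thrust u1, and the inner attitude loop is replaced here by a first-order
# lag. All gains, the mass and the lag time constant are assumed values.
import numpy as np

m, g, dt = 1.2, 9.81, 0.01
tau_att = 0.15                      # assumed attitude-loop time constant [s]
kp_xy, kd_xy = 0.7, 0.9             # assumed outer-loop gains
kp_z, kd_z = 7.5, 7.0               # assumed altitude gains

pos = np.zeros(3); vel = np.zeros(3)
phi = theta = 0.0
target = np.array([2.0, -1.0, 1.5])

for _ in range(int(10.0 / dt)):
    # Outer loop: position errors -> reference angles (cf. Eqs. (3.38)-(3.40));
    # signs follow the simplified model of Eqs. (3.27)-(3.29).
    ex, ey = target[0] - pos[0], target[1] - pos[1]
    theta_d = np.clip(-(kp_xy * ex - kd_xy * vel[0]), -0.5, 0.5)
    phi_d   = np.clip( (kp_xy * ey - kd_xy * vel[1]), -0.5, 0.5)

    # Altitude loop (cf. Eq. (3.37) without the integral term)
    u1 = kp_z * (target[2] - pos[2]) - kd_z * vel[2] + m * g

    # Inner attitude loop approximated by a first-order lag
    phi   += dt * (phi_d - phi) / tau_att
    theta += dt * (theta_d - theta) / tau_att

    # Simplified translational dynamics, Eqs. (3.27)-(3.29)
    acc = np.array([-u1 * theta / m, u1 * phi / m, (u1 - m * g) / m])
    vel += acc * dt
    pos += vel * dt

print("final position:", np.round(pos, 3))
```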

3.7. Simulation Results

In this section, we present simulation results for the control systems described in the previous sections.

3.7.1. Quadrotor Formation Flight


In this section, we test the formation control algorithm developed in the previous sections on
quadrotor vehicles. Fig. 3.7 shows the quadrotor trajectories, and Fig. 3.8 shows the distances
between the quadrotors. These results show that the quadrotors reach their final destinations
without colliding and without changing the formation shape.
Fig 3.7: Quadrotor simulation: pathway (red: leader, blue: first follower, green: second follower, rest are ordinary
followers)

Fig 3.8: Quadrotor simulation: distances between vehicles (𝒑𝟏 : Leader position, 𝒑𝟐 : First follower position, 𝒑𝟑 : Second
follower position, 𝒑𝒏 : nth follower position)

The parameters used in the simulation are as follows: ε = 0.01 and
d_ij = 1 m for (i, j) ∈ {(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (1, 5), (2, 5), (3, 5)}. Initial
positions: p_1 = (1, 0, 0), p_2 = (0.5, 0.866, 0), p_3 = (0, 0, 0), p_4 = (0.5, 0.2887, 0.8165), and
p_5 = (0.5, 0.2887, −0.8165). Desired final positions: p_d1 = (−3.9697, 2.7428, 1.0), p_d2 =
(−4.8696, 2.5995, 1.0), and p_d3 = (−4.2134, 1.8462, 1.0). The leader velocity is
2 m/s. The final positions of the ordinary followers are calculated using the sphere intersection
algorithm presented in Section 3.5.
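
As a quick sanity check of these numbers, the short script below verifies that the listed initial positions are mutually 1 m apart for the listed (i, j) pairs (assuming the coordinates are given in metres).

```python
# Quick check (assuming the listed coordinates are in metres) that the initial
# positions above satisfy the desired inter-agent distance of 1 m for the
# listed (i, j) pairs.
import numpy as np

p = {1: (1.0, 0.0, 0.0),
     2: (0.5, 0.866, 0.0),
     3: (0.0, 0.0, 0.0),
     4: (0.5, 0.2887, 0.8165),
     5: (0.5, 0.2887, -0.8165)}

pairs = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (1, 5), (2, 5), (3, 5)]
for i, j in pairs:
    dij = np.linalg.norm(np.array(p[i]) - np.array(p[j]))
    print(f"d_{i}{j} = {dij:.4f} m")
```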

The coefficients of the attitude control inputs u_𝜙 and u_𝜃 are k_p = 0.99, k_d = 1.48, k_dd = 0.83, and p = 100,
with C̄(s) = 2.7872(s + 3.57)/(s + 21.3); the coefficients of u_𝜓 are k_p = 0.99, k_d = 0.67, k_dd = 1.11, and
p = 100, with C̄_𝜓(s) = 0.0187(s + 1)/(s + 0.27); the coefficients of the altitude input u_1 are k_d = 7, k_p = 7.5, and
k_i = 6. Additionally, the coefficients for the generation of the reference pitch angle 𝜃_d are
k_p = 0.72, k_i = 0.72, and k_d = 0.3, and for the reference roll angle 𝜙_d, they are k_p =
k_i = 0.6 and k_d = 0.42.

3.7.2. Point Aerial Vehicles Formation Flight

In this section, we test the previously presented control laws on generic point aerial
vehicles. The initial positions, the desired final positions, and the distances between vehicles are
chosen to be the same as in Subsection 3.7.1. The vehicle speed is set to V = 1 m/s, and the two
gains of the point-vehicle control law are set to 1 and 0.05.

Fig 3.9 Point aerial vehicle simulation: pathway (red: leader, blue: first follower, green: second follower, rest are ordinary
followers)

We simulate the effects of sensor noise using bounded random signals in the range
[−10°, +10°] added to the angles 𝜙, 𝜃, and 𝜓. Furthermore, we use a time-delay block to simulate
the measurement and processing delays in the actuators. The transport-delay time is
taken as 0.02 s in our simulations. As seen in Figs. 3.9 and 3.10, the agents move in
formation to their final positions cohesively and without colliding with each other,
demonstrating the effectiveness of the proposed distributed control scheme.
Fig 3.10: Point aerial vehicle simulation: distances between vehicles (𝒑𝟏 : leader position, 𝒑𝟐 : first follower position, 𝒑𝟑 :
second follower position, 𝒑𝒏 : nth follower position)
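
The sensor noise and transport delay described above can be reproduced with a few lines of code; in the sketch below the ±10° bounded noise and the 0.02 s delay are implemented with a ring buffer, and the 0.005 s sample time is an assumed value.

```python
# Sketch of the disturbance model used in this simulation: bounded random
# angle noise in [-10, +10] degrees and a 0.02 s transport delay, implemented
# here with a ring buffer. The sample time is an assumed value.
import numpy as np
from collections import deque

dt = 0.005                                   # assumed simulation sample time [s]
delay_steps = int(round(0.02 / dt))          # 0.02 s transport delay
noise_bound = np.deg2rad(10.0)

rng = np.random.default_rng(0)
buffer = deque([np.zeros(3)] * (delay_steps + 1), maxlen=delay_steps + 1)

def measure(true_angles):
    """Return the delayed, noise-corrupted (phi, theta, psi) measurement."""
    noisy = true_angles + rng.uniform(-noise_bound, noise_bound, size=3)
    buffer.append(noisy)
    return buffer[0]                         # oldest sample = delayed output

# Example: feed a slow roll ramp through the measurement model.
for k in range(8):
    true = np.array([0.01 * k, 0.0, 0.0])
    print(np.round(measure(true), 4))
```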

3.8. Conclusion

In this chapter we presented a distributed cohesive formation control design for
autonomous vehicles moving in three-dimensional space. The presented control
design aims to move a vehicle swarm from a starting position to a desired final position
without changing its formation shape.

The design has been applied to two case studies: one modelling quadrotors and one
modelling generic point aerial vehicles. Additionally, for the point aerial vehicles, possible
corruption of the sensor data and processing delays have been considered, and these
problems were compensated for with a PID controller. The designed controllers have been
shown to perform successfully in multiple simulations.

The design of dynamic controllers with constructive stability and convergence analysis is a
new research field. Working on real-time applications, considering detailed actuator
dynamics, and accounting for dynamic uncertainties are possible directions for further study.
A study similar to [11] can be considered, and these algorithms can also be tested in
different real-time cases.

CHAPTER 3 REFERENCES

[1] Y. Lemmens et al., “Real-time simulation of an integrated electrical system of a UAV,”
SAE Technical Paper Series, 2014, doi:10.4271/2014-01-2169.
[2] H. Basak and E. Prempain, “Switching recovery control of a quadcopter UAV,” in Proc.
2015 European Control Conference (ECC), 2015, doi:10.1109/ecc.2015.7331096.
[3] R. Niemiec and F. Gandhi, “Multirotor controls, trim, and autonomous flight dynamics of
plus- and cross-quadcopters,” Journal of Aircraft, vol. 54, no. 5, pp. 1910–1920, 2017,
doi:10.2514/1.c034165.
[4] G. Dell’Ariccia, G. Dell’Omo, D. P. Wolfer, and H.-P. Lipp, “Flock flying improves pigeons’
homing: GPS track analysis of individual flyers versus small groups,” Animal Behaviour,
vol. 76, no. 4, pp. 1165–1172, 2008.
[5] D. J. Hoare, I. D. Couzin, J.-G. J. Godin, and J. Krause, “Context-dependent group size
choice in fish,” Animal Behaviour, vol. 67, no. 1, pp. 155–164, 2004.
[6] Men of Physics: Lord Rayleigh–The Man and His Work, 1970, pp. 243–251.
[7] E. Wagner, D. Jacques, W. Blake, and M. Pachter, “Flight test results of close formation
flight for fuel savings,” in Proc. AIAA Atmospheric Flight Mechanics Conference and
Exhibit, 2002.
[8] M. Grewal, L. Weill, and A. Andrews, Global Positioning Systems, Inertial Navigation,
and Integration. Hoboken, NJ: Wiley, 2007.
[9] S. Bayraktar, G. Fainekos, and G. Pappas, “Experimental cooperative control of fixed-
wing unmanned aerial vehicles,” in Proc. IEEE 43rd Conf. Decision Control, Dec. 2004,
vol. 4, pp. 4292–4298.
[10] M. Efe, “Robust low altitude behavior control of a quadrotor rotorcraft through sliding
modes,” in Proc. 15th Mediterranean Conf. Control Autom., Jun. 2007, pp. 1–6.
[11] J. Ferruz, M. Vega, A. Ollero, and V. Blanco, “Reconfigurable control architecture for
distributed systems in the hero autonomous helicopter,” IEEE Trans. Ind. Electron., vol.
58, no. 12, pp. 5311–5318, Dec. 2011.
[12] B. Etkin and L. Reid, Dynamics of Flight: Stability and Control. Hoboken, NJ: Wiley,
1995.
[13] I. Bayezit, B. Fidan, M. Amini, and I. Shames, “Distributed cohesive motion control of
quadrotor vehicle formations,” in Proc. ASME Int. Mech. Eng. Congr. Expo., Nov. 2010,
pp. 797–805.
[14] T. Madani and A. Benallegue, “Backstepping sliding mode control applied to a
miniature quadrotor flying robot,” in Proc. 32nd IEEE Annu. Conf. Ind. Electron., Nov.
2006, pp. 700–705.
[15] J. Wang, H. Mounier, A. Cela, and S. Niculescu, “Event driven intelligent PID
controllers with applications to motion control,” in Proc. 18th IFAC World Congr., Aug.
2011, vol. 18, pp. 10080–10085.
[16] M. de Oliveira, “Modeling, Identification and Control of a Quadrotor Aircraft,” M.S.
thesis, Control Eng., Czech Tech. Univ., Prague, Czech Republic, Jun. 2011.
[17] I. Bayezit, M. Amini, B. Fidan, and I. Shames, “Cohesive motion control of autonomous
formations in three dimensions,” in Proc. 6th Int. Conf. Intell. Sens., Sensor Netw. Inf.
Process., Dec. 2010, pp. 205–210.
[18] I. Bayezit and B. Fidan, “Nonlinear maneuvering control of rigid formations of fixed
wing UAVs,” in Proc. 2nd Int. Auton. Intell. Syst. Conf., Jun. 2011, pp. 124–133.
[19] R. Olfati-Saber and R. Murray, “Distributed cooperative control of multiple vehicle
formations using structural potential functions,” in Proc. 15th IFAC Triennial World
Congr., Jul. 2002, pp. 346–352.
[20] R. Schoop, R. Neubert, and A. Colombo, “A multiagent-based distributed control
platform for industrial flexible production systems,” in Proc. IEEE 27th Annu. Conf. Ind.
Electron. Soc., Nov. 2001, vol. 1, pp. 279–284.
[21] V. Gazi and K. Passino, “Stability of a one-dimensional discrete-time asynchronous
swarm,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 4, pp. 834–841,
Aug. 2005.
[22] E. Jones, M. Dias, and A. Stentz, “Time-extended multi-robot coordination for
domains with intra-path constraints,” J. Auton. Robots, vol. 30, no. 1, pp. 41–56, Jan.
2011.
[23] I. Bayezit, “Microavionics Design and Microavionics Control Implementation for
Cross-Compatible Platforms,” M.S. thesis, Elect. Eng., Istanbul Tech. Univ., Istanbul,
Turkey, Dec. 2008.
[24] C. Lin, C. Huang, C. Lee, and M. Chao, “Identification of flight vehicle models using
fuzzified eigensystem realization algorithm,” IEEE Trans. Ind. Electron., vol. 58, no. 11,
pp. 5206–5219, Nov. 2011.
[25] R. Nelson, Flight Stability and Automatic Control. New York: McGraw-Hill, 1997.
[26] I. Bayezit and B. Fidan, “Distributed cohesive motion control of flight vehicle
formations,” IEEE Trans. Ind. Electron., vol. 60, no. 12, pp. 5763–5772, 2013.
ERASMUS + KA 203 PROJECT
Project Code:
2018-1TR01-KA203-059632

Project Name:
DEVELOPING NOVEL UAV EDUCATION SET AND TRAINING CURRICULUM IN ORDER TO
CATCH STATE OF THE ART TECHNOLOGY

INTELLECTUAL OUTPUTS OF IO1 & IO2

PREPARED BY PARTNER BUCHAREST POLITECHNICAL UNIVERSITY

CHAPTER 12:

UAV 3D MODELLING AND MANUFACTURING
