Project Code:
2018-1TR01-KA203-059632
Project Name:
DEVELOPING NOVEL UAV EDUCATION SET AND TRAINING CURRICULUM IN ORDER TO
CATCH STATE OF THE ART TECHNOLOGY
CHAPTER 7:
Figure 1.1 UAV System Overview (Source: Fahlstrom and Gleason (2012))
A mission planning and control station (MPCS) needs tracking and a real-time data link for visual monitoring and for the position of the UAV. Remote control from the MPCS must be supported by either a direct communications link with the remote air vehicle or indirect satellite communications.
Figure 1.2 Mission Planning and Control Station (MPCS) (Keller (2014))
The MPCS can also serve as the command post for the person who performs mission
planning, receives mission assignments from supported headquarters, and reports acquired
data and information to the appropriate unit, be it weapon fire direction, intelligence, or
command and control. The station usually has positions for both the air vehicle and mission
payload operators to perform monitoring and mission execution functions. In some small
UAS, the ground control station is contained in a case that can be carried around in a back-
pack and set up on the ground, and consists of little more than a remote control and some
sort of display, probably augmented by embedded microprocessors or hosted on a
ruggedized laptop computer. At the other extreme, some ground stations are located in
permanent structures thousands of miles away from where the air vehicle is flying, using
satellite relays to maintain communications with the air vehicle. In this case, the operator’s
consoles might be located in an internal room of a large building, connected to satellite
dishes on the roof. UAV satellite-control links can have a one-second delay, which is
inadequate for sensitive control during takeoff and landing. UAV pilots within visual range of
their aircraft usually handle takeoff and landing, and then hand off control of the UAVs to
controllers at consolidated locations, who handle much of the mission tasks via satellite
links. More detail can be found in Fahlstrom and Gleason (2012).
1.2. Classification of UAVs
The range of UAV sizes and types now runs from air vehicles (AVs) small enough to land on
the palm of your hand to large lighter-than-air vehicles. Many types of UAVs have been
designed, tested, and fielded throughout the world.
The term very small UAV is used to categorize an AV with dimensions of the order of 30–50 cm, such as the Mosquito, the Skate, and the CyberQuad Mini shown in Fig. 1.3. The Mosquito, an oval flying wing with a single tractor propeller, was developed by Israel Aerospace Industries (IAI). The Skate, a rectangular flying wing with twin tractor engine/propeller combinations that can be tilted to provide "thrust vectoring" for control, was developed by the US company Aurora Flight Sciences. The CyberQuad Mini, which has four ducted fans in a square arrangement, was developed by the Australian company Cyber Technology.
The Mosquito wing is 35 cm long and 35 cm in total span. It uses an electric motor with
batteries and has an endurance of 40 minutes, and claims a radius of action of about 1.2 km.
It is hand or bungee launched and can deploy a parachute for recovery. The Skate wing has
a wingspan of about 60 cm and length of 33 cm. It folds in half along its centerline for
transport and storage. It has twin electric motors on the leading edge that can be tilted up or
down and allow vertical takeoff and landing (VTOL) and transition to efficient horizontal flight.
There are no control surfaces, with all control being accomplished by tilting the motor
assemblies and controlling the speed of the two propellers. It can carry a payload of 227 g
with a total takeoff weight of about 1.1 kg. The CyberQuad Mini has four ducted fans, each with a diameter of less than 20 cm, mounted so that the total outside dimensions, including the fan shrouds, are about 42 by 42 cm. The total height, including the payload and batteries, which are located in a wing at the center of the square, is 20 cm. The CyberQuad Mini includes a low-light-level solid-state camera or a thermal camera and a control system that allows fully autonomous waypoint navigation.
Figure 1.3 Very Small UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))
Small UAVs have at least one dimension greater than 50 cm and go up to dimensions of a meter or two. Many of these UAVs have the configuration of a fixed-wing model airplane and are hand-launched by their operator, who throws them into the air. Small UAVs include the Raven, developed by the US company AeroVironment, and the Bayraktar Mini, developed by the Turkish company Bayraktar, both conventional fixed-wing vehicles.
Figure 1.4 Small UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))
The RQ-11 Raven is an example of a UAV in the "model airplane" size range. It has a 1.4 m wingspan and is about 1 m long. It weighs only a little less than 2 kg and is launched by being thrown into the air by its operator. It uses electrical propulsion and can fly for nearly an hour and a half. The Raven and its control "station" can be carried in the operator's backpack, and the air vehicle can carry visible, near-infrared (NIR), and thermal imaging systems for reconnaissance as well as a "laser illuminator" to point out targets to personnel on the ground.
The Bayraktar Mini has a gimbaled camera for day or night use. It offers waypoint navigation
with GPS or other navigation systems. Despite its slightly greater size and weight, it is
launched much like the Raven. It can be recovered by a skidding landing on its belly or with
an internal parachute. It is fielded with small army units and has been heavily used by the
Turkish Army since it was fielded in about 2006. Some of the small UAVs are shown in Fig.
1.4.
Medium UAVs are too large to be carried around by one person but are still smaller than a light aircraft. The UAVs that sparked the present resurgence of interest, such as the Pioneer and Skyeye, are in the medium class. They have typical wingspans of the order of 5–10 m and carry payloads from 100 to more than 200 kg. A large number of UAVs fall into this size group. The Israeli-US Hunter and the UK Watchkeeper are more recent examples of medium-sized, fixed-wing UAVs. There are also a large number of rotary-wing UAVs in this size class. A series of conventional helicopters with rotor diameters of the order of 2 m has been developed in the United Kingdom by Advanced UAV Technologies. There also are a number of ducted-fan systems configured much like the CyberQuad Mini but having dimensions measured in meters instead of centimeters. The US Boeing Eagle Eye is a medium-sized VTOL system notable for using tilt-wing technology.
The RQ-2 Pioneer is an example of an AV that is smaller than a light manned aircraft but larger than a model airplane. It was for many years the workhorse of the stable of US tactical UAVs; it was originally designed by the Israelis and built by AAI in the United States. The Pioneer provided real-time reconnaissance and intelligence for ground commanders. High-quality day and night imagery for artillery and naval gunfire adjustment and damage assessment were its prime operational missions. The 205 kg, 5.2 m wingspan AV had a conventional aircraft configuration. It cruised at 200 km/h and carried a 220 kg payload. Maximum altitude was 4,572 m and endurance was 5.5 h. The fiberglass air vehicle had a 26 hp engine and was shipboard capable, with piston and rotary engine options. The Pioneer could be launched from a pneumatic or rocket launcher or by conventional wheeled takeoff from a prepared runway. Recovery was accomplished by conventional wheeled landing with arrestment or into a net.
The BAE Systems Skyeye R4E UAV system was fielded in the 1980s and is roughly contemporary with the Pioneer. It uses launchers similar to the Pioneer's but does not have a net-recovery capability. Its ground control station is similar in principle to that of the Pioneer. The Skyeye air vehicle is constructed of lightweight composite materials and is easy to assemble and disassemble for ground transport because of its modular construction. It has a 7.32 m wingspan and is 4.1 m long. It is powered by a 52 hp rotary engine providing high reliability and low vibration. Maximum launch weight is 570 kg, and it can fly for 8–10 h at altitudes up to 4,600 m. Maximum payload weight is about 80 kg (176 lb). The Skyeye has no landing gear that would produce large radar echoes or obstruct the view of the payload. The absence of a nose wheel is particularly significant, as a nose gear often obstructs the view of a payload camera looking directly forward, precluding landing based on the view through the eyes of the camera. The Skyeye also carries either a parafoil or a parachute as alternative recovery systems. The parafoil essentially is a soft wing that is deployed in the recovery area to allow the air vehicle to land much more slowly. Parafoil recovery can be effective for landing on moving platforms such as ships or barges.
The RQ-5A Hunter was developed as the standard "Short Range" UAS for the US Army. The Hunter does not require a recovery net or launcher, which significantly simplifies the overall minimum deployable configuration and eliminates the launcher required by the Skyeye. Under the appropriate conditions, it can take off and land on a road or runway. It utilizes an arresting cable system when landing, with a parachute recovery for emergencies. It is not capable of net recovery because it has a tractor ("puller") propeller that would be damaged or would damage any net used to catch it. It also has a rocket-assisted takeoff option to allow launch when no suitable road or runway is available. The Hunter is constructed of lightweight composite materials, which afford ease of repair. It has a 10.2 m wingspan and is 6.9 m long. It is powered by two four-stroke, two-cylinder (V-type), air-cooled Moto Guzzi engines, which utilize fuel injection and individual computer control. The air vehicle weighs approximately 885 kg at takeoff (maximum), has an endurance of about 12 h, and a cruise speed of 120 knots.
The Hermes 450/Watchkeeper is an all-weather intelligence, surveillance, and target acquisition UAV. Its dimensions are similar to those of the Hunter. The Watchkeeper is manufactured in the United Kingdom by a joint venture of the French company Thales and the Israeli company Elbit Systems. It has a weight of 450 kg, including a payload capacity of 150 kg. A series of rotary-wing UAVs called the AT10, AT20, AT100, AT200, AT300, and AT1000 has been developed by the UK firm Advanced UAV Technology. They are all conventionally configured helicopters with a single main rotor and a tail boom with a tail rotor for yaw stability and control. The rotor diameters vary from 1.7 m in the AT10 to about 2.3 m for the AT1000. All are intended to be launched by vertical takeoff, and all claim the ability to land autonomously on moving vehicles.
Figure 1.5 Medium UAVs (Source: P. G. Fahlstrom and T. J. Gleason (2012))
The Northrop Grumman MQ-8B Fire Scout is a conventionally configured VTOL UAV that looks much like a typical light helicopter. It has a length of 9.2 m, a height of 2.9 m, and a rotor diameter of 8.4 m. It is powered by a 420 shaft hp turbine engine. The Fire Scout is similar to the manned OH-58 Kiowa, a light observation helicopter with a two-man crew and two passenger seats. The Kiowa has a maximum payload of about 630 kg (1,389 lb), compared to the 270 kg (595 lb) maximum payload of the Fire Scout, but if one takes out the weight of the crew and everything associated with the crew, the net payload capability of the Fire Scout is similar to that of the manned helicopter. Some of the medium UAVs are shown in Fig. 1.5.
Large UAVs can fly long distances and loiter for extended periods to perform surveillance functions. They are also capable of carrying weapons in significant quantities. The lower range of such systems includes the US General Atomics Predator A, which has significant range and endurance but can carry only two missiles of the weight presently being used. The limitation to two missiles is serious, as it means that after firing the two missiles on board, the UAV either has lost the ability to deliver weapons or must be flown back to its base to be rearmed. For this reason, a second generation of UAVs, including a Predator B model, is larger and able to carry many more weapons on a single sortie.
The Cassidian Harfang is an example of a system much like the Predator A and the
Talarion, also by Cassidian, is an example of the emerging successors to the Predator A. At
the high end of this size group, an example is an even larger UAV designed for very long
range and endurance and capable of flying anywhere in the world on its own, the US
Northrop Grumman Global Hawk.
The MQ-1 Predator A is larger than a light single-engine private aircraft and provides medium-altitude, real-time surveillance using high-resolution video, infrared imaging, and synthetic aperture radar. It has a wingspan of 17 m and a length of 8 m. It adds a significantly higher ceiling (7,620 m) and longer endurance (40 h) to the capabilities of the smaller UAVs. GPS and inertial systems provide navigation, and control can be via satellite. Speed is 220 km/h, and the air vehicle can remain on station for 24 h at 925 km from the operating base. It can carry an internal payload of 200 kg plus an external payload (hung under the wings) of 136 kg.
The Harfang UAV is produced by Cassidian, which is a subsidiary of the French company EADS. It is about the same size as the Predator and is designed for similar missions. The
configuration is different, using a twin-boom tail structure. There are a variety of possible
payloads. Its stated performance is similar to that of the Predator, but it has a shorter
endurance of 24 h. It takes off and lands conventionally on wheels on a runway. Control can
be via satellite.
The RQ-4 Global Hawk is manufactured by Northrop Grumman Aerospace Systems. It flies at high altitude and utilizes radar, electro-optical, and infrared sensors for surveillance applications. It uses a turbofan engine and has a shape that appears to reduce its radar signature, but it is not a "stealth" aircraft. It is 14.5 m long with a 40 m wingspan and has a maximum takeoff weight of about 14,600 kg. It can loiter at 575 km/h and has an endurance of 32 h. It has a full set of potential payloads, and it is routinely controlled via satellite links. Some of these large UAVs are illustrated in Fig. 1.6.
According to the physical size of UAVs, the terms very small, small, medium, and large can be used for UAV classification. A comprehensive classification of UAVs demonstrating both the wide variety of UAV systems and capabilities as well as the multiple dimensions of differentiation is presented in Table 1.1. More detail can be found in Fahlstrom and Gleason (2012).
Table 1.1 Categorization of Existing UAV Systems (Source: Van Blyenburgh (2006))
As shown in Fig. 1.7, the major payload subsystems of a small UAV are the sensor subsystem, the computer (SBC) subsystem, the communication subsystem, and the power subsystem.
Figure 1.7 Major Payload Subsystem for Small UAVs (Source: Reproduction from Valavanis and Vachtsevanov (2014))
The communication subsystem design flow is shown in Fig. 1.8: starting from the mission and payload constraints on communications, link budgets are developed and possible radios are defined; if the required distance is too great, the possibility of multi-hop links is considered, otherwise new constraints are negotiated.
Fig 1.8 Communication System Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
Note that references to the SBC (single-board computer) refer to the entire computing system, including storage devices. Fig. 1.9 shows the design flow.
The SBC design flow proceeds from the mission constraints and the payload constraints on the SBC, through performance budgets, to the definition of possible SBCs; if no suitable airborne SBC can be found, a ground-based solution is considered.
Figure 1.9 SBC Subsystem Design flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
Though other sensors may be used, the electro-optical (EO) sensor is predominantly employed in small UAVs. Considerations for the EO sensor are presented here, but the methods can be readily applied to other types of sensors. The EO sensor subsystem includes the sensor hardware, the image capture method, and any gimbal mechanism required for panning the sensor. Note that an inertial measurement unit (IMU) is not listed; the aircraft attitude can be provided by the autopilot to the image processing system to geo-register the image. Fig. 1.10 shows the design flow.
The EO sensor design flow proceeds from the mission constraints and the payload constraints on the sensors, through a decision matrix, to the definition of possible EO sensors; if no sensor satisfies the constraints, new constraints are negotiated.
Figure 1.10 EO Sensor Subsystem Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
The power system includes batteries and DC-DC converters. Fig. 1.11 shows the
design flow. More detail can be found in Valavanis and Vachtsevanov (2014).
The power subsystem design flow proceeds from the mission constraints and the payload constraints on power, through power budgets and candidate power solutions, to a decision matrix; if a feasible power solution exists it is implemented, otherwise new constraints are negotiated.
Figure 1.11 Power Subsystem Design Flow (Source: Reproduction from Valavanis and Vachtsevanov (2014))
Figure 1.12 A Typical UAV Control System Architecture, in which sensor and navigation state estimates are combined in the presence of system and sensor noise (Source: Reproduction from Valavanis and Vachtsevanov (2014))
Figure 1.13 Aircraft Body Frame (Source: Valavanis and Vachtsevanov (2014))
Newton’s laws of mechanics imply that in a body frame whose origin coincides with
the aircraft’s center of mass,
$$F_b = m\dot{v}_b + \omega_b \times m v_b \tag{1.1}$$
$$T_b = J\dot{\omega}_b + \omega_b \times J\omega_b \tag{1.2}$$
In the above equations, the subscript $(\cdot)_b$ indicates vectors or tensors expressed in the body frame, the dot indicates differentiation with respect to time, $v_b$ is the velocity of the aircraft's center of mass with respect to the inertial frame, $\omega_b$ is the angular velocity of the body frame with respect to the inertial frame, $m$ is the aircraft's mass, and $J$ is its inertia tensor, which is constant in the body frame. The operator $\times$ denotes the cross product. On the left-hand side, $F_b$ denotes the sum of the forces acting on the vehicle (including aerodynamic, gravity, thrust, and buoyancy), and $T_b$ denotes the sum of the moments of these forces about its center of mass.
In coordinates, Equations (1.1) and (1.2) are written as follows. Let $F_b = (X, Y, Z)$, $T_b = (L, M, N)$, $v_b = (U, V, W)$, and $\omega_b = (P, Q, R)$.
In most cases of interest, the aircraft has a plane of symmetry, typically separating the left and right halves. Assuming that the y-axis is orthogonal to the aircraft's plane of symmetry, the inertia tensor has the following structure:
$$J = \begin{bmatrix} J_{xx} & 0 & -J_{xz} \\ 0 & J_{yy} & 0 \\ -J_{xz} & 0 & J_{zz} \end{bmatrix} \tag{1.3}$$
Substituting into Eqs. (1.1) and (1.2),
$$F_b \equiv \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = m\begin{bmatrix} \dot{U} + QW - RV \\ \dot{V} + RU - PW \\ \dot{W} + PV - QU \end{bmatrix} \tag{1.4}$$
$$T_b \equiv \begin{bmatrix} L \\ M \\ N \end{bmatrix} = \begin{bmatrix} J_{xx}\dot{P} - J_{xz}\dot{R} + QR(J_{zz} - J_{yy}) - PQ\,J_{xz} \\ J_{yy}\dot{Q} + PR(J_{xx} - J_{zz}) + (P^2 - R^2)J_{xz} \\ J_{zz}\dot{R} - J_{xz}\dot{P} + PQ(J_{yy} - J_{xx}) + QR\,J_{xz} \end{bmatrix} \tag{1.5}$$
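As a concrete illustration of Eqs. (1.4) and (1.5), the short Python sketch below solves Eqs. (1.1) and (1.2) numerically for the body-frame accelerations. The function name and the example mass, inertia, force, and rate values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def rigid_body_accelerations(F, T, v, w, m, J):
    """Solve Eqs. (1.1)-(1.2) for the body-frame accelerations.

    F, T : force and moment vectors in the body frame (N, N*m)
    v, w : linear and angular velocity in the body frame (m/s, rad/s)
    m, J : mass (kg) and 3x3 inertia tensor in the body frame (kg*m^2)
    """
    v_dot = F / m - np.cross(w, v)                        # from F = m*v_dot + w x (m*v)
    w_dot = np.linalg.solve(J, T - np.cross(w, J @ w))    # from T = J*w_dot + w x (J*w)
    return v_dot, w_dot

# Illustrative numbers only (not from the text)
m = 10.0                                                  # kg
J = np.diag([0.8, 1.1, 1.7])
J[0, 2] = J[2, 0] = -0.05                                 # symmetric-plane inertia (Jxz term)
F = np.array([5.0, 0.0, -98.1])                           # resultant force on the vehicle
T = np.array([0.1, -0.05, 0.02])                          # moments about the CG
v = np.array([20.0, 0.0, 1.0])                            # (U, V, W)
w = np.array([0.0, 0.05, 0.01])                           # (P, Q, R)

print(rigid_body_accelerations(F, T, v, w, m, J))
```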
Equations (1.1) and (1.2) give the evolution over time of the velocity of the aircraft's center of mass and of the aircraft's angular velocity with respect to the inertial frame. In order to reconstruct the position and attitude of the aircraft as a function of time, it is also necessary to write equations for the aircraft's kinematics.
Let $R_{ib}$ be the rotation matrix that maps vectors in the body frame to vectors in the inertial frame. Given a vector $u \in \mathbb{R}^3$, define a "hat" operator such that $\hat{u}$ is the unique matrix satisfying $\hat{u}v = u \times v$, $\forall v \in \mathbb{R}^3$. In coordinates, given $u = (u_1, u_2, u_3)$, one gets
$$\hat{u} = \begin{bmatrix} 0 & -u_3 & u_2 \\ u_3 & 0 & -u_1 \\ -u_2 & u_1 & 0 \end{bmatrix} \tag{1.6}$$
The aircraft kinematics can be written as
$$\dot{p} = R_{ib}\, v_b \tag{1.7}$$
$$\dot{R}_{ib} = R_{ib}\, \hat{\omega}_b \tag{1.8}$$
where $p$ is the position of the aircraft's center of mass. Even though the position and attitude of the aircraft can be completely specified by Equations (1.7) and (1.8), maintaining all components of the rotation matrix $R_{ib}$ may be expensive from the computational point of view, since rotation matrices are not a minimal representation of aircraft attitude. In practice, several other methods to represent an aircraft's attitude are commonly used.
A widely used parametrization of rotations that requires fewer parameters than rotation matrices is based on unit quaternions, that is, unit vectors in $\mathbb{R}^4$ (Chaturvedi et al. 2011). A quaternion $q = (q_0, \vec{q})$ is typically written in terms of a scalar part $q_0 \in \mathbb{R}$ and a vector part $\vec{q} \in \mathbb{R}^3$. A unit quaternion is such that
$$q \cdot q = q_0^2 + \vec{q} \cdot \vec{q} = 1 \tag{1.9}$$
The set of unit vectors in $\mathbb{R}^4$ is denoted as the sphere $\mathbb{S}^3$. A unit quaternion $q = (q_0, \vec{q}) \in \mathbb{S}^3$ defines a rotation
$$R = \mathrm{Rot}\!\left(2\arccos(q_0),\ \frac{\vec{q}}{\|\vec{q}\|}\right) \tag{1.10}$$
In other words, the vector part of a unit quaternion is parallel to the fixed axis of the rotation, and the scalar part determines the rotation angle. Conversely, a rotation matrix $R = \mathrm{Rot}(\theta, v)$ can be represented by the unit quaternion
$$q = \left(\cos\frac{\theta}{2},\ v\sin\frac{\theta}{2}\right) \tag{1.11}$$
Note that this representation is not unique, since for all $q \in \mathbb{S}^3$, $q$ and $-q$ map to the same rotation matrix. Quaternions are composed through a multiplication operation, indicated with $\circ$, defined as follows:
$$q_1 \circ q_2 = \left(q_{0,1}q_{0,2} - \vec{q}_1 \cdot \vec{q}_2,\ \ q_{0,1}\vec{q}_2 + q_{0,2}\vec{q}_1 - \vec{q}_1 \times \vec{q}_2\right)$$
or in matrix form,
$$q_1 \circ q_2 = \begin{bmatrix} q_{0,1} & -\vec{q}_1^{\,T} \\ \vec{q}_1 & q_{0,1} I - \hat{\vec{q}}_1 \end{bmatrix} q_2 \tag{1.12}$$
If $R_1$ and $R_2$ are rotation matrices corresponding (according to Equation (1.10)) to the unit quaternions $q_1$ and $q_2$, respectively, then $R_1 R_2$ corresponds to $q_1 \circ q_2$.
The attitude kinematics can then be written in terms of the quaternion as
$$\dot{q} = -\frac{1}{2}\,\Omega(\omega_b)\, q \tag{1.13}$$
where
$$\Omega(\omega) = \begin{bmatrix} 0 & P & Q & R \\ -P & 0 & -R & Q \\ -Q & R & 0 & -P \\ -R & -Q & P & 0 \end{bmatrix} \tag{1.14}$$
Storing and multiplying quaternions require less memory and computation time than storing and multiplying rotation matrices; thus, quaternions are often the preferred method to model aircraft kinematics. On the other hand, the fact that the set of unit quaternions is a double covering of the set of rotation matrices can cause stability and robustness issues (Chaturvedi et al. 2011). Numerical errors in quaternion propagation can result in the quaternion losing its normalization constraint. It is therefore common practice to normalize the quaternion to ensure $\|q\| = 1$ when it is numerically integrated for control or navigation purposes. More detail can be found in Valavanis and Vachtsevanov (2014).
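The following Python sketch illustrates the quaternion propagation and renormalization just described, using the $\Omega(\omega)$ matrix of Eq. (1.14) and a simple explicit Euler step; the step size, rates, and function names are illustrative assumptions, not part of the source.

```python
import numpy as np

def omega_matrix(w):
    """Omega(w) as in Eq. (1.14), with w = (P, Q, R)."""
    P, Q, R = w
    return np.array([[0.0,  P,   Q,   R],
                     [-P,  0.0, -R,   Q],
                     [-Q,   R,  0.0, -P],
                     [-R,  -Q,   P,  0.0]])

def propagate_quaternion(q, w, dt):
    """One explicit-Euler step of q_dot = -0.5 * Omega(w) * q,
    followed by renormalization to keep ||q|| = 1."""
    q = q + dt * (-0.5) * omega_matrix(w) @ q
    return q / np.linalg.norm(q)

# Illustrative use: constant roll rate of 0.1 rad/s for 10 s
q = np.array([1.0, 0.0, 0.0, 0.0])       # scalar-first unit quaternion
for _ in range(1000):
    q = propagate_quaternion(q, np.array([0.1, 0.0, 0.0]), dt=0.01)
print(q, np.linalg.norm(q))
```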
𝜀= 1− (1.17)
Figure 1.15 Definition of the Local Tangent Plane (Source: Valavanis and Vachtsevanov (2014))
It can also be defined as an NED frame, indicating the north-east-down alignment of the coordinate axes. When the origin of the LTP frame is defined in terms of its geodetic latitude, longitude, and altitude above the ground surface, then Equation (1.15) can be applied to define the kinematics of the navigation states.
1.9. Body-Carried and Fixed Frames
In flight dynamics, the body-attached reference frames usually have their origin at the center of gravity (CG) of the airplane; therefore, these frames are moving. The body-carried frame $\{n\}$ is an orthogonal frame with its origin at the CG of the UAV. All its axes are permanently stabilized and aligned with the LTP frame axes, as if the LTP frame were connected to the CG, as shown in Fig. 1.16. The frame is usually utilized in defining the navigation equations, hence its subscript $\{n\}$. This frame is connected to the LTP frame by means of a plain translation $r = [r_N, r_E, r_D]^T$.
Figure 1.16 Body-Fixed Frame with respect to LTP (Source: Valavanis and Vachtsevanov (2014))
The body-fixed frame is an orthogonal frame defined with respect to the body-carried frame. Its origin is at the CG of the UAV, and its axes are rigidly connected
to the body; therefore, the frame also rotates with the body. The frame is usually
marked with the subscript {𝑏}. The alignment of the {𝑏} frame axes is a degree
of freedom that can exploit the body symmetry.
As the body moves, its attitude is defined with reference to the body-carried
frame {𝑛} by three consecutive Euler rotations by 𝜓 (yaw), 𝜃 (pitch), and 𝜙 (roll)
angles. See their graphical illustration in Fig. 1.17 where frames {0} and {1} relate
to the frames {𝑛} and {𝑏} correspondingly. The formal definition of the Euler
angles in the application to an airplane attitude specification is presented here for
completeness:
$\psi$ – yaw is the angle between $x_n$ and the projection of $x_b$ on the local horizontal plane.
$\theta$ – pitch is the angle between the local horizon and the $x_b$ axis, measured in the vertical plane of symmetry of the UAV.
$\phi$ – roll is the angle between the body-fixed $y_b$ axis and the local horizon, measured in the plane containing $y_b$ and $z_b$.
Figure 1.17 Three Consecutive Rotations (Source: Valavanis and Vachtsevanov (2014))
The importance of the wind frame for UAVs flying in wind conditions, where the wind may amount to as much as 100% of the nominal aircraft speed, cannot be overestimated. For example, consider an autonomous glider designed to use wind energy to maintain a long flight time. It is therefore necessary to understand the difference between the airspeed, represented by the air velocity vector $V_a$, and the ground velocity $V_g$, both resolved with respect to the LTP frame. Consider the graphical representation of the relation between these vectors in Fig. 1.19. In the presence of constant wind, these velocities satisfy what is usually called the wind triangle equation:
$$V_a = V_g - V_w \tag{1.19}$$
where $V_w$ is the wind velocity defined in the LTP frame.
First, define the components of all three vectors in the body-fixed frame $\{b\}$. Let the UAV velocity in the LTP (inertial) frame expressed in the body frame be $V_g^b = [u, v, w]^T$ and let the wind velocity in the LTP frame expressed in the body frame be $V_w^b = [u_w, v_w, w_w]^T$. Observe that $V_a$ can be expressed in the wind frame $\{w\}$ as $V_a^w = [V_a, 0, 0]^T$, and let $V_a^b = [u_a, v_a, w_a]^T$ be its components expressed in the body frame. Utilizing the definition of the angles of attack and sideslip relating the wind frame to the body-fixed frame, the "wind triangle" Equation (1.19) expressed in the body frame results in the following:
$$V_a^b = V_g^b - V_w^b = \begin{bmatrix} u - u_w \\ v - v_w \\ w - w_w \end{bmatrix}$$
$$V_a^b = \begin{bmatrix} u_a \\ v_a \\ w_a \end{bmatrix} = \begin{bmatrix} \cos\alpha\cos\beta & \sin\beta & \sin\alpha\cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix}^{T} \begin{bmatrix} V_a \\ 0 \\ 0 \end{bmatrix} = V_a \begin{bmatrix} \cos\alpha\cos\beta \\ \sin\beta \\ \sin\alpha\cos\beta \end{bmatrix} \tag{1.20}$$
Figure 1.19 Wind Triangle in 2D Plane and the Yaw (𝜓), Sideslip (𝛽), and Course (𝜒 ) Angles (Source: Valavanis and
Vachtsevanov (2014))
This last equation relates the airspeed components of 𝑉 resolved in the body
frame with the airspeed and the angles of attack and sideslip. In turn, if the wind
components resolved in the body frame are known, then inverting the last equation
allows for calculation of the airspeed $V_a$ and the $\alpha$, $\beta$ angles:
$$V_a = \sqrt{u_a^2 + v_a^2 + w_a^2}, \qquad \alpha = \tan^{-1}\!\left(\frac{w_a}{u_a}\right), \qquad \beta = \sin^{-1}\!\left(\frac{v_a}{V_a}\right) \tag{1.21}$$
More detail can be found in Valavanis and Vachtsevanov (2014).
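A small Python sketch of the wind-triangle computation of Eqs. (1.19) and (1.21) is given below; the function name, variable names, and the example velocity values are assumptions made for illustration only.

```python
import numpy as np

def air_data_from_wind_triangle(v_ground_b, v_wind_b):
    """Apply Eqs. (1.19) and (1.21) with all vectors in the body frame.

    v_ground_b : UAV velocity over ground, (u, v, w)
    v_wind_b   : wind velocity, (u_w, v_w, w_w)
    Returns airspeed Va, angle of attack alpha, sideslip beta (rad).
    """
    ua, va, wa = v_ground_b - v_wind_b            # wind triangle, Eq. (1.19)
    Va = np.sqrt(ua**2 + va**2 + wa**2)
    alpha = np.arctan2(wa, ua)                    # alpha = atan(wa/ua)
    beta = np.arcsin(va / Va)                     # beta = asin(va/Va)
    return Va, alpha, beta

# Illustrative numbers: 20 m/s ground speed with a 5 m/s headwind component
print(air_data_from_wind_triangle(np.array([20.0, 1.0, 2.0]),
                                  np.array([-5.0, 0.0, 0.0])))
```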
1.11. Principles of Guidance, Navigation, and Control of UAVs
Regardless of their size and mission, all UAVs share the need for a sensor or sensor systems that provide an estimate of the vehicle's full state vector. The state vector normally consists of three position coordinates, three components of the velocity vector, and anywhere between three and nine parameters which describe the vehicle's attitude. In addition to state sensing and estimation, UAVs (especially those that operate autonomously) need control and guidance systems which allow them to maneuver in a way consistent with their mission. In simple terms, the guidance system generates instructions on what state trajectory the UAV should follow in accomplishing its mission. The control system in turn operates the aircraft controls (ailerons, elevators, thrust, etc.) to follow the trajectory generated by the
guidance systems. A high-level depiction of a Guidance, Navigation, and Control (GNC) system is shown in Fig. 1.20 and consists of both airborne and ground components. Off-the-shelf GNC solutions for small UAVs exist; some of these consist of the vehicle itself, avionics, and associated ground support systems. While most of these "turnkey" solutions allow some level of customization by the user, they can be very restrictive for use in research environments.
In order to be able to guide and control the UAV, the state of the UAV must be
available to the controller at high fidelity and high bandwidth. Accurate position is
required to perform automatic control for precision applications (e.g., landing). The
navigation module in the GNC system implements aircraft state estimation. The advent of
powerful computers made available at relatively low cost has allowed sensor
fusion for navigation; sensor fusion is a technique of optimally blending information from multiple different (flawed) sensors. A multi-sensor navigation framework aims to obtain the most information about the aircraft states by using a minimum combination of sensors. This is key to UAV operations, where size, weight, and power are all critical.
The complete state of the UAV comprises its position, velocity, attitude, airspeed,
angle-of-attack, sideslip angle, and rotation (pitch, roll, and yaw) rates. Position,
velocity, and attitude are also known as the navigation state.
As depicted in Fig. 1.20, the airborne components comprise the guidance, control, and navigation modules, which work together with the navigation system and the ground components.
Figure 1.20 Concept of Operations for UAVs (Source: Reproduction from Valavanis and Vachtsevanov (2014))
$$x = \begin{bmatrix} L & \Lambda & h & V_N & V_E & V_D & \phi & \theta & \psi & b_1 & b_2 & b_3 & b_4 & b_5 & b_6 \end{bmatrix}^T \tag{1.22}$$
where $L$, $\Lambda$, and $h$ are the geodetic latitude, longitude, and altitude, $V_N$, $V_E$, $V_D$ are the velocity components, $\phi$, $\theta$, $\psi$ are the attitude angles, and the $b_i$ are inertial sensor bias states.
Figure 1.21 Block diagram of INS/GPS integration (Source: Valavanis and Vachtsevanov (2014))
¹ Magnetic, Angular Rate, and Gravity (MARG) sensor system
A lidar operates by emitting individual pulses of light and calculating the time that it takes for each pulse of light to return to the lidar sensor. Many thousands of pulses of light per second are emitted across a swath, known as a scan line. Since the speed of light is known, the distance of each pulse to its target (for example, the ground) can be calculated. Calculating an $x, y, z$ coordinate position of the target also requires that the exact position and orientation of the aircraft be known and be dynamically assigned to each of these pulse returns. As a result, direct georeferencing of UAV-mounted lidar is essential, using a high-grade INS system. As with other mapping sensors such as aerial cameras, best results are obtained when the INS and lidar system are mounted together on the UAV and lever arms are calculated that account for the mounting angle of the lidar. A LIDAR sensor application is shown in Fig. 1.23 and the working principle of the LIDAR sensor is shown in Fig. 1.24.
Figure 1.23 Sample Drone with LIDAR Sensor (by Phoenix Aerial System Design)
1.16. Guidance
UAV guidance, navigation, and control (GNC) ultimately signifies the ability to
follow a desired trajectory through the sky. With attitude estimation established and the inner loops stabilizing the aircraft, what is left is to guide the UAV along the desired trajectory while rejecting disturbances such as wind. Again, while there are several ways to implement this guidance control, this section discusses the guidance algorithms implemented on the SLUGS autopilot (Lizarraga 2009).
The SLUGS guidance algorithm is based on an extension of a simple line-of-sight guidance law originally developed for ground robotics. Much of the art of
waypoint guidance consists of determining which leg of the trajectory the UAV is
on and when to switch to the next leg of the trajectory. The basic approach is to define a Serret-Frenet frame with one axis pointing from the current waypoint to the next ($T_E$, along the current leg), one axis pointing down ($B_E$), and the third defined by the cross product of the along-track ($T_E$) and down ($B_E$) unit vectors. The projection of the UAV position onto the $T_E$ direction is used to determine when to switch to the next leg. In practice this is better than using the distance from the next waypoint to decide when to switch (especially in very windy conditions).
1.17. General Tracking
The outer loop guidance law follows closely the work developed in Amidi and
Thorpe (1991), Park et al. (2004), and Park et al. (2007). The original derivation
is presented here for completeness. The guidance algorithm steers the velocity
vector toward the line of sight; this is one form of pursuit guidance. Making the commanded acceleration proportional to $\sin\eta$ is only one of a number of possible guidance laws. The basic guidance algorithm is to determine an aim point and steer the velocity vector toward it.
Referring to Fig. 1.22, $V_g$ is the UAV's ground speed, and C is a circular arc of radius R that originates at the UAV and intercepts the desired path. $L_1$ is a constant look-ahead distance from the UAV to the path in the desired direction of travel. From elementary trigonometry:
$$\frac{|L_1|}{2} = R\sin\eta \tag{1.23}$$
Figure 1.22 Navigation control law geometry (Reproduction of Fig. 1 in Park et al. 2004)
The lateral (centripetal) acceleration required to follow the arc C is
$$a_c = \frac{V_g^2}{R} \tag{1.24}$$
Thus the UAV must command a lateral acceleration of $a_c$. Solving Equation (1.23) for $R$ and substituting it into Equation (1.24) produces the following control law for the commanded acceleration:
$$a_c = 2\,\frac{V_g^2}{|L_1|}\,\sin\eta \tag{1.25}$$
The only requirements for the implementation of this control law are to select the look-ahead distance $|L_1|$ and determine $\sin\eta$, the sine of the angle from the velocity vector to $L_1$. $\eta$ is sometimes called the line-of-sight angle. $\sin\eta$ is found from the vector cross product of $V_g$ and $L_1$ as follows:
$$\sin\eta = \frac{V_g \times L_1}{|V_g||L_1|} \tag{1.26}$$
For the UAV to actually track the desired trajectory, the lateral acceleration command must be converted to an appropriate bank angle command using the steady-state turn equation:
$$\phi_{cmd} = \tan^{-1}\!\left(\frac{a_c}{g}\right) \tag{1.27}$$
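The sketch below illustrates how Eqs. (1.25)-(1.27) might be evaluated in code for a 2D case. It is not the SLUGS implementation; the function name, the way the look-ahead vector is supplied, and the example numbers are assumptions for illustration.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def guidance_command(v_g, L1_vec):
    """Lateral acceleration (Eq. 1.25) and bank angle (Eq. 1.27) commands.

    v_g    : 2D ground-velocity vector of the UAV (m/s)
    L1_vec : 2D vector from the UAV to the look-ahead point on the path (m)
    """
    # Signed sin(eta) from the 2D cross product, as in Eq. (1.26)
    cross = v_g[0] * L1_vec[1] - v_g[1] * L1_vec[0]
    sin_eta = cross / (np.linalg.norm(v_g) * np.linalg.norm(L1_vec))
    a_c = 2.0 * np.dot(v_g, v_g) / np.linalg.norm(L1_vec) * sin_eta   # Eq. (1.25)
    phi_cmd = np.arctan(a_c / G)                                      # Eq. (1.27)
    return a_c, phi_cmd

# Illustrative use: flying east at 18 m/s, look-ahead point 60 m ahead and 10 m to the left
print(guidance_command(np.array([18.0, 0.0]), np.array([60.0, 10.0])))
```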
CHAPTER 1 REFERENCES
Air Transport Association (ATA), European Aviation Safety Agency, Learning center (2008)
http://learningcenter.airlines.org/, Airworthiness certification of unmanned aircraft
systems (UAS). Policy Statement, E.Y013-01, 2009
O. Amidi, C. Thorpe, Integrated mobile robot control, in Proceedings of the SPIE, Boston,
vol. 1388, p. 504, 1991
R. Beard, T. McLain, Small Unmanned Aircraft: Theory and Practice (Princeton University Press, Princeton, 2012)
N. A. Chaturvedi, A.K. Sanyal, N.H. McClamroch, Rigid-body attitude control. IEEE Control
Syst. 31(3), 30–51, 2011
B. Etkin, L.D. Reid, Dynamics of Flight: Stability and Control, 3rd edn. (Wiley, New York, 1995)
Federal Aviation Administration (FAA), FAA System Safety Handbook, Appendix A,
Washington, D.C., 2000
Federal Aviation Administration, Unmanned aircraft systems operations in the U. S. National
airspace system. Interim Operational Approval Guidance 08-01, 2008
M. Fishpool, International military and civilian unmanned aerial vehicle survey. Technical report, Socolofi Research, 2010
E. Herlik, Unmanned Aerial Vehicles (UAVs) for commercial applications: global market & technologies outlook 2011–2016. Technical report, Market Intrel Group LLC, 2010
U.S. Department of Defense, Standard practice for system safety. MIL-STD-882D, 2000
P. G. Fahlstrom and T. J. Gleason, Introduction to UAV systems, Fourth ed., John Wiley &
Sons, Ltd, 2012. DOI:10.1002/9781118396780
D. Gebre-Egziabher, Design and performance analysis of low-cost aided dead reckoning
navigator. PhD thesis, Department of Aeronautics and Astronautics, Stanford
University, Stanford, 2004
S. Gleason, D. Gebre-Egziabher, GNSS Applications and Methods (Artech House, Boston),
2009
http://www.airportjournals.com/Display.cfm?varID=0609005. Accessed Apr 2008
J. Keller (Editor), Air Force asks General Atomics to upgrade UAV ground-control stations for use with the Internet, March 20, 2014. https://www.militaryaerospace.com/articles/2014/03/uav-control-upgrades.html
J.B. Kuipers, Quaternions and Rotation Sequences: A Primer with Applications to Orbits,
Aerospace, and Virtual Reality (Princeton University Press, Princeton, 2002)
M. Lizarraga, Design, implementation and flight verification of a versatile and rapidly
reconfigurable UAV GNC research platform. PhD thesis, Department of Computer
Engineering, University of California Santa Cruz, Santa Cruz, 2009
P. Misra, P. Enge, Global Positioning System, Signals, Measurements, and Performance
(GangaJamuna Press, Lincoln), 2001
S. Park, J. Deyst, J.P. How, A new nonlinear guidance logic for trajectory tracking, in AIAA
Guidance, Navigation and Control Conference and Exhibit, Portland, 2004
S. Park, J. Deyst, J.P. How, Performance and Lyapunov stability of a nonlinear path-
following guidance method. J. Guid. Control Dyn. 30(6), 1718, 2007
Range Safety Group, Range Commanders Council, Range safety criteria for unmanned air
vehicles – rationale and methodology supplement. Supplement to document 323-99,
1999
Range Safety Group, Range Commanders Council, Common risk criteria standards for
national test ranges: supplement. Supplement to document 321–07, 2007
R. Schultz, Ultralights, LSAs and kit airplanes – what’s the difference? Fla. Aviat. Bus. 2006
D. Simon, Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches (Wiley, Hoboken), 2006
SLUGS website, http://slugsuav.soe.ucsc.edu
Small Unmanned Aircraft System Aviation Rulemaking Committee, Comprehensive set of
recommendations for sUAS regulatory development, 2009
Joint Capability Group on Unmanned Aerial Vehicles, STANAG 4671 – Unmanned aerial vehicle systems airworthiness requirements (USAR). Draft, NATO Naval Armaments Group, 2007
D. Titterton, J. Weston, Strapdown Inertial Navigation Technology (Institution of Engineering
and Technology, Stevenage), 2004
U.S. Department of Defense Office of the Secretary of Defense, Unmanned systems
roadmap 2007–2032. Report, 2007
United States Air Force, Operational risk management (ORM) guidelines and tools. Air Force Pamphlet 91-215, 1998
K. P. Valavanis and G. J. Vachtsevanov. Handbook of Unmanned Aerial Vehicles, Springer
Netherlands 2014
P. van Blyenburgh, UAV systems: global review. Presented at the Avionics’06 conference,
Amsterdam, 2006
CHAPTER 8:
The initial state and the noise vectors at each step $\{x_0, w_1, \dots, w_k, v_1, \dots, v_k\}$ are all assumed
to be mutually independent.
The Kalman filter is a recursive estimator. This means that only the estimated state from the
previous time step and the current measurement are needed to compute the estimate for the
current state. In contrast to batch estimation techniques, no history of observations and/or
estimates is required. In what follows, the notation $\hat{x}_{n|m}$ represents the estimate of $x$ at time $n$ given observations up to, and including, time $m \le n$.
The state of the filter is represented by two variables:
$\hat{x}_{k|k}$, the a posteriori state estimate at time $k$ given observations up to and including time $k$;
$P_{k|k}$, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).
A block diagram of the Kalman filter is illustrated in Fig. 2.1.
The Kalman filter can be written as two distinct phases, "Predict" and "Update".
Predict (a priori estimates):
$$\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k, \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k$$
Update (a posteriori estimates):
$$\tilde{y}_k = z_k - H_k \hat{x}_{k|k-1}, \qquad S_k = H_k P_{k|k-1} H_k^T + R_k, \qquad K_k = P_{k|k-1} H_k^T S_k^{-1}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k, \qquad P_{k|k} = (I - K_k H_k) P_{k|k-1}$$
The formulas for the updated estimate and covariance above are only valid for the optimal Kalman gain.
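The following Python sketch shows one way the predict and update phases could be coded for a linear model. The matrix names follow the generic notation above, and the constant-velocity example at the end is an assumption used purely for illustration.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Predict (a priori) state estimate and covariance."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update (a posteriori) estimate using the optimal Kalman gain."""
    innov = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # optimal Kalman gain
    x = x_pred + K @ innov
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P, innov, S

# Illustrative 1D constant-velocity model: state = [position, velocity]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                      # only position is measured
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x_pred, P_pred = kf_predict(x, P, F, Q)
x, P, _, _ = kf_update(x_pred, P_pred, z=np.array([0.3]), H=H, R=R)
```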
Invariants
If the model is accurate, and the values for $\hat{x}_{0|0}$ and $P_{0|0}$ accurately reflect the distribution of the initial state values, then the following invariants are preserved (all estimates have a mean error of zero):
$$E[x_k - \hat{x}_{k|k}] = E[x_k - \hat{x}_{k|k-1}] = 0, \qquad E[\tilde{y}_k] = 0 \tag{2.3}$$
where $E[\cdot]$ is the expected value, and the covariance matrices accurately reflect the covariance of the estimates:
$$P_{k|k} = \mathrm{cov}(x_k - \hat{x}_{k|k}) \tag{2.4}$$
$$P_{k|k-1} = \mathrm{cov}(x_k - \hat{x}_{k|k-1}) \tag{2.5}$$
$$S_k = \mathrm{cov}(\tilde{y}_k) \tag{2.6}$$
The function f can be used to compute the predicted state from the previous estimate and
similarly the function h can be used to compute the predicted measurement from the
predicted state. However, f and h cannot be applied to the covariance directly. Instead a
matrix of partial derivatives (the Jacobian) is computed.
At each time step the Jacobian is evaluated with current predicted states. These matrices
can be used in the Kalman filter equations. This process essentially linearizes the non-linear
function around the current estimate.
Discrete-time predict and update equations
Predict:
$$\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_k), \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k$$
Update:
$$\tilde{y}_k = z_k - h(\hat{x}_{k|k-1}), \qquad S_k = H_k P_{k|k-1} H_k^T + R_k, \qquad K_k = P_{k|k-1} H_k^T S_k^{-1}$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k, \qquad P_{k|k} = (I - K_k H_k) P_{k|k-1}$$
where the state transition and observation matrices are defined to be the following Jacobians:
$$F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1},\, u_k} \tag{2.7}$$
$$H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}} \tag{2.8}$$
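Where analytic Jacobians are inconvenient, they can be approximated numerically. The sketch below, with an assumed constant-turn model used purely as an example, shows a finite-difference approximation that could stand in for the Jacobians of Eqs. (2.7) and (2.8); the same helper would be applied to the measurement function h.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func evaluated at x."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

# Example: linearize a simple 2D constant-turn model around an estimate
def f(x, dt=0.1, omega=0.05):
    px, py, v, psi = x
    return np.array([px + v * np.cos(psi) * dt,
                     py + v * np.sin(psi) * dt,
                     v,
                     psi + omega * dt])

F_k = numerical_jacobian(f, np.array([0.0, 0.0, 15.0, 0.3]))   # stands in for Eq. (2.7)
print(F_k)
```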
A set of $2L + 1$ sigma points is derived from the augmented state $\hat{x}^a_{k-1|k-1}$ and covariance $P^a_{k-1|k-1}$, where $L$ is the dimension of the augmented state:
$$\chi^0_{k-1|k-1} = \hat{x}^a_{k-1|k-1}, \qquad \chi^i_{k-1|k-1} = \hat{x}^a_{k-1|k-1} \pm \left(\sqrt{(L+\lambda)P^a_{k-1|k-1}}\right)_i, \quad i = 1, \dots, L \tag{2.9}$$
where $\lambda = \alpha^2(L+\kappa) - L$. The matrix square root should be calculated using numerically efficient and stable methods such as the Cholesky decomposition.
The sigma points are propagated through the transition function f:
$$\chi^i_{k|k-1} = f\left(\chi^i_{k-1|k-1}\right), \qquad i = 0, \dots, 2L \tag{2.10}$$
The weighted sigma points are recombined to produce the predicted state and covariance:
$$\hat{x}_{k|k-1} = \sum_{i=0}^{2L} W_s^i\, \chi^i_{k|k-1}, \qquad P_{k|k-1} = \sum_{i=0}^{2L} W_c^i \left(\chi^i_{k|k-1} - \hat{x}_{k|k-1}\right)\left(\chi^i_{k|k-1} - \hat{x}_{k|k-1}\right)^T \tag{2.11}$$
where the weights for the state and covariance are given by
$$W_s^0 = \frac{\lambda}{L+\lambda}, \qquad W_c^0 = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta), \qquad W_s^i = W_c^i = \frac{1}{2(L+\lambda)}.$$
$\alpha$ and $\kappa$ control the spread of the sigma points, and $\beta$ is related to the distribution of x. Normal values are $\alpha = 10^{-3}$, $\kappa = 0$, and $\beta = 2$. If the true distribution of x is Gaussian, $\beta = 2$ is optimal.
Update
The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise. As before, a set of $2L + 1$ sigma points is derived from the augmented state and covariance, where $L$ is the dimension of the state. Alternatively, if the UKF prediction has been used, the sigma points themselves can be augmented with additional points derived from the square root of the measurement noise covariance.
The weighted sigma points are recombined to produce the predicted measurement and
predicted measurement covariance.
As with the Kalman filter, the updated state is the predicted state plus the innovation
weighted by the Kalman gain,
And the updated covariance is the predicted covariance, minus the predicted measurement
covariance, weighted by the Kalman gain.
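A minimal Python sketch of the sigma-point generation and unscented transform described above is given below. It uses the non-augmented form for brevity, the function names are assumptions, and the parameter defaults simply repeat the typical values mentioned in the text.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2L+1 sigma points and their weights for mean x and covariance P."""
    L = len(x)
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)        # matrix square root (Cholesky)
    pts = [x] + [x + S[:, i] for i in range(L)] + [x - S[:, i] for i in range(L)]
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # mean weights
    Wc = Wm.copy()                                   # covariance weights
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    return np.array(pts), Wm, Wc

def unscented_transform(pts_f, Wm, Wc, noise_cov):
    """Recombine propagated sigma points into a predicted mean and covariance."""
    mean = Wm @ pts_f
    diff = pts_f - mean
    cov = diff.T @ np.diag(Wc) @ diff + noise_cov
    return mean, cov
```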
FDI may be performed by using observer/filter structures. The estimated states are assumed to be close to the true states until a fault occurs. After a fault takes place in the plant, actuators, or sensors, the estimated states will diverge from the true states, resulting in an anomaly: that is fault detection. If the location of the fault is determined, that is fault isolation. If the size of the fault can be estimated, that is fault identification. After the fault is diagnosed, the control signal is reconfigured such that the system is almost unaffected by the fault.
2.2.1. Active Fault-Tolerant Control of UAV Dynamics against Sensor-
Actuator Failures
UAVs have been attracting more attention during the last few years because of their
important contribution and cost-effective application in several tasks, such as surveillance,
search, rescue, remote sensing, geographic studies, and various military and security
applications. Modern high-performance aircraft are designed to accommodate system faults automatically in the event of various failures, such as control surface damage, actuator faults, and sensor faults.
Sensor Fault Detection and Isolation
Statistical Test for Fault Detection
The states of the UAV model can be estimated via a linear optimal Kalman filter (OKF). Two hypotheses are introduced:
H0: the system operates normally.
H1: a fault occurs in the system.
The use of the innovation approach is suitable for detecting sensor faults [4]. To detect the failures changing the mean of the innovation sequence, the following statistical function can be used:
$$\eta(k) = \sum_{j=k-M+1}^{k} \tilde{\Delta}^T(j)\, \tilde{\Delta}(j) \tag{2.12}$$
where $\tilde{\Delta}$ is the normalized innovation sequence of the OKF and $M$ is the width of the sliding window. This statistic has a $\chi^2$ distribution with $Ms$ degrees of freedom, where $s$ is the dimension of the innovation vector. If the level of significance $\alpha$ is selected as
$$P\left\{\chi^2 > \chi^2_{\alpha,Ms}\right\} = \alpha, \qquad 0 < \alpha < 1, \tag{2.14}$$
then the threshold value $\chi^2_{\alpha,Ms}$ can be found from the table of quantiles of the $\chi^2$ distribution. Hence, when hypothesis H1 is true, the statistical value of $\eta(k)$ will be greater than the threshold value $\chi^2_{\alpha,Ms}$, i.e.
$$H_0:\ \eta(k) \le \chi^2_{\alpha,Ms} \quad \forall k; \qquad H_1:\ \eta(k) > \chi^2_{\alpha,Ms} \quad \exists k. \tag{2.15}$$
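A possible implementation of this sliding-window test is sketched below. The use of the inverse innovation covariance to normalize the innovations and the default window length are assumptions consistent with the description above; scipy is used only to obtain the chi-square quantile.

```python
import numpy as np
from scipy.stats import chi2

def innovation_fault_test(innovations, S_list, M=20, alpha=0.05):
    """Detect a fault from the last M normalized innovations.

    innovations : list of innovation vectors (length >= M)
    S_list      : matching innovation covariance matrices
    Returns (statistic, threshold, fault_detected).
    """
    s = len(innovations[0])
    stat = 0.0
    for nu, S in zip(innovations[-M:], S_list[-M:]):
        stat += nu @ np.linalg.inv(S) @ nu       # normalized innovation magnitude
    threshold = chi2.ppf(1.0 - alpha, df=M * s)  # chi-square quantile with M*s dof
    return stat, threshold, stat > threshold
```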
2.2.2. Sensor Fault Isolation Algorithm
If the fault is detected, then it is necessary to determine which sensor set is faulty. For this purpose, the s-dimensional sequence $\tilde{\Delta}(k)$ is transformed into $s$ one-dimensional sequences $\tilde{\Delta}_k(i)$, $i = 1, 2, \dots, s$, to isolate the faulty sensor, and for each one-dimensional sequence a corresponding monitoring algorithm is run. The statistics of the faulty sensor set are assumed to be affected much more than those of the other sensors. Let the statistic be denoted as $S_k(i)$. When $\max\{S_k(i)\,/\,i = 1, 2, \dots, s\} = S_k(m)$ and $S_k(i) \ne S_k(j)$ for $i \ne j$, it is judged that the $m$-th sensor set is faulty.
The sample variance
$$\hat{\sigma}_k^2(i) = \frac{1}{M-1}\sum_{j=k-M+1}^{k}\left[\tilde{\Delta}_j(i) - \bar{\tilde{\Delta}}_k(i)\right]^2, \qquad i = 1, 2, \dots, s$$
is used, where $\bar{\tilde{\Delta}}_k(i) = \frac{1}{M}\sum_{j=k-M+1}^{k}\tilde{\Delta}_j(i)$ is the sample mean of the one-dimensional innovation sequences.
Let the statistic $\hat{\sigma}_k^2(i)/\sigma_k^2(i)$, which is the ratio of the sample and theoretical variances, be used to verify the variances of the one-dimensional innovation sequences $\tilde{\Delta}_k(i)$, $i = 1, 2, \dots, s$. When $\tilde{\Delta}_k(i) \sim N\!\left(0, \sigma_k^2(i)\right)$ it is known that [10]:
$$\frac{\mu_k(i)}{\sigma_k^2(i)} \sim \chi^2_{M-1}, \qquad \forall i,\ i = 1, 2, \dots, s \tag{2.16}$$
where $\mu_k(i) = (M-1)\,\hat{\sigma}_k^2(i)$.
Using Eq. (2.17), it can be shown that any change in the mean of the normalized innovation sequence can be detected. When a fault affecting the mean or variance of the innovation sequence occurs in the system, the statistic $S_k(i)$ exceeds the threshold value $\chi^2_{\alpha,M-1}$, which depends on the level of significance $\alpha$ and the $M-1$ degrees of freedom. The decision making for isolation is done as follows: if hypothesis H1 is true, $S_k(i) \ne S_k(j)$ for $i \ne j$, and $\max\{S_k(i)\,/\,i = 1, 2, \dots, s\} = S_k(m)$, then it is judged that there is a fault in the $m$-th channel.
2.2.3. Adaptive TSKF Algorithm for State Estimation and Actuator Fault
Identification
Based on the linearized model of the open-loop aircraft around a trim point, and a
parameterization of two types of actuator faults, the following discrete time model is used as
the design model of the adaptive TSKF (Two Stage Kalman Filter):
(2.18)
where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^l$, and $y_{k+1} \in \mathbb{R}^s$ are the state, control input, and output variables, respectively; and $\gamma_k$ and $\beta_k$ are bias vectors of dimension $l$ representing faults entering the actuators. The noise sequences $w_k^x$, $w_k^\gamma$ and $v_k$ are assumed to be zero-mean uncorrelated white Gaussian noise sequences with
$$E\left\{\begin{bmatrix} w_k^x \\ w_k^\gamma \\ v_k \end{bmatrix}\begin{bmatrix} (w_j^x)^T & (w_j^\gamma)^T & v_j^T \end{bmatrix}\right\} = \begin{bmatrix} Q^x & 0 & 0 \\ 0 & Q^\gamma & 0 \\ 0 & 0 & R \end{bmatrix}\delta_{kj} \tag{2.19}$$
where $Q^x \ge 0$, $Q^\gamma \ge 0$, $R > 0$ and $\delta_{kj}$ is the Kronecker delta. The initial states $x_0$ and $\gamma_0$ are assumed to be uncorrelated with the white noise processes $w_k^x$, $w_k^\gamma$ and $v_k$, and have covariances $P_0^x$ and $P_0^\gamma$, respectively.
The components
(2.20)
of the bias vector $\gamma_k$ describe the percentage reduction in control effectiveness when the terms $B_k u_k + E_k \gamma_k$ are considered together, where
(2.21)
The no-fault case is represented by
(2.22)
a $100 k_i\,\%$ loss of control effectiveness in the $i$-th actuator is represented by
(2.23)
and a control surface stuck at magnitude $\beta_{ki}$ degrees is represented by
(2.24)
2.2.4. Simulation Results
The detailed simulation results are presented in [4].
2.2.4.1. Simulation Results for Sensor/Actuator Fault Isolation
A pitch rate gyro fault is assumed to occur at t = 20 s. The fault in the pitch rate gyro is simulated by adding a constant bias of 0.01 rad/s. By means of the TSKF, the squared error between the actual states and the estimated states is plotted in Fig. 2.3.
Figure 2.3 Squared error between the actual states and the estimated states.
As the TSKF is not sensitive to actuator faults but is sensitive to sensor faults, the jump in the graph at t = 20 s reveals a sensor fault (the residual exceeds the selected threshold value of 0.4×10⁻⁶).
2.2.4.2. Simulation Results for Sensor Fault Isolation
A Kalman Filter is used to detect and isolate the sensor faults. In the sensor fault isolation
simulations two scenarios are examined;
a. Fault in the sensor measuring the longitudinal state (pitch rate) of the UAV
b. Fault in the sensor measuring the lateral state (roll rate) of the UAV
The graphs of the statistics of Eq. (2.17) when a fault in the pitch rate gyro has occurred are shown in Fig. 2.4. In the graphs, $S_i$ ($i = 1, 2, \dots, 9$) denotes the isolation statistic of the $i$-th channel.
Figure 2.4 Fault isolation in the pitch rate gyro
After sensor fault detection, the decision making for sensor fault isolation is done via the statistics of Eq. (2.17) as follows: since $S_k(i) \ne S_k(j)$ for $i \ne j$ and $\max\{S_k(i)\,/\,i = 1, 2, \dots, 9\} = S_k(3)$, it is judged that there is a fault in the 3rd channel, i.e., in the pitch rate gyro.
The surface fault parameter vector is $[\gamma_1\ \gamma_2\ \gamma_3\ \gamma_4\ \gamma_5\ \gamma_6\ \gamma_7\ \gamma_8]^T = [\gamma_1\ \gamma_2\ \gamma_3\ \gamma_4\ \gamma_1\ \gamma_2\ \gamma_3\ \gamma_4]^T$. The actuator faults are assumed to occur symmetrically, i.e. the right and left elevators are considered to move together.
Fault Scenario 1: Stuck fault in the elevator at 0.05 (rad) at 20 seconds.
As the TSKF is not sensitive to actuator stuck faults, there is no jump at t = 20 s in the graph, in contrast to the sensor fault case, as seen in Fig. 2.6. The error is very small.
Figure 2.6 Squared Residual between Actual States and Estimated States in the case of Elevator Stuck Fault
Figure 2.7 Actual state variables and estimations when a stuck fault in the elevator occurred at 20 seconds. Since the actual and estimated state variables are very close, the two curves appear as one.
In Fig. 2.7, it can be seen that the states are estimated correctly before and after occurrence
of the stuck fault in the elevator.
From Fig. 2.8, it can be concluded that a stuck actuator fault with magnitude 0.7 rad has occurred in the stabilizer, because the estimated effectiveness parameter of that channel becomes −1 and the estimated stuck magnitude is 0.7 rad. When the effectiveness parameter becomes −1, the estimated stuck magnitude in the corresponding control channel can be obtained.
Figure 2.8 Actual Inputs and Estimations while the Elevator Stuck at 0.7 rad. at 20 seconds
Figure 2.9 Squared Residual between Actual States and Estimated States in the case of Actuator Partial Loss.
In Fig. 2.10, it can be seen that the states are estimated correctly before and after
occurrence of the partial loss in the elevator.
Figure 2.10 Actual state variables and estimations when a 50% partial loss in the elevator occurred at 20 seconds. Since the actual and estimated state variables are very close, the two curves appear as one.
In Fig. 2.11, the estimated effectiveness-loss parameter of the elevator channel converges to 0.5 and the estimated stuck magnitude to 0, which means that a 50% loss in the elevator has occurred.
Figure 2.11 Actual inputs and estimations when 50% partial loss in the elevator occurred at 20 seconds.
Therefore, a Robust Kalman Filter algorithm, which brings fault tolerance to the filter and secures accurate estimation results in the case of faulty measurements without degrading the remaining good estimation characteristics, should be introduced.
The basis of the Robust Kalman Filter (RKF) is the comparison of the real and theoretical values of the covariance of the innovation sequence [5]. When the operational condition of the measurement system mismatches the models used in the synthesis of the filter, the Kalman filter gain is changed according to the change in the covariance matrix of the innovation sequence. Under these circumstances, the covariance matrix of the innovation sequence becomes
$$P_{e_k} = H_k P_{k/k-1} H_k^T + S_k R_k \tag{2.25}$$
and the Kalman gain becomes
$$K_k = P_{k/k-1} H_k^T \left(H_k P_{k/k-1} H_k^T + S_k R_k\right)^{-1} \tag{2.26}$$
With this approach, the Kalman gain is changed when the predicted observation $H_k \hat{x}_{k/k-1}$ is considerably different from the actual observation $y_k$ because of significant changes in the operational condition of the measurement system. In other words, the scaling is applied if the real value of the filtration error exceeds the theoretical error. In order to determine the scale factor $S_k$, Eq. (2.25) is substituted into Eq. (2.27), noting that the scale factor should be calculated at the point where this condition is satisfied. Then, in light of the equality $\mathrm{tr}\{e_k e_k^T\} = e_k^T e_k$, $S_k$ can be written as
$$S_k = \frac{e_k^T e_k - \mathrm{tr}\left\{H_k P_{k/k-1} H_k^T\right\}}{\mathrm{tr}\{R_k\}} \tag{2.29}$$
Note that, due to the scale factor $S_k$, the covariance of the estimation error of the RKF increases in comparison with the OKF. Therefore, the robust algorithm is operated only when the measurements are faulty; in all other cases the procedure is run optimally with the regular Kalman filter.
In order to determine the scale matrix, an innovation-based process may be followed. It is known that the Kalman filter innovation sequence can be determined by Eq. (2.27). Then, as the next step, the real and theoretical values of the innovation covariance matrix must be compared. When there is a measurement malfunction in the estimation system, the real error will exceed the theoretical one. Hence, a scale matrix $S_k$ is added into the algorithm as
$$\frac{1}{M}\sum_{j=k-M+1}^{k} e_j e_j^T = H_k P_{k/k-1} H_k^T + S_k R_k \tag{2.30}$$
$$S_k = \left[\frac{1}{M}\sum_{j=k-M+1}^{k} e_j e_j^T - H_k P_{k/k-1} H_k^T\right] R_k^{-1} \tag{2.31}$$
In the case of normal operation, the scale matrix will be a unit matrix, $S_k = I$. Here $I$ represents the unit matrix.
Nonetheless, since the window contains only a limited number of measurements and the computations performed on a computer involve errors such as approximation and round-off errors, the matrix $S_k$ found by the use of Eq. (2.31) may not be diagonal and may have diagonal elements that are negative or less than one (which is physically impossible). Therefore, in order to avoid such a situation, composing the scale matrix by the following rule is suggested:
$$S_k^* = \mathrm{diag}\left\{s^{11}, s^{22}, \dots, s^{ss}\right\}, \qquad s^{ii} = \max\left\{1,\ S_k^{ii}\right\}$$
Here, $S_k^{ii}$ represents the $i$-th diagonal element of the matrix $S_k$. Apart from that point, if the measurements are faulty, $S_k$ will change and so affect the Kalman gain matrix:
$$K_k = P_{k/k-1} H_k^T \left(H_k P_{k/k-1} H_k^T + S_k R_k\right)^{-1} \tag{2.34}$$
Therefore, in the case of any kind of malfunction, the related element of the scale matrix, which corresponds to the faulty component of the measurement vector, increases, and that brings out a smaller Kalman gain, which reduces the effect of the innovation on the state update process. As a result, more accurate estimation results can be obtained.
Since the proposed RKF is not optimal due to the scale matrix $S_k$, the robust algorithm is again operated only when the measurements are faulty; in all other cases the procedure is run optimally with the regular OKF.
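The sketch below illustrates a single measurement noise scale factor of the form of Eq. (2.29) applied to the gain of Eq. (2.26). The function name is an assumption, and clipping the factor at one, so that the filter stays optimal for healthy measurements, reflects the discussion above.

```python
import numpy as np

def robust_gain(P_pred, H, R, e):
    """Compute S_k from Eq. (2.29) for the current innovation e, then form
    the Kalman gain as in Eq. (2.26). S_k is clipped at 1 so the filter
    remains the regular OKF when measurements are healthy."""
    S_k = (e @ e - np.trace(H @ P_pred @ H.T)) / np.trace(R)
    S_k = max(1.0, S_k)                           # robust branch only when needed
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + S_k * R)
    return K, S_k
```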
Figure 2.12 Estimation result for velocity component “u” by regular OKF in case of measurement noise increment.
Figure 2.13 Estimation Result for Velocity Component “u” by RKF with SMNSF in case of Measurement Noise Increment.
Figure 2.14 Estimation result for velocity component “u” by RKF with MMNSF in case of measurement noise increment.
Table 2.1 below presents the variation of the scale factor and the scalar measures of the scale matrix.
Table 2.1 MNSF Investigation in case of Measurement Noise Increment
$$\Lambda_k = \left[\frac{1}{M}\sum_{j=k-M+1}^{k} e_j e_j^T - R_k\right]\left[F_k P_{k-1/k-1} F_k^T + G_k Q_k G_k^T\right]^{-1} \tag{2.37}$$
In a similar manner to the R adaptation, the obtained fading matrix should be diagonalized, since the Q matrix must be a diagonal, positive definite matrix:
$$\lambda^{i*} = \max\left\{1,\ \Lambda_k^{ii}\right\}, \qquad i = 1, \dots, n. \tag{2.39}$$
Here, $\Lambda_k^{ii}$ represents the $i$-th diagonal element of the matrix $\Lambda_k$. Apart from that point, if there is an actuator fault in the system, $\Lambda_k$ must be included in the estimation process as,
This statistical function has a $\chi^2$ distribution with $s$ degrees of freedom, where $s$ is the dimension of the innovation vector. If the level of significance $\alpha$ is selected as
$$P\left\{\chi^2 > \chi^2_{\alpha,s}\right\} = \alpha, \qquad 0 < \alpha < 1, \tag{2.42}$$
the threshold value $\chi^2_{\alpha,s}$ can be found. Hence, when the hypothesis $H_1$ is correct, the statistical value of $\eta(k)$ will be greater than the threshold value $\chi^2_{\alpha,s}$, i.e.:
$$H_0:\ \eta(k) \le \chi^2_{\alpha,s} \quad \forall k; \qquad H_1:\ \eta(k) > \chi^2_{\alpha,s} \quad \exists k. \tag{2.43}$$
On the other hand, after detecting a fault in the system, the key point is determining the type of the fault (either a sensor or an actuator fault). When a regular KF is used, the decision statistics change regardless of whether the fault is in the sensors or in the actuators.
If an RKF based on the Doyle-Stein condition is used, it is easy to distinguish the sensor and actuator faults. The KF that satisfies the Doyle-Stein condition is referred to in [8,9] as the RKF insensitive to actuator failures and may be used for fault isolation as well in our study. Then, the appropriate adaptation (the R or Q adaptation) may be applied. The Doyle-Stein condition is
$$K\left[I + H\Phi(s)K\right]^{-1} = B_s\left[H\Phi(s)B_s\right]^{-1} \tag{2.44}$$
Here $K$ is the filter gain, $I$ is the unit matrix, $H$ is the system measurement matrix, $B_s$ is the control distribution matrix in continuous time, and $\Phi(s) = (sI - F_s)^{-1}$, where $F_s$ is the system matrix in continuous time.
The RKF is very useful for isolating the sensor and control surface failures, as it is insensitive to the latter. If the KF process noise covariance matrix is chosen as
$$Q(q_r) = Q + q_r^2 B_s V B_s^T \tag{2.45}$$
then the RKF is obtained. Here $Q$ is the process noise covariance matrix for the nominal plant, $q_r$ is a parameter that approaches infinity, and $V$ is any positive definite symmetric matrix.
If a sensor fault occurs, the R adaptation is realized; otherwise the procedure is continued with the Q adaptation. The algorithm architecture given in Fig. 2.15 summarizes the integrated R and Q self-adaptation procedures. This new filter, in which the robust and self-adaptive KFs are integrated, may be called the RAKF.
Figure 2.15 System architecture for the RAKF algorithm
Figure 2.16 “u” velocity estimation via OKF in case of actuator and noise increment sensor fault.
Figure 2.17 “u” velocity estimation via RAKF in case of actuator and noise increment sensor fault.
CHAPTER 2 REFERENCES
[1] Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems".
Journal of Basic Engineering 82 (1): 35–45.
[2] Mogens Blanke, Christian Frei, Franta Kraus, J.Ron Patton, Marcel Staroswiecki (2000).
“What is Fault-Tolerant Control?” IFAC Proceedings, Volume 33, Issue 11, June 2000,
Pages 41-52.
[3] R. J Patton (1997). Fault-tolerant control: the 1997 situation, IFAC Fault Detection,
Supervision and Safety for Technical Processes, Kingston Upon Hull, UK.
[4] Hajiyev, Ch. M., and Caliskan, F. (2005). “Sensor and control surface/actuator failure
detection and isolation applied to F-16 flight dynamic”. Aircr. Eng. Aerosp. Technol., 77(2),
152 – 160.
[5] Hajiyev, C. (2007). “Adaptive Filtration Algorithm with the filter gain correction applied to
integrated INS/Radar altimeter”. Proceedings of the Institution of Mechanical Engineers, Part
G: Journal of Aerospace Engineering, 221, 847-885.
[6] Geng, Y., and Wang, J. (2008). “Adaptive estimation of multiple fading factors in Kalman
filter for navigation applications”. GPS Solutions, 12(4), 273-279. DOI: 10.1007/s10291-007-
0084-6
[7] Matthews, J.S. (2006). “Adaptive Control of Micro Air Vehicles, M.Sc. Thesis,
Department of Electrical and Computer Engineering”. Brigham Young University, Utah, USA,
December 2006.
[8] Hajiyev, C., and Caliskan, F. (2000). “Sensor/actuator fault diagnosis based on statistical
analysis of innovation sequence and robust Kalman filtering”. Aerospace Science and
Technology, Vol. 4, pp. 415-422. DOI: 10.1016/S1270-9638(00)00143-7.
CHAPTER 9:
Graphs that follow the asymmetric design are called directed, and graphs that follow the symmetric design are called undirected. In a directed graph, the direction of the edge linking points i and j tells us which of the two points is responsible for keeping the distance constant: the point at the tail of the edge (the “Follower”) follows the point the edge points to (the “Leader”) [13].
3.4.3. Rigid and Persistent Formation Definitions
A formation in which the distances d_ij between vehicle pairs (A_i, A_j) are kept constant is called rigid. If every vehicle in a formation satisfies its own distance constraints, the formation is called constraint consistent. If a formation is both rigid and constraint consistent, it is called persistent. If a formation is persistent with the minimum possible number of links (edges), |E| = 3|V| − 6, it is called minimally persistent. Examples of these formations are shown in Fig. 3.3 [13].
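The edge-count condition for minimal persistence is easy to check; the sketch below verifies only this necessary counting condition (rigidity and constraint consistency require the full graph structure):

```python
def has_minimal_persistence_edge_count(num_vehicles: int, num_edges: int) -> bool:
    """Necessary condition |E| = 3|V| - 6 for a minimally persistent
    formation in three dimensions."""
    return num_edges == 3 * num_vehicles - 6

print(has_minimal_persistence_edge_count(4, 6))  # True: 3*4 - 6 = 6
print(has_minimal_persistence_edge_count(5, 8))  # False: 5 vehicles need 9 edges
```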
You can see the visualisation of the three-sphere intersection in Fig. 3.4. These spheres can be considered as centered at s1 = (0, 0, 0), s2 = (x2, 0, 0), and s3 = (x3, y3, 0). The intersection B(s1, r1) ∩ B(s2, r2) ∩ B(s3, r3) of the three spheres S(s1, r1), S(s2, r2), and S(s3, r3) is assumed to be a non-empty set. The equations of the sphere surfaces are as follows:
x² + y² + z² = r1²   (3.1)
(x − x2)² + y² + z² = r2²   (3.2)
(x − x3)² + (y − y3)² + z² = r3²   (3.3)
The intersection coordinates x_I, y_I, and z_I are calculated as follows [13, 26]:
x = x_I = (r1² − r2² + x2²) / (2x2)   (3.4)
y² + z² = r1² − x_I²   (3.5)
y_I = (r1² − r3² + x3² + y3²) / (2y3) − (x3 / y3) x_I   (3.6)
z_I = ± √(r1² − x_I² − y_I²)   (3.7)
For sphere centers p1, p2, and p3 in general positions, a rigid coordinate transformation T maps them to the canonical centers s1, s2, s3 above:
[s_i; 1] = T [p_i; 1],  ∀i ∈ {1, 2, 3}   (3.8)
T = [R  −p̄; 0  1],  R = [R_1; R_2; R_3],  p̄ = R p1   (3.9)
R_1 = (p2 − p1)ᵀ / ‖p2 − p1‖,  R_2 = vᵀ / ‖v‖,  R_3 = R_1 × R_2,   (3.10)
where v = (p3 − p1) − [(p3 − p1)ᵀ R_1ᵀ] R_1ᵀ is the component of p3 − p1 orthogonal to R_1. The inverse transformation is
T⁻¹ = [Rᵀ  Rᵀp̄; 0  1]   (3.11)
The intersection of S(p1, r1), S(p2, r2), and S(p3, r3), illustrated in Fig. 3.4, can then be found with the following steps: transform the centers into the canonical frame with T, compute the intersection point (x_I, y_I, z_I) from Eqs. (3.4)–(3.7), and transform the result back to the original frame:
[P_I; 1] = T⁻¹ [x_I, y_I, z_I, 1]ᵀ   (3.12)
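A minimal NumPy sketch of this three-sphere intersection procedure is given below; the function and variable names are illustrative, and the sign argument selects between the two mirror-image solutions of Eq. (3.7):

```python
import numpy as np

def sphere_intersection(p1, p2, p3, r1, r2, r3, sign=+1):
    """Intersection point of the spheres S(p1,r1), S(p2,r2), S(p3,r3),
    following Eqs. (3.1)-(3.12): build the canonical frame, solve the
    trilateration equations, and transform the result back."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Rotation rows: x-axis along p2 - p1, y-axis in the plane of p3 - p1
    R1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    v = (p3 - p1) - np.dot(p3 - p1, R1) * R1
    R2 = v / np.linalg.norm(v)
    R3 = np.cross(R1, R2)
    R = np.vstack((R1, R2, R3))
    # Canonical sphere centers: s1 = 0, s2 = (x2,0,0), s3 = (x3,y3,0)
    x2 = np.linalg.norm(p2 - p1)
    x3, y3, _ = R @ (p3 - p1)
    # Eqs. (3.4)-(3.7)
    xI = (r1**2 - r2**2 + x2**2) / (2 * x2)
    yI = (r1**2 - r3**2 + x3**2 + y3**2) / (2 * y3) - (x3 / y3) * xI
    zI = sign * np.sqrt(max(r1**2 - xI**2 - yI**2, 0.0))
    # Transform back to the original frame, Eq. (3.12)
    return p1 + R.T @ np.array([xI, yI, zI])

# Example: unit-distance apex above an equilateral triangle of side 1
print(sphere_intersection((0, 0, 0), (1, 0, 0), (0.5, 0.866, 0), 1, 1, 1))
```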
We aim not only to create trajectories for n ≥ 4 vehicles but also to keep the distances between them constant and to prevent them from colliding. To reach this goal, the first three vehicles are chosen as main vehicles and named the leader, first follower, and second follower. For every follower A_i, the trajectory can be written in terms of relative positions as:
p_i^(j)(t) = (x_i^(j)(t), y_i^(j)(t), z_i^(j)(t)) = p_i(t) − p_j(t),  j ∈ {s, p, h}   (3.13)
p_i^d(t) = p̄_i(t)   (3.14)
where s, p, and h are the vehicles that A_i follows, and p̄_i(t) is the intersection point of the spheres S(p_s, d_is), S(p_p, d_ip), and S(p_h, d_ih) that is closest to p_i. S(·, ·) is used to represent a sphere, where the first argument is the center of the sphere and the second argument is its radius.
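Assuming the sphere_intersection sketch given earlier, the desired position of an ordinary follower can be generated from the positions of the three vehicles it follows and the corresponding distance constraints; the helper below is hypothetical and simply picks the intersection branch closest to the follower's current position:

```python
import numpy as np

def desired_position(p_s, p_p, p_h, d_is, d_ip, d_ih, p_i_current):
    """Candidate desired positions are the two sphere-intersection points
    (Eq. (3.14)); the one closest to the current position is selected."""
    candidates = [sphere_intersection(p_s, p_p, p_h, d_is, d_ip, d_ih, sign=s)
                  for s in (+1, -1)]
    return min(candidates,
               key=lambda c: np.linalg.norm(c - np.asarray(p_i_current, dtype=float)))
```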
In this section we describe the formation and its hierarchy in more detail, starting by modelling every individual vehicle A_i.
Our distributed control system is designed for vehicles A_i (ignoring their geometric properties for simplicity) modelled by the following single-velocity-integrator kinematics:
Ṗ_i(t) = V_i(t)   (3.18)
‖V_i(t)‖ ≤ ν̄   (3.19)
where P_i(t) = [x_i(t), y_i(t), z_i(t)]ᵀ is the position and V_i(t) = [V_xi(t), V_yi(t), V_zi(t)]ᵀ ∈ ℝ³ is the velocity of the center of mass of the vehicle A_i at time t.
Since our formation design has four main types of vehicles (leader, first follower, second follower, and nth follower), and these vehicle types have different degrees of freedom (DoF), we need to design four different controllers. They share the general form
V_i(t) = σ_i(t) ν̄ p̃_i(t) / ‖p̃_i(t)‖,  where p̃_i(t) = p̄_i(t) − p_i(t)   (3.20)
σ_i(t) = 0                          if ‖p̃_i(t)‖ < ε
σ_i(t) = (‖p̃_i(t)‖ − ε) / ε        if ε ≤ ‖p̃_i(t)‖ ≤ 2ε   (3.21)
σ_i(t) = 1                          if ‖p̃_i(t)‖ > 2ε
where A_1 is the leader, A_2 is the first follower, A_3 is the second follower, and A_i is the ith follower, which follows A_j, A_k, and A_l (j, k, l < i); ε > 0 is a small design constant, ν̄ is the constant maximum velocity, and σ_i(t) allows a small tolerable error in A_i's final position.
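A short Python sketch of this velocity law (Eqs. (3.20)-(3.21)) is given below; the parameter values are illustrative:

```python
import numpy as np

def velocity_command(p_i, p_des, v_max=1.0, eps=0.01):
    """Velocity law of Eq. (3.20) with the smoothing factor of Eq. (3.21)."""
    err = np.asarray(p_des, dtype=float) - np.asarray(p_i, dtype=float)
    dist = np.linalg.norm(err)
    if dist < eps:                 # inside the tolerance ball: stop
        sigma = 0.0
    elif dist <= 2 * eps:          # smooth ramp between eps and 2*eps
        sigma = (dist - eps) / eps
    else:
        sigma = 1.0
    if dist == 0.0:
        return np.zeros(3)
    return sigma * v_max * err / dist

print(velocity_command([0, 0, 0], [1, 1, 0]))  # full speed toward the target
```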
In this section we first discuss quadrotor dynamic modelling and then design control laws using Eqs. (3.13)−(3.17).
To calculate the applied force F_i ∈ ℝ³ and moment M_i ∈ ℝ³ for every vehicle A_i, we use the following nonlinear dynamic model [3], [13], [14]:
F_i = m_i p̈_i   (3.23)
M_i = I_i ẇ_i + w_i × I_i w_i   (3.24)
where R_j is the rotation matrix from the plane of rotor j to body coordinates and r_j is the position vector of rotor j with respect to the center of gravity.
Eq. (3.23) and Eq. (3.26) form the nonlinear dynamics of the quadrotor vehicles. An alternative simplified dynamic model is as follows:
m z̈ = u_1 − m g   (3.27)
m ÿ = u_1 sin φ ≅ u_1 φ   (3.28)
m ẍ = −u_1 sin θ ≅ −u_1 θ   (3.29)
where φ and θ are the roll angle and pitch angle, respectively, and u_1 represents the total thrust on the body along the z-axis.
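The simplified model can be integrated directly; the Euler-step sketch below illustrates Eqs. (3.27)–(3.29), with the mass, gravity, and step size chosen as example values:

```python
# Euler-integration sketch of the simplified model in Eqs. (3.27)-(3.29);
# phi and theta are small angles in radians (small-angle assumption).
def simplified_step(state, u1, phi, theta, m=1.0, g=9.81, dt=0.01):
    """state = (x, y, z, vx, vy, vz); returns the state after one step."""
    x, y, z, vx, vy, vz = state
    ax = -u1 * theta / m          # Eq. (3.29)
    ay = u1 * phi / m             # Eq. (3.28)
    az = (u1 - m * g) / m         # Eq. (3.27)
    return (x + vx * dt, y + vy * dt, z + vz * dt,
            vx + ax * dt, vy + ay * dt, vz + az * dt)

# Hover example: thrust balances gravity, so the state stays at rest
print(simplified_step((0, 0, 1, 0, 0, 0), u1=9.81, phi=0.0, theta=0.0))
```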
In this section we adapt the attitude and altitude control laws for a single quadrotor presented in [3]. The attitude control loop can adapt to fast-changing commands and balances the quadrotor attitude. The altitude and attitude loops use the thrust of the vehicle to create the vertical and lateral accelerations.
At small attitude angles (up to ±30°) and low angular speeds, the equations of motion can be decoupled about each attitude axis. Hence, the effective control inputs u_φ, u_θ, and u_ψ about each axis (roll, pitch, and yaw) can be considered to be applied independently [3,14]. The effective control inputs for each axis can then be combined to generate the corresponding thrust commands as follows:
T_1 = −u_φ + u_θ + u_ψ ,
T_2 = u_φ − u_θ + u_ψ ,   (3.30)
T_3 = u_φ + u_θ + u_ψ ,
T_4 = −u_φ − u_θ + u_ψ
that need to be applied to rotors 1, ⋯ , 4, respectively. These thrust commands are the
control inputs of the quadrotor vehicle’s attitude and trajectory tracking control system. The
time delay in thrust can be approximated as a first-order delay and has to be considered in
controlling each attitude angle. The linearized transfer function for the roll axis is as follows
[3]:
Φ(s) / U_φ(s) = l / [ I_xx s² (τ s + 1) ]   (3.31)
where I_xx is the roll-axis component of the inertia matrix I, l is the distance of each rotor to the center of gravity, and τ is the thrust time delay. The transfer functions for the pitch and yaw axes can be obtained in a similar way.
where Φ_d(s), k, and p are the desired roll angle, the double-derivative gain, and the filtering term, respectively. p determines the pole locations of the filter in the derivative and double-derivative actions. The coefficients of C_φ(s) are tuned by applying root-locus graphical tools to the simplified model presented in this section and the transfer function in Eq. (3.31) to provide faster and more accurate performance. The corresponding control laws for all angles φ, θ, and ψ (in the body frame) are obtained as follows:
u_φ = C_φ(s) (φ_d − φ)   (3.34)
u_θ = C_θ(s) (θ_d − θ)   (3.35)
u_ψ = C_ψ(s) (ψ_d − ψ)   (3.36)
For the implementation of the effective controllers, rate gyros are used as measurement units, and the measured data are then filtered with the help of standard filtering techniques. At each time t, T_1(t), ..., T_4(t) are obtained by processing u_φ, u_θ, and u_ψ.
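As an illustration, the sketch below applies the mixing of Eq. (3.30) to obtain the four rotor thrust commands; the addition of a common nominal thrust T0 (so the commands also carry the vehicle weight) and the numerical values are assumptions for the example:

```python
import numpy as np

# Mixing matrix corresponding to Eq. (3.30): rows map (u_phi, u_theta, u_psi)
# to the thrust commands T1..T4.
MIX = np.array([[-1.0,  1.0, 1.0],
                [ 1.0, -1.0, 1.0],
                [ 1.0,  1.0, 1.0],
                [-1.0, -1.0, 1.0]])

def rotor_thrusts(u_phi, u_theta, u_psi, T0=0.0):
    """Thrust commands T1..T4; T0 is an assumed nominal per-rotor thrust."""
    return T0 + MIX @ np.array([u_phi, u_theta, u_psi])

print(rotor_thrusts(0.1, 0.0, 0.0, T0=2.45))  # small roll command near hover
```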
In a similar way, the altitude controller is derived for the stabilization of the quadrotor
altitude during tracking actions. The accelerations are obtained with accelerometers, and
feedback linearization is used to compensate for the gravity offset and thrust deflection.
The operating range of the controller is limited to ±30° due to the plant model. The following proportional-integral-derivative (PID) control law is proposed for altitude tracking:
u_1 = k_d (ż_d − ż) + k_p (z_d − z) + k_i ∫ (z_d − z) dt + T_0   (3.37)
where z is the altitude, z_d is the reference altitude, and T_0 is the nominal thrust. The quadrotor altitude dynamics is modeled by a second-order transfer function obtained by the linearization of Eq. (3.27) together with Eq. (3.26) and Eq. (3.30).
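A discrete-time sketch of the altitude PID law in Eq. (3.37) is shown below; the gains, nominal thrust, and sampling time are illustrative values rather than tuned parameters:

```python
class AltitudePID:
    """Discrete-time form of the altitude control law in Eq. (3.37)."""
    def __init__(self, kp=4.0, ki=0.5, kd=2.5, T0=9.81, dt=0.01):
        self.kp, self.ki, self.kd, self.T0, self.dt = kp, ki, kd, T0, dt
        self.integral = 0.0

    def thrust(self, z, z_dot, z_ref, z_ref_dot=0.0):
        err = z_ref - z
        self.integral += err * self.dt
        return (self.kd * (z_ref_dot - z_dot)   # derivative term
                + self.kp * err                 # proportional term
                + self.ki * self.integral       # integral term
                + self.T0)                      # nominal thrust offset

pid = AltitudePID()
print(pid.thrust(z=0.8, z_dot=0.0, z_ref=1.0))  # thrust above hover to climb
```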
The reference roll and pitch angles are generated from the lateral position errors by a PID-type filter U(s):
U(s) = k_p + k_i / s + k_d s   (3.38)
φ_d = U(s) (y_d − y)   (3.39)
θ_d = U(s) (x_d − x)   (3.40)
ψ_d = 0   (3.41)
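A proportional-only sketch of this reference-angle generation (Eqs. (3.39)–(3.41)) is given below; the gain value is an assumption, and the integral and derivative terms of U(s) are omitted for brevity:

```python
def reference_angles(x, y, x_des, y_des, k_p=0.3):
    """Map lateral position errors to desired roll/pitch angles."""
    phi_des = k_p * (y_des - y)    # Eq. (3.39)
    theta_des = k_p * (x_des - x)  # Eq. (3.40)
    psi_des = 0.0                  # Eq. (3.41)
    return phi_des, theta_des, psi_des

print(reference_angles(0.0, 0.0, 1.0, -0.5))
```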
It is assumed that each quadrotor agent A_i knows its current position p_i(t), and its desired position p_i^d(t) is generated online using Eq. (3.14) – Eq. (3.17). At each time t, A_i takes its current position p_i(t) as the last visited waypoint and the desired position p_i^d(t) as the next waypoint. Based on this setting, the low-level control scheme of A_i is shown in Fig. 3.6. In this scheme, the reference angle generator is given by Eq. (3.38) – Eq. (3.41), and the attitude and altitude controller is given by Eq. (3.30) and Eq. (3.33) – Eq. (3.36), where z_d is taken as the z-component of p_i^d(t) at each time instant t and u_ψ is set to zero.
Considering the linearized model of the quadrotor as in Eq. (3.31) and the PDD control law in Eq. (3.32), it can be seen that the closed-loop system is always stabilized with a
zero steady-state error. A generic proof of the asymptotic stability of PID type controllers
utilizing the Lyapunov-based event triggering scheme can be found in [15]. The tuning of
quadrotor control parameters is achieved using root locus for finding a stable and desired
step response similar to [15]. A summary of theoretical tracking and error performance
analysis of PID-based quadrotor controllers is provided in [15] and [16].
Fig 3.8: Quadrotor simulation: distances between vehicles (𝒑𝟏 : Leader position, 𝒑𝟐 : First follower position, 𝒑𝟑 : Second
follower position, 𝒑𝒏 : nth follower position)
The parameters used in the simulation are given in this subsection: ε = 0.01 and d_ij = 1 m for (i, j) ∈ {(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (1, 5), (2, 5), (3, 5)}. Initial positions: p_1 = (1, 0, 0), p_2 = (0.5, 0.866, 0), p_3 = (0, 0, 0), p_4 = (0.5, 0.2887, 0.8165), and p_5 = (0.5, 0.2887, −0.8165). Desired final positions of the three main vehicles: p_1 = (−3.9697, 2.7428, 1.0), p_2 = (−4.8696, 2.5995, 1.0), and p_3 = (−4.2134, 1.8462, 1.0). The leader velocity is taken as 2 m/s. The final positions of the ordinary followers are calculated using the sphere intersection algorithm presented in Section 3.5.
In this section, we test the previously presented control laws on generic point aerial vehicles. The initial positions, desired final positions, and inter-vehicle distances are chosen to be the same as in Subsection 3.7.1. The maximum velocity is V̄ = 1 m/s, and the two controller gains are set to 1 and 0.05.
Fig 3.9 Point aerial vehicle simulation: pathway (red: leader, blue: first follower, green: second follower, rest are ordinary
followers)
We simulate the effects of sensor noise using bounded random signals in the range [±10°] added to the angles φ, θ, and ψ. Furthermore, we use a time-delay block to simulate the measurement and processing delays in the actuators. The transport-delay time is
taken as 0.02 s in our simulations. As seen in Figs. 3.9 and 3.10, the agents move in
formation to their final positions cohesively, and hence without colliding with each other,
demonstrating the effectiveness of the proposed distributed control scheme.
Fig 3.10: Point aerial vehicle simulation: distances between vehicles (𝒑𝟏 : leader position, 𝒑𝟐 : first follower position, 𝒑𝟑 :
second follower position, 𝒑𝒏 : nth follower position)
1.7 Conclusion
In this chapter we presented a distributed cohesive formation control design for autonomous vehicles that move in three-dimensional space. The presented control design aims to move a vehicle swarm from a starting position to a desired final position without changing its formation shape.
The design has been examined in two case studies: one modelling quadrotors and one modelling point aerial vehicles. Additionally, for the point aerial vehicles, possible corruption of the sensor data and delays in processing have been considered, and these problems were compensated with a PID controller. The designed controllers have been shown to perform successfully in multiple simulations.
The design of dynamic controllers with constructive stability and convergence analysis is a new research field. Working on real-time applications, considering detailed actuator dynamics, and accounting for dynamic uncertainties are possible directions for further study of this concept. A study similar to [11] can be considered, and these algorithms can also be tested in different real-time cases.
CHAPTER 3 REFERENCES
[1] Lemmens, Yves C.j., et al. “Real-Time Simulation of an Integrated Electrical System of
a UAV.” SAE Technical Paper Series, 2014, doi:10.4271/2014-01-2169.
[2] Basak, Hasan, and Emmanuel Prempain. “Switching Recovery Control of a Quadcopter
UAV.” 2015 European Control Conference (ECC), 2015,
doi:10.1109/ecc.2015.7331096.
[3] Niemiec, Robert, and Farhan Gandhi. “Multirotor Controls, Trim, and Autonomous
Flight Dynamics of Plus- and Cross-Quadcopters.” Journal of Aircraft, vol. 54, no. 5,
2017, pp. 1910–1920., doi:10.2514/1.c034165.
[4] Dell’Ariccia, Gaia, Giacomo Dell’Omo, David P. Wolfer, and Hans-Peter Lipp. “Flock Flying Improves Pigeons’ Homing: GPS Track Analysis of Individual Flyers versus Small Groups.” Animal Behaviour 76, no. 4 (2008): 1165–72.
[5] Hoare, D. J., I. D. Couzin, J.-G. J. Godin, and J. Krause. “Context-Dependent Group Size
Choice in Fish.” Animal Behaviour 67, no. 1 (2004): 155–64.
[6] “Index.” Men of Physics: Lord Rayleigh–The Man and His Work, 1970, 243–51.
[7] Wagner, Eugene, David Jacques, William Blake, and Meir Pachter. “Flight Test Results
of Close Formation Flight for Fuel Savings.” AIAA Atmospheric Flight Mechanics
Conference and Exhibit, May 2002.
[8] M. Grewal, L. Weill, and A. Andrews, Global Positioning Systems, Inertial Navigation,
and Integration. Hoboken, NJ: Wiley, 2007.
[9] S. Bayraktar, G. Fainekos, and G. Pappas, “Experimental cooperative control of fixed-
wing unmanned aerial vehicles,” in Proc. IEEE 43rd Conf. Decision Control, Dec. 2004,
vol. 4, pp. 4292–4298.
[10] M. Efe, “Robust low altitude behavior control of a quadrotor rotorcraft through sliding modes,” in Proc. 15th Mediterranean Conf. Control Autom., Jun. 2007, pp. 1–6.
[11] J. Ferruz, M. Vega, A. Ollero, and V. Blanco, “Reconfigurable control architecture for
distributed systems in the hero autonomous helicopter,” IEEE Trans. Ind. Electron., vol.
58, no. 12, pp. 5311–5318, Dec. 2011.
[12] B. Etkin and L. Reid, Dynamics of Flight: Stability and Control. Hoboken, NJ: Wiley,
1995.
[13] I. Bayezit, B. Fidan, M. Amini, and I. Shames, “Distributed cohesive motion control of
quadrotor vehicle formations,” in Proc. ASME Int. Mech. Eng. Congr. Expo., Nov. 2010,
pp. 797–805.
[14] T. Madani and A. Benallegue, “Backstepping sliding mode control applied to a
miniature quadrotor flying robot,” in Proc. 32nd IEEE Annu. Conf. Ind. Electron., Nov.
2006, pp. 700–705.
[15] J. Wang, H. Mounier, A. Cela, and S. Niculescu, “Event driven intelligent PID
controllers with applications to motion control,” in Proc. 18th IFAC World Congr., Aug.
2011, vol. 18, pp. 10080–10085.
[16] M. de Oliveira, “Modeling, Identification and Control of a Quadrotor Aircraft,” M.S.
thesis, Control Eng., Czech Tech. Univ., Prague, Czech Republic, Jun. 2011.
[17] I. Bayezit, M. Amini, B. Fidan, and I. Shames, “Cohesive motion control of autonomous
formations in three dimensions,” in Proc. 6th Int. Conf. Intell. Sens., Sensor Netw. Inf.
Process., Dec. 2010, pp. 205–210.
[18] I. Bayezit and B. Fidan, “Nonlinear maneuvering control of rigid formations of fixed
wing UAVs,” in Proc. 2nd Int. Auton. Intell. Syst. Conf., Jun. 2011, pp. 124–133.
[19] R. Olfati-Saber and R. Murray, “Distributed cooperative control of multiple vehicle
formations using structural potential functions,” in Proc. 15th IFAC Triennial World
Congr., Jul. 2002, pp. 346–352.
[20] R. Schoop, R. Neubert, and A. Colombo, “A multiagent-based distributed control
platform for industrial flexible production systems,” in Proc. IEEE 27th Annu. Conf. Ind.
Electron. Soc., Nov. 2001, vol. 1, pp. 279–284.
[21] V. Gazi and K. Passino, “Stability of a one-dimensional discrete-time asynchronous
swarm,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, no. 4, pp. 834–841,
Aug. 2005.
[22] E. Jones, M. Dias, and A. Stentz, “Time-extended multi-robot coordination for domains with intra-path constraints,” J. Auton. Robots, vol. 30, no. 1, pp. 41–56, Jan. 2011.
[23] I. Bayezit, “Microavionics Design and Microavionics Control Implementation for Cross-Compatible Platforms,” M.S. thesis, Elect. Eng., Istanbul Tech. Univ., Istanbul,
Turkey, Dec. 2008.
[24] C. Lin, C. Huang, C. Lee, and M. Chao, “Identification of flight vehicle models using
fuzzified eigensystem realization algorithm,” IEEE Trans. Ind. Electron., vol. 58, no. 11,
pp. 5206–5219, Nov. 2011.
[25] R. Nelson, Flight Stability and Automatic Control. New York: McGraw-Hill, 1997.
[26] Bayezit, Ismail, and Baris Fidan. “Distributed Cohesive Motion Control of Flight Vehicle
Formations.” IEEE Transactions on Industrial Electronics 60, no. 12 (2013): 5763–72.
CHAPTER 12: