WO2018117872A1 - The intelligent autopilot system - Google Patents
- Publication number
- WO2018117872A1 (PCT/OM2016/000002)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- artificial neural
- neural network
- flight
- aircraft
- neurons
- Prior art date
- 2016-12-25
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C13/00—Control systems or transmitting systems for actuating flying-control surfaces, lift-increasing flaps, air brakes, or spoilers
- B64C13/02—Initiating means
- B64C13/16—Initiating means actuated automatically, e.g. responsive to gust detectors
- B64C13/18—Initiating means actuated automatically, e.g. responsive to gust detectors using automatic pilot
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0083—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots to help an aircraft pilot in the rolling phase
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/04—Control of altitude or depth
- G05D1/042—Control of altitude or depth specially adapted for aircraft
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/04—Control of altitude or depth
- G05D1/06—Rate of change of altitude or depth
- G05D1/0607—Rate of change of altitude or depth specially adapted for aircraft
- G05D1/0653—Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/0808—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/08—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/08—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
- G09B9/24—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer including display or recording of simulated flight path
Abstract
The Intelligent Autopilot System is an Automatic Flight Control System based on Artificial Intelligence which is capable of performing piloting tasks and handling flight uncertainties such as severe weather conditions and emergency situations autonomously. The Intelligent Autopilot System learns how to perform piloting tasks, and the required skills, from human teachers by applying the Learning from Demonstration concept with Artificial Neural Networks. The Intelligent Autopilot System is a potential solution to the current problem of Automatic Flight Control Systems being unable to handle flight uncertainties, and to the infeasible requirement of anticipating all flight uncertainty scenarios and manually constructing control models to handle them. The system utilizes a robust Learning from Demonstration approach in which human pilots demonstrate the task to be learned in a flight simulator while training datasets are captured from these demonstrations. The datasets are then used by Artificial Neural Networks to generate control models automatically. The control models imitate the skills of human pilots when handling piloting tasks including taxi, takeoff, cruise, following flight paths/courses, and landing. The system also imitates the skills of human pilots when handling flight uncertainties and emergencies including severe weather conditions, engine(s) failure or fire, Rejected Take Off (RTO), emergency landing, and turnaround. A flight manager program decides which Artificial Neural Networks are to be fired given the current condition.
Description
THE INTELLIGENT AUTOPILOT SYSTEM
Prior Art
Current operational Automatic Flight Control Systems (AFCS/Autopilot) fall under the domain of Control Theory. Classic and modern autopilots rely on controllers such as the Proportional Integral Derivative (PID) controller and finite-state automata. Non-adaptive linear controllers such as PID controllers are used extensively in aviation for manned and unmanned aircraft ranging from fixed-wing airliners to Micro Aerial Vehicles (MAV). PID controllers are robust methods that can provide adequate trajectory tracking when applied to the aircraft's control surfaces, such as rudders, ailerons, and elevators, by sending control commands to their actuators. A PID controller applies a pre-designed gain factor or control law continuously to drive the vehicle from its current state to the desired state, calculating the gain to be applied from the proportional error, the integral error, and the derivative error. PID controllers are capable of providing relatively simple autonomous control of an aircraft, such as maintaining a certain trajectory by controlling speed, pitch, and roll.
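For reference, a minimal discrete-time PID update of the kind described above might look as follows; this is an illustrative sketch, and the gain values, loop period, and class name are assumptions rather than parameters of any actual autopilot.

```python
class PIDController:
    """Minimal discrete-time PID sketch; gains and time step are illustrative."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement                      # proportional error
        self.integral += error * self.dt                    # integral error
        derivative = (error - self.prev_error) / self.dt    # differential error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: a pitch-hold loop running at 10 Hz with assumed gains.
pitch_hold = PIDController(kp=0.8, ki=0.05, kd=0.2, dt=0.1)
elevator_command = pitch_hold.update(setpoint=5.0, measurement=3.2)
```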
Under NASA's Intelligent Flight Control System (IFCS) program, an adaptive intelligent controller called G-2, based on neural networks, was proposed. G-2 aids the aircraft's conventional controllers by compensating for errors caused by faults, where the faults are control surface failures or dynamic failures caused by modelling errors. The neural networks output command augmentation signals that compensate for the errors caused by these faults.
Artificial Neural Networks have also been applied as nonlinear dynamic models to aid the controllers of small-scale Unmanned Aerial Vehicles (UAVs). For example, neural network controllers were introduced to aid the trajectory controllers of a hexacopter and a quadrotor drone, with the main goal of reducing inverse errors and ensuring a more stable flight. The results show the ability of neural-based controllers to enhance the robustness of conventional drone controllers. Beyond stability, Artificial Neural Networks have been combined with Robust Integral of the Signum of the Error (RISE) feedback to enhance the position-tracking controllers of small-scale Unmanned Aerial Vehicles.
Problem or defect in the prior art
Automatic Flight Control Systems (AFCS/Autopilot) are highly limited, in the sense that they are capable of performing only specific piloting tasks in non-emergency conditions. Strong turbulence, for example, can cause the autopilot to disengage itself or even attempt an undesired action that could jeopardize the safety of the flight. These limitations require constant monitoring of the system and the flight status by the flight crew, which can be stressful, especially during long flights. On the other hand, trying to anticipate everything that could go wrong with a flight and incorporating all of it into the set of rules or control models "hardcoded" in an AFCS is infeasible. There have been reports either discussing the limitations of current autopilots, such as the inability to handle severe weather conditions, or blaming autopilots for a number of aviation catastrophes.
Conventional controllers such as the widely used Proportional Integral Derivative (PID) controllers suffer from the inability to change the gains of the system, also known as proportionality constants, after the system starts operating. At that point, the only available remedy is to stop the system, perform gain re-tuning, and restart it, which is not always practical.
Conventional controllers currently used in modern autopilots, such as Proportional Integral Derivative (PID) controllers, even when enhanced with additional layers of intelligent control methods, are not capable of learning a high-level task comprising a sequence of multiple sub-tasks. For example, learning a full take-off task composed of a sequence of sub-tasks (release brakes, increase throttle and set to maximum power, wait until a certain speed is achieved then deflect elevators, wait until a certain altitude is achieved then retract gear, etc.) is beyond the capacity of these conventional controllers, which means that performing a fully autonomous flight by relying on conventional controllers alone (enhanced or not) is not feasible. On the other hand, manually designing and developing all the controllers needed to handle the complete spectrum of flight scenarios and uncertainties, ranging from normal to emergency situations, is not an ideal method because of the difficulty of covering all possible eventualities.
New in the invention
The Intelligent Autopilot System is an Automatic Flight Control System based on Artificial Intelligence which is capable of performing piloting tasks and handling flight uncertainties such as severe weather conditions and emergency situations autonomously. The Intelligent Autopilot System learns how to perform piloting tasks, and the required skills, from human teachers by applying the Learning from Demonstration concept with Artificial Neural Networks. The Intelligent Autopilot System is a potential solution to the current problem of Automatic Flight Control Systems being unable to handle flight uncertainties, and to the infeasible requirement of anticipating all flight uncertainty scenarios and manually constructing control models to handle them. The system utilizes a robust Learning from Demonstration approach in which human pilots demonstrate the task to be learned in a flight simulator while training datasets are captured from these demonstrations. The datasets are then used by Artificial Neural Networks to generate control models automatically. The control models imitate the skills of human pilots when handling piloting tasks including taxi, takeoff, cruise, following flight paths/courses, and landing. The system also imitates the skills of human pilots when handling flight uncertainties and emergencies including severe weather conditions, engine(s) failure or fire, Rejected Take Off (RTO), emergency landing, and turnaround. A flight manager program decides which Artificial Neural Networks are to be fired given the current condition.
The Intelligent Autopilot System relies on Artificial Neural Networks, which, compared to the methods used in other flight control solutions, are well suited to handling the highly dynamic and noisy data present in flight control environments. Conventional controllers such as Proportional Integral Derivative controllers are not used because of their inability to learn sequences of tasks. The system has eleven Artificial Neural Networks, each designed to handle a specific control. Learning from Demonstration is achieved through offline Supervised Learning, performed by the Artificial Neural Networks on training datasets that capture the demonstrations of the tasks and skills to be learned from the human pilots. The tasks and skills that can be learned and imitated autonomously by the system are either low-level, which can be viewed as rapid and dynamic sub-actions occurring in fractions of a second, such as the rapid correction of heading, pitch, and roll, or high-level, which can be viewed as actions governing the whole process and how it should be performed strategically, such as performing the full sequence of takeoff or landing, or dealing with an engine fire.
Full specification
The Intelligent Autopilot System (IAS) is made of the following components: an interface that interacts with the human pilot, the other components of the system, and the aircraft's control systems or a flight simulator; a database; eleven Artificial Neural Networks; and a flight manager program. The IAS implementation method has three steps: A. Pilot Data Collection, B. Training, and C. Autonomous Control. In each step, different IAS components are used.
A. Pilot Data Collection: Before the IAS can be trained or can take control, training datasets that capture how human pilots perform the task to be learned must be collected. The demonstrator uses X-Plane, an advanced flight simulator, to perform the task to be learned; different flight simulators or an actual aircraft can also be used. When using the simulator, it is set up to send and receive packets comprising the desired training data every 0.1 seconds. The IAS Interface is responsible for data flow between the flight simulator/aircraft systems and the IAS in both directions. It also displays flight data received from the simulator or the aircraft's systems. Data collection is started immediately before the demonstration; the pilot then uses either the aircraft's control instruments or external instruments linked to a flight simulator (joystick, yoke, etc.) to perform the piloting task to be learned. The Interface collects flight data from the flight simulator or the aircraft's systems over the network using User Datagram Protocol packets or any other data link means. The collected data includes the pilot's actions while performing the task. The Interface organizes the collected flight data received from the simulator or the aircraft's systems (inputs) and the pilot's actions (outputs) into vectors of inputs and outputs, which are sent to the database every 1 second. The database stores all the data captured from the pilot demonstrator and the flight systems, which are delivered through the Interface. The database contains tables designed to store: 1. flight data as inputs, and 2. the pilot's actions as outputs. These tables are then used as training datasets to train the Artificial Neural Networks of the IAS. The training datasets are cleaned to remove noise and outliers. Fig. 1 illustrates the Pilot Data Collection step.
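A minimal sketch of such a collection loop is given below. The port number, packet encoding, field names, and table layout are assumptions made for illustration; a real X-Plane data link uses its own binary packet format.

```python
import socket
import sqlite3
import time

def decode_packet(payload: bytes) -> dict:
    # Placeholder decoding: assumes a simple "key=value,key=value" text payload.
    fields = dict(item.split("=") for item in payload.decode().split(","))
    return {key: float(value) for key, value in fields.items()}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 49005))  # assumed port for the simulator's UDP stream

db = sqlite3.connect("demonstrations.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (t REAL, inputs TEXT, outputs TEXT)")

last_write = time.time()
while True:
    payload, _ = sock.recvfrom(4096)        # one packet roughly every 0.1 seconds
    sample = decode_packet(payload)
    # Flight data are treated as inputs; the pilot's actions are the outputs.
    inputs = [sample["speed"], sample["altitude"], sample["pitch"], sample["roll"]]
    outputs = [sample["throttle"], sample["elevator"], sample["aileron"]]
    if time.time() - last_write >= 1.0:     # persist one input/output vector per second
        db.execute("INSERT INTO samples VALUES (?, ?, ?)",
                   (time.time(), repr(inputs), repr(outputs)))
        db.commit()
        last_write = time.time()
```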
B. Training: After the pilot data collection step is completed, feedforward Artificial Neural Networks are used to generate learning models from the captured datasets through offline training. Eleven feedforward Artificial Neural Networks comprise the core of the IAS, each designed and trained to handle specific controls and tasks. The ANNs, whose topologies are summarized in the sketch that follows this list, are:
1. The Taxi Speed Gain ANN (Fig. 2) which learns how to control the aircraft during the taxi speed gain phase where the aircraft accelerates to reach takeoff speed. The network has one input neuron that takes speed, and three output neurons which output brakes, flaps, and throttle commands. The network has one hidden layer containing two neurons.
2. The Take Off ANN (Fig. 3), which learns how to perform takeoff and climb. The network has two input neurons that take altitude and pitch, and four output neurons which output gear, elevator, throttle, and flaps commands. The network has one hidden layer containing four neurons.
3. The Rejected Take Off ANN (Fig. 4) which learns how to reject or abort takeoff in case one or more engines fail or catch fire. The network has two input neurons that take speed and engine status, and four output neurons which output brakes, throttle, reverse thrust, and speed brakes commands. The network has one hidden layer containing four neurons.
4. The Aileron ANN (Fig. 5), which learns how to control the aircraft's roll to ensure a level flight, and how to change heading (bank) at the next waypoint. The network has two input neurons that take roll and an angle defined between two lines: the first line runs from the previous GPS waypoint coordinates to the next waypoint's GPS coordinates; the second line runs from the aircraft's current location (GPS coordinates) to the next waypoint's GPS coordinates. The Aileron ANN has one output neuron which outputs the aileron command. The network has one hidden layer containing two neurons.
5. The Rudder ANN (Fig. 6), which learns how to control the aircraft's heading while on the runway in case of strong crosswind, or the aircraft's yaw while airborne in case of drag created by one or more engine failures. The network has two input neurons that take heading and an angle defined between two lines: the first line runs from the starting point's GPS coordinates on the runway to the GPS coordinates at the end of the runway; the second line runs from the aircraft's current location (GPS coordinates) to the next waypoint's GPS coordinates (the end of the runway). In case of an engine failure while airborne, the angle input neuron takes a constant value since the angle is no longer required. The Rudder ANN has one output neuron which outputs the rudder command. The network has one hidden layer containing two neurons.
6. The Cruise Altitude ANN (Fig. 7) which learns how to control the aircraft's altitude. The network has one input neuron that takes the difference between the current altitude and the desired altitude. The network has two output neurons which output throttle, and elevator trim commands. The network has one hidden layer containing two neurons.
7. The Cruise Pitch ANN (Fig. 8), which learns how to control the aircraft's pitch to ensure a level flight. The network has one input neuron that takes pitch, and one output neuron which outputs the elevator command. The network has one hidden layer containing two neurons.
8. The Fire Situation ANN (Fig. 9) which learns how to handle one or more engine fire situations. The network has one input neuron that takes the fire sensor(s) status, and three output neurons which output the fire extinguisher, throttle, and fuel valve commands. The network has one hidden layer containing two neurons.
9. The Emergency Landing Pitch ANN (Fig. 10), which learns how to control the aircraft's pitch or angle of attack during an emergency landing situation by lowering the aircraft's speed and altitude, and preventing stall. The network has two input neurons that take pitch and speed. The network has one output neuron which outputs the elevator command. The network has one hidden layer containing two neurons.
10. The Emergency Landing Altitude ANN (Fig. 11) which learns how to control the aircraft's altitude during an emergency landing situation if there is any power left in one or more engines. The network has one input neuron that takes altitude, and one output neuron which outputs throttle command. The network has one hidden layer containing two neurons.
11. The Landing ANN (Fig. 12) which learns how to perform landing at the destination airport's runway. The network has three input neurons that take speed, altitude, and the distance to the landing runway. The network has six output neurons which output throttle, flaps, gear, brakes, speed brakes, and reverse thrust commands. The network has one hidden layer containing six neurons.
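The eleven topologies listed above can be summarized compactly as (input neurons, hidden-layer neurons, output neurons) per network; the shorthand keys below are labels chosen for this sketch, not identifiers from the patent.

```python
# (input neurons, hidden-layer neurons, output neurons) for each network listed above.
IAS_TOPOLOGIES = {
    "taxi_speed_gain":            (1, 2, 3),  # speed -> brakes, flaps, throttle
    "take_off":                   (2, 4, 4),  # altitude, pitch -> gear, elevator, throttle, flaps
    "rejected_take_off":          (2, 4, 4),  # speed, engine status -> brakes, throttle, reverse thrust, speed brakes
    "aileron":                    (2, 2, 1),  # roll, path angle -> aileron
    "rudder":                     (2, 2, 1),  # heading, path angle -> rudder
    "cruise_altitude":            (1, 2, 2),  # altitude difference -> throttle, elevator trim
    "cruise_pitch":               (1, 2, 1),  # pitch -> elevator
    "fire_situation":             (1, 2, 3),  # fire sensor status -> extinguisher, throttle, fuel valve
    "emergency_landing_pitch":    (2, 2, 1),  # pitch, speed -> elevator
    "emergency_landing_altitude": (1, 2, 1),  # altitude -> throttle
    "landing":                    (3, 6, 6),  # speed, altitude, distance -> throttle, flaps, gear, brakes, speed brakes, reverse thrust
}
```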
Training the Artificial Neural Networks is done using the Backpropagation algorithm; updating the free parameters or coefficients of the models is done using the Delta Rule; and neuron activation uses the Sigmoid and Hyperbolic Tangent activation functions. When training is completed, the learning models are generated, and the free parameters or coefficients, represented by the weights and biases of the models, are stored in the database by the Interface. Fig. 13 illustrates the Training step.
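A minimal sketch of this training procedure (backpropagation with delta-rule weight updates, a hyperbolic tangent hidden layer, and a sigmoid output layer) is shown below; the learning rate, epoch count, and weight initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(inputs, targets, n_hidden, epochs=5000, learning_rate=0.1, seed=0):
    """Train one single-hidden-layer network on a demonstration dataset.

    inputs  : 2-D array of flight-data rows (one column per input neuron)
    targets : 2-D array of pilot actions (one column per output neuron)
    """
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.5, size=(inputs.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=(n_hidden, targets.shape[1]))
    b2 = np.zeros(targets.shape[1])
    for _ in range(epochs):
        hidden = np.tanh(inputs @ w1 + b1)            # hyperbolic tangent activation
        output = sigmoid(hidden @ w2 + b2)            # sigmoid activation
        error = targets - output
        delta_out = error * output * (1.0 - output)   # delta rule at the output layer
        delta_hid = (delta_out @ w2.T) * (1.0 - hidden ** 2)
        w2 += learning_rate * hidden.T @ delta_out    # backpropagated weight updates
        b2 += learning_rate * delta_out.sum(axis=0)
        w1 += learning_rate * inputs.T @ delta_hid
        b1 += learning_rate * delta_hid.sum(axis=0)
    return w1, b1, w2, b2                             # weights and biases to store
```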
C. Autonomous Control: Once trained, the Intelligent Autopilot System can be used for autonomous control. Here, the Interface retrieves the coefficients of the models from the database for each trained Artificial Neural Network, and receives flight data from the flight simulator or the aircraft's systems every 0.1 seconds. The Interface organizes the coefficients into sets of weights and biases, and organizes the data received from the simulator or the aircraft's systems into sets of inputs for each ANN. It then feeds the relevant coefficients and the flight data input sets to the ANNs of the IAS to produce outputs. Next, the outputs of the ANNs are sent by the Interface to the flight simulator or the aircraft's systems as autonomous control commands, using User Datagram Protocol packets or any other data link means, every 0.1 seconds.
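The per-tick use of the stored coefficients might be sketched as follows, reusing the layer layout of the training sketch above; the dictionary structures and names are assumptions made for illustration.

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Forward pass of one trained network (same layer layout as the training sketch)."""
    hidden = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))

def control_step(flight_data, models, input_fields, active_networks):
    """One 0.1-second control tick.

    flight_data     : dict of values received from the simulator or aircraft systems
    models          : network name -> (w1, b1, w2, b2) retrieved from the database
    input_fields    : network name -> ordered list of flight-data keys for that network
    active_networks : names chosen by the Flight Manager for the current moment
    """
    commands = {}
    for name in active_networks:
        x = np.array([flight_data[key] for key in input_fields[name]])
        commands[name] = forward(x, *models[name])
    return commands  # sent back over UDP (or another data link) as control commands
```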
The Flight Manager (Fig. 14) is a program which resembles a Behavior Tree. Its purpose is to manage the eleven ANNs of the IAS by deciding which ANNs are to be used simultaneously at each moment. The Flight Manager starts by receiving flight data from the flight simulator or the aircraft's systems through the Interface of the IAS; it then detects the flight condition and phase by examining the received flight data, and decides which ANNs are required given the flight condition (normal / emergency / fire situation) and phase (taxi speed gain / take off / rejected takeoff / cruise / landing / emergency landing). For navigation, the Flight Manager autonomously decides the flight path to the desired destination airport by using GPS coordinates to generate connected rhumb lines. As mentioned above, the IAS constantly monitors the aircraft's heading and path deviations, and calculates the angle between the aircraft's line and the current path line, which are fed to the Rudder ANN and the Aileron ANN to correct the aircraft's heading and path. Fig. 15 illustrates the Autonomous Control step.
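A compact sketch of how such a manager might map the detected condition and phase to the set of networks to fire, together with the constant rhumb-line bearing used for navigation between GPS waypoints, is given below; the selection table and function names are illustrative assumptions, not the patented decision logic.

```python
import math

def select_networks(condition: str, phase: str) -> list:
    """Behaviour-tree-style choice of which ANNs to run at the current moment."""
    if condition == "fire":
        return ["fire_situation", "emergency_landing_pitch",
                "emergency_landing_altitude", "aileron", "rudder"]
    by_phase = {
        "taxi_speed_gain":   ["taxi_speed_gain", "rudder"],
        "take_off":          ["take_off", "rudder"],
        "rejected_take_off": ["rejected_take_off", "rudder"],
        "cruise":            ["cruise_altitude", "cruise_pitch", "aileron", "rudder"],
        "landing":           ["landing", "aileron", "rudder"],
        "emergency_landing": ["emergency_landing_pitch", "emergency_landing_altitude",
                              "aileron", "rudder"],
    }
    return by_phase[phase]

def rhumb_line_bearing(lat1, lon1, lat2, lon2):
    """Constant bearing (degrees) of the rhumb line joining two GPS coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    dpsi = math.log(math.tan(math.pi / 4 + phi2 / 2) / math.tan(math.pi / 4 + phi1 / 2))
    return math.degrees(math.atan2(dlon, dpsi)) % 360.0
```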
How to use the invention
The aviation industry is currently working on solutions that would decrease dependence on crew members. The aim is to lower the workload, human error, and stress faced by crew members, and to lower costs, by developing autopilots capable of handling multiple scenarios without human intervention.
The Intelligent Autopilot System presents a robust approach to teaching autopilots how to handle piloting tasks, uncertainties, and emergencies with minimum effort by exploiting Learning from Demonstration concepts and Artificial Neural Networks. The Intelligent Autopilot System is therefore proposed as the next generation of Automatic Flight Control Systems.
When integrated with an aircraft, the system can be used to aid flight crew members flying different types of fixed-wing aircraft by handling either some of the piloting workload or all of it autonomously, without the need for human intervention. The system can also be used to fully control fixed-wing Unmanned Aerial Systems/drones autonomously without the need for human operators on the ground.
The Intelligent Autopilot System can be applied in various aviation domains, including manned aircraft for civil aviation/commercial airliners and cargo aircraft. The system can also be applied in scientific, security, and military Unmanned Aerial Systems/drones, especially when setting up remote ground control units is not feasible and the aircraft must therefore be fully autonomous.
Claims
1. An Intelligent Autopilot System capable of observing and learning piloting tasks and skills from human pilots by generating control models automatically from demonstration datasets, and capable of controlling an aircraft autonomously using Artificial Neural Networks.
2. The Intelligent Autopilot System of claim 1 further comprising the Interface. The said Interface is responsible for data flow between the flight simulator or the aircraft systems and the Intelligent Autopilot System of claim 1 in both directions, and displays flight data received from the simulator or the aircraft's systems. The said Interface collects flight data from the flight simulator or the aircraft's systems over the network using User Datagram Protocol packets or using any other data link means. The said Interface organizes the collected flight data received from the simulator or the aircraft's systems (inputs), and the pilot's actions (outputs), into vectors of inputs and outputs, which are sent to a database every 1 second. After training, the said Interface stores the free parameters or coefficients, represented by the weights and biases of the models, in the database. The outputs of the Artificial Neural Networks are sent by the said Interface to the flight simulator or the aircraft's systems as autonomous control commands using User Datagram Protocol packets or any other data link means every 0.1 seconds.
3. The Intelligent Autopilot System of claim 1 further comprising the Database which stores the manual control commands of the human pilot demonstrator, and the flight systems data which are received from the flight simulator or the aircraft's systems through the said interface of claim 2.
4. The Intelligent Autopilot System of claim 1 further comprising the Taxi Speed Gain Artificial Neural Network which learns how to control the aircraft during the taxi speed gain phase where the aircraft accelerates to reach takeoff speed. The said Taxi Speed Gain Artificial Neural Network has one input neuron that takes speed, and three output neurons which output brakes, flaps, and throttle commands. The said Taxi Speed Gain Artificial Neural Network has one hidden layer containing two neurons.
5. The Intelligent Autopilot System of claim 1 further comprising the Take Off Artificial Neural Network which learns how to perform takeoff and climb. The said Take Off Artificial Neural Network has two input neurons that take altitude and pitch, and four output neurons which output gear, elevator, throttle, and flaps commands. The said Take Off Artificial Neural Network has one hidden layer containing four neurons.
6. The Intelligent Autopilot System of claim 1 further comprising the Rejected Take Off Artificial Neural Network which learns how to reject or abort takeoff in case one or more engines fail or catch fire. The said Rejected Take Off Artificial Neural Network has two input neurons that take speed and engine status, and four output neurons which output brakes, throttle, reverse thrust, and speed brakes commands. The said Rejected Take Off Artificial Neural Network has one hidden layer containing four neurons.
7. The Intelligent Autopilot System of claim 1 further comprising the Aileron Artificial Neural Network which learns how to control the aircraft's roll to ensure a level flight, and how to change heading (bank) at the next waypoint. The said Aileron Artificial Neural Network has two input neurons that take roll, and an angle which is the angle between two lines; the first line is between the previous GPS waypoint coordinates and the next waypoint GPS coordinates; the second line is between the current aircraft's location (GPS coordinates) and the next waypoint GPS coordinates. The said Aileron Artificial Neural Network has one output neuron which outputs the aileron command. The network has one hidden layer containing two neurons.
8. The Intelligent Autopilot System of claim 1 further comprising the Rudder Artificial Neural Network which learns how to control the aircraft's heading while on the runway in case of strong crosswind, or the aircraft's yaw while airborne in case of drag created by one or more engine failures. The said Rudder Artificial Neural Network has two input neurons that take heading and angle which is the angle between two lines; the first line is between the starting point GPS coordinates on the runway and the GPS coordinates at the end of the runway; the second line is between the current aircraft's location (GPS coordinates) and the next waypoint GPS coordinates (end of the runway). In case of an engine failure while airborne, the angle input neuron takes a constant value since the angle is no longer required. The said Rudder Artificial Neural Network has one output neuron which outputs the rudder command. The said Rudder Artificial Neural Network has one hidden layer containing two neurons.
9. The Intelligent Autopilot System of claim 1 further comprising the Cruise Altitude Artificial Neural Network which learns how to control the aircraft's altitude. The said Cruise Altitude Artificial Neural Network has one input neuron that takes the difference between the current altitude and the desired altitude. The said Cruise Altitude Artificial Neural Network has two output neurons which output throttle, and elevator trim commands. The said Cruise Altitude Artificial Neural Network has one hidden layer containing two neurons.
10. The Intelligent Autopilot System of claim 1 further comprising the Cruise Pitch Artificial Neural Network which learns how to control the aircraft's pitch to ensure a level flight. The said Cruise Pitch Artificial Neural Network has one input neuron that takes pitch, and one output neuron which outputs the elevator command. The said Cruise Pitch Artificial Neural Network has one hidden layer containing two neurons.
11. The Intelligent Autopilot System of claim 1 further comprising the Fire Situation Artificial Neural Network which learns how to handle one or more engine fire situations. The said Fire Situation Artificial Neural Network has one input neuron that takes the fire sensor(s) status, and three output neurons which output the fire extinguisher, throttle, and fuel valve commands. The said Fire Situation Artificial Neural Network has one hidden layer containing two neurons.
12. The Intelligent Autopilot System of claim 1 further comprising the Emergency Landing Pitch Artificial Neural Network which learns how to control the aircraft's pitch or angle of attack during an emergency landing situation by lowering the aircraft's speed and altitude, and preventing stall. The said Emergency Landing Pitch Artificial Neural Network has two input neurons that take pitch and speed. The said Emergency Landing Pitch Artificial Neural Network has one output neuron which outputs the elevator command. The said Emergency Landing Pitch Artificial Neural Network has one hidden layer containing two neurons.
13. The Intelligent Autopilot System of claim 1 further comprising the Emergency Landing Altitude Artificial Neural Network which learns how to control the aircraft's altitude during an emergency landing situation if there is any power left in one or more engines. The said Emergency Landing Altitude Artificial Neural Network has one input neuron that takes altitude, and one output neuron which outputs throttle command. The said Emergency Landing Altitude Artificial Neural Network has one hidden layer containing two neurons.
14. The Intelligent Autopilot System of claim 1 further comprising the Landing Artificial Neural Network which learns how to perform landing at the destination airport's runway. The said Landing Artificial Neural Network has three input neurons that take speed, altitude, and the distance to the landing runway. The said Landing Artificial Neural Network has six output neurons which output throttle, flaps, gear, brakes, speed brakes, and reverse thrust commands. The said Landing Artificial Neural Network has one hidden layer containing six neurons.
15. The said Database of claim 3 stores the learning models which are generated automatically by the Artificial Neural Networks of claims 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14.
16. The said Interface of claim 2, retrieves the stored models from the said Database of claim 3, retrieves the flight data from the flight simulator or the aircraft's systems, and sends them to the Artificial Neural Networks of claims 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 to produce control commands which are used to control an aircraft autonomously.
17. The Intelligent Autopilot System of claim 1 further comprising the Flight Manager program. The purpose of the said Flight Manager is to manage the eleven Artificial Neural Networks of claims 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 by deciding which Artificial Neural Networks of claims 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 are to be used simultaneously at each moment. The Flight Manager starts by receiving flight data from the flight simulator or the aircraft's systems through the said Interface of claim 2, then the said Flight Manager detects the flight condition and phase by examining the received flight data, and decides which Artificial Neural Networks of claims 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 are required to be used given the flight condition (normal / emergency / fire situation) and phase (taxi speed gain / take off / rejected takeoff / cruise / landing / emergency landing). For navigation, the said Flight Manager autonomously decides the flight path to the desired destination airport by using GPS coordinates to generate connected rhumb lines. The said Intelligent Autopilot System of claim 1 constantly monitors the aircraft's heading and path deviations, and calculates the angle between the aircraft's line and the current path line, which are fed to the said Aileron Artificial Neural Network of claim 7, and the Rudder Artificial Neural Network of claim 8, to correct the aircraft's heading and path.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
OM3632016 | 2016-12-25 | | |
OMOM/P/2016/00363 | 2016-12-25 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018117872A1 (en) | 2018-06-28 |
Family
ID=62626758
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/OM2016/000002 WO2018117872A1 (en) | 2016-12-25 | 2016-12-26 | The intelligent autopilot system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018117872A1 (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109625333A (en) * | 2019-01-03 | 2019-04-16 | 西安微电子技术研究所 | A kind of space non-cooperative target catching method based on depth enhancing study |
CN109866931A (en) * | 2019-03-15 | 2019-06-11 | 西北工业大学 | A kind of airplane throttle control method based on self-encoding encoder |
CN109992000A (en) * | 2019-04-04 | 2019-07-09 | 北京航空航天大学 | A multi-UAV path collaborative planning method and device based on hierarchical reinforcement learning |
CN110007617A (en) * | 2019-03-29 | 2019-07-12 | 北京航空航天大学 | An Uncertainty Transfer Analysis Method for Aircraft Semi-physical Simulation System |
CN111510182A (en) * | 2020-06-12 | 2020-08-07 | 成都锐新科技有限公司 | Link16 signal simulator |
US10793286B1 (en) * | 2018-08-23 | 2020-10-06 | Rockwell Collins, Inc. | Vision based autonomous landing using flight path vector |
CN111914366A (en) * | 2020-08-05 | 2020-11-10 | 湖南航天机电设备与特种材料研究所 | Method for acquiring cylinder outlet speed of high-pressure cold air launching aircraft |
US20200404846A1 (en) * | 2018-03-13 | 2020-12-31 | Moog Inc. | Autonomous navigation system and the vehicle made therewith |
CN112284366A (en) * | 2020-10-26 | 2021-01-29 | 中北大学 | A Correction Method for Heading Angle Error of Polarized Light Compass Based on TG-LSTM Neural Network |
CN112783190A (en) * | 2021-01-22 | 2021-05-11 | 滨州学院 | Control method of robust tracking controller of three-rotor unmanned aerial vehicle |
CN113093568A (en) * | 2021-03-31 | 2021-07-09 | 西北工业大学 | Airplane automatic driving operation simulation method based on long-time and short-time memory network |
CN113093774A (en) * | 2019-12-23 | 2021-07-09 | 海鹰航空通用装备有限责任公司 | Unmanned aerial vehicle sliding control method |
CN113589835A (en) * | 2021-08-13 | 2021-11-02 | 北京科技大学 | Intelligent robot pilot flying method and device based on autonomous perception |
CN113959446A (en) * | 2021-10-20 | 2022-01-21 | 苏州大学 | A Neural Network-based Robot Autonomous Logistics Transportation Navigation Method |
EP4055349A1 (en) * | 2019-11-07 | 2022-09-14 | Thales | Method and device for generating learning data for an artificial intelligence machine for aircraft landing assistance |
EP4055343A1 (en) * | 2019-11-07 | 2022-09-14 | Thales | Method and device for assisting in landing an aircraft under poor visibility conditions |
US11481634B2 (en) * | 2019-08-29 | 2022-10-25 | The Boeing Company | Systems and methods for training a neural network to control an aircraft |
CN115407798A (en) * | 2022-09-05 | 2022-11-29 | 天津大学 | Multi-unmanned aerial vehicle cluster trajectory tracking method based on error function integration |
US20220404831A1 (en) * | 2021-06-16 | 2022-12-22 | The Boeing Company | Autonomous Behavior Generation for Aircraft Using Augmented and Generalized Machine Learning Inputs |
IT202200016971A1 (en) * | 2022-08-08 | 2024-02-08 | Mecaer Aviation Group S P A | IMPROVED AIRCRAFT CONTROL SYSTEM AND METHOD |
CN117891177A (en) * | 2024-03-15 | 2024-04-16 | 国网浙江省电力有限公司宁波供电公司 | UAV controller model construction method, device, equipment and storage medium |
US12025994B2 (en) | 2020-12-23 | 2024-07-02 | Rockwell Collins, Inc. | Rejected takeoff aircraft system and method |
US12187418B2 (en) | 2019-12-23 | 2025-01-07 | Airbus Operations Limited | Scenario-based control system |
EP4492177A1 (en) * | 2023-07-11 | 2025-01-15 | Airbus Defence and Space GmbH | Vehicle management system for controlling at least one function of a vehicle |
EP4492183A1 (en) * | 2023-07-11 | 2025-01-15 | Airbus Defence and Space GmbH | Apparatus management system for controlling at least one function of an apparatus |
2016
- 2016-12-26 WO PCT/OM2016/000002 patent/WO2018117872A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6735500B2 (en) * | 2002-06-10 | 2004-05-11 | The Boeing Company | Method, system, and computer program product for tactile cueing flight control |
US20080300740A1 (en) * | 2007-05-29 | 2008-12-04 | Ron Wayne Hamburg | GPS autopilot system |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12232450B2 (en) * | 2018-03-13 | 2025-02-25 | Moog Inc. | Autonomous navigation system and the vehicle made therewith |
US20200404846A1 (en) * | 2018-03-13 | 2020-12-31 | Moog Inc. | Autonomous navigation system and the vehicle made therewith |
US10793286B1 (en) * | 2018-08-23 | 2020-10-06 | Rockwell Collins, Inc. | Vision based autonomous landing using flight path vector |
CN109625333B (en) * | 2019-01-03 | 2021-08-03 | 西安微电子技术研究所 | Spatial non-cooperative target capturing method based on deep reinforcement learning |
CN109625333A (en) * | 2019-01-03 | 2019-04-16 | 西安微电子技术研究所 | A kind of space non-cooperative target catching method based on depth enhancing study |
CN109866931A (en) * | 2019-03-15 | 2019-06-11 | 西北工业大学 | A kind of airplane throttle control method based on self-encoding encoder |
CN109866931B (en) * | 2019-03-15 | 2020-10-27 | 西北工业大学 | An Autoencoder-Based Aircraft Throttle Control Method |
CN110007617A (en) * | 2019-03-29 | 2019-07-12 | 北京航空航天大学 | An Uncertainty Transfer Analysis Method for Aircraft Semi-physical Simulation System |
CN109992000A (en) * | 2019-04-04 | 2019-07-09 | 北京航空航天大学 | A multi-UAV path collaborative planning method and device based on hierarchical reinforcement learning |
US11481634B2 (en) * | 2019-08-29 | 2022-10-25 | The Boeing Company | Systems and methods for training a neural network to control an aircraft |
EP4055349A1 (en) * | 2019-11-07 | 2022-09-14 | Thales | Method and device for generating learning data for an artificial intelligence machine for aircraft landing assistance |
EP4055343A1 (en) * | 2019-11-07 | 2022-09-14 | Thales | Method and device for assisting in landing an aircraft under poor visibility conditions |
US12187418B2 (en) | 2019-12-23 | 2025-01-07 | Airbus Operations Limited | Scenario-based control system |
CN113093774A (en) * | 2019-12-23 | 2021-07-09 | 海鹰航空通用装备有限责任公司 | Unmanned aerial vehicle sliding control method |
CN111510182A (en) * | 2020-06-12 | 2020-08-07 | 成都锐新科技有限公司 | Link16 signal simulator |
CN111510182B (en) * | 2020-06-12 | 2020-09-29 | 成都锐新科技有限公司 | Link16 signal simulator |
CN111914366B (en) * | 2020-08-05 | 2024-04-26 | 湖南航天机电设备与特种材料研究所 | Method for acquiring cylinder-out speed of high-pressure cold air emission aircraft |
CN111914366A (en) * | 2020-08-05 | 2020-11-10 | 湖南航天机电设备与特种材料研究所 | Method for acquiring cylinder outlet speed of high-pressure cold air launching aircraft |
CN112284366A (en) * | 2020-10-26 | 2021-01-29 | 中北大学 | A Correction Method for Heading Angle Error of Polarized Light Compass Based on TG-LSTM Neural Network |
CN112284366B (en) * | 2020-10-26 | 2022-04-12 | 中北大学 | A Correction Method for Heading Angle Error of Polarized Light Compass Based on TG-LSTM Neural Network |
US12025994B2 (en) | 2020-12-23 | 2024-07-02 | Rockwell Collins, Inc. | Rejected takeoff aircraft system and method |
CN112783190A (en) * | 2021-01-22 | 2021-05-11 | 滨州学院 | Control method of robust tracking controller of three-rotor unmanned aerial vehicle |
CN113093568A (en) * | 2021-03-31 | 2021-07-09 | 西北工业大学 | Airplane automatic driving operation simulation method based on long-time and short-time memory network |
US20220404831A1 (en) * | 2021-06-16 | 2022-12-22 | The Boeing Company | Autonomous Behavior Generation for Aircraft Using Augmented and Generalized Machine Learning Inputs |
CN113589835A (en) * | 2021-08-13 | 2021-11-02 | 北京科技大学 | Intelligent robot pilot flying method and device based on autonomous perception |
CN113589835B (en) * | 2021-08-13 | 2024-05-14 | 北京科技大学 | Autonomous perception-based intelligent robot pilot flight method and device |
CN113959446A (en) * | 2021-10-20 | 2022-01-21 | 苏州大学 | A Neural Network-based Robot Autonomous Logistics Transportation Navigation Method |
CN113959446B (en) * | 2021-10-20 | 2024-01-23 | 苏州大学 | Autonomous logistics transportation navigation method for robot based on neural network |
IT202200016971A1 (en) * | 2022-08-08 | 2024-02-08 | Mecaer Aviation Group S P A | IMPROVED AIRCRAFT CONTROL SYSTEM AND METHOD |
CN115407798A (en) * | 2022-09-05 | 2022-11-29 | 天津大学 | Multi-unmanned aerial vehicle cluster trajectory tracking method based on error function integration |
EP4492177A1 (en) * | 2023-07-11 | 2025-01-15 | Airbus Defence and Space GmbH | Vehicle management system for controlling at least one function of a vehicle |
EP4492182A1 (en) * | 2023-07-11 | 2025-01-15 | Airbus Defence and Space GmbH | Apparatus management system for controlling at least one function of an apparatus |
EP4492183A1 (en) * | 2023-07-11 | 2025-01-15 | Airbus Defence and Space GmbH | Apparatus management system for controlling at least one function of an apparatus |
CN117891177A (en) * | 2024-03-15 | 2024-04-16 | 国网浙江省电力有限公司宁波供电公司 | UAV controller model construction method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018117872A1 (en) | The intelligent autopilot system | |
Baomar et al. | An Intelligent Autopilot System that learns piloting skills from human pilots by imitation | |
Sadeghzadeh et al. | A review on fault-tolerant control for unmanned aerial vehicles (UAVs) | |
Steinberg | Historical overview of research in reconfigurable flight control | |
Baomar et al. | An Intelligent Autopilot System that learns flight emergency procedures by imitating human pilots | |
Baomar et al. | Autonomous navigation and landing of large jets using artificial neural networks and learning by imitation | |
CN108382575B (en) | Rotorcraft fly-by-wire go-around mode | |
Tekles et al. | Flight envelope protection for NASA's transport class model | |
Jourdan et al. | Enhancing UAV survivability through damage tolerant control | |
CN109383781B (en) | System and method for approaching hover of rotorcraft | |
Shepherd III et al. | Robust neuro-control for a micro quadrotor | |
Perhinschi et al. | Simulation environment for UAV fault tolerant autonomous control laws development | |
Baomar et al. | Autonomous landing and go-around of airliners under severe weather conditions using Artificial Neural Networks | |
Shukla et al. | Flight test validation of a safety-critical neural network based longitudinal controller for a fixed-wing UAS | |
Baomar et al. | Autonomous flight cycles and extreme landings of airliners beyond the current limits and capabilities using artificial neural networks | |
Li et al. | Autopilot controller of fixed-wing planes based on curriculum reinforcement learning scheduled by adaptive learning curve | |
Zhang et al. | Database-driven safe flight-envelope protection for impaired aircraft | |
Puttige et al. | Real-time neural network based online identification technique for a uav platform | |
Logan et al. | Failure mode effects analysis and flight testing for small unmanned aerial systems | |
Lewis et al. | Limited authority adaptive control architectures with dynamic inversion or explicit model following | |
Sadeghzadeh | Fault tolerant flight control of unmanned aerial vehicles | |
Marcu | Fuzzy logic approach in real-time UAV control | |
Nair et al. | Design of fuzzy logic controller for lateral dynamics control of aircraft by considering the cross-coupling effect of yaw and roll on each other | |
Ward et al. | Intelligent control of unmanned air vehicles: Program summary and representative results | |
Edwards et al. | Flight evaluations of sliding mode fault tolerant controllers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16924236; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 16924236; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 10.10.2019) |