Drones, Volume 6, Issue 9 (September 2022) – 44 articles

Cover Story (view full-size image): Future wireless communication systems and technologies are expected to provide very high data rates, very low energy consumption, massive connectivity, and low latency. Unmanned air vehicles (UAVs) have been used in civil as well as military applications, such as monitoring, surveillance, public safety, transportation management, future cellular networks, data collection in Internet of Things (IoT) networks, mobility support in mm-wave communications, and edge computing. The cover paper investigates the effect of optimizing power allocation, user pairing, and altitude in UAV-based NOMA systems: the joint altitude and user-pairing problem with different power allocations is formulated as a mixed-integer non-linear programming (MINLP) problem and solved through an optimization algorithm to maximize the network capacity. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
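The cover story's joint optimization is a MINLP, but the power-allocation piece for a single near/far user pair can be sketched numerically. The snippet below is a hedged illustration, not the paper's algorithm: the channel gains, SNR, and the max-min fairness objective are all assumptions.

```python
import numpy as np

def pair_rates(alpha, g_near=1.0, g_far=0.1, snr=100.0):
    """Rates (bits/s/Hz) of a two-user downlink NOMA pair, where alpha is
    the fraction of transmit power given to the far (weaker) user."""
    # far user decodes its own signal, treating the near user's as noise
    r_far = np.log2(1 + alpha * snr * g_far / ((1 - alpha) * snr * g_far + 1))
    # near user cancels the far user's signal (SIC), then decodes its own
    r_near = np.log2(1 + (1 - alpha) * snr * g_near)
    return r_far, r_near

alphas = np.linspace(0.5, 0.99, 500)          # far user gets the larger share
fairness = [min(pair_rates(a)) for a in alphas]
best_alpha = float(alphas[int(np.argmax(fairness))])
```

Sweeping the split and keeping the max-min-fair point mimics, in one dimension, what the paper's optimizer does jointly over altitude, pairing, and power.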
18 pages, 2869 KiB  
Article
Position and Attitude Tracking of MAV Quadrotor Using SMC-Based Adaptive PID Controller
by Aminurrashid Noordin, Mohd Ariffanan Mohd Basri and Zaharuddin Mohamed
Drones 2022, 6(9), 263; https://doi.org/10.3390/drones6090263 - 19 Sep 2022
Cited by 25 | Viewed by 3672
Abstract
A micro air vehicle (MAV) is physically lightweight, such that even a slight perturbation can affect its attitude and position tracking. To attain better autonomous flight performance, MAVs require good control strategies that maintain attitude stability during translational movement. However, most available control methods use fixed gains, suffer from the chattering phenomenon, and are not sufficiently robust. To overcome these issues, an adaptive proportional-integral-derivative (PID) control scheme is proposed. An adaptive mechanism based on a second-order sliding mode control tunes the parameter gains of the PID controller, and chattering is reduced by a fuzzy compensator. The Lyapunov stability theorem and the gradient descent approach form the basis for the automated tuning. The proposed scheme was also compared against SMC-STA and SMC-TanH. MATLAB Simulink simulation results showed the overall favourable performance of the proposed scheme. Finally, the proposed scheme was tested on a model-based platform to prove its effectiveness in a complex real-time embedded system. Orbit and waypoint followers in the platform simulation showed satisfactory performance for the MAV in completing its trajectory, with the environment and sensor models acting as perturbations. Both tests demonstrate the advantages of the proposed scheme, which produces better transient performance and fast convergence towards stability.
(This article belongs to the Section Drone Design and Development)
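The adaptation idea in the abstract, PID gains tuned online by gradient descent so that a sliding variable decays, can be sketched on a toy double-integrator stand-in for one MAV axis. This is not the authors' exact law (their scheme uses second-order sliding mode with a fuzzy compensator); the plant, gains, and adaptation rate below are assumptions.

```python
# One MAV axis modeled as a double integrator x_ddot = u.
dt, lam, eta = 0.01, 1.0, 0.2       # step, sliding-surface slope, adapt rate
kp, ki, kd = 2.0, 0.0, 1.0          # initial PID gains (assumed)
x = v = ie = 0.0
r = 1.0                              # step position reference
for _ in range(1500):                # 15 s of simulated flight
    e, de = r - x, -v                # tracking error and its derivative
    ie += e * dt
    s = de + lam * e                 # sliding variable to be driven to 0
    u = kp*e + ki*ie + kd*de         # PID law with the current gains
    # gradient-descent gain adaptation, driven by the sliding variable
    kp += eta * s * e * dt
    ki += eta * s * ie * dt
    kd += eta * s * de * dt
    v += u * dt                      # plant update (semi-implicit Euler)
    x += v * dt
```

The gains drift so that the sliding variable, and with it the tracking error, is pushed toward zero.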
Figures:
Figure 1: Parrot Mambo Minidrone MAV.
Figure 2: Block diagram of an adaptive PID controller for a quadrotor UAV.
Figure 3: The fuzzy membership function; P is positive, Z is zero, and N is negative.
Figure 4: Circle trajectory without perturbation.
Figure 5: U1, U2, U3, and U4 inputs for the STA, SMC-TanH, and APID controllers without perturbation.
Figure 6: Circle trajectory with perturbation.
Figure 7: U1, U2, U3, and U4 inputs for the STA, SMC-TanH, and APID controllers with perturbation.
Figure 8: Orbit follower: position and attitude responses through simulation.
Figure 9: Orbit follower through simulation.
Figure 10: Waypoint follower: position and attitude responses through simulation.
Figure 11: Waypoint follower through simulation.
15 pages, 3522 KiB  
Article
Detection of Micro-Doppler Signals of Drones Using Radar Systems with Different Radar Dwell Times
by Jiangkun Gong, Jun Yan, Deren Li and Deyong Kong
Drones 2022, 6(9), 262; https://doi.org/10.3390/drones6090262 - 19 Sep 2022
Cited by 17 | Viewed by 10446
Abstract
Not every radar dwell time is suitable for detecting the micro-Doppler (or jet engine modulation, JEM) signatures produced by rotating blades in drone radar returns. Theoretically, any X-band drone radar should detect blade micro-Doppler because of the micro-Doppler effect and the partial resonance effect. Yet we analyzed data from three radar systems with different dwell times but similar frequency and velocity resolution: Radar-α, Radar-β, and Radar-γ, with dwell times of 2.7 ms, 20 ms, and 89 ms, respectively. The results indicate that Radar-β is the best of the three for detecting the micro-Doppler (i.e., JEM signals) produced by the rotating blades of a quadrotor drone, a DJI Phantom 4: its JEM detection probability is almost 100%, with approximately two peaks whose magnitudes are similar to that of the body Doppler. In contrast, Radar-α can barely detect any micro-Doppler, and Radar-γ detects only weak micro-Doppler signals, whose magnitude is about 10% of the body Doppler's. Proper radar dwell time is the key to micro-Doppler detection. This research suggests a way to design a cognitive micro-Doppler radar that changes its dwell time to detect and track the micro-Doppler signals of drones.
(This article belongs to the Special Issue Advances in UAV Detection, Classification and Tracking)
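The dwell-time effect can be illustrated numerically: the Doppler resolution of a coherent dwell is roughly 1/T, so JEM sidebands spaced by the blade-passing frequency only separate from the body Doppler once the dwell is long enough. A hedged sketch follows; the body Doppler, blade-passing frequency, modulation depth, and detection threshold are assumptions, not the paper's data.

```python
import numpy as np

def jem_peak_count(dwell_s, fs=10_000, f_body=500.0, f_bpf=200.0, mod=0.8):
    """Count resolvable spectral lines in a single coherent dwell of a
    blade-modulated drone return (body Doppler plus JEM sidebands)."""
    n = int(dwell_s * fs)
    t = np.arange(n) / fs
    # body return amplitude-modulated by the rotating blades
    sig = (1 + mod * np.cos(2*np.pi*f_bpf*t)) * np.exp(2j*np.pi*f_body*t)
    spec = np.abs(np.fft.fft(sig, 8192))     # zero-padded spectrum
    spec /= spec.max()
    # lines = local maxima above 0.32 of the strongest component
    peaks = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) & (spec[1:-1] > 0.32)
    return int(peaks.sum())

short = jem_peak_count(0.0027)   # ~370 Hz resolution: JEM merges into body line
medium = jem_peak_count(0.020)   # ~50 Hz resolution: body line + two JEM lines
```

With a 2.7 ms dwell the sidebands fall inside the mainlobe and only the body Doppler is seen, consistent with the Radar-α result; a 20 ms dwell separates them.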
Figures:
Figure 1: Simulated micro-Doppler of rotating blades within X-band data. (a) Geometry of the radar and the rotating rotor blades; (b) micro-Doppler on the STFT image; (c) blade flash and JEM signals.
Figure 2: Example tracking trace of the drone on a radar (Radar-γ) screenshot.
Figure 3: Range-Doppler data of drones. (a) Radar-α; (b) Radar-β; (c) Radar-γ.
Figure 4: Radar signals and spectra of drones. (a) Radar-α; (b) Radar-β; (c) Radar-γ.
Figure 5: Tracking Doppler data of drones. (a) Radar-α; (b) Radar-β; (c) Radar-γ.
Figure 6: Detection results of drones. (a) Radar-α; (b) Radar-β; (c) Radar-γ.
Figure 7: Diagram of radar wave intervals against the different structures of a quadrotor drone.
Figure 8: Block diagram of a cognitive micro-Doppler radar system.
35 pages, 3569 KiB  
Article
PX4 Simulation Results of a Quadcopter with a Disturbance-Observer-Based and PSO-Optimized Sliding Mode Surface Controller
by Yutao Jing, Xianghe Wang, Juan Heredia-Juesas, Charles Fortner, Christopher Giacomo, Rifat Sipahi and Jose Martinez-Lorenzo
Drones 2022, 6(9), 261; https://doi.org/10.3390/drones6090261 - 18 Sep 2022
Cited by 13 | Viewed by 5525
Abstract
This work designed a disturbance-observer-based nonlinear sliding mode surface controller (SMC) and validated it on a simulated quadcopter running the PX4 flight stack. To achieve this goal, this research (1) developed a dynamic mathematical model; (2) built a PX4-based simulated UAV following the model-based design process; (3) developed appropriate sliding mode control laws for each degree of freedom; (4) added disturbance observers to the proposed SMC controller for finer rejection of disturbances such as crosswind and other abrupt disturbances; (5) optimized the SMC controller's parameters with the particle swarm optimization (PSO) method; and (6) evaluated and compared the quadcopter's tracking performance under a range of noise and disturbances. PID control strategies were compared against the SMC under the same conditions. The SMC controller with disturbance observer enables accurate and fast UAV adaptation in uncertain dynamic environments.
(This article belongs to the Section Drone Design and Development)
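Step (5), PSO tuning of the controller parameters, follows the standard particle swarm update. Below is a minimal sketch, with a simple quadratic surrogate standing in for the closed-loop score (ITAE, overshoot/undershoot, and settling time) that the paper evaluates in simulation; the bounds, swarm size, and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(x) over box bounds with a standard PSO."""
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_cost)]
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # inertia + pulls
        x = np.clip(x + v, lo, hi)
        c = np.apply_along_axis(cost, 1, x)
        better = c < pbest_cost                        # update personal bests
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)]               # update global best
    return g, float(pbest_cost.min())

# Hypothetical surrogate: pretend the best gains are (4.0, 0.8).
cost = lambda k: (k[0] - 4.0)**2 + (k[1] - 0.8)**2
gains, best = pso(cost, bounds=[(0, 10), (0, 5)])
```

In the paper's setting, `cost` would run a closed-loop simulation and return the weighted score instead of this analytic stand-in.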
Figures:
Figure 1: Simulated PX4-based quadcopter flying in the windy jMavsim simulator environment.
Figure 2: The PID-position-only controller diagram.
Figure 3: The PID-rate controller diagram.
Figure 4: Euler angles and the coordinate system.
Figure 5: The SMC controller diagram.
Figure 6: Sliding-surface tracking of the x position controller on the e vs. ė plane.
Figure 7: Sliding-surface tracking of the z altitude controller on the e vs. ė plane.
Figure 8: Sliding-surface tracking of the ψ yaw controller on the e vs. ė plane.
Figure 9: Disturbance observer structure combined with the SMC controller.
Figure 10: (Left) Defined ITAE score curve for X, Y, Z, and yaw; (Middle) integrated overshoot and undershoot (OsUs) score curve; (Right) defined settling time (Ts) score curve.
Figure 11: Process flow of the PSO.
Figure 12: Score and standards vs. PSO iterations. (Top-left) Weighted average score of Ts, OsUs, and ITAE combined for each direction; (Top-right) Ts score; (Bottom-left) OsUs score; (Bottom-right) ITAE score.
Figure 13: Comparison of step-tracking performance before and after PSO tuning (SIMULINK mathematical UAV model).
Figure 14: Comparison of step-tracking performance of the PSO-optimized SMC, PID-position, and PID-rate controllers from the PX4-conducted jMavsim simulation under normal noise levels. (a) X, (b) pitch, (c) Y, (d) roll, (e) Z, and (f) yaw, each vs. time.
Figure 15: Comparison of ramp tracking with error histograms from the PX4-conducted jMavsim simulation under normal noise levels. (a,b) X ramp vs. time and error histogram; (c,d) Y; (e,f) Z; (g,h) yaw.
Figure 16: Comparison of flower-pattern complex trajectory tracking results from the PX4-conducted jMavsim simulation under normal noise levels. (a) Top view; (b) x-direction tracking over time; (c) y-direction tracking over time.
Figure 17: X position error histogram tracking the flower-pattern complex trajectory.
Figure 18: Y position error histogram tracking the flower-pattern complex trajectory.
Figure 19: Origin-tracking and disturbance estimation results under constant force disturbance. (a) X vs. time; (b) estimated disturbance in x (actual: 1 m/s²); (c) Y vs. time; (d) estimated disturbance in y (actual: 1 m/s²); (e) Z vs. time; (f) estimated disturbance in z (actual: 10 m/s²).
Figure 20: Comparison of origin-tracking performance under a unidirectional crosswind, with the SMC disturbance observer's real-time wind-velocity estimate.
Figure 21: Top view of the origin-tracking results under a unidirectional crosswind.
Figure 22: Origin-tracking and disturbance estimation results under constant torque disturbance. (a) Y vs. time; (b) estimated disturbance in roll (ϕ; actual: 5 rad/s²); (c) X vs. time; (d) estimated disturbance in pitch (θ; actual: 5 rad/s²); (e) yaw vs. time; (f) estimated disturbance in yaw (ψ; actual: 1 rad/s²).
30 pages, 12417 KiB  
Article
Spherical Indoor Coandă Effect Drone (SpICED): A Spherical Blimp sUAS for Safe Indoor Use
by Ying Hong Pheh, Shane Kyi Hla Win and Shaohui Foong
Drones 2022, 6(9), 260; https://doi.org/10.3390/drones6090260 - 18 Sep 2022
Cited by 4 | Viewed by 23966
Abstract
Even as human–robot interactions become increasingly common, conventional small Unmanned Aircraft Systems (sUAS), typically multicopters, can still be unsafe to deploy indoors in close proximity to humans without significant safety precautions, owing to their fast-spinning propellers and lack of a fail-safe mechanism in the event of a loss of power. A blimp, a non-rigid airship filled with lighter-than-air gas, is inherently safer, as it 'floats' in the air and is generally incapable of high-speed motion. The Spherical Indoor Coandă Effect Drone (SpICED) is a novel, safe spherical blimp design propelled by closed impellers utilizing the Coandă effect. Unlike a multicopter or conventional propeller blimp, the closed impellers reduce safety risks to surrounding people and objects, allowing SpICED to be operated in close proximity to humans and opening up the possibility of novel human–drone interactions. The design implements multiple closed-impeller rotors as propulsion units (PUs) that accelerate airflow along the surface of the spherical blimp and produce thrust by utilising the Coandă effect. A cube configuration with eight uni-directional propulsion units is presented, together with closed-loop Proportional–Integral–Derivative (PID) controllers and a custom control-mixing algorithm for position and attitude control about all three axes. A physical prototype of the propulsion unit and blimp sUAS was constructed to experimentally validate the dynamic behavior and controls in a motion-captured environment, and the results were compared with the side-tetra configuration of four bi-directional propulsion units presented in our previously published conference paper. The new cube configuration reduced trajectory control error by up to 40% and, unlike the side-tetra configuration, is capable of motion control in all six Degrees of Freedom (DoF), adding pitch and roll control.
(This article belongs to the Section Drone Design and Development)
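The control-mixing step, mapping a commanded 6-DoF wrench onto eight uni-directional impeller thrusts, can be sketched as a nonnegative least-squares allocation. The allocation matrix below is a random orthonormal stand-in, not the SpICED geometry; a real one would be built from the PU positions and Coandă-flow thrust directions on the envelope.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 6x8 allocation matrix (orthonormal rows) standing in for the
# true map from eight impeller thrusts to the body wrench [F; tau].
Q, _ = np.linalg.qr(rng.standard_normal((8, 6)))
A = Q.T

def allocate(wrench, A, iters=2000):
    """Nonnegative least squares by projected gradient descent: find f >= 0
    with A @ f ~ wrench (uni-directional impellers cannot push backward)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic cost
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        f = np.clip(f - step * A.T @ (A @ f - wrench), 0.0, None)
    return f

f_true = rng.uniform(0.1, 1.0, 8)   # some reachable thrust pattern
w_cmd = A @ f_true                   # the wrench it produces
f = allocate(w_cmd, A)               # recovered nonnegative thrust commands
```

Projecting onto f ≥ 0 at each step is what distinguishes this mixer from a plain pseudoinverse, which can demand negative (infeasible) thrusts.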
Figures:
Figure 1: Side view showing the physical interaction between the impeller and the surrounding air.
Figure 2: Isometric view of impellers on the PU. (a) Impeller with straight vanes; (b) impeller with curved vanes.
Figure 3: Types of configuration considered (n = number of PUs). (a) Cube; (b) prism; (c) pyramid; (d) tetrahedron; (e) side-tetrahedron. Refer to Table 1 for the coordinates of the PUs.
Figure 4: Free-body diagram of SpICED (Cube) with curved-vane impeller PUs.
Figure 5: Control diagram of SpICED (Cube).
Figure 6: Control mapping of the PU thrust and torque to the SpICED (Cube) body's motion in X, Y, and Z position.
Figure 7: Control mapping of the PU thrust and torque to the SpICED (Cube) body's motion in pitch (θ), roll (ϕ), and yaw (ψ) angle.
Figure 8: (a) Propulsion Unit prototype; (b) base of the Propulsion Unit (motor mount) with brushless motor; (c) curved-vane impeller; (d) straight-vane impeller.
Figure 9: Photo of the SpICED (Cube) prototype.
Figure 10: Surface-mounted IR LEDs on FPC cables connecting the PUs to the ESC.
Figure 11: Custom-designed flight control PCB.
Figure 12: Electronics and power system on the SpICED prototype.
Figure 13: Mass distribution of the cube prototype and the side-tetrahedron prototype.
Figure 14: Thrust and torque measurement rig for the Propulsion Unit.
Figure 15: Electronics interface from the Motion Capture System to the SpICED prototype.
Figure 16: F_P and |τ_P| of the uni-directional PU with curved impeller, spinning in both rotation directions, against ω².
Figure 17: Comparison of F_P and |τ_P| of the two types of PU against percentage of actuator signal.
Figure 18: Comparison of power and efficiency of the two types of PU against percentage of actuator signal.
Figure 19: Altitude step response of SpICED (Cube).
Figure 20: Yaw step response of SpICED (Cube).
Figure 21: Roll step response of SpICED (Cube).
Figure 22: Pitch step response of SpICED (Cube).
Figure 23: Trajectory plot of SpICED prototypes in the waypoint experiment.
Figure 24: Time plot of SpICED prototypes in the waypoint experiment.
15 pages, 4067 KiB  
Article
Assessment of Indiana Unmanned Aerial System Crash Scene Mapping Program
by Jairaj Desai, Jijo K. Mathew, Yunchang Zhang, Robert Hainje, Deborah Horton, Seyyed Meghdad Hasheminasab, Ayman Habib and Darcy M. Bullock
Drones 2022, 6(9), 259; https://doi.org/10.3390/drones6090259 - 17 Sep 2022
Cited by 6 | Viewed by 2693
Abstract
Many public safety agencies in the US have initiated UAS-based procedures to document and map crash scenes. In addition to significantly reducing the time taken to document evidence and improving first responder safety, UAS-based mapping reduces incident clearance time and thus the likelihood of a secondary crash. A wide range of cameras is used on these missions, predominantly carried by mid-priced drones costing $2000 to $4000. Indiana has developed a centralized processing center at Purdue University that has processed 252 crash scenes, mapped using 29 unique cameras, from 35 public agencies over the past three years. This paper includes a detailed case study comparing measurements obtained from a traditional ground-based real-time kinematic (RTK) positioning base station with UAS-based photogrammetric mapping. The case study showed that UAS-derived scale errors were within 0.1 ft (3 cm) of field measurements, a generally accepted threshold for public safety use cases. The 252 scenes were further assessed using ground-control scale error as the evaluation metric; to date, over 85% of the measurement errors were within 0.1 ft (3 cm). When substantial errors are identified by the Purdue processing center, they are flagged for further dialog with the agency. In most cases with larger errors, the ground-control distance was incorrectly measured, which is easily corrected by returning to the scene and performing new distance control measurements.
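The ground-control scale check described above reduces to comparing each field-taped control distance with the same distance measured in the photogrammetric model. A minimal sketch follows; the 0.1 ft tolerance is the paper's threshold, while the example distances are hypothetical.

```python
def flag_scene(measurements, tol_ft=0.1):
    """measurements: (field_distance_ft, model_distance_ft) pairs for one
    scene. Returns the worst absolute scale error and whether the scene
    should be flagged for re-measurement (error above 0.1 ft, ~3 cm)."""
    worst = max(abs(model - field) for field, model in measurements)
    return worst, worst > tol_ft

# Hypothetical scene: two taped control distances vs. their model lengths.
worst, needs_recheck = flag_scene([(100.00, 100.04), (50.00, 49.97)])
```

A flagged scene would trigger the dialog with the agency described in the abstract, typically ending in a re-taped control distance.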
Figures:
Figure 1: Overview map of 252 crash scene locations.
Figure 2: Number of scenes received by year (numbers for 2022 are current as of 4 July 2022).
Figure 3: Histogram of scale error.
Figure 4: Cumulative frequency distribution of scale errors.
Figure 5: Impact of the number of scale measurements on accuracy.
Figure 6: Pareto-sorted frequency chart of scale measurements.
Figure 7: Scale errors versus length of scale measurements (filtered to the five most frequently used scale measurements).
Figure 8: Top eight most frequently used camera models and associated scale errors. (a) Most frequently used camera models for UAS-based mapping; (b) scale errors for those camera models.
Figure 9: UAV mission flight plan depicting image capture locations.
Figure 10: Scaled UAS ortho-rectified mosaic of a scene with crash scene labels.
Figure 11: RTK base station data collection. (a) Data collection procedure; (b) data collection locations.
24 pages, 1044 KiB  
Article
Active Disturbance Rejection Control for the Robust Flight of a Passively Tilted Hexarotor
by Santos Miguel Orozco Soto, Jonathan Cacace, Fabio Ruggiero and Vincenzo Lippiello
Drones 2022, 6(9), 258; https://doi.org/10.3390/drones6090258 - 17 Sep 2022
Cited by 8 | Viewed by 3362
Abstract
This paper presents a robust control strategy for the flight of an unmanned aerial vehicle (UAV), a hexarotor with passively (fixed) tilted rotors. The proposed controller is based on a robust extended-state observer that estimates and rejects internal dynamics and external disturbances at runtime. Both the stability and the convergence of the observer are proved using Lyapunov-based perturbation theory and an ultimate-bound approach. The controller is implemented within a highly realistic, physics-engine-based simulation environment whose behavior is almost identical to that of a real UAV. The controller was tested in flight under normal conditions and in the presence of different types of disturbances, with successful results. Furthermore, the proposed control system was compared with another robust control approach and showed better attenuation of the error signals.
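The extended-state-observer idea, augmenting the state with a lumped disturbance term and estimating it at runtime, can be sketched per axis on a double integrator. This is a minimal linear ESO, not the paper's robust observer; the plant, gains, and disturbance value below are assumptions.

```python
# Plant: one translational axis, x_ddot = u + d, with unknown constant d.
dt, omega = 0.001, 20.0                      # step size; observer bandwidth
l1, l2, l3 = 3*omega, 3*omega**2, omega**3   # places all ESO poles at -omega

x = v = 0.0                                  # true state
z1 = z2 = z3 = 0.0                           # estimates of x, x_dot, and d
d_true, u = 1.5, 0.0
for _ in range(5000):                        # 5 s of simulated flight
    x, v = x + v*dt, v + (u + d_true)*dt     # true plant (Euler step)
    e = x - z1                               # output estimation error
    z1 += (z2 + l1*e) * dt                   # ESO: observer integration
    z2 += (z3 + u + l2*e) * dt
    z3 += l3*e * dt
    u = -4.0*z1 - 4.0*z2 - z3                # PD on estimates + cancel d-hat
```

Because z3 converges to the lumped disturbance, subtracting it from the control law leaves the PD loop acting on a nearly disturbance-free plant, which is the rejection mechanism the abstract describes.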
Show Figures

Figure 1

Figure 1
<p>Hexarotor with tilted propellers and reference frames. <math display="inline"><semantics> <msub> <mi>O</mi> <mi>W</mi> </msub> </semantics></math> is the world reference frame and <math display="inline"><semantics> <msub> <mi>O</mi> <mi>B</mi> </msub> </semantics></math> is the airframe coordinate frame. <math display="inline"><semantics> <msub> <mi>O</mi> <msub> <mi>S</mi> <mi>i</mi> </msub> </msub> </semantics></math> is the coordinate frame of the <span class="html-italic">i</span>-th propeller.</p>
Full article ">Figure 2
<p><b>Left</b>: Top view of the tilted hexarotor indicating each rotor’s spin direction, <math display="inline"><semantics> <msub> <mi>ω</mi> <mi>i</mi> </msub> </semantics></math>, and the length <span class="html-italic">L</span> from the CoM of the airframe to the center of rotation of a rotor; in this case, <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>i</mi> </msub> <mo>=</mo> <mi>π</mi> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math>. All of the rotors are equidistant with respect to the CoM, and the location angle of each rotor with respect to the last one is the same. <b>Right</b>: Front view of the hexarotor with two examples of the tilting angles <math display="inline"><semantics> <mi>α</mi> </semantics></math>. All six of the rotors are tilted with the same magnitude, but in different senses.</p>
Figure 3
<p>Block diagram of the proposed controller and observer. Everything within the dashed line is component-wise for <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <mn>6</mn> </mrow> </semantics></math>.</p>
Figure 4
<p>Time evolution of the norm of the state error while the UAV performed the commanded Cartesian trajectory.</p>
Figure 5
<p>Cartesian motion of the UAV. <span style="color:red"><math display="inline"><semantics> <mrow> <mo>−</mo> <mspace width="3.33333pt"/> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math></span> Desired trajectory. – – UAV’s trajectory.</p>
Figure 6
<p><b>Top</b>: Behaviors of the states of the UAV during a regulation task when subjected to sudden gusts of wind. <span style="color:red"><math display="inline"><semantics> <mrow> <mo>−</mo> <mspace width="3.33333pt"/> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math></span> Set points. —- Measured states. <math display="inline"><semantics> <mrow> <mo>−</mo> <mspace width="3.33333pt"/> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math> Observed states. <b>Bottom</b>: Reconstructed total disturbances about each axis. All of the horizontal axes display the time in seconds.</p>
Figure 7
<p><b>Top</b>: Behaviors of the states of the UAV during a regulation task when subjected to wind blows. <span style="color:red"><math display="inline"><semantics> <mrow> <mo>−</mo> <mspace width="3.33333pt"/> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math></span> Set points. —- Measured states. <math display="inline"><semantics> <mrow> <mo>−</mo> <mspace width="3.33333pt"/> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math> Observed states. <b>Bottom</b>: Reconstructed total disturbances about each axis. All of the horizontal axes display the time in seconds.</p>
Figure 8
<p>Time evolution of the rotors’ velocities. <b>Left</b>: Experiment on regulation under wind shear. <b>Right</b>: Experiment on regulation under smooth blowing of wind. <span style="color:red">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>1</mn> </msub> </semantics></math>, <span style="color:red">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>2</mn> </msub> </semantics></math>, <span style="color:red">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>3</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>4</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>5</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>6</mn> </msub> </semantics></math>.</p>
Figure 9
<p><b>Left</b>: Time evolution of the norms of the state error. <b>Right</b>: Time evolution of the estimated disturbance along the <math display="inline"><semantics> <msub> <mi>z</mi> <mi>B</mi> </msub> </semantics></math> axis, which was retrieved from the SMESO.</p>
Figure 10
<p>Time evolution of the rotors’ velocities for the experiment on regulation under payload conditions. <b>–</b><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>1</mn> </msub> </semantics></math>, <span style="color:red">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>2</mn> </msub> </semantics></math>, <span style="color:red">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>3</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>4</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>5</mn> </msub> </semantics></math>, <span style="color:orange">–</span><math display="inline"><semantics> <msub> <mi>ω</mi> <mn>6</mn> </msub> </semantics></math>.</p>
Figure 11
<p>Time evolution of the norms of the state error while the UAV performed a regulation task. —- ADRC. <math display="inline"><semantics> <mrow> <mo>−</mo> <mo>−</mo> <mspace width="3.33333pt"/> </mrow> </semantics></math> SMC.</p>
19 pages, 4609 KiB  
Article
A Framework for Soil Salinity Monitoring in Coastal Wetland Reclamation Areas Based on Combined Unmanned Aerial Vehicle (UAV) Data and Satellite Data
by Lijian Xie, Xiuli Feng, Chi Zhang, Yuyi Dong, Junjie Huang and Junkai Cheng
Drones 2022, 6(9), 257; https://doi.org/10.3390/drones6090257 - 16 Sep 2022
Cited by 14 | Viewed by 3298
Abstract
Soil salinization is one of the most important causes of land degradation and desertification, often threatening land management and sustainable agricultural development. Because of their low resolution, satellites cannot support fine mapping of soil salinity, while high-resolution UAV imagery can only map soil salinity accurately over small areas. How to realize fine salinity mapping at large scale by combining UAV and satellite data is therefore an urgent problem. In this paper, the spectral variables most relevant to soil salinity were first identified using Pearson correlation analysis, and the optimal inversion model was established from the screened variables. Secondly, the feasibility of correcting satellite data with UAV data was confirmed using Pearson correlation analysis and spectral variation trends, and the correction was performed with least-squares polynomial curve fitting between the UAV and satellite data. Finally, because the reflectance received from vegetated areas does not directly reflect surface conditions, we used a support vector machine classifier to divide the study area into bare land and vegetated areas and built models on the classification results, combining the accurate spectral information of the UAV with the large-scale coverage of the satellite data. Compared with modeling on satellite data alone, inversion based on the optimized satellite data improved the accuracy of soil salinity inversion over large satellite areas by 6–19%. Our method can meet the needs of large-scale accurate mapping and provides the necessary means and reference for soil condition monitoring. Full article
(This article belongs to the Special Issue UAS in Smart Agriculture)
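The cross-sensor correction step above — least-squares polynomial curve fitting between co-located UAV and satellite reflectance — can be sketched per band as follows. The quadratic degree and the synthetic reflectance values are illustrative assumptions, not the paper's fitted coefficients.

```python
import numpy as np

def fit_band_correction(sat_reflectance, uav_reflectance, degree=2):
    """Least-squares polynomial mapping satellite reflectance to
    co-located UAV reflectance for one band (degree 2 is an
    illustrative choice, not the paper's fitted order)."""
    return np.poly1d(np.polyfit(sat_reflectance, uav_reflectance, degree))

# synthetic co-located samples for a single band
rng = np.random.default_rng(0)
sat = rng.uniform(0.05, 0.4, 200)
uav = 0.9 * sat**2 + 1.1 * sat + 0.02    # assumed "true" sensor relation
correct = fit_band_correction(sat, uav)
print(np.max(np.abs(correct(sat) - uav)))  # near zero for noiseless data
```

In practice one such curve would be fitted for each spectral band, then applied to the full satellite scene before running the salinity inversion model.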
Show Figures

Figure 1
<p>(<b>a</b>) The geographical location of the study area; (<b>b</b>) Ningbo soil classification map.</p>
Figure 2
<p>Soil salinity content spatial distribution map.</p>
Figure 3
<p>The workflow of this study: (<b>a</b>) building a model based on UAV Imagery; (<b>b</b>) correction of satellite data based on UAV data; (<b>c</b>) soil salinity inversion based on corrected data.</p>
Figure 4
<p>Correlations between soil salinity and spectral reflectance: (<b>a</b>) bare soil sample correlation; (<b>b</b>) vegetation area sample correlation.</p>
Figure 5
<p>Correlations between soil salinity and spectral index reflectance: (<b>a</b>) bare soil sample correlation; (<b>b</b>) vegetation area sample correlation.</p>
Figure 6
<p>Correlations between UAV and Sentinel-2A: (<b>a</b>) trend of reflectance; (<b>b</b>) band correlation of UAV images and Sentinel-2A images.</p>
Figure 7
<p>Polynomial fitting results.</p>
Figure 8
<p>Classification accuracy of different methods.</p>
Figure 9
<p>Visualization results of different classification methods.</p>
Figure 10
<p>Inversion results: (<b>a</b>) IDW interpolation map; (<b>b</b>) UAV inversion results; (<b>c</b>) Sentinel-2A inversion results; (<b>d</b>) inversion results for optimizing satellite data based on drone data.</p>
30 pages, 41427 KiB  
Article
Autonomous Surveying of Plantation Forests Using Multi-Rotor UAVs
by Tzu-Jui Lin and Karl A. Stol
Drones 2022, 6(9), 256; https://doi.org/10.3390/drones6090256 - 16 Sep 2022
Cited by 5 | Viewed by 3042
Abstract
Modern plantation forest procedures still rely heavily on manual data acquisition in the inventory process, limiting the quantity and quality of the collected data. This limitation in collection performance is often due to the difficulty of traversing the plantation forest environment on foot. This work presents an autonomous system for exploring plantation forest environments using multi-rotor UAVs. The proposed method consists of three parts: waypoint selection, trajectory generation, and trajectory following. Waypoint selection is accomplished by estimating the rows' locations within the environment and selecting points between adjacent rows. Trajectory generation uses a non-linear optimization-based constant-speed planner, and trajectory following is accomplished using a model predictive control approach. The proposed method is tested extensively in simulation against various procedurally generated forest environments, with results suggesting that it is robust against variations within the scene. Finally, flight testing is performed in a local plantation forest, demonstrating the successful application of our proposed method within a complex, uncontrolled environment. Full article
(This article belongs to the Special Issue Feature Papers for Drones in Ecology Section)
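The waypoint-selection idea — fly the corridor midway between each pair of adjacent estimated rows, alternating direction to form a lawnmower pattern — can be sketched as below. The row offsets, lengths, and function name are illustrative; the paper's planner additionally handles row estimation, replanning, and 3D trajectories.

```python
def corridor_waypoints(row_offsets, row_length):
    """Two waypoints per corridor between adjacent rows, alternating
    direction to produce a lawnmower pattern (sketch only)."""
    rows = sorted(row_offsets)
    waypoints = []
    for i, (a, b) in enumerate(zip(rows, rows[1:])):
        mid = 0.5 * (a + b)            # fly midway between the two rows
        ends = (0.0, row_length)
        if i % 2:                      # reverse every other corridor
            ends = ends[::-1]
        waypoints += [(ends[0], mid), (ends[1], mid)]
    return waypoints

# three estimated rows at lateral offsets -4, 0, 4 m give two corridors
print(corridor_waypoints([-4.0, 0.0, 4.0], row_length=30.0))
```

Each waypoint pair here is a (along-row, cross-row) coordinate; a trajectory planner would then connect them with collision-free, constant-speed segments.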
Show Figures

Figure 1
<p>Examples of plantation forests which are (<b>a</b>) traversable, and (<b>b</b>) untraversable on foot in Kaingaroa Forest, Rotorua, New Zealand. Note the presence of an available flight corridor in both environments.</p>
Figure 2
<p>UAV used for flight testing. The carried payload consists of a Livox MID-70 LiDAR, Intel Realsense T265, and an Intel NUC. The body and inertial frames are denoted as <math display="inline"><semantics> <mrow> <mo>{</mo> <mi>B</mi> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>{</mo> <mi>I</mi> <mo>}</mo> </mrow> </semantics></math> respectively.</p>
Figure 3
<p>Overall task flow during the survey process. Pose estimation and mapping are not within the scope of this work.</p>
Figure 4
<p>Horizontal slice of a point cloud in a typical New Zealand plantation forest with rows labeled. Note the lack of consistent structure in the orthogonal direction.</p>
Figure 5
<p>Approximate survey plan consisting of a lawnmowing pattern aligned with plantation rows.</p>
Figure 6
<p>Top-down views of (<b>a</b>) good, and (<b>b</b>) poor alignment between <math display="inline"><semantics> <mrow> <mo>{</mo> <msub> <mi>I</mi> <mi>X</mi> </msub> <mo>}</mo> </mrow> </semantics></math> and the principal row directions of the plantation forest.</p>
Figure 7
<p>Sample environment consisting of two rows with <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>,</mo> <mi>θ</mi> </mrow> </semantics></math> labeled. The dotted yellow line denotes the traversable corridor.</p>
Figure 8
<p>Overall trajectory planner pipeline.</p>
Figure 9
<p>Worst case B-Spline collision scenario with degree 3. The trajectory is guaranteed to be collision-free if the three convex hulls shown in blue, orange, and green are obstacle free.</p>
Figure 10
<p>Top down view of a typical replanning event, showing the elements of a newly observed obstacle and a splice event.</p>
Figure 11
<p>Simplified version of the forward paths of the cascaded multi-rotor controller with unified and airframe-specific portions highlighted.</p>
Figure 12
<p>Proposed controller structure.</p>
Figure 13
<p>Illustration of parameters modeled in <a href="#drones-06-00256-t001" class="html-table">Table 1</a>.</p>
Figure 14
<p>Simulation test environments. The three numbers in the subfigure captions correspond to the branching, roughness, and slope metrics.</p>
Figure 15
<p>Summary of survey time for each test environment. Trials that did not produce full coverage are not included. Circles denote outliers in the survey time.</p>
Figure 16
<p>Effects on survey time with varying test environments.</p>
Figure 17
<p>Summary of return times for each test environment with incomplete trials removed.</p>
Figure 18
<p>Effects on coverage when the three test parameters are varied.</p>
Figure 19
<p>Averaged coverage maps for trials in environments (<b>a</b>–<b>h</b>), normalized to the maximum of each specific sample. Blue hues indicate trees with fewer observed points, and green hues indicate more observed points. Non-uniformity is shown as the difference in hues.</p>
Figure 20
<p>Effects on survey time (<b>left</b>) and coverage (<b>right</b>) with varying speed. Exponential fit through all results is shown as a red line.</p>
Figure 21
<p>Plot of mean speed vs. target survey speed. Note the plateau in mean speed at 3 m/s and 4 m/s. The averaged mean speed across all samples for each target speed is indicated by a horizontal line.</p>
Figure 22
<p>Path taken for complete coverage for the proposed method and FUEL.</p>
Figure 23
<p>(<b>a</b>) Single row view of the plantation site used for flight testing, note the presence of some low-hanging branches, and (<b>b</b>) 3D scan of the proposed test site showing the survey region; the scan area is approx. 25 m by 30 m.</p>
Figure 24
<p>Simplified views of the large-scale test flights (<b>a</b>–<b>d</b>); the path followed is shown as a solid line overlaid with stem locations within the environment. Greener hues indicate a flight speed closer to the 1 m/s target, while redder hues indicate slower speeds.</p>
Figure 25
<p>Top-down views of large-scale test flights (<b>a</b>–<b>d</b>), with the path taken overlaid on the reconstructed point cloud.</p>
Figure 26
<p>Plot of estimated speed during trial (d).</p>
Figure 27
<p>Top-down views of path taken during small-scale test flights (<b>a</b>–<b>f</b>) overlaid on the reconstructed point clouds. The path taken during the survey is shown as the line.</p>
11 pages, 1416 KiB  
Communication
Evaluating Thermal and Color Sensors for Automating Detection of Penguins and Pinnipeds in Images Collected with an Unoccupied Aerial System
by Jefferson T. Hinke, Louise M. Giuseffi, Victoria R. Hermanson, Samuel M. Woodman and Douglas J. Krause
Drones 2022, 6(9), 255; https://doi.org/10.3390/drones6090255 - 15 Sep 2022
Cited by 11 | Viewed by 2833
Abstract
Estimating seabird and pinniped abundance is central to wildlife management and ecosystem monitoring in Antarctica. Unoccupied aerial systems (UAS) can collect images to support monitoring, but manual image analysis is often impractical. Automating target detection using deep learning techniques may improve data acquisition, but different image sensors may affect target detectability and model performance. We compared the performance of automated detection models based on infrared (IR) or color (RGB) images and tested whether IR images, or training data that included annotations of non-target features, improved model performance. For this assessment, we collected paired IR and RGB images of nesting penguins (Pygoscelis spp.) and aggregations of Antarctic fur seals (Arctocephalus gazella) with a small UAS at Cape Shirreff, Livingston Island (60.79 °W, 62.46 °S). We trained seven independent classification models using the Video and Image Analytics for Marine Environments (VIAME) software and created an open-access R tool, vvipr, to standardize the assessment of VIAME-based model performance. We found that the IR images and the addition of non-target annotations had no clear benefits for model performance given the available data. Nonetheless, the generally high performance of the penguin models provided encouraging results for further improving automated image analysis from UAS surveys. Full article
(This article belongs to the Special Issue UAV Design and Applications in Antarctic Research)
Show Figures

Figure 1
<p>Examples of assessing a model prediction (dashed lines) against a truth annotation (solid line) based on the extent of overlap specified by the truth and prediction overlap parameters. In this example, consider a truth overlap value of 0.5 and prediction overlap value of 0.5. The assessment first considered the truth overlap threshold, and then the prediction overlap threshold. If either the truth or the prediction thresholds are exceeded, the prediction is retained as a true positive. The general flow of this process is depicted schematically in supplemental <a href="#app1-drones-06-00255" class="html-app">Figure S2</a>.</p>
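The matching rule described in this caption can be written down directly (box layout, helper names, and default thresholds are illustrative; this is a sketch, not the vvipr implementation): a prediction is kept as a true positive if the intersection covers enough of the truth box or enough of the prediction box.

```python
def is_true_positive(truth, pred, truth_thr=0.5, pred_thr=0.5):
    """Keep a prediction as a true positive if the intersection covers
    at least truth_thr of the truth box OR at least pred_thr of the
    prediction box. Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(truth[0], pred[0]), max(truth[1], pred[1])
    ix2, iy2 = min(truth[2], pred[2]), min(truth[3], pred[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / area(truth) >= truth_thr or inter / area(pred) >= pred_thr

print(is_true_positive((0, 0, 10, 10), (5, 0, 15, 10)))  # half overlap -> True
```

Note that this either/or rule is more permissive than a single intersection-over-union threshold, since a small prediction fully inside a large truth box (or vice versa) still counts.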
Figure 2
<p>Counts of false positives (dashed lines), false negatives (thin lines), and true positives (thick lines) as a function of confidence threshold (left column), truth overlap (middle column), and prediction overlap (right column) parameters for the penguin (top row) and fur seals (bottom row) data. Data displayed here represent results from the Peng_IR_TC and Seal_IR_TC models. For illustration, we allowed one parameter to vary, whereas the other two remained fixed. Fixed values for the respective panels were set at 0.2 (confidence threshold), 0.5 (truth overlap), and 0.5 (prediction overlap). Vertical dotted lines highlight the thresholds used for analysis of all model predictions.</p>
Figure 3
<p>Examples of prediction and annotation overlap for (<b>a</b>) penguins in the Peng_IR_MC model where most truth annotations are entirely overlapped by predictions, indicating good model performance; (<b>b</b>) pups in the Seal_IR_TC model, where model predictions rarely overlap truth annotations, indicating poor model performance; and (<b>c</b>) non-pups in the Seal_RGB_TC model, where predictions and truth annotations overlap in some areas, but not consistently throughout the image. The upper row shows the annotations split as true and false positives over truth annotations, and the lower panel shows the truth and predicted annotations overlaid on the corresponding images.</p>
23 pages, 22129 KiB  
Article
Cotton Yield Estimation Using the Remotely Sensed Cotton Boll Index from UAV Images
by Guanwei Shi, Xin Du, Mingwei Du, Qiangzi Li, Xiaoli Tian, Yiting Ren, Yuan Zhang and Hongyan Wang
Drones 2022, 6(9), 254; https://doi.org/10.3390/drones6090254 - 14 Sep 2022
Cited by 22 | Viewed by 3080
Abstract
Cotton constitutes 81% of the world's natural fibers. Accurate and rapid cotton yield estimation is important for cotton trade and agricultural policy development. Therefore, we developed a remote sensing index that intuitively represents cotton boll characteristics and supports cotton yield estimation by extracting cotton boll pixels. In our study, the Density of open Cotton boll Pixels (DCP) was extracted by designing different cotton boll indices combined with the threshold segmentation method. The relationships between DCP, the field-surveyed Density of Total Cotton bolls (DTC), and yield were compared and analyzed. Five common yield estimation models, Linear Regression (LR), Support Vector Regression (SVR), Classification and Regression Trees (CART), Random Forest (RF), and K-Nearest Neighbors (KNN), were implemented and evaluated. The results showed that DCP had a strong correlation with yield, with a Pearson correlation coefficient of 0.84. The RF method exhibited the best yield estimation performance, with average R2 and rRMSE values of 0.77 and 7.5%, respectively (five-fold cross-validation). Among the indices tested, "RGB and NIR normalized", a normalized-form index combining the RedGreenBlue (RGB) and Near-Infrared (NIR) bands, performed best. Full article
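The core pipeline step — compute a per-pixel boll index, threshold it, and take the density of flagged open-boll pixels — can be sketched as below. The particular normalized form, the threshold, and the toy reflectances are assumptions standing in for the paper's best-performing index, not its actual formula.

```python
import numpy as np

def boll_index(rgb, nir):
    """Normalized-form index contrasting mean visible brightness with NIR:
    open bolls are bright across the visible bands, while canopy is dark in
    the visible and bright in NIR (illustrative stand-in for the paper's
    'RGB and NIR normalized' index)."""
    vis = rgb.mean(axis=-1)
    return (vis - nir) / (vis + nir + 1e-9)

def dcp(rgb, nir, threshold):
    """Fraction of pixels flagged as open cotton bolls; the paper's DCP
    is this count taken per experimental plot (threshold is assumed)."""
    return (boll_index(rgb, nir) > threshold).mean()

# toy 2x2 plot: one bright 'boll' pixel among three canopy pixels
rgb = np.array([[[0.8, 0.8, 0.8], [0.1, 0.3, 0.1]],
                [[0.1, 0.3, 0.1], [0.1, 0.3, 0.1]]])
nir = np.array([[0.9, 0.6], [0.6, 0.6]])
print(dcp(rgb, nir, threshold=-0.2))  # 0.25: one pixel of four flagged
```

The per-plot DCP values produced this way would then serve as the predictor variable for the LR/SVR/CART/RF/KNN yield models.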
Show Figures

Figure 1
<p>(<b>a</b>) Location of Hebei Province in China; (<b>b</b>) location of Cangzhou city and the experimental area in Hebei Province; (<b>c</b>) overview of the 125 experimental plots in the cotton field. Each plot has an independent code.</p>
Figure 2
<p>Original image of the study area.</p>
Figure 3
<p>Acquisition of UAV data and field survey data. (<b>a</b>) Flight preparation; (<b>b</b>) data collection; (<b>c</b>) partition packaging; (<b>d</b>) weighing production.</p>
Figure 3 Cont.
<p>Acquisition of UAV data and field survey data. (<b>a</b>) Flight preparation; (<b>b</b>) data collection; (<b>c</b>) partition packaging; (<b>d</b>) weighing production.</p>
Figure 4
<p>Overall methodology workflow.</p>
Figure 5
<p>Distribution of reflectance for 65,603 cotton boll pixels and 118,996 non-cotton boll pixels extracted by visual interpretation from the multispectral image.</p>
Figure 6
<p>The yellow frame is the boundary of the experimental plot, the red mask is generated by the open cotton boll index, and the green pixels are those counted for DCP. The plot boundaries were applied in zonal statistics of the cotton boll mask to obtain the DCP of each plot.</p>
Figure 7
<p>Thirty-four open cotton boll masks. Original images of a small area in the experimental field are shown. Thirty-four boll mask images generated by thirty-four indices were superimposed on the original image. C1, C2, and C3 indicate three band combinations.</p>
Figure 8
<p>Correlation between DTC and the DCP extracted using each index. Three band combinations and four calculation methods are included in the figure.</p>
Figure 9
<p>Correlation between yield and the DCP extracted using each index. Three band combinations and four calculation methods are included in the figure.</p>
Figure 10
<p>Five-fold cross-validation results of 5 models. Black dots represent training set results, and green dots represent testing set results.</p>
Figure 11
<p>Yield map.</p>
Figure 12
<p>Correlation between DUC and the DCP extracted using each index. Three band combinations and four calculation methods are included in the figure.</p>
Figure 13
<p>(<b>a</b>) The relationship between DUC and actual yield. (<b>b</b>) The relationship between DTC and actual yield. (<b>c</b>) The relationship between DCP and actual yield. R<sup>2</sup> values are those achieved by RF. Each black dot represents an experimental plot.</p>
Figure 14
<p>(<b>a</b>) Four pixels are occupied by one large boll. (<b>b</b>) Four pixels are occupied by four small bolls. They have similar yields, but very different boll counts.</p>
11 pages, 13603 KiB  
Article
Investigating Errors Observed during UAV-Based Vertical Measurements Using Computational Fluid Dynamics
by Hayden Hedworth, Jeffrey Page, John Sohl and Tony Saad
Drones 2022, 6(9), 253; https://doi.org/10.3390/drones6090253 - 13 Sep 2022
Cited by 7 | Viewed by 4469
Abstract
Unmanned Aerial Vehicles (UAVs) are a popular platform for air quality measurements. For vertical measurements, rotary-wing UAVs are particularly well-suited. However, an important concern with rotary-wing UAVs is how the rotor-downwash affects measurement accuracy. Measurements from a recent field campaign showed notable discrepancies between data from ascent and descent, which suggested the UAV downwash may be the cause. To investigate and explain these observed discrepancies, we use high-fidelity computational fluid dynamics (CFD) simulations to simulate a UAV during vertical flight. We use a tracer to model a gaseous pollutant and evaluate the impact of the rotor-downwash on the concentration around the UAV. Our results indicate that, when measuring in a gradient, UAV-based measurements were ~50% greater than the expected concentration during descent, but they were accurate during ascent, regardless of the location of the sensor. These results provide an explanation for errors encountered during vertical measurements and provide insight for accurate data collection methods in future studies. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles in Atmospheric Research)
Show Figures

Figure 1
<p>DJI M600 in flight with an ozonesonde and radiosonde attached below.</p>
Figure 2
<p>Model of the DJI M600 used in simulations. The areas representing the rotors are shown in dark green and the grey region represents the solid geometry for the UAV mainframe and payload.</p>
Figure 3
<p>Three vertical ozone profiles measured at (<b>a</b>) 7:35 AM, (<b>b</b>) 8:30 AM, and (<b>c</b>) 8:50 AM while the ozone distribution was evolving. The line marked with black circles represents data collected during ascent and the unmarked line represents data collected during descent.</p>
Figure 4
<p>Three-dimensional view of the instantaneous air velocity in the computational domain with slices through each of the rotors for UAV (<b>a</b>) ascent and (<b>b</b>) descent. Color represents air speed on a linear scale from 0 to 16 m/s.</p>
Figure 5
<p>Slices through the center of the domain and the UAV at t = 20 s simulation time. The top row shows the instantaneous velocity magnitude during ascent (<b>a</b>) and descent (<b>b</b>) with values ranging from 0 (blue) to 16 (red) m/s.</p>
Figure 6
<p>Two-dimensional slices through the center of the domain showing the normalized scalar concentration on a scale from 0 (white) to 1 (black) for (<b>a</b>) ascent and (<b>b</b>) descent. Arrows in (<b>a</b>) show the three intake tube locations that were evaluated, namely, (1) below the UAV, (2) above the UAV, and (3) outside the UAV rotors.</p>
Figure 7
<p>Time-averaged relative error between the scalar concentration field and the expected concentration field for (<b>a</b>) ascent and (<b>b</b>) descent. The value of the relative error is labeled at the borders between each of the colored regions.</p>
Figure 8
<p>Nominal scalar value during ascent (<b>a</b>) and descent (<b>b</b>) at three potential locations of the ozonesonde intake tube: (1) below the UAV, (2) above the UAV, and (3) extended outside the UAV rotors. The black lines represent the expected profiles (E) for each scenario.</p>
18 pages, 64559 KiB  
Article
Design and Implementation of UAVs for Bird’s Nest Inspection on Transmission Lines Based on Deep Learning
by Han Li, Yiqun Dong, Yunxiao Liu and Jianliang Ai
Drones 2022, 6(9), 252; https://doi.org/10.3390/drones6090252 - 13 Sep 2022
Cited by 30 | Viewed by 4449
Abstract
In recent years, unmanned aerial vehicles (UAVs) have been increasingly used in power line inspections. Birds often nest on transmission line towers, which threatens safe power line operation. Existing research on bird's nest inspection with UAVs has largely been limited to post-flight image processing, which has poor real-time performance and cannot deliver timely detection results. To address these shortcomings, we designed a deep-learning-based power inspection UAV system for autonomous flight, positioning and photography, real-time bird's nest detection, and result export. In this research, 2000 bird's nest images were captured in an actual power inspection environment to create the dataset. Parameter optimization and comparative tests were performed on three target detection models: YOLOv3, YOLOv5-s, and YOLOX-s. A YOLOv5-s model optimized for real-time bird's nest detection is proposed and deployed to the onboard computer for real-time detection and verification during flight. A DJI M300 RTK UAV was used for test flights in a natural power inspection environment. The test results show that the system achieves an mAP of 92.1% for bird's nest detection at a real-time detection frame rate of 33.9 FPS. Compared with previous research, this paper presents a new practice of using drones for bird's nest detection that greatly improves real-time detection performance. The UAV system can efficiently complete bird's nest detection during power inspection, significantly reducing the manpower required. Full article
(This article belongs to the Section Drone Design and Development)
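Whatever the YOLO variant, the raw detections a model such as YOLOv5-s emits are filtered by confidence thresholding and non-maximum suppression before being reported. A pure-Python sketch of that shared post-processing step (the thresholds and boxes are illustrative, not the authors' settings):

```python
def nms(boxes, scores, conf_thr=0.25, iou_thr=0.45):
    """Confidence filtering + greedy non-maximum suppression, the
    standard post-processing step shared by the YOLO family (sketch).
    Boxes are (x1, y1, x2, y2); returns indices of kept detections."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    # confident detections, best first
    order = sorted((i for i, s in enumerate(scores) if s >= conf_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # keeps the best of the overlapping pair: [0, 2]
```

On an onboard computer this step is cheap relative to network inference, so the reported 33.9 FPS is dominated by the model's forward pass.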
Show Figures

Figure 1
<p>An overview of object detection models. (<b>a</b>) Two-stage object detection; (<b>b</b>) One-stage object detection.</p>
Figure 2
<p>Workflow diagram.</p>
Figure 3
<p>Schematic diagram of the location where the drone takes pictures and the camera angle setting.</p>
Figure 4
<p>Hardware schematic of the UAV system for bird’s nest inspection.</p>
Figure 5
<p>Software architecture.</p>
Figure 6
<p>YOLOv3 structure diagram.</p>
Figure 7
<p>YOLOv5-s structure diagram.</p>
Figure 8
<p>Focus network structure diagram.</p>
Figure 9
<p>YOLOX-s structure diagram.</p>
Figure 10
<p>Schematic diagram of bird’s nest samples after data augmentation. (<b>a</b>) Original image; (<b>b</b>) Horizonal flip image; (<b>c</b>) Vertical flip image; (<b>d</b>) Random rotate image; (<b>e</b>) Gaussian blur image.</p>
Figure 11
<p>The comparison of loss curves for YOLOv3, YOLOv5-s, and YOLOX-s. (<b>a</b>) YOLOv3; (<b>b</b>) YOLOv5-s; (<b>c</b>) YOLOX-s.</p>
Full article ">Figure 12
<p>Comparison of the detection results of the three models. <b>(a)</b> YOLOv3; (<b>b</b>) YOLOv5-s; (<b>c</b>) YOLOX-s.</p>
Full article ">Figure 13
<p>Display of flight status and bird’s nest detection results.</p>
Full article ">Figure 14
<p>Display of flight status and bird’s nest detection results.</p>
Full article ">
18 pages, 4554 KiB  
Article
Robust Control Strategy for Quadrotor Drone Using Reference Model-Based Deep Deterministic Policy Gradient
by Hongxun Liu, Satoshi Suzuki, Wei Wang, Hao Liu and Qi Wang
Drones 2022, 6(9), 251; https://doi.org/10.3390/drones6090251 - 12 Sep 2022
Cited by 8 | Viewed by 3953
Abstract
Due to the differences between simulations and the real world, the application of reinforcement learning (RL) in drone control encounters problems such as oscillations and instability. This study proposes a control strategy for quadrotor drones using a reference model (RM) based on deep [...] Read more.
Due to the differences between simulations and the real world, the application of reinforcement learning (RL) in drone control encounters problems such as oscillations and instability. This study proposes a control strategy for quadrotor drones using a reference model (RM) based on deep RL. Unlike conventional studies associated with optimal and adaptive control, this method uses a deep neural network to design a flight controller for quadrotor drones, which can map the drone’s states and target values to control commands directly. The method was developed based on a deep deterministic policy gradient (DDPG) algorithm combined with the deep neural network. The RM was further employed in the actor–critic structure to enhance robustness and dynamic stability. The practicability of the RM–DDPG-based flight-control strategy was confirmed experimentally: a quadrotor drone model was constructed based on an actual drone, the offline policy was trained on it, and the performance of the policy was evaluated via simulations while confirming the transition of system states and the output of the controller. The proposed strategy can eliminate oscillations and steady-state error and achieves robust performance against target-value changes and external interference. Full article
(This article belongs to the Special Issue Conceptual Design, Modeling, and Control Strategies of Drones-II)
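The reference model's role is to pre-shape a raw step command into a smooth, dynamically feasible target before the DDPG policy sees it. A minimal discrete-time second-order reference model conveys the idea (the gains `wn`, `zeta` and the Euler integration are illustrative choices, not the paper's design):

```python
def reference_model(target, steps, dt=0.01, wn=6.0, zeta=1.0):
    """Critically damped second-order filter: x'' = wn^2*(target - x) - 2*zeta*wn*x'.
    Semi-implicit Euler integration; returns the smoothed command history."""
    x, v, history = 0.0, 0.0, []
    for _ in range(steps):
        a = wn * wn * (target - x) - 2.0 * zeta * wn * v
        v += a * dt
        x += v * dt
        history.append(x)
    return history

# A unit step becomes a smooth, ramp-like command for the policy to track.
traj = reference_model(target=1.0, steps=2000)
```

Feeding the policy this filtered target, rather than the raw step, is what removes the abrupt command changes that provoke oscillation.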
Figure 1. Coordinate system of the quadrotor drone.
Figure 2. Agent–environment cyclic process.
Figure 3. Actor–critic structure.
Figure 4. The network structure of the RM–DDPG algorithm.
Figure 5. (a) The angular rate model of the quadrotor identified through sectional data; (b) the fitting result on the whole flight data. For experimental safety, the data do not start from 0 s.
Figure 6. (a) Step response of the angular velocity model and (b) evolution of the attitude angle. The SI units on the x- and y-axes are s and rad/s, respectively; as this is the step response of the angular velocity, the angle in (b) has the shape of a ramp.
Figure 7. Step response of the designed reference model. The x-axis represents time (s); the y-axis represents the target angle, angular velocity, and angular acceleration in rad, rad/s, and rad/s².
Figure 8. Average accumulated reward in each training step.
Figure 9. Transition of system states (angle, angular rate) and control input during the step response (a) and sine wave response (b). The maximum control input was 3.0; classical DDPG tended to demand it, which is unacceptable on a real quadrotor, whereas RM–DDPG was softer and significantly reduced steady-state errors.
Figure 10. Performance of the controller on drones with different diagonal lengths; the controller applied the control policy corresponding to the size of the drone to maintain consistent attitude control. (a) Attitude angle during the step response; (b) control input during the step response.
Figure 11. The RM–DDPG method drove the quadrotor back to stable status from different initial angles.
Figure 12. Experimental results on a real quadrotor. (a) Flight experiment of classical DDPG; (b) flight experiment of RM–DDPG on the same drone.
18 pages, 4291 KiB  
Article
Hostile UAV Detection and Neutralization Using a UAV System
by Saulius Rudys, Andrius Laučys, Paulius Ragulis, Rimvydas Aleksiejūnas, Karolis Stankevičius, Martynas Kinka, Matas Razgūnas, Domantas Bručas, Dainius Udris and Raimondas Pomarnacki
Drones 2022, 6(9), 250; https://doi.org/10.3390/drones6090250 - 12 Sep 2022
Cited by 14 | Viewed by 5570
Abstract
The technologies of Unmanned Aerial Vehicles (UAVs) have seen extremely rapid development in recent years. UAV technologies are being developed much faster than the means of their legislation. There have been many means of UAV detection and neutralization proposed in recent research; nonetheless, [...] Read more.
The technologies of Unmanned Aerial Vehicles (UAVs) have seen extremely rapid development in recent years, much faster than the legislation governing them. Many means of UAV detection and neutralization have been proposed in recent research; nonetheless, all of them have serious disadvantages. The essential problems in detecting UAVs are their small size, weak radio wave reflection, weak radio signal, and low acoustic emission. The main problem of conventional UAV countermeasures is their short detection and neutralization range. The authors propose the concept of an airborne counter-UAV platform (consisting of several vehicles) with radar. We use a low-cost marine radar with a high-resolution 2 m wide antenna embedded into the wing. Radar scanning is implemented by changing the heading of the aircraft. For the countermeasures, the authors suggest using a small rotorcraft UAV carried by a bigger fixed-wing one. A mathematical model was created that allows calculating the coordinates of a detected drone while the radar-equipped UAV scans the environment in motion. Furthermore, results are presented for the integrated radar detecting a drone and for successful neutralization experiments on different UAVs. Full article
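The core of the coordinate model is converting the scanning aircraft's pose plus the radar's range/bearing return into ground coordinates for the detected drone. A flattened 2D sketch of that geometry (hypothetical conventions and values; the paper's model additionally accounts for altitude, roll, and the antenna beam shape):

```python
import math

def target_position(uav_x, uav_y, heading_deg, rng_m, bearing_deg):
    """Project a radar return (range in m, bearing relative to the aircraft's
    heading) into ground-frame coordinates; heading and bearing are measured
    clockwise from north, with north = +y and east = +x."""
    azimuth = math.radians(heading_deg + bearing_deg)
    return (uav_x + rng_m * math.sin(azimuth), uav_y + rng_m * math.cos(azimuth))

# UAV at the origin heading east (90°) sees a target 1000 m dead ahead.
tx, ty = target_position(0.0, 0.0, 90.0, 1000.0, 0.0)
```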
Figure 1. Possible manoeuvring routes of a fixed-wing UAV with an embedded radar: (a) periodical scanning by circling in-route; (b) half-circle rotation with 180° scanning in-route; (c) 90° scanning in the flight direction with quick turns of the UAV.
Figure 2. (a) Explanation of 3D radar using two inclined fan-beams; (b) arrangement of the radar antenna in the aircraft for scanning by changing the vehicle’s trajectory and roll (left), and arrangement of two inclined radar antennas in the aircraft for electronic scanning (right); (c) explanation of 3D radar scanning using one fan-beam antenna in the aircraft.
Figure 3. 3D radar and target plane.
Figure 4. Developed fixed-wing UAV.
Figure 5. Embedding of a radar antenna into the fixed-wing UAV.
Figure 6. Band of reflective vinyl film on the underside of the wing.
Figure 7. Fixed-wing UAV during a mission.
Figure 8. UAV horn antenna array. (a) A 3D structure model and an antenna model in the Ansys HFSS software; (b) a simulated radiation pattern (total directivity) in the vertical plane. Solid line: reference marine radar antenna; dashed line: UAV antenna with reduced height; dotted line: UAV antenna with an additional plate.
Figure 9. Composed data images from the primary and secondary UAV radars.
Figure 10. Clutter estimation during a flight on a circular trajectory.
Figure 11. Drone detected using the fixed-wing UAV radar.
Figure 12. Launch of the “hunter” drone: 1, fixed-wing carrier UAV with the drone on it; 2, drone takes off from the carrier UAV; 3, drone performs its mission on its own; 4, drone lands back on the carrier UAV; 5, carrier UAV continues the mission with the drone on it.
Figure 13. Launch of the “hunter” drone.
Figure 14. Principle scheme of neutralization: 1, “hunter” drone carrying a tether (rope) to be tangled in the propeller of the hunted drone; 2, the tether gets tangled in the propeller; 3, the tether jams the spinning propeller; 4, the hunted drone lands with the parachute attached to the tether.
Figure 15. Neutralization of the rotorcraft UAV.
Figure 16. Neutralization of the fixed-wing UAV.
29 pages, 2156 KiB  
Article
Impact of the Integration of First-Mile and Last-Mile Drone-Based Operations from Trucks on Energy Efficiency and the Environment
by Tamás Bányai
Drones 2022, 6(9), 249; https://doi.org/10.3390/drones6090249 - 11 Sep 2022
Cited by 11 | Viewed by 4155
Abstract
Supply chain solutions are based on first-mile and last-mile deliveries; their efficiency significantly influences the total cost of operation. Drone technologies make it possible to improve first-mile and last-mile operations, but the design and optimization of these solutions offers new challenges. Within the [...] Read more.
Supply chain solutions are based on first-mile and last-mile deliveries; their efficiency significantly influences the total cost of operation. Drone technologies make it possible to improve first-mile and last-mile operations, but the design and optimization of these solutions offer new challenges. Within the frame of this article, the author focuses on the impact of integrated first-mile/last-mile drone-based delivery services from trucks, analyzing their effect on energy efficiency, environmental impact, and sustainability. The author describes a novel model of drone-based integrated first-mile/last-mile services which makes it possible to analyze the impact of different typical solutions on sustainability. As the numerical examples and computational results show, integrated first-mile/last-mile drone-based service from trucks could lead to a significant reduction in energy consumption and in virtual greenhouse gas (GHG) emissions, yielding a more sustainable logistics system. The numerical analysis of the scenarios shows that the increased application of drones and the integration of first-mile and last-mile delivery operations could decrease energy consumption by about 87%. Depending on the source used to generate the electricity, this lower energy consumption translates into a correspondingly large reduction in greenhouse gas emissions. Full article
(This article belongs to the Special Issue The Applications of Drones in Logistics)
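Because the emissions are "virtual" (they occur at the power plant rather than at the vehicle), the GHG saving scales with both the energy reduction and the grid's emission factor. A toy calculation with illustrative numbers (the 0.4 kg CO2/kWh factor is an assumption, not the paper's data):

```python
def ghg_savings_kg(baseline_kwh, reduction_ratio, grid_factor_kg_per_kwh):
    """Virtual GHG emission avoided when energy use drops by reduction_ratio."""
    return baseline_kwh * reduction_ratio * grid_factor_kg_per_kwh

# 100 kWh of delivery energy, the paper's ~87% reduction, an assumed 0.4 kg CO2/kWh grid.
saved = ghg_savings_kg(100.0, 0.87, 0.4)
```

With a cleaner grid (smaller factor) the same 87% energy cut yields proportionally smaller absolute GHG savings.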
Figure 1. The scientific framework of the research based on the analyzed articles [9,12,14,15,17,19,20,21,22,23,29,30,31,32,33,34,35,36,38,43,44,45,46,47,48].
Figure 2. The scheduled and performed delivery route of the e-truck.
Figure 3. The scheduled and performed delivery route of the e-truck and the drone in Scenario 2, where all suitable first-mile operations were performed by the drone.
Figure 4. The scheduled and performed delivery route of the e-truck and the drone in the third scenario, where all suitable pick-up and delivery operations were performed by the drone.
Figure 5. The scheduled and performed delivery route of the e-truck and the drone in the fourth scenario, where all suitable pick-up and delivery operations were performed by the drone with milk runs.
Figure 6. The energy consumption of the different delivery models compared to that of the conventional diesel truck-based delivery model.
21 pages, 8919 KiB  
Article
Design of Non-Conventional Flight Control Systems for Bioinspired Micro Air Vehicles
by Estela Barroso-Barderas, Ángel Antonio Rodríguez-Sevillano, Rafael Bardera-Mora, Javier Crespo-Moreno and Juan Carlos Matías-García
Drones 2022, 6(9), 248; https://doi.org/10.3390/drones6090248 - 9 Sep 2022
Cited by 9 | Viewed by 2581
Abstract
This research focuses on the development of two bioinspired micro air vehicle (MAV) prototypes, based on morphing wings and wing grid wingtip devices. The morphing wings MAV tries to adapt the aerodynamics of the vehicle to each phase of flight by modifying the [...] Read more.
This research focuses on the development of two bioinspired micro air vehicle (MAV) prototypes, based on morphing wings and wing grid wingtip devices. The morphing wings MAV tries to adapt the aerodynamics of the vehicle to each phase of flight by modifying the vehicle geometry, while the wing grid MAV aims to minimize the aerodynamic and weight penalty of these vehicles. This work focuses on the design methodology of the flight control system of these MAVs. A preliminary theoretical conceptual design was used to verify the requirements, wind tunnel tests were performed to determine aerodynamic characteristics, and suitable materials were selected. The hardware and software configuration designed for the control system, which fulfills the objective of adaptive and optimal control in the wingtip-based prototype of the wing grid, is described. Finally, the results of the flight control on the prototype MAVs are analyzed. Full article
(This article belongs to the Section Drone Design and Development)
Figure 1. Typical flight profile.
Figure 2. Conventional nomenclature in the motion of air vehicles.
Figure 3. Design concept of a bioinspired morphing MAV (MM), based on smart materials.
Figure 4. MAV model (MM) with morphing wings: (a) 3D view; (b) main dimensions.
Figure 5. MAV prototype with the wing grid concept (WGM).
Figure 6. MAV prototype (WGM) showing wing grid details (in red). (A) 3D view with the wing grid extended; (B) main dimensions of the MAV and wing grid properties; (C) wing grid with different grid spans (base, b/3, 2b/3, and b).
Figure 7. C_L as a function of α (AoA) in the three configurations: retracted wing grid, extended wing grid, and rectangular wing.
Figure 8. Polar curve of the three MAV configurations: retracted wing grid, extended wing grid, and rectangular wing.
Figure 9. Lift coefficient C_L (left) and drag coefficient C_D (right) vs. angle of attack for the base configuration (without wing deformation) and the cambered configuration (maximum wing deformation) of the MAV.
Figure 10. A simplified diagram of the feedback control.
Figure 11. A simplified version of the adaptive control diagram.
Figure 12. Final objective of the pitch angle autopilot flight control system. More details in [14].
Figure 13. Geometrical properties of a conventional airfoil.
Figure 14. Variation of wing curvature in the MM prototype with voltage. (a) Relationship between the curvature of the wing and the main voltage control of the piezoelectric device; (b) deformation of the prototype’s wing in the testbed.
Figure 15. Control surfaces in the V-tail (ruddervators).
Figure 16. Exploded view of the wing grid system. (a) Wing grid; (b) detail of the hole in the root where the bolt from the deployment mechanism fits.
Figure 17. Main assembly of the RC and the Arduino board to test the code.
Figure 18. Schematic electrical diagram of the wiring designed to control the testbed of the wing grid prototype.
Figure 19. Descriptive diagram of the operation of the control software for the WGM MAV.
Figure 20. Two-channel RC equipment. (a) Control of symmetric retraction-extension of the wing grid prototypes; (b) control of nonsymmetric deployment of the wing grid prototypes.
Figure 21. Testbed of the wing grid prototype and the control system.
Figure 22. Main schematic of the morphing prototype. (a) Electrical wiring; (b) smart material; (c) detail of the smart material installed on the lower part of the wing (inside view).
16 pages, 3806 KiB  
Article
Dwarf Mongoose Optimization-Based Secure Clustering with Routing Technique in Internet of Drones
by Fatma S. Alrayes, Jaber S. Alzahrani, Khalid A. Alissa, Abdullah Alharbi, Hussain Alshahrani, Mohamed Ahmed Elfaki, Ayman Yafoz, Abdullah Mohamed and Anwer Mustafa Hilal
Drones 2022, 6(9), 247; https://doi.org/10.3390/drones6090247 - 9 Sep 2022
Cited by 12 | Viewed by 2512
Abstract
Over the last few years, unmanned aerial vehicles (UAV), also called drones, have attracted considerable interest in the academic field and exploration in the research field of wireless sensor networks (WSN). Furthermore, the application of drones aided operations related to the agriculture industry, [...] Read more.
Over the last few years, unmanned aerial vehicles (UAVs), also called drones, have attracted considerable interest in academia and in wireless sensor network (WSN) research. Furthermore, drones aid operations in the agriculture industry, the smart Internet of things (IoT), and military support. The usage of drone-based IoT, also called the Internet of drones (IoD), together with its techniques and design challenges, is now being investigated by researchers globally. Clustering and routing help maximize throughput, reduce routing overhead, and make the network more scalable. Since the cluster network used in a UAV adopts an open transmission method, it exposes a large surface to adversaries, posing considerable network security problems for drone technology. This study develops a new dwarf mongoose optimization-based secure clustering with a multi-hop routing scheme (DMOSC-MHRS) in the IoD environment. The goal of the DMOSC-MHRS technique involves the selection of cluster heads (CH) and optimal routes to a destination. In the presented DMOSC-MHRS technique, a new DMOSC technique is utilized to choose CHs and create clusters. A fitness function involving trust as a major factor is included to accomplish security. Besides, the DMOSC-MHRS technique designs a wild horse optimization-based multi-hop routing (WHOMHR) scheme for the optimal route selection process. To demonstrate the enhanced performance of the DMOSC-MHRS model, a comprehensive experimental assessment is made. An extensive comparison study demonstrates the better performance of the DMOSC-MHRS model over other approaches. Full article
(This article belongs to the Special Issue Recent Advances in UAVs for Wireless Networks)
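The cluster-head (CH) fitness function combines trust with conventional criteria such as residual energy and neighbour distance; the paper does not state its exact terms or weights, so the sketch below uses hypothetical ones purely to show the selection mechanics:

```python
def ch_fitness(trust, residual_energy, avg_neighbor_dist,
               w_trust=0.5, w_energy=0.3, w_dist=0.2):
    """Higher is better: trusted, energy-rich drones with nearby neighbours
    make good cluster heads. All inputs are assumed normalized to [0, 1]."""
    return (w_trust * trust + w_energy * residual_energy
            + w_dist * (1.0 - avg_neighbor_dist))

# Three hypothetical drones: (trust, residual energy, mean neighbour distance).
drones = {"d1": (0.90, 0.80, 0.20), "d2": (0.40, 0.90, 0.10), "d3": (0.95, 0.30, 0.50)}
cluster_head = max(drones, key=lambda d: ch_fitness(*drones[d]))
```

Weighting trust heavily means a compromised but energy-rich node (like "d2" above) loses the CH election, which is the security point of the scheme.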
Figure 1. The overall process of the DMOSC-MHRS algorithm.
Figure 2. CBT analysis of the DMOSC-MHRS approach under distinct drones.
Figure 3. ECM analysis of the DMOSC-MHRS approach under Scenario-1.
Figure 4. ECM analysis of the DMOSC-MHRS approach under Scenario-2.
Figure 5. ECM analysis of the DMOSC-MHRS approach under Scenario-3.
Figure 6. CLT analysis of the DMOSC-MHRS approach under Scenario-1.
Figure 7. CLT analysis of the DMOSC-MHRS approach under Scenario-2.
Figure 8. CLT analysis of the DMOSC-MHRS approach under Scenario-3.
Figure 9. Reliability analysis of the DMOSC-MHRS approach under distinct drones.
19 pages, 4121 KiB  
Article
Constrained Predictive Tracking Control for Unmanned Hexapod Robot with Tripod Gait
by Yong Gao, Dongliang Wang, Wu Wei, Qiuda Yu, Xiongding Liu and Yuhai Wei
Drones 2022, 6(9), 246; https://doi.org/10.3390/drones6090246 - 9 Sep 2022
Cited by 8 | Viewed by 2658
Abstract
Since it is difficult to accurately track reference trajectories under the condition of stride constraints for an unmanned hexapod robot moving with rhythmic gait, an omnidirectional tracking strategy based on model predictive control and real-time replanning is proposed in this paper. Firstly, according [...] Read more.
Since it is difficult to accurately track reference trajectories under the condition of stride constraints for an unmanned hexapod robot moving with rhythmic gait, an omnidirectional tracking strategy based on model predictive control and real-time replanning is proposed in this paper. Firstly, according to the characteristic that the stride dominates the rhythmic motion of an unmanned multi-legged robot, a body-level omnidirectional tracking model is established. Secondly, a quantification method of limb’s stretch and yaw constraints described by motion stride relying on a tripod gait is proposed, and then, a body-level accurate tracking controller based on constrained predictive control is designed. Then, in view of the low tracking efficiency of the robot under the guidance of common reference stride, a solution strategy of variable stride period and a real-time replanning scheme of reference stride are proposed based on the limb constraints and the integral mean, which effectively avoid the tracking deviation caused by the guidance of constant reference strides. Finally, the effectiveness and practicability of the proposed control strategy are demonstrated through the comparative analysis and simulation test of a hexapod robot WelCH with omnidirectional movement ability to continuously track the directed curve and the undirected polyline trajectory. Full article
(This article belongs to the Special Issue Unmanned Surface Vehicle)
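The essence of the controller is that, each gait period, the body displacement (stride) is chosen to chase the reference while staying inside the feasible stride region derived from the limbs' stretch and yaw limits. A brute-force 1D caricature of that constrained choice (the paper solves a genuine multi-step predictive problem over the full pose; all numbers here are hypothetical):

```python
def best_stride(position, reference, stride_limit=0.08, candidates=41):
    """Pick the feasible stride (|s| <= stride_limit) that brings the body
    closest to the reference point; a one-step, 1D stand-in for the MPC."""
    grid = [-stride_limit + 2.0 * stride_limit * i / (candidates - 1)
            for i in range(candidates)]
    return min(grid, key=lambda s: abs(position + s - reference))

# A nearby reference is matched (to grid resolution); a distant one saturates.
pos = 0.0
for ref in (0.052, 0.2):
    pos += best_stride(pos, ref)
```

The saturating case is exactly why the paper replans the reference stride: with a constant, over-ambitious reference the constrained stride lags the trajectory.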
Figure 1. Schematic diagram of the body-level kinematics modeling of a multi-legged robot.
Figure 2. Schematic diagram of three key configurations of a hexapod robot moving with tripod gait within one stride period: the starting configuration (green dashed line), the semi-periodic configuration (black solid line), and the ending configuration (red dotted line).
Figure 3. Feasible region of motion stride based on the stretch and yaw constraints of limbs.
Figure 4. Schematic diagram of the segmentation of a reference trajectory and the replanning of the reference stride.
Figure 5. Structure diagram of the constrained predictive tracking control for a hexapod robot.
Figure 6. Prototype of the wall-climbing hexapod robot WelCH.
Figure 7. Schematic diagrams of the kinematics of a single limb.
Figure 8. A composite reference trajectory composed of a directed curve and an undirected polyline.
Figure 9. Comparison of body-level trajectory tracking with/without limb constraints.
Figure 10. Curves of joint angles with/without limb constraints.
Figure 11. Comparison of body-level trajectory tracking with/without replanning.
Figure 12. Comparison of trajectory tracking errors with/without replanning.
Figure 13. Experimental results of the hexapod robot WelCH tracking a composite reference trajectory with tripod gait.
Figure 14. Real-time feedback and replanning results of the trajectory tracking experiment.
21 pages, 5440 KiB  
Article
A Data Normalization Technique for Detecting Cyber Attacks on UAVs
by Elena Basan, Alexandr Basan, Alexey Nekrasov, Colin Fidge, Evgeny Abramov and Anatoly Basyuk
Drones 2022, 6(9), 245; https://doi.org/10.3390/drones6090245 - 6 Sep 2022
Cited by 14 | Viewed by 3175
Abstract
The data analysis subsystem of an Unmanned Aerial Vehicle (UAV) includes two main modules: a data acquisition module for data processing and a normalization module. One of the main features of an adaptive UAV protection system is the analysis of its cyber-physical parameters. [...] Read more.
The data analysis subsystem of an Unmanned Aerial Vehicle (UAV) includes two main modules: a data acquisition module for data processing and a normalization module. One of the main features of an adaptive UAV protection system is the analysis of its cyber-physical parameters. An attack on a general-purpose computer system mainly affects the integrity, confidentiality and availability of important information. By contrast, an attack on a Cyber-Physical System (CPS), such as a UAV, affects the functionality of the system and may disrupt its operation, ultimately preventing it from fulfilling its tasks correctly. Cyber-physical parameters are the internal parameters of a system node, including the states of its computing resources, data storage, actuators and sensor system. Here, we develop a data normalization technique that additionally allows us to identify the signs of a cyber-attack. In addition, we define sets of parameters that can highlight an attack and define a new database format to support intrusion detection for UAVs. To achieve these goals, we performed an experimental study of the impact of attacks on UAV parameters and developed a software module for collecting data from UAVs, as well as a technique for normalizing and presenting data for detecting attacks on UAVs. Data analysis and the evaluation of the quality of a parameter (whether the parameter changes normally, or abrupt anomalous changes are observed) are facilitated by converting different types of data to the same format. The resulting formalized CPS model allows us to identify the nature of an attack and its potential impact on UAV subsystems. In the future, such a model could be the basis of a CPS digital twin in terms of security. The presented normalization technique supports processing raw data, as well as classifying data sets for their use in machine learning (ML) analyses in the future. The data normalization technique can also help to immediately determine the presence and signs of an attack, which allows classifying raw data automatically by dividing it into different categories. Such a technique could form the basis of an intrusion detection system for CPSs. Thus, the obtained results can be used to classify attacks, including attack detection systems based on machine learning methods, and the data normalization technique can be used as an independent method for detecting attacks. Full article
(This article belongs to the Special Issue Conceptual Design, Modeling, and Control Strategies of Drones-II)
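For counted parameters such as the number of fixed satellites, the figures suggest comparing a window of observations against a baseline Poisson model and flagging an attack when a chi-squared-style statistic crosses a threshold. A simplified standard-library sketch of that idea (window contents, baseline mean, and threshold are illustrative, not the paper's values):

```python
def chi_squared_stat(window, expected_mean):
    """Pearson-style statistic of observed counts against a Poisson mean."""
    return sum((x - expected_mean) ** 2 / expected_mean for x in window)

def is_anomalous(window, expected_mean, threshold):
    return chi_squared_stat(window, expected_mean) > threshold

normal = [12, 11, 13, 12, 12, 11]   # satellite counts near the baseline mean of 12
spoofed = [12, 11, 4, 3, 2, 2]      # abrupt drop typical of a GPS spoofing attack
flags = (is_anomalous(normal, 12.0, 15.0), is_anomalous(spoofed, 12.0, 15.0))
```

Normalizing every parameter into such a common statistic is what lets one detector, and one database format, cover heterogeneous sensor types.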
Figure 1. An ontological model for representing knowledge about the impact of attacks on the cyber-physical parameters of a UAV.
Figure 2. The result of the influence of the GPS spoofing attack on the NHS: (a) raw data of the number of satellites fixed; (b) calculation result of CDF Poisson; (c) calculation result of chi-squared with indications of threshold values.
Figure 3. The result of the influence of the GPS spoofing attack on the flight coordinates: (a) without attack; (b) under attack.
Figure 4. The result of the influence of the GPS spoofing attack on the latitude: (a) calculation result of chi-squared and (b) calculation result of CDF Poisson; and on the longitude: (c) calculation result of chi-squared and (d) calculation result of CDF Poisson.
Figure 5. The result of the influence of the GPS spoofing attack on the NGS: (a) calculation result of chi-squared; (b) calculation result of CDF Poisson.
Figure 6. The result of the influence of the GPS spoofing attack on the altitude: (a) calculation result of chi-squared; (b) calculation result of CDF Poisson; (c) raw data.
Figure 7. The result of the influence of the GPS spoofing attack on the radio signal level: (a) raw data; (b) calculation result of chi-squared; (c) calculation result of CDF Poisson.
Figure 8. The result of the influence of the GPS spoofing attack on the speed: (a) calculation result of chi-squared; (b) calculation result of CDF Poisson; (c) raw data.
20 pages, 16043 KiB  
Article
Medium-Scale UAVs: A Practical Control System Considering Aerodynamics Analysis
by Mohammad Sadeq Ale Isaac, Marco Andrés Luna, Ahmed Refaat Ragab, Mohammad Mehdi Ale Eshagh Khoeini, Rupal Kalra, Pascual Campoy, Pablo Flores Peña and Martin Molina
Drones 2022, 6(9), 244; https://doi.org/10.3390/drones6090244 - 6 Sep 2022
Cited by 10 | Viewed by 3501
Abstract
Unmanned aerial vehicles (UAVs) have drawn significant attention from researchers over the last decade due to their wide range of possible uses. Carrying massive payloads with light UAVs has broadened the aeronautics context and is feasible with powerful engines; however, it raises several practical control dilemmas. This paper introduces a medium-scale hexacopter, called the Fan Hopper, powered by Electric Ducted Fan (EDF) engines, to investigate the optimum control possibilities for a fully autonomous mission carrying a heavy payload, even of liquid materials, considering higher-order calculations. After proper aerodynamic simulations, the model is designed, developed, and tested in the Gazebo robotics simulator to ensure proper functionality. Correspondingly, the ArduPilot open-source autopilot is employed and enhanced by a model reference adaptive controller (MRAC) for the attitude loop to stabilize the system in case of an EDF failure and to adapt the system coefficients when the fluid payload is released. The obtained results reveal less than a 5% error with respect to the desired values. This research shows that well-tuned EDFs perform remarkably well with large payloads; meanwhile, thermal engines could be substituted to achieve much longer flight endurance. Full article
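The attitude-loop MRAC mentioned in the abstract adapts controller gains so that the closed loop follows a reference model. A minimal sketch of one MIT-rule adaptation step; the adaptation rate `gamma` and the use of the model output as the sensitivity term are simplifying assumptions, not the paper's controller:

```python
def mrac_step(theta, y_plant, y_model, gamma=0.01):
    """One MIT-rule update of an adaptive gain `theta`.
    The tracking error e = y_plant - y_model drives the gain in the
    direction that shrinks e, scaled by the adaptation rate `gamma`."""
    error = y_plant - y_model
    # MIT rule: theta_dot = -gamma * e * (de/dtheta), with de/dtheta
    # approximated here by the reference-model output.
    theta_next = theta - gamma * error * y_model
    return theta_next, error
```

Run at each control tick, the gain drifts until the plant output (e.g., roll rate) tracks the reference model, which is what lets the controller absorb an EDF failure or a sudden payload release.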
Show Figures

Figure 1
<p>Demonstration of different coordinate systems, <span class="html-italic">Body</span> frame, <span class="html-italic">Fluid</span> frame, and the <span class="html-italic">Inertial</span> frame.</p>
Figure 2
<p>A brief schematic of Fan Hopper’s designed model; (<b>a</b>) the configuration with components installed as a whole; (<b>b</b>) the incident angle of the ducted fan.</p>
Figure 3
<p>Analysis of a single propeller; (<b>a</b>) asymmetry streamlines around the rotor; (<b>b</b>) the stream rotation.</p>
Figure 4
<p>Analysis of a single propeller; (<b>a</b>) the rotor and shroud; (<b>b</b>) unrealistic droplet distribution due to no real injector.</p>
Figure 5
<p>Analysis of 6 propeller engines; (<b>a</b>) (upper-left part) rotor thrust; (right part) mass flow of the stream passing through the engines; (<b>b</b>) streamlines around the model; (<b>c</b>) absence of the multiphase model gave better convergence.</p>
Figure 6
<p>Injectors deployed to make the droplet distribution realistic; the color distribution relates to the <span class="html-italic">H<sub>2</sub>O</span> volume fraction.</p>
Figure 7
<p>Diagram of implemented adaptive controller for attitude loop.</p>
Figure 8
<p>Simulation results when an EDF fails during 5 min; (<b>a</b>) the horizontal (<span class="html-italic">x</span>) position variation versus time; (<b>b</b>) the roll (<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>) angle variation versus time; (<b>c</b>) the roll rate (<math display="inline"><semantics> <mover accent="true"> <mi>ϕ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>d</b>) the horizontal (<span class="html-italic">y</span>) position variation versus time; (<b>e</b>) the pitch (<math display="inline"><semantics> <mi>θ</mi> </semantics></math>) angle variation versus time; (<b>f</b>) the pitch rate (<math display="inline"><semantics> <mover accent="true"> <mi>θ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>g</b>) the vertical (<span class="html-italic">z</span>) position variation versus time; (<b>h</b>) the yaw (<math display="inline"><semantics> <mi>ψ</mi> </semantics></math>) angle variation versus time; (<b>i</b>) the yaw rate (<math display="inline"><semantics> <mover accent="true"> <mi>ψ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time.</p>
Figure 9
<p>The CAD models of the Fan Hopper; (<b>a</b>) side elevation of the CAD model; (<b>b</b>) top view of the CAD model; (<b>c</b>) engine arm connectors; (<b>d</b>) a 3D view of the CAD model; (<b>e</b>) incident angle adjusters for duct engines; (<b>f</b>) the conjunction main connector; (<b>g</b>) body to duct arm connectors.</p>
Figure 10
<p>Diagram of an EDF data versus incident angles (0°–22°) during 30 s; (<b>a</b>) diagram of EDF thrust and yawing torque, (<b>left bar</b>) thrust values, (<b>right bar</b>) torque values; (<b>b</b>) the expressed vibration impact of distinct incident angles on the duct.</p>
Figure 11
<p>Assembled model of the Fan Hopper; (<b>a</b>) the EDF model connected to the joints, modifiable by the rubber band; (<b>b</b>) the base link, containing AP, connectors, antenna, fan, joints, wires, etc.; (<b>c</b>) the aluminum joint for the landing gear and motor arm; (<b>d</b>) the ESC cooler; (<b>e</b>) power distribution board; (<b>f</b>) the EDF system; (<b>g</b>) the duct holder; (<b>h</b>) the configurable duct joint to the arm.</p>
Figure 12
<p>The integrated fluid tank schematic of the Fan Hopper; electrical components, valves, and tubes could be seen precisely on the left and right sides.</p>
Figure 13
<p>Ground tests made to assure the stability of Fan Hopper, and examine the lifting power; (<b>a</b>) lifting two payloads of 5 kg; (<b>b</b>) lifting four payloads of 5 kg.</p>
Figure 14
<p>Balance test results near the ground during 15 min; (<b>a</b>) the roll (<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>) angle variation versus time; (<b>b</b>) the pitch (<math display="inline"><semantics> <mi>θ</mi> </semantics></math>) angle variation versus time; (<b>c</b>) the yaw (<math display="inline"><semantics> <mi>ψ</mi> </semantics></math>) angle variation versus time; (<b>d</b>) the roll rate (<math display="inline"><semantics> <mover accent="true"> <mi>ϕ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>e</b>) the pitch rate (<math display="inline"><semantics> <mover accent="true"> <mi>θ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>f</b>) the yaw rate (<math display="inline"><semantics> <mover accent="true"> <mi>ψ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time.</p>
Figure 15
<p>Practical results with intermediate disturbance, during 18 min; (<b>a</b>) the roll (<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>) angle variation versus time; (<b>b</b>) the pitch (<math display="inline"><semantics> <mi>θ</mi> </semantics></math>) angle variation versus time; (<b>c</b>) the yaw (<math display="inline"><semantics> <mi>ψ</mi> </semantics></math>) angle variation versus time; (<b>d</b>) the roll rate (<math display="inline"><semantics> <mover accent="true"> <mi>ϕ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>e</b>) the pitch rate (<math display="inline"><semantics> <mover accent="true"> <mi>θ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>f</b>) the yaw rate (<math display="inline"><semantics> <mover accent="true"> <mi>ψ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time.</p>
Figure 16
<p>Practical results with an EDF failure, during 22 min; (<b>a</b>) the roll (<math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>) angle variation versus time; (<b>b</b>) the pitch (<math display="inline"><semantics> <mi>θ</mi> </semantics></math>) angle variation versus time; (<b>c</b>) the yaw (<math display="inline"><semantics> <mi>ψ</mi> </semantics></math>) angle variation versus time; (<b>d</b>) the roll rate (<math display="inline"><semantics> <mover accent="true"> <mi>ϕ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>e</b>) the pitch rate (<math display="inline"><semantics> <mover accent="true"> <mi>θ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time; (<b>f</b>) the yaw rate (<math display="inline"><semantics> <mover accent="true"> <mi>ψ</mi> <mo>˙</mo> </mover> </semantics></math>) variation versus time.</p>
16 pages, 5157 KiB  
Technical Note
Pre-Archaeological Investigation by Integrating Unmanned Aerial Vehicle Aeromagnetic Surveys and Soil Analyses
by Wei Cao, Hao Qing, Xing Xu, Chang Liu, Silin Chen, Yi Zhong, Jiabo Liu, Yuanjie Li, Xiaodong Jiang, Dalun Gao, Zhaoxia Jiang and Qingsong Liu
Drones 2022, 6(9), 243; https://doi.org/10.3390/drones6090243 - 6 Sep 2022
Cited by 4 | Viewed by 2297
Abstract
Magnetic surveys have been widely used in archaeological field investigations. However, conventional survey methods are often restricted by complicated field conditions and ambiguities in data interpretation. In this study, a novel magnetic survey system was designed for pre-archaeological investigation (preliminary survey prior to the archaeological excavation) based on a modified quadrotor unmanned aerial vehicle (UAV) and was successfully applied to an archaeological area with a complex landform in Huizhou, China. Results show that the target anomaly identified by UAV aeromagnetic survey corresponds well to the location of a potential archaeological site. Subsequent soil analyses further confirm the archaeological value of UAV aeromagnetic results and provide strong constraints on the interpretation of target anomalies. This study demonstrates that the newly proposed UAV aeromagnetic system can adapt to the various field conditions with the advantages of flexibility and efficiency, which has great potential for future archaeological investigations. Full article
Show Figures

Figure 1
<p>Topography and location of the study site. (<b>a</b>) Topography of Huizhou: the blue block is the Huizhou area, and the red circle indicates the survey location; (<b>b</b>) location of the study area in the world: the blue star indicates its relative position.</p>
Figure 2
<p>Multi-rotor UAV aeromagnetic system GTK-RF-M300. (<b>a</b>) Automatic flight mode of aeromagnetic system; (<b>b</b>) integration module; (<b>c</b>) magnetometer.</p>
Figure 3
<p>Sampling procedures and sites. (Green star indicates sampling site in background area; blue star indicates sampling site in anomaly area.) (<b>a</b>) Field exploration in complex environment; (<b>b</b>) archaeological probe; (<b>c</b>) sampling sites in aerial photo; (<b>d</b>) sampling sites in RTP magnetic anomaly.</p>
Figure 4
<p>Aeromagnetic results and anomaly identification of the survey area (yellow lines indicate survey lines). (<b>a</b>) Distribution of RTP magnetic anomalies along the north–south survey line; (<b>b</b>) distribution of landforms, architecture, and north–south survey line in the aerial photo; (<b>c</b>) distribution of RTP magnetic anomalies along the west–east survey line; (<b>d</b>) distribution of landforms, architecture and west–east survey lines in the aerial photo.</p>
Figure 5
<p>Variation of magnetic concentrations, grain size, and mineralogy parameters with depth. (Green lines represent samples from the background area; blue lines represent samples from the anomaly area. The gray shadings are used to distinguish sections.) (<b>a</b>) χ; (<b>b</b>) χ<sub>ARM</sub>; (<b>c</b>) SIRM; (<b>d</b>) χ<sub>fd%</sub>; (<b>e</b>) χ<sub>ARM</sub>/SIRM; (<b>f</b>) χ<sub>ARM</sub>/χ; (<b>g</b>) S-ratio; (<b>h</b>) Gt—goethite index; (<b>i</b>) Hm—hematite index.</p>
Figure 6
<p>Rock magnetic results of the selected depths (red dashed lines represent selected depths). (<b>a</b>) χ; (<b>b</b>–<b>g</b>) χ-T curves; (<b>h</b>–<b>m</b>) IRM acquisition curve decomposition (blue, orange, green, and red curves indicate components 1, 2, 3, and the sum of components, respectively); (<b>n</b>–<b>s</b>) magnetic hysteresis loops; (<b>t</b>–<b>y</b>) FORC diagrams.</p>
Figure 7
<p>Index of sedimentary environment and provenance. (<b>a</b>) Median grain size; (<b>b</b>) DRS hematite band position; (<b>c</b>) CIA; (<b>d</b>–<b>f</b>) K/Al vs. Ti/Al; (<b>g</b>–<b>i</b>) LREE/HREE; (<b>j</b>–<b>l</b>) Zr-Th-Sc.</p>
Figure 8
<p>Archaeological progress and discoveries in G area. (<b>a</b>) The ruins of the ancient city wall are located on the northern edge of G area; (<b>b</b>) the special square landform of G area with sampling sites and locations of ruins; (<b>c</b>) an archaeological protection area has been established in this region; (<b>d</b>) difference in vegetation at the southern boundary of G area.</p>
23 pages, 18751 KiB  
Article
Structure-from-Motion 3D Reconstruction of the Historical Overpass Ponte della Cerra: A Comparison between MicMac® Open Source Software and Metashape®
by Matteo Cutugno, Umberto Robustelli and Giovanni Pugliano
Drones 2022, 6(9), 242; https://doi.org/10.3390/drones6090242 - 6 Sep 2022
Cited by 18 | Viewed by 4556
Abstract
In recent years, the performance of free-and-open-source software (FOSS) for image processing has significantly increased. This trend, as well as technological advancements in the unmanned aerial vehicle (UAV) industry, has opened blue skies for both researchers and surveyors. In this study, we aimed to assess the quality of the sparse point cloud obtained with a consumer UAV and a FOSS package. To achieve this goal, we also processed the same image dataset with a commercial software package, using its results as a term of comparison. Various analyses were conducted, such as the image residuals analysis, the statistical analysis of GCP and CP errors, the relative accuracy assessment, and the Cloud-to-Cloud distance comparison. A support survey was conducted to measure 16 markers identified on the object. In particular, 12 of these were used as ground control points to scale the 3D model, while the remaining 4 were used as check points to assess the quality of the scaling procedure by examining the residuals. Results indicate that the sparse clouds obtained are comparable. MicMac® has mean image residuals equal to 0.770 pixels, while for Metashape® they equal 0.735 pixels. In addition, the 3D errors on control points are similar: the mean 3D error for MicMac® is 0.037 m with a standard deviation of 0.017 m, whereas for Metashape® it is 0.031 m with a standard deviation of 0.015 m. The present work represents a preliminary study: a comparison between software packages is hard to achieve, given the secrecy of the commercial software and the theoretical differences between the approaches. This case study analyzes an object with extremely complex geometry placed in an urban canyon where GNSS support cannot be exploited. In addition, the scenario changes continuously due to vehicular traffic. Full article
(This article belongs to the Special Issue Unconventional Drone-Based Surveying)
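The Cloud-to-Cloud comparison and the 3D error statistics described in the abstract can be reproduced in miniature: a brute-force nearest-neighbour distance (a simplified stand-in for CloudCompare's M3C2 plugin, which additionally projects distances along local normals) and an RMSE over check-point residuals. Function names are illustrative:

```python
def cloud_to_cloud(src, ref):
    """Nearest-neighbour Euclidean distance from each point of `src`
    to the cloud `ref` (brute force; fine for small clouds)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(d2(p, q) for q in ref) ** 0.5 for p in src]

def rmse(errors):
    """Root-mean-square of per-marker 3D errors (e.g., CP residuals)."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5
```

For the point counts in real photogrammetric clouds one would use a KD-tree instead of the O(n·m) scan, but the statistic being compared is the same.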
Show Figures

Figure 1
<p>Comparison between the commands in the two software packages investigated. The top row indicates commands for the FOSS, the middle row refers to commercial software, and the bottom row reports the relative photogrammetric processing stages.</p>
Figure 2
<p>Test area: (<b>a</b>) location map; (<b>b</b>) localization in Southern Italy; (<b>c</b>) Ponte della Cerra overpass.</p>
Figure 3
<p>DJI Mavic 2 pro.</p>
Figure 4
<p>Open polygonal created for the topographic support survey projected on cartography.</p>
Figure 5
<p>View of the south facade with markers locations. GCPs are represented in red while CP is represented in green.</p>
Figure 6
<p>View of the north facade with markers locations. GCPs are represented in red while CP is represented in green.</p>
Figure 7
<p>View of the north side part of the extrados with markers locations. The left image refers to the left part of the extrados while the right image to the right part. GCPs are represented in red while CPs are represented in green.</p>
Figure 8
<p>Results of the external orientation process showing the positions and attitudes of each camera station, together with the TP clouds generated from the respective feature matching process: (<b>a</b>) Metashape<sup>®</sup>; (<b>b</b>) Apero<sup>®</sup>.</p>
Figure 9
<p>Image residuals probability density function estimates. On the <span class="html-italic">x</span>-axis are reported image residuals expressed in pixels, while on the <span class="html-italic">y</span>-axis is the probability density estimate. Blue and red bins represent MicMac<sup>®</sup> and Metashape<sup>®</sup>, respectively.</p>
Figure 10
<p>The 3D view of the RGB TP clouds obtained with MicMac<sup>®</sup> and Metashape<sup>®</sup>. Panel (<b>a</b>) refers to MicMac<sup>®</sup> TP cloud. Panel (<b>b</b>) refers to Metashape<sup>®</sup> TP cloud.</p>
Figure 11
<p>Relative model accuracy measurement between markers P7 and P9 displayed with model screenshots: (<b>a</b>) MicMac<sup>®</sup>; (<b>b</b>) Metashape<sup>®</sup>.</p>
Figure 12
<p>Cloud-to-Cloud distance of MicMac<sup>®</sup> and Metashape<sup>®</sup> TP clouds computed with M3C2 plugin in CloudCompare.</p>
Figure 13
<p>MicMac<sup>®</sup> and Metashape<sup>®</sup> Cloud-to-Cloud distance probability density estimate. On the <span class="html-italic">x</span>-axis are reported Cloud-to-Cloud distances expressed in pixels while on the <span class="html-italic">y</span>-axis the probability density estimation.</p>
Figure 14
<p>3D view of the RGB dense cloud obtained with MicMac<sup>®</sup>.</p>
Figure 15
<p>3D view of the RGB dense cloud obtained with Metashape<sup>®</sup>.</p>
Figure A1
<p>Flowchart of MicMac<sup>®</sup> processing pipeline. Please note that the pipeline is limited to the TP cloud generation and scaling.</p>
27 pages, 12918 KiB  
Article
A High-Precision and Low-Cost Broadband LEO 3-Satellite Alternate Switching Ranging/INS Integrated Navigation and Positioning Algorithm
by Lvyang Ye, Ning Gao, Yikang Yang and Xue Li
Drones 2022, 6(9), 241; https://doi.org/10.3390/drones6090241 - 6 Sep 2022
Cited by 8 | Viewed by 2890
Abstract
To solve the problem of location services in harsh environments, we propose an integrated navigation algorithm based on broadband low-earth-orbit (LEO) satellite communication-navigation integration with 3-satellite alternate switching ranging. First, we describe the algorithm principle and processing flow in detail; next, we analyze and model the ranging error sources and propose a combined multipath and non-line-of-sight (NLOS) error analysis model, which avoids explicitly modeling the number of multipath components and their individual paths; in addition, we propose an interference model based on multimodal Gaussian noise and analyze and model the LEO satellite orbital disturbances. The final simulation results show that the proposed algorithm not only effectively overcomes inertial navigation system (INS) divergence, but also achieves high positioning accuracy, especially when continuous ranging values are used. It maintains good anti-interference performance and robustness under path and noise interference, and the alternate switching ranging scheme offers further potential advantages. Compared to some existing representative advanced algorithms, it has higher accuracy, stronger stability, and lower cost. Furthermore, it can serve as a location reference solution for real-time location services and life search and rescue in harsh environments with incomplete satellite visibility, and as a technical reference design for the future integration of communication and navigation (ICN). Full article
(This article belongs to the Section Drone Communications)
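The geometric core of a 3-satellite ranging fix can be sketched as a Gauss-Newton solve of three range equations for the three position unknowns; the paper's algorithm additionally fuses these ranges with the INS through filtering, so this sketch, with its illustrative names and a hypothetical initial guess taken from the INS, only shows the range geometry:

```python
import numpy as np

def range_fix(sat_pos, ranges, x0, iters=10):
    """Gauss-Newton position fix from ranges to three LEO satellites.
    sat_pos: (3, 3) satellite positions; ranges: (3,) measured ranges;
    x0: (3,) initial guess, e.g. the INS-propagated position."""
    x = np.asarray(x0, dtype=float)
    sat_pos = np.asarray(sat_pos, dtype=float)
    for _ in range(iters):
        diff = x - sat_pos                    # vectors satellite -> user
        rho = np.linalg.norm(diff, axis=1)    # predicted ranges
        H = diff / rho[:, None]               # Jacobian: unit line-of-sight rows
        dz = np.asarray(ranges) - rho         # range residuals
        dx = np.linalg.solve(H, dz)           # 3 ranges, 3 unknowns
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With only three ranges the fix is generally two-valued (mirror solutions), which is one reason the INS prior matters: it selects the physically consistent root and bridges the epochs between alternate switches.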
Show Figures

Figure 1
<p>Schematic diagram of the INS+LEO 3-satellite. (<b>a</b>) Three satellites are in the same orbit. (<b>b</b>) Two satellites are in the same orbit. (<b>c</b>) Three satellites are in different orbits.</p>
Figure 2
<p>3-satellite alternate switching ranging scene integrated navigation algorithm.</p>
Figure 3
<p>Alternate switching ranging scenarios of 2 satellites under the LEO 3-satellite algorithm. (<b>a</b>) Same orbit. (<b>b</b>) Alternately switch satellites in the same orbit. (<b>c</b>) Alternately switch satellites in different orbits. (<b>d</b>) Different orbits.</p>
Figure 4
<p>Schematic diagram of the integrated navigation algorithm of INS+2-satellite alternate switching ranging under LEO 3-satellite algorithm.</p>
Figure 5
<p>Positioning error curve of INS+LEO 3-satellite alternate switching ranging integrated navigation based on the same orbit. (<b>a</b>) Position error. (<b>b</b>) Velocity error. (<b>c</b>) Attitude error. (<b>d</b>) 3D trajectory error.</p>
Figure 6
<p>Positioning error curve of INS+LEO 3-satellite alternate switching ranging integrated navigation based on different orbits. (<b>a</b>) Position error. (<b>b</b>) Velocity error. (<b>c</b>) Attitude error. (<b>d</b>) 3D trajectory error.</p>
Figure 7
<p>Positioning error curve of integrated navigation algorithm of INS+2-satellite alternate switching ranging under LEO 3-satellite on the same orbit. (<b>a</b>) Position, velocity and attitude errors. (<b>b</b>) 3D trajectory error.</p>
Figure 8
<p>Positioning error curve of integrated navigation algorithm of INS+2-satellite alternate switching ranging under LEO 3-satellite on a different orbit. (<b>a</b>) Position, velocity and attitude errors. (<b>b</b>) 3D trajectory error.</p>
Figure 9
<p>Comparison curve of the algorithm effect of different scenes with a switching time of 5 s. (<b>a</b>) Position, velocity and attitude errors. (<b>b</b>) Trajectory error.</p>
Figure 10
<p>Navigation and positioning performance of algorithms under complex interference. (<b>a</b>) Error result. (<b>b</b>) Trajectory curve.</p>
Figure 11
<p>Algorithmic navigation and positioning results statistics under complex interference. (<b>a</b>) Mean statistics. (<b>b</b>) Standard deviation statistics.</p>
Figure 12
<p>Navigation and positioning error curve under the combined perturbation of aspherical earth perturbation and atmospheric drag perturbation. (<b>a</b>) Error result. (<b>b</b>) Trajectory curve.</p>
Figure 13
<p>Algorithmic navigation and positioning result statistics under the combined perturbation of aspherical earth perturbation and atmospheric drag perturbation. (<b>a</b>) Mean statistics. (<b>b</b>) Standard deviation statistics.</p>
16 pages, 13964 KiB  
Article
Quantifying Understory Vegetation Cover of Pinus massoniana Forest in Hilly Region of South China by Combined Near-Ground Active and Passive Remote Sensing
by Ruifan Wang, Tiantian Bao, Shangfeng Tian, Linghan Song, Shuangwen Zhong, Jian Liu, Kunyong Yu and Fan Wang
Drones 2022, 6(9), 240; https://doi.org/10.3390/drones6090240 - 5 Sep 2022
Cited by 7 | Viewed by 2363
Abstract
Understory vegetation cover is an important indicator of forest health, and it can also be used as a proxy in the exploration of soil erosion dynamics. Therefore, quantifying the understory vegetation cover in hilly areas in southern China is crucial for facilitating the development of strategies to address local soil erosion. Nevertheless, multi-source data synergy has not been fully exploited in remote sensing surveys of understory vegetation in this region; this issue can be attributed to an insufficient match between the 3D point cloud data obtained from active and passive remote sensing systems and the UAV orthophotos, so that an abundance of understory vegetation information is not represented in two dimensions. In this study, we propose a method that combines UAV orthophotos and airborne LiDAR data to detect understory vegetation. Firstly, to enhance the characterization of understory vegetation, the point CNN model was used to decompose the three-dimensional structure of the <span class="html-italic">Pinus massoniana</span> forest. Secondly, the point cloud was projected onto the UAV image using a point cloud back-projection algorithm. Finally, understory vegetation cover was estimated using a synthetic dataset. Canopy closure was divided into two categories: low and high canopy cover. Slopes were divided into three categories: gentle slopes, inclined slopes, and steep slopes. To clearly elucidate the influence of canopy closure and slope on the remote sensing estimation of understory vegetation coverage, the accuracy for each category was compared. The results show that the overall accuracy of the point CNN model in separating the three-dimensional structure of the <span class="html-italic">Pinus massoniana</span> forest was 74%, which met the accuracy requirement for enhancing the understory vegetation. The method estimated understory vegetation cover more accurately at a low canopy closure level (R<sup>2</sup><sub>low</sub> = 0.778, RMSE<sub>low</sub> = 0.068) than at a high canopy closure level (R<sup>2</sup><sub>high</sub> = 0.682, RMSE<sub>high</sub> = 0.172). It also achieved accurate inversion results, with R<sup>2</sup> values of 0.875, 0.807, and 0.704 and RMSEs of 0.065, 0.106, and 0.149 for gentle, inclined, and steep slopes, respectively. The methods proposed in this study could provide technical support for UAV remote sensing surveys of understory vegetation in the southern hilly areas of China. Full article
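The final step described above, estimating cover from the binarized mask, reduces to a pixel ratio. A minimal sketch; the mask convention (1 for vegetation, 0 for bare ground) is an assumption for illustration:

```python
def vegetation_cover(mask):
    """Fractional understory cover from a binary raster mask,
    where 1 marks vegetation pixels and 0 marks bare ground."""
    pixels = [v for row in mask for v in row]
    return sum(pixels) / len(pixels)
```

Comparing this ratio per plot against the field-measured cover is what yields the R<sup>2</sup> and RMSE figures reported for each canopy-closure and slope class.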
Show Figures

Figure 1
<p>Schematic diagram of experimental plot.</p>
Figure 2
<p>Flow chart of vegetation inversion method.</p>
Figure 3
<p>Point CNN model structure.</p>
Figure 4
<p>Canopy closure and topographic distribution of the sample plot. Each point in the figure corresponds to the measured information of the sample area in terms of the closures, slopes and understory vegetation.</p>
Figure 5
<p>Schematic diagram of farthest sampling point strategy.</p>
Figure 6
<p>Three-dimensional structure decomposition of <span class="html-italic">Pinus massoniana</span> forest. (<b>a</b>): the RGB point cloud of <span class="html-italic">Pinus massoniana</span> forest samples; (<b>b</b>): the point cloud of <span class="html-italic">Pinus massoniana</span> forest 3D structure separation.</p>
Figure 7
<p>Calculation of understory vegetation cover. (<b>a</b>): The result of the decomposition of the three-dimensional structure of the sample site; (<b>b</b>): The result of the reverse projection of the understory point cloud collection back to the orthophoto; (<b>c</b>): The two-dimensional image obtained after voxelization in (<b>b</b>). In (<b>c</b>), the green part is the vegetation part, the black points are the understory vegetation points, and the pink area is the ground point. (<b>d</b>): The result of the mask after binarization of the image. The black area represents the plant area, while the white area represents the bare land area.</p>
Figure 8
<p>Linear regression results under different canopy closure. (<b>a</b>) Low forest densities; (<b>b</b>): High forest densities.</p>
Figure 9
<p>Linear regression results under different slopes. (<b>a</b>): Gentle slope conditions; (<b>b</b>): Inclined slope conditions; (<b>c</b>): Steep slope conditions.</p>
9 pages, 1699 KiB  
Article
Insecticidal Management of Rangeland Grasshoppers Using a Remotely Piloted Aerial Application System
by Daniel E. Martin, Roberto Rodriguez, Derek A. Woller, K. Chris Reuter, Lonnie R. Black, Mohamed A. Latheef, Mason Taylor and Kiara M. López Colón
Drones 2022, 6(9), 239; https://doi.org/10.3390/drones6090239 - 5 Sep 2022
Cited by 4 | Viewed by 2013
Abstract
Grasshoppers are integral parts of rangeland ecosystems but also have the potential to reach population densities high enough (outbreaks) to cause serious economic damage from forage loss and affect adjacent crops. The objective of this study was to investigate the efficacy of treating grasshopper population hotspots with a liquid insecticide using a remotely piloted aerial application system (RPAAS), as opposed to fixed-wing aircraft, which is the most common method currently in use. A liquid insecticide, Sevin XLR PLUS (containing carbaryl), was applied on replicated 4.05-hectare (10-acre) plots with an RPAAS on a ranch in New Mexico. Our results demonstrated that Sevin XLR PLUS significantly suppressed grasshopper populations over a 14-day period (normalized population reduction was 79.11 ± 8.35% SEM) and quite rapidly (mostly by day 3) compared to untreated controls. These results are comparable to those achieved with fixed-wing aircraft. The RPAAS covered the whole test area in a single flight in approximately 5 min, making these population hotspot treatment applications relatively rapid, potentially more cost-effective, and more targeted in comparison to fixed-wing aircraft. Before adoption as an application method option, further research is recommended on using an RPAAS to cover larger areas in combination with using diflubenzuron-based insecticides, which are often preferred. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
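A control-normalised population reduction like the one reported above is conventionally computed with a Henderson-Tilton-style correction; whether the authors used exactly this formula is an assumption on our part, so treat the sketch as illustrative:

```python
def corrected_reduction(t_before, t_after, c_before, c_after):
    """Percent population reduction in treated plots, normalised by the
    change in the untreated control (Henderson-Tilton-style correction).
    Inputs are grasshopper densities before/after treatment (t_*)
    and before/after in the control plots (c_*)."""
    return 100.0 * (1.0 - (t_after * c_before) / (t_before * c_after))
```

The normalisation matters because rangeland densities also drift in untreated plots; dividing by the control change isolates the insecticide's effect.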
Show Figures

Figure 1
<p>Layout of water sensitive cards and flight line during spray deposition measurements.</p>
Figure 2
<p>Experimental plots on rangeland habitat near Estancia, New Mexico. (<b>A</b>) Map of treatment plots. (<b>B</b>) Grasshoppers feeding in plot. (<b>C</b>) Precision Vision 35 RPAAS in flight.</p>
Full article ">Figure 3
<p>Application rate of tank mixture S across swath. Dashed blue lines indicate effective swath width.</p>
Full article ">Figure 4
<p>Effects of Sevin XLR PLUS treatment on grasshopper density and mean ± SEM across the trial period.</p>
Full article ">
17 pages, 5756 KiB  
Article
Deep Reinforcement Learning with Corrective Feedback for Autonomous UAV Landing on a Mobile Platform
by Lizhen Wu, Chang Wang, Pengpeng Zhang and Changyun Wei
Drones 2022, 6(9), 238; https://doi.org/10.3390/drones6090238 - 4 Sep 2022
Cited by 14 | Viewed by 3410
Abstract
Autonomous Unmanned Aerial Vehicle (UAV) landing remains a challenge in uncertain environments, e.g., landing on a mobile ground platform such as an Unmanned Ground Vehicle (UGV) without knowing its motion dynamics. A traditional PID (Proportional, Integral, Derivative) controller is a choice for the [...] Read more.
Autonomous Unmanned Aerial Vehicle (UAV) landing remains a challenge in uncertain environments, e.g., landing on a mobile ground platform such as an Unmanned Ground Vehicle (UGV) without knowing its motion dynamics. A traditional PID (Proportional, Integral, Derivative) controller is a common choice for the UAV landing task, but it suffers from manual parameter tuning, which becomes intractable if the initial landing condition changes or the mobile platform keeps moving. In this paper, we design a novel learning-based controller that integrates a standard PID module with a deep reinforcement learning (DRL) module, which automatically optimizes the PID parameters for velocity control. In addition, corrective feedback based on parameter-tuning heuristics speeds up the learning process compared with traditional DRL algorithms, which are typically time-consuming. Moreover, the learned policy makes the UAV landing smooth and fast by allowing the UAV to adjust its speed adaptively according to the dynamics of the environment. We demonstrate the effectiveness of the proposed algorithm in a variety of quadrotor UAV landing tasks with both static and dynamic environmental settings. Full article
(This article belongs to the Special Issue Cooperation of Drones and Other Manned/Unmanned Systems)
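The abstract describes a PID velocity controller whose gains are retuned online by a DRL module. A minimal sketch of the controller side of that idea: a PID loop whose gains can be overwritten each step (here fixed by hand; in the paper they would come from the learned policy). The class name, gain values, and the toy 1-D descent are illustrative assumptions, not the authors' implementation.

```python
class AdaptivePID:
    """PID controller whose gains can be overwritten each step,
    e.g. by the action output of a reinforcement-learning policy."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def set_gains(self, kp, ki, kd):
        # In the paper's setting, the RL action would land here.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy 1-D descent: drive a 4 m altitude error toward zero with fixed gains.
pid = AdaptivePID(kp=1.2, ki=0.1, kd=0.4)
altitude, target, dt = 4.0, 0.0, 0.05
for _ in range(400):
    v = pid.step(target - altitude, dt)   # commanded vertical velocity
    altitude += v * dt                    # integrate a trivial point-mass plant
print(round(altitude, 3))
```

An RL wrapper would call `set_gains` from its action each control step and receive a reward shaped by landing accuracy and smoothness.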
Show Figures

Figure 1: RL with corrective feedback based on the human experience of the task.
Figure 2: Standard structure of a PID controller.
Figure 3: The framework of PID with RL.
Figure 4: A quadrotor UAV landing task in the simulation environment. (a) Environmental setting. (b) Recognized marker on the mobile vehicle.
Figure 5: Comparison of success times and total used time (in minutes) among RL, RL-PID and RLC-PID during training, N_train = 400, p_0 = (0, 0, 4.0).
Figure 6: Comparison of the accumulated reward among RL, RL-PID, RLC-PID during training, N_train = 400, p_0 = (0, 0, 4.0).
Figure 7: Comparison of loss among RL, RL-PID, RLC-PID during training, N_train = 400, p_0 = (0, 0, 4.0).
Figure 8: Trajectories of landing on a static vehicle (PID, RL, RL-PID, RLC-PID).
Figure 9: PID parameters when landing on a static ground vehicle during testing.
Figure 10: Trajectories of landing on a moving vehicle (PID, RL-PID, RLC-PID), p_0 = (0, 0, 4.0).
Figure 11: PID parameters when landing on a moving ground vehicle during testing.
Figure 12: Real-world UAV landing on a static landmark.
Figure 13: Real-world UAV landing on a movable landmark.
Figure 14: Real-world UAV landing on a mobile ground vehicle.
26 pages, 10711 KiB  
Article
Scheduling and Securing Drone Charging System Using Particle Swarm Optimization and Blockchain Technology
by Mohamed Torky, Mohamed El-Dosuky, Essam Goda, Václav Snášel and Aboul Ella Hassanien
Drones 2022, 6(9), 237; https://doi.org/10.3390/drones6090237 - 4 Sep 2022
Cited by 18 | Viewed by 4460
Abstract
Unmanned aerial vehicles (UAVs) have emerged as a powerful technology for introducing untraditional solutions to many challenges in non-military fields and industrial applications in the next few years. However, the limitations of a drone’s battery and the available optimal charging techniques represent a [...] Read more.
Unmanned aerial vehicles (UAVs) have emerged as a powerful technology for introducing untraditional solutions to many challenges in non-military fields and industrial applications in the next few years. However, the limitations of a drone’s battery and of the available optimal charging techniques represent a significant challenge to using UAVs on a large scale. Because of this limitation, UAVs cannot fly for long periods, and drone services fail dramatically. Optimizing the scheduling of drone charging is therefore a promising response to the battery problem. Moreover, authenticating drones and verifying their charging transactions with charging stations is an essential associated problem. This paper proposes a scheduling and secure drone charging system in response to these challenges. The proposed system was simulated on a generated dataset consisting of 300 drones and 50 charging station points to evaluate its performance. The proposed scheduling methodology was based on the particle swarm optimization (PSO) algorithm and a game theory-based auction model. In addition, drone charging transactions were authenticated and verified using a proposed blockchain protocol. The optimization and scheduling results showed the PSO algorithm’s efficiency in optimizing drone routes and preventing drone collisions during charging flights, with low error rates (MAE = 0.0017, MSE = 0.0159). Moreover, simulating the proposed system on the Ethereum platform demonstrated the efficiency of the proposed blockchain protocol, executing drone charging transactions within a short time and with low latency, averaging 0.34 s on blockchain performance metrics.
Moreover, the proposed scheduling methodology achieved a 96.8% success rate of drone charging cases, while only 3.2% of drones failed to charge after three scheduling rounds. Full article
(This article belongs to the Section Drone Communications)
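The scheduling methodology is built on particle swarm optimization. A bare-bones global-best PSO kernel is sketched below, minimizing a toy cost (squared distance of a drone waypoint to a hypothetical charging station); the fitness function, bounds, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

def pso_minimize(cost, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Bare-bones particle swarm optimization with a global-best topology."""
    lo, hi = bounds
    rng = random.Random(42)  # seeded for reproducibility
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy fitness: squared distance of a drone's 2-D waypoint to a station at (3, -2).
station = (3.0, -2.0)
best, best_cost = pso_minimize(
    lambda p: (p[0] - station[0])**2 + (p[1] - station[1])**2, dim=2)
print(best, best_cost)
```

The paper's actual fitness would encode route length, collision avoidance, and charging-slot assignment rather than a single waypoint distance.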
Show Figures

Figure 1: The architectural model of the proposed scheduling and securing drone charging system.
Figure 2: Authentication method of drones and charging stations.
Figure 3: Optimizing drone routing using the PSO algorithm.
Figure 4: Scheduling drone charging requests using the updating methodology of proof-of-schedule (PoSch).
Figure 5: Optimal drone charging schedule model using the Stackelberg game algorithm.
Figure 6: Verifying the drone charging process using the proposed blockchain protocol.
Figure 7: Results of mean square error (MSE).
Figure 8: Results of mean absolute error (MAE) and root mean square error (RMSE).
Figure 9: Drone losses (i.e., dead drones) vs. arriving drones.
Figure 10: Drone losses (i.e., dead drones) distribution vs. simulation time.
Figure 11: The completion time of drone charging results, average reading and transaction throughputs.
Figure 12: Transactions per block: (a) the number of transactions per block; (b) percentages of transactions per block.
Figure 13: Ethereum gas usage of 14 blocks.
Figure 14: Averages of percentages of charging drones and dead drones over three rounds of the drone charging schedule.
Figure A1: Read and transaction latency results of drones’ charging transactions with stations S1–S4.
Figure A2: Read and transaction latency results of drones’ charging transactions with stations S5–S8.
Figure A3: Read and transaction latency results of drones’ charging transactions with stations S9–S11.
Figure A4: Read and transaction latency results of drones’ charging transactions with stations S12–S14.
Figure A5: Read and transaction latency results of drones’ charging transactions with stations S15–S17.
Figure A6: Read and transaction latency results of drones’ charging transactions with stations S18–S20.
Figure A7: Read and transaction latency results of drones’ charging transactions with stations S21–S23.
Figure A8: Read and transaction latency results of drones’ charging transactions with stations S24–S26.
Figure A9: Read and transaction latency results of drones’ charging transactions with stations S27–S29.
Figure A10: Read and transaction latency results of drones’ charging transactions with stations S30–S33.
Figure A11: Read and transaction latency results of drones’ charging transactions with stations S34–S37.
Figure A12: Read and transaction latency results of drones’ charging transactions with stations S38–S41.
Figure A13: Read and transaction latency results of drones’ charging transactions with stations S42–S44.
Figure A14: Read and transaction latency results of drones’ charging transactions with stations S45–S47.
Figure A15: Read and transaction latency results of drones’ charging transactions with stations S48–S50.
28 pages, 17076 KiB  
Article
Aerodynamic Numerical Simulation Analysis of Water–Air Two-Phase Flow in Trans-Medium Aircraft
by Jun Wei, Yong-Bai Sha, Xin-Yu Hu, Jin-Yan Yao and Yan-Li Chen
Drones 2022, 6(9), 236; https://doi.org/10.3390/drones6090236 - 3 Sep 2022
Cited by 5 | Viewed by 4040
Abstract
A trans-medium aircraft is a new concept aircraft that can both dive in the water and fly in the air. In this paper, a new type of water–air multi-medium span vehicle is designed based on the water entry and exit structure model of [...] Read more.
A trans-medium aircraft is a new concept of aircraft that can both dive in the water and fly in the air. In this paper, a new type of water–air trans-medium vehicle is designed based on the water entry and exit structure model of a multi-rotor UAV. Using the designed structural model, the OpenFOAM open-source numerical platform is employed to analyze single-medium aerodynamic characteristics and multi-medium crossing flow. The rotating flow characteristics of the air rotor and the underwater propeller in a single medium are calculated using sliding meshes. To prevent numerical divergence caused by mesh-motion deformation, the overset grid method and multiphase flow techniques are used for the numerical simulation of the water entry and exit of the trans-medium aircraft. Through this analysis, the flow field characteristics of the trans-medium vehicle in different media are verified, and the changes in body load and attitude at different water entry angles during medium crossing are also obtained. Full article
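The rotor analysis compares numerical and experimental thrust across rotational speeds. Such results are commonly summarized with the dimensionless thrust coefficient C_T = T / (ρ n² D⁴), with n in revolutions per second; the helper below is a generic sketch, and the rotor size and operating point are hypothetical, not values from the paper.

```python
def thrust_coefficient(thrust_n, rho, rps, diameter_m):
    """Dimensionless propeller thrust coefficient C_T = T / (rho * n^2 * D^4),
    with n in revolutions per second and D in metres."""
    return thrust_n / (rho * rps**2 * diameter_m**4)

# Hypothetical 10-inch (0.254 m) rotor producing 6 N of thrust at 5000 RPM in air:
ct = thrust_coefficient(thrust_n=6.0, rho=1.225, rps=5000 / 60, diameter_m=0.254)
print(round(ct, 4))
```

The same nondimensionalization lets the air rotor and the underwater propeller be compared despite the thousand-fold density difference between the two media.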
Show Figures

Figure 1: Flight control rigid body model for trans-medium aircraft.
Figure 2: Schematic diagram of the coordinate system of the trans-medium aircraft.
Figure 3: Schematic diagram of the multi-medium spanning force of a trans-medium aircraft. (a) Free entry into water; (b) exit from water.
Figure 4: Schematic diagram of the force of a trans-medium aircraft entering water at a certain attitude angle.
Figure 5: Vertical entry and exit: (a) vertical entry into the water; (b) vertical exit from the water.
Figure 6: Entry and exit at a certain attitude angle: (a) water entry process; (b) water exit process.
Figure 7: APC1047SF air rotor: (a) physical map; (b) model diagram.
Figure 8: Flow computation domain: (a) computational domain scale modeling; (b) computational domain division and boundary setting.
Figure 9: Schematic diagram of computational domain meshing results: (a) outer domain meshing; (b) air rotor meshing; (c) inner domain meshing; (d) meshing of the interface between the inner and outer domains.
Figure 10: Numerical thrust and experimental thrust at different rotational speeds.
Figure 11: Cloud map of aerodynamic characteristics of a single propeller of an air rotor. (a) RPM = 2×10³; (b) RPM = 3×10³; (c) RPM = 5×10³; (d) RPM = 7×10³; (e) RPM = 8×10³ (unit: r·min⁻¹).
Figure 12: Schematic diagram of the tip vortex.
Figure 13: Schematic diagram of wake vortex structure.
Figure 14: Curves of physical quantities changing with time and space: (a) pressure versus position; (b) pressure versus time; (c) velocity versus position; (d) velocity versus time.
Figure 15: Schematic diagram of grid and computational domain division: (a) fluid computational domain size; (b) computational domain boundary conditions; (c) internal and external computational domain division; (d) blade meshing.
Figure 16: Hover-state tip vortex.
Figure 17: Numerical cloud map in hover state: (a) pressure cloud map; (b) axial velocity contour.
Figure 18: Forward flight wakes at different forward flight speeds.
Figure 19: Numerical contours of the flow field at different forward flight speeds: (a) longitudinal pressure contour; (b) longitudinal velocity contour; (c) axial velocity contour.
Figure 20: Variation of the force characteristics of each blade of the aircraft at forward flight speeds of (a) 1.5 m·s⁻¹; (b) 5 m·s⁻¹; (c) 10 m·s⁻¹.
Figure 21: Schematic diagram of single propeller meshing: (a) computational domain model parameters; (b) boundary conditions and division of internal and external domains.
Figure 22: Propeller hydrodynamic performance curve.
Figure 23: (a) Flow field area grid; (b) background mesh and overset mesh; (c) background grid and overset grid area marker map (1 for overset grid, 0 for background grid).
Figure 24: (a) Pressure curve; (b) speed curve.
Figure 25: Attitude change curves for water entry at different angles.
Figure 26: Physical evolution of cavitation during water entry at different angles: (a) vertical entry; (b–e) inclined entry (10°, 20°, 30°, 40°).
17 pages, 4210 KiB  
Article
Respiration Detection of Ground Injured Human Target Using UWB Radar Mounted on a Hovering UAV
by Yu Jing, Fugui Qi, Fang Yang, Yusen Cao, Mingming Zhu, Zhao Li, Tao Lei, Juanjuan Xia, Jianqi Wang and Guohua Lu
Drones 2022, 6(9), 235; https://doi.org/10.3390/drones6090235 - 3 Sep 2022
Cited by 17 | Viewed by 3991
Abstract
As an important and basic platform for remote life sensing, unmanned aerial vehicles (UAVs) may hide the vital signals of an injured human due to their own motion. In this work, a novel method to remove the platform motion and accurately extract human [...] Read more.
As an important and basic platform for remote life sensing, unmanned aerial vehicles (UAVs) may obscure the vital signs of an injured human due to their own motion. In this work, a novel method to remove the platform motion and accurately extract human respiration is proposed. We utilized a hovering UAV as the platform for an ultra-wideband (UWB) radar to capture human respiration. To remove interference from the moving UAV platform, we used the delay calculated by the correlation between frames of UWB radar data to compensate for the range migration. Then, the echo signals from the human target were extracted as the observed multiple range channel signals. Since these observations satisfy the assumptions of independent component analysis (ICA), we adopted ICA to estimate the respiration signal. The results of respiration detection experiments conducted in two different outdoor scenarios show that our proposed method can accurately separate the respiration of a ground human target without any additional sensor or prior knowledge; this physiological information will be essential for search and rescue (SAR) missions. Full article
(This article belongs to the Special Issue Conceptual Design, Modeling, and Control Strategies of Drones-II)
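The compensation step described in the abstract estimates, for each radar frame, the delay that best correlates it with a reference fast-time profile, then shifts the frame to undo the platform-induced range migration. A toy sketch of that correlate-and-shift idea follows, using synthetic impulse-like profiles; the bin positions and helper names are illustrative assumptions, not the authors' code.

```python
def best_lag(reference, frame, max_lag):
    """Integer lag (in range bins) maximizing the cross-correlation
    between a reference fast-time profile and the current frame."""
    def corr(lag):
        return sum(reference[i] * frame[i + lag]
                   for i in range(len(reference))
                   if 0 <= i + lag < len(frame))
    return max(range(-max_lag, max_lag + 1), key=corr)

def align(frame, lag):
    """Shift a frame by -lag so it lines up with the reference (zero-padded)."""
    n = len(frame)
    return [frame[i + lag] if 0 <= i + lag < n else 0.0 for i in range(n)]

# Hypothetical data: a target echo at bin 20 in the reference has drifted
# to bin 23 in a later frame because of UAV platform motion.
ref = [0.0] * 64
ref[20] = 1.0
frame = [0.0] * 64
frame[23] = 1.0
lag = best_lag(ref, frame, max_lag=8)
print(lag)                            # → 3 (estimated migration in bins)
aligned = align(frame, lag)
print(aligned.index(max(aligned)))    # → 20 (echo restored to reference bin)
```

After every frame is aligned this way, the per-bin slow-time signals become the observation channels fed to ICA.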
Show Figures

Figure 1: UAV-mounted UWB radar system for vital signal detection of a ground injured human subject.
Figure 2: Workflow of the UAV-carried UWB radar system.
Figure 3: Block diagram of the radar signal processing.
Figure 4: The problem of range migration. (a) Radar echo data with range migration. (b) Data after range migration compensation.
Figure 5: Experimental settings for two scenarios. (a) Scenario 1 with smooth background; (b) scenario 2 with grassland background.
Figure 6: Range profiles of a static subject. (a) Radar data without range migration compensation. (b) Radar data with range migration compensation.
Figure 7: Observed signals extracted using the range sampler.
Figure 8: Results of subject 1 in scenario 1. (a) Raw radar echo signal of subject 1 obtained by the maximum energy method. (b) Reference respiration from respiratory belt. (c) Respiration extracted using our proposed method. (d) Respiration extracted using the background residual method.
Figure 9: Frequency spectra of the signals in Figure 8. (a) FFT of raw radar signal. (b) FFT of respiration from respiratory belt. (c) FFT of respiration extracted using our proposed method. (d) FFT of respiration extracted using the background residual method.
Figure 10: Results of subject 1 in scenario 2. (a) Raw radar echo signal of subject 1 obtained using the maximum energy method. (b) Reference respiration from respiratory belt. (c) Respiration extracted using our proposed method. (d) Respiration extracted using the background residual method.
Figure 11: Frequency spectra of the signals in Figure 10. (a) FFT of raw radar signal. (b) FFT of respiration from respiratory belt. (c) FFT of respiration extracted using our proposed method. (d) FFT of respiration extracted using the background residual method.
15 pages, 459 KiB  
Article
Capacity Optimization of Next-Generation UAV Communication Involving Non-Orthogonal Multiple Access
by Mubashar Sarfraz, Muhammad Farhan Sohail, Sheraz Alam, Muhammad Javvad ur Rehman, Sajjad Ahmed Ghauri, Khaled Rabie, Hasan Abbas and Shuja Ansari
Drones 2022, 6(9), 234; https://doi.org/10.3390/drones6090234 - 2 Sep 2022
Cited by 15 | Viewed by 3178
Abstract
Unmanned air vehicle (UAV) communication systems have recently emerged as a quick, low-cost, and adaptable solution to numerous challenges in the next-generation wireless network. In particular, UAV systems have proven to be very useful in wireless communication applications with sudden traffic demands, network [...] Read more.
Unmanned air vehicle (UAV) communication systems have recently emerged as a quick, low-cost, and adaptable solution to numerous challenges in the next-generation wireless network. In particular, UAV systems have proven to be very useful in wireless communication applications with sudden traffic demands, network recovery, aerial relays, and edge computing. Meanwhile, non-orthogonal multiple access (NOMA) has been shown in the literature to maximize the number of served users with the highest traffic capacity for future aerial systems. However, the joint optimization of UAV altitude, user pairing, and power allocation for the problem of capacity maximization requires further investigation. Thus, a capacity optimization problem for the NOMA aerial system is evaluated in this paper, combining convex and heuristic optimization techniques. The proposed algorithm is evaluated using multiple heuristic techniques and deployment scenarios. The results prove the efficiency of the proposed NOMA scheme in comparison to the benchmark technique of orthogonal multiple access (OMA). Moreover, a comparative analysis of heuristic techniques for capacity optimization is also presented. Full article
(This article belongs to the Section Drone Communications)
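For a two-user downlink NOMA pair like those formed by the paper's user-pairing step, the sum rate follows from the standard successive interference cancellation (SIC) model: the far (weak) user treats the near user's signal as interference, while the near user cancels the far user's signal before decoding. The sketch below evaluates that textbook sum rate; the channel gains, power split, and normalized noise are illustrative assumptions, not the paper's parameters.

```python
import math

def noma_pair_sum_rate(g_near, g_far, p_total, alpha_far, noise=1.0):
    """Sum rate (bits/s/Hz) of a two-user downlink NOMA pair.
    The far user gets power fraction alpha_far and sees the near user's
    power as interference; the near user decodes interference-free
    after SIC."""
    p_far = alpha_far * p_total
    p_near = (1.0 - alpha_far) * p_total
    r_far = math.log2(1.0 + p_far * g_far / (p_near * g_far + noise))
    r_near = math.log2(1.0 + p_near * g_near / noise)
    return r_far + r_near

# Hypothetical gains (near user 10x stronger), total power 10, 80% to far user:
print(round(noma_pair_sum_rate(g_near=1.0, g_far=0.1,
                               p_total=10.0, alpha_far=0.8), 3))  # → 2.322
```

In the paper's MINLP formulation, this per-pair rate would be summed over all pairs and maximized over the power split, the pairing, and the UAV altitude (which enters through the channel gains).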
Show Figures

Figure 1: System model.
Figure 2: GA- and PSO-based comparative analysis of sum rates: OMA altitude at 80 m.
Figure 3: GA- and PSO-based comparative analysis of sum rates: OMA altitude at 120 m.
Figure 4: Comparative analysis of user pairing: suburban.
Figure 5: Comparative analysis of user pairing: urban.
Figure 6: Comparative analysis of user pairing: dense urban.
Figure 7: Sum rate comparison for different environments: OMA altitude at 80 m.