Search Results (611)

Search Parameters:
Keywords = UAV photogrammetry

31 pages, 50761 KiB  
Article
Intelligent Structural Health Monitoring and Noncontact Measurement Method of Small Reservoir Dams Using UAV Photogrammetry and Anomaly Detection
by Sizeng Zhao, Fei Kang, Lina He, Junjie Li, Yiqing Si and Yiping Xu
Appl. Sci. 2024, 14(20), 9156; https://doi.org/10.3390/app14209156 - 10 Oct 2024
Abstract
This study proposes a UAV-based remote measurement method for accurately locating pedestrians and other small targets within small reservoir dams. To address the imprecise coordinate information in reservoir areas after prolonged operations, a transformation method for converting UAV coordinates into the local coordinate system without relying on preset parameters is introduced, accomplished by integrating the Structure from Motion (SfM) algorithm to calculate the transformation parameters. An improved YOLOv8 network is introduced for the high-precision detection of small pedestrian targets, complemented by a laser rangefinder to facilitate accurate 3D locating of targets from varying postures and positions. Furthermore, the integration of a thermal infrared camera facilitates the detection and localization of potential seepage. The experimental validation and application across two real small reservoir dams confirm the accuracy and applicability of the proposed approach, demonstrating the efficiency of the proposed routine UAV surveillance strategy and proving its potential to establish electronic fences and enhance maintenance operations. Full article
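The coordinate registration described in the abstract rests on estimating a transformation between the UAV frame and the local dam frame from matched points. Below is a minimal sketch of a four-parameter (2D similarity/Helmert) transform fitted by linear least squares, the kind of plane transformation such registration typically uses; this is an illustration only, not the authors' implementation, and all function names are invented.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Estimate a four-parameter (Helmert) transform dst ~ s*R(theta) @ src + t
    from matched 2D points, via linear least squares on a = s*cos(theta), b = s*sin(theta)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Design matrix for unknowns [a, b, tx, ty]:
    #   x' = a*x - b*y + tx
    #   y' = b*x + a*y + ty
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    rhs = dst.reshape(-1)  # interleaved [x0', y0', x1', y1', ...]
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    scale = np.hypot(a, b)
    theta = np.arctan2(b, a)
    return scale, theta, np.array([tx, ty])

def apply_similarity_2d(pts, scale, theta, t):
    """Apply the fitted similarity transform to an (n, 2) array of points."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * np.asarray(pts, float) @ R.T + t
```

With two or more matched point pairs the system is (over)determined and least squares recovers scale, rotation, and translation; the paper's seven-parameter 3D model follows the same least-squares idea with more unknowns.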
Show Figures

Figure 1. Target 3D localization challenges with UAV inspection.
Figure 2. Conventional coordinate transformation process between UAV and dam coordinate systems.
Figure 3. Geometric models for coordinate transformation in different dimensions. (a) Seven-parameter model. (b) Four-parameter model. (c) Altitude conversion.
Figure 4. The registration process between UAV and dam coordinates based on the SfM algorithm. (a) UAV image-based SfM processes without GCPs. (b) UAV coordinates transformed to local coordinate system.
Figure 5. Altitude conversion methods and corresponding error trends.
Figure 6. The process of transforming UAV WGS-84 coordinates to dam local coordinates. (a) WGS-84 coordinate system. (b) Projection to plane. (c) Conversion to local. (d) Altitude transformation. (e) Altitude adjustment. (f) UAV transformed coordinates.
Figure 7. Target 3D localization using a UAV integrated with a laser rangefinder.
Figure 8. The ConvNeXt v2 module architecture.
Figure 9. Schematic representation of the ODConv architecture.
Figure 10. Architecture of the proposed small pedestrian detection network with a tiny head. The main modified part is enclosed in the red box. Modules 0 to 9 are the backbone of the network, while modules 10 to 27 are the neck structure of the network.
Figure 11. UAV coordinate transformation and detected target 3D localization in a small reservoir dam.
Figure 12. Evaluation scenario and measurement process of the developed method. (a) 3D model of experimental area with 6 GCPs. (b) Target measurements in 27 various positions.
Figure 13. GCP errors measured from each position. (a) Errors in planar measurement. (b) Errors in altitude measurement.
Figure 14. Training process mAP curves for the different networks.
Figure 15. Small pedestrian detection results of different networks. To visually demonstrate the detection performance, the relevant confidence values are not displayed. Some large target pedestrians appear in (a,b), while smaller targets appear in (c–f).
Figure 16. Examples of added background images.
Figure 17. Photos of the small reservoir dam and measurement equipment for Case 1. (a) Overall scene of the dam. (b) Total station and one GCP. (c) M300 RTK UAV and H20T camera.
Figure 18. Point clouds generated from UAV images and a laser scan for Case 1. (a) Laser scan. (b) Transformed points. (c) No transformation.
Figure 19. The 3D localization of pedestrians in the dam coordinate system and the analysis of their positions within the dam structure for Case 1.
Figure 20. Detection and localization of potential seepage regions based on infrared thermography for Case 1. (a) Overall image. (b–e) Details of visible light and infrared images.
Figure 21. Photograph of the small reservoir dam for Case 2.
Figure 22. Point clouds generated from UAV images and a laser scan for Case 2. (a) Laser scan. (b) Transformed points. (c) No transformation.
Figure 23. 3D localization of pedestrians in the dam coordinate system and the analysis of their positions within the structure for Case 2.
Figure 24. Detection and localization of potential seepage regions based on infrared thermography for Case 2. (a) Overall image. (b–e) Details of visible light and infrared images.
Figure 25. Schematic diagram of electronic fence for small reservoir dams using UAVs.
21 pages, 8325 KiB  
Article
Estimation of Forage Biomass in Oat (Avena sativa) Using Agronomic Variables through UAV Multispectral Imaging
by Julio Urquizo, Dennis Ccopi, Kevin Ortega, Italo Castañeda, Solanch Patricio, Jorge Passuni, Deyanira Figueroa, Lucia Enriquez, Zoila Ore and Samuel Pizarro
Remote Sens. 2024, 16(19), 3720; https://doi.org/10.3390/rs16193720 - 6 Oct 2024
Abstract
Accurate and timely estimation of oat biomass is crucial for the development of sustainable and efficient agricultural practices. This research focused on estimating and predicting forage oat biomass using UAV and agronomic variables. A Matrice 300 equipped with a multispectral camera was used for 14 flights, capturing 21 spectral indices per flight. Concurrently, agronomic data were collected at six stages synchronized with UAV flights. Data analysis involved correlations and Principal Component Analysis (PCA) to identify significant variables. Predictive models for forage biomass were developed using various machine learning techniques: linear regression, Random Forests (RFs), Support Vector Machines (SVMs), and Neural Networks (NNs). The Random Forest model showed the best performance, with a coefficient of determination R2 of 0.52 on the test set, followed by Support Vector Machines with an R2 of 0.50. Differences in root mean square error (RMSE) and mean absolute error (MAE) among the models highlighted variations in prediction accuracy. This study underscores the effectiveness of photogrammetry, UAV, and machine learning in estimating forage biomass, demonstrating that the proposed approach can provide relatively accurate estimations for this purpose. Full article
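As a rough illustration of the modeling pipeline the abstract describes (Random Forest regression on spectral/agronomic predictors, evaluated with R², RMSE, and MAE on a held-out test set), here is a hedged scikit-learn sketch on synthetic data; the feature set and biomass response are invented stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
n = 200
# Hypothetical plot-level predictors (e.g. vegetation indices, plant height), scaled 0..1.
X = rng.uniform(0, 1, size=(n, 4))
# Toy forage-biomass response driven by two of the predictors plus noise.
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
```

The same train/test split can be reused to fit the other model families the study compares (linear regression, SVM, neural network), so that R², RMSE, and MAE are directly comparable across models.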
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
Show Figures

Figure 1. Location of the field experiment and experimental design of six local oat varieties in Santa Ana, showing the ground control points (GCPs).
Figure 2. Description of the methodological framework employed in this research; DSM (Digital Surface Model); DTM (Digital Terrain Model); DHM (Digital Height Model).
Figure 3. (A) DJI RTK V2 GNSS, (B) UAV Matrice 300, (C) Micasense Red Edge P camera, (D) flight plan, (E) ground control point (GCP), (F) evaluation plot, and (G) Calibrated Reflectance Panel (CRP).
Figure 4. Correlation coefficients between agronomic variables and spectral variables over time. r—Pearson correlation coefficient; significant at the 5% probability level; X = not significant.
Figure 5. Principal Component Analysis of agronomic and spectral variables.
Figure 6. The Taylor diagram compares the performance of linear regression (LM), Neural Network (NN), Random Forest (RF), and Support Vector Machine (SVM) models in predicting dry matter (dm) based on standard deviation, correlation coefficient, and RMSE for both training and test datasets.
Figure 7. Representation of dry matter estimated through prediction models for oat cultivation.
30 pages, 32487 KiB  
Article
Fitness of Multi-Resolution Remotely Sensed Data for Cadastral Mapping in Ekiti State, Nigeria
by Israel Oluwaseun Taiwo, Matthew Olomolatan Ibitoye, Sunday Olukayode Oladejo and Mila Koeva
Remote Sens. 2024, 16(19), 3670; https://doi.org/10.3390/rs16193670 - 1 Oct 2024
Abstract
In developing nations, such as Ekiti State, Nigeria, the utilization of remotely sensed data, particularly satellite and UAV imagery, remains significantly underexploited in land administration. This limits multi-resolution imagery’s potential in land governance and socio-economic development. This study examines factors influencing UAV adoption for land administration in Nigeria, mapping seven rural, peri-urban, and urban sites with orthomosaics (2.2 cm to 3.39 cm resolution). Boundaries were manually delineated, and parcel areas were calculated. Using the 0.05 m orthomosaic as a reference, the Horizontal Radial Root Mean Square Error (RMSEr) and Normalized Parcel Area Error (NPAE) were computed. Results showed a consistent increase in error with increasing resolution (0.1 m to 1 m), with RMSEr ranging from 0.053 m (formal peri-urban) to 2.572 m (informal rural settlement). Formal settlements with physical demarcations exhibited more consistent values. A comparison with GNSS data revealed that RMSEr values conformed to the American Society for Photogrammetry and Remote Sensing (ASPRS) Class II and III standards. The research demonstrates physical demarcations’ role in facilitating cadastral mapping, with formal settlements showing the highest suitability. This study recommends context-specific imagery resolution to enhance land governance. Key implications include promoting settlement typology awareness and addressing UAV regulatory challenges. NPAE values can serve as a metric for assessing imagery resolution fitness for cadastral mapping. Full article
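The two accuracy metrics named in the abstract can be sketched as follows. RMSEr here is the standard horizontal radial RMSE over checkpoint pairs; the NPAE definition shown (area difference relative to the reference parcel area) is one plausible reading and may differ from the paper's exact formula.

```python
import numpy as np

def rmse_r(ref_xy, test_xy):
    """Horizontal radial RMSE: sqrt(RMSE_x^2 + RMSE_y^2) over matched checkpoints."""
    d = np.asarray(test_xy, float) - np.asarray(ref_xy, float)
    return float(np.sqrt(np.mean(d[:, 0] ** 2) + np.mean(d[:, 1] ** 2)))

def npae(ref_area, test_area):
    """Normalized Parcel Area Error: absolute area difference relative to the
    reference parcel area (assumed definition, for illustration only)."""
    return abs(test_area - ref_area) / ref_area
```

In the study's workflow, the 0.05 m orthomosaic supplies the reference coordinates and parcel areas, and each coarser resolution (0.1 m to 1 m) is scored against it with these two metrics.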
(This article belongs to the Section Urban Remote Sensing)
Show Figures

Figure 1. Study area map. (a) Map of Nigeria showing Ekiti State; (b) map of Ekiti State showing the study sites.
Figure 2. Methodology flow.
Figure 3. Perceived benefits of UAV use for land administration in Nigeria.
Figure 4. Concerns and challenges perceived in adopting UAVs for land administration and management in Nigeria.
Figure 5. UAV orthomosaic of part of Aaye-Oja Ekiti (rural and informal settlement).
Figure 6. UAV orthomosaic of part of Igedora-Ekiti (rural and formal settlement).
Figure 7. UAV orthomosaic of part of Aaye community (peri-urban and informal settlement).
Figure 8. UAV orthomosaic of part of Maryland Avenue (peri-urban and formal settlement).
Figure 9. UAV orthomosaic of part of Atinkankan (urban and informal settlement–slum).
Figure 10. UAV orthomosaic of part of Okebola (urban and formal settlement).
Figure 11. UAV orthomosaic of part of Egbewa GRA (urban and formal settlement) annotated with GNSS marker positions for absolute accuracy comparison.
Figure 12. Comparison of spatial resolutions for rural agricultural area. (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.05 m.
Figure 13. Comparison of spatial resolutions for rural formal settlement. (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.06 m.
Figure 14. Comparison of spatial resolutions for peri-urban informal settlement. (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.06 m.
Figure 15. Comparison of spatial resolutions for peri-urban formal settlement. (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.05 m.
Figure 16. Comparison of spatial resolutions for urban informal settlement (slum). (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.05 m.
Figure 17. Comparison of spatial resolutions for urban formal settlement. (a) 1 m; (b) 0.5 m; (c) 0.1 m; (d) 0.05 m.
Figure 18. Satellite imagery of Aaye-Oja Ekiti clipped at a 1:20,000 visualization scale on QGIS: (a) 3.5 m resolution multispectral image; (b) 2.1 m resolution panchromatic image.
23 pages, 5705 KiB  
Article
Enhanced Estimation of Crown-Level Leaf Dry Biomass of Ginkgo Saplings Based on Multi-Height UAV Imagery and Digital Aerial Photogrammetry Point Cloud Data
by Saiting Qiu, Xingzhou Zhu, Qilin Zhang, Xinyu Tao and Kai Zhou
Forests 2024, 15(10), 1720; https://doi.org/10.3390/f15101720 - 28 Sep 2024
Abstract
Ginkgo is a multi-purpose economic tree species that plays a significant role in human production and daily life. The dry biomass of leaves serves as an accurate key indicator of the growth status of Ginkgo saplings and represents a direct source of economic yield. Given the characteristics of flexibility and high operational efficiency, affordable unmanned aerial vehicles (UAVs) have been utilized for estimating aboveground biomass in plantations, but not specifically for estimating leaf biomass at the individual sapling level. Furthermore, previous studies have primarily focused on image metrics while neglecting the potential of digital aerial photogrammetry (DAP) point cloud metrics. This study aims to investigate the estimation of crown-level leaf biomass in 3-year-old Ginkgo saplings subjected to different nitrogen treatments, using a synergistic approach that combines both image metrics and DAP metrics derived from UAV RGB images captured at varying flight heights (30 m, 60 m, and 90 m). In this study, image metrics (including the color and texture feature parameters) and DAP point cloud metrics (encompassing crown-level structural parameters, height-related and density-related metrics) were extracted and evaluated for modeling leaf biomass. The results indicated that models that utilized both image metrics and point cloud metrics generally outperformed those relying solely on image metrics. Notably, the combination of image metrics obtained from the 60 m flight height with DAP metrics derived from the 30 m height significantly enhanced the overall modeling performance, especially when optimal metrics were selected through a backward elimination approach. Among the regression methods employed, Gaussian process regression (GPR) models exhibited superior performance (CV-R2 = 0.79, rRMSE = 25.22% for the best model), compared to Partial Least Squares Regression (PLSR) models. The common critical image metrics for both GPR and PLSR models were found to be related to chlorophyll (including G, B, and their normalized indices such as NGI and NBI), while key common structural parameters from the DAP metrics included height-related and crown-related features (specifically, tree height and crown width). This approach of integrating optimal image metrics with DAP metrics derived from multi-height UAV imagery shows great promise for estimating crown-level leaf biomass in Ginkgo saplings and potentially other tree crops. Full article
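A hedged sketch of the kind of cross-validated Gaussian process regression the abstract reports (CV-R²), using synthetic stand-ins for the selected image and DAP metrics; the kernel choice, sample size, and response function are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 150
# Invented stand-ins for the selected image metrics (e.g. NGI) and DAP metrics
# (e.g. tree height, crown width), scaled to 0..1.
X = rng.uniform(0, 1, size=(n, 3))
# Toy leaf-biomass response: smooth nonlinear signal plus measurement noise.
y = np.sin(np.pi * X[:, 0]) + X[:, 1] + rng.normal(0, 0.1, n)

# RBF kernel for the smooth trend, WhiteKernel for observation noise;
# hyperparameters are refit by marginal-likelihood optimization in each fold.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
cv_r2 = cross_val_score(gpr, X, y, cv=5, scoring="r2").mean()
```

Swapping in `sklearn.cross_decomposition.PLSRegression` under the same `cross_val_score` call would reproduce the GPR-vs-PLSR comparison structure described in the abstract.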
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Show Figures

Figure 1. Workflow for estimating crown-level leaf biomass of Ginkgo saplings. RGB images represent the commonly used colored images with each pixel representing three color channel values (Red, Green, and Blue) ranging from 0 to 255. SfM represents the Structure from Motion (SfM) algorithm used for 3D reconstruction. AL, BL, and DL represent all leaf, the 50% brightest leaf, and the 50% darkest leaf pixels, respectively.
Figure 2. Experimental design of nitrogen application treatments for Ginkgo saplings. (A) The drone aerial photography and (B) in situ photos in the experimental area.
Figure 3. The in situ scenario of the DJI Phantom 4 RTK drone.
Figure 4. Box plots of the distribution of leaf biomass (A), height (B), and crown width (C) for different nitrogen application levels.
Figure 5. The multi-height UAV orthomosaic imagery of three typical plots under different nitrogen treatments (upper row: N0; middle row: N2; bottom row: N4), with sapling crowns delineated by visual interpretation (white lines) and tree tops (yellow color) detected from DAP datasets with different heights (left column: 30 m; middle column: 60 m; right column: 90 m).
Figure 6. The scatterplots for PLSR (A) and GPR (B) models with the image metrics of 60 m height and the DAP metrics of 30 m height. Nitrogen_low, Nitrogen_medium, and Nitrogen_high represent the low (N0~N1), medium (N2~N3), and high (N4~N5) nitrogen fertilizer treatments, respectively.
Figure 7. The frequency of the optimal metrics selected in the PLSR and GPR models for the estimation of Ginkgo leaf biomass. The green and red color bars represent optimal image metrics and DAP metrics, respectively.
Figure 8. The horizontal distribution and vertical profile of the estimated leaf biomass of Ginkgo saplings based on GPR models with the optimal 60 m height image metrics and 30 m height DAP metrics at the landscape level. The upper, medium, and bottom vertical profiles are extracted from six plots within the upper, medium, and bottom rows in the horizontal distribution map, respectively.
Figure 9. The linear relationships between leaf biomass of Ginkgo saplings and an image metric of NGI (A) or a DAP metric of crown width (B).
25 pages, 47040 KiB  
Article
Mapping Earth Hummocks in Daisetsuzan National Park in Japan Using UAV-SfM Framework
by Yu Meng, Teiji Watanabe, Yuichi S. Hayakawa, Yuki Sawada and Ting Wang
Remote Sens. 2024, 16(19), 3610; https://doi.org/10.3390/rs16193610 - 27 Sep 2024
Abstract
Earth hummocks are periglacial landforms that are widely distributed in arctic and alpine regions. This study employed an uncrewed aerial vehicle (UAV) and a structure from motion (SfM) framework to map and analyze the spatial distribution and morphological characteristics of earth hummocks across an extensive area in Daisetsuzan National Park, Japan. The UAV-captured images were processed using SfM photogrammetry to create orthomosaic images and high-resolution DEMs. We identified the distribution and morphological characteristics of earth hummocks using orthoimages, hillshade maps, and DEMs and analyzed how their morphological parameters relate to topographical conditions. A total of 18,838 individual earth hummocks in an area of approximately 82,599 m² were mapped and analyzed across the two study areas, surpassing the scale of existing studies. The average length, width, and height of these earth hummocks are 1.22 m, 1.03 m, and 0.15 m, respectively, and topographical features such as slope, aspect, and landforms are demonstrated to have an essential influence on the morphology of the earth hummocks. These findings enhance our understanding of topographical features. Furthermore, this study demonstrates the efficacy of utilizing the UAV-SfM framework with multi-directional hillshade mapping as an alternative to manual field measurements in studying periglacial landforms in mountainous regions. Full article
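Multi-directional hillshading, which this study uses as its mapping aid, simply averages single-light-source hillshades over several azimuths so that hummock microrelief is visible regardless of orientation. The sketch below uses one common hillshade formulation (on a flat DEM it shades uniformly to 255·cos(zenith)); it is illustrative only, not the GIS tool the authors used, and the six azimuths are an assumption matching the figure caption.

```python
import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg, cellsize=1.0):
    """Single-source hillshade of a DEM, scaled to 0..255."""
    dzdy, dzdx = np.gradient(np.asarray(dem, float), cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass azimuth -> math angle
    zen = np.radians(90.0 - altitude_deg)          # sun altitude -> zenith angle
    shaded = (np.cos(zen) * np.cos(slope)
              + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return 255.0 * np.clip(shaded, 0.0, 1.0)

def multidirectional_hillshade(dem, azimuths=(0, 60, 120, 180, 240, 300),
                               altitude_deg=45.0, cellsize=1.0):
    """Average hillshade over several light azimuths (six here, as in Figure 4)."""
    return np.mean([hillshade(dem, a, altitude_deg, cellsize) for a in azimuths],
                   axis=0)
```

Applied to the high-resolution UAV-SfM DEM, the multi-directional result avoids the blind spots of a single 315°/45° light source when delineating individual hummocks.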
(This article belongs to the Special Issue Remote Sensing for Mountain Ecosystems II)
Show Figures

Graphical abstract
Figure 1. Map of the study areas (created using the Red Relief Image Map (RRIM) technique developed by Asia Air Survey Co., Ltd.; Patent No. 3670274, 4272146). Coordinate system: GCS_WGS_1984.
Figure 2. Earth hummocks in Area B.
Figure 3. Visual representations of an earth hummock: the left figures display the top view and front view of the 3D model of the earth hummock, and the right photographs present the vegetative cover of the earth hummock.
Figure 4. Comparison of the two hillshade maps: (a) traditional hillshade map with an azimuth angle of 315° and an altitude angle of 45° for a single light source; (b) multi-directional hillshade map adjusted to incorporate light from six distinct sources; and (c) orthomosaic image for reference.
Figure 5. The interface of ArcGIS Pro when vectorizing the range and shape of earth hummocks: (a) multi-directional hillshade map; (b) hillshade map in 3D view; and (c) orthophoto in 2D view.
Figure 6. Schematic diagram of the minimum bounding geometry and the points used for calculating the height.
Figure 7. Vegetation height measurements on earth hummocks using a folding ruler.
Figure 8. Workflow of the study.
Figure 9. Orthomosaic images and DEM maps of the study areas obtained from UAV-SfM: (a) orthomosaic image and (b) DEM map in Area A; (c) orthomosaic image and (d) DEM map in Area B.
Figure 10. Distribution of topographic parameters and landform classifications in Areas A (left) and B (right): (a) slope; (b) aspect; (c) TWI; and (d) Geomorphon landforms (created using the 5 m resolution DEM5A provided by the GSI).
Figure 11. Distribution maps of earth hummocks in (a) Area A and (b) Area B. Each site was assigned a number arranged in descending order based on area size.
Figure 12. Distribution patterns of earth hummocks: (a) earth hummocks are widely spaced apart; (b1) earth hummocks are closely connected with minimal spacing; and (b2) earth hummocks overlap.
Figure 13. Differences in the morphological parameters of earth hummocks between Areas A and B.
Figure 14. The relationship between the morphological parameters of earth hummocks, shown with all data (N = 18,838) and separate area data (N = 5843 in Area A and N = 12,995 in Area B): (a) length vs. width; and (b) length vs. height. The unit 'pdf' in the figure refers to the probability density function, with the light blue lines representing the linear regression fits.
Figure 15. Slope (a) and aspect distribution (b) of earth hummocks in the study areas.
Figure 16. Distribution of Geomorphon landforms in Areas A and B.
Figure 17. Distribution of earth hummocks with drainage patterns: (a) TWI and (b) orthomosaic map (the numbers are the same as in Figure 11).
Figure 18. Turf-banked terraces in Area A widely distributed in flat areas on both sides of the hiking trail.
Figure 19. A map showing inaccessibility to the earth hummock site due to Pinus pumila and distance from the hiking trail in Area B.
19 pages, 9060 KiB  
Article
An Innovative New Approach to Light Pollution Measurement by Drone
by Katarzyna Bobkowska, Pawel Burdziakowski, Pawel Tysiac and Mariusz Pulas
Drones 2024, 8(9), 504; https://doi.org/10.3390/drones8090504 - 19 Sep 2024
Abstract
The study of light pollution is a relatively new and specific field of measurement. The current literature is dominated by articles that describe the use of ground and satellite data as a source of information on light pollution. However, there is a need to study the phenomenon on a microscale, i.e., locally within small locations such as housing estates, parks, buildings, or even inside buildings. Therefore, there is an important need to measure light pollution at a lower level, near the level of the skyline. In this paper, the authors present a new drone design for light pollution measurement. A completely new original design for an unmanned platform for light pollution measurement is presented, which is adapted to mount custom sensors (not originally designed to be mounted on unmanned aerial vehicles (UAVs)), allowing registration in the nadir and zenith directions. The application and use of traditional photometric sensors in the new configuration, such as the spectrometer and the sky quality meter (SQM), is presented, together with a multispectral camera for nighttime measurements and a calibrated visible-light camera. The UAV survey yields georeferenced products that allow the visualisation of multimodal photometric data. This paper also presents the results from field experiments during which the light spectrum is measured with the installed sensors. As the results show, measurements at night, especially with multispectral cameras, allow the assessment of the spectrum emitted by street lamps, while the measurement of the sky quality depends on the flight height only up to 10 m above ground level. Full article
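For context on the SQM readings the abstract discusses: an SQM reports night-sky brightness in magnitudes per square arcsecond, which can be converted to an approximate luminance using a commonly quoted relation. The constant below is the standard approximation from the astronomical literature; treat it as indicative, since the paper itself works with raw SQM values.

```python
def sqm_to_luminance(sqm_mag_per_arcsec2):
    """Convert an SQM reading (mag/arcsec^2) to approximate luminance in cd/m^2,
    using the commonly quoted relation L ~ 10.8e4 * 10**(-0.4 * m)."""
    return 10.8e4 * 10 ** (-0.4 * sqm_mag_per_arcsec2)
```

For example, a pristine dark sky around 21 mag/arcsec² corresponds to a luminance on the order of a few times 10⁻⁴ cd/m², while urban skies several magnitudes brighter are tens to hundreds of times more luminous.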
Show Figures

Figure 1. Drone view: (a) bottom view, (b) side view, (c) top view, and (d) front view.
Figure 2. Overview of the drone measurement components.
Figure 3. Sensor arrangement and sectors of operation.
Figure 4. The campus of the Gdansk University of Technology, illuminated by street lamps and spotlights, with marked starting points in the non-illuminated area (no. 1) and under the street lamp in the illuminated area (no. 2).
Figure 5. Survey flights 2 and 3 and the night survey point cloud, with SQM values marked on the trajectory in the corresponding colour.
Figure 6. Measurement of the SQM versus the UAV flight altitude and UAV roll and pitch angles.
Figure 7. Take-off 2—relationship of the SQM value to the altitude.
Figure 8. Take-off from a non-lighted area (take-off 1)—relationship of the SQM value to the height.
Figure 9. (a) Orthophoto map of the study area at night, and (b) digital surface model (DSM).
Figure 10. Images acquired with a visible light camera mounted on a drone (top row—raw data, bottom row—data indicating luminance level (cd/m²) after post-processing with iQ Luminance software).
Figure 11. (a) Composite image (colour composite) from spectral channels 3, 2, 1 (R, G, and B), with the spectral measurement spot marked in red, and (b) spectral measurement result: radiance values at the spot for each spectral channel.
Figure 12. Comparison of the radiance images in the individual spectral channels for an example section of the study area.
20 pages, 21023 KiB  
Article
Deformation-Adapted Spatial Domain Filtering Algorithm for UAV Mining Subsidence Monitoring
by Jianfeng Zha, Penglong Miao, Hukai Ling, Minghui Yu, Bo Sun, Chongwu Zhong and Guowei Hao
Sustainability 2024, 16(18), 8039; https://doi.org/10.3390/su16188039 - 14 Sep 2024
Abstract
Underground coal mining induces surface subsidence, leading to disasters such as damage to buildings and infrastructure, landslides, and surface water accumulation. Preventing and controlling disasters in subsidence areas and reutilizing land depend on understanding subsidence regularity and obtaining surface subsidence monitoring data. These data are crucial for the reutilization of regional land resources and disaster prevention and control. Subsidence hazards are also a key constraint to mine development. Recently, with the rapid advancement of UAV technology, the use of UAV photogrammetry for surface subsidence monitoring has become a significant trend in this field. Periodic imagery data quickly acquired by UAV are used to construct DEMs through point cloud filtering, and surface subsidence information is then obtained by differencing DEMs from different periods. However, due to the accuracy limitations inherent in UAV photogrammetry, the subsidence data obtained through this method contain errors, making it challenging to achieve high-precision surface subsidence monitoring. Therefore, this paper proposes a spatial domain filtering algorithm for UAV photogrammetry that exploits the characteristics of mining-induced surface subsidence and deformation. This algorithm significantly reduces random error in the differential DEM, achieving high-precision ground subsidence monitoring using UAV. Simulation and field test results show that the surface subsidence elevation errors obtained in the simulation tests are reduced by more than 50% compared to conventional methods. In field tests, this method reduced surface subsidence elevation errors by 39%. The monitoring error for surface subsidence was as low as 8 mm compared to leveling survey data. This method offers a new technical pathway for high-precision surface subsidence monitoring in mining areas using UAV photogrammetry. Full article
(This article belongs to the Section Pollution Prevention, Mitigation and Sustainability)
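The DEM-differencing step described in the abstract is easy to sketch. The following is a minimal illustration only, not the paper's algorithm: the synthetic subsidence bowl, the 3 cm noise level, and the 9-cell mean-filter window are all assumptions chosen to show why spatial-domain mean filtering suppresses the random component of a differential DEM.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def subsidence_from_dems(dem_before, dem_after, window=5):
    """Differential DEM smoothed with a spatial-domain mean filter.

    Subsidence is positive where the surface has lowered; `window` is
    the filter size in grid cells. The mean filter suppresses the
    random (zero-mean) component of the photogrammetric error.
    """
    diff = np.asarray(dem_before, float) - np.asarray(dem_after, float)
    return uniform_filter(diff, size=window, mode="nearest")

# Synthetic check: a smooth subsidence bowl plus 3 cm random noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
true_sub = 0.5 * np.exp(-(x**2 + y**2) / 0.2)             # metres
dem_t0 = np.full_like(true_sub, 100.0)                    # pre-mining DEM
dem_t1 = dem_t0 - true_sub + rng.normal(0, 0.03, x.shape) # post-mining DEM

raw = dem_t0 - dem_t1                                     # noisy differential DEM
filtered = subsidence_from_dems(dem_t0, dem_t1, window=9)
rmse_raw = float(np.sqrt(np.mean((raw - true_sub) ** 2)))
rmse_filtered = float(np.sqrt(np.mean((filtered - true_sub) ** 2)))
```

On data like this the filtered error falls well below the per-cell noise level, at the cost of slightly smoothing sharp subsidence gradients.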
Show Figures

Figure 1
<p>Location of the study area, working face, and observation line.</p>
Figure 2
<p>Histogram of error distribution of conventional methods.</p>
Figure 3
<p>Schematic diagram of surface inclination calculation. The arrows in the figure refer to the direction in which the sampling point is calculated along the x or y direction.</p>
Figure 4
<p>Schematic diagram of mean filtering in spatial domain. The blue dots represent the four corners of the grid corresponding to each sampling point; the blue arrows point to the images representing the results after processing using the algorithm in this paper.</p>
Figure 5
<p>Algorithm flow chart. The red dots in the graph represent the direction of the calculation along the x or y direction.</p>
Figure 6
<p>Distribution of error intervals of sampling points after different grid treatments.</p>
Figure 7
<p>Error proportion diagram under different grid intervals.</p>
Figure 8
<p>Mean error of sampling points under different grid intervals.</p>
Figure 9
<p>Road settlement map measured by UAV based on original data.</p>
Figure 10
<p>Road settlement map obtained by spatial domain filtering.</p>
Figure 11
<p>Comparison of path changes for different grid sizes.</p>
Figure 12
<p>Surface settlement maps obtained through the proposed algorithm. (<b>a</b>) Phase II. (<b>b</b>) Phase III. (<b>c</b>) Phase IV. (<b>d</b>) Phase V. (<b>e</b>) Phase VI. (<b>f</b>) Phase VII.</p>
Figure 13
<p>Extracting the isoline map of surface subsidence by UAV. (<b>a</b>) Phase II isoline map of surface subsidence. (<b>b</b>) Phase III isoline map of surface subsidence. (<b>c</b>) Phase IV isoline map of surface subsidence. (<b>d</b>) Phase V isoline map of surface subsidence. (<b>e</b>) Phase VI isoline map of surface subsidence. (<b>f</b>) Phase VII isoline map of surface subsidence.</p>
Figure 14
<p>Accuracy assessment for different mining phases. (<b>a</b>) phase IV direction E-W. (<b>b</b>) phase IV direction N-S. (<b>c</b>) phase V direction E-W. (<b>d</b>) phase V direction N-S. (<b>e</b>) phase VI direction E-W. (<b>f</b>) phase VI direction N-S.</p>
20 pages, 2618 KiB  
Article
Enhanced Tailings Dam Beach Line Indicator Observation and Stability Numerical Analysis: An Approach Integrating UAV Photogrammetry and CNNs
by Kun Wang, Zheng Zhang, Xiuzhi Yang, Di Wang, Liyi Zhu and Shuai Yuan
Remote Sens. 2024, 16(17), 3264; https://doi.org/10.3390/rs16173264 - 3 Sep 2024
Viewed by 499
Abstract
Tailings ponds are recognized as significant sources of potential man-made debris flow and major environmental disasters. Recent frequent tailings dam failures and growing trends in fine tailings outputs underscore the critical need for innovative monitoring and safety management techniques. Here, we propose an approach that integrates UAV photogrammetry with convolutional neural networks (CNNs) to extract beach line indicators (BLIs) and conduct enhanced dam safety evaluations. The significance of real 3D geometry construction in numerical analysis is investigated. The results demonstrate that the optimized You Only Look At CoefficienTs (YOLACT) model performs best in recognizing the beach boundary line, achieving a mean Intersection over Union (mIoU) of 72.63% and a mean Pixel Accuracy (mPA) of 76.2%. This approach shows promise for future integration with autonomously charging UAVs, enabling comprehensive coverage and automated monitoring of BLIs. Additionally, the anti-slide and seepage stability evaluations are affected by the geometry shape and the water condition configuration. The proposed approach provides more conservative seepage calculations, suggesting that simplified 2D modeling may underestimate tailings dam stability, potentially affecting dam designs and regulatory decisions. Multiple numerical methods are suggested for cross-validation. This approach is crucial for balancing safety regulations with economic feasibility, helping to prevent excessive and unsustainable burdens on enterprises and advancing towards the goal of zero harm to people and the environment in tailings management. Full article
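The mIoU and mPA figures quoted above are standard semantic-segmentation metrics. A small self-contained sketch (using a toy two-class label map, not the paper's data) shows how they are computed:

```python
import numpy as np

def miou_mpa(y_true, y_pred, num_classes):
    """Mean Intersection over Union and mean Pixel Accuracy for
    semantic-segmentation label maps (integer class ids)."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    ious, paccs = [], []
    for c in range(num_classes):
        t, p = y_true == c, y_pred == c
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        if union:                       # skip classes absent everywhere
            ious.append(inter / union)
        if t.sum():                     # per-class pixel accuracy (recall)
            paccs.append(inter / t.sum())
    return float(np.mean(ious)), float(np.mean(paccs))

# Tiny example: 1 = beach, 0 = water (hypothetical masks).
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 1]])
miou, mpa = miou_mpa(gt, pred, num_classes=2)   # 7/12 and 11/15 here
```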
Show Figures

Graphical abstract
Figure 1
<p>Diagram illustrating common BLIs monitoring methods. (<b>a</b>) Equidistant beach width sign placed along the beach section used for the visual identification method. (<b>b</b>) The radar level gauge method, illustrating its setup and data collection technique. (<b>c</b>) The laser ranging method, illustrating the device setup at the dam crest, the measurement of the irradiated laser beam angle <math display="inline"><semantics> <mi>α</mi> </semantics></math>, and the distance <span class="html-italic">l</span> to the decant pond boundary. (<b>d</b>) The seepage backcalculation method, illustrating the placement of piezometers and the principle for calculating the BLIs. (<b>e</b>) Arrangement layout of the tailings pond BLIs monitoring devices.</p>
Figure 2
<p>Optimized YOLACT model workflow. Here, “+” denotes elementwise addition, “2×” indicates twice linear upsampling, “DWconv” stands for depthwise convolution, and “BN” refers to batch normalization.</p>
Figure 3
<p>Optimized DeepLabV3+ model workflow.</p>
Figure 4
<p>Information and UAV aerial survey results of the tailings pond in the case study. (<b>a</b>) Geographical location. (<b>b</b>) Structural composition. (<b>c</b>,<b>d</b>) UAV aerial survey Digital Orthophoto Map (DOM) texture and Digital Surface Model (DSM) topographic results.</p>
Figure 5
<p>Computational models of tailings dam stability analyses. (<b>a</b>) Two-dimensional (2D) model. (<b>b</b>) Pseudo-three-dimensional (P-3D) model that extends the 2D profiles fitting with water level conditions. (<b>c</b>) Real three-dimensional (R-3D) model that incorporates actual BLIs conditions identified from the proposed approach.</p>
Figure 6
<p>Comparison of automated recognition results. (<b>a1</b>∼<b>a4</b>) Original images. (<b>b1</b>∼<b>b4</b>) Manually annotated results. (<b>c1</b>∼<b>c4</b>) Recognition results of optimized DeepLabV3+. (<b>d1</b>∼<b>d4</b>) Recognition results of optimized YOLACT.</p>
Figure 7
<p>The extracted contours delineating the beach interface and the boundary edge of the decant pond using the optimized YOLACT model.</p>
Figure 8
<p>Beach slope observation results. (<b>a</b>) Equidistant beach slope calculation sections. (<b>b</b>–<b>g</b>): Beach slope values extracted from DSM from section I to section VI plotted against the distance from the embankment crest.</p>
Figure 9
<p>Results of dam stability evaluation. (<b>a</b>) 2D model, dam stability coefficient 2.096 (simplified Bishop method). (<b>b</b>) 2D model, dam stability coefficient 3.10 (SRM method). (<b>c</b>) P-3D model that extended the 2D profiles fitting with the minimum beach width condition, dam stability coefficient 3.86 (SRM method). (<b>d</b>) R-3D model based on actual BLIs condition extracted from the proposed approach, dam stability coefficient 4.00 (SRM method).</p>
Figure 10
<p>Results of the tailings pond seepage analyses. (<b>a</b>) Model A, with water levels set at the minimum beach width condition. (<b>b</b>) Model B, with water levels set at the actual beach boundary line condition and extracted using the optimized YOLACT. (<b>c</b>) Comparison of the simulated seepage phreatic lines for Models A and B under various rainfall scenarios.</p>
22 pages, 15853 KiB  
Article
A New Precise Point Positioning with Ambiguity Resolution (PPP-AR) Approach for Ground Control Point Positioning for Photogrammetric Generation with Unmanned Aerial Vehicles
by Hasan Bilgehan Makineci, Burhaneddin Bilgen and Sercan Bulbul
Drones 2024, 8(9), 456; https://doi.org/10.3390/drones8090456 - 2 Sep 2024
Viewed by 799
Abstract
Unmanned aerial vehicles (UAVs) are now widely preferred systems that are capable of rapid mapping and generating topographic models with relatively high positional accuracy. Since the integrated GNSS receivers of UAVs do not allow for sufficiently accurate outcomes either horizontally or vertically, a conventional method is to use ground control points (GCPs) to perform bundle block adjustment (BBA) of the outcomes. Since the number of GCPs that can be installed limits UAV operations, an important research question is whether the precise point positioning (PPP) method can be an alternative when the real-time kinematic (RTK), network RTK, and post-process kinematic (PPK) techniques cannot be used to measure GCPs. This study introduces a novel approach using precise point positioning with ambiguity resolution (PPP-AR) for GCP positioning in UAV photogrammetry. For this purpose, the results are evaluated by comparing the horizontal and vertical coordinates obtained from 24 h GNSS sessions on six calibration pillars in the field with the horizontal length differences obtained by electronic distance measurement (EDM). Bartlett's test is applied to statistically evaluate the accuracy of the results. The results indicate that the coordinates obtained from a two-hour PPP-AR session show no significant difference from those acquired in a 30 min session, demonstrating PPP-AR to be a viable alternative for GCP positioning. Therefore, the PPP technique can be used for the BBA of GCPs established for UAV-based large-scale map generation. However, four or more GCPs should be used, distributed homogeneously over the study area. Full article
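Bartlett's test, used in this abstract to compare session results, checks whether several samples share a common variance. A minimal sketch with `scipy.stats.bartlett` follows; the residual values are simulated for illustration only and are not the study's measurements.

```python
import numpy as np
from scipy.stats import bartlett

# Hypothetical easting residuals (cm) of one calibration pillar from
# repeated PPP-AR solutions of three session lengths. The study used
# 24 h reference sessions and EDM baselines as ground truth; these
# numbers are assumptions for the demo.
rng = np.random.default_rng(7)
res_30min = rng.normal(0.0, 1.0, 30)
res_2h = rng.normal(0.0, 1.1, 30)
res_24h = rng.normal(0.0, 0.9, 30)

# Bartlett's test evaluates homogeneity of variances across groups;
# a large p-value means no significant difference in precision.
stat, p_value = bartlett(res_30min, res_2h, res_24h)
```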
Show Figures

Figure 1
<p>Workflow scheme of the PPP solution.</p>
Figure 2
<p>Outcomes generated with the photogrammetric process and generation process steps: (<b>a</b>) image acquisition; (<b>b</b>) image processing; (<b>c</b>) sparse cloud; (<b>d</b>) dense cloud; (<b>e</b>) mesh model; (<b>f</b>) DEM; (<b>g</b>) orthomosaic; and (<b>h</b>) sample of orthomosaic.</p>
Figure 3
<p>General workflow scheme.</p>
Figure 4
<p>Study area and distribution of GCPs and pillars.</p>
Figure 5
<p>(<b>a</b>) GCPs; and (<b>b</b>) pillars.</p>
Figure 6
<p>The outcomes: (<b>A</b>) orthomosaic with no GCP; (<b>B</b>) orthomosaic with four GCPs; (<b>C</b>) orthomosaic with eight GCPs; (<b>D</b>) DEM with no GCP; (<b>E</b>) DEM with four GCPs; and (<b>F</b>) DEM with eight GCPs.</p>
Figure 7
<p>Differences between known and measured distances without GCPs.</p>
Figure 8
<p>Differences between known and measured distances with four GCPs.</p>
Figure 9
<p>Differences between known and measured distances with all the GCPs.</p>
Figure 10
<p>Differences between known and measured heights without GCPs.</p>
Figure 11
<p>Differences between known and measured heights with four GCPs.</p>
Figure 12
<p>Differences between known and measured heights with all the GCPs.</p>
Figure 13
<p>Box plot for distance differences without GCPs.</p>
Figure 14
<p>Box plot for distance differences with four GCPs.</p>
Figure 15
<p>Box plot for distance differences with all the GCPs.</p>
Figure 16
<p>Box plot for height differences without GCPs.</p>
Figure 17
<p>Box plot for height differences with four GCPs.</p>
Figure 18
<p>Box plot for height differences with all the GCPs.</p>
15 pages, 5689 KiB  
Article
Modelling Water Availability in Livestock Ponds by Remote Sensing: Enhancing Management in Iberian Agrosilvopastoral Systems
by Francisco Manuel Castaño-Martín, Álvaro Gómez-Gutiérrez and Manuel Pulido-Fernández
Remote Sens. 2024, 16(17), 3257; https://doi.org/10.3390/rs16173257 - 2 Sep 2024
Viewed by 546
Abstract
Extensive livestock farming plays a crucial role in the economy of the agrosilvopastoral systems of the southwestern Iberian Peninsula (known as dehesas and montados in Spanish and Portuguese, respectively), as well as providing essential ecosystem services. The existence of livestock in these areas heavily relies on the effective management of natural resources (annual pastures and water stored in ponds built ad hoc). The present work aims to assess the water availability in these ponds by developing equations to estimate the water volume from the surface area, which can be quantified by means of remote sensing techniques. For this purpose, field surveys were carried out in September 2021, 2022 and 2023 at ponds located on representative farms, using unmanned aerial vehicles (UAVs) equipped with RGB sensors and survey-grade global navigation satellite systems and inertial measurement units (GNSS-IMU). These datasets were used to produce high-resolution 3D models by means of Structure-from-Motion and Multi-View Stereo photogrammetry, facilitating the estimation of the stored water volume within a Geographic Information System (GIS). The Volume–Area–Height (V-A-h) relationships were calibrated to allow conversions between these parameters. Regression analyses were performed using the maximum volume and area data to derive mathematical models (power and quadratic functions) that resulted in significant statistical relationships (r2 > 0.90, p < 0.0001). The root mean square error (RMSE) varied from 1.59 to 17.06 m3 and from 0.16 to 3.93 m3 for the power and quadratic functions, respectively. Both general equations (power and quadratic) were then applied to estimate the water stored in similar water bodies using available aerial or satellite imagery for the period from 1984 to 2021. Full article
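The V-A calibration described above amounts to fitting a power function and a quadratic function to volume-area pairs and comparing their RMSE. A sketch under stated assumptions (the pond data below are synthetic, generated from an exact power law with assumed coefficients 0.02 and 1.5, not the surveyed ponds):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pond data: flooded area (m^2) and volume (m^3).
area = np.array([120.0, 260.0, 410.0, 580.0, 770.0, 990.0])
volume = 0.02 * area ** 1.5          # exact synthetic power law

def power_fn(A, a, b):
    """V = a * A**b, the 'power' form of the V-A relationship."""
    return a * np.power(A, b)

# Power fit (nonlinear least squares) and quadratic fit (polyfit).
p_pow, _ = curve_fit(power_fn, area, volume, p0=(0.01, 1.2), maxfev=10000)
p_quad = np.polyfit(area, volume, 2)

rmse_pow = float(np.sqrt(np.mean((power_fn(area, *p_pow) - volume) ** 2)))
rmse_quad = float(np.sqrt(np.mean((np.polyval(p_quad, area) - volume) ** 2)))
```

With real survey data both fits carry residual error, and comparing `rmse_pow` against `rmse_quad` is exactly the model-selection step the abstract reports.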
Show Figures

Figure 1
<p>Geographical location of the pilot farms and their watering ponds in Extremadura (<b>A</b>). (<b>B</b>) Parapuños de Doña María (PAR4 and PAR6), (<b>C</b>) La Brava (BRA8), (<b>D</b>) La Barrosa (BAR6).</p>
Figure 2
<p>Example of image data obtained from the workflow in pond no. 8 of La Brava. (<b>A</b>) Orthophotography, (<b>B</b>) DEM and (<b>C</b>) Point cloud RGB image [<a href="#B30-remotesensing-16-03257" class="html-bibr">30</a>].</p>
Figure 3
<p>Illustrative scatterplots of the V-A-h parameters measured (represented by black dots) and obtained power (blue) and quadratic (red) regression lines. V-A: Par6, V-h: Par4, A-h: Bra8.</p>
Figure 4
<p>Scatterplots of the V-A-h parameters measured (represented by dots) and obtained power (blue) and quadratic (red) relationships using the whole dataset (11 ponds).</p>
Figure 5
<p>Comparison of the RMSE values from using the different general equations.</p>
Figure 6
<p>Errors as a function of the volume of water stored. Column (<b>A</b>): V-A relationships. Column (<b>B</b>): V-h relationships. Column (<b>C</b>): A-h relationships.</p>
Figure 7
<p>Plots of the analysis of errors as a function of the volume of water stored, using the general equations for the V-A relationships of the whole available dataset.</p>
Figure 8
<p>Temporal evolution of the average water-storing capacity of each farm.</p>
Figure 9
<p>Power (blue bar) and quadratic (red bar) functions applied to Vmax for all ponds in the pilot farms. The real maximum volume (green bar) is also shown for the four ponds where the real volume was available.</p>
28 pages, 25203 KiB  
Article
Integrating Physical-Based Models and Structure-from-Motion Photogrammetry to Retrieve Fire Severity by Ecosystem Strata from Very High Resolution UAV Imagery
by José Manuel Fernández-Guisuraga, Leonor Calvo, Luis Alfonso Pérez-Rodríguez and Susana Suárez-Seoane
Fire 2024, 7(9), 304; https://doi.org/10.3390/fire7090304 - 27 Aug 2024
Viewed by 743
Abstract
We propose a novel mono-temporal framework with a physical basis and ecological consistency to retrieve fire severity at very high spatial resolution. First, we sampled the Composite Burn Index (CBI) in 108 field plots that were subsequently surveyed through unmanned aerial vehicle (UAV) flights. Then, we mimicked the field methodology for CBI assessment in the remote sensing framework. CBI strata were identified through individual tree segmentation and geographic object-based image analysis (GEOBIA). In each stratum, wildfire ecological effects were estimated through the following methods: (i) the vertical structural complexity of vegetation legacies was computed from 3D point clouds, as a proxy for biomass consumption; and (ii) vegetation biophysical variables were retrieved from multispectral data by inversion of the PROSAIL radiative transfer model, with a direct physical link to the vegetation legacies remaining after canopy scorch and torching. The CBI scores predicted from UAV ecologically related metrics at the strata level showed a close fit to the field-measured CBI scores (R2 > 0.81 and RMSE < 0.26). Conversely, the conventional retrieval of fire effects using a battery of UAV structural and spectral predictors (point height distribution metrics and spectral indices) computed at the plot level performed much worse (R2 = 0.677 and RMSE = 0.349). Full article
(This article belongs to the Special Issue Drone Applications Supporting Fire Management)
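The R2 and RMSE values quoted in this abstract are the standard goodness-of-fit pair for observed-versus-predicted comparisons. A minimal sketch of how they are computed, using hypothetical CBI scores rather than the study's data:

```python
import numpy as np

def r2_rmse(observed, predicted):
    """Coefficient of determination (R2) and RMSE of predictions
    against field observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((observed - predicted) ** 2)))
    return float(1.0 - ss_res / ss_tot), rmse

# Hypothetical strata-level CBI scores (0-3 scale); illustration only.
cbi_field = [0.4, 1.1, 1.8, 2.3, 2.9]
cbi_pred = [0.5, 1.0, 1.9, 2.2, 3.0]
r2, rmse = r2_rmse(cbi_field, cbi_pred)
```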
Show Figures

Figure 1
<p>Workflow summarizing the methodological approach followed in the present study.</p>
Figure 2
<p>Location of Folledo (<b>A</b>) and Lavadoira (<b>B</b>) wildfires in the western Mediterranean Basin. We show the extent of the UAV surveys and the location of the Composite Burn Index (CBI) field plots within the wildfire perimeters. The background image is a Landsat-8 false color composite (R = band 7; G = band 5; B = band 4). Wildfire perimeters were obtained from the Copernicus Emergency Management Service (EMS).</p>
Figure 3
<p>Detailed view of a true color (RGB) composite of the multispectral orthomosaic (<b>left</b>) and the labeled map for the classes of interest (<b>right</b>) for the surroundings of two CBI plots (red square) with high land cover heterogeneity in the Foyedo wildfire. Non-interest classes included canopy shadows and bare soil/litter.</p>
Figure 4
<p>Boxplots showing the relationship between the CBI scores and the strata height. We included one-way ANOVA and Tukey’s HSD post hoc results. Lowercase letters denote significant differences in the CBI score between strata at the 0.05 level.</p>
Figure 5
<p>Normalized 3D-point cloud profiles and individual segmented trees for representative CBI plots of each fire severity category and dominated by <span class="html-italic">Pinus pinaster</span> with same pre-fire structure.</p>
Figure 6
<p>Comparison of structural (canopy density) and spectral (fractional vegetation cover -FCOVER-, brown pigments fraction -C<sub>brown</sub>-, and canopy water content -C<sub>w</sub>-) UAV-derived metrics with ecological sense describing fire effects across fire severity categories in the CBI field plots. The vegetation stratum higher than 20 m in height was not considered because it was present only in five CBI plots (ratio #observations/#predictors close to 1:1 in further analyses). We included one-way ANOVA and Tukey’s HSD post hoc results. Lowercase letters denote significant differences in the CBI score between strata at the 0.05 level.</p>
Figure 7
<p>Univariate relationships between structural (canopy density) and spectral (fractional vegetation cover -FCOVER-, brown pigments fraction -C<sub>brown</sub>-, and canopy water content -C<sub>w</sub>-) UAV-derived metrics with the CBI scores aggregated by strata. The vegetation stratum higher than 20 m in height was not considered because it was present only in five CBI plots (ratio #observations/#predictors close to 1:1 in further analyses). The solid red line represents the linear fit evaluated through the coefficient of determination (R<sup>2</sup>) in ordinary least square (OLS) models.</p>
Figure 8
<p>Relationships between field-measured and predicted CBI scores aggregated by strata using structural (canopy density) and spectral (fractional vegetation cover -FCOVER-, brown pigments fraction -C<sub>brown</sub>-, and canopy water content -C<sub>w</sub>-) UAV-derived metrics. The vegetation stratum higher than 20 m was not considered because it was present only in five CBI plots (ratio #observations/#predictors close to 1:1 in further analyses). The solid red line represents the linear fit evaluated through the coefficient of determination (R<sup>2</sup>) in ordinary least square (OLS) models.</p>
Figure 9
<p>Comparison of observed and predicted plot-level CBI values for the interval model validation (<b>A</b>), external model validation (<b>B</b>), benchmark #1 (<b>C</b>) and benchmark #2 (<b>D</b>) scenarios (see <a href="#sec2dot6-fire-07-00304" class="html-sec">Section 2.6</a>). Dashed black lines denote the CBI category thresholds.</p>
29 pages, 5712 KiB  
Article
Advanced Semi-Automatic Approach for Identifying Damaged Surfaces in Cultural Heritage Sites: Integrating UAVs, Photogrammetry, and 3D Data Analysis
by Tudor Caciora, Alexandru Ilieș, Grigore Vasile Herman, Zharas Berdenov, Bahodirhon Safarov, Bahadur Bilalov, Dorina Camelia Ilieș, Ștefan Baias and Thowayeb H. Hassan
Remote Sens. 2024, 16(16), 3061; https://doi.org/10.3390/rs16163061 - 20 Aug 2024
Viewed by 458
Abstract
The analysis and preservation of cultural heritage sites are critical for maintaining their historical and architectural integrity, as they can be damaged by various factors, including climatic, geological, geomorphological, and human actions. Based on this, the present study proposes a semi-automatic, non-learning-based method for detecting degraded surfaces within cultural heritage sites by integrating UAV, photogrammetry, and 3D data analysis. A 20th-century fortification from Romania was chosen as the case study due to its physical characteristics and state of degradation, making it ideal for testing the methodology. Images were collected using UAV and terrestrial sensors and processed to create a detailed 3D point cloud of the site. The developed pipeline effectively identified degraded areas, including cracks and material loss, with high accuracy. The classification and segmentation algorithms, including K-means clustering, geometrical features, RANSAC, and FACETS, improved the detection of destructured areas. The combined use of these algorithms facilitated a detailed assessment of the structural condition. This integrated approach demonstrated that the algorithms have the potential to complement each other, minimizing individual limitations and accurately identifying degraded surfaces. Even though some limitations were observed, such as the potential overestimation of false-negative and false-positive areas, the damaged surfaces were extracted with high precision. The methodology proved to be a practical and economical solution for cultural heritage monitoring and conservation, offering high accuracy and flexibility. Among its greatest advantages are its ease of implementation, its execution speed, and the possibility of using entirely open-source software. This approach can be easily adapted to various heritage sites, significantly contributing to their protection and valorization. Full article
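Of the algorithms named above, RANSAC plane extraction is the easiest to sketch. The following minimal NumPy implementation is not the authors' pipeline: the synthetic wall, the protruding patch standing in for a damaged surface, the noise level, and the 2 cm distance threshold are all assumptions. It illustrates the principle of flagging points that fall off the dominant plane as candidate degraded surfaces.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.02, seed=None):
    """Minimal RANSAC plane fit: returns (normal, d, inlier_mask) for
    the plane n.x + d = 0 supported by the most points within `threshold`."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Synthetic facade: a flat wall (z ~ 0) plus a small protruding patch.
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 2, 400), rng.uniform(0, 2, 400),
                        rng.normal(0, 0.005, 400)])
patch = np.column_stack([rng.uniform(0.8, 1.0, 40), rng.uniform(0.8, 1.0, 40),
                         rng.uniform(0.05, 0.10, 40)])
pts = np.vstack([wall, patch])

normal, d, inliers = ransac_plane(pts, seed=2)
n_damaged = int((~inliers).sum())        # points off the dominant plane
```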
Show Figures

Figure 1
<p>The location of the analyzed casemate at the level of Romania and the Nojorid commune.</p>
Figure 2
<p>Implemented workflow for structural analysis and degradation detection.</p>
Figure 3
<p>The dense PC of the analyzed casemate and the classification of points according to class, in vegetation, ground, and man-made object ((<b>a</b>) the dense PC of raw points; and (<b>b</b>) the dense PC of classified points).</p>
Figure 4
<p>The results obtained for the eight geometric characteristics of the analyzed PC (GF-PC—geometric features of the PC; MF-GF-PC—the mathematical formula for calculating the geometric characteristics of the PC; and W-GF—the weight of the geometric features in the final model).</p>
Figure 5
<p>(<b>a</b>) Visual representation and statistical distribution of DI values for the original dataset, and (<b>b</b>) visual representation and statistical distribution of DI values for the independent dataset within the model and on the real object.</p>
Figure 6
<p>The identification of the limitations of the created DI algorithm, due to the recognition of some flat surfaces without surface changes as NDS, even if they are part of a more extensive DS ((<b>a</b>) the results obtained for DI within the analyzed PC; and (<b>b</b>) the PC visualization in RGB).</p>
Figure 7
<p>The results generated following the implementation of the RANSAC algorithm on the datasets related to the NDS obtained after the DI analysis ((<b>a</b>) the initial dataset after the application of RANSAC; (<b>b</b>) the planes considered to be part of NDS; (<b>c</b>) the cylindrical geometric shapes considered to be part of NDS; (<b>d</b>) the spherical geometric shapes considered to be part of NDS; (<b>e</b>) the independent datasets considered to be part of NDS, after the elimination of non-conforming values; and (<b>f</b>) the independent datasets considered to be part of DS, represented by PC and geometric shapes).</p>
Figure 8
<p>The identification of some limitations of the geometric representations generated by the RANSAC algorithm, due to the recognition of slightly different surfaces from the normal as belonging to the same plane and how they can be determined with greater accuracy by the FACETS algorithm ((<b>a</b>) PC belonging to the same plane, which must therefore be represented by surfaces with similar characteristics; (<b>b</b>) dividing the same PC by facets into different families of planes; and (<b>c</b>) visualization of areas with shortcomings within the real object).</p>
Figure 9
<p>The results obtained after the implementation of the FACETS algorithm on the datasets related to the NDS obtained after the RANSAC analysis ((<b>a</b>) the initial dataset after the application of FACETS, represented according to the family; (<b>b</b>) the independent datasets considered to be NDS; (<b>c</b>) the sets of independent data considered to be DS; and (<b>d</b>) visualization of areas with shortcomings within the real object).</p>
Figure 10
<p>The spatial distribution of the areas that recorded DS and NDS within the PC of the analyzed casemate and their quantitative and qualitative analysis (FPC-DS—final PC that represents DS; MDA—major damage areas; MWDA—major areas without damages; DI—the degradations identified following the DI algorithm implementation; RANSAC—degradations identified by RANSAC algorithm; FACETS—degradations identified by FACETS algorithm; E-OOI—points removed because they were outside the area of interest; and TS-PC—total surface of the model).</p>
Figure 11
<p>Implementation of the DI, RANSAC, and FACETS within the Cheresig Tower and obtaining the final DS and NDS.</p>
Figure 12
<p>The spatial distribution of the areas that recorded DS and NDS within the PC of the analyzed historical structure and their quantitative and qualitative analysis (the graphs show the number of points related to DS eliminated after each stage, respectively, and the number of points associated with NDS obtained after each stage).</p>
18 pages, 14491 KiB  
Article
Influence of Main Flight Parameters on the Performance of Stand-Level Growing Stock Volume Inventories Using Budget Unmanned Aerial Vehicles
by Marek Lisańczuk, Grzegorz Krok, Krzysztof Mitelsztedt and Justyna Bohonos
Forests 2024, 15(8), 1462; https://doi.org/10.3390/f15081462 - 20 Aug 2024
Viewed by 667
Abstract
Low-altitude aerial photogrammetry can be an alternative source of forest inventory data and a practical tool for rapid forest attribute updates. The availability of low-cost unmanned aerial systems (UASs) and continuous technological advances in their flight duration and automation capabilities make these solutions interesting tools for supporting various forest management needs. However, any practical application requires a priori empirical validation and optimization steps, especially if a system is to be used under different forest conditions. This study investigates the influence of the main flight parameters, i.e., ground sampling distance (GSD) and photo overlap, on the performance of individual tree detection (ITD) stand-level forest inventories based on photogrammetric data obtained from budget unmanned aerial systems. The investigated sites represented the most common forest conditions in the Polish lowlands. The results showed no direct influence of the investigated factors on growing stock volume (GSV) predictions within the analyzed range, i.e., overlap from 80 × 80 to 90 × 90% and GSD from 2 to 6 cm. However, we found that the tree detection ratio influenced the estimation errors, which ranged from 0.6 to 15.3%. The estimates were generally coherent across repeated flights and were not susceptible to the weather conditions encountered. The study demonstrates the suitability of the ITD method for small-area forest inventories using photogrammetric UAV data, as well as its potential optimization for larger-scale surveys. Full article
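A common building block of ITD inventories like the one above is local-maxima tree-top detection on a canopy height model (CHM). The sketch below is illustrative only, not the authors' workflow: the Gaussian crowns, cell size, 2 m height cut-off, and 7-cell search window are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(chm, min_height=2.0, window=5):
    """Local-maxima individual tree detection on a canopy height model:
    a cell is a tree top if it equals the maximum of its `window`
    neighbourhood and exceeds `min_height` (metres)."""
    local_max = maximum_filter(chm, size=window, mode="nearest")
    return np.argwhere((chm == local_max) & (chm > min_height))

# Synthetic CHM (1 m cells) with two Gaussian crowns on flat ground.
x, y = np.meshgrid(np.arange(40), np.arange(40))
chm = (18.0 * np.exp(-((x - 10) ** 2 + (y - 12) ** 2) / 8.0)
       + 22.0 * np.exp(-((x - 28) ** 2 + (y - 25) ** 2) / 10.0))
tops = detect_tree_tops(chm, min_height=2.0, window=7)   # (row, col) pairs
```

Detected tops are then paired with crown segmentation and allometric models to estimate per-tree and stand-level GSV; the detection ratio of such a step is exactly the quantity the study links to estimation error.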
Figure 1: Flowchart of applied methodology.
Figure 2: Investigation area: range of Polish lowlands (a), Skierniewice forest district (b), selected sites (c). Outline of the district in orange. A, B, C, D—selected stands.
Figure 3: The most common forest site types in the Polish lowlands. Abbreviations: C—coniferous, M—mixed, F—fresh, B—broadleaved.
Figure 4: Fragments of orthomosaics presenting areas of interest within the analyzed stands. (A–D)—selected stands.
Figure 5: Visual results of sample segmentation over test stand A.
Figure 6: 3D model of validation stand A based on TLS data, with detected stems.
Figure 7: Scatterplots of the winning model: (a) fitted vs. observed, (b) fitted vs. residuals (m³).
Figure 8: Histogram of residual frequency.
Figure 9: GSV estimates according to GSD. Missing values were interpolated.
Figure 10: Relationship between tree detection ratio and GSV estimation error.
12 pages, 2983 KiB  
Article
Precise Positioning in Nitrogen Fertility Sensing in Maize (Zea mays L.)
by Tri Setiyono
Sensors 2024, 24(16), 5322; https://doi.org/10.3390/s24165322 - 17 Aug 2024
Viewed by 588
Abstract
This study documented the contribution of precise positioning involving a global navigation satellite system (GNSS) and a real-time kinematic (RTK) system in unmanned aerial vehicle (UAV) photogrammetry, particularly for establishing the coordinate data of ground control points (GCPs). Without augmentation, GNSS positioning solutions are inaccurate and introduce a high degree of uncertainty when such data are used in UAV data processing for mapping. The evaluation included a comparative assessment of sample coordinates measured with RTK and with an ordinary GPS device, and the application of precise GCP data for UAV photogrammetry in field crop research, monitoring nitrogen deficiency stress in maize. This study confirmed the superior performance of the RTK system in providing positional data, with a 4 cm bias compared to 311 cm for the non-augmented GNSS technique, making it suitable for use in agronomic research involving row crops. Precise GCP data in this study allowed the UAV-based Normalized Difference Red-Edge Index (NDRE) data to effectively characterize maize crop responses to N nutrition during the growing season, with detailed analyses revealing the causal relationship: a compromised optimum canopy chlorophyll content under a limiting nitrogen environment was the reason for reduced canopy cover. Without RTK-based GCPs, different and, to some degree, misleading results were evident; this study therefore underscores the need for precise GCP data in scientific investigations that use UAV photogrammetry for agronomic field crop research. Full article
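The planimetric precision comparison reported above (4 cm for RTK vs. 311 cm for non-augmented GNSS) amounts to the horizontal distance between each test fix and its reference coordinate in a projected CRS. A minimal sketch with fabricated illustrative fixes (not the study's data):

```python
import math

def planimetric_errors(test_points, reference_points):
    """Horizontal (2D) distance between each test fix and its reference point,
    both given as (easting, northing) pairs in metres in the same projected CRS."""
    return [math.hypot(tx - rx, ty - ry)
            for (tx, ty), (rx, ry) in zip(test_points, reference_points)]

# Illustrative fixes: an RTK-grade fix a few centimetres off, and a
# non-augmented GNSS fix a few metres off, against the same reference.
reference = [(500000.00, 3600000.00)]
rtk_fix = [(500000.03, 3600000.03)]          # ~4 cm offset
autonomous_fix = [(500002.20, 3599997.80)]   # ~3.1 m offset

print(round(planimetric_errors(rtk_fix, reference)[0], 3))
print(round(planimetric_errors(autonomous_fix, reference)[0], 2))
```

Summarizing these per-point distances (mean, median, 25th/75th percentiles) is what the histogram in the paper's Figure 4 reports.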
(This article belongs to the Special Issue Sensor-Based Crop and Soil Monitoring in Precise Agriculture)
Figure 1: EMLID RS2 RTK system used in this study, showing the base unit mounted on a survey tripod via a tribrach attachment plate (a) and the rover unit mounted on the survey rod stabilized with a survey dipod (during stand-by mode), with the Nautiz X6 Android device attached on the survey rod (b).
Figure 2: The online submission page of the Online Positioning User Service (OPUS) from the US National Oceanic and Atmospheric Administration (NOAA) (a) and the resulting coordinate solution retrieved via email for the RTK base station at the LSU AgCenter Red River Research Station in Bossier City, LA, USA (b).
Figure 3: Registered GCP data in Pix4DMapper during the initial processing step involving UAV raw camera data. Green and yellow symbols on the photos of GCP markers indicate registered coordinates of the points.
Figure 4: Comparison of planimetric precision of RTK versus GPS coordinate establishment based on evaluation at the Central Research Station, Baton Rouge, LA, USA, on 9 November 2023 (geographic location within Louisiana is shown by the blue symbol in the state-level overview map). The purple arrow in the main map points to the 13th assessment point that was not aligned with the boundary line of the block (yellow dashed line). The histogram whiskers represent the 25th and 75th percentiles of the computed distance data between the test and reference points, whereas the solid and dashed lines in the histogram represent mean and median precision values, respectively. The background UAV-based RGB map was based on aerial data acquired on 6 June 2023. The source of the background imagery overviewing the Central Research Station was the United States Department of Agriculture (USDA) National Agriculture Imagery Program (NAIP) (image date of 11 November 2021).
Figure 5: Aerial map of Normalized Difference Red-Edge Index (NDRE) for the maize N rates and excessive water experiment on 3 June 2022 (67 DAP) at block H1 of the LSU AgCenter Red River Research Station, Bossier City, LA, USA. The study location (block H1) is highlighted in cyan in the overview map of the research station, shown with the background aerial imagery map from the United States Department of Agriculture (USDA) National Agriculture Imagery Program (NAIP) (image date of 13 August 2019). The location of the station is indicated by the red symbol in the global overview map.
Figure 6: Time series of extracted NDRE data based on entire plot boundaries by N rates (225 kg ha⁻¹ in (a,e), 180 kg ha⁻¹ in (b,f), 45 kg ha⁻¹ in (c,g), and 0 kg ha⁻¹ in (d,h)), water treatments (W1 in blue and solid line for control, W2 in red and dashed line for excessive), and with (left, +R) or without (right, −R) RTK-based GCPs.
Figure 7: Time series of extracted NDRE data based on narrow polygons along crop rows by N rates (224 kg N ha⁻¹ in (a,e), 180 kg N ha⁻¹ in (b,f), 45 kg N ha⁻¹ in (c,g), and 0 kg N ha⁻¹ in (d,h)), water treatments (W1 in blue and solid line for control, W2 in red and dashed line for excessive), and with (left, +R) or without (right, −R) RTK-based GCPs.
Figure 8: Relationship between NDRE at 87 DAP and maize yield in this study. Data are shown as light blue circles. The blue dashed line represents the regression curve, labeled with the equation and R² value.
20 pages, 7800 KiB  
Article
Hydraulic Risk Assessment on Historic Masonry Bridges Using Hydraulic Open-Source Software and Geomatics Techniques: A Case Study of the “Hannibal Bridge”, Italy
by Ahmed Kamal Hamed Dewedar, Donato Palumbo and Massimiliano Pepe
Remote Sens. 2024, 16(16), 2994; https://doi.org/10.3390/rs16162994 - 15 Aug 2024
Viewed by 641
Abstract
This paper investigates the impact of flood-induced hydrodynamic forces and high discharge on the masonry arch "Hannibal Bridge" (called "Ponte di Annibale" in Italy) using the Hydrologic Engineering Center's River Analysis System (HEC-RAS) v6.5.0 hydraulic numerical method, incorporating Unmanned Aerial Vehicle (UAV) photogrammetry and aerial Light Detection and Ranging (LIDAR) data for visual analysis. The research highlights the highly transient behavior of fast flood flows, particularly when carrying debris, and their effect on bridge superstructures. Utilizing a Digital Elevation Model to extract cross-sectional and elevation data, the research examined 23 profiles over 800 m of the river. The results indicate that the maximum allowable water depth in front of the bridge is 4.73 m, with a Manning's coefficient of 0.03 and a longitudinal slope of 9 m per kilometer. The proposed method of identifying risks through HEC-RAS modeling therefore significantly improves the conservation of masonry bridges by providing precise topographical and hydrological data for accurate simulations. Moreover, the detailed information obtained from LIDAR and UAV photogrammetry about the bridge's materials and structures can be incorporated into the conservation models. This comprehensive approach ensures that preservation efforts not only address the immediate hydrodynamic threats but are also informed by a thorough understanding of the bridge's structural and material conditions. Understanding rating curves is essential for water management and flood forecasting; the study confirms a Manning roughness coefficient of 0.03 as suitable for smooth open-channel flows and emphasizes the importance of geomorphological conditions in hydraulic simulation. Full article
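The Manning coefficient (n = 0.03) and slope (9 m/km) quoted above feed Manning's equation for open-channel flow, V = (1/n)·R^(2/3)·S^(1/2), which underlies the HEC-RAS rating curves. A minimal sketch for a rectangular channel; the channel width and the use of the 4.73 m depth as a flow depth are illustrative assumptions, not the study's cross-section data:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's equation in SI units:
    V = (1/n) * R^(2/3) * S^(1/2)"""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative rectangular channel; n = 0.03 and S = 9 m/km = 0.009
# follow the abstract, the 20 m width is an assumption.
width, depth = 20.0, 4.73
area = width * depth                 # flow area (m^2)
radius = area / (width + 2 * depth)  # hydraulic radius = area / wetted perimeter
v = manning_velocity(0.03, radius, 0.009)
q = v * area                         # discharge (m^3/s)
print(round(v, 2), round(q, 1))
```

Repeating this for a range of depths gives the stage-discharge pairs that make up a rating curve such as the one shown for Station 736.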
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research II)
Figure 1: Flowchart of research method and software used in the experimentation.
Figure 2: Outline of the study area: map (a), orthophoto (b), and image of the Hannibal Bridge (c).
Figure 3: Metric representation of the bridge: 3D point cloud (a) and façade (b).
Figure 4: Layout of the study area stations in the model.
Figure 5: Rating curve of Station 736.
Figure 6: The rating curves of the stations.
Figure 7: Velocity distribution through the study area.
Figure 8: The difference between the SRTM and ALS geometry data at Station 736.
Figure 9: The difference between the SRTM and ALS geometry data along the longitudinal profile.
Figure 10: The difference between geometry data in the applied model.
Figure 11: The water velocity distribution of the geometry data used in the model: result with ALS (a) and result with SRTM (b).
Figure A1: Profile of the bridge with the allowed maximum water depth.