
 
 

Topic Editors

Perception, Robotics, and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
Department of Geomatics Engineering, University of Calgary, 2500 University Dr. NW, Calgary, AB T2N 1N4, Canada

AI for Natural Disasters Detection, Prediction and Modeling

Abstract submission deadline
25 April 2025
Manuscript submission deadline
25 July 2025

Topic Information

Dear Colleagues,

In recent years, we have witnessed escalating climate change and its increasing impact on global ecosystems, human lives, and the world economy. This situation calls for advanced tools that leverage artificial intelligence (AI) for the early detection, prediction, and modeling of natural disasters. The increasing frequency and intensity of events such as wildfires, flooding, storms, and other catastrophic incidents necessitate innovative approaches to mitigation and response. This call for papers invites contributions that address the critical aspects of this rapidly evolving field, focusing on the integration of AI methodologies with remote sensing data. We encourage submissions that span a wide range of topics, including reviews of state-of-the-art AI applications for natural disaster management, risk assessment, and hazard prediction; the use of AI to detect and track specific events; modeling techniques employing AI; and the development of advanced forecasting models utilizing AI methodologies.

The aim of this call is to bring together researchers and experts from various areas to foster collaborative efforts in developing cutting-edge solutions that will enhance our ability to anticipate, understand, and respond to the increasing challenges posed by natural disasters in an era of climate change.

Dr. Moulay A. Akhloufi
Dr. Mozhdeh Shahbazi
Topic Editors

Keywords

  • AI for natural disasters
  • forest fires, flooding, storms, earthquakes
  • forest monitoring, environmental monitoring, natural risks
  • forecasting models, mitigation, and response
  • earth observation, remote sensing
  • multispectral, hyperspectral, LiDAR, photogrammetry
  • machine learning, deep learning, data fusion, image processing
  • mapping, modelling, digital twins

Participating Journals

Journal                            Impact Factor  CiteScore  Launched  First Decision (median)  APC
AI                                 3.1            7.2        2020      17.6 days                CHF 1600
Big Data and Cognitive Computing   3.7            7.1        2017      18 days                  CHF 1800
Fire                               3.0            3.1        2018      18.4 days                CHF 2400
GeoHazards                         -              2.6        2020      20.4 days                CHF 1000
Remote Sensing                     4.2            8.3        2009      24.7 days                CHF 2700

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (7 papers)

24 pages, 8367 KiB  
Article
Detecting Hailstorms in China from FY-4A Satellite with an Ensemble Machine Learning Model
by Qiong Wu, Yi-Xuan Shou, Yong-Guang Zheng, Fei Wu and Chun-Yuan Wang
Remote Sens. 2024, 16(18), 3354; https://doi.org/10.3390/rs16183354 - 10 Sep 2024
Abstract
Hail poses a significant meteorological hazard in China, leading to substantial economic and agricultural damage. To enhance the detection of hail and mitigate these impacts, this study presents an ensemble machine learning model (BPNN+Dtree) that combines a backpropagation neural network (BPNN) and a decision tree (Dtree). Using FY-4A satellite and ERA5 reanalysis data, the model is trained on geostationary satellite infrared data and environmental parameters, offering comprehensive, all-day, and large-area hail monitoring over China. The ReliefF method is employed to select 13 key features from 29 physical quantities, emphasizing cloud-top and thermodynamic properties over dynamic ones as input features for the model to enhance its hail differentiation capability. The BPNN+Dtree ensemble model harnesses the strengths of both algorithms, improving the probability of detection (POD) to 0.69 while maintaining a reasonable false alarm ratio (FAR) on the test set. Moreover, the model’s spatial distribution of hail probability more closely matches the observational data, outperforming the individual BPNN and Dtree models. Furthermore, it demonstrates improved regional applicability over overshooting top (OT)-based methods in the China region. The identified high-frequency hail areas correspond to the north-south movement of the monsoon rain belt and are consistent with the northeast-southwest belt distribution observed using microwave-based methods.
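The POD and FAR values reported in the abstract are standard contingency-table scores for binary event detection. As a generic illustration (a minimal Python sketch with invented toy data, not the authors' code), they can be computed from hail/no-hail predictions as follows:

```python
import numpy as np

def pod_far(y_true, y_pred):
    """Probability of detection and false alarm ratio for binary events."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    hits = np.sum(y_true & y_pred)           # hail correctly detected
    misses = np.sum(y_true & ~y_pred)        # hail the model missed
    false_alarms = np.sum(~y_true & y_pred)  # non-hail flagged as hail
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Toy example: 10 samples, 5 of which are observed hail events
truth = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
pred  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
pod, far = pod_far(truth, pred)  # pod = 4/5 = 0.8, far = 1/5 = 0.2
```

A POD of 0.69 therefore means roughly 69% of observed hail events were detected, while the FAR measures what fraction of the model's alarms were spurious.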
Figures

Figure 1. Study area and sample distribution. Dark blue circles indicate hailstorm samples, while light blue circles denote non-hailstorm samples.
Figure 2. Technical roadmap for developing an ensemble machine learning model for hailstorm identification.
Figure 3. (a) Flowchart of the iterative H-minima transform method for identifying convective clouds. (b) Schematic of the process for identifying convective clouds with the iterative H-minima transform method.
Figure 4. Feature importance ranking using the ReliefF method. Six distinct breakpoints are indicated by yellow ellipses. The red dashed line at the fourth breakpoint is the cutoff for this study; the 13 features with importance scores above the line were chosen as model inputs, and the rest were excluded.
Figure 5. Identification of convective clouds using the iterative H-minima transform method on the FY-4A 10.8-μm infrared channel image recorded at 06:38 UTC on 11 August 2018. (a–h) show the outputs at successive time steps. In (h), the identified convective clouds are differentiated by color; yellow plus signs mark the positions of observed hail, and magenta ellipses outline the corresponding hailstorms.
Figure 6. Joy plot of the selected features. The blue-filled curve is the Gaussian kernel density estimate for hailstorm samples; the orange curve is the same for non-hailstorm samples. The horizontal axis is the normalized value of each feature. The black dashed line marks the feature value of maximum probability density, and the white numbers give the corresponding physical values.
Figure 7. Joy plot of the unselected features, presented as in Figure 6.
Figure 8. Confusion matrices on the test set: (a) BPNN; (b) Dtree; (c) BPNN+Dtree.
Figure 9. Spatial distribution of hail occurrence probability from (a) the BPNN model; (b) the Dtree model; (c) the BPNN+Dtree ensemble model; and (d) 1-h hail event records.
Figure 10. Hail occurrence probability from the BPNN+Dtree ensemble model versus OT-based identification methods: (a) hail cloud pixels identified by the OT method; (b) hail cloud pixels identified by the OTfilter method; (c) hailstorms identified by the BPNN+Dtree ensemble model; (d) 1-h hail event records.
Figure 11. Hail occurrence probability from the BPNN+Dtree ensemble model versus microwave-based identification methods: (a) hailstorms identified by the BPNN+Dtree ensemble model; (b) hail PFs identified by the Ni17 method; (c) hail PFs identified by the CB12 method; (d) hail PFs identified by the Mroz17 method; (e) the published BC19 annual average data; (f) 1-h hail event records.
22 pages, 20392 KiB  
Article
AI-Driven Computer Vision Detection of Cotton in Corn Fields Using UAS Remote Sensing Data and Spot-Spray Application
by Pappu Kumar Yadav, J. Alex Thomasson, Robert Hardin, Stephen W. Searcy, Ulisses Braga-Neto, Sorin C. Popescu, Roberto Rodriguez III, Daniel E. Martin and Juan Enciso
Remote Sens. 2024, 16(15), 2754; https://doi.org/10.3390/rs16152754 - 27 Jul 2024
Abstract
To effectively combat the re-infestation of boll weevils (Anthonomus grandis L.) in cotton fields, it is necessary to address the detection of volunteer cotton (VC) plants (Gossypium hirsutum L.) in rotation crops such as corn (Zea mays L.) and sorghum (Sorghum bicolor L.). The current practice involves manual field scouting at the field edges, which often leads to the oversight of VC plants growing in the middle of fields alongside corn and sorghum. As these VC plants reach the pinhead squaring stage (5–6 leaves), they can become hosts for boll weevil pests. Consequently, it becomes crucial to detect, locate, and accurately spot-spray these plants with appropriate chemicals. This paper focuses on the application of YOLOv5m to detect and locate VC plants during the tasseling (VT) growth stage of cornfields. Our results demonstrate that VC plants can be detected with a mean average precision (mAP) of 79% at an Intersection over Union (IoU) of 50% and a classification accuracy of 78% on images sized 1207 × 923 pixels. The average detection inference speed is 47 frames per second (FPS) on the NVIDIA Tesla P100 GPU-16 GB and 0.4 FPS on the NVIDIA Jetson TX2 GPU, which underscores the relevance and impact of detection speed on the feasibility of real-time applications. Additionally, we show the application of a customized unmanned aircraft system (UAS) for spot-spray applications through simulation based on the developed computer vision (CV) algorithm. This UAS-based approach enables the near-real-time detection and mitigation of VC plants in corn fields, with near-real-time defined as approximately 0.02 s per frame on the NVIDIA Tesla P100 GPU and 2.5 s per frame on the NVIDIA Jetson TX2 GPU, thereby offering an efficient management solution for controlling boll weevil pests.
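The "mAP at an IoU of 50%" reported above means a detection counts as a true positive only when its bounding box overlaps a ground-truth box with Intersection over Union of at least 0.5. A minimal sketch of that matching criterion (illustrative only, with made-up box coordinates, not the authors' pipeline):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box counts as a true positive at IoU >= 0.5:
pred, truth = (10, 10, 50, 50), (15, 15, 55, 55)
match = iou(pred, truth) >= 0.5  # IoU ~ 0.62, so this detection matches
```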
Figures

Figure 1. Experiment field at the Texas A&M University farm near College Station, TX, in Burleson County (96°25′45.9″W, 30°32′07.4″N), where cotton plants were planted in the middle of a corn field to mimic the presence of volunteer cotton plants.
Figure 2. A customized sprayer UAS (broadcast sprayer converted to spot sprayer) with a RedEdge-MX multispectral camera for capturing aerial imagery and an NVIDIA Jetson TX2 computing platform [7].
Figure 3. (A) The customized spot-sprayer UAS flying over an experimental corn field (containing cotton plants planted to mimic volunteer cotton (VC) plants) capturing five-band multispectral images; (B) RGB (red, green, blue) composite image showing a section of the experimental plot with corn at the vegetative tassel (VT) stage and cotton plants mimicking VC plants.
Figure 4. Reflectance panel (type RP 04) imaged with the blue-band sensor of the RedEdge-MX camera on the day of the flight.
Figure 5. General overview of the YOLOv5 network architecture.
Figure 6. Flowchart of the complete workflow showing each step used in this study.
Figure 7. Losses obtained during YOLOv5m training on the training and validation datasets.
Figure 8. Performance metrics obtained during YOLOv5m training.
Figure 9. (A) Precision-recall plot, (B) F1-score vs. confidence score plot, and (C) confusion matrix obtained after training YOLOv5m.
Figure 10. VC plants detected in the middle of a corn field within red bounding boxes (BBs) by the trained YOLOv5m model. The value on each BB is the model's confidence that the box contains a VC plant.
Figure 11. YOLOv5m detection of VC plants in a corn field, deployed on an NVIDIA Jetson TX2 mounted on a custom spot-spray-capable UAS.
Figure 12. Optimal flight path generated by ACO algorithms, displayed on a webpage with the Streamlit Python package.
Figure 13. Spot-spray UAS simulation on MAVProxy (A,B) and Mission Planner (C) GCS. Image A shows the simulated UAS flying from node 1 to 2, image B from node 4 to 5, and image C from node 8 to 9.
Figure 14. Spot-spray nodes generated by Agrosol software (2.87.5) after uploading the CSV file of nodes generated by the ACO algorithm.
16 pages, 6768 KiB  
Article
Landslide Susceptibility Assessment in Active Tectonic Areas Using Machine Learning Algorithms
by Tianjun Qi, Xingmin Meng and Yan Zhao
Remote Sens. 2024, 16(15), 2724; https://doi.org/10.3390/rs16152724 - 25 Jul 2024
Abstract
The eastern margin of the Tibetan Plateau is one of the regions with the most severe landslide disasters on a global scale. With the intensification of seismic activity around the Tibetan Plateau and the increase in extreme rainfall events, the prevention of landslide disasters in the region faces serious challenges. This article selects the Bailong River Basin, located in this region, as the research area, using historical landslide data obtained from high-precision remote sensing image interpretation combined with field validation as the sample library. Using machine learning algorithms and data-driven landslide susceptibility assessment as the methods, 17 commonly used models and 17 important factors affecting the development of landslides were selected to carry out the assessment. The results show that the BaggingClassifier model is particularly well suited to the region, and the landslide susceptibility distribution map of the Bailong River Basin was generated using this model. Road and population density are both high in very-high- and high-susceptibility areas, indicating that there is still significant potential landslide risk in the basin. The quantitative evaluation of the main influencing factors identifies distance to a road as the most important factor. However, because local residents have made widespread use of ancient landslides for settlement and agricultural cultivation over hundreds of years, the vast majority of landslides are likely to have occurred prior to human settlement. The importance of this factor may therefore be overestimated, and the evaluation of the factors still needs to be examined dynamically in conjunction with the development history of the region. The five factors of NDVI, altitude, faults, average annual rainfall, and rivers have a secondary impact on landslide susceptibility. The research results have important significance for the susceptibility assessment of landslides in the complex environment of human–land interaction and for the construction of landslide disaster monitoring and early warning systems in the Bailong River Basin.
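Model comparisons like the one above typically rank classifiers by ROC AUC, which can be computed directly from susceptibility scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen landslide cell outscores a randomly chosen non-landslide cell. A generic sketch with invented toy scores (not the study's data):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outscores a random
    negative, counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Susceptibility scores for 3 landslide (1) and 3 non-landslide (0) cells
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
labels = [1,   1,   1,   0,   0,   0]
auc = roc_auc(scores, labels)  # 8 of 9 positive/negative pairs are ranked correctly
```

This pairwise-ranking view is equivalent to the area under the ROC curve obtained by sweeping a decision threshold.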
Figures

Figure 1. Distribution of historical landslides in the Bailong River Basin.
Figure 2. Correlation heatmap of influencing factors.
Figure 3. Ranking of model accuracy scores.
Figure 4. ROC curves and AUC values of four models with 10-fold cross-validation: (a) BaggingClassifier; (b) DecisionTreeClassifier; (c) KNeighborsClassifier; (d) RandomForestClassifier.
Figure 5. Ranking of ACC scores after model optimization.
Figure 6. Landslide susceptibility map of the Bailong River Basin.
Figure 7. Statistical relationship between road density and population density across susceptibility zones.
Figure 8. Field verification of landslide susceptibility assessment results: (a) susceptibility assessment verification area, (b) Nanqiao landslide, (c) Suoertou and Daxiaowan landslides, (d) Xieliupo landslide, (e) Yahuokou landslide, (f) a small-scale rockfall, (g) Zhongpai landslide, (h) an old landslide, (i) Lijie landslide, (j) an old landslide, (k) a small-scale rockfall.
Figure 9. Quantitative evaluation of the importance of influencing factors.
Figure 10. Landslide and non-landslide area distributions for selected factors: (a) road, (b) NDVI, (c) altitude, (d) fault, (e) annual precipitation index, and (f) river. "1" denotes landslide areas and "0" non-landslide areas.
22 pages, 11376 KiB  
Article
Robust Landslide Recognition Using UAV Datasets: A Case Study in Baihetan Reservoir
by Zhi-Hai Li, An-Chi Shi, Huai-Xian Xiao, Zi-Hao Niu, Nan Jiang, Hai-Bo Li and Yu-Xiang Hu
Remote Sens. 2024, 16(14), 2558; https://doi.org/10.3390/rs16142558 - 12 Jul 2024
Abstract
The task of landslide recognition focuses on extracting the location and extent of landslides over large areas, providing ample data support for subsequent landslide research. This study explores the use of UAV and deep learning technologies to achieve robust landslide recognition in a more rational, simpler, and faster manner. Specifically, the widely successful DeepLabV3+ model was used as a blueprint and a dual-encoder design was introduced to reconstruct a novel semantic segmentation model consisting of Encoder1, Encoder2, Mixer and Decoder modules. This model, named DeepLab for Landslide (DeepLab4LS), considers topographic information as a supplement to DeepLabV3+, and is expected to improve the efficiency of landslide recognition by extracting shape information from relative elevation, slope, and hillshade. Additionally, a novel loss function term—Positive Enhanced loss (PE loss)—was incorporated into the training of DeepLab4LS, significantly enhancing its ability to understand positive samples. DeepLab4LS was then applied to a UAV dataset of Baihetan reservoir, where comparative tests demonstrated its high performance in landslide recognition tasks. We found that DeepLab4LS has a stronger inference capability for landslides with less distinct boundary information, and delineates landslide boundaries more precisely. More specifically, in terms of evaluation metrics, DeepLab4LS achieved a mean intersection over union (mIoU) of 76.0% on the validation set, which is a substantial 5.5 percentage point improvement over DeepLabV3+. Moreover, the study also validated the rationale behind the dual-encoder design and the introduction of PE loss through ablation experiments. Overall, this research presents a robust semantic segmentation model for landslide recognition that considers both optical and topographic semantics of landslides, emulating the recognition pathways of human experts, and is highly suitable for landslide recognition based on UAV datasets.
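The mIoU metric used to compare DeepLab4LS with DeepLabV3+ averages the per-class intersection-over-union of the predicted and ground-truth label maps. A minimal sketch of the computation (toy 4 × 4 masks, not the study's data):

```python
import numpy as np

def mean_iou(pred, truth, n_classes=2):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# 4 x 4 toy masks: 1 = landslide, 0 = background
truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 0],   # one landslide pixel missed
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
miou = mean_iou(pred, truth)  # averages IoU of 3/4 (landslide) and 12/13 (background)
```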
Figures

Figure 1. Different landslide recognition methods and their typical results on a UAV orthophoto: (a) landslide boundary identification by manual judgement; (b) landslide identification based on an object-oriented method; (c) landslide positioning based on an object-based DL method; (d) landslide identification based on a pixel-based DL method.
Figure 2. UAV orthophoto (left) and geographic location (right) of the study area. The orthophoto extends approximately 50 km along the Jinsha River and covers an area of approximately 65 km².
Figure 3. UAV equipment: (a) photograph of the UAV working onsite; (b) Feima D2000 UAV; (c) SONY-DOP 3000 camera.
Figure 4. Input image data for the semantic segmentation model. The orthophoto comprises the red (R), green (G), and blue (B) channels captured by the UAV aerial camera; the relative elevation, slope, hillshade, and ground truth are single-channel.
Figure 5. Workflow of dataset creation, steps 1–6: the ground truth is labeled by manually observing landslide areas in the orthophoto, and the hillshade and slope are exported from the DSM (step 1); the large orthophoto is cropped into an indexed set of small-scale images (step 2); a filter condition on each cropped image is defined and applied (steps 3–4); each indexed set is divided into training and validation parts (step 5); and the final dataset is fed into the model for training and validation (step 6).
Figure 6. Dataset augmentation applied during dataset loading in each epoch: a sequential process of stretching, flipping, Gaussian blur, and rotation, each applied randomly to increase the variability of the final augmented image.
Figure 7. Basic structure of the DeepLabV3+ model, a typical encoder-decoder architecture.
Figure 8. Dilated convolutions with a dilation rate of 4 expand the receptive field of a 3 × 3 convolutional kernel: (a) an orthophoto including an entire landslide; (b) feature pixels for dilated and standard convolutions (yellow: dilated only; blue: standard only; green: both); (c) schematic of the receptive field area.
Figure 9. Basic structure of the ASPP module.
Figure 10. Sketch of common data fusion methods: simple fusion, channel fusion, and dual-encoder fusion.
Figure 11. Basic structure of the DeepLab for Landslide (DeepLab4LS) model, composed of Encoder1, Encoder2, Mixer, and Decoder.
Figure 12. Eight typical segmentation results, numbered 1–8: (a) the original UAV orthophoto; (b) the ground truth determined by landslide experts; (c,d) the segmentation results of the DeepLab4LS and DeepLabV3+ models, respectively.
Figure 13. Feature maps at key nodes of DeepLab4LS. F1: low-level optical feature map (Encoder1 output); F2: high-level optical feature map (Encoder1 output); F3: topographic feature map (Encoder2 output); F4: concatenated feature map before fusion in the Mixer; F5: fused feature map (Mixer output).
22 pages, 7648 KiB  
Article
Fire-RPG: An Urban Fire Detection Network Providing Warnings in Advance
by Xiangsheng Li and Yongquan Liang
Fire 2024, 7(7), 214; https://doi.org/10.3390/fire7070214 - 26 Jun 2024
Abstract
Urban fires are characterized by concealed ignition points and rapid escalation, making the traditional methods of detecting early stage fire accidents inefficient. Thus, we focused on the features of early stage fire accidents, such as faint flames and thin smoke, and established a dataset. We found that these features are mostly medium-sized and small-sized objects. We proposed a model based on YOLOv8s, Fire-RPG. Firstly, we introduced an extra very small object detection layer to enhance the detection performance for early fire features. Next, we optimized the model structure with the bottleneck in GhostV2Net, which reduced the computational time and the parameters. The Wise-IoUv3 loss function was utilized to decrease the harmful effects of low-quality data in the dataset. Finally, we integrated the low-cost yet high-performance RepVGG block and the CBAM attention mechanism to enhance learning capabilities. The RepVGG block enhances the extraction ability of the backbone and neck structures, while CBAM focuses the attention of the model on specific size objects. Our experiments showed that Fire-RPG achieved an mAP of 81.3%, an improvement of 2.2%. In addition, Fire-RPG maintained high detection performance across various fire scenarios. Therefore, our model can provide timely warnings and accurate detection services.
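The RepVGG block cited above is "low-cost yet high-performance" because of structural reparameterization: the 3 × 3 convolution, 1 × 1 convolution, and identity branches used during training can be fused into a single 3 × 3 kernel for inference. A single-channel numpy sketch of the idea (stride 1, no batch normalization; an illustrative simplification, not the Fire-RPG implementation):

```python
import numpy as np

def conv2d(x, k):
    """'Same' single-channel 2D cross-correlation (the deep-learning 'conv'),
    stride 1, zero padding."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Training-time branches: 3x3 conv + 1x1 conv + identity
rng = np.random.default_rng(0)
k3 = rng.normal(size=(3, 3))
k1 = rng.normal(size=(1, 1))
x = rng.normal(size=(6, 6))
y_branches = conv2d(x, k3) + conv2d(x, k1) + x

# Inference-time fusion: fold the 1x1 kernel and the identity (a 1x1 conv
# with weight 1) into the center tap of the 3x3 kernel
k_fused = k3.copy()
k_fused[1, 1] += k1[0, 0] + 1.0
y_fused = conv2d(x, k_fused)  # same output, one convolution instead of three
```

The fused kernel is exactly equivalent because all three branches are linear in the input, so their kernels simply add.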
Figures

Figure 1. National fire situation in recent years: (a) number of fires and economic losses; (b) fire death toll.
Figure 2. YOLOv8.
Figure 3. Structure of Fire-RPG.
Figure 4. Structural reparameterization process.
Figure 5. The convolutional kernel in the third branch.
Figure 6. RepVGG block.
Figure 7. The DFC attention mechanism and Ghost module.
Figure 8. Cheap operation of the Ghost module.
Figure 9. Structure of GhostV2C2f.
Figure 10. Two annotation principles: (a) the bounding box encloses the main part of the object but misses the other parts; (b) the bounding box encloses the whole object but also includes too much irrelevant information.
Figure 11. Relationship between gradient gain and outlier degree.
Figure 12. Structure of CBAM.
Figure 13. Attention mechanisms in CBAM.
Figure 14. Examples from the dataset: (a) frames extracted from videos; (b) images obtained from the Internet.
Figure 15. Distribution of bounding box information: (a) center positions; (b) sizes.
Figure 16. Comparison of mAP: (a) mAP50; (b) mAP50-95.
Figure 17. Detection of faint flames and thin smoke: (a) YOLOv8; (b) Fire-RPG.
Figure 18. Detection of strong flames and dense smoke: (a) YOLOv8; (b) Fire-RPG.
Figure 19. Detection results on different datasets: (a) D-Fire, YOLOv8; (b) D-Fire, Fire-RPG; (c) ForestFire, YOLOv8; (d) ForestFire, Fire-RPG; (e) DFS, YOLOv8; (f) DFS, Fire-RPG.
15 pages, 3788 KiB  
Article
Wildfire Susceptibility Prediction Based on a CA-Based CCNN with Active Learning Optimization
by Qiuping Yu, Yaqin Zhao, Zixuan Yin and Zhihao Xu
Fire 2024, 7(6), 201; https://doi.org/10.3390/fire7060201 - 16 Jun 2024
Abstract
Wildfires cause great losses to the ecological environment, economy, and people’s safety and belongings. As a result, it is crucial to establish wildfire susceptibility models and delineate fire risk levels. It has been proven that the use of remote sensing data, such as meteorological and topographical data, can effectively predict and evaluate wildfire susceptibility. Accordingly, this paper converts meteorological and topographical data into fire-influencing factor raster maps for wildfire susceptibility prediction. The continuous convolutional neural network (CCNN for short) based on coordinate attention (CA for short) can aggregate different location information into channels of the network so as to enhance the feature expression ability; moreover, for different patches with different resolutions, the improved CCNN model does not need to change the structural parameters of the network, which improves the flexibility of the network application in different forest areas. In order to reduce the annotation of training samples, we adopt an active learning method to learn positive features by selecting high-confidence samples, which contributes to enhancing the discriminative ability of the network. We use fire probabilities output from the model to evaluate fire risk levels and generate the fire susceptibility map. Taking Chongqing Municipality in China as an example, the experimental results show that the CA-based CCNN model has a better classification performance; the accuracy reaches 91.7%, and AUC reaches 0.9487, which is 5.1% and 2.09% higher than the optimal comparative method, respectively. Furthermore, if an accuracy of about 86% is desired, our method only requires 50% of labeled samples and thus saves about 20% and 40% of the labeling efforts compared to the other two methods, respectively. Ultimately, the proposed model achieves the balance of high prediction accuracy and low annotation cost and is more helpful in classifying fire high warning zones and fire-free zones.
Show Figures

Figure 1. Location of Chongqing Municipality, in China, and distribution of wildfires in 2017.
Figure 2. The model framework for the wildfire susceptibility prediction.
Figure 3. Raster map of the average temperature in Chongqing, China, in 2017.
Figure 4. ROC curves and AUC of the five methods.
Figure 5. Radar maps of metrics derived from six different models on the validation set.
Figure 6. Wildfire susceptibility maps derived from different methods for Chongqing Municipality, China, in 2017. (a) Our method; (b) CNN-based method; (c) RF; (d) Decision Tree; (e) MLP; (f) SVM.
Figure 7. Classification accuracy with different percentages of labeled samples.
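The active learning step described in the abstract above, selecting high-confidence samples so that uncertain ones are left for manual annotation, can be sketched minimally. This is an illustrative sketch, not the paper's implementation: the function name, the confidence threshold, and the binary-probability input are all assumptions.

```python
def select_high_confidence(probs, threshold=0.9):
    """Pick unlabeled samples whose predicted fire probability is
    confidently positive or negative, and assign a pseudo-label.

    probs: predicted fire probabilities in [0, 1].
    Returns a list of (index, pseudo_label) pairs.
    """
    selected = []
    for i, p in enumerate(probs):
        confidence = max(p, 1.0 - p)  # confidence of the binary prediction
        if confidence >= threshold:
            selected.append((i, 1 if p >= 0.5 else 0))
    return selected

# Samples near 0 or 1 are kept as pseudo-labeled training data; uncertain
# ones (near 0.5) are routed to human annotators, which is how the labeling
# effort is reduced.
print(select_high_confidence([0.95, 0.50, 0.02]))  # [(0, 1), (2, 0)]
```

Raising the threshold trades labeling savings for pseudo-label quality, which is consistent with the abstract's observation that 50% of the labels suffice for about 86% accuracy.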
12 pages, 3871 KiB  
Article
Multitemporal Dynamics of Fuels in Forest Systems Present in the Colombian Orinoco River Basin Forests
by Walter Garcia-Suabita, Mario José Pacheco and Dolors Armenteras
Fire 2024, 7(6), 171; https://doi.org/10.3390/fire7060171 - 21 May 2024
Viewed by 832
Abstract
In Colombia’s Orinoco, wildfires have a profound impact on ecosystem dynamics, particularly affecting savannas and forest–savanna transitions. Human activities have disrupted the natural fire regime, increasing wildfire frequency through changes in land use, deforestation, and climate change. Despite extensive research on fire monitoring and prediction, the quantification of fuel accumulation, a critical factor in fire incidence, remains inadequately explored. This study addresses this gap by quantifying the accumulation of dead organic material (detritus) and identifying the factors that influence it. Using Brown transects across forests with varying fire intensities, we assessed fuel loads and characterized variables related to detritus accumulation over time. Employing factor analysis, principal component analysis, and a generalized linear mixed model, we determined the effects of various factors. Our findings reveal significant variations in biomass accumulation patterns influenced by factors such as thickness, wet and dry mass, density, gravity, porosity, and moisture content. Additionally, a decrease in fuel load over time was attributed to increased precipitation from three La Niña events. These insights enable more accurate fire predictions and inform targeted forest management strategies for fire prevention and mitigation, thereby enhancing our understanding of fire ecology in the Orinoco basin and guiding effective conservation practices. Full article
Show Figures

Figure 1. Collection sites for dead fuel: (left) Vichada, (right) Arauca.
Figure 2. Boxplot of fuel load showing four categories of variables: (a) department, (b) zone, (c) condition, and (d) year.
Figure 3. K-prototypes class finding with all variables: (a) number of clusters, (b) groups by fuel load and diameter, (c) groups by fuel load and distance, and (d) groups by fuel load and porosity.
Figure 4. FAMD (factor analysis of mixed data): (a) proportion of variance explained by each dimension, where the dashed red line shows the variation explained by each component and the dashed black line the eigenvalues; (b) biplot of quantitative variables; (c) biplot of qualitative variables.
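Figure 3 above groups the transect measurements with k-prototypes, a clustering method for mixed numeric and categorical data. Its core idea can be sketched as a combined distance: squared Euclidean distance on the numeric attributes plus a weight gamma times the number of mismatched categorical attributes. The variable names and example values below are hypothetical, assumed only for illustration.

```python
def kprototypes_distance(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
    """Mixed-data distance used by k-prototypes: squared Euclidean
    distance on numeric attributes plus gamma times the count of
    mismatched categorical attributes."""
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, proto_num))
    categorical = sum(a != b for a, b in zip(x_cat, proto_cat))
    return numeric + gamma * categorical

# e.g. fuel load and diameter as numeric attributes, department and
# burn condition as categorical ones (hypothetical values):
d = kprototypes_distance([1.0, 2.0], ["Vichada", "burned"],
                         [1.0, 4.0], ["Arauca", "burned"], gamma=2.0)
print(d)  # 4.0 numeric + 2.0 * 1 mismatch = 6.0
```

The weight gamma balances the two attribute types; each point is assigned to the prototype minimizing this distance, which is how clusters over mixed fuel-load and site variables, as in Figure 3, are formed.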