Search Results (62)

Search Parameters:
Keywords = wildfire smoke detection

16 pages, 1772 KiB  
Article
Forest Wildfire Detection from Images Captured by Drones Using Window Transformer without Shift
by Wei Yuan, Lei Qiao and Liu Tang
Forests 2024, 15(8), 1337; https://doi.org/10.3390/f15081337 - 1 Aug 2024
Viewed by 355
Abstract
Cameras, especially those carried by drones, are the main tools used to detect wildfires in forests because cameras have much longer detection ranges than smoke sensors. Currently, deep learning is the main method used for fire detection in images, and the Transformer is the best-performing architecture. The Swin Transformer restricts computation to fixed-size windows, which reduces the amount of computation to a certain extent, but to allow pixel communication between windows it adopts a shifted-window approach. The Swin Transformer therefore requires multiple shifts to extend the receptive field to the entire image, which somewhat limits the network's ability to capture global features at different scales. To solve this problem, instead of using shifted windows to allow pixel communication between windows, we capture global features with a single Transformer by downsampling the feature map to the window size, then upsample the result to the original size and add it to the previous feature map. This way, there is no need for multiple layers of stacked window Transformers; global features are captured after each window Transformer operation. We conducted experiments on the Corsican fire dataset captured by ground cameras and on the FLAME dataset captured by drone cameras. The results show that our algorithm performs the best. On the Corsican fire dataset, the mIoU, F1 score, and OA reached 79.4%, 76.6%, and 96.9%, respectively. On the FLAME dataset, the mIoU, F1 score, and OA reached 84.4%, 81.6%, and 99.9%, respectively.
(This article belongs to the Special Issue Forest Fires Prediction and Detection—2nd Edition)
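The shift-free design described in this abstract — window attention for local features plus a downsample–attend–upsample path for global ones — can be sketched in a few lines. The following is a minimal PyTorch illustration under our own assumptions; the module name, window size, and the use of nn.MultiheadAttention are ours, not the authors' implementation.

```python
# Minimal sketch of the shift-free idea: window attention for local features,
# plus a global branch that shrinks the feature map to one window, attends
# over it, and upsamples back. Illustrative only, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowGlobalBlock(nn.Module):
    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W), H, W divisible by window
        B, C, H, W = x.shape
        w = self.window
        # Local branch: non-overlapping windows, no shifting.
        win = x.view(B, C, H // w, w, W // w, w)
        win = win.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        win, _ = self.local_attn(win, win, win)
        win = win.view(B, H // w, W // w, w, w, C)
        local = win.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        # Global branch: pool the whole map to one window, attend, grow back.
        g = F.adaptive_avg_pool2d(x, (w, w)).flatten(2).transpose(1, 2)
        g, _ = self.global_attn(g, g, g)
        g = g.transpose(1, 2).view(B, C, w, w)
        g = F.interpolate(g, size=(H, W), mode="bilinear", align_corners=False)
        return local + g                         # add global features back

x = torch.randn(2, 32, 64, 64)
print(WindowGlobalBlock(32)(x).shape)            # torch.Size([2, 32, 64, 64])
```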
Show Figures

Figure 1. Comparison of Swin Transformer and ViT (Vision Transformer). Swin Transformer started with 4× downsampling, followed by 8× downsampling and 16× downsampling. ViT started with 16× downsampling. The red lines are the window boundaries, and the gray lines are the boundaries of each patch.
Figure 2. Window shifting. The left side is the window without shifting. On the right is the window after shifting.
Figure 3. Architecture diagram of the Nswin Transformer module. The black text on the right represents the size of the feature map for each step.
Figure 4. The blue rectangles represent convolutional operation modules consisting of 2D convolutions with a kernel size of 3 × 3, followed by BatchNorm and ReLU. The yellow rectangles represent Nswin Transformer modules. The green arrows represent Maxpooling2D with a kernel size of 2 and a stride of 2. The red arrows represent patch merging. The orange arrows represent ConvTranspose2D with a kernel size of 2 and a stride of 2. The purple arrows represent addition. The black arrows represent concatenation.
Figure 5. The output results of each model on the test set. Black represents TN pixels, white represents TP pixels, red represents FN pixels, and green represents FP pixels.
Figure 6. The output results of each model on the FLAME test dataset. Black represents TN pixels, white represents TP pixels, red represents FN pixels, and green represents FP pixels.
21 pages, 8219 KiB  
Article
An Improved Fire and Smoke Detection Method Based on YOLOv8n for Smart Factories
by Ziyang Zhang, Lingye Tan and Tiong Lee Kong Robert
Sensors 2024, 24(15), 4786; https://doi.org/10.3390/s24154786 - 24 Jul 2024
Viewed by 358
Abstract
Factories play a crucial role in economic and social development. However, fire disasters in factories greatly threaten both human lives and property. Previous studies on fire detection using deep learning mostly focused on wildfire detection and ignored fires that occur in factories. In addition, many studies focus on fire detection, while smoke, an important derivative of a fire disaster, is not detected by such algorithms. To better help smart factories monitor fire disasters, this paper proposes an improved fire and smoke detection method based on YOLOv8n. To ensure the quality of the algorithm and the training process, a self-made dataset including more than 5000 images and their corresponding labels was created. Then, nine advanced algorithms were selected and tested on the dataset. YOLOv8n exhibits the best detection results in terms of accuracy and detection speed. ConvNeXt V2 is then inserted into the backbone to enhance inter-channel feature competition. RepBlock and SimConv are selected to replace the original Conv and improve computational ability and memory bandwidth. For the loss function, CIoU is replaced by MPDIoU to ensure efficient and accurate bounding boxes. Ablation tests show that the improved algorithm achieves better performance in all four accuracy metrics: precision, recall, F1, and mAP@50. Compared with the original model, whose four metrics are approximately 90%, the modified algorithm achieves above 95%; mAP@50 in particular reaches 95.6%, an improvement of approximately 4.5%. Although complexity increases, the requirements of real-time fire and smoke monitoring are satisfied.
(This article belongs to the Section Sensing and Imaging)
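MPDIoU, which replaces CIoU here, penalizes IoU by the squared distances between corresponding box corners, normalized by the image size. A hedged sketch of the published MPDIoU formulation, assuming an (x1, y1, x2, y2) box format:

```python
# Sketch of the MPDIoU bounding-box loss: IoU minus the squared distances
# between predicted and ground-truth top-left and bottom-right corners,
# normalized by the squared image diagonal. Illustrative, not the paper's code.
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    # Intersection and IoU
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared corner distances, normalized by the squared image diagonal
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return (1.0 - (iou - d1 / norm - d2 / norm)).mean()

pred = torch.tensor([[10., 10., 60., 60.]])
target = torch.tensor([[12., 8., 64., 58.]])
print(mpdiou_loss(pred, target, img_w=640, img_h=640))
```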
Show Figures

Figure 1. The structure of the YOLOv8n algorithm.
Figure 2. The architecture of ConvNeXt V2.
Figure 3. The architecture of SimConv, RepConv, and RepBlock.
Figure 4. Schematic diagram of MPDIoU.
Figure 5. The structure of the improved YOLOv8n for factory fire and smoke detection.
Figure 6. Post-processing after image collection using Visual Similarity Duplicate Image Finder.
Figure 7. Examples of indoor factory fire disaster images.
Figure 8. Examples of outdoor factory fire disaster images.
Figure 9. Visualization of the self-made dataset and labels. (a) The number of fire and smoke labels; (b) the size of the labels; (c) the distribution of label centroid locations over the image; (d) the distribution of label sizes over the image.
Figure 10. Improved YOLOv8 vs. the other methods: bar charts of FPS and mAP@0.5.
Figure 11. Precision–recall curve and precision–confidence curve.
Figure 12. The curves of precision–epochs, recall–epochs, and mAP–epochs.
Figure 13. Visible experiments of improved and original algorithms for various indoor environments in factories.
Figure 14. Visible experiments of improved and original algorithms for various outdoor environments in factories.
11 pages, 607 KiB  
Article
Evaluating the Susceptibility of Different Crops to Smoke Taint
by Julie Culbert, Renata Ristic and Kerry Wilkinson
Horticulturae 2024, 10(7), 713; https://doi.org/10.3390/horticulturae10070713 - 5 Jul 2024
Viewed by 777
Abstract
The potential for grapes and wine to be tainted following vineyard exposure to wildfire smoke is well established, with recent studies suggesting hops and apples (and thus beer and cider) can be similarly affected. However, the susceptibility of other crops to ‘smoke taint’ has not yet been investigated. Smoke was applied to a selection of fruits and vegetables, as well as potted lavender plants, and their volatile phenol composition was determined by gas chromatography–mass spectrometry to evaluate their susceptibility to contamination by smoke. Volatile phenols were observed in control (unsmoked) capsicum, cherry, lavender, lemon, spinach and tomato samples, typically at ≤18 µg/kg, but 52 µg/kg of guaiacol and 83–416 µg/kg of o- and m-cresol and 4-methylsyringol were detected in tomato and lavender samples, respectively. Significant increases in volatile phenol concentrations were observed as a consequence of smoke exposure, with the highest volatile phenol levels occurring in smoke-exposed strawberry and lavender samples. Variation in the uptake of volatile phenols by different crops was attributed to differences in their physical properties, i.e., their surface area, texture and/or cuticle composition, while the peel of banana, lemon and, to a lesser extent, apple samples mitigated the permeation of smoke-derived volatile phenols into the pulp. These results provide valuable insight into the susceptibility of different crops to smoke contamination.
(This article belongs to the Section Biotic and Abiotic Stress)
Show Figures

Figure 1. Principal component analysis biplot of volatile phenol concentrations measured in different crops, following exposure to smoke.
26 pages, 10567 KiB  
Article
Biomass Burning Aerosol Observations and Transport over Northern and Central Argentina: A Case Study
by Gabriela Celeste Mulena, Eija Maria Asmi, Juan José Ruiz, Juan Vicente Pallotta and Yoshitaka Jin
Remote Sens. 2024, 16(10), 1780; https://doi.org/10.3390/rs16101780 - 17 May 2024
Viewed by 657
Abstract
The characteristics of South American biomass burning (BB) aerosols transported over northern and central Argentina were investigated from July to December 2019. This period was chosen due to the high aerosol optical depth values found in the region and because intensive biomass burning simultaneously took place over the Amazon. More specifically, a combination of remote sensing observations and simulated air-parcel back trajectories was used to link the optical and physical properties of three BB aerosol events that affected Pilar Observatory (PO, Argentina, 31°41′S, 63°53′W, 338 m above sea level) with low-level atmospheric circulation patterns and with the types of vegetation burned in specific fire regions. The lidar observations at the PO site were used for the first time to characterize the vertical extent and structure of BB aerosol plumes as well as their connection with the planetary boundary layer and dust particles. Based mainly on the air-parcel trajectories, a local transport regime and a remote transport regime were identified. In all the BB aerosol event cases studied in this paper, light-absorbing fine-mode aerosols were detected, resulting mainly from a mixture of aging smoke and dust particles. In the remote transport regime, the main sources of the BB aerosols reaching PO were associated with Amazonian rainforest wildfires; these aerosols were transported into northern and central Argentina within a strong low-level jet circulation. In the local transport regime, the BB aerosols were linked with closer fires in tropical forest, cropland, grassland, and scrub/shrubland vegetation in southeastern South America. Moreover, aerosols carried by the remote transport regime were associated with high aerosol loading, enhanced aging, and relatively smaller particle sizes, while aerosols associated with the local transport pattern were consistently less affected by aging and showed larger sizes and low aerosol loading.
(This article belongs to the Special Issue Observation of Atmospheric Boundary-Layer Based on Remote Sensing)
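The Ångström exponent α(440–870nm) that recurs in the figures below is a simple function of AOD at two wavelengths, and a higher value indicates smaller (fine-mode, smoke-like) particles. A worked example with illustrative AOD values:

```python
# Angstrom exponent from AOD at two wavelengths; values are hypothetical.
import math

def angstrom_exponent(aod_1, aod_2, wl_1_nm=440.0, wl_2_nm=870.0):
    return -math.log(aod_1 / aod_2) / math.log(wl_1_nm / wl_2_nm)

# Smoke-like readings with strong spectral dependence of AOD:
print(angstrom_exponent(0.80, 0.30))  # ~1.44, consistent with fine-mode aerosol
```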
Show Figures

Graphical abstract.
Figure 1. LULC classification over the study area from ESA Sentinel-2 imagery at 10 m resolution (shaded). The white dot indicates the location of Pilar Observatory (31°41′S, 63°53′W, 338 m ASL), while the black squares indicate the location of the Amazonia region (1), the southeastern South America (SESA) region (2), and the northern and central Argentina (NCA) region (3).
Figure 2. Schematic diagram of the methods used to accomplish the objective of the paper.
Figure 3. (a) Number of active fires detected by MODIS-FIRMS in the Amazonia (red), southeastern South America (SESA) (green), and northern and central Argentina (NCA) (blue) regions, shown as a histogram from July to December 2019 (left), and spatial distribution of fire locations for the same regions from August to October 2019 (right); (b) TROPOMI CO total column from July to December 2019 at the Pilar Observatory site. The gray shades indicate the three periods in which the BB aerosol event criteria were met and lidar information was available at the PO site. These three cases are indicated as Case 1, Case 2, and Case 3.
Figure 4. Daily mean AOD(440nm) (black line, left axis), daily mean α(440–870nm) (red line, right axis) from AERONET Level 2.0, and daily mean SSA(440nm) (blue line, right axis) from AERONET Level 1.5 at the Pilar Observatory site from July through December 2019. The gray shades indicate the three periods in which the BB aerosol event criteria were met and lidar information was available at the study site. These three cases are indicated as Case 1, Case 2, and Case 3.
Figure 5. Range-corrected SAVER-Net lidar signal at 1064 nm (arbitrary units) at the Pilar Observatory site from 23 to 30 September 2019.
Figure 6. Extinction coefficient at 532 nm (Mm⁻¹) for spherical BB aerosol at the Pilar Observatory SAVER-Net lidar from 23 to 30 September 2019.
Figure 7. Extinction coefficient at 532 nm (Mm⁻¹) for non-spherical dust aerosol at the Pilar Observatory SAVER-Net lidar from 23 to 30 September 2019.
Figure 8. Time series of hourly averages of AOD(440nm) (black line, left axis), FMF(500nm) (green line, left axis), and α(440–870nm) (red line, right axis) from AERONET Level 2.0, and SSA(440nm) (blue line, right axis) from AERONET Level 1.5 at the Pilar Observatory site for 23–30 September 2019. Hourly AERONET Level 1.0 data were used on 23 and 30 September. The gray shade indicates the time span when BB aerosols were found at the site based on hourly AERONET data.
Figure 9. The 27-member ensemble HYSPLIT back trajectories initialized at different hours, originating at the Pilar Observatory site at 1.5 km AGL, driven by the 1° GDAS data for the period 23–30 September 2019. The white dot indicates the location of the Pilar Observatory site.
Figure 10. Wind speed (shaded, m s⁻¹), wind direction vectors (arrows), and geopotential height (contours, mgp) at 850 hPa for 24 September 2019 at 12 UTC (upper left), 26 September 2019 at 12 UTC (upper right), 28 September 2019 at 12 UTC (lower left), and 30 September 2019 at 12 UTC (lower right) from the 0.25° GDAS/FNL analysis.
Figure 11. (left panel) 72 h backward trajectories determined with the HYSPLIT model with 27 members starting at the Pilar Observatory site at 1.5 km AGL on 27 September 2019 at 12 UTC. The lines show the air-mass back trajectories during 24–27 September. Different colors correspond to different dates, as indicated in the right-panel legend. The dots indicate the locations of fire points, and their color indicates the day on which they were within a 50 km range of a back-trajectory ensemble member. The shading corresponds to the true-color image taken by MODIS at 14:05 UTC on 27 September 2019. (right panel) Back-trajectory heights as a function of time. Different colors indicate different days, as in the left panel.
Figure 12. (left panel) As in Figure 11, but for 30 September 2019 at 10 UTC. The lines show the air-mass back trajectories during 27–30 September. Different colors correspond to different dates, as indicated in the right-panel legend. The dots indicate the locations of fire points, and their color indicates the day on which they were within a 50 km range of a back-trajectory ensemble member. The shading corresponds to the true-color image taken by MODIS at 14:35 UTC on 30 September 2019. The blue, green, and yellow squares indicate the daytime (DT, ascending) and nighttime (NT, descending) trajectories of the CALIPSO satellite for 27, 28, and 30 September 2019. (right panel) Back-trajectory heights as a function of time. Different colors indicate different days, as in the left panel.
Figure 13. The upper, middle, and lower figures show the aerosol layers measured on the daytime (DT, ascending) and nighttime (NT, descending) trajectories of the CALIOP instrument aboard the CALIPSO satellite for 27, 28, and 30 September 2019. Aerosol types are abbreviated in the legend beneath the image: 0, “Not determined”; 1, “Marine”; 2, “Desert dust”; 3, “Polluted continental/smoke”; 4, “Clean continental”; 5, “Polluted dust”; 6, “Elevated smoke”. The images correspond to the trajectories identified with blue (upper panel), green (middle panel), and yellow (lower panel) squares in Figure 12.
23 pages, 6580 KiB  
Article
Forest Smoke-Fire Net (FSF Net): A Wildfire Smoke Detection Model That Combines MODIS Remote Sensing Images with Regional Dynamic Brightness Temperature Thresholds
by Yunhong Ding, Mingyang Wang, Yujia Fu and Qian Wang
Forests 2024, 15(5), 839; https://doi.org/10.3390/f15050839 - 10 May 2024
Viewed by 850
Abstract
Satellite remote sensing plays a significant role in detecting smoke from forest fires. However, existing methods for detecting forest fire smoke from remote sensing images rely solely on the information provided by the images, overlooking the positional information and brightness temperature of the fire spots. This oversight significantly increases the probability of misjudging smoke plumes. This paper proposes a smoke detection model, Forest Smoke-Fire Net (FSF Net), which integrates wildfire smoke images with dynamic brightness temperature information for the region. The MODIS_Smoke_FPT dataset was constructed using Moderate Resolution Imaging Spectroradiometer (MODIS) data, meteorological information at the site of the fire, and elevation data to determine the location of smoke and the brightness temperature threshold for wildfires. Deep learning and machine learning models were trained separately using the image data and the fire spot area data provided by the dataset. The performance of the deep learning model was evaluated with the mAP metric, while the regression performance of the machine learning model was assessed with Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The selected machine learning and deep learning models were then organically integrated. The results show that Mask_RCNN_ResNet50_FPN and XGR performed best among the deep learning and machine learning models, respectively. Combining the two models achieved good smoke detection results (Precision_smoke = 89.12%). Compared with wildfire smoke detection models that rely on image recognition alone, the model proposed in this paper demonstrates stronger applicability in improving the precision of smoke detection, thereby providing beneficial support for the timely detection of forest fires and applications of remote sensing.
(This article belongs to the Section Natural Hazards and Risk Management)
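The fusion idea — accepting a detected smoke region only when an anomalous brightness-temperature (fire-spot) area, extracted for example with Otsu's method as in Figure 3 below, lies near it — can be sketched as follows. The normalization pipeline and the proximity rule here are our illustrative assumptions, not the paper's exact procedure:

```python
# Hedged sketch: confirm a smoke mask only if an Otsu-thresholded
# brightness-temperature anomaly (fire spot) lies within a given radius.
import cv2
import numpy as np

def fire_spot_mask(bt_band: np.ndarray) -> np.ndarray:
    """Binarize a brightness-temperature band (Kelvin) with Otsu's method."""
    bt8 = cv2.normalize(bt_band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(bt8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary > 0

def confirm_smoke(smoke_mask: np.ndarray, bt_band: np.ndarray, radius: int = 25) -> bool:
    """Accept a smoke mask only if a fire spot exists within `radius` pixels."""
    fire = fire_spot_mask(bt_band).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1,) * 2)
    near_fire = cv2.dilate(fire, kernel).astype(bool)
    return bool(np.logical_and(smoke_mask, near_fire).any())

smoke = np.zeros((100, 100), bool); smoke[40:60, 40:60] = True
bt = np.full((100, 100), 290.0); bt[55:60, 55:60] = 330.0  # hot fire spot
print(confirm_smoke(smoke, bt))  # True: the smoke root coincides with a fire spot
```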
Show Figures

Figure 1. The distribution of forest fires around the world; the red five-pointed stars mark where wildfires occurred.
Figure 2. Example of a forest fire smoke image with smoke and clouds.
Figure 3. Differentiating fire spots using Otsu’s method, where the left part is the binary image and the right part corresponds to the original grayscale image.
Figure 4. Schematic diagram of the overall architecture of the FSF Net model.
Figure 5. Mask R-CNN overall framework diagram.
Figure 6. Binarized maps of abnormal brightness temperature areas. (a) The brightness temperature region of aggregated fire points. (b) The brightness temperature area of mildly dispersed fire points. White represents the abnormal brightness temperature area, that is, the fire point area, and black represents the area without abnormal brightness temperature.
Figure 7. Model accuracy for wildfire smoke segmentation detection. Red represents mAP50, and blue represents mAP75. MRR50F, MRR101F, CMRR50F, and CMRR101F correspond to Mask_R-CNN_ResNet50_FPN, Mask_R-CNN_ResNet101_FPN, Cascade_Mask_R-CNN_ResNet50_FPN, and Cascade_Mask_R-CNN_ResNet101_FPN, respectively.
Figure 8. Loss trend chart of Mask_R-CNN_ResNet50_FPN.
Figure 9. Visualization of wildfire smoke semantic segmentation results. Group (A) is a single-smoke situation, in which the distinction between smoke and clouds is more obvious. Group (B) is a smoky and cloudy situation. Group (C) is the case where smoke and clouds are mixed.
Figure 10. RMSE/MAE scores using leave-one-out cross-validation.
Figure 11. Validated RMSE/MAE scores using all data.
Figure 12. Smoke detection results of the wildfire smoke detection model combined with the dynamic fire point threshold. (a) A fire point at the root of scattered smoke. (b) A fire point under concentrated smoke. (c) The fire point and the “smoke” are far apart, that is, the cloud is excluded.
Figure 13. (a) Comparison of the accuracy between the wildfire smoke detection model that integrates dynamic fire point thresholds and the smoke detection model that uses images alone. (b) Schematic diagram of fire smoke detection and cloud discrimination.
Figure 14. Comparison of smoke detection performance on the MODIS_Smoke_FPT dataset among three models.
22 pages, 46483 KiB  
Article
SWIFT: Simulated Wildfire Images for Fast Training Dataset
by Luiz Fernando, Rafik Ghali and Moulay A. Akhloufi
Remote Sens. 2024, 16(9), 1627; https://doi.org/10.3390/rs16091627 - 2 May 2024
Viewed by 933
Abstract
Wildland fires cause economic and ecological damage with devastating consequences, including loss of life. To reduce these risks, numerous fire detection and recognition systems using deep learning techniques have been developed. However, the limited availability of annotated datasets has decelerated the development of reliable deep learning techniques for detecting and monitoring fires. To this end, a novel dataset, SWIFT, is presented in this paper for detecting and recognizing wildland smoke and fires. SWIFT includes a large number of synthetic images and videos of smoke and wildfire with their corresponding annotations, as well as environmental data including temperature, humidity, wind direction, and wind speed. It represents various wildland fire scenarios collected from multiple viewpoints, covering forest interior views, views near active fires, ground views, and aerial views. In addition, three deep learning models, namely BoucaNet, DC-Fire, and CT-Fire, are adopted to recognize forest fires and address their related challenges. These models were trained using the SWIFT dataset and tested on real fire images. BoucaNet performed well in recognizing wildland fires and overcoming challenging limitations, including the complexity of the background, the variation in smoke and wildfire features, and the detection of small wildland fire areas. This shows the potential of sim-to-real deep learning for wildland fires.
Show Figures

Figure 1. Images of developed biomes for SWIFT, from left to right: boreal, temperate, and tundra.
Figure 2. Background image examples.
Figure 3. Fire image examples. (Top): RGB fire images. (Bottom): Their corresponding ground-truth images.
Figure 4. Fire and smoke image examples. (Top): RGB images. (Bottom): Their corresponding ground-truth images.
Figure 5. Smoke example images. (Top) to (Bottom): RGB smoke images, their corresponding grayscale ground truth, and their corresponding binary ground truth.
Figure 6. The proposed architecture of the BoucaNet and CT-Fire methods. L and L1 refer to the likelihood of the input image being classified as fire or no-fire.
Figure 7. The proposed architecture of DC-Fire. L and L1 refer to the likelihood of the input image being classified as fire or no-fire.
Figure 8. Loss curves for the proposed DL methods (BoucaNet, DC-Fire, CT-Fire, RegNetY-16GF, and ResNeXt-101) during the training and validation steps.
Figure 9. Confusion matrices of BoucaNet, CT-Fire, and DC-Fire using real images. From left to right: BoucaNet results, CT-Fire results, and DC-Fire results.
Figure 10. Fire classification results of the proposed models.
Figure 11. No-fire classification results of the proposed models.
16 pages, 1176 KiB  
Article
A Control-Theoretic Spatio-Temporal Model for Wildfire Smoke Propagation Using UAV-Based Air Pollutant Measurements
by Prabhash Ragbir, Ajith Kaduwela, Xiaodong Lan, Adam Watts and Zhaodan Kong
Drones 2024, 8(5), 169; https://doi.org/10.3390/drones8050169 - 24 Apr 2024
Viewed by 999
Abstract
Wildfires have the potential to cause severe damage to vegetation, property and most importantly, human life. In order to minimize these negative impacts, it is crucial that wildfires are detected at the earliest possible stages. A potential solution for early wildfire detection is to utilize unmanned aerial vehicles (UAVs) that are capable of tracking the chemical concentration gradient of smoke emitted by wildfires. A spatiotemporal model of wildfire smoke plume dynamics can allow for efficient tracking of the chemicals by utilizing both real-time information from sensors as well as future information from the model predictions. This study investigates a spatiotemporal modeling approach based on subspace identification (SID) to develop a data-driven smoke plume dynamics model for the purposes of early wildfire detection. The model was learned using CO2 concentration data which were collected using an air quality sensor package onboard a UAV during two prescribed burn experiments. Our model was evaluated by comparing the predicted values to the measured values at random locations and showed mean errors of 6.782 ppm and 30.01 ppm from the two experiments. Additionally, our model was shown to outperform the commonly used Gaussian puff model (GPM) which showed mean errors of 25.799 ppm and 104.492 ppm, respectively.
(This article belongs to the Topic Application of Remote Sensing in Forest Fire)
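The Gaussian puff model used as the baseline above has a standard closed form; a minimal sketch for a single instantaneous puff, advected by wind along x with ground reflection, using illustrative (not experiment-derived) dispersion parameters:

```python
# Standard Gaussian puff concentration for one puff of mass Q released at the
# origin at t = 0. Sigmas are illustrative constants, not stability-class fits.
import numpy as np

def gaussian_puff(x, y, z, t, Q=1.0, u=2.0, H=2.0, sx=5.0, sy=5.0, sz=3.0):
    """Concentration at (x, y, z) a time t after release; u is wind speed."""
    norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    along = np.exp(-((x - u * t) ** 2) / (2 * sx ** 2))   # downwind spread
    cross = np.exp(-(y ** 2) / (2 * sy ** 2))             # crosswind spread
    vert = (np.exp(-((z - H) ** 2) / (2 * sz ** 2))
            + np.exp(-((z + H) ** 2) / (2 * sz ** 2)))    # ground reflection
    return norm * along * cross * vert

# Concentration 20 m downwind at plume height, 10 s after release:
print(gaussian_puff(x=20.0, y=0.0, z=2.0, t=10.0))
```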
Show Figures

Figure 1. Octocopter unmanned aerial vehicle equipped with the air quality sensor package for chemical data collection.
Figure 2. Air quality sensor package with six low-cost real-time sensors for chemical data collection.
Figure 3. Chemical data collection during a prescribed burn using the UAV equipped with the air quality sensor package.
Figure 4. Periodic trajectory executed by the UAV during data collection in the first experiment.
Figure 5. Periodic trajectory executed by the UAV during data collection in the second experiment.
Figure 6. 3D spatial distribution of CO2 from data collected in the first experiment.
Figure 7. 3D spatial distribution of CO2 from data collected in the second experiment.
Figure 8. Predicted values for both models vs. the measured values for the first experiment.
Figure 9. Predicted values for both models vs. the measured values for the second experiment.
Figure 10. Subspace identification model 2D field predictions of CO2 concentration at timesteps of (a) 5 s, (b) 10 s, (c) 15 s, and (d) 20 s. The x-axis and y-axis represent longitude and latitude, respectively.
Figure 11. Gaussian puff model 2D field predictions of CO2 concentration at timesteps of (a) 5 s, (b) 10 s, (c) 15 s, and (d) 20 s. The x-axis and y-axis represent longitude and latitude, respectively.
20 pages, 2875 KiB  
Article
YOLO-Based Models for Smoke and Wildfire Detection in Ground and Aerial Images
by Leon Augusto Okida Gonçalves, Rafik Ghali and Moulay A. Akhloufi
Fire 2024, 7(4), 140; https://doi.org/10.3390/fire7040140 - 14 Apr 2024
Viewed by 1381
Abstract
Wildland fires negatively impact forest biodiversity and human lives, and they spread very rapidly. Early detection of smoke and fires plays a crucial role in improving the efficiency of firefighting operations. Deep learning techniques are used to detect fires and smoke, but the different shapes, sizes, and colors of smoke and fires make their detection a challenging task. In this paper, recent YOLO-based algorithms are adopted and implemented for detecting and localizing smoke and wildfires within ground and aerial images. Notably, the YOLOv7x model achieved the best performance, with an mAP (mean Average Precision) score of 80.40% and fast detection speed, outperforming the baseline models in detecting both smoke and wildfires. YOLOv8s obtained a high mAP of 98.10% in identifying and localizing only wildfire smoke. These models demonstrated significant potential in handling challenging scenarios, including detecting small fire and smoke areas; varying fire and smoke features such as shape, size, and color; complex backgrounds, which can include diverse terrain, weather conditions, and vegetation; and addressing the visual similarities among smoke, fog, and clouds and the visual resemblances among fire, lighting, and sun glare.
(This article belongs to the Special Issue New Advances in Spatial Analysis of Wildfire Planning)
Show Figures

Figure 1. D-Fire dataset examples, from top to bottom: smoke images, fire images, and fire/smoke images with challenging scenarios such as the presence of clouds, fog, and sun glare.
Figure 2. WSDY dataset examples.
Figure 3. YOLO model results using the D-Fire dataset, from top to bottom: original images, ground-truth images, and images predicted by YOLOv5l, YOLOv7x, YOLOv8x, and YOLOv5lu, respectively.
Figure 4. YOLO model results using the WSDY dataset, from top to bottom: original images, ground-truth images, and images predicted by YOLOv5nu, YOLOv7x, YOLOv8s, and YOLOv5n, respectively.
14 pages, 4779 KiB  
Article
Fire and Smoke Detection Using Fine-Tuned YOLOv8 and YOLOv7 Deep Models
by Mohamed Chetoui and Moulay A. Akhloufi
Fire 2024, 7(4), 135; https://doi.org/10.3390/fire7040135 - 12 Apr 2024
Cited by 1 | Viewed by 2512
Abstract
Viewed as a significant natural disaster, wildfires present a serious threat to human communities, wildlife, and forest ecosystems. The frequency of wildfire occurrences has increased recently, with the impacts of global warming and human interaction with the environment playing pivotal roles. Addressing this challenge requires firefighters to promptly identify fires from early signs of smoke, allowing them to intervene and prevent further spread. In this work, we adapted and optimized recent deep learning object detection models, namely YOLOv8 and YOLOv7, for the detection of smoke and fire. Our approach involved a dataset comprising over 11,000 images of smoke and fires. The YOLOv8 models successfully identified fire and smoke, achieving a mAP:50 of 92.6%, a precision score of 83.7%, and a recall of 95.2%. The results were compared with a large YOLOv6 model, Faster R-CNN, and DEtection TRansformer (DETR). The obtained scores confirm the potential of the proposed models for wide application and promotion in the fire safety industry.
(This article belongs to the Special Issue Monitoring Wildfire Dynamics with Remote Sensing)
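Fine-tuning YOLOv8 on a custom fire/smoke dataset follows the standard Ultralytics workflow; a minimal sketch in which the dataset YAML, image paths, and hyperparameters are placeholders rather than the paper's settings:

```python
# Minimal YOLOv8 fine-tuning sketch with the Ultralytics API.
# "fire_smoke.yaml" is a hypothetical dataset config with classes fire, smoke.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pretrained checkpoint
model.train(
    data="fire_smoke.yaml",           # placeholder dataset config
    epochs=100,
    imgsz=640,
)
metrics = model.val()                 # reports precision, recall, mAP50
results = model.predict("scene.jpg", conf=0.25)  # inference on one image
```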
Show Figures

Figure 1. Examples of smoke and fire images from the dataset [39].
Figure 2. Precision curves of YOLOv8 models: (a) YOLOv8n, (b) YOLOv8s, (c) YOLOv8m, (d) YOLOv8l, (e) YOLOv8x.
Figure 3. Recall curves of YOLOv8 models: (a) YOLOv8n, (b) YOLOv8s, (c) YOLOv8m, (d) YOLOv8l, (e) YOLOv8x.
Figure 4. Mean average precision (mAP:50) curves of YOLOv8 models: (a) YOLOv8n, (b) YOLOv8s, (c) YOLOv8m, (d) YOLOv8l, (e) YOLOv8x.
Figure 5. Example of fire and smoke detection by the YOLOv8x model.
Figure 6. Example of false detection of fire and smoke.
16 pages, 2616 KiB  
Article
Improving Computer Vision-Based Wildfire Smoke Detection by Combining SE-ResNet with SVM
by Xin Wang, Jinxin Wang, Linlin Chen and Yinan Zhang
Processes 2024, 12(4), 747; https://doi.org/10.3390/pr12040747 - 7 Apr 2024
Viewed by 1045
Abstract
Wildfire is one of the most critical natural disasters, posing a serious threat to human lives as well as ecosystems. One issue hindering high accuracy in computer vision-based wildfire detection is the potential for water mists and clouds to be marked as wildfire smoke due to their similar appearance in images, leading to unacceptably high false alarm rates in real-world wildfire early warning. This paper proposes a novel hybrid wildfire smoke detection approach that combines a multi-layer ResNet architecture with an SVM to extract the dynamic and static characteristics of smoke images, respectively. The ResNet model is improved with the SE attention mechanism and a fully convolutional network as SE-ResNet. A fusion decision procedure is proposed for wildfire early warning. The proposed detection method was tested on open datasets and achieved an accuracy of 98.99%. Comparisons with AlexNet, VGG-16, GoogleNet, SE-ResNet-50 and SVM further illustrate the improvements.
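The SE attention used to build SE-ResNet is a small, self-contained block: squeeze (global average pooling) followed by an excitation gate that rescales channels. A minimal sketch with the usual reduction ratio of 16, which is our assumption rather than the paper's stated setting:

```python
# Squeeze-and-excitation block: pool each channel to a scalar, pass through a
# two-layer gate, and rescale the feature map channel-wise.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))                 # squeeze: global average pool
        w = self.fc(w).view(b, c, 1, 1)        # excitation: per-channel gate
        return x * w                           # reweight feature channels

x = torch.randn(4, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([4, 64, 32, 32])
```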
Show Figures

Figure 1. Framework of the proposed approach.
Figure 2. Residual learning in ResNet.
Figure 3. Network architecture of ResNet-50.
Figure 4. Feature pyramid network for wildfire smoke detection.
Figure 5. Procedure for HOG feature vector extraction.
Figure 6. Image samples: (a) real wildfire smoke; (b) water mists; (c) clouds.
Figure 7. Image segmentation using SE-ResNet: (a) wildfire smoke and (b) false segmentation of water mists and clouds.
18 pages, 6131 KiB  
Article
An Optimized Smoke Segmentation Method for Forest and Grassland Fire Based on the UNet Framework
by Xinyu Hu, Feng Jiang, Xianlin Qin, Shuisheng Huang, Xinyuan Yang and Fangxin Meng
Fire 2024, 7(3), 68; https://doi.org/10.3390/fire7030068 - 26 Feb 2024
Cited by 4 | Viewed by 1460
Abstract
Smoke, a byproduct of forest and grassland combustion, holds the key to precise and rapid identification—an essential breakthrough in early wildfire detection, critical for forest and grassland fire monitoring and early warning. To address the scarcity of middle–high-resolution satellite datasets for forest and grassland fire smoke, and the associated challenges in identifying smoke, the CAF_SmokeSEG dataset was constructed for smoke segmentation. The dataset was created based on GF-6 WFV smoke images of forest and grassland fire globally from 2019 to 2022. Then, an optimized segmentation algorithm, GFUNet, was proposed based on the UNet framework. Through comprehensive analysis, including method comparison, module ablation, band combination, and data transferability experiments, this study revealed that GF-6 WFV data effectively represent information related to forest and grassland fire smoke. The CAF_SmokeSEG dataset was found to be valuable for pixel-level smoke segmentation tasks. GFUNet exhibited robust smoke feature learning capability and segmentation stability. It demonstrated clear smoke area delineation, significantly outperforming UNet and other optimized methods, with an F1-Score and Jaccard coefficient of 85.50% and 75.76%, respectively. Additionally, augmenting the common spectral bands with additional bands improved the smoke segmentation accuracy, particularly shorter-wavelength bands like the coastal blue band, outperforming longer-wavelength bands such as the red-edge band. GFUNet was trained on the combination of red, green, blue, and NIR bands from common multispectral sensors. The method showed promising transferability and enabled the segmentation of smoke areas in GF-1 WFV and HJ-2A/B CCD images with comparable spatial resolution and similar bands. The integration of high spatiotemporal multispectral data like GF-6 WFV with the advanced information extraction capabilities of deep learning algorithms effectively meets the practical needs for pixel-level identification of smoke areas in forest and grassland fire scenarios. It shows promise in improving and optimizing existing forest and grassland fire monitoring systems, providing valuable decision-making support for fire monitoring and early warning systems.
(This article belongs to the Special Issue Remote Sensing of Wildfire: Regime Change and Disaster Response)
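The reported F1-Score and Jaccard coefficient are both derived from pixel-level TP/FP/FN counts; a small sketch on binary masks makes the relationship explicit:

```python
# Pixel-level F1 and Jaccard (IoU) from binary prediction and truth masks.
import numpy as np

def f1_and_jaccard(pred: np.ndarray, truth: np.ndarray):
    tp = np.logical_and(pred, truth).sum()     # correctly labeled smoke pixels
    fp = np.logical_and(pred, ~truth).sum()    # false smoke pixels
    fn = np.logical_and(~pred, truth).sum()    # missed smoke pixels
    f1 = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    return f1, jaccard

pred = np.array([[1, 1, 0], [0, 1, 0]], bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], bool)
print(f1_and_jaccard(pred, truth))  # (0.666..., 0.5)
```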
Show Figures

Figure 1. The overall research technical framework.
Figure 2. The spatiotemporal distribution of CAF_SmokeSEG. (a) Spatial distribution of the data; (b) data distribution across continents; (c) temporal distribution of the data.
Figure 3. Examples of global samples and labels in CAF_SmokeSEG.
Figure 4. Schematic diagrams of different structures. (a) Traditional double convolution; (b) traditional residual convolution; (c) HardRes modules.
Figure 5. The curves of various activation functions.
Figure 6. Schematic diagram of the DSASPP structure.
Figure 7. Schematic diagram of the SCSE structure.
Figure 8. Overview of the GFUNet network. (The modules with a red border are the main optimized modules based on the UNet framework.)
Figure 9. Schematic diagram of the relationships among the accuracy calculation metrics.
Figure 10. Comparison of smoke segmentation results of different methods.
Figure 11. Accuracy changes in module ablation experiments in the training stage.
Figure 12. Comparison of smoke segmentation results of different data. Subfigures (a–c) are GF-1 WFV data; subfigures (d–f) are HJ-2A/B CCD data.
15 pages, 5332 KiB  
Article
An Efficient and Lightweight Detection Model for Forest Smoke Recognition
by Xiao Guo, Yichao Cao and Tongxin Hu
Forests 2024, 15(1), 210; https://doi.org/10.3390/f15010210 - 21 Jan 2024
Cited by 3 | Viewed by 1536
Abstract
Massive wildfires have become more frequent, seriously threatening the Earth’s ecosystems and human societies. Recognizing smoke from forest fires is critical to extinguishing them at an early stage. However, edge devices have low computational accuracy and suboptimal real-time performance, which limits model inference and deployment. In this paper, we establish a forest smoke database and propose an efficient and lightweight model for forest smoke detection based on YOLOv8. Firstly, to improve the feature fusion capability in forest smoke detection, we fuse a simple yet efficient weighted feature fusion network into the neck of YOLOv8, which also greatly reduces the number of parameters and the computational load of the model. Then, the simple, parameter-free attention mechanism (SimAM) is introduced to address forest smoke images that may contain complex backgrounds and environmental disturbances; the detection accuracy of the model is improved without introducing additional parameters. Finally, we introduce focal modulation to increase attention on hard-to-detect smoke and improve the running speed of the model. The experimental results show that the mean average precision of the improved model is 90.1%, which is 3% higher than that of the original model. The number of parameters and the computational complexity of the model are 7.79 MB and 25.6 GFLOPs (giga floating-point operations), respectively, which are 30.07% and 10.49% less than those of the unimproved YOLOv8s. This model significantly outperforms other mainstream models on the self-built forest smoke detection dataset and also shows great potential in practical application scenarios.
(This article belongs to the Section Natural Hazards and Risk Management)
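SimAM, the parameter-free attention mentioned above, weights each position by an energy function of its deviation from the channel mean, adding no learnable parameters. A minimal sketch following the published formulation, with the regularizer set to the common default:

```python
# SimAM: parameter-free attention via an inverse energy term per position.
import torch

def simam(x: torch.Tensor, lam: float = 1e-4) -> torch.Tensor:
    # x: (B, C, H, W)
    n = x.shape[2] * x.shape[3] - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
    v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance
    e_inv = d / (4 * (v + lam)) + 0.5                   # inverse energy
    return x * torch.sigmoid(e_inv)                     # reweight positions

x = torch.randn(1, 16, 40, 40)
print(simam(x).shape)  # torch.Size([1, 16, 40, 40])
```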
Show Figures

Figure 1. The architecture of the smoke detection network based on improved YOLOv8s.
Figure 2. B1–B5 denote EfficientDet as the backbone network; P2–N5 denote BiFPN as the feature network.
Figure 3. Detailed explanation of context aggregation (b) in focal modulation (a). The aggregation procedure consists of two steps: hierarchical contextualization, which extracts contexts from local to global ranges at different levels of granularity, and gated aggregation, which condenses all context features at different granularity levels into the modulator.
Figure 4. SimAM structure diagram. Expanding the feature X generates 3D weights, which are normalized with a function. The weights of the target neurons are multiplied by the features of the initial feature map to obtain the final output feature map. The same color indicates that a single scalar is used for each point on that feature map.
Figure 5. Partial samples from the forest smoke dataset: (a) smoke generated before forest fires; (b) forest fire; (c) smoke from residential areas; (d) field smoke.
Figure 6. Comparison of smoke detection results before (right) and after (left) the improvement: (a,b) and (c,d) show the smoke detection results in residential areas and fields, respectively. Detection results at medium and small scales are also shown.
Figure 7. Comparison of the results for missed smoke detections: (a,c) the smoke detection results using our model; (b,d) the smoke detection results using the original YOLOv8s.
13 pages, 3657 KiB  
Article
ForestFireDetector: Expanding Channel Depth for Fine-Grained Feature Learning in Forest Fire Smoke Detection
by Long Sun, Yidan Li and Tongxin Hu
Forests 2023, 14(11), 2157; https://doi.org/10.3390/f14112157 - 30 Oct 2023
Cited by 1 | Viewed by 1357
Abstract
Wildfire is a pressing global issue that transcends geographic boundaries. Many areas, including China, are trying to cope with the threat of wildfires and manage limited forest resources. Effective forest fire detection is crucial, given its significant implications for ecological balance, social well-being and economic stability. In light of the problems of noise misclassification and manually designed components in current forest fire detection models, particularly their limited capability to identify subtle and unnoticeable smoke within intricate forest environments, this paper proposes an improved smoke detection model for forest fires built on YOLOv8. We expand the channel depth for fine-grained feature learning and retain more feature information, while lightweight convolution reduces the parameters of the model. The model enhances detection accuracy for smoke targets of varying scales and surpasses the accuracy of mainstream models. The experimental outcomes demonstrate that the improved model exhibits superior performance, with the mean average precision improved by 3.3%. The model significantly enhances detection ability while also making the neural network more lightweight. These advancements position the model as a promising solution for early-stage forest fire smoke detection.
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning Applications in Forestry)
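The SPD-Conv module shown in Figure 3 below replaces strided downsampling with a space-to-depth rearrangement followed by a non-strided convolution, so no pixel information is discarded. A hedged PyTorch sketch in which the channel sizes and activation are our assumptions:

```python
# SPD-Conv sketch: space-to-depth (PixelUnshuffle) keeps all pixels in the
# channel dimension, then a stride-1 convolution mixes them.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.spd = nn.PixelUnshuffle(scale)   # (C, H, W) -> (C*s^2, H/s, W/s)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch * scale ** 2, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.conv(self.spd(x))          # downsample without losing pixels

x = torch.randn(1, 64, 80, 80)
print(SPDConv(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```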
Show Figures

Figure 1. Typical pictures of smoke at different scales: (a) large percentage of target; (b) medium percentage of target; (c) small percentage of target.
Figure 2. Structure of YOLOv8.
Figure 3. Structure of the SPD-Conv module. X, X1 and X2 denote different stages of the feature maps; S represents the length and width of the original feature map; C indicates the number of channels in the feature map, representing its depth; N denotes the number of segmentation operations.
Figure 4. Structure of the GSConv module. C refers to channels; the box marked in pink refers to the Conv mentioned in the text; DWConv denotes the DSC operation; the purple circle with the letter C refers to the Concat operation.
Figure 5. Structure of the ForestFireDetector model.
Figure 6. The convergence process of training.
Figure 7. Detection results before (left) and after (right) the improvement: (a,b) smoke detection results at different target percentages; (c,d) missed and false detection results.
19 pages, 6635 KiB  
Article
Fire Detection and Geo-Localization Using UAV’s Aerial Images and Yolo-Based Models
by Kheireddine Choutri, Mohand Lagha, Souham Meshoul, Mohamed Batouche, Farah Bouzidi and Wided Charef
Appl. Sci. 2023, 13(20), 11548; https://doi.org/10.3390/app132011548 - 21 Oct 2023
Cited by 7 | Viewed by 2383
Abstract
The past decade has witnessed a growing demand for drone-based fire detection systems, driven by escalating concerns about wildfires exacerbated by climate change, as corroborated by environmental studies. However, deploying existing drone-based fire detection systems in real-world operational conditions poses practical challenges, notably the intricate and unstructured environments and the dynamic nature of UAV-mounted cameras, which often lead to false alarms and inaccurate detections. In this paper, we describe a two-stage framework for fire detection and geo-localization. The first key feature of the proposed work is the compilation of a large dataset from several sources to capture various visual contexts related to fire scenes, with the bounding boxes of the regions of interest labeled using three target classes, namely fire, non-fire, and smoke. The second is the investigation of YOLO models to undertake the detection and localization tasks. YOLO-NAS was retained as the best-performing model on the compiled dataset, with an average mAP50 of 0.71 and an F1_score of 0.68. Additionally, a fire localization scheme based on stereo vision was introduced, and the hardware implementation was executed on a drone equipped with a Pixhawk microcontroller. The test results were very promising and showed the ability of the proposed approach to contribute to a comprehensive and effective fire detection system.
(This article belongs to the Special Issue Deep Learning for Object Detection)
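The stereo-vision localization stage rests on the standard triangulation relation Z = f·B/d (focal length in pixels, baseline, disparity); a worked example with hypothetical camera parameters, not the paper's actual rig:

```python
# Stereo depth from disparity: Z = f * B / d.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth to a detected fire point from the disparity between two views."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 20 cm baseline, 8 px disparity.
print(stereo_depth_m(800.0, 0.20, 8.0))  # 20.0 m to the detected fire
```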
Show Figures

Figure 1. Proposed framework for fire detection and geo-localization.
Figure 2. Representative samples from the compiled dataset with original labels.
Figure 3. Training metrics using the YOLOv8 detector.
Figure 4. Example of confidence score diversity using the YOLOv8 detector.
Figure 5. Calibration process.
Figure 6. Re-projection error.
Figure 7. Fire detected in the images, with bounding boxes added.
Figure 8. Distance extraction.
Figure 9. Final UAV build.
Figure 10. Detection test 1.
24 pages, 3227 KiB  
Article
An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images
by Saydirasulov Norkobil Saydirasulovich, Mukhriddin Mukhiddinov, Oybek Djuraev, Akmalbek Abdusalomov and Young-Im Cho
Sensors 2023, 23(20), 8374; https://doi.org/10.3390/s23208374 - 10 Oct 2023
Cited by 16 | Viewed by 5271
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally. Identifying the smoke generated by forest fires is pivotal in facilitating the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke encounter persistent issues, including a slow identification rate, suboptimal detection accuracy, and challenges in distinguishing smoke originating from small sources. This study presents an enhanced YOLOv8 model customized to the context of unmanned aerial vehicle (UAV) images to address the challenges above and attain heightened detection precision. Firstly, the research incorporates Wise-IoU (WIoU) v3 as a bounding-box regression loss, supplemented by a reasonable gradient allocation strategy that prioritizes samples of common quality, enhancing the model’s capacity for precise localization. Secondly, the conventional convolution within the intermediate neck layer is substituted with the Ghost Shuffle Convolution mechanism, reducing model parameters and expediting the convergence rate. Thirdly, recognizing the challenge of inadequately capturing salient features of forest fire smoke within intricate wooded settings, this study introduces the BiFormer attention mechanism, which strategically directs the model’s attention towards the feature intricacies of forest fire smoke while suppressing the influence of irrelevant, non-target background information. The experimental findings highlight the enhanced YOLOv8 model’s effectiveness in smoke detection, with an average precision (AP) of 79.4%, a notable 3.3% enhancement over the baseline. The model’s performance extends to average precision small (APS) and average precision large (APL), registering robust values of 71.3% and 92.6%, respectively.
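Ghost Shuffle Convolution (GSConv), substituted into the neck here, produces half of its output channels with a standard convolution and half with a cheap depthwise convolution on that result, then shuffles the two groups together. A hedged sketch in which the kernel sizes and activation are our assumptions:

```python
# GSConv sketch: dense conv for half the channels, depthwise conv for the
# other half, concatenation, then a channel shuffle to mix the two groups.
import torch
import torch.nn as nn

class GSConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 1, s: int = 1):
        super().__init__()
        half = out_ch // 2
        self.dense = nn.Sequential(
            nn.Conv2d(in_ch, half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(half, half, 5, 1, 2, groups=half, bias=False),  # depthwise
            nn.BatchNorm2d(half), nn.SiLU())

    def forward(self, x):
        a = self.dense(x)
        b = self.cheap(a)
        y = torch.cat((a, b), dim=1)
        # Channel shuffle: interleave dense and depthwise channels.
        B, C, H, W = y.shape
        return y.view(B, 2, C // 2, H, W).transpose(1, 2).reshape(B, C, H, W)

x = torch.randn(1, 64, 40, 40)
print(GSConv(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```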
Show Figures

Figure 1. Overview of the proposed wildfire smoke detection system based on UAV images.
Figure 2. Overview of the proposed forest fire smoke detection system based on UAV images.
Figure 3. (a) Architecture of the BiFormer block; (b) architecture of the Bi-Level Routing Attention block.
Figure 4. Architecture of the GSConv model.
Figure 5. Illustrative samples from the forest fire smoke dataset: (a) instances of small smoke with concentrated attention at the center and reduced attention at the edges; (b) varying sizes of large and medium smoke occurrences; (c) non-smoke pictures taken under diverse weather situations such as cloudy and sunny; and (d) instances with low smoke density, posing challenges in discerning attributes such as edges, textures, and color. This collection offers a representation of smoke scenarios encountered in natural environments.
Figure 6. Example of qualitative evaluation of the forest fire smoke detection model: (a) large-size smoke; (b) small-size smoke.