Article

Forest Fire Detection Based on Spatial Characteristics of Surface Temperature

College of Forestry, Central South University of Forestry and Technology, Changsha 410004, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 2945; https://doi.org/10.3390/rs16162945
Submission received: 1 June 2024 / Revised: 31 July 2024 / Accepted: 9 August 2024 / Published: 12 August 2024
Figure 1. Overview map of the study area.
Figure 2. Vegetation area and DEM in Hunan Province.
Figure 3. Histogram of the frequency distribution of the surface temperatures in vegetation areas in Hunan Province on different dates.
Figure 4. Flowchart of the fire point detection algorithm.
Figure 5. Feature correlation heatmap at different moments during the daytime.
Figure 6. Scatter density plot of validation data for RF at different moments of the day.
Figure 7. Scatter density plot of reconstructed LST versus original LST.
Figure 8. LST of original vs. reconstructed vegetation area during daytime.
Figure 9. LST of original vs. reconstructed area at nighttime.
Figure 10. The result of fire point identification at 15:30 on 18 October 2022.
Figure 11. The result of fire point identification at 10:30 on 19 October 2022.
Figure 12. The result of fire point identification at 15:20 on 23 October 2022.
Figure 13. The results of fire detection.
Figure 14. Identification results of fire point image elements in Xintian County, Hunan Province, at four moments on 18 and 19 October 2022. (a) Mid-infrared 7th band of the Himawari-9 image and its brightness temperature. (b) Identification results of the algorithm of this study. (c) Results of the WLF fire point product.

Abstract
Amidst the escalating threat of global warming, which manifests in more frequent forest fires, the prompt and accurate detection of forest fires has become paramount. Current forest fire surveillance algorithms, such as fixed threshold algorithms, multi-channel threshold algorithms, and contextual algorithms, rely primarily on the degree of deviation between the pixel temperature and the background temperature to discern fire events. However, these algorithms typically fail to account for the spatial heterogeneity of the background temperature, causing low-temperature fire point pixels to be overlooked and impeding the detection of fires in their initial stages. To address this deficiency, the present study introduces a spatial feature-based (STF) method for forest fire detection, leveraging Himawari-8/9 imagery as the main data source, complemented by Shuttle Radar Topography Mission (SRTM) DEM data. The proposed method reconstructs the surface temperature by selecting the best-performing machine learning model and then identifies fire points using the difference between the reconstructed and observed surface temperatures, in tandem with a spatial contextual algorithm. The results confirm that the random forest model demonstrates superior efficacy in surface temperature reconstruction. Benchmarking the STF method against both the fire point datasets disseminated by the China Forest and Grassland Fire Prevention and Suppression Network (CFGFPN) and the Wild Land Fire (WLF) fire point product validation datasets from Himawari-8/9 yielded a zero omission error rate and a comprehensive evaluation index predominantly above 0.74.
These findings show that the STF method significantly improves the identification of lower-temperature fire point pixels, thereby enhancing the sensitivity of forest fire surveillance.

1. Introduction

Forest fires represent a type of natural disaster distinguished by their abrupt emergence, formidable destructive potential, and considerable difficulty in terms of containment. They not only jeopardize human life, well-being, and property but also exert substantial impacts on both the ecological environment and the dynamics of climate change [1,2]. The imperative of safeguarding human safety and preserving ecological integrity necessitates the early detection and prompt suppression of such fires. Therefore, the monitoring of forest fires is of exceptional significance [3]. Satellite remote sensing stands as an instrumental asset in this domain, owing to its expansive coverage and high temporal resolution. The range of forest fires discerned by these satellites encompasses not only natural occurrences caused by environmental factors but also anthropogenic fires that erupt in hilly terrain.
Remote sensing satellites can be classified into two categories according to their orbit: polar-orbiting and geostationary (synchronous) satellites. Polar-orbiting satellites are esteemed for their superior spatial and spectral resolutions, attributable to their comparatively low orbital altitude. Historically, in the early phase of forest fire detection via remote sensing, algorithmic development primarily harnessed the more refined datasets provided by polar-orbiting satellites [4,5,6,7]. Nonetheless, the inherently unpredictable nature of forest fires presents a challenge. The relatively infrequent observation of polar-orbiting satellites, commonly two to four overflights per day, is incongruent with the need for uninterrupted surveillance. By contrast, geostationary satellites are stable in their detection position and capable of real-time ground observation. However, their low spatial resolution constrains the accuracy of fire monitoring [8,9].
Forest fire detection principally leverages the sensitivity of mid-infrared and thermal infrared channels to thermal radiation, which exhibit substantial variability during fire episodes [10]. Spatially based fire detection algorithms, benefitting from the high resolution afforded by polar-orbiting satellite data, are divided into two types: fixed threshold and contextual algorithms [11]. The fixed threshold algorithm identifies fire pixels if one or more spectral bands exceed pre-specified thresholds [12,13]. However, the temporal dynamism and spatial heterogeneity of the surface temperature render the establishment of an optimal threshold a complex endeavor. Consequently, thresholds are often selected conservatively, drawing from empirical fire data [14,15]. This prudent approach frequently culminates in the non-recognition of smaller-scale fires, resulting in a high frequency of undetected events. To address the problem of threshold incompatibilities stemming from temporal and regional variabilities, the contextual algorithm discerns fire pixels by scrutinizing the disparity in the brightness temperature between the target pixel and its ambient background pixels. A target pixel is classified as a fire pixel only when it reaches a certain level of statistical divergence (e.g., mean and standard deviation) from the background pixels within its adaptive window [16,17,18,19,20]. Therefore, precision in estimating the background pixel brightness temperature is of essence. However, operational scenarios often involve cloud interference, which can introduce numerous cloud-affected pixels into the background window. Extant cloud detection algorithms cannot eliminate all clouds, creating discrepancies between the estimated and real background brightness temperatures. 
Moreover, when multiple pixels exhibit high-temperature anomalies, an individual threshold cannot effectively eliminate these aberrations, thereby inflating the actual background brightness temperature above the normal level and reducing the accuracy of fire point detection. Simultaneously, due to the requirements of most fire point identification scenarios, thresholds for contextual fire point identification are habitually set at elevated levels. This adjustment inadvertently neglects the capture of some subtler, low-temperature fire point pixels, culminating in missed detection [21,22,23,24].
Extant algorithms for fire point detection based on spatial features mainly use the difference in the brightness temperature between a target pixel and its background pixel to pinpoint fire activity. Consequentially, the accurate assessment of the background pixel brightness temperature is particularly crucial. In fact, this background pixel brightness temperature fundamentally mirrors fluctuations in the surface temperature, with which it shares a substantial correlation [10]. An approach to the reconstruction of the surface temperature within the thermal infrared spectrum has been adopted in attempts to reconstruct this background pixel brightness temperature. This method seeks to solve the problem posed by the inability of thermal infrared remote sensing to penetrate cloud cover, thus impeding the acquisition of surface temperature information where occlusion by clouds is present [25,26]. Efforts to reconstruct the surface temperatures underlying cloud coverage have largely revolved around three main points: spatial, multi-temporal, and spatiotemporal coupling. To begin with, spatial information-based methods interpolate the occluded surface temperature values using the available and proximate temperature values via techniques such as inverse distance weight interpolation [27], kriging interpolation [28], and spline interpolation [29]. Despite the efficacy of these methods in obtaining effective spatial information, their utility declines significantly under conditions of pervasive cloud cover. Secondly, multi-temporal methods mainly target an identical region and harness temporally variable data to reconstitute the absent pixel information [30,31,32]. These methods enable the reconstruction of the surface temperature through a temporal lens, although they may also incur significant errors in interpolation when confronted with rapidly shifting weather conditions. 
In contrast, methods predicated on the coupling of spatiotemporal information combine the advantages of the aforementioned two approaches. However, the complex non-linear interplay between the surface temperatures recorded by satellites across various spatiotemporal resolutions, particularly under protracted and widespread cloud, constrains the accuracy of the reconstructed surface temperatures [33]. An additional means of reconstructing surface temperatures involves the surface energy balance approach, based on the principle of combining physical process models of various surface parameters to recapture the data of missing pixels [34]. However, this method demands complex parametrization involving variables such as the temperature and wind speed, which poses considerable application challenges in practical applications [35]. In recent years, the continued evolution of machine learning has led to the application of support vector machines, random forest, and artificial neural networks to the realm of surface temperature reconstruction [26]. The surface temperature is greatly influenced by an array of factors that encompass solar radiation, topographic factors, vegetation cover, land use, and more. Thus, modeling the relationship between the surface temperature and each factor is pivotal, thereby enabling the reconstruction of the surface temperature information for aberrant pixels [36,37,38,39,40]. Utilizing machine learning methods to reconstruct surface temperatures not only accounts for the spatial heterogeneity of the surface but also provides a straightforward, expedient, and promising approach. Among the research data, the Himawari-8/9 data offer an exceedingly high temporal resolution, making them well suited for the continuous monitoring of forest fires. 
In addition, while the existing approaches to surface temperature reconstruction are predominantly based on MODIS datasets, the utilization of Himawari-8/9 data for such purposes remains relatively rare in scholarly studies.
Building upon the foundations laid by previous research, this paper delineates a fire detection method (STF) based on the spatial features of the surface temperature. The core aspects of this novel approach are multi-fold: the integration of a range of modeling factors intrinsically connected to the surface temperature; the meticulous selection of an optimal machine learning model, tailored to the nuanced reconstruction of the surface temperature; the identification of fires through a comparative analysis of the discrepancies between the model-reconstructed surface temperatures and actual surface temperatures; and the incorporation of spatial context algorithms, which augment the universal applicability of the proposed method, providing a mechanism for the joint detection of fires.

2. Materials

2.1. Study Area

Hunan Province, an inland province in China, is positioned to the south of the middle and lower reaches of the Yangtze River. As shown in Figure 1, its topography consists chiefly of mountainous and hilly landscapes, encircled by mountain ranges on the eastern, western, and southern frontiers. The central uplands undulate gently, in contrast to the flat expanse of the northern lake district, together creating an asymmetric horseshoe-shaped topography that opens to the northeast. Hunan has a subtropical monsoon climate with four distinct seasons, in which rainfall and high temperatures often coincide. The annual mean temperature fluctuates between 16 and 18.5 °C. July brings the peak summer heat, with the monthly average temperature rising above 27 °C across most areas, with the exception of some elevated mountain areas. January, in contrast, is the coldest month, with average temperatures ranging between 4 and 7 °C. With an annual precipitation of 1200 to 1800 mm, Hunan ranks among the rainiest provinces in China.
As of the end of 2020, the size of the forestry land area in Hunan Province amounted to 12,996,600 hectares (ha), boasting a forested area of 11,142,600 ha and a total standing timber volume of 618,467,600 cubic meters. The forest coverage rate stood at 59.96%. Hunan is recognized as one of the provinces with a high incidence of forest fires. The peak of these occurrences in Hunan is predominantly from February to April, with March witnessing the most frequent outbreaks. A secondary peak spans from October to January, while the months from May to September exhibit a lower incidence of such events [41]. Figure 1 shows an overview map of the research area, highlighting natural fire points documented from 2004 to 2021.

2.2. Data Resources and Data Pre-Processing

2.2.1. Data Resources

In this study, the primary dataset comprised Himawari-8/9 data, procured via the File Transfer Protocol (FTP) provided by the JMA (Japan Meteorological Agency). The dataset primarily consisted of L2 Network Common Data Form (NetCDF) resampled spectral files—a file format used for the storage of multidimensional scientific variables, such as the temperature, humidity, barometric pressure, wind speed, and direction [42]. These data are collected from a range of geostationary meteorological satellites, endowed with AHI, the preeminent optical sensing system globally. It provides coverage for 16 spectral channels, ranging from the visible to thermal infrared, delivering a temporal resolution of every 10 min and a mid-infrared spatial resolution of 2 × 2 km [19]. Complementarily, the SRTM DEM served as ancillary data for the forest fire detection algorithm, sourced from the U.S. Geological Survey (USGS) website (https://earthexplorer.usgs.gov/, accessed on 27 December 2023). To validate the fire point detection efficacy, GF-4 imagery was incorporated alongside the fire data released by the China Forest and Grassland Fire Prevention and Suppression Network (https://slcyfh.mem.gov.cn/, accessed on 21 February 2024) and the WLF fire point products of Himawari-8/9 released by the JMA. The GF-4 satellite data were accessed through the China Resource Satellite Application Center (https://data.cresda.cn/, accessed on 5 January 2024), featuring both a panchromatic and multispectral camera (PMS) with a 50 m resolution, as well as a mid-infrared camera (IRS) with a spatial resolution of 400 m.

2.2.2. Data Pre-Processing

(1) Cloud pixel and water pixel identification
Prior to the reconstruction of the surface temperature and the fire point identification, it was necessary to identify and exclude cloud and water pixels from the Himawari-8/9 data. The differentiation of cloud pixels relies fundamentally on their thermal radiation properties within the thermal infrared channel and their elevated reflectance in the visible channels. Since different cloud formations exhibit varying reflectance and brightness temperatures across diverse bands, cloud identification typically combines information from multiple bands to enhance the precision of identification. In this study, the multispectral threshold cloud detection algorithm developed by Du Pin et al. [43] is employed, which is adept in delineating dense, elevated, and mid-level cloud formations during daytime by utilizing differentiated reflectance and brightness temperature thresholds across distinct Himawari-8/9 data. Nighttime cloud detection capitalizes on the 15th band, where cloud pixels are inferred from brightness temperatures below 265 K, with adjustments tailored to the region’s specific climatic conditions.
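The daytime multispectral thresholds come from Du Pin et al. [43] and are not reproduced here, but the nighttime rule is simple enough to sketch directly. A minimal NumPy sketch of the nighttime test (the 265 K baseline is the paper's, adjustable to regional conditions; the function name is ours):

```python
import numpy as np

def cloud_mask_night(bt15, threshold_k=265.0):
    """Nighttime cloud test: band-15 brightness temperature below 265 K.

    The 265 K threshold is the baseline from the text and may be tuned
    to the region's climatic conditions.
    """
    return bt15 < threshold_k

# Toy 2x2 brightness-temperature grid (K)
bt15 = np.array([[270.0, 260.0],
                 [255.0, 280.0]])
mask = cloud_mask_night(bt15)  # True where the pixel is flagged as cloud
```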
Water absorbs more strongly in the near-infrared (NIR) bands, resulting in generally lower reflectance than that of terrestrial pixels. In this study, bands 6 and 4, in conjunction with the normalized difference vegetation index (NDVI), are used for the detection of water pixels. The water mask construction algorithm is shown in Equation (1).
B6 < 0.05 or (B4 < 0.15 and NDVI < 0)   (1)
where B3, B4, and B6 are bands 3, 4, and 6 of Himawari-8/9, respectively, and the NDVI calculation formula is shown in Equation (2).
NDVI = (B4 − B3) / (B4 + B3)   (2)
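Equations (1) and (2) translate into a few lines of NumPy (band names follow the text; the reflectance values below are purely illustrative):

```python
import numpy as np

def ndvi(b3, b4):
    # Equation (2): NDVI = (B4 - B3) / (B4 + B3)
    return (b4 - b3) / (b4 + b3)

def water_mask(b3, b4, b6):
    # Equation (1): B6 < 0.05 or (B4 < 0.15 and NDVI < 0)
    return (b6 < 0.05) | ((b4 < 0.15) & (ndvi(b3, b4) < 0))

# Illustrative reflectances: one water pixel, one vegetated land pixel
b3 = np.array([0.10, 0.05])    # red
b4 = np.array([0.05, 0.40])    # near-infrared
b6 = np.array([0.20, 0.20])
mask = water_mask(b3, b4, b6)  # [True, False]
```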
(2) Surface temperature reconstruction factor acquisition
Before reconstructing the surface temperature, modeling factors must be obtained from the Himawari-8/9 data and SRTM DEM data, and outlier rejection for these factors is imperative to ensure the accuracy of the data used for model training.
Modeling factor extraction. This paper selects several modeling factors related to the surface temperature based on prior studies and the features of the Himawari-8/9 dataset. Cloud-free, clear-sky pixels were extracted to reconstruct the surface temperatures. The longitude (Lon), latitude (Lat), normalized vegetation index (NDVI), solar azimuth (SOA), solar zenith angle (SOZ), elevation (ELV), and slope (SLP) are selected as independent variables. The surface temperature (LST) from the Himawari-8/9 data’s 7th band is designated as the dependent variable for model calibration. Among them, the SOA, SOZ, and LST are directly extracted from the respective bands of the Himawari-8/9 data. Lon and Lat are the central latitude and longitude of the corresponding pixels. The NDVI is derived from the calculation of bands 3 and 4, utilizing the maximum value composite method to reconstruct the up-to-date NDVI, circumventing cloud cover, which otherwise impedes pixel extraction. Given that the spatial resolution of the acquired SRTM DEM is 30 × 30 m, which needs alignment with the Himawari-8/9 LST’s 2 × 2 km resolution, the SLP is computed first, and then the ELV and SLP are resampled to 2 × 2 km using the bilinear interpolation method.
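The resampling step can be illustrated with a minimal bilinear resampler (a stand-in for the GIS tooling presumably used; the grid sizes below are toy values, not the real 30 m and 2 km grids):

```python
import numpy as np

def bilinear_resample(grid, out_shape):
    """Resample a 2-D array to out_shape by bilinear interpolation.

    A minimal stand-in for the resampling that brings the 30 m DEM/slope
    onto the 2 km Himawari grid.
    """
    rows, cols = grid.shape
    r_new, c_new = out_shape
    r_idx = np.linspace(0, rows - 1, r_new)  # source row coordinate per output row
    c_idx = np.linspace(0, cols - 1, c_new)  # source column coordinate per output column
    # Interpolate along columns first, then along rows
    tmp = np.array([np.interp(c_idx, np.arange(cols), row) for row in grid])
    out = np.array([np.interp(r_idx, np.arange(rows), tmp[:, j]) for j in range(c_new)]).T
    return out

elv = np.array([[0.0, 2.0],
                [4.0, 6.0]])
coarse = bilinear_resample(elv, (3, 3))
```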
Owing to the occurrence of fires exclusively in vegetative areas, only vegetated cover regions are considered for surface temperature reconstruction. This study utilizes the 30 × 30 m land use data of Hunan Province from the China Land Cover Dataset (CLCD) 2021, published by Wuhan University. This dataset, including forest, shrub, grass, and cropland areas, is resampled to 2 × 2 km using the nearest neighbor method. Subsequently, modeling factors within these vegetation areas are extracted for surface temperature reconstruction. To refine the final forest fire detection results, fire pixels located in farmland are discerned using the 30 m land use data and excluded accordingly. That is, if the identified 2 × 2 km fire pixels lack woodland, shrubs, or grassland in the 30 × 30 m land use data, the fire signal is disregarded. An illustrative example of a vegetative area and DEM in Hunan Province is presented in Figure 2.
Model data selection. Prior to model training, it is important to recognize that certain pixels, despite ostensibly clear-sky conditions, may exhibit anomalously elevated brightness temperatures, such as those arising from forest fires. These fires result in conspicuously high brightness temperatures within the affected pixels, which, if incorporated into the model, would degrade the training outcomes. Therefore, these anomalous pixels must be removed. Given the extensive sample size required for surface temperature reconstruction and the approximately normal distribution of the brightness temperatures of surface pixels, this study employs the Lajda criterion (3σ criterion) to filter out anomalously high-temperature pixels. As shown in Figure 3, the histogram represents the frequency distribution of the surface temperature in the vegetation area at a given moment on different dates in Hunan Province. Here, “Mean” represents the average value, “Std” the standard deviation, and “N” the total pixel count. The 3σ criterion is a prevalent tool for outlier detection, presupposing that the dataset at hand predominantly contains only random errors. It involves calculating the standard deviation to establish a range; any data points falling outside this range are considered outliers and excluded from the analysis [44]. The standard deviation, σ, is calculated using Equation (3).
σ = √( Σ_{i=1}^{n} (x_i − μ)² / (n − 1) )   (3)
where σ is the sample standard deviation; xi is the surface temperature value (LST) for each sample; μ is the mean value of the surface image brightness temperature; n is the number of samples.
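A sketch of the 3σ filter, assuming the vegetation-area LST pixels have been flattened into a 1-D array (the sample values are synthetic):

```python
import numpy as np

def sigma_filter(lst, k=3.0):
    """Drop LST samples deviating from the mean by more than k standard
    deviations (the Lajda / 3-sigma criterion of Equation (3))."""
    mu = lst.mean()
    sigma = lst.std(ddof=1)   # sample standard deviation, n - 1 in the denominator
    return lst[np.abs(lst - mu) <= k * sigma]

# 19 normal vegetation pixels around 296 K plus one fire-like 400 K outlier
lst = np.array([296.0] * 19 + [400.0])
clean = sigma_filter(lst)     # the 400 K pixel is rejected
```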

3. Methods

To solve the problems of imprecise background pixel brightness temperature calculation and the frequent oversight of low-temperature fire point pixels inherent in existing spatial feature fire detection algorithms, this study introduces a spatial feature-based fire point detection method (STF). This approach encompasses two principal components: the reconstruction of the surface temperature and the spatial feature-based identification of fire points. The detailed workflow of the method is depicted in Figure 4.

3.1. Surface Temperature Reconstruction Methods

Through the comparative analysis of the effectiveness of multiple linear regression (MLR), support vector machine regression (SVR), and random forest regression (RF) in surface temperature reconstruction, we selected the regression model exhibiting the highest accuracy to serve as the base model. Subsequently, this study investigated the performance of the chosen model in reconstructing the surface temperature at varying times throughout the day.

3.1.1. Regression Model

Multiple linear regression is used to build a linear relationship between multiple independent variables and a dependent variable [45]. In this study, Python was first used to calculate the Pearson correlation coefficients between each modeling factor and the LST. Modeling factors associated with coefficients of 0.7 or above were considered highly collinear and subsequently excluded from the independent variables [46]. The remaining modeling factors were then employed in the model’s training phase for subsequent prediction tasks. As depicted in Figure 5, the correlation heatmap illustrates the interplay between various factors under clear-sky conditions at different times of day, revealing variable correlation strengths influenced by complex dynamics such as transient solar angles. To mitigate the effects of multicollinearity on the results of the multiple linear regression analysis, and given that Lon and Lat are static factors while the solar azimuth (SOA) and solar zenith angle (SOZ) change over time, this study opts to exclude the time-varying variables SOA and SOZ when they display strong correlations with Lon and Lat, respectively. This approach is taken to ensure the accuracy of the eventual model training outcomes.
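The collinearity screening can be sketched as follows. The 0.7 threshold and the preference for dropping the time-varying SOA/SOZ over static Lon/Lat follow the text; the data and the helper name are ours:

```python
import numpy as np

def drop_collinear(X, names, threshold=0.7, prefer_drop=("SOA", "SOZ")):
    """Remove one factor of each pair with |Pearson r| >= threshold,
    preferring to drop the time-varying SOA/SOZ over static factors."""
    corr = np.corrcoef(X, rowvar=False)
    keep = list(range(X.shape[1]))
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                victim = j if names[j] in prefer_drop else i
                if victim in keep:
                    keep.remove(victim)
    return X[:, keep], [names[k] for k in keep]

# Synthetic factors: SOA is perfectly correlated with Lon, so it is dropped
lon = np.array([1.0, 2.0, 3.0, 4.0])
lat = np.array([4.0, 1.0, 3.0, 2.0])
soa = 2.0 * lon
X = np.column_stack([lon, lat, soa])
X_kept, kept_names = drop_collinear(X, ["Lon", "Lat", "SOA"])
```

The surviving factors would then feed the MLR fit for prediction.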
SVR seeks to establish an optimal classification hyperplane between two classes of samples while minimizing the error rate [47]. In this study, SVR regression with a radial basis function kernel is implemented using Python. The penalty coefficient “C” and the kernel function’s internal parameter “g” are optimized through automatic testing within a specified value range. RF regression performs the regression task by assembling multiple decision trees and synthesizing their outputs [48]. The optimal parameters for this study’s RF model are ascertained using cross-validation: the number of trees “n” is configured to 100, the minimum number of samples required to split an internal node “m1” is established at 5, the minimum number of samples required for leaf nodes “m2” is set to 8, and the maximum tree depth “d” is capped at 10. These parameters are chosen through multiple iterations to avoid overfitting and to enhance the accuracy of the training outcomes. In all three model regressions, this study sets the number of random seeds to 42.
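The reported RF configuration maps directly onto scikit-learn's RandomForestRegressor. The hyperparameter values and the random seed are those stated in the text; the training data below are synthetic stand-ins for the real modeling factors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hyperparameters as reported in the text (selected by cross-validation)
rf = RandomForestRegressor(
    n_estimators=100,       # number of trees "n"
    min_samples_split=5,    # "m1"
    min_samples_leaf=8,     # "m2"
    max_depth=10,           # "d"
    random_state=42,        # random seed used for all three models
)

# Synthetic training set: LST as a smooth function of two scaled factors
rng = np.random.default_rng(42)
X = rng.uniform(size=(200, 2))   # e.g. scaled ELV and NDVI
y = 300.0 - 5.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 0.1, 200)
rf.fit(X, y)
pred = rf.predict(X[:5])         # reconstructed LST, in K
```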

3.1.2. Evaluation of Surface Temperature Reconstruction Accuracy

The quantitative evaluation metrics for the accuracy of the surface temperature reconstruction results include the coefficient of determination (R2), the root mean square error (RMSE), and the mean absolute error (MAE). The R2 indicates the correlation between the reconstructed LST and the LST of the validation data, with values ranging from 0 to 1. An R2 approaching 1 indicates superior reconstruction performance; conversely, a lower R2 indicates a worse reconstruction effect. The RMSE tests the consistency of the reconstruction results with the validation data: a smaller RMSE indicates a smaller deviation between the reconstruction results and the validation data, implying higher model predictive accuracy and reduced errors. The MAE measures the size of the model error, similarly to the RMSE; a smaller MAE value reflects heightened model accuracy. Notably, the MAE is less influenced by outliers than the RMSE [49]. The formulas for the calculation of the R2, RMSE, and MAE are shown in Equations (4), (5), and (6), respectively.
R² = 1 − Σ_{i=1}^{n} (x̂_i − x_i)² / Σ_{i=1}^{n} (x_i − x̄)²   (4)
RMSE = √( (1/n) Σ_{i=1}^{n} (x̂_i − x_i)² )   (5)
MAE = (1/n) Σ_{i=1}^{n} |x̂_i − x_i|   (6)
where x̂_i is the reconstructed predicted value; x_i is the original validation value; x̄ is the mean of the original validation data; n is the number of validation samples.
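Equations (4)-(6) in NumPy (`pred` plays the role of the reconstructed values, `obs` of the validation values; the numbers are illustrative):

```python
import numpy as np

def r2(pred, obs):
    # Equation (4): 1 - residual sum of squares / total sum of squares
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(pred, obs):
    # Equation (5): root mean square error
    return np.sqrt(np.mean((pred - obs) ** 2))

def mae(pred, obs):
    # Equation (6): mean absolute error
    return np.mean(np.abs(pred - obs))

obs = np.array([295.0, 300.0, 305.0, 310.0])    # validation LST (K)
pred = np.array([296.0, 299.0, 305.0, 309.0])   # reconstructed LST (K)
```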

3.2. Spatial Feature Fire Point Recognition Algorithm Construction

3.2.1. Algorithm Construction

The spatial feature fire point identification algorithm in this study consists of two principal components. The first identifies fire point pixels by evaluating the discrepancy between the reconstructed surface pixel temperatures and the original pixel temperatures. The requisite preparatory step for this phase, the reconstruction of the surface pixel brightness temperatures under clear-sky conditions using the optimal model, is described in Section 3.1. The algorithm is detailed in Equations (7)–(9). The second component operates on a spatial contextual paradigm, leveraging the thermal contrast between the target pixel and its adjacent pixels to ascertain fire points; this method adapts well to diverse environmental conditions.
BT07 − BT07,predict − MAE07 > 10 K, or (BT07 − BT07,predict − MAE07 > 5 K and BT07 > 330 K)   (7)
BT07 − BT07,predict − MAE07 > 5 K and BT07 < 330 K   (8)
Among these, Equation (7) focuses on fire point pixels that manifest as markedly high temperature anomalies, while Equation (8) is designed for the identification of comparatively high-temperature fire point pixels. Due to the influence of solar radiation throughout the daytime, there exists large variance in the brightness temperatures of clear land pixels. At night, the discrepancy between pixels is considerably less pronounced. Consequently, Equation (9) supplants Equation (8) for the identification process during nighttime periods.
BT07 − BT07,predict − MAE07 > 3 K   (9)
where BT07 is the original 7th-band LST of Himawari-8/9; BT07,predict is the reconstructed 7th-band LST; MAE07 is the mean absolute error between the original and reconstructed LSTs.
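Equations (7)-(9) can be applied as one vectorized test. The grouping of the daytime branches follows our reading of the text (a marked-anomaly branch, a moderate daytime branch, and a nighttime branch); the numeric example is synthetic:

```python
import numpy as np

def fire_candidates(bt07, bt07_pred, mae07, daytime=True):
    """Reconstruction-difference fire test, Equations (7)-(9)."""
    diff = bt07 - bt07_pred - mae07
    strong = (diff > 10.0) | ((diff > 5.0) & (bt07 > 330.0))  # Eq. (7)
    if daytime:
        weak = (diff > 5.0) & (bt07 < 330.0)                  # Eq. (8)
    else:
        weak = diff > 3.0                                     # Eq. (9), night
    return strong | weak

bt07 = np.array([315.0, 307.0, 304.5])  # observed band-7 BT (K)
pred = np.full(3, 300.0)                # reconstructed BT (K)
day = fire_candidates(bt07, pred, mae07=1.0, daytime=True)
night = fire_candidates(bt07, pred, mae07=1.0, daytime=False)
```

Note the third pixel: a 3.5 K residual passes only the more sensitive nighttime test.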
Since surface temperature reconstruction depends on the use of an adequate sample size, a deficient sample size can lead to inaccuracies. This may create a substantial disparity between the reconstructed surface temperature and the real-world conditions, thus hindering accurate fire point detection. In the context of Hunan Province, the persistent cloud cover and changing weather conditions further limit the availability of clear-sky pixels necessary for robust surface temperature reconstruction. To mitigate this issue and enhance the fire point identification accuracy, this study employs a spatial context method alongside the surface temperature-based fire point identification reconstruction. The aggregate detection results from both methods culminate in the final fire point identification. This approach ensures that the algorithm maintains effective forest fire monitoring capabilities under variable circumstances. The specific implementation of the contextual method proceeds as follows. First, the identification of high-temperature suspicious image elements is achieved through the application of both fixed and dynamic thresholds, as outlined in Equation (10).
BT07 > 295 K and BT07 − BT07,mean11 > 5 K   (10)
where BT07 represents the brightness temperature value of the center pixel, and BT07,mean11 represents the average brightness temperature value of the pixels within an 11 × 11 window. The calculation excludes cloud and water pixels, as well as anomalous high-temperature pixels with brightness temperature values exceeding 315 K.
For the detection of fire point pixels, the procedure begins by establishing a high-temperature suspicious pixel as the center of the initial analysis window, which is set to a dimension of 5 × 5 pixels. In preparation for the calculation of the mean and standard deviation of the brightness temperature of the pixels within the window, any pixels affected by clouds or water, and those high-temperature pixels with brightness temperature values higher than 315 K, are omitted from consideration. If, after this exclusion, the number of background pixels in the window is less than 20% of the total number of pixels in the window, the window size is sequentially enlarged to 7 × 7, 9 × 9, …, 11 × 11. If the number of pixels in the window still does not meet this condition, the pixel is then eliminated from fire point judgment. The criterion governing the judgment of a pixel as a fire point is detailed in Equation (11).
BT07 > BT07,bg + 3 · δ
where BT07 represents the brightness temperature value of the high-temperature suspicious pixel; BT07,bg denotes the mean brightness temperature value of the background pixels; δ is the standard deviation of the brightness temperature of the background pixels; the factor “3” is the background coefficient threshold, which is adjusted to 2 for nighttime analysis.
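The adaptive-window test of Equation (11) could be sketched as below, assuming (since the text does not say) that the center pixel is excluded from the background statistics and that windows are clipped at image edges.

```python
import numpy as np

def is_fire_pixel(bt07, valid_mask, i, j, night=False, max_win=11, min_frac=0.2):
    """Apply the adaptive-window fire test of Equation (11) to one
    high-temperature suspicious pixel at (i, j).

    valid_mask: True for usable background pixels (cloud- and water-free,
    BT07 <= 315 K). The background coefficient 3 drops to 2 at night.
    """
    coeff = 2.0 if night else 3.0
    rows, cols = bt07.shape
    for win in range(5, max_win + 1, 2):      # 5x5, 7x7, 9x9, 11x11
        half = win // 2
        r0, r1 = max(i - half, 0), min(i + half + 1, rows)
        c0, c1 = max(j - half, 0), min(j + half + 1, cols)
        mask = valid_mask[r0:r1, c0:c1].copy()
        mask[i - r0, j - c0] = False          # exclude the centre pixel (assumption)
        n_total = (r1 - r0) * (c1 - c0)
        if mask.sum() < min_frac * n_total:
            continue                          # too few background pixels: enlarge window
        bg = bt07[r0:r1, c0:c1][mask]
        return bool(bt07[i, j] > bg.mean() + coeff * bg.std())
    return False                              # still too few background pixels at 11x11
```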

3.2.2. Evaluation of Accuracy of Fire Point Detection Results

This study randomly selected both daytime and nighttime images from various moments within the period of 17 to 19 October 2022 to detect fire activity across Hunan Province. To validate the accuracy of the identified fire points, data released by the China Forest and Grassland Fire Prevention and Suppression Network (CFGFPN)—which are derived from joint monitoring employing multi-source satellite data—were utilized. Furthermore, the Himawari-9 WLF fire point product was included to facilitate a comparative analysis, highlighting the efficacy of the algorithms used in this study.
The accuracy verification of the identified fires was carried out using the following metrics: the accuracy rate (P), the omission error rate (M), and the comprehensive evaluation index (F) [50]. The formulas for P, M, and F are delineated in Equations (12)–(14).
P = Yy / (Yy + Yn)
M = Ny / (Yy + Ny)
F = 2P(1 − M) / (1 + P − M)
where Yy is the count of actual fire incidents correctly detected by the algorithm; Yn is the number of fires incorrectly detected by the algorithm; Ny is the number of fires missed by the algorithm; P represents the proportion of fires detected by the algorithm that are real fires out of the total number of detected fires; M reflects the proportion of real fires not detected by the algorithm out of the total number of real fires.
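Equations (12)–(14) translate directly into code; a minimal sketch:

```python
def fire_metrics(y_correct, y_false, n_missed):
    """Accuracy rate P, omission error rate M, and comprehensive
    evaluation index F from Equations (12)-(14).

    y_correct = Yy (correct detections), y_false = Yn (false detections),
    n_missed = Ny (missed fires).
    """
    p = y_correct / (y_correct + y_false)
    m = n_missed / (y_correct + n_missed)
    f = 2 * p * (1 - m) / (1 + p - m)
    return p, m, f
```

For example, 8 correct detections, 2 false alarms, and no misses give P = 0.8, M = 0, and F ≈ 0.89.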

4. Results

4.1. Comparison Results of Surface Temperature Reconstruction Model Accuracy

Table 1 shows the prediction accuracy of different models on the validation data at 12:00 on selected dates. The analysis of the tabulated data reveals that the random forest (RF) model significantly outperforms both the support vector regression (SVR) and multiple linear regression (MLR) models. The RF model consistently achieves an R2 above 0.80, an RMSE around 1 K, and an MAE less than 0.8 K. On specific dates, such as 8 April 2022 and 8 August 2022, the prediction accuracy and model fit were impacted negatively by cloud cover, which resulted in the excessive retention of uncleared cloud pixels in the dataset. The MLR model’s results exhibit a larger range of R2 variation, fluctuating between 0.42 and 0.83, with an RMSE between 1.42 and 1.86 K and an MAE between 1.07 and 1.38 K. The SVR model’s results also show variation, with the R2 fluctuating between 0.37 and 0.82, an RMSE between 1.45 and 1.93 K, and an MAE between 1.04 and 1.32 K. Overall, the R2 of the MLR results is higher than that of the SVR results. In light of these findings, the RF model is identified as the most suitable model for surface temperature reconstruction.
To verify the applicability of the RF model for surface temperature reconstruction, its accuracy was evaluated at various times on a single day—specifically, 12 October 2022—against the validation data. As presented in Figure 6, the scatter plot of the RF surface temperature reconstruction validation data demonstrates that, during daytime hours, the RF model consistently shows relatively high R2 values and comparatively low RMSE and MAE values. While there is a notable decline in the R2 during the night, the RMSE and MAE exhibit little variation from those recorded during the day. This pattern indicates that the RF model maintains effectiveness in surface temperature reconstruction across different times of the day, proving competent in extracting thermal anomalies related to forest fires.
Figure 7 shows a scatter density plot of the reconstructed LST against the original LST, where N represents the number of pixels involved in the surface temperature reconstruction. This density plot allows for the visualization of the comparison between the predicted surface temperature and the effective clear-sky pixel temperature. During the daytime, the model’s prediction accuracy is indicated by an R2 exceeding 0.90, an RMSE less than 0.94 K, and an MAE under 0.70 K, affirming that the RF model effectively reconstructs the original surface temperature. The strong correlation between the reconstructed and original surface temperatures is markedly apparent. In nighttime conditions, the R2 is over 0.61, the RMSE is below 0.86 K, and the MAE is less than 0.66 K, showcasing a strong correlation between the predicted and original surface temperatures. To further assess the model’s accuracy, the dataset was divided into a training set comprising 80% and a test set consisting of the remaining 20%, followed by cross-validation. The high accuracy of the model meets the performance requirements for the modeling of the surface temperature over complex surfaces, providing a solid basis for subsequent fire point identification.
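The RF reconstruction and its 80/20 evaluation might be sketched as below with scikit-learn. The predictor columns here are synthetic stand-ins, not the study's actual modeling factors; the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the clear-sky training samples: four hypothetical
# predictors (e.g. elevation, NDVI, geometry terms) and an LST target in K.
X = rng.normal(size=(2000, 4))
y = 290 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=2000)

# 80/20 train/test split, as described for the accuracy assessment
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

# Evaluate with the same metrics reported in the paper
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
```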
Figure 8 and Figure 9 illustrate the LSTs of the original and reconstructed vegetation areas at different times of the day. A visual examination reveals that the reconstructed vegetation LSTs during the daytime are in close alignment with the original vegetation LSTs. Specifically, in a few western and southern parts of the original vegetation area at 9:00 a.m. (Figure 8a), where cloud cover generated anomalously elevated brightness temperatures, the reconstructed surface temperatures returned to normal. Later in the day, there were multiple anomalous high-temperature pixels in the LST of the original vegetation region at 15:00 (Figure 8c) and 17:00 (Figure 8d), which also returned to normal surface temperatures after reconstruction. At night, the distribution of the reconstructed LST closely mirrors that of the original LST. Referencing Figure 7, it can be seen that the R2 decreases at night, suggesting a diminishing fit of the modeling factors with the surface temperature and, consequently, a decrease in the accuracy of the reconstructed LST. Nonetheless, the nighttime reconstructed surface temperature, given its RMSE and MAE, remains viable for use in the remote sensing monitoring of forest fires.

4.2. Fire Point Detection Results

As an example, the forest fires that occurred in the southeastern and southern regions of Hunan Province between 17 and 24 October 2022 were monitored at multiple times. High-resolution GF-4 PMI imagery was used for visual interpretation to compare the fires identified in this study, thus verifying the accuracy of the identified forest fires. Subsequently, the algorithm developed in this study was applied to monitor the entire Hunan Province region at multiple time points from 17 to 19 October 2022. The results were compared with the Himawari-9 WLF fire point product introduced by the JMA and the fire point data published by the CFGFPN to validate the accuracy.

4.2.1. Validation of GF-4 Interpretation

As shown in Figure 10, Figure 11 and Figure 12, the results of fire point identification for different dates were verified by visual interpretation using the GF-4 PMI images. The presence of smoke from a forest fire can often obscure the underlying area, rendering it challenging to confirm the occurrence of a fire through visual interpretation alone. Hence, in combination with the IRS band of GF-4, the band combination of 6, 5, and 2 is utilized to jointly validate the fire identification results, as depicted in Figure 10a, Figure 11a and Figure 12a. Figure 10b, Figure 11b and Figure 12b represent the true-color image of GF-4 PMI synthesized from bands 4, 3, and 2 and the fire point monitoring outcomes. Figure 11(1),(2) and Figure 12(1),(2) are the GF-4 PMI images synthesized from bands 4, 2, and 1, alongside the corresponding fire point monitoring outcomes.
As shown in Figure 10, the primary forest fires occurred in the southeast and south of Hunan at 15:30 on 18 October 2022, with the inferred fire points demarcated by yellow outlines. Observations from the image reveal sizable fires at this time in the counties of Xintian and Yongxing, where significant volumes of white smoke are also present.
Due to the lower spatial resolution of Himawari-9, which does not match the finer 50 × 50 m resolution of GF-4 PMI, the fire contours generated by the identification process typically cover a larger area than the real affected area. During an active fire, thermal radiation emitted outward from the fire edge forms an extended temperature field around the fire point. The larger the fire, the stronger the thermal radiation, broadening this temperature field and increasing the number of affected peripheral pixels. This condition is likely to yield a detected fire point area that overestimates the actual size of the fire. Figure 11 shows the occurrence of forest fires in the southeastern and southern regions of Hunan Province on 19 October 2022 at 10:30 a.m. As seen in the figure, the fires in the counties of Xintian and Yongxing had significantly reduced since the previous day. The areas of fire expansion were seen radiating from the central zones outward, with dispersion and a reduced intensity evident in some areas of relatively small fires. The intensity of the red-colored regions in the imagery was also markedly diminished in comparison to the adjacent areas.
Figure 12 also depicts the forest fires on 23 October 2022, at 15:20, pinpointing ongoing fires in Xintian and Yongxing. Compared to the observations from 19 October, the affected areas within these counties were reduced. The visual interpretation of the forest fires in Yongxing County on 18, 19, and 23 October shows a significant reduction in the fire intensity.
In conclusion, the fire point identification method proposed in this study has proven its efficacy in monitoring forest fires. It is adept in overseeing extensive forest fires, as well as discerning isolated and scattered minor fires, thereby offering a reliable tool for the tracking of fire events as they occur.

4.2.2. Fire Point Detection Results and Accuracy Evaluation

In this study, the fire point detection method capitalizes on the mid-infrared seventh-band data from the Himawari-9 satellite and is implemented in the Python programming language. The detected fire point pixels are visualized using ArcGIS. The daytime base maps for this visualization are synthesized by integrating bands 7, 14, and 15 from the Himawari-9 satellite data. The fire point pixels are highlighted in yellow, and the detection results are shown in Figure 13.
The forest fire data from the CFGFPN were queried, and the extracted fire points were compared for accuracy verification. This verification involved aggregating the pixels of the detected fire points into counts at their respective locations. The validation results are presented in Table 2.
Table 2 demonstrates the efficacy of the forest and grassland fire detection within this study, with an omission error rate of 0. Conversely, Himawari-9’s WLF product exhibited some omission instances, with the highest rate reaching 0.5. The algorithm employed in this study achieved a comprehensive evaluation index consistently surpassing 0.74. An exception was observed for the fire points detected at 9:00 a.m. on 19 October 2022, where the index declined to 0.52, warranting further analysis to ascertain the causative factors behind this anomaly. One potential explanation for the observed discrepancy is that the algorithm utilized in this study identifies fire points by comparing the brightness temperature of the central pixel with that of the background pixels. Consequently, when a pixel exhibits an abnormal brightness temperature meeting the criteria for fire point identification, it is classified as a fire pixel. During such instances, both the accuracy rate and the comprehensive evaluation index for the algorithm, as well as for the Himawari-9 WLF product, trend downwards. This suggests the presence of numerous scattered high-temperature anomaly pixels at this particular moment, outnumbering the fire point pixels reported by the CFGFPN. Specifically, at 20:40 on 17 October 2022 and 9:00 on 19 October 2022, the accuracy rate of fire point identification in this study was lower compared to the WLF fire point product. However, the substantially lower omission error rate indicates that our algorithm detected a greater number of fire point pixels than the WLF product, suggesting its elevated sensitivity to thermal anomalies. Moreover, it is important to note that the data from the CFGFPN primarily rely on satellite identification and manual confirmation, which means that some relatively small fires identified by remote sensing satellites might not have been confirmed.
When comparing the forest fire data in the National Statistical Yearbook, we observed a notable disparity between the number of forest fires recorded in the yearbook and those announced by the CFGFPN, which suggests that the CFGFPN’s statistics are underestimated and fail to accurately reflect the actual number of forest fires.
To demonstrate the algorithm’s ability to detect more low-temperature fire pixels compared to Himawari-8/9’s WLF product, we tallied the results of fire pixel identification for six moments in Hunan Province using both our algorithm and the WLF product. The findings are summarized in Table 3.
The findings presented in Table 3 distinctly illustrate that the algorithm employed in this study detected a greater number of fire pixels. Further corroborating evidence can be found in Figure 14, which depicts Xintian County: in Figure 14a, the yellow area represents high-temperature anomalous pixels. Comparing the panels of Figure 14, it is evident that our algorithm successfully detected more low-temperature fire pixels than the WLF fire product.

5. Discussion

This study holds practical significance through its utilization of authentic wildfire data sourced from the CFGFPN. A comparison of the wildfire detection outcomes with those of the Himawari-8/9 WLF fire point product further underscores the superiority of the proposed fire point detection methodology over conventional approaches. The results demonstrate that the joint fire point detection method proposed in this study, which synergizes surface temperature reconstruction with contextual analysis, achieves higher detection accuracy than traditional spatially based methods. Traditional spatial contextual fire point detection algorithms, which primarily rely on the brightness temperature difference between the target pixel and its surrounding background pixels to identify fire points, demonstrate robust applicability [9,11], effectively monitoring forest fires even in conditions with substantial cloud cover. However, in the face of large-scale forest fires, they may falter in precisely estimating the background pixel brightness temperature, which can lead to the missed detection of some low-temperature fire point pixels. Furthermore, the thresholds set to cater to the requirements of most fire point identification situations may inadvertently overlook some relatively cooler fire pixel temperatures [10,15,19]. In response to these challenges, we introduce the new fire point detection method (STF) based on spatial features, which utilizes multiple modeling factors to reconstruct the surface temperature of each clear-sky pixel within the study area in real time and with minimal errors. This enhancement more accurately reflects the surface temperature under normal conditions and then uses the difference between the actual observed value and the predicted value to identify fire points.
Such an advancement addresses the issue of numerous fire point omissions caused by the inapplicability of the chosen thresholds due to regional differences, improving the accuracy per pixel. However, the practical application of this method relies on a sufficient sample size. Instances of extensive cloud cover or inadequacies in the number of samples can undermine the reconstruction of accurate surface temperatures and, consequently, the detection of fire points. Under natural conditions, fires typically necessitate multiple days of continuous solar radiation, i.e., an uninterrupted sequence of clear skies, which induce a gradual increase in temperature and a decrease in the moisture content of vegetation, ultimately triggering natural fires [51]. Therefore, this method is viable for the identification of natural fires, negating the need to account for anthropogenic fires arising amid cloudy weather.
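The core STF idea described above, flagging pixels whose observed LST exceeds the reconstructed "normal" LST, can be sketched as follows. The anomaly margin here is illustrative only; the paper does not state a specific value.

```python
import numpy as np

def stf_anomalies(lst_observed, lst_reconstructed, margin=10.0):
    """Flag thermal-anomaly pixels where the observed LST exceeds the
    model-reconstructed clear-sky LST by more than a margin (K).

    The margin of 10 K is a placeholder for illustration, not the
    study's actual decision rule.
    """
    return (lst_observed - lst_reconstructed) > margin
```

A pixel observed 20 K above its reconstructed background is flagged, while one within a degree of its prediction is not.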
The fire point recognition algorithm described in this study enhanced the accuracy of fire point identification to a certain extent. However, it also faces several challenges that warrant further exploration. First, the surface temperature reconstruction was tailored to the Hunan Province region, and scaling the approach to larger or even global regions requires additional investigation. Secondly, surface temperature reconstruction requires a sufficient sample size; the ways in which a varying sample size might affect the reconstruction accuracy of the surface temperature remain to be examined with a substantial number of samples collected at different times. Finally, the cloud removal algorithm for Himawari-8/9 should be further explored in depth, as both surface temperature reconstruction and contextual fire point identification rely on this step. The fidelity of cloud removal is a critical determinant of the effectiveness of fire point detection. Therefore, an automated and highly precise cloud removal algorithm is essential in future developments. Such an algorithm should not only effectively filter out evident cloud types, like high, thick, medium, and low clouds, but also be adept in eliminating thin and less obvious clouds.

6. Conclusions

In this paper, a spatial feature-based fire detection method (STF) is proposed, leveraging the high-temporal-resolution Himawari-8/9 as the primary data source, supplemented by SRTM DEM data. Traditional detection methods relying on fixed thresholds and contextual information tend to overlook the spatial heterogeneity of surface temperatures to some extent, leading to the missed detection of certain low-temperature fire point pixels. To address this limitation, our approach entails the reconstruction of the surface temperature using an optimized machine learning model. This model has demonstrated impressive reconstruction accuracy, with daytime and nighttime R2 values surpassing 0.90 and 0.61, respectively, and both the RMSE and MAE remaining under 1 K. Building upon this, the STF method then integrates the contextual algorithm. The validation of the detection results included comparisons with the GF-4 PMI visual interpretation and against both the fire point data of the CFGFPN and the WLF fire point products of Himawari-8/9. The findings show that the STF method effectively identifies forest fires, delivering an omission error rate of 0 and a comprehensive evaluation index generally over 0.74, surpassing the performance indices of the WLF product. Crucially, the STF approach enhances the ability to identify low-temperature fire point pixels, diminishing instances of omission and exhibiting elevated sensitivity for forest fire monitoring.

Author Contributions

Z.Y. conceived and designed the study; H.Y. wrote the first draft, analyzed the data, and collected all the study data; Z.Y., G.Z. and F.L. provided critical insights in editing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hunan Province (grant № 2022JJ31006), the Key Projects of Scientific Research of Hunan Provincial Education Department (grant № 22A0194), the Science and Technology Innovation Platform and Talent Plan Project of Hunan Province (Grant № 2017TP1022), and the Field Observation and Research Station of Dongting Lake Natural Resource Ecosystem, Ministry of Natural Resources.

Data Availability Statement

We thank the Japan Meteorological Agency for the Himawari-8/9 series data. We thank the China Forest and Grassland Fire Prevention and Suppression Network for providing fire data records. We thank the China Centre for Resources Satellite Data and Application for providing the GF series datasets.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, S.; Collier, S.; Jaffe, D.A.; Briggs, N.L.; Hee, J.; Sedlacek, A.J., III; Kleinman, L.; Onasch, T.B.; Zhang, Q. Regional influence of wildfires on aerosol chemistry in the western US and insights into atmospheric aging of biomass burning organic aerosol. Atmos. Chem. Phys. 2017, 17, 2477–2493. [Google Scholar] [CrossRef]
  2. Chowdhury, E.H.; Hassan, Q.K. Operational perspective of remote sensing-based forest fire danger forecasting systems. ISPRS J. 2015, 104, 224–236. [Google Scholar] [CrossRef]
  3. Hao, X.J.; Qu, J.J. Retrieval of real-time live fuel moisture content using MODIS measurements. Remote Sens. Environ. 2007, 108, 130–137. [Google Scholar] [CrossRef]
  4. Lasaponara, R.; Cuomo, V.; Macchiato, M.F.; Simoniello, T. A self-adaptive algorithm based on AVHRR multitemporal data analysis for small active fire detection. Int. J. Remote Sens. 2003, 24, 1723–1749. [Google Scholar] [CrossRef]
  5. Chand, T.R.K.; Badarinath, K.V.S.; Murthy, M.S.R.; Raishekhar, G.; Elvidge, C.D.; Tuttle, B.T. Active forest fire monitoring in Uttaranchal State, India using multi-temporal DMSP-OLS and MODIS data. Int. J. Remote Sens. 2007, 28, 2123–2132. [Google Scholar] [CrossRef]
  6. Gong, A.; Li, J.; Chen, Y. A Spatio-Temporal Brightness Temperature Prediction Method for Forest Fire Detection with MODIS Data: A Case Study in San Diego. Remote Sens. 2021, 13, 2900. [Google Scholar] [CrossRef]
  7. Masocha, M.; Dube, T.; Mpofu, N.T.; Chimunhu, S. Accuracy assessment of MODIS active fire products in southern African savannah woodlands. Afr. J. Ecol. 2018, 56, 563–571. [Google Scholar] [CrossRef]
  8. Zhang, T.; De Jong, M.C.; Wooster, M.J.; Xu, W.; Wang, L. Trends in eastern China agricultural fire emissions derived from a combination of geostationary (Himawari) and polar (VIIRS) orbiter fire radiative power products. Atmos. Chem. Phys. 2020, 20, 10687–10705. [Google Scholar] [CrossRef]
  9. Jang, E.; Kang, Y.; Im, J.; Lee, D.W.; Yoon, J.; Kim, S.K. Detection and monitoring of forest fires using Himawari-8 geostationary satellite data in South Korea. Remote Sens. 2019, 11, 271. [Google Scholar] [CrossRef]
  10. Xie, Z.; Song, W.; Ba, R.; Li, X.; Long, X. A spatiotemporal contextual model for forest fire detection using Himawari-8 satellite data. Remote Sens. 2018, 10, 1992. [Google Scholar] [CrossRef]
  11. Lin, Z.; Chen, F.; Niu, Z.; Li, B.; Yu, B.; Jia, H. An active fire detection algorithm based on multi-temporal FengYun-3C VIRR data. Remote Sens. Environ. 2018, 211, 376–387. [Google Scholar] [CrossRef]
  12. Gautam, R.S.; Singh, D.; Mittal, A. An efficient contextual algorithm to detect subsurface fires with NOAA/AVHRR data. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2005–2015. [Google Scholar] [CrossRef]
  13. Kawano, K.; Kudoh, J.I.; Makino, S. Forest fire detection in Far East region of Russia by using NOAA AVHRR images. In Proceedings of the IEEE 1999 International Geoscience and Remote Sensing Symposium, Hamburg, Germany, 28 June–2 July 1999; Volume 2, pp. 858–860. [Google Scholar] [CrossRef]
  14. Lin, L.; Meng, Y.; Yue, A.; Yuan, Y.; Liu, X.; Chen, J.; Zhang, M. A spatio-temporal model for forest fire detection using HJ-IRS satellite data. Remote Sens. 2016, 8, 403. [Google Scholar] [CrossRef]
  15. Roberts, G.; Wooster, M.J. Development of a multi-temporal Kalman filter approach to geostationary active fire detection & fire radiative power (FRP) estimation. Remote Sens. Environ. 2014, 152, 392–412. [Google Scholar] [CrossRef]
  16. Giglio, L.; Descloitres, J.; Justice, C.O.; Kaufman, Y.J. An enhanced contextual fire detection algorithm for MODIS. Remote Sens. Environ. 2003, 87, 273–282. [Google Scholar] [CrossRef]
  17. Giglio, L.; Schroeder, W.; Justice, C.O.; Christopher, O.J. The collection 6 MODIS active fire detection algorithm and fire products. Remote Sens. Environ. 2016, 178, 31–41. [Google Scholar] [CrossRef]
  18. Liu, X.; He, B.; Quan, X.; Yebra, M.; Qiu, S.; Yin, C.; Liao, Z.; Zhang, H. Near real-time extracting wildfire spread rate from Himawari-8 satellite data. Remote Sens. 2018, 10, 1654. [Google Scholar] [CrossRef]
  19. Zhang, D.; Huang, C.; Gu, J.; Zhang, Y.; Han, W.; Peng, D.; Feng, Y. Real-Time Wildfire Detection Algorithm Based on VIIRS Fire Product and Himawari-8 Data. Remote Sens. 2023, 15, 1541. [Google Scholar] [CrossRef]
  20. Zheng, W.; Chen, J.; Liu, C.; Shan, T.; Yan, H. Study of the Application of FY-3D/MERSI-II Far-Infrared Data in Wildfire Monitoring. Remote Sens. 2023, 15, 4228. [Google Scholar] [CrossRef]
  21. Hally, B.; Wallace, L.; Reinke, K.; Jones, S.; Engel, C.; Skidmore, A. Estimating Fire Background Temperature at a Geostationary Scale—An Evaluation of Contextual Methods for AHI-8. Remote Sens. 2018, 10, 1368. [Google Scholar] [CrossRef]
  22. Hally, B.; Wallace, L.; Reinke, K.; Jones, S.; Skidmore, A. Advances in active fire detection using a multi-temporal method for next-generation geostationary satellite data. Int. J. Digit. Earth 2019, 12, 1030–1045. [Google Scholar] [CrossRef]
  23. Ding, Y.; Wang, M.; Fu, Y.; Zhang, L.; Wang, X. A Wildfire Detection Algorithm Based on the Dynamic Brightness Temperature Threshold. Forests 2023, 14, 477. [Google Scholar] [CrossRef]
  24. Liu, C.; Chen, R.; He, B. Integrating Machine Learning and a Spatial Contextual Algorithm to Detect Wildfire from Himawari-8 Data in Southwest China. Forests 2023, 14, 919. [Google Scholar] [CrossRef]
  25. Xiao, Y.; Ma, M.; Wen, J.; Yu, W. Progress in land surface temperature retrieval over complex surface. Remote Sens. Technol. Appl. 2021, 36, 33–43. [Google Scholar]
  26. Bartkowiak, P.; Crespi, A.; Niedrist, G.; Zanotelli, D.; Colombo, R.; Notarnicola, C. Land surface temperature reconstruction under long-term cloudy-sky conditions at 250 m spatial resolution: Case study of Vinschgau/Venosta Valley in the european Alps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2037–2057. [Google Scholar] [CrossRef]
  27. Ozelkan, E.; Bagis, S.; Ozelkan, E.C.; Ustundag, B.B.; Yucel, M.; Ormeci, C. Spatial interpolation of climatic variables using land surface temperature and modified inverse distance weighting. Int. J. Remote Sens. 2015, 36, 1000–1025. [Google Scholar] [CrossRef]
  28. Mukherjee, S.; Joshi, P.K.; Garg, R.D. Regression-Kriging technique to downscale satellite-derived land surface temperature in heterogeneous agricultural landscape. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1245–1250. [Google Scholar] [CrossRef]
  29. Guillemot, C.; Le Meur, O. Image inpainting: Overview and recent advances. IEEE Signal Process. Mag. 2013, 31, 127–144. [Google Scholar] [CrossRef]
  30. Wu, P.; Yin, Z.; Zeng, C.; Duan, S.B.; Göttsche, F.M.; Ma, X.; Li, X.; Yang, H.; Shen, H. Spatially continuous and high-resolution land surface temperature product generation: A review of reconstruction and spatiotemporal fusion techniques. IEEE Geosci. Remote Sens. Mag. 2021, 9, 112–137. [Google Scholar] [CrossRef]
  31. Zeng, C.; Long, D.; Shen, H.; Cui, Y.; Hong, Y. A two-step framework for reconstructing remotely sensed land surface temperatures contaminated by cloud. ISPRS J. Photogramm. Remote Sens. 2018, 141, 30–45. [Google Scholar] [CrossRef]
  32. Ghafarian Malamiri, H.R.; Rousta, I.; Olafsson, H.; Zare, H.; Zhang, H. Gap-filling of MODIS time series land surface temperature (LST) products using singular spectrum analysis (SSA). Atmosphere 2018, 9, 334. [Google Scholar] [CrossRef]
  33. Wu, P.; Yin, Z.; Yang, H.; Wu, Y.; Ma, X. Reconstructing geostationary satellite land surface temperature imagery based on a multiscale feature connected convolutional neural network. Remote Sens. 2019, 11, 300. [Google Scholar] [CrossRef]
  34. Jin, M.; Dickinson, R.E. Interpolation of surface radiative temperature measured from polar orbiting satellites to a diurnal cycle: 2. Cloudy-pixel treatment. J. Geophys. Res. Atmos. 2000, 105, 4061–4076. [Google Scholar] [CrossRef]
  35. Zhang, X.; Zhou, J.; Wang, D. A practical reanalysis data and thermal infrared remote sensing data merging (RTM) method for reconstruction of a 1-km all-weather land surface temperature. Remote Sens. Environ. 2021, 260, 112437. [Google Scholar] [CrossRef]
  36. Zhao, W.; Duan, S.B. Reconstruction of daytime land surface temperatures under cloud-covered conditions using integrated MODIS/Terra land products and MSG geostationary satellite data. Remote Sens. Environ. 2020, 247, 111931. [Google Scholar] [CrossRef]
  37. Xiao, Y.; Zhao, W.; Ma, M.; He, K. Gap-free LST generation for MODIS/Terra LST product using a random forest-based reconstruction method. Remote Sens. 2021, 13, 2828. [Google Scholar] [CrossRef]
  38. Sarafanov, M.; Kazakov, E.; Nikitin, N.O.; Kalyuzhnaya, A.V. A machine learning approach for remote sensing data gap-filling with open-source implementation: An example regarding land surface temperature, surface albedo and NDVI. Remote Sens. 2020, 12, 3865. [Google Scholar] [CrossRef]
  39. Chen, D.; Zhuang, Q.; Zhu, L.; Zhang, W.; Sun, T. Generating Daily Gap-Free MODIS Land Surface Temperature Using the Random Forest Model and Similar Pixels Method. IEEE Access 2023. [Google Scholar] [CrossRef]
  40. Wu, Z.; Teng, H.; Chen, H.; Han, L.; Chen, L. Reconstruction of Gap-Free Land Surface Temperature at a 100 m Spatial Resolution from Multidimensional Data: A Case in Wuhan, China. Sensors 2023, 23, 913. [Google Scholar] [CrossRef]
  41. Xu, H.; Zhang, G.; Zhou, Z.; Zhou, Z.; Zhou, X.; Zhang, J.; Zhou, C. Development of a Novel Burned-Area Subpixel Mapping (BASM) Workflow for Fire Scar Detection at Subpixel Level. Remote Sens. 2022, 14, 3546. [Google Scholar] [CrossRef]
  42. Rew, R.; Davis, G. NetCDF: An interface for scientific data access. IEEE Comput. Graph. Appl. 1990, 10, 76–82. [Google Scholar] [CrossRef]
  43. Du, P.; Liu, M.; Xu, T.; Song, Y. Application of Himawari-8 Data in Monitoring Forest Fire. Acta Sci. Nat. Univ. Pekin. 2018, 54, 1251–1258. [Google Scholar] [CrossRef]
  44. Siegmund, D. Error Probabilities and Average Sample Number of the Sequential Probability Ratio Test. J. R. Stat. Soc. Ser. B 1975, 37, 394–401. [Google Scholar] [CrossRef]
  45. Lederer, J. Linear Regression. In Fundamentals of High-Dimensional Statistics: With Exercises and R Labs; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 37–79. [Google Scholar] [CrossRef]
  46. Yakunina, R.P.; Bychkov, G.A. Correlation Analysis of the Components of the Human Development Index Across Countries. Procedia Econ. Financ. 2015, 24, 766–771. [Google Scholar] [CrossRef]
  47. Che, J.; Ding, M.; Zhang, Q.; Wang, Y.; Sun, W.; Wang, Y.; Wang, L.; Huai, B. Reconstruction of Near-Surface Air Temperature over the Greenland Ice Sheet Based on MODIS Data and Machine Learning Approaches. Remote Sens. 2022, 14, 5775. [Google Scholar] [CrossRef]
  48. Genuer, R.; Poggi, J.-M. Random forests. In Use R! Hornik, K., Parmigiani, G., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2020. [Google Scholar] [CrossRef]
  49. Bao, Y.; Yang, Y. Research on long-term Gap-Free Land surface temperature reconstruction method. Remote Sens. Technol. Appl. 2023. Available online: https://kns.cnki.net/kcms2/article/abstract?v=VKFFl0Cm57ZQw557etAkieXq4nHqtV3AF80167vDGN7yVB-Irz91DXBqCrVEUqSDCh_FCQU_KNM8i0u7EdOhxXfbooDEVB2xd6MFfUyCruGKgzE2eTflw4cyxiWRa-NL24K43mXZs0M=&uniplatform=NZKPT&language=CHS (accessed on 31 May 2024).
  50. Deng, Z.; Zhang, G. An improved forest fire monitoring algorithm with three-dimensional Otsu. IEEE Access 2021, 9, 118367–118378. [Google Scholar] [CrossRef]
  51. Ying, L.; Han, J.; Du, Y.; Shen, Z. Forest fire characteristics in China: Spatial patterns and determinants with thresholds. For. Ecol. Manag. 2018, 424, 345–354. [Google Scholar] [CrossRef]
Figure 1. Overview map of the study area.
Figure 2. Vegetation area and DEM in Hunan Province.
Figure 3. Histogram of the frequency distribution of the surface temperatures in vegetation areas in Hunan Province on different dates.
Figure 4. Flowchart of the fire point detection algorithm.
Figure 5. Feature correlation heatmap at different moments during the daytime.
Figure 6. Scatter density plot of validation data for RF at different moments of the day.
Figure 7. Scatter density plot of reconstructed LST versus original LST.
Figure 8. LST of original vs. reconstructed vegetation area during daytime.
Figure 9. LST of original vs. reconstructed area at nighttime.
Figure 10. The result of fire point identification at 15:30 on 18 October 2022.
Figure 11. The result of fire point identification at 10:30 on 19 October 2022.
Figure 12. The result of fire point identification at 15:20 on 23 October 2022.
Figure 13. The results of fire detection.
Figure 14. Identification results of fire point image elements in Xintian County, Hunan Province, at four moments on 18 and 19 October 2022. (a) Mid-infrared band 7 of the Himawari-9 image and its brightness temperature. (b) Identification results of the algorithm in this study. (c) Results of the WLF fire point product.
Table 1. Predictive accuracy of different models at different dates.

Model | Date            | R²     | RMSE (K) | MAE (K)
MLR   | 8 April 2022    | 0.4219 | 1.8508   | 1.3763
MLR   | 8 August 2022   | 0.4790 | 1.6693   | 1.3075
MLR   | 12 October 2022 | 0.8210 | 1.4293   | 1.0793
MLR   | 30 January 2023 | 0.6386 | 1.5496   | 1.1117
SVR   | 8 April 2022    | 0.3718 | 1.9294   | 1.3154
SVR   | 8 August 2022   | 0.5487 | 1.5536   | 1.2048
SVR   | 12 October 2022 | 0.8147 | 1.4542   | 1.0470
SVR   | 30 January 2023 | 0.6400 | 1.5466   | 1.0407
RF    | 8 April 2022    | 0.8006 | 1.0869   | 0.7891
RF    | 8 August 2022   | 0.8151 | 0.9946   | 0.7343
RF    | 12 October 2022 | 0.9139 | 0.9911   | 0.7425
RF    | 30 January 2023 | 0.8532 | 0.9878   | 0.7393
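The R², RMSE, and MAE columns in Table 1 follow their standard definitions. As a minimal sketch of how such scores are obtained from predicted versus observed LST values (the function name `regression_metrics` is illustrative, not from the paper):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (R^2, RMSE, MAE) for predicted vs. observed values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)                    # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual ** 2))
    mae = np.mean(np.abs(residual))
    return r2, rmse, mae
```

A perfect model yields R² = 1 with RMSE = MAE = 0; the RF rows in Table 1 come closest to that ideal.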
Table 2. Evaluation of the accuracy of the fire point detection results.

Imaging Time          | P (STF) | P (WLF) | M (STF) | M (WLF) | F (STF) | F (WLF)
17 October 2022 14:00 | 0.67    | 0.60    | 0       | 0.25    | 0.80    | 0.67
17 October 2022 20:40 | 0.58    | 0.64    | 0       | 0       | 0.74    | 0.78
18 October 2022 15:30 | 0.60    | 0.52    | 0       | 0.08    | 0.75    | 0.73
18 October 2022 18:40 | 0.82    | 0.82    | 0       | 0       | 0.90    | 0.90
19 October 2022 10:30 | 0.35    | 0.38    | 0       | 0.50    | 0.52    | 0.43
19 October 2022 22:00 | 1.00    | 0.60    | 0       | 0.25    | 1.00    | 0.67
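Assuming the P, M, and F columns in Table 2 denote precision, omission (miss) rate, and F-score — the usual fire-detection accuracy measures — they can be derived from true-positive, false-positive, and false-negative fire-pixel counts. The function below is an illustrative sketch under that assumption; its name and signature are not from the paper:

```python
def detection_scores(tp, fp, fn):
    """Precision, miss (omission) rate, and F1 score from fire-pixel counts.

    tp: fire pixels correctly detected
    fp: pixels flagged as fire that are not fire
    fn: fire pixels the algorithm missed
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    miss = 1.0 - recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, miss, f1
```

For example, 8 correct detections with 2 false alarms and 2 missed pixels gives P = 0.8, M = 0.2, F = 0.8.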
Table 3. Statistics of the number of fire point image elements detected by the algorithm.

Imaging Time          | STF | WLF
17 October 2022 14:00 | 25  | 23
17 October 2022 20:40 | 46  | 35
18 October 2022 15:30 | 134 | 108
18 October 2022 18:40 | 143 | 125
19 October 2022 10:30 | 44  | 14
19 October 2022 22:00 | 30  | 13
Yao, H.; Yang, Z.; Zhang, G.; Liu, F. Forest Fire Detection Based on Spatial Characteristics of Surface Temperature. Remote Sens. 2024, 16, 2945. https://doi.org/10.3390/rs16162945