Search Results (740)

Search Parameters:
Keywords = MODIS land cover

26 pages, 5700 KiB  
Article
Remote Sensing-Based Drought Monitoring in Iran’s Sistan and Balouchestan Province
by Kamal Omidvar, Masoume Nabavizadeh, Iman Rousta and Haraldur Olafsson
Atmosphere 2024, 15(10), 1211; https://doi.org/10.3390/atmos15101211 - 10 Oct 2024
Abstract
Drought is a natural phenomenon that has adverse effects on agriculture, the economy, and human well-being. The primary objective of this research was to comprehensively understand the drought conditions in Sistan and Balouchestan Province from 2002 to 2017 from two perspectives: vegetation cover and hydrology. To achieve this goal, the study utilized MODIS satellite data in the first part to monitor vegetation cover as an indicator of agricultural drought. In the second part, GRACE satellite data were employed to analyze changes in groundwater resources as an indicator of hydrological drought. To assess vegetation drought, four indices were used: Vegetation Health Index (VHI), Vegetation Drought Index (VDI), Visible Infrared Drought Index (VSDI), and Temperature Vegetation Drought Index (TVDI). To validate the vegetation drought indices, they were compared with Global Land Data Assimilation System (GLDAS) precipitation data. The vegetation indices showed a strong, statistically significant correlation with GLDAS precipitation data in most regions of the province. Among all indices, the VHI showed the highest correlation with precipitation (moderate (0.3–0.7) in 51.7% and strong (≥0.7) in 45.82% of the land area). The output of the vegetation indices revealed that the study province has experienced widespread drought in recent years. The results showed that the southern and central regions of the province have faced more severe drought classes. In the second part of this research, hydrological drought monitoring was conducted in fifty third-order sub-basins located within the study province using the Total Water Storage (TWS) deficit, Drought Severity, and the Total Storage Deficit Index (TSDI).
Annual average calculations of the TWS deficit over the period from April 2012 to 2016 indicated a substantial depletion of groundwater reserves in the province, amounting to a cumulative loss of 12.2 km3. Analysis results indicate that drought severity increased continuously in all study basins until the end of the study period. Studies have shown that all the studied basins are facing severe and prolonged water scarcity. Among the 50 studied basins, the Rahmatabad basin, located in the semi-arid northern regions of the province, has experienced the most severe drought. This basin has experienced five drought events, including one lasting 89 consecutive months and causing a reduction of more than 665.99 km3 of water in month 1, placing it in a critical condition. On the other hand, the Niskoofan Chabahar basin, located in the tropical southern part of the province near the Sea of Oman, has experienced the lowest reduction in water volume, with 10 drought events and a decrease of approximately 111.214 km3 in month 1. However, even this basin has not been spared from prolonged droughts. Analysis of drought index graphs across different severity classes confirmed that all watersheds experienced drought conditions, particularly in the later years of this period. Data analysis revealed a severe water crisis in the province. Urgent and coordinated actions are needed to address this challenge. Transitioning to drought-resistant crops, enhancing irrigation efficiency, and securing water rights are essential steps towards a sustainable future. Full article
(This article belongs to the Section Meteorology)
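The vegetation indices above all rescale a pixel's NDVI and LST time series against their multi-year extremes. As a rough illustration of the idea, here is a minimal sketch of the standard VHI formulation (VHI = α·VCI + (1 − α)·TCI); the toy NDVI/LST arrays and the equal weighting α = 0.5 are assumptions for illustration, not the authors' processing chain.

```python
import numpy as np

def vegetation_health_index(ndvi, lst, alpha=0.5):
    """Standard VHI formulation: VHI = alpha*VCI + (1-alpha)*TCI.

    ndvi, lst: arrays of shape (time, ...) holding an NDVI and an LST
    time series per pixel. VCI/TCI scale each pixel between its
    multi-year minimum and maximum (0 = worst stress, 100 = best).
    """
    ndvi_min, ndvi_max = ndvi.min(axis=0), ndvi.max(axis=0)
    lst_min, lst_max = lst.min(axis=0), lst.max(axis=0)
    vci = 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)
    tci = 100.0 * (lst_max - lst) / (lst_max - lst_min)  # hot = stressed
    return alpha * vci + (1.0 - alpha) * tci

# Toy example: 3 time steps, 2 pixels (NDVI rising, LST falling).
ndvi = np.array([[0.2, 0.6], [0.4, 0.7], [0.6, 0.8]])
lst = np.array([[310.0, 300.0], [305.0, 298.0], [300.0, 295.0]])
vhi = vegetation_health_index(ndvi, lst)
```

Pixel 0 starts at its driest/hottest state (VHI = 0) and ends at its greenest/coolest (VHI = 100), which is how these indices express relative drought severity per pixel.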
28 pages, 5528 KiB  
Article
Estimating Rootzone Soil Moisture by Fusing Multiple Remote Sensing Products with Machine Learning
by Shukran A. Sahaar and Jeffrey D. Niemann
Remote Sens. 2024, 16(19), 3699; https://doi.org/10.3390/rs16193699 - 4 Oct 2024
Abstract
This study explores machine learning for estimating soil moisture at multiple depths (0–5 cm, 0–10 cm, 0–20 cm, 0–50 cm, and 0–100 cm) across the coterminous United States. A framework is developed that integrates soil moisture from Soil Moisture Active Passive (SMAP), precipitation from the Global Precipitation Measurement (GPM), evapotranspiration from the Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS), vegetation data from the Moderate Resolution Imaging Spectroradiometer (MODIS), soil properties from gridded National Soil Survey Geographic (gNATSGO), and land cover information from the National Land Cover Database (NLCD). Five machine learning algorithms are evaluated including the feed-forward artificial neural network, random forest, extreme gradient boosting (XGBoost), Categorical Boosting, and Light Gradient Boosting Machine. The methods are tested by comparing to in situ soil moisture observations from several national and regional networks. XGBoost exhibits the best performance for estimating soil moisture, achieving higher correlation coefficients (ranging from 0.76 at 0–5 cm depth to 0.86 at 0–100 cm depth), lower root mean squared errors (from 0.024 cm3/cm3 at 0–100 cm depth to 0.039 cm3/cm3 at 0–5 cm depth), higher Nash–Sutcliffe Efficiencies (from 0.551 at 0–5 cm depth to 0.694 at 0–100 cm depth), and higher Kling–Gupta Efficiencies (0.511 at 0–5 cm depth to 0.696 at 0–100 cm depth). Additionally, XGBoost outperforms the SMAP Level 4 product in representing the time series of soil moisture for the networks. Key factors influencing the soil moisture estimation are elevation, clay content, aridity index, and antecedent soil moisture derived from SMAP. Full article
Show Figures

Graphical abstract

Figure 1: Locations and climates of the in situ soil moisture stations used in this study.
Figure 2: Performance metrics (R, MBE, RMSE, ubRMSE, NSE, and KGE) for the soil moisture estimates of the machine learning algorithms when compared to the testing data, including all depths and stations. For each performance metric, the line inside the box indicates the median value and the box represents the interquartile range.
Figure 3: RMSE of the soil moisture estimates from the machine learning algorithms for the testing dataset when the data are divided according to the (a) in situ soil moisture networks and (b) depths. For each performance metric, the line inside the box indicates the median and the box represents the interquartile range.
Figure 4: Density plots comparing the observed and XGBoost estimates of soil moisture for each depth using the testing datasets for each climate. Darker blues represent higher concentrations of data, while lighter blues represent lower concentrations.
Figure 5: Time series of soil moisture at (a) 0–5 cm and (b) 0–100 cm depths at the arid USCRN Las Cruces 20N station (a member of the testing dataset). The plotted soil moisture data include hourly in situ measurements, estimates from the XGBoost model, and 3 h SMAP L4 soil moisture estimates. Daily GPM precipitation data at the site are also shown.
Figure 6: Time series of soil moisture at (a) 0–5 cm and (b) 0–100 cm depths at the humid USCRN Versailles 3NNW station (a member of the testing dataset). The plotted soil moisture data include hourly in situ measurements, estimates from the XGBoost model, and 3 h SMAP L4 soil moisture estimates. Daily GPM precipitation data for the site are also shown.
Figure 7: Correlations between predictor variables and in situ soil moisture at different depths. Positive correlations are shown in blue and negative correlations are shown in red.
Figure 8: Relative importance of each predictor variable in the RF, XGBoost, CatBoost, and LightGBM models and the average importance among the four models.
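The fusion framework described in this abstract pairs satellite-derived predictors with in situ soil moisture targets and fits a gradient-boosted regressor. A minimal sketch of that setup, using scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost and fully synthetic data in place of the SMAP/GPM/ECOSTRESS/MODIS/gNATSGO inputs (all column meanings and coefficients below are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the fused predictors named in the abstract.
n = 2000
X = np.column_stack([
    rng.uniform(0.05, 0.45, n),   # SMAP surface soil moisture (cm3/cm3)
    rng.gamma(2.0, 2.0, n),       # GPM precipitation (mm/day)
    rng.uniform(0.0, 6.0, n),     # ECOSTRESS evapotranspiration (mm/day)
    rng.uniform(0.1, 0.9, n),     # MODIS NDVI
    rng.uniform(0.05, 0.5, n),    # gNATSGO clay fraction
    rng.uniform(0.0, 3000.0, n),  # elevation (m)
])
# Synthetic "root-zone moisture": damped surface signal plus noise.
y = 0.7 * X[:, 0] + 0.005 * X[:, 1] + 0.1 * X[:, 4] + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # out-of-sample skill of the fusion model
```

The paper's evaluation against in situ networks corresponds to the `score` step here, just with real station observations instead of a synthetic target.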
19 pages, 24334 KiB  
Article
A 40-Year Time Series of Land Surface Emissivity Derived from AVHRR Sensors: A Fennoscandian Perspective
by Mira Barben, Stefan Wunderle and Sonia Dupuis
Remote Sens. 2024, 16(19), 3686; https://doi.org/10.3390/rs16193686 - 2 Oct 2024
Abstract
Accurate land surface temperature (LST) retrieval depends on precise knowledge of the land surface emissivity (LSE). Neglecting or inaccurately estimating the emissivity introduces substantial errors and uncertainty in LST measurements. The emissivity, which varies across different surfaces and land uses, reflects material composition and surface roughness. Satellite data offer a robust means to determine LSE at large scales. This study utilises the Normalised Difference Vegetation Index Threshold Method (NDVITHM) to produce a novel emissivity dataset spanning the last 40 years, specifically tailored for the Fennoscandian region, including Norway, Sweden, and Finland. Leveraging the long and continuous data series from the Advanced Very High Resolution Radiometer (AVHRR) sensors aboard the NOAA and MetOp satellites, an emissivity dataset is generated for 1981–2022. This dataset incorporates snow-cover information, enabling the creation of annual emissivity time series that account for winter conditions. LSE time series were extracted for six 15 × 15 km study sites and compared against the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD11A2 LSE product. The intercomparison reveals that, while both datasets generally align, significant seasonal differences exist. These disparities are attributable to differences in spectral response functions and temporal resolutions, as well as the use of fixed values in the method employed to calculate the emissivity. This study presents, for the first time, a 40-year time series of the emissivity for AVHRR channels 4 and 5 in Fennoscandia, highlighting the seasonal variability, land-cover influences, and wavelength-dependent emissivity differences.
This dataset provides a valuable resource for future research on long-term land surface temperature and emissivity (LST&E) trends, as well as land-cover changes in the region, particularly with the use of Sentinel-3 data and upcoming missions such as EUMETSAT’s MetOp Second Generation, scheduled for launch in 2025. Full article
Show Figures

Graphical abstract

Figure 1: Spectral emissivities of different land-cover classes, as recorded in the ECOSTRESS spectral library [41,42].
Figure 2: The study area across Norway, Sweden, and Finland, showing the six chosen study sites (15 × 15 km each). The abbreviations indicating the study sites stand for Low Vegetation (LV) or Forest (F) and South (S), Mid-Latitude (ML), or North (N). The base map is the ESA CCI Land-Cover Dataset [43].
Figure 3: Schematic workflow showing the AVHRR data preparation, emissivity dataset calculation process, and incorporated auxiliary data.
Figure 4: Overview of the availability of AVHRR data since 1981 in the local archive. The data used for this study are indicated in blue-grey, while the data excluded from the analysis due to quality or processing issues are indicated in orange.
Figure 5: The 40-year time series of monthly mean land surface emissivities for the 15 × 15 km low-vegetation southern (LVS) study site.
Figure 6: Mean annual cycle of LSE for channel 4, including the confidence interval (1σ), for the 40-year period for the 15 × 15 km low-vegetation southern (LVS) study site.
Figure 7: (a) Land cover (ESA CCI; see Figure 2 for details) and emissivity differences (Ch5–Ch4) derived from the monthly mean emissivity values over the 40-year period (1981–2022) for an area of 1° × 1° around the FN site in February (b) and July (c).
Figure 8: (a) Land cover (ESA CCI; see Figure 2 for details) and emissivity differences (Ch5–Ch4) derived from the monthly mean emissivity values over the 40-year period (1981–2022) for an area of 1° × 1° around the FS site in February (b) and July (c).
Figure 9: Comparison of the AVHRR LAC LSE dataset and the MODIS MOD11A2 LSE dataset for the low-vegetation southern (LVS) study site (2015–2022).
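The NDVI Threshold Method named in this abstract blends soil and vegetation emissivities according to fractional vegetation cover. A minimal sketch of the common formulation (cavity term omitted); the threshold and emissivity constants below are illustrative placeholders, not the paper's channel-specific calibrated values:

```python
import numpy as np

# Placeholder soil/vegetation emissivities and NDVI thresholds; the
# paper derives channel-specific values for AVHRR, so these constants
# are assumptions for illustration only.
EPS_SOIL, EPS_VEG = 0.96, 0.985
NDVI_SOIL, NDVI_VEG = 0.2, 0.5

def ndvi_thm_emissivity(ndvi):
    """NDVI Threshold Method LSE (cavity term omitted).

    Below NDVI_SOIL the pixel is treated as bare soil; above NDVI_VEG
    as fully vegetated; in between, the emissivity is a mix weighted
    by the fractional vegetation cover Pv.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    frac = np.clip((ndvi - NDVI_SOIL) / (NDVI_VEG - NDVI_SOIL), 0.0, 1.0)
    pv = frac ** 2  # fractional vegetation cover
    return EPS_VEG * pv + EPS_SOIL * (1.0 - pv)

emis = ndvi_thm_emissivity([0.1, 0.35, 0.8])  # bare soil, mixed, forest
```

Snow cover, which the paper folds in to capture winter conditions, would replace the bare-soil endmember with a snow emissivity in the same mixing equation.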
30 pages, 8057 KiB  
Article
Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques
by Tesfaye Adugna, Wenbo Xu, Jinlong Fan, Xin Luo and Haitao Jia
Remote Sens. 2024, 16(19), 3665; https://doi.org/10.3390/rs16193665 - 1 Oct 2024
Abstract
Cloud is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address the issue, we propose a maximum-value compositing approach by generating cloud masks. We acquired 432 daily MOD09GA L2 MODIS imageries covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily imageries. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best models. Accordingly, SVM and U-Net were chosen and employed to classify all the daily imageries. Then, the classified imageries were converted to two sets of mask layers to mask clouds and no-data pixels in the corresponding daily images by setting the masked pixels’ values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to SVM and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on the land-cover classification accuracy, our products yielded a significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient in removing shadows and noises/artifacts. 
Our method yields high-quality products that are vital for investigating large regions with persistent clouds and studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imageries, regardless of the spatial extent, data volume, and type of clouds. Full article
Show Figures

Figure 1: Technical workflow.
Figure 2: Selected region for our experiment.
Figure 3: Class labeling/annotation as a region of interest: cloud (in red), land (green), and no-data (blue) from selected MOD09GA daily imageries across different regions.
Figure 4: Training and validation accuracy graphs, and their respective training and validation loss graphs, of various U-Net models. (a–c) Models with 4, 5, and 6 CNN layers, respectively, with a learning rate of 0.0001 and a batch size of 32; (d) a 5-CNN-layer model, similar to Model b, but with the number of filters in each layer reduced by half; (e) a model similar to Model b, but with a larger batch size (64); (f) 5 CNN layers as in Model b, but with a lower learning rate (0.00001); (g) the same as Model b with 5 CNN layers but without kernel regularization (L1 and L2).
Figure 5: Optimized U-Net architecture.
Figure 6: Daily MOD09GA L2 imageries from various locations (Column 1) and their respective masked imageries (Column 2); clouds and no-data pixels are masked. All imageries (a–d) but c of the 2nd column are false-color composites of NIR, red (Band 1), and green (Band 4). (a) MOD09GA.A2019197.h20v08.061.2020303164214.hdf and the corresponding masked image; (b) MOD09GA.A2019193.h22v07.061.2020303114043.hdf and the corresponding masked image; (c) MOD09GA.A2019207.h20v09.061.2020304034526.hdf and the corresponding masked image with a false-color composite of bands (5, 2, 1) to show water bodies, which are not masked; (d) MOD09GA.A2019207.h21v07.061.2020304034241.hdf and the respective masked image.
Figure 7: Maximum-value, mean, and median composite products generated using imageries collected over two seasons, April–May (1st row) and July–August (2nd row). (a,d) MVC; (b,e) mean compositing; (c,f) median compositing. The products are presented as a false-color composite of Bands 2, 1, and 3.
Figure 8: Confusion matrices depicting the performance of our best composite product (MaxComp-1) (1st column) and the reference composite product (MOD13A1) (2nd column) across three different classifiers. (a) Input: MaxComp-1, Model: RF; (b) Input: MOD13A1, Model: RF; (c) Input: MaxComp-1, Model: SVM; (d) Input: MOD13A1, Model: SVM; (e) Input: MaxComp-1, Model: U-Net; (f) Input: MOD13A1, Model: U-Net.
Figure 9: False-color composite of our composite products and the reference composite product (MOD13A1) across three seasons: April–May 2019 (1st row); July–August 2019 (2nd row); January–February 2020 (3rd row). (a,d,g) MaxComp-1 products; (b,e,h) MaxComp-2 products; (c,f,i) MOD13A1 v061 L3 products (reference composite products). The green circle highlights the main difference between our composite product and the reference data.
Figure 10: Land-cover maps generated using our composite products (MaxComp-1 and MaxComp-2) and the reference composite product (MOD13A1) by employing three different algorithms (a–i).
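The masking-and-compositing step from this abstract can be sketched directly: cloud and no-data pixels are set to the paper's fill value of −0.999999, and a per-pixel maximum is taken across the image stack, so any clear observation wins. Only the fill value and the maximum-value rule come from the abstract; the toy arrays below are illustrative.

```python
import numpy as np

FILL = -0.999999  # value the paper assigns to masked cloud/no-data pixels

def max_value_composite(stack, masks):
    """Per-pixel maximum-value composite over a stack of daily images.

    stack: (days, rows, cols) array of reflectance/VI values.
    masks: same shape, True where a pixel is cloud or no-data.
    Masked pixels are set to FILL so any clear observation dominates
    the per-pixel maximum; pixels cloudy on every day keep FILL.
    """
    masked = np.where(masks, FILL, stack)
    return masked.max(axis=0)

# Toy 3-day, 2x2 example: pixel (1, 1) is cloudy on all three days.
stack = np.array([[[0.2, 0.5], [0.3, 0.9]],
                  [[0.4, 0.1], [0.6, 0.8]],
                  [[0.3, 0.7], [0.2, 0.7]]])
masks = np.array([[[False, False], [False, True]],
                  [[True, False], [False, True]],
                  [[False, False], [True, True]]])
comp = max_value_composite(stack, masks)
```

In the paper the masks come from the SVM/U-Net classifications of each daily MOD09GA image, and the composite is formed over 16-day windows.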
27 pages, 13823 KiB  
Article
Application of Remote Sensing and Explainable Artificial Intelligence (XAI) for Wildfire Occurrence Mapping in the Mountainous Region of Southwest China
by Jia Liu, Yukuan Wang, Yafeng Lu, Pengguo Zhao, Shunjiu Wang, Yu Sun and Yu Luo
Remote Sens. 2024, 16(19), 3602; https://doi.org/10.3390/rs16193602 - 27 Sep 2024
Abstract
The ecosystems in the mountainous region of Southwest China are exceptionally fragile and constitute one of the global hotspots for wildfire occurrences. Understanding the complex interactions between wildfires and their environmental and anthropogenic factors is crucial for effective wildfire modeling and management. Despite significant advancements in wildfire modeling using machine learning (ML) methods, their limited explainability remains a barrier to utilizing them for in-depth wildfire analysis. This paper employs Logistic Regression (LR), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) models along with the MODIS global fire atlas dataset (2004–2020) to study the influence of meteorological, topographic, vegetation, and human factors on wildfire occurrences in the mountainous region of Southwest China. It also utilizes Shapley Additive exPlanations (SHAP) values, a method within explainable artificial intelligence (XAI), to demonstrate the influence of key controlling factors on the frequency of fire occurrences. The results indicate that wildfires in this region are primarily influenced by meteorological conditions, particularly sunshine duration, relative humidity (seasonal and daily), seasonal precipitation, and daily land surface temperature. Among local variables, altitude, proximity to roads, railways, residential areas, and population density are significant factors. All models demonstrate strong predictive capabilities with AUC values over 0.8 and prediction accuracies ranging from 76.0% to 95.0%. XGBoost outperforms LR and RF in predictive accuracy across all factor groups (climatic, local, and combinations thereof). The inclusion of topographic factors and human activities enhances model optimization to some extent. 
SHAP results reveal critical features that significantly influence wildfire occurrences, and the thresholds of positive or negative changes, highlighting that relative humidity, rain-free days, and land use land cover changes (LULC) are primary contributors to frequent wildfires in this region. Based on regional differences in wildfire drivers, a wildfire-risk zoning map for the mountainous region of Southwest China is created. Areas identified as high risk are predominantly located in the Northwestern and Southern parts of the study area, particularly in Yanyuan and Miyi, while areas assessed as low risk are mainly distributed in the Northeastern region. Full article
Show Figures

Figure 1: Location of the research region and the distribution of MODIS active fire incidents from 2004 to 2020. Maps at a national scale represent the kernel density of local wildfires for the same time frame.
Figure 2: Hierarchical importance of climatic variables.
Figure 3: Hierarchical importance of local factors.
Figure 4: The SHAP summary plot ranks the top 20 variables affecting model predictions by their mean absolute SHAP values, shown on the y-axis. Subfigure (a) showcases the importance of these features, while subfigure (b) illustrates their positive or negative effects on wildfire predictions through scatter points.
Figure 5: The SHAP dependence plots: (a) between SHAP values and Da_minRH, with a fitted trend line (red line); (b) between SHAP values and Norainday_avg, with a fitted trend line (red line); (c) between SHAP values and Da_minRH, showing the interaction with Tmax_avg (color scale); (d) between SHAP values and Norainday_avg, showing the interaction with Tmax_avg (color scale). Da_minRH, daily minimum relative humidity; Norainday_avg, average number of rainless days in the fire season.
Figure 6: SHAP interaction plot (a) and heatmap analysis (b).
Figure 7: Fire-occurrence probability: analysis using LR, RF, and XGB based on meteorological factors.
Figure 8: Fire-occurrence probability: analysis using LR, RF, and XGB based on local factors.
Figure 9: Fire-occurrence probability: combined meteorological and local factors analysis with LR, RF, and XGB.
Figure 10: ROC curves of the success rate of three models.
Figure 11: Comparison of error metrics for different models.
Figure 12: Risk-assessment mapping results of the XGB model.
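The paper ranks wildfire drivers with SHAP values. As a rough stand-in that conveys the same "which feature matters most" idea without the shap package, the sketch below fits a random forest to synthetic fire/no-fire data and ranks features by permutation importance; every variable name, coefficient, and range is invented for illustration and is not the paper's data or method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for drivers named in the abstract: relative
# humidity, rain-free days, sunshine duration, distance to roads.
n = 3000
rh = rng.uniform(10, 90, n)      # relative humidity (%)
dry = rng.integers(0, 60, n)     # consecutive rain-free days
sun = rng.uniform(0, 12, n)      # sunshine duration (h)
road = rng.uniform(0, 20, n)     # distance to nearest road (km)
X = np.column_stack([rh, dry, sun, road])

# Invented rule: drier, sunnier, near-road cells burn more often.
logit = -0.06 * rh + 0.05 * dry + 0.2 * sun - 0.1 * road
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.random(n) < p  # Bernoulli fire/no-fire labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]  # most important first
```

SHAP goes further than this global ranking by attributing each individual prediction to its features, which is what enables the threshold analysis (e.g., the humidity level at which fire risk flips) described in the abstract.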
17 pages, 34922 KiB  
Article
Coastal Sea Ice Concentration Derived from Marine Radar Images: A Case Study from Utqiaġvik, Alaska
by Felix St-Denis, L. Bruno Tremblay, Andrew R. Mahoney and Kitrea Pacifica L. M. Takata-Glushkoff
Remote Sens. 2024, 16(18), 3357; https://doi.org/10.3390/rs16183357 - 10 Sep 2024
Abstract
We apply the Canny edge algorithm to imagery from the Utqiaġvik coastal sea ice radar system (CSIRS) to identify regions of open water and sea ice and quantify ice concentration. The radar-derived sea ice concentration (SIC) is compared against the (closest to the radar field of view) 25 km resolution NSIDC Climate Data Record (CDR) and the 1 km merged MODIS-AMSR2 sea ice concentrations within the ∼11 km field of view for the year 2022–2023, when improved image contrast was first implemented. The algorithm was first optimized using sea ice concentration from 14 different images and 10 ice analysts (140 analyses in total) covering a range of ice conditions with landfast ice, drifting ice, and open water. The algorithm is also validated quantitatively against high-resolution MODIS-Terra imagery in the visible range. Results show a correlation coefficient and mean bias error between the optimized algorithm, the CDR, and MODIS-AMSR2 daily SIC of 0.18 and 0.54, and ∼−1.0 and 0.7%, respectively, with an averaged inter-analyst error of ±3%. In general, the CDR captures the melt period correctly and overestimates the SIC during the winter and freeze-up period, while the merged MODIS-AMSR2 better captures the punctual break-out events in winter, including those during the freeze-up events (reduction in SIC). Remnant issues with the detection algorithm include the false detection of sea ice in the presence of fog or precipitation (up to 20%), quantified from the summer reconstruction with known open water conditions. The proposed technique allows for the derivation of the SIC from CSIRS data at spatial and temporal scales that coincide with those at which coastal community members interact with sea ice. Moreover, by measuring the SIC in nearshore waters adjacent to the shoreline, we can quantify the effect of land contamination that detracts from the usefulness of satellite-derived SIC for coastal communities. Full article
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1
<p>Map of the study site and location of the CSIRS. The black circle marks the coastal radar range and the black star highlights the location of the coastal radar.</p>
Figure 2
<p>Flow chart of the floe edge detection algorithm.</p>
Figure 3
<p>Images from 11 March 2022, taken after each algorithm step: (<b>a</b>) initial image, (<b>b</b>) image with land removed, (<b>c</b>) the output of the Canny edge algorithm, (<b>d</b>) the contours, in red, found from the detected edges, and (<b>e</b>) the final sea ice contour.</p>
Figure 4
<p>Images analyzed by the analysts.</p>
Figure 5
<p>CDR (<b>a</b>) and merged MODIS-AMSR2 (<b>b</b>) grid cells used for the comparison with the marine radar. Utqiaġvik and the radar range are marked with a black star and circle.</p>
Figure 6
<p>(<b>a</b>) Minimum RMSE (red) and the corresponding best 1:1 line fit (blue) for each kernel, (<b>b</b>) scatter plot of SIC derived from the radar images with the optimal set of parameters and SIC from the 10 analysts (colorbar), including the best-line fit (green line), (<b>c</b>) the analyst standard deviation (STD) for each analyzed frame, and (<b>d</b>) histogram of the departure of the best 1:1 line fit when removing the analyzed frames one by one. Note that the blue axis in (<b>a</b>) does not start at 0. Each of the horizontal lines represents a different image in (<b>b</b>). The inter-analyst averaged error of 0.026 is represented by the dashed line in (<b>c</b>).</p>
Figure 7
<p>(<b>a</b>) Nearly synchronous marine radar image, including sea ice edge (red) from the detection algorithm and at 15:09 local time (AKDT). (<b>b</b>) MODIS Terra image at 15:09 AKDT. (<b>c</b>) Time series of Pearson correlation coefficient (r) between all coarse-grained (1 km) 4 min images (360 in total) CSIRS SIC and the merged MODIS-AMSR2 SIC for 14 April 2022. The red shading corresponds to the 95% confidence interval. (<b>d</b>) Scatter plot of the CSIRS SIC with the merged MODIS-AMSR2 for the time of maximum correlation at 6:00 AKDT. The best line fit is given by <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>0.43</mn> <mi>x</mi> <mo>+</mo> <mn>0.28</mn> </mrow> </semantics></math> and the red shading corresponds to the RMSE of 0.12.</p>
Figure 8
<p>Daily (<b>a</b>), 7-day running mean (<b>b</b>), and 31-day running mean (<b>c</b>) time series of the SIC derived from the radar (blue), the CDR (green), and the merged MODIS-AMSR2 (red) for 2022 to 2023 as a function of the Julian days starting 1 January 2022. The holes in the time series represent the non-availability of the data.</p>
22 pages, 25616 KiB  
Article
Identification of High-Quality Vegetation Areas in Hubei Province Based on an Optimized Vegetation Health Index
by Yidong Chen, Linrong Xie, Xinyu Liu, Yi Qi and Xiang Ji
Forests 2024, 15(9), 1576; https://doi.org/10.3390/f15091576 - 8 Sep 2024
Abstract
This research proposes an optimized method for identifying high-quality vegetation areas, with a focus on forest ecosystems, using an improved Vegetation Health Index (VHI). The study introduces the Land Cover Vegetation Health Index (LCVHI), which integrates the Vegetation Condition Index (VCI) and the Temperature Condition Index (TCI) with land cover data. Utilizing MODIS (Moderate Resolution Imaging Spectroradiometer) satellite imagery and Google Earth Engine (GEE), the study assesses the impact of land cover changes on vegetation health, with particular attention to forested areas. The application of the LCVHI demonstrates that forests exhibit a VHI approximately 25% higher than that of croplands, and wetlands show an 18% higher index compared to grasslands. Analysis of data from 2012 to 2022 in Hubei Province, China, reveals an overall upward trend in vegetation health, highlighting the effectiveness of environmental protection and forest management measures. Different land cover types, including forests, wetlands, and grasslands, significantly impact vegetation health, with forests and wetlands contributing most positively. These findings provide important scientific evidence for regional and global ecological management strategies, supporting the development of forest conservation policies and sustainable land use practices. The research results offer valuable insights into the effective management of regional ecological dynamics. Full article
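The VHI that the LCVHI builds on is conventionally defined from the VCI and TCI (Kogan's formulation). A minimal sketch follows; alpha = 0.5 is the common default weighting, not necessarily the value used by the authors:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: NDVI position within its historical range."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: cooler than the historical maximum is healthier."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(ndvi, ndvi_min, ndvi_max, lst, lst_min, lst_max, alpha=0.5):
    """Vegetation Health Index as a weighted blend of VCI and TCI."""
    return alpha * vci(ndvi, ndvi_min, ndvi_max) + (1.0 - alpha) * tci(lst, lst_min, lst_max)

# Example pixel: NDVI 0.6 within a 0.2-0.8 record, LST 300 K within 290-310 K.
print(round(vhi(0.6, 0.2, 0.8, 300.0, 290.0, 310.0), 2))  # 58.33
```

The LCVHI then conditions this index on the land cover class of each pixel, which is the paper's contribution on top of the standard formulation.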
(This article belongs to the Section Forest Ecology and Management)
Figure 1
<p>Geographical location and administrative boundaries of Hubei Province, China.</p>
Figure 2
<p>Land cover changes in Hubei Province from 2012 to 2022 based on CLCD [<a href="#B30-forests-15-01576" class="html-bibr">30</a>] data.</p>
Figure 3
<p>Temporal trends of Vegetation Condition Index (VCI) in Hubei Province, 2012–2022.</p>
Figure 4
<p>Temporal trends of TCI in Hubei Province, 2012–2022.</p>
Figure 5
<p>Temporal trends of VHI in Hubei Province, 2012–2022.</p>
Figure 6
<p>Spatial distribution of VHI in Hubei Province, 2012–2022.</p>
Figure 7
<p>Sankey diagram illustrating land cover transitions in Hubei Province from 2012 to 2022.</p>
Figure 8
<p>Spatial distribution of land cover changes in Hubei Province between 2012 and 2022.</p>
Figure 9
<p>Spatial distribution of LCVHI in Hubei Province, 2012–2022.</p>
Figure 10
<p>Sankey diagram showing transitions between LCVHI classes in Hubei Province from 2012 to 2022.</p>
Figure 11
<p>Spatial distribution of VHI changes in Hubei Province between 2012 and 2022.</p>
Figure 12
<p>Heatmap of land cover class transitions in Hubei Province from 2012 to 2022.</p>
18 pages, 9816 KiB  
Article
Temporal Dynamics of Global Barren Areas between 2001 and 2022 Derived from MODIS Land Cover Products
by Marinos Eliades, Stelios Neophytides, Michalis Mavrovouniotis, Constantinos F. Panagiotou, Maria N. Anastasiadou, Ioannis Varvaris, Christiana Papoutsa, Felix Bachofer, Silas Michaelides and Diofantos Hadjimitsis
Remote Sens. 2024, 16(17), 3317; https://doi.org/10.3390/rs16173317 - 7 Sep 2024
Abstract
Long-term monitoring studies on the transition of different land cover units to barren areas are crucial to gain a better understanding of the potential challenges and threats that land surface ecosystems face. This study utilized the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover products (MCD12C1) to conduct geospatial analysis based on the maximum extent (MaxE) concept, to assess the spatiotemporal changes in barren areas from 2001 to 2022, at global and continental scales. The MaxE area includes all the pixels across the entire period of observations where the barren land cover class was at least once present. The relative expansion or reduction of the barren areas can be directly assessed with MaxE, as any annual change observed in the barren distribution is comparable over the entire dataset. The global barren areas without any land change (UA) during this period were equivalent to 12.8% (18,875,284 km2) of the global land surface area. Interannual land cover changes to barren areas occurred in an additional area of 3,438,959 km2 (2.3% of the global area). Globally, barren areas show a gradual reduction from 2001 (91.1% of MaxE) to 2012 (86.8%), followed by annual fluctuations until 2022 (88.1%). These areas were mainly interchanging between open shrublands and grasslands. A relatively high transition between barren areas and permanent snow and ice is found in Europe and North America. The results show a 3.7% decrease in global barren areas from 2001 to 2022. Areas that are predominantly not barren account for 30.6% of the transitional areas (TAs), meaning that these areas experienced short-term or very recent transitions from other land cover classes to barren. Emerging barren areas hotspots were mainly found in the Mangystau region (Kazakhstan), Tibetan plateau, northern Greenland, and the Atlas Mountains (Morocco, Tunisia). Full article
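The MaxE bookkeeping described above reduces to per-pixel set operations over the annual masks: MaxE is the union of all years' barren pixels, UA the intersection, and TA the difference. A minimal sketch with a toy 4-pixel raster (illustrative data, not the MCD12C1 product):

```python
def barren_extents(annual_masks):
    """Union (MaxE), intersection (UA), and transitional (TA) pixel masks."""
    max_e = [any(col) for col in zip(*annual_masks)]  # barren at least once
    ua = [all(col) for col in zip(*annual_masks)]     # barren in every year
    ta = [m and not u for m, u in zip(max_e, ua)]     # changed at least once
    return max_e, ua, ta

years = [
    [1, 1, 0, 0],  # 2001 annual mask (1 = barren pixel)
    [1, 0, 1, 0],  # 2002
    [1, 1, 1, 0],  # 2003
]
max_e, ua, ta = barren_extents(years)
print(sum(max_e), sum(ua), sum(ta))  # 3 1 2
```

Because every annual map is compared against the same MaxE footprint, expansion or reduction in any year is directly comparable across the record, which is the point of the concept.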
Figure 1
<p>Flowchart showing the methodological steps for deriving the annual land cover changes, the spatial transitions of barren areas to other land cover classes, and the transition occurrence. MaxE is the maximum extent of barren areas, UA is the unchanged barren area, and TA is the transitional area. Yellow squares indicate the input data, whereas the arrows and the green squares indicate the main processing steps and approaches. Red squares indicate the main outputs.</p>
Figure 2
<p>Spatial extent of the first occurrence of the barren areas per year between 2001 and 2022.</p>
Figure 3
<p>Global barren area per year.</p>
Figure 4
<p>Transition occurrence of barren areas to other land cover classes.</p>
Figure 5
<p>Spatial transitions of barren areas to other land cover classes.</p>
Figure 6
<p>Global annual changes of the land cover over the maximum extent (MaxE) of barren area, expressed as percentages of the MaxE (<span class="html-italic">Y</span>-axis starts at 80% for better visualization of the land cover classes).</p>
Figure 7
<p>Annual changes in the land cover over the maximum extent (MaxE) of barren area, expressed as percentages of the MaxE, for Africa (<b>a</b>), Asia (<b>b</b>), Australia (<b>c</b>), Europe (<b>d</b>), North America (<b>e</b>), and South America (<b>f</b>).</p>
Figure A1
<p>Unchanged (yellow) and transitional (red) barren areas in Africa.</p>
Figure A2
<p>Unchanged (yellow) and transitional (red) barren areas in Asia.</p>
Figure A3
<p>Unchanged (yellow) and transitional (red) barren areas in Australia.</p>
Figure A4
<p>Unchanged (yellow) and transitional (red) barren areas in Europe.</p>
Figure A5
<p>Unchanged (yellow) and transitional (red) barren areas in North America.</p>
Figure A6
<p>Unchanged (yellow) and transitional (red) barren areas in South America.</p>
22 pages, 11626 KiB  
Article
Assessment of Semi-Automated Techniques for Crop Mapping in Chile Based on Global Land Cover Satellite Data
by Matías Volke, María Pedreros-Guarda, Karen Escalona, Eduardo Acuña and Raúl Orrego
Remote Sens. 2024, 16(16), 2964; https://doi.org/10.3390/rs16162964 - 12 Aug 2024
Abstract
In recent years, the Chilean agricultural sector has undergone significant changes, but there is a lack of data that can be used to accurately identify these transformations. A study was conducted to assess the effectiveness of different spatial resolutions used by global land cover products (MODIS, ESA and Dynamic World (DW)), in addition to the semi-automated methods applied to them, for the identification of agricultural areas, using the publicly available agricultural survey for 2021. It was found that lower-spatial-resolution collections consistently underestimated crop areas, while collections with higher spatial resolutions overestimated them. The low-spatial-resolution collection, MODIS, underestimated cropland by 46% in 2021, while moderate-resolution collections, such as ESA and DW, overestimated cropland by 39.1% and 93.8%, respectively. Overall, edge-pixel-filtering and a machine learning semi-automated reclassification methodology improved the accuracy of the original global collections, with differences of only 11% when using the DW collection. While there are limitations in certain regions, the use of global land cover collections and filtering methods as training samples can be valuable in areas where high-resolution data are lacking. Future research should focus on validating and adapting these approaches to ensure their effectiveness in sustainable agriculture and ecosystem conservation on a global scale. Full article
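The edge-pixel-filtering step — keeping only crop pixels whose neighbours are also crop, so that mixed boundary pixels do not contaminate the training samples — can be sketched as a 4-neighbour interior test. The class codes and grid below are illustrative assumptions, not the study's actual data:

```python
def filter_edge_pixels(grid, crop_class=1):
    """Keep crop pixels whose four neighbours are all crop (interior pixels);
    border and mixed-edge pixels are dropped from the training sample."""
    rows, cols = len(grid), len(grid[0])
    keep = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != crop_class:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            keep[r][c] = all(
                0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == crop_class
                for rr, cc in neighbours
            )
    return keep

land = [[1, 1, 1],   # 1 = cropland, 2 = other class (hypothetical codes)
        [1, 1, 1],
        [1, 1, 2]]
print(filter_edge_pixels(land)[1][1])  # True: only the interior pixel survives
```

The surviving high-purity pixels can then seed a machine-learning reclassifier, which is the role the filtered collections play in the study.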
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
Figure 1
<p>Study area. On the left, Chile with the Ñuble region is marked in black. On the right, a zoomed image of the Ñuble region and the names of its communes are shown.</p>
Figure 2
<p>Flow diagram representing the methodology of this work.</p>
Figure 3
<p>Agricultural land area in km<sup>2</sup>, (<b>a</b>) per commune in the Ñuble region and (<b>b</b>) the total in the region. Year: 2021. The calculations were retrieved from the following databases: an agricultural survey (AS), MODIS, ESA, Dynamic World (DW) and improved versions of the latter three (v2).</p>
Figure 4
<p>Agricultural area in km<sup>2</sup> per commune in the Ñuble region. (<b>a</b>) Agricultural survey and (<b>b</b>) original ESA dataset, as the most precise original dataset from the three tested. (<b>c</b>) Filtered and reclassified DW, that is, DW version 2 (v2), as the most accurate filtered and reclassified dataset.</p>
Figure 4 Cont.
<p>Agricultural area in km<sup>2</sup> per commune in the Ñuble region. (<b>a</b>) Agricultural survey and (<b>b</b>) original ESA dataset, as the most precise original dataset from the three tested. (<b>c</b>) Filtered and reclassified DW, that is, DW version 2 (v2), as the most accurate filtered and reclassified dataset.</p>
Figure 5
<p>Zoomed image of the pixel reduction through different stages of quality filtering of <a href="#remotesensing-16-02964-f002" class="html-fig">Figure 2</a>.</p>
Figure A1
<p>Maps of agricultural mask retrieved from MODIS dataset (details in <a href="#remotesensing-16-02964-t002" class="html-table">Table 2</a>) for the year 2021, (<b>a</b>) directly and (<b>b</b>) from the preprocessing described in <a href="#sec3dot2dot2-remotesensing-16-02964" class="html-sec">Section 3.2.2</a>.</p>
Figure A2
<p>Maps of agricultural mask retrieved from DW dataset (details in <a href="#remotesensing-16-02964-t002" class="html-table">Table 2</a>) for the year 2021, (<b>a</b>) directly and (<b>b</b>) from the preprocessing described in <a href="#sec3dot2dot2-remotesensing-16-02964" class="html-sec">Section 3.2.2</a>.</p>
Figure A3
<p>Maps of agricultural mask retrieved from ESA dataset (details in <a href="#remotesensing-16-02964-t002" class="html-table">Table 2</a>) for the year 2021, (<b>a</b>) directly and (<b>b</b>) from the preprocessing described in <a href="#sec3dot2dot2-remotesensing-16-02964" class="html-sec">Section 3.2.2</a>.</p>
Figure A3 Cont.
<p>Maps of agricultural mask retrieved from ESA dataset (details in <a href="#remotesensing-16-02964-t002" class="html-table">Table 2</a>) for the year 2021, (<b>a</b>) directly and (<b>b</b>) from the preprocessing described in <a href="#sec3dot2dot2-remotesensing-16-02964" class="html-sec">Section 3.2.2</a>.</p>
Figure A4
<p>Maps of agricultural mask retrieved from CONAF dataset for the year 2021.</p>
29 pages, 30487 KiB  
Article
Spatial and Temporal Variations of Vegetation Phenology and Its Response to Land Surface Temperature in the Yangtze River Delta Urban Agglomeration
by Yi Yang, Lei Yao, Xuecheng Fu, Ruihua Shen, Xu Wang and Yingying Liu
Forests 2024, 15(8), 1363; https://doi.org/10.3390/f15081363 - 4 Aug 2024
Abstract
In the Yangtze River Delta urban agglomeration, which is the region with the highest urbanization intensity in China, the development of cities leads to changes in land surface temperature (LST), while vegetation phenology varies with LST. To investigate the spatial and temporal changes in vegetation phenology and its response to LST in the study area, this study reconstructed the time series of the enhanced vegetation index (EVI) based on the MODIS EVI product and extracted the vegetation phenology indicators in the study area from 2002 to 2020, including the start of the growing season (SOS), the end of the growing season (EOS), and the growing season length (GSL), and analyzed the temporal–spatial patterns of vegetation phenology and LST in the study area, as well as the correlation between them. The results show that (1) SOS was advanced, EOS was postponed, and GSL was extended in the study area from 2002 to 2020, and there were obvious differences in the vegetation phenology indicators under different land covers and cities; (2) LST was higher in the southeast than in the northwest of the study area from 2002 to 2020, with an increasing trend; and (3) there are differences in the response of vegetation phenology to LST across land covers and cities, and SOS responds differently to LST at different times of the year. EOS shows a significant postponement trend with the annual mean LST increase. Overall, we found differences in vegetation phenology and its response to LST under different land covers and cities, which is important for scholars to understand the response of vegetation phenology to urbanization. Full article
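Phenology indicators such as SOS and EOS are commonly extracted from a smoothed EVI curve by an amplitude-threshold rule; the sketch below uses that simpler rule for illustration (the study itself fits asymmetric Gaussians to the reconstructed time series, which is more robust). The toy monthly curve and 50% amplitude fraction are assumptions:

```python
def growing_season(evi, frac=0.5):
    """SOS/EOS as the first/last time step where EVI exceeds a fixed
    fraction of the seasonal amplitude above the annual minimum."""
    lo, hi = min(evi), max(evi)
    threshold = lo + frac * (hi - lo)
    above = [i for i, v in enumerate(evi) if v >= threshold]
    sos, eos = above[0], above[-1]
    return sos, eos, eos - sos  # start, end, growing-season length

# Toy monthly EVI curve (index 0 = January).
evi = [0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.8, 0.6, 0.4, 0.3, 0.2, 0.2]
print(growing_season(evi))  # (3, 7, 4): April start, August end
```

Running the same extraction per pixel and per year yields the SOS/EOS/GSL maps whose anomalies are then correlated with LST.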
(This article belongs to the Special Issue Modeling and Remote Sensing of Forests Ecosystem)
Figure 1
<p>Land cover conditions in the Yangtze River Delta urban agglomeration in 2002–2020. “Land covers change” denotes areas where land cover changed between 2002 and 2020; “Others” denotes areas with no land cover change over the same period.</p>
Figure 2
<p>Research framework and workflow of this study. EVI: enhanced vegetation index. LST: land surface temperature. A-G: the fits to asymmetric Gaussians.</p>
Figure 3
<p>Spatial and temporal deviations in SOS in the Yangtze River Delta urban agglomeration in 2002–2020. SOS: the start of the growing season.</p>
Figure 4
<p>Spatial and temporal deviations in EOS in the Yangtze River Delta urban agglomeration in 2002–2020. EOS: the end of the growing season.</p>
Figure 5
<p>Spatial and temporal deviations in GSL in the Yangtze River Delta urban agglomeration in 2002–2020. GSL: the growing season length.</p>
Figure 6
<p>Time series of vegetation phenology in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 7
<p>Differences in vegetation phenology under different land covers in the Yangtze River Delta urban agglomeration (2002–2020 average).</p>
Figure 8
<p>Time series of vegetation phenology for different land covers in the Yangtze River Delta urban agglomerations in 2002–2020.</p>
Figure 9
<p>Time series of vegetation phenology in different cities of the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 10
<p>Spatial and temporal deviations in winter LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 11
<p>Spatial and temporal deviations in March LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 12
<p>Spatial and temporal deviations in April LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 13
<p>Spatial and temporal deviations in annual mean LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 14
<p>Time series of LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 15
<p>Differences in LST under different land covers in the Yangtze River Delta urban agglomeration (2002–2020 average).</p>
Figure 16
<p>Time series of LST for different land covers in the Yangtze River Delta urban agglomerations in 2002–2020.</p>
Figure 17
<p>Time series of LST in different cities of the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 18
<p>Partial correlation coefficients between vegetation phenology and LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 19
<p>Regression coefficients of SOS and winter LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 20
<p>Regression coefficients of SOS and March LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 21
<p>Regression coefficients of SOS and April LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 22
<p>Regression coefficients of EOS and annual mean LST in the Yangtze River Delta urban agglomeration in 2002–2020.</p>
Figure 23
<p>Partial correlation coefficients between vegetation phenology and LST of different land covers in the Yangtze River Delta urban agglomerations in 2002–2020.</p>
Figure 24
<p>Partial correlation between vegetation phenology and LST in Cities of the Yangtze River Delta urban agglomeration.</p>
Figure 25
<p>Trends in partial correlation coefficients of LST and EOS with LST under Hefei, Shanghai, Yancheng, and impervious surfaces in the Yangtze River Delta urban agglomeration, 2002–2020.</p>
0 pages, 3913 KiB  
Article
Flood Extent Delineation and Exposure Assessment in Senegal Using the Google Earth Engine: The 2022 Event
by Bocar Sy, Fatoumata Bineta Bah and Hy Dao
Water 2024, 16(15), 2201; https://doi.org/10.3390/w16152201 - 2 Aug 2024
Abstract
This study addresses the pressing need for flood extent and exposure information in data-scarce and vulnerable regions, with a specific focus on West Africa, particularly Senegal. Leveraging the Google Earth Engine (GEE) platform and integrating data from the Sentinel-1 SAR, Global Surface Water, HydroSHEDS, the Global Human Settlement Layer, and MODIS land cover type, our primary objective is to delineate the extent of flooding and compare this with flooding for a one-in-a-hundred-year flood event, offering a comprehensive assessment of exposure during the period from July to October 2022 across Senegal’s 14 regions. The findings underscore a total inundation area of 2951 square kilometers, impacting 782,681 people, 238 square kilometers of urbanized area, and 21 square kilometers of farmland. Notably, August witnessed the largest flood extent, reaching 780 square kilometers, accounting for 0.40% of the country’s land area. Other regions, including Saint-Louis, Ziguinchor, Fatick, and Matam, experienced varying extents of flooding, with the data for August showing a 1.34% overlap with flooding for a one-in-a-hundred-year flood event derived from hydrological and hydraulic modeling. This low percentage reveals the distinct purpose and nature of the two approaches (remote sensing and modeling), as well as their complementarity. In terms of flood exposure, October emerges as the most critical month, affecting 281,406 people (1.56% of the population). The Dakar, Diourbel, Thiès, and Saint-Louis regions bore substantial impacts, affecting 437,025; 171,537; 115,552; and 77,501 people, respectively. These findings emphasize the imperative for comprehensive disaster preparation and mitigation efforts. This study provides a crucial national-scale perspective to guide Senegal’s authorities in formulating effective flood management, intervention, and adaptation strategies. Full article
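The exposure overlay amounts to intersecting a flood mask with gridded population: pixels with low SAR backscatter that are not permanent water are flagged as flooded, and the population on those pixels is summed. The sketch below assumes a −16 dB VV threshold, a common heuristic that may differ from the study's calibrated value, and uses toy grids in place of the Sentinel-1, GSW, and GHSL layers:

```python
FLOOD_DB = -16.0  # assumed VV backscatter threshold (dB); site calibration may differ

def flood_exposure(backscatter_db, permanent_water, population):
    """Sum population on pixels that are newly flooded: low backscatter
    and not already permanent water (GSW-style mask)."""
    exposed = 0.0
    for db_row, perm_row, pop_row in zip(backscatter_db, permanent_water, population):
        for db, perm, pop in zip(db_row, perm_row, pop_row):
            if db < FLOOD_DB and not perm:
                exposed += pop
    return exposed

sigma0 = [[-20.0, -10.0], [-18.0, -15.0]]   # toy Sentinel-1 VV grid (dB)
perm = [[True, False], [False, False]]      # permanent-water mask
pop = [[500.0, 120.0], [80.0, 60.0]]        # gridded population counts
print(flood_exposure(sigma0, perm, pop))  # 80.0: only pixel (1, 0) qualifies
```

Substituting the MODIS urban and cropland masks for the population grid gives the exposed urban area and farmland in the same way.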
Figure 1
<p>Location of the study area. The insert in the top right corner locates our study area on the African continent. The left-hand map represents our study area, Senegal, with 14 administrative regions, and shaded relief as the map background. The 14 regions are designated by numbers 1 to 14. The corresponding names are provided in <a href="#app1-water-16-02201" class="html-app">Supplementary Table S1</a>.</p>
Figure 2
<p>Framework for flood extent delineation and flood exposure assessment: remote sensing methodology (green), flooding for a one-in-a-hundred-year flood event methodology using modeling (blue), and methodology for estimating exposed population, urban areas, and farmland (black).</p>
Figure 3
<p>Spatial distribution of flooded areas based on Sentinel-1, GSW, and HydroSHEDS data for the 2022 flood event per region and month: July (red), August (orange), September (yellow), and October (light green).</p>
Figure 4
<p>Histogram of flooded areas based on Sentinel-1, GSW, and HydroSHEDS data for the 2022 flood event per region and month: July (red), August (orange), September (yellow), and October (light green).</p>
Figure 5
<p>Spatial distribution of flooded areas based on Sentinel-1, GSW, and HydroSHEDS data for the 2022 flood event in the two most flooded regions: (<b>a</b>) Saint-Louis [<a href="#B4-water-16-02201" class="html-bibr">4</a>] (<b>top</b>) and (<b>b</b>) Ziguinchor [<a href="#B2-water-16-02201" class="html-bibr">2</a>] (<b>bottom</b>).</p>
Figure 6
<p>Population exposed to flooding from July to October 2022, estimated using the intersection of GHLS population datasets with the flooded areas in the Google Earth Engine.</p>
Figure 7
<p>Spatial distribution of the two most exposed regions to flooding by population: (<b>a</b>) Dakar and (<b>b</b>) Diourbel. Assessed through the intersection of GHLS population datasets with the flooded areas in the Google Earth Engine.</p>
Figure 7 Cont.
<p>Spatial distribution of the two most exposed regions to flooding by population: (<b>a</b>) Dakar and (<b>b</b>) Diourbel. Assessed through the intersection of GHLS population datasets with the flooded areas in the Google Earth Engine.</p>
Figure 8
<p>(<b>a</b>) Urban areas and (<b>b</b>) farmland exposed to flooding, derived from the intersection of MODIS land cover datasets with the flooded areas in the Google Earth Engine.</p>
Figure 8 Cont.
<p>(<b>a</b>) Urban areas and (<b>b</b>) farmland exposed to flooding, derived from the intersection of MODIS land cover datasets with the flooded areas in the Google Earth Engine.</p>
29 pages, 19031 KiB  
Article
Directional Applicability Analysis of Albedo Retrieval Using Prior BRDF Knowledge
by Hu Zhang, Qianrui Xi, Junqin Xie, Xiaoning Zhang, Lei Chen, Yi Lian, Hongtao Cao, Yan Liu, Lei Cui and Yadong Dong
Remote Sens. 2024, 16(15), 2744; https://doi.org/10.3390/rs16152744 - 26 Jul 2024
Abstract
Surface albedo measures the proportion of incoming solar radiation reflected by the Earth’s surface. Accurate albedo retrieval from remote sensing data usually requires sufficient multi-angular observations to account for the surface reflectance anisotropy. However, most middle and high-resolution remote sensing satellites lack the capability to acquire sufficient multi-angular observations. Existing algorithms for retrieving surface albedo from single-direction reflectance typically rely on land cover types and vegetation indices to extract the corresponding prior knowledge of surface anisotropic reflectance from coarse-resolution Bidirectional Reflectance Distribution Function (BRDF) products. This study introduces an algorithm for retrieving albedo from directional reflectance based on a 3 × 3 BRDF archetype database established using the 2015 global time-series Moderate Resolution Imaging Spectro-radiometer (MODIS) BRDF product. For different directions, BRDF archetypes are applied to the simulated MODIS directional reflectance to retrieve albedo. By comparing the retrieved albedos with the MODIS albedo, the BRDF archetype that yields the smallest Root Mean Squared Error (RMSE) is selected as the prior BRDF for the direction. A lookup table (LUT) that contains the optimal BRDF archetypes for albedo retrieval under various observational geometries is established. The impact of the number of BRDF archetypes on the accuracy of albedo is analyzed according to the 2020 MODIS BRDF. The LUT is applied to the MODIS BRDF within specific BRDF archetype classes to validate its applicability under different anisotropic reflectance characteristics. The applicability of the LUT across different data types is further evaluated using simulated reflectance or real multi-angular measurements. The results indicate that (1) for any direction, a specific BRDF archetype can retrieve a high-accuracy albedo from directional reflectance. 
The optimal BRDF archetype varies with the observation direction. (2) Compared to the prior BRDF knowledge obtained through an averaging method, the BRDF archetype LUT based on the 3 × 3 BRDF archetype database can more accurately retrieve the surface albedo. (3) The BRDF archetype LUT effectively eliminates the influence of surface anisotropic reflectance characteristics in albedo retrieval across different scales and types of data. Full article
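The LUT-selection logic can be illustrated with a toy ratio-based retrieval (hypothetical archetype numbers; the actual method uses kernel-driven BRDF archetypes fitted to MODIS data, not scalar ratios): each archetype scales an observed directional reflectance to an albedo, and the archetype whose retrievals minimize the RMSE against the reference albedos wins the LUT slot for that geometry:

```python
def retrieve_albedo(r_obs, archetype):
    """Scale an observed directional reflectance by the archetype's
    albedo-to-reflectance ratio for one viewing geometry."""
    return r_obs * archetype["albedo"] / archetype["reflectance"]

def best_archetype(obs_reflectances, ref_albedos, archetypes):
    """Pick the archetype whose retrievals minimize RMSE vs. reference albedos."""
    def rmse(arch):
        errors = [(retrieve_albedo(r, arch) - a) ** 2
                  for r, a in zip(obs_reflectances, ref_albedos)]
        return (sum(errors) / len(errors)) ** 0.5
    return min(archetypes, key=rmse)

archetypes = [  # hypothetical archetype entries for one viewing direction
    {"name": "A1", "albedo": 0.20, "reflectance": 0.25},  # ratio 0.8
    {"name": "A2", "albedo": 0.30, "reflectance": 0.25},  # ratio 1.2
]
obs = [0.25, 0.30, 0.20]   # simulated directional reflectances
ref = [0.20, 0.24, 0.16]   # reference albedos consistent with ratio 0.8
print(best_archetype(obs, ref, archetypes)["name"])  # A1
```

Repeating this selection for every solar/viewing geometry populates the lookup table described in the abstract.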
Figure 1
<p>The flowchart for albedo retrieval from directional reflectance based on BRDF archetypes. The red, blue and green lines in the validation section represent the processing workflows for different validation data.</p>
Figure 2
<p>The shapes of BRDF archetypes (red line) on the PP at an SZA of 45° for the NIR band. (<b>a</b>–<b>i</b>) refer to the nine BRDF archetype classes. The gray lines refer to 100 normalized MODIS BRDF selected from each BRDF archetype class randomly.</p>
Figure 3
<p>The viewing zenith angles (<b>a</b>) and azimuth angles (<b>b</b>) of the LUT. The radius represents the zenith angle, and the polar angle represents the azimuth angle. Each point represents a direction, and different colors represent the magnitudes of the angles.</p>
Figure 4
<p>Angular sampling of multi-angular observations. (<b>a</b>,<b>b</b>) refer to the MODIS observations within 2021.101–2021.116 and 305–320, (<b>c</b>) shows the angular distribution of POLDER data named ‘brdf_ndvi03_0634_2286.txt’, and (<b>d</b>) represents the angular distribution pattern of ground measurements named ‘Parabola.1987.ifc3-site36.inp’. Solid dots represent the locations of the view, and the red open circles refer to the locations of the sun.</p>
Full article ">Figure 5
<p>The comparison of albedos retrieved from different BRDF archetypes and directional reflectance with MODIS albedo in the NIR band. (<b>a</b>–<b>i</b>) represent the inversion results for the nine BRDF archetypes, respectively. The observation is positioned with an SZA of 45° and a VZA of 55° in the backward direction of the PP. The color represents the density of overlapping points.</p>
Full article ">Figure 6
<p>The comparison between directional reflectance (<b>a</b>–<b>d</b>) or albedo (<b>e</b>–<b>h</b>) retrieved from the BRDF archetype with the least RMSE and MODIS albedo in the NIR band. (<b>a</b>,<b>e</b>) represent the direction with an VZA of 55° in the backward direction of PP; (<b>b</b>,<b>f</b>) represent the forward direction of 45° in PP; (<b>c</b>,<b>g</b>) represent the nadir direction; (<b>d</b>,<b>h</b>) represent the direction of 60° in CPP. The color represents the density of overlapping points.</p>
Full article ">Figure 7
<p>The distribution of RMSE<sub>r</sub> (<b>a</b>–<b>d</b>) and RMSE<sub>a</sub> (<b>e</b>–<b>h</b>) in the red band over the viewing hemisphere under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>). The radius represents the zenith angle, and the polar angle represents the azimuth angle.</p>
Full article ">Figure 8
<p>The distribution of RMSE<sub>r</sub> (<b>a</b>–<b>d</b>) and RMSE<sub>a</sub> (<b>e</b>–<b>h</b>) in the NIR band over the viewing hemisphere under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>).</p>
Full article ">Figure 9
<p>The BRDF archetype LUTs for the red (<b>a</b>–<b>d</b>) and NIR (<b>e</b>–<b>h</b>) bands for retrieving BSA based on directional reflectance under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>). Different colors represent different BRDF archetypes.</p>
Full article ">Figure 10
<p>The BRDF archetype LUTs for the red (<b>a</b>–<b>d</b>) and NIR (<b>e</b>–<b>h</b>) bands for retrieving WSA based on directional reflectance under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>). Different colors represent different BRDF archetypes.</p>
Full article ">Figure 11
<p>The 3D pattern of mean BRDF at an SZA of 30°. (<b>a</b>) is the red band, and (<b>b</b>) is the NIR band. Colors represent the magnitude of reflectance.</p>
Full article ">Figure 12
<p>The distribution of RMSE<sub>a</sub> based on the mean BRDF in the red (<b>a</b>–<b>d</b>) and NIR (<b>e</b>–<b>h</b>) bands over the viewing hemisphere under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>).</p>
Full article ">Figure 13
<p>The proportions of directions with RMSE less than 0.025 and 0.045 in the red and NIR bands. (<b>a</b>,<b>b</b>) refer to the BSA and WSA, respectively. The RMSEs are calculated based on directional reflectance, mean BRDF, and LUTs established using 6 × 1, 2 × 2, 3 × 3, and 5 × 5 BRDF archetypes.</p>
Full article ">Figure 14
<p>Validation based on 2020 MODIS BRDF product. The distribution of RMSE<sub>a</sub> in the red (<b>a</b>–<b>d</b>) and NIR (<b>e</b>–<b>h</b>) bands over the viewing hemisphere under SZA of 5° (<b>a</b>,<b>e</b>), 30° (<b>b</b>,<b>f</b>), 45° (<b>c</b>,<b>g</b>), and 60° (<b>d</b>,<b>h</b>).</p>
Full article ">Figure 15
<p>The distribution of RMSE<sub>a</sub> over the viewing hemisphere within each BRDF archetype class at an SZA of 45°. (<b>a</b>–<b>i</b>) represent the nine BRDF archetype classes, respectively.</p>
Full article ">Figure 15 Cont.
<p>The distribution of RMSE<sub>a</sub> over the viewing hemisphere within each BRDF archetype class at an SZA of 45°. (<b>a</b>–<b>i</b>) represent the nine BRDF archetype classes, respectively.</p>
Full article ">Figure 16
<p>The distribution of RMSE<sub>r</sub> over the viewing hemisphere within each BRDF archetype class at an SZA of 45°. (<b>a</b>–<b>i</b>) represent the nine BRDF archetype classes, respectively.</p>
Full article ">Figure 17
<p>Accuracy evaluation of albedo retrieval using LUT based on multi-angular data simulated by PROSAIL. (<b>a</b>–<b>c</b>) refer to the RMSE<sub>r</sub> and (<b>d</b>–<b>f</b>) refer to the RMSE<sub>a</sub> over the viewing hemisphere under SZA of 15°, 45°, and 60°.</p>
Full article ">Figure 18
<p>The comparison between simulated MODIS directional reflectance (<b>a</b>,<b>c</b>,<b>e</b>,<b>f</b>) or albedo (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) retrieved from the BRDF archetype LUTs and MODIS albedo. (<b>a</b>,<b>b</b>,<b>e</b>,<b>f</b>) refer to the red band, and (<b>c</b>,<b>d</b>,<b>g</b>,<b>h</b>) refer to the NIR band. The color represents the density of overlapping points.</p>
Full article ">Figure 19
<p>The comparison between POLDER observations (<b>a</b>,<b>c</b>) or albedo (<b>b</b>,<b>d</b>) retrieved from the BRDF archetype LUTs and POLDER albedo based on multi-angular observations. (<b>a</b>,<b>b</b>) refer to the red band, and (<b>c</b>,<b>d</b>) refer to the NIR band.</p>
Full article ">Figure 20
<p>The comparison between ground observations (<b>a</b>,<b>c</b>) or albedo (<b>b</b>,<b>d</b>) retrieved from the BRDF archetype LUTs and albedo based on multi-angular observations. (<b>a</b>,<b>b</b>) refer to the red band, and (<b>c</b>,<b>d</b>) refer to the NIR band.</p>
Full article ">
16 pages, 3339 KiB  
Article
Localized Crop Classification by NDVI Time Series Analysis of Remote Sensing Satellite Data; Applications for Mechanization Strategy and Integrated Resource Management
by Hafiz Md-Tahir, Hafiz Sultan Mahmood, Muzammil Husain, Ayesha Khalil, Muhammad Shoaib, Mahmood Ali, Muhammad Mohsin Ali, Muhammad Tasawar, Yasir Ali Khan, Usman Khalid Awan and Muhammad Jehanzeb Masud Cheema
AgriEngineering 2024, 6(3), 2429-2444; https://doi.org/10.3390/agriengineering6030142 - 26 Jul 2024
Viewed by 1119
Abstract
In data-scarce regions, prudent planning and precise decision-making for sustainable development, especially in agriculture, remain challenging due to the lack of reliable information. Remotely sensed satellite images provide a powerful source for assessing land use and land cover (LULC) classes and identifying crops. Applying remote sensing (RS) in conjunction with a Geographical Information System (GIS) and modern tools and algorithms from artificial intelligence (AI) and deep learning has proven effective for strategic planning and integrated resource management. The study was conducted in the canal command area of the Lower Chenab Canal system in Punjab, Pakistan. Crop features/classes were assessed using the Normalized Difference Vegetation Index (NDVI). Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m and Landsat 5 TM (Thematic Mapper) images were used for NDVI time-series analysis with an unsupervised classification technique to obtain LULC classes, which helped to discern the cropping pattern, crop rotation, and the area under specific crops; these were then used as key inputs for agricultural mechanization planning and resource management. The accuracy of the LULC map was 78%, as assessed by the error matrix approach. The limited availability of high-resolution RS data and the accuracy of the results are the main concerns observed in this study; both could be addressed through good-quality local data sources and advanced processing techniques, which would make the approach more useful and applicable for regional agriculture and environmental management. Full article
(This article belongs to the Special Issue Application of Remote Sensing and GIS in Agricultural Engineering)
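For readers reproducing the NDVI step, the index itself is just a normalized band ratio computed per pixel; a minimal sketch (the epsilon guard and array layout are illustrative choices, not taken from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    eps avoids division by zero over dark pixels (e.g. water, shadow).
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# A per-pixel stack of such NDVI layers across the season (the study
# uses 46 MODIS NDVI layers) is the time-series feature that the
# unsupervised classifier clusters into LULC/crop classes.
```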
Figures
Figure 1. Satellite data downloading parameters: (a) H = 24, V = 05 are the horizontal and vertical tile numbers of MODIS, respectively (tile size = 10° × 10° latitude/longitude); (b) Path = 150, Row = 38 is the address of the Landsat 5 TM images (tile size is approximately 170 km north–south by 183 km east–west).
Figure 2. MODIS data for the study area: (a) the original MODIS NDVI tile h24v05 covers one-fourth of the area of Pakistan; (b) subset and layer stack of the MODIS image of the study area “Rachna Doab”.
Figure 3. Model for deriving the digital NDVI values.
Figure 4. Comparison of the currently developed LULC map (a) from MODIS with (b) the fine-resolution Landsat 5 TM map.
Figure 5. Crop phenology and NDVI profiles of the final agriculture/crop classes.
Figure 6. Land cover and land use map developed from 46 MODIS NDVI layers for 2008–2009.
Figure 7. LULC classes and their areal extent in the Faisalabad and Toba Tek Singh Districts.
18 pages, 5889 KiB  
Article
How Useful Are Moderate Resolution Imaging Spectroradiometer Observations for Inland Water Temperature Monitoring and Warming Trend Assessment in Temperate Lakes in Poland?
by Mariusz Sojka, Mariusz Ptak, Katarzyna Szyga-Pluta and Senlin Zhu
Remote Sens. 2024, 16(15), 2727; https://doi.org/10.3390/rs16152727 - 25 Jul 2024
Viewed by 448
Abstract
Continuous software development and widespread access to satellite imagery allow for obtaining increasingly accurate data on the natural environment. These data play an important role in hydrosphere research, and one of the most frequently addressed issues in the era of climate change is the thermal dynamics of its components. Interesting research opportunities in this area are provided by data from the Moderate Resolution Imaging Spectroradiometer (MODIS). These data have been collected for over two decades and have already been used to study water temperature in lakes. In the case of Poland, there is a long history of studying the thermal regime of lakes based on in situ observations, but so far, MODIS data have not been used in these studies. In this study, the available products, namely the 1-day and 8-day MODIS land surface temperature (LST), were validated. The obtained data were compared with in situ measurements, and the reliability of using these data to estimate long-term thermal changes in lake waters was also assessed. The analysis was based on two coastal lakes located in Poland. The 1-day MODIS LST results generally showed a good fit to in situ measurements (average RMSE of 1.9 °C). However, the analysis of long-term water temperature trends yielded results that diverged from those based on field measurements. This discrepancy results from the limited number of satellite observations, which is dictated by environmental factors, with cloud cover reaching 60% during the analysis period. Full article
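The two statistics this validation rests on, RMSE against in situ measurements and the slope of the long-term warming trend, are straightforward to compute; a minimal sketch (function names and the sample values in the comments are illustrative, not from the article):

```python
import numpy as np

def rmse(satellite, in_situ):
    """Root-mean-square error between satellite LST and in situ
    water temperature, both in degrees Celsius."""
    satellite = np.asarray(satellite, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return float(np.sqrt(np.mean((satellite - in_situ) ** 2)))

def trend_slope(years, temperatures):
    """Least-squares linear trend of water temperature (degC per year),
    the quantity compared between MODIS and in situ series."""
    slope, _intercept = np.polyfit(years, temperatures, 1)
    return float(slope)
```

The article's caveat then becomes concrete: with ~60% cloud cover, the satellite series feeding `trend_slope` is much sparser than the in situ series, so the two slopes can diverge even when the daily RMSE is small.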
Figures
Graphical abstract
Figure 1. Location of the study lakes.
Figure 2. Minimum (blue), average (red), and maximum (green) in situ water temperatures of Łebsko (a) and Gardno (b) lakes from April to October during the years 2003–2022.
Figure 3. Daily water temperature deviations measured in situ from the values obtained from the MODIS sensor for Lake Łebsko.
Figure 4. Daily water temperature deviations measured in situ from the values obtained from the MODIS sensor for Lake Gardno.
Figure 5. Scatter plot of daily water temperatures measured in situ and 1-day values from the MODIS sensor for Lake Łebsko.
Figure 6. Scatter plot of daily water temperatures measured in situ and 1-day values from the MODIS sensor for Lake Gardno.
Figure 7. One-day LST values obtained from MODIS sensors compared to in situ measurements for Lake Łebsko for the period from April to October 2019.
Figure 8. Average deviations between 1-day LST values recorded by MODIS on the Aqua and Terra satellites during the day and night.
Figure 9. The slope of the regression lines obtained from in situ measurements (blue) and MODIS (red) for Lakes Łebsko and Gardno over the years 2003–2022.
16 pages, 23675 KiB  
Article
Monitoring Sustainable Development Goal Indicator 15.3.1 on Land Degradation Using SEPAL: Examples, Challenges and Prospects
by Amit Ghosh, Pierrick Rambaud, Yelena Finegold, Inge Jonckheere, Pablo Martin-Ortega, Rashed Jalal, Adebowale Daniel Adebayo, Ana Alvarez, Martin Borretti, Jose Caela, Tuhin Ghosh, Erik Lindquist and Matieu Henry
Land 2024, 13(7), 1027; https://doi.org/10.3390/land13071027 - 9 Jul 2024
Cited by 1 | Viewed by 1303
Abstract
A third of the world’s ecosystems are considered degraded, and there is an urgent need for protection and restoration to make the planet healthier. Sustainable Development Goal (SDG) target 15.3 aims at protecting and restoring the terrestrial ecosystem to achieve a land degradation-neutral world by 2030. Land restoration through inclusive and productive growth is indispensable to promote sustainable development by fostering climate change-resistant, poverty-alleviating, and environmentally protective economic growth. SDG Indicator 15.3.1 is used to measure progress towards a land degradation-neutral world. Earth observation datasets are the primary data sources for deriving the three sub-indicators of Indicator 15.3.1, which requires selecting, querying, and processing a substantial historical archive of data. To reduce the complexities, make the calculation user-friendly, and adapt it to in-country applications, a module on the FAO’s SEPAL platform has been developed in compliance with the UNCCD Good Practice Guidance (GPG v2) to derive the necessary statistics and maps for monitoring and reporting land degradation. The module uses satellite data from the Landsat, Sentinel-2, and MODIS sensors for primary productivity assessment, along with other datasets enabling high-resolution to large-scale assessment of land degradation. The use of an in-country land cover transition matrix along with in-country land cover data enables a more accurate assessment of land cover changes over time. Four case studies from Bangladesh, Nigeria, Uruguay, and Angola are presented to highlight the prospects and challenges of monitoring land degradation using various datasets, including an LCML-based national land cover legend and land cover data. Full article
(This article belongs to the Section Land – Observation and Monitoring)
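The land cover sub-indicator described above reduces to a lookup in the transition matrix: each (baseline class, target class) pair maps to degraded, stable, or improved. A minimal sketch with a few illustrative transitions; the class names and signs below are examples only, not the official UNCCD default matrix or any national matrix:

```python
STATUS = {-1: "degraded", 0: "stable", 1: "improved"}

# Illustrative entries only; a real assessment defines the full 7 x 7
# UNCCD default matrix or an in-country custom matrix.
TRANSITIONS = {
    ("tree-covered", "cropland"): -1,   # clearing forest for agriculture
    ("cropland", "tree-covered"): 1,    # afforestation of farmland
    ("grassland", "artificial"): -1,    # urban expansion onto grassland
    ("grassland", "grassland"): 0,      # no change
}

def land_cover_status(baseline, target, matrix=TRANSITIONS):
    """Map a (baseline, target) land cover pair to a degradation status.

    Pairs absent from the sketch matrix are treated as stable here,
    purely to keep the example total; a real matrix covers every pair.
    """
    return STATUS[matrix.get((baseline, target), 0)]
```

Swapping `TRANSITIONS` for a national matrix, as in the Uruguay case study, is exactly what changes the sub-indicator map without touching the rest of the pipeline.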
Figures
Figure 1. A simplified structure of SEPAL’s workflows for Google-Earth-Engine-based modules (based on SEPAL’s architecture diagram [24]).
Figure 2. The three sub-indicators of Indicator 15.3.1.
Figure 3. The scale and direction of productivity trend and productivity state based on z score.
Figure 4. Possible combinations of the three metrics to get the productivity sub-indicators; dotted lines represent the combination initially proposed in GPG v1.
Figure 5. Interface of the default transition matrix that uses seven UNCCD land cover categories (D [red] = degraded, S [tan] = stable, and I [green] = improved).
Figure 6. The complete set of combinations of the three sub-indicators’ statuses and the corresponding statuses of the final indicator.
Figure 7. Different sections of the SEPAL module 15.3.1 interface.
Figure 8. Baseline and reporting status of SDG Indicator 15.3.1.
Figure 9. Final status of the SDG indicator after combining the baseline and reporting status.
Figure 10. Extent of land degradation for the reporting period using NDVI and EVI in Nigeria.
Figure 11. National land cover transition matrix for Uruguay as per the SEPAL SDG 15.3.1 specification.
Figure 12. Land cover sub-indicator (15.3.1) for the baseline period: (a) based on the national custom transition matrix and national land cover data; (b) based on the default transition matrix and ESA CCI land cover data.
Figure 13. Comparison of land degradation mapping using the MODIS and Landsat satellites.