Search Results (239)

Search Parameters:
Keywords = PlanetScope

17 pages, 2699 KiB  
Article
Mapping Polylepis Forest Using Sentinel, PlanetScope Images, and Topographical Features with Machine Learning
by Diego Pacheco-Prado, Esteban Bravo-López and Luis Á. Ruiz
Remote Sens. 2024, 16(22), 4271; https://doi.org/10.3390/rs16224271 - 16 Nov 2024
Viewed by 200
Abstract
Globally, there is a significant trend in the loss of native forests, including those of the Polylepis genus, which are essential for soil conservation across the Andes Mountain range. These forests play a critical role in regulating water flow, promoting soil regeneration, and retaining essential nutrients and sediments, thereby contributing to the soil conservation of the region. In Ecuador, these forests are often fragmented and isolated in areas of high cloud cover, making it difficult to use remote sensing and spectral vegetation indices to detect this forest species. This study developed twelve scenarios using medium- and high-resolution satellite data, integrating datasets such as Sentinel-2 and PlanetScope (optical), Sentinel-1 (radar), and the Sigtierras project topographic data. The scenarios were categorized into two groups: SC1–SC6, combining 5 m resolution data, and SC7–SC12, combining 10 m resolution data. Additionally, each scenario was tested with two target types: multiclass (distinguishing Polylepis stands, native forest, pine, shrub vegetation, and other classes) and binary (distinguishing Polylepis from non-Polylepis). The Recursive Feature Elimination technique was employed to identify the most effective variables for each scenario. This process reduced the number of variables by selecting those with high importance according to a Random Forest model, using accuracy and Kappa values as criteria. Finally, the scenario that presented the highest reliability was SC10 (Sentinel-2 and topography) with a pixel size of 10 m and a multiclass target, achieving an accuracy of 0.91 and a Kappa coefficient of 0.80. For the Polylepis class, the User Accuracy and Producer Accuracy were 0.90 and 0.89, respectively. The findings confirm that, despite the limited area of the Polylepis stands, integrating topographic and spectral variables at a 10 m pixel resolution improves detection accuracy.
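As a sketch of the feature-selection step described in this abstract, the snippet below pairs scikit-learn's Recursive Feature Elimination with a Random Forest and re-scores the reduced variable set; the synthetic data, variable count, and number of retained features are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch of RFE with a Random Forest for pruning predictor variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 30))      # 30 candidate spectral/topographic variables
y = rng.integers(0, 5, size=500)    # 5 classes (e.g., Polylepis, native forest, ...)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rfe = RFE(estimator=rf, n_features_to_select=10, step=2)
rfe.fit(X, y)

selected = np.flatnonzero(rfe.support_)
print("Selected variable indices:", selected)

# Re-evaluate the reduced model with cross-validated accuracy
acc = cross_val_score(rf, X[:, selected], y, cv=5, scoring="accuracy").mean()
print(f"CV accuracy with selected variables: {acc:.3f}")
```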
22 pages, 10867 KiB  
Article
Modeling the Land Surface Phenological Responses of Dominant Miombo Tree Species to Climate Variability in Western Tanzania
by Siwa E. Nkya, Deo D. Shirima, Robert N. Masolele, Henrik Hedenas and August B. Temu
Remote Sens. 2024, 16(22), 4261; https://doi.org/10.3390/rs16224261 - 15 Nov 2024
Viewed by 299
Abstract
Species-level phenology models are essential for predicting shifts in tree species under climate change. This study quantified phenological differences among dominant miombo tree species and modeled seasonal variability using climate variables. We used TIMESAT version 3.3 software and the Savitzky–Golay filter to derive phenology metrics from bi-monthly PlanetScope Normalized Difference Vegetation Index (NDVI) data from 2017 to 2024. A repeated measures Analysis of Variance (ANOVA) assessed differences in phenology metrics between species, while a regression analysis modeled the Start of Season (SOS) and End of Season (EOS). The results show significant seasonal and species-level variations in phenology. Brachystegia spiciformis differed from other species in EOS, Length of Season (LOS), base value, and peak value. Surface solar radiation and skin temperature one month before SOS were key predictors of SOS, with an adjusted R-squared of 0.90 and a Root Mean Square Error (RMSE) of 13.47 for Brachystegia spiciformis. SOS also strongly predicted EOS, with an adjusted R-squared of 1 and an RMSE of 3.01 for Brachystegia spiciformis, indicating a shift in the growth cycle of tree species due to seasonal variability. These models provide valuable insights into potential phenological shifts in miombo species due to climate change.
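The snippet below sketches the phenology-metric extraction described here: Savitzky–Golay smoothing of an NDVI series and amplitude-threshold dates for Start and End of Season, loosely mirroring what TIMESAT does; the synthetic series and the 20% amplitude threshold are assumptions, not the authors' configuration.

```python
# A minimal sketch of SOS/EOS extraction from a smoothed NDVI time series.
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(0, 365, 15)                            # ~bi-monthly acquisition days
rng = np.random.default_rng(0)
ndvi = 0.3 + 0.4 * np.exp(-((t - 180) / 60.0) ** 2)  # one synthetic green season
ndvi += rng.normal(0, 0.02, t.size)                  # observation noise

smooth = savgol_filter(ndvi, window_length=7, polyorder=3)

base, peak = smooth.min(), smooth.max()
threshold = base + 0.2 * (peak - base)               # 20% of seasonal amplitude

above = smooth >= threshold
sos = t[np.argmax(above)]                            # first day at/above threshold
eos = t[t.size - 1 - np.argmax(above[::-1])]         # last day at/above threshold
print(f"SOS ~ day {sos}, EOS ~ day {eos}, LOS ~ {eos - sos} days")
```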
(This article belongs to the Special Issue Advances in Detecting and Understanding Land Surface Phenology)
Figures:
Figure 1. Site location: (A) within the miombo woodland distribution, (B) in Tongwe Forest Reserve, Tanganyika District, Katavi Region, Tanzania, and (C) at elevations measured in meters above sea level.
Figure 2. Flow chart of the materials and methods.
Figure 3. Layer of selected tree species crowns and their points for NDVI extraction, digitized from the drone orthophoto of the site in western Tanzania.
Figure 4. Generation of phenological metrics per season and point in TIMESAT.
Figure 5. Correlation of seasonal and pre-season climatic factors with Start of Season (SOS) for B. spiciformis. R07–R10 = rainfall for July–October; SkinT07–SkinT10 = skin temperature for July–October; SSRDT07–SSRDT10 = surface solar radiation for July–October; DayLen07–DayLen10 = day length for July–October. Asterisks indicate significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 6. Correlation of seasonal and pre-season climatic factors with SOS for J. globiflora; abbreviations and significance levels as in Figure 5.
Figure 7. Correlation of seasonal and pre-season climatic factors with End of Season (EOS) for B. spiciformis. R03–R06, SkinT03–SkinT06, SSRDT03–SSRDT06, and DayLen03–DayLen06 denote the same variables for March–June; significance levels as in Figure 5.
Figure 8. Correlation of seasonal and pre-season climatic factors with EOS for J. globiflora. R04–R07, SkinT04–SkinT07, SSRDT04–SSRDT07, and DayLen04–DayLen07 denote the same variables for April–July; significance levels as in Figure 5.
Figure 9. Correlation between Start of Season and End of Season for B. spiciformis and J. globiflora across seasons.
Figure 10. Monthly trends of (A) day length and (B) rainfall from January 2020 to December 2023.
Figure 11. Monthly trends in (A) surface solar radiation and (B) skin temperature from January 2020 to December 2023.
Figure 12. Overlay of the site on (A) CHIRPS rainfall data and (B) ERA5-Land reanalysis surface solar radiation and temperature data.
17 pages, 1694 KiB  
Article
Exploring Multisource Remote Sensing for Assessing and Monitoring the Ecological State of the Mountainous Natural Grasslands in Armenia
by Grigor Ayvazyan, Vahagn Muradyan, Andrey Medvedev, Anahit Khlghatyan and Shushanik Asmaryan
Appl. Sci. 2024, 14(22), 10205; https://doi.org/10.3390/app142210205 - 7 Nov 2024
Viewed by 377
Abstract
Remote sensing (RS) is an essential component in studying and monitoring ecosystems suffering from the disruption of natural balance, loss of productivity, and degradation. The current study assessed the feasibility of multisource RS for assessing and monitoring mountainous natural grasslands in Armenia. RS data of different spatial resolutions (Landsat 8, Sentinel-2, PlanetScope, and multispectral UAV) were used to obtain various vegetation spectral indices: NDVI, NDWI, GNDVI, GLI, EVI, DVI, SAVI, MSAVI, and GSAVI. The relationships among the indices were assessed via the Spearman correlation method, which showed a significant positive correlation in all cases (p < 0.01). A comparison of all indices showed a significantly high correlation between UAV and PlanetScope imagery, while comparisons of UAV with Sentinel-2 and Landsat 8 data showed moderate and low significant correlations (p < 0.01), respectively. Trend analysis was also performed to explore the spatial-temporal changes of these indices using Mann–Kendall statistical tests (MK, MKKH, MKKY, PW, TFPW), which indicated no significant trend; however, Sen's slope as a second estimator showed a decreasing trend. Overall, among the open-source data, Sentinel-2 showed the best alignment, making it a reliable tool for accurately monitoring the ecological state of small mountainous grasslands.
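A minimal sketch of the two statistical steps named above: Spearman correlation between the same index from two sensors, and Sen's (Theil–Sen) slope as a second trend estimator; all values below are synthetic stand-ins for the study's co-registered index samples.

```python
import numpy as np
from scipy.stats import spearmanr, theilslopes

rng = np.random.default_rng(1)

# Cross-sensor agreement: the same index sampled from two co-registered rasters
ndvi_uav = rng.uniform(0.2, 0.8, size=1000)            # fine-resolution samples
ndvi_ps = ndvi_uav + rng.normal(0, 0.05, size=1000)    # coarser sensor + noise
rho, p = spearmanr(ndvi_uav, ndvi_ps)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

# Sen's (Theil-Sen) slope as a second trend estimator on a yearly series
years = np.arange(2017, 2024)
series = 0.60 - 0.01 * (years - 2017) + rng.normal(0, 0.01, years.size)
slope, intercept, lo, hi = theilslopes(series, years)
print(f"Sen's slope = {slope:.4f} per year (negative = decreasing trend)")
```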
(This article belongs to the Special Issue Ecosystems and Landscape Ecology)
Figures:
Figure 1. Mountain grasslands of Nerkin Sasnashen.
Figure 2. Correlations between PlanetScope (a), Sentinel-2 (b), and Landsat 8 (c) satellite data and UAV data (at the 0.01 significance level).
20 pages, 11797 KiB  
Article
Relative Radiometric Normalization for the PlanetScope Nanosatellite Constellation Based on Sentinel-2 Images
by Rafael Luís Silva Dias, Ricardo Santos Silva Amorim, Demetrius David da Silva, Elpídio Inácio Fernandes-Filho, Gustavo Vieira Veloso and Ronam Henrique Fonseca Macedo
Remote Sens. 2024, 16(21), 4047; https://doi.org/10.3390/rs16214047 - 30 Oct 2024
Viewed by 963
Abstract
Detecting and characterizing continuous changes on Earth's surface has become critical for planning and development. Since 2016, Planet Labs has launched hundreds of nanosatellites, known as Doves. Despite the advantages of their high spatial and temporal resolution, these nanosatellites' images still present inconsistencies in radiometric resolution, limiting their broader usability. To address this issue, a model for radiometric normalization of PlanetScope (PS) images was developed using Multispectral Instrument/Sentinel-2 (MSI/S2) sensor images as a reference. An extensive database was compiled, including images from all available versions of the PS sensor (e.g., PS2, PSB.SD, and PS2.SD) from 2017 to 2022, along with data from various weather stations. The sampling process was carried out for each band using two methods: Conditioned Latin Hypercube Sampling (cLHS) and statistical visualization. Five machine learning algorithms were then applied, incorporating both linear and nonlinear models based on rules and decision trees: Multiple Linear Regression (MLR), Model Averaged Neural Network (avNNet), Random Forest (RF), k-Nearest Neighbors (KKNN), and Support Vector Machine with Radial Basis Function (SVM-RBF). A rigorous covariate selection process was performed for model application, and the models' performance was evaluated using the following statistical indices: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Lin's Concordance Correlation Coefficient (CCC), and Coefficient of Determination (R2). Additionally, Kruskal–Wallis and Dunn tests were applied during model selection to identify the best-performing model. The results indicated that the RF model provided the best fit across all PS sensor bands, with more accurate results in the longer wavelength bands (Band 3 and Band 4). The models achieved RMSE reflectance values of approximately 0.02 and 0.03 in these bands, with R2 and CCC ranging from 0.77 to 0.90 and 0.87 to 0.94, respectively. In summary, this study makes a significant contribution to optimizing the use of PS sensor images for various applications by offering a detailed and robust approach to radiometric normalization. These findings have important implications for the efficient monitoring of surface changes on Earth, potentially enhancing the practical and scientific use of these datasets.
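The snippet below sketches the per-band normalization and scoring described above: a Random Forest regressor mapping PlanetScope reflectance to a Sentinel-2 reference band, evaluated with RMSE, R2, and Lin's CCC (implemented by hand, since scikit-learn does not ship it). Data and band relationships are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

def lins_ccc(y_true, y_pred):
    """Lin's Concordance Correlation Coefficient."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

rng = np.random.default_rng(7)
X = rng.uniform(0, 0.5, size=(2000, 4))                # PS bands B1-B4 (reflectance)
y = 0.95 * X[:, 3] + 0.02 + rng.normal(0, 0.01, 2000)  # matching S2 reference band

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
rf = RandomForestRegressor(n_estimators=300, random_state=7).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.4f}  R2 = {r2_score(y_te, pred):.3f}  "
      f"CCC = {lins_ccc(y_te, pred):.3f}")
```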
Figures:
Figure 1. Location of the study area.
Figure 2. Schematic flowchart of the steps for the radiometric normalization of the PS constellation.
Figure 3. Spatial distribution of samples in the study quadrant; orange points represent locations identified by the cLHS method, and green points those obtained through statistical evaluation of band-to-band pseudo-invariant pixels.
Figure 4. Importance of covariates in normalizing the PlanetScope sensor using the MSI sensor, based on MLR, avNNet, RF, KKNN, and SVM-RBF, applied to bands B1–B4.
Figure 5. Comparison of predicted and observed reflectance values for Bands 1–4 using the RF, avNNet, KKNN, MLR, and SVM-RBF machine learning models; the 1:1 line is shown in black and the trend line in gold.
Figure 6. Violin plot showing the results of the Kruskal–Wallis and Dunn's tests applied to the CCC and R2 indices for the machine learning models studied.
Figure 7. Dispersion plot showing predicted versus observed data, along with the statistical indices used to evaluate the normalized PS bands for various spectral indices.
35 pages, 16179 KiB  
Article
Vegetative Index Intercalibration Between PlanetScope and Sentinel-2 Through a SkySat Classification in the Context of “Riserva San Massimo” Rice Farm in Northern Italy
by Christian Massimiliano Baldin and Vittorio Marco Casella
Remote Sens. 2024, 16(21), 3921; https://doi.org/10.3390/rs16213921 - 22 Oct 2024
Viewed by 1716
Abstract
Rice farming in Italy accounts for about 50% of the EU's rice area and production. Precision agriculture has entered the scene to enhance sustainability, cut pollution, and ensure food security. Various studies have used remote sensing tools like satellites and drones for multispectral imaging. While Sentinel-2 is highly regarded for precision agriculture, it falls short for specific applications, like at the "Riserva San Massimo" (Gropello Cairoli, Lombardia, Northern Italy) rice farm, where irregularly shaped crops need higher resolution and frequent revisits to deal with cloud cover. A prior study that compared Sentinel-2 and the higher-resolution PlanetScope constellation for vegetative indices found a seasonal miscalibration in the Normalized Difference Vegetation Index (NDVI) and in the Normalized Difference Red Edge Index (NDRE). Dr. Agr. G.N. Rognoni, a seasoned agronomist working with this farm, stresses the importance of studying the radiometric intercalibration between the PlanetScope and Sentinel-2 vegetative indices so that the knowledge gained from Sentinel-2 can be carried over to variable rate application (VRA). A high-resolution SkySat image, taken almost simultaneously with a pair of Sentinel-2 and PlanetScope images, offered a chance to examine whether the irregular distribution of vegetation and barren land within rice fields might be a factor in the observed miscalibration. Using an unsupervised pixel-based image classification technique on SkySat imagery, it is feasible to split rice into two subclasses and intercalibrate them separately. The results indicated that combining histograms and agronomists' expertise could confirm the SkySat classification. Moreover, the uneven spatial distribution of rice does not affect the seasonal miscalibration examined in past studies, which can be adjusted using the methods described here, even with images taken four days apart: the first method emphasizes accuracy using linear regression, histogram shifting, and histogram matching; the second method is faster and utilizes only histogram matching.
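A minimal sketch of the faster of the two intercalibration routes named above: histogram matching of a PlanetScope index raster to a Sentinel-2 reference with scikit-image (the first method would add linear regression and histogram shifting before this step). The arrays are synthetic stand-ins for co-registered NDVI rasters.

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(3)
ndvi_s2 = np.clip(rng.normal(0.65, 0.10, size=(200, 200)), -1, 1)  # reference index
ndvi_ps = np.clip(rng.normal(0.58, 0.13, size=(200, 200)), -1, 1)  # index to calibrate

# Map the PlanetScope index distribution onto the Sentinel-2 reference distribution
ndvi_ps_matched = match_histograms(ndvi_ps, ndvi_s2)
print("mean before/after:", ndvi_ps.mean().round(3), ndvi_ps_matched.mean().round(3))
```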
Graphical abstract.
Figures:
Figure 1. Riserva San Massimo rice farm crops on 3 June 2021; SkySat image (EPSG: 4326).
Figure 2. Process flowchart.
Figure 3. SkySat unsupervised classification with vegetation in green and barren land in brown.
Figure 4. SkySat image compared with the masks produced in MATLAB.
Figure 5. Method 1 for NDVI.
Figure 6. Method 1 for NDRE.
Figure 7. Method 2 for NDVI; linear regression after histogram matching is not useful for distribution and statistics.
Figure 8. Method 2 for NDRE; linear regression after histogram matching is not useful for distribution and statistics.
Figures A1–A6. Method 1 linear regressions for NDVI (A1–A3) and NDRE (A4–A6), each for full crops, the vegetation subclass, and the barren land subclass.
Figures A7–A12. Method 2 linear regressions for NDVI (A7–A9) and NDRE (A10–A12), each for full crops, the vegetation subclass, and the barren land subclass.
35 pages, 31461 KiB  
Article
Detection of Floating Algae Blooms on Water Bodies Using PlanetScope Images and Shifted Windows Transformer Model
by Jihye Ahn, Kwangjin Kim, Yeji Kim, Hyunok Kim and Yangwon Lee
Remote Sens. 2024, 16(20), 3791; https://doi.org/10.3390/rs16203791 - 12 Oct 2024
Viewed by 1174
Abstract
The increasing water temperature due to climate change has led to more frequent algae blooms and deteriorating water quality in coastal areas and rivers worldwide. To address this, we developed a deep learning-based model for identifying floating algae blooms using PlanetScope optical images and the Shifted Windows (Swin) Transformer architecture. We created 1,998 datasets from 105 scenes of PlanetScope imagery collected between 2018 and 2023, covering 14 water bodies known for frequent algae blooms. The methodology included data pre-processing, dataset generation, deep learning modeling, and inference result generation. The input images contained six bands, including vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), enhancing the model's responsiveness to algae blooms. Evaluations were conducted using both single-period and multi-period datasets. The single-period model achieved a mean Intersection over Union (mIoU) between 72.18% and 76.47%, while the multi-period model significantly improved performance, with an mIoU of 91.72%. This demonstrates the potential of our model and highlights the importance of change detection in multi-temporal images for algae bloom monitoring. Additionally, the padding technique proposed in this study resolved the border issue that arises when mosaicking inference results from individual patches, providing a seamless view of the satellite scene.
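The snippet below sketches the IoU/mIoU evaluation used for the segmentation output, computed per class from intersection and union counts; a binary bloom/non-bloom case with synthetic labels stands in for the real patches.

```python
import numpy as np

def iou_per_class(y_true, y_pred, n_classes):
    """Per-class Intersection over Union from flat label arrays."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        ious.append(inter / union if union else np.nan)
    return ious

rng = np.random.default_rng(5)
y_true = rng.integers(0, 2, size=(256, 256)).ravel()                  # 1 = algae bloom
y_pred = np.where(rng.random(y_true.size) < 0.9, y_true, 1 - y_true)  # ~90% agreement

ious = iou_per_class(y_true, y_pred, n_classes=2)
print("IoU per class:", [round(float(v), 3) for v in ious],
      "mIoU:", round(float(np.nanmean(ious)), 3))
```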
Figures:
Figure 1. Google Maps image showing the locations of 14 water bodies around the world where algae blooms occur [30].
Figure 2. Satellite images of the study areas using Bing Maps images [31].
Figure 3. Flow chart of the proposed method for identification of floating algae blooms.
Figure 4. Surface remote sensing reflectance spectra for waters with different concentrations of chlorophyll-a [12].
Figure 5. A dataset example from PlanetScope SuperDove for the Yeongsan River on 15 April 2022: (a) true color (R–G–B), (b) false color (RE–G–NIR), and (c) label.
Figure 6. The procedure for creating the input dataset for the single-period and multi-period models.
Figure 7. Examples of pre-bloom (1 January 2023) and during-bloom (2 April 2023) images for the Yeongsan River: (a) true color (R–G–B), (b) false color (R–G–NIR), and (c) label.
Figure 8. An overview of the Swin Transformer: (a) hierarchical feature maps for reducing computational complexity, (b) the shifted-window approach used when calculating self-attention, (c) two successive Swin Transformer blocks, and (d) the core architecture [22].
Figure 9. Confusion matrix and IoU/mIoU calculation for image segmentation.
Figure 10. Conceptual diagram illustrating inference result generation using the padding technique.
Figure 11. Example of inference results before and after applying padding for the Yeongsan River.
Figure 12. Comparison of chlorophyll-a concentrations between pixels labeled as algae bloom and non-algae bloom; box min and box max denote the 25th and 75th percentiles.
Figure 13. Histograms of NDVI and EVI values for pixels labeled as algae bloom or non-algae bloom, sampled from approximately 100 of the 1998 input patches.
Figure 14. Box plots of NDVI and EVI values for the same samples; box min and box max denote the 25th and 75th percentiles.
Figure 15. Example of low prediction accuracy at the river edges of the Miho River.
Figure 16. Qualitative accuracy for 100 random patches: (a) RGB true color composite, (b) false color composite using red–green–NIR bands, (c) label image, and (d) predicted results.
Figure 17. Qualitative accuracy of a test set of 296 patches from the Yeongsan River; panels as in Figure 16.
Figure 18. Example with significant differences in detection rates in the Geum River (ROI_A: well-predicted region; ROI_B: poorly predicted region).
Figure 19. NDVI and EVI histogram distributions in ROI_A and ROI_B.
Figure 20. Qualitative accuracy for 57 random patches: (a) RGB true color of the during-bloom image, (b) false color (red–green–NIR) of the during-bloom image, (c) label image, and (d) predicted results.
Figure 21. Qualitative accuracy of a test set of 130 patches from the Geum River; panels as in Figure 20.
Figure 22. The in situ station at Baekjebo (Buyeo) in South Korea: (a) location and (b) time-series changes in chlorophyll-a measurements in 2020.
Figure 23. Twelve Sentinel-2 RGB time-series images from March, June, August, October, and November 2020.
Figure 24. Inference results for the 12 Sentinel-2 images from March, June, August, October, and November 2020.
25 pages, 5094 KiB  
Article
Evaluating Flood Damage to Paddy Rice Fields Using PlanetScope and Sentinel-1 Data in North-Western Nigeria: Towards Potential Climate Adaptation Strategies
by Sa’ad Ibrahim and Heiko Balzter
Remote Sens. 2024, 16(19), 3657; https://doi.org/10.3390/rs16193657 - 30 Sep 2024
Cited by 1 | Viewed by 1413
Abstract
Floods are significant global disasters, but their impact in developing countries is greater due to the lower shock tolerance, many subsistence farmers, land fragmentation, poor adaptation strategies, and low technical capacity, which worsen food security and livelihoods. Therefore, accurate and timely monitoring of flooded crop areas is crucial for both disaster impact assessments and adaptation strategies. However, most existing methods for monitoring flooded crops using remote sensing focus solely on estimating the flood damage, neglecting the need for adaptation decisions. To address these issues, we have developed an approach to mapping flooded rice fields using Earth observation and machine learning. This approach integrates high-resolution multispectral satellite images with Sentinel-1 data. We have demonstrated the reliability and applicability of this approach by using a manually labelled dataset related to a devastating flood event in north-western Nigeria. Additionally, we have developed a land suitability model to evaluate potential areas for paddy rice cultivation. Our crop extent and land use/land cover classifications achieved an overall accuracy of between 93% and 95%, while our flood mapping achieved an overall accuracy of 99%. Our findings indicate that the flood event caused damage to almost 60% of the paddy rice fields. Based on the land suitability assessment, our results indicate that more land is suitable for cultivation during natural floods than is currently being used. We propose several recommendations as adaptation measures for stakeholders to improve livelihoods and mitigate flood disasters. This study highlights the importance of integrating multispectral and synthetic aperture radar (SAR) data for flood crop mapping using machine learning. Decision-makers will benefit from the flood crop mapping framework developed in this study in a number of spatial planning applications.
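A minimal sketch of the multisource fusion idea above: NDWI from optical green/NIR bands stacked with Sentinel-1 VV/VH backscatter as features for a Random Forest flood classifier. All values, noise levels, and class separations below are synthetic assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 2000
flooded = rng.random(n) < 0.5                                   # synthetic labels

# Optical bands: open water is dark in NIR, so NDWI = (G - NIR)/(G + NIR) turns positive
green = np.where(flooded, rng.normal(0.08, 0.01, n), rng.normal(0.10, 0.02, n))
nir = np.where(flooded, rng.normal(0.03, 0.005, n), rng.normal(0.30, 0.05, n))
ndwi = (green - nir) / (green + nir)

# SAR backscatter (dB): smooth water surfaces scatter the signal away from the sensor
vv = np.where(flooded, rng.normal(-18, 2, n), rng.normal(-10, 2, n))
vh = vv - rng.normal(7, 1, n)

X = np.column_stack([ndwi, vv, vh])
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV overall accuracy:", cross_val_score(rf, X, flooded, cv=5).mean().round(3))
```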
Figures:
Figure 1. Study area location showing digital elevation, with three sites used in subsequent sections as extracts of the LULC/crop and flood extents from pre- and post-flood imagery (a), and the study area overlaid on Nigeria (b).
Figure 2. (a–f) RGB composites for three sites within the study area and their corresponding RF LULC maps from PlanetScope four bands, SAR, and NDWI: (a–c) PlanetScope RGB for sites I–III; (d–f) RF LULC maps for sites I–III.
Figure 3. Histogram distribution of the flood and non-flood training points used for the flooded/non-flooded classification: (a) NDWI (PlanetScope), (b) Sentinel-1 VV backscatter, and (c) Sentinel-1 VH backscatter.
Figure 4. (a–i) RF flooded and non-flooded layers overlaid by water bodies and affected and unaffected rice fields, from subsets of the full scenes (zoomed in from the three sites shown on the study area map).
Figure 5. Sentinel-1 VV signals of flooded/non-flooded paddy rice fields and their corresponding rainfall anomalies (2016–2023) for the three locations shown on the graphs (a,b).
Figure 6. Area in hectares and percentages for each LULC/crop extent, flooded/non-flooded area, and damaged paddy rice.
Figure 7. Potential cultivable area estimated with the weighted overlay approach for different scenarios: (a) rainfed (RF) agriculture, (b) rainfed under natural flood (RFNF), and (c) irrigated (IR) paddy rice farming.
15 pages, 3436 KiB  
Communication
Enhancing Alfalfa Biomass Prediction: An Innovative Framework Using Remote Sensing Data
by Matias F. Lucero, Carlos M. Hernández, Ana J. P. Carcedo, Ariel Zajdband, Pierre C. Guillevic, Rasmus Houborg, Kevin Hamilton and Ignacio A. Ciampitti
Remote Sens. 2024, 16(18), 3379; https://doi.org/10.3390/rs16183379 - 11 Sep 2024
Viewed by 721
Abstract
Estimating pasture biomass has emerged as a promising avenue to assist farmers in identifying the best cutting times for maximizing biomass yield using satellite data. This study aims to develop an innovative framework integrating field and satellite data to estimate aboveground biomass in alfalfa (Medicago sativa L.) at farm scale. For this purpose, samples were collected throughout the 2022 growing season on different mowing dates at three fields in Kansas, USA. The satellite data employed comprised four sources: Sentinel-2, PlanetScope, Planet Fusion, and Biomass Proxy. A grid of hyperparameters was created to establish different combinations and select the best coefficients. The permutation feature importance technique revealed that Planet's PlanetScope near-infrared (NIR) band and the Biomass Proxy product were the predictive features with the highest contribution to the biomass prediction model. A Bayesian Additive Regression Tree (BART) was applied to explore its ability to build a predictive model. Its performance was assessed via statistical metrics (R2 = 0.61; RMSE = 0.29 kg m−2). Additionally, uncertainty quantifications were proposed with this framework to assess the range of error in the predictions. In conclusion, this integration in a nonparametric approach achieved a useful prediction tool with the potential to optimize farmers' management decisions.
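The snippet below sketches the permutation-feature-importance step described above. BART has no scikit-learn implementation, so a gradient-boosting regressor stands in for the model; the feature names and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 600
nir = rng.uniform(0.2, 0.6, n)              # e.g., PlanetScope NIR reflectance
proxy = rng.uniform(0, 1, n)                # e.g., a biomass-proxy product value
noise = rng.uniform(0, 1, n)                # deliberately uninformative feature
biomass = 1.5 * nir + 0.8 * proxy + rng.normal(0, 0.1, n)  # kg m^-2, synthetic

X = np.column_stack([nir, proxy, noise])
X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Importance = mean drop in score when a feature's values are shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["NIR", "biomass_proxy", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```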
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Figures:
Figure 1. (A) Geographical distribution of the fields in Kansas, United States, included in this study. (B–D) Spatial distribution of the samples within each field; the field in (B) was under irrigated conditions, while the fields in (C,D) were in dryland conditions.
Figure 2. Conceptual framework for integrating and analyzing satellite and ground field data to obtain a biomass prediction model applied at on-farm scale using a Bayesian approach.
Figure 3. Nested cross-validation diagram: example of splitting the data, holding one field and sample date out, to create training and testing sets for hyperparameter tuning.
Figure 4. Importance of different spectral bands for alfalfa forage biomass. (A) Band comparison across remote sensing sources; the orange highlighting of the first and second boxes identifies the two selected bands. (B) Variable importance for each source independently (gray = Biomass Proxy, blue = Planet Fusion, light blue = PlanetScope, violet = Sentinel-2). Importance expressed in percentages.
Figure 5. Accuracy evaluation of the alfalfa regression model: observed versus predicted biomass. Vertical lines represent the 95% probability interval; colors (red = field 1, green = field 2, blue = field 3) denote the fields; the dashed line is the 1:1 relation.
Figure 6. Benchmark between field biomass and model results for the first sampling date. The dashed line is the 1:1 observed/predicted relationship; point colors denote the fields as in Figure 5. Units are expressed in kg m−2.
Figure 7. (A) Predicted maps of alfalfa biomass, from yellow (low) to green (high). (B) Uncertainty maps showing the difference between the minimum and maximum prediction, from brown (low) and beige (intermediate) to black (high).
28 pages, 12392 KiB  
Article
Spatial Estimation of Soil Organic Carbon Content Utilizing PlanetScope, Sentinel-2, and Sentinel-1 Data
by Ziyu Wang, Wei Wu and Hongbin Liu
Remote Sens. 2024, 16(17), 3268; https://doi.org/10.3390/rs16173268 - 3 Sep 2024
Viewed by 911
Abstract
The accurate prediction of soil organic carbon (SOC) is important for agriculture and land management. Methods using remote sensing data are helpful for estimating SOC in bare soils. To overcome the challenge of predicting SOC under vegetation cover, this study extracted spectral, radar, and topographic variables from multi-temporal optical satellite images (high-resolution PlanetScope and medium-resolution Sentinel-2), synthetic aperture radar satellite images (Sentinel-1), and a digital elevation model, respectively, to estimate SOC content in arable soils in the Wuling Mountain region of Southwest China. These variables were modeled at four different spatial resolutions (3 m, 20 m, 30 m, and 80 m) using the eXtreme Gradient Boosting algorithm. The results showed that modeling resolution, the combination of multi-source remote sensing data, and temporal phases all influenced SOC prediction performance. The models generally yielded better results at a medium (20 m) modeling resolution than at fine (3 m) and coarse (80 m) resolutions. The combination of PlanetScope, Sentinel-2, and topography factors gave satisfactory predictions for dry land (R2 = 0.673, MAE = 0.107%, RMSE = 0.135%). The addition of Sentinel-1 indicators gave the best predictions for paddy field (R2 = 0.699, MAE = 0.114%, RMSE = 0.148%). The values of R2 of the optimal models for paddy field and dry land improved by 36.0% and 33.4%, respectively, compared to that for the entire study area. The optical images in winter played a dominant role in the prediction of SOC for both paddy field and dry land. This study offers valuable insights into effectively modeling soil properties under vegetation cover at various scales using multi-source and multi-temporal remote sensing data.
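A minimal sketch of the SOC regression step: an XGBoost model on a stacked predictor matrix, scored with the same R2/MAE/RMSE metrics. The feature set, hyperparameters, and data below are synthetic placeholders, not the study's configuration.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(9)
n = 800
X = rng.normal(size=(n, 12))          # e.g., PS/S2 indices, S1 VV/VH, DEM derivatives
soc = 1.2 + 0.3 * X[:, 0] - 0.2 * X[:, 5] + rng.normal(0, 0.15, n)  # SOC content (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, soc, test_size=0.25, random_state=9)
model = XGBRegressor(n_estimators=400, learning_rate=0.05, max_depth=4, random_state=9)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.3f}  MAE = {mean_absolute_error(y_te, pred):.3f}%  "
      f"RMSE = {rmse:.3f}%")
```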
Figures:
Figure 1. (a) Digital elevation model and distribution of soil sample points in the study area; (b) RGB image acquired by the PlanetScope optical satellite sensor (27 July 2017); (c) distribution of paddy field and dry land in the study area.
Figure 2. Environmental data preprocessing processes for modeling.
Figure 3. Multi-temporal NDVI images at 20 m modeling resolution from PlanetScope (acquired 26 May, 27 July, 31 October, and 22 December 2017) and Sentinel-2 (acquired 4 May, 10 July, 31 October, and 22 December 2017).
Figure 4. Technical workflow.
Figure 5. Performance of XGBoost in predicting SOC for different combinations of environmental variables at different modeling resolutions. Model A: PlanetScope + DEM; Model B: PlanetScope + Sentinel-1 + DEM; Model C: Sentinel-2 + DEM; Model D: Sentinel-2 + Sentinel-1 + DEM; Model E: PlanetScope + Sentinel-2 + DEM; Model F: PlanetScope + Sentinel-2 + Sentinel-1 + DEM.
Figure 6. Relative importance of the thirty most important environmental variables for SOC prediction in Models E and F at 20 m resolution, based on XGBoost, for paddy field and dry land (PS, S2, and S1 denote PlanetScope, Sentinel-2, and Sentinel-1 predictors).
Figure 7. SOC content prediction map for cultivated land in the study area based on the optimal model at 20 m spatial resolution.
Full article ">
26 pages, 6325 KiB  
Article
Acquisition of Bathymetry for Inland Shallow and Ultra-Shallow Water Bodies Using PlanetScope Satellite Imagery
by Aleksander Kulbacki, Jacek Lubczonek and Grzegorz Zaniewicz
Remote Sens. 2024, 16(17), 3165; https://doi.org/10.3390/rs16173165 - 27 Aug 2024
Viewed by 1020
Abstract
This study addresses the problem of mapping the bottom of shallow and ultra-shallow inland water bodies using high-resolution satellite imagery. These environments, with their diverse distribution of optically relevant components, pose a challenge to traditional mapping methods. The study covered several research issues, each focusing on a specific aspect of satellite-derived bathymetry (SDB): the selection of spectral bands and regression models, the creation of regression models, the influence of the number and spatial distribution of reference soundings, and the quality of the resulting bathymetric surface, with a focus on microtopography. The study utilized basic empirical techniques, incorporating high-precision reference data acquired via an unmanned surface vessel (USV) integrated with a single-beam echosounder (SBES), together with Global Navigation Satellite System (GNSS) receiver measurements. The investigation allowed the methodology for bathymetry acquisition in such areas to be optimized by identifying the impact of individual processing components. Initial results indicate the usefulness of the proposed approach, confirmed by RMS errors of the elaborated bathymetric surfaces of only a few centimeters in some study cases. The work also highlights the difficulties inherent in this type of study and can inform further research into the application of remote sensing techniques for bathymetry, especially for acquisition in optically complex waters.
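The snippet below sketches one empirical SDB building block: a linear regression between a single band's reflectance and reference SBES depths, restricted by a threshold depth. The 2.5 m threshold, the band's depth response, and the data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
depth = rng.uniform(0.1, 4.0, 1500)                        # reference SBES depths (m)
green = 0.12 - 0.02 * depth + rng.normal(0, 0.004, 1500)   # band darkens with depth

mask = depth <= 2.5                                        # apply a threshold depth (TD)
X = green[mask].reshape(-1, 1)
model = LinearRegression().fit(X, depth[mask])

pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - depth[mask]) ** 2)))
print(f"R2 = {model.score(X, depth[mask]):.3f}, RMSE = {rmse:.3f} m")
```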
(This article belongs to the Section Environmental Remote Sensing)
Figures:
Figure 1. Maps of the study area: hydrographic-profile areas near Lubczyna (a) and Czarna Laka (b); location of Dabie Lake (c) and the studied bays at smaller scale (d).
Figure 2. The unmanned survey vehicle (a) and GNSS RTK receiver (b) used for data acquisition during the survey campaign.
Figure 3. General location of the GNSS profile measurements: crosses mark single profiles (a); an example profile surveyed using the GNSS RTK technique, where each dot symbolizes a single measurement point (b).
Figure 4. Overview of the spectral ranges of the PlanetScope data and RGB-composition imagery used in the study: (a) the whole of Dabie Lake; (b) close-up of the bay near Czarna Laka; (c) close-up of the bay near Lubczyna.
Figure 5. Digital bathymetric models: bay near Czarna Laka (a); bay near Lubczyna (b).
Figure 6. Comparison of the GEBCO approach to building the regression model from observations and threshold depth with a classical approach: all observations (a); averaged raster values for 0.1 m depth intervals with the threshold-depth area marked by the red box (b); data after removing observations at the threshold depth (c).
Figure 7. Flow chart of the research methodology.
Figure 8. Regression models based on the GEBCO methodology for selected bands: (a) bay near Lubczyna, linear model with a 2.3 m threshold depth, green band; (b) bay near Czarna Laka, linear model with a 2.7 m threshold depth, yellow band.
Figure 9. Regression models reaching the highest R2: (a) bay near Lubczyna, polynomial model (threshold depth 2.3 m); (b) bay near Lubczyna, linear combined model (threshold depth 2.3 m); (c) bay near Czarna Laka, power model (threshold depth 2.7 m).
Figure 10. Regression models used for the options defined in stages 1 and 2: (a–g) Experiments 3.1, 3.2, 3.3.1, 3.3.2, 3.4, 3.5.1, and 3.5.2.
Figure 11. Summary of RMS errors for each case: (a) bay near Czarna Laka; (b) bay near Lubczyna; (c) GNSS profiles, across the analyzed experiments.
Figure 12. Difference surfaces for the bay near Czarna Laka between the reference bathymetric models and the SDB models: (a–g) Experiments 3.1–3.5.2, as in Figure 10.
Figure 13. Difference surfaces for the bay near Lubczyna: (a–g) as in Figure 12.
Figure 14. Variety of materials forming the bottom surfaces: reference bathymetric model of the bay near Lubczyna overlaid on a UAV RGB mosaic image.
Figure 15. Variation in RMSE with depth interval: differential models for the bay near Czarna Laka (a, Experiment 3.1) and the bay near Lubczyna (b, Experiment 3.2) with reference-model isobaths overlaid; RMS error versus depth interval for Czarna Laka (c) and Lubczyna (d).
Figure 16. DBMs in 3D view for the bay near Czarna Laka elaborated from various data: SBES, 1.3 m pixel (a); UAV, 0.05 m (b); PlanetScope, 3 m (c); Sentinel-2, 10 m (d).
26 pages, 67157 KiB  
Article
Impact of Utilizing High-Resolution PlanetScope Imagery on the Accuracy of LULC Mapping and Hydrological Modeling in an Arid Region
by Chithrika Alawathugoda, Gilbert Hinge, Mohamed Elkollaly and Mohamed A. Hamouda
Water 2024, 16(16), 2356; https://doi.org/10.3390/w16162356 - 22 Aug 2024
Viewed by 1025
Abstract
Accurate land-use and land-cover (LULC) mapping is crucial for effective watershed management and hydrological modeling in arid regions. This study examines the use of high-resolution PlanetScope imagery for LULC mapping, change detection, and hydrological modeling in the Wadi Ham watershed, Fujairah, UAE. The authors compared LULC maps derived from Sentinel-2 and PlanetScope imagery using maximum likelihood (ML) and random forest (RF) classifiers. Results indicated that the RF classifier applied to PlanetScope 8-band imagery achieved the highest overall accuracy of 97.27%. Change detection analysis from 2017 to 2022 revealed significant transformations, including a decrease in vegetation from 3.371 km2 to 1.557 km2 and an increase in built-up areas from 3.634 km2 to 6.227 km2. Hydrological modeling using the WMS-GSSHA model demonstrated the impact of LULC map accuracy on simulated runoff responses, with the most accurate LULC dataset showing a peak discharge of 1160 CMS at 930 min. In contrast, less accurate maps showed variations in peak discharge timings and magnitudes. The 2022 simulations, reflecting urbanization, exhibited increased runoff and earlier peak flow compared to 2017. These findings emphasize the importance of high-resolution, accurate LULC data for reliable hydrological modeling and effective watershed management. The study supports UAE's 2030 vision for resilient communities and aligns with UN Sustainability Goals 11 (Sustainable Cities and Communities) and 13 (Climate Action), highlighting its broader relevance and impact.
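A minimal sketch of the change-detection accounting behind such figures: class areas from pixel counts of two classified rasters, plus a transition cross-tabulation. The 3 m pixel size matches PlanetScope, but the rasters and class list below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
classes = ["water", "vegetation", "built-up", "bare"]
lulc_2017 = rng.integers(0, 4, size=(2000, 2000))   # classified rasters (class codes)
lulc_2022 = rng.integers(0, 4, size=(2000, 2000))

pixel_area_km2 = (3 * 3) / 1e6                      # 3 m PlanetScope pixels
for year, raster in (("2017", lulc_2017), ("2022", lulc_2022)):
    areas = np.bincount(raster.ravel(), minlength=4) * pixel_area_km2
    print(year, {c: round(a, 3) for c, a in zip(classes, areas)})

# Cross-tabulate 2017 -> 2022 transitions, e.g., vegetation converted to built-up
transitions = np.bincount(lulc_2017.ravel() * 4 + lulc_2022.ravel(),
                          minlength=16).reshape(4, 4)
print("vegetation -> built-up, km2:", round(transitions[1, 2] * pixel_area_km2, 3))
```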
Figures:
Figure 1. The study area, Wadi Ham watershed in Fujairah, northeastern UAE.
Figure 2. Methodological framework.
Figure 3. Methodological flow diagram for the WMS-GSSHA modeling process.
Figure 4. Precipitation hyetograph for the rainfall event used as input to the GSSHA model.
Figure 5. Digital elevation model for the Wadi Ham watershed.
Figure 6. Soil cover for the Wadi Ham watershed.
Figure 7. LULC data for the accuracy simulation: (A) PlanetScope imagery (30 June 2022), RF classification (97% accuracy); (B) Sentinel-2 imagery (22 June 2022); (C) ESA Data User Element, ML classification (85% accuracy); (D) ESRI downloaded LULC map.
Figure 8. LULC maps for change detection: (A) 2022, PlanetScope 8-band imagery (30 June 2022), RF classification (97% accuracy); (B) 2017, PlanetScope 4-band imagery (20 December 2017), RF classification (96% accuracy).
Figure 9. LULC map of the Wadi Ham watershed generated using maximum likelihood classification on Sentinel-2 imagery from 22 June 2022; overall accuracy 85.73%.
Figure 10. LULC map generated using random forest classification on Sentinel-2 imagery from 5 December 2017; overall accuracy 93.33%.
Figure 11. LULC map generated using random forest classification on PlanetScope 4-band imagery from 20 December 2017; overall accuracy 96.49%.
Figure 12. LULC map generated using random forest classification on PlanetScope 8-band imagery from 30 December 2022; overall accuracy 97.27%.
Figure 13. Hydrographs for different LULC scenarios in the Wadi Ham watershed: PS-RF classification (97% accuracy), LULC from the ESA Data User Element, Sentinel-2 ML classification (85% accuracy), and the ESRI downloaded LULC map.
Figure 14. Change detection of the Wadi Ham watershed (2017–2022).
Figure 15. Transformation in the Wadi Ham watershed (2017–2022).
Figure 16. Hydrographs showing the flow over time for 2017 and 2022 in the Wadi Ham watershed.
Figure 17. Flood grid output for the Wadi Ham watershed (2017).
Figure 18. Flood grid output for the Wadi Ham watershed (2022).
19 pages, 4892 KiB  
Article
Comparative Analysis of Machine Learning Techniques and Data Sources for Dead Tree Detection: What Is the Best Way to Go?
by Júlia Matejčíková, Dana Vébrová and Peter Surový
Remote Sens. 2024, 16(16), 3086; https://doi.org/10.3390/rs16163086 - 21 Aug 2024
Viewed by 911
Abstract
In Central Europe, the extent of bark beetle infestation in spruce stands due to prolonged high temperatures and drought has created large areas of dead trees, which are difficult to monitor by ground surveys. Remote sensing is the only viable option for assessing the extent of the dead tree areas. Several options exist for mapping individual dead trees, differing in both data source and processing technique. Satellite images, aerial images, and images from UAVs can be used as sources. The processing techniques include machine and deep learning, although models are often presented without proper realistic validation. This paper compares methods of monitoring dead tree areas using three data sources: multispectral aerial imagery, multispectral PlanetScope satellite imagery, and multispectral Sentinel-2 imagery, as well as two processing methods. The classification methods used are Random Forest (RF) and neural network (NN) in two modalities: pixel- and object-based. In total, 12 combinations are presented. The results were evaluated using two types of reference data: the accuracy of the model on validation data and the accuracy on vector-format semi-automatic classification polygons created by a human evaluator, referred to as real Ground Truth. The aerial imagery yielded the highest model accuracy, with the CNN model achieving up to 98% with object classification. A higher classification accuracy for satellite imagery was achieved by combining pixel classification and the RF model (87% accuracy for Sentinel-2). For PlanetScope imagery, the best result was 89%, using a combination of CNN and object-based classification. A comparison with the Ground Truth showed a decrease in the classification accuracy of the aerial imagery to 89% and of the satellite imagery to around 70%. In conclusion, aerial imagery is the most effective tool for monitoring bark beetle calamity in terms of precision and accuracy, but satellite imagery has the advantages of fast availability, shorter data processing time, and larger coverage areas.
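The snippet below sketches the validation arithmetic behind such comparisons: a confusion matrix against rasterized Ground Truth, with overall, producer's, and user's accuracy. The ~89% agreement level echoes the aerial-imagery result, but the labels are synthetic.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
gt = rng.integers(0, 2, size=50_000)                     # 0 = green forest, 1 = dead trees
pred = np.where(rng.random(gt.size) < 0.89, gt, 1 - gt)  # ~89% agreement with Ground Truth

cm = confusion_matrix(gt, pred)                # rows = reference, cols = prediction
overall = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)       # omission side (per reference class)
users = np.diag(cm) / cm.sum(axis=0)           # commission side (per mapped class)
print(f"OA = {overall:.3f}  PA = {producers.round(3)}  UA = {users.round(3)}")
```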
Figure 1: Development of spruce stands due to drought and high temperatures.
Figure 2: Location of interest. The image shows the location of the Czech Switzerland National Park (northwestern Czech Republic). The yellow polygon on the right shows the boundaries of the Czech Switzerland National Park.
Figure 3: Data sources in false-color view; component A shows the full image used for the survey, and component B shows the resolution detail. (1A,1B): Sentinel-2 satellite imagery at 10 m resolution; (2A,2B): PlanetScope satellite imagery at 3 m resolution; (3A,3B): aerial images at 0.2 m resolution.
Figure 4: Example of training samples in aerial images in false-color composition of the NIR, R, B bands. Image (A) shows the dead tree class and image (B) the green forest class. The green circles represent samples of green forest, while the yellow circles denote dead trees.
Figure 5: Structure of the rule set for classification using the Random Forest model.
Figure 6: Structure of the rule set for classification using the CNN.
Figure 7: Graphical representation of the procedure for deriving the input variables of the error matrix.
Figure 8: Visualization of an aerial image (A) with the Ground Truth layer (B) and object-based classification using RF (C).
Figure 9: Visualization of an aerial image (A) with the Ground Truth layer (B) and pixel-based classification using RF (C).
16 pages, 9926 KiB  
Article
Automatic Methodology for Forest Fire Mapping with SuperDove Imagery
by Dionisio Rodríguez-Esparragón, Paolo Gamba and Javier Marcello
Sensors 2024, 24(16), 5084; https://doi.org/10.3390/s24165084 - 6 Aug 2024
Viewed by 587
Abstract
The global increase in wildfires due to climate change highlights the need for accurate wildfire mapping. This study performs a proof of concept on the usefulness of SuperDove imagery for wildfire mapping. To address this topic, we present an automatic methodology that combines the use of various vegetation indices with clustering algorithms (bisecting k-means and k-means) to analyze images before and after fires, with the aim of improving the precision of the burned area and severity assessments. The results demonstrate the potential of using this PlanetScope sensor, showing that the methodology effectively delineates burned areas and classifies them by severity level, in comparison with data from the Copernicus Emergency Management Service (CEMS). Thus, the potential of the SuperDove satellite sensor constellation for fire monitoring is highlighted, despite its limitations regarding radiometric distortion and the absence of Short-Wave Infrared (SWIR) bands, suggesting that the methodology could contribute to better fire management strategies. Full article
(This article belongs to the Section Intelligent Sensors)
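The core idea of the abstract above, differencing a vegetation index between pre- and post-fire images and clustering the result, can be sketched as follows. This is a hedged illustration, not the paper's exact workflow: the choice of NDVI, the synthetic rasters, and plain k-means (rather than the full bisecting k-means pipeline) are assumptions.

```python
# Sketch of burned-area mapping via index differencing + clustering.
# Assumptions: NDVI as the index, synthetic pre/post rasters, plain k-means.
import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red):
    # Small epsilon guards against division by zero.
    return (nir - red) / (nir + red + 1e-9)

rows, cols = 256, 256
pre_nir, pre_red = np.random.rand(rows, cols), np.random.rand(rows, cols)
post_nir, post_red = np.random.rand(rows, cols), np.random.rand(rows, cols)

# A large NDVI drop (pre minus post) indicates likely burned vegetation.
d_ndvi = ndvi(pre_nir, pre_red) - ndvi(post_nir, post_red)

km = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = km.fit_predict(d_ndvi.reshape(-1, 1)).reshape(rows, cols)

# Label the cluster with the larger mean NDVI drop as "burned".
burned_cluster = int(np.argmax(km.cluster_centers_.ravel()))
burned_mask = clusters == burned_cluster
print("burned fraction:", burned_mask.mean())
```

Severity levels could then be approximated by re-clustering only the burned pixels into additional classes, in line with the severity classification the abstract describes.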
Figure 1: Pre-fire (first column) and post-fire (second column) images for the four selected events: (a,b) north Attica; (c,d) Portbou; (e,f) Euboea; (g,h) Sierra de los Guájares.
Figure 2: Scheme of the methodology to obtain the maps of burned area and fire severity.
Figure 3: Processing of the DEM.
Figure 4: Average differences between vegetation indices in pre- and post-fire images.
Figure 5: Difference between pre- and post-fire images for different vegetation indices in the north Attica area: (a) fire delimitation mask provided by CEMS, (b) EVI, (c) GEMI, (d) GNDVI1, (e) GNDVI2, (f) MSR, (g) NDVI, (h) SAVI, (i) SR, (j) WDRVI, and (k) YNDVI.
Figure 6: Comparative visualization of the burned areas computed using the described methodology (in black) and those extracted from the CEMS report (in red) for the fires in (a) north Attica, (b) Portbou, (c) Euboea, and (d) Sierra de los Guájares.
Figure 7: Detailed views of the discrepancy in burned-area edges. The red line delineates the fire's extent as per CEMS data, contrasting with the yellow line that maps the area using the specified methodology: (a) pre-fire, (b) post-fire.
Figure 8: Synthesized severity maps computed using the proposed methodology: (a) north Attica, (b) Portbou, (c) Euboea, and (d) Sierra de los Guájares.
17 pages, 11574 KiB  
Article
Assessing Habitat Suitability: The Case of Black Rhino in the Ngorongoro Conservation Area
by Joana Borges, Elias Symeonakis, Thomas P. Higginbottom, Martin Jones, Bradley Cain, Alex Kisingo, Deogratius Maige, Owen Oliver and Alex L. Lobora
Remote Sens. 2024, 16(15), 2855; https://doi.org/10.3390/rs16152855 - 4 Aug 2024
Viewed by 1449
Abstract
Efforts to identify suitable habitat for wildlife conservation are crucial for safeguarding biodiversity, facilitating management, and promoting sustainable coexistence between wildlife and communities. Our study focuses on identifying potential black rhino (Diceros bicornis) habitat within the Ngorongoro Conservation Area (NCA), Tanzania, across wet and dry seasons. To achieve this, we used remote sensing data with and without field data. We employed a comprehensive approach integrating Sentinel-2 and PlanetScope images, vegetation indices, and human activity data. We employed machine learning recursive feature elimination (RFE) and random forest (RF) algorithms to identify the most relevant features that contribute to habitat suitability prediction. Approximately 36% of the NCA is suitable for black rhinos throughout the year; however, there are seasonal shifts in habitat suitability. Anthropogenic factors increase land degradation and limit habitat suitability, but this depends on the season. This study found a higher influence of human-related factors during the wet season, with suitable habitat covering 53.6% of the NCA. In the dry season, browse availability decreases and rhinos are forced to become less selective of the areas where they move to fulfil their nutritional requirements, with anthropogenic pressures becoming less important. Furthermore, our study identified specific areas within the NCA that consistently offer suitable habitat across wet and dry seasons. These areas, situated between Olmoti and the Crater, exhibit minimal disturbance from human activities, presenting favourable conditions for rhinos. Although the Oldupai Gorge only has small suitable patches, it used to sustain a large population of rhinos in the 1960s. Land cover changes seem to have decreased the suitability of the Gorge. This study highlights the importance of combining field data with remotely sensed data. Remote sensing-based assessments rely on the importance of vegetation covers as a proxy for habitat and often overlook crucial field variables such as shelter or breeding locations. Overall, our study sheds light on the imperative of identifying suitable habitat for black rhinos within the NCA and underscores the urgency of intensified conservation efforts. Our findings underscore the need for adaptive conservation strategies to reverse land degradation and safeguard black rhino populations in this dynamic multiple land-use landscape as environmental and anthropogenic pressures evolve. Full article
(This article belongs to the Special Issue Land Degradation Assessment with Earth Observation (Second Edition))
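The abstract above pairs recursive feature elimination (RFE) with a random forest for feature selection; a minimal sketch of that pairing follows. The predictor names, sample size, and presence/absence labels are invented placeholders, not the study's data.

```python
# Hedged sketch of recursive feature elimination wrapped around a random
# forest, the feature-selection pairing named in the abstract. All feature
# names and labels below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
feature_names = ["NDVI", "EVI", "slope", "dist_to_roads", "dist_to_water",
                 "canopy_cover", "landcover_class", "human_footprint"]
X = rng.random((500, len(feature_names)))   # one row per sampled location
y = rng.integers(0, 2, 500)                 # 1 = rhino presence, 0 = absence

selector = RFE(
    estimator=RandomForestClassifier(n_estimators=300, random_state=0),
    n_features_to_select=4,   # keep the 4 most informative predictors
    step=1,                   # drop one feature per elimination round
)
selector.fit(X, y)

kept = [name for name, keep in zip(feature_names, selector.support_) if keep]
print("selected predictors:", kept)
```

A retained subset like this would then feed the final random forest used to map seasonal habitat suitability across the study area.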
Figure 1: (A) True colour composite (RGB) of a Landsat image with the zoomed-in areas of interest (A1,A2); (B) topography of the study area (ESRI World Hillshade); (C) land cover classification of the study area for the year 2020 [44] and zoomed-in areas of interest (C1,C2).
Figure 2: Suitable habitat based on rhino presence/absence data with current human disturbances (A) and with simplified human disturbances (B). (A1,A2,B1,B2) are example subsets. The Ngorongoro Crater has been removed for safety reasons.
Figure 3: Suitable habitat based only on remote sensing data with current human disturbances (A) and with simplified human disturbances (B). (A1,A2,B1,B2) are example subsets. The Ngorongoro Crater has been removed for safety reasons.
27 pages, 6288 KiB  
Article
Detection of Maize Crop Phenology Using Planet Fusion
by Caglar Senaras, Maddie Grady, Akhil Singh Rana, Luciana Nieto, Ignacio Ciampitti, Piers Holden, Timothy Davis and Annett Wania
Remote Sens. 2024, 16(15), 2730; https://doi.org/10.3390/rs16152730 - 25 Jul 2024
Viewed by 1178
Abstract
Accurate identification of crop phenology timing is crucial for agriculture. While remote sensing tracks vegetation changes, linking these to ground-measured crop growth stages remains challenging. Existing methods offer broad overviews but fail to capture detailed phenological changes, which can be partially related to the temporal resolution of the remote sensing datasets used. The availability of higher-frequency observations, obtained by combining sensors and gap-filling, offers the possibility to capture more subtle changes in crop development, some of which can be relevant for management decisions. One such dataset is Planet Fusion, daily analysis-ready data obtained by integrating PlanetScope imagery with public satellite sensor sources such as Sentinel-2 and Landsat. This study introduces a novel method utilizing Dynamic Time Warping applied to Planet Fusion imagery for maize phenology detection, to evaluate its effectiveness across 70 micro-stages. Unlike singular template approaches, this method preserves critical data patterns, enhancing prediction accuracy and mitigating labeling issues. During the experiments, eight commonly employed spectral indices were investigated as inputs. The method achieves high prediction accuracy, with 90% of predictions falling within a 10-day error margin, evaluated based on over 3200 observations from 208 fields. To understand the potential advantage of Planet Fusion, a comparative analysis was performed using Harmonized Landsat Sentinel-2 data. Planet Fusion outperforms Harmonized Landsat Sentinel-2, with significant improvements observed in key phenological stages such as V4, R1, and late R5. Finally, this study showcases the method’s transferability across continents and years, although additional field data are required for further validation. Full article
(This article belongs to the Special Issue Remote Sensing for Precision Farming and Crop Phenology)
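To make the Dynamic Time Warping step above concrete, the sketch below aligns a synthetic field NDVI series to a reference series and reads a stage date off the warping path. The curves, the stage index, and the single-template setup are illustrative assumptions; the paper's method notably preserves multiple reference patterns rather than a single template.

```python
# Minimal DTW sketch: align a field NDVI series to a reference series whose
# dates carry known growth stages, then map a stage date through the warping
# path. Series and the stage index are synthetic placeholders.
import numpy as np

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) DTW; returns the optimal warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the corner to recover aligned index pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

t = np.linspace(0, 1, 120)
template = np.exp(-((t - 0.55) ** 2) / 0.03)   # reference NDVI curve
field = np.exp(-((t - 0.45) ** 2) / 0.04)      # earlier-developing field

path = dtw_path(field, template)
template_stage_idx = 66                        # hypothetical stage date on template
matches = [fi for fi, ti in path if ti == template_stage_idx]
print("predicted field day index for the stage:", int(np.mean(matches)))
```

Because the warping path visits every index of both series, each annotated stage date on the reference maps to at least one field date, which is how a per-field phenology calendar can be read off the alignment.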
Figure 1: Location of fields contained in the Kansas State dataset and location of Planet Fusion tiles. (A) Location of sites in Kansas in the US. (B) Area of Planet Fusion and Harmonized Landsat Sentinel tiles for both sites in the Kansas dataset. (C) Western site field boundaries overlaid on Planet Fusion tiles and imagery. (D) Eastern site field boundaries overlaid on Planet Fusion tiles and imagery.
Figure 2: Maize phenology stages used during ground truth data collection in Kansas State. The scale follows that of Ciampitti et al. [38].
Figure 3: Individual field-specific NDVI time series for all fields in the Kansas dataset before (left) and after cleaning (right).
Figure 4: Number of observations in the Kansas dataset per micro- and macro-stage in chronological order, with indication of pre-vegetative, vegetative, and reproductive stages. In our study, we only consider vegetative stages starting from 1-leaf and reproductive stages.
Figure 5: Histogram of BBCH phases of the observations in the PIAF dataset.
Figure 6: Flow chart of data preprocessing and macro-stage identification with DTW.
Figure 7: Planet Fusion NDVI time series for individual fields and the average of all fields.
Figure 8: Boxplot of median absolute errors (MedAEs) and mean absolute errors (MAEs) in days across 70 micro-stages for PF-NDVI, comparing three different methods.
Figure 9: Variability of the RMSE in days for the selected micro-stages across different indices.
Figure 10: NDVI time series of PF and HLS with predicted phenology dates for randomly selected micro-stages on two randomly selected fields.
Figure 11: Comparison of the best PF and HLS DTW solutions for two macro-stages (observation DOY versus predicted DOY). Orange and pink represent the V4 and R1 macro-stages, respectively. Solid, dotted, and dashed lines represent zero error, a 5-day buffer, and a 10-day buffer. (a) Stages V4 and R1; data: PF; index: MCARI. (b) Stages V4 and R1; data: HLS; index: NDVI.
Figure 12: Comparison of the best PF and HLS DTW solutions for three macro-stages (observation DOY versus predicted DOY). Each row represents a different macro-stage; the left images are PF-based and the right ones HLS-based predictions. Solid, dotted, and dashed lines represent zero error, a 5-day buffer, and a 10-day buffer. (a) Stage R4; data: PF; index: MCARI. (b) Stage R4; data: HLS; index: NDVI. (c) Stage R5; data: PF; index: MCARI. (d) Stage R5; data: HLS; index: NDVI. (e) Stage V8; data: PF; index: MCARI. (f) Stage V8; data: HLS; index: MCARI.
Figure 13: Outputs of the proposed method for four fields and macro-stages in Germany (a–d). The dashed line indicates the observation in the PIAF dataset with the BBCH code, while 'x' marks represent the detected related micro-stages. Each vertical line represents one week.