Search Results (45)

Search Parameters:
Keywords = Sen2cor

26 pages, 6289 KiB  
Article
Comparative Evaluation of Semi-Empirical Approaches to Retrieve Satellite-Derived Chlorophyll-a Concentrations from Nearshore and Offshore Waters of a Large Lake (Lake Ontario)
by Ali Reza Shahvaran, Homa Kheyrollah Pour and Philippe Van Cappellen
Remote Sens. 2024, 16(9), 1595; https://doi.org/10.3390/rs16091595 - 30 Apr 2024
Cited by 1 | Viewed by 996
Abstract
Chlorophyll-a concentration (Chl-a) is commonly used as a proxy for phytoplankton abundance in surface waters of large lakes. Mapping spatial and temporal Chl-a distributions derived from multispectral satellite data is therefore increasingly popular for monitoring trends in the trophic state of these important ecosystems. We evaluated products of eleven atmospheric correction processors (LEDAPS, LaSRC, Sen2Cor, ACOLITE, ATCOR, C2RCC, DOS 1, FLAASH, iCOR, Polymer, and QUAC) and 27 reflectance indexes (including band-ratio, three-band, and four-band algorithms) recommended for Chl-a concentration retrieval. These were applied to the western basin of Lake Ontario by pairing 236 satellite scenes from Landsat 5, 7, 8, and Sentinel-2 acquired between 2000 and 2022 with 600 near-synchronous and co-located in situ-measured Chl-a concentrations. The in situ data were categorized based on location, seasonality, and Carlson’s Trophic State Index (TSI). Linear regression Chl-a models were calibrated for each combination of processing scheme and data category. The models were compared using a range of performance metrics. Categorization of data based on trophic state yielded improved outcomes. Furthermore, Sentinel-2 and Landsat 8 data provided the best results, while Landsat 5 and 7 underperformed. A total of 28 Chl-a models were developed across the different data categorization schemes, with RMSEs ranging from 1.1 to 14.1 μg/L. ACOLITE-corrected images paired with the blue-to-green band ratio emerged as the generally best performing scheme. However, model performance was dependent on the data filtration practices and varied between satellites. Full article
(This article belongs to the Special Issue Remote Sensing Band Ratios for the Assessment of Water Quality)
Figures:
Graphical abstract
Figure 1. Map of the study area showing in situ measurement locations of matchup data. Diamond and square markers represent Hamilton Harbour (HH) and Western Lake Ontario (WLO) measurements, respectively. Each marker is colour-coded according to the respective trophic state.
Figure 2. Boxplots of in situ data, categorized by location, seasonality, and Carlson’s Trophic State Index (TSI). The green background represents oligotrophic/mesotrophic (light green) and eutrophic/hypereutrophic (dark green) classes based on Carlson’s TSI. The plus markers indicate outliers.
Figure 3. Flowchart of this study’s methodology. AC = Atmospheric Correction, Chl-a = Chlorophyll-a, ECCC = Environment and Climate Change Canada, MECP = Ministry of the Environment, Conservation and Parks (Province of Ontario), RF = Random Forest, RPAS = Remotely Piloted Aircraft System, RS = Remote Sensing, TSS = Total Suspended Solids.
Figure 4. Random Forest feature importance analysis with colour-coded atmospheric correction processors. The x-axis denotes the retrieval index (feature), and the y-axis shows the importance score (unitless). For each scenario, the most significant scheme is marked with an asterisk.
Figure 5. Evaluation of schemes across different scenarios based on correlation analysis. Marker colours denote different atmospheric corrections, shapes represent satellites, and sizes signify the number of matchups.
Figure 6. Plots comparing modeled vs. measured Chl-a concentrations across satellites (columns) and data categories (rows), demonstrating the regression models’ performance.
19 pages, 10172 KiB  
Article
Reconstructing Snow Cover under Clouds and Cloud Shadows by Combining Sentinel-2 and Landsat 8 Images in a Mountainous Region
by Yanli Zhang, Changqing Ye, Ruirui Yang and Kegong Li
Remote Sens. 2024, 16(1), 188; https://doi.org/10.3390/rs16010188 - 2 Jan 2024
Viewed by 1395
Abstract
Snow cover is a sensitive indicator of global climate change, and optical images are an important means for monitoring its spatiotemporal changes. Due to the high reflectivity, rapid change, and intense spatial heterogeneity of mountainous snow cover, Sentinel-2 (S2) and Landsat 8 (L8) satellite imagery with both high spatial resolution and spectral resolution have become major data sources. However, optical sensors are more susceptible to cloud cover, and the two satellite images have significant spectral differences, making it challenging to obtain snow cover beneath clouds and cloud shadows (CCSs). Based on our previously published approach for snow reconstruction on S2 images using the Google Earth Engine (GEE), this study introduces two main innovations to reconstruct snow cover: (1) combining S2 and L8 images and choosing different CCS detection methods, and (2) improving the cloud shadow detection algorithm by considering land cover types, thus further improving the mountainous-snow-monitoring ability. The Babao River Basin of the Qilian Mountains in China is chosen as the study area; 399 scenes of S2 and 35 scenes of L8 are selected to analyze the spatiotemporal variations of snow cover from September 2019 to August 2022 in GEE. The results indicate that the snow reconstruction accuracies of both images are relatively high, and the overall accuracies for S2 and L8 are 80.74% and 88.81%, respectively. According to the time-series analysis of three hydrological years, it is found that there is a marked difference in the spatial distribution of snow cover in different hydrological years within the basin, with fluctuations observed overall. Full article
Figures:
Figure 1. Location of the study area.
Figure 2. Flowchart of this algorithm.
Figure 3. Comparative analysis of cloud-free snow cover detection using S2, L8, and GF-2 images. (a–c) Original images of Sentinel-2 on 13 January 2020, GF-2, and Landsat 8 on 14 January 2020; (d–f) Clear-sky snow cover of S2 on 13 January 2020, GF-2, and Landsat 8 on 14 January 2020.
Figure 4. Comparison of CCS detection results between the improved Fmask4.0 and the original algorithm on S2 and L8 images: (a–h) S2 original image, clouds, and cloud shadows acquired by the improved and original Fmask4.0 on snow-covered and snow-free surfaces; (i–p) L8 original image, clouds, and cloud shadows acquired by the improved and original Fmask4.0 on snow-covered and snow-free surfaces.
Figure 5. Accuracy evaluation of S2 and L8 snow cover reconstruction under CCSs: (a–c) The original image of GF-2, cloud-free snow cover, and snow cover under CCSs; (d–f) The original image of S2, cloud-free snow cover, and snow cover under CCSs obtained using the improved SNOWL algorithm; (g–i) The original image of S2, cloud-free snow cover, and snow cover under CCSs; (j–l) The original image of L8, cloud-free snow cover, and snow cover under CCSs obtained using the improved SNOWL algorithm.
Figure 6. Variation characteristics of SCR of three hydrological years in the BRB.
Figure 7. Spatiotemporal variations of snow cover from October 2021 to May 2022.
Figure 8. Relationship curve of SCR with temperature.
Figure 9. Time-series snow cover variation in the BRB by combining Sentinel-2 and Landsat 8 in the 2019–2020 hydrological year.
Figure 10. Inter-annual variation of snow cover area and the relationship with daily average air temperature in three hydrological years.
17091 KiB  
Proceeding Paper
Mapping Seagrass Meadows and Assessing Blue Carbon Stocks Using Sentinel-2 Satellite Imagery: A Case Study in the Canary Islands, Spain
by Jorge Veiras-Yanes, Laura Martín-García, Enrique Casas and Manuel Arbelo
Environ. Sci. Proc. 2024, 29(1), 10; https://doi.org/10.3390/ECRS2023-15856 - 6 Dec 2023
Viewed by 589
Abstract
This research evaluates the capability of Sentinel-2 satellite imagery for mapping Cymodocea nodosa meadows in El Médano (Tenerife, Canary Islands, Spain). A Level-1C image from 27 October 2022 was used. Atmospheric correction was addressed using the Sen2Cor tool, while Lyzenga’s method was employed to account for the water column effect. Three supervised classifications were performed using Random Forest, K-Nearest Neighbors (KNN) and KDTree-KNN algorithms. These classifications were complemented by an unsupervised classification and in situ data. Additionally, the amount of blue carbon sequestered by the C. nodosa in the study area was also estimated. Among the classifiers, the Random Forest algorithm produced the highest F1 scores, ranging from 0.96 to 0.99. The results revealed an average area of 237 ± 5 ha occupied by C. nodosa in the study region, translating to an average sequestration of 111,000 ± 2000 Mg CO2. Notably, the seagrass meadows in this study area have the potential to offset the CO2 emissions produced by the industrial combustion plant sector throughout the Canary Islands. This research represents a significant step forward in the protection and understanding of these invaluable ecosystems. It effectively underlines the potential of Sentinel-2 satellite data to map seagrass meadows and highlights their crucial role in achieving net zero carbon emissions on our planet. Full article
(This article belongs to the Proceedings of ECRS 2023)
Figures:
Figure 1. The Canary Islands archipelago, highlighting the study area (red rectangle), which encompasses La Tejita and El Médano beaches on Tenerife.
Figure 2. Classification results for unsupervised and supervised methods. In situ seagrass data is outlined in blue. Classified seagrass areas are shown in green. All other coverings are labeled as “sand” and colored in yellow: (a) Unsupervised classification, (b) KNN supervised classification, (c) RF supervised classification.
23 pages, 4600 KiB  
Article
An Algorithm Developed for Smallsats Accurately Retrieves Landsat Surface Reflectance Using Scene Statistics
by David P. Groeneveld and Timothy A. Ruggles
Appl. Sci. 2023, 13(23), 12604; https://doi.org/10.3390/app132312604 - 23 Nov 2023
Cited by 1 | Viewed by 812
Abstract
Closed-form Method for Atmospheric Correction (CMAC) is software that overcomes radiative transfer method problems for smallsat surface reflectance retrieval: unknown sensor radiance responses because onboard monitors are omitted to conserve size/weight, and ancillary data availability that delays processing by days. CMAC requires neither and retrieves surface reflectance in near real time, first mapping the atmospheric effect across the image as an index (Atm-I) from scene statistics, then reversing these effects with a closed-form linear model that has precedence in the literature. Five consistent-reflectance area-of-interest targets on thirty-one low-to-moderate Atm-I images were processed by CMAC and LaSRC. CMAC retrievals accurately matched LaSRC with nearly identical error profiles. CMAC and LaSRC output for paired images of low and high Atm-I were then compared for three additional consistent-reflectance area-of-interest targets. Three indices were calculated from the extracted reflectance: NDVI calculated with red (standard) and substitutions with blue and green. A null hypothesis for competent retrieval would show no difference. The pooled error for the three indices (n = 9) was 0–3% for CMAC, 6–20% for LaSRC, and 13–38% for uncorrected top-of-atmosphere results, thus demonstrating both the value of atmospheric correction and, especially, the stability of CMAC for machine analysis and AI application under increasing Atm-I from climate change-driven wildfires. Full article
(This article belongs to the Section Earth Sciences)
Figures:
Figure 1. A TOAR image of Lake Tahoe in smoke from a regional wildfire on a portion of the 6 September 2021 L8 image tile (a). The Atm-I grayscale developed from the 300 × 300 m grid cells of the Atm-I model captures in gross detail the haze pattern visible in the TOAR image (b). Bright ground features in the grayscale partially result from forward scatter of the greater energy reflected by bright targets as described in paper 1 [7]. The Atm-I of this moderate haze ranged from about 970 to 1240.
Figure 2. The CMAC conceptual model graphically expresses the effect upon the reflectance of any pixel at one level of atmospheric effect. Slope and offset define the TOAR deviation line that crosses the x-axis at the axis point. This model is appropriate for all VNIR spectral bands.
Figure 3. A figure from Fraser and Kaufmann, 1985 [14].
Figure 4. The five QIAs selected for analysis, displayed on a Landsat 8 TOAR image. The QIAs are named for their municipalities: Ontario-3 (a); Ontario-2 (b); Ontario-1 (c); Rochester (d); and Fontana (e). Ontario-1 was also used for MPC calibration.
Figure 5. The L8 3 August 2021 image (left) was affected by more extreme haze and smoke that settled into hydrologic drainages, while the 6 August 2022 image (right) was comparatively clear: TOAR (a,b), CMAC corrected (c,d) and LaSRC corrected (e,f). The polygons along the Okanagan River represent sampled AOIs. Each was affected by very low Atm-I (clear conditions) in (b); however, haze is present in other locations of the image.
Figure 6. TOAR views of the three QIAs selected for statistical examination: 3 August 2021 (column A) experienced moderate-to-extreme smoke effects, while 6 August 2022 (column B) was comparatively clear. In order of increasing Atm-I, from south to north (top to bottom), are the Penticton, Kelowna and Vernon AOIs. The smaller numbered circles denote sampled areas for statistical examination, each located over areas of cultivated vegetation.
Figure 7. Relative spectral responses for the four VNIR bands examined here. Sentinel 2 (solid) and Landsat 8 (dashed) visible bands are not equivalent.
Figure 8. The L8 image of Lake Tahoe TOAR and Atm-I grayscale in Figure 1 corrected by CMAC (a) and LaSRC (b). Wisps of uncorrected haze over the lake in (a) result from scaling issues.
Figure 9. Bandwise plots of % error estimates from the 21 percentiles of reflectance distributions of 31 images for 5 QIAs (n = 3255 per band). Plotting atop one another, the pooled error plots show strong convergence for the error profiles of the two methods.
Figure 10. Low-reflectance indices and regression lines for L8/9 calculated for LaSRC and CMAC.
Figure 11. Average reflectance distribution per band (n = 31) for five SoCal QIAs by reflectance DN. These reflectance distributions of CMAC and LaSRC plot atop one another in all cases with only minor discrepancies where the CMAC curves more closely emulate the shape of TOAR.
18 pages, 29718 KiB  
Article
Assessment of Seven Atmospheric Correction Processors for the Sentinel-2 Multi-Spectral Imager over Lakes in Qinghai Province
by Wenxin Li, Yuancheng Huang, Qian Shen, Yue Yao, Wenting Xu, Jiarui Shi, Yuting Zhou, Jinzhi Li, Yuting Zhang and Hangyu Gao
Remote Sens. 2023, 15(22), 5370; https://doi.org/10.3390/rs15225370 - 15 Nov 2023
Cited by 2 | Viewed by 1307
Abstract
The European Space Agency (ESA) developed the Sentinel-2 Multispectral Imager (MSI), which offers a higher spatial resolution and shorter repeat coverage, making it an important source for the remote-sensing monitoring of water bodies. Atmospheric correction is crucial for the monitoring of water quality. To compare the applicability of seven publicly available atmospheric correction processors (ACOLITE, C2RCC, C2XC, iCOR, POLYMER, SeaDAS, and Sen2Cor), we chose complex and diverse lakes in Qinghai Province, China, as the research area. The lakes were divided into three types based on the waveform characteristics of Rrs: turbid water bodies (class I lakes) represented by the Dabusun Lake (DBX), clean water bodies (class II lakes) represented by the Qinghai Lake (QHH), and relatively clean water bodies (class III lakes) represented by the Longyangxia Reservoir (LYX). Compared with the in situ Rrs, it was found that for the DBX, the Sen2Cor processor performed best. The POLYMER processor exhibited a good performance in the QHH. The C2XC processor performed well with the LYX. Using the Sen2Cor, POLYMER, and C2XC processors for classes I, II, and III, respectively, compared with the Sentinel-3 OLCI Level-2 Water Full Resolution (L2-WFR) products, it was found that the estimated Rrs from the POLYMER had the highest consistency. Slight deviations were observed in the estimation results for both the Sen2Cor and C2XC. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
Figures:
Figure 1. Location of the study area in Qinghai Province. The red star indicates the centre position of each lake.
Figure 2. Locations of sampled lakes. Red markers represent locations of in situ data. (a) Qinghai Lake. (b) Dabusun Lake. (c) Longyangxia Reservoir.
Figure 3. In situ reflectance data of typical lake and reservoir water. The solid line is the average of the sampling points for each lake; DBX is represented in red, QHH in blue, and LYX in green. The vertical lines of different colours represent 400–900 nm. (a) Of the 8 wavebands (B1–B7 and B8A) corresponding to the Sentinel-2 MSI images, B1 and B2 partially overlap, with the colour transition indicating a shared area. (b) The 17 wavebands (B2–B18) correspond to the Sentinel-3 OLCI images, and the wavelength ranges of the wavebands above 650 nm overlap.
Figure 4. Map of lake and reservoir classification in Qinghai Province. (a) Spatial distribution of the three types of lakes. (b) Spectral characteristics of the reflectance of class I lake S3 OLCI images. (c) Spectral characteristics of reflectance of class II lakes. (d) Spectral characteristics of reflectance of class III lakes.
Figure 5. Scatter plots of in situ measured reflectance spectra versus S2-MSI reflectance spectra from the different atmospheric correction processors (ACOLITE, C2RCC, C2XC, iCOR, POLYMER, SeaDAS, Sen2Cor) for the three typical lakes and reservoirs: Dabusun Lake, Qinghai Lake, and Longyangxia Reservoir. The solid lines represent the 1:1 line, and the dashed lines represent the linear regression line.
Figure 6. Estimated Rrs results and the in situ Rrs evaluation scores of the seven atmospheric correction processors (ACOLITE, C2RCC, C2XC, iCOR, POLYMER, SeaDAS and Sen2Cor) for three typical lakes: (a) Dabusun Lake, (b) Qinghai Lake, and (c) Longyangxia Reservoir.
Figure 7. Dabusun Lake, Qinghai Lake, and Longyangxia Reservoir S2-MSI images at 443, 490, 560, 665, and 705 nm of the atmospheric correction processor Rrs product.
Figure 8. S2-MSI images of the three types of lakes at 443, 490, 560, 665, and 705 nm of the Rrs product. (a) Class I lakes using the Sen2Cor processor. (b) Class II lakes using the POLYMER processor. (c) Class III lakes using the C2XC processor.
Figure 9. Three typical lake reservoirs: (a) Dabusun Lake, (b) Qinghai Lake, and (c) Longyangxia Reservoir. The relationship between the S3-OLCI image and the S2-MSI image in terms of reflectance; solid lines represent 1:1 lines, and dotted lines represent linear regression lines.
Figure 10. Statistical results of the error estimation of Rrs in different bands for the optimal atmospheric correction processors of the three types of lakes and reservoirs, represented by RMSE and MRE.
Figure 11. Scatter plots of Rrs estimated for the three types of lakes using the optimal atmospheric correction processor and the same-time-period S3 OLCI image Rrs. The solid lines represent the 1:1 lines and the dotted lines represent the linear regression lines. (a) Results estimated by the Sen2Cor processor for class I lakes. (b) Results estimated by the POLYMER processor for class II lakes. (c) Results estimated by the C2XC processor for class III lakes.
26 pages, 5244 KiB  
Article
Closed-Form Method for Atmospheric Correction (CMAC) of Smallsat Data Using Scene Statistics
by David P. Groeneveld, Timothy A. Ruggles and Bo-Cai Gao
Appl. Sci. 2023, 13(10), 6352; https://doi.org/10.3390/app13106352 - 22 May 2023
Cited by 3 | Viewed by 1442
Abstract
High-cadence Earth observation smallsat images offer potential for near real-time global reconnaissance of all sunlit cloud-free locations. However, these data must be corrected to remove light-transmission effects from variable atmospheric aerosol that degrade image interpretability. Although existing methods may work, they require ancillary data that delays image output, impacting their most valuable applications: intelligence, surveillance, and reconnaissance. Closed-form Method for Atmospheric Correction (CMAC) is based on observed atmospheric effects that brighten dark reflectance while darkening bright reflectance. Using only scene statistics in near real-time, CMAC first maps atmospheric effects across each image, then uses the resulting grayscale to reverse the effects to deliver spatially correct surface reflectance for each pixel. CMAC was developed using the European Space Agency’s Sentinel-2 imagery. After a rapid calibration that customizes the method for each imaging optical smallsat, CMAC can be applied to atmospherically correct visible through near-infrared bands. To assess CMAC functionality against user-applied state-of-the-art software, Sen2Cor, extensive tests were made of atmospheric correction performance across dark to bright reflectance under a wide range of atmospheric aerosol on multiple images in seven locations. CMAC corrected images faster, with greater accuracy and precision over a range of atmospheric effects more than twice that of Sen2Cor. Full article
(This article belongs to the Special Issue Small Satellites Missions and Applications)
Figures:
Figure 1. Inexact synchronicity is a source of error for ancillary image application as shown in the following example: (a) 14 August 2021 Sentinel 2 TOAR RGB of southern Minnesota, (b) the cirrus band (B10) of the same scene, and (c) the cirrus band (B09) of Landsat 8 taken about 18 min before. Cirrus affects visible bands as in (a).
Figure 2. An example 100 m resolution (10 × 10 pixel grid cell) Atm-I grayscale for the 8-22-21 S2 tile over Lake Tahoe, CA, USA. At least some ground signal must remain for correction (exceeded in portions of this image).
Figure 3. CMAC conceptual model illustrated as a dashed line expressing the effect upon any pixel, dark to bright, from a single level of atmospheric aerosol. SR is surface reflectance. The TDL crosses the x-axis at the axis point.
Figure 4. Figure 2 reproduced from Fraser and Kaufmann [31].
Figure 5. Data extracted from S2 images from 2021 over an area of interest with consistent reflectance in Reno, Nevada that experienced wide swings of aerosol concentration from regional wildfire smoke. The application of such invariant locations is described further in Section 2.4. DN refers to reflectance scaled by 10,000.
Figure 6. Salon de Provence, France region: a calibration target (arrows) in S2 TOAR regional images 16 June 2021 under light haze (a,d) and 8 March 2021 under moderate haze from wildfire smoke (b,e). A Google Earth image (c) of the target shows the 30 m × 30 m black and white panels.
Figure 7. The Reno QIA outlined in red on this S2 image from 6 March 2021 is located just northeast of the Reno, NV airport. The polygon was drawn to exclude vacant lots that might harbor unmanaged vegetation.
Figure 8. Map showing locations of six QIAs located east of Los Angeles, CA (Source: Maxar, Earthstar Geographics and the GIS User Community). QIA locations are designated as follows: Chino (a); Ontario (b); Highgrove (c); Fontana (d); Redlands (e); and Rochester (f).
Figure 9. Google Earth closeup images of six QIAs: (a) Chino; (b) Ontario; (c) Highgrove; (d) Fontana; (e) Redlands; and (f) Rochester.
Figure 10. Reno QIA reflectance curves plotted for the four VNIR bands of S2 (rows). Colored curves were derived from n = 3 or n = 4 percentile averages for each band and treatment for the low-Atm-I images (clear-appearing, lacking haze). Legend values are Atm-I or average Atm-I. Curves in black are for single images that exceed Sen2Cor AC capability. Though not as accurately, CMAC corrected the extremely high Atm-I curves for the visible bands. CMAC curves are tighter (more precise) than Sen2Cor in all bands.
Figure 11. Seven reflectance curves for the Rochester QIA by treatment (columns) and bands (rows). Each curve represents an average of four images with similar average Atm-I. Dispersion is notable for Sen2Cor in the lower limb of visible bands, where high precision is needed to support applications such as precision agriculture and AI feature extraction. In CMAC, the lower limb of reflectance is comparatively precise. NIR 8A curves are virtually identical between the two methods. Rochester experienced the highest range of Atm-I levels among the SoCal QIAs.
Figure 12. Average CV% distribution for the 22 percentile steps combined for the six QIAs (n = 132) of CMAC and Sen2Cor. Though approached very differently, both methods show similar trends.
Figure 13. Percent error distribution for CMAC and Sen2Cor plotted according to the rank for the 1st through 3696th estimated error values.
Figure 14. CMAC estimated percent error plotted together for all four VNIR bands.
Figure 15. Percent error in Sen2Cor low surface reflectance estimates rapidly increases with Atm-I in all bands except NIR 8A. CMAC error shows a more gradual trend of increased error with increasing Atm-I level.
Figure 16. A clip from the S2 Reno image whose data are shown in Figure 10 statistics for Atm-I = 1743 and as the 22-8-2021 statistics in the Supplementary Materials ("Reno QIA Curves.xlsx"). As in all other image displays in this paper, these examples are screenshots from QGIS display. (a) TOAR representation made from the full tile stretch. A full tile image stretch can be taken to visually represent the degraded TOAR mathematics of the image. The alternative, a clipped image stretch, does not appropriately represent the unbiased mathematics of the image that confronts AI and other machine analysis; such clip stretches may visibly clear some haze (but are typically accompanied by color balance problems). Color balance is a valuable indicator of potential problems that could occur through use of machine analyses. (b) CMAC clearing of the image provides the color balance conferred by the TOAR image. The features within both are darkened through hypothesized diffuse shading from aerosol particles. (c) Sen2Cor correction of the same clip displaying color balance problems and residual haze.
Figure 17. S2 image, 5 March 2021 closeup of the Mexican Gulf Coast north of Veracruz: (a) TOAR with smoke effects from fields burned before planting; (b) CMAC v1.1 corrected.
Figure 18. L8 full tile of the Mexican Gulf Coast from 5 December 2021: (a) TOAR, (b) Atm-I, (c) LaSRC correction and (d) CMAC correction. The images were rotated from their collection angle to fit squarely.
Figure 19. TOAR (a) and CMAC (b) views of the 8-18-2021 Planet Labs Dove satellite image over Fargo, North Dakota (20210818_175123_23_105a_3B_AnalyticMS_clip.tif cohort PS2.SD).
19 pages, 4930 KiB  
Article
Assessment of Atmospheric Correction Processors and Spectral Bands for Satellite-Derived Bathymetry Using Sentinel-2 Data in the Middle Adriatic
by Ljerka Vrdoljak and Jelena Kilić Pamuković
Hydrology 2022, 9(12), 215; https://doi.org/10.3390/hydrology9120215 - 30 Nov 2022
Cited by 6 | Viewed by 2124
Abstract
Satellite-derived bathymetry (SDB) based on multispectral satellite images (MSI) from optical satellite sensors is a recent technique for surveying shallow waters. The Sentinel-2 satellite mission, with its open access policy and the high spatial, radiometric, and temporal resolution of its MSI data, started a new era in the mapping of coastal bathymetry. More than 90 percent of the electromagnetic (EM) signal received by satellites is due to the atmospheric path of the EM signal. While Sentinel-2 MSI Level 1C provides top-of-atmosphere reflectance, Level 2A provides bottom-of-atmosphere reflectance. The European Space Agency applies the Sen2Cor algorithm for atmospheric correction (AC) to model the atmospheric path of the signal and reduce the MSI reflectance from L1C to L2A over land areas. This research evaluated the performance of different image-based AC processors, namely Sen2Cor, Acolite, C2RCC, and iCOR, for SDB modelling. The empirical log band ratio algorithm was applied to a time series of Sentinel-2 MSI in the middle Adriatic. All AC processors outperformed the standard Sentinel-2 L2A MSI product for SDB. Acolite and iCOR demonstrated accurate performance, with a correlation coefficient higher than 90 percent and an RMSE under 2 m for depths up to 20 m. C2RCC produced more robust bathymetry models and was able to retrieve depth information from more scenes than any other correction. Furthermore, a switch model combining different spectral bands improved mapping in shallow waters, demonstrating the potential of SDB technology for the effective mapping of shallow waters. Full article
Figures:
Figure 1. Absorption coefficient of VIS in water [53].
Figure 2. The study area and the MB survey. The red rectangle defines area A for the evaluation of the AC processors, and the green rectangle defines the broader area B for bathymetry estimation.
Figure 3. The performance of AC processors is demonstrated on a smaller marine area (f). The blue band (B2) from the optimal scene collected on 30 September 2017 was processed with different AC algorithms and remote sensing reflectance was calculated for: (a) Acolite DSF, (b) Acolite EXP, (c) C2RCC, (d) iCOR, and (e) Sen2Cor (Sentinel-2 L2A).
Figure 4. Bathymetry of the study area in the Šibenik channel (middle Adriatic) estimated from the optimal scene collected on 30 September 2017 and pre-processed with different AC algorithms: (a) Acolite DSF, (b) Acolite EXP, (c) C2RCC, (d) iCOR, and (e) Sen2Cor (Sentinel-2 L2A).
Figure 5. Dispersion of residuals between the estimated depth (SDB) and check MB soundings for the optimal Sentinel-2 scene (30 September 2017) pre-processed with different AC processors.
Figure 6. Histogram of residuals between the estimated depth (SDB) and check MB soundings for the optimal Sentinel-2 scene (30 September 2017) pre-processed with different AC processors.
Figure 7. The median absolute error of the SDB estimated from the optimal scene (30 September 2017) pre-processed with different AC processors in depth ranges: 0–3 m, 3–5 m, 5–10 m, 10–15 m, 15–20 m.
Figure 8. Extinction depth of VIS bands for the Sentinel-2 MSI in the study area of the Šibenik channel. Reflection (Rw) of the VIS bands B1 (coastal aerosol), B2 (blue), B3 (green), and B4 (red) was compared to the MB soundings.
Figure 9. G SDB and R SDB in a coastal area of the Šibenik channel (middle Adriatic) extracted from the optimal Sentinel-2 MSI acquired on 30 September 2017 using the log band ratio method. The SDB models were estimated by vertically referencing the pseudo depth, ln(blue)/ln(green) and ln(blue)/ln(red), to the mean lower low water (MLLW) using in situ MB soundings.
Figure 10. Switch SDB model of the Šibenik channel covering the depth range 0–20 m, estimated from the Sentinel-2 image acquired on 30 September 2017 using the LBR algorithm with band combinations of the blue, green, and red bands.
Figure 11. Normalized median absolute error for G SDB (B2 and B3), R SDB (B2 and B4), and switch SDB in different depth ranges.
18 pages, 12232 KiB  
Article
Gradient Boosting and Linear Regression for Estimating Coastal Bathymetry Based on Sentinel-2 Images
by Fahim Abdul Gafoor, Maryam R. Al-Shehhi, Chung-Suk Cho and Hosni Ghedira
Remote Sens. 2022, 14(19), 5037; https://doi.org/10.3390/rs14195037 - 9 Oct 2022
Cited by 9 | Viewed by 2721
Abstract
Thousands of vessels travel around the world every day, making the safety, efficiency, and optimization of marine transportation essential. Therefore, the knowledge of bathymetry is crucial for a variety of maritime applications, such as shipping and navigation. Maritime applications have benefited from recent advancements in satellite navigation technology, which can utilize multi-spectral bands for retrieving information on water depth. As part of these efforts, this study combined deep learning techniques with satellite observations in order to improve the estimation of satellite-based bathymetry. The objective of this study is to develop a new method for estimating coastal bathymetry using Sentinel-2 images. Sentinel-2 was used here due to its high spatial resolution, which is desirable for bathymetry maps, as well as its visible bands, which are useful for estimating bathymetry. The conventional linear model approach using the satellite-derived bathymetry (SDB) ratio (green to blue) was applied, and a new four-band ratio using the four visible bands of Sentinel-2 (FVBR) was proposed. In addition, three atmospheric correction models, Sen2Cor, ACOLITE, and C2RCC, were evaluated, and Sen2Cor was found to be the most effective model. Gradient boosting was also applied in this study to both the conventional band ratio and the proposed FVBR. Compared to the green to blue ratio, the proposed FVBR performed better, with R² exceeding 0.8 when applied to 12 snapshots between January and December. The gradient boosting method was also found to provide better estimates of bathymetry than linear regression. According to the findings of this study, the chlorophyll-a (Chl-a) concentration, sediments, and atmospheric dust do not affect the estimated bathymetry. However, tidal oscillations were found to be a significant factor affecting satellite estimates of bathymetry. Full article
(This article belongs to the Section Environmental Remote Sensing)
Figures:
Graphical abstract
Figure 1. Bathymetry field samples and the geo-boundaries investigated in this study around (a) Sir Bani Yas and (b) Abu Dhabi Islands. The red lines show the airborne echo-sounding path lines to measure bathymetry.
Figure 2. RGB images of Abu Dhabi regions (a) before sunglint correction and (b) after sunglint correction.
Figure 3. Brief flowchart of the methodology implemented in this work. The linear regression method is presented in red.
Figure 4. Histograms of in situ bathymetry data collected by the multibeam echo-sounder around (a) the Sir Bani Yas and (b) Abu Dhabi Islands.
Figure 5. Statistical values of (a) R² and (b) RMSE for the three different ratios using both linear and gradient boosting.
Figure 6. Comparison between the in situ water depth (bathymetry) and estimated bathymetry values based on gradient boosting applied using the FVBR approach for Sir Bani Yas Island. The red line is the 1:1 agreement line and the blue line is the model best fit.
Figure 7. Comparison between the in situ water depth (bathymetry) and estimated bathymetry values based on gradient boosting applied using the FVBR approach for Abu Dhabi Island. The red line is the 1:1 agreement line and the blue line is the model best fit.
Figure 8. Monthly time series of maximum R² obtained for the three different ratios using (a) linear and (b) gradient boosting for estimating the bathymetry in both regions, Abu Dhabi and Sir Bani Yas Islands. The bar charts show the corresponding monthly AOT.
Figure 9. Snapshots of the surface Chl-a over Sir Bani Yas Island, with a scale ranging from 0 to 2.5 mg m⁻³. The Chl-a is derived based on the OC3 algorithm using Sentinel-2 visible bands.
Figure 10. Snapshots of the surface Kd_480 over Sir Bani Yas Island, with a scale ranging from 0 to 0.6 m⁻¹. The Kd_480 is derived based on the algorithm of Lee et al., 2005 [25], using Sentinel-2 visible bands.
Figure 11. Maps of satellite-derived bathymetry of Sir Bani Yas Island based on the FVBR gradient boosting approach.
Figure 12. Snapshots of the surface Chl-a over Abu Dhabi Island, with a scale ranging from 0 to 3.5 mg m⁻³. The Chl-a is derived based on the OC3 algorithm using Sentinel-2 visible bands.
Figure 13. Snapshots of the surface Kd_480 over Abu Dhabi Island, with a scale ranging from 0 to 1.4 m⁻¹. The Kd_480 is derived based on the algorithm of Lee et al., 2005 [25], using Sentinel-2 visible bands.
Figure 14. Maps of satellite-derived bathymetry of Abu Dhabi Island based on the FVBR gradient boosting approach.
Figure 15. Tidal elevation (height, m) in Abu Dhabi for the twelve months. Some months show two high and two low tides, whereas other months show either one high or low tide and two high/low tides. The gray shading indicates the overpass time window of Sentinel-2 over Abu Dhabi.
17 pages, 6864 KiB  
Article
Water Quality and Water Hyacinth Monitoring with the Sentinel-2A/B Satellites in Lake Tana (Ethiopia)
by Tadesse Mucheye, Sara Haro, Sokratis Papaspyrou and Isabel Caballero
Remote Sens. 2022, 14(19), 4921; https://doi.org/10.3390/rs14194921 - 1 Oct 2022
Cited by 16 | Viewed by 3975
Abstract
Human activities coupled with climate change impacts are becoming the main factors in decreasing inland surface water quantity and quality, leading to the disturbance of the aquatic ecological balance. Under such conditions, the introduction and proliferation of aquatic invasive alien species are more likely to occur. Hence, frequent surface water quality monitoring is required for aquatic ecosystem sustainability. The main objectives of the present study are to analyze the seasonal variation in the invasive plant species water hyacinth (Pontederia crassipes) and biogeochemical water quality parameters, i.e., chlorophyll-a (Chl-a) and total suspended matter (TSM), and to examine their relationship in Lake Tana (Ethiopia) during a one-year study period (2020). Sentinel-2A/B satellite images are used to monitor water hyacinth expansion and Chl-a and TSM concentrations in the water. The Case 2 Regional Coast Colour processor (C2RCC) is used for atmospheric and sunglint correction over inland waters, while the Sen2Cor atmospheric processor is used to calculate the normalized difference vegetation index (NDVI) for water hyacinth mapping. The water hyacinth cover and biomass are determined by NDVI values ranging from 0.60 to 0.95. A peak in cover and biomass is observed in October 2020, just a month after the peak of Chl-a (25.2 mg m−3) and TSM (62.5 g m−3) concentrations observed in September 2020 (end of the main rainy season). The influx of sediment and nutrient load from the upper catchment area during the rainy season could be most likely responsible for both Chl-a and TSM increased concentrations. This, in turn, created a fertile situation for water hyacinth proliferation in Lake Tana. Overall, the freely available Sentinel-2 satellite imagery and appropriate atmospheric correction processors are an emerging potent tool for inland water monitoring and management in large-scale regions under a global change scenario. Full article
Figures:
Graphical abstract
Figure 1. Map of the Horn of Africa indicating the location of Lake Tana (Ethiopia), its basin (marked with a red line), and its major tributaries (blue lines) within the region of interest. The locations of the meteorological stations are also indicated.
Figure 2. Monthly average temperature (°C) (dash-dotted line) and monthly mean precipitation (mm) (solid line) in Lake Tana during 2020.
Figure 3. (a) Control points (in red) of water hyacinth (P1) and water (P2–P4) from a Sentinel-2 image on 18 September 2020; comparison of values between (b) BOA reflectance derived from C2RCC and Sen2Cor, (c) level-1C (TOA) reflectance and BOA C2RCC, and (d) level-1C (TOA) reflectance and BOA Sen2Cor. There was no water hyacinth reflectance value at P1 from the C2RCC processor, due to masked data considered as land.
Figure 4. (a–l) Chlorophyll-a concentration (Chl-a, mg m⁻³) in Lake Tana from January 2020 to December 2020 estimated from Sentinel-2 satellites after the C2RCC processor.
Figure 5. (a–l) Total suspended matter concentration (TSM, g m⁻³) in Lake Tana from January 2020 to December 2020 estimated from the Sentinel-2 satellites after the C2RCC processor.
Figure 6. (a) RGB (red-green-blue) band composite on 17 January 2020 indicating the water hyacinth area on the northeastern part of Lake Tana; (b) water hyacinth map extracted from the Sentinel-2 Level-2 satellite image (Sen2Cor) using NDVI values in the range from 0.6 to 0.95.
Figure 7. (a–l) Water hyacinth expansion in cover and biomass for each month from January 2020 to December 2020, extracted from Sentinel-2 Level-2 satellite images (Sen2Cor) using the NDVI.
Figure 8. Temporal evolution of (a) chlorophyll-a (Chl-a, mg m⁻³), total suspended matter (TSM, g m⁻³), and water hyacinth cover (ha); (b) average water hyacinth biomass (NDVI) and water hyacinth biomass scaled to total cover (NDVI × cover) in Lake Tana during 2020. The x-axis represents the days during the year on a linear scale. Thus, major tick marks correspond to the 1st of each month and data points are shown based on the actual date of sampling, whereas bars are set at the midpoint between two consecutive samplings.
Figure 9. (a) Water hyacinth mechanical removal campaign in 2020; (b) water hyacinth collected from water to dry land (Source: Lake Tana Protection Agency, 2020).
21 pages, 18547 KiB  
Article
A Self-Trained Model for Cloud, Shadow and Snow Detection in Sentinel-2 Images of Snow- and Ice-Covered Regions
by Kamal Gopikrishnan Nambiar, Veniamin I. Morgenshtern, Philipp Hochreuther, Thorsten Seehaus and Matthias Holger Braun
Remote Sens. 2022, 14(8), 1825; https://doi.org/10.3390/rs14081825 - 10 Apr 2022
Cited by 8 | Viewed by 3399
Abstract
Screening clouds, shadows, and snow is a critical pre-processing step in many remote-sensing data processing pipelines that operate on satellite image data from polar and high mountain regions. We observe that the results of the state-of-the-art Fmask algorithm are not very accurate in polar and high mountain regions. Given the unavailability of large, labeled Sentinel-2 training datasets, we present a multi-stage self-training approach that trains a model to perform semantic segmentation on Sentinel-2 L1C images using the noisy Fmask labels for training and a small human-labeled dataset for validation. At each stage of the proposed iterative framework, we use a larger network architecture in comparison to the previous stage and train a new model. The trained model at each stage is then used to generate new training labels for a bigger dataset, which are used for training the model in the next stage. We select the best model during training in each stage by evaluating the multi-class segmentation metric, mean Intersection over Union (mIoU), on the small human-labeled validation dataset. This effectively helps to correct the noisy labels. Our model achieved an overall accuracy of 93% compared to the Fmask 4 and Sen2Cor 2.8, which achieved 75% and 76%, respectively. We believe our approach can also be adapted for other remote-sensing applications for training deep-learning models with imprecise labels. Full article
(This article belongs to the Special Issue Remote Sensing in Glaciology and Cryosphere Research)
Figures:
Graphical abstract
Figure 1. Geographic distribution of the datasets. The labeled numbers indicate the number of Sentinel-2 scenes used from a given site, and their colors indicate the dataset to which they belong.
Figure 2. The U-Net architecture [14] with 32 start filters and a depth of 5, used in stage 2 of the self-training framework. The layers that constitute each encoder and decoder block are shown inside the boxes with dotted borders. The number of feature maps is indicated below each colored box. The resolution is the same for all layers in the encoder/decoder block; this is indicated at the top left of the dotted box. The output of the encoder, which is provided via the skip connection, is concatenated with the output of the up-convolution operation from the previous layer.
Figure 3. Confusion matrix of an M-class segmentation problem with M = 6. Each element in the matrix, denoted n_{i,j}, is the total number of pixels that belong to class i and are predicted as class j by the model. For class 3, the True Positive pixels are represented by the green cell, the False Negative pixels by the blue cells, and the False Positive pixels by the red cells.
Figure 4. Comparison of test image results from Tile 10WDE in Northwest Territories, Canada, captured on 1 June 2020.
Figure 5. Comparison of test image results from Tile 26XNQ in North East Greenland, captured on 14 September 2020.
Figure 6. Comparison of test image results from different stages of the self-training framework and the results from the models trained using supervised training, from Tile 04WDV in Alaska, USA, captured on 27 October 2020.
Figure A1. Comparison of test image results from Tile 04CEU in Marie Byrd Land, Antarctica, captured on 12 December 2020.
Figure A2. Comparison of validation image results from Tile 16XEG in Nunavut, Canada, captured on 15 March 2020.
Figure A3. Comparison of validation image results from Tile 41XNE in Arkhangelsk Oblast, Russia, captured on 23 September 2020.
20 pages, 6248 KiB  
Article
Ice Detection with Sentinel-1 SAR Backscatter Threshold in Long Sections of Temperate Climate Rivers
by Edvinas Stonevicius, Giedrius Uselis and Dalia Grendaite
Remote Sens. 2022, 14(7), 1627; https://doi.org/10.3390/rs14071627 - 28 Mar 2022
Cited by 14 | Viewed by 3237
Abstract
Climate change leads to more variable meteorological conditions. In many Northern Hemisphere temperate regions, cold seasons have become more variable and unpredictable, necessitating frequent river ice observations over long sections of rivers. Satellite SAR (Synthetic Aperture Radar)-based river ice detection models have been successfully applied and tested, but different hydrological, morphological and climatological conditions can affect their skill. In this study, we developed and tested Sentinel-1 SAR-based ice detection models in 525 km sections of the Nemunas and Neris Rivers. We analyzed three binary classification models based on VV, VH backscatter and logistic regression. The model sensitivity and specificity were used to determine the optimal threshold between ice and water classes. We used in situ observations and Sentinel-2 Sen2Cor ice mask to validate models in different ice conditions. In most cases, SAR-based ice detection models outperformed Sen2Cor classification because Sen2Cor misclassified pixels as ice in areas with translucent clouds, undetected by the scene classification algorithm, and misclassified pixels as water in cloud or river valley shadow. SAR models were less accurate in river sections where river flow and ice formation conditions were affected by large valley-dammed reservoirs. Sen2Cor and SAR models accurately detected border and consolidated ice but were less accurate in moving ice conditions. The skill of models depended on how dense the moving ice was. With a lowered classification threshold and increased model sensitivity, SAR models detected sparse frazil ice. In most cases, the VV polarization-based model was more accurate than the VH polarization-based model. The results of logistic and VV models were highly correlated, and the use of VV was more constructive due to its simpler algorithm. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
Show Figures

Figure 1

Figure 1
<p>The Lithuanian part of the Nemunas and Neris Rivers, the location of hydrological stations (black dots) with available ground observation data and river sections (color stripes) covered by the matching Sentinel-1 SAR and Sentinel-2 MSI observations used for the development and validation of the river ice detection models.</p>
Full article ">Figure 2
<p>The distribution of the backscatter coefficient Sigma0 from ice (6 March 2018) and open water pixels (data from 10 and 13 May, 3 and 6 November 2018), including and excluding the zone within 30 m of the riverbanks.</p>
Full article ">Figure 3
<p>The distribution of logistic model coefficients estimated using 100 training dataset subsets, each consisting of 7500 pixels. Points were randomly jittered in the horizontal direction to avoid overlapping.</p>
Full article ">Figure 4
<p>Optimal ice and open water classification thresholds of the logistic, VH and VV models determined using 100 training dataset subsets. The threshold was considered optimal when the true prediction rates for water (specificity) and ice (sensitivity) were equal.</p>
Full article ">Figure 5
<p>Interdependency between the logistic model parameters β<sub>0</sub>, β<sub>VV</sub> and β<sub>VH</sub> and the ice/water classification threshold value, estimated using 100 training dataset subsets.</p>
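Figure 5 relates the fitted logistic coefficients β<sub>0</sub>, β<sub>VV</sub> and β<sub>VH</sub> to the classification threshold across 100 training subsets. The sketch below reproduces that resampling idea with scikit-learn on entirely synthetic VV/VH samples; the subset size follows the caption, while everything else is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic VV/VH backscatter (dB) for ice (1) and water (0) pixels; values are illustrative only.
n_ice, n_water = 20000, 20000
x_ice = np.column_stack([rng.normal(-14, 2.0, n_ice), rng.normal(-22, 2.0, n_ice)])
x_water = np.column_stack([rng.normal(-22, 2.5, n_water), rng.normal(-28, 2.5, n_water)])
X = np.vstack([x_ice, x_water])
y = np.concatenate([np.ones(n_ice), np.zeros(n_water)])

coefs = []
for _ in range(100):                       # 100 training subsets of 7500 pixels, as in the caption
    idx = rng.choice(len(y), size=7500, replace=False)
    model = LogisticRegression().fit(X[idx], y[idx])
    b0 = model.intercept_[0]
    b_vv, b_vh = model.coef_[0]
    coefs.append((b0, b_vv, b_vh))

coefs = np.array(coefs)
print("mean coefficients (b0, bVV, bVH):", coefs.mean(axis=0).round(3))
```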
Full article ">Figure 6
<p>Distribution of observations in the VV<sub>Sigma0</sub> and VH<sub>Sigma0</sub> plane on different dates in the testing and training datasets. Red lines represent the ice and water classification thresholds used in the models based on VV and VH polarization backscatter.</p>
Full article ">Figure 7
<p>The agreement of SAR-based logistic, VH and VV model classification with the Sen2Cor ice and water classes on different days from the testing dataset.</p>
Full article ">Figure 8
<p>Nemunas River section near the Smalininkai HS affected by translucent clouds and their shadows on 18 January 2017. The effect of clouds and their shadows on ice detection is visible in the Sentinel-2 natural color composite (<b>a</b>) and in the match of the Sen2Cor classes with the VV model prediction (<b>b</b>).</p>
Full article ">Figure 9
<p>The Nemunas River section between the Nemajunai HS and the Kaunas HPS Reservoir (north of the shown section) was covered by ice on 25 January 2017. Sen2Cor predicted the water class in the river valley shadow, while the VH model misclassified many pixels as water upstream from the HPS Reservoir (<b>a</b>). The VH model classified more pixels as water than the VV model (<b>b</b>).</p>
Full article ">Figure 10
<p>VV model prediction compared to the Sen2Cor class on 14 February 2017.</p>
Full article ">Figure 11
<p>Sentinel-2 natural color composite (<b>a</b>), VV model prediction compared to the Sen2Cor class (<b>b</b>), VV polarization backscatter coefficient (<b>c</b>) and VV and VH classification mismatch (<b>d</b>) on 14 February 2017 in the Nemunas River upstream from the Panemunes HS.</p>
Full article ">Figure 12
<p>Sentinel-1 SAR backscatter in VV and VH polarizations in Nemunas River near the Lazdenai HS on a day with sparse frazil ice (7 February 2018) and without frazil ice (21 February 2018).</p>
Full article ">
25 pages, 6949 KiB  
Article
Evaluation of Sentinel-2/MSI Atmospheric Correction Algorithms over Two Contrasted French Coastal Waters
by Quang-Tu Bui, Cédric Jamet, Vincent Vantrepotte, Xavier Mériaux, Arnaud Cauvin and Mohamed Abdelillah Mograne
Remote Sens. 2022, 14(5), 1099; https://doi.org/10.3390/rs14051099 - 23 Feb 2022
Cited by 18 | Viewed by 4640
Abstract
The Sentinel-2A and Sentinel-2B satellites, with on-board Multi-Spectral Instrument (MSI), and launched on 23 June 2015 and 7 March 2017, respectively, are very useful tools for studying ocean color, even if they were designed for land and vegetation applications. However, the use of [...] Read more.
The Sentinel-2A and Sentinel-2B satellites, carrying the on-board Multi-Spectral Instrument (MSI) and launched on 23 June 2015 and 7 March 2017, respectively, are very useful tools for studying ocean color, even though they were designed for land and vegetation applications. However, the use of these satellites requires a process called “atmospheric correction”. This process aims to remove the contribution of the atmosphere from the total top-of-atmosphere reflectance measured by the remote sensors. To assess this processing, seven atmospheric correction algorithms were compared over two French coastal regions (English Channel and French Guiana): Image correction for atmospheric effects (iCOR), Atmospheric correction for OLI ‘lite’ (ACOLITE), Case 2 Regional Coast Colour (C2RCC), Sentinel 2 Correction (Sen2Cor), Polynomial-based algorithm applied to MERIS (Polymer), the standard NASA atmospheric correction (NASA-AC) and the Ocean Color Simultaneous Marine and Aerosol Retrieval Tool (OC-SMART). The satellite-estimated remote-sensing reflectances were spatially and temporally matched with in situ measurements collected by an ASD FieldSpec4 spectrophotometer. Results, based on 28 potential individual match-ups, showed that the best-performing processor was OC-SMART, with the highest total score Stot (16.89) and the highest coefficients of correlation R2 (ranging from 0.69 at 443 nm to 0.92 at 665 nm). iCOR and Sen2Cor showed the least accurate performances, with total score Stot values of 2.01 and 7.70, respectively. Since the size of the in situ observation platform can be significant compared to the pixel resolution of MSI onboard Sentinel-2, it can bias the pixel extraction process. To study this impact, we therefore used different pixel extraction methods; however, no significant changes in the results were observed, and further research may be necessary. Full article
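The per-band match-up statistics quoted in this abstract (R2, relative error, bias, RMSE) can be computed from paired satellite and in situ Rrs as sketched below; the sample values are placeholders, and the paper's exact scoring formulas (e.g., for the total score Stot) may differ.

```python
import numpy as np

def matchup_stats(rrs_sat, rrs_insitu):
    """Basic match-up statistics between satellite-derived and in situ Rrs (sr^-1)."""
    sat, obs = np.asarray(rrs_sat, float), np.asarray(rrs_insitu, float)
    bias = np.mean(sat - obs)
    rmse = np.sqrt(np.mean((sat - obs) ** 2))
    re = 100.0 * np.mean(np.abs(sat - obs) / obs)   # mean relative error, in percent
    r = np.corrcoef(sat, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "re_percent": re, "r2": r ** 2}

# Placeholder match-up pairs at a single band (values invented for illustration).
rrs_insitu = np.array([0.0051, 0.0063, 0.0042, 0.0075, 0.0058])
rrs_sat = np.array([0.0048, 0.0069, 0.0040, 0.0081, 0.0052])
print(matchup_stats(rrs_sat, rrs_insitu))
```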
(This article belongs to the Special Issue Atmospheric Correction for Remotely Sensed Ocean Color Data)
Show Figures

Figure 1

Figure 1
<p>Maps of measurement locations for two coastal areas: French Guiana (<b>top</b>); English Channel (<b>bottom</b>).</p>
Full article ">Figure 1 Cont.
<p>Maps of measurement locations for two coastal areas: French Guiana (<b>top</b>); English Channel (<b>bottom</b>).</p>
Full article ">Figure 2
<p>The different configurations of the extraction box for the match-up exercise. The red circles correspond to the locations of the in situ stations and the brown squares are the extraction boxes used for retrieving data.</p>
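A hedged sketch of the extraction-box idea in Figure 2, comparing a centered 3×3 box with right- and down-shifted variants around a station pixel, is shown below; the box size, the use of the median, and the raster values are assumptions made for illustration.

```python
import numpy as np

def extract_box(raster, row, col, half=1, dr=0, dc=0):
    """Median of a (2*half+1)^2 box centred at (row+dr, col+dc); NaNs are ignored."""
    r, c = row + dr, col + dc
    box = raster[r - half:r + half + 1, c - half:c + half + 1]
    return np.nanmedian(box)

# Fake Rrs raster and a station pixel location (illustrative only).
rng = np.random.default_rng(2)
rrs = rng.normal(0.005, 0.0005, size=(100, 100))
row, col = 50, 60

centered = extract_box(rrs, row, col)              # normal extraction box
right_shifted = extract_box(rrs, row, col, dc=1)   # box shifted one pixel to the right
down_shifted = extract_box(rrs, row, col, dr=1)    # box shifted one pixel down
print(centered, right_shifted, down_shifted)
```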
Full article ">Figure 3
<p>Scatter plots of the estimated (y-axis) vs in situ (x-axis) R<sub>rs</sub> for the seven atmospheric correction algorithms at 443 nm, 490 nm, 560 nm, and 665 nm. The dotted line is the 1:1 line and solid lines present the linear regression between the AC retrievals and the field measurements R<sub>rs</sub>.</p>
Full article ">Figure 4
<p>Variation in the statistical parameters with the wavelength on the individual match-ups dataset. From left to right, top to bottom: RE, Bias, R<sup>2</sup>.</p>
Full article ">Figure 5
<p>Scatter plots of the estimated (y-axis) vs in situ (x-axis) R<sub>rs</sub> for the seven atmospheric correction algorithms at 443 nm, 490 nm, 560 nm, and 665 nm. The dotted line is the 1:1 line and solid lines present the linear regression between the AC retrieval and the in situ R<sub>rs</sub>.</p>
Full article ">Figure 6
<p>Variation in the statistical parameters with the wavelength on the common match-ups dataset. From left to right, top to bottom: RE, Bias, R<sup>2</sup>.</p>
Full article ">Figure 7
<p>Summary of statistics for the normal extraction method (<b>the middle</b>), the down-shifted box (<b>the lower</b>), and the right-shifted box (<b>the upper</b>). (<b>Left column</b>): RE; (<b>middle column</b>): Bias; (<b>right column</b>): R<sup>2</sup>.</p>
Full article ">Figure 8
<p>Example of an inhomogeneous pixel scene processed with NASA’s atmospheric correction processor in the Eastern English Channel on 21 September 2016. The in situ measurement location is marked as Pin 1.</p>
Full article ">Figure 9
<p>Example Sentinel-2 satellite image at band 10 (1375 nm) over French Guiana on 28 November 2016. Cloud, cirrus, or haze covers the entire image.</p>
Full article ">
19 pages, 4355 KiB  
Article
Snow Coverage Mapping by Learning from Sentinel-2 Satellite Multispectral Images via Machine Learning Algorithms
by Yucheng Wang, Jinya Su, Xiaojun Zhai, Fanlin Meng and Cunjia Liu
Remote Sens. 2022, 14(3), 782; https://doi.org/10.3390/rs14030782 - 8 Feb 2022
Cited by 12 | Viewed by 4410
Abstract
Snow coverage mapping plays a vital role not only in studying hydrology and climatology, but also in investigating crop disease overwintering for smart agriculture management. This work investigates snow coverage mapping by learning from Sentinel-2 satellite multispectral images via machine-learning methods. To this [...] Read more.
Snow coverage mapping plays a vital role not only in studying hydrology and climatology, but also in investigating crop disease overwintering for smart agriculture management. This work investigates snow coverage mapping by learning from Sentinel-2 satellite multispectral images via machine-learning methods. To this end, the largest dataset for snow coverage mapping (to the best of our knowledge) with three typical classes (snow, cloud and background) is first collected and labeled via the semi-automatic classification plugin in QGIS. Then, both random forest-based conventional machine learning and U-Net-based deep learning are applied to the semantic segmentation challenge in this work. The effects of various input band combinations are also investigated so that the most suitable one can be identified. Experimental results show that (1) both conventional machine-learning and advanced deep-learning methods significantly outperform the existing rule-based Sen2Cor product for snow mapping; (2) U-Net generally outperforms the random forest since both spectral and spatial information is incorporated in U-Net via convolution operations; (3) the best spectral band combination for U-Net is B2, B11, B4 and B9. It is concluded that a U-Net-based deep-learning classifier with four informative spectral bands is suitable for snow coverage mapping. Full article
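A minimal sketch of the random-forest branch of this workflow, classifying per-pixel spectral vectors into background, cloud and snow, is given below; the four-band reflectance samples are synthetic and the hyperparameters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic per-pixel reflectance in four bands for three classes; real training data
# would be sampled from the labeled Sentinel-2 scenes.
def make_class(mean, n=3000):
    return rng.normal(mean, 0.05, size=(n, 4))

X = np.vstack([make_class([0.10, 0.15, 0.12, 0.08]),   # background
               make_class([0.55, 0.50, 0.52, 0.45]),   # cloud: bright in all bands
               make_class([0.70, 0.10, 0.65, 0.60])])  # snow: bright VIS, dark SWIR-like band
y = np.repeat([0, 1, 2], 3000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["background", "cloud", "snow"]))
```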
(This article belongs to the Special Issue Remote Sensing for Smart Agriculture Management)
Show Figures

Figure 1

Figure 1
<p>The entire workflow is divided into data collection, data labeling, data exploration, model training and model evaluation.</p>
Full article ">Figure 2
<p>U-Net architecture used in this study. The blue boxes represent different multi-channel feature maps, with the numbers on the top and left edge of each box indicating the number of channels and the feature size (width and height), respectively. Each white box represents a copied feature map. The arrows with different colors denote different operations.</p>
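A compact PyTorch sketch of the encoder–decoder pattern behind this U-Net is shown below; the depth, channel widths and four-band input are placeholders chosen for brevity, not the exact architecture depicted in Figure 2.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # Two 3x3 convolutions with ReLU, as in the classic U-Net block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: enough to show the skip-connection pattern, not the paper's exact model."""
    def __init__(self, in_bands=4, n_classes=3):
        super().__init__()
        self.enc1 = double_conv(in_bands, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)           # 64 = 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        s1 = self.enc1(x)                          # skip-connection source
        s2 = self.enc2(self.pool(s1))
        x = self.up(s2)
        x = self.dec1(torch.cat([x, s1], dim=1))   # concatenate copied feature map
        return self.head(x)

logits = TinyUNet()(torch.randn(1, 4, 128, 128))
print(logits.shape)  # torch.Size([1, 3, 128, 128])
```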
Full article ">Figure 3
<p>Loss curves for training data (blue) and validation data (orange) in training process of (<b>A</b>) U-Net<math display="inline"><semantics> <msub> <mrow/> <mrow> <mi>R</mi> <mi>G</mi> <mi>B</mi> </mrow> </msub> </semantics></math>, (<b>B</b>) U-Net<math display="inline"><semantics> <msub> <mrow/> <mrow> <mn>4</mn> <mi>b</mi> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mi>s</mi> </mrow> </msub> </semantics></math> and (<b>C</b>) U-Net<math display="inline"><semantics> <msub> <mrow/> <mrow> <mn>12</mn> <mi>b</mi> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mi>s</mi> </mrow> </msub> </semantics></math>. The dashed line indicates the epoch with smallest validation loss and the loss in the Y-axis represents the weighted cross-entropy.</p>
Full article ">Figure 4
<p>Geographical distribution of the 40 selected sites denoted by empty triangles, with different colors representing scenes obtained in different years: cyan, red and green denote scenes from 2019, 2020 and 2021, respectively.</p>
Full article ">Figure 5
<p>Visualization of all 40 scenes via RGB bands, with the numbers above indicating the scene capture dates.</p>
Full article ">Figure 6
<p>Labeled classification masks of all 40 collected scenes. The three target classes are represented by different colors: black denotes background, red denotes cloud and cyan denotes snow.</p>
Full article ">Figure 7
<p>Boxplots comparing the bottom-of-atmosphere corrected reflectance of the 12 spectral bands from Sentinel-2 L2A products for background (white), cloud (red) and snow (cyan). Note: the outliers of each boxplot are not displayed.</p>
Full article ">Figure 8
<p>NDSI distribution of snow (cyan), cloud (red) and background (black) pixels, where the NDSI is defined as NDSI = (B3 − B12)/(B3 + B12).</p>
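The NDSI defined in this caption can be computed directly from the band arrays, as in the sketch below; the reflectance values are invented, and any snow/no-snow threshold would still have to be chosen from the observed distributions.

```python
import numpy as np

def ndsi(b3, b12, eps=1e-6):
    """NDSI = (B3 - B12) / (B3 + B12), following the definition in the figure caption."""
    b3 = np.asarray(b3, dtype=float)
    b12 = np.asarray(b12, dtype=float)
    return (b3 - b12) / (b3 + b12 + eps)

# Illustrative reflectance values: snow is bright in the green band (B3) and dark in the SWIR (B12).
b3 = np.array([0.75, 0.60, 0.20, 0.55])
b12 = np.array([0.05, 0.08, 0.18, 0.50])
print(ndsi(b3, b12))   # high values suggest snow; low values suggest background or cloud
```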
Full article ">Figure 9
<p>Feature selection. (<b>A</b>) Forward sequential feature selection, where the x-axis tick labels indicate the bands sequentially added to the model inputs. (<b>B</b>) Backward sequential feature selection, where the x-axis tick labels indicate the bands sequentially removed from the model inputs.</p>
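Forward and backward sequential band selection of the kind summarised in Figure 9 can be approximated with scikit-learn's SequentialFeatureSelector, as sketched below; the random-forest estimator, the synthetic samples and the generic band names are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(4)

# Synthetic stand-in for per-pixel samples with 12 spectral features and 3 classes.
X = rng.normal(size=(1500, 12))
y = (X[:, 1] + 0.8 * X[:, 10] + 0.5 * X[:, 3] > 0).astype(int) + (X[:, 8] > 1).astype(int)

band_names = [f"B{i}" for i in range(1, 13)]   # placeholder band labels
rf = RandomForestClassifier(n_estimators=50, random_state=0)

for direction in ("forward", "backward"):
    sfs = SequentialFeatureSelector(rf, n_features_to_select=4, direction=direction, cv=3)
    sfs.fit(X, y)
    chosen = [band_names[i] for i in np.flatnonzero(sfs.get_support())]
    print(direction, "selection kept:", chosen)
```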
Full article ">Figure 10
<p>Classification performance comparisons for the different models applied to the training dataset images (n = 34) based on (<b>A</b>) precision, (<b>B</b>) F1 score, (<b>C</b>) recall and (<b>D</b>) IoU. The bars with three different colors, i.e., violet, green and blue, represent models whose input subsets are made up of the RGB bands, the informative four bands and all 12 bands, respectively. The bar without texture denotes the random forest model, while the bar with diagonal texture denotes the U-Net model. Note: the evaluation was performed at the image level; therefore, the validation dataset is also included.</p>
Full article ">Figure 11
<p>Classification performance comparisons for the different models applied to the testing dataset images (n = 6) based on (<b>A</b>) precision, (<b>B</b>) F1 score, (<b>C</b>) recall and (<b>D</b>) IoU. The bars with three different colors, i.e., violet, green and blue, represent models whose input subsets are made up of the RGB bands, the informative four bands and all 12 bands, respectively. The bar without texture denotes the random forest model, while the bar with diagonal texture denotes the U-Net model.</p>
Full article ">Figure 12
<p>Visual comparisons of the classification performance in six independent scenes for different methods. Each row represents an independent test scene, and each column represents a different method. Except for the plots in the first column, the three target classes are represented by different colors, where black denotes background, red denotes cloud and cyan denotes snow.</p>
Full article ">
14 pages, 3212 KiB  
Article
Integration of Sentinel 1 and Sentinel 2 Satellite Images for Crop Mapping
by Shilan Felegari, Alireza Sharifi, Kamran Moravej, Muhammad Amin, Ahmad Golchin, Anselme Muzirafuti, Aqil Tariq and Na Zhao
Appl. Sci. 2021, 11(21), 10104; https://doi.org/10.3390/app112110104 - 28 Oct 2021
Cited by 44 | Viewed by 6062
Abstract
Crop identification is key to global food security. Due to the large scale of crop estimation, the science of remote sensing was able to do well in this field. The purpose of this study is to study the shortcomings and strengths of combined [...] Read more.
Crop identification is key to global food security. Because crop estimation must be performed over large areas, remote sensing is well suited to this task. The purpose of this study is to examine the shortcomings and strengths of combining radar data and optical images to identify crop types in the Tarom region (Iran). For this purpose, Sentinel 1 and Sentinel 2 images were used to create a crop map of the study area. The Sentinel 1 data came from Google Earth Engine’s (GEE) Level-1 Ground Range Detected (GRD) Interferometric Wide Swath (IW) product, in which the radar observations are projected onto a standard 10-m grid. The Sen2Cor method was used to mask clouds and cloud shadows, and the Sentinel 2 Level-1C data were sourced from the Copernicus Open Access Hub. A random forest classification method was used, and its accuracy was assessed. Using seven crop types, the classification map of the 2020 growing season in Tarom was prepared from 10-day Sentinel 2 smoothed NDVI mosaics and 12-day Sentinel 1 backscatter mosaics. A Kappa coefficient of 0.75 and a maximum accuracy of 85% were obtained in this study. To achieve maximum classification accuracy, it is recommended to combine radar and optical data, as this combination captures more detail than single-sensor classification and yields more reliable information. Full article
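A hedged sketch of the multi-sensor classification described in this abstract — stacking NDVI and backscatter mosaic features, training a random forest, and reporting overall accuracy, the Kappa coefficient and per-pixel confidence (cf. Figure 7) — is given below; all feature values and labels are synthetic, so the printed scores are meaningful only as a template.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

n_samples, n_crops = 3000, 7
# Synthetic feature stack per sample: 10 NDVI mosaic dates plus 2 polarizations x 10 SAR mosaic dates.
ndvi = rng.uniform(0.1, 0.9, size=(n_samples, 10))
sar = rng.normal(-15, 4, size=(n_samples, 20))
X = np.hstack([ndvi, sar])
y = rng.integers(0, n_crops, size=n_samples)   # 7 crop classes; random labels, scores near chance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
print("kappa:", round(cohen_kappa_score(y_te, pred), 3))

# Per-pixel classification confidence: predicted probability of the majority class.
confidence = clf.predict_proba(X_te).max(axis=1)
print("mean confidence:", round(confidence.mean(), 3))
```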
(This article belongs to the Special Issue Sustainable Agriculture and Advances of Remote Sensing)
Show Figures

Figure 1

Figure 1
<p>Example of profiles of (upper panel) Sentinel 2 normalized difference vegetation index (NDVI) and (lower panel) Sigma<sup>0</sup> VV and VH backscatter intensities for a winter wheat field.</p>
Full article ">Figure 2
<p>Location of the study area.</p>
Full article ">Figure 3
<p>Sentinel 1 12-day VH backscatter mosaics combined in an RGB composite. Dates are 1–13 March 2020 (red), 17–29 June 2020 (green), and 16–28 August 2020 (blue).</p>
Full article ">Figure 4
<p>Schematic overview of two-step hierarchical classification procedure.</p>
Full article ">Figure 5
<p>Final categorization result based on Sentinel 1 and 2 inputs through August 2020.</p>
Full article ">Figure 6
<p>Zones in the middle field were misclassified as alfalfa in June (<b>a</b>) but were correctly labeled as potato in August (<b>b</b>).</p>
Full article ">Figure 7
<p>Classification confidence, defined as the random forest predicted class probability of the majority class for each pixel, at the end of August 2020.</p>
Full article ">
20 pages, 8608 KiB  
Article
Fusing Retrievals of High Resolution Aerosol Optical Depth from Landsat-8 and Sentinel-2 Observations over Urban Areas
by Hao Lin, Siwei Li, Jia Xing, Jie Yang, Qingxin Wang, Lechao Dong and Xiaoyue Zeng
Remote Sens. 2021, 13(20), 4140; https://doi.org/10.3390/rs13204140 - 15 Oct 2021
Cited by 8 | Viewed by 2398
Abstract
Recent studies have shown that the high-resolution satellite Landsat-8 has the capability to retrieve aerosol optical depth (AOD) over urban areas at a 30 m spatial resolution. However, its long revisiting time and narrow swath limit the coverage and frequency of the high [...] Read more.
Recent studies have shown that the high-resolution satellite Landsat-8 has the capability to retrieve aerosol optical depth (AOD) over urban areas at a 30 m spatial resolution. However, its long revisit time and narrow swath limit the coverage and frequency of the high resolution AOD observations. With the increasing number of Earth observation satellites launched in recent years, combining the observations of multiple satellites can provide higher temporal-spatial coverage. In this study, a fusing retrieval algorithm is developed to retrieve high-resolution (30 m) AOD over urban areas from Landsat-8 and Sentinel-2 A/B satellite measurements. The new fusing algorithm was tested and evaluated over Beijing city and its surrounding area in China. The validation results show that the retrieved AODs show a high level of agreement with the local urban ground-based Aerosol Robotic Network (AERONET) AOD measurements, with an overall high coefficient of determination (R2) of 0.905 and small root mean square error (RMSE) of 0.119. Compared with the operational AOD products processed by the Landsat-8 Surface Reflectance Code (LaSRC-AOD), Sentinel Radiative Transfer Atmospheric Correction code (SEN2COR-AOD), and MODIS Collection 6 AOD (MOD04) products, the AOD retrieved from the new fusing algorithm based on the Landsat-8 and Sentinel-2 A/B observations exhibits an overall higher accuracy and better performance in spatial continuity over the complex urban area. Moreover, the temporal resolution of the high spatial resolution AOD observations was greatly improved (from 16/10/10 days to about two to four days over global land in theory under cloud-free conditions) and the daily spatial coverage was increased by two to three times compared to the coverage gained using a single sensor. Full article
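The validation measures quoted here (R2, RMSE, and the ±(0.05 + 0.2 × AOD) expected-error envelope used in Figure 5) can be evaluated from paired retrievals and AERONET values as sketched below; the match-up arrays are placeholders invented for illustration.

```python
import numpy as np

def aod_validation(aod_sat, aod_aeronet):
    """R2, RMSE and fraction of retrievals within the expected-error (EE) envelope."""
    sat, ref = np.asarray(aod_sat, float), np.asarray(aod_aeronet, float)
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    r2 = np.corrcoef(sat, ref)[0, 1] ** 2
    ee = 0.05 + 0.2 * ref                                  # EE envelope half-width
    within_ee = 100.0 * np.mean(np.abs(sat - ref) <= ee)   # percentage of retrievals inside EE
    return {"r2": r2, "rmse": rmse, "within_ee_percent": within_ee}

# Placeholder match-ups (values invented for illustration).
aeronet = np.array([0.12, 0.35, 0.58, 0.80, 1.10, 0.22])
retrieved = np.array([0.15, 0.30, 0.62, 0.75, 1.25, 0.20])
print(aod_validation(retrieved, aeronet))
```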
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>The location of the study area, coverage of satellite data, ground-based AERONET sites and the spatial distribution of land use cover types (the map of China is based on drawing review No. GS (2016) 1603 supervised by the Ministry of Natural Resources of the People’s Republic of China).</p>
Full article ">Figure 2
<p>Overview of the fusing AOD retrieval algorithm for the Landsat-8 and Sentinel-2 images.</p>
Full article ">Figure 3
<p>Example of constructing LSR from a BVA pixel over the AERONET Beijing site as a function of solar and view geometry using LaSRC surface reflectance at (<b>a</b>) 0.47 and (<b>b</b>) 0.66 μm. The red curve is the polynomial fit through the black symbols.</p>
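The red curve in Figure 3, a polynomial fit of surface reflectance against the sun–view geometry, can be reproduced in spirit with numpy.polyfit as sketched below; the use of a single scattering-angle predictor, the polynomial degree and the data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical predictor: scattering angle (degrees) for each clear-sky observation, with the
# corresponding LaSRC surface reflectance at 0.47 um (all values invented for illustration).
angle = np.sort(rng.uniform(110, 170, 40))
reflectance = (0.03 + 1e-4 * (angle - 140) + 2e-6 * (angle - 140) ** 2
               + rng.normal(0, 0.002, 40))

coeffs = np.polyfit(angle, reflectance, deg=2)   # second-order fit (degree is a guess)
fitted = np.polyval(coeffs, angle)               # smoothed land surface reflectance (LSR) curve
print("polynomial coefficients:", np.round(coeffs, 6))
```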
Full article ">Figure 4
<p>Variation in the surface reflectance in each month for blue (<b>a</b>) and red (<b>b</b>) bands corresponding to NDVI; the black circle and black line represent the average minimum 20% to 50% of LSR in each month and the trend for each month over the mixed pixels.</p>
Full article ">Figure 5
<p>The validation of the AOD retrievals for (<b>a</b>) all three sensors, (<b>b</b>) Landsat-8, and (<b>c</b>) Sentinel-2 A/B. The solid black lines are a 1:1 line, the solid red lines are regression lines, and the dotted black lines are expected error (EE) lines, defined as ±(0.05 + 0.2 × AOD<sub>sunphotometer</sub>).</p>
Full article ">Figure 6
<p>Comparison of the AODs over (<b>a</b>–<b>c</b>) sparsely vegetated areas (SVA) and (<b>d</b>–<b>f</b>) barely non-vegetated areas (BVA) for Landsat-8 and Sentinel-2 against AERONET AOD measurements.</p>
Full article ">Figure 7
<p>Comparison of the (<b>a</b>–<b>c</b>) retrieved AOD with (<b>d</b>) MOD04 DB&amp;DT AOD, (<b>e</b>) LaSRC AOD and (<b>f</b>) SEN2COR AOD products.</p>
Full article ">Figure 8
<p>Comparison of the AOD retrievals for Landsat-8 and Sentinel-2B on 4 December 2018.</p>
Full article ">Figure 9
<p>Percentage of mean potential revisit cycles over global land for (<b>a</b>) Landsat-8, (<b>b</b>) Sentinel-2A, (<b>c</b>) Sentinel-2A/B and (<b>d</b>) their configurations.</p>
Full article ">Figure 10
<p>Percentage of potential swath coverage over global land for Landsat-8, Sentinel-2A, Sentinel-2B and their configurations.</p>
Full article ">Figure 11
<p>Color composite image (red-green-blue) of Landsat-8 and Sentinel-2 (<b>a</b>,<b>e</b>), and the corresponding spatial distribution of retrieved AOD at 30 m (<b>b</b>,<b>f</b>), MCD19A2 AOD at 1 km (<b>c</b>,<b>g</b>) and MOD04_L2 DB &amp; DT AOD at 10 km (<b>d</b>,<b>h</b>).</p>
Full article ">Figure A1
<p>Polar plot illustrating the solar and view geometry of Landsat-8 (red) and Sentinel-2 (blue) data in the study period. The radial straight lines show azimuth spaced every 30° and the circles show zenith spaced every 20°.</p>
Full article ">Figure A2
<p>Monthly LSR determination in (<b>a</b>) January, (<b>b</b>) April, (<b>c</b>) July, and (<b>d</b>) October over the study areas.</p>
Full article ">Figure A3
<p>True color image (<b>a</b>) and the spatial distribution of different surface types (<b>b</b>) over Beijing and surrounding areas (Sentinel-2A MSI image in 27 June 2017).</p>
Full article ">Figure A4
<p>AOD retrievals from Landsat-8 (black dot) and Sentinel-2 (red dot) images on AERONET sites in the study period.</p>
Full article ">