
Search Results (1,704)

Search Parameters:
Keywords = remote sensing retrieval

23 pages, 32897 KiB  
Article
On the Suitability of Different Satellite Land Surface Temperature Products to Study Surface Urban Heat Islands
by Alexandra Hurduc, Sofia L. Ermida and Carlos C. DaCamara
Remote Sens. 2024, 16(20), 3765; https://doi.org/10.3390/rs16203765 - 10 Oct 2024
Viewed by 611
Abstract
Remote sensing satellite data have been a crucial tool in understanding urban climates. The variety of sensors with different spatiotemporal characteristics and retrieval methodologies gave rise to a multitude of approaches when analyzing the surface urban heat island effect (SUHI). Although there are considerable advantages that arise from these different characteristics (spatiotemporal resolution, time of observation, etc.), it also means that there is a need for understanding the ability of sensors in capturing spatial and temporal SUHI patterns. For this, several land surface temperature products are compared for the cities of Madrid and Paris, retrieved from five sensors: the Spinning Enhanced Visible and InfraRed Imager onboard Meteosat Second Generation, the Advanced Very-High-Resolution Radiometer onboard Metop, the Moderate-resolution Imaging Spectroradiometer onboard both Aqua and Terra, and the Thermal Infrared Sensor onboard Landsat 8 and 9. These products span a wide range of LST algorithms, including split-window, single-channel, and temperature–emissivity separation methods. Results show that the diurnal amplitude of SUHI may not be well represented when considering daytime and nighttime polar orbiting platforms. Also, significant differences arise in SUHI intensity and spatial and temporal variability due to the different methods implemented for LST retrieval.
(This article belongs to the Section AI Remote Sensing)
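The SUHI intensities compared in this abstract reduce, at their core, to a difference of spatially averaged land surface temperatures between urban and rural pixels. A minimal sketch of that core calculation (the masks, values, and NaN handling below are illustrative, not taken from the article):

```python
import numpy as np

def suhi_intensity(lst, is_urban, is_rural):
    """Surface urban heat island intensity: mean urban LST minus mean rural LST.

    lst      : 2D array of land surface temperature (K), NaN where cloudy/missing
    is_urban : boolean mask of urban pixels
    is_rural : boolean mask of rural reference pixels
    """
    return np.nanmean(lst[is_urban]) - np.nanmean(lst[is_rural])

# Toy scene: a warm urban core embedded in cooler surroundings.
lst = np.full((4, 4), 300.0)          # rural background at 300 K
lst[1:3, 1:3] = 303.0                 # urban block 3 K warmer
urban = np.zeros((4, 4), bool)
urban[1:3, 1:3] = True
suhi = suhi_intensity(lst, urban, ~urban)   # 303.0 - 300.0 = 3.0 K
```

Differences between products then come from the LST fields themselves (retrieval algorithm, overpass time, resolution), not from this arithmetic.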
Show Figures

Figure 1: Land cover resampled for the three projections of LST products (a–f), along with the percentage of urban pixels (g–l).
Figure 2: Time of observation of each sensor: (a) Madrid during daytime and time of minimum SUHI; (b) Madrid during nighttime and time of maximum SUHI; (c) Paris during daytime and maximum SUHI; (d) Paris during nighttime and minimum SUHI. Colored bins are sampled every 15 min.
Figure 3: Mean DJF (December, January, February) LST for all products: daytime spatial patterns (for the MLST, the most frequent hour of the LST maximum and SUHI minimum is shown), histograms of urban and rural LST, and the corresponding nighttime panels. Color bars differ among products to better visualize patterns, but histogram value ranges are the same.
Figure 4: As Figure 3 but for JJA (June, July, and August).
Figure 5: As Figure 3 but for Paris.
Figure 6: As Figure 3 but for Paris and DJF; an extension of the histogram in (d6) is shown in (d8).
Figure 7: Diurnal cycle of SUHI for Madrid: (a) DJF, (b) MAM, (c) JJA, (d) SON.
Figure 8: As Figure 7 but for Paris.
Figure 9: Correlation of monthly SUHI anomalies between all products: (a) daytime, (b) nighttime. Blank spaces correspond to pairs of products with no significant correlation (p-value > 0.05).
Figure 10: As Figure 9 but for Paris.
16 pages, 8635 KiB  
Article
Enhancing Turbidity Predictions in Coastal Environments by Removing Obstructions from Unmanned Aerial Vehicle Multispectral Imagery Using Inpainting Techniques
by Hieu Trung Kieu, Yoong Sze Yeong, Ha Linh Trinh and Adrian Wing-Keung Law
Drones 2024, 8(10), 555; https://doi.org/10.3390/drones8100555 - 7 Oct 2024
Viewed by 524
Abstract
High-resolution remote sensing of turbidity in the coastal environment with unmanned aerial vehicles (UAVs) can be adversely affected by the presence of obstructions of vessels and marine objects in images, which can introduce significant errors in turbidity modeling and predictions. This study evaluates the use of two deep-learning-based inpainting methods, namely, Decoupled Spatial–Temporal Transformer (DSTT) and Deep Image Prior (DIP), to recover the obstructed information. Aerial images of turbidity plumes in the coastal environment were first acquired using a UAV system with a multispectral sensor that included obstructions on the water surface at various obstruction percentages. The performance of the two inpainting models was then assessed through both qualitative and quantitative analyses of the inpainted data, focusing on the accuracy of turbidity retrieval. The results show that the DIP model performs well across a wide range of obstruction percentages from 10 to 70%. In comparison, the DSTT model produces good accuracy only with low percentages of less than 20% and performs poorly when the obstruction percentage increases.
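Inpainting accuracy of the kind reported here (R² and MAE against ground truth within the obstructed region) can be sketched as a masked comparison of the recovered field with the true one. The field size, noise level, and mask below are invented stand-ins, not the study's data:

```python
import numpy as np

def masked_scores(truth, inpainted, mask):
    """MAE and R^2 between ground truth and inpainted values, restricted to the
    formerly obstructed pixels (mask == True)."""
    t, p = truth[mask], inpainted[mask]
    mae = np.mean(np.abs(t - p))
    ss_res = np.sum((t - p) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return mae, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 0.2, (64, 64))        # stand-in reflectance field
mask = np.zeros((64, 64), bool)
mask[20:40, 20:40] = True                      # ~10% "obstructed" area
inpainted = truth + rng.normal(0.0, 0.005, truth.shape)  # small recovery error
mae, r2 = masked_scores(truth, inpainted, mask)
```

In the study itself the ground truth inside the mask comes from the synthetic-mask design described in Figure 4, not from a simulation like this.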
Show Figures

Figure 1: (a) The UAV-borne multispectral imagery system for water turbidity image acquisition and its components: (b) DJI M300 RTK UAV and (c) MicaSense RedEdge-MX Dual Camera. Reprinted with permission from [37].
Figure 2: Sample images captured at (a) Line 1 and (b) Line 10 of the UAV flight. The vessel and marine objects are masked in red for confidentiality.
Figure 3: Image resizing for the two models.
Figure 4: (a) Original UAV image, (b) precise annotation mask, (c) synthetic annotation mask, and (d) the difference between (b) and (c). The yellow area was used as ground-truth information for model evaluation.
Figure 5: Illustration comparing variance among frames.
Figure 6: Procedure for evaluating model performance for turbidity retrieval.
Figure 7: (a) Original images and inpainted images from the (b) DSTT and (c) DIP models. The water regions near the vessel are enlarged for comparison.
Figure 8: Effect of the number of iterations on DIP model performance.
Figure 9: Inconsistency of DIP model results.
Figure 10: Mosaic images of Lines 1 and 10 stitched from the original, DSTT, and DIP inpainted images. The vessel and marine object are masked in blue for confidentiality.
Figure 11: R² and MAE of the DSTT and DIP models against ground-truth information. Images are indexed sequentially following the flight paths of Lines 1 and 10.
Figure 12: Variance of each inpainted image with its previous and next frames for DSTT and DIP, indexed sequentially along the flight paths of Lines 1 and 10.
Figure 13: Correlation plot of variance versus obstacle coverage percentage.
19 pages, 11653 KiB  
Article
Influence of Vegetation Phenology on the Temporal Effect of Crop Fractional Vegetation Cover Derived from Moderate-Resolution Imaging Spectroradiometer Nadir Bidirectional Reflectance Distribution Function–Adjusted Reflectance
by Yinghao Lin, Tingshun Fan, Dong Wang, Kun Cai, Yang Liu, Yuye Wang, Tao Yu and Nianxu Xu
Agriculture 2024, 14(10), 1759; https://doi.org/10.3390/agriculture14101759 - 5 Oct 2024
Viewed by 393
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) products are being increasingly used for the quantitative remote sensing of vegetation. However, the assumption underlying the MODIS NBAR product’s inversion model—that surface anisotropy remains unchanged over the 16-day retrieval period—may be unreliable, especially since the canopy structure of vegetation undergoes stark changes at the start of season (SOS) and the end of season (EOS). Therefore, to investigate the MODIS NBAR product’s temporal effect on the quantitative remote sensing of crops at different stages of the growing seasons, this study selected typical phenological parameters, namely SOS, EOS, and the intervening stable growth of season (SGOS). The PROBA-V bioGEOphysical product Version 3 (GEOV3) Fractional Vegetation Cover (FVC) served as verification data, and the Pearson correlation coefficient (PCC) was used to compare and analyze the retrieval accuracy of FVC derived from the MODIS NBAR product and MODIS Surface Reflectance product. The Anisotropic Flat Index (AFX) was further employed to explore the influence of vegetation type and mixed pixel distribution characteristics on the BRDF shape under different stages of the growing seasons and different FVC; that was then combined with an NDVI spatial distribution map to assess the feasibility of using the reflectance of other characteristic directions besides NBAR for FVC correction. The results revealed the following: (1) Generally, at the SOSs and EOSs, the differences in PCCs before vs. after the NBAR correction mainly ranged from 0 to 0.1. This implies that the accuracy of FVC derived from MODIS NBAR is lower than that derived from MODIS Surface Reflectance. Conversely, during the SGOSs, the differences in PCCs before vs. after the NBAR correction ranged between −0.2 and 0, suggesting the accuracy of FVC derived from MODIS NBAR surpasses that derived from MODIS Surface Reflectance. (2) As vegetation phenology shifts, the ensuing differences in NDVI patterning and AFX can offer auxiliary information for enhanced vegetation classification and interpretation of mixed pixel distribution characteristics, which, when combined with NDVI at characteristic directional reflectance, could enable the accurate retrieval of FVC. Our results provide data support for the BRDF correction timescale effect of various stages of the growing seasons, highlighting the potential importance of considering how they differentially influence the temporal effect of NBAR corrections prior to monitoring vegetation when using the MODIS NBAR product.
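One common way to derive FVC from NDVI, consistent with the NDVI-based workflow this abstract describes, is the pixel-dichotomy model, with retrieval accuracy then summarized by a Pearson correlation against reference FVC. The endmember NDVI values below are illustrative assumptions, not the values used in the article:

```python
import numpy as np

def fvc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.86):
    """Pixel-dichotomy FVC model: FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil),
    clipped to [0, 1]. The soil/vegetation endmembers here are hypothetical."""
    return np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

def pearson(a, b):
    """Pearson correlation coefficient (PCC) between two 1D arrays."""
    return np.corrcoef(a, b)[0, 1]

ndvi = np.array([0.05, 0.25, 0.45, 0.65, 0.86])          # toy pixel NDVI values
fvc = fvc_from_ndvi(ndvi)                                # 0.0 at soil, 1.0 at full cover
ref = fvc + np.array([0.0, 0.02, -0.01, 0.01, 0.0])      # stand-in reference FVC
pcc = pearson(fvc, ref)
```

In the study, the same FVC computation is run on NDVI from MOD09GA and from MCD43A4, and the two resulting PCCs against GEOV3 FVC are compared per phenological stage.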
Show Figures

Figure 1: Spatial extent of the Wancheng District study area (Henan Province, China). (a) Land cover types and sampling point locations, from MCD12Q1 (v061). (b–d) Sentinel-2 true-color images of three mixed pixels: crops above with buildings below (b); crops below with buildings above (c); buildings in the upper-left corner with crops in the remainder (d).
Figure 2: Monthly average temperature and monthly total precipitation in the study area, 2017–2021.
Figure 3: Data processing flow chart, in three steps: crop phenological parameter extraction with TIMESAT; Fractional Vegetation Cover (FVC) derived from MOD09GA and MCD43A4; and accuracy evaluation of both FVC products against GEOV3 FVC via Pearson correlation coefficients (PCCs).
Figure 4: NDVI and EVI time-series fitted curves and crop phenological parameters. SOS: start of season; EOS: end of season; SGOS: stable growth of season.
Figure 5: Spatial distribution of FVC derived from MOD09GA and MCD43A4 and their difference images on 15 November 2020 (a–c), 10 February 2021 (d–f), and 30 September 2021 (g–i).
Figure 6: PCCs of FVC derived before and after the NBAR correction against GEOV3 FVC at different stages of the growing seasons: (a) PCCs for MOD09GA and MCD43A4 in 2018–2021; (b) scatterplot of their numerical differences.
Figure 7: NDVI spatial distribution maps of a crop pixel (a–d), a savanna pixel (e–h), and a grassland pixel (i–l) in different stages of the growing seasons. SZA: solar zenith angle; AFX_RED and AFX_NIR: Anisotropic Flat Index in the red and near-infrared bands.
Figure 8: NDVI spatial distribution maps of mixed pixels in different stages of the growing seasons: crops above and buildings below (a–d); crops below and buildings above (e–h); buildings in the upper-left corner and crops in the remainder (i–l).
29 pages, 12094 KiB  
Article
Bitemporal Radiative Transfer Modeling Using Bitemporal 3D-Explicit Forest Reconstruction from Terrestrial Laser Scanning
by Chang Liu, Kim Calders, Niall Origo, Louise Terryn, Jennifer Adams, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Félicien Meunier, John Armston, Mathias Disney, William Woodgate, Joanne Nightingale and Hans Verbeeck
Remote Sens. 2024, 16(19), 3639; https://doi.org/10.3390/rs16193639 - 29 Sep 2024
Viewed by 642
Abstract
Radiative transfer models (RTMs) are often used to retrieve biophysical parameters from earth observation data. RTMs with multi-temporal and realistic forest representations enable radiative transfer (RT) modeling for real-world dynamic processes. To achieve more realistic RT modeling for dynamic forest processes, this study presents the 3D-explicit reconstruction of a typical temperate deciduous forest in 2015 and 2022. We demonstrate for the first time the potential use of bitemporal 3D-explicit RT modeling from terrestrial laser scanning on the forward modeling and quantitative interpretation of: (1) remote sensing (RS) observations of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and canopy light extinction, and (2) the impact of canopy gap dynamics on light availability of explicit locations. Results showed that, compared to the 2015 scene, the hemispherical-directional reflectance factor (HDRF) of the 2022 forest scene relatively decreased by 3.8% and the leaf FAPAR relatively increased by 5.4%. At explicit locations where canopy gaps significantly changed between the 2015 scene and the 2022 scene, only under diffuse light did the branch damage and closing gap significantly impact ground light availability. This study provides the first bitemporal RT comparison based on the 3D RT modeling, which uses one of the most realistic bitemporal forest scenes as the structural input. This bitemporal 3D-explicit forest RT modeling allows spatially explicit modeling over time under fully controlled experimental conditions in one of the most realistic virtual environments, thus delivering a powerful tool for studying canopy light regimes as impacted by dynamics in forest structure and developing RS inversion schemes on forest structural changes.
(This article belongs to the Section Forest Remote Sensing)
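The canopy light-extinction profiles discussed in this abstract are computed with full 3D-explicit radiative transfer; a much simpler 1D analogue is the classic Beer–Lambert law, which relates light availability at a depth in the canopy to the cumulative leaf area index above it. This sketch is a didactic simplification, not the article's method, and the extinction coefficient is an arbitrary illustrative value:

```python
import numpy as np

def light_fraction(lai_cum, k=0.5):
    """Beer-Lambert canopy light extinction: fraction of incident light reaching a
    level below cumulative leaf area index lai_cum, for extinction coefficient k."""
    return np.exp(-k * lai_cum)

lai_profile = np.array([0.0, 1.0, 2.0, 4.0])   # cumulative LAI from canopy top down
frac = light_fraction(lai_profile)             # 1.0 at the top, decaying with depth
```

The 3D-explicit approach replaces the single k with the actual geometry of leaves, branches, and gaps, which is why gap dynamics can change local light availability without changing plot-level LAI much.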
Show Figures

Figure 1: Geographic location and map of Wytham Woods with the plot indicated by ‘X’ [60].
Figure 2: Spectral properties of different tree species in the plot [2,51]: (a) reflectance and transmittance of leaves; (b) reflectance of bark.
Figure 3: Locations of canopy gap dynamics and simulated photosynthetically active radiation (PAR) sensors, shown in the TLS point cloud (top view): (a) 2015; (b) 2022.
Figure 4: Vertical profiles of the different types of canopy gap dynamics observed by terrestrial laser scanning, and the positions of the simulated PAR sensors.
Figure 5: Flowchart of the research methodology. QSMs of woody structure were reconstructed using leaf-off TLS data.
Figure 6: Segmented TLS leaf-off point clouds of the 1 ha Wytham Woods forest stand (top view) in (a) 2015 and (b) 2022. Each color represents an individual tree.
Figure 7: Dynamic change in the wood structure of a common ash (Fraxinus excelsior) tree from 2015 to 2022: (a) 2015 leaf-off point cloud; (b) 2022 leaf-off point cloud.
Figure 8: 3D-explicit reconstruction of a sycamore (Acer pseudoplatanus) tree: (a) TLS point cloud colored by height (leaf-off); (b) QSM overlaid with the TLS leaf-off point cloud; (c) QSM, with a modeled branch length of 3863.3 m; (d) fully reconstructed tree (QSM + leaves), with an assigned leaf area of 888.2 m².
Figure 9: 3D-explicit models of the complete 1 ha Wytham Woods forest stand in (a) 2015 and (b) 2022. Leaf colors represent the different tree species present; stems and branches of all trees are shown in brown.
Figure 10: Vertical profiles of simulated (a) light extinction, (b) light absorption, and (c) leaf area per meter of height in the 2015 and 2022 forest scenes. Light extinction and absorption are for the PAR band; illumination zenith angle (IZA) 38.4°, illumination azimuth angle (IAA) 125.2°.
Figure 11: Vertical profiles of simulated (a) light extinction and (b) light absorption in the blue, green, red, and NIR bands for the 2015 and 2022 forest scenes (IZA 38.4°, IAA 125.2°).
Figure 12: Simulated top-of-canopy images of the Wytham Woods forest scenes in 2015 and 2022, under nadir viewing directions and Sentinel-2 RGB bands (IZA 38.4°, IAA 125.2°): ultra-high-resolution (1 cm), 25 cm, and 10 m resolution images, plus the spatial pattern of HDRF variation from 2015 to 2022 in the red band.
Figures 13–16: Light extinction profiles of downward PAR at locations 1–4: (a) diffuse light; (b) midday direct light (IZA 28.4°, IAA 180°); (c) morning direct light (IZA 81.3°, IAA 27.3°); (d) the canopy gap dynamic at each location. The x axis is local light availability as a percentage of incident solar irradiance; the y axis is the height of the simulated sensors above the ground.
13 pages, 6757 KiB  
Article
A Fast Computing Model for the Oxygen A-Band High-Spectral-Resolution Absorption Spectra Based on Artificial Neural Networks
by Jianxi Zhou, Congming Dai, Pengfei Wu and Heli Wei
Remote Sens. 2024, 16(19), 3616; https://doi.org/10.3390/rs16193616 - 28 Sep 2024
Viewed by 361
Abstract
A fast and accurate radiative transfer model is the prerequisite in the field of atmospheric remote sensing for limb atmospheric inversion to tackle the drawback of slow calculation speed of traditional atmospheric radiative transfer models. This paper established a fast computing model (ANN-HASFCM) for high-spectral-resolution absorption spectra by using artificial neural networks and PCA (principal component analysis) spectral reconstruction technology. This paper chose the line-by-line radiative transfer model (LBLRTM) as the comparative model and simulated training spectral data in the oxygen A-band (12,900–13,200 cm−1). Subsequently, ANN-HASFCM was applied to the retrieval of the atmospheric density profile with the data of the Global Ozone Monitoring by an Occultation of Stars (GOMOS) instrument. The results show that the relative error between the optical depth spectra calculated by LBLRTM and ANN-HASFCM is within 0.03–0.65%. In the process of using the global-fitting algorithm to invert GOMOS-measured atmospheric samples, the inversion results using Fast-LBLRTM and ANN-HASFCM as forward models are consistent, and the retrieval speed of ANN-HASFCM is more than 200 times faster than that of Fast-LBLRTM (reduced from 226.7 s to 0.834 s). The analysis shows the brilliant application prospects of ANN-HASFCM in limb remote sensing.
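The PCA spectral reconstruction step that ANN-HASFCM builds on can be sketched with a plain SVD: compress each high-resolution spectrum to a handful of principal-component coefficients (which a neural network would then learn to predict from the atmospheric state) and reconstruct the spectrum from them. The synthetic spectra, grid, and variance threshold below are illustrative stand-ins, not the article's LBLRTM training data:

```python
import numpy as np

# Stand-in "training spectra": a smooth one-parameter family on a coarse grid
# (a proxy for simulated oxygen A-band optical-depth spectra, not LBLRTM output).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
spectra = np.stack([np.exp(-((x - c) / 0.1) ** 2)
                    for c in rng.uniform(0.3, 0.7, 50)])

# PCA via SVD of the mean-centred spectra.
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

# Keep enough principal components to capture 99.9% of the variance.
energy = np.cumsum(S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(energy, 0.999)) + 1
basis = Vt[:k]

# A trained network would predict these k coefficients directly; here we just
# project one spectrum and reconstruct it to show the compression step.
coeffs = (spectra[0] - mean) @ basis.T      # k numbers instead of 200
recon = mean + coeffs @ basis
```

The speedup in the paper comes from the network emitting a few such coefficients instead of a line-by-line computation over every spectral point.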
Show Figures

Figure 1: Neural network structure diagram.
Figure 2: Principal component analysis of oxygen A-band spectra: (a) singular value for each principal component; (b) normalized cumulative singular value as a function of the number of selected PCs.
Figure 3: Limb observation model.
Figure 4: GOMOS O2 A-band limb transmittance.
Figure 5: Number of atmospheric layers in 2003.
Figure 6: Statistics on the number of samples in different layers in 2003.
Figure 7: Vertical interval of GOMOS.
Figure 8: Flow chart of the ANN absorption-spectrum fast computing model.
Figure 9: ANN simulation results.
Figure 10: Retrieval of the atmospheric density profile.
16 pages, 4228 KiB  
Article
Tracking Phytoplankton Biomass Amid Wildfire Smoke Interference Using Landsat 8 OLI
by Sassan Mohammady, Kevin J. Erratt and Irena F. Creed
Remote Sens. 2024, 16(19), 3605; https://doi.org/10.3390/rs16193605 - 27 Sep 2024
Viewed by 544
Abstract
This study investigates the escalating impact of wildfire smoke on the remote sensing of phytoplankton biomass in freshwater systems. Wildfire smoke disrupts the accuracy of Chlorophyll-a (Chl-a) retrieval models, with Chl-a often used as a proxy for quantifying phytoplankton biomass. Given the increasing frequency and intensity of wildfires, there is a need for the development and refinement of remote sensing methodologies to effectively monitor phytoplankton dynamics under wildfire-impacted conditions. Here we developed a novel approach using Landsat’s coastal/aerosol band (B1) to screen for and categorize levels of wildfire smoke interference. By excluding high-interference data (B1 reflectance > 0.07) from the calibration set, Chl-a retrieval model performance using different Landsat band formulas improved significantly, with R2 increasing from 0.55 to as high as 0.80. Our findings demonstrate that Rayleigh-corrected reflectance, combined with B1 screening, provides a robust method for monitoring phytoplankton biomass even under moderate smoke interference, outperforming full atmospheric correction methods. This approach enhances the reliability of remote sensing in the face of increasing wildfire events, offering a valuable tool for the effective management of aquatic environments.
(This article belongs to the Section Biogeosciences Remote Sensing)
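The B1 screening rule described above is, at heart, a threshold filter applied before model calibration. A minimal sketch with invented sample values (only the 0.07 reflectance cutoff comes from the abstract):

```python
import numpy as np

B1_THRESHOLD = 0.07   # B1 reflectance above this indicates high smoke interference

def screen_by_b1(b1_rrc, chla, threshold=B1_THRESHOLD):
    """Drop calibration samples whose coastal/aerosol-band (B1) Rayleigh-corrected
    reflectance exceeds the threshold, i.e. likely heavy wildfire-smoke interference."""
    keep = b1_rrc <= threshold
    return b1_rrc[keep], chla[keep]

b1 = np.array([0.02, 0.05, 0.08, 0.12, 0.06])   # illustrative B1 Rrc values
chla = np.array([3.1, 7.4, 5.0, 9.9, 2.2])      # matching in situ Chl-a (hypothetical)
b1_clean, chla_clean = screen_by_b1(b1, chla)   # removes the 0.08 and 0.12 samples
```

The retained pairs are then what feeds the band-formula regressions against ln(Chl-a).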
Show Figures

Figure 1

Figure 1
<p>(<b>A</b>) Annual global tree cover loss caused by fires (data source: Global Forest Watch, <a href="https://www.globalforestwatch.org/dashboards/global/?category=fires&amp;location=WyJnbG9iYWwiXQ%3D%3D" target="_blank">https://www.globalforestwatch.org/dashboards/global/?category=fires&amp;location=WyJnbG9iYWwiXQ%3D%3D</a>, accessed on 5 July 2024); (<b>B</b>) Cumulative annual carbon emissions released during wildfires in Canada during the wildfire season (data source: Copernicus Climate Change Service, <a href="https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-global-fire-emissions-gfas" target="_blank">https://ads.atmosphere.copernicus.eu/cdsapp#!/dataset/cams-global-fire-emissions-gfas</a>, accessed on 5 July 2024).</p>
Figure 2
<p>Locations of ground-based Chl-<span class="html-italic">a</span> samples in the Lake Winnipeg watershed (black outline). Green and blue circles represent all the sampling lakes, while blue circles represent the lakes used for the Chl-<span class="html-italic">a</span> retrieval model development (matchups between in situ sampling and satellite overpass occurred within ±3 days).</p>
Figure 3
<p>Correlations between the band formulas and ln(Chl-<span class="html-italic">a</span>) for different temporal windows of in situ–satellite matchups.</p>
Figure 4
<p>Cluster analysis of B1 Rrc values with example images showing wildfire interference. “Wildfire Effect—2023” showcases examples of lakes that were differentially affected by wildfire smoke, whereas “No Wildfire Effect—2022” represents the same lakes in 2022, a year without wildfire interference. Different lowercase letters indicate significant (<span class="html-italic">p</span> &lt; 0.001) differences between clusters.</p>
Figure 5
<p>Comparison of the correlation between band formulas and ln(Chl-<span class="html-italic">a</span>) across various temporal windows of in situ-satellite matchups for: (<b>A</b>) Rayleigh-corrected reflectance dataset, and (<b>B</b>) Surface reflectance product. Different uppercase letters indicate significant (<span class="html-italic">p</span> &lt; 0.001) differences between calibration sets across different band formulas.</p>
Figure 6
<p>(<b>A</b>) Map of the differences in lake average Chl-<span class="html-italic">a</span> with and without B1 aerosol screening (with aerosol screening minus without) for the period from start of availability of Landsat OLI with B1 in 2013 to 2023. Frequency distributions of (<b>B</b>) lake average Chl-<span class="html-italic">a</span> without aerosol screening, and (<b>C</b>) lake average Chl-<span class="html-italic">a</span> with aerosol screening.</p>
16 pages, 2934 KiB  
Article
Real-Time Simulation of Clear Sky Background Radiation in Gas Infrared Remote Sensing Monitoring
by Shengquan Shu, Jianguo Liu, Liang Xu, Yuhao Wang, Yasong Deng and Yongfeng Sun
Photonics 2024, 11(10), 904; https://doi.org/10.3390/photonics11100904 - 26 Sep 2024
Viewed by 382
Abstract
During the process of infrared remote sensing monitoring, obtaining real-time measurements of sky background radiation is extremely inconvenient. The current methods incur a certain amount of lag. In this study, within the existing theoretical framework, a fast transmittance calculation method using interpolation was [...] Read more.
During the process of infrared remote sensing monitoring, obtaining real-time measurements of sky background radiation is extremely inconvenient. The current methods incur a certain amount of lag. In this study, within the existing theoretical framework, a fast transmittance calculation method using interpolation was adopted, and a simplified transmission model was established. This led to the development of a new and simplified method for rapid temperature and humidity retrieval. Compared to the line-by-line integration method, the interpolation method significantly improves the speed of transmittance calculation by several tens of times, while maintaining a high level of accuracy. The relative deviation between the results obtained using the interpolation method and those obtained through line-by-line integration is less than 1 ‱. With the proposed method, temperature and humidity profile information can be retrieved from measured spectra within 5 min and corresponding background spectra can be obtained. The differences between the calculated background radiation and the measured spectra using the new method are smaller, making it more suitable for calculating sky background radiation. Additionally, the rapid retrieval results of the temperature profiles in the lower atmosphere have a certain level of accuracy (the mean deviation is less than 2 K). Full article
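The speed-up from interpolating over a precomputed transmittance grid can be illustrated with a toy stand-in for line-by-line integration. The absorption model and grid below are hypothetical, chosen only to show the precompute-once, interpolate-per-query pattern:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): replace an expensive
# per-query transmittance computation with interpolation over a precomputed
# grid. The "line-by-line" function here is a toy stand-in.

def transmittance_lbl(temperature_k):
    # Stand-in for a costly line-by-line integration.
    return np.exp(-0.002 * (temperature_k - 200.0))

# Precompute once on a coarse temperature grid ...
grid_t = np.linspace(200.0, 320.0, 25)
grid_tau = transmittance_lbl(grid_t)

def transmittance_interp(temperature_k):
    # ... then answer each query by linear interpolation.
    return np.interp(temperature_k, grid_t, grid_tau)

t_query = 273.15
rel_dev = abs(transmittance_interp(t_query) - transmittance_lbl(t_query)) / transmittance_lbl(t_query)
print(rel_dev < 1e-4)  # True: relative deviation well below 1e-4 for this smooth toy model
```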
Figure 1
<p>Three-layer radiation transmission model and different monitoring backgrounds: (<b>a</b>) three-layer radiation transmission model; (<b>b</b>) ground object background; (<b>c</b>) sky background.</p>
Figure 2
<p>Comparison of the simulated spectra before and after truncation.</p>
Figure 3
<p>Schematic of the research.</p>
Figure 4
<p>The ARD (<b>a</b>), AD (<b>b</b>), RMSE (<b>c</b>), and ATDE (<b>d</b>) between the transmittance values obtained from the two methods in the temperature retrieval band.</p>
Figure 5
<p>The ARD (<b>a</b>), AD (<b>b</b>), RMSE (<b>c</b>), and ATDE (<b>d</b>) between the transmittance values obtained from the two methods in the humidity retrieval band.</p>
Figure 6
<p>Average spectra measured during the two time periods: (<b>a</b>) temperature retrieval band; and (<b>b</b>) humidity retrieval band.</p>
Figure 7
<p>Comparison of the retrieval results of the spectra measured over different time periods.</p>
Figure 8
<p>The AD, RMSE, and STDE of temperature (<b>a</b>) and humidity (<b>b</b>) retrievals.</p>
Figure 9
<p>The RMSE between the measured spectra and the two types of background radiation spectra. Blue line: background radiation calculated with the hysteresis profile provided by ERA5 as the input parameter; red line: background radiation calculated with the real-time profile obtained by the fast retrieval method as the input parameter.</p>
Figure 10
<p>As in <a href="#photonics-11-00904-f009" class="html-fig">Figure 9</a> but for STDE.</p>
Figure 11
<p>The difference between the measured spectra and the two types of background radiation spectra at 0:00 UTC on 7th April 2024: (<b>a</b>) background radiation with latency; (<b>b</b>) background radiation in real-time; (<b>c</b>) bias of two types of background radiation spectra with measured spectra.</p>
19 pages, 7519 KiB  
Article
Sentinel-2 Multispectral Satellite Remote Sensing Retrieval of Soil Cu Content Changes at Different pH Levels
by Hongxu Guo, Fan Wu, Kai Yang, Ziyan Yang, Zeyu Chen, Dongbin Chen and Rongbo Xiao
Agronomy 2024, 14(10), 2182; https://doi.org/10.3390/agronomy14102182 - 24 Sep 2024
Viewed by 551
Abstract
With the development of multispectral imaging technology, retrieving soil heavy metal content using multispectral remote sensing images has become possible. However, factors such as soil pH and spectral resolution affect the accuracy of model inversion, leading to low precision. In this study, 242 [...] Read more.
With the development of multispectral imaging technology, retrieving soil heavy metal content from multispectral remote sensing images has become possible. However, factors such as soil pH and spectral resolution affect the accuracy of model inversion, leading to low precision. In this study, 242 soil samples were collected from a typical area of the Pearl River Delta, and the Cu content in the soil was measured in the laboratory. Simultaneously, Sentinel-2 remote sensing image data were collected, and two-dimensional and three-dimensional spectral indices were established. Independent decision trees were constructed based on pH values; the Successive Projections Algorithm (SPA) combined with the Boruta algorithm was used to select the characteristic bands for soil Cu content; and Optuna automatic hyperparameter optimization of ensemble learning models was applied to establish a model for estimating Cu content in soil. The research results indicated that in the SPA combined with the Boruta feature selection algorithm, the characteristic spectral indices were mainly concentrated in the spectral transformation forms of TBI2 and TBI4. Full-sample modeling lacked predictive ability, but after classifying the samples based on soil pH value, the R2 of the RF and XGBoost models constructed with the samples with pH values between 5.85 and 7.75 was 0.54 and 0.76, respectively, with corresponding RMSE values of 22.48 and 16.12 and RPD values of 1.51 and 2.11. This study shows that the inversion of soil Cu content under different pH conditions exhibits significant differences, and determining the optimal pH range can effectively improve inversion accuracy. This research provides a reference for further achieving efficient and accurate remote sensing of heavy metal pollution in agricultural soil. Full article
(This article belongs to the Special Issue Recent Advances in Data-Driven Farming)
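The evaluation metrics quoted above (R2, RMSE, and RPD, where RPD is the ratio of the standard deviation of the observations to the RMSE) can be computed as in this sketch; the sample Cu values are hypothetical:

```python
import numpy as np

# Illustrative sketch with hypothetical values: R2, RMSE, and RPD as used to
# evaluate the pH-stratified Cu retrieval models. RPD > 2 is commonly read
# as indicating a reliable regression model.

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rpd = y_true.std(ddof=1) / rmse  # SD of observations over RMSE
    return r2, rmse, rpd

# Hypothetical observed vs. estimated Cu contents (mg/kg).
y_obs = [30.0, 55.0, 80.0, 42.0, 66.0]
y_est = [33.0, 50.0, 84.0, 40.0, 70.0]
r2, rmse, rpd = regression_metrics(y_obs, y_est)
print(round(rmse, 2))  # 3.74
```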
Figure 1
<p>The flowchart of study.</p>
Figure 2
<p>General situation of the research region and distribution of sampling points.</p>
Figure 3
<p>Average spectral reflectance of soil under different pH conditions.</p>
Figure 4
<p>Construction of Cu content decision tree with pH as branching criterion.</p>
Figure 5
<p>Assessment of the importance of hyperparameters. (<b>a</b>) Full; (<b>b</b>) pH &lt; 5.85; (<b>c</b>) 5.85 ≤ pH &lt; 7.75; and (<b>d</b>) pH ≥ 7.75.</p>
Figure 6
<p>Scatter plot of the measured and estimated values with pH values between 5.85 and 7.75: (<b>a</b>) RF; (<b>b</b>) XGBoost.</p>
30 pages, 10615 KiB  
Article
Machine Learning Modelling for Soil Moisture Retrieval from Simulated NASA-ISRO SAR (NISAR) L-Band Data
by Dev Dinesh, Shashi Kumar and Sameer Saran
Remote Sens. 2024, 16(18), 3539; https://doi.org/10.3390/rs16183539 - 23 Sep 2024
Viewed by 1083
Abstract
Soil moisture is a critical factor that supports plant growth, improves crop yields, and reduces erosion. Therefore, obtaining accurate and timely information about soil moisture across large regions is crucial. Remote sensing techniques, such as microwave remote sensing, have emerged as powerful tools [...] Read more.
Soil moisture is a critical factor that supports plant growth, improves crop yields, and reduces erosion. Therefore, obtaining accurate and timely information about soil moisture across large regions is crucial. Remote sensing techniques, such as microwave remote sensing, have emerged as powerful tools for monitoring and mapping soil moisture. Synthetic aperture radar (SAR) is beneficial for estimating soil moisture at both global and local levels. This study aimed to assess soil moisture and dielectric constant retrieval over agricultural land using machine learning (ML) algorithms and decomposition techniques. Three polarimetric decomposition models were used to extract features from simulated NASA-ISRO SAR (NISAR) L-Band radar images. Machine learning techniques such as random forest regression, decision tree regression, stochastic gradient descent (SGD), XGBoost, K-nearest neighbors (KNN) regression, neural network regression, and multilinear regression were used to retrieve soil moisture from three different crop fields: wheat, soybean, and corn. The study found that the random forest regression technique produced the most precise soil moisture estimations for soybean fields, with an R2 of 0.89 and RMSE of 0.050 without considering vegetation effects and an R2 of 0.92 and RMSE of 0.042 considering vegetation effects. The results for real dielectric constant retrieval for the soybean field were an R2 of 0.89 and RMSE of 6.79 without considering vegetation effects and an R2 of 0.89 and RMSE of 6.78 with considering vegetation effects. These findings suggest that machine learning algorithms and decomposition techniques, along with a semi-empirical technique like Water Cloud Model (WCM), can be effective tools for estimating soil moisture and dielectric constant values precisely. 
The methodology applied in the current research contributes essential insights that could benefit upcoming missions, such as the Radar Observing System for Europe in L-band (ROSE-L) and the collaborative NASA-ISRO SAR (NISAR) mission, for future data analysis in soil moisture applications. Full article
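The Water Cloud Model (WCM) mentioned above removes the vegetation contribution from the total backscatter before soil parameters are retrieved. A minimal sketch of the standard WCM inversion follows; the parameter values are hypothetical, not those fitted in the study:

```python
import numpy as np

# Illustrative sketch of the Water Cloud Model (WCM). A and B are empirical
# vegetation parameters, V a vegetation descriptor (e.g., vegetation water
# content), and theta the radar incidence angle; all numeric values below
# are hypothetical.

def wcm_soil_backscatter(sigma_total, V, theta_rad, A, B):
    """Invert the WCM (linear power units) to isolate the soil contribution.

    sigma_total = sigma_veg + gamma2 * sigma_soil
    sigma_veg   = A * V * cos(theta) * (1 - gamma2)
    gamma2      = exp(-2 * B * V / cos(theta))  # two-way canopy attenuation
    """
    cos_t = np.cos(theta_rad)
    gamma2 = np.exp(-2.0 * B * V / cos_t)
    sigma_veg = A * V * cos_t * (1.0 - gamma2)
    return (sigma_total - sigma_veg) / gamma2

# Round trip: build a total backscatter from a known soil term, then recover it.
A, B, V, theta = 0.12, 0.09, 2.5, np.deg2rad(35.0)
gamma2 = np.exp(-2.0 * B * V / np.cos(theta))
sigma_soil_true = 0.05
sigma_total = A * V * np.cos(theta) * (1.0 - gamma2) + gamma2 * sigma_soil_true
print(np.isclose(wcm_soil_backscatter(sigma_total, V, theta, A, B), sigma_soil_true))  # True
```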
Figure 1
<p>(<b>a</b>) Study area and (<b>b</b>) sampling strategy of SMAPVEX12 campaign (<a href="http://smapvex12.espaceweb.usherbrooke.ca/" target="_blank">http://smapvex12.espaceweb.usherbrooke.ca/</a> (accessed on 29 June 2024)).</p>
Figure 2
<p>Methodology for the estimation of dielectric constant and soil moisture using machine leaning modelling.</p>
Figure 3
<p>Correlation between soil moisture and other polarimetric features. (<b>a</b>) soybean field, (<b>b</b>) wheat field, and (<b>c</b>) corn field.</p>
Figure 4
<p>Soil dielectric constant retrieval from a soybean field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 5
<p>Soil dielectric constant retrieval from a wheat field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multilinear regression, and (<b>g</b>) neural network.</p>
Figure 6
<p>Soil dielectric constant retrieval from a corn field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multilinear regression, and (<b>g</b>) neural network.</p>
Figure 7
<p>Soil moisture retrieval from soybean field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multilinear regression, and (<b>g</b>) neural network.</p>
Figure 8
<p>Soil moisture retrieval from the wheat field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) SGD, (<b>d</b>) KNN, (<b>e</b>) MLR, (<b>f</b>) XGBoost, and (<b>g</b>) neural network.</p>
Figure 9
<p>Soil moisture retrieval from corn field without considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) SGD, (<b>d</b>) KNN, (<b>e</b>) MLR, (<b>f</b>) XGBoost, and (<b>g</b>) neural network.</p>
Figure 10
<p>Soil dielectric constant retrieval from soybean field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 11
<p>Soil dielectric constant retrieval from wheat field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 12
<p>Soil dielectric constant retrieval from corn field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 13
<p>Soil moisture retrieval from soybean field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 14
<p>Soil moisture retrieval from the wheat field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 15
<p>Soil moisture retrieval from the corn field considering vegetation effects. (<b>a</b>) Random forest, (<b>b</b>) decision tree, (<b>c</b>) extreme gradient boosting, (<b>d</b>) stochastic gradient descent, (<b>e</b>) K-nearest neighbor, (<b>f</b>) multiple linear regression, and (<b>g</b>) neural network.</p>
Figure 16
<p>Feature importance in random forest.</p>
Figure 17
<p>Estimated soil dielectric constant and soil moisture using random forest.</p>
39 pages, 13148 KiB  
Article
Fiducial Reference Measurement for Greenhouse Gases (FRM4GHG)
by Mahesh Kumar Sha, Martine De Mazière, Justus Notholt, Thomas Blumenstock, Pieter Bogaert, Pepijn Cardoen, Huilin Chen, Filip Desmet, Omaira García, David W. T. Griffith, Frank Hase, Pauli Heikkinen, Benedikt Herkommer, Christian Hermans, Nicholas Jones, Rigel Kivi, Nicolas Kumps, Bavo Langerock, Neil A. Macleod, Jamal Makkor, Winfried Markert, Christof Petri, Qiansi Tu, Corinne Vigouroux, Damien Weidmann and Minqiang Zhouadd Show full author list remove Hide full author list
Remote Sens. 2024, 16(18), 3525; https://doi.org/10.3390/rs16183525 - 23 Sep 2024
Viewed by 469
Abstract
The Total Carbon Column Observing Network (TCCON) and the Infrared Working Group of the Network for the Detection of Atmospheric Composition Change (NDACC-IRWG) are two ground-based networks that provide the retrieved concentrations of up to 30 atmospheric trace gases, using solar absorption spectrometry. [...] Read more.
The Total Carbon Column Observing Network (TCCON) and the Infrared Working Group of the Network for the Detection of Atmospheric Composition Change (NDACC-IRWG) are two ground-based networks that provide the retrieved concentrations of up to 30 atmospheric trace gases, using solar absorption spectrometry. Both networks provide reference measurements for the validation of satellites and models. TCCON concentrates on long-lived greenhouse gases (GHGs) for carbon cycle studies and validation. The number of sites is limited, and the geographical coverage is uneven, covering mainly Europe and the USA. A better distribution of stations is desired to improve the representativeness of the data for various atmospheric conditions and surface conditions and to cover a large latitudinal distribution. The two successive Fiducial Reference Measurements for Greenhouse Gases European Space Agency projects (FRM4GHG and FRM4GHG2) aim at the assessment of several low-cost portable instruments for precise measurements of GHGs to complement the existing ground-based sites. Several types of low spectral resolution Fourier transform infrared (FTIR) spectrometers manufactured by Bruker, namely an EM27/SUN, a Vertex70, a fiber-coupled IRCube, and a Laser Heterodyne spectro-Radiometer (LHR) developed by UK Rutherford Appleton Laboratory are the participating instruments to achieve the Fiducial Reference Measurements (FRMs) status. Intensive side-by-side measurements were performed using all four instruments next to the Bruker IFS 125HR high spectral resolution FTIR, performing measurements in the NIR (TCCON configuration) and MIR (NDACC configuration) spectral range. The remote sensing measurements were complemented by AirCore launches, which provided in situ vertical profiles of target gases traceable to the World Meteorological Organization (WMO) reference scale. The results of the intercomparisons are shown and discussed. 
Except for the EM27/SUN, all other instruments, including the reference TCCON spectrometer, needed modifications during the campaign period. The EM27/SUN and the Vertex70 provided stable and precise measurements of the target gases during the campaign with quantified small biases. As part of the FRM4GHG project, one EM27/SUN is now used as a travel standard for the verification of column-integrated GHG measurements. The extension of the Vertex70 to the MIR provides the opportunity to retrieve additional concentrations of N2O, CH4, HCHO, and OCS. These MIR data products are comparable to the retrieval results from the high-resolution IFS 125HR spectrometer as operated by the NDACC. Our studies show the potential for such types of spectrometers to be used as a travel standard for the MIR species. An enclosure system with a compact solar tracker and meteorological station has been developed to house the low spectral resolution portable FTIR systems for performing solar absorption measurements. This helps the spectrometers to be mobile and enables autonomous operation, which will help to complement the TCCON and NDACC networks by extending the observational capabilities at new sites for the observation of GHGs and additional air quality gases. The development of the retrieval software allows comparable processing of the Vertex70 type of spectra as the EM27/SUN ones, therefore bringing them under the umbrella of the COllaborative Carbon Column Observing Network (COCCON). A self-assessment following the CEOS-FRM Maturity Matrix shows that the COCCON is able to provide GHG data products of FRM quality and can be used for either short-term campaigns or long-term measurements to complement the high-resolution FTIR networks. Full article
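The intercomparison statistics reported in the figures (mean bias, standard deviation of the differences, and correlation coefficient between a candidate spectrometer and the TCCON reference) can be computed as in this sketch; the XCO2 values are hypothetical:

```python
import numpy as np

# Illustrative sketch with hypothetical XCO2 values (ppm): the statistics
# used in the side-by-side intercomparisons -- mean bias, standard deviation
# of the differences, and correlation coefficient for coincident data.

def intercompare(candidate, reference):
    candidate = np.asarray(candidate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = candidate - reference
    bias = diff.mean()                       # mean bias (candidate minus reference)
    sd = diff.std(ddof=1)                    # standard deviation of the differences
    r = np.corrcoef(candidate, reference)[0, 1]  # correlation coefficient
    return bias, sd, r

xco2_candidate = [405.1, 406.0, 404.7, 407.2, 405.9]
xco2_tccon = [405.0, 405.8, 404.9, 407.0, 405.6]
bias, sd, r = intercompare(xco2_candidate, xco2_tccon)
print(round(bias, 2))  # 0.12
```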
Figure 1
<p>Background anomaly of the Laser Heterodyne spectro-Radiometer. The black curve shows the CH<sub>4</sub> lines at low solar elevation, and the blue curve shows the measurements at mid-solar elevation angles. The red line indicates the background corrected origin, while the green line is the actual zero transmission.</p>
Figure 2
<p>Time series of XCO<sub>2</sub> (<b>a</b>), XCH<sub>4</sub> (<b>c</b>), and XCO (<b>e</b>) retrieved from AirCore and the TCCON instrument for measurements performed at Sodankylä during the period of 2017–2019, and their differences (AirCore minus TCCON) ΔXCO<sub>2</sub> (<b>b</b>), ΔXCH<sub>4</sub> (<b>d</b>), and ΔXCO (<b>f</b>) for the same period.</p>
Figure 3
<p>Mean bias (solid points), standard deviation of the differences (error bars), and correlation coefficients (open points) for XCO<sub>2</sub> (red), XCH<sub>4</sub> (blue), and XCO (green) between Xgas calculated from the AirCore relative to the TCCON data for the individual years of the campaign as well as the averaged results over all years.</p>
Figure 4
<p>Time series of XCO<sub>2</sub> (<b>a</b>), XCH<sub>4</sub> (<b>c</b>), and XCO (<b>e</b>) retrieved from EM27/SUN and TCCON instruments for measurements performed at Sodankylä during the period of 2017–2019, and their differences (EM27/SUN minus TCCON reference) ΔXCO<sub>2</sub> (<b>b</b>), ΔXCH<sub>4</sub> (<b>d</b>), and ΔXCO (<b>f</b>) for the same period.</p>
Figure 5
<p>Mean bias (solid points), standard deviation of the difference (error bars), and correlation coefficients (open points) for XCO<sub>2</sub> (red), XCH<sub>4</sub> (blue), and XCO (green) calculated from the EM27/SUN relative to the TCCON for the individual years of the campaign as well as the averaged combined results of all years.</p>
Figure 6
<p>Time series of XCO<sub>2</sub> (<b>a</b>) retrieved from Vertex70 and TCCON instruments and their differences (Vertex70 minus TCCON reference) (<b>b</b>) for the same period. The shaded areas represent the time periods where the instrument was not operated in an optimal condition, and some tests were performed to achieve better results. The vertical bars represent the dates when an instrument modification was performed to the Vertex70 during the campaign.</p>
Figure 7
<p>Mean bias (solid points), standard deviation of the difference (error bars), and correlation coefficients (open points) for XCO<sub>2</sub> (red), XCH<sub>4</sub> (blue), and XCO (green) calculated from the Vertex70 relative to the TCCON for the individual years of the campaign as well as the averaged results over all years.</p>
Figure 8
<p>Time series of XCO<sub>2</sub> (<b>a</b>) and XAir (<b>c</b>) retrieved from IRCube and TCCON instruments and their difference of ΔXCO<sub>2</sub> (IRCube minus TCCON reference) (<b>b</b>) for the same period. The vertical bars represent the dates when an instrument modification was performed to the IRCube during the campaign.</p>
Figure 9
<p>Mean bias (solid points), standard deviation of the difference (error bars), and correlation coefficients (open points) for XCO<sub>2</sub> (red), XCH<sub>4</sub> (blue), and XCO (green) calculated from the IRCube relative to the TCCON for the individual years of the campaign as well as the averaged results over all years. SO points to Sodankylä, WE to Wollongong, and DB to Darwin.</p>
Figure 10
<p>Time series of XCO<sub>2</sub> (<b>a</b>), XCH<sub>4</sub> (<b>c</b>), and XH<sub>2</sub>O (<b>e</b>) retrieved from LHR and TCCON instruments for measurements performed at Sodankylä during the period of 2017–2019, and their differences (LHR minus TCCON reference) ΔXCO<sub>2</sub> (<b>b</b>), ΔXCH<sub>4</sub> (<b>d</b>), and ΔXH<sub>2</sub>O (<b>f</b>) for the same period.</p>
Figure 11
<p>Time series of the retrieved HCHO columns at Sodankylä from the 125HR (blue) and the Vertex70 (red) spectrometers for all measurements (points) and for data in coincidences within 15 min (circles). Bottom: the differences of the HCHO columns Vertex70—125HR for the data in coincidences.</p>
Figure 12
<p>Scatter plot of the HCHO columns retrieved at Sodankylä from the Bruker IFS 125HR and the Vertex70 spectrometers. Theil-Sen regression: y = 0.894 (0.047) x + 3.518 × 10<sup>13</sup> (1.834 × 10<sup>12</sup>).</p>
Figure 13
<p>Time series of the retrieved OCS columns at Sodankylä from the Vertex70 (blue) and at Kiruna from the 120/5 HR (red) spectrometer for all measurements performed in 2019.</p>
Figure 14
<p>Picture of the enclosure, compact solar tracker, and meteorological station on the mast during deployment at the BIRA-IASB campus in Uccle, Belgium.</p>
Figure 15
<p>Measurements of XCO<sub>2</sub>, XCH<sub>4</sub>, and Xluft from the IRCube (red) and TCCON (black) instruments on the UoW campus, Wollongong.</p>
Figure 16
<p>Xgas values from the side-by-side measurements with the COCCON reference spectrometer (SN37) and the TS (SN39) at the Karlsruhe TCCON site. The two days in March and two days in August were collected before and after the visit to the Izaña TCCON site.</p>
Figure 17
<p>Time series of the side-by-side measurements performed at the Izaña TCCON site during the visit of the TS spectrometer. TCCON-HR data are plotted as red pentagons, the TCCON-LR data as sandy stars, and TS data as blue dots.</p>
Figure 18
<p>The results of the TS campaigns conducted so far. The data for Tsukuba (TK) and Wollongong (WG) are taken from Herkommer et al., 2024 [<a href="#B37-remotesensing-16-03525" class="html-bibr">37</a>]. The bars give the deviation in percentage of the HR and LR data at the individual sites relative to the reference in Karlsruhe. The tcorr represents the time-corrected LR data for the Tsukuba site.</p>
25 pages, 10343 KiB  
Article
Exploration of Deep-Learning-Based Error-Correction Methods for Meteorological Remote-Sensing Data: A Case Study of Atmospheric Motion Vectors
by Hang Cao, Hongze Leng, Jun Zhao, Xiaodong Xu, Jinhui Yang, Baoxu Li, Yong Zhou and Lilan Huang
Remote Sens. 2024, 16(18), 3522; https://doi.org/10.3390/rs16183522 - 23 Sep 2024
Viewed by 715
Abstract
Meteorological satellite remote sensing is important for numerical weather forecasts, but its accuracy is affected by many factors during observation and retrieval, leaving room for improvement. Atmospheric motion vectors (AMVs) are a standard means of measuring wind from space. [...] Read more.
Meteorological satellite remote sensing is important for numerical weather forecasts, but its accuracy is affected by many factors during observation and retrieval, leaving room for improvement. Atmospheric motion vectors (AMVs) are a standard means of measuring wind from space. Unlike conventional surface or sea-surface wind measurements, they are discrete observations distributed through the depth of the troposphere, which makes error correction more difficult. This research builds a deep-learning model specifically for AMV error correction. The results show that AMV observational errors are greatly reduced after correction: the root mean square error (RMSE) drops by almost 40% compared with ERA5 reference values. Among these results, the optimization of solar observation errors exceeds 40%; the discrepancies at varying atmospheric pressure altitudes are notably improved; the degree of optimization for data with low QI coefficients is substantial; and there remains potential for enhancement in data with high QI coefficients. Furthermore, the consistency coefficient of the wind's physical properties is significantly enhanced. In the assimilation forecasting experiments, the corrected AMV data demonstrated superior forecasting performance. With more training data, the model corrects errors more effectively, and the corrections remain stable over time. These results show that deep learning is a feasible and useful approach for correcting errors in meteorological remote-sensing data. Full article
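Collocating the gridded ERA5 reference with the scattered AMV observation locations uses bilinear interpolation (Figure 3). A minimal sketch follows, with a hypothetical grid layout and values:

```python
import numpy as np

# Illustrative sketch: bilinearly interpolate a gridded ERA5 wind field to
# scattered AMV observation locations. Grid layout and values are hypothetical.

def bilinear(grid, lons, lats, lon0, lat0, dlon, dlat):
    """grid[j, i] holds the field at (lat0 + j*dlat, lon0 + i*dlon)."""
    x = (np.asarray(lons, dtype=float) - lon0) / dlon
    y = (np.asarray(lats, dtype=float) - lat0) / dlat
    i0 = np.clip(np.floor(x).astype(int), 0, grid.shape[1] - 2)
    j0 = np.clip(np.floor(y).astype(int), 0, grid.shape[0] - 2)
    fx, fy = x - i0, y - j0  # fractional positions inside the grid cell
    return ((1 - fx) * (1 - fy) * grid[j0, i0]
            + fx * (1 - fy) * grid[j0, i0 + 1]
            + (1 - fx) * fy * grid[j0 + 1, i0]
            + fx * fy * grid[j0 + 1, i0 + 1])

# 2x2 toy grid of U-wind (m/s); the AMV sits at the cell centre.
u_grid = np.array([[0.0, 10.0],
                   [20.0, 30.0]])
u_amv = bilinear(u_grid, lons=[0.125], lats=[0.125],
                 lon0=0.0, lat0=0.0, dlon=0.25, dlat=0.25)
print(u_amv[0])  # 15.0 (average of the four corner values)
```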
Figure 1
<p>The horizontal distribution of data from the upper-level water vapor channel (<b>a</b>, C009), mid-level water vapor channel (<b>b</b>, C010), and infrared channel (<b>c</b>, C012) at 00 UTC on 1 January 2020, and the vertical distribution of AMV data volume from 1 to 31 January for the three channels (<b>d</b>).</p>
Figure 2
<p>Reanalysis data schematic (ERA5, 500 hPa U-Wind (m/s) stratified by atmospheric pressure; the color gradient represents wind speed).</p>
Figure 3
<p>Bilinear interpolation diagram. (The red dots represent the location of ERA5 data, the green dots represent the location of AMV data, and the blue dots represent the intermediate values.)</p>
Figure 4
<p>Multi-task network schematic diagram (arrows represent data flow, circles represent different hidden layers).</p>
Figure 5
<p>Temporal branch network (based on the transformer network).</p>
Figure 6
<p>The schematic diagram of the multi-scale feature analysis temporal branch network (hidden layers represent the transformer network, as shown in <a href="#remotesensing-16-03522-f004" class="html-fig">Figure 4</a>).</p>
Figure 7
<p>Spatial branch network (based on Convolutional Neural Networks, the white squares represent the input and output datasets, and the blue squares represent the data undergoing convolution operations).</p>
Figure 8
<p>Model-training process (yellow rectangles and green circles respectively represent hidden layers in the two branch models).</p>
Figure 9
<p>Schematic diagram of the assimilation region. (The red blocks represent the nested areas of the model).</p>
Figure 10
<p>RMSE comparison between data before correction and data after correction against ERA5 data in Month 8 (purple, data after correction; pink, data before correction). C009 represents the high-level water vapor channel, C010 denotes the low-level water vapor channel, and water vapor channel data mainly focus on the mid-to-upper troposphere. C012 represents the infrared channel, with data primarily concentrated in the mid-level troposphere.</p>
Figure 11
<p>RMSE comparison between data before correction and data after correction against ERA5 data in Month 11 (purple, data after correction; pink, data before correction). C009 characterizes the high-level water vapor channel, C010 represents the low-level water vapor channel, and water vapor channel data mainly focus on the mid-to-upper troposphere. C012 denotes the infrared channel, with data primarily concentrated in the mid-level troposphere.</p>
Figure 12">
Figure 12
<p>Error distribution of the upper-level water vapor channel AMV U and V wind vectors compared to ERA5 data before and after correction on 1 November 2022 ((<b>a</b>,<b>b</b>) show the error distribution of the U and V wind vectors before correction; (<b>c</b>,<b>d</b>) show the error distribution after correction).</p>
Figure 13">
Figure 13
<p>Wind speed distribution of the three-channel AMV data compared to ERA5 data before and after correction on 1 November 2022 ((<b>a</b>–<b>c</b>) before correction; (<b>d</b>–<b>f</b>) after correction). Color represents data density: the darker the color, the more data.</p>
Figure 14">
Figure 14
<p>RMSE comparison between data before correction and data after correction at different atmospheric pressure levels against ERA5 data (red, data before correction; blue, data after correction; green, data volume at respective pressure levels).</p>
Figure 15">
Figure 15
<p>RMSE comparison between data before correction and data after correction at different QI against ERA5 data (red, data before correction; blue, data after correction; green, data volume at respective QI).</p>
Figure 16">
Figure 16
<p>Consistency coefficient comparison between data before correction and data after correction against ERA5 data (dotted line, data before correction; solid line, data after correction). C009 represents the high-level water vapor channel, C010 represents the low-level water vapor channel, and water vapor channel data mainly focus on the mid-to-upper troposphere. C012 represents the infrared channel, with data primarily concentrated in the mid-level troposphere.</p>
Figure 17">
Figure 17
<p>Analysis of model correction results with training data of different periods (red represents the original root mean square error, green represents the root mean square error corrected by the model trained with 6 months of data, pink represents the root mean square error corrected by the model trained with 12 months of data, and blue represents the root mean square error corrected by the model trained with 24 months of data).</p>
Figure 18">
Figure 18
<p>Vertical profiles of root mean square error (RMSE) for the three-dimensional variational assimilation experiment results of AMVs data before and after correction for different channels (U wind vector: (<b>a</b>,<b>e</b>,<b>i</b>,<b>m</b>); V wind vector: (<b>b</b>,<b>f</b>,<b>j</b>,<b>n</b>); temperature: (<b>c</b>,<b>g</b>,<b>k</b>,<b>o</b>); relative humidity: (<b>d</b>,<b>h</b>,<b>l</b>,<b>p</b>); blue represents the C009 channel, green represents the C010 channel, purple represents the C012 channel, and red represents the multi-channel fusion; solid lines represent pre-correction data, dashed lines represent post-correction data).</p>
Figure 19">
Figure 19
<p>Loss and gradient functions in the assimilation experiments before and after correction for C009 channel data (AC-AMVs represent the corrected AMV data, BC-AMVs represent the pre-correction AMV data).</p>
Figure 20">
Figure 20
<p>Comparison of multi-channel fusion before and after improvement on 1 November 2022: (<b>a</b>) U wind vector, (<b>b</b>) V wind vector, (<b>c</b>) temperature, (<b>d</b>) relative humidity; BC-ALL denotes before correction, AC-ALL denotes after correction, and NF-ALL denotes the new fusion method.</p>
">
20 pages, 8242 KiB  
Article
A Scene Graph Similarity-Based Remote Sensing Image Retrieval Algorithm
by Yougui Ren, Zhibin Zhao, Junjian Jiang, Yuning Jiao, Yining Yang, Dawei Liu, Kefu Chen and Ge Yu
Appl. Sci. 2024, 14(18), 8535; https://doi.org/10.3390/app14188535 - 22 Sep 2024
Viewed by 545
Abstract
With the rapid development of remote sensing image data, the efficient retrieval of target images of interest has become an important issue in various applications including computer vision and remote sensing. This research addressed the low-accuracy problem in traditional content-based image retrieval algorithms, [...] Read more.
With the rapid development of remote sensing image data, the efficient retrieval of target images of interest has become an important issue in various applications including computer vision and remote sensing. This research addressed the low-accuracy problem in traditional content-based image retrieval algorithms, which largely rely on comparing entire image features without capturing sufficient semantic information. We proposed a scene graph similarity-based remote sensing image retrieval algorithm. Firstly, a one-shot object detection algorithm was designed for remote sensing images based on Siamese networks and tailored to the objects of an unknown class in the query image. Secondly, a scene graph construction algorithm was developed, based on the objects and their attributes and spatial relationships. Several construction strategies were designed based on different relationships, including full connections, random connections, nearest connections, star connections, or ring connections. Thirdly, by making full use of edge features, a graph feature extraction network for scene graphs was established. Fourthly, a neural tensor network-based similarity calculation algorithm was designed for graph feature vectors to obtain image retrieval results. Fifthly, a dataset named remote sensing images with scene graphs (RSSG) was built for testing, which contained 929 remote sensing images with their corresponding scene graphs generated by the developed construction strategies. Finally, in performance comparison experiments with the remote sensing image retrieval algorithms AMFMN, MiLaN, and AHCL, Precision@1 improved by 10%, 7.2%, and 5.2%; Precision@5 improved by 3%, 5%, and 1.7%; and Precision@10 improved by 1.7%, 3%, and 0.6%. In recall rates, Recall@1 improved by 2.5%, 4.3%, and 1.3%; Recall@5 improved by 3.7%, 6.2%, and 2.1%; and Recall@10 improved by 4.4%, 7.7%, and 1.6%. Full article
(This article belongs to the Special Issue Deep Learning for Graph Management and Analytics)
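The abstract names five scene graph construction strategies (full, random, nearest, star, and ring connections) over detected objects. A minimal sketch of how such edge lists could be built is shown below; this is an illustration under assumed inputs (a list of labelled object centroids), not the paper's implementation.

```python
import itertools
import math
import random

def build_scene_graph(objects, strategy="nearest", seed=0):
    """objects: list of (label, x, y) tuples for detected objects.
    Returns an undirected edge list (pairs of object indices) under one
    of the connection strategies described in the abstract."""
    n = len(objects)
    pairs = list(itertools.combinations(range(n), 2))
    if strategy == "full":      # every pair of objects connected
        return pairs
    if strategy == "random":    # a random subset of the full connections
        rng = random.Random(seed)
        return sorted(rng.sample(pairs, k=max(1, n - 1)))
    if strategy == "star":      # every object attached to the first one
        return [(0, k) for k in range(1, n)]
    if strategy == "ring":      # objects joined in a cycle
        return [(k, (k + 1) % n) for k in range(n)]
    if strategy == "nearest":   # each object linked to its closest peer
        edges = set()
        for a in range(n):
            b = min((k for k in range(n) if k != a),
                    key=lambda k: math.dist(objects[a][1:], objects[k][1:]))
            edges.add(tuple(sorted((a, b))))
        return sorted(edges)
    raise ValueError(f"unknown strategy: {strategy}")
```

The resulting edge list, together with node attributes, would then feed the graph feature extraction network for similarity scoring.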
Figure 1
<p>Examples of sensing image retrieval based on semantic features.</p>
Figure 2">
Figure 2
<p>The developed remote sensing image retrieval algorithm.</p>
Figure 3">
Figure 3
<p>The schematic diagram of the SiamACDet algorithm.</p>
Figure 4">
Figure 4
<p>Flowcharts of the residual and AC modules.</p>
Figure 5">
Figure 5
<p>The flowchart of the ACSE module.</p>
Figure 6">
Figure 6
<p>The flowchart of the FPN-structure-based ACSENet.</p>
Figure 7">
Figure 7
<p>The schematic diagram of the ARPN.</p>
Figure 8">
Figure 8
<p>The flowchart of the double-head detector.</p>
Figure 9">
Figure 9
<p>Example of the scene image construction method.</p>
Figure 10">
Figure 10
<p>The EGCN module.</p>
Figure 11">
Figure 11
<p>Flowchart of the feature similarity calculation module.</p>
Figure 12">
Figure 12
<p>Sample of the RSSG dataset.</p>
">
21 pages, 14185 KiB  
Article
An Automated Machine Learning Approach to the Retrieval of Daily Soil Moisture in South Korea Using Satellite Images, Meteorological Data, and Digital Elevation Model
by Nari Kim, Soo-Jin Lee, Eunha Sohn, Mija Kim, Seonkyeong Seong, Seung Hee Kim and Yangwon Lee
Water 2024, 16(18), 2661; https://doi.org/10.3390/w16182661 - 18 Sep 2024
Viewed by 923
Abstract
Soil moisture is a critical parameter that significantly impacts the global energy balance, including the hydrologic cycle, land–atmosphere interactions, soil evaporation, and plant growth. Currently, soil moisture is typically measured by installing sensors in the ground or through satellite remote sensing, with data [...] Read more.
Soil moisture is a critical parameter that significantly impacts the global energy balance, including the hydrologic cycle, land–atmosphere interactions, soil evaporation, and plant growth. Currently, soil moisture is typically measured by installing sensors in the ground or through satellite remote sensing, with data retrieval facilitated by reanalysis models such as the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis 5 (ERA5) and the Global Land Data Assimilation System (GLDAS). However, the suitability of these methods for capturing local-scale variabilities is insufficiently validated, particularly in regions like South Korea, where land surfaces are highly complex and heterogeneous. In contrast, artificial intelligence (AI) approaches have shown promising potential for soil moisture retrieval at the local scale but have rarely demonstrated substantial products for spatially continuous grids. This paper presents the retrieval of daily soil moisture (SM) over a 500 m grid for croplands in South Korea using random forest (RF) and automated machine learning (AutoML) models, leveraging satellite images and meteorological data. In a blind test conducted for the years 2013–2019, the AutoML-based SM model demonstrated optimal performance, achieving a root mean square error of 2.713% and a correlation coefficient of 0.940. Furthermore, the performance of the AutoML model remained consistent across all the years and months, as well as under extreme weather conditions, indicating its reliability and stability. Comparing the soil moisture data derived from our AutoML model with the reanalysis data from sources such as the European Space Agency Climate Change Initiative (ESA CCI), GLDAS, the Local Data Assimilation and Prediction System (LDAPS), and ERA5 for the South Korea region reveals that our AutoML model provides a much better representation. 
These experiments confirm the feasibility of AutoML-based SM retrieval, particularly for local agrometeorological applications in regions with heterogeneous land surfaces like South Korea. Full article
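The paper trains its RF and AutoML models with the h2o library in R (see its Figure 4); an analogous random-forest soil moisture regression with the same blind-test metrics (RMSE and correlation coefficient) can be sketched in Python with scikit-learn. The data below are synthetic stand-ins for the paper's predictors (NDVI, FPAR, SPI3, soil temperature, LST, air temperature); none of the numbers reproduce the paper's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Six synthetic predictor columns standing in for NDVI, FPAR, SPI3, etc.
X = rng.random((2000, 6))
# Synthetic soil moisture (%) as a noisy function of two predictors.
y = 10 + 20 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Blind-test metrics of the kind reported in the paper (RMSE 2.713%, CC 0.940).
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
cc = float(np.corrcoef(pred, y_te)[0, 1])
print(f"RMSE={rmse:.3f}%  CC={cc:.3f}")
```

An AutoML run would replace the single estimator with a leaderboard search over several model families, keeping the evaluation identical.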
Figure 1
<p>Map of South Korea showing the distribution of the RDA soil moisture measurement stations in blue dots.</p>
Figure 2">
Figure 2
<p>Soil type according to sand–silt–clay ratio (<a href="https://www.usda.gov/" target="_blank">https://www.usda.gov/</a>, accessed on 20 August 2024).</p>
Figure 3">
Figure 3
<p>Modeling outline of random forest model.</p>
Figure 4">
Figure 4
<p>Key code implementations for the (<b>a</b>) RF and (<b>b</b>) AutoML models using the h2o library in R 3.44.0.3.</p>
Figure 5">
Figure 5
<p>Scatterplots for the RDA in situ daily SM vs. estimated daily SM in the blind test of the (<b>a</b>) RF and (<b>b</b>) AutoML models in the croplands of South Korea, 2013–2019.</p>
Figure 6">
Figure 6
<p>Scatterplots for the observed vs. estimated daily SM in the blind test of the RF models in the croplands of South Korea according to year.</p>
Figure 7">
Figure 7
<p>Scatterplots for the observed vs. estimated daily SM in the blind test of the AutoML models in the croplands of South Korea according to year.</p>
Figure 8">
Figure 8
<p>Scatterplots for the observed vs. retrieved daily SM in the blind test of the RF model in the croplands of South Korea according to month.</p>
Figure 9">
Figure 9
<p>Scatterplots for the observed vs. retrieved daily SM in the blind test of the AutoML model in the croplands of South Korea according to month.</p>
Figure 10">
Figure 10
<p>Time-series charts of the monthly soil moisture content for (<b>a</b>) RDA, (<b>b</b>) RF, and (<b>c</b>) AutoML in the croplands of South Korea, 2013–2019.</p>
Figure 11">
Figure 11
<p>Time-series charts of the RMSE and CC for monthly soil moisture in the croplands of South Korea, 2013–2019: (<b>a</b>) RMSE of RF, (<b>b</b>) CC of RF, (<b>c</b>) RMSE of AutoML, and (<b>d</b>) CC of AutoML.</p>
Figure 12">
Figure 12
<p>The bar charts for (<b>a</b>) NDVI, (<b>b</b>) FPAR, (<b>c</b>) SPI3, (<b>d</b>) Tsoil10, (<b>e</b>) Ts, and (<b>f</b>) Ta in August during 2013–2019 in the croplands of South Korea. Precipitation was higher in 2014 (red box), while temperature was higher in 2016 (blue box).</p>
Figure 13">
Figure 13
<p>The bar charts for (<b>a</b>) NDVI, (<b>b</b>) FPAR, (<b>c</b>) SPI3, (<b>d</b>) Tsoil10, (<b>e</b>) Ts, and (<b>f</b>) Ta in June during 2013–2019 in the croplands of South Korea. Precipitation was lower than usual, but the temperature was moderate in 2017 (red box).</p>
Figure 14">
Figure 14
<p>Soil moisture maps of South Korea in August 2014 with more precipitation than other years.</p>
Figure 15">
Figure 15
<p>Soil moisture maps of South Korea in August 2016 with more heatwaves than other years.</p>
Figure 16">
Figure 16
<p>Soil moisture maps of South Korea in June 2017 with less precipitation than other years.</p>
Figure 17">
Figure 17
<p>Soil moisture maps of South Korea in July and August 2018 with record-breaking heatwaves.</p>
Figure 18">
Figure 18
<p>Scatter plots for the observed vs. (<b>a</b>) AutoML, (<b>b</b>) ESA CCI, (<b>c</b>) GLDAS, (<b>d</b>) LDPAS, and (<b>e</b>) ERA5 daily soil moisture in the croplands of South Korea, 2013–2019.</p>
">
23 pages, 5452 KiB  
Article
Bio-Optical Properties and Ocean Colour Satellite Retrieval along the Coastal Waters of the Western Iberian Coast (WIC)
by Luciane Favareto, Natalia Rudorff, Vanda Brotas, Andreia Tracana, Carolina Sá, Carla Palma and Ana C. Brito
Remote Sens. 2024, 16(18), 3440; https://doi.org/10.3390/rs16183440 - 16 Sep 2024
Viewed by 835
Abstract
Essential Climate Variables (ECVs) like ocean colour provide crucial information on the Optically Active Constituents (OACs) of seawater, such as phytoplankton, non-algal particles, and coloured dissolved organic matter (CDOM). The challenge in estimating these constituents through remote sensing is in accurately distinguishing and [...] Read more.
Essential Climate Variables (ECVs) like ocean colour provide crucial information on the Optically Active Constituents (OACs) of seawater, such as phytoplankton, non-algal particles, and coloured dissolved organic matter (CDOM). The challenge in estimating these constituents through remote sensing is in accurately distinguishing and quantifying optical and biogeochemical properties, e.g., absorption coefficients and the concentration of chlorophyll a (Chla), especially in complex waters. This study evaluated the temporal and spatial variability of bio-optical properties in the coastal waters of the Western Iberian Coast (WIC), contributing to the assessment of satellite retrievals. In situ data from three oceanographic cruises conducted in 2019–2020 across different seasons were analyzed. Field-measured biogenic light absorption coefficients were compared to satellite estimates from Ocean-Colour Climate Change Initiative (OC-CCI) reflectance data using semi-analytical approaches (QAA, GSM, GIOP). Key findings indicate substantial variability in bio-optical properties across different seasons and regions. New bio-optical coefficients improved satellite data retrieval, reducing uncertainties and providing more reliable phytoplankton absorption estimates. These results highlight the need for region-specific algorithms to accurately capture the unique optical characteristics of coastal waters. Improved comprehension of bio-optical variability and retrieval techniques offers valuable insights for future research and coastal environment monitoring using satellite ocean colour data. Full article
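Relationships such as Chl<i>a</i> versus the phytoplankton absorption coefficient (the paper's Figure 6) are conventionally fitted as a power law, a<sub>ph</sub>(443) = A · Chl<i>a</i><sup>B</sup>, via linear regression in log-log space. A minimal sketch with synthetic data follows; the coefficients below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

def fit_power_law(chla, aph):
    """Fit a_ph(443) = A * Chla**B by least squares in log-log space,
    the standard form for Chla-vs-absorption relationships."""
    B, logA = np.polyfit(np.log10(chla), np.log10(aph), 1)
    return 10 ** logA, B

# Synthetic example generated with known coefficients A=0.05, B=0.7.
chla = np.array([0.1, 0.3, 1.0, 3.0, 10.0])  # mg m^-3
aph = 0.05 * chla ** 0.7                      # m^-1
A, B = fit_power_law(chla, aph)
print(A, B)  # recovers ~0.05 and ~0.7
```

Region-specific coefficients of this kind (the "RG" coefficients in the figures) are what allow the GIOP-style retrievals to be tuned to the WIC coastal waters.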
Figure 1
<p>Location of in situ sampling stations (black dots, N = 125) and their matches with OC-CCI data (red circle, N = 53) in the five regions (A, B, C, D, and E). The AQ2 campaign was carried out in April/May 2019 (spring), AQ3 in October 2019 (autumn), and AQ4 in February/March 2020 (early spring). The main capes (Carvoeiro, Espichel, Sines, and São Vicente-Sagres; black lines) and the main points of freshwater entrance (from north to south: Minho, Lima, Ave, Douro, Ria de Aveiro, Mondego, Tagus, Sado, Mira, Odiáxere, Arade, Quarteira, Ria Formosa, Gilão, and Guadiana; blue dots) are identified on the map. Isobaths of 100, 200, and 1000 m (m) were obtained from GEBCO [<a href="#B25-remotesensing-16-03440" class="html-bibr">25</a>].</p>
Figure 2">
Figure 2
<p>Schematic representation of the satellite data processing routine implemented to derive Chl<span class="html-italic">a</span> concentrations and bio-optical properties (<span class="html-italic">a</span><sub>ph</sub> and <span class="html-italic">a</span><sub>dg</sub>).</p>
Figure 3">
Figure 3
<p>Distribution map of the total in situ absorption coefficients (<span class="html-italic">a</span><sub>t-w</sub> m<sup>−1</sup>, without the sum of water absorption, N = 125) at wavelength (λ) 443 nm and their respective spectra (λ = 350 to 700 nm) by area (<b>A</b> to <b>E</b>) and oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 4">
Figure 4
<p>Triangular diagram (<b>I</b> to <b>V</b>) representing the contribution of the in situ absorption coefficients of phytoplankton (<span class="html-italic">a</span><sub>ph</sub> at 443 nm, m<sup>−1</sup>), non-algal particles (NAPs, at 443 nm, m<sup>−1</sup>), and coloured dissolved organic matter (CDOM, <span class="html-italic">a</span><sub>g</sub> at 443 nm, m<sup>−1</sup>) by sampled area (<b>A</b> to <b>E</b>) and oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring). The bar graph (<b>VI</b>) shows the percentage (%) number of samples by area and campaign for each type of water [<a href="#B57-remotesensing-16-03440" class="html-bibr">57</a>], named according to the contribution of their components: <span class="html-italic">a</span><sub>ph</sub> + <span class="html-italic">a</span><sub>g</sub>, <span class="html-italic">a</span><sub>ph</sub>, <span class="html-italic">a</span><sub>g</sub>, <span class="html-italic">a</span><sub>ph</sub> + <span class="html-italic">a</span><sub>d</sub>, and <span class="html-italic">a</span><sub>ph</sub> + <span class="html-italic">a</span><sub>g</sub> + <span class="html-italic">a</span><sub>d</sub>.</p>
Figure 5">
Figure 5
<p>Spatial variation of phytoplankton absorption coefficient (<span class="html-italic">a</span><sub>ph</sub>, m<sup>−1</sup>), non-algal particles or detritus (<span class="html-italic">a</span><sub>d</sub>, m<sup>−1</sup>), and CDOM (<span class="html-italic">a</span><sub>g</sub>, m<sup>−1</sup>), at wavelength (λ) 443 nm and their respective spectra (λ = 350 to 700 nm) by area (<b>A</b> to <b>E</b>) and oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 6">
Figure 6
<p>(<b>I</b>) Relationships between the in situ Chl<span class="html-italic">a</span> (mg m<sup>−3</sup>) and phytoplankton absorption coefficient (<span class="html-italic">a</span><sub>ph</sub> at 443 nm, m<sup>−1</sup>) obtained by fitting a power law function (“Fit”, with coefficients “a” and “b”, r-squared: R<sup>2</sup>, and the number of samples: N). The relationship between Chl<span class="html-italic">a</span> and <span class="html-italic">a</span><sub>ph</sub> was compared to the results obtained by Bricaud et al. [<a href="#B42-remotesensing-16-03440" class="html-bibr">42</a>,<a href="#B50-remotesensing-16-03440" class="html-bibr">50</a>,<a href="#B58-remotesensing-16-03440" class="html-bibr">58</a>] (B98, B04, and B10, respectively) and Loisel et al. [<a href="#B56-remotesensing-16-03440" class="html-bibr">56</a>] (L10). (<b>II</b>) <span class="html-italic">a</span><sub>ph</sub> versus Chl<span class="html-italic">a</span>, which was compared to the results obtained by Sá et al. [<a href="#B52-remotesensing-16-03440" class="html-bibr">52</a>] (S15) and the Algal 2 (A2) algorithm [<a href="#B60-remotesensing-16-03440" class="html-bibr">60</a>]. The symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 7">
Figure 7
<p>Relationships between (<b>I</b>) in situ absorption coefficient of non-algal particles (<span class="html-italic">a</span><sub>d</sub>, m<sup>−1</sup>) and in situ Chl<span class="html-italic">a</span> (mg m<sup>−3</sup>); (<b>II</b>) in situ absorption coefficient of non-algal particles (<span class="html-italic">a</span><sub>d</sub>, m<sup>−1</sup>) and turbidity (Turb, FTU); (<b>III</b>) in situ absorption coefficient of coloured dissolved organic matter or gelbstoff (<span class="html-italic">a</span><sub>g</sub>, m<sup>−1</sup>) and in situ Chl<span class="html-italic">a</span> (mg m<sup>−3</sup>); (<b>IV</b>) spectral slope of detritus (S<sub>d</sub>, nm<sup>−1</sup>) and the absorption coefficient of non-algal particles (<span class="html-italic">a</span><sub>d</sub>, m<sup>−1</sup>); (<b>V</b>) spectral slope of CDOM (S<sub>g</sub>, nm<sup>−1</sup>) and the absorption coefficient of CDOM (<span class="html-italic">a</span><sub>g</sub>, m<sup>−1</sup>); and (<b>VI</b>) spectral slope of detritus + CDOM (Sd<sub>g</sub>, nm<sup>−1</sup>) and the absorption coefficient of detritus + CDOM (<span class="html-italic">a</span><sub>dg</sub>, m<sup>−1</sup>). The fit curve was obtained by a power law function (“Fit”, with coefficients “a” and “b”, r-squared: R<sup>2</sup>, and number of samples: N). Comparisons with the curves obtained by B10 [<a href="#B42-remotesensing-16-03440" class="html-bibr">42</a>] are also presented. The symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 8">
Figure 8
<p>Comparison between in situ Chl<span class="html-italic">a</span> (mg m<sup>−3</sup>) and its corresponding retrievals using the (<b>I</b>) OC5CCI algorithm (OC-CCI) [<a href="#B9-remotesensing-16-03440" class="html-bibr">9</a>] and (<b>II</b>) Garver–Siegel–Maritorena—the GSM algorithm [<a href="#B57-remotesensing-16-03440" class="html-bibr">57</a>]. The symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 9">
Figure 9
<p>Comparison between in situ absorption coefficients (<span class="html-italic">a</span><sub>t</sub>, <span class="html-italic">a</span><sub>ph</sub>, and <span class="html-italic">a</span><sub>dg</sub> at 443 nm, m<sup>−1</sup>) and their corresponding retrievals, obtained by default algorithms—the QAA from OC-CCI v5 ((<b>I</b>)–(<b>III</b>)), GSM ((<b>IV</b>)–(<b>VI</b>)), and GIOP—using Chl<span class="html-italic">a</span> OC5CCI retrievals as their input (GIOP-OC5CCI; <b>VII</b>–<b>IX</b>). The symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 10">
Figure 10
<p>Comparison between in situ absorption coefficients (<span class="html-italic">a</span><sub>t</sub>, <span class="html-italic">a</span><sub>ph</sub>, and <span class="html-italic">a</span><sub>dg</sub> at 443 nm, m<sup>−1</sup>) (<b>I</b>–<b>III</b>) and the GIOP, with Chl<span class="html-italic">a</span> OC5CCI retrievals as the input and using the RG coefficients (regional, <a href="#app1-remotesensing-16-03440" class="html-app">Figure S5</a>), namely the GIOP-OC5CCI-RG. The symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
Figure 11">
Figure 11
<p>Comparison between in situ <span class="html-italic">a</span><sub>ph</sub> at 443 nm (m<sup>−1</sup>) and the retrievals from the OC5CCI using the RG coefficients at 443 nm (Chl<span class="html-italic">a</span> vs. <span class="html-italic">a</span><sub>ph</sub>). The dotted line is the 1:1 line, symbols correspond to different sampled areas (A to E), with different colours representing each oceanographic campaign (AQ2—spring, AQ3—autumn, and AQ4—early spring).</p>
">
28 pages, 15371 KiB  
Article
Research on the Spatial-Temporal Evolution of Changsha’s Surface Urban Heat Island from the Perspective of Local Climate Zones
by Yanfen Xiang, Bohong Zheng, Jiren Wang, Jiajun Gong and Jian Zheng
Land 2024, 13(9), 1479; https://doi.org/10.3390/land13091479 - 12 Sep 2024
Viewed by 599
Abstract
Optimizing urban spatial morphology is one of the most effective methods for improving the urban thermal environment. Some studies have used the local climate zones (LCZ) classification system to examine the relationship between urban spatial morphology and Surface Urban Heat Islands (SUHIs). However, [...] Read more.
Optimizing urban spatial morphology is one of the most effective methods for improving the urban thermal environment. Some studies have used the local climate zones (LCZ) classification system to examine the relationship between urban spatial morphology and Surface Urban Heat Islands (SUHIs). However, these studies often rely on single-time-point data, failing to consider the changes in urban space and the time-series LCZ mapping relationships. This study utilized remote sensing data from Landsat 5, 7, and 8–9 to retrieve land surface temperatures in Changsha from 2005 to 2020 using the Mono-Window Algorithm. The spatial-temporal evolution of the LCZ and the Surface Urban Heat Island Intensity (SUHII) was then examined and analyzed. This study aims to (1) propose a localized, long-time LCZ mapping method, (2) investigate the spatial-temporal relationship between the LCZ and the SUHII, and (3) develop a more convenient SUHI assessment method for urban planning and design. The results showed that the spatial-temporal evolution of the LCZ reflects the sequence of urban expansion. In terms of quantity, the number of built-type LCZs maintaining their original types is low, with each undergoing at least one type change. The open LCZs increased the most, followed by the sparse and the composite LCZs. Spatially, the LCZs experience reverse transitions due to urban expansion and quality improvements in central urban areas. Seasonal changes in the LCZ types and the SUHI vary, with differences not only among the LCZ types but also in building heights within the same type. The relative importance of the LCZ parameters also differs between seasons. The SUHI model constructed using Boosted Regression Trees (BRT) demonstrated high predictive accuracy, with R2 values of 0.911 for summer and 0.777 for winter. In practical case validation, the model explained 97.86% of the data for summer and 96.77% for winter. 
This study provides evidence-based planning recommendations to mitigate urban heat and create a comfortable built environment. Full article
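A common way to compute the Surface Urban Heat Island Intensity (SUHII) per LCZ class, as plotted in the paper's Figure 9, is the mean LST of each class minus the mean LST of a natural reference class. The sketch below illustrates that definition on toy values; the reference class and numbers are illustrative, and the paper's exact reference choice may differ.

```python
import numpy as np

def suhii_by_lcz(lst, lcz, reference_class):
    """SUHII per LCZ class: mean LST of each class minus the mean LST
    of a natural reference class (a common SUHII definition)."""
    ref = lst[lcz == reference_class].mean()
    return {c: float(lst[lcz == c].mean() - ref) for c in np.unique(lcz)}

# Toy pixels: retrieved LST (deg C) and an assumed LCZ label per pixel.
lst = np.array([34.0, 35.0, 30.0, 29.0, 31.0, 36.0])
lcz = np.array(["LCZ2", "LCZ2", "LCZD", "LCZD", "LCZA", "LCZ1"])
print(suhii_by_lcz(lst, lcz, "LCZD"))
# LCZ2 mean 34.5 vs. reference (LCZD) mean 29.5 -> SUHII(LCZ2) = 5.0
```

Aggregating such per-class means separately for summer and winter scenes gives the seasonal SUHII contrasts the study analyzes.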
Figure 1
<p>The location of the study area.</p>
Figure 2">
Figure 2
<p>Distribution of LCZ Parameters from 2005 to 2020.</p>
Figure 2 Cont.">
Figure 2 Cont.
<p>Distribution of LCZ Parameters from 2005 to 2020.</p>
Figure 3">
Figure 3
<p>The semivariogram model of building height.</p>
Figure 4">
Figure 4
<p>Various schematic diagrams of local climate zones in Changsha City.</p>
Figure 5">
Figure 5
<p>The LCZ maps in the years 2005, 2010, 2015, and 2020.</p>
Figure 6">
Figure 6
<p>The spatial variation of the LCZ types from 2005 to 2020.</p>
Figure 7">
Figure 7
<p>The urban structural development directions from 2005 to 2020.</p>
Figure 8">
Figure 8
<p>Spatiotemporal distribution of the LST in Changsha in summer and winter in 2005, 2010, 2016 and 2020: (<b>a</b>) 2005, (<b>b</b>) 2010, (<b>c</b>) 2016, and (<b>d</b>) 2020 in summer; (<b>e</b>) 2005, (<b>f</b>) 2010, (<b>g</b>) 2016, and (<b>h</b>) 2020 in winter; A: Lugu High-Tech Industrial Park, B: Changsha Economic and Technological Development Zone, C: Changsha Tianxin Economic Development Zone.</p>
Figure 9">
Figure 9
<p>Changes of the SUHII in the LCZ in summer and winter in 2005, 2010, 2016, and 2020: (<b>a</b>) 2005, (<b>b</b>) 2010, (<b>c</b>) 2016, and (<b>d</b>) 2020 in summer; (<b>e</b>) 2005, (<b>f</b>) 2010, (<b>g</b>) 2016 and (<b>h</b>) 2020 in winter. The boxplots represent the variation of SUHII values for each LCZ type, while the strip plots indicate the mean SUHII value for each LCZ type.</p>
Figure 10">
Figure 10
<p>Relative influences of the LCZ parameters in the two seasons.</p>
Figure 11">
Figure 11
<p>BRT model’s prediction results.</p>
Figure 12">
Figure 12
<p>The location and LST of Wangcheng District.</p>
">