Remote Sens., Volume 16, Issue 20 (October-2 2024) – 163 articles

Cover Story: To address positioning challenges in urban canyons, a sky-view image segmentation algorithm based on Fully Convolutional Networks (FCNs) is proposed for GNSS NLOS detection. The novel S-NDM algorithm integrates GNSS, IMU, and vision systems in a tightly coupled framework named Sky-GVIO, providing continuous and accurate positioning. Tested under SPP and RTK models, Sky-GVIO achieves meter-level accuracy in SPP and sub-decimeter precision in RTK, outperforming conventional GNSS/INS/Vision frameworks. A sky-view image dataset has also been made publicly available for use in research via the following link: https://github.com/whuwangjr/sky-view-images.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
20 pages, 4160 KiB  
Article
Enhancing Algal Bloom Level Monitoring with CYGNSS and Sentinel-3 Data
by Yan Jia, Zhiyu Xiao, Liwen Yang, Quan Liu, Shuanggen Jin, Yan Lv and Qingyun Yan
Remote Sens. 2024, 16(20), 3915; https://doi.org/10.3390/rs16203915 - 21 Oct 2024
Viewed by 785
Abstract
Algal blooms, resulting from the overgrowth of algal plankton in water bodies, pose significant environmental problems and necessitate effective remote sensing methods for monitoring. In recent years, Global Navigation Satellite System–Reflectometry (GNSS-R) has rapidly advanced and made notable contributions to many surface observation fields, providing new means for identifying algal blooms. Additionally, meteorological parameters such as temperature and wind speed, key factors in the occurrence of algal blooms, can aid in their identification. This paper utilized Cyclone GNSS (CYGNSS) data, Sentinel-3 OLCI data, and ECMWF Re-Analysis-5 meteorological data to retrieve Chlorophyll-a values. Machine learning algorithms were then employed to classify algal blooms for early warning based on Chlorophyll-a concentration. Experiments and validations were conducted from May 2023 to September 2023 in the Hongze Lake region of China. The results indicate that classification and early warning of algal blooms based on CYGNSS data produced reliable results. The ability of CYGNSS data to accurately reflect the severity of algal blooms opens new avenues for environmental monitoring and management. Full article
(This article belongs to the Special Issue Latest Advances and Application in the GNSS-R Field)
Figures:
Figure 1: Distribution of averaged CYGNSS reflection points in Hongze Lake.
Figure 2: Flowchart of the study.
Figure 3: Flowchart of MPH to obtain chl_a concentration.
Figure 4: Results of chl_a concentration retrieval based on the MPH algorithm.
Figure 5: Map of the retrieved chl_a concentration results and in situ measurements.
Figure 6: Relationship between the chl_a concentration retrieved on 9 May and 14 May and the chl_a concentration measured on 11 May.
Figure 7: chl_a concentration values corresponding to CYGNSS reflection points; the colors (blue to red) represent increasing concentration.
Figure 8: Accuracy of the chl_a concentration category predicted by XGBoost at 1 km resolution.
Figure 9: Model classification confusion matrices for the 2-class (a) and 3-class (b) classification criteria.
Figure 10: Model classification confusion matrices for the 4-class (a) and 5-class (b) classification criteria.
Figure 11: Model classification confusion matrix for the Guangdong local classification criterion.
Figure 12: Accuracy of 5-fold CV of different classification methods at different spatial resolutions.
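The workflow the abstract describes (CYGNSS observables plus meteorological drivers feeding a machine-learning classifier of bloom level) can be illustrated with a short, hedged sketch. This is not the authors' code: the feature set, chl_a class thresholds, and hyperparameters below are assumptions.

```python
# Illustrative sketch (not the authors' code): classify algal-bloom severity from
# CYGNSS-derived observables plus meteorological drivers, as the abstract describes.
# Feature names, class thresholds, and hyperparameters are assumptions.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(10, 3, n),   # hypothetical CYGNSS surface reflectivity (dB)
    rng.normal(25, 4, n),   # air temperature (deg C)
    rng.normal(3, 1, n),    # wind speed (m/s)
])
chl_a = rng.gamma(2.0, 10.0, n)              # hypothetical chlorophyll-a values (ug/L)
y = np.digitize(chl_a, bins=[10, 26, 64])    # bloom-level classes from assumed chl_a thresholds

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
scores = cross_val_score(model, X, y, cv=5)  # 5-fold CV accuracy, as reported in Figure 12
print("mean CV accuracy:", scores.mean())
```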
24 pages, 5589 KiB  
Article
Ozone Detector Based on Ultraviolet Observations on the Martian Surface
by Daniel Viúdez-Moreiras, Alfonso Saiz-Lopez, Michael D. Smith, Víctor Apestigue, Ignacio Arruego, Elisa García, Juan J. Jiménez, José A. Rodriguez-Manfredi, Daniel Toledo, Mike Wolff and María-Paz Zorzano
Remote Sens. 2024, 16(20), 3914; https://doi.org/10.3390/rs16203914 - 21 Oct 2024
Viewed by 573
Abstract
Ozone plays a key role in both atmospheric chemistry and UV absorption in planetary atmospheres. On Mars, upper-tropospheric ozone has been widely characterized by space-based instruments. However, surface ozone remains poorly characterized, hindered by the limited sensitivity of orbiters to the lowest scale height of the atmosphere and challenges in delivering payloads to the surface of Mars, which have prevented, to date, the measurement of ozone from the surface of Mars. Systematic measurements from the Martian surface could advance our knowledge of the atmospheric chemistry and habitability potential of this planet. NASA’s Mars 2020 mission includes the first ozone detector deployed on the Martian surface, which is based on discrete photometric observations in the ultraviolet band, a simple technology that could obtain the first insights into total ozone abundance in preparation for more sophisticated measurement techniques. This paper describes the Mars 2020 ozone detector and its retrieval algorithm, including its performance under different sources of uncertainty and the potential application of the retrieval algorithm on other missions, such as NASA’s Mars Science Laboratory. Pre-landing simulations using the UVISMART radiative transfer model suggest that the retrieval is robust and that it can deal with common issues affecting surface operations in Martian missions, although the expected low ozone abundance and instrument uncertainties could challenge its characterization in tropical latitudes of the planet. Other space missions will potentially include sensors of similar technology. Full article
Figures:
Figure 1: Ozone absorption cross-section as a function of wavelength in the UV-VIS for three different temperatures [35]. The temperature dependence of the absorption cross-section is mostly negligible in the middle of the Hartley band, where ozone presents the largest absorption. The two observational bands used in the retrieval are highlighted in dark grey: ξ_H, centered in the middle of the Hartley band (255 ± 5 nm), and ξ_L, centered at the edge of the Hartley band (295 ± 5 nm). In addition, ξ_Ls (250–400 nm) is also represented, which can likewise be used as a reference signal, i.e., with a comparatively much smaller ozone contribution. The difference of more than an order of magnitude in the absorption cross-section between the middle of the Hartley band (ξ_H) and the remaining channels maximizes the sensitivity to ozone in the Martian atmosphere.
Figure 2: (top) Normalized spectral responsivity for the three MEDA RDS UV channels used in the ozone retrieval (ξ_H/ch255, ξ_L/ch295, and ξ_Ls/ch250–400). (bottom) Spectral responsivity, on a logarithmic scale, for ξ_H/ch255 and ξ_L/ch295, showing the straylight at visible/infrared wavelengths. The level of straylight is low but significantly affects the ξ_H/ch255 and ξ_L/ch295 irradiance signals, given the narrow pass bands of both channels in the UV and the shape of the solar spectrum. The spectral broadband used in ξ_Ls/ch250–400 implies a mostly negligible straylight effect in that channel and is not shown. The ξ_H/ch255 filter characterization implied some uncertainty in the straylight levels (represented as upper and lower limits by dotted lines). Elevated noise was measured from 800 nm, showing the mean value convolved with the silicon responsivity.
Figure 3: (A) NASA's Mars 2020 Perseverance during the Entry, Descent, and Landing (EDL) phase; the ozone detector is located on the rover deck. (B) Image acquired by the rover's camera once deployed on Mars. The ozone detector (blue circle) is integrated in the RDS, as part of the MEDA instrument. The ξ_H, ξ_L, and ξ_Ls bands are highlighted, along with the auxiliary channels ξ_d and ξ_a (Section 2.3), used to compensate for dust deposition on the ozone channels and to infer the aerosol loading in the atmosphere, respectively; these quantities are necessary for the ozone retrieval algorithm. Image credit: NASA/JPL-Caltech/ASU.
Figure 4: Zenith-sky viewing geometry for atmospheric observations from the Martian surface.
Figure 5: Calibration of the flight model (FM) at SPASOLAB (INTA). (left) ARF calibration in a dark room, where the RDS was installed on a robotic platform allowing azimuthal and zenithal rotations relative to the incident light from a xenon lamp located 4.5 m above the RDS FM unit; (right) TRF calibration, where the RDS was located in the LT/HT thermal chamber.
Figure 6: Retrieved (azimuthally averaged) angular response function (ARF) of the UV channels obtained post-landing (the laboratory ARF considering dust in black; the version adjusted with empirical data acquired on Mars in color), showing an increase of orders of magnitude in responsivity at high zenith angles relative to the pre-landing calibrations. The remaining RDS channels with the same FoV (Table 1 and Table 2) present a similar performance.
Figure 7: Deviation of the retrieved ozone column abundance from the actual values present in the simulated atmosphere, as a function of the prescribed partitioning of the ozone vertical profile. The entire range of possible ozone vertical distributions was tested, including extreme cases (0: all ozone located in the upper troposphere; 1: ozone fully located in the lower troposphere). The prescribed partitioning (0.90, green line) was based on the vertical profile obtained by the JPL/Caltech KINETICS photochemical model as presented in [19]. Under low SZAs (right panel), the effect is negligible, with a deviation of less than 1% of the true VCD in the atmosphere. Under medium-to-high SZAs (left panel), the retrieval is somewhat sensitive to the vertical profile and the effect increases, producing a bias in the ozone VCD of up to 16% in the extreme and improbable case of ozone being fully located in the upper troposphere of Mars.
Figure 8: Ozone retrievals for mid-to-high SZAs (t1 interval, left column) and for low SZAs (t2 interval, right column), considering ±5% and ±10% errors in the measured ratio ψ, roughly corresponding to the 1σ and 2σ uncertainties of ψ (mainly derived from the absolute calibration of the detector), respectively.
Figure 9: As in Figure 8, but retrieving ozone abundances considering discrepancies between the aerosol loading and properties used in the retrieval algorithm and those used to simulate the channel measurements. The first two rows consider pure dust aerosol simulations. Two dust loadings were simulated, τ_a = 0.3 (top row) and τ_a = 0.7 (middle row), which are representative of the nominal aerosol present in tropical latitudes on Mars outside dust storms; Δτ_a = ±0.0 (ideal case), ±0.1, and ±0.2 were used in the retrieval for each dust loading. (bottom rows) Simulations with aerosol opacity equal to 0.3 (dust opacity = 0.2), also including 0.0 of water ice (dust opacity = 0.3, blue) and 0.2 of water ice (dust opacity = 0.1, green), where the retrieval algorithm has a fixed aerosol τ_a = 0.3 (0.1 of ice clouds), i.e., without knowledge of the ±0.1 ice clouds present in the simulated atmosphere. Two scenarios, with and without straylight in the ozone channels, were considered (see text).
Figure 10: As in Figure 9, but using ξ_Ls instead of ξ_L in the ratio ψ.
Figure 11: As in Figure 8, but applying the ozone retrieval algorithm to the REMS UV sensor. Ozone uncertainties are larger for MSL, for detector uncertainties similar to those of Mars 2020, due to the sensitivity of the REMS UV spectral broadbands used in the ratio ψ (UVC and UVB).
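The retrieval concept sketched in the abstract and captions (a ratio ψ between a Hartley-band channel ξ_H and a reference channel ξ_L that is inverted for the ozone column) can be illustrated with a toy Beer-Lambert lookup. The paper instead uses the UVISMART radiative transfer model; the cross-sections and all numbers below are assumptions.

```python
# Minimal sketch of a ratio-based ozone retrieval, loosely following the idea in the
# abstract and figure captions (ratio psi between a Hartley-band channel xi_H and a
# reference channel xi_L). The toy Beer-Lambert forward model and all numbers are
# assumptions; the paper uses the UVISMART radiative transfer model instead.
import numpy as np

SIGMA_H = 1.1e-17   # assumed ozone cross-section near 255 nm (cm^2/molecule)
SIGMA_L = 6.0e-19   # assumed ozone cross-section near 295 nm (cm^2/molecule)
DU = 2.687e16       # molecules/cm^2 per Dobson unit

def forward_psi(ozone_du, airmass=1.0):
    """Toy forward model: channel ratio as a function of ozone column (DU)."""
    vcd = ozone_du * DU
    return np.exp(-airmass * (SIGMA_H - SIGMA_L) * vcd)

# Build a lookup table psi(ozone) and invert a "measured" ratio by interpolation.
grid_du = np.linspace(0.1, 10.0, 200)          # plausible Martian ozone columns (DU)
psi_grid = forward_psi(grid_du)
psi_measured = 0.93
retrieved_du = np.interp(psi_measured, psi_grid[::-1], grid_du[::-1])  # psi decreases with ozone
print(f"retrieved ozone column ~ {retrieved_du:.2f} DU")
```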
15 pages, 2289 KiB  
Technical Note
Detection of Complex Formations in an Inland Lake from Sentinel-2 Images Using Atmospheric Corrections and a Fully Connected Deep Neural Network
by Damianos F. Mantsis, Anastasia Moumtzidou, Ioannis Lioumbas, Ilias Gialampoukidis, Aikaterini Christodoulou, Alexandros Mentes, Stefanos Vrochidis and Ioannis Kompatsiaris
Remote Sens. 2024, 16(20), 3913; https://doi.org/10.3390/rs16203913 - 21 Oct 2024
Viewed by 528
Abstract
The detection of complex formations, initially suspected to be oil spills, is investigated using atmospherically corrected multispectral satellite images and deep learning techniques. Several formations have been detected in an inland lake in Northern Greece. Four atmospheric corrections (ACOLITE, iCOR, Polymer, and C2RCC) that are specifically designed for water applications are examined and implemented on Sentinel-2 multispectral satellite images to eliminate the influence of the atmosphere. Out of the four algorithms, iCOR and ACOLITE are able to depict the formations sufficiently; however, the latter is chosen for further processing due to fewer uncertainties in the depiction of these formations as anomalies across the multispectral range. Furthermore, a number of formations are annotated at the pixel level for the 10 m bands (red, green, blue, and NIR), and a deep neural network (DNN) is trained and validated. Our results show that the four-band configuration provides the best model for the detection of these complex formations. Despite not being necessarily related to oil spills, studying these formations is crucial for environmental monitoring, pollution detection, and the advancement of remote sensing techniques. Full article
(This article belongs to the Section Ocean Remote Sensing)
Figures:
Figure 1: Schematic showing the workflow of our methodology.
Figure 2: North-south cross-section of the 18 January 2021 formation, showing the remote sensing reflectance in all available bands after atmospheric corrections have been applied. For ACOLITE, all corrected bands are displayed; for iCOR, only the 10 m bands are displayed.
Figure 3: Visualization of the 18 January 2021 formation with B5 (560 nm) after C2RCC, iCOR, Polymer, and ACOLITE corrections have been implemented. Note that the color bar is not the same for all images, to improve the direct comparison of formation characteristics after different atmospheric corrections have been applied.
Figure 4: Visualization of the 18 January 2021 formation with NIR (833 nm) after C2RCC, iCOR, Polymer, and ACOLITE corrections have been implemented. Note that for C2RCC, the 865 nm band is plotted, given that the NIR band is not available. Lack of data in the Polymer panel indicates that the Polymer algorithm overestimates the correction, resulting in a negative R_rs that is therefore masked out.
Figure 5: Visualization of the 30 December 2017 formation with B4 (665 nm) after C2RCC, iCOR, Polymer, and ACOLITE corrections have been implemented. Note that the color bar scale is lower for C2RCC. The feature in the upper right corner represents land.
Figure 6: (a) False color image where three patches of different sizes (30 December 2017) are "puzzled together" to form one of the patches chosen for annotation, (b) formation annotation, (c) DNN prediction after ACOLITE is applied, and (d) DNN prediction without ACOLITE. For (b-d), oil spill pixels appear in yellow.
Figure 7: (a) False color image of one of the formations (18 January 2021) chosen for annotation, (b) formation annotation, (c) DNN prediction after ACOLITE is applied, and (d) DNN prediction without ACOLITE. For (b-d), oil spill pixels appear in yellow.
Figure 8: Red-NIR R_rs scatterplot for the formation and clear water pixels that are annotated and used for the DNN training.
Figure 9: Schematic showing the DNN in three different configurations corresponding to the three types of experiments: (a) Red-NIR, (b) Green-NIR, and (c) 4-Bands.
Figure 10: (a) Sentinel-2 false color image (green-red-NIR) showing the oil spill case for 27 February 2017, (b) zoomed area in the yellow box, (c) DNN prediction.
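A minimal sketch of the kind of fully connected pixel classifier the abstract describes, operating on the four 10 m bands. The exact layer sizes and training setup of the paper's DNN are not reproduced; the architecture below is an assumption.

```python
# Illustrative sketch (architecture sizes are assumptions, not the paper's exact DNN):
# a small fully connected network that classifies individual pixels as "formation" or
# "clear water" from the four 10 m Sentinel-2 bands (blue, green, red, NIR) after
# atmospheric correction.
import torch
import torch.nn as nn

class PixelDNN(nn.Module):
    def __init__(self, n_bands: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 2),          # two classes: formation vs. clear water
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PixelDNN()
rrs = torch.rand(8, 4)                 # a batch of 8 pixels with 4-band reflectance (dummy values)
logits = model(rrs)
pred = logits.argmax(dim=1)            # class 1 could mark formation pixels (yellow in Figures 6-7)
print(pred)
```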
22 pages, 7929 KiB  
Article
Remote Sensing LiDAR and Hyperspectral Classification with Multi-Scale Graph Encoder–Decoder Network
by Fang Wang, Xingqian Du, Weiguang Zhang, Liang Nie, Hu Wang, Shun Zhou and Jun Ma
Remote Sens. 2024, 16(20), 3912; https://doi.org/10.3390/rs16203912 - 21 Oct 2024
Viewed by 829
Abstract
The rapid development of sensor technology has made multi-modal remote sensing data valuable for land cover classification due to its diverse and complementary information. Many feature extraction methods for multi-modal data, combining light detection and ranging (LiDAR) and hyperspectral imaging (HSI), have recognized the importance of incorporating multiple spatial scales. However, effectively capturing both long-range global correlations and short-range local features simultaneously on different scales remains a challenge, particularly in large-scale, complex ground scenes. To address this limitation, we propose a multi-scale graph encoder–decoder network (MGEN) for multi-modal data classification. The MGEN adopts a graph model that maintains global sample correlations to fuse multi-scale features, enabling simultaneous extraction of local and global information. The graph encoder maps multi-modal data from different scales to the graph space and completes feature extraction in the graph space. The graph decoder maps the features of multiple scales back to the original data space and completes multi-scale feature fusion and classification. Experimental results on three HSI-LiDAR datasets demonstrate that the proposed MGEN achieves considerable classification accuracies and outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
Figures:
Figure 1: Overall framework of the proposed MGEN for multi-modal data classification.
Figure 2: Structure of the graph encoder.
Figure 3: Visualized classification results of the proposed MGEN and compared methods on the Trento dataset.
Figure 4: Visualized classification results of the proposed MGEN and compared methods on the MUUFL dataset.
Figure 5: Visualized classification results of the proposed MGEN and compared methods on the Houston dataset.
Figure 6: Experimental results of using a single scale with different values of λ.
Figure 7: Classification accuracy results of the proposed MGEN with different scale parameters on the MUUFL dataset. OA, AA, and Kappa are displayed in blue, orange, and green, respectively, with the intensity of the colors indicating the magnitude of the values.
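As background for the graph-space feature extraction the abstract describes, the sketch below shows one generic graph-convolution step over fused node features. It is the standard GCN formulation, not the MGEN encoder-decoder itself, and all sizes are assumptions.

```python
# A generic single graph-convolution step, sketched to illustrate the kind of graph-space
# feature propagation an encoder-decoder such as MGEN relies on; it is not the authors'
# architecture. The symmetric normalization A_hat = D^-1/2 (A + I) D^-1/2 is the standard
# GCN formulation.
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: relu(A_hat @ X @ W)."""
    A_tilde = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy example: 5 graph nodes (e.g., samples carrying fused HSI + LiDAR features).
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))      # node features
A = (rng.random((5, 5)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T   # symmetric adjacency, no self-loops
W = rng.normal(size=(8, 4))      # learnable weights (random here)
print(gcn_layer(X, A, W).shape)  # -> (5, 4)
```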
22 pages, 6149 KiB  
Article
ER-MACG: An Extreme Precipitation Forecasting Model Integrating Self-Attention Based on FY4A Satellite Data
by Mingyue Lu, Jingke Zhang, Manzhu Yu, Hui Liu, Caifen He, Tongtong Dong and Yongwei Mao
Remote Sens. 2024, 16(20), 3911; https://doi.org/10.3390/rs16203911 - 21 Oct 2024
Viewed by 494
Abstract
Extreme precipitation events often present significant risks to human life and property, making their accurate prediction an essential focus of current research. Recent studies have primarily concentrated on exploring the formation mechanisms of extreme precipitation. Existing prediction methods do not adequately account for the combined terrain and atmospheric effects, resulting in shortcomings in extreme precipitation forecasting accuracy. Additionally, the satellite data resolution used in prior studies fails to precisely capture nuanced details of abrupt changes in extreme precipitation. To address these shortcomings, this study introduces an innovative approach for accurately predicting extreme precipitation: the multimodal attention ConvLSTM-GAN for extreme rainfall nowcasting (ER-MACG). This model employs high-resolution Fengyun-4A (FY4A) satellite precipitation products, as well as terrain and atmospheric datasets as inputs. The ER-MACG model enhances the ConvLSTM-GAN framework by optimizing the generator structure with an attention module to improve the focus on critical areas and time steps. This model can alleviate the problem of information loss in the spatial–temporal convolutional long short-term memory network (ConvLSTM) and, compared with the standard ConvLSTM-GAN model, can better handle the detailed changes in time and space in extreme precipitation events to achieve more refined predictions. The main findings include the following: (a) The ER-MACG model demonstrated significantly greater predictive accuracy and overall performance than other existing approaches. (b) The exclusive consideration of DEM and LPW data did not significantly enhance the ability to predict extreme precipitation events in Zhejiang Province. (c) The ER-MACG model significantly improved the identification and prediction of extreme precipitation events of different intensity levels. Full article
Figures:
Figure 1: The location and terrain of Zhejiang, China.
Figure 2: Distribution of precipitation values.
Figure 3: Identification of independent extreme precipitation events.
Figure 4: The structure of Att-ConvLSTM.
Figure 5: The structure of the Att module.
Figure 6: The structure and data flow of ER-MACG, where t − m + 1:t represents the past m frames of images at time t and t + 1:t + 1 + m represents the future m frames of images at time t.
Figure 7: Evaluation curves of the POD (a), FAR (b), CSI (c), R (d), RMSE (e), and MAE (f) performance metrics for different forecasting methods over 50 min.
Figure 8: Cumulative precipitation distribution map for rainfall exceeding 18 mm/5 min from May to October, 2018–2023.
Figure 9: Comparison of three precipitation intensities during extreme precipitation events. (a) Case 1, moderate precipitation. (b) Case 2, heavy precipitation. (c) Case 3, light precipitation.
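To make the attention idea concrete, the sketch below shows a minimal spatial-attention gate that re-weights a feature map. The actual Att module and ConvLSTM-GAN wiring of ER-MACG are more involved and are not reproduced here; channel counts and shapes are assumptions.

```python
# A minimal spatial-attention gate in PyTorch, sketched only to illustrate the general
# idea of weighting "critical areas" in a ConvLSTM feature map; the actual Att module
# and ConvLSTM-GAN generator in ER-MACG are more involved and are not reproduced here.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # per-pixel attention score

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.score(feat))               # (B, 1, H, W) weights in [0, 1]
        return feat * attn                                    # re-weight the feature map

feat = torch.randn(2, 16, 64, 64)   # e.g., hidden state of a ConvLSTM cell (dummy values)
gated = SpatialAttention(16)(feat)
print(gated.shape)                  # -> torch.Size([2, 16, 64, 64])
```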
21 pages, 14622 KiB  
Article
Cross-Spectral Navigation with Sensor Handover for Enhanced Proximity Operations with Uncooperative Space Objects
by Massimiliano Bussolino, Gaia Letizia Civardi, Matteo Quirino, Michele Bechini and Michèle Lavagna
Remote Sens. 2024, 16(20), 3910; https://doi.org/10.3390/rs16203910 - 21 Oct 2024
Viewed by 520
Abstract
Close-proximity operations play a crucial role in emerging mission concepts, such as Active Debris Removal or small celestial body exploration. When approaching a non-cooperative target, the increased risk of collisions and reduced reliance on ground intervention necessitate autonomous on-board relative pose (position and attitude) estimation. Although navigation strategies relying on monocular cameras operating in the visible (VIS) spectrum have been extensively studied and tested in flight for navigation applications, their accuracy is heavily related to the target’s illumination conditions, thus limiting their applicability range. The novelty of the paper is the introduction of a thermal-infrared (TIR) camera to complement the VIS one to mitigate the aforementioned issues. The primary goal of this work is to evaluate the enhancement in navigation accuracy and robustness by performing VIS-TIR data fusion within an Extended Kalman Filter (EKF) and to assess the performance of such a navigation strategy in challenging illumination scenarios. The proposed navigation architecture is tightly coupled, leveraging correspondences between a known uncooperative target and feature points extracted from multispectral images. Furthermore, handover from one camera to the other is introduced to enable seamless operations across both spectra while prioritizing the most significant measurement sources. The pipeline is tested on synthetically generated VIS and TIR images of the Tango spacecraft. A performance assessment is carried out through numerical simulations considering different illumination conditions. Our results demonstrate that a combined VIS-TIR navigation strategy effectively enhances operational robustness and flexibility compared to traditional VIS-only navigation chains. Full article
Figures:
Figure 1: VIS-TIR coupling strategies.
Figure 2: Navigation chain architecture.
Figure 3: Visualization of the major steps of the re-initialization process. (a) Landmarks vs. feature detection; (b) convex hull visualization; (c) landmark-to-feature matching.
Figure 4: VIS synthetic images (left) and the respective TIR synthetic images (right).
Figure 5: Relative target-chaser distance.
Figure 6: VIS images acquired at t = 0 s (left) and t = 1000 s (right), respectively. Test case n.1.
Figure 7: Average position KE (left) and attitude KE (right) over 250 simulations. Test case n.1.
Figure 8: Number of matched feature pairs (left) and mean feature measurement noise covariance normalized over image size (right). Test case n.1.
Figure 9: Sun aspect angle evolution for test case n.2.
Figure 10: VIS images acquired at t = 0 s (left) and t = 1000 s (right), respectively. Test case n.2.
Figure 11: Average position KE (left) and attitude KE (right) over 250 simulations. Test case n.2.
Figure 12: Average position KE (left) and attitude KE (right) over 250 simulations. In the red band, only the TIR camera is used due to poor illumination in the VIS spectrum.
Figure 13: Number of matched feature pairs (left) and mean feature measurement noise covariance (right). Test case n.2.
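The handover logic the abstract describes (prioritizing the most significant measurement source across the two spectra) can be sketched as a simple rule in front of a standard EKF update. The thresholds, noise values, and state layout below are assumptions, not the paper's tuning.

```python
# Hedged sketch of the sensor-handover idea described in the abstract: pick which spectrum
# feeds the EKF update based on how many feature matches each camera returns, and inflate
# the measurement noise when the chosen source is expected to be noisier. All thresholds
# and noise values are assumptions.
import numpy as np

def select_measurement(n_matches_vis: int, n_matches_tir: int, min_matches: int = 8):
    """Return (source, R_scale): which camera to use and a noise inflation factor."""
    if n_matches_vis >= min_matches and n_matches_vis >= n_matches_tir:
        return "VIS", 1.0
    if n_matches_tir >= min_matches:
        return "TIR", 1.5          # assumed: TIR features taken as noisier than VIS ones
    return "none", np.inf          # skip the update, propagate the filter only

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update with a linear(ized) measurement matrix H."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)                     # toy relative position/velocity state
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # position-only measurement model
z = np.array([10.0, 0.1, -0.2])                   # toy measurement from the chosen camera
src, r_scale = select_measurement(n_matches_vis=3, n_matches_tir=20)
if src != "none":
    x, P = ekf_update(x, P, z, H, r_scale * 0.05 * np.eye(3))
print(src, x[:3])
```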
16 pages, 12826 KiB  
Article
Seasonal and Interannual Variations in Sea Ice Thickness in the Weddell Sea, Antarctica (2019–2022) Using ICESat-2
by Mansi Joshi, Alberto M. Mestas-Nuñez, Stephen F. Ackley, Stefanie Arndt, Grant J. Macdonald and Christian Haas
Remote Sens. 2024, 16(20), 3909; https://doi.org/10.3390/rs16203909 - 21 Oct 2024
Viewed by 747
Abstract
The sea ice extent in the Weddell Sea exhibited a positive trend from the start of satellite observations in 1978 until 2016 but has shown a decreasing trend since then. This study analyzes seasonal and interannual variations in sea ice thickness using ICESat-2 laser altimetry data over the Weddell Sea from 2019 to 2022. Sea ice thickness was calculated from ICESat-2’s ATL10 freeboard product using the Improved Buoyancy Equation. Seasonal variability in ice thickness, characterized by an increase from February to September, is more pronounced in the eastern Weddell sector, while interannual variability is more evident in the western Weddell sector. The results were compared with field data obtained between 2019 and 2022, showing a general agreement in ice thickness distributions around predominantly level ice. A decreasing trend in sea ice thickness was observed when compared to measurements from 2003 to 2017. Notably, the spring of 2021 and summer of 2022 saw significant decreases in Sea Ice Extent (SIE). Although the overall mean sea ice thickness remained unchanged, the northwestern Weddell region experienced a noticeable decrease in ice thickness. Full article
(This article belongs to the Special Issue Monitoring Sea Ice Loss with Remote Sensing Techniques)
Figures:
Figure 1: (a) Sea ice concentration from NSIDC for Antarctica in January 2022, with solid dark blue depicting ice concentrations smaller than 15% and white indicating 100% ice. The study area in the Weddell Sea is indicated by the yellow polygon. (b) Expanded study area map showing the location of ICESat-2 tracks for September 2022. Also shown are the 45°W meridian, which divides the study area into eastern and western sectors, and the 68°S parallel, which further divides the western sector into northwestern and southwestern regions for the purposes of this study. The solid blue dots indicate the approximate locations of the field observations used in this study, obtained from 2019 to 2022.
Figure 2: (a-p) Multi-panel maps of the study area showing ICESat-2 total freeboard tracks from 2019 to 2022 for different seasons. The bottom two panels show: (q) the modal values of freeboard in meters for the western Weddell (solid color circles), with second modal values shown by color crosses, for 2019 (red), 2020 (black), 2021 (green), and 2022 (blue); and (r) the same as (q) but for the eastern Weddell.
Figure 3: Thickness estimates of the western Weddell (left: a, c, e, g) and eastern Weddell (right: b, d, f, h) regions from 2019 to 2022. Mean, mode, and standard deviation (SD) in meters. Bimodal mode values in the western Weddell are shown in parentheses.
Figure 4: Location of ICESat-2 tracks (red) with field data (blue). Field data from 2019 and 2021 are point measurements; the data from 2022 are thickness measurements from ship-based EM data collected on board the Endurance22 expedition.
Figure 5: Mean thickness from the (a) northwestern and (b) southwestern Weddell Sea.
Figure 6: ERA5 monthly averaged 2 m air temperature in Kelvin for the (a) western and (b) eastern Weddell, 2019–2022.
Figure 7: Comparison of ICESat and IceBridge mean thickness results (blue) from [7], using the same method, with ICESat-2 results from this study (orange).
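The freeboard-to-thickness conversion the abstract mentions can be illustrated with the classical hydrostatic-balance relation. The paper's Improved Buoyancy Equation and its snow/density treatment may differ from this textbook form, so the densities and snow depth below are illustrative assumptions.

```python
# Hedged sketch: convert ICESat-2 total freeboard to sea ice thickness with the classical
# hydrostatic-balance relation. The paper applies an "Improved Buoyancy Equation", whose
# snow and density treatment may differ from this textbook form; the densities and snow
# depth below are illustrative assumptions.
RHO_WATER = 1024.0   # kg/m^3, sea water (assumed)
RHO_ICE = 915.0      # kg/m^3, sea ice (assumed)
RHO_SNOW = 300.0     # kg/m^3, snow (assumed)

def ice_thickness_from_freeboard(total_freeboard_m: float, snow_depth_m: float) -> float:
    """Hydrostatic balance rho_w*F = (rho_w - rho_i)*h_i + (rho_w - rho_s)*h_s, solved for h_i."""
    return (RHO_WATER * total_freeboard_m - (RHO_WATER - RHO_SNOW) * snow_depth_m) / (
        RHO_WATER - RHO_ICE
    )

# e.g., 0.35 m ATL10 total freeboard with 0.20 m of snow gives roughly 2 m of ice
print(f"{ice_thickness_from_freeboard(0.35, 0.20):.2f} m")
```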
21 pages, 13880 KiB  
Article
A Modified Method for Reducing the Scale Effect in Land Surface Temperature Downscaling at 10 m Resolution
by Zhida Guo, Lei Cheng, Liwei Chang, Shiqiong Li and Yuzhu Li
Remote Sens. 2024, 16(20), 3908; https://doi.org/10.3390/rs16203908 - 21 Oct 2024
Viewed by 622
Abstract
Satellite-derived Land Surface Temperature (LST) plays an important role in research on natural energy balance and water cycle. Considering the tradeoff between spatial and temporal resolutions, accurate fine-resolution LST must be obtained through the use of LST downscaling (DLST) technology. Various methods have been proposed for DLST at fine resolutions (e.g., 10 m) and small scales. However, the scale effect of these methods, which is inherent to DLST processes at different extents, has rarely been addressed, thus limiting their application. In this study, a modified daily 10 m resolution DLST method based on Google Earth Engine, called mDTSG, is proposed in order to reduce the scale effect at fine spatial resolutions. The proposed method introduces a convolution-based moving window into the DLST process for the fusion of different remote sensing data. The performance of the modified method is compared with the original method in six regions characterized by various extents and landscape heterogeneity. The results show that the scale effect is significant in the DLST process at fine resolutions across extents ranging from 100 km² to 22,500 km². Compared with the original method, mDTSG can effectively reduce the LST value differences between tile edges, especially when considering large extents (>22,500 km²) with an average R² improvement of 33.75%. The average MAE is 1.63 °C, and the average RMSE is 2.3 °C in the mDTSG results, when compared with independent remote sensing products across the six regions. A comparison with in situ observations also shows promising results, with an MAE of 2.03 °C and an RMSE of 2.63 °C. These findings highlight the robustness and scalability of the mDTSG method, making it a valuable tool for fine-resolution LST applications in diverse and extensive landscapes. Full article
Figures:
Figure 1: The flowchart of the mDTSG algorithm. The implementation steps of mDTSG are shown on the right, and the DLST implementation steps in the "window calculation" are shown on the left.
Figure 2: Schematic diagram of the moving window. The gray square is a pixel of the raster image. The dark blue square represents the convolution kernel range. The light blue square represents the receptive field range. The red square represents the output range.
Figure 3: Location of the case study areas. The six different surface cover types correspond to: (a1) grass; (a2) shrubs; (a3) crops; (a4) forest; (a5) urban; and (a6) water. (b1) Overall map of the Danjiangkou Reservoir; and (b2) specific location of the Danjiangkou station (red dot).
Figure 4: The results of the original DLST method for different study areas at different extents. Each row represents the results for the same land cover type, while each column corresponds to the results for the same spatial extent. Because of the minimum extent limitation of 10 km × 10 km, the 60 km × 60 km and 150 km × 150 km extents are both cropped to show a 10 km × 10 km extent.
Figure 5: The results of the evaluation in the six study areas. The observed LST is from the calibrated Landsat-8 product. The simulated LST is from the mDTSG model. The light grey line is the 1:1 line. The black line is the linear regression result.
Figure 6: The results of validation at Danjiangkou station, including (a) the diagonal scatter plot and (b) the distribution trend in the time series. The observed LST is from the calibrated in situ data. The simulated LST is from the mDTSG model.
Figure 7: The local results of mDTSG in the six study areas. (a1-f1) Sentinel-2 true-color raster images, (a2-f2) local results of mDTSG in the 10 km × 10 km extent, and (a3-f3) details within the black square in (a2-f2).
Figure 8: Comparison of observed and simulated images in six different landscapes (in different columns). (a1-f1) MOD11A1 images, (a2-f2) DTSG simulation results, (a3-f3) mDTSG simulation results, and (a4-f4) calibrated Landsat-8 images. Only the 10 km × 10 km extent with obvious features is shown.
Figure 9: The DLST results when b is kept constant and the size of a is varied. Only the obvious features at b = 0.01° and (a) a = 0.05°, (b) a = 0.1°, (c) a = 0.5°, and (d) a = 1° are shown, for a range of 20 km × 20 km. Black areas are missing values in the images.
Figure 10: The DLST results when a is kept constant and the size of b is varied. Only the obvious features at a = 0.1° and (a) b = 0.01°, (b) b = 0.02°, and (c) b = 0.05° are shown, for a range of 20 km × 20 km. Black areas are missing values in the images.
Figure A1: The results of the evaluation in the six study areas. The observed LST is from the calibrated Landsat-8 product. The simulated LST is from the DTSG model. The light grey line is the 1:1 line. The black line is the linear regression result.
Figure A2: The shadow distribution locations of the forest and urban regions. (a1,b1) Sentinel-2 true-color raster images, (a2,b2) local results of mDTSG, and (a3,b3) local Landsat-8 image within the gray square in (*1).
Figure A3: The global results of mDTSG in the six study areas. A study extent of 150 km × 150 km was selected in the (a) forest, (b) shrubs, (c) grass, (d) urban, (e) crops, and (f) water regions. The black square extent is shown in Figure 7. The red square extent is shown in Figures 9 and 10. Black areas are missing values in the images.
Figure A4: Comparison of the DTSG and mDTSG methods in six different landscapes (in different columns). The first and second rows (a1-f1, a2-f2) show the performance of DTSG at the 10 km × 10 km and 60 km × 60 km spatial extents. The third row (a3-f3) shows the performance of mDTSG.
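A much-simplified sketch of window-based downscaling in the spirit of mDTSG: fit a coarse-scale LST-predictor relation inside each moving window and apply it at fine scale. The real method runs on Google Earth Engine with convolution windows controlled by the parameters a and b; the window size, predictor choice, and plain least-squares fit here are assumptions.

```python
# Simplified sketch of per-window downscaling: inside one moving window, fit a linear
# LST-predictor relation at coarse resolution and apply it to the fine-resolution
# predictor. The actual mDTSG implementation (Google Earth Engine, convolution-based
# windows, overlap handling) is not reproduced; predictor choice and sizes are assumed.
import numpy as np

def downscale_window(coarse_lst, coarse_pred, fine_pred):
    """Fit LST = s*pred + o on the coarse pixels of one window, then predict at fine scale."""
    s, o = np.polyfit(coarse_pred.ravel(), coarse_lst.ravel(), deg=1)
    return s * fine_pred + o

rng = np.random.default_rng(0)
fine_pred = rng.random((40, 40))                                   # e.g., 10 m NDVI in one window
coarse_pred = fine_pred.reshape(4, 10, 4, 10).mean(axis=(1, 3))    # aggregate to coarse cells
coarse_lst = 290.0 + 8.0 * coarse_pred + rng.normal(0, 0.3, (4, 4))  # synthetic coarse LST (K)
fine_lst = downscale_window(coarse_lst, coarse_pred, fine_pred)
print(fine_lst.shape, round(float(fine_lst.mean()), 1))            # -> (40, 40) ~294 K
```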
17 pages, 9573 KiB  
Article
Anti-Rain Clutter Interference Method for Millimeter-Wave Radar Based on Convolutional Neural Network
by Chengjin Zhan, Shuning Zhang, Chenyu Sun and Si Chen
Remote Sens. 2024, 16(20), 3907; https://doi.org/10.3390/rs16203907 - 21 Oct 2024
Viewed by 528
Abstract
Millimeter-wave radars are widely used in various environments due to their excellent detection capabilities. However, the detection performance in severe weather environments is still an important research challenge. In this paper, the propagation characteristics of millimeter-wave radar in a rainfall environment are thoroughly investigated, and the modeling of the millimeter-wave radar echo signal in a rainfall environment is completed. The effect of rainfall on radar detection performance is verified through experiments, and an anti-rain clutter interference method based on a convolutional neural network is proposed. The method combines image recognition and classification techniques to effectively distinguish target signals from rain clutter in radar echo signals based on feature differences. In addition, this paper compares the recognition results of the proposed method with VGGnet and Resnet. The experimental results show that the proposed convolutional neural network method significantly improves the target detection capability of the radar system in a rainfall environment, verifying the method’s effectiveness and accuracy. This study provides a new solution for the application of millimeter-wave radar in severe weather conditions. Full article
Figures:
Figure 1: Schematic diagram of target detection using the triangular wave linear frequency modulation signal.
Figure 2: Field experiment and result of range measurement. (a) Field experiment scene of range measurement. (b) Result of range measurement.
Figure 3: CST simulation analysis. (a) Schematic diagram. (b) CST model diagram.
Figure 4: Results of CST simulation analysis. (a) Result of point O. (b) Result of point P.
Figure 5: Raindrop spectrum modeling under a 1.2 rainfall rate.
Figure 6: The results of simulation in a rainfall environment. (a) Time-domain diagram. (b) Frequency-domain diagram. (c) Spectrogram.
Figure 7: Signal detection experiment in the rainfall environment.
Figure 8: The rain clutter signals collected from the experiment. (a) Time-domain diagram. (b) Frequency-domain diagram. (c) Spectrogram.
Figure 9: Range measurement experiment in a rainfall environment. (a) Schematic diagram. (b) Scene. (c) Range measurement results.
Figure 10: The spectrograms under different environmental and signal conditions from the experiment. (a) The target signal in a non-rain environment. (b) The rain clutter signal in a rain environment. (c) The mixed signal in a rain environment.
Figure 11: The spectrograms under different environmental and signal conditions from the simulation after thresholding. (a) The target signal in a non-rain environment. (b) The rain clutter signal in a rain environment. (c) The mixed signal in a rain environment.
Figure 12: The structure of the CNN.
Figure 13: The loss of the CNN model.
Figure 14: The confusion matrices under different CNN models. (a) The confusion matrix under the VGGnet model. (b) The confusion matrix under the Resnet model. (c) The confusion matrix under the CNN model used in this paper.
Figure 15: The ranging result after recognition.
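A compact, illustrative CNN of the kind the paper compares against VGGnet and Resnet, classifying spectrogram images into target, rain clutter, and mixed classes. The input resolution, layer sizes, and class count below are assumptions.

```python
# Illustrative sketch only: a compact CNN that classifies radar spectrogram images into
# target, rain clutter, or mixed classes, mirroring the kind of network compared against
# VGGnet and Resnet in the paper. Layer sizes, input resolution, and class count are
# assumptions.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

x = torch.randn(4, 1, 64, 64)          # a batch of 64x64 single-channel spectrograms (dummy)
print(SpectrogramCNN()(x).shape)       # -> torch.Size([4, 3])
```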
21 pages, 10239 KiB  
Article
Accurate Estimation of Gross Primary Production of Paddy Rice Cropland with UAV Imagery-Driven Leaf Biochemical Model
by Xiaolong Hu, Liangsheng Shi, Lin Lin, Shenji Li, Xianzhi Deng, Jinmin Li, Jiang Bian, Chenye Su, Shuai Du, Tinghan Wang, Yujie Wang and Zhitao Zhang
Remote Sens. 2024, 16(20), 3906; https://doi.org/10.3390/rs16203906 - 21 Oct 2024
Viewed by 735
Abstract
Accurate estimation of gross primary production (GPP) of paddy rice fields is essential for understanding cropland carbon cycles, yet remains challenging due to spatial heterogeneity. In this study, we integrated high-resolution unmanned aerial vehicle (UAV) imagery into a leaf biochemical properties-based model for improving GPP estimation. The key parameter, maximum carboxylation rate at the top of the canopy (Vcmax,0^25), was quantified using various spatial information representation methods, including mean (μref) and standard deviation (σref) of reflectance, gray-level co-occurrence matrix (GLCM)-based features, local binary pattern histogram (LBPH), and convolutional neural networks (CNNs). Our models were evaluated using a two-year eddy covariance (EC) system and UAV measurements. The results show that incorporating spatial information can vastly improve the accuracy of Vcmax,0^25 and GPP estimation. CNN methods achieved the best Vcmax,0^25 estimation, with an R of 0.94, an RMSE of 19.44 μmol m−2 s−1, and an MdAPE of 11%, and further produced highly accurate GPP estimates, with an R of 0.92, an RMSE of 6.5 μmol m−2 s−1, and an MdAPE of 23%. The μref-GLCM texture features and μref-LBPH joint-driven models also gave promising results. However, σref contributed less to Vcmax,0^25 estimation. The Shapley value analysis revealed that the contribution of input features varied considerably across different models. The CNN model focused on the NIR and red-edge bands and paid much attention to the subregion with high spatial heterogeneity. The μref-LBPH joint-driven model mainly prioritized reflectance information. The μref-GLCM-based features joint-driven model emphasized the role of GLCM texture indices. As the first study to leverage the spatial information from high-resolution UAV imagery for GPP estimation, our work underscores the critical role of spatial information and provides new insight into monitoring the carbon cycle. Full article
Figures:
Figure 1: Flowchart of the study (μref: mean value of UAV reflectance; σref: standard deviation of UAV reflectance; GLCM: gray-level co-occurrence matrix-based texture features; LBPH: local binary pattern histogram texture feature; GPP: gross primary production; SW_IN: incoming shortwave radiation; C_a: ambient CO2 concentration; T_a: air temperature; RH: relative humidity; u: wind speed).
Figure 2: The location of the experiment site. The red five-pointed star represents the EC system. The contour lines of the footprint during the growing period in (c) are shown in steps of 10% from 10 to 90%. The background maps in (a,b) are RGB images from Google Earth. The background map in (c) is the RGB image from the UAV platform.
Figure 3: Seasonal variation in daily carbon fluxes, environmental elements, and biophysical indicators. (a) Carbon fluxes, including net ecosystem exchange, gross primary production, and ecosystem respiration; (b) incoming shortwave radiation; (c) air temperature; (d) humidity; (e) wind speed; (f) ambient CO2 concentration; (g) precipitation; (h) soil moisture; (i) plant area index; (j) leaf area index; (k) clumping index; (l) canopy height. The term "Doy" means day of the year.
Figure 4: Seasonal variation in field-scale Vcmax,0^25.
Figure 5: The accuracy of the 8 field-scale Vcmax,0^25 models. The blue points represent the training samples, and the orange points represent the validation samples. The sample numbers of the training and validation datasets are 224 and 57, respectively.
Figure 6: The accuracy of GPP estimation based on the 8 field-scale Vcmax,0^25 models. The blue bars represent the accuracy metrics on the training dataset, and the orange bars represent the accuracy metrics on the testing dataset.
Figure 7: SHAP-based feature importance analysis for the DNN Vcmax,0^25 model. Feature importance is defined as the mean of the absolute value of the SHAP for each feature (mean(|SHAP|)). Only the top 5 most important features are displayed.
Figure 8: The spatial distribution of the AlexNet-model-based SHAP.
27 pages, 31281 KiB  
Article
Tracking Moisture Dynamics in a Karst Rock Formation Combining Multi-Frequency 3D GPR Data: A Strategy for Protecting the Polychrome Hall Paintings in Altamira Cave
by Vicente Bayarri, Alfredo Prada, Francisco García, Carmen De Las Heras and Pilar Fatás
Remote Sens. 2024, 16(20), 3905; https://doi.org/10.3390/rs16203905 - 21 Oct 2024
Viewed by 878
Abstract
This study addresses the features of the internal structure of the geological layers adjacent to the Polychrome Hall ceiling of the Cave of Altamira (Spain) and their link to the distribution of moisture and geological discontinuities mainly as fractures, joints, bedding planes and detachments, using 3D Ground Penetrating Radar (GPR) mapping. In this research, 3D GPR data were collected with 300 MHz, 800 MHz and 1.6 GHz center frequency antennas. The data recorded with these three frequency antennas were combined to further our understanding of the layout of geological discontinuities and how they link to the moisture or water inputs that infiltrate and reach the ceiling surface where the rock art of the Polychrome Hall is located. The same 1 × 1 m2 area was adopted for 3D data acquisition with the three antennas, obtaining 3D isosurface (isoattribute-surface) images of internal distribution of moisture and structural features of the Polychrome Hall ceiling. The results derived from this study reveal significant insights into the overlying karst strata of Polychrome Hall, particularly the interface between the Polychrome Layer and the underlying Dolomitic Layer. The results show moisture patterns associated with geological features such as fractures, joints, detachments of strata and microcatchments, elucidating the mechanisms driving capillary rise and water infiltration coming from higher altitudes. The study primarily identifies areas of increased moisture content, correlating with earlier observations and enhancing our understanding of water infiltration patterns. This underscores the utility of 3D GPR as an essential tool for informing and putting conservation measures into practice. By delineating subsurface structures and moisture dynamics, this research contributes to a deeper analysis of the deterioration processes directly associated with the infiltration water both in this ceiling and in the rest of the Cave of Altamira, providing information to determine its future geological and hydrogeological evolution. Full article
(This article belongs to the Special Issue Multi-data Applied to Near-Surface Geophysics (Second Edition))
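For readers unfamiliar with GPR depth scales, the following minimal sketch (not taken from the paper; the relative permittivity and time windows are assumed values) shows how two-way travel time converts to depth, which is the basic step needed before reflections from the 1.6 GHz, 800 MHz and 300 MHz antennas can be placed on a common depth axis.

# Minimal sketch: GPR two-way travel time to depth in a homogeneous medium.
import numpy as np

C = 0.2998  # speed of light in m/ns

def twt_to_depth(twt_ns, rel_permittivity=8.0):
    """Two-way travel time (ns) to depth (m); permittivity is an assumed value."""
    v = C / np.sqrt(rel_permittivity)    # wave velocity in the rock, m/ns
    return v * np.asarray(twt_ns) / 2.0  # divide by 2: down-and-back path

# Approximate penetration depth for each antenna, given assumed time windows.
for label, window_ns in (("1.6 GHz", 12), ("800 MHz", 25), ("300 MHz", 90)):
    print(label, f"~{twt_to_depth(window_ns):.2f} m")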
Figures:
Figure 1: Location map of the Cave of Altamira and its projection on the orthoimage, with the Polychrome Hall rock art orthoimage overlaid.
Figure 2: Overview of previous GPR studies conducted in the Altamira Cave and on the Polychrome Hall ceiling (100 MHz, 400 MHz and 900 MHz exterior grids, interior reflection profiles, and the 2022 1.6 GHz interior grid).
Figure 3: The location of the control area ALT1 on the ceiling of the Polychrome Hall and a general view of the study area, with the central fracture indicated.
Figure 4: Orthoimage of the Polychrome ceiling with the sectorisation grid, the ALT1 control area, and the working areas of the 300/800 MHz and 1.6 GHz antennas together with basins, streams and drip points.
Figure 5: General workflow.
Figure 6: GPR data collection set-ups in control area ALT1 with the 300–800 MHz dual-frequency antenna and the 1.6 GHz antenna.
Figure 7: Location of the acquisition grids for the 1.6 GHz antenna (ALT1-A, ALT1-B) and the 300/800 MHz dual-frequency antenna (ALT1-C, ALT1-D).
Figure 8: 3D model including basins, streams, drip points and the working areas of the 1.6 GHz and 300/800 MHz antennas.
Figure 9: Processed GPR reflection profiles in the ALT1 control area showing the main subsurface discontinuities, in particular the central fracture, and their stratigraphic correlations for each antenna frequency.
Figure 10: Location of the drip points ALT1_1, ALT1_2 and ALT1_4 involved in pigment migration and loss, and evolution of the internal moisture zones detected by the 1.6 GHz antenna.
Figure 11: South area of the Polychrome Hall ceiling affected by carbonate concretions produced by seepage water entering through the fracture zones.
Figure 12: Evolution of moisture in the ALT1 control area, with significant changes in moisture depth between February 2022 and April 2023.
Figure 13: Overlay of the 3D horizontal sections framing the Dolomitic Layer in the 70–100 cm depth interval, showing irregularly distributed moisture zones projected on the ceiling surface together with micro-basins and active drip points.
Figure 14: Pigment particles and other mineral deposits carried away by drip water and collected in containers below the drip points; SEM analysis indicates microcorrosion of the supporting rock associated with infiltration water, condensation and CO2 concentration.
Figure 15: Perspective view of the discontinuities revealing zones of higher moisture across the strata of the Polychrome ceiling, combining the 1.6 GHz, 800 MHz and 300 MHz antennas.
Figure 16: Location of the drip points, mean dip of the Polychrome Hall, moisture associated with two large vertically developing fractures, and the corresponding 300 MHz reflection profile.
15 pages, 2422 KiB  
Article
Multi-Modal Object Detection Method Based on Dual-Branch Asymmetric Attention Backbone and Feature Fusion Pyramid Network
by Jinpeng Wang, Nan Su, Chunhui Zhao, Yiming Yan and Shou Feng
Remote Sens. 2024, 16(20), 3904; https://doi.org/10.3390/rs16203904 - 21 Oct 2024
Viewed by 557
Abstract
With the simultaneous acquisition of infrared and optical remote sensing images of the same target becoming increasingly easy, using multi-modal data for high-performance object detection has become a research focus. In remote sensing multi-modal data, infrared images lack color information, making it hard to detect difficult targets with low contrast, while optical images are easily affected by illumination. One of the most effective ways to solve this problem is to integrate multi-modal images for high-performance object detection. The challenge of fusion object detection lies in how to fully integrate multi-modal image features with significant modal differences and avoid introducing interference information while exploiting their complementary advantages. To solve these problems, a new multi-modal fusion object detection method is proposed. In this paper, the method is improved in two respects: firstly, a new dual-branch asymmetric attention backbone network (DAAB) is designed, which uses a semantic information supplement module (SISM) and a detail information supplement module (DISM) to supplement and enhance infrared and RGB image information, respectively. Secondly, we propose a feature fusion pyramid network (FFPN), which uses a Transformer-like strategy to carry out multi-modal feature fusion and suppress features that are not conducive to fusion during the fusion process. The proposed method achieves state-of-the-art performance on both the FLIR-aligned and DroneVehicle datasets. Experiments show that this method has strong competitiveness and generalization performance. Full article
(This article belongs to the Section Remote Sensing Image Processing)
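The sketch below is a generic PyTorch illustration of the Transformer-like cross-modal fusion idea described in the abstract, not the authors' FFPN implementation; the channel width, head count and 1 x 1 projection are assumptions.

# Illustrative cross-attention fusion of RGB and infrared feature maps.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, channels=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, ir_feat):
        b, c, h, w = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries from RGB
        kv = ir_feat.flatten(2).transpose(1, 2)   # keys/values from infrared
        fused, _ = self.attn(q, kv, kv)           # RGB attends to IR features
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([rgb_feat, fused], dim=1))

# Example: fuse 32 x 32 feature maps from the two branches.
fusion = CrossModalFusion()
out = fusion(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])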
Figures:
Figure 1: The overall scheme of the proposed method, with the calculation processes for optical, infrared, and fusion features marked in green, orange, and yellow.
Figure 2: The main structure diagram of DAAN; the part highlighted by the red line is a loop structure.
Figure 3: DISM and SISM structure diagrams.
Figure 4: The main structure diagram of FFPN.
Figure 5: Detection results of our method on typical images from the FLIR dataset.
Figure 6: Detection results of our method on typical images from the DroneVehicle dataset, compared with the baseline.
27 pages, 3720 KiB  
Article
Hierarchical Analysis of Miombo Woodland Spatial Dynamics in Lualaba Province (Democratic Republic of the Congo), 1990–2024: Integrating Remote Sensing and Landscape Ecology Techniques
by Yannick Useni Sikuzani, Médard Mpanda Mukenza, John Kikuni Tchowa, Delphin Kabamb Kanyimb, François Malaisse and Jan Bogaert
Remote Sens. 2024, 16(20), 3903; https://doi.org/10.3390/rs16203903 - 21 Oct 2024
Viewed by 969
Abstract
Lualaba Province, located in the southeastern Democratic Republic of the Congo (DRC), consists of five territories with varied dominant land uses: agriculture (Dilolo, Kapanga, and Musumba in the west) and mining (Mutshatsha and Lubudi in the east). The province also includes protected areas with significant governance challenges. The tropical dry forests that cover the unique Miombo woodland of Lualaba are threatened by deforestation, which poses risks to biodiversity and local livelihoods that depend on these forests for agriculture and forestry. To quantify the spatio-temporal dynamics of Lualaba’s landscape, we utilized Landsat images from 1990 to 2024, supported by a Random Forest Classifier. Landscape metrics were calculated at multiple hierarchical levels: province, territory, and protected areas. A key contribution of this work is its identification of pronounced deforestation trends in the unique Miombo woodlands, where the overall woodland cover has declined dramatically from 62.9% to less than 25%. This is coupled with a marked increase in landscape fragmentation, isolation of remaining woodland patches, and a shift toward more heterogeneous land use patterns, as evidenced by the Shannon diversity index. Unlike previous research, our study distinguishes between the dynamics in agricultural territories—which are particularly vulnerable to deforestation—and those in mining areas, where Miombo forest cover remains more intact but is still under threat. This nuanced distinction between land use types offers critical insights into the differential impacts of economic activities on the landscape. Our study also uncovers significant deforestation within protected areas, underscoring the failure of current governance structures to safeguard these critical ecosystems. This comprehensive analysis offers a novel contribution to the literature by linking the spatial patterns of deforestation to both agricultural and mining pressures while simultaneously highlighting the governance challenges that exacerbate landscape transformation. Lualaba’s Miombo woodlands are at a critical juncture, and without urgent, coordinated intervention from local and international stakeholders, the ecological and socio-economic foundations of the region will be irreversibly compromised. Urgent action is needed to implement land conservation policies, promote sustainable agricultural practices, strengthen Miombo woodland regulation enforcement, and actively support protected areas. Full article
(This article belongs to the Special Issue Remote Sensing of Savannas and Woodlands II)
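As a small worked example of the Shannon diversity index used here to track landscape heterogeneity, the sketch below computes H = −Σ p_i ln(p_i) from land cover proportions; the compositions are hypothetical, not the study's data.

# Worked example of the Shannon diversity index for a land cover composition.
import numpy as np

def shannon_diversity(class_proportions):
    p = np.asarray(class_proportions, dtype=float)
    p = p[p > 0] / p.sum()            # keep occupied classes, normalize to proportions
    return float(-(p * np.log(p)).sum())

# Hypothetical compositions: Miombo woodland, savanna, agriculture, built-up/bare soil.
print(round(shannon_diversity([0.63, 0.25, 0.09, 0.03]), 3))  # woodland-dominated: lower H
print(round(shannon_diversity([0.25, 0.40, 0.25, 0.10]), 3))  # more mixed landscape: higher H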
Figures:
Figure 1: Geographical map of Lualaba Province (DRC) and its five territories, showing the key agricultural zones (Sandoa, Kapanga, Dilolo), the major mining areas (Lubudi, Mutshatsha), and seven protected areas.
Figure 2: Spatial mapping of land cover dynamics in the Lualaba province landscape from 1990 to 2024, based on supervised classification of Landsat images with the Random Forest classifier.
Figure 3: Landscape composition evolution in Lualaba province and its territories (Lubudi, Mutshatsha, Dilolo, Sandoa, Kapanga) from 1990 to 2024.
Figure 4: Dynamics of landscape diversity in Lualaba province and its territories.
Figure 5: Landscape composition evolution in the protected areas of Lualaba province from 1990 to 2024.
Figure 6: Dynamics of landscape diversity in the protected areas of Lualaba Province between 1990 and 2024.
Figure 7: Dynamics of Miombo woodland spatial patterns (1990–2024): class area, patch number, largest patch index, edge density, and mean Euclidean nearest-neighbor distance.
Figure 8: Dynamics of Miombo woodland spatial patterns in the protected areas of Lualaba Province (1990–2024), using the same metrics.
25 pages, 27207 KiB  
Article
From Data to Decision: Interpretable Machine Learning for Predicting Flood Susceptibility in Gdańsk, Poland
by Khansa Gulshad, Andaleeb Yaseen and Michał Szydłowski
Remote Sens. 2024, 16(20), 3902; https://doi.org/10.3390/rs16203902 - 20 Oct 2024
Viewed by 951
Abstract
Flood susceptibility prediction is complex due to the multifaceted interactions among hydrological, meteorological, and urbanisation factors, further exacerbated by climate change. This study addresses these complexities by investigating flood susceptibility in rapidly urbanising regions prone to extreme weather events, focusing on Gdańsk, Poland. Three popular ML techniques, Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Networks (ANN), were evaluated for handling complex, nonlinear data using a dataset of 265 urban flood episodes. An ensemble filter feature selection (EFFS) approach was introduced to overcome the single-method feature selection limitations, optimising the selection of factors contributing to flood susceptibility. Additionally, the study incorporates explainable artificial intelligence (XAI), namely, the Shapley Additive exPlanations (SHAP) model, to enhance the transparency and interpretability of the modelling results. The models’ performance was evaluated using various statistical measures on a testing dataset. The ANN model demonstrated a superior performance, outperforming the RF and the SVM. SHAP analysis identified rainwater collectors, land surface temperature (LST), digital elevation model (DEM), soil, river buffers, and normalized difference vegetation index (NDVI) as contributors to flood susceptibility, making them more understandable and actionable for stakeholders. The findings highlight the need for tailored flood management strategies, offering a novel approach to urban flood forecasting that emphasises predictive power and model explainability. Full article
(This article belongs to the Special Issue Advances in GIS and Remote Sensing Applications in Natural Hazards)
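The following sketch illustrates one plausible reading of the ensemble filter feature selection (EFFS) idea, combining several filter scores by average rank; it is not the authors' implementation, and the synthetic dataset and the particular filters are assumptions.

# Sketch: rank features with several filter scores and keep those with the best mean rank.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=1)

scores = {
    "anova_f": f_classif(X, y)[0],
    "mutual_info": mutual_info_classif(X, y, random_state=1),
    "correlation": np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]),
}
# Higher score -> better rank (rank 1 = best); average the ranks across filters.
ranks = np.mean([(-s).argsort().argsort() + 1 for s in scores.values()], axis=0)
selected = np.argsort(ranks)[:5]    # keep the 5 features with the best mean rank
print("selected feature indices:", selected)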
Figures:
Figure 1: Workflow of flood susceptibility mapping using topographical and environmental data, feature selection, machine learning classifiers, and SHAP analysis.
Figure 2: The geographical location of the study area and flood event locations in Gdańsk, Poland.
Figure 3: Histograms and overlaid kernel density estimates of key features at historical flood sites.
Figure 4: Comparative analysis of feature rankings using four filter methods: ANOVA-F, gain ratio, mutual information, and correlation.
Figure 5: Accuracy of the ML classifiers using different feature selection methods and EFFS.
Figure 6: SHAP force plots for flood instances, showing the contribution of key features.
Figure 7: SHAP force plots for non-flood instances, showing the contribution of key features.
Figure 8: Explanation of the SVM, RF, and ANN models using SHAP.
Figure 9: ROC curves for training and test datasets for SVM, RF, and ANN under the best parameter configuration.
Figure 10: Flood susceptibility maps for the SVM, RF, and ANN models.
Figure 11: Flash flood-susceptible areas in Gdańsk, with land use/land cover, satellite images of specific areas, and flood counts per district.
Figures A1–A3: Spatial distribution of the flood susceptibility factors (DEM, slope, aspect, TRI, plan and profile curvature, SPI, TWI, NDVI, NDWI, LST, distances from the coastline, river network and rainwater collectors, LULC, and soil).
18 pages, 4741 KiB  
Article
Estimation of Glacier Outline and Volume Changes in the Vilcanota Range Snow-Capped Mountains, Peru, Using Temporal Series of Landsat and a Combination of Satellite Radar and Aerial LIDAR Images
by Nilton Montoya-Jara, Hildo Loayza, Raymundo Oscar Gutiérrez-Rosales, Marcelo Bueno and Roberto Quiroz
Remote Sens. 2024, 16(20), 3901; https://doi.org/10.3390/rs16203901 - 20 Oct 2024
Viewed by 643
Abstract
The Vilcanota is the second-largest snow-capped mountain range in Peru, featuring 380 individual glaciers, each with its own unique characteristics that must be studied independently. However, few studies have been conducted in the Vilcanota range to monitor and track the area and volume changes of the Suyuparina and Quisoquipina glaciers. Notably, there are only a few studies that have approached this issue using LIDAR technology. Our methodology is based on a combination of optical, radar and LIDAR data sources, which allowed for constructing coherent temporal series for both the perimeter and volume changes of the Suyuparina and Quisoquipina glaciers while accounting for the uncertainty in the perimeter detection procedure. Our results indicated that, from 1990 to 2013, there was a reduction in snow cover of 12,694.35 m2 per year for Quisoquipina and 16,599.2 m2 per year for Suyuparina. This represents a loss of 12.18% for Quisoquipina and 22.45% for Suyuparina. From 2006 to 2013, the volume of the Quisoquipina glacier decreased from 11.73 km3 in 2006 to 11.04 km3 in 2010, while the Suyuparina glacier decreased from 6.26 km3 to 5.93 km3. Likewise, when analyzing the correlation between glacier area and precipitation, a moderate inverse correlation (R = −0.52, p < 0.05) was found for Quisoquipina. In contrast, the correlation for Suyuparina was low and nonsignificant, showing inconsistency in the effect of precipitation. Additionally, the correlation between the snow cover area and the annual mean air temperature (R = −0.34, p > 0.05) and annual minimum air temperature (R = −0.36, p > 0.05) was low, inverse, and not significant for Quisoquipina. Meanwhile, snow cover on Suyuparina had a low nonsignificant correlation (R = −0.31, p > 0.05) with the annual maximum air temperature, indicating a minimal influence of the measured climatic variables near this glacier on its retreat. In general, it was possible to establish a reduction in both the area and volume of the Suyuparina and Quisoquipina glaciers based on freely accessible remote sensing data. Full article
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
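As a minimal sketch of the snow/ice mapping step behind the reported area series, the code below computes a Landsat NDSI and thresholds it into a binary glacier mask; the 0.4 threshold, 30 m pixel size and synthetic bands are assumptions, not the paper's settings.

# Minimal NDSI-based snow/ice mapping: NDSI = (green - SWIR1) / (green + SWIR1).
import numpy as np

def glacier_area_km2(green, swir1, threshold=0.4, pixel_size_m=30.0):
    ndsi = (green - swir1) / (green + swir1 + 1e-9)   # avoid division by zero
    mask = ndsi > threshold                            # binary snow/ice mask
    return mask.sum() * pixel_size_m**2 / 1e6          # pixel count -> km^2

rng = np.random.default_rng(2)
green = rng.uniform(0.2, 0.8, (100, 100))   # synthetic reflectance bands
swir1 = rng.uniform(0.0, 0.4, (100, 100))
print(f"{glacier_area_km2(green, swir1):.2f} km^2")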
Figures:
Figure 1: Airborne LIDAR point cloud (3.2 m spatial resolution) acquired over the Suyuparina and Quisoquipina glaciers in the province of Canchis, Cusco.
Figure 2: Binarized NDSI images from Landsat 5 (May 1990) and Landsat 7 (April 2013) for the Suyuparina and Quisoquipina glaciers, with the glacier outlines delimited by expert criteria.
Figure 3: Processing scheme.
Figure 4: Glacierized area of the Quisoquipina and Suyuparina glaciers, with the uncertainty band of the estimated area.
Figure 5: Glaciated area of the Quisoquipina and Suyuparina glaciers analyzed from 1990 to 1999 and from 2000 to 2013.
Figure 6: Volume changes of the Quisoquipina and Suyuparina glaciers, with confidence intervals of the fitted linear model.
Figure 7: Glacierized outlines of the Suyuparina and Quisoquipina glaciers and elevation change based on ALOS and LIDAR DEM analysis, with the 2014–2016 glaciological stakes shown as reference.
Figure 8: Scatterplots of climatic variables and glacier surface changes.
18 pages, 9065 KiB  
Article
Modeling of Solar Radiation Pressure for BDS-3 MEO Satellites with Inter-Satellite Link Measurements
by Yifei Lv, Zihao Liu, Rui Jiang and Xin Xie
Remote Sens. 2024, 16(20), 3900; https://doi.org/10.3390/rs16203900 - 20 Oct 2024
Viewed by 613
Abstract
As the largest non-gravitational force, solar radiation pressure (SRP) causes significant errors in precise orbit determination (POD) of the BeiDou global navigation satellite system (BDS-3) medium Earth orbit (MEO) satellite. This is mainly due to the imperfect modeling of the satellite’s cuboid body. Since the BDS-3’s inter-satellite link (ISL) can enhance the orbit estimation of BDS-3 satellites, the aim of this study is to establish an a priori SRP model for the satellite body using 281-day ISL observations to reduce the systematic errors in the final orbits. The adjustable box-wing (ABW) model is employed to refine the optical parameters for the satellite buses. The self-shadow effect caused by the search and rescue (SAR) antenna is considered. Satellite laser ranging (SLR), day-boundary discontinuity (DBD), and overlapping Allan deviation (OADEV) are utilized as indicators to assess the performance of the a priori model. With the a priori model developed by both ISL and ground observation, the slopes of the SLR residuals for the China Academy of Space Technology (CAST) and Shanghai Engineering Center for Microsatellites (SECM) satellites decrease from −0.097 cm/deg and 0.067 cm/deg to −0.004 cm/deg and −0.009 cm/deg, respectively. The standard deviation decreased by 21.8% and 26.6%, respectively. There are slight enhancements in the average values of DBD and OADEV, and a reduced β-dependent variation is observed in the OADEV of the corresponding clock offset. We also found that considering the SAR antenna only slightly improves the orbit accuracy. These results demonstrate that an a priori model established for the BDS-3 MEO satellite body can reduce the systematic errors in orbits, and the parameters estimated using both ISL and ground observation are superior to those estimated using only ground observation. Full article
(This article belongs to the Special Issue GNSS Positioning and Navigation in Remote Sensing Applications)
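To give a sense of the magnitude involved, the back-of-the-envelope sketch below estimates the SRP acceleration with a simple cannonball model rather than the ABW/ECOM parameterizations used in the paper; the area, mass and radiation pressure coefficient are assumed round numbers, not BDS-3 values.

# Cannonball-type estimate of SRP acceleration: a = Cr * (A/m) * Phi / c.
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
C_LIGHT = 299792458.0    # m/s

def srp_acceleration(area_m2, mass_kg, cr=1.3):
    """Cannonball SRP acceleration in m/s^2 (all inputs are assumed values)."""
    return cr * (area_m2 / mass_kg) * SOLAR_FLUX / C_LIGHT

a = srp_acceleration(area_m2=10.0, mass_kg=1000.0)
print(f"{a:.2e} m/s^2")  # on the order of 1e-8 to 1e-7 m/s^2 for a GNSS satellite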
Figures:
Figure 1: Dimensions of the satellite body and its shadow on the panel (unit: m).
Figure 2: Flowchart for the refinement of SRP models of BDS-3 MEO satellites.
Figure 3: Correlation between the parameters for the CAST (C20) and SECM (C29) satellites obtained by G1 and J1.
Figure 4: Optical parameters estimated by strategies G1, G2, J1, and J2.
Figure 5: SLR residuals dependent on the ε angle for satellites without a SAR antenna (G1 and J1 estimates).
Figure 6: OADEV at 9000 s of CAST-A (C20) and SECM-A (C30) with respect to the β angle.
Figure 7: OADEV at 9000 s of CAST-B (C33) and SECM-B (C43) with respect to the β angle.
Figure 8: SLR residuals estimated by ECOM, ECOM+G1, and ECOM+J1 dependent on the ε angle, with linear trends in cm/deg.
Figure 9: SLR residual distributions based on ECOM, ECOM+G1, and ECOM+J1.
Figure 10: OADEV at 9000 s of the CAST-A (C20), SECM-A (C30), and SECM-B (C43) satellites with respect to the β angle.
Figure 11: OADEV at 9000 s of the CAST-B (C33) satellite with respect to the β angle.
16 pages, 5879 KiB  
Article
The Characterization of Electric Field Pulses Observed in the Preliminary Breakdown Processes of Normal and Inverted Intracloud Flashes
by Dongdong Shi, Jinlai Zhang, Panliang Gao, Dong Zheng, Qi Qi, Jie Shao, Shiqi Kan, Daohong Wang and Ting Wu
Remote Sens. 2024, 16(20), 3899; https://doi.org/10.3390/rs16203899 - 20 Oct 2024
Viewed by 422
Abstract
We have studied the parameters of preliminary breakdown (PB) pulses in 395 normal and 319 inverted intracloud (IC) flashes observed in Gifu, Japan, and Ningxia, China, respectively, by using a low-frequency mapping system called fast antenna lightning mapping array (FALMA). These parameters are extracted from the first half of the PB pulses. It is found that compared to normal IC flashes, inverted IC flashes exhibited PB pulses with slower rise times (6.8 vs. 3.1 μs), wider half-peak widths (3.8 vs. 2.5 μs), longer zero-crossing times (26.2 vs. 14 μs), and extended fall times (4 vs. 3.2 μs). We further demonstrated that such discrepancies between normal and inverted IC flashes should not be caused by subjective factors, like noise threshold setting, or objective factors, like signal propagation distance. Based on this analysis, finally, we inferred that the discrepancies should be a reflection of the PB channel properties of normal and inverted IC flashes. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
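The sketch below shows how pulse parameters of the kind analyzed here (rise time, half-peak width) can be measured from a sampled waveform; the 10–90% rise and full-width-at-half-maximum conventions and the synthetic pulse are assumptions, not the paper's exact definitions.

# Illustrative measurement of rise time and half-peak width from a sampled pulse.
import numpy as np

def pulse_parameters(waveform, dt_us):
    w = np.asarray(waveform, dtype=float)
    peak = w.argmax()
    rising = w[:peak + 1]
    t10 = np.argmax(rising >= 0.1 * w[peak])   # first sample above 10% of peak
    t90 = np.argmax(rising >= 0.9 * w[peak])   # first sample above 90% of peak
    above_half = np.flatnonzero(w >= 0.5 * w[peak])
    return {"rise_time_us": (t90 - t10) * dt_us,
            "half_peak_width_us": (above_half[-1] - above_half[0]) * dt_us}

# Synthetic asymmetric pulse sampled every 0.2 microseconds.
t = np.arange(0, 40, 0.2)
pulse = np.exp(-((t - 10) / 3.0) ** 2) * (t < 10) + np.exp(-(t - 10) / 6.0) * (t >= 10)
print(pulse_parameters(pulse, dt_us=0.2))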
Figures:
Figure 1: Layout of the observation sites, showing the FALMA arrays deployed in Ningxia, China, and Gifu, Japan, and the analyzed PB initiations of inverted and normal IC flashes.
Figure 2: E-change waveform, 3D source locations, and expanded PB waveform of a normal IC case recorded by the Gifu FALMA.
Figure 3: E-change waveform, 3D source locations, and expanded PB waveform of an inverted IC case recorded by the Ningxia FALMA.
Figure 4: Parameter statistics of PB pulses in normal and inverted IC flashes (rise time, half-peak width, fall time, zero-crossing time).
Figure 5: Identified PB pulse samples as a function of the cumulative probability threshold.
Figure 6: AM values of the PB pulse parameters as a function of the cumulative probability threshold.
Figure 7: Histogram of the distance between PB initiation and the recording FALMA sites.
Figure 8: Scatterplots of the AM values of the PB pulse parameters of inverted and normal IC flashes in different distance ranges, with fitted curves.
Figure 9: Illustration of the PB waveform recorded at the near (33.49 km) and far (60.55 km) FALMA sites.
Figure 10: Statistics of the distances between the PB locations and the near/far FALMA sites for normal and inverted IC flashes.
Figure 11: Comparison of the parameters of the largest PB pulses in normal IC flashes measured from the near and far FALMA sites.
Figure 12: Comparison of the parameters of the largest PB pulses in inverted IC flashes measured from the near and far FALMA sites.
Figure 13: Average and individual amplitude-normalized PB pulse waveforms from near and far stations for normal and inverted IC flashes.
Figure A1: Illustration of the calculation of the electric field from the PB channel.
Figure A2: Simulated PB pulses of normal and inverted IC flashes using different PB channel parameters, modeled at distances of 30 km and 60 km.
15 pages, 1077 KiB  
Technical Note
Quantifying Annual Glacier Mass Change and Its Influence on the Runoff of the Tuotuo River
by Lin Liu, Xueyu Zhang and Zhimin Zhang
Remote Sens. 2024, 16(20), 3898; https://doi.org/10.3390/rs16203898 - 20 Oct 2024
Viewed by 509
Abstract
Glacier meltwater is an indispensable water supply for billions of people living in the catchments of major Asian rivers. However, the role of glaciers on river runoff regulation is seldom investigated due to the lack of annual glacier mass balance observation. In this study, we employed an albedo-based model with a daily land surface albedo dataset to derive the annual glacier mass balance over the Tuotuo River Basin (TRB). During 2000–2022, an annual glacier mass balance range of −0.89 ± 0.08 to 0.11 ± 0.11 m w.e. was estimated. By comparing with river runoff records from the hydrometric station, the contribution of glacier mass change to river runoff was calculated to be 0.00–31.14% for the studied period, with a mean value of 9.97%. Moreover, we found that the mean contribution in drought years is 20.07%, which is approximately five times that in wet years (4.30%) and twice that in average years (9.49%). Therefore, our results verify that mountain glaciers act as a significant buffer against drought in the TRB, at least during the 2000–2022 period. Full article
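As a worked-arithmetic sketch of how a mass balance in m w.e. becomes a runoff contribution, the example below multiplies the (negative) balance by glacier area and divides by the annual runoff volume; all numbers are hypothetical, not the study's data.

# Glacier meltwater contribution to annual runoff, using assumed values.
def glacier_contribution(mass_balance_mwe, glacier_area_km2, runoff_m3):
    if mass_balance_mwe >= 0:          # mass gain: no net meltwater contribution
        return 0.0
    melt_m3 = -mass_balance_mwe * glacier_area_km2 * 1e6   # m w.e. x area (m^2) = m^3 of water
    return 100.0 * melt_m3 / runoff_m3

# Hypothetical year: -0.5 m w.e. over 400 km^2 of ice, 9e8 m^3 of annual runoff.
print(f"{glacier_contribution(-0.5, 400.0, 9e8):.1f}% of annual runoff")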
Figures:
Figure 1: The geographic location of the Tuotuo River Basin, the three study sites (A, B, C), glacier boundaries from the second Chinese glacier inventory, and the Tuotuohe hydrometric station.
Figure 2: The extracted annual minimum regional-average surface albedo time series for the three study sites between 2000 and 2022.
Figure 3: The observed annual runoff of the Tuotuo River between 2000 and 2022.
Figure 4: Annual basin-wide average precipitation and evaporation for the Tuotuo River Basin between 2000 and 2022.
Figure 5: Comparison between annual glacier mass change and annual river runoff for the Tuotuo River Basin during 2000–2022.
Figure 6: The amount of annual precipitation minus annual evaporation in 2000–2022.
Figure 7: Relationship between river runoff and the difference between precipitation and evaporation over the Tuotuo River Basin during 2000–2022.
26 pages, 28365 KiB  
Article
Three-Dimensional Geometric-Physical Modeling of an Environment with an In-House-Developed Multi-Sensor Robotic System
by Su Zhang, Minglang Yu, Haoyu Chen, Minchao Zhang, Kai Tan, Xufeng Chen, Haipeng Wang and Feng Xu
Remote Sens. 2024, 16(20), 3897; https://doi.org/10.3390/rs16203897 - 20 Oct 2024
Viewed by 653
Abstract
Environment 3D modeling is critical for the development of future intelligent unmanned systems. This paper proposes a multi-sensor robotic system for environmental geometric-physical modeling and the corresponding data processing methods. The system is primarily equipped with a millimeter-wave cascaded radar and a multispectral camera to acquire the electromagnetic characteristics and material categories of the target environment and simultaneously employs light detection and ranging (LiDAR) and an optical camera to achieve a three-dimensional spatial reconstruction of the environment. Specifically, the millimeter-wave radar sensor adopts a multiple input multiple output (MIMO) array and obtains 3D synthetic aperture radar images through 1D mechanical scanning perpendicular to the array, thereby capturing the electromagnetic properties of the environment. The multispectral camera, equipped with nine channels, provides rich spectral information for material identification and clustering. Additionally, LiDAR is used to obtain a 3D point cloud, combined with the RGB images captured by the optical camera, enabling the construction of a three-dimensional geometric model. By fusing the data from four sensors, a comprehensive geometric-physical model of the environment can be constructed. Experiments conducted in indoor environments demonstrated excellent spatial-geometric-physical reconstruction results. This system can play an important role in various applications, such as environment modeling and planning. Full article
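The short sketch below illustrates the MIMO "equivalent channel" idea underlying the radar aperture described above: each transmit-receive pair acts like a virtual element at the midpoint of the TX and RX positions, which is how a small physical array synthesizes a larger aperture. The element coordinates are made up for illustration and are not the system's actual layout.

# Virtual (equivalent) array positions for a small MIMO layout.
import numpy as np

tx = np.array([[0.0, 0.0], [0.008, 0.0], [0.016, 0.0]])         # 3 TX positions (m)
rx = np.array([[0.0, 0.002], [0.002, 0.002], [0.004, 0.002],
               [0.006, 0.002]])                                   # 4 RX positions (m)

# One virtual element per TX/RX pair -> 3 x 4 = 12 equivalent channels at the midpoints.
virtual = (tx[:, None, :] + rx[None, :, :]).reshape(-1, 2) / 2.0
print(virtual.shape)              # (12, 2)
print(np.unique(virtual[:, 0]))   # distinct horizontal positions of the virtual aperture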
Figures:
Graphical abstract.
Figure 1: In-house-developed FUSEN system hardware configuration (system physical diagram and architecture diagram).
Figure 2: System attitude control (electric translation stage, turntable, and pitch stage at different positions and angles).
Figure 3: Sensor selection: 77 GHz millimeter-wave cascade RF board, Vision Star pixel-level mosaic imaging spectrometer, Mid-70 LiDAR, and JHUMs series USB3.0 industrial camera.
Figure 4: FUSEN workflow diagram.
Figure 5: Millimeter-wave cascaded radar antenna array measurement scheme.
Figure 6: Schematic diagram of the antenna array equivalent channels (real and equivalent aperture distributions).
Figure 7: Comparison of the 9-channel multispectral camera with an RGB camera (3-band versus 9-band filters).
Figure 8: Multispectral inversion flowchart.
Figure 9: Preliminary registration of the RGB camera and LiDAR (experimental scene and results).
Figure 10: LiDAR and RGB camera data fusion flow diagram.
Figure 11: Accurate registration of the RGB camera and LiDAR by the proposed method (point cloud pixelation and image edge extraction).
Figure 12: Comparison of the hand–eye calibration registration and the accurate registration by the proposed method.
Figure 13: Experiment on obtaining the registration matrix of the millimeter-wave radar and LiDAR.
Figure 14: Experiment on verifying the registration matrix of the millimeter-wave radar and LiDAR.
Figure 15: Millimeter-wave radar and LiDAR fusion results (front and side views).
Figure 16: Schematic diagram of data collection at different locations.
Figure 17: Millimeter-wave imaging results of targets of different materials (metal plate, glass, and cement at two angles).
Figure 18: Scattering intensity as a function of view angle.
Figure 19: Geometric-physical reconstruction result for a small scene, including multispectral recognition, millimeter-wave imaging, and electromagnetic scattering characteristic curve matching.
Figure 20: Geometric-physical reconstruction result for a large scene.
Figure 21: Material recognition results.
Figure 22: Precision and recall confusion matrices.
Figures A1–A5: Reconstruction results of an entire wall of the office, the corridor, and the building, and of two small walls.
20 pages, 6289 KiB  
Article
A High-Resolution Remote Sensing Road Extraction Method Based on the Coupling of Global Spatial Features and Fourier Domain Features
by Hui Yang, Caili Zhou, Xiaoyu Xing, Yongchuang Wu and Yanlan Wu
Remote Sens. 2024, 16(20), 3896; https://doi.org/10.3390/rs16203896 - 20 Oct 2024
Viewed by 653
Abstract
Remote sensing road extraction based on deep learning is an important method for road extraction. However, in complex remote sensing images, different road information often exhibits varying frequency distributions and texture characteristics, and it is usually difficult to express the comprehensive characteristics of roads effectively from a single spatial domain perspective. To address the aforementioned issues, this article proposes a road extraction method that couples global spatial learning with Fourier frequency domain learning. This method first utilizes a transformer to capture global road features and then applies Fourier transform to separate and enhance high-frequency and low-frequency information. Finally, it integrates spatial and frequency domain features to express road characteristics comprehensively and overcome the effects of intra-class differences and occlusions. Experimental results on HF, MS, and DeepGlobe road datasets show that our method can more comprehensively express road features compared with other deep learning models (e.g., Unet, D-Linknet, DeepLab-v3, DCSwin, SGCN) and extract road boundaries more accurately and coherently. The IOU accuracy of the extracted results also achieved 72.54%, 55.35%, and 71.87%. Full article
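As a compact illustration of the frequency-domain separation the method builds on, the sketch below splits an image into low- and high-frequency components with a centered FFT and a circular mask; the radius ratio and test image are assumptions, not the paper's configuration.

# Split an image into low- and high-frequency components via a centered 2D FFT.
import numpy as np

def split_frequencies(image, radius_ratio=0.1):
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = radius_ratio * min(h, w) / 2.0
    low_mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real        # smooth structure
    high = np.fft.ifft2(np.fft.ifftshift(f * (~low_mask))).real    # edges and texture
    return low, high

img = np.random.default_rng(3).random((128, 128))
low, high = split_frequencies(img)
print(np.allclose(low + high, img))  # True: the two components sum back to the image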
Show Figures

Figure 1

Figure 1
<p>Decomposition of remote sensing images into low-frequency components and high-frequency components.</p>
Full article ">Figure 2
<p>Frequency domain analysis of remote sensing images using different filter radius ratios.</p>
Full article ">Figure 3
<p>The structural composition of our model. Swin Transformer blocks provide spatial domain features, and Fourier feature blocks provide frequency domain features.</p>
Full article ">Figure 4
<p>The structural composition of Fourier feature blocks. The upper part is the spatial information feature processing group; the middle part is a low-pass filter; and the lower part is the high-pass filter.</p>
Full article ">Figure 5
<p>Example from the HF dataset.</p>
Full article ">Figure 6
<p>Example from the Massachusetts dataset.</p>
Full article ">Figure 7
<p>Example from the DeepGlobe dataset.</p>
Full article ">Figure 8
<p>Comparative experimental results on the HF dataset (<b>a</b>–<b>i</b>). In each image, the red boxes highlight road details and the differences between the various models.</p>
Full article ">Figure 9
<p>Comparative experimental results on the Massachusetts roads dataset (<b>a</b>–<b>i</b>). In each image, the red boxes highlight road details and the differences between various models.</p>
Full article ">Figure 10
<p>Comparative experimental results on the DeepGlobe roads dataset (<b>a</b>–<b>i</b>). In each image, the red boxes highlight road details and the differences between various models.</p>
Full article ">Figure 11
<p>Results of ablation experiments on the HF dataset (<b>a</b>–<b>h</b>). In each image, the red boxes highlight road details and the differences observed in the ablation study.</p>
Full article ">Figure 12
<p>Feature maps for different structure layers of the network. The upper part is the feature map output by our model; the lower part is the feature map output after removing Fourier feature blocks.</p>
Full article ">
17 pages, 17273 KiB  
Article
Monitoring Coastal Evolution and Geomorphological Processes Using Time-Series Remote Sensing and Geospatial Analysis: Application Between Cape Serrat and Kef Abbed, Northern Tunisia
by Zeineb Kassouk, Emna Ayari, Benoit Deffontaines and Mohamed Ouaja
Remote Sens. 2024, 16(20), 3895; https://doi.org/10.3390/rs16203895 - 19 Oct 2024
Viewed by 963
Abstract
The monitoring of coastal evolution (coastline and associated geomorphological features) caused by episodic and persistent processes associated with climatic and anthropic activities is required for coastal management decisions. The availability of open access, remotely sensed data with increasing spatial, temporal, and spectral resolutions, [...] Read more.
The monitoring of coastal evolution (coastline and associated geomorphological features) caused by episodic and persistent processes associated with climatic and anthropic activities is required for coastal management decisions. The availability of open-access, remotely sensed data with increasing spatial, temporal, and spectral resolutions is promising in this context. The coastline of Northern Tunisia is currently showing geomorphic processes such as increasing erosion associated with lateral sedimentation. This study aims to investigate the potential of time-series optical data, namely Landsat (1985–2019) and Google Earth® satellite imagery (2007–2023), to analyze shoreline changes and morphosedimentary and geomorphological processes between Cape Serrat and Kef Abbed, Northern Tunisia. The Digital Shoreline Analysis System (DSAS) was used to quantify multitemporal rates of shoreline change using two metrics: the net shoreline movement (NSM) and the end-point rate (EPR). Erosion was observed around the tombolo and near river mouths, exacerbated by the presence of surrounding dams, where the NSM reaches up to −8.31 m/year. Despite a total NSM of −15 m, seasonal dynamics revealed maximum erosion in winter (71% negative NSM) and accretion in spring (57% positive NSM). The effects of currents, winds, and dams on dune dynamics were studied using historical Google Earth® images. Over the period from 1994 to 2023, the area was marked by dune face retreat and removal in more than 40% of the site, showing increasing erosion. At finer spatial resolution, and drawing on the synergy of field observations and photointerpretation, four key geomorphic processes shaping the coastline were identified: wave/tide action, wind transport, pedogenesis, and deposition. Given the frequent changes in coastal areas, this method facilitates the maintenance and updating of coastline databases, which are essential for analyzing the impacts of sea level rise in the southern Mediterranean region. Furthermore, the developed approach could be implemented with a range of forecast scenarios to simulate the impacts of higher future sea levels under enhanced climate change. Full article
(This article belongs to the Special Issue Advances in Remote Sensing in Coastal Geomorphology (Third Edition))
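For readers unfamiliar with the DSAS metrics used above, the net shoreline movement (NSM) and end-point rate (EPR) along a transect reduce to simple arithmetic on baseline-referenced shoreline positions. The snippet below is an illustrative calculation with hypothetical transect values, not output from the study.

```python
def nsm_and_epr(oldest_position_m, youngest_position_m, oldest_year, youngest_year):
    """Net Shoreline Movement (m) and End-Point Rate (m/year) for one transect.
    Positions are signed distances from a fixed baseline; negative NSM = retreat."""
    nsm = youngest_position_m - oldest_position_m
    epr = nsm / (youngest_year - oldest_year)
    return nsm, epr

# Hypothetical transect that retreated from +42.0 m to +13.5 m between 1985 and 2019
print(nsm_and_epr(42.0, 13.5, 1985, 2019))  # (-28.5, about -0.84 m/year)
```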
Show Figures

Figure 1

Figure 1
<p>Location of the studied northern Tunisian coastal area (southern Mediterranean seashore) between Cape Serrat and Ragoubet El Golea, showing (blue rectangle) the three main dams and rivers, including Ziatine, Gamgoum, and El Harka. The study area was divided into six zones, according to their morphologies: Three zones are characterized by rocky coasts, a sandy coastal area with a tombolo, and fixed and (semi-)fixed dune zones. The background is the MapTiler Satellite map.</p>
Full article ">Figure 2
<p>Flowchart related to Landsat data analysis for the years 1985–2019. It involves the following: (<b>a</b>) the pre-processes steps: radiometric calibration, geometric, and atmospheric corrections; (<b>b</b>) the multi-time coastline extraction based on the Tasseled map transformation (greenness/wetness data extraction); and (<b>c</b>) coastline evolution.</p>
Full article ">Figure 3
<p>Net shoreline movement (NSM) in the period from 1985 to 2019 between Cape Serrat and Ragoubet el Golea points. Shoreline retreat is indicated by red lines, while green lines represent relatively unchanged areas. Shoreline advance is indicated by blue lines. The background is the MapTiler Topo map.</p>
Full article ">Figure 4
<p>Erosion forms are mainly identified around the tombolo areas, indicated by red lines.</p>
Full article ">Figure 5
<p>(<b>a</b>) Seasonal sedimentary balance of shoreline movement based on net shoreline movement values (the distance between the oldest and youngest shorelines) and (<b>b</b>) seasonal NSM variation showing the balance between erosion (blue color) and accretion (red color) and the total NSM value.</p>
Full article ">Figure 6
<p>Spatial distribution of dunes as examples of different stages, from stable and vegetated dunes to system instability and the development of a mobile transgressive dune system. High sand dune (Level 1 or L1); incipient dune, L2, and foredune (LN) (background map is a GoogleEarth<sup>®</sup> image of 2019).</p>
Full article ">Figure 7
<p>The disappearance of the incipient dune between 1994 and 2018, based on two satellite GoogleEarth<sup>®</sup> views. The phenomena highlight the important erosion process around the tombolo.</p>
Full article ">Figure 8
<p>Example of wind action on the dunes based on GoogleEarth<sup>®</sup> time-series imagery (a view of zone 2 (<a href="#remotesensing-16-03895-f001" class="html-fig">Figure 1</a>)). Green arrows highlight the perpendicular direction of the wind reactivation by a secondary wind from the north–east (NE) of the old dunes under the dominant north–west (NW) wind.</p>
Full article ">Figure 9
<p>An example of dune zonation in the study area in relation to wind direction. Five categories of dunes were identified, including near-shore zones, high sand dunes, incipient dunes, foredunes, and transgressive dunes.</p>
Full article ">Figure 10
<p>Examples of two dune systems in the study area: (<b>a</b>) the long dunes form caused by the interaction of multiple coastal currents or wind directions; (<b>b</b>) the short dunes form under the influence of a single, dominant current and wind direction. The white line in the sea (<b>a</b>,<b>b</b>) represents the coastal current direction.</p>
Full article ">Figure 11
<p>The retreat and removal of dunes (<b>a</b>). A series of parallel dune ridges, with the oldest dune ridges located furthest inland. (<b>b</b>) Formation and growth of flat dune deposits in zone 5 (<a href="#remotesensing-16-03895-f001" class="html-fig">Figure 1</a>), characterized by semi-fixed dunes.</p>
Full article ">Figure 12
<p>The coastal landscape in Cape Serrat (<a href="#remotesensing-16-03895-f001" class="html-fig">Figure 1</a>, zone 1) (<b>a</b>), showing the reshaping of the rocky shoreline from 1994 to 2019. Sedimentary rock layers with visible ripple marks, highlighting geomorphological features caused by weathering and erosion in the cliff, and the abrasion phenomena in the cliff (<b>b</b>).</p>
Full article ">Figure 13
<p>The dynamics of coastal erosion and dune dynamics processes, particularly in relation to waves action, dune formations, and stability. (<b>a</b>) shows the coastal features (dunes, inshore sand deposits); (<b>b</b>) the relationship between water level and micro-cliff formation caused by erosion; and (<b>c</b>) illustrates the effects of storm wave attacks on dunes.</p>
Full article ">
17 pages, 9005 KiB  
Article
NDVI or PPI: A (Quick) Comparison for Vegetation Dynamics Monitoring in Mountainous Area
by Dimitri Charrière, Loïc Francon and Gregory Giuliani
Remote Sens. 2024, 16(20), 3894; https://doi.org/10.3390/rs16203894 - 19 Oct 2024
Viewed by 1058
Abstract
Cold ecosystems are experiencing a warming rate that is twice as fast as the global average and are particularly vulnerable to the consequences of climate change. In mountain ecosystems, it is particularly important to monitor vegetation to understand ecosystem dynamics, biodiversity conservation, and [...] Read more.
Cold ecosystems are experiencing a warming rate that is twice as fast as the global average and are particularly vulnerable to the consequences of climate change. In mountain ecosystems, it is particularly important to monitor vegetation to understand ecosystem dynamics, biodiversity conservation, and the resilience of these fragile ecosystems to global change. Hence, we used satellite data acquired by Sentinel-2 to perform a comparative assessment of the Normalized Difference Vegetation Index (NDVI) and the Plant Phenology Index (PPI) in mountainous regions (canton of Valais, Switzerland, in the European Alps) for monitoring the vegetation dynamics of four types: deciduous trees, coniferous trees, grasslands, and shrublands. Results indicate that the NDVI is particularly noisy in the seasonal cycle at the beginning/end of the snow season and for coniferous trees, which is consistent with its known snow sensitivity issue and difficulties in retrieving signal variation in dense and evergreen vegetation. The PPI appears to handle these problems but tends to overestimate peak values, which could be attributed to its logarithmic formula and the resulting high sensitivity to variations in near-infrared (NIR) and red reflectance during the peak growing season. Concerning seasonal parameter retrieval, we find close concordance in the results for the start of season (SOS) and end of season (EOS) between indices, except for coniferous trees. Peak of season (POS) results exhibit important differences between the indices. Our findings suggest that PPI is a robust remotely sensed index for vegetation monitoring in seasonally snow-covered and complex mountain environments. Full article
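As a reference for the two indices compared above, NDVI is the usual normalized band ratio, while PPI is built on a logarithmic transform of the difference vegetation index (DVI = NIR − RED). The sketch below follows that general logarithmic form; the maximum-DVI, soil-DVI, and gain values are placeholder assumptions, not the parameterization used in the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ppi(nir, red, dvi_max=0.8, dvi_soil=0.09, gain_k=0.25):
    """Plant Phenology Index in its logarithmic form:
    PPI = -K * ln((M - DVI) / (M - DVI_soil)), with DVI = NIR - RED.
    dvi_max (M), dvi_soil, and gain_k are illustrative, scene-dependent values."""
    dvi = nir - red
    return -gain_k * np.log((dvi_max - dvi) / (dvi_max - dvi_soil))
```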
Show Figures

Figure 1

Figure 1
<p>Localization of the canton of Valais (CH) and altitudinal range (projection: CH1903+/LV95).</p>
Full article ">Figure 2
<p>General Workflow for the comparative assessment of PPI and NDVI for phenological metrics retrieval. The Vegetation Cover map is produced using Sentinel-2 images and “High Resolution Layers” from the Copernicus Land Monitoring Service (CLMS). This map is then used together with pre-processed PPI and NDVI rasters to retrieve PPI and NDVI values per vegetation classes. In a subsequent step, these values are used to process and retrieve time-series and seasonality parameters necessary for the analysis.</p>
Full article ">Figure 3
<p>Vegetation distribution in Valais (projection: CH1903+/LV95).</p>
Full article ">Figure 4
<p>NDVI time-series per vegetation classes (top) with raw data (grey) and the double logistic function derived (colored). The bottom chart depicts all double logistic functions.</p>
Full article ">Figure 5
<p>PPI time-series per vegetation classes (top) with raw data (grey) and the double logistic function derived (colored). The bottom chart depicts all double logistic functions.</p>
Full article ">Figure 6
<p>PPI and NDVI time-series for each vegetation class.</p>
Full article ">
32 pages, 100733 KiB  
Article
On-Orbit Geometric Calibration and Accuracy Validation of the Jilin1-KF01B Wide-Field Camera
by Hongyu Wu, Guanzhou Chen, Yang Bai, Ying Peng, Qianqian Ba, Shuai Huang, Xing Zhong, Haijiang Sun, Lei Zhang and Fuyu Feng
Remote Sens. 2024, 16(20), 3893; https://doi.org/10.3390/rs16203893 - 19 Oct 2024
Viewed by 862
Abstract
On-orbit geometric calibration is key to improving the geometric positioning accuracy of high-resolution optical remote sensing satellite data. Grouped calibration with geometric consistency (GCGC) is proposed in this paper for the Jilin1-KF01B satellite, which is the world’s first satellite capable of providing 150-km [...] Read more.
On-orbit geometric calibration is key to improving the geometric positioning accuracy of high-resolution optical remote sensing satellite data. Grouped calibration with geometric consistency (GCGC) is proposed in this paper for the Jilin1-KF01B satellite, which is the world’s first satellite capable of providing 150-km swath width and 0.5-m resolution data. To ensure the geometric accuracy of high-resolution image data, the GCGC method conducts grouped calibration of the time delay integration charge-coupled device (TDI CCD). Each group independently calibrates the exterior orientation elements to address the multi-time synchronization issues between imaging processing systems (IPSs). An additional inter-chip geometric positioning consistency constraint is used to enhance geometric positioning consistency in the overlapping areas between adjacent CCDs. By combining image simulation techniques associated with spectral bands, the calibrated panchromatic data are used to generate a simulated multispectral reference-band image as control data, thereby enhancing the geometric alignment consistency between panchromatic and multispectral data. Experimental results show that the average seamless stitching accuracy of the basic products after calibration is better than 0.6 pixels, the positioning accuracy without ground control points (GCPs) is better than 20 m, the band-to-band registration accuracy is better than 0.3 pixels, the average geometric alignment consistency between panchromatic and multispectral data is better than 0.25 multispectral pixels, the geometric accuracy with GCPs is better than 2.1 m, and the geometric alignment consistency accuracy of multi-temporal data is better than 2 m. The GCGC method significantly improves the quality of image data from the Jilin1-KF01B satellite and provides important references and practical experience for the geometric calibration of other large-swath high-resolution remote sensing satellites. Full article
Show Figures

Figure 1

Figure 1
<p>The diagram of detector look-angle model.</p>
Full article ">Figure 2
<p>CCD stitching and overlap imaging characteristics schematic diagram.</p>
Full article ">Figure 3
<p>Illustration of the large-scale dynamic variation of the y-coordinate differences in mountainous areas.</p>
Full article ">Figure 4
<p>Tie point matching method based on geometric positioning constraints for connection point extraction.</p>
Full article ">Figure 5
<p>Comparison of detailed images of typical land cover types for panchromatic data, red band data, and simulated red band data.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Jilin1-KF01B WF camera on-orbit geometric calibration of panchromatic and multispectral reference band flow chart.</p>
Full article ">Figure 7
<p>Jilin1-KF01B WF camera on-orbit geometric calibration of multispectral non-reference band flow chart.</p>
Full article ">Figure 8
<p>Calibration data.</p>
Full article ">Figure 9
<p>Comparison of geometric stitching before and after calibration.</p>
Full article ">Figure 10
<p>In-Scene stitching accuracy.</p>
Full article ">Figure 11
<p>RPC fitting accuracy.</p>
Full article ">Figure 12
<p>Geometric positioning consistency accuracy between panchromatic and multispectral data.</p>
Full article ">Figure 13
<p>The distribution of GCPs in experimental regions.</p>
Full article ">Figure 14
<p>Geographic distribution of experimental data.</p>
Figure 14 Cont.">
Full article ">Figure 15
<p>The effect of geometric positioning consistency before and after correction of GanSu data.</p>
Full article ">Figure 16
<p>The effect of geometric positioning consistency before and after correction of LinYi data.</p>
Full article ">Figure 17
<p>The effect of geometric positioning consistency before and after correction of QingDao data.</p>
Full article ">Figure 18
<p>The effect of geometric positioning consistency before and after correction of XinJiang data.</p>
Full article ">Figure A1
<p>Positioning accuracy without GCPs of PMS01-PMS06.</p>
Figure A1 Cont.">
Full article ">
16 pages, 2967 KiB  
Technical Note
Field Programmable Gate Array (FPGA) Implementation of Parallel Jacobi for Eigen-Decomposition in Direction of Arrival (DOA) Estimation Algorithm
by Shuang Zhou and Li Zhou
Remote Sens. 2024, 16(20), 3892; https://doi.org/10.3390/rs16203892 - 19 Oct 2024
Viewed by 636
Abstract
The eigen-decomposition of a covariance matrix is a key step in the Direction of Arrival (DOA) estimation algorithms such as subspace classes. Eigen-decomposition using the parallel Jacobi algorithm implemented on FPGA offers excellent parallelism and real-time performance. Addressing the high complexity and resource [...] Read more.
The eigen-decomposition of a covariance matrix is a key step in Direction of Arrival (DOA) estimation algorithms such as subspace-based methods. Eigen-decomposition using the parallel Jacobi algorithm implemented on FPGA offers excellent parallelism and real-time performance. Addressing the high complexity and resource consumption of the traditional parallel Jacobi algorithm implemented on FPGA, this study proposes an improved FPGA-based parallel Jacobi algorithm for eigen-decomposition. By analyzing the relationship between angle calculation and rotation during the Jacobi decomposition process, leveraging parallelism in the data processing, and building on the concepts of time-division multiplexing and parallel partition processing, this approach effectively reduces FPGA resource consumption. The improved parallel Jacobi algorithm is then applied to the classic DOA estimation algorithm, the MUSIC algorithm, and implemented on Xilinx’s Zynq FPGA. Experimental results demonstrate that this parallel approach can reduce resource consumption by approximately 75% compared with the traditional method while introducing little additional time consumption. The proposed method addresses the high hardware resource consumption of FPGA-based eigen-decomposition in DOA applications. Full article
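The rotations at the heart of the parallel Jacobi scheme are easiest to see in a serial reference implementation. The sketch below is a plain cyclic Jacobi eigen-decomposition of a real symmetric (covariance) matrix; it illustrates the angle computation and two-sided rotation that the paper maps onto FPGA partitions, not the hardware architecture itself.

```python
import numpy as np

def jacobi_eig(matrix, sweeps=10):
    """Cyclic Jacobi eigen-decomposition of a real symmetric matrix.
    Returns approximate eigenvalues and eigenvectors (columns of v)."""
    a = np.array(matrix, dtype=float)
    n = a.shape[0]
    v = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p, q]) < 1e-12:
                    continue
                # Rotation angle that zeroes the (p, q) off-diagonal element
                theta = 0.5 * np.arctan2(2.0 * a[p, q], a[q, q] - a[p, p])
                c, s = np.cos(theta), np.sin(theta)
                rot = np.eye(n)
                rot[p, p] = c; rot[q, q] = c
                rot[p, q] = s; rot[q, p] = -s
                a = rot.T @ a @ rot      # two-sided rotation
                v = v @ rot              # accumulate eigenvectors
    return np.diag(a), v
```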
Show Figures

Figure 1

Figure 1
<p>Systolic array structure of an 8-order covariance matrix.</p>
Full article ">Figure 2
<p>(<b>a</b>) First rotational partition diagram. (<b>b</b>) Second rotational partition diagram.</p>
Full article ">Figure 3
<p>The steps of module operation.</p>
Full article ">Figure 4
<p>Block diagram of the MUSIC algorithm.</p>
Full article ">Figure 5
<p>Simulation results of eigen-decomposition iteration for (<b>a</b>) 21 times and (<b>b</b>) 28 times.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Illustration of the direction-finding results on the UV plane in MATLAB and FPGA.</p>
Full article ">
22 pages, 11121 KiB  
Article
Joint Prediction of Sea Clutter Amplitude Distribution Based on a One-Dimensional Convolutional Neural Network with Multi-Task Learning
by Longshuai Wang, Liwen Ma, Tao Wu, Jiaji Wu and Xiang Luo
Remote Sens. 2024, 16(20), 3891; https://doi.org/10.3390/rs16203891 - 19 Oct 2024
Viewed by 797
Abstract
Accurate modeling of sea clutter amplitude distribution plays a crucial role in enhancing the performance of marine radar. Due to variations in radar system parameters and oceanic environmental factors, sea clutter amplitude distribution exhibits multiple distribution types. Focusing solely on a single type [...] Read more.
Accurate modeling of sea clutter amplitude distribution plays a crucial role in enhancing the performance of marine radar. Due to variations in radar system parameters and oceanic environmental factors, sea clutter amplitude distribution exhibits multiple distribution types. Focusing solely on a single type of amplitude prediction lacks the necessary flexibility in practical applications. Therefore, based on the measured X-band radar sea clutter data from Yantai, China in 2022, this paper proposes a multi-task one-dimensional convolutional neural network (MT1DCNN) and designs a dedicated input feature set for the joint prediction of the type and parameters of sea clutter amplitude distribution. The results indicate that the MT1DCNN model achieves an F1 score of 97.4% for classifying sea clutter amplitude distribution types under HH polarization and a root-mean-square error (RMSE) of 0.746 for amplitude distribution parameter prediction. Under VV polarization, the F1 score is 96.74% and the RMSE is 1.071. By learning the associations between sea clutter amplitude distribution types and parameters, the model’s predictions become more accurate and reliable, providing significant technical support for maritime target detection. Full article
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
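The multi-task idea above—one shared feature extractor feeding a classification head for the distribution type and a regression head for its parameters—can be sketched in a few lines of PyTorch. This is a minimal illustration of the shared-trunk/two-head pattern and does not reproduce the MT1DCNN architecture, input feature set, or loss weighting of the paper; the layer sizes and number of distribution types are assumptions.

```python
import torch.nn as nn

class MultiTaskAmplitudeNet(nn.Module):
    """Shared 1-D convolutional trunk with a type-classification head and a
    parameter-regression head (illustrative sketch only)."""
    def __init__(self, n_types=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.type_head = nn.Linear(32, n_types)   # amplitude-distribution type
        self.param_head = nn.Linear(32, 2)        # e.g., shape and scale parameters

    def forward(self, x):                         # x: (batch, 1, feature_length)
        z = self.trunk(x)
        return self.type_head(z), self.param_head(z)

# Joint training would combine a cross-entropy loss on the type logits with a
# regression loss (e.g., MSE) on the parameter outputs.
```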
Show Figures

Figure 1

Figure 1
<p>The architecture of MT1DCNN.</p>
Full article ">Figure 2
<p>The overall temporal characteristics of T1 and T2 pulses: (<b>a</b>) T1 pulse echo (dB). (<b>b</b>) T2 pulse echo (dB).</p>
Full article ">Figure 3
<p>The data distribution of shape parameters for K and Pareto distributions: (<b>a</b>) K distribution of sea clutter in HH polarimetric radar. (<b>b</b>) Pareto distribution of sea clutter in HH polarimetric radar. (<b>c</b>) K distribution of sea clutter in VV polarimetric radar. (<b>d</b>) Pareto distribution of sea clutter in VV polarimetric radar.</p>
Figure 3 Cont.">
Full article ">Figure 4
<p>Sea clutter amplitude distribution joint prediction results of the MT1DCNN model are presented as follows: for the classification task, outcomes are depicted via a confusion matrix heatmap, whereas the regression task results are illustrated using scatter density plots. (<b>a</b>) Amplitude distribution types of HH polarization. (<b>b</b>) Amplitude distribution types of VV polarization. (<b>c</b>) Shape parameter of HH polarization. (<b>d</b>) Shape parameter of VV polarization. (<b>e</b>) Scale parameter of HH polarization. (<b>f</b>) Scale parameter of VV polarization.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>TEIC comparison of two methods: MLE, and MT1DCNN based on MLE. (<b>a</b>) TEIC value comparison under HH polarization. (<b>b</b>) Boxplot of TEIC values under HH polarization. (<b>c</b>) TEIC value comparison under VV polarization. (<b>d</b>) Boxplot of TEIC values under VV polarization.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Comparison of results for predicting the real sea clutter amplitude distribution using different parameter estimation methods: (<b>a</b>) Prediction results under HH polarization. (<b>b</b>) Prediction results under VV polarization.</p>
Full article ">Figure 7
<p>Comparison of training results of different models and epochs on the HH polarization sea clutter validation set: (<b>a</b>) F1 score. (<b>b</b>) Validation loss.</p>
Full article ">Figure 8
<p>Comparison of training results of different models and epochs on the VV polarization sea clutter validation set: (<b>a</b>) F1 score. (<b>b</b>) Validation loss.</p>
Full article ">
18 pages, 10462 KiB  
Article
Multi-Year Hurricane Impacts Across an Urban-to-Industrial Forest Use Gradient
by Carlos Topete-Pozas, Steven P. Norman and William M. Christie
Remote Sens. 2024, 16(20), 3890; https://doi.org/10.3390/rs16203890 - 19 Oct 2024
Viewed by 615
Abstract
Coastal forests in the eastern United States are increasingly threatened by hurricanes; however, monitoring their initial impacts and subsequent recovery is challenging across scales. Understanding disturbance impacts and responses is essential for sustainable forest management, biodiversity conservation, and climate change adaptation. Using Sentinel-2 [...] Read more.
Coastal forests in the eastern United States are increasingly threatened by hurricanes; however, monitoring their initial impacts and subsequent recovery is challenging across scales. Understanding disturbance impacts and responses is essential for sustainable forest management, biodiversity conservation, and climate change adaptation. Using Sentinel-2 imagery, we calculated the annual Normalized Difference Vegetation Index change (∆NDVI) of forests before and after Hurricane Michael (HM) in Florida to determine how different forest use types were impacted, including the initial wind damage in 2018 and subsequent recovery or reactive management for two focal areas located near and far from the coast. We used detailed parcel data to define forest use types and characterized multi-year impacts using sampling and k-means clustering. We analyzed five years of timberland logging activity up to the fall of 2023 to identify changes in logging rates that may be attributable to post-hurricane salvage efforts. We found uniform impacts across forest use types near the coast, where winds were most intense, but differing impacts among forest use types inland. Forest use types showed a wide range of multi-year responses. Urban forests had the fastest 3-year recovery, and the timberland response was delayed, apparently due to salvage logging that increased post-hurricane, peaked in 2021–2022, and returned to the pre-hurricane rate by 2023. The initial and secondary consequences of HM on forests were complex, as they varied across local and landscape gradients. These insights reveal the importance of considering forest use types to understand the resilience of coastal forests in the face of potentially increasing hurricane activity. Full article
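The clustering step described above groups pixels by their multi-year ∆NDVI behavior. A minimal version of that idea—stacking annual NDVI differences per pixel and running k-means on the resulting trajectories—is sketched below with scikit-learn; the array shapes, cluster count, and random data are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def delta_ndvi(ndvi_year1, ndvi_year2):
    """Annual NDVI change (year2 minus year1) for a pixel stack."""
    return ndvi_year2 - ndvi_year1

# One row per forest pixel, one column per post-hurricane year of ΔNDVI
rng = np.random.default_rng(0)
trajectories = rng.uniform(-0.5, 0.5, size=(1000, 3))  # placeholder data
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))  # pixels per ΔNDVI behavior cluster
```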
Show Figures

Figure 1

Figure 1
<p>The study area location is in northwestern Florida, USA. The letters on the map indicate focal areas: “A” near and “B” far from the coast.</p>
Full article ">Figure 2
<p>Workflow diagram showing the methodological steps for this study.</p>
Full article ">Figure 3
<p>Annual ∆NDVI by forest use type near (<b>a</b>) and far (<b>b</b>) from the coast for two years prior to Hurricane Michael, the year of the storm (2018–2019), and for two years after. The boxes represent the 25th and 75th percentiles of each distribution and the lines inside the boxes represent the medians.</p>
Full article ">Figure 4
<p>Three-year recovery of forest use types standardized to the pre-HM condition for the coastal focal area (<b>a</b>) and inland focal area (<b>b</b>).</p>
Full article ">Figure 5
<p>Patterns of NDVI decline and recovery from spatio-temporal clustering over three years of annual post-hurricane ∆NDVI behavior (2018–2019 to 2020–2021). Note that warmer colors (clusters 1–6) mostly surround the track, indicating stronger declines in NDVI (see corresponding cluster numbers in <a href="#remotesensing-16-03890-f006" class="html-fig">Figure 6</a>): (<b>a</b>) the southwest corner of the study area showing the hurricane track, (<b>b</b>) industrial forests, (<b>c</b>) interface forests (i.e., woodlot, farm-woodlot, and other), and (<b>d</b>) urban forests.</p>
Full article ">Figure 6
<p>Relative annual NDVI behavior of the 10 clusters with respect to the immediate pre-storm condition. The cluster numbers and colors in the legend correspond to the clusters shown in <a href="#remotesensing-16-03890-f005" class="html-fig">Figure 5</a>.</p>
Full article ">Figure 7
<p>Annual logging for timberland and all other non-urban forest use types within the study area for the pre-hurricane baseline of 2016–2017 and for five subsequent years.</p>
Full article ">Figure A1
<p>Classification key used to define forest cover types from forest canopy cover, building density, and parcel data. The same colors are used in <a href="#remotesensing-16-03890-f0A2" class="html-fig">Figure A2</a>.</p>
Full article ">Figure A2
<p>Forest cover types derived from forest canopy cover, building density, and parcel data.</p>
Full article ">
19 pages, 3739 KiB  
Article
Integration of Generative-Adversarial-Network-Based Data Compaction and Spatial Attention Transductive Long Short-Term Memory for Improved Rainfall–Runoff Modeling
by Bahareh Ghanati and Joan Serra-Sagristà
Remote Sens. 2024, 16(20), 3889; https://doi.org/10.3390/rs16203889 - 19 Oct 2024
Viewed by 461
Abstract
This work presents a novel approach to rainfall–runoff modeling. We incorporate GAN-based data compaction into a spatial-attention-enhanced transductive long short-term memory (TLSTM) network. The GAN component reduces data dimensions while retaining essential features. This compaction enables the TLSTM to capture complex temporal dependencies [...] Read more.
This work presents a novel approach to rainfall–runoff modeling. We incorporate GAN-based data compaction into a spatial-attention-enhanced transductive long short-term memory (TLSTM) network. The GAN component reduces data dimensions while retaining essential features. This compaction enables the TLSTM to capture complex temporal dependencies in rainfall–runoff patterns more effectively. When tested on the CAMELS dataset, the model significantly outperforms benchmark LSTM-based models. For 8-day runoff forecasts, our model achieves an NSE of 0.536, compared to 0.326 from the closest competitor. The integration of GAN-based feature extraction with spatial attention mechanisms improves predictive accuracy, particularly for peak-flow events. This method offers a powerful solution for addressing current challenges in water resource management and disaster planning under extreme climate conditions. Full article
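The Nash–Sutcliffe efficiency (NSE) quoted above is a standard skill score for runoff forecasts: 1 indicates a perfect match with observations, while 0 indicates the model is no better than the observed mean. A straightforward implementation is shown below (a generic metric, not code from the study).

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the sum of squared errors divided by
    the total sum of squares about the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
```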
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Overview of the proposed rainfall–runoff prediction model.</p>
Full article ">Figure 2
<p>Overview of the random key method employed to convert numeric vectors into distinct hyperparameter levels.</p>
Full article ">Figure 3
<p>RMSE values with 95% confidence intervals for selected basins. Error bars show the uncertainty in model predictions, with narrower intervals indicating higher confidence.</p>
Full article ">Figure 4
<p>Loss curves for training and validation datasets in the proposed model.</p>
Full article ">Figure 5
<p>A depiction of the model’s application on individual rainfall–runoff for basin 12374250, spanning from 1 January 2013 to 1 January 2014, is presented. Here, Proposed-i denotes the predictions for runoff made by our model, calculated for the i-th day ahead.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>A depiction of the model’s application on regional rainfall–runoff modeling from 1 January 2013 to 1 January 2014 is presented. Here, Proposed-i denotes the predictions for runoff made by our model, calculated for the i-th day ahead.</p>
Figure 6 Cont.">
Full article ">
14 pages, 313 KiB  
Review
Progress in Remote Sensing of Heavy Metals in Water
by Xiaoling Xu, Jiayi Pan, Hua Zhang and Hui Lin
Remote Sens. 2024, 16(20), 3888; https://doi.org/10.3390/rs16203888 - 19 Oct 2024
Viewed by 683
Abstract
This review article details the advancements in detecting heavy metals in aquatic environments using remote sensing methodologies. Heavy metals are significant pollutants in aquatic environment, and their detection and monitoring are crucial for predicting water quality. Traditional in situ water sampling methods are [...] Read more.
This review article details the advancements in detecting heavy metals in aquatic environments using remote sensing methodologies. Heavy metals are significant pollutants in aquatic environments, and their detection and monitoring are crucial for predicting water quality. Traditional in situ water sampling methods are time-consuming and costly, highlighting the advantages of remote sensing techniques. Analysis of the reflectance and absorption characteristics of heavy metals has identified the red and near-infrared bands as the sensitive wavelengths for heavy metal detection in aquatic environments. Several studies have demonstrated a correlation between total suspended matter (TSM) and heavy metals, which forms the basis for retrieving heavy metal content from TSM data. Recent developments in hyperspectral remote sensing and machine (deep) learning technologies may pave the way for developing more effective heavy metal detection algorithms. Full article
(This article belongs to the Section Environmental Remote Sensing)
21 pages, 19359 KiB  
Article
Landslide Hazard Prediction Based on UAV Remote Sensing and Discrete Element Model Simulation—Case from the Zhuangguoyu Landslide in Northern China
by Guangming Li, Yu Zhang, Yuhua Zhang, Zizheng Guo, Yuanbo Liu, Xinyong Zhou, Zhanxu Guo, Wei Guo, Lihang Wan, Liang Duan, Hao Luo and Jun He
Remote Sens. 2024, 16(20), 3887; https://doi.org/10.3390/rs16203887 - 19 Oct 2024
Viewed by 635
Abstract
Rainfall-triggered landslides generally pose a high risk due to their sudden initiation, massive impact force, and energy. It is, therefore, necessary to perform accurate and timely hazard prediction for these landslides. Most studies have focused on the hazard assessment and verification of landslides [...] Read more.
Rainfall-triggered landslides generally pose a high risk due to their sudden initiation, massive impact force, and energy. It is, therefore, necessary to perform accurate and timely hazard prediction for these landslides. Most studies have focused on the hazard assessment and verification of landslides that have occurred, which were essentially back-analyses rather than predictions. To overcome this drawback, a framework aimed at forecasting landslide hazards by combining UAV remote sensing and numerical simulation was proposed in this study. A slow-moving landslide identified by SBAS-InSAR in Tianjin city of northern China was taken as a case study to clarify its application. A UAV with laser scanning techniques was utilized to obtain high-resolution topography data. Then, extreme rainfall with a given return period was determined based on the Gumbel distribution. The Particle Flow Code (PFC), a discrete element model, was also applied to simulate the runout process after slope failure under rainfall and earthquake scenarios. The results showed that the extreme rainfall for three continuous days in the study area was 151.5 mm (P = 5%), 184.6 mm (P = 2%), and 209.3 mm (P = 1%), respectively. Both extreme rainfall and earthquake scenarios could induce slope failure, and the failure probabilities revealed by a seepage–mechanics interaction simulation in Geostudio reached 82.9% (earthquake scenario) and 92.5% (extreme rainfall). The landslide hazard under a given scenario was assessed by kinetic indicators during the PFC simulation. The landslide runout analysis indicated that the landslide reached a maximum velocity of 23.4 m/s under rainfall scenarios, whereas this reached 19.8 m/s under earthquake scenarios. In addition, a comparison regarding particle displacement also showed that the landslide hazard under rainfall scenarios was worse than that under earthquake scenarios. The modeling strategy incorporated spatial and temporal probabilities and runout hazard analyses, even though landslide hazard mapping was not actually achieved. The present framework can predict the areas threatened by landslides under specific scenarios, and holds substantial scientific reference value for effective landslide prevention and control strategies. Full article
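The extreme-rainfall figures above come from fitting a Gumbel distribution to annual maxima and inverting it for a given return period. The sketch below uses simple method-of-moments parameter estimates, which may differ from the fitting procedure the authors actually used; the example series is hypothetical.

```python
import numpy as np

def gumbel_design_rainfall(annual_maxima, return_period_years):
    """Design rainfall for a given return period from a Gumbel fit with
    method-of-moments parameter estimates (illustrative only)."""
    x = np.asarray(annual_maxima, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                 # location (Euler-Mascheroni constant)
    p_exceedance = 1.0 / return_period_years
    return mu - beta * np.log(-np.log(1.0 - p_exceedance))

# e.g., the 100-year (P = 1%) event from a series of annual 3-day rainfall maxima
```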
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Location of the study area in China, (<b>b</b>) the topographic information of the area, where 30 m resolution DEM is the base map, and (<b>c</b>) the lithology map.</p>
Full article ">Figure 2
<p>The cross-section of the ZhuangGuoYu landslide.</p>
Full article ">Figure 3
<p>The macro deformation of the ZGYL: (<b>a</b>) an overview of the landslide from Google Earth images and (<b>b</b>) the small-scale landsliding and the protective net at the toe of the slope.</p>
Full article ">Figure 4
<p>The deformation results from SBAS-InSAR analysis: (<b>a</b>) spatial deformation of the pixels on the landslide and (<b>b</b>) the displacement of points between 2014 and 2023, where the locations of P1, P2, and P3 are shown in (<b>a</b>).</p>
Full article ">Figure 5
<p>The proposed methodological framework of this study for landslide hazard assessment.</p>
Full article ">Figure 6
<p>The route setting of the UAV and obtained results: (<b>a</b>,<b>b</b>) are the two overlapping UAV routes, (<b>c</b>) the obtained DSM data from the remote sensing images, and (<b>d</b>) the digital orthophoto map (DOM) of the landslide.</p>
Full article ">Figure 7
<p>The dataset for the extreme rainfall analysis: (<b>a</b>) annual rainfall of the study area from 1980 to 2017 and (<b>b</b>) the largest continuous 3-day rainfall for each month.</p>
Full article ">Figure 8
<p>The settings for the stability evaluation in Geostudio: (<b>a</b>) the established geological model and (<b>b</b>) the hydrological parameter settings.</p>
Full article ">Figure 9
<p>The geological models of the ZGYL in PFC: (<b>a</b>) 2D and (<b>b</b>) 3D.</p>
Full article ">Figure 10
<p>The extreme rainfall under various return periods of the study area.</p>
Full article ">Figure 11
<p>The stability analysis results from Geostudio v2024. The left column is the factor of safety under (<b>a</b>) the rainfall with 50-year return period, (<b>b</b>) rainfall with 100-year return period, (<b>c</b>) earthquake scenario; The right column is the displacement under (<b>d</b>) the rainfall with 50-year return period, (<b>e</b>) rainfall with 100-year return period, (<b>f</b>) earthquake scenario.</p>
Full article ">Figure 12
<p>The 2D landslide kinetics at different moments under the rainfall scenario with 100-year return period.</p>
Full article ">Figure 13
<p>The landslide kinetics at different moments under the earthquake scenario.</p>
Full article ">Figure 14
<p>The 3D landslide kinetics at different moments: (<b>a</b>) rainfall scenario with 50-year return period, (<b>b</b>) rainfall scenario with 100-year return period, and (<b>c</b>) earthquake scenario.</p>
Full article ">Figure 15
<p>The velocity versus time of four monitoring particles: (<b>a</b>) #1, (<b>b</b>) #2, (<b>c</b>) #3, and (<b>d</b>) #4.</p>
Figure 15 Cont.">
Full article ">Figure 16
<p>The displacement versus time of four monitoring particles: (<b>a</b>) #1, (<b>b</b>) #2, (<b>c</b>) #3, and (<b>d</b>) #4.</p>
Full article ">
22 pages, 12339 KiB  
Article
Robust Trend Analysis in Environmental Remote Sensing: A Case Study of Cork Oak Forest Decline
by Oliver Gutiérrez-Hernández and Luis V. García
Remote Sens. 2024, 16(20), 3886; https://doi.org/10.3390/rs16203886 - 19 Oct 2024
Viewed by 625
Abstract
We introduce a novel methodological framework for robust trend analysis (RTA) using remote sensing data to enhance the accuracy and reliability of detecting significant environmental trends. Our approach sequentially integrates the Theil–Sen (TS) slope estimator, the Contextual Mann–Kendall (CMK) test, and the false [...] Read more.
We introduce a novel methodological framework for robust trend analysis (RTA) using remote sensing data to enhance the accuracy and reliability of detecting significant environmental trends. Our approach sequentially integrates the Theil–Sen (TS) slope estimator, the Contextual Mann–Kendall (CMK) test, and the false discovery rate (FDR) control. This comprehensive method addresses common challenges in trend analysis, such as handling small, noisy datasets with outliers and issues related to spatial autocorrelation, cross-correlation, and multiple testing. We applied this RTA workflow to study tree cover trends in Los Alcornocales Natural Park (Southern Spain), Europe’s largest cork oak forest, analysing interannual changes in tree cover from 2000 to 2022 using Terra MODIS MOD44B data. Our results reveal that the TS estimator provides a robust measure of trend direction and magnitude, but its effectiveness is dramatically enhanced when combined with the CMK test. This combination highlights significant trends and effectively corrects for spatial autocorrelation and cross-correlation, ensuring that genuine environmental signals are distinguished from statistical noise. Unlike previous workflows, our approach incorporates the FDR control, which successfully filtered out 29.6% of false discoveries in the case study, resulting in a more stringent assessment of true environmental trends captured by multi-temporal remotely sensed data. In the case study, we found that approximately one-third of the area exhibits significant and statistically robust declines in tree cover, with these declines being geographically clustered. Importantly, these trends correspond with relevant changes in tree cover, emphasising the ability of RTA to detect relevant environmental changes. Overall, our findings underscore the crucial importance of combining these methods, as their synergy is essential for accurately identifying and confirming robust environmental trends. The proposed RTA framework has significant implications for environmental monitoring, modelling, and management. Full article
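The RTA workflow chains three standard statistical tools: a Theil–Sen slope per pixel, a Mann–Kendall-type significance test, and Benjamini–Hochberg FDR control across all pixels. The per-pixel sketch below uses SciPy and statsmodels for those three steps; note that it applies the ordinary (non-contextual) Kendall test, whereas the Contextual Mann–Kendall test used in the paper also pools information from neighbouring pixels.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def robust_trend_analysis(stack, alpha=0.05):
    """stack: array of shape (time, rows, cols), e.g., annual tree-cover values.
    Returns Theil-Sen slopes, BH-adjusted p-values, and the significance mask."""
    t, rows, cols = stack.shape
    years = np.arange(t)
    slopes = np.empty((rows, cols))
    pvals = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            series = stack[:, i, j]
            slopes[i, j] = stats.theilslopes(series, years)[0]   # robust slope
            pvals[i, j] = stats.kendalltau(years, series)[1]     # MK-style p-value
    reject, p_adj, _, _ = multipletests(pvals.ravel(), alpha=alpha, method="fdr_bh")
    return slopes, p_adj.reshape(rows, cols), reject.reshape(rows, cols)
```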
Show Figures

Figure 1

Figure 1
<p>Study area: Los Alcornocales Natural Park (≈174,000 ha), a representative cork oak-dominated mixed Mediterranean forest in the Southwest Iberian Peninsula (Andalusia, Spain). Source: Author’s work. Data from the Department of the Environment of the Regional Government of Andalusia, specifically from the Andalusian Network of Environmental Information (REDIAM).</p>
Full article ">Figure 2
<p>Evidence of cork oak forest decline in Los Alcornocales Natural Park. Source: Lorena Gómez-Aparicio.</p>
Full article ">Figure 3
<p>Research workflow diagram for robust trend analysis (RTA). Source: Author’s work.</p>
Full article ">Figure 4
<p>Density of interannual trends in tree cover (2000–2022) in Los Alcornocales Natural Park. Source: Author’s work. Note: TS slope (dashed green line); combined TS slope and CMK significance (thin solid green line); combined TS slope and CMK significance with FDR control (thick solid green line). Axes details: X-axis—TS slope; Y-axis—spatial distribution density.</p>
Full article ">Figure 5
<p>Controlling the FDR for interannual tree cover trends (2000–2022) in Los Alcornocales Natural Park: FDR–BH <span class="html-italic">p</span>-value adjustment for the CMK test. Source: Author’s work. Note: The <span class="html-italic">x</span>-axis represents the rank (<span class="html-italic">k</span>) of 27,828 <span class="html-italic">p</span>-values sorted in ascending order, while the <span class="html-italic">y</span>-axis displays the observed <span class="html-italic">p</span>-values. The celestial blue line symbolises the threshold defined by the FDR, which is calculated using the following expression: (<span class="html-italic">k</span> · α)/<span class="html-italic">m</span>. Herein, <span class="html-italic">k</span> denotes the rank, α is the significance level, and m is the total number of tests conducted. The part of the line coloured blue indicates the <span class="html-italic">p</span>-values that are deemed significant following the application of the FDR control, using the FDR–BH procedure. Conversely, the part of the line coloured red represents the <span class="html-italic">p</span>-values that are not considered significant according to this adjustment. Altogether, this bi-coloured line comprises 27,828 points (pixels), corresponding to the number of tests performed (<span class="html-italic">m</span>).</p>
Full article ">Figure 6
<p>Histogram of original and adjusted <span class="html-italic">p</span>-values (FDR). Both histograms share the same scale axes for easy comparison. Source: Author’s work. Note: The <span class="html-italic">x</span>-axis shows <span class="html-italic">p</span>-values ranging from 0 to 1, and the <span class="html-italic">y</span>-axis shows frequencies up to 12,000 (pixels). The first bin in each histogram represents pixels with significant trends (α = 0.05). The left histogram displays the original <span class="html-italic">p</span>-values, heavily skewed towards 0, with a green dashed line at 8218 (pixels), which serves as a reference for comparison with the adjusted <span class="html-italic">p</span>-values at the alpha level of 0.05. The right histogram shows the adjusted <span class="html-italic">p</span>-values, illustrating how the FDR–BH procedure reduces the frequency of small <span class="html-italic">p</span>-values to control the FDR at the desired level (5%).</p>
Full article ">Figure 7
<p>Spatial distribution of interannual trends in tree cover (2000–2022) in Los Alcornocales Natural Park using RTA. Source: Author’s work. Note: The choropleth map classifies trends into five intervals based on TS slope values: “Sharp Decrease” (&lt;−0.5) indicates a notably negative annual rate of change; “Moderate Decrease” (−0.5 to −0.1) indicates a moderate annual rate of change; “Stable” (−0.1 to 0.1) implies a negligible annual rate of change; “Moderate Increase” (0.1 to 0.5) indicates a notably positive annual rate of change; and “Sharp Increase” (&gt;0.5) indicates a notably positive annual rate of change.</p>
Full article ">Figure 8
<p>Distribution of interannual trends in tree cover (2000–2022) in Los Alcornocales Natural Park using RTA. Source: Author’s work. Note: The bar chart classifies trends into five intervals based on the TS slope values: “Sharp Decrease” (&lt;−0.5) indicates a notably negative annual rate of change; “Moderate Decrease” (−0.5 to −0.1) indicates a moderate annual rate of change; “Stable” (−0.1 to 0.1) implies a negligible annual rate of change; “Moderate Increase” (0.1 to 0.5) indicates a notably positive annual rate of change; and “Sharp Increase” (&gt;0.5) indicates a notably positive annual rate of change.</p>
Full article ">Figure 9
<p>Spatial distribution of tree cover and tree cover change (2000–2022) in Los Alcornocales Natural Park. Source: Author’s work. Note: The first map represents tree cover derived from MOD44B Version 6.1 Vegetation Continuous Fields (VCF) data for 2000, while the second map shows tree cover for 2022. The third map illustrates the percentage changes in tree cover between 2000 and 2022. However, only relevant changes are presented, specifically those that coincide with the statistically significant trends identified by the RTA.</p>
Full article ">Figure 10
<p>Moran’s I scatter plot of spatial autocorrelation for significant TS trend slopes filtered with the CMK test and FDR control. Source: Author’s work. Note: The scatter plot presents the standardised Theil–Sen slope values on the x-axis against their spatially lagged counterparts on the y-axis. Each point corresponds to a geographic unit exhibiting a significant trend, filtered through the CMK test and controlled with FDR. The red regression line depicts the overall spatial autocorrelation, with Moran’s I value quantifying the strength and direction of this spatial relationship.</p>
Full article ">