Remote Sens., Volume 16, Issue 12 (June-2 2024) – 208 articles

Cover Story (view full-size image): This study addresses the data gap in the deep learning-based pan sharpening of hyperspectral images, a technique used to improve the spatial resolution of an image using a high-resolution panchromatic image while preserving spectral information. Using the ASI PRISMA sensor, a dataset of 262,200 km2 was collected, making it the largest dataset in terms of statistical relevance and scene diversity, which are essential for robust model generalization. Reduced resolution (RR) and full resolution (FR) experiments were also conducted to compare several deep learning pan sharpening algorithms with various non-machine learning methods. The investigation shows that data-driven neural networks significantly outperform traditional methods in terms of spectral and spatial fidelity. An in-depth analysis of both aspects is presented in this work.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 52503 KiB  
Article
Study on the Identification, Failure Mode, and Spatial Distribution of Bank Collapses after the Initial Impoundment in the Head Section of Baihetan Reservoir in Jinsha River, China
by Chuangchuang Yao, Lingjing Li, Xin Yao, Renjiang Li, Kaiyu Ren, Shu Jiang, Ximing Chen and Li Ma
Remote Sens. 2024, 16(12), 2253; https://doi.org/10.3390/rs16122253 - 20 Jun 2024
Cited by 1 | Viewed by 1127
Abstract
After the initial impoundment of the Baihetan Reservoir in April 2021, the water level in front of the dam rose about 200 m. The mechanical properties and effects of the bank slopes in the reservoir area changed significantly, resulting in many bank collapses. This study systematically analyzed the bank slopes of the head section of the reservoir, spanning 30 km from the dam to Baihetan Bridge, through a comprehensive investigation conducted after the initial impoundment. The analysis utilized UAV flights and ground surveys to interpret the bank slopes' distribution characteristics and failure patterns. A total of 276 bank collapses were recorded, with a geohazard development density of 4.6/km. The slope gradient of 26% of the collapsed banks increased by 5 to 20° after impoundment, whereas the inclines of the remaining sites remained unchanged. According to the combination of lithology and movement mode, the bank failure modes are divided into six types: the surface erosion type, surface collapse type, surface slide type, bedding slip type of clastic rock, toppling type of clastic rock, and cavity corrosion type of carbonate rock. It was found that 85% of the collapsed banks in the reservoir area developed in reactivated old landslide deposits, while 15% developed in clastic and carbonate rock. This study offers guidance for the next phase of bank collapse regulation and future geohazard prevention strategies in the Baihetan Reservoir area. Full article
Figure 1. (a,b) The location of study area. (c) Precipitation and water level fluctuations map.
Figure 2. Geological map of the study area.
Figure 3. Structural map of bank slopes in the study area: (a) Cataclinal slope and anaclinal slope. (b) Orthoclinal slope. (c) The bank slope structure of the study area. (d) Cataclinal slope. (e) Orthoclinal slope. (f) Anaclinal slope.
Figure 4. The head section of the bank collapse interpretation: (a) Distribution diagram of bank collapse in the study area. (b) Surface erosion type. (c) Toppling type. (d) Surface slide type. (e) Surface collapse type. (f) Cavity corrosion type. (g) Bedding slip type bank.
Figure 5. Surface erosion type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 6. Surface collapse type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 7. Surface slide type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 8. Bedding slide type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 9. Toppling type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 10. Toppling type bank collapse failure model: (a) Field survey diagram. (b) Sectional diagram.
Figure 11. Statistical diagram of bank collapse: (a) Location. (b) Lithological group. (c) Bank collapse type. (d) Area. (e) Slope gradient. (f) Bank slope structure.
Figure 12. Statistics of bank collapse geometric parameters: (a) Schematic diagram of bank collapse geometric parameters. (b,c) Statistical diagram of bank collapse geometric parameters.
Figure 13. Photos of five sites with threats to roads, tunnels and settlements along the river. (a–c) The bank collapse threatens roads and tunnels. (d) Cracks in highway. (e) The bank collapse threatens the storeroom. (f) Cracks around the storeroom. (g,h) The bank collapse threatens bridges and residential buildings.
23 pages, 76599 KiB  
Article
SRBPSwin: Single-Image Super-Resolution for Remote Sensing Images Using a Global Residual Multi-Attention Hybrid Back-Projection Network Based on the Swin Transformer
by Yi Qin, Jiarong Wang, Shenyi Cao, Ming Zhu, Jiaqi Sun, Zhicheng Hao and Xin Jiang
Remote Sens. 2024, 16(12), 2252; https://doi.org/10.3390/rs16122252 - 20 Jun 2024
Cited by 4 | Viewed by 980
Abstract
Remote sensing images usually contain abundant targets and complex information distributions. Consequently, networks are required to model both global and local information in the super-resolution (SR) reconstruction of remote sensing images. The existing SR reconstruction algorithms generally focus on only local or global features, neglecting effective feedback for reconstruction errors. Therefore, a Global Residual Multi-attention Fusion Back-projection Network (SRBPSwin) is introduced by combining the back-projection mechanism with the Swin Transformer. We incorporate a concatenated Channel and Spatial Attention Block (CSAB) into the Swin Transformer Block (STB) to design a Multi-attention Hybrid Swin Transformer Block (MAHSTB). SRBPSwin develops dense back-projection units to provide bidirectional feedback for reconstruction errors, enhancing the network’s feature extraction capabilities and improving reconstruction performance. SRBPSwin consists of the following four main stages: shallow feature extraction, shallow feature refinement, dense back projection, and image reconstruction. Firstly, for the input low-resolution (LR) image, shallow features are extracted and refined through the shallow feature extraction and shallow feature refinement stages. Secondly, multiple up-projection and down-projection units are designed to alternately process features between high-resolution (HR) and LR spaces, obtaining more accurate and detailed feature representations. Finally, global residual connections are utilized to transfer shallow features during the image reconstruction stage. We propose a perceptual loss function based on the Swin Transformer to enhance the detail of the reconstructed image. Extensive experiments demonstrate the significant reconstruction advantages of SRBPSwin in quantitative evaluation and visual quality. Full article
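As a rough illustration of the back-projection idea described in the abstract (independent of the authors' Swin-based units, which additionally embed channel and spatial attention), the following PyTorch sketch shows a single up-projection/down-projection pair that feeds the low-resolution reconstruction error back into the feature stream; the layer choices and names are illustrative assumptions, not the published implementation.

# Minimal sketch of a back-projection up/down pair (illustrative only; the
# paper's units additionally embed Swin Transformer and attention blocks).
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    """Project LR features to HR space and correct them with the back-projected error."""
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        k, s, p = scale * 2, scale, scale // 2          # kernel/stride/padding for x2 or x4
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.act = nn.PReLU()

    def forward(self, lr_feat: torch.Tensor) -> torch.Tensor:
        hr0 = self.act(self.up1(lr_feat))               # first HR feature estimate
        lr0 = self.act(self.down(hr0))                  # project back to LR space
        err = lr0 - lr_feat                             # reconstruction error in LR space
        return hr0 + self.act(self.up2(err))            # correct HR features with the error

# usage: hr_features = UpProjection(64, scale=2)(torch.randn(1, 64, 48, 48))

In the paper, multiple such up- and down-projection units alternate between HR and LR spaces; the sketch shows only a single pair.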
Graphical abstract
Figure 1. The overall architecture of SRBPSwin. ⊕ indicates the element-wise sum.
Figure 2. (a) Multi-attention Hybrid Swin Transformer Block (MAHSTB). (b) Channel- and Spatial-attention Block (CSAB). (c) Channel attention (CA) block. (d) Spatial attention (SA) block. ⊕ indicates the element-wise sum. ⊗ indicates the element-wise product.
Figure 3. (a) Up-projection Swin Unit (UPSU). (b) Down-projection Swin Unit (DPSU). ⊕ indicates the element-wise sum. ⊖ indicates the element-wise difference.
Figure 4. PSNR curves of our method, based on using CSAB or not. Base refers to the network that uses only STB, while Base + CSAB denotes MAHSTB. The results are compared on the validation dataset with a scale factor of 2× during the overall training phase.
Figure 5. Visual comparison of ablation study to verify the effectiveness of MAHSTB; Base refers to the network that uses only STB, while Base + CSAB denotes MAHSTB. We used a red box to mark the area for enlargement on the left HR image. On the right, we present the corresponding HR image and the results reconstructed by the different methods.
Figure 6. PSNR curves of our method, based on using L_Swin or not. The results are compared on the validation dataset with a scale factor of 2× during the overall training phase.
Figure 7. Visual comparison of ablation study to verify the effectiveness of L_Swin. We used a red box to mark the area for enlargement on the left HR image. On the right, we present the corresponding HR image and the results reconstructed using the different loss functions.
Figure 8. PSNR comparison for different methods on the validation dataset with a scale factor of 2× during the training phase.
Figure 9. PSNR comparison for different methods on the validation dataset with a scale factor of 4× during the training phase.
Figure 10. Visual comparison of some representative SR methods and our model at the 2× scale factor.
Figure 11. Visual comparison of some representative SR methods and our model at the 4× scale factor.
19 pages, 6650 KiB  
Technical Note
Innovative Rotating SAR Mode for 3D Imaging of Buildings
by Yun Lin, Ying Wang, Yanping Wang, Wenjie Shen and Zechao Bai
Remote Sens. 2024, 16(12), 2251; https://doi.org/10.3390/rs16122251 - 20 Jun 2024
Cited by 1 | Viewed by 1362
Abstract
Three-dimensional SAR imaging of urban buildings is currently a hotspot in the research area of remote sensing. Synthetic Aperture Radar (SAR) offers all-time, all-weather, high-resolution imaging capability and is an important tool for monitoring building health. Buildings exhibit geometric distortion in conventional 2D SAR images, which greatly complicates the interpretation of SAR images. This paper proposes a novel Rotating SAR (RSAR) mode, which acquires 3D information of buildings from two different angles in a single rotation. This new RSAR mode takes the center of a straight track as its rotation center, and obtains images of the same facade of a building from two different angles. By utilizing the differences in geometric distortion of buildings in the image pair, the 3D structure of the building is reconstructed. Compared to existing tomographic SAR or circular SAR, this method does not require multiple flights at different elevations or observations from varying aspect angles, and it greatly simplifies data acquisition. Furthermore, both simulation analysis and a real-data experiment have verified the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)
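The height-hypothesis matching that underlies this kind of two-angle reconstruction can be pictured as follows: for each candidate height, the pixel offset between the two-angle image pair is predicted from the imaging geometry, and the height that maximizes the local correlation between the corresponding patches is retained. The Python sketch below is schematic only; the geometry function is a placeholder and does not reproduce the paper's RD projection model.

# Schematic sketch of height-hypothesis matching between two RSAR images
# acquired at angles theta1 and theta2 (placeholder geometry, NumPy only).
import numpy as np

def predicted_offset(height: float, theta1: float, theta2: float) -> float:
    """Hypothetical mapping from target height to column offset between the two
    looks; the real relation follows the paper's RD projection model."""
    return height * (np.cos(theta1) - np.cos(theta2))   # assumption, for illustration

def best_height(patch1: np.ndarray, img2: np.ndarray, row: int, col: int,
                heights: np.ndarray, theta1: float, theta2: float) -> float:
    """Pick the candidate height whose predicted offset maximizes correlation.
    patch1 is the reference patch from the first-angle image, centered on (row, col)."""
    scores = []
    half = patch1.shape[0] // 2
    for h in heights:
        dc = int(round(predicted_offset(h, theta1, theta2)))
        patch2 = img2[row - half:row + half + 1,
                      col + dc - half:col + dc + half + 1]
        if patch2.shape != patch1.shape:
            scores.append(-np.inf)                       # offset falls outside the image
            continue
        a, b = patch1.ravel(), patch2.ravel()
        scores.append(np.corrcoef(a, b)[0, 1])           # normalized correlation
    return float(heights[int(np.argmax(scores))])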
Figure 1. The geometric model of Rotating SAR.
Figure 2. Schematic diagram of BP imaging at different angles in the same coordinate.
Figure 3. RD projection model of the building.
Figure 4. Schematic of the RD projection model.
Figure 5. Geometric projection model at various rotation angles.
Figure 6. RD geometric projection relationship.
Figure 7. Diagram of distance offset among projection points.
Figure 8. Flowchart of the main algorithm for 3D imaging.
Figure 9. Different hypothetical elevation schematics.
Figure 10. Flowchart of image neighborhood matching with n hypothetical heights.
Figure 11. Simulation results for point targets: (a) SAR image of the point targets (A1–G1) at angle θ1; (b) SAR image of the point targets (A2–G2) at angle θ2; (c) the curve depicts the relationship between height and the offset of the image pair; (d) maximum correlation coefficient curve; (e) side view of 3D point clouds; (f) top view of 3D point clouds.
Figure 12. The experimental equipment and scene.
Figure 13. Observation area.
Figure 14. Practical scene experiment results: (a) 2D SAR image at angle θ1; (b) 2D SAR image at angle θ2; (c) mask image of the building; (d) maximum correlation coefficient curve.
Figure 15. Three-dimensional SAR images of the proposed algorithm: (a) side view; (b) top view; (c) different target heights in 2D image; (d) color mapping of point clouds height.
27 pages, 6641 KiB  
Article
Biomass Estimation and Saturation Value Determination Based on Multi-Source Remote Sensing Data
by Rula Sa, Yonghui Nie, Sergey Chumachenko and Wenyi Fan
Remote Sens. 2024, 16(12), 2250; https://doi.org/10.3390/rs16122250 - 20 Jun 2024
Cited by 4 | Viewed by 2096
Abstract
Forest biomass estimation is undoubtedly one of the most pressing research subjects at present. Combining multi-source remote sensing information can exploit the advantages of different remote sensing technologies, providing more comprehensive and richer information for aboveground biomass (AGB) estimation research. Based on Landsat 8, Sentinel-2A, and ALOS-2 PALSAR data, this paper takes the artificial coniferous forests of the Saihanba Forest in Hebei Province as the study object. It establishes remote sensing factors related to forest structure, exploits the strengths of spectral signals in detecting the horizontal structure and of multi-dimensional synthetic aperture radar (SAR) data in detecting the vertical structure, and combines environmental factors in a multivariate, synergistic approach to estimating AGB. Three variable selection methods (the Pearson correlation coefficient, random forest importance, and the least absolute shrinkage and selection operator (LASSO)) are used to establish variable sets, which are combined with three typical non-parametric models, namely random forest (RF), support vector regression (SVR), and artificial neural networks (ANN), to estimate AGB, analyze the effect of forest structure on biomass estimation, identify machine learning models suited to AGB estimation in artificial coniferous forests, and develop a method for quantifying the saturation value of the combined variables. The results show that the horizontal structure explains AGB better than the vertical structure information, and that combining multi-structure information can improve the model results and the saturation value to a great extent. In this study, different sets of variables produce relatively superior results in different models. The variable set selected using LASSO gives the best results in the SVR model, with R2 values of 0.9998 and 0.8792 for the training and test sets, respectively, and the highest saturation value obtained is 185.73 t/ha, which is beyond the range of the measured data. The problem of saturation in biomass estimation in boreal medium- and high-density forests was overcome to a certain extent, and the AGB of the Saihanba area was better estimated. Full article
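The variable-selection-plus-regression pipeline described above (LASSO-selected variables feeding an SVR) can be illustrated with scikit-learn; the feature matrix, hyperparameters, and variable counts below are placeholders rather than the authors' settings.

# Illustrative LASSO-selected-variables -> SVR pipeline (placeholder data and parameters).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 30))                 # stand-in for spectral/SAR/terrain variables
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=150)  # stand-in AGB (t/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) LASSO keeps the variables with non-zero coefficients.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso[-1].coef_)

# 2) SVR is then trained only on the selected variables.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr[:, selected], y_tr)
print("selected variables:", selected)
print("test R2:", r2_score(y_te, svr.predict(X_te[:, selected])))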
Figure 1. Location map of the study area: (a) the location map of the study area; (b) the HV polarization data of the study area; (c) the true color image of Sentinel-2, with the actual sample locations indicated by green dots.
Figure 2. Relationships between forest structural parameters at the sample site level: (a) mean DBH vs. S; (b) CC vs. BA; (c) mean DBH vs. mean forest height, where the size and color shade of the dots vary with biomass.
Figure 3. Flowchart of the methodology.
Figure 4. Determination of the number of model leaves and decision tree.
Figure 5. Parameter optimization diagram of three models. From left to right are the results of RF, SVR, and ANN models. From top to bottom are the results obtained for the horizontal structure indices (V1), vertical structure indices (V2), horizontal + vertical structure indices (V3), horizontal + vertical structure indices + topographical variables (V4), Pearson selection variable (V5), RF importance selection of the variables (V6), and the variable chosen by the LASSO (V7) in each model.
Figure 6. Summary graphs of the results of the training and test sets of the three models for estimating AGB. The first and second rows of each model are the training set results and test set results, respectively. The left side is R², and the right side is RMSE.
Figure 7. Summary plot of model results. From left to right, the horizontal structure indices (V1), vertical structure indices (V2), horizontal + vertical structure indices (V3), horizontal + vertical structure indices + topographical variables (V4), Pearson selection variable (V5), RF importance selection of the variables (V6), and the variable chosen by the LASSO (V7) were introduced into the three models to estimate the results of AGB. From top to bottom are the results of RF, SVR, and ANN models, and the first and second rows of each model are the training set results and test set results, respectively. The horizontal axis of the image is the measured data, the vertical axis is the predicted results, the blue is the 1:1 straight line, and the green is the fitted line.
Figure 8. AGB map of the study area estimated by the LASSO-based SVR model: (a) AGB map of the study area; (b) histogram of AGB distribution.
Figure 9. The spherical model curves for each structure index and different variable sets under different ML models. The left side shows the horizontal structure indices figures (RVI, CTI, MTI, PTI) in order, and the individual index figures of CC, S, and BA are shown on the right. Right side: vertical structure indices figures and a summary plot of the spherical model curves for the different sets of variables under the RF, SVR, and ANN models.
22 pages, 2885 KiB  
Article
Exploring Spatial Patterns of Tropical Peatland Subsidence in Selangor, Malaysia Using the APSIS-DInSAR Technique
by Betsabé de la Barreda-Bautista, Martha J. Ledger, Sofie Sjögersten, David Gee, Andrew Sowter, Beth Cole, Susan E. Page, David J. Large, Chris D. Evans, Kevin J. Tansey, Stephanie Evers and Doreen S. Boyd
Remote Sens. 2024, 16(12), 2249; https://doi.org/10.3390/rs16122249 - 20 Jun 2024
Cited by 1 | Viewed by 1417
Abstract
Tropical peatlands in Southeast Asia have experienced widespread subsidence due to forest clearance and drainage for agriculture, oil palm and pulp wood production, causing concerns about their function as a long-term carbon store. Peatland drainage leads to subsidence (lowering of the peatland surface), an indicator of degraded peatlands, while stability/uplift indicates peatland accumulation and ecosystem health. We used the Advanced Pixel System using Intermittent SBAS (APSIS-DInSAR) technique with biophysical and geographical data to investigate the impact of peatland drainage and agriculture on spatial patterns of subsidence in Selangor, Malaysia. Results showed pronounced subsidence in areas subjected to drainage for agricultural and oil palm plantations, while stable areas were associated with intact forests. The most powerful predictors of subsidence rates were the distance from the drainage canal or peat boundary; however, other drivers such as soil properties and water table levels were also important. The maximum subsidence rate detected was lower than that documented by ground-based methods. Therefore, whilst the APSIS-DInSAR technique may underestimate absolute subsidence rates, it gives valuable information on the direction of motion and spatial variability of subsidence. The study confirms widespread and severe peatland degradation in Selangor, highlighting the value of DInSAR for identifying priority zones for restoration and emphasising the need for conservation and restoration efforts to preserve Selangor peatlands and prevent further environmental impacts. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
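The driver analysis mentioned in the abstract (ranking predictors of subsidence rates by variable importance, as in Figure 5 below) is a standard random-forest exercise. A rough scikit-learn analogue is sketched here, where impurity-based and permutation importances stand in for the node-purity and MSE measures; the driver names and data are invented for illustration and are not the study's variables.

# Sketch of ranking subsidence drivers with a random forest (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
drivers = ["dist_to_canal", "dist_to_boundary", "water_table", "peat_depth", "bulk_density"]
X = rng.normal(size=(500, len(drivers)))                              # stand-in predictor samples
y = -2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=500)  # stand-in subsidence rate

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=1).fit(X, y)

# Node-purity-style ranking (impurity decrease) and MSE-style ranking (permutation).
impurity = rf.feature_importances_
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=1).importances_mean
for name, a, b in sorted(zip(drivers, impurity, perm), key=lambda t: -t[2]):
    print(f"{name:>16s}  impurity={a:.3f}  permutation={b:.3f}")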
Graphical abstract
Figure 1. Study area. North and South Selangor. Red points represent the points where pixels were extracted for the analysis. Peatlands are enclosed in the black polygon and rivers and canals are represented by blue lines.
Figure 2. Methodology framework.
Figure 3. (a) Land cover map from North Selangor; (b) subsidence over North Selangor; (c) number of coherent pairs per pixel over North Selangor (coherence count); (d) land cover map from South Selangor; (e) subsidence over South Selangor; (f) number of coherent pairs per pixel over South Selangor. The subsidence data are in mm yr⁻¹ between 2017 and 2019. A greater negative value (red) indicates a greater subsidence rate. Coherence count data range from 71 to 1335, whereby the higher the value, the greater the number of consistently coherent pairs that exist for this pixel. Black and blue lines represent the peatland extent. Areas of notable interest are marked with a red square.
Figure 4. Rates of subsidence in mm yr⁻¹ computed from the surface motion velocity (2017–2019) among different land cover classes. Mean and SD are shown. A greater negative value indicates greater subsidence rates. (a) North Selangor subsidence rates; (b) South Selangor subsidence rates.
Figure 5. (a) Variable importance based on MSE; (b) variable importance based on node purity; (c) variable importance in optimum variables selected based on MSE; (d) variable importance in optimum variables selected based on node purity.
Figure A1. Residual plots for multiple regression.
18 pages, 5061 KiB  
Article
Generating 10-Meter Resolution Land Use and Land Cover Products Using Historical Landsat Archive Based on Super Resolution Guided Semantic Segmentation Network
by Dawei Wen, Shihao Zhu, Yuan Tian, Xuehua Guan and Yang Lu
Remote Sens. 2024, 16(12), 2248; https://doi.org/10.3390/rs16122248 - 20 Jun 2024
Viewed by 1215
Abstract
Generating high-resolution land cover maps using relatively lower-resolution remote sensing images is of great importance for fine-scale analysis. However, the domain gap between real lower-resolution and synthetic images has not been fully resolved. Furthermore, super-resolution information is not fully exploited in semantic segmentation models. To address these issues, a deeply fused super resolution guided semantic segmentation network using 30 m Landsat images is proposed. A large-scale dataset comprising 10 m Sentinel-2, 30 m Landsat-8 images, and the 10 m European Space Agency (ESA) Land Cover Product is introduced, facilitating model training and evaluation across diverse real-world scenarios. The proposed Deeply Fused Super Resolution Guided Semantic Segmentation Network (DFSRSSN) combines a Super Resolution Module (SRResNet) and a Semantic Segmentation Module (CRFFNet). SRResNet enhances spatial resolution, while CRFFNet leverages super-resolution information for finer-grained land cover classification. Experimental results demonstrate the superior performance of the proposed method on five different testing datasets, achieving 68.17–83.29% and 39.55–75.92% for overall accuracy and kappa, respectively. When compared to ResUnet with an up-sampling block, increases of 2.16–34.27% and 8.32–43.97% were observed for overall accuracy and kappa, respectively. Moreover, we propose a relative drop rate of accuracy metrics to evaluate transferability. The model exhibits improved spatial transferability, demonstrating its effectiveness in generating accurate land cover maps for different cities. Multi-temporal analysis reveals the potential of the proposed method for studying land cover and land use changes over time. In addition, a comparison with state-of-the-art full semantic segmentation models indicates that spatial details are fully exploited and presented in the semantic segmentation results by the proposed method. Full article
(This article belongs to the Special Issue AI-Driven Mapping Using Remote Sensing Data)
Figure 1. The network structure of the deeply fused super resolution guided semantic segmentation network (DFSRSSN): Super Resolution Residual Network (SRResNet) and Cross-Resolution Feature Fusion Network (CRFFNet).
Figure 2. The 10 m land cover results using Landsat-8 images: (a) Landsat-8 RGB images; (b) Super-resolution image; (c) ResUnet with up-sampling block (ResUnet_UP); (d) Deeply Fused Super Resolution Guided Semantic Segmentation Network (DFSRSSN); and (e) ESA 2020.
Figure 3. Comparison of user's accuracy (UA), producer's accuracy (PA), and F1 score for the different testing datasets: (a) Dataset I, (b) Dataset II, (c) Dataset III, (d) Dataset IV, and (e) Dataset V. Note: 1 tree cover, 2 shrubland, 3 grassland, 4 cropland, 5 built-up, 6 bare/sparse vegetation, 7 permanent water bodies, 8 herbaceous wetland.
Figure 4. The land use and land cover map of Wuhan in (a) 2013 and (b) 2020.
Figure 5. Representative scenes of 10 m multi-temporal results for Wuhan: (a) Landsat image in 2013; (b) Land cover map in 2013; (c) Landsat image in 2020; (d) Land cover map in 2020, and (e) ESA 2020.
21 pages, 11309 KiB  
Article
LiDAR Point Cloud Augmentation for Adverse Conditions Using Conditional Generative Model
by Yuxiao Zhang, Ming Ding, Hanting Yang, Yingjie Niu, Maoning Ge, Kento Ohtani, Chi Zhang and Kazuya Takeda
Remote Sens. 2024, 16(12), 2247; https://doi.org/10.3390/rs16122247 - 20 Jun 2024
Cited by 1 | Viewed by 1429
Abstract
The perception systems of autonomous vehicles face significant challenges under adverse conditions, with issues such as obscured objects and false detections due to environmental noise. Traditional approaches, which typically focus on noise removal, often fall short in such scenarios. Addressing the lack of diverse adverse weather data in existing automotive datasets, we propose a novel data augmentation method that integrates realistically simulated adverse weather effects into clear-condition datasets. This method not only addresses the scarcity of data but also effectively bridges domain gaps between different driving environments. Our approach centers on a conditional generative model that uses segmentation maps as a guiding mechanism to ensure the authentic generation of adverse effects, which greatly enhances the robustness of perception and object detection systems in autonomous vehicles operating under varied and challenging conditions. Besides the capability of accurately and naturally recreating over 90% of the adverse effects, we demonstrate that this model significantly improves the performance and accuracy of deep learning algorithms for autonomous driving, particularly in adverse weather scenarios. In experiments employing our augmented approach, we achieved a 2.46% increase in 3D average precision, a marked enhancement in detection accuracy and system reliability, substantiating the model's efficacy with quantifiable improvements in 3D object detection compared to models without augmentation. This work not only serves as an enhancement of autonomous vehicle perception systems under adverse conditions but also marks an advancement in deep learning models for adverse-condition research. Full article
(This article belongs to the Special Issue Remote Sensing Advances in Urban Traffic Monitoring)
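The "early data fusion" of the segmentation map with the clear-weather input can be pictured as channel-wise concatenation before the generator, which would be one way the 6-channel generators mentioned in Figure 5 below receive their conditional guide; the PyTorch sketch here is a toy stand-in under that assumption, not the authors' CycleGAN code.

# Schematic early fusion: concatenate a BEV/range image with its segmentation
# map along the channel axis to form a 6-channel conditional generator input.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy 6-channel-in / 3-channel-out generator standing in for the CycleGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, clear_img: torch.Tensor, seg_map: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([clear_img, seg_map], dim=1)   # early fusion: (B, 6, H, W)
        return self.net(fused)                            # generated adverse-condition image

# usage: fake_snow = ConditionalGenerator()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))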
Figure 1. Point cloud in clear driving scenario (a) and corresponding snow-augmented results (b) expected from augmentation models. Red boxes denote locations where snow effects were generated. Color encoded by height.
Figure 2. Workflow of the condition-guided adverse effects augmentation model based on novel segmentation maps production and early data fusion techniques. The clear data input is obtained from filtered raw adverse data to establish intrinsic correlation for optimal training. The cluster segmentation map serves as a conditional guide, which can be input into the generative model through early data fusion. Data with adverse conditions are generated under the guidance of the segmentation map.
Figure 3. Examples of segmentation maps of CADC dataset in a depth image format for visualization. Images are rendered with pixel values multiplied by 64 under the OpenCV BGR environment for better illustration purposes. Red points denote snow clusters, blue denotes scattered snow points, green denotes all the objects, and black means void (no signal).
Figure 4. Diagram of the early fusion process for conditional augmentation in point clouds.
Figure 5. Architecture of the condition-guided adverse effects augmentation model based on CycleGAN [13]. Clear A and Snow B along with their segmentation maps are the input data. The condition-guided conversions are conducted by 6-channel generators while the reconstructions are completed by 3-channel generators. D_A and D_B are discriminators.
Figure 6. Set (a) augmentation results in the Canadian driving scenario. First row—BEV scenes, colored by height; middle row—clustered results, colored by cluster groups; bottom row—enlarged third-person view center part around the ego vehicle, colored by height. Red boxes and arrows—locations where snow's effects are reproduced.
Figure 7. Set (b) augmentation results in the Canadian driving scenario. First row—BEV scenes, colored by height; middle row—clustered results, colored by cluster groups; bottom row—enlarged third-person view center part around the ego vehicle, colored by height. Red boxes and arrows—locations where snow's effects are reproduced.
Figure 8. Precision and recall rates comparisons of adverse effects generation based on snow clusters.
Figure 9. Qualitative comparison of detection results on samples from CADC containing fierce adverse conditions. The top row shows the corresponding forward 180° RGB images. The rest show the LiDAR point clouds with ground-truth boxes and predictions using the baseline ("no augmentation"), our augmentation, and DROR. Red dots denote pedestrians, and black boxes with red dots in the center denote cars (or trucks). Point cloud colors encoded in height.
Figure 10. Set (a) augmentation results in the Nagoya driving scenario. Red boxes and arrows—locations where adverse effects are synthesized.
Figure 11. Set (b) augmentation results in the Nagoya driving scenario. Red boxes and arrows—locations where adverse effects are synthesized.
25 pages, 11675 KiB  
Article
An Ensemble Machine Learning Model to Estimate Urban Water Quality Parameters Using Unmanned Aerial Vehicle Multispectral Imagery
by Xiangdong Lei, Jie Jiang, Zifeng Deng, Di Wu, Fangyi Wang, Chengguang Lai, Zhaoli Wang and Xiaohong Chen
Remote Sens. 2024, 16(12), 2246; https://doi.org/10.3390/rs16122246 - 20 Jun 2024
Viewed by 1332
Abstract
Urban reservoirs contribute significantly to human survival and ecological balance. Machine learning-based remote sensing techniques for monitoring water quality parameters (WQPs) have gained increasing prominence in recent years. However, these techniques still face challenges such as inadequate band selection, weak machine learning model performance, and the limited retrieval of non-optical active parameters (NOAPs). This study focuses on an urban reservoir, utilizing unmanned aerial vehicle (UAV) multispectral remote sensing and ensemble machine learning (EML) methods to monitor optically active parameters (OAPs, including Chla and SD) and non-optically active parameters (including CODMn, TN, and TP), exploring spatial and temporal variations of WQPs. A framework of Feature Combination and Genetic Algorithm (FC-GA) is developed for feature band selection, along with two frameworks of EML models for WQP estimation. Results indicate FC-GA’s superiority over popular methods such as the Pearson correlation coefficient and recursive feature elimination, achieving higher performance with no multicollinearity between bands. The EML model demonstrates superior estimation capabilities for WQPs like Chla, SD, CODMn, and TP, with an R2 of 0.72–0.86 and an MRE of 7.57–42.06%. Notably, the EML model exhibits greater accuracy in estimating OAPs (MRE ≤ 19.35%) compared to NOAPs (MRE ≤ 42.06%). Furthermore, spatial and temporal distributions of WQPs reveal nitrogen and phosphorus nutrient pollution in the upstream head and downstream tail of the reservoir due to human activities. TP, TN, and Chla are lower in the dry season than in the rainy season, while clarity and CODMn are higher in the dry season than in the rainy season. This study proposes a novel approach to water quality monitoring, aiding in the identification of potential pollution sources and ecological management. Full article
(This article belongs to the Special Issue Remote Sensing in Natural Resource and Water Environment II)
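A stacked ensemble in the spirit of the EML frameworks described above (level-0 base regressors whose out-of-fold predictions feed a level-1 meta-model) can be sketched with scikit-learn; the base learners, meta-learner, and data below are placeholders, not the paper's configuration.

# Sketch of a stacked ensemble (level-0 base models + level-1 meta-model) for one WQP.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(size=(120, 8))                                     # stand-in: selected UAV band features
y = 5 * X[:, 0] - 3 * X[:, 3] + rng.normal(scale=0.2, size=120)    # stand-in: e.g. Chla concentration

ensemble = StackingRegressor(
    estimators=[                                                   # level-0 base models
        ("svr", SVR(kernel="rbf")),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=2)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=2)),
    ],
    final_estimator=BayesianRidge(),                               # level-1 meta-model
    cv=5,                                                          # out-of-fold predictions feed the meta-model
)
print("CV R2:", cross_val_score(ensemble, X, y, cv=5, scoring="r2").mean())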
Figure 1. Flowchart of the research framework. The whole study consists of four main parts: data collection and preprocessing, feature bands selection, ensemble machine learning model development, and retrieval of the spatial and temporal distribution of WQPs.
Figure 2. The study area. (a,b) Locations of the Longdong Reservoir. (c) Sampling points of water quality in the Longdong Reservoir. In panel (c), the sampling points of water quality are marked in green, and the RGB image is composited from three UAV remote sensing bands of Red, Blue, and Green.
Figure 3. Flowchart of feature band selection based on FC-GA. FC-GA consists of two parts: feature combination of arithmetic operations and random combination, band selection based on genetic algorithm, and VIF.
Figure 4. The modeling framework of the EML-1 and EML-2. (a,c) The training and prediction of EML-1, respectively; (b,d) the training and prediction of EML-2, respectively. The difference in panel (b) compared to panel (a) is that the training dataset and testing dataset of level 0 are re-entered into the meta-model of level 1, and the difference in panel (d) compared to panel (c) is the input of new data into the meta-model of level 1.
Figure 5. Performance evaluation of five water quality parameters using EML-1, EML-2, BRR, SVR, NNR, CART, RF, LightGBM, and MLP. Each panel, (a–e), indicates Chla, SD, CODMn, TN, and TP, respectively. Meanwhile, the black dotted line presents an angle of 45°.
Figure 6. Maps of WQPs concentration distribution. (A–E) Chla, SD, CODMn, TN, and TP, respectively; (a–f) the reversal results of six periods, respectively. CODMn, TN, and TP are divided according to the grade range of China's Environmental Quality Standards for Surface Water (GB3838-2002 [8]), while Chla and SD are equally divided according to the total range of each. A small number of holes in the maps are caused by image mosaicking deviation.
Figure 7. Trends of water quality changes in the six periods of the best model reversion. (a–e) The five WQPs of Chla, SD, CODMn, TN, and TP, respectively; A–F in the horizontal coordinates represent the six periods of 4 January 2022, 7 April 2022, 31 July 2022, 26 April 2023, 27 May 2023, and 11 June 2023, respectively. The error bars represent the degree of dispersion of each water quality parameter.
Full article ">
26 pages, 8482 KiB  
Article
Adaptive Background Endmember Extraction for Hyperspectral Subpixel Object Detection
by Lifeng Yang, Xiaorui Song, Bin Bai and Zhuo Chen
Remote Sens. 2024, 16(12), 2245; https://doi.org/10.3390/rs16122245 - 20 Jun 2024
Viewed by 1049
Abstract
Subpixel object detection presents a significant challenge within the domain of hyperspectral image (HSI) processing, primarily due to the inherently limited spatial resolution of imaging spectrometers. In subpixel object detection, the dimensional extent of the object of interest is smaller than an individual pixel, which significantly diminishes the utility of spatial information pertaining to the object. Therefore, the efficacy of detection algorithms depends heavily on the spectral data inherent in the image. The detection of subpixel objects in hyperspectral imagery primarily relies on the suppression of the background and the enhancement of the object of interest. Hence, acquiring accurate background information from HSIs is a crucial step. In this study, an adaptive background endmember extraction method for hyperspectral subpixel object detection is proposed. An adaptive scale constraint is incorporated into the background spectral endmember learning process to improve the adaptability of background endmember extraction, thus further enhancing the algorithm's generalizability and applicability in diverse analytical scenarios. Experimental results demonstrate that the adaptive endmember extraction-based subpixel object detection algorithm consistently outperforms existing state-of-the-art algorithms in terms of detection efficacy on both simulated and real-world datasets. Full article
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)
Graphical abstract
Figure 1. Methodological framework adopted for the proposed hyperspectral subpixel object detection.
Figure 2. Reference spectrum of the object to be detected on the simulated dataset.
Figure 3. The simulated hyperspectral dataset: (a) with SNR = 30 dB; (b) ground truth.
Figure 4. ROC curves of different hyperspectral subpixel detection approaches on the simulated dataset: (a) SNR = 25 dB; (b) SNR = 30 dB; and (c) SNR = 35 dB.
Figure 5. The experimental Urban dataset: (a) true color composite of the hyperspectral imagery; (b) ground truth.
Figure 6. The reference background endmembers of the experimental Urban dataset.
Figure 7. Object detection score images of various subpixel detection methodologies applied to the Urban dataset: (a) 2D plot of SACE; (b) 3D plot of SACE; (c) 2D plot of CSCR; (d) 3D plot of CSCR; (e) 2D plot of hCEM; (f) 3D plot of hCEM; (g) 2D plot of SPSMF; (h) 3D plot of SPSMF; (i) 2D plot of PALM; (j) 3D plot of PALM; (k) 2D plot of HSPRD; (l) 3D plot of HSPRD; (m) 2D plot of the proposed method; and (n) 3D plot of the proposed method. (The X-axis and Y-axis represent the spatial coordinates of the column pixel index and the row pixel index, respectively.)
Figure 8. ROC curves of different hyperspectral subpixel detection approaches on the experimental Urban dataset.
Figure 9. The endmember estimates obtained via HSPRD and the proposed method: (a) Asphalt; (b) Grass; (c) Trees; and (d) Roofs.
Figure 10. The MUUFL Gulfport dataset: (a) true color composite of the hyperspectral imagery; (b) ground truth of Solid Brown panels.
Figure 11. Reference spectrum of the Solid Brown panel on the MUUFL Gulfport dataset.
Figure 12. Results of various subpixel detection methods applied to the MUUFL Gulfport dataset: (a) SACE; (b) CSCR; (c) hCEM; (d) SPSMF; (e) PALM; (f) HSPRD; and (g) the proposed method.
Figure 13. ROC curves of different hyperspectral subpixel detection approaches on the MUUFL Gulfport dataset.
Figure 14. The HOSD hyperspectral data: (a) the false-color composite of the hyperspectral imagery; (b) ground truth.
Figure 15. Results of various subpixel detection methodologies applied to the HOSD dataset: (a) SACE; (b) CSCR; (c) hCEM; (d) SPSMF; (e) PALM; (f) HSPRD; and (g) the proposed method.
Figure 16. ROC curves of different subpixel detection approaches on the HOSD dataset.
18 pages, 5290 KiB  
Article
Assessing Ice Break-Up Trends in Slave River Delta through Satellite Observations and Random Forest Modeling
by Ida Moalemi, Homa Kheyrollah Pour and K. Andrea Scott
Remote Sens. 2024, 16(12), 2244; https://doi.org/10.3390/rs16122244 - 20 Jun 2024
Viewed by 1347
Abstract
The seasonal temperature trends and ice phenology in the Great Slave Lake (GSL) are significantly influenced by inflow from the Slave River. The river undergoes a sequence of mechanical break-ups all the way to the GSL, initiating the GSL break-up process. Additionally, upstream water management practices affect the discharge of the Slave River and, consequently, the ice break-up of the GSL. Therefore, monitoring the break-up process at the Slave River Delta (SRD), where the river meets the lake, is crucial for understanding the cascading effects of upstream activities on GSL ice break-up. This research aimed to use Random Forest (RF) models to monitor the ice break-up processes at the SRD using a combination of satellite images with relatively high spatial resolution, including Landsat-5, Landsat-8, Sentinel-2a, and Sentinel-2b. The RF models were trained using selected training pixels to classify ice, open water, and cloud. The onset of break-up was determined by data-driven thresholds on the ice fraction in images with less than 20% cloud coverage. Analysis of break-up timing from 1984 to 2023 revealed a significantly earlier trend (Mann–Kendall test, p-value of 0.05). Furthermore, break-up data from recent years, derived from images with better temporal resolution, show a high degree of variability in the break-up rate. Full article
(This article belongs to the Special Issue Advances of Remote Sensing and GIS Technology in Surface Water Bodies)
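The Mann–Kendall trend test applied to the annual break-up dates is straightforward to write out; the sketch below implements the standard S statistic with the normal approximation (no tie correction) on invented day-of-year values, purely to show the mechanics rather than reproduce the study's data.

# Minimal Mann–Kendall trend test (no tie correction) on annual break-up day-of-year values.
import numpy as np
from math import erf, sqrt

def mann_kendall(x: np.ndarray) -> tuple[float, float]:
    """Return (S statistic, two-sided p-value) under the normal approximation."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)   # continuity-corrected z-score
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))          # two-sided p-value
    return float(s), p

# Placeholder break-up days of year with a slight earlier (negative) trend.
years = np.arange(1984, 2024)
doy = 140 - 0.3 * (years - 1984) + np.random.default_rng(3).normal(scale=4, size=len(years))
s, p = mann_kendall(doy)
print(f"S = {s:.0f}, p = {p:.3f} -> " + ("earlier trend" if s < 0 and p < 0.05 else "no significant trend"))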
Show Figures

Figure 1

Figure 1
<p>Site map of the Slave River Delta (SRD) and Peace Athabasca Delta (PAD) in the Northwest Territories and Alberta, Canada. The (<b>a</b>) SRD and (<b>b</b>) PAD images were acquired on May 23, 2018 and May 14, 2014, respectively.</p>
Full article ">Figure 2
<p>Performance of Landsat (<b>left</b>) and Sentinel (<b>right</b>) models. The training accuracy of the Sentinel model experiences more fluctuation than that of the Landsat model. This fluctuation may be the result of more features and the additional cloud class in the Sentinel model.</p>
Full article ">Figure 3
<p>Examples of manually selecting training areas generated from (<b>a</b>) a Sentinel image captured on May 12, 2014; (<b>b</b>) a Landsat image captured on May 30, 2020; and (<b>c</b>) a Landsat image captured on May 10, 2019. The black, cyan, and blue colors correspond to cloud, ice, and water training areas, respectively. Training polygons with nearly equal contributions of ice and water pixels have an even distribution over the SRD.</p>
Full article ">Figure 4
<p>Distribution of cloud fractions of Landsat and Sentinel datasets from May 1 to May 30. Each open triangle corresponds to an individual image with a total number of 210. The cloud percentages were generated from the SRD boundary (the images were masked using the SRD shapefile to exclude land pixels).</p>
Full article ">Figure 5
<p>Distribution of training images over the categorized ice fractions. Given that an image was not acquired every day, the distribution of images helped us choose the water/ice thresholds corresponding to the break-up period.</p>
Full article ">Figure 6
<p>Workflow of RF modeling and trend analysis.</p>
Full article ">Figure 7
<p>Examples of (<b>1</b>) the Landsat model’s performance with an overall accuracy of 97.8%: (<b>a</b>) the RGB scene captured on May 25, 2013; (<b>b</b>) the RGB scene captured on May 28, 2014; and (<b>c</b>) and (<b>d</b>) the corresponding model classification plots. (<b>2</b>) The Sentinel model’s performance with an overall accuracy of 91.5%: (<b>a</b>) the RGB scene captured on May 21, 2019; (<b>b</b>) the RGB scene captured on May 27, 2021; and (<b>c</b>) and (<b>d</b>) the corresponding plots. (<b>3</b>) The performance of both Landsat and Sentinel models using images taken on the same date (May 23, 2018): (<b>a</b>) the RGB plot captured by the Landsat-8 satellite; (<b>b</b>) the corresponding classified image (Sentinel-2b); and (<b>c</b>) the corresponding classified image (Landsat-8).</p>
Full article ">Figure 8
<p>Descriptive statistics of tree decisions for Landsat (<b>left</b>) and Sentinel (<b>right</b>) models using testing pixels. The maximum value of 1 indicates that all trees unanimously decided on the same classification for a pixel.</p>
Full article ">Figure 9
<p>Examples of classification within the PAD. The (<b>a</b>) RGB scenes were acquired on (<b>1</b>) May 11, 2013, and (<b>2</b>) May 14, 2014. The corresponding plots are shown in (<b>b</b>) respectively.</p>
Full article ">Figure 10
<p>Rate of break-up occurrence in: (<b>a</b>) 2018; (<b>b</b>) 2019; and (<b>c</b>) 2020. These years with data from Sentinel-2a and Sentinel-2b indicate a high degree of variability in the break-up rate. Solid lines are used as a visual aid.</p>
Full article ">Figure 11
<p>Black triangles indicate cloud-free images with an ice fraction of 60% to 90%, while red positive points are the corresponding weighted averages or estimated days for the start of break-up. A linear model (red line) was fitted to estimate break-up onset with a confidence level of 0.99, an RSE of 11.98, and a slope of −0.84.</p>
Full article ">Figure 12
<p>Images captured during the month of May with a less than 20% cloud fraction. Each color corresponds to one satellite. Landsat-5 started in 1984 and continued until 2011. Landsat-8 started afterward and remains active. Furthermore, to fill the temporal gaps in recent years, Sentinel data were used as supplementary data starting from 2015 and 2017.</p>
Full article ">Figure 13
<p>Break-up anomalies from 1984 to 2023 using Landsat archives (<b>right</b>) and a combination of Landsat and Sentinel archives (<b>left</b>). The y axis indicates the average value (zero) and the bars represent the differences in days of break-up estimates from the average value. A lower temporal resolution resulted in more temporal gaps; however, the years of 2015, 2017, 2022, and 2023 could be identified successfully without Sentinel records and have a similar estimation of the break-up onset date to the results with Sentinel records.</p>
18 pages, 9615 KiB  
Article
Multi-Scale Window Spatiotemporal Attention Network for Subsurface Temperature Prediction and Reconstruction
by Jiawei Jiang, Jun Wang, Yiping Liu, Chao Huang, Qiufu Jiang, Liqiang Feng, Liying Wan and Xiangguang Zhang
Remote Sens. 2024, 16(12), 2243; https://doi.org/10.3390/rs16122243 - 20 Jun 2024
Cited by 2 | Viewed by 1273
Abstract
In this study, we investigate the feasibility of using historical remote sensing data to predict the future three-dimensional subsurface ocean temperature structure. We also compare the performance differences between predictive models and real-time reconstruction models. Specifically, we propose a multi-scale residual spatiotemporal window [...] Read more.
In this study, we investigate the feasibility of using historical remote sensing data to predict the future three-dimensional subsurface ocean temperature structure. We also compare the performance differences between predictive models and real-time reconstruction models. Specifically, we propose a multi-scale residual spatiotemporal window ocean (MSWO) model based on a spatiotemporal attention mechanism, to predict changes in the subsurface ocean temperature structure over the next six months using satellite remote sensing data from the past 24 months. Our results indicate that predictions made using historical remote sensing data closely approximate those made using historical in situ data. This finding suggests that satellite remote sensing data can be used to predict future ocean structures without relying on valuable in situ measurements. Compared to future predictive models, real-time three-dimensional structure reconstruction models can learn more accurate inversion features from real-time satellite remote sensing data. This work provides a new perspective for the application of artificial intelligence in oceanography for ocean structure reconstruction. Full article
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
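As a rough illustration of the 24-months-in, 6-months-out setup described above, the sketch below slices a monthly archive into training pairs. The array names and shapes are assumptions (in the paper the targets would be the three-dimensional subsurface temperature fields rather than the same surface variables), and this is not the MSWO pipeline.

```python
import numpy as np

def make_samples(monthly_fields, in_len=24, out_len=6):
    """Slice a monthly archive (T, C, H, W) into (input, target) training pairs.

    monthly_fields: gridded monthly variables stacked per time step.
    Returns inputs (N, in_len, C, H, W) and targets (N, out_len, C, H, W).
    """
    t = monthly_fields.shape[0]
    xs, ys = [], []
    for start in range(t - in_len - out_len + 1):
        xs.append(monthly_fields[start:start + in_len])
        ys.append(monthly_fields[start + in_len:start + in_len + out_len])
    return np.stack(xs), np.stack(ys)

# Hypothetical archive: 120 months, 3 surface variables on a 64x64 grid
archive = np.random.rand(120, 3, 64, 64).astype(np.float32)
x, y = make_samples(archive)
print(x.shape, y.shape)   # (91, 24, 3, 64, 64) (91, 6, 3, 64, 64)
```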
Show Figures

Figure 1
<p>(<b>a</b>) Surface ocean prediction using remote sensing images. (<b>b</b>) Subsurface ocean prediction using measured profile data. (<b>c</b>) Subsurface ocean reconstruction using satellite remote sensing images. (<b>d</b>) Subsurface ocean prediction using satellite remote sensing images.</p>
Full article ">Figure 2
<p>(<b>a</b>) Flow chart of RNN iterative framework on spatiotemporal prediction tasks. (<b>b</b>) Flow chart of the generative framework for spatiotemporal prediction tasks.</p>
Full article ">Figure 3
<p>Flow diagram of MSWO using satellite remote sensing data to predict subsurface ocean temperature structure. 4-D represents the xyz axis in the time dimension and space.</p>
Full article ">Figure 4
<p>Details in the MSWO process. (<b>a</b>) The process of tensor dimension change in the whole process. (<b>b</b>) Window segmentation and attention calculation diagram. (<b>c</b>) SWO model flow chart. (<b>d</b>) MSWO model flow chart.</p>
Full article ">Figure 5
<p>Evaluation index results of different models at different depths. (<b>a</b>) MSE, (<b>b</b>) RMSE, (<b>c</b>) MAE.</p>
Full article ">Figure 6
<p>Evaluation index results of different models in the predicted 6-month period. (<b>a</b>) MSE, (<b>b</b>) RMSE, (<b>c</b>) MAE.</p>
Full article ">Figure 7
<p>Evaluation index results of historical satellite remote sensing predicting future profile structure (S2P) and historical profile predicting future profile structure (P2P) models at different depths. (<b>a</b>) MSE, (<b>b</b>) RMSE, (<b>c</b>) MAE.</p>
Full article ">Figure 8
<p>(<b>a</b>) RMSE changes of different baseline models on the 12-month prediction task of the test dataset. (<b>b</b>) Seasonal RMSE bar chart of different benchmark models on the prediction task of the test dataset. (<b>c</b>) RMSE changes for different baseline models on 12-month reconstruction tasks of the test dataset. (<b>d</b>) Seasonal RMSE histogram for different benchmark models on the reconstruction task of the test dataset.</p>
Full article ">Figure 9
<p>Section selection diagram.</p>
Full article ">Figure 10
<p>Profile diagram of the January 2019 test dataset drawn at A1–A4 cross sections.</p>
Full article ">Figure 11
<p>Profile diagram of the January 2019 test dataset drawn at B1–B3 cross sections.</p>
Full article ">Figure 12
<p>All baseline models were compared with the MSWO model for planar error plots and density scatter plots.</p>
31 pages, 62358 KiB  
Article
Comprehensive Ecological Risk Changes and Their Relationship with Ecosystem Services of Alpine Grassland in Gannan Prefecture from 2000–2020
by Zhanping Ma, Jinlong Gao, Tiangang Liang, Zhibin He, Senyao Feng, Xuanfan Zhang and Dongmei Zhang
Remote Sens. 2024, 16(12), 2242; https://doi.org/10.3390/rs16122242 - 20 Jun 2024
Cited by 3 | Viewed by 1391
Abstract
Alpine grassland is one of the most fragile and sensitive ecosystems, and it serves as a crucial ecological security barrier on the Tibetan Plateau. Due to the combined influence of climate change and human activities, the degradation of the alpine grassland in Gannan [...] Read more.
Alpine grassland is one of the most fragile and sensitive ecosystems, and it serves as a crucial ecological security barrier on the Tibetan Plateau. Due to the combined influence of climate change and human activities, the degradation of the alpine grassland in Gannan Prefecture has been increasing in recent years, raising ecological risk (ER) and confronting the grassland ecosystem with unprecedented challenges. In this context, it is particularly crucial to construct a potential grassland damage index (PGDI) and assessment framework that can be used to effectively characterize the damage and risk to the alpine grassland ecosystem. This study comprehensively uses multi-source data to construct a PGDI based on the grassland resilience index, landscape ER index, and grass–livestock balance index. We then proposed a feasible framework for assessing the comprehensive ER of alpine grassland and analyzed the relationship between the comprehensive ER and the comprehensive ecosystem services (ESs) of the grassland. There are four findings. First, the comprehensive ER of the alpine grassland in Gannan Prefecture from 2000 to 2020 was low in the southeast and high in the northwest, with medium risk (29.27%) and lower risk (27.62%) dominating. The high-risk area accounted for 4.58% and was mainly in Lintan County, the border between Diebu and Zhuoni Counties, the eastern part of Xiahe County, and the southwest part of Hezuo. Second, the comprehensive ESs showed the opposite pattern, being low in the northwest and high in the southeast. The low and lower services accounted for only 9.30% of the studied area and were mainly distributed in the west of Maqu County and central Lintan County. Third, the Moran's index values for comprehensive ESs and ER for 2000, 2005, 2010, 2015, and 2020 were −0.246, −0.429, −0.348, −0.320, and −0.285, respectively, indicating significant negative spatial autocorrelation in all years. Fourth, ER was driven by the combined action of multiple factors, whose explanatory power differs significantly: the landscape index is the dominant factor, with q values greater than 0.25, followed by DEM and NDVI. In addition, the interaction between the diversity index and NDVI had the greatest impact on ER. Overall, this study offers a new methodological framework for the quantification of comprehensive ER in alpine grasslands. Full article
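The negative spatial autocorrelation between comprehensive ESs and ER reported above can be checked with a bivariate Moran's I. The sketch below uses one common formulation with z-scored variables and an assumed contiguity weights matrix; the numbers are illustrative rather than the study's data or its GIS workflow.

```python
import numpy as np

def bivariate_morans_i(x, y, w):
    """One common form of bivariate Moran's I: cross-correlation of x with the
    spatially lagged y, given a spatial weights matrix w (n x n, zero diagonal)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    s0 = w.sum()                     # sum of all weights
    # with z-scored variables, the usual n / (S0 * sum(zx**2)) factor reduces to 1 / S0
    return float(zx @ w @ zy) / s0

# Hypothetical ES and ER values on five grid cells with a contiguity weights matrix
x = np.array([0.8, 0.7, 0.3, 0.2, 0.5])            # comprehensive ES
y = np.array([0.2, 0.3, 0.8, 0.7, 0.5])            # comprehensive ER
w = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]], dtype=float)
print(bivariate_morans_i(x, y, w))                  # negative value -> negative spatial association
```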
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Overview of the study areas. (<b>a</b>,<b>b</b>) Geographic location and elevation of Gannan Prefecture; (<b>c</b>–<b>g</b>) photographs of the grassland, forest, wetland, cultivated land, and shrubland, respectively; (<b>h</b>,<b>i</b>) scenes of grassland desertification and rodent damage.</p>
Full article ">Figure 2
<p>Research framework.</p>
Full article ">Figure 3
<p>Spatiotemporal distribution of the PGDI from 2000–2020.</p>
Full article ">Figure 4
<p>Spatiotemporal distribution of the ER levels from 2000–2020.</p>
Full article ">Figure 5
<p>The ER level transfer matrix. (<b>a</b>) 2000–2005; (<b>b</b>) 2005–2010; (<b>c</b>) 2010–2015; (<b>d</b>) 2015–2020.</p>
Full article ">Figure 6
<p>The ER level changes from 2000–2020.</p>
Full article ">Figure 7
<p>Spatiotemporal distribution of the ESI during 2000–2020.</p>
Full article ">Figure 8
<p>Spatiotemporal distribution of ESs levels during 2000–2020.</p>
Full article ">Figure 9
<p>The ESs level transfer matrix. (<b>a</b>) 2000–2005, (<b>b</b>) 2005–2010, (<b>c</b>) 2010–2015, (<b>d</b>) 2015–2020.</p>
Full article ">Figure 10
<p>The ESs level changes from 2000–2020.</p>
Full article ">Figure 11
<p>Global Moran’s index scatter plot. (<b>a</b>) 2000, (<b>b</b>) 2005, (<b>c</b>) 2010, (<b>d</b>) 2015, (<b>e</b>) 2020, (<b>f</b>) mean.</p>
Full article ">Figure 12
<p>Local Moran’s index local indicators of spatial correlation clustering plot.</p>
Full article ">Figure 13
<p>The explanatory power of ecological risk. Note: X1, precipitation; X2, temperature; X3, DEM; X4, slope; X5, aspect; X6, NDVI; X7, evenness index; X8, diversity index; X9, contagion index; X10, land reclamation rate; X11, gross domestic product per capita; X12, fertilizer usage; X13, industrial addition; X14, grazing pressure; X15, urbanization rate.</p>
Full article ">Figure 14
<p>Results of the interactive detection of the driving factors of ecological risk. Note: X1, precipitation; X2, temperature; X3, DEM; X4, slope; X5, aspect; X6, NDVI; X7, evenness index; X8, diversity index; X9, contagion index; X10, land reclamation rate; X11, gross domestic product per capita; X12, fertilizer usage; X13, industrial addition; X14, grazing pressure; X15, urbanization rate.</p>
Full article ">Figure A1
<p>The spatiotemporal distribution of habitat quantity in Gannan Prefecture.</p>
Full article ">Figure A2
<p>The spatiotemporal distribution of water yield in Gannan Prefecture.</p>
Full article ">Figure A3
<p>The spatiotemporal distribution of soil conservation in Gannan Prefecture.</p>
Full article ">Figure A4
<p>The spatiotemporal distribution of the grassland resilience index in Gannan Prefecture.</p>
Full article ">Figure A5
<p>Spatiotemporal distribution of landscape ecological risk index in Gannan Prefecture.</p>
Full article ">Figure A6
<p>Spatiotemporal distribution of grass–livestock balance index in Gannan.</p>
20 pages, 6070 KiB  
Article
An Improved Propagation Prediction Method of Low-Frequency Skywave Fusing Fine Channel Parameters
by Jian Wang, Chengsong Duan, Yu Chen, Yafei Shi and Cheng Yang
Remote Sens. 2024, 16(12), 2241; https://doi.org/10.3390/rs16122241 - 20 Jun 2024
Cited by 1 | Viewed by 1373
Abstract
Low-frequency communication constitutes a vital component of essential communication systems, serving a pivotal role in remote radio communication, navigation, timing, and seismic analysis. To enhance the predictive precision of low-frequency skywave propagation and address the demands of engineering applications, we propose a high-precision [...] Read more.
Low-frequency communication constitutes a vital component of essential communication systems, serving a pivotal role in remote radio communication, navigation, timing, and seismic analysis. To enhance the predictive precision of low-frequency skywave propagation and address the demands of engineering applications, we propose a high-precision prediction method based on the ITU-R P.684 wave-hop theory and real-time environmental parameter forecasts. This method features several distinctive attributes. Firstly, it employs real-time ionospheric prediction data instead of relying on long-term ionospheric model predictions. Secondly, it utilizes a detailed map of land–sea surface electrical characteristics, surpassing the simplistic land–sea dichotomy previously employed. Compared with measured data, the findings demonstrate that we attained a reasonable propagation pattern and achieved high-precision field strength predictions. Comparatively, the improved method exhibits an improvement in the time and spatial domains over the ITU-R P.684 standard. Finally, the improved method balances computational efficiency with enhanced prediction accuracy, supporting the advancement of low-frequency communication system design and performance evaluation. Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Geometric diagram of skywave propagation path within 2000 km. (<b>a</b>) One-hop propagation (without diffraction). (<b>b</b>) One-hop propagation (including diffraction). (<b>c</b>) Two-hop propagation (without diffraction). (<b>d</b>) Two-hop propagation (including diffraction).</p>
Full article ">Figure 2
<p>Observation stations selected for reconstruction.</p>
Full article ">Figure 3
<p>Comparison of variation pattern of ionospheric reflection height <span class="html-italic">h</span><sub>r</sub> and <span class="html-italic">f</span><sub>O</sub>E when JST = 12 in East Asia. (<b>a</b>) <span class="html-italic">f</span><sub>O</sub>E of the ITU method. (<b>b</b>) <span class="html-italic">f</span><sub>O</sub>E of the improved method. (<b>c</b>) The difference in <span class="html-italic">f</span><sub>O</sub>E between the ITU method and the improved method. (<b>d</b>) <span class="html-italic">h</span><sub>r</sub> of the ITU method. (<b>e</b>) <span class="html-italic">h</span><sub>r</sub> of the improved method. (<b>f</b>) The difference in <span class="html-italic">h</span><sub>r</sub> between the ITU method and the improved method.</p>
Full article ">Figure 4
<p>Changes in land–sea surface electrical characteristics in East Asia. (<b>a</b>) The land–sea distinction in the ITU method. (<b>b</b>) Refined land electrical characteristics.</p>
Full article ">Figure 5
<p>Comparison of the variation in relevant parameters in East Asia. (<b>a</b>) <span class="html-italic">ψ</span> of the ITU method. (<b>b</b>) <span class="html-italic">ψ</span> of the PRO method. (<b>c</b>) The difference in <span class="html-italic">ψ</span> between the ITU method and the PRO method. (<b>d</b>) <span class="html-italic">i</span> of the ITU method. (<b>e</b>) <span class="html-italic">i</span> of the PRO method. (<b>f</b>) The difference in <span class="html-italic">i</span> between the ITU method and the PRO method. (<b>g</b>) <span class="html-italic">D</span> of the ITU method. (<b>h</b>) <span class="html-italic">D</span> of the PRO method. (<b>i</b>) The difference in <span class="html-italic">D</span> between the ITU method and the PRO method. (<b>j</b>) <span class="html-italic">F</span><sub>r</sub> of the ITU method. (<b>k</b>) <span class="html-italic">F</span><sub>r</sub> of the PRO method. (<b>l</b>) The difference in <span class="html-italic">F</span><sub>r</sub> between the ITU method and the PRO method. (<b>m</b>) <span class="html-italic">R</span> of the ITU method. (<b>n</b>) <span class="html-italic">R</span> of the PRO method. (<b>o</b>) The difference in <span class="html-italic">R</span> between the ITU method and the PRO method.</p>
Full article ">Figure 6
<p>Location of the transmitter and fixed receiver. Green ●: transmitter location. Blue ▼: fixed receiver location.</p>
Full article ">Figure 7
<p>The variation pattern of field strength with time. Blue ●: measured field strength at the corresponding moment. Red square line: predicted field strength of the ITU method. Green triangle line: predicted field strength of the PRO method.</p>
Full article ">Figure 8
<p>Variation of errors over time. (<b>a</b>) Error: |OBS − ITU|. (<b>b</b>) Error: |OBS − PRO|. Gray double-dash line arc: contour of error (1 dB difference per circle).</p>
Full article ">Figure 9
<p>Location of the transmitter and mobile receivers. Green ●: transmitter location. Blue ▼: mobile receiver locations.</p>
Full article ">Figure 10
<p>The variation pattern of field strength with distance at different hours. Blue ●: measured field strength at the corresponding distance. Blue double-dash line: predicted field strength of the ITU method. Red solid line: predicted field strength of the PRO method. Horizontal axis: distance (km). Vertical axis: field strength (dB (μV/m)).</p>
Full article ">Figure 11
<p>Variation of errors over distance. Green column: error (|OBS − ITU|). Pink column: error (|OBS − PRO|). Horizontal axis: corresponding receiving point.</p>
Full article ">Figure 12
<p>Comparison of the overall field strength. Black ■: measured field strength. Red ●: predicted field strength of the ITU method. Blue △: predicted field strength of the PRO method. Pink column: error (|OBS − ITU|). Green column: error (|OBS − PRO|).</p>
Full article ">Figure 13
<p>Overall error statistics of the two methods. (<b>a</b>) ITU method. (<b>b</b>) PRO method. Blue column: histogram of frequency distribution. Red solid line: normal distribution curve.</p>
Full article ">Figure 14
<p>Method comparison. (<b>a</b>) ITU method. (<b>b</b>) PRO method. (<b>c</b>) |ITU−PRO|.</p>
28 pages, 13295 KiB  
Article
Optimized Parameters for Detecting Multiple Forest Disturbance and Recovery Events and Spatiotemporal Patterns in Fast-Regrowing Southern China
by Yuwei Tu, Kaiping Liao, Yuxuan Chen, Hongbo Jiao and Guangsheng Chen
Remote Sens. 2024, 16(12), 2240; https://doi.org/10.3390/rs16122240 - 20 Jun 2024
Viewed by 1364
Abstract
The timing, location, intensity, and drivers of forest disturbance and recovery are crucial for developing effective management strategies and policies for forest conservation and ecosystem resilience. Although many algorithms and improvement methods have been developed, it is still difficult to guarantee the detection [...] Read more.
The timing, location, intensity, and drivers of forest disturbance and recovery are crucial for developing effective management strategies and policies for forest conservation and ecosystem resilience. Although many algorithms and improvement methods have been developed, it is still difficult to guarantee the detection accuracy for forest disturbance and recovery patterns in southern China due to the complex climate and topography, fast forest recovery after disturbance, and the low availability of noise-free Landsat images. Here, we improved the LandTrendr parameters for different provinces to detect forest disturbance and recovery trajectories based on the LandTrendr change detection algorithm and time-series Landsat images on the GEE platform, and then applied a secondary random forest classifier to classify the forest disturbance and recovery patterns in southern China during 1990–2020. The accuracy evaluation indicated that our approach and improved parameters of the LandTrendr algorithm can increase the detection accuracy for both the spatiotemporal patterns and multiple events of forest disturbance and recovery, with an overall accuracy greater than 86% and a Kappa coefficient greater than 0.91 for different provinces. The total forest loss area was 1.54 × 10⁵ km² during 1990–2020 (4931 km²/year); however, most of these disturbed forests recovered, and only 6.39 × 10⁴ km² was a net loss area (converted to other land cover types). The area experiencing two or more disturbance events accounted for 11.50% of the total forest loss area. The total forest gain area (including gain after loss and the afforestation area) was 5.44 × 10⁵ km², of which the forest gain area after loss was 8.94 × 10⁴ km² and the net gain area from afforestation was 4.55 × 10⁵ km². The timing of the implementation of forestry policies significantly affected the interannual variations in forest disturbance and recovery, with large variations among different provinces. The detected forest loss and gain areas were further compared against inventory and other geospatial products, which confirmed the effectiveness of our method. Our study suggests that parameter optimization in the LandTrendr algorithm can greatly increase the accuracy of detecting multiple and lower-rate disturbance/recovery events in fast-regrowing forested areas. Our findings also offer long-term, moderate-spatial-resolution, and precise forest dynamics data for achieving sustainable forest management and the carbon neutrality goal in southern China. Full article
(This article belongs to the Special Issue Natural Hazard Mapping with Google Earth Engine)
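A hedged sketch of how per-province LandTrendr parameters might be supplied through the GEE Python API is shown below. The parameter names follow the public ee.Algorithms.TemporalSegmentation.LandTrendr interface as we recall it, while the asset ID and parameter values are placeholders, not the optimized settings reported in the paper.

```python
import ee
ee.Initialize()

# Hypothetical annual NBR composite collection for one province (one image per year, band 'NBR').
# Building such a collection (cloud masking, compositing, etc.) is omitted here.
annual_nbr = ee.ImageCollection("users/example/annual_nbr_guangxi")  # placeholder asset ID

# Per-province parameter set; values are illustrative, not the paper's optimized ones.
lt_params = {
    "timeSeries": annual_nbr,
    "maxSegments": 8,
    "spikeThreshold": 0.9,
    "vertexCountOvershoot": 3,
    "preventOneYearRecovery": False,   # relaxed so fast regrowth is not discarded
    "recoveryThreshold": 0.5,          # allows quicker-than-default recovery segments
    "pvalThreshold": 0.1,
    "bestModelProportion": 0.75,
    "minObservationsNeeded": 6,
}

lt_result = ee.Algorithms.TemporalSegmentation.LandTrendr(**lt_params)
# the result image holds the fitted segmentation array plus an RMSE band
print(lt_result.bandNames().getInfo())
```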
Show Figures

Figure 1
<p>The locations of the study area, provinces, and the sampling plots representing persistent forest, forest gain, forest loss, and non-forest areas.</p>
Full article ">Figure 2
<p>The work flow of this study. Note: NFI: national forest inventory; LFMI: local forest management inventory; GEE: Google Earth Engine.</p>
Full article ">Figure 3
<p>Comparison of classified forest loss/gain area with the visually interpreted loss/gain area and GFC forest loss product based on NBR at five locations (<b>a</b>–<b>e</b>) with available high-resolution images. Note: GFC is the global forest change product [<a href="#B4-remotesensing-16-02240" class="html-bibr">4</a>]. The red dashed line is the identified disturbance year based on the fitted trajectory of Normalized Burned Ratio (NBR).</p>
Full article ">Figure 4
<p>The selected area for IoU analysis in Taojiang County, Hunan Province, during 2015–2020. Note: (<b>a</b>): The manually digitized disturbed areas (loss year 2015: zones 1 and 4; 2017: zones 2, 3, and 5) based on the Google Earth Pro high-resolution images; (<b>b</b>): our classified disturbed areas; (<b>c</b>): disturbed area from the Global Forest Change (GFC) product.</p>
Full article ">Figure 5
<p>The LandTrendr identified breakpoints for disturbance events with the optimized and the default (original) parameter values at four randomly-selected sampling plots (<b>a</b>–<b>d</b>) based on the fitted NBR change magnitudes.</p>
Full article ">Figure 6
<p>The comparison of the detected forest loss area and the fraction of disturbance times using default parameters (<b>a</b>) and optimized parameters (<b>b</b>) in Guangxi Province.</p>
Full article ">Figure 7
<p>Forest loss and gain area (100 km<sup>2</sup>) in the study region and different provinces during 1990–2020.</p>
Full article ">Figure 8
<p>The spatial distribution of forest loss area and occurrence years in southern China. Note: the boxes from 1 to 8 are magnified areas (scaled to 1:10,000) to show clearer spatial patterns in forest loss area and years.</p>
Full article ">Figure 9
<p>Forest disturbance severity (%) at 10 km spatial resolution in southern China.</p>
Full article ">Figure 10
<p>The spatial distribution of forest gain area and occurrence years in southern China. Note: the boxes from 1 to 8 are magnified areas (scaled to 1:10,000) to show clearer spatial patterns in forest gain area and years.</p>
Full article ">Figure 11
<p>Fraction of forest gain area (%) at 10 km spatial resolution in southern China.</p>
Full article ">Figure 12
<p>The forest disturbance frequency (times) and the area fraction in each province in southern China.</p>
Full article ">Figure 13
<p>The overall forest dynamics including persistent, net loss, net gain, and gain after loss forest area in southern China during 1990–2020. Note: the top-right graph is the comparisons of the classified provincial net gain area with the National Forest Inventory (NFI) statistical afforestation area.</p>
Full article ">Figure 14
<p>The comparison of the detected forest area during 1990–2020 using improved parameters (black line) and default parameters (dotted line) with the National Forest Inventory (NFI) statistical data (discrete points).</p>
Full article ">Figure 15
<p>Comparisons of our detected annual forest loss area in different provinces with the global forest change (GFC) product [<a href="#B4-remotesensing-16-02240" class="html-bibr">4</a>].</p>
Full article ">Figure A1
<p>The ranking of relative importance (top 10) of different input variables in detecting forest disturbance and recovery in the RF classifier for Sichuan (<b>top</b>) and Guangxi (<b>bottom</b>) Province. Note: dur: duration of spectral change; preval: pre-disturbance spectral value; mag: spectral change magnitude; rate: spectral change rate; and dsnr: signal-to-noise ratio of spectral change.</p>
Full article ">Figure A2
<p>The implementation timeline of major forestry polices and their effects on the interannual variations of forest loss and gain area during 1990–2020. Note: the numbers along the dashed arrow lines are the policy symbols listed in <a href="#remotesensing-16-02240-t0A3" class="html-table">Table A3</a>.</p>
19 pages, 1380 KiB  
Article
Highlighting the Use of UAV to Increase the Resilience of Native Hawaiian Coastal Cultural Heritage
by Kainalu K. Steward, Brianna K. Ninomoto, Haunani H. Kane, John H. R. Burns, Luke Mead, Kamala Anthony, Luka Mossman, Trisha Olayon, Cybil K. Glendon-Baclig and Cherie Kauahi
Remote Sens. 2024, 16(12), 2239; https://doi.org/10.3390/rs16122239 - 20 Jun 2024
Cited by 2 | Viewed by 2416
Abstract
The use of Uncrewed Aerial Vehicles (UAVs) is becoming a preferred method for supporting integrated coastal zone management, including cultural heritage sites. Loko i′a, traditional Hawaiian fishponds located along the coastline, have historically provided sustainable seafood sources. These coastal cultural heritage sites are [...] Read more.
The use of Uncrewed Aerial Vehicles (UAVs) is becoming a preferred method for supporting integrated coastal zone management, including cultural heritage sites. Loko i′a, traditional Hawaiian fishponds located along the coastline, have historically provided sustainable seafood sources. These coastal cultural heritage sites are undergoing revitalization through community-driven restoration efforts. However, sea level rise (SLR) poses a significant climate-induced threat to coastal areas globally. Loko i′a managers seek adaptive strategies to address SLR impacts on flooding, water quality, and the viability of raising native fish species. This study utilizes extreme tidal events, known as King Tides, as a proxy to estimate future SLR scenarios and their impacts on loko i′a along the Keaukaha coastline in Hilo, Hawai′i. In situ water level sensors were deployed at each site to assess flooding by the loko i′a type and location. We also compare inundation modeled from UAV-Structure from Motion (SfM) Digital Elevation Models (DEM) to publicly available Light Detection and Ranging (LiDAR) DEMs, alongside observed flooding documented by UAV imagery in real time. The average water levels (0.64 m and 0.88 m) recorded in this study during the 2023 King Tides are expected to reflect the average sea levels projected for 2060–2080 in Hilo, Hawai′i. Our findings indicate that high-resolution UAV-derived DEMs accurately model observed flooding (with 89% or more agreement), whereas LiDAR-derived flood models significantly overestimate observed flooding (by 2–5 times), outlining a more conservative approach. To understand how UAV datasets can enhance the resilience of coastal cultural heritage sites, we looked into the cost, spatial resolution, accuracy, and time necessary for acquiring LiDAR- and UAV-derived datasets. This study ultimately demonstrates that UAVs are effective tools for monitoring and planning for the future impacts of SLR on coastal cultural heritage sites at a community level. Full article
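A minimal sketch of the kind of DEM-based flood mapping described above is given below: a connected "bathtub" model that floods cells below a given water level only when they connect to open water. The DEM, ocean mask, and water level are hypothetical, and this is not the authors' workflow.

```python
import numpy as np
from scipy import ndimage

def model_inundation(dem, water_level, ocean_mask):
    """Connected 'bathtub' flood model: cells at or below the water level are flooded
    only if they connect (8-neighbour) to the ocean, avoiding isolated low spots."""
    below = dem <= water_level
    labels, _ = ndimage.label(below, structure=np.ones((3, 3)))
    ocean_labels = np.unique(labels[ocean_mask & below])
    ocean_labels = ocean_labels[ocean_labels != 0]
    return np.isin(labels, ocean_labels)

# Hypothetical high-resolution DEM (metres) with the seaward edge treated as open water
dem = np.random.uniform(0.0, 2.0, size=(200, 200))
ocean = np.zeros_like(dem, dtype=bool)
ocean[:, 0] = True                       # assume the left edge is the ocean boundary
flooded = model_inundation(dem, water_level=0.88, ocean_mask=ocean)
print("flooded fraction:", float(flooded.mean()))
```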
Show Figures

Figure 1
<p>Examples of different types of loko i′a included in this study that are labeled with significant natural, cultural, and modern features. Loko kuapā (<b>A</b>) have rock wall enclosures, Loko wai (<b>B</b>) are freshwater-dominated ponds, and Kāheka (<b>C</b>) are natural pools with more direct wave exposure.</p>
Full article ">Figure 2
<p>Location of where the study sites are located within the Hawai′i archipelago (<b>A</b>). The loko i′a are situated along the Eastern Keaukaha Coastline on Hawai′i island (<b>B</b>). Each marker in both Transects represents the location of an in situ water sensor. The loko i′a within Honohononui (Transect 1) are Laehala (orange), Hale o Lono (green), and Waiāhole (green). Within Hui Ho′oleimaluō (Transect 2), the loko i′a are Honokea (green) and Kaumaui (purple). The location of the barometer (white) is also indicated.</p>
Full article ">Figure 3
<p>Observed UAV King Tide flooding (0.64 m) documented on 3 July 2023 at Honokea loko i′a with ground-truthed photos in the field. Observations include inundation at the main freshwater spring (<b>A</b>), high water levels intruding on Native bird habitat (<b>B</b>), and flooding causing debris to spread (<b>C</b>).</p>
Full article ">Figure 4
<p>Modeled Inundation Area of Honokea loko i′a with the UAV observed at low tide (dotted line) and at high tide (purple), with UAV-modeled (light blue) and LiDAR-modeled (yellow) flooding overlaid. Areas of agreement between observed and modeled flooding are overlapping (light purple). The in situ water sensor is located near the rock wall (blue dot).</p>
17 pages, 5035 KiB  
Article
CLIP-Driven Few-Shot Species-Recognition Method for Integrating Geographic Information
by Lei Liu, Linzhe Yang, Feng Yang, Feixiang Chen and Fu Xu
Remote Sens. 2024, 16(12), 2238; https://doi.org/10.3390/rs16122238 - 20 Jun 2024
Cited by 1 | Viewed by 1128
Abstract
Automatic recognition of species is important for the conservation and management of biodiversity. However, since closely related species are visually similar, it is difficult to distinguish them by images alone. In addition, traditional species-recognition models are limited by the size of the dataset [...] Read more.
Automatic recognition of species is important for the conservation and management of biodiversity. However, since closely related species are visually similar, it is difficult to distinguish them by images alone. In addition, traditional species-recognition models are limited by the size of the dataset and face the problem of poor generalization ability. Visual-language models such as Contrastive Language-Image Pretraining (CLIP), obtained by training on large-scale datasets, have excellent visual representation learning ability and demonstrated promising few-shot transfer ability in a variety of few-shot species recognition tasks. However, limited by the dataset on which CLIP is trained, the performance of CLIP is poor when used directly for few-shot species recognition. To improve the performance of CLIP for few-shot species recognition, we proposed a few-shot species-recognition method incorporating geolocation information. First, we utilized the powerful feature extraction capability of CLIP to extract image features and text features. Second, a geographic feature extraction module was constructed to provide additional contextual information by converting structured geographic location information into geographic feature representations. Then, a multimodal feature fusion module was constructed to deeply interact geographic features with image features to obtain enhanced image features through residual connection. Finally, the similarity between the enhanced image features and text features was calculated and the species recognition results were obtained. Extensive experiments on the iNaturalist 2021 dataset show that our proposed method can significantly improve the performance of CLIP’s few-shot species recognition. Under ViT-L/14 and 16-shot training species samples, compared to Linear probe CLIP, our method achieved a performance improvement of 6.22% (mammals), 13.77% (reptiles), and 16.82% (amphibians). Our work provides powerful evidence for integrating geolocation information into species-recognition models based on visual-language models. Full article
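To illustrate the general idea of fusing geolocation into CLIP features with a residual connection, the PyTorch sketch below assumes precomputed, L2-normalized CLIP image and text embeddings. Module names, dimensions, and the sin/cos location encoding are assumptions for illustration, not the SG-CLIP architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeoFusion(nn.Module):
    """Sketch: encode (lat, lon) with an MLP and fuse it into a frozen CLIP image
    feature via a residual connection, then renormalize for cosine similarity."""
    def __init__(self, feat_dim=512, geo_dim=4, hidden=256):
        super().__init__()
        self.geo_encoder = nn.Sequential(nn.Linear(geo_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, feat_dim))
        self.fuse = nn.Sequential(nn.Linear(feat_dim * 2, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, feat_dim))

    def forward(self, image_feat, latlon):
        # wrap-aware encoding of latitude/longitude (sin and cos of radians)
        geo = torch.cat([torch.sin(latlon * torch.pi / 180.0),
                         torch.cos(latlon * torch.pi / 180.0)], dim=-1)
        geo_feat = self.geo_encoder(geo)
        enhanced = image_feat + self.fuse(torch.cat([image_feat, geo_feat], dim=-1))  # residual
        return F.normalize(enhanced, dim=-1)

# Hypothetical precomputed CLIP features: 8 images and 100 class-text embeddings (512-d)
img = F.normalize(torch.randn(8, 512), dim=-1)
txt = F.normalize(torch.randn(100, 512), dim=-1)
coords = torch.tensor([[45.0, -122.6]]).repeat(8, 1)     # (lat, lon) in degrees
logits = GeoFusion()(img, coords) @ txt.t()              # cosine similarity -> class scores
print(logits.argmax(dim=-1))
```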
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The overall framework of SG-CLIP for few-shot species recognition. It contains three paths for text, image, and geographic information, respectively. The geographic feature is obtained by GFEM. The parameters of GFEM and IGFFM are learnable.</p>
Full article ">Figure 2
<p>The structure of GFEM for geographic feature extraction. The dashed box is the structure of the FCResLayer.</p>
Full article ">Figure 3
<p>The structure of IGFFM for image and geographic feature fusion, where Fc denotes the fully connected layer, ReLU denotes the ReLU activation function, and LayerNorm denotes layer normalization. DFB denotes the dynamic fusion block. DFB is used recursively, where <span class="html-italic">N</span> is the number of DFB modules.</p>
Full article ">Figure 4
<p>Heatmaps of the geolocation distribution. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians. Different colors indicate the number of species at different locations. Green indicates relatively little data and red indicates a large number.</p>
Full article ">Figure 5
<p>Performance comparison of different methods with different training samples on different datasets. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians.</p>
Full article ">Figure 6
<p>Comparison of few-shot species recognition accuracy on different datasets under different versions of CLIP. (<b>a</b>) Mammals. (<b>b</b>) Reptiles. (<b>c</b>) Amphibians.</p>
Full article ">Figure 7
<p>Visualization of t-SNE representations under different methods. (<b>a</b>) Zero-shot CLIP under ViT-B/32. (<b>b</b>) Zero-shot CLIP under ViT-L/14. (<b>c</b>) SG-CLIP under ViT-B/32. (<b>d</b>) SG-CLIP under ViT-L/14.</p>
17 pages, 2403 KiB  
Article
Estimating Pavement Condition by Leveraging Crowdsourced Data
by Yangsong Gu, Mohammad Khojastehpour, Xiaoyang Jia and Lee D. Han
Remote Sens. 2024, 16(12), 2237; https://doi.org/10.3390/rs16122237 - 20 Jun 2024
Cited by 1 | Viewed by 1297
Abstract
Monitoring pavement conditions is critical to pavement management and maintenance. Traditionally, pavement distress is mainly identified via accelerometers, videos, and laser scanning. However, the geographical coverage and temporal frequency are constrained by the limited amount of equipment and labor, which sometimes may delay [...] Read more.
Monitoring pavement conditions is critical to pavement management and maintenance. Traditionally, pavement distress is mainly identified via accelerometers, videos, and laser scanning. However, the geographical coverage and temporal frequency are constrained by the limited amount of equipment and labor, which sometimes may delay road maintenance. By contrast, crowdsourced data, in a manner of crowdsensing, can provide real-time and valuable roadway information for extensive coverage. This study exploited crowdsourced Waze pothole and weather reports for pavement condition evaluation. Two surrogate measures are proposed, namely, the Pothole Report Density (PRD) and the Weather Report Density (WRD). They are compared with the Pavement Quality Index (PQI), which is calculated using laser truck data from the Tennessee Department of Transportation (TDOT). A geographically weighted random forest (GWRF) model was developed to capture the complicated relationships between the proposed measures and PQI. The results show that the PRD is highly correlated with the PQI, and the correlation also varies across the routes. It is also found to be the second most important factor (i.e., followed by pavement age) affecting the PQI values. Although Waze weather reports contribute to PQI values, their impact is significantly smaller compared to that of pothole reports. This paper demonstrates that surrogate pavement condition measures aggregated by crowdsourced data could be integrated into the state decision-making process by establishing nuanced relationships between the surrogated performance measures and the state pavement condition indices. The endeavor of this study also has the potential to enhance the granularity of pavement condition evaluation. Full article
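As an illustration of a report-density surrogate like the PRD described above, the pandas sketch below counts map-matched pothole reports per pavement section and normalizes by section length. The exact normalization used in the paper (for example, the time window) is assumed here, and the data are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical Waze pothole reports already map-matched to pavement sections
reports = pd.DataFrame({
    "section_id": ["I40-001", "I40-001", "I40-002", "SR1-010"],
    "report_date": pd.to_datetime(["2021-03-02", "2021-07-15", "2021-05-20", "2021-06-01"]),
})
sections = pd.DataFrame({
    "section_id": ["I40-001", "I40-002", "SR1-010"],
    "length_mi": [1.0, 0.8, 1.2],
})

# Pothole Report Density: reports per mile for each section (one-year window assumed)
counts = reports.groupby("section_id").size().rename("n_reports")
prd = sections.set_index("section_id").join(counts).fillna({"n_reports": 0})
prd["PRD"] = prd["n_reports"] / prd["length_mi"]
print(prd)
```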
Show Figures

Figure 1
<p>Spatial distribution of surrogate measures, (<b>a</b>) pothole report density, and (<b>b</b>) weather report density.</p>
Full article ">Figure 2
<p>MAE and RMSE of GWRF with different bandwidths.</p>
Full article ">Figure 3
<p>Average variable importance %incMSE.</p>
Full article ">Figure 4
<p>Spatial distribution of variable importance of (<b>a</b>) pavement AGE; (<b>b</b>) PRD; (<b>c</b>) AADTT; and (<b>d</b>) WRD.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>R<sup>2</sup> of local random forest models.</p>
20 pages, 17559 KiB  
Article
Assessing Ecological Impacts and Recovery in Coal Mining Areas: A Remote Sensing and Field Data Analysis in Northwest China
by Deyun Song, Zhenqi Hu, Yi Yu, Fan Zhang and Huang Sun
Remote Sens. 2024, 16(12), 2236; https://doi.org/10.3390/rs16122236 - 19 Jun 2024
Cited by 2 | Viewed by 1448
Abstract
In the coal-rich provinces of Shanxi, Shaanxi, and Inner Mongolia, the landscape bears the scars of coal extraction—namely subsidence and deformation—that disrupt both the terrain and the delicate ecological balance. This research delves into the transformative journey these mining regions undergo, from pre-mining [...] Read more.
In the coal-rich provinces of Shanxi, Shaanxi, and Inner Mongolia, the landscape bears the scars of coal extraction—namely subsidence and deformation—that disrupt both the terrain and the delicate ecological balance. This research delves into the transformative journey these mining regions undergo, from pre-mining equilibrium, through the tumultuous phase of extraction, to the eventual restoration of stability post-reclamation. By harnessing a suite of analytical tools, including sophisticated remote sensing, UAV aerial surveys, and the meticulous ground-level sampling of flora and soil, the study meticulously measures the environmental toll of mining activities and charts the path to ecological restoration. The results are promising, indicating that the restoration initiatives are effectively healing the landscapes, with proactive interventions such as seeding, afforestation, and land rehabilitation proving vital in the swift ecological turnaround. Remote sensing technology, in particular, emerges as a robust ally in tracking ecological shifts, supporting sustainable practices and guiding ecological management strategies. This study offers a promising framework for assessing geological environmental shifts, which may guide policymakers in shaping the future of mining rehabilitation in arid and semi-arid regions. Full article
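Several of the vegetation metrics used in this kind of assessment (e.g., vegetation fractional cover, VFC) are commonly derived from NDVI with the dimidiate pixel model. The sketch below is that generic formulation with percentile-based endmembers, not the authors' exact parameterization.

```python
import numpy as np

def fractional_vegetation_cover(ndvi, soil_pct=5, veg_pct=95):
    """Dimidiate pixel model: VFC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil),
    with bare-soil and full-vegetation endmembers taken from NDVI percentiles."""
    ndvi_soil, ndvi_veg = np.nanpercentile(ndvi, [soil_pct, veg_pct])
    vfc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(vfc, 0.0, 1.0)

# Hypothetical NDVI scene over a mining area
ndvi = np.random.uniform(-0.1, 0.8, size=(512, 512))
vfc = fractional_vegetation_cover(ndvi)
print("mean VFC:", float(vfc.mean()))
```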
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of the trial region.</p>
Full article ">Figure 2
<p>Methodological framework of this study. (Soil moisture content (SMC), pH level, total nitrogen (TN), available phosphorus (AP), and available potassium (AK)).</p>
Full article ">Figure 3
<p>Indices for various topographic factors before and after mining at the Erlintu coal mine.</p>
Full article ">Figure 4
<p>Topographic factors’ distribution before and after mining: (<b>a</b>) slope distribution before and after mining; (<b>b</b>) aspect distribution before and after mining; (<b>c</b>) surface roughness distribution before and after mining; (<b>d</b>) terrain relief distribution before and after mining.</p>
Full article ">Figure 5
<p>Extraction of ground fissures in a mining-disturbed area in 2023: (<b>a</b>) UAV data of the mining area in Region R in 2023, with dense fissures. (<b>b</b>) Surface conditions in Region R in May. (<b>c</b>) Extracted distribution of ground fissures in Region R based on May imagery. (<b>d</b>) Surface conditions in Region R in August with the locations of large fissure areas (width &gt; 15 cm) P1 and small fissure areas (width &lt; 15 cm) P2. (<b>e</b>) Surface conditions in P1 area. (<b>f</b>) Surface conditions in P2 area.</p>
Full article ">Figure 6
<p>Extraction of ground fissures in mining-disturbed area in 2014: (<b>a</b>) Surface image of the mining area in Region Q in 2014, with dense fissures. (<b>b</b>) Surface conditions in the dense fissure area Q. (<b>c</b>) Extracted distribution of ground fissures in Region Q.</p>
Full article ">Figure 7
<p>Long-term NDVI value change in the trial region.</p>
Full article ">Figure 8
<p>The evolution of VFC in the Erlintu mining area: (<b>a</b>) evolution in VFC: 2013–2015; (<b>b</b>) evolution in VFC: 2013–2018; (<b>c</b>) evolution in VFC: 2013–2022; (<b>d</b>) evolution in VFC: 2018–2020; (<b>e</b>) evolution in VFC: 2018–2022.</p>
Full article ">Figure 9
<p>Vegetation dynamics in areas A1 and A2: (<b>a</b>) proportions of each category; (<b>b</b>) numerical distribution of VFC changes.</p>
Full article ">Figure 10
<p>Vegetation dynamics in area B1 and B2: (<b>a</b>) proportions of each category; (<b>b</b>) numerical distribution of VFC changes.</p>
Full article ">Figure 11
<p>Transformation of RSEI: (<b>a</b>) 2015 RSEI map; (<b>b</b>) 2018 RSEI map; (<b>c</b>) 2020 RSEI map; (<b>d</b>) 2022 RSEI map.</p>
Full article ">Figure 12
<p>Contrasts the change in RSEI values between the mining-affected area and the control area.</p>
Full article ">Figure 13
<p>Soil element analysis of different trial regions in Erlintu coal mine.</p>
18 pages, 14154 KiB  
Article
Three-Dimensional Rockslide Analysis Using Unmanned Aerial Vehicle and LiDAR: The Castrocucco Case Study, Southern Italy
by Antonio Minervino Amodio, Giuseppe Corrado, Ilenia Graziamaria Gallo, Dario Gioia, Marcello Schiattarella, Valentino Vitale and Gaetano Robustelli
Remote Sens. 2024, 16(12), 2235; https://doi.org/10.3390/rs16122235 - 19 Jun 2024
Cited by 1 | Viewed by 1052
Abstract
Rockslides are one of the most dangerous hazards in mountainous and hilly areas. In this study, a rockslide that occurred on 30 November 2022 in Castrocucco, a district located in the Italian municipality of Maratea (Potenza province) in the Basilicata region, was investigated [...] Read more.
Rockslides are one of the most dangerous hazards in mountainous and hilly areas. In this study, a rockslide that occurred on 30 November 2022 in Castrocucco, a district located in the Italian municipality of Maratea (Potenza province) in the Basilicata region, was investigated by using pre- and post-event high-resolution 3D models. The event caused a great social alarm as some infrastructures were affected. The main road to the tourist hub of Maratea was, in fact, destroyed and made inaccessible. Rock debris also affected a beach club and important boat storage for sea excursions to Maratea. This event was investigated by using multiscale and multisensor close-range remote sensing (LiDAR and SfM) to determine rockslide characteristics. The novelty of this work lies in how these data, although not originally acquired for rockslide analysis, have been integrated and utilized in an emergency at an almost inaccessible site. The event was analyzed both through classical geomorphological analysis and through a quantitative comparison of multi-temporal DEMs (DoD) in order to assess (i) all the morphological features involved, (ii) detached volume (approximately 8000 m3), and (iii) the process of redistributing and reworking the landslide deposit in the depositional area. Full article
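The DEM-of-difference (DoD) volume estimate mentioned above follows directly from differencing co-registered DEMs. The sketch below shows the generic computation with an assumed level of detection, using synthetic arrays rather than the Castrocucco point clouds.

```python
import numpy as np

def dod_volumes(dem_pre, dem_post, cell_size, lod=0.15):
    """DEM of Difference (DoD): per-cell elevation change, thresholded by a level of
    detection (LoD, metres) to suppress noise, then converted to cut/fill volumes."""
    dz = dem_post - dem_pre
    dz = np.where(np.abs(dz) < lod, 0.0, dz)           # ignore changes below the LoD
    cell_area = cell_size ** 2
    erosion = float(-dz[dz < 0].sum() * cell_area)      # detached volume (m^3)
    deposition = float(dz[dz > 0].sum() * cell_area)    # deposited volume (m^3)
    return erosion, deposition

# Hypothetical co-registered 0.5 m DEMs of the slope before and after the event
dem_pre = np.random.uniform(0, 50, size=(400, 400))
dem_post = dem_pre + np.random.normal(0, 0.05, size=dem_pre.shape)
print(dod_volumes(dem_pre, dem_post, cell_size=0.5))
```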
Show Figures

Figure 1
<p>(<b>a</b>) Hillshade and 20 m contour map of the Serra di Castrocucco ridge and (<b>b</b>) close-up map of the Castrocucco headland with 10 m contour lines. (<b>c</b>) The white dashed line highlights the landslide scar on a pre-2022 orthophoto (2018).</p>
Full article ">Figure 2
<p>(<b>a</b>) Overview of the Castrocucco landslide that, on 30 November 2022, hit the State Road SS18 (<b>b</b>,<b>c</b>) <a href="https://www.lecronachelucane.it/2022/12/03/frana-in-castrocucco-di-maratea-il-fate-presto-non-esiste-poiche-e-vasta-larea-con-predisposizione-alla-instabilita/" target="_blank">https://www.lecronachelucane.it/2022/12/03/frana-in-castrocucco-di-maratea-il-fate-presto-non-esiste-poiche-e-vasta-larea-con-predisposizione-alla-instabilita/</a>, accessed on 3 December 2022), one of the main roads leading to the tourist town of Maratea, causing its 8-month closure.</p>
Full article ">Figure 3
<p>Record of daily and cumulative rainfall from “Castrocucco” pluviometer station (111 m s.l.m., about 7 km north of the landslide site), provided by the Civil Protection of Basilicata Region.</p>
Full article ">Figure 4
<p>Acquisition, processing and output flowchart.</p>
Full article ">Figure 5
<p>Outlines of the dataset coverage (<b>a</b>). Flight plan in figures (<b>b</b>,<b>c</b>) refers to 2018 and 2022 surveys, respectively.</p>
Full article ">Figure 6
<p>D13 classified point cloud.</p>
Full article ">Figure 7
<p>Shaded relief, topography (<b>a</b>), ortophoto from D18 point cloud (<b>b square in a</b>), and view from the SSE (<b>c</b>) of the main morphological features of the landslide area; close-up views of ortophoto from D18 (<b>d</b>) and D22 (<b>e</b>) point clouds showing the area (red circles) affected by the movement of a wedge-shaped area bounded by N50 and N90 discontinuities (white dashed lines in (<b>d</b>)) which promote the later failure of the main block; (<b>f,g</b>) views from the SS18 State Road of N50 and N90 discontinuities affecting the westernmost sector of the cliff edge prior the 30 November 2022 event.</p>
Full article ">Figure 8
<p>Kinematic analysis for the main block sliding. The discontinuity pattern was extracted by the D18 point cloud using the Cloud Compare 2.13 software. Parameters: Slope dip: 45°; Slope Dip Direction: 195; Friction Angle: 30°. The orange area highlights the critical zone, where the intersections of the two sets of discontinuities (fault planes and joints) have the potential for planar and wedge-type failure. The plot suggests a SW orientation of the direction of sliding.</p>
Full article ">Figure 9
<p>Maratea rockslide: (<b>a</b>) ortophoto (from D22 point cloud), with the red arrow indicating later debris/earth flow events, and (<b>b</b>) DoD volume difference obtained between 2022 and 2013.</p>
Full article ">Figure 10
<p>DoD between D22 and D18.</p>
Full article ">Figure 11
<p>Comparison between D22 (<b>a</b>,<b>b</b>) and D18 (<b>c</b>,<b>d</b>) models.</p>
20 pages, 6499 KiB  
Article
Tracking Loop Current Eddies in the Gulf of Mexico Using Satellite-Derived Chlorophyll-a
by Corinne B. Trott, Bulusu Subrahmanyam, Luna Hiron and Olmo Zavala-Romero
Remote Sens. 2024, 16(12), 2234; https://doi.org/10.3390/rs16122234 - 19 Jun 2024
Viewed by 1184
Abstract
During the period of 2018–2022, there were six named Loop Current Eddy (LCE) shedding events in the central Gulf of Mexico (GoM). LCEs form when a large anticyclonic eddy (AE) separates from the main Loop Current (LC) and propagates westward. In doing so, [...] Read more.
During the period of 2018–2022, there were six named Loop Current Eddy (LCE) shedding events in the central Gulf of Mexico (GoM). LCEs form when a large anticyclonic eddy (AE) separates from the main Loop Current (LC) and propagates westward. In doing so, each LCE traps and advects warmer, saltier waters with lower Chlorophyll-a (Chl-a) concentrations than the surrounding Gulf waters. This difference in water mass makes it possible to assess how effectively Chl-a from satellite-derived ocean color can identify LCEs in the GoM. In this work, we apply an eddy-tracking algorithm to Chl-a to detect LCEs, which we have validated against the traditional sea surface height (SSH)-based eddy-tracking approach with three datasets. We apply a closed-contour eddy-tracking algorithm to the SSH of two model products (HYbrid Coordinate Ocean Model, HYCOM, and Nucleus for European Modelling of the Ocean, NEMO) and absolute dynamic topography (ADT) from altimetry, as well as satellite-derived Chl-a data, to identify the six named LCEs from 2018 to 2022. We find that Chl-a best characterizes LCEs in the summertime due to a basin-wide increase in the horizontal gradient of Chl-a, which permits a more clearly defined eddy edge. This study demonstrates that Chl-a can be effectively used to identify and track the LC and LCEs in the GoM, serving as a promising source of information for regional data assimilative models. Full article
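A minimal sketch of the closed-contour idea is given below: it keeps only contours of a gridded field that close on themselves, here evaluated at the 17 cm level mentioned in the figure captions for the Loop Current. The field is a synthetic Gaussian bump, and the full algorithm (eddy cores, tracking through time, inverted Chl-a fields) is not reproduced.

```python
import numpy as np
from skimage import measure

def closed_contours(field, level):
    """Return contours of `field` at `level` that close on themselves, the first step
    of closed-contour eddy detection applied to SSH/ADT or (inverted) Chl-a fields."""
    closed = []
    for c in measure.find_contours(field, level):
        if np.allclose(c[0], c[-1]):          # first point equals last point -> closed loop
            closed.append(c)
    return closed

# Hypothetical ADT snapshot (metres): a Gaussian bump mimicking an anticyclonic LCE
y, x = np.mgrid[0:100, 0:100]
adt = 0.6 * np.exp(-(((x - 50) ** 2 + (y - 45) ** 2) / 300.0))
rings = closed_contours(adt, level=0.17)
print(f"{len(rings)} closed contour(s) found")
```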
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Satellite-derived absolute dynamic topography (ADT) (m), (<b>b</b>) satellite-derived Chlorophyll-a (Chl-a) concentration (mg/m<sup>3</sup>); (<b>c</b>) HYCOM derived sea surface height (SSH) (m); and (<b>d</b>) NEMO derived SSH (m) for 2018–2022.</p>
Full article ">Figure 2
<p>Monthly mean satellite-derived Chlorophyll-a (Chl-a) concentration (mg/m<sup>3</sup>) and 17 cm altimetry contour value plotted representing the Loop Current position for the year 2019.</p>
Full article ">Figure 3
<p>Daily eddy characteristics (<b>a</b>–<b>d</b>) from 2018 to 2022 for satellite altimetry, HYCOM GoM, and NEMO data of anticyclonic eddies and the respective boxplots (<b>e</b>–<b>h</b>) showing median, upper, and lower quartiles, and minimum and maximum of these characteristics. The boxplot for ocean Color is not shown due to its different units, which would be mg/m<sup>3</sup>. Characteristics (<b>a</b>,<b>e</b>) number of eddies, (<b>b</b>,<b>f</b>) average eddy amplitude (cm), (<b>c</b>,<b>g</b>) average eddy radius (km), and (<b>d</b>,<b>h</b>) average Chl-a anomaly (mg/m<sup>3</sup>).</p>
Full article ">Figure 4
<p>Same as <a href="#remotesensing-16-02234-f003" class="html-fig">Figure 3</a> but for cyclonic eddies. (<b>a</b>,<b>e</b>) number of eddies, (<b>b</b>,<b>f</b>) average eddy amplitude (cm), (<b>c</b>,<b>g</b>) average eddy radius (km), and (<b>d</b>,<b>h</b>) average Chl-a anomaly (mg/m<sup>3</sup>).</p>
Full article ">Figure 5
<p>Daily eddy characteristics from 2018 to 2022 for satellite altimetry and ocean color of Chl-a anticyclonic and cyclonic eddies using scatterplots. Units for amplitude, radius, and Chl-a anomaly are cm, km, and mg/m<sup>3</sup>, respectively.</p>
Full article ">Figure 6
<p>Eddy tracking of 2018 LCE (Revelle) when LC is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a data (<b>fourth column</b>). Eddy trajectory is plotted in the (<b>first row</b>).</p>
Full article ">Figure 7
<p>Eddy tracking of 2019 LCE (Sverdrup) when LC is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a data (<b>fourth column</b>). Eddy trajectory is plotted in the (<b>first row</b>).</p>
Full article ">Figure 8
<p>Eddy tracking of 2020 LCE (Thor) when LC is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a data (<b>fourth column</b>). Eddy trajectory plotted in (<b>first row</b>).</p>
Full article ">Figure 9
<p>Eddy tracking of 2021 LCE (Wilde) when LC is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a (<b>fourth column</b>). Eddy trajectory plotted in (<b>first row</b>).</p>
Full article ">Figure 10
<p>Eddy tracking of 2022 LCE when LC (Wilde II) is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a (<b>fourth column</b>). Eddy trajectory plotted in (<b>first row</b>).</p>
Full article ">Figure 11
<p>Eddy tracking of 2022 LCE (X) when LC is in extended position (<b>top row</b>), LCE separation date (<b>second row</b>), after LCE separation (<b>third row</b>), and LCE migration (<b>bottom row</b>). Shown in satellite altimetry (<b>first column</b>), HYCOM (<b>second column</b>), NEMO (<b>third column</b>), and ocean color of Chl-a (<b>fourth column</b>). Eddy trajectory plotted in (<b>first row</b>).</p>
24 pages, 8813 KiB  
Article
MSSD-Net: Multi-Scale SAR Ship Detection Network
by Xi Wang, Wei Xu, Pingping Huang and Weixian Tan
Remote Sens. 2024, 16(12), 2233; https://doi.org/10.3390/rs16122233 - 19 Jun 2024
Cited by 2 | Viewed by 1323
Abstract
In recent years, the development of neural networks has significantly advanced their application in Synthetic Aperture Radar (SAR) ship target detection for maritime traffic control and ship management. However, traditional neural network architectures are often complex and resource intensive, making them unsuitable for [...] Read more.
In recent years, the development of neural networks has significantly advanced their application in Synthetic Aperture Radar (SAR) ship target detection for maritime traffic control and ship management. However, traditional neural network architectures are often complex and resource intensive, making them unsuitable for deployment on artificial satellites. To address this issue, this paper proposes a lightweight neural network: the Multi-Scale SAR Ship Detection Network (MSSD-Net). Initially, the MobileOne network module is employed to construct the backbone network for feature extraction from SAR images. Subsequently, a Multi-Scale Coordinate Attention (MSCA) module is designed to enhance the network’s capability to process contextual information. This is followed by the integration of features across different scales using an FPN + PAN structure. Lastly, an Anchor-Free approach is utilized for the rapid detection of ship targets. To evaluate the performance of MSSD-Net, we conducted extensive experiments on the Synthetic Aperture Radar Ship Detection Dataset (SSDD) and SAR-Ship-Dataset. Our experimental results demonstrate that MSSD-Net achieves a mean average precision (mAP) of 98.02% on the SSDD while maintaining a compact model size of only 1.635 million parameters. This indicates that MSSD-Net effectively reduces model complexity without compromising its ability to achieve high accuracy in object detection tasks. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
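The MSCA module described in the abstract builds on coordinate attention (see the Figure 6 caption below), which factorizes spatial attention into height-wise and width-wise pooling. The sketch below is a minimal PyTorch illustration of a coordinate-attention-style block in that spirit; the reduction ratio, activation choice, and layer sizes are assumptions for illustration, not the authors' exact MSCA design.

```python
# Hedged sketch: a coordinate-attention-style block similar in spirit to the CA
# module that MSSD-Net's MSCA builds on. The reduction ratio `r` and the ReLU
# activation are illustrative assumptions.
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        mid = max(8, channels // r)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width  -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height -> (N, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = self.reduce(torch.cat([x_h, x_w], dim=2))   # joint encoding of both directions
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * a_h * a_w                      # broadcast attention over H and W

if __name__ == "__main__":
    feat = torch.randn(2, 64, 80, 80)
    print(CoordAttention(64)(feat).shape)  # torch.Size([2, 64, 80, 80])
```

Placing such a block after the backbone stages is one plausible way the contextual weighting described in the abstract could be realised; the paper's MSCA additionally operates at multiple scales.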
Show Figures

Figure 1: The general network architecture of MSSD-Net.
Figure 2: The structure of the MobileOne block. The MobileOne training block on the left is reparameterized to obtain the MobileOne inference block on the right.
Figure 3: The Squeeze-and-Excitation (SE) attention block.
Figure 4: The Efficient Channel Attention (ECA) block.
Figure 5: The Multi-Scale Coordinate Attention module.
Figure 6: The Coordinate Attention (CA) module.
Figure 7: The Shuffle Attention (SA) module.
Figure 8: Examples of the datasets used in this study. (a) Examples of the SSDD; (b) examples of the SAR-Ship-Dataset.
Figure 9: Heatmaps of the C2f (backbone network of YOLOv8), MobileOne, and MobileOne + MSCA modules.
Figure 10: SAR target detection results via MSSD-Net. Red boxes are targets detected via MSSD-Net and yellow boxes are missed detections. (a) Detection results for the SSDD; (b) detection results for the SAR-Ship-Dataset.
Figure 11: SAR target detection results via MSSD-Net and other models. Red boxes are target detections, yellow boxes are missed detections, and blue boxes are false detections. (a) Detection results for Faster-RCNN; (b) detection results for FCOS; (c) detection results for SSD; (d) detection results for YOLOv5-s; (e) detection results for YOLOv8-s; and (f) detection results for the MSSD-Net model.
21 pages, 5043 KiB  
Article
Using Sentinel-2 Imagery to Measure Spatiotemporal Changes and Recovery across Three Adjacent Grasslands with Different Fire Histories
by Annalise Taylor, Iryna Dronova, Alexii Sigona and Maggi Kelly
Remote Sens. 2024, 16(12), 2232; https://doi.org/10.3390/rs16122232 - 19 Jun 2024
Viewed by 1164
Abstract
As a result of the advocacy of Indigenous communities and increasing evidence of the ecological importance of fire, California has invested in the restoration of intentional burning (the practice of deliberately lighting low-severity fires) in an effort to reduce the occurrence and severity [...] Read more.
As a result of the advocacy of Indigenous communities and increasing evidence of the ecological importance of fire, California has invested in the restoration of intentional burning (the practice of deliberately lighting low-severity fires) in an effort to reduce the occurrence and severity of wildfires. Recognizing the growing need to monitor the impacts of these smaller, low-severity fires, we leveraged Sentinel-2 imagery to reveal important inter- and intra-annual variation in grasslands before and after fires. Specifically, we explored three methodological approaches: (1) the complete time series of the normalized burn ratio (NBR), (2) annual summary metrics (mean, fifth percentile, and amplitude of NBR), and (3) maps depicting spatial patterns in these annual NBR metrics before and after fire. We also used a classification of pre-fire vegetation to stratify these analyses by three dominant vegetation cover types (grasses, shrubs, and trees). We applied these methods to a unique study area in which three adjacent grasslands had diverging fire histories and showed how grassland recovery from a low-severity intentional burn and a high-severity wildfire differed both from each other and from a reference site with no recent fire. On the low-severity intentional burn site, our results showed that the annual NBR metrics recovered to pre-fire values within one year, and that regular intentional burning on the site was promoting greater annual growth of both grass and shrub species, even in the third growing season following a burn. In the case of the high-severity wildfire, our metrics indicated that this grassland had not returned to its pre-fire phenological signals in at least three years after the fire, indicating that it may be undergoing a longer recovery or an ecological shift. These proposed methods address a growing need to study the effects of small, intentional burns in low-biomass ecosystems such as grasslands, which are an essential part of mitigating wildfires. Full article
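The annual NBR summary metrics used in this study (mean, fifth percentile, and amplitude) are straightforward to compute from a per-pixel time series. The sketch below shows one way to do so with NumPy; the Sentinel-2 band choice (B8A/B12) and the synthetic inputs are illustrative assumptions rather than the authors' exact processing chain.

```python
# Hedged sketch of per-pixel NBR time-series metrics (annual mean, fifth
# percentile, and amplitude). Band names and reflectance scaling are assumptions.
import numpy as np

def nbr(nir_b8a: np.ndarray, swir_b12: np.ndarray) -> np.ndarray:
    """Normalized burn ratio from NIR (B8A) and SWIR (B12) reflectance."""
    return (nir_b8a - swir_b12) / (nir_b8a + swir_b12 + 1e-9)

def annual_nbr_metrics(nbr_series: np.ndarray) -> dict:
    """Summary metrics for one water year of NBR observations at one pixel."""
    return {
        "mean": float(np.nanmean(nbr_series)),
        "p05": float(np.nanpercentile(nbr_series, 5)),
        "amplitude": float(np.nanmax(nbr_series) - np.nanmin(nbr_series)),
    }

# toy example: 20 cloud-free observations in one water year
rng = np.random.default_rng(0)
nir, swir = rng.uniform(0.2, 0.5, 20), rng.uniform(0.1, 0.3, 20)
print(annual_nbr_metrics(nbr(nir, swir)))
```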
Show Figures

Figure 1: Map (a) shows the study area with the no fire (1), controlled burn (2), and wildfire (3) sites located from northwest to southeast. The location of the study area within the San Francisco Bay Area, California, USA is indicated by a dark purple box within the inset map in the upper right. Map (b) shows the results of the pre-fire vegetation cover classification of the 27 May 2020 NAIP image (60 cm spatial resolution) within the study area, and map (c) shows the dominant vegetation cover class within each Sentinel-2 pixel of those located completely within the study area and with a single vegetation cover totaling 60% or greater. Pixels with no dominant vegetation cover (i.e., with no single vegetation cover class totaling 60% or more within the pixel) are not shown on this map and were excluded from the vegetation-stratified analyses.
Figure 2: Thirty-day moving average of the mean NBR values across each study area site during the study period (31 December 2018 to 30 September 2023). The timing of the wildfire and controlled burn events is shown as labeled vertical bars shaded in red and orange, respectively.
Figure 3: Thirty-day moving average of the mean NBR values within the Sentinel-2 pixels dominated by grasses, shrubs, and trees within each study area site across the study period (31 December 2018 to 30 September 2023). The wildfire site experienced significant vegetation mortality and vegetation type changes during and after the fire; therefore, the vegetation categories should not be interpreted as remaining constant in the period following the fire. The timing of the wildfire and controlled burn events is shown as labeled vertical bars shaded in gray. Tree-dominated pixels were excluded from this figure due to small sample size and to facilitate legibility.
Figure 4: Mean metric values (NBR mean, fifth percentile, and amplitude) from 200 randomly sampled pixels on each of the three sites across the five water years of the study period. The first column (a,d,g) shows the original NBR values; the second column (b,e,h) shows the pairwise offset values (i.e., the average difference between the pre-fire WY 2019 and 2020 values on each fire site and the no fire site) for the wildfire site; and the third column (c,f,i) shows the pairwise offset values for the controlled burn site. The pairwise offset values illustrate the relative differences between the fire sites and no fire site following the fires, which were statistically tested in WYs 2022 and 2023. Asterisks (*) above the pairwise offset values in WYs 2022 and 2023 indicate that the means of the two sites were significantly different using the Mann–Whitney U test (p_adj < 0.05). The shaded bounds surrounding each line represent the 95% confidence interval of each sample.
Figure 5: Average metrics (NBR mean, fifth percentile, and amplitude) from the sampled pixels dominated by each pre-fire vegetation type (grass, shrub, and tree) across the five water years of the study period. The columns correspond to the no fire, wildfire, and controlled burn sites from left to right. In order, (a–c) show the mean NBR by vegetation type on the no fire, wildfire, and controlled burn sites; (d–f) show the fifth percentile NBR on the same three sites; and (g–i) show the NBR amplitude on the same three sites. The shaded band surrounding each line represents the 95% confidence interval of that sample. A note on variation in the sample sizes: 200 pixels were randomly sampled from each site–vegetation cover pair except in the following four cases: all 168 and 139 shrub-dominated pixels were selected from the controlled burn and wildfire sites, respectively, and all 27 and 19 tree-dominated pixels were selected from the no fire and wildfire sites, respectively. ¹ The wildfire site experienced significant vegetation mortality and vegetation type changes during and after the fire; therefore, the pre-fire vegetation categories should not be interpreted as remaining steady in the years during and after the fire.
Figure 6: Spatial variation in the mean, fifth percentile, and amplitude of NBR across the study area sites for three water years: two years prior to the fire events (WY 2019), the year of the fire events (WY 2021), and two years after the fire events (WY 2023). In the first row, the maps show the (a) mean NBR in WY 2019, (b) mean NBR in WY 2021, and (c) mean NBR in WY 2023. In the second row, the maps show the (d) fifth percentile NBR in WY 2019, (e) fifth percentile NBR in WY 2021, and (f) fifth percentile NBR in WY 2023. In the third row, the maps show the (g) amplitude of NBR in WY 2019, (h) amplitude of NBR in WY 2021, and (i) amplitude of NBR in WY 2023.
23 pages, 6378 KiB  
Article
Navigation Resource Allocation Algorithm for LEO Constellations Based on Dynamic Programming
by Sixin Wang, Xiaomei Tang, Jingyuan Li, Xinming Huang, Jiyang Liu and Jian Liu
Remote Sens. 2024, 16(12), 2231; https://doi.org/10.3390/rs16122231 - 19 Jun 2024
Viewed by 1117
Abstract
Navigation resource allocation for low-earth-orbit (LEO) constellations refers to the optimal allocation of navigational assets when the number and allocation of satellites in the LEO constellation have been determined. LEO constellations can not only transmit navigation enhancement signals but also enable space-based monitoring [...] Read more.
Navigation resource allocation for low-earth-orbit (LEO) constellations refers to the optimal allocation of navigational assets when the number and allocation of satellites in the LEO constellation have been determined. LEO constellations can not only transmit navigation enhancement signals but also enable space-based monitoring (SBM) for real-time assessment of GNSS signal quality. However, proximity in the frequencies of LEO navigation signals and SBM can lead to significant interference, necessitating isolated transmission and reception. This separation requires that SBM and navigation signal transmission be carried out by different satellites within the constellation, thus demanding a strategic allocation of satellite resources. Given the vast number of satellites and their rapid movement, the visibility among LEO, medium-earth-orbit (MEO), and geostationary orbit (GEO) satellites is highly dynamic, presenting substantial challenges in resource allocation due to the computational intensity involved. Therefore, this paper proposes an optimal allocation algorithm for LEO constellation navigation resources based on dynamic programming. In this algorithm, a network model for the allocation of navigation resources in LEO constellations is initially established. Under the constraints of visibility time windows and onboard transmission and reception isolation, the objective is set to minimize the number of LEO satellites used while achieving effective navigation signal transmission and SBM. The constraints of resource allocation and the mathematical expression of the optimization objective are derived. A dynamic programming approach is then employed to determine the optimal resource allocation scheme. Analytical results demonstrate that compared to Greedy and Divide-and-Conquer algorithms, this algorithm achieves the highest resource utilization rate and the lowest computational complexity, making it highly valuable for future resource allocation in LEO constellations. Full article
(This article belongs to the Special Issue Space-Geodetic Techniques (Third Edition))
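As a toy illustration of the dynamic-programming idea behind minimising the number of satellites assigned to a set of coverage requirements, the bitmask formulation below solves a small set-cover-style instance optimally. It is a simplified stand-in for intuition only, not the paper's NRAA-DP, which additionally handles visibility time windows and on-board transmit/receive isolation constraints.

```python
# Hedged toy illustration: minimal number of satellites whose combined coverage
# (abstract "targets" such as monitored GNSS satellites or ground regions within
# their visibility windows) spans all requirements. Bitmask DP, not the NRAA-DP.
def min_satellites(coverage_masks: list[int], n_targets: int) -> int:
    """coverage_masks[i]: bitmask of targets satellite i can serve in its window."""
    full = (1 << n_targets) - 1
    INF = float("inf")
    dp = [INF] * (1 << n_targets)   # dp[m] = min satellites needed to cover target set m
    dp[0] = 0
    for mask in range(1 << n_targets):
        if dp[mask] == INF:
            continue
        for cov in coverage_masks:
            new = mask | cov
            if dp[mask] + 1 < dp[new]:
                dp[new] = dp[mask] + 1
    return dp[full] if dp[full] < INF else -1

# 4 targets, 5 candidate satellites with different coverage bitmasks
masks = [0b0011, 0b0110, 0b1100, 0b1001, 0b0101]
print(min_satellites(masks, 4))  # -> 2 (e.g., 0b0011 together with 0b1100)
```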
Show Figures

Graphical abstract
Figure 1: LEO constellation navigation enhanced function architecture diagram.
Figure 2: BDS constellation three-dimensional space allocation diagram. The red line represents the IGSO orbit, the green line represents the GEO orbit, and the blue line represents the MEO orbit.
Figure 3: BDS constellation subsatellite point trajectory chart.
Figure 4: Global average coverage of BDS.
Figure 5: Global GDOP value distribution of BDS.
Figure 6: GNSS satellites and LEO satellites.
Figure 7: Dynamic programming algorithm schematic diagram.
Figure 8: Flowchart of the NRAA-DP.
Figure 9: The distribution diagram of the LEO constellation and BDS constellation. The blue lines represent the orbital planes of the BDS constellation, the yellow lines indicate the near-polar orbital planes of the LEO constellation, and the red lines denote the inclined orbital planes of the LEO constellation.
Figure 10: Visible links between LEO satellite S0101 and BDS satellites within 4 h. Lines of different colors represent different visible links.
Figure 11: Distribution map of ground stations used for evaluating constellation global coverage. The blue circles represent ground stations in the northern hemisphere along the 0° longitude from 0° to 90°N.
Figure 12: Visible links between the LEO satellite I0101 and the ground stations. Lines of different colors represent different visible links.
Figure 13: Constellation distribution map of the resource allocation scheme by NRAA-DP. (a) Inclined orbit satellites. (b) Near-polar orbit.
Figure 14: Constellation distribution map of the resource allocation scheme by GA. (a) Inclined orbit satellites. (b) Near-polar orbit.
Figure 15: Constellation distribution map of the resource allocation scheme by DCA. (a) Inclined orbit satellites. (b) Near-polar orbit.
Figure 16: Coverage performance of different navigation resource allocation schemes for ground stations. (a) NRAA-DP. (b) GA. (c) DCA.
Figure 17: Coverage performance of different navigation resource allocation schemes for BDS satellites. (a) NRAA-DP. (b) GA. (c) DCA.
19 pages, 4057 KiB  
Article
Global Navigation Satellite System/Inertial Measurement Unit/Camera/HD Map Integrated Localization for Autonomous Vehicles in Challenging Urban Tunnel Scenarios
by Lu Tao, Pan Zhang, Kefu Gao and Jingnan Liu
Remote Sens. 2024, 16(12), 2230; https://doi.org/10.3390/rs16122230 - 19 Jun 2024
Cited by 1 | Viewed by 1712
Abstract
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement [...] Read more.
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement Units (IMUs), cameras, and high-definition (HD) maps. Firstly, we use a novel electronic horizon module to assess GNSS integrity and concurrently load the HD map data surrounding the AVs. These map data are then transformed into a visual space to match the corresponding lane lines captured by the on-board camera using an improved BiSeNet. Consequently, the matched HD map data are used to correct our localization algorithm, which is driven by an extended Kalman filter that integrates multiple sources of information, encompassing a GNSS, IMU, speedometer, camera, and HD maps. Our system is designed with redundancy to handle challenging city tunnel scenarios. To evaluate the proposed system, real-world experiments were conducted on a 36-kilometer city route that includes nine consecutive tunnels, totaling nearly 13 km and accounting for 35% of the entire route. The experimental results reveal that 99% of lateral localization errors are less than 0.29 m, and 90% of longitudinal localization errors are less than 3.25 m, ensuring reliable lane-level localization for AVs in challenging urban tunnel scenarios. Full article
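The extended Kalman filter at the core of the fusion scheme alternates a motion-model prediction with corrections from absolute position fixes (GNSS, or lane-line matches against the HD map). The sketch below uses a linear constant-velocity toy model, under which the EKF reduces to a plain Kalman filter; the state vector and noise settings are assumptions for illustration, not the authors' filter design.

```python
# Hedged sketch of predict/update fusion: dead-reckoned motion corrected by
# periodic absolute position fixes. Models and noise levels are illustrative.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement
Q = np.eye(4) * 0.01                        # process noise (IMU/odometry drift)
R = np.eye(2) * 0.25                        # measurement noise (GNSS / map match)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([1.2, 0.6])]:   # two position fixes
    x, P = predict(x, P)
    x, P = update(x, P, z)
print(x)
```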
Show Figures

Figure 1: The proposed GNSS/IMU/Camera/HD map integrated localization system.
Figure 2: System overview; the key technologies used in this system are highlighted in blue text.
Figure 3: A demonstration of the EH (electronic horizon) applied in this paper. The left column: real world; the right column: electronic horizon. In the top-right of the figure, the EH provides the following pieces of information: Lane Number: 3 (the vehicle is in the third lane (from the left) of the road); FOW (form of way): Link Divided (the road ahead is divided); Curvature: 0; Slope: 0.036488%; Heading: 16.457718 degrees. In the bottom-right of the figure, the EH provides the following pieces of information: Tunnel (the vehicle is in a tunnel); Lane Number: 3; FOW: Link Divided; Curvature: 0; Slope: 0.041921%; Heading: 16.787982 degrees.
Figure 4: Explanations of tunnel scenario identification patterns using an EH.
Figure 5: The iBiSeNet for lane line recognition.
Figure 6: Snapshots of lane line recognition and cubic polynomial fitting in tunnel scenarios.
Figure 7: Accuracy verification of HD maps.
Figure 8: Matching between the visual lane line and HD map lane line.
Figure 9: Experimental field and tunnel scenes in Nanjing City (image: Google).
Figure 10: Experimental vehicle and sensors.
Figure 11: Localization errors’ space distribution; these points are down-sampled using a 10:1 ratio; the left: lateral errors; the right: longitudinal errors; unit: meter.
Figure 12: The time-series error data in lateral and longitudinal localization; the left: lateral error (positive: right shift; negative: left shift); the right: longitudinal error (positive: forward shift; negative: backward shift).
30 pages, 12064 KiB  
Article
Inversion of Forest Aboveground Biomass in Regions with Complex Terrain Based on PolSAR Data and a Machine Learning Model: Radiometric Terrain Correction Assessment
by Yonghui Nie, Rula Sa, Sergey Chumachenko, Yifan Hu, Youzhu Wang and Wenyi Fan
Remote Sens. 2024, 16(12), 2229; https://doi.org/10.3390/rs16122229 - 19 Jun 2024
Cited by 2 | Viewed by 1066
Abstract
The accurate estimation of forest aboveground biomass (AGB) in areas with complex terrain is very important for quantifying the carbon sequestration capacity of forest ecosystems and studying the regional or global carbon cycle. In our previous research, we proposed the radiometric terrain correction [...] Read more.
The accurate estimation of forest aboveground biomass (AGB) in areas with complex terrain is very important for quantifying the carbon sequestration capacity of forest ecosystems and studying the regional or global carbon cycle. In our previous research, we proposed the radiometric terrain correction (RTC) process for introducing normalized correction factors, which has strong effectiveness and robustness in terms of the backscattering coefficient of polarimetric synthetic aperture radar (PolSAR) data and the monadic model. However, the impact of RTC on the correctness of feature extraction and the performance of regression models requires further exploration in the retrieval of forest AGB based on a machine learning multiple regression model. In this study, based on PolSAR data provided by ALOS-2, 117 feature variables were accurately extracted using the RTC process, and then Boruta and recursive feature elimination with cross-validation (RFECV) algorithms were used to perform multi-step feature selection. Finally, 10 machine learning regression models and the Optuna algorithm were used to evaluate the effectiveness and robustness of RTC in improving the quality of the PolSAR feature set and the performance of the regression models. The results revealed that, compared with the situation without RTC treatment, RTC can effectively and robustly improve the accuracy of PolSAR features (the Pearson correlation R between the PolSAR features and measured forest AGB increased by 0.26 on average) and the performance of regression models (the coefficient of determination R2 increased by 0.14 on average, and the rRMSE decreased by 4.20% on average), but there is a certain degree of overcorrection in the RTC process. In addition, in situations where the data exhibit linear relationships, linear models remain a powerful and practical choice due to their efficient and stable characteristics. For example, the optimal regression model in this study is the Bayesian Ridge linear regression model (R2 = 0.82, rRMSE = 18.06%). Full article
(This article belongs to the Special Issue SAR for Forest Mapping III)
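The multi-step feature selection and regression stage described above can be approximated with scikit-learn. The sketch below wraps recursive feature elimination with cross-validation (RFECV) around a Bayesian Ridge model and reports R² and rRMSE on synthetic data; the Boruta pre-selection and Optuna tuning steps are omitted, and all data and settings are illustrative assumptions.

```python
# Hedged sketch of RFECV feature selection followed by Bayesian Ridge regression,
# evaluated with R^2 and relative RMSE. Synthetic stand-in data, not PolSAR features.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
X = rng.normal(size=(150, 20))                                     # 150 plots x 20 features
agb = 80 + 30 * X[:, 0] - 15 * X[:, 3] + rng.normal(0, 10, 150)    # synthetic AGB (t/ha)

selector = RFECV(BayesianRidge(), step=1, cv=5, scoring="r2")
selector.fit(X, agb)
X_sel = X[:, selector.support_]

pred = cross_val_predict(BayesianRidge(), X_sel, agb, cv=5)
ss_res = np.sum((agb - pred) ** 2)
r2 = 1 - ss_res / np.sum((agb - agb.mean()) ** 2)
rrmse = 100 * np.sqrt(ss_res / len(agb)) / agb.mean()
print(f"kept {selector.support_.sum()} features, R2={r2:.2f}, rRMSE={rrmse:.1f}%")
```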
Show Figures

Graphical abstract
Figure 1: Overview of study sites: (a) the location of Saihanba Forest Farm in relation to the provinces and counties in China; (b) the spatial location of ALOS-2 data relative to Weichang County; (c) the Pauli RGB image (R: |HH-VV|, G: |HV|, B: |HH + VV|) based on PolSAR data and the location of the measured samples; the basemap is the optical image of Tianditu.
Figure 2: A flowchart of the proposed forest AGB mapping scheme.
Figure 3: Absolute value of Pearson correlation coefficient (R) between forest AGB and the PolSAR features based on the data (25 July 2020) with radiometric terrain correction (RTC, olive) and non-RTC data (NRT, red). Sorted based on R_RTC (i.e., absolute value of R value between forest AGB and SAR features extracted based on RTC data). (a) The first set of the extracted original PolSAR features; (b) the second set of the extracted original PolSAR features (39 in total); (c) derived features based on PolSAR original features (39 in total).
Figure 4: Taking PolSAR data from 25 July 2020 as an example, we created scatter density plots between the decibel values of the three components (Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl)) of the Freeman three-decomposition in different topographic correction stages (non-radiometric terrain correction (NRTC), polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC)) and the local incidence angle θ_loc. (a) NRTC_Vol; (b) POAC_Vol; (c) ESAC_Vol; (d) AVEC_Vol; (e) NRTC_Odd; (f) POAC_Odd; (g) ESAC_Odd; (h) AVEC_Odd; (i) NRTC_Dbl; (j) POAC_Dbl; (k) ESAC_Dbl; (l) AVEC_Dbl.
Figure 5: Taking PolSAR data from 25 July 2020 as an example, we created a scatter density plot for each component of Freeman three-decomposition (FRE3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC was completed) with respect to non-RTC (NRTC). The three components of FRE3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (a) NRTC vs. POAC of Vol; (b) POAC vs. ESAC of Vol; (c) ESAC vs. AVEC of Vol; (d) NRTC vs. AVEC of Vol; (e) NRTC vs. POAC of Odd; (f) POAC vs. ESAC of Odd; (g) ESAC vs. AVEC of Odd; (h) NRTC vs. AVEC of Odd; (i) NRTC vs. POAC of Dbl; (j) POAC vs. ESAC of Dbl; (k) ESAC vs. AVEC of Dbl; (l) NRTC vs. AVEC of Dbl.
Figure 6: Analysis of the effectiveness of RTC and the optimal regression model of this study, taking the SAR data from 25 July 2020 as an example. (a) The training results of the NRTC and RTC data, where the black dots are the results of the corresponding single training; (b) scatter plot of the measured forest AGB and the AGB predicted by the optimal regression model (BysRidge); (c) spatial distribution map of forest AGB in the study area based on optimal model prediction.
Figure A1: The scatter density plot of each component of Yamaguchi three-component (YAM3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC) with respect to non-RTC (NRTC). The three components of YAM3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (a) NRTC vs. POAC of Vol; (b) POAC vs. ESAC of Vol; (c) ESAC vs. AVEC of Vol; (d) NRTC vs. AVEC of Vol; (e) NRTC vs. POAC of Odd; (f) POAC vs. ESAC of Odd; (g) ESAC vs. AVEC of Odd; (h) NRTC vs. AVEC of Odd; (i) NRTC vs. POAC of Dbl; (j) POAC vs. ESAC of Dbl; (k) ESAC vs. AVEC of Dbl; (l) NRTC vs. AVEC of Dbl.
Figure A2: The result of feature selection: (a) the 32 features selected in preliminary feature selection (Boruta algorithm) based on radiative terrain correction (RTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (b) the 21 features selected in preliminary feature selection (Boruta algorithm) based on non-RTC (NRTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (c) the number of features selected in the second step feature selection (RFECV algorithm) based on RTC and NRTC data; (d) the features selected in different multivariate linear models and the variance inflation factor (VIF) value corresponding to each feature; (e) the features selected in different non-parametric models.
Figure A3: Scatter plot of measured forest AGB and predicted forest AGB. The prediction model is an optimal regression model based on the PolSAR data processed by radiometric terrain correction (RTC) from 25 July 2020. (a) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 11 July 2020. (b) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 8 August 2020.
18 pages, 1739 KiB  
Article
Polar Stratospheric Cloud Observations at Concordia Station by Remotely Controlled Lidar Observatory
by Luca Di Liberto, Francesco Colao, Federico Serva, Alessandro Bracci, Francesco Cairo and Marcel Snels
Remote Sens. 2024, 16(12), 2228; https://doi.org/10.3390/rs16122228 - 19 Jun 2024
Viewed by 876
Abstract
Polar stratospheric clouds (PSCs) form in polar regions, typically between 15 and 25 km above mean sea level, when the local temperature is sufficiently low. PSCs play an important role in the ozone chemistry and the dehydration and denitrification of the stratosphere. Lidars [...] Read more.
Polar stratospheric clouds (PSCs) form in polar regions, typically between 15 and 25 km above mean sea level, when the local temperature is sufficiently low. PSCs play an important role in the ozone chemistry and the dehydration and denitrification of the stratosphere. Lidars with a depolarization channel may be used to detect and classify different classes of PSCs. The main PSC classes are water ice, nitric acid trihydrate (NAT), and supercooled ternary solutions (STSs), the latter being liquid droplets consisting of water, nitric acid, and sulfuric acid. PSCs have been observed at the lidar observatory at Concordia Station from 2014 onward. The harsh environmental conditions at Concordia during winter render successful lidar operation difficult. To facilitate the operation of the observatory, several measures have been put in place to achieve an almost complete remote control of the system. PSC occurrence is strongly correlated with local temperatures and is affected by dynamics, as the PSC coverage during the observation season shows. PSC observations in 2021 are shown as an example of the capability and functionality of the lidar observatory. A comparison of the observations with the satellite-borne CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar has been made to demonstrate the quality of the data and their representativeness for the Antarctic Plateau. Full article
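Lidar-based PSC classification of the kind referred to here typically applies thresholds in the plane spanned by the backscatter ratio R and the perpendicular backscatter coefficient β_perp (compare the Figure 7 caption below). The sketch that follows is a hedged illustration of such a scheme; the numerical thresholds are hypothetical placeholders, not the values used at Concordia.

```python
# Hedged sketch of a threshold-style PSC classification in the (R, beta_perp)
# plane. All thresholds below are hypothetical placeholders for illustration.
def classify_psc(R: float, beta_perp: float) -> str:
    R_ICE = 5.0            # assumed backscatter-ratio threshold for ice
    BPERP_SOLID = 2e-7     # assumed perpendicular-backscatter threshold (m^-1 sr^-1)
    if beta_perp < BPERP_SOLID:
        # low depolarisation: liquid droplets (STS) if backscatter is enhanced
        return "STS" if R > 1.1 else "no PSC / background aerosol"
    # depolarising (solid) particles: split by backscatter ratio
    return "ice" if R >= R_ICE else "NAT mixture"

for r, b in [(1.05, 5e-8), (1.5, 5e-8), (2.0, 5e-7), (8.0, 1e-6)]:
    print(f"R={r}, beta_perp={b} -> {classify_psc(r, b)}")
```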
Show Figures

Graphical abstract
Figure 1: The figure shows a map of Antarctica. The position of Concordia Station with respect to other bases is indicated with a black circle.
Figure 2: The lidar illuminates the polar sky at Concordia Station. Photo courtesy of Luca Ianniello.
Figure 3: The bistatic configuration is schematically shown. The laser emission is directed vertically with a piezo-controlled mirror with two axes of freedom, while the main telescope is static. The smaller telescope has a two-dimensional computer-controlled mechanical movement, which allows it to vary its axis with respect to the laser emission.
Figure 4: Screenshot of the automatic alignment program, showing the three signals of the 532 nm channels. The vertical axis is in logarithmic scale to display the full dynamics of the signals. The horizontal axis reports the distance from the lidar in km.
Figure 5: Screenshot of the automatic alignment program showing the intensity map of the received signals as a function of the two angular displacements of the piezo-controlled mirror. The darker colors correspond with the highest quality factor, representing the maximum overlap obtained for a certain altitude range. The yellow lines define a cursor position and allow the value of the quality factor to be read at a specific position of the graph.
Figure 6: The figure shows a schematic view of the remote control.
Figure 7: The figure shows the criteria using the backscatter ratio R and the perpendicular backscatter coefficient β_perp to classify the different PSC types.
Figure 8: The figure shows the PSCs observed by lidar over Concordia Station in 2021. The upper panel shows the ground-based observations, while the lower panel represents the CALIOP data, considering a latitude–longitude range defined by the coordinates 73.1°S < lat < 77.1°S and 116.33°E < lon < 130.33°E centered on Concordia Station. The color codes indicate the different PSC classes; orange stands for STSs, green for NAT mixtures, blue for ice, and red for enhanced NAT mixtures. The small circles indicate that a measurement is available but no PSCs were observed.
Figure 9: The figure shows the PSCs observed by lidar over Concordia Station in 2021. The upper panel shows the ground-based observations, while the lower panel represents the CALIOP data considering a latitude–longitude range defined by the coordinates 73.1°S < lat < 77.1°S and 100°E < lon < 150°E centered on Concordia Station. The color codes indicate the different PSC classes; orange stands for STSs, green for NAT mixtures, blue for ice, and red for enhanced NAT mixtures. The small circles indicate that a measurement is available, but no PSCs were observed.
Figure 10: The figure shows the PSC area of the southern hemisphere in 2021 at a potential temperature of 450 K. The red line shows the PSC area in the SH in 2021; the black lines correspond with the minimum and maximum values recorded in the previous 10 years. The yellow line stands for the mean value over the preceding 10 years.
Figure 11: The figure shows the frost temperature (in red) and the NAT formation temperature minus 3 K to show where ice PSCs and NAT and STS PSCs, respectively, are probably formed. The green contours indicate where the local temperature is below the formation temperature of NAT minus 3 K. The formation temperatures were calculated from local pressure; water vapor and nitric acid mixing ratios were obtained from MLS data.
Figure 12: The figure shows the water vapor mixing ratio in ppm as observed by MLS in 2021.
19 pages, 3232 KiB  
Article
Harmonic Source Depth Estimation by a Single Hydrophone under Unknown Seabed Geoacoustic Property
by Xiaolei Li, Yangjin Xu, Wei Gao, Haozhong Wang and Liang Wang
Remote Sens. 2024, 16(12), 2227; https://doi.org/10.3390/rs16122227 - 19 Jun 2024
Viewed by 825
Abstract
The passive estimation of harmonic sound source depth is of great significance for underwater target localization and identification. Passive source depth estimation using a single hydrophone with an unknown seabed geoacoustic property is a crucial challenge. To address this issue, a harmonic sound [...] Read more.
The passive estimation of harmonic sound source depth is of great significance for underwater target localization and identification. Passive source depth estimation using a single hydrophone with an unknown seabed geoacoustic property is a crucial challenge. To address this issue, a harmonic sound source depth estimation algorithm, the seabed independent depth estimation (SIDE) algorithm, is proposed. This algorithm combines the estimated mode depth functions, modal amplitudes, and the sign of each mode to estimate the sound source depth. The performance of the SIDE algorithm is analyzed by simulations. Results show that the SIDE algorithm is insensitive to the initial range of the sound source, the source depth, the hydrophone depth, the source velocity, and the type of the seabed. Finally, the effectiveness of the SIDE algorithm is verified by the SWellEX-96 data. Full article
(This article belongs to the Topic Advances in Underwater Acoustics and Aeroacoustics)
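The depth estimate in approaches of this kind comes from scanning candidate depths and testing how well signed mode depth functions explain the estimated modal amplitudes. The sketch below illustrates that matched-mode idea with ideal-waveguide mode shapes; the waveguide geometry, mode set, and the sign combination (which SIDE searches for, but which is taken as known here for brevity) are toy assumptions, not the SIDE algorithm itself.

```python
# Hedged sketch of a depth ambiguity function: correlate estimated (signed) modal
# amplitudes against mode depth functions evaluated at candidate source depths.
# Ideal-waveguide mode shapes and all numbers are toy assumptions.
import numpy as np

D = 200.0                                     # water depth (m), assumed
z = np.linspace(0, D, 401)                    # candidate source depths
modes = np.arange(1, 5)                       # mode numbers 1..4
psi = np.sin(np.outer(modes, np.pi * z / D))  # ideal-waveguide mode depth functions

z_true = 60.0
a_true = np.abs(np.sin(modes * np.pi * z_true / D))   # only |amplitudes| are observable
signs = np.sign(np.sin(modes * np.pi * z_true / D))   # sign combination (searched in SIDE)

# normalized correlation between signed amplitudes and mode shapes at each depth
score = (signs * a_true) @ psi / (np.linalg.norm(a_true) * np.linalg.norm(psi, axis=0) + 1e-12)
print("estimated depth:", z[np.argmax(np.abs(score))], "m (true 60 m)")
```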
Show Figures

Graphical abstract
Figure 1: A harmonic source approaches the hydrophone at velocity v_0 in a range-independent waveguide.
Figure 2: The SwellEX-96 experiments. (a) Trajectory of the source movement for the S5 event and the locations of hydrophone arrays. (b) Seawater sound speed profile (SSP). (c) The black solid line represents the range of the source relative to the vertical array, while the blue dotted line shows the data for the time period under analysis. (d) Frequency of signals emitted by the source during the S5 event, with the four line spectra of this analysis displayed in the blue box.
Figure 3: (a) The time-domain waveform (within a 1 s observation window) emitted by the source. (b) The normalized spectrum of the emitted signal. Similarly, (c) represents the signal with noise considered received by the hydrophone (within a 1 s observation window). (d) The normalized spectrum of the received signal. The window of the Fourier transform is set to 2000 s in order to observe the modal Doppler.
Figure 4: The source velocity and frequency results for the f_1-v and f_2-v planes at the location of the minimum loss. The velocity search interval is 0.01 m/s, and the frequency search interval is 0.1 Hz. The white circles denote the true source velocity and frequencies, and the estimated source frequencies and velocity are f_1 = 170 Hz, f_2 = 180 Hz, and v_0 = 2.5 m/s, respectively.
Figure 5: Estimated horizontal wavenumbers and mode depth functions. (a) The comparison of the estimated horizontal wavenumbers (red circles) with horizontal wavenumbers calculated by Kraken (blue cross symbols). (b) The comparison of the estimated mode depth function with the mode depth function calculated by Kraken. The above results are derived from the frequency f_1.
Figure 6: (a) The amplitudes of modal Doppler frequencies and those calculated by Equation (11). (b) Loss function evolution during DSS exploration (blue dotted line) with the loss function minimum denoted by the red circle. (c) Depth ambiguity functions computed using frequencies f_1 (blue solid line) and f_2 (black solid line), as well as from joint estimation (red solid line) and traditional MMP (magenta solid line). The true source depth is represented by the green vertical dashed line. (d) Mode depth functions sharing identical sign combinations within the range delineated by the red box.
Figure 7: Impact of various factors on the SIDE algorithm. (a) Impact of initial range of the source. (b) Impact of source depth. (c) Impact of hydrophone depth. (d) Impact of source velocity.
Figure 8: Impact of various factors on the traditional MMP. (a) Impact of initial range of the source. (b) Impact of source depth. (c) Impact of hydrophone depth. (d) Impact of source velocity.
Figure 9: Results of the SIDE algorithm and the traditional MMP at different SNRs.
Figure 10: A multi-layered seabed with sedimentary layers. Layer 1 can be considered the sedimentary layer. The parameters of Layer 1 have six cases, as shown in Table 1.
Figure 11: (a) A negative-gradient SSP waveguide with a multi-layered seabed. (b) A Pekeris waveguide with a multi-layered seabed.
Figure 12: Estimated source frequencies and velocities by the MDFMD-v. (a,b) The estimation results for the shallow source with f_1 = 127 Hz, f_2 = 145 Hz, and v_0 = 2.6 m/s. (c,d) The estimation results for the deep source with f_1 = 127 Hz, f_2 = 145 Hz, and v_0 = 2.6 m/s.
Figure 13: Estimation results. (a) Loss function for the shallow source sign search (blue dotted line), with red circles indicating the locations of the minima of the loss function. (b) Depth ambiguity functions computed using the sign combinations corresponding to the minima locations (red solid line). (c,d) The loss function for the deep source sign search (blue dotted line) and the depth ambiguity functions computed using the sign combinations at the minima locations (red solid line), respectively.
16 pages, 15964 KiB  
Article
Quantifying the Impact of Aerosols on Geostationary Satellite Infrared Radiance Simulations: A Study with Himawari-8 AHI
by Haofei Sun, Deying Wang, Wei Han and Yunfan Yang
Remote Sens. 2024, 16(12), 2226; https://doi.org/10.3390/rs16122226 - 19 Jun 2024
Cited by 2 | Viewed by 1083
Abstract
Aerosols exert a significant influence on the brightness temperature observed in the thermal infrared (IR) channels, yet the specific contributions of various aerosol types remain underexplored. This study integrated the Copernicus Atmosphere Monitoring Service (CAMS) atmospheric composition reanalysis data into the Radiative Transfer [...] Read more.
Aerosols exert a significant influence on the brightness temperature observed in the thermal infrared (IR) channels, yet the specific contributions of various aerosol types remain underexplored. This study integrated the Copernicus Atmosphere Monitoring Service (CAMS) atmospheric composition reanalysis data into the Radiative Transfer for TOVS (RTTOV) model to quantify the aerosol effects on brightness temperature (BT) simulations for the Advanced Himawari Imager (AHI) aboard the Himawari-8 geostationary satellite. Two distinct experiments were conducted: the aerosol-aware experiment (AER), which accounted for aerosol radiative effects, and the control experiment (CTL), in which aerosol radiative effects were omitted. The CTL experiment results reveal uniform negative bias (observation minus background (O-B)) across all six IR channels of the AHI, with a maximum deviation of approximately −1 K. Conversely, the AER experiment showed a pronounced reduction in innovation, which was especially notable in the 10.4 μm channel, where the bias decreased by 0.7 K. The study evaluated the radiative effects of eleven aerosol species, all of which demonstrated cooling effects in the AHI’s six IR channels, with dust aerosols contributing the most significantly (approximately 86%). In scenarios dominated by dust, incorporating the radiative effect of dust aerosols could correct the brightness temperature bias by up to 2 K, underscoring the substantial enhancement in the BT simulation for the 10.4 μm channel during dust events. Jacobians were calculated to further examine the RTTOV simulations’ sensitivity to aerosol presence. A clear temporal and spatial correlation between the dust concentration and BT simulation bias corroborated the critical role of the infrared channel data assimilation on geostationary satellites in capturing small-scale, rapidly developing pollution processes. Full article
(This article belongs to the Special Issue Remote Sensing for High Impact Weather and Extremes)
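The innovation statistics quoted in the abstract (bias, standard deviation, and RMSE of observation-minus-background brightness temperature) reduce to simple per-channel aggregations. The sketch below computes them on synthetic numbers; no RTTOV simulation is performed, and the channel set and noise levels are placeholders.

```python
# Hedged sketch of per-channel O-B (observation minus background) statistics.
# Synthetic brightness temperatures stand in for AHI observations and RTTOV output.
import numpy as np

rng = np.random.default_rng(1)
channels = [8.6, 9.6, 10.4, 11.2, 12.4, 13.3]                 # AHI IR channels (um)
bt_obs = 280 + rng.normal(0, 5, size=(len(channels), 1000))   # observed BT (K)
# aerosol-blind background assumed slightly warm, giving a negative O-B bias
bt_sim = bt_obs + rng.normal(0.7, 1.2, size=bt_obs.shape)

diff = bt_obs - bt_sim                                        # O-B innovation
for ch, d in zip(channels, diff):
    print(f"{ch:5.1f} um  bias={d.mean():+.2f} K  "
          f"std={d.std():.2f} K  rmse={np.sqrt((d**2).mean()):.2f} K")
```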
Show Figures

Figure 1: Spatial distribution of the FY-4A AGRI Channel 6 (wavelength: 2.25 µm) brightness temperature (left) and Himawari-8 Cloud Product (right) at 03:00 UTC on 28 November 2018. The red solid lines in the right image represent the aerosol optical depth (AOD) values at 550 nm.
Figure 2: Diagram for the aerosol-blind experiment (CTL) and the aerosol-aware experiment (AER). The orange box indicates whether the impact of aerosols is considered in RTTOV. In the CTL experiment (a), aerosols are not considered, while in the AER experiment (b), aerosols from CAMS are included in RTTOV.
Figure 3: Spatial distribution of the column mass concentration (cMass) of the 550 nm AOD and 5 species of aerosols in the study area at 03:00 UTC on 28 November 2018.
Figure 4: Mixing ratio vertical distribution of 5 species of aerosols in the study area at 03:00 UTC on 28 November 2018.
Figure 5: Statistics of innovations of the CTL and AER experiments at 03:00 UTC. The blue line represents the CTL experiment results, and the red line represents the AER experiment results. Subfigure (a) shows the bias (difference between simulation and observation), subfigure (b) shows the standard deviation (STD) of the brightness temperature (BT) difference from AHI observations, and subfigure (c) shows the root mean square error (RMSE) of the BT difference.
Figure 6: The difference in the simulated brightness temperature values between the CTL experiment and the aerosol-only experiments.
Figure 7: Contributions of different aerosol types in each infrared channel in the aerosol-only-CTL experiments. Subfigures (a–f) correspond to the six AHI infrared channels at wavelengths of 8.6 μm, 9.6 μm, 10.4 μm, 11.2 μm, 12.4 μm, and 13.3 μm, respectively. The aerosol types are color-coded as follows: black carbon (BC), dust (DU), sulfate (SU), organic matter (OM), and sea salt (SS).
Figure 8: (a–e) The aerosol Jacobians of 6 channels, and (f) the number density profiles of 5 aerosols at point (39.98°N, 119.24°E).
Figure 9: At 03:00 UTC, (a) the observed value of AHI in the 10.4 μm wavelength channel, (b) the simulated brightness temperature value of the CTL experiment, (c) the simulated brightness temperature value of the dust-only experiment, (d) the observed value minus the CTL experiment value, (e) the observed value minus the dust-only experiment value, and (f) the dust-only experiment value minus the CTL experiment value. The solid black line represents the concentration distribution of dust aerosol.
Figure 10: Same as in Figure 9, but for the spatial distribution at 06:00 UTC.
22 pages, 49542 KiB  
Article
A Robust Target Detection Algorithm Based on the Fusion of Frequency-Modulated Continuous Wave Radar and a Monocular Camera
by Yanqiu Yang, Xianpeng Wang, Xiaoqin Wu, Xiang Lan, Ting Su and Yuehao Guo
Remote Sens. 2024, 16(12), 2225; https://doi.org/10.3390/rs16122225 - 19 Jun 2024
Cited by 4 | Viewed by 1116
Abstract
Decision-level information fusion methods using radar and vision usually suffer from low target matching success rates and imprecise multi-target detection accuracy. Therefore, a robust target detection algorithm based on the fusion of frequency-modulated continuous wave (FMCW) radar and a monocular camera is proposed [...] Read more.
Decision-level information fusion methods using radar and vision usually suffer from low target matching success rates and imprecise multi-target detection accuracy. Therefore, a robust target detection algorithm based on the fusion of frequency-modulated continuous wave (FMCW) radar and a monocular camera is proposed to address these issues in this paper. Firstly, a lane detection algorithm is used to process the image to obtain lane information. Then, two-dimensional fast Fourier transform (2D-FFT), constant false alarm rate (CFAR), and density-based spatial clustering of applications with noise (DBSCAN) are used to process the radar data. Furthermore, the YOLOv5 algorithm is used to process the image. In addition, the lane lines are utilized to filter out the interference targets from outside lanes. Finally, multi-sensor information fusion is performed for targets in the same lane. Experiments show that the balanced score of the proposed algorithm can reach 0.98, which indicates that it has low false and missed detections. Additionally, the balanced score is almost unchanged in different environments, proving that the algorithm is robust. Full article
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
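The radar branch of the pipeline (2D-FFT, CFAR, DBSCAN) can be sketched compactly. The example below forms a range-Doppler map from a synthetic beat-signal cube, applies a crude global threshold in place of a proper cell-averaging CFAR, and clusters the detections with DBSCAN; the waveform parameters, injected target, and thresholds are illustrative assumptions.

```python
# Hedged sketch of FMCW radar processing: 2D FFT -> range-Doppler map ->
# CFAR-style threshold (simplified to a global one) -> DBSCAN clustering.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
n_chirps, n_samples = 64, 256
cube = rng.normal(0, 1, (n_chirps, n_samples)) + 1j * rng.normal(0, 1, (n_chirps, n_samples))

# inject one synthetic target at range bin ~40 and Doppler bin ~10
t = np.arange(n_samples) / n_samples
for k in range(n_chirps):
    cube[k] += 8 * np.exp(2j * np.pi * (40 * t + 10 * k / n_chirps))

rd_map = np.abs(np.fft.fftshift(np.fft.fft2(cube), axes=0))   # range-Doppler magnitude

# crude global threshold standing in for CA-CFAR (a real CFAR uses local training cells)
thresh = rd_map.mean() + 8 * rd_map.std()
det_doppler, det_range = np.nonzero(rd_map > thresh)

labels = DBSCAN(eps=2, min_samples=1).fit_predict(np.column_stack([det_range, det_doppler]))
for lab in set(labels) - {-1}:
    sel = labels == lab
    print("target cluster at range bin", det_range[sel].mean(),
          "doppler bin", det_doppler[sel].mean() - n_chirps // 2)
```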
Show Figures

Figure 1

Figure 1
<p>Decision-level fusion framework.</p>
Full article ">Figure 2
<p>FMCW radar equation schematic block diagram.</p>
Full article ">Figure 3
<p>Schematic representation of lane detection algorithm.</p>
Full article ">Figure 4
<p>BEV of the lane to be detected. (<b>a</b>) RGB format. (<b>b</b>) Greyscale format. (<b>c</b>) ROI of the lanes to be detected. (<b>d</b>) Binary format.</p>
Full article ">Figure 5
<p>Histograms of lane lines to be detected.</p>
Full article ">Figure 6
<p>Visual representation of lane detection.</p>
Full article ">Figure 7
<p>Network structure of YOLOv5s.</p>
Full article ">Figure 8
<p>Different locations of targets.</p>
Full article ">Figure 9
<p>Distance fitting curve.</p>
Full article ">Figure 10
<p>Sampling method of frame.</p>
Full article ">Figure 11
<p>Schematic of coordinate system relationships. (<b>a</b>) Position of radar coordinate and camera coordinate. (<b>b</b>) Position of camera coordinate, image coordinate, and pixel coordinate.</p>
Full article ">Figure 12
<p>Camera calibration. (<b>a</b>) Camera calibration chessboard graph. (<b>b</b>) Corner extraction and correction of checkerboard.</p>
Full article ">Figure 13
<p>Hardware system for information fusion of radar and monocular camera.</p>
Full article ">Figure 14
<p>Target matching algorithm of radar and camera.</p>
Full article ">Figure 15
<p>Lane detection results. (<b>a</b>) Normal light. (<b>b</b>) Ground icon interference. (<b>c</b>) Ground shelter interference. (<b>d</b>) Weak light.</p>
Full article ">Figure 16
<p>Detection results of YOLOv5s. (<b>a</b>) Normal light. (<b>b</b>) Weak light. (<b>c</b>) Intense light. (<b>d</b>) Targets occluded by trees.</p>
Full article ">Figure 17
<p>Actual positions of targets.</p>
Full article ">Figure 18
<p>Radar detection results. (<b>a</b>) Original 2D point clouds of radar. (<b>b</b>) Valid point clouds after filtering. (<b>c</b>) DBSCAN target clustering. (<b>d</b>) Valid point clouds coalescing after clustering.</p>
Full article ">Figure 18 Cont.
<p>Radar detection results. (<b>a</b>) Original 2D point clouds of radar. (<b>b</b>) Valid point clouds after filtering. (<b>c</b>) DBSCAN target clustering. (<b>d</b>) Valid point clouds coalescing after clustering.</p>
Full article ">Figure 19
<p>The results of multi-sensor spatiotemporal calibration. (<b>a</b>) Without lane detection. (<b>b</b>) With lane detection.</p>
Full article ">Figure 20
<p>Information fusion results. (<b>a</b>) Without lane detection. (<b>b</b>) With lane detection.</p>
Full article ">Figure 21
<p>Weak light.</p>
Full article ">Figure 22
<p>Intense light.</p>
Full article ">Figure 23
<p>Normal light.</p>
Full article ">
39 pages, 5735 KiB  
Review
Potential of Earth Observation to Assess the Impact of Climate Change and Extreme Weather Events in Temperate Forests—A Review
by Marco Wegler and Claudia Kuenzer
Remote Sens. 2024, 16(12), 2224; https://doi.org/10.3390/rs16122224 - 19 Jun 2024
Cited by 3 | Viewed by 2133
Abstract
Temperate forests are particularly exposed to climate change and the associated increase in weather extremes. Droughts, storms, late frosts, floods, heavy snowfalls, or changing climatic conditions such as rising temperatures or more erratic precipitation are having an increasing impact on forests. There is [...] Read more.
Temperate forests are particularly exposed to climate change and the associated increase in weather extremes. Droughts, storms, late frosts, floods, heavy snowfalls, or changing climatic conditions such as rising temperatures or more erratic precipitation are having an increasing impact on forests. There is an urgent need to better assess the impacts of climate change and extreme weather events (EWEs) on temperate forests. Remote sensing can be used to map forests at multiple spatial, temporal, and spectral resolutions at low cost. Different approaches to forest change assessment offer promising methods for a broad analysis of the impacts of climate change and EWEs. In this review, we examine the potential of Earth observation for assessing the impacts of climate change and EWEs in temperate forests by reviewing 126 scientific papers published between 1 January 2014 and 31 January 2024. This study provides a comprehensive overview of the sensors utilized, the spatial and temporal resolution of the studies, their spatial distribution, and their thematic focus on the various abiotic drivers and the resulting forest responses. The analysis indicates that multispectral, non-high-resolution time series were employed most frequently. A predominant proportion of the studies examine the impact of droughts. In all instances of EWEs, dieback is the most prevalent response, whereas in studies on changing trends, phenology shifts account for the largest share of forest response categories. The detailed analysis of in-depth forest differentiation implies that area-wide studies have so far barely distinguished the effects of different abiotic drivers at the species level. Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Workflow chart outlining the literature search process used to identify relevant scientific articles about remote-sensed forest responses to weather extremes and climate change.</p>
Full article ">Figure 2
<p>Distribution of publications subdivided into different journal categories: temporal (<b>a</b>), and overall (<b>b</b>).</p>
Full article ">Figure 3
<p>Map and bar chart of the spatial distribution of first author affiliations by country and in the donut chart by continent. The distribution of the temperate forest according to Olson et al. [<a href="#B33-remotesensing-16-02224" class="html-bibr">33</a>] is marked with a green outline.</p>
Full article ">Figure 4
<p>Map of the spatial distribution of study areas by country and in the donut chart by continent. The bar chart illustrates the distribution of cross-border study areas. The distribution of the temperate forest according to Olson et al. [<a href="#B33-remotesensing-16-02224" class="html-bibr">33</a>] is marked with a green outline.</p>
Full article ">Figure 5
<p>Overview of the different remote sensing sensors, their platform, and the sensor type combination used in the reviewed articles. Abbreviations: A-thermal—Airborne thermal, AHS—Airborne Hyperspectral Sensor, ALS—Airborne Laser Scanning, AMS—Airborne Multispectral Sensor, AMSR—Advanced Microwave Scanning Radiometer, AVHRR—Advanced Very High Resolution Radiometer, GLAS—Geoscience Laser Altimeter System, MODIS—Moderate Resolution Imaging Spectroradiometer, MODIS LST—Moderate Resolution Imaging Spectroradiometer Land Surface Temperature, OCO-2—Orbiting Carbon Observatory-2, PROBA-V—Project for On-Board Autonomy—Vegetation, SPOT—Satellite Pour l’Observation de la Terre.</p>
Full article ">Figure 6
<p>Overview of investigated timeframes in reviewed publications. The temporal resolution is depicted using different point and line styles, while colors indicate the spatial resolution (<b>a</b>). Summary of the spatial resolution (<b>b</b>) and temporal resolution (<b>c</b>).</p>
Full article ">Figure 7
<p>The relationship between the spatial extent and mapping resolution among the reviewed articles. The color code represents the observed extreme weather events or trend changes.</p>
Full article ">Figure 8
<p>Number of publications dealing with the respective abiotic driver subdivided into Extreme Weather Event (light gray), Recurrent Extreme Events (darker gray), or long-term Trend Changes (below dashed line) due to climate change. Studies may cover more than one extreme event.</p>
Full article ">Figure 9
<p>The distribution of whether the study analyzed a direct forest response to climate change or extreme weather event with remote sensing or not (<b>a</b>). The distribution of the different associated forest response categories (<b>b</b>).</p>
Full article ">Figure 10
<p>The relative values of the associated forest response categories differentiated by extreme weather events, recurrent extreme events, or climate change-induced trend changes, whereby the absolute figures fluctuate greatly.</p>
Full article ">Figure 11
<p>Percentage of forest types investigated. At least two forest types were mentioned in 57 studies.</p>
Full article ">Figure 12
<p>The utilization of in-depth forest differentiation is illustrated in the left donut chart. In-depth forest differentiation describes whether or not the analyzed forest is subdivided by different forest characteristics. These characteristics are displayed on the <span class="html-italic">x</span>-axis, and are categorized according to the scale of the study area. An overview of the general distribution of territorial extents is presented in the right donut chart, which also represents the color code for the bar chart.</p>
Full article ">