Remote Sens., Volume 16, Issue 11 (June-1 2024) – 225 articles

Cover Story (view full-size image): A surge is a natural catastrophe during which a glacier accelerates to 100–200 times its normal speed, resulting in rapid mass transfer from the cryosphere to the oceans and contributing significantly to sea-level rise. To study the current surge of the Negribreen Glacier System in Arctic Svalbard, we investigate the trade-offs between two different approaches to modern machine learning in the geosciences: (1) deep, convolutional neural networks (CNNs) and (2) NNs informed by Earth observations and geophysical knowledge (physically constrained NNs). Combining the advantages from each method in the GEOCLASS-image system, we derive a physically informed CNN, VarioCNN, that allows rapid and efficient extraction of complex geophysical information from submeter-resolution satellite imagery.
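The physically informed input used in this line of work is a spatial variogram computed directly from the imagery. As a rough, hypothetical illustration only (not the GEOCLASS-image or VarioCNN code), the sketch below computes a first-order directional variogram of an image patch with NumPy; the patch size, lag range, and synthetic test surface are arbitrary choices.

```python
import numpy as np

def directional_variogram(patch: np.ndarray, max_lag: int = 20, axis: int = 1) -> np.ndarray:
    """First-order empirical variogram of an image patch along one axis.

    v(h) = 0.5 * mean((z(x + h) - z(x))^2), evaluated for lags h = 1..max_lag.
    """
    patch = patch.astype(float)
    gammas = []
    for h in range(1, max_lag + 1):
        if axis == 1:  # horizontal lag
            diff = patch[:, h:] - patch[:, :-h]
        else:          # vertical lag
            diff = patch[h:, :] - patch[:-h, :]
        gammas.append(0.5 * np.mean(diff ** 2))
    return np.array(gammas)

# Toy example: a synthetic crevassed surface (sinusoidal ridges plus noise).
rng = np.random.default_rng(0)
x = np.arange(128)
patch = np.sin(2 * np.pi * x / 16)[None, :] + 0.1 * rng.standard_normal((128, 128))
v = directional_variogram(patch, max_lag=32)
print("lag of first variogram maximum:", np.argmax(v) + 1)  # ~half the ridge spacing (period 16 -> lag 8)
```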
19 pages, 5326 KiB  
Article
Lunar High Alumina Basalts in Mare Imbrium
by Jingran Chen, Shengbo Chen, Ming Ma and Yijun Jiang
Remote Sens. 2024, 16(11), 2045; https://doi.org/10.3390/rs16112045 - 6 Jun 2024
Viewed by 609
Abstract
High-alumina (HA) mare basalts play a critical role in lunar mantle differentiation. Although remote sensing studies have suggested regions where they may be present based on sample FeO and TiO2 compositions, the location and distribution characteristics of HA basalts have not been established. In this study, the compositions of exposed rocks in Mare Imbrium were determined using Lunar Reconnaissance Orbiter (LRO) Diviner oxide and Lunar Prospector Gamma-Ray Spectrometer (LP-GRS) thorium (Th) products. Exposed HA basalts were identified based on laboratory lithology classification criteria and Al2O3 abundance. The HA basalt units were mapped using lunar topographic data, and their morphological geological characteristics were calculated from elevation data. The results show that there are 8406 HA basalt pixels and 17 original units formed by volcanic eruptions in Mare Imbrium. Statistics of their morphological characteristics show that the HA basalts are widely distributed in the northern part of Mare Imbrium and that their compositions vary over a large range. The units differ in area and volume, and the layers they formed are discontinuous. The characteristic analysis shows that the aluminum-bearing volcanic activity in Mare Imbrium was irregular: eruptions from four different source regions occurred in three phases, and the eruptions differed in scale and extent. These results provide reliable evidence for the heterogeneity of the lunar mantle and contribute valuable information on the formation of early lunar mantle materials. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
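The identification step summarized in this abstract amounts to thresholding co-registered oxide and thorium maps. The sketch below is a minimal, hypothetical version of that step: the 11 wt% Al2O3 bound is the conventional definition of high-alumina basalt, while the FeO, TiO2, and Th bounds and the random input grids are placeholders rather than the paper's laboratory classification criteria.

```python
import numpy as np

# Hypothetical gridded composition maps (wt% for oxides, ppm for Th),
# standing in for resampled Diviner and LP-GRS products on a common grid.
rng = np.random.default_rng(0)
al2o3 = rng.uniform(5, 15, (500, 500))
feo   = rng.uniform(10, 22, (500, 500))
tio2  = rng.uniform(0.5, 6, (500, 500))
th    = rng.uniform(0.5, 6, (500, 500))

# Illustrative criteria: mare-basalt-like FeO/TiO2/Th plus elevated alumina.
# The 11 wt% Al2O3 bound is the conventional high-alumina threshold; the other
# bounds are placeholders, not the paper's classification criteria.
is_mare_basalt = (feo > 16) & (tio2 < 5) & (th < 4)
is_high_alumina = al2o3 > 11

ha_basalt_mask = is_mare_basalt & is_high_alumina
print("HA basalt pixels:", int(ha_basalt_mask.sum()))
```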
Figure 1. LRO Diviner Al2O3 map (70°N/S, 32 ppd [43]); the base image is the LROC WAC map (100 m/pixel [62]).
Figure 2. Flow chart of the remote sensing identification and morphological geological characteristics analysis of HA basalts. The red text in the figure introduces the research methodology of this paper.
Figure 3. Exposed rock pixels formed by impact in Mare Imbrium. (a) Map of Mare Imbrium. (b) Map of mare basalt unit I18 in Hiesinger et al. [31].
Figure 4. Simplified vertical profile of the impact crater. a and b on the crater wall represent the calculated pixels; c, d, e, and g on the edge represent the calculated interference factors; f is the threshold established based on the value of the impact crater radius R.
Figure 5. Map of Mare Imbrium lithology classification results, including mare basalt (MB), magnesian suite (MS), alkali suite (AS), and KREEP basalt. The mountains in Mare Imbrium are marked in yellow bold text, and the large impact craters in yellow text.
Figure 6. The amount of Mare Imbrium lithological classifications and the amount of MB classifications.
Figure 7. The abundance of the three main oxides in Mare Imbrium HA basalt pixels.
Figure 8. Mare Imbrium HA basalt unit map; F1–F17 represent the identified HA basalt units; the mountains in Imbrium are marked in dark yellow, and the large craters excavated by impact events are marked in light yellow.
Figure 9. The HA basalt layer and basalt layer thickness and ratio (a); the maximum burial depth difference and the minimum burial depth difference (b).
Figure 10. Sinus Iridum basalt units [31] and HA basalt units (a); five units in central Imbrium (b,c) and two large impact craters (d); (d) shows the white box area in (c). The colour of the pixels is the Al2O3 abundance of the HA basalts.
20 pages, 6559 KiB  
Article
Study of Fast and Reliable Time Transfer Methods Using Low Earth Orbit Enhancement
by Mingyue Liu, Rui Tu, Qiushi Chen, Qi Li, Junmei Chen, Pengfei Zhang and Xiaochun Lu
Remote Sens. 2024, 16(11), 2044; https://doi.org/10.3390/rs16112044 - 6 Jun 2024
Viewed by 523
Abstract
The Global Navigation Satellite System (GNSS) can be used for long-distance, high-precision time transfer. With the ongoing development of low Earth orbit (LEO) satellites and the rapidly changing satellite–receiver geometry they introduce, the convergence rate of the ambiguity parameters in Precise Point Positioning (PPP) algorithms increases, enabling fast and reliable time transfer. In this paper, GPS is used as an experimental case, a LEO satellite constellation is designed, and simulated LEO observation data are generated. Then, using GPS observation data provided by the IGS, a LEO-enhanced PPP model is established and employed for faster and more reliable high-precision time transfer. The application of the LEO-augmented PPP model to time transfer is examined through experimental examples covering multiple types of time transfer links, and the experimental outcomes are consistent. GPS + LEO is compared with GPS-only time transfer schemes: the clock offset of the time transfer link converges more quickly under the GPS + LEO scheme, i.e., the time required for the clock offset to reach a stable level is the shortest. In this paper, the standard deviation is used to assess clock offset stability, and the Allan deviation is used to assess frequency stability. The results show that the clock offset stability and frequency stability achieved by the GPS + LEO scheme are superior within the convergence time range. Controlled experiments with different numbers of LEO satellites indicate that time transfer performance improves as the number of satellites increases. As a result, augmenting GPS tracking data with LEO observations enhances the time transfer service compared with GPS alone. Full article
(This article belongs to the Topic GNSS Measurement Technique in Aerial Navigation)
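For readers unfamiliar with the two stability metrics named in the abstract, the sketch below computes the standard deviation and the overlapping Allan deviation of a clock-offset (time-error) series; the 30 s sampling interval and the white-noise test series are assumptions for illustration, not the paper's data.

```python
import numpy as np

def overlapping_adev(x: np.ndarray, tau0: float, m: int) -> float:
    """Overlapping Allan deviation from time-error data x (seconds),
    sampled every tau0 seconds, at averaging time tau = m * tau0."""
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]          # second differences of the phase data
    avar = np.sum(d2 ** 2) / (2.0 * (len(x) - 2 * m) * (m * tau0) ** 2)
    return np.sqrt(avar)

# Hypothetical clock-offset series: 24 h at 30 s sampling, white phase noise.
tau0 = 30.0
rng = np.random.default_rng(1)
x = 1e-9 * rng.standard_normal(2880)                     # seconds

print("STD of clock offset: %.3e s" % np.std(x))
for m in (1, 10, 100):
    print("ADEV(tau = %5.0f s): %.3e" % (m * tau0, overlapping_adev(x, tau0, m)))
```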
Figure 1. Global distribution of stations and time transfer links.
Figure 2. Average number of daily visible satellites of the LEO constellation on the first day of 2022.
Figure 3. Receiver clock offset time series of stations (black denotes GPS alone; blue, GPS + LEOs).
Figure 4. First-order difference of the receiver clock offset time series of the stations (black denotes GPS alone; blue, GPS + LEOs). Panels (a–c) show the results for the areg, harb, and ons1 stations, respectively.
Figure 5. Receiver clock offset convergence time.
Figure 6. Time transfer link clock offset time series (black denotes GPS alone; blue, GPS + LEOs). Panels (a–j) show the areg-mcil, gold-pie1, pie1-ons1, ons1-bor1, bor1-usud, kiru-dlf1, dlf1-harb, harb-sydn, kiru-syog, and ohi2-syog links, respectively.
Figure 7. First-order difference results of the clock offset time series of the time transfer links (black denotes GPS alone; blue, GPS + LEOs). Panels (a–j) show the same ten links as in Figure 6.
Figure 8. Time transfer link convergence time histogram.
Figure 9. Histogram of the standard deviation of time transfer link clock offsets in the convergence interval.
Figure 10. The Allan deviation results of the time transfer links in the convergence interval.
Figure 11. Clock offset diagram of the pie1-ons1 time transfer link in the four enhancement scenarios.
Figure 12. First-order difference diagram of the pie1-ons1 time transfer link clock offset in the four enhancement scenarios.
Figure 13. Clock offset diagram of the harb-sydn time transfer link in the four enhancement scenarios.
Figure 14. First-order difference diagram of the harb-sydn time transfer link clock offset in the four enhancement scenarios.
Figure 15. Clock offset diagram of the ohi2-syog time transfer link in the four enhancement scenarios.
Figure 16. First-order difference diagram of the ohi2-syog time transfer link clock offset in the four enhancement scenarios.
Figure 17. Convergence time histograms of the three links across the four enhancement scenarios.
18 pages, 9341 KiB  
Article
Evaluation of Soybean Drought Tolerance Using Multimodal Data from an Unmanned Aerial Vehicle and Machine Learning
by Heng Liang, Yonggang Zhou, Yuwei Lu, Shuangkang Pei, Dong Xu, Zhen Lu, Wenbo Yao, Qian Liu, Lejun Yu and Haiyan Li
Remote Sens. 2024, 16(11), 2043; https://doi.org/10.3390/rs16112043 - 6 Jun 2024
Viewed by 841
Abstract
Drought stress is a significant factor affecting soybean growth and yield, and the lack of suitable high-throughput phenotyping techniques hinders the drought tolerance evaluation of multi-genotype samples. A method for evaluating drought tolerance in soybeans is proposed based on multimodal remote sensing data from an unmanned aerial vehicle (UAV) and machine learning. Hundreds of soybean genotypes were repeatedly planted under well-watered (WW) and drought stress (DS) conditions in different years and locations (Jiyang and Yazhou, Sanya, China), and UAV multimodal data were obtained at multiple fertility stages. Notably, data from Yazhou were repeatedly obtained during five significant fertility stages, selected based on days after sowing (DAS). The geometric mean productivity (GMP) index was selected to evaluate the drought tolerance of soybeans. Validated against manual measurements after harvest, support vector regression (SVR) provided the best results (N = 356, R2 = 0.75, RMSE = 29.84 g/m2). The model was also transferred to the Jiyang dataset (N = 427, R2 = 0.68, RMSE = 15.36 g/m2). Soybean varieties were categorized into five Drought Injury Scores (DISs) based on the manually measured GMP. Compared with the manual DIS, the accuracy of the predicted DIS increased gradually with the soybean growth period, reaching a maximum of 77.12% at maturity. This study proposes a UAV-based method for the rapid, high-throughput evaluation of drought tolerance in multi-genotype soybean at multiple fertility stages, which provides a new way to judge the drought tolerance of individual varieties early, improves the efficiency of soybean breeding, and has the potential to be extended to other crops. Full article
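The drought-tolerance index used in the abstract, geometric mean productivity, is GMP = sqrt(Y_WW × Y_DS), the geometric mean of the yields under well-watered and drought-stress conditions. The sketch below shows that calculation and a scikit-learn SVR fit of GMP from UAV canopy traits, reporting R2 and RMSE as in the abstract; the feature set and synthetic data are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 500  # hypothetical plots

# Placeholder UAV canopy traits (coverage, height, volume, NDVI, ...) and yields.
X = rng.normal(size=(n, 8))
yield_ww = 300 + 40 * X[:, 0] + 10 * rng.normal(size=n)   # g/m^2 under well-watered conditions
yield_ds = 180 + 30 * X[:, 1] + 10 * rng.normal(size=n)   # g/m^2 under drought stress
gmp = np.sqrt(np.clip(yield_ww, 1, None) * np.clip(yield_ds, 1, None))  # geometric mean productivity

X_tr, X_te, y_tr, y_te = train_test_split(X, gmp, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2 = %.2f, RMSE = %.2f g/m2"
      % (r2_score(y_te, pred), mean_squared_error(y_te, pred) ** 0.5))
```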
Figure 1. Experimental study design and workflow. (a) Location of the experimental site. (b) DJI Matrice 300 RTK UAV platform equipped with visible and multispectral sensors for soybean image acquisition. (c) Multimodal data for soybeans over multiple fertility stages. (d) WW and DS soybean plots and DISs. (e) Machine learning modeling. (f) GMP regression and DIS classification for multiple fertility stages.
Figure 2. UAV-based multi-source image data-processing workflow. (a) Frame of the plot from the orthophoto map. (b) Segmentation of ROI from the plots and extraction of canopy coverage, length, and width. (c) Segmentation of soybean from the dense point cloud and acquisition of ROI. (d) Extraction of vegetation indices from five multispectral bands.
Figure 3. Comparison of canopy traits under WW and DS at maturity (Yazhou, DAS64). (a–h) Comparison plots of canopy coverage, plant height, length, width, volume, NDVI, ARVI, and RECI under WW and DS; the x-axis labels are the pseudo-ID numbers of each genotype.
Figure 4. Box–line plots of dynamic changes in soybean canopy traits (Yazhou). (a–h) Comparison plots of dynamic changes in canopy coverage, plant height, length, width, volume, NDVI, ARVI, and RECI under WW and DS. *** indicates a significance level of p = 0.001.
Figure 5. Pearson correlation between yield and eight canopy traits. (a,b) Correlation between yield and five canopy traits by image collection date under WW and DS. NS indicates not significant; * and *** indicate significance levels of p = 0.05 and 0.001.
Figure 6. Correlation matrix of DTIs with the eight canopy traits at maturity and yield under WW and DS, all at the p = 0.001 significance level.
Figure 7. Performance of estimating GMP with XGBoost, RF, and SVR models. (a–c) Yazhou 64DAS data. (d–f) Jiyang 71DAS data.
Figure 8. DIS classification of soybean at multiple fertility stages based on machine learning. (a–c) Results of XGBoost, RF, and SVC models on the test set (Yazhou, N = 153). (d) DIS classification prediction from 64DAS (Yazhou, N = 509). (e–h) DIS classification prediction from 21DAS, 35DAS, 46DAS, and 56DAS (Yazhou, N = 509). (i–l) Comparison between predicted DIS and DAS64-predicted DIS at 21DAS, 35DAS, 46DAS, and 56DAS (Yazhou, N = 509).
23 pages, 28193 KiB  
Article
Using Ground-Penetrating Radar (GPR) to Investigate the Exceptionally Thick Deposits from the Storegga Tsunami in Northeastern Scotland
by Charlie S. Bristow, Lucy K. Buck and Rishi Shah
Remote Sens. 2024, 16(11), 2042; https://doi.org/10.3390/rs16112042 - 6 Jun 2024
Viewed by 1048
Abstract
A submarine landslide on the edge of the Norwegian shelf around 8150 ± 30 cal. years BP triggered a major ocean-wide tsunami, the deposits of which are recorded around the North Atlantic, including in Scotland. Ground-penetrating radar (GPR) was used here to investigate tsunami sediments within estuaries on the coast of northeastern Scotland, where the tsunami waves were funnelled inland. Around the Dornoch Firth, the tsunami deposits are up to 1.6 m thick, which is exceptional for tsunami deposits and about twice the thickness of the 2004 Indian Ocean tsunami (IOT) or 2011 Tohoku-oki tsunami deposits. The exceptional thickness is attributed to a high sediment supply within the Dornoch Firth. At Ardmore, the tsunami appears to have overtopped a beach ridge, depositing a thick sand layer inland at Dounie that partly infilled a valley; later fluvial activity locally eroded the tsunami sediments, removing the sand layer. At Creich, on the north side of the Dornoch Firth, the sand layer varies in thickness; mapping with GPR shows lateral thickness changes of over 1 m, attributed to a combination of infilling of an underlying topography, differential compaction, and later reworking by tidal inlets. Interpretation of the GPR profiles at Wick suggests that there has been a miscorrelation of the Holocene stratigraphy based on boreholes. Changes in the stratigraphy of spits at Ardmore are attributed to the balance between sediment supply and sea-level change, with washovers dominating a spit formed during the early Holocene transgression, while spits formed during the subsequent mid-Holocene high-stand are dominated by progradation. Full article
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)
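The depths quoted from the GPR profiles come from a standard two-way travel-time conversion, depth = v·t/2 with v = c/sqrt(ε_r). The sketch below illustrates it; the relative permittivity (ε_r ≈ 25 for wet sand) and the reflector picks are assumed values, not the velocity calibration used in this survey.

```python
import numpy as np

C0 = 0.2998  # speed of light in vacuum, m/ns

def twt_to_depth(twt_ns: np.ndarray, eps_r: float) -> np.ndarray:
    """Convert GPR two-way travel time (ns) to depth (m) in a uniform medium."""
    v = C0 / np.sqrt(eps_r)        # wave velocity in the ground, m/ns
    return v * twt_ns / 2.0        # one-way distance

# Hypothetical reflector picks (ns) along a profile and an assumed permittivity.
picks_ns = np.array([20.0, 35.0, 55.0])
depths = twt_to_depth(picks_ns, eps_r=25.0)   # v ~ 0.06 m/ns for wet sand
print(np.round(depths, 2))                    # -> [0.6  1.05 1.65] m
```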
Graphical abstract
Figure 1. (a) Satellite image of the UK and Ireland from Google Earth™, with the outline of the Storegga slide shown by a yellow line and red boxes showing (b) the location of Milton Farm near Wick and (c) Creich, Ardmore, and Dounie around the Dornoch Firth. Detailed images of the survey sites are shown in Figure 2 (Milton Farm), Figure 3 (Creich), Figure 4 (Ardmore), and Figure 5 (Dounie).
Figure 2. Satellite image showing the location of GPR profiles and boreholes from [11] at Milton Farm near Wick.
Figure 3. Satellite image of the study area at Creich. (a) In this image from April 2019, the field has been cultivated with no crops growing; darker areas are wet soil, pale-coloured areas are dryer and a little higher, and the pale-blue area is interpreted as a palaeochannel. A break of slope is marked where Holocene sediments abut Pleistocene sediments. (b) Location of GPR profiles and boreholes at Creich; the GPR lines are identified by letters A–W and the boreholes by numbers c1–c8. Google Earth image CNES/Airbus.
Figure 4. Satellite image showing the location of GPR profiles, red lines A-A′ and B-B′, at Ardmore.
Figure 5. Satellite image showing the location of GPR profiles, red lines A-A′, B-B′, C-C′ and D-D′, and auger borehole D1 at Dounie.
Figure 6. GPR data from Milton Farm near Wick. (a) GPR profile A-A′ between boreholes 23 and 20 of [11]. (b) Reflections at depths of 1 to 2 m are coloured red, yellow, and orange and are not continuous across the whole profile; the red reflection is correlated with a tsunami sand identified by [11] in borehole 23 and is truncated by the yellow reflection between 30 and 40 m, interpreted as local erosion of the sand layer; the yellow reflection is truncated by the orange reflection, so the sand layer in borehole 23 is not present in borehole 20. (c) The pink line indicates the correlation of sand layers made by [11]; it does not follow the stratigraphy shown by GPR, showing a miscorrelation. (d) GPR profile B-B′ is perpendicular to A-A′, and the surface slopes gently from NE to SW away from the River Wick due to a levee. (e) The base of the section is interpreted as a fluvial erosion surface (orange fill, RF7) from a precursor to the River Wick; the reflection interpreted as the tsunami sand (red) is truncated by a younger erosion surface (yellow).
Figure 7. GPR data for Creich lines G-G′ (a,b) and B-B′ (c,d). The tsunami sand (RF1, pink) is sandwiched between low-amplitude packages of RF2 (brown) and RF5 (blue), interpreted as marsh and tidal channels, respectively. The blue tidal channel facies locally cut into the pink tsunami reflection; this erosion explains some of the thickness changes observed in Figure 8, but the base of the tsunami layer also changes elevation, indicating a buried topography. Inset (e) shows a plan of the GPR profiles with lines B-B′ and G-G′ in red.
Figure 8. (A) Surface graph showing the elevation of the top of the sand layer. (B) Elevation of the base of the sand layer, with the black-to-yellow scale bar from 0–5 m elevation. (C) Thickness of the sand layer derived by subtracting the base elevation from the top elevation, with the blue-to-red scale bar from 0–2 m. The thickness changes are attributed to differential compaction over intertidal channel, mudflat, and supratidal marsh sediments, with later erosion adding to the complex 3-D changes in thickness.
Figure 9. Summary lithology logs for auger boreholes at Creich and Dounie.
Figure 10. GPR data from Ardmore with A close to the sea and B′ at the inland end (see locations in Figure 4). The tsunami erosion surface (red line) appears to have washed over and truncated an earlier beach ridge between 160 and 240 m that is dominated by landward-dipping washover deposits (RF3). The beach ridges closer to the sea are younger and contain more prograding beach deposits (RF4).
Figure 11. GPR profiles A-A′ and B-B′ across a shallow valley at Dounie. The tsunami sand layer (RF1) thickens into the middle of the valley and pinches out against the valley margins. It is cut by a fluvial channel between 280 and 300 m.
Figure 12. GPR profiles Dounie C-C′ and D-D′ are perpendicular to each other and show river deposits (RF6) in the south and west, cutting into tsunami and tidal flat sediments (RF1 and RF2) in the north and west.
19 pages, 9652 KiB  
Article
Focus on the Crop Not the Weed: Canola Identification for Precision Weed Management Using Deep Learning
by Michael Mckay, Monica F. Danilevicz, Michael B. Ashworth, Roberto Lujan Rocha, Shriprabha R. Upadhyaya, Mohammed Bennamoun and David Edwards
Remote Sens. 2024, 16(11), 2041; https://doi.org/10.3390/rs16112041 - 6 Jun 2024
Viewed by 1191
Abstract
Weeds pose a significant threat to agricultural production, leading to substantial yield losses and increased herbicide usage, with severe economic and environmental implications. This paper uses deep learning to explore a novel approach via targeted segmentation mapping of crop plants rather than weeds, focusing on canola (Brassica napus) as the target crop. Multiple deep learning architectures (ResNet-18, ResNet-34, and VGG-16) were trained for the pixel-wise segmentation of canola plants in the presence of other plant species, assuming all non-canola plants are weeds. Three distinct datasets (T1_miling, T2_miling, and YC) containing 3799 images of canola plants in varying field conditions alongside other plant species were collected with handheld devices at 1.5 m. The top performing model, ResNet-34, achieved an average precision of 0.84, a recall of 0.87, a Jaccard index (IoU) of 0.77, and a Macro F1 score of 0.85, with some variations between datasets. This approach offers increased feature variety for model learning, making it applicable to the identification of a wide range of weed species growing among canola plants, without the need for separate weed datasets. Furthermore, it highlights the importance of accounting for the growth stage and positioning of plants in field conditions when developing weed detection models. The study contributes to the growing field of precision agriculture and offers a promising alternative strategy for weed detection in diverse field environments, with implications for the development of innovative weed control techniques. Full article
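The segmentation metrics reported in the abstract (precision, recall, IoU/Jaccard, F1) can be computed from the pixel-wise confusion counts of a predicted mask against its ground truth. The sketch below does this for the binary canola/background case with synthetic masks; the macro F1 in the paper would be the average of the per-class F1 scores.

```python
import numpy as np

def binary_seg_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise precision, recall, IoU, and F1 for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "iou": iou, "f1": f1}

# Synthetic 512 x 512 masks standing in for a canola prediction and its label.
rng = np.random.default_rng(0)
truth = rng.random((512, 512)) > 0.7
pred = truth ^ (rng.random((512, 512)) > 0.95)   # flip roughly 5% of pixels
print({k: round(v, 3) for k, v in binary_seg_metrics(pred, truth).items()})
```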
Figure 1. Simplified workflow for training deep learning models for segmentation.
Figure 2. Display of the base full-size RGB image and labelled crop mask after soil deletion, vegetation index application, and overlay of ground truth labels. Plants other than canola are shown in blue.
Figure 3. Image modified from Prakash et al. [75]. U-Net model architecture with a ResNet-34 encoder, displaying residual blocks made up of convolutional layers followed by skip connections, allowing for a deeper network structure. Each residual block is followed by a ReLU activation function. Max pooling layers reduce the spatial dimensions of the feature maps, assisting with down-sampling.
Figure 4. Modified version of an image from Gorla Praveen, available at https://commons.wikimedia.org/wiki/File:VGG16.pn (accessed on 2 October 2023). Architecture of the VGG-16 convolutional neural network model, with five sets of convolutional blocks and pooling layers.
Figure 5. Random subset of predicted segmentation masks from the ResNet-34 model for each dataset. Each column shows an RGB image (row 1), ground truth (row 2), and predicted mask (row 3) from each dataset: T1 Miling, T2 Miling, and York canola. Canola structures are shown in orange; all other plant structures in green. ResNet-34 displayed the capacity to accurately detect plant structures while obscured by crop stubble (York canola panel 1, B), as well as to correctly identify mislabelled plant structures (T2 Miling, C). Blue lupin regrowth is shown in images A and B; examples of light reflectance reducing segmentation performance can be observed in image C.
Figure 6. Twenty 125 × 125 pixel layer-1 feature maps extracted from ResNet-34 using the torchvision feature extraction tool. Images are not at the native 500 × 500 resolution due to the initial encoding of the U-Net model architecture.
Figure 7. Twenty 500 × 500 pixel feature maps extracted from layer 1 of VGG-16 using the torchvision feature extraction tool.
Figure 8. Subset of feature maps displaying attention to crop-specific structures extracted from VGG-16. The model appears to show greater attention to unique shapes and structures along leaf edges and stem structures.
27 pages, 4080 KiB  
Review
Satellite Remote Sensing Tools for Drought Assessment in Vineyards and Olive Orchards: A Systematic Review
by Nazaret Crespo, Luís Pádua, João A. Santos and Helder Fraga
Remote Sens. 2024, 16(11), 2040; https://doi.org/10.3390/rs16112040 - 6 Jun 2024
Viewed by 1515
Abstract
Vineyards and olive groves are two of the most important Mediterranean crops, not only for their economic value but also for their cultural and environmental significance, playing a crucial role in global agriculture. This systematic review, based on an adaptation of the 2020 PRISMA statement, focuses on the use of satellite remote sensing tools for the detection of drought in vineyards and olive groves. The methodology follows several key steps, such as defining the approach, selecting keywords and databases, and applying exclusion criteria. The bibliometric analysis revealed that the most frequently used terms included “Google Earth Engine”, “remote sensing”, “leaf area index”, “Sentinel-2”, and “evapotranspiration”. A total of 81 published articles were included. The temporal distribution shows an increase in scientific production starting in 2018, with a peak in 2021. Geographically, the United States, Italy, Spain, France, Tunisia, Chile, and Portugal lead research in this field. The studies were classified into four categories: aridity and drought monitoring (ADM), agricultural water management (AWM), land use management (LUM), and water stress (WST). Research trends were analysed in each category, highlighting the satellite platforms and sensors used. Several case studies illustrate applications in vineyards and olive groves, especially in semi-arid regions, focusing on the estimation of evapotranspiration, crop coefficients, and water use efficiency. This article provides a comprehensive overview of the current state of research on the use of satellite remote sensing for drought assessment in grapevines and olive trees, identifying trends, methodological approaches, and opportunities for future research in this field. Full article
(This article belongs to the Special Issue Remote Sensing in Viticulture II)
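Many of the indices tallied in this review are simple band ratios. As a small, generic example (not tied to any particular reviewed study), the sketch below computes NDVI from arrays standing in for Sentinel-2 red (B4) and near-infrared (B8) surface reflectance; the reflectance values are synthetic placeholders.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Placeholder surface-reflectance tiles standing in for Sentinel-2 B4 and B8.
rng = np.random.default_rng(0)
b4 = rng.uniform(0.02, 0.15, (256, 256))   # red reflectance
b8 = rng.uniform(0.20, 0.45, (256, 256))   # NIR reflectance
vi = ndvi(b4, b8)
print("mean NDVI over the tile: %.2f" % vi.mean())
```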
Figure 1. Average grape production (t) by country between 2018 and 2022 (a); worldwide grape production evolution between 1961 and 2022 (b); and percentage of vineyard plantation area by continent in 2022 (c). Adapted from FAOSTAT.
Figure 2. Average olive production (t) by country between 2018 and 2022 (a); worldwide olive production evolution between 1961 and 2022 (b); and percentage of olive plantation area by continent in 2022 (c). Adapted from FAOSTAT.
Figure 3. PRISMA flow diagram of the systematic literature review search, adapted from Moher et al. [34].
Figure 4. Network connection graph between the top 20 most frequently used keywords in the selected studies. The size of each circle corresponds to the frequency of a keyword's usage, with larger circles indicating higher usage. The top 5 most used keywords are distinguished in red. The proximity between circles, connected by lines, identifies the degree of connection between the corresponding keywords.
Figure 5. Temporal distribution of the articles included in the systematic review by publication year (a) and spatial distribution by country (b).
Figure 6. Distribution of studies, expressed as a percentage, categorised by crop type.
Figure 7. Percentage of publications between 1 January 2003 and 30 December 2023, according to categorical classification by country (a) and by year (b). ADM: aridity and drought monitoring; AWM: agricultural water management; LUM: land use management; WST: water stress. AF: Afghanistan; AU: Australia; CL: Chile; CN: China; FR: France; GR: Greece; IT: Italy; LB: Lebanon; MD: Moldova; MA: Morocco; PT: Portugal; KSA: Saudi Arabia (Kingdom of); ES: Spain; TN: Tunisia; TR: Türkiye; USA: United States of America.
Figure 8. Percentage distribution of studies by satellite platform (a) and by sensor (b) based on their respective categorical classification (ADM, AWM, LUM, WST as in Figure 7). Note that one study can appear several times, as it may include more than one index.
Figure 9. Distribution of studies by the use of indices according to their respective categorical classifications (ADM, AWM, LUM, WST as in Figure 7). “Others” corresponds to the sum of vegetation indices used only once. Note that one study can appear several times, as it may include more than one index.
24 pages, 6459 KiB  
Article
An Efficient Ground Moving Target Imaging Method for Synthetic Aperture Radar Based on Scaled Fourier Transform and Scaled Inverse Fourier Transform
by Xin Zhang, Haoyu Zhu, Ruixin Liu, Jun Wan and Zhanye Chen
Remote Sens. 2024, 16(11), 2039; https://doi.org/10.3390/rs16112039 - 6 Jun 2024
Viewed by 521
Abstract
The unknown relative motions between synthetic aperture radar (SAR) and a ground moving target will lead to serious range cell migration (RCM) and Doppler frequency spread (DFS). The energy of the moving target will defocus, given the effect of the RCM and DFS. The moving target will easily produce Doppler ambiguity, due to the low pulse repetition frequency of radar, and the Doppler ambiguity complicates the corrections of the RCM and DFS. In order to address these issues, an efficient ground moving target focusing method for SAR based on scaled Fourier transform and scaled inverse Fourier transform is presented. Firstly, the operations based on the scaled Fourier transform and scaled inverse Fourier transforms are presented to focus the moving targets in consideration of Doppler ambiguity. Subsequently, in accordance with the detailed analysis of multiple target focusing, the spurious peak related to the cross term is removed. The proposed method can accurately eliminate the DFS and RCM, and the well-focused result of the moving target can be achieved under the complex Doppler ambiguity. Then, the blind speed sidelobe can be further avoided. The presented method has high computational efficiency without the step of parameter search. The simulated and measured SAR data are provided to demonstrate the effectiveness of the developed method. Full article
(This article belongs to the Special Issue Technical Developments in Radar—Processing and Application)
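A quick way to see why RCM correction matters is to compare the range migration accumulated over the synthetic aperture with the range resolution c/(2B). The back-of-the-envelope sketch below does this for made-up radar and target parameters; none of the values are taken from the paper's simulations, and it does not implement the SCFT/SCIFT processing itself.

```python
import numpy as np

# Hypothetical radar/target parameters (placeholders, not the paper's values).
c = 3e8            # speed of light, m/s
bandwidth = 150e6  # transmitted bandwidth, Hz
t_syn = 2.0        # synthetic aperture time, s
v_radial = 8.0     # target radial velocity, m/s
a_radial = 0.3     # target radial acceleration, m/s^2

range_res = c / (2 * bandwidth)                       # ~1 m range cell
rcm = v_radial * t_syn + 0.5 * a_radial * t_syn ** 2  # linear + quadratic range migration

print("range resolution: %.2f m" % range_res)
print("range cell migration over the aperture: %.2f m (~%d cells)"
      % (rcm, int(np.round(rcm / range_res))))
```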
Graphical abstract
Figure 1. Motion geometric model between the moving target and the SAR platform.
Figure 2. Azimuth Doppler spectrum distribution diagram. (a) The Doppler spectrum occupies one PRF band. (b) The Doppler spectrum occupies two PRF bands. (c) The Doppler spectrum occupies more PRF bands.
Figure 3. Flow chart of the proposed method.
Figure 4. Experimental results of Case A. (a) Results after range compression for TA and TB. (b) Results after the SCFT operation. (c) Results after the SCIFT operation. (d) Focusing results using the auto-term peak TA parameters. (e) Focusing results using the auto-term peak TB parameters. (f) Focusing results using the cross-term peak TC parameters.
Figure 5. Experimental results of Case B. (a) Results after range compression for TD and TE. (b) Results after the SCFT operation. (c) Results after the SCIFT operation. (d) Focusing results using the auto-term peak TD parameters. (e) Focusing results using the auto-term peak TE parameters. (f) Focusing results using the cross-term peak TF parameters.
Figure 6. The results of the experiment. (a) Range compression results. (b) Doppler spectrum of the three targets. (c) Results after the SCFT operation. (d) Results of the SCIFT. (e) Focusing result of TA using the proposed method and (f) its stereogram. (g) Focusing result of TB by the developed method and (h) its stereogram. (i) Focusing result of TC by the developed method and (j) its stereogram. (k) Result after TB is processed by the method in [20] and (l) its stereogram. (m) Compensation result for LRCM using the keystone-based method for TB [30]. (n) Result after TB is processed by the method in [21].
Figure 7. Results of spaceborne real data for a single target. (a) Results of range compression. (b) Doppler spectrum of the moving target. (c) Result after the SCFT operation. (d) Result after the SCIFT operation. (e) Focusing result of the target by the developed approach and (f) its stereogram. (g) Results of processing with the method in [17]. (h) Results of processing with the method in [14].
Figure 8. Spaceborne real data results for two targets. (a) Scene of the selected data. (b) Result of range compression. (c) Result of the SCFT. (d) Result of the SCIFT. (e) Focusing result of the targets by the developed approach.
Figure 9. Results of airborne measured data. (a) Airborne SAR data scene without clutter suppression. (b) Result after clutter suppression. (c) Results of the SCFT. (d) Results of the SCIFT. (e) Result of focusing the target with the developed approach and (f) its stereogram. (g) Result of focusing the target using the DKP method [20] and (h) its stereogram.
Figure 10. A computational complexity diagram of the six methods.
21 pages, 4492 KiB  
Article
Changes in Global Aviation Turbulence in the Remote Sensing Era (1979–2018)
by Diandong Ren and Mervyn J. Lynch
Remote Sens. 2024, 16(11), 2038; https://doi.org/10.3390/rs16112038 - 6 Jun 2024
Cited by 1 | Viewed by 657
Abstract
Atmospheric turbulence primarily originates from abrupt density variations in a vertically stratified atmosphere. Based on the prognostic equation of turbulent kinetic energy (TKE), we chose three indicators corresponding to the forcing terms of TKE generation. Using ERA5 reanalysis data, we first investigate the maximum achievable daily thickness of the planetary boundary layer (PBL). The gradient Richardson number (Ri) is used to represent turbulence arising from shear instability, and the daily maximum convective available potential energy (CAPE) is examined to characterize turbulence linked to convective instability. Our analysis encompasses global turbulence trends, with the North Atlantic Corridor (NAC) examined as a case study. Specifically, the mean annual number of hours featuring shear instability (Ri < 0.25) surged by more than 300 h between the consecutive 20-year periods 1979–1998 and 1999–2018, and a substantial subset of the NAC region exhibited a rise of over 10% in the number of hours characterized as severe shear instability. In contrast, turbulence associated with convective instability (CAPE > 2 kJ/kg), which can necessitate rerouting and pose significant aviation safety challenges, displays a decline; remote sensing of clouds confirms this, and the decline contains a component of inherent natural variability. Our findings suggest that, as a warming climate increases air viscosity and thereby thickens the PBL, global in-flight turbulence is poised to intensify. Full article
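The shear-instability indicator used here is the gradient Richardson number, Ri = N²/(dU/dz)² with N² = (g/θ)·dθ/dz. The sketch below evaluates it layer by layer for a made-up upper-tropospheric sounding and flags layers with Ri < 0.25; the profile values are illustrative, not ERA5 data.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gradient_richardson(z, theta, u, v):
    """Layer-wise gradient Richardson number from a vertical profile.

    z: height (m), theta: potential temperature (K), u/v: wind components (m/s).
    Returns Ri for each layer between consecutive levels.
    """
    dz = np.diff(z)
    n2 = (G / theta[:-1]) * np.diff(theta) / dz                 # Brunt-Vaisala frequency squared
    shear2 = (np.diff(u) / dz) ** 2 + (np.diff(v) / dz) ** 2    # squared vertical wind shear
    return n2 / np.maximum(shear2, 1e-10)

# Illustrative upper-tropospheric profile (not ERA5 data).
z = np.array([9000.0, 9500.0, 10000.0, 10500.0])
theta = np.array([320.0, 321.0, 322.5, 324.5])
u = np.array([35.0, 45.0, 52.0, 55.0])
v = np.array([5.0, 8.0, 10.0, 11.0])

ri = gradient_richardson(z, theta, u, v)
print(np.round(ri, 2), "-> shear instability where Ri < 0.25")
```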
Figure 1. Temperature profiles over polar and midlatitude stations (a) at present and (b) for a warming climate. In (a), soundings taken at 12Z, 25 November 2021 over the South Pole and mid-to-high-latitude stations: Amundsen-Scott (90°S; 0°E), Syowa (69°S; 39.58°E), SAVC (45.78°S; 67.5°W), and NZNV (46.41°S; 168.31°E). At the tropopause, temperature gradients reverse both vertically and meridionally, explaining why the jet core occurs near the tropopause. Panel (b) illustrates how anthropogenic climate change differentially affects the polar and mid-latitude temperature profiles; the net effect is equivalent to a superimposed circulation that counters the present climate mean thermal winds in the lower troposphere (1000–600 hPa) and enhances them in the upper troposphere (600–200 hPa). The commercial aviation operating domain is hatched in (b). Illustrations produced using GrADS (http://cola.gmu.edu/grad.php; accessed on 16 June 2018).
Figure 2. Maximum global PBL depths for March. Using hourly data, daily maximum PBL depths are extracted and used as an index. The maximum reachable March PBL depths (colour shading in metres) at each grid point are obtained by averaging the largest daily maximum PBL depths; the 20-year mean value is then calculated and subtracted to obtain Figure 3. Background and colour shades produced using GrADS (accessed on 15 May 2022).
Figure 3. Differences in March maximum PBL depths (m) between two consecutive 20-year periods, (1999–2018) minus (1979–1998); details of the calculation are described in Figure 2. Except for very limited areas in south Africa and the Tibetan Plateau, all regions of increased PBL depth passed the 95% confidence interval of a t-test with 40 degrees of freedom; regions indicating a decrease in PBL depth (e.g., Western Australia) did not pass the same t-test. Map background and colour shades produced using GrADS (accessed on 1 October 2022).
Figure 4. Differences in maximum PBL depths (m) between the two consecutive 20-year periods, (1999–2018) versus (1979–1998), for four months representing winter (January), spring (April), summer (July), and fall (October). The lowest row is the climatology of the maximum PBL height over 1979–2018; the central row shows the absolute changes (as in Figure 3, (1999–2018) minus (1979–1998)); the upper row shows the percentage changes (central row divided by lowest row).
Figure 5. Increased instability over the globe (a 40°S–70°N latitude belt is shown for clarity). Severe atmospheric turbulence locations were generated from gradient Richardson numbers (Ri) calculated from ERA5 hourly atmospheric parameters during 1979–2018, with the critical Ri value set at 0.25. The total annual number of hours with Ri below the critical value is plotted. Contour lines are increases in the annual mean number of hours with Ri < 0.25 between 1979–1998 and 1999–2018; blue shaded areas identify regions with increases of at least 10% in the annual-average number of hours of severe atmospheric turbulence between the two 20-year periods. Map background and colour shading produced using GrADS (accessed on 1 October 2022).
Figure 6. Similar to Figure 5, but for the North Atlantic Corridor (NAC). The number of hours below the critical Ri (0.25) in a given year is the index. Contour lines show increases in the annual mean number of hours with Ri < 0.25 between 1979–1998 and 1999–2018; blue shaded regions have annual-average numbers of hours with severe atmospheric turbulence increased by at least 10% between the two consecutive 20-year periods. A t-test was performed for each grid point on the 40-year annual time series of this index; stippled regions passed the t-test at the 95% confidence level, covering more than 60% of the traditionally defined NAC. The red dashed line indicates the JFK-LHR route. Map backgrounds and contours produced using GrADS (accessed on 1 October 2022).
Figure 7. Annual mean trends (colour shades) in wind magnitude in the subtropical jets as a function of latitude and atmospheric pressure over 1979–2018, calculated as the temporal mean over (1999–2018) minus the temporal mean over (1979–1998), based on ERA5 reanalyses. Dashed and solid lines are, respectively, contours of wind magnitude for the second (1999–2018) and first (1979–1998) periods. Wind speeds generally increased aloft (above 150 hPa) and mainly decreased below 150 hPa. Produced using GrADS (accessed on 20 March 2022).
Figure 8. Trends in convective instability. Panel (a) uses the same approach as Figure 5 but with the annual number of hours with CAPE exceeding 2 kJ/kg as the index; colour shades show changes in convective turbulence over the past 40 years, defined as the mean annual number of hours during (1999–2018) minus that during (1979–1998). Globally, regions with reductions in convective instability dominate. Panel (b) shows sliding-window 20-year trends of inter-basin SST differences (TBV: trans-basin variability) for the Atlantic-minus-Pacific (ATL-PAC) and Indian-minus-Pacific (IND-PAC) SST gradients. SST is from the Extended Reconstructed Sea Surface Temperature (ERSST) dataset. Indian (IND), Pacific (PAC), and Atlantic (ATL) basin SSTs are calculated between 20°S and 20°N for the longitudinal ranges 21°E–120°E, 121°E–90°W, and 70°W–20°E, respectively.
Figure A1. Flow chart of how PBL depth is obtained from near-surface properties. Blue arrows indicate the iteration cycles. Iteration starts from the specified initial value of H (two options shown on the illustration) and ceases when the change in H is less than 5 thousandths of its current value or a maximum number of iterations is exceeded. This scheme is applied to the lowest-level sounding data to obtain an estimate of H.
Figure A2. Stable temperatures (°C) for the two regimes under different assumptions of the heat exchange coefficient: control (red lines), with curvature effects (blue lines), and with further consideration of the sensitivity of viscosity to temperature (yellow-brown lines).
15 pages, 12660 KiB  
Article
Contactless X-Band Detection of Steel Bars in Cement: A Preliminary Numerical and Experimental Analysis
by Adriana Brancaccio and Simone Palladino
Remote Sens. 2024, 16(11), 2037; https://doi.org/10.3390/rs16112037 - 6 Jun 2024
Viewed by 578
Abstract
This work presents preliminary experimental results for advancing non-destructive testing methods for detecting steel bars in cement via contactless investigations in the X-band spectrum. This study reveals the field’s penetration into cement, extracting insights into embedded bars through scattered data. Applying a quasi-quadratic inverse scattering technique to numerically simulated data yields promising results, confirming the effectiveness and reliability of the proposed approach. In this realm, using a higher frequency allows for the use of lighter equipment and smaller antennas. Identified areas for improvement include accounting for antenna behavior and establishing the undeformed target morphology and precise orientation. Transitioning from powder-based and sand specimens to real, solid, reinforced concrete structures is expected to alleviate laboratory challenges. Although accurately determining concrete properties such as its relative permittivity and conductivity is essential, it remains beyond the scope of this study. Finally, overcoming these challenges could significantly enhance non-invasive testing, improving structural health monitoring and disaster prevention. Full article
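Figure 1 of this paper plots attenuation in dB/dm as a function of frequency, conductivity, and relative permittivity. Curves of that kind follow from the standard plane-wave attenuation constant for a lossy dielectric; the snippet below is a generic textbook sketch of that formula, not the authors' code, and the 10 GHz frequency and 0.05 S/m conductivity are illustrative assumptions.

import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU0 = 4e-7 * np.pi       # vacuum permeability, H/m

def attenuation_db_per_dm(freq_hz, eps_r, sigma):
    """Plane-wave attenuation constant in a lossy dielectric, in dB per decimeter."""
    omega = 2 * np.pi * freq_hz
    eps = eps_r * EPS0
    loss_tan = sigma / (omega * eps)
    alpha = omega * np.sqrt(MU0 * eps / 2.0 * (np.sqrt(1.0 + loss_tan**2) - 1.0))  # Np/m
    return alpha * 8.686 * 0.1  # Np/m -> dB/m -> dB/dm

# Attenuation at 10 GHz (X-band) for a moderately lossy cement-like medium
for eps_r in (3, 4, 9):
    print(eps_r, attenuation_db_per_dm(10e9, eps_r, sigma=0.05).round(2), "dB/dm")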
Show Figures

Figure 1. Attenuation in dB/dm vs. frequency and conductivity for different ε_r. Left column: f ∈ [100–1000] MHz; right column: f ∈ [1–20] GHz. Top row: ε_r = 3; middle row: ε_r = 4; bottom row: ε_r = 9.
Figure 2. Transmitted power density: TE (blue), TM (red); ε_r = 3 (stars), ε_r = 9 (circles).
Figure 3. Two-dimensional measurement configuration: the source and the observation points scan circles of radius R_s and R_o, respectively, at a fixed stand-off angle θ_0. The investigated domain is the rectangle W × P. In the zoomed-in square section on the left, blue circles represent the unknown bar positions.
Figure 4. Experimental setup. On the left, the CAD model is shown, depicting the supports for the steel bars and the rotating base. On the right, the supports printed in PLA material are shown. On the bottom, the instrumentation and one of the specimens employed for the experiment in the semi-anechoic environment are shown.
Figure 5. Geometry of the specimens used in both the experimental tests and the numerical simulations.
Figure 6. Experimental measurements of the moduli for specimens with and without bars. Specimens filled with sand (left); specimens filled with cement powder (right).
Figure 7. GprMax numerical simulations of the cement powder target. Scattered field moduli with and without bars.
Figure 8. GprMax numerical simulations of the cement powder target. Scattered field phases with and without bars.
Figure 9. Comparison between experimental measurements (a) and GprMax simulations (b) in the time domain for cement powder without bars. On the right, cuts of the results shown in (a,b) are reported: (c) time trace at 340° (measurement, red line) and 342° (simulation, blue line); (d) angular trace at 3.85 ns (measurement, red line) and 2.85 ns (simulation, blue line).
Figure 10. Detection and localization from GprMax-simulated scattered field data. The bars are numbered progressively from left to right and from bottom to top. On the left, candidate (blue circles), actual (red circles), and estimated (blue stars) positions are shown. On the right, the estimated |γ(n)| is plotted versus the position index.
21 pages, 6923 KiB  
Article
Instrument Overview and Radiometric Calibration Methodology of the Non-Scanning Radiometer for the Integrated Earth–Moon Radiation Observation System (IEMROS)
by Hanyuan Zhang, Xin Ye, Duo Wu, Yuwei Wang, Dongjun Yang, Yuchen Lin, Hang Dong, Jun Zhou and Wei Fang
Remote Sens. 2024, 16(11), 2036; https://doi.org/10.3390/rs16112036 - 6 Jun 2024
Cited by 1 | Viewed by 508
Abstract
The non-scanning radiometer with short-wavelength (SW: 0.2–5.0 μm) and total-wavelength (TW: 0.2–50.0 μm) channels is the primary payload of the Integrated Earth–Moon Radiation Observation System (IEMROS), which is designed to provide comprehensive Earth radiation measurements and lunar calibrations at the L1 Lagrange point of the Earth–Moon system from a global perspective. This manuscript introduces a radiometer preflight calibration methodology, which involves background removal and is validated using accurate and traceable reference sources. Simulated Earth view tests are performed to evaluate repeatability, linearity, and gain coefficients over the operating range. Both channels demonstrate repeatability uncertainties better than 0.34%, indicating consistent and reliable measuring performance. Comparative polynomial regression analysis confirms significant linear response characteristics with two-channel nonlinearity less than 0.20%. Gain coefficients are efficiently determined using a two-point calibration approach. Uncertainty analysis reveals an absolute radiometric calibration accuracy of 0.97% for the SW channel and 0.92% for the TW channel, underscoring the non-scanning radiometer’s capability to provide dependable global Earth radiation budget data crucial to environmental and climate studies. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
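The abstract states that gain coefficients are determined with a two-point calibration applied to background-subtracted counts. The sketch below shows the generic two-point form (a gain and offset fitted to two reference radiance levels); the function names and the numbers in the example are assumptions for illustration, not the instrument's actual processing chain.

def two_point_calibration(counts_low, counts_high, radiance_low, radiance_high):
    """Return (gain, offset) so that radiance = gain * counts + offset.

    counts_* are background-subtracted detector counts measured while viewing
    two reference sources of known radiance (e.g., an integrating sphere or a
    blackbody at two settings).
    """
    gain = (radiance_high - radiance_low) / (counts_high - counts_low)
    offset = radiance_low - gain * counts_low
    return gain, offset

def counts_to_radiance(raw_counts, background_counts, gain, offset):
    """Apply background removal and the linear calibration to a measurement."""
    return gain * (raw_counts - background_counts) + offset

# Illustrative numbers only
gain, offset = two_point_calibration(1200.0, 5200.0, 20.0, 95.0)
print(gain, offset, counts_to_radiance(4000.0, 150.0, gain, offset))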
Show Figures

Figure 1. Observation and calibration process of the IEMROS non-scanning radiometer. (a) Schematic diagram of the on-orbit observation mode; (b) block diagram of the calibration procedure.
Figure 2. IEMROS non-scanning radiometer. (a) External schematic diagram; (b) photograph of the prototype; (c) schematic of the primary internal structure portion of the prototype, excluding the filter wheel and the external mechanical structure of the entire instrument.
Figure 3. The detector module of the IEMROS non-scanning radiometer. (a) Schematic diagram of the detector measurement principle. (b) Diagram of the overall structure of the detector module.
Figure 4. Diagram showing the main radiation pathways reaching the primary detector and producing the raw signal.
Figure 5. Path of source radiation through the Cassegrain telescope to the primary detector.
Figure 6. (a) Spectral radiance output of the integrating sphere measured by the SVC spectroradiometer under different pulse control. (b) Mean background-subtracted counts of the SW channel under different radiances of the integrating sphere.
Figure 7. (a) Blackbody temperature measured at different settings. (b) Mean background-subtracted counts of the TW channel under different blackbody temperatures.
Figure 8. (a) Four types of fitting curves for the SW channel; (b) residual distribution plots of the four fitting results for the SW channel.
Figure 9. (a) Four types of fitting curves for the TW channel; (b) residual distribution plots of the four fitting results for the TW channel.
Figure 10. (a) Regression analysis result of the SW channel; (b) regression analysis result of the TW channel.
Figure 11. (a) Reflectance of the telescope mirror of the IEMROS non-scanning radiometer; (b) transmittance of the SW channel filter; (c) relative spectral response of the SW and TW channels (combining component-level reflectance and transmittance).
28 pages, 5098 KiB  
Article
A Robust High-Accuracy Star Map Matching Algorithm for Dense Star Scenes
by Quan Sun, Zhaodong Niu, Yabo Li and Zhuang Wang
Remote Sens. 2024, 16(11), 2035; https://doi.org/10.3390/rs16112035 - 6 Jun 2024
Viewed by 633
Abstract
The algorithm proposed in this paper aims at solving the problem of star map matching in high-limiting-magnitude astronomical images, which is inspired by geometric voting star identification techniques. It is a two-step star map matching algorithm relying only on angular features, and adopts a reasonable matching strategy to overcome the problem of poor real-time performance of the geometric voting algorithm when the number of stars is large. The algorithm focuses on application scenarios where there are a large number of dense stars (limiting magnitude greater than 13, average number of stars per square degree greater than 185) in the image, which is different from the sparse star identification problem of the star tracker, which is more challenging for the robustness and real-time performance of the algorithm. The proposed algorithm can be adapted to application scenarios such as unreliable brightness information, centroid positioning error, visual axis pointing deviation, and a large number of false stars, with high accuracy, robustness, and good real-time performance. Full article
(This article belongs to the Section Remote Sensing Image Processing)
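The matching relies only on inter-star angular distances, in the spirit of geometric voting. The sketch below shows the basic building blocks that such methods share: unit vectors from catalog coordinates, pairwise angular distances, and a vote accumulated for every catalog pair whose angle agrees with an image pair within a tolerance. It is a simplified illustration with assumed names and a synthetic example, not the two-step algorithm proposed in the paper.

import numpy as np

def radec_to_unit(ra_deg, dec_deg):
    """Convert right ascension/declination (degrees) to unit vectors."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def pairwise_angles(vectors):
    """Angular distance (radians) between every pair of unit vectors."""
    cosang = np.clip(vectors @ vectors.T, -1.0, 1.0)
    i, j = np.triu_indices(len(vectors), k=1)
    return i, j, np.arccos(cosang[i, j])

def vote(image_vecs, catalog_vecs, tol_rad=1e-4):
    """Add one vote to each catalog star of every pair whose angle matches an image pair angle."""
    votes = np.zeros(len(catalog_vecs), dtype=int)
    _, _, img_ang = pairwise_angles(image_vecs)
    ci, cj, cat_ang = pairwise_angles(catalog_vecs)
    for a in img_ang:
        hit = np.abs(cat_ang - a) < tol_rad
        np.add.at(votes, ci[hit], 1)
        np.add.at(votes, cj[hit], 1)
    return votes

cat = radec_to_unit(np.array([10.0, 10.2, 10.4, 50.0]), np.array([20.0, 20.1, 19.9, -5.0]))
img = cat[:3] + 1e-6  # pretend the first three catalog stars were imaged
print(vote(img, cat))  # the three imaged stars collect the votes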
Show Figures

Graphical abstract

Figure 1. Frequency distribution and cumulative counts of stars of different magnitudes. (The numbers in parentheses are the average number of stars per square degree.)
Figure 2. Correspondence between catalog and star image stars. The stars in (a) are located in the catalog, the star points in (b) are extracted from the star image; C1, C2, C3 match with S1, S2, S3, and no star points corresponding to C4 have been extracted from the star image; S4 is a false star due to noise.
Figure 3. Schematic diagram of the joint sorting operation. The star image angle vector is in front and the catalog angle vector is behind to form the joint angle vector. N_S and N_C are the numbers of elements of the star image angle vector and the catalog angle vector, respectively.
Figure 4. Schematic diagram of the maximum matching degree calculated from Ḟ_M. L denotes the number of subsegments in Ḟ_M whose elements are all true, and n_k represents the number of true elements in the kth subsegment.
Figure 5. Rough matching process flow chart.
Figure 6. Block diagram of the data synthesis process.
Figure 7. Typical synthetic star background images. (a) Synthetic image of a sparsely distributed star sky region (visual axis pointing: 0° declination, 0° right ascension). (b) Synthetic image of a densely distributed star sky region (visual axis pointing: 290° declination, 0° right ascension).
Figure 8. Matching rates for different visual axis pointing deviations.
Figure 9. Average matching accuracy for different visual axis pointing deviations (black line is the error introduced by the synthetic data).
Figure 10. Average running time for different visual axis pointing deviations.
Figure 11. Matching rates for different numbers of false stars.
Figure 12. Average matching accuracy for different numbers of false stars (black line is the error introduced by the synthetic data).
Figure 13. Average running time for different numbers of false stars.
Figure 14. Matching rate for different positioning deviations.
Figure 15. Average matching accuracy for different positioning deviations (black line is the error introduced by the synthetic data).
Figure 16. Average running time for different positioning deviations.
Figure 17. Matching rate for different magnitude deviations.
Figure 18. Average matching accuracy for different magnitude deviations (black line is the error introduced by the synthetic data).
Figure 19. Average running time for different magnitude deviations.
Figure 20. Matching rate for different tasks.
Figure 21. Average matching accuracy for different tasks.
Figure 22. Average running time for different tasks.
Figure 23. Matching error of RIAV.
Figure 24. Matching error of GMV.
Figure 25. Matching error of the proposed algorithm.
Figure 26. Comprehensive performance radar charts for the star map matching algorithms.
20 pages, 7017 KiB  
Article
Inter-Comparison of SST Products from iQuam, AMSR2/GCOM-W1, and MWRI/FY-3D
by Yili Zhao, Ping Liu and Wu Zhou
Remote Sens. 2024, 16(11), 2034; https://doi.org/10.3390/rs16112034 - 6 Jun 2024
Viewed by 689
Abstract
Evaluating sea surface temperature (SST) products is essential before their application in marine environmental monitoring and related studies. SSTs from the in situ SST Quality Monitor (iQuam) system, Advanced Microwave Scanning Radiometer 2 (AMSR2) aboard the Global Change Observation Mission 1st-Water, and the Microwave Radiation Imager (MWRI) aboard the Chinese Fengyun-3D satellite are intercompared utilizing extended triple collocation (ETC) and direct comparison methods. Additionally, error characteristic variations with respect to time, latitude, SST, sea surface wind speed, columnar water vapor, and columnar cloud liquid water are analyzed comprehensively. In contrast to the prevailing focus on SST validation accuracy, the random errors and the capability to detect SST variations are also evaluated in this study. The result of ETC analysis indicates that iQuam SST from ships exhibits the highest random error, above 0.83 °C, whereas tropical mooring SST displays the lowest random error, below 0.28 °C. SST measurements from drifters, tropical moorings, Argo floats, and high-resolution drifters, which possess random errors of less than 0.35 °C, are recommended for validating remotely sensed SST. The ability of iQuam, AMSR2, and MWRI to detect SST variations diminishes significantly in ocean areas between 0°N and 20°N latitude and latitudes greater than 50°N and 50°S. AMSR2 and iQuam demonstrate similar random errors and capabilities for detecting SST variations, whereas MWRI shows a high random error and weak capability. In comparison to iQuam SST, AMSR2 exhibits a root-mean-square error (RMSE) of about 0.51 °C with a bias of −0.05 °C, while MWRI shows an RMSE of about 1.26 °C with a bias of −0.14 °C. Full article
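Extended triple collocation estimates the random-error standard deviation of three collocated data sets of the same variable from their sample covariances, under the usual assumptions of linear calibration and mutually uncorrelated errors. The sketch below implements that standard estimator with a synthetic check; it is a generic illustration, not the authors' code.

import numpy as np

def etc_error_std(x, y, z):
    """Extended triple collocation estimate of the random-error standard
    deviation of three collocated measurement series of the same variable
    (errors assumed mutually uncorrelated and uncorrelated with the truth)."""
    q = np.cov(np.vstack([x, y, z]))
    var_ex = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    var_ey = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    var_ez = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    return np.sqrt(np.clip([var_ex, var_ey, var_ez], 0.0, None))

# Synthetic check: a common truth plus independent noise of known magnitude
rng = np.random.default_rng(1)
truth = 20 + 5 * rng.standard_normal(20000)
sst_a = truth + 0.3 * rng.standard_normal(truth.size)   # e.g., in situ
sst_b = truth + 0.5 * rng.standard_normal(truth.size)   # e.g., one radiometer
sst_c = truth + 1.2 * rng.standard_normal(truth.size)   # e.g., another radiometer
print(etc_error_std(sst_a, sst_b, sst_c).round(2))       # ~ [0.3, 0.5, 1.2]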
Show Figures

Figure 1. Spatial distribution of triple collocations.
Figure 2. Comparison between AMSR2 SST and iQuam SST in the daytime and nighttime. (a) Daytime. (b) Nighttime.
Figure 3. Comparison between MWRI SST and iQuam SST in the daytime and nighttime. (a) Daytime. (b) Nighttime.
Figure 4. Spatial distribution of AMSR2 SST bias referring to iQuam SST. (a) Daytime. (b) Nighttime.
Figure 5. Spatial distribution of the SST difference between MWRI and iQuam. (a) Daytime. (b) Nighttime.
Figure 6. Temporal variation in error characteristics. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
Figure 7. Latitudinal variation in error characteristics. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
Figure 8. Variation in error characteristics along SST. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
Figure 9. Variation in error characteristics along sea surface wind speed. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
Figure 10. Variation in error characteristics along columnar water vapor. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
Figure 11. Variation in error characteristics along columnar cloud liquid water. (a) ESD. (b) SNR_SUb. (c) Bias. (d) RMSE.
30 pages, 42903 KiB  
Article
Monitoring Chlorophyll-a Concentration Variation in Fish Ponds from 2013 to 2022 in the Guangdong-Hong Kong-Macao Greater Bay Area, China
by Zikang Li, Xiankun Yang, Tao Zhou, Shirong Cai, Wenxin Zhang, Keming Mao, Haidong Ou, Lishan Ran, Qianqian Yang and Yibo Wang
Remote Sens. 2024, 16(11), 2033; https://doi.org/10.3390/rs16112033 - 5 Jun 2024
Viewed by 704
Abstract
Aquaculture plays a vital role in global food production, with fish pond water quality directly impacting aquatic product quality. The Guangdong-Hong Kong-Macao Greater Bay Area (GBA) serves as a key producer of aquatic products in South China. Monitoring environmental changes in fish ponds serves as an indicator of their health. This study employed the extreme gradient boosting tree (BST) model of machine learning, utilizing Landsat imagery data, to assess Chlorophyll-a (Chl-a) concentration in GBA fish ponds from 2013 to 2022. The study also examined the corresponding spatiotemporal variations in Chl-a concentration. Key findings include: (1) clear seasonal fluctuations in Chl-a concentration, peaking in summer (56.7 μg·L−1) and reaching lows in winter (43.5 μg·L−1); (2) a slight overall increase in Chl-a concentration over the study period, notably in regions with rapid economic development, posing a heightened risk of eutrophication; (3) influence from both human activities and natural factors such as water cycle and climate, with water temperature notably impacting summer Chl-a levels; (4) elevated Chl-a levels in fish ponds compared to surrounding natural water bodies, primarily attributed to human activities, indicating an urgent need to revise breeding practices and address eutrophication. These findings offer a quantitative assessment of fish pond water quality and contribute to sustainable aquaculture management in the GBA. Full article
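The BST retrieval described here is a gradient-boosted tree regression from Landsat reflectances to in situ Chl-a. A minimal sketch of that type of workflow with the open-source xgboost package is shown below; the synthetic matchup table, band choice, and hyperparameters are placeholders, not the configuration used in the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

# Placeholder matchup table: reflectance in a few Landsat OLI bands versus
# in situ Chl-a (synthetic values for illustration only).
rng = np.random.default_rng(42)
n = 500
rrc = rng.uniform(0.0, 0.15, size=(n, 5))            # five assumed bands
chla = 80 * rrc[:, 3] / (rrc[:, 1] + 1e-3) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(rrc, chla, test_size=0.3,
                                                    random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05,
                     subsample=0.8, objective="reg:squarederror")
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("R2 =", round(r2_score(y_test, pred), 3),
      "RMSE =", round(mean_squared_error(y_test, pred) ** 0.5, 2), "ug/L")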
Show Figures

Figure 1. Geographical configuration of the GBA against the spatial distribution of fish ponds.
Figure 2. Image selection and image availability for this study: (a) the strip number of the images covering the GBA, (b) the number of available images from Landsat 8/9 for this study.
Figure 3. Gap-filling method used in this study; the gap highlighted by the red line is filled with images from 2020 (left) or 2022 (right).
Figure 4. Flowchart for image preprocessing and Chl-a concentration retrieval.
Figure 5. On-site photos of water sampling conducted in fish ponds. The photos were taken by the authors.
Figure 6. Locations of the monitoring sites in Foshan, Huizhou, Zhongshan, and the HK SAR. The monitoring sites in Zhongshan are far apart; because two adjacent sampling points are difficult to distinguish at this scale, they are further enlarged in the figure.
Figure 7. Fish ponds in high-resolution images (a,c) and fish ponds identified through SVM using Landsat imagery (b,d).
Figure 8. Major flowchart for a BST model, an optimized distributed gradient-boosting method that solves the objective function (Obj) with decision tree ensembles and additive training to obtain an optimal Chl-a retrieval from Landsat OLI Rrc. The initial input data for training is the in situ Chl-a concentration, and subsequent inputs are the residual errors fitted by the previous decision trees. An optimal model structure with the lowest Obj value is used to estimate Chl-a concentration by summing the scores in the corresponding leaves.
Figure 9. Spatial distribution of Chl-a concentration in fish ponds in the GBA in 2022 based on the prediction by the BST model. (a) Spatial distribution of Chl-a concentration in fish ponds across the GBA in 2022. (b) Partial enlargement of an area with dense fish ponds. (c) Partial enlargement of the area selected in (b).
Figure 10. Seasonal changes in Chl-a concentration in fish ponds in the GBA under the BST model.
Figure 11. Box plot of annual changes in Chl-a concentration in fish ponds from 2013 to 2022.
Figure 12. Annual average Chl-a concentration changes in fish ponds in the GBA from 2013 to 2022.
Figure 13. MK trend test results for Chl-a concentration changes in fish ponds in the GBA from 2013 to 2022 ((a): overall test result, (b): Zhuhai, (c): Zhongshan, (d): Zhaoqing, (e): Hong Kong, (f): Shenzhen, (g): Foshan, (h): Dongguan, (i): Guangzhou, (j): Huizhou, (k): Jiangmen).
Figure 14. Sen's test results for Chl-a concentration change in fish ponds in the GBA from 2013 to 2022 ((a): overall test result, (b): Zhuhai, (c): Zhongshan, (d): Zhaoqing, (e): Hong Kong, (f): Shenzhen, (g): Foshan, (h): Dongguan, (i): Guangzhou, (j): Huizhou, (k): Jiangmen).
Figure 15. Side-by-side comparison of seasonal changes in Chl-a concentration and water temperature in the GBA.
Figure 16. Correlation between Chl-a concentration and water temperature at different time scales.
Figure 17. Two representative riverine areas illustrating the discrepancy in Chl-a concentrations in different waters ((a): Zone 1, (b): Zone 2; the values are changes averaged over 2013–2022).
Figure 18. Chl-a concentration variations in riverine Zone 1 and Zone 2 in different seasons from 2013 to 2022.
Figure 19. Comparison of Chl-a concentration change between riverine Zones 1 and 2 and fish ponds (referring to fish ponds in the entire GBA) in different seasons in the past decade: (a) Spring; (b) Summer; (c) Autumn; (d) Winter; (e) Annual.
Figure 20. Correlation between measured and predicted Chl-a concentration in the GBA.
26 pages, 22249 KiB  
Article
Terrain Shadow Interference Reduction for Water Surface Extraction in the Hindu Kush Himalaya Using a Transformer-Based Network
by Xiangbing Yan and Jia Song
Remote Sens. 2024, 16(11), 2032; https://doi.org/10.3390/rs16112032 - 5 Jun 2024
Viewed by 548
Abstract
Water is the basis for human survival and growth, and it holds great importance for ecological and environmental protection. The Hindu Kush Himalaya (HKH) is known as the “Water Tower of Asia”, where water influences changes in the global water cycle and ecosystem. It is thus very important to efficiently measure the status of water in this region and to monitor its changes; with the development of satellite-borne sensors, water surface extraction based on remote sensing images has become an important method through which to do so, and one of the most advanced and accurate methods for water surface extraction involves the use of deep learning networks. We designed a network based on the state-of-the-art Vision Transformer to automatically extract the water surface in the HKH region; however, in this region, terrain shadows are often misclassified as water surfaces during extraction due to their spectral similarity. Therefore, we adjusted the training dataset in different ways to improve the accuracy of water surface extraction and explored whether these methods help to reduce the interference of terrain shadows. Our experimental results show that, based on the designed network, adding terrain shadow samples can significantly enhance the accuracy of water surface extraction in high mountainous areas, such as the HKH region, while adding terrain data does not reduce the interference from terrain shadows. We obtained the water surface extraction results in the HKH region in 2021, with the network and training datasets containing both water surface and terrain shadows. By comparing these results with the data products of Global Surface Water, it was shown that our water surface extraction results are highly accurate and the extracted water surface boundaries are finer, which strongly confirmed the applicability and advantages of the proposed water surface extraction approach in a wide range of complex surface environments. Full article
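At pixel level, the comparison against the Global Surface Water product reduces to comparing two binary water masks on a common grid. The sketch below computes the usual agreement metrics for such a comparison; the synthetic masks and the metric selection are illustrative assumptions, not the evaluation protocol of the paper.

import numpy as np

def mask_agreement(pred, ref):
    """Agreement metrics between a predicted and a reference binary water mask
    (boolean arrays on the same grid)."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    return {
        "overall_accuracy": (tp + tn) / pred.size,
        "iou_water": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

rng = np.random.default_rng(7)
reference = rng.random((512, 512)) < 0.1                 # e.g., a reference water mask
predicted = reference ^ (rng.random((512, 512)) < 0.01)  # flip 1% of pixels
print({k: round(v, 3) for k, v in mask_agreement(predicted, reference).items()})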
Show Figures

Figure 1. The location of the Hindu Kush Himalaya.
Figure 2. The terrain diagram of the Hindu Kush Himalaya.
Figure 3. The structure of the water surface extraction network.
Figure 4. The preparation of the training datasets.
Figure 5. Misclassification of terrain shadows in unvegetated areas (areas circled in red represent terrain shadows misclassified as water surfaces).
Figure 6. Misclassification of terrain shadows in vegetated areas (areas circled in red represent terrain shadows misclassified as water surfaces).
Figure 7. Water surface extraction results (areas circled in yellow represent missed water surface pixels, and areas circled in red represent non-water objects misclassified as water surfaces).
Figure 8. Misclassification of non-water objects (areas circled in red represent non-water objects misclassified as water surfaces).
Figure 9. The distribution of Sentinel-2 imaging tiles in the HKH region.
Figure 10. Water surface extraction results for the HKH region in 2021.
Figure 11. Our water surface extraction results and the GSW seasonality product for the HKH region in 2021.
Figure 12. Our water surface extraction results and the GSW seasonality product for large lakes.
Figure 13. Our water surface extraction results and the GSW seasonality product for medium lakes.
Figure 14. Our water surface extraction results and the GSW seasonality product for small lakes.
Figure 15. Our water surface extraction results and the GSW seasonality product for rivers.
20 pages, 7213 KiB  
Article
Improvement of High-Resolution Daytime Fog Detection Algorithm Using GEO-KOMPSAT-2A/Advanced Meteorological Imager Data with Optimization of Background Field and Threshold Values
by Ji-Hye Han, Myoung-Seok Suh, Ha-Yeong Yu and So-Hyeong Kim
Remote Sens. 2024, 16(11), 2031; https://doi.org/10.3390/rs16112031 - 5 Jun 2024
Cited by 1 | Viewed by 571
Abstract
This study aimed to improve the daytime fog detection algorithm GK2A_HR_FDA using the GEO-KOMPSAT-2A (GK2A) satellite by increasing the resolution (2 km to 500 m), improving predicted surface temperature by the numerical model, and optimizing some threshold values. GK2A_HR_FDA uses numerical model prediction temperature to distinguish between fog and low clouds and evaluates the fog detection level using ground observation visibility data. To correct the errors of the numerical model prediction temperature, a dynamic bias correction (DBC) technique was developed that reflects the geographic location, time, and altitude in real time. As the numerical model prediction temperature was significantly improved after DBC application, the fog detection level improved (FAR: −0.02–−0.06; bias: −0.07–−0.23) regardless of the training and validation cases and validation method. In most cases, the fog detection level was improved due to DBC and threshold adjustment. Still, the detection level was abnormally low in some cases due to background reflectance problems caused by cloud shadow effects and navigation errors. As a result of removing navigation errors and cloud shadow effects, the fog detection level was greatly improved. Therefore, it is necessary to improve navigation accuracy and develop removal techniques for cloud shadows to improve fog detection levels. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
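The dynamic bias correction described in the abstract amounts to estimating, from recent matchups, the bias of the model surface temperature as a function of location class and time of day, and subtracting it in near real time. The sketch below shows that idea in its simplest rolling-mean form; the grouping keys, window length, and column names are assumptions for illustration, not the operational scheme.

import pandas as pd

def dynamic_bias_correction(df, window=10):
    """Correct NWP surface temperature with a rolling mean bias computed
    separately for each (surface_type, time_of_day) group.

    Expected columns: 'date', 'surface_type' (land/sea/coast), 'tod' (time slot),
                      't_model', 't_obs' (temperatures in K).
    """
    df = df.sort_values("date").copy()
    df["bias"] = df["t_model"] - df["t_obs"]
    # Rolling mean of the most recent biases in each group; shift(1) excludes
    # the current sample so the correction only uses past matchups.
    df["bias_est"] = (df.groupby(["surface_type", "tod"])["bias"]
                        .transform(lambda s: s.shift(1).rolling(window, min_periods=3).mean()))
    df["t_corrected"] = df["t_model"] - df["bias_est"].fillna(0.0)
    return df

# Tiny synthetic example: a model that runs ~1.5 K warm over land at one time slot
obs = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=30, freq="D"),
    "surface_type": "land", "tod": "0800",
    "t_model": 280.0 + pd.Series(range(30)) * 0.1 + 1.5,
    "t_obs": 280.0 + pd.Series(range(30)) * 0.1,
})
print(dynamic_bias_correction(obs)[["t_model", "t_obs", "t_corrected"]].tail(3))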
Show Figures

Graphical abstract

Figure 1. Spatial distribution of the ground observation station visibility meters (AWS/ASOS), with the 500 m resolution land–sea mask and 7 airports (Incheon, Gimpo, Jeju, Ulsan, Muan, Yeosu, and Yangyang). The red circles symbolize the AWS/ASOS stations and the purple triangles symbolize the airports.
Figure 2. Flow chart of the GK2A high-resolution fog detection algorithm operating in the daytime.
Figure 3. Variation in POD and FAR according to the fog detection steps for the selected 20 fog training cases. Red and blue bars stand for POD and FAR, respectively. Red and blue circles indicate exceptionally low POD and FAR.
Figure 4. Distribution of the average ground temperature deviation (K) by interpolation method of the numerical model data, geographic location (land/sea/coast), and time (10 min intervals). (a,d,f) Biases of simple distance interpolation for land, sea, and coast, respectively. (b,g) Bias after considering the topography effect. (c,e,h) Biases after performing dynamic bias correction. T# on the vertical axis represents the fog cases presented in Table 2.
Figure 5. Sample images of fog detection results (a,c,e) and ground fog data (b,d,f) with visible images of the 0.64 µm channel (red and green circles represent foggy and non-foggy areas classified by visibility meter data, respectively). (a,b) Strong fog case at 08:00 KST on 30 September 2019. (c,d) Fog with low cloud at 09:00 KST on 13 February 2020. (e,f) Weakly occurring fog case at 08:00 KST on 3 May 2021.
Figure 6. Performance diagram summarizing the Success Ratio (=1 − FAR), POD, bias, and CSI for the training (a,b) and validation (c,d) fog cases. The left (a,c) and right (b,d) diagrams show the 1:9 and 1:1 validation results, respectively. Dashed and solid lines represent the bias scores and CSI, respectively. The sampling uncertainty is given by the crosshairs. The red line extending from top left to bottom right marks where POD and FAR have the same value.
Figure 7. Comparison of the fog detection level (POD, FAR) for the selected 15 fog cases by detection method (GK2A_FDA, GK2A_HR_FDA) and spatial resolution (2 km, 500 m). T# and V# on the horizontal axis represent the fog cases used simultaneously in GK2A_FDA and GK2A_HR_FDA among the cases presented in Table 2.
Figure 8. Samples of an original image with a navigation error (a) and the corrected image (b) at 09:00 KST on 6 November 2019. The sea is masked in blue.
Figure 9. Examples of minimum-value-composited reflectance contaminated by the effects of cloud shadows (14 March 2021, 08:00 KST). (a) Reflectance map produced using minimum value composition (mVC) over 30 days and (b) reflectance map generated by applying mVC and post-processing to 20-day data. (c,d) Images where fog was detected by applying the reflectance maps of (a) and (b), respectively. (e) Ground fog data with visible images of the 0.64 µm channel (red and green circles represent foggy and non-foggy areas classified by visibility meter data, respectively).
19 pages, 6940 KiB  
Article
Evaluation of Two Satellite Surface Solar Radiation Products in the Urban Region in Beijing, China
by Lin Xu and Yuna Mao
Remote Sens. 2024, 16(11), 2030; https://doi.org/10.3390/rs16112030 - 5 Jun 2024
Viewed by 558
Abstract
Surface solar radiation, as a primary energy source, plays a pivotal role in governing land–atmosphere interactions, thereby influencing radiative, hydrological, and land surface dynamics. Ground-based instrumentation and satellite-based observations represent two fundamental methodologies for acquiring solar radiation information. While ground-based measurements are often limited in availability, high-temporal- and spatial-resolution, gridded satellite-retrieved solar radiation products have been extensively utilized in solar radiation-related studies, despite their inherent uncertainties in accuracy. In this study, we conducted an evaluation of the accuracy of two high-resolution satellite products, namely Himawari-8 (H8) and Moderate Resolution Imaging Spectroradiometer (MODIS), utilizing data from a newly established solar radiation observation system at the Beijing Normal University (BNU) station in Beijing since 2017. The newly acquired measurements facilitated the generation of a firsthand solar radiation dataset comprising three components: Global Horizontal Irradiance (GHI), Direct Normal Irradiance (DNI), and Diffuse Horizontal Irradiance (DHI). Rigorous quality control procedures were applied to the raw minute-level observation data, including tests for missing data, the determination of possible physical limits, the identification of solar tracker malfunctions, and comparison tests (GHI should be equivalent to the sum of DHI and the vertical component of the DNI). Subsequently, accurate minute-level solar radiation observations were obtained spanning from 1 January 2020 to 22 March 2022. The evaluation of H8 and MODIS satellite products against ground-based GHI observations revealed strong correlations with R-squared (R2) values of 0.89 and 0.81, respectively. However, both satellite products exhibited a tendency to overestimate solar radiation, with H8 overestimating by approximately 21.05% and MODIS products by 7.11%. Additionally, solar zenith angles emerged as a factor influencing the accuracy of satellite products. This dataset serves as crucial support for investigations of surface solar radiation variation mechanisms, future energy utilization prospects, environmental conservation efforts, and related studies in urban areas such as Beijing. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
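One of the quality-control steps compares GHI against the sum of DHI and the vertical component of DNI. The sketch below applies that closure test, together with a crude physical-limit check, to minute data; the tolerances and limits are illustrative assumptions, not the thresholds used by the authors.

import numpy as np

def qc_closure(ghi, dni, dhi, sza_deg, rel_tol=0.08, abs_tol=10.0):
    """Flag minutes where GHI disagrees with DHI + DNI*cos(SZA).

    All irradiances in W m-2; sza_deg is the solar zenith angle in degrees.
    Returns a boolean array: True means the sample passes the comparison test.
    """
    mu = np.cos(np.radians(sza_deg))
    closure = dhi + dni * np.clip(mu, 0.0, None)
    return np.abs(ghi - closure) <= np.maximum(rel_tol * closure, abs_tol)

def qc_physical_limits(ghi, upper=1500.0, lower=-4.0):
    """Reject values outside plausible physical limits."""
    return (ghi >= lower) & (ghi <= upper)

ghi = np.array([650.0, 900.0, 30.0])
dni = np.array([800.0, 20.0, 0.0])
dhi = np.array([120.0, 150.0, 28.0])
sza = np.array([48.0, 30.0, 85.0])
print(qc_closure(ghi, dni, dhi, sza) & qc_physical_limits(ghi))  # [True False True]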
Show Figures

Graphical abstract

Figure 1. (a) Location of the newly established BNU site; (b) the instruments.
Figure 2. Scatter plots of minute-level GHI and the sum of DNI multiplied by the cosine of the solar zenith angle (μ) plus DHI (DNI·μ + DHI), in both raw and quality-controlled (QC) forms, from 2020 to 2022.
Figure 3. Hourly observations included in each date with available data from 2020 to 2022.
Figure 4. Time series of daily solar radiation components from 1 January 2020 to 22 March 2022. (a) GHI, (b) DNI, (c) DHI.
Figure 5. Scatter plot of hourly H8, MCD18, and observational GHI data. The red solid line is the 1:1 line.
Figure 6. Scatter plot of daily H8, MCD18, and observational GHI data. The red solid line is the 1:1 line.
Figure 7. Hourly GHI distribution of the observational data, H8, and MCD18 for each month.
Figure 8. Boxplots of the difference between observed GHI and MCD18 and H8 under different SZA intervals. In each boxplot, the bottom of the lower tail represents the minimum value and the top of the upper tail the maximum; the lower line of the box represents the 25th percentile, the upper line the 75th percentile, and the middle line the median.
Figure 9. Boxplots of the relative difference between observed GHI and MCD18 and H8 under different SZA intervals. The relative difference (%) was calculated by dividing the absolute difference in Figure 8 by the mean observed GHI for each SZA category. In each boxplot, the bottom of the lower tail represents the minimum value and the top of the upper tail the maximum; the lower line of the box represents the 25th percentile, the upper line the 75th percentile, and the middle line the median.
24 pages, 9011 KiB  
Article
A New Dual-Branch Embedded Multivariate Attention Network for Hyperspectral Remote Sensing Classification
by Yuyi Chen, Xiaopeng Wang, Jiahua Zhang, Xiaodi Shang, Yabin Hu, Shichao Zhang and Jiajie Wang
Remote Sens. 2024, 16(11), 2029; https://doi.org/10.3390/rs16112029 - 5 Jun 2024
Viewed by 567
Abstract
With the continuous maturity of hyperspectral remote sensing imaging technology, it has been widely adopted by scholars to improve the performance of feature classification. However, due to the challenges in acquiring hyperspectral images and producing training samples, the limited training sample is a common problem that researchers often face. Furthermore, efficient algorithms are necessary to excavate the spatial and spectral information from these images, and then, make full use of this information with limited training samples. To solve this problem, a novel two-branch deep learning network model is proposed for extracting hyperspectral remote sensing features in this paper. In this model, one branch focuses on extracting spectral features using multi-scale convolution and a normalization-based attention module, while the other branch captures spatial features through small-scale dilation convolution and Euclidean Similarity Attention. Subsequently, pooling and layering techniques are employed to further extract abstract features after feature fusion. In the experiments conducted on two public datasets, namely, IP and UP, as well as our own labeled dataset, namely, YRE, the proposed DMAN achieves the best classification results, with overall accuracies of 96.74%, 97.4%, and 98.08%, respectively. Compared to the sub-optimal state-of-the-art methods, the overall accuracies are improved by 1.05, 0.42, and 0.51 percentage points, respectively. The advantage of this network structure is particularly evident in unbalanced sample environments. Additionally, we introduce a new strategy based on the RPNet, which utilizes a small number of principal components for feature classification after dimensionality reduction. The results demonstrate its effectiveness in uncovering compressed feature information, with an overall accuracy improvement of 0.68 percentage points. Consequently, our model helps mitigate the impact of data scarcity on model performance, thereby contributing positively to the advancement of hyperspectral remote sensing technology in practical applications. Full article
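The RPNet-based strategy mentioned at the end of the abstract classifies features built from a small number of principal components after dimensionality reduction. A minimal sketch of that pre-processing (PCA to a few components, then square patches cut around labelled pixels) is shown below; the component count, patch size, and random cube are illustrative assumptions, not the paper's pipeline.

import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(cube, n_components=4):
    """Reduce a hyperspectral cube (H, W, B) to its first principal components."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    pcs = PCA(n_components=n_components).fit_transform(flat)
    return pcs.reshape(h, w, n_components)

def extract_patches(pcs, rows, cols, patch=9):
    """Cut (patch x patch x n_components) windows centered on labelled pixels."""
    r = patch // 2
    padded = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")
    return np.stack([padded[i:i + patch, j:j + patch, :] for i, j in zip(rows, cols)])

cube = np.random.rand(145, 145, 200)          # an IP-sized placeholder scene
pcs = pca_reduce(cube, n_components=4)
samples = extract_patches(pcs, rows=[10, 70], cols=[20, 100], patch=9)
print(pcs.shape, samples.shape)               # (145, 145, 4) (2, 9, 9, 4)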
Show Figures

Graphical abstract

Figure 1. The overall architecture of the proposed DMAN model, where PC stands for principal component, PCA stands for principal component analysis, GAP stands for global average pooling, and HRPM stands for hybrid random patch model.
Figure 2. The proposed HRPM structure.
Figure 3. The proposed spectral branch structure.
Figure 4. The proposed NAM structure.
Figure 5. The proposed spatial branch structure.
Figure 6. The proposed ESA structure.
Figure 7. The proposed PPA structure.
Figure 8. Comparison between the pre- and post-fusion images of the YRE data fusion.
Figure 9. The classification results of the DMAN with different training samples.
Figure 10. The classification results of the DMAN with different patch sizes.
Figure 11. Classification maps of different methods on the IP dataset. (a) False-color image; (b) ground truth map; (c) FDSSC; (d) SSRN; (e) RSSAN; (f) GSC-ViT; (g) DFFN; (h) HybridSN; (i) SSFTT; (j) AMS-M2ESL; (k) DMAN; and (l) color bar.
Figure 12. Classification maps of different methods on the UP dataset. (a) False-color image; (b) ground truth map; (c) FDSSC; (d) SSRN; (e) RSSAN; (f) GSC-ViT; (g) DFFN; (h) HybridSN; (i) SSFTT; (j) AMS-M2ESL; (k) DMAN; and (l) color bar.
Figure 13. Classification maps of different methods on the YRE dataset. (a) False-color image; (b) ground truth map; (c) FDSSC; (d) SSRN; (e) RSSAN; (f) GSC-ViT; (g) DFFN; (h) HybridSN; (i) SSFTT; (j) AMS-M2ESL; (k) DMAN; and (l) color bar.
Figure 14. The OAs of the various combinations on the three datasets.
Figure 15. (a) Effect of Gaussian noise on the model. (b) Effect of salt-and-pepper noise on the model.
16 pages, 2965 KiB  
Technical Note
Evaluation of IMERG Data over Open Ocean Using Observations of Tropical Cyclones
by Stephen L. Durden
Remote Sens. 2024, 16(11), 2028; https://doi.org/10.3390/rs16112028 - 5 Jun 2024
Viewed by 509
Abstract
The IMERG data product is an optimal combination of precipitation estimates from the Global Precipitation Mission (GPM), making use of a variety of data types, primarily data from various spaceborne passive instruments. Previous versions of the IMERG product have been extensively validated by comparisons with gauge data and ground-based radars over land. However, IMERG rain rates, especially sub-daily, over open ocean are less validated due to the scarcity of comparison data, particularly with the relatively new Version 07. To address this issue, we consider IMERG V07 30-min data acquired in tropical cyclones over open ocean. We perform two tasks. The first is a straightforward comparison between IMERG precipitation rates and those retrieved from the GPM Dual-frequency Precipitation Radar (DPR). From this, we find that IMERG and DPR are close at low rain rates, while, at high rain rates, IMERG tends to be lower than DPR. The second task is the assessment of IMERG’s ability to represent or detect structures commonly seen in tropical cyclones, including the annular structure and concentric eyewalls. For this, we operate on IMERG data with many machine learning algorithms and are able to achieve a 96% classification accuracy, indicating that IMERG does indeed contain TC structural information. Full article
(This article belongs to the Special Issue Remote Sensing and Parameterization of Air-Sea Interaction)
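The radial structure used for the classification task is summarized by azimuthally averaged rain rates (AARR) about the storm center, as in the figures below. The sketch here computes such a radial profile from a gridded rain-rate field; the grid spacing, bin width, and synthetic field are assumptions, not the paper's exact procedure.

import numpy as np

def azimuthal_mean_profile(rain, center_row, center_col, pixel_km=10.0,
                           bin_km=10.0, max_km=300.0):
    """Azimuthally averaged rain rate versus radius from the TC center.

    rain : 2-D array of rain rates (mm/h) on a regular grid.
    Returns (bin_centers_km, mean_rain_per_bin).
    """
    rows, cols = np.indices(rain.shape)
    dist = np.hypot(rows - center_row, cols - center_col) * pixel_km
    edges = np.arange(0.0, max_km + bin_km, bin_km)
    idx = np.digitize(dist, edges) - 1
    prof = np.array([rain[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(len(edges) - 1)])
    return 0.5 * (edges[:-1] + edges[1:]), prof

rain = np.random.gamma(1.5, 2.0, size=(61, 61))   # synthetic rain-rate field
radius, aarr = azimuthal_mean_profile(rain, 30, 30)
print(radius[:5], np.round(aarr[:5], 2))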
Show Figures

Figure 1. Flow chart of both analysis tasks completed in this paper. The DPR comparison is the upper part of the chart, while the process of the ML classification of IMERG only is illustrated in the lower part.
Figure 2. IMERG and DPR data for TC Surigae 2021 02W 04181507 (format mmddhhmm). (a) IMERG surface rain rate (mm/h). (b) DPR rain rate. (c) IMERG minus the DPR rain rate. (d) Plots of AARR using IMERG and DPR data. Time is the UTC time of the DPR overpass.
Figure 3. Same as Figure 2 but for TC Bolaven 2023 15W 10112342. (a) IMERG surface rain rate (mm/h). (b) DPR rain rate. (c) IMERG minus the DPR rain rate. (d) Plots of AARR using IMERG and DPR data. Time is the UTC time of the DPR overpass.
Figure 4. Same as Figure 2 but for TC Dorian 2019 05L 08302132. (a) IMERG surface rain rate (mm/h). (b) DPR rain rate. (c) IMERG minus the DPR rain rate. (d) Plots of AARR using IMERG and DPR data. Time is the UTC time of the DPR overpass.
Figure 5. Example images of the IMERG rainfall rate for the four classes: (a) Olivia 2018 17E, class 0, 0600 UTC on Sep 6; (b) Isabel 2003 13L, class 1, 1800 UTC on Sep 11; (c) Larry 2021 12L, class 2, 0000 UTC on Sep 6; and (d) Barbara 2019 02E, class 3, 0000 UTC on Jul 3.
Figure 6. Example radial plots for 4 TCs in each class. The date and time are given in mmddhhmm format after each name and number. (a) Class 0: Olivia 2018 17E 09060600, Longwang 2005 19W 09291200, Hector 2018 10E 08070530, Champi 2018 25W 07090600. (b) Class 1: Marie 2014 13E 08251800, Isabel 2003 13L 09111800, Frances 2004 06L 09010700, Maria 2018 10W 07090600. (c) Class 2: Linda 2021 12E 08150800, Larry 2021 12L 09060000, Noru 2017 07W 08021800, Surigae 2021 02W 04201800. (d) Class 3: Barbara 2019 02E 07030000, Irma 2017 11L 09050000, Goni 2020 22W 10311200, Chanthu 2021 19W 09071200.
Figure 7. Classification results for the KNN classifier with K = 1, using 6-h averages of AARRs. Accuracies for correct classification are highlighted in blue.
15 pages, 15001 KiB  
Article
Attributing the Decline of Evapotranspiration over the Asian Monsoon Region during the Period 1950–2014 in CMIP6 Models
by Xiaowei Zhu, Zhiyong Kong, Jian Cao, Ruina Gao and Na Gao
Remote Sens. 2024, 16(11), 2027; https://doi.org/10.3390/rs16112027 - 5 Jun 2024
Viewed by 595
Abstract
Evapotranspiration (ET) accounts for over half of the moisture source of Asian monsoon rainfall, which has been significantly altered by anthropogenic forcings. However, how individual anthropogenic forcing affects the ET over monsoonal Asia is still elusive. In this study, we found a significant decline in ET over the Asian monsoon region during the period of 1950–2014 in Coupled Model Intercomparison Project Phase 6 (CMIP6) models. The attribution analysis suggests that anthropogenic aerosol forcing is the primary cause of the weakening in ET in the historical simulation, while it is only partially compensated by the strengthening effect from GHGs, although GHGs are the dominant forcings for surface temperature increase. The physical mechanisms responsible for ET changes are different between aerosol and GHG forcings. The increase in aerosol emissions enhances the reflection and scattering of the downward solar radiation, which decreases the net surface irradiance for ET. GHGs, on the one hand, increase the moisture capability of the atmosphere and, thus, the ensuing rainfall; on the other hand, they increase the ascending motion over the Indian subcontinent, leading to an increase in rainfall. Both processes are beneficial for an ET increase. The results from this study suggest that future changes in the land–water cycle may mainly rely on the aerosol emission policy rather than the carbon reduction policy. Full article
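The attribution of the ET trend to individual drivers is, at first order, a decomposition: the sensitivity of ET to each driver multiplied by that driver's own trend. The sketch below illustrates that generic decomposition for a single grid point using least-squares trends; the sensitivity values and synthetic driver series are placeholders, not CMIP6 diagnostics.

import numpy as np

def linear_trend(series, years):
    """Least-squares linear trend of `series`, expressed per century."""
    slope = np.polyfit(years, series, 1)[0]
    return slope * 100.0  # per year -> per century

def attribute_et_trend(drivers, sensitivities, years):
    """First-order contribution of each driver to the ET trend:
    contribution_i = (dET/dx_i) * trend(x_i)."""
    return {name: sens * linear_trend(vals, years)
            for (name, vals), sens in zip(drivers.items(), sensitivities)}

years = np.arange(1950, 2015)
drivers = {
    "net_radiation": 12.0 - 0.005 * (years - 1950) + np.random.normal(0, 0.2, years.size),
    "precipitation": 5.0 - 0.002 * (years - 1950) + np.random.normal(0, 0.3, years.size),
    "temperature_2m": 300.0 + 0.01 * (years - 1950) + np.random.normal(0, 0.1, years.size),
}
sensitivities = [0.15, 0.30, 0.02]   # assumed dET/dx values (mm d-1 per unit of each driver)
print({k: round(v, 3) for k, v in attribute_et_trend(drivers, sensitivities, years).items()})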
Show Figures

Figure 1

Figure 1
<p>Climatological ET (mm d<sup>−1</sup>) from (<b>a</b>) observational data and (<b>c</b>) five CMIP6 model means. And the ratio of ET to precipitation (%) from (<b>b</b>) observation and (<b>d</b>) CMIP6 models. The observed precipitation is from the CRU dataset. The climatological ET and precipitation is defined as the mean of the JJA period of 1985–2014. The red dashed lines present the Asian monsoon region.</p>
Figure 2">
Figure 2
Spatial distribution of the trends (mm d−1 cent−1) in ET from the (a) historical experiment, (b) hist-aer experiment, and (c) hist-GHG experiment. (d) The observed and simulated temporal evolution of areal-averaged ET (mm d−1) over the Asian monsoon region. Dots represent linear trends that are significant at the 95% confidence level. The red dashed lines denote the Asian monsoon region.
Figure 3
Trends in the (a) ET, (b) precipitation (mm d−1 cent−1), (c) net surface radiation (MJ m−2 d−1 cent−1), (d) 2 m surface temperature (K cent−1), (e) vapor pressure deficit (kPa cent−1), and (f) 2 m wind speed (m s−1 cent−1) from the historical experiment. The red dashed lines denote the Asian monsoon region.
Figure 4
Spatial distribution of the (a) ET trend (mm d−1 cent−1) and its contributions from (b) precipitation (mm d−1 cent−1), (c) net surface radiation (mm d−1 cent−1), (d) 2 m surface temperature (mm d−1 cent−1), (e) vapor pressure deficit (mm d−1 cent−1), and (f) 2 m wind speed (mm d−1 cent−1) from the historical experiment. The red dashed lines denote the Asian monsoon region.
Figure 5
Relative contribution of individual factors to ET trends (mm d−1 cent−1) in the historical (red bars), hist-aer (blue bars), and hist-GHG (green bars) experiments.
Figure 6
Trends in the (a) ET, (b) precipitation (mm d−1 cent−1), (c) net surface radiation (MJ m−2 d−1 cent−1), (d) 2 m surface temperature (K cent−1), (e) vapor pressure deficit (VPD, kPa cent−1), and (f) 2 m wind speed (U, m s−1 cent−1) due to anthropogenic aerosols (from the hist-aer experiment). The red dashed lines denote the Asian monsoon region.
Figure 7
Spatial distribution of the (a) ET trend (mm d−1 cent−1) and its contributions from (b) precipitation (mm d−1 cent−1), (c) net surface radiation (mm d−1 cent−1), (d) 2 m surface temperature (mm d−1 cent−1), (e) vapor pressure deficit (mm d−1 cent−1), and (f) 2 m wind speed (mm d−1 cent−1) due to anthropogenic aerosols (from the hist-aer experiment). The red dashed lines denote the Asian monsoon region.
Figure 8
Trends in the (a) net surface longwave radiation (W m−2 cent−1), (b) net surface shortwave radiation (W m−2 cent−1), (c) downward shortwave radiation (W m−2 cent−1), and (d) aerosol optical depth (cent−1) due to anthropogenic aerosols (from the hist-aer experiment). The red dashed lines denote the Asian monsoon region.
Figure 9
Trends in the (a) ET, (b) precipitation (mm d−1 cent−1), (c) net surface radiation (MJ m−2 d−1 cent−1), (d) 2 m surface temperature (K cent−1), (e) vapor pressure deficit (VPD, kPa cent−1), and (f) 2 m wind speed (U, m s−1 cent−1) due to GHGs (from the hist-GHG experiment). The red dashed lines denote the Asian monsoon region.
Figure 10
Spatial distribution of the (a) ET trend (mm d−1 cent−1) and its contributions from (b) precipitation (mm d−1 cent−1), (c) net surface radiation (mm d−1 cent−1), (d) 2 m surface temperature (mm d−1 cent−1), (e) vapor pressure deficit (mm d−1 cent−1), and (f) 2 m wind speed (mm d−1 cent−1) from the hist-GHG experiment. The red dashed lines denote the Asian monsoon region.
Figure 11
Trends in the (a) vertically integrated atmospheric moisture (k kg−1 cent−1) and (b) vertical pressure velocity at 500 hPa (Pa s−1 cent−1) from the hist-GHG experiment. The red dashed lines denote the Asian monsoon region.
Full article
26 pages, 28936 KiB  
Article
L1RR: Model Pruning Using Dynamic and Self-Adaptive Sparsity for Remote-Sensing Target Detection to Prevent Target Feature Loss
by Qiong Ran, Mengwei Li, Boya Zhao, Zhipeng He and Yuanfeng Wu
Remote Sens. 2024, 16(11), 2026; https://doi.org/10.3390/rs16112026 - 5 Jun 2024
Viewed by 581
Abstract
Limited resources for edge computing platforms in airborne and spaceborne imaging payloads prevent using complex image processing models. Model pruning can eliminate redundant parameters and reduce the computational load, enhancing processing efficiency on edge computing platforms. Current challenges in model pruning for remote-sensing [...] Read more.
Limited resources for edge computing platforms in airborne and spaceborne imaging payloads prevent using complex image processing models. Model pruning can eliminate redundant parameters and reduce the computational load, enhancing processing efficiency on edge computing platforms. Current challenges in model pruning for remote-sensing object detection include the risk of losing target features, particularly during sparse training and pruning, and difficulties in maintaining channel correspondence for residual structures, often resulting in retaining redundant features that compromise the balance between model size and accuracy. To address these challenges, we propose the L1 reweighted regularization (L1RR) pruning method. Leveraging dynamic and self-adaptive sparse modules, we optimize L1 sparsity regularization, preserving the model’s target feature information using a feature attention loss mechanism to determine appropriate pruning ratios. Additionally, we propose a residual reconstruction procedure, which removes redundant feature channels from residual structures while maintaining the residual inference structure through output channel recombination and input channel recombination, achieving a balance between model size and accuracy. Validation on two remote-sensing datasets demonstrates significant reductions in parameters and floating point operations (FLOPs) of 77.54% and 65%, respectively, and a 48.5% increase in the inference speed on the Jetson TX2 platform. This framework optimally maintains target features and effectively distinguishes feature channel importance compared to other methods, significantly enhancing feature channel robustness for difficult targets and expanding pruning applicability to less difficult targets. Full article
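As a rough illustration of sparsity-regularised channel selection of the kind described above, the sketch below adds a reweighted-L1 penalty on BatchNorm scale factors and ranks channels by their magnitude. It is a generic network-slimming-style example under assumed hyperparameters, not the released L1RR implementation:

# Rough sketch (assumed formulation, not the L1RR code): add a reweighted-L1 penalty on
# BatchNorm scale factors during training, then rank channels by |gamma| for pruning.
import torch
import torch.nn as nn

def reweighted_l1_penalty(model, eps=1e-3):
    """Sum of |gamma_i| / (|gamma_i (detached)| + eps): small gammas are pushed harder to zero."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            g = m.weight
            penalty = penalty + (g.abs() / (g.detach().abs() + eps)).sum()
    return penalty

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
x, target = torch.randn(2, 3, 64, 64), torch.randn(2, 16, 64, 64)
task_loss = nn.functional.mse_loss(model(x), target)        # stand-in for the detection loss
loss = task_loss + 1e-4 * reweighted_l1_penalty(model)      # sparsity weight is a placeholder
loss.backward()

gammas = model[1].weight.detach().abs()
keep = torch.topk(gammas, k=8).indices                       # keep the 8 most important channels
print("channels kept:", sorted(keep.tolist()))

Channels whose scale factors are driven toward zero become pruning candidates; the residual-structure channels would still need the recombination step described in the abstract.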
Show Figures
Figure 1
<p>Comparison of changes in the feature activation map and the feature areas of interest map of the same feature channel during L1 regularization.</p>
Full article ">Figure 2
Feature activation maps of the pruned model in remote sensing scenarios based on L1 regularization. In the feature activation map, regions of high attention are depicted in dark red, while regions with low attention are in dark blue. The percentage in red text represents the feature attention loss (L_FA) at that pruning rate. The zoomed-in areas in the feature activation map correspond to the zoomed-in airplane target in the input image.
Full article ">Figure 3
<p>Block Diagram of L1RR.</p>
Full article ">Figure 4
<p>The single-layer sparse stride gradient construction. The color shades of the squares represent the weights. Darker colors indicate larger weights, while lighter colors indicate smaller weights.</p>
Full article ">Figure 5
<p>Schematic diagram of residual reconstruction procedure.</p>
Full article ">Figure 6
<p>The detection results for the baseline (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>,<b>k</b>) and pruned (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>,<b>l</b>) models on the NWPU VHR-10 dataset. The red circles represent targets detected by the pruned model but not by the baseline model, and the blue circles represent false alarms for the baseline model.</p>
Full article ">Figure 7
<p>The heatmap derived from Grad-CAM++ for the baseline (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>,<b>k</b>) model and pruned (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>,<b>l</b>) model on the NWPU VHR-10 dataset.</p>
Full article ">Figure 8
<p>Comparison of detection results for the baseline (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) and pruned (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) models in complex scenes of the RSOD dataset. The red circles indicate targets detected by the pruned model but not by the baseline model.</p>
Full article ">Figure 9
<p>The heatmap derived from Grad-CAM++ for the baseline (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) and pruned (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) models on the RSOD dataset.</p>
Full article ">Figure 10
<p>Mean average precision of different pruning methods for different pruning ratios: (<b>a</b>) NWPU VHR-10 dataset, and (<b>b</b>) RSOD dataset.</p>
Full article ">Figure 11
Mean average precision for various targets for different pruning methods and pruning ratios on the NWPU VHR10 dataset: (a) airplane; (b) ship; (c) storage tank; (d) baseball diamond; (e) tennis court; (f) basketball court; (g) ground track field; (h) harbor; (i) bridge; and (j) vehicle.
Full article ">Figure 12
<p>Mean average precision for various targets for different pruning ratios and methods on the RSOD dataset: (<b>a</b>) aircraft; (<b>b</b>) oiltank; (<b>c</b>) overpass and (<b>d</b>) playground.</p>
Full article ">Figure 13
<p>Comparison of detection results and heatmaps for the baseline (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>) and pruned (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>) models on the HRSID dataset.</p>
Full article ">
24 pages, 7257 KiB  
Article
Radiation Feature Fusion Dual-Attention Cloud Segmentation Network
by Mingyuan He and Jie Zhang
Remote Sens. 2024, 16(11), 2025; https://doi.org/10.3390/rs16112025 - 5 Jun 2024
Viewed by 536
Abstract
In the field of remote sensing image analysis, the issue of cloud interference in high-resolution images has always been a challenging problem, with traditional methods often facing limitations in addressing this challenge. To this end, this study proposes an innovative solution by integrating [...] Read more.
In the field of remote sensing image analysis, the issue of cloud interference in high-resolution images has always been a challenging problem, with traditional methods often facing limitations in addressing this challenge. To this end, this study proposes an innovative solution by integrating radiative feature analysis with cutting-edge deep learning technologies, developing a refined cloud segmentation method. The core innovation lies in the development of FFASPPDANet (Feature Fusion Atrous Spatial Pyramid Pooling Dual Attention Network), a feature fusion dual attention network improved through atrous spatial pyramid pooling to enhance the model’s ability to recognize cloud features. Moreover, we introduce a probabilistic thresholding method based on pixel radiation spectrum fusion, further improving the accuracy and reliability of cloud segmentation and resulting in the “FFASPPDANet+” algorithm. Experimental validation shows that FFASPPDANet+ performs exceptionally well in various complex scenarios, achieving accuracy rates of 99.27% over water bodies, 96.79% in complex urban settings, and 95.82% on a random test set. This research not only enhances the efficiency and accuracy of cloud segmentation in high-resolution remote sensing images but also provides a new direction and application example for the integration of deep learning with radiative algorithms. Full article
(This article belongs to the Special Issue Deep Learning for Satellite Image Segmentation)
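For readers unfamiliar with atrous spatial pyramid pooling, the minimal PyTorch block below illustrates the idea of fusing parallel dilated convolutions; the dilation rates and channel widths are placeholders and do not reproduce the FFASPPDANet configuration:

# Minimal atrous spatial pyramid pooling (ASPP) block, for illustration only; dilation
# rates and channel sizes are placeholders, not the FFASPPDANet configuration.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Parallel dilated convolutions capture multi-scale cloud context, then fuse.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 64, 128, 128)
print(ASPP(64, 32)(feat).shape)   # torch.Size([1, 32, 128, 128])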
Show Figures
Figure 1
<p>“FFASPPDANet+” Algorithm Framework.</p>
Full article ">Figure 2
Cloud probability maps for six channels, comprising the visible RGB channels and the mutual grayscale difference channels B−R, R−G, and B−G. The correspondence between probability and color is shown in Table 2, and the original image is placed at the bottom.
Full article ">Figure 3
<p>Schematic diagram of the visible light image cloud segmentation technology approach.</p>
Full article ">Figure 4
<p>Pixel Spectral Profiles: (<b>a</b>) Example satellite image; (<b>b</b>) Pixel spectral lines in different channels.</p>
Full article ">Figure 5
FFDANet Network Architecture (for the specific mechanisms of spatial attention PA and channel attention CA, see Figure 3 in the paper by Jun Fu [30]).
Full article ">Figure 6
<p>Structure of the U-shaped Feature Fusion Module.</p>
Full article ">Figure 7
<p>FFASPPDANet Network Structure.</p>
Full article ">Figure 8
<p>U-shaped Feature Fusion Network with ResNet-101 as the Feature Extraction Network.</p>
Full article ">Figure 9
<p>Training Loss Graphs for Various Networks.</p>
Full article ">Figure 10
<p>CloudLabel annotation software. In the left toolbar, the seven buttons from top to bottom are ‘Undo’, ‘Enhance’, ‘Save’, ‘Growth Rate’, ‘Fill’, ‘Eraser’, and ‘Update Default Configuration’, respectively.</p>
Full article ">Figure 11
<p>Cloud Segmentation Results of Different Neural Networks in Water Body Scenarios.</p>
Full article ">Figure 12
<p>Cloud Segmentation Results of Different Neural Networks in Water Body Scenarios (comparison results with other state-of-the-art methods).</p>
Figure 12 Cont.">
Full article ">Figure 13
<p>Cloud Segmentation Results of Different Neural Networks in Urban Scenarios.</p>
Figure 13 Cont.">
Full article ">Figure 14
<p>Cloud Segmentation Results of Different Neural Networks in Urban Scenarios (comparison results with other state-of-the-art methods).</p>
Figure 14 Cont.">
Full article ">Figure 15
<p>Cloud Segmentation Results of Different Neural Networks in 38-Cloud dataset (comparison results with other state-of-the-art methods).</p>
Figure 15 Cont.">
Full article ">
19 pages, 3074 KiB  
Article
Two-Stage Adaptive Network for Semi-Supervised Cross-Domain Crater Detection under Varying Scenario Distributions
by Yifan Liu, Tiecheng Song, Chengye Xian, Ruiyuan Chen, Yi Zhao, Rui Li and Tan Guo
Remote Sens. 2024, 16(11), 2024; https://doi.org/10.3390/rs16112024 - 5 Jun 2024
Viewed by 704
Abstract
Crater detection can provide valuable information for humans to explore the topography and understand the history of extraterrestrial planets. Due to the significantly varying scenario distributions, existing detection models trained on known labelled crater datasets are hardly effective when applied to new unlabelled [...] Read more.
Crater detection can provide valuable information for humans to explore the topography and understand the history of extraterrestrial planets. Due to the significantly varying scenario distributions, existing detection models trained on known labelled crater datasets are hardly effective when applied to new unlabelled planets. To address this issue, we propose a two-stage adaptive network (TAN) for semi-supervised cross-domain crater detection. Our network is built on the YOLOv5 detector, where a series of strategies are employed to enhance its cross-domain generalisation ability. In the first stage, we propose an attention-based scale-adaptive fusion (ASAF) strategy to handle objects with significant scale variances. Furthermore, we propose a smoothing hard example mining (SHEM) loss function to address the issue of overfitting on hard examples. In the second stage, we propose a sort-based pseudo-labelling fine-tuning (SPF) strategy for semi-supervised learning to mitigate the distributional differences between source and target domains. For both stages, we employ weak or strong image augmentation to suit different cross-domain tasks. Experimental results on benchmark datasets demonstrate that the proposed network can enhance domain adaptation ability for crater detection under varying scenario distributions. Full article
(This article belongs to the Special Issue Recent Advances in Remote Sensing Image Processing Technology)
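The sort-based pseudo-labelling idea can be sketched in a few lines: detections produced on the unlabelled target domain are ranked by confidence and only the top fraction is kept as pseudo-labels for fine-tuning. The function below is an assumed simplification (field names and the keep ratio are placeholders), not the SPF implementation:

# Schematic of a sort-based pseudo-labelling step (assumed details, not the exact SPF code):
# rank detector outputs on unlabelled target images by confidence and keep the top fraction.
from typing import List, Dict

def select_pseudo_labels(detections: List[Dict], keep_ratio: float = 0.3) -> List[Dict]:
    """detections: [{'image': str, 'box': (x1, y1, x2, y2), 'score': float}, ...]"""
    ranked = sorted(detections, key=lambda d: d["score"], reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:n_keep]

dets = [
    {"image": "lroc_001.png", "box": (10, 12, 40, 44), "score": 0.91},
    {"image": "lroc_002.png", "box": (5, 8, 20, 22), "score": 0.42},
    {"image": "lroc_003.png", "box": (55, 60, 80, 90), "score": 0.77},
]
pseudo = select_pseudo_labels(dets, keep_ratio=0.5)
print([d["image"] for d in pseudo])   # highest-confidence detections become training labels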
Show Figures
Figure 1
<p>(<b>Top</b>) One of the samples in the LROC dataset and the distributions of all craters in this dataset. (<b>Bottom</b>) One of the samples in the DACD dataset and the distributions of all craters in this dataset. Compared with the top sample, the bottom one has more background interference. According to the statistical results of these two datasets, the LROC dataset contains smaller and more craters than DACD.</p>
Full article ">Figure 2
<p>The architecture of our proposed two-stage TAN model for cross-domain crater detection. The first stage: (1) ASAF utilises the attention-based NAM (Normalisation-Based Attention Module) to fuse shallow information to improve scale adaptation abilities; (2) the SHEM loss function is used to alleviate the bias of the model. The second stage: The SPF strategy is adopted to sort and select high-quality pseudo-labels which are used to fine-tune the model. In these two stages, we adopt weak or strong image augmentation to match different cross-domain tasks. The new components are highlighted in blue font or regions.</p>
Full article ">Figure 3
<p>Illustration of our model architecture with the ASAF strategy. To prevent the model from losing crucial information, we incorporate C3TR (C3 + Transformer) into the backbone. We pass the shallow feature maps of different stages to NAM to obtain shallow attention-based feature maps. We also inject multiple C3 modules into the neck to detect large targets.</p>
Full article ">Figure 4
The architecture of NAM [43].
Full article ">Figure 5
The overall flow of the SHEM loss function. We calculate BFLs for the feature maps at four scales, each with a distinct distribution of loss values. Then, we select the top K% of sorted loss values for the feature maps. Subsequently, we average and weight the loss values at different scales to obtain the Loss Rank Function (LRM), followed by L2 regularisation to obtain the objectness loss.
Full article ">Figure 6
<p>Ablation experiment on different attention mechanism modules and loss functions for cross-domain crater detection from DACD to LROC.</p>
Full article ">Figure 7
Ablation experiment on BOT (data augmentation and SPF) for cross-domain crater detection from DACD to LROC, where Faster refers to Faster R-CNN [16] and Libra refers to Libra R-CNN [49].
Full article ">Figure 8
<p>Visualisation of the detection results of different models on the LROC dataset as well as the cross-domain detection results. (<b>a</b>). Visualisation of the detection results of the proposed method and the mainstream methods on the DACD dataset. (<b>b</b>). Visualisation of the detection results of the proposed method and the mainstream methods for cross-domain detection from DACD to LROC.</p>
Full article ">
20 pages, 18584 KiB  
Article
A New Grid Zenith Tropospheric Delay Model Considering Time-Varying Vertical Adjustment and Diurnal Variation over China
by Jihong Zhang, Xiaoqing Zuo, Shipeng Guo, Shaofeng Xie, Xu Yang, Yongning Li and Xuefu Yue
Remote Sens. 2024, 16(11), 2023; https://doi.org/10.3390/rs16112023 - 4 Jun 2024
Cited by 1 | Viewed by 653
Abstract
Improving the accuracy of zenith tropospheric delay (ZTD) models is an important task. However, the existing ZTD models still have limitations, such as a lack of appropriate vertical adjustment function and being unsuitable for China, which has a complex climate and great undulating [...] Read more.
Improving the accuracy of zenith tropospheric delay (ZTD) models is an important task. However, the existing ZTD models still have limitations, such as a lack of appropriate vertical adjustment function and being unsuitable for China, which has a complex climate and great undulating terrain. A new approach that considers the time-varying vertical adjustment and delicate diurnal variations of ZTD was introduced to develop a new grid ZTD model (NGZTD). The NGZTD model employed the Gaussian function and considered the seasonal variations of Gaussian coefficients to express the vertical variations of ZTD. The effectiveness of vertical interpolation for the vertical adjustment model (NGZTD-H) was validated. The root mean squared errors (RMSE) of the NGZTD-H model improved by 58% and 22% compared to the global pressure and temperature 3 (GPT3) model using ERA5 and radiosonde data, respectively. The NGZTD model’s effectiveness for directly estimating the ZTD was validated. The NGZTD model improved by 22% and 31% compared to the GPT3 model using GNSS-derived ZTD and layered ZTD at radiosonde stations, respectively. Seasonal variations in Gaussian coefficients need to be considered. Using constant Gaussian coefficients will generate large errors. The NGZTD model exhibited outstanding advantages in capturing diurnal variations and adapting to undulating terrain. We analyzed and discussed the main error sources of the NGZTD model using validation of spatial interpolation accuracy. This new ZTD model has potential applications in enhancing the reliability of navigation, positioning, and interferometric synthetic aperture radar (InSAR) measurements and is recommended to promote the development of space geodesy techniques. Full article
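As an illustration of the two ingredients named in the abstract, the sketch below combines a seasonal (annual plus semi-annual) harmonic expansion of a grid coefficient with a Gaussian-type vertical adjustment of ZTD. The functional form, coefficient meanings, and numerical values are assumptions made for illustration; they are not the NGZTD equations:

# Illustration only: seasonal (annual + semi-annual) harmonics for a grid coefficient and an
# assumed Gaussian-type vertical scaling of ZTD; neither reproduces the NGZTD formulation.
import numpy as np

def seasonal_coeff(doy, a0, a1, phi1, a2, phi2):
    """Annual mean a0 plus annual (a1) and semi-annual (a2) harmonics."""
    return (a0
            + a1 * np.cos(2 * np.pi * (doy - phi1) / 365.25)
            + a2 * np.cos(4 * np.pi * (doy - phi2) / 365.25))

def ztd_at_height(ztd_ref, h, h_ref, b, c):
    """Assumed Gaussian vertical adjustment from reference height h_ref to height h (km)."""
    return ztd_ref * np.exp(-((h - b) ** 2 - (h_ref - b) ** 2) / (2.0 * c ** 2))

doy = 180
b = seasonal_coeff(doy, a0=-2.0, a1=0.3, phi1=28.0, a2=0.05, phi2=10.0)   # placeholder values
c = seasonal_coeff(doy, a0=6.0, a1=0.4, phi1=200.0, a2=0.1, phi2=90.0)
print(round(ztd_at_height(2.40, h=1.5, h_ref=0.0, b=b, c=c), 3))          # ZTD in metres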
Show Figures
Figure 1
<p>The research framework.</p>
Full article ">Figure 2
Distributions of the annual mean value and period amplitudes for Gaussian coefficients b and c. (a) The annual mean value of b. (b) The annual period amplitude of b. (c) The semi-annual period amplitude of b. (d) The annual mean value of c. (e) The annual period amplitude of c. (f) The semi-annual period amplitude of c.
Full article ">Figure 3
<p>The diurnal variation of the surface ZTD and its spectral analysis results. (<b>a</b>) 50°N, 120°E. (<b>b</b>) 35°N, 115°E.</p>
Full article ">Figure 4
<p>Distribution of annual mean value and period amplitudes for surface ZTD. (<b>a</b>) The annual mean value. (<b>b</b>) The annual period amplitude. (<b>c</b>) The semi-annual period amplitude. (<b>d</b>) The diurnal period amplitude. (<b>e</b>) The semi-diurnal period amplitude.</p>
Full article ">Figure 5
<p>Distribution of vertical interpolation accuracy for NGZTD-H model and GPT3 model using ERA5 profile ZTD in 2018. (<b>a</b>) The bias of NGZTD-H. (<b>b</b>) The RMSE of NGZTD-H. (<b>c</b>) The bias of GPT3. (<b>d</b>) The RMSE of GPT3.</p>
Full article ">Figure 6
<p>Distribution of vertical interpolation accuracy for NGZTD-H and GPT3 models in the selected pressure layers and latitude bands using ERA5 profile ZTD in 2018. (<b>a</b>) The bias of pressure layers. (<b>b</b>) The RMSE of pressure layers. (<b>c</b>) The bias of latitude bands. (<b>d</b>) The RMSE of latitude bands.</p>
Full article ">Figure 7
<p>Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models using the ZTD-layered profiles at radiosonde stations in 2018. (<b>a</b>) The bias of NGZTD-H. (<b>b</b>) The RMSE of NGZTD-H. (<b>c</b>) The bias of GPT3. (<b>d</b>) The RMSE of GPT3.</p>
Full article ">Figure 8
<p>Distribution of vertical interpolation accuracy for NGZTD-H model and GPT3 model in different seasons using the ZTD-layered profiles at radiosonde stations in 2018. (<b>a</b>) Hailar. (<b>b</b>) Hangzhou.</p>
Full article ">Figure 9
<p>Distribution of vertical interpolation accuracy for the NGZTD-H and GPT3 models at different heights using the ZTD-layered profiles at radiosonde stations in 2018. (<b>a</b>) The bias. (<b>b</b>) The RMSEs.</p>
Full article ">Figure 10
<p>Distribution of accuracy for NGZTD and GPT3 models using the GNSS-derived ZTD at GNSS stations in 2018. (<b>a</b>) The bias of NGZTD. (<b>b</b>) The RMSE of NGZTD. (<b>c</b>) The bias of GPT3. (<b>d</b>) The RMSE of GPT3.</p>
Full article ">Figure 11
<p>Distribution of accuracy for NGZTD and GPT3 models in different seasons using GNSS-derived ZTD in 2018. (<b>a</b>) Kuqa. (<b>b</b>) Delingha.</p>
Full article ">Figure 12
<p>Distribution of accuracy for NGZTD and GPT3 models during five days using GNSS-derived ZTD in 2018. (<b>a</b>) Kuqa. (<b>b</b>) Guilin.</p>
Full article ">Figure 13
<p>Distribution of accuracy for the NGZTD model, the model with constant Gaussian coefficients, and the GPT3 model using the ZTD-layered profiles at radiosonde stations in 2018. (<b>a</b>) The bias of NGZTD. (<b>b</b>) The RMSE of NGZTD. (<b>c</b>) The bias of GPT3. (<b>d</b>) The RMSE of GPT3.</p>
Full article ">Figure 14
<p>The percentage results of the RMSE for the NGZTD and GPT3 models using the ZTD-layered profiles at radiosonde stations in 2018. (<b>a</b>) NGZTD model. (<b>b</b>) GPT3 model.</p>
Full article ">Figure 15
<p>Distribution of spatial interpolation accuracy for NGZTD-H and GPT3 models using the GNSS-derived ZTD at GNSS stations in 2018. (<b>a</b>) The bias of NGZTD-H. (<b>b</b>) The RMSE of NGZTD-H. (<b>c</b>) The bias of GPT3. (<b>d</b>) The RMSE of GPT3.</p>
Full article ">
27 pages, 9472 KiB  
Article
A GPU-Based Integration Method from Raster Data to a Hexagonal Discrete Global Grid
by Senyuan Zheng, Liangchen Zhou, Chengshuai Lu and Guonian Lv
Remote Sens. 2024, 16(11), 2022; https://doi.org/10.3390/rs16112022 - 4 Jun 2024
Viewed by 572
Abstract
This paper proposes an algorithm for the conversion of raster data to hexagonal DGGSs in the GPU by redevising the encoding and decoding mechanisms. The researchers first designed a data structure based on rhombic tiles to convert the hexagonal DGGS to a texture [...] Read more.
This paper proposes an algorithm for the conversion of raster data to hexagonal DGGSs in the GPU by redevising the encoding and decoding mechanisms. The researchers first designed a data structure based on rhombic tiles to convert the hexagonal DGGS to a texture format acceptable for GPUs, thus avoiding the irregularity of the hexagonal DGGS. Then, the encoding and decoding methods of the tile data based on space-filling curves were designed, respectively, so as to reduce the amount of data transmission from the CPU to the GPU. Finally, the researchers improved the algorithmic efficiency through thread design. To validate the above design, raster integration experiments were conducted based on the global ASTER 30 m digital elevation model (DEM) data, and the experimental results showed that the raster integration accuracy of this algorithm was around 1 m, while its efficiency was more than 600 times that of the same raster-to-hexagonal-DGGS integration executed on the CPU. Therefore, the researchers believe that this study will provide a feasible method for the efficient and stable integration of massive raster data based on a hexagonal grid, which may well support the organization of massive raster data in the field of GIS. Full article
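The encoding/decoding idea can be illustrated with a generic space-filling curve: the sketch below interleaves the bits of a rhombic tile's (i, j) indices into a Z-order (Morton) code and decodes it back. The paper designs its own codes, so this is only an example of the principle, not the scheme used there:

# Generic Z-order (Morton) encoding/decoding of tile indices, shown only as an example of
# a space-filling-curve code; the paper's own encoding scheme is not reproduced here.
def morton_encode(i: int, j: int, bits: int = 16) -> int:
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b + 1)
        code |= ((j >> b) & 1) << (2 * b)
    return code

def morton_decode(code: int, bits: int = 16):
    i = j = 0
    for b in range(bits):
        i |= ((code >> (2 * b + 1)) & 1) << b
        j |= ((code >> (2 * b)) & 1) << b
    return i, j

c = morton_encode(10, 37)
print(c, morton_decode(c))   # round-trips to (10, 37)

Because nearby cells receive nearby codes, such a curve keeps tile data spatially coherent when it is packed for transfer to the GPU.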
Show Figures
Figure 1
<p>Differences in architecture between the CPU and the GPU. (The light blue rectangles represent the Control Unit, the dark blue rectangles represent the Arithmetic Logic Unit (AU), the red rectangles represent the Cache Memory, and the brown-yellow rectangles represent the Dynamic Random-Access Memory (DRAM)).</p>
Full article ">Figure 2
<p>Pipeline scheduling.</p>
Full article ">Figure 3
<p>Arrangement of hexagonal cells.</p>
Full article ">Figure 4
<p>Organization of hexagonal cells based on rhombic tiles. (The blue hexagonal cells are the cells attributed to this basic rhombic surface, and the gray hexagonal cells are the cells that do not belong to the current basic rhombic surface).</p>
Full article ">Figure 5
<p>Rhombic tile encoding structure.</p>
Full article ">Figure 6
<p>Encoding process for hexagonal cells.</p>
Full article ">Figure 7
<p>Decoding process for hexagonal cells.</p>
Full article ">Figure 8
<p>Raster data scheduling strategy.</p>
Full article ">Figure 9
<p>Spatial relationship between raster data and rhombic tile data.</p>
Full article ">Figure 10
<p>Raster data scheduling strategy: (<b>a</b>) Schematic diagram of the first raster data scheduling; (<b>b</b>) Update strategy for raster data.</p>
Full article ">Figure 11
<p>Principle of bilinear interpolation of raster data: (<b>a</b>) Principle of bilinear interpolation; (<b>b</b>) Finding the nearest raster point from the center of a hexagonal cell; (<b>c</b>) Interpolate the center of the hexagonal cell based on the coordinates of the four nearest raster center points.</p>
Full article ">Figure 12
Grid–stride loops. (Dark blue rectangles represent thread grids, light blue and white rectangles represent blocks in a grid, and orange rectangles represent data blocks.)
Full article ">Figure 13
<p>Thread buffer design.</p>
Full article ">Figure 14
<p>Experiment to determine the threshold of tile splitting.</p>
Full article ">Figure 15
<p>Comparison of efficiency of space-filling curves (times).</p>
Full article ">Figure 16
<p>Efficiency comparison for each thread combination block.</p>
Full article ">Figure 17
<p>CPU decoding efficiency vs. GPU decoding efficiency.</p>
Full article ">Figure 18
<p>Time consumed by each part of the algorithmic flow.</p>
Full article ">Figure 19
<p>Overall efficiency comparison.</p>
Full article ">Figure 20
<p>Comparison of bandwidth consumption.</p>
Full article ">Figure 21
<p>Comparison of efficiency with and without the use of encoding and decoding mechanisms.</p>
Full article ">Figure 22
Various types of hexagonal cells on the surface of an icosahedron. (The black line represents the q2di coordinate system, the gray line represents rhombic cells, the green line represents triangular cells, the blue line represents hexagonal cells, and the red dashed line represents the projection of the center point of the cell into the q2di coordinate system.)
Full article ">Figure A1
<p>The algorithm for encoding.</p>
Full article ">Figure A2
<p>Decoding algorithm for rhombic tile data.</p>
Full article ">
22 pages, 6406 KiB  
Article
LPI Sequences Optimization Method against Summation Detector Based on FFT Filter Bank
by Qiang Liu, Fucheng Guo, Kunlai Xiong, Zhangmeng Liu and Weidong Hu
Remote Sens. 2024, 16(11), 2021; https://doi.org/10.3390/rs16112021 - 4 Jun 2024
Viewed by 539
Abstract
Waveform design is a crucial factor in electronic surveillance (ES) systems. In this paper, we introduce an algorithm that designs a low probability of intercept (LPI) radar waveform. Our approach directly minimizes the detection probability of summation detectors based on FFT filter banks. [...] Read more.
Waveform design is a crucial factor in electronic surveillance (ES) systems. In this paper, we introduce an algorithm that designs a low probability of intercept (LPI) radar waveform. Our approach directly minimizes the detection probability of summation detectors based on FFT filter banks. The algorithm is derived from the general quadratic optimization framework, which inherits the monotonic properties of such methods. To expedite overall convergence, we have integrated acceleration schemes based on the squared iterative method (SQUAREM). Additionally, the proposed algorithm can be executed through fast Fourier transform (FFT) operations, enhancing computational efficiency. With some modifications, the algorithm can be adjusted to incorporate spectral constraints, increasing its flexibility. Numerical experiments indicate that our proposed algorithm outperforms existing ones in terms of both intercept properties and computational complexity. Full article
(This article belongs to the Section Engineering Remote Sensing)
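The SQUAREM acceleration mentioned in the abstract can be written generically for any fixed-point map F; the toy example below applies one accelerated step repeatedly to a simple contraction. The radar-specific MPI/quadratic waveform update is not reproduced here:

# Generic SQUAREM acceleration of a fixed-point iteration x <- F(x), shown on a toy map.
import numpy as np

def squarem_step(F, x0):
    x1 = F(x0)
    x2 = F(x1)
    r = x1 - x0                      # first difference
    v = (x2 - x1) - r                # second difference
    alpha = -np.linalg.norm(r) / max(np.linalg.norm(v), 1e-12)
    x_acc = x0 - 2 * alpha * r + alpha ** 2 * v
    return F(x_acc)                  # one stabilising map evaluation after extrapolation

F = lambda x: 0.5 * (x + 2.0 / x)    # toy contraction converging to sqrt(2)
x = np.array([5.0])
for _ in range(4):
    x = squarem_step(F, x)
print(x)                             # close to sqrt(2) after a few accelerated steps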
Show Figures
Figure 1
<p>The sketch of the LPI radar and ES system framework.</p>
Full article ">Figure 2
<p>Comparison of the convergence between MPI and spectral-MPI with different parameters and their respective accelerated versions.</p>
Full article ">Figure 3
<p>Comparison of the convergence time of the spectral-MPI and accelerate-spectral-MPI.</p>
Full article ">Figure 4
<p>Comparison of the maximum propagation distance of common sequence and accelerated-MPI initialized by different sequences.</p>
Full article ">Figure 5
Comparison of the maximum propagation distance between normal waveforms and accelerated-MPI with different MK.
Full article ">Figure 6
<p>Comparison of the merit factor of common sequences and accelerated-MPI initialized by different sequences.</p>
Full article ">Figure 7
Comparison of the merit factor of accelerated-MPI initialized by different sequences versus ξ.
Full article ">Figure 8
Time–frequency energy flow of spectral-MPI with low spectral power in the frequency bands [0, 0.2) ∪ (0.3, 0.5) ∪ (0.8, 1].
Full article ">Figure 9
Time–frequency energy flow of spectral-MPI with low spectral power in the frequency band [0, 1].
Full article ">
42 pages, 1593 KiB  
Article
Higher-Order Convolutional Neural Networks for Essential Climate Variables Forecasting
by Michalis Giannopoulos, Grigorios Tsagkatakis and Panagiotis Tsakalides
Remote Sens. 2024, 16(11), 2020; https://doi.org/10.3390/rs16112020 - 4 Jun 2024
Viewed by 729
Abstract
Earth observation imaging technologies, particularly multispectral sensors, produce extensive high-dimensional data over time, thus offering a wealth of information on global dynamics. These data encapsulate crucial information in essential climate variables, such as varying levels of soil moisture and temperature. However, current cutting-edge [...] Read more.
Earth observation imaging technologies, particularly multispectral sensors, produce extensive high-dimensional data over time, thus offering a wealth of information on global dynamics. These data encapsulate crucial information in essential climate variables, such as varying levels of soil moisture and temperature. However, current cutting-edge machine learning models, including deep learning ones, often overlook the treasure trove of multidimensional data, thus analyzing each variable in isolation and losing critical interconnected information. In our study, we enhance conventional convolutional neural network models, specifically those based on the embedded temporal convolutional network framework, thus transforming them into models that inherently understand and interpret multidimensional correlations and dependencies. This transformation involves recasting the existing problem as a generalized case of N-dimensional observation analysis, which is followed by deriving essential forward and backward pass equations through tensor decompositions and compounded convolutions. Consequently, we adapt integral components of established embedded temporal convolutional network models, like encoder and decoder networks, thus enabling them to process 4D spatial time series data that encompass all essential climate variables concurrently. Through the rigorous exploration of diverse model architectures and an extensive evaluation of their forecasting prowess against top-tier methods, we utilize two new, long-term essential climate variables datasets with monthly intervals extending over four decades. Our empirical scrutiny, particularly focusing on soil temperature data, unveils that the innovative high-dimensional embedded temporal convolutional network model-centric approaches markedly excel in forecasting, thus surpassing their low-dimensional counterparts, even under the most challenging conditions characterized by a notable paucity of training data. Full article
(This article belongs to the Section Environmental Remote Sensing)
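In the spirit of the consistency check reported in Appendix Figure A1, the short script below convolves a small 4D block with a 4D kernel both directly and via the FFT and reports their normalised mean squared error; the array sizes are placeholders and the script is not the 4D-ETCN code:

# Sanity check in the spirit of Figure A1: a small 4D convolution computed directly and via
# FFT agrees to machine precision. Shapes are placeholders only.
import numpy as np
from scipy.signal import convolve, fftconvolve

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6, 6, 6))    # a 4D block: e.g. time x depth x lat x lon
w = rng.standard_normal((3, 3, 3, 3))    # a 4D kernel

direct = convolve(x, w, mode="full", method="direct")
fourier = fftconvolve(x, w, mode="full")

nmse = np.sum((direct - fourier) ** 2) / np.sum(direct ** 2)
print(direct.shape, f"NMSE = {nmse:.2e}")   # NMSE on the order of 1e-16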
Show Figures
Graphical abstract
Figure 1
<p>Time series data from various ECVs are teeming with valuable insights, thus capturing their temporal dynamics. Notably, ECVs like soil temperature and moisture at varying depths often exhibit significant correlations, which are evident in the slope of the least squares line drawn between each pair. These critical interdependencies among ECV time series can be effectively harnessed and maintained using meticulously crafted 4D model architectures, thus allowing for a more comprehensive and nuanced analysis.</p>
Full article ">Figure 2
Tucker convolution between two third-order tensors X and Y.
Full article ">Figure 3
Computation of the loss gradients of an N-D convolutional layer. The loss gradient of the previous layer (∂L/∂Z) is propagated to other layers via the local gradients (∂Z/∂I, ∂Z/∂W) and the chain rule.
Full article ">Figure 4
<p>The proposed method involves the concurrent forecasting of multiple ECV time series data. Original 4D data, which consist of a time series of 3D data, are input into our specially devised higher-order ETCN model. This model retains all pertinent information up to the regression phase by performing all critical operations such as convolutions and transpose convolutions within a 4D framework.</p>
Full article ">Figure 5
<p>The proposed 4D-ETCN model architecture: Each higher-order input sample undergoes processing by the encoder, TCN, and decoder parts of the model in order to be used for accurate ECV forecasting purposes.</p>
Full article ">Figure 6
<p>Regression MSE of the best-proposed 4D-ETCN models for Crete and Italy datasets. As the number of epochs increases, the model’s performance improves, thus avoiding overfitting for both datasets.</p>
Full article ">Figure 7
<p>The regression maps for November 2020 showcasing the ground truth and predictions for the soil temperature at two distinct depths have been generated using the ETCN and 4D-ETCN models for Italy (<b>a</b>–<b>f</b>) and Crete (<b>g</b>–<b>l</b>) datasets.</p>
Full article ">Figure 8
<p>Regression metrics relative to the number of training samples indicate that while increasing the number of samples enhances the performance of comparative models, the proposed 4D architecture prevails in almost all instances.</p>
Full article ">Figure 9
The regression metrics relative to the number of training samples for the proposed 4D-ETCN model and its counterparts under scenarios where an entire year is used for forecasting. The results reveal a notable trend: while augmenting the number of training samples enhances the performance of state-of-the-art models, this improvement does not suffice to surpass the performance of the proposed 4D model.
Full article ">Figure A1
NMSE between the Tucker convolution model and the direct and Fourier ones for up to 4D data and all available convolution subsections/paddings. In all cases, the relative approximation error is of order 10^−16, thus indicating the validity of the convolution generalization via tensor decompositions.
Figure A1 Cont.">
Full article ">Figure A2
<p>Computational time (in seconds) required by the Tucker convolution model and the direct and Fourier ones for up to 4D data and all available convolution subsections/paddings. The Tucker convolution model scales better than its competing ones in higher-dimensional cases.</p>
Full article ">Figure A3
NMSE between the proposed Stacked convolution model and the direct and Fourier ones for up to 4D data and all available convolution paddings/subsections. In all cases, the relative approximation error is of order 10^−16, thus indicating the validity of the proposed convolution generalization via stacking lower-dimensional convolutions.
Full article ">Figure A4
<p>Computational time (in seconds) required by the proposed Stacked convolution model and the direct and Fourier ones for up to 4D data and all available convolution paddings/subsections. The proposed Stacked convolution model scales equally well to its competing ones in every case.</p>
Full article ">
25 pages, 11944 KiB  
Article
Advancing the Limits of InSAR to Detect Crustal Displacement from Low-Magnitude Earthquakes through Deep Learning
by Elena C. Reinisch, Charles J. Abolt, Erika M. Swanson, Bertrand Rouet-Leduc, Emily E. Snyder, Kavya Sivaraj and Kurt C. Solander
Remote Sens. 2024, 16(11), 2019; https://doi.org/10.3390/rs16112019 - 4 Jun 2024
Viewed by 829
Abstract
Detecting surface deformation associated with low-magnitude (Mw &lt; 5) seismicity using interferometric synthetic aperture radar (InSAR) is challenging due to the subtlety of the signal and the often difficult imaging environments. However, low-magnitude earthquakes are potential precursors to larger seismic [...] Read more.
Detecting surface deformation associated with low-magnitude (Mw &lt; 5) seismicity using interferometric synthetic aperture radar (InSAR) is challenging due to the subtlety of the signal and the often difficult imaging environments. However, low-magnitude earthquakes are potential precursors to larger seismic events, and thus characterizing the crustal displacement associated with them is crucial for regional seismic hazard assessment. We combine InSAR time-series techniques with a Deep Learning (DL) autoencoder denoiser to detect the magnitude and extent of crustal deformation from the Mw = 3.4 Gallina, New Mexico earthquake that occurred on 30 July 2020. Although InSAR alone cannot detect event-related deformation from such a low-magnitude seismic event, application of the DL method reveals maximum displacements as small as ±2.5 mm in the vicinity of both the fault and the earthquake epicenter without prior knowledge of the fault system. This finding improves small-scale displacement discernment with InSAR by an order of magnitude relative to previous studies. We additionally estimate best-fitting fault parameters associated with the observed deformation. The application of the DL technique unlocks the potential for low-magnitude earthquake studies, providing new insights into local fault geometries and potential risks from higher-magnitude earthquakes. This technique also permits low-magnitude event monitoring in areas where seismic networks are sparse, allowing for the possibility of global fault deformation monitoring. Full article
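To illustrate the autoencoder-denoiser component in isolation, the toy PyTorch model below encodes and reconstructs small LOS-displacement patches; the architecture, patch size, and training targets are placeholders and do not correspond to the network used in the paper:

# Toy convolutional autoencoder for denoising small InSAR displacement patches; layer sizes
# and training setup are placeholders and do not reproduce the paper's DL model.
import torch
import torch.nn as nn

class PatchDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))   # reconstruct the patch from a compressed code

model = PatchDenoiser()
noisy = torch.randn(8, 1, 64, 64)              # 8 noisy LOS-displacement patches (placeholder)
clean = torch.zeros_like(noisy)                # targets would be low-noise stacks in practice
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
print(model(noisy).shape)                      # torch.Size([8, 1, 64, 64])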
Show Figures
Figure 1
<p>Satellite image (Google Earth Pro v7.3, image taken on 19 September 2017 courtesy of Landsat/Copernicus) showing the epicenter of the 30 July 2020 event from LASN, denoted with a yellow circle [<a href="#B21-remotesensing-16-02019" class="html-bibr">21</a>]. White lines are faults from the USGS Geologic Map Database [<a href="#B22-remotesensing-16-02019" class="html-bibr">22</a>]. Map inset in upper right corner depicts the location of the study area in blue relative to the state of New Mexico. Map was created using Generic Mapping Tools v6 [<a href="#B31-remotesensing-16-02019" class="html-bibr">31</a>].</p>
Full article ">Figure 2
<p>Flowchart showing the major steps for our displacement analysis and the names of LOS displacement resulting from each step.</p>
Full article ">Figure 3
<p>Example of TS displacement fields from time-series analysis on InSAR pairs from descending track T56. (<b>a</b>) LOS cumulative displacement from the start of the descending track T56 dataset (11 March 2020) to 2 August 2020, just after the event on 30 July (LASN-located epicenter denoted with a triangle). The TS deformation field is overlaid on topography from an SRTM DEM. (<b>b</b>) Full TS displacement fields for the descending track analysis. Geographic coverage and color scale as in (<b>a</b>).</p>
Full article ">Figure 4
<p>Average DL-derived LOS displacement rate fields for ascending track T49 and descending track T56. The Nacimiento Fault site is denoted with a black square, the LASN-located epicenter is denoted with a black star, and faults from the USGS are denoted with black lines.</p>
Full article ">Figure 5
<p>Best-fitting model for NF Site using two Okada sources, as fitted on DL-derived average displacement rate fields from ascending track T49 and descending track T56. Pictured is the observed average displacement rate field from descending track T56, the modeled displacement rate field derived from the best-fitting model parameters for both Okada sources (O1 and O2), and the residual field (observed minus modeled).</p>
Full article ">Figure 6
Results of temporal adjustment on NF Okada 1 slip estimates occurring at ~207 m depth using the f3(t) parameterization. Modeled cumulative slip is shown with the black line, with corresponding 68% confidence intervals indicated by the dashed black lines. Temporal breaks in the parameterization are indicated with dashed, vertical green lines. Red segments represent pairwise measurements of slip, with black dots distinguishing individual epochs per pair. Vertical blue bars indicate 1σ uncertainty in the pairwise slip measurements after scaling by the square root of the variance.
Full article ">Figure 7
Results of temporal adjustment on NF Okada 2 slip estimates occurring at ~370 m depth using the f3(t) parameterization. Plotting conventions as in Figure 6.
Full article ">Figure 8
<p>Comparison of LOS crustal displacement and moment magnitudes studied using Sentinel-1 InSAR pairs. The red circle represents the Gallina earthquake targeted for this study, which is an order of magnitude smaller in LOS displacement than typical studies. Associated references used to obtain this information are provided in <a href="#app6-remotesensing-16-02019" class="html-app">Appendix F</a> (<a href="#remotesensing-16-02019-t0A1" class="html-table">Table A1</a>).</p>
Full article ">Figure A1
<p>Baseline map for the ascending Sentinel-1 track T49 data that were used in this study. The lines connecting the data points illustrate the interferogram pairs selected as the optimal dataset maximizing coherence, plotted according to the time span and orbital separation of the associated epochs in each pair. The dates of each image are as follows. A: 03-05-2020, B: 03-17-2020, C: 03-29-2020, D: 04-10-2020, E: 05-04-2020, F: 05-16-2020, G: 05-28-2020, H: 06-09-2020, I: 06-21-2020, J: 07-15-2020, K: 07-27-2020, L: 08-08-2020, M: 08-20-2020, N: 09-01-2020, O: 09-13-2020, P: 09-25-2020, Q: 10-07-2020, R: 10-19-2020, S: 10-31-2020. We note that although pictured here, pairs with epochs Q, R, and S were not included in any further analysis due to low signal quality.</p>
Full article ">Figure A2
<p>Baseline map for the descending Sentinel-1 track T56 data that were used in this study. The lines connecting the data points illustrate the interferogram pairs selected as the optimal dataset maximizing coherence, plotted according to the time span and orbital separation of the associated epochs in each pair. The dates of each image in MM-DD-YYYY are as follows. A: 03-11-2020, B: 03-23-2020, C: 04-04-2020, D: 04-16-2020, E: 04-28-2020, F: 05-10-2020, G: 05-22-2020, H: 06-03-2020, I: 06-15-2020, J: 06-27-2020, K: 07-09-2020, L: 07-21-2020, M: 08-02-2020, N: 08-14-2020, O: 08-26-2020, P: 09-07-2020.</p>
Full article ">Figure A3
<p>Pairwise LOS displacement results from DL analysis on ascending track T49 and descending track T56 for the Nacimiento Fault site (e.g., region denoted by the square in <a href="#remotesensing-16-02019-f004" class="html-fig">Figure 4</a>). Each plot shows resulting cumulative displacement per 9-step window as estimated after applying our DL method to TS displacements within said time window. Star denotes the fault junction.</p>
Full article ">Figure A4
Relevant clusters obtained from k-means clustering with k = 4. Cluster 1 consists of high-magnitude subsidence (mean = −0.56 mm; std = 0.73 mm). Cluster 2 consists of uplift (mean = 0.08 mm; std = 0.14 mm). Cluster 3 consists of subsidence (mean = −0.14 mm; std = 0.22 mm). Cluster 0 (considered to contain noise) is not shown. Clusters are overlaid on a DEM obtained from the Earth Data Analysis Center of the University of New Mexico, visualized using the open-source software QGIS. The bounding region for the analysis is shown with a dark green rectangle.
Full article ">Figure A5
Elbow plot used to determine the optimum number of clusters [55].
Full article ">Figure A6
Overlay of deformation between 4/15 and 7/20 near the Nacimiento Fault site, with geology from the New Mexico Bureau of Mines &amp; Mineral Resources [46]. Also shown are fault locations from the USGS Quaternary Fault Database [56] and the New Mexico USGS Fault Database [22].
Full article ">
18 pages, 26335 KiB  
Article
Revealing the Eco-Environmental Quality of the Yellow River Basin: Trends and Drivers
by Meiling Zhou, Zhenhong Li, Meiling Gao, Wu Zhu, Shuangcheng Zhang, Jingjing Ma, Liangyu Ta and Guijun Yang
Remote Sens. 2024, 16(11), 2018; https://doi.org/10.3390/rs16112018 - 4 Jun 2024
Cited by 1 | Viewed by 785
Abstract
The Yellow River Basin (YB) acts as a key barrier to ecological security and is an important experimental region for high-quality development in China. There is a growing demand to assess the ecological status in order to promote the sustainable development of the [...] Read more.
The Yellow River Basin (YB) acts as a key barrier to ecological security and is an important experimental region for high-quality development in China. There is a growing demand to assess the ecological status in order to promote the sustainable development of the YB. The eco-environmental quality (EEQ) of the YB was assessed at both the regional and provincial scales utilizing the remote sensing-based ecological index (RSEI) with Landsat images from 2000 to 2020. Then, the Theil–Sen (T-S) estimator and Mann–Kendall (M-K) test were utilized to evaluate its variation trend. Next, the optimal parameter-based geodetector (OPGD) model was used to examine the drivers influencing the EEQ in the YB. Finally, the geographically weighted regression (GWR) model was utilized to further explore the responses of the drivers to RSEI changes. The results suggest that (1) a lower RSEI value was found in the north, while a higher RSEI value was found in the south of the YB. Sichuan (SC) and Inner Mongolia (IM) had the highest and the lowest EEQ, respectively, among the YB provinces. (2) Throughout the research period, the EEQ of the YB improved, whereas it deteriorated in both Henan (HA) and Shandong (SD) provinces. (3) The soil-available water content (AWC), annual precipitation (PRE), and distance from impervious surfaces (IMD) were the main factors affecting the spatial differentiation of RSEI in the YB. (4) The influence of meteorological factors (PRE and TMP) on RSEI changes was greater than that of IMD, and the influence of IMD on RSEI changes showed a significant increasing trend. The research results provide valuable information for application in local ecological construction and regional development planning. Full article
(This article belongs to the Special Issue Environmental Monitoring Using Satellite Remote Sensing)
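The per-pixel trend step (Theil–Sen slope with a Mann–Kendall-style significance test) can be sketched with standard SciPy routines; here kendalltau against time is used as a simple stand-in for the M-K test, and the five-epoch RSEI stack is random placeholder data:

# Per-pixel RSEI trend sketch: Theil-Sen slope plus a Kendall-tau test against time
# (a simple stand-in for the Mann-Kendall test). The 5-epoch stack is placeholder data.
import numpy as np
from scipy.stats import theilslopes, kendalltau

years = np.array([2000, 2005, 2010, 2015, 2020])
rng = np.random.default_rng(0)
rsei = rng.random((5, 4, 4))                      # time x rows x cols stack of RSEI maps

slope = np.zeros(rsei.shape[1:])
pval = np.zeros_like(slope)
for i in range(rsei.shape[1]):
    for j in range(rsei.shape[2]):
        slope[i, j] = theilslopes(rsei[:, i, j], years)[0]   # median pairwise slope per year
        pval[i, j] = kendalltau(years, rsei[:, i, j]).pvalue

improved = (slope > 0) & (pval < 0.05)            # significantly improving EEQ pixels
print(slope.round(4), improved.sum())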
Show Figures
Graphical abstract
Figure 1
<p>Location of the YB.</p>
Full article ">Figure 2
<p>Technology flowchart.</p>
Full article ">Figure 3
<p>Spatial distribution of driving factors.</p>
Full article ">Figure 4
<p>Spatial distribution of RSEI in YB.</p>
Full article ">Figure 5
<p>The proportion of RSEI classes in YB provinces (<b>a</b>) and land cover types (<b>b</b>). From left to right, the figure represents the years 2000, 2005, 2010, 2015, and 2020, respectively.</p>
Full article ">Figure 6
<p>Spatial distribution of RSEI change characteristics during the period of 2000–2020.</p>
Full article ">Figure 7
<p>Interactive detection matrix in YB. * represents nonlinear enhancement; otherwise, there is bilinear enhancement.</p>
Full article ">Figure 8
<p>q statistic of factor detection of nine provinces in YB (95% confidence level).</p>
Full article ">Figure 9
<p>Distribution of regression coefficients of the GWR model.</p>
Full article ">Figure 10
<p>The proportion of the dominant driving factors in the YB and its provinces. From left to right, the figure represents 2000–2005, 2005–2010, 2010–2015, 2015–2020, and 2000–2020, respectively.</p>
Full article ">Figure 11
<p>Christmas tree anomaly (<b>a</b>) and caterpillar tracks (<b>b</b>).</p>
Full article ">Figure 12
<p>Correlation coefficient r between indicators.</p>
Full article ">Figure 13
<p>Mean RSEI values for different land cover types.</p>
Full article ">Figure 14
<p>Area changes of different land cover types in YB from 2000 to 2020.</p>
Full article ">
20 pages, 4593 KiB  
Article
Observations, Remote Sensing, and Model Simulation to Analyze Southern Brazil Antarctic Ozone Hole Influence
by Lucas Vaz Peres, Damaris Kirsh Pinheiro, Hassan Bencherif, Nelson Begue, José Valentin Bageston, Gabriela Dorneles Bittencourt, Thierry Portafaix, Andre Passaglia Schuch, Vagner Anabor, Rodrigo da Silva, Theomar Trindade de Araujo Tiburtino Neves, Raphael Pablo Tapajós Silva, Gabriela Cacilda Godinho dos Reis, Marco Antônio Godinho dos Reis, Maria Paulete Pereira Martins, Mohamed Abdoulwahab Toihir, Nkanyiso Mbatha, Luiz Angelo Steffenel and David Mendes
Remote Sens. 2024, 16(11), 2017; https://doi.org/10.3390/rs16112017 - 4 Jun 2024
Viewed by 856
Abstract
This paper presents the observations, remote sensing data, and model simulations used to analyze southern Brazil Antarctic ozone hole influence (SBAOHI) events that occurred between 2005 and 2014. The analysis uses total ozone column (TOC) data provided by a Brewer spectrophotometer (BS) [...] Read more.
This paper presents the observations, remote sensing data, and model simulations used to analyze southern Brazil Antarctic ozone hole influence (SBAOHI) events that occurred between 2005 and 2014. The analysis uses total ozone column (TOC) data provided by a Brewer spectrophotometer (BS) and the Ozone Monitoring Instrument (OMI). In addition, satellite ozone profiles from the AURA/MLS (Microwave Limb Sounder) instrument were used, together with the DYBAL (Dynamical Barrier Localization) code applied to Potential Vorticity (PV) fields from the MIMOSA (Modélisation Isentrope du Transport Mésoéchelle de l’Ozone Stratosphérique par Advection) model. Across the 62 identified events, the TOC was reduced by 7.0 ± 2.9 DU on average, with October having the most events (30.7%). Polar tongue events account for 19.3% of the total and are most frequently observed in October (50% of cases), with medium intensity (58.2%) and at middle levels of the stratosphere (55.0%). Polar filament events (80.7% of the total) are most frequent in September (32.0%), with medium intensity (42.0%) and at middle levels of the stratosphere (40.7%). Full article
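The event selection implied by the abstract and by Figure 3 below (flagging days whose TOC falls below the monthly climatological mean minus 1.5σ) can be sketched as follows. This is an illustrative reading of the method, not the authors' code, and the column names are hypothetical.

import pandas as pd

def flag_toc_reduction_days(toc: pd.DataFrame) -> pd.DataFrame:
    """toc: DataFrame with a DatetimeIndex and a 'toc_du' column (daily TOC in Dobson units)."""
    month = toc.index.month
    # Monthly climatology built from the full station record.
    clim_mean = toc["toc_du"].groupby(month).transform("mean")
    clim_std = toc["toc_du"].groupby(month).transform("std")

    out = toc.copy()
    out["threshold_du"] = clim_mean - 1.5 * clim_std   # the -1.5 sigma climatology limit
    out["reduction_day"] = out["toc_du"] < out["threshold_du"]
    return out

As the abstract describes, days flagged this way would then be cross-checked against the MIMOSA PV fields and the dynamical barrier positions to decide whether the low-ozone air is of polar (tongue or filament) origin.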
Figures
Figure 1. TOC datasets obtained by BS (black) and the OMI instrument satellite (red) for the August–November period at the SSO station.
Figure 2. Brewer (a) and OMI instrument satellite (b) August–November observations (blue) and TOC reduction days for August–November (brown) between 2005 and 2014 at the SSO station.
Figure 3. (a) TOC of BS #167 (gray) and the OMI instrument satellite (brown) between August and November 2012; values below the −1.5σ limit (black star line) in blue (Brewer) and red (OMI satellite). (b) PV values for the 670 K level (~24 km altitude) between August and November 2012; the dotted black line represents the climatology and the dotted red line the +1.5σ PV values.
Figure 4. (a) Comparison of the AURA/MLS satellite ozone profile for 14 October 2012 (blue) and the October climatology ozone profile (black); dotted lines correspond to the ±1.5σ October climatology limit. (b) Percent ozone anomalies, derived as differences between monthly and daily ozone recorded on 10, 12, 14, and 16 October 2012; the dotted black line represents the null difference.
Figure 5. Vertical evolution of the MIMOSA model PV fields on stratospheric isentropic surfaces (400, 450, 500, 525, 675, and 850 K) on 14 October 2012. Locations of the dynamical barriers (subtropical barrier in red and polar barrier in black) are overlaid on the maps using a color scale in PV units; the X indicates the SSO location.
Figure 6. Temporal evolution of the MIMOSA model PV fields for the 675 K isentropic surface on 8, 10, 12, 14, 16, and 18 October. Locations of the dynamical barriers (subtropical barrier in red and polar barrier in black) are overlaid on the maps using a color scale in PV units; the X indicates the SSO location.
Figure 7. Time distribution of the number of SBAOHI events per month and per year between 2005 and 2014.
Figure 8. Frequency of occurrence (percentage) of SBAOHI events per month; the upper plots (a,b) illustrate the statistics for events with a “tongue” structure, while the lower plots (c,d) present the “filament” structure events between 2005 and 2014.
20 pages, 3230 KiB  
Article
SLR Validation and Evaluation of BDS-3 MEO Satellite Precise Orbits
by Ran Li, Chen Wang, Hongyang Ma, Yu Zhou, Chengpan Tang, Ziqian Wu, Guang Yang and Xiaolin Zhang
Remote Sens. 2024, 16(11), 2016; https://doi.org/10.3390/rs16112016 - 4 Jun 2024
Viewed by 426
Abstract
Starting from February 2023, the International Laser Ranging Service (ILRS) began releasing satellite laser ranging (SLR) data for all BeiDou global navigation satellite system (BDS-3) medium earth orbit (MEO) satellites. SLR data serve as the best external reference for validating satellite orbits, providing [...] Read more.
Starting from February 2023, the International Laser Ranging Service (ILRS) began releasing satellite laser ranging (SLR) data for all BeiDou global navigation satellite system (BDS-3) medium earth orbit (MEO) satellites. SLR data serve as the best external reference for validating satellite orbits, providing a basis for a comprehensive evaluation of BDS-3 satellite orbits. We used the SLR data from February to May 2023 to comprehensively evaluate the BDS-3 MEO orbits from different analysis centers (ACs). The results show that the accuracy of the BDS-3 MEO orbit products released by the Center for Orbit Determination in Europe (CODE), Wuhan University (WHU), and the Deutsches GeoForschungsZentrum (GFZ) does not decrease significantly during either the eclipse season or the yaw maneuver season, and the standard deviations (STDs) of the SLR residuals of these three ACs are all less than 5 cm. Among them, CODE had the smallest SLR residuals, with 9% and 12% improvements over WHU and GFZ, respectively. Moreover, the WHU precise orbits exhibit the smallest systematic biases during non-eclipse seasons, eclipse seasons, and satellite yaw maneuver seasons. Additionally, we found that some BDS-3 satellites (C32, C33, C34, C35, C45, and C46) exhibit orbit errors related to the Sun elongation angle, which indicates that continued refinement of the non-conservative force models is needed to further improve the orbit accuracy of BDS-3 MEO satellites. Full article
(This article belongs to the Special Issue Space-Geodetic Techniques (Third Edition))
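The summary statistics reported in the abstract (the mean SLR residual as a systematic bias and its standard deviation, per analysis center and satellite) reduce to a simple grouping operation. The sketch below is a hedged illustration, not the authors' processing chain; the column names and the 0.5 m outlier cut are assumptions.

import pandas as pd

def slr_residual_stats(residuals: pd.DataFrame, cutoff_m: float = 0.5) -> pd.DataFrame:
    """residuals needs columns 'ac', 'prn', and 'residual_m' (SLR observed-minus-computed range)."""
    # Simple outlier editing before computing the statistics.
    clean = residuals[residuals["residual_m"].abs() < cutoff_m]
    summary = clean.groupby(["ac", "prn"])["residual_m"].agg(
        mean_cm=lambda r: 100.0 * r.mean(),   # systematic bias, in centimeters
        std_cm=lambda r: 100.0 * r.std(),     # STD of the residuals, in centimeters
        n_obs="count",
    )
    return summary.reset_index()

# Ranking the ACs by their average STD across BDS-3 MEO satellites could then be, e.g.:
# slr_residual_stats(df).groupby("ac")["std_cm"].mean().sort_values()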
Figures
Figure 1. The research methodology of this article.
Figure 2. Relationship between the orbital angle µ, the solar elevation angle β, and the satellite body-fixed frame.
Figure 3. Distribution of the SLR stations that can track BDS-3 satellites.
Figure 4. STD of SLR residuals for BDS-3 MEO satellites.
Figure 5. Time series of SLR residuals derived from CODE precise orbits for the BDS-3 satellites C36, C37, C43, and C44.
Figure 6. Average SLR residuals for BDS-3 MEO satellites.
Figure 7. SLR residuals of BDS-3 CAST (C23 and C37) and BDS-3 SECM (C29 and C30) satellites with respect to β (the red line indicates β).
Figure 8. SLR residuals of BDS-3 CAST (C20, C21, and C24) and BDS-3 SECM (C25, C29, and C30) satellites with respect to the Sun elongation angle.
Figure 9. SLR residuals of BDS-3 CAST (C32, C33, C45, and C46) and BDS-3 SECM (C34 and C35) satellites with respect to the Sun elongation angle.
Figure 10. SLR residuals of BDS-3 satellites C45 and C46 with respect to β (the red line indicates β).
Figure 11. Slope of SLR residuals with respect to the Sun elongation angle.