Remote Sens., Volume 14, Issue 22 (November-2 2022) – 277 articles

Cover Story (view full-size image): All-sky imagers (ASIs) can be used to model clouds and detect cloud attenuation, including its spatial variation. This can support solar-irradiance nowcasts, upscaling of photovoltaic production, and numerical weather prediction. We develop a network of ASIs for this task, aiming to cover a whole town and to achieve higher accuracy. We combine the images of 12 ASIs, which monitor the cloud scene from different perspectives. Areas covered by clouds are detected and distinguished by optical thickness. By including a single rotating shadowband irradiometer, a map of cloud attenuation is derived. The method suppresses errors present in a single ASI’s observation. In the validation, we show that the network of ASIs detects spatial variations of cloud attenuation considerably more accurately than state-of-the-art approaches in all atmospheric conditions. View this paper
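The cover story's fusion algorithm is not reproduced here, but its core idea, that combining the views of many imagers suppresses errors present in any single ASI's observation, can be sketched as a per-pixel median over hypothetical per-imager attenuation maps (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def combine_asi_attenuation(attenuation_maps):
    """Fuse per-imager cloud-attenuation maps into one robust map.

    attenuation_maps: list of 2-D arrays (one per all-sky imager), each
    giving estimated attenuation on a common ground grid; NaN marks cells
    an imager cannot see. The median across imagers suppresses outlier
    errors that any single imager's observation may contain.
    """
    stack = np.stack(attenuation_maps)       # shape: (n_imagers, ny, nx)
    return np.nanmedian(stack, axis=0)       # ignore cells an imager misses

# Toy example: three imagers observe a 2x2 area; one reports an outlier
# at (0, 0) and misses the cell at (1, 1) entirely.
maps = [np.array([[0.1, 0.5], [0.2, 0.9]]),
        np.array([[0.1, 0.5], [0.2, 0.9]]),
        np.array([[0.9, 0.5], [0.2, np.nan]])]
fused = combine_asi_attenuation(maps)
```

The outlier at (0, 0) is voted down by the two agreeing imagers, and the missing observation at (1, 1) is filled from the imagers that do cover it.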
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, but PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 8886 KiB  
Article
Construction and Optimization of Ecological Security Pattern in the Loess Plateau of China Based on the Minimum Cumulative Resistance (MCR) Model
by Hong Wei, Hui Zhu, Jun Chen, Haoyang Jiao, Penghui Li and Liyang Xiong
Remote Sens. 2022, 14(22), 5906; https://doi.org/10.3390/rs14225906 - 21 Nov 2022
Cited by 37 | Viewed by 3475
Abstract
With accelerating urbanization, regional ecological security patterns (ESPs) face unprecedented threats. The situation is particularly serious in the Loess Plateau of China (LPC) due to its fragile ecological environment and poor natural conditions. Constructing an ecological network and optimizing the ESP are significant for guiding regional development and maintaining the stability of ecological processes. This study constructed an ecological security network for the LPC by integrating the minimum cumulative resistance (MCR) model and the morphological spatial pattern analysis (MSPA) approach. Additionally, an optimization scheme for the regional ESP is proposed. Results show that the ecological source area is about 57,757.8 km², or 9.13% of the total area, and is mainly distributed in the southeast of the study area. The spatial distribution of ecological sources shows clear agglomeration characteristics. The constructed ecological security network contains 24 main ecological corridors, 72 secondary ecological corridors, and 53 ecological nodes. Based on the identified ecological source areas, corridors, nodes, and other core components, the “two barriers, five corridors, three zones and multipoint” ESP optimization scheme is presented. This research aims to provide a valuable reference for constructing ecological security networks and optimizing ecological space in ecologically fragile areas of western China. Full article
(This article belongs to the Special Issue Remote Sensing of Interaction between Human and Natural Ecosystem)
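The MCR model used in this study assigns every raster cell the least possible sum of resistance values along any path back to an ecological source. A minimal sketch of that idea (a generic Dijkstra least-cost accumulation over a toy resistance grid, not the authors' implementation; the cost of a step is taken here as the resistance of the cell entered):

```python
import heapq
import numpy as np

def min_cumulative_resistance(resistance, sources):
    """Minimum-cumulative-resistance surface over a raster.

    resistance: 2-D array of per-cell resistance values.
    sources: list of (row, col) ecological-source cells.
    Returns, for every cell, the minimum summed resistance along any
    4-connected path from the nearest source (Dijkstra's algorithm with
    the resistance of the entered cell as the edge cost).
    """
    ny, nx = resistance.shape
    cost = np.full((ny, nx), np.inf)
    heap = []
    for r, c in sources:
        cost[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r, c]:          # stale queue entry, already improved
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < ny and 0 <= cc < nx:
                nd = d + resistance[rr, cc]
                if nd < cost[rr, cc]:
                    cost[rr, cc] = nd
                    heapq.heappush(heap, (nd, rr, cc))
    return cost

# Toy resistance raster with one high-resistance cell in the middle;
# the cheapest path from the source at (0, 0) routes around it.
res = np.array([[1.0, 1.0, 5.0],
                [1.0, 9.0, 1.0],
                [1.0, 1.0, 1.0]])
mcr = min_cumulative_resistance(res, [(0, 0)])
```

Low-MCR valleys between sources in such a surface are what corridor-identification approaches trace as ecological corridors.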
Show Figures

Figure 1
<p>Location and topography of LPC.</p>
Figure 2
<p>Research framework.</p>
Figure 3
<p>Spatial distribution of landscape pattern in LPC based on MSPA. (<b>a</b>) sample view of the edge; (<b>b</b>) sample view of the perforation; (<b>c</b>) sample view of the bridge.</p>
Figure 4
<p>Results of the core patch importance assessment.</p>
Figure 5
<p>Resistance factors of the MCR model.</p>
Figure 6
<p>Construction results of ecological network structure in LPC.</p>
Figure 7
<p>Ecological sources and corridors in each administrative area of LPC.</p>
Figure 8
<p>Optimization scheme for ESP of LPC.</p>
Figure 9
<p>Comparison of ecological corridors constructed by the MCR model and circuit theory.</p>
Figure 10
<p>Comparison before and after the policy of GGP in LPC. (<b>a</b>) before GGP (1984) in Wuqi County, Shaanxi Province; (<b>b</b>) after GGP (2017) in Wuqi County, Shaanxi Province.</p>
24 pages, 6474 KiB  
Article
Comparing Different Light Use Efficiency Models to Estimate the Gross Primary Productivity of a Cork Oak Plantation in Northern China
by Linqi Liu, Xiang Gao, Binhua Cao, Yinji Ba, Jingling Chen, Xiangfen Cheng, Yu Zhou, Hui Huang and Jinsong Zhang
Remote Sens. 2022, 14(22), 5905; https://doi.org/10.3390/rs14225905 - 21 Nov 2022
Cited by 6 | Viewed by 2422
Abstract
Light use efficiency (LUE) models have been widely used to estimate terrestrial gross primary production (GPP). However, GPP estimates still carry large uncertainties owing to an insufficient understanding of the complex relationship between water availability and photosynthesis. The plant water stress index (PWSI), which is based on canopy temperature, is very sensitive to stomatal opening and has been regarded as a good indicator for monitoring plant water status at the regional scale. In this study, we selected a cork oak plantation in northern China with pronounced seasonal drought as the research object. Using ground-observed data, we evaluated the applicability of LUE models with typical water stress scalars (MOD17, MODTEM, EC-LUE, ECM-LUE, SM-LUE, GLO-PEM, and Wang) for simulating the GPP of the cork oak plantation and explored whether model accuracy can be improved by using PWSI to modify these models. The results showed that, among the seven LUE models, the water stress scalar had a greater impact on model performance than the temperature stress scalar. On sunny days, the daily GPP simulated by the seven LUE models matched the measured GPP poorly, and all models explained only 23–52% of the GPP variation in the cork oak plantation. The modified LUE models significantly improved the prediction accuracy of GPP, explaining 49–65% of the variation in daily GPP. On cloudy days, the performance of the modified LUE models did not improve, and the evaporative fraction was more suitable for defining the water stress scalar in the LUE models. The ECM-LUE model and the PWSI-modified GLO-PEM model had the optimal structures for simulating the GPP of the cork oak plantation on cloudy and sunny days, respectively. This study provides a reference for the accurate prediction of GPP in terrestrial ecosystems. Full article
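As background to the models compared above, LUE models share the multiplicative form GPP = PAR × FPAR × ε_max × f(T) × f(W), differing mainly in how the temperature and water stress scalars f(T) and f(W) are defined. A minimal sketch of that form; the PWSI-based water scalar 1 − PWSI used below is an illustrative assumption, not necessarily the paper's exact formulation:

```python
def gpp_lue(par, fpar, eps_max, t_scalar, w_scalar):
    """Generic light-use-efficiency GPP model:
    GPP = PAR * FPAR * eps_max * f(T) * f(W).
    par: incident photosynthetically active radiation;
    fpar: fraction of PAR absorbed by the canopy;
    eps_max: maximum light use efficiency;
    t_scalar, w_scalar: temperature and water stress scalars in [0, 1].
    """
    return par * fpar * eps_max * t_scalar * w_scalar

def water_scalar_pwsi(pwsi):
    """Hypothetical PWSI-based water stress scalar: full stomatal closure
    (PWSI = 1) shuts photosynthesis down, no water stress (PWSI = 0)
    leaves it unconstrained."""
    return max(0.0, 1.0 - pwsi)

# Example with illustrative values: PAR of 10 mol m-2 d-1, FPAR of 0.8,
# eps_max of 0.05 mol CO2 per mol photons, mild temperature limitation
# and moderate water stress (PWSI = 0.4).
gpp = gpp_lue(par=10.0, fpar=0.8, eps_max=0.05, t_scalar=0.9,
              w_scalar=water_scalar_pwsi(0.4))
```

Swapping the definition of `w_scalar` (soil-moisture-based, evaporative-fraction-based, or PWSI-based) is precisely the kind of modification the study evaluates.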
Show Figures

Graphical abstract
Figure 1
<p>The location of the experimental site. This picture is from Liu et al. [<a href="#B33-remotesensing-14-05905" class="html-bibr">33</a>].</p>
Figure 2
<p>Photosynthesis light-response curve of cork oak plantation on sunny (<b>a</b>) and cloudy (<b>b</b>) days. The blue line is the slope curve of the utilization of the light energy. The red line is the fitting curve.</p>
Figure 3
<p>The relationship between the fraction of incident photosynthetically active radiation absorbed by vegetation canopies (FPAR) and the leaf area index (LAI) (<b>a</b>) and the normalized difference vegetation index (NDVI) (<b>b</b>). The red line is the fitting curve.</p>
Figure 4
<p>Seasonal variations of monthly gross primary productivity (GPP) (<b>a</b>), monthly photosynthetically active radiation (PAR) (<b>b</b>), monthly mean fraction of PAR absorbed by the canopy (FPAR) (<b>c</b>), leaf area index (LAI) (<b>d</b>), normalized difference vegetation index (NDVI) (<b>e</b>), turbulent heat flux (H + LE) and available energy (R<sub>n</sub> – G) (<b>f</b>), soil water content (SWC) (<b>g</b>), vapor pressure deficit (VPD) (<b>h</b>), air temperature (T<sub>a</sub>) (<b>i</b>), plant water stress index (PWSI) (<b>j</b>), and evaporative fraction (EF) (<b>k</b>) during the period of canopy closure in 2020–2022.</p>
Figure 5
<p>The ratio of gross primary production (GPP) that was simulated by considering only water (GPP<sub>wat</sub>, (<b>a</b>)) or temperature (GPP<sub>tem</sub>, (<b>b</b>)) stress scalars in light use efficiency (LUE) models to the estimated potential GPP (PGPP).</p>
Figure 6
<p>Comparison of daily gross primary production (GPP) simulated by MOD17 (<b>a</b>,<b>b</b>), MODTEM (<b>c</b>,<b>d</b>), EC-LUE (<b>e</b>,<b>f</b>), ECM-LUE (<b>g</b>,<b>h</b>), SM-LUE (<b>i</b>,<b>j</b>), GLO-PEM (<b>k</b>,<b>l</b>), and Wang (<b>m</b>,<b>n</b>) models versus observed GPP on sunny days and cloudy days. Solid dots represent cloudy days, and the open circle dots represent sunny days. Data from June–September 2020 and May–June 2021. The red line is the fitting curve.</p>
Figure 7
<p>Comparison of daily gross primary production (GPP) simulated by MOD17<sub>pwsi</sub> (<b>a</b>,<b>b</b>), MODTEM<sub>pwsi</sub> (<b>c</b>,<b>d</b>), EC-LUE<sub>pwsi</sub> (<b>e</b>,<b>f</b>), ECM-LUE<sub>pwsi</sub> (<b>g</b>,<b>h</b>), SM-LUE<sub>pwsi</sub> (<b>i</b>,<b>j</b>), GLO-PEM<sub>pwsi</sub> (<b>k</b>,<b>l</b>), and Wang<sub>pwsi</sub> (<b>m</b>,<b>n</b>) models versus observed GPP on cloudy days. The subscript plant water stress index (PWSI) indicates that PWSI was used to modify the water stress scalar in the LUE model. Solid dots represent cloudy days, and the open circle dots represent sunny days. Data from June–September 2020 and May–June 2021. The red line is the fitting curve.</p>
Figure 8
<p>Comparison of daily gross primary production (GPP) simulated by MOD17 (<b>a</b>), MODTEM (<b>b</b>), EC-LUE (<b>c</b>), ECM-LUE (<b>d</b>), SM-LUE (<b>e</b>), GLO-PEM (<b>f</b>), Wang (<b>g</b>), and ECM-LUE+GLO-PEM<sub>pwsi</sub> (<b>h</b>) models versus observed GPP. ECM-LUE+GLO-PEM<sub>pwsi</sub> means the combination of ECM-LUE model on cloudy days and the GLO-PEM<sub>pwsi</sub> model on sunny days. Solid dots represent cloudy days, and the open circle dots represent sunny days. The subscript plant water stress index (PWSI) indicates that PWSI was used to modify the water stress scalar in the LUE model. Data from July–September 2021 and May–June 2022. The red line is the fitting curve.</p>
Figure A1
<p>Seasonal variations of gross primary production (GPP) simulated by MOD17 and MODTEM (<b>a</b>), EC-LUE and ECM-LUE (<b>b</b>), SM-LUE and Wang (<b>c</b>), GLO-PEM (<b>d</b>), and ECM-LUE+GLO-PEM<sub>pwsi</sub> (<b>e</b>) models and observed GPP. ECM-LUE+GLO-PEM<sub>pwsi</sub> means the combination of ECM-LUE model on cloudy days and the GLO-PEM<sub>pwsi</sub> model on sunny days. The subscript plant water stress index (PWSI) indicates that PWSI was used to modify the water stress scalar in the LUE model. Solid dots represent cloudy days, and the open circle dots represent sunny days. Data from June–September 2020 and May–June 2021.</p>
Figure A2
<p>Seasonal variations of gross primary production (GPP) simulated by MOD17<sub>pwsi</sub> and MODTEM<sub>pwsi</sub> (<b>a</b>), EC-LUE<sub>pwsi</sub> and ECM-LUE<sub>pwsi</sub> (<b>b</b>), SM-LUE<sub>pwsi</sub> and Wang<sub>pwsi</sub> (<b>c</b>), GLO-PEM<sub>pwsi</sub> (<b>d</b>), and ECM-LUE+GLO-PEM<sub>pwsi</sub> (<b>e</b>) models and observed GPP. ECM-LUE+GLO-PEM<sub>pwsi</sub> means the combination of ECM-LUE model on cloudy days and the GLO-PEM<sub>pwsi</sub> model on sunny days. The subscript plant water stress index (PWSI) indicates that PWSI was used to modify the water stress scalar in the LUE model. Solid dots represent cloudy days, and the open circle dots represent sunny days. Data from June–September 2020 and May–June 2021.</p>
Figure A3
<p>Seasonal variations of gross primary production (GPP) simulated by MOD17 and MODTEM (<b>a</b>), EC-LUE and ECM-LUE (<b>b</b>), SM-LUE and Wang (<b>c</b>), GLO-PEM (<b>d</b>), and ECM-LUE+GLO-PEM<sub>pwsi</sub> (<b>e</b>) models and observed GPP. ECM-LUE+GLO-PEM<sub>pwsi</sub> means the combination of ECM-LUE model on cloudy days and the GLO-PEM<sub>pwsi</sub> model on sunny days. The subscript plant water stress index (PWSI) indicates that PWSI was used to modify the water stress scalar in the LUE model. Data from July–September 2021 and May–June 2022.</p>
Figure A4
<p>The relationship between turbulent heat flux (H + LE) and available energy (R<sub>n</sub> – G) on sunny days (<b>a</b>) and cloudy days (<b>b</b>) in cork oak plantation. The red line is the fitting curve.</p>
21 pages, 2058 KiB  
Article
UAV-LiDAR and RGB Imagery Reveal Large Intraspecific Variation in Tree-Level Morphometric Traits across Different Pine Species Evaluated in Common Gardens
by Erica Lombardi, Francisco Rodríguez-Puerta, Filippo Santini, Maria Regina Chambel, José Climent, Víctor Resco de Dios and Jordi Voltas
Remote Sens. 2022, 14(22), 5904; https://doi.org/10.3390/rs14225904 - 21 Nov 2022
Cited by 6 | Viewed by 3594
Abstract
Remote sensing is increasingly used in forest inventories. However, its application to assess genetic variation in forest trees is still rare, particularly in conifers. Here we evaluate the potential of LiDAR and RGB imagery obtained through unmanned aerial vehicles (UAVs) as high-throughput phenotyping tools for the characterization of tree growth and crown structure in two representative Mediterranean pine species. To this end, we investigated the suitability of these tools to evaluate intraspecific differentiation in a wide array of morphometric traits for Pinus nigra (European black pine) and Pinus halepensis (Aleppo pine). Morphometric traits related to crown architecture and volume, primary growth, and biomass were retrieved at the tree level in two genetic trials located in central Spain and compared with ground-truth data. Both UAV-based methods were then tested for their accuracy in detecting genotypic differentiation among black pine and Aleppo pine populations and their subspecies (black pine) or ecotypes (Aleppo pine). The possible relation between intraspecific variation of morphometric traits and the life-history strategies of populations was also tested by correlating traits with climate factors at the origin of the populations. Finally, we investigated which traits best distinguished among black pine subspecies or Aleppo pine ecotypes. Overall, the results demonstrate the usefulness of UAV-based LiDAR and RGB records to disclose architectural intraspecific differences in pine species potentially related to adaptive divergence among populations. In particular, three LiDAR-derived traits related to crown volume, crown architecture, and the main trunk (or, alternatively, the latter two traits derived from RGB) discriminated the most among black pine subspecies. In turn, Aleppo pine ecotypes were partly distinguishable using two LiDAR-derived traits related to crown architecture and crown volume, or three RGB-derived traits related to tree biomass and the main trunk. Remote-sensing-derived traits related to the main trunk, tree biomass, crown architecture, and crown volume were associated with environmental characteristics at the origin of the populations of black pine and Aleppo pine, hinting at divergent, environmental-stress-induced local adaptation to drought, wildfire, and snowfall in both species. Full article
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing II)
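To make the tree-level traits concrete, here is a sketch of extracting a few simple morphometrics from a normalized, crown-segmented LiDAR point cloud. The trait names echo those used in the study (h, Q75d, CV_CL), but the definitions below are simplified assumptions, not the paper's exact formulas:

```python
import numpy as np

def crown_metrics(z):
    """Simple tree-level morphometric traits from the height-above-ground
    values of a single segmented crown's point cloud.

    Returns:
      h   -- total tree height (maximum point height);
      q75 -- 75th percentile of point heights, a rough analogue of Q75d;
      cv  -- coefficient of variation of point heights, a rough analogue
             of the crown-length point-dispersion trait CV_CL.
    """
    z = np.asarray(z, dtype=float)
    h = float(z.max())
    q75 = float(np.percentile(z, 75))
    cv = float(z.std() / z.mean())       # dispersion relative to the mean
    return h, q75, cv

# Toy crown: eight LiDAR returns spread evenly from 1 m to 8 m.
heights = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h, q75, cv = crown_metrics(heights)
```

In the study's workflow, traits of this kind are computed per segmented tree and then compared among populations, subspecies, or ecotypes.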
Show Figures

Figure 1
<p>Aerial RGB images and derived information of the two common gardens analyzed in this study (<b>a</b>): <span class="html-italic">Pinus nigra</span> (left panels) and <span class="html-italic">Pinus halepensis</span> (right panels). Example of crown segmentation from the canopy height model of <span class="html-italic">P. nigra</span> and <span class="html-italic">P. halepensis</span> (<b>b</b>). Example of a normalized LiDAR point cloud of <span class="html-italic">P. nigra</span> and <span class="html-italic">P. halepensis</span> (<b>c</b>). Example of extracted tree individual point clouds for <span class="html-italic">P. nigra</span> subspecies <span class="html-italic">nigra</span> and <span class="html-italic">laricio</span> (<b>d</b>,<b>e</b>), and for Aleppo pine ecotype DHT (<b>d</b>) and DSC (<b>e</b>).</p>
Figure 2
<p>Workflow of RGB and LiDAR point cloud processing. Green lines refer to the RGB-derived point cloud process, and blue lines illustrate the LiDAR-derived point cloud process. The LiDAR-derived traits described in the right part of the workflow are those obtained from the segmentation applied directly to the LiDAR-derived point cloud.</p>
Figure 3
<p>Discriminant analysis plots showing the centroids (black crosses) and their 95% confidence ellipses for the first two canonical variables (CAN1 and CAN2) for four subspecies of <span class="html-italic">P. nigra</span> (<b>a</b>) and five ecotypes of <span class="html-italic">P. halepensis</span> (<b>b</b>) and their explanatory variables, using LiDAR-derived traits. <span class="html-italic">Cvol<sub>025</sub></span> = crown volume calculated with alpha equal to 0.25; <span class="html-italic">h</span> = total tree height; <span class="html-italic">CV<sub>CL</sub></span> = coefficient of variation of crown length point dispersion; <span class="html-italic">Q75d</span> = 75th percentile of crown’s point density calculated from the point cloud.</p>
Figure 4
<p>Simple correlation (<b>a</b>) and partial correlation coefficients (<b>b</b>) between population means of in situ, RGB-derived, or LiDAR-derived traits and climate factors at populations’ origin for 16 populations of <span class="html-italic">Pinus nigra</span> tested in a common garden located in Valsaín (Spain). Only significant correlations (<span class="html-italic">p</span> &lt; 0.05) with their respective correlation coefficients are shown.</p>
Figure 5
<p>Simple correlation coefficients between population means of in situ, RGB-derived, or LiDAR-derived traits and climate factors at populations’ origin for 48 populations of Aleppo pine tested in a common garden located in Valdeolmos (Spain). Only significant correlations (<span class="html-italic">p</span> &lt; 0.05) with their respective correlation coefficients are shown.</p>
20 pages, 6755 KiB  
Article
Quality Control of CyGNSS Reflectivity for Robust Spatiotemporal Detection of Tropical Wetlands
by Hironori Arai, Mehrez Zribi, Kei Oyoshi, Karin Dassas, Mireille Huc, Shinichi Sobue and Thuy Le Toan
Remote Sens. 2022, 14(22), 5903; https://doi.org/10.3390/rs14225903 - 21 Nov 2022
Cited by 3 | Viewed by 2453
Abstract
The aim of this study was to develop a robust methodology for evaluating the spatiotemporal dynamics of the inundation status of tropical wetlands with currently available Global Navigation Satellite System-Reflectometry (GNSS-R) data by proposing a new quality control technique called the “precision index”. The methodology was applied over the Mekong Delta, one of the most important rice-production systems, which also comprises aquaculture areas and natural wetlands (e.g., mangrove forests, peatlands). Cyclone Global Navigation Satellite System (CyGNSS) constellation data (August 2018–December 2021) were used to evaluate the spatiotemporal dynamics of the reflectivity Γ over the delta. First, the reflectivity Γ, the shape and size of each specular footprint, and the precision index were calibrated at each specular point and reprojected to a 0.0045° resolution (approximately equivalent to 500 m) grid at a daily temporal resolution (Lv. 2 product), accounting for bias-causing factors (e.g., the velocity, effective scattering area, and incidence angle). The Lv. 2 product was then temporally integrated every 15 days with a Kalman smoother (+/− 14 days temporal localization with a Gaussian kernel: 1σ = 5 days). By applying the smoother, the regional annual dynamics over the delta could be clearly visualized. The behaviors of the GNSS-R reflectivity and the Advanced Land Observing Satellite-2 Phased-Array type L-band Synthetic Aperture Radar-2 quadruple polarimetric scatter signals were compared and found to be nonlinearly correlated due to the influence of the incidence angle and the effective scattering area. Full article
(This article belongs to the Special Issue GNSS-R Earth Remote Sensing from SmallSats)
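The temporal integration step described above (a Kalman smoother with ±14-day localization and a 1σ = 5-day Gaussian kernel) can be illustrated in simplified form by a truncated Gaussian-kernel weighted average. This stand-in captures the temporal localization but not the full Kalman machinery, and all values are illustrative:

```python
import numpy as np

def gaussian_temporal_smooth(t_obs, values, t_eval, sigma=5.0, window=14.0):
    """Temporally localize noisy daily reflectivity samples.

    t_obs:  observation days; values: reflectivity at those days;
    t_eval: days at which to evaluate the smoothed series.
    Each output is a Gaussian-weighted mean (1 sigma = `sigma` days) of
    observations within +/- `window` days of the evaluation time.
    """
    t_obs = np.asarray(t_obs, dtype=float)
    values = np.asarray(values, dtype=float)
    out = []
    for t in t_eval:
        dt = t_obs - t
        mask = np.abs(dt) <= window              # truncate the kernel
        w = np.exp(-0.5 * (dt[mask] / sigma) ** 2)
        out.append(np.sum(w * values[mask]) / np.sum(w))
    return np.array(out)

# Toy series: reflectivity steps from 1 dB to 3 dB on day 15
# (e.g., the onset of flood inundation).
days = np.arange(0, 30)
obs = np.where(days < 15, 1.0, 3.0)
smooth = gaussian_temporal_smooth(days, obs, t_eval=[7.0, 22.0])
```

The smoothed values stay close to the local level on each side of the step, which is the behavior that lets the 15-day-cycle product visualize regional seasonal dynamics despite noisy daily sampling.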
Show Figures

Figure 1
<p>Flowchart of the methodology of this study outlining the data, processing and analysis steps.</p>
Figure 2
<p>Illustration of the precision index. The light blue tiles are rasterization grid cells. The yellow circular area is the effective scattering area. The red tile in the yellow/green circle effective scattering area is the corresponding grid. The green, blue and red arrows are equivalent to the <span class="html-italic">GS</span>, <span class="html-italic">DistSP</span> and <span class="html-italic">SemiDSP</span> terms in Equation (2).</p>
Figure 3
<p>Illustration of DDM 3D statistics (skewness and kurtosis). The calibration is conducted by assuming that the analog power is equivalent to the probability density of the DDM 3D histogram (Delay, Doppler and Analogue power).</p>
Figure 4
<p>A sample of the Lv. 2 daily product (<b>a</b>) Γ(<span class="html-italic">θ</span>)<sub>normalized</sub>; (<b>b</b>) incidence angle; (<b>c</b>) precision index clipped at the Mekong Delta ((<b>d</b>) Optical image) on 4 January 2021.</p>
Figure 5
<p>A sample of the Lv. 3 15−day−cycle Kalman smoother product based on the precision index [Γ(dB) without or with applying the precision index (<b>a</b>,<b>b</b>), zeroed out based on the kurtosis threshold and DDM 3D statistics such as skewness (<b>c</b>) and kurtosis (<b>d</b>)] clipped at Mekong Delta on 1 August 2018.</p>
Figure 6
<p>Temporal dynamics of the Lv. 2 daily product with a 30-day moving average ((<b>a</b>): Γ<sub>normalized</sub> and (<b>b</b>): Γ(<span class="html-italic">θ</span>)<sub>normalized</sub>) and the Lv. 3 15-day-cycle Kalman smoother product [(<b>c</b>): Γ<sub>normalized</sub> (500 m resolution with the precision index), (<b>d</b>): Γ (3000 m resolution without the precision index) and (<b>e</b>): Γ (500 m resolution without the precision index)]. Each line/plot denotes spatially averaged values over the delta.</p>
Figure 7
<p>Samples of nonnormalized Γ values in 2020 (<b>a</b>–<b>d</b>) and 2021 (<b>e</b>–<b>h</b>). The left-hand side scenes are snapshots obtained in dry seasons (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>). The right-hand side scenes are snapshots obtained in rainy seasons (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>).</p>
Figure 8
<p>PALSAR-2 data-based rice map ((<b>a</b>); white pixels indicate rice paddies), PALSAR-2 data-based rice floodability map (<b>b</b>) and inundation detection snapshot obtained by PALSAR-2 above one of the study sites [Thot not, Can Tho city, Vietnam, on 6 May 2016 (69 days after sowing)] with the corresponding aerial photo (<b>c</b>); CF: Continuously inundated paddy; AWD: Alternate wetting and drying paddy; the temporal water level dynamics of these blocks are presented in the referenced literature [<a href="#B21-remotesensing-14-05903" class="html-bibr">21</a>,<a href="#B26-remotesensing-14-05903" class="html-bibr">26</a>,<a href="#B27-remotesensing-14-05903" class="html-bibr">27</a>]).</p>
Figure 9
<p>Two-dimensional scatterplots between the CyGNSS reflectivity Γ (dB) and PALSAR-2 based spatial inundation percentage (<b>a</b>) and PALSAR-2 back scatters σ<sup>0</sup> (dB) values ((<b>b</b>) HV, (<b>c</b>) odd scattering, (<b>d</b>) volume diffusion, (<b>e</b>) double bounce) at specular points with 30–35° incidence angles. The statistical analysis results representing these relationships are described in <a href="#app1-remotesensing-14-05903" class="html-app">Table S2</a>.</p>
20 pages, 4441 KiB  
Article
SATNet: A Spatial Attention Based Network for Hyperspectral Image Classification
by Qingqing Hong, Xinyi Zhong, Weitong Chen, Zhenghua Zhang, Bin Li, Hao Sun, Tianbao Yang and Changwei Tan
Remote Sens. 2022, 14(22), 5902; https://doi.org/10.3390/rs14225902 - 21 Nov 2022
Cited by 9 | Viewed by 2600
Abstract
Owing to their rich spectral-spatial information, hyperspectral images (HSIs) have been used extensively to categorize feature classes by capturing subtle differences. 3D convolution-based neural networks (3DCNNs) have been widely used in HSI classification because of their powerful feature extraction capability. However, 3DCNN-based HSI classification approaches extract only local features, and the feature maps they produce contain considerable spatially redundant information, which lowers classification accuracy. To solve these problems, we propose a spatial attention network (SATNet) that combines 3D Octave convolution (3D OctConv) and a Vision Transformer (ViT). First, 3D OctConv divides the feature maps into high-frequency and low-frequency maps to reduce spatial redundancy. Second, the ViT model is used to obtain global features and effectively combine local and global features for classification. To verify the effectiveness of the method, we compared it with various mainstream methods on three publicly available datasets; the results show the superiority of the proposed method in terms of classification performance. Full article
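The spatial attention idea can be illustrated with a stripped-down, NumPy-only sketch in the spirit of CBAM, from which the paper's spatial attention module is adapted (see Figure 4's reprint notice): pool across channels with mean and max, derive a per-pixel weight, and rescale the feature map. A real module would pass the pooled maps through a learned convolution before the sigmoid; that step is omitted here for brevity:

```python
import numpy as np

def spatial_attention(feature_map):
    """Simplified spatial attention over a (channels, H, W) feature map.

    Channel-wise mean and max pooling summarize "where" the informative
    responses are; a sigmoid of their sum gives a per-pixel weight in
    (0, 1) that rescales every channel, emphasizing salient locations
    and suppressing spatially redundant ones.
    """
    avg_pool = feature_map.mean(axis=0)                  # (H, W)
    max_pool = feature_map.max(axis=0)                   # (H, W)
    attn = 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))  # sigmoid weight
    return feature_map * attn[None, :, :], attn

# Toy feature map: 2 channels on a 2x2 grid with one salient location.
fm = np.zeros((2, 2, 2))
fm[:, 0, 0] = 5.0
out, attn = spatial_attention(fm)
```

The salient pixel receives a weight near 1, while featureless pixels are damped toward the sigmoid's neutral value of 0.5, which is the reweighting effect the attention module exploits.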
Show Figures

Figure 1
<p>Application of hyperspectral images. (<b>a</b>) Hyperspectral imagery for agricultural applications. (<b>b</b>) Hyperspectral images for Earth observation.</p>
Figure 2
<p>Flow of framework for SATNet.</p>
Figure 3
<p>3D OctConv Module.</p>
Figure 4
<p>Spatial Attention Module. (Reprinted with permission from Ref. [<a href="#B30-remotesensing-14-05902" class="html-bibr">30</a>]. 2018, Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon).</p>
Figure 5
<p>Transformer Encoder Module. (Reprinted with permission from Ref. [<a href="#B33-remotesensing-14-05902" class="html-bibr">33</a>]. 2021, Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.)</p>
Figure 6
<p>Spatial attention network structure combining 3D Octave convolution and ViT.</p>
Figure 7
<p>Comparison plots of OA and runtime when using different numbers of spectral bands on different datasets: (<b>a</b>) IP dataset; (<b>b</b>) UP dataset; (<b>c</b>) SA dataset.</p>
Figure 8
<p>Classification maps generated by all of the competing methods on the Indian Pines data with 10% training samples; (<b>a</b>) is ground truth map, and (<b>b</b>–<b>i</b>) are the results of Baseline, 3DCNN, SSRN, HybridSN, ViT, 3D_Octave, OctaveVit, SATNet, respectively.</p>
Figure 9
<p>Classification maps generated by all of the competing methods on the University of Pavia data with 5% training samples; (<b>a</b>) is ground truth map, and (<b>b</b>–<b>i</b>) are the results of Baseline, 3DCNN, SSRN, HybridSN, ViT, 3D_Octave, OctaveVit, SATNet, respectively.</p>
Figure 10
<p>Classification maps generated by all of the competing methods on the Salinas Scene data with 5% training samples; (<b>a</b>) is ground truth map, and (<b>b</b>–<b>i</b>) are the results of Baseline, 3DCNN, SSRN, HybridSN, ViT, 3D_Octave, OctaveVit, and SATNet, respectively.</p>
Figure 11
<p>Confusion matrix of different methods for the Indian Pines data set. (<b>a</b>) Baseline (<b>b</b>) 3DCNN (<b>c</b>) SSRN (<b>d</b>) HybridSN (<b>e</b>) ViT (<b>f</b>) 3D_Octave (<b>g</b>) OctaveVit (<b>h</b>) SATNet.</p>
Figure 12
<p>Confusion matrix of different methods for the University of Pavia data set. (<b>a</b>) Baseline (<b>b</b>) 3DCNN (<b>c</b>) SSRN (<b>d</b>) HybridSN (<b>e</b>) ViT (<b>f</b>) 3D_Octave (<b>g</b>) OctaveVit (<b>h</b>) SATNet.</p>
Figure 13
<p>Confusion matrix of different methods for the Salinas Scene data set. (<b>a</b>) Baseline (<b>b</b>) 3DCNN (<b>c</b>) SSRN (<b>d</b>) HybridSN (<b>e</b>) ViT (<b>f</b>) 3D_Octave (<b>g</b>) OctaveVit (<b>h</b>) SATNet.</p>
21 pages, 25034 KiB  
Article
Repeated (4D) Marine Geophysical Surveys as a Tool for Studying the Coastal Environment and Ground-Truthing Remote-Sensing Observations and Modeling
by Giuseppe Stanghellini, Camilla Bidini, Claudia Romagnoli, Renata Archetti, Massimo Ponti, Eva Turicchia, Fabrizio Del Bianco, Alessandra Mercorella, Alina Polonia, Giulia Giorgetti, Andrea Gallerani and Luca Gasperini
Remote Sens. 2022, 14(22), 5901; https://doi.org/10.3390/rs14225901 - 21 Nov 2022
Cited by 4 | Viewed by 2887
Abstract
Sandy beaches and the nearshore environment are dynamic coastal systems characterized by sediment mobilization driven by alternating stormy and mild wave conditions. However, this natural behavior of beaches can be altered by coastal defense structures. Repeated surveys carried out with autonomous surface vehicles (ASVs) may represent an interesting tool for studying nearshore dynamics and testing the effects of mitigation strategies against erosion. We present a one-year experiment involving repeated stratigraphic and morpho-bathymetric surveys of a nearshore environment prone to coastal erosion along the Emilia-Romagna coast (NE Italy), the Lido di Dante beach, carried out between October 2020 and December 2021 using an ASV. Seafloor and subseafloor “snapshots” collected at different time intervals enabled us to delineate the seasonal variability and shed light on key controlling variables, which could be used to integrate and calibrate remote-sensing observations and modeling. The results demonstrated that repeated surveys could be successfully employed for monitoring coastal areas and represent a promising tool for studying coastal dynamics on a medium/short (years/months) timescale. Full article
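A core operation in such repeated (4D) surveys is differencing successive co-registered bathymetric grids to separate real morphological change from survey noise. A minimal sketch; the grids, dates, and the 5 cm vertical-uncertainty threshold are illustrative, not values from the paper:

```python
import numpy as np

def bathymetric_change(dem_t0, dem_t1, noise_threshold=0.05):
    """Difference two co-registered bathymetric grids (depths negative,
    in meters) from repeated surveys. Changes smaller in magnitude than
    the assumed vertical uncertainty are masked to zero, so the result
    keeps only plausible erosion (negative) or deposition (positive).
    """
    diff = np.asarray(dem_t1, dtype=float) - np.asarray(dem_t0, dtype=float)
    diff[np.abs(diff) < noise_threshold] = 0.0
    return diff

# Toy 2x2 grids for two survey epochs: one cell deepens (erosion),
# one shoals (deposition), one changes within the noise level.
t0 = np.array([[-2.00, -2.50], [-3.00, -3.50]])
t1 = np.array([[-2.30, -2.52], [-2.80, -3.50]])
change = bathymetric_change(t0, t1)
```

Accumulating such change maps across the four surveys is what reveals the seasonal sediment mobilization patterns described above.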
Show Figures

Graphical abstract
Figure 1
<p>The study site in NE Italy, along the Emilia-Romagna coast. Projection: UTM33/WGS84.</p>
Figure 2
<p>Shore protection structures in the Lido di Dante beach. Red lines delimit the study area (base map: Google Earth 3 April 2020; projection: UTM33N/WGS84).</p>
Figure 3
<p>Acquisition line grid of echographic and seismic reflection lines implemented during the four surveys and the location of the seafloor sediment sampling stations (numbered red circles). A-A’ and B-B’ mark the locations of the seismic sections.</p>
Figure 4
<p>Distribution of linear differences (distances) between two consecutive runs along the same navigation path. Left: counts vs. distances; right: cumulative errors during the acquisition.</p>
Figure 5
<p>Mesh of the bathymetric model used for the hydrodynamic simulations.</p>
Figure 6
<p>Shaded relief morphobathymetric maps of the seafloor collected between October 2020 and December 2021 (<b>a</b>–<b>d</b>), along the same acquisition tracks in the study area. See also Bathy4D.mp4, <a href="#app1-remotesensing-14-05901" class="html-app">Supplementary Material</a>.</p>
Figure 7
<p>Grain-size distribution in the study area. Proportions of the three main grain-size classes are represented using a ternary color scale (red = fine sand and larger particles, green = very fine sand, blue = silt and clay). Sampling stations are indicated by white circles numbered from 1 to 20. Contour lines are based on the percentage of very fine sand.</p>
Figure 8
<p>(<b>a</b>) Seafloor reflectivity map and cumulative percentage of fine sand and larger particles (&gt;125 μm) at each sampling station (color-coded circles) in October 2020, represented by the red-to-blue aligned color scale. (<b>b</b>) Linear correlation between reflectivity and cumulative percentage of fine sand and larger particles (&gt;125 μm). The shaded area represents the 95% confidence interval.</p>
Figure 9
<p>Reflectivity models of the seafloor collected at different time intervals (<b>a</b>–<b>d</b>). See also Ref4D.mp4, <a href="#remotesensing-14-05901-f003" class="html-fig">Figure 3</a> and <a href="#app1-remotesensing-14-05901" class="html-app">Supplementary Material</a>.</p>
Figure 10
<p>Seismic reflection profiles A-A’ (<b>a</b>) and B-B’ (<b>b</b>) collected onshore and offshore with respect to the LCS barrier (location in <a href="#remotesensing-14-05901-f003" class="html-fig">Figure 3</a>).</p>
Figure 11
<p>Time slices of reflectivity values at different depths. White arrow indicates a buried erosional channel not observed in the morphobathymetric maps. See also Time_Slices.mp4, <a href="#app1-remotesensing-14-05901" class="html-app">Supplementary Material</a>.</p>
Figure 12
<p>Differences in seafloor models between DTMs during the experiment. Comparisons are between subsequent surveys (<b>a</b>–<b>c</b>) and between the first and last surveys (<b>d</b>). Brownish colors indicate seafloor accretion, while blue colors indicate negative sedimentary budgets.</p>
Figure 13
<p>Wave and current fields modeled for conditions corresponding to storms from the NE (<b>a</b>,<b>b</b>) and SE (<b>c</b>,<b>d</b>). White boxes delimit the study area.</p>
Figure 14
<p>(<b>a</b>) Simulated seabed changes between October 2020 and April 2021. (<b>b</b>) Differential DTMs evaluated for the same period.</p>
16 pages, 6501 KiB  
Article
Comprehensive Analysis of PERSIANN Products in Studying the Precipitation Variations over Luzon
by Jie Hsu, Wan-Ru Huang and Pin-Yi Liu
Remote Sens. 2022, 14(22), 5900; https://doi.org/10.3390/rs14225900 - 21 Nov 2022
Cited by 8 | Viewed by 2166
Abstract
This study evaluated the capability of satellite precipitation estimates from five products derived from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (including PERSIANN, PERSIANN-CCS, PERSIANN-CDR, PERSIANN-CCS-CDR, and PDIR-Now) to represent precipitation characteristics over Luzon. The analyses focused on monthly and [...] Read more.
This study evaluated the capability of satellite precipitation estimates from five products derived from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN, PERSIANN-CCS, PERSIANN-CDR, PERSIANN-CCS-CDR, and PDIR-Now) to represent precipitation characteristics over Luzon. The analyses focused on monthly and daily timescales from 2003 to 2015 and adopted surface observations from the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) platform as the evaluation base. Among the five satellite precipitation products (SPPs), PERSIANN-CDR showed the best ability to qualitatively and quantitatively estimate spatiotemporal variations of precipitation over Luzon for the majority of the examined features, with the exception of extreme precipitation events, for which PERSIANN-CCS-CDR is superior to the other SPPs. These results highlight the usefulness of adding the cloud patch approach to PERSIANN-CDR (producing PERSIANN-CCS-CDR) for depicting the characteristics of extreme precipitation events over Luzon. A similar advantage of the cloud patch approach for extreme precipitation estimates was also revealed by the comparison of PERSIANN, PERSIANN-CCS, and PDIR-Now. Our analyses also highlighted that all PERSIANN-series products exhibit better skill in detecting precipitation characteristics over west Luzon than over east Luzon. To overcome this weakness, we suggest that adjusting the cloud patch approach over east Luzon (e.g., using different cloud temperature thresholds or different brightness temperature–precipitation rate relationships) may be helpful. Full article
(This article belongs to the Special Issue Remote Sensing of Precipitation: Part III)
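The paper's evaluation rests on grid-to-grid correlation coefficient (CC) and RMSE for monthly precipitation, plus categorical probability-of-detection (POD) and false-alarm-ratio (FAR) scores for daily events at rain/no-rain thresholds. These standard metrics can be sketched as follows; the function names and example values are illustrative, with the 0.1 mm/day default matching the threshold quoted in the figure captions:

```python
import numpy as np

def cc_rmse(obs, est):
    """Pearson correlation coefficient and root mean square error
    between paired precipitation series (e.g., APHRO vs. one SPP)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    cc = float(np.corrcoef(obs, est)[0, 1])
    rmse = float(np.sqrt(np.mean((est - obs) ** 2)))
    return cc, rmse

def pod_far(obs, est, threshold=0.1):
    """Probability of detection and false alarm ratio for daily rain
    events, using a rain/no-rain threshold in mm/day."""
    obs_rain = np.asarray(obs, float) > threshold
    est_rain = np.asarray(est, float) > threshold
    hits = int(np.sum(obs_rain & est_rain))
    misses = int(np.sum(obs_rain & ~est_rain))
    false_alarms = int(np.sum(~obs_rain & est_rain))
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Hypothetical daily series (mm/day): observation vs. satellite estimate
obs = [0.0, 5.0, 20.0, 0.0, 100.0]
est = [0.0, 4.0, 25.0, 2.0, 80.0]
pod, far = pod_far(obs, est)  # 3 hits, 0 misses, 1 false alarm
```

A higher POD with a lower FAR indicates better detection skill, which is how the paper separates the products' performance over west versus east Luzon.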
Show Figures

Graphical abstract
Figure 1
<p>Geographical location and topography of Luzon.</p>
Figure 2
<p>Spatial distribution for the precipitation and wind vector (925 hPa) during four seasons that was averaged from 2003–2015 over two selected domains: (<b>a</b>) East Asia (108°E–133°E, 5°N–30°N); and (<b>b</b>) Luzon (118°E–126°E, 12°N–20°N). Precipitation was extracted from the APHRODITE (hereinafter APHRO), while the wind vector was extracted from the ERA5 reanalysis. MAM, JJA, SON, and DJF denote March to May (spring), June to August (summer), September to November (autumn), and December to February (winter), respectively.</p>
Figure 3
<p>Phase diagram for depicting the timing of the occurrence of maximum value of monthly precipitation constructed based on APHRO and the five products of the PERSIANN-series (including PANN, PCCS, PCDR, PCCSCDR, and PDIR) that were averaged from 2003–2015. The gray line represents the spatial resolution of 0.25° × 0.25° in all data. In the five products of the PERSIANN-series, the areas that depict the same phase as that of APHRO are marked by black dots. To help the discussion in the main text, the number of grid points (at a 0.25° × 0.25° spatial scale) that are covered by dotted areas is provided in the top right panel of each SPP.</p>
Figure 4
<p>(<b>a</b>) Monthly mean precipitation (denoted as P) of the APHRO (bars) and the five products of the PERSIANN-series (including PANN, PCCS, PCDR, PCCSCDR, and PDIR) (lines) in the entire Luzon region as averaged from 2003–2015; (<b>b</b>) the correlation coefficient (hereinafter CC) and the root mean square error (hereinafter RMSE) calculated between the time series of APHRO and each product of the PERSIANN-series in (<b>a</b>); (<b>c</b>) three inland sub-regions of Luzon: west, northeast, and southeast; (<b>d</b>) as in (<b>a</b>), but for west Luzon (<b>c</b>); (<b>e</b>) as in (<b>a</b>), but for northeast Luzon (<b>c</b>); and (<b>f</b>) as in (<b>a</b>), but for southeast Luzon (<b>c</b>).</p>
Figure 5
<p>(<b>a</b>) Grid-to-grid CC between the satellite precipitation products and APHRO for the annual evolution of monthly precipitation as averaged from 2003–2015; and (<b>b</b>) as in (<b>a</b>), but for the RMSE. The contour intervals in (<b>a</b>,<b>b</b>) are 0.05 and 0.5 mm⋅d<sup>−1</sup>, respectively.</p>
Figure 6
<p>(<b>a</b>) The CC of <a href="#remotesensing-14-05900-f005" class="html-fig">Figure 5</a>a area-averaged at different elevations; and (<b>b</b>) the RMSE of <a href="#remotesensing-14-05900-f005" class="html-fig">Figure 5</a>b area-averaged at different elevations.</p>
Figure 7
<p>(<b>a</b>) The scatter plot for the comparison of monthly precipitation between the APHRO and the SPPs from 2003–2015. In (<b>a</b>), the spatial grids (0.25° × 0.25°) over the entire Luzon region are used for comparison. The values of CC and RMSE between the APHRO and the SPPs are added in each panel; and (<b>b</b>) as in (<b>a</b>), but for daily precipitation.</p>
Figure 8
<p>(<b>a</b>) The frequency of daily precipitation events as estimated from <a href="#remotesensing-14-05900-f007" class="html-fig">Figure 7</a>b at different intensities; and (<b>b</b>) the frequency of various types of precipitation events estimated from (<b>a</b>): non-rainy (≤0.1 mm·d<sup>−1</sup>); rainy (&gt;0.1 mm·d<sup>−1</sup>); light (0.1–5 mm·d<sup>−1</sup>); moderate (5–20 mm·d<sup>−1</sup>); heavy (20–100 mm·d<sup>−1</sup>); extreme (&gt;100 mm·d<sup>−1</sup>) events.</p>
Figure 9
<p>Statistical scores calculated for comparing daily precipitation grids that were extracted from <a href="#remotesensing-14-05900-f008" class="html-fig">Figure 8</a>a between APHRO and five SPPs during 2003–2015 at different precipitation thresholds: (<b>a</b>) the probability of detection (POD); and (<b>b</b>) the false alarm ratio (FAR). In (<b>b</b>), the results for FAR at a 0.1 mm·d<sup>−1</sup> threshold are provided in the left top panel and enlarged.</p>
Figure 10
<p>(<b>a</b>) The frequency of hit events at criteria of daily precipitation of &gt;0.1 mm⋅d<sup>−1</sup> from 2013–2015 over Luzon; (<b>b</b>) as in (<b>a</b>), but for miss events; and (<b>c</b>) as in (<b>a</b>), but for false alarm events. The contour intervals in (<b>a</b>–<b>c</b>) are 250 mm·d<sup>−1</sup>, 250 mm·d<sup>−1</sup>, and 50 mm·d<sup>−1</sup>, respectively.</p>
Figure 11
<p>(<b>a</b>) The spatial distribution of POD at criteria of daily precipitation of &gt;0.1 mm⋅d<sup>−1</sup> from 2013–2015 over Luzon; and (<b>b</b>) as in (<b>a</b>), but for FAR. The contour intervals in (<b>a</b>,<b>b</b>) are 0.025 and 0.01, respectively.</p>
17 pages, 6074 KiB  
Article
Modeling and Analysis of Microwave Emission from Multiscale Soil Surfaces Using AIEM Model
by Ying Yang, Kun-Shan Chen and Rui Jiang
Remote Sens. 2022, 14(22), 5899; https://doi.org/10.3390/rs14225899 - 21 Nov 2022
Cited by 1 | Viewed by 1818
Abstract
Natural rough surfaces have inherent multiscale roughness. This article presents the modeling and analysis of microwave emission from a multiscale soil surface. Unlike the linear superposition of different correlation functions with various correlation lengths, we applied the frequency modulation concept to characterize the [...] Read more.
Natural rough surfaces have inherent multiscale roughness. This article presents the modeling and analysis of microwave emission from a multiscale soil surface. Unlike the linear superposition of different correlation functions with various correlation lengths, we applied the frequency modulation concept to characterize the multiscale roughness, in which the modulation does not destroy the surface’s curvature but only modifies it. The multiscale effect on emission under different observation geometries and surface parameters was examined using the AIEM model. The paper provides new insights into the dependence of polarized emissivity on multiscale roughness: V-polarized emissivity is much less sensitive to multiscale roughness across moisture contents from dry to wet (5–30%), whereas H-polarized emissivity is sensitive to multiscale roughness, especially at higher moisture content. Without accounting for the multiscale roughness effect, the predicted emissivity carries considerable uncertainty, even for the same baseline correlation length. V-polarized emissivity is less sensitive to the multiscale effect than H-polarized emissivity, and a higher modulation ratio indicates larger emissivity. Multiscale roughness weakens the polarization difference, particularly in higher moisture conditions. In addition, ignoring the multiscale effect leads to underestimated emissivity to a certain extent, particularly at larger RMS heights. Finally, when accounting for multiscale roughness, model predictions of emission from a soil surface agree well with two independently measured data sets. Full article
(This article belongs to the Special Issue Advances on Radar Scattering of Terrain and Applications)
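As the caption of Figure 3 states, frequency modulation scales the RMS slope of a Gaussian-correlated surface by the modulation factor √(1 + π²r_m²), with r_m = 0 recovering the plain Gaussian surface. A small numeric sketch of that relation; the function names are illustrative, and the √2·σ/l RMS slope is the standard result for a Gaussian correlation function:

```python
import math

def gaussian_rms_slope(sigma, l):
    """RMS slope of a Gaussian-correlated surface: sqrt(2) * sigma / l,
    for RMS height sigma and correlation length l (same units)."""
    return math.sqrt(2.0) * sigma / l

def modulated_rms_slope(sigma, l, r_m):
    """RMS slope with frequency modulation of ratio r_m: the Gaussian
    value scaled by sqrt(1 + pi**2 * r_m**2); r_m = 0 gives back the
    unmodulated Gaussian surface."""
    return gaussian_rms_slope(sigma, l) * math.sqrt(1.0 + math.pi ** 2 * r_m ** 2)

# With sigma = 0.5 cm and l = 5 cm (values used in the paper's examples),
# a modulation ratio of 1.0 steepens the RMS slope by a factor of about 3.3.
s0 = modulated_rms_slope(0.5, 5.0, 0.0)
s1 = modulated_rms_slope(0.5, 5.0, 1.0)
```

This makes concrete why the modulation "does not destroy the surface's curvature but only modifies it": the baseline correlation length is unchanged, and only the effective slope statistics are rescaled.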
Show Figures

Figure 1
<p>An illustrative description of the multiscale roughness components of a rough surface profile; each colored line represents a different roughness scale in terms of spatial wavenumber.</p>
Figure 2
<p>Gaussian modulated correlation function with different modulation ratio <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> </mrow> </semantics></math>.</p>
Figure 3
<p>Modulation factor (<math display="inline"><semantics> <mrow> <msqrt> <mrow> <mrow> <mo>(</mo> <mrow> <mn>1</mn> <mo>+</mo> <msup> <mi>π</mi> <mn>2</mn> </msup> <msubsup> <mi>r</mi> <mi>m</mi> <mn>2</mn> </msubsup> </mrow> <mo>)</mo> </mrow> </mrow> </msqrt> </mrow> </semantics></math>) of RMS slope for the Gaussian modulated rough surface. When <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, the modulation factor is equal to 1 and the RMS slope corresponds to Gaussian surface.</p>
Figure 4
<p>Comparison between the truncated and real correlation functions with <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5.0</mn> <mi>λ</mi> <mo> </mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.4</mn> </mrow> </semantics></math>.</p>
Figure 5
<p>Same as <a href="#remotesensing-14-05899-f004" class="html-fig">Figure 4</a> except for <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5.0</mn> <mi>λ</mi> </mrow> </semantics></math> and (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> </mrow> </semantics></math>.</p>
Figure 6
<p>Comparison of backscattering between the analytical modulation and combined correlation functions. (<b>a</b>) HH polarization, (<b>b</b>) VV polarization.</p>
Figure 7
<p>(<b>a</b>) Emissivity as function of frequency for exponential modulated multiscale rough surfaces, <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> <mo> </mo> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.92</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.25</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <msup> <mrow> <mn>40</mn> </mrow> <mo>∘</mo> </msup> </mrow> </semantics></math>. (<b>b</b>) Difference between H and V-polarized emissivities.</p>
Figure 8
<p>(<b>a</b>) Emissivity as function of look angle from exponential modulated multiscale rough surfaces, <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> <mo> </mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mrow> <mo> </mo> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.92</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> </mrow> </semantics></math>. (<b>b</b>) Difference of emissivity between V and H polarizations.</p>
Figure 9
<p>The multiscale sensitivity index (MSI) as function of sensor parameters (look angle and frequency) at <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> <mo> </mo> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.25</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) H polarization; (<b>b</b>) V polarization.</p>
Figure 10
<p>(<b>a</b>) Emissivity as function of modulation ratio from exponential modulated multiscale rough surface <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>2.15</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.36</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> </mrow> </semantics></math>. (<b>b</b>) Difference of emissivity between V and H polarizations.</p>
Figure 11
<p>(<b>a</b>) Emissivity as function of RMS height for exponential modulated multiscale rough surfaces, <math display="inline"><semantics> <mrow> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> <mo> </mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.92</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <msup> <mrow> <mn>40</mn> </mrow> <mo>∘</mo> </msup> </mrow> </semantics></math>. (<b>b</b>) Difference of emissivity between V and H polarizations.</p>
Figure 12
<p>Emissivity as a function of RMS height and modulation ratio for exponential modulated multiscale rough surfaces, <math display="inline"><semantics> <mrow> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>12</mn> <mo>−</mo> <mi>j</mi> <mn>1.8</mn> <mo>,</mo> <mo> </mo> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.25</mn> <mo> </mo> <mi>cm</mi> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <msup> <mrow> <mn>40</mn> </mrow> <mo>∘</mo> </msup> </mrow> </semantics></math>. (<b>a</b>) H polarization; (<b>b</b>) V polarization.</p>
Figure 13
<p>(<b>a</b>) Emissivity as function of soil moisture for exponential modulated multiscale rough surfaces <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.92</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.25</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>1.0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <msup> <mrow> <mn>40</mn> </mrow> <mo>∘</mo> </msup> </mrow> </semantics></math>. (<b>b</b>) Difference of emissivity between V and H polarizations.</p>
Figure 14
<p>The multiscale sensitivity index (MSI) as function of surface parameters at <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>5.5</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <msup> <mrow> <mn>40</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <mo> </mo> <mi>l</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>1.92</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.6</mn> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) H polarization; (<b>b</b>) V polarization.</p>
Figure 15
<p>Brightness temperature as function of look angle in H and V polarizations with <span class="html-italic">l</span> = 20 cm (<span class="html-italic">r<sub>m</sub></span> = 0), <span class="html-italic">l<sub>e</sub></span> = 17.71 cm (<span class="html-italic">r<sub>m</sub></span> = 0.12), <span class="html-italic">σ</span> = 0.05 cm. (<b>a</b>) L band (<math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>1.413</mn> <mo> </mo> <mi>GHz</mi> </mrow> </semantics></math>), (<b>b</b>) C band (<math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>6.8</mn> <mo> </mo> <mi>GHz</mi> </mrow> </semantics></math>).</p>
Figure 16
<p>Comparison of the emissivity between model predictions and measurements [<a href="#B25-remotesensing-14-05899" class="html-bibr">25</a>]: <math display="inline"><semantics> <mrow> <msub> <mi>m</mi> <mi>v</mi> </msub> <mo>=</mo> <mn>0.26</mn> <mo>,</mo> <mo> </mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <mi>l</mi> <mo>=</mo> <mn>10</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>8.85</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.12</mn> </mrow> <mo>)</mo> </mrow> <mo>,</mo> <mo> </mo> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>6.86</mn> <mo> </mo> <mi>cm</mi> <mo> </mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.12</mn> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>1.4</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> </mrow> </semantics></math> <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.73</mn> <mo> </mo> <mi>cm</mi> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>1.4</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <mi>σ</mi> <mo>=</mo> <mn>2.45</mn> <mo> </mo> <mi>cm</mi> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>5</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <mi>σ</mi> <mo>=</mo> <mn>0.73</mn> <mo> </mo> <mi>cm</mi> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>10.7</mn> <mo> </mo> <mi>GHz</mi> <mo>,</mo> <mo> </mo> <mi>σ</mi> <mo>=</mo> <mn>0.73</mn> <mo> </mo> <mi>cm</mi> </mrow> </semantics></math>.</p>
Figure 17
<p>Comparison of H and V polarized emissivities between model predictions and measurements at L, C, and X bands [<a href="#B25-remotesensing-14-05899" class="html-bibr">25</a>]. The volumetric soil moisture content in the top 0–10 cm layer was 0.250 cm<sup>3</sup>/cm<sup>3</sup> for the smooth field and 0.259 cm<sup>3</sup>/cm<sup>3</sup> for the rough field, with the effective correlation length and modulation ratio set to <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>8.85</mn> <mo> </mo> <mi>cm</mi> <mo>;</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.12</mn> </mrow> </semantics></math> for the rough surface and <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>e</mi> </msub> <mo>=</mo> <mn>6.86</mn> <mo> </mo> <mi>cm</mi> <mo>,</mo> <mo> </mo> <msub> <mi>r</mi> <mi>m</mi> </msub> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math> for the very rough surface.</p>
Figure 18
<p>Comparison of H and V polarized emissivity between model predictions and measurements at L-band [<a href="#B26-remotesensing-14-05899" class="html-bibr">26</a>].</p>
1 pages, 189 KiB  
Correction
Correction: Moghimi et al. Automatic Relative Radiometric Normalization of Bi-Temporal Satellite Images Using a Coarse-to-Fine Pseudo-Invariant Features Selection and Fuzzy Integral Fusion Strategies. Remote Sens. 2022, 14, 1777
by Armin Moghimi, Ali Mohammadzadeh, Turgay Celik, Brian Brisco and Meisam Amani
Remote Sens. 2022, 14(22), 5898; https://doi.org/10.3390/rs14225898 - 21 Nov 2022
Viewed by 1301
Abstract
There was an error in the original publication [...] Full article
18 pages, 11144 KiB  
Article
Assimilation of Water Vapor Retrieved from Radar Reflectivity Data through the Bayesian Method
by Junjian Liu, Shuiyong Fan, Mamtimin Ali, Huoqing Li, Hailiang Zhang, Yu Wang and Ailiyaer Aihaiti
Remote Sens. 2022, 14(22), 5897; https://doi.org/10.3390/rs14225897 - 21 Nov 2022
Cited by 1 | Viewed by 2852
Abstract
This work describes the implementation of an updated radar reflectivity assimilation scheme with the three-dimensional variational (3D-Var) system of Weather Research and Forecast (WRF). The updated scheme, instead of the original scheme assuming the relative humidity to a fixed value where radar reflectivity [...] Read more.
This work describes the implementation of an updated radar reflectivity assimilation scheme within the three-dimensional variational (3D-Var) system of the Weather Research and Forecasting (WRF) model. Instead of setting the relative humidity to a fixed value wherever radar reflectivity exceeds a threshold, as the original scheme does, the updated scheme assimilates pseudo water vapor retrieved by the Bayesian method, which should in theory be consistent with the clouds/precipitation provided by the model. To verify the effect of the updated scheme on the improvement of precipitation simulation, a convective case in Wenquan County and continuous monthly simulations with contrasting experiments in Xinjiang were performed. The single-reflectivity-observation test demonstrates that the water vapor retrieved by the Bayesian method is consistent with the surrounding meteorological situation. In the convective case, both the updated and original scheme results show that the assimilation of pseudo water vapor can adjust the environmental water vapor and temperature conditions. This improves the hourly precipitation forecast skill more than the contrasting experiment, which assimilated only conventional observations and radar radial velocity data. In the continuous monthly experiments, the updated scheme yields a more reasonable water vapor analysis and obtains a better forecast skill for 6 h accumulated precipitation than the contrasting experiments. Full article
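The Bayesian retrieval weights every background column in a search window (the 21 × 21 grid-point window of Figure 4) by how closely its simulated reflectivity matches the observation, then averages the background water vapor with those weights. A sketch of one common Gaussian-weight form; the weight function, the error parameter, and the function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def bayesian_pseudo_qv(z_obs, z_bkg, qv_bkg, sigma_z=5.0):
    """Retrieve pseudo water vapor for one reflectivity observation.

    Weights every background point in a search window by how well its
    simulated reflectivity z_bkg (dBZ) matches the observed z_obs, then
    returns the weight-averaged background water vapor qv_bkg (g/kg).
    sigma_z is the assumed reflectivity error standard deviation (dBZ).
    """
    z_bkg = np.asarray(z_bkg, float)
    qv_bkg = np.asarray(qv_bkg, float)
    weights = np.exp(-0.5 * ((z_bkg - z_obs) / sigma_z) ** 2)
    return float(np.sum(weights * qv_bkg) / np.sum(weights))

# Hypothetical 1-D window: background reflectivities (dBZ) and water vapor (g/kg)
qv = bayesian_pseudo_qv(z_obs=40.0,
                        z_bkg=[30.0, 40.0, 50.0],
                        qv_bkg=[6.0, 8.0, 10.0])
```

Because the weights come from the model's own simulated reflectivity, the retrieved pseudo observation stays consistent with model clouds/precipitation, which is the motivation the abstract gives for replacing the fixed-humidity assumption.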
Show Figures

Figure 1
<p>Flowchart of the sequence of operations of water vapor retrieval and assimilation.</p>
Full article ">Figure 2
<p>(<b>a</b>) The model forecast domains and the observations assimilated. The outer box indicates domain 1, and the inner black box indicates domain 2. The red dots indicate SYNOP stations. (<b>b</b>) The radar stations and the available range of assimilated radar data in domain 2. The location of the Wenquan national meteorological station, west of the Bole radar, is represented by the black cross marker, which is the same below. The black dots show the locations of the nine radar stations. The altitude of the ground above sea level (unit m) is represented by the colour shades.</p>
Full article ">Figure 3
<p>Flowcharts of spin-up, data assimilation, and forecast for the (<b>a</b>) convective case and (<b>b</b>) continuous monthly experiments.</p>
Full article ">Figure 4
<p>The single reflectivity (unit dBZ) observation test of the relative humidity (unit %) retrieved at the 12th model level. (<b>a</b>) Assimilated reflectivity. (<b>b</b>) Observed reflectivity. (<b>c</b>) Weight of each assimilated reflectivity. (<b>d</b>) Assimilated relative humidity. (<b>e</b>) Relative humidity retrieved using the updated scheme. (<b>f</b>) Relative humidity retrieved using the original scheme. The black square indicates the window of 21 × 21 grid points used for the Bayesian method. The red square indicates the single reflectivity observation point.</p>
Full article ">Figure 5
<p>Environmental conditions with the geopotential height (unit gpm, blue lines), wind vector (unit m/s, wind bars), temperature (unit °C, red lines) at 500 hPa, and relative humidity (unit %, colour shades) at 750 hPa at (<b>a</b>) 0000 UTC, (<b>b</b>) 0600 UTC 30 July 2019 from NCEP-FNL.</p>
Full article ">Figure 6
<p>Observed composite reflectivity fields (unit dBZ) at (<b>a</b>) 0700 UTC, (<b>b</b>) 0730 UTC, (<b>c</b>) 0800 UTC, (<b>d</b>) 0820 UTC, (<b>e</b>) 0840 UTC, and (<b>f</b>) 0900 UTC 30 July 2019. The black squares indicate the assimilated reflectivity, where the observed radar reflectivity is higher than 25 dBZ, which is the same below. The shades of gray indicate the model terrain height in each plot, which is the same below.</p>
Full article ">Figure 7
<p>Horizontal and vertical cross-sections of assimilated composite reflectivity fields (unit dBZ) from (<b>a</b>) C2Rad for 1-h forecasts beginning at 0700 UTC 30 July 2019. The wind vector (unit m/s, black arrows) is calculated by u and w wind in which w speed is multiplied by 10.0 in (<b>b</b>,<b>c</b>) which is the same below. The red triangle in each plot of vertical cross-sections indicates the location of Wenquan station. The red lines indicate vertical cross-sections through AB and CD. The color points indicate the vertical distribution of observed reflectivity fields (unit dBZ).</p>
Full article ">Figure 8
<p>Vertical cross-section of the analysis and the increment of water vapor (unit g/kg) along line AB at 0800 UTC. (<b>a</b>) The analysis from C2Rad. (<b>b</b>) The analysis from C2RadBy. (<b>c</b>) The increment from C2Rad. (<b>d</b>) The increment from C2RadBy. (<b>e</b>) The difference of increment between C2Rad and C2RadBy. The shades in each plot indicate the water vapor mixing ratio (unit g/kg). The solid red lines indicate the increment of temperature (unit 0.001 °C). The dashed blue lines indicate negative divergence (unit 1/s).</p>
Full article ">Figure 9
<p>Vertical cross-section of the analysis and the increment of water vapor (unit g/kg) along line CD at 0800 UTC. (<b>a</b>) The analysis from C2Rad. (<b>b</b>) The analysis from C2RadBy. (<b>c</b>) The increment from C2Rad. (<b>d</b>) The increment from C2RadBy. (<b>e</b>) The difference of increment between C2Rad and C2RadBy. The shades in each plot indicate the water vapor mixing ratio (unit g/kg). The solid red lines indicate the increment of temperature (unit 0.001 °C). The dashed blue lines indicate negative divergence (unit 1/s).</p>
Full article ">Figure 10
<p>Horizontal cross-section of the analysis and the increment of water vapor (unit g/kg) at 0800 UTC at 3000 m. (<b>a</b>) The analysis from C2Rad. (<b>b</b>) The analysis from C2RadBy. (<b>c</b>) The increment from C2Rad. (<b>d</b>) The increment from C2RadBy. (<b>e</b>) The difference of increment between C2Rad and C2RadBy. The shades in each plot indicate the water vapor mixing ratio (unit g/kg). The solid red lines indicate the increment of temperature (unit 0.001 °C). The dashed blue lines indicate the analysis of negative divergence (unit 1/s). The ellipse of dashed red lines indicates the main areal coverage of precipitation, which is the same below.</p>
Full article ">Figure 11
<p>Horizontal cross-section of the analysis and the increment of water vapor (unit g/kg) at 0800 UTC at 5000 m. (<b>a</b>) The analysis from C2Rad. (<b>b</b>) The analysis from C2RadBy. (<b>c</b>) The increment from C2Rad. (<b>d</b>) The increment from C2RadBy. (<b>e</b>) The difference of increment between C2Rad and C2RadBy. The shades in each plot indicate the water vapor mixing ratio (unit g/kg). The solid red lines indicate the increment of temperature (unit 0.001 °C). The dashed blue lines indicate the analysis of negative divergence (unit 1/s).</p>
Full article ">Figure 12
<p>Horizontal cross-section of the analysis and the increment of water vapor (unit g/kg) at 0800 UTC at 7000 m. (<b>a</b>) The analysis from C2Rad. (<b>b</b>) The analysis from C2RadBy. (<b>c</b>) The increment from C2Rad. (<b>d</b>) The increment from C2RadBy. (<b>e</b>) The difference of increment between C2Rad and C2RadBy. The shades in each plot indicate the water vapor mixing ratio (unit g/kg). The solid red lines indicate the increment of temperature (unit 0.001 °C). The dashed blue lines indicate the analysis of negative divergence (unit 1/s).</p>
Full article ">Figure 13
<p>The increment and difference of precipitable water (PW, unit mm) and convective available potential energy (CAPE, unit J/kg) at 0800 UTC 30 July 2019. (<b>a</b>) The increment of PW from C2Rad. (<b>b</b>) The increment of PW from C2RadBy. (<b>c</b>) The difference of PW between (<b>a</b>,<b>b</b>). (<b>d</b>) The increment of CAPE from C2Rad. (<b>e</b>) The increment of CAPE from C2RadBy. (<b>f</b>) The difference of CAPE between (<b>d</b>,<b>e</b>).</p>
Full article ">Figure 14
<p>Hourly accumulated precipitation of (<b>a</b>) C1Con, (<b>b</b>) C1Rad, and (<b>c</b>) C3RadBy at 0900 UTC. The colored dots show the locations and hourly accumulated precipitation (in millimeters) of SYNOP observations. The forecast hourly accumulated precipitation (unit mm) is represented by the colour shades.</p>
Full article ">Figure 15
<p>Taylor diagrams of the hourly accumulated precipitation forecast at 0900 UTC from 30 July 2019. The triangle and circle indicate the increased percentage of ETS relative to C1Con or E1Con, which is the same below.</p>
Full article ">Figure 16
<p>Diagnostic plots of water vapor (unit g/kg). (<b>a</b>) Scatter distribution of water vapor mixing ratio between analysis and background. (<b>b</b>) QQ distribution of water vapor mixing ratio. (<b>c</b>) The distribution of the analysis increment at different altitudes (unit m) of ground above sea level. (<b>d</b>) PDF distribution of the analysis increment. The blue lines and circles indicate E3RadBy, and the red lines and circles indicate E2Rad.</p>
Full article ">Figure 17
<p>Taylor diagrams of the six hours accumulated precipitation forecast at 1800 UTC from 1 July to 31 July 2019.</p>
Full article ">
31 pages, 7102 KiB  
Article
Self-Organizing Control of Mega Constellations for Continuous Earth Observation
by Yun Xu, Yulin Zhang, Zhaokui Wang, Yunhan He and Li Fan
Remote Sens. 2022, 14(22), 5896; https://doi.org/10.3390/rs14225896 - 21 Nov 2022
Cited by 6 | Viewed by 2672
Abstract
This work presents a novel self-organizing control method for mega constellations to meet continuous Earth observation requirements. To reduce the TT&C pressure caused by the large number of satellites, constellation satellites are controlled with respect to intersatellite constraints rather than [...] Read more.
This work presents a novel self-organizing control method for mega constellations to meet continuous Earth observation requirements. To reduce the TT&C pressure caused by the large number of satellites, constellation satellites are controlled with respect to intersatellite constraints rather than to the designed configurations. By analyzing the street-of-coverage (SOC) of coplanar constellation satellites, the continuous coverage constraint of the mega constellation is transformed into constraints on the right ascension of the ascending node (RAAN) and on the relative motion bound between every two adjacent coplanar satellites. The proposed continuous coverage constraint can be satisfied by most ongoing or planned mega constellations. Artificial potential functions (APFs) are used to realize self-organizing control, with the scale-independent relative orbital elements (SIROEs) newly introduced as the control variables. Using the Gaussian variational equations and Lyapunov theory, the stability of the APF control in quadratic form is proven, from which it follows that the APF control variables of the controlled satellite should have the same time derivative as the target satellite states under two-body Keplerian motion, and SIROEs are an ideal choice. The proposed controllers and self-organizing rules are verified by simulation in a sub-constellation of the GW-2 mega constellation. The results demonstrate good control performance and ground coverage. Full article
(This article belongs to the Special Issue CubeSats Applications and Technology)
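The quadratic-form APF control described above amounts to gradient descent on a quadratic potential of the element errors, plus a damping term that keeps the Lyapunov derivative negative. A minimal sketch of that structure, assuming illustrative gains and hypothetical names (the paper's controller acts on SIROEs through the Gaussian variational equations, which this sketch does not model):

```python
import numpy as np

def apf_control(elem_err, elem_err_rate, k_att=1e-3, k_damp=2e-2):
    """Control command from a quadratic artificial potential function.

    V = 0.5 * k_att * ||elem_err||^2 attracts the controlled satellite's
    relative orbital elements toward the target; damping on the error
    rates makes dV/dt negative along trajectories (Lyapunov stability).
    Gains k_att and k_damp are illustrative, not the paper's values.
    """
    e = np.asarray(elem_err, float)
    de = np.asarray(elem_err_rate, float)
    # negative potential gradient plus rate damping
    return -k_att * e - k_damp * de

# At the target (zero error, zero error rate) the command vanishes,
# which is why the target states must share the controlled variables'
# time derivative under two-body motion: the error stays an equilibrium.
u0 = apf_control(np.zeros(3), np.zeros(3))
```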
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Satellite coverage geometry.</p>
Full article ">Figure 2
<p>Street-of-coverage geometry of one orbital plane.</p>
Full article ">Figure 3
<p>Streets-of-coverage geometries of adjacent orbital planes.</p>
Full article ">Figure 4
<p>Street-of-coverage geometry of the perturbed coplanar satellites. D and E are two sub-satellite points. A and C are the two intersections of the two coverage circles. Points A, B, and C form a right triangle, as do points D, E, and F.</p>
Full article ">Figure 5
<p>Illustration of the scale-independent relative orbital elements (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>−</mo> <mi>y</mi> </mrow> </semantics></math> plane).</p>
Full article ">Figure 6
<p>Illustration of the scale-independent relative orbital elements (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>−</mo> <mi>z</mi> </mrow> </semantics></math> plane).</p>
Full article ">Figure 7
<p>Relative motion-bound changing with time.</p>
Full article ">Figure 8
<p>Semi-major axis time histories without semi-major axis control.</p>
Full article ">Figure 9
<p>Semi-major axis time histories under semi-major axis control.</p>
Full article ">Figure 10
<p>Control effects changing with time.</p>
Full article ">Figure 11
<p>The initial distribution of the coplanar satellites in the geocentric inertial system. Red dots represent satellites that do not satisfy the continuous coverage constraint, while green dots represent satellites that do.</p>
Full article ">Figure 12
<p>The final distribution of the coplanar satellites in the geocentric inertial system.</p>
Full article ">Figure 13
<p>Number of unqualified satellites in the coplanar satellites.</p>
Full article ">Figure 14
<p>The differences in maximum bounds away from the corresponding constraints. Red line is zero-value standard. Blue lines are maximum bound differences.</p>
Full article ">Figure 15
<p>The differences in minimum bounds away from the corresponding constraints. Red line is zero-value standard. Blue lines are minimum bound differences.</p>
Full article ">Figure 16
<p>Semi-major axis differences away from the target value.</p>
Full article ">Figure 17
<p>RAAN differences away from the target RAAN under <math display="inline"><semantics> <msub> <mi>J</mi> <mn>2</mn> </msub> </semantics></math> perturbation. Red lines represent <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>min</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>max</mi> </msub> </mrow> </semantics></math> respectively. Blue lines represent RAAN differences.</p>
Full article ">Figure 18
<p>Control effects of coplanar satellites in the geocentric inertial system.</p>
Full article ">Figure 19
<p>Initial distribution of the mega constellation. Red dots represent satellites that do not satisfy the continuous coverage constraint, while green dots represent satellites that do.</p>
Full article ">Figure 20
<p>Final distribution of the mega constellation.</p>
Full article ">Figure 21
<p>Number of unqualified satellites in the mega constellation.</p>
Full article ">Figure 22
<p>Control effects of constellation satellites.</p>
Full article ">Figure 23
<p>Maximum bound differences of constellation satellites. Red line is zero-value standard. Blue lines are maximum bound differences.</p>
Full article ">Figure 24
<p>Minimum bound differences of constellation satellites. Red line is zero-value standard. Blue lines are minimum bound differences.</p>
Full article ">Figure 25
<p>Semi-major axis differences of constellation satellites.</p>
Full article ">Figure 26
<p>RAAN differences of constellation satellites. Red lines represent <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>min</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi mathvariant="sans-serif">Ω</mi> <mi>max</mi> </msub> </mrow> </semantics></math> respectively. Blue lines represent RAAN differences.</p>
Full article ">Figure 27
<p>Coverage performance of the GW-2 sub-constellation.</p>
Full article ">Figure 28
<p>Multiplicity of coverage of the GW-2 sub-constellation at the initial and final moments.</p>
Full article ">
18 pages, 4974 KiB  
Article
Ionospheric Oscillation with Periods of 6–30 Days at Middle Latitudes: A Response to Solar Radiative, Geomagnetic, and Lower Atmospheric Forcing
by Zhenlin Yang, Sheng-Yang Gu, Yusong Qin, Chen-Ke-Min Teng, Yafei Wei and Xiankang Dou
Remote Sens. 2022, 14(22), 5895; https://doi.org/10.3390/rs14225895 - 21 Nov 2022
Cited by 4 | Viewed by 2002
Abstract
This research studies the medium timescale (6–30 days) ionospheric response over the Wuhan area to solar radiative, recurrent geomagnetic, and lower atmospheric forcing. The ionospheric response is examined by wavelet analysis of the total electron content (TEC) over the Wuhan area from 2001 [...] Read more.
This research studies the medium timescale (6–30 days) ionospheric response over the Wuhan area to solar radiative, recurrent geomagnetic, and lower atmospheric forcing. The ionospheric response is examined by wavelet analysis of the total electron content (TEC) over the Wuhan area from 2001 to 2020. We focus on ionospheric oscillations with periods centered at the harmonics of the 27-day solar rotation (27, 13.5, 9, and 6.75 days). The results show that the quasi-27-day TEC oscillations at middle latitude correlate better overall with solar radiation than with recurrent geomagnetic activity, although the correlation between TEC and recurrent geomagnetic activity increases significantly during solar minimum. Ionospheric oscillations with periods shorter than 15 days, in contrast, correlate better with recurrent geomagnetic activity. Moreover, a quasi-27-day TEC oscillation event at middle latitude caused by convective activity in the lower atmosphere was studied, suggesting that lower atmospheric forcing is also an important driver of ionospheric oscillations. In addition, the ionospheric oscillations over the Wuhan area show unique regional characteristics: the regional ionosphere does not respond well to Kp oscillations with periods shorter than 20 days, particularly 13.5 days. Full article
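The period analysis behind results like Figure 2 can be reproduced with a classic Lomb-Scargle periodogram. A minimal self-contained sketch on a synthetic quasi-27-day series; this is not the authors' processing chain, and the series, period grid, and names are illustrative only:

```python
import numpy as np

def lomb_scargle(t, y, periods):
    """Classic Lomb-Scargle periodogram evaluated at the given periods
    (days). Handles unevenly sampled series; here applied to a daily one."""
    t = np.asarray(t, float)
    y = np.asarray(y, float) - np.mean(y)
    power = []
    for p in periods:
        w = 2.0 * np.pi / p
        # phase offset tau makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power.append((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return np.asarray(power) / (2.0 * np.var(y))

# Synthetic TEC-like series with a quasi-27-day oscillation
t = np.arange(365.0)
y = 10.0 + 2.0 * np.sin(2.0 * np.pi * t / 27.0)
periods = np.linspace(6.0, 30.0, 241)  # the 6-30 day band, 0.1-day step
best = periods[np.argmax(lomb_scargle(t, y, periods))]
```

On real TEC data, peaks near 27, 13.5, 9, and 6.75 days would mark the solar-rotation harmonics discussed above.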
Show Figures

Figure 1
<p>(<b>a</b>,<b>b</b>) The original IGS (Beidou GEO)-TEC data during 2017. (<b>c</b>,<b>d</b>) The IGS (Beidou GEO)-TEC data during 2017 after low-pass filtering. (<b>e</b>,<b>f</b>) Wavelet spectra of the filtered IGS (Beidou GEO)-TEC during 2017 with a period range of 16–32 days. (<b>g</b>,<b>h</b>) Same as (<b>e</b>,<b>f</b>), but the period range is 0–16 days.</p>
Full article ">Figure 2
<p>The Lomb-Scargle (LS) periodogram of the original F10.7 (Kp) index. The blue curve represents the periodogram of F10.7 and the red curve represents the periodogram of Kp. The green vertical dash lines represent the four harmonic periods of solar rotation.</p>
Full article ">Figure 3
<p>(<b>a</b>,<b>b</b>) The original F10.7 (Kp) index data during 2017. (<b>c</b>,<b>d</b>) The F10.7 (Kp) index during 2017 after low-pass filtering. (<b>e</b>,<b>f</b>) Wavelet spectra of the filtered F10.7 (Kp) index during 2017.</p>
Full article ">Figure 4
<p>Examples of the ionospheric oscillations with period centering at the four harmonic periods of solar rotation.</p>
Full article ">Figure 5
<p>(<b>a</b>) Counts of ionospheric oscillations with periods centering at the four harmonic periods of solar rotation. (<b>b</b>) The Lomb-Scargle (LS) periodogram of the original IGS-TEC data during 2001–2018 with a period range of 6–30 days. The green vertical dash lines represent the four harmonic periods of solar rotation. (<b>c</b>) Same as (<b>b</b>), but the period range is 6–16 days.</p>
Full article ">Figure 6
<p>The annual variation in solar activity and ionospheric oscillations with periods centering at the four harmonic periods of solar rotation.</p>
Full article ">Figure 7
<p>Four types of relationships between the different ionospheric oscillations with solar activity and geomagnetic activity.</p>
Full article ">Figure 8
<p>(<b>a</b>,<b>b</b>) The amplitude variations of the 27 (6–15)-day oscillations in TEC (red line) and F10.7 (blue line) during 2001–2018. (<b>c</b>,<b>d</b>) The amplitude variations of the 27 (6–15)-day oscillations in TEC (red line) and Kp (green line) during 2001–2018. (<b>e</b>,<b>f</b>) The comparison of the yearly correlation coefficient of the 27 (6–15)-day oscillations in TEC-F10.7 (red line) and TEC-Kp (blue line).</p>
Full article ">Figure 9
<p>(<b>a</b>–<b>c</b>) Wavelet analysis of IGS-TEC, F10.7, and Kp oscillation during Days 100–200, 2002.</p>
Full article ">Figure 10
<p>(<b>a</b>,<b>b</b>) The period–wavenumber power spectra (the period–time power spectra) of zonal wind at 90 km height during day 100–200, 2002. (<b>c</b>) The spatial structure of the 27-day oscillations in the zonal wind with wavenumber 0.</p>
Full article ">Figure 11
<p>The period–time power spectra of OLR during Days 100–200, 2002.</p>
Full article ">
23 pages, 8412 KiB  
Article
Vine Canopy Reconstruction and Assessment with Terrestrial Lidar and Aerial Imaging
by Igor Petrović, Matej Sečnik, Marko Hočevar and Peter Berk
Remote Sens. 2022, 14(22), 5894; https://doi.org/10.3390/rs14225894 - 21 Nov 2022
Cited by 9 | Viewed by 2535
Abstract
For successful dosing of plant protection products, the characteristics of the vine canopies, on which the spray amount is based, must be known. In a field experiment, we compared two optical measurement methods, terrestrial lidar and aerial photogrammetry, with manual defoliation [...] Read more.
For successful dosing of plant protection products, the characteristics of the vine canopies, on which the spray amount is based, must be known. In a field experiment, we compared two optical measurement methods, terrestrial lidar and aerial photogrammetry, with manual defoliation of selected vines. In agreement with other authors, our results show that both terrestrial lidar and aerial photogrammetry represented the canopy well, with correlation coefficients of around 0.9 between the measured variables and the number of leaves. We found that aerial photogrammetry produced significantly more points in the point cloud, although this depended on the choice of ground sampling distance. Our results show that, for aerial UAS photogrammetry, subdividing the vine canopy segments to 5 × 5 cm gives the best representation of the vine canopy volume. Full article
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
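The per-segment volume computation described in the paper's Figure 5 reduces to trapezoidal integration of the measured canopy extent along the row. A minimal sketch under simplified assumptions (a single width profile per segment and a fixed segment height; the function and argument names are illustrative, not the authors' code):

```python
import numpy as np

def segment_volume(x, width, seg_height):
    """Approximate a canopy segment's volume by trapezoidal integration
    of the canopy width along the row direction x, multiplied by the
    segment height. All inputs in metres; result in cubic metres."""
    x = np.asarray(x, float)
    width = np.asarray(width, float)
    # trapezoidal rule: mean of adjacent widths times the spacing
    area = np.sum(0.5 * (width[1:] + width[:-1]) * np.diff(x))
    return float(area * seg_height)
```

For a 2 m long segment of constant 1 m width and 0.25 m height this gives 0.5 m³, matching the exact box volume; irregular lidar/photogrammetry profiles are handled the same way, point by point.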
Show Figures

Figure 1
<p>Location of the surveying part of the vineyard (red) and UAS flight restriction zones (military, blue and urban, yellow).</p>
Full article ">Figure 2
<p>Lidar and UAS survey area 28 m × 7 m (4 vine rows) and the distribution of GCPs and MTPs. The GCPs were fixed on the ground between the vine rows and the MTPs were fixed at the top of the vine rows.</p>
Full article ">Figure 3
<p>Positioning of the lidar with respect to the vines and numbering of the segments of the vine canopy (two vines on the whole test area). View from the beginning of the row (in the center), with lidar position and detected points on the vine. The left and right images show the canopy view and the annotations that the lidar saw on the left and right sides of the canopy as it moved through the vineyard.</p>
Full article ">Figure 4
<p>A (blue) section of the vineyard used for manual defoliation measurements, with local coordinate system (<span class="html-italic">X</span>, <span class="html-italic">Y</span>, <span class="html-italic">Z</span>). The green vertical poles and the red ribbon (collinear with x) can be seen; the horizontal ropes were somewhat hidden by the leaves.</p>
Full article ">Figure 5
<p>Representation of the calculation of vine canopy volumes of individual scans. A section of segments S13, S14, and S15, colored yellow, blue, and red, is on the left; a representation of the volume calculation from multiple points within segment S14 using the trapezoidal method is on the right.</p>
Full article ">Figure 6
<p>The UAS surveyed experimental vineyard as a point cloud from UAS photogrammetry measurements with about 195 million points (<b>a</b>), and the section used for the method comparison with about 30 million points (<b>b</b>).</p>
Full article ">Figure 7
<p>Comparison of point clouds from terrestrial lidar (red) and UAS photogrammetry (blue) in a side view (right side of the canopy).</p>
Full article ">Figure 8
<p>Comparison of point clouds from terrestrial lidar (red) and UAS photogrammetry (blue) in a top view (right side of the canopy).</p>
Full article ">Figure 9
<p>Comparison of point clouds from terrestrial lidar (red) and UAS photogrammetry (blue) in a side view (left side of the canopy).</p>
Full article ">Figure 10
<p>Comparison of point clouds from terrestrial lidar (red) and UAS photogrammetry (blue) in a top view (left side of the canopy).</p>
Full article ">Figure 11
<p>Reconstruction of the canopy from lidar measurement points. Segments of the same height are shown in different colors: (<b>a</b>) left side of the canopy, (<b>b</b>) right side of the canopy.</p>
Full article ">Figure 11 Cont.
<p>Reconstruction of the canopy from lidar measurement points. Segments of the same height are shown in different colors: (<b>a</b>) left side of the canopy, (<b>b</b>) right side of the canopy.</p>
Full article ">Figure 12
<p>Photogrammetric side view of canopy volume and a 3D projection at different segment subdivisions (<span class="html-italic">n</span> = 1 to <span class="html-italic">n</span> = 15) for the right half of the vine. Segments of the same height are shown in different colors.</p>
Full article ">Figure 13
<p>Correlation between number of points in segment (lidar measurements) and leaf area in segment (manual measurements): (<b>a</b>) left side of canopy and (<b>b</b>) right side of canopy. Correlation was performed for all 12 segments.</p>
Full article ">Figure 14
<p>Correlation between vine canopy volume (lidar measurements) and leaf area (manual measurements): (<b>a</b>) left side of vine canopy and (<b>b</b>) right side of vine canopy. Correlation was performed for all 12 segments.</p>
Full article ">Figure 15
<p>Correlation between vine canopy volume (photogrammetric measurements) and leaf area (manual measurements): (<b>a</b>) left side of vine canopy and (<b>b</b>) right side of vine canopy. Correlation was performed for all 12 segments.</p>
Full article ">Figure 16
<p>Correlation between the number of points (photogrammetric measurements) and the number of points (lidar measurements): (<b>a</b>) left side of canopy and (<b>b</b>) right side of canopy. The correlation was performed for all 12 segments.</p>
Full article ">Figure 17
<p>Correlation between digital volume (photogrammetric measurements) and digital volume (lidar measurements): (<b>a</b>) left side of canopy and (<b>b</b>) right side of canopy. The correlation was performed for all 12 segments.</p>
Full article ">
19 pages, 1008 KiB  
Review
Recent Advances and Challenges in the Seismo-Electromagnetic Study: A Brief Review
by Hongyan Chen, Peng Han and Katsumi Hattori
Remote Sens. 2022, 14(22), 5893; https://doi.org/10.3390/rs14225893 - 21 Nov 2022
Cited by 42 | Viewed by 5226
Abstract
Due to their potential application in earthquake forecasting, seismo-electromagnetic phenomena have been intensively studied for several decades all over the world. Measurements from the ground to space have now accumulated a large amount of observational data, providing an excellent opportunity for seismo-electromagnetic study. Using [...] Read more.
Due to their potential application in earthquake forecasting, seismo-electromagnetic phenomena have been intensively studied for several decades all over the world. Measurements from the ground to space have now accumulated a large amount of observational data, providing an excellent opportunity for seismo-electromagnetic study. Using a variety of analytical methods to examine past earthquake events, many electromagnetic changes associated with earthquakes have been independently reported, supporting the existence of pre-earthquake anomalies. This study aims to give a brief review of seismo-electromagnetic studies preceding earthquakes and to discuss possible ways to apply seismo-electromagnetic signals at the current stage. In general, seismo-electromagnetic signals can be classified into electric and magnetic changes in the lithosphere and perturbations in the atmosphere. We start with seismo-electromagnetic research in the lithosphere, then review studies of the lower and upper atmosphere, including some recent topics that have aroused intense scholarly interest. The potential mechanisms of seismo-electromagnetic phenomena are also discussed. Although a number of statistical tests show that electromagnetic anomalies may contain predictive information for major earthquakes, with probability gains of approximately 2–6, it is still difficult to use seismo-electromagnetic signals efficiently in practice. To address this, we finally put forward some preliminary ideas about how to apply seismo-electromagnetic information in earthquake forecasting. Full article
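The probability gains of roughly 2–6 quoted above have a simple operational meaning: the earthquake rate inside alarm windows divided by the unconditional background rate. A toy calculation with hypothetical numbers (not data from any cited study):

```python
def probability_gain(quakes_in_alarm, alarm_days, total_quakes, total_days):
    """Probability gain of an alarm-based forecast: the occurrence rate
    during alarm windows divided by the unconditional background rate.
    A gain of 1 means no skill; 2-6 matches the range cited above."""
    rate_in_alarm = quakes_in_alarm / alarm_days
    background_rate = total_quakes / total_days
    return rate_in_alarm / background_rate

# e.g. 4 of 10 quakes fall inside 100 alarm days of a 1000-day record
gain = probability_gain(4, 100, 10, 1000)  # -> 4.0
```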
Show Figures

Figure 1
<p>Earthquake forecast network based on machine learning.</p>
Full article ">Figure 2
<p>Multi-geophysical observation system for earthquake modeling and forecasting (modified from [<a href="#B83-remotesensing-14-05893" class="html-bibr">83</a>]).</p>
Full article ">
24 pages, 7365 KiB  
Article
A Recursive Hull and Signal-Based Building Footprint Generation from Airborne LiDAR Data
by Xiao Li, Fang Qiu, Fan Shi and Yunwei Tang
Remote Sens. 2022, 14(22), 5892; https://doi.org/10.3390/rs14225892 - 21 Nov 2022
Cited by 7 | Viewed by 2332
Abstract
Automatically generating a building footprint from an airborne LiDAR point cloud is an active research topic because of its widespread usage in numerous applications. This paper presents an efficient and automated workflow for generating building footprints from pre-classified LiDAR data. In this workflow, [...] Read more.
Automatically generating a building footprint from an airborne LiDAR point cloud is an active research topic because of its widespread use in numerous applications. This paper presents an efficient and automated workflow for generating building footprints from pre-classified LiDAR data. In this workflow, LiDAR points that belong to the building category are first segmented into multiple clusters by applying the grid-based DBSCAN clustering algorithm, so that each cluster contains the points of an individual building. The outermost points of each building are then extracted, and the recursive convex hull algorithm is applied to them to generate the initial outline of each building. Since LiDAR points are irregularly distributed, the initial building outline contains irregular zig-zag shapes. To obtain a regularized building footprint that is close to the true building boundary, a signal-based regularization algorithm is developed. The initial outline is first transformed into a signal, which reveals the wholistic geometric structure of the building outline after a denoising procedure. By analyzing the denoised signal, the locations of corners are identified and the regularized building footprint is generated. The performance of the proposed workflow is tested and evaluated using two datasets with different point densities and building types. The qualitative assessment reveals that the proposed workflow performs satisfactorily in generating building footprints, even for buildings with complex structures. The quantitative assessment compares the signal-based regularization with existing regularization methods on the 149 buildings contained in the test dataset; the proposed method achieves superior results on a number of commonly used accuracy metrics. Full article
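The signal transform at the heart of the regularization maps each outline vertex to a turning angle, so true corners appear as pulses while zig-zag noise averages out after smoothing. A minimal sketch of that idea (not the authors' implementation; vertex ordering is assumed counter-clockwise and the names are illustrative):

```python
import numpy as np

def angular_deviation_signal(vertices):
    """Signed turning angle (degrees) at each vertex of a closed
    outline. A clean rectangle yields a constant +90-degree signal;
    zig-zag noise yields small alternating pulses instead."""
    v = np.asarray(vertices, float)
    a = v - np.roll(v, 1, axis=0)    # incoming edge at each vertex
    b = np.roll(v, -1, axis=0) - v   # outgoing edge at each vertex
    # 2-D cross and dot products give the signed angle between edges
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    dot = np.sum(a * b, axis=1)
    return np.degrees(np.arctan2(cross, dot))

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
sig = angular_deviation_signal(square)  # four +90-degree corners
```

Thresholding or clustering the smoothed pulses of this signal is one way to pick out corner locations, in the spirit of the corner identification step described above.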
Show Figures

Figure 1
<p>(<b>A</b>) The point cloud of a building and its derived initial outline. (<b>B</b>) The magnified view of the initial outline.</p>
Full article ">Figure 2
<p>Examples of the grid-based DBSCAN algorithm.</p>
Full article ">Figure 3
<p>Examples of multiple returns. An emitted laser pulse produces two returns, corresponding to two different LiDAR points: one from the eaves and the other from the ground.</p>
Full article ">Figure 4
<p>Initial building outline generation: (<b>A</b>) LiDAR point cloud of an individual building (2−D view). (<b>B</b>) Extracted outermost points. (<b>C</b>) Convex polygon generated after applying convex hull algorithm only once. (<b>D</b>) The concave polygon generated by removing spurious line after applying the recursive convex hull algorithm on the deviated points.</p>
Full article ">Figure 5
<p>Transforming the initial building outline to discrete signal. (<b>A</b>) The initial outline of a building; (<b>B</b>) The entire plot of the transformed signal; (<b>C</b>) The magnified view of angular deviations at vertex number 13 and number 14; (<b>D</b>) The magnified view of the signal plot when index equals to 13 and 14.</p>
Full article ">Figure 6
<p>Simulated building outline and corresponding angular deviation signal. (<b>A</b>) The simulated building outline that contains no zig-zag shape; (<b>B</b>) The corresponding angular deviation signal.</p>
Full article ">Figure 7
<p>Gaussian Smoothing. (<b>A</b>) Angular Deviation Signal. (<b>B</b>) The denoised signal after applying Gaussian Smoothing.</p>
Full article ">Figure 8
<p>Example of insertion operation. (<b>A</b>) The U-shape building outline; (<b>B</b>) The outline after applying insertion operation; (<b>C</b>) The angular deviation signal of the original U-shape building outline; (<b>D</b>) The angular deviation signal of the outline after insertion; (<b>E</b>) The smoothed signal of original U-shape outline; (<b>F</b>) The smoothed signal of outline after insertion.</p>
Full article ">Figure 9
<p>Subdivide the denoised signal and calculate features. (<b>A</b>) The initial building outline; (<b>B</b>) the portion of the initial building outline from vertex number 306 to 327; (<b>C</b>) the denoised signal (absolute value); (<b>D</b>) the magnified view of the denoised signal from 306 to 327.</p>
Full article ">Figure 10
<p>The result of clustering. The Y-coordinate of each dot is the normalized nonlinear indicator. X-coordinate represents the normalized angular accumulation.</p>
Full article ">Figure 11
<p>The location of the identified building corner on initial outline and their corresponding position on the plot of the signal. (<b>A</b>) All the 8 corners are marked with blue dots; (<b>B</b>) all the peaks of 8 corner pulses are marked with blue dots.</p>
Full article ">Figure 12
<p>(<b>A</b>) Regularized building outline without the zig-zag shape. (<b>B</b>) Regularized building footprint with adjusted direction.</p>
Full article ">Figure 13
<p>The point cloud (colorized by height) of test dataset and overlapped airborne imagery (<b>A</b>) Toronto. (<b>B</b>) Santa Rosa.</p>
Full article ">Figure 14
<p>Points colorized according to its cluster id. Each cluster represents a building. (<b>A</b>) Toronto; (<b>B</b>) Santa Rosa.</p>
Full article ">Figure 15
<p>Initial Building Outline and magnified view. (<b>A</b>)Toronto. (<b>B</b>) Santa Rosa.</p>
Full article ">Figure 16
<p>Regularized Building Footprint and magnified view. (<b>A</b>)Toronto. (<b>B</b>) Santa Rosa.</p>
Full article ">Figure 17
<p>Comparison of Initial Building Outline, Regularized Building Footprint, Reference Building Footprint and D-P Simplified Outline. The first three examples are in the Santa Rosa dataset. The last example is in Toronto dataset.</p>
Full article ">Figure 18
<p>Example of Vegetation occlusion in Santa Rosa dataset.</p>
Full article ">Figure 19
<p>Example in Toronto dataset.</p>
Full article ">
20 pages, 4361 KiB  
Article
A Prediction Model for the Outbreak Date of Spring Pollen Allergy in Beijing Based on Satellite-Derived Phenological Characteristics of Vegetation Greenness
by Xinyi Yang, Wenquan Zhu and Cenliang Zhao
Remote Sens. 2022, 14(22), 5891; https://doi.org/10.3390/rs14225891 - 21 Nov 2022
Cited by 5 | Viewed by 2614
Abstract
Pollen allergies have a serious impact on people’s physical and mental health. Accurate and efficient prediction of the outbreak date of pollen allergies plays an important role in the protection of people sensitive to allergenic pollen. Combining new social media data with satellite data to develop a forecast model for the outbreak date of pollen allergies is a frontier research topic. This study extracted the real outbreak dates of spring pollen allergies from Sina Weibo records from 2011 to 2021 in Beijing and calculated five vegetation indices of three vegetation types as phenological characteristics within the 30 days before the average outbreak date. The sensitivity coefficients and correlation coefficients were used to screen the phenological characteristics that best reflected the outbreak date of spring pollen allergy. Based on the best characteristic, two kinds of prediction models for the outbreak date of spring pollen allergy in Beijing were established (the linear fit prediction model and the cumulative linear fit prediction model), and the root mean square error (RMSE) was calculated as the prediction accuracy. The results showed that (1) the date on which the EVI2 (2-band enhanced vegetation index) of evergreen forest first reaches 0.138 best reflects the outbreak date of pollen allergies in spring, and (2) the cumulative linear fit prediction model based on EVI2 in evergreen forests achieves high accuracy, with an average RMSE of 3.6 days, and can predict the outbreak date of spring pollen allergies 30 days in advance. Compared with the existing indirect prediction models (which predict pollen concentrations rather than pollen allergies), this model provides a new direct way to predict pollen allergy outbreaks by using only remote sensing time-series data before pollen allergy outbreaks. The new prediction model also has better representativeness and operability and is capable of assisting public health management. Full article
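The cumulative linear fit model described above reduces to ordinary least squares between a (cumulative) vegetation index value and the number of days to the outbreak, scored with RMSE. A minimal sketch with made-up numbers (the data below are hypothetical, not the paper's):

```python
def fit_line(xs, ys):
    """Least-squares fit ys ≈ a*xs + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def rmse(predicted, observed):
    """Root mean square error between predictions and observations."""
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(predicted)) ** 0.5

# Hypothetical per-year pairs: cumulative EVI2 on day 50 (x) vs.
# days remaining until the allergy outbreak (y, negative = not yet).
cum_evi2 = [4.1, 4.6, 5.0, 5.4, 6.0]
days_to_outbreak = [-35, -32, -30, -27, -24]
a, b = fit_line(cum_evi2, days_to_outbreak)
pred = [a * x + b for x in cum_evi2]
```

In the paper's setup, the fitted line is built from the multi-year average index series, then evaluated on a given year's daily series to forecast that year's outbreak date.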
Show Figures

Figure 1: Study area. Note the downtown area of Beijing is mainly located within the Outer Ring Road.
Figure 2: Technical process diagram for the extraction of satellite-derived vegetation greenness phenological characteristics before the pollen allergy outbreak. T in the right plots of (c) is the average DOY of the spring pollen allergy outbreak during 2011–2021; VI in the right plots of (c) and (d) is the vegetation index value corresponding to the average outbreak DOY; x1 in the left plot of (c) is the DOY 20 days earlier than the earliest outbreak date of spring pollen allergy during 2011–2021, i.e., the 50th day; x2 is the average outbreak DOY during 2011–2021, i.e., the 81st day; y1 and y2 are the vegetation index values corresponding to x1 and x2 on the fitted curve of the smoothed multiyear average daily curve of a vegetation index for a vegetation type; t_n in the right plot of (d) is the DOY corresponding to VI in each year during 2011–2021; and T_n is the outbreak DOY of spring pollen allergy in each year during 2011–2021.
Figure 3: An example showing how the prediction model of pollen allergy works for each year. (a) Preprocessing of vegetation index data during 2011–2021; (b) the final cumulative linear fit prediction model; (c) preprocessing of vegetation index data in 2021 from the 1st day to the 50th day; (d) the process of predicting pollen allergy in 2021. Y is the vegetation index value; W is the predicted number of days to the outbreak date of spring pollen allergy in Beijing (a negative value indicates the outbreak date has not yet arrived, and a positive value indicates it has passed); y is the smoothed cumulative vegetation index value on the 50th day in 2021; and w is the predicted number of days to the outbreak date in 2021. Note the difference between model building with the multi-year average vegetation index time series in (a) and model prediction with the daily vegetation index time series of a given year in (c).
Figure 4: The establishment of two prediction models based on EVI2 of evergreen forest during 2011–2021. W is the number of days to the outbreak date of spring pollen allergy in Beijing, Y1 is the smoothed daily EVI2 value of evergreen forest for a given forecast year, and Y2 is the cumulative smoothed daily EVI2 value of evergreen forest for a given forecast year.
Figure 5: The EVI2 value corresponding to the green-up date of evergreen forest. The black dots correspond to the green-up date of the evergreen forest.
Figure A1: Weibo data with the keyword “pollen allergy” in Beijing during 2011–2021. The sharp increase in 2018 reflects the wide adoption of the mobile Weibo client in that year, with mobile users accounting for 93% of active users (source: Weibo Data Center, Weibo 2018 User Development Report, 15 March 2019).
Figure A2: A schematic diagram for extracting the outbreak dates of pollen allergies based on valid microblog data.
Figure A3: Process diagram of vegetation classification.
Figure A4: Pixel frequency probability distribution of NDVI values of vegetation and non-vegetation training samples.
Figure A5: Pixel frequency probability distribution of NDVI values of evergreen and deciduous forest training samples.
18 pages, 8950 KiB  
Article
A Registration-Error-Resistant Swath Reconstruction Method of ZY1-02D Satellite Hyperspectral Data Using SRE-ResNet
by Mingyuan Peng, Guoyuan Li, Xiaoqing Zhou, Chen Ma, Lifu Zhang, Xia Zhang and Kun Shang
Remote Sens. 2022, 14(22), 5890; https://doi.org/10.3390/rs14225890 - 21 Nov 2022
Cited by 1 | Viewed by 2051
Abstract
ZY1-02D is a Chinese hyperspectral satellite, which is equipped with a visible near-infrared multispectral camera and a hyperspectral camera. Its data are widely used in soil quality assessment, mineral mapping, water quality assessment, etc. However, due to the limitations of CCD design, the swath of the hyperspectral data is narrower than that of the multispectral data. In addition, stripe noise and collages exist in the hyperspectral data, and cloud contamination in the scene further reduces its availability. To solve these problems, this article uses a swath reconstruction method, the spectral-resolution-enhancement method using ResNet (SRE-ResNet), which reconstructs wide-swath hyperspectral data from wide-swath multispectral data by modeling the mappings between the two. Experiments show that the method (1) can effectively reconstruct wide swaths of hyperspectral data, (2) can remove noise existing in the hyperspectral data, and (3) is resistant to registration error. Comparison experiments also show that SRE-ResNet outperforms existing fusion methods in both accuracy and time efficiency; thus, the method is suitable for practical application. Full article
(This article belongs to the Section Remote Sensing Image Processing)
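Reconstruction quality in fusion studies of this kind is commonly scored per pixel with spectral similarity metrics. The abstract does not name its metrics, so the spectral angle mapper (SAM) below is a generic illustration rather than necessarily the paper's choice:

```python
import math

def spectral_angle(ref, test):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    dot = sum(r * t for r, t in zip(ref, test))
    norm = math.sqrt(sum(r * r for r in ref)) * math.sqrt(sum(t * t for t in test))
    # clamp to avoid math.acos domain errors from floating-point round-off
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

Because SAM is insensitive to a uniform gain (scaled copies of a spectrum score 0), it is usually reported alongside magnitude-based metrics such as RMSE.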
Show Figures

Figure 1: Examples of defects in ZY1-02D hyperspectral data. (a) Band 129 (1880.13 nm); (b) band 166 (2501.08 nm).
Figure 2: Basic framework of SRE-ResNet.
Figure 3: ResNet structure in the SRE-ResNet framework. (a) Residual learning; (b) SRE-ResNet.
Figure 4: RGB composites comparing fusion results of SRE-ResNet and ResNet for Dataset I. (a–c) Whole scene: ground truth, SRE-ResNet prediction, ResNet prediction. (d–f) Detailed scene: ground truth, SRE-ResNet prediction, ResNet prediction.
Figure 5: RGB composites comparing fusion results of SRE-ResNet and ResNet for Dataset II. (a–c) Whole scene: ground truth, SRE-ResNet prediction, ResNet prediction. (d–f) Detailed scene: ground truth, SRE-ResNet prediction, ResNet prediction.
Figure 6: SNR comparison between SRE-ResNet and ResNet for Dataset I. (a) SRE-ResNet; (b) ResNet.
Figure 7: Quantitative fusion result comparison between SRE-ResNet and ResNet for Dataset II. (a) SRE-ResNet; (b) ResNet.
Figure 8: Simulation of registration errors between hyperspectral data and multispectral data.
Figure 9: Quantitative fusion result comparison between SRE-ResNet and ResNet for Dataset I.
Figure 10: Quantitative fusion result comparison between SRE-ResNet and ResNet for Dataset II.
Figure 11: Quality assessment result of Dataset I with different simulated registration errors.
Figure 12: Quality assessment result of Dataset II with different simulated registration errors.
Figure 13: Image of band 125 of Dataset I. (a) Original image; (b) SRE-ResNet; (c) HSCNN; (d) CNMF; (e) J-SLoL.
Figure 14: Image of band 125 of Dataset II. (a) Original image; (b) SRE-ResNet; (c) HSCNN; (d) CNMF; (e) J-SLoL.
Figure 15: RGB composites comparing fusion results for Dataset I. (a) Original image; (b) SRE-ResNet; (c) HSCNN; (d) CNMF; (e) J-SLoL.
Figure 16: RGB composites comparing fusion results for Dataset II. (a) Original image; (b) SRE-ResNet; (c) HSCNN; (d) CNMF; (e) J-SLoL.
19 pages, 5284 KiB  
Article
A Novel Estimation Method of Water Surface Micro-Amplitude Wave Frequency for Cross-Media Communication
by Jianping Luo, Xingdong Liang, Qichang Guo, Tinggang Zhao, Jihao Xin and Xiangxi Bu
Remote Sens. 2022, 14(22), 5889; https://doi.org/10.3390/rs14225889 - 20 Nov 2022
Cited by 3 | Viewed by 2402
Abstract
Cross-media communication underpins many vital applications, especially in the underwater resource exploration and biological population monitoring domains. Water surface micro-amplitude wave (WSAW) frequency detection is the key to cross-media communication, where the WSAW frequency can invert the underwater sound source frequency. However, extracting the WSAW frequency information encounters many challenges in a real environment, such as low precision and symbol synchronization issues, leading to inaccurate estimation of the WSAW frequency. Thus, this paper proposes a WSAW frequency estimation method based on an improved RELAX algorithm, incorporating two improvements. First, adding a nonlinear filter to the RELAX kernel function compensates for the filtered gain and enhances the WSAW frequency precision. Second, the improved RELAX kernel function is combined with the generalized inner product method to obtain the time distribution of the non-stationary signals, which is convenient for decoding. Several simulations and experiments applying our method on a Ka-band frequency modulated continuous wave (FMCW) radar demonstrate that our algorithm attains better performance than traditional methods, e.g., the periodogram and the RELAX algorithm. The improved algorithm can accurately extract the frequency information of the WSAW signal with a short sampling duration, further improving performance indicators of the communication system, such as the communication rate. Full article
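As a baseline for the comparisons mentioned above, the periodogram estimates a tone's frequency from the strongest DFT bin. A minimal pure-Python sketch (illustrative only; the paper's improved RELAX goes well beyond this simple peak search):

```python
import cmath
import math

def periodogram_peak(signal, fs):
    """Return the frequency (Hz) of the strongest periodogram bin."""
    n = len(signal)
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2):                       # skip DC, stop at Nyquist
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n

# A 100 Hz tone sampled at 1 kHz for 0.2 s lands exactly on bin 20.
fs = 1000.0
tone = [math.sin(2 * math.pi * 100 * t / fs) for t in range(200)]
```

The bin resolution fs/n is what limits the periodogram for short sampling durations and closely spaced tones, which is exactly the regime the improved RELAX algorithm targets.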
Show Figures

Figure 1: Schematic diagram of the experimental system.
Figure 2: The underwater sound source sends 100 Hz, 130 Hz, 180 Hz, and 300 Hz signals; RELAX fails to estimate the 300 Hz signal.
Figure 3: Estimation of the WSAW number, where the simulated underwater sound source excites five signals of different frequencies. (a) Estimation results of the above algorithm and the periodogram under high-SNR conditions and (b) under low-SNR conditions.
Figure 4: The influence of the number of sampling points at a single frequency on frequency estimation, with (a) 2000 and (b) 20,000 sampling points.
Figure 5: Estimation results of (a) the periodogram and (b) the improved RELAX algorithm.
Figure 6: The underwater sound source sends four signals of different frequencies. Results of different frequency estimation algorithms based on (a) the periodogram, which presents large side lobes; (b) RELAX, which misses the 300 Hz signal; and (c) improved RELAX, whose estimates are consistent with the transmitted source frequencies.
Figure 7: (a) Generalized inner product value of each sample; (b) time–frequency analysis result of the simulation data.
Figure 8: Cross-media communication platform construction.
Figure 9: The estimation results of the periodogram and the improved algorithm at five different durations, where the blue and red lines represent the estimated results of the periodogram and the improved algorithm, respectively.
Figure 10: Verification of the minimum frequency interval. (a) The periodogram, which cannot distinguish the 100 Hz and 130 Hz signals; (b) improved RELAX, obtaining 100.022 Hz and 102.996 Hz.
Figure 11: Weak WSAW. (a) Signal detection based on the periodogram, where the side-lobe amplitude of the 130 Hz signal is greater than the amplitude of the 400 Hz signal; (b) frequency estimation results of the RELAX algorithm: 130 Hz and 144 Hz; (c) frequency estimation results of the improved RELAX algorithm: 130 Hz and 400 Hz.
Figure 12: (a) Sample generalized inner product value; (b) time–frequency analysis result.
19 pages, 23977 KiB  
Article
A Novel Echo Separation Scheme for Space-Time Waveform-Encoding SAR Based on the Second-Order Cone Programming (SOCP) Beamformer
by Shuo Han, Yunkai Deng, Wei Wang, Qingchao Zhao, Jinsong Qiu, Yongwei Zhang and Zhen Chen
Remote Sens. 2022, 14(22), 5888; https://doi.org/10.3390/rs14225888 - 20 Nov 2022
Cited by 1 | Viewed by 2122
Abstract
Space-time waveform-encoding (STWE) synthetic aperture radar (SAR) is an effective way to accomplish high-resolution and wide-swath (HRWS) imaging. By designing a specific signal transmit mode, the echoes from several subswaths are received within a single receiving window and overlap each other in STWE-SAR. To separate the overlapped echoes, the linearly constrained minimum variance (LCMV) beamformer, a single-null beamformer, is typically used. However, the LCMV beamformer has a very narrow notch of unstable depth, which is not sufficient to accurately separate overlapped echoes with large signal energy differences between subswaths. This paper is the first to raise the issue of signal energy differences in STWE-SAR. Moreover, a novel echo separation scheme based on a second-order cone programming (SOCP) beamformer is proposed. The beam pattern generated by the SOCP beamformer allows flexible adjustment of the notch width and depth, which effectively improves the quality of separation results compared to the LCMV beamformer. The simulation results illustrate that the scheme can greatly enhance the performance of echo separation. Furthermore, the experimental results based on the X-band STWE-SAR airborne system not only demonstrate the scheme’s effectiveness but also indicate that it holds great promise for future STWE-SAR missions. Full article
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition)
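To make the single-null idea concrete: for a uniform linear array, projecting the look-direction steering vector orthogonal to the interference steering vector yields weights with an exact null toward the interference. This is a simple projection beamformer for illustration only, not the LCMV or SOCP formulations used in the paper:

```python
import cmath
import math

def steering(theta_deg, n_elem, spacing=0.5):
    """Steering vector of an n-element ULA (element spacing in wavelengths)."""
    phase = 2 * math.pi * spacing * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * phase * i) for i in range(n_elem)]

def null_steer(theta_look, theta_null, n_elem=8):
    """Weights that keep gain toward theta_look while nulling theta_null."""
    a0 = steering(theta_look, n_elem)
    a1 = steering(theta_null, n_elem)
    # projection coefficient g = (a1^H a0) / (a1^H a1)
    g = sum(x.conjugate() * y for x, y in zip(a1, a0)) / \
        sum(abs(x) ** 2 for x in a1)
    return [x - g * y for x, y in zip(a0, a1)]

def response(weights, theta_deg):
    """|w^H a(theta)|: magnitude of the array response toward theta."""
    a = steering(theta_deg, len(weights))
    return abs(sum(w.conjugate() * x for w, x in zip(weights, a)))
```

By construction w^H a1 = 0 exactly, but the null is infinitely narrow; widening and deepening the notch under gain constraints is precisely what the paper's SOCP formulation adds.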
Show Figures

Graphical abstract

Figure 1: The principle diagram of the typical STWE-SAR system with two subswaths.
Figure 2: SAR images of different scattering scenes. (a) Buildings. (b) Water surfaces.
Figure 3: The single-null beam pattern based on the LCMV beamformer.
Figure 4: Diagram of the superposition of the interference signal and the ambiguity signal range.
Figure 5: The beam pattern generated by the LCMV beamformer. (a) The null placed at the notch of the beam pattern generated by SCORE. (b) The null placed at the side lobe of the beam pattern generated by SCORE.
Figure 6: Diagram of the suppression of interference energy with a wide notch.
Figure 7: Diagram of the interference signals from the equiphasic plane.
Figure 8: The beam pattern with wide and deep notches based on the SOCP beamformer.
Figure 9: (a) The imaging diagram of the simulated system. (b) The imaging scenario for three subswaths.
Figure 10: The beam patterns generated by the conventional LCMV beamformer. (a–c) The beam patterns for subswaths 1–3.
Figure 11: The beam patterns generated by the proposed SOCP beamformer. (a–c) The beam patterns for subswaths 1–3.
Figure 12: The final 3D and 2D separation results. (a–c) Results obtained by the LCMV beamformer. (d–f) Results obtained by the SOCP beamformer.
Figure 13: Real scheme.
Figure 14: The data processing flow of the real data.
Figure 15: The single-channel imaging results before echo separation. (a) Subswath 1. (b) Subswath 2.
Figure 16: The single-channel energy. (a) Subswath 1. (b) Subswath 2. (c) Energy difference (subswath 2 − subswath 1).
Figure 17: The echo separation results of the LCMV beamformer. Region A reflects the separation results of human-made structures and roads. Regions B and C compare the separation of water and human-made structures. (a) Subswath 1. (b) Subswath 2.
Figure 18: The echo separation results of the SOCP beamformer. (a) Subswath 1. (b) Subswath 2.
Figure 19: (a) Special region C selected by the red rectangles in Figure 17a. (b) Special region C selected by the red rectangles in Figure 18a.
17 pages, 10699 KiB  
Article
Evaluating Satellite-Observed Ecosystem Function Changes and the Interaction with Drought in Songnen Plain, Northeast China
by Haiyan Li, Fang Huang, Xiuchao Hong and Ping Wang
Remote Sens. 2022, 14(22), 5887; https://doi.org/10.3390/rs14225887 - 20 Nov 2022
Cited by 6 | Viewed by 2036
Abstract
Drought is considered one of the most devastating natural disasters worldwide. In the context of global climate change, the frequency and intensity of drought have increased, thereby affecting terrestrial ecosystems. To date, the interactions between ecosystem change and drought, especially their mutual lag and cumulative effects, remain unclear. The Songnen Plain in northeastern China is one of the three major black soil areas in the world and is highly sensitive to global change. Herein, to quantify the interaction between drought and ecosystem function changes in the Songnen Plain, integrating time-series Moderate Resolution Imaging Spectroradiometer (MODIS) leaf area index (LAI), evapotranspiration (ET), and gross primary productivity (GPP) data, we calculated the standardized precipitation evapotranspiration index (SPEI) based on the meteorological data, diagnosed the causal relationship between SPEI and the ecosystem function indicators, i.e., LAI, ET, and GPP, and analyzed the time-lag and cumulative effects between the degree of drought and the three ecosystem function indicators using impulse response analysis. The results showed that the trend of SPEI (2000–2020) was positive in the Songnen Plain, indicating that the drought extent had eased towards wetness. LAI showed insignificant changes (taking up 88.34% of the total area), except for the decrease in LAI found in some forestland and grassland, accounting for 9.43%. The pixels showing a positive trend of ET and GPP occupied 24.86% and 54.94%, respectively. The proportions of pixels with significant Granger causality (0.05 level) between LAI and SPEI (32.31%) and between SPEI and GPP (52.8%) were larger. Impulse responses between each variable pair were stronger mainly between the 6th and 8th months, but differed significantly between vegetation types. Grassland and cropland were more susceptible to drought than forest. The cumulative impulse response coefficient values indicated that the mutual impacts between all variables were mainly positive. The increased wetness positively contributed to ecosystem function, and in turn the enhanced ecosystem function improved regional drought conditions to some extent. However, in the northeastern forest areas, the SPEI showed a significant negative response to increased ET and GPP, suggesting that the improved physiological functions of forest might lead to regional drought. There were regional differences in the interaction between drought conditions and ecosystem function in the Songnen Plain over the past 21 years. Full article
(This article belongs to the Special Issue Environmental Stress and Natural Vegetation Growth)
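The paper quantifies lag effects with VAR-based Granger causality and impulse responses. A much simpler proxy for the same question ("at which lag does drought best explain the vegetation response?") is a lagged cross-correlation scan, sketched here with synthetic series rather than the paper's data:

```python
import math

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(var_x * var_y)

def best_lag(x, y, max_lag):
    """Lag (in time steps) at which y correlates most strongly with earlier x."""
    return max(range(max_lag + 1), key=lambda lag: lagged_corr(x, y, lag))

# Synthetic monthly SPEI-like series; the "LAI" series responds 3 months later.
spei = [math.sin(2 * math.pi * t / 12) for t in range(60)]
lai = spei[-3:] + spei[:-3]          # rotated copy: a pure 3-month delay
```

Unlike Granger causality, this scan cannot separate direction of influence or control for autocorrelation, which is why the paper uses the VAR machinery.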
Show Figures

Graphical abstract

Figure 1: (a) Location of the study area with DEM; (b) land cover type (2020).
Figure 2: Trends of SPEI (a), LAI (b), ET (c), and GPP (d) from 2000 to 2020 (p &lt; 0.05).
Figure 3: The spatial distribution of SPEI, LAI, ET, and GPP trends from 2000 to 2020. LAI, ET, and GPP are expressed using the same color table. SPEI shows a significant increasing trend (p &lt; 0.05) throughout the study area, expressed in a stretched mode.
Figure 4: The pixel proportions of SPEI, LAI, ET, and GPP trend types from 2000 to 2020.
Figure 5: The significance level (p-value) of Granger causality. (a,b) Interactions between SPEI and LAI; (c,d) interactions between SPEI and ET; (e,f) interactions between SPEI and GPP. The pixels with insignificant trends are in grey.
Figure 6: The proportion of pixels at different significance levels in the Granger causality of the four variables.
Figure 7: Impulse responses showing the trend with time of SPEI and LAI (a,b), ET (c,d), and GPP (e,f) of different vegetation types after giving shocks to each other. The y-axis represents standard deviation (i.e., the standard deviation that a variable causes in another variable at a lag of n months).
Figure 8: The cumulative impulse coefficients in the 12 months after each group of variables was shocked: SPEI and LAI (a,b), ET (c,d), and GPP (e,f).
Figure 9: The proportion of pixels of the cumulative impulse coefficient between SPEI and the ecosystem function indicators.
15 pages, 4250 KiB  
Article
Preflight Evaluation of the Environmental Trace Gases Monitoring Instrument with Nadir and Limb Modes (EMI-NL) Based on Measurements of Standard NO2 Sample Gas
by Taiping Yang, Fuqi Si, Haijin Zhou, Minjie Zhao, Fang Lin and Lei Zhu
Remote Sens. 2022, 14(22), 5886; https://doi.org/10.3390/rs14225886 - 20 Nov 2022
Cited by 5 | Viewed by 1774
Abstract
Hyperspectral observations are used to retrieve the high-resolution horizontal distribution and vertical profiles of trace gases (O3, NO2, HCHO, and SO2), and thereby play a vital role in monitoring the spatio-temporal distribution and transport of atmospheric pollutants. These observations reflect air quality changes on global and regional scales, including over China, elucidating the impacts of anthropogenic and natural emissions on atmospheric composition and global climate change. The DaQi 02 (DQ02) satellite carries the Environmental Trace Gases Monitoring Instrument with Nadir and Limb modes (EMI-NL), which simultaneously measures high-resolution ultraviolet and visible solar scattered light in the nadir and limb directions. Combined with the absorption signatures of different trace gases in this wavelength band, these measurements can provide high-resolution horizontal and vertical distributions of trace gases. We examined the spectral measuring ability and instrument characteristics of both EMI-NL modules by measuring different light sources and concentrations of NO2 sample gas. In the nadir module test, at NO2 sample gas concentrations of 198 ppm and 513 ppm with scattered sunlight as the light source, the average relative errors across spatial pixels were 4.02% and 3.64%, respectively. At an NO2 concentration of 198 ppm with the integrating sphere as the light source, the average relative error was −2.26%. In the limb module test, at NO2 concentrations of 198 ppm and 1000 ppm with the tungsten halogen lamp as the light source, the average relative errors were −3.07% and 8.32%, respectively; with the integrating sphere, they were −3.5% and 8.06%, respectively.
The retrieved NO2 slant column density varied noticeably between spatial pixels in both modules, which can be used to estimate striping along the spatial dimension. These results confirm the ability of EMI-NL to provide accurate spaceborne monitoring of NO2 globally. Full article
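The pixel-averaged relative errors quoted above follow the usual definition of relative deviation from a reference column. A minimal numpy sketch (the helper name and sample SCD values are hypothetical, not instrument data):

```python
import numpy as np

def mean_relative_error(retrieved, reference):
    """Average relative deviation (%) of retrieved NO2 slant column
    densities from a reference value, over all spatial pixels."""
    retrieved = np.asarray(retrieved, dtype=float)
    return float(np.mean((retrieved - reference) / reference) * 100.0)

# Hypothetical SCDs (molec/cm^2) for a few spatial pixels
scd = [1.02e17, 0.97e17, 1.05e17, 0.99e17]
err = mean_relative_error(scd, reference=1.00e17)
```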
Figure 1: Diagram of EMI-NL in-orbit observation.
Figure 2: Optical system block diagram of the EMI-NL (a) nadir module and (b) limb module.
Figure 3: Schematic of the EMI-NL NO2 sample gas measurement setup: (a) nadir module with scattered sunlight; (b) nadir module with the integrating sphere; (c) limb module with the tungsten halogen lamp; (d) limb module with the integrating sphere.
Figure 4: Example NO2 DOAS fit for the 102nd spatial pixel of the nadir module at an NO2 sample gas concentration of 198 ppm with scattered sunlight.
Figure 5: NO2 SCD with fitting error and relative deviation across spatial pixels of the nadir module with scattered sunlight at NO2 concentrations of (a) 198 and (b) 513 ppm.
Figure 6: NO2 SCD with fitting error and relative deviation across spatial pixels of the nadir module with the integrating sphere at an NO2 concentration of 198 ppm.
Figure 7: NO2 SCD with fitting error and relative deviation across spatial pixels of the limb module with the tungsten halogen lamp at NO2 concentrations of (a) 198 and (b) 1000 ppm.
Figure 8: NO2 SCD with fitting error and relative deviation across spatial pixels of the limb module with the integrating sphere at NO2 concentrations of (a) 198 and (b) 1000 ppm.
Figure 9: Ground-based measurements: (a) dark current, (b) spectrum, and (c) SNR of the 92nd pixel of the VIS1 channel of the nadir module in the range 430–470 nm; (d) dark current, (e) spectrum, and (f) SNR of the 125th pixel of the UV2 channel of the limb module in the same range.
22 pages, 27308 KiB  
Article
Characterizing Spatiotemporal Patterns of Snowfall in the Kaidu River Basin from 2000–2020 Using MODIS Observations
by Jiangeng Wang, Linglong Zhu, Yonghong Zhang, Wei Huang, Kaida Song and Feng Tian
Remote Sens. 2022, 14(22), 5885; https://doi.org/10.3390/rs14225885 - 20 Nov 2022
Cited by 1 | Viewed by 1822
Abstract
Characterizing spatiotemporal patterns of snowfall is essential for understanding cryosphere responses to a warming climate. The changes in snowfall and their topographic controls in mountain regions still need to be clarified. This study proposes a general, parsimonious methodology for obtaining the frequency of snowfall in mountainous areas; the methodology is easily transferable to any other mountain region. Utilizing daily MODIS observations from June 2000 to May 2020 and a snowfall event detection algorithm, we monitored the frequency of snowfall over a long time series in the Kaidu river basin. The results are as follows: (1) The snowfall-frequency detection method has high accuracy: the annual detected results agreed with surface observations, with an R2 of 0.65 and an RMSE of 3.39. (2) The frequency of snowfall events increased monotonically with elevation, while the influence of slope angle on snowfall gradually decreased with increasing elevation. (3) The frequency of snowfall events in the Kaidu river basin showed a predominantly increasing trend, with pronounced topographic dependence. This study reveals the distribution characteristics and changing trends of snowfall in mountain regions and provides a reference for snowfall research in mountainous areas. Full article
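The agreement between MODIS-detected and station-observed annual snowfall counts is scored with R2 and RMSE; the computation is generic. A small numpy sketch with hypothetical annual event counts (not the study's data):

```python
import numpy as np

def r2_rmse(observed, predicted):
    """Coefficient of determination (R2) and root-mean-square error."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    ss_res = np.sum((o - p) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean((o - p) ** 2)))

# Hypothetical annual snowfall-event counts: station records vs. MODIS-detected
station = [38, 42, 35, 40, 45, 37]
modis = [36, 44, 33, 41, 42, 39]
r2, rmse = r2_rmse(station, modis)
```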
Figure 1: Location of the study area of Kaidu River Basin.
Figure 2: Topographic characteristics of the study area. (a) Percentage distribution with elevation over the KRB; (b) percentage distribution of slope aspects; (c) percentage distribution of slopes.
Figure 3: (a) Monthly mean precipitation and temperature at the Bayanbulak meteorological station from January 2000 to December 2019. (b) Time series of daily snow depth at the station, with a monthly histogram of mean snow depth from January 2000 to December 2019 in the inset.
Figure 4: Flowchart for snowfall detection.
Figure 5: Flowchart for estimating snow grain size.
Figure 6: Comparison of annual detected numbers of snowfall events using MODIS and observations from the Bayanbulak meteorological station for each hydrological year from 2000 to 2018. The dashed line denotes the best-fit line from linear regression; the coefficient of determination (R2), root mean square error (RMSE), and sample size (N) are also given.
Figure 7: (a) Spatial distribution of the mean frequency of snowfall from June 2000 to May 2020 over the KRB; (b) minimum, mean, and maximum frequency of snowfall with elevation over the KRB during 20 hydrological years; (c) the same with aspect in four elevation zones; and (d) the same with slope angle in four elevation zones.
Figure 8: Spatial distribution of the annual trend in the number of snowfall events over the Kaidu River Basin from June 2000 to May 2020; green areas in the inset represent trends significant at the 95% (p < 0.05) confidence level.
Figure 9: Spatial distribution of the seasonal median frequency of snowfall over the Kaidu River Basin from June 2000 to May 2020. SCA = snow-covered area; Mean = mean of the median snowfall frequency; SD = standard deviation of the median snowfall frequency.
Figure 10: Seasonal frequency of snowfall at each elevation zone, slope angle, and slope aspect over the Kaidu River Basin from June 2000 to May 2020. Solid lines: median snowfall in each class; dashed lines: median snowfall in the four seasons of the study area. MZ = montane zone, ASZ = alpine steppe zone, AMZ = alpine meadow zone, SZ = snow zone.
Figure 11: Spatial distributions of the seasonal trends in the frequency of snowfall over the KRB from June 2000 to May 2020; green areas in the inset represent trends significant at the 95% (p < 0.05) confidence level.
Figure 12: Spatial distributions of the monthly median snowfall events over the KRB from June 2000 to May 2020. SCA = snow-covered area; Mean = mean of the median snowfall events; SD = standard deviation of the median numbers of snowfall events.
Figure 13: Monthly median frequency of snowfall and snow-covered areas (SCAs) at each (a) elevation zone, (b) slope, and (c) aspect over the KRB from June 2000 to May 2020. MZ = montane zone, ASZ = alpine steppe zone, AMZ = alpine meadow zone, SZ = snow zone.
Figure 14: Spatial distributions of the trends in the monthly snowfall events over the KRB from June 2000 to May 2020. Mean = mean of the median numbers of snowfall events.
Figure 15: MODIS-observed cloudy days, snow grain size, and in situ snow depth records at the Bayanbulak meteorological station for the 2017 hydrological year.
Figure 16: Average frequency of snowfall over the KRB derived from the MODIS, ERA5, and GPM datasets from June 2000 to May 2020.
22 pages, 2634 KiB  
Article
Predicting Nitrogen Efficiencies in Mature Maize with Parametric Models Employing In-Season Hyperspectral Imaging
by Monica B. Olson, Melba M. Crawford and Tony J. Vyn
Remote Sens. 2022, 14(22), 5884; https://doi.org/10.3390/rs14225884 - 20 Nov 2022
Cited by 4 | Viewed by 2023
Abstract
Overuse of nitrogen (N), an essential nutrient in food production systems, can lead to health issues and environmental degradation. Two parameters related to N efficiency, N Conversion Efficiency (NCE) and N Internal Efficiency (NIE), measure the amount of total biomass or grain produced, respectively, per unit of N in the plant. Utilizing remote sensing to improve these efficiency measures may positively impact the stewardship of agricultural N use in maize (Zea mays L.) production. We investigated in-season hyperspectral imaging for the prediction of end-season whole-plant N concentration (pN), NCE, and NIE using partial least squares regression (PLSR) models. Image data were collected at two mid-season growth stages (V16/V18 and R1/R2) from manned aircraft and unmanned aerial vehicles for three site-years of 5 to 9 maize hybrids grown under three N treatments and two planting densities. The PLSR models yielded accurate predictions for pN at R6 (R2 = 0.73; R2 = 0.68) and NCE at R6 (R2 = 0.71; R2 = 0.73) from the two imaging times, respectively. Additionally, the PLSR models based on the R1 images, the second imaging, accurately distinguished the highest- and lowest-ranked hybrids for pN and NCE across N rates. Neither timepoint yielded accurate predictions for NIE. Genotype selection efficiency for end-season pN and NCE was increased through the use of the in-season PLSR imaging models, potentially benefiting early breeding screening methods. Full article
(This article belongs to the Special Issue Crop Biophysical Parameters Retrieval Using Remote Sensing Data)
Figure 1: Average spectral signatures (% reflectance) at wavebands from 412–917 nm by location (ACRE, Gorman, Rominger) for high (red triangle) and low N (blue dot) treatments from imaging at V16/V18 or R1/R2 at high planting densities across all hybrids.
Figure 2: Pearson product-moment correlations (r) for N concentration (pN) at R6 by waveband (nm) from images at V16 (upper block) or R1 (lower block) for low and high planting density (pd), colored by N treatment (Low N = blue circle, Med N = red plus, High N = green diamond). Black lines are for reference only: solid line at r = 0, dashed lines at r = |0.4|.
Figure 3: Pearson product-moment correlations (r) for NCE at R6 (kg/kg N) by waveband (nm), with the same layout and styling as Figure 2.
Figure 4: Pearson product-moment correlations (r) for NIE (kg/kg N) at R6 by waveband (nm), with the same layout and styling as Figure 2.
Figure 5: Variable Importance for Projection (VIP) values for NCE at V16 (A), NCE at R1 (B), NIE at V16 (C), and NIE at R1 (D). Predictors for each model are shown on the x-axis; the line marks the reference value of 0.8, below which predictors are deemed not important as per [44].
Figure 6: Measured-versus-predicted plots for full PLSR models based on spectral data at V16/18 or R1/R2 for training and test data sets. R2r = R2 for the training set; R2e = R2 for the test set; pN = nitrogen concentration (%); NCE = nitrogen conversion efficiency (kg/kg N); NIE = nitrogen internal efficiency (kg/kg N); Gen Loc = general location; CA = California, blue dots (Gorman and Rominger locations); IN = Indiana, red dots (ACRE location). Black line is the 1:1 reference line.
Figure 7: Measured-versus-predicted plot for the full R1 PLSR model predicting N concentration (pN, %), separated by hybrid, with general location indicated by color and marker shape. Black line is the 1:1 reference line.
20 pages, 22754 KiB  
Article
WDBSTF: A Weighted Dual-Branch Spatiotemporal Fusion Network Based on Complementarity between Super-Resolution and Change Prediction
by Shuai Fang, Qing Guo and Yang Cao
Remote Sens. 2022, 14(22), 5883; https://doi.org/10.3390/rs14225883 - 20 Nov 2022
Cited by 2 | Viewed by 1860
Abstract
Spatiotemporal fusion (STF) is a solution for generating satellite images with both high spatial and high temporal resolution. Deep learning-based STF algorithms focus on the spatial dimension to build a super-resolution (SR) model, on the temporal dimension to build a change prediction (CP) model, or on the task itself to build a data-driven end-to-end model. The multi-source images used for STF usually have large spatial-scale gaps and long temporal spans: the large spatial-scale gaps lead to poor spatial details in an SR model, while the long temporal spans make it difficult for a CP model to accurately reconstruct changing areas. We propose a weighted dual-branch spatiotemporal fusion network based on the complementarity between super-resolution and change prediction (WDBSTF), which includes an SR branch, a CP branch, and a weight module representing the complementarity of the two branches. The SR branch makes full use of edge information and high-resolution reference images to obtain high-quality spatial features for image reconstruction. The CP branch decomposes the complex problem via a two-layer cascaded network, extracts change features from the difference image, and selects high-quality spatial features through an attention mechanism. The fusion result of the CP branch has rich image details, but its accuracy in changing areas is low. The SR branch performs consistently well in both changing and non-changing areas, but its image details are less rich than those of the CP branch due to the large amplification factor. A weighted network was therefore designed to combine the advantages of the two branches and produce improved fusion results. We evaluated the performance of the WDBSTF in three representative scenarios, and both visual and quantitative evaluations demonstrate the state-of-the-art performance of our algorithm (on the LGC dataset, our method outperforms the second-best method by 2.577% in SSIM;
on the AHB dataset, by 1.684% in SSIM; and on the CIA dataset, by 5.55% in SAM). Full article
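At its core, the weight module blends the two branch predictions per pixel as a convex combination. A toy numpy sketch of that final blending step (the arrays and weights are made up; in WDBSTF the weights come from a learned network):

```python
import numpy as np

def weighted_fusion(f_sr, f_cp, w):
    """Per-pixel convex combination of the SR-branch and CP-branch
    predictions; w in [0, 1] is produced by the weight network."""
    f_sr, f_cp, w = (np.asarray(a, dtype=float) for a in (f_sr, f_cp, w))
    assert np.all((0.0 <= w) & (w <= 1.0))
    return w * f_sr + (1.0 - w) * f_cp

# Toy 2x2 "images": trust the SR branch more where w is close to 1
sr = np.array([[0.4, 0.5], [0.6, 0.7]])
cp = np.array([[0.2, 0.5], [0.8, 0.1]])
w = np.array([[1.0, 0.5], [0.0, 0.9]])
fused = weighted_fusion(sr, cp, w)
```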
Figure 1: Dimensional analysis of the spatiotemporal fusion process.
Figure 2: Flowchart of the whole network. M1 and M2 are the coarse images at the reference date t1 and the prediction date t2; L1 is the fine image at t1, and L1_R is the resampled image of L1.
Figure 3: Architecture of the EESRNet. M2_Edge, L1_Edge, and L1_R_Edge are the edge maps derived from the corresponding images; LF and HF denote low-resolution and high-resolution features, respectively.
Figure 4: Architecture of the TLCPNet.
Figure 5: Architecture of the RCAB.
Figure 6: Architecture of the UNet.
Figure 7: Architecture of the SAMB.
Figure 8: Architecture of the WNet.
Figure 9: Test data from the LGC dataset, shown as false-color composites of bands 4, 3, and 2. (a,c) MODIS and Landsat images on 26 November 2004; (b,d) MODIS and Landsat images on 12 December 2004.
Figure 10: Test data from the AHB dataset, shown as false-color composites of bands 4, 3, and 2. (a,c) MODIS and Landsat images on 29 August 2017; (b,d) MODIS and Landsat images on 4 January 2018.
Figure 11: Test data from the CIA dataset, shown as false-color composites of bands 4, 3, and 2. (a,c) MODIS and Landsat images on 11 January 2002; (b,d) MODIS and Landsat images on 12 February 2002.
Figure 12: Fusion results on 12 December 2004 in LGC (first row: false-color composites of bands 4, 3, and 2; second row: zoomed-in details of the yellow rectangles marked in the first row; third row: average difference map between prediction and ground truth; fourth row: average surface reflectance change map).
Figure 13: Fusion results on 4 January 2018 in AHB (subfigure arrangement as in Figure 12).
Figure 14: Fusion results on 12 February 2002 in CIA (subfigure arrangement as in Figure 12).
Figure 15: Local results with edge enhancement on the LGC dataset and their corresponding edge images; the color figures are false-color composites of bands 4, 3, and 2.
Figure 16: Effect on the results of removing the RCAB or SAMB modules on the LGC dataset; false-color composites of bands 4, 3, and 2.
19 pages, 4985 KiB  
Article
Monitoring of Atmospheric Carbon Dioxide over Pakistan Using Satellite Dataset
by Ning An, Farhan Mustafa, Lingbing Bu, Ming Xu, Qin Wang, Muhammad Shahzaman, Muhammad Bilal, Safi Ullah and Zhang Feng
Remote Sens. 2022, 14(22), 5882; https://doi.org/10.3390/rs14225882 - 20 Nov 2022
Cited by 13 | Viewed by 4351
Abstract
Satellites are an effective means of monitoring atmospheric carbon dioxide (CO2); however, city-scale monitoring of atmospheric CO2 through space-borne observations is still a challenging task because the anthropogenic change in atmospheric CO2 concentration is small compared to its natural variability and background concentration. In this study, we evaluated the potential of space-based observations to monitor atmospheric CO2 changes at the city scale through simple data-driven analyses. We used the column-averaged dry-air mole fraction of CO2 (XCO2) from the Orbiting Carbon Observatory 2 (OCO-2) and the anthropogenic CO2 emissions provided by the Open-Data Inventory for Anthropogenic Carbon dioxide (ODIAC) product to characterize CO2 over 120 districts of Pakistan. To study anthropogenic CO2 through space-borne observations, XCO2 anomalies (MXCO2) were estimated from OCO-2 retrievals within the spatial boundary of each district, and the overall spatial distribution pattern of the MXCO2 was compared with several datasets, including the ODIAC emissions, NO2 tropospheric column, fire locations, cropland, nighttime lights, and population density. All the datasets showed similar spatial distribution patterns. The satellite detected higher CO2 concentrations over cities located along the China–Pakistan Economic Corridor (CPEC) routes; the CPEC is a large-scale trading partnership between Pakistan and China, and large-scale development has been carried out along its routes over the last decade. Furthermore, the cities were ranked based on mean ODIAC emissions and MXCO2 estimates. The satellite-derived estimates showed good consistency with the ODIAC emissions at higher values; however, deviations between the two datasets were observed at lower values.
To further examine the relationships of MXCO2 and ODIAC emissions with each other and with other datasets such as population density and the NO2 tropospheric column, statistical analyses were carried out; strong and significant correlations were observed among all the datasets. Full article
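A common way to form such XCO2 anomalies is to subtract a regional background, for example the median of the retrievals in the region. A numpy sketch under that assumption (the background choice and sounding values are illustrative, not necessarily the authors' exact definition):

```python
import numpy as np

def xco2_anomaly(xco2, background=None):
    """Deviation (ppm) of each XCO2 retrieval from a regional background;
    defaults to the median of the retrievals themselves."""
    xco2 = np.asarray(xco2, dtype=float)
    if background is None:
        background = np.median(xco2)
    return xco2 - background

# Hypothetical OCO-2 soundings (ppm) crossing an urban plume
soundings = [410.1, 410.4, 412.9, 410.2, 410.6]
mxco2 = xco2_anomaly(soundings)  # the plume sounding stands out as positive
```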
Figure 1: (a) District and administrative unit boundaries, (b) topography, and (c) land-cover map of Pakistan. The shaded area shows the disputed territory of Kashmir.
Figure 2: Spatial distribution of monthly averaged XCO2 and the number of cloud-free OCO-2 retrievals in each district of Pakistan over six years, from January 2015 to December 2020.
Figure 3: (a) Long-term time series, (b) monthly averaged, and (c) seasonally averaged XCO2 concentrations in the administrative units (provinces) of Pakistan derived from OCO-2 XCO2 retrievals from January 2015 to December 2020.
Figure 4: (a) District and administrative unit boundaries of Pakistan along with the CPEC routes, and spatial distributions of (b) OCO-2 retrievals over a 0.5 × 0.5 degree grid, (c) OCO-2 retrievals observed by each district, (d) mean XCO2 concentration over the grid, (e) mean XCO2 concentration in each district, (f) mean MXCO2 concentration over the grid, (g) mean MXCO2 concentration in each district, (h) fire locations, (i) mean NO2 tropospheric columns, (j) mean population density, (k) mean ODIAC emissions, and (l) mean nighttime lights over Pakistan.
Figure 5: (a) Mean annual concentration of XCO2, (b) ranking of cities based on mean MXCO2, and (c) mean annual MXCO2 concentration in the districts of Pakistan.
Figure 6: (a) Ranking of Pakistani districts based on mean anthropogenic CO2 emissions and (b) mean annual anthropogenic CO2 emissions for the five years from 2015 to 2019.
Figure 7: Scatterplots among the datasets: (a) ODIAC emissions vs. OCO-2 anomalies (MXCO2), (b) the same at higher values, (c) OCO-2 vs. population density, (d) the same at higher values, (e) ODIAC emissions vs. population density, (f) the same at lower values, (g) OCO-2 vs. NO2 tropospheric column, (h) the same at higher values, (i) ODIAC emissions vs. NO2 tropospheric column, and (j) the same at higher values. The Pearson and Spearman correlations are denoted rp and rs, respectively.
16 pages, 7232 KiB  
Article
Lidar- and V2X-Based Cooperative Localization Technique for Autonomous Driving in a GNSS-Denied Environment
by Min-Su Kang, Jae-Hoon Ahn, Ji-Ung Im and Jong-Hoon Won
Remote Sens. 2022, 14(22), 5881; https://doi.org/10.3390/rs14225881 - 20 Nov 2022
Cited by 7 | Viewed by 5165
Abstract
Autonomous vehicles are equipped with multiple heterogeneous sensors and drive while processing data from each sensor in real time. Among the sensors, the global navigation satellite system (GNSS) is essential to the localization of the vehicle itself. However, if a GNSS-denied situation occurs while driving, the accident risk may be high due to the degradation of the vehicle positioning performance. This paper presents a cooperative positioning technique based on the lidar sensor and vehicle-to-everything (V2X) communication. The ego-vehicle continuously tracks surrounding vehicles and objects, and localizes itself using tracking information from the surroundings, especially in GNSS-denied situations. We present the effectiveness of the cooperative positioning technique by constructing a GNSS-denied case during autonomous driving. A numerical simulation using a driving simulator is included in the paper to evaluate and verify the proposed method in various scenarios. Full article
(This article belongs to the Special Issue GNSS for Urban Transport Applications)
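The core geometric idea, inverting lidar-measured relative positions against V2X-broadcast absolute positions of tracked vehicles, can be sketched as a simple planar average assuming a known ego heading (a simplification of the paper's filtering formulation; the function and values are hypothetical):

```python
import numpy as np

def localize_ego(v2x_positions, lidar_relative, heading):
    """Estimate the ego position as the average of (absolute neighbor
    position) minus (lidar-frame relative position rotated into the
    world frame). heading: ego yaw in radians, body frame x-forward."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])        # body -> world rotation
    p = np.asarray(v2x_positions, dtype=float)   # (N, 2) world coords
    r = np.asarray(lidar_relative, dtype=float)  # (N, 2) ego-frame coords
    return np.mean(p - r @ R.T, axis=0)

# Toy scene: true ego at (10, 5) with a 90-degree heading, two neighbors
est = localize_ego(
    v2x_positions=[[10.0, 15.0], [5.0, 5.0]],  # broadcast world positions
    lidar_relative=[[10.0, 0.0], [0.0, 5.0]],  # measured in the ego frame
    heading=np.pi / 2,
)
```

Averaging over more neighbors suppresses individual measurement errors, which is why the paper's accuracy improves with the number of surrounding vehicles.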
Figure 1: Block diagram of the two-step cooperative localization method.
Figure 2: Process to find the position and heading of a surrounding vehicle using point cloud data (red dots).
Figure 3: Relationship between coordinate systems.
Figure 4: Data processing from the lidar and V2X modules [1,2,3].
Figure 5: Sensor fusion: (a) simple online and real-time tracking (SORT) and (b) fusion output, with the black box as the vehicle contour [1,2,4].
Figure 6: Block diagram of the sensor fusion module for tracking the surrounding vehicles.
Figure 7: Localization of the ego-vehicle.
Figure 8: Block diagram of the autonomous driving simulation.
Figure 9: Test scenario: (a) view of the driving simulator; (b) aerial view.
Figure 10: Multiobject tracking results for surrounding vehicles: (a) precision and (b) recall [1,2,3,4].
Figure 11: Tracking results for an object.
Figure 12: Ego-vehicle localization scenario: (a) satellite view and (b) road view.
Figure 13: Localization results for the ego-vehicle versus the number of surrounding vehicles [1,2,3,4].
Figure 14: Four scenarios for vehicle localization testing: (a) satellite view and (b) road view.
Figure 15: Localization results according to vehicle arrangement and the number of particles.
Figure 16: Localization position errors according to vehicle arrangement and the number of particles.
Figure 17: Position dilution of precision analysis results for each scenario.
21 pages, 5912 KiB  
Article
Development of a Lightweight Single-Band Bathymetric LiDAR
by Guoqing Zhou, Xiang Zhou, Weihao Li, Dawei Zhao, Bo Song, Chao Xu, Haotian Zhang, Zhexian Liu, Jiasheng Xu, Gangchao Lin, Ronghua Deng, Haocheng Hu, Yizhi Tan, Jinchun Lin, Jiazhi Yang, Xueqin Nong, Chenyang Li, Yiqiang Zhao, Cheng Wang, Lieping Zhang and Liping Zouadd Show full author list remove Hide full author list
Remote Sens. 2022, 14(22), 5880; https://doi.org/10.3390/rs14225880 - 20 Nov 2022
Cited by 39 | Viewed by 4537
Abstract
Traditional bathymetric LiDAR (light detection and ranging) onboard manned and/or unmanned airborne systems cannot operate over narrow rivers in urban areas with tall buildings or in mountainous areas with high peaks. Therefore, this study presents a prototype of a lightweight bathymetric LiDAR onboard an unmanned shipborne vehicle (called "GQ-Cor 19"). The GQ-Cor 19 system primarily includes an emitting optical module, a receiving optical module, a control module, a detection module, a high-speed A/D sampling module, and a data processing system. Because the GQ-Cor 19 operates extremely close to the water surface, various new technical challenges arise, such as strong laser scattering from the water surface, which saturates the signals received by the photomultiplier tube detector. This study therefore presents several new technical solutions, including (1) an improved Bresenham algorithm, (2) a small and lightweight receiving optical system with a split-field method, and (3) a data acquisition module with a high-speed A/D collector. Following a series of experimental verifications, the results demonstrate that this new-generation single-band LiDAR onboard an unmanned shipborne vehicle can swiftly measure water depth, with a maximum measurement depth of more than 25 m, a measurement accuracy better than 30 cm, and a weight of less than 12 kg. Full article
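The improved Bresenham algorithm in (1) builds on the classic integer line-stepping routine; for reference, a standard version is sketched below (the paper's specific improvement is not reproduced here):

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer Bresenham line: the grid cells from (x0, y0) to
    (x1, y1), using only integer adds and compares (no floating point)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:       # step in x
            err += dy
            x0 += sx
        if e2 <= dx:       # step in y
            err += dx
            y0 += sy
    return points

path = bresenham(0, 0, 4, 2)
```

The all-integer error update is what makes the routine attractive for scan control on embedded hardware.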
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Architecture of GQ-Cor 19.</p>
Full article ">Figure 2
<p>Laser purchased from market.</p>
Full article ">Figure 3
<p>Galileo-type collimated beam expander.</p>
Full article ">Figure 4
<p>Architecture of the scanning optical system.</p>
Full article ">Figure 5
<p>Implementation of a scanning optical system.</p>
Full article ">Figure 6
<p>Architecture of the receiving optical system.</p>
Full article ">Figure 7
<p>The receiving optical system, where p, q, and r denote the first, second, and third lenses of the objective group; s denotes the split-field lens; t, u, and v denote the first, second, and third lenses of the APD eyepiece group; and w, x, and y denote the first, second, and third lenses of the PMT eyepiece group.</p>
Full article ">Figure 8
<p>3D model and implementation of the receiving optical system.</p>
Full article ">Figure 9
<p>Architecture of the control components in the GQ-Cor 19 system.</p>
Full article ">Figure 10
<p>Circuit scheme of the main control section.</p>
Full article ">Figure 11
<p>(<b>a</b>) Top view of the PCB and (<b>b</b>) the physical diagram of the control board.</p>
Full article ">Figure 12
<p>Main control part of the operating system task scheduling flowchart.</p>
Full article ">Figure 13
<p>Architecture of high-speed AD sampling system for GQ-Cor 19.</p>
Full article ">Figure 14
<p>High-speed AD sampling module.</p>
Full article ">Figure 15
<p>LiDAR system assembly.</p>
Full article ">Figure 16
<p>Wiring diagram of shipboard LiDAR system.</p>
Full article ">Figure 17
<p>LiDAR overall physical diagram.</p>
Full article ">Figure 18
<p>Architecture of LiDAR data post-processing.</p>
Full article ">Figure 19
<p>A screenshot of the GQ-Cor 19 data post-processing software with its Chinese interface.</p>
Full article ">Figure 20
<p>The overall physical diagram of the laboratory validation.</p>
Full article ">Figure 21
<p>(<b>a</b>–<b>c</b>) Waveforms shown on the oscilloscope in the indoor experiments; the first peak of each waveform is the water-surface return and the second peak is the bottom return.</p>
Full article ">Figure 22
<p>The overall physical diagram of the pool experiment.</p>
Full article ">Figure 23
<p>(<b>a</b>–<b>f</b>) Waveform diagram of the water pool experiment.</p>
Full article ">Figure 24
<p>The experiments under the artificial water well.</p>
Full article ">Figure 25
<p>The waveform from the experiment under the artificial water well.</p>
Full article ">Figure 26
<p>The overall physical diagram of the water pond experiment.</p>
Full article ">Figure 27
<p>The waveform received under the water pond environment.</p>
Full article ">Figure 28
<p>The experiments under reservoir environment.</p>
Full article ">Figure 29
<p>Waveform under the reservoir experimental environment.</p>
22 pages, 9099 KiB  
Article
A Fast and Robust Heterologous Image Matching Method for Visual Geo-Localization of Low-Altitude UAVs
by Haigang Sui, Jiajie Li, Junfeng Lei, Chang Liu and Guohua Gou
Remote Sens. 2022, 14(22), 5879; https://doi.org/10.3390/rs14225879 - 20 Nov 2022
Cited by 5 | Viewed by 3544
Abstract
Visual geo-localization can determine the position of UAVs (Unmanned Aerial Vehicles) during GNSS (Global Navigation Satellite System) denial or restriction. However, the performance of visual geo-localization is seriously impaired by illumination variation, scale differences, viewpoint differences, sparse texture, the limited computing power of UAVs, etc. In [...] Read more.
Visual geo-localization can determine the position of UAVs (Unmanned Aerial Vehicles) during GNSS (Global Navigation Satellite System) denial or restriction. However, the performance of visual geo-localization is seriously impaired by illumination variation, scale differences, viewpoint differences, sparse texture, and the limited computing power of UAVs. In this paper, a fast detector-free two-stage matching method is proposed to improve the visual geo-localization of low-altitude UAVs. A detector-free matching method and a perspective transformation module are incorporated into the coarse and fine matching stages to improve robustness on weak-texture and oblique-viewpoint data. The minimum Euclidean distance is used to accelerate the coarse matching, and coordinate regression based on the DSNT (differentiable spatial to numerical) transform is used to improve the fine matching accuracy. The experimental results show that the average localization precision of the proposed method is 2.24 m, an improvement of 0.33 m over the current typical matching methods. In addition, the method shows clear advantages in localization robustness and inference efficiency on a Jetson Xavier NX, matching and localizing all images in the dataset at the highest localization frequency. Full article
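The fine-matching stage described above regresses sub-pixel coordinates with a DSNT-style expectation instead of a hard argmax over a score map. A minimal NumPy sketch of that idea follows; the normalization details and the function name are illustrative assumptions, not the paper's code:

```python
import numpy as np

def dsnt(heatmap):
    """Differentiable spatial-to-numerical transform (sketch).

    Normalizes a score map into a probability mass and returns the
    expected (x, y) coordinate in [-1, 1], enabling sub-pixel,
    fully differentiable coordinate regression.
    """
    h, w = heatmap.shape
    z = np.exp(heatmap - heatmap.max())  # numerically stable softmax
    p = z / z.sum()
    # normalized pixel-center coordinate grids in [-1, 1]
    xs = (2 * np.arange(w) + 1) / w - 1
    ys = (2 * np.arange(h) + 1) / h - 1
    x = float((p.sum(axis=0) * xs).sum())  # expected x coordinate
    y = float((p.sum(axis=1) * ys).sum())  # expected y coordinate
    return x, y
```

Because the output is an expectation rather than an index, gradients flow through the whole score map, which is what makes the fine-stage regression trainable end to end.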
(This article belongs to the Special Issue Computer Vision and Image Processing)
Show Figures

Figure 1
<p>The framework of a visual geo-localization system based on image matching. The red box is the image preprocessing module, the green box is the localization module, and the blue box is the image matching module.</p>
Full article ">Figure 2
<p>The framework of the proposed image matching model.</p>
Full article ">Figure 3
<p>The framework and parameters of the feature extraction model.</p>
Full article ">Figure 4
<p>Coarse matching module.</p>
Full article ">Figure 5
<p>Fine matching module.</p>
Full article ">Figure 6
<p>Matching point precision of different methods.</p>
Full article ">Figure 7
<p>Matching point precision with different matching strategies. The gray curve with a circular marker represents the proposed method corrected twice, in the coarse-matching and fine-matching stages; the green curve with a plus-sign marker represents the method corrected only in the coarse-matching stage; and the red curve with a triangular marker represents the matching result without the perspective transformation module.</p>
Full article ">Figure 8
<p>The localized trajectories of all methods on the ICRA dataset: (<b>a</b>) the whole trajectory. The red line represents the GNSS reference trajectory. The green, blue, gray, and white lines represent the positioning trajectories of the SuperGlue method, LoFTR method, Patch2Pix method, and the proposed method, respectively; (<b>b</b>) local trajectory; (<b>c</b>) the beginning of the trajectory; (<b>d</b>) the end of the trajectory.</p>
Full article ">Figure 9
<p>Matching results for the same area: (<b>a</b>) the LoFTR method; (<b>b</b>) the SuperGlue method; (<b>c</b>) the Patch2Pix method; (<b>d</b>) the proposed method.</p>
Full article ">Figure 10
<p>The localization ground truth provided by the dataset deviates: (<b>a</b>) the blue dot on the image is the center of the UAV image; (<b>b</b>) the blue dot on the image is the UAV’s position converted to the satellite image using GNSS coordinates.</p>
Full article ">Figure 11
<p>The localized trajectories of all methods on the self-built dataset. The red line represents the GNSS reference trajectory. The green, blue, gray, and white lines represent the positioning trajectories of the SuperGlue algorithm, LoFTR algorithm, Patch2Pix algorithm, and the proposed algorithm, respectively. The small window in the figure shows the details of the trajectory.</p>
Full article ">Figure 12
<p>Matching results of weak texture images: (<b>a</b>) the LoFTR method; (<b>b</b>) the SuperGlue method; (<b>c</b>) the Patch2Pix method; (<b>d</b>) the proposed method without single perspective transformation module; (<b>e</b>) the proposed method with single perspective transformation module; (<b>f</b>) the proposed method with double perspective transformation modules.</p>
17 pages, 3707 KiB  
Article
A Sequential Student’s t-Based Robust Kalman Filter for Multi-GNSS PPP/INS Tightly Coupled Model in the Urban Environment
by Sixiang Cheng, Jianhua Cheng, Nan Zang, Zhetao Zhang and Sicheng Chen
Remote Sens. 2022, 14(22), 5878; https://doi.org/10.3390/rs14225878 - 19 Nov 2022
Cited by 4 | Viewed by 2142
Abstract
The proper stochastic model of a global navigation satellite system (GNSS) makes a significant difference to precise point positioning (PPP)/inertial navigation system (INS) tightly coupled solutions. The empirical Gaussian stochastic model is easily biased by massive gross errors, deteriorating the positioning precision, [...] Read more.
The proper stochastic model of a global navigation satellite system (GNSS) makes a significant difference to precise point positioning (PPP)/inertial navigation system (INS) tightly coupled solutions. The empirical Gaussian stochastic model is easily biased by massive gross errors, deteriorating the positioning precision, especially in urban environments with severe GNSS blockage. In this paper, the distributional characteristics of gross-error-contaminated observation noise are analyzed with the epoch-differenced (ED) geometry-free (GF) model. The Student’s t distribution is used to express the heavy tails of the gross-error-contaminated observation noise. Consequently, a novel sequential Student’s t-based robust Kalman filter (SSTRKF) is put forward to adjust the GNSS stochastic model in the urban environment. The proposed method pre-weights all observations with a priori residual-derived robust factors. The chi-square test is adopted to detect unreasonable variances. The proposed sequential Student’s t-based Kalman filter is applied to the PPP/INS solutions instead of the standard Kalman filter (KF) when the null hypothesis of the chi-square test is rejected. The results of road experiments with urban environment simulations reveal that the SSTRKF improves the horizontal and vertical positioning precision by 57.5% and 62.0% on average compared with the robust Kalman filter (RKF). Full article
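The filter-switching logic above hinges on a chi-square consistency test of the filter innovations. As a hedged illustration of the standard statistic such a gate computes (the interface and threshold handling are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def chi_square_gate(innovation, S, threshold):
    """Innovation-based chi-square consistency check (sketch).

    gamma = v^T S^{-1} v follows a chi-square distribution with
    dim(v) degrees of freedom under the null hypothesis; comparing
    gamma to a chosen quantile flags epochs whose assumed observation
    variances are unreasonable.
    """
    v = np.atleast_1d(np.asarray(innovation, dtype=float))
    gamma = float(v @ np.linalg.solve(S, v))  # normalized innovation squared
    return gamma, gamma > threshold
```

In a scheme like the SSTRKF, a rejected null hypothesis at an epoch would trigger the robust (heavy-tailed) update in place of the standard Kalman update.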
Show Figures

Figure 1
<p><math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>G</mi> <msub> <mi>F</mi> <mrow> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> </mrow> </semantics></math> PDF and <math display="inline"><semantics> <mrow> <mi mathvariant="normal">N</mi> <mfenced> <mrow> <mn>0</mn> <mo>,</mo> <mi>var</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>G</mi> <msub> <mi>F</mi> <mrow> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> </mrow> </msub> </mrow> </mfenced> </mrow> </mfenced> </mrow> </semantics></math> of (<b>a</b>) GPS, (<b>b</b>) GLONASS, (<b>c</b>) Galileo, (<b>d</b>) BDS GEO, (<b>e</b>) BDS IGSO, and (<b>f</b>) BDS MEO.</p>
Full article ">Figure 2
<p><math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>G</mi> <msub> <mi>F</mi> <mrow> <mn>2</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> <mo>,</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> PDF and <math display="inline"><semantics> <mrow> <mi mathvariant="normal">N</mi> <mfenced> <mrow> <mn>0</mn> <mo>,</mo> <mi>var</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>G</mi> <msub> <mi>F</mi> <mrow> <mn>2</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> <mo>,</mo> <mn>1</mn> </mrow> </msub> </mrow> </mfenced> </mrow> </mfenced> </mrow> </semantics></math> of (<b>a</b>) GPS, (<b>b</b>) GLONASS, (<b>c</b>) Galileo, (<b>d</b>) BDS GEO, (<b>e</b>) BDS IGSO, and (<b>f</b>) BDS MEO.</p>
Full article ">Figure 3
<p><math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>G</mi> <msubsup> <mi>F</mi> <mrow> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> </mrow> <mi>s</mi> </msubsup> </mrow> </semantics></math> PDF of G08, <math display="inline"><semantics> <mrow> <mi mathvariant="normal">N</mi> <mfenced> <mrow> <mn>0</mn> <mo>,</mo> <mi>var</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>G</mi> <msubsup> <mi>F</mi> <mrow> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>,</mo> <mi>r</mi> </mrow> <mi>s</mi> </msubsup> </mrow> </mfenced> </mrow> </mfenced> </mrow> </semantics></math>, and Student’s t PDF (<math display="inline"><semantics> <mrow> <mi>DOF</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 4
<p>Flow chart of the SSTRKF.</p>
Full article ">Figure 5
<p>Two-dimensional experiment trajectory.</p>
Full article ">Figure 6
<p>The AS number during the road experiments.</p>
Full article ">Figure 7
<p>Impacts of the LOS multipath and NLOS reception errors on the positioning solutions.</p>
Full article ">Figure 8
<p>Chi-square statistics during the road experiment.</p>
Full article ">Figure 9
<p>Positioning errors of the three filters.</p>
Full article ">Figure 10
<p>RMS positioning errors of the three schemes during the six simulations.</p>
24 pages, 23655 KiB  
Article
A Multi-Channel Descriptor for LiDAR-Based Loop Closure Detection and Its Application
by Gang Wang, Xiaomeng Wei, Yu Chen, Tongzhou Zhang, Minghui Hou and Zhaohan Liu
Remote Sens. 2022, 14(22), 5877; https://doi.org/10.3390/rs14225877 - 19 Nov 2022
Cited by 5 | Viewed by 2412
Abstract
The simultaneous localization and mapping (SLAM) algorithm is a prerequisite for unmanned ground vehicle (UGV) localization, path planning, and navigation, and it includes two essential components: frontend odometry and backend optimization. Frontend odometry continuously accumulates error, leading to ghosting and drifting [...] Read more.
The simultaneous localization and mapping (SLAM) algorithm is a prerequisite for unmanned ground vehicle (UGV) localization, path planning, and navigation, and it includes two essential components: frontend odometry and backend optimization. Frontend odometry continuously accumulates error, leading to ghosting and drifting in the mapping results. However, loop closure detection (LCD) can address this technical issue by significantly reducing the cumulative error. Existing LCD methods decide whether a loop exists by constructing local or global descriptors and calculating the similarity between them, so the design of discriminative descriptors and effective similarity measurement mechanisms is crucial. In this paper, we first propose a novel multi-channel descriptor (CMCD) to compensate for the limited discriminative power of single-attribute point cloud information in scene description. The distance, height, and intensity information of the point cloud is encoded into three independent channels of the shadow-casting regions (bins) and then compressed into a two-dimensional global descriptor. Next, an ORB-based dynamic threshold feature extraction algorithm (DTORB) is designed on the 2D descriptors to describe the distributions of global and local point clouds. Then, a DTORB-based similarity measurement method is designed using the rotation invariance and visualization characteristics of the descriptor features to overcome the subjective bias of the constant-threshold ORB algorithm in descriptor feature extraction. Finally, verification is performed on the KITTI odometry sequences and the campus datasets of Jilin University collected by us. The experimental results demonstrate the superior performance of our method over state-of-the-art approaches. Full article
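To make the multi-channel encoding idea concrete, here is a hedged sketch that bins a point cloud into a polar grid and stores height, intensity, and distance statistics per bin; the bin geometry, per-channel statistics, and function name are assumptions for illustration, not the actual CMCD construction:

```python
import numpy as np

def multi_channel_descriptor(points, intensities,
                             n_rings=20, n_sectors=60, max_range=80.0):
    """Hypothetical multi-channel polar descriptor (sketch).

    Each (ring, sector) bin stores three channels -- max height, max
    intensity, and mean range -- mimicking the idea of encoding several
    point-cloud attributes into one 2D global descriptor.
    """
    desc = np.zeros((n_rings, n_sectors, 3))
    counts = np.zeros((n_rings, n_sectors))
    for (x, y, z), i in zip(points, intensities):
        r = np.hypot(x, y)
        if r >= max_range:
            continue  # drop points outside the descriptor's range
        ring = int(r / max_range * n_rings)
        sector = int((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_sectors) % n_sectors
        desc[ring, sector, 0] = max(desc[ring, sector, 0], z)  # height channel
        desc[ring, sector, 1] = max(desc[ring, sector, 1], i)  # intensity channel
        desc[ring, sector, 2] += r                             # distance sum
        counts[ring, sector] += 1
    occupied = counts > 0
    desc[..., 2][occupied] /= counts[occupied]                 # mean distance per bin
    return desc
```

A descriptor of this shape can then be flattened or projected to a 2D image, on which ORB-style features and similarity scores are computed for loop candidates.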
(This article belongs to the Topic Artificial Intelligence in Sensors)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Schematic of CMCD blocks for multi-channel descriptors. The red area represents a pixel in the figure, and the point cloud in the corresponding three-dimensional region of the red area will be projected into this pixel.</p>
Full article ">Figure 2
<p>System overview.</p>
Full article ">Figure 3
<p>A visual illustration of the CMCD coding process: the three channels correspond to distance, intensity, and height information, respectively.</p>
Full article ">Figure 4
<p>Selection of loop closure candidates: retrieving the ten most similar candidates for the point cloud at frame 2582 of KITTI-05.</p>
Full article ">Figure 5
<p>Visualization of the DTORB matching process: (<b>a</b>,<b>b</b>) two pairs of loop closures in KITTI-05; (<b>c</b>,<b>d</b>) matching performed by the traditional ORB; (<b>e</b>,<b>f</b>) matching performed by the proposed DTORB.</p>
Full article ">Figure 5 Cont.
<p>Visualization of the DTORB matching process: (<b>a</b>,<b>b</b>) two pairs of loop closures in KITTI-05; (<b>c</b>,<b>d</b>) matching performed by the traditional ORB; (<b>e</b>,<b>f</b>) matching performed by the proposed DTORB.</p>
Full article ">Figure 6
<p>P-R curves for the KITTI odometry sequences: (<b>a</b>) KITTI-00; (<b>b</b>) KITTI-02; (<b>c</b>) KITTI-05; (<b>d</b>) KITTI-06.</p>
Full article ">Figure 7
<p>P-R curves for the JLU datasets: (<b>a</b>) jlu00; (<b>b</b>) jlu01; (<b>c</b>) jlu02; (<b>d</b>) jlu03.</p>
Full article ">Figure 8
<p>Mapping and LCD results: (<b>a</b>–<b>f</b>) mapping and LCD results on KITTI-00, KITTI-02, and KITTI-05; (<b>g</b>,<b>h</b>) mapping and LCD results on jlu00. The red and yellow lines indicate the detected obverse and reverse loop closures, respectively.</p>
Full article ">Figure 8 Cont.
<p>Mapping and LCD results: (<b>a</b>–<b>f</b>) mapping and LCD results on KITTI-00, KITTI-02, and KITTI-05; (<b>g</b>,<b>h</b>) mapping and LCD results on jlu00. The red and yellow lines indicate the detected obverse and reverse loop closures, respectively.</p>
Full article ">Figure 9
<p>The mapping results of CMCD in SLAM on jlu02 and jlu03: (<b>a</b>,<b>e</b>) BEV images of jlu02 and jlu03, where the red lines represent the driving route at the time of data collection; (<b>b</b>,<b>f</b>) the visualized results of the point cloud map; (<b>c</b>,<b>g</b>) the local LCD results; (<b>d</b>,<b>h</b>) the mapping and LCD results, where the red box indicates that the trajectory contains a loop closure.</p>
Full article ">Figure 10
<p>Illustration of CMCD and simple bird’s eye view projection: (<b>a</b>) The CMCD projection of frame 2582 on KITTI-05 (as shown in (4)) and the corresponding single-channel projection of height, intensity, and distance (as shown in (1), (2) and (3), respectively); (<b>b</b>) The CMCD projection of frame 3715 of jlu02 (as shown in (5)) and the corresponding single-channel projection of height, intensity, and distance (as shown in (6), (7) and (8), respectively). In the legend, “Height: 1; Intensity: 0; Distance: 0,” indicates that the image is projected according to the height information.</p>
Full article ">Figure 11
<p>P-R curves of CMCD, single-channel projection of height, intensity, and distance on KITTI Odometry sequences: (<b>a</b>) The P-R curve of KITTI-00; (<b>b</b>) The P-R curve of KITTI-02; (<b>c</b>) The P-R curve of KITTI-05; (<b>d</b>) The P-R curve of KITTI-06.</p>
Full article ">Figure 12
<p>P-R curves of CMCD, single-channel projection of height, intensity, and distance on JLU datasets: (<b>a</b>) P-R curve of jlu00; (<b>b</b>) P-R curve of jlu01; (<b>c</b>) P-R curve of jlu02; (<b>d</b>) P-R curve of jlu03.</p>
Full article ">Figure 13
<p>P-R curves of CMCD, orb-bag, and ori-retrieval-vector on KITTI odometry sequences: (<b>a</b>) The P-R curve of KITTI-00; (<b>b</b>) The P-R curve of KITTI-02; (<b>c</b>) The P-R curve of KITTI-05; (<b>d</b>) The P-R curve of KITTI-06.</p>
Full article ">Figure 14
<p>P-R curves of CMCD, orb-bag, and ori-retrieval-vector on JLU datasets: (<b>a</b>) The P-R curve of jlu00; (<b>b</b>) The P-R curve of jlu01; (<b>c</b>) The P-R curve of jlu02; (<b>d</b>) The P-R curve of jlu03.</p>
Full article ">Figure 15
<p>The retrieval results of the enhanced retrieval vector: (<b>a</b>) frame 2582 of KITTI-05 and its candidate set; (<b>b</b>) frame 3715 of jlu02 and its candidate set.</p>
Previous Issue
Back to TopTop