Remote Sens., Volume 14, Issue 6 (March-2 2022) – 227 articles

Cover Story (view full-size image): A statistical analysis of the effusion rate of recent flank eruptions at Mt Etna was performed, finding that most peaks occur at the beginning of eruptions, between 0.5% and 29% of the total duration, followed by a progressive decrease. Three generalized curves were derived through the calculation of the 25th, 50th, and 75th percentiles linked to the distribution of peaks and slope variations. Lava flow simulations were run using each characteristic curve to quantify the differences in run-out distance, showing that an early effusion rate peak can induce variations of up to 40%. Our tests highlight how the effusion rate strongly influences the emplacement of lava flow fields, with significant repercussions for both long- and short-term hazard assessment associated with effusive eruptions. View this paper
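The three generalized effusion-rate curves mentioned in the cover story come from simple percentile statistics over the distribution of peaks; a minimal sketch of that calculation (the peak-timing fractions below are invented placeholders, not the study's data):

```python
import numpy as np

# Hypothetical peak-timing fractions (peak time / total eruption duration)
# for a set of flank eruptions; the real study uses measured effusion rates.
peak_fractions = np.array([0.005, 0.01, 0.03, 0.05, 0.08, 0.12, 0.18, 0.29])

# Characteristic values analogous to the three generalized curves:
p25, p50, p75 = np.percentile(peak_fractions, [25, 50, 75])
print(f"25th/50th/75th percentile of peak timing: {p25:.3f}, {p50:.3f}, {p75:.3f}")
```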
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 5182 KiB  
Article
Retrieving the Infected Area of Pine Wilt Disease-Disturbed Pine Forests from Medium-Resolution Satellite Images Using the Stochastic Radiative Transfer Theory
by Xiaoyao Li, Tong Tong, Tao Luo, Jingxu Wang, Yueming Rao, Linyuan Li, Decai Jin, Dewei Wu and Huaguo Huang
Remote Sens. 2022, 14(6), 1526; https://doi.org/10.3390/rs14061526 - 21 Mar 2022
Cited by 12 | Viewed by 2804
Abstract
Pine wilt disease (PWD) is a destructive global threat to forests that has spread widely and caused severe tree mortality all over the world. It is important to establish an effective method for forest managers to detect the infected area in a large region. Remote sensing is a feasible tool to detect PWD, but traditional empirical methods lack the ability to explain the signals and can hardly be extended to large scales. Studies using physically-based models either ignore the within-canopy heterogeneity or rely too heavily on prior knowledge. In this study, we propose an approach to retrieve PWD-infected areas from medium-resolution satellite images of two phases, based on simulations of an extended stochastic radiative transfer model for forests infected by pests (SRTP). Only a small amount of prior knowledge was used, and a change of background soil was considered in this approach. The performance was evaluated in different study sites. The inversion method performs best in the three-dimensional LESS model simulation sample plots (R2 = 0.88, RMSE = 0.059), and the inversion accuracy decreases in the real forest sample plots: for the Jiangxi masson pine stand, with large coverage and serious damage, R2 = 0.57, RMSE = 0.074; and for the Shandong black pine stand, with sparse cover and only a small number of damaged individual trees, R2 = 0.48, RMSE = 0.063. This study indicates that the SRTP model is more feasible for pest damage inversion over different regions compared with empirical methods. The stochastic radiative transfer theory provides a potential approach for future monitoring of terrestrial vegetation parameters.
(This article belongs to the Special Issue Forest Disturbance Monitoring Using Satellite Remote Sensing)
Figure 1: The location of the study area and the field sample plots. The dot and triangle represent the Jiangxi sample plot and Shandong sample plot, respectively.
Figure 2: The reflectance and transmittance of healthy and damaged needles of masson pine and black pine.
Figure 3: The individual tree and the 900 m × 900 m plot generated by LESS. (a) The obj-file of the individual tree. (b) The 900 m × 900 m plot, in which the round dots denote trees. The orange line and the green line represent the directions of the sun and the sensor, respectively. The tree stem density decreases from left (796/hm²) to right (79.6/hm²).
Figure 4: The field plots established from the UAV images. (a,c) Masson pine plots in Jiangxi. (b,d) Black pine plots in Shandong. (a,b) indicate the infected stage, while (c,d) indicate the corresponding healthy stage.
Figure 5: The input/output data and the overall framework of the inversion model.
Figure 6: The variable importance of the 10 spectral indices in the RF model.
Figure 7: The results of IAR retrieval from the simulated images. (a) The results of all four plots. (b–e) Each of the four plots, in which the random probabilities of generating damaged trees are 0.25, 0.5, 0.75 and 1, respectively.
Figure 8: The results of IAR retrieval for the Jiangxi and Shandong plots. (a) Masson pine plots in Jiangxi. (b) Black pine plots in Shandong.
Figure 9: The results of IAR retrieval for the Jiangxi and Shandong plots without consideration of the background change. (a) Masson pine plots in Jiangxi. (b) Black pine plots in Shandong.
Figure 10: The results of IAR retrieval for the Jiangxi and Shandong plots by the statistical RF model. (a) Masson pine plots in Jiangxi. (b) Black pine plots in Shandong.
Figure 11: Infected area map of a typical region of masson pine trees. (a) The 30 m-resolution IAR map. (b) The corresponding UAV RGB image. (c) The corresponding Sentinel-2 RGB image. (d) The reference classification results from the UAV image.
Figure 12: Infected area map of a typical region of black pine trees. (a) The 30 m-resolution IAR map. (b) The corresponding UAV RGB image. (c) The corresponding Sentinel-2 RGB image. (d) The reference classification results from the UAV image.
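As a rough illustration of the inversion idea summarized in the abstract above, the sketch below retrieves an infected area ratio (IAR) by matching an observed two-band reflectance against a lookup table of forward-model simulations. The SRTP model itself is not reproduced here; simulate_reflectance is a hypothetical stand-in with made-up coefficients.

```python
import numpy as np

# A minimal lookup-table (LUT) inversion sketch. The study inverts the SRTP
# radiative transfer model; `simulate_reflectance` is a hypothetical stand-in
# that maps an infected area ratio (IAR) to red/NIR reflectance.
def simulate_reflectance(iar: float) -> np.ndarray:
    red = 0.04 + 0.06 * iar      # placeholder: damage brightens the red band
    nir = 0.45 - 0.20 * iar      # placeholder: damage darkens the NIR band
    return np.array([red, nir])

iar_grid = np.linspace(0.0, 1.0, 101)                  # candidate IAR values
lut = np.stack([simulate_reflectance(v) for v in iar_grid])

def retrieve_iar(observed: np.ndarray) -> float:
    """Return the IAR whose simulated spectrum is closest to the observation."""
    cost = np.linalg.norm(lut - observed, axis=1)
    return float(iar_grid[np.argmin(cost)])

print(retrieve_iar(np.array([0.07, 0.35])))  # -> ~0.5
```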
16 pages, 12058 KiB  
Article
Recursive Enhancement of Weak Subsurface Boundaries and Its Application to SHARAD Data
by Peng Fang and Jinhai Zhang
Remote Sens. 2022, 14(6), 1525; https://doi.org/10.3390/rs14061525 - 21 Mar 2022
Cited by 3 | Viewed by 2097
Abstract
Sedimentary layers are composed of compositions deposited alternately in different periods, reflecting the geological evolution history of a planet. Orbital radar can detect sedimentary layers, but the radargram is contaminated by varying background noise levels. Traditional denoising methods, such as the median filter, have difficulty dealing with this kind of noise. We propose a recursive signal enhancement (RSE) scheme to identify weak reflections within intense background noise. Numerical experiments with synthetic data and SHARAD radargrams illustrate that the proposed method can enhance the clarity of the radar echoes and reveal delicate sedimentary structures previously buried in the background noise. The denoising result presents better horizontal continuity and higher vertical resolution compared with those of the traditional methods.
Graphical abstract
Figure 1: Comparison of the denoised results using different methods. (a) Original SHARAD radargram s_0224401. (b) Denoised results obtained by 3 × 3 median filtering. (c) Denoised results obtained by 7 × 7 median filtering. (d–f) Denoised results obtained by the PID method with noise variance parameters σ_i = 10, 50 and 100, respectively.
Figure 2: Flowchart of the RSE method for denoising a SHARAD radargram.
Figure 3: The synthetic radargram of subsurface reflections. (a) The synthetic data: the combination of the signal power in (b) and SHARAD background noise. (b) The simulated backscatter echo power of laterally discontinuous layers with gaps of 1 to 5 pixels from left to right.
Figure 4: Denoised results of the synthetic data shown in Figure 3a by the median filtering, PID and RSE methods with different parameters. (a–c) Median filtering with window sizes of 3 × 3, 5 × 5 and 7 × 7, respectively. (d–f) PID method with noise variances of 10, 50 and 100, respectively. (g–i) RSE method with enhancement coefficient c = 0.1 after 1-, 3- and 5-pixel moving average (1-pixel means no moving average performed).
Figure 5: The residual errors of the denoised results shown in Figure 4.
Figure 6: Local details of some segments in Figures 3 and 4. Groups A, B, and C correspond to the arrows denoted in Figure 3a from left to right, respectively. The aqua waves and arrows are the noisy input data (Figure 3a); the gray waves and arrows are the denoised results produced by median filtering (Figure 4a); the orange waves and arrows are the denoised results produced by the PID method (Figure 4e); the red waves and arrows are the denoised results produced by the RSE method (Figure 4g); and the black waves and arrows are the theoretical signals (Figure 3b). Each wave represents the rightmost column in the corresponding panel. The dashed lines labeled "1" indicate artificial local peaks caused by noise in the waves. The dashed lines labeled "2" indicate weak reflections enhanced by the RSE method. The dashed lines labeled "3" indicate weak residual noise for the RSE method.
Figure 7: The denoised results of the RSE method with different parameters for SHARAD radargram 224401. (a–c) With enhancement coefficient c = 0.1 after 1-, 3-, and 5-pixel moving average. (d,e) With enhancement coefficient c = 0.2 after 1- and 3-pixel moving average. (f) Original SHARAD radargram 224401 with vague subsurface layered structures. H(i) represents the i-pixel moving average along the horizontal direction.
Figure 8: SHARAD radargram 2758501 (top panel) and its corresponding topography (bottom panel). Section A contains ~5 blurred layers. The two horizontal arrows inside section A indicate the low-reflection zones. Section B contains four defined units.
Figure 9: Comparison of the denoised results using the RSE method with different parameters (a–h). (i) Section A shown in Figure 8. V(i) represents vertical stacking of i pixels. H(i) represents the i-pixel moving average along the horizontal direction. The enhancement coefficient is c = 0.05.
Figure 10: Comparison of the denoised results using the RSE method with different parameters (a–h). (i) Section B shown in Figure 8. V(i) represents vertical stacking of i pixels. H(i) represents the i-pixel moving average along the horizontal direction. The enhancement coefficient is c = 0.05.
Figure 11: Local details of some segments in Figure 7. The black waves are the noisy data (Figure 7f). The red waves are the denoised results (Figure 7d). The rightmost columns of panels A, B, and C correspond to the columns denoted by the three arrows above Figure 7f, from left to right, respectively. Each wave represents the rightmost column in the corresponding panel.
Figure 12: Local details of some segments in Figure 9. The black waves are the noisy data (Figure 9i). The red waves are the denoised results (Figure 9e). The rightmost columns of panels A, B, and C correspond to the columns denoted by the three arrows in section A of Figure 8, from left to right, respectively. Each wave represents the rightmost column in the corresponding panel.
Figure 13: Local details of some segments in Figure 10. The black waves are the noisy data (Figure 10i). The red waves are the denoised results (Figure 10a). The rightmost columns of panels A, B, and C correspond to the columns denoted by the three arrows in area B of Figure 8, from left to right, respectively.
Figure 14: Comparison of a portion of SHARAD observation 2758501 with its corresponding clutter simulation results. (a) Section B, shown in Figure 8 and Figure 10i. (b) Simulation result of 2758501 with the same window as (a). (c) Denoised result shown in Figure 10a. The three radargrams have the same horizontal and vertical scale, marked in (a).
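The abstract does not spell out the RSE recursion, so the sketch below only combines the two ingredients named in the figure captions (the i-pixel horizontal moving average H(i) and a small enhancement coefficient c applied repeatedly) as one plausible reading, not the paper's algorithm:

```python
import numpy as np

# Hedged sketch of a recursive enhancement step on a radargram (2-D array).
# The paper's RSE recursion is not reproduced here; this only combines an
# i-pixel horizontal moving average with a small coefficient c applied
# recursively to amplify coherent energy above the background level.
def horizontal_moving_average(radargram: np.ndarray, i: int) -> np.ndarray:
    kernel = np.ones(i) / i
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"),
                               axis=1, arr=radargram)

def recursive_enhance(radargram: np.ndarray, c: float = 0.1, i: int = 3,
                      n_iter: int = 5) -> np.ndarray:
    x = horizontal_moving_average(radargram, i)
    for _ in range(n_iter):
        # Amplify values above the per-column background estimate.
        background = np.median(x, axis=0, keepdims=True)
        x = x + c * np.maximum(x - background, 0.0)
    return x

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (64, 256))
noisy[32, :] += 3.0                      # a weak horizontal reflector
print(recursive_enhance(noisy).shape)    # (64, 256)
```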
19 pages, 5202 KiB  
Article
Classification of Tree Species in Different Seasons and Regions Based on Leaf Hyperspectral Images
by Rongchao Yang and Jiangming Kan
Remote Sens. 2022, 14(6), 1524; https://doi.org/10.3390/rs14061524 - 21 Mar 2022
Cited by 3 | Viewed by 2508
Abstract
This paper aims to establish a tree species identification model suitable for different seasons and regions based on leaf hyperspectral images, and to develop a more effective hyperspectral identification algorithm. Firstly, the reflectance spectra of leaves in different seasons and regions were analyzed. Then, to address the problem that 0-elements in sparse random (SR) coding matrices degrade the classification performance of error-correcting output codes (ECOC), two versions of supervision-mechanism-based ECOC algorithms, namely SM-ECOC-V1 and SM-ECOC-V2, were proposed. In addition, the performance of the proposed algorithms was compared with that of six traditional algorithms based on all bands and on feature bands. The experimental results show that seasonal and regional changes affect the reflectance spectra of leaves, especially in the near-infrared region of 760–1000 nm. When the spectral information of different seasons and different regions is added into the identification model, tree species can be effectively classified. SM-ECOC-V2 achieves the best classification performance based on both all bands and feature bands. Furthermore, both SM-ECOC-V1 and SM-ECOC-V2 outperform the ECOC method under the SR coding strategy, indicating that the proposed methods can effectively avoid the influence of 0-elements in the SR coding matrix on classification performance.
(This article belongs to the Topic Artificial Intelligence in Sensors)
Graphical abstract
Figure 1: Collection time of samples in different seasons and regions.
Figure 2: (a) The leaf image at the 727.84 nm band, and (b) binary image of the leaf after threshold segmentation.
Figure 3: Original reflectance spectra of all leaf samples.
Figure 4: Coding matrices of four coding strategies under four categories: (a) OVO, (b) OVA, (c) DR, (d) SR.
Figure 5: The decoding process of the ECOC algorithm.
Figure 6: The output code of a test sample supervised by SM-ECOC-V1 in the decoding process.
Figure 7: The output results of dichotomizers supervised by SM-ECOC-V2.
Figure 8: The output code of a test sample supervised by the SM-ECOC-V2 algorithm in the decoding process.
Figure 9: Average reflectance spectra of leaves for each tree species in different seasons. (1) Ilex chinensis Sims, (2) Lonicera maackii, (3) Sophora japonica, (4) Amygdalus triloba, (5) Syringa oblata Lindl., (6) Kerria japonica, (7) Rhodotypos scandens, (8) Weigela florida, (9) Acer grosseri Pax, (10) Viburnum opulus Linn., (11) Cerasus serrulate, (12) Philadelphus pekinensis Rupr., (13) Chaenomeles speciosa, (14) Malus micromalus, (15) Syringa pubescens, (16) Ligustrum quihoui Carr., (17) Rosa chinensis Jacq., (18) Swida alba Opiz, (19) Forsythia suspensa, and (20) Prunus Cerasifera.
Figure 10: Average reflectance spectra of leaves for each tree species in different regions. (1) Ilex chinensis Sims, (2) Lonicera maackii, (3) Sophora japonica, (4) Amygdalus triloba, and (5) Syringa oblata Lindl.
Figure 11: Class accuracy (%) of (a) spring samples classifying summer and autumn samples, (b) summer samples classifying spring and autumn samples, and (c) autumn samples classifying spring and summer samples.
Figure 12: Class accuracy (%) of (a) samples from campus classifying samples from Xiling Lake Park, and (b) samples from Xiling Lake Park classifying samples from campus.
Figure 13: Class accuracy (%) of different classification models established based on all bands.
Figure 14: Class accuracy (%) of different classification models established based on feature bands.
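The 0-element issue the abstract targets is easiest to see in a plain ECOC implementation: 0-entries in the coding matrix exclude a class from a dichotomizer's training set and contribute nothing to decoding. The sketch below is generic ECOC (not the SM-ECOC variants), with an illustrative coding matrix and synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic ECOC sketch: each column of the coding matrix defines one binary
# dichotomizer; 0-elements mean "class unused" for that dichotomizer, which
# is the property the SM-ECOC methods are designed to supervise.
coding = np.array([[ 1,  1,  0],
                   [-1,  0,  1],
                   [ 0, -1, -1]])        # 3 classes x 3 dichotomizers

def ecoc_fit(X, y, coding):
    models = []
    for col in coding.T:
        used = np.isin(y, np.where(col != 0)[0])      # drop 0-element classes
        labels = col[y[used]]                         # map class -> {-1, +1}
        models.append(LogisticRegression().fit(X[used], labels))
    return models

def ecoc_predict(X, models, coding):
    votes = np.column_stack([m.predict(X) for m in models])
    # Hamming-style decoding: 0-elements contribute no distance.
    dist = np.array([[np.sum((row != cw) & (cw != 0)) for cw in coding]
                     for row in votes])
    return np.argmin(dist, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 4)) + np.repeat(np.eye(3, 4) * 3, 30, axis=0)
y = np.repeat(np.arange(3), 30)
models = ecoc_fit(X, y, coding)
print((ecoc_predict(X, models, coding) == y).mean())
```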
20 pages, 10127 KiB  
Article
Extraction of Olive Crown Based on UAV Visible Images and the U2-Net Deep Learning Model
by Zhangxi Ye, Jiahao Wei, Yuwei Lin, Qian Guo, Jian Zhang, Houxi Zhang, Hui Deng and Kaijie Yang
Remote Sens. 2022, 14(6), 1523; https://doi.org/10.3390/rs14061523 - 21 Mar 2022
Cited by 35 | Viewed by 8217
Abstract
Olive trees, which are planted widely in China, are economically significant. Timely and accurate acquisition of olive tree crown information is vital for monitoring olive tree growth and accurately predicting fruit yield. The advent of unmanned aerial vehicles (UAVs) and deep learning (DL) provides an opportunity for rapid monitoring of olive tree crown parameters. In this study, we propose a method for automatically extracting olive crown information (crown number and area), combining visible-light images captured by a consumer UAV with a new deep learning model, U2-Net, which has a deeply nested structure. Firstly, a data set of olive tree crown (OTC) images was constructed; it was further processed by the ESRGAN model to enhance image resolution and augmented (geometric and spectral transformations) to enlarge the data set and increase the generalization ability of the model. Secondly, four typical subareas (A–D) in the study area were selected to evaluate the performance of the U2-Net model in olive crown extraction in different scenarios, and the U2-Net model was compared with three current mainstream deep learning models (HRNet, U-Net, and DeepLabv3+) for remote sensing image segmentation. The results showed that the U2-Net model achieved high accuracy in the extraction of tree crown numbers in the four subareas, with a mean intersection over union (IoU), overall accuracy (OA), and F1-Score of 92.27%, 95.19%, and 95.95%, respectively. Compared with the other three models, the IoU, OA, and F1-Score of the U2-Net model increased by 14.03–23.97, 7.57–12.85, and 8.15–14.78 percentage points, respectively. In addition, the U2-Net model showed high consistency between the predicted and measured crown areas; compared with the other three deep learning models, it had a lower error rate, with a root mean squared error (RMSE) of 4.78, a magnitude of relative error (MRE) of 14.27%, and a coefficient of determination (R2) higher than 0.93 in all four subareas, suggesting that the U2-Net model extracted the most complete crown profiles and was most consistent with the actual situation. This study indicates that the method combining UAV RGB images with the U2-Net model can provide highly accurate and robust extraction of olive tree crowns and is helpful for the dynamic monitoring and management of orchard trees.
(This article belongs to the Special Issue UAV Applications for Forest Management: Wood Volume, Biomass, Mapping)
Figure 1: Geographic location of the study area: (a) administrative map of Fujian Province, China; (b) orthophoto of the study area taken by UAV.
Figure 2: Flow chart of the data set's construction.
Figure 3: Super-resolution (SR) reconstruction effect.
Figure 4: Geometric transformations.
Figure 5: Spectral augmentation: (a) HSV boxplot; (b) spectral transformation.
Figure 6: Location of the four test subareas (A–D): (a–d) details of olive trees in the A–D test subareas, respectively.
Figure 7: Structure of the U2-Net model. En represents the encoder, De represents the decoder, S(n)side represents the side-output saliency maps, and Sup represents the upsampling ratio.
Figure 8: Structure of the residual U-block.
Figure 9: Comparison diagram of the performance of models at different resolutions.
Figure 10: Segmentation results using the U2-Net model.
Figure 11: Extraction accuracy of olive tree crown number using the U2-Net model. (a–d) indicate the accuracy evaluation plots of the four test subareas (A–D). The green part indicates the proportion of pixels correctly classified as OTC and non-OTC, and the red part indicates the proportion of pixels incorrectly classified as OTC and non-OTC.
Figure 12: Segmentation results of olive tree crown for the different models. The red mask area represents the overlay prediction results for each model.
Figure 13: Extraction accuracy of olive tree crown number using different models.
Figure 14: Relationship between extracted area and measured area using the U2-Net model.
Figure 15: Comparison of the error rates for the different models.
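The accuracy figures quoted above (IoU, OA, F1-Score) are standard binary segmentation metrics; a minimal sketch of their computation from predicted and reference crown masks (the tiny masks below are invented):

```python
import numpy as np

# Standard binary segmentation metrics (IoU, OA, F1) from two boolean masks.
def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    tp = np.sum(pred & truth)      # crown predicted and present
    fp = np.sum(pred & ~truth)     # crown predicted, absent
    fn = np.sum(~pred & truth)     # crown missed
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    iou = tp / (tp + fp + fn)
    oa = (tp + tn) / pred.size
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, oa, f1

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(segmentation_scores(pred, truth))  # (0.5, 0.667, 0.667)
```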
21 pages, 9232 KiB  
Article
Mapping Canopy Cover in African Dry Forests from the Combined Use of Sentinel-1 and Sentinel-2 Data: Application to Tanzania for the Year 2018
by Astrid Verhegghen, Klara Kuzelova, Vasileios Syrris, Hugh Eva and Frédéric Achard
Remote Sens. 2022, 14(6), 1522; https://doi.org/10.3390/rs14061522 - 21 Mar 2022
Cited by 9 | Viewed by 5425
Abstract
High-resolution Earth observation data are routinely used to monitor tropical forests. However, the seasonality and openness of the canopy of dry tropical forests remain a challenge for optical sensors. In this study, we demonstrate the potential of combining Sentinel-1 (S1) SAR and Sentinel-2 (S2) optical sensors to map tree cover in East Africa. The overall methodology consists of: (i) the generation of S1 and S2 layers, (ii) the collection of an expert-based training/validation dataset and (iii) the classification of the satellite data. Three different classification workflows, together with different approaches to incorporating spatial information in training the classifiers, are explored. Two types of maps were derived from these mapping approaches over Tanzania: (i) binary tree cover/no tree cover (TC/NTC) maps, and (ii) maps of canopy cover classes. The overall accuracy is >95% for the TC/NTC maps and >85% for the forest type maps. Considering the neighboring pixels when training the classification improved the mapping of areas covered by 1–10% tree cover. The study relied on open data and publicly available tools and can be integrated into national monitoring systems.
(This article belongs to the Special Issue Accelerating REDD+ Initiatives in Africa Using Remote Sensing)
Graphical abstract
Figure 1: Challenges of tropical dry forest monitoring. (A) Deciduous vegetation—the same forest area in the humid (May) and the dry (July) season. (B) Fires—the same forest area affected (2010) or not (2011) by fire. (C) Four different areas illustrating the variety of tree cover. Maps data: Google, © 2021 Maxar Technologies.
Figure 2: Workflow diagram of the main methodological steps.
Figure 3: NDVI bi-monthly compositing—number of cloud-free observations.
Figure 4: Example of temporal profiles (in dB) for the year 2018 in VV and VH polarization for different land cover classes.
Figure 5: The 50 m by 50 m square plot interpreted in Collect Earth and the interpretation rules. Maps data: Google, © 2021 Maxar Technologies.
Figure 6: Spatial distribution and number of samples of the reference dataset. Maps data: Google, © 2021 Maxar Technologies.
Figure 7: "Pixel" and "window" classification workflows.
Figure 8: NDVI bi-monthly composites for 2018. The NDVI values from −1 to 1 are rescaled to 0–200.
Figure 9: Resulting maps from the three classification workflows.
Figure 10: Features' importance for two classifications in the "pixel" approach.
Figure 11: Overall accuracy, producer accuracy and user accuracy of the six maps based on the area proportion.
Figure 12: Illustration of class 21 (1–10% tree cover) for (a) the window–forest type classes ETC, (b) the pixel–forest type classes RF and (c) recent VHR images in Google Earth.
Figure 13: Assessment of the class 1–10% tree cover for the 98 validation plots with a class 21.
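The "window" workflows above differ from the "pixel" ones by feeding the classifier the neighbourhood around each labelled pixel rather than the pixel alone; a minimal sketch of that feature construction with a random forest (the feature stack and labels below are random placeholders, not S1/S2 data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# "Window" idea: train on the 3 x 3 neighbourhood around each labelled pixel.
# `stack` stands in for the S1/S2 feature layers (bands x rows x cols).
def window_features(stack: np.ndarray, r: int, c: int, w: int = 1) -> np.ndarray:
    patch = stack[:, r - w:r + w + 1, c - w:c + w + 1]  # bands x 3 x 3
    return patch.ravel()

rng = np.random.default_rng(2)
stack = rng.normal(size=(10, 100, 100))          # 10 placeholder feature layers
rows, cols = rng.integers(1, 99, 50), rng.integers(1, 99, 50)
labels = rng.integers(0, 2, 50)                  # TC / NTC training labels

X = np.stack([window_features(stack, r, c) for r, c in zip(rows, cols)])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:5]))
```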
20 pages, 7070 KiB  
Article
Simulation of Soil Organic Carbon Content Based on Laboratory Spectrum in the Three-Rivers Source Region of China
by Wei Zhou, Haoran Li, Shiya Wen, Lijuan Xie, Ting Wang, Yongzhong Tian and Wenping Yu
Remote Sens. 2022, 14(6), 1521; https://doi.org/10.3390/rs14061521 - 21 Mar 2022
Cited by 6 | Viewed by 2632
Abstract
Soil organic carbon (SOC) changes affect the land carbon cycle and are closely related to climate change. Visible-near infrared spectroscopy (Vis-NIRS) has proven to be an effective tool for predicting soil properties. Spectral transformations are necessary to reduce noise, and ensemble learning methods can improve the estimation accuracy of SOC. Yet it is still unclear which ensemble learning method best exploits the results of spectral transformations to accurately simulate SOC content changes in the Three-Rivers Source Region of China. In this study, 272 soil samples were collected and used to build Vis-NIRS simulation models for SOC content. Ensemble learning was conducted by building stack models. Sixteen combinations were produced from eight spectral transformations (S-G, LR, MSC, CR, FD, LRFD, MSCFD and CRFD) and two machine learning models, RF and XGBoost. The prediction results of these 16 combinations were used to build the first-step stack models (Stack1, Stack2, Stack3). The next-step stack models (Stack4, Stack5, Stack6) were then built after the input variables were optimized based on a threshold on the feature importance of the first-step stack models (importance > 0.05). The results show that the stack-model method obtained higher accuracy than the single model and transformation methods. Among the six stack models, Stack6 (5 selected combinations + XGBoost) showed the best simulation performance (RMSE = 7.3511, R2 = 0.8963, RPD = 3.0139, RPIQ = 3.339) and obtained higher accuracy than Stack3 (16 combinations + XGBoost). Overall, our results suggest that ensemble learning over spectral transformations and simulation models can improve the estimation accuracy of SOC content. This study can provide useful suggestions for the high-precision estimation of SOC in alpine ecosystems.
Figure 1: Distribution of soil sampling sites and different vegetation types in the Three-Rivers Source Region.
Figure 2: Soil spectral curves of the eight kinds of spectral transformations: (a) Savitzky–Golay smooth filtering, (b) multivariate scattering correction, (c) logarithmic reciprocal, (d) continuum removal, (e) first-order differential, and (f–h) first-order differential transformation based on (b–d), respectively. The colored lines represent the spectral curves of soil samples.
Figure 3: Stacking model processes based on three models and eight pre-processing transformations. SVR is the support vector machine regression model, RF is the random forest regression model, and XGBoost is the extreme gradient tree regression model. Stack1–Stack3 are stacked models based on the 16 preprocessor-model combinations; after modeling Stack1–Stack3, Stack4–Stack6 are stacked models based on the selection of important variables. SG is Savitzky–Golay smooth filtering, LR is the logarithmic reciprocal, CR is the continuum removal, MSC is the multivariate scattering correction, FD is the first-order differential, and CRFD, MSCFD and LRFD are the first-order differentials based on CR, MSC and LR, respectively.
Figure 4: The correlation coefficient between SOC content and the eight spectral transformation data sets. SOC is the soil organic carbon, R is the original spectrum merely using Savitzky–Golay smooth filtering, FD is the first-order differential, LR is the logarithmic reciprocal, CR is the continuum removal, MSC is the multivariate scattering correction, and CRFD, MSCFD, and LRFD are the FD based on CR, MSC and LR, respectively.
Figure 5: Feature importance of Stack1, Stack2, and Stack3 (stacking models with SVR, RF, XG (i.e., XGBoost) and the 16 pretreatment-model combinations, respectively).
Figure 6: Scatter diagrams of six representative prediction models: (a) MSCFD–XG, (b) Stack1, (c) Stack2, (d) Stack3, (e) Stack5 and (f) Stack6.
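A minimal sketch of the two-step stacking described above, assuming (as a placeholder) random-forest base learners whose predictions feed an XGBoost meta-model, followed by the importance > 0.05 selection; the real inputs are the 16 transformation-model combinations, and out-of-fold predictions would be used in practice:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

# Two-step stacking sketch in the spirit of Stack3 -> Stack6: base predictions
# become meta-features for XGBoost, then inputs with importance > 0.05 are
# kept and the meta-model is refit. Data are random placeholders.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.1, 200)

base_preds = np.column_stack([
    RandomForestRegressor(n_estimators=100, random_state=s)
    .fit(X, y).predict(X)        # in practice: out-of-fold predictions
    for s in range(4)
])

meta = XGBRegressor(n_estimators=100).fit(base_preds, y)
keep = meta.feature_importances_ > 0.05    # importance threshold from the study
meta2 = XGBRegressor(n_estimators=100).fit(base_preds[:, keep], y)
print(keep, meta2.predict(base_preds[:2, keep]))
```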
23 pages, 7549 KiB  
Article
Using Augmented and Virtual Reality (AR/VR) to Support Safe Navigation on Inland and Coastal Water Zones
by Tomasz Templin, Dariusz Popielarczyk and Marcin Gryszko
Remote Sens. 2022, 14(6), 1520; https://doi.org/10.3390/rs14061520 - 21 Mar 2022
Cited by 17 | Viewed by 6255
Abstract
The aim of this research is to propose a new solution to assist sailors in safe navigation on inland shallow waters using Augmented and Virtual Reality. Despite continuous progress in the methodology of displaying bathymetric data and 3D models of the bottom, there is still a lack of solutions promoting these data and their widespread use. Most existing products present navigation content on 2D/3D maps onscreen. Augmented Reality (AR) technology revolutionises the way digital content is displayed. This paper presents a solution for the use of AR on inland and coastal waterways to increase the safety of sailing and other activities on the water (diving, fishing, etc.). The real-time capability of AR in the proposed mobile application also allows other users on the water to be observed in limited visibility and even at night. The architecture and the prototype Mobile Augmented Reality (MAR) application are presented, as is the required AR content preparation methodology, supported by a Virtual Reality Geographic Information System (VRGIS). The prototype's performance has been validated in water navigation, specifically on exemplary lakes of Warmia and Mazury in Poland. The tests showed the great usefulness of AR for content presentation during the navigation process.
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones)
Graphical abstract
Figure 1: Evaluation of the need to implement AR functionality in a mobile navigation application: (a) Do you use an application supporting spending time by the water? (b) Would you use an AR function in a navigation application on the water?
Figure 2: Identification of prioritised functionalities from the perspective of the mobile application user.
Figure 3: User needs analysis for the visual AR interface.
Figure 4: Types of AR functionality supported by the proposed architecture.
Figure 5: The overall architecture of the proposed AR shallow-water application, highlighting the major modules and their relationships.
Figure 6: Data sources and processing methodology for preparing AR-optimised layers.
Figure 7: Study area.
Figure 8: Marking the navigable route on the largest lake in Poland, Śniardwy Lake, with the depth profile along the route. The inset map presents a sample of cardinal buoy locations (IALA system) used to mark hazardous locations obtained from GIS analyses.
Figure 9: On-site verification of cardinal buoys (measuring platform—Water Rescue Service boat (left); the model with depth readings (right)).
Figure 10: Optimisation of marking locations and verification of hazardous area signs based on a DEM of the bottom using the self-developed VRGIS app. Screens from Oculus Rift with motion controllers: (1) new point creation; (2) setting new attributes for the created point; (3) removing existing points; (4) display of all values after deleting one point; (5) display of all values after adding one point and refreshing the display.
Figure 11: Test application presenting labels and 3D models of spatial objects based on sensor readings.
Figure 12: Concept of AR content matching in dynamic water conditions. Geographic objects are defined by a distance criterion (5 distance thresholds), horizontal directions, and the actual horizontal field of view.
Figure 13: A visualisation of a real-world 3D coordinate system, determined by the viewing frustum created by the visual sensor, with the origin at the centre of the visual sensor. The X and Y axes are parallel to the screen. The Z-axis, which corresponds to the negative orientation direction of the visual sensor, is perpendicular to the screen. The AR layer layout concept for the MAR app prototype is on the right.
Figure 14: Examples of Augmented Reality applied to the navigation process—data fusion of the virtual location of sailors and the Digital Elevation Model of the bottom of the water reservoirs.
Figure 15: Screenshot of the prototype MAR application (right). The AR visual interface presents real-time AR content about other boats and POIs (left).
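Figure 12's content-matching step (distance thresholds plus the actual horizontal field of view) can be sketched as a simple visibility test; the coordinates, field of view and threshold below are invented placeholders, not values from the paper:

```python
import math

# Keep only points of interest inside the camera's horizontal field of view
# and within a distance threshold (one plausible reading of Figure 12).
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def visible(boat, heading, poi, fov=60.0, max_dist=2000.0):
    dist = haversine_m(*boat, *poi)
    rel = (bearing_deg(*boat, *poi) - heading + 180) % 360 - 180
    return dist <= max_dist and abs(rel) <= fov / 2

boat, poi = (53.75, 21.73), (53.76, 21.74)   # placeholder Masurian-lake points
print(visible(boat, heading=30.0, poi=poi))
```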
23 pages, 42852 KiB  
Article
Projections of Climate Change Impacts on Flowering-Veraison Water Deficits for Riesling and Müller-Thurgau in Germany
by Chenyao Yang, Christoph Menz, Maxim Simões De Abreu Jaffe, Sergi Costafreda-Aumedes, Marco Moriondo, Luisa Leolini, Arturo Torres-Matallana, Daniel Molitor, Jürgen Junk, Helder Fraga, Cornelis van Leeuwen and João A. Santos
Remote Sens. 2022, 14(6), 1519; https://doi.org/10.3390/rs14061519 - 21 Mar 2022
Cited by 7 | Viewed by 3532
Abstract
With global warming, grapevine is expected to be increasingly exposed to water deficits occurring at various development stages. In this study, we aimed to investigate the potential impacts of projected climate change on water deficits from the flowering to veraison period for two main white wine cultivars (Riesling and Müller-Thurgau) in Germany. A process-based soil-crop model adapted for grapevine was utilized to simulate the flowering-veraison crop water stress indicator (CWSI) of these two varieties between 1976–2005 (baseline) and 2041–2070 (future period), based on a suite of bias-adjusted regional climate model (RCM) simulations under RCP4.5 and RCP8.5. Our evaluation indicates that the model can capture the early-ripening (Müller-Thurgau) and late-ripening (Riesling) traits, with a mean bias of prediction of ≤2 days and well-reproduced inter-annual variability over more than 60 years. Under the climate projections, the flowering stage is advanced by 10–20 days (more under RCP8.5) for the two varieties, whereas a slightly stronger advancement is found for Müller-Thurgau than for Riesling at the veraison stage. As a result, the flowering-veraison phenophase is mostly shortened for Müller-Thurgau, whereas it is extended by up to two weeks for Riesling in cool and high-elevation areas. The length of the phenophase plays an important role in the projected changes of flowering-veraison mean temperature and precipitation. The late-ripening trait of Riesling makes it more exposed to increased summer temperature (mainly in August), resulting in a higher mean temperature increase for Riesling (1.5–2.5 °C) than for Müller-Thurgau (1–2 °C). As a result, an overall increase in CWSI of up to 15% (ensemble median) is obtained for both varieties, whereas the upper (95th) percentile of simulations shows a strong signal of increased water deficit of up to 30%, mostly in the current winegrowing regions. Intensified water deficit stress can represent a major threat to high-quality white wine production, as only mild water deficits are acceptable. Nevertheless, considerable variability of CWSI was found among RCMs, highlighting the importance of efforts to reduce uncertainties in climate change impact assessment.
Figure 1: Comparison between time series of observed (OB, solid line) and simulated (SM, dashed line) DOY (day of year) for the phenology stages BBCH09 (budburst), BBCH65 (flowering) and BBCH81 (veraison) for the grape varieties (a) Müller-Thurgau and (b) Riesling over 1956–2019 at Eltviller Sonnenberg. The mean (Mean_) and standard deviation (std_) of the time series for observation and simulation are labelled. MAE (days): mean absolute error; RMSE (days): root mean squared error; R2: R square.
Figure 2: Ensemble median of climate model projections for the mean seasonal (April–October) (a) average temperature (°C) and (b) precipitation sum (mm) in the baseline period (1976–2005) and respective mean changes in the future period (2041–2070) under RCP4.5 and RCP8.5.
Figure 3: Ensemble median projections for the mean DOY changes of the flowering (BBCH65) and veraison (BBCH81) stages and for the BBCH65–BBCH81 phenophase between the future period (2041–2070) under RCP4.5 and RCP8.5 and the baseline period (1976–2005) for the grape varieties (a) Müller-Thurgau and (b) Riesling. For the phenophase, positive values indicate an extended period, whereas negative values denote a shortened period. DOY: day of year. Negative DOY denotes the advanced days of the phenology stages.
Figure 4: Ensemble median projections for the mean (a) increased temperature (°C) and (b) change of precipitation sum (mm) in the flowering (BBCH65)–veraison (BBCH81) phenophase between the future period (2041–2070) under RCP4.5 and RCP8.5 and the baseline period (1976–2005). Positive values indicate increased temperature and precipitation in the projection period, whereas negative values denote respective decreases (where applicable).
Figure 5: Projected monthly average temperature changes (°C) (line plot) and precipitation sum changes (%) (bar plot) over the growing season (April–October) for the future period (2041–2070) under (a) RCP4.5 (yellow) and (b) RCP8.5 (red) as compared to the baseline period (1976–2005) in northern (latitude 52°–55°), central (latitude 50°–52°) and southern (latitude 47°–50°) Germany. In the line plot, the lines represent regional mean values with the respective colour band indicating their 90% uncertainty range among RCMs. In the bar plot, bars indicate the regional mean values with the respective error bars indicating their 90% uncertainty range among RCMs. The regional mean flowering stage and veraison stage are also shown for Müller-Thurgau (triangle) and Riesling (diamond). Note that for the same variety in each subplot, the phenology symbol occurring earlier corresponds to the flowering stage, and the next symbol is the veraison stage. The phenology symbols in black, yellow and red are for the baseline, RCP4.5 and RCP8.5, respectively.
Figure 6: Projected changes (%) of the mean crop water stress indicator (CWSI) for the flowering (BBCH65)–veraison (BBCH81) phenophase for (a) Müller-Thurgau and (b) Riesling between the future period (2041–2070) under RCP4.5 and RCP8.5 and the baseline period (1976–2005). The 5th percentile, median and 95th percentile of simulations under different climate models are shown. Positive values indicate increased CWSI (intensified water stress), whereas negative values denote respective reductions (alleviated water stress).
Figure A1: Individual model projections for changes (%) in the mean crop water stress indicator (CWSI) during the flowering (BBCH65)–veraison (BBCH81) phenophase for the future period (2041–2070) under (a) RCP4.5 and (b) RCP8.5 as compared to the baseline period (1976–2005) for Müller-Thurgau. Positive values indicate increased CWSI (intensified water stress), whereas negative values denote respective reductions (alleviated water stress).
Figure A2: Individual model projections for changes (%) in the mean crop water stress indicator (CWSI) during the flowering (BBCH65)–veraison (BBCH81) phenophase for the future period (2041–2070) under (a) RCP4.5 and (b) RCP8.5 as compared to the baseline period (1976–2005) for Riesling. Positive values indicate increased CWSI (intensified water stress), whereas negative values denote respective reductions (alleviated water stress).
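The ensemble statistics reported above (median and 95th-percentile CWSI changes across RCMs) reduce to a percentile calculation over per-model changes; a sketch with placeholder numbers rather than the study's simulations:

```python
import numpy as np

# Percent change in mean flowering-veraison CWSI between baseline and future,
# summarised by the 5th percentile, median and 95th percentile across RCMs.
rng = np.random.default_rng(4)
n_rcms, n_years = 10, 30
cwsi_base = rng.uniform(0.2, 0.4, (n_rcms, n_years))    # 1976-2005 placeholder
cwsi_future = rng.uniform(0.25, 0.5, (n_rcms, n_years)) # 2041-2070 placeholder

change_pct = 100 * (cwsi_future.mean(axis=1) / cwsi_base.mean(axis=1) - 1)
p5, p50, p95 = np.percentile(change_pct, [5, 50, 95])
print(f"CWSI change: {p5:.1f}% / {p50:.1f}% / {p95:.1f}% (5th/median/95th)")
```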
28 pages, 24789 KiB  
Review
Giant Planet Atmospheres: Dynamics and Variability from UV to Near-IR Hubble and Adaptive Optics Imaging
by Amy A. Simon, Michael H. Wong, Lawrence A. Sromovsky, Leigh N. Fletcher and Patrick M. Fry
Remote Sens. 2022, 14(6), 1518; https://doi.org/10.3390/rs14061518 - 21 Mar 2022
Cited by 6 | Viewed by 5340
Abstract
Each of the giant planets, Jupiter, Saturn, Uranus, and Neptune, has been observed by at least one robotic spacecraft mission. However, these missions are infrequent; Uranus and Neptune have each had only a single flyby, by Voyager 2. The Hubble Space Telescope, particularly the Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) instruments, and large ground-based telescopes with adaptive optics systems have enabled high-spatial-resolution imaging at a higher cadence, and over a longer time span, than can be achieved with targeted missions to these worlds. These facilities offer a powerful combination of high spatial resolution, often <0.05", and broad wavelength coverage, from the ultraviolet through the near infrared, resulting in compelling studies of the clouds, winds, and atmospheric vertical structure. This coverage allows comparisons of atmospheric properties between the planets, as well as among different regions across each planet. Temporal variations in winds, cloud structure, and color over timescales of days to years have been measured for all four planets. With several decades of data already obtained, we can now begin to investigate seasonal influences on dynamics and aerosol properties, despite orbital periods ranging from 12 to 165 years. Future facilities will enable even greater spatial resolution and, combined with our existing long record of data, will continue to advance our understanding of atmospheric evolution on the giant planets.
(This article belongs to the Special Issue Remote Sensing Observations of the Giant Planets)
Figure 1: Timeline of spacecraft observations relative to planetary season. Solid lines indicate the subsolar latitude (a proxy for season) for each planet and horizontal dashed lines indicate equinox crossings; peaks and troughs indicate northern and southern hemisphere summer solstices, respectively. The major missions listed in Table 1 are plotted, demonstrating the lack of coverage for Uranus and Neptune. The Hubble era began in late 1993 (after the installation of corrective optics) and is indicated by the shaded area on the right.
Figure 2: The giant planets' appearance at UV and visible wavelengths from Hubble. Quasi-true-color images from WFC3 data from the OPAL program show the differences amongst the planets, with Jupiter and Saturn having shades of red, yellow and brown, while Uranus and Neptune are bluer. These color composites comprise the F631N (R), F502N (G), and F395N (B) filters for Jupiter (2021) and Saturn (2021), and the F657N (R), F547M (G), and F467M (B) filters for Uranus (2015) and Neptune (2018). Images are not to scale.
Figure 3: The giant planets' appearance at near-IR wavelengths with adaptive optics. The Jupiter (VLT, 2008) false-color composite at 2.02 (R), 2.14 (G), and 2.16 (B) microns highlights high polar and equatorial hazes and storms [45]. Saturn (Gemini, 2009) in false color at 2 (B), 2.12 (G), and 2.17 (R) microns shows equatorial structure [74]; credit: Gemini Observatory/AURA/Henry Roe, Lowell Observatory/Emily Schaller, Institute for Astronomy, University of Hawai'i. A Uranus (Keck, 2004) false-color composite at 1.26 (B), 1.62 (G), and 2.1 (R) microns highlights both its cloud activity and its rings [29]. A Neptune (Keck, 2007) false-color composite from observations at 1.65 (B) and 2.1 (R) microns highlights elongated small clouds [41]; credit: M. van Dam, E. Schaller and W. M. Keck Observatory. Images are not to scale.
Figure 4: Jupiter's appearance at UV and visible wavelengths. September 2021 Hubble data show variation in Jupiter's latitude bands with wavelength. Jupiter is dark in the 889 nm methane gas absorption band, with the highest features (the GRS and the polar haze caps) appearing brightest; artificial fringing is apparent in that image, and in all long-wavelength narrow-band WFC3 filters.
Figure 5: Jupiter's enduring cloud features in Hubble WFC3 data from September 2021. This map spans a subset of latitudes and ~136° of longitude; 1° of longitude = ~12,500 km at the equator. Jupiter's bands contain multiple closed-circulation vortices, both cyclones and anticyclones, including the GRS and Oval BA. Waves and turbulent structures are also visible.
Figure 6: Saturn's appearance at UV and visible wavelengths. September 2021 Hubble data show Saturn's cloud band variations with wavelength. At visible wavelengths, Saturn's cloud bands are clearly seen, and the planet is very dark in the 889 nm methane absorption band.
Figure 7: Saturn's major northern hemisphere cloud features in Hubble WFC3 data from June 2018. Individual latitude bands have slightly different colors in this contrast- and sharpness-enhanced map over ~180° of longitude; 1° of longitude = ~10,500 km at the equator. The small vortex near 40° is a remnant of the large 2010 storm, and small thunderstorm-like clouds are visible at mid latitudes. Inset: the polar hexagon (unmapped) was prominent in 2018, and its boundaries varied at different wavelengths, giving it a multi-colored border.
Figure 8: Uranus's appearance at visible and near-IR wavelengths. September 2015 Hubble and August 2014 Keck data [24] show Uranus's cloud band variations with wavelength. The north pole is to the right. Uranus is more uniform at short wavelengths than at long wavelengths, with banding and storm contrast increasing at longer wavelengths.
Figure 9: Uranus's cloud features in Hubble ACS and WFC3 and Keck H-band images from 2004 to 2020 [24,25,69,111,112,113]. The Hubble ACS color composite in 2006 uses 755, 658, and 550 nm in the R, G, and B channels, respectively, and in 2020 the WFC3 845 nm filter is shown [111]. The 2012 Keck image is a high signal-to-noise, de-rotated, average image, high-pass filtered to show low-contrast cloud details [24]. Bright clouds are prominent in the H-band images, and the south polar region has faded while the north polar region has brightened with the changing Uranian seasons.
Figure 10: Neptune's appearance at visible and near-IR wavelengths. November 2018 Hubble and August 2018 Keck data [118] show Neptune's cloud band variations with wavelength. Neptune's 2018 northern great dark spot is visible at shorter wavelengths to the upper left, while companion clouds are brighter at longer wavelengths.
Figure 11: Neptune's major cloud features in Hubble WFC3 and Keck H-band images from 2003 to 2021. The Hubble color composite in 2011 uses 845, 631, and 467 nm in the R, G, and B channels, respectively, while in 2015 and 2021 the G channel is 547 nm, and contrast and sharpness have been enhanced. Bright clouds are prominent in the H-band images, both in bands and in dark spot companion clouds, and the SPF appears on some dates [22,35,124]; dark spots are denoted in 2015 and 2021.
Figure 12: The giant planets' zonally averaged winds measured over time. In all four panels, the black profile is from Voyager imaging measurements; for Neptune, the best-fit profile for southern latitudes is mirrored above 50°N [123]. Jupiter's winds show only small variations between Voyager (1979), Cassini (2000), and Hubble (2019) imaging measurements [94,126,131], primarily in the strongest eastward jet; map from Hubble 2015 data. Saturn shows substantial change at the equator between the Voyager (1981), Cassini (2004–2006), and Hubble (2020) eras [14,94]; map from Cassini and Voyager 2 by B. Jonnson. Uranus shows very little variation between Voyager (1986), Keck (2007), and Keck/Gemini (2012) [24,105,116]; map from Hubble 2021 data, mirrored for southern hemisphere coverage. Neptune's winds show large dispersion from the Voyager (1989) profile and within a single filter (Keck H-band measurements are shown for 2013 and 2014) [34,123]; map from Voyager 2 up to 50°, B. Jonnson.
Figure 13: Planetary waves at multiple scales; the solid line indicates 10,000 km. Jupiter in 2015 at 395 nm shows fine-scale waves (dashed arrows) superimposed on a cyclone/anticyclone vortex street at 16°N latitude [140]. Saturn in 2011 at 502 nm shows a wavy structure along the edges of the expanding storm plume at 40°N latitude. Uranus in 2012 at 1.6 microns shows braided equatorial waves [24]. Neptune in 2021 at 467 nm shows a wavenumber-1 feature centered at 55°S latitude.
Full article ">Figure 14
<p>Cloud layers on the giant planets. Calculations use composition described in [<a href="#B149-remotesensing-14-01518" class="html-bibr">149</a>] (and references therein) for non-condensable species and [<a href="#B148-remotesensing-14-01518" class="html-bibr">148</a>] (and references therein) for condensable species, with adiabatic model temperatures (dashed blue curves) matched to Voyager 2 radio occultation temperature profiles (solid blue curves, [<a href="#B73-remotesensing-14-01518" class="html-bibr">73</a>]), in the model of [<a href="#B71-remotesensing-14-01518" class="html-bibr">71</a>,<a href="#B149-remotesensing-14-01518" class="html-bibr">149</a>,<a href="#B150-remotesensing-14-01518" class="html-bibr">150</a>]. Cloud condensation efficiency is plotted for each layer. Qualitative vertical ranges of photochemical hazes are shown by grey shading for comparison.</p>
Full article ">Figure 15
<p>Normalized solar insolation patterns from 1970 to 2030, neglecting ring shadows. The solar insolation pattern is governed by its orbital elements (period, inclination, eccentricity), as well as axial tilt, as shown in <a href="#remotesensing-14-01518-t003" class="html-table">Table 3</a>; black indicates no illumination. Jupiter, the only giant planet with little tilt, experiences only slight differences between the northern and southern hemispheres and receives ample equatorial insolation. Saturn has higher insolation at the south pole than at the north pole at their respective solstices, and ring shadows further block the winter hemisphere. Uranus was first observed near southern summer solstice and has not yet reached northern summer solstice; large portions of the planet receive no illumination for decades. Neptune has only been observed with southern illumination, due to its long orbital period.</p>
Full article ">
20 pages, 2795 KiB  
Article
A GNSS/5G Integrated Three-Dimensional Positioning Scheme Based on D2D Communication
by Wei Zhang, Yuanxi Yang, Anmin Zeng and Yangyin Xu
Remote Sens. 2022, 14(6), 1517; https://doi.org/10.3390/rs14061517 - 21 Mar 2022
Cited by 10 | Viewed by 3174
Abstract
The fifth generation (5G) communication has the potential to achieve ubiquitous positioning when integrated with a global navigation satellite system (GNSS). The device-to-device (D2D) communication, serving as a key technology in the 5G network, provides the possibility of cooperative positioning with high-density property. [...] Read more.
The fifth generation (5G) communication has the potential to achieve ubiquitous positioning when integrated with a global navigation satellite system (GNSS). Device-to-device (D2D) communication, a key technology in the 5G network, provides the possibility of high-density cooperative positioning. The mobile users (MUs) collaborate to jointly share position and measurement information, which makes more references available for positioning. In this paper, a GNSS/5G integrated three-dimensional positioning scheme based on D2D communication is proposed, where the time of arrival (TOA) and received signal strength (RSS) measurements are jointly utilized in the 5G network. Density-based spatial clustering of applications with noise (DBSCAN) is exploited to reduce the position uncertainty of the cooperative nodes, and the positions of the requesting nodes are obtained simultaneously. The particle filter (PF) algorithm is further applied to improve the position accuracy of the requesting nodes. Numerical results show that the position deviation of the cooperative nodes can be significantly decreased and that the proposed algorithm performs better than the nonintegrated one. The DBSCAN brings an increase of about 50% in positioning accuracy compared with GNSS results, and the PF further increases the accuracy by about 8%. It is also verified that the algorithm suits both fixed and dynamic conditions well. Full article
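A minimal sketch of the clustering step described above, assuming scikit-learn's DBSCAN: candidate position estimates for one cooperative node are clustered, noise points are discarded, and the centroid of the largest cluster is taken as the refined position. The parameter values (eps, min_samples) and the candidate-generation noise model are illustrative assumptions, not the paper's settings.

```python
# Sketch: DBSCAN-based suppression of outlier position candidates for
# one cooperative node. All numbers are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
true_pos = np.array([10.0, 20.0, 1.5])                    # metres (x, y, z)
candidates = true_pos + rng.normal(0, 0.5, (40, 3))       # GNSS-like scatter
candidates = np.vstack([candidates,
                        true_pos + rng.normal(8, 2, (5, 3))])  # NLOS-like outliers

labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(candidates)
valid = labels >= 0                                       # drop noise (-1)
best = np.bincount(labels[valid]).argmax()                # largest cluster
refined = candidates[labels == best].mean(axis=0)         # cluster centroid
print("refined position:", np.round(refined, 2))
```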
Figure 1
System model of GNSS/5G three-dimensional integrated positioning.
Figure 2
Two-dimensional positioning based on trilateration: (a) the positioning procedure when the positions of three reference nodes are accurate, and (b) the comparison between the positioning procedure with accurate and inaccurate reference nodes.
Figure 3
The basic definitions of DBSCAN.
Figure 4
Distribution of the data points before (a) and after (b) the DBSCAN process based on the TOA measurements.
Figure 5
Performance comparison between the proposed algorithm and other situations for the requesting nodes.
Figure 6
Performance comparison between the algorithms based on different measurements for the requesting nodes.
Figure 7
Positioning results of a moving MU.
Figure 8
RMSEs versus the number of particles.
18 pages, 3031 KiB  
Article
A Supervoxel-Based Random Forest Method for Robust and Effective Airborne LiDAR Point Cloud Classification
by Lingfeng Liao, Shengjun Tang, Jianghai Liao, Xiaoming Li, Weixi Wang, Yaxin Li and Renzhong Guo
Remote Sens. 2022, 14(6), 1516; https://doi.org/10.3390/rs14061516 - 21 Mar 2022
Cited by 15 | Viewed by 4093
Abstract
As an essential part of point cloud processing, autonomous classification is conventionally used in various multifaceted scenes and non-regular point distributions. State-of-the-art point cloud classification methods mostly process raw point clouds, using a single point as the basic unit and calculating point cloud [...] Read more.
As an essential part of point cloud processing, autonomous classification is conventionally used in various multifaceted scenes and non-regular point distributions. State-of-the-art point cloud classification methods mostly process raw point clouds, using a single point as the basic unit and calculating point cloud features by searching local neighbors via the k-neighborhood method. Such methods tend to be computationally inefficient and have difficulty obtaining accurate feature descriptions due to inappropriate neighborhood selection. In this paper, we propose a robust and effective point cloud classification approach that integrates point cloud supervoxels and their locally convex connected patches into a random forest classifier, which effectively improves the accuracy of point cloud feature calculation and reduces the computational cost. Considering the different types of point cloud feature descriptions, we divide features into three categories (point-based, eigen-based, and grid-based) and accordingly design three distinct feature calculation strategies to improve feature reliability. Two International Society for Photogrammetry and Remote Sensing benchmark tests show that the proposed method achieves state-of-the-art performance, with average F1-scores of 89.16 and 83.58, respectively. The successful classification of point clouds with great variation in elevation also demonstrates the reliability of the proposed method in challenging scenes. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
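To make the pipeline concrete, here is a hedged sketch of the classification stage under simplifying assumptions: each supervoxel is collapsed to a single eigenvalue-based feature vector (the paper also uses point-based and grid-based descriptors) and fed to a random forest. The descriptor below is a common eigenvalue construction, not necessarily the paper's exact one.

```python
# Sketch: eigen-based supervoxel features + random forest classification.
# Feature extraction is a stand-in for the paper's richer descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def eigen_features(points):
    """Eigenvalue-based shape descriptors of one supervoxel's points."""
    cov = np.cov(points.T)                          # 3x3 covariance
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
    l1, l2, l3 = w / w.sum()                        # normalized eigenvalues
    return [l1 - l2, l2 - l3, l3]                   # linearity-, planarity-, scatter-like

rng = np.random.default_rng(1)
X = np.array([eigen_features(rng.normal(size=(60, 3))) for _ in range(200)])
y = rng.integers(0, 4, 200)                         # e.g., ground/roof/tree/other
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print(clf.predict(X[:5]))                           # labels for first 5 supervoxels
```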
Figure 1
Supervoxel-based random forest framework for point cloud classification. The equation of the random forest model at the bottom-left refers to the least-squares method applied in the model to predict unlabeled points, in which Y represents the label, X represents an individual centroid point, and Θ represents the coefficient matrix.
Figure 2
Illustration of two-level graphical model generation. (a) The fundamental process of supervoxel-based object segmentation. (b) The octree structure used for supervoxel clustering. (c) The locally convex connected patches (LCCP) segmentation scheme. Colored arrows show the corresponding normal vectors of supervoxels.
Figure 3
Locally convex connected patches (LCCP) neighborhood optimization. The neighborhood ranges used to calculate eigenvalues are shown at the bottom.
Figure 4
Grid-based elevation computation and filtering. (a) The illustrated point cloud data (left) and the 2D-projected data with grid segmentation (right). (b) The grid filter examining anomalies in the calculated elevation values in grid squares.
Figure 5
Classification results of two Toronto site areas. (a) The classification result of Area 1 and (b) the classification result of Area 2.
Figure 6
Comparison of the classification accuracy before and after using the grid-based elevation features on the Toronto sites.
Figure 7
Misclassification cases in which roof points were recognized as ground points in the Toronto sites. (a–c) refer to different types of roof-to-ground misclassification results.
Figure 8
Classification results of the Vaihingen sites.
Figure 9
Misclassified regions in the Vaihingen site caused by unexpected connections between supervoxels of different objects. (a–c) show roof-to-vegetation misclassification in different minor scenes.
Figure 10
Classification results of airborne LiDAR-generated Shenzhen sites. The three selected sites are marked as (a–c).
Figure 11
Misclassification cases in the Shenzhen dataset. (a) Faults due to edge interruption. (b) Faults due to untrained object shapes.
21 pages, 2205 KiB  
Article
Estimating High-Resolution PM2.5 Concentrations by Fusing Satellite AOD and Smartphone Photographs Using a Convolutional Neural Network and Ensemble Learning
by Fei Wang, Shiqi Yao, Haowen Luo and Bo Huang
Remote Sens. 2022, 14(6), 1515; https://doi.org/10.3390/rs14061515 - 21 Mar 2022
Cited by 7 | Viewed by 3037
Abstract
Aerosol optical depth (AOD) data derived from satellite products have been widely used to estimate fine particulate matter (PM2.5) concentrations. However, existing approaches to estimate PM2.5 concentrations are invariably limited by the availability of AOD data, which can be missing [...] Read more.
Aerosol optical depth (AOD) data derived from satellite products have been widely used to estimate fine particulate matter (PM2.5) concentrations. However, existing approaches to estimate PM2.5 concentrations are invariably limited by the availability of AOD data, which can be missing over large areas due to satellite measurements being obstructed by, for example, clouds, snow cover or high concentrations of air pollution. In this study, we addressed this shortcoming by developing a novel method for determining PM2.5 concentrations with high spatial coverage by integrating AOD-based estimations and smartphone photograph-based estimations. We first developed a multiple-input fuzzy neural network (MIFNN) model to measure PM2.5 concentrations from smartphone photographs. We then designed an ensemble learning model (AutoELM) to determine PM2.5 concentrations based on the Collection-6 Multi-Angle Implementation of Atmospheric Correction AOD product. The R2 values of the MIFNN model and AutoELM model are 0.85 and 0.80, respectively, which are superior to those of other state-of-the-art models. Subsequently, we used crowdsourced smartphone photographs obtained from social media to validate the transferability of the MIFNN model, which we then applied to generate smartphone photograph-based estimates of PM2.5 concentrations. These estimates were fused with AOD-based estimates to generate a new PM2.5 distribution product with broader coverage than existing products, equating to an average increase of 12% in map coverage of PM2.5 concentrations, which grows to an impressive 25% increase in map coverage in densely populated areas. Our findings indicate that the robust estimation accuracy of the ensemble learning model is due to its detection of nonlinear correlations and high-order interactions. Furthermore, our findings demonstrate that the synergy of smartphone photograph-based estimations and AOD-based estimations generates significantly greater spatial coverage of PM2.5 distribution than AOD-based estimations alone, especially in densely populated areas where more smartphone photographs are available. Full article
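The coverage-gain idea lends itself to a small sketch: where the AOD-based PM2.5 grid has gaps (clouds, snow, heavy pollution), photograph-based point estimates fill them in. The inverse-distance weighting and all values below are illustrative assumptions rather than the paper's fusion rule.

```python
# Sketch: filling gaps in an AOD-based PM2.5 grid with photo-based
# point estimates via inverse-distance weighting (IDW). Illustrative only.
import numpy as np

aod_pm25 = np.full((5, 5), np.nan)
aod_pm25[:3, :] = 55.0                              # valid AOD retrievals
photo_xy = np.array([[4.0, 1.0], [3.5, 3.0]])       # photo locations (row, col)
photo_pm = np.array([80.0, 72.0])                   # MIFNN-style point estimates

fused = aod_pm25.copy()
for r in range(5):
    for c in range(5):
        if np.isnan(fused[r, c]):                   # gap cell -> IDW fill
            d = np.linalg.norm(photo_xy - [r, c], axis=1) + 1e-6
            w = 1.0 / d**2
            fused[r, c] = np.sum(w * photo_pm) / np.sum(w)
print(np.round(fused, 1))
```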
Figure 1
Study area and spatial distribution of ground-level monitoring stations.
Figure 2
A brief flowchart of our method.
Figure 3
Multi-layer stack ensemble in AutoGluon.
Figure 4
Three-kilometer buffers around monitoring stations and the locations at which smartphone photographs were taken.
Figure 5
Histogram of PM2.5 concentrations of the PPCP dataset.
Figure 6
Correlation between PM2.5 concentrations measured at ground-based monitoring stations and PM2.5 concentrations estimated by the MIFNN model. The red dashed line is the 1:1 line.
Figure 7
Examples of smartphone photographs with corresponding PM2.5 concentrations measured at ground-based monitoring stations and PM2.5 concentrations estimated by the MIFNN model.
Figure 8
Density scatter plots of PM2.5 concentrations measured at ground-based monitoring stations and PM2.5 concentrations estimated by various methods. The data in (a–d) are the testing data and training data for AutoELM, and the CV results of OLS regression and GWR, respectively. The red dashed lines represent the 1:1 line.
Figure 9
Annual mean estimated PM2.5 concentrations. The black lines are major roads in Beijing.
Figure 10
Correlation between PM2.5 concentrations measured by ground-based monitoring stations and those estimated by the MIFNN model. The red dashed line is the 1:1 line.
Figure 11
Locations of smartphone photographs in Beijing obtained from social media in November 2021, with corresponding PM2.5 concentrations estimated by the MIFNN model.
Figure 12
Ratios of days with estimates of PM2.5 concentrations to total days in densely populated regions, before (left) and after (right) the introduction of smartphone photograph-based estimates of PM2.5 concentrations.
20 pages, 3608 KiB  
Article
Ground Maneuvering Target Focusing via High-Order Phase Correction in High-Squint Synthetic Aperture Radar
by Lei Ran, Zheng Liu and Rong Xie
Remote Sens. 2022, 14(6), 1514; https://doi.org/10.3390/rs14061514 - 21 Mar 2022
Cited by 4 | Viewed by 2129
Abstract
Moving target imaging in high-squint synthetic aperture radar (SAR) shows great potential for reconnaissance and surveillance tasks. For the desired resolution, high-squint SAR has a long-time coherent processing interval (CPI). In this case, the maneuvering motion of the moving target usually causes high-order [...] Read more.
Moving target imaging in high-squint synthetic aperture radar (SAR) shows great potential for reconnaissance and surveillance tasks. To reach the desired resolution, high-squint SAR requires a long coherent processing interval (CPI). In this case, the maneuvering motion of the moving target usually introduces high-order phase terms into the echoed data, which cannot be neglected for precise focusing. Many ground moving target imaging (GMTIm) algorithms have been proposed in the literature, but some high-order phase terms remain uncompensated in high-squint SAR. To address this problem, a high-order phase correction-based GMTIm (HPC-GMTIm) method is proposed in this paper. We assume that the target of interest has a constant velocity within each subaperture CPI but maneuvering motion parameters over the whole CPI. Within the short subaperture CPI, the target signal can be simplified to a third-order phase expression, and the instantaneous Doppler frequency (DF) is estimated by time–frequency analysis tools, including the Hough transform and the fractional Fourier transform. Over the whole CPI, the subaperture instantaneous DF estimates are combined to form a total least-squares problem whose solution gives the undetermined phase coefficients. Using the proposed local-to-global processing chain, all high-order phase terms can be estimated and corrected, which outperforms existing methods. The effectiveness of the HPC-GMTIm method is demonstrated on real measured high-squint SAR data. Full article
(This article belongs to the Special Issue Radar High-Speed Target Detection, Tracking, Imaging and Recognition)
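The local-to-global step can be illustrated with a short sketch: subaperture estimates of the instantaneous DF are stacked into a linear system for the polynomial phase coefficients. The fourth-order phase model and the use of ordinary (rather than total) least squares here are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch: recover polynomial phase coefficients a1..a4 of
# phi(t) = a1*t + a2*t^2 + a3*t^3 + a4*t^4 from subaperture
# instantaneous-DF estimates f(t) = phi'(t) / (2*pi).
import numpy as np

t = np.linspace(-0.5, 0.5, 8)                 # subaperture centre times (s)
a_true = np.array([120.0, 35.0, -8.0, 2.5])   # assumed phase coefficients
f_meas = (a_true[0] + 2*a_true[1]*t + 3*a_true[2]*t**2
          + 4*a_true[3]*t**3) / (2*np.pi)
f_meas += np.random.default_rng(2).normal(0, 0.05, t.size)  # estimation noise

# Design matrix: derivative of each phase monomial, divided by 2*pi.
A = np.column_stack([np.ones_like(t), 2*t, 3*t**2, 4*t**3]) / (2*np.pi)
a_hat, *_ = np.linalg.lstsq(A, f_meas, rcond=None)
print("estimated phase coefficients:", np.round(a_hat, 2))
```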
Figure 1
High-squint SAR geometry with a ground maneuvering moving target.
Figure 2
Range cell migration and azimuth phase comparison. (a) Linear range walk difference. (b) Quadratic phase difference. (c) Third-order phase difference.
Figure 3
Flowchart of the proposed HPC-GMTIm algorithm.
Figure 4
Subaperture processing results. The whole aperture is divided into 8 subapertures, and all 8 subaperture images are well focused by the proposed HPC-GMTIm algorithm.
Figure 5
Whole-aperture image. (a) HPC-GMTIm algorithm. (b) GHHAF method. (c) Cross-range profile comparison.
Figure 6
Moving target detection for T1. (a) Range–Doppler domain. (b) Moving target detection result.
Figure 7
The stationary scene image after eliminating the energy of T1.
Figure 8
The envelope of T1. (a) Before correction. (b) After correction.
Figure 9
The imaging result of T1. (a) HPC-GMTIm (entropy = 2.97). (b) GHHAF method (entropy = 3.36).
Figure 10
The stationary scene image containing moving target T2.
Figure 11
Moving target detection for T2. (a) Subaperture images before detection. (b) Subaperture images after detection.
Figure 12
The envelope of T2. (a) Before correction. (b) After correction.
Figure 13
The imaging result of T2. (a) HPC-GMTIm (entropy = 3.62). (b) GHHAF method (entropy = 4.17).
Figure 14
Moving target detection for T3. (a) Range–Doppler domain. (b) Moving target detection result.
Figure 15
The stationary scene image corresponding to T3.
Figure 16
The envelope of T3. (a) Before correction. (b) After correction.
Figure 17
The imaging result of T3. (a) HPC-GMTIm (entropy = 2.48). (b) GHHAF method (entropy = 2.81).
Figure 18
The stationary scene containing moving target T4.
Figure 19
Moving target detection for T4. (a) Subaperture images before detection. (b) Subaperture images after detection.
Figure 20
The envelope of T4. (a) Before RCMC. (b) After RCMC.
Figure 21
The imaging result of T4. (a) HPC-GMTIm (entropy = 3.46). (b) GHHAF method (entropy = 4.03).
22 pages, 2443 KiB  
Article
Unsupervised Remote Sensing Image Super-Resolution Guided by Visible Images
by Zili Zhang, Yan Tian, Jianxiang Li and Yiping Xu
Remote Sens. 2022, 14(6), 1513; https://doi.org/10.3390/rs14061513 - 21 Mar 2022
Cited by 6 | Viewed by 3410
Abstract
Remote sensing images are widely used in many applications. However, due to being limited by the sensors, it is difficult to obtain high-resolution (HR) images from remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing [...] Read more.
Remote sensing images are widely used in many applications. However, limited by the sensors, it is difficult to obtain high-resolution (HR) remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing a low-resolution (LR) remote sensing image guided by an unpaired HR visible natural image. To this end, an unsupervised visible image-guided remote sensing image super-resolution network (UVRSR) is built. The network is divided into two learnable branches: a visible image-guided branch (VIG) and a remote sensing image-guided branch (RIG). As HR visible images can provide rich textures and sufficient high-frequency information, the purpose of VIG is to treat them as targets and make full use of their advantages in reconstruction. Specifically, we first use a CycleGAN to drag the LR visible natural images to the remote sensing domain; then, we apply an SR network to upscale these simulated remote sensing domain LR images. However, the domain gap between SR remote sensing images and HR visible targets is large. To enforce domain consistency, we propose a novel domain-ruled discriminator in the reconstruction. Furthermore, inspired by the zero-shot super-resolution network (ZSSR), which explores the internal information of remote sensing images, we add a remote sensing domain inner study to train the SR network in RIG. Extensive experiments show that UVRSR achieves superior results compared with state-of-the-art unpaired and remote sensing SR methods on several challenging remote sensing image datasets. Full article
(This article belongs to the Special Issue Advanced Super-resolution Methods in Remote Sensing)
Figure 1
The architecture of the proposed UVRSR. The proposed network consists of two learnable training branches: VIG and RIG. VIG learns detailed texture and high-frequency information in the reconstruction from the HR visible images. In RIG, the remote sensing image inner relationship is learned from LR remote sensing images.
Figure 2
The architecture of CycleGAN in VIG.
Figure 3
The architecture of the generators and discriminators in this paper.
Figure 4
Different domain gaps between the real-world visible domain/visible clean domain and the remote sensing domain/visible clean domain.
Figure 5
The architecture of the SR network.
Figure 6
The architecture of the proposed DR discriminator.
Figure 7
Datasets in the paper. The first row depicts images from the DIV2K dataset. The second row shows images from the UC Merced dataset, and the last row shows images from the NWPU-RESISC45 dataset.
Figure 8
Visual comparison on two remote sensing datasets with a scale factor of 2×. For results on the UC Merced dataset, see the upper images; for the NWPU-RESISC45 dataset, see the lower images. The LR inputs and SR outputs are 200 × 200 and 400 × 400 pixels, respectively. We crop the ROIs in the SR images for better comparison and additionally compute the NIQE and PI for each ROI, for which lower values denote better quality. UVRSR clearly outperforms all comparable methods.
Figure 9
Visual comparison on two remote sensing datasets with a scale factor of 4×. For results on the UC Merced dataset, see the upper images; for the NWPU-RESISC45 dataset, see the lower images. The LR inputs and SR outputs are 100 × 100 and 400 × 400 pixels, respectively. We crop ROIs in the SR images for better comparison and compute the NIQE and PI for each ROI, for which lower values denote better quality. UVRSR again ranks first among all methods.
Figure 10
Visual comparison on two remote sensing datasets with a scale factor of 8×. For results on the UC Merced dataset, see the upper images; for the NWPU-RESISC45 dataset, see the lower images. The LR inputs and SR outputs are 80 × 80 and 640 × 640 pixels, respectively. We crop ROIs in the SR images for better comparison and compute the NIQE and PI for each ROI, for which lower values denote better quality. UVRSR yields the best performance among all methods.
Figure 11
Visual comparison with different discriminator settings. The results are trained and tested on the UC Merced dataset with a scale of 4×.
14 pages, 1388 KiB  
Article
A Proposed Satellite-Based Crop Insurance System for Smallholder Maize Farming
by Wonga Masiza, Johannes George Chirima, Hamisai Hamandawana, Ahmed Mukalazi Kalumba and Hezekiel Bheki Magagula
Remote Sens. 2022, 14(6), 1512; https://doi.org/10.3390/rs14061512 - 21 Mar 2022
Cited by 2 | Viewed by 3292
Abstract
Crop farming in Sub-Saharan Africa is constantly confronted by extreme weather events. Researchers have been striving to develop different tools that can be used to reduce the impacts of adverse weather on agriculture. Index-based crop insurance (IBCI) has emerged to be one of [...] Read more.
Crop farming in Sub-Saharan Africa is constantly confronted by extreme weather events. Researchers have been striving to develop tools that can reduce the impacts of adverse weather on agriculture. Index-based crop insurance (IBCI) has emerged as one of the tools that could potentially hedge farmers against weather-related risks. However, IBCI is still constrained by poor product design and basis risk. This study complements efforts to improve IBCI design by evaluating the performance of Tropical Applications of Meteorology using SATellite data and ground-based observations (TAMSAT) and Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) in estimating rainfall at different spatial scales over the maize-growing season in a smallholder farming area in South Africa. Results show that CHIRPS outperforms TAMSAT and produces better results at 20-day and monthly time steps. The study then uses CHIRPS and a crop water requirements (CWR) model to derive IBCI thresholds and an IBCI payout model. Results of CWR modeling show that this proposed IBCI system can cover the development, mid-season, and late-season stages of maize growth in the study area. The study then uses this information to calculate the weight, trigger, exit, and tick for each of these growth stages. Although this approach is premised on the prevailing conditions in the study area, it can be applied in other areas with different growing conditions to improve IBCI design. Full article
(This article belongs to the Special Issue Monitoring Crops and Rangelands Using Remote Sensing)
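The trigger/exit/tick logic implied by the abstract admits a compact sketch: rainfall below a stage's trigger starts a payout that ramps linearly to the full stage amount at the exit. The trigger and exit values below are taken from Figure 4 of the paper; the stage weights and sum insured are our assumptions.

```python
# Sketch: stage-wise index-based crop insurance payout.
# Triggers/exits from Figure 4; weights and sum insured are assumed.
stages = {                      # name: (trigger mm, exit mm, weight)
    "development": (96.27, 52.40, 0.3),
    "mid-season":  (140.87, 73.55, 0.5),
    "late-season": (67.52, 24.94, 0.2),
}
sum_insured = 1000.0            # currency units, illustrative

def payout(rainfall_by_stage):
    total = 0.0
    for name, (trig, exit_, w) in stages.items():
        r = rainfall_by_stage[name]
        # Linear ramp between trigger and exit; the slope is the "tick".
        frac = min(max((trig - r) / (trig - exit_), 0.0), 1.0)
        total += w * frac * sum_insured
    return total

# Example: dry development and late season, wet mid-season.
print(payout({"development": 70.0, "mid-season": 150.0, "late-season": 20.0}))
```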
Figure 1
Study area (a) within South Africa (b).
Figure 2
WSs and the spatial scales at which analyses were performed. Source: adapted from [26].
Figure 3
Correlations of WS and satellite data at (a) daily, (b) 20-day, and (c) monthly intervals. (d) Mean correlations between WS and satellite data.
Figure 4
Development stage (trigger = 96.27 mm, exit = 52.40 mm), mid-season (trigger = 140.87 mm, exit = 73.55 mm), and late-season (trigger = 67.52 mm, exit = 24.94 mm).
31 pages, 6054 KiB  
Article
Calibration and Validation of SWAT Model by Using Hydrological Remote Sensing Observables in the Lake Chad Basin
by Ali Bennour, Li Jia, Massimo Menenti, Chaolei Zheng, Yelong Zeng, Beatrice Asenso Barnieh and Min Jiang
Remote Sens. 2022, 14(6), 1511; https://doi.org/10.3390/rs14061511 - 21 Mar 2022
Cited by 29 | Viewed by 5950
Abstract
Model calibration and validation are challenging in poorly gauged basins. We developed and applied a new approach to calibrate hydrological models using distributed geospatial remote sensing data. The Soil and Water Assessment Tool (SWAT) model was calibrated using only twelve months of remote [...] Read more.
Model calibration and validation are challenging in poorly gauged basins. We developed and applied a new approach to calibrate hydrological models using distributed geospatial remote sensing data. The Soil and Water Assessment Tool (SWAT) model was calibrated using only twelve months of remote sensing data on actual evapotranspiration (ETa), geospatially distributed over the 37 sub-basins of the Lake Chad Basin in Africa. Global sensitivity analysis was conducted to identify influential model parameters by applying the Sequential Uncertainty Fitting Algorithm version 2 (SUFI-2), included in the SWAT Calibration and Uncertainty Program (SWAT-CUP). This procedure is designed to deal with spatially variable parameters and estimates either multiplicative or additive corrections applicable to the entire model domain, which limits the number of unknowns while preserving spatial variability. The sensitivity analysis led us to identify fifteen influential parameters, which were selected for calibration. The optimized parameters gave the best model performance on the basis of high Nash–Sutcliffe Efficiency (NSE), Kling–Gupta Efficiency (KGE), and determination coefficient (R2) values. Four sets of remote sensing ETa data products were applied in model calibration, i.e., ETMonitor, GLEAM, SSEBop, and WaPOR. Overall, the new approach of using remote sensing ETa for a limited period of time was robust and gave very good performance, with R2 > 0.9, NSE > 0.8, and KGE > 0.75 for the SWAT ETa versus the ETMonitor and GLEAM ETa. The ETMonitor ETa was finally adopted for further model applications. The calibrated SWAT model was then validated for 2010–2015 against remote sensing data on total water storage change (TWSC) with acceptable performance, i.e., R2 = 0.57 and NSE = 0.55, and against remote sensing soil moisture data with R2 and NSE greater than 0.85. Full article
(This article belongs to the Special Issue Remote Sensing of Hydrological Processes: Modelling and Applications)
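The two efficiency metrics named in the abstract have standard definitions (NSE; KGE as in Gupta et al., 2009), sketched below for checking a simulated ETa series against a reference; the sample values are illustrative.

```python
# Standard Nash-Sutcliffe Efficiency and Kling-Gupta Efficiency.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([30.0, 45.0, 80.0, 120.0, 95.0, 50.0])   # mm/month, illustrative
sim = np.array([28.0, 50.0, 75.0, 110.0, 100.0, 55.0])
print(f"NSE = {nse(sim, obs):.2f}, KGE = {kge(sim, obs):.2f}")
```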
Figure 1
The African Sahel, the location of the Lake Chad Basin, the study area (Southern Lake Chad Basin), and the 37 delineated sub-basins.
Figure 2
The conceptual framework of this study: (a) the SWAT flowchart (and uncalibrated model outputs), (b) the SWAT-CUP flowchart (parameter selection and calibration), and (c) the validation schemes (using the calibrated model).
Figure 3
Spatial distribution of performance metrics (R2, NSE, and KGE) of SWAT_Hargreaves when calibrated in 2009 against ETMonitor, GLEAM, WaPOR, and SSEBop (a–d, respectively) in the study area in the Lake Chad Basin.
Figure 4
Comparison between ETMonitor ETa and SWAT-calibrated ETa for all sub-catchments in the LCB.
Figure 5
Spatial distribution of performance metrics (R2, NSE, and KGE) of SWAT_Hargreaves when validated against ETMonitor in 2010, 2011, 2012, 2013, 2014, and 2015 (a–f, respectively) in the study area in the Lake Chad Basin.
Figure 6
Time series of monthly uncalibrated and calibrated ETa simulated by SWAT, and ETa from ETMonitor, in 2009–2015 in the study area in the Lake Chad Basin.
Figure 7
Seasonal comparison between monthly averaged SWAT and ESA CCI SSM at 1 cm in 2009, the driest year ((a) dry months, (b) wet months), and in 2012, the wettest year ((c) dry months, (d) wet months).
Figure 8
Comparison between monthly averaged SWAT SWC (black line) and ESA CCI SM (red line) at 50 mm during 2009–2015 in the study area in the Lake Chad Basin: (a) scatter plot, (b) comparison of time series.
Figure 9
Comparison between monthly TWSC averaged over the study area in the Lake Chad Basin for 2009–2015: SWAT estimates and the GRACE data product, at 1 km resolution ((a1) scatter plot, (b1) time series) and at 300 km resolution ((a2) scatter plot, (b2) time series).
Figure 10
Average annual water balance components in the study area in the Lake Chad Basin based on SWAT-simulated output before (uncalibrated) and after (calibrated) calibration (ETa: actual evapotranspiration; SW: soil water content; PERC: percolation; SURQ: surface runoff; GW_Q: groundwater recharge; WYLD: water yield; LATQ: lateral runoff).
Figure 11
Comparison of simulated runoff in the Lake Chad Basin by different studies.
Figure A1
Spatial distribution of performance metrics (R2, NSE, and KGE) of SWAT ETa based on P-M when calibrated in 2009 against ETMonitor, GLEAM, WaPOR, and SSEBop (a1–d1, respectively), and of SWAT ETa based on P-T when calibrated in 2009 against ETMonitor, GLEAM, WaPOR, and SSEBop (a2–d2, respectively).
Figure A2
The annual mean of different remote sensing evapotranspiration products in Lake Chad.
28 pages, 3460 KiB  
Article
Retrieval of Black Carbon Absorption Aerosol Optical Depth from AERONET Observations over the World during 2000–2018
by Naghmeh Dehkhoda, Juhyeon Sim, Sohee Joo, Sungkyun Shin and Youngmin Noh
Remote Sens. 2022, 14(6), 1510; https://doi.org/10.3390/rs14061510 - 21 Mar 2022
Cited by 4 | Viewed by 3317
Abstract
Black carbon (BC) absorption aerosol optical depth (AAODBC) defines the contribution of BC in light absorption and is retrievable using sun/sky radiometer measurements provided by Aerosol Robotic Network (AERONET) inversion products. In this study, we utilized AERONET-retrieved depolarization ratio (DPR, [...] Read more.
Black carbon (BC) absorption aerosol optical depth (AAODBC) quantifies the contribution of BC to light absorption and is retrievable from sun/sky radiometer measurements provided by Aerosol Robotic Network (AERONET) inversion products. In this study, we utilized the AERONET-retrieved depolarization ratio (DPR, δp), single scattering albedo (SSA, ω), and Ångström exponent (AE, å) of version 3 level 2.0 products as indicators to estimate the contribution of BC to the absorbing fraction of AOD. We applied our methodology to AERONET sites in North and South America, Europe, East Asia, Africa, India, and the Middle East during 2000–2018. The long-term AAODBC showed a downward tendency over Sao Paulo (−0.001 year⁻¹), Thessaloniki (−0.0004 year⁻¹), Beijing (−0.001 year⁻¹), Seoul (−0.0015 year⁻¹), and Cape Verde (−0.0009 year⁻¹), with the highest values over the populous sites. This declining tendency in AAODBC can be attributed to successful emission control policies over these sites, particularly in Europe, America, and China. The AAODBC at the Beijing, Sao Paulo, Mexico City, and Indian sites showed a clear seasonality, indicating the notable role of residential heating in BC emissions over these sites during winter. We found a higher correlation between AAODBC and fine-mode AOD at 440 nm at all sites except Beijing. High pollution episodes, BC emission from different sources, and aggregation properties seem to be the main drivers of the higher correlation of AAODBC with coarse particles over Beijing. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
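For orientation, absorption AOD follows the standard relation AAOD = (1 − SSA) × AOD; the sketch below computes it and applies a crude Ångström-exponent screen for dust-dominated cases. The AE threshold and the attribution of all non-dust absorption to BC are illustrative assumptions, far simpler than the paper's DPR/SSA/AE scheme.

```python
# Sketch: absorption AOD and a crude dust screen. Illustrative only.
import numpy as np

aod_440 = np.array([0.45, 0.80, 0.30])   # total AOD at 440 nm
ssa_440 = np.array([0.90, 0.88, 0.94])   # single scattering albedo at 440 nm
ae      = np.array([1.4, 0.3, 1.1])      # Angstrom exponent (440-870 nm)

aaod = (1.0 - ssa_440) * aod_440         # standard absorption AOD
dust_dominant = ae < 0.6                 # assumed coarse-mode (dust) screen
aaod_bc = np.where(dust_dominant, 0.0, aaod)  # crude: non-dust absorption -> BC
print(np.round(aaod, 3), np.round(aaod_bc, 3))
```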
Graphical abstract
Figure 1
Locations of the selected AERONET sites (red represents sites with dust-dominant AAOD and blue represents sites with non-dust-dominant AAOD).
Figure 2
Variations of annual mean AODT at 440 nm from 2000 to 2018 at fourteen sites. Error bars represent the standard deviation of the annual averages (units: AODT year⁻¹).
Figure 3
Variations of annual mean AAODBC at 440 nm from 2000 to 2018 at fourteen sites. Error bars represent the standard deviation of the annual averages (units: AAODBC year⁻¹).
Figure 4
Monthly variations of mean AAODBC at 440 nm from 2000 to 2018 at fourteen sites. Error bars represent the standard deviation of the monthly averages (units: AAODBC year⁻¹).
Figure 5
Variations of the annual AAODBC/AODT ratio at 440 nm from 2000 to 2018 at fourteen sites. Error bars represent the standard deviation of the annual averages (units: year⁻¹).
Figure 6
Variations of the annual AAODBC/AAODTotal ratio at 440 nm from 2000 to 2018 at fourteen sites. Error bars represent the standard deviation of the annual averages (units: year⁻¹).
Figure 7
The coefficient of determination (R2) between AAODBC and AODT at 440 nm for the fine mode (black circles) and the coarse mode (gray circles) at (a) the American, (b) East Asian, and (c) African sites from 2000 to 2018.
Figure 8
The coefficient of determination (R2) between AAODBC and AODT at 440 nm for the fine mode (black circles) and the coarse mode (gray circles) at (a) the European, (b) Middle Eastern, and (c) Indian sites from 2000 to 2018.
20 pages, 5923 KiB  
Article
SIF-Based GPP Is a Useful Index for Assessing Impacts of Drought on Vegetation: An Example of a Mega-Drought in Yunnan Province, China
by Chuanhua Li, Lixiao Peng, Min Zhou, Yufei Wei, Lihui Liu, Liangliang Li, Yunfan Liu, Tianbao Dou, Jiahao Chen and Xiaodong Wu
Remote Sens. 2022, 14(6), 1509; https://doi.org/10.3390/rs14061509 - 21 Mar 2022
Cited by 9 | Viewed by 3249
Abstract
The impact of drought on terrestrial ecosystem Gross Primary Productivity (GPP) is strong and widespread; therefore, it is important to study the response of terrestrial ecosystem GPP to drought. In this paper, we compared the correlations of Sun-induced Chlorophyll fluorescence (SIF), Enhanced Vegetation [...] Read more.
The impact of drought on terrestrial ecosystem Gross Primary Productivity (GPP) is strong and widespread; therefore, it is important to study the response of terrestrial ecosystem GPP to drought. In this paper, we compared the correlations of Sun-induced chlorophyll fluorescence (SIF), the Enhanced Vegetation Index (EVI), and the Normalized Difference Vegetation Index (NDVI) with the drought index sc_PDSI, estimated GPP in Yunnan Province, China, based on SIFTOTAL data (SIF data with canopy effects eliminated), and analyzed the response characteristics of GPP to drought for one mega-drought event (2009–2011) in combination with the sc_PDSI drought index. The results show that SIF is more sensitive to drought than the NDVI and EVI; the correlation between the GPP estimated from SIF data (GPPSIF) and the actual observed flux values (R2 = 0.83) is better than that of GPPGLASS and GPPLUE, and the RMSE is also lower than for those two products. This drought had a serious impact on GPP: the monthly average effects of drought on GPP (GPPd) in Yunnan Province in 2009, 2010, and 2011 were −11.37 gC·m⁻²·month⁻¹, −23.48 gC·m⁻²·month⁻¹, and −17.92 gC·m⁻²·month⁻¹, which are 8.6%, 17.48%, and 13.85% of the monthly average in a normal year, respectively. The spatial variability of the GPP response to drought is significant and is mainly determined by the degree and duration of the drought, the vegetation type, the topography, and anthropogenic factors. In conclusion, GPPSIF quickly and accurately reflects the course of this drought, and this study helps to elucidate the response of GPP to drought conditions and provides more scientific information for drought prediction and ecosystem management. Full article
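A small sketch of how a drought-impact term like GPPd can be formed, assuming it is the departure of monthly GPP from a normal-year monthly mean (the values below are illustrative, not the paper's data):

```python
# Sketch: GPPd as the monthly anomaly relative to a normal-year climatology,
# so negative values mark drought-suppressed productivity. Illustrative only.
import numpy as np

gpp_normal = np.array([110., 120., 140., 150.])   # gC m-2 month-1, climatology
gpp_2010   = np.array([ 95.,  90., 120., 135.])   # drought-year months
gppd = gpp_2010 - gpp_normal
print("monthly GPPd:", gppd, "mean:", gppd.mean())
print("mean GPPd as % of normal monthly mean:",
      round(100 * gppd.mean() / gpp_normal.mean(), 1))
```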
Graphical abstract
Figure 1
Overview map of Yunnan Province, China.
Figure 2
Distribution of monthly average values in Yunnan Province from 2009 to 2011: (a) SIF; (b) EVI; (c) NDVI; (d) sc_PDSI.
Figure 3
Correlation distribution of monthly average values in Yunnan Province from 2009 to 2011: (a) sc_PDSI & SIF; (b) sc_PDSI & EVI; (c) sc_PDSI & NDVI. (The numbers below each title are the value range of the raster plot and the mean value.)
Figure 4
Percentage of SIF, EVI, NDVI, and sc_PDSI significance tests in Yunnan Province, 2009–2011.
Figure 5
Scatter chart of SIF, NDVI, and EVI with sc_PDSI at the Yunnan sites.
Figure 6
GPP accuracy verification: (a) GPPSIF vs. flux observations; (b) GPPGLASS vs. flux observations; (c) GPPLUE vs. flux observations.
Figure 7
Comparison of GPPSIF, GPPGLASS, GPPLUE, and GPP flux data from 2007–2010.
Figure 8
Spatial distribution of GPPSIF data and the other two products in 2009: (a) GPPSIF; (b) GPPGLASS; (c) GPPLUE. (The numbers below each title are the value range of the raster plot.)
Figure 9
Spatial distribution of monthly average values of sc_PDSI from 2009 to 2011.
Figure 10
Typical regional distribution in Yunnan Province.
Figure 11
Changes in the three types of GPP and sc_PDSI in Yunnan Province, 2009–2011.
Figure 12
Changes in the three GPPs and sc_PDSI in typical regions of Yunnan Province, 2009–2011. (a–h) correspond to the data changes in areas (1)–(8) in Figure 10, respectively.
Figure 13
Changes in GPPd and sc_PDSI, June 2009–June 2011.
Figure 14
The total value of GPPd and the mean value of sc_PDSI over all months, June 2009–June 2011: (a) total value of GPPd; (b) mean value of sc_PDSI.
Figure 15
Scatter plot of the total value of GPPd and the mean value of sc_PDSI over all months, June 2009–June 2011.
29 pages, 7016 KiB  
Article
Glacier Recession in the Altai Mountains after the LIA Maximum
by Dmitry Ganyushkin, Kirill Chistyakov, Ekaterina Derkach, Dmitriy Bantcev, Elena Kunaeva, Anton Terekhov and Valeria Rasputina
Remote Sens. 2022, 14(6), 1508; https://doi.org/10.3390/rs14061508 - 20 Mar 2022
Cited by 11 | Viewed by 2966
Abstract
The study aims to reconstruct the Altai glaciers at the maximum of the LIA, to estimate the reduction of the Altai glaciers from the LIA maximum to the present, and to analyze glacier reduction rates on the example of the Tavan Bogd mountain [...] Read more.
The study aims to reconstruct the Altai glaciers at the maximum of the LIA, to estimate the reduction of the Altai glaciers from the LIA maximum to the present, and to analyze glacier reduction rates using the example of the Tavan Bogd mountain range. The research was based on remote sensing and field data. The recent glaciation in the southern part of the Altai is estimated at 1256 glaciers with a total area of 559.15 ± 31.13 km²; the glacier area of the whole Altai mountain system is estimated at 1096.55 km². In the southern part of the Altai, 2276 glaciers with a total area of 1348.43 ± 56.16 km² were reconstructed, and the first estimate of the LIA glacial area for the entire Altai mountain system was given (2288.04 km²). Since the LIA, the glaciers have decreased by 59% in the southern part of the Altai and by 47.9% for the whole Altai. The average rise in the ELA in the southern part of the Altai was 106 m. The larger rise of the ELA in the relatively humid areas was probably caused by a decrease in precipitation. Glaciers in the Tavan Bogd glacial center degraded at higher rates after 1968 than in the interval 1850–1968. One interval of fast glacier shrinkage, in 2000–2010, was caused by a dry and warm interval between 1989 and 2004; however, the fast decrease in glacier area in 2000–2010 was mainly caused by the shrinkage or disappearance of smaller glaciers, while large valley glaciers began a fast retreat only after 2010. The study results present the first evaluation of the glacier recession of the entire Altai after the LIA maximum. Full article
(This article belongs to the Special Issue The Cryosphere Observations Based on Using Remote Sensing Techniques)
Graphical abstract
Figure 1
Glacial centers of Altai. 1—Mongun-Taiga, 2—Shapsalsky ridge, 3—Ikh Turgen, 4—Mongun-Taiga Minor, 5—Talduayr, 6—Saylugem, 7—Tavan Bogd, 8—Tsengel Khairkhan, 9—Sogostyn, 10—Hunguyn-Nuru, 11—Turgen, 12—Kharkhiraa, 13—Tsambagarav, 14—Sair, 15—Huh Serh, 16—Munkh Khairkhan, 17—Baatar Khairkhan, 18—Sutai, 19—Sargamyr-Nuru, 20—Alag-Deliyn, 21—North Mongolian Altai, 22—Hoton, 23—Under-Khaikhan, 24—Harit-Nuru, 25—Bayantyn-Ula, 26—Eule-Tau, 27—Dushin-Ula, 28—Katunsky, 29—South-Chuya, 30—North-Chuya, 31—South Altai, 32—Kara-Alakha, 33—Sarymsakty, 34—Kurai, 35—Ivanovskiy, 36—Holzun, 37—Kurkurebazy, 38—Sumulta, 39—Listvyaga.
Figure 2
Areas of satellite images that are problematic for glacier delineation and their photographs obtained in situ. (A,B) Grigorieva glacier, Ikh Turgen: 1—false glacier edge based on the interpretation of the satellite image (WorldView-2, 22 August 2013), 2—the real edge of the glacier, based on in situ observations (photo 24 July 2015). (C,D) Tsengel-Khairkhan, Holtsutiyn-Gol valley: (C) 1—the area that looks like a debris-covered glacial snout in the Spot-5 image (7 August 2008); (D) 1—the real edge of the glacier, based on field observations (photo 24 July 2016). (E,F) Mongun-Taiga, Levyi Mugur valley: (E) 1—the area that looks like a small cirque glacier in the satellite image (Spot-5, 19 September 2011); (F) 1—the same area in the photo (15 July 2011): debris-covered dead ice.
Figure 3
Identification of the LIA moraines and of the present edge of a glacier, Tolayty valley, Mongun-Taiga massif. (A) WorldView-2 image, 26 June 2015; (B) Spot-5 image, 19 September 2011; (C) Sentinel-2 image, 6 September 2016; (D) photo, 6 October 2019: 1—snow-covered talus and dead ice (false glacier edge), 2—LIA moraine, 3—pre-LIA late Holocene moraine, the so-called "historic" stage.
Figure 4
LIA moraines and rock glaciers. Tsengel Khairkhan ridge: (A) WorldView-2 image, 18 September 2011; (B) photo, 31 July 2016: 1—LIA moraine, 2—talus rock glacier. Mongun-Taiga massif: (C) WorldView-2 image, 26 June 2015; (D,E) photos, 28 July 2008: 1—LIA moraines, 2—debris rock glaciers.
Figure 5
Degradation of hanging glaciers, Mongun-Taiga massif. (A) WorldView-2 image, 26 June 2015; (B) Landsat 5 image, bands 5-4-3 combination, 15 September 1987; (C) aerial image, 10 July 1966: 1—degrading small hanging glaciers, 2—nival niches in place of degraded glaciers.
Figure 6
Recent glaciation of the Altai mountains. The numbers correspond to those in Table 4.
Figure 7
Glacier recession in the Tavan Bogd massif after the LIA maximum, a fragment from the north-east part.
Figure 8
Relationship between the areas of glacial centers at the LIA maximum and the decrease in their area after the LIA maximum, %.
Figure 9
The increase of the ELA after the maximum of the LIA in the southern part of Altai. The numbers of the glacial centers correspond to those in Figure 1 and Tables 2 and 3.
Figure 10
Changes in the total area of the glaciers of Tavan Bogd and the retreat of the termini of five valley glaciers of the same glacial center after the LIA maximum.
Figure 11
Annual precipitation (mm) and average summer temperature (°C), Kosh-Agach meteorological station (1758 m a.s.l.). Periods unfavorable for the glaciers are marked with yellow bars; an orange bar marks an extremely unfavorable period.
23 pages, 30433 KiB  
Article
Two-Stream Swin Transformer with Differentiable Sobel Operator for Remote Sensing Image Classification
by Siyuan Hao, Bin Wu, Kun Zhao, Yuanxin Ye and Wei Wang
Remote Sens. 2022, 14(6), 1507; https://doi.org/10.3390/rs14061507 - 20 Mar 2022
Cited by 26 | Viewed by 5098
Abstract
Remote sensing (RS) image classification has attracted much attention recently and is widely used in various fields. Different to natural images, the RS image scenes consist of complex backgrounds and various stochastically arranged objects, thus making it difficult for networks to focus on [...] Read more.
Remote sensing (RS) image classification has attracted much attention recently and is widely used in various fields. Different from natural images, RS image scenes consist of complex backgrounds and various stochastically arranged objects, making it difficult for networks to focus on the target objects in the scene. However, conventional classification methods do not treat remote sensing images specially. In this paper, we propose a two-stream swin transformer network (TSTNet) to address these issues. TSTNet consists of two streams (i.e., an original stream and an edge stream) which use both the deep features of the original images and those of their edges to make predictions. The swin transformer is used as the backbone of each stream given its good performance. In addition, a differentiable edge Sobel operator module (DESOM) is included in the edge stream; it can learn the parameters of the Sobel operator adaptively and provide more robust edge information that suppresses background noise. Experimental results on three publicly available remote sensing datasets show that our TSTNet achieves superior performance over state-of-the-art (SOTA) methods. Full article
(This article belongs to the Special Issue State-of-the-Art Remote Sensing Image Scene Classification)
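The key idea of DESOM, making the Sobel kernels trainable so the edge extractor adapts to remote sensing scenes, can be illustrated with a short PyTorch sketch. The parameterization below (full 3×3 kernels as free parameters initialized to Sobel weights, with a mean-channel grayscale) is our assumption for illustration, not the authors' released module.

```python
# Minimal sketch of a learnable ("differentiable") Sobel edge module in the
# spirit of DESOM. Treating the two 3x3 Sobel kernels as trainable
# initializations is an assumption made for illustration.
import torch
import torch.nn as nn

class LearnableSobel(nn.Module):
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t().clone()
        # One conv per gradient direction, Sobel-initialized, trainable.
        self.conv_x = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.conv_y = nn.Conv2d(1, 1, 3, padding=1, bias=False)
        self.conv_x.weight.data.copy_(gx.view(1, 1, 3, 3))
        self.conv_y.weight.data.copy_(gy.view(1, 1, 3, 3))

    def forward(self, x):                   # x: (B, 3, H, W) RGB image
        gray = x.mean(dim=1, keepdim=True)  # crude luminance (an assumption)
        ex, ey = self.conv_x(gray), self.conv_y(gray)
        return torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)  # edge magnitude T(I)

edges = LearnableSobel()(torch.rand(2, 3, 224, 224))  # -> (2, 1, 224, 224)
```

Because the kernels stay inside the autograd graph, the classification loss can adjust them jointly with the rest of the network, which is what "differentiable" refers to here.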
Show Figures
Figure 1: Edge information of the six scenes; the red curves represent the edges of the scene objects.
Figure 2: The framework of the two-stream swin transformer. It has two inputs: the original image I and the synthetic edge image T(I) generated by the differentiable edge Sobel operator module (DESOM). The fusion module fuses the original and edge features.
Figure 3: Two successive transformer blocks of the swin transformer. The regular and shifted windows correspond to W-MSA and SW-MSA, respectively.
Figure 4: Shifted window mechanism of W-MSA and SW-MSA. (a) The standard window, (b) the repartitioned window, (c) the window after the shift transformation, and (d) the window after the (e) reverse shift transformation.
Figure 5: Structure of the differentiable edge Sobel operator module (DESOM).
Figure 6: Design of the fusion module and loss function. An auxiliary cross-entropy loss on F_1 is added to the cross-entropy loss of the fused feature F′. The balance between the two loss terms is controlled by λ.
Figure 7: Example images of the AID dataset: (1) airport; (2) bare land; (3) baseball field; (4) beach; (5) bridge; (6) centre; (7) church; (8) commercial; (9) dense residential; (10) desert; (11) farmland; (12) forest; (13) industrial; (14) meadow; (15) medium residential; (16) mountain; (17) park; (18) parking; (19) playground; (20) pond; (21) port; (22) railway station; (23) resort; (24) river; (25) school; (26) sparse residential; (27) square; (28) stadium; (29) storage tanks; (30) viaduct.
Figure 8: Example images of the NWPU dataset: (1) airplane; (2) airport; (3) baseball diamond; (4) basketball court; (5) beach; (6) bridge; (7) chaparral; (8) church; (9) circular farmland; (10) cloud; (11) commercial area; (12) dense residential; (13) desert; (14) forest; (15) freeway; (16) golf course; (17) ground track field; (18) harbor; (19) industrial area; (20) intersection; (21) island; (22) lake; (23) meadow; (24) medium residential; (25) mobile home park; (26) mountain; (27) overpass; (28) palace; (29) parking lot; (30) railway; (31) railway station; (32) rectangular farmland; (33) river; (34) roundabout; (35) runway; (36) sea ice; (37) ship; (38) snow berg; (39) sparse residential; (40) stadium; (41) storage tank; (42) tennis court; (43) terrace; (44) thermal power station; (45) wetland.
Figure 9: Example images of the UCM dataset: (1) agriculture, (2) airplane, (3) baseball diamond, (4) beach, (5) buildings, (6) chaparral, (7) dense residential, (8) forest, (9) freeway, (10) golf course, (11) harbor, (12) intersection, (13) medium residential, (14) mobile home park, (15) overpass, (16) parking lot, (17) river, (18) runway, (19) sparse residential, (20) storage tanks, and (21) tennis court.
Figure 10: Ablation study: accuracy comparison between different methods under training ratios of (a) 20% and 50% on the AID dataset; (b) 10% and 20% on the NWPU dataset. Our proposed method, TSTNet, consistently has the best performance across datasets and training ratios.
Figure 11: The optimal values learned by the differentiable Sobel operator under a 10% training ratio on the NWPU dataset and a 20% training ratio on the AID dataset.
Figure 12: (a) The original remote sensing images. (b) Synthesized edge images. (c) Attention map of the last layer of the original stream (Swin-B). (d) Attention map of the last layer of TSTNet. "Original" uses only the original input images, whereas TST uses both the original input images and the synthetic edge input images.
Figure 13: Confusion matrix for the AID dataset under a 20% training ratio, with true labels on the vertical axis and predicted values on the horizontal axis.
Figure 14: Confusion matrix for the NWPU dataset under a 10% training ratio, with true labels on the vertical axis and predicted values on the horizontal axis.
Figure 15: Confusion matrix for the UCM dataset under a 50% training ratio, with true labels on the vertical axis and predicted values on the horizontal axis.
18 pages, 17885 KiB  
Article
Glacier Mass Balance in the Manas River Using Ascending and Descending Pass of Sentinel 1A/1B Data and SRTM DEM
by Lili Yan, Jian Wang and Donghang Shao
Remote Sens. 2022, 14(6), 1506; https://doi.org/10.3390/rs14061506 - 20 Mar 2022
Cited by 9 | Viewed by 2893
Abstract
Mountain glacier monitoring is important for water resource management and climate change studies but is limited by the lack of high-quality Digital Elevation Models (DEMs) and field measurements. The Sentinel-1A/1B satellites provide alternative data for glacier mass balance estimation. In this study, we generated DEMs from C-band Sentinel-1A/1B ascending- and descending-pass SLC images and evaluated the overall accuracy of the InSAR DEMs against the Shuttle Radar Topography Mission (SRTM) DEM and ICESat/GLAS. The low Standard Deviation (STD) and Root Mean Square Error (RMSE) demonstrated the feasibility of Sentinel-1A/1B data for DEM generation. Glacier elevation changes and glacier mass balance were estimated from the InSAR DEM and the SRTM DEM. The results showed that most glaciers have exhibited obvious thinning, and the mean annual glacier mass balance between 2000 and 2020 was −0.18 ± 0.1 m w.e.a−1. South- and east-facing aspects, slope, and elevation play an important role in glacier melt. This study demonstrates that ascending- and descending-orbit data from the Sentinel-1A/1B satellites are promising for the detailed retrieval of surface elevation changes and mass balance in mountain glaciers. Full article
(This article belongs to the Special Issue Remote Sensing for Mountain Vegetation and Snow Cover)
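For readers unfamiliar with the geodetic method, the conversion from DEM-differenced elevation change to specific mass balance follows the standard relation below. The density values shown are the commonly assumed conversion factors, not figures restated from this paper.

```latex
% Geodetic mass balance from DEM differencing (generic form).
% \Delta\bar{h}: mean glacier elevation change over the period \Delta t (years).
% The ice density is the commonly assumed conversion value, stated here
% as an assumption rather than the paper's own figure.
\[
  b \;=\; \frac{\Delta\bar{h}}{\Delta t}\cdot
          \frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{water}}}
  \;\;[\mathrm{m\ w.e.\ a^{-1}}],
  \qquad
  \rho_{\mathrm{ice}} \approx 850\ \mathrm{kg\,m^{-3}},\quad
  \rho_{\mathrm{water}} \approx 1000\ \mathrm{kg\,m^{-3}}.
\]
```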
Show Figures
Figure 1: The geographical location of the study area. The background is a Landsat OLI image acquired in 2019, shown as a false color composite of bands 4-3-2.
Figure 2: The flow chart for estimating glacier mass balance.
Figure 3: The relationships between elevation difference and slope and maximum curvature.
Figure 4: Glacier elevation changes between February 2000 and August 2019.
Figure 5: The mean glacier elevation difference in each 100 m elevation bin.
Figure 6: The relationship between elevation difference and slope.
Figure 7: The slope distribution map of the glacier area.
Figure 8: The average elevation difference and the percentage of each slope category.
Figure 9: The relationship between elevation difference and aspect.
Figure 10: The aspect distribution map of the glacier area.
Figure 11: The average glacier elevation difference and the percentage of each aspect category in the glacier area.
Figure 12: The mean glacier elevation difference, the percentage of elevation categories, the mean slope, and the percentage of south-facing slopes for different elevation ranges in the glacier area.
21 pages, 11182 KiB  
Article
Effects of UAV-LiDAR and Photogrammetric Point Density on Tea Plucking Area Identification
by Qingfan Zhang, Maosheng Hu, Yansong Zhou, Bo Wan, Le Jiang, Quanfa Zhang and Dezhi Wang
Remote Sens. 2022, 14(6), 1505; https://doi.org/10.3390/rs14061505 - 20 Mar 2022
Cited by 2 | Viewed by 3390
Abstract
High-cost data collection and processing are challenges for LiDAR (light detection and ranging) sensors mounted on unmanned aerial vehicles (UAVs) in crop monitoring. Reducing the point density can lower data collection costs and increase efficiency but may lead to a loss in mapping accuracy. It is therefore necessary to determine the point cloud density that maximizes the cost–benefit of tea plucking area identification. This study evaluated the performance of different LiDAR and photogrammetric point density data when mapping the tea plucking area in the Huashan Tea Garden, Wuhan City, China. Object-based metrics derived from the UAV point clouds were used to classify tea plantations with the extreme learning machine (ELM) and random forest (RF) algorithms. The results indicated that the performance of LiDAR data varied considerably with point density, from 0.25 pts/m2 (1%) to 25.44 pts/m2 (100%) (overall classification accuracies: 90.65–94.39% for RF and 89.78–93.44% for ELM). For photogrammetric data, point density had little effect on classification accuracy: with 10% of the initial point density (2.46 pts/m2), a similar accuracy level was obtained (difference of approximately 1%). LiDAR point cloud density had a significant influence on DTM accuracy, with the RMSE for DTMs ranging from 0.060 to 2.253 m, whereas photogrammetric point cloud density had a limited effect, with the RMSE ranging from 0.256 to 0.477 m due to the high proportion of ground points in the photogrammetric point clouds. Moreover, important features for identifying the tea plucking area were summarized for the first time using a recursive feature elimination method and a novel hierarchical clustering-correlation method. The resultant architecture diagram indicates the specific role of each feature/group in identifying the tea plucking area and could be used in other studies to prepare candidate features. This study demonstrates that low UAV point density data, such as the 2.55 pts/m2 (10%) used in this study, might be suitable for finer-scale tea plucking area mapping without compromising accuracy. Full article
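Percentage-based decimation itself is straightforward; a minimal sketch, assuming uniform random subsampling (the study's exact decimation scheme may differ), is:

```python
# Minimal sketch of percentage-based point cloud decimation by uniform
# random subsampling. The study's exact scheme is an assumption here.
import numpy as np

def decimate(points: np.ndarray, keep_ratio: float, seed: int = 0) -> np.ndarray:
    """Keep a random fraction of the points. points: (N, 3) xyz array."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(len(points) * keep_ratio)))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

cloud = np.random.rand(100_000, 3)          # stand-in for one LiDAR tile
thinned = decimate(cloud, keep_ratio=0.10)  # e.g., 25.44 -> ~2.55 pts/m^2
```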
Show Figures
Figure 1: An illustration of the UAV-LiDAR data collection scheme in the Huashan Tea Garden (the study area). The UAV flight height affects the point cloud density and acquisition efficiency. Each flight had the same lateral overlap ratio (50%), and the number of flight lines was determined by the flight height.
Figure 2: Location of the study area (the Huashan Tea Garden) in Wuhan City, China.
Figure 3: Illustration of percentage-based LiDAR and photogrammetric point cloud decimation. The radius was 5 m, and NP denotes the number of points.
Figure 4: A schematic graph of the point cloud performance comparison for the LiDAR and photogrammetric cases.
Figure 5: Illustration of the tea and non-tea area-based accuracy assessment: (a) digital imagery with a spatial resolution of 0.1 m; (b) thematic map produced using the random forest algorithm; (c) the classes produced from the area intersection process.
Figure 6: Variation in the overall accuracy with different numbers of features selected by the RF-based RFE algorithm, and details of the selected features for the LiDAR case (a–c) and the photogrammetric case (d–f).
Figure 7: Illustration of percentage-based UAV point cloud data density reduction.
Figure 8: Variations of the LiDAR and photogrammetric features with reduced point densities: (a–d) LiDAR features; (e) photogrammetric features.
Figure 9: Hierarchical cluster plot and correlation matrix heatmap of important features in the LiDAR case with a point density of 10%, the most cost-effective LiDAR point density found in this study. The upper graph represents the hierarchical construction and distances among features; the lower graph portrays the pairwise correlations between features.
23 pages, 5635 KiB  
Article
A Reconstructed Global Daily Seamless SIF Product at 0.05 Degree Resolution Based on TROPOMI, MODIS and ERA5 Data
by Jiaochan Hu, Jia Jia, Yan Ma, Liangyun Liu and Haoyang Yu
Remote Sens. 2022, 14(6), 1504; https://doi.org/10.3390/rs14061504 - 20 Mar 2022
Cited by 7 | Viewed by 3695
Abstract
Satellite-derived solar-induced chlorophyll fluorescence (SIF) has proven to be a valuable tool for monitoring vegetation's photosynthetic activity at regional or global scales. However, the coarse spatiotemporal resolution or discrete spatial coverage of most satellite SIF datasets hinders their full potential for studying carbon cycle and ecological processes at finer scales. Although the recent TROPOspheric Monitoring Instrument (TROPOMI) partially addresses this issue, its SIF still suffers from spatial insufficiency and spatiotemporal discontinuities when gridded at high spatiotemporal resolutions (e.g., 0.05°, 1-day or 2-day) due to its nonuniform sampling sizes, swath gaps, and cloud contamination. Here, we generated a new global SIF product with Seamless spatiotemporal coverage at Daily and 0.05° resolutions (SDSIF) during 2018–2020, using the random forest (RF) approach together with TROPOMI SIF, MODIS reflectance, and meteorological datasets. We investigated how the model accuracy was affected by the selection of explanatory variables and model constraints. Eventually, models were trained and applied for specific continents and months, given the similar response of SIF to environmental variables within closer space and time. This strategy achieved better accuracy (R2 = 0.928, RMSE = 0.0597 mW/m2/nm/sr) than one universal model (R2 = 0.913, RMSE = 0.0653 mW/m2/nm/sr) for testing samples. The SDSIF product preserves the temporal and spatial characteristics of the original TROPOMI SIF well, with high temporal correlations (mean R2 around 0.750) and low spatial residuals (less than ±0.081 mW/m2/nm/sr) between the two over most regions (80% of global pixels). Compared with the original SIF at five flux sites, SDSIF filled the temporal gaps and was more consistent with tower-based SIF at the daily scale (the mean R2 increased from 0.467 to 0.744). Consequently, it provided more reliable 4-day SIF averages than the original ones from sparse daily observations (e.g., the R2 at the Daman site was raised from 0.614 to 0.837), which resulted in a better correlation with 4-day tower-based GPP. Additionally, the global coverage ratio and local spatial details were also improved by the reconstructed seamless SIF. Our product has advantages in spatiotemporal continuity and detail over the original TROPOMI SIF, which will benefit the application of satellite SIF for understanding carbon cycle and ecological processes at finer spatial and temporal scales. Full article
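Mechanically, the reconstruction regresses gridded TROPOMI SIF on reflectance and meteorological predictors and then predicts SIF wherever those predictors exist, gaps included. A minimal scikit-learn sketch with synthetic stand-in data (the per-continent, per-month stratification described in the abstract is omitted):

```python
# Minimal sketch of RF-based SIF reconstruction. Predictor layout and the
# train/predict split are illustrative stand-ins, not the paper's setup;
# the paper additionally trains separate models per continent and month.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
# Hypothetical predictors: 7 MODIS reflectance bands + 3 ERA5 variables.
X_train = rng.random((n, 10))
y_train = rng.random(n)            # stand-in for gridded TROPOMI SIF

model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)

# Predict seamless SIF wherever reflectance/meteorology are available,
# including cloudy or swath-gap pixels with no valid TROPOMI retrieval.
X_all = rng.random((20000, 10))
sif_reconstructed = model.predict(X_all)
```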
Show Figures
Figure 1: Visualizations of the spatiotemporal limitations of the original TROPOMI SIF, including insufficient spatial resolution (a), spatial gaps (b), and temporal discontinuities (c,d) at 0.05° resolution.
Figure 2: The land cover map in 2019 from MCD12C1.
Figure 3: Statistical metrics for the accuracy of SIF reconstruction models using different combinations of explanatory variables, based on the testing samples at 0.1°, 8-day resolutions in 2019: (a) coefficient of determination (R2); (b) Root Mean Square Error (RMSE, mW/m2/nm/sr); (c) Mean Absolute Error (MAE, mW/m2/nm/sr). Ref1–4 and Ref1–7 refer to MODIS bands 1–4 and MODIS bands 1–7, respectively.
Figure 4: Scatter diagrams between the TROPOMI SIF and the SIF predicted by RF models for the testing samples of three cross-validation experiments: first (a), second (b), and third (c), at 0.1°, 8-day resolutions in 2019. The density of points in logarithmic scale is represented by the colorbar. The black dashed line is the 1:1 line.
Figure 5: The pixel-wise correlations between day-to-day SIF values from SDSIF and TROSIF_s^02 in 2019 at 0.2°, daily scales, in terms of the coefficient of determination (R2) (a) and regression slope (b). All pixels in this figure reached the 0.05 significance level.
Figure 6: Spatial patterns of the 16-day, 0.1° re-aggregated SDSIF product (left column), its residuals (middle column), and latitudinal averages (right column) compared with the original 16-day TROSIF_s^01 in January (a), March (b), July (c), and October (d) 2019. For each month, the first 16-day maps are shown.
Figure 7: Scatter diagrams between the re-aggregated SDSIF and the original TROSIF_s^01 at 16-day, 0.1° scales for the first 16 days of January (a), March (b), July (c), and October (d) 2019. The density of points in logarithmic scale is represented by the colorbar. The black dashed line is the 1:1 line.
Figure 8: Comparison between the time series of tower-based SIF and the two satellite SIF products (SDSIF and the original TROSIF005) at the daily scale for sites (a–e). All regressions in the right panel reached the 0.05 significance level.
Figure 9: Comparison between the time series of tower-based SIF and the two satellite SIF products (SDSIF and the original TROSIF005) at the 4-day scale for sites (a,b). All regressions in the right panel reached the 0.05 significance level. The blue hollow dots, hollow triangles, and solid dots represent 4-day averages with valid observations from no more than one or two days, three days, and four days, respectively.
Figure 10: Spatial patterns of the annual mean (a) and maximum (90th percentile) (b) of the re-aggregated SDSIF in 2019, and the spatial comparison between SDSIF (c) and TROSIF005 (d) on 3 August 2019.
Figure 11: Local enlarged images of the mid-eastern United States region on 3 August 2019 for the different products (a–e). All maps are at 0.05°, daily resolution.
Figure 12: Comparison between the time series of tower-based GPP with SDSIF and the original TROSIF005 (a), and the corresponding correlations (b), at the 4-day scale for the DM site.
24 pages, 7973 KiB  
Article
Three-Dimensional Simulation Model for Synergistically Simulating Urban Horizontal Expansion and Vertical Growth
by Linfeng Zhao, Xiaoping Liu, Xiaocong Xu, Cuiming Liu and Keyun Chen
Remote Sens. 2022, 14(6), 1503; https://doi.org/10.3390/rs14061503 - 20 Mar 2022
Cited by 10 | Viewed by 3073
Abstract
Urban expansion studies have focused on the two-dimensional planar dimension, ignoring the impact of building height growth in the vertical direction on urban three-dimensional (3D) spatial expansion. Past 3D simulation studies have tended to focus on simulating virtual cities, and few studies have attempted to build 3D simulation models that achieve a synergistic simulation of real cities. This study proposes an urban 3D spatial expansion simulation model that couples urban horizontal expansion with vertical growth. The future land use simulation (FLUS) model was used to simulate urban land use changes in the horizontal direction, and the random forest (RF) regression algorithm was used to predict building height growth in the vertical direction. The RF algorithm was also used to mine the patterns of spatial factors affecting building heights. The 3D model was applied to simulate 3D spatial changes in Shenzhen City from 2014 to 2034, and it effectively simulated the horizontal expansion and vertical growth of a real city in 3D space. The crucial factors affecting building heights and the simulated future urban 3D expansion hotspot areas can provide scientific support for decisions in urban spatial planning. Full article
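The horizontal step of such models is typically a cellular automaton (CA) that combines several per-cell terms into a conversion probability. The snippet below is a minimal, generic sketch of that combination with roulette-wheel selection; the weights, adaptive inertia, and self-adaptive competition mechanism of the actual FLUS model are omitted, and all values are placeholders.

```python
# Minimal sketch of the cell-level combination step in a FLUS-style CA:
# conversion probability = suitability x neighborhood effect x constraint,
# followed by a stochastic (roulette-wheel) choice of the new land use.
# An illustration only, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n_types = 4                              # e.g., residential/commercial/...
suitability = rng.random(n_types)        # per-cell probability from ANN/RF
neighborhood = rng.random(n_types)       # share of each type in a window
constraint = np.array([1, 1, 1, 0.2])    # e.g., protected-area penalty

p = suitability * neighborhood * constraint
p /= p.sum()                             # normalize to a distribution
new_type = rng.choice(n_types, p=p)      # roulette-wheel selection
```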
Show Figures
Figure 1: Case study area: Shenzhen, Guangdong Province, China.
Figure 2: Urban land use data of Shenzhen in 2009 and 2014. (A–C) mark the three regions; (A1,A2), (B1,B2), and (C1,C2) show the urban land use of regions A, B, and C in 2009 and 2014, respectively.
Figure 3: Spatial distribution of building heights in Shenzhen. (a) The central region of Nanshan District; (b) the central region of Futian District; (c) the southwestern region of Luohu District.
Figure 4: Driving factors of urban land use change in Shenzhen. (a) DEM; (b) slope; (c) distance to city center; (d) distance to district centers; (e) distance to railways; (f) distance to highways; (g) distance to national roads; (h) distance to provincial roads; (i) distance to railway stations; (j) distance to subway stations; (k) distance to ocean; (l) distance to lakes; (m) distance to rivers; (n) density of urban road network; (o) population density; (p) PM2.5; (q) GDP.
Figure 5: Spatial factors of building heights in Shenzhen. (a) DEM; (b) slope; (c) distance to parks and green spaces; (d) distance to waters; (e) housing prices; (f) density of entertainment facilities; (g) density of supermarkets; (h) density of restaurants; (i) density of factories; (j) density of shopping malls; (k) population density; (l) density of hospitals; (m) nighttime light intensity; (n) distance to city center; (o) distance to district centers; (p) distance to urban roads; (q) density of bus stations.
Figure 6: Flowchart of the urban 3D spatial expansion simulation model.
Figure 7: Framework of the CA in the FLUS model.
Figure 8: Actual and simulated land use patterns in Shenzhen in 2014. (A–C) mark the three regions; (A1,A2), (B1,B2), and (C1,C2) show the actual and simulated land use patterns in regions A, B, and C, respectively.
Figure 9: Spatial distribution of relative errors between actual and predicted FAR. (A) The northern region of Longhua District; (B) the southeastern region of Futian District.
Figure 10: Ratio of each interval range of the relative error between actual and predicted FAR.
Figure 11: Simulated urban land use patterns of Shenzhen in the horizontal direction, 2014–2034. (A) The region near Shenzhen Bay; (B) the region near Shenzhen North Railway Station in the south of Longhua District; (A1–A3) and (B1–B3) show the urban land use patterns of regions A and B in 2014, 2024, and 2034.
Figure 12: Predicted building height growth of Shenzhen in the vertical direction, 2014–2034. (A) The region near Shenzhen Bay; (B) the region near Shenzhen North Railway Station in the south of Longhua District; (A1–A3) and (B1–B3) show the predicted building height growth of regions A and B in 2014, 2024, and 2034.
Figure 13: The spatial distribution of urban 3D expansion intensity in Shenzhen (a) from 2014 to 2024 and (b) from 2024 to 2034.
Figure 14: Spatial distribution of the urban 3D expansion intensity of urban land use types in Shenzhen, 2014–2034. (a) Public management services land; (b) commercial land; (c) residential land; (d) industrial land.
20 pages, 3866 KiB  
Article
Semantic Segmentation of Polarimetric SAR Image Based on Dual-Channel Multi-Size Fully Connected Convolutional Conditional Random Field
by Yingying Kong and Qiupeng Li
Remote Sens. 2022, 14(6), 1502; https://doi.org/10.3390/rs14061502 - 20 Mar 2022
Cited by 1 | Viewed by 1994
Abstract
The traditional fully connected convolutional conditional random field (CRF) has proven robust for post-processing the semantic segmentation of SAR images. The current challenge is how to enrich the image features and thereby improve segmentation accuracy. This paper proposes a polarimetric SAR image semantic segmentation method based on a dual-channel multi-size fully connected convolutional conditional random field. Firstly, the full-polarization SAR image and the corresponding optical image are input into the model at the same time, which increases the richness of the feature information. Secondly, multi-size input integrates and models image information at various sizes. Finally, feature importance is introduced to determine the weights of the polarimetric SAR and optical images, and the CRF potential function is improved so that the model can adaptively adjust the degree of influence of different image features on the segmentation result. The experimental results show that the proposed method achieves the highest mean intersection over union (mIoU) and global accuracy (GA) with the least running time, which verifies its effectiveness. Full article
(This article belongs to the Section Remote Sensing Image Processing)
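One way to picture the dual-channel weighting is as a feature-importance-weighted fusion of the two streams' class scores before the CRF potentials are formed. The sketch below only illustrates that idea with placeholder weights; it does not reproduce the paper's improved potential function or its mean-field inference.

```python
# Minimal sketch: fuse PolSAR and optical class scores with importance
# weights, then form a unary potential for CRF post-processing. The
# weights and score maps are placeholders, not the paper's quantities.
import numpy as np

h, w, n_classes = 64, 64, 5
sar_scores = np.random.rand(h, w, n_classes)   # PolSAR-stream softmax
opt_scores = np.random.rand(h, w, n_classes)   # optical-stream softmax

w_sar, w_opt = 0.6, 0.4                        # hypothetical importances
fused = w_sar * sar_scores + w_opt * opt_scores
unary = -np.log(fused + 1e-8)                  # unary potential for the CRF
labels = fused.argmax(axis=-1)                 # pre-CRF hard labels
```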
Show Figures
Figure 1: Schematic diagram of the model structure.
Figure 2: Flow chart of the SVM-RFE algorithm based on sensitivity analysis.
Figure 3: Sensitivity analysis results.
Figure 4: Classification effect diagram: (a) the unfiltered feature set; (b) the filtered feature set.
Figure 5: Sensitivity analysis results.
Figure 6: Model prediction results: (a) single-polarization SAR image, (b) full-polarization SAR image, (c) label image, (d) single-polarization output, (e) FullCRF, (f) ConvCRF, (g) output of the improved CRF.
Figure 7: Model prediction results: (a) polarimetric SAR image, (b) label image, (c) synthetic data (Unary), (d) FullCRF, (e) ConvCRF, (f) the method of this paper.
Figure 8: Model prediction results: (a) polarimetric SAR image, (b) label image, (c) DeepLabv3+, (d) FullCRF, (e) ConvCRF, (f) the method of this paper.
12 pages, 16836 KiB  
Technical Note
Application of Multispectral Remote Sensing for Mapping Flood-Affected Zones in the Brumadinho Mining District (Minas Gerais, Brasil)
by Lorenzo Ammirati, Rita Chirico, Diego Di Martire and Nicola Mondillo
Remote Sens. 2022, 14(6), 1501; https://doi.org/10.3390/rs14061501 - 20 Mar 2022
Cited by 10 | Viewed by 3195
Abstract
The collapse of tailings "Dam B1" at the Córrego do Feijão Mine (Brumadinho, Brasil) in January 2019 is considered a major socio-environmental flood disaster: numerous people died, and the local flora and fauna, including agricultural areas along the Paraopeba River, were seriously affected. This study aims to map the land area affected by the flood using multispectral satellite images. To this end, Level-2A multispectral images from the European Space Agency's Sentinel-2 sensor were acquired before and after the tailings dam collapse, over the period 2019–2021. The pre- and post-failure analysis revealed drastic changes in the vegetation rate, as well as in the nature of the soils and surface waters. The spectral signatures of the minerals composing the mining products allowed us to delineate the effective area covered by the flood and to investigate the evolution of land properties after the disaster. This technique makes it possible to quickly classify areas involved in floods and to obtain information useful for monitoring and planning reclamation and restoration activities in similar cases worldwide, representing an additional tool for evaluating the environmental issues related to mining operations over large areas at high temporal resolution. Full article
(This article belongs to the Special Issue Remote Sensing Solutions for Mapping Mining Environments)
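Endmember abundance maps like those referenced below are conventionally obtained by per-pixel linear spectral unmixing; one common solver is non-negative least squares. The snippet is a generic sketch with placeholder endmember spectra, not the study's spectral library or processing chain.

```python
# Minimal sketch of per-pixel linear spectral unmixing with non-negative
# least squares (NNLS), a standard route to endmember abundance maps
# (e.g., water, ferric oxides/hydroxides, vegetation). The endmember
# matrix here is a random placeholder.
import numpy as np
from scipy.optimize import nnls

n_bands, n_end = 12, 3                  # Sentinel-2 bands, 3 endmembers
E = np.random.rand(n_bands, n_end)      # columns = endmember spectra
pixel = np.random.rand(n_bands)         # one pixel's reflectance spectrum

abundances, residual = nnls(E, pixel)   # non-negative abundances
s = abundances.sum()
if s > 0:
    abundances = abundances / s         # optional sum-to-one rescaling
```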
Show Figures
Figure 1: Study area. The red zones are the principal elements (see Section 4).
Figure 2: The spectral properties of the principal elements sampled in the pre-event image. The black line is the official USGS hematite signature [50] compared with the Sentinel-2 bands (B1 to B12) [16].
Figure 3: The relative abundance maps of the water, ferric oxides and hydroxides, and vegetation endmembers at different periods.
Figure 4: The RGB true color composite on the left; the RGB (ferric oxides and hydroxides, vegetation, water) composites of endmember abundances on the right. The white boxes are the selected flood-affected zones.
Figure 5: Mean spectral signatures of flood-affected zones 1–4 (divided into riverbed (R) and riverside (RS) features) in 2019 (A), 2020 (B), and 2021 (C).
19 pages, 7945 KiB  
Article
Missing Data Imputation in GNSS Monitoring Time Series Using Temporal and Spatial Hankel Matrix Factorization
by Hanlin Liu and Linchao Li
Remote Sens. 2022, 14(6), 1500; https://doi.org/10.3390/rs14061500 - 20 Mar 2022
Cited by 4 | Viewed by 2557
Abstract
GNSS time series for static reference stations record the deformation of monitored targets. However, missing data are very common in GNSS monitoring time series because of receiver crashes, power failures, etc. In this paper, we propose a Temporal and Spatial Hankel Matrix Factorization (TSHMF) method that can simultaneously consider the temporal correlation of a single time series and the spatial correlation among different stations. The method is verified using real-world regional GNSS coordinate monitoring time series covering a 10-year period. The Mean Absolute Error (MAE) and Root-Mean-Square Error (RMSE) are calculated to compare the performance of TSHMF with benchmark methods, which include the time-mean, station-mean, K-nearest neighbor, and singular value decomposition methods. The results show that the TSHMF method can reduce the MAE range from 32.03% to 12.98% and the RMSE range from 21.58% to 10.36%, proving the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Geodetic Observations for Earth System)
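The "Hankel" part of TSHMF refers to embedding each series in a Hankel matrix, whose constant anti-diagonals expose temporal correlation to the factorization. A minimal construction sketch follows; the window length L is a tuning choice assumed here, and in the imputation setting the factorization would be fitted only to the observed entries.

```python
# Minimal sketch of building a Hankel matrix from one GNSS coordinate
# series. H[i, j] = y[i + j], so each anti-diagonal is constant; low-rank
# factorization of H is what exploits temporal correlation.
import numpy as np

def hankelize(series: np.ndarray, L: int) -> np.ndarray:
    """Return the L x (N - L + 1) Hankel matrix of a 1-D series."""
    N = len(series)
    return np.array([series[i:i + N - L + 1] for i in range(L)])

y = np.sin(np.linspace(0, 20, 200))  # stand-in for a detrended series
H = hankelize(y, L=50)               # shape (50, 151)
```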
Show Figures
Graphical abstract
Figure 1: Map showing the locations of current GNSS stations in the southern California region; the stations are distributed across this local region.
Figure 2: The vertical displacements of the 20 selected GNSS sites on the North American plate.
Figure 3: Spatial correlations between different stations (a red background represents higher correlation).
Figure 4: Framework of the proposed method.
Figure 5: MAE and RMSE averages using different methods with the 20 selected GNSS stations (missing ratio = 5–50%).
Figure 6: MAE and RMSE averages using different methods with the 20 selected GNSS stations (missing ratio = 5–50%).
Figure 7: Temporal and spatial correlations in random and continuous missing scenarios: (a) MAE results considering ST, T, and S correlations at various random missing ratios; (b) RMSE results considering ST, T, and S correlations at various random missing ratios; (c) MAE results considering ST, T, and S correlations in the various continuous missing scenarios; (d) RMSE results considering ST, T, and S correlations in the various continuous missing scenarios.
16 pages, 57315 KiB  
Communication
Range Gate Pull-Off Mainlobe Jamming Suppression Approach with FDA-MIMO Radar: Theoretical Formalism and Numerical Study
by Pengfei Wan, Yuanlong Weng, Jingwei Xu and Guisheng Liao
Remote Sens. 2022, 14(6), 1499; https://doi.org/10.3390/rs14061499 - 20 Mar 2022
Cited by 7 | Viewed by 2448
Abstract
With the development of electronic interference techniques, a self-defense jammer can generate mainlobe jamming using the range gate pull-off (RGPO) strategy, which seriously degrades the target-tracking performance of ground-based warning radars. In this paper, an RGPO mainlobe jamming suppression approach is proposed for frequency diverse array multiple-input multiple-output (FDA-MIMO) radar. RGPO mainlobe jamming differs from the true target in slant range, so the true target can be identified among the RGPO mainlobe jammings by exploiting the transmit beampattern diversity of FDA-MIMO radar. The suppression approach is devised using joint transmit–receive beamforming over a group of range sectors, and its performance is studied in consideration of the practical time delay of RGPO jamming. Simulation examples verify the effectiveness of the proposed approach. Full article
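The separability argument rests on the range-dependent transmit steering vector of an FDA: because element m radiates at f0 + mΔf, the transmit spatial frequency couples angle and range, so a false target pulled off in range produces a different phase ramp than the true target at the same angle. A numpy sketch of the commonly used first-order form (cross terms neglected, parameter values illustrative):

```python
# Minimal sketch of the range-dependent FDA transmit steering vector.
# First-order form with cross terms neglected; all parameters are
# illustrative, not the paper's simulation settings.
import numpy as np

c = 3e8
f0, d_f = 10e9, 5e3            # carrier and frequency increment (Hz)
M = 16                         # transmit elements
d = c / f0 / 2                 # half-wavelength element spacing
theta, r = np.deg2rad(10), 30e3

m = np.arange(M)
# Transmit spatial frequency couples angle AND range through d_f.
phase = 2 * np.pi * m * (d * np.sin(theta) * f0 / c - 2 * d_f * r / c)
a_t = np.exp(1j * phase)       # a pulled-off false target at range r' != r
                               # yields a different ramp, hence separable
```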
Show Figures
Figure 1: Schematic of the RGPO false target.
Figure 2: Spectrum position of true and false targets in the joint transmit–receive spatial domain.
Figure 3: Simultaneous tracking of a group of range sectors.
Figure 4: Fourier power spectrum of the receive signal, including the true target, suppressive jamming, and RGPO false targets, in the (a) normalized Doppler frequency and range dimensions; (b) transmit spatial angle and range dimensions; and (c) receive spatial angle and range dimensions.
Figure 5: Fourier power spectrum of the receive signal, including the true target and RGPO false targets, in the (a) normalized Doppler frequency and range dimensions; (b) transmit spatial angle and range dimensions; and (c) receive spatial angle and range dimensions.
Figure 6: Capon power spectra of the receive signal (a) before and (b) after SRDC.
Figure 7: Fourier power spectra of the output signal in the normalized Doppler frequency and range dimensions. (a) Receive beamforming. (b) Transmit beamforming.
Figure 8: Output signal in the range dimension.
Figure 9: Equivalent Capon power spectrum of the constructed covariance matrix in the transmit dimension.
Figure 10: Two-dimensional transmit–receive adaptive beamforming.
Figure 11: Anti-jamming performance with respect to frequency increment and delayed pulse number. (a) Non-adaptive beamforming and (b) adaptive beamforming.
21 pages, 4848 KiB  
Article
MLCRNet: Multi-Level Context Refinement for Semantic Segmentation in Aerial Images
by Zhifeng Huang, Qian Zhang and Guixu Zhang
Remote Sens. 2022, 14(6), 1498; https://doi.org/10.3390/rs14061498 - 20 Mar 2022
Cited by 9 | Viewed by 2469
Abstract
In this paper, we focus on the problem of contextual aggregation in the semantic segmentation of aerial images. Current contextual aggregation methods only aggregate contextual information within specific regions to improve feature representation, which may yield contextual information that is poorly robust. To address this problem, we propose a novel multi-level context refinement network (MLCRNet) that aggregates three levels of contextual information effectively and efficiently in an adaptive manner. First, we designed a local-level context aggregation module to capture local information around each pixel. Second, we integrate multiple levels of context, namely local-level, image-level, and semantic-level, to dynamically aggregate contextual information from a comprehensive perspective. Third, we propose an efficient multi-level context transform (EMCT) module to address feature redundancy and to improve the efficiency of our multi-level contexts. Finally, based on the EMCT module and the feature pyramid network (FPN) framework, we propose a multi-level context feature refinement (MLCR) module to enhance feature representation by leveraging multi-level contextual information. Extensive empirical evidence demonstrates that our MLCRNet achieves state-of-the-art performance on the ISPRS Potsdam and Vaihingen datasets. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing)
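Of the three context levels, the image-level one is the simplest to illustrate: a globally pooled descriptor is broadcast back to every pixel and fused with the local features. The PyTorch module below sketches just that ingredient under our own naming; it is not the MLCRNet code, which additionally injects local-level and semantic-level context.

```python
# Minimal sketch of image-level context aggregation: global average
# pooling produces a per-image descriptor that is broadcast back and
# fused with the local feature map. Naming and design are our own
# illustration, not the MLCRNet module.
import torch
import torch.nn as nn

class ImageLevelContext(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                     # x: (B, C, H, W)
        g = x.mean(dim=(2, 3), keepdim=True)  # image-level context (B,C,1,1)
        g = g.expand_as(x)                    # broadcast to every pixel
        return self.proj(torch.cat([x, g], dim=1))

out = ImageLevelContext(64)(torch.rand(2, 64, 32, 32))  # -> (2, 64, 32, 32)
```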
Show Figures
Figure 1: General contextual refinement framework.
Figure 2: Local-level context module.
Figure 3: The multi-level context transform (MCT) module.
Figure 4: The efficient multi-level context transform (EMCT) module. The image-level contextual information C_I is first embedded into the semantic-level contextual information C_S; both are then fused with the local contextual information matrix C_L by matrix multiplication to generate the multi-level contextual attention matrix A_ML.
Figure 5: MLCR module.
Figure 6: Overview of the proposed MLCRNet.
Figure 7: Visualization of features. Our module enhances the representation of more structural features.
Figure 8: Qualitative comparisons against the Baseline on the Potsdam test set. We marked the improved regions with red dashed boxes (best viewed when colored and zoomed in).
Figure 9: Qualitative comparisons between our method and the Baseline on the Vaihingen test set. We marked the improved regions with red dashed boxes (best viewed when colored and zoomed in).
Figure 10: Qualitative comparison in terms of prediction errors on the Potsdam test set, where correctly predicted pixels are shown with a black background and incorrectly predicted pixels are colored using the prediction results (best viewed when colored and zoomed in).
Figure A1: Error and binarization-corrected labels in the Potsdam datasets (best viewed when colored and zoomed in).
17 pages, 6251 KiB  
Article
Elevation Regimes Modulated the Responses of Canopy Structure of Coastal Mangrove Forests to Hurricane Damage
by Qiong Gao and Mei Yu
Remote Sens. 2022, 14(6), 1497; https://doi.org/10.3390/rs14061497 - 20 Mar 2022
Cited by 3 | Viewed by 2261
Abstract
Mangrove forests provide unique ecosystem functions and services, yet coastal mangroves in the tropics are often disturbed by tropical cyclones. Hurricane Maria swept Puerto Rico and nearby Caribbean islands in September 2017 and caused tremendous damage to coastal mangrove systems. Understanding the vulnerability and resistance of mangrove forests to disturbance is pivotal for future restoration and conservation. In this study, we used LiDAR point clouds to derive the canopy height of five major mangrove forests, including true mangroves and mangrove associates, along the coast of Puerto Rico before and after the hurricanes, which allowed us to detect spatial variations in canopy height reduction. We then spatially regressed the pre-hurricane canopy height and the canopy height reduction on biophysical factors such as elevation, the distance to rivers/canals within and nearby, the distance to the coast, tree density, and canopy unevenness. The analyses yielded the following findings. The pre-hurricane canopy height increased with elevation where elevation was low to moderate but decreased with elevation where elevation was high. The canopy height reduction increased quadratically with the pre-hurricane canopy height but decreased with elevation at the four sites dominated by true mangroves. At Palma del Mar, the site dominated by Pterocarpus, a mangrove associate, which experienced the strongest wind, the canopy height reduction increased with elevation. The canopy height reduction decreased with the distance to rivers/canals only at sites with a low to moderate mean elevation of 0.36–0.39 m. In addition to hurricane winds, rainfall during hurricanes is an important factor causing canopy damage by inundating the aerial roots. In summary, pre-hurricane canopy structure, the physical environment, and the external forces brought by hurricanes interplayed to determine the vulnerability of coastal mangroves to major hurricanes. Full article
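The quadratic dependence reported above is the kind of relationship a simple polynomial regression can express; the sketch below fits one to synthetic data purely for illustration (the paper's spatial regressions include additional biophysical covariates and spatial structure).

```python
# Minimal sketch of a quadratic fit of canopy height reduction on
# pre-hurricane canopy height. Data are synthetic stand-ins; coefficients
# have no relation to the paper's estimates.
import numpy as np

rng = np.random.default_rng(0)
h_pre = rng.uniform(2, 20, 500)     # pre-hurricane canopy height (m)
dh = 0.02 * h_pre**2 + 0.1 * h_pre + rng.normal(0, 0.5, 500)

coef = np.polyfit(h_pre, dh, deg=2)           # quadratic fit
dh_hat = np.polyval(coef, h_pre)
r2 = 1 - np.sum((dh - dh_hat)**2) / np.sum((dh - dh.mean())**2)
```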
Show Figures
Figure 1: Geographical location of Puerto Rico (top) and the locations of the five sites along its coast (bottom). Red and blue lines depict the paths of Hurricane Maria and Hurricane Irma, respectively.
Figure 2: DEMs of the five mangrove forests. Blue and light-green lines depict rivers and canals of freshwater/sewage, respectively.
Figure 3: The pre-hurricane canopy height, derived from the G-LiHT point-cloud data.
Figure 4: The post-hurricane canopy height, derived from the USGS LiDAR point clouds.
Figure 5: Canopy height reduction, calculated as the pre-hurricane canopy height minus the post-hurricane canopy height.
Figure 6: (a) Model-predicted versus LiDAR-derived pre-hurricane canopy height, and (b) model-predicted versus LiDAR-derived canopy height reduction. Codes for the five sites: CM—Caño Martín Peña, FJ—Fajardo, PG—La Parguera, PM—Palma del Mar, TB—Toa Baja.