Remote Sens., Volume 14, Issue 4 (February-2 2022) – 245 articles

Cover Story: For the example of AVHRR aerosol optical depth (AOD) retrieval, a thorough analysis of the retrieval operator and its sensitivities to the input and auxiliary variables used is undertaken to quantify the different contributions to AOD uncertainty. Uncertainties are then propagated from measured reflectances to retrieved geophysical AOD datasets at different product levels. The propagation uses the uncertainty correlations of the separate uncertainty contributions from the FIDUCEO easyFCDR level-1b input and of other major effects in the retrieval. The uncertainties are statistically validated against true error estimates obtained versus AERONET ground-based AOD. The study demonstrates the benefits of the new recipes for uncertainty characterization from the Horizon 2020 project FIDUCEO.
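The propagation step described above, combining uncertainty components whose errors have different correlation structures when pixels are averaged to a higher product level, can be sketched generically. This is an illustrative sketch, not the FIDUCEO recipes themselves: the three-way split into independent, structured and common components, and the constant structured-error correlation `rho_struct`, are simplifying assumptions for the example.

```python
import numpy as np

def propagate_to_grid_mean(u_indep, u_struct, u_common, rho_struct=0.5):
    """Uncertainty of the mean of n pixels, combining three error classes.

    u_indep  -- per-pixel independent (random) uncertainties, shape (n,)
    u_struct -- per-pixel structured uncertainties, pairwise correlation rho_struct
    u_common -- per-pixel common (systematic) uncertainties, fully correlated
    """
    n = len(u_indep)
    # Independent errors average down as 1/sqrt(n).
    var_indep = np.sum(u_indep ** 2) / n ** 2
    # Structured errors: full covariance with a constant off-diagonal correlation.
    cov = rho_struct * np.outer(u_struct, u_struct)
    np.fill_diagonal(cov, u_struct ** 2)
    var_struct = cov.sum() / n ** 2
    # Common (systematic) errors do not average down at all.
    var_common = np.mean(u_common) ** 2
    return np.sqrt(var_indep + var_struct + var_common)

# Example: averaging 100 pixels with equal (made-up) component uncertainties.
u_cell = propagate_to_grid_mean(np.full(100, 0.02),   # independent
                                np.full(100, 0.01),   # structured
                                np.full(100, 0.005))  # common
```

With these numbers the independent term shrinks by a factor of ten, the structured term only partially, and the common term not at all, which is why correlation information carried by the level-1b product matters when quoting uncertainties of averaged products.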
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF being the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 24663 KiB  
Article
Robust Multimodal Remote Sensing Image Registration Based on Local Statistical Frequency Information
by Xiangzeng Liu, Jiepeng Xue, Xueling Xu, Zixiang Lu, Ruyi Liu, Bocheng Zhao, Yunan Li and Qiguang Miao
Remote Sens. 2022, 14(4), 1051; https://doi.org/10.3390/rs14041051 - 21 Feb 2022
Cited by 6 | Viewed by 4028
Abstract
Multimodal remote sensing image registration is a prerequisite for the comprehensive application of remote sensing image data. However, inconsistent imaging environments and conditions often lead to obvious geometric deformations and significant contrast differences between multimodal remote sensing images, which makes common feature extraction extremely difficult, so their registration remains a challenging task. To address this issue, a robust local statistics-based registration framework is proposed, whose constructed descriptors are invariant to the contrast changes and geometric transformations induced by imaging conditions. Firstly, maximum phase congruency of the local frequency information is achieved by optimizing the control parameters. Then, salient feature points are located according to the phase congruency response map. Subsequently, geometric and contrast invariant descriptors are constructed based on a joint local frequency information map that combines Log-Gabor filter responses over multiple scales and orientations. Finally, image matching is achieved by finding the corresponding descriptors, and image registration is completed by calculating the transformation between the corresponding feature points. The proposed registration framework was evaluated on four multimodal image datasets with varying degrees of contrast difference and geometric deformation. Experimental results demonstrate that our method outperforms several state-of-the-art methods in terms of robustness and precision, confirming its effectiveness. Full article
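The joint local frequency information map (JLFM) underlying the descriptors can be sketched by summing Log-Gabor filter magnitude responses over scales and orientations. This is a minimal numpy illustration of that general technique, not the authors' implementation; the filter parameters (scale and orientation counts, wavelengths, bandwidth sigmas) are assumed values.

```python
import numpy as np

def log_gabor_bank(shape, n_scales=4, n_orients=6, min_wavelength=3.0,
                   mult=2.1, sigma_f=0.55, sigma_theta=0.4):
    """Frequency-domain Log-Gabor filters over scales and orientations."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)
    filters = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)        # centre frequency
        radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                             # zero DC response
        for o in range(n_orients):
            angle0 = o * np.pi / n_orients
            d_theta = np.arctan2(np.sin(theta - angle0), np.cos(theta - angle0))
            angular = np.exp(-d_theta ** 2 / (2 * sigma_theta ** 2))
            filters.append(radial * angular)
    return filters

def joint_frequency_map(image):
    """Sum of Log-Gabor response magnitudes over all scales and orientations."""
    F = np.fft.fft2(image.astype(float))
    jlfm = np.zeros(image.shape)
    for filt in log_gabor_bank(image.shape):
        jlfm += np.abs(np.fft.ifft2(F * filt))
    return jlfm / jlfm.max()        # normalisation gives contrast invariance

# Demo: the map responds strongly along a synthetic step edge.
edge_img = np.zeros((64, 64))
edge_img[:, 32:] = 1.0
jlfm = joint_frequency_map(edge_img)
```

Because the map is built from filter response magnitudes normalised by their maximum, a global contrast change of the input rescales out, which is the property the descriptors exploit.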
Show Figures

Figure 1
<p>Nonlinear intensity difference and geometric deformation in multimodal images. (<b>a</b>) SAR-Optical image pair. (<b>b</b>) Optical-Infrared image pair. Original images (top line) and matching results obtained by our method (bottom line). The endpoints of the yellow lines in the matching results represent the corresponding matching point pairs.</p>
Fig">
Figure 2
<p>Illustration of registration by the proposed framework. The framework consists of three parts: feature point detection using optimized phase congruency, construction of geometric and contrast invariant descriptors, and feature matching and image registration.</p>
Fig">
Figure 3
<p>Phase congruency feature detection by optimizing parameters. (<b>a</b>) Infrared and visible image pair. (<b>b</b>) CFMs obtained by default parameters. CFMs after parameter optimization (<b>c</b>) can extract more complete structural information than those using the default parameters, so that more high-quality feature points can be detected (<b>d</b>).</p>
Fig">
Figure 4
<p>Salient feature point detection on multimodal remote sensing image pairs. (<b>a</b>,<b>b</b>) are the night image and day image, respectively.</p>
Fig">
Figure 5
<p>Comparison of different structural feature extraction methods. (<b>a</b>) Infrared and visible image pair. JLFMs (<b>c</b>) contain more local structural information than phase congruency maps (<b>b</b>), rendering good matching results (<b>d</b>).</p>
Fig">
Figure 6
<p>Execution process of image feature scale and rotation invariance. (<b>a</b>) The scale invariance of the corresponding feature descriptor is achieved. (<b>b</b>) Orientation histograms of description regions obtained by using ORM and JLFM. (<b>c</b>) Dominant orientations of the description regions. (<b>d</b>) Rotated description regions on JLFMs. (<b>b</b>–<b>d</b>) ensure the rotation invariance of the descriptor.</p>
Fig">
Figure 7
<p>Workflow of the proposed multimodal remote sensing image registration framework.</p>
Fig">
Figure 8
<p>Examples of four multimodal image datasets employed in comparative and evaluative experiments. The image pair types from (<b>a</b>–<b>i</b>) are Visible-Infrared, Optical-Optical, Map-Optical, SAR-Optical, Visible-Depth, Visible Cross-Season, Optical-NIR, Visible Day-Night and Infrared-Optical, respectively.</p>
Fig">
Figure 9
<p>Comparison of matching performance of three different groups of parameters: optimized (<b>left column</b>), default (<b>middle column</b>), randomly selected (<b>right column</b>). The top-to-bottom rows show matching results of infrared-optical, SAR-optical, and depth-optical, respectively.</p>
Fig">
Figure 10
<p>Rotation invariance test of the proposed method on five image pairs randomly selected from remote sensing and UAV datasets.</p>
Fig">
Figure 11
<p>Matching results of image pairs with different rotation angles from 30° to 180° (at intervals of 30°) by using the proposed method.</p>
Fig">
Figure 12
<p>Examples of visible and infrared image pairs from the UAV dataset.</p>
Fig">
Figure 13
<p>NCM obtained by our method for 20 image pairs with large scale changes.</p>
Fig">
Figure 14
<p>Matching results of <a href="#remotesensing-14-01051-f012" class="html-fig">Figure 12</a> and the registration result of the last pair of matching images.</p>
Fig">
Figure 15
<p>Comparison of matching performance of five methods on six image pairs. Five rows form one group, and the top-to-bottom matching results of each group are obtained by Root-SIFT, RIFT, CFOG, SuperGlue, and the proposed method, respectively. The red lines indicate incorrect matching point pairs.</p>
Fig">
Figure 15 Cont.
<p>Comparison of matching performance of five methods on six image pairs. Five rows form one group, and the top-to-bottom matching results of each group are obtained by Root-SIFT, RIFT, CFOG, SuperGlue, and the proposed method, respectively. The red lines indicate incorrect matching point pairs.</p>
Fig">
Figure 16
<p>Registration results of six image pairs by using the proposed method.</p>
Fig">
Figure 17
<p>Matching precisions of five methods on ten image pairs selected from different datasets.</p>
Fig">
Figure 18
<p>RMSEs of registration obtained by the five methods on different types of images.</p>
Fig">
Figure 19
<p>Matching results of the proposed method on different types of images.</p>
Fig">
Figure 19 Cont.
<p>Matching results of the proposed method on different types of images.</p>
">
22 pages, 9297 KiB  
Article
Global Spatiotemporal Variability of Integrated Water Vapor Derived from GPS, GOME/SCIAMACHY and ERA-Interim: Annual Cycle, Frequency Distribution and Linear Trends
by Roeland Van Malderen, Eric Pottiaux, Gintautas Stankunavicius, Steffen Beirle, Thomas Wagner, Hugues Brenot, Carine Bruyninx and Jonathan Jones
Remote Sens. 2022, 14(4), 1050; https://doi.org/10.3390/rs14041050 - 21 Feb 2022
Cited by 9 | Viewed by 2563
Abstract
Atmospheric water vapor plays a prominent role in climate change and in atmospheric, meteorological, and hydrological processes. Because of its high spatiotemporal variability, precise quantification of water vapor is challenging. This study investigates Integrated Water Vapor (IWV) variability for the period 1995–2010 at 118 globally distributed Global Positioning System (GPS) sites, using additional UV/VIS satellite retrievals by GOME, SCIAMACHY, and GOME-2 (denoted GOMESCIA below), plus ERA-Interim reanalysis output. Apart from spatial representativeness differences, particularly at coastal and island sites, all three IWV datasets correlate well: the lowest mean correlation coefficient, 0.878 (averaged over all sites), is found between GPS and GOMESCIA. We confirm the dominance of the standard lognormal distribution in the IWV time series, which can be explained by the combination of a lower mode (a dry season characterized by a standard lognormal distribution with a low median value) and an upper mode (a wet season characterized by a reverse lognormal distribution with a high median value) at European, Western American, and subtropical sites. Despite the relatively short length of the time series, we found good consistency in the sign of the continental IWV trends, not only between the different datasets, but also compared to temperature and precipitation trends. Full article
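Two of the diagnostics used above, a lognormal fit to an IWV series (median and dimensionless geometric standard deviation, GSD) and a linear trend expressed in % per decade, can be sketched on synthetic data. The moment fit in log space and the synthetic series are illustrative assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def fit_lognormal(iwv):
    """Moment fit in log space: return (median, geometric standard deviation)."""
    log_iwv = np.log(iwv)
    return np.exp(log_iwv.mean()), np.exp(log_iwv.std(ddof=1))

def trend_percent_per_decade(t_years, iwv):
    """Ordinary least-squares trend, expressed in % per decade of the mean."""
    slope, intercept, *_ = stats.linregress(t_years, iwv)
    return 100.0 * slope * 10.0 / iwv.mean()

# Synthetic monthly IWV series (mm), 1996-2010: lognormal noise plus a trend.
rng = np.random.default_rng(0)
t = 1996.0 + np.arange(180) / 12.0
iwv = np.exp(rng.normal(np.log(15.0), 0.3, size=180)) + 0.5 * (t - 1996.0)
median, gsd = fit_lognormal(iwv)
trend = trend_percent_per_decade(t, iwv)
```

The GSD is dimensionless and always at least 1; a reverse lognormal mode, as described above, would be fitted the same way after reflecting the series about its maximum.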
(This article belongs to the Special Issue Climate Modelling and Monitoring Using GNSS)
Show Figures

Figure 1
<p>Maps of the 118 IGS stations for which data are available from 1995/1996 to March 2011. (<b>a</b>) Global map, (<b>b</b>) zoom in on Europe, (<b>c</b>) zoom in on North America.</p>
Fig">
Figure 2
<p>Linear Pearson correlation coefficients <span class="html-italic">R</span><sup>2</sup> between the monthly IWV means of (<b>a</b>) GOMESCIA and GPS and (<b>b</b>) ERA-Interim and GPS. A zoom-in on North America and Europe is provided in <a href="#app1-remotesensing-14-01050" class="html-app">Figure S4 of the Supplementary Materials</a>.</p>
Fig">
Figure 3
<p>Mean differences (mm) between the monthly IWV means of (<b>a</b>) GOMESCIA and GPS (GOMESCIA IWV minus GPS IWV) and (<b>b</b>) ERA-Interim and GPS (ERA-Interim IWV minus GPS IWV). A zoom-in on North America and Europe is provided in <a href="#app1-remotesensing-14-01050" class="html-app">Figure S5 of the supplementary material</a>.</p>
Fig">
Figure 4
<p>Standard deviations of the differences (mm) between the IWV monthly means of (<b>a</b>) GOMESCIA and GPS and (<b>b</b>) ERA-Interim and GPS. A zoom-in on North America and Europe is provided in <a href="#app1-remotesensing-14-01050" class="html-app">Figure S6 of the supplementary material</a>.</p>
Fig">
Figure 5
<p>Geographical distribution of the amplitude (length of the arrows) and phase (direction of the arrows like a clock: 1 h = January, 2 h = February, 3 h = March, etc.) of the seasonal cycle in the monthly mean IWV time series of GPS (blue), ERA-interim (red) and GOMESCIA (green). A seasonal cycle of 10 mm amplitude in IWV is illustrated, as reference, by the length of the arrow in the upper left corner. A zoom-in on North America and Europe is provided in <a href="#app1-remotesensing-14-01050" class="html-app">Figure S7 of the supplementary material</a>.</p>
Fig">
Figure 6
<p>Frequency distribution of (<b>a</b>) the amplitudes and (<b>b</b>) phases of the seasonal cycle in the monthly mean IWV time series of GPS (blue), ERA-Interim (red), and GOMESCIA (green) at the location of the 118 IGS sites. As the sites of our sample are not homogeneously distributed around the globe, the shapes of those histograms do not reflect global climatological characteristics.</p>
Fig">
Figure 7
<p>Examples of the different categories of frequency distribution functions for the GPS IWV distribution at 4 GPS sites: (<b>a</b>) the standard lognormal distribution (fit in red) at PERT (Perth, Australia), (<b>b</b>) the reverse lognormal distribution (fit in orange) at BOGT (Bogota, Colombia), (<b>c</b>) the shouldered lognormal distribution (in blue, with the two contributing lognormal distributions in dashed blue) at GRAS (Caussols, France), and, for illustration, the best fit of a single lognormal distribution in red, and (<b>d</b>) the bimodal lognormal distribution (fit in green) at CCJM (Ogasawara, Japan) with its contributing lognormal distributions in dashed lines.</p>
Fig">
Figure 8
<p>(<b>a</b>) Classification of the GPS IWV time series according to their frequency distributions: Gaussian (yellow), standard lognormal (red), reverse lognormal (orange), shouldered lognormal (blue), and bimodal (green). Those colors correspond to the colors used in <a href="#remotesensing-14-01050-f007" class="html-fig">Figure 7</a> for the different categories. (<b>b</b>) Distribution of the dimensionless geometric standard deviation (GSD) of a single lognormal distribution fitted through the ERA-Interim IWV histograms. The sites with unfilled circles have bimodal distributions. The reverse plot (classification of ERA-Interim, GSD distribution for GPS) is included as <a href="#app1-remotesensing-14-01050" class="html-app">Figure S8 in the supplementary material</a>, and <a href="#app1-remotesensing-14-01050" class="html-app">Figure S9</a> zooms in on North America and Europe for the GSD.</p>
Fig">
Figure 9
<p>Classification of the GPS (<b>a</b>) and ERA-Interim (<b>b</b>) IWV time series after removal of the seasonal cycle, according to their frequency distributions. The same color coding as in <a href="#remotesensing-14-01050-f008" class="html-fig">Figure 8</a>a is used.</p>
Fig">
Figure 10
<p>IWV trends (% decade<sup>−1</sup>) for GPS (<b>a</b>), GOMESCIA (<b>b</b>), and ERA-Interim (<b>c</b>) for the period January 1996—December 2010. A zoom-in on the IWV trends in North America and Europe is provided in <a href="#app1-remotesensing-14-01050" class="html-app">Figure S10 of the supplementary material</a>. For illustration, panel (<b>d</b>) shows the ERA-Interim surface temperature trends for the same period in °C decade<sup>−1</sup>.</p>
Fig">
Figure 11
<p>Precipitation trends (% dec<sup>−1</sup>) from a monthly gridded precipitation dataset [<a href="#B46-remotesensing-14-01050" class="html-bibr">46</a>] at the 118 GPS site locations.</p>
">
15 pages, 36345 KiB  
Technical Note
A Remote Sensing Perspective on Mass Wasting in Contrasting Planetary Environments: Cases of the Moon and Ceres
by Lydia Sam and Anshuman Bhardwaj
Remote Sens. 2022, 14(4), 1049; https://doi.org/10.3390/rs14041049 - 21 Feb 2022
Cited by 1 | Viewed by 3157
Abstract
Mass wasting, as one of the most significant geomorphological processes, contributes immensely to planetary landscape evolution. The frequency and diversity of mass wasting features on any planetary body also put engineering constraints on its robotic exploration. Mass wasting on other Solar System bodies shares similar, although not identical, morphological characteristics with its terrestrial counterpart, indicating a possible common nature for their formation. Thus, planetary bodies with contrasting environmental conditions might help reveal the effects of the atmosphere, subsurface fluids, mass accumulation/precipitation, and seismicity on mass wasting, and vice versa. Their relative positions within our Solar System and the environmental and geophysical conditions on the Moon and the dwarf planet Ceres are not only extremely different from Earth’s but from each other too. Their smaller sizes coupled with the availability of global-scale remote sensing datasets make them ideal candidates to understand mass wasting processes in widely contrasting planetary environments. Through this concept article, we highlight several recent advances in and prospects of using remote sensing datasets to reveal unprecedented details on lunar and Cerean mass wasting processes. We start with briefly discussing several recent studies on mass wasting using Lunar Reconnaissance Orbiter Camera (LROC) data for the Moon and Dawn spacecraft data for Ceres. We further identify the prospects of available remote sensing data in advancing our understanding of mass wasting processes under reduced gravity and in a scant (or absent) atmosphere, and we conclude the article by suggesting future research directions. Full article
(This article belongs to the Special Issue Planetary Exploration Using Remote Sensing)
Show Figures

Graphical abstract
Fig">
Figure 1
<p>Spatial distribution of lunar mass wasting processes with respect to lunar geology. This map was created using data from various sources. The tabulated data on a variety of lunar mass wasting features from the Supplementary Material of Xiao et al. [<a href="#B2-remotesensing-14-01049" class="html-bibr">2</a>] were plotted in Geographic Information System (GIS). The data on several characterized lunar rockfalls [<a href="#B27-remotesensing-14-01049" class="html-bibr">27</a>] were downloaded from ETH Zurich’s Research Collection [<a href="#B28-remotesensing-14-01049" class="html-bibr">28</a>] and plotted in GIS. The latest data of the Unified Geologic Map of the Moon were obtained from the United States Geological Survey’s webpage [<a href="#B29-remotesensing-14-01049" class="html-bibr">29</a>,<a href="#B30-remotesensing-14-01049" class="html-bibr">30</a>]. The landing sites of several lunar missions have been included in the white text to provide the contextual information.</p>
Fig">
Figure 2
<p>Schematic representation of various mass wasting processes commonly observed on the Moon.</p>
Fig">
Figure 3
<p>Examples of mass wasting on the Moon (north is up for all the images): (<b>a</b>) evidence of rockfalls on the northern wall of the Giordano Bruno Crater (LROC NAC ID: M106209806R, centered at 36° N/103° E). The red rectangle highlights the zoomed-in view of boulder tracks with one lodged boulder; (<b>b</b>) rock/debris slides (yellow rectangle) with extensive talus deposits on the western wall of the Aristarchus Crater (LROC NAC ID: M107192593L, centered at 23.7 ° N/312.6° E); (<b>c</b>) a well-developed slump (orange ellipse) on the northern wall of the Giordano Bruno Crater (LROC NAC ID: M106209806R, centered at 36° N/103° E); (<b>d</b>) an example of channeled flows on the northern wall of the Gambart Crater (LROC NAC ID: M127009259R, centered at 3.4° N/348.21° E). The red ellipse marks the alcove while the yellow ellipse shows the talus apron; (<b>e</b>) examples of sweeping flow (dark albedo lineation) on the north-western wall of Dawes Crater (LROC NAC ID: M113785646L, centered at 17.2° N/26.4° E); (<b>f</b>) regolith creeping producing rippled topography (red ellipse) on the central peak of Eratosthenes Crater (LROC NAC ID: M117569408R, centered at 14.5° N/348.7° E). The yellow arrows point to several small craters with straightened western walls, deformed as a result of regolith creeping. The maps were generated using the JMARS tool developed by Arizona State University [<a href="#B36-remotesensing-14-01049" class="html-bibr">36</a>].</p>
Fig">
Figure 4
<p>Stratigraphic ages of geologic units hosting lunar mass wasting features, derived using mass wasting data from the Supplementary Material of Xiao et al. [<a href="#B2-remotesensing-14-01049" class="html-bibr">2</a>] and the latest data of lunar geology from the United States Geological Survey [<a href="#B30-remotesensing-14-01049" class="html-bibr">30</a>]. The description of the geologic units can be downloaded from the USGS [<a href="#B41-remotesensing-14-01049" class="html-bibr">41</a>].</p>
Fig">
Figure 5
<p>Distribution of identified mass movements on Ceres. The data used to plot this map in GIS were generated by Parekh et al. [<a href="#B61-remotesensing-14-01049" class="html-bibr">61</a>] and downloaded from Figshare [<a href="#B62-remotesensing-14-01049" class="html-bibr">62</a>]. This map also includes the mass wasting features previously described by Chilton et al. [<a href="#B63-remotesensing-14-01049" class="html-bibr">63</a>], Duarte et al. [<a href="#B64-remotesensing-14-01049" class="html-bibr">64</a>], and Schmidt et al. [<a href="#B65-remotesensing-14-01049" class="html-bibr">65</a>], as compiled by Parekh et al. [<a href="#B61-remotesensing-14-01049" class="html-bibr">61</a>]. The scale is true at the equator.</p>
Fig">
Figure 6
<p>Examples of mass wasting features of Ceres (north is up for all the images): (<b>a</b>) a flow feature with a well-developed debris fan on the western wall of Dantu Crater (26.375° N, 137.281° E); (<b>b</b>) a flow feature (red ellipse) on the western wall of Dantu Crater and an undocumented possible flow candidate (yellow ellipse) on the north-western wall of Axomama Crater; (<b>c</b>) slide deposits on the floor of an unnamed crater (−25.828° N, 191.58° E); (<b>d</b>) a slump feature on the western wall of Dantu Crater. The yellow curved arrow shows the topographic displacement downwards. The maps were generated using the JMARS tool developed by Arizona State University [<a href="#B36-remotesensing-14-01049" class="html-bibr">36</a>].</p>
">
17 pages, 19021 KiB  
Article
Using LiDAR System as a Data Source for Agricultural Land Boundaries
by Natalia Borowiec and Urszula Marmol
Remote Sens. 2022, 14(4), 1048; https://doi.org/10.3390/rs14041048 - 21 Feb 2022
Cited by 12 | Viewed by 3593
Abstract
In this study, LiDAR sensor data were used to identify agricultural land boundaries. This is a remote sensing method using a pulsating laser directed toward the ground. The study focuses on accurately determining parcel edges using only the point cloud, an original approach because the point cloud is a scattered set, which complicates finding the points that define the straight line of a parcel boundary. A further novelty is that no data from other sources are used, and a unique contribution of the research is the attempt to automate the complex process of detecting parcel edges. The first step was to classify the data, using intensity, and define land use boundaries. Two approaches were applied, one for each of two test fields. The first test field was a rectangular parcel of land. In this approach, pixels describing each edge of the plot were automatically grouped into four parts, and the edge description was determined using principal component analysis. The second test area was an internally subdivided plot; here, the Hough Transform was used to extract the edges. The boundaries obtained for both test areas were compared with the boundaries from the Polish land registry database. The analyses show that the proposed algorithms can determine the correct course of land use boundaries. The analyses were conducted for the purpose of control in the system of direct payments for agriculture (Integrated Administration and Control System, IACS), which aims to establish the borders and areas of croplands and to verify the declared group of crops on a given cadastral parcel. The proposed algorithm, based solely on free LiDAR data, allowed the detection of inconsistencies in farmers' declarations. These mainly concerned field roads that farmers misclassified as subsidized land when they should in fact be excluded from subsidies: field roads with average widths of 1.26 and 3.01 m in test area no. 1, and of 1.31, 1.15, 1.88, and 2.36 m in test area no. 2, were wrongly classified as subsidized. Full article
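The principal component analysis step for the first test field, describing an edge from a scattered set of boundary points, amounts to taking the dominant eigenvector of the point covariance as the line direction. A minimal numpy sketch of the idea (not the authors' code; the synthetic edge and noise level are made up):

```python
import numpy as np

def fit_line_pca(points):
    """Fit a line to (n, 2) points; return (centroid, unit direction vector)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The eigenvector of the covariance matrix with the largest eigenvalue
    # is the dominant direction of the point scatter, i.e. the edge line.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return centroid, eigvecs[:, np.argmax(eigvals)]

def point_line_rmse(points, centroid, direction):
    """RMSE of perpendicular distances from the points to the fitted line."""
    d = points - centroid
    perp = d - np.outer(d @ direction, direction)
    return np.sqrt((np.linalg.norm(perp, axis=1) ** 2).mean())

# Noisy points scattered along a hypothetical parcel edge y = 0.5 x + 2.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 100.0, 200)
pts = np.column_stack([x, 0.5 * x + 2.0 + rng.normal(0.0, 0.2, 200)])
centroid, direction = fit_line_pca(pts)
```

The fitted direction is defined only up to sign, but the recovered slope direction[1]/direction[0] is unaffected by the flip. Unlike ordinary least squares, this total-least-squares fit also handles near-vertical parcel edges.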
(This article belongs to the Special Issue Remote Sensing for Land Administration 2.0)
Show Figures

Figure 1
<p>The figure shows a point cloud acquired from the airborne laser scanning system, displayed in plan view. (<b>a</b>) Natural RGB colors; (<b>b</b>) point intensity; (<b>c</b>) color corresponding to point height, i.e., the Z coordinate.</p>
Fig">
Figure 2
<p>The scheme of the proposed algorithm.</p>
Fig">
Figure 3
<p>Edge detection of agricultural lands along with the visible errors—noise. The selected noise is marked with red circles.</p>
Fig">
Figure 4
<p>Segmentation of agricultural lands in the study area.</p>
Fig">
Figure 5
<p>Overlaying of the segmentation and edge detection steps.</p>
Fig">
Figure 6
<p>Scattered points superimposed on a raster representing parcel boundaries.</p>
Fig">
Figure 7
<p>Test area no. 1—segment, coinciding with approximate edges.</p>
Fig">
Figure 8
<p>The sequence of steps (<b>a</b>–<b>c</b>) leading to a raster edge representation of one segment.</p>
Fig">
Figure 9
<p>Example of an error ellipse calculated for one of the parcels of land.</p>
Fig">
Figure 10
<p>Approximation of straight lines based on scattered points that define utility boundaries.</p>
Fig">
Figure 11
<p>Test area no. 2—segment with more approximate edges.</p>
Fig">
Figure 12
<p>The successive steps (<b>a</b>–<b>c</b>) detecting the outer and inner edges of the selected segment.</p>
Fig">
Figure 13
<p>A binary raster representing the edges, with the lines detected by the Hough Transform.</p>
Fig">
Figure 14
<p>Approximation of straight lines defining the course of agricultural edges.</p>
Fig">
Figure 15
<p>Land use boundaries and overlaid vectors from land records. Arrows indicate boundaries that were not considered in further analyses.</p>
Fig">
Figure 16
<p>Comparison of the southern boundary of the land parcel (S) (<b>a</b>) (red line—boundary from the land cadastre, blue—from the LiDAR) and the northern boundary (N) (<b>b</b>) (red line—boundary from the land cadastre, blue—from the LiDAR data).</p>
Fig">
Figure 17
<p>Comparison of the southern boundary of the land parcel (S) (<b>a</b>) (red line—boundary from the land cadastre, blue—from the LiDAR) and the northern boundary (N) (<b>b</b>) (red line—boundary from the land cadastre, blue—from the LiDAR).</p>
">
17 pages, 706 KiB  
Article
A Preliminary Numerical Study to Compare the Physical Method and Machine Learning Methods Applied to GPR Data for Underground Utility Network Characterization
by Rakeeb Mohamed Jaufer, Amine Ihamouten, Yann Goyat, Shreedhar Savant Todkar, David Guilbert, Ali Assaf and Xavier Dérobert
Remote Sens. 2022, 14(4), 1047; https://doi.org/10.3390/rs14041047 - 21 Feb 2022
Cited by 14 | Viewed by 3516
Abstract
In the field of geophysics and civil engineering applications, ground penetrating radar (GPR) technology has become one of the emerging non-destructive testing (NDT) methods thanks to its ability to perform tests without damaging structures. However, NDT applications such as concrete rebar assessment, utility network surveys or the precise localization of embedded cylindrical pipes still remain challenging. The inversion of geometric parameters, such as the depth and radius of embedded cylindrical pipes, as well as of the dielectric parameters of the surrounding material, is of great importance for preventive measures and quality control. Furthermore, precise localization is mandatory for critical underground utility networks, such as gas, power and water lines. In this context, innovative signal processing techniques associated with GPR are capable of performing physical and geometric characterization tasks. This paper evaluates the performance of supervised machine learning and ray-based methods on GPR data. Support vector machine (SVM) classification, support vector regression (SVR) and ray-based methods are all used to correlate information about the radius and depth of embedded pipes with the velocity of stratified media in various numerical configurations. The approach is based on the hyperbola trace emerging in a set of B-scans, given that the shape of the hyperbola varies greatly with pipe depth and radius as well as with the velocity of the medium. In the ray-based method, an inversion of the wave velocity and pipe radius is performed by applying an appropriate nonlinear least-mean-squares inversion technique. Feature selection within the machine learning models is implemented on information chosen from the observed hyperbola travel times. Simulated data are obtained by means of the finite-difference time-domain (FDTD) method with the 2D numerical tool gprMax. The study is carried out on mono-static, ground-coupled GPR datasets. The preliminary study showed that the proposed machine learning methods outperform the ray-based method for estimating radius, depth and velocity. SVR, for instance, estimates depth and radius with mean absolute relative errors of 0.39% and 6.3%, respectively, with regard to the ground truth. A parametric comparison of the aforementioned methodologies in terms of relative error is also included in the performance analysis. Full article
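The ray-based inversion rests on a hyperbola travel-time model: for a cylinder of radius r whose centre lies at depth d0 below the surface, one common parametrisation of the two-way travel time at horizontal offset x from the apex position x0 is t(x) = 2 (sqrt((x - x0)^2 + d0^2) - r) / v. The asymptotic slope constrains v and the constant offset constrains r, so (x0, d0, r, v) can be recovered by nonlinear least squares. A sketch on synthetic picks; the parametrisation and the numbers are illustrative, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def travel_time(x, x0, d0, r, v):
    """Two-way travel time (ns) of a buried-cylinder hyperbola.

    x0 -- apex position (m), d0 -- depth of cylinder centre (m),
    r  -- cylinder radius (m), v -- wave velocity in the medium (m/ns).
    """
    return 2.0 * (np.sqrt((x - x0) ** 2 + d0 ** 2) - r) / v

# Synthetic apex picks: radius 5 cm, centre depth 0.35 m, v = 0.12 m/ns
# (a dry-sand-like medium).
x = np.linspace(-0.5, 0.5, 41)          # antenna positions (m)
t_obs = travel_time(x, 0.0, 0.35, 0.05, 0.12)

# Nonlinear least-squares inversion from a rough initial guess.
p0 = [0.1, 0.3, 0.02, 0.1]
popt, _ = curve_fit(travel_time, x, t_obs, p0=p0)
x0_est, d0_est, r_est, v_est = popt
```

Because r enters the model only through the constant term 2r/v, radius estimates are far more sensitive to picking noise than depth estimates, consistent with the larger radius errors quoted in the abstract.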
Show Figures

Graphical abstract
Fig">
Figure 1
<p>Buried cylinders in the subsurface and the parameters to be estimated.</p>
Fig">
Figure 2
<p>Geometrical, ray-based relationship of a buried cylinder.</p>
Fig">
Figure 3
<p>Examples of hyperbola shape variation across different velocities at the same depth and radius.</p>
Fig">
Figure 4
<p>Examples of hyperbola shape variation across different depths at the same velocity and radius.</p>
Fig">
Figure 5
<p>Examples of hyperbola shape variation across different radii at the same velocity and depth.</p>
Full article ">Figure 6
<p>Representation of travel-time-based feature selection from the hyperbola on a B-scan; <math display="inline"><semantics> <mrow> <msub> <mi>ε</mi> <mi>r</mi> </msub> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>1</mn> <mo> </mo> <mrow> <mi mathvariant="normal">c</mi> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>30</mn> <mo> </mo> <mrow> <mi mathvariant="normal">c</mi> <mi mathvariant="normal">m</mi> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Travel time estimation from an A-scan for hyperbola formation.</p>
Full article ">Figure 8
<p>Confusion matrix of predicted results for radius estimation based on the multi-class SVM classification model. Radius classes: 1 cm, 2 cm, 3 cm, 5 cm, 7 cm and 10 cm, respectively. Blue boxes indicate the number of correct predictions and pink boxes represent the number of false alarms.</p>
Full article ">Figure 9
<p>Absolute relative error (<span class="html-italic">err</span>) in ray-based estimation of radius in the fixed-velocity scenario.</p>
Full article ">Figure 10
<p>Absolute relative error (<span class="html-italic">err</span>) in SVR-based estimation of radius.</p>
Full article ">Figure 11
<p>Absolute relative error (<span class="html-italic">err</span>) variation in SVR-based radius estimation across different depths.</p>
Full article ">Figure 12
<p>Absolute relative error (<span class="html-italic">err</span>) variation in SVR-based radius estimation across different velocities of the medium.</p>
Full article ">Figure 13
<p>SVR-linear relative error (<span class="html-italic">l.r.e</span>) of radius estimation.</p>
Full article ">Figure 14
<p>SVR-linear absolute relative error (<span class="html-italic">a.l.r.e</span>) of radius estimation.</p>
Full article ">Figure 15
<p>Absolute relative error (<span class="html-italic">err</span>) variation in SVR-based depth estimation across different depths.</p>
Full article ">Figure 16
<p>Absolute relative error (<span class="html-italic">err</span>) variation in SVR-based depth estimation across different velocities of the medium.</p>
Full article ">Figure 17
<p>SVR-linear relative error (<span class="html-italic">l.r.e</span>) of depth estimation.</p>
Full article ">Figure 18
<p>SVR-linear absolute relative error (<span class="html-italic">a.l.r.e</span>) of depth estimation.</p>
Full article ">Figure 19
<p>Absolute relative error (<span class="html-italic">err</span>) across depth classes.</p>
Full article ">Figure 20
<p>SVR velocity error (<span class="html-italic">err</span>) across velocity classes.</p>
Full article ">
30 pages, 21712 KiB  
Article
An Integrated Platform for Ground-Motion Mapping, Local to Regional Scale; Examples from SE Europe
by Valentin Poncoş, Irina Stanciu, Delia Teleagă, Liviu Maţenco, István Bozsó, Alexandru Szakács, Dan Birtas, Ştefan-Adrian Toma, Adrian Stănică and Vlad Rădulescu
Remote Sens. 2022, 14(4), 1046; https://doi.org/10.3390/rs14041046 - 21 Feb 2022
Cited by 4 | Viewed by 2998
Abstract
Ground and infrastructure stability are important for our technologically based civilization. Infrastructure projects take into consideration the risk posed by ground displacement (e.g., seismicity, geological conditions and geomorphology). To address this risk, earth scientists and civil engineers employ a range of measurement technologies, such as optical/laser leveling, GNSS and, lately, SAR interferometry. Currently, there is a rich source of measurement information provided in various formats that covers most of the industrialized world. Integration of this information is an issue that will only increase in importance in the future. This work describes a practical approach to address and validate integrated stability measurements through the development of a platform that can be easily used by a variety of groups, from geoscientists to civil engineers, as well as private citizens with no training in this field. The platform enables quick cross-validation between different data sources, easy detection of critical areas at all scales (from large-scale individual buildings to small-scale tectonics) and can be linked to end-users from various monitoring fields and countries for automated notifications. This work closes the gap between specialized monitoring work and the general public, delivering the full value of the technology for societal benefit in a free and open manner. The platform is calibrated and validated by an application of SAR interferometry data to specific situations in the general area of the Romanian Carpathians and their foreland. The results demonstrate an interplay between anthropogenically induced changes and high-amplitude active tectono–sedimentary processes creating rapid regional and local topographic variations. Full article
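Seasonal-motion removal of the kind the platform performs on PSInSAR time series can be approximated by an ordinary least-squares fit of a linear trend plus an annual sinusoid. The sketch below is a minimal, hypothetical version of such a temporal decomposition, not the platform's actual algorithm; the displacement series is synthetic.

```python
import numpy as np

def remove_seasonal(t_years, disp):
    """Least-squares fit of disp = a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t);
    returns the series with the seasonal terms removed and the rate b."""
    G = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(2 * np.pi * t_years),
                         np.cos(2 * np.pi * t_years)])
    m, *_ = np.linalg.lstsq(G, disp, rcond=None)
    seasonal = G[:, 2:] @ m[2:]      # annual sine/cosine contribution
    return disp - seasonal, float(m[1])

# Synthetic PS time series: -6 mm/yr subsidence plus a 3 mm annual cycle.
t = np.arange(0.0, 5.0, 1.0 / 12.0)  # monthly samples over five years
d = -6.0 * t + 3.0 * np.sin(2 * np.pi * t)
detrended, rate = remove_seasonal(t, d)
```

Removing the fitted annual terms leaves the long-term displacement rate, which is the quantity compared against GNSS in the text.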
Show Figures

Graphical abstract

Full article ">Figure 1
<p>The area of analysis: (<b>a</b>) Romania and parts of the neighboring countries; (<b>b</b>) PSInSAR coverage of the studied area; (<b>c</b>) tectonic map of Romania (modified from [<a href="#B11-remotesensing-14-01046" class="html-bibr">11</a>]).</p>
Full article ">Figure 2
<p>PSTool, an online and free-access analysis tool for ground displacement monitoring [<a href="#B23-remotesensing-14-01046" class="html-bibr">23</a>].</p>
Full article ">Figure 3
<p>PSTool application architecture.</p>
Full article ">Figure 4
<p>Permanent GNSS stations in Romania. The color represents the yearly vertical displacement rate.</p>
Full article ">Figure 5
<p>GNSS analysis with PSTool: (<b>a</b>) data selection (time interval, axis, reference station); (<b>b</b>) variation of the vertical axis of the local ENU coordinate system over the studied area, compared to a reference point defined in the SW corner (longitude: 20°; latitude: 43.5°).</p>
Full article ">Figure 6
<p>Types of motion detected from temporal classification. (<b>a</b>) Map of PS targets; (<b>b</b>) subset of PS targets affected by displacement rate changes; (<b>c</b>) quasi-linear displacement rate; (<b>d</b>) variable displacement rate of the point indicated by the magenta circle.</p>
Full article ">Figure 7
<p>Set of PSInSAR targets and subset of targets affected by seasonal motion: (<b>a</b>) PSInSAR motion map in Sulina, Romania; (<b>b</b>) areas affected by seasonal motion in Sulina, Romania; (<b>c</b>) profile from area <a href="#remotesensing-14-01046-f006" class="html-fig">Figure 6</a>b affected by seasonal motion (red star); (<b>d</b>) seasonal motion removed from the profile in <a href="#remotesensing-14-01046-f007" class="html-fig">Figure 7</a>c (red star).</p>
Full article ">Figure 8
<p>Infrastructure versus natural area displacement (village of Razboieni, Romania): (<b>a</b>) infrastructure only (buildings); (<b>b</b>) infrastructure and natural areas; (<b>c</b>) example of LOS displacement time-series of infrastructure in <a href="#remotesensing-14-01046-f007" class="html-fig">Figure 7</a>a; (<b>d</b>) example of LOS displacement time-series in natural areas (blue circle in <a href="#remotesensing-14-01046-f007" class="html-fig">Figure 7</a>b).</p>
Full article ">Figure 9
<p>Ground subsidence related to salt exploitation in Provadia area, northeastern Bulgaria. Temporal series represents an average of profiles for targets located in the red polygon.</p>
Full article ">Figure 10
<p>Ground subsidence related to salt exploitation in Solotvyno, Ukraine.</p>
Full article ">Figure 11
<p>Correlation between geology and measured ground displacement: (<b>a</b>) Sentinel-1 ground displacement map; (<b>b</b>) Sentinel-1 LOS time-series displacement (area 2); (<b>c</b>) geology of the Cernavoda area (detail from [<a href="#B49-remotesensing-14-01046" class="html-bibr">49</a>]); (<b>d</b>) ASAR PSInSAR displacement map (2003–2009).</p>
Full article ">Figure 12
<p>PSInSAR displacement results for the infrastructure area of Cernavoda nuclear plant: (<b>a</b>) unstable infrastructure on alluvial formation; (<b>b</b>) subsidence on the alluvial formation; (<b>c</b>) sudden change in LOS displacements for isolated structure.</p>
Full article ">Figure 13
<p>PSInSAR measurement results in Videle area (Southern Romania).</p>
Full article ">Figure 14
<p>On-the-fly differential measurements between selected GNSS stations: (<b>a</b>) east, SULN vs. AEGY; (<b>b</b>) north, SULN vs. AEGY; (<b>c</b>) up, SULN vs. AEGY; (<b>d</b>) up, SULN vs. IGEO.</p>
Full article ">Figure 15
<p>PSInSAR measurements in the SULN GNSS area. (<b>a</b>) SULN GNSS station area, −3 mm/year; (<b>b</b>) SULN GNSS station neighboring area, −9.2 mm/year; (<b>c</b>) Sulina city, −3 mm/year; (<b>d</b>) polder south of Sulina city, −6 mm/year.</p>
Full article ">Figure 16
<p>Subsidence areas in central Transylvania: (<b>a</b>) subsidence measured with PSInSAR; (<b>b</b>) salt diapirs—color contours (modified from [<a href="#B21-remotesensing-14-01046" class="html-bibr">21</a>]) illustrate the depth of the salt below the surface; (<b>c</b>) gas extraction fields marked with black contours.</p>
Full article ">Figure 17
<p>Subsidence of underground gas storage area: (<b>a</b>) subsidence related to gas storage (<b>left</b>) and gas storage facilities (<b>right</b>; detail from [<a href="#B55-remotesensing-14-01046" class="html-bibr">55</a>]); (<b>b</b>) injection/extraction cycles (gas storage historical data from [<a href="#B56-remotesensing-14-01046" class="html-bibr">56</a>]); (<b>c</b>) subsidence cycles measured with PSInSAR.</p>
Full article ">Figure 18
<p>Total ground motion and separation in three components (stable, up and down): (<b>a</b>) total ground motion; (<b>b</b>) stable targets; (<b>c</b>) subsidence; (<b>d</b>) uplift. Black lines = local and regional fault systems (from [<a href="#B11-remotesensing-14-01046" class="html-bibr">11</a>]).</p>
Full article ">Figure 19
<p>Small-scale motion detection: (<b>a</b>) subsidence/westward motion; (<b>b</b>) uplift/eastward motion (black lines = local and regional fault systems, from [<a href="#B11-remotesensing-14-01046" class="html-bibr">11</a>]).</p>
Full article ">Figure 20
<p>Qualitative correlation between a crustal motion map (<b>left</b>, detail from [<a href="#B63-remotesensing-14-01046" class="html-bibr">63</a>]) and PSINSAR measurement (<b>right</b>).</p>
Full article ">Figure 21
<p>GNSS-measured trend reversal at GNSS station CRCL: (<b>a</b>) 2012–2018 east, 0.9 mm/year; (<b>b</b>) 2017–2021 east, 1.9 mm/year; (<b>c</b>) 2012–2017 north, 1.45 mm/year; (<b>d</b>) 2017–2021 north, −0.41 mm/year; (<b>e</b>) 2012–2017 up, −1.63 mm/year; (<b>f</b>) 2017–2021 up, 1.33 mm/year.</p>
Full article ">Figure 22
<p>PSInSAR LOS trends: EPOS_36_5, 1.28 mm/year; TRACK_36c, 1.12 mm/year.</p>
Full article ">
17 pages, 33579 KiB  
Article
Application of Machine Learning for Simulation of Air Temperature at Dome A
by Xiaoping Pang, Chuang Liu, Xi Zhao, Bin He, Pei Fan, Yue Liu, Meng Qu and Minghu Ding
Remote Sens. 2022, 14(4), 1045; https://doi.org/10.3390/rs14041045 - 21 Feb 2022
Cited by 3 | Viewed by 4203
Abstract
Dome A is the summit of the Antarctic plateau, where the Chinese Kunlun inland station is located. Due to its unique location and high altitude, Dome A provides an important observation site for analyzing global climate change. However, before the arrival of the Chinese Antarctic expedition in 2005, near-surface air temperatures had not been recorded in the region. In this study, we used meteorological parameters, such as ice surface temperature, radiation, wind speed, and cloud type, to build a reliable model for air temperature estimation. Three models (linear regression, random forest, and deep neural network) were developed based on various input datasets: seasonal factors, skin temperature, shortwave radiation, cloud type, longwave radiation from AVHRR-X products, and wind speed from MERRA-2 reanalysis data. In situ air temperatures from 2010 to 2015 were used for training, while 2005–2009 and 2016–2020 measurements were used for model validation. The results showed that the random forest and deep neural network outperformed the linear regression model. In both methods, the 2005–2009 estimates (average bias = 0.86 °C and 1 °C) were more accurate than the 2016–2020 values (average bias = 1.04 °C and 1.26 °C). We conclude that the air temperature at Dome A can be accurately estimated (with an average bias less than 1.3 °C and RMSE around 3 °C) from meteorological parameters using random forest or a deep neural network. Full article
(This article belongs to the Special Issue Remote Sensing of Polar Regions)
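As a rough illustration of the random-forest variant, the snippet below fits a `RandomForestRegressor` to synthetic stand-ins for the study's predictors and scores a held-out block with the same bias/RMSE metrics the abstract reports. The predictor ranges and the relation generating the target are assumptions for demonstration only, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors: skin temperature (deg C),
# shortwave radiation, longwave radiation (W/m^2), wind speed (m/s).
n = 2000
X = rng.uniform([-70.0, 0.0, 80.0, 0.0],
                [-20.0, 500.0, 250.0, 15.0], size=(n, 4))
# Hypothetical relation: air temperature tracks skin temperature,
# warmed by radiation and slightly mixed by wind, plus noise.
y = (X[:, 0] + 0.004 * X[:, 1] + 0.01 * X[:, 2]
     - 0.1 * X[:, 3] + rng.normal(0.0, 0.5, n))

# Temporal hold-out in the spirit of the study: train on one block of
# the record, validate on the held-out remainder.
X_tr, X_va, y_tr, y_va = X[:1500], X[1500:], y[:1500], y[1500:]
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_va)
bias = float(np.mean(pred - y_va))
rmse = float(np.sqrt(np.mean((pred - y_va) ** 2)))
```

The hold-out bias and RMSE computed this way correspond to the "average bias" and RMSE figures quoted in the abstract.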
Show Figures

Graphical abstract

Full article ">Figure 1
<p>The location of Dome A and monthly mean measured temperature trend.</p>
Full article ">Figure 2
<p>The comparison between air temperature from MERRA-2 and AWS (a radius of 3 is specified; for each sample point, the number of scattered points within that radius is counted to indicate the local density of the scatter plot).</p>
Full article ">Figure 3
<p>Skin temperature and air temperature at Dome A obtained from AVHRR and AWS.</p>
Full article ">Figure 4
<p>The seasonal distribution of the temperature differences.</p>
Full article ">Figure 5
<p>Eleven layers of the DNN. Fcn represents the fully connected layer; each fully connected layer also connects to a ReLU layer.</p>
Full article ">Figure 6
<p>The process of adjusting the parameters for random forest (The score is the average score after 10-fold cross-validation.).</p>
Full article ">Figure 7
<p>The process of adjusting the parameters for deep learning (the mean error, mean absolute error, mean percentage error, and RMSE are averages over the 10-fold cross-validation).</p>
Full article ">Figure 8
<p>Comparison of the deviation degree between the simulation and the true value.</p>
Full article ">Figure 9
<p>Comparison of the deviation degree between the simulation and the true value.</p>
Full article ">Figure 10
<p>Simulated time series data using random forest and deep learning.</p>
Full article ">Figure 11
<p>The comparison of the simulated values with MERRA-2.</p>
Full article ">Figure 12
<p>Pearson coefficients of thermodynamic factors as outliers.</p>
Full article ">
24 pages, 11932 KiB  
Article
Target Detection and DOA Estimation for Passive Bistatic Radar in the Presence of Residual Interference
by Haitao Wang, Jun Wang, Junzheng Jiang, Kefei Liao and Ningbo Xie
Remote Sens. 2022, 14(4), 1044; https://doi.org/10.3390/rs14041044 - 21 Feb 2022
Cited by 6 | Viewed by 3207
Abstract
With the development of radio technology, passive bistatic radar (PBR) suffers from interference not only from the base station that is used as the illuminator of opportunity (BS-IoO), but also from base stations with co-frequency or adjacent frequency (BS-CF/AF). It is difficult for a clutter cancellation algorithm to suppress all the interference, especially the interference from BS-CF/AF. The residual interference seriously affects target detection and DOA estimation. To solve this problem, a novel target detection and DOA estimation method for PBR based on compressed sensing sparse reconstruction is proposed. Firstly, a clutter cancellation algorithm is used to suppress the interference from the BS-IoO. Secondly, the residual interference and the target echo are separated in the spatial domain based on azimuth sparse reconstruction. Finally, target detection and DOA estimation methods are given. The proposed method achieves not only target detection and DOA estimation in the presence of residual interference, but also better anti-mainlobe-interference performance and high-resolution DOA estimation. Numerical simulation and experimental results verify the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Radar High-Speed Target Detection, Tracking, Imaging and Recognition)
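Azimuth sparse reconstruction of this general kind can be illustrated with a greedy solver over a dictionary of array steering vectors. The sketch below applies orthogonal matching pursuit, a generic compressed-sensing reconstruction, to a single synthetic snapshot; it is not the paper's exact algorithm, and the array geometry, source angles and amplitudes are assumed values.

```python
import numpy as np

def steering(theta_deg, n_elem, d_over_lambda=0.5):
    """Steering vector of a uniform linear array (half-wavelength spacing)."""
    phase = 2 * np.pi * d_over_lambda * np.arange(n_elem)
    return np.exp(1j * phase * np.sin(np.deg2rad(theta_deg)))

def omp_doa(y, grid_deg, n_elem, k_sparse):
    """Greedy sparse reconstruction (orthogonal matching pursuit) of the
    snapshot y over an azimuth dictionary; returns the selected angles."""
    A = np.stack([steering(th, n_elem) for th in grid_deg], axis=1)
    A = A / np.linalg.norm(A, axis=0)
    resid, support = y.astype(complex), []
    for _ in range(k_sparse):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # re-fit on support
        resid = y - As @ coef
    return sorted(float(grid_deg[i]) for i in support)

# Two sources at -5.6 deg and 11.2 deg observed on a 16-element array.
y = steering(-5.6, 16) + 0.8 * steering(11.2, 16)
grid = np.arange(-30.0, 30.01, 0.4)
angles = omp_doa(y, grid, 16, 2)
```

Because the solution is constrained to a few dictionary atoms, two closely spaced sources can be separated beyond the conventional beamwidth, which is the high-resolution property the abstract refers to.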
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Schematic diagram of radar target detection structure for GSM-based PBR.</p>
Full article ">Figure 2
<p>The proposed target detection and DOA estimation algorithm.</p>
Full article ">Figure 3
<p>RDCC result between the original signal received by the echo antenna and the reference signal of BS-IoO.</p>
Full article ">Figure 4
<p>RDCC results after clutter cancellation, (<b>a</b>) Range-Doppler projection, (<b>b</b>) Enlarged view of target area.</p>
Full article ">Figure 5
<p>Sparse coarse reconstruction result of target range-Doppler unit.</p>
Full article ">Figure 6
<p>Range-Doppler images in different directions (target area) after sparse reconstruction, (<b>a</b>) −5.6°, (<b>b</b>) 0°, (<b>c</b>) 5.6°, (<b>d</b>) 11.2°.</p>
Full article ">Figure 7
<p>Sparse fine reconstruction result of target range-Doppler unit.</p>
Full article ">Figure 8
<p>Target SCNR after sparse reconstruction varying with the DOA of the 4-th strong co-channel interference.</p>
Full article ">Figure 9
<p>Result of DOA estimation accuracy varying with the number of antenna elements.</p>
Full article ">Figure 10
<p>Result of DOA estimation accuracy varying with target SNR.</p>
Full article ">Figure 11
<p>Sparse reconstruction results when two targets are in a range-Doppler unit.</p>
Full article ">Figure 12
<p>The probability of super-resolution DOA estimation varying with the number of array elements and target angular spacing.</p>
Full article ">Figure 13
<p>Photograph of radar antenna.</p>
Full article ">Figure 14
<p>Sketch of radar setup.</p>
Full article ">Figure 15
<p>The processing flow of the digital baseband echo signal in the computer.</p>
Full article ">Figure 16
<p>The RDCC result of the original echo signal.</p>
Full article ">Figure 17
<p>The RDCC result of the non-adaptive beamformer.</p>
Full article ">Figure 18
<p>The RDCC results of ADBF and sparse reconstruction, (<b>a</b>) ADBF, (<b>b</b>) Sparse reconstruction.</p>
Full article ">Figure 19
<p>Sparse reconstruction results of the target range-Doppler unit.</p>
Full article ">Figure 20
<p>Target trajectory tracking result. (<b>a</b>) Range-Doppler accumulated observation result; (<b>b</b>) Target Doppler trajectory tracking result; (<b>c</b>) Target DOA trajectory tracking result.</p>
Full article ">
24 pages, 10737 KiB  
Article
Multitemporal Change Detection Analysis in an Urbanized Environment Based upon Sentinel-1 Data
by Lars Gruenhagen and Carsten Juergens
Remote Sens. 2022, 14(4), 1043; https://doi.org/10.3390/rs14041043 - 21 Feb 2022
Cited by 9 | Viewed by 3581
Abstract
The German Ruhr area is a highly condensed urban area that experienced a tremendous structural change over recent decades with the replacement of the coal and steel industries by other sectors. Consequently, many major land cover changes occurred. To retrospectively quantify such land cover changes, this study analysed synthetic aperture radar images of the Sentinel-1 satellites by applying the Google Earth Engine. Three satellite images are analysed with the multitemporal difference-adjusted dispersion threshold approach, which applies a threshold to capture land cover changes such as demolished and newly constructed buildings. This approach uses synthetic aperture radar data, which are rarely considered in previously existing land cover change services. Urbanization and urban sprawl change urban form globally; such changes can be driven, for example, by migration or, regionally, by structural change, as in the study area presented here. The results are validated with reference data sets that are publicly available nationally (e.g., house contour lines, normalized digital terrain model, digital orthophotos) or globally, such as the Global Urban Footprint and the World Settlement Footprint. On this basis, land cover changes could be identified for 21 locations within the study area of the city of Bochum. Full article
(This article belongs to the Special Issue Remote Sensing of Urban Form)
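A minimal flavor of threshold-based SAR change detection can be given in a few lines: compute the log-ratio of two intensity images and flag pixels that deviate strongly from the scene statistics. This generic sketch is not the MDADT method itself, and the speckled images and the brightened block standing in for a new building are entirely synthetic.

```python
import numpy as np

def change_mask(before, after, k=3.0):
    """Flag pixels whose log-ratio deviates more than k standard
    deviations from the scene mean; a generic thresholding sketch,
    not the exact MDADT formulation."""
    log_ratio = np.log(after + 1e-6) - np.log(before + 1e-6)
    mu, sigma = log_ratio.mean(), log_ratio.std()
    return np.abs(log_ratio - mu) > k * sigma

rng = np.random.default_rng(1)
before = rng.gamma(4.0, 0.25, size=(100, 100))   # speckled backscatter
after = before.copy()                            # unchanged elsewhere
after[40:50, 40:50] *= 12.0                      # brightening: new building
mask = change_mask(before, after, k=3.0)
```

Working on the log-ratio rather than the plain difference makes the test multiplicative, which suits the multiplicative speckle statistics of SAR intensities.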
Show Figures

Figure 1

<p>Fundamental components and aspects of urban form. (Figure adapted from [<a href="#B10-remotesensing-14-01043" class="html-bibr">10</a>]).</p>
Full article ">Figure 2
<p>Right: overview map with indicated study area in Bochum, North Rhine-Westphalia, Germany [<a href="#B48-remotesensing-14-01043" class="html-bibr">48</a>]. Left: study area in the eastern part of Bochum [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>].</p>
Full article ">Figure 3
<p>Described workflow in a flow chart.</p>
Full article ">Figure 4
<p>Multitemporal SAR representation of the former Opel plant I using an additive colour representation. The grey marked pixels represent the time of the respective land cover change: point one (P1) blue, state 5 January 2020, in the eastern surroundings of the plant; P2 cyan, change between 2017 and 2020 at the southern edge of the plant; P3 light green to cyan, state 7 January 2017, former northern halls; P4 yellow, change between 2015 and 2017, former central hall section; P5 red, state 7 January 2015, former halls; P6 magenta, change between 2015 and 2020, former southern halls, today a DHL distribution centre. In this period, long-lasting land cover changes overlap: factory halls were demolished, and the DHL distribution centre was subsequently built on the site. Here, no previously depicted changes are shown, only the difference between the first and the last image. P7 shows no changes for the still-existing administration building—today the “O-Werk” [<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>,<a href="#B60-remotesensing-14-01043" class="html-bibr">60</a>].</p>
Full article ">Figure 5
<p>Recorded and validated multitemporal land cover changes between 2015, 2017, and 2020 [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 6
<p>Composite changes in the house perimeter dataset from 2016 to 2021 compared to the results of the MDADT method’s recorded land cover changes in yellow (2015–2017), cyan (2017–2020) and magenta (2015–2020) [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 7
<p>Changes in the house perimeter dataset per year from 2016–2020, compared to the results of the MDADT method’s recorded land cover changes [<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>,<a href="#B53-remotesensing-14-01043" class="html-bibr">53</a>].</p>
Full article ">Figure 8
<p>Accuracy of land cover changes recorded and validated by the MDADT method [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 9
<p>Accuracy of land cover changes recorded and validated by the MDADT method in the respective time periods [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 10
<p>Recorded and validated multitemporal land cover changes between the years 2015, 2017, and 2020 of the MDADT method in terms of land cover change type [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 11
<p>Land cover changes detected by the MDADT method between 2015 and 2020 shown in their spatial extent: large (&gt;1 ha), medium (0.1–1 ha) and small (&lt;0.1 ha) [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>].</p>
Full article ">Figure 12
<p>Comparison of the 2011-12 GUF, 2015 &amp; 2019 WSF datasets and their changes compared to the 2019 house perimeters [<a href="#B53-remotesensing-14-01043" class="html-bibr">53</a>,<a href="#B57-remotesensing-14-01043" class="html-bibr">57</a>,<a href="#B59-remotesensing-14-01043" class="html-bibr">59</a>,<a href="#B61-remotesensing-14-01043" class="html-bibr">61</a>].</p>
Full article ">Figure 13
<p>Multitemporal SAR representation of the former Opel plant I using the REACTIV method in HSV colour space; see <a href="#remotesensing-14-01043-f004" class="html-fig">Figure 4</a> for comparison in the RGB colour space. Land cover changes from 7 January 2015 are shown in light red, e.g., in the eastern part of the former Opel plant. Land cover changes around 7 January 2017 are marked in blue–cyan. Changes until 5 January 2020 are shown in dark red, e.g., the southern surroundings of the Opel plant [<a href="#B64-remotesensing-14-01043" class="html-bibr">64</a>,<a href="#B65-remotesensing-14-01043" class="html-bibr">65</a>].</p>
Full article ">Figure 14
<p>LaVerDi land cover changes between 2018–2019 compared to MDADT method land cover changes between 2017–2020 [<a href="#B49-remotesensing-14-01043" class="html-bibr">49</a>,<a href="#B52-remotesensing-14-01043" class="html-bibr">52</a>,<a href="#B68-remotesensing-14-01043" class="html-bibr">68</a>].</p>
Full article ">
20 pages, 23831 KiB  
Article
A Novel Method for Hyperspectral Mineral Mapping Based on Clustering-Matching and Nonnegative Matrix Factorization
by Zhongliang Ren, Qiuping Zhai and Lin Sun
Remote Sens. 2022, 14(4), 1042; https://doi.org/10.3390/rs14041042 - 21 Feb 2022
Cited by 12 | Viewed by 4029
Abstract
The emergence of hyperspectral imagery paved a new way for rapid mineral mapping. As a classical hyperspectral classification method, spectral matching (SM) can automatically map the spatial distribution of minerals without the need for selecting training samples. However, due to the influence of noise, the mapping accuracy of SM is usually poor, and its per-pixel matching method is inefficient to some extent. To solve these problems, we propose an unsupervised clustering-matching mapping method, using a combination of k-means and SM (KSM). First, nonnegative matrix factorization (NMF) is used and combined with a simple and effective NMF initialization method (SMNMF) for feature extraction. Then, k-means is implemented to get the cluster centers of the extracted features and band depth, which are used for clustering and matching, respectively. Finally, dimensionless matching methods, including spectral angle mapper (SAM), spectral correlation angle (SCA), spectral gradient angle (SGA), and a combined matching method (SCGA) are used to match the cluster centers of band depth with a spectral library to obtain the mineral mapping results. A case study on the airborne hyperspectral image of Cuprite, Nevada, USA, demonstrated that the average overall accuracies of KSM based on SAM, SCA, SGA, and SCGA are approximately 22%, 22%, 35%, and 33% higher than those of SM, respectively, and KSM can save more than 95% of the mapping time. Moreover, the mapping accuracy and efficiency of SMNMF are about 15% and 38% higher than those of the widely used NMF initialization method. In addition, the proposed SCGA could achieve promising mapping results at both high and low signal-to-noise ratios compared with other matching methods. The mapping method proposed in this study provides a new solution for the rapid and autonomous identification of minerals and other fine objects. Full article
(This article belongs to the Special Issue Hyperspectral and Multispectral Imaging in Geology)
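The core matching step of KSM — assigning each cluster centre to the library spectrum with the smallest spectral angle — can be sketched in a few lines. This is an illustrative re-implementation based only on the abstract, not the authors' code; the names `spectral_angle` and `match_to_library` are ours:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): angle (radians) between two spectra.

    SAM is dimensionless, so scaling a spectrum does not change the angle.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def match_to_library(cluster_centers, library):
    """Assign each cluster centre to the library spectrum with the smallest angle."""
    labels = []
    for c in cluster_centers:
        angles = [spectral_angle(np.asarray(c, float), np.asarray(ref, float))
                  for ref in library]
        labels.append(int(np.argmin(angles)))
    return labels
```

Because only the K cluster centres are matched instead of every pixel, a step like this is where the reported saving of more than 95% of the mapping time would come from.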
Show Figures

Figure 1
<p>Study area and data: (<b>a</b>) location map; (<b>b</b>) mineral distribution map from USGS; and (<b>c</b>) AVIRIS image.</p>
Figure 2
<p>The spectral curves of the Cuprite minerals in the USGS spectral library.</p>
Figure 3
<p>Spectral curves of muscovite from the USGS mineral spectral library after spectral preprocessing.</p>
Figure 4
<p>Flowchart of the clustering-matching procedure.</p>
Figure 5
<p>Spectral features of the AVIRIS image extracted by NMF: (<b>a</b>–<b>f</b>) are the NMF features initialized by NNDSVD; (<b>g</b>–<b>l</b>) are the NMF features initialized by NNDSVDa; and (<b>m</b>–<b>r</b>) are the NMF features initialized by SMNMF.</p>
Figure 6
<p>Mineral mapping results of the AVIRIS image: (<b>a</b>–<b>d</b>) are the mapping results based on SM–SAM, SM–SCA, SM–SGA, and SM–SCGA, respectively; (<b>e</b>–<b>h</b>) are the mapping results based on NKSM–SAM, NKSM–SCA, NKSM–SGA, and NKSM–SCGA, respectively; (<b>i</b>–<b>l</b>) are the mapping results based on NAKSM–SAM, NAKSM–SCA, NAKSM–SGA, and NAKSM–SCGA, respectively; and (<b>m</b>–<b>p</b>) are the mapping results based on SKSM–SAM, SKSM–SCA, SKSM–SGA, and SKSM–SCGA, respectively.</p>
Figure 7
<p>Mineral mapping accuracies of the AVIRIS image: (<b>a</b>–<b>d</b>) are the mapping accuracies based on SAM, SCA, SGA, and SCGA, respectively.</p>
Figure 8
<p>Mineral mapping results of the AVIRIS image after adding zero-mean Gaussian noise with the standard deviation of 0.006: (<b>a</b>–<b>d</b>) are the mapping results based on SM–SAM, SM–SCA, SM–SGA, and SM–SCGA, respectively; (<b>e</b>–<b>h</b>) are the mapping results based on NKSM–SAM, NKSM–SCA, NKSM–SGA, and NKSM–SCGA, respectively; (<b>i</b>–<b>l</b>) are the mapping results based on NAKSM–SAM, NAKSM–SCA, NAKSM–SGA, and NAKSM–SCGA, respectively; and (<b>m</b>–<b>p</b>) are the mapping results based on SKSM–SAM, SKSM–SCA, SKSM–SGA, and SKSM–SCGA, respectively.</p>
Figure 9
<p>Mineral mapping accuracies of the AVIRIS image after adding zero-mean Gaussian noise: (<b>a</b>–<b>d</b>) are the mapping accuracies based on SM, NKSM, NAKSM, and SKSM, respectively.</p>
Figure 10
<p>Original and averaged spectrum of (<b>a</b>) alunite and (<b>b</b>) kaolinite.</p>
Figure 11
<p>Average overall accuracies of 20 random clustering-matchings of KSM with different K values: (<b>a</b>–<b>c</b>) are the mapping accuracies based on NKSM, NAKSM, and SKSM, respectively.</p>
Figure 12
<p>Mineral mapping results of the AVIRIS image based on MF: (<b>a</b>–<b>d</b>) are the mapping results of SAM, SCA, SGA, and SCGA, respectively.</p>
Figure 13
<p>Mineral mapping accuracies of 30 AVIRIS images with different mixing levels: (<b>a</b>–<b>d</b>) are the mapping accuracies based on NKSM, NAKSM, SKSM, and MF, respectively.</p>
24 pages, 52501 KiB  
Article
A Suitable Retrieval Algorithm of Arctic Snow Depths with AMSR-2 and Its Application to Sea Ice Thicknesses of Cryosat-2 Data
by Zhaoqing Dong, Lijian Shi, Mingsen Lin and Tao Zeng
Remote Sens. 2022, 14(4), 1041; https://doi.org/10.3390/rs14041041 - 21 Feb 2022
Cited by 5 | Viewed by 3161
Abstract
Arctic sea ice and snow affect the energy balance of the global climate system through the radiation budget. Accurate determination of the snow cover over Arctic sea ice is significant for the retrieval of the sea ice thickness (SIT). In this study, we developed a new snow depth retrieval method over Arctic sea ice with a long short-term memory (LSTM) deep learning algorithm based on Operation IceBridge (OIB) snow depth data and brightness temperature data of AMSR-2 passive microwave radiometers. We compared climatology products (modified W99 and AWI), altimeter products (Kwok) and microwave radiometer products (Bremen, Neural Network and LSTM). The climatology products and altimeter products are completely independent of the OIB data used for training, while microwave radiometer products are not completely independent of the OIB data. We also compared the SITs retrieved from the above different snow depth products based on Cryosat-2 radar altimeter data. First, the snow depth spatial patterns for all products are in broad agreement, but the temporal evolution patterns are distinct. Snow products of microwave radiometers, such as Bremen, Neural Network and LSTM snow depth products, show thicker snow in early winter with respect to the climatology snow depth products and the altimeter snow depth product, especially in the multiyear ice (MYI) region. In addition, the differences in all snow depth products are relatively large in the early winter and relatively small in spring. Compared with the OIB and IceBird observation data (April 2019), the snow depth retrieved by the LSTM algorithm is better than that retrieved by the other algorithms in terms of accuracy, with a correlation of 0.55 (0.90), a root mean square error (RMSE) of 0.06 m (0.05 m) and a mean absolute error (MAE) of 0.05 m (0.04 m). The spatial pattern and seasonal variation of the SITs retrieved from different snow depths are basically consistent. 
The thickness of the total sea ice first decreases and then increases as the seasons change. Compared with the OIB SIT in April 2019, the SIT retrieved with the LSTM snow depth is superior to that retrieved with the other snow depth products in terms of accuracy, with the highest correlation of 0.46, the lowest RMSE of 0.59 m and the lowest MAE of 0.44 m. In general, it is promising to retrieve Arctic snow depth using the LSTM algorithm, but snow depth retrieval over MYI still needs to be verified with more measured data, especially in early winter. Full article
(This article belongs to the Special Issue Remote Sensing Monitoring of Arctic Environments)
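The accuracy statistics quoted above (correlation, RMSE and MAE against the OIB and IceBird observations) follow their standard definitions; a minimal sketch, with `evaluate_snow_depth` as an assumed helper name rather than anything from the paper:

```python
import numpy as np

def evaluate_snow_depth(retrieved, observed):
    """Correlation, RMSE and MAE between retrieved and airborne snow depths (m)."""
    retrieved = np.asarray(retrieved, float)
    observed = np.asarray(observed, float)
    r = np.corrcoef(retrieved, observed)[0, 1]          # Pearson correlation
    rmse = np.sqrt(np.mean((retrieved - observed) ** 2))
    mae = np.mean(np.abs(retrieved - observed))
    return r, rmse, mae
```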
Show Figures

Figure 1
<p>Airborne snow depth map: (<b>a</b>) Operation IceBridge (OIB) from March 2013 to April 2019; the sub-figure on the top right shows the OIB data used to test the model and other snow depth products. (<b>b</b>) IceBird in April 2019.</p>
Figure 2
<p>Time series of snow density.</p>
Figure 3
<p>Structure of the LSTM neural unit.</p>
Figure 4
<p>Framework of the LSTM model.</p>
Figure 5
<p>Evaluation of the training of the LSTM model.</p>
Figure 6
<p>Flowchart of SIT retrieval.</p>
Figure 7
<p>Spatial distribution of different snow depth products: (<b>a</b>) modified W99 snow depth, (<b>b</b>) AWI snow depth, (<b>c</b>) Bremen snow depth, (<b>d</b>) Kwok snow depth, (<b>e</b>) Neural Network snow depth, (<b>f</b>) LSTM snow depth. The dates from the first row to the last row are November 2018, December 2018, January 2019, February 2019, March 2019 and April 2019.</p>
Figure 8
<p>Histogram of seasonal variations in the different snow depth products over the common area of all data sets: (<b>a</b>) snow depth on total sea ice, (<b>b</b>) snow depth on FYI, (<b>c</b>) snow depth on MYI.</p>
Figure 9
<p>Scatterplots of different snow depth products and OIB snow depth: (<b>a</b>) modified W99 snow depth, (<b>b</b>) AWI snow depth, (<b>c</b>) Bremen snow depth, (<b>d</b>) Kwok snow depth, (<b>e</b>) Neural Network snow depth, (<b>f</b>) LSTM snow depth. The black line shows the relationship between the two datasets.</p>
Figure 10
<p>Scatterplots of different snow depth products and IceBird snow depth: (<b>a</b>) modified W99 snow depth, (<b>b</b>) AWI snow depth, (<b>c</b>) Bremen snow depth, (<b>d</b>) Kwok snow depth, (<b>e</b>) Neural Network snow depth, (<b>f</b>) LSTM snow depth. The black line shows the relationship between the two datasets.</p>
Figure 11
<p>Spatial distribution of SIT from different snow depth product retrievals: (<b>a</b>) modified W99 snow depth, (<b>b</b>) AWI snow depth, (<b>c</b>) Bremen snow depth, (<b>d</b>) Kwok snow depth, (<b>e</b>) Neural Network snow depth and (<b>f</b>) LSTM snow depth. The dates from the first row to the last row are November 2018, December 2018, January 2019, February 2019, March 2019 and April 2019.</p>
Figure 12
<p>Histogram of seasonal variations in different SIT products: (<b>a</b>) total sea ice, (<b>b</b>) FYI and (<b>c</b>) MYI.</p>
Figure 13
<p>Evaluation of the CryoSat-2 (CS-2) SIT retrieved from different snow depths compared with the OIB sea ice thickness: (<b>a</b>) modified W99 snow depth, (<b>b</b>) AWI snow depth, (<b>c</b>) Bremen snow depth, (<b>d</b>) Kwok snow depth, (<b>e</b>) Neural Network snow depth and (<b>f</b>) LSTM snow depth. The black line shows the relationship between the two datasets.</p>
19 pages, 17943 KiB  
Article
Integrated Fire Management as a Renewing Agent of Native Vegetation and Inhibitor of Invasive Plants in Vereda Habitats: Diagnosis by Remotely Piloted Aircraft Systems
by Jéssika Cristina Nascente, Manuel Eduardo Ferreira and Gustavo Manzon Nunes
Remote Sens. 2022, 14(4), 1040; https://doi.org/10.3390/rs14041040 - 21 Feb 2022
Cited by 4 | Viewed by 3706
Abstract
The Cerrado biome is being gradually reduced. Remote sensing has been widely used to investigate spatio-temporal changes in the landscape, which are frequently limited to mapping with orbital sensors, while the Remotely Piloted Aircraft System (RPAS) proved to be advantageous in terms of spatial resolution and the application of advanced digital processing techniques. In this study, we investigated a vereda (humid area) of a conservation unit in the state of Mato Grosso, Brazil. Object-Based Image Analysis (OBIA) was applied to images obtained by RPAS to distinguish the phytophysiognomies of plant strata from the vereda and to diagnose the recovery of native and invasive vegetation after prescribed burning. The study was carried out in the following five stages: biomass collection; quality analysis of the land cover; phytosociological survey; collection of control points using a GNSS receiver (type L1/L2); and the capture of aerial images with an RGB camera coupled to a DJI Phantom 4 Pro, which was performed through overflights in three different periods. Object–Based Image Analysis was subsequently performed using the Nearest Neighbor classifier combined with Feature Space Optimization, obtaining classifications with accuracy and Kappa indexes greater than 80% and 0.80, respectively. The results of image processing allowed us to infer that fire acted as a renewing agent for native vegetation and as an inhibiting agent for invasive vegetation. The classification analyses combined with the phytosociological analysis allowed us to infer that the vereda is in the process of maturation. Therefore, the study demonstrated the potential of data obtained by RPAS for the diagnosis and analysis of vegetation dynamics in small wetlands submitted to Integrated Fire Management (IFM). Full article
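The accuracy and Kappa indexes used to validate the OBIA classifications are computed from a standard error (confusion) matrix; a generic sketch of those two statistics, independent of the authors' software:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```

A classification "greater than 80% and 0.80" in this paper's terms means `po > 0.8` and the returned kappa above 0.8.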
Show Figures

Figure 1
<p>Location of the study area.</p>
Figure 2
<p>Flowchart of procedures performed in the study area.</p>
Figure 3
<p>Digital Elevation Model (DEM) with the allocation of control points, biomass collection, and soil cover.</p>
Figure 4
<p>OBIA classification of the orthomosaic image of 23 May 2018—‘pre–burn’.</p>
Figure 5
<p>OBIA classification of the orthomosaic image of 8 June 2018—‘post–burning’.</p>
Figure 6
<p>OBIA Classification of the orthomosaic image of 2 July 2019—‘Monitoring’.</p>
17 pages, 3149 KiB  
Article
Integrating Multi-Source Remote Sensing to Assess Forest Aboveground Biomass in the Khingan Mountains of North-Eastern China Using Machine-Learning Algorithms
by Xiaoyi Wang, Caixia Liu, Guanting Lv, Jinfeng Xu and Guishan Cui
Remote Sens. 2022, 14(4), 1039; https://doi.org/10.3390/rs14041039 - 21 Feb 2022
Cited by 10 | Viewed by 4345
Abstract
Forest aboveground biomass (AGB) is of great significance since it represents large carbon storage and may reduce global climate change. However, there are still considerable uncertainties in forest AGB estimates, especially in rugged regions, due to the lack of effective algorithms to remove the effects of topography and the lack of comprehensive comparisons of methods used for estimation. Here, we systematically compare the performance of three sources of remote sensing data used in forest AGB estimation, along with three machine-learning algorithms using extensive field measurements (N = 1058) made in the Khingan Mountains of north-eastern China in 2008. The datasets used were obtained from the LiDAR-based Geoscience Laser Altimeter System onboard the Ice, Cloud, and land Elevation satellite (ICESat/GLAS), the optical-based Moderate Resolution Imaging Spectroradiometer (MODIS), and the SAR-based Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR). We show that terrain correction is effective for this mountainous study region and that the combination of terrain-corrected GLAS and PALSAR features with Random Forest regression produces the best results at the plot scale. Including further MODIS-based features added little power for prediction. Based upon the parsimonious data source combination, we created a map of AGB circa 2008 and its uncertainty, which yields a coefficient of determination (R2) of 0.82 and a root mean squared error of 16.84 Mg ha−1 when validated with field data. Forest AGB values in our study area were within the range 79.81 ± 16.00 Mg ha−1, ~25% larger than a previous, SAR-based, analysis. Our result provides a historic benchmark for regional carbon budget estimation. Full article
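The plot-scale validation statistics reported here (R² = 0.82, RMSE = 16.84 Mg ha−1, plus the relative RMSE shown in the figures) follow their usual definitions; a small sketch, with `validate_agb` as a hypothetical name:

```python
import numpy as np

def validate_agb(predicted, observed):
    """R^2, RMSE and relative RMSE (%) of predicted vs. field-measured AGB (Mg/ha)."""
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse, 100.0 * rmse / observed.mean()
```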
Show Figures

Figure 1
<p>Location of the study area along with the topography provided by SRTM (elevation ranges from 28 to 1435 m). The green points indicate the locations of field measurements. The subplot shows the distribution of ICESat/GLAS data (<b>a</b>) with enlarged map (<b>b</b>) showing details of its distribution, and histogram of elevation values for both the field plots (<b>c</b>) and the whole study region (<b>d</b>).</p>
Figure 2
<p>Schematic of the forest AGB mapping in this study. We first extracted the GLAS-based parameters using an improved algorithm suitable for rugged areas. Then, in the data selection procedure, we compared the performance of GLAS, MODIS and PALSAR, and the combinations (GLAS + MODIS, GLAS + PALSAR, and GLAS + MODIS + PALSAR). Thirdly, the performance of three different machine-learning algorithms was evaluated. Finally, we produced the benchmark map of forest AGB and its uncertainty.</p>
Figure 3
<p>Comparison of predicted forest aboveground biomass versus independent field-measured values. The dark red region represents the 95% confidence intervals for the regression line, and the light red area shows the prediction intervals for all individual observations. Performance of the proposed method was evaluated with <span class="html-italic">R</span><sup>2</sup>, Root Mean Square Error (RMSE) and relative RMSE (RMSE%).</p>
Figure 4
<p>Distribution of forest aboveground biomass in the Khingan Mountains of north-eastern China in 2008. The inserted panel shows a histogram of forest AGB, with the dashed lines corresponding to the intervals displayed on the map (i.e., 66, 77, 90, 105 Mg ha<sup>−1</sup>).</p>
Figure 5
<p>Distribution of the uncertainty of forest aboveground biomass in the Khingan Mountains of northeastern China circa 2008. The inserted panel shows a histogram of forest AGB uncertainty, with the dashed lines corresponding to the intervals displayed on the map (i.e., 15, 20, 25, 30 Mg ha<sup>−1</sup>).</p>
Figure 6
<p>Comparison of two existing maps versus independent field-measured values (same as <a href="#remotesensing-14-01039-f003" class="html-fig">Figure 3</a>). Performance of those two maps was evaluated with <span class="html-italic">R</span><sup>2</sup>, Root Mean Square Error (RMSE) and relative RMSE (RMSE%).</p>
21 pages, 15231 KiB  
Article
Meta-Learner Hybrid Models to Classify Hyperspectral Images
by Dalal AL-Alimi, Mohammed A. A. Al-qaness, Zhihua Cai, Abdelghani Dahou, Yuxiang Shao and Sakinatu Issaka
Remote Sens. 2022, 14(4), 1038; https://doi.org/10.3390/rs14041038 - 21 Feb 2022
Cited by 24 | Viewed by 3754
Abstract
Hyperspectral (HS) images are adjacent band images that are generally used in remote-sensing applications. They have numerous spatial and spectral information bands that are extremely useful for material detection in various fields. However, their high dimensionality is a big challenge that affects their overall performance. A new data normalization method was developed to enhance the variations and data distribution using the output of principal component analysis (PCA) and quantile transformation, called QPCA. This paper also proposes a novel HS images classification framework using the meta-learner technique to train multi-class and multi-size datasets by concatenating and training the hybrid and multi-size kernel of convolutional neural networks (CNN). The high-level model works to combine the output of the lower-level models and train them with the new input data, called meta-learner hybrid models (MLHM). The proposed MLHM framework with our external normalization (QPCA) improves the accuracy and outperforms other approaches using three well-known benchmark datasets. Moreover, the evaluation outcomes showed that the QPCA enhanced the framework accuracy by 13% for most models and datasets and others by more than 25%, and MLHM provided the best performance. Full article
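QPCA is described as principal component analysis followed by a quantile transformation. One plausible reading of that pipeline, sketched in plain NumPy with a uniform rank transform (the paper's exact target distribution and implementation may differ; `qpca` is our name):

```python
import numpy as np

def qpca(X, n_components=3):
    """Sketch of QPCA: PCA scores mapped to [0, 1] by their empirical quantiles.

    X: (samples, bands) array, e.g. flattened hyperspectral pixels.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    scores = Xc @ Vt[:n_components].T
    n = scores.shape[0]
    out = np.empty_like(scores)
    for j in range(scores.shape[1]):
        ranks = np.argsort(np.argsort(scores[:, j]))    # rank transform
        out[:, j] = ranks / (n - 1)                     # uniform on [0, 1]
    return out
```

Mapping each component to its empirical quantiles flattens the heavy-tailed score distributions, which matches the abstract's motivation of enhancing variations and data distribution before training.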
Show Figures

Figure 1
<p>The framework of meta-learner hybrid models (MLHM).</p>
Figure 2
<p>Dataset images with the ground truth.</p>
Figure 3
<p>The three shapes of the data distribution: (<b>a</b>) the original data, (<b>b</b>) the data distribution after using principal component analysis (PCA), and (<b>c</b>) the data distribution after using quantile transformation principal component analysis (QPCA).</p>
Figure 4
<p>The results of both levels models for the IPs dataset.</p>
Figure 5
<p>The results of both levels models for the Pavia-University dataset.</p>
Figure 6
<p>The results of both levels models for the KS_Center dataset.</p>
Figure 7
<p>Loss values of the three datasets for the two models of Level-0 (QPCA-Model 1 and QPCA-Model 2) and the Level-1 model (QPCA-MLHM). (<b>a</b>) shows the loss values of the IPs dataset, (<b>b</b>) shows the loss values of the Pavia-University dataset, and (<b>c</b>) shows the loss values of the Kennedy Space Center dataset.</p>
Figure 8
<p>The output of the various models with PCA and QPCA preprocessing for the IPs dataset.</p>
Figure 9
<p>The output of the various models with PCA and QPCA preprocessing for the Pavia University dataset.</p>
Figure 10
<p>The output of the various models with PCA and QPCA preprocessing for the KS_Center dataset.</p>
Figure 11
<p>The training accuracy in each model for the three used datasets: (<b>a</b>) IPs dataset, (<b>b</b>) the Pavia University dataset, and (<b>c</b>) the KS_Center dataset.</p>
19 pages, 10356 KiB  
Article
A Novel Workflow for Seasonal Wetland Identification Using Bi-Weekly Multiple Remote Sensing Data
by Liwei Xing, Zhenguo Niu, Cuicui Jiao, Jing Zhang, Shuqing Han, Guodong Cheng and Jianzhai Wu
Remote Sens. 2022, 14(4), 1037; https://doi.org/10.3390/rs14041037 - 21 Feb 2022
Cited by 9 | Viewed by 3382
Abstract
Accurate wetland mapping is essential for their protection and management; however, it is difficult to accurately identify seasonal wetlands because of irregular rainfall and the potential lack of water inundation. In this study, we propose a novel method to generate reliable seasonal wetland maps with a spatial resolution of 20 m using a seasonal-rule-based method in the Zhalong and Momoge National Nature Reserves. This study used Sentinel-1 and Sentinel-2 data, along with a bi-weekly composition method to generate a 15-day image time series. The random forest algorithm was used to classify the images into vegetation, waterbodies, bare land, and wet bare land during each time period. Several rules were incorporated based on the intra-annual changes in the seasonal wetlands and annual wetland maps of the study regions were generated. Validation processes showed that the overall accuracy and kappa coefficient were above 89.8% and 0.87, respectively. The seasonal-rule-based method was able to identify seasonal marshes, flooded wetlands, and artificial wetlands (e.g., paddy fields). Zonal analysis indicated that seasonal wetland types, including flooded wetlands and seasonal marshes, accounted for over 50% of the total wetland area in both Zhalong and Momoge National Nature Reserves; and permanent wetlands, including permanent water and permanent marsh, only accounted for 11% and 12% in the two reserves, respectively. This study proposes a new method to generate reliable annual wetland maps that include seasonal wetlands, providing an accurate dataset for interannual change analyses and wetland protection decision-making. Full article
(This article belongs to the Special Issue Remote Sensing of Wetlands and Biodiversity)
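The bi-weekly composition step can be sketched as grouping scenes into 15-day day-of-year windows and taking a per-pixel median. This is a simplified stand-in for the paper's compositing (whose exact rule may differ); `biweekly_composite` is a hypothetical name:

```python
import numpy as np

def biweekly_composite(days_of_year, images, period_days=15):
    """Per-pixel median composite of scenes grouped into 15-day windows.

    Returns a dict mapping window index -> composited image.
    """
    groups = {}
    for doy, img in zip(days_of_year, images):
        key = int((doy - 1) // period_days)             # 15-day window index
        groups.setdefault(key, []).append(np.asarray(img, float))
    return {k: np.median(np.stack(v), axis=0) for k, v in groups.items()}
```

A median composite suppresses transient outliers (cloud shadow, noise) within each window, which is one common reason for compositing before per-period classification.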
Show Figures

Figure 1
<p>Location of study area.</p>
Figure 2
<p>Typical spectral profiles of vegetation, waterbodies, bare land, and wet bare land. The figure was generated using the Sentinel-2 TOA reference average of intra-annual training samples. Band2, Band3, Band4, Band5, Band6, Band7, Band8, Band8A, Band11, and Band12 represent Blue, Green, Red, Red-edge 1, Red-edge 2, Red-edge 3, NIR (broad), NIR (narrow), SWIR1, and SWIR 2 bands of Sentinel-2 data, respectively.</p>
Figure 3
<p>Flowchart of the methodology followed in the study (<b>A</b>,<b>B</b>). <span class="html-italic">N</span> is the number of 15-day periods, with a maximum value of 17.</p>
Figure 4
<p>Overall accuracy (OA) and kappa coefficient of 15-day intra-annual classification results. Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
Figure 5
<p>Producer accuracy (PA) and user accuracy (UA) of 15-day intra-annual classification results. Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
Figure 6
<p>Comparison between our resultant wetland map and the classification result based on optimal features and the RF algorithm. (<b>a</b>) Location of the first subset in our study area (false colour image created from 15-day Sentinel-2 composited image between 16 and 31 July). (<b>b</b>) False colour image for the first subset (15-day Sentinel-2 composited image between 1 and 15 April). (<b>c</b>) False colour image for the first subset (15-day Sentinel-2 composited image between 16 and 31 July). (<b>d</b>) False colour image for the first subset (15-day Sentinel-2 composited image between 16 and 31 October). (<b>e</b>,<b>f</b>) Wetland mapping result using seasonal-rule-based wetland classification method. (<b>g</b>,<b>h</b>) Wetland mapping result using the Random Forest algorithm. (<b>A</b>–<b>D</b>) 15-day Sentinel-2 composited NDVI time series curves for point A, B, C, and D in (<b>e</b>–<b>h</b>). Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
Figure 7
<p>Intra-annual classification results based on 15-day periods.</p>
Figure 8
<p>Changes in four landcovers area for Zhalong National Nature Reserve and Momoge National Nature Reserve. Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
Figure 9
<p>Annual wetland classification result based on seasonal-rule-based wetland classification method.</p>
Figure 10
<p>Areal ratio of each wetland type in Zhalong National Nature Reserve and Momoge National Nature Reserve.</p>
Figure 11
<p>Comparison between our resultant wetland map and the European Space Agency (ESA) World Cover map. (<b>a</b>) Location of the second subset in our study area (false colour image created from 15-day Sentinel-2 composited image between 1 and 15 May). (<b>b</b>) False colour image for the second subset (15-day Sentinel-2 composited image between 16 and 30 June). (<b>c</b>) False colour image for the second subset (15-day Sentinel-2 composited image between 16 and 30 September). (<b>d</b>) False colour image for the second subset (15-day Sentinel-2 composited image between 1 and 15 November). (<b>e</b>,<b>f</b>) Wetland mapping result using seasonal-rule-based wetland classification method. (<b>g</b>,<b>h</b>) ESA World Cover map for the second subset. (<b>A</b>–<b>D</b>) 15-day Sentinel-2 composited NDVI time series curves for points A, B, C, and D in (<b>f</b>,<b>h</b>). Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
Figure 12
<p>Seasonal marsh1 in different areas. (<b>a</b>) Location of the third subset in our study area (false colour image created from 15-day Sentinel-2 composited image between 16 and 31 July). (<b>b</b>) False colour image and our classification result for the third subset. (<b>A</b>) 15-day Sentinel-2 composited NDVI time series curves for the red point in <a href="#remotesensing-14-01037-f012" class="html-fig">Figure 12</a>b. (<b>c</b>) Location of the fourth subset in our study area (false colour image created from 15-day Sentinel-2 composited image between 16 and 30 June). (<b>d</b>) False colour image and our classification result for the fourth subset. (<b>B</b>) 15-day Sentinel-2 composited NDVI time series curves for the red point in <a href="#remotesensing-14-01037-f012" class="html-fig">Figure 12</a>d. Note: The times marked in the figure are the first dates of the 15-day Sentinel-2 composite images.</p>
18 pages, 1706 KiB  
Article
AGNet: An Attention-Based Graph Network for Point Cloud Classification and Segmentation
by Weipeng Jing, Wenjun Zhang, Linhui Li, Donglin Di, Guangsheng Chen and Jian Wang
Remote Sens. 2022, 14(4), 1036; https://doi.org/10.3390/rs14041036 - 21 Feb 2022
Cited by 28 | Viewed by 4513
Abstract
Classification and segmentation of point clouds have attracted increasing attention in recent years. On the one hand, it is difficult to extract local features with geometric information. On the other hand, how to select more important features correctly also brings challenges to the research. Therefore, the main challenge in classifying and segmenting the point clouds is how to locate the attentional region. To tackle this challenge, we propose a graph-based neural network with an attention pooling strategy (AGNet). In particular, local feature information can be extracted by constructing a topological structure. Compared to existing methods, AGNet can better extract the spatial information with different distances, and the attentional pooling strategy is capable of selecting the most important features of the topological structure. Therefore, our model can aggregate more information to better represent different point cloud features. We conducted extensive experiments on challenging benchmark datasets including ModelNet40 for object classification, as well as ShapeNet Part and S3DIS for segmentation. Both the quantitative and qualitative experiments demonstrated a consistent advantage for the tasks of point set classification and segmentation. Full article
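The attention pooling strategy — weighting neighbour features by learned importance scores instead of keeping only the maximum — reduces, at inference time, to a softmax-weighted sum. A minimal NumPy sketch; in AGNet the scores would come from a learned layer, whereas here they are given as inputs:

```python
import numpy as np

def attention_pool(neighbor_features, attention_scores):
    """Softmax-weighted sum of neighbour features (attention pooling).

    Unlike max-pooling, every neighbour contributes, weighted by its score.
    """
    feats = np.asarray(neighbor_features, float)        # (k, D)
    s = np.asarray(attention_scores, float)             # (k,)
    w = np.exp(s - s.max())                             # numerically stable softmax
    w /= w.sum()
    return w @ feats                                    # (D,)
```

With equal scores this degrades to average pooling; as one score dominates, the output approaches that neighbour's feature, so the operation interpolates between mean- and max-like behaviour.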
Show Figures

Figure 1
<p>The illustration of three different representative methods: (<b>a</b>) PointNet ++; (<b>b</b>) DGCNN; (<b>c</b>) our proposed AGM. In the last row, points of different colors have different attention scores, and we used attention pooling in the last step, which is different from the two max-poolings above.</p>
Figure 2
<p>AGNet architecture for classification (<b>top</b>) and segmentation (<b>bottom</b>). The point features were processed into high-level geometric feature learning in cascaded AGMs. Next, we used the max-pooling results, as well as fully connected layers to obtain the class scores.</p>
Figure 3
<p>The illustration of the proposed attention graph module (AGM). A set of points with features of <span class="html-italic">D</span> dimension are processed into the output with features of the <math display="inline"><semantics> <msup> <mi>D</mi> <msup> <mrow/> <mo>′</mo> </msup> </msup> </semantics></math> dimension by the attention pooling mechanism, which weights the important neighboring features.</p>
Figure 4
<p>Attention pooling illustrated on 2D points.</p>
Figure 5
<p>Visualization of the part segmentation results for the tables, chairs, airplanes, and lamps.</p>
Figure 6
<p>Visualization of the comparison results by different methods on the ShapeNet Part dataset.</p>
Figure 7
<p>Visualization of different methods on S3DIS. For each set, from left to right: PointNet, the DGCNN, ours, ground truth, and real color.</p>
Figure 8
<p>Results on different numbers of points.</p>
11 pages, 3260 KiB  
Technical Note
Characterization of Tropical Cyclone Intensity Using the HY-2B Scatterometer Wind Data
by Siqi Liu, Wenming Lin, Marcos Portabella and Zhixiong Wang
Remote Sens. 2022, 14(4), 1035; https://doi.org/10.3390/rs14041035 - 21 Feb 2022
Cited by 9 | Viewed by 2830
Abstract
The estimation of tropical cyclone (TC) intensity using Ku-band scatterometer data is challenging due to rain perturbation and signal saturation in the radar backscatter measurements. In this paper, an alternative approach to directly taking the maximum scatterometer-derived wind speed is proposed to assess the TC intensity. First, the TC center location is identified based on the unique characteristics of wind stress divergence/curl near the TC core. Then, the radial extent of 17-m/s winds (i.e., R17) is calculated using the wind field data from the Haiyang-2B (HY-2B) scatterometer (HSCAT). The feasibility of using HSCAT wind radii to determine TC intensity is evaluated against the maximum sustained wind speed (MSW) in the China Meteorological Administration best-track database. The results show that HSCAT R17 generally correlates better with the best-track MSW than the HSCAT maximum wind speed does, indicating the potential of HSCAT data to improve TC nowcasting capabilities. Full article
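The paper's central comparison, regressing both the scatterometer maximum wind and R17 against the best-track MSW, can be sketched with a toy collocation set. All numbers below are hypothetical and only illustrate why a saturating maximum-wind measurement correlates less well with MSW than a wind radius does:

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares fit y = a*x + b; returns (a, b, r) with r the Pearson correlation."""
    a, b = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

# Hypothetical collocations: best-track MSW (m/s) vs. two scatterometer-derived proxies.
msw = np.array([25.0, 30.0, 38.0, 45.0, 55.0, 62.0])
r17_km = np.array([110.0, 150.0, 210.0, 260.0, 330.0, 370.0])   # radius of 17-m/s winds
hscat_max = np.array([24.0, 27.0, 30.0, 31.0, 33.0, 33.5])      # saturates at high winds

a, b, r_r17 = fit_linear(r17_km, msw)
_, _, r_max = fit_linear(hscat_max, msw)
print(f"corr(R17, MSW) = {r_r17:.3f}, corr(max wind, MSW) = {r_max:.3f}")
```

The saturation of the backscatter-derived maximum wind caps the high end of the scale, while R17 keeps growing with storm intensity, which is the effect the study exploits.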
(This article belongs to the Special Issue Remote Sensing of Ocean Surface Winds)
Show Figures

Figure 1
<p>The observation data used in this study. The legend indicates the observing sensors, the TC names, and the TC durations in 2019. The markers indicate the TC center locations identified with the method in <a href="#sec3-remotesensing-14-01035" class="html-sec">Section 3</a>.</p>
Full article ">Figure 2
<p>Illustration of the divergence (<b>a</b>) and the curl (<b>b</b>) of HSCAT wind stress (TC Faxai), acquired on 7 September 2019 at 16:34 UTC.</p>
Full article ">Figure 3
<p>Illustration of exception handling in the estimation of R17: (<b>a</b>) more than one 17-m/s intersection in a wind profile; (<b>b</b>) isogram of 17-m/s wind speed constructed from the radial extent of 17-m/s winds closest to the TC center; (<b>c</b>) corrected isogram of 17-m/s wind speed.</p>
Full article ">Figure 4
<p>The best-track MSW versus HSCAT maximum wind speed (<b>a</b>) and ECMWF maximum forecast wind speed (<b>b</b>). Black line indicates the linear regression result.</p>
Full article ">Figure 5
<p>The best-track MSW versus HSCAT-B R17 (<b>a</b>) and ECMWF R17 (<b>b</b>). Black line indicates the linear regression result.</p>
Full article ">Figure 6
<p>The correspondence between HSCAT R17 and best-track MSW (<b>a</b>) and MSLP (<b>b</b>), respectively. The red text in (<b>a</b>) shows the temporal evolution of the TC events with more than three HSCAT acquisitions.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>The correspondence between ASCAT R17 and best-track MSW (<b>a</b>) and MSLP (<b>b</b>), respectively.</p>
Full article ">
18 pages, 37044 KiB  
Article
Global Mangrove Watch: Updated 2010 Mangrove Forest Extent (v2.5)
by Pete Bunting, Ake Rosenqvist, Lammert Hilarides, Richard M. Lucas and Nathan Thomas
Remote Sens. 2022, 14(4), 1034; https://doi.org/10.3390/rs14041034 - 21 Feb 2022
Cited by 45 | Viewed by 8211
Abstract
This study presents an updated global mangrove forest baseline for 2010: Global Mangrove Watch (GMW) v2.5. The previous GMW maps (v2.0) of the mangrove extent are currently considered the most comprehensive global products available; however, some areas were identified as missing or poorly mapped. This study has therefore updated the 2010 baseline map to improve the quality and completeness of the mapped mangrove extent. The revision added 2660 km2 of mangroves, yielding a revised global mangrove extent for 2010 of some 140,260 km2. The overall map accuracy was estimated to be 95.1% with a 95% confidence interval of 93.8–96.5%, as assessed using 50,750 reference points located across 60 globally distributed sites. Of these 60 validation sites, 26 were located in areas that were remapped to produce the v2.5 map, and the overall accuracy for these increased from 82.6% (95% confidence interval: 80.1–84.9%) for the v2.0 map to 95.0% (95% confidence interval: 93.7–96.4%) for the v2.5 map. Overall, the improved GMW v2.5 map provides a more robust product to support the conservation and sustainable use of mangroves globally. Full article
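As a rough sketch of how such an accuracy figure and its interval arise, a simple normal-approximation confidence interval can be computed from the count of correctly labelled reference points. The study derives its intervals from a stratified sampling design, so this is only an approximation, and the correct-point count below is illustrative:

```python
import math

def overall_accuracy_ci(n_correct, n_total, z=1.96):
    """Overall accuracy with a normal-approximation 95% confidence interval
    (a sketch; a stratified design would weight the strata, as the GMW study does)."""
    p = n_correct / n_total
    half = z * math.sqrt(p * (1.0 - p) / n_total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Illustrative counts of the same order as the study's 50,750 reference points.
acc, lo, hi = overall_accuracy_ci(48258, 50750)
print(f"accuracy = {acc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

With tens of thousands of reference points, the interval is narrow; the wider per-site intervals in the study reflect the smaller per-site sample sizes and the stratification.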
(This article belongs to the Special Issue Remote Sensing in Mangroves II)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Map of regions identified as in need of updating as part of the Global Mangrove Watch (GMW) v2.5 analysis.</p>
Full article ">Figure 2
<p>Flowchart of the methods used to generate the GMW v2.5 map for 2010.</p>
Full article ">Figure 3
<p>An example of the clear sky mask for part of a Sentinel-2 scene, where (<b>a</b>) is the original scene (false colour: near infrared (NIR), shortwave infrared band 1 (SWIR-1), and red bands), (<b>b</b>) is the resulting cloud and cloud shadow mask, which can be seen to have missed some clouds, and (<b>c</b>) is the clear sky mask, which has masked the regions around the cloud and cloud shadows.</p>
Full article ">Figure 4
<p>A map of the 60 sites used for the accuracy assessment. The 26 red points are over areas which were mapped with Sentinel-2 as part of the GMW v2.5 analysis, while the 34 blue points are a further set of sites used to capture the global accuracy of the GMW v2.5 baseline rather than just the areas updated.</p>
Full article ">Figure 5
<p>Comparison of GMW v2.0 (<b>a</b>) and v2.5 (<b>b</b>) products illustrated with an example from West Papua, Indonesia, where remapping with Sentinel-2 removed artefacts from Landsat ETM+ in GMW v2.0.</p>
Full article ">Figure 6
<p>Comparison of GMW v2.0 (<b>a</b>) and v2.5 (<b>b</b>) products, illustrated with an example from Colombia, where regions had been omitted in GMW v2.0 but included in GMW v2.5.</p>
Full article ">Figure 7
<p>Comparison of GMW v2.0 (<b>a</b>) and v2.5 (<b>b</b>) products, illustrated with an example from Florida, USA, where the habitat mask used in the production of GMW v2.0 was too restrictive; this has been improved for the GMW v2.5 product.</p>
Full article ">Figure 8
<p>Comparison of GMW v2.0 (<b>a</b>) and v2.5 (<b>b</b>) products, illustrated with an example from Angola, where there were errors of commission within the GMW v2.0 product which were improved for the GMW v2.5 product.</p>
Full article ">Figure 9
<p>Comparison of GMW v2.0 (<b>a</b>) and v2.5 (<b>b</b>) products, illustrated with an example from Benin, where regions had been omitted in GMW v2.0 but included in GMW v2.5.</p>
Full article ">Figure 10
<p>Example of fragmented mangroves from Sulawesi, Indonesia. (<b>a</b>) 30-m Landsat 8 imagery from 2016 (false colour: NIR, SWIR-1, and red bands) and (<b>b</b>) the same imagery overlain with the GMW v2.5 baseline (green), illustrating that for these fragmented areas of mangroves the v2.5 map captures only the larger regions and not the finer detail.</p>
Full article ">
15 pages, 35551 KiB  
Technical Note
A High-Precision Motion Errors Compensation Method Based on Sub-Image Reconstruction for HRWS SAR Imaging
by Liming Zhou, Xiaoling Zhang, Liming Pu, Tianwen Zhang, Jun Shi and Shunjun Wei
Remote Sens. 2022, 14(4), 1033; https://doi.org/10.3390/rs14041033 - 21 Feb 2022
Cited by 3 | Viewed by 2264
Abstract
High-resolution wide-swath (HRWS) synthetic aperture radar (SAR) plays an important role in remote sensing observation. However, the motion errors caused by the carrier platform’s instability severely degrade the performance of HRWS SAR imaging. Conventional motion errors compensation methods have two drawbacks, i.e., (1) ignoring the spatial variation of the phase errors of pixels along the range direction of the scene, which leads to lower compensation accuracy, and (2) performing compensation after echo reconstruction, which fails to consider the difference in motion errors between channels, resulting in poor imaging performance in the azimuth direction. In this paper, to overcome these two drawbacks, a high-precision motion errors compensation method based on sub-image reconstruction (SI-MEC) for HRWS SAR imaging is proposed. The proposed method consists of three steps. Firstly, the motion errors of the platform are estimated by maximizing the intensity of strong points in multiple regions. Secondly, combined with the multichannel geometry, the equivalent phase centers (EPCs) used for sub-image imaging are corrected, and sub-image imaging is performed before reconstruction. Thirdly, the reconstruction is performed using the sub-images. The proposed method has two advantages, i.e., (1) it compensates for the spatially varying phase errors in the range direction by correcting the EPCs, improving the imaging quality, and (2) it compensates for the motion errors of each channel in sub-image imaging before reconstruction, enhancing the imaging quality in the azimuth direction. Moreover, experimental results demonstrate that the proposed method outperforms PGA and BP-FMSA. Full article
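The first step, estimating motion errors by maximizing the intensity of strong points, can be caricatured in one dimension: search over candidate phase-error parameters and keep the one whose compensated signal compresses to the sharpest peak. This is only a toy sketch with an assumed linear phase ramp, not the paper's actual estimator or signal model:

```python
import numpy as np

def estimate_phase_error(signal, candidates):
    """Toy intensity-maximization search (sketch): pick the candidate phase-ramp
    coefficient whose compensation yields the sharpest compressed peak."""
    best_phi, best_peak = 0.0, -np.inf
    for phi in candidates:
        ramp = np.exp(-1j * phi * np.arange(len(signal)) / len(signal))
        peak = np.max(np.abs(np.fft.fft(signal * ramp)))  # peak of compressed signal
        if peak > best_peak:
            best_phi, best_peak = phi, peak
    return best_phi

n = 256
true_phi = 3.0  # assumed linear phase-ramp error (radians over the aperture)
sig = np.exp(1j * true_phi * np.arange(n) / n)  # ideal point target with the error
est = estimate_phase_error(sig, np.linspace(0, 6, 61))
print(est)
```

When the candidate grid contains the true coefficient, the residual phase vanishes and all energy compresses into a single FFT bin, so the search recovers the error.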
(This article belongs to the Section Remote Sensing Communications)
Show Figures

Figure 1
<p>The geometry of the HRWS SAR system with motion errors.</p>
Full article ">Figure 2
<p>The flow of the HRWS SAR high-precision motion compensation method based on sub-image reconstruction.</p>
Full article ">Figure 3
<p>The location of the selected regions for estimation.</p>
Full article ">Figure 4
<p>Imaging results of point targets at 50 km, 60 km, and 70 km by using different compensation methods. (<b>a</b>) Imaging result without MEC. (<b>b</b>) Imaging result by PGA [<a href="#B27-remotesensing-14-01033" class="html-bibr">27</a>]. (<b>c</b>) Imaging result by BP-FMSA [<a href="#B28-remotesensing-14-01033" class="html-bibr">28</a>]. (<b>d</b>) Imaging result by SI-MEC (Ours).</p>
Full article ">Figure 5
<p>The azimuth profile of the point target at 50 km by the different methods shown in <a href="#remotesensing-14-01033-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>The azimuth sampling pattern of the four-channel SAR system.</p>
Full article ">Figure 7
<p>Complex scene imaging results by using different compensation methods. (<b>a</b>) Imaging result without MEC. (<b>b</b>) Imaging result by PGA [<a href="#B27-remotesensing-14-01033" class="html-bibr">27</a>]. (<b>c</b>) Imaging result by BP-FMSA [<a href="#B28-remotesensing-14-01033" class="html-bibr">28</a>]. (<b>d</b>) Imaging result by SI-MEC (Ours).</p>
Full article ">Figure 8
<p>The imaging result of the building in <a href="#remotesensing-14-01033-f007" class="html-fig">Figure 7</a>. (<b>a</b>) Imaging result without MEC. (<b>b</b>) Imaging result by PGA [<a href="#B27-remotesensing-14-01033" class="html-bibr">27</a>]. (<b>c</b>) Imaging result by BP-FMSA [<a href="#B28-remotesensing-14-01033" class="html-bibr">28</a>]. (<b>d</b>) Imaging result by SI-MEC (Ours).</p>
Full article ">Figure 9
<p>The azimuth profile of the point target in <a href="#remotesensing-14-01033-f008" class="html-fig">Figure 8</a> using different methods. (<b>a</b>) Full azimuth data and (<b>b</b>) partial azimuth data.</p>
Full article ">
20 pages, 4901 KiB  
Article
Decadal Lake Volume Changes (2003–2020) and Driving Forces at a Global Scale
by Yuhao Feng, Heng Zhang, Shengli Tao, Zurui Ao, Chunqiao Song, Jérôme Chave, Thuy Le Toan, Baolin Xue, Jiangling Zhu, Jiamin Pan, Shaopeng Wang, Zhiyao Tang and Jingyun Fang
Remote Sens. 2022, 14(4), 1032; https://doi.org/10.3390/rs14041032 - 21 Feb 2022
Cited by 21 | Viewed by 5021
Abstract
Lakes play a key role in the global water cycle, providing essential water resources and ecosystem services for humans and wildlife. Quantifying long-term changes in lake volume at a global scale is therefore important to the sustainability of humanity and natural ecosystems. Yet, such an estimate is still unavailable because, unlike lake area, lake volume is three-dimensional and challenging to estimate consistently across space and time. Here, taking advantage of recent advances in remote sensing technology, especially NASA’s ICESat-2 satellite laser altimeter launched in 2018, we generated monthly volume series from 2003 to 2020 for 9065 lakes worldwide with an area ≥ 10 km2. We found that the total volume of the 9065 lakes increased by 597 km3 (90% confidence interval 239–2618 km3). Validation against in situ measurements showed a correlation coefficient of 0.98, a root mean square error (RMSE) of 0.57 km3 and a normalized RMSE of 2.6%. In addition, 6753 (74.5%) of the lakes showed an increasing trend in lake volume and were spatially clustered into nine hot spots, most of which are located in sparsely populated high latitudes and the Tibetan Plateau; 2323 (25.5%) of the lakes showed a decreasing trend in lake volume and were clustered into six hot spots, most located in the world’s arid/semi-arid regions where lakes are scarce but population density is high. Our results uncovered, from a three-dimensional volumetric perspective, spatially uneven lake changes that aggravate the conflict between human demands and lake resources. The situation is likely to intensify given projected higher temperatures in glacier-covered regions and drier climates in arid/semi-arid areas. The 15 hot spots could serve as a blueprint for prioritizing future lake research and conservation efforts. Full article
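The trend test used throughout this study, the seasonal Kendall method, removes the seasonal cycle by comparing each month only with the same month in other years. A minimal sketch of the S statistic (without the variance term or significance test, and on synthetic data) looks like this:

```python
import numpy as np

def seasonal_kendall_s(series, n_seasons=12):
    """Seasonal Kendall S statistic: the Mann-Kendall S summed over the
    within-season subseries (sketch; omits variance and p-value)."""
    s_total = 0
    for season in range(n_seasons):
        x = series[season::n_seasons]  # only same-month comparisons
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                s_total += int(np.sign(x[j] - x[i]))
    return s_total

# Hypothetical monthly lake volumes: upward trend plus a seasonal cycle.
t = np.arange(48)
vol = 100 + 0.2 * t + 5 * np.sin(2 * np.pi * t / 12)
print(seasonal_kendall_s(vol))  # positive for an increasing trend
```

Because within-season pairs share the same seasonal phase, the seasonal amplitude cancels out of every comparison and only the underlying trend determines the sign of S.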
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Methodological framework of this study. Abbreviations: Global Surface Water (GSW), Ice Cloud and land Elevation Satellite (ICESat), European Space Agency Climate Change Initiative (ESA CCI); total water storage (TWS), and ERA5 (European Centre for Medium-Range Weather Forecasts Reanalysis v5). For more details on the methods, see <a href="#app1-remotesensing-14-01032" class="html-app">Figures S1 and S2</a>.</p>
Full article ">Figure 2
<p>Changes in lake volume at the global scale in the period of 2003–2020. (<b>a</b>) Trends in monthly lake volume from 2003 to 2020 for 9065 lakes: the trends were calculated using the seasonal Kendall method to remove the seasonal amplitude of lake volume. (<b>b</b>) Hot spots of lake volume change: trends in lake volume of individual lakes were normalized and smoothed spatially via the kernel density method (Equations (12) and (13)). Hot spots dominated by increasing trends in lake volume are shown in blue and those dominated by decreasing trends are shown in red.</p>
Full article ">Figure 3
<p>Time series of normalized lake volume in the period of 2003–2020 for the 15 hot spots (<b>a</b>–<b>o</b>) of lake change. The locations of the hot spots are shown in <a href="#remotesensing-14-01032-f002" class="html-fig">Figure 2</a>b. In each panel, red dots represent the monthly normalized lake volumes of a region (mean across all lakes, with the normalized volume of each lake in this region calculated using Equation (12)). A trend line is shown in blue and was calculated using the locally estimated scatterplot smoothing (LOESS) method. The standard deviation of the monthly normalized volumes of individual lakes in this region is also shown (light blue shadow). The slope value labeled within each panel can be interpreted as the annual percentage increase/decrease in lake volume.</p>
Full article ">Figure 4
<p>Correlations between monthly lake volume and TWS (terrestrial water storage measured by the GRACE satellite).</p>
Full article ">Figure 5
<p>Climate change versus lake volume change. Spatial patterns of the trends in monthly potential evapotranspiration (PET, (<b>a</b>)), temperature (TEM, (<b>c</b>)), and precipitation (PRE, (<b>e</b>)) during the period 2003–2020 are shown in the left panels. Seasonal amplitude was removed before calculating the trends by using the seasonal Kendall method. Correlation coefficients between the three climatic factors and lake volumes are shown in the right panels (<b>b</b>,<b>d</b>,<b>f</b>). The 15 hot spots of lake change are marked as ellipses, as in <a href="#remotesensing-14-01032-f002" class="html-fig">Figure 2</a>b.</p>
Full article ">
27 pages, 515 KiB  
Review
Mapping of Urban Vegetation with High-Resolution Remote Sensing: A Review
by Robbe Neyns and Frank Canters
Remote Sens. 2022, 14(4), 1031; https://doi.org/10.3390/rs14041031 - 21 Feb 2022
Cited by 74 | Viewed by 11107
Abstract
Green space is increasingly recognized as an important component of the urban environment. Adequate management and planning of urban green space is crucial to maximize its benefits for urban inhabitants and for the urban ecosystem in general. Inventorying urban vegetation is a costly and time-consuming process. The development of new remote sensing techniques to map and monitor vegetation has therefore become an important topic of interest to many scholars. Based on a comprehensive survey of the literature, this review article provides an overview of the main approaches proposed to map urban vegetation from high-resolution remotely sensed data. Studies are reviewed from three perspectives: (a) the vegetation typology, (b) the remote sensing data used and (c) the mapping approach applied. With regard to vegetation typology, a distinction is made between studies focusing on the mapping of functional vegetation types and studies performing mapping of lower-level taxonomic ranks, with the latter mainly focusing on urban trees. A wide variety of high-resolution imagery has been used by researchers for both types of mapping. The fusion of various types of remote sensing data, as well as the inclusion of phenological information through the use of multi-temporal imagery, prove to be the most promising avenues to improve mapping accuracy. With regard to mapping approaches, the use of deep learning is becoming more established, mostly for the mapping of tree species. Through this survey, several research gaps could be identified. Interest in the mapping of non-tree species in urban environments is still limited. The same holds for the mapping of understory species. Most studies focus on the mapping of public green spaces, while interest in the mapping of private green space is less common. 
The use of imagery with a high spatial and temporal resolution, enabling the retrieval of phenological information for mapping and monitoring vegetation at the species level, still proves to be limited in urban contexts. Hence, mapping approaches specifically tailored towards time-series analysis and the use of new data sources seem to hold great promise for advancing the field. Finally, unsupervised learning techniques and active learning, so far rarely applied in urban vegetation mapping, are also areas where significant progress can be expected. Full article
Show Figures

Figure 1
<p>Papers on urban green mapping published between 2000 and 2021 and included in the review.</p>
Full article ">Figure 2
<p>Overview of the number of papers per country/region (<b>left</b>) and of the different vegetation typologies that were addressed in these papers (<b>right</b>).</p>
Full article ">Figure 3
<p>Overview of the different algorithms used in the reviewed studies. N refers to the number of papers adopting each of the classification approaches discussed.</p>
Full article ">
20 pages, 5178 KiB  
Article
In Situ Measuring Stem Diameters of Maize Crops with a High-Throughput Phenotyping Robot
by Zhengqiang Fan, Na Sun, Quan Qiu, Tao Li, Qingchun Feng and Chunjiang Zhao
Remote Sens. 2022, 14(4), 1030; https://doi.org/10.3390/rs14041030 - 21 Feb 2022
Cited by 9 | Viewed by 3666
Abstract
Robotic High-Throughput Phenotyping (HTP) technology has been a powerful tool for selecting high-quality crop varieties among large quantities of traits. Due to the advantages of multi-view observation and high accuracy, ground HTP robots have been widely studied in recent years. In this paper, we study an ultra-narrow wheeled robot equipped with RGB-D cameras for inter-row maize HTP. The challenges of the narrow operating space, intense light changes, and messy cross-leaf interference in rows of maize crops are considered. An in situ and inter-row stem diameter measurement method for HTP robots is proposed. To this end, we first introduce the stem diameter measurement pipeline, in which a convolutional neural network is employed to detect stems, and the point cloud is analyzed to estimate the stem diameters. Second, we present a clustering strategy based on DBSCAN for extracting stem point clouds under the condition that the stem is shaded by dense leaves. Third, we present a point cloud filling strategy to fill the stem region with missing depth values due to occlusion by other organs. Finally, we employ convex hull and plane projection of the point cloud to estimate the stem diameters. The results show an R2 of up to 0.72 and an RMSE of 2.95 mm for stem diameter measurement, demonstrating the method's effectiveness. Full article
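The final estimation step, recovering a diameter from a projected stem cross-section, can be sketched with a much simpler stand-in than the paper's SD-PCCH/SD-PPC estimators: slice the point cloud horizontally, project the slice onto the x-y plane, and take the maximum pairwise distance as the diameter. The cylinder below is synthetic and the function name is our own:

```python
import numpy as np

def stem_diameter_xy(points, slice_z, slice_h=0.02):
    """Rough stem-diameter estimate (sketch, not the paper's SD-PCCH/SD-PPC):
    take a thin horizontal slice of the stem point cloud, project it onto the
    x-y plane, and use the maximum pairwise distance as the diameter."""
    pts = np.asarray(points)
    sl = pts[np.abs(pts[:, 2] - slice_z) < slice_h][:, :2]  # thin slice, projected
    d = 0.0
    for i in range(len(sl)):
        d = max(d, float(np.max(np.linalg.norm(sl[i] - sl, axis=1))))
    return d

# Synthetic stem: a vertical cylinder of radius 0.01 m (diameter 20 mm).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
z = np.tile(np.linspace(0, 0.3, 10), 20)
cyl = np.column_stack([0.01 * np.cos(theta), 0.01 * np.sin(theta), z])
print(f"{1000 * stem_diameter_xy(cyl, slice_z=0.15):.1f} mm")  # → 20.0 mm
```

On real data, the convex hull (SD-PCCH) or plane projection (SD-PPC) of a DBSCAN-clustered, hole-filled slice replaces this brute-force pairwise search.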
(This article belongs to the Special Issue Imaging for Plant Phenotyping)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The schematic of HTP robot platform. (<b>a</b>) is the system control schematic. (<b>b</b>) (1) mast; (2) display screen; (3) power and communication interfaces; (4) sensor trays; (5) IPC; (6) GPS receiver; (7) laser scanner; (8) robot body.</p>
Full article ">Figure 2
<p>The experimental scheme.</p>
Full article ">Figure 3
<p>The experimental scenarios. (<b>a</b>) crop rows; (<b>b</b>) aisle.</p>
Full article ">Figure 4
<p>The comprehensive framework for calculating the stem diameters.</p>
Full article ">Figure 5
<p>Point cloud clustering strategy based on improved DBSCAN. (<b>a</b>) 3D stem point clouds; (<b>b</b>) projections of point clouds on the x-z plane; (<b>c</b>) stems split into multiple point clouds are clustered into one cluster.</p>
Full article ">Figure 6
<p>Two approaches for estimating the stem diameters. (<b>a</b>) SD-PCCH; (<b>b</b>) SD-PPC.</p>
Full article ">Figure 7
<p>The point cloud with outliers.</p>
Full article ">Figure 8
<p>2D point cloud contour extraction. (<b>a</b>) 3D point cloud. (<b>b</b>) the projection of the cloud on the X-Y plane. (<b>c</b>) the point cloud contour used to generate concave hull.</p>
Full article ">Figure 9
<p>The point cloud filling strategy.</p>
Full article ">Figure 10
<p>The stem detection by Faster RCNN under natural scenarios. (<b>a</b>) long-distance and strong lighting intensity; (<b>b</b>) close-distance and backlighting; (<b>c</b>) close-distance; (<b>d</b>) backlighting; (<b>e</b>) strong lighting intensity; (<b>f</b>) long-distance.</p>
Full article ">Figure 11
<p>The loss curve and PR curve after model convergence.</p>
Full article ">Figure 12
<p>The stem point cloud extraction based on target bounding box. (<b>a</b>) Stem bounding box; (<b>b</b>) mask processing; (<b>c</b>) ROI of stem pixels; (<b>d</b>) stem point clouds.</p>
Full article ">Figure 13
<p>Convex hulls and plane projections of stem point clouds. (<b>a</b>) Stem point clouds; (<b>b</b>) point clouds clustered based on 2D-DBSCAN; (<b>c</b>) convex hulls; (<b>d</b>) plane projections of point clouds.</p>
Full article ">Figure 14
<p>The point cloud filling results.</p>
Full article ">Figure 15
<p>The comparison results of the stem diameter estimation based on SD-PCCH and the manual measurement values.</p>
Full article ">Figure 16
<p>The comparison results of the stem diameter estimation based on SD-PPC and the manual measurement values.</p>
Full article ">Figure 17
<p>The measurement result distribution of the stem diameters by SD-PCCH, SD-PPC, and manual measurement.</p>
Full article ">
20 pages, 4684 KiB  
Article
Surface Characteristics, Elevation Change, and Velocity of High-Arctic Valley Glacier from Repeated High-Resolution UAV Photogrammetry
by Kristaps Lamsters, Jurijs Ješkins, Ireneusz Sobota, Jānis Karušs and Pēteris Džeriņš
Remote Sens. 2022, 14(4), 1029; https://doi.org/10.3390/rs14041029 - 21 Feb 2022
Cited by 18 | Viewed by 4644
Abstract
Unmanned Aerial Vehicles (UAVs) are being increasingly used in glaciology demonstrating their potential for the generation of high-resolution digital elevation models (DEMs) that can be further used for the evaluation of glacial processes in detail. Such investigations are especially important for the evaluation of surface changes of small valley glaciers, which are not well-represented in lower-resolution satellite-derived products. In this study, we performed two UAV surveys at the end of the ablation season in 2019 and 2021 on Waldemarbreen, a High-Arctic glacier in NW Svalbard. We derived the mean annual glacier surface velocity of 5.3 m. The estimated mean glacier surface elevation change from 2019 to 2021 was −1.46 m a−1 which corresponds to the geodetic mass balance (MB) of −1.33 m w.e. a−1. The glaciological MB for the same period was −1.61 m w.e. a−1. Our survey includes all Waldemarbreen and demonstrates the efficiency of high-resolution DEMs produced from UAV photogrammetry for the reconstruction of changes in glacier surface elevation and velocity. We suggest that glaciological and geodetic MB methods should be used complementary to each other. Full article
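The geodetic mass balance (MB) quoted above comes from differencing the two UAV DEMs and converting the mean surface lowering to water equivalent with a density assumption. A minimal sketch, where the density-conversion factor of 0.9 is our assumption (commonly ~850/1000 to 900/1000 for glacier ice) and the numbers are illustrative:

```python
def geodetic_mb_mwe(dh_mean_m, years, density_ratio=0.9):
    """Geodetic mass balance in m w.e. a^-1 from a mean DEM elevation change
    (sketch; the density ratio is an assumed ice-to-water conversion factor)."""
    return dh_mean_m / years * density_ratio

# A two-year DEM difference of -2.92 m gives -1.46 m a^-1 of surface lowering.
mb = geodetic_mb_mwe(-2.92, 2)
print(f"{mb:.2f} m w.e. a^-1")
```

The study's geodetic MB of −1.33 m w.e. a−1 differs slightly from this toy figure because it uses the actual area-weighted elevation change and density treatment rather than a single scalar factor.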
(This article belongs to the Special Issue Remote Sensing in Snow and Glacier Hydrology)
Show Figures

Figure 1
<p>(<b>a</b>) The location of Waldemarbreen in north-western Spitsbergen (white arrow); (<b>b</b>) A view towards Waldemarbreen taken by UAV in August 2021; (<b>c</b>) the direction of ice flow (black arrow) at the steep glacier slope.</p>
Full article ">Figure 2
<p>UAV flight coverage areas on orthomosaic of Waldemarbreen. (<b>a</b>) In August 2019; (<b>b</b>) In August 2021. Each flight is shown in a different color. The red triangles and black circles indicate the checkpoints and control points, respectively. Note that the number of GCPs in 2021 is less than half that in 2019, owing to the use of an RTK UAV platform.</p>
Full article ">Figure 3
<p>Photogrammetric processing workflow in Agisoft Metashape.</p>
Full article ">Figure 4
<p>(<b>a</b>) Glacier-wide geodetic MB and (<b>b</b>) glaciological MB of Waldemarbreen for 2019–2021.</p>
Full article ">Figure 5
<p>(<b>a</b>) Aspect and (<b>b</b>) slope of Waldemarbreen generated from UAV DEM.</p>
Full article ">Figure 6
<p>(<b>a</b>) Glacier terminus position in 2019; (<b>b</b>) The recession of Waldemarbreen between 2019 and 2021; (<b>c</b>,<b>d</b>) changes in surface drainage pattern in 2019 and 2021.</p>
Full article ">Figure 7
<p>Surface velocity and displacement vectors of Waldemarbreen calculated from hillshade rasters (2019 and 2021).</p>
Full article ">
21 pages, 3005 KiB  
Article
Sentinel-2 Data and Unmanned Aerial System Products to Support Crop and Bare Soil Monitoring: Methodology Based on a Statistical Comparison between Remote Sensing Data with Identical Spectral Bands
by Marco Dubbini, Nicola Palumbo, Michaela De Giglio, Francesco Zucca, Maurizio Barbarella and Antonella Tornato
Remote Sens. 2022, 14(4), 1028; https://doi.org/10.3390/rs14041028 - 20 Feb 2022
Cited by 3 | Viewed by 3192
Abstract
The growing need for sustainable management of crops and bare soils requires increasingly accurate measurements at multiple spatial and temporal scales at the field-system level. In this context, the cooperation of proximal and satellite remote sensing data seems good practice for the present and future. The primary purpose of this work is the development of a sound protocol based on a statistical comparison between Copernicus Sentinel-2 MSI satellite data and a multispectral sensor mounted on an Unmanned Aerial Vehicle (UAV), featuring spectral bands identical to those of Sentinel-2. The experimental dataset, based on simultaneously acquired proximal and Sentinel-2 data, concerns an agricultural field in Pisa (Tuscany), cultivated with corn. To understand how the two systems, comparable but quite different in terms of spatial resolution and atmospheric impacts, can effectively cooperate to create a value-added product, statistical tests were applied to the bands and the derived vegetation and soil indices. Overall, as expected, due to the mentioned impacts, the outcomes show heterogeneous behavior, with differences between the coincident bands as well as the derived indices, modulated in the same manner by the phenological status (e.g., during canopy development) or by the absence of vegetation. In contrast, the two sensors behaved similarly during the maturity phase of the crop. Full article
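The statistical comparison illustrated in Figure 5, a regression MS2_10 = p1 S2 + p0 with a prediction band for a new observation, can be sketched as follows. The NDVI pairs are hypothetical, and the interval uses a normal approximation rather than the exact t-based band of the paper:

```python
import numpy as np

def regress_with_pi(x, y, z=2.576):
    """Fit y = p1*x + p0 and return an approximate 99% prediction interval
    for a new observation (normal-approximation sketch; leverage terms omitted)."""
    p1, p0 = np.polyfit(x, y, 1)
    resid = y - (p1 * x + p0)
    s = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))  # residual standard error
    def interval(x_new):
        yhat = p1 * x_new + p0
        return yhat - z * s, yhat + z * s
    return p1, p0, interval

# Hypothetical paired NDVI samples from the UAV (MS2_10) and Sentinel-2 (S2).
s2 = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
ms2_10 = np.array([0.22, 0.31, 0.43, 0.52, 0.61, 0.73])
p1, p0, pi = regress_with_pi(s2, ms2_10)
low, high = pi(0.45)
print(f"slope={p1:.2f}, 99% PI at S2=0.45: [{low:.3f}, {high:.3f}]")
```

A slope near one with a narrow prediction band at the 0.99 confidence level is the kind of agreement the protocol checks for before treating the two sensors as interchangeable.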
Show Figures

Figure 1
<p>Study area: corn crop field (continuous red line) located in San Giuliano Terme (Pisa, Italy). The circled dots are the Ground Control Points used for MS2 orthoimage processing. Image background: Google Earth Satellite (WGS 84/UTM zone 32N).</p>
Full article ">Figure 2
<p>Example of the linear correlation coefficient in relation to the lag in x (abscissa) and y (ordinate), for NIR data, June epoch: (<b>a</b>) the numerical value shown in 3D; (<b>b</b>) the numerical value shown in a 2D chromatic scale. The maximum is reached for lag (8,8), or shift (0,0).</p>
Full article ">Figure 3
<p>Plots of moving window application results: the yellow pixels satisfy both conditions on vegetation intensity and dispersion at the indicated levels.</p>
Full article ">Figure 4
<p>Test areas A, B, C.</p>
Full article ">Figure 5
<p>The graphs report the NDVI value points (MS2_10, S2), the estimated regression line <math display="inline"><semantics> <mrow> <mi>MS</mi> <mn>2</mn> <mo>_</mo> <mn>10</mn> <mo>=</mo> <mi mathvariant="normal">p</mi> <mn>1</mn> <mrow> <mo> </mo> <mi mathvariant="normal">S</mi> </mrow> <mn>2</mn> <mo>+</mo> <mi mathvariant="normal">p</mi> <mn>0</mn> </mrow> </semantics></math> (black line) and the lines representing the range of variability (red lines) expected for a new observation at the confidence level of 0.99.</p>
17 pages, 5632 KiB  
Article
Robotic Mapping Approach under Illumination-Variant Environments at Planetary Construction Sites
by Sungchul Hong, Pranjay Shyam, Antyanta Bangunharcana and Hyuseoung Shin
Remote Sens. 2022, 14(4), 1027; https://doi.org/10.3390/rs14041027 - 20 Feb 2022
Cited by 7 | Viewed by 3078
Abstract
In planetary construction, the semiautonomous teleoperation of robots is expected to perform complex tasks for site preparation and infrastructure emplacement. A highly detailed 3D map is essential for construction planning and management. However, the planetary surface imposes mapping restrictions due to rugged and homogeneous terrains. Additionally, changes in illumination conditions cause the mapping result (or 3D point-cloud map) to have inconsistent color properties that hamper the understanding of the topographic properties of a worksite. Therefore, this paper proposes a robotic construction mapping approach robust to illumination-variant environments. The proposed approach leverages a deep learning-based low-light image enhancement (LLIE) method to improve the mapping capabilities of the visual simultaneous localization and mapping (SLAM)-based robotic mapping method. In the experiment, the robotic mapping system in the emulated planetary worksite collected terrain images during the daytime from noon to late afternoon. Two sets of point-cloud maps, which were created from original and enhanced terrain images, were examined for comparison purposes. The experiment results showed that the LLIE method in the robotic mapping method significantly enhanced the brightness, preserving the inherent colors of the original terrain images. The visibility and the overall accuracy of the point-cloud map were consequently increased. Full article
(This article belongs to the Special Issue Planetary Geologic Mapping and Remote Sensing)
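A deep LLIE network is beyond the scope of a listing like this, but the core idea of brightening dark terrain imagery while roughly preserving color can be sketched with a simple gamma curve; this stand-in is illustrative only and is not one of the methods compared in the article (RetinexNet, DALE, DLN, DSLR, GLAD, KinD):

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Brighten a dark 8-bit RGB image with a gamma curve (gamma < 1).

    Applied per channel, so hue ratios are roughly preserved.
    """
    scaled = np.clip(img.astype(np.float64) / 255.0, 0.0, 1.0)
    return (np.power(scaled, gamma) * 255.0).astype(np.uint8)

# Synthetic dark terrain patch (uniform low intensity)
dark = np.full((4, 4, 3), 40, dtype=np.uint8)
bright = enhance_low_light(dark)
```

A learned LLIE model replaces the fixed gamma with an image-adaptive mapping, which is what allows the point-cloud colors to stay consistent as illumination drops through the afternoon.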
Show Figures

Figure 1
<p>Planetary base and ISRU facilities for human presence.</p>
Figure 2
<p>Conceptual process of the robotic construction mapping method.</p>
Figure 3
<p>Robotic mapping system.</p>
Figure 4
<p>Emulated planetary worksite (test site).</p>
Figure 5
<p>Terrain images with decreasing illumination conditions (<b>a</b>–<b>d</b>): (<b>a</b>) bright condition with clear sky at noon; (<b>b</b>) overcast condition, partly cloudy in the afternoon; (<b>c</b>) overcast condition with full cloud coverage in the afternoon; (<b>d</b>) darkest condition with low solar altitude in the late afternoon.</p>
Figure 6
<p>Darkest image enhancement results in the test image in <a href="#remotesensing-14-01027-f005" class="html-fig">Figure 5</a>d: (<b>a</b>) RetinexNet; (<b>b</b>) DALE; (<b>c</b>) DLN; (<b>d</b>) DSLR; (<b>e</b>) GLAD; (<b>f</b>) KinD.</p>
Figure 7
<p>Image enhancement results over the test images in <a href="#remotesensing-14-01027-f005" class="html-fig">Figure 5</a>: (<b>a</b>) DLN; (<b>b</b>) GLAD.</p>
Figure 8
<p>Aerial photo of the test site.</p>
Figure 9
<p>Robotic mapping results to terrain images from the robotic mapping system: (<b>a</b>) original point-cloud map; (<b>b</b>) enhanced point-cloud map.</p>
Figure 10
<p>Terrain features selected from original point cloud in <a href="#remotesensing-14-01027-f009" class="html-fig">Figure 9</a>.</p>
Figure 11
<p>Terrain features selected from the enhanced point-cloud maps in <a href="#remotesensing-14-01027-f009" class="html-fig">Figure 9</a>.</p>
Figure 12
<p>Positional error histogram (left) and positional error distribution over the test site (right). Identical color index is used to represent a magnitude of positional errors in the original and enhanced point-cloud maps: (<b>a</b>) positional errors in the original point-cloud map; (<b>b</b>) positional errors in the enhanced point-cloud map.</p>
24 pages, 31979 KiB  
Article
Efficient Identification and Monitoring of Landslides by Time-Series InSAR Combining Single- and Multi-Look Phases
by Zijing Liu, Haijun Qiu, Yaru Zhu, Ya Liu, Dongdong Yang, Shuyue Ma, Juanjuan Zhang, Yuyao Wang, Luyao Wang and Bingzhe Tang
Remote Sens. 2022, 14(4), 1026; https://doi.org/10.3390/rs14041026 - 20 Feb 2022
Cited by 69 | Viewed by 4973
Abstract
Identification and monitoring of unstable slopes across wide regions using Synthetic Aperture Radar Interferometry (InSAR) can further help to prevent and mitigate geological hazards. However, the low spatial density of measurement points (MPs) extracted using the traditional time-series InSAR method in topographically complex mountains and on vegetation-covered slopes makes the final result unreliable. In this study, a method of time-series InSAR analysis using single- and multi-look phases was adopted to solve this problem, exploiting both phase types to increase the number of MPs in natural environments. Archived ascending and descending Sentinel-1 datasets covering Zhouqu County were processed. The results revealed that nine landslides could be quickly identified from the average phase rate maps using the Stacking method. The time-series InSAR analysis with single- and multi-look phases could then be used to effectively monitor the deformation of these landslides and to quantitatively analyze the magnitude and dynamic evolution of the deformation in various parts of the landslides. The reliability of the InSAR results was further verified by field investigations and Unmanned Aerial Vehicle (UAV) surveys. In addition, the precursory movements and causative factors of the recent Yahuokou landslide were analyzed in detail, and the application of the time-series InSAR method in landslide investigations was discussed and summarized. Therefore, this study has practical significance for early warning of landslides and risk mitigation. Full article
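The Stacking step that produces the average phase rate maps (Figure 4) can be illustrated with the usual weighted least-squares estimator, rate = Σ(Δt·φ) / Σ(Δt²); the baselines, phases, and C-band conversion below are synthetic illustrations, not the study's processing chain:

```python
import numpy as np

# Synthetic example: unwrapped interferogram phases with their
# temporal baselines (noise-free, for illustration only).
dt = np.array([12.0, 24.0, 36.0, 48.0])    # temporal baselines (days)
true_rate = 0.05                            # phase rate (rad/day)
phi = true_rate * dt                        # unwrapped phases (rad)

# Weighted least-squares stacking estimate of the linear phase rate
rate = np.sum(dt * phi) / np.sum(dt ** 2)

# Convert phase rate to LOS velocity, assuming Sentinel-1 C-band
# (wavelength ~5.55 cm): v = rate * lambda / (4 * pi)
wavelength = 0.0555                         # m
v_los = rate * wavelength / (4.0 * np.pi)   # m/day
```

Longer baselines are weighted more heavily, which is what makes stacking robust to the random phase noise of individual interferograms.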
Show Figures

Figure 1
<p>Location of Zhouqu County. The background image is the Sentinel-2 image on 3 August 2019. The green and blue rectangles represent the coverage of the ascending and descending Sentinel-1, respectively, and the white labels indicate recent disasters in the study area. In addition, the red circle marks the location of Zhouqu County in Gansu Province in the inset map at the left–bottom.</p>
Figure 2
<p>Flowchart of this study. Radar remote sensing images were used to identify the deformation regions and invert the deformation rates. Optical remote sensing images were used to delimit the topographic boundary. An Unmanned Aerial Vehicle (UAV) was used for photogrammetry, Real-Time Kinematic (RTK) was used for control point surveying, and tape was used for on-site measurements.</p>
Figure 3
<p>Schematic diagram of the SLC, single-look phase, Rmli (multi-look intensity images from SLC images), and multi-look phases.</p>
Figure 4
<p>Average phase rate revealed by Stacking. (<b>a</b>) Ascending track; (<b>b</b>) descending track. The black circles mark the location of the reference point. The degree of color change indicates the degree of deformation. The white boxes contain the names of the detected landslides. The red circles indicate the identified potential landslide areas.</p>
Figure 5
<p>The LOS deformation rate maps in the midstream of the Bailong River. (<b>a</b>) Ascending track; (<b>b</b>) descending track. The black circles mark the location of reference point. Negative values (red color) and positive values (blue color) indicate that the measurement point is moving away from and toward the radar sensor, respectively. The red circles indicate the identified potential landslide areas.</p>
Figure 6
<p>Suoertou landslide. LOS deformation rate of the MPs from the (<b>a</b>) ascending track and (<b>b</b>) descending track, overlaid on Google EarthTM; (<b>c</b>–<b>h</b>) photos acquired using a UAV on 11 July 2021; (<b>i</b>) InSAR time-series displacement of points P1, P2, P3, and P4.</p>
Figure 7
<p>Xieliupo landslide. LOS deformation rate of the MPs from the (<b>a</b>) ascending track and (<b>b</b>) descending track superimposed on Google Earth<sup>TM</sup>; (<b>c</b>,<b>e</b>,<b>g</b>) photos acquired using a UAV taken on 11 July 2021; (<b>d</b>,<b>f</b>) photos of sites d and f taken on 11 July 2021; and (<b>h</b>) time-series displacement of points P5, P6, P7, and P8.</p>
Figure 8
<p>Zhongpai landslide. LOS deformation rate of the MPs from the (<b>a</b>) ascending track and (<b>b</b>) descending track superimposed on Google Earth<sup>TM</sup>; (<b>c</b>,<b>d</b>) photos acquired using a UAV on 13 July 2021; (<b>e</b>) photos of site e taken on 13 July 2021; and (<b>f</b>) time-series displacement of points P9 and P10.</p>
Figure 9
<p>Qinyu landslide. LOS deformation rate of the MPs from the (<b>a</b>) ascending track and (<b>b</b>) descending track superimposed on Google Earth<sup>TM</sup>; (<b>c</b>,<b>d</b>) photos acquired using a UAV on 12 July 2021; (<b>e</b>–<b>h</b>) photos of sites (<b>e</b>–<b>h</b>) taken on 12 July 2021; and (<b>i</b>) time-series displacement of points P11 and P12.</p>
Figure 10
<p>Field investigation of the Yahuokou landslide on 13 July 2021. (<b>a</b>) Digital Surface Model (DSM); (<b>b</b>) rockfalls; (<b>c</b>) rockfall; (<b>d</b>) damaged road; (<b>e</b>) construction organization; (<b>f</b>) building; (<b>g</b>) schematic map of the landslide; (<b>h</b>) introduction to landslide; (<b>i</b>) direction of landslide movement; (<b>j</b>) accumulation area; and (<b>k</b>) landslide panorama.</p>
Figure 11
<p>Yahuokou landslide. (<b>a</b>) Digital Surface Model (DSM); (<b>b</b>) slope; (<b>c</b>) elevation. LOS displacement rates derived from the (<b>d</b>) ascending and (<b>e</b>) descending Sentinel-1 data stack and the background image is the DSM.</p>
Figure 12
<p>InSAR displacement time series and distribution of rainfall. (<b>a</b>) The time-series average LOS displacements of the moving points outlined from the ascending Sentinel-1 data stack and the descending Sentinel-1 data stack by the white curve on the map in <a href="#remotesensing-14-01026-f011" class="html-fig">Figure 11</a>d,e. (<b>b</b>) Non-linear deformation and daily precipitation from 9 October 2014, to 15 July 2019. (<b>c</b>) Daily precipitation from 16 June 2019, to 15 July 2019.</p>
18 pages, 4494 KiB  
Article
Evaluation of the Emissions State of a Satellite Laser Altimeter Based on Laser Footprint Imaging
by Jiaqi Yao, Haoran Zhai, Shuqi Wu, Zhen Wen and Xinming Tang
Remote Sens. 2022, 14(4), 1025; https://doi.org/10.3390/rs14041025 - 20 Feb 2022
Cited by 1 | Viewed by 2291
Abstract
The GaoFen-7 (GF-7) satellite is equipped with China’s first laser altimeter for Earth observation; it has full-waveform recording capability and can obtain high-precision three-dimensional coordinates over a wide range. The laser is inevitably affected by platform tremors, random errors in the laser pointing angle, the laser state, and other factors, which in turn affect the measurement accuracy of the laser footprint. Evaluation of the satellite laser emission state is therefore an important process. This study contributes to laser emission state evaluation based on the laser footprint image in two main aspects: (1) Monitoring changes in the laser pointing angle: laser pointing is closely related to positioning accuracy and is monitored mainly through changes in the laser spot centroid. We propose a threshold constraint algorithm that extracts the centroid of an ellipse-fitted spot. (2) Analysis of the energy distribution state: directly obtaining the parameters used in the traditional evaluation method is a challenge for the satellite. Therefore, an index suitable for evaluating the laser emission state of the GF-7 satellite was constructed according to the data characteristics. Based on these methods, long time-series data were evaluated and analyzed. The experimental results show that the proposed method can effectively evaluate the emission state of the laser altimeter, during which the laser pointing angle changes monthly by 0.434″. During each continuous operation of the laser, the energy state decreased gradually within a small variation range; however, both were generally in a stable state. Full article
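The gray centroid method (GCM) compared in this article reduces, after thresholding, to an intensity-weighted mean of pixel coordinates. A minimal sketch on a synthetic footprint image (not GF-7 data):

```python
import numpy as np

def spot_centroid(img, thresh):
    """Gray-centroid of the pixels at or above an intensity threshold."""
    mask = img >= thresh
    ys, xs = np.nonzero(mask)          # row/column indices of spot pixels
    w = img[mask].astype(float)        # intensities as weights
    cx = np.sum(xs * w) / np.sum(w)
    cy = np.sum(ys * w) / np.sum(w)
    return cx, cy

# Synthetic laser footprint image: two equally bright pixels
img = np.zeros((5, 5))
img[2, 3] = 10.0
img[2, 4] = 10.0
cx, cy = spot_centroid(img, thresh=1.0)
```

The paper's threshold-constrained ellipse fitting adds a shape model on top of this weighted mean, which stabilizes the centroid when the spot is elongated or noisy.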
Show Figures

Figure 1
<p>(<b>a</b>) Optical path of the laser system. (<b>b</b>) Laser footprint image. (<b>c</b>) Laser center profile array. (<b>d</b>) Laser emissions waveform.</p>
Figure 2
<p>Flowchart of the ellipse-fitting method (TEFM) algorithm.</p>
Figure 3
<p>Threshold constraint algorithm for extracting the centroid of the ellipse fitting. (<b>a</b>) LFI. (<b>b</b>) Laser spot contour extraction result based on the threshold method. (<b>c</b>) Morphological processing to remove noise. (<b>d</b>) Centroid coordinates of the laser spot determined by the ellipse-fitting method and GCM.</p>
Figure 4
<p>Calculation flow of the optical transfer function for laser emissions state evaluation. (<b>a</b>) Transmitted waveform of the laser. (<b>b</b>) Fourier transform process. (<b>d</b>) Laser spot and its rendered image. (<b>e</b>) Laser emission state curve. (<b>c</b>,<b>f</b>) Fourier transform results of the emission waveform and the laser emission state curve, respectively.</p>
Figure 5
<p>Accuracy evaluation of the proposed algorithm. Red, blue, and yellow dots correspond to the centroid calibration position, (<b>a</b>) the centroid extraction result of the ellipse-fitting method (TEFM), and (<b>b</b>) the centroid extraction result of the gray centroid method (GCM), respectively.</p>
Figure 6
<p>Centroid coordinate statistics for the laser footprint image (LFI) of laser 1 (<b>a</b>) and 2 (<b>b</b>).</p>
Figure 7
<p>Changes in the centroid of the two beams in the X- and Y-directions on the laser footprint image (LFI). (<b>a</b>,<b>b</b>) Monthly changes in the centroid coordinates of the laser 1 spot in the X and Y directions, respectively. (<b>c</b>,<b>d</b>) Monthly changes in the centroid coordinates of the laser 2 spot in the X and Y directions, respectively.</p>
Figure 8
<p>Analysis of changes in the laser energy at the emission time. (<b>a</b>,<b>b</b>) Center disk brightness of beams 1 and 2, respectively. (<b>c</b>,<b>d</b>) Encircled energy diagrams of beams 1 and 2, respectively. (<b>e</b>,<b>f</b>) OTF-LESE of beams 1 and 2, respectively.</p>
Figure 9
<p>Several typical cases of the optical transfer function (OTF)-laser emission state evaluation (LESE). (<b>a</b>–<b>d</b>) Transmission waveform. (<b>e</b>–<b>h</b>) LCPA.</p>
Figure 10
<p>(<b>a</b>) Distribution of laser spots across Lake Tanganyika. (<b>b</b>) Height profile of beam #1 along the track and (<b>c</b>) height profile of beam #2 along the track.</p>
Figure 11
<p>Correlation analysis between the optical transfer function (OTF)-laser emission state evaluation (LESE) and altimetry error.</p>
15 pages, 3563 KiB  
Article
Assessing the Wall-to-Wall Spatial and Qualitative Dynamics of the Brazilian Pasturelands 2010–2018, Based on the Analysis of the Landsat Data Archive
by Claudinei Oliveira dos Santos, Vinícius Vieira Mesquita, Leandro Leal Parente, Alexandre de Siqueira Pinto and Laerte Guimaraes Ferreira, Jr.
Remote Sens. 2022, 14(4), 1024; https://doi.org/10.3390/rs14041024 - 20 Feb 2022
Cited by 22 | Viewed by 5923
Abstract
Brazilian livestock farming is predominantly extensive, with approximately 90% of production sustained on pasture, which occupies around 20% of the national territory. It is estimated that more than half of Brazilian pastures have some level of degradation. In this study, we mapped and evaluated the spatiotemporal dynamics of pasture quality in Brazil, between 2010 and 2018, considering three classes of degradation: Absent (D0), Intermediate (D1), and Severe (D2). The total area occupied by pastures did not vary over the evaluated period, despite pronounced spatial dynamics. The percentage of non-degraded pastures increased by ~12%, due to the recovery of degraded areas and the emergence of new pasture areas. However, about 44 Mha of the pasture area is currently severely degraded. The dynamics in pasture quality were not homogeneous across property size classes. Among the approximately 2.68 million properties with livestock activity, the proportion of small properties showing quality gains was half that of large properties, and the proportion showing losses was three times greater, indicating an increase in inequality between properties with more and fewer resources (large and small properties, respectively). The areas occupied by pastures in Brazil present a unique opportunity to increase livestock production and make areas available for agriculture, without the need for new deforestation in the coming decades. Full article
(This article belongs to the Special Issue State-of-the-Art Remote Sensing in South America)
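The stratification of normalized NDVI into the three degradation classes (Figure 1) can be sketched as a simple thresholding step; the cut points used here are placeholders for illustration, not the thresholds calibrated in the study:

```python
import numpy as np

def degradation_class(ndvi_norm, t1=0.25, t2=0.5):
    """Map normalized NDVI to degradation classes D0/D1/D2.

    t1 and t2 are hypothetical cut points: below t1 -> Severe (D2),
    below t2 -> Intermediate (D1), otherwise Absent (D0).
    """
    ndvi_norm = np.asarray(ndvi_norm, dtype=float)
    classes = np.full(ndvi_norm.shape, "D0", dtype=object)  # Absent
    classes[ndvi_norm < t2] = "D1"                          # Intermediate
    classes[ndvi_norm < t1] = "D2"                          # Severe
    return classes

labels = degradation_class([0.1, 0.4, 0.9])
```

Running the same thresholds on the 2010 and 2018 NDVInorm maps and differencing the class rasters yields the gain/loss/stability dynamics reported in the abstract.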
Show Figures

Figure 1
<p>Flowchart depicting the stratification of the NDVInorm (normalized NDVI) into three classes of degradation state (i.e., Absent, Intermediate and Severe).</p>
Figure 2
<p>Areas occupied by pastures in Brazil in 2010 and 2018 (<a href="http://atlasdaspastagens.ufg.br/" target="_blank">http://atlasdaspastagens.ufg.br/</a> accessed on 17 December 2021) (31.7 Mha (million hectares) mapped only in 2010; 139.9 Mha mapped in 2010 and 2018; 30.8 Mha mapped only in 2018).</p>
Figure 3
<p>Pasture area in Brazil, according to 3 classes of degradation state (D0—Absent; D1—Intermediate; D2—Severe), for the years 2010 and 2018.</p>
Figure 4
<p>Dynamics of pasture degradation in Brazil between 2010 and 2018. Degradation classes: D0—Absent; D1—Intermediate; D2—Severe. NP—area not mapped as pasture.</p>
Figure 5
<p>Spatial distribution of Brazilian rural properties by size class (Small, Medium and Large), according to the limits available in the CAR—the Rural Environmental Registry.</p>
Figure 6
<p>Spatial distribution of the dynamics in pasture quality in Brazilian rural properties between 2010 and 2018 ((<b>A</b>)—Small properties; (<b>B</b>)—Medium properties; (<b>C</b>)—Large properties).</p>
Figure 7
<p>Distribution of polygons with ABC Plan financing contracts for the recovery of degraded pastures, in the period from 2016 to 2017, classified as loss (Degradation), gain (Recovery) or Stability in the quality of pastures.</p>
24 pages, 5141 KiB  
Article
Prediction of Soil Water Content and Electrical Conductivity Using Random Forest Methods with UAV Multispectral and Ground-Coupled Geophysical Data
by Yunyi Guan, Katherine Grote, Joel Schott and Kelsi Leverett
Remote Sens. 2022, 14(4), 1023; https://doi.org/10.3390/rs14041023 - 20 Feb 2022
Cited by 31 | Viewed by 5988
Abstract
The volumetric water content (VWC) of soil is a critical parameter in agriculture, as VWC strongly influences crop yield, provides nutrients to plants, and maintains the microbes that are needed for the biological health of the soil. Measuring VWC is difficult, as it is spatially and temporally heterogeneous, and most agricultural producers use point measurements that cannot fully capture this parameter. Electrical conductivity (EC) is another soil parameter that is useful in agriculture, since it can be used to indicate soil salinity, soil texture, and plant nutrient availability. Soil EC is also very heterogeneous; measuring EC using conventional soil sampling techniques is very time consuming and often fails to capture the variability in EC at a site. In contrast to the point-based methods used to measure VWC and EC, multispectral data acquired with unmanned aerial vehicles (UAV) can cover large areas with high resolution. In agriculture, multispectral data are often used to calculate vegetation indices (VIs). In this research UAV-acquired VIs and raw multispectral data were used to predict soil VWC and EC. High-resolution geophysical methods were used to acquire more than 41,000 measurements of VWC and 8000 measurements of EC in 18 traverses across a field that contained 56 experimental plots. The plots varied by crop type (corn, soybeans, and alfalfa) and drainage (no drainage, moderate drainage, high drainage). Machine learning was performed using the random forest method to predict VWC and EC using VIs and multispectral data. Prediction accuracy was determined for several scenarios that assumed different levels of knowledge about crop type or drainage. Results showed that multispectral data improved prediction of VWC and EC, and the best predictions occurred when both the crop type and degree of drainage were known, but drainage was a more important input than crop type. 
Predictions were most accurate in drier soil, which may be due to the lower overall variability of VWC and EC under these conditions. An analysis of which multispectral data were most important showed that NDRE, VARI, and blue band data improved predictions the most. The final conclusions of this study are that inexpensive UAV-based multispectral data can be used to improve estimation of heterogeneous soil properties, such as VWC and EC, in active agricultural fields. In this study, the best estimates of these properties were obtained when the agricultural parameters in a field were fairly homogeneous (one crop type and the same type of drainage throughout the field), although improvements were observed even when these conditions were not met. The multispectral data that were most useful for prediction were those that penetrated deeper into the crop canopy or were sensitive to bare soil. Full article
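The random forest workflow described in this abstract can be sketched with scikit-learn; the predictor names (NDRE, VARI, blue band) follow the abstract, but the data and the linear relation generating the target are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the three most useful predictors named in the
# abstract (NDRE, VARI, blue band) and the geophysically measured VWC.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))          # columns: NDRE, VARI, blue
vwc = 0.10 + 0.25 * X[:, 0] + 0.10 * X[:, 1] + rng.normal(0.0, 0.01, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, vwc, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                      # coefficient of determination
```

In the study's scenarios, categorical inputs such as crop type and drainage level would be added as extra feature columns; `model.feature_importances_` then gives the kind of per-predictor ranking the abstract reports.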
Show Figures

Figure 1
<p>The field site (shown as a green dot) is in the northeast corner of Missouri in the United States. The nearest town to the field site is Novelty, Missouri.</p>
Figure 2
<p>Schematic showing corn (blue), soybean (yellow), and alfalfa (green) plots.</p>
Figure 3
<p>Geophysical data were acquired along 18 traverses, shown as yellow lines. T1 and T18 are the first and eighteenth traverses, respectively.</p>
Figure 4
<p>VI pixels with a midpoint that fell within the footprint of a geophysical measurement were averaged for comparison with that measurement.</p>
Figure 5
<p>Multispectral band maps.</p>
Figure 6
<p>Vegetation indices maps.</p>
Figure 7
<p>Volumetric water content contour map for the dry campaign (<b>a</b>) and wet campaign (<b>b</b>). Note that the VWC scale is different for (<b>a</b>,<b>b</b>).</p>
Figure 8
<p>Electrical conductivity contour maps for the dry campaign (<b>a</b>) and wet campaign (<b>b</b>). Note that the EC scale is different for (<b>a</b>,<b>b</b>).</p>
17 pages, 4357 KiB  
Article
The River–Sea Interaction off the Amazon Estuary
by Di Yu, Shidong Liu, Guangxue Li, Yi Zhong, Jun Liang, Jinghao Shi, Xue Liu and Xiangdong Wang
Remote Sens. 2022, 14(4), 1022; https://doi.org/10.3390/rs14041022 - 20 Feb 2022
Cited by 2 | Viewed by 3344
Abstract
The Amazon River has the highest discharge of any river in the world. Nevertheless, research on the interaction between its diluted river water and the ocean is still lacking. This study used remote sensing data (2008–2017) from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, together with data on currents, wind fields, sea surface temperature, and water depth. The river–sea interaction off the Amazon estuary was studied by analyzing the diffusion of river-diluted water and the distribution of surface suspended particulate matter (SPM). The results revealed that the Amazon estuary has a "filter effect," whereby the distribution of surface SPM exhibited a clear spatial pattern: high in the nearshore area and low offshore. Most of the SPM accumulated within the estuary in a fan shape, although some was distributed in the shallow water region of the continental shelf along the coasts on both sides of the estuary. The currents were found to limit the diffusion range of the SPM. The flow direction and velocity of the North Brazil Current and the North Equatorial Countercurrent, which are largely driven by the magnitude of the trade wind stress, are the main forces controlling the long-distance diffusion of diluted water, thus forming unique river–sea interaction patterns off the Amazon estuary. This research provides a supplement and reference for studies of the diffusion of SPM and river-diluted water, and of estuarine river–sea interactions of other large rivers worldwide. Full article
(This article belongs to the Section Ocean Remote Sensing)
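Figure 2 of this article fits sea surface salinity (SSS) against a_CDOM; the general shape of such a fit can be sketched with scipy's `curve_fit`. The log-linear model form and the coefficients below are assumptions for illustration, not the paper's calibrated relationship:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed model form: SSS decreases with CDOM absorption, here as a
# log-linear relation. Coefficients and samples are synthetic.
def sss_model(a_cdom, k, b):
    return k * np.log(a_cdom) + b

a_cdom = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # m^-1
sss = sss_model(a_cdom, -6.0, 15.0)                         # noise-free PSU

(k, b), _ = curve_fit(sss_model, a_cdom, sss, p0=(-1.0, 30.0))
```

Once calibrated against in situ salinity, such a relation lets MODIS-derived a_CDOM maps be inverted into the monthly SSS fields shown in Figure 4.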
Show Figures

Figure 1
<p>Bird’s-eye-view image of the study area in the Amazon estuary (ETOPO1 Global Relief Model, <a href="https://www.ngdc.noaa.gov/mg/global/global.html" target="_blank">https://www.ngdc.noaa.gov/mg/global/global.html</a> (accessed on 1 February 2021)). M and N are two transects perpendicular to and parallel to the shoreline, respectively, which will be discussed in the following sections.</p>
Figure 2
<p>The fitting relationship image between SSS and a<sub>CDOM</sub> (with 95% confidence bounds). The horizontal axis represents a<sub>CDOM</sub> (unit: m<sup>−1</sup>), whereas the ordinate represents the measured salinity value (unit: PSU).</p>
Figure 3
<p>The deviation between the monthly average and annual average remote-sensing reflectance (R<sub>rs</sub>) at the 555 nm band over 10 years off the Amazon estuary (2008–2017). The purple contour lines from the estuary to the sea are the 50 m, 200 m, and 2000 m isobaths. (<b>a</b>–<b>l</b>) correspond to the winter period (June–November) and summer period (December–May) in the southern hemisphere, respectively (unit: sr<sup>−1</sup>). Transects M and N are indicated in (<b>a</b>).</p>
Figure 4
<p>Monthly average SSS over 10 years off the Amazon estuary (2008–2017). The grey contour lines from the estuary to the sea are the 50 m, 200 m, and 2000 m isobaths. (<b>a</b>–<b>l</b>) correspond to the winter period (June–November) and summer period (December–May) in the southern hemisphere, respectively (unit: PSU). Transects M and N are indicated in (<b>a</b>).</p>
Figure 5
<p>Monthly average wind field over 10 years in the study area off the Amazon estuary (2008–2017). (<b>a</b>–<b>l</b>) correspond to the winter period (June–November) and summer period (December–May) in the southern hemisphere, respectively (unit: m/s).</p>
Figure 6
<p>Monthly average flow field over 10 years in the sea area of the Amazon estuary (2008–2017). (<b>a</b>–<b>l</b>) correspond to the winter period (June–November) and summer period (December–May) in the southern hemisphere, respectively (unit: m/s). Transects M and N are indicated in (<b>a</b>).</p>
Figure 7
<p>Ten-year averaged monthly variations of various elements along transect M. (<b>a</b>) Water remote-sensing reflectance (R<sub>rs</sub>, to represent the SPM; unit: lg(sr<sup>−1</sup>)); (<b>b</b>) sea surface salinity (SSS; unit: PSU); (<b>c</b>) sea surface temperature (SST; unit: °C); (<b>d</b>) ocean current velocity (unit: m/s). The ordinate is the month (January–December) and the abscissa is the longitude.</p>
Figure 8
<p>Ten-year averaged monthly variations of various elements along transect N. (<b>a</b>) Water remote-sensing reflectance (R<sub>rs</sub>, to represent the SPM; unit: lg(sr<sup>−1</sup>)); (<b>b</b>) sea surface salinity (SSS; unit: PSU); (<b>c</b>) sea surface temperature (SST; unit: °C); (<b>d</b>) ocean current velocity (unit: m/s). The ordinate is the month (January–December) and the abscissa is the longitude.</p>
Figure 9
<p>The river–sea interaction patterns off the Amazon estuary. (<b>a</b>) Winter semiannual pattern; (<b>b</b>) summer semiannual pattern. The contour map and gray lines are the water depth and isobaths. The purple region and yellow region represent the Rrs and SSS, respectively. The red arrows represent the ocean currents and the thickness represents the flow velocity. The number 1 represents the NBC, which exists throughout the year and mainly restricts the seaward diffusion of SPM. The number 2 represents the NECC, which has obvious seasonal variation characteristics. The number 3 represents the NSEC, which supplies the NECC. The dashed red lines indicate the seasonal differences in development. The white arrows indicate the wind directions, whose size distinguishes the wind speed.</p>