Remote Sens., Volume 11, Issue 20 (October-2 2019) – 127 articles

Cover Story: The predictive capability of climate and weather forecast models for precipitation intensity is limited by gaps in the understanding of cloud-convective processes. Progress is hampered by a lack of observational constraints, owing to the difficulty of obtaining vertically resolved pressure, temperature, and water vapor structure inside and near convective clouds. When sequential radio occultation (RO) observations are collected from a constellation of closely spaced low-Earth-orbiting satellites, the RO tangent points tend to cluster together, while the associated ray paths sample independent air masses. The presence of heavy precipitation can be discerned with the polarimetric RO (PRO) technique. Over time, one or more PRO soundings intersect a region of heavy precipitation, and one or more capture the surrounding environment.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 7953 KiB  
Article
Ultrasonic Proximal Sensing of Pasture Biomass
by Mathew Legg and Stuart Bradley
Remote Sens. 2019, 11(20), 2459; https://doi.org/10.3390/rs11202459 - 22 Oct 2019
Cited by 22 | Viewed by 4357
Abstract
The optimization of pasture food value, known as ‘biomass’, is crucial in the management of the farming of grazing animals and in improving food production for the future. Optical sensing methods, particularly from satellite platforms, provide relatively inexpensive and frequently updated wide-area coverage for monitoring biomass and other forage properties. However, there are also benefits from direct or proximal sensing methods for higher accuracy, more immediate results, and for continuous updates when cloud cover precludes satellite measurements. Direct measurement, by cutting and weighing the pasture, is destructive, and may not give results representative of a larger area of pasture. Proximal sensing methods may also suffer from sampling small areas, and can be generally inaccurate. A new proximal methodology is described here, in which low-frequency ultrasound is used as a sonar to obtain a measure of the vertical variation of the pasture density between the top of the pasture and the ground and to relate this to biomass. The instrument is designed to operate from a farm vehicle moving at up to 20 km h−1, thus allowing a farmer to obtain wide coverage in the normal course of farm operations. This is the only method providing detailed biomass profile information from throughout the entire pasture canopy. An essential feature is the identification of features from the ultrasonic reflectance, which can be related sensibly to biomass, thereby generating a physically-based regression model. The result is significantly improved estimation of pasture biomass, in comparison with other proximal methods. Comparing remotely sensed biomass to the biomass measured via cutting and weighing gives coefficients of determination, R2, in the range of 0.7 to 0.8 for a range of pastures and when operating the farm vehicle at speeds of up to 20 km h−1. Full article
(This article belongs to the Special Issue Advances of Remote Sensing in Pasture Management)
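The biomass models referenced in the figure captions below (e.g., B = c0H + c1R1 + c2R2, with H the sward height and R1, R2 reflectance-derived features) are ordinary least-squares regressions. A minimal sketch of fitting and scoring such a model, with synthetic stand-in data (the data and coefficient values are illustrative only, not from the paper):

```python
import numpy as np

def fit_biomass_model(H, R1, R2, B):
    """Least-squares fit of B = c0*H + c1*R1 + c2*R2 (no intercept)."""
    X = np.column_stack([H, R1, R2])
    coeffs, *_ = np.linalg.lstsq(X, B, rcond=None)
    return X, coeffs

def r_squared(B, B_hat):
    """Coefficient of determination, as quoted in the abstract (0.7-0.8)."""
    return 1.0 - np.sum((B - B_hat) ** 2) / np.sum((B - B.mean()) ** 2)

# Synthetic example: 50 quadrats with made-up height/reflectance features.
rng = np.random.default_rng(0)
H, R1, R2 = rng.uniform(0.05, 0.4, 50), rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
B = 5.0 * H + 2.0 * R1 + 1.0 * R2 + rng.normal(0, 0.1, 50)  # toy biomass
X, c = fit_biomass_model(H, R1, R2, B)
print(c, r_squared(B, X @ c))
```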
Figure 1: The left-hand photo shows the view from above of the calibration setup with a solid disk target (green object) and the ultrasonic pasture meter (blue object) mounted on a rotary table. The right-hand photo is a close-up of the ultrasonic array.
Figure 2: The grass blade segment.
Figure 3: Calibration using disks of radius a (blue circles) and a portion of a blade of grass (green circle). The expected dependence is shown by the solid red line.
Figure 4: The measured beam pattern (solid black line) and ± one standard deviation (green). Also shown are the Airy diffraction pattern at 24 kHz (blue line) and the half-power range limits (red lines).
Figure 5: The farm vehicle with the ultrasonic pasture meter mounted on a frame and pointing downward at the pasture ahead of the vehicle. The ultrasonic pasture meter is at the front of the frame.
Figure 6: Transect 1 with posts at 0, 1, …, 8, 10, 11, …, 20 m. Biomass was measured from quadrats at −0.25 to 0.25 m, 0.75 to 1.25 m, …, 19.75 to 20.25 m.
Figure 7: Transect 3 with three 0.5 m-wide reflectors. Ultrasonic profiles and biomass samples were taken from the first post up to the last post. The 0.5 m × 0.5 m quadrat frames can be seen.
Figure 8: A waterfall display of ultrasonic reflectivity profiles from Transect 3, Pass 6, at 10 km h−1.
Figure 9: CLEAN detection of peaks from one profile of Transect 3, Pass 6, at 10 km h−1.
Figure 10: The probability distribution of the number of peaks in a profile (blue bars) and the Gaussian fit to that distribution (red line) from Transect 3, Pass 6.
Figure 11: The probability distribution of the magnitude of peaks (blue bars) and the exponential distribution fit (red line) from Transect 3, Pass 6.
Figure 12: The geometry for secondary scattering.
Figure 13: Linear regressions of biomass versus sward height for six passes at 10 km h−1 from Transect 3.
Figure 14: The Radj2 obtained from the models B = B0 + μρH (cyan), B = c0R0 + c1R1 (green), B = c0H + c1R1 (red), and B = c0H + c1R1 + c2R2 (blue) for the 14 passes from Transect 3. Passes 1 and 2 were at 5 km h−1, passes 3 to 8 at 10 km h−1, passes 9 to 11 at 15 km h−1, and passes 12 to 14 at 20 km h−1. Also shown are the B = c0H + c1R1 results from Transect 2 (magenta diamonds, dashed line).
Figure 15: Estimation error for biomass B obtained from the models B = B0 + μρH (cyan), B = c0R0 + c1R1 (green), B = c0H + c1R1 (red), and B = c0H + c1R1 + c2R2 (blue) for the 14 passes from Transect 3. Also shown are the results of the models B = B0 + μρH (black squares, dashed line) and B = c0H + c1R1 (magenta diamonds, dashed line) from Transect 2.
Figure 16: Residuals from the model B = c0H + c1R1, for Pass 6 from Transect 3.
Figure 17: The variation of the first coefficient, c0 (red), and the second coefficient, c1 (blue), of the model B = c0H + c1R1 for the 14 passes from Transect 3. Also shown are c0 (magenta diamonds, dashed line) and c1 (black squares, dashed line) from Transect 2.
23 pages, 3308 KiB  
Article
A Supervised Method for Nonlinear Hyperspectral Unmixing
by Bikram Koirala, Mahdi Khodadadzadeh, Cecilia Contreras, Zohreh Zahiri, Richard Gloaguen and Paul Scheunders
Remote Sens. 2019, 11(20), 2458; https://doi.org/10.3390/rs11202458 - 22 Oct 2019
Cited by 17 | Viewed by 5050
Abstract
Due to the complex interaction of light with the Earth's surface, reflectance spectra can be described as highly nonlinear mixtures of the reflectances of the material constituents occurring in a given resolution cell of hyperspectral data. Our aim is to estimate the fractional abundance maps of the materials from the nonlinear hyperspectral data. The main disadvantage of using nonlinear mixing models is that the model parameters are not properly interpretable in terms of fractional abundances. Moreover, not all spectra of a hyperspectral dataset necessarily follow the same particular mixing model. In this work, we present a supervised method for nonlinear spectral unmixing. The method learns a mapping from a true hyperspectral dataset to corresponding linear spectra, composed of the same fractional abundances. A simple linear unmixing then reveals the fractional abundances. To learn this mapping, ground truth information is required, in the form of actual spectra and corresponding fractional abundances, along with spectra of the pure materials, obtained from a spectral library or available in the dataset. Three methods are presented for learning the nonlinear mapping, based on Gaussian processes, kernel ridge regression, and feedforward neural networks. Experimental results on an artificial dataset, a dataset obtained by ray tracing, and a drill core hyperspectral dataset show that this novel methodology is very promising. Full article
(This article belongs to the Special Issue Advances in Unmixing of Spectral Imagery)
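A minimal sketch of the two-stage strategy described above, not the authors' implementation: kernel ridge regression (one of the three learners mentioned) maps observed nonlinear spectra to the linear mixtures implied by the training abundances, after which a constrained linear unmixing (here, nonnegative least squares with sum-to-one renormalization) recovers the abundances.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.kernel_ridge import KernelRidge

def train_mapping(X_nonlinear, A_train, endmembers):
    """X_nonlinear: (n, bands) observed spectra; A_train: (n, m) abundances;
    endmembers: (m, bands). Targets are the corresponding linear mixtures."""
    Y_linear = A_train @ endmembers
    return KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X_nonlinear, Y_linear)

def unmix(model, X, endmembers):
    """Map spectra to the linear domain, then solve a nonnegative
    least-squares problem per pixel and renormalize to sum to one."""
    Y = model.predict(X)
    A = np.array([nnls(endmembers.T, y)[0] for y in Y])
    return A / A.sum(axis=1, keepdims=True)
```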
Figure 1: Flowchart of the proposed method. HSI refers to hyperspectral image.
Figure 2: The feedforward neural network that is applied for the mapping.
Figure 3: Endmember spectra of soil, weed, and citrus tree of the ray tracing experiment.
Figure 4: (a) RGB image of the drill core sample. The red rectangle represents the area where the ground truth fractional abundance maps were obtained by the scanning electron microscope (SEM)–mineral liberation analysis (MLA) analysis; (b) MLA image.
Figure 5: Spectra of ten mineral endmembers from the USGS spectral library.
Figure 6: Root mean squared error (RMSE) (20 runs) for the methods using the proposed strategy (Gaussian processes (GP)_LM, kernel ridge regression (KRR)_LM, and neural networks (NN)_LM) and the neural networks method with the softmax activation function in the last layer of the network (SM) as a function of signal-to-noise ratio (SNR).
Figure 7: RMSE (20 runs) for the methods using the proposed strategy (GP_LM, KRR_LM, and NN_LM) and SM as a function of the number of endmembers.
Figure 8: RMSE (100 runs) obtained by the four studied supervised methods as a function of the applied number of training samples in the ray tracing vegetation dataset.
Figure 9: Ground truth (GT) and estimated abundance maps and absolute difference with the GT for the four proposed supervised methods on the ray tracing dataset. All images are normalized by the largest abundance value in the GT map.
Figure 10: RMSE (100 runs) obtained by the four studied supervised methods as a function of the applied number of training samples in the drill core dataset.
Figure 11: Estimated abundance maps and absolute differences with the ground truth for the four supervised methods on the drill core dataset. All images are normalized by the largest abundance value in the GT map.
Figure 12: Scatterplot of the whole drill core sample (blue dots) and the MLA region (red circles); without (a) and with (b) normalization.
Figure 13: Estimated abundance maps of the entire drill core sample (applying a map learned by using 373 training pixels). All images are normalized by the largest abundance value in the GT map.
Figure 14: Estimated abundance maps (without rescaling) of the entire drill core sample, for groups of minerals representing the matrix (left) and veins (right) (applying a map learned by using 373 training pixels).
16 pages, 3749 KiB  
Article
A Radar Radial Velocity Dealiasing Algorithm for Radar Data Assimilation and Its Evaluation with Observations from Multiple Radar Networks
by Guangxin He, Juanzhen Sun, Zhuming Ying and Lejian Zhang
Remote Sens. 2019, 11(20), 2457; https://doi.org/10.3390/rs11202457 - 22 Oct 2019
Cited by 5 | Viewed by 3366
Abstract
Automated and accurate radar dealiasing algorithms are very important for the assimilation of radar data into operational numerical weather forecasting models. In this paper, a radar radial velocity dealiasing algorithm aimed at radar data assimilation is introduced and assessed using several S-band and C-band radar observations under the severe weather conditions of hurricanes, typhoons, and deep continental convection. This dealiasing algorithm, named automated dealiasing for data assimilation (ADDA), is a further development of the dealiasing algorithm named the China radar network (CINRAD) improved dealiasing algorithm (CIDA), originally developed for China's CINRAD (China Next Generation Weather Radar) radar network. The improved scheme contains five modules employed to remove noisy data, select the suitable first radial, preserve the convective regions, execute multipass dealiasing in both the azimuthal and radial directions, and conduct the final local dealiasing with an error check. This new dealiasing algorithm was applied to two hurricane cases, two typhoon cases, and three intense-convection cases observed by the CINRAD of China, Taiwan's radar network, and NEXRAD (Next Generation Weather Radar) of the U.S., with a continuous period of more than 12 h for each case. The dealiasing results demonstrated that ADDA performed better than CIDA for all selected cases. The algorithm not only produced a high success rate for the S-band radars but also a reasonable performance for the C-band radars. Full article
(This article belongs to the Special Issue Tropical Cyclones Remote Sensing and Data Assimilation)
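ADDA itself is the five-module scheme summarized above; the sketch below illustrates only the elementary unfolding step common to dealiasing algorithms (a simplification, not the paper's code): each gate is shifted by the multiple of twice the Nyquist velocity that brings it closest to a reference value.

```python
import numpy as np

def unfold(v_obs, v_ref, v_nyq):
    """Return v_obs shifted by 2*n*v_nyq so it lies closest to v_ref."""
    n = np.round((v_ref - v_obs) / (2.0 * v_nyq))
    return v_obs + 2.0 * v_nyq * n

def dealias_radial(ray, v_nyq):
    """Unfold one radial gate by gate, using the previous gate as reference."""
    out = np.asarray(ray, dtype=float).copy()
    for i in range(1, out.size):
        out[i] = unfold(out[i], out[i - 1], v_nyq)
    return out

# Example with the 27.12 m/s Nyquist velocity quoted for the Guangxi radar:
print(dealias_radial([20.0, 25.0, -26.0, -24.0], 27.12))  # [20. 25. 28.24 30.24]
```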
Figure 1: (a) Raw radar radial velocities at the elevation angle of 6.0° of the Guangxi radar station at 2028 (UTC), 30 March 2014. (b) The flags resulting from the procedure of preserving the convection regions at the same elevation angle of the same radar. Flag 1 denotes the boundary of the aliased area. Flag 3 shows the real shear regions.
Figure 2: Radial velocity fields observed by the Haikou radar station in the China radar network (CINRAD) system at 06:35 (UTC), 18 July 2014. (a) Original radial velocity at an elevation angle of 2.4°, (b) dealiasing result at an elevation angle of 2.4°, (c) original radial velocity at an elevation angle of 4.3°, (d) dealiasing result at an elevation angle of 4.3°, (e) original radial velocity at an elevation angle of 9.9°, and (f) dealiasing result at an elevation angle of 9.9°. The spatial scales of the pairs of images at different elevations are different.
Figure 3: Radial velocity fields observed by the Hualian radar station in the Taiwan radar network at 23:53 (UTC), 18 Sep 2010. (a) Original radial velocity at an elevation angle of 3.3°, (b) dealiasing result at an elevation angle of 3.3°, (c) original radial velocity at an elevation angle of 14.6°, and (d) dealiasing result at an elevation angle of 14.6°. The spatial scales of the pairs of images at different elevations are different.
Figure 4: Radar radial velocity at an elevation angle of 4.3° from the CINRAD S-band radar system in Guangxi province of China at 20:28 (UTC), 30 Mar 2014. (a) Raw radial velocity, (b) flag 3 showing the real shear regions and the first reference radial shown by the yellow line, (c) dealiased result after the first two steps in module 4, (d) dealiased result after the last two steps in module 4, and (e) dealiased result after module 5. The Nyquist velocity for the Guangxi radar is 27.12 m s−1. The spatial scales of the pairs of images are different.
Figure 5: Radar radial velocity from C-band radar stations in the CINRAD network: (a) raw velocity at the 3.3° elevation angle at 07:05 (UTC), 23 July 2009 from the Zhangbei radar; (b) dealiasing result of (a); (c) raw velocity at the 4.3° elevation angle at 08:06 (UTC), 27 June 2008 from the Datong radar; and (d) dealiasing result of (c). The spatial scales of the pairs of images at different elevations are different.
22 pages, 4670 KiB  
Article
Improving Field-Scale Wheat LAI Retrieval Based on UAV Remote-Sensing Observations and Optimized VI-LUTs
by Wanxue Zhu, Zhigang Sun, Yaohuan Huang, Jianbin Lai, Jing Li, Junqiang Zhang, Bin Yang, Binbin Li, Shiji Li, Kangying Zhu, Yang Li and Xiaohan Liao
Remote Sens. 2019, 11(20), 2456; https://doi.org/10.3390/rs11202456 - 22 Oct 2019
Cited by 35 | Viewed by 5114
Abstract
Leaf area index (LAI) is a key biophysical parameter for monitoring crop growth status, predicting crop yield, and quantifying crop variability in agronomic applications. Mapping the LAI at the field scale using multispectral cameras onboard unmanned aerial vehicles (UAVs) is a promising precision-agriculture application with specific requirements: the LAI retrieval method should be (1) robust, so that crop LAI can be estimated with consistent accuracy, and (2) easy to use, so that it can be applied to the adjustment of field management practices. In this study, three UAV remote-sensing missions (UAVs with Micasense RedEdge-M and Cubert S185 cameras) were carried out over six experimental plots from 2018 to 2019 to investigate the performance of reflectance-based lookup tables (LUTs) and vegetation index (VI)-based LUTs generated from the PROSAIL model for wheat LAI retrieval. The effects of the central wavelengths and bandwidths used for the VI calculations on the LAI retrieval were further examined. We found that the VI-LUT strategy was more robust and accurate than the reflectance-LUT strategy. The differences in LAI retrieval accuracy among the four VI-LUTs were small, although the improved modified chlorophyll absorption ratio index lookup table (MCARI2-LUT) and normalized difference vegetation index lookup table (NDVI-LUT) performed slightly better. We also found that both the central wavelengths and the bandwidths of the VIs affected the LAI retrieval. The VI-LUTs with optimized central wavelengths (red = 612 nm, near-infrared (NIR) = 756 nm) and narrow bandwidths (~4 nm) improved the wheat LAI retrieval accuracy (R2 ≥ 0.75). The results of this study provide an alternative method for retrieving crop LAI, which is robust and easy to use for precision-agriculture applications, and may be helpful for designing UAV multispectral cameras for agricultural monitoring. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
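A minimal sketch of the VI-LUT inversion logic (PROSAIL itself is mocked here with a toy NDVI saturation curve; the grid, cost, and function names are illustrative assumptions): simulate (LAI, VI) pairs, then take the LAI of the LUT entries whose VI best matches the observation under an RMSE-style cost.

```python
import numpy as np

def retrieve_lai(ndvi_obs, lut_lai, lut_ndvi, k=10):
    """Average the LAI of the k LUT entries with the smallest VI misfit."""
    cost = (lut_ndvi - ndvi_obs) ** 2
    return lut_lai[np.argsort(cost)[:k]].mean()

# Build a toy LUT; a real one would come from PROSAIL forward runs.
lut_lai = np.linspace(0.0, 7.0, 500)
lut_ndvi = 0.9 * (1.0 - np.exp(-0.6 * lut_lai))   # mock saturating NDVI-LAI curve
print(retrieve_lai(0.62, lut_lai, lut_ndvi))       # ~1.9
```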
Figure 1: Location of Yucheng Comprehensive Experiment Station (YCES) and overview of the long-term experimental plots. (The yellow rectangles are the boundaries of the plots. The red rectangles are the flight coverage of the UAV hyperspectral mission; the two UAV multispectral missions covered all six fields. The red spots are ground sampling plots for Fields A, C, and F in 2018.)
Figure 2: Wheat in the experimental plots and wheat leaf destructive sampling for LAI measurements on 15 May 2018: (a) N0-80%fc in Field D, (b) N140-80%fc in Field D, (c) N280-80%fc in Field D, and (d) destructive sampling.
Figure 3: UAV platforms used in this study: (a) the DJI M600 Pro six-rotor UAV; (b) the DJI M100 four-rotor UAV.
Figure 4: Flowchart of crop LAI retrieval using multispectral and hyperspectral UAV datasets. ARVI = atmospherically resistant vegetation index; MCARI2 = modified improved chlorophyll absorption ratio index; NDVI = modified normalized difference vegetation index; NRI = nitrogen ratio index; NIR = near-infrared band; R = red band; fRMSE = cost function.
Figure 5: Results of global sensitivity analyses for the (a) five-band reflectance, (b) R-VIs, (c) G-VIs, and (d) E-VIs to PROSAIL inputs using the EFAST method.
Figure 6: Comparisons of two-year measured LAI and simulated wheat LAI values based on four VI-LUTs: (a) NDVI-LUT in 2018, (b) ARVI-LUT in 2018, (c) MCARI2-LUT in 2018, and (d) NRI-LUT in 2018; (e) NDVI-LUT in 2019, (f) ARVI-LUT in 2019, (g) MCARI2-LUT in 2019, and (h) NRI-LUT in 2019. R2 represents the coefficient of determination for a linear regression model between the measured LAI and simulated LAI.
Figure 7: Autocorrelation analyses between any ρ1 and ρ2 bands (600–950 nm): (a) the r values; (b) the p values (> 0.05).
Figure 8: Pearson correlation coefficients (r values) between the simulated LAI and m-VIs derived from the refined ρ1 and ρ2: (a) m-NDVI, (b) m-NRI, (c) m-MCARI2, and (d) m-ARVI.
Figure 9: (a) R2, (b) RMSE, and (c) MRE for linear regression between measured LAI and simulated LAI of four datasets based on VI-LUTs. (Datasets with optimized central wavelengths are in red: Datasets 2 and 4; datasets without optimized central wavelengths are in black: Datasets 1 and 3. Datasets with broad bandwidths are denoted by dots: Datasets 1 and 2; datasets with narrow bandwidths are denoted by crosses: Datasets 3 and 4.)
Figure A1: Comparisons of two-year measured LAI and estimated LAI values for wheat and maize based on four VI-LUTs: (a) NDVI-LUT, (b) MCARI2-LUT, and (c) NRI-LUT in 2016 (multispectral data captured with the multiSPEC-4C camera); (d) NDVI-LUT, (e) MCARI2-LUT, (f) NRI-LUT, and (g) ARVI-LUT in 2018 (multispectral data captured with the Micasense RedEdge-M camera). R1² represents the coefficient of determination for a linear regression model between the measured and estimated LAI for wheat; R2² represents the same for maize.
16 pages, 10961 KiB  
Article
Wetland Classification Based on a New Efficient Generative Adversarial Network and Jilin-1 Satellite Image
by Zhi He, Dan He, Xiangqin Mei and Saihan Hu
Remote Sens. 2019, 11(20), 2455; https://doi.org/10.3390/rs11202455 - 22 Oct 2019
Cited by 14 | Viewed by 3657
Abstract
Recent studies have shown that deep learning methods provide useful tools for wetland classification. However, it is difficult to perform species-level classification with limited labeled samples. In this paper, we propose a semi-supervised method for wetland species classification that uses a new efficient generative adversarial network (GAN) and Jilin-1 satellite imagery. The main contributions of this paper are twofold. First, the proposed method, named ShuffleGAN, requires only a small number of labeled samples. ShuffleGAN is composed of two neural networks (i.e., a generator and a discriminator) that play an adversarial game in the training phase; ShuffleNet units are added to both the generator and the discriminator to obtain a speed-accuracy tradeoff. Second, ShuffleGAN can perform species-level wetland classification. In addition to distinguishing wetland areas from non-wetlands, different tree species located in the wetland are also identified, thus providing a more detailed distribution of the wetland land covers. Experiments are conducted on the Haizhu Lake wetland data acquired by the Jilin-1 satellite. Compared with existing GANs, the improvement in overall accuracy (OA) of the proposed ShuffleGAN is more than 2%. This work not only deepens the application of deep learning in wetland classification but also promotes the study of fine classification of wetland land covers. Full article
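The ShuffleNet units mentioned above are built around a channel-shuffle operation that mixes information across the groups of grouped convolutions. A minimal NumPy sketch of that operation (illustrative, not the paper's network code):

```python
import numpy as np

def channel_shuffle(x, groups):
    """x: feature maps of shape (batch, channels, height, width)."""
    b, c, h, w = x.shape
    assert c % groups == 0, "channels must divide evenly into groups"
    # Split channels into groups, swap the group and per-group axes,
    # then flatten back so channels from different groups interleave.
    return (x.reshape(b, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(b, c, h, w))

x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=3).ravel())   # -> [0 2 4 1 3 5]
```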
Figure 1: General architecture of the proposed method.
Figure 2: Schematic illustration of constructing the objects.
Figure 3: Architecture of the proposed ShuffleGAN.
Figure 4: ShuffleNet unit-1 and ShuffleNet unit-2 used in the ShuffleGAN.
Figure 5: Geographic location of the study area.
Figure 6: Three-band false color composite of the wetland data with sampling sites.
Figure 7: Ground reference photos of different classes in the study area: (a) Ficus, (b) Alstonia scholaris, (c) Delonix, (d) Bauhinia, (e) Camphor tree, (f) Metasequoia, (g) Palm tree, (h) Lake/River, (i) Road, and (j) Building.
Figure 8: Superpixel segmentation result.
Figure 9: Classification maps of the Haizhu Lake wetland data obtained by (a) Support Vector Machine (SVM), (b) SVM-s, (c) LapSVM, (d) LapSVM-s, (e) Convolutional Neural Network (CNN), (f) GAN-or, (g) GAN-dw, and (h) ShuffleGAN.
Figure 10: Normalized confusion matrices of (a) SVM, (b) SVM-s, (c) LapSVM, (d) LapSVM-s, (e) CNN, (f) GAN-or, (g) GAN-dw, and (h) ShuffleGAN.
Figure 11: Normalized features of class 5 (Camphor tree) and class 10 (Building) in the real data and generated fake data obtained by various methods: (a,b) GAN-or, (c,d) GAN-dw, and (e,f) ShuffleGAN.
Figure 12: Scattering map of the two-dimensional features obtained by the discriminator of (a) GAN-or, (b) GAN-dw, and (c) ShuffleGAN.
Figure 13: Impact of the (a) initial learning rate, (b) number of training epochs, and (c) number of labeled training samples on the overall accuracy (OA).
18 pages, 2141 KiB  
Article
A Superpixel-Based Relational Auto-Encoder for Feature Extraction of Hyperspectral Images
by Miaomiao Liang, Licheng Jiao and Zhe Meng
Remote Sens. 2019, 11(20), 2454; https://doi.org/10.3390/rs11202454 - 22 Oct 2019
Cited by 16 | Viewed by 3402
Abstract
Filter banks transferred from a pre-trained deep convolutional network perform well at heightening the inter-class separability in hyperspectral image feature extraction, but they simultaneously weaken the intra-class consistency. In this paper, we propose a new superpixel-based relational auto-encoder for cohesive spectral–spatial feature learning. Firstly, multiscale local spatial information and global semantic features of hyperspectral images are extracted by filter banks transferred from the pre-trained VGG-16. Meanwhile, we utilize superpixel segmentation to construct the low-dimensional manifold embedded in the spectral domain. Then, a representational consistency constraint among the pixels of each superpixel is added to the objective function of the sparse auto-encoder, which iteratively assists the supervised learning of hidden representations of the deep spatial features with greater cohesiveness. The superpixel-based local consistency constraint in this work not only reduces the computational complexity but also builds the neighborhood relationships adaptively. The final feature extraction is accomplished by a collaborative encoder of spectral–spatial features and a weighted fusion of multiscale features. A large number of experimental results demonstrate that our proposed method achieves the expected results in discriminant feature extraction and has certain advantages over some existing methods, especially under extremely limited sample conditions. Full article
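A minimal sketch of what a superpixel-wise representational consistency term could look like (the exact form of the paper's constraint is not reproduced here; this quadratic pull toward the superpixel mean is an assumption): such a term would be added, with a weight, to the usual sparse auto-encoder reconstruction objective.

```python
import numpy as np

def superpixel_consistency(H, superpixel_ids):
    """H: (n, d) hidden codes; superpixel_ids: (n,) segment label per pixel.
    Penalizes deviation of each code from its superpixel's mean code."""
    penalty = 0.0
    for s in np.unique(superpixel_ids):
        Hs = H[superpixel_ids == s]
        penalty += np.sum((Hs - Hs.mean(axis=0)) ** 2)
    return penalty / H.shape[0]
```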
Figure 1: Visualization of features before and after collaborative fusion by t-SNE [33]: (a) raw spectral feature (SeF), (b) deep spatial feature (DSaF), and (c) deep spectral–spatial fusion feature (DS²F²). The University of Pavia dataset is taken as the example.
Figure 2: Comparison of AE, GAE, and our proposed S-RAE.
Figure 3: Illustration of the proposed MS-RCAE (all sizes are subject to the digital markers). A case of the Salinas dataset.
Figure 4: Pseudo-color images and ground references for the following datasets: (a) Indian Pines; (b) University of Pavia; (c) Salinas; (d) KSC.
Figure 5: Effect of the initial number of superpixel clusters and the weight coefficient γ on classification accuracy.
Figure 6: Effect of the weighting parameters α1 and α2 on classification accuracy.
Figure 7: Comparison of classification results from deep spatial features preprocessed by SAE and S-RAE on the (a) University of Pavia, (b) Salinas, and (c) KSC datasets.
Figure 8: Comparison of classification results from spectral–spatial features learned by CAE and S-RCAE, and multiscale features by MS-RCAE, on the (a) University of Pavia, (b) Salinas, and (c) KSC datasets.
Figure 9: Classification maps of the Indian Pines dataset obtained by different methods (AA).
Figure 10: Classification maps of the University of Pavia dataset obtained by different methods (AA).
Figure 11: Classification maps of the Salinas dataset obtained by different methods (AA).
Figure 12: Classification maps of the KSC dataset obtained by different methods (AA).
Figure 13: Effect of the number of training samples on classification accuracy for the University of Pavia dataset.
Figure 14: Effect of the number of training samples on classification accuracy for the Salinas dataset.
Figure 15: Effect of the number of training samples on classification accuracy for the KSC dataset.
25 pages, 11758 KiB  
Article
Integrating Stereo Images and Laser Altimeter Data of the ZY3-02 Satellite for Improved Earth Topographic Modeling
by Guo Zhang, Kai Xu, Peng Jia, Xiaoyun Hao and Deren Li
Remote Sens. 2019, 11(20), 2453; https://doi.org/10.3390/rs11202453 - 22 Oct 2019
Cited by 25 | Viewed by 3451
Abstract
The positioning accuracy is critical for satellite-based topographic modeling in cases of exterior orientation parameters with high uncertainty and scarce ground control data. The integration of multi-sensor data can help to ensure precise topographic modeling in such situations. Presently, research on the combined processing of optical camera images and laser altimeter data has focused on planetary observations, especially of the Moon and Mars. This study establishes a combined adjustment model with one constraint in image space for the integration of ZY3-02 stereo images and laser altimeter data for improved Earth topographic modeling. The geometric models for the stereo images and laser altimeter data were built first, and then the laser ranging information was introduced to construct a combined adjustment model on the basis of the block adjustment model. A constraint that minimizes the back-projection discrepancies in image space was incorporated into the combined adjustment. Datasets in several areas were collected as experimental data for the validation work. Experimental results demonstrated that the inconsistencies between stereo images and laser altimeter data for the ZY3-02 satellite can be reduced, and the elevation accuracy of stereo images can be significantly improved, after applying the proposed combined adjustment. Experiments further proved that the improved height accuracy is insensitive to the number and relative position of laser altimeter points (LAPs) in the stereo images. Moreover, additional plane control points (PCPs) were incorporated to achieve better planimetric accuracy. Experimental results in the Dengfeng area showed that the adjustment results derived by using LAPs and four additional PCPs were only slightly worse than those of the block adjustment with four ground control points (GCPs). Generally, the proposed approach can effectively improve the quality of Earth topographic models. Full article
(This article belongs to the Section Remote Sensing Image Processing)
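A toy illustration of a combined adjustment with an image-space constraint (a mock linear camera replaces the rigorous ZY3-02 sensor model; all names and shapes are illustrative assumptions): per-camera image bias parameters are estimated by minimizing the back-projection discrepancies of the laser altimeter points.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(biases_flat, cams, laser_xyz):
    """cams: list of (P, obs) with P a 2x4 mock projection matrix and obs the
    (n, 2) measured image coordinates of the n laser points in that image."""
    biases = biases_flat.reshape(len(cams), 2)
    homo = np.hstack([laser_xyz, np.ones((len(laser_xyz), 1))])
    res = [(homo @ P.T + b) - obs for (P, obs), b in zip(cams, biases)]
    return np.concatenate(res).ravel()   # back-projection discrepancies

def combined_adjustment(cams, laser_xyz):
    """Solve for the image-space bias of each camera by least squares."""
    x0 = np.zeros(2 * len(cams))
    fit = least_squares(residuals, x0, args=(cams, laser_xyz))
    return fit.x.reshape(-1, 2)
```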
Figure 1: Schematic diagram of the installation of the laser altimeter and the triple linear array cameras (TLCs) onboard the ZY3-02 satellite.
Figure 2: Schematic diagram of the stereo imaging geometry and focal plane of the TLCs onboard the ZY3-02 satellite.
Figure 3: Schematic diagram for improving the elevation accuracy of stereo images with the laser range constraint. Geometric relation between stereo images and laser ranging data (a) before and (b) after the combined adjustment.
Figure 4: Principle of constraints in image space for the combined adjustment model.
Figure 5: Experimental data and reference data at the Taihang research area. (a) Spatial coverage of the research area; (b) image showing a GCP (ground control point) measured primarily by using a static Global Positioning System (GPS) receiver.
Figure 6: Schematic diagram of a PCP extracted from the YG-13 SAR image in the Dengfeng area.
Figure 7: ZY3-02 stereo images and laser ranging data at the Taihang research area before the combined adjustment. (a) Planar view of the LA (laser altimeter) data overlaid on the stereo images; (b) and (c) are detailed views for LAPs 16 and 18 overlaid on the stereo images, where the left image shows the nadir-view (NAD), the center one the BWD, and the right the FWD image.
Figure 8: Schematic diagram of the points' (CPs, GCPs, LAPs) distribution in (a) Songshan, (b) Tianjin, and (c) Dengfeng.
Figure 9: Schematic diagram of the comparison between the reference DEM and the generated DEMs in the Dengfeng mountainous area. (a) Elevation difference using P6; (b) elevation difference using P7.
Figure 10: Schematic diagram of the points' distribution (CP, GCP, LAP, PCP) in the Taihang research area.
Figure 11: ZY3-02 LA points projected on the stereo images after the combined adjustment at the Taihang area. (a) and (b) are detailed views for LAPs 16 and 18 overlaid on the stereo images, where the left image shows the NAD, the center one the BWD, and the right the FWD image.
Figure 12: Schematic showing the analysis of the error induced by different capture times.
16 pages, 7110 KiB  
Article
Impact of Urbanization and Climate on Vegetation Coverage in the Beijing–Tianjin–Hebei Region of China
by Qian Zhou, Xiang Zhao, Donghai Wu, Rongyun Tang, Xiaozheng Du, Haoyu Wang, Jiacheng Zhao, Peipei Xu and Yifeng Peng
Remote Sens. 2019, 11(20), 2452; https://doi.org/10.3390/rs11202452 - 22 Oct 2019
Cited by 26 | Viewed by 4520
Abstract
Worldwide urbanization leads to ecological changes around urban areas. However, few studies have quantitatively investigated the impacts of urbanization on vegetation coverage so far. As an important indicator of regional environmental change, fractional vegetation cover (FVC) is widely used to analyze changes in vegetation in urban areas. In this study, on the basis of a partial derivative model, we quantified the effects of temperature, precipitation, radiation, and urbanization (represented by nighttime light) on vegetation coverage changes in the Beijing–Tianjin–Hebei (BTH) region during its period of rapid resident population growth from 2001 to 2011. The results showed that (1) the FVC of the BTH region varied from 0.20 to 0.26, with significant spatial heterogeneity. The FVC increased in small cities such as Cangzhou and in the Taihang Mountains, while it decreased in megacities with populations greater than 1 million, such as Beijing, and in Zhangjiakou Bashang. (2) The BTH region experienced rapid urbanization, with the area of artificial surfaces increasing by 18.42%. From the urban core area to the fringe area, the urbanization intensity decreased, but the urbanization rate increased. (3) Urbanization and precipitation had the greatest effects on FVC changes. Urbanization dominated the FVC changes in the expanded areas, while precipitation had the greatest impact on the FVC changes in the core areas. For future studies on the major influencing factors of FVC changes, quantitative analysis of the contribution of urbanization to FVC changes in urban regions is crucial and will provide scientific perspectives for sustainable urban planning. Full article
(This article belongs to the Special Issue Remote Sensing of Human-Environment Interactions)
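A minimal sketch of a partial-derivative contribution analysis of this kind (a simplified linear stand-in, not necessarily the paper's exact model): regress FVC on the four drivers, then take each driver's contribution to the FVC trend as its regression coefficient times its own temporal trend.

```python
import numpy as np

def contributions(fvc, factors):
    """fvc: (years,) series; factors: dict of name -> (years,) series.
    Returns each factor's contribution to the FVC trend (per year)."""
    years = np.arange(fvc.size)
    X = np.column_stack([np.ones_like(fvc)] + list(factors.values()))
    beta, *_ = np.linalg.lstsq(X, fvc, rcond=None)   # partial derivatives
    return {name: beta[i] * np.polyfit(years, series, 1)[0]
            for i, (name, series) in enumerate(factors.items(), start=1)}
```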
Figure 1: Location and land cover of the research area. (a) The location of the Beijing–Tianjin–Hebei (BTH) region in China, (b) the variation in land cover in the BTH region, and (c) the distribution of land use types. Beijing (BJ), Tianjin (TJ), Shijiazhuang (SJZ), Handan (HD), Tangshan (TS), Baoding (BD), Zhangjiakou (ZJK), Xingtai (XT), Hengshui (HS), Cangzhou (CZ), Langfang (LF), Chengde (CD), and Qinghuangdao (QHD) refer to the 13 cities in the BTH region.
Figure 2: The fractional vegetation cover (FVC) in the BTH region: (a) average FVC from 2001 to 2011; (b) box plots of FVC values of different land covers; (c) average FVC of six large cities from 2001 to 2003, 2004 to 2008, and 2009 to 2011; and (d) average and standard deviation of FVC in the core, expanded, and fringe areas of six large cities.
Figure 3: Spatiotemporal variation of the FVC in the BTH region: (a) spatial variation of the FVC, (b) temporal variation of the FVC, and (c) interannual variation rate of the FVC in the 13 cities. (City abbreviations as in Figure 1.)
Figure 4: Changes in urban areas from 2001 to 2011. The value represents urbanization intensity.
Figure 5: Spatial distribution of urban areas and urbanization rate in the BTH region: (a) and (b) are the urbanized areas in 2001 and 2011, and (c) is the urbanization rate from 2001 to 2011.
Figure 6: Contribution of each factor to FVC changes: (a–d) are the contributions of temperature, precipitation, radiation, and urbanization, respectively, to FVC changes; (e) shows the contributions of other, unconsidered factors to FVC changes; and (f) is the average contribution of the various factors to FVC changes in the BTH region.
Figure 7: Regional statistics of the contribution of each factor to FVC changes. T, P, R, and U refer to temperature, precipitation, radiation, and urbanization, respectively, and O represents the other factors not considered in this study.
Figure 8: (a) Changes in FVC and precipitation in urban core areas, and (b) changes in FVC and urbanization in urban expanded areas. P and U indicate precipitation and urbanization, respectively.
18 pages, 7218 KiB  
Article
Combining Machine Learning and Compact Polarimetry for Estimating Soil Moisture from C-Band SAR Data
by Emanuele Santi, Mohammed Dabboor, Simone Pettinato and Simonetta Paloscia
Remote Sens. 2019, 11(20), 2451; https://doi.org/10.3390/rs11202451 - 22 Oct 2019
Cited by 26 | Viewed by 4244
Abstract
This research aimed at exploiting the joint use of machine learning and polarimetry for improving the retrieval of surface soil moisture content (SMC) from synthetic aperture radar (SAR) acquisitions at C-band. The study was conducted on two agricultural areas in Canada, for which a series of RADARSAT-2 (RS2) images were available along with direct measurements of SMC from in situ stations. The analysis confirmed the sensitivity of RS2 backscattering (σ°) to SMC. The comparison of SMC with the compact polarimetry (CP) parameters, computed from the RS2 acquisitions by the CP data simulator, pointed out that some CP parameters had a sensitivity to SMC equal or better than σ°, with correlation coefficients up to R ≃ 0.4. Based on these results, the potential of machine learning (ML) for SMC retrieval was exploited by implementing and testing on the available data an artificial neural network (ANN) algorithm. The algorithm was implemented using several combinations of σ° and CP parameters. Validation results of the algorithm with in situ observations confirmed the promising capabilities of the ML techniques for SMC monitoring. Furthermore, results pointed out the potential of CP in improving the SMC retrieval accuracy, especially when used in combination with linearly polarized σ°. Depending on the considered input combination, the ANN algorithm was able to estimate SMC with Root Mean Square Error (RMSE) between 3% and 7% of SMC and R between 0.7 and 0.9. Full article
(This article belongs to the Special Issue Compact Polarimetric SAR)
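A minimal sketch of an ANN retrieval of this kind with scikit-learn (the layer sizes, split, and hyperparameters are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def train_smc_ann(features, smc):
    """features: (n, p) array of sigma0 and CP parameters; smc: (n,) in %."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, smc,
                                              test_size=0.3, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    y_hat = ann.predict(X_te)
    rmse = np.sqrt(np.mean((y_hat - y_te) ** 2))     # reported as % of SMC
    r = np.corrcoef(y_hat, y_te)[0, 1]               # reported correlation
    return ann, rmse, r
```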
Figure 1: Carman and Casselman test areas.
Figure 2: Scatterplots of backscattering (σ°) at the Real-time In Situ Soil Monitoring for Agriculture (RISMA) station locations against the soil moisture content (SMC) measured at each station.
Figure 3: Scatterplots of linearly polarized σ° as a function of SMC at the RISMA stations.
Figure 4: Soil texture map expressed as sand percentage for the Carman site.
Figure 5: CP parameters vs. in situ SMC: (a) SV3, (b) δRHRV, (c) αs, (d) ρRHRV, (e) σRR/σRL, (f) SEPol, and (g) u.
Figure 6: Cross-correlation matrix of CP and σ° parameters. Colours are proportional to r values.
Figure 7: ANN-estimated vs. in situ SMC for the different combinations of CP parameters: (a) CP2, (b) CP3, (c) CP4, (d) CP5, (e) CP6, and (f) CP7.
Figure 8: ANN-estimated vs. in situ SMC for the two different combinations of input σ°: (a) 2pol and (b) 3pol.
Figure 9: ANN-estimated vs. in situ SMC for the two different combinations of input σ° + CP: (a) All1 and (b) All2.
Figure 10: Scatterplot of ANN-estimated vs. target SMC for Casselman.
Figure 11: Examples of SMC maps for the Casselman site: (a) for November 10, 2014 and (b) for June 14, 2015. The maps were generated by the ANN algorithm using the combination of σ° and CP input parameters. Red dots represent the locations of the RISMA stations.
8 pages, 3048 KiB  
Technical Note
Response to Variations in River Flowrate by a Spaceborne GNSS-R River Width Estimator
by April Warnock and Christopher Ruf
Remote Sens. 2019, 11(20), 2450; https://doi.org/10.3390/rs11202450 - 22 Oct 2019
Cited by 21 | Viewed by 4225
Abstract
In recent years, the use of Global Navigation Satellite System-Reflectometry (GNSS-R) for remote sensing of the Earth's surface has gained momentum as a means to exploit existing spaceborne microwave navigation systems for science applications. Here, we explore the potential for using measurements made by a spaceborne GNSS-R bistatic radar system (CYGNSS) during overpasses of a river to estimate the river's width, and to use that width as a proxy for river flowrate. We present a case study utilizing CYGNSS data collected in the spring of 2019 during multiple overpasses of the Pascagoula River in southern Mississippi over a range of flowrates. Our results demonstrate that a measure of river width derived from CYGNSS is highly correlated with the observed flowrates. We show that an approximately monotonic relationship exists between river flowrate and a measure of river width that we define as the associated GNSS-R width (AGW). These results suggest the potential for GNSS-R systems to be utilized as a means to estimate river flowrates and widths from space. Full article
(This article belongs to the Special Issue GPS/GNSS for Earth Science and Applications)
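A minimal sketch of an AGW-like width measure (the 22% threshold and the power-series fit follow the figure captions below; the function and its parameters are otherwise illustrative assumptions): fit a low-order polynomial to the along-track waveform and measure the span over which the fit exceeds the threshold.

```python
import numpy as np

def agw_like_width(distance_m, power, threshold_frac=0.22, order=4):
    """distance_m: along-track specular-point distance; power: waveform."""
    cutoff = threshold_frac * power.max()
    mask = power >= cutoff                    # portion of the waveform to fit
    coeffs = np.polyfit(distance_m[mask], power[mask], order)
    fit = np.polyval(coeffs, distance_m)
    above = distance_m[fit >= threshold_frac * fit.max()]
    return above.max() - above.min() if above.size else 0.0
```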
Figure 1: Histograms of propagation distances: (a) r̄R (from the specular point to the CYGNSS receiver); (b) r̄T (from the GPS transmitter to the specular point) for a full day of samples on 9 June 2017.
Figure 2: Dependence on incidence angle of (a) the propagation distance from the specular point to the receiver and (b) the relative signal strength received from coherent and incoherent scenes, for one full day of samples on 9 June 2017. The red dots indicate the incidence angles at which the greatest number of samples occurs.
Figure 3: Google Earth imagery of the southern Pascagoula River in Mississippi. The location of the USGS streamflow gauge (USGS 02479000) is shown at the head of the river. The mean width along the roughly 40 km span of the river where collections took place is ~100 m.
Figure 4: April 2019 raw IF data collections over the Pascagoula River: (a) track locations; (b) corresponding flowrates at the time of each overpass (color coded to match the track locations plotted on the left), plotted over the USGS flowrates for the time period spanning the data collections.
Figure 5: Fitted power series for each of the five April 2019 Pascagoula overpasses, using a threshold of 22% of the peak as the cutoff for the portion of the waveform to be fitted.
Figure 6: Linear regression between the river AGW derived from CYGNSS observations and the river flowrate measured by the USGS stream gauge. The R2 coefficient is 0.97. Marker colors correspond to the tracks and flowrates shown in Figure 4.
20 pages, 2258 KiB  
Article
Remote Sensing of the Atmosphere by the Ultraviolet Detector TUS Onboard the Lomonosov Satellite
by Pavel Klimov, Boris Khrenov, Margarita Kaznacheeva, Gali Garipov, Mikhail Panasyuk, Vasily Petrov, Sergei Sharakin, Andrei Shirokov, Ivan Yashin, Mikhail Zotov, Viktor Grebenyuk, Andrei Grinyuk, Maria Lavrova, Artur Tkachenko, Leonid Tkachev, Alla Botvinko, Oleg Saprykin, Andrei Puchkov and Alexander Senkovsky
Remote Sens. 2019, 11(20), 2449; https://doi.org/10.3390/rs11202449 - 22 Oct 2019
Cited by 17 | Viewed by 4784
Abstract
The orbital detector TUS (Tracking Ultraviolet Setup), with high sensitivity in the near-visible ultraviolet (tens of photons per 0.8 μs time sample at wavelengths of 300–400 nm within a detector pixel’s field of view) and microsecond-scale temporal resolution, was developed by the Lomonosov-UHECR/TLE collaboration and launched into orbit on 28 April 2016. A variety of phenomena were studied by measuring ultraviolet signals from the atmosphere: extensive air showers from ultra-high-energy cosmic rays, lightning discharges, transient atmospheric events, aurora ovals, and meteors. These events differ in origin, duration, and luminosity. The TUS detector could conduct measurements with different temporal resolutions (0.8 μs, 25.6 μs, 0.4 ms, and 6.6 ms) but the same spatial resolution of 5 km. Results of TUS measurements of various atmospheric emissions are discussed and compared to data from previous experiments. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
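The abstract’s four temporal resolutions imply very different record durations for the same number of time samples. A minimal sketch, assuming (hypothetically) 256 samples per waveform record; the record length is not stated in this listing.

import numpy as np

# The four sampling intervals quoted in the abstract, in microseconds.
sample_intervals_us = [0.8, 25.6, 400.0, 6600.0]

# Assume, for illustration only, 256 time samples per waveform record; the
# record duration then scales directly with the selected temporal resolution.
n_samples = 256
for dt in sample_intervals_us:
    t = np.arange(n_samples) * dt                # time axis of one record (us)
    print(f"dt = {dt:7.1f} us -> record length {t[-1] / 1000.0:8.2f} ms")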
Figure 1. The TUS (Tracking Ultraviolet Setup) detector on board the Lomonosov satellite (a, an artist’s view); TUS covered with a protective cover during preflight preparations (b).
Figure 2. Snapshots of the focal plane showing the arc-like shape and movement of the image of an elve registered on 23 August 2017 through the detector’s field of view. The snapshots were taken at 136 μs, 168 μs, and 200 μs from the beginning of the record. Colors denote the signal amplitude in ADC codes.
Figure 3. Waveforms of several hit pixels of the elve registered on 23 August 2017 above the Pacific Ocean.
Figure 4. Example of a multi-elve. (a) Waveforms of two pixels (15th and 16th) in the second module. (b) A pixel map at time 132 μs from the beginning of the record.
Figure 5. Waveforms of a “monotonously growing” flash in three pixels. The same kind of waveform is observed in the majority of channels.
Figure 6. Geographical distribution of monotonously growing flashes measured by the TUS detector. A correlation with some thunderstorm regions, especially over oceans, can be seen.
Figure 7. Waveforms of three channels for the unusually bright far-from-thunderstorm event measured by TUS on 28 July 2017.
Figure 8. Cloud-top temperatures from GOES-13 on 28 July 2017 at 00:45 UTC near the TUS FOV, centered at the position of the event shown in Figure 7. The black star shows the position of the event.
Figure 9. Waveforms of three pixels in the event registered on 27 June 2016 above India (25.3°N, 77.8°E).
Figure 10. Snapshots of the focal plane for the event presented in Figure 9, made at t = 10 ms, 40 ms, and 88 ms from the beginning of the record. Colors denote ADC counts.
Figure 11. Examples of short single-pulse events (top) and series of pulses (bottom), with their respective geographical distributions.
Figure 12. UV flash with a complicated temporal structure (a) and a map of pixels for this event (b).
Figure 13. Waveforms of a few hit pixels of the meteor measured on 3 January 2017 at 14:31:08 UTC.
Figure 14. Waveforms of two events registered in the meteor mode: a series of UV flashes above a thunderstorm (a) and a far-from-thunderstorm event (b); see the text.
Figure 15. Example of aurora light measurements. Panels (a,b) show waveforms of two hit pixels during the TUS passage above the aurora oval; panel (c) shows a pixel map for the same passage; panel (d) shows the density of the electron flux in the polar region causing the polar light; see the text.
Figure 16. (a) A snapshot of the focal plane of an event registered above anthropogenic lights; see the text for details. (b) The snapshot adjusted to the TUS geographical position and orientation and superimposed on the Google map around Greece.
Figure 17. (a) A waveform measured above Greece on 21 August 2016 at 21:57:21 UTC. (b) The result of its Fourier analysis.
24 pages, 11289 KiB  
Article
Assessment of Phytoecological Variability by Red-Edge Spectral Indices and Soil-Landscape Relationships
by Helena S. K. Pinheiro, Theresa P. R. Barbosa, Mauro A. H. Antunes, Daniel Costa de Carvalho, Alexis R. Nummer, Waldir de Carvalho Junior, Cesar da Silva Chagas, Elpídio I. Fernandes-Filho and Marcos Gervasio Pereira
Remote Sens. 2019, 11(20), 2448; https://doi.org/10.3390/rs11202448 - 22 Oct 2019
Cited by 5 | Viewed by 3157
Abstract
Vegetation physiognomies are related to soil and geological conditions, and this relation can be represented spatially with the support of remote sensing data. The goal of this research was to map vegetation physiognomies in a mountainous area using Sentinel-2 Multispectral Instrument (MSI) data and morphometric covariates through data mining techniques. The research was based on red-edge (RE) bands and indices to classify phytophysiognomies at two taxonomic levels. The input data were pixel-sampled based on field sample sites. Data mining procedures comprised covariate selection and supervised classification with a Random Forest model. Results showed the potential of bands 3, 5, and 6, as well as the Green Chlorophyll (CLg) and SAVI indices, for mapping phytophysiognomies in both seasons. NDVI indices were also important, particularly those calculated with bands 6, 7, 8, and 8A, which lie in the RE region. Model performance was reasonable, with Kappa indices of 0.72 and 0.56 for the first and fifth taxonomic levels, respectively. The model showed confusion between Broadleaved dwarf-forest, Parkland savanna, and Bushy grassland. Savanna formations occurred variably across the area, while Bushy grasslands occur strictly in certain landscape positions. Broadleaved forests presented the best performance (first taxonomic level), and among their variations (fifth level) the model could precisely capture the pattern of those on deep soils from gneiss parent material. The approach was thus useful for capturing intrinsic soil-plant relationships and their relation with remote sensing data, showing potential to map phytophysiognomies at two distinct taxonomic levels in poorly accessible areas. Full article
(This article belongs to the Special Issue Remote Sensing of Tropical Phenology)
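A minimal sketch of the classification workflow named in the abstract — Random Forest supervised classification scored with a Kappa index — using scikit-learn. The covariate matrix, class labels, and train/test split below are synthetic stand-ins, not the paper’s field data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the covariate table: 800 pixel samples (the count
# used in Figure 4) with 12 band/index/terrain covariates and 5 classes.
X = rng.normal(size=(800, 12))
y = rng.integers(0, 5, size=800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Kappa index, the accuracy measure reported in the abstract; on real data,
# rf.feature_importances_ yields the Gini-based covariate ranking of Figure 5.
print("Kappa =", round(cohen_kappa_score(y_te, rf.predict(X_te)), 2))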
Figure 1. The study area (Ibitipoca State Park—ISP) and its location in Brazil, Minas Gerais State (MG).
Figure 2. Thematic maps: (a) climate domains, adapted from [35]; (b) elevation (m); (c) slope (%); (d) soil map, adapted from [40]; (e) geological map, adapted from [17,41]; (f) field sampling points.
Figure 3. Flowchart of classification steps and procedures.
Figure 4. Pearson’s correlation among the numerical covariates, based on all samples (800) for the fifth taxonomic level. DEM = elevation. Bands and indices suffixed ‘Sep’ correspond to September images (dry season); those suffixed ‘Dec’ correspond to December images (wet season). B02 to B08A = Sentinel-2 MSI bands; CLg = green chlorophyll index; CLre = chlorophyll index; IRECI = inverted red-edge chlorophyll index; NDRE 1 to 2 = normalized difference red-edge; NDVI 1 to 2 = normalized difference vegetation index; NDVIre 1 to 6 = normalized difference vegetation index red-edge; SAVI = soil-adjusted vegetation index.
Figure 5. (a) Rank of covariate importance by decrease in accuracy (%); (b) rank by decrease in Gini coefficient (first taxonomic level); (c) rank by decrease in accuracy (%); (d) rank by decrease in Gini coefficient (fifth taxonomic level). Mde = elevation; gph30 = geomorphons. Bands and indices prefixed ‘S’ correspond to September images (dry season); those prefixed ‘D’ correspond to December images (wet season). B02, B03, B05, B06, and B08 = selected Sentinel-2 MSI bands; NDVIre = normalized difference vegetation index red-edge; CLg = green chlorophyll index; SAVI = soil-adjusted vegetation index.
Figure 6. Relative covariate importance for modeling phytophysiognomies: (a) relative importance for each physiognomy; (b) average cumulative relative importance. Abbreviations as in Figure 5.
Figure 7. Maps of phytophysiognomies at the Ibitipoca State Park, Brazil. (a) First taxonomic level (phytophysiognomies); (b) fifth taxonomic level (phytophysiognomies + soil depth); (c) central portion of the area highlighting differences among formations for both taxonomic levels; (d) photo of the north view: Bushy grassland (CL), Parkland savanna (SL), Broadleaved dwarf-forest (NL), and Shrubland savanna (SA); (e) photo of the southeast view: Bushy grassland (CL), Broadleaved dwarf-forest (NL), Broadleaved forest (FL), and Broadleaved scrub (AL).
12 pages, 3756 KiB  
Technical Note
Estimating Pasture Biomass and Canopy Height in Brazilian Savanna Using UAV Photogrammetry
by Juliana Batistoti, José Marcato Junior, Luís Ítavo, Edson Matsubara, Eva Gomes, Bianca Oliveira, Maurício Souza, Henrique Siqueira, Geison Salgado Filho, Thales Akiyama, Wesley Gonçalves, Veraldo Liesenberg, Jonathan Li and Alexandre Dias
Remote Sens. 2019, 11(20), 2447; https://doi.org/10.3390/rs11202447 - 22 Oct 2019
Cited by 34 | Viewed by 4970
Abstract
The Brazilian territory contains approximately 160 million hectares of pastures, and techniques are needed to automate their management and increase their production. This technical note has two objectives: first, to estimate canopy height using unmanned aerial vehicle (UAV) photogrammetry; second, to propose an equation for estimating the biomass of Brazilian savanna (Cerrado) pastures based on UAV canopy height. Four experimental units of Panicum maximum cv. BRS Tamani were evaluated. Herbage mass sampling, height measurements, and UAV image collection were performed simultaneously. The UAVs were flown at a height of 50 m, and images were generated with a mean ground sample distance (GSD) of approximately 1.55 cm. The forage canopy height estimated by UAV was calculated as the difference between the digital surface model (DSM) and the digital terrain model (DTM). The R² between ruler height and UAV height was 0.80; between biomass (kg ha−1 of green biomass, GB) and ruler height, 0.81; and between biomass (kg ha−1 GB) and UAV height, 0.74. UAV photogrammetry proved to be a promising technique for estimating height and biomass in Brazilian Panicum maximum cv. BRS Tamani pastures located in the endangered Brazilian savanna (Cerrado) biome. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
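The core height computation described in the abstract is a raster difference, CHM = DSM − DTM, followed by a regression of biomass on height. A minimal sketch, assuming toy 2 × 2 rasters and placeholder regression coefficients (the paper’s fitted equation is not reproduced here).

import numpy as np

# Toy 2 x 2 DSM and DTM rasters (m) on the same grid; in the paper the DSM
# comes from UAV photogrammetry and the DTM from GNSS-RTK ground points.
dsm = np.array([[101.35, 101.42], [101.28, 101.50]])
dtm = np.array([[101.05, 101.10], [101.02, 101.12]])

# Canopy height model: the difference between surface and terrain models.
chm = dsm - dtm                                  # metres
mean_height_cm = 100.0 * chm.mean()

# Placeholder linear biomass model calibrated against cut-and-weigh samples;
# the coefficients are illustrative, not the paper's fitted equation.
a, b = 55.0, 300.0
biomass_kg_ha = a * mean_height_cm + b
print(f"mean height = {mean_height_cm:.1f} cm, biomass ~ {biomass_kg_ha:.0f} kg/ha GB")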
Figure 1. Location of Fazenda Escola (study area) and experimental plots.
Figure 2. Distribution of digital terrain model (DTM) points and ground control points (GCPs), both collected with real-time kinematic (RTK) positioning.
Figure 3. Generated models of the study area: (a) digital surface model (DSM); (b) DTM generated using a Global Navigation Satellite System (GNSS) RTK receiver; (c) elevation profile.
Figure 4. Relationship between the height estimated by the UAV and that measured with the ruler.
Figure 5. Relationship between biomass and height measured with the ruler.
Figure 6. Relationship between the height estimated by the UAV and herbage mass.
Figure 7. Height measured by the ruler and by the UAV at different harvest intervals.
Figure 8. Biomass estimated by the ruler and by the UAV at different harvest intervals.
24 pages, 12199 KiB  
Article
Contribution to Sandy Site Characterization: Spectro-Directional Signature, Grain Size Distribution and Mineralogy Extracted from Sand Samples
by Françoise Viallefont-Robinet, Cédric Bacour, Marc Bouvet, Malika Kheireddine, Mustapha Ouhssain, Ramzi Idoughi, Léo Grignon, Eric Munesa, François Lemaître and Thomas Rivière
Remote Sens. 2019, 11(20), 2446; https://doi.org/10.3390/rs11202446 - 21 Oct 2019
Cited by 3 | Viewed by 3215
Abstract
The characterization of sands detailed in this paper was performed to support the in-flight radiometric performance assessment of space-borne optical sensors over the so-called Pseudo-Invariant Calibration Sites (PICS). Although the physical properties of PICS surfaces are fairly stable in time, the signal measured from space varies with the illumination and viewing geometries. Thus, there is a need to characterize the spectro-directional properties of PICS. This could be done on a broad scale thanks to multi-spectral, multi-directional space-borne sensors such as the POLDER instrument (albeit with old data). However, interpolating or extrapolating the spectro-directional reflectance measured from space to the spectral bands of another sensor is not straightforward. The hyperspectral characterization of sand samples collected within or near PICS could contribute to a solution. In this context, a set of 31 sand samples was compiled. The BiConical Reflectance Factor (BCRF), linked to the Bidirectional Reflectance Distribution Function (BRDF), was measured between 0.4 and 2.5 µm, over a half hemisphere when the amount of sand in the sample was large enough, and for a single fixed angular configuration for small samples. These optical measurements were complemented by grain size distribution measurements and mineralogical analysis, and compiled together with previously published measurements in the so-called PICSAND database, freely available online. Full article
(This article belongs to the Section Engineering Remote Sensing)
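Figure 16 below reports residual RMSD between measured BCRF and fitted BRDF models. A minimal sketch of that comparison metric, assuming hypothetical BCRF values at the six POLDER wavelengths plus 1600 and 2200 nm.

import numpy as np

# Hypothetical BCRF values at the six POLDER wavelengths plus 1600 and
# 2200 nm, for one measurement and one fitted BRDF model (illustrative only).
measured = np.array([0.28, 0.31, 0.35, 0.38, 0.42, 0.45, 0.50, 0.47])
modeled = np.array([0.27, 0.32, 0.34, 0.39, 0.41, 0.46, 0.49, 0.48])

# Residual root-mean-square deviation between measurement and model.
rmsd = np.sqrt(np.mean((measured - modeled) ** 2))
print(f"RMSD = {rmsd:.4f}")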
Figure 1. Measurement devices: (a) ONERA BCRF measurement device named “Banc de BRDF grands échantillons” (BBGE); (b) single-geometry BCRF measurement setup using a contact probe.
Figure 2. Examples of sand samples: (a) Namibia_RadCalNet; (b) Arabia_PICSAND1_SPL5.
Figure 3. Measurement discrepancy among the three positions compared to the single-measurement (pos0) uncertainty for Arabia_PICSAND1_SPL5.
Figure 4. Spectral variation of BCRF for the Namibia_RadCalNet sample for the three positions.
Figure 5. Algeria5_PICSCEOS BCRF discrepancy between samples compared to the single-measurement uncertainty.
Figure 6. Arabia_PICSAND1 BCRF discrepancy between samples compared to the single-measurement uncertainty.
Figure 7. Spectral variation of BCRF for the Australian Pinnacles Desert sand samples.
Figure 8. Spectral comparison between various sites.
Figure 9. Spectral variation of BCRF for some of the small sand samples.
Figure 10. Spectral variation of BCRF for the Australian Internal Soil Standard (ISS) samples.
Figure 11. BCRF for the minimum (10°) and maximum (60°) solar zenith angles, in the principal plane, for Algerian samples: (a) Algeria3; (b) Algeria4; (c) Algeria5 sample 1; (d) Algeria5 sample 2; (e) Algeria5 sample 3.
Figure 12. BCRF for the minimum (10°) and maximum (60°) solar zenith angles, in the principal plane, for Arabian samples: (a) sample 1; (b) sample 2; (c) sample 3; (d) sample 4; (e) sample 5; (f) sample 6.
Figure 13. BCRF for the minimum (10°) and maximum (60°) solar zenith angles, in the principal plane, for: (a) Libya; (b) Morocco.
Figure 14. BCRF for the minimum (10°) and maximum (60°) solar zenith angles, in the principal plane, for Namibian sites near Gobabeb: (a) dunes; (b) RadCalNet.
Figure 15. BCRF for the minimum (10°) and maximum (60°) solar zenith angles, in the principal plane, for the site near Niamey, Niger.
Figure 16. Arabia_PICSAND1_SPL1 residual RMSD between the measured BCRF at the six POLDER wavelengths plus two SWIR wavelengths (1600 and 2200 nm) for the three rotation positions, and the four fitted BRDF models.
Figure 17. Arabia_PICSAND1_SPL5 measured (crosses) and modeled (dots) BCRF for SZA = 10° and VZA = 60°, at wavelengths of 670 nm (blue) and 875 nm (red).
Figure 18. Grain size distribution for the Arabia_PICSAND_SPL5 sample.
Figure 19. Polar diagram of the BCRF measured with the BBGE for the Arabia_PICSAND1_SPL5 sand sample (position at 0°) and an illumination angle (SZA) of 40°.
Figure 20. Normalized BCRF for Algeria4 (a) and Libya Erg Ubari (b) in the principal plane.
14 pages, 3616 KiB  
Article
Liquid Water Detection under the South Polar Layered Deposits of Mars—A Probabilistic Inversion Approach
by Sebastian Emanuel Lauro, Francesco Soldovieri, Roberto Orosei, Andrea Cicchetti, Marco Cartacci, Elisabetta Mattei, Barbara Cosciotti, Federico Di Paolo, Raffaella Noschese and Elena Pettinelli
Remote Sens. 2019, 11(20), 2445; https://doi.org/10.3390/rs11202445 - 21 Oct 2019
Cited by 9 | Viewed by 3987
Abstract
Liquid water was present on the surface of Mars in the distant past; part of that water is now in the ground in the form of permafrost, and heat from the molten interior of the planet could cause it to melt at depth. MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) has surveyed the Martian subsurface for more than fifteen years in search of evidence of such water buried at depth. Radar detection of liquid water can be stated as an inverse electromagnetic scattering problem, starting from the echo intensity collected by the antenna. In principle, the electromagnetic problem can be modelled as a normal-incidence plane wave that propagates through a three-layered medium made of air, ice, and basal material, with the final goal of determining the dielectric permittivity of the basal material. In practice, however, two fundamental aspects make the inversion procedure for this apparently simple model rather challenging: (i) the impossibility of using the absolute value of the echo intensity in the inversion procedure; and (ii) the impossibility of using a deterministic approach to retrieve the basal permittivity. In this paper, these issues are addressed by assuming a priori information on the electromagnetic properties of the ice and adopting a probabilistic inversion approach. All the aspects that can affect the estimation of the basal permittivity below the Martian South polar cap are discussed, and we describe how the presence of basal liquid water was detected. Full article
(This article belongs to the Special Issue Real-Time Radar Imaging and Sensing)
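A minimal sketch of the probabilistic inversion idea described in the abstract: a normal-incidence three-layer (air/ice/basal) forward model for the basal-to-surface echo power ratio, and a gridded posterior over the basal permittivity under a flat prior. The ice permittivity, two-way attenuation, observed ratio, and its uncertainty are all illustrative assumptions, not values from the paper.

import numpy as np

def power_ratio_db(eps_b, eps_ice=3.1, two_way_att_db=3.0):
    # Normal-incidence echo-power ratio (basal/surface) for an air/ice/basal
    # stack; eps_ice and the two-way ice attenuation are illustrative values.
    n_i = np.sqrt(eps_ice)
    r_s = ((1.0 - n_i) / (1.0 + n_i)) ** 2                # air/ice reflectivity
    r_b = ((n_i - np.sqrt(eps_b)) / (n_i + np.sqrt(eps_b))) ** 2
    t = (1.0 - r_s) ** 2                                  # two-way transmission
    return 10.0 * np.log10(t * r_b / r_s) - two_way_att_db

# Grid the basal permittivity, weight each value by a Gaussian likelihood of a
# hypothetical observed ratio (in dB), and read the median off the posterior.
eps_grid = np.linspace(4.0, 60.0, 2000)
obs_db, sigma_db = -5.0, 2.0
likelihood = np.exp(-0.5 * ((power_ratio_db(eps_grid) - obs_db) / sigma_db) ** 2)
posterior = likelihood / likelihood.sum()                 # flat prior on the grid
cdf = np.cumsum(posterior)
print("median eps_b ~", eps_grid[np.searchsorted(cdf, 0.5)])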
Figure 1. Example of a received radar signal; two distinct echoes are recognizable, associated with reflections at the surface and at the base of the South polar layered deposits (SPLD).
Figure 2. The attenuation coefficient as a function of temperature at 4 MHz for different materials composing the SPLD.
Figure 3. (a) Ice attenuation and (b) ice-layer velocity profiles for the extreme values of the investigated interval of the model parameters, (f_v = 0.05, T_b = 170 K) and (f_v = 0.2, T_b = 270 K). The two cases represent a cold quasi-pure ice layer and a warm dirty ice layer, respectively.
Figure 4. (a) Topographic map of the Martian South pole with the 200-km studied area in Planum Australe enclosed by the black contour line. (b) Spatial distribution of the power ratio P_b/P_s in the studied area. The white contour line defines the basal bright area (highest intensity, dark blue tones).
Figure 5. Distributions of the power ratio P_b/P_s for the data collected inside (blue) and outside (red) the bright area. Black lines are the fitted normal pdfs for the two areas.
Figure 6. Posterior pdfs of the basal permittivity, p_p(ε_b*). Red indicates the results of the inversion performed outside the bright area; blue, inside the bright area. The black lines indicate the results obtained using the curve fitting of Figure 5.
Figure 7. Posterior pdfs p_p(ε_b*, f_v*) and p_p(ε_b*, T_b*) outside (a,b) and inside (c,d) the bright reflector.
Figure 8. (a) Trend of the median value of ε_b as a function of f_v, computed from the posterior pdf p_p(ε_b*, f_v*) inside (blue) and outside (red) the bright area. (b) Trend of the median value of ε_b as a function of T_b, computed from the posterior pdf p_p(ε_b*, T_b*) inside (blue) and outside (red) the bright area.
17 pages, 4001 KiB  
Article
Turbulence Measurements with Dual-Doppler Scanning Lidars
by Alfredo Peña and Jakob Mann
Remote Sens. 2019, 11(20), 2444; https://doi.org/10.3390/rs11202444 - 21 Oct 2019
Cited by 10 | Viewed by 3973
Abstract
Velocity-component variances can be directly computed from lidar measurements using information on the second-order statistics within the lidar probe volume. Specifically, by using the Doppler radial velocity spectrum, one can estimate the unfiltered radial velocity variance. This information is not always available in current lidar campaigns. The velocity-component variances can also be indirectly computed from the reconstructed velocities, but these are biased compared to those computed from, e.g., sonic anemometers. Here we show, for the first time, how to estimate such biases for a multi-lidar system, and we demonstrate, also for the first time, their dependence on the turbulence characteristics and the lidar beam scanning geometry relative to the wind direction. For a dual-Doppler lidar system, we also show that the indirect method has an advantage over the direct one for commonly used scanning configurations due to the singularity of the system. We demonstrate that our estimates of the radial velocity and velocity-component biases are accurate by analyzing measurements performed over a flat site using a dual-Doppler lidar system in which both lidars stared at a volume close to a sonic anemometer at a height of 100 m. We also show that mapping these biases over a spatial domain helps to plan meteorological campaigns in which multi-lidar systems can potentially be used. In particular, such maps support the multi-point mapping of wind resources and conditions, improving the tools needed for wind turbine siting. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
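A minimal sketch of the quantities the abstract discusses: the unfiltered radial-velocity variance of a staring beam, σ_r² = nᵀRn, and the dual-Doppler reconstruction matrix whose determinant (cf. Figure 3) controls the singularity of the system. The Reynolds-stress tensor and beam angles below are illustrative assumptions.

import numpy as np

def unit_vector(elev_deg, azim_deg):
    # Lidar beam unit vector n(theta, phi) in (x, y, z) coordinates.
    t, p = np.radians(elev_deg), np.radians(azim_deg)
    return np.array([np.cos(t) * np.cos(p), np.cos(t) * np.sin(p), np.sin(t)])

# Hypothetical Reynolds-stress tensor (m^2 s^-2) of the true wind field.
R = np.array([[1.0,  0.1, -0.1],
              [0.1,  0.6,  0.0],
              [-0.1, 0.0,  0.3]])

# Two beams staring at the same volume; elevation and azimuth are illustrative.
n1 = unit_vector(30.0, 0.0)
n2 = unit_vector(30.0, 90.0)

# Unfiltered radial-velocity variance of each staring beam: sigma_r^2 = n^T R n.
for n in (n1, n2):
    print("radial variance:", n @ R @ n)

# A dual-Doppler reconstruction of (u, v) uses only the horizontal projections
# of the beams; geometries with det M -> 0 are singular (cf. Figure 3).
M = np.vstack([n1[:2], n2[:2]])
print("det M =", np.linalg.det(M))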
Figure 1. Sketch of a scanning lidar measuring the wind v at a distance d_f with a beam in the direction of the unit vector n = n(θ, φ).
Figure 2. Simulated u- and w-velocity spectra and radial velocity spectra of a lidar with (a) and without (b) the effect of filtering, for different elevation angles with respect to the mean wind, for αε^(2/3) = 0.1 m^(4/3) s^(−2), L = 50 m, and Γ = 3. For the filtered case (b), z_R/L = 0.5.
Figure 3. Spatial distribution of the determinant of M (a) and P (b) for a dual-Doppler scanning system with φ₁ = φ₂ = 0°, assuming the wind is aligned with the x-axis.
Figure 4. Sketch of the campaign at the Høvsøre Test Station in Denmark. The two WindScanners are shown as rectangles (k in black and w in red) with their beams in matching colors; the meteorological mast is in blue, with the 100-m sonic anemometer s in cyan. Coordinates are in UTM 32, WGS84.
Figure 5. Radial velocity spectra of the k and w WindScanners and those of an ideal sonic anemometer aligned with either WindScanner (s, w or s, k) for two wind directions: 165° (a) and 230° (b).
Figure 6. Scatter plot of the radial velocities of the WindScanners and the sonic anemometer in the beam direction of the k lidar (a) and the w lidar (b). Red markers correspond to all data and black markers to selected data (see text for details).
Figure 7. Scatter plot of the radial velocity variances of the WindScanners and the sonic anemometer in the beam direction of the k lidar (a) and the w lidar (b). The results of a linear regression through the origin are also shown, together with the coefficient of determination R².
Figure 8. Ratio of the radial velocity variance of the WindScanners to that of the sonic anemometer as a function of wind direction, in the beam direction of the k lidar (a) and the w lidar (b). Measurements and a loess fit to the measurements are shown as markers, and predictions as colored solid lines. The standard error of the loess fit is shown as a solid black line.
Figure 9. Scatter plot of the horizontal velocity-component variances of the WindScanners and the sonic anemometer. Results for the u- and v-velocity components are shown in frames (a) and (b), respectively. The results of a linear regression through the origin are also shown, together with the coefficient of determination R².
Figure 10. Ratio of the velocity-component variances of the WindScanners to those of the sonic anemometer as a function of wind direction. Results for the u- and v-velocity components are shown in frames (a) and (b), respectively. Measurements and a loess fit to the measurements are shown as markers, and predictions as colored solid lines. The standard error of the loess fit is shown as a solid black line.
Figure 11. Ratio of the radial velocity spectra of the WindScanners to those of the sonic anemometer for each WindScanner beam as a function of wavenumber. Solid lines show the theoretical spectral ratio for each WindScanner beam for 290° winds, assuming z_R/L = 0.25.
Figure 12. Spatial variation of the velocity-variance biases of the dual-Doppler scanning system (lidars shown as solid rectangles) with respect to the velocity variance measured at each point, for a height of 100 m and a 90° wind. Turbulence is characterized by Γ = 3 and z_R/L = 0.25. Frame (a) shows the bias for the u-velocity component, i.e., σ_{u,ws}²/σ_{u,s}², and frame (b) that for the v-velocity component, i.e., σ_{v,ws}²/σ_{v,s}².
Figure 13. Same as Figure 12 but for a 180° wind.
20 pages, 1175 KiB  
Perspective
Measuring Marine Plastic Debris from Space: Initial Assessment of Observation Requirements
by Víctor Martínez-Vicente, James R. Clark, Paolo Corradi, Stefano Aliani, Manuel Arias, Mathias Bochow, Guillaume Bonnery, Matthew Cole, Andrés Cózar, Rory Donnelly, Fidel Echevarría, François Galgani, Shungudzemwoyo P. Garaba, Lonneke Goddijn-Murphy, Laurent Lebreton, Heather A. Leslie, Penelope K. Lindeque, Nikolai Maximenko, François-Régis Martin-Lauzer, Delwyn Moller, Peter Murphy, Lorenzo Palombi, Valentina Raimondi, Julia Reisser, Laia Romero, Stefan G.H. Simis, Sindy Sterckx, Richard C. Thompson, Konstantinos N. Topouzelis, Erik van Sebille, Joana Mira Veiga and A. Dick Vethaak
Remote Sens. 2019, 11(20), 2443; https://doi.org/10.3390/rs11202443 - 21 Oct 2019
Cited by 111 | Viewed by 23313
Abstract
Sustained observations are required to determine the marine plastic debris mass balance and to support effective policy for planning remedial action. However, observations currently remain scarce at the global scale. A satellite remote sensing system could make a substantial contribution to tackling this problem. Here, we take initial steps towards the potential design of such a remote sensing system by: (1) identifying the properties of marine plastic debris amenable to remote sensing methods and (2) highlighting the oceanic processes relevant to scientific questions about marine plastic debris. Remote sensing approaches are reviewed and matched to the optical properties of marine plastic debris and the relevant spatio-temporal scales of observation to identify challenges and opportunities in the field. Finally, the steps needed to develop marine plastic debris detection by remote sensing platforms are proposed in terms of fundamental science as well as linkages to ongoing planning for satellite systems with similar observation requirements. Full article
(This article belongs to the Special Issue EO Solutions to Support Countries Implementing the SDGs)
Figure 1. Diagram representing the four observational scenarios discussed in the text (Section 2): (1) river discharge, (2) spills, (3) shoreline accumulation, (4) submesoscale convergence filaments.
Figure 2. Near-infrared/short-wave infrared (NIR-SWIR) reflectance spectrum (%), plastic absorption feature locations, and satellite detection windows. The thick solid red line is the median reflectance spectrum and the shaded area is the standard deviation of all plastics measured (data adapted from [53]). Thick horizontal bars highlight the regions of the spectrum with major absorption features (data adapted from [50]). Double-arrow-ended lines show the maximum width of the spectral response function and the band names of the Sentinel-2B MultiSpectral Imager (MSI). The dashed line is a typical oceanic seawater reflectance, shown for comparison (data corresponding to the OPENOCEAN-SW2 spectrum, adapted from [54]).
Figure 3. Example of a plastic debris detection experiment during a Sentinel-2B overpass on 15 May 2018 over Whitsand Bay (United Kingdom). (A) Overview of the bay area with the study area indicated by a red square, shown in detail in (B) with the positions of the plastic targets (10 × 10 m) visible within the red square.
27 pages, 9168 KiB  
Article
The Use of SMAP-Reflectometry in Science Applications: Calibration and Capabilities
by Nereida Rodriguez-Alvarez, Sidharth Misra, Erika Podest, Mary Morris and Xavier Bosch-Lluis
Remote Sens. 2019, 11(20), 2442; https://doi.org/10.3390/rs11202442 - 21 Oct 2019
Cited by 25 | Viewed by 3815
Abstract
The Soil Moisture Active Passive (SMAP) mission became one of the newest spaceborne Global Navigation Satellite System–Reflectometry (GNSS-R) missions collecting Global Positioning System (GPS) bistatic radar measurements when the band-pass center frequency of its radar receiver was switched to the GPS L2C band. SMAP-Reflectometry (SMAP-R) brings a set of unique capabilities, such as polarimetry and improved spatial resolution, that allow for the exploration of scientific applications that other GNSS-R missions cannot address. In order to leverage SMAP-R for scientific applications, a calibration must be performed to account for the characteristics of the SMAP radar receiver and each GPS transmitter. In this study, we analyze the unique characteristics of SMAP-R compared to other GNSS-R missions and present a calibration method for SMAP-R signals that enables their standardized use by the scientific community. Two key calibration parameters need to be corrected: the first is the GPS transmitted power and GPS antenna gain at the incidence angle of the measured reflections, and the second is the convolution of the SMAP high-gain antenna pattern with the glistening zone (the Earth surface area from which GPS signals scatter). To account for GPS transmitter variability, GPS instrument properties—transmitted power and antenna gain—are collocated with information collected from the Cyclone Global Navigation Satellite System (CYGNSS) at SMAP’s range of incidence angles (37.3° to 42.7°). To account for the convolutional effect of the SMAP antenna gain, both the scattering area of the reflected GPS signal and the SMAP antenna footprint are mapped on the surface. We account for the size of the scattering area corresponding to each delay and Doppler bin of the SMAP-R measurements based on the SMAP antenna pattern, and normalize according to the size of a measurement representative of one obtained with an omnidirectional antenna. We validated these calibration methods through an analysis of the coherency of the reflected signal over an extensive area of old sea ice with constant surface characteristics over a period of 3 months. By selecting a vicarious scattering surface with high coherency, we eliminated scene variability and complexity in order to avoid scene-dependent aliases in the calibration. The calibration method reduced the dependence on GPS transmitter power and gain from ~1.08 dB/dB to a residual error of about −0.2 dB/dB. Results also showed that the calibration method eliminates the high-gain antenna filtering effect, reducing errors as large as 10 dB at angles furthest from SMAP’s constant 40° incidence angle. Full article
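A minimal sketch of the two corrections described in the abstract, applied in the dB domain; the reference EIRP level and the numbers in the example are assumptions of this sketch, not values from the paper.

def calibrate_peak_snr(peak_snr_db, eirp_db, eirp_ref_db, delta_v_db):
    # (1) Remove per-transmitter EIRP (power + gain) variability relative to a
    #     reference level; (2) subtract the high-gain-antenna term delta_v
    #     ("corrected peak SNR = peak SNR - delta_v", cf. the Figure 18 caption).
    return peak_snr_db - (eirp_db - eirp_ref_db) - delta_v_db

# Illustrative numbers: a transmitter 4 dB above the reference level and a
# reflection off boresight whose antenna-filtering correction is 6 dB.
print(calibrate_peak_snr(18.0, eirp_db=31.0, eirp_ref_db=27.0, delta_v_db=6.0))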
Figure 1. Example of a Soil Moisture Active Passive (SMAP)-Reflectometry (SMAP-R) measurement over sea ice.
Figure 2. SMAP-R coverage observed for (a) 1 day (285 observations), (b) 15 days (7168 observations), (c) 1 month (14,449 observations), and (d) 2 months (30,500 observations).
Figure 3. Peak signal-to-noise ratio (SNR) of the GPS reflections captured by (a) SMAP-R at V-polarization with 25 ms integration time and (b) CYGNSS, for 1 day of measurements with 1000 ms integration time. Peak SNR plotted in dB.
Figure 4. Peak signal-to-noise ratio (SNR) of the GPS reflections captured by (a) SMAP-R at V-polarization with 25 ms integration time and (b) TDS-1, for 1 day of measurements with 1000 ms integration time. Peak SNR plotted in dB.
Figure 5. GPS transmitted power P_tx for the 32 satellites of the constellation; the key information is the variation range of the transmitted power. Each color corresponds to a different GPS satellite. (a) GPS Block IIF satellites are represented by circles, (b) Block IIRM by squares, and (c) Block IIR by crosses.
Figure 6. GPS antenna gain G_tx for the 32 satellites of the constellation; the key information is the variation range of the gain. Each color corresponds to a different GPS satellite. (a) GPS Block IIF satellites are represented by circles, (b) Block IIRM by squares, and (c) Block IIR by crosses.
Figure 7. Combined GPS Equivalent Isotropically Radiated Power (EIRP) (P_tx G_tx) for the measured range of incidence angles (37.3° to 42.7°) and the 32 satellites of the constellation; the main information in this plot is the total variation range. Each color corresponds to a different GPS satellite. (a) GPS Block IIF satellites are represented by circles, (b) Block IIRM by squares, and (c) Block IIR by crosses.
Figure 8. Temporal analysis of the variations in the combined transmitted power and gain observed for four GPS satellites over a period of 12 months. The mean variability on the vertical axis is due to antenna gain variations with incidence angle; the standard deviation of that variability is due to antenna gain variations with azimuth angle.
Figure 9. Delay–Doppler maps of the scattered power measured by SMAP-R for (a) the ocean surface at the edge of the Caribbean Sea between Puerto Rico and Caracas, (b) arid land in the middle of Australia, near the Neale Junction Nature Reserve, and (c) sea ice in the Kara Sea, Russia.
Figure 10. Top: delay–Doppler maps of the scattered power measured by SMAP-R over a sea ice surface for (a) 42.5°, (b) 40°, and (c) 37.5° incidence angles. Bottom: iso-delay lines (blue ellipses) and iso-Doppler lines (blue hyperbolae) computed from SMAP-R and GPS geometries and velocities, with the antenna beam pattern limited to the −3 dB beamwidth (red) superimposed, for the same (d) 42.5°, (e) 40°, and (f) 37.5° incidence angles. Note that these are not to scale: the scattering area is an ellipse with a 1 km major axis, while the antenna beam pattern diameter is 40 km. The antenna boresight is shown as a red dot.
Figure 11. Top: delay–Doppler maps of the scattered power measured by SMAP-R over an ocean surface for (a) 42.5°, (b) 40°, and (c) 37.5° incidence angles. Bottom: iso-delay lines (blue ellipses) and iso-Doppler lines (blue hyperbolae) computed from actual SMAP-R and GPS geometries and velocities, with the antenna beam pattern (red) superimposed, for the same (d) 42.5°, (e) 40°, and (f) 37.5° incidence angles. Note that these are not to scale: the scattering area is an ellipse with a 200 km major axis, while the antenna beam pattern diameter is represented by the −3 dB beamwidth, i.e., 40 km. The antenna boresight is shown as a red dot.
Figure 12. Spatially filtered effective surface scattering area used for (a) 40° and (b) 42.5° incidence angles.
Figure 13. Arctic sea ice concentration reported by the National Snow and Ice Data Center (NSIDC) [43], University of Colorado Boulder, for (a) December 2017 and (b) March 2018. Ice concentration remained constant from December to March. Images courtesy of the NSIDC, University of Colorado, Boulder. The images are derived from the Sea Ice Index NSIDC data product, which relies on NASA-developed methods using passive microwave data from the Defense Meteorological Satellite Program (DMSP) F-18 Special Sensor Microwave Imager/Sounder (SSMIS). The Sea Ice Index [44] was developed by the NSIDC with financial support from NOAA NESDIS and in cooperation with NOAA NGDC. Color scale from 0% (dark blue) to 100% (white) in 10% increments.
Figure 14. Arctic sea ice primary stage of development obtained from the NOAA National Ice Center website (https://www.natice.noaa.gov) for February 2018. Brown corresponds to old ice; red, orange, and blue to three types of first-year ice; pink to new ice; and purple to young ice. The white dotted box defines the calibration target area selected for this study.
Figure 15. Impact of the direct signal power and gain differences between GPS satellites for February 2018 data. A total correction of up to 4 dB is observed from minimum to maximum values, which applies to both H-pol and V-pol.
Figure 16. February 2018 SMAP-R samples as a function of the GPS transmitter term (P_tx + G_tx): (a) uncalibrated data over a small area of the calibration target area (south-west corner) and (b) calibrated data over the same area. Calibrated values correspond to radar cross-section values after applying calibration Equation (5), under the coherent assumption.
Figure 17. Impact of the SMAP high-gain antenna filtering correction for February 2018 data. A total correction of up to 10 dB is observed; the larger errors correspond to reflections farther from the SMAP antenna boresight. This correction applies to both H-pol and V-pol.
Figure 18. Corrections applied to the peak SNR as a function of incidence angle for February 2018: corrected peak SNR = peak SNR − Δv.
Figure 19. February 2018 SMAP-R samples as a function of the high-gain antenna incidence angle: (a) uncalibrated and (b) calibrated. Calibrated values correspond to radar cross-section values after applying calibration Equation (5), under the coherent assumption.
Figure 20. SMAP-R measurements for February 2018: (a) uncalibrated V-pol, (b) calibrated V-pol, (c) uncalibrated H-pol, and (d) calibrated H-pol. Calibration includes both the correction of the GPS transmitter variability (collocating direct power information from CYGNSS) and the correction of the SMAP high-gain antenna effect. Calibrated values correspond to radar cross-section values after applying calibration Equation (5), under the coherent assumption.
Figure 21. Standard deviation representing the spatial variability observed for February 2018 data: (a) uncalibrated V-pol, (b) calibrated V-pol, (c) uncalibrated H-pol, and (d) calibrated H-pol. The spatial variability is analyzed in 1° × 1° (lat/lon) boxes. Calibrated values correspond to radar cross-section values after applying calibration Equation (5), under the coherent assumption.
12 pages, 7380 KiB  
Letter
Life Signs Detector Using a Drone in Disaster Zones
by Ali Al-Naji, Asanka G. Perera, Saleem Latteef Mohammed and Javaan Chahl
Remote Sens. 2019, 11(20), 2441; https://doi.org/10.3390/rs11202441 - 21 Oct 2019
Cited by 62 | Viewed by 12776
Abstract
In the aftermath of a disaster, such as an earthquake, flood, or avalanche, ground search for survivors is usually hampered by unstable surfaces and difficult terrain. Drones now play an important role in these situations, allowing rescuers to locate survivors and allocate resources to saving those who can be helped. The aim of this study was to explore the utility of a drone equipped for human life detection with a novel computer vision system. The proposed system uses image sequences captured by a drone camera to remotely detect the cardiopulmonary motion caused by periodic chest movement of survivors. The results for eight human subjects and one mannequin in different poses show that motion detection on the body surface of survivors is likely to be useful for detecting life signs without any physical contact. The results presented in this study may lead to a new approach to life detection and remote life-sensing assessment of survivors. Full article
(This article belongs to the Section Remote Sensing Image Processing)
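As a rough illustration of the denoising chain summarized in Figure 5 below, a minimal Python sketch (assuming PyWavelets and a 1-D frame-averaged intensity trace as input; all names here are illustrative, not the authors' code) might look like this:

```python
# Sketch of the Figure 5 cleanup chain: db20 wavelet denoising with a soft,
# level-dependent universal threshold, then a 5-point moving average.
import numpy as np
import pywt

def denoise_and_smooth(i_y_avg, wavelet="db20", level=4, span=5):
    coeffs = pywt.wavedec(i_y_avg, wavelet, level=level)
    denoised = [coeffs[0]]                       # keep approximation coefficients
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745            # MAD noise estimate per level
        thresh = sigma * np.sqrt(2.0 * np.log(len(i_y_avg)))  # universal threshold
        denoised.append(pywt.threshold(detail, thresh, mode="soft"))
    clean = pywt.waverec(denoised, wavelet)[: len(i_y_avg)]
    kernel = np.ones(span) / span                # 5-point moving average (panel c)
    return np.convolve(clean, kernel, mode="same")
```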
Show Figures

Graphical abstract

Figure 1. Experimental setup of the proposed life detector system.
Figure 2. Data acquisition from human subjects and a mannequin at different poses.
Figure 3. Schematic diagram demonstrating the process by which contactlessly obtained video data were acquired using a drone to detect life signs of both human subjects and a mannequin.
Figure 4. ROI selection based on the OpenPose approach, where the chest region was selected by drawing a bounding box (red) around the upper torso (magenta) for (a) a human subject and (b) a mannequin.
Figure 5. Wavelet signal denoising method: (a) input signal I_Y_avg(t); (b) denoised signal using wavelet = wdenoise(I_Y_avg, 4, 'Wavelet','db20', 'DenoisingMethod','UniversalThreshold', 'ThresholdRule','Soft', 'NoiseEstimate','LevelDependent'); and (c) smoothed signal based on a moving average filter with span equal to 5.
Figure 6. The graphical user interface (GUI) main panel of the proposed life detector system for the human subject (alive).
Figure 7. The graphical user interface (GUI) main panel of the proposed life detector system for the mannequin (deceased).
Figure 8. Subjects at different poses: (a) Pose 1: back position; (b) Pose 2: side position facing the camera; (c) Pose 3: side position not facing the camera; and (d) Pose 4: stomach position.
Figure 9. Multiple subject detection.
20 pages, 5956 KiB  
Article
Guided Next Best View for 3D Reconstruction of Large Complex Structures
by Randa Almadhoun, Abdullah Abduldayem, Tarek Taha, Lakmal Seneviratne and Yahya Zweiri
Remote Sens. 2019, 11(20), 2440; https://doi.org/10.3390/rs11202440 - 21 Oct 2019
Cited by 18 | Viewed by 4307
Abstract
In this paper, a Next Best View (NBV) approach with a profiling stage and a novel utility function for 3D reconstruction using an Unmanned Aerial Vehicle (UAV) is proposed. The proposed approach performs an initial scan in order to build a rough model of the structure, which is later used to improve coverage completeness and reduce flight time. Then, a more thorough NBV process is initiated, utilizing the rough model in order to create a dense 3D reconstruction of the structure of interest. The proposed approach exploits reflectional symmetry if it exists in the initial scan of the structure. The proposed NBV approach is implemented with a novel utility function that consists of four main components: information theory, model density, traveled distance, and predictive measures based on symmetries in the structure. This system outperforms classic information gain approaches, with higher density, greater entropy reduction, and better coverage completeness. Simulated and real experiments were conducted, and the results show the effectiveness and applicability of the proposed approach. Full article
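As a sketch of how such a multi-term utility can be combined, the weighting scheme below is an illustrative assumption rather than the paper's exact formulation:

```python
# Illustrative NBV utility: reward information gain, density gain, and
# symmetry-predicted coverage; penalize travel distance. Weights are assumed.
def view_utility(entropy_reduction, density_gain, travel_dist,
                 symmetry_overlap, w=(1.0, 1.0, 0.5, 0.5)):
    w_info, w_dens, w_dist, w_sym = w
    return (w_info * entropy_reduction      # information-theoretic term
            + w_dens * density_gain         # expected model-density gain
            - w_dist * travel_dist          # discourage long flights
            + w_sym * symmetry_overlap)     # predicted coverage via symmetry

# The next best view is the candidate maximizing this utility, e.g.:
# best = max(candidates, key=lambda v: view_utility(*v))
```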
Show Figures

Figure 1. A flowchart of the proposed Guided Next Best View (NBV) method.
Figure 2. The process of ray tracing. The rays are shown in pink, occupied voxels are green, free voxels are transparent, and unknown voxels are blue. (a) Virtual rays are cast from the selected viewpoint (shown in yellow) while respecting the sensor limitations. (b) The rays traverse the occupancy map until either the ray reaches the sensor's maximum range or the ray is blocked by an occupied cell.
Figure 3. A 2D example illustrating the cell density calculation method. (a) Cells are represented as a 2D grid, where points in each cell are presented as red dots, and circles illustrate the neighborhood of two point samples. (b) The classical method for computing density, where the density is simply the number of points in each grid cell. (c) The density calculated using the proposed method, where a neighborhood search is performed for each point to get a better density estimate.
Figure 4. Profiling process following the structure bounding box. The cumulative profile and path are shown after executing: (a) one waypoint; (b) three waypoints; and (c) eight waypoints.
Figure 5. Example of AGVS performed using different scale values of b^0, b^1, and b^2, respectively, multiplied by the discretization length L.
Figure 6. (a) Low-density frontier voxels shown in purple; (b) the centroids (yellow) of the clusters of the frontier voxels; (c) FOV planes visualization where two sensors are mounted on the UAV; and (d) an example of the three layers to cover an example frontier voxel (cube) cluster.
Figure 7. The circular collision box in purple created around the UAV locations to check the validity of the viewpoints. The path of the UAV is shown in blue and the scans are presented in dark red.
Figure 8. The path followed by the UAV in simulation around the aircraft model using three different utility functions: (a) our proposed utility; (b) Isler et al. [29] utility; and (c) Bircher et al. [17] utility.
Figure 9. The voxel point density vs. the number of iterations, showing the proposed approach's performance as compared to other state-of-the-art utility functions.
Figure 10. The path followed by the UAV in simulation around the power plant structure using three different utility functions: (a) our proposed utility; (b) Isler et al. [29] utility; and (c) Bircher et al. [17] utility.
Figure 11. The density and total entropy attributes along the iterations during the power plant coverage using the proposed Guided NBV. The areas where the viewpoint generator switched to FVS are shown in green.
Figure 12. The quadrotor hardware components.
Figure 13. The path followed by the UAV shown in orange, including the profiling stage. The planned trajectory is shown in black and the reconstruction of the processed point cloud is shown using OctoMap voxels.
Figure 14. The density and total entropy attributes along the iterations during the indoor real experiment.
Figure 15. The 3D reconstructed representative model utilizing the images collected during the NBV mission. The locations of the gathered images are presented as green dots.
Figure 16. The path followed by the UAV shown in orange, including the profiling stage. The planned trajectory is shown in black and the reconstruction of the processed point cloud is shown using OctoMap voxels.
Figure 17. The density and total entropy attributes along the iterations during the indoor real experiment for the airplane model.
Figure 18. The 3D reconstructed airplane model utilizing the images collected during the NBV mission. The locations of the gathered images are presented as green dots.
22 pages, 4997 KiB  
Article
Crop Growth Condition Assessment at County Scale Based on Heat-Aligned Growth Stages
by Yonglan Qian, Zhengwei Yang, Liping Di, Md. Shahinoor Rahman, Zhenyu Tan, Lei Xue, Feng Gao, Eugene Genong Yu and Xiaoyang Zhang
Remote Sens. 2019, 11(20), 2439; https://doi.org/10.3390/rs11202439 - 21 Oct 2019
Cited by 18 | Viewed by 4074
Abstract
Remotely sensed data have been used in crop condition monitoring for decades. Traditionally, crop growth conditions were assessed by comparing the Normalized Difference Vegetation Index (NDVI) of the current year and past years at a pixel scale on the same calendar day. The assumption of this comparison is that the different years' crops were at the same growing stage on the same day. However, this assumption is often violated in reality. This paper proposes to combine remotely sensed data and meteorological data to assess corn growth conditions at the same growth stages at the county level. The proposed approach uses the active accumulated temperature (AAT) computed from Daymet, a daily weather data product, to align different years' NDVI time series at the same growth stages estimated from AATs. The study area covers Carroll County, Iowa. The best index slope extraction (BISE) method and a Savitzky–Golay filter are used to filter noise and to reconstruct 11 years of corn growing season NDVI time series from the 250 m MODIS daily surface reflectance product (MOD09GQ). The corn growth stages are identified every year, with precise Julian dates, from the AAT time series. The corn growth conditions are assessed based on the aligned growth stages. The validation of the assessed crop conditions is performed against National Agricultural Statistics Service (NASS) reports. The study indicates that the crop condition assessment results based on aligned growth stages are consistent with the NASS reported results and are more reliable than the results based on the same calendar days. The proposed method provides not only crop growth condition information but also crop phenology information. Potentially, it can help improve crop yield prediction, since it can effectively measure crop growth changes with NDVI and AAT data. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
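A minimal sketch of the heat-alignment idea, assuming a 10 °C base temperature for corn and daily minimum/maximum temperatures from Daymet (function names are illustrative):

```python
# Accumulate active temperature (daily mean above a base) and compare NDVI
# between years at equal AAT, i.e., at the same estimated growth stage.
import numpy as np

def active_accumulated_temperature(tmin, tmax, base=10.0):
    tmean = (np.asarray(tmin) + np.asarray(tmax)) / 2.0
    active = np.where(tmean >= base, tmean, 0.0)   # only "active" days count
    return np.cumsum(active)                       # AAT series (degree-days)

def ndvi_at_same_stage(aat, ndvi, aat_ref, ndvi_ref, stage_aat):
    # AAT is non-decreasing, so both years can be interpolated at the same
    # AAT value (the same growth stage) rather than the same calendar day.
    return np.interp(stage_aat, aat, ndvi), np.interp(stage_aat, aat_ref, ndvi_ref)
```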
Show Figures

Graphical abstract

Figure 1. The study area (Carroll County) in the State of Iowa, U.S. The background grey map is a daily MODIS NDVI image. The blue boundary in the small overview map overlaid at the lower left corner illustrates the State of Iowa. The color map overlaid in the middle is the NASS Cropland Data Layer (CDL) map of Carroll County. The dark green, yellow, and grey represent soybeans, corn, and development/open space categories. The light green and pink represent grass land and alfalfa.
Figure 2. Flowchart of the corn growth condition assessment methodology based on heat-aligned crop growth stages. The left part shows the procedures for producing noise-filtered NDVI time series data; the right part illustrates the procedures for computing active accumulated temperature (AAT), estimating corn growth stages, and assessing crop growth conditions based on aligned crop growth stages.
Figure 3. The original (in blue) and smoothed (in red) NDVI time series of a pure corn pixel. The extremely low NDVI values (in blue) were removed and reconstructed by the filtering and smoothing process.
Figure 4. The corn phenologies in Carroll County from 2007 to 2017 and the NDVI baseline (dashed line). The corn phenology of the year 2012 (orange line) is abnormal and excluded from the NDVI baseline calculation. The 2017 NDVI is the study target and is not included in the baseline calculation.
Figure 5. (a) Left: the raw NDVI image and its histogram; right: the smoothed NDVI image and its histogram. The smoothed and reconstructed NDVI image illustrates field and land cover patterns, and its histogram is concentrated around 0.9, which is consistent with the peak vegetation period on August 3, 2017. (b) The processed NDVI means of 2007 and 2017 (upper part) and the standard deviations of the NDVIs of 2007 and 2017 (lower part).
Figure 6. The AAT curves covering the corn growing season in Carroll County from 2007 to 2017 and the AAT baseline (dashed line). The abnormal year 2012 is excluded from the baseline calculation.
Figure 7. Corn NDVI curve (corn phenology) and AAT curve covering the growing season of 2012 (a given study year) compared with the baseline based on the same Julian calendar day. The green solid and black dashed lines are the 2012 NDVI curve and NDVI baseline, respectively. The pink solid and red dashed lines are the 2012 AAT curve and AAT baseline. Compared with the NDVI baseline, the 2012 corn phenology is significantly earlier than the baseline (the entire NDVI curve appears left of the baseline). The 2012 AAT curve is above the AAT baseline, which means the 2012 crop reached the same growth stages earlier.
Figure 8. The NDVIs and the needed Julian dates corresponding to the ΔAATs of the given year and the normal reference year. The baseline (normal year) and given study year NDVI curves (corn phenology) are in red and blue, respectively. The purple and green curves represent the dates that crops reach the same growth stages (or the same ΔAAT) in the normal year and the given study year, respectively. For a given ΔAAT (or growth stage), comparing the two NDVI values gives an indication of the crop growth condition, while comparing Julian days provides information on the timing of reaching the given crop growth stage.
Figure 9. Estimated corn growth stage comparison in 2012, 2013, 2016, and 2017.
Figure 10. The 2012 NDVIs at SGS and the 2011 NDVIs at SGS and SJD, with the corresponding NASS reported data for validation. The x-axis represents the 2012 growth stage (with Julian date). The 2011 NDVIs at SGS and SJD are the red and green solid lines, respectively; they are aligned with the 2012 growth stages and Julian days, respectively. The 2012 NDVI at SGS is the blue solid line. The 2012 NASS reported percentage of crop in the good and excellent categories (purple dashed line) is lower than the corresponding 2011 NASS reported data (light blue dash-dot line). Clearly, the results based on SJD NDVI (green solid line) are not consistent with the NASS reported crop condition data.
24 pages, 14042 KiB  
Review
Mitigation of Radio Frequency Interference in Synthetic Aperture Radar Data: Current Status and Future Trends
by Mingliang Tao, Jia Su, Yan Huang and Ling Wang
Remote Sens. 2019, 11(20), 2438; https://doi.org/10.3390/rs11202438 - 21 Oct 2019
Cited by 105 | Viewed by 8817
Abstract
Radio frequency interference (RFI) is a major issue in accurate remote sensing by a synthetic aperture radar (SAR) system, posing a great hindrance to raw data collection, image formation, and the subsequent interpretation process. This paper provides a comprehensive study of the RFI mitigation techniques applicable to an SAR system. From the view of spectrum allocation, possible terrestrial and spaceborne RFI sources affecting SAR systems and their geometry are analyzed. Typical signal models for various RFI types are provided, together with many illustrative examples from real measured data. Then, advanced signal processing techniques for removing RFI are reviewed, and the advantages and drawbacks of each approach are discussed in terms of applicability. A discussion of future trends is provided from the perspective of cognitive, integrated, and adaptive designs. This review serves as a reference for future work on the implementation of the most suitable RFI mitigation scheme for an airborne or spaceborne SAR system. Full article
(This article belongs to the Special Issue Radio Frequency Interference (RFI) in Microwave Remote Sensing)
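As a simple illustration of one classical technique in the surveyed family, the sketch below applies frequency-domain notch filtering to narrow-band RFI in a single range line (the 3-sigma detection threshold is an illustrative assumption):

```python
# Detect spectral bins dominated by narrow-band RFI and notch (zero) them.
import numpy as np

def notch_filter_rfi(range_line, k_sigma=3.0):
    spec = np.fft.fft(range_line)
    mag = np.abs(spec)
    thresh = np.median(mag) + k_sigma * np.std(mag)  # assumed detection rule
    contaminated = mag > thresh
    spec[contaminated] = 0.0                         # notch the flagged bins
    return np.fft.ifft(spec), contaminated
```

Zeroing bins also removes useful signal energy, which is precisely the trade-off that motivates the more advanced mitigation methods the review covers.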
Show Figures

Figure 1. Distribution of observed radio frequency interference (RFI) sources from different frequency bands spread over the globe. The circles with different colors correspond to the RFI sources observed in various frequency bands, as shown in the legend at the upper right corner. Captured from the database of frequency allocations for microwave remote sensing and observed radio frequency interference [4].
Figure 2. A particular example: the range spectrum of the synthetic aperture radar (SAR) echoes (a) without RFI and (b) with RFI.
Figure 3. The effects of RFI on the image formation process. (a) Variation of the estimated azimuth Doppler rate with range blocks in the cases with and without interference. (Courtesy of Tao et al. [8]) (b) Sectional drawings of the point target response after matched filtering using biased parameters under different interference-to-signal ratios (ISR). (Courtesy of Tao et al. [5])
Figure 4. Illustration of RFI's resulting amplitude and phase distortion on real measured NASA UAVSAR data. (a) Pauli-coded image. (b) Entropy parameters and (c) anisotropy obtained by Cloude–Pottier polarimetric decomposition. The regions in dashed lines highlight the anomalies introduced by interference artifacts. (d) Evaluation of the distortion to the polarization signatures under various RFI conditions. (Courtesy of Tao et al. [5])
Figure 5. Illustration of allocated radio services in the vicinity of (a) P-band: 432–438 MHz and (b) L-band: 1215–1300 MHz.
Figure 6. Illustration of the terrestrial RFI case. (a) Possible terrestrial radio emitters. (b) Interfering geometry of terrestrial RFI. (c) SAR image contaminated by RFI and (d) without the presence of RFI.
Figure 7. Illustration of terrain-scattered interference. (a) Geometry of the space-borne RFI interfering case. A case example in real measured C-band Sentinel-1 SAR data: (b) VV image, (c) VH image, and (d) pseudo-color-coded image with VV-Red, VH-Green, and VV/VH-Blue.
Figure 8. Illustration of narrow-band interference in (a) range spectrum and (b) range spectrum–azimuth-time representation.
Figure 9. Illustration of wide-band interference in (a) range spectrum and (b) its spectrogram.
Figure 10. Illustration of broadband noise-like interference in (a) frequency spectrum and (b) its spectrogram.
Figure 11. Illustration of the system architecture for future SAR systems incorporating an RFI mitigation strategy.
17 pages, 16086 KiB  
Article
A New Processing Chain for Real-Time Ground-Based SAR (RT-GBSAR) Deformation Monitoring
by Zheng Wang, Zhenhong Li, Yanxiong Liu, Junhuan Peng, Sichun Long and Jon Mills
Remote Sens. 2019, 11(20), 2437; https://doi.org/10.3390/rs11202437 - 20 Oct 2019
Cited by 12 | Viewed by 3592
Abstract
Due to the high temporal resolution (e.g., 10 s) required, and the large data volumes (e.g., 360 images per hour) that result, there remain significant issues in processing continuous ground-based synthetic aperture radar (GBSAR) data. These include the delay in creating displacement maps, the cost of computational memory, and the loss of temporal evolution when all data are processed simultaneously. In this paper, a new processing chain for real-time GBSAR (RT-GBSAR) is proposed on the basis of the interferometric SAR small baseline subset concept, whereby GBSAR images are processed unit by unit. The outstanding issues are resolved by the proposed RT-GBSAR chain, which has three notable features: (i) a low computational memory requirement; (ii) insights into the temporal evolution of surface movements through temporarily coherent pixels; and (iii) the real-time capability of processing a theoretically infinite number of images. The feasibility of the proposed RT-GBSAR chain is demonstrated through its application to both a fast-changing sand dune and a coastal cliff with submillimeter precision. Full article
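A minimal sketch of the unit-by-unit idea (the `process_unit` callback, standing in for interferogram formation, APS removal, and the SBAS inversion, is a hypothetical placeholder, not the authors' API):

```python
# Consume a theoretically endless image stream in fixed-size units so that
# memory stays bounded and a displacement result is produced per unit.
def rt_gbsar_stream(image_stream, process_unit, unit_size=30):
    unit = []
    for slc in image_stream:          # SLC images arriving every ~10 s
        unit.append(slc)
        if len(unit) == unit_size:
            yield process_unit(unit)  # displacement time series for this unit
            unit = unit[-1:]          # keep the last image to chain units
```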
Show Figures

Figure 1. Schematic of the real-time ground-based synthetic aperture radar (RT-GBSAR) processing chain. SLC is single-look complex.
Figure 2. Flowchart of the small baseline subset (SBAS) time series analysis in the RT-GBSAR chain. APS is atmospheric phase screen.
Figure 3. Overview of the sand dune at Feicuidao on the Changli Gold Coast, Hebei Province, China. (a) Oblique photograph of the site. Three in-situ corner reflectors (CRs) are marked with yellow circles. Two areas of interest are roughly outlined: Area #1 is covered with sparse vegetation and Area #2 is devoid of vegetation. (b) A corresponding amplitude image of this site.
Figure 4. RT-GBSAR results of the sand dune dataset. (a–j) The displacement maps of all units from the first to the last, respectively. The cumulative displacement map with respect to units 3–6 is given in subfigure (k) and that of units 8–10 in (l). Note that Area #1 is outlined in black and Area #2 in red.
Figure 5. Sequential amplitude images during the period from 05:17:07 to 05:19:27.
Figure 6. Overview of the coastal cliff. (a) Deployment of the FastGBSAR system for data collection. (b) Approximate co-registration of a GBSAR amplitude image with the planimetric view of the observing site in Google Earth.
Figure 7. RT-GBSAR results of the coastal cliff dataset. (a–n) The displacement maps of all units from the first to the last. The cumulative displacement map with respect to pixels that are coherent over the entire observation period is given in (o).
Figure 8. Tidal elevation for North Shields, Tynemouth, UK on 16 November 2016, obtained from the National Tidal and Sea Level Facility [23]. The variation of tidal elevation over the period of GBSAR observation is marked in red.
Figure 9. The time series of displacements and APS for the five selected pixels (P1, P2, P3, P4, and P5).
Figure 10. Precision maps for coherent pixels without unwrapping errors in the sand dune application. The precision maps for unit 1 to unit 10 are shown in subfigures (a–j), respectively. The precision map with respect to units 3–6 is given in subfigure (k) and that of units 8–10 in (l).
Figure 11. Precision maps for coherent pixels without unwrapping errors in the coastal cliff application. The precision maps for each unit from the 1st to the 14th are shown in subfigures (a–n), respectively. The precision map with respect to the total units 1–14 is given in subfigure (o).
Figure 12. The number of units versus computational RAM and images per unit in RT-GBSAR.
17 pages, 6181 KiB  
Letter
Sea Ice Extent Detection in the Bohai Sea Using Sentinel-3 OLCI Data
by Hua Su, Bowen Ji and Yunpeng Wang
Remote Sens. 2019, 11(20), 2436; https://doi.org/10.3390/rs11202436 - 20 Oct 2019
Cited by 20 | Viewed by 4128
Abstract
Sea ice distribution is an important indicator of ice conditions and regional climate change in the Bohai Sea (China). In this study, we monitored the spatiotemporal distribution of the Bohai Sea ice in the winter of 2017–2018 by developing sea ice information indexes using 300 m resolution Sentinel-3 Ocean and Land Color Instrument (OLCI) images. We assessed and validated the index performance using Sentinel-2 MultiSpectral Instrument (MSI) images with higher spatial resolution. The results indicate that the proposed Normalized Difference Sea Ice Information Index (NDSIII_OLCI), which is based on OLCI Bands 20 and 21, can be used to rapidly and effectively detect sea ice but is somewhat affected by the turbidity of the seawater in the southern Bohai Sea. The novel Enhanced Normalized Difference Sea Ice Information Index (ENDSIII_OLCI), which builds on NDSIII_OLCI by also considering OLCI Bands 12 and 16, can monitor sea ice more accurately and effectively than NDSIII_OLCI and suffers less from interference from turbidity. The spatiotemporal evolution of the Bohai Sea ice in the winter of 2017–2018 was successfully monitored by ENDSIII_OLCI. The results show that this sea ice information index based on OLCI data can effectively extract sea ice extent in sediment-laden water and is well suited for monitoring the evolution of Bohai Sea ice in winter. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Coastal Areas)
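Assuming the index takes the usual normalized-difference form over the stated bands (the exact ENDSIII_OLCI combination additionally involves Bands 12 and 16), a minimal sketch of index computation and threshold segmentation:

```python
# Normalized-difference sea ice index from OLCI TOA reflectances, followed
# by threshold segmentation; the threshold itself would come from the Jenks
# natural-breaks step described in the paper.
import numpy as np

def ndsiii_olci(b20, b21, eps=1e-6):
    return (b20 - b21) / (b20 + b21 + eps)

def sea_ice_mask(b20, b21, threshold):
    # The direction of the inequality is an assumption for illustration.
    return ndsiii_olci(b20, b21) > threshold
```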
Show Figures

Graphical abstract

Figure 1. The study area of the Bohai Sea, including Liaodong Bay, Bohai Bay, and Laizhou Bay.
Figure 2. TOA reflectance values in all OLCI bands (a) and NDSIII_OLCI values (b) for sea ice, seawater, turbid seawater, land, snow, and cloud cover in the Bohai Sea. The whiskers in (a) depict the standard deviations of the TOA reflectance samples. The whiskers in (b) indicate the maximum and minimum ratio values of the sample. The box is determined by the 25th and 75th percentiles of the ratio values of the sample. The median value is marked as the line within the box.
Figure 3. Sample TOA reflectance values for sea ice and turbid seawater in OLCI Bands 12, 16, 20, and 21 in the Bohai Sea.
Figure 4. The histogram distributions of sampling points for the three sea ice development stages (freezing, stable, and melting stages) with NDSIII_OLCI (a–c) and ENDSIII_OLCI (d–f). The T_NDSIII and T_ENDSIII threshold values (dashed vertical lines) were determined for sea ice separation by the Jenks natural breaks method.
Figure 5. An example of true-color (a), NDSIII_OLCI (b), (b12−b16)/(b12+b16) (c), and ENDSIII_OLCI (d) feature extraction from OLCI imagery on 24 January 2018. A statistical histogram of the main types of surface cover (sea ice, seawater, turbid seawater, land, snow, and cloud) in each image is displayed alongside the corresponding image (e–g).
Figure 6. Sea ice extraction result for the entire Bohai Sea (top) on 1 February 2018 using threshold segmentation from ENDSIII_OLCI. The three true-color images in the first column (a,f,k) are the enlarged MSI validation images indicated by boxes I, II, and III in the top image, respectively. The next four columns present the results of sea ice extent extraction using NDSIII_OLCI (b,g,l), ENDSIII_OLCI (c,h,m), NDSI (d,i,n), and SVM (e,j,o).
Figure 7. Sea ice extraction results based on ENDSIII_OLCI from OLCI images with 300 m spatial resolution on 29 January 2018 (b) and 16 February 2018 (d). The two true-color images in the first column (a,c) are the MSI validation images with 60 m spatial resolution. The blue box in (b) represents the boundary of the validation image (a).
Figure 8. Spatiotemporal evolution of the extent of Bohai Sea ice during the winter of 2017–2018 from OLCI images using the ENDSIII_OLCI method (a–r). Red areas depict sea ice coverage.
Figure 9. The evolution of the Bohai Sea ice area extracted from OLCI images during the winter of 2017–2018.
21 pages, 8716 KiB  
Article
Quantifying Trends of Land Change in Qinghai-Tibet Plateau during 2001–2015
by Chao Wang, Qiong Gao and Mei Yu
Remote Sens. 2019, 11(20), 2435; https://doi.org/10.3390/rs11202435 - 20 Oct 2019
Cited by 48 | Viewed by 3966
Abstract
The Qinghai-Tibet Plateau (QTP) is among the ecosystems most sensitive to changes in global climate and human activities, and quantifying its consequent change in land-cover land-use (LCLU) is vital for assessing the responses and feedbacks of alpine ecosystems to global climate changes. In this study, we first classified annual LCLU maps for 2001–2015 in the QTP from MODIS satellite images, then analyzed the patterns of regional hotspots with significant land changes across the QTP, and finally associated these trends in land change with climate forcing and human activities. The pattern of land changes suggests that forests and closed shrublands expanded substantially in the southeastern mountainous region during 2001–2015, accompanied by massive meadow loss. Agricultural land abandonment and conversion under conservation policies occurred in the QTP, and newly-reclaimed agricultural land partially offset the loss, with a resulting net change of −5.1%. Although the urban area expanded by only 586 km2, mainly at the expense of agricultural land, its rate of change was the largest (41.2%). Surface water exhibited a large expansion of 5866 km2 (10.2%) in the endorheic basins, while mountain glaciers retreated by 8894 km2 (−3.4%), mainly in the southern and southeastern QTP. Warming and the implementation of conservation policies might promote shrub encroachment into grasslands and forest recovery in the southeastern plateau. While increased precipitation might contribute to the expansion of surface water in the endorheic basins, warming melts the glaciers in the south and southeast and complicates the hydrological services in the region. The substantial changes in land cover reveal the high sensitivity of the QTP to changes in climate and human activities. Rational conservation policies might mitigate the adverse impacts and help maintain the essential services provided by these important alpine ecosystems. Full article
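A sketch of the per-pixel significance screening implied by the figure captions below (land changes retained where p < 0.1); the simple linear-regression test here is an illustrative choice, not necessarily the authors':

```python
# Fit a linear trend to each pixel's 15-year annual series and keep pixels
# whose trend is significant at the stated level.
import numpy as np
from scipy.stats import linregress

def significant_trend(annual_stack, alpha=0.1):
    """annual_stack: array of shape (n_years, rows, cols)."""
    n_years, rows, cols = annual_stack.shape
    t = np.arange(n_years)
    slope = np.zeros((rows, cols))
    signif = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            res = linregress(t, annual_stack[:, i, j])
            slope[i, j], signif[i, j] = res.slope, res.pvalue < alpha
    return slope, signif
```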
Show Figures

Graphical abstract

Figure 1. Study area of the Qinghai-Tibetan Plateau (QTP): (A) elevation, major mountain ranges, and provincial and country borders; (B) provincial cities, rivers, and the endorheic and exorheic basins. The overlaid grids, in dark blue dashed lines, are the corresponding MODIS tiles (e.g., h25v06). Boundaries of the Changtang Plateau (shadowed with dashed lines in (B)) and the endorheic and exorheic basins were determined from the 15-s HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales) drainage basin dataset [42]. Map created using ArcGIS 10.6 (ESRI, CA, www.esri.com).
Figure 2. Land cover net gain and net loss in the Qinghai-Tibetan Plateau. FORE stands for all forest types; AGRI, agricultural land; BARE, bare ground; CLSH, closed shrubland; MEDO, meadow; OPSH, open shrubland; SNOW, permanent snow cover; STEP, steppe; SPAS, sparse vegetation; URBN, urban area; and WATR, water.
Figure 3. Net changes of (A) forest, (B) closed shrubland, (C) meadow, and (D) agriculture from 2001–2015. The red box in (A,B) indicates the forest habitats destroyed by the earthquake in 2008. Land changes are significant with a p-value < 0.1.
Figure 4. Sparse vegetation changes are associated with bare ground and grassland. Net changes of (A) steppe, (B) open shrubland, (C) sparse vegetation, and (D) bare ground from 2001–2015. Land changes are significant with a p-value < 0.1.
Figure 5. Spatial patterns of net changes in (A) permanent snow and ice and (B) surface water bodies. Land changes are significant with a p-value < 0.1.
Figure 6. Spatial patterns of the trends of (A) annual mean land surface temperature at daytime, (B) annual mean land surface temperature at nighttime, (C) annual mean precipitation, and (D) annual mean evapotranspiration in 2001–2015 (maps and histograms were created using ArcMap 10.6).
Figure A1. Flowchart of processing predictor variables. QA, quality control; VI, vegetation indices; EVI, Enhanced Vegetation Index; squares, source data and processed data; diamonds, operations; arrows, processing directions.
Figure A2. Spatial patterns of the trends of annual mean TWS anomalies from 2003–2015 (the trend was calculated using the R statistical software; maps and histograms were created using ArcMap 10.6, ESRI, CA).
Figure A3. (A) Population density change (Pop. Dens. Ch., in individuals per km²) between 2000 and 2015, derived from population density data acquired from the Gridded Population of the World, Version 4 (GPWv4) collection; (B) two main ecological restoration programs in the QTP (GFG, Grain for Green Program; NFCP, Natural Forest Conservation Program).
27 pages, 4345 KiB  
Article
Hyperspectral Unmixing with Gaussian Mixture Model and Spatial Group Sparsity
by Qiwen Jin, Yong Ma, Erting Pan, Fan Fan, Jun Huang, Hao Li, Chenhong Sui and Xiaoguang Mei
Remote Sens. 2019, 11(20), 2434; https://doi.org/10.3390/rs11202434 - 20 Oct 2019
Cited by 15 | Viewed by 3560
Abstract
In recent years, endmember variability has received much attention in the field of hyperspectral unmixing. To address the inaccuracy of endmember signatures, the endmembers are usually assumed to follow a statistical distribution. However, such distribution-based methods use the spectral information alone and do not fully exploit the possible local spatial correlation. When pixels lie in an inhomogeneous region, the abundances of neighboring pixels do not share the same prior constraints. Thus, in this paper, to achieve better abundance estimation performance, a method based on the Gaussian mixture model (GMM) and a spatial group sparsity constraint is proposed. To fully exploit the group structure, we use superpixel segmentation (SS) as preprocessing to generate the spatial groups. Then, we use a GMM to model the endmember distribution, incorporating the spatial group sparsity as a mixed-norm regularization into the objective function. Finally, under the Bayesian framework, the conditional density function leads to a standard maximum a posteriori (MAP) problem, which can be solved using generalized expectation-maximization (GEM). Experiments on simulated and real hyperspectral data demonstrate that the proposed algorithm has higher unmixing precision compared with other state-of-the-art methods. Full article
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
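As a sketch of what a mixed-norm spatial group-sparsity regularizer over superpixel groups can look like, using the common L2,1 choice (the paper's exact norm may differ):

```python
# L2,1-style penalty: L2 over each endmember's abundances within a superpixel
# group, summed (L1) across endmembers and groups.
import numpy as np

def group_sparsity_penalty(abundances, superpixel_labels):
    """abundances: (n_pixels, n_endmembers); superpixel_labels: (n_pixels,)."""
    penalty = 0.0
    for g in np.unique(superpixel_labels):
        group = abundances[superpixel_labels == g]        # pixels of one group
        penalty += np.sum(np.linalg.norm(group, axis=0))  # L2 within, L1 across
    return penalty
```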
Show Figures

Graphical abstract

Figure 1. Algorithm flow chart of the proposed spatial group sparsity constraint based on Gaussian mixture model (SGSGMM) method.
Figure 2. (a) The color images of the synthetic dataset. (b) The spectra of the endmember seeds used to construct the synthetic dataset, all extracted from the ASTER library.
Figure 3. The endmember spectral signatures of the synthetic dataset. The gray portion of the background represents the reflectance of the pure pixels at each wavelength position. The different colors represent different components, and the corresponding legend indicates their prior probabilities. The solid lines indicate the centers of the Gaussians μ_jk, and the dotted lines indicate the variance range of each Gaussian component, constructed as μ_jk ± 2·sqrt(σ̂_jk)·v_jk (σ̂_jk denotes the largest eigenvalue of Σ_jk; v_jk denotes the corresponding eigenvector).
Figure 4. Abundance map comparison for the ground truth, SGSGMM, Gaussian mixture model (GMM), spatial group sparsity regularized non-negative matrix factorization (SGSNMF), normal compositional model (NCM), and Beta compositional model (BCM).
Figure 5. The histograms of the pure pixels for the 5 materials. The x-axis represents the pure pixels of each material projected via PCA to 1-dimensional space, and the y-axis represents the proportion occupying each bin of the histogram. The probability of each distribution is calculated by multiplying the value of the density function at each bin location with the bin size.
Figure 6. (a) The original RGB image. (b) The corresponding ground truth materials of the Gulfport dataset. (c) The selected ROI area. (d) The corresponding ground truth materials in the ROI. (e) The superpixel map used for the experiment. (f) The wavelength reflectance of the mean spectral signatures of the 5 materials.
Figure 7. Abundance map comparison for the ground truth, SGSGMM, GMM, SGSNMF, NCM, and BCM.
Figure 8. The wavelength reflectance space of the endmember signatures estimated for the Gulfport dataset, with the same meaning as in Figure 5.
Figure 9. The estimated distributions and the histograms of pure materials for the SGSGMM, GMM, and NCM.
Figure 10. (a) The original RGB image. (b) The corresponding ground truth materials of the Salinas-A dataset. (c) The superpixel map used for the experiment. (d) The mean spectra of the 6 materials.
Figure 11. Abundance map comparison for the ground truth, SGSGMM, GMM, SGSNMF, NCM, and BCM.
Figure 12. The wavelength reflectance space of the endmember signatures estimated for the Salinas-A dataset, with the same meaning as in Figure 5.
Figure 13. The estimated distributions and the histograms of pure materials for the SGSGMM, GMM, and NCM.
Figure 14. The scatter plot of the synthetic dataset under different initial conditions. The gray dots are the pixels when projected by PCA. (a) The original endmember scatter plot of the synthetic dataset with the estimated GMM. (b) The endmember scatter plot with K-means initialization. (c) The endmember scatter plot with VCA initialization. (d) The endmember scatter plot with region-based VCA initialization.
Figure 15. Performance analysis of SGSNMF with respect to superpixels of different sizes. (a) w = 5. (b) w = 7. (c) w = 9. (d) w = 11. (e) RMSE of abundance values with respect to w.
21 pages, 3644 KiB  
Article
Assessing Legacy Effects of Wildfires on the Crown Structure of Fire-Tolerant Eucalypt Trees Using Airborne LiDAR Data
by Yogendra K. Karna, Trent D. Penman, Cristina Aponte and Lauren T. Bennett
Remote Sens. 2019, 11(20), 2433; https://doi.org/10.3390/rs11202433 - 20 Oct 2019
Cited by 29 | Viewed by 4872
Abstract
The fire-tolerant eucalypt forests of south eastern Australia are assumed to fully recover from even the most intense fires; however, surprisingly, very few studies have quantitatively assessed that recovery. The accurate assessment of the horizontal and vertical attributes of tree crowns after fire is essential to understand the fire's legacy effects on tree growth and on forest structure. In this study, we quantitatively assessed individual tree crowns 8.5 years after a 2009 wildfire that burnt extensive areas of eucalypt forest in temperate Australia. We used airborne LiDAR data validated with field measurements to estimate multiple metrics that quantified the cover, density, and vertical distribution of individual-tree crowns in 51 plots of 0.05 ha in fire-tolerant eucalypt forest across four wildfire severity types (unburnt, low, moderate, high). Significant differences in the field-assessed mean height of fire scarring as a proportion of tree height, and in the proportions of trees with epicormic (stem) resprouts, were consistent with the gradation in fire severity. Linear mixed-effects models indicated persistent effects of both moderate- and high-severity wildfire on tree crown architecture. Trees at high-severity sites had significantly less crown projection area and less live crown width as a proportion of total crown width than those at unburnt and low-severity sites. Significant differences in LiDAR-based metrics (crown cover, evenness, leaf area density profiles) indicated that tree crowns at moderate- and high-severity sites were comparatively narrow and more evenly distributed down the tree stem. These conical-shaped crowns contrasted sharply with the rounded crowns of trees at unburnt and low-severity sites and likely influenced both tree productivity and the accuracy of biomass allometric equations for nearly a decade after the fire. Our data provide a clear example of the utility of airborne LiDAR data for quantifying the impacts of disturbances at the scale of individual trees. The quantified effects of contrasting fire severities on the structure of resprouter tree crowns provide a strong basis for interpreting post-fire patterns in forest canopies and vegetation profiles in LiDAR and other remotely-sensed data at larger scales. Full article
(This article belongs to the Special Issue Remote Sensing to Assess Canopy Structure and Function)
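A minimal sketch of the vertical-profile metrics named in Figure 8 below, assuming a simple return-fraction profile as a stand-in for a true leaf area density (LAD) inversion:

```python
# Bin one tree's LiDAR return heights into a vertical profile and derive the
# summary metrics LADmax, LADmean, LADcv, HtmaxLAD, and HtminLAD.
import numpy as np

def lad_profile_metrics(heights, n_bins=20):
    """heights: above-ground heights (m) of a single tree's returns."""
    counts, edges = np.histogram(heights, bins=n_bins, range=(0, heights.max()))
    profile = counts / counts.sum()               # fraction of returns per bin
    centres = (edges[:-1] + edges[1:]) / 2.0
    pct_height = 100.0 * centres / heights.max()  # bin height as % of tree top
    return {
        "LADmax": profile.max(),
        "LADmean": profile.mean(),
        "LADcv": profile.std() / profile.mean(),  # coefficient of variation
        "HtmaxLAD": pct_height[profile.argmax()], # percentile height of max LAD
        "HtminLAD": pct_height[profile.argmin()], # percentile height of min LAD
    }
```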
Show Figures

Graphical abstract

Figure 1. Location of the study sites (coloured dots), extent of the 2016 LiDAR data, and the study area (defined by the overlap of the 2009 wildfire, the LiDAR data, and the distribution of the herb-rich Foothill Forest).
Figure 2. Field measurements of unburnt and severely burnt trees, including tree height metrics, crown-width metrics (bulk crown width of the live crown; total crown width to the outermost extremities, which could be dead), and the presence/absence of epicormic (stem) and basal resprouts.
Figure 3. Fish-eye lens views of tree crowns representing each fire-severity type nearly nine years after the 2009 wildfire: (a) unburnt, (b) low, (c) moderate, (d) high fire severity.
Figure 4. Illustration of the crown cover and crown density LiDAR metrics for individual unburnt and burnt trees as estimated from the LiDAR point cloud.
Figure 5. Comparison of top live height and diameter at breast height over bark (DBHOB) of field-measured trees among fire-severity types for both mature (20–50 cm DBHOB) and over-mature trees (>50 cm DBHOB). Graphs are standard boxplots (median as centre line), based on 20 to 59 trees per diameter class for each severity (20–50 cm: unburnt (UB)—34; low severity (L)—51; moderate severity (M)—49; high severity (H)—20; >50 cm: UB—45; L—40; M—44; H—59). p-values are calculated from linear mixed-effects models (LMEM), and different letters indicate significant differences between fire-severity types as indicated by post hoc tests (Fisher's LSD test).
Figure 6. Comparison of field-measured horizontal crown metrics (crown projection area and live crown width) among fire-severity types for both mature (left panels) and larger trees. Graphs are standard boxplots (median as centre line), based on 20 to 59 trees per diameter class for each of the severity types (20–50 cm: UB—34, L—51, M—49, H—20; >50 cm: UB—45, L—40, M—44, H—59). p-values are calculated from LMEM, and different letters indicate significant differences between fire-severity types as indicated by post hoc tests (Fisher's LSD test).
Figure 7. Comparison of LiDAR-derived crown metrics (crown cover, crown density, evenness index, and clumping index) among fire-severity types for trees delineated in the LiDAR data. Graphs are standard boxplots (median as centre line), based on 41 to 60 trees per severity type (UB—41, L—43, M—52, H—60). p-values are from LMEM, and different letters indicate significant differences between fire-severity types as indicated by post hoc tests (Fisher's LSD test).
Figure 8. Comparison of LiDAR-derived metrics extracted from leaf area density (LAD) profiles of individual trees (maximum LAD (LADmax), mean LAD (LADmean), coefficient of variation of LAD (LADcv), percentile height of maximum LAD (HtmaxLAD), and percentile height of minimum LAD (HtminLAD)) among fire-severity types. The mean LAD profile (blue line), percentile height of the minimum LAD (triangles), and percentile height of the maximum LAD (circles) are based on 41 to 60 trees per severity type (UB—41, L—43, M—52, H—60, as individual filled circles and triangles); the other values shown are the mean metrics. Superscript letters indicate pairwise comparisons at p < 0.001 (LADmax), p < 0.001 (LADcv), p = 0.461 (LADmean), p = 0.003 (HtmaxLAD), and p < 0.001 (HtminLAD). p-values were calculated from LMEM, and different letters indicate significant differences between fire-severity types as indicated by post hoc tests (Fisher's LSD test).
24 pages, 6474 KiB  
Article
Spatial Resolution Matching of Microwave Radiometer Data with Convolutional Neural Network
by Yade Li, Weidong Hu, Shi Chen, Wenlong Zhang, Rui Guo, Jingwen He and Leo Ligthart
Remote Sens. 2019, 11(20), 2432; https://doi.org/10.3390/rs11202432 - 19 Oct 2019
Cited by 14 | Viewed by 3999
Abstract
Passive multi-frequency microwave remote sensing is often plagued by the problems of low and non-uniform spatial resolution. In order to adaptively enhance and match the spatial resolution, an accommodative spatial resolution matching (ASRM) framework, composed of a flexible degradation model, a deep residual convolutional neural network (CNN), and adaptive feature modification (AdaFM) layers, is proposed in this paper. More specifically, a flexible degradation model, based on the imaging process of the microwave radiometer, is first proposed to generate suitable datasets for various levels of matching tasks. Secondly, a deep residual CNN is introduced to jointly learn the complicated degradation factors of the data, so that the resolution can be matched to fixed levels with state-of-the-art quality. Finally, the AdaFM layers are added to the network in order to handle arbitrary and continuous resolution matching problems between a start and an end level. Both simulated data and microwave radiation imager (MWRI) data from the Fengyun-3C (FY-3C) satellite have been used to demonstrate the validity and effectiveness of the method. Full article
(This article belongs to the Special Issue Image Super-Resolution in Remote Sensing)
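A PyTorch-style sketch of an AdaFM-type layer as described above: a per-channel (group) convolution modulates a frozen feature map, and a coefficient gamma in [0, 1] interpolates between the start level (identity) and the end level (full modulation). The details are assumed from the description, not the authors' exact implementation:

```python
import torch.nn as nn

class AdaFM(nn.Module):
    """Adaptive feature modification: one small filter per channel."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # groups=channels gives a depthwise (group) convolution, one filter
        # and bias per feature channel, as sketched in Figure 4b below.
        self.filt = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels)

    def forward(self, x, gamma=1.0):
        # gamma = 0 leaves the features unchanged (start level); gamma = 1
        # applies the full modulation (end level); intermediate values give
        # the continuous adjustment shown in Figures 14 and 15.
        return x + gamma * (self.filt(x) - x)
```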
Show Figures

Graphical abstract

Figure 1: Illustration of the accommodative spatial resolution match (ASRM) method.
Figure 2: The scan geometry of the Fengyun-3C (FY-3C) microwave radiation imager (MWRI).
Figure 3: The architecture of the deep residual convolutional neural network (CNN).
Figure 4: Schematic diagrams of the convolution operation in the convolutional layer (a) and the group convolution operation in the AdaFM layer (b).
Figure 5: Illustration of resolution adjustment.
Figure 6: The flowchart of the ASRM framework.
Figure 7: The synthetic scenario and the resolution match results: (a) simulated 18.7 GHz antenna temperature image; (b–g) match results of the Backus–Gilbert (BG) method, the Banach method, SRCNN, VDSR, SRResNet, and the basic network N_b; (h) simulated 89 GHz antenna temperature image; (i) synthetic brightness temperature image.
Figure 8: The along-track transects of the synthetic scenario, (a) around the strip and (b) around the background (both marked in Figure 7i).
Figure 9: The evaluation indexes of the test set (3-point smoothed): (a) PSNR; (b) SSIM; (c) IFOV'.
Figure 10: A pair of test set images and their resolution match results, with panels (a–h) as in Figure 7.
Figures 11 and 12: Resolution match results for two scenes of real 18.7 GHz MWRI data: (a) real 18.7 GHz antenna temperature image; (b–g) match results as in Figure 7; (h) match result of the basic network N_b with the degradation model (Equations (10) and (11)); (i) real 89 GHz antenna temperature image.
Figure 13: Typical output points of the adjustable network N_a^γ and its fitting curve.
Figures 14 and 15: Resolution enhancement results by the adjustable network N_a^γ for γ = 1, 0.8, 0.6, 0.4, 0.2, and 0, with γ = 1 matched to the 36.5 GHz channel (Figure 14) or the 23.8 GHz channel (Figure 15) and γ = 0 matched to the 89 GHz channel.
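The continuous matching behavior of the adjustable network N_a^γ (Figures 13–15) rests on the AdaFM idea: a depthwise (group) convolution whose kernel is blended between an identity filter and a learned modification filter by a coefficient γ. The PyTorch module below is a minimal illustrative sketch of that mechanism under common AdaFM conventions, not the authors' implementation; the class name, kernel size, and identity initialization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaFM(nn.Module):
    """Illustrative AdaFM-style layer (assumed design, not the paper's code).

    A depthwise (group) convolution whose effective kernel is interpolated
    between an identity filter (gamma = 0, start level) and a learned
    modification filter (gamma = 1, end level).
    """

    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        # One k x k filter per channel (groups == channels), i.e., a group convolution.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels)
        # Identity kernel: a centred 1 in each per-channel filter.
        identity = torch.zeros_like(self.conv.weight)
        identity[:, :, kernel_size // 2, kernel_size // 2] = 1.0
        self.register_buffer("identity", identity)

    def forward(self, x: torch.Tensor, gamma: float) -> torch.Tensor:
        # Linear kernel interpolation gives continuous matching levels at inference.
        weight = self.identity + gamma * (self.conv.weight - self.identity)
        bias = self.conv.bias * gamma
        return F.conv2d(x, weight, bias,
                        padding=self.conv.padding, groups=self.conv.groups)
```

In the usual AdaFM recipe, a base network is first trained for the start level, the depthwise filters are then fitted for the end level with the base frozen, and γ is swept continuously at inference, which matches the role of γ in Figures 14 and 15.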
19 pages, 9615 KiB  
Article
Upper Ocean Response to Two Sequential Tropical Cyclones over the Northwestern Pacific Ocean
by Jue Ning, Qing Xu, Tao Feng, Han Zhang and Tao Wang
Remote Sens. 2019, 11(20), 2431; https://doi.org/10.3390/rs11202431 - 19 Oct 2019
Cited by 13 | Viewed by 3771
Abstract
The upper ocean thermodynamic and biological responses to two sequential tropical cyclones (TCs) over the Northwestern Pacific Ocean were investigated using multi-satellite datasets, in situ observations and numerical model outputs. During Kalmaegi and Fung-Wong, three distinct cold patches were observed at the sea surface. [...] Read more.
The upper ocean thermodynamic and biological responses to two sequential tropical cyclones (TCs) over the Northwestern Pacific Ocean were investigated using multi-satellite datasets, in situ observations, and numerical model outputs. During Kalmaegi and Fung-Wong, three distinct cold patches were observed at the sea surface. The locations of these cold patches correlate closely with a relatively shallow depth of the 26 °C isotherm (D26), a shallow mixed layer depth (MLD), and low upper ocean heat content (UOHC). Enhanced surface chlorophyll a (chl-a) concentration was detected in these three regions as well, mainly due to TC-induced mixing and upwelling together with terrestrial runoff. Moreover, a pre-existing ocean cyclonic eddy (CE) was found to significantly modulate the magnitude of the surface cooling and the chl-a increase. With the deepening of the MLD on the right side of the TCs, the mixed layer temperature decreased and the salinity increased. The sequential TCs had superimposed effects on the upper ocean response. The possible causes of the sudden track change in the sequential-TC scenario were also explored: both atmospheric and oceanic conditions play noticeable roles in the abrupt northward turn of the subsequent TC Fung-Wong. Full article
(This article belongs to the Special Issue Tropical Cyclones Remote Sensing and Data Assimilation)
Show Figures

Graphical abstract

Figure 1: The tracks (a), intensities (b,c), and translation speeds (d,e) of Kalmaegi (left) and Fung-Wong (right). The "*" symbols in (a) denote the TC center locations at 00:00 UTC on the corresponding dates; black triangles B1–B5 mark the five array stations, and magenta pentagrams A1 and A2 the locations of the two Argo floats.
Figure 2: Comparisons of SST, SSTD, and SSHA between satellite observations (top) and HYCOM data (bottom): (a,b) SST before Kalmaegi; (c,d) SSTD during the passage of Kalmaegi (SST difference between September 15 and 9); (e,f) negative SSHA before Kalmaegi in the area marked in (c).
Figure 3: Satellite-observed evolution of SST (a) before Kalmaegi, (b,c) during Kalmaegi and before Fung-Wong, and (d–f) after Kalmaegi and during Fung-Wong; the black, gray, and red boxes mark typical regions of distinct sea surface cooling.
Figure 4: Spatial distributions of (a,b) MLD, (c,d) D26, and (e,f) UOHC from HYCOM output before the formation of Kalmaegi on September 9 (top) and of Fung-Wong on September 17 (bottom).
Figure 5: SSTD during the passages of (a) Kalmaegi on September 14 and (b) Fung-Wong on September 19, overlaid with negative SSHA contours (cm); closed contours indicate the location of the CE.
Figure 6: Temperature and salinity profiles observed by Argo floats A1 (a,b) and A2 (c,d) on different dates.
Figure 7: Variations of (a,c) temperature and (b,d) salinity vertical profiles at Stations B4 (left) and B5 (right); vertical lines mark the passages of Kalmaegi and Fung-Wong.
Figure 8: Variations of (a,d) MLD, (b,e) MLT, and (c,f) MLS at Stations B4 (left) and B5 (right).
Figure 9: Satellite-observed evolution of surface chl-a concentration (mg m−3) (a) before, (b) during, and (c) after the sequential TCs; the boxes mark the three regions of obvious chl-a increase and surface cooling shown in Figure 3.
Figure 10: Wind field at 10 m above the sea surface (arrows; m s−1) and estimated EPV (colors; m s−1) on (a) September 15 and (b) September 19–20; the red box marks region R3 of Figure 9.
Figure 11: Evolution of satellite-observed (a–d) SST and (e–h) negative SSHA in the CE region before, during, and after the sequential TCs.
Figure 12: HYCOM temperature profile at the center of the CE on September 9; vertical lines mark when Kalmaegi and Fung-Wong passed the eddy.
Figure 13: 500 hPa geopotential height (color; gpm) and 500–700 hPa averaged wind field (arrows; m s−1) (a,b) before, (c) during, and (d) after the change of Fung-Wong's track (the cyan line is the 5880 gpm contour at 500 hPa), with (e,f) temporal variations of zonal and meridional wind speed averaged over 500–700 hPa in a 5° × 5° region centered on Fung-Wong.
Figure 14: Distributions of (a) SST, (b) MLD, (c) D26, and (d) UOHC on September 19.
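The diagnostics this entry leans on (D26, MLD, and UOHC; Figures 4 and 14) have widely used textbook definitions. The NumPy sketch below computes them from a single temperature profile under one common set of conventions: a 0.5 °C temperature-threshold MLD and heat content integrated above the 26 °C isotherm with constant density and heat capacity. The thresholds, constants, and function names are assumptions for illustration, not necessarily the conventions used in the paper.

```python
import numpy as np

RHO = 1025.0  # seawater density, kg m-3 (assumed constant)
CP = 4000.0   # specific heat capacity of seawater, J kg-1 K-1 (assumed constant)

def d26(depth, temp):
    """Depth of the 26 degC isotherm (m, positive downward), by linear interpolation."""
    below = np.where(temp < 26.0)[0]
    if below.size == 0 or below[0] == 0:
        return np.nan  # profile never crosses 26 degC, or is already colder at the surface
    i = below[0]
    frac = (temp[i - 1] - 26.0) / (temp[i - 1] - temp[i])
    return depth[i - 1] + frac * (depth[i] - depth[i - 1])

def mld(depth, temp, dt=0.5):
    """Mixed layer depth: first depth where T falls dt (degC) below the surface value."""
    target = temp[0] - dt
    below = np.where(temp < target)[0]
    if below.size == 0:
        return depth[-1]
    i = below[0]
    frac = (temp[i - 1] - target) / (temp[i - 1] - temp[i])
    return depth[i - 1] + frac * (depth[i] - depth[i - 1])

def uohc(depth, temp):
    """Upper ocean heat content relative to 26 degC (J m-2), integrated down to D26."""
    z26 = d26(depth, temp)
    if np.isnan(z26):
        return 0.0
    mask = depth < z26
    z = np.append(depth[mask], z26)
    t = np.append(temp[mask], 26.0)
    return RHO * CP * np.trapz(t - 26.0, z)
```

With these definitions, the TC-induced shoaling of D26 and the MLD, and the drop in UOHC reported for the three cold patches, can be read directly off pre- and post-storm profiles such as those from the Argo floats A1 and A2.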
39 pages, 22117 KiB  
Article
Landsat-8, Advanced Spaceborne Thermal Emission and Reflection Radiometer, and WorldView-3 Multispectral Satellite Imagery for Prospecting Copper-Gold Mineralization in the Northeastern Inglefield Mobile Belt (IMB), Northwest Greenland
by Amin Beiranvand Pour, Tae-Yoon S. Park, Yongcheol Park, Jong Kuk Hong, Aidy M Muslim, Andreas Läufer, Laura Crispini, Biswajeet Pradhan, Basem Zoheir, Omeid Rahmani, Mazlan Hashim and Mohammad Shawkat Hossain
Remote Sens. 2019, 11(20), 2430; https://doi.org/10.3390/rs11202430 - 19 Oct 2019
Cited by 76 | Viewed by 10779
Abstract
Several regions in the High Arctic remain poorly explored for a variety of mineralization types because of the harsh climate and remoteness. Inglefield Land is an ice-free region in northwest Greenland that contains copper-gold mineralization associated with hydrothermal alteration mineral assemblages. In [...] Read more.
Several regions in the High Arctic remain poorly explored for a variety of mineralization types because of the harsh climate and remoteness. Inglefield Land is an ice-free region in northwest Greenland that contains copper-gold mineralization associated with hydrothermal alteration mineral assemblages. In this study, Landsat-8, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and WorldView-3 multispectral remote sensing data were used for hydrothermal alteration mapping and mineral prospecting in Inglefield Land at regional, local, and district scales. The directed principal components analysis (DPCA) technique was applied to map iron oxide/hydroxide, Al/Fe-OH, and Mg-Fe-OH minerals, silicification (Si-OH), and SiO2 mineral groups using specialized band ratios of the multispectral datasets. To extract reference spectra directly from the Landsat-8, ASTER, and WorldView-3 (WV-3) images for generating fraction images of end-member minerals, the automated spectral hourglass (ASH) approach was implemented. The linear spectral unmixing (LSU) algorithm was then used to produce a mineral map from the fraction images. Furthermore, the adaptive coherence estimator (ACE) algorithm was applied to the visible and near-infrared and shortwave infrared (VNIR + SWIR) bands of ASTER, using laboratory reflectance spectra extracted from the USGS spectral library, to verify the presence of mineral spectral signatures. Results indicate that the boundaries between the Franklinian sedimentary successions and the Etah metamorphic and meta-igneous complex, the orthogneiss in the northeastern part of the Cu-Au mineralization belt adjacent to Dallas Bugt, and the southern part of the Cu-Au mineralization belt near Marshall Bugt show high contents of iron oxides/hydroxides and Si-OH/SiO2 mineral groups, indicating high potential for Cu-Au prospecting. High spatial densities of hematite/jarosite, chalcedony/opal, and chlorite/epidote/biotite were identified coincident with the documented Cu-Au occurrences in the central and southwestern sectors of the Cu-Au mineralization belt. Confusion matrix and Kappa coefficient calculations confirmed appropriate overall accuracy and a good rate of agreement for the alteration mineral mapping. This investigation demonstrates the application of multispectral/multi-sensor satellite imagery as a valuable and economical tool for the reconnaissance stages of systematic mineral exploration projects in remote and inaccessible metallogenic provinces around the world, particularly in the High Arctic. Full article
Show Figures

Graphical abstract

Figure 1: Geological map of Inglefield Land; the Cu-Au mineralized belt in the northeastern part is shown as a semi-transparent yellow polygon (modified after [3,42]).
Figure 2: Comparison of the spectral bands of WV-3 with those of Landsat-8 and ASTER in the visible and near-infrared (VNIR) and shortwave infrared (SWIR) regions [46].
Figure 3: Laboratory reflectance spectra of hematite, jarosite, biotite, muscovite, chlorite, epidote, chalcedony (hydrous silica), and opal (hyalite), extracted from the USGS spectral library version 7.0 [75] and resampled to the response functions of the VNIR + SWIR bands of (A) Landsat-8, (B) ASTER, and (C) WV-3; cubes mark the band positions in the 0.4–2.5 μm range.
Figure 4: Regional view of northwestern Greenland, generated from a mosaic of Landsat-8 images as an RGB false-color composite of the normalized difference snow index (NDSI), the Al-OH-bearing alteration minerals index (Al-OH-MI), and the thermal radiance lithology index (TRLI); the yellow rectangle marks the Cu-Au mineralization belt.
Figure 5: Landsat-8 image-maps of the IMB: (A) RGB false-color composite of the B4/B2, B4/B6, and B6/B7 band ratio indices; (B) pseudocolor ramp of the DPCA3 rule image; (C) pseudocolor ramp of the DPCA4 rule image.
Figure 6: (A) The n-D classes (end-member spectra) extracted from a Landsat-8 spatial subset covering the Cu-Au mineralization belt and surrounding areas, with band center positions shown; (B) LSU mineral map produced from the fraction images, overlaid on Landsat-8 band 5.
Figure 7: Pseudocolor ramps of the ASTER (VNIR + SWIR) DPCA rule images for the Cu-Au mineralization belt and surrounding areas: (A) ferrous iron (Fe2+)/Si-OH; (B) ferric iron (Fe3+) oxides/hydroxides; (C) Al/Fe-OH minerals; (D) Mg-Fe-OH minerals; (E) Si-OH minerals.
Figure 8: Pseudocolor ramps of the ASTER (TIR) DPCA rule images: (A) Quartz Index (QI); (B) Carbonate Index (CI); (C) Mafic Index (MI).
Figure 9: (A) The n-D classes (end-member spectra) extracted from an ASTER VNIR + SWIR spatial subset covering the Cu-Au mineralization belt and surrounding areas, with band center positions shown; (B) ASTER LSU classification mineral map.
Figure 10: Pseudocolor ramps of the WV-3 (VNIR) DPCA rule images for the southern part of the Cu-Au mineralization belt: (A) Fe3+ iron oxides; (B) Fe2+ iron oxides; (C) ferric silicates; (D) ferrous silicates (WV-3 image courtesy of the DigitalGlobe Foundation, www.digitalglobefoundation.org).
Figure 11: (A) The n-D classes (end-member spectra) extracted from the WV-3 spatial subset covering the southern part of the Cu-Au mineralization belt, with band center positions shown; (B) LSU mineral map produced from the fraction images (WV-3 image courtesy of the DigitalGlobe Foundation).
Figure 12: Fraction images of the selected end-member minerals derived from the adaptive coherence estimator (ACE) algorithm, with a pseudocolor ramp applied to the greyscale rule images.
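The LSU step in this entry converts end-member spectra into per-mineral fraction images by solving, pixel by pixel, a constrained least-squares problem under the linear mixing model. The sketch below shows one standard way to do this with NumPy/SciPy: non-negativity enforced via NNLS, and the sum-to-one constraint enforced softly by appending a heavily weighted row of ones. The weighting trick, parameter values, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel, sum_weight=100.0):
    """Constrained linear spectral unmixing for a single pixel.

    endmembers : (n_bands, n_endmembers) matrix of end-member spectra.
    pixel      : (n_bands,) observed reflectance spectrum.
    Returns the abundance vector (non-negative, approximately summing to one).
    """
    n_endmembers = endmembers.shape[1]
    # Augment the system with a weighted row of ones: sum_weight * sum(x) ~ sum_weight.
    A = np.vstack([endmembers, sum_weight * np.ones((1, n_endmembers))])
    b = np.append(pixel, sum_weight)
    abundances, _ = nnls(A, b)  # non-negative least squares
    return abundances

def unmix_cube(endmembers, cube):
    """Per-pixel unmixing of an image cube of shape (rows, cols, n_bands)."""
    rows, cols, _ = cube.shape
    fractions = np.zeros((rows, cols, endmembers.shape[1]))
    for r in range(rows):
        for c in range(cols):
            fractions[r, c] = unmix_pixel(endmembers, cube[r, c])
    return fractions  # one fraction plane per end-member mineral
```

Each plane of the returned array is a fraction image for one end-member, analogous to the fraction images behind the LSU mineral maps in Figures 6, 9, and 11.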