Remote Sens., Volume 12, Issue 14 (July-2 2020) – 170 articles

Cover Story: The disease caused by SARS-CoV-2 has affected many countries and regions. To contain the spread of infection, many countries adopted lockdown measures. As a result, SARS-CoV-2 has negatively affected economies on a global scale and has had a significant impact on the environment. In this study, changes in the concentration of the pollutant nitrogen dioxide (NO2) during the lockdown period were examined, as well as how these changes relate to the Spanish population. Remote sensing is a useful tool for analyzing the spatial variability of air quality. For this purpose, Sentinel-5P images were used to analyze the spatial distribution of NO2 and its evolution under the lockdown measures in Spain. The results indicate a significant correlation between the population’s activity level and the reduction of NO2 values. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
45 pages, 13597 KiB  
Article
Rapid Mangrove Forest Loss and Nipa Palm (Nypa fruticans) Expansion in the Niger Delta, 2007–2017
by Chukwuebuka Nwobi, Mathew Williams and Edward T. A. Mitchard
Remote Sens. 2020, 12(14), 2344; https://doi.org/10.3390/rs12142344 - 21 Jul 2020
Cited by 26 | Viewed by 9655
Abstract
Mangrove forests in the Niger Delta are very valuable, providing ecosystem services such as carbon storage, fish nurseries, coastal protection, and aesthetic values. However, they are under threat from urbanization, logging, oil pollution, and the proliferation of the invasive Nipa Palm (Nypa fruticans). Yet there are no reliable data on the current extent of mangrove forest in the Niger Delta, its rate of loss, or the rate of colonization by the invasive Nipa Palm. Here, we estimate the area of Nipa Palm and mangrove forests in the Niger Delta in 2007 and 2017, using 567 ground control points, Advanced Land Observing Satellite Phased Array L-band SAR (ALOS PALSAR), Landsat and the Shuttle Radar Topography Mission Digital Elevation Model 2000 (SRTM DEM). We performed the classification using Maximum Likelihood (ML) and Support Vector Machine (SVM) methods. The classification results showed that SVM (overall accuracy 93%) performed better than ML (77%). Producer's (PA) and User's accuracy (UA) for the best SVM classification were above 80% for most classes; however, these were considerably lower for Nipa Palm (PA = 32%, UA = 30%). We estimated a 2017 mangrove area of 801,774 ± 34,787 ha (±95% confidence interval) and a Nipa Palm extent of 11,447 ± 7343 ha. Our maps show a greater landward extent than other reported products. The results indicate a 12% (7–17%) decrease in mangrove area and a 694% (0–1304%) increase in Nipa Palm. Mapping efforts should continue for policy targeting and monitoring. The mangroves of the Niger Delta are clearly in grave danger from both rapid clearance and encroachment by the invasive Nipa Palm. This is of great concern given the dense carbon stocks and the value of these mangroves to local communities for generating fish stocks and protection from extreme events. Full article
(This article belongs to the Special Issue Ensuring a Long-Term Future for Mangroves: A Role for Remote Sensing)
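As a rough illustration of the classification workflow summarized in the abstract (not the authors' actual pipeline), the sketch below trains an RBF-kernel SVM on a synthetic feature stack and reports overall accuracy with scikit-learn; all feature values, class labels and parameter choices are placeholders.

```python
# Minimal sketch: SVM land-cover classification of a stacked feature array
# (e.g., SAR backscatter, optical bands, elevation) with accuracy assessment.
# The data here are synthetic stand-ins, not the paper's GCPs or imagery.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

X = rng.normal(size=(567, 9))        # 567 sample points, 9 hypothetical features
y = rng.integers(0, 5, size=567)     # 5 hypothetical land-cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

svm = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF kernel, as in the paper
svm.fit(X_train, y_train)

y_pred = svm.predict(X_test)
print("Overall accuracy:", accuracy_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
```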
Show Figures

Graphical abstract
Figure 1. Location of Nigeria in West Africa, the Niger Delta in Southern Nigeria, the study area, and ground control points (GCPs) established during the field survey.
Figure 2. Land cover classes used in land cover classification. (A) Terra firma forest; (B) agricultural land; (C) Nipa Palm; (D) mangrove forest.
Figure 3. Image processing steps prior to land cover (LC) classification. Digital Elevation Model (DEM), Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) and Landsat data were pre-processed separately before being stacked together with the Shuttle Radar Topography Mission (SRTM) DEM at 30 m resolution. We grouped both Landsat and ALOS PALSAR data for the years 2007 and 2017. Each color code represents the different software used for analysis.
Figure 4. 2017 thematic map from the Support Vector Machine with Radial Basis Function kernel (i), highlighting inset regions. Detailed analysis of the thematic maps produced by the different classifiers (Maximum Likelihood; Support Vector Machine with Linear, Polynomial and Radial Basis Function kernels) in three regions of the Niger Delta: (A) Calabar Estuary; (B) Oproama Community; and (C) Imo River Estuary.
Figure 5. 2007 (i) and 2017 (ii) thematic maps derived from the classification of layer-stacked SRTM DEM, ALOS PALSAR, and Landsat 7 data of the Niger Delta using SVM with a Radial Basis Function kernel. Inset (iii) shows the inland extent of mangrove forests from the 2017 classification in (A) the western Niger Delta; (B) the central Niger Delta; (C) the eastern Niger Delta; (D) the Imo River estuary; (E) the Calabar estuary; and (F) the River Niger Basin bifurcation and the spread of surface water in proximity to rain forests and agricultural land surrounding urban settlements.
Figure 6. Mangrove forest loss along islands off the coast of Akwa Ibom and Nipa Palm colonization on Alligator Island, Calabar River Estuary of the Niger Delta. Black circles represent mangrove loss, while red circles represent Nipa Palm colonization. Inset: pictures of study sites during GCP selection.
Figure 7. Percentage change of land cover classes in the Niger Delta between 2007 and 2017.
Figure 8. Comparison of different mangrove maps, with red circles showing similarities and black circles showing underestimation. (A) Oproama creek; (B) Benin River estuary; and (C) Calabar estuary, showing Nipa Palm cover on Alligator Island compared with global maps.
Figure 9. Mixed stand of Nipa Palm and disturbed mangrove in Ete creek.
Figure 10. Cloud cover limitation of optical data over the Kwa Ibo River, southeastern Nigeria. (A) ALOS PALSAR radar scene; (B) classification imagery; (C) infrared scene.
22 pages, 2487 KiB  
Article
Soil Moisture Estimate Uncertainties from the Effect of Soil Texture on Dielectric Semiempirical Models
by Jing Liu and Qinhuo Liu
Remote Sens. 2020, 12(14), 2343; https://doi.org/10.3390/rs12142343 - 21 Jul 2020
Cited by 7 | Viewed by 3743
Abstract
Soil texture has been shown to affect the dielectric behavior of soil over the entire frequency range. Three universally employed dielectric semiempirical models (SEMs), the Dobson model, the Wang–Schmugge model and the Mironov model, as well as a new improved SEM known as the soil semi-empirical mineralogy-related-to-water dielectric model (SSMDM), incorporate a significant soil texture effect in different ways. In this paper, soil moisture estimate uncertainties arising from the effect of soil texture on these four SEMs are systematically investigated over all soil texture cases, at frequencies between 1.4 and 18 GHz and volumetric water content levels between 0.0 and 0.4 m3/m3, from two perspectives: soil dielectric model discordance and soil texture discordance. Firstly, the effect of soil texture on these four dielectric SEMs is analyzed. Then, soil moisture estimate uncertainties due to the effect of soil texture are carefully investigated. Finally, the applicability of these SEMs is discussed, which can supply references for their choice. The results show that soil moisture estimate uncertainties are small and satisfy the 4% volumetric water content retrieval requirement in some cases. However, in other cases, the soil texture effect may contribute relatively significant uncertainties to soil moisture estimates, corresponding to differences that exceed the 4% volumetric water content requirement, with the largest deviations potentially exceeding 0.22 m3/m3. Full article
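The retrieval experiment described above inverts a dielectric model through a lookup table and measures how model or texture discordance propagates into soil moisture error. The sketch below illustrates only the mechanics of that LUT inversion: it substitutes the simple Topp (1980) polynomial for the Dobson, Wang–Schmugge, Mironov and SSMDM models used in the paper, and the +8% perturbation standing in for a discordance is arbitrary.

```python
# Minimal sketch: LUT-based soil moisture retrieval and deviation check.
# Topp (1980) is used as a stand-in forward model, not one of the paper's SEMs.
import numpy as np

def topp_eps(theta):
    """Topp et al. (1980) empirical permittivity as a function of VWC."""
    return 3.03 + 9.3 * theta + 146.0 * theta**2 - 76.7 * theta**3

# Forward LUT over VWC between 0.0 and 0.4 m3/m3, as in the study.
theta_grid = np.linspace(0.0, 0.4, 401)
eps_lut = topp_eps(theta_grid)

def retrieve_theta(eps_obs):
    """Retrieve VWC by nearest-neighbour search in the LUT."""
    return theta_grid[np.argmin(np.abs(eps_lut - eps_obs))]

# Emulate a model/texture discordance: the "observed" permittivity comes from a
# slightly different dielectric response (an arbitrary +8% perturbation here).
true_theta = 0.25
eps_observed = 1.08 * topp_eps(true_theta)

theta_hat = retrieve_theta(eps_observed)
deviation = theta_hat - true_theta
print(f"retrieved VWC = {theta_hat:.3f} m3/m3, deviation = {deviation:+.3f}")
print("exceeds the 0.04 m3/m3 requirement:", abs(deviation) > 0.04)
```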
Show Figures

Figure 1. Soil texture grouping diagram according to the gravimetric weights of soil particles.
Figure 2. Maximum bound water fraction (MBWF) versus clay percentage. Legend notes: eq (5) and eq (7) indicate that the MBWF is computed by Equation (5) or Equation (7), respectively, and: 1, a sand fraction of 0; 2, a sand fraction of 30%; and 3, a sand fraction of 60%.
Figure 3. Simulated real parts of typical soil samples for different models. A solid line is used for soil texture cases in Section I or II; a dashed line is used for soil texture cases in Section III or IV. "II-I-L" stands for the I sample from Section II at the L-band.
Figure 4. Simulated imaginary parts of typical soil samples for different models. A solid line is used for soil texture cases in Section I or II; a dashed line is used for soil texture cases in Section III or IV. "III-H-L" stands for the H sample from Section III at the L-band.
Figure 5. Simulations of the real parts for different soil texture types under the same model. Top panels: sand content constant; middle panels: clay content constant; bottom panels: silt content constant. 'H(I)' is the 'H' sample from Section I soils. A solid line is used for the L-band and a dashed line for the Ku-band.
Figure 6. Deviations in the soil moisture estimates by different models for typical soil samples. The numbers (from '1' to '6') were explained earlier in Section 2. The black straight line represents the soil moisture estimate accuracy requirement of 0.04 m3/m3.
Figure 7. Deviations in the soil moisture retrievals by soil texture. Top panels: sand content constant; middle panels: clay content constant; bottom panels: silt content constant. 'H(I)→H1(I)' means that the 'H' sample from Section I is used as the input soil texture for the soil dielectric model to construct the lookup table (LUT), while the 'H1' sample from Section I is the input soil texture for the same soil dielectric model to retrieve the soil moisture.
12 pages, 2836 KiB  
Letter
Semi-Automated Roadside Image Data Collection for Characterization of Agricultural Land Management Practices
by Neal Pilger, Aaron Berg and Pamela Joosse
Remote Sens. 2020, 12(14), 2342; https://doi.org/10.3390/rs12142342 - 21 Jul 2020
Cited by 5 | Viewed by 2746
Abstract
Land cover management practices, including the adoption of cover crops or the retention of crop residue during the non-growing season, have important impacts on soil health. To broadly survey these practices, a number of remotely sensed products are available, but issues with cloud cover and access to agricultural fields for validation purposes may limit the collection of data over large regions. In this study, we describe the development of a mobile roadside survey procedure for obtaining ground reference data for the remote sensing of agricultural land use practices. The key objective was to produce a dataset of geo-referenced roadside digital images that can be used in comparison to in-field photos to measure agricultural land use and land cover associated with crop residue and cover cropping in the non-growing season. We found a very high level of correspondence (>90% level of agreement) between the mobile roadside survey and in-field ground verification data. Classification correspondence was carried out with a portion of the county-level census image data against 114 in-field manually categorized sites, with a level of agreement of 93%. The few discrepancies were in the differentiation of residue levels between 30–60% and >60%, both of which may be considered as achieving conservation practice standards. The described mobile roadside image capture system has the advantages of relatively low cost and insensitivity to cloudy days, which often limit optical remote sensing acquisitions during the study period of interest. We anticipate that this approach can be used to reduce associated field costs for ground surveys while expanding coverage areas, and that it may be of interest to industry, academic, and government organizations for more routine surveys of agricultural soil cover during periods of seasonal cloud cover. Full article
(This article belongs to the Special Issue Remote Sensing of Crop Residue and Non-photosynthetic Vegetation)
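The headline numbers in this letter are levels of agreement between roadside and in-field classifications. A minimal sketch of that comparison on synthetic labels is shown below; the class codes follow Figure 4, but the data and the number of flipped labels are made up.

```python
# Minimal sketch: level-of-agreement check between two categorical label sets
# (roadside-image vs. in-field classification) over a set of surveyed sites.
import numpy as np
from collections import Counter

classes = ["CV", "CS", "NT", "GC"]           # tillage/cover classes from Figure 4
rng = np.random.default_rng(1)

in_field = rng.choice(classes, size=114)     # 114 manually categorised sites
roadside = in_field.copy()
# Flip a handful of labels to mimic residue-level confusions (synthetic).
flip = rng.choice(114, size=8, replace=False)
roadside[flip] = rng.choice(classes, size=8)

agreement = np.mean(roadside == in_field)
print(f"level of agreement: {agreement:.1%}")

mismatch = roadside != in_field
print("confusion among mismatches (in-field, roadside):",
      Counter(zip(in_field[mismatch], roadside[mismatch])))
```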
Show Figures

Graphical abstract
Figure 1. Map of Ontario counties where the roadside surveys were conducted.
Figure 2. Roadside survey vehicle camera system with roof-top camera mounts.
Figure 3. Examples of the georeferenced roadside survey. (a) illustrates roadside survey locations, with green points showing photos taken from right side of the vehicle (driving direction), and red from the left. Vehicle driving directions can be determined from the embedded time stamps in the photos. (b) illustrates examples of the photos acquired for the polygons of the sampled agricultural fields.
Figure 4. Non-growing season soil cover classifications shown from nadir (left) and oblique (right) vantage points. (a) illustrates conventional tillage practices with little visible residue (CV); (b) illustrates conservation tillage (CS) practices (≈30–60% residue); (c) presents no-till (NT) practices with greater than 60% residue; (d) illustrates green cover (GC) classification.
15 pages, 1616 KiB  
Article
Assessing the Temporal Response of Tropical Dry Forests to Meteorological Drought
by Lidong Zou, Sen Cao, Anzhou Zhao and Arturo Sanchez-Azofeifa
Remote Sens. 2020, 12(14), 2341; https://doi.org/10.3390/rs12142341 - 21 Jul 2020
Cited by 7 | Viewed by 3062
Abstract
Due to excessive human disturbances, as well as predicted changes in precipitation regimes, tropical dry forests (TDFs) are susceptible to meteorological droughts. Here, we explored the response of TDFs to meteorological drought by computing temporal correlations of the MODIS-derived normalized difference vegetation index (NDVI) and land surface temperature (LST) with a standardized precipitation index (SPI) between March 2000 and March 2017 at the Santa Rosa National Park Environmental Monitoring Super Site (SRNP-EMSS), Guanacaste, Costa Rica. We conducted this study at monthly and seasonal scales. Our results indicate that the NDVI and LST are largely influenced by seasonality, as well as the magnitude, duration, and timing of precipitation. We find that greenness and evapotranspiration are highly sensitive to precipitation when TDFs suffer from long-term water deficiency, and they tend to be slightly resistant to meteorological drought in the wet season. Greenness is more resistant to short-term rainfall deficiency than evapotranspiration, but it is more sensitive to precipitation after a period of rainfall deficiency. Precipitation can still strongly influence evapotranspiration on the canopy surface, whereas greenness is controlled not by rainfall but rather by phenological characteristics once leaves begin to senesce. Full article
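A minimal sketch of the lagged-correlation analysis the abstract describes, using synthetic monthly SPI and NDVI series rather than the MODIS and station records analyzed in the paper; the built-in 2-month lag and the 0- to 6-month search window are illustrative assumptions.

```python
# Minimal sketch: correlate a monthly NDVI series against SPI at several time
# lags and report the lag with the strongest relationship. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_months = 205                       # roughly March 2000 to March 2017

spi = rng.normal(size=n_months)
# NDVI responds to SPI with a 2-month delay plus noise (illustrative only).
ndvi = 0.5 * np.roll(spi, 2) + 0.3 * rng.normal(size=n_months)

best = None
for lag in range(0, 7):              # test 0- to 6-month lags
    r, p = pearsonr(spi[:n_months - lag], ndvi[lag:])
    if best is None or abs(r) > abs(best[1]):
        best = (lag, r, p)

lag, r, p = best
print(f"strongest SPI-NDVI correlation: lag = {lag} months, r = {r:.2f}, p = {p:.3g}")
```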
Show Figures

Graphical abstract
Figure 1. Study area: Santa Rosa National Park Environmental Monitoring Super Site (SRNP-EMSS).
Figure 2. The monthly precipitation distribution at the SRNP-EMSS from June 1976 to December 2016.
Figure 3. The monthly distribution of the normalized difference vegetation index (NDVI) and land surface temperature (LST) at the SRNP-EMSS from March 2000 to March 2017.
Figure 4. Correlation coefficients as a function of time duration at the fixed time lag corresponding to the maximum SPI-NDVI correlation and the minimum SPI-LST correlation in the dry, dry-to-wet, wet and wet-to-dry seasons. Red dots and green dots indicate the SPI-NDVI and SPI-LST correlations with p-values less than 0.05 and no less than 0.05, respectively.
Figure 5. The seasonal correlations between the average NDVI and the LST in the dry, dry-to-wet, wet, and wet-to-dry seasons from March 2000 to March 2017 (N = 17 years).
25 pages, 27270 KiB  
Article
DeepInSAR—A Deep Learning Framework for SAR Interferometric Phase Restoration and Coherence Estimation
by Xinyao Sun, Aaron Zimmer, Subhayan Mukherjee, Navaneeth Kamballur Kottayil, Parwant Ghuman and Irene Cheng
Remote Sens. 2020, 12(14), 2340; https://doi.org/10.3390/rs12142340 - 21 Jul 2020
Cited by 38 | Viewed by 9192
Abstract
Over the past decade, using Interferometric Synthetic Aperture Radar (InSAR) remote sensing technology for ground displacement detection has become very successful. However, during the acquisition stage, microwave signals reflected from the ground and received by the satellite are contaminated, for example, due to undesirable material reflectance and atmospheric factors, and there is no clean ground truth to discriminate these noises, which adversely affect InSAR phase computation. Accurate InSAR phase filtering and coherence estimation are crucial for subsequent processing steps. Current methods require expert supervision and expensive runtime to evaluate the quality of intermediate outputs, limiting their usability and scalability in practical applications, such as wide-area ground displacement monitoring and prediction. We propose a deep convolutional neural network-based model, DeepInSAR, to intelligently solve both the phase filtering and coherence estimation problems. We demonstrate our model's performance using simulated and real data. A teacher-student framework is introduced to handle the issue of missing clean InSAR ground truth. Quantitative and qualitative evaluations show that our teacher-student approach requires less input but can achieve better results than its stack-based teacher method, even on new unseen data. The proposed DeepInSAR also outperforms three other top non-stack-based methods in time efficiency without human supervision. Full article
(This article belongs to the Special Issue InSAR in Remote Sensing)
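DeepInSAR replaces hand-tuned estimators with a CNN, so no short snippet can reproduce it; as background, the sketch below shows the conventional boxcar coherence estimate that such networks are trained to improve on, applied to a simulated interferometric pair. The signal model, noise level and window size are all illustrative.

```python
# Minimal sketch: windowed (boxcar) coherence estimation on a simulated
# single-look-complex pair. Not the DeepInSAR method itself.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
rows, cols, win = 128, 128, 5

def complex_noise():
    return 0.5 * (rng.normal(size=(rows, cols)) + 1j * rng.normal(size=(rows, cols)))

# Shared phase ramp plus independent noise in each acquisition.
phase = np.outer(np.linspace(0, 6 * np.pi, rows), np.ones(cols))
s1 = np.exp(1j * phase) + complex_noise()
s2 = np.exp(1j * (phase + 0.1)) + complex_noise()

def boxcar_coherence(a, b, w):
    """Magnitude of the complex correlation estimated over a w x w window."""
    cross = a * np.conj(b)
    num = uniform_filter(cross.real, w) + 1j * uniform_filter(cross.imag, w)
    den = np.sqrt(uniform_filter(np.abs(a) ** 2, w) * uniform_filter(np.abs(b) ** 2, w))
    return np.abs(num) / np.maximum(den, 1e-12)

coh = boxcar_coherence(s1, s2, win)
print("mean estimated coherence:", round(float(coh.mean()), 3))
```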
Show Figures

Graphical abstract
Figure 1. The architecture of the proposed Deep Interferometric Synthetic Aperture Radar (DeepInSAR) network, with the corresponding kernel size (k), number of feature maps (n) and stride (s) indicated for each Convolutional Neural Network (CNN) layer.
Figure 2. Before and after preprocessing: amplitude images selected from three real-world site datasets. From left to right, it shows Site-A (1st and 2nd columns), Site-B (3rd and 4th columns) and Site-C (5th and 6th columns), with two samples for each dataset. (1st row) Raw amplitude images after log transformation for better visualization, (2nd row) their corresponding histograms in log, (3rd row) histograms after the proposed normalization and (4th row) the corresponding normalized images.
Figure 3. Information and gradient flow between modules.
Figure 4. Illustration of the PtSel method describing the three key steps in order [51].
Figure 5. We use S#-F#-S or S#-F#-NS to name simulation datasets generated using different distortion scenarios: S# denotes the Gaussian level of base noise S; F# denotes the frequency level of phase fringes F; S and NS mean with or without low amplitude strips, respectively. (From left to right) A set of simulated images selected from the S1-F3-NS, S2-F2-NS, and S3-F1-S datasets. The first row shows the simulated ground truth with clean interferometric phase [−π, π), the second row is the noisy interferometric phase [−π, π) (Blue: −π; Red: +π), and the third row is coherence (Black: 0; White: 1).
Figure 6. Examples of filtering and coherence estimation results on the sample simulation images shown in Figure 5. (a–d) are filtering outputs and (e–h) are coherence estimations of S1-F3-NS, (i–l) are filtering outputs and (m–p) are coherence estimations of S2-F2-NS, and (q–t) are filtering outputs and (u–x) are coherence estimations of S3-F1-NS. Visual inspection of the filtered outputs from different methods compared to the ground truth phase images (Figure 5, 1st row) shows that our model preserves structural details better than the others for increasing base noise levels and frequency of fringes (5th row). Our proposed method's coherence estimation is the best match to the ground truth (Figure 5, 3rd row), while the other methods tend to predict inaccurate results on areas with highly dense fringes or low amplitude stripes.
Figure 7. Three representative noisy interferograms (phase) selected from each of the three real datasets; Blue: −π; Red: +π.
Figure 8. Filtered images and coherence maps generated by the reference methods and the proposed DeepInSAR trained model for a Site-A image.
Figure 9. Filtered images and coherence maps generated by the reference methods and the proposed DeepInSAR trained model for a Site-B image.
Figure 10. Filtered images and coherence maps generated by the reference methods and the proposed DeepInSAR trained model for a Site-C image.
19 pages, 1807 KiB  
Article
Precipitation Diurnal Cycle Assessment of Satellite-Based Estimates over Brazil
by João Maria de Sousa Afonso, Daniel Alejandro Vila, Manoel Alonso Gan, David Pareja Quispe, Naurinete de Jesus da Costa Barreto, Joao Henry Huamán Chinchay and Rayana Santos Araujo Palharini
Remote Sens. 2020, 12(14), 2339; https://doi.org/10.3390/rs12142339 - 21 Jul 2020
Cited by 16 | Viewed by 4119
Abstract
The main objective of this study is to assess the ability of several high-resolution satellite-based precipitation estimates to represent the Precipitation Diurnal Cycle (PDC) over Brazil during the 2014–2018 period, after the launch of the Global Precipitation Measurement satellite (GPM). The selected algorithms are the Global Satellite Mapping of Precipitation (GSMaP), the Integrated Multi-satellitE Retrievals for GPM (IMERG) and the Climate Prediction Center (CPC) MORPHing technique (CMORPH). Hourly rain gauge data from different national and regional networks were used as the reference dataset after going through rigid quality control tests. All datasets were interpolated to a common 0.1° × 0.1° grid every 3 h for comparison. After a hierarchical cluster analysis, seven regions with different PDC characteristics (amplitude and phase) were selected for this study. The main results of this research can be summarized as follows: (i) in those regions where thermal heating produces deep convective clouds, the PDC is better represented by all algorithms (in terms of amplitude and phase) than in regions driven by shallow convection or low-level circulation; (ii) the GSMaP suite (GSMaP-Gauge (G) and GSMaP-Motion Vector Kalman (MVK)), in general terms, outperforms the rest of the algorithms, with lower bias and less dispersion. In this case, the gauge-adjusted version improves on the satellite-only retrievals of the same algorithm, suggesting that daily gauge analysis is useful for reducing the bias at a sub-daily scale; (iii) the IMERG suite (IMERG-Late (L) and IMERG-Final (F)) overestimates rainfall for almost all times and all regions, while the satellite-only version provides better results than the final version; (iv) CMORPH has the best performance for a transitional regime between a coastal land-sea breeze and a continental Amazonian regime. Further research should be performed to understand how shallow cloud processes and convective/stratiform classification are handled in each algorithm in order to improve the representation of the diurnal cycle. Full article
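A minimal sketch of how a mean precipitation diurnal cycle is assembled at the 3-hourly resolution used in the study: hourly rain is summed into eight 3 h bins per day and averaged over all days. The rain series below is synthetic, with an arbitrary afternoon peak; it is not any of the gauge or satellite datasets evaluated in the paper.

```python
# Minimal sketch: mean precipitation diurnal cycle (PDC) from hourly rain,
# aggregated to eight 3 h bins per day and averaged across days.
import numpy as np

rng = np.random.default_rng(4)
n_days, hours_per_day = 365, 24

# Synthetic hourly rain with an afternoon convective peak (illustrative only).
hour = np.arange(hours_per_day)
diurnal_shape = np.exp(-0.5 * ((hour - 17) / 3.0) ** 2)      # peak near 17 LT
rain = rng.gamma(0.3, 2.0, size=(n_days, hours_per_day)) * diurnal_shape

# Sum hourly rain into eight 3 h bins per day, then average across days.
rain_3h = rain.reshape(n_days, 8, 3).sum(axis=2)
mean_pdc = rain_3h.mean(axis=0)

for k, value in enumerate(mean_pdc):
    print(f"{3 * k:02d}-{3 * k + 3:02d} LT: {value:.2f} mm per 3 h")
```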
Show Figures

Graphical abstract
Figure 1. Studied region and spatial distribution of rain gauges.
Figure 2. Quality control flow diagram (a) and spatial distribution and percentage of valid data for each grid point, downgraded to 0.3° for visualization purposes (b).
Figure 3. Sub-regions (boxes) chosen to assess the estimate products, determined by a cluster analysis of the Precipitation Diurnal Cycle (PDC) with gauge data. (a) Boxes 1 and 2 in summer; 3 and 4 in winter; and 5, 6 and 7 in fall. Histograms of the different groups of PDCs: (b) summer, (c) fall and (d) winter.
Figure 4. Seasonal precipitation climatology in the regions of the boxes in (a) SE and MW, (b) NE and (c) N, using data from the Brazilian National Institute of Meteorology (INMET) station network for the 30-year period from January 1989 to December 2018.
Figure 5. Mean PDC for some satellite-based precipitation algorithms for different regions of Brazil. (a) Box 1 and (b) box 2 in summer; (c) box 3 and (d) box 4 in winter; and (e) box 5, (f) box 6 and (g) box 7 in fall: Global Satellite Mapping of Precipitation (GSMaP)-Gauge (G) in continuous blue line, GSMaP-Motion Vector Kalman (MVK) in dashed dark blue line, Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) (IMERG)-Final (F) in continuous green line, IMERG-Late (L) in dashed dark green line and Climate Prediction Center (CPC) MORPHing technique (CMORPH) in red line.
Figure 6. Taylor diagram for the PDCs estimated by the different algorithms for different regions of Brazil. (a) Box 1 and (b) box 2 in summer; (c) box 3 and (d) box 4 in winter; and (e) box 5, (f) box 6 and (g) box 7 in fall: GSMaP-G in blue, GSMaP-MVK in dark blue, IMERG-F in green, IMERG-L in dark green and CMORPH in red. Numbers refer to time: 1 at 0000 UTC, 2 at 0300 UTC, 3 at 0600 UTC, 4 at 0900 UTC, 5 at 1200 UTC, 6 at 1500 UTC, 7 at 1800 UTC and 8 at 2100 UTC.
23 pages, 4198 KiB  
Article
Pre-Emptive Detection of Mature Pine Drought Stress Using Multispectral Aerial Imagery
by Nancy Grulke, Jason Maxfield, Phillip Riggan and Charlie Schrader-Patton
Remote Sens. 2020, 12(14), 2338; https://doi.org/10.3390/rs12142338 - 21 Jul 2020
Cited by 13 | Viewed by 4072
Abstract
Drought, ozone (O3), and nitrogen deposition (N) alter foliar pigments and tree crown structure in ways that may be remotely detectable. Remote sensing tools that pre-emptively identify trees susceptible to environmental stresses are needed and could inform forest managers in advance of tree mortality risk. Jeffrey pine, a component of the economically important and widespread western yellow pine in North America, was investigated in the southern Sierra Nevada. Transpiration of mature trees differed by 20% between microsites with adequate (mesic (M)) vs. limited (xeric (X)) water availability, as described in a previous study. In this study, in-the-crown morphological traits (needle chlorosis, branchlet diameter, and frequency of needle defoliators and dwarf mistletoe) were significantly correlated with aerially detected, sub-crown spectral traits (upper crown NDVI, high resolution (R), near-infrared (NIR) Scalar (inverse of NDVI), and THERM Δ, the difference between upper and mid crown temperature). A classification tree model sorted trees into X and M microsites with THERM Δ alone (20% error), which was partially validated at a second site with only mesic trees (2% error). Random forest separated M and X site trees with additional spectra (17% error). Imagery taken once, from an aerial platform with sub-crown resolution, under the challenge of drought stress, was effective in identifying droughted trees within the context of other environmental stresses. Full article
(This article belongs to the Special Issue Monitoring Forest Change with Remote Sensing)
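A minimal sketch of the two spectral traits at the core of the classification tree described above, upper-crown NDVI and THERM Δ (upper minus mid crown temperature), computed for synthetic crown samples; the single-threshold split and the −0.5 °C value stand in for the fitted model and are not the paper's numbers.

```python
# Minimal sketch: per-tree NDVI and upper-minus-mid-crown temperature difference,
# followed by a placeholder threshold split into xeric/mesic microsites.
import numpy as np

rng = np.random.default_rng(5)
n_trees = 40

# Synthetic crown samples (reflectance and temperature values are made up).
red = rng.uniform(0.04, 0.10, n_trees)               # upper-crown red reflectance
nir = rng.uniform(0.25, 0.45, n_trees)               # upper-crown NIR reflectance
temp_top = rng.normal(30.0, 1.5, n_trees)            # upper-crown temperature, deg C
temp_mid = temp_top + rng.normal(1.0, 0.8, n_trees)  # mid crown, usually warmer

ndvi_top = (nir - red) / (nir + red)                 # upper-crown NDVI
therm_delta = temp_top - temp_mid                    # THERM D: upper minus mid crown

# Trees whose upper crown is not clearly cooler than the mid crown are flagged as
# water-limited (xeric); -0.5 deg C is a placeholder, not the paper's fitted split.
microsite = np.where(therm_delta > -0.5, "xeric", "mesic")

for i in range(5):
    print(f"tree {i}: NDVI={ndvi_top[i]:.2f}, THERM D={therm_delta[i]:+.2f} C -> {microsite[i]}")
```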
Show Figures

Graphical abstract
Figure 1. Colorized composite of the intensities of high resolution (R), near-infrared (NIR), and thermal wavelengths of the calibration site on the western slope of the south-central Sierra Nevada. The site is bounded by the Marble Fork of the Kaweah River (rusty brown) on the south and General's Highway on the north (fuchsia), just west of Lodgepole, California. The darker gray-green rounded trees (see cast shadow) are Jeffrey pine distributed in open rocky areas (xeric (X)), along the riparian corridor and ephemeral seeps (mesic (M)), as well as intermingled X and M trees.
Figure 2. Thermal imagery of two Jeffrey pine trees taken at 0.5 h time steps, 1030 to 1200 (left), and true color (center right). Below the imagery, a cartoon of an aerial view depicting the upper canopy (center circle) and the upper portion of the mid canopy (concentric circle), varying from cool (green, transpiring) to warm (yellow, stomata closed). On the right is a diagram of the masked shaded crown, and subsample points for R, NIR, and THERM of the top- and upper mid-crown. Points randomly occurring in the outer 2.2 m were rejected to avoid contamination by ground cover through the edge of the crown.
Figure 3. Diagram of study design, data collection, approach to data analysis, and classification of trees based on selected spectral traits using classification tree and random forest models.
Figure 4. Loading for the first four principal components for morphological traits (left) and spectral traits (right). Principal components analysis (PCA) cumulative variation explained for the first two axes was 63% for morphological and 71% for spectral traits.
Figure 5. Mean decrease accuracy (MDA) for morphological (left) and spectral (right) traits using random forest. Acronyms as in Appendix B, except "D" replaces "Δ" here.
Figure 6. Classification tree model (CTM) for spectral traits within Jeffrey pine crowns. The difference in crown temperature between upper and mid crown (THERM Δ, upper crown cooler) separated trees in mesic and xeric microsites with 20% error. Further separation of xeric trees was accomplished with NDVI TOP, also with 20% error. Further separation of mesic microsite trees into N level (BN or +N) was unresolved.
Figure A1. Examples of branchlets illustrating variability of different crown trait measurements. (A) Branchlet clipped from a tree in a mesic (M) microsite. Four needle ages (WHL) are labeled by year, with 0% chlorosis for the first two years of needles (CHL1, CHL2) and 15% chlorosis for CHL3. A needle defoliator, scale (Chionaspis spp.; Chion), is present on the 2015 needles. Branch length (BRLNx) by year (x) is indicated by a dark mustard bar, with the foliated portion (%FOLLNx) of each year indicated by a dark green bar. An open red triangle indicates the position of the preformed 2018 terminal bud. The blue bar perpendicular to the branchlet axis indicates the measurement point of the prior year branchlet diameter (BRDIA2). (B) Branchlet clipped from a tree in a xeric (X) microsite. Four needle ages were retained, with 40% and 60% chlorosis for CHL3 and CHL4, respectively. Needle length measured after elongation growth has ceased is presented relative to the average length of needles of the whorl with the longest needles retained on the branchlet (%MxNLx): 100% for 2017 (total length not apparent in the photo); 5% for 2016; 60% for 2015; and 45% for 2014. The effect of a needle defoliator, Scythropus spp. (Scyth), is present on the 4th-year needles. (C) Branchlet from another X tree. The oldest needles are seven years old, but the 2014 needles were excised. Because the 2014 branchlet elongated, and the bracts for each needle fascicle (cluster of three needles bundled by a paper sheath) are widely spaced, it is likely that the needles elongated but were lost to reduce leaf area with increasing drought through 2016. The oldest WHL retained only one fascicle. %MxNL2 is 20% (compare to 5% in Panel (B)), 40% for MxNL3, and 60% for MxNL4. Trees differ in their response to hydrologic deficits, especially in xeric microsites, likely due to differences in the amount of trapped water in bedrock interstices. Whole-tree examples of early senescence (ES) can be found in [53] and DMR in [57].
25 pages, 4945 KiB  
Article
Estimating Near Real-Time Hourly Evapotranspiration Using Numerical Weather Prediction Model Output and GOES Remote Sensing Data in Iowa
by Wonsook S. Ha, George R. Diak and Witold F. Krajewski
Remote Sens. 2020, 12(14), 2337; https://doi.org/10.3390/rs12142337 - 21 Jul 2020
Cited by 7 | Viewed by 3869
Abstract
This study evaluates the applicability of numerical weather prediction output supplemented with remote sensing data for near real-time operational estimation of hourly evapotranspiration (ET). The Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) systems were selected to provide forcing data for a Penman-Monteith model to calculate the Actual Evapotranspiration (AET) over Iowa. To investigate how satellite-based remotely sensed net radiation (Rn) estimates might improve AET estimates, Geostationary Operational Environmental Satellite derived Rn (GOES-Rn) data were incorporated into each dataset for comparison with the RAP and HRRR Rn-based AET evaluations. The authors formulated a total of four AET models (RAP, HRRR, RAP-GOES, and HRRR-GOES) and validated the respective ET estimates against measurements from two eddy covariance towers in central Iowa. The implementation of HRRR-GOES for AET estimates showed the best results among the four models. The HRRR-GOES model improved statistical results, yielding a correlation coefficient of 0.8, a root mean square error (mm hr−1) of 0.08, and a mean bias (mm hr−1) of 0.02, while the HRRR-only model results were 0.64, 0.09, and 0.04, respectively. Despite the limited in situ observational data available to fully test the proposed AET estimation, the HRRR-GOES model clearly showed potential utility as a tool to predict AET at a regional scale with high spatio-temporal resolution. Full article
(This article belongs to the Special Issue Remote Sensing of Evapotranspiration (ET) II)
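The study drives a Penman-Monteith model with RAP/HRRR forcing and GOES net radiation. As a sketch of the kind of calculation involved, the snippet below implements the FAO-56 hourly reference-ET form of Penman-Monteith rather than the authors' actual AET formulation; the forcing values in the example call are invented.

```python
# Minimal sketch: FAO-56 hourly reference evapotranspiration (Penman-Monteith).
import math

def fao56_hourly_et0(rn, g, t_air, u2, rh):
    """Hourly reference ET (mm h-1); rn, g in MJ m-2 h-1, t_air in C, u2 in m s-1, rh in %."""
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))   # saturation vapour pressure, kPa
    ea = es * rh / 100.0                                      # actual vapour pressure, kPa
    delta = 4098.0 * es / (t_air + 237.3) ** 2                # slope of saturation curve, kPa C-1
    gamma = 0.0665                                            # psychrometric constant, kPa C-1
    num = 0.408 * delta * (rn - g) + gamma * (37.0 / (t_air + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Example: a clear summer hour over a well-watered surface (illustrative values).
print(f"ET0 = {fao56_hourly_et0(rn=2.0, g=0.2, t_air=28.0, u2=2.5, rh=55.0):.2f} mm/h")
```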
Show Figures

Graphical abstract
Figure 1. State of Iowa with the Iowa Flood Center (IFC) model domain and locations of two eddy covariance (EC) towers near central Iowa. The square in brown around the state of Iowa represents the study area.
Figure 2. Selected field measurements of (a) averaged air temperature at approximately 2 m above ground (°C), (b) averaged net radiation (Rn) and latent heat flux (LE) (W m−2), (c) averaged wind speed at approximately 2 m above ground (u, m s−1), (e) averaged volumetric water content (VWC, unitless) measured at 10- and 100-cm depths, (f) total rainfall (mm), and a derived value of (d) VPD (vapor pressure deficit, kPa) on a daily time scale between 1 May 2016 and 30 April 2017 at Tower 11. The gray area in (e) represents the time when the surface soil was frozen (soil temperature at 10 cm below the ground surface < 0 °C).
Figure 3. Scatter plots of selected observed hourly input data vs. extraction of hourly numerical weather prediction (NWP) outputs as modeled input data for Tower 11. A black solid line indicates the 1:1 line and a blue solid line represents a linear regression line.
Figure 4. Scatter plots of observed vs. modeled actual evapotranspiration (AET) in the hourly time scale for Tower 10 (A) and Tower 11 (B) between May 2016 and April 2017. A black solid line indicates the 1:1 line and a blue solid line represents a linear regression line.
Figure 5. Scatter plots of observed vs. modeled AET in the daily time scale for Tower 10 (A) and Tower 11 (B) between May 2016 and April 2017.
Figure 6. Time series of hourly AET estimates from the four models on three randomly selected days against AET from the two towers in late August 2016. The hours are in Central Daylight Time (CDT).
Figure 7. AET difference (modeled AET minus observed AET) in mm d−1 at Tower 10 (T10, (a) and (c)) and Tower 11 (T11, (b) and (d)) between May 2016 and April 2017.
Figure 8. Time series of hourly Rn estimates from three models on a few selected days against Rn from the two towers in late August 2016. The hours are in Central Daylight Time (CDT).
Figure 9. Estimated daily AET maps (mm d−1) using the High-Resolution Rapid Refresh-Geostationary Operational Environmental Satellite (HRRR-GOES) dataset over the state of Iowa during the study period.
Figure 10. Monthly AET comparisons of the tower vs. model estimates at Tower 11.
20 pages, 9263 KiB  
Article
A Non-Local Low-Rank Algorithm for Sub-Bottom Profile Sonar Image Denoising
by Shaobo Li, Jianhu Zhao, Hongmei Zhang, Zijun Bi and Siheng Qu
Remote Sens. 2020, 12(14), 2336; https://doi.org/10.3390/rs12142336 - 21 Jul 2020
Cited by 18 | Viewed by 3452
Abstract
Due to the influence of equipment instability, the surveying environment, scattering echoes, and other factors, it is sometimes difficult to obtain high-quality sub-bottom profile (SBP) images with traditional denoising methods. In this paper, a novel SBP image denoising method is developed for obtaining the underlying clean images based on a non-local low-rank framework. Firstly, to take advantage of the inherent layering structures of the SBP image, a direction image is obtained and used as a guidance image. Secondly, a robust guidance weight for accurately selecting similar patches is given, and a novel denoising method combining this weight with a non-local low-rank filtering framework is proposed. Thirdly, after discussing the filtering parameter settings, the proposed method is tested on actual sub-bottom measurements in both deep and shallow water. Experimental results validate the excellent performance of the proposed method. Finally, the proposed method is verified and compared quantitatively with other methods on synthetic images, achieving a total average peak signal-to-noise ratio (PSNR) of 21.77 and a structural similarity index (SSIM) of 0.573, which is far better than the other methods. Full article
(This article belongs to the Special Issue 2nd Edition Radar and Sonar Imaging and Processing)
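The quantitative comparison in this paper rests on PSNR and SSIM computed against clean synthetic images. A minimal sketch of that evaluation step with scikit-image is given below; the "clean", "noisy" and "denoised" arrays are random placeholders, not SBP data or the paper's filter outputs.

```python
# Minimal sketch: PSNR and SSIM between a clean reference image and a denoised
# estimate, using scikit-image metrics on placeholder arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(6)
clean = np.clip(rng.normal(0.5, 0.15, size=(128, 128)), 0, 1)
noisy = np.clip(clean + rng.normal(0, 0.1, size=clean.shape), 0, 1)
denoised = np.clip(0.5 * noisy + 0.5 * clean, 0, 1)   # stand-in for a filter output

print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
```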
Show Figures

Graphical abstract
Figure 1. Illustration of patch searching and patch group construction. The different colored rectangles in (a) represent patches with different similarities, (b) denotes a patch group, and (c) is the constructed low-rank matrix.
Figure 2. An overview of the proposed filtering framework. (a) is the original SBP image. First, the guidance image (b) is computed based on (a) to help the selection of similar patches. Then, after patch grouping (rectangles in (c) represent patches) and low-rank optimization ((d) is the optimization diagram), the patches are refined, and the low-rank structures are recovered as shown in (e). Aggregating all of the low-rank patches, the final result (f) is obtained.
Figure 3. Illustration of sub-bottom profile (SBP) measurement. (a) shows that the measurement of the sub-bottom profile can capture the information of sediment, (b) shows the received reflection echoes, and (c) is the final cross-section image of the sub-bottom. The red rectangles in (c) represent the sediment interior and the green rectangle represents the sediment interface.
Figure 4. Illustration of the guidance weight. (a) shows the locations of the reference patch and the searching patches, (b) illustrates the relative direction between the reference patch and the searching patch, (c) and (d) visualize the functions Ψ′(η′) and Ψ(η), respectively.
Figure 5. Illustration of parameter settings. (a) and (b) show the first synthetic clean SBP image and the polluted SBP image, respectively. (c) shows the peak signal-to-noise ratio (PSNR) surface and the red point has the highest value, (d) shows the structural similarity index (SSIM) surface obtained similarly to the PSNR surface and the red point has the highest value.
Figure 6. Illustration of parameter settings. (a) and (b) show the second synthetic clean SBP image and the polluted SBP image, respectively, (c) shows the PSNR surface and the red point in (c) has the highest PSNR value, (d) shows the SSIM surface obtained similarly to the PSNR surface. The red point in (d) is a local maximum point and the black point in (d) has the highest SSIM value.
Figure 7. The original SBP image and the SBP image after histogram matching. (a) is the original SBP image, (b) is the SBP image after grayscale equalization, (c) shows details in the area marked with a yellow rectangle in (b). (c) shows that the rectangle area in (b) is separated into two parts having different Signal-to-Noise Ratio (SNR) and the left part has a low SNR.
Figure 8. The SBP image after histogram matching and its corresponding direction information. (a) is the SBP image after grayscale equalization, and (b) shows the direction information.
Figure 9. Filtering results: (a) is the SBP image after grayscale equalization, (b–f) are the filtered results using KSVD, Non-local Means (NLM), Block-Matching and three-dimensional (3D) filtering (BM3D), the conventional non-local low-rank filtering method (NLLR), and the proposed method.
Figure 10. Filtering results: (a) is the details of the area in the lower rectangle in Figure 9a, (b–f) show the details of filtered results in the lower rectangle in Figure 9 using KSVD, NLM, BM3D, NLLR, and the proposed method, (g) is the details of the area in the top rectangle in Figure 9a, (h–l) are the details of filtered results in the top rectangle using KSVD, NLM, BM3D, NLLR, and the proposed method.
Figure 11. Filtering results: (a) is the SBP image, (b–f) are the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method, (g) shows the details of the SBP image in the rectangle area in Figure 11a, (h–l) are filtered results in the rectangle area in Figure 11a using KSVD, NLM, BM3D, NLLR, and the proposed method.
Figure 12. Filtering results: (a) is the SBP image, (b–f) are the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method, (g) shows the details of the SBP image in the rectangle area in Figure 12a, (h–l) are the filtered results in the rectangle area in Figure 12a using KSVD, NLM, BM3D, NLLR, and the proposed method.
Figure 13. Filtering results: (a) is the SBP image, (b–f) are the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method, (g) is the details of the SBP image in the rectangle area in Figure 13a, (h–l) are the filtered results in the rectangle area in Figure 13a using KSVD, NLM, BM3D, NLLR, and the proposed method.
Figure 14. Filtering results based on synthetic data: (a) shows the clean synthetic SBP image, (b) is the synthetic polluted SBP image, (c–g) are the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method, (h) is the clean synthetic SBP image, (i) is a synthetic polluted SBP image, (j–n) show the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method.
Figure 15. Filtering results based on synthetic data: (a) shows the clean synthetic SBP image, (b) is the synthetic polluted SBP image, (c–g) are the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method, (h) is the clean synthetic SBP image, (i) is a synthetic polluted SBP image, (j–n) show the filtered results using KSVD, NLM, BM3D, NLLR, and the proposed method.
40 pages, 6817 KiB  
Article
Classification of Hyperspectral Reflectance Images With Physical and Statistical Criteria
by Alexandre Alakian and Véronique Achard
Remote Sens. 2020, 12(14), 2335; https://doi.org/10.3390/rs12142335 - 21 Jul 2020
Cited by 6 | Viewed by 4064
Abstract
A classification method for hyperspectral reflectance images named CHRIPS (Classification of Hyperspectral Reflectance Images with Physical and Statistical criteria) is presented. This method aims at classifying each pixel into one of a given set of thirteen classes: unidentified dark surface, water, plastic matter, carbonate, clay, vegetation (dark green, dense green, sparse green, stressed), house roof/tile, asphalt, vehicle/paint/metal surface and non-carbonated gravel. Each class is characterized by physical criteria (detection of specific absorptions or shape features) or statistical criteria (use of dedicated spectral indices) over spectral reflectance. The CHRIPS input is a hyperspectral reflectance image covering the spectral range [400–2500 nm]. The presented method has four advantages, namely: (i) it is robust in transfer, as class identification is based on criteria that are not very sensitive to sensor type; (ii) it does not require training, since the criteria are pre-defined; (iii) it includes a reject class, which reduces misclassifications; (iv) it achieves high precision and recall, with an F1 score generally above 0.9 in our tests. As the number of classes is limited, CHRIPS could be used in combination with other classification algorithms able to process the reject class in order to decrease the number of unclassified pixels. Full article
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract
Figure 1: Spectral reflectances of different types of materials acquired with two different hyperspectral instruments, HySpex and HyMap: synthetic tarpaulin (HySpex), synthetic greenhouse (HyMap), dry grass (HySpex) and green tree (HyMap). A factor is applied to each reflectance to improve visualization. Specific features can be observed and used to identify these types of surface: the reflectance of synthetic matter exhibits a local minimum around 1730 nm due to a specific absorption, while the reflectances of green and stressed vegetation exhibit similar geometric patterns, namely a parabolic variation in the spectral range [1500–1750 nm] and local maxima around 1660 nm and 2210 nm.
Figure 2: Processing chain of the CHRIPS (Classification of Hyperspectral Reflectance Images with Physical and Statistical criteria) classification method. Atmospheric correction is not included: the input of the processing chain is a reflectance image. The spectral range of the input image needs to be [400–2500 nm], since CHRIPS characterizes each class with criteria that use spectral bands in both the VNIR and short-wave infrared (SWIR) ranges.
Figure 3: Three-band false color composites of the HySpex images (Fauga, Mauzac), the HyMap image (Garons) and the AisaFENIX images (suburban Mauzac). Images I2.2 and I8.8 are resampled at 0.55 m (nearest neighbor) to make the image views comparable.
Figure 4: Reflectance spectra of shadowed green vegetation (grass + trees). Reflectances are smoothed with Gaussian filtering.
Figure 5: Reflectance spectra of water. Reflectances are smoothed with Gaussian filtering. Dashed vertical lines delimit the spectral range [470–600 nm] in which the maximal reflectance ρ* lies.
Figure 6: Reflectance spectra of dark surfaces. Reflectances are smoothed with Gaussian filtering.
Figure 7: Reflectance of a tarpaulin covering a swimming pool (reflectance × 1.4). Sharp aliphatic absorptions around 1730 nm and 2300 nm are observed. Segments are drawn in red, and the ratio between reflectance and segment is shown in green. Dashed lines delimit the spectral range in which the minimum value of the ratio is computed.
Figure 8: Reflectance of linoleum (left, reflectance × 3) and of a truck cover (right, reflectance × 1.4). Aromatic absorptions are observed around 1670 nm, 2140 nm and 2300 nm. Segments are drawn in red, and the ratio between reflectance and segment is shown in green. The absorptions are less pronounced for the truck cover than for the linoleum.
Figure 9: Reflectances (from HySpex images) of surfaces containing carbonate. Reflectance decreases over 2200–2300 nm and has a local minimum around 2340 nm; this property is used for detection.
Figure 10: Reflectance of sand containing clay. A local minimum can be observed around 2200 nm.
Figure 11: Reflectance of vegetation. Depending on the type of vegetation, reflectance is highly variable below 1200 nm. However, spectral variations become similar beyond 1500 nm, where chlorophyll no longer has an optical impact: reflectance is parabolic between 1500 nm and 1750 nm, with a local maximum around 1660 nm. Reflectance also has a local maximum around 2200 nm.
Figure 12: The two sides of a sunlit roof do not receive the same amount of radiation per unit area. This can produce a significant difference in the apparent reflectance but does not modify the relative spectral variations. After normalization with the quadratic norm, the reflectances become very similar.
Figure 13: Selection of reflectances from sets S1 (blue) and S2 (red) used to compute the indices dedicated to the characterization of the class house roof/tile, and the corresponding normalized reflectances (quadratic norm). S1 is composed of 1230 reflectances selected from the set Sρ. Likewise, S2 includes 3500 reflectances from Sρ belonging to very different classes: roads, trucks, cars, soils, sand, plastic matter, green vegetation and stressed meadow.
Figure 14: Successive values of the indices I1, I2, I3 and I4 computed to characterize the class house roof/tile. Samples from sets S1 and S2 are drawn in blue and red, respectively. A margin of 10% is chosen for every index; the thresholds S*min and S*max are represented with dashed lines. Index In is computed for all samples of S1 and for the samples of S2 remaining after discrimination by indices I1…In−1. Values on the horizontal axis are meaningless: each sample is simply drawn at a different abscissa. The number of samples in S1 is 1230, and the initial number of samples in S2 is 3500. The number of samples remaining in S2 after each index is 383 (after I1), 67 (after I2), 30 (after I3) and 2 (after I4). The search for indices was stopped at I4, since the remaining spectra are very similar and may have a similar composition; it was preferred to stop the process rather than reduce the generalization power of the computed indices.
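The caption above outlines a cascade of statistical indices, each thresholded with a 10% margin learned from the class samples, that progressively removes counterexamples. The exact margin definition is not given in this listing, so the sketch below (not the authors' code) assumes the margin is taken relative to the index range observed in S1; the random data are purely illustrative.

```python
import numpy as np

def margin_thresholds(index_values_s1, margin=0.10):
    """Widen the [min, max] range of the class samples (S1) by a relative margin."""
    lo, hi = float(np.min(index_values_s1)), float(np.max(index_values_s1))
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def cascade_filter(indices_s1, indices_s2, margin=0.10):
    """Apply successive index thresholds and count the S2 samples surviving each one.

    indices_s1, indices_s2: arrays of shape (n_samples, n_indices) holding the
    values of I1..In for the target-class set S1 and the counterexample set S2.
    """
    keep = np.ones(len(indices_s2), dtype=bool)
    survivors = []
    for k in range(indices_s1.shape[1]):
        smin, smax = margin_thresholds(indices_s1[:, k], margin)
        keep &= (indices_s2[:, k] >= smin) & (indices_s2[:, k] <= smax)
        survivors.append(int(keep.sum()))
    return survivors

# Illustrative run with random index values (not the paper's data): the counterexample
# count shrinks each time a new index is added to the cascade.
rng = np.random.default_rng(0)
s1 = rng.normal(0.5, 0.05, size=(1230, 4))   # target class: tight distribution
s2 = rng.normal(0.5, 0.30, size=(3500, 4))   # counterexamples: spread out
print(cascade_filter(s1, s2))
```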
Figure 15: Three-band false color composite, ground truth and classification maps obtained by CHRIPS (after post-processing), CNN-1D, SVM and RFC for the Fauga image. In the ground truth, the three classes of green vegetation (dark, dense and sparse) are fused into a single one. The Fauga image was used for training.
Figure 16: Several vehicles, such as trucks, may have synthetic hoods. CHRIPS may consider these parts as belonging to the class plastic matter and the other parts as belonging to the vehicle class.
Figure 17: Three-band false color composite, ground truth and classification maps obtained by CHRIPS (after post-processing), CNN-1D, SVM and RFC for the Mauzac image. In the ground truth, the three classes of green vegetation (dark, dense and sparse) are fused into a single one. The Mauzac image was not used for training.
Figure 18: Three-band false color composite, ground truth and classification maps obtained by CHRIPS (after post-processing), CNN-1D, SVM and RFC for the Garons image. In the ground truth, the three classes of green vegetation (dark, dense and sparse) are fused into a single one. The Garons image was not used for training.
Figure 19: House roofs are well detected by the CHRIPS method in the class house roof/tile (orange colour).
Figure 20: Three-band false color composite, ground truth and classification maps obtained by CHRIPS, CNN-1D and SVM for the three suburban Mauzac images I0.55, I2.2 and I8.8. Images I2.2 and I8.8 are resampled at 0.55 m (nearest neighbor) to make the image views comparable. In the ground truth, the three classes of green vegetation (dark, dense and sparse) are fused into a single one.
Figure 21: Three-band false color composite of a part of the Garons image containing two vineyards (1 and 2), the CHRIPS classification of this image, the reflectance of pixels from vineyard 1 and vineyard 2, and an in situ photograph of vineyard 1. In the Garons image, vineyards are composed of parallel vine rows every 2.5 m; each vine row is around 1 m wide and the inter-row is around 1.5 m wide. The inter-row surface is composed of stressed vegetation and soil containing clay. As a HyMap pixel has a size of 4 × 4 m, it contains a mixture of vine row and inter-row content, so the spectral reflectances also mix their properties. CHRIPS classifies pixels from vineyard 1 as clay and pixels from vineyard 2 as green vegetation. This may seem counter-intuitive when looking at the image in the visible range: both vineyards appear green and the red-edge increase in reflectance is observed, so a green vegetation class would be expected for both parcels. The presence of clay induces a local minimum in reflectance around 2200 nm; it can be observed in the spectral reflectance of the pixel from vineyard 1 and is detected by CHRIPS. However, the whole reflectance spectrum also satisfies the criteria dedicated to green vegetation except the one requiring a local maximum around 2200 nm, which is the opposite of the clay-detection criterion. In this case the clay absorption prevails and CHRIPS reports clay. In vineyard 2, the clay content of the soil is lower and a local maximum is observed around 2200 nm; all vegetation criteria are then satisfied and CHRIPS classifies it as green vegetation.
19 pages, 7808 KiB  
Article
A Partition-Based Detection of Urban Villages Using High-Resolution Remote Sensing Imagery in Guangzhou, China
by Lu Zhao, Hongyan Ren, Cheng Cui and Yaohuan Huang
Remote Sens. 2020, 12(14), 2334; https://doi.org/10.3390/rs12142334 - 21 Jul 2020
Cited by 14 | Viewed by 3619
Abstract
High-resolution remotely sensed imagery has been widely employed to detect urban villages (UVs) in highly urbanized regions, especially in developing countries. However, the understanding of the potential impacts of spatially and temporally differentiated urban internal development on UV detection is still limited. In this study, a partition-strategy-based framework integrating the random forest (RF) model, the object-based image analysis (OBIA) method, and high-resolution remote sensing images was proposed for UV detection. In the core regions of Guangzhou, the four original districts were re-divided into five new zones, according to their different proportions of construction land, for the subsequent object-based RF detection of UVs using a series of features. The results show that the proposed framework performs well on UV detection, with an average overall accuracy of 90.23% and a kappa coefficient of 0.8. It also shows the possibility of transferring samples and models to a similar area. In summary, the partition strategy is a potential solution for improving UV-detection accuracy from high-resolution remote sensing images in Guangzhou. We suggest that the spatiotemporal process of urban construction land expansion should be comprehensively understood so as to ensure efficient UV detection in highly urbanized regions. This study can provide meaningful clues for city managers to identify UVs efficiently before devising and implementing their urban planning in the future. Full article
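As a minimal sketch of the partition strategy described above (not the authors' implementation), the snippet below trains one Random Forest per zone on object-level features produced by a prior OBIA segmentation step and reports per-zone accuracy and kappa. The zone names, feature dimensionality and sample data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

def train_zone_models(zone_features, zone_labels, n_trees=500, seed=0):
    """Fit one Random Forest per zone and report its accuracy and kappa coefficient."""
    models, scores = {}, {}
    for zone, X in zone_features.items():
        y = zone_labels[zone]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=seed)
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(X_tr, y_tr)
        pred = rf.predict(X_te)
        models[zone] = rf
        scores[zone] = (accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
    return models, scores

# Illustrative random data for two of the five zones (1 = UV object, 0 = non-UV object).
rng = np.random.default_rng(1)
features = {z: rng.normal(size=(400, 12)) for z in ("A", "B")}
labels = {z: rng.integers(0, 2, size=400) for z in ("A", "B")}
models, scores = train_zone_models(features, labels)
print(scores)
```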
Show Figures

Graphical abstract
Figure 1: Location of the study area: (a,b) overview; (c) core area of Guangzhou; (d,e) Shipai village in the Tianhe district, Guangzhou (source: author).
Figure 2: Flowchart of urban village (UV) detection.
Figure 3: Urban villages in (a) Shenzhen, (b) Wuhan, and (c) Guangzhou (source of a and b: Google Earth).
Figure 4: Partition strategy in the study area: (a) partition based on the proportion of construction land; (b–f) UVs in the five zones.
Figure 5: Distribution of UV and non-UV samples in each zone.
Figure 6: Accuracy of UV detection using samples from the individual zones and from all five zones combined.
Figure 7: Classification result of UVs in the core area of Guangzhou.
Figure 8: Feature significance ranks in the five zones (zones A–E).
Figure 9: Adaptability of individual samples and models. The value in each grid cell represents the change in the UV-detection kappa coefficient in the target zone when the samples and model of the source zone are applied, compared with the original model of the target zone.
Figure 10: Urban villages in Guangzhou and neighboring cities (source: Google Earth).
17 pages, 3567 KiB  
Technical Note
Modeling Salt Marsh Vegetation Height Using Unoccupied Aircraft Systems and Structure from Motion
by Alexandra E. DiGiacomo, Clara N. Bird, Virginia G. Pan, Kelly Dobroski, Claire Atkins-Davis, David W. Johnston and Justin T. Ridge
Remote Sens. 2020, 12(14), 2333; https://doi.org/10.3390/rs12142333 - 21 Jul 2020
Cited by 27 | Viewed by 6339
Abstract
Salt marshes provide important services to coastal ecosystems in the southeastern United States. In many locations, salt marsh habitats are threatened by coastal development and erosion, necessitating large-scale monitoring. Assessing vegetation height across the extent of a marsh can provide a comprehensive analysis of its health, as vegetation height is associated with Above Ground Biomass (AGB) and can be used to track degradation or growth over time. Traditional methods to do this, however, rely on manual measurements of stem heights that can cause harm to the marsh ecosystem. Moreover, manual measurements are limited in scale and are often time- and labor-intensive. Unoccupied Aircraft Systems (UAS) can provide an alternative to manual measurements and generate continuous results across a large spatial extent in a short period of time. In this study, a multirotor UAS equipped with optical Red Green Blue (RGB) and multispectral sensors was used to survey five salt marshes in Beaufort, North Carolina. Structure-from-Motion (SfM) photogrammetry of the resultant imagery allowed for continuous modeling of the entire marsh ecosystem in three-dimensional space. From these models, vegetation height was extracted and compared to ground-based manual measurements. Vegetation heights generated from UAS data consistently under-predicted true vegetation height proportionally, and a transformation was developed to predict true vegetation height. Vegetation height may be used as a proxy for AGB and contribute to blue carbon estimates, which describe the carbon sequestered in marine ecosystems. Employing this transformation, our results indicate that UAS and SfM are capable of producing accurate assessments of salt marsh health via consistent and accurate vegetation height measurements. Full article
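The abstract notes that SfM-derived heights under-predict true vegetation height roughly proportionally and that a transformation was fitted to correct this. A minimal sketch of such a correction, assuming a simple ordinary-least-squares fit and using synthetic heights in metres (not the authors' data or code), is shown below.

```python
import numpy as np

# Synthetic placeholders: field-measured stem heights and UAS/SfM heights that
# under-predict them proportionally, with a little measurement noise.
rng = np.random.default_rng(42)
true_height = rng.uniform(0.2, 1.2, size=60)                       # metres
uas_height = 0.55 * true_height + rng.normal(0, 0.03, size=60)     # metres

# Ordinary least squares: predicted_true = a * uas_height + b
a, b = np.polyfit(uas_height, true_height, deg=1)
predicted = a * uas_height + b
rmse = float(np.sqrt(np.mean((predicted - true_height) ** 2)))
print(f"slope = {a:.2f}, intercept = {b:.2f}, RMSE = {rmse:.3f} m")
```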
(This article belongs to the Special Issue She Maps)
Show Figures

Graphical abstract
Figure 1: (a) Map of the locations of the five salt marshes sampled via Unoccupied Aircraft Systems (UAS) for Structure-from-Motion (SfM) photogrammetric vegetation height modeling. (b) Example site orthomosaic derived from SfM photogrammetry of UAS imagery.
Figure 2: Vertical accuracy apparatus (adapted from Neumeier 2005) [36].
Figure 3: Schematic describing the process of deriving three Digital Terrain Models (DTMs) and one Digital Surface Model (DSM) to generate a vegetation height model.
Figure 4: Computed vs. true regression, mapping vegetation height derived from UAS imagery to field-measured true stem heights. The 1:1 line, where computed stem height equals true stem height, is displayed for reference. Computed vegetation heights are compared across the point cloud, manual, and Light Detection and Ranging (LiDAR)-derived digital terrain models.
Figure 5: Distribution of regression-model mean squared error from 10-fold cross-validation across the three vegetation height prediction methods.
Figure 6: Distribution of differences between computed (predicted) and true (observed) vegetation height after transformation for each of the three methods: LiDAR, manual, and point cloud.
Figure 7: Predicted vegetation height and Above Ground Biomass (AGB) are significantly correlated using the (a) LiDAR, (b) manual, and (c) point cloud terrain model construction methods. AGB is calculated from the height–weight regression developed by Davis et al. and from stem count and stem height measurements in the field [6]. Predicted vegetation heights are calculated from transformed drone-derived vegetation height estimates.
Figure 8: (a) An example plot demonstrating the vertical profile (lateral obstruction) and the associated vegetation height predictions across the three DTM generation methods post-transformation. (b) Inverse of the cumulative proportion of grass area encompassed at different stem heights across fifteen sampling quadrats after transformation, with the corresponding predicted vegetation heights from each of the three methods.
21 pages, 16716 KiB  
Article
Uncovering Dryland Woody Dynamics Using Optical, Microwave, and Field Data—Prolonged Above-Average Rainfall Paradoxically Contributes to Woody Plant Die-Off in the Western Sahel
by Paulo N. Bernardino, Martin Brandt, Wanda De Keersmaecker, Stéphanie Horion, Rasmus Fensholt, Ilié Storms, Jean-Pierre Wigneron, Jan Verbesselt and Ben Somers
Remote Sens. 2020, 12(14), 2332; https://doi.org/10.3390/rs12142332 - 21 Jul 2020
Cited by 12 | Viewed by 4874
Abstract
Dryland ecosystems are frequently struck by droughts. Yet, woody vegetation is often able to recover from mortality events once precipitation returns to pre-drought conditions. Climate change, however, may impact woody vegetation resilience due to more extreme and frequent droughts. Thus, a better understanding of how woody vegetation responds to drought events is essential. We used a phenology-based remote sensing approach coupled with field data to estimate the severity and recovery rates of a large-scale die-off event that occurred in 2014–2015 in Senegal. Novel low-frequency (L-band) and high-frequency (Ku-band) passive microwave vegetation optical depth (VOD) data, together with optical MODIS data, were used to estimate woody vegetation dynamics. The relative importance of soil, human pressure, and pre-drought vegetation dynamics in influencing the woody vegetation response to the drought was assessed. The die-off in 2014–2015 represented the largest dry-season VOD drop of the studied period (1989–2017), even though the 2014 drought was not as severe as the droughts of the 1980s and 1990s. The spatially explicit Die-off Severity Index derived in this study, at 500 m resolution, highlights woody plant mortality in the study area. Soil physical characteristics strongly affected die-off severity and post-disturbance recovery, but pre-drought biomass accumulation (i.e., in areas that benefited from above-normal rainfall conditions before the 2014 drought) was the most important variable in explaining die-off severity. This study provides new evidence supporting a better understanding of the "greening Sahel", suggesting that a sudden increase in woody vegetation biomass does not necessarily imply a stable ecosystem recovery from the droughts of the 1980s. Instead, prolonged above-normal rainfall conditions prior to a drought may result in the accumulation of woody biomass, creating the basis for potentially large-scale woody vegetation die-off events due to even moderate dry spells. Full article
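The exact formulation of the Die-off Severity Index (DoSI) is not given in this listing. Purely as an illustration, the sketch below assumes a relative-drop form comparing the post-drought dry-season NDVI with the mean of the stable pre-drought years (2010–2013) referred to in Figure 2; the NDVI values are synthetic placeholders and the formula is an assumption, not the authors' definition.

```python
import numpy as np

ndvi_stab = np.array([0.32, 0.35, 0.33, 0.34])   # dry-season NDVI, stable years 2010-2013
ndvi_post = 0.21                                  # dry-season NDVI, post-drought year 2015

# Assumed relative-drop form: fraction of the pre-drought signal lost after the drought.
dosi = (ndvi_stab.mean() - ndvi_post) / ndvi_stab.mean()
print(f"DoSI = {dosi:.2f}")   # positive values indicate die-off; negatives map to 'No die-off'
```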
(This article belongs to the Special Issue Earth Observations for Ecosystem Resilience)
Show Figures

Figure 1: (a) Map of Senegal, with the study area highlighted in yellow [54]. (b) Polar seasonal plot of precipitation in the study area (2010–2016); the time series was restricted to allow visualization of possible seasonal variations in precipitation around the year of the die-off, which in fact are not observed. (c) Zoomed inset map showing a true-colour image [55] with precipitation isohyets (350–450 mm) and (d) sand content obtained from ISRIC World Soil Information [56]. While the western region has more sandy soils, the eastern region has shallow ferruginous soils. The centres of the transects where field data were collected are shown in (c,d).
Figure 2: (a) Schematic time series highlighting the periods and years used for the calculation of the DoSI and the RRI (the latter adapted from Reference [75]). The post-drought woody vegetation state is represented by the time step "Y" (with Y corresponding to 2015), and the stable period before the drought (2010–2013) as "NDVI_stab". (b) Standardized precipitation, calculated as the annual precipitation anomaly divided by the standard deviation, and summed dry-season Ku-VOD for the study area. The extreme and severe droughts of the 1980s and 1990s, respectively, the years of above-normal precipitation in 2008–2012, and the mild drought in 2014 are observable.
Figure 3: Flowchart summarizing the datasets used and the outputs of each analysis. The research question(s) related to each dataset and/or analysis are also specified.
Figure 4: (a) L-VOD monthly time series as a proxy of woody vegetation dynamics over the last 9 years in the study area. The fitted trend component (red) and the break in it (dashed line) are presented, with the respective confidence interval in blue. (b) Annual SMOS soil moisture anomaly, in percent, spatially averaged for the study area. (c) Annual minimum Ku-band VOD spatially averaged for the study area, superimposed with a fitted sinusoidal term. Periodic drops in the annual minimum VOD are observed, highlighted by the sinusoidal term, suggesting an inter-annual cyclic pattern (see Discussion).
Figure 5: (a) Dry-season Ku-VOD anomalies relative to the long-term average (1989–2017) and (b) relative to the previous year. Total dry-season Ku-VOD values were calculated by summing the values for all pixels inside the study area and were used as an estimate of the standing woody biomass. (c) Standardized precipitation, calculated as the annual precipitation anomaly divided by the standard deviation.
Figure 6: (a) The Die-off Severity Index (DoSI); darker colours represent higher die-off severity, and negative DoSI values are grouped as "No die-off". (b) Relationship between the percentage of dead woody individuals (IDs) measured in the field and the DoSI. (c) The Relative Recovery Indicator (RRI); pixels within the yellow–blue colour ramp were affected by the die-off event but presented signs of recovery, with darker colours representing areas where woody vegetation recovery, relative to the level of mortality, was higher, while brown pixels mark areas affected by the drought that did not show signs of recovery. (d) NDVI_dec trend between 2010 and 2013, representing areas with high/low woody vegetation accumulation before the die-off.
Figure 7: Photos of three transects sampled during the 2015 field campaign and revisited in 2017. (a,b) Transect 9 was located on shallow ferruginous soils; two large G. senegalensis individuals that appeared dead in the 2015 photo were no longer present in 2017 (it is common practice in the region to collect wood from dead trees and shrubs), while the acacia and B. aegyptiaca individuals remained. (c,d) Transect 16 and (e,f) transect 13 were located on more sandy soils, and the lower number of dead woody individuals in those transects is clear. Although a visual assessment of recovery after two years should be treated with caution, the growth of new G. senegalensis and B. aegyptiaca individuals is already observed in (f).
22 pages, 7144 KiB  
Article
Monitoring Vineyard Canopy Management Operations Using UAV-Acquired Photogrammetric Point Clouds
by Francisca López-Granados, Jorge Torres-Sánchez, Francisco M. Jiménez-Brenes, Oihane Oneka, Diana Marín, Maite Loidi, Ana I. de Castro and L. G. Santesteban
Remote Sens. 2020, 12(14), 2331; https://doi.org/10.3390/rs12142331 - 20 Jul 2020
Cited by 19 | Viewed by 5384
Abstract
Canopy management operations, such as shoot thinning, leaf removal, and shoot trimming, are among the most relevant agricultural practices in viticulture. However, the supervision of these tasks demands a visual inspection of the whole vineyard, which is time-consuming and laborious. The application of photogrammetric techniques to images acquired with an Unmanned Aerial Vehicle (UAV) has proved to be an efficient way to measure woody crop canopies. Consequently, the objective of this work was to determine whether the use of UAV photogrammetry allows the detection of canopy management operations. A UAV equipped with an RGB digital camera was used to acquire images with high overlap over different canopy management experiments in four vineyards with the aim of characterizing vine dimensions before and after shoot thinning, leaf removal, and shoot trimming operations. The images were processed to generate photogrammetric point clouds of every vine, which were analyzed using a fully automated object-based image analysis algorithm. Two approaches were tested in the analysis of the UAV-derived data: (1) to determine whether the comparison of the vine dimensions before and after the treatments allowed the detection of the canopy management operations; and (2) to study the vine dimensions after the operations and assess the possibility of detecting these operations using only the data from the flight after them. The first approach successfully detected the canopy management operations. Regarding the second approach, significant differences in the vine dimensions after the treatments were detected in all the experiments, and the vines under the shoot trimming treatment could be easily and accurately detected based on a fixed threshold. Full article
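As a rough illustration of how vine width per height band can be extracted from a photogrammetric point cloud (a minimal sketch, not the OBIA algorithm summarized in Figure 6), the snippet below slices a synthetic point cloud into horizontal bands and measures the across-row extent of the points in each band. The band edges and the cloud itself are placeholders.

```python
import numpy as np

def width_by_height(points, height_edges):
    """points: (N, 3) array of x (along row), y (across row), z (height), all in metres."""
    widths = []
    for z_lo, z_hi in zip(height_edges[:-1], height_edges[1:]):
        band = points[(points[:, 2] >= z_lo) & (points[:, 2] < z_hi)]
        # Across-row extent of the canopy points falling inside this height band.
        widths.append(float(band[:, 1].max() - band[:, 1].min()) if len(band) else 0.0)
    return widths

# Synthetic stand-in for one segment of a vine row.
rng = np.random.default_rng(3)
cloud = np.column_stack([rng.uniform(0, 2, 5000),        # along-row position (m)
                         rng.normal(0, 0.25, 5000),      # across-row spread (m)
                         rng.uniform(0.8, 1.8, 5000)])   # height above ground (m)
print(width_by_height(cloud, height_edges=[1.0, 1.2, 1.4, 1.6]))
```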
(This article belongs to the Special Issue Digital Agriculture)
Show Figures

Graphical abstract
Figure 1: Images of the vineyard in Azagra after shoot thinning: (a) shoot-thinned vines, (b) control vines.
Figure 2: Pictures after the leaf removal mode experiment in Traibuenas: (a) leaves removed from one side (LR-1s), (b) from two sides (LR-2s), (c) control vines.
Figure 3: Pictures after the leaf removal intensity experiment in Ausejo: (a) low-intensity leaf removal (LR-LI), (b) high-intensity leaf removal (LR-HI), (c) control vines.
Figure 4: Pictures after the shoot trimming detection experiment in Ausejo: (a) shoot-trimmed vines, (b) control vines.
Figure 5: Pictures after the shoot trimming intensity experiment in Ausejo: (a) low-intensity shoot trimming (ST-LI), (b) high-intensity shoot trimming (ST-HI), (c) control vines.
Figure 6: Graphical summary of the main steps of the Object-Based Image Analysis (OBIA) algorithm for vineyard characterization: (a) side view of the point cloud of one segment of vine row, showing the height threshold for vineyard classification; (b) boxes showing the slicing applied to the vine row; (c) front view of one of the vine slices from (b); (d) division of the previous slice into voxels.
Figure 7: Vine width at different heights after the shoot thinning experiment at Azagra (error bars represent the standard deviation). For each height, different letters indicate significant differences among treatments at p = 0.05 by a Student's t-test.
Figure 8: Vine width at different heights after the leaf removal experiment at Traibuenas (error bars represent the standard deviation). For each height, different letters indicate significant differences among treatments at p = 0.05 by a Student's t-test.
Figure 9: Vine width at different heights after the second leaf removal and trimming experiment at the Ausejo-1 vineyard (error bars represent the standard deviation). For each height, different letters indicate significant differences among treatments at p = 0.05 by a Student's t-test.
Figure 10: Vine width at different heights after the shoot trimming experiment at the Ausejo-2 vineyard (error bars represent the standard deviation). For each height, different letters indicate significant differences among treatments at p = 0.05 by a Student's t-test.
Figure 11: Vine width at different heights after the shoot trimming experiment at the Ausejo-1 vineyard (error bars represent the standard deviation). For each height, different letters indicate significant differences among treatments at p = 0.05 by a Student's t-test.
27 pages, 10352 KiB  
Article
Assessment of the Representativeness of MODIS Aerosol Optical Depth Products at Different Temporal Scales Using Global AERONET Measurements
by Yan Tong, Lian Feng, Kun Sun and Jing Tang
Remote Sens. 2020, 12(14), 2330; https://doi.org/10.3390/rs12142330 - 20 Jul 2020
Cited by 7 | Viewed by 4151
Abstract
Assessments of long-term changes in air quality and global radiative forcing at a large scale rely heavily on satellite aerosol optical depth (AOD) datasets, particularly their temporal binning products. Although some attempts focusing on the validation of long-term satellite AOD have been conducted, there is still a lack of comprehensive quantification and understanding of the representativeness of satellite AOD at different temporal binning scales. Here, we evaluated the performance of the Moderate Resolution Imaging Spectroradiometer (MODIS) AOD products at various temporal scales by comparing the MODIS AOD datasets from both the Terra and Aqua satellites with the entire global AErosol RObotic NETwork (AERONET) observation archive between 2000 and 2017. The uncertainty levels of the MODIS hourly and daily AOD products were similarly high, indicating that MODIS AOD retrievals could be used to represent daily aerosol conditions. The MODIS data showed reduced quality when integrated from the daily to the monthly scale, where the relative mean bias (RMB) changed from 1.09 to 1.21 for MODIS Terra and from 1.04 to 1.17 for MODIS Aqua, respectively. The limited availability of valid data within a month appeared to be the primary reason for the increased uncertainties in the monthly binning products, and the uncertainties associated with the monthly data could be reduced when the number of valid AOD retrievals reached 15 in one month. At all three temporal scales, the uncertainty levels of the satellite AOD products decreased with increasing AOD values. The results of this study could provide crucial information for satellite AOD users to better understand the reliability of different temporal AOD binning products and the associated uncertainties in their derived long-term trends. Full article
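The match-up statistics quoted above can be reproduced from paired AOD retrievals. The sketch below (not the authors' code) computes a relative mean bias, the relative-error standard deviation used here as the uncertainty measure, and the fraction of retrievals falling within the expected-error envelope EE = ±(0.05 + 0.15 × AOD_AERONET) quoted in Figure 5; the RMB definition (ratio of means) and the synthetic AOD pairs are assumptions.

```python
import numpy as np

def matchup_stats(aod_modis, aod_aeronet):
    """Return (RMB, uncertainty, fraction within the EE envelope) for paired AOD samples."""
    rel_err = (aod_modis - aod_aeronet) / aod_aeronet
    rmb = float(aod_modis.mean() / aod_aeronet.mean())       # assumed RMB definition
    uncertainty = float(rel_err.std(ddof=1))                  # std. dev. of relative errors
    ee = 0.05 + 0.15 * aod_aeronet                            # expected-error half width
    within_ee = float(np.mean(np.abs(aod_modis - aod_aeronet) <= ee))
    return rmb, uncertainty, within_ee

# Synthetic placeholder pairs with a mild positive bias.
rng = np.random.default_rng(7)
aeronet = rng.uniform(0.05, 0.8, 500)
modis = aeronet * 1.1 + rng.normal(0, 0.05, 500)
print(matchup_stats(modis, aeronet))
```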
(This article belongs to the Special Issue Active and Passive Remote Sensing of Aerosols and Clouds)
Show Figures

Graphical abstract
Figure 1: Spatial distribution of the 1051 global AErosol RObotic NETwork (AERONET) sites (black dots) used in this study, overlaid on Moderate Resolution Imaging Spectroradiometer (MODIS) land cover types (colour map).
Figure 2: (a) Processing steps to determine AERONET and satellite concurrent observations at various temporal scales. (b) Example showing how hourly Aerosol Optical Depth (AOD) measurements concurrent with MODIS Terra (blue circle) and Aqua (green circle) data were selected. (c) Temporal distribution of AOD measurements within one day across the global AERONET sites. (d) Example showing how daily AERONET AOD measurements were distributed within a month. (e) Correlations between the real monthly mean AERONET AOD (estimated with daily data from the entire month) and monthly mean values estimated using a subset (5–30 days) of the daily data within a month. The correlations between the two types of data plateaued when the number of days reached 15.
Figure 3: Density plots between the MODIS-derived and AERONET-observed AODs at (a–c) hourly, (d–f) daily and (g–i) monthly scales, generated using the entire global AERONET data archive between 2000 and 2017 and the concurrent satellite AOD datasets. The blue solid, black dotted, and dashed lines represent the fitted line, the 1:1 line, and the EE lines, respectively. The colour bar shows the density of the data points (scaled to 0–1).
Figure 4: Spatial distributions of R² (a1–a3), RMB (b1–b3), uncertainty (c1–c3) and %within EE (d1–d3) of the merged DT and DB (DTB)-based MODIS/Terra AOD products compared with AERONET data at hourly, daily, and monthly scales. Here, uncertainties are measured by the standard deviations of the relative errors. Black dots represent AERONET sites where the number of satellite and ground-based match-up pairs is insufficient to construct valid statistics.
Figure 5: The overall relationship between the absolute and relative errors of the satellite AOD (MODIS vs. AERONET) and the reference data (AERONET) at 550 nm at (a) hourly, (b) daily, and (c) monthly scales. The x-axis is the ground-based AOD, and the y-axis is the absolute or relative error of the MODIS Terra AOD. The data were grouped into equal bins (absolute errors: a-1, b-1, c-1; relative errors: a-2, b-2, c-2). The centre of each box represents the median, while the bottom and top of the box represent the 25th and 75th percentiles. Each boxplot represents the statistics of the absolute or relative differences (MODIS − AERONET, (MODIS − AERONET)/AERONET) for that bin. The means and standard deviations of the AERONET AODs are represented by the centres and half-widths in the horizontal direction, respectively (red). The means, medians, and 66% (1-σ) intervals of the absolute and relative differences are represented by the black squares, the centre, and the half-height in the vertical direction, respectively (also red). The blue whiskers are the 95% (2-σ) intervals. The red dash-dotted lines are linear fits to one standard deviation of the absolute and relative errors (uncertainty), whereas the green dashed lines represent the expected error (EE = ±(0.05 + 0.15 × AOD_AERONET)) and the relative expected error (REE = ±(0.15 + 0.05/AOD_AERONET)). Uncertainties (standard deviations of the relative errors) as a function of the AOD values for the three temporal scales are plotted (red lines) in a-3, b-3, and c-3, where Threshold100 was determined to be 0.06. Note that the green squares (b-1, c-1, b-2, c-2) and green solid lines (b-3, c-3) in (b) and (c) are added as references for the corresponding red ones; they represent the means and standard deviations of the absolute or relative errors based on new daily/monthly statistics whose daily/monthly collocations were averaged from the concurrent hourly/daily MODIS and corresponding AERONET AODs.
Figure 6: Spatial distribution of the probability of satellite-derived AODs having uncertainties greater than 100% (POU100) at (a) hourly, (b) daily, and (c) monthly scales. The histograms (with numbers) on the right side of the legend show the number of AERONET sites in each POU100 range.
Figure 7: Scatter plots of uncertainty vs. the probability of satellite-derived AODs having uncertainties greater than 100% (POU100) at (a) hourly, (b) daily, and (c) monthly scales.
Figure 8: Scatter plots between POU100 and the mean AERONET AOD at (a) hourly, (b) daily, and (c) monthly scales. The number of matched pairs (N), correlation coefficients (R), and significance values (P) are annotated. Note that only sites with nonzero POU100 are plotted (POU100 can equal 0 in regions with high aerosol loading), so N is smaller than the number of pairs in Figure 6.
Figure 9: (a–c) The uncertainty (i.e., standard deviation of the relative errors) of the hourly AOD product as a function of the Ångström exponent extracted from the AERONET datasets for different aerosol types. (d) The impact of aerosol type on the uncertainty of the MODIS hourly AOD product.
Figure 10: Relationships between the relative uncertainties and the hourly AERONET AOD measurements for sites dominated by different land covers: (a) forest, (b) urban, (c) bare soil, (d) cropland, and (e) water, with the corresponding values of Threshold100 (0.06, 0.07, 0.11, 0.04, and 0.03; dotted lines) at which the relative uncertainties of the DTB-based MODIS Terra AOD products reach 100%.
Figure 11: Spatial distribution of the five classes of mean CVs of AODs within one day for the 943 sites (89.7% of the 1051 global sites) where at least three daily AODs could be calculated based on the criteria described in Section 2.3. Higher CVs denote larger diurnal variations of AOD. The selected sites were divided into five clusters by CV value (24 sites with 0–10%, 558 with 10–20%, 318 with 20–30%, 28 with 30–40%, and 15 with 40–73%). The blue triangles show the same 299 sites as in Figure 4(a1,b1,c1,d1), Figure 6a and Figure 7a. The black dots represent AERONET sites with few or no retrievals.
Figure 12: (a) Scatter plot of the uncertainty (standard deviation of relative errors) of the daily MODIS retrievals against the mean CV of the AERONET AOD measurements. (b) Correlation between the hourly and daily uncertainties (outliers with uncertainties > 10 were excluded from the regression). The blue dots show the same 159 sites as in Figure 4(a2,b2,c2,d2), Figure 6b and Figure 7b.
Figure 13: Changes in different statistical measures, (a) RMB, (b) uncertainty, (c) R², and (d) mean bias, as a function of the amount of valid satellite AOD data (N_SAOD).
Figure 14: Scatter plots between the monthly AERONET and satellite AOD data, where the colour represents the number of valid MODIS Terra (a) and Aqua (b) AOD retrievals (N_SAOD) in that month. Data with large N_SAOD values show better agreement, while points far from the 1:1 line generally have small N_SAOD values (encircled with a dashed line).
Figure 15: Comparison of AERONET monthly mean AODs under a free-screening averaging strategy (y-axis, i.e., daily AERONET data are considered valid when one AERONET measurement is available within a day) and strict aggregation criteria (x-axis, i.e., at least one valid measurement is available for each hour). Samples are coloured by the mean CV of the daily observations.
Figure 16: The monthly climatological values of the valid satellite AOD data (N_SAOD) and POU100 for MODIS Terra (a,d), Aqua (b,e) and both satellites (c,f), estimated using the entire data archive between 2000 and 2017. Note that the uncertainty in the AOD retrievals over oceanic regions is not well quantified due to the difficulty of obtaining sufficient ground-truth measurements, but the climatological mean N_SAOD and POU100 values over the oceans are also shown here for reference.
18 pages, 29786 KiB  
Technical Note
UAV + BIM: Incorporation of Photogrammetric Techniques in Architectural Projects with Building Information Modeling Versus Classical Work Processes
by Carlos Rizo-Maestre, Ángel González-Avilés, Antonio Galiano-Garrigós, María Dolores Andújar-Montoya and Juan Antonio Puchol-García
Remote Sens. 2020, 12(14), 2329; https://doi.org/10.3390/rs12142329 - 20 Jul 2020
Cited by 31 | Viewed by 6102
Abstract
Current computer technology facilitates the processing of large volumes of information in architectural design teams; in parallel, recent advances in flight automation for unmanned aerial vehicles (UAVs), along with lower costs, facilitate their use to capture aerial photographs and obtain orthophotographs and 3D models of relief and terrain textures. With these technologies, 3D models can be produced that allow different geometric configurations of the distribution of construction elements on the ground to be analyzed. This article presents the process of implementing a design on its terrain, integrated into the early stages of architectural design. A methodology is proposed that covers the detailed capture of the terrain, the relationship with the architectural design environment, and its implementation on the plot. As a novelty, a perspective inverse to that of the other disciplines is presented, from the inside of the object to the outside. The proposed methodology for the use of UAVs integrates terrain capture, generation of the 3D mesh, superimposition of environmental realities and architectural design using building information modeling (BIM) technologies. In addition, it represents the beginning of a line of research on plot implementation and foundation layout using UAVs. The results obtained in the study, carried out on three different projects comparing traditional technologies with the integration of UAVs + BIM, show a clear improvement with the second option. The use of new technologies applied to the execution and control of work not only improves accuracy but also reduces errors and saves time, which undoubtedly means significant savings in costs and fewer deviations in the project. Full article
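The flight-planning arithmetic behind an automated photogrammetric mission of this kind is standard; as a minimal sketch (illustrative camera parameters and flying height, not values from the article), the snippet below computes the ground sample distance, image footprint, and the exposure and flight-line spacing needed for a target overlap.

```python
def flight_plan(height_m, focal_mm, sensor_w_mm, sensor_h_mm, img_w_px,
                forward_overlap=0.80, side_overlap=0.70):
    """Simple pinhole-camera flight-planning arithmetic for a nadir-looking survey."""
    footprint_w = sensor_w_mm / focal_mm * height_m        # across-track ground footprint (m)
    footprint_h = sensor_h_mm / focal_mm * height_m        # along-track ground footprint (m)
    gsd_m = footprint_w / img_w_px                         # ground sample distance (m/px)
    shot_spacing = footprint_h * (1.0 - forward_overlap)   # distance between exposures (m)
    line_spacing = footprint_w * (1.0 - side_overlap)      # distance between flight lines (m)
    return gsd_m, shot_spacing, line_spacing

# Example: a typical small-UAV RGB camera flown at 60 m above ground level.
gsd, shot, line = flight_plan(60, focal_mm=8.8, sensor_w_mm=13.2, sensor_h_mm=8.8,
                              img_w_px=5472)
print(f"GSD = {gsd*100:.1f} cm/px, shot spacing = {shot:.1f} m, line spacing = {line:.1f} m")
```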
(This article belongs to the Special Issue UAV Photogrammetry and Remote Sensing)
Show Figures

Graphical abstract
Figure 1: Sample of the topographic survey generated by a UAV in the illustrated forest of the University of Alicante.
Figure 2: Image showing the different stages of work in the creation of virtual building images.
Figure 3: Image showing the difference between the types of photograph studied: (left) flat terrain; (right) variable terrain.
Figure 4: Image showing the outline of a route, indicating the course and the positions at which the photographs will be taken.
Figure 5: Diagram showing the relationship between two points at which photographs are taken.
Figure 6: Representation of how data are captured on a predetermined flight. The height is calculated from the GPS data, taking the takeoff height as 0 m.
Figure 7: Unmanned aerial vehicle (UAV) flight path and image generation.
Figure 8: Image capture path preparation process: (left) contour of the plot; (right) route and shooting points.
Figure 9: Overlaps between the different photographs on the selected plot.
Figure 10: Image of the final orthophoto superimposed on the Google Maps satellite image.
Figure 11: Image from the Pix4D program representing the difference in heights at each point of the region as a blue-red gradient.
Figure 12: Image of the solid generated from the UAV data, using the information obtained from the photographs and heights.
Figure 13: (Left) volumetry generated in 3D software; (right) integration of the generated volumetry into the solid generated from the UAV flight.
Figure 14: Image showing the process of integrating a volumetry generated in 3D software with the solid representation generated from the UAV flight. The process described illustrates the proposed technique: the photogrammetric data of the plot are obtained, the new work is adjusted, and the two processes are integrated (UAV + building information modeling (BIM)).
Figure 15: Image showing the process of building stakeout before excavation with gypsum; slab perimeter stakeout on polyethylene film with a spray can coupled to the drone; marking of pillars with a traditional shape strip; and reinforcement of the foundation slab with dotting of the slab rebar by UAV and spray. The different processes show the difficulties of marking in the field the lines drawn on a computer.
23 pages, 27135 KiB  
Article
IrrMapper: A Machine Learning Approach for High Resolution Mapping of Irrigated Agriculture Across the Western U.S.
by David Ketchum, Kelsey Jencso, Marco P. Maneta, Forrest Melton, Matthew O. Jones and Justin Huntington
Remote Sens. 2020, 12(14), 2328; https://doi.org/10.3390/rs12142328 - 20 Jul 2020
Cited by 40 | Viewed by 15242
Abstract
High frequency and spatially explicit irrigated land maps are important for understanding the patterns and impacts of consumptive water use by agriculture. We built annual, 30 m resolution irrigation maps using Google Earth Engine for the years 1986–2018 for 11 western states within the conterminous U.S. Our map classifies lands into four classes: irrigated agriculture, dryland agriculture, uncultivated land, and wetlands. We built an extensive geospatial database of land cover from each class, including over 50,000 human-verified irrigated fields, 38,000 dryland fields, and over 500,000 km² of uncultivated lands. We used 60,000 point samples from 28 years to extract Landsat satellite imagery, as well as climate, meteorology, and terrain data to train a Random Forest classifier. Using a spatially independent validation dataset of 40,000 points, we found our classifier has an overall binary classification (irrigated vs. unirrigated) accuracy of 97.8%, and a four-class overall accuracy of 90.8%. We compared our results to Census of Agriculture irrigation estimates over the seven years of available data and found good overall agreement between the 2832 county-level estimates (r² = 0.90), and high agreement when estimates are aggregated to the state level (r² = 0.94). We analyzed trends over the 33-year study period, finding an increase of 15% (15,000 km²) in irrigated area in our study region. We found notable decreases in irrigated area in developing urban areas and in the southern Central Valley of California and increases in the plains of eastern Colorado, the Columbia River Basin, the Snake River Plain, and northern California. Full article
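The two headline accuracies above (binary and four-class) can be derived from a single set of validation predictions by collapsing the classes. The sketch below illustrates this with synthetic labels and an assumed class encoding (0 = irrigated, 1 = dryland agriculture, 2 = uncultivated, 3 = wetland); it is not the authors' validation code.

```python
import numpy as np

# Synthetic stand-in for a 40,000-point validation set and its predictions.
rng = np.random.default_rng(11)
y_true = rng.integers(0, 4, size=40_000)
y_pred = np.where(rng.random(40_000) < 0.9, y_true, rng.integers(0, 4, size=40_000))

four_class_acc = float(np.mean(y_pred == y_true))
# Collapse to irrigated vs. unirrigated: class 0 against everything else.
binary_acc = float(np.mean((y_pred == 0) == (y_true == 0)))
print(f"4-class accuracy: {four_class_acc:.3f}, binary accuracy: {binary_acc:.3f}")
```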
Show Figures

Figure 1: The 11 states of the Western U.S. included in the study area, displayed with the 186 Landsat scene footprints from which imagery was used.
Figure 2: Training data from the irrigated class used to train IrrMapper. Table 1 shows the number of polygons and the total irrigated training area from each class in each of the 11 Western States.
Figure 3: Training data from the unirrigated classes used to train IrrMapper (i.e., wetlands, dryland agriculture, and uncultivated lands). Table 1 shows the number of polygons and the total training area from each class in each of the 11 Western States.
Figure 4: Precipitation during the years in which irrigation was verified for IrrMapper, in millimetres; the bar height shows the difference from mean statewide precipitation (i.e., the horizontal axis). Precipitation normals are the 100-year statewide average precipitation (1901–2000) during the 12 months ending in September of the year specified. All subplots range over 1986–2018, as shown at lower left.
Figure 5: Training data sample points from the four classes used to train IrrMapper (i.e., irrigated, wetland, dryland agriculture, and uncultivated). Points were randomly sampled from within a 20-m interior buffer of the training data GIS polygons.
Figure 6: Irrigation status as predicted for the year 2018 by IrrMapper, at 30-m resolution.
Figure 7: The fractional importance of the top 10 variables from the IrrMapper Random Forest model (0.40 accuracy contribution). Variable importance was calculated over ten iterations of model training using a total of 132 data inputs.
Figure 8: Comparison of NASS Census of Agriculture and IrrMapper estimates of county-level irrigated area. The comparison covers the 412 counties within the study region.
Figure 9: Comparison of NASS Census of Agriculture and IrrMapper estimates of irrigated area over the study domain. IrrMapper roughly follows the same pattern in irrigated area as the semi-decadal NASS estimates of total irrigated area.
Figure 10: The normalized difference of IrrMapper and NASS county-wide Census of Agriculture mean irrigated-area estimates over the years of available NASS data (i.e., 1987, 1992, 1997, 2002, 2007, 2012, and 2017). Positive values indicate where IrrMapper made larger estimates than NASS.
Figure 11: The statewide and study-area sums of irrigated area predicted by IrrMapper over the 33-year study period, normalized to one. Highlighted is the year 2012, the only year in which the only available USGS atmospherically corrected Landsat surface reflectance data were affected by the scan line corrector hardware failure on the Landsat 7 ETM+ mission.
Figure 12: Change in 'irrigation equipped' area over the course of the study period, where locations with two or more years of detected irrigation in the periods 1986–1990 and 2014–2018 are considered equipped.
20 pages, 7000 KiB  
Article
Integrating MNF and HHT Transformations into Artificial Neural Networks for Hyperspectral Image Classification
by Ming-Der Yang, Kai-Hsiang Huang and Hui-Ping Tsai
Remote Sens. 2020, 12(14), 2327; https://doi.org/10.3390/rs12142327 - 20 Jul 2020
Cited by 11 | Viewed by 3754
Abstract
The critical issue facing hyperspectral image (HSI) classification is the imbalance between dimensionality and the number of available training samples. This study attempted to solve the issue by proposing an integrating method using minimum noise fractions (MNF) and Hilbert–Huang transform (HHT) transformations into [...] Read more.
The critical issue facing hyperspectral image (HSI) classification is the imbalance between dimensionality and the number of available training samples. This study attempted to solve the issue by proposing an integrating method using minimum noise fractions (MNF) and Hilbert–Huang transform (HHT) transformations into artificial neural networks (ANNs) for HSI classification tasks. MNF and HHT function as a feature extractor and image decomposer, respectively, to minimize influences of noises and dimensionality and to maximize training sample efficiency. Experimental results using two benchmark datasets, Indian Pine (IP) and Pavia University (PaviaU) hyperspectral images, are presented. With the intention of optimizing the number of essential neurons and training samples in the ANN, 1 to 1000 neurons and four proportions of training samples were tested, and the associated classification accuracies were evaluated. For the IP dataset, the results showed a remarkable classification accuracy of 99.81% with a 30% training sample from the MNF1–14+HHT-transformed image set using 500 neurons. Additionally, a high accuracy of 97.62% using only a 5% training sample was achieved for the MNF1–14+HHT-transformed images. For the PaviaU dataset, the highest classification accuracy was 98.70% with a 30% training sample from the MNF1–14+HHT-transformed image using 800 neurons. In general, the accuracy increased as the number of neurons increased and as the number of training samples increased. However, the accuracy improvement curve became relatively flat when more than 200 neurons were used, which revealed that using more discriminative information from transformed images can reduce the number of neurons needed to adequately describe the data as well as reduce the complexity of the ANN model. Overall, the proposed method opens new avenues in the use of MNF and HHT transformations for HSI classification with outstanding accuracy performance using an ANN. Full article
(This article belongs to the Special Issue Advanced Machine Learning Approaches for Hyperspectral Data Analysis)
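A minimal sketch of the dimensionality-reduction plus ANN pipeline the abstract describes. scikit-learn provides neither MNF nor HHT, so PCA stands in for the feature extraction step here, and synthetic pixels replace the IP/PaviaU scenes; the band count, class count, and neuron sweep are taken from the abstract, everything else is illustrative.

```python
# Illustrative sketch: reduce spectral dimensionality, then sweep the hidden-layer size of an ANN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 220))            # 220 spectral bands, as in the Indian Pine scene
y = rng.integers(0, 16, size=2000)          # 16 land-cover classes (placeholder labels)

X_reduced = PCA(n_components=14).fit_transform(X)   # keep 14 components (cf. MNF1-14)

# 30% of labelled pixels for training, the rest held out, echoing the paper's best case.
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, train_size=0.3, random_state=0)

for neurons in (100, 200, 500):             # the study swept 1-1000 hidden neurons
    ann = MLPClassifier(hidden_layer_sizes=(neurons,), max_iter=300, random_state=0)
    ann.fit(X_tr, y_tr)
    print(neurons, "neurons -> overall accuracy:", ann.score(X_te, y_te))
```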
Show Figures

Graphical abstract
Figure 1: The flow chart of the proposed classification process illustrated with the IP dataset.
Figure 2: Illustration of the 14 sets of BEMCs with BIMFs and residue image for the IP dataset.
Figure 3: Classification results with overall accuracy values for IP dataset original 220 band images, MNF-transformed images, and MNF+HHT-transformed images with four training sample proportions.
Figure 4: Accuracy comparisons between the original 220 band IP dataset, the MNF-transformed image sets, and the MNF+HHT-transformed image sets.
Figure 5: Classification results with overall accuracy values for PaviaU dataset original 103 band images, MNF-transformed images, and MNF+HHT-transformed images with four training sample proportions.
Figure 6: Accuracy comparisons between the original 103 band PaviaU dataset, the MNF-transformed image sets, and the MNF+HHT-transformed image sets.
Figure 7: Accuracy distribution of the IP MNF1–10+HHT image set, varying with training sample proportion and hidden layers.
Figure 8: Accuracy distribution of the IP MNF1–14+HHT image set, varying with training sample proportion and hidden layers.
Figure 9: Accuracy distribution of the PaviaU MNF1–10+HHT image set, varying with training sample proportion and hidden layers.
Figure 10: Accuracy distribution of the PaviaU MNF1–14+HHT image set, varying with training sample proportion and hidden layers.
Figure 11: Estimated number of parameters with the associated number of input layers (bands).
Figure 12: IP MNF1–10+HHT image classification of the highest accuracies, varying with training sample proportions (the accuracy values are provided in parentheses).
Figure 13: IP MNF1–14+HHT image classification of the highest accuracies, varying with training sample proportions (the accuracy values are provided in parentheses).
Figure 14: MNF1 to MNF14 images of the PaviaU dataset.
23 pages, 1159 KiB  
Article
Hierarchical Sparse Nonnegative Matrix Factorization for Hyperspectral Unmixing with Spectral Variability
by Tatsumi Uezato, Mathieu Fauvel and Nicolas Dobigeon
Remote Sens. 2020, 12(14), 2326; https://doi.org/10.3390/rs12142326 - 20 Jul 2020
Cited by 7 | Viewed by 3490
Abstract
Accounting for endmember variability is a challenging issue when unmixing hyperspectral data. This paper models the variability that is associated with each endmember as a conical hull defined by extremal pixels from the data set. These extremal pixels are considered as so-called prototypal [...] Read more.
Accounting for endmember variability is a challenging issue when unmixing hyperspectral data. This paper models the variability that is associated with each endmember as a conical hull defined by extremal pixels from the data set. These extremal pixels are considered as so-called prototypal endmember spectra that have a meaningful physical interpretation. Capitalizing on this data-driven modeling, the pixels of the hyperspectral image are then described as combinations of these prototypal endmember spectra weighted by bundling coefficients and spatial abundances. The proposed unmixing model not only extracts and clusters the prototypal endmember spectra, but also estimates the abundances of each endmember. The performance of the approach is illustrated by experiments conducted on simulated and real hyperspectral data, where it outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue New Advances on Sub-pixel Processing: Unmixing and Mapping Methods)
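A minimal numpy sketch of sparsity-regularized nonnegative matrix factorization for linear unmixing: a pixel-by-band matrix Y is factorized into abundances A and endmember spectra S with Y ≈ A S. This is a generic multiplicative-update scheme on synthetic data, shown only to illustrate the building block; it is not the hierarchical HSNMF model or bundling scheme of the paper.

```python
# Generic sparse-NMF unmixing sketch with multiplicative updates (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_endmembers = 500, 100, 4
S_true = rng.uniform(0.0, 1.0, size=(n_endmembers, n_bands))        # synthetic endmember spectra
A_true = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)         # abundances summing to one
Y = A_true @ S_true + 0.01 * rng.normal(size=(n_pixels, n_bands))    # noisy mixed pixels
Y = np.clip(Y, 1e-9, None)

A = rng.uniform(size=(n_pixels, n_endmembers))
S = rng.uniform(size=(n_endmembers, n_bands))
lam, eps = 0.01, 1e-9                                                # L1 weight promoting sparse abundances

for _ in range(500):
    # Multiplicative updates keep A and S nonnegative.
    A *= (Y @ S.T) / (A @ S @ S.T + lam + eps)
    S *= (A.T @ Y) / (A.T @ A @ S + eps)

print("reconstruction RMSE:", np.sqrt(np.mean((Y - A @ S) ** 2)))
```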
Show Figures

Graphical abstract
Figure 1: Endmember variability described by the conical hulls spanned by prototypal endmembers. The three sets, depicted in red, green, and blue, are associated with the endmember bundles.
Figure 2: First row: Endmember variability representing spectra of vegetation. Second row: Endmember variability displayed by principal components.
Figure 3: Hierarchical sparsity at different levels. Red squares represent active coefficients while gray ones represent inactive coefficients.
Figure 4: Flowchart of the overall unmixing procedure. The hierarchical sparse nonnegative matrix factorization (HSNMF) algorithm is introduced in Section 4.1 (see also Algorithm 1) and the initialization step is detailed in Section 4.2 (see also Algorithm 2).
Figure 5: Synthetically generated abundances of 4 endmember classes.
Figure 6: Real hyperspectral images: (left) MUESLI image and (right) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image.
Figure 7: MUESLI image—1st row: endmember spectra extracted by SVMAX. 2nd row: endmember bundles extracted by EBE. 3rd row: prototypal endmember spectra. 4th row: pixel-wise endmember spectra estimated by HSNMF.
Figure 8: AVIRIS image—1st row: endmember spectra extracted by SVMAX. 2nd row: endmember bundles extracted by EBE. 3rd row: prototypal endmember spectra. 4th row: pixel-wise endmember spectra estimated by HSNMF.
Figure 9: MUESLI image—estimated abundance maps. From top to bottom: tree, crop soil, grass and bare soil.
Figure 10: AVIRIS image—estimated abundance maps. From top to bottom: tree, crop soil, grass and bare soil.
21 pages, 7459 KiB  
Article
Conversion of Agricultural Land for Urbanization Purposes: A Case Study of the Suburbs of the Capital of Warmia and Mazury, Poland
by Katarzyna Kocur-Bera and Adrian Pszenny
Remote Sens. 2020, 12(14), 2325; https://doi.org/10.3390/rs12142325 - 20 Jul 2020
Cited by 18 | Viewed by 5318
Abstract
Population growth, economic globalization and the launch of market economy instruments have become the main triggers for processes related to the anthropogenization of space. According to Organisation for Economic Co-operation and Development (OECD) statistics, the developed area indication tripled in the last 25 [...] Read more.
Population growth, economic globalization and the launch of market economy instruments have become the main triggers for processes related to the anthropogenization of space. According to Organisation for Economic Co-operation and Development (OECD) statistics, the developed area indication tripled in the last 25 years. Humans keep appropriating more natural and semi-natural areas, which entails specific social, economic and environmental consequences. Provisions in some countries’ laws and some economic factors encourage investors to engage in urbanization. The authors of this study noticed a research gap in the analysis of suburban areas in this topic. Our research aimed to analyze the conversion of plots of land used for agricultural purposes into urbanized land in the city’s suburban zone, in areas of high landscape and natural value. We focused on the analysis of geodetic and legal divisions of plots of land and analyzed the conditions of plots of land “ex ante” and “ex post” and the changes in their values. To achieve the research objective, we used Corine Land Cover (CLC) data for various time intervals, orthophotomaps (using the Web Map Service browsing service compliant with Open Geospatial Consortium standards), cadastral data, administrative decisions, data from the real estate market, spatial analyses and statistical modeling (linear, non-linear and stepwise regression). In general, the CLC data resolution enables analysis at regional or national levels. We used them innovatively at the local level because CLC data allowed us to notice the development of the area over time. Detailed research confirmed that, in the studied area, the conversion of agricultural land into developed areas results from economic factors. The division procedure increases the plot value by about 10%. However, the effects of uncontrolled urbanization, which we are currently dealing with, generate long term social and economic losses, difficulties in the labour market and may become a barrier to development. Full article
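A minimal sketch of the attribute-based statistical modeling mentioned in the abstract (linear regression of plot price on plot attributes). The attribute set, coefficients, and data are hypothetical placeholders standing in for the cadastral and market data of the study, not the authors' model.

```python
# Illustrative linear regression of transaction price on plot attributes (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(500, 5000, n),     # plot area in m^2
    rng.integers(1, 6, n),         # access-to-plot rating (1-5)
    rng.integers(1, 6, n),         # shape rating (1-5)
    rng.uniform(0, 60, n),         # months between transaction and analysis (cf. KOD)
])
price = 0.05 * X[:, 0] + 40 * X[:, 1] + 30 * X[:, 2] - 0.5 * X[:, 3] + rng.normal(0, 10, n) + 100

model = LinearRegression().fit(X, price)
print("attribute weights:", model.coef_)
print("R^2:", model.score(X, price))
```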
Show Figures

Graphical abstract
Figure 1: Average percentage increase in developed surface area between 1990 and 2014 in the 40 most developed countries of the world. Source: own study [10].
Figure 2: Types of possible real estate divisions. Source: own studies [48,49,50].
Figure 3: Research area. Source: own study [54].
Figure 4: Comparison of information on land cover according to Corine Land Cover (CLC) data from (a) 2006; (b) 2012; (c) 2018 and orthophotos (www.geoportal.gov.pl) from (a’) 2006; (b’) 2012; (c’) 2018. Source: [54,55,56].
Figure 5: SQL query attributing ratings to real estate according to the adopted rating scale for the (a) X_area (plot area), (b) X_access (access to the plot) and (c) X_shape (number of border points/shape of plots) attributes. Source: own study.
Figure 6: (a) Relationship between surface area and price in the examined real estate. (b) Relationship between plot area (up to 5000 m²) and time interval. (c) Relationship between plot area (above 5000 m²) and time interval. Source: own study. Legend: KOD—the time interval covering the number of months from the real estate transaction to the time of the analysis.
Figure 7: Comparison of the weights of attributes for plots in “ex ante” and “ex post” conditions.
Figure 8: Ratios (%) between the value from the model and the observed “ex ante” value, as well as the forecast and observed values (%). Source: own study.
Figure 9: Suburbanization zones in (a) the city of Łódź (Poland) and (b) the city of Cologne (Germany). Source: own study [54].
1 pages, 175 KiB  
Correction
Correction: Otón, G., et al. Global Detection of Long-Term (1982–2017) Burned Area with AVHRR-LTDR Data. Remote Sensing 2019, 11, 2079
by Gonzalo Otón, Rubén Ramo, Joshua Lizundia-Loiola and Emilio Chuvieco
Remote Sens. 2020, 12(14), 2324; https://doi.org/10.3390/rs12142324 - 20 Jul 2020
Cited by 1 | Viewed by 2009
Abstract
The authors wish to make the following corrections to this paper [...] Full article
(This article belongs to the Section Remote Sensing Image Processing)
25 pages, 14750 KiB  
Article
LiDAR/RISS/GNSS Dynamic Integration for Land Vehicle Robust Positioning in Challenging GNSS Environments
by Ahmed Aboutaleb, Amr S. El-Wakeel, Haidy Elghamrawy and Aboelmagd Noureldin
Remote Sens. 2020, 12(14), 2323; https://doi.org/10.3390/rs12142323 - 19 Jul 2020
Cited by 23 | Viewed by 5776
Abstract
The autonomous vehicles (AV) industry has a growing demand for reliable, continuous, and accurate positioning information to ensure safe traffic and for other various applications. Global navigation satellite system (GNSS) receivers have been widely used for this purpose. However, GNSS positioning accuracy deteriorates [...] Read more.
The autonomous vehicles (AV) industry has a growing demand for reliable, continuous, and accurate positioning information to ensure safe traffic and for other various applications. Global navigation satellite system (GNSS) receivers have been widely used for this purpose. However, GNSS positioning accuracy deteriorates drastically in challenging environments such as urban environments and downtown cores. Therefore, inertial sensors are widely deployed inside the land vehicle for various purposes, including integration with GNSS receivers to provide positioning information that can bridge potential GNSS failures. However, in dense urban areas and downtown cores where GNSS receivers may incur prolonged outages, the integrated positioning solution may become prone to severe drift, resulting in substantial position errors. Therefore, it is becoming necessary to include other sensors and systems that can be available in future land vehicles to be integrated with both the GNSS receivers and inertial sensors to enhance the positioning performance in such challenging environments. This work aims to design and examine the performance of a multi-sensor system that fuses the GNSS receiver data with not only the three-dimensional reduced inertial sensor system (3D-RISS), but also with the three-dimensional point cloud of an onboard light detection and ranging (LiDAR) system. In this paper, a comprehensive LiDAR processing and odometry method is developed to provide a continuous and reliable positioning solution. In addition, a multi-sensor Extended Kalman filtering (EKF)-based fusion is developed to integrate the LiDAR positioning information with both GNSS and 3D-RISS and to utilize the LiDAR updates to limit the drift in the positioning solution, even in challenging or completely denied GNSS environments. The performance of the proposed positioning solution is examined using several road test trajectories in both Kingston and Toronto downtown areas involving different vehicle dynamics and driving scenarios. The proposed solution provided a performance improvement of 64% over the standalone inertial solution. Over a GNSS outage of 10 min and 2 km of distance traveled, our solution achieved position errors of less than 2% of the distance traveled. Full article
(This article belongs to the Special Issue Positioning and Navigation in Remote Sensing)
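A generic sketch of the Kalman-style position update used when fusing a dead-reckoning prediction with an external position fix (GNSS or LiDAR odometry). It is a plain linear Kalman filter on a 2D constant-velocity model with placeholder noise settings, not the paper's full LiDAR/RISS/GNSS EKF design or switching logic.

```python
# Generic Kalman filter predict/update on a 2D constant-velocity model (illustrative).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],     # state: [east, north, v_east, v_north]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # measurement: position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise (tuning placeholder)
R = 0.5 * np.eye(2)              # measurement noise (e.g., GNSS/LiDAR position accuracy)

x = np.zeros(4)
P = np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

for t in range(50):
    x, P = predict(x, P)
    z = np.array([0.5 * t * dt, 0.2 * t * dt])   # placeholder position fix
    x, P = update(x, P, z)
print("fused state:", x)
```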
Show Figures

Figure 1: 3D-RISS Mechanization Block Diagram.
Figure 2: Point Cloud Preparation Flowchart.
Figure 3: Ego Vehicle Points Labeled in Red in the Point Cloud.
Figure 4: Ground Points Labeled in Yellow in the Point Cloud.
Figure 5: Arbitrary Points A and B with respect to the Sensor at O.
Figure 6: Point Cloud Clustered Based on Euclidean Distance.
Figure 7: (a) Raw Point Cloud with red circles highlighting the most distinctive points that are removed; (b) Denoised Point Cloud.
Figure 8: (a) Normal Point Cloud; (b) Downsampled Point Cloud.
Figure 9: Root Mean Squared (RMS) error versus Speed Plot Highlighting the Average RMS error Calculated.
Figure 10: LiDAR/RISS/GNSS System Architecture.
Figure 11: Switching Algorithm Flowchart.
Figure 12: (a) Minivan Used for the Trajectories; (b) Sensors Mounted on the Rooftop.
Figure 13: Sensor Setup on the Testbed.
Figure 14: First Trajectory Highlighting the Simulated Outages.
Figure 15: Navigation Solution for 3D-RISS, LiDAR/RISS, and NovAtel Reference during Simulated GNSS Outage 1, First Trajectory.
Figure 16: 2D Position Error for Simulated Outage 1, First Trajectory.
Figure 17: Navigation Solution for 3D-RISS, LiDAR/RISS, and NovAtel Reference during Simulated GNSS Outage 2, First Trajectory.
Figure 18: 2D Position Error for Outage 2, First Trajectory.
Figure 19: Second Trajectory Highlighting the Scenario.
Figure 20: Navigation Solution for 3D-RISS, GNSS, LiDAR/RISS/GNSS, and NovAtel Reference During the Selected Scenario.
Figure 21: 2D Position Error for Second Trajectory Scenario.
Figure 22: GDOP Values Before, During, and After the Scenario Highlighting the Switching Between the Systems.
Figure 23: The SD of the 3D-position Provided by the GNSS Before, During, and After the Scenario Highlighting the Switching Between the Systems.
Figure 24: The RMSE of the ICP Calculated in the Selected Scenario.
Figure 25: The w_z Measured by the IMU in the Selected Scenario.
Figure 26: Navigation Solution for 3D-RISS, GNSS, LiDAR/RISS/GNSS, and NovAtel Reference at the Beginning of the Scenario.
Figure 27: Navigation Solution for 3D-RISS, GNSS, LiDAR/RISS/GNSS, and NovAtel Reference During the Scenario (First Switch).
Figure 28: Navigation Solution for 3D-RISS, GNSS, LiDAR/RISS/GNSS, and NovAtel Reference During the Scenario (Second Switch).
Figure 29: Navigation Solution for 3D-RISS, GNSS, LiDAR/RISS/GNSS, and NovAtel Reference During the Scenario (Third Switch).
19 pages, 8334 KiB  
Technical Note
Validation of the EGSIEM-REPRO GNSS Orbits and Satellite Clock Corrections
by Andreja Sušnik, Andrea Grahsl, Daniel Arnold, Arturo Villiger, Rolf Dach, Gerhard Beutler and Adrian Jäggi
Remote Sens. 2020, 12(14), 2322; https://doi.org/10.3390/rs12142322 - 19 Jul 2020
Cited by 5 | Viewed by 2716
Abstract
In the framework of the European Gravity Service for Improved Emergency Management (EGSIEM) project, consistent sets of state-of-the-art reprocessed Global Navigation Satellite System (GNSS) orbits and satellite clock corrections have been generated. The reprocessing campaign includes data starting in 1994 and follows the [...] Read more.
In the framework of the European Gravity Service for Improved Emergency Management (EGSIEM) project, consistent sets of state-of-the-art reprocessed Global Navigation Satellite System (GNSS) orbits and satellite clock corrections have been generated. The reprocessing campaign includes data starting in 1994 and follows the Center for Orbit Determination in Europe (CODE) processing strategy, in particular exploiting the extended version of the empirical CODE Orbit Model (ECOM). Satellite orbits are provided for Global Positioning System (GPS) satellites since 1994 and for Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS) since 2002. In addition, a consistent set of GPS satellite clock corrections with 30 s sampling has been generated from 2000 and with 5 s sampling from 2003 onwards. For the first time in a reprocessing scheme, GLONASS satellite clock corrections with 30 s sampling from 2008 and 5 s from 2010 onwards were also generated. The benefit with respect to earlier reprocessing series is demonstrated in terms of polar motion coordinates. GNSS satellite clock corrections are validated in terms of completeness, Allan deviation, and precise point positioning (PPP) using terrestrial stations. In addition, the products herein were validated with Gravity Recovery and Climate Experiment (GRACE) precise orbit determination (POD) and Satellite Laser Ranging (SLR). The dataset is publicly available. Full article
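A minimal numpy sketch of the (overlapping) Allan deviation used to characterize satellite clock stability from a time series of clock corrections, sampled every tau0 seconds. The synthetic white-noise phase data below stand in for real clock files; only the statistic itself is illustrated.

```python
# Overlapping Allan deviation from phase data x(t) (illustrative, synthetic clock series).
import numpy as np

def allan_deviation(x, tau0, m):
    """Overlapping Allan deviation at averaging time m * tau0 for phase data x (seconds)."""
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]          # second differences of phase
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)

rng = np.random.default_rng(0)
tau0 = 30.0                                           # 30 s clock sampling
x = np.cumsum(rng.normal(0, 1e-10, 2880))             # one day of synthetic clock phase

for m in (1, 2, 4, 8, 16, 32):
    print(f"tau = {m * tau0:6.0f} s  ADEV = {allan_deviation(x, tau0, m):.3e}")
```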
Show Figures

Graphical abstract
Figure 1: Success rate of the ambiguity resolution for Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS) (blue) and Global Positioning System (GPS) (red) observations for the period 2000–2015.
Figure 2: Number of stations delivering high-rate Receiver Independent Exchange Format (RINEX2) data for the period 2003–2015.
Figure 3: Geographical distribution of stations providing high-rate RINEX2 data on January 1, 2003 (black dots), and on January 1, 2014 (red stars).
Figure 4: (a) Time series of polar motion misclosures referring to the 2nd IGS reprocessing campaign (IGS-repro02). (b) Time series of polar motion misclosures referring to the reprocessing campaign carried out in the frame of the EGSIEM project (EGSIEM-REPRO). The top shows misclosures in the x coordinates and the bottom shows misclosures in the y coordinates, plotted with a shift along the vertical axis for clarity. Polar motion misclosures referring to the 1-day solution are shown in red, while polar motion misclosures referring to the 3-day solution are presented in blue.
Figure 5: (a) Power spectrum of polar motion misclosures referring to the 2nd IGS reprocessing campaign (IGS-repro02). (b) Power spectrum of polar motion misclosures referring to the reprocessing campaign carried out in the frame of the EGSIEM project (EGSIEM-REPRO). Misclosures in the x coordinates are shown on the top and misclosures in the y coordinates are shown on the bottom, plotted with a shift along the vertical axis for clarity. Polar motion misclosures referring to the 1-day solution are shown in red, while polar motion misclosures referring to the 3-day solution are presented in blue. In both cases, the chosen time interval is 2000–2014.
Figure 6: (a) Power spectrum of polar motion misclosures for the 3-day solution referring to IGS-repro02. (b) Power spectrum of polar motion misclosures for the 3-day solution referring to EGSIEM-REPRO. The x coordinates are shown on the top and the y coordinates on the bottom. In both cases, the chosen time interval is 2000–2013.
Figure 7: (a) The completeness of 30 s GLONASS satellite clock corrections (each GLONASS satellite is shown with its pseudorandom noise (PRN) number on the y axis). (b) The completeness of 5 s GLONASS satellite clock corrections. Note that the 5 s GLONASS clock corrections are only available from the end of 2010 onwards.
Figure 8: (a) Allan deviations for the satellites with space vehicle numbers (SVN) R747, G063, and G036 for clock corrections with different samplings. (b) Allan deviations for the satellites with SVN R724, G063, and G036 for clock corrections with different samplings. Both figures refer to October 27, 2013.
Figure 9: (a) Root-Mean-Square (RMS) of the linear fit for the selected GNSS satellite clock corrections shown in Figure 8a. (b) RMS of the linear fit for the selected GNSS satellite clock corrections shown in Figure 8b.
Figure 10: (a) RMS in mm of the kinematic coordinates in the up (top), north (middle), and east (bottom) components for the WTZR (Wettzell, Germany) station. (b) RMS in mm of the kinematic coordinates in the up (top), north (middle), and east (bottom) components for the KIRU (Kiruna, Sweden) station. For both stations, the selected period is from December 11, 2013, to December 17, 2013.
Figure 11: (a) Estimated kinematic coordinates in the north component for two different sampling types, referring to the GPS+GLONASS solution. (b) Estimated kinematic coordinates in the north component for two different sampling types, referring to the GPS solution. (c) Estimated kinematic coordinates in the north component for two different sampling types, referring to the GLONASS solution. All three solutions refer to the first hour of December 17, 2013.
Figure 12: (a) Allan deviations of the estimated kinematic coordinates in the east component for two different samplings using GLONASS-only (GLO) observations and satellite clock corrections. (b) Allan deviations of the estimated kinematic coordinates in the north component for two different samplings using GLONASS-only observations and satellite clock corrections. (c) Allan deviations of the estimated kinematic coordinates in the up component for two different samplings using GLONASS-only observations and satellite clock corrections. In all three figures, the Allan deviations were calculated over the entire day.
Figure 13: (a) SLR residuals to GLONASS-M orbits from IGS-repro02 using the original Empirical CODE Orbit Model (ECOM). (b) SLR residuals to GLONASS-M orbits from EGSIEM-REPRO using the extended ECOM. For both figures, residuals between January 2003 and December 2013 are shown. Observations for four GLONASS satellites (space vehicle numbers (SVN) 723, 725, 736, 737) were excluded due to anomalous patterns; residuals of these GLONASS satellites increased a certain time after launch [29]. Furthermore, residuals with absolute beta angles smaller than 15° are not shown due to unmodeled attitude behavior during eclipses. The black line indicates the linear regression of the SLR residuals as a function of the elongation angle.
Figure 14: (a) RMS of ionosphere-free carrier-phase residuals of kinematic precise orbit determination (POD) for GRACE-A. (b) RMS of ionosphere-free carrier-phase residuals of kinematic POD for GRACE-B. The numbers indicate the average RMS values over the entire year. In both figures, values in red were calculated using the GPS orbits and clocks from CODE’s contribution to the first IGS reprocessing campaign (repro1), while the values in green present the results obtained with EGSIEM-REPRO products.
Figure 15: SLR residuals as a function of the solar beta angle with respect to microwave-based orbits of GLONASS satellite SVN 747 over a three-year time span (January 2012 to December 2014): (a) computed using the D0B1 parametrization of the ECOM model; (b) computed using the D4B1 parametrization of the ECOM model; (c) computed using the D2B1 parametrization of the ECOM model.
20 pages, 5749 KiB  
Article
Hierarchical Modeling of Street Trees Using Mobile Laser Scanning
by Jingzhong Xu, Jie Shan and Ge Wang
Remote Sens. 2020, 12(14), 2321; https://doi.org/10.3390/rs12142321 - 19 Jul 2020
Cited by 13 | Viewed by 3040
Abstract
This paper proposes a novel method to reconstruct hierarchical 3D tree models from Mobile Laser Scanning (MLS) point clouds. Starting with a neighborhood graph from the tree point clouds, the method treats the root point of the tree as a source point and [...] Read more.
This paper proposes a novel method to reconstruct hierarchical 3D tree models from Mobile Laser Scanning (MLS) point clouds. Starting with a neighborhood graph from the tree point clouds, the method treats the root point of the tree as a source point and determines an initial tree skeleton by using the Dijkstra algorithm. The initial skeleton lines are then optimized by adjusting line connectivity and branch nodes based on morphological characteristics of the tree. Finally, combined with the tree point clouds, the radius of each branch skeleton node is estimated and flat cones are used to simulate tree branches. A local triangulation method is used to connect the gaps between two joint flat cones. Demonstrated on street trees of different sizes and point densities, the proposed method can extract street tree skeletons effectively, generate tree models with higher fidelity, and reconstruct trees with different levels of detail according to the skeleton level. It was found that the tree modeling error is related to the average point spacing, with a maximum error at the coarsest level 6 of about 0.61 times the average point spacing. The main source of the modeling error is the self-occlusion of tree branches. Such findings are both theoretically and practically useful for generating high-precision tree models from point clouds. The developed method can be an alternative to current ones that struggle to balance modeling efficiency and modeling accuracy. Full article
(This article belongs to the Section Forest Remote Sensing)
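A minimal sketch of the skeleton-initialization idea in the abstract: build a neighborhood graph over the tree points and take Dijkstra shortest paths from the root point, so that the predecessor links form an initial skeleton tree. Synthetic points stand in for MLS data, and the paper's skeleton optimization and cone-fitting steps are not reproduced.

```python
# Illustrative Dijkstra-based initial skeleton from a k-NN graph over a point cloud.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
points = rng.normal(size=(300, 3)) * [0.5, 0.5, 2.0]      # placeholder tree point cloud
root = int(np.argmin(points[:, 2]))                        # lowest point taken as the root

graph = kneighbors_graph(points, n_neighbors=8, mode="distance")   # neighborhood graph
dist, pred = dijkstra(graph, directed=False, indices=root, return_predecessors=True)

# Each point's predecessor defines an edge of the initial skeleton graph.
edges = [(i, pred[i]) for i in range(len(points)) if pred[i] >= 0]
print("root index:", root, "| skeleton edges:", len(edges), "| max geodesic distance:", dist.max())
```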
Show Figures

Figure 1: Workflow of the proposed hierarchical tree modeling approach.
Figure 2: Schematic of skeleton line connection adjustment (A is a node to be processed and the parent node of A needs to be adjusted from B to C).
Figure 3: Schematic of branching node processing: (a) Branching node before processing; (b) Branching node after processing. C and D need to be merged by the angle criterion. The new node C’ is generated at the average position of C and D and becomes the parent node of A, which needs to be adjusted for the angle condition. The parent node of A is finally changed from C’ to B.
Figure 4: Schematic diagram of the parent–child node radius (P is the parent node of node C1; L1, L2, L3 and L4 are the lengths of each skeleton line).
Figure 5: Schematic of local triangulation in the gap between segments of a tree trunk. The point Ci needs to be interpolated around the circle of the flat cones.
Figure 6: Point clouds of the study area and segmented street trees.
Figure 7: Skeleton line extraction from tree point clouds: (a) Tree point clouds; (b) Initial tree skeleton; (c) Optimized tree skeleton.
Figure 8: Comparison of a skeleton branch before and after connection adjustment: (a) Tree points; (b) Before optimization; (c) After optimization.
Figure 9: Comparison of skeleton line processing under different angle conditions: (a) skeleton lines before processing; (b) branch node processing result for Δθ = 15°; (c) branch node processing result for Δθ = 30°, which shows a noticeable difference from the original tree shape.
Figure 10: Branch modeling results before and after using the local triangulation method: (a) branch model before gap filling; (b) branch model after gap filling.
Figure 11: Tree reconstruction results at different levels of detail: (a) level 0 branches; (b) level 1 branches; (c) level 2 branches; (d) level 3 branches; (e) level 4 branches; (f) level 5 branches.
Figure 12: Modeling results of all street trees in the study area.
Figure 13: Street tree point clouds overlaid on a row of selected tree models: (a) tree point clouds; (b) tree model results; (c) tree points overlaid on models.
Figure 14: Tree branch diameter and modeling error (in mm) with respect to the level of detail for six trees ((a)–(f) for trees 1–6, respectively).
Figure 15: Comparison of modeling results with respect to different point densities: (a) original tree point clouds; (b) tree points thinned by 50%; (c) tree points thinned by 75%; (d) model from the original tree point clouds; (e) model from the tree points thinned by 50%; (f) model from the tree points thinned by 75%.
Figure 16: Two trees for comparison of tree skeleton line extraction results: (a) Tree point clouds; (b) skeleton lines by the method of [15]; (c) skeleton lines by the method of [22]; (d) skeleton lines by our method.
22 pages, 10887 KiB  
Article
The Urban–Rural Heterogeneity of Air Pollution in 35 Metropolitan Regions across China
by Wenchao Han, Zhanqing Li, Jianping Guo, Tianning Su, Tianmeng Chen, Jing Wei and Maureen Cribb
Remote Sens. 2020, 12(14), 2320; https://doi.org/10.3390/rs12142320 - 19 Jul 2020
Cited by 28 | Viewed by 6148
Abstract
Urbanization and air pollution are major anthropogenic impacts on Earth’s environment, weather, and climate. Each has been studied extensively, but their interactions have not. Urbanization leads to a dramatic variation in the spatial distribution of air pollution (fine particles) by altering surface properties [...] Read more.
Urbanization and air pollution are major anthropogenic impacts on Earth’s environment, weather, and climate. Each has been studied extensively, but their interactions have not. Urbanization leads to a dramatic variation in the spatial distribution of air pollution (fine particles) by altering surface properties and boundary-layer micrometeorology, but this variation remains poorly understood, especially between the centers and suburbs of metropolitan regions. Here, we investigated the spatial variation, or inhomogeneity, of air quality in urban and rural areas of 35 major metropolitan regions across China using four different long-term observational datasets from both ground-based and space-borne observations during the period 2001–2015. In general, urban areas are more seriously polluted than rural areas in summer; in winter, however, pollution is both more severe and more homogeneously distributed than in summer. Four factors are found to play roles in the spatial inhomogeneity of air pollution between urban and rural areas and its seasonal differences: (1) the urban–rural difference in emissions in summer is slightly larger than in winter; (2) urban structures have a more obvious association with the spatial distribution of aerosols in summer; (3) wind speed, topography, and the different reductions in planetary boundary layer height from clean to polluted conditions have different effects on the density of pollutants in different seasons; and (4) relative humidity can play an important role in affecting the spatial inhomogeneity of air pollution, despite large uncertainties. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
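A minimal sketch of the urban-rural contrast statistic used throughout the study, the difference between mean AOD (or PM2.5) over urban pixels and over surrounding rural pixels, ΔAOD = AODurban − AODrural. The grid and the circular "urban core" mask below are synthetic placeholders, not the paper's city boundaries.

```python
# Illustrative computation of an urban-rural AOD difference on a synthetic grid.
import numpy as np

rng = np.random.default_rng(0)
aod = rng.uniform(0.2, 1.0, size=(100, 100))              # placeholder AOD grid around one city
yy, xx = np.mgrid[0:100, 0:100]
urban_mask = (yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2     # toy circular urban core
rural_mask = ~urban_mask

delta_aod = aod[urban_mask].mean() - aod[rural_mask].mean()
print(f"AOD_urban = {aod[urban_mask].mean():.3f}, "
      f"AOD_rural = {aod[rural_mask].mean():.3f}, Delta_AOD = {delta_aod:.3f}")
```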
Show Figures

Graphical abstract
Figure 1: Locations of the 35 cities selected as study areas.
Figure 2: Urban area of Nanjing in 2010. The red contour outlines the urban boundary. Green areas are other surfaces, for instance, water, vegetation, or soil. Gray areas are impervious surfaces.
Figure 3: Wintertime (black) and summertime (green) mean visibilities (unit: km) calculated based on observational data from meteorological stations in urban (shaded bars) and rural (unfilled bars) areas of (a) each city and (b) all cities when the air is polluted. Data are from 2001 to 2015.
Figure 4: The spatial differences in aerosol optical depth (AOD) (ΔAOD = AODurban − AODrural) between the urban and rural areas in and around the 35 cities based on MODIS AOD. Black and green bars represent ΔAOD in winter and summer, respectively. The overall mean ΔAOD values calculated using data in winter and summer are shown as black and green lines, respectively. The city marked with an asterisk (Chongqing) shows results opposite to those of the other cities.
Figure 5: Mean AOD as a function of distance from the urban geometrical center of each city in winter (black curves with open circles) and summer (green curves with crosses). Note that wintertime AOD data are not available at the cities of Harbin and Changchun. The distance ranges are <10 km, 11–20 km, 21–31 km, 31–40 km, and 41–50 km from the urban geometrical center.
Figure 6: The Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO)-retrieved occurrence frequency of aerosols at different altitudes from urban to rural areas for the data period of 2006 to 2015 for (a) the Greater Beijing Metropolitan Area (GBMA) in summer, (b) the GBMA in winter, (c) the Yangtze River Delta (YRD) in summer, and (d) the YRD in winter. The color bars represent the occurrence frequencies of aerosols. The histograms located in the upper-left corner of each panel show the overall mean AODs in rural and urban areas.
Figure 7: PM2.5 differences (Diff PM2.5 = PM2.5,urban − PM2.5,rural, unit: μg/m³) in different PM2.5 concentration bins based on surface observations: (a) 0–50 μg/m³ (light pollution), (b) 100–150 μg/m³ (moderate pollution), and (c) >200 μg/m³ (heavy pollution). (d) The mean PM2.5 spatial differences in all cities for different PM2.5 levels. Green and black bars represent summertime and wintertime PM2.5 differences, respectively. Asterisks mark those cities with insufficient numbers of samples.
Figure 8: Spatial distribution of mean PM2.5 emissions (unit: ton/km²) in the urban (filled bars) and rural (unfilled bars) areas of each city in summer and winter, and PM2.5 emission differences (green dots, unit: ton/km²) between the urban and rural areas of each city in summer and winter. The ΔPM2.5 in the right panel is the urban PM2.5 difference between summer and winter. Data are from 2010 to 2014.
Figure 9: Wintertime (black) and summertime (green) NH3, NOx, and SO2 emission differences (unit: ton/km²) between the urban and rural areas of each city. Data are from 2010 to 2014.
Figure 10: The relationships between the Boyce–Clark shape index (SBC) and ΔAOD in summer (a) and winter (b) for the 35 cities in 2015. The red dashed lines are the linear least-squares-fit lines. (c) The extent of changes in AOD every 10 km from the urban geometrical center as a function of SBC. The AOD decrease is calculated based on Figure 5. (d) The relationship between PM2.5 concentration (unit: μg/m³) and building density within 300 m around the PM2.5 sites in Beijing. The building density information is extracted using the ArcMap application.
Figure 11: Planetary boundary layer height (PBLH, unit: km) under different pollution conditions calculated based on sounding data in (a) Beijing, (b) Nanjing, (c) Shenyang, and (d) Xi’an. The green lines are summertime and the black lines are wintertime. (e) The PBLH difference under clean and heavy pollution conditions at each city.
Figure 12: Variations in ΔPM2.5 concentration (unit: μg/m³) as a function of wind speed in Beijing, Nanjing, Shenyang, and Xi’an in summer (green curves) and winter (black curves). The ΔPM2.5 concentration is the difference in PM2.5 concentration between urban and rural areas.
Figure 13: The ΔAODs of mountainous cities, coastal cities, and plain cities in summer (green bars) and winter (black bars).
Figure 14: Variations in ΔPM2.5 concentration (unit: μg/m³) as a function of relative humidity (RH) in Beijing, Nanjing, Shenyang, and Xi’an in summer (green curves) and winter (black curves). The ΔPM2.5 concentration is the difference in PM2.5 concentration between urban and rural areas.
Figure 15: Comparisons of mean RH in each city in summer (green bars) and winter (black bars).
22 pages, 16194 KiB  
Article
Semi-Automatization of Support Vector Machines to Map Lithium (Li) Bearing Pegmatites
by Joana Cardoso-Fernandes, Ana C. Teodoro, Alexandre Lima and Encarnación Roda-Robles
Remote Sens. 2020, 12(14), 2319; https://doi.org/10.3390/rs12142319 - 19 Jul 2020
Cited by 67 | Viewed by 5994
Abstract
Machine learning (ML) algorithms have shown great performance in geological remote sensing applications. The study area of this work was the Fregeneda–Almendra region (Spain–Portugal) where the support vector machine (SVM) was employed. Lithium (Li)-pegmatite exploration using satellite data presents some challenges since pegmatites [...] Read more.
Machine learning (ML) algorithms have shown great performance in geological remote sensing applications. The study area of this work was the Fregeneda–Almendra region (Spain–Portugal) where the support vector machine (SVM) was employed. Lithium (Li)-pegmatite exploration using satellite data presents some challenges since pegmatites are, by nature, small, narrow bodies. Consequently, the following objectives were defined: (i) train several SVM’s on Sentinel-2 images with different parameters to find the optimal model; (ii) assess the impact of imbalanced data; (iii) develop a successful methodological approach to delineate target areas for Li-exploration. Parameter optimization and model evaluation was accomplished by a two-staged grid-search with cross-validation. Several new methodological advances were proposed, including a region of interest (ROI)-based splitting strategy to create the training and test subsets, a semi-automatization of the classification process, and the application of a more innovative and adequate metric score to choose the best model. The proposed methodology obtained good results, identifying known Li-pegmatite occurrences as well as other target areas for Li-exploration. Also, the results showed that the class imbalance had a negative impact on the SVM performance since known Li-pegmatite occurrences were not identified. The potentials and limitations of the methodology proposed are highlighted and its applicability to other case studies is discussed. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
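A minimal sketch of the two ingredients the abstract combines: an SVM classifier whose kernel, C, and gamma are chosen by a grid search with cross-validation. Synthetic, imbalanced pixels replace the Sentinel-2 training ROIs, and F1-macro is used here as an imbalance-aware score; the paper's ROI-based splitting strategy and exact metric are not reproduced.

```python
# Illustrative SVM grid search with cross-validation on imbalanced synthetic pixels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 10))                       # placeholder: 10 Sentinel-2 band values per pixel
y = rng.choice([0, 1], size=1500, p=[0.95, 0.05])     # imbalanced: few "pegmatite" pixels

pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {
    "svc__kernel": ["linear", "rbf"],
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": ["scale", 0.01, 0.1],
}
search = GridSearchCV(pipe, param_grid, scoring="f1_macro", cv=5, n_jobs=-1)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best CV F1-macro:", search.best_score_)
```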
Show Figures

Graphical abstract
Figure 1: Location and geological map of the study area where the different aplite–pegmatite dykes outcrop (adapted from [22,26,27]). The open-pit mines of Bajoca, Feli and Alberto are also highlighted. The map projection is Universal Transverse Mercator zone 29 N from the WGS84 datum.
Figure 2: The support vector machines (SVM) method: the optimal hyperplane separates the two classes and is parallel to the bounding hyperplanes on which the support vectors lie. The distance between these two bounding hyperplanes is called the margin, and the distance d between the bounding hyperplane and the outlier (misclassified sample) indicates that there are slack variables (modified from [8]).
Figure 3: Mean spectral signatures based on the training pixels selected for each class.
Figure 4: Flowchart of the image classification process.
Figure 5: Receiver operating characteristic (ROC) curves and area under the curve (AUC) scores for each of the models tested in the second stage grid-search for both the imbalanced and balanced data.
Figure 6: Confusion matrix for the RBF-SVM model built on the balanced dataset.
Figure 7: Final classification map based on the Linear-SVM model built with imbalanced data. Three open-pit mines exploiting Li-minerals are identified: A—Bajoca, B—Feli, and C—Alberto. The dashed purple line represents the simplified contact between the Metasediments and the Granite. Zoom images of the open-pit mines can be found in Figure 9.
Figure 8: Final classification map based on the RBF-SVM model built with the balanced data. A—Bajoca mine, B—Feli mine, and C—Alberto mine. Dashed purple line: simplified contact between the Metasediments and the Granite. Zoom images of the open-pit mines can be found in Figure 9.
Figure 9: Evaluation of the models’ performance in the known open-pit mines exploiting Li-bearing pegmatite.
Figure 10: Evaluation of the models’ performance in known ore stockpiles containing Li-bearing minerals (a). (b) Field photograph of a spodumene crystal (a Li-bearing mineral) from the Alberto stockpile.
Figure A1: Confusion matrix for the Linear-SVM model built on the imbalanced dataset.
22 pages, 11657 KiB  
Article
PercepPan: Towards Unsupervised Pan-Sharpening Based on Perceptual Loss
by Changsheng Zhou, Jiangshe Zhang, Junmin Liu, Chunxia Zhang, Rongrong Fei and Shuang Xu
Remote Sens. 2020, 12(14), 2318; https://doi.org/10.3390/rs12142318 - 19 Jul 2020
Cited by 41 | Viewed by 4442
Abstract
In the literature on pan-sharpening based on neural networks, high resolution multispectral images are generally unavailable as ground-truth labels. To tackle this issue, a common method is to degrade the original images into a lower resolution space for supervised training under the Wald’s protocol. [...] Read more.
In the literature on pan-sharpening based on neural networks, high resolution multispectral images are generally unavailable as ground-truth labels. To tackle this issue, a common method is to degrade the original images into a lower resolution space for supervised training under the Wald’s protocol. In this paper, we propose an unsupervised pan-sharpening framework, referred to as “perceptual pan-sharpening”. This novel method is based on an auto-encoder and a perceptual loss, and it does not need the degradation step for training. For performance boosting, we also suggest a novel training paradigm, called “first supervised pre-training and then unsupervised fine-tuning”, to train the unsupervised framework. Experiments on the QuickBird dataset show that the framework with different generator architectures can obtain results comparable with the traditional supervised counterpart, and the novel training paradigm performs better than random initialization. When generalizing to the IKONOS dataset, the unsupervised framework can still obtain competitive results over the supervised ones. Full article
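A minimal PyTorch sketch of a perceptual loss: instead of comparing images pixel by pixel, compare feature maps produced by a fixed convolutional network. A tiny frozen random conv stack stands in for a pretrained feature extractor, and the generator/reconstructor/discriminator structure of PercepPan is not reproduced.

```python
# Illustrative perceptual loss: L2 distance between feature maps of two images.
import torch
import torch.nn as nn

feature_net = nn.Sequential(            # frozen feature extractor (placeholder for e.g. a pretrained CNN)
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    """L2 distance between feature maps of the fused and reference images."""
    return nn.functional.mse_loss(feature_net(fake), feature_net(real))

fake_hrms = torch.rand(1, 4, 64, 64, requires_grad=True)   # stand-in for a fused 4-band MS image
real_hrms = torch.rand(1, 4, 64, 64)                        # stand-in reference (or reconstruction)
loss = perceptual_loss(fake_hrms, real_hrms)
loss.backward()                                             # gradients flow back toward the generator output
print("perceptual loss:", loss.item())
```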
Show Figures

Graphical abstract
Figure 1: Different perspectives for training pan-sharpening models. Left: traditional supervised perspective; Right: proposed unsupervised perspective.
Figure 2: The generator of ESRGAN. Contents in the red boxes show an example of the adaptations made when taking as input an MS image with four bands.
Figure 3: The structure of the proposed PercepPan. G, R, and D denote Generator, Reconstructor, and Discriminator, respectively.
Figure 4: Territorial framework of the investigated area from a QuickBird satellite.
Figure 5: The score trend of different indexes with respect to the level of noise. On the x-axis, a greater number means a higher noise level.
Figure 6: Fused results of two randomly selected samples from the QuickBird test set. From left to right are the original LRMS/PAN images and the results of PNN, RSIFNN, PanNet, PSGAN, and PercepPan with the ESRGAN generator, respectively.
Figure 7: Fused results of two randomly selected samples from the IKONOS test set. From left to right are the original LRMS/PAN images and the results of PNN, RSIFNN, PanNet, PSGAN, and PercepPan with the ESRGAN generator, respectively.
Figure A1: Noised images and the corresponding image quality assessment scores. A greater level value means stronger noise.
24 pages, 6679 KiB  
Article
Water Balance Analysis Based on a Quantitative Evapotranspiration Inversion in the Nukus Irrigation Area, Lower Amu River Basin
by Zhibin Liu, Yue Huang, Tie Liu, Junli Li, Wei Xing, Shamshodbek Akmalov, Jiabin Peng, Xiaohui Pan, Chenyu Guo and Yongchao Duan
Remote Sens. 2020, 12(14), 2317; https://doi.org/10.3390/rs12142317 - 18 Jul 2020
Cited by 28 | Viewed by 4399
Abstract
Human activities are mainly responsible for the Aral Sea crisis, and excessive farmland expansion and unreasonable irrigation regimes are the main manifestations. The conflicting needs of agricultural water consumption and ecological water demand of the Aral Sea are increasingly prominent. However, the quantitative relationship among the water balance elements in the oasis located in the lower reaches of the Amu Darya River Basin and their impact on the retreat of the Aral Sea remain unclear. Therefore, this study focused on the water consumption of the Nukus irrigation area in the delta of the Amu Darya River and analyzed the water balance variations and their impacts on the Aral Sea. The surface energy balance algorithm for land (SEBAL) was employed to retrieve daily and seasonal evapotranspiration (ET) levels from 1992 to 2018, and a water balance equation was established based on the results of a remote sensing evapotranspiration inversion. The results indicated that the actual evapotranspiration (ETa) simulated by the SEBAL model matched the crop evapotranspiration (ETc) calculated by the Penman–Monteith method well, and the correlation coefficients between the two ETa sources were greater than 0.8. The total ETa levels in the growing seasons decreased from 1992 to 2005 and increased from 2005 to 2015, which is consistent with the changes in the cultivated land area and inflows from the Amu Darya River. In 2000, 2005 and 2010, the groundwater recharge volumes into the Aral Sea during the growing season were 6.74 × 10⁹ m³, 1.56 × 10⁹ m³ and 8.40 × 10⁹ m³, respectively; in the dry year of 2012, regional ET exceeded the river inflow, and 2.36 × 10⁹ m³ of groundwater was extracted to supplement the shortage of irrigation water. There is a significant two-year lag correlation between the groundwater level and the area of the southern Aral Sea. This study can provide useful information for water resources management in the Aral Sea region. Full article
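The seasonal water balance described here reduces to simple bookkeeping once ET has been retrieved from imagery: whatever water enters the irrigation area and is not evaporated, transpired, or discharged must change storage. The toy function below shows that closure step; the variable set, signs, and example numbers are simplified assumptions for illustration, not the paper's exact equation or data.

```python
# Illustrative seasonal water-balance closure for an irrigation area (assumed form).
def groundwater_change(inflow_m3: float, precip_m3: float,
                       et_m3: float, outflow_m3: float) -> float:
    """Residual storage change (m³) for one growing season.

    Positive values indicate recharge (surplus water entering storage);
    negative values indicate groundwater drawn down to cover the ET deficit.
    """
    return inflow_m3 + precip_m3 - et_m3 - outflow_m3

# Hypothetical round numbers of the same order as seasonal volumes in the region.
delta = groundwater_change(inflow_m3=2.0e10, precip_m3=1.0e9,
                           et_m3=1.3e10, outflow_m3=1.5e9)
print(f"Seasonal storage change: {delta:.2e} m³")
```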
Figure 1">
Figure 1. Schematic diagram of the Nukus irrigation area.
Figure 2">
Figure 2. Acquisition times of the Landsat images used, including 42 TM images, 59 ETM+ images, and 43 OLI images.
Figure 3">
Figure 3. (a) Comparison of ET modeled by SEBAL (ETSEBAL) with pan-evaporation observations (ETobservation); (b–i) validation of SEBAL-modeled daily ETa against ETc for rice, wheat, cotton, and bare land.
Figure 4">
Figure 4. Correlation analysis of the ETSEBAL and ETc time series at all sampling points.
Figure 5">
Figure 5. Monthly ETSEBAL for rice, wheat, and cotton compared with ETc in 2018.
Figure 6">
Figure 6. Spatial distribution of monthly ETa variability based on the SEBAL model (2012 and 2018 shown as examples).
Figure 7">
Figure 7. SEBAL-modeled ETa for each month of the growing season, 1992–2018.
Figure 8">
Figure 8. ETa estimated by SEBAL in the growing season from 1992 to 2018 (a–h).
Figure 9">
Figure 9. Average daily ETa variation of cultivated land, forest land, and bare land from 1992 to 2015.
Figure 10">
Figure 10. Change in total ETa modeled by SEBAL in irrigated areas from 1992 to 2018 (May to September).
Figure 11">
Figure 11. (a) Changes in surface inflow and variation in groundwater volume in the Nukus irrigation area, 1999–2015; (b) variations in ET, surface inflow, groundwater, and precipitation of the Nukus irrigation area in the growing seasons of 2000, 2005, 2010, and 2012.
Figure 12">
Figure 12. Trends of cultivated land area, total ET, and total ET of cultivated land.
Figure 13">
Figure 13. Lag cross-correlation analysis of (a) inflow volume to the Nukus irrigation area, (b) groundwater level, (c) annual precipitation, and (d) annual average temperature with the area of the southern Aral Sea.
">
23 pages, 2351 KiB  
Article
Identifying Hydro-Geomorphological Conditions for State Shifts from Bare Tidal Flats to Vegetated Tidal Marshes
by Chen Wang, Sven Smolders, David P. Callaghan, Jim van Belzen, Tjeerd J. Bouma, Zhan Hu, Qingke Wen and Stijn Temmerman
Remote Sens. 2020, 12(14), 2316; https://doi.org/10.3390/rs12142316 - 18 Jul 2020
Cited by 8 | Viewed by 4254
Abstract
High-lying vegetated marshes and low-lying bare mudflats have been suggested to be two stable states in intertidal ecosystems. Being able to identify the conditions enabling the shifts between these two stable states is of great importance for ecosystem management in general and the restoration of tidal marsh ecosystems in particular. However, the number of studies investigating the conditions for state shifts from bare mudflats to vegetated marshes remains relatively low. We developed a GIS approach to identify the locations of expected shifts from bare intertidal flats to vegetated marshes along a large estuary (Western Scheldt estuary, SW Netherlands), by analyzing the interactions between spatial patterns of vegetation biomass, elevation, tidal currents, and wind waves. We analyzed false-color aerial images for locating marshes, LIDAR-based digital elevation models, and spatial model simulations of tidal currents and wind waves at the whole estuary scale (~326 km²). Our results demonstrate that: (1) Bimodality in vegetation biomass and intertidal elevation co-occur; (2) the tidal currents and wind waves change abruptly at the transitions between the low-elevation bare state and high-elevation vegetated state. These findings suggest that biogeomorphic feedback between vegetation growth, currents, waves, and sediment dynamics causes the state shifts from bare mudflats to vegetated marshes. Our findings are translated into a GIS approach (logistic regression) to identify the locations of shifts from bare to vegetated states during the studied period based on spatial patterns of elevation, current, and wave orbital velocities. This GIS approach can provide a scientific basis for the management and restoration of tidal marshes. Full article
(This article belongs to the Special Issue Remote Sensing of Wetlands)
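The GIS step described in this abstract amounts to a pixel-level logistic regression: given each bare pixel's elevation and hydrodynamic forcing, estimate the probability that it shifted to a vegetated state between the two survey dates. The sketch below reproduces that idea with scikit-learn on synthetic stand-in data; the feature names mirror the paper's predictors, but the generating rule, coefficients, and thresholds are purely illustrative assumptions.

```python
# Pixel-level logistic regression sketch on synthetic data (assumed setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
elevation = rng.normal(0.0, 0.5, n)      # m relative to MHWL (synthetic)
current = rng.uniform(0.0, 1.0, n)       # tidal current velocity, m/s (synthetic)
wave_orbital = rng.uniform(0.0, 0.6, n)  # wave orbital velocity, m/s (synthetic)

# Synthetic rule: higher, calmer pixels are more likely to become vegetated.
logit = 2.0 * elevation - 3.0 * current - 4.0 * wave_orbital
shifted = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([elevation, current, wave_orbital])
model = LogisticRegression().fit(X, shifted)

# The fitted probabilities play the role of the "probability map" for state shifts.
p_shift = model.predict_proba(X)[:, 1]
print(model.coef_.round(2), p_shift[:5].round(2))
```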
Figure 1">
Figure 1. (a) Location of the Scheldt estuary in Western Europe; (b) Scheldt estuary with indications of tide gauges (red dots), locations of measured discharge transects (red lines), the study area of the Western Scheldt (grey striped box), model boundaries (black line) and bathymetry (elevations are expressed relative to the Dutch Ordnance Level, NAP, which is close to the mean sea level at the Dutch coast); (c) detail of measured discharge transects for the ebb and flood channel; (d) locations of Acoustic Doppler Current Profiler (ADCP) measurements on one of the shoals.
Figure 2">
Figure 2. Wave generation and propagation model grid layout for Grid 0 (a) and Grids 1–11 (b), overlaid with available measurement locations for waves (i.e., WCT1, PVT1, HAN1, HAWI, HFPL, and HFP1) and wind (i.e., HFPL, TNWS, and HAW1).
Figure 3">
Figure 3. Bimodal frequency distributions of normalized difference vegetation index (NDVI) values (as a proxy of vegetation biomass) in (a) 2004 and (b) 2011, and of intertidal elevations in (c) 2004 and (d) 2011. The elevation is relative to the mean high water level (MHWL). The proportion on the y-axis is computed as the number of pixels in each NDVI class (every 0.01) or relative elevation class (every 0.1 m) relative to the total number of pixels for all intertidal pixels (red bold curve), vegetated marsh pixels (green dashed curve), and bare flat pixels (blue dotted curve). NDVI ranges are indicated (vertical black dashed lines) and denoted as “stable bare” (proportion above threshold), “unstable” (below threshold), and “stable vegetated” (above threshold), based on a threshold proportion of 0.5% (horizontal black dashed line). Elevation ranges are indicated (vertical black dashed lines) and denoted as “stable low-elevated” (proportion above threshold), “unstable” (below threshold), and “stable high-elevated” (above threshold), based on a threshold proportion of 1.5% (horizontal black dashed line).
Figure 4">
Figure 4. (a) Variation in NDVI in 2004 as a function of elevation in 2004. Circles denote the mean NDVI value for elevation classes with 0.1 m intervals; error bars denote the 25th and 75th percentiles for each elevation class. (b) Variation of elevation in 2004 as a function of NDVI in 2004. Circles denote the mean elevation for NDVI classes of 0.01 interval; error bars denote the 25th and 75th percentiles for each NDVI class. Stable and unstable NDVI ranges and elevation ranges are indicated, as determined from Figure 3.
Figure 5">
Figure 5. NDVI changes between 2004 and 2011 as a function of elevation in 2004. Circles denote the mean NDVI change for elevation classes of 0.1 m interval; error bars denote the 25th and 75th percentiles for each elevation class. Stable and unstable elevation ranges are indicated, as determined from Figure 3.
Figure 6">
Figure 6. Variation of tidal current velocity (a) and wave orbital velocity (b) as a function of NDVI in 2004. Circles denote the mean velocity for NDVI classes of 0.01 interval; error bars denote the 25th and 75th percentiles for each NDVI class. Stable and unstable NDVI ranges are indicated, as determined from Figure 3.
Figure 7">
Figure 7. Variation of tidal current velocity (a) and wave orbital velocity (b) as a function of elevation in 2004. Circles denote the mean velocity for elevation classes of 0.1 m interval; error bars denote the 25th and 75th percentiles for each elevation class. Stable and unstable elevation ranges are indicated, as determined from Figure 3.
Figure 8">
Figure 8. Performance of the logistic regression model (Equation (7) in Table 2) in identifying pixels that stayed bare and pixels that shifted from a bare to vegetated state between 2004 and 2011. (a) Nagelkerke R² (black solid line) and the percentage of correctly identified pixels that stayed bare (blue dotted line), that shifted to new marsh vegetation (green dot–dash line), and the overall percentage of correctly identified pixels (red dashed line), all as a function of increasing pixel size obtained by aggregation; (b) probability map for a shift from bare to vegetated state based on Logistic Regression Model No. 7 for a pixel resolution of 30 × 30 m; (c) observed percentage of pixels that shifted from a bare to vegetated state between 2004 and 2011 within the 30 × 30 m grids, based on the aerial photograph with a resolution of 0.5 × 0.5 m. The shoals noted are Hooge Platen (A) and Walsoorden (B); (d) observed percentage of new marshes between 2004 and 2011 in relation to the identified probability of new marshes based on Logistic Regression Model No. 7.
">
20 pages, 2665 KiB  
Article
Mapping Coastal Dune Landscape through Spectral Rao’s Q Temporal Diversity
by Flavio Marzialetti, Mirko Di Febbraro, Marco Malavasi, Silvia Giulio, Alicia Teresa Rosario Acosta and Maria Laura Carranza
Remote Sens. 2020, 12(14), 2315; https://doi.org/10.3390/rs12142315 - 18 Jul 2020
Cited by 21 | Viewed by 4312
Abstract
Coastal dunes are found at the boundary between continents and seas, representing unique transitional mosaics that host highly dynamic habitats undergoing substantial seasonal changes. Here, we implemented a land cover classification approach specifically designed for coastal landscapes, accounting for the within-year temporal variability of the main components of the coastal mosaic: vegetation, bare surfaces and water surfaces. Based on monthly Sentinel-2 satellite images of the year 2019, we used hierarchical clustering and a Random Forest model to produce an unsupervised land cover map of coastal dunes in a representative site of the Adriatic coast (central Italy). As classification variables, we used the within-year diversity computed through Rao’s Q index, along with three spectral indices describing the main components of the coastal mosaic (i.e., Modified Soil-adjusted Vegetation Index 2—MSAVI2, Normalized Difference Water Index 2—NDWI2 and Brightness Index 2—BI2). We identified seven land cover classes with high levels of accuracy, highlighting different covariates as the most important in differentiating them. The proposed framework proved effective in mapping a highly seasonal and heterogeneous landscape such as that of coastal dunes, highlighting Rao’s Q index as a sound basis for natural cover monitoring and mapping. The applicability of the proposed framework to updated satellite images makes the procedure a reliable and replicable tool for coastal ecosystem monitoring. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Coastal Environment)
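For a single pixel, the temporal diversity used here can be written in the generic Rao's quadratic entropy form Q = Σ_i Σ_j d_ij · p_i · p_j, with the months of the year as equally weighted observations and d_ij an absolute difference between index values. The function below is a compact sketch of that per-pixel computation on a monthly series of one spectral index (e.g., MSAVI2); it is an assumed simplification, not the exact implementation used by the authors.

```python
# Per-pixel temporal Rao's Q sketch for a monthly spectral-index series (assumed form).
import numpy as np

def temporal_rao_q(series) -> float:
    """Rao's quadratic diversity of a 1-D time series using absolute differences."""
    s = np.asarray(series, dtype=float)
    t = s.size
    d = np.abs(s[:, None] - s[None, :])  # pairwise distances between months
    p = 1.0 / t                          # equal weight for each month
    return float(np.sum(d) * p * p)

# A strongly seasonal pixel scores higher than a temporally stable one.
seasonal = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
stable = np.full(12, 0.5)
print(temporal_rao_q(seasonal), temporal_rao_q(stable))
```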
Figure 1">
Figure 1. (a) In black, the study area including the coastal dunes of the Molise Region (Italy). Most of the analyzed coastal sectors are included in Sites of European Conservation Concern (SCI, European Directive 92/43/EEC): Foce Trigno-Marina di Petacciato (IT7228221), Foce Biferno-Litorale di Campomarino (IT7222216), and Foce Saccione-Bonifica Ramitelli (IT7222217), and belong to the European LTER network [40,42]. (b) An example of coastal zonation. Reference system: WGS84 UTM32 (EPSG: 32632).
Figure 2">
Figure 2. Workflow of the full mapping procedure for coastal dune semi-natural and natural cover types, based on the temporal MSAVI2, NDWI2, and BI2 series and a Random Forest classification approach.
Figure 3">
Figure 3. Schematic representation of the temporal Rao’s Q diversity calculation implemented on the year stacks of the spectral indices MSAVI2, NDWI2 and BI2 to summarize the seasonal behavior of vegetation biomass, water and bare soil surfaces on coastal dunes.
Figure 4">
Figure 4. Scheme of the obtained natural and semi-natural cover classes (top), along with the respective boxplots showing the variables’ values. The variables reported are those selected by the VIF analysis and used for classification in each cycle. Classes: Water (W), Sand (S), Vegetation (V), Water Edge (WE), Open Sand (OS), Mobile Dune Herbaceous Vegetation (MDHV), Evergreen Woody Vegetation (EWV), Deciduous and Humid Herbaceous Vegetation (DHHV), Fixed Dune Herbaceous Vegetation with Sparse Shrub (FHVSS). Variables: the temporal Rao’s Q of NDWI2, BI2 and MSAVI2 (QNDWI2, QBI2, QMSAVI2); the 10th percentiles of NDWI2 and BI2 (10thNDWI2, 10thBI2); the 90th percentiles of BI2 and MSAVI2 (90thBI2, 90thMSAVI2); and the mean of MSAVI2 (MMSAVI2).
Figure 5">
Figure 5. An example of the obtained land cover map, with classes projected on Google Earth view (top), along with the monthly average values of NDWI2, BI2, and MSAVI2 ± standard deviation. Water (W), Sand (S), Vegetation (V), Water Edge (WE), Open Sand (OS), Mobile Dune Herbaceous Vegetation (MDHV), Fixed Dune Herbaceous Vegetation with Shrub (FDHVS), Evergreen Woody Vegetation (EWV), Deciduous and Humid Herbaceous Vegetation (DHHV).
">