
Remote Sens., Volume 10, Issue 2 (February 2018) – 201 articles

Cover Story: Estuarine water quality is not static, but rather fluctuates on daily to interannual time scales depending on the forces driving it. Identifying the drivers of water quality across estuaries has been an elusive goal of researchers and managers. Doing so requires a time series of frequent and synoptic sampling to capture short- and long-term variability over a large area. All 11 National Estuary Program estuaries (black outlines) of the US Gulf of Mexico were mapped for a water quality proxy (Rrs645), representing turbidity, from 2000–2014 using near-daily MODIS satellite imagery. These whole-estuary time series of water quality were then compared with observations of eight environmental drivers on weekly to annual time scales. Statistical relationships identified wind speed (bottom panel example) as the most consistent driver of water quality variability across estuaries and time scales.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 20794 KiB  
Article
Matching of Remote Sensing Images with Complex Background Variations via Siamese Convolutional Neural Network
by Haiqing He, Min Chen, Ting Chen and Dajun Li
Remote Sens. 2018, 10(2), 355; https://doi.org/10.3390/rs10020355 - 24 Feb 2018
Cited by 73 | Viewed by 7708
Abstract
Feature-based matching methods have been widely used in remote sensing image matching given their capability to achieve excellent performance despite image geometric and radiometric distortions. However, most feature-based methods are unreliable under complex background variations, because the gradient or other grayscale information used to construct the feature descriptor is sensitive to image background variations. Recently, deep learning-based methods have been proven suitable for high-level feature representation and comparison in image matching. Inspired by the progress made in deep learning, a new technical framework for remote sensing image matching based on the Siamese convolutional neural network is presented in this paper. First, a Siamese-type network architecture is designed to simultaneously learn the features and the corresponding similarity metric from labeled training examples of matching and non-matching true-color patch pairs. In the proposed network, two streams of convolutional and pooling layers sharing identical weights are arranged without manually designed features. The number of convolutional layers is determined based on the factors that affect image matching. The sigmoid function is employed to compute the matching and non-matching probabilities in the output layer. Second, a gridding sub-pixel Harris algorithm is used to obtain accurate localization of candidate matches. Third, a Gaussian pyramid coupling quadtree is adopted to gradually narrow down the search space of the candidate matches, and multiscale patches are compared synchronously. Subsequently, a similarity measure based on the output of the sigmoid is adopted to find the initial matches. Finally, the random sample consensus algorithm and whole-to-local quadratic polynomial constraints are used to remove false matches. In the experiments, different types of satellite datasets, such as ZY3, GF1, IKONOS, and Google Earth images, with complex background variations are used to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method, which can significantly improve the matching performance of multi-temporal remote sensing images with complex background variations, outperforms state-of-the-art matching methods. In our experiments, the proposed method obtained a large number of evenly distributed matches (at least 10 times more than other methods) and achieved high accuracy (less than 1 pixel in terms of root mean square error).
(This article belongs to the Special Issue Multisensor Data Fusion in Remote Sensing)
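The core pattern described in the abstract — two weight-sharing convolutional streams whose joined features drive a sigmoid matching probability — fits in a short sketch. This is a minimal illustration, not the authors' exact architecture: the class name, layer counts, and channel sizes are assumptions, and only the 96 × 96 patch size (quoted in the Figure 5 caption below) comes from the paper.

```python
# Minimal sketch of a Siamese patch-matching network: one convolutional stream
# is reused for both patches (weight sharing), and a sigmoid head turns the
# concatenated features into a matching probability. Layer sizes are assumptions.
import torch
import torch.nn as nn

class SiamesePatchMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.stream = nn.Sequential(          # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Sequential(            # learned similarity metric
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),  # matching probability in [0, 1]
        )

    def forward(self, patch_a, patch_b):
        features = torch.cat([self.stream(patch_a), self.stream(patch_b)], dim=1)
        return self.head(features)

model = SiamesePatchMatcher()
prob = model(torch.rand(1, 3, 96, 96), torch.rand(1, 3, 96, 96))  # one patch pair
```

Reusing the same stream module for both patches is what enforces the identical weights the abstract describes; training would use matching/non-matching pairs with a binary cross-entropy loss.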
Figure 1">
Figure 1. Architecture of the Siamese convolutional neural network.
Figure 2. Schematic of the proposed matching framework.
Figure 3. Multi-temporal Google Earth images of the same area from 2008 to 2017. Images are affected by complex background variations, including small rotation and translation, nonlinear geometric deformation, shadow, image quality degradation, and land cover changes.
Figure 4. Matching and non-matching probabilities between multi-temporal remote image patches in Figure 3. (a–j) show the statistical results from 2008 to 2017.
Figure 5. Patch comparison via GPCQ. The red rectangles are patches located at the top layer of the Gaussian image pyramid. For example, four patches of size d × d are found in the top pyramid layer of size 2d × 2d, where d is set to 96 pixels. The green and blue rectangles are the patches in the second and third layers, respectively. The SCNN is used to compare the similarity between patches in the reference and sensed images.
Figure 6. Local outlier elimination. (P1, P2) denotes an initial match between the reference and sensed images. P2′ is estimated from P1 based on local polynomial coefficients.
Figure 7. Experimental image pairs. (a,b) is a pair of ZY3 (fusion image obtained from multispectral and panchromatic images) and Google Earth images over an urban area in China. (c,d) is a pair of GF1 (fusion image obtained from multispectral and panchromatic images) and Google Earth images in China. (e,f) is a pair of ZY3 and GF1 images with large background variations over a mountain area in China. (g,h) is a pair of IKONOS and Google Earth images with coastline in Australia. (i,j) is a pair of Google Earth images with farmlands in different seasons in the United States. (k,l) is a pair of Google Earth images in China, in which (l) is contaminated by cloud and haze.
Figure 8. Examples of feature visualizations learned by the proposed SCNN. (a–f) are the visual features in Pairs 1–6, respectively.
Figure 9. Comparison of average accuracies for each round between training (a) and test (b) data with layer−, layer+, and our network.
Figure 10. Comparison of (a) NCM, (b) MP, and (c) RMSE values with different deep SCNNs.
Figure 11. Comparison of (a) NCM, (b) MP, and (c) RMSE between gridding S-Harris and non-gridding S-Harris.
Figure 12. Comparison of (a) NCM, (b) MP, and (c) RMSE with and without GPCQs.
Figure 13. Matching and registration results of the proposed matching framework. The matches of Pairs 1–6 are pinned to the top-left two images of (a–f) using yellow dots. The two small sub-regions marked by red boxes correspond to the two conjugated patches P1 and P2. The top-right image shows the registration result of the checkerboard overlay of the image pair. The four small sub-regions marked by green, blue, magenta, and cyan are enlarged to show the registration details.
Figure 14. Matching results of SIFT. The matches of Pairs 1, 2, 3, and 5 are shown in (a–d), respectively. No correct match is obtained for Pairs 4 and 6 (see Table 2).
Figure 15. Matching results using Jiang's method [5]. (a–c) are the matching results of Pairs 1–3. No correct match is obtained for Pairs 4–6 (see Table 2).
Figure 16. Matching results using Shi's method [20]. (a–e) are the matching results of Pairs 1–5. No correct match is obtained for Pair 6 (see Table 2).
Figure 17. Matching results using Zagoruyko's method [19]. (a,b) are the matching results of Pairs 1 and 2, respectively. No correct match is obtained for Pairs 3–6 (see Table 2). (c–f) highlight the ellipses and centroids of MSER for Pairs 3–6.
26 pages, 5527 KiB  
Article
Snow Density and Ground Permittivity Retrieved from L-Band Radiometry: Melting Effects
by Mike Schwank and Reza Naderpour
Remote Sens. 2018, 10(2), 354; https://doi.org/10.3390/rs10020354 - 24 Feb 2018
Cited by 24 | Viewed by 4974
Abstract
Ground permittivity and snow density retrievals for the “snow-free period”, “cold winter period”, and “early spring period” are performed using the experimental L-band radiometry data from the winter 2016/2017 campaign at the Davos-Laret Remote Sensing Field Laboratory. The performance of the single-angle and multi-angle two-parameter retrieval algorithms employed during each of the aforementioned three periods is assessed using in-situ measured ground permittivity and snow density. Additionally, a synthetic sensitivity analysis is conducted that studies melting effects on the retrievals in the form of two types of “geophysical noise” (snow liquid water and footprint-dependent ground permittivity). Experimental and synthetic analyses show that both types of investigated “geophysical noise” noticeably disturb the retrievals and result in an increased correlation between them. The strength of this correlation is successfully used as a quality-indicator flag for filtering out highly correlated ground permittivity and snow density retrievals. It is demonstrated that this filtering significantly improves the accuracy of both ground permittivity and snow density retrievals compared to corresponding reference in-situ data. Experimental and synthetic retrievals are performed in retrieval modes RM = “H”, “V”, and “HV”, where brightness temperatures from polarizations p = H, p = V, or both p = H and V are used, respectively, in the retrieval procedure. Our analysis shows that retrievals for RM = “V” are predominantly least prone to the investigated “geophysical noise”. The presented experimental results indicate that retrievals match in-situ observations best for the “snow-free period” and the “cold winter period”, when “geophysical noise” is at a minimum.
(This article belongs to the Special Issue Snow Remote Sensing)
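The correlation-based quality flag described above reduces to a small routine: compute R² between the snow-density and ground-permittivity retrieval series over a trailing 12-h window and discard retrievals where the two are strongly correlated. A minimal sketch — the helper name is hypothetical, while the 12-h window and 0.1 threshold are the values quoted in the Figure 8 caption below:

```python
# Sketch of the quality-flag filtering: retrievals are kept only when the
# sliding-window R^2 between snow density and ground permittivity stays low.
import numpy as np

def quality_filter(t_hours, rho_s, eps_g, window=12.0, r2_max=0.1):
    """t_hours: sample times in hours; rho_s, eps_g: retrieval series (arrays)."""
    keep = np.zeros(t_hours.size, dtype=bool)
    for i, t in enumerate(t_hours):
        m = (t_hours > t - window) & (t_hours <= t)  # asymmetric trailing window
        if m.sum() >= 3:                             # need a few points for R^2
            r = np.corrcoef(rho_s[m], eps_g[m])[0, 1]
            keep[i] = r * r < r2_max                 # low correlation = trustworthy
    return keep
```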
Figure 1">
Figure 1. Diagram of the footprint areas and the location of the in-situ sensors. The ETH L-band Radiometer-II (ELBARA-II) was mounted atop an 8-m tower, indicated by the hollow black square.
Figure 2. Panels (a,b) show time series of in-situ measured ground permittivities along transects 1 and 2 (shown in Figure 1), respectively; red indicates ground permittivity ε_G values obtained by averaging all 12 in-situ sensor readings. Panel (c) shows the average ground temperature T_G measured by the 12 SMT-100 sensors. Panel (d) shows the temperatures T_air, T_15cm, and T_50cm measured by ELBARA-II's PT-100 temperature sensor and by SMT-100 sensors placed 15 cm and 50 cm above ground, respectively; T_15cm and T_50cm represent either air or snow temperature depending on the snow height at the time of measurement. Panel (e) shows precipitation (both rain and snow). Panel (f) shows the mass density of the lowest 10 cm of the snowpack, measured in-situ with a manual density cutter.
Figure 3. Flowchart of the modeling approach used to infer sensitivities of retrieval pairs P^RM = (ρ_S^RM, ε_G^RM) to “melting effects”: (a) snow liquid water, and (b) spatial heterogeneity of ground permittivity.
Figure 4. Scatterplots of retrieval pairs P^RM = (ρ_S^RM, ε_G^RM) (orange squares) for RM = “H” (panel (a)) and “V” (panel (b)), simulated over the two-dimensional space of “true” values (crossed black circles). For each P* = (ρ_S*, ε_G*), the snow liquid water column (the studied sensitive parameter) is varied within 0 mm ≤ WC_S ≤ 1 mm in steps of δWC_S = 0.1 mm. Panels (c,d) show the root mean square errors RMSE(ε_G^RM) (solid blue dots) and RMSE(ρ_S^RM) (open red dots) and the retrievals' coefficients of determination R²(ρ_S^RM, ε_G^RM) caused by WC_S.
Figure 5. Scatterplots of retrieval pairs P^RM = (ρ_S^RM, ε_G^RM) for RM = “H” (panel (a)) and “V” (panel (b)) for “true” values (crossed black circles) 100 kg m⁻³ ≤ ρ_S* ≤ 400 kg m⁻³ and 5 ≤ ε_G* ≤ 20. For each P* = (ρ_S*, ε_G*), the sensitive parameter in question is varied within 0 ≤ Δε_G ≤ 2 (in steps of δε_G = 0.2). Δε_G expresses θ_k-dependent ground permittivities ε_G,θ^type(θ_k); “true” ε_G* are defined as ε_G* = ⟨ε_G,θ^type(θ_k)⟩ (averaged over θ_min = 30° ≤ θ_k ≤ θ_max = 65°). Retrieval sensitivities to increasing (type = “inc.”, green) and decreasing (type = “dec.”, orange) ε_G,θ^type(θ_k) are shown. Panels (c,d) show RMSE(ε_G^RM) (blue), RMSE(ρ_S^RM) (red), and R²(ρ_S^RM, ε_G^RM) (black) caused by Δε_G.
Figure 6. Multi-angle two-parameter retrievals P^RM = (ρ_S^RM, ε_G^RM) for the period 15 December 2016–15 March 2017. Panels (a,b) show time series of ε_G^“HV” and ρ_S^“HV”, respectively. Panels (c,d) and (e,f) show the corresponding ε_G^RM and ρ_S^RM for RM = “H” and “V”, respectively. Red markers show in-situ measured bottom-layer snow density ρ_S and ground permittivity ε_G (same data as in Figure 2). The vertical dashed lines delimit the “snow-free period” (before 3 January), the “cold winter period” (3–31 January), and the “early spring period” (after 31 January).
Figure 7. Histograms of coefficients of determination R²(ρ_S^RM, ε_G^RM) of the retrieval pairs P^RM = (ρ_S^RM, ε_G^RM) for RM = “V” (a,b) and RM = “H” (c,d), computed over a sliding 12-h time window between 15 December 2016 and 5 February 2017. Panels (a,c) are derived from “morning” (2:00–8:00) measurements; panels (b,d) from “afternoon” (12:00–18:00) measurements.
Figure 8. Multi-angle two-parameter retrievals P^RM = (ρ_S^RM, ε_G^RM) for 15 December 2016–15 March 2017, with panels, in-situ references, and period delimiters as in Figure 6. Retrievals are shown in red when the “quality flag” is raised and in blue when it is not. The “quality flag” approach employs the threshold R²(ρ_S^“V”, ε_G^“V”) < 0.1 between P^“V” = (ρ_S^“V”, ε_G^“V”) retrievals computed from 12-hour asymmetric sliding windows.
Figure 9. (a,b) Time series of the footprint-specific ε_G(θ_k) and ρ_S(θ_k) single-angle retrievals from 3 January (first snow event) to 15 March (end of the measurement campaign). Colored bars give the color code of the retrievals performed at nadir angles θ_k labelled on the left vertical axes. Failed retrievals are shown in blue.
Figure 10. (a) Single-angle retrievals ε_G,steep ≡ ε_G(θ_k) for θ_k = 30°, 35° (blue) and ε_G,shallow ≡ ⟨ε_G(θ_k)⟩ for θ_k = 60°, 65° (red). (b) Comparison of single-angle retrievals ε_G,scan ≡ ⟨ε_G(θ_k)⟩ for 30° ≤ θ_k ≤ 65° (blue) with multi-angle retrievals ε_G^“HV” (red, same as in Figure 6a). Spatially averaged in-situ references ε_G (same as in Figure 2a,b) and their spatial variability are shown by the green lines and gray areas, respectively. The date 21 February is marked with a vertical dashed black line.
18 pages, 6089 KiB  
Article
Triple-Frequency Code-Phase Combination Determination: A Comparison with the Hatch-Melbourne-Wübbena Combination Using BDS Signals
by Chenlong Deng, Weiming Tang, Jianhui Cui, Mingxing Shen, Zongnan Li, Xuan Zou and Yongfeng Zhang
Remote Sens. 2018, 10(2), 353; https://doi.org/10.3390/rs10020353 - 24 Feb 2018
Cited by 10 | Viewed by 5167
Abstract
Considering the influence of the ionosphere, troposphere, and other systematic errors on double-differenced ambiguity resolution (AR), we present an optimal triple-frequency code-phase combination determination method driven by both the model and the real data. The new method makes full use of triple-frequency code measurements (especially the low noise of the code on the B3 signal) to minimize the total noise level and achieve the largest AR success rate (model-driven) under different ionosphere residual situations (data-driven), thus speeding up AR by direct rounding. With the triple-frequency Beidou Navigation Satellite System (BDS) data collected at five stations from a continuously operating reference station network in Guangdong Province, China, different testing scenarios are defined (a medium baseline, between 20 km and 50 km; a medium-long baseline, between 50 km and 100 km; and a long baseline, longer than 100 km). The efficiency of the optimal code-phase combination on the AR success rate was compared with that of the geometry-free and ionosphere-free (GIF) combination and the Hatch-Melbourne-Wübbena (HMW) combination. Results show that the optimal combinations always achieve better results than the HMW combination with B2 and B3 signals, especially when the satellite elevation angle is larger than 45°. For the wide-lane AR, which aims at decimeter-level kinematic positioning service, the standard deviation (STD) of ambiguity residuals for the suboptimal combination is only about 0.2 cycles, and the AR success rate by direct rounding can be up to 99%. Compared with the HMW combinations using B1 and B2 signals and using B1 and B3 signals, the suboptimal combination achieves the best results in all baselines, with overall improvements of about 40% and 20%, respectively. Additionally, the STD difference between the optimal and the GIF code-phase combinations decreases as the baseline length increases, indicating that the GIF combination is more suitable for long baselines. The proposed optimal code-phase combination determination method can be applied to other multi-frequency global navigation satellite systems, such as the new-generation BDS, Galileo, and modernized GPS.
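For context, the HMW combination used as the benchmark has a standard closed form: wide-lane carrier phase minus narrow-lane code, which cancels geometry, troposphere, and first-order ionosphere, leaving the wide-lane ambiguity plus noise. The sketch below is the textbook form, not code from the paper; the function name is hypothetical, while the BDS B1/B2 frequencies are the published values.

```python
# Hatch-Melbourne-Wubbena combination (textbook form): the wide-lane ambiguity
# follows from wide-lane phase minus narrow-lane code, scaled by the wavelength.
C = 299_792_458.0  # speed of light, m/s

def hmw_widelane_cycles(f1, f2, phase1_m, phase2_m, code1_m, code2_m):
    """Frequencies in Hz, observables in meters; returns the ambiguity in cycles."""
    widelane_phase = (f1 * phase1_m - f2 * phase2_m) / (f1 - f2)
    narrowlane_code = (f1 * code1_m + f2 * code2_m) / (f1 + f2)
    widelane_wavelength = C / (f1 - f2)  # ~0.85 m for BDS B1/B2
    return (widelane_phase - narrowlane_code) / widelane_wavelength

F_B1, F_B2 = 1561.098e6, 1207.140e6  # BDS B1 and B2 signal frequencies, Hz
```

Rounding the returned value to the nearest integer is the "direct rounding" AR step whose success rate the paper compares across combinations.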
Figure 1">
Figure 1. Distribution of the used stations in GDCORS and the selected baselines.
Figure 2. Comparison of ambiguity residuals and the corresponding standard deviations of the optimal, GIF, and HMW23 combinations for the medium baseline (20 km).
Figure 3. Comparison of ambiguity residuals and the corresponding standard deviations of the optimal, GIF, and HMW23 combinations for the medium-long baseline (66 km).
Figure 4. Comparison of ambiguity residuals and the corresponding standard deviations of the optimal, GIF, and HMW23 combinations for the long baseline (147 km).
Figure 5. Comparison of ambiguity residuals and the corresponding standard deviations of the suboptimal, HMW12, and HMW13 combinations for the medium baseline (20 km).
Figure 6. Comparison of ambiguity residuals and the corresponding standard deviations of the suboptimal, HMW12, and HMW13 combinations for the medium-long baseline (66 km).
Figure 7. Comparison of ambiguity residuals and the corresponding standard deviations of the suboptimal, HMW12, and HMW13 combinations for the long baseline (147 km).
18 pages, 4578 KiB  
Article
Atmospheric Correction Inter-Comparison Exercise
by Georgia Doxani, Eric Vermote, Jean-Claude Roger, Ferran Gascon, Stefan Adriaensen, David Frantz, Olivier Hagolle, André Hollstein, Grit Kirches, Fuqin Li, Jérôme Louis, Antoine Mangin, Nima Pahlevan, Bringfried Pflug and Quinten Vanhellemont
Remote Sens. 2018, 10(2), 352; https://doi.org/10.3390/rs10020352 - 24 Feb 2018
Cited by 176 | Viewed by 16065
Abstract
The Atmospheric Correction Inter-comparison eXercise (ACIX) is an international initiative that aims to analyse the Surface Reflectance (SR) products of various state-of-the-art atmospheric correction (AC) processors. The Aerosol Optical Thickness (AOT) and Water Vapour (WV) are also examined in ACIX as additional outputs of AC processing. In this paper, the general ACIX framework is discussed, with special mention of the motivation to initiate the experiment, the inter-comparison protocol, and the principal results. ACIX is free and open, and every developer was welcome to participate. Eventually, 12 participants applied their approaches to various Landsat-8 and Sentinel-2 image datasets acquired over sites around the world. The current results diverge depending on the sensors, products, and sites, indicating the strengths and weaknesses of the processors. Indeed, this first processor inter-comparison proved a good lesson for the developers, revealing the advantages and limitations of their approaches. Various algorithm improvements are expected, if not already implemented, and the enhanced performances are yet to be assessed in future ACIX experiments.
(This article belongs to the Special Issue Atmospheric Correction of Remote Sensing Data)
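The accuracy, precision, and uncertainty curves in the figures below follow the definitions commonly used in surface-reflectance validation — mean bias, standard deviation of the bias, and RMSE. The sketch assumes those definitions (they are not spelled out on this page), and the function names are illustrative:

```python
# Assumed APU definitions (mean bias / std of bias / RMSE) for comparing a
# processor's surface reflectance against a reference, as plotted per bin.
import numpy as np

def apu(sr_estimated, sr_reference):
    diff = np.asarray(sr_estimated) - np.asarray(sr_reference)
    accuracy = diff.mean()                     # systematic bias
    precision = diff.std(ddof=1)               # spread around the bias
    uncertainty = np.sqrt(np.mean(diff ** 2))  # total error (RMSE)
    return accuracy, precision, uncertainty

def landsat_sr_spec(rho):
    return 0.005 + 0.05 * rho  # theoretical SR reference line from the plots
```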
Figure 1">
Figure 1. Scatterplots of AOT estimates at 550 nm based on Landsat-8 observations compared to the AERONET measurements from all sites. The main plots refer to AOT values up to 1, while the sub-plots (upper right) also include higher AOT values.
Figure 2. The accuracy (red line), precision (green line), and uncertainty (blue line) computed in bins (blue bars) for OLI Band 4 (Red). The total number of points used in the computations is also given in the plot. The magenta line represents the theoretical SR reference for Landsat SR (0.005 + 0.05 × ρ).
Figure 3. Scatterplots of AOT estimates at 550 nm based on Sentinel-2 observations versus the AERONET measurements. The main plots refer to AOT values up to 0.8, while the sub-plots (upper right) also include higher AOT values.
Figure 4. Scatterplots of WV estimates based on Sentinel-2 observations versus the AERONET measurements.
Figure 5. The accuracy (red line), precision (green line), and uncertainty (blue line) computed in bins (blue bars) for MSI Band 4 (Red). The total number of points used in the computations is also given in the plot. The magenta line represents the theoretical SR reference (0.005 + 0.05 × ρ).
19 pages, 7990 KiB  
Article
Siamese-GAN: Learning Invariant Representations for Aerial Vehicle Image Categorization
by Laila Bashmal, Yakoub Bazi, Haikel AlHichri, Mohamad M. AlRahhal, Nassim Ammour and Naif Alajlan
Remote Sens. 2018, 10(2), 351; https://doi.org/10.3390/rs10020351 - 24 Feb 2018
Cited by 52 | Viewed by 11398
Abstract
In this paper, we present a new algorithm for cross-domain classification in aerial vehicle images based on generative adversarial networks (GANs). The proposed method, called Siamese-GAN, learns invariant feature representations for both labeled and unlabeled images coming from two different domains. To this end, we train in an adversarial manner a Siamese encoder–decoder architecture coupled with a discriminator network. The encoder–decoder network has the task of matching the distributions of both domains in a shared space regularized by the reconstruction ability, while the discriminator seeks to distinguish between them. After this phase, we feed the resulting encoded labeled and unlabeled features to another network composed of two fully-connected layers for training and classification, respectively. Experiments on several cross-domain datasets composed of extremely high resolution (EHR) images acquired by manned/unmanned aerial vehicles (MAV/UAV) over the cities of Vaihingen, Toronto, Potsdam, and Trento are reported and discussed.
(This article belongs to the Special Issue Deep Learning for Remote Sensing)
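The adversarial alignment phase described above alternates two objectives: the encoder–decoder reconstructs both domains while trying to fool the discriminator, and the discriminator tries to tell the domains apart. The sketch below is schematic, not the authors' released code; the layer sizes are assumptions, with 4096-dimensional descriptors (as from a VGG16 feature extractor, per Figure 4 below) as input.

```python
# Schematic Siamese-GAN objectives: a shared encoder aligns source and target
# features, a decoder regularizes via reconstruction, and a discriminator
# tries to separate the two domains. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, 128))
decoder = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 4096))
discrim = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

def generator_loss(x_src, x_tgt):
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    recon = mse(decoder(z_src), x_src) + mse(decoder(z_tgt), x_tgt)
    # Target features should look like source features to the discriminator.
    fool = bce(discrim(z_tgt), torch.ones(x_tgt.size(0), 1))
    return recon + fool

def discriminator_loss(x_src, x_tgt):
    real = bce(discrim(encoder(x_src).detach()), torch.ones(x_src.size(0), 1))
    fake = bce(discrim(encoder(x_tgt).detach()), torch.zeros(x_tgt.size(0), 1))
    return real + fake
```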
Figure 1">
Figure 1. Standard supervised classification: training and test scenes are extracted from the same domain.
Figure 2. Cross-domain classification: training samples from a previous domain are used to classify data coming from a new domain.
Figure 3. Proposed Siamese-GAN method.
Figure 4. Feature extraction using a VGG16 pre-trained CNN.
Figure 5. Architecture of the (a) encoder G, (b) decoder DE, and (c) discriminator D and classifier CL.
Figure 6. Sample EHR images used in the experiments.
Figure 7. The adversarial losses of Siamese-GAN for the scenarios: (a) Potsdam→Vaihingen, (b) Trento→Toronto, and (c) Toronto→Vaihingen.
Figure 8. PCA for the transfers: (a) Potsdam→Vaihingen; (b) Trento→Toronto; (c) Toronto→Vaihingen. First column: before adaptation; second column: after adaptation.
Figure 9. Confusion matrices for Potsdam→Vaihingen: (a) NN; (b) Siamese-GANs.
Figure 10. Confusion matrices for Trento→Toronto: (a) NN; (b) Siamese-GANs.
Figure 11. Confusion matrices for Toronto→Vaihingen: (a) NN; (b) Siamese-GANs.
18 pages, 13447 KiB  
Article
Monitoring Water Levels and Discharges Using Radar Altimetry in an Ungauged River Basin: The Case of the Ogooué
by Sakaros Bogning, Frédéric Frappart, Fabien Blarel, Fernando Niño, Gil Mahé, Jean-Pierre Bricquet, Frédérique Seyler, Raphaël Onguéné, Jacques Etamé, Marie-Claire Paiz and Jean-Jacques Braun
Remote Sens. 2018, 10(2), 350; https://doi.org/10.3390/rs10020350 - 24 Feb 2018
Cited by 71 | Viewed by 10388
Abstract
Radar altimetry is now commonly used for the monitoring of water levels in large river basins. In this study, an altimetry-based network of virtual stations was defined in the quasi-ungauged Ogooué river basin, located in Gabon, Central Africa, using data from seven altimetry missions (Jason-2 and 3, ERS-2, ENVISAT, Cryosat-2, SARAL, Sentinel-3A) from 1995 to 2017. The performance of the five latter missions in retrieving water stages and discharges was assessed through comparisons against gauge station records. All missions exhibited good agreement with gauge records, but the most recent missions showed increased data availability (only 6 virtual stations (VS) with ERS-2 compared to 16 VS for ENVISAT and SARAL) and accuracy (RMSE lower than 1.05, 0.48, and 0.33 m and R² higher than 0.55, 0.83, and 0.91 for ERS-2, ENVISAT, and SARAL, respectively). The concept of VS is extended to the case of drifting orbits using Cryosat-2 data at several close locations. Good agreement was also found with the gauge station in Lambaréné (RMSE = 0.25 m and R² = 0.96). Very good results were obtained using only one and a half years of Sentinel-3A data (RMSE < 0.41 m and R² > 0.89). The combination of data from all the radar altimetry missions near Lambaréné resulted in a long-term (May 1995 to August 2017) and significantly improved water-level time series (R² = 0.96 and RMSE = 0.38 m). The increase in data sampling in the river basin leads to a better peak-to-peak characterization of water levels and hence to a more accurate annual discharge over the common observation period, with only a 1.4 m³·s⁻¹ difference (i.e., 0.03%) between the altimetry-based and the in-situ mean annual discharge.
(This article belongs to the Special Issue Satellite Altimetry for Earth Sciences)
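The per-station agreement scores quoted above (RMSE and R² against the Lambaréné gauge) come down to two standard formulas; a minimal sketch, with a hypothetical helper name, applied to water levels sampled at common dates:

```python
# Skill metrics for one virtual station: RMSE and R^2 between altimetry-derived
# and gauge-measured water levels.
import numpy as np

def station_skill(h_altimetry, h_gauge):
    h_alt, h_ref = np.asarray(h_altimetry), np.asarray(h_gauge)
    rmse = np.sqrt(np.mean((h_alt - h_ref) ** 2))
    r2 = np.corrcoef(h_alt, h_ref)[0, 1] ** 2
    return rmse, r2
```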
Figure 1. (a) Location of the Ogooué River Basin in Gabon in Equatorial Africa. (b) In this basin, delineated with a white line, the Ogooué and its major tributaries appear in light blue. Altimetry tracks are shown in red for missions on a 10-day repeat cycle on their nominal track (Jason-1/2/3), in black for Sentinel-3A on its nominal track (27-day repeat cycle), and in yellow for missions on a 35-day repeat cycle on their nominal track (ERS-2/ENVISAT/SARAL). (c) Zoom on the downstream Ogooué River Basin with the altimetric tracks of Cryosat-2 on its nominal track (369-day repeat cycle) in cyan.
Figure 2. Locations of the altimetry virtual stations in the Ogooué River Basin. VS from ERS-2, ENVISAT, ENVISAT second orbit, SARAL, Sentinel-3A, Cryosat-2, Jason-1, Jason-2, and Jason-3 are represented using orange stars, white stars, brown dots, red dots, blue squares, cyan triangles, green squares, and orange diamonds, respectively. For readability, virtual stations from missions with repeat periods shorter than 35 days are presented in (a) and virtual stations from Cryosat-2 in (b).
Figure 3. Comparison between the altimetry-based water stages from ERS-2/ENVISAT/SARAL for tracks 401 (a), 902 (b), 945 (c), and 444 (d) and the in-situ ones from the Lambaréné gauge station.
Figure 4. Comparison between the altimetry-based water stages from ENVISAT on its second orbit for (a) Station 1, (b) Station 2, and (c) Station 3 and the in-situ ones from the Lambaréné gauge station.
Figure 5. Comparison between the altimetry-based water stages from Sentinel-3A for tracks (a) 050, (b) 378, and (c) 128 and the in-situ ones from the Lambaréné gauge station.
Figure 6. Comparison between the altimetry-based water stages from Cryosat-2 and the in-situ ones from the Lambaréné gauge station.
Figure 7. Maps of maximum cross-correlation between time series from ENVISAT data in the ORB for the four VS around Lambaréné.
Figure 8. Maps of maximum cross-correlation between time series from SARAL data in the ORB for the four VS around Lambaréné.
Figure 9. Time series of water level from Jason-2 (blue), Jason-3 (dashed green), and Sentinel-3A (dashed red) on the Ivindo (a) and upstream Ogooué (b) rivers.
Figure 10. Time series of water level at Lambaréné from the in-situ gauge record (black continuous line) and the multi-mission altimetry-based record (ERS-2: diamonds; ENVISAT: blue crosses on its nominal orbit, green triangles on its second orbit; Cryosat-2: green-blue stars; SARAL: red circles; Sentinel-3: purple dots).
Figure 11. Time series of river discharge at Lambaréné from the in-situ gauge record (black continuous line) and the multi-mission altimetry-based record (same symbols as in Figure 10).
21 pages, 6496 KiB  
Article
Phenotyping Conservation Agriculture Management Effects on Ground and Aerial Remote Sensing Assessments of Maize Hybrids Performance in Zimbabwe
by Adrian Gracia-Romero, Omar Vergara-Díaz, Christian Thierfelder, Jill E. Cairns, Shawn C. Kefauver and José L. Araus
Remote Sens. 2018, 10(2), 349; https://doi.org/10.3390/rs10020349 - 24 Feb 2018
Cited by 41 | Viewed by 8763
Abstract
In the coming decades, Sub-Saharan Africa (SSA) faces challenges to sustainably increase food production while keeping pace with continued population growth. Conservation agriculture (CA) has been proposed to enhance soil health and productivity in response to this situation. Maize is the main staple food in SSA. To increase maize yields, the selection of suitable genotypes and management practices for CA conditions has been explored using remote sensing tools, which may play a fundamental role in overcoming the traditional limitations of data collection and processing in large-scale phenotyping studies. We present the results of a study in which Red-Green-Blue (RGB) and multispectral indexes were evaluated for assessing maize performance under conventional ploughing (CP) and CA practices. Eight hybrids under different planting densities and tillage practices were tested. The measurements were conducted on seedlings at ground level (0.8 m) and from an unmanned aerial vehicle (UAV) platform (30 m); the resulting difference in image resolution between platforms had no negative impact on the performance of the indexes. Most of the calculated indexes (Green Area (GA) and Normalized Difference Vegetation Index (NDVI)) were significantly affected by tillage conditions, with values increasing from CP to CA. Indexes derived from the RGB images related to canopy greenness performed better at assessing yield differences, potentially due to the greater resolution of the RGB compared with the multispectral data, although this performance was more precise for CP than for CA. The correlations of the multispectral indexes with yield were improved by applying a soil mask derived from an NDVI threshold so that only pixels corresponding to vegetation were retained. The results of this study highlight the applicability of remote sensing approaches based on RGB images to the assessment of crop performance and hybrid choice.
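The NDVI-based soil mask mentioned above is straightforward to make concrete: compute NDVI per pixel and average a multispectral index only where NDVI clears a vegetation threshold. A minimal sketch; the function names are illustrative and the 0.4 threshold is an assumption for demonstration, not the paper's calibrated value:

```python
# NDVI soil masking: keep only pixels likely to be vegetation before averaging
# an index over a plot, so bare soil does not dilute the signal.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def plot_index_mean(index_img, nir, red, ndvi_min=0.4):
    vegetation = ndvi(nir, red) > ndvi_min   # boolean soil mask (assumed threshold)
    return index_img[vegetation].mean()
```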
Show Figures

Graphical abstract
Figure 1">
Figure 1. Landsat satellite (left) and CNES Airbus (right) images of the study area acquired using Google Earth Pro. The photographs are from 31 December 2015. The image on the left shows the location of the Domboshawa Training Center in Zimbabwe. The image on the right shows the field site.
Figure 2. Map of the experimental design showing alternating High Density (HD) and Low Density (LD) plots per replicate, with Conservation Agriculture (CA) on the left and Conventional Ploughing (CP) on the right. Each square corresponds to one plot dedicated to one of the different hybrids used. Complete details of the experimental design are given in Section 2.2.
Figure 3. Mikrokopter OktoXL 6S12 unmanned aerial platform equipped with the micro-MCA12 Tetracam multispectral sensor, showing the placement of the Incident Light Sensor (ILS) module with white diffusor plate connected by a fiber-optic cable to the top of the UAV facing upwards, while the other 11 multispectral sensors are positioned on a dual-axis gimbal camera platform for zenithal/nadir image capture. The RGB (Red-Green-Blue) and TIR (thermal infrared) cameras were alternately mounted on the same gimbaled platform for image capture.
Figure 4. Relationship of grain yield with the Normalized Difference Vegetation Index (NDVI), measured with the GreenSeeker and calculated from the aerial multispectral images (a,b), and with the Photochemical Reflectance Index (PRI), measured from the aerial multispectral images (c,d).
Figure 5. Examples of the differences in vegetation area identification with the RGB and multispectral images at the conservation agriculture (CA) and conventional ploughing (CP) plots.
22 pages, 11391 KiB  
Article
Upper Ocean Response to Typhoon Kalmaegi and Sarika in the South China Sea from Multiple-Satellite Observations and Numerical Simulations
by Xinxin Yue, Biao Zhang, Guoqiang Liu, Xiaofeng Li, Han Zhang and Yijun He
Remote Sens. 2018, 10(2), 348; https://doi.org/10.3390/rs10020348 - 24 Feb 2018
Cited by 49 | Viewed by 8162
Abstract
We investigated ocean surface and subsurface physical responses to Typhoons Kalmaegi and Sarika in the South China Sea, utilizing synergistic multiple-satellite observations, in situ measurements, and numerical simulations. We found significant typhoon-induced sea surface cooling using satellite sea surface temperature (SST) observations and numerical model simulations. This cooling was mainly caused by vertical mixing and upwelling. The maximum amplitudes were 6 °C and 4.2 °C for Typhoons Kalmaegi and Sarika, respectively. For Typhoon Sarika, Argo temperature profile measurements showed that the temperature response beneath the surface had a three-layer vertical structure (decreasing-increasing-decreasing). Satellite salinity observations showed that the maximum increase of sea surface salinity (SSS) was 2.2 psu on the right side of Typhoon Sarika's track, and the maximum decrease of SSS was 1.4 psu on the left. This SSS seesaw response is related to the asymmetrical rainfall on the two sides of the typhoon track. Acoustic Doppler Current Profiler measurements and numerical simulations both showed that subsurface current velocities rapidly increased as the typhoon passed, with peak increases of up to 1.19 m/s and 1.49 m/s. Typhoon-generated SST cooling and current velocity increases both exhibited a rightward bias associated with a coupling between typhoon wind stress and mixed layer velocity. Full article
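Figure 7 of this article references the near-inertial frequency, which follows directly from the Coriolis parameter f = 2Ω sin(φ). As a quick worked example (a sketch, not code from the study), the inertial period at the latitude of Kalmaegi's center quoted in the Figure 6 caption is about 37 h:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def inertial_period_hours(lat_deg):
    """Inertial period T = 2*pi / f, with f = 2 * Omega * sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return 2.0 * math.pi / f / 3600.0

# Near-inertial period at 18.9 N (Kalmaegi's center in Figure 6): ~36.9 h
print(f"{inertial_period_hours(18.9):.1f} h")
```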
(This article belongs to the Section Ocean Remote Sensing)
Show Figures

Graphical abstract
Figure 1">
Figure 1. Buoy and mooring design at Station 4.
Figure 2. Best track of Typhoons Kalmaegi (2014) and Sarika (2016) from the Joint Typhoon Warning Center. Buoy array positions are marked by red dots (B1, B2, and B4). The red and blue stars denote the positions of Argo floats 5904562 and 5904746. The red and blue boxes mark the focus regions of Typhoons Kalmaegi (2014) and Sarika (2016). The color bar shows maximum sustained wind speed (m/s), which indicates typhoon intensity.
Figure 3. Microwave optimally interpolated SST data showing SST as Typhoon Kalmaegi passed: (a) on 13 September 2014; (b) on 14 September 2014; (c) on 15 September 2014; (d) on 16 September 2014; and (e) on 17 September 2014. (f) SST difference between 16 and 14 September 2014. The color bar denotes the temperature, in units of °C. The colored circles mark the location of the typhoon center at different times, and the colors represent the maximum sustained wind speed (m/s): yellow for 31–35 m/s and magenta for 36–40 m/s. The black star marks the position of Argo float 5904562.
Figure 4. Microwave optimally interpolated SST data showing SST as Typhoon Sarika passed: (a) on 15 October 2016; (b) on 16 October 2016; (c) on 17 October 2016; (d) on 18 October 2016; and (e) on 19 October 2016. (f) SST difference between 18 and 16 October. The color bar denotes the temperature, in units of °C. The colored circles mark the location of the typhoon center at different times, and the colors represent the maximum sustained wind speed (m/s): black for <20 m/s; green for 20–30 m/s; yellow for 30–35 m/s; magenta for 35–40 m/s; and red for >40 m/s. The black star marks the position of Argo float 5904746.
Figure 5. The altimeter-derived SSHA and GVA between post-storm and pre-storm conditions for: (a) Typhoon Kalmaegi; and (c) Typhoon Sarika. The color bar denotes SSHA variation in m. The ROMS-simulated OHC variations between post- and pre-storm conditions for: (b) Typhoon Kalmaegi; and (d) Typhoon Sarika. The color bar denotes the OHC variation in J/m². The black stars mark the positions of the Argo floats. The colored circles mark the location of the typhoon center at different times, and the colors represent the maximum sustained wind speed (m/s): black for <20 m/s; green for 20–30 m/s; yellow for 30–35 m/s; magenta for 35–40 m/s; and red for >40 m/s.
Figure 6. SST changes induced by Typhoons: (a) Kalmaegi; and (b) Sarika. The intersection of the dotted lines marks the location of the typhoon center; the centers of Typhoons Kalmaegi and Sarika are at 18.9°N, 114°E (12:00 UTC on 15 September 2014) and 18.1°N, 111.5°E (18:00 UTC on 17 October 2016), respectively. The negative direction of the horizontal axis denotes the typhoon's translation direction. The top and bottom halves of the vertical axis represent the left and right sides of the typhoon track, respectively. The color bar denotes SST variation, in units of °C.
Figure 7. PSD of the wind speed (red lines) and current speed (blue lines) for Typhoon: (a) Kalmaegi; and (b) Sarika. The magenta lines denote the near-inertial frequency.
Figure 8. Vertical temperature and salinity measured by Argo floats at a profiling interval of four days: (a) the tracks of the Argo floats (black line); and (e) the locations of the profiles used in (b,c,f,g) (red dots). The blue line denotes the typhoon tracks, and the yellow and magenta dots denote the centers of the typhoons. (b) Temperature and (c) salinity differences on 11 September, 15 September, and 19 September 2014; and (f) temperature and (g) salinity differences on 13 October, 17 October, and 21 October 2016. The blue solid lines denote the changes during and before the storm, the blue dashed lines denote the changes between post- and pre-storm conditions, and the red solid lines denote the zero line. The mixed layer depth measured by Argo float (d) 5904562 during Typhoon Kalmaegi (2014) (maximum sustained wind of 33.4 m/s) and (h) 5904746 during Typhoon Sarika (2016) (maximum sustained wind of 38.6 m/s).
Figure 9. (a,b) CTD-measured temperatures from 0–200 m deep at two different stations; and (c,d) ROMS-simulated temperatures from 0–200 m deep at two different stations. The color bar denotes the temperature, in units of °C.
Figure 10. ROMS-simulated sea surface temperature differences between 16 September and 14 September 2014. The color bar denotes the temperature difference, in units of °C. The colored circles show the location of the typhoon center at different times and the colors denote the maximum sustained wind speed (m/s): yellow for 31–35 m/s; magenta for 36–40 m/s.
Figure 11. SMAP Level 3 8-day running sea surface salinity (SSS) data showing the SSS as Typhoon Sarika passed: (a) on 11 October 2016; (b) on 14 October 2016; (c) on 17 October 2016; (d) on 20 October 2016; and (e) on 23 October 2016. (f) SSS difference between 17 October and 11 October 2016. The color bar denotes the salinity, in units of PSU. The colored circles denote the location of the typhoon center at different times and the colors denote the maximum sustained wind speed (m/s): black for <20 m/s; green for 20–30 m/s; yellow for 30–35 m/s; magenta for 35–40 m/s; and red for >40 m/s. The black star marks the position of Argo float 5904746.
Figure 12. GPM-measured rain rates at 00:00 UTC on 17 October 2016. The color bar denotes rainfall rate in mm/h. White indicates no rain. The black star denotes the position of Argo float 5904746. The colored circles show the location of the typhoon center at different times and the colors represent the maximum sustained wind speed (m/s): black for <20 m/s; green for 20–30 m/s; yellow for 30–35 m/s; magenta for 35–40 m/s; and red for >40 m/s.
Figure 13. (a,b) CTD-measured salinity at 0–200 m deep at two different stations; and (c,d) ROMS-simulated salinity at 0–200 m deep at two different stations. The color bar denotes the salinity, in units of PSU.
Figure 14. WindSat-measured rainfall rates at 10:12 UTC on 15 September 2014. The color bar denotes rainfall rate in mm/h. The black star denotes the position of Argo float 5904562. The colored circles show the location of the typhoon center at different times and the colors denote the maximum sustained wind speed (m/s): yellow for 31–35 m/s; magenta for 36–40 m/s.
Figure 15. Time series of current velocity profiles observed by ADCP at: (a) station B1; (b) station B2; and (c) station B4 during the passage of Typhoon Kalmaegi. Time series of current velocity profiles simulated by the ROMS at: (d) station B1; (e) station B2; and (f) station B4 during the passage of Typhoon Kalmaegi. The color bar denotes the current velocity, in units of m/s.
Figure 16. ROMS-simulated near-surface current velocity difference between 15 September and 14 September 2014. The color bar denotes the current velocity difference, in units of m/s. The colored circles denote the location of the typhoon center at different times and the colors denote the maximum sustained wind speed (m/s): yellow for 31–35 m/s; magenta for 36–40 m/s.
21 pages, 11686 KiB  
Article
Combining Multi-Date Airborne Laser Scanning and Digital Aerial Photogrammetric Data for Forest Growth and Yield Modelling
by Piotr Tompalski, Nicholas C. Coops, Peter L. Marshall, Joanne C. White, Michael A. Wulder and Todd Bailey
Remote Sens. 2018, 10(2), 347; https://doi.org/10.3390/rs10020347 - 24 Feb 2018
Cited by 50 | Viewed by 7980
Abstract
The increasing availability of highly detailed three-dimensional remotely-sensed data depicting forests, including airborne laser scanning (ALS) and digital aerial photogrammetric (DAP) approaches, provides a means for improving stand dynamics information. The availability of data from ALS and DAP has stimulated attempts to link these datasets with conventional forestry growth and yield models. In this study, we demonstrated an approach whereby two three-dimensional point cloud datasets (one from ALS and one from DAP), acquired over the same forest stands at two points in time (circa 2008 and 2015), were used to derive forest inventory information. The area-based approach (ABA) was used to predict top height (H), basal area (BA), total volume (V), and stem density (N) for Time 1 and Time 2 (T1, T2). We assigned individual yield curves to 20 × 20 m grid cells for two scenarios. The first scenario used T1 estimates only (approach 1, single date), while the second scenario combined T1 and T2 estimates (approach 2, multi-date). Yield curves were matched by comparing the predicted cell-level attributes with a yield curve template database generated using an existing growth simulator. Results indicated that the yield curves using the multi-date data of approach 2 were matched with slightly higher accuracy; however, projections derived using approaches 1 and 2 were not significantly different. The accuracy of curve matching was dependent on the ABA prediction error. The relative root mean squared error of curve matching in approach 2 for H, BA, V, and N was 18.4, 11.5, 25.6, and 27.53% for observed (plot) data, and 13.2, 44.6, 50.4, and 112.3% for predicted data, respectively. The approach presented in this study provides additional detail on sub-stand-level growth projections that enhances the information available to inform long-term, sustainable forest planning and management. Full article
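The curve-matching step can be summarized compactly: each 20 × 20 m cell's ABA-predicted attributes are compared against a template database of yield curves, and the curve minimizing a (normalized) squared error is assigned to the cell. A minimal Python illustration follows; the template values, attribute scales, and error definition are hypothetical stand-ins for the simulator-generated database used in the study.

```python
# Hypothetical template database: curve_id -> attributes at the cell's age.
templates = {
    "curve_01": {"H": 18.0, "BA": 30.0, "V": 250.0, "N": 900.0},
    "curve_02": {"H": 22.0, "BA": 38.0, "V": 340.0, "N": 700.0},
    "curve_03": {"H": 14.0, "BA": 22.0, "V": 160.0, "N": 1200.0},
}
ATTRS = ("H", "BA", "V", "N")
# Assumed normalizing scales so attributes with different units are comparable.
SCALE = {"H": 20.0, "BA": 30.0, "V": 250.0, "N": 900.0}

def match_curve(cell):
    """Return the template curve minimizing the normalized squared error."""
    def err(curve):
        return sum(((cell[a] - curve[a]) / SCALE[a]) ** 2 for a in ATTRS)
    return min(templates, key=lambda cid: err(templates[cid]))

cell = {"H": 21.0, "BA": 36.0, "V": 320.0, "N": 750.0}  # ABA predictions for one cell
print(match_curve(cell))  # -> "curve_02"
```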
(This article belongs to the Special Issue Multitemporal Remote Sensing for Forestry)
Show Figures

Graphical abstract
Figure 1">
Figure 1. Location and outline of the study area.
Figure 2. Overall study design and workflow.
Figure 3. Overview of the processing steps followed. ABA—area-based approach; X_YC—value of a stand attribute on the yield curve; X_ABA—value of a stand attribute derived with the ABA.
Figure 4. Observed versus predicted values for top height (H), basal area (BA), total volume (V), and stem density (N). Variables denoted with T1 were modeled using ALS data acquired at T1 (2008), while T2 variables were modeled using DAP data from T2 (2015).
Figure 5. Examples of yield curves for top height (H), basal area (BA), total volume (V), and stem density (N), fitted for three sample plots. Curves are fit based on different input data (observed or predicted) and using data acquired at T1 only or at T1 and T2.
Figure 6. Scatterplots of stand attributes observed at T2 versus projected to T2 based on different projection approaches. H—top height, BA—basal area, V—total volume, N—stem density.
Figure 7. Relationship between prediction error, curve matching error, and uncertainty for the four plot-level attributes. H—top height, BA—basal area, V—total volume, N—stem density.
Figure 8. A 2 × 2 km subset of the wall-to-wall yield curve projection result, based on approach 2. Each row of graphs demonstrates the progression of a cell-level attribute (H—top height, BA—basal area, V—total volume, N—stem density) through time. Attributes are projected to 25, 50, 100, and 150 years (columns). Yield curves are shown for four representative cells.
17 pages, 2501 KiB  
Article
Impact of Vertical Canopy Position on Leaf Spectral Properties and Traits across Multiple Species
by Tawanda W. Gara, Roshanak Darvishzadeh, Andrew K. Skidmore and Tiejun Wang
Remote Sens. 2018, 10(2), 346; https://doi.org/10.3390/rs10020346 - 23 Feb 2018
Cited by 39 | Viewed by 6866
Abstract
Understanding the vertical pattern of leaf traits across plant canopies provides critical information on plant physiology, ecosystem functioning and structure, and vegetation response to climate change. However, the impact of vertical canopy position on leaf spectral properties, and subsequently on leaf traits, across the entire spectrum for multiple species is poorly understood. In this study, we examined the ability of leaf optical properties to track variability in leaf traits across the vertical canopy profile using Partial Least Squares Discriminant Analysis (PLS-DA). Leaf spectral measurements, together with leaf traits (nitrogen, carbon, chlorophyll, equivalent water thickness and specific leaf area), were studied at three vertical canopy positions along the plant stem: lower, middle and upper. We observed that foliar nitrogen (N), chlorophyll (Cab), carbon (C), and equivalent water thickness (EWT) were higher in the upper canopy leaves compared with lower shaded leaves, while specific leaf area (SLA) increased from upper to lower canopy leaves. We found that leaf spectral reflectance significantly (P ≤ 0.05) shifted to longer wavelengths in the 'red edge' spectrum (685–701 nm) in the order of lower > middle > upper for the pooled dataset. We report that the spectral bands that are influential in the discrimination of leaf samples into the three canopy-position groups, based on the PLS-DA variable importance in projection (VIP) score, match the wavelength regions of foliar traits observed to vary across the canopy vertical profile. This observation demonstrates that both leaf traits and leaf reflectance co-vary across the vertical canopy profile in multiple species. We conclude that canopy vertical position has a significant impact on an individual plant's leaf spectral properties and traits, and this finding holds for multiple species. These findings have important implications for field sampling protocols, upscaling leaf traits to the canopy level, canopy reflectance modelling, and subsequent leaf trait retrieval, especially for studies that aim to integrate hyperspectral measurements and LiDAR data. Full article
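Band selection in this article relies on PLS-DA variable importance in projection (VIP) scores. As a rough sketch of how such scores can be computed on top of scikit-learn (the VIP formula is the commonly used one; the random data and the two-component choice are purely illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    t = pls.x_scores_    # (n_samples, n_components)
    w = pls.x_weights_   # (n_features, n_components)
    q = pls.y_loadings_  # (n_targets, n_components)
    p, a = w.shape
    # Variance in Y explained by each latent component.
    ss = np.array([(t[:, i] @ t[:, i]) * (q[:, i] @ q[:, i]) for i in range(a)])
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))            # 60 leaves x 10 toy "bands"
y = (X[:, 2] > 0).astype(float)          # class membership driven by band 3
pls = PLSRegression(n_components=2).fit(X, y)
print(vip_scores(pls).round(2))          # band 3 should score highest
```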
Show Figures

Graphical abstract
Figure 1">
Figure 1. The demarcation of the three canopy layers.
Figure 2. Mean leaf spectral reflectance at each canopy position for the pooled dataset. Note that the red edge shifts to longer wavelengths with increasing Cab.
Figure 3. Variation in leaf spectral reflectance at different canopy positions for the pooled dataset (ANOVA test) (A); and pairwise variation in leaf spectral reflectance at different canopy positions (B).
Figure 4. Species-specific mean leaf spectral reflectance at each canopy position for F. lizei (A), F. benjamina (B), C. japonica (C) and C. elegans (D).
Figure 5. Species-specific ANOVA test for mean leaf spectral reflectance at three canopy positions (A) and pairwise t-tests of mean leaf spectral reflectance at three canopy positions for F. lizei (B), F. benjamina (C), C. japonica (D) and C. elegans (E). Dark circles indicate spectral wavebands that were significantly different (P ≤ 0.05) for each pairwise comparison.
Figure 6. Graphical matrix showing variation in leaf functional traits across the vertical canopy profile for the studied species as well as for the pooled dataset (fifth column). Error bars represent standard errors.
Figure 7. Key wavelengths that enhance leaf sample discrimination. Canopy position was used as the discrimination group.
20 pages, 5477 KiB  
Article
Impacts of Insufficient Observations on the Monitoring of Short- and Long-Term Suspended Solids Variations in Highly Dynamic Waters, and Implications for an Optimal Observation Strategy
by Qu Zhou, Liqiao Tian, Onyx W. H. Wai, Jian Li, Zhaohua Sun and Wenkai Li
Remote Sens. 2018, 10(2), 345; https://doi.org/10.3390/rs10020345 - 23 Feb 2018
Cited by 11 | Viewed by 4756
Abstract
Coastal water regions represent some of the most fragile ecosystems, exposed to both climate change and human activities. While remote sensing provides unprecedented amounts of data for water quality monitoring on regional to global scales, the performance of satellite observations is frequently impeded by revisit intervals and unfavorable conditions, such as cloud coverage and sun glint. Therefore, it is crucial to evaluate the impacts of varied sampling strategies (time and frequency) and insufficient observations on the monitoring of short-term and long-term tendencies of water quality parameters, such as suspended solids (SS), in highly dynamic coastal waters. Taking advantage of the first high-frequency in situ SS dataset (at 30 min sampling intervals from 2007 to 2008) collected in Deep Bay, China, this paper presents a quantitative analysis of the influence of sampling strategies on the monitoring of SS, in terms of sampling frequency and time of day. Dramatic variations of SS were observed, with standard deviation coefficients of 48.9% and 54.1% at two fixed stations; in addition, significant uncertainties were revealed, with an average absolute percent difference of approximately 13%, related to sampling frequency and time, using nonlinear optimization and random simulation methods. For a sampling frequency of less than two observations per day, the relative error of SS was higher than 50%; it stabilized at approximately 10% when at least four or five samples were taken per day. The optimal recommended sampling times for SS were around 9:00, 12:00, 14:00, and 16:00 in Deep Bay. A "pseudo" MODIS SS dataset was obtained from the high-frequency in situ SS measurements at 10:30 and 14:00, masked by the temporal gap distribution of MODIS coverage, to avoid uncertainties propagated from atmospheric correction and SS models. Noteworthy uncertainties of daily observations from Terra/Aqua MODIS were found, with mean relative errors of 19.2% and 17.8%, respectively, whereas at the monthly level, the mean relative error of Terra/Aqua MODIS observations was approximately 10.7% (standard deviation of 8.4%). Sensitivity analysis between MODIS coverage and SS relative errors indicated that a temporal coverage (the percentage of valid MODIS observations in a month) of more than 70% is required to obtain high-precision SS measurements at a 5% error level. Furthermore, relative errors of approximately 20% were found at a coverage of 30%, which is the average coverage of satellite observations over global coastal waters. These results highlight the need for high-frequency measurements from geostationary satellites like GOCI and from multi-source ocean color sensors to capture the dynamic processes of coastal waters in both the short and long term. Full article
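The core experiment, quantifying how reduced sampling frequency inflates the error of a long-term mean, can be mimicked in a few lines. A toy Python sketch under assumed dynamics (a noisy semidiurnal signal standing in for the real 30 min SS record; none of the numbers below come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "30 min" SS record for 30 days: mean 50 mg/L, tidal-like cycle plus noise.
t = np.arange(0, 30 * 24, 0.5)                        # hours
ss = 50 + 20 * np.sin(2 * np.pi * t / 12.42) + rng.normal(0, 10, t.size)

def relative_error(samples_per_day, n_trials=2000):
    """Mean |error| of the monthly mean when only a few random daily samples are kept."""
    per_day = 48                                      # 30 min intervals per day
    days = ss.size // per_day
    errs = []
    for _ in range(n_trials):
        idx = np.concatenate(
            [d * per_day + rng.choice(per_day, samples_per_day, replace=False)
             for d in range(days)])
        errs.append(abs(ss[idx].mean() - ss.mean()) / ss.mean())
    return 100 * float(np.mean(errs))

for k in (1, 2, 4, 8):
    print(k, "per day ->", round(relative_error(k), 2), "% error")
```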
Show Figures

Graphical abstract
Figure 1">
Figure 1. Temporal coverage of Terra/Aqua MODIS over Deep Bay (A), and spatial-temporal variation scales of coastal/inland waters (B) (reproduced from Mouw, 2015, [14]).
Figure 2. Location of Deep Bay, and the in situ measurements of suspended solids (SS) at stations A1 and K1.
Figure 3. Regression results between turbidity measured by the OBS sensors and the suspended solids (SS) obtained from water samples (A), and between SS measured by the OBS sensors and the in situ SS obtained from water samples (B).
Figure 4. Daily mean suspended solids (SS) from 2007 to 2008 at stations A1 (A) and K1 (B).
Figure 5. Relative errors (Y, bars) corresponding to the number of observations (N), and the first-order error differences (Y(N) − Y(N + 1), lines) between N and N + 1.
Figure 6. Scatter plots and biases of SS from MODIS (Terra + Aqua) versus in situ SS match-ups at stations A1 (A) and K1 (B).
Figure 7. Relative errors of MODIS/Terra, MODIS/Aqua, the combination of MODIS/Aqua and MODIS/Terra, and the optimization results at station A1.
Figure 8. Frequency statistics of relative errors of MODIS/Terra, MODIS/Aqua, the combination of MODIS/Aqua and MODIS/Terra, and the optimization results at station A1.
Figure 9. Monthly suspended solids (SS) of valid MODIS observations (A), in situ measurements (B), monthly coverage of MODIS observations (C), and relative errors between MODIS and in situ measurements (D).
Figure 10. Monthly relative errors (A) and standard deviations (B) under different coverage of satellite observations.
22 pages, 23300 KiB  
Article
Estimation of Forest Canopy Height and Aboveground Biomass from Spaceborne LiDAR and Landsat Imageries in Maryland
by Mengjia Wang, Rui Sun and Zhiqiang Xiao
Remote Sens. 2018, 10(2), 344; https://doi.org/10.3390/rs10020344 - 23 Feb 2018
Cited by 44 | Viewed by 6585
Abstract
Mapping the regional distribution of forest canopy height and aboveground biomass is worthwhile and necessary for estimating the carbon stocks on Earth and assessing the terrestrial carbon flux. In this study, we produced maps of forest canopy height and aboveground biomass at a 30 m spatial resolution in Maryland by combining Geoscience Laser Altimeter System (GLAS) data and Landsat spectral imagery. The processes for calculating the forest biomass included the following: (i) processing the GLAS waveform and calculating spatially discrete forest canopy heights; (ii) developing canopy height models from Landsat imagery and extrapolating them to spatially contiguous canopy heights in Maryland; and (iii) estimating forest aboveground biomass according to the relationship between canopy height and biomass. In our study, we explore the ability to use the GLAS waveform to calculate canopy height without ground-measured forest metrics (R2 = 0.669, RMSE = 4.82 m, MRE = 15.4%). The machine learning models performed better than the principal component model when mapping the regional forest canopy height and aboveground biomass. The total forest aboveground biomass in Maryland reached approximately 160 Tg. When compared with the existing Biomass_CMS map, our biomass estimates presented a similar distribution, with higher values in the Western Shore Uplands region and the Folded Appalachian Mountain section, and lower values in the Delmarva Peninsula and Allegheny Mountain regions. Full article
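Step (iii) relies on a power-type relationship between canopy height and biomass. A minimal fitting sketch in Python is shown below; the synthetic data and coefficients are illustrative assumptions, not the study's fitted model:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(h, a, b):
    """Aboveground biomass as a power function of canopy height: AGB = a * h**b."""
    return a * np.power(h, b)

rng = np.random.default_rng(2)
height = rng.uniform(5, 35, 80)                        # canopy heights (m)
agb = 2.5 * height ** 1.6 * rng.normal(1.0, 0.1, 80)   # synthetic biomass (Mg/ha)

params, _ = curve_fit(power_model, height, agb, p0=(1.0, 1.0))
pred = power_model(height, *params)
rmse = float(np.sqrt(np.mean((pred - agb) ** 2)))
print("a, b =", params.round(2), " RMSE =", round(rmse, 1), "Mg/ha")
```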
Show Figures

Graphical abstract
Figure 1">
Figure 1. Overall introduction to the study area. (a) The elevation map and the distribution of the physical regions. (b) The distribution of the forest in Maryland and the Geoscience Laser Altimeter System (GLAS) footprints located in forest areas.
Figure 2. A typical waveform profile of a GLAS shot in Maryland.
Figure 3. The evaluation of canopy height estimated from the GLAS waveform. (a) The evaluation of canopy height without slope correction; and (b) the evaluation of canopy height with slope correction.
Figure 4. PCA analysis results. (a) Power model; and (b) evaluation of this model.
Figure 5. The evaluation results of the machine learning models. (a) Evaluation result of the BPANN model; (b) evaluation result of the SVR model; and (c) evaluation result of the RF power model.
Figure 6. The distribution of forest canopy height in Maryland. (a) Forest canopy height estimated by the first principal component power model; (b) forest canopy height estimated by the BPANN model; (c) forest canopy height estimated by the SVR model; and (d) forest canopy height estimated by the RF model.
Figure 7. Forest aboveground biomass model and the evaluation results. (a) Power model to estimate forest aboveground biomass; and (b) evaluation result of the biomass estimation model.
Figure 8. The distribution of forest aboveground biomass in Maryland. (a) Forest aboveground biomass estimated by the PCA power model; (b) forest aboveground biomass estimated by the BP-ANN model; (c) forest aboveground biomass estimated by the SVR model; (d) forest aboveground biomass estimated by the RF model; and (e) forest aboveground biomass estimated by the CMS.
Figure 9. The results of the forest biomass difference. (a) The map of biomass difference in Maryland; (b) the statistical result of the biomass difference.
Figure 10. The forest aboveground biomass in Maryland. (a) Statistical forest biomass values of each county; (b) statistical forest biomass values of each physical region; and (c) the total biomass estimated by all models.
14 pages, 3178 KiB  
Article
Early-Season Stand Count Determination in Corn via Integration of Imagery from Unmanned Aerial Systems (UAS) and Supervised Learning Techniques
by Sebastian Varela, Pruthvidhar Reddy Dhodda, William H. Hsu, P. V. Vara Prasad, Yared Assefa, Nahuel R. Peralta, Terry Griffin, Ajay Sharda, Allison Ferguson and Ignacio A. Ciampitti
Remote Sens. 2018, 10(2), 343; https://doi.org/10.3390/rs10020343 - 23 Feb 2018
Cited by 57 | Viewed by 10816
Abstract
Corn (Zea mays L.) is one of the crops most sensitive to planting pattern and early-season uniformity. The most common method to determine the number of plants is visual inspection on the ground, but this field activity is time-consuming, labor-intensive, and biased, and may lead to less profitable decisions by farmers. The objective of this study was to develop a reliable, timely, and unbiased method for counting corn plants based on ultra-high-resolution imagery acquired from unmanned aerial systems (UAS) to automatically scout fields, and to apply it to real field conditions. A ground sampling distance of 2.4 mm was targeted to extract information at the plant level. First, an excess greenness (ExG) index was used to separate green pixels from the background; then rows and inter-row contours were identified and extracted. A scalable training procedure was implemented using geometric descriptors as inputs to the classifier. Second, a decision tree was implemented and tested using two training modes in each site to expose the workflow to different ground conditions at the time of the aerial data acquisition. Differences in performance were due to training modes and spatial resolutions at the two sites. For the object classification task, an overall accuracy of 0.96, based on the proportion of correct assessments of corn and non-corn objects, was obtained for local (per-site) classification, and an accuracy of 0.93 was obtained for the combined training modes. For successful model implementation, plants should have between two and three leaves when images are collected (avoiding overlap between plants). The best workflow performance was reached at 2.4 mm resolution, corresponding to 10 m of altitude (the lowest altitude); higher altitudes were gradually penalized. This coincided with the larger number of detected green objects in the images and the effectiveness of geometry as a descriptor for corn plant detection. Full article
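The first step of the workflow, separating green pixels with an excess greenness index, commonly uses the definition ExG = 2g − r − b on normalized chromatic coordinates. A small sketch under that assumption (the 0.1 threshold is an illustrative placeholder; the paper's exact thresholding may differ):

```python
import numpy as np

def excess_greenness(rgb):
    """ExG = 2g - r - b on chromatic coordinates; rgb is (H, W, 3), any scale."""
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

def green_mask(rgb, threshold=0.1):
    """Binary vegetation mask; the threshold is an assumed value, often tuned
    or replaced by an automatic method such as Otsu's in practice."""
    return excess_greenness(rgb) > threshold

# Toy 1 x 2 image: a green plant pixel next to a grey soil pixel.
img = np.array([[[60, 160, 50], [120, 115, 110]]], dtype=float)
print(green_mask(img))  # [[ True False]]
```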
(This article belongs to the Special Issue Remote Sensing from Unmanned Aerial Vehicles (UAVs))
Show Figures

Graphical abstract
Figure 1">
Figure 1. Left: on-farm fields located in the northeast region of Kansas. Top-right: Site 1, Atchison, KS; bottom-right: Site 2, Jefferson, KS. Purple squares = field sampled areas.
Figure 2. Workflow for plant estimation via unmanned aerial systems (UAS). (A) Data pre-processing, (B) training, (C) cross-validation, and (D) testing.
Figure 3. Diagram of the Excess Greenness (ExG) index projection, local-maxima smoothing, and thresholding for row location.
Figure 4. Left: RGB; center: ExG; right: classifier output on testing data in site 1. Green contours: corn objects; red contours: non-corn objects.
Figure 5. Receiver operating characteristic (ROC) curves (a) and positive rate (PR) plots (b) based on testing data for each site.
Figure 6. ROC curves (a) and PR plots (b) of the downscaled testing data set at the testing resolutions.
Figure 7. Difference between ground-truth and objects detected by the classifier as a function of spatial resolution.
24 pages, 13762 KiB  
Article
A Hierarchical Fully Convolutional Network Integrated with Sparse and Low-Rank Subspace Representations for PolSAR Imagery Classification
by Yan Wang, Chu He, Xinlong Liu and Mingsheng Liao
Remote Sens. 2018, 10(2), 342; https://doi.org/10.3390/rs10020342 - 23 Feb 2018
Cited by 35 | Viewed by 6168
Abstract
Inspired by the enormous success of fully convolutional networks (FCN) in semantic segmentation, as well as the similarity between semantic segmentation and pixel-by-pixel polarimetric synthetic aperture radar (PolSAR) image classification, exploring how to effectively combine unique polarimetric properties with FCN is a promising direction for PolSAR image classification. Moreover, recent research shows that sparse and low-rank representations can convey valuable information for classification purposes. Therefore, this paper presents an effective PolSAR image classification scheme that integrates deep spatial patterns learned automatically by FCN with sparse and low-rank subspace features: (1) a shallow subspace learning based on sparse and low-rank graph embedding is first introduced to capture the local and global structures of high-dimensional polarimetric data; (2) a pre-trained deep FCN-8s model is transferred to extract the nonlinear deep multi-scale spatial information of the PolSAR image; and (3) the shallow sparse and low-rank subspace features are integrated to boost the discrimination of the deep spatial features. The integrated hierarchical subspace features are then used for subsequent classification, combined with a discriminative model. Extensive experiments on three real PolSAR datasets indicate that the proposed method achieves competitive performance, particularly in the case where the available training samples are limited. Full article
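For orientation, the canonical low-rank representation (LRR) problem that underlies this kind of subspace learning is usually written as below; this is the standard LRR formulation from the literature, and the paper's variant layers sparsity and graph-embedding terms on top of it:

```latex
\min_{Z,E}\; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

where X is the data matrix with pixels as columns, Z is the low-rank coefficient matrix, E captures column-sparse noise, \|Z\|_{*} is the nuclear norm, and \|E\|_{2,1} sums the l2 norms of the columns of E.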
(This article belongs to the Special Issue Deep Learning for Remote Sensing)
Show Figures

Graphical abstract
Figure 1">
Figure 1. The framework of the proposed methods. Upper left: learn sparse and low-rank subspace representations of high-dimensional polarimetric data. Lower left: learn deep multi-scale spatial features via FCN-8s. Middle column: visualizations of subspace features in 2D space and classification map. Right column: integrate hierarchical subspace features for classification combined with a discriminative model.
Figure 2. Architectures of fully convolutional networks.
Figure 3. The adjacency graphs of graph embedding.
Figure 4. The flowchart of learning a low-dimensional sparse and low-rank subspace of high-dimensional PolSAR data based upon graph embedding. The features in the learned subspace are used for subsequent classification.
Figure 5. A simple illustration of the receptive field of different layers. The receptive fields corresponding to locations "A" and "B" are 9 and 36, respectively.
Figure 6. The procedure of extracting deep multi-scale spatial features via FCN-8s: "conv", "Deconv" and "crop" denote the convolutional, deconvolutional and crop operations, respectively. The 21D outputs of the "score" layer are the desired deep spatial features.
Figure 7. Visual feature maps in the first channel of specific layers of FCN-8s corresponding to Figure 6. (a) grayscale image of input layer; (b) pool1 layer; (c) pool2 layer; (d) pool3 layer; (e) pool4 layer; (f) pool5 layer; (g) fc6 layer; (h) fc7 layer; (i) score_fr layer; (j) upscore2 layer; (k) score_pool4c layer; (l) fuse_pool4 layer; (m) upscore_pool4 layer; (n) score_pool3c layer; (o) fuse_pool3 layer; (p) score layer.
Figure 8. Visualizations of features in 2D space using t-SNE. Each sample is visualized as a point, and samples with the same color belong to the same class. (a) raw polarimetric feature; (b) sparse and low-rank subspace feature; (c) deep spatial feature; (d) integrated hierarchical subspace feature.
Figure 9. Flevoland dataset. (a) Pauli RGB; (b) ground truth and corresponding legend (the number of samples in each class is given in brackets).
Figure 10. San Francisco dataset. (a) Pauli RGB; (b) ground truth and corresponding legend (the number of samples in each class is given in brackets).
Figure 11. Flevoland Benchmark dataset. (a) Pauli RGB; (b) ground truth and corresponding legend (the number of samples in each class is given in brackets).
Figure 12. Overall accuracy versus reduced dimension using BSLGDA on the three PolSAR data sets (with 1% training samples).
Figure 13. Overall accuracy versus training sample ratio using BSLGDA + FCN on the three PolSAR data sets (in their optimum dimensions). (a) Flevoland data set; (b) San Francisco data set; (c) Flevoland Benchmark data set.
Figure 14. Classification confusion matrix for the Flevoland data set by using BSLGDA + FCN.
Figure 15. Classification confusion matrix for the San Francisco data set by using BSLGDA + FCN.
Figure 16. Classification confusion matrix for the Flevoland Benchmark data set by using BSLGDA + FCN.
Figure 17. Classification maps resulting from the Flevoland data set with different algorithms. (a) BSLGDA: 73.77%; (b) FCN: 90.99%; (c) PCA + FCN: 81.97%; (d) BLGDA + FCN: 95.78%; (e) BSGDA + FCN: 96.12%; (f) BSLGDA + FCN: 95.67%.
Figure 18. Classification maps resulting from the San Francisco data set with different algorithms. (a) BSLGDA: 86.51%; (b) FCN: 97.82%; (c) PCA + FCN: 94.70%; (d) BLGDA + FCN: 98.38%; (e) BSGDA + FCN: 98.04%; (f) BSLGDA + FCN: 98.34%.
Figure 19. Classification maps resulting from the Flevoland Benchmark data set with different algorithms. (a) BSLGDA: 92.80%; (b) FCN: 97.45%; (c) PCA + FCN: 96.49%; (d) BLGDA + FCN: 99.71%; (e) BSGDA + FCN: 99.79%; (f) BSLGDA + FCN: 99.72%.
15 pages, 19642 KiB  
Article
A Fully Automatic Burnt Area Mapping Processor Based on AVHRR Imagery—A TIMELINE Thematic Processor
by Simon Plank and Sandro Martinis
Remote Sens. 2018, 10(2), 341; https://doi.org/10.3390/rs10020341 - 23 Feb 2018
Cited by 8 | Viewed by 3831
Abstract
The German Aerospace Center’s (DLR) TIMELINE project (“Time Series Processing of Medium Resolution Earth Observation Data Assessing Long-Term Dynamics in our Natural Environment”) aims to develop an operational processing and data management environment to process 30 years of National Oceanic and Atmospheric Administration (NOAA)—Advanced Very High-Resolution Radiometer (AVHRR) raw data into Level (L) 1b, L2, and L3 products. This article presents the current status of the fully automated L3 burnt area mapping processor, which is based on multi-temporal datasets. The advantages of the proposed approach are (I) the combined use of different indices to improve the classification result, (II) the provision of a fully automated processor, (III) the generation and usage of an up-to-date cloud-free pre-fire dataset, (IV) classification with adaptive thresholding, and (V) the assignment of five different probability levels to the burnt areas detected. The results of the AVHRR data-based burn scar mapping processor were validated with the Moderate Resolution Imaging Spectroradiometer (MODIS) burnt area product MCD64 at four different European study sites. In addition, the accuracy of the AVHRR-based classification and that of the MCD64 itself were assessed by means of Landsat imagery. Full article
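The validation against MCD64 reduces to per-pixel agreement between two binary burn masks, which is what the black/red/yellow classes in Figures 3–8 encode. A compact Python sketch of that bookkeeping (the toy arrays are purely illustrative):

```python
import numpy as np

def burn_agreement(classified, reference):
    """Correct, omission and commission fractions for binary burn masks."""
    correct = np.logical_and(classified, reference).sum()       # black in the maps
    omission = np.logical_and(~classified, reference).sum()     # red: missed burns
    commission = np.logical_and(classified, ~reference).sum()   # yellow: overestimated
    ref_total = reference.sum()
    return {
        "correct_frac": correct / ref_total,
        "omission_frac": omission / ref_total,
        "commission_frac": commission / max(classified.sum(), 1),
    }

avhrr = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)   # toy AVHRR classification
mcd64 = np.array([[1, 1, 1], [0, 1, 0]], dtype=bool)   # toy MCD64 reference
print(burn_agreement(avhrr, mcd64))
```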
Show Figures

Graphical abstract
Figure 1">
Figure 1. Overview map of the test sites of Sicily, Greece, Croatia, and Hungary (red circles). The red rectangle in the world map shows the study area of the TIMELINE project. The background map shows water surfaces in blue, lower-elevation land surfaces in green, mountainous regions in brown, and glacier-covered areas in white.
Figure 2. Flowchart of the developed burnt area mapping processor.
Figure 3. Test site Greece: validation of the AVHRR-based burnt area mapping with the MODIS reference MCD64 for a fire event between 26–30 August 2007. Burnt area correctly classified by AVHRR (black), missed by the AVHRR processor (red), overestimated by the AVHRR processor (yellow).
Figure 4. Test site Sicily: validation of the AVHRR-based burnt area mapping with the MODIS reference MCD64 for a fire event on 23 August 2007. Burnt area correctly classified by AVHRR (black), missed by the AVHRR processor (red), overestimated by the AVHRR processor (yellow).
Figure 5. Test site Croatia: validation of the AVHRR-based burnt area mapping with the MODIS reference MCD64 for a fire event on 4–5 August 2007. Burnt area correctly classified by AVHRR (black), missed by the AVHRR processor (red), overestimated by the AVHRR processor (yellow).
Figure 6. Test site Ukraine: validation of the AVHRR-based burnt area mapping with the MODIS reference MCD64 for a fire event on 21 August 2007. Burnt area correctly classified by AVHRR (black), missed by the AVHRR processor (red), overestimated by the AVHRR processor (yellow).
Figure 7. Test site Greece: (a) validation of the AVHRR-based burnt area mapping with burnt area derived from Landsat-7 imagery for a fire event between 26–30 August 2007. (b) Validation of the MODIS MCD64 burnt area product with burnt area derived from Landsat-7 imagery for the same fire event as in (a). Correctly classified burnt area (black), missed burnt area (red), overestimated by the classification (yellow).
Figure 8. Test site Ukraine: (a) validation of the AVHRR-based burnt area mapping with burnt area derived from Landsat-7 imagery for a fire event on 21 August 2007. (b) Validation of the MODIS MCD64 burnt area product with burnt area derived from Landsat-7 imagery for the same fire event as in (a). Correctly classified burnt area (black), missed burnt area (red), overestimated by the classification (yellow).
17 pages, 2935 KiB  
Article
Monitoring Rice Phenology Based on Backscattering Characteristics of Multi-Temporal RADARSAT-2 Datasets
by Ze He, Shihua Li, Yong Wang, Leiyu Dai and Sen Lin
Remote Sens. 2018, 10(2), 340; https://doi.org/10.3390/rs10020340 - 23 Feb 2018
Cited by 54 | Viewed by 6921
Abstract
Accurate estimation and monitoring of rice phenology is necessary for the management and yield prediction of rice. The radar backscattering coefficient, one of the most direct and accessible radar parameters, has been proven capable of retrieving rice growth parameters. This paper aims to investigate the possibility of monitoring rice phenology (i.e., transplanting, vegetative, reproductive, and maturity) using only the backscattering coefficients, or simple combinations thereof, of multi-temporal RADARSAT-2 datasets. Four RADARSAT-2 datasets were analyzed at 30 sample plots in Meishan City, Sichuan Province, China. By exploiting the relationships of the backscattering coefficients and their combinations versus the phenology of rice, the HH/VV, VV/VH, and HH/VH ratios were found to have the greatest potential for phenology monitoring. A decision tree classifier was applied to distinguish the four phenological phases, and the classifier was effective. The validation of the classifier indicated an overall accuracy level of 86.2%. Most of the errors occurred in the vegetative and reproductive phases, with corresponding errors of 21.4% and 16.7%, respectively. Full article
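The classifier itself is a shallow decision tree over three band ratios. A schematic Python version is given below; note that the threshold values and the ordering of the decision layers are hypothetical placeholders, since the actual thresholds are derived from the training boxplots rather than from this sketch:

```python
# Schematic of the three-ratio decision tree; ratios in dB (e.g., HH/VV in dB
# is HH - VV). All threshold values below are hypothetical placeholders.
T_VVVH, T_HHVV, T_HHVH = 4.0, 2.5, 6.0

def phenology_phase(hh, vv, vh):
    """Assign a rice phenological phase from backscattering coefficients (dB)."""
    vv_vh, hh_vv, hh_vh = vv - vh, hh - vv, hh - vh
    if vv_vh > T_VVVH:
        return "transplanting"
    if hh_vv > T_HHVV:
        return "vegetative"
    return "reproductive" if hh_vh > T_HHVH else "maturity"

print(phenology_phase(hh=-7.0, vv=-10.0, vh=-13.0))  # -> vegetative (toy values)
```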
Show Figures

Graphical abstract
Figure 1">
Figure 1. The study area imaged by the French SPOT-6 optical sensor on 15 July 2016. Thirty sample sites are located and numbered.
Figure 2. Four rice phenological phases in the study area in 2016. (a) Transplanting, (b) vegetative, (c) reproductive, and (d) maturity.
Figure 3. In situ rice phenology at 30 sample sites in 2016. Blue, green, yellow, and red dots, respectively, represent the transplanting, vegetative, reproductive, and maturity phases. Four horizontal lines denote the acquisition dates of the four RADARSAT-2 datasets.
Figure 4. A RADARSAT-2 image acquired on 2 July 2016. VH, VV, and HH bands were assigned as red, green, and blue colors, respectively. The study area is within the yellow box.
Figure 5. A simple decision tree of a variable, λ. Three thresholds divide λ into four classes through two decision layers.
Figure 6. Distributions of development phases vs. VH backscattering coefficients. The data are training data. At about −20 dB, the transplanting phase is separated from the other three phases.
Figure 7. Boxplots of backscattering coefficients (training dataset, expressed in logarithmic scale) and their combinations (calculated in linear scale, then expressed in logarithmic scale) at each phase: (a) VH, (b) VV, (c) HH, (d) HH/VV, (e) VV/VH, (f) HH/VH, (g) HH × VV, (h) VV × VH, (i) HH × VH, (j) HH − VV, (k) VV − VH, (l) HH − VH, (m) HH + VV, (n) VV + VH, (o) HH + VH, (p) VH/(2VH + VV + HH). Red lines represent possible division values separating at least two interquartile ranges (grey parts of boxes).
Figure 8. A decision tree classifier. Thresholds of VV/VH, HH/VV, and HH/VH divide the SAR data into four phenological phases.
Figure 9. Maps of the spatial distribution of rice phenological phases on (a) 15 May, (b) 8 June, (c) 2 July, and (d) 26 July. Blue, green, yellow, and red colors, respectively, represent the transplanting, vegetative, reproductive, and maturity phases of rice plants in the study area.
Figure 10. Evolution of observables (training dataset) provided by the eigenvalue/vector decomposition of the coherency matrix versus phenology: (a) entropy, (b) anisotropy, (c) dominant alpha angle (α1, the alpha angle of the dominant scattering mechanism). Red lines represent possible division values separating at least two interquartile ranges (grey parts of boxes).
Figure 11. Phenology decision tree. Thresholds of anisotropy, entropy, and dominant alpha angle divide the SAR data into four phenological phases.
Figure 12. Gradual temporal change of HH/VH data on subdivisions of the vegetative and reproductive phases.
21 pages, 3954 KiB  
Article
Hyperspectral Unmixing via Low-Rank Representation with Space Consistency Constraint and Spectral Library Pruning
by Xiangrong Zhang, Chen Li, Jingyan Zhang, Qimeng Chen, Jie Feng, Licheng Jiao and Huiyu Zhou
Remote Sens. 2018, 10(2), 339; https://doi.org/10.3390/rs10020339 - 23 Feb 2018
Cited by 43 | Viewed by 5520
Abstract
Spectral unmixing is a popular technique for hyperspectral data interpretation. It focuses on estimating the abundance of pure spectral signatures (called endmembers) in each observed image signature. However, the identification of the endmembers in the original hyperspectral data becomes a challenge due to the lack of pure pixels in the scenes and the difficulty of estimating the number of endmembers in a given scene. To deal with these problems, sparsity-based unmixing algorithms, which regard a large standard spectral library as the endmembers, have recently been proposed. However, the high mutual coherence of spectral libraries always affects the performance of sparse unmixing. In addition, hyperspectral images have distinctive spatial characteristics. In this paper, a new unmixing algorithm via low-rank representation (LRR), based on a space consistency constraint and spectral library pruning, is proposed. The algorithm incorporates spatial information into the LRR model by means of a spatial consistency regularizer, which is based on the assumption that two neighbouring pixels are very likely to have similar fractional abundances for the same endmembers. The pruning strategy is based on the assumption that, if the abundance map of one material does not contain any large values, it is not a real endmember and should be removed from the spectral library. The algorithm not only better captures the spatial structure of the data but also identifies a subset of the spectral library. Thus, the algorithm achieves a better unmixing result and improves the spectral unmixing accuracy significantly. Experimental results on both simulated and real hyperspectral datasets demonstrate the effectiveness of the proposed algorithm. Full article
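Stripped of the low-rank and spatial machinery, library-based abundance estimation at a single pixel is a constrained least-squares fit against the library. A bare-bones nonnegative least squares sketch (using scipy's nnls on a toy three-signature library; purely illustrative):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
A = np.abs(rng.normal(0.5, 0.2, size=(50, 3)))   # toy library: 50 bands x 3 signatures
x_true = np.array([0.6, 0.4, 0.0])               # only two endmembers truly present
y = A @ x_true + rng.normal(0, 0.005, 50)        # observed pixel spectrum

x_hat, residual = nnls(A, y)                     # abundances constrained to be >= 0
print(x_hat.round(3))                            # third abundance should be near zero
```

In the pruning spirit of the paper, a library signature whose estimated abundance never reaches large values anywhere in the image would then be dropped before re-solving.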
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract
Figure 1">
Figure 1. The flow chart of the proposed method.
Figure 2. The schematic for the space consistency of a hyperspectral image.
Figure 3. True fractional abundances of endmembers in the simulated data cube 1 (DC1): (a) simulated image; (b) the true abundance of endmember 1; (c) the true abundance of endmember 2; (d) the true abundance of endmember 3; (e) the true abundance of endmember 4; and (f) the true abundance of endmember 5.
Figure 4. Abundance maps obtained by different unmixing methods for endmember #5 in DC1; from top to bottom, the SNR is 20 dB, 30 dB, and 40 dB.
Figure 5. Ground-truth and estimated abundances obtained by different unmixing methods in the scene DC2, with SNR = 40 dB: (a) ground-truth abundance; (b) estimated abundances obtained by SUnSAL; (c) estimated abundances obtained by CLSUnSAL; and (d) estimated abundances obtained by SLP-SCC-LRR.
Figure 6. True fractional abundances of endmembers in the simulated data cube 3 (DC3): (a) simulated image; (b) the true abundance of endmember 1; (c) the true abundance of endmember 2; (d) the true abundance of endmember 3; (e) the true abundance of endmember 4; and (f) the true abundance of endmember 5.
Figure 7. Abundance maps obtained by the proposed unmixing method in DC3 with an SNR of 40 dB: (a) the estimated abundance of endmember 1; (b) the estimated abundance of endmember 2; (c) the estimated abundance of endmember 3; (d) the estimated abundance of endmember 4; and (e) the estimated abundance of endmember 5.
Figure 8. USGS map showing the location of different minerals in the Cuprite mining district in Nevada. The map is available online at http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif.
Figure 9. Abundance maps estimated for the minerals alunite, buddingtonite, and chalcedony by applying the SUnSAL, CLSUnSAL, SCC-LRR, and SLP-SCC-LRR algorithms to the AVIRIS Cuprite scene using library A.
Figure 10. SRE in relation to lambda (a), beta (b), and T (c) for DC2 with SNR = 30 dB.
22 pages, 2910 KiB  
Article
Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging
by Ninni Saarinen, Mikko Vastaranta, Roope Näsi, Tomi Rosnell, Teemu Hakala, Eija Honkavaara, Michael A. Wulder, Ville Luoma, Antonio M. G. Tommaselli, Nilton N. Imai, Eduardo A. W. Ribeiro, Raul B. Guimarães, Markus Holopainen and Juha Hyyppä
Remote Sens. 2018, 10(2), 338; https://doi.org/10.3390/rs10020338 - 23 Feb 2018
Cited by 63 | Viewed by 10664
Abstract
Forests are the most diverse terrestrial ecosystems, and their biological diversity includes trees, but also other plants, animals, and micro-organisms. One-third of the forested land is in the boreal zone; therefore, changes in biological diversity in boreal forests can shape biodiversity even at the global scale. Several forest attributes, including size variability, the amount of dead wood, and tree species richness, can be applied in assessing the biodiversity of a forest ecosystem. Remote sensing offers a complementary tool to traditional field measurements in mapping and monitoring forest biodiversity. The recent development of small unmanned aerial vehicles (UAVs) enables the detailed characterization of forest ecosystems by providing data with high spatial as well as temporal resolution at reasonable cost. The objective here is to deepen the knowledge about the assessment of plot-level biodiversity indicators in boreal forests with hyperspectral imagery and photogrammetric point clouds from a UAV. We applied the individual tree crown approach (ITC) and the semi-individual tree crown approach (semi-ITC) in estimating plot-level biodiversity indicators. Structural metrics from the photogrammetric point clouds were used together with either spectral features or vegetation indices derived from the hyperspectral imagery. Biodiversity indicators like the amount of dead wood and species richness were mainly underestimated with UAV-based hyperspectral imagery and photogrammetric point clouds. Indicators of structural variability (i.e., standard deviation in diameter-at-breast height and tree height) were the most accurately estimated biodiversity indicators, with relative RMSE between 24.4% and 29.3% with semi-ITC. The largest relative errors occurred for predicting deciduous trees (especially aspen and alder), partly due to their scarcity within the study area. Thus, structural diversity in particular was reliably predicted by integrating the three-dimensional and spectral datasets of UAV-based point clouds and hyperspectral imaging, and can therefore be further utilized in ecological studies, such as biodiversity monitoring. Full article
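As a minimal illustration of the accuracy measure quoted above, the sketch below computes the relative RMSE of a plot-level indicator. The data are made up, and the function is the generic textbook definition, not the authors' code.

```python
import numpy as np

def relative_rmse(observed, predicted):
    """RMSE expressed as a percentage of the observed mean, the accuracy
    measure quoted for the plot-level biodiversity indicators."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / observed.mean()

# e.g., standard deviation of tree height per plot (m), field vs. UAV
field = np.array([3.1, 4.2, 2.8, 5.0, 3.7])
uav   = np.array([2.6, 4.8, 2.2, 4.1, 4.3])
print(f"relative RMSE: {relative_rmse(field, uav):.1f}%")
```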
Show Figures

Graphical abstract
Figure 1: The reflectance orthomosaic composites of spectral bands with central wavelengths of 850 nm, 681 nm, and 554 nm of the seven remote sensing datasets used in this study; locations of sample plots, radiometric reference panels, ground control points, and flight lines in the study area.
Figure 2: An example of the individual tree-crown approach (ITC) and the semi-individual tree-crown approach (semi-ITC). In ITC, only the birch, being the tallest tree within the segment, would be considered in the predictions, whereas in semi-ITC information on both the spruce and the birch would be included.
Figure 3: Mean (left) and median (right) spectra of various tree species (only live trees included) and dead trees.
Figure 4: Accuracy of attributes characterizing biodiversity indicators estimated with ITC (above) and semi-ITC (below). With both approaches, biodiversity indicators were estimated using structural metrics together with either spectral features or vegetation indices. The numbers above the bars are relative root-mean-square errors (RMSEs), and the unit of each attribute is given on the x-axis together with the attribute.
Figure 5: Prediction error of the volume of each tree species with ITC (left) and semi-ITC (right), using either spectral features (top) or vegetation indices (bottom), as a function of the total volume of a sample plot based on field measurements.
Figure 6: Error of biodiversity indicators as a function of field-measured values.
19 pages, 5009 KiB  
Article
Validation and Assessment of Multi-GNSS Real-Time Precise Point Positioning in Simulated Kinematic Mode Using IGS Real-Time Service
by Liang Wang, Zishen Li, Maorong Ge, Frank Neitzel, Zhiyu Wang and Hong Yuan
Remote Sens. 2018, 10(2), 337; https://doi.org/10.3390/rs10020337 - 23 Feb 2018
Cited by 68 | Viewed by 7130
Abstract
Precise Point Positioning (PPP) is a popular technology for precise applications based on the Global Navigation Satellite System (GNSS). Multi-GNSS combined PPP has become a hot topic in recent years with the development of multiple GNSSs. Meanwhile, with the operation of the real-time service (RTS) of the International GNSS Service (IGS), which provides satellite orbit and clock corrections to the broadcast ephemeris, it is possible to obtain real-time precise products of satellite orbits and clocks and to conduct real-time PPP. In this contribution, the real-time multi-GNSS orbit and clock corrections of the CLK93 product are applied for real-time multi-GNSS PPP processing, and their orbit and clock qualities are first investigated in a seven-day experiment by comparison against the final multi-GNSS precise product 'GBM' from GFZ. Then, an experiment involving real-time PPP processing for three stations in the Multi-GNSS Experiment (MGEX) network, with a testing period of two weeks, is conducted to evaluate the convergence performance of real-time PPP in a simulated kinematic mode. The experimental results show that real-time PPP can converge in less than 15 min to an accuracy level of 20 cm. Finally, real-time data streams from 12 globally distributed IGS/MGEX stations over one month are used to assess and validate the positioning accuracy of real-time multi-GNSS PPP. The results show that the simulated kinematic positioning accuracy achieved by real-time PPP at different stations is about 3.0 to 4.0 cm in the horizontal direction and 5.0 to 7.0 cm in the three-dimensional (3D) sense. Full article
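The 20-cm convergence criterion can be illustrated with a short sketch: given a time series of positioning errors, find the first epoch after which the error stays below the threshold for good. This is a generic interpretation under an assumed 30-s epoch interval, not the authors' processing code.

```python
import numpy as np

def convergence_time(errors_m, level=0.20, epoch_s=30):
    """Minutes until the error series stays below `level` (m) for the
    rest of the window; None if it never converges."""
    errors = np.asarray(errors_m)
    above = np.nonzero(errors >= level)[0]
    if above.size == 0:
        return 0.0                      # already below the level
    first_ok = above[-1] + 1            # epoch after the last violation
    if first_ok >= errors.size:
        return None                     # never converged in this window
    return first_ok * epoch_s / 60.0

# synthetic error series decaying from 1 m toward a few cm
t = np.arange(240)
err = 1.0 * np.exp(-t / 25.0) + 0.03
print(convergence_time(err))            # ~22.5 min for this toy series
```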
Show Figures

Graphical abstract
Figure 1: The differences between orbits and clocks calculated from real-time products and GFZ's final products: (1) GPS; (2) GLONASS; (3) GALILEO; (4) BDS.
Figure 2: The flow diagram of real-time precise point positioning (PPP) processing.
Figure 3: The error series of real-time simulated kinematic PPP on a time scale of 24 h: (1) FFMJ; (2) POTS; (3) SIN1.
Figure 4: The convergence time to the 20-cm level for FFMJ (left: bar series; right: cumulative distribution function (CDF) curve).
Figure 5: The convergence time to the 20-cm level for POTS (left: bar series; right: CDF curve).
Figure 6: The convergence time to the 20-cm level for SIN1 (left: bar series; right: CDF curve).
Figure 7: The distribution of the stations selected for the accuracy test experiment.
Figure 8: An example of the time series of positioning errors on the second day for each station.
Figure 9: The daily root mean square (RMS) values of positioning errors for each station.
12 pages, 3663 KiB  
Article
Using Satellite Error Modeling to Improve GPM-Level 3 Rainfall Estimates over the Central Amazon Region
by Rômulo Oliveira, Viviana Maggioni, Daniel Vila and Leonardo Porcacchia
Remote Sens. 2018, 10(2), 336; https://doi.org/10.3390/rs10020336 - 23 Feb 2018
Cited by 21 | Viewed by 4635
Abstract
This study aims to assess the characteristics and uncertainty of Integrated Multisatellite Retrievals for Global Precipitation Measurement (GPM) (IMERG) Level 3 rainfall estimates and to improve those estimates using an error model over the central Amazon region. The S-band Amazon Protection National System (SIPAM) radar is used as the reference, and the Precipitation Uncertainties for Satellite Hydrology (PUSH) framework is adopted to characterize the uncertainties associated with the satellite precipitation product. PUSH is calibrated and validated for the study region and takes into account factors like seasonality and surface type (i.e., land and river). Results demonstrate that the PUSH model is suitable for characterizing errors in the IMERG algorithm when compared with S-band SIPAM radar estimates. PUSH could efficiently predict the satellite rainfall error distribution in terms of spatial and intensity distribution. However, an underestimation (overestimation) of light satellite rain rates was observed during the dry (wet) period, mainly over rivers. Although the estimated error showed a lower standard deviation than the observed error, the correlation between satellite and radar rainfall was high, and the systematic error was well captured along the Negro, Solimões, and Amazon rivers, especially during the wet season. Full article
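As a rough illustration of the quantity PUSH is calibrated against, the sketch below forms the satellite-minus-radar error field and its empirical PDF from synthetic data; the actual PUSH calibration is considerably more involved and is not reproduced here.

```python
import numpy as np

def error_pdf(satellite, reference, bins):
    """Empirical PDF of the satellite rainfall error (satellite minus
    radar), the quantity an error model like PUSH is fit to."""
    err = np.asarray(satellite) - np.asarray(reference)
    hist, edges = np.histogram(err, bins=bins, density=True)
    return err, hist, edges

rng = np.random.default_rng(0)
radar = rng.gamma(2.0, 2.0, 10_000)                   # synthetic rain (mm/h)
imerg = radar * rng.lognormal(0.0, 0.3, radar.size)   # multiplicative error
err, pdf, edges = error_pdf(imerg, radar, bins=np.linspace(-10, 10, 41))
print(f"mean error {err.mean():+.2f} mm/h, std {err.std():.2f} mm/h")
```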
Show Figures

Graphical abstract
Figure 1: Study area; S-band Amazon Protection National System (SIPAM) radar location in Manaus (in red), Amazonas (AM); and surface class masks (land and river, land only, and river only).
Figure 2: Precipitation regime over Manaus city (National Institute of Meteorology (INMET) station) and wet (blue) and dry (red) periods for PUSH calibration and validation. The shaded area represents the rainy season based on the Climatological Normal (1961–1990), shown with a black line. Wet and dry periods are based on the Liebmann and Marengo [19] criterion, which takes into account the actual observations (gray line). The CHUVA/GoAmazon IOP1 (wet) and IOP2 (dry) periods are also indicated.
Figure 3: Histogram of the correct no-precipitation detection error (Case 0) for the 0.2 mm h−1 threshold over land–river (a), river only (b), and land only (c) during the dry (red) and wet (blue) seasons. Bars indicate the observed probability density function (PDF), dotted lines the simulated PDF, and dashed lines the PDF differences (simulated–observed).
Figure 4: Frequency differences (estimated–observed) (Case 1) for the dry (left) and wet (right) validation periods over land–river (a,b), river only (c,d), and land only (e,f), for threshold values of satellite rain rates between 1.0 and 15.0 mm h−1.
Figure 5: Comparisons of observed and estimated errors during a single time step (06:30–06:59 UTC on 28 May 2015) over land–river (a–c), river only (d–f), and land only (g–i), during the dry validation period. The observed error is defined as the difference between the Integrated Multisatellite Retrievals for GPM (IMERG) satellite retrieval and the S-band SIPAM radar observation. The estimated error is defined as the difference between the satellite and the estimated reference precipitation (not shown). The scatterplots (c,f,i) show estimated error versus observed error.
Figure 6: As in Figure 5, but for the wet validation period (04:00–04:29 UTC on 12 March 2014).
Figure 7: Spatial distributions of the standard deviation of (a,d) observed and (b,e) estimated errors and their (c,f) correlation coefficients over the dry (upper) and wet (lower) seasons.
Figure 8: Taylor diagram (a) and performance diagram (b) showing dry- and wet-season metrics of the original IMERG rainfall estimates versus IMERG modeled via PUSH, for different surface types over the Manaus region during the validation period. In (a), the angular axis shows COR, the radial axes (blue lines) show the SD normalized against the reference, and the centered RMS difference is represented by the solid gray line. In (b), dashed lines represent bias scores with labels on the outward extension of the line, the labeled solid contours correspond to CSI, the x- and y-axes represent SR and POD, respectively, and sampling uncertainty is given by the crosshairs.
16 pages, 10813 KiB  
Article
Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering
by Matthew Parkan and Devis Tuia
Remote Sens. 2018, 10(2), 335; https://doi.org/10.3390/rs10020335 - 23 Feb 2018
Cited by 9 | Viewed by 5660
Abstract
Individual tree crown segmentation from Airborne Laser Scanning data is a central problem in forest remote sensing. Focusing on single-layered, spruce- and fir-dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys) using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate that this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation. Full article
(This article belongs to the Special Issue Lidar for Forest Science and Management)
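The counting step behind the shape probability can be sketched as follows. For brevity, convex hulls (via SciPy's Delaunay triangulation) stand in for the paper's 3D alpha shapes, and the ensemble is a set of jittered synthetic crowns; only the frequency-counting logic mirrors the described method.

```python
import numpy as np
from scipy.spatial import Delaunay

def shape_probability(points, ensemble_point_sets):
    """Fraction of ensemble shapes that contain each point of a segment.
    Convex hulls approximate the paper's 3D alpha shapes here."""
    counts = np.zeros(len(points))
    for member in ensemble_point_sets:
        hull = Delaunay(member)                  # hull of a similar tree
        counts += hull.find_simplex(points) >= 0 # inside-test per point
    return counts / len(ensemble_point_sets)

# toy ensemble of jittered copies of one synthetic crown
rng = np.random.default_rng(1)
crown = rng.normal(0, 1, (300, 3))
ensemble = [crown + rng.normal(0, 0.15, crown.shape) for _ in range(20)]
prob = shape_probability(crown, ensemble)
filtered = crown[prob >= 0.25]                   # the paper's Pr_min = 0.25
print(len(crown), "->", len(filtered), "points kept")
```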
Show Figures

Graphical abstract
Figure 1: (a) Global and country-level context. (b) ALS point cloud (high vegetation only) of the study site represented with a false color composite (Red channel = ALS intensity rescaled to the 0–1 range, Green channel = aerial image Red, Blue channel = aerial image Green). For leaf-off ALS acquisitions, this color scheme helps differentiate foliage persistence (red represents persistent foliage and green deciduous foliage). The yellow polygon indicates the extent of the field survey. (c) False color composite oblique view of the ALS point cloud (high vegetation only). (d) Side (first row) and top (second row) views of manually delineated tree examples.
Figure 2: Main steps used to compute shape probability and subsequently filter the initial segment shape.
Figure 3: (a) Tree top detection results with a variable-size convolution window. (b) Raster CHM segmentation obtained with the marker-controlled watershed algorithm.
Figure 4: (a) The total height (h), the 3D convex alpha shape (in red) volume (v), and the median intensity (i) of points located in the top 15% of the tree crown are used as features because they are less affected by poor segmentation. (b) The single-region 3D alpha shape (outlined in blue) derived from the point cloud segment.
Figure 5: Example of an ensemble containing 69 overlaid segments with similar features. Dense point areas indicate high shape probability. (a) Side view. (b) Top view.
Figure 6: Each point cloud segment P0 is overlaid with the alpha shapes S0..N of similar segments (including itself). Regions of the point cloud segment that occur more frequently inside S0..N obtain a higher shape probability. Thus, inconsistencies between the shapes in the ensemble can be detected.
Figure 7: Individual segment delineation accuracy. The reference shape (in gray) represents the extent of the manually delineated tree. For visualization purposes, the boundaries of the union (in orange) and intersection (in blue) alpha shapes are spatially separated from the points. In reality, the alpha shape boundary passes through the boundary points.
Figure 8: Sensitivity of the median validation scores to Pr_min. Notice that the delineation scores are undefined when the detection rate reaches 0.
Figure 9: Boxplots of delineation scores before and after filtering the segments with Pr_min = 0.25. All the delineation scores except recall are significantly higher after filtering. The filtering also reduces the score spread.
Figure 10: (a) Shape probability map (high vegetation only). (b) Side (first row) and top (second row) views of shape probability for six example segments. Segment no. 3 has null probability because it is a particularly tall tree and there were not enough similar trees to form a reliable ensemble. (c) Filtered segments using Pr_min = 0.25.
16 pages, 8028 KiB  
Article
A Novel Adaptive Joint Time Frequency Algorithm by the Neural Network for the ISAR Rotational Compensation
by Zisheng Wang, Wei Yang, Zhuming Chen, Zhiqin Zhao, Haoquan Hu and Conghui Qi
Remote Sens. 2018, 10(2), 334; https://doi.org/10.3390/rs10020334 - 23 Feb 2018
Cited by 7 | Viewed by 3647
Abstract
We propose a novel adaptive joint time-frequency algorithm combined with a neural network (AJTF-NN) to focus distorted inverse synthetic aperture radar (ISAR) images. In this paper, a coefficient estimator based on an artificial neural network (ANN) is first developed to solve the time-consuming polynomial phase coefficient estimation problem in rotational motion compensation (RMC). The training method, the cost function, and the structure of the ANN are comprehensively discussed. In addition, we propose an original method to generate the training dataset from ISAR signal models with randomly chosen motion characteristics. The prediction results of the ANN estimator are then used either to directly compensate the ISAR image or to provide a more accurate initial searching range to the AJTF for possible low-performance scenarios. Finally, several simulation models, including ideal point scatterers and a realistic Airbus A380, are employed to comprehensively investigate the properties of the AJTF-NN, such as its stability and efficiency under different signal-to-noise ratios (SNRs). Results show that the proposed method is much faster than other prevalent improved searching methods, with an acceleration ratio of up to 424 times without deterioration of the compensated image quality. Therefore, the proposed method has potential for real-time application to the RMC problem in ISAR imaging. Full article
(This article belongs to the Section Remote Sensing Image Processing)
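The RMC step that the ANN coefficient estimator feeds can be illustrated with a toy sketch: multiply the slow-time signal by the conjugate of the estimated polynomial phase. All parameter values below (PRF, Doppler, acceleration coefficient) are invented for the demonstration and do not come from the paper.

```python
import numpy as np

def compensate_rmc(signal, coeffs, prf):
    """Remove a polynomial phase error exp(j*2*pi*(a2*t^2 + a3*t^3 + ...))
    from a slow-time signal, given estimated coefficients (starting at the
    quadratic term); this is the compensation step fed by the estimator."""
    t = np.arange(signal.size) / prf                 # slow time (s)
    phase = sum(a * t ** (k + 2) for k, a in enumerate(coeffs))
    return signal * np.exp(-1j * 2 * np.pi * phase)

# synthetic dominant scatterer with a quadratic phase error
prf, n = 500.0, 1024
t = np.arange(n) / prf
f0 = 100 * prf / n                                   # Doppler on an FFT bin
a2_true = 40.0                                       # toy acceleration term
sig = np.exp(1j * 2 * np.pi * (f0 * t + a2_true * t ** 2))
focused = compensate_rmc(sig, [a2_true], prf)
# after compensation the spectrum collapses to a single Doppler line
peak = np.abs(np.fft.fft(focused)).max() / n
print(f"normalized peak after RMC: {peak:.2f}")      # 1.00 when focused
```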
Show Figures

Graphical abstract
Figure 1: Geometry of ISAR.
Figure 2: Neural network architecture (@ 200 × 100, for example, means the layer has 200 inputs and 100 outputs).
Figure 3: Geometry of the six corner reflectors (the solid black one is intentionally set as the dominant scatterer).
Figure 4: ISAR simulations of ideal point scatterers @ SNR = 0 dB.
Figure 5: Monte Carlo simulation of AJTF-NN @ 5000 runs, SNR = 10 dB.
Figure 6: Performance of AJTF-NN at different SNRs.
Figure 7: Comparison of estimated acceleration results.
Figure 8: Geometry and dimensions of Airbus A380.
Figure 9: ISAR simulations of Airbus A380.
17 pages, 3947 KiB  
Article
Deriving Total Suspended Matter Concentration from the Near-Infrared-Based Inherent Optical Properties over Turbid Waters: A Case Study in Lake Taihu
by Wei Shi, Yunlin Zhang and Menghua Wang
Remote Sens. 2018, 10(2), 333; https://doi.org/10.3390/rs10020333 - 23 Feb 2018
Cited by 37 | Viewed by 5916
Abstract
Normalized water-leaving radiance spectra nLw(λ), particle backscattering coefficients bbp(λ) in the near-infrared (NIR) wavelengths, and total suspended matter (TSM) concentrations over turbid waters are analytically correlated. To demonstrate the use of bbp(λ) in the NIR wavelengths in coastal and inland waters, we used in situ optics and TSM data to develop two TSM algorithms for measurements of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP), using backscattering coefficients at the two NIR bands bbp(745) and bbp(862) for Lake Taihu. The correlation coefficients between the TSM concentrations modeled from bbp(745) and bbp(862) and the in situ TSM are 0.93 and 0.92, respectively. A different in situ dataset acquired between 2012 and 2016 for Lake Taihu was used to validate the performance of the NIR TSM algorithms for VIIRS-SNPP observations. TSM concentrations derived from VIIRS-SNPP observations with these two NIR bbp(λ)-based TSM algorithms matched well with in situ TSM concentrations in Lake Taihu between 2012 and 2016. The normalized root mean square errors (NRMSEs) for the two NIR algorithms are 0.234 and 0.226, respectively. The two NIR-based TSM algorithms are used to compute satellite-derived TSM concentrations to study the seasonal and interannual variability of the TSM concentration in Lake Taihu between 2012 and 2016. The NIR-based TSM algorithms are analytically based, requiring minimal in situ data to tune the coefficients. They are not sensitive to the possible saturation of nLw(λ) in the visible bands for highly turbid waters, and they have the potential to be used for estimating TSM concentrations in turbid waters with NIR nLw(λ) spectra similar to those of Lake Taihu. Full article
(This article belongs to the Special Issue Remote Sensing of Ocean Colour)
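Below is a minimal sketch of how a bbp-to-TSM algorithm of this kind can be calibrated and scored, assuming a generic power-law form fitted in log space and synthetic data; the paper's actual coefficients and functional form are not reproduced here.

```python
import numpy as np

def fit_tsm_model(bbp, tsm):
    """Least-squares power law TSM = A * bbp**B fitted in log space; an
    illustrative stand-in for the paper's calibrated NIR algorithms."""
    B, logA = np.polyfit(np.log(bbp), np.log(tsm), 1)
    return np.exp(logA), B

def nrmse(obs, mod):
    """Normalized RMSE, the skill metric quoted in the abstract."""
    return np.sqrt(np.mean((mod - obs) ** 2)) / np.mean(obs)

rng = np.random.default_rng(2)
bbp745 = rng.uniform(0.05, 1.5, 80)          # m^-1, synthetic backscattering
tsm_true = 60.0 * bbp745 ** 0.9              # toy relationship (g/m^3)
tsm_obs = tsm_true * rng.lognormal(0, 0.15, 80)
A, B = fit_tsm_model(bbp745, tsm_obs)
print(f"TSM = {A:.1f} * bbp(745)^{B:.2f}, "
      f"NRMSE = {nrmse(tsm_obs, A * bbp745 ** B):.3f}")
```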
Show Figures

Graphical abstract
Figure 1: Maps of China's inland Lake Taihu. Locations of the in situ TSM measurements between 2012 and 2016 are marked with "×".
Figure 2: Scatter plots of in situ-derived bbp(λ) versus in situ TSM concentration in Lake Taihu for bbp(λ) at wavelengths of (a) 551 nm, (b) 671 nm, (c) 745 nm, and (d) 862 nm.
Figure 3: Scatter plots for (a) bbp(745)-derived versus in situ-measured TSM, (b) bbp(862)-derived versus in situ-measured TSM, and (c) bbp(862)-derived versus bbp(745)-derived TSM.
Figure 4: Scatter plots of (a) VIIRS bbp(745)-derived TSM(745) versus in situ-measured TSM and (b) VIIRS bbp(862)-derived TSM(862) versus in situ-measured TSM.
Figure 5: Seasonal climatology nLw(745) images (a–d) and nLw(862) images (e–h) for spring, summer, autumn, and winter from VIIRS 2012–2016 measurements.
Figure 6: Seasonal climatology bbp(862) images (a–d) and TSM(862) images (e–h) for spring, summer, autumn, and winter from VIIRS 2012–2016 measurements.
Figure 7: VIIRS-derived yearly composite images of bbp(862) (a–e) and TSM(862) (f–j) for the years 2012–2016.
Figure 8: Variations of VIIRS-derived (a) bbp(745) and bbp(862), and (b) TSM(745) and TSM(862) for the entirety of Lake Taihu.
17 pages, 11375 KiB  
Article
Spatiotemporal Analysis of Actual Evapotranspiration and Its Causes in the Hai Basin
by Nana Yan, Fuyou Tian, Bingfang Wu, Weiwei Zhu and Mingzhao Yu
Remote Sens. 2018, 10(2), 332; https://doi.org/10.3390/rs10020332 - 23 Feb 2018
Cited by 17 | Viewed by 4071
Abstract
Evapotranspiration (ET) is an important component of the eco-hydrological process. Comprehensive analyses of ET change at different spatial and temporal scales can enhance the understanding of hydrological processes and improve water resource management. In this study, monthly ET data and meteorological data from 57 meteorological stations between 2000 and 2014 were used to study the spatiotemporal changes in actual ET and their causes in the Hai Basin. A spatial analysis was performed in GIS to explore the spatial pattern of ET in the basin, while the parametric t-test and the nonparametric Mann-Kendall test were used to analyze the temporal characteristics of interannual and annual ET. The primary causes of the spatiotemporal variations were partly explained by detrended fluctuation analysis. The results were as follows: (i) generally, ET increased from northwest to southeast across the basin, with significant differences in ET due to the heterogeneous landscape. Notably, the ET of water bodies was highest, followed by those of paddy fields, forests, cropland, brush, grassland, and settlements; (ii) from 2000 to 2014, annual ET exhibited an increasing trend of 3.7 mm per year across the basin, implying that the excessive utilization of water resources had not been alleviated and the water resource crisis worsened; (iii) changes in vegetation coverage, wind speed, and air pressure were the major factors that influenced interannual ET trends. Temperature and NDVI largely explained the increase in ET in 2014 and can be used as indicators to evaluate annual ET and provide early warning of associated issues. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
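The per-pixel trend test used here is standard. The sketch below computes the Mann-Kendall Z statistic (with the no-ties variance) for a synthetic annual ET series, using the 1.96/1.64 significance thresholds named in the figure captions; the ET numbers are invented for illustration.

```python
import numpy as np

def mann_kendall_z(series):
    """Mann-Kendall Z statistic for a time series; |Z| > 1.96 is
    significant and 1.64 < |Z| <= 1.96 marginally significant."""
    x = np.asarray(series, dtype=float)
    n = x.size
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = np.sum(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance assuming no ties
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# synthetic annual ET 2000-2014 rising ~3.7 mm/yr plus noise
rng = np.random.default_rng(3)
et = 470 + 3.7 * np.arange(15) + rng.normal(0, 8, 15)
print(f"Z = {mann_kendall_z(et):.2f}")         # > 1.96 -> significant trend
```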
Show Figures

Graphical abstract
Figure 1: Location of the Hai Basin, its subbasins, meteorological stations, and elevation.
Figure 2: Flowchart of the key procedures embedded in ETWatch [22].
Figure 3: The distributions of land use (a) and evapotranspiration (b) and the spatial trend of ET (c) in 2014 in the Hai Basin. The units of evapotranspiration in (b) are mm. (c) was created using the ArcGIS trend analysis tool.
Figure 4: The area of each land use class and its annual average ET in 2014 in the Hai Basin.
Figure 5: The trends of ET, precipitation, and water surplus from 2000 to 2014 in the Hai Basin.
Figure 6: The temporal trends of ET (a), wind speed (b), air pressure (c), and NDVI (d) in the Hai Basin from 2000 to 2014. The Mann-Kendall test was applied to each pixel to determine the trend. Based on the sign of Z, trends were divided into increase (Z > 0), unchanged (Z = 0), and decrease (Z < 0). Confidence was classified into three levels according to |Z| and Zα/2: significant (|Z| > Zα/2 = 1.96, α = 0.05), marginally significant (1.96 > |Z| > Zα/2 = 1.64, α = 0.1), and not significant (0 < |Z| < Zα/2 = 1.64, α = 0.1).
Figure 7: Factors (NDVI, air pressure, sunshine time, wind speed, relative humidity, and average air temperature) that are significantly correlated (Pearson correlation coefficient R > 0.51) with ET in areas of significant or marginally significant changes in ET from 2000 to 2014. Assuming that ET and all factors have bivariate normal distributions, the variable t = r·√((n − 2)/(1 − r²)) has a Student's t distribution [61]. Factors are significantly correlated with ET when R > 0.51 (T > Tα/2, n−2 = 2.16).
Figure 8: The trends in diurnal average ET (a) and parameters (b–h) in 2014: (b) air pressure; (c) sunshine hours (Sunt); (d) average air temperature (Tavg), estimated as the mean of the maximum and minimum; (e) relative humidity (Humd); (f) wind speed (Winv); (g) NDVI; and (h) albedo.
Figure 9: The variation of average ET and NDVI in DDPDQR, DXPDQR, PZYR, HLGPYD, THMJR, and PZWR.
9 pages, 5511 KiB  
Article
Confirmation of ENSO-Southern Ocean Teleconnections Using Satellite-Derived SST
by Brady S. Ferster, Bulusu Subrahmanyam and Alison M. Macdonald
Remote Sens. 2018, 10(2), 331; https://doi.org/10.3390/rs10020331 - 23 Feb 2018
Cited by 19 | Viewed by 6280
Abstract
The Southern Ocean is the focus of many physical, chemical, and biological analyses due to its global importance and highly variable climate. This analysis of sea surface temperatures (SST) and global teleconnections shows that SSTs are significantly spatially correlated with both the Antarctic Oscillation and the Southern Oscillation, with spatial correlations between the indices and standardized SST anomalies approaching 1.0. Here, we report that the recent positive patterns in the Antarctic and Southern Oscillations are driving negative (cooling) trends in SST at the high latitudes of the Southern Ocean and positive (warming) trends within the Southern Hemisphere sub-tropics and mid-latitudes. The coefficient of regression over the 35-year period analyzed implies that standardized temperatures warmed at a rate of 0.0142 per year between 1982 and 2016, with a monthly standard error in the regression of 0.0008. Further regression calculations between the indices and SST indicate strong seasonality in the response to changes in atmospheric circulation, with the strongest feedback occurring throughout the austral summer and autumn. Full article
(This article belongs to the Special Issue Sea Surface Temperature Retrievals from Remote Sensing)
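A minimal sketch of the two building blocks of this analysis, standardized SST anomalies and their Pearson correlation with a climate index, using synthetic data in place of the satellite SST fields and the published index series.

```python
import numpy as np

def standardized_anomalies(sst):
    """Remove the monthly climatology and scale by its standard
    deviation; `sst` has shape (years, 12)."""
    clim = sst.mean(axis=0)
    std = sst.std(axis=0, ddof=1)
    return (sst - clim) / std

rng = np.random.default_rng(4)
years = 35                                        # 1982-2016 span
index = rng.normal(0, 1, (years, 12))             # stand-in for AAO/SO index
sst = 0.6 * index + rng.normal(0, 0.8, (years, 12))
anom = standardized_anomalies(sst).ravel()
r = np.corrcoef(anom, index.ravel())[0, 1]
print(f"Pearson r = {r:.2f}")                     # the per-pixel quantity
```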
Show Figures

Graphical abstract
Figure 1: Pearson's correlation coefficient between standardized SST anomalies and (a) the Antarctic Oscillation (AAO) and (b) the Southern Oscillation (SO). Negative (positive) coefficients are blue (red) and indicate decreased (increased) standardized SST anomalies. Coefficients inside the black contour are significant (alpha = 0.05). (c) The 12-month running means of the AAO (black) and SO (blue) indices between 1982 and 2016; the shaded regions indicate the uncertainty.
Figure 2: The 1982–2016 sea surface temperature (SST) coefficient of regression (year−1) (a) and mean standardized SST anomalies during 2016 (b) and 2010 (c). In (a), values inside the black contour lines represent significant trends (alpha = 0.05). (d) The monthly averaged standardized SST anomalies (black) in the Southern Ocean (30°S–70°S), the 12-month running mean (red), and the linear regression (dashed blue). The coefficient of regression is 0.0142 year−1 and the coefficient of determination (r²) is 0.436. The temporal monthly standard error in the regression is 0.0008.
Figure 3: Monthly mean standardized sea surface temperature (SST) anomalies (°C) during (a) positive Antarctic Oscillation (AAO) and neutral Southern Oscillation (SO) months, (b) neutral AAO and positive SO months, and (c) both positive AAO and SO months. In each instance, a positive (negative) index is defined as greater (less) than 0.5 (−0.5) and neutral as between −0.5 and 0.5.
Figure 4: Monthly mean standardized sea surface temperature (SST) anomalies during positive Antarctic Oscillation (AAO) (a) and Southern Oscillation (SO) (d) months; (b,e) standardized temperature anomalies during positive AAO and SO years, respectively. (c,f) The absolute value of yearly averaged anomalies minus the absolute value of monthly averaged anomalies; red (blue) indicates yearly averages are greater (weaker) than monthly ones. In each instance, a positive index is defined as greater than 0.5.
Figure 5: Coefficients of regression between the Antarctic Oscillation (AAO) (a–d) and standardized sea surface temperature (SST) anomalies (°C) from 1982 to 2016: (a) monthly anomalies averaged over January to March (austral summer), (b) April through June, (c) July through September (austral winter), and (d) October to December. The coefficients of regression between the SO and SST anomalies (e–h) cover the same periods as (a–d), respectively. The largest coefficients occur with the AAO and SO during the austral summer and autumn, while the smallest occur in austral winter.
13 pages, 5074 KiB  
Article
High-Throughput Phenotyping of Canopy Cover and Senescence in Maize Field Trials Using Aerial Digital Canopy Imaging
by Richard Makanza, Mainassara Zaman-Allah, Jill E. Cairns, Cosmos Magorokosho, Amsal Tarekegne, Mike Olsen and Boddupalli M. Prasanna
Remote Sens. 2018, 10(2), 330; https://doi.org/10.3390/rs10020330 - 23 Feb 2018
Cited by 90 | Viewed by 11040
Abstract
In the crop breeding process, data collection methods that allow reliable assessment of crop adaptation traits, faster and cheaper than those currently in use, can significantly improve resource use efficiency by reducing selection costs and can contribute to increased genetic gain through improved selection efficiency. Current methods to estimate crop growth (ground canopy cover) and leaf senescence are essentially manual and/or based on visual scoring, and are therefore often subjective, time consuming, and expensive. Aerial sensing technologies offer radically new perspectives for assessing these traits at low cost, faster, and in a more objective manner. We report the use of an unmanned aerial vehicle (UAV) equipped with an RGB camera for crop cover and canopy senescence assessment in maize field trials. Aerial-imaging-derived data showed a moderately high heritability for both traits, with a significant genetic correlation with grain yield. In addition, in some cases, the correlation between the visual assessment (prone to subjectivity) of crop senescence and the senescence index calculated from aerial imaging data was significant. We conclude that UAV-based aerial sensing platforms have great potential for monitoring the dynamics of crop canopy characteristics, such as crop vigor through ground canopy cover and canopy senescence, in breeding trial plots. This is anticipated to assist in improving selection efficiency through higher accuracy and precision, as well as reduced time and cost of data collection. Full article
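Conceptually, the plot-level traits reduce to pixel counting after a color classification of each plot image. The sketch below uses a generic excess-green rule with made-up thresholds, not the paper's calibrated classifier; the senescence index is computed here as the senescent fraction of the classified canopy.

```python
import numpy as np

def canopy_fractions(rgb):
    """Classify an RGB plot image (float 0-1, H x W x 3) into soil,
    green canopy, and yellow (senescent) canopy with simple color
    indices; all thresholds are illustrative."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                                 # excess-green index
    green = exg > 0.1                                   # photosynthetic canopy
    yellow = (~green) & (r > b + 0.1) & (g > b + 0.1)   # senescent canopy
    canopy = green | yellow
    cover = canopy.mean()                               # ground canopy cover
    senescence = yellow.sum() / max(canopy.sum(), 1)    # senescence index
    return cover, senescence

rng = np.random.default_rng(5)
img = rng.uniform(0, 1, (100, 100, 3))                  # stand-in plot image
cover, sen = canopy_fractions(img)
print(f"canopy cover {100*cover:.1f}%, senescence index {100*sen:.1f}%")
```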
Show Figures

Graphical abstract
Figure 1: Single-shot aerial image taken from an unmanned aerial vehicle (UAV) platform showing (a) the experimental setup with single-plot details and (b) the location of the trial plots.
Figure 2: Simplified workflow diagram of the main image processing steps.
Figure 3: (a) Aerial image mosaic of maize hybrid trials with 150 plots each; (b) preprocessed details of a portion of the field; (c) classification of soil (white) and green canopy (green), yellow canopy (yellow), and dry canopy (gray); (d) results table.
Figure 4: Time-sequence aerial images of maize hybrids at three different developmental stages grown at the International Maize and Wheat Improvement Center (CIMMYT)–Harare research station in Zimbabwe. The trials comprised 50 varieties each, planted in an alpha lattice design with three replicates (DAS = days after sowing).
Figure 5: Relationship between visual score and the senescence index derived from aerial imaging data for two different field trials.
18 pages, 8319 KiB  
Article
Spatio-Temporal Characterization of a Reclamation Settlement in the Shanghai Coastal Area with Time Series Analyses of X-, C-, and L-Band SAR Datasets
by Mengshi Yang, Tianliang Yang, Lu Zhang, Jinxin Lin, Xiaoqiong Qin and Mingsheng Liao
Remote Sens. 2018, 10(2), 329; https://doi.org/10.3390/rs10020329 - 22 Feb 2018
Cited by 62 | Viewed by 5645
Abstract
Large-scale reclamation projects during the past decades have been recognized as one of the driving factors behind land subsidence in coastal areas. However, the pattern of temporal evolution in reclamation settlements has rarely been analyzed. In this work, we study the spatio-temporal evolution pattern of Linggang New City (LNC) in Shanghai, China, using space-borne synthetic aperture radar interferometry (InSAR) methods. Three data stacks, comprising 11 X-band TerraSAR-X, 20 L-band ALOS PALSAR, and 35 C-band ENVISAT ASAR images, were used to retrieve time series deformation from 2007 to 2010 in the LNC. The InSAR analyses of the three data stacks display strong agreement in mean deformation rates, with coefficients of determination of about 0.9 and standard deviations of the inter-stack differences of less than 4 mm/y. Meanwhile, validation against leveling data indicates that all three data stacks achieved millimeter-level accuracy. The spatial distribution and temporal evolution of deformation in the LNC indicated by these InSAR results relate to historical reclamation activities, geological features, and soil mechanisms. This research shows that ground deformation in the LNC after the reclamation projects experienced three distinct phases: primary consolidation, a slight rebound, and a plateau period. Full article
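The two agreement measures quoted above are straightforward to reproduce for any pair of deformation rate maps sampled at common points; the sketch below uses synthetic rates in place of the actual InSAR results.

```python
import numpy as np

def interstack_agreement(rates_a, rates_b):
    """Coefficient of determination and standard deviation of the rate
    differences, the two inter-stack agreement measures quoted above."""
    r = np.corrcoef(rates_a, rates_b)[0, 1]
    return r ** 2, np.std(rates_a - rates_b, ddof=1)

rng = np.random.default_rng(6)
truth = rng.uniform(-30, 5, 500)            # mm/yr at common points (toy)
tsx = truth + rng.normal(0, 2, 500)         # e.g., TerraSAR-X-derived rates
asar = truth + rng.normal(0, 2, 500)        # e.g., ENVISAT ASAR-derived rates
r2, sd = interstack_agreement(tsx, asar)
print(f"R^2 = {r2:.2f}, std of differences = {sd:.1f} mm/yr")
```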
Show Figures

Figure 1: Geographic location of the LNC.
Figure 2: Landsat TM/ETM+ images over the LNC on (a) 3 November 1999, (b) 16 December 2003, and (c) 6 May 2009.
Figure 3: (a) The coverage of the TerraSAR-X, ALOS PALSAR, and ENVISAT ASAR data; (b) detailed map of the LNC and the locations of leveling benchmarks (black triangles). The background is a Landsat TM/ETM+ image acquired on 1 November 2010.
Figure 4: Temporal distributions of the interferograms generated from the three data stacks.
Figure 5: Motion rates of the LNC derived by time series analyses using the (a) X-band TerraSAR-X, (b) L-band ALOS PALSAR, and (c) C-band ENVISAT ASAR data stacks. The red star indicates the reference point; black triangles indicate the locations of leveling benchmarks. The background is a mean amplitude map of 11 TerraSAR-X images.
Figure 6: Partition diagram of the LNC: zone 1 formed before 1973, zone 2 between 1973 and 1994, and zone 3 after 2002.
Figure 7: Time series displacements at six typical CPs, (a) P1 through (f) P6, derived from the ASAR, PALSAR, and TerraSAR-X data stacks.
Figure 8: Comparison of mean deformation rates among the three data stacks: (a) TerraSAR-X vs. PALSAR, (b) TerraSAR-X vs. ASAR, (c) ASAR vs. PALSAR.
Figure 9: Validation of InSAR-derived mean deformation rates at leveling benchmarks.
Figure 10: Engineering geologic layers of profile line I-I'. The location of profile line I-I' is indicated in Figure 6.
Figure 11: The compressibility of Shanghai soft soil [21,22]: (a) relationship between the compression index Cc and matric suction S; (b) effects of matric suction S on the resilience index Ce.
28 pages, 15259 KiB  
Article
Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping
by Chi Chen, Bisheng Yang, Shuang Song, Mao Tian, Jianping Li, Wenxia Dai and Lina Fang
Remote Sens. 2018, 10(2), 328; https://doi.org/10.3390/rs10020328 - 22 Feb 2018
Cited by 53 | Viewed by 7816
Abstract
Traditional indoor laser scanning trolleys/backpacks with multiple laser scanners, panorama cameras, and an inertial measurement unit (IMU) installed are a popular solution to the 3D indoor mapping problem. However, such mapping suites are quite expensive and can hardly be replicated with consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds. However, because of its narrow field of view (FOV), its collection efficiency and data coverage are lower than those of laser scanners. Additionally, the limited FOV leads to an increased scanning workload, data processing burden, and risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To find an efficient and low-cost way to collect 3D point cloud data with auxiliary information (i.e., color) for indoor mapping, in this paper we present a prototype indoor mapping solution built upon the calibration of multiple RGB-D sensors to construct an array with a large FOV. Three time-of-flight (ToF)-based Kinect V2 RGB-D cameras are mounted on a rig with different view directions to form a large field of view. The three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration, which involves the geometric and depth calibration of single RGB-D cameras, is solved by a homography-based method and by ray correction followed by range bias correction based on pixel-wise spline functions, respectively. The extrinsic calibration is achieved through a coarse-to-fine scheme that solves the initial exterior orientation parameters (EoPs) from sparse control markers and further refines the initial values with an iterative closest point (ICP) variant minimizing the distance between the RGB-D point clouds and the reference laser point clouds. The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a terrestrial laser scanner (TLS). The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications. Full article
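The per-pixel range-bias correction can be sketched with a smoothing B-spline: fit bias as a function of measured depth from calibration observations, then subtract the predicted bias at runtime. SciPy is used here for illustration, with an invented toy bias curve; the paper's own fitting procedure may differ.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def fit_range_bias(depths_mm, biases_mm, smooth=50.0):
    """Fit a smoothing B-spline bias(depth) for one pixel from
    calibration observations at several known target distances."""
    return splrep(depths_mm, biases_mm, s=smooth)

def correct_range(depth_mm, tck):
    """Subtract the spline-predicted bias from a measured range."""
    return depth_mm - splev(depth_mm, tck)

# one pixel observed against a calibration target at known distances
d = np.linspace(800, 4000, 12)            # measured depths (mm)
bias = 20 * np.sin(d / 700.0) + 5         # toy wiggly ToF bias (mm)
tck = fit_range_bias(d, bias)
corrected = float(correct_range(2500.0, tck))
print(f"raw 2500.0 mm -> corrected {corrected:.1f} mm")
```

Repeating this fit per pixel yields the pixel-wise correction field applied before the extrinsic (coarse-to-fine ICP) stage.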
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>(<b>Top</b>) RGB-Depth (RGB-D) array system setup and (<b>Bottom</b>) schematic diagram of the proposed calibration method. ICP: iterative closest point.</p>
Full article ">Figure 2
<p>RGB-D sensor array connection graph.</p>
Full article ">Figure 3
<p>Sensor structure of the Kinect V2 [<a href="#B62-remotesensing-10-00328" class="html-bibr">62</a>].</p>
Full article ">Figure 4
<p>Determining the weights of the projection errors according to the IR and RGB image resolution. FOV: field of view. (<b>a</b>) Overlapped FOV of the IR and RGB cameras (Front view); (<b>b</b>) Top view of the layout of the IR and RGB cameras.</p>
Full article ">Figure 5
<p>Range bias correction utilizing pixel-wise B-spline functions. The range errors before and after pixel-wise B-splines function fitting at pixel locations 1–5 are listed in the left and right rows, respectively.</p>
Full article ">Figure 5 Cont.
<p>Range bias correction utilizing pixel-wise B-spline functions. The range errors before and after pixel-wise B-splines function fitting at pixel locations 1–5 are listed in the left and right rows, respectively.</p>
Full article ">Figure 6
<p>Color image (<b>left</b>, in grey scale) and IR image (<b>right</b>) for calibration with 5 × 7 × 0.03 checkerboard pattern.</p>
Full article ">Figure 7
<p>Averaging successive depth map frames to suppress the sensor noise (distance: 1.232 m). The horizontal/vertical axis and the legend of (<b>a</b>,<b>b</b>) are in pixels and millimeters, respectively.</p>
Full article ">Figure 8
<p>Range biases distribution averaging 100 successive frames (<b>left</b>) before and (<b>right</b>) after depth calibration. The horizontal/vertical axis and the legend are in pixels and millimeters, respectively.</p>
Full article ">Figure 9
<p>RGB-D camera array point clouds before and after extrinsic calibration. (<b>a</b>) Colored point clouds from individual RGB-D camera. (<b>b</b>) Overlay the individual point clouds before extrinsic calibration of the sensor array. (<b>c</b>) Overlay the individual point clouds after extrinsic calibration of the sensor array.</p>
Full article ">Figure 10
<p>Overview of the calibration field and control targets inside it. TLS: terrestrial laser scanner. (<b>a</b>) Snapshot of the calibration field; (<b>b</b>) Snapshot of the control target; (<b>c</b>) High-contrast control target in TLS point clouds colorized by intensity; (<b>d</b>) High-contrast control target in RGB-D data colorized by true color.</p>
Full article ">Figure 10 Cont.
<p>Overview of the calibration field and control targets inside it. TLS: terrestrial laser scanner. (<b>a</b>) Snapshot of the calibration field; (<b>b</b>) Snapshot of the control target; (<b>c</b>) High-contrast control target in TLS point clouds colorized by intensity; (<b>d</b>) High-contrast control target in RGB-D data colorized by true color.</p>
Full article ">Figure 11
<p>Coarse-to-fine calibration process. (<b>a</b>) Ground truth TLS point clouds collected by VZ-400. (<b>b</b>) RGB-D point clouds after coarse alignment. (<b>c</b>) RGB-D point clouds after refinement. (<b>d</b>) Details in the overlaying regions between the Kinects in the array after coarse calibration (1st, 3rd) and fine calibration (2nd, 4th) at location 1 (red rectangle in (<b>a</b>)) and at location 2 (blue rectangle in (<b>a</b>)). (<b>e</b>) Overlaying the TLS point clouds (rendered in green) and the RGB-D array point clouds (rendered in red) using (<b>left</b>) coarse and (<b>right</b>) fine calibration parameters.</p>
Full article ">Figure 11 Cont.
<p>Coarse-to-fine calibration process. (<b>a</b>) Ground truth TLS point clouds collected by VZ-400. (<b>b</b>) RGB-D point clouds after coarse alignment. (<b>c</b>) RGB-D point clouds after refinement. (<b>d</b>) Details in the overlaying regions between the Kinects in the array after coarse calibration (1st, 3rd) and fine calibration (2nd, 4th) at location 1 (red rectangle in (<b>a</b>)) and at location 2 (blue rectangle in (<b>a</b>)). (<b>e</b>) Overlaying the TLS point clouds (rendered in green) and the RGB-D array point clouds (rendered in red) using (<b>left</b>) coarse and (<b>right</b>) fine calibration parameters.</p>
Figure 12: Details of the overlaid TLS and RGB-D point clouds in the coarse-to-fine calibration process. From top to bottom: (1) TLS point clouds; (2) overlay using the initial transformation; (3) overlay using the refined transformation.
Figure 13: Distribution of the registration errors between the RGB-D array and TLS point clouds, rendered in color. The legend is in meters.
Figure 14: Histogram of the registration errors between the RGB-D array and TLS point clouds. The horizontal axis is in meters.
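Error maps and histograms like those in Figures 13 and 14 are commonly derived from nearest-neighbor cloud-to-cloud distances. A minimal sketch with scipy, assuming both clouds are given as (N, 3) arrays in meters:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_errors(rgbd_pts, tls_pts):
    """Nearest-neighbor distance (m) from each RGB-D point to the TLS cloud."""
    tree = cKDTree(tls_pts)
    distances, _ = tree.query(rgbd_pts)
    return distances

# Hypothetical clouds: a TLS reference plus a noisy RGB-D copy of it
tls = np.random.rand(10000, 3)
rgbd = tls + np.random.normal(0.0, 0.01, tls.shape)
errors = cloud_to_cloud_errors(rgbd, tls)
print(f"mean {errors.mean():.4f} m, 95th pct {np.percentile(errors, 95):.4f} m")
```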
Figure 15: Point clouds captured by the proposed RGB-D camera array in typical indoor scenes.
20 pages, 40917 KiB  
Article
Estimation of Global Vegetation Productivity from Global LAnd Surface Satellite Data
by Tao Yu, Rui Sun, Zhiqiang Xiao, Qiang Zhang, Gang Liu, Tianxiang Cui and Juanmin Wang
Remote Sens. 2018, 10(2), 327; https://doi.org/10.3390/rs10020327 - 22 Feb 2018
Cited by 78 | Viewed by 9726
Abstract
Accurately estimating vegetation productivity is important in research on terrestrial ecosystems, carbon cycles and climate change. Eight-day gross primary production (GPP) and annual net primary production (NPP) are contained in MODerate Resolution Imaging Spectroradiometer (MODIS) products (MOD17), which are considered the first operational [...] Read more.
Accurately estimating vegetation productivity is important in research on terrestrial ecosystems, carbon cycles and climate change. Eight-day gross primary production (GPP) and annual net primary production (NPP) are contained in MODerate Resolution Imaging Spectroradiometer (MODIS) products (MOD17), which are considered the first operational datasets for monitoring global vegetation productivity. However, the cloud-contaminated MODIS leaf area index (LAI) and Fraction of Photosynthetically Active Radiation (FPAR) retrievals may introduce considerable errors into the MODIS GPP and NPP products. In this paper, global eight-day GPP and eight-day NPP were first estimated from Global LAnd Surface Satellite (GLASS) LAI and FPAR products. The GPP and NPP estimates were then validated against FLUXNET GPP data and BigFoot NPP data and compared with the MODIS GPP and NPP products. The time series showed that the GLASS-based GPP estimated in our study was more temporally continuous and spatially complete than MODIS GPP, with smoother trajectories. Validation against FLUXNET GPP and BigFoot NPP demonstrated that the estimated GLASS GPP and NPP achieved higher accuracy for most vegetation types. Full article
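The MOD17-style light use efficiency (LUE) logic behind such GPP estimates can be sketched as follows. This is a minimal illustration assuming a biome-specific maximum LUE and simple temperature and vapor-pressure-deficit down-regulation scalars; it is not the authors' exact implementation.

```python
import numpy as np

def gpp_lue(fpar, par, lue_max, tmin_scalar, vpd_scalar):
    """Eight-day GPP (g C m-2) from a light use efficiency model.

    fpar        -- fraction of absorbed PAR (0-1), e.g., from GLASS FPAR
    par         -- incident PAR over the eight-day period (MJ m-2)
    lue_max     -- biome maximum light use efficiency (g C MJ-1)
    tmin_scalar -- minimum-temperature down-regulation scalar (0-1)
    vpd_scalar  -- vapor pressure deficit down-regulation scalar (0-1)
    """
    apar = fpar * par                          # absorbed PAR
    return lue_max * tmin_scalar * vpd_scalar * apar

# Hypothetical values for one pixel over one eight-day period
print(gpp_lue(fpar=0.7, par=80.0, lue_max=1.0, tmin_scalar=0.9, vpd_scalar=0.8))
```

Annual NPP then follows by subtracting autotrophic (maintenance and growth) respiration from the accumulated GPP.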
Show Figures

Graphical abstract

Figure 1: Flowchart of gross primary production (GPP) and net primary production (NPP) estimation and validation. FPAR: Fraction of Photosynthetically Active Radiation; LAI: Leaf Area Index; LUE: Light Use Efficiency; PAR: Photosynthetically Active Radiation; APAR: Absorbed Photosynthetically Active Radiation; DEM: Digital Elevation Model.
Figure 2: MODerate Resolution Imaging Spectroradiometer (MODIS) International Geosphere Biosphere Program (IGBP) land cover and the locations of the FLUXNET and BigFoot sites.
Figure 3: Global 1 km GPP and NPP in 2004, 2008 and 2012: (a) global GPP in 2004; (b) global NPP in 2004; (c) global GPP in 2008; (d) global NPP in 2008; (e) global GPP in 2012; (f) global NPP in 2012.
Figure 4: Variations in global GPP and NPP from 2004 to 2012: (a) variation in GPP from 2004 to 2008; (b) variation in NPP from 2004 to 2008; (c) variation in GPP from 2008 to 2012; (d) variation in NPP from 2008 to 2012.
Figure 5: Global GPP and NPP in 2004, 2008 and 2012 estimated using Global LAnd Surface Satellite (GLASS) data.
Figure 6: Global means and standard deviations of GPP and NPP for all vegetated land cover types: (a) GPP; (b) NPP.
Figure 7: Variation in global total GPP and NPP for all vegetated land cover types: (a) GPP; (b) NPP.
Figure 8: Seasonal variation in the estimated GLASS GPP, FLUXNET GPP, MOD17 C05 GPP and MOD17 C55 GPP at several sites with different vegetation types.
Figure 9: Validation of the estimated GLASS GPP against FLUXNET GPP.
Figure 10: Validation of MODIS C05 GPP against FLUXNET GPP.
Figure 11: Validation of MODIS C55 GPP against FLUXNET GPP.
Figure 12: Validation of the estimated GLASS NPP and MODIS NPP against BigFoot NPP: (a) estimated GLASS NPP against BigFoot NPP; (b) MODIS NPP against BigFoot NPP. x is the average of BigFoot NPP (the years averaged are listed in Table 1); y is the average of the estimated NPP in 2004, 2008 and 2012.
20 pages, 9841 KiB  
Article
Assessing the Accuracy of Automatically Extracted Shorelines on Microtidal Beaches from Landsat 7, Landsat 8 and Sentinel-2 Imagery
by Josep E. Pardo-Pascual, Elena Sánchez-García, Jaime Almonacid-Caballer, Jesús M. Palomar-Vázquez, Enrique Priego de los Santos, Alfonso Fernández-Sarría and Ángel Balaguer-Beser
Remote Sens. 2018, 10(2), 326; https://doi.org/10.3390/rs10020326 - 22 Feb 2018
Cited by 110 | Viewed by 9918
Abstract
This paper evaluates the accuracy of shoreline positions obtained from the infrared (IR) bands of Landsat 7, Landsat 8, and Sentinel-2 imagery on natural beaches. A workflow for sub-pixel shoreline extraction, already tested on seawalls, is used. The present work analyzes the behavior [...] Read more.
This paper evaluates the accuracy of shoreline positions obtained from the infrared (IR) bands of Landsat 7, Landsat 8, and Sentinel-2 imagery on natural beaches. A workflow for sub-pixel shoreline extraction, already tested on seawalls, is used. The present work analyzes the behavior of that workflow and the resultant shorelines on a micro-tidal (<20 cm) sandy beach and compares them with more accurate sets of shorelines. These reference sets were obtained using differential GNSS surveys and terrestrial photogrammetry techniques through the C-Pro monitoring system. Twenty-one sub-pixel shorelines and their respective high-precision reference lines served for the evaluation. The results show that the NIR bands can easily mistake whitewater for the shoreline, whereas the SWIR bands are more reliable in this respect. Moreover, the shorelines obtained from bands 11 and 12 of Sentinel-2 are verified to be very similar to those obtained with bands 6 and 7 of Landsat 8 (−0.75 ± 2.5 m; the negative sign indicates landward bias). The variability of the brightness in the terrestrial zone influences shoreline detection: brighter zones cause a small landward bias. A relation between swell and shoreline accuracy is found, mainly in images from Landsat 8 and Sentinel-2. On natural beaches, the mean shoreline error varies with the type of image used. After analyzing the whole set of shorelines detected from Landsat 7, we conclude that the mean horizontal error is 4.63 m (±6.55 m) and 5.50 m (±4.86 m) for high- and low-gain images, respectively. For the Landsat 8 and Sentinel-2 shorelines, the mean error reaches 3.06 m (±5.79 m). Full article
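Accuracy figures like these are typically derived from signed cross-shore distances between each satellite-derived shoreline point and the GNSS reference line (negative landward, positive seaward). A minimal sketch, assuming those signed offsets have already been computed (hypothetical values):

```python
import numpy as np

def shoreline_error_stats(signed_offsets):
    """Mean, standard deviation and RMSE of signed cross-shore errors (m).

    Negative values indicate landward bias, positive values seaward bias.
    """
    e = np.asarray(signed_offsets, dtype=float)
    return e.mean(), e.std(ddof=1), np.sqrt(np.mean(e ** 2))

# Hypothetical offsets for one sub-pixel shoreline against the GNSS line
offsets = np.array([2.1, -4.0, 3.5, 7.8, -1.2, 5.0, 2.9, -0.6])
mean, std, rmse = shoreline_error_stats(offsets)
print(f"mean {mean:.2f} m +/- {std:.2f} m, RMSE {rmse:.2f} m")
```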
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
Show Figures

Graphical abstract

Figure 1: Spatial resolution and spectral range occupied by Landsat (7 and 8) and Sentinel-2 bands in the optical spectral region.
Figure 2: Zones chosen for quality assessment of the extracted shorelines: (1) the sandy beach at El Saler, and (2) part of a dike in the port of Valencia.
Figure 3: Workflow of the sub-pixel shoreline extraction process.
Figure 4: Temporal distribution of the 21 scenes acquired from the three satellite platforms. Note that on 8 October the study zones were registered by both Landsat 8 and Sentinel-2 with only 18 min of difference.
Figure 5: The same orthophoto is used in the six images simply as a base map. On these images, the projections of six terrestrial images of El Saler beach are shown. Their projection is made at the mean sea level value for each date. Note that the camera is not fixed, and the different extents covered by the photos are a consequence of the hand-selected region projected. Each map shows the GPS line, the digitized line (almost coincident with each other) and the satellite shoreline.
Figure 6: Sections of (A) El Saler beach and (C) the port zone in a 10 m pixel size image in the NIR band (band 8) of Sentinel-2 acquired on 17 November 2016; (B) shows two photos from this day rectified by C-Pro over an orthophoto from the 2010 PNOA (used simply as a base map). The reference shoreline position acquired using differential GNSS appears in green, and the automatically detected satellite shoreline in red. (A) shows how the shoreline has been erroneously detected at the whitewater border. In the case of the port (C), where there is no whitewater due to the greater water depth, the shoreline is correctly detected.
Figure 7: The blue line is the high-resolution mapped shoreline and the dark blue points represent the sub-pixel shoreline. The base image on the left is Landsat 8; the base image on the right is the 2010 PNOA orthophoto. Higher and lower reflectance pushes the shoreline landwards or seawards, respectively.
Figure 8: Map showing differences between shorelines from Landsat 8 and Sentinel-2 acquired a few minutes apart. Negative values indicate that the Landsat 8 shoreline is displaced landwards with respect to Sentinel-2, while positive values imply a seaward bias. Details (A,B) show the influence of beach width on the shoreline positions due to the differing spatial resolutions of the two image types.
Figure 9: Inverse relationship between the error in the mean shoreline position using Landsat 8 and Sentinel-2 images (SWIR 2) and (a) wavelength or (b) run-up, respectively.